Dataset schema
title: string, length 2 to 280
pmid: string, length 7 to 8
background_abstract: string, length 7 to 1.19k
background_abstract_label: string, 18 classes
methods_abstract: string, length 55 to 1.48k
methods_abstract_label: string, length 5 to 31
results_abstract: string, length 73 to 1.92k
results_abstract_label: string, 13 classes
conclusions_abstract: string, length 40 to 1.21k
conclusions_abstract_label: string, 28 classes
mesh_descriptor_names: list
pmcid: string, length 5 to 8
background_title: string, length 10 to 95
background_text: string, length 107 to 48k
methods_title: string, length 6 to 112
methods_text: string, length 124 to 48.4k
results_title: string, length 6 to 122
results_text: string, length 152 to 62.3k
conclusions_title: string, length 8 to 80
conclusions_text: string, length 4 to 19k
other_sections_titles: list
other_sections_texts: list
other_sections_sec_types: list
all_sections_titles: list
all_sections_texts: list
all_sections_sec_types: list
keywords: list
whole_article_text: string, length 5.86k to 159k
whole_article_abstract: string, length 942 to 2.95k
background_conclusion_text: string, length 390 to 49.9k
background_conclusion_abstract: string, length 962 to 2.9k
whole_article_text_length: int64, 1.14k to 24.1k
whole_article_abstract_length: int64, 187 to 492
num_sections: int64, 2 to 28
most_frequent_words: list
keybert_topics: list

Prompt fields (each family below has six variants: background, methods, results, conclusions, whole_article, background_conclusion):
annotated_base_*_abstract_prompt: string, 1 class each
annotated_keywords_*_abstract_prompt: string, length 32 to 1.14k each
annotated_mesh_*_abstract_prompt: string, length 45 to 741 (results variant 45 to 723; conclusions and background_conclusion variants 55 to 741)
annotated_keybert_*_abstract_prompt: string, 1 class each
annotated_most_frequent_*_abstract_prompt: string, length 70 to 221 each
annotated_tf_idf_*_abstract_prompt: string, length by variant (background 75 to 297, methods 67 to 356, results 67 to 299, conclusions 82 to 389, whole_article 71 to 254, background_conclusion 71 to 254)
annotated_entity_plan_*_abstract_prompt: string, length by variant (background 20 to 258, methods 20 to 450, results 20 to 523, conclusions 20 to 281, whole_article 45 to 733, background_conclusion 45 to 733)
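As an illustration of how a consumer of this dataset might sanity-check one record (such as the example that follows) against the schema above, here is a minimal sketch. The field subset, the length bounds, and the `validate_row` helper are chosen for illustration from the schema; this is not an official loader for the dataset.

```python
# Minimal sketch: validate one dataset row against a subset of the schema.
# Length bounds are taken from the schema table; the helper is illustrative.
STRING_FIELDS = {
    "title": (2, 280),
    "pmid": (7, 8),
    "pmcid": (5, 8),
}
LIST_FIELDS = ["mesh_descriptor_names", "keywords"]
INT_FIELDS = ["num_sections"]

def validate_row(row):
    """Return a list of schema violations for one row (empty list = OK)."""
    problems = []
    for field, (lo, hi) in STRING_FIELDS.items():
        value = row.get(field)
        if not isinstance(value, str):
            problems.append(f"{field}: expected str, got {type(value).__name__}")
        elif not lo <= len(value) <= hi:
            problems.append(f"{field}: length {len(value)} outside [{lo}, {hi}]")
    for field in LIST_FIELDS:
        if not isinstance(row.get(field), list):
            problems.append(f"{field}: expected list")
    for field in INT_FIELDS:
        if not isinstance(row.get(field), int):
            problems.append(f"{field}: expected int")
    return problems

# The example record from this card, reduced to the checked fields.
row = {
    "title": "The effect of wearing sanitary napkins of different thicknesses "
             "on physiological and psychological responses in Muslim females.",
    "pmid": "25189184",
    "pmcid": "4177679",
    "mesh_descriptor_names": ["Adult", "Menstruation"],
    "keywords": [],
    "num_sections": 11,
}
print(validate_row(row))  # -> [] for a conforming row
```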
The effect of wearing sanitary napkins of different thicknesses on physiological and psychological responses in Muslim females.
25189184
Menstruation is associated with significant unpleasantness, and wearing a sanitary napkin (SN) during menses causes discomfort. In addition, many Muslim women use a thick type of SN during menses due to the religious requirement that even disposable SNs be washed before disposal. Therefore, the objective of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during menstruation and non-menstruation phases at rest and during physical activity/exercise among Muslim women.
BACKGROUND
Eighteen Muslim females were randomly assigned to wear an ultra slim type (US, thin) or a maxi type (MT, thick) SN on two different occasions (i.e., during non-menses and menses). Each subject tested both types of SN. Upon arriving at the laboratory, each subject was equipped with an ambulatory electrocardiograph and rested in a seated position for 10 min. She was then given either a US or an MT SN, which she put in place, and rested in a seated position for another 10 min. Each subject then walked at 3 km/h for 10 min, sat resting for 10 min, and then walked at 5 km/h for another 10 min. At the end of each 10-min stage, subjects marked their feelings of discomfort on the visual analog scale (VAS). Perceived exertion during exercise was evaluated using the Borg scale. Heart rate and the low frequency-to-high frequency ratio (LF/HF) of heart rate variability were continuously recorded during rest and exercise.
METHODS
During both the non-menses and menses trials, VAS and LF/HF were significantly lower in subjects using the US SN compared to the MT SN. These results indicate that when wearing the US SN, subjects were more comfortable and sympathetic activity did not increase. Meanwhile, perceived exertion during exercise did not differ significantly between the US and MT SNs, although the mean scores for the US SN tended to be lower than those for the MT SN.
RESULTS
The results of this study (VAS and LF/HF) indicate that wearing a US SN induces less physiological and psychological stress than wearing an MT SN. Thus, use of the former will empower women to live their lives with vitality during menses.
CONCLUSIONS
[ "Adult", "Electrocardiography, Ambulatory", "Female", "Heart Rate", "Humans", "Islam", "Menstrual Hygiene Products", "Menstruation", "Visual Analog Scale", "Women's Health", "Young Adult" ]
4177679
Background
Female menstruation is associated with significant unpleasant symptoms, which can be physical, behavioral, and emotional [1,2]. Consequently, reducing mental stress during the menstruation period is an important quality of life issue for women. Wearing a sanitary napkin (SN) is believed to influence mental stress responses of women during daily living activities [1,2]. Although mental stress is a psychological response, it affects several physiological processes in the human body. Generally, the human sensory receptors respond to physical stimuli, such as touch and pain, and trigger physiological changes in the body as interpreted by the autonomic nervous system [3]. The parasympathetic nervous system is suppressed and the sympathetic nervous system (SNS) is activated [3]. This causes physiological secretion of epinephrine and norepinephrine, which leads to vasoconstriction of blood vessels, increased muscle tension, and changes in heart rate and heart rate variability (HRV) [3]. In the past, researchers have used HRV to measure mental stress [1-5]. In particular, the high frequency (HF) band (0.15 to 0.4 Hz) in frequency-domain analysis has been regarded as the marker of parasympathetic (vagal) activity, and the low frequency (LF) band (0.04 to 0.15 Hz) has been regarded as the marker of sympathovagal interaction, especially sympathetic activity. Consequently, the LF-to-HF (LF/HF) ratio represents the sympathovagal balance [6,7]. HRV in response to stress differs between genders, and women’s neuroendocrine responses are often unpredictable [3]. Previous studies have used different types of SNs as the physical stimuli and measured physiological responses to their use in order to find ways to improve the quality of life of women during menses [1,4,5]. 
For example, Park and Watanuki [1,2] reported that, although the use of SNs increased physiological loading, it did not significantly affect the LF and HF components or the LF/HF ratio of HRV at 20, 30, and 40 min after application compared to the 10-min resting state. In a laboratory-based experiment, application of different types of SN had different physiological and psychological outcomes [1,2]. Mechanical stimulation by SNs with higher roughness (frictional coefficient = 0.312) increased SNS activity and brain arousal compared to SNs with lower roughness (frictional coefficient = 0.142) [1]. Researchers who focused on physiological changes caused by wearing SNs of different thicknesses over the menstrual cycle reported that women felt more comfortable using thin type SNs than thick type SNs [4,5]. Thin type SNs caused less mechanical stimulation and did not increase SNS activity compared to thick type SNs [4,5]. Muslim females prefer to use the thick type SN due to cultural and religious practices, as SNs must be washed before disposal. This practice is based on tradition passed down through generations: in earlier days there were no disposable napkins, and many women used old cloths as napkins [8]. In addition, many believe that discarding used SNs without washing them is unhygienic and risks affliction with hysteria and other disturbances. Research on the psychological and physiological changes that occur in women wearing SNs of different thicknesses is limited and, to our knowledge, there have been no studies on the effect of wearing SNs during exercise/physical activity among Muslim women. Therefore, the goal of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during menstruation and non-menstruation phases, at rest and during physical activity/exercise, among Muslim women.
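The frequency-domain HRV quantities defined above (LF = 0.04 to 0.15 Hz, HF = 0.15 to 0.4 Hz, and their ratio) can be sketched as follows. This is a generic Welch-periodogram estimate on a synthetic R-R series, not the commercial MemCalc (maximum-entropy) analysis the study itself used; the resampling rate and band edges are the conventional ones cited in the text.

```python
import numpy as np
from scipy.signal import welch

LF_BAND = (0.04, 0.15)   # marker of sympathovagal/sympathetic activity (Hz)
HF_BAND = (0.15, 0.40)   # marker of parasympathetic (vagal) activity (Hz)

def band_power(f, psd, band):
    """Integrate the PSD over a frequency band (rectangle rule)."""
    mask = (f >= band[0]) & (f < band[1])
    return psd[mask].sum() * (f[1] - f[0])

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate LF/HF from a sequence of R-R intervals in milliseconds.

    The irregularly spaced R-R series is interpolated onto a uniform
    grid at `fs` Hz before spectral estimation with Welch's method.
    """
    rr_ms = np.asarray(rr_ms, dtype=float)
    beat_times = np.cumsum(rr_ms) / 1000.0           # beat times in seconds
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    tachogram = np.interp(grid, beat_times, rr_ms)
    tachogram = tachogram - tachogram.mean()         # remove DC component
    f, psd = welch(tachogram, fs=fs, nperseg=min(256, len(grid)))
    return band_power(f, psd, LF_BAND) / band_power(f, psd, HF_BAND)

# Synthetic ~5-min recording: 800 ms beats carrying a strong 0.1 Hz (LF)
# oscillation and a weaker 0.3 Hz (HF) oscillation, so LF/HF >> 1.
t_approx = np.arange(400) * 0.8                      # approximate beat times (s)
rr = (800
      + 40 * np.sin(2 * np.pi * 0.1 * t_approx)
      + 10 * np.sin(2 * np.pi * 0.3 * t_approx))
```

A higher ratio on this synthetic series reflects the dominant LF oscillation, mirroring how an elevated LF/HF is read as increased sympathetic activity in the study.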
Methods
Subjects
Unmarried Muslim females were recruited from the Institute to participate in this study. Eighteen unmarried Muslim females with regular menstrual cycles (cycle range 21 to 39 days) volunteered to participate in the study during the non-menses period (Experiment I: age 24.6 ± 1.4 years, height 155.3 ± 4.5 cm, weight 51.2 ± 12.4 kg, BMI 21.2 ± 4.8), and 18 unmarried Muslim females participated during their menses period (Experiment II: age 25.0 ± 1.4 years, height 156.2 ± 5.0 cm, weight 51.8 ± 9.9 kg, BMI 21.1 ± 3.7), 16 of whom had also participated in Experiment I. None of the subjects had premenstrual syndrome, as assessed via a questionnaire before participation in the study. None of the subjects had menstrual pain during their menses or took analgesics to control menstrual pain. Written informed consent was obtained from all subjects after a full explanation of the study purpose and protocol. Subjects were asked to abstain from eating, smoking, and exercise for at least 1 h before the experiments. This study was approved by the Human Ethics Committee of Universiti Sains Malaysia.
Experimental conditions
To avoid potential diurnal variations, subjects were always tested at the same time of day (between 13:30 h and 17:00 h) in the same quiet, temperature-controlled room (26.3 ± 1.9°C, 51.6 ± 8.7% relative humidity).
Materials
Two different types of SN were used in this study: ultra slim (US, thin; Laurier Perfect Comfort Ultra Slim) and maxi type (MT, thick; Laurier Active Comfort Super Maxi) (Table 1). Both types of SN were marketed products (Kao Corporation) consisting of three layers: a top sheet (non-woven sheet), an absorbent sheet (MT: pulp; US: pulp + specially designed washable absorbing polymers), and a back sheet (polythene film).
Table 1. Description of sanitary napkins used in the study.
Experimental design
A crossover repeated-measures design with random assignment was used. Subjects were required to complete two phases of the experiment: Experiment I was the non-menses phase and Experiment II was the menses phase. The menses phase represented a more complex situation than the non-menses phase because of menstrual pain, worries about leakage, and so on. Thus, to evaluate the effect of SNs on the subjects’ comfort, experiments were also executed during the non-menses phase. For each experiment, subjects completed two visits, one using the US SN and the other using the MT SN (assigned in randomized order). For Experiment I, subjects completed the first and second visits on two consecutive days, whereas for Experiment II the visits took place in two different months, with the gap between the first and second visits being no more than 3 months.
Procedures
Upon arrival at the laboratory for Experiment I, subjects were required to fill out a pre-questionnaire to identify the type of SN they used regularly during their heavy-flow menses days. Subjects were then equipped with an ambulatory electrocardiogram monitor (Active Tracer, Model AC-301A, Japan) with electrodes placed at three sites on the chest, and they rested in a seated position for 10 min. Subjects were then given either a US or an MT SN (in randomized order; subjects were blinded to the type of SN given), asked to put it in place, and again rested in a seated position for 10 min. Subjects then walked at 3 km/h on a treadmill for 10 min, sat resting for 10 min, and then walked briskly at 5 km/h on the treadmill for another 10 min (Figure 1). At the end of each 10-min stage, subjects marked their feelings of discomfort on a visual analog scale (VAS) as described by Scott and Huskisson [9]. For each treadmill stage, immediately upon completing the 10 min of walking, subjects pointed on the printed 10-point Borg scale to record their level of perceived exertion [10]. HRV was recorded continuously during rest and exercise by the electrocardiogram monitor, and the frequency of the R-R interval was analyzed by the MemCalc method using MemCalc/Tarawa software. From this analysis, the LF/HF ratio was calculated. The data collected during minutes 5 to 8 of each stage were used for the analysis. On the second visit (the following day), subjects repeated the same protocol while wearing the other SN type.
Figure 1. Flow of the project design. VAS = visual analog scale for determination of feeling of discomfort; ECG = electrocardiogram, which was continuously monitored.
Upon arrival at the laboratory for Experiment II, subjects were given a US SN, asked to put it in place, and rested in the seated position for 10 min. They were then given either a US or an MT SN in randomized order and asked to put it in place. The subjects completed this experiment during the second and third days after the onset of menstruation, and the average value of both days was used in the data analysis. If a subject was randomly assigned to use the MT SN on her first menses visit (second and third days of menstruation), she was given the US SN on her second menses visit, and vice versa. At the end of Experiment II, subjects filled out a post-questionnaire to identify the type of SN that made them feel more active.
Statistical analysis
Statistical analysis was performed using SPSS software (version 22, IBM, USA). The VAS, LF/HF, and Borg scale data were evaluated using a two-way (sample × stage) repeated-measures analysis of variance (ANOVA) with Bonferroni post hoc tests to identify overall differences between the US and MT SNs. The level of significance was set at P <0.05. Data are presented as relative values to eliminate the effects of individual and daily differences.
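The "relative values" normalization mentioned above can be sketched as follows, under my own minimal reading of it: each subject's stage values are divided by her own baseline stage, removing between-subject and day-to-day level differences. The condition comparison shown is a simple paired t-test from SciPy on hypothetical numbers, a deliberate stand-in for the study's two-way repeated-measures ANOVA with Bonferroni post hoc tests.

```python
import numpy as np
from scipy import stats

def to_relative(values, baseline_col=0):
    """Divide each subject's stage values by her own baseline stage.

    `values` has shape (n_subjects, n_stages); the baseline column
    becomes exactly 1.0, removing individual level differences.
    """
    values = np.asarray(values, dtype=float)
    return values / values[:, [baseline_col]]

# Hypothetical LF/HF data: 6 subjects x (1 baseline + 3 stages) per SN type.
rng = np.random.default_rng(0)
baseline = rng.uniform(1.0, 3.0, size=(6, 1))   # per-subject resting level
us = np.hstack([baseline, baseline * rng.uniform(0.90, 1.10, size=(6, 3))])
mt = np.hstack([baseline, baseline * rng.uniform(1.15, 1.45, size=(6, 3))])

rel_us = to_relative(us)
rel_mt = to_relative(mt)

# Simplified condition comparison: paired t-test on per-subject means of
# the post-baseline stages (not the study's full sample x stage ANOVA).
t_stat, p_value = stats.ttest_rel(rel_us[:, 1:].mean(axis=1),
                                  rel_mt[:, 1:].mean(axis=1))
```

In these hypothetical numbers the MT condition is constructed to sit above the US condition, so the paired test flags the difference; with real data the full repeated-measures ANOVA additionally tests the stage effect and the sample x stage interaction.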
Results
Analyses of the pre-questionnaire showed that 89% of the subjects who participated in Experiment I (non-menses phase) and 94% of those who participated in Experiment II (menses phase) used a thick type SN during their heavy-flow menstruation days. The post-questionnaire revealed that 72% of the subjects in Experiment II felt more active wearing the US SN compared to the MT SN. The post-questionnaire answers also revealed the subjects’ reasons for choosing their preferred type of SN (Figure 2).
Figure 2. Analysis of the questionnaire on the usage of different types of sanitary napkins. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin.
There was a consistent trend of reduced feelings of discomfort during seated rest and exercise when wearing the US SN compared to the MT SN in both experiments. The main effect of “sample” was significant (Figure 3; F(1, 34) = 15.44, P <0.001 during the non-menses phase and F(1, 34) = 6.60, P <0.05 during the menses phase), and VAS values were significantly lower in subjects wearing the US SN than in those wearing the MT SN (Experiment I: P <0.001; Experiment II: P <0.05).
Figure 3. Relative values of feeling of discomfort as measured using VAS during the non-menses (A) and menses (B) phases of the experiment. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin. The main effect of “sample” was significant during the non-menses phase (F(1, 34) = 15.44, P <0.001) and the menses phase (F(1, 34) = 6.60, P <0.05) by a two-way (sample × stage) ANOVA.
The LF/HF ratio was lower in subjects wearing the US SN compared to the MT SN in all stages of the experiments, and a significant main effect of “sample” was identified (Figure 4; F(1, 34) = 7.30, P <0.01 during the non-menses phase and F(1, 34) = 8.55, P <0.01 during the menses phase). The main effect of “stage” was significant in the menses phase only (F(3, 102) = 3.17, P <0.05). No interaction between “sample” and “stage” was observed. The LF/HF ratio for the US SN was significantly lower than that for the MT SN in both experiments (Experiments I and II: P <0.01).
Figure 4. Relative values of LF/HF during the non-menses (A) and menses (B) phases of the experiment. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin. The main effect of “sample” was significant during the non-menses phase (F(1, 34) = 7.30, P <0.01) and the menses phase (F(1, 34) = 8.55, P <0.01), and the main effect of “stage” was significant during the menses phase only (F(3, 102) = 3.17, P <0.05) by a two-way (sample × stage) ANOVA.
The mean scores on the Borg scale (Figure 5) were lower when wearing the US SN compared to the MT SN at the end of the 3 km/h walk stage in Experiments I and II, and at the end of the 5 km/h walk stage in Experiment I; however, the main effect of “sample” was not significant. The main effect of “stage” was significant in both experiments (F(1, 34) = 19.19, P <0.001 during the non-menses phase and F(1, 32) = 43.75, P <0.001 during the menses phase).
Figure 5. Results of Borg scale during the non-menses (A) and menses (B) phases of the experiment. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin. The main effect of “stage” was significant during the non-menses phase (F(1, 34) = 19.19, P <0.001) and the menses phase (F(1, 32) = 43.75, P <0.001) by a two-way (sample × stage) ANOVA.
Conclusions
The physiological and psychological measures (LF/HF ratio and VAS) exhibited a consistent trend in both Experiments I and II: both were lower in subjects wearing the US SN than in those wearing the MT SN. This finding indicates that the US SN is more comfortable to wear than the MT SN, and comfort is an important factor that could reduce physiological and psychological stress and unpleasant symptoms during menstruation.
[ "Background", "Subjects", "Experimental conditions", "Materials", "Experimental design", "Procedures", "Statistical analysis", "Study limitations", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Female menstruation is associated with significant unpleasant symptoms, which can be physical, behavioral, and emotional [1,2]. Consequently, reducing mental stress during the menstruation period is an important quality of life issue for women. Wearing a sanitary napkin (SN) is believed to influence mental stress responses of women during daily living activities [1,2]. Although mental stress is a psychological response, it affects several physiological processes in the human body. Generally, the human sensory receptors respond to physical stimuli, such as touch and pain, and trigger physiological changes in the body as interpreted by the autonomic nervous system [3]. The parasympathetic nervous system is suppressed and the sympathetic nervous system (SNS) is activated [3]. This causes physiological secretion of epinephrine and norepinephrine, which leads to vasoconstriction of blood vessels, increased muscle tension, and changes in heart rate and heart rate variability (HRV) [3].\nIn the past, researchers have used HRV to measure mental stress [1-5]. In particular, the high frequency (HF) band (0.15 to 0.4 Hz) in frequency-domain analysis has been regarded as the marker of parasympathetic (vagal) activity, and the low frequency (LF) band (0.04 to 0.15 Hz) has been regarded as the marker of sympathovagal interaction, especially sympathetic activity. Consequently, the LF-to-HF (LF/HF) ratio represents the sympathovagal balance [6,7]. HRV in response to stress differs between genders, and women’s neuroendocrine responses are often unpredictable [3].\nPrevious studies have used different types of SNs as the physical stimuli and measured physiological responses to their use in order to find ways to improve the quality of life of women during menses [1,4,5]. 
For example, Park and Watanuki [1,2] reported that, although the use of SNs increased physiological loading, their use did not result in significant effects on the LF and HF components or the LF/HF ratio of HRV at 20, 30, and 40 min after application compared to during the 10 min resting state. In a laboratory-based experiment, application of different types of SN had different physiological and psychological outcomes [1,2]. Mechanical stimulation by SNs with higher roughness (frictional coefficient = 0.312) increased SNS activities and brain arousal compared to SNs with lower roughness (frictional coefficient = 0.142) [1]. Researchers who focused on physiological changes caused by application of SNs of different thicknesses over the menstrual cycle reported that women felt more comfortable using thin type SNs compared to thick type SNs [4,5]. Thin type SNs caused less mechanical stimulation and did not increase the SNS activities compared to the thick type SN [4,5].\nMuslim females prefer to use the thick type SN due to cultural and religious practices, as SNs must be washed before disposal. This is based on tradition passed down for generations as in the older days there were no napkins and many used old cloths as napkins [8]. In addition, many believe that if used SNs are not washed before being thrown away, one risks being afflicted with hysteria and disturbances and is unhygienic. Research on the psychological and physiological changes that occur in women when wearing SNs of different thicknesses is limited and, to our knowledge, there have been no studies on the effect of wearing SNs during exercise/physical activity among Muslim women. 
Therefore, the goal of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during menstruation and non-menstruation phases at rest and during physical activity/exercise among Muslim women.", "Unmarried Muslim females were solicited from the Institute to participate in this study. Eighteen unmarried Muslim females with regular menstrual cycles (cycle range 21 to 39 days) volunteered to participated in the study during the non-menses period (Experiment I: age 24.6 ± 1.4 years, height 155.3 ± 4.5 cm, weight 51.2 ± 12.4 kg, BMI 21.2 ± 4.8), and 18 unmarried Muslim females participated during their menses period (Experiment II: age 25.0 ± 1.4 years, height 156.2 ± 5.0 cm, weight 51.8 ± 9.9 kg, BMI 21.1 ± 3.7), 16 of whom had participated in Experiment I. None of the subjects had any premenstrual syndrome, which was assessed via a questionnaire before participation in the study. None of the subjects had any menstrual pains during their menses or took any analgesics to control menstrual pains. Written informed consent was obtained from all subjects after a full explanation of the study purpose and protocol. Subjects were asked to abstain from eating, smoking, and exercise at least 1 h before the experiments. This study was approved by the Human Ethics Committee of Universiti Sains Malaysia.", "To avoid potential diurnal variations, subjects were always tested at the same time of day (between 13:30 h and 17:00 h) in the same quiet, temperature-controlled room (26.3 ± 1.9°C, 51.6 ± 8.7% relative humidity).", "Two different types of SN were used in this study: ultra slim (US, thin; Laurier Perfect Comfort Ultra Slim) and maxi type (MT, thick; Laurier Active Comfort Super Maxi) (Table 1). 
Both types of SN were marketed products (Kao Corporation) consisting of three layers: top-sheet (non-woven sheet), absorbent sheet (MT: Pulp; US: pulp + specially designed washable absorbing polymers), and back sheet (polythene film).\nDescription of sanitary napkins used in the study", "A crossover repeated-measures design with random assignment was used. Subjects were required to complete two phases of the experiment: Experiment I was the non-menses phase and Experiment II was the menses phase. The menses phase represented a more complex situation than the non-menses phase because of menstrual pain, worries about leakage, and so on. Thus, to evaluate the effect of SNs on the subject’s comfort, experiments were executed during the non-menses phase. For each experiment, subjects had to complete two visits, one using the US SN and the other using the MT SN (assigned in a randomized order). For Experiment I, subjects completed the first and second visits on two continuous days, whereas for Experiment II the visits took place in two different months, with the gap between the first and second visit being no more than 3 months.", "Upon arrival in the laboratory for Experiment I, subjects were required to fill out a pre-questionnaire to identify the type of SN used regularly during their heavy flow menses days. Subjects were then equipped with an ambulatory electrocardiogram monitor (Active Tracer, Model: AC-301A, Japan) with electrodes placed on three sites on the chest, and they rested in a seated position for 10 min. Subjects were then given either an US or MT SN (given in a randomized order, and subject were blinded to the type of SN given), they were asked to put it in place, and they again rested in a seated position for 10 min. Subjects then walked at 3 km/h on a treadmill for 10 min, sat resting for 10 min, and then continued walking briskly at 5 km/h on a treadmill for another 10 min (Figure 1). 
At the end of each 10-min stage, subjects were asked to mark their feelings of discomfort on a visual analog scale (VAS) as described by Scott and Huskisson [9]. For the treadmill stages, immediately upon completing the 10 min of walking, subjects were required to point on the printed 10-point Borg scale to record their level of perceived exertion [10]. HRV was recorded continuously during rest and exercise by the electrocardiogram monitor, and the R-R interval series was analyzed in the frequency domain by the MemCalc method using MemCalc/Tarawa software. From this analysis, the LF/HF ratio was calculated. The data collected during minutes 5 to 8 of each stage were used for the analysis. On the second visit (the following day), subjects repeated the same protocol while wearing the other SN type.\nFlow of the project design. VAS = visual analog scale for determination of feeling of discomfort; ECG = electrocardiogram, which was continuously monitored.\nUpon arrival at the laboratory for Experiment II, subjects were given a US SN, asked to put it in place, and rested in the seated position for 10 min. They were then given either a US or an MT SN in randomized order and asked to put it in place. The subjects completed this experiment during the second and third days after the onset of menstruation, and the average value of both days was used in the data analysis. If a subject was randomly assigned to use the MT SN on the first menses visit (second and third days of menstruation), she was given the US SN on her second menses visit, and vice versa. At the end of Experiment II, subjects were required to fill out the post-questionnaire to identify the type of SN that made them feel more active.", "Statistical analysis was performed using SPSS software (version 22, IBM, USA). 
The VAS, LF/HF, and Borg scale data were evaluated using a two-way (sample × stage) repeated-measures analysis of variance (ANOVA) with a Bonferroni post hoc test to identify overall differences between the US and MT SNs. The level of significance was set at P < 0.05. Data are presented as relative values to eliminate the effect of individual and daily differences.", "One of the limitations of this study was that serum levels of ovarian hormones (estradiol and progesterone) and thyroid hormones were not measured. Concentrations of these hormones change during different phases of the menstrual cycle and differ among subjects. Further, they can modulate HRV and the related autonomic nervous system. Additionally, the subjects were assumed to be in good mental and physical health, but they did not undergo a medical examination. These factors should be taken into account in future studies.", "ANOVA: Analysis of variance; HF: High frequency; HRV: Heart rate variability; LF: Low frequency; MT: Maxi type; SN: Sanitary napkin; SNS: Sympathetic nervous system; US: Ultra slim; VAS: Visual analog scale.", "The authors declare that they have no competing interests.", "RS, MA, and AMCM designed the study. NGM, NZA, and LKS conducted the experimental work. NGM, MA, MS, and RS analyzed data, prepared figures, and drafted the manuscript. All authors participated in data interpretation and revised the manuscript. The final version of the manuscript was approved by all authors." ]
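The frequency-domain HRV analysis described in the procedures (LF band 0.04 to 0.15 Hz, HF band 0.15 to 0.40 Hz; computed in the study with MemCalc software) can be sketched with a simple periodogram approach. This is an illustrative assumption for readers, not the MemCalc (maximum entropy) algorithm the study actually used; the function name and the 4 Hz resampling rate are choices made here for the sketch.

```python
import numpy as np

# Frequency bands as defined in the article
LF_BAND = (0.04, 0.15)  # low frequency: sympathovagal interaction (Hz)
HF_BAND = (0.15, 0.40)  # high frequency: parasympathetic (vagal) activity (Hz)

def lf_hf_ratio(rr_intervals_s, fs=4.0):
    """Estimate the LF/HF ratio from a sequence of R-R intervals (seconds).

    The irregularly spaced tachogram is interpolated onto a uniform grid,
    mean-centered, and band powers are summed from a simple periodogram.
    """
    rr = np.asarray(rr_intervals_s, dtype=float)
    beat_times = np.cumsum(rr)                      # occurrence time of each beat (s)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    tachogram = np.interp(grid, beat_times, rr)     # uniform resampling of R-R series
    tachogram -= tachogram.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(tachogram)) ** 2  # unnormalized periodogram
    freqs = np.fft.rfftfreq(len(tachogram), d=1.0 / fs)

    def band_power(lo, hi):
        return spectrum[(freqs >= lo) & (freqs < hi)].sum()

    return band_power(*LF_BAND) / band_power(*HF_BAND)
```

As a sanity check, an R-R series modulated at 0.1 Hz (inside the LF band) yields a ratio above 1, while 0.3 Hz modulation (inside the HF band) yields a ratio below 1.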
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Subjects", "Experimental conditions", "Materials", "Experimental design", "Procedures", "Statistical analysis", "Results", "Discussion", "Study limitations", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Female menstruation is associated with significant unpleasant symptoms, which can be physical, behavioral, and emotional [1,2]. Consequently, reducing mental stress during the menstruation period is an important quality of life issue for women. Wearing a sanitary napkin (SN) is believed to influence mental stress responses of women during daily living activities [1,2]. Although mental stress is a psychological response, it affects several physiological processes in the human body. Generally, the human sensory receptors respond to physical stimuli, such as touch and pain, and trigger physiological changes in the body as interpreted by the autonomic nervous system [3]. The parasympathetic nervous system is suppressed and the sympathetic nervous system (SNS) is activated [3]. This causes physiological secretion of epinephrine and norepinephrine, which leads to vasoconstriction of blood vessels, increased muscle tension, and changes in heart rate and heart rate variability (HRV) [3].\nIn the past, researchers have used HRV to measure mental stress [1-5]. In particular, the high frequency (HF) band (0.15 to 0.4 Hz) in frequency-domain analysis has been regarded as the marker of parasympathetic (vagal) activity, and the low frequency (LF) band (0.04 to 0.15 Hz) has been regarded as the marker of sympathovagal interaction, especially sympathetic activity. Consequently, the LF-to-HF (LF/HF) ratio represents the sympathovagal balance [6,7]. HRV in response to stress differs between genders, and women’s neuroendocrine responses are often unpredictable [3].\nPrevious studies have used different types of SNs as the physical stimuli and measured physiological responses to their use in order to find ways to improve the quality of life of women during menses [1,4,5]. 
For example, Park and Watanuki [1,2] reported that, although the use of SNs increased physiological loading, their use did not result in significant effects on the LF and HF components or the LF/HF ratio of HRV at 20, 30, and 40 min after application compared to the 10-min resting state. In a laboratory-based experiment, application of different types of SN had different physiological and psychological outcomes [1,2]. Mechanical stimulation by SNs with higher roughness (frictional coefficient = 0.312) increased SNS activities and brain arousal compared to SNs with lower roughness (frictional coefficient = 0.142) [1]. Researchers who focused on physiological changes caused by application of SNs of different thicknesses over the menstrual cycle reported that women felt more comfortable using thin type SNs compared to thick type SNs [4,5]. Thin type SNs caused less mechanical stimulation and did not increase SNS activities compared to the thick type SNs [4,5].\nMuslim females prefer to use the thick type SN due to cultural and religious practices, as SNs must be washed before disposal. This practice is based on tradition passed down for generations: in earlier days there were no disposable napkins, and many women used old cloths as napkins [8]. In addition, many believe that throwing away used SNs without washing them is unhygienic and risks affliction with hysteria and disturbances. Research on the psychological and physiological changes that occur in women when wearing SNs of different thicknesses is limited and, to our knowledge, there have been no studies on the effect of wearing SNs during exercise/physical activity among Muslim women. 
Therefore, the goal of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during menstruation and non-menstruation phases at rest and during physical activity/exercise among Muslim women.", " Subjects Unmarried Muslim females were recruited from the Institute to participate in this study. Eighteen unmarried Muslim females with regular menstrual cycles (cycle range 21 to 39 days) volunteered to participate in the study during the non-menses period (Experiment I: age 24.6 ± 1.4 years, height 155.3 ± 4.5 cm, weight 51.2 ± 12.4 kg, BMI 21.2 ± 4.8), and 18 unmarried Muslim females participated during their menses period (Experiment II: age 25.0 ± 1.4 years, height 156.2 ± 5.0 cm, weight 51.8 ± 9.9 kg, BMI 21.1 ± 3.7), 16 of whom had participated in Experiment I. None of the subjects had premenstrual syndrome, as assessed via a questionnaire before participation in the study. None of the subjects had menstrual pain during their menses, and none took analgesics to control menstrual pain. Written informed consent was obtained from all subjects after a full explanation of the study purpose and protocol. Subjects were asked to abstain from eating, smoking, and exercise for at least 1 h before the experiments. This study was approved by the Human Ethics Committee of Universiti Sains Malaysia.\n Experimental conditions To avoid potential diurnal variations, subjects were always tested at the same time of day (between 13:30 h and 17:00 h) in the same quiet, temperature-controlled room (26.3 ± 1.9°C, 51.6 ± 8.7% relative humidity).\n Materials Two different types of SN were used in this study: ultra slim (US, thin; Laurier Perfect Comfort Ultra Slim) and maxi type (MT, thick; Laurier Active Comfort Super Maxi) (Table 1). Both types of SN were marketed products (Kao Corporation) consisting of three layers: a top sheet (non-woven sheet), an absorbent sheet (MT: pulp; US: pulp + specially designed washable absorbing polymers), and a back sheet (polythene film).\nDescription of sanitary napkins used in the study\n Experimental design A crossover repeated-measures design with random assignment was used. Subjects were required to complete two phases of the experiment: Experiment I was the non-menses phase and Experiment II was the menses phase. The menses phase represented a more complex situation than the non-menses phase because of menstrual pain, worries about leakage, and so on. Thus, to evaluate the effect of SNs on the subject’s comfort, experiments were also executed during the non-menses phase. For each experiment, subjects had to complete two visits, one using the US SN and the other using the MT SN (assigned in a randomized order). For Experiment I, subjects completed the first and second visits on two consecutive days, whereas for Experiment II the visits took place in two different months, with the gap between the first and second visit being no more than 3 months.\n Procedures Upon arrival in the laboratory for Experiment I, subjects were required to fill out a pre-questionnaire to identify the type of SN used regularly during their heavy-flow menses days. Subjects were then equipped with an ambulatory electrocardiogram monitor (Active Tracer, Model: AC-301A, Japan) with electrodes placed at three sites on the chest, and they rested in a seated position for 10 min. Subjects were then given either a US or an MT SN (assigned in a randomized order; subjects were blinded to the type of SN given), were asked to put it in place, and again rested in a seated position for 10 min. Subjects then walked at 3 km/h on a treadmill for 10 min, sat resting for 10 min, and then walked briskly at 5 km/h on a treadmill for another 10 min (Figure 1). At the end of each 10-min stage, subjects were asked to mark their feelings of discomfort on a visual analog scale (VAS) as described by Scott and Huskisson [9]. For the treadmill stages, immediately upon completing the 10 min of walking, subjects were required to point on the printed 10-point Borg scale to record their level of perceived exertion [10]. HRV was recorded continuously during rest and exercise by the electrocardiogram monitor, and the R-R interval series was analyzed in the frequency domain by the MemCalc method using MemCalc/Tarawa software. From this analysis, the LF/HF ratio was calculated. The data collected during minutes 5 to 8 of each stage were used for the analysis. On the second visit (the following day), subjects repeated the same protocol while wearing the other SN type.\nFlow of the project design. VAS = visual analog scale for determination of feeling of discomfort; ECG = electrocardiogram, which was continuously monitored.\nUpon arrival at the laboratory for Experiment II, subjects were given a US SN, asked to put it in place, and rested in the seated position for 10 min. They were then given either a US or an MT SN in randomized order and asked to put it in place. The subjects completed this experiment during the second and third days after the onset of menstruation, and the average value of both days was used in the data analysis. If a subject was randomly assigned to use the MT SN on the first menses visit (second and third days of menstruation), she was given the US SN on her second menses visit, and vice versa. At the end of Experiment II, subjects were required to fill out the post-questionnaire to identify the type of SN that made them feel more active.\n Statistical analysis Statistical analysis was performed using SPSS software (version 22, IBM, USA). The VAS, LF/HF, and Borg scale data were evaluated using a two-way (sample × stage) repeated-measures analysis of variance (ANOVA) with a Bonferroni post hoc test to identify overall differences between the US and MT SNs. The level of significance was set at P < 0.05. Data are presented as relative values to eliminate the effect of individual and daily differences.", "Unmarried Muslim females were recruited from the Institute to participate in this study. Eighteen unmarried Muslim females with regular menstrual cycles (cycle range 21 to 39 days) volunteered to participate in the study during the non-menses period (Experiment I: age 24.6 ± 1.4 years, height 155.3 ± 4.5 cm, weight 51.2 ± 12.4 kg, BMI 21.2 ± 4.8), and 18 unmarried Muslim females participated during their menses period (Experiment II: age 25.0 ± 1.4 years, height 156.2 ± 5.0 cm, weight 51.8 ± 9.9 kg, BMI 21.1 ± 3.7), 16 of whom had participated in Experiment I. None of the subjects had premenstrual syndrome, as assessed via a questionnaire before participation in the study. None of the subjects had menstrual pain during their menses, and none took analgesics to control menstrual pain. Written informed consent was obtained from all subjects after a full explanation of the study purpose and protocol. Subjects were asked to abstain from eating, smoking, and exercise for at least 1 h before the experiments. 
This study was approved by the Human Ethics Committee of Universiti Sains Malaysia.", "To avoid potential diurnal variations, subjects were always tested at the same time of day (between 13:30 h and 17:00 h) in the same quiet, temperature-controlled room (26.3 ± 1.9°C, 51.6 ± 8.7% relative humidity).", "Two different types of SN were used in this study: ultra slim (US, thin; Laurier Perfect Comfort Ultra Slim) and maxi type (MT, thick; Laurier Active Comfort Super Maxi) (Table 1). Both types of SN were marketed products (Kao Corporation) consisting of three layers: a top sheet (non-woven sheet), an absorbent sheet (MT: pulp; US: pulp + specially designed washable absorbing polymers), and a back sheet (polythene film).\nDescription of sanitary napkins used in the study", "A crossover repeated-measures design with random assignment was used. Subjects were required to complete two phases of the experiment: Experiment I was the non-menses phase and Experiment II was the menses phase. The menses phase represented a more complex situation than the non-menses phase because of menstrual pain, worries about leakage, and so on. Thus, to evaluate the effect of SNs on the subject’s comfort, experiments were also executed during the non-menses phase. For each experiment, subjects had to complete two visits, one using the US SN and the other using the MT SN (assigned in a randomized order). For Experiment I, subjects completed the first and second visits on two consecutive days, whereas for Experiment II the visits took place in two different months, with the gap between the first and second visit being no more than 3 months.", "Upon arrival in the laboratory for Experiment I, subjects were required to fill out a pre-questionnaire to identify the type of SN used regularly during their heavy-flow menses days. 
Subjects were then equipped with an ambulatory electrocardiogram monitor (Active Tracer, Model: AC-301A, Japan) with electrodes placed at three sites on the chest, and they rested in a seated position for 10 min. Subjects were then given either a US or an MT SN (assigned in a randomized order; subjects were blinded to the type of SN given), were asked to put it in place, and again rested in a seated position for 10 min. Subjects then walked at 3 km/h on a treadmill for 10 min, sat resting for 10 min, and then walked briskly at 5 km/h on a treadmill for another 10 min (Figure 1). At the end of each 10-min stage, subjects were asked to mark their feelings of discomfort on a visual analog scale (VAS) as described by Scott and Huskisson [9]. For the treadmill stages, immediately upon completing the 10 min of walking, subjects were required to point on the printed 10-point Borg scale to record their level of perceived exertion [10]. HRV was recorded continuously during rest and exercise by the electrocardiogram monitor, and the R-R interval series was analyzed in the frequency domain by the MemCalc method using MemCalc/Tarawa software. From this analysis, the LF/HF ratio was calculated. The data collected during minutes 5 to 8 of each stage were used for the analysis. On the second visit (the following day), subjects repeated the same protocol while wearing the other SN type.\nFlow of the project design. VAS = visual analog scale for determination of feeling of discomfort; ECG = electrocardiogram, which was continuously monitored.\nUpon arrival at the laboratory for Experiment II, subjects were given a US SN, asked to put it in place, and rested in the seated position for 10 min. They were then given either a US or an MT SN in randomized order and asked to put it in place. The subjects completed this experiment during the second and third days after the onset of menstruation, and the average value of both days was used in the data analysis. 
If a subject was randomly assigned to use the MT SN on the first menses visit (second and third days of menstruation), she was given the US SN on her second menses visit, and vice versa. At the end of Experiment II, subjects were required to fill out the post-questionnaire to identify the type of SN that made them feel more active.", "Statistical analysis was performed using SPSS software (version 22, IBM, USA). The VAS, LF/HF, and Borg scale data were evaluated using a two-way (sample × stage) repeated-measures analysis of variance (ANOVA) with a Bonferroni post hoc test to identify overall differences between the US and MT SNs. The level of significance was set at P < 0.05. Data are presented as relative values to eliminate the effect of individual and daily differences.", "Analyses of the pre-questionnaire showed that 89% of the subjects who participated in Experiment I (non-menses phase) and 94% who participated in Experiment II (menses phase) used a thick type SN during their heavy-flow menstruation days. The post-questionnaire revealed that 72% of the subjects in Experiment II felt more active wearing the US SN compared to the MT SN. The post-questionnaire answers also revealed the subjects’ reasons for choosing their preferred type of SN (Figure 2).\nAnalysis of the questionnaire on the usage of different types of sanitary napkins. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin.\nThere was a consistent trend of reduced feelings of discomfort during seated rest and exercise when wearing the US SN compared to the MT SN in both experiments. 
The main effect of “sample” was significant (Figure 3; F(1, 34) = 15.44, P <0.001 during the non-menses phase and F(1, 34) = 6.60, P <0.05 during the menses phase) and VAS values were significantly reduced in the subjects wearing the US SN compared to the MT SN (Experiment I: P <0.001, Experiment II: P <0.05).\nRelative values of feeling of discomfort as measured using VAS during the non-menses (A) and menses (B) phases of the experiment. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin. The main effect of “sample” was significant during the non-menses phase (F(1, 34) = 15.44, P <0.001) and the menses phase (F(1, 34) = 6.60, P <0.05) by a two-way (sample × stage) ANOVA.\nThe LF/HF ratio was lower in subjects wearing the US SN compared to the MT SN in all stages of the experiments, and the significant main effect of “sample” was identified (Figure 4; F(1, 34) = 7.30, P <0.01 during the non-menses phase and F(1, 34) = 8.55, P <0.01 during the menses phase). The main effect of “stage” was significant in the menses phase only (F(3, 102) = 3.17, P <0.05). No interaction between “sample” and “stage” was observed. The LF/HF ratio for the US SN was significantly lower than that for MT in both experiments (Experiments I and II: P <0.01).\nRelative values of LF/HF during the non-menses (A) and menses (B) phases of the experiment. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin. The main effect of “sample” was significant during the non-menses phase (F(1, 34) = 7.30, P <0.01) and the menses phase (F(1, 34) = 8.55, P <0.01), and the main effect of “stage” was significant during the menses phase only (F(3, 102) = 3.17, P <0.05) by a two-way (sample × stage) ANOVA.\nThe means of scores on the Borg scale (Figure 5) were lower when wearing the US SN compared to the MT SN at the end of the 3 km/h walk phase in Experiments I and II, and the 5 km/h walk phase in Experiment I. 
However, the main effect of “sample” was not significant. The main effect of “stage” was significant in both experiments (F(1, 34) = 19.19, P <0.001 during the non-menses phase and F(1, 32) = 43.75, P <0.001 during the menses phase).\nResults of the Borg scale during the non-menses phase (A) and the menses phase (B) of the experiment. US = ultra slim type of sanitary napkin; MT = maxi type of sanitary napkin. The main effect of “stage” was significant during the non-menses phase (F(1, 34) = 19.19, P <0.001) and the menses phase (F(1, 32) = 43.75, P <0.001) by a two-way (sample × stage) ANOVA.", "During menses, Malaysian Muslim women generally prefer to use a thick type SN without absorbing polymers to increase their confidence level, to be free from worries about leakage, and to enable easy washing of used SNs, which is required for cultural and religious reasons. In the current study, the majority of the Malaysian Muslim females (90%) reported using a thick type SN during their menstruation period. In contrast, another study reported that only 38% of Japanese women use a thick type SN during their heavy-flow menstruation days [5].\nAs was found in the Japanese study [4,5], wearing SNs of different thicknesses elicited different physiological and psychological responses in Malaysian women. Compared to subjects wearing the MT SN, those wearing the US SN showed decreased percentage changes of the LF/HF ratio and lower relative VAS scores. Thus, wearing the US SN did not increase SNS activities, and the subjects using this type of SN felt more comfortable (Figures 3 and 4). These results indicate that wearing the US SN reduced physiological and psychological stress in the subjects, especially during menstruation, even though they were previously accustomed to using thick type SNs.\nRegarding perceived exertion (as measured using the Borg scale), a previous study reported that a feeling of comfort and the Borg scale exhibited an inverse relationship [11]. 
In our study, although there were no significant differences between US and MT, the means of the scores tended to be lower when wearing the US SN compared to the MT SN (Figure 5). In addition, 72% of the subjects reported feeling more active when wearing the US SN compared to the MT SN in the post-questionnaire (Figure 2). Feeling more comfortable when wearing the US SN might have caused the subjects to feel more active or experience less physical fatigue [11-13]. However, the connection between the use of the different SN types and physical activity remains unclear. Hence, future studies should objectively measure whether wearing a US SN decreases physical fatigue or improves exercise performance; the answer will better explain the relationship between US SNs and physical activity. Furthermore, it appears that the use of US SNs will allow women to be more active and manage their stress during menstruation, thereby increasing their quality of life.\n Study limitations One of the limitations of this study was that serum levels of ovarian hormones (estradiol and progesterone) and thyroid hormones were not measured. Concentrations of these hormones change during different phases of the menstrual cycle and differ among subjects. Further, they can modulate HRV and the related autonomic nervous system. Additionally, the subjects were assumed to be in good mental and physical health, but they did not undergo a medical examination. These factors should be taken into account in future studies.", "One of the limitations of this study was that serum levels of ovarian hormones (estradiol and progesterone) and thyroid hormones were not measured. Concentrations of these hormones change during different phases of the menstrual cycle and differ among subjects. Further, they can modulate HRV and the related autonomic nervous system. Additionally, the subjects were assumed to be in good mental and physical health, but they did not undergo a medical examination. These factors should be taken into account in future studies.", "The physiological and psychological differences (LF/HF ratio and VAS) between subjects wearing the US SN and the MT SN exhibited a consistent trend in both Experiments I and II (i.e., both values were lower in those using the US SN). This finding illustrates that the US SN is more comfortable to wear than the MT SN, and comfort is an important factor that could eliminate physiological and psychological stress and unpleasant symptoms during menstruation.", "ANOVA: Analysis of variance; HF: High frequency; HRV: Heart rate variability; LF: Low frequency; MT: Maxi type; SN: Sanitary napkin; SNS: Sympathetic nervous system; US: Ultra slim; VAS: Visual analog scale.", "The authors declare that they have no competing interests.", "RS, MA, and AMCM designed the study. NGM, NZA, and LKS conducted the experimental work. NGM, MA, MS, and RS analyzed data, prepared figures, and drafted the manuscript. All authors participated in data interpretation and revised the manuscript. The final version of the manuscript was approved by all authors." ]
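The two-way (sample × stage) repeated-measures ANOVA reported in the sections above was run in SPSS. As a rough illustration of how the main-effect F for a within-subjects factor is formed (using the factor-by-subject interaction as the error term), here is a numpy sketch; the function name and data layout (subjects × SN types × stages) are assumptions made for this example, and it does not reproduce the exact SPSS model behind the reported F(1, 34) values.

```python
import numpy as np

def rm_main_effect_F(Y):
    """F test for the main effect of factor A in a fully within-subjects
    (repeated-measures) design.

    Y has shape (subjects, levels_A, levels_B); the error term is the
    A-by-subject interaction, as in a standard two-way RM ANOVA.
    """
    s, a, b = Y.shape
    grand = Y.mean()
    mean_a = Y.mean(axis=(0, 2))        # marginal means of factor A
    mean_s = Y.mean(axis=(1, 2))        # subject means
    mean_sa = Y.mean(axis=2)            # subject-by-A cell means

    # Sum of squares for A, and for the A-by-subject interaction (error)
    ss_a = s * b * ((mean_a - grand) ** 2).sum()
    ss_err = b * ((mean_sa - mean_s[:, None] - mean_a[None, :] + grand) ** 2).sum()
    df_a, df_err = a - 1, (a - 1) * (s - 1)
    return (ss_a / df_a) / (ss_err / df_err), (df_a, df_err)
```

With a strong condition effect (e.g., one SN type shifting all scores upward), the F statistic grows large relative to the subject-level noise, which is the logic behind the significant “sample” effects reported for VAS and LF/HF.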
Keywords: Comfort, Menstruation, Physical activity, Sanitary napkin
Background: Female menstruation is associated with significant unpleasant symptoms, which can be physical, behavioral, and emotional [1,2]. Consequently, reducing mental stress during the menstruation period is an important quality of life issue for women. Wearing a sanitary napkin (SN) is believed to influence mental stress responses of women during daily living activities [1,2]. Although mental stress is a psychological response, it affects several physiological processes in the human body. Generally, the human sensory receptors respond to physical stimuli, such as touch and pain, and trigger physiological changes in the body as interpreted by the autonomic nervous system [3]. The parasympathetic nervous system is suppressed and the sympathetic nervous system (SNS) is activated [3]. This causes physiological secretion of epinephrine and norepinephrine, which leads to vasoconstriction of blood vessels, increased muscle tension, and changes in heart rate and heart rate variability (HRV) [3]. In the past, researchers have used HRV to measure mental stress [1-5]. In particular, the high frequency (HF) band (0.15 to 0.4 Hz) in frequency-domain analysis has been regarded as the marker of parasympathetic (vagal) activity, and the low frequency (LF) band (0.04 to 0.15 Hz) has been regarded as the marker of sympathovagal interaction, especially sympathetic activity. Consequently, the LF-to-HF (LF/HF) ratio represents the sympathovagal balance [6,7]. HRV in response to stress differs between genders, and women’s neuroendocrine responses are often unpredictable [3]. Previous studies have used different types of SNs as the physical stimuli and measured physiological responses to their use in order to find ways to improve the quality of life of women during menses [1,4,5]. 
For example, Park and Watanuki [1,2] reported that, although the use of SNs increased physiological loading, it had no significant effect on the LF and HF components or the LF/HF ratio of HRV at 20, 30, and 40 min after application compared to the 10-min resting state. In a laboratory-based experiment, application of different types of SN produced different physiological and psychological outcomes [1,2]. Mechanical stimulation by SNs with higher roughness (frictional coefficient = 0.312) increased SNS activities and brain arousal compared to SNs with lower roughness (frictional coefficient = 0.142) [1]. Researchers who focused on physiological changes caused by SNs of different thicknesses over the menstrual cycle reported that women felt more comfortable using thin type SNs than thick type SNs [4,5]. Thin type SNs caused less mechanical stimulation and did not increase SNS activities compared to the thick type SN [4,5]. Muslim females prefer to use the thick type SN due to cultural and religious practices, as SNs must be washed before disposal. This practice is based on tradition passed down through generations: in earlier times there were no disposable napkins, and many women used old cloths as napkins, which had to be washed [8]. In addition, many believe that discarding used SNs unwashed is unhygienic and risks affliction with hysteria and other disturbances. Research on the psychological and physiological changes that occur in women wearing SNs of different thicknesses is limited and, to our knowledge, there have been no studies on the effect of wearing SNs during exercise/physical activity among Muslim women. Therefore, the goal of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during menstruation and non-menstruation phases, at rest and during physical activity/exercise, among Muslim women.
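The band definitions above (LF: 0.04 to 0.15 Hz; HF: 0.15 to 0.4 Hz) can be turned into an LF/HF estimate from an R-R interval series with any standard spectral method. The sketch below uses Welch's method from SciPy rather than the maximum-entropy (MemCalc) method used in the study, and the tachogram is synthetic; it illustrates the ratio, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate LF and HF power and their ratio from R-R intervals (ms).

    The beat-to-beat series is resampled onto an even time grid
    (fs Hz) so that an ordinary PSD estimate can be applied.
    """
    t = np.cumsum(rr_ms) / 1000.0                  # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)        # even time grid
    tach = np.interp(grid, t, rr_ms)               # resampled tachogram
    tach = tach - tach.mean()                      # remove DC offset
    f, pxx = welch(tach, fs=fs, nperseg=min(256, len(tach)))
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()       # LF band power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()       # HF band power
    return lf, hf, lf / hf

# Synthetic ~3-min tachogram: 800 ms mean with a 0.10 Hz (LF) and a
# weaker 0.25 Hz (HF) oscillation superimposed (no noise).
n_beats = 240
t_beat = np.cumsum(np.full(n_beats, 0.8))          # approximate beat times
rr = (800
      + 30 * np.sin(2 * np.pi * 0.10 * t_beat)
      + 15 * np.sin(2 * np.pi * 0.25 * t_beat))
lf, hf, ratio = lf_hf_ratio(rr)
print(f"LF/HF ratio ~ {ratio:.2f}")
```

Because the synthetic LF oscillation has twice the amplitude of the HF one, the estimated LF band power exceeds the HF band power and the ratio comes out above 1.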
Methods:

Subjects: Unmarried Muslim females were solicited from the Institute to participate in this study. Eighteen unmarried Muslim females with regular menstrual cycles (cycle range 21 to 39 days) volunteered to participate in the study during the non-menses period (Experiment I: age 24.6 ± 1.4 years, height 155.3 ± 4.5 cm, weight 51.2 ± 12.4 kg, BMI 21.2 ± 4.8), and 18 unmarried Muslim females participated during their menses period (Experiment II: age 25.0 ± 1.4 years, height 156.2 ± 5.0 cm, weight 51.8 ± 9.9 kg, BMI 21.1 ± 3.7), 16 of whom had participated in Experiment I. None of the subjects had premenstrual syndrome, which was assessed via a questionnaire before participation in the study. None of the subjects had menstrual pains during their menses or took any analgesics to control menstrual pain. Written informed consent was obtained from all subjects after a full explanation of the study purpose and protocol. Subjects were asked to abstain from eating, smoking, and exercise for at least 1 h before the experiments. This study was approved by the Human Ethics Committee of Universiti Sains Malaysia.

Experimental conditions: To avoid potential diurnal variations, subjects were always tested at the same time of day (between 13:30 h and 17:00 h) in the same quiet, temperature-controlled room (26.3 ± 1.9°C, 51.6 ± 8.7% relative humidity).

Materials: Two different types of SN were used in this study: ultra slim (US, thin; Laurier Perfect Comfort Ultra Slim) and maxi type (MT, thick; Laurier Active Comfort Super Maxi) (Table 1: Description of sanitary napkins used in the study). Both types of SN were marketed products (Kao Corporation) consisting of three layers: a top sheet (non-woven sheet), an absorbent sheet (MT: pulp; US: pulp + specially designed washable absorbing polymers), and a back sheet (polythene film).

Experimental design: A crossover repeated-measures design with random assignment was used. Subjects were required to complete two phases of the experiment: Experiment I was the non-menses phase and Experiment II was the menses phase. The menses phase represented a more complex situation than the non-menses phase because of menstrual pain, worries about leakage, and so on; thus, to evaluate the effect of SNs on the subjects' comfort in isolation, experiments were also executed during the non-menses phase. For each experiment, subjects had to complete two visits, one using the US SN and the other using the MT SN (assigned in a randomized order). For Experiment I, subjects completed the first and second visits on two consecutive days, whereas for Experiment II the visits took place in two different months, with the gap between the first and second visits being no more than 3 months.

Procedures: Upon arrival at the laboratory for Experiment I, subjects were required to fill out a pre-questionnaire to identify the type of SN used regularly during their heavy-flow menses days. Subjects were then equipped with an ambulatory electrocardiogram monitor (Active Tracer, Model AC-301A, Japan) with electrodes placed at three sites on the chest, and they rested in a seated position for 10 min. Subjects were then given either an US or MT SN (in a randomized order, with subjects blinded to the type of SN given), asked to put it in place, and again rested in a seated position for 10 min. Subjects then walked at 3 km/h on a treadmill for 10 min, sat resting for 10 min, and then walked briskly at 5 km/h on a treadmill for another 10 min (Figure 1: Flow of the project design; VAS = visual analog scale for determination of feeling of discomfort, ECG = electrocardiogram, which was continuously monitored). At the end of each 10-min stage, subjects were asked to mark their feeling of discomfort on a visual analog scale (VAS) as described by Scott and Huskisson [9]. Immediately upon completing each 10-min treadmill stage, subjects were required to point on a printed 10-point Borg scale to record their level of perceived exertion [10]. HRV was recorded continuously during rest and exercise by the electrocardiogram monitor, the frequency of the R-R interval was analyzed by the MemCalc method using MemCalc/Tarawa software, and the LF/HF ratio was calculated from this analysis. The data collected during minutes 5 to 8 of each stage were used for the analysis. On the second visit (the following day), subjects repeated the same protocol while wearing the other SN type.

Upon arrival at the laboratory for Experiment II, subjects were given an US SN, asked to put it in place, and rested in the seated position for 10 min. They were then given either an US or MT SN in a randomized order and asked to put it in place. Subjects completed this experiment during the second and third days after the onset of menstruation, and the average value of both days was used in the data analysis. If a subject was randomly assigned to use the MT SN on the first menses visit (second and third days of menstruation), she was given the US SN on her second menses visit, and vice versa. At the end of Experiment II, subjects were required to fill out a post-questionnaire to identify the type of SN that made them feel more active.

Statistical analysis: Statistical analysis was performed using SPSS software (version 22, IBM, USA). The VAS, LF/HF, and Borg scale data were evaluated using a two-way (sample × stage) repeated-measures analysis of variance (ANOVA) with Bonferroni post hoc tests to identify overall differences between the US and MT SNs. The level of significance was set at P < 0.05. Data are presented as relative values to eliminate the effects of individual and daily differences.

Results: Analyses of the pre-questionnaire showed that 89% of the subjects who participated in Experiment I (non-menses phase) and 94% of those who participated in Experiment II (menses phase) used a thick type SN during their heavy-flow menstruation days.
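The two-way (sample × stage) repeated-measures ANOVA described in the statistical analysis can be sketched from first principles. This is a generic textbook implementation in NumPy, not the SPSS procedure the authors used, and the scores below are synthetic with a deliberately injected "sample" effect:

```python
import numpy as np

def rm_anova_2way(data):
    """Two-way fully within-subjects ANOVA.

    data: array of shape (subjects, levels_A, levels_B), one
    observation per cell. Each effect is tested against its own
    interaction with subjects. Returns {effect: (F, df1, df2)}.
    """
    n, a, b = data.shape
    gm = data.mean()
    m_s = data.mean(axis=(1, 2))          # subject means
    m_a = data.mean(axis=(0, 2))          # factor A (sample) means
    m_b = data.mean(axis=(0, 1))          # factor B (stage) means
    m_as = data.mean(axis=2)              # subject x A means
    m_bs = data.mean(axis=1)              # subject x B means
    m_ab = data.mean(axis=0)              # A x B cell means

    ss_a = n * b * ((m_a - gm) ** 2).sum()
    ss_b = n * a * ((m_b - gm) ** 2).sum()
    ss_ab = n * ((m_ab - m_a[:, None] - m_b[None, :] + gm) ** 2).sum()
    ss_as = b * ((m_as - m_a[None, :] - m_s[:, None] + gm) ** 2).sum()
    ss_bs = a * ((m_bs - m_b[None, :] - m_s[:, None] + gm) ** 2).sum()
    ss_tot = ((data - gm) ** 2).sum()
    ss_subj = a * b * ((m_s - gm) ** 2).sum()
    ss_abs = ss_tot - ss_subj - ss_a - ss_b - ss_ab - ss_as - ss_bs

    def f(ss_eff, df_eff, ss_err, df_err):
        return (ss_eff / df_eff) / (ss_err / df_err), df_eff, df_err

    return {
        "sample": f(ss_a, a - 1, ss_as, (a - 1) * (n - 1)),
        "stage": f(ss_b, b - 1, ss_bs, (b - 1) * (n - 1)),
        "sample x stage": f(ss_ab, (a - 1) * (b - 1),
                            ss_abs, (a - 1) * (b - 1) * (n - 1)),
    }

# Illustrative data: 18 subjects x 2 SN types x 4 stages, with a
# built-in "sample" effect (US scores shifted lower than MT).
rng = np.random.default_rng(1)
scores = rng.normal(5.0, 0.5, size=(18, 2, 4))
scores[:, 0, :] -= 1.0                    # US (index 0) rated lower
res = rm_anova_2way(scores)
print(res["sample"])                      # (F, df1, df2) for SN type
```

With 18 subjects and 2 SN types, the "sample" effect is tested on (1, 17) degrees of freedom, matching the F(1, 34)-style reporting only up to the error model; SPSS's mixed-design output in the paper pools across both experiments, which this sketch does not attempt.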
The post-questionnaire revealed that 72% of the subjects in Experiment II felt more active wearing the US SN compared to the MT SN. The post-questionnaire answers also revealed the subjects' reasons for choosing their preferred type of SN (Figure 2: Analysis of the questionnaire on the usage of different types of sanitary napkins; US = ultra slim type of sanitary napkin, MT = maxi type of sanitary napkin).

There was a consistent trend of reduced feelings of discomfort during seated rest and exercise when wearing the US SN compared to the MT SN in both experiments. The main effect of "sample" was significant (F(1, 34) = 15.44, P <0.001 during the non-menses phase and F(1, 34) = 6.60, P <0.05 during the menses phase), and VAS values were significantly lower in subjects wearing the US SN than in those wearing the MT SN (Experiment I: P <0.001; Experiment II: P <0.05) (Figure 3: Relative values of feeling of discomfort as measured using VAS during the non-menses (A) and menses (B) phases of the experiment, by a two-way (sample × stage) ANOVA).

The LF/HF ratio was lower in subjects wearing the US SN than in those wearing the MT SN in all stages of the experiments, and a significant main effect of "sample" was identified (F(1, 34) = 7.30, P <0.01 during the non-menses phase and F(1, 34) = 8.55, P <0.01 during the menses phase). The main effect of "stage" was significant in the menses phase only (F(3, 102) = 3.17, P <0.05). No interaction between "sample" and "stage" was observed. The LF/HF ratio for the US SN was significantly lower than that for the MT SN in both experiments (Experiments I and II: P <0.01) (Figure 4: Relative values of LF/HF during the non-menses (A) and menses (B) phases of the experiment).

Mean Borg scale scores were lower when wearing the US SN compared to the MT SN at the end of the 3 km/h walk stage in Experiments I and II, and at the end of the 5 km/h walk stage in Experiment I; however, the main effect of "sample" was not significant. The main effect of "stage" was significant in both experiments (F(1, 34) = 19.19, P <0.001 during the non-menses phase and F(1, 32) = 43.75, P <0.001 during the menses phase) (Figure 5: Results of the Borg scale during the non-menses (A) and menses (B) phases of the experiment).

Discussion: During menses, Malaysian Muslim women generally prefer to use a thick type SN without absorbing polymers to increase their confidence level, to be free from worries about leakage, and to enable easy washing of used SNs, which is required for cultural and religious reasons. In the current study, the majority of the Malaysian Muslim females (90%) reported using a thick type SN during their menstruation period. In contrast, another study reported that only 38% of Japanese women use a thick type SN during their heavy-flow menstruation days [5]. As was found in the Japanese studies [4,5], wearing SNs of different thicknesses elicited different physiological and psychological responses in Malaysian women.
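The relative values plotted in the figures can be obtained by normalizing each subject's per-stage readings to a reference stage. The paper does not specify the exact baseline used, so the sketch below assumes the first resting stage as the reference, and the readings are illustrative only:

```python
import numpy as np

def to_relative(values, baseline_idx=0):
    """Express each subject's per-stage values relative to a
    reference stage, removing individual/day-to-day offsets.

    values: array of shape (subjects, stages).
    Returns each value as a percentage of the subject's reference.
    """
    baseline = values[:, baseline_idx][:, None]   # one column per subject
    return 100.0 * values / baseline

# Illustrative LF/HF readings for 3 subjects over 4 stages.
lf_hf = np.array([
    [1.2, 1.5, 1.1, 1.8],
    [0.8, 1.0, 0.7, 1.2],
    [2.0, 2.6, 1.9, 3.0],
])
rel = to_relative(lf_hf)
print(rel[:, 0])   # reference stage is 100% for every subject
```

This kind of normalization is one common way to "eliminate the effect of individual and daily differences" as the statistical analysis describes, since each subject serves as her own control.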
Compared to subjects wearing the MT SN, those wearing the US SN showed decreased percentage changes of the LF/HF ratio and lower relative VAS scores. Thus, wearing the US SN did not increase SNS activities, and the subjects using this type of SN felt more comfortable (Figures 3 and 4). These results indicate that wearing the US SN reduced physiological and psychological stress in the subjects, especially during menstruation, even though they had previously been used to using thick type SNs. Regarding perceived exertion (as measured using the Borg scale), a previous study reported that a feeling of comfort and the Borg scale exhibited an inverse relationship [11]. In our study, although there were no significant differences between US and MT, the means of scores tended to be lower when wearing the US SN compared to the MT SN (Figure 5). In addition, 72% of the subjects reported feeling more active when wearing the US SN compared to the MT SN in the post-questionnaire (Figure 2). Feeling more comfortable when wearing the US SN might have caused the subjects to feel more active or experience less physical fatigue [11-13]. However, the connection between the use of the different SN types and physical activity remains unclear. Hence, future studies should objectively measure whether wearing an US SN decreases physical fatigue or improves exercise performance; the answer will better explain the relationship between US SNs and physical activity. Furthermore, it appears that the use of US SNs will allow women to be more active and manage their stress during menstruation, thereby increasing their quality of life.

Study limitations: One of the limitations of this study was that serum levels of ovarian hormones (estradiol and progesterone) and thyroid hormones were not measured. Concentrations of these hormones change during different phases of the menstrual cycle and differ among subjects. Further, they can modulate HRV and the related autonomic nervous system.
Additionally, the subjects were assumed to be in good mental and physical health, but they did not undergo a medical examination. These factors should be taken into account in future studies.

Conclusions: The physiological and psychological differences (LF/HF ratio and VAS) between subjects wearing the US SN and the MT SN exhibited a consistent trend in both Experiments I and II (i.e., the ratio was lower in those using the US SN). This finding illustrates that the US SN is more comfortable to wear than the MT SN, and comfort is an important factor that could alleviate physiological and psychological stress and unpleasant symptoms during menstruation.

Abbreviations: ANOVA: Analysis of variance; HF: High frequency; HRV: Heart rate variability; LF: Low frequency; MT: Maxi type; SN: Sanitary napkin; SNS: Sympathetic nervous system; US: Ultra slim; VAS: Visual analog scale.
Competing interests: The authors declare that they have no competing interests.

Authors' contributions: RS, MA, and AMCM designed the study. NGM, NZA, and LKS conducted the experimental work. NGM, MA, MS, and RS analyzed the data, prepared the figures, and drafted the manuscript. All authors participated in data interpretation and revised the manuscript. The final version of the manuscript was approved by all authors.
Background: Menstruation is associated with significant unpleasantness, and wearing a sanitary napkin (SN) during menses causes discomfort. In addition, many Muslim women use a thick type of SN during menses due to the religious requirement that even disposable SNs be washed before disposal. Therefore, the objective of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during menstruation and non-menstruation phases, at rest and during physical activity/exercise, among Muslim women. Methods: Eighteen Muslim females were randomly assigned to wear an ultra slim (US, thin) or a maxi type (MT, thick) SN on two different occasions (i.e., during non-menses and menses). Each subject tested both types of SN. Upon arriving at the laboratory, each subject was equipped with an ambulatory electrocardiograph and rested in a seated position for 10 min. She was then given either an US or MT SN, put it in place, and rested in a seated position for 10 min. Each subject then walked at 3 km/h for 10 min, sat resting for 10 min, and then walked at 5 km/h for another 10 min. At the end of each 10-min stage, subjects marked their feelings of discomfort on a visual analog scale (VAS). Perceived exertion during exercise was evaluated using the Borg scale. Heart rate and the low frequency-to-high frequency ratio (LF/HF) of heart rate variability were recorded continuously during rest and exercise. Results: During both the non-menses and menses trials, VAS and LF/HF were significantly lower in subjects using the US SN than in those using the MT SN, indicating that subjects wearing the US SN were more comfortable and did not show increased sympathetic activity. Perceived exertion during exercise did not differ significantly between US and MT, although the mean scores for US tended to be lower than those for MT.
Conclusions: The results of this study (VAS and LF/HF) indicate that wearing an US SN induces less physiological and psychological stress compared to wearing a MT SN. Thus, use of the former will empower women to live their lives with vitality during menses.
Background: Female menstruation is associated with significant unpleasant symptoms, which can be physical, behavioral, and emotional [1,2]. Consequently, reducing mental stress during the menstruation period is an important quality of life issue for women. Wearing a sanitary napkin (SN) is believed to influence mental stress responses of women during daily living activities [1,2]. Although mental stress is a psychological response, it affects several physiological processes in the human body. Generally, the human sensory receptors respond to physical stimuli, such as touch and pain, and trigger physiological changes in the body as interpreted by the autonomic nervous system [3]. The parasympathetic nervous system is suppressed and the sympathetic nervous system (SNS) is activated [3]. This causes physiological secretion of epinephrine and norepinephrine, which leads to vasoconstriction of blood vessels, increased muscle tension, and changes in heart rate and heart rate variability (HRV) [3]. In the past, researchers have used HRV to measure mental stress [1-5]. In particular, the high frequency (HF) band (0.15 to 0.4 Hz) in frequency-domain analysis has been regarded as the marker of parasympathetic (vagal) activity, and the low frequency (LF) band (0.04 to 0.15 Hz) has been regarded as the marker of sympathovagal interaction, especially sympathetic activity. Consequently, the LF-to-HF (LF/HF) ratio represents the sympathovagal balance [6,7]. HRV in response to stress differs between genders, and women’s neuroendocrine responses are often unpredictable [3]. Previous studies have used different types of SNs as the physical stimuli and measured physiological responses to their use in order to find ways to improve the quality of life of women during menses [1,4,5]. 
For example, Park and Watanuki [1,2] reported that, although the use of SNs increased physiological loading, it had no significant effect on the LF and HF components or the LF/HF ratio of HRV at 20, 30, and 40 min after application compared to the 10-min resting state. In a laboratory-based experiment, application of different types of SN had different physiological and psychological outcomes [1,2]. Mechanical stimulation by SNs with higher roughness (frictional coefficient = 0.312) increased SNS activity and brain arousal compared to SNs with lower roughness (frictional coefficient = 0.142) [1]. Researchers who focused on physiological changes caused by SNs of different thicknesses over the menstrual cycle reported that women felt more comfortable using thin SNs than thick SNs [4,5]. Thin SNs caused less mechanical stimulation and, unlike thick SNs, did not increase SNS activity [4,5]. Many Muslim women prefer the thick type of SN because of cultural and religious practices that require SNs to be washed before disposal. This practice is rooted in tradition passed down over generations: before disposable napkins existed, many women used old cloths as napkins, which were washed and reused [8]. In addition, many believe that throwing away used SNs unwashed is unhygienic and risks affliction with hysteria and other disturbances. Research on the psychological and physiological changes that occur in women wearing SNs of different thicknesses is limited and, to our knowledge, no studies have examined the effect of wearing SNs during exercise/physical activity among Muslim women. Therefore, the goal of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during the menstruation and non-menstruation phases, at rest and during physical activity/exercise, among Muslim women.
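The frequency-domain HRV analysis described above (LF 0.04–0.15 Hz, HF 0.15–0.4 Hz, LF/HF as the sympathovagal balance) can be sketched in code. This is a minimal illustration using a plain FFT periodogram of the resampled tachogram, not the authors' actual analysis pipeline; the function name and the 4 Hz resampling rate are assumptions.

```python
import numpy as np

def lf_hf_ratio(rr_intervals, fs=4.0):
    """LF/HF ratio from RR intervals (seconds).

    Sketch only: resample the tachogram onto a uniform grid, take a
    periodogram, and integrate power in the LF and HF bands from the text.
    """
    rr = np.asarray(rr_intervals, dtype=float)
    t = np.cumsum(rr)                        # beat occurrence times
    grid = np.arange(t[0], t[-1], 1.0 / fs)  # uniform resampling grid
    rr_even = np.interp(grid, t, rr)         # evenly sampled tachogram
    rr_even -= rr_even.mean()                # drop the DC component
    power = np.abs(np.fft.rfft(rr_even)) ** 2
    freqs = np.fft.rfftfreq(rr_even.size, d=1.0 / fs)
    lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = power[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf
```

A tachogram dominated by a 0.1 Hz (LF band) oscillation yields a ratio well above 1, while one dominated by a 0.3 Hz (HF band) oscillation yields a ratio below 1.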
Conclusions: The physiological and psychological differences (LF/HF ratio and VAS) between subjects wearing the US SN and the MT SN exhibited a consistent trend in both Experiments I and II (i.e., both measures were lower with the US SN). This finding illustrates that the US SN is more comfortable to wear than the MT SN; comfort is an important factor that could reduce physiological and psychological stress and unpleasant symptoms during menstruation.
Background: Menstruation is associated with significant unpleasantness, and wearing a sanitary napkin (SN) during menses causes discomfort. In addition, many Muslim women use a thick type of SN during menses due to the religious requirement that even disposable SNs be washed before disposal. Therefore, the objective of this study was to measure the physiological and psychological responses to wearing SNs of different thicknesses during the menstruation and non-menstruation phases, at rest and during physical activity/exercise, among Muslim women. Methods: Eighteen Muslim females were randomly assigned to wear an ultra slim (US, thin) or a maxi type (MT, thick) SN on two different occasions (i.e., during non-menses and menses); each subject tested both types of SN. Upon arriving at the laboratory, each subject was equipped with an ambulatory electrocardiograph and rested in a seated position for 10 min. She was then given either a US or an MT SN, put it in place, and rested in a seated position for another 10 min. Each subject then walked at 3 km/h for 10 min, sat resting for 10 min, and then walked at 5 km/h for another 10 min. At the end of each 10-min stage, subjects marked their feelings of discomfort on a visual analog scale (VAS). Perceived exertion during exercise was evaluated using the Borg scale. Heart rate and the low frequency-to-high frequency ratio (LF/HF) of heart rate variability were recorded continuously during rest and exercise. Results: During both the non-menses and menses trials, VAS and LF/HF were significantly lower in subjects using the US SN than the MT SN, indicating that subjects wearing the US SN were more comfortable and showed no increase in sympathetic activity. Meanwhile, perceived exertion during exercise did not differ significantly between the US and MT conditions, although the mean scores tended to be lower for the US SN.
6,159
437
15
[ "sn", "subjects", "menses", "experiment", "mt", "type", "phase", "study", "menses phase", "10" ]
[ "test", "test" ]
[CONTENT] Comfort | Menstruation | Physical activity | Sanitary napkin [SUMMARY]
[CONTENT] Adult | Electrocardiography, Ambulatory | Female | Heart Rate | Humans | Islam | Menstrual Hygiene Products | Menstruation | Visual Analog Scale | Women's Health | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] sn | subjects | menses | experiment | mt | type | phase | study | menses phase | 10 [SUMMARY]
[CONTENT] sns | physiological | women | mental stress | stress | physical | activity | responses | changes | different [SUMMARY]
[CONTENT] subjects | 10 | experiment | sn | min | 10 min | menses | second | given | asked [SUMMARY]
[CONTENT] phase | menses phase | menses | 34 | main effect | type sanitary | main | type sanitary napkin | non menses | menses phase 34 [SUMMARY]
[CONTENT] sn | physiological psychological | physiological | psychological | ratio | mt sn | wearing sn mt sn | wearing sn mt | sn comfortable | sn comfort important factor [SUMMARY]
[CONTENT] sn | subjects | menses | experiment | phase | menses phase | study | mt | sns | type [SUMMARY]
[CONTENT] SN ||| Muslim | SN ||| Muslim [SUMMARY]
[CONTENT] Eighteen | Muslim | US | SN | two ||| SN ||| 10 ||| US | MT SN | 10 | 3 km | 10 | 10 | 5 km | 10 ||| 10 ||| Borg ||| LF/HF [SUMMARY]
[CONTENT] VAS | LF/HF | US | SN | the MT SN ||| US | SN ||| US | MT | US | MT [SUMMARY]
[CONTENT] VAS | LF/HF | US | SN | MT SN ||| [SUMMARY]
[CONTENT] SN ||| Muslim | SN ||| Muslim ||| Eighteen | Muslim | US | SN | two ||| SN ||| 10 ||| US | MT SN | 10 | 3 km | 10 | 10 | 5 km | 10 ||| 10 ||| Borg ||| LF/HF ||| VAS | LF/HF | US | SN | the MT SN ||| US | SN ||| US | MT | US | MT ||| VAS | LF/HF | US | SN | MT SN ||| [SUMMARY]
Reproductive health service use and social determinants among the floating population: a quantitative comparative study in Guangzhou City.
25359153
The World Health Assembly has pledged to achieve universal reproductive health (RH) coverage by 2015. Therefore, China has been vigorously promoting the equalisation of basic public health services (i.e. RH services). The floating population (FP) is the largest special group of internal migrants in China and constitutes the current national focus. However, gaps exist in the access of this group to RH services in China.
BACKGROUND
A total of 453 members of the FP and 794 members of the residential population (RP) aged 18 to 50 years from five urban districts in Guangzhou City were recruited to participate in a cross-sectional survey in 2009. Information on demographics and socioeconomic status (SES) were collected from these two groups to evaluate the utilisation of RH knowledge and skills and family planning services (FPS), and to identify social determinants.
METHODS
The proportion of individuals with low SES in the FP (19.2%) was higher than that in the RP (6.3%) (P <0.001). Of the FP, 9.7% to 35.8% had no knowledge of at least one skill, a proportion higher than the corresponding values (6.2% to 27.5%) for the RP (P <0.05). The frequency of FPS use among both the FP and RP was low; however, FPS use was higher among the FP than among the RP (3.51 vs. 2.99) (P =0.050). Logistic regression analysis was used to identify the social determinants that influence FPS use in the FP and RP. The factors affecting FPS utilisation in the RP included SES (OR =4.652, 95% CI =1.751 to 12.362), whereas SES was not a significant factor in the FP.
RESULTS
The FPS use of the FP in Guangzhou City was higher under equalised public health services. However, a need still exists to help the FP with low SES to improve their RH knowledge and skills through access to public RH services.
CONCLUSIONS
[ "Adolescent", "Adult", "Attitude to Health", "China", "Cross-Sectional Studies", "Family Planning Services", "Female", "Health Knowledge, Attitudes, Practice", "Humans", "Male", "Middle Aged", "Reproductive Health Services", "Rural Population", "Socioeconomic Factors", "Transients and Migrants", "United States", "Urban Population", "Young Adult" ]
4219102
Background
Reproductive health (RH) was initially proposed by the World Health Organization Special Programme of Research in Human Reproduction (WHO/HRP) in 1988. The International Conference on Population and Development held in Cairo in 1994 officially defined RH as a state of complete physical, mental and social well-being, not merely the absence of disease, in all matters relating to the reproductive system and to its functions and processes. This definition of RH was the core content of the "Platform for Action", and the conference became a milestone in RH history [1]. The World Health Assembly held in 1995 re-emphasised the importance of a global RH strategy and advanced the international health objective of achieving universal access to RH by 2015 [2]. Three of the eight Millennium Development Goals adopted by the United Nations in 2000 were directly related to reproductive and sexual health (i.e. improving maternal health, reducing child mortality, and fighting HIV/AIDS, malaria, and other diseases) [3]. In 2004, the WHO announced the global RH strategy once again [4], along with recommendations for monitoring RH at the national level [5]. RH is related not only to human reproduction and development but also to several major diseases and social problems. RH is therefore vital to the sustainable development of population, society, and economy, and is an indispensable part of overall human health; it affects not only the current society but also the future of human society. Hence, enjoying RH is a common right of everyone. The term "floating population" (FP) describes a group of people who reside in a given area for a certain amount of time and for various reasons, but who are generally not counted in the official census of that area.
Based on the ‘Report on the development of floating population in China’ [6], the total FP size in China was nearly 230 million in 2013, a number that constitutes 17% of the total national population. Approximately 23 million members of the FP have been living in Guangdong Province for over half a year. Guangdong is the top FP residence, with a total FP population of over 28 million, accounting for roughly 12% of the entire FP in China. Four million members of the FP in Guangdong reside in Guangzhou City [7]. In China, households are divided into urban and rural households based on the geographical location of the official residence of the householder and on the occupation of his/her parents. Different household types enjoy different types of social welfare. As a transmigration group between urban and rural areas, the FP has a fairly low educational level and poor living and working environments. Their condition leads to poor health and presents an urgent need for public health services. In particular, the FP is vulnerable to public health problems, especially to those related to RH. As receivers of RH services, the FP is a disadvantaged group. A study in Guangzhou City reported that the rate of antenatal examination for the residential population (RP) was 99.72% in 2003, the rate of prenatal care was 86.4%, and the rate of hospital delivery was 98.4%. In comparison, the rate of antenatal examination for the FP was only 74.53%, the rate of prenatal care was only 43.1%, and the hospital delivery rate was only 75.3% [8]. Another survey in Zhejiang Province showed that the rate of postpartum visit for the FP was 85.84% and the system administration rate was 45.64%, whereas the rate of postpartum visit for the RP was 96.34% and the system administration rate was 96.34%. Such findings reveal that the FP falls behind the RP in terms of prenatal care, hospital delivery, postpartum visit, and system administration. 
Some studies have reported that, because members of the FP have only recently moved to their current locations, the provision of family planning services (FPS) to them is not ideal, and their rate of access to FPS is low, at only 25% to 30% [9]. As an important part of the Millennium Development Goals, equal access to RH services has been recognised and accepted worldwide. In recent years, China has vigorously promoted the equalisation of basic public services in family planning, which enables the FP to enjoy the same FPS promotion, service, and management as the RP. Therefore, the present survey took the RP as the control group and performed a comparative analysis between the RP and the FP. The objective was to identify gaps and deficiencies, and to provide evidence for decision-making to improve public RH services for the FP. In October 2010, China carried out pilot work to promote the equalisation of a basic public health service (i.e. FPS), which aims to cover all of the FP in pilot cities [10]. In the current study, data on demographic and socioeconomic status (SES) characteristics were collected from both the FP and RP to evaluate RH knowledge and skills and the utilisation of RH services, and to identify their social determinants. The findings are expected to provide evidence to assist the improvement of public RH services for the FP.
Methods
This study was approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology (IRB No: FWA00007304). Before the investigation, a written informed consent form explaining the method, purpose, and meaning of the survey was provided to the respondents. Respondents who wished to take part in the survey were asked to sign the form, and written informed consent was obtained from all respondents prior to the investigation.
Subjects of the study
The criteria for eligible FP participants were as follows: (1) a rural-to-urban migrant aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent. The exclusion criterion was the inability to read or answer the study questionnaires (e.g. dementia, difficulties with the language). The criteria for eligible RP participants were as follows: (1) member of a registered household in Guangzhou City, aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent.
Face-to-face interview
By reviewing the existing literature and consulting relevant experts, we developed a self-made questionnaire containing four components: (1) personal demographic characteristics; (2) household demographic characteristics; (3) self-reported RH knowledge and skill level, RH needs, and RH information sources; and (4) utilisation of RH services in different health care settings. The questionnaire was filled in by the respondents themselves. In the case of an illiterate subject, a trained investigator filled in the questionnaire with the answers of the respondent. Completing the questionnaire required approximately 20 minutes, and all completed questionnaires were collected immediately by the investigators. Respondents who could use basic RH knowledge and skills were judged to have a good skill level; those with limited RH knowledge and skills were judged midlevel; and those with no RH knowledge and skills were judged poor.
Sampling strategy
Five administrative zones, namely, Baiyun, Huangpu, Yuexiu, Liwan, and Haizhu, were selected out of the 12 administrative zones or counties in Guangzhou. These zones represent new and old regions as well as developing and developed economies. All communities in the five zones were classified into two categories based on their economic situation, and one community was randomly selected from each category, giving a total of 10 communities from which the subjects were randomly recruited. A total of 1,300 questionnaires were distributed and 1,247 valid questionnaires were returned, a response rate of 95.92%. The sample used in this survey comprised 453 members of the FP and 794 members of the RP.
SES classification standard
Relevant research data from the WHO confirm that health status, health behaviour, health service accessibility, and health service utilisation are related to SES (i.e. education, income, occupation) and vary with race, ethnicity, and gender [6,11]. Sociologists commonly use SES to predict individual behaviour. Therefore, in the present study, we used this index to examine the relationship of SES with FPS utilisation.
In this study, demographic characteristics, SES, and other measures of the FP and RP were analysed. SES designates the position or rank of a person or group in society and is a comprehensive assessment of educational level, income, occupation, wealth, living area, and so on [12-14]. Education, income, and occupation are the most commonly used SES indicators [15]. In the present study, we selected personal monthly income, occupation, and educational level as SES indicators. The variables of these indicators were categorised and assigned corresponding scores as listed in Table 1. The total of the scores for the three variables ranged from 3 to 15. To facilitate further data analysis, we took a total score of 3 to 6 to indicate low SES, 7 to 11 middle-level SES, and 12 to 15 high SES.

Table 1 Educational level, income, and occupation assignments
Personal monthly income (RMB) | Educational level | Occupation | Score
≥4000 | University or above | Administrative institutions or enterprises | 5
3000–3999 | Senior middle school or polytechnic school | Businessmen | 4
2000–2999 | Junior middle school | Service industries | 3
1000–1999 | Elementary school | Retirees with retired wages | 2
<1000 | Other | Awaiting job assignment/laid-off workers | 1

Statistical methods
To test statistical significance between groups, the t-test was used for continuous variables and the chi-square test for categorical variables. Binary logistic regression analysis with a selection method was used to identify the dependence of FPS use on independent variables, which included age, sex, marital status, number of women and children in the family, SES, social security, RH needs, location of the Community Health Center (CHC) or Family Planning Service Center (FPSC), RH care awareness, participation in RH lectures, and RH knowledge and skills. Because the FP and RP differ in SES, and the family planning work of the government for the two groups also varies, logistic regression was performed separately for the FP and RP to obtain their respective influencing factors. The enter method was used to select variables correlated with FPS utilisation. Outcome variables included FPS use, and odds ratios (ORs) with their 95% confidence intervals (CIs) were reported. The inclusion P value was 0.05, and the removal P value was 0.10. EpiData version 3.0 was used to construct the database, and SPSS version 17.0 was used to perform the statistical analyses.
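The SES scoring rule described in the Methods (three indicators each scored 1–5 per Table 1, with the 3–15 total banded into low, middle, and high) can be sketched as follows. This is an illustrative implementation under stated assumptions, not the authors' code; the function names are hypothetical.

```python
# Illustrative sketch of the SES scoring from the Methods. The paper gives
# only the score table and the band edges (3-6 low, 7-11 middle, 12-15 high),
# not an implementation; the sum-and-band logic below is an assumption.

INCOME_BANDS = [(4000, 5), (3000, 4), (2000, 3), (1000, 2)]  # RMB floor -> score

def income_score(monthly_income_rmb):
    """Score personal monthly income per Table 1 (1-5)."""
    for floor, score in INCOME_BANDS:
        if monthly_income_rmb >= floor:
            return score
    return 1  # <1000 RMB

def ses_category(income_pts, education_pts, occupation_pts):
    """Band the 3-15 total into low (3-6), middle (7-11), or high (12-15) SES."""
    total = income_pts + education_pts + occupation_pts
    if total <= 6:
        return "low"
    return "middle" if total <= 11 else "high"
```

For example, a respondent earning RMB 2,500 per month (score 3), with junior middle school education (score 3), working in a service industry (score 3) totals 9 and falls in the middle SES band.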
Results
Demographic and SES characteristics Table 2 shows the demographic and SES characteristics of the respondents. A total of 1,247 respondents aged 18 to 50 years were examined. The mean (SD) respondent age was 35.3 (8.01) years. The FP had a mean age (SD) of 33.0 (7.85) years, which was lower than the counterpart value of 36.6 (7.82) years for the RP respondents (P <0.001). Up to 65.1% of the FP were aged between 18 and 36 years old. The proportion of unmarried subjects among the FP (18.5%) was larger than that among the RP (11.7%) (P =0.001). The proportion of the FP that engaged in social health insurance was 60.9%, which was smaller than that of the RP (83.8%) (P <0.001). Furthermore, 21.9% of the FP and 30.6% of the RP participated in commercial health insurance (P =0.001). No difference was observed in the gender composition between the FP and RP, as well as in the composition of households that enjoy the minimum living guarantee. With regard to the SES, 83.2% of the FP were at the middle SES level or below, and 19.2% of the FP were at the low SES level. Meanwhile, 93.7% of the RP were at the middle SES level or above, while only 6.3% of the RP were at the low SES level. 
Table 2 shows the demographic and SES characteristics of the respondents. A total of 1,247 respondents aged 18 to 50 years were examined. The mean (SD) respondent age was 35.3 (8.01) years. The FP had a mean (SD) age of 33.0 (7.85) years, lower than the 36.6 (7.82) years observed for the RP (P <0.001). Up to 65.1% of the FP were aged between 18 and 36 years. The proportion of married subjects among the FP (18.5%) was larger than that among the RP (11.7%) (P =0.001). The proportion of the FP engaged in social health insurance was 60.9%, smaller than that of the RP (83.8%) (P <0.001). Furthermore, 21.9% of the FP and 30.6% of the RP participated in commercial health insurance (P =0.001). No difference was observed between the FP and RP in gender composition or in the proportion of households enjoying the minimum living guarantee.

With regard to SES, 83.2% of the FP were at the middle SES level or below, and 19.2% were at the low SES level. Meanwhile, 93.7% of the RP were at the middle SES level or above, and only 6.3% were at the low SES level. The average annual household income of the RP was RMB 45,600, higher than that of the FP (RMB 34,000) (P =0.011).

Table 2 Demographic characteristics and SES of FP and RP

| Demographics | FP (%) N = 453 | RP (%) N = 794 | P-value |
| --- | --- | --- | --- |
| Age (years) | | | <0.001* |
| 18–24 | 73 (16.1) | 55 (6.9) | |
| 25–36 | 222 (49.0) | 345 (43.5) | |
| 37–50 | 158 (34.9) | 394 (49.6) | |
| Gender | | | 0.501 |
| Male | 111 (24.5) | 209 (26.3) | |
| Female | 342 (75.5) | 585 (73.7) | |
| Marital status | | | 0.001* |
| Married | 84 (18.5) | 93 (11.7) | |
| Unmarried | 369 (81.5) | 701 (88.3) | |
| Minimum living guarantee enjoyment | | | 0.168 |
| No | 408 (90.1) | 734 (92.4) | |
| Yes | 45 (9.9) | 60 (7.6) | |
| Social insurance | | | <0.001* |
| Not involved | 177 (39.1) | 129 (16.2) | |
| Involved | 276 (60.9) | 665 (83.8) | |
| Commercial insurance | | | 0.001* |
| Not involved | 354 (78.1) | 551 (69.4) | |
| Involved | 99 (21.9) | 243 (30.6) | |
| SES | | | <0.001* |
| Low | 87 (19.2) | 50 (6.3) | |
| Middle | 290 (64.0) | 522 (65.7) | |
| High | 76 (16.8) | 222 (28.0) | |
| Average annual household income, mean ± SD (×RMB 10,000) | 3.40 ± 4.01 | 4.56 ± 4.58 | 0.011* |

*P ≤0.05 indicates statistical significance.

RH knowledge and skills

Table 3 presents the findings on RH knowledge and skills among the FP and RP. Compared with the RP, a smaller percentage of the FP possessed good RH knowledge about newlywed health (47.9% vs. 59.1%), prevention and control of sexually transmitted diseases (STDs) (36.4% vs. 43.1%), and health care of pregnant women and puerperae (41.3% vs. 54.4%). The proportion of the FP with no knowledge of these topics was larger than that of the RP. However, no difference was observed between the FP and RP in the frequency of participation in RH lectures: 61.8% of the FP and 56.2% of the RP reported having never attended one.

Table 3 Level of RH knowledge and skills among FP and RP

| RH knowledge and skills | FP N = 453 (%) | RP N = 794 (%) | P-value |
| --- | --- | --- | --- |
| RH skill for pregnancy test | | | 0.002* |
| Good | 158 (34.9) | 349 (44.0) | |
| Middle | 203 (44.8) | 328 (41.3) | |
| Poor | 92 (20.3) | 117 (14.7) | |
| Contraception | | | <0.001* |
| Good | 231 (51.0) | 486 (61.2) | |
| Middle | 167 (36.9) | 274 (34.5) | |
| Poor | 55 (12.1) | 34 (4.3) | |
| Cleaning reproductive tracts | | | 0.007* |
| Good | 181 (40.0) | 381 (48.0) | |
| Middle | 228 (50.3) | 363 (45.7) | |
| Poor | 44 (9.7) | 50 (6.3) | |
| Maternal nutrition during pregnancy | | | 0.001* |
| Good | 92 (20.3) | 205 (25.8) | |
| Middle | 241 (53.2) | 445 (56.0) | |
| Poor | 120 (26.5) | 144 (18.1) | |
| Miscarriage prevention | | | <0.001* |
| Good | 82 (18.1) | 173 (21.8) | |
| Middle | 209 (46.1) | 403 (50.8) | |
| Poor | 162 (35.8) | 218 (27.5) | |
| Prenatal education | | | <0.001* |
| Good | 86 (19.0) | 180 (22.7) | |
| Middle | 213 (47.0) | 428 (53.9) | |
| Poor | 154 (34.0) | 186 (23.4) | |
| Safe sex | | | <0.001* |
| Good | 186 (41.1) | 417 (52.5) | |
| Middle | 217 (47.9) | 328 (41.3) | |
| Poor | 50 (11.0) | 49 (6.2) | |
| RH knowledge about health care of pregnant women | | | <0.001* |
| Good | 187 (41.3) | 432 (54.4) | |
| Middle | 212 (46.1) | 300 (37.8) | |
| Poor | 54 (11.9) | 62 (7.8) | |
| Prevention and control of STD infection | | | 0.016* |
| Good | 165 (36.4) | 342 (43.1) | |
| Middle | 209 (46.1) | 353 (44.5) | |
| Poor | 79 (17.4) | 99 (12.5) | |
| Newlywed health | | | <0.001* |
| Good | 217 (47.9) | 469 (59.1) | |
| Middle | 181 (40.0) | 276 (34.8) | |
| Poor | 55 (12.1) | 49 (6.2) | |
| Attendance in RH lectures | | | 0.056 |
| Yes | 173 (38.2) | 348 (43.8) | |
| No | 280 (61.8) | 446 (56.2) | |

*P ≤0.05 indicates statistical significance.

Similarly, in comparison with the RP, a smaller percentage of the FP was equipped with good RH skills for pregnancy testing (34.9% vs. 44.0%), contraception (51.0% vs. 61.2%), cleaning the reproductive tract (40.0% vs. 48.0%), maternal nutrition during pregnancy (20.3% vs. 25.8%), miscarriage prevention (18.1% vs. 21.8%), prenatal education (19.0% vs. 22.7%), and safe sex (41.1% vs. 52.5%). The proportion of the FP reporting no RH skills was larger than that of the RP.
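The between-group differences above were assessed with chi-square tests (see Statistical methods). As an illustrative sketch, not the authors' code, the Pearson chi-square statistic for a 2×2 comparison such as social insurance participation in Table 2 (FP: 276 of 453 involved; RP: 665 of 794) can be computed by hand:

```python
# Pearson chi-square statistic for a 2x2 table, illustrating the kind of
# comparison reported in Table 2. Pure-Python sketch, not the study's code.
import math

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]
    (rows = groups, columns = outcome categories)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        expected = r * col / n  # expected count under independence
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# FP: 276 involved, 177 not; RP: 665 involved, 129 not
stat = chi_square_2x2(276, 177, 665, 129)
print(round(stat, 1))  # ~81.2 with 1 df, consistent with the reported P < 0.001
```

With one degree of freedom, a statistic this large corresponds to P far below 0.001, matching Table 2.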
FPS utilisation

Table 4 shows the utilisation of FPS by service provider in the latest year for both the FP and RP. The average total number of times the FP received FPS (3.51) was higher than that of the RP (2.99) (P =0.050). No significant difference existed between the FP and RP for hospitals, the MCHB, or the FPSC. The average number of times the FP and RP received FPS from the FPSC (1.87 and 1.73, respectively) was higher than for the other providers. The average number of times the FP used FPS at the CHC was 0.67, higher than that of the RP (0.48) (P =0.006).

Table 4 Mean ± SD of times that subjects utilised FPS, grouped by service provider

| Service providers | FP | RP | P-value |
| --- | --- | --- | --- |
| Hospitals | 0.57 ± 1.145 | 0.46 ± 0.986 | 0.079 |
| MCHB | 0.39 ± 0.929 | 0.32 ± 0.845 | 0.206 |
| CHC | 0.67 ± 1.299 | 0.48 ± 0.930 | 0.006* |
| FPSC | 1.87 ± 2.915 | 1.73 ± 2.530 | 0.364 |
| Total | 3.51 ± 4.758 | 2.99 ± 4.044 | 0.050* |

*P ≤0.05 indicates statistical significance.

Logistic regression analysis of FPS utilisation

The results of the logistic regression analysis of FPS use against the demographics and SES of the FP and RP are presented in Table 5. The factors that influenced FPS use among the FP included gender (OR =0.208), marital status (OR =0.267), RH needs (OR =3.246), participation in RH lectures (OR =3.113), pregnancy test skills (OR =4.981), RH care awareness (OR =5.303), and knowledge of the location of FPS institutions (OR =2.141). Meanwhile, the factors that influenced FPS use among the RP included SES (OR =4.652), contraceptive skills (OR =2.570), gender (OR =0.456), marital status (OR =0.299), RH needs (OR =1.663), participation in RH lectures (OR =1.987), and number of children under five years in the family (OR =0.557).

Table 5 Social determinants of FPS use of FP and RP (estimates shown are for the FP model)

| Social determinants of FPS use | OR (95% CI) | P-value |
| --- | --- | --- |
| Gender (ref: female) | | <0.001* |
| Male | 0.208 (0.108–0.401) | |
| Marital status (ref: married) | | 0.001* |
| Unmarried | 0.267 (0.120–0.596) | |
| SES level (ref: high) | | 0.459 |
| Middle | 1.600 (0.744–3.441) | |
| Low | 1.692 (0.673–4.255) | |
| RH needs (ref: no) | | <0.001* |
| Yes | 3.246 (1.683–6.260) | |
| RH skill for contraception (ref: poor) | | 0.655 |
| Middle | 1.240 (0.359–4.280) | |
| Good | 1.582 (0.445–5.617) | |
| Attendance in RH lectures (ref: no) | | <0.001* |
| Yes | 3.113 (1.920–5.048) | |
| RH care awareness (ref: poor) | | 0.015* |
| Middle | 7.183 (1.790–28.821) | |
| Good | 5.303 (1.227–22.918) | |
| RH skill for pregnancy test (ref: poor) | | <0.001* |
| Middle | 2.837 (1.396–5.767) | |
| Good | 4.981 (2.395–10.359) | |
| Knowledge of the FPSC location (ref: no) | | 0.049* |
| Yes | 2.141 (1.002–4.572) | |
| Number of children under five years in the family | 0.949 (0.523–1.722) | 0.863 |

*P ≤0.05 indicates statistical significance.
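The odds ratios above come from binary logistic regression. For intuition about how an OR and its 95% CI relate to raw counts, the Wald interval for a single 2×2 table can be sketched as follows; the counts are hypothetical and the function is an illustration, not the study's analysis:

```python
# Odds ratio with a Wald 95% CI from a 2x2 table, as intuition for the
# ORs in Table 5. Counts below are hypothetical, NOT study data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for the table [[a, b], [c, d]]:
    rows = exposed/unexposed, columns = used FPS / did not."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical: 40/100 RH-lecture attendees used FPS vs. 20/100 non-attendees
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that excludes 1, as for several factors in Table 5, corresponds to a significant association at P ≤0.05.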
Conclusions
Compared with the RP, the FP in Guangzhou City is characterised by a lower SES level, a lower insurance participation rate, greater RH needs, and a weaker sense of self-protection. At the same time, the FPS use of the FP exceeds that of the RP, owing to the attention the government has paid to this group. The FP requires more RH care services, such as health education. These findings provide evidence that can assist decision makers in bridging these coverage gaps. Achieving truly universal RH coverage would improve the RH consciousness of the FP in China.
Background

Reproductive health (RH) was initially proposed by the World Health Organization Special Programme of Research in Human Reproduction (WHO/HRP) in 1988. The International Conference on Population and Development held in Cairo in 1994 officially defined RH as a state of complete physical, mental and social well-being, not merely the absence of disease, in all matters relating to the reproductive system and to its functions and processes. This definition of RH was the core content of the “Platform for Action”, and the conference became a milestone in RH history [1]. The World Health Assembly held in 1995 re-emphasised the importance of a global RH strategy and advanced the international health objective of achieving universal access to RH by 2015 [2]. Three of the eight Millennium Development Goals adopted by the United Nations in 2000 were directly related to reproductive and sexual health (i.e. improving maternal health, reducing child mortality, and fighting HIV/AIDS, malaria, and other diseases) [3]. In 2004, the WHO announced the global RH strategy once again [4], along with recommendations for monitoring RH at the national level [5].

RH relates not only to human reproduction and development but also to several major diseases and social problems. It is therefore vital to the sustainable development of population, society, and economy, and is an indispensable part of overall human health: it affects not only the current society but also its future. Hence, enjoying RH is a common right of everyone.

Floating population (FP) describes a group of people who reside in a given area for a certain amount of time and for various reasons, but are not generally counted in the official census. Based on the ‘Report on the development of floating population in China’ [6], the total FP in China was nearly 230 million in 2013, about 17% of the national population. Approximately 23 million members of the FP have been living in Guangdong Province for over half a year. Guangdong is the top FP destination, with a total FP of over 28 million, roughly 12% of the entire FP in China; four million of them reside in Guangzhou City [7]. In China, households are divided into urban and rural households based on the geographical location of the householder's official residence and on the occupation of his/her parents, and different household types enjoy different types of social welfare. As a group migrating between urban and rural areas, the FP has a fairly low educational level and poor living and working environments, which leads to poor health and an urgent need for public health services. In particular, the FP is vulnerable to public health problems, especially those related to RH, and as receivers of RH services they are a disadvantaged group. A study in Guangzhou City reported that in 2003 the rate of antenatal examination for the residential population (RP) was 99.72%, the rate of prenatal care 86.4%, and the rate of hospital delivery 98.4%, whereas for the FP these rates were only 74.53%, 43.1%, and 75.3%, respectively [8]. Another survey, in Zhejiang Province, showed that the rate of postpartum visits for the FP was 85.84% and the system administration rate 45.64%, whereas for the RP the rate of postpartum visits was 96.34% and the system administration rate was 96.34%. Such findings reveal that the FP falls behind the RP in prenatal care, hospital delivery, postpartum visits, and system administration. Some studies have reported that, because members of the FP have only recently moved into their current location, the provision of family planning services (FPS) is not ideal and the rate of access to FPS is low, at only 25% to 30% [9].

As an important part of the Millennium Development Goals, equal access to RH services has been recognised and accepted worldwide. In recent years, China has vigorously promoted the equalisation of basic public services in family planning, which enables the FP to enjoy the same promotion, service, and management of FPS as the RP. The present survey therefore took the RP as the control and performed a comparative analysis between the control and the FP, with the objective of identifying gaps and deficiencies and providing evidence for decision making to improve RH public services for the FP. In October 2010, China carried out pilot work to promote the equalisation of a basic public health service (i.e. FPS) that aims to cover all of the FP in pilot cities [10]. In the current study, data on demographic and socioeconomic status (SES) characteristics were collected from both the FP and RP to evaluate RH knowledge and skills and the utilisation of RH services, as well as to identify their social determinants. The findings are expected to provide evidence to assist the improvement of RH public services for the FP.

Subjects of the study

The criteria for eligible FP participants were as follows: (1) a rural-to-urban migrant aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent. The exclusion criterion was the inability to read or answer the study questionnaires (e.g. dementia, difficulties with the language). The criteria for eligible RP participants were as follows: (1) a member of a registered household in Guangzhou City, aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent.

Face-to-face interview

By reviewing the existing literature and consulting relevant experts, we developed a self-designed questionnaire with four components: (1) personal demographic characteristics; (2) household demographic characteristics; (3) self-reported RH knowledge and skill level, RH needs, and RH information channels; and (4) utilisation of RH services in different health care settings. The questionnaire was completed by the respondents themselves; for illiterate subjects, a trained investigator filled in the questionnaire with the respondent's answers. Completing the questionnaire required approximately 20 minutes, and all completed questionnaires were collected immediately by the investigators. Respondents who could apply basic RH knowledge and skills were judged as having a good skill level, those with limited RH knowledge and skills as middle, and those with none as poor.

Sampling strategy

Five administrative zones, namely Baiyun, Huangpu, Yuexiu, Liwan, and Haizhu, were selected out of the 12 administrative zones or counties in Guangzhou. These zones represent new and old regions as well as developing and developed economies. All communities in the five zones were classified into two categories based on their economic situation, and one community was randomly selected from each category, yielding a total of 10 communities from which the subjects were randomly recruited. A total of 1,300 questionnaires were distributed and 1,247 valid questionnaires were returned, a response rate of 95.92%. The sample constituted 453 members of the FP and 794 members of the RP.

SES classification standard

Research data from the WHO confirm that health status, health behaviour, health service accessibility, and health service utilisation are related to SES (i.e. education, income, occupation) and vary by race, ethnicity, and gender [6,11]. Sociologists commonly use SES to predict individual behaviour; we therefore used this index to identify the relationship of SES with FPS utilisation.

SES designates the position or rank of a person or group in society and is a comprehensive assessment of educational level, income, occupation, wealth, living area, and so on [12-14]. Education, income, and occupation are the most commonly used SES indicators [15]. In the present study, we selected personal monthly income, occupation, and educational level as SES indicators. The variables of these indicators were categorised and assigned scores as listed in Table 1; the total of the scores for the three variables ranged from 3 to 15. To facilitate further data analysis, a total score of 3 to 6 was taken to indicate low SES, 7 to 11 middle SES, and 12 to 15 high SES.

Table 1 Educational level, income, and occupation assignments

| Personal monthly income (RMB) | Educational level | Occupation | Score |
| --- | --- | --- | --- |
| ≥4000 | University or above | Administrative institutions or enterprises | 5 |
| 3000–3999 | Senior middle school or polytechnic school | Businessmen | 4 |
| 2000–2999 | Junior middle school | Service industries | 3 |
| 1000–1999 | Elementary school | Retirees with retired wages | 2 |
| <1000 | Other | Awaiting job assignment/laid-off workers | 1 |

Statistical methods

To test statistical significance between groups, the t-test was used for continuous variables and the chi-square test for categorical variables. Binary logistic regression was used to identify the dependence of FPS use on independent variables, including age, sex, marital status, number of women and children in the family, SES, social security, RH needs, location of the Community Health Center (CHC) or Family Planning Service Center (FPSC), RH care awareness, participation in RH lectures, and RH knowledge and skills.

Because the FP and RP differ in SES, and the government's family planning work for the two groups varies, logistic regression was performed separately for the FP and RP to obtain group-specific influencing factors. The enter method was used to select variables correlated with FPS utilisation, and outcomes are reported as odds ratios (ORs) with 95% confidence intervals (CIs). The inclusion P value was 0.05 and the removal P value was 0.10. EpiData version 3.0 was used to construct the database, and SPSS version 17.0 was used to perform the statistical analyses.
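The SES classification described above (three indicator scores from Table 1 summed to 3–15, then cut at 3–6 / 7–11 / 12–15) can be sketched as a small function; `ses_level` is a hypothetical helper written for illustration, not taken from the study:

```python
# Illustrative SES scoring following the paper's scheme: income,
# education, and occupation are each scored 1-5 (Table 1), summed
# (range 3-15), and cut into low (3-6), middle (7-11), high (12-15).
# Mapping raw survey answers to component scores is assumed to follow Table 1.

def ses_level(income_score: int, education_score: int, occupation_score: int) -> str:
    for s in (income_score, education_score, occupation_score):
        if not 1 <= s <= 5:
            raise ValueError("each component score must be between 1 and 5")
    total = income_score + education_score + occupation_score
    if total <= 6:
        return "low"
    elif total <= 11:
        return "middle"
    return "high"

# e.g. income 2000-2999 RMB (3), junior middle school (3), service industry (3)
print(ses_level(3, 3, 3))  # middle (total = 9)
```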
[ "Reproductive health (RH) was initially proposed by the World Health Organization Special Programme of Research in Human Reproduction (WHO/HRP) in 1988. The International Conference on Population and Development held in Cairo in 1994 officially defined RH as a state of complete physical, mental and social well-being, not merely the absence of disease, in all matters relating to the reproductive system and to its function and processes. The definition of RH was the core content of the “Platform for Action”. The conference likewise became a milestone in RH history [1]. The World Health Assembly held in 1995 re-emphasised the importance of a global RH strategy and advanced the international health objective of achieving universal access to RH by 2015 [2]. Three of the eight Millennium Development Goals adopted by the United Nations in 2000 were directly related to reproductive and sexual health (i.e. improving maternal health, reducing child mortality, and fighting HIV/AIDS, malaria, and other diseases) [3]. In 2004, WHO announced the global RH strategy once again [4] along with the recommendations for monitoring RH at the national level [5].\nRH is not only related to human reproduction and development, but also to several kinds of major diseases and social problems. Therefore, RH is vital to the sustainable development of population, society, and economy, and is an important as well as indispensable part of the overall health of humans. It not only affects the current society, but also directly influences the future of human society. Hence, enjoying RH is a common right of everyone.\nFloating population (FP) is a terminology used to describe a group of people who reside in a given population for a certain amount of time and for various reasons, but are not generally considered as part of the official census count. 
Based on the ‘Report on the development of floating population in China’ [6], the total FP size in China was nearly 230 million in 2013, a number that constitutes 17% of the total national population. Approximately 23 million members of the FP have been living in Guangdong Province for over half a year. Guangdong is the top FP residence, with a total FP population of over 28 million, accounting for roughly 12% of the entire FP in China. Four million members of the FP in Guangdong reside in Guangzhou City [7]. In China, households are divided into urban and rural households based on the geographical location of the official residence of the householder and on the occupation of his/her parents. Different household types enjoy different types of social welfare. As a transmigration group between urban and rural areas, the FP has a fairly low educational level and poor living and working environments. Their condition leads to poor health and presents an urgent need for public health services. In particular, the FP is vulnerable to public health problems, especially to those related to RH. As receivers of RH services, the FP is a disadvantaged group. A study in Guangzhou City reported that the rate of antenatal examination for the residential population (RP) was 99.72% in 2003, the rate of prenatal care was 86.4%, and the rate of hospital delivery was 98.4%. In comparison, the rate of antenatal examination for the FP was only 74.53%, the rate of prenatal care was only 43.1%, and the hospital delivery rate was only 75.3% [8]. Another survey in Zhejiang Province showed that the rate of postpartum visit for the FP was 85.84% and the system administration rate was 45.64%, whereas the rate of postpartum visit for the RP was 96.34% and the system administration rate was 96.34%. Such findings reveal that the FP falls behind the RP in terms of prenatal care, hospital delivery, postpartum visit, and system administration. 
Some studies have reported that because members of the FP have only recently moved into their current location, the provision of family planning services (FPS) to them is not ideal, and their rate of access to FPS is low, reaching only 25% to 30% [9].\nAs an important part of the Millennium Development Goals, equal access to RH services has been recognised and accepted worldwide. In recent years, China has vigorously promoted the equalisation of basic public services in family planning, which enables the FP to enjoy the same propaganda, service, and management of FPS as the RP. Therefore, the present survey took the RP as the control and performed a comparative analysis between the control and the FP. The objective was to identify gaps and deficiencies, as well as to provide evidence for decision making to improve RH public services for the FP. In October 2010, China vigorously carried out pilot work to promote the equalisation of a basic public health service (i.e. FPS), which aims to cover all of the FP in pilot cities [10]. In the current study, data on demographic and socioeconomic status (SES) characteristics were collected from both FP and RP to evaluate RH knowledge and skills and the utilisation of RH services, as well as to identify their social determinants. The findings are expected to provide evidence to assist the improvement of RH public services for the FP.", "This study was approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology (IRB No: FWA00007304). Before the investigation, a written informed consent form explaining the method, purpose, and meaning of the survey was provided to the respondents. Respondents who wished to take part in the survey were asked to affix their signatures to the form. 
Written informed consent forms were obtained from all respondents prior to the investigation.\n Subjects of the study The criteria for eligible FP participants were as follows: (1) a rural-to-urban migrant aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent. The exclusion criterion was the inability to read or answer the study questionnaires (e.g. dementia, difficulties with the language). The criteria for eligible RP participants were as follows: (1) member of a registered household in Guangzhou City, aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent.\n Face-to-face interview By reviewing existing literature and consulting relevant experts, we developed a self-made questionnaire that contained four components: (1) personal demographic characteristics; (2) household demographic characteristics; (3) self-reported RH knowledge and skill level, RH needs, and RH information approach; and (4) utilisation of RH services in different health care settings. The questionnaire was filled in by the respondents themselves. In the case of an illiterate subject, a trained investigator filled in the questionnaire with the answers of the respondent. Completing the questionnaire required approximately 20 minutes. 
All the completed questionnaires were collected immediately by the investigators. Respondents who could apply basic RH knowledge and skills were judged to have a good skill level; those with limited RH knowledge and skills were judged midlevel; and those with no RH knowledge and skills were judged poor.\n Sampling strategy Five administrative zones, namely, Baiyun, Huangpu, Yuexiu, Liwan, and Haizhu, were selected out of the 12 administrative zones or counties in Guangzhou. These zones represent new or old regions as well as developing or developed economies. All communities in the five zones were classified into two categories based on their economic situation. One community was randomly selected from each category. Finally, a total of 10 communities were selected, from which the subjects were randomly recruited. A total of 1,300 questionnaires were distributed and 1,247 valid questionnaires were returned, generating a response rate of 95.92%. 
The sample used in this survey constituted 453 members of the FP and 794 members of the RP.\n SES classification standard Relevant research data from the WHO confirm that health status, health behaviour, health service accessibility, and health service utilisation are related to SES (i.e. education, income, occupation) and vary by race, ethnicity, and gender [6,11]. Sociologists commonly use SES as a means to predict individual behaviour. Therefore, in the present study, we used this index to identify the relationship of SES with FPS utilisation.\nIn this study, demographic characteristics, SES, and other measures of the FP and RP were analysed. SES designates the position or rank of a person or group in society; it is a comprehensive assessment of individual educational level, income, occupation, wealth, living area, and so on [12-14]. Education, income, and occupation are the most commonly used SES indicators [15]. In the present study, we selected personal monthly income, occupation, and educational level as SES indicators. The variables of these indicators were categorised and assigned corresponding scores following the list in Table 1. The total of the assignment scores for the three variables ranged from 3 to 15. 
To facilitate further data analysis, we defined a total score of 3 to 6 as low SES, 7 to 11 as middle SES, and 12 to 15 as high SES.Table 1\nEducational level, income, and occupation assignments\n\nPersonal monthly income (RMB) | Educational level | Occupation | Score\n≥4000 | University or above | Administrative institutions or enterprises | 5\n3000 ~ 3999 | Senior middle school or polytechnic school | Businessmen | 4\n2000 ~ 2999 | Junior middle school | Service industries | 3\n1000 ~ 1999 | Elementary school | Retirees with retired wages | 2\n<1000 | Other | Awaiting job assignment/laid-off workers | 1\n
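The scoring and banding scheme of Table 1 can be expressed as a small helper. This is a minimal sketch; the function name and interface are ours, not from the study, and the three component scores are assumed to have already been looked up from Table 1:

```python
# Sketch of the study's SES classification: each of the three indicators
# (monthly income, educational level, occupation) contributes a score of
# 1-5 per Table 1, and the total (3-15) is banded into three SES levels.

def classify_ses(income_score: int, education_score: int, occupation_score: int) -> str:
    """Map three 1-5 component scores to the study's SES bands."""
    for s in (income_score, education_score, occupation_score):
        if not 1 <= s <= 5:
            raise ValueError("each component score must be between 1 and 5")
    total = income_score + education_score + occupation_score
    if total <= 6:      # total of 3-6
        return "low"
    if total <= 11:     # total of 7-11
        return "middle"
    return "high"       # total of 12-15

# Example: income 2000-2999 RMB (3), junior middle school (3), service industry (3)
print(classify_ses(3, 3, 3))  # -> middle
```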
 Statistical methods To test the statistical significance of differences between groups, the t-test was used for continuous variables and the chi-square test for categorical variables. Binary logistic regression analysis was likewise used to identify the dependence of FPS use on independent variables. These variables included age, sex, marital status, number of women and children in the family, SES, social security, RH needs, location of the Community Health Center (CHC) or Family Planning Service Center (FPSC), RH care awareness, participation in RH lectures, and RH knowledge and skills.\nLogistic regression analyses were performed to examine the factors correlated with FPS utilisation. Because the FP and RP differ in SES, and the government's family planning work for the FP and RP varies as well, we performed logistic regression separately for the FP and RP to obtain their different influencing factors. We used the enter method to select variables correlated with FPS utilisation. The outcome variable was FPS use; odds ratios (ORs) and their 95% confidence intervals (CIs) were reported. The inclusion P value was 0.05, and the removal P value was 0.10. EpiData version 3.0 was used to construct the database. 
Statistical software SPSS version 17.0 was utilised to perform the statistical analyses.", "The criteria for eligible FP participants were as follows: (1) a rural-to-urban migrant aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent. The exclusion criterion was the inability to read or answer the study questionnaires (e.g. dementia, difficulties with the language). 
The criteria for eligible RP participants were as follows: (1) member of a registered household in Guangzhou City, aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent.", "By reviewing existing literature and consulting relevant experts, we developed a self-made questionnaire that contained four components: (1) personal demographic characteristics; (2) household demographic characteristics; (3) self-reported RH knowledge and skill level, RH needs, and RH information approach; and (4) utilisation of RH services in different health care settings. The questionnaire was filled in by the respondents themselves. In the case of an illiterate subject, a trained investigator filled in the questionnaire with the answers of the respondent. Completing the questionnaire required approximately 20 minutes. All the completed questionnaires were collected immediately by the investigators. Respondents who could apply basic RH knowledge and skills were judged to have a good skill level; those with limited RH knowledge and skills were judged midlevel; and those with no RH knowledge and skills were judged poor.", "Five administrative zones, namely, Baiyun, Huangpu, Yuexiu, Liwan, and Haizhu, were selected out of the 12 administrative zones or counties in Guangzhou. These zones represent new or old regions as well as developing or developed economies. All communities in the five zones were classified into two categories based on their economic situation. One community was randomly selected from each category. Finally, a total of 10 communities were selected, from which the subjects were randomly recruited. A total of 1,300 questionnaires were distributed and 1,247 valid questionnaires were returned, generating a response rate of 95.92%. 
The sample used in this survey constituted 453 members of the FP and 794 members of the RP.", "Relevant research data from the WHO confirm that health status, health behaviour, health service accessibility, and health service utilisation are related to SES (i.e. education, income, occupation) and vary by race, ethnicity, and gender [6,11]. Sociologists commonly use SES as a means to predict individual behaviour. Therefore, in the present study, we used this index to identify the relationship of SES with FPS utilisation.\nIn this study, demographic characteristics, SES, and other measures of the FP and RP were analysed. SES designates the position or rank of a person or group in society; it is a comprehensive assessment of individual educational level, income, occupation, wealth, living area, and so on [12-14]. Education, income, and occupation are the most commonly used SES indicators [15]. In the present study, we selected personal monthly income, occupation, and educational level as SES indicators. The variables of these indicators were categorised and assigned corresponding scores following the list in Table 1. The total of the assignment scores for the three variables ranged from 3 to 15. 
To facilitate further data analysis, we defined a total score of 3 to 6 as low SES, 7 to 11 as middle SES, and 12 to 15 as high SES.Table 1\nEducational level, income, and occupation assignments\n\nPersonal monthly income (RMB) | Educational level | Occupation | Score\n≥4000 | University or above | Administrative institutions or enterprises | 5\n3000 ~ 3999 | Senior middle school or polytechnic school | Businessmen | 4\n2000 ~ 2999 | Junior middle school | Service industries | 3\n1000 ~ 1999 | Elementary school | Retirees with retired wages | 2\n<1000 | Other | Awaiting job assignment/laid-off workers | 1\n", "To test the statistical significance of differences between groups, the t-test was used for continuous variables and the chi-square test for categorical variables. Binary logistic regression analysis was likewise used to identify the dependence of FPS use on independent variables. These variables included age, sex, marital status, number of women and children in the family, SES, social security, RH needs, location of the Community Health Center (CHC) or Family Planning Service Center (FPSC), RH care awareness, participation in RH lectures, and RH knowledge and skills.\nLogistic regression analyses were performed to examine the factors correlated with FPS utilisation. Because the FP and RP differ in SES, and the government's family planning work for the FP and RP varies as well, we performed logistic regression separately for the FP and RP to obtain their different influencing factors. We used the enter method to select variables correlated with FPS utilisation. The outcome variable was FPS use; odds ratios (ORs) and their 95% confidence intervals (CIs) were reported. The inclusion P value was 0.05, and the removal P value was 0.10. EpiData version 3.0 was used to construct the database. 
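The between-group tests described above can be sketched with standard tools. This is a minimal illustration, not the study's SPSS workflow: the continuous income values below are invented for demonstration, while the 2×2 insurance counts are those reported in Table 2 (FP 177/276, RP 129/665).

```python
import numpy as np
from scipy import stats

# t-test for a continuous measure between two groups
# (invented annual household income values, in units of RMB 10,000)
fp_income = np.array([3.1, 2.8, 4.0, 3.5, 2.2, 3.9])
rp_income = np.array([4.8, 4.2, 5.1, 3.9, 4.6, 5.0])
t_stat, t_p = stats.ttest_ind(fp_income, rp_income)

# chi-square test for a categorical measure
# rows: FP / RP; columns: not involved / involved in social insurance (Table 2)
table = np.array([[177, 276],
                  [129, 665]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
print(f"t = {t_stat:.2f} (p = {t_p:.3f}); chi2 = {chi2:.1f} (p = {chi_p:.3g})")
```

The chi-square on the real Table 2 counts is highly significant, consistent with the P <0.001 reported for social insurance involvement.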
Statistical software SPSS version 17.0 was utilised to perform the statistical analyses.", " Demographic and SES characteristics Table 2 shows the demographic and SES characteristics of the respondents. A total of 1,247 respondents aged 18 to 50 years were examined. The mean (SD) respondent age was 35.3 (8.01) years. The FP had a mean age (SD) of 33.0 (7.85) years, which was lower than the counterpart value of 36.6 (7.82) years for the RP respondents (P <0.001). Up to 65.1% of the FP were aged between 18 and 36 years old. The proportion of unmarried subjects among the FP (18.5%) was larger than that among the RP (11.7%) (P =0.001). The proportion of the FP that engaged in social health insurance was 60.9%, which was smaller than that of the RP (83.8%) (P <0.001). Furthermore, 21.9% of the FP and 30.6% of the RP participated in commercial health insurance (P =0.001). No difference was observed in the gender composition between the FP and RP, as well as in the composition of households that enjoy the minimum living guarantee. With regard to the SES, 83.2% of the FP were at the middle SES level or below, and 19.2% of the FP were at the low SES level. Meanwhile, 93.7% of the RP were at the middle SES level or above, while only 6.3% of the RP were at the low SES level. 
The average annual household income of the RP was RMB 45,600, which was higher than that of the FP (RMB 34,000).Table 2\nDemographic characteristics and SES of FP and RP\n\nDemographics | FP (%) N = 453 | RP (%) N = 794 | P-value\nAge (years) | | | <0.001*\n  18–24 | 73 (16.1) | 55 (6.9) |\n  25–36 | 222 (49.0) | 345 (43.5) |\n  37–50 | 158 (34.9) | 394 (49.6) |\nGender | | | 0.501\n  Male | 111 (24.5) | 209 (26.3) |\n  Female | 342 (75.5) | 585 (73.7) |\nMarital status | | | 0.001*\n  Married | 84 (18.5) | 93 (11.7) |\n  Unmarried | 369 (81.5) | 701 (88.3) |\nMinimum living guarantee enjoyment | | | 0.168\n  No | 408 (90.1) | 734 (92.4) |\n  Yes | 45 (9.9) | 60 (7.6) |\nSocial insurance | | | <0.001*\n  Not involved | 177 (39.1) | 129 (16.2) |\n  Involved | 276 (60.9) | 665 (83.8) |\nCommercial insurance | | | 0.001*\n  Not involved | 354 (78.1) | 551 (69.4) |\n  Involved | 99 (21.9) | 243 (30.6) |\nSES | | | <0.001*\n  Low | 87 (19.2) | 50 (6.3) |\n  Middle | 290 (64.0) | 522 (65.7) |\n  High | 76 (16.8) | 222 (28.0) |\nAverage annual household income (mean ± std) (×RMB 10,000) | 3.40 ± 4.01 | 4.56 ± 4.58 | 0.011*\n*P ≤0.05 indicates statistical significance.\n RH knowledge and skills Table 3 presents the findings on RH knowledge and skills among the FP and RP. Compared with the RP, a smaller percentage of the FP possessed RH knowledge about newlywed health (47.9% vs. 59.1%), prevention and control of sexually transmitted diseases (STDs) (36.4% vs. 43.1%), and health care of pregnant women and puerperae (41.3% vs. 54.4%). The proportion of the FP that had no knowledge of these topics was larger than that of the RP. However, no difference was observed between the FP and RP in terms of the frequency of participation in RH lectures. 
Up to 61.8% of the FP and 56.2% of the RP reported having never participated in RH lectures.Table 3\nLevel of RH knowledge and skills among FP and RP\n\nRH knowledge and skills | FP N = 453 (%) | RP N = 794 (%) | P-value\nRH skills for pregnancy test | | | 0.002*\n  Good | 158 (34.9) | 349 (44.0) |\n  Middle | 203 (44.8) | 328 (41.3) |\n  Poor | 92 (20.3) | 117 (14.7) |\nContraception | | | <0.001*\n  Good | 231 (51.0) | 486 (61.2) |\n  Middle | 167 (36.9) | 274 (34.5) |\n  Poor | 55 (12.1) | 34 (4.3) |\nCleaning reproductive tracts | | | 0.007*\n  Good | 181 (40.0) | 381 (48.0) |\n  Middle | 228 (50.3) | 363 (45.7) |\n  Poor | 44 (9.7) | 50 (6.3) |\nMaternal nutrition during pregnancy | | | 0.001*\n  Good | 92 (20.3) | 205 (25.8) |\n  Middle | 241 (53.2) | 445 (56.0) |\n  Poor | 120 (26.5) | 144 (18.1) |\nMiscarriage prevention | | | <0.001*\n  Good | 82 (18.1) | 173 (21.8) |\n  Middle | 209 (46.1) | 403 (50.8) |\n  Poor | 162 (35.8) | 218 (27.5) |\nPrenatal education | | | <0.001*\n  Good | 86 (19.0) | 180 (22.7) |\n  Middle | 213 (47.0) | 428 (53.9) |\n  Poor | 154 (34.0) | 186 (23.4) |\nSafe sex | | | <0.001*\n  Good | 186 (41.1) | 417 (52.5) |\n  Middle | 217 (47.9) | 328 (41.3) |\n  Poor | 50 (11.0) | 49 (6.2) |\nRH knowledge about health care of pregnant women | | | <0.001*\n  Good | 187 (41.3) | 432 (54.4) |\n  Middle | 212 (46.1) | 300 (37.8) |\n  Poor | 54 (11.9) | 62 (7.8) |\nPrevention and control of STD infection | | | 0.016*\n  Good | 165 (36.4) | 342 (43.1) |\n  Middle | 209 (46.1) | 353 (44.5) |\n  Poor | 79 (17.4) | 99 (12.5) |\nNewlywed health | | | <0.001*\n  Good | 217 (47.9) | 469 (59.1) |\n  Middle | 181 (40.0) | 276 (34.8) |\n  Poor | 55 (12.1) | 49 (6.2) |\nAttendance in RH lectures | | | 0.056\n  Yes | 173 (38.2) | 348 (43.8) |\n  No | 280 (61.8) | 446 (56.2) |\n*P ≤0.05 indicates statistical significance.\nSimilarly, in comparison with the RP, a smaller percentage of the FP was equipped with RH skills for pregnancy tests (34.9% vs. 44.0%), contraception (51.0% vs. 61.2%), cleaning reproductive tracts (40.0% vs. 48.0%), maternal nutrition during pregnancy (20.3% vs. 25.8%), miscarriage prevention (18.1% vs. 21.8%), prenatal education (19.0% vs. 22.7%), and safe sex (41.1% vs. 52.5%). 
The proportion of the FP that reported no RH skills was larger than that of the RP.\n FPS utilisation Table 4 shows the utilisation of FPS by service provider in the most recent year for both the FP and RP. The average number of times the FP received FPS was higher than that of the RP. However, no significant difference existed between the FP and RP for hospitals, the MCHB, or the FPSC. The average number of times that the FP and RP received FPS from the FPSC (1.87 and 1.73, respectively) was higher than that for the other providers. The average number of times that the FP used FPS in the CHC was 0.67, which was higher than that of the RP (0.48) (P =0.006).Table 4\nMean (mean ± std) number of times that subjects utilised FPS, grouped by service providers\n\nService providers | FP | RP | P-value\nHospitals | 0.57 ± 1.145 | 0.46 ± 0.986 | 0.079\nMCHB | 0.39 ± 0.929 | 0.32 ± 0.845 | 0.206\nCHC | 0.67 ± 1.299 | 0.48 ± 0.930 | 0.006*\nFPSC | 1.87 ± 2.915 | 1.73 ± 2.530 | 0.364\nTotal | 3.51 ± 4.758 | 2.99 ± 4.044 | 0.050*\n*P ≤0.05 indicates statistical significance.\n
 Logistic regression analysis of FPS utilisation The results of the logistic regression analysis of the relationship of FPS use with the demographics and SES of the FP and RP are presented in Table 5. The factors that influenced the FPS use of the FP included gender (OR =0.208), marital status (OR =0.267), RH needs (OR =3.246), participation in RH lectures (OR =3.113), pregnancy test skills (OR =4.981), health awareness (OR =5.303), and knowledge about the location of FPS institutions (OR =2.141). 
Meanwhile, the factors that influenced the FPS use of the RP included SES (OR =4.652), contraceptive skills (OR =2.570), gender (OR =0.456), marital status (OR =0.299), RH needs (OR =1.663), participation in RH lectures (OR =1.987), and number of children under five years in the family (OR =0.557).Table 5\nSocial determinants of FPS use of FP and RP\n\nSocial determinants of FPS use\n\nFP\n\nRP\n\nOR(\n95% \nCI)\n\nP\n-value\n\nOR(\n95% \nCI)\n\nP\n-value\n\nGender\n<0.001*<0.001*  Female11  Male0.208(0.108 ~ 0.401)0.208(0.108 ~ 0.401)\nMarital status\n0.001*0.001*  Married11  Unmarried0.267(0.120 ~ 0.596)0.267(0.120 ~ 0.596)\nSES level\n0.4590.459  High11  Middle1.600(0.744 ~ 3.441)1.600(0.744 ~ 3.441)  Low1.692(0.673 ~ 4.255)1.692(0.673 ~ 4.255)\nRH needs\n<0.001*<0.001*  No11  Yes3.246(1.683 ~ 6.260)3.246(1.683 ~ 6.260)\nRH skill for contraception\n0.6550.655  Poor11  Middle1.240(0.359 ~ 4.280)1.240(0.359 ~ 4.280)  Good1.582(0.445 ~ 5.617)1.582(0.445 ~ 5.617)\nAttendance in RH lectures\n<0.001*<0.001*  No11  Yes3.113(1.920 ~ 5.048)3.113(1.920 ~ 5.048)\nRH care awareness\n0.015*0.015*  Poor11  Middle7.183(1.790 ~ 28.821)7.183(1.790 ~ 28.821)  Good5.303(1.227 ~ 22.918)5.303(1.227 ~ 22.918)\nRH skill for pregnancy test\n<0.001*<0.001*  Poor11  Middle2.837(1.396 ~ 5.767)2.837(1.396 ~ 5.767)  Good4.981(2.395 ~ 10.359)4.981(2.395 ~ 10.359)\nKnowledge of the FPSC location\n0.049*0.049*  No11  Yes2.141(1.002 ~ 4.572)2.141(1.002 ~ 4.572)\nNumber of children under five years in the family\n0.949(0.523 ~ 1.722)0.8630.949(0.523 ~ 1.722)0.863*P ≤0.05 indicates a statistical significance.\n\nSocial determinants of FPS use of FP and RP\n\n*P ≤0.05 indicates a statistical significance.\nThe results of logistic regression analysis on the relationship of FPS use with the demographics and SES of the FP and RP are presented in Table 5. 
The factors that influenced the FPS use of the FP included gender (OR =0.208), marital status (OR =0.267), RH needs (OR =3.246), participation in RH lectures (OR =3.113), pregnancy test skills (OR =4.981), health awareness (OR =5.303), and knowledge about the location of FPS institutions (OR =2.141). Meanwhile, the factors that influenced the FPS use of the RP included SES (OR =4.652), contraceptive skills (OR =2.570), gender (OR =0.456), marital status (OR =0.299), RH needs (OR =1.663), participation in RH lectures (OR =1.987), and number of children under five years in the family (OR =0.557).Table 5\nSocial determinants of FPS use of FP and RP\n\nSocial determinants of FPS use\n\nFP\n\nRP\n\nOR(\n95% \nCI)\n\nP\n-value\n\nOR(\n95% \nCI)\n\nP\n-value\n\nGender\n<0.001*<0.001*  Female11  Male0.208(0.108 ~ 0.401)0.208(0.108 ~ 0.401)\nMarital status\n0.001*0.001*  Married11  Unmarried0.267(0.120 ~ 0.596)0.267(0.120 ~ 0.596)\nSES level\n0.4590.459  High11  Middle1.600(0.744 ~ 3.441)1.600(0.744 ~ 3.441)  Low1.692(0.673 ~ 4.255)1.692(0.673 ~ 4.255)\nRH needs\n<0.001*<0.001*  No11  Yes3.246(1.683 ~ 6.260)3.246(1.683 ~ 6.260)\nRH skill for contraception\n0.6550.655  Poor11  Middle1.240(0.359 ~ 4.280)1.240(0.359 ~ 4.280)  Good1.582(0.445 ~ 5.617)1.582(0.445 ~ 5.617)\nAttendance in RH lectures\n<0.001*<0.001*  No11  Yes3.113(1.920 ~ 5.048)3.113(1.920 ~ 5.048)\nRH care awareness\n0.015*0.015*  Poor11  Middle7.183(1.790 ~ 28.821)7.183(1.790 ~ 28.821)  Good5.303(1.227 ~ 22.918)5.303(1.227 ~ 22.918)\nRH skill for pregnancy test\n<0.001*<0.001*  Poor11  Middle2.837(1.396 ~ 5.767)2.837(1.396 ~ 5.767)  Good4.981(2.395 ~ 10.359)4.981(2.395 ~ 10.359)\nKnowledge of the FPSC location\n0.049*0.049*  No11  Yes2.141(1.002 ~ 4.572)2.141(1.002 ~ 4.572)\nNumber of children under five years in the family\n0.949(0.523 ~ 1.722)0.8630.949(0.523 ~ 1.722)0.863*P ≤0.05 indicates a statistical significance.\n\nSocial determinants of FPS use of FP and RP\n\n*P ≤0.05 indicates a statistical 
significance.", "Table 2 shows the demographic and SES characteristics of the respondents. A total of 1,247 respondents aged 18 to 50 years were examined. The mean (SD) respondent age was 35.3 (8.01) years. The FP had a mean age (SD) of 33.0 (7.85) years, which was lower than the counterpart value of 36.6 (7.82) years for the RP respondents (P <0.001). Up to 65.1% of the FP were aged between 18 and 36 years old. The proportion of unmarried subjects among the FP (18.5%) was larger than that among the RP (11.7%) (P =0.001). The proportion of the FP that engaged in social health insurance was 60.9%, which was smaller than that of the RP (83.8%) (P <0.001). Furthermore, 21.9% of the FP and 30.6% of the RP participated in commercial health insurance (P =0.001). No difference was observed in the gender composition between the FP and RP, as well as in the composition of households that enjoy the minimum living guarantee. With regard to the SES, 83.2% of the FP were at the middle SES level or below, and 19.2% of the FP were at the low SES level. Meanwhile, 93.7% of the RP were at the middle SES level or above, while only 6.3% of the RP were at the low SES level. 
The average annual household income of the RP was RMB 45,600, higher than that of the FP (RMB 34,000) (P =0.011).

Table 2. Demographic characteristics and SES of FP and RP

  Demographics                          FP (%), N = 453   RP (%), N = 794   P-value
  Age (years)                                                               <0.001*
    18–24                               73 (16.1)         55 (06.9)
    25–36                               222 (49.0)        345 (43.5)
    37–50                               158 (34.9)        394 (49.6)
  Gender                                                                    0.501
    Male                                111 (24.5)        209 (26.3)
    Female                              342 (75.5)        585 (73.7)
  Marital status                                                            0.001*
    Unmarried                           84 (18.5)         93 (11.7)
    Married                             369 (81.5)        701 (88.3)
  Minimum living guarantee enjoyment                                        0.168
    No                                  408 (90.1)        734 (92.4)
    Yes                                 45 (09.9)         60 (07.6)
  Social insurance                                                          <0.001*
    Not involved                        177 (39.1)        129 (16.2)
    Involved                            276 (60.9)        665 (83.8)
  Commercial insurance                                                      0.001*
    Not involved                        354 (78.1)        551 (69.4)
    Involved                            99 (21.9)         243 (30.6)
  SES                                                                       <0.001*
    Low                                 87 (19.2)         50 (06.3)
    Middle                              290 (64.0)        522 (65.7)
    High                                76 (16.8)         222 (28.0)
  Average annual household income
  (mean ± std) (×RMB 10,000)            3.40 ± 4.01       4.56 ± 4.58       0.011*

*P ≤0.05 indicates statistical significance.

Table 3 presents the findings on RH knowledge and skills among the FP and RP. Compared with the RP, a smaller percentage of the FP possessed RH knowledge about newlywed health (47.9% vs. 59.1%), prevention and control of sexually transmitted diseases (STDs) (36.4% vs. 43.1%), and health care of pregnant women and puerpera (41.3% vs. 54.4%). The proportion of the FP with no knowledge of these topics was larger than that of the RP. However, no difference was observed between the FP and RP in the frequency of participation in RH lectures.
Up to 61.8% of the FP and 56.2% of the RP reported having never participated in RH lectures.

Table 3. Level of RH knowledge and skills among FP and RP

  RH knowledge and skills                         FP, N = 453 (%)   RP, N = 794 (%)   P-value
  RH skill: pregnancy test                                                            0.002*
    Good                                          158 (34.9)        349 (44.0)
    Middle                                        203 (44.8)        328 (41.3)
    Poor                                          92 (20.3)         117 (14.7)
  RH skill: contraception                                                             <0.001*
    Good                                          231 (51.0)        486 (61.2)
    Middle                                        167 (36.9)        274 (34.5)
    Poor                                          55 (12.1)         34 (04.3)
  RH skill: cleaning reproductive tracts                                              0.007*
    Good                                          181 (40.0)        381 (48.0)
    Middle                                        228 (50.3)        363 (45.7)
    Poor                                          44 (09.7)         50 (06.3)
  RH skill: maternal nutrition during pregnancy                                       0.001*
    Good                                          92 (20.3)         205 (25.8)
    Middle                                        241 (53.2)        445 (56.0)
    Poor                                          120 (26.5)        144 (18.1)
  RH skill: miscarriage prevention                                                    <0.001*
    Good                                          82 (18.1)         173 (21.8)
    Middle                                        209 (46.1)        403 (50.8)
    Poor                                          162 (35.8)        218 (27.5)
  RH skill: prenatal education                                                        <0.001*
    Good                                          86 (19.0)         180 (22.7)
    Middle                                        213 (47.0)        428 (53.9)
    Poor                                          154 (34.0)        186 (23.4)
  RH skill: safe sex                                                                  <0.001*
    Good                                          186 (41.1)        417 (52.5)
    Middle                                        217 (47.9)        328 (41.3)
    Poor                                          50 (11.0)         49 (06.2)
  RH knowledge: health care of pregnant women                                         <0.001*
    Good                                          187 (41.3)        432 (54.4)
    Middle                                        212 (46.1)        300 (37.8)
    Poor                                          54 (11.9)         62 (07.8)
  RH knowledge: prevention and control of STDs                                        0.016*
    Good                                          165 (36.4)        342 (43.1)
    Middle                                        209 (46.1)        353 (44.5)
    Poor                                          79 (17.4)         99 (12.5)
  RH knowledge: newlywed health                                                       <0.001*
    Good                                          217 (47.9)        469 (59.1)
    Middle                                        181 (40.0)        276 (34.8)
    Poor                                          55 (12.1)         49 (06.2)
  Attendance in RH lectures                                                           0.056
    Yes                                           173 (38.2)        348 (43.8)
    No                                            280 (61.8)        446 (56.2)

*P ≤0.05 indicates statistical significance.

Similarly, in comparison with the RP, a smaller percentage of the FP was equipped with RH skills for pregnancy testing (34.9% vs. 44.0%), contraception (51.0% vs. 61.2%), cleaning reproductive tracts (40.0% vs. 48.0%), maternal nutrition during pregnancy (20.3% vs. 25.8%), miscarriage prevention (18.1% vs. 21.8%), prenatal education (19.0% vs. 22.7%), and safe sex (41.1% vs. 52.5%).
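The group differences in Table 3 were assessed with chi-square tests. As an illustration, the P value for the contraception rows can be checked directly from the reported counts; this is a minimal stdlib sketch of the standard test, not code from the study:

```python
# Chi-square test of independence on the contraception-skill counts of Table 3
# (rows: Good, Middle, Poor; columns: FP, RP).
observed = [
    [231, 486],  # Good
    [167, 274],  # Middle
    [55, 34],    # Poor
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Sum of (O - E)^2 / E over all cells, with E taken from the marginal totals
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(3)
    for j in range(2)
)
df = (3 - 1) * (2 - 1)
# The critical value for df = 2 at alpha = 0.001 is 13.816, so P < 0.001,
# consistent with the value reported in Table 3.
print(f"chi2 = {chi2:.1f} on {df} df")
```

The same computation applied to any other row block of Table 3 reproduces the corresponding reported significance level.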
The proportion of the FP that reported no RH skills was larger than that of the RP.

FPS utilisation

Table 4 shows the utilisation of FPS by service provider in the latest year for both the FP and RP. The total average number of times the FP received FPS (3.51) was higher than that of the RP (2.99) (P =0.050). No significant difference existed between the FP and RP in the number of times services were received from hospitals, the MCHB, or the FPSC. For both groups, the FPSC was the most frequently used provider (1.87 times for the FP and 1.73 times for the RP). The average number of times the FP used FPS in the CHC was 0.67, significantly higher than that of the RP (0.48) (P =0.006).

Table 4. Mean (± std) number of times subjects utilised FPS, grouped by service provider

  Service provider   FP             RP             P-value
  Hospitals          0.57 ± 1.145   0.46 ± 0.986   0.079
  MCHB               0.39 ± 0.929   0.32 ± 0.845   0.206
  CHC                0.67 ± 1.299   0.48 ± 0.930   0.006*
  FPSC               1.87 ± 2.915   1.73 ± 2.530   0.364
  Total              3.51 ± 4.758   2.99 ± 4.044   0.050*

*P ≤0.05 indicates statistical significance.

Logistic regression analysis of FPS utilisation

The results of the logistic regression analysis relating FPS use to the demographics and SES of the FP and RP are presented in Table 5. The factors that influenced the FPS use of the FP included gender (OR =0.208), marital status (OR =0.267), RH needs (OR =3.246), participation in RH lectures (OR =3.113), pregnancy test skills (OR =4.981), health awareness (OR =5.303), and knowledge of the location of FPS institutions (OR =2.141).
Meanwhile, the factors that influenced the FPS use of the RP included SES (OR =4.652), contraceptive skills (OR =2.570), gender (OR =0.456), marital status (OR =0.299), RH needs (OR =1.663), participation in RH lectures (OR =1.987), and the number of children under five years in the family (OR =0.557).

Table 5. Social determinants of FPS use of FP and RP

  Social determinants of FPS use                OR (95% CI)            P-value
  Gender                                                               <0.001*
    Female                                      1
    Male                                        0.208 (0.108–0.401)
  Marital status                                                       0.001*
    Married                                     1
    Unmarried                                   0.267 (0.120–0.596)
  SES level                                                            0.459
    High                                        1
    Middle                                      1.600 (0.744–3.441)
    Low                                         1.692 (0.673–4.255)
  RH needs                                                             <0.001*
    No                                          1
    Yes                                         3.246 (1.683–6.260)
  RH skill for contraception                                           0.655
    Poor                                        1
    Middle                                      1.240 (0.359–4.280)
    Good                                        1.582 (0.445–5.617)
  Attendance in RH lectures                                            <0.001*
    No                                          1
    Yes                                         3.113 (1.920–5.048)
  RH care awareness                                                    0.015*
    Poor                                        1
    Middle                                      7.183 (1.790–28.821)
    Good                                        5.303 (1.227–22.918)
  RH skill for pregnancy test                                          <0.001*
    Poor                                        1
    Middle                                      2.837 (1.396–5.767)
    Good                                        4.981 (2.395–10.359)
  Knowledge of the FPSC location                                       0.049*
    No                                          1
    Yes                                         2.141 (1.002–4.572)
  Number of children under five in the family   0.949 (0.523–1.722)    0.863

*P ≤0.05 indicates statistical significance.

Discussion

Our findings show that the FP is younger, has a lower SES level and insurance rate, and reports more RH needs than the RP.
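Each OR and its 95% CI in Table 5 imply the underlying Wald statistic: the interval is symmetric around ln(OR), so the standard error, z value, and P value can be recovered from the reported bounds. A minimal stdlib sketch of this cross-check (our own, not code from the study), using the "knowledge of the FPSC location" row as the example:

```python
import math

# Reported values for "Knowledge of the FPSC location" in Table 5
odds_ratio, ci_low, ci_high = 2.141, 1.002, 4.572

beta = math.log(odds_ratio)                               # coefficient on the log-odds scale
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # SE recovered from the 95% CI
z = beta / se                                             # Wald z statistic
# Two-sided P value via the standard normal CDF (erf form)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, P = {p:.3f}")  # reproduces the reported P = 0.049
```

The recovered P value matches the one printed in the table, confirming the OR, CI, and P value of that row are mutually consistent.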
Nearly half of the FP in our study were sexually active young adults aged 25 to 36 years who lived far away from their hometowns and had either sexual partners or no fixed sexual partners. These subjects were also characterised by liberal sexual attitudes, a high incidence of premarital sex, and poor awareness of safe sex. In addition, their low SES level, poor self-protective consciousness, and lack of knowledge of RH and contraception increased their risks of premarital pregnancy, abortion, STDs, and HIV infection. According to Zheng et al. [16], the incidence of premarital and extramarital pregnancy among the FP is increasing, and the rate of induced abortion for unwanted pregnancies is high. In a survey, Zhang [17] reported that 25% of unmarried members of the FP exhibit reckless sexual behaviours. Such behaviours often increase the prevalence of accidental pregnancies and induced abortions among female members of the FP, and these unsafe sexual behaviours have also been significant risk factors for the spread of STDs and AIDS. Sun et al. [18] reported that the RH care-seeking behaviours of the FP are mainly driven by the need for health care services and medical treatment for reproductive disorders. In particular, the FP faced serious issues with unplanned pregnancies and the health care of pregnant women and puerpera, as well as a high prevalence of reproductive illnesses resulting from their behaviours. A considerable number of FP members engaged in so-called 3D (dirty, dangerous, and difficult) work, which lacks essential economic and medical security. Several members of the FP in our study were awaiting job assignments or were unemployed with no income, and were therefore vulnerable to poverty upon arriving in the city. Their ignorance of RH issues further aggravated the problem of reproductive safety, which merits urgent attention.

The FP generally had some RH knowledge and skills.
However, their level of RH knowledge and skills was lower than that of the RP. Zhou [19] likewise found that the FP had less knowledge of contraception than the household population in Wuhan City. On the one hand, this situation may be a consequence of an imperfect management system, under which FP members were not systematically reached by the FPS provided by the CHC, MCHB, or other public health institutions. We therefore hypothesise that the FP is inadequately served and cared for in many aspects of RH care, including insufficient health education. In practice, FPS mainly served the registered population of a city, despite the existence of a policy stating that the FP should enjoy the same services as the RP, with birth control as the principal goal [20]. On the other hand, the high risk of RH problems among the FP is associated with the population's low educational level, frequent migration, impeded information channels, and poor RH skills and knowledge. According to the KAP model of health promotion theory, knowledge is the foundation of attitude and behavioural change [21]. People who possess health knowledge and adopt positive, correct beliefs and attitudes tend to behave healthily and foster healthy habits. Women with inadequate RH knowledge may therefore be unaware of the risks of gynaecological diseases and may refrain from seeking medical treatment in the belief that such diseases will not affect their lives.

The Chinese government has implemented a family planning programme to control excessive population growth and improve population quality and health. The programme is regarded as a basic national policy of China. China has long attached great importance to FPS and has established specialised FPSCs to promote FPS use among the FP.
Moreover, CHCs are fundamental providers of the basic public health and medical services in the primary health care system. The FP is familiar with CHCs and is more inclined to seek medical help or treatment from such centres in the case of RH disorders. According to Chen Hui, 73.3% of the FP hope to receive FPS from FPS centres, whereas 23.1% hope to receive such services from CHCs [22]. Sun et al. [23] found that the FP gives priority to low-level medical institutions when seeking RH-related medical services. Other reports also show that women constitute the majority of those seeking community-based RH services [24]. Therefore, the Chinese government should strengthen the community service system and improve the accessibility of public RH services based on the characteristics of the FP.

In this study, the factors that influenced FPS use among residents included gender, marital status, RH knowledge and skills, and SES. This result is consistent with a Canadian study [25] that reported educational level, gender, poverty, and migration as the major social factors related to RH. Another local study [17] indicated that the main factors influencing RH service use among the FP were educational level, marital status, and RH knowledge. In the current study, people with RH needs naturally received FPS more often than those without such needs. Compared with females, males reported less FPS use, a finding also confirmed by Debra et al. [26]. Furthermore, RH care awareness considerably influenced the use of RH services for both the FP and RP. According to Peng, RH status is closely associated with self-care awareness among rural women in China [27]. This can be explained by the fact that people with a strong sense of self-care tend to seek medical treatment as soon as they fall ill and pay close attention to their own health, especially reproductive and contraceptive health.
Sufficient RH awareness likewise contributes to the formation of correct RH behaviours [28].

SES was found to be closely related to the use of RH services. El-Kak indicated that women from low-income families are more likely to use subsidised RH services [29]. Poor people typically experience high levels of unmet family planning need, whereas such unmet need gradually decreases among the rich as a result of their greater FPS use. Women with an unmet need are those who are fecund and sexually active but do not use any contraceptive method, and who report not wanting more children or wanting to delay the birth of their next child [30]. In Malawi, the rate of FPS use was approximately 20% among married women at the bottom SES level and 36% at the top. By contrast, a 2007 report found FPS use to be relatively equal across all SES groups in the Philippines and Bangladesh. Evidence demonstrates decreasing inequality in FPS use among SES groups [24].

In China, governments are implementing the equalisation policy of public RH services to promote equity in RH use among the FP. Government-purchased FPS are provided free to individuals in need regardless of SES level, thereby equalising FPS use among the FP. As for the RP, the low-SES group faces more RH problems and needs, resulting in higher FPS use. The low-SES group within the RP is, like the FP, a vulnerable group in society. Hence, the government should also pay more attention to this group to improve RH and, consequently, to improve social management and safeguard social stability. Chen et al. reported that Chinese Yi people of high SES have better RH status than those of low SES [9]. Other evidence indicates that women of low SES are more likely to suffer reproductive tract infections than other women [31].
Therefore, the low-SES population is a group that requires substantial FPS in China.

In China, sex is generally considered an embarrassing topic for the unmarried population. Only married women tend to talk about experiences and information concerning sex and contraception, and they have free access to the contraceptive measures of the local family planning departments [28]. Few people discuss contraception and sex with unmarried women. The unmarried population therefore has limited knowledge about RH, let alone use of RH services. Our findings also indicated that the larger the number of children under five years in a family, the less frequent the FPS use. This finding is consistent with the study of Tangcharoensathien, in which women with two or more children were reported to receive fewer essential obstetric services than those with fewer than two children [23]. This phenomenon is attributed to the considerable time and energy parents must spend caring for their children, which reduces their opportunities to use FPS.

Conclusions

The FP in Guangzhou City is characterised by a lower SES level and insurance rate, more RH needs, and a lower level of self-protective consciousness compared with the RP. Meanwhile, the FPS use of the FP is better than that of the RP because of government attention. The FP requires the provision of more RH care services (i.e. health education). These findings provide evidence that can assist decision makers in bridging these coverage gaps. Achieving truly universal RH coverage would certainly improve the RH consciousness of the FP in China.
Keywords: Floating population; Residential population; Reproductive health; Family planning services; Socioeconomic status
Background: Reproductive health (RH) was initially proposed by the World Health Organization Special Programme of Research in Human Reproduction (WHO/HRP) in 1988. The International Conference on Population and Development held in Cairo in 1994 officially defined RH as a state of complete physical, mental and social well-being, not merely the absence of disease, in all matters relating to the reproductive system and to its function and processes. The definition of RH was the core content of the “Platform for Action”. The conference likewise became a milestone in RH history [1]. The World Health Assembly held in 1995 re-emphasised the importance of a global RH strategy and advanced the international health objective of achieving universal access to RH by 2015 [2]. Three of the eight Millennium Development Goals adopted by the United Nations in 2000 were directly related to reproductive and sexual health (i.e. improving maternal health, reducing child mortality, and fighting HIV/AIDS, malaria, and other diseases) [3]. In 2004, WHO announced the global RH strategy once again [4] along with the recommendations for monitoring RH at the national level [5]. RH is not only related to human reproduction and development, but also to several kinds of major diseases and social problems. Therefore, RH is vital to the sustainable development of population, society, and economy, and is an important as well as indispensable part of the overall health of humans. It not only affects the current society, but also directly influences the future of human society. Hence, enjoying RH is a common right of everyone. Floating population (FP) is a terminology used to describe a group of people who reside in a given population for a certain amount of time and for various reasons, but are not generally considered as part of the official census count. 
Based on the ‘Report on the development of floating population in China’ [6], the total FP size in China was nearly 230 million in 2013, a number that constitutes 17% of the total national population. Approximately 23 million members of the FP have been living in Guangdong Province for over half a year. Guangdong is the top FP residence, with a total FP population of over 28 million, accounting for roughly 12% of the entire FP in China. Four million members of the FP in Guangdong reside in Guangzhou City [7]. In China, households are divided into urban and rural households based on the geographical location of the official residence of the householder and on the occupation of his/her parents. Different household types enjoy different types of social welfare. As a transmigration group between urban and rural areas, the FP has a fairly low educational level and poor living and working environments. Their condition leads to poor health and presents an urgent need for public health services. In particular, the FP is vulnerable to public health problems, especially to those related to RH. As receivers of RH services, the FP is a disadvantaged group. A study in Guangzhou City reported that the rate of antenatal examination for the residential population (RP) was 99.72% in 2003, the rate of prenatal care was 86.4%, and the rate of hospital delivery was 98.4%. In comparison, the rate of antenatal examination for the FP was only 74.53%, the rate of prenatal care was only 43.1%, and the hospital delivery rate was only 75.3% [8]. Another survey in Zhejiang Province showed that the rate of postpartum visit for the FP was 85.84% and the system administration rate was 45.64%, whereas the rate of postpartum visit for the RP was 96.34% and the system administration rate was 96.34%. Such findings reveal that the FP falls behind the RP in terms of prenatal care, hospital delivery, postpartum visit, and system administration. 
Some studies have reported that, because FP members have only recently moved to their current locations, the provision of family planning services (FPS) to them is not ideal, and their rate of access to FPS is low, at only 25% to 30% [9]. As an important part of the Millennium Development Goals, equal access to RH services has been recognised and accepted worldwide. In recent years, China has vigorously promoted the equalisation of basic public services in family planning, which enables the FP to enjoy the same FPS publicity, services, and management as the RP. Therefore, the present survey took the RP as the control and performed a comparative analysis between the control and the FP. The objective was to identify gaps and deficiencies, as well as to provide evidence for decision making to improve RH public services for the FP. In October 2010, China vigorously carried out pilot work to promote the equalisation of a basic public health service (i.e. FPS), which aims to cover all of the FP in pilot cities [10]. In the current study, data on demographic and socioeconomic status (SES) characteristics were collected from both the FP and RP to evaluate RH knowledge and skills and the utilisation of RH services, as well as to identify their social determinants. The findings are expected to provide evidence to assist the improvement of RH public services for the FP.

Methods

This study was approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology (IRB No: FWA00007304). Before the investigation, a written informed consent form describing the method, purpose, and meaning of the survey was provided to the respondents. Respondents who wished to take part were asked to sign the form, and written informed consent was obtained from all respondents prior to the investigation.
Subjects of the study

The criteria for eligible FP participants were as follows: (1) a rural-to-urban migrant aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent. The exclusion criterion was the inability to read or answer the study questionnaire (e.g. because of dementia or language difficulties). The criteria for eligible RP participants were as follows: (1) a member of a registered household in Guangzhou City, aged 18 to 50 years, (2) residing in Guangzhou City for more than six months, and (3) able to provide oral informed consent.

Face-to-face interview

By reviewing the existing literature and consulting relevant experts, we developed a self-made questionnaire containing four components: (1) personal demographic characteristics; (2) household demographic characteristics; (3) self-reported RH knowledge and skill level, RH needs, and RH information channels; and (4) utilisation of RH services in different health care settings. The questionnaire was completed by the respondents themselves. For illiterate subjects, a trained investigator filled in the questionnaire with the respondent's answers. Completing the questionnaire required approximately 20 minutes. All completed questionnaires were collected immediately by the investigators.
If respondents could apply basic RH knowledge and skills, their skill level was judged as good; if they had limited RH knowledge and skills, as middle; and if they had no RH knowledge and skills, as poor.

Sampling strategy

Five administrative zones, namely Baiyun, Huangpu, Yuexiu, Liwan, and Haizhu, were selected from the 12 administrative zones or counties of Guangzhou. These zones represent new and old regions as well as developing and developed economies. All communities in the five zones were classified into two categories based on their economic situation, and one community was randomly selected from each category. In total, 10 communities were selected, from which the subjects were randomly recruited. A total of 1,300 questionnaires were distributed and 1,247 valid questionnaires were returned, a response rate of 95.92%. The sample comprised 453 members of the FP and 794 members of the RP.
SES classification standard

Research data from the WHO confirm that health status, health behaviour, health service accessibility, and health service utilisation are related to SES (i.e. education, income, and occupation) and vary by race, ethnicity, and gender [6,11]. Sociologists commonly use SES to predict individual behaviour; we therefore used this index to examine the relationship between SES and FPS utilisation. SES denotes the position or rank of a person or group in society and is a comprehensive assessment of educational level, income, occupation, wealth, living area, and so on [12-14]. Education, income, and occupation are the most commonly used SES indicators [15]. In the present study, we selected personal monthly income, occupation, and educational level as SES indicators. The variables of these indicators were categorised and assigned corresponding scores as listed in Table 1. The total of the assignment scores for the three variables ranged from 3 to 15.
To facilitate further data analysis, we assumed a total score of 3 to 6 to indicate low SES, 7 to 11 for middle-level SES, and 12 to 15 for high SES.Table 1 Educational level, income, and occupation assignments Personal monthly income (RMB) Educational level Occupation Score ≥4000University or aboveAdministrative institutions or enterprises53000 ~ 3999Senior middle school or polytechnic schoolBusinessmen42000 ~ 2999Junior middle schoolService industries31000 ~ 1999Elementary schoolRetirees with retired wages2<1000Other elseAwaiting job assignment/laid-off workers1 Educational level, income, and occupation assignments Relevant research data from the WHO confirmed that health status, health behaviour, health service accessibility, and health service utilisation were related with SES (i.e. education, income, occupation), and vary based on race, ethnicity, and gender [6,11]. Sociologists commonly use SES as a means to predict individual behaviour. Therefore, in the present study, we used this index to identify the relationship of SES with FPS utilisation. In this study, demographic characteristics, SES, and other measures of FP and RP were analysed. The SES is designated to the position or rank of a person or group in society. The SES is a comprehensive assessment of individual educational level, income, occupation, wealth, living area, and so on [12-14]. Education, income, and occupation are the most commonly used SES indicators [15]. In the present study, we selected personal monthly income, occupation, and educational level as SES indicators. The variables of these indicators were categorised and assigned to corresponding scores following the list in Table 1. The total of the assignment scores for the three variables ranged from 3 to 15. 
To facilitate further data analysis, we assumed a total score of 3 to 6 to indicate low SES, 7 to 11 for middle-level SES, and 12 to 15 for high SES.Table 1 Educational level, income, and occupation assignments Personal monthly income (RMB) Educational level Occupation Score ≥4000University or aboveAdministrative institutions or enterprises53000 ~ 3999Senior middle school or polytechnic schoolBusinessmen42000 ~ 2999Junior middle schoolService industries31000 ~ 1999Elementary schoolRetirees with retired wages2<1000Other elseAwaiting job assignment/laid-off workers1 Educational level, income, and occupation assignments Statistical methods To test the statistical significance between groups, t-test was used for measurement variables and chi-square test was used for categorical variables. Binary logistic regression analysis with selection method was likewise used to identify the dependence of FPS use on independent variables. These variables included age, sex, marital status, number of women and children in the family, SES, social security, RH needs, location of the Community Health Center (CHC) or Family Planning Service Center (FPSC), RH care awareness, participation in RH lectures, and RH knowledge and skills. Logistic regression analyses were performed to examine the correlating factors of FPS utilisation. Differences exist between the FP and RP in terms of SES, and the family planning work of the government for the FP and RP varies as well. Hence, we performed logistic regression separately for FP and RP to obtain the different influencing factors. We used enter methods to select variables that are correlated with FPS utilisation. Outcome variables included the FPS use and odds ratios (ORs) and their 95% confidence intervals (CIs). The inclusion P value was 0.05, and the removal P value was 0.10. EpiData version 3.0 was used to construct the database. Statistical software SPSS version 17.0 was utilised to perform the statistical analyses. 
Subjects of the study: The criteria for eligible FP participants were as follows: (1) a rural-to-urban migrant aged 18 to 50 years; (2) residing in Guangzhou City for more than six months; and (3) able to provide oral informed consent. The exclusion criterion was an inability to read or answer the study questionnaires (e.g. dementia or language difficulties). The criteria for eligible RP participants were as follows: (1) a member of a registered household in Guangzhou City, aged 18 to 50 years; (2) residing in Guangzhou City for more than six months; and (3) able to provide oral informed consent.
Face-to-face interview: By reviewing the existing literature and consulting relevant experts, we developed a self-designed questionnaire that contained four components: (1) personal demographic characteristics; (2) household demographic characteristics; (3) self-reported RH knowledge and skill level, RH needs, and RH information sources; and (4) utilisation of RH services in different health care settings. The questionnaire was completed by the respondents themselves. For illiterate subjects, a trained investigator filled in the questionnaire with the respondent's answers. Completing the questionnaire took approximately 20 minutes, and all completed questionnaires were collected immediately by the investigators. Respondents who could apply basic RH knowledge and skills were rated as having a good skill level; those with limited RH knowledge and skills were rated as midlevel; and those with no RH knowledge and skills were rated as poor.

Sampling strategy: Five administrative zones, namely, Baiyun, Huangpu, Yuexiu, Liwan, and Haizhu, were selected out of the 12 administrative zones or counties in Guangzhou. These zones represent both new and old districts as well as developing and developed economies. All communities in the five zones were classified into two categories based on their economic situation, and one community was randomly selected from each category in each zone. A total of 10 communities were thus selected, from which the subjects were randomly recruited. Of the 1,300 questionnaires distributed, 1,247 valid questionnaires were returned, yielding a response rate of 95.92%. The sample comprised 453 members of the FP and 794 members of the RP.

SES classification standard: Relevant research data from the WHO confirm that health status, health behaviour, health service accessibility, and health service utilisation are related to SES (i.e. education, income, and occupation) and vary by race, ethnicity, and gender [6,11]. Sociologists commonly use SES to predict individual behaviour; therefore, in the present study, we used this index to examine the relationship between SES and FPS utilisation. Demographic characteristics, SES, and other measures of the FP and RP were analysed. SES denotes the position or rank of a person or group in society and is a comprehensive assessment of educational level, income, occupation, wealth, living area, and so on [12-14]. Education, income, and occupation are the most commonly used SES indicators [15]. In the present study, we selected personal monthly income, occupation, and educational level as SES indicators. The variables of these indicators were categorised and assigned scores as listed in Table 1. The total of the scores for the three variables ranged from 3 to 15. For further analysis, a total score of 3 to 6 indicated low SES, 7 to 11 middle SES, and 12 to 15 high SES.

Table 1 Educational level, income, and occupation assignments
Personal monthly income (RMB) | Educational level | Occupation | Score
≥4000 | University or above | Administrative institutions or enterprises | 5
3000–3999 | Senior middle school or polytechnic school | Businessmen | 4
2000–2999 | Junior middle school | Service industries | 3
1000–1999 | Elementary school | Retirees with retired wages | 2
<1000 | Other | Awaiting job assignment/laid-off workers | 1

Statistical methods: To test statistical significance between groups, the t-test was used for continuous variables and the chi-square test for categorical variables. Binary logistic regression analysis with a selection method was likewise used to identify the dependence of FPS use on independent variables.
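As a concrete illustration, the SES scoring and banding described above can be sketched in code; the function names and category labels below are ours, chosen to paraphrase Table 1, not the study's exact instrument wording:

```python
# Sketch of the SES scoring scheme: each of the three indicators maps to a
# 1-5 score, and the 3-15 total is banded into low / middle / high SES.

def income_score(monthly_rmb: float) -> int:
    """Score personal monthly income (RMB) on the 1-5 scale."""
    if monthly_rmb >= 4000:
        return 5
    if monthly_rmb >= 3000:
        return 4
    if monthly_rmb >= 2000:
        return 3
    if monthly_rmb >= 1000:
        return 2
    return 1

EDUCATION_SCORES = {
    "university or above": 5,
    "senior middle or polytechnic school": 4,
    "junior middle school": 3,
    "elementary school": 2,
    "other": 1,
}

OCCUPATION_SCORES = {
    "administrative institutions or enterprises": 5,
    "businessmen": 4,
    "service industries": 3,
    "retirees with retired wages": 2,
    "awaiting job assignment/laid-off": 1,
}

def ses_level(monthly_rmb: float, education: str, occupation: str) -> str:
    """Band the 3-15 total into low (3-6), middle (7-11), or high (12-15)."""
    total = (income_score(monthly_rmb)
             + EDUCATION_SCORES[education]
             + OCCUPATION_SCORES[occupation])
    if total <= 6:
        return "low"
    if total <= 11:
        return "middle"
    return "high"
```

For example, a respondent earning RMB 2,500 per month, with junior middle school education and a service-industry job, scores 3 + 3 + 3 = 9 and is classified as middle SES.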
These variables included age, sex, marital status, number of women and children in the family, SES, social security, RH needs, location of the Community Health Center (CHC) or Family Planning Service Center (FPSC), RH care awareness, participation in RH lectures, and RH knowledge and skills. Because the FP and RP differ in SES, and because the government's family planning work also differs between the two groups, logistic regression was performed separately for the FP and RP to identify their respective influencing factors. Variables correlated with FPS utilisation were selected using the enter method. The outcome variable was FPS use, and odds ratios (ORs) with their 95% confidence intervals (CIs) were reported. The inclusion P value was 0.05, and the removal P value was 0.10. EpiData version 3.0 was used to construct the database, and SPSS version 17.0 was used to perform the statistical analyses.

Results: Demographic and SES characteristics: Table 2 shows the demographic and SES characteristics of the respondents. A total of 1,247 respondents aged 18 to 50 years were examined. The mean (SD) respondent age was 35.3 (8.01) years. The FP were younger, with a mean (SD) age of 33.0 (7.85) years versus 36.6 (7.82) years for the RP (P <0.001), and 65.1% of the FP were aged 18 to 36 years. The proportion of unmarried subjects among the FP (18.5%) was larger than that among the RP (11.7%) (P =0.001). The proportion of the FP enrolled in social health insurance was 60.9%, smaller than that of the RP (83.8%) (P <0.001).
With regard to SES, 83.2% of the FP were at the middle SES level or below, and 19.2% of the FP were at the low SES level. Meanwhile, 93.7% of the RP were at the middle SES level or above, and only 6.3% of the RP were at the low SES level. The average annual household income of the RP was RMB 45,600, higher than that of the FP (RMB 34,000).

Table 2 Demographic characteristics and SES of FP and RP
Demographics | FP (%) N = 453 | RP (%) N = 794 | P-value
Age (years) | | | <0.001*
  18–24 | 73 (16.1) | 55 (6.9) |
  25–36 | 222 (49.0) | 345 (43.5) |
  37–50 | 158 (34.9) | 394 (49.6) |
Gender | | | 0.501
  Male | 111 (24.5) | 209 (26.3) |
  Female | 342 (75.5) | 585 (73.7) |
Marital status | | | 0.001*
  Married | 84 (18.5) | 93 (11.7) |
  Unmarried | 369 (81.5) | 701 (88.3) |
Minimum living guarantee enjoyment | | | 0.168
  No | 408 (90.1) | 734 (92.4) |
  Yes | 45 (9.9) | 60 (7.6) |
Social insurance | | | <0.001*
  Not involved | 177 (39.1) | 129 (16.2) |
  Involved | 276 (60.9) | 665 (83.8) |
Commercial insurance | | | 0.001*
  Not involved | 354 (78.1) | 551 (69.4) |
  Involved | 99 (21.9) | 243 (30.6) |
SES | | | <0.001*
  Low | 87 (19.2) | 50 (6.3) |
  Middle | 290 (64.0) | 522 (65.7) |
  High | 76 (16.8) | 222 (28.0) |
Average annual household income (mean ± SD) (×RMB 10,000) | 3.40 ± 4.01 | 4.56 ± 4.58 | 0.011*
*P ≤0.05 indicates statistical significance.
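The categorical comparisons in Table 2 use the Pearson chi-square test; a minimal sketch of the test statistic on an r × c table of observed counts follows (the p-value would additionally require the chi-square distribution with (r−1)(c−1) degrees of freedom, e.g. via scipy):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table,
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat
```

For instance, a perfectly balanced 2 × 2 table yields a statistic of 0, while stronger departures from independence yield larger values.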
Furthermore, 21.9% of the FP and 30.6% of the RP participated in commercial health insurance (P =0.001). No difference was observed between the FP and RP in gender composition or in the proportion of households enjoying the minimum living guarantee.

RH knowledge and skills: Table 3 presents the findings on RH knowledge and skills among the FP and RP. Compared with the RP, a smaller percentage of the FP possessed RH knowledge about newlywed health (47.9% vs. 59.1%), prevention and control of sexually transmitted diseases (STDs) (36.4% vs. 43.1%), and health care of pregnant women and puerperae (41.3% vs. 54.4%).
The proportion of the FP with no knowledge of such information was larger than that of the RP. However, no difference was observed between the FP and RP in the frequency of participation in RH lectures: 61.8% of the FP and 56.2% of the RP reported having never participated in RH lectures.

Table 3 Level of RH knowledge and skills among FP and RP
RH knowledge and skills | FP N = 453 (%) | RP N = 794 (%) | P-value
RH skills for:
Pregnancy test | | | 0.002*
  Good | 158 (34.9) | 349 (44.0) |
  Middle | 203 (44.8) | 328 (41.3) |
  Poor | 92 (20.3) | 117 (14.7) |
Contraception | | | <0.001*
  Good | 231 (51.0) | 486 (61.2) |
  Middle | 167 (36.9) | 274 (34.5) |
  Poor | 55 (12.1) | 34 (4.3) |
Cleaning reproductive tracts | | | 0.007*
  Good | 181 (40.0) | 381 (48.0) |
  Middle | 228 (50.3) | 363 (45.7) |
  Poor | 44 (9.7) | 50 (6.3) |
Maternal nutrition during pregnancy | | | 0.001*
  Good | 92 (20.3) | 205 (25.8) |
  Middle | 241 (53.2) | 445 (56.0) |
  Poor | 120 (26.5) | 144 (18.1) |
Miscarriage prevention | | | <0.001*
  Good | 82 (18.1) | 173 (21.8) |
  Middle | 209 (46.1) | 403 (50.8) |
  Poor | 162 (35.8) | 218 (27.5) |
Prenatal education | | | <0.001*
  Good | 86 (19.0) | 180 (22.7) |
  Middle | 213 (47.0) | 428 (53.9) |
  Poor | 154 (34.0) | 186 (23.4) |
Safe sex | | | <0.001*
  Good | 186 (41.1) | 417 (52.5) |
  Middle | 217 (47.9) | 328 (41.3) |
  Poor | 50 (11.0) | 49 (6.2) |
RH knowledge about health care of pregnant women | | | <0.001*
  Good | 187 (41.3) | 432 (54.4) |
  Middle | 212 (46.1) | 300 (37.8) |
  Poor | 54 (11.9) | 62 (7.8) |
Prevention and control of STD infection | | | 0.016*
  Good | 165 (36.4) | 342 (43.1) |
  Middle | 209 (46.1) | 353 (44.5) |
  Poor | 79 (17.4) | 99 (12.5) |
Newlywed health | | | <0.001*
  Good | 217 (47.9) | 469 (59.1) |
  Middle | 181 (40.0) | 276 (34.8) |
  Poor | 55 (12.1) | 49 (6.2) |
Attendance in RH lectures | | | 0.056
  Yes | 173 (38.2) | 348 (43.8) |
  No | 280 (61.8) | 446 (56.2) |
*P ≤0.05 indicates statistical significance.

Similarly, in comparison with the RP, a smaller percentage of the FP was equipped with RH skills for pregnancy testing (34.9% vs. 44.0%), contraception (51.0% vs. 61.2%), cleaning reproductive tracts (40.0% vs. 48.0%), maternal nutrition during pregnancy (20.3% vs. 25.8%), miscarriage prevention (18.1% vs. 21.8%), prenatal education (19.0% vs. 22.7%), and safe sex (41.1% vs. 52.5%). The proportion of the FP that reported no RH skills was larger than that of the RP.

FPS utilisation: Table 4 shows the utilisation of FPS by service provider over the past year for both the FP and RP. The average number of times the FP received FPS was higher than that of the RP. The difference between the FP and RP was not significant for hospitals, the MCHB, or the FPSC. For both groups, the FPSC was the most frequently used provider (1.87 times for the FP and 1.73 for the RP). The average number of times the FP used FPS at the CHC was 0.67, higher than that of the RP (0.48) (P =0.006).

Table 4 Mean (mean ± std) number of times that subjects utilised FPS, grouped by service provider
Service providers | FP | RP | P-value
Hospitals | 0.57 ± 1.145 | 0.46 ± 0.986 | 0.079
MCHB | 0.39 ± 0.929 | 0.32 ± 0.845 | 0.206
CHC | 0.67 ± 1.299 | 0.48 ± 0.930 | 0.006*
FPSC | 1.87 ± 2.915 | 1.73 ± 2.530 | 0.364
Total | 3.51 ± 4.758 | 2.99 ± 4.044 | 0.050*
*P ≤0.05 indicates statistical significance.
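The mean comparisons in Table 4 are two-sample t-tests; the sketch below uses Welch's (unequal-variance) form, which is our assumption, since the exact t-test variant is not stated, and the data are illustrative rather than the study's:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and degrees of freedom.

    Returns (t, df); a p-value would additionally require the
    t-distribution CDF (e.g. scipy.stats.t.sf).
    """
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Identical samples give t = 0, and a group with the larger mean listed first gives a positive t.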
Logistic regression analysis of FPS utilisation: The results of the logistic regression analysis of the relationship of FPS use with the demographics and SES of the FP and RP are presented in Table 5. The factors that influenced the FPS use of the FP included gender (OR =0.208), marital status (OR =0.267), RH needs (OR =3.246), participation in RH lectures (OR =3.113), pregnancy test skills (OR =4.981), health awareness (OR =5.303), and knowledge of the location of FPS institutions (OR =2.141).
Meanwhile, the factors that influenced the FPS use of the RP included SES (OR =4.652), contraceptive skills (OR =2.570), gender (OR =0.456), marital status (OR =0.299), RH needs (OR =1.663), participation in RH lectures (OR =1.987), and number of children under five years in the family (OR =0.557).

Table 5 Social determinants of FPS use of FP and RP
Social determinants of FPS use | FP OR (95% CI) | FP P-value | RP OR (95% CI) | RP P-value
Gender | | <0.001* | | <0.001*
  Female | 1 | | 1 |
  Male | 0.208 (0.108–0.401) | | 0.208 (0.108–0.401) |
Marital status | | 0.001* | | 0.001*
  Married | 1 | | 1 |
  Unmarried | 0.267 (0.120–0.596) | | 0.267 (0.120–0.596) |
SES level | | 0.459 | | 0.459
  High | 1 | | 1 |
  Middle | 1.600 (0.744–3.441) | | 1.600 (0.744–3.441) |
  Low | 1.692 (0.673–4.255) | | 1.692 (0.673–4.255) |
RH needs | | <0.001* | | <0.001*
  No | 1 | | 1 |
  Yes | 3.246 (1.683–6.260) | | 3.246 (1.683–6.260) |
RH skill for contraception | | 0.655 | | 0.655
  Poor | 1 | | 1 |
  Middle | 1.240 (0.359–4.280) | | 1.240 (0.359–4.280) |
  Good | 1.582 (0.445–5.617) | | 1.582 (0.445–5.617) |
Attendance in RH lectures | | <0.001* | | <0.001*
  No | 1 | | 1 |
  Yes | 3.113 (1.920–5.048) | | 3.113 (1.920–5.048) |
RH care awareness | | 0.015* | | 0.015*
  Poor | 1 | | 1 |
  Middle | 7.183 (1.790–28.821) | | 7.183 (1.790–28.821) |
  Good | 5.303 (1.227–22.918) | | 5.303 (1.227–22.918) |
RH skill for pregnancy test | | <0.001* | | <0.001*
  Poor | 1 | | 1 |
  Middle | 2.837 (1.396–5.767) | | 2.837 (1.396–5.767) |
  Good | 4.981 (2.395–10.359) | | 4.981 (2.395–10.359) |
Knowledge of the FPSC location | | 0.049* | | 0.049*
  No | 1 | | 1 |
  Yes | 2.141 (1.002–4.572) | | 2.141 (1.002–4.572) |
Number of children under five years in the family | 0.949 (0.523–1.722) | 0.863 | 0.949 (0.523–1.722) | 0.863
*P ≤0.05 indicates statistical significance.
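The ORs and 95% CIs in Table 5 come from the fitted logistic models; for a single binary factor without adjustment for covariates, the same quantities can be computed directly from a 2 × 2 table, as this sketch shows (illustrative counts, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed users, b = exposed non-users,
    c = unexposed users, d = unexposed non-users.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by the Woolf method
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

Note that the adjusted ORs reported by a multivariable logistic model will generally differ from these crude 2 × 2 values.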
Meanwhile, the factors that influenced the FPS use of the RP included SES (OR = 4.652), contraceptive skills (OR = 2.570), gender (OR = 0.456), marital status (OR = 0.299), RH needs (OR = 1.663), participation in RH lectures (OR = 1.987), and the number of children under five years in the family (OR = 0.557).

Table 5. Social determinants of FPS use of the FP

Social determinants of FPS use                      OR (95% CI)             P-value
Gender                                                                      <0.001*
  Female                                            1
  Male                                              0.208 (0.108~0.401)
Marital status                                                              0.001*
  Married                                           1
  Unmarried                                         0.267 (0.120~0.596)
SES level                                                                   0.459
  High                                              1
  Middle                                            1.600 (0.744~3.441)
  Low                                               1.692 (0.673~4.255)
RH needs                                                                    <0.001*
  No                                                1
  Yes                                               3.246 (1.683~6.260)
RH skill for contraception                                                  0.655
  Poor                                              1
  Middle                                            1.240 (0.359~4.280)
  Good                                              1.582 (0.445~5.617)
Attendance in RH lectures                                                   <0.001*
  No                                                1
  Yes                                               3.113 (1.920~5.048)
RH care awareness                                                           0.015*
  Poor                                              1
  Middle                                            7.183 (1.790~28.821)
  Good                                              5.303 (1.227~22.918)
RH skill for pregnancy test                                                 <0.001*
  Poor                                              1
  Middle                                            2.837 (1.396~5.767)
  Good                                              4.981 (2.395~10.359)
Knowledge of the FPSC location                                              0.049*
  No                                                1
  Yes                                               2.141 (1.002~4.572)
Number of children under five years in the family   0.949 (0.523~1.722)     0.863

*P ≤ 0.05 indicates statistical significance.

Discussion: Our findings show that the FP is younger, has a lower SES level and insurance rate, and reports more RH needs than the RP. Nearly half of the FP in our study were sexually active young adults aged 25 to 36 years, living far from their hometowns, who had sexual partners or no fixed sexual partners.
These individuals were also characterised by liberal sexual attitudes, a high incidence of premarital sex, and poor awareness of safe sex. In addition, their low SES level, poor self-protective consciousness, and lack of knowledge on RH and contraception increased their risks of premarital pregnancy, abortion, STDs, and HIV infection. According to Zheng et al. [16], the incidence of premarital and extramarital pregnancy among the FP is increasing, and the rate of induced abortion for unwanted pregnancies is high. In a survey, Zhang [17] reported that 25% of unmarried members of the FP exhibit reckless sexual behaviours. Such behaviours often increase the prevalence of accidental pregnancies and induced abortion among female members of the FP. These unsafe sexual behaviours are also significant risk factors for the spread of STDs and AIDS. Sun et al. [18] reported that the RH care-seeking behaviours of the FP are mainly driven by the need for health care services and medical treatment for reproductive disorders. In particular, the FP encountered serious issues with unplanned pregnancies and the health care of pregnant women and puerpera, as well as a high prevalence of reproductive illnesses as a result of their behaviours. A considerable number of FP members engaged in so-called 3D (i.e. dirty, dangerous, and difficult) jobs, which are characterised by a lack of essential economic and medical security. Several members of the FP included in our study were awaiting job assignments or were unemployed and had no income. Therefore, they were vulnerable to poverty when they arrived in the city. Their ignorance of RH issues further aggravated the problem of reproductive safety, which merits urgent attention. The FP generally had some RH knowledge and skills. However, their level of RH knowledge and skills was lower than that of the RP.
Zhou [19] revealed that the FP was equipped with less knowledge on contraception than the household population in Wuhan City. On the one hand, this situation may be a consequence of an imperfect management system, under which FP members were not required to receive the FPS provided by the CHC, MCHB, or other public health institutions. Therefore, we hypothesise that the FP is inadequately served and cared for (i.e. receives insufficient health education) in many aspects of RH care. FPS in cities have mainly served the registered population, despite the existence of a policy stating that the FP should enjoy the same services as the RP, with birth control as the principal goal [20]. On the other hand, the high risks of RH issues among the FP are associated with the population's low educational level, frequent migrations, impeded information channels, and poor RH skills and knowledge. Based on the KAP model of health promotion theory, knowledge is the foundation of attitude and behavioural change [21]. People who have health knowledge and who adopt positive and correct beliefs and attitudes tend to conduct healthy behaviours and foster healthy habits. Women with inadequate RH knowledge may therefore be unaware of the risks of gynaecological diseases, and may refrain from seeking medical treatment in the belief that such diseases will not affect their lives. The Chinese government has implemented a family planning program to control excessive population growth and improve population quality and health. The program is regarded as a basic national policy of China. China has long attached great importance to FPS and has established the specialised FPSC to promote FPS use among the FP. Moreover, the CHCs are fundamental providers of the basic public health and medical services included in the primary health care system.
The FP is familiar with CHCs and is more inclined to seek medical help or treatment from such centres in case of RH disorders. According to Chen Hui, 73.3% of the FP hope to receive FPS provided by FPS centres, whereas 23.1% hope to receive such services from CHCs [22]. Sun et al. [23] found that the FP gives priority to low-level medical institutions when seeking RH-related medical services. Other reports [24] also show that women constitute the majority of those seeking community-based RH services. Therefore, the Chinese government should strengthen the community service system and improve the accessibility of public RH services based on the characteristics of the FP. In this study, the factors that influenced the FPS use of residents included gender, marital status, RH knowledge and skills, and SES. This result is consistent with that of a Canadian study [25], which reported educational level, gender, poverty, and migration as the major social factors related to RH. Another local study [17] indicated that the main factors influencing RH service use among the FP were educational level, marital status, and RH knowledge. In the current study, people with RH needs naturally received FPS more often than those without such needs. Compared with females, males reported less FPS use, a finding also confirmed by Debra et al. [26]. Furthermore, RH care awareness considerably influenced the use of RH services for both the FP and RP. According to Peng, RH status is closely associated with self-care awareness among rural women in China [27]. This phenomenon can be explained by the fact that people with a strong sense of self-care tend to seek medical treatment as soon as they are ill and pay close attention to their own health, especially reproductive and contraceptive health. Sufficient RH awareness likewise contributes to the generation of correct RH behaviours [28]. SES is also found to be closely related to the use of RH services.
El-Kak indicated that women from low-income families are more likely to use subsidised RH services [29]. Poor people typically experience high levels of unmet family planning need, whereas such unmet needs gradually decrease among the rich as a result of their increased FPS use. Women with an unmet need are those who are fecund and sexually active but do not use any contraceptive method, and who report no preference for more children or a desire to delay the birth of their next child [30]. In Malawi, the rate of FPS use was approximately 20% and 36% among married women at the bottom and top SES levels, respectively. In contrast, a 2007 report found that FPS use was relatively equal across all SES groups in the Philippines and Bangladesh. Evidence demonstrates decreasing inequalities in FPS use among various SES groups [24]. In China, governments are implementing the equalisation policy of public RH services to promote equity in RH service use among the FP. Government-purchased FPS are provided freely to individuals in need regardless of SES level, thereby equalising FPS use among the FP. As for the RP, the low-SES group faces more RH problems and needs, resulting in higher FPS use. The low-SES group within the RP is as vulnerable a group in society as the FP. Hence, the government should also pay more attention to this group to improve RH, and consequently, to improve social management and safeguard social stability. Chen et al. reported that Chinese Yi people of high SES have a better RH status than those of low SES [9]. Other evidence indicates that women of low SES are more likely to suffer reproductive tract infections than other women [31]. Therefore, the low-SES population is a group that requires substantial FPS in China. In China, sex is generally considered an embarrassing topic for the unmarried population.
Only married women tend to share experiences and information about sex and contraception and have free access to the contraceptive measures of the local family planning departments [28]. Few people discuss contraception and sex with unmarried women. Therefore, the unmarried population has limited knowledge about RH, let alone use of RH services. Our findings also indicated that the more children under five years of age in a family, the less frequent the FPS use. This finding is consistent with the study of Tangcharoensathien, in which women with two or more children were reported to receive fewer essential obstetric services than those with fewer than two children [23]. This phenomenon is attributed to parents needing to spend considerable time and energy caring for their children, thereby reducing their opportunities to use FPS. Conclusions: The FP in Guangzhou City is characterised by a lower SES level and insurance rate, more RH needs, and a lower level of self-protective consciousness compared with the RP. Meanwhile, the FPS use of the FP is better than that of the RP because of government attention. The FP requires the provision of more RH care services (i.e. health education). These findings provide evidence that can assist decision makers in bridging these coverage gaps. Achieving truly universal RH coverage would certainly improve the RH consciousness of the FP in China.
Background: The World Health Assembly has pledged to achieve universal reproductive health (RH) coverage by 2015. Therefore, China has been vigorously promoting the equalisation of basic public health services (i.e. RH services). The floating population (FP) is the largest special group of internal migrants in China and constitutes the current national focus. However, gaps exist in this group's access to RH services in China. Methods: A total of 453 members of the FP and 794 members of the residential population (RP) aged 18 to 50 years from five urban districts in Guangzhou City were recruited to participate in a cross-sectional survey in 2009. Information on demographics and socioeconomic status (SES) was collected from these two groups to evaluate RH knowledge and skills and the utilisation of family planning services (FPS), and to identify social determinants. Results: The proportion of individuals with low SES in the FP (19.2%) was higher than that in the RP (6.3%) (P <0.001). Of the FP, 9.7% to 35.8% had no knowledge of at least one skill, a proportion higher than the counterpart values (6.2% to 27.5%) for the RP (P <0.05). The frequency of FPS use among both the FP and RP was low. However, FPS use was higher among the FP than among the RP (3.51 vs. 2.99) (P =0.050). Logistic regression analysis was used to analyse the social determinants that influence FPS use in the FP and RP. SES was a factor affecting FPS utilisation in the RP (OR =4.652, 95% CI =1.751~12.362) but not in the FP. Conclusions: The FPS use of the FP in Guangzhou City was higher under equalised public health services. However, a need still exists to help the FP with low SES to improve their RH knowledge and skills through access to public RH services.
Background: Reproductive health (RH) was initially proposed by the World Health Organization Special Programme of Research in Human Reproduction (WHO/HRP) in 1988. The International Conference on Population and Development held in Cairo in 1994 officially defined RH as a state of complete physical, mental and social well-being, not merely the absence of disease, in all matters relating to the reproductive system and to its function and processes. The definition of RH was the core content of the “Platform for Action”. The conference likewise became a milestone in RH history [1]. The World Health Assembly held in 1995 re-emphasised the importance of a global RH strategy and advanced the international health objective of achieving universal access to RH by 2015 [2]. Three of the eight Millennium Development Goals adopted by the United Nations in 2000 were directly related to reproductive and sexual health (i.e. improving maternal health, reducing child mortality, and fighting HIV/AIDS, malaria, and other diseases) [3]. In 2004, WHO announced the global RH strategy once again [4] along with the recommendations for monitoring RH at the national level [5]. RH is not only related to human reproduction and development, but also to several kinds of major diseases and social problems. Therefore, RH is vital to the sustainable development of population, society, and economy, and is an important as well as indispensable part of the overall health of humans. It not only affects the current society, but also directly influences the future of human society. Hence, enjoying RH is a common right of everyone. Floating population (FP) is a terminology used to describe a group of people who reside in a given population for a certain amount of time and for various reasons, but are not generally considered as part of the official census count. 
Based on the ‘Report on the development of floating population in China’ [6], the total FP size in China was nearly 230 million in 2013, a number that constitutes 17% of the total national population. Approximately 23 million members of the FP have been living in Guangdong Province for over half a year. Guangdong is the top FP residence, with a total FP population of over 28 million, accounting for roughly 12% of the entire FP in China. Four million members of the FP in Guangdong reside in Guangzhou City [7]. In China, households are divided into urban and rural households based on the geographical location of the official residence of the householder and on the occupation of his/her parents. Different household types enjoy different types of social welfare. As a transmigration group between urban and rural areas, the FP has a fairly low educational level and poor living and working environments. Their condition leads to poor health and presents an urgent need for public health services. In particular, the FP is vulnerable to public health problems, especially to those related to RH. As receivers of RH services, the FP is a disadvantaged group. A study in Guangzhou City reported that the rate of antenatal examination for the residential population (RP) was 99.72% in 2003, the rate of prenatal care was 86.4%, and the rate of hospital delivery was 98.4%. In comparison, the rate of antenatal examination for the FP was only 74.53%, the rate of prenatal care was only 43.1%, and the hospital delivery rate was only 75.3% [8]. Another survey in Zhejiang Province showed that the rate of postpartum visit for the FP was 85.84% and the system administration rate was 45.64%, whereas the rate of postpartum visit for the RP was 96.34% and the system administration rate was 96.34%. Such findings reveal that the FP falls behind the RP in terms of prenatal care, hospital delivery, postpartum visit, and system administration. 
Some studies have reported that because the FP has only recently moved into their current locations, the provision of family planning services (FPS) is not ideal, and the rate of access to FPS is low, reaching only 25% to 30% [9]. As an important part of the Millennium Development Goals, equal access to RH services has been recognised and accepted worldwide. In recent years, China has vigorously promoted the equalisation of basic public services in family planning, which enables the FP to enjoy the same propaganda, service, and management of FPS as the RP. Therefore, the present survey took the RP as the control and performed a comparative analysis between the control and the FP. The objective was to identify gaps and deficiencies, as well as to provide evidence for decision making to improve RH public services for the FP. In October 2010, China vigorously carried out pilot work to promote the equalisation of a basic public health service (i.e. FPS), which aims to cover all of the FP in pilot cities [10]. In the current study, data on demographic and socioeconomic status (SES) characteristics were collected from both the FP and RP to evaluate RH knowledge and skills and the utilisation of RH services, as well as to identify their social determinants. The findings are expected to provide evidence to assist the improvement of RH public services for the FP.
Keywords: Floating population | Residential population | Reproductive health | Family planning services | Socioeconomic status
MeSH terms: Adolescent | Adult | Attitude to Health | China | Cross-Sectional Studies | Family Planning Services | Female | Health Knowledge, Attitudes, Practice | Humans | Male | Middle Aged | Reproductive Health Services | Rural Population | Socioeconomic Factors | Transients and Migrants | United States | Urban Population | Young Adult
Unintentional injury mortality in India, 2005: nationally representative mortality survey of 1.1 million homes.
PMID: 22741813
Background: Unintentional injuries are an important cause of death in India. However, no reliable nationally representative estimates of unintentional injury deaths are available. Thus, we examined unintentional injury deaths in a nationally representative mortality survey.
Methods: Trained field staff interviewed a living relative of those who had died during 2001-03. The verbal autopsy reports were sent to two of the 130 trained physicians, who independently assigned an ICD-10 code to each death. Discrepancies were resolved through reconciliation and adjudication. Proportionate cause-specific mortality was used to produce national unintentional injury mortality estimates based on United Nations population and death estimates.
Results: In 2005, unintentional injury caused 648,000 deaths (7% of all deaths; 58/100,000 population). Unintentional injury mortality rates were higher among males than females, and in rural versus urban areas. Road traffic injuries (185,000 deaths; 29% of all unintentional injury deaths), falls (160,000 deaths; 25%) and drowning (73,000 deaths; 11%) were the three leading causes of unintentional injury mortality, with fire-related injury causing 5% of these deaths. The highest unintentional mortality rates were in those aged 70 years or older (410/100,000).
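As a rough consistency check on the headline results figures (a sketch; the rounding conventions are assumed), the cause-specific shares and the denominators implied by the published rates can be recomputed:

```python
# Headline figures from the Results; round-number checks only.
total_unintentional = 648_000                 # unintentional injury deaths, 2005
cause_deaths = {"road traffic": 185_000, "falls": 160_000, "drowning": 73_000}

# Cause-specific shares of unintentional injury mortality, in percent
share_pct = {c: round(100 * d / total_unintentional) for c, d in cause_deaths.items()}

# Denominators implied by the published rates: 58 deaths per 100,000
# population, and unintentional injury as 7% of all deaths
implied_population = total_unintentional / (58 / 100_000)
implied_all_deaths = total_unintentional / 0.07
```

The recomputed shares match the reported 29%, 25% and 11%, and the implied population denominator is about 1.1 billion, consistent with India's 2005 population.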
Conclusions: These direct estimates of unintentional injury deaths in India (0.6 million) are lower than WHO indirect estimates (0.8 million), but double the estimates which rely on police reports (0.3 million). Importantly, they revise upward the mortality due to falls, particularly in the elderly, and revise downward the mortality due to fires. Ongoing monitoring of injury mortality will enable the development of evidence-based injury prevention programs.
MeSH terms: Accidental Falls | Accidents | Accidents, Traffic | Adolescent | Adult | Age Distribution | Aged | Cause of Death | Child | Child, Preschool | Drowning | Female | Fires | Humans | India | Infant | International Classification of Diseases | Male | Middle Aged | Qualitative Research | Risk Factors | Rural Population | Sex Distribution | Surveys and Questionnaires | Urban Population | Wounds and Injuries | Young Adult
PMCID: 3532420
Background
Indirect estimates by the World Health Organization (WHO) and the Global Burden of Disease Study (GBD) suggest that unintentional injuries account for 3.9 million deaths worldwide [1], of which about 90% occur in low- and middle-income countries. The majority of these deaths are attributable to road traffic injuries, falls, drowning, poisoning and burns [1]. In 2004, WHO estimated that about 0.8 million deaths in India were due to unintentional injuries [1]. Direct Indian estimates of unintentional injury deaths relying on annual National Crime Records Bureau (NCRB) reports of injury deaths from police records showed only 0.3 million injury deaths in 2005 [2], but police records are subject to under-reporting and misclassification [3-5]. Other sources of mortality data, from selected health centres in rural areas [6] and selected urban hospitals [7], are not representative of the population of India and have other methodological limitations [8]. The objective of this paper is to estimate total unintentional injury mortality in India and its variation by gender, rural/urban residence and region, using results from a nationally representative survey of the causes of death.
Methods
Study setting and data collection The Registrar General of India (RGI) randomly selected 6671 small areas from approximately one million small areas defined in the 1991 census for its Sample Registration System (SRS) [9]. In 1993, household characteristics of the SRS areas, each with about 150 houses and 1000 people, were documented. The SRS sample frame covered 6.3 million people in all 28 states and seven union territories of India. The SRS sampling frame is based on the results of the census of India, which is conducted every ten years. The selected households are continuously monitored for vital events by two independent surveyors. The first is a part-time enumerator (commonly a resident of the village/area or a local school teacher familiar with the area/village) who visits the households every month. The second is a full-time (non-medical) surveyor of the Registrar General of India who visits the households every 6 months. Another staff member from the office of the Registrar General of India reconciles the vital events reported by the part-time enumerator and the full-time surveyor, arriving at a final list of births and deaths for each household at the completion of each half-yearly survey. In the last decade, the RGI has introduced an advanced form of verbal autopsy called “RHIME” (Routine, Reliable, Representative and Re-sampled Household Investigation of Mortality with Medical Evaluation) [9,10]. Verbal autopsy is a method of ascertaining the cause of death by seeking information on signs, symptoms and circumstances from a family member or caretaker of the deceased [10]. Since 2001, about 800 non-medical graduates, who were full-time employees of the RGI, had knowledge of local languages and were trained to implement the RHIME method, have visited the families to record the events preceding each death using three age-specific questionnaires (neonatal, child and adult), including a narrative in the local language.
The neonatal (0–28 days) and child death (29 days–14 years) questionnaires included a direct question: “Did s/he die from an injury or accident? If yes, what was the kind of injury or accident?” Response options were 1) road traffic accident, 2) falls, 3) fall of objects (onto the person), 4) burns, 5) drowning, 6) poisoning, 7) bite/sting, 8) natural disaster, 9) homicide/assault, and 10) unknown. Place of death was recorded for all deaths, with response options of 1) home, 2) health facility (government hospitals, private hospitals and registered practitioners), and 3) others (including roadside, public area, on transportation, and body of water). A random sample of about 5% of the areas was resurveyed independently, generally with consistent results. Details of the methods, validation results, and preliminary results for various diseases have been reported previously [10-14].
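The dual-recording design above, where two independent surveyors' event lists are reconciled into one final list per household, can be sketched roughly as follows. This is an illustrative sketch only: the field names, keys and matching rule are hypothetical, and the actual SRS reconciliation is done manually by RGI staff with field verification of discrepancies.

```python
# Hypothetical sketch of the half-yearly reconciliation step: the monthly
# enumerator's and six-monthly surveyor's event lists are merged into one
# final list per household. Keys and fields are illustrative assumptions.

def reconcile(enumerator_events, surveyor_events):
    """Union of vital events keyed by (household_id, event_type, month)."""
    final = {}
    for source in (enumerator_events, surveyor_events):
        for ev in source:
            key = (ev["household_id"], ev["type"], ev["month"])
            # In the real SRS, unmatched events trigger field verification;
            # here we simply keep the first report seen for each key.
            final.setdefault(key, ev)
    return list(final.values())

enum_list = [{"household_id": 1, "type": "death", "month": "2001-03"}]
surv_list = [{"household_id": 1, "type": "death", "month": "2001-03"},
             {"household_id": 2, "type": "birth", "month": "2001-05"}]
final_list = reconcile(enum_list, surv_list)  # one death + one birth
```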
Cause of death assignment

The field reports, including the individual narratives, were sent, allocated randomly by language, to at least two of 130 physicians who were specially trained in disease coding. The physicians assessed the underlying cause of death and assigned a three-character code from the International Classification of Diseases (ICD), 10th Revision [15]. Unintentional injury deaths were allocated ICD codes from Chapter XX (external causes of morbidity and mortality), including codes V01-X59, Y40-Y86, Y88 and Y89. In case of chapter-level disagreement between the two physicians, the final ICD code was assigned by a third, senior physician. In case of sub-chapter disagreements, specific codes were adjudicated by two members of the research team.

Reports could not be collected for 12% of the identified deaths, mostly due to migration of the household or change of residence; this is unlikely to have led to any systematic misclassification of cause of death, as these missing deaths were distributed across all states. Moreover, the SRS definition of usual resident included those who travel away from home for periodic work [9], so deaths away from home were captured provided the whole household had not moved. A total of about 136,480 deaths were identified between January 1, 2001 and December 31, 2003. About 9% of the death reports could not be coded because of field problems such as poor image quality of the narrative or insufficient information; hence a cause of death was identified for 122,828 deaths.

National estimates for absolute number of deaths and mortality rates

We applied the proportion of each cause of death to the UN Population Division estimates of Indian deaths from all causes in 2005 (9.8 million; lower and upper limits 9.4 and 10.3 million, respectively) [13,16]. UN estimates were used for more accurate calculation of deaths and mortality rates (using the Preston and Coale method) [17] because the SRS undercounts mortality by approximately 10% [18,19]. All major causes of death, such as malaria, HIV and child mortality [12,13], have been estimated for 2005 from the Million Death Study, making cause-specific mortality comparable for policy purposes.
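The envelope approach described here, scaling the study's cause-specific proportions up to the UN total-death envelope, reduces to a simple proportional calculation. The sketch below is illustrative only: the population denominator is an assumed round figure, and the Preston and Coale undercount adjustment is not reproduced.

```python
# Illustrative envelope calculation: scale a cause-specific proportion from
# the coded study sample to the UN estimate of 9.8 million Indian deaths in
# 2005, then convert to a rate per 100,000. The population denominator is an
# assumption for illustration, not a figure from the study.

SAMPLE_DEATHS = 122_828        # deaths with a coded cause, 2001-03
UN_TOTAL_DEATHS_2005 = 9.8e6   # UN Population Division envelope for 2005
POPULATION_2005 = 1.11e9       # assumed mid-year population (illustrative)

def national_estimate(cause_deaths_in_sample):
    proportion = cause_deaths_in_sample / SAMPLE_DEATHS
    absolute = proportion * UN_TOTAL_DEATHS_2005
    rate_per_100k = absolute / POPULATION_2005 * 100_000
    return absolute, rate_per_100k

# All unintentional injury deaths in the sample: 8023
abs_deaths, rate = national_estimate(8_023)
# ~0.64 million deaths, broadly matching the study's reported 648,000
```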
The proportion of cause-specific deaths in each age category was weighted to the SRS sampling fractions in the rural and urban parts of each state; however, unweighted proportions yielded nearly identical results [20]. Applying the 2001–03 data to 2005 deaths should not introduce major bias, as there was little change in the yearly distribution of causes of death in the present study (p = 0.736 for yearly variation in proportional mortality). Sub-national estimates were produced for six major regions (north, south, west, central, northeast and east) [9] by disaggregating the national UN totals according to the relative SRS death rates, as described earlier [21]. UN live-birth totals were used to calculate mortality rates for children younger than five years [21,22]. A 95% confidence interval for each cause-of-death proportion and mortality rate was calculated (using the Taylor linearization method) on the basis of the survey design and the observed sample deaths [23].

SRS enrolment is on a voluntary basis, and its confidentiality and consent procedures are defined as part of the Registration of Births and Deaths Act, 1969. Oral consent was obtained in the first SRS sample frame. Families are free to withdraw from the study, but compliance is close to 100%. The study poses no or minimal risk to enrolled subjects. All personal identifiers present in the raw data are anonymised before analysis. The study was approved by the review boards of the Post-Graduate Institute of Medical Education and Research, St. Michael’s Hospital, and the Indian Council of Medical Research.
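To make the design-weighted estimation concrete, the sketch below computes a stratum-weighted proportion with an approximate 95% confidence interval. It is a simplification under stated assumptions: the stratum weights and counts are made up for illustration, and the study's actual Taylor linearization also accounts for clustering within SRS units, which this sketch ignores.

```python
import math

# Simplified illustration of a design-weighted proportion with a 95% CI.
# Strata stand in for the rural/urban SRS sampling fractions; all numbers
# are hypothetical. Real Taylor linearization would also handle clustering.

strata = [  # (stratum weight, injury deaths, total deaths) - illustrative
    (0.7, 6_621, 100_000),   # rural
    (0.3, 1_402, 22_828),    # urban
]

# Weighted proportion: sum of stratum weights times stratum proportions.
p_w = sum(w * d / n for w, d, n in strata)

# Stratified variance of a proportion (no clustering correction).
var = sum(w**2 * (d / n) * (1 - d / n) / n for w, d, n in strata)
half = 1.96 * math.sqrt(var)
ci = (p_w - half, p_w + half)
```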
Results
Unintentional injuries accounted for 7% (8023/122,828) of all deaths (Table 1). A small number (155; 1.9%) of injury deaths were excluded from the analyses because intent (unintentional or intentional) could not be determined. Unintentional injury deaths constituted nearly 20% of total deaths at ages 5–29 years and 12% of total deaths at ages 30–44 years. Over 80% (6621) of unintentional injury deaths occurred in rural areas. More males (5228) than females (2795) died from unintentional injury, and male deaths exceeded female deaths at all ages except beyond 70 years.

Table 1. Unintentional-injury-attributed deaths in the Sample Registration System 2001–2003 and estimated national rates for 2005, by age, sex and place of residence. *Mortality rate per 1000 live births.

The national mortality rate (MR) for unintentional injury was estimated at 58 per 100,000 population (males 71, females 43), with higher rates in rural (60) than urban (50) areas. Mortality rates were highest at ages 70 years or older (410/100,000), with falls accounting for 63% of all injury deaths in this age group. In absolute terms, about 648,000 deaths from unintentional injuries occurred in India during 2005 (95% CI 634,000–662,000; Table 2).

Table 2. Number of unintentional injury deaths by type in the Sample Registration System, 2001–2003, and estimated national totals for 2005. *Other injuries include transport accidents other than RTI, poisoning, exposure to electric current, radiation, extreme ambient air temperature and pressure, contact with hot substances, other accidental threats to breathing, overexertion, travel and privation, and adverse effects of medical and surgical interventions. Details of ICD codes are available from http://apps.who.int/classifications/icd10/browse/2010/en#/XX. †Deaths of undetermined intent (Y10-Y34) numbered 155 (2% of all injury deaths); 15% of these were Y14 (poisoning by exposure to unspecified drugs, medicaments and biological substances, undetermined intent).
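As a quick arithmetic cross-check, the sample tallies quoted above are internally consistent. The figures below are taken directly from the text; nothing new is assumed.

```python
# Consistency check of the sample tallies reported in the text:
# male + female injury deaths, the share of all coded deaths, and the
# rural share of unintentional injury deaths.

male, female = 5_228, 2_795
total_injury = 8_023
all_coded = 122_828
rural = 6_621

assert male + female == total_injury          # sex breakdown sums correctly

share_of_all_deaths = total_injury / all_coded  # reported as 7% (rounded)
rural_share = rural / total_injury              # reported as "over 80%"
```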
Road traffic injuries (RTI) were the leading cause of death, accounting for 185,000 deaths or 29% of all unintentional injury deaths (MR = 16.5), followed by falls (160,000; 25%; MR = 14.3) and drowning (73,000; 11%; MR = 6.4). Males had higher mortality rates for all sub-types of unintentional injury except fire-related deaths. Females aged 15–29 years (9900; MR = 5.8, 95% CI 4.9–6.4) had the highest mortality rates from fire. The ratios of male to female mortality rates were as follows: RTI, 4:1; drowning, 2:1; fires, 1:3; and falls, 1:1.

Figure 1 shows the age distribution for the top three causes of unintentional injury deaths. RTI were the leading cause of death at ages 15–59 years (41% of all unintentional injury deaths in this age group), whereas deaths due to falls were more common in older people (38% of unintentional injury deaths at ages 60–69 years and 63% at ages 70 years and older). Age distributions of unintentional injury death proportions for the three leading causes among males and females are reported in Additional file 1: Figure S1 and Additional file 2: Figure S2, respectively.

Figure 1. Age distribution of unintentional injury mortality for the three leading causes of injury in India, 2005. The three leading causes of unintentional injury are presented as a proportion of all unintentional deaths in the sample.

The pattern of unintentional injury mortality was similar in rural and urban areas; however, RTI constituted a higher proportion of unintentional injury deaths in urban areas (40%), while deaths due to mechanical forces (12%) and contact with venomous animals and plants (9%) were higher in rural areas. The proportion of injury deaths by type also varied across the six major regions of India (Figure 2), with the highest unintentional injury mortality rate in the south (62 per 100,000) and the lowest in the northeast (47 per 100,000). RTI accounted for over 40% of unintentional deaths in the north, but only about 20% in the east.

Figure 2. Causes of unintentional deaths in India, by rural/urban area and six major regions, 2001–03. Proportions of deaths by type of unintentional injury are presented for all unintentional deaths in the study sample, together with the number of unintentional injury deaths and the unintentional injury mortality rate (MR) per 100,000 population.

Of all unintentional injury deaths, 43% occurred at home, 17% at health facilities, and 35% at other places (Figure 3); place of death could not be determined for 5% of deaths. About 63% of RTI deaths were recorded as occurring at other places, most often at the site of injury or en route to a health facility. Most deaths due to falls (72%) and forces of nature (67%) occurred at home. Fires were the only injury type with a high proportion (44%) of deaths in a health facility.

Figure 3. Place of death by type of unintentional injury in India. Proportions of deaths at home, in a health facility, or at other places are presented for all unintentional deaths in the study sample. Other places include roadside, public areas, on transportation, and bodies of water. Ranking is within all unintentional injury deaths and is based on the mortality rate for each type of injury.
Conclusions
These direct estimates of unintentional injury deaths in India (0.6 million) are lower than the WHO indirect estimates (0.8 million), but double the estimates that rely on police reports (0.3 million). Importantly, they revise upward the mortality attributed to falls, particularly in the elderly, and revise downward the mortality attributed to fires. Road traffic injuries, falls and drowning are the leading causes of unintentional injury death in India.
Background

Indirect estimates by the World Health Organization (WHO) and the Global Burden of Diseases Study (GBD) suggest that unintentional injuries account for 3.9 million deaths worldwide [1], of which about 90% occur in low- and middle-income countries. The majority of these deaths are attributable to road traffic injuries, falls, drowning, poisoning and burns [1].

In 2004, WHO estimated that about 0.8 million deaths in India were due to unintentional injuries [1]. Direct Indian estimates of unintentional injury deaths relying on annual National Crime Records Bureau (NCRB) reports of injury deaths from police records showed only 0.3 million injury deaths in 2005 [2], but police records are subject to under-reporting and misclassification [3-5]. Other sources of mortality data, from selected health centres in rural areas [6] and selected urban hospitals [7], are not representative of the population of India and have other methodological limitations [8].

The objective of this paper is to estimate total unintentional injury mortality in India and its variation by gender, rural/urban residence and region, using results from a nationally representative survey of the causes of death.

Limitations

Our study had some limitations. Verbal autopsy methods are known to misclassify some causes of death among neonates and in older age groups of 70 years and above [10,34]. Earlier comparisons of verbal autopsy with urban hospital records in India indicated a sensitivity of 85% and a specificity of over 95% for injuries [35]. We caution, however, that hospital-based studies are not ideal benchmarks, as a large majority of deaths in India occur without medical consultation [36], and because of the differences observed in the age-sex composition of injury deaths recorded in health facilities versus those recorded at home (Additional file 3: Table S1). Misclassification of injuries overall has been low in verbal autopsy validation studies elsewhere, with the exception of falls, for which some misclassification with cerebrovascular conditions has been reported [37]. We estimated injury deaths for 2005 using the proportionate injury mortality recorded during 2001–03. We did not observe any change in proportionate mortality from 2001 to 2003 and hence assumed that proportionate mortality would not have changed in the next two years either; however, that may not be the case.

The disease burden in India is undergoing a transition, with the burden of both chronic conditions and injury rising rapidly. However, injury is a neglected epidemic in India, and few resources are dedicated to the prevention or treatment of injuries. Our results provide reliable national and regional estimates of injury mortality, which can inform the allocation of resources and the development of evidence-based national and state policies for injury control. Our results suggest that an upward revision of falls mortality, and perhaps a downward revision of fire-related deaths, is needed. Future research should aim to establish effective injury surveillance systems, epidemiological assessment of all outcomes of injuries, advocacy for prevention and treatment, and appraisal of existing effective interventions for injury prevention and trauma care.

Abbreviations

GBD, Global Burden of Disease; ICD, International Classification of Diseases; MCCD, Medically Certified Cause of Death; NCRB, National Crime Records Bureau; RGI, Registrar General of India; SRS, Sample Registration System; SCD, Survey of Causes of Death; WHO, World Health Organization.

Competing interests

The authors declare they have no competing interests.

Authors' contributions

PJ and the academic partners in India (MDS Collaborators) planned the Million Death Study in close collaboration with the RGI. JJ, WS and PJ conducted the analyses and drafted the paper. PJ is the chief investigator and the guarantor for the study. All authors participated in interpreting the data and writing the manuscript. All authors read and approved the final manuscript.

Financial disclosures

This work was supported by the Fogarty International Centre of the US National Institutes of Health [grant R01 TW05991-01]; the Canadian Institutes of Health Research [CIHR; IEG-53506]; the International Development Research Centre [grant 102172]; and the Li Ka Shing Knowledge Institute at St. Michael’s Hospital, University of Toronto (CGHR support). PJ is supported by the Canada Research Chair programme. JJ is supported by the Endeavour Research Fellowship Programme. The funding sources had no role in study design or conduct, including data collection, analysis and interpretation. PJ had full access to all data and final responsibility for the decision to submit for publication on behalf of all authors.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/487/prepub
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study setting and data collection", "Cause of death assignment", "National estimates for absolute number of deaths and mortality rates", "Results", "Discussion", "Limitations", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions", "Financial disclosures", "Pre-publication history", "Supplementary Material" ]
[ "Indirect estimates by the World Health Organization (WHO) and the Global Burden of Diseases Study (GBD) suggest that unintentional injuries account for 3.9 million deaths worldwide [1], of which about 90% occur in low- and middle-income countries. The majority of these deaths are attributable to road traffic injuries, falls, drowning, poisoning and burns [1].\nIn 2004, WHO estimated about 0.8 million deaths in India were due to unintentional injuries [1]. Direct Indian estimates of unintentional injury deaths relying on annual National Crime Records Bureau (NCRB) reports of injury deaths from police records showed only 0.3 million injury deaths in 2005 [2], but police record are subject to under-reporting and misclassification [3-5]. Other sources of mortality data from selected health centres in rural areas [6], and selected urban hospitals [7] are not representative of the population of India, and have other methodological limitations [8].\nThe objective of this paper is to estimate total unintentional injury mortality in India and its variation by gender, rural/urban residence and region using results from a nationally representative survey of the causes of deaths.", " Study setting and data collection The Registrar General of India (RGI) randomly selected 6671 small areas from approximately one million small areas defined in the 1991 census for its Sample Registration System (SRS) [9]. In 1993, household characteristics of the SRS areas, each with about 150 houses and 1000 people, were documented. The SRS sample frame covered 6.3 million people in all 28 states and seven union territories of India. SRS sampling frame is based on the results of census of India, which is conducted every ten years. The selected households are continuously monitored for vital events by two independent surveyors. 
The first is a part-time enumerator (commonly a resident of the village/area or a local school teacher familiar with the area/village) who visits the households every month. The second is a full-time (nonmedical) Registrar General of India’s surveyor who visits the households every 6 months. Another staff member from the office of Registrar General of India does the reconciliation of vital events reported by the part-time enumerator and the full-time surveyor, arriving at a final list of births and deaths for each household, at the completion each half-yearly survey.\nIn the last decade, the RGI has introduced an advanced form of verbal autopsy called “RHIME” (Routine, Reliable, Representative and Re-sampled Household Investigation of Mortality with Medical Evaluation) [9,10]. Verbal autopsy is a method of ascertaining the cause of death by seeking information on signs, symptoms and circumstances from a family member or care taker of the deceased [10]. Since 2001, about 800 non-medical graduates who were full time employees of the RGI, had knowledge of local languages and were trained to implement the RHIME method visited the families to record events preceding each death using three age specific questionnaires (neonatal, child and adult) including a narrative in the local language. The neonatal (0–28 days) and child death (29 days–14 years) questionnaires included a direct question, “Did s/he die from an injury or accident? If yes, what was the kind of injury or accident?” Response options included 1) Road traffic accident 2) Falls 3) Fall of objects (on to the person) 4) Burns 5) Drowning 6) Poisoning 7) Bite/sting 8) Natural disaster 9) Homicide/assault 10) Unknown. Place of death was recorded in all deaths with response options of 1) Home 2) Health facility (like government hospitals, private hospitals and registered practitioners) and 3) Others (including road side, public area, on transportation and body of water). 
A random sample of about 5% of the areas was resurveyed independently generally with consistent results. Details of the methods, validation results, and preliminary results for various diseases have been reported previously [10-14].\nThe Registrar General of India (RGI) randomly selected 6671 small areas from approximately one million small areas defined in the 1991 census for its Sample Registration System (SRS) [9]. In 1993, household characteristics of the SRS areas, each with about 150 houses and 1000 people, were documented. The SRS sample frame covered 6.3 million people in all 28 states and seven union territories of India. SRS sampling frame is based on the results of census of India, which is conducted every ten years. The selected households are continuously monitored for vital events by two independent surveyors. The first is a part-time enumerator (commonly a resident of the village/area or a local school teacher familiar with the area/village) who visits the households every month. The second is a full-time (nonmedical) Registrar General of India’s surveyor who visits the households every 6 months. Another staff member from the office of Registrar General of India does the reconciliation of vital events reported by the part-time enumerator and the full-time surveyor, arriving at a final list of births and deaths for each household, at the completion each half-yearly survey.\nIn the last decade, the RGI has introduced an advanced form of verbal autopsy called “RHIME” (Routine, Reliable, Representative and Re-sampled Household Investigation of Mortality with Medical Evaluation) [9,10]. Verbal autopsy is a method of ascertaining the cause of death by seeking information on signs, symptoms and circumstances from a family member or care taker of the deceased [10]. 
Since 2001, about 800 non-medical graduates, full-time employees of the RGI with knowledge of local languages and trained in the RHIME method, visited the families to record the events preceding each death using three age-specific questionnaires (neonatal, child and adult), each including a narrative in the local language. The neonatal (0–28 days) and child death (29 days–14 years) questionnaires included a direct question: "Did s/he die from an injury or accident? If yes, what was the kind of injury or accident?" Response options were 1) Road traffic accident 2) Falls 3) Fall of objects (on to the person) 4) Burns 5) Drowning 6) Poisoning 7) Bite/sting 8) Natural disaster 9) Homicide/assault 10) Unknown. Place of death was recorded for all deaths, with response options of 1) Home 2) Health facility (such as government hospitals, private hospitals and registered practitioners) and 3) Others (including roadside, public area, on transportation and body of water). A random sample of about 5% of the areas was resurveyed independently, generally with consistent results. Details of the methods, validation results, and preliminary results for various diseases have been reported previously [10-14].

Cause of death assignment

The field reports, including the individual narratives, were sent randomly, based on the language, to at least two of 130 physicians specially trained in disease coding. The physicians assessed the underlying cause of death and assigned a three-character code from the International Classification of Diseases (ICD), 10th Revision [15]. Unintentional injury deaths were allocated ICD codes from Chapter XX for external causes of morbidity and mortality, comprising codes V01-X59, Y40-Y86, Y88 and Y89. In case of chapter-level disagreement between the two physicians, the final ICD code was assigned by a third senior physician.
In case of sub-chapter disagreements, the specific codes were adjudicated by two members of the research team.

Reports could not be collected for 12% of the identified deaths, mostly due to migration of the household or change of residence; this is unlikely to have led to any systematic misclassification in cause of death, as these missing deaths were distributed across all states. Moreover, the SRS definition of usual resident included those who travel away from home for periodic work [9], so deaths away from home were captured provided the whole household had not moved. A total of about 136,480 deaths were identified between January 1, 2001 and December 31, 2003. About 9% of the death reports could not be coded owing to field problems such as poor image quality of the narrative or insufficient information; hence a cause of death was identified for 122,828 deaths.
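The Chapter XX code ranges above lend themselves to a mechanical membership test. A minimal sketch follows; the function and its parsing are ours and purely illustrative (in the study itself, trained physicians assigned the codes):

```python
# Illustrative check of whether a three-character ICD-10 code falls in the
# unintentional-injury ranges used in this study: V01-X59, Y40-Y86, Y88, Y89.
# Helper names are ours, not part of the study's coding workflow.

def _key(code):
    # 'X59' -> ('X', 59); tuples compare letter first, then number,
    # matching the order of codes within ICD-10 Chapter XX (V < W < X < Y).
    return (code[0], int(code[1:3]))

def is_unintentional_injury(code):
    code = code.strip().upper()[:3]
    if len(code) != 3 or not code[0].isalpha() or not code[1:].isdigit():
        return False
    return (_key("V01") <= _key(code) <= _key("X59")
            or _key("Y40") <= _key(code) <= _key("Y86")
            or code in ("Y88", "Y89"))

print(is_unintentional_injury("W65"))  # accidental drowning -> True
print(is_unintentional_injury("Y14"))  # undetermined intent -> False
```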
National estimates for absolute number of deaths and mortality rates

We applied the proportion of each cause of death to the UN Population Division estimate of Indian deaths from all causes in 2005 (9.8 million; lower and upper limits 9.4 and 10.3 million, respectively) [13,16]. UN estimates were used for a more accurate calculation of deaths and mortality rates (using the Preston and Coale method) [17], because the SRS undercounts mortality by approximately 10% [18,19]. All major causes of death, such as malaria, HIV and child mortality [12,13], have been estimated for the year 2005 from the Million Death Study, making cause-specific mortality comparable for policy implications.

The proportion of cause-specific deaths in each age category was weighted to the SRS sampling fractions in the rural and urban parts of each state; however, unweighted proportions yielded nearly identical results [20].
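As a rough numerical illustration of this scaling step (using the raw, unweighted sample counts quoted in the results; the study applied weighted proportions, so its published total of about 648 000 differs slightly):

```python
# Rough sketch of scaling a sample cause-of-death proportion to the UN
# all-cause death envelope. Counts are the unweighted sample figures from
# the paper; the study's own estimate used proportions weighted by the
# SRS sampling fractions, so the published total (~648,000) differs slightly.

sample_cause_deaths = 8023      # unintentional injury deaths in the sample
sample_total_deaths = 122828    # all sample deaths with an assigned cause
un_deaths_2005 = 9.8e6          # UN Population Division all-cause estimate

national_estimate = sample_cause_deaths / sample_total_deaths * un_deaths_2005
print(round(national_estimate))  # ~640,000 before weighting
```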
Applying the 2001–03 data to 2005 deaths should not introduce major bias, as there was little change in the yearly distribution of cause of death in the present study (p = 0.736 for yearly variation in proportional mortality). Sub-national estimates were produced for six major regions (north, south, west, central, northeast and east) [9] by segregating the national UN totals according to the relative SRS death rates, as described earlier [21]. UN live-birth totals were used to calculate mortality rates for children younger than five years [21,22]. A 95% confidence interval for each cause-of-death proportion or mortality rate was calculated (using the Taylor linearization method) on the basis of the survey design and the observed sample deaths in the present study [23].

SRS enrolment is on a voluntary basis, and its confidentiality and consent procedures are defined as part of the Registration of Births and Deaths Act, 1969. Oral consent was obtained in the first SRS sample frame. Families are free to withdraw from the study, but compliance is close to 100%. The study poses no or minimal risks to enrolled subjects. All personal identifiers in the raw data are anonymised before analysis. The study has been approved by the review boards of the Post-Graduate Institute of Medical Education and Research, St. Michael's Hospital and the Indian Council of Medical Research.
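For intuition about the scale of the confidence intervals described above, a simple-random-sampling approximation can be sketched. The study's actual intervals used Taylor linearization under the cluster survey design, which this sketch ignores, so it only indicates the order of magnitude:

```python
import math

# Binomial (simple-random-sampling) approximation to the half-width of a
# 95% CI for a scaled national death total. The study's Taylor-linearized,
# design-based interval will differ somewhat; this is illustrative only.

c, n = 8023, 122828             # sample unintentional-injury and total coded deaths
envelope = 9.8e6                # UN all-cause death total for 2005
p = c / n
se = math.sqrt(p * (1 - p) / n)
half_width = 1.96 * se * envelope
print(round(half_width))        # ~13,500, similar to the +/-14,000 implied by the reported 634,000-662,000 interval
```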
Unintentional injuries accounted for 7% (8023/122,828) of all deaths (Table 1). A small number (155; 1.9%) of injury deaths were excluded from the analyses because intent (unintentional or intentional) could not be determined. Unintentional injury deaths constituted nearly 20% of total deaths at ages 5–29 years and 12% of total deaths at ages 30–44 years. Over 80% (6621) of unintentional injury deaths occurred in rural areas.
More males (5228) than females (2795) died from unintentional injury, and male deaths exceeded female deaths at all ages except beyond 70 years.

Table 1. Unintentional injury attributed deaths in the Sample Registration System 2001–2003 and estimated national rates for 2005 by age, sex and place of residence. *Mortality rate per 1000 live births.

The national mortality rate (MR) for unintentional injury was estimated to be 58 per 100 000 population (males 71, females 43), with higher rates in rural (60) than urban (50) areas. Mortality rates were highest at ages 70 years or above (410/100 000), with falls accounting for 63% of all injury deaths in this age group. In absolute terms, about 648 000 deaths from unintentional injuries occurred in India during 2005 (95% CI 634 000-662 000; Table 2).

Table 2. Number of unintentional injury deaths by type, in the Sample Registration System, 2001–2003 and estimated national totals for 2005. *Other injuries include transport accidents other than RTI, poisoning, exposure to electric current, radiation, extreme ambient air temperature and pressure, contact with hot substances, other accidental threats to breathing, overexertion, travel and privation, and adverse effects of medical and surgical interventions. Details of ICD codes are available from http://apps.who.int/classifications/icd10/browse/2010/en#/XX. †Deaths of undetermined intent (Y10-Y34) numbered 155 (2% of all injury deaths); of these, 15% were Y14 (poisoning by exposure to unspecified drugs, medicaments and biological substances, undetermined intent).

Road traffic injuries (RTI) were the leading cause of death, accounting for 185 000 deaths or 29% of all unintentional injury deaths (MR = 16.5), followed by falls (160 000; 25%; MR = 14.3) and drowning (73 000; 11%; MR = 6.4). Males had higher mortality rates for all sub-types of unintentional injuries except fire-related deaths.
Females aged 15–29 years (9900 deaths; MR = 5.8, 95% CI 4.9-6.4) had the highest mortality rates from fire. The ratios of male to female mortality rates were: RTI 4:1, drowning 2:1, fires 1:3, and falls 1:1.

Figure 1 shows the age distribution for the top three causes of unintentional injury deaths. RTI were the leading cause of death at ages 15–59 years (41% of all unintentional injury deaths in that age group), whereas deaths due to falls were more common in older people (38% of unintentional injury deaths at ages 60–69 years and 63% at ages 70 years and older). Age distributions of unintentional injury death proportions for the three leading causes among males and females are reported in Additional file 1: Figure S1 and Additional file 2: Figure S2, respectively.

Figure 1. Age-distribution of unintentional injury mortality for the three leading causes of injuries in India, 2005. The three leading causes of unintentional injuries are presented as a proportion of all unintentional deaths in the sample.

The pattern of unintentional injury mortality in rural and urban areas was similar; however, RTI constituted a higher proportion of unintentional injury deaths in urban areas (40%), while deaths due to mechanical forces (12%) and contact with venomous animals and plants (9%) were higher in rural areas. The proportion of injury deaths by type also varied across the six major regions of India (Figure 2), with the highest unintentional injury mortality rate in the South (62 per 100 000) and the lowest in the Northeast (47 per 100 000). RTI accounted for over 40% of unintentional deaths in the North, but only about 20% in the East.

Figure 2. Causes of unintentional deaths in India, by rural/urban area and six major regions, 2001–03. Proportions of deaths by type of unintentional injury are presented for all unintentional deaths in the study sample.
The number of all unintentional injury deaths and the unintentional injury mortality rate (MR) per 100 000 population are also reported.

Of all unintentional injury deaths, 43% occurred at home, 17% at health facilities, and 35% at other places (Figure 3); the place of death could not be determined for 5%. About 63% of RTI deaths were recorded as occurring at other places, most often at the site of injury or en route to a health facility. Most deaths due to falls (72%) and forces of nature (67%) occurred at home. Fires were the only injury type with a high proportion (44%) of deaths in a health facility.

Figure 3. Place of death by type of unintentional injury in India. Proportions by place of death (home, health facility or others) are presented for all unintentional deaths in the study sample. Others include roadside, public area, on transportation and body of water. Ranking is within all unintentional injury deaths and is based on the respective mortality rates for each type of injury.

A nationally representative survey of deaths indicates that over 0.6 million persons died from unintentional injury in India in 2005. This is twice the direct estimate from the NCRB (0.3 million) but lower than the WHO indirect estimate (0.8 million; Table 3). The underestimation in NCRB reports is likely due to reliance on police registration, which may suffer from under-reporting by victims and families for certain types of injuries [4,5].
The WHO Global Burden of Disease indirect estimates rely heavily on the Medically Certified Cause of Death (MCCD) data (30% weight) for urban deaths and the Survey of Causes of Death (SCD) (70% weight) for rural deaths, both of which depend on utilization of health facilities.

Table 3. Comparison of national injury death rates per 100 000 population from the present study and other sources. *Poisoning deaths are defined differently by the NCRB and include deaths from contact with venomous plants and animals as well as poisoning by other agents such as chemicals and liquor. †Varying codes are used by the three sources. The present study's "Others" codes include V90-V98, W75-W84, X10-X19, X50-X59, W85-W99, X40-X49, Y40-Y86, Y88. The WHO "Others" codes include V90-V98, W20-W64, W75-W99, X10-X39, X50-X59, Y40-Y86, Y88, Y89. Codes for "Others" by the NCRB were not defined, but excluded transport accidents, drowning, explosions, falls, fires and poisoning.

Our study, a household survey using the verbal autopsy method, is less vulnerable to the biases that affected estimates of cause-specific mortality in earlier studies (Figures 1, 2 and 3). Indeed, we find notable differences in the age and sex composition of unintentional injury deaths between our study and the earlier MCCD, SCD and NCRB data, as well as the indirect estimates from the GBD (Additional file 3: Table S1). The MCCD sample of selected urban hospitals is not representative of deaths in urban areas and suffers from problems of physician-coding quality and completeness [8-10]. There are also expected differences in hospital presentation for different injuries [24]. Compared to our study, the MCCD reports a higher proportion of deaths from RTI, fires and poisoning, but lower proportions from falls and drowning (Additional file 4: Table S2).

Similarly, the SCD was based on a sample of villages with primary health care centers in selected states, and is not representative of the rural population [8-10].
It too has limitations, including incomplete coverage, poor coding of causes of death and a higher proportion of ill-defined deaths [6,8]. Compared to our study, mortality proportions reported by the SCD were higher for drowning and fires but lower for falls.

Mortality from RTI, particularly among males, was highest in the economically productive age group of 15–59 years, which constitutes 58% of India's population [18]. Deaths in this age group are likely to cause substantial household deprivation through the loss of a key wage earner [25]. Our mortality rates are consistent with the results of several local studies in India, including a marked excess among young and middle-aged males [26].

Our estimates of RTI deaths are twice those reported by the NCRB. While NCRB-reported RTI deaths in urban settings might be only modestly under-reported [3,5], no comparative data exist for rural areas, where the majority (77%, 1801/2339) of the RTI deaths in our study occurred. The NCRB reports also appear to overestimate the proportion of RTI deaths among occupants of heavy vehicles and to underestimate those among pedestrians, indicating differential reporting by type of road user [2]. Similar discrepancies between police data, vital registration data and verbal-autopsy-based nationally representative studies have been reported in other Asian countries such as China and Thailand [27,28].

Our estimates of fire-related deaths in India are one-fifth of the previous indirect GBD estimates for India [1]. The MCCD and SCD do not classify fire-related deaths by intent [6,7], making comparison with the present study difficult. Reports by family members might well under-report fire-related deaths that were homicides rather than unintentional, particularly in the Indian context of dowry deaths among younger adult women [2,4].
However, the proportional mortality distribution of fire-related deaths by age and sex in the present study is not markedly different from previous data sources (Additional file 3: Table S1), suggesting this bias may be modest. On the other hand, the facility-based MCCD and SCD estimates might over-represent fire-related deaths. Indeed, we noted a much higher proportion of fire-related deaths in a health facility (44%) than of all unintentional injury deaths in a health facility (17%). Further studies that compile multiple sources of mortality, hospital and forensic data are required to quantify reliably the age- and sex-specific patterns of fire-related deaths. Nonetheless, the observed high proportion of fire-related deaths among young adult women remains of significant concern.

Deaths from falls and drowning are less likely to be medically certified and would therefore be under-represented in national estimates based on hospital or medical-facility data. As noted in earlier studies in the South Asia region, including Bangladesh [29,30], drowning was the leading cause of unintentional death at ages 0–4 years, causing 22 000 deaths every year, with higher rates in rural than in urban areas. Drowning deaths in children younger than 5 years are higher in the eastern and northeastern regions of India, the delta areas of the major rivers Ganges and Brahmaputra [21].

Mortality rates from falls in all age groups were 1.7-fold higher than those estimated indirectly [1], but consistent with recent local studies from India [5,31-33]. While pediatric falls and related traumatic brain injuries have received some study in the South Asia region [29], there is little literature on falls in older people [33]. Our study reports three times more deaths at ages 60 years and older than the MCCD [7].
With a rising aged population, falls are a significant emerging public health issue in India.

Limitations

Our study had some limitations. Verbal autopsy methods are known to misclassify some causes of death among neonates and at older ages of 70 years and above [10,34]. Earlier comparisons of verbal autopsy with urban hospital records in India indicated a sensitivity of 85% and a specificity of over 95% for injuries [35]. We caution, however, that hospital-based studies are not ideal comparators, because a large majority of deaths in India occur without medical consultation [36] and because of the differences observed in the age-sex composition of injury deaths recorded in health facilities versus those recorded at home (Additional file 3: Table S1). Misclassification of injuries overall has been low in verbal autopsy validation studies elsewhere, with the exception of falls, for which some misclassification with cerebrovascular conditions has been reported [37]. We estimated injury deaths for 2005 using the proportionate injury mortality recorded during 2001–03. Because we observed no change in proportionate mortality from 2001 to 2003, we assumed it remained stable over the following two years; however, that may not be the case.

The disease burden in India is undergoing a transition, with the burden of both chronic conditions and injury rising rapidly. Yet injury is a neglected epidemic in India, and few resources are dedicated to the prevention or treatment of injuries. Our results provide reliable national and regional estimates of injury mortality that can inform the allocation of resources and the development of evidence-based national and state policies for injury control. They suggest that an upward revision of falls mortality, and perhaps a downward revision of fire-related deaths, is needed.
Future research should aim to establish effective injury surveillance systems, epidemiological assessment of all outcomes of injuries, advocacy for prevention and treatment, and appraisal of existing effective interventions for injury prevention and trauma care.
Our results suggest an upward revision is needed of falls mortality and perhaps a downward revision of fire-related deaths.
Conclusions: These direct estimates of unintentional injury deaths in India (0.6 million) are lower than WHO indirect estimates (0.8 million), but double the estimates which rely on police reports (0.3 million). Importantly, they revise upward the mortality due to falls, particularly in the elderly, and revise downward the mortality due to fires. Road traffic injuries, falls and drowning are the leading causes of unintentional injury deaths in India.
Abbreviations: GBD, Global Burden of Disease; ICD, International Classification of Disease; MCCD, Medically Certified Cause of Death; NCRB, National Crime and Records Bureau; RGI, Registrar General of India; SRS, Sample Registration System; SCD, Survey of Causes of Death; WHO, World Health Organisation.
Competing interests: The authors declare they have no competing interests.
Authors' contributions: PJ and the academic partners in India (MDS Collaborators) planned the Million Death Study in close collaboration with the RGI. JJ, WS and PJ conducted the analyses and drafted the paper. PJ is the chief investigator and the guarantor for the study. All authors participated in interpreting the data and writing the manuscript.
All authors read and approved the final manuscript.
Funding: This work was supported by the Fogarty International Centre of the US National Institutes of Health [grant R01 TW05991–01]; the Canadian Institutes of Health Research [CIHR; IEG-53506]; the International Development Research Centre [grant 102172]; and the Li Ka Shing Knowledge Institute at St. Michael’s Hospital, University of Toronto (CGHR support). PJ is supported by the Canada Research Chair program. JJ is supported by the Endeavour Research Fellowship Programme. The funding sources had no role in study design or conduct, including data collection, analysis, and interpretation. PJ had full access to all data and final responsibility for the decision to submit for publication on behalf of all authors.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/487/prepub
Additional files. Figure S1. Age-distribution of unintentional injury mortality for the three leading causes of injuries among males in the study population, 2001–03. The three leading causes of unintentional injuries are presented as a proportion of all unintentional deaths among males in the sample. Figure S2. Age-distribution of unintentional injury mortality for the three leading causes of injuries among females in the study population, 2001–03. The three leading causes of unintentional injuries are presented as a proportion of all unintentional deaths among females in the sample. Table S1. Proportions of unintentional injury and fire-related deaths by age and sex group from mortality surveys, indirect estimates and the present study. Table S2. Comparison of injury proportions (%) to total deaths at all ages in rural and urban areas, from the present study and other data sources.
Keywords: Unintentional injuries, Mortality, Verbal autopsy, India.
Background: Indirect estimates by the World Health Organization (WHO) and the Global Burden of Diseases Study (GBD) suggest that unintentional injuries account for 3.9 million deaths worldwide [1], of which about 90% occur in low- and middle-income countries. The majority of these deaths are attributable to road traffic injuries, falls, drowning, poisoning and burns [1]. In 2004, WHO estimated about 0.8 million deaths in India were due to unintentional injuries [1]. Direct Indian estimates of unintentional injury deaths relying on annual National Crime Records Bureau (NCRB) reports of injury deaths from police records showed only 0.3 million injury deaths in 2005 [2], but police records are subject to under-reporting and misclassification [3-5]. Other sources of mortality data, from selected health centres in rural areas [6] and selected urban hospitals [7], are not representative of the population of India and have other methodological limitations [8]. The objective of this paper is to estimate total unintentional injury mortality in India and its variation by gender, rural/urban residence and region, using results from a nationally representative survey of the causes of deaths.
Methods: Study setting and data collection. The Registrar General of India (RGI) randomly selected 6671 small areas from approximately one million small areas defined in the 1991 census for its Sample Registration System (SRS) [9]. In 1993, household characteristics of the SRS areas, each with about 150 houses and 1000 people, were documented. The SRS sample frame covered 6.3 million people in all 28 states and seven union territories of India. The SRS sampling frame is based on the results of the census of India, which is conducted every ten years. The selected households are continuously monitored for vital events by two independent surveyors.
The first is a part-time enumerator (commonly a resident of the village/area or a local school teacher familiar with the area/village) who visits the households every month. The second is a full-time (non-medical) surveyor of the Registrar General of India who visits the households every 6 months. Another staff member from the office of the Registrar General of India reconciles the vital events reported by the part-time enumerator and the full-time surveyor, arriving at a final list of births and deaths for each household at the completion of each half-yearly survey. In the last decade, the RGI has introduced an advanced form of verbal autopsy called “RHIME” (Routine, Reliable, Representative and Re-sampled Household Investigation of Mortality with Medical Evaluation) [9,10]. Verbal autopsy is a method of ascertaining the cause of death by seeking information on signs, symptoms and circumstances from a family member or caretaker of the deceased [10]. Since 2001, about 800 non-medical graduates, full-time employees of the RGI with knowledge of local languages and trained to implement the RHIME method, visited the families to record events preceding each death using three age-specific questionnaires (neonatal, child and adult), including a narrative in the local language. The neonatal (0–28 days) and child death (29 days–14 years) questionnaires included a direct question, “Did s/he die from an injury or accident? If yes, what was the kind of injury or accident?” Response options included 1) Road traffic accident 2) Falls 3) Fall of objects (on to the person) 4) Burns 5) Drowning 6) Poisoning 7) Bite/sting 8) Natural disaster 9) Homicide/assault 10) Unknown. Place of death was recorded for all deaths, with response options of 1) Home 2) Health facility (such as government hospitals, private hospitals and registered practitioners) and 3) Others (including road side, public area, on transportation and body of water).
A random sample of about 5% of the areas was resurveyed independently, generally with consistent results. Details of the methods, validation results, and preliminary results for various diseases have been reported previously [10-14].
Cause of death assignment. The field reports, including the individual narratives, were sent randomly, based on the language, to at least two of 130 physicians who were specially trained in disease coding. The physicians assessed the underlying cause of death and assigned a three-character code from the International Classification of Disease (ICD), 10th Revision [15]. Unintentional injury deaths were allocated ICD codes from chapter XX for external causes of morbidity and mortality, including V01-X59, Y40-Y86, Y88, and Y89 codes. In case of chapter-level disagreement between the two physicians, the final ICD code was assigned by a third senior physician.
In case of sub-chapter disagreements, specific codes were adjudicated by two members of the research team. Reports could not be collected for 12% of the identified deaths, mostly due to migration of the household or change of residence; this is unlikely to have led to any systematic misclassification in cause of death as these missing deaths were distributed across all states. Moreover, the SRS definition of usual resident included those who travel away from home for periodic work [9], so deaths away from home were captured provided the whole household had not moved. A total of about 136,480 deaths were identified between January 1, 2001 and December 31, 2003. About 9% of all the death reports could not be coded due to field problems such as poor image quality of the narrative or insufficient information; hence cause of death was identified for 122,828 deaths.
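The ICD-10 ranges listed above (V01-X59, Y40-Y86, Y88 and Y89) amount to a simple membership test on a code's letter and two-digit number. A minimal sketch (the helper name is ours, not part of the study's coding pipeline):

```python
def is_unintentional_injury(code: str) -> bool:
    """Check whether a 3-character ICD-10 code falls in the unintentional
    injury ranges used here: V01-X59, Y40-Y86, Y88 and Y89."""
    letter, num = code[0].upper(), int(code[1:3])
    if letter == "V":
        return 1 <= num <= 99        # V01-V99 (transport accidents)
    if letter == "W":
        return True                  # W00-W99 (falls, drowning, mechanical forces)
    if letter == "X":
        return num <= 59             # X00-X59; X60 and above is intentional self-harm
    if letter == "Y":
        return 40 <= num <= 86 or num in (88, 89)
    return False

print(is_unintentional_injury("W19"))  # True (unspecified fall)
print(is_unintentional_injury("Y14"))  # False (undetermined intent)
```

Codes of undetermined intent (Y10-Y34) correctly fall outside these ranges; the study counts them separately rather than as unintentional injuries.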
National estimates for absolute number of deaths and mortality rates. We applied the proportion of each cause of death to the UN Population Division estimates of Indian deaths from all causes in 2005 (9.8 million; upper and lower limits 9.4 and 10.3 million, respectively) [13,16]. UN estimates were used for a more accurate calculation of deaths and mortality rates (using the Preston and Coale method) [17] because the SRS undercounts mortality by approximately 10% [18,19]. All major causes of death, such as malaria, HIV and child mortality [12,13], have been estimated for the year 2005 from the Million Death Study, making cause-specific mortality comparable for policy implications. The proportion of cause-specific deaths in each age category was weighted to the SRS sampling fractions in the rural and urban parts of each state. However, unweighted proportions yielded nearly identical results [20]. Application of the data from 2001–03 to 2005 deaths should not introduce major bias as there was little change in the yearly distribution of cause of death in the present study (p = 0.736 for yearly variation in proportional mortality). Sub-national estimates were produced for six major regions (north, south, west, central, northeast, and east) [9] by segregating the national UN totals by the relative SRS death rates, as described earlier [21]. Live birth totals from the UN were used to calculate mortality rates for children younger than five years [21,22].
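The core of the estimation step just described is simple arithmetic: the sample cause-of-death proportion is applied to the UN envelope of total deaths. A rough, unweighted sketch using figures quoted in this paper (the ~1.1 billion population denominator is our assumption, back-calculated from the reported rates, not a number taken from the study):

```python
# Unweighted sketch of the national estimation step.
sample_injury_deaths = 8023        # unintentional injury deaths in the SRS sample, 2001-03
sample_total_deaths = 122828       # all deaths with an assigned cause in the sample
un_total_deaths_2005 = 9.8e6       # UN Population Division envelope for India, 2005

proportion = sample_injury_deaths / sample_total_deaths
estimated_deaths = proportion * un_total_deaths_2005
print(f"estimated deaths: {estimated_deaths:,.0f}")
# ~640,000 unweighted; state-level weighting yields the 648,000 reported in the Results

population_2005 = 1.1e9            # assumed denominator for the rate
mortality_rate = estimated_deaths / population_2005 * 1e5
print(f"mortality rate: {mortality_rate:.0f} per 100,000")   # ~58, as reported
```

The small gap between the unweighted figure and the published total illustrates why the study weights proportions to the SRS sampling fractions before scaling.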
A 95% confidence interval for each cause-of-death proportion or mortality rate was calculated (using the Taylor linearization method) on the basis of the survey design and the observed sample deaths in the present study [23]. SRS enrolment is on a voluntary basis, and its confidentiality and consent procedures are defined as part of the Registration of Births and Deaths Act, 1969. Oral consent was obtained in the first SRS sample frame. Families are free to withdraw from the study, but compliance is close to 100%. The study poses no or minimal risks to enrolled subjects. All personal identifiers present in the raw data are anonymised before analysis. The study has been approved by the review boards of the Post-Graduate Institute of Medical Education and Research, St. Michael’s Hospital and the Indian Council of Medical Research.
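For intuition on the interval width: even ignoring the cluster design, a plain binomial approximation on the injury proportion, scaled to the UN envelope, lands near the 95% CI of 634 000-662 000 reported in the Results. A sketch (the study's Taylor linearization additionally accounts for clustering and weighting, which this deliberately does not):

```python
import math

# Normal-approximation 95% CI for the injury proportion, scaled to deaths.
deaths, n = 8023, 122828
p = deaths / n
se = math.sqrt(p * (1 - p) / n)    # standard error under simple random sampling
lo, hi = p - 1.96 * se, p + 1.96 * se

un_total = 9.8e6                   # UN envelope of total deaths, 2005
print(f"{lo * un_total:,.0f} - {hi * un_total:,.0f}")   # roughly 627,000 - 654,000
```

With a design effect above 1, the Taylor linearization interval would be somewhat wider than this simple-random-sampling approximation.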
Results: Unintentional injuries accounted for 7% (8023/122,828) of all deaths (Table 1). A small number (155; 1.9%) of injury deaths were excluded from analyses as intent (unintentional or intentional) could not be determined. Unintentional injury deaths constituted nearly 20% of total deaths at ages 5–29 years and 12% of total deaths at ages 30–44 years. Over 80% (6621) of unintentional injury deaths occurred in rural areas. More males (5228) than females (2795) died from unintentional injury, and male deaths exceeded female deaths at all ages except beyond 70 years.
Table 1. Unintentional injury attributed deaths in the Sample Registration System 2001–2003 and estimated national rates for 2005, by age, sex and place of residence. *Mortality rate per 1000 live births.
The national mortality rate (MR) for unintentional injury per 100 000 population was estimated to be 58 (males 71, females 43), with higher rates in rural (60) than urban areas (50). The mortality rates were highest at ages 70 years or higher (410/100 000), with falls accounting for 63% of all injury deaths in this age group. In absolute terms, during 2005, about 648 000 deaths from unintentional injuries occurred in India (95% CI 634 000-662 000; Table 2).
Table 2. Number of unintentional injury deaths by type, in the Sample Registration System, 2001–2003 and estimated national totals for 2005. *Other injuries include transport accidents other than RTI, poisoning, exposure to electric current, radiation, extreme ambient air temperature and pressure, contact with hot substances, other accidental threats to breathing, overexertion, travel and privation, and adverse effects of medical and surgical interventions. Details of ICD codes are available from http://apps.who.int/classifications/icd10/browse/2010/en#/XX.
† Deaths due to undetermined intent (Y10-Y34) numbered 155 (2% of all injury deaths); 15% of these were Y14 (poisoning by exposure to unspecified drugs, medicaments and biological substances, undetermined intent).
Road traffic injuries (RTI) were the leading cause of death, accounting for 185 000 deaths or 29% of all unintentional injury deaths (MR = 16.5), followed by falls (160 000; 25%; MR = 14.3) and drowning (73 000; 11%; MR = 6.4). Males had higher mortality rates for all sub-types of unintentional injuries except for fire-related deaths. Females aged 15–29 years (9,900; MR = 5.8, 95% CI 4.9-6.4) had the highest mortality rates from fire. The ratios of male to female mortality rates were as follows: RTI, 4:1; drowning, 2:1; fires, 1:3; and falls, 1:1. Figure 1 provides the age distribution for the top three causes of unintentional injury deaths. RTI were the leading cause of death at ages 15–59 years (41% of all unintentional injury deaths in the age group), whereas deaths due to falls were more common in older people (38% of unintentional injury deaths at ages 60–69 years and 63% at ages 70 years and older). Age-distributions of unintentional injury death proportions for the three leading causes among males and females are reported in Additional file 1: Figure S1 and Additional file 2: Figure S2, respectively.
Figure 1. Age-distribution of unintentional injury mortality for the three leading causes of injuries in India, 2005. The three leading causes of unintentional injuries are presented as a proportion of all unintentional deaths in the sample.
The pattern of unintentional mortality in rural and urban areas was similar; however, RTI constituted a higher proportion of unintentional injury deaths in urban areas (40%), while deaths due to mechanical forces (12%) and contact with venomous animals and plants (9%) were higher in rural areas. The proportion of injury deaths by type also varied across the six major regions of India (Figure 2).
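The cause-specific rates and death totals above are mutually consistent. A quick cross-check, using the population denominator implied by the overall figures (our back-calculation, not a number taken from the paper):

```python
# Back out the implied population from the overall figures, then recompute rates.
population = 648_000 / 58 * 100_000      # ~1.12e9, implied by 648,000 deaths at MR 58
causes = {"RTI": 185_000, "falls": 160_000, "drowning": 73_000}
for cause, deaths in causes.items():
    # Values come out close to the reported 16.5, 14.3 and 6.4; small
    # differences reflect the study's age- and state-specific weighting.
    print(f"{cause}: {deaths / population * 1e5:.1f} per 100,000")
```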
Regional variations were also observed, with the highest unintentional injury mortality rate in South India (62 per 100 000) and the lowest in the Northeast (47 per 100 000). RTI accounted for over 40% of unintentional deaths in the North, but only about 20% in the East.
Figure 2. Causes of unintentional deaths in India, for rural/urban area and six major regions, 2001–03. Proportions of deaths by type of unintentional injury for rural/urban area and six major regions are presented for all unintentional deaths in the study sample. The number of all unintentional injury deaths and the unintentional injury mortality rate (MR)/100 000 population are reported.
Of all unintentional injury deaths, 43% occurred at home, 17% at health facilities, and 35% at other places (Figure 3). Place of death could not be determined for 5% of the deaths. About 63% of RTI deaths were recorded as occurring at other places, most often at the site of injury or en route to a health facility. Most deaths due to falls (72%) and forces of nature (67%) occurred at home. Fires were the only injury with a high proportion (44%) of deaths in a health facility.
Figure 3. Place of death by type of unintentional injury in India. Proportions by place of death (home, health facility or others) are presented for all unintentional deaths in the study sample. Others include places such as road side, public area, on transportation and body of water. Ranking is within all unintentional injury deaths and is based on the respective mortality rates for each type of injury.
Discussion: A nationally representative survey of deaths indicates that over 0.6 million persons died due to unintentional injury in India in 2005. This is twice the direct estimate of deaths from the NCRB (0.3 million) but lower than the WHO indirect estimate (0.8 million; Table 3). The underestimation in NCRB reports is likely due to reliance on police registration, which may suffer from under-reporting by victims and families for certain types of injuries [4,5].
The WHO-Global Burden of Disease indirect estimates rely heavily on the Medically Certified Cause of Death (MCCD) (30% weight) for urban deaths and the Survey of Causes of Death (SCD) (70% weight) for rural deaths, both of which rely on utilization of health facilities.
Table 3. Comparison of national injury death rates, per 100 000 population, from the present study and other sources. *Poisoning deaths are defined differently by the NCRB and include deaths from contact with venomous plants and animals and poisoning due to other agents such as chemicals and liquor. †Varying code sets are used across the three sources. The present study's “Others” codes include V90-V98, W75-W84, X10-X19, X50-X59, W85-W99, X40-X49, Y40-Y86, Y88. The WHO “Others” codes include V90-V98, W20-W64, W75-W99, X10-X39, X50-X59, Y40-Y86, Y88, Y89. Codes for “Others” by the NCRB were not defined but excluded transport accidents, drowning, explosions, falls, fires and poisoning.
Our study, a household survey using the verbal autopsy method, is less vulnerable to the biases which affected the estimates of cause-specific mortality in earlier studies (Figures 1, 2 and 3). Indeed, we find notable differences in the age and sex composition of unintentional injury deaths between our study and the earlier MCCD, SCD and NCRB data, as well as the indirect estimates from the GBD (Additional file 3: Table S1). The MCCD sample of selected urban hospitals is not representative of deaths in urban areas and suffers from physician-coding quality and completeness problems [8-10]. There are expected differences in presentation at hospital for different injuries [24]. Compared to our study, the MCCD reports a higher proportion of deaths from RTI, fires and poisoning, but lower proportions of deaths from falls and drowning (Additional file 4: Table S2). Similarly, the SCD was based on a sample of villages with primary health care centers from selected states, and is not representative of the rural population [8-10].
It too has limitations, including incomplete coverage, poor coding of causes of death and a higher proportion of ill-defined deaths [6,8]. Compared to our study, mortality proportions reported by the SCD were higher for drowning and fires but lower for falls. Mortality from RTI, particularly among males, was highest in the economically productive age group of 15–59 years, which constitutes 58% of India's population [18]. Deaths in this age group are likely to cause substantial household deprivation from the loss of a key wage earner [25]. Our mortality rates are consistent with the results of several local studies in India, including a marked excess in young and middle-aged males [26]. Our study estimates for RTI deaths are twice those reported by the NCRB. While NCRB-reported RTI deaths in urban settings might be only modestly under-reported [3,5], no comparative data exist for rural areas, where the majority (77%, 1801/2339) of the RTI deaths occurred in our study. The NCRB reports also appear to overestimate the proportion of RTI deaths among heavy-vehicle occupants and underestimate those among pedestrians, showing differential reporting by type of road user [2]. Similar discrepancies between police data, vital registration data and verbal autopsy based nationally representative studies have been reported in other Asian countries such as China and Thailand [27,28]. Our estimates for fire-related deaths in India are one-fifth of the previous indirect estimates for India from the GBD [1]. The MCCD and SCD do not classify fire-related deaths by intent [6,7], making comparison to the present study difficult. Reports of deaths by family members might well under-report fire-related deaths that were homicides rather than unintentional, particularly in the Indian context of dowry deaths among younger adult women [2,4]. 
However, the proportional mortality distribution for fire-related deaths by age and sex in the present study is not markedly different from previous data sources (Additional file 3: Table S1), suggesting this bias may be modest. On the other hand, the MCCD and SCD facility-based estimates might over-represent fire-related deaths. Indeed, we noted a much higher proportion of fire-related deaths in a health facility (44%) as compared to all unintentional injury deaths in a health facility (17%). Further studies that compile multiple sources of mortality, hospital and forensic data are required to reliably quantify the age- and gender-specific patterns of fire-related deaths. Yet the observed high proportion of fire-related deaths among young adult women remains of significant concern. Fall and drowning deaths are less likely to be medically certified and would therefore be under-represented in national estimates based on hospital or medical facility data, leading to an underestimation of deaths due to drowning and falls. As noted in earlier studies in the South Asia region, including Bangladesh [29,30], drowning was the leading cause of unintentional death at ages 0–4 years, causing 22 000 deaths every year, with higher rates in rural than in urban areas. Drowning deaths in children younger than 5 years are higher in the eastern and north-eastern regions of India, the delta areas of the major rivers Ganges and Brahmaputra [21]. Mortality rates due to falls in all age groups were found to be 1.7-fold higher than those estimated indirectly [1], but consistent with recent local studies from India [5,31-33]. While pediatric falls and related traumatic brain injuries have been studied somewhat in the South Asia region [29], there is little literature on falls in older people [33]. Our study reports three times more deaths at ages 60 years and beyond than the MCCD [7]. 
With a rising aged population, falls are a significant emerging public health issue in India. Limitations: Our study had some limitations. Verbal autopsy methods are known to misclassify some causes of death among neonates and older age groups of 70 years and above [10,34]. Earlier comparisons of verbal autopsy to urban hospital records in India indicated a sensitivity of 85% and specificity of over 95% for injuries [35]. We caution, however, that hospital-based studies are not ideal comparators, as a large majority of deaths in India occur without medical consultation [36], and also because of the differences observed in the age-sex composition of injury deaths recorded in health facilities versus those recorded at home (Additional file 3: Table S1). Misclassification for injuries overall has been low in verbal autopsy validation studies elsewhere, with the exception of falls, where some misclassification with cerebro-vascular conditions was reported [37]. We estimated injury deaths for the year 2005 using proportionate injury mortality recorded during 2001–03. We did not observe any change in proportionate mortality from 2001 to 2003 and hence assumed that it would not have changed over the next two years either; however, that may not be the case. The disease burden in India is undergoing a transition, with the burden of both chronic conditions and injury rapidly rising. However, injury is a neglected epidemic in India, and few resources are dedicated to the prevention or treatment of injuries. Our results provide reliable national and regional estimates of injury mortality which can inform the allocation of resources and the development of evidence-based national and state policy for injury control. Our results suggest that falls mortality needs upward revision and that fire-related deaths perhaps need downward revision. 
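The extrapolation step described in the limitations (applying 2001–03 proportionate cause-specific mortality to a 2005 national death total) can be sketched as follows. Both numeric inputs are illustrative assumptions chosen to be consistent with the abstract's figures (7% of deaths; ~648 000 unintentional injury deaths), not the study's published inputs:

```python
# Sketch of the study's extrapolation: cause-specific deaths for 2005 are
# obtained by applying the proportionate mortality observed in the 2001-03
# survey to an external (UN) estimate of total deaths in 2005.
# Both inputs below are illustrative assumptions, not the published values.

def estimate_cause_deaths(cause_fraction: float, total_deaths: float) -> float:
    """Scale a survey-derived cause fraction to a national death envelope."""
    return cause_fraction * total_deaths

cause_fraction_2001_03 = 0.07   # ~7% of survey deaths were unintentional injury
total_deaths_2005 = 9.26e6      # assumed UN death envelope for India, 2005

print(f"{estimate_cause_deaths(cause_fraction_2001_03, total_deaths_2005):,.0f}")
```

With these assumed inputs the product is close to the reported 648 000; the assumption being flagged in the limitations is that the cause fraction stayed constant from 2003 to 2005.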
Future research should aim to formulate effective injury surveillance systems, epidemiological assessment of all outcomes of injuries, advocacy for prevention and treatment, and appraisal of existing effective interventions for injury prevention and trauma care. 
Conclusions: These direct estimates of unintentional injury deaths in India (0.6 million) are lower than WHO indirect estimates (0.8 million), but double the estimates which rely on police reports (0.3 million). Importantly, they revise upward the mortality due to falls, particularly in the elderly, and revise downward the mortality due to fires. Road traffic injuries, falls and drowning are the leading causes of unintentional injury deaths in India. Abbreviations: GBD, Global Burden of Disease; ICD, International Classification of Disease; MCCD, Medically Certified Cause of Death; NCRB, National Crime Records Bureau; RGI, Registrar General of India; SRS, Sample Registration System; SCD, Survey of Causes of Death; WHO, World Health Organisation. Competing interests: The authors declare they have no competing interests. Authors' contributions: PJ and the academic partners in India (MDS Collaborators) planned the Million Death Study in close collaboration with the RGI. JJ, WS and PJ conducted the analyses and drafted the paper. PJ is the chief investigator and the guarantor for the study. All authors participated in interpreting the data and writing the manuscript. All authors read and approved the final manuscript. Financial disclosures: This work was supported by the Fogarty International Centre of the US National Institutes of Health [grant R01 TW05991–01]; Canadian Institutes of Health Research [CIHR; IEG-53506]; International Development Research Centre [Grant 102172]; and the Li Ka Shing Knowledge Institute at St. Michael's Hospital, University of Toronto (CGHR support). 
PJ is supported by the Canada Research Chair program. JJ is supported by the Endeavour Research Fellowship Programme. The funding sources had no role in study design or conduct, including data collection, analysis, and interpretation. PJ had full access to all data and final responsibility for the decision to submit for publication on behalf of all authors. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/487/prepub Supplementary Material: Figure S1. Age-distribution of unintentional injury mortality for the three leading causes of injuries among males in the study population, 2001–03. The three leading causes of unintentional injuries are presented as a proportion of all unintentional deaths among males in the sample. Figure S2. Age-distribution of unintentional injury mortality for the three leading causes of injuries among females in the study population, 2001–03. The three leading causes of unintentional injuries are presented as a proportion of all unintentional deaths among females in the sample. Table S1. Proportions of unintentional injury and fire-related deaths by age and sex group from mortality surveys, indirect estimates and the present study. Table S2. Comparison of injury proportions (%) to total deaths at all ages in rural and urban areas, from the present study and other data sources.
Background: Unintentional injuries are an important cause of death in India. However, no reliable nationally representative estimates of unintentional injury deaths are available. Thus, we examined unintentional injury deaths in a nationally representative mortality survey. Methods: Trained field staff interviewed a living relative of those who had died during 2001-03. The verbal autopsy reports were sent to two of the 130 trained physicians, who independently assigned an ICD-10 code to each death. Discrepancies were resolved through reconciliation and adjudication. Proportionate cause specific mortality was used to produce national unintentional injury mortality estimates based on United Nations population and death estimates. Results: In 2005, unintentional injury caused 648,000 deaths (7% of all deaths; 58/100,000 population). Unintentional injury mortality rates were higher among males than females, and in rural versus urban areas. Road traffic injuries (185,000 deaths; 29% of all unintentional injury deaths), falls (160,000 deaths, 25%) and drowning (73,000 deaths, 11%) were the three leading causes of unintentional injury mortality, with fire-related injury causing 5% of these deaths. The highest unintentional mortality rates were in those aged 70 years or older (410/100,000). Conclusions: These direct estimates of unintentional injury deaths in India (0.6 million) are lower than WHO indirect estimates (0.8 million), but double the estimates which rely on police reports (0.3 million). Importantly, they revise upward the mortality due to falls, particularly in the elderly, and revise downward mortality due to fires. Ongoing monitoring of injury mortality will enable development of evidence based injury prevention programs.
Background: Indirect estimates by the World Health Organization (WHO) and the Global Burden of Disease Study (GBD) suggest that unintentional injuries account for 3.9 million deaths worldwide [1], of which about 90% occur in low- and middle-income countries. The majority of these deaths are attributable to road traffic injuries, falls, drowning, poisoning and burns [1]. In 2004, the WHO estimated that about 0.8 million deaths in India were due to unintentional injuries [1]. Direct Indian estimates of unintentional injury deaths, relying on annual National Crime Records Bureau (NCRB) reports of injury deaths from police records, showed only 0.3 million injury deaths in 2005 [2], but police records are subject to under-reporting and misclassification [3-5]. Other sources of mortality data, from selected health centres in rural areas [6] and selected urban hospitals [7], are not representative of the population of India and have other methodological limitations [8]. The objective of this paper is to estimate total unintentional injury mortality in India and its variation by gender, rural/urban residence and region using results from a nationally representative survey of the causes of deaths. Conclusions: These direct estimates of unintentional injury deaths in India (0.6 million) are lower than WHO indirect estimates (0.8 million), but double the estimates which rely on police reports (0.3 million). Importantly, they revise upward the mortality due to falls, particularly in the elderly, and revise downward the mortality due to fires. Road traffic injuries, falls and drowning are the leading causes of unintentional injury deaths in India.
Background: Unintentional injuries are an important cause of death in India. However, no reliable nationally representative estimates of unintentional injury deaths are available. Thus, we examined unintentional injury deaths in a nationally representative mortality survey. Methods: Trained field staff interviewed a living relative of those who had died during 2001-03. The verbal autopsy reports were sent to two of the 130 trained physicians, who independently assigned an ICD-10 code to each death. Discrepancies were resolved through reconciliation and adjudication. Proportionate cause specific mortality was used to produce national unintentional injury mortality estimates based on United Nations population and death estimates. Results: In 2005, unintentional injury caused 648,000 deaths (7% of all deaths; 58/100,000 population). Unintentional injury mortality rates were higher among males than females, and in rural versus urban areas. Road traffic injuries (185,000 deaths; 29% of all unintentional injury deaths), falls (160,000 deaths, 25%) and drowning (73,000 deaths, 11%) were the three leading causes of unintentional injury mortality, with fire-related injury causing 5% of these deaths. The highest unintentional mortality rates were in those aged 70 years or older (410/100,000). Conclusions: These direct estimates of unintentional injury deaths in India (0.6 million) are lower than WHO indirect estimates (0.8 million), but double the estimates which rely on police reports (0.3 million). Importantly, they revise upward the mortality due to falls, particularly in the elderly, and revise downward mortality due to fires. Ongoing monitoring of injury mortality will enable development of evidence based injury prevention programs.
8,002
311
15
[ "deaths", "injury", "mortality", "death", "unintentional", "india", "study", "cause", "unintentional injury", "injury deaths" ]
[ "test", "test" ]
[CONTENT] Unintentional-injuries | Mortality | Verbal autopsy | India [SUMMARY]
[CONTENT] Unintentional-injuries | Mortality | Verbal autopsy | India [SUMMARY]
[CONTENT] Unintentional-injuries | Mortality | Verbal autopsy | India [SUMMARY]
[CONTENT] Unintentional-injuries | Mortality | Verbal autopsy | India [SUMMARY]
[CONTENT] Unintentional-injuries | Mortality | Verbal autopsy | India [SUMMARY]
[CONTENT] Unintentional-injuries | Mortality | Verbal autopsy | India [SUMMARY]
[CONTENT] Accidental Falls | Accidents | Accidents, Traffic | Adolescent | Adult | Age Distribution | Aged | Cause of Death | Child | Child, Preschool | Drowning | Female | Fires | Humans | India | Infant | International Classification of Diseases | Male | Middle Aged | Qualitative Research | Risk Factors | Rural Population | Sex Distribution | Surveys and Questionnaires | Urban Population | Wounds and Injuries | Young Adult [SUMMARY]
[CONTENT] Accidental Falls | Accidents | Accidents, Traffic | Adolescent | Adult | Age Distribution | Aged | Cause of Death | Child | Child, Preschool | Drowning | Female | Fires | Humans | India | Infant | International Classification of Diseases | Male | Middle Aged | Qualitative Research | Risk Factors | Rural Population | Sex Distribution | Surveys and Questionnaires | Urban Population | Wounds and Injuries | Young Adult [SUMMARY]
[CONTENT] Accidental Falls | Accidents | Accidents, Traffic | Adolescent | Adult | Age Distribution | Aged | Cause of Death | Child | Child, Preschool | Drowning | Female | Fires | Humans | India | Infant | International Classification of Diseases | Male | Middle Aged | Qualitative Research | Risk Factors | Rural Population | Sex Distribution | Surveys and Questionnaires | Urban Population | Wounds and Injuries | Young Adult [SUMMARY]
[CONTENT] Accidental Falls | Accidents | Accidents, Traffic | Adolescent | Adult | Age Distribution | Aged | Cause of Death | Child | Child, Preschool | Drowning | Female | Fires | Humans | India | Infant | International Classification of Diseases | Male | Middle Aged | Qualitative Research | Risk Factors | Rural Population | Sex Distribution | Surveys and Questionnaires | Urban Population | Wounds and Injuries | Young Adult [SUMMARY]
[CONTENT] Accidental Falls | Accidents | Accidents, Traffic | Adolescent | Adult | Age Distribution | Aged | Cause of Death | Child | Child, Preschool | Drowning | Female | Fires | Humans | India | Infant | International Classification of Diseases | Male | Middle Aged | Qualitative Research | Risk Factors | Rural Population | Sex Distribution | Surveys and Questionnaires | Urban Population | Wounds and Injuries | Young Adult [SUMMARY]
[CONTENT] Accidental Falls | Accidents | Accidents, Traffic | Adolescent | Adult | Age Distribution | Aged | Cause of Death | Child | Child, Preschool | Drowning | Female | Fires | Humans | India | Infant | International Classification of Diseases | Male | Middle Aged | Qualitative Research | Risk Factors | Rural Population | Sex Distribution | Surveys and Questionnaires | Urban Population | Wounds and Injuries | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] deaths | injury | mortality | death | unintentional | india | study | cause | unintentional injury | injury deaths [SUMMARY]
[CONTENT] deaths | injury | mortality | death | unintentional | india | study | cause | unintentional injury | injury deaths [SUMMARY]
[CONTENT] deaths | injury | mortality | death | unintentional | india | study | cause | unintentional injury | injury deaths [SUMMARY]
[CONTENT] deaths | injury | mortality | death | unintentional | india | study | cause | unintentional injury | injury deaths [SUMMARY]
[CONTENT] deaths | injury | mortality | death | unintentional | india | study | cause | unintentional injury | injury deaths [SUMMARY]
[CONTENT] deaths | injury | mortality | death | unintentional | india | study | cause | unintentional injury | injury deaths [SUMMARY]
[CONTENT] deaths | unintentional | million deaths | injury | injuries | injury deaths | million | police | unintentional injuries | india [SUMMARY]
[CONTENT] death | deaths | srs | cause | cause death | time | mortality | 10 | household | un [SUMMARY]
[CONTENT] unintentional | deaths | injury | unintentional injury | 000 | injury deaths | unintentional injury deaths | rti | mr | ages [SUMMARY]
[CONTENT] revise | unintentional injury deaths india | injury deaths india | million | estimates | deaths india | unintentional injury deaths | falls | unintentional injury | unintentional [SUMMARY]
[CONTENT] deaths | injury | unintentional | mortality | death | india | study | injuries | unintentional injury | cause [SUMMARY]
[CONTENT] deaths | injury | unintentional | mortality | death | india | study | injuries | unintentional injury | cause [SUMMARY]
[CONTENT] India ||| ||| [SUMMARY]
[CONTENT] 2001-03 ||| two | 130 ||| ||| United Nations [SUMMARY]
[CONTENT] 2005 | 648,000 | 7% | 58/100,000 ||| ||| 185,000 | 29% | 160,000 | 25% | 73,000 | 11% | three | 5% ||| 70 years or [SUMMARY]
[CONTENT] India | 0.6 million | 0.8 million | 0.3 million ||| ||| [SUMMARY]
[CONTENT] India ||| ||| ||| 2001-03 ||| two | 130 ||| ||| United Nations ||| ||| 2005 | 648,000 | 7% | 58/100,000 ||| ||| 185,000 | 29% | 160,000 | 25% | 73,000 | 11% | three | 5% ||| 70 years or ||| India | 0.6 million | 0.8 million | 0.3 million ||| ||| [SUMMARY]
[CONTENT] India ||| ||| ||| 2001-03 ||| two | 130 ||| ||| United Nations ||| ||| 2005 | 648,000 | 7% | 58/100,000 ||| ||| 185,000 | 29% | 160,000 | 25% | 73,000 | 11% | three | 5% ||| 70 years or ||| India | 0.6 million | 0.8 million | 0.3 million ||| ||| [SUMMARY]
Antibiotic resistance and genotype of beta-lactamase producing Escherichia coli in nosocomial infections in Cotonou, Benin.
25595314
Beta-lactams are the most commonly used group of antimicrobials worldwide. The presence of extended-spectrum beta-lactamases (ESBL) significantly affects the treatment of infections due to multidrug-resistant strains of gram-negative bacilli. The aim of this study was to characterize the beta-lactamase resistance genes in Escherichia coli isolated from nosocomial infections in Cotonou, Benin.
BACKGROUND
Escherichia coli strains were isolated from various biological samples such as urine, pus, vaginal swab, sperm, blood, spinal fluid and catheter. Isolated bacteria were tested for resistance against eleven commonly used antibiotics by the disc diffusion method according to NCCLS criteria. Beta-lactamase production was determined by an acidimetric method with benzylpenicillin. Microbiological characterization of ESBL enzymes was done by the double disc synergy test, and the resistance genes TEM and SHV were screened for by specific PCR.
METHODS
The ESBL phenotype was detected in 29 isolates (35.5%). The most active antibiotic was imipenem (96.4% susceptibility rate), followed by ceftriaxone (58.3%) and gentamicin (54.8%). High resistance rates were observed with amoxicillin (92.8%), ampicillin (94%) and trimethoprim/sulfamethoxazole (85.7%). The genotype TEM was predominant in ESBL and non-ESBL isolates (72.4% and 80%, respectively). SHV-type beta-lactamase genes occurred in 24.1% of ESBL strains and in 18.1% of non-ESBL isolates.
RESULTS
This study revealed the presence of ESBL-producing Escherichia coli in Cotonou. It also demonstrated high rates of resistance to antibiotics commonly used for the treatment of infections. Continuous monitoring and judicious antibiotic usage are required.
CONCLUSION
[ "Anti-Bacterial Agents", "Benin", "Cross Infection", "Drug Resistance, Multiple, Bacterial", "Escherichia coli", "Escherichia coli Infections", "Genotype", "Humans", "Microbial Sensitivity Tests", "beta-Lactamases", "beta-Lactams" ]
4304606
Background
Antibiotic resistance is a paramount issue in medical practice. Production of β-lactamases is an important means by which Gram-negative bacteria exhibit resistance to β-lactam antibiotics [1]. An increasing prevalence of pathogens producing extended-spectrum beta-lactamases (ESBLs) has been shown worldwide, in hospitalized patients as well as in out-patients. Infections with ESBL-producing organisms are associated with higher rates of mortality, morbidity and healthcare expenditure. ESBLs are often plasmid mediated; though they occur predominantly in Escherichia coli and Klebsiella species, they have also been described in other genera of the Enterobacteriaceae [1,2]. Multidrug resistance in ESBL producers limits therapeutic options and subsequently facilitates the dissemination of these bacterial strains. Several studies have reported increasing resistance rates to commonly prescribed antibiotics such as ampicillin, trimethoprim/sulfamethoxazole and ciprofloxacin in clinical isolates of Escherichia coli [3-6]. Antibiotic resistance varies according to geographic location and is directly proportional to the use and misuse of antibiotics. Understanding the effect of drug resistance is crucial because of its deep impact on the treatment of infections. In Benin, little is known about resistance to third-generation cephalosporins or multidrug resistance in Escherichia coli. As antibiotic treatment must rely on antimicrobial susceptibility patterns, current knowledge of susceptibility is essential for appropriate therapy. Previous studies reported the presence of TEM and SHV in nosocomial and community-isolated Escherichia coli strains in Benin [7,8]. The aim of the present study was to characterize clinical isolates of Escherichia coli obtained from several infections in Cotonou in order to (i) determine the susceptibility patterns to antibiotics, (ii) evaluate the prevalence of ESBL and (iii) identify the genes involved in the resistance.
null
null
Results
A total of 84 nosocomial Escherichia coli isolates were included in this study. The majority of isolates (72 strains: 85.7%) were from urine samples. Specimens from pus represented 4.76% of the isolates, followed by vaginal swab and spinal fluid with 2.38% each. Isolates from sperm and catheter were 1.2% each. The antibiotic susceptibility test revealed that the most efficient antibiotics were imipenem (96.4% susceptibility rate) followed by ceftriaxone (58.3%) and gentamicin (54.8%). High resistance rates were observed with amoxicillin (92.8%), ampicillin (94%), and trimethoprim/sulfamethoxazole (85.7%). Resistance to amoxicillin/clavulanic acid was 85.7%, with a high rate of intermediate resistance (46.7%). We observed homogeneity in the resistance profile of ESBL producers, which were multidrug resistant, with resistance to at least 8 of the 11 antibiotics tested (Table 1).

Table 1. Antibiotic susceptibility pattern of ESBL and non-ESBL producing isolates

Antibiotic | R (%) | S (%) | ESBL resistant (n = 29) | Non-ESBL resistant (n = 55) | p-value*
AM  | 97.6 | 2.4  | 29 (100%)  | 50 (91%)   | 0.4
AMC | 85.7 | 14.3 | 24 (82.7%) | 15 (27.3%) | 10−4
AMX | 95.2 | 4.8  | 29 (100%)  | 49 (89.1%) | 0.17
IPM | 3.6  | 96.4 | 01 (3.4%)  | 00 (0%)    | 0.3
CTX | 56.5 | 40.5 | 28 (96.5%) | 01 (1.8%)  | <10−9
CRO | 44.7 | 58.3 | 28 (96.5%) | 01 (1.8%)  | 10−3
CIP | 91.7 | 8.3  | 29 (100%)  | 14 (25.5%) | 10−10
NOR | 53.6 | 46.4 | 28 (96.5%) | 14 (25.5%) | <10−9
AN  | 83.3 | 16.7 | 21 (72.4%) | 19 (34.5%) | 0.04
G   | 45.2 | 54.8 | 24 (82.7%) | 12 (21.8%) | 10−7
SXT | 86.9 | 13.1 | 29 (100%)  | 43 (78.2%) | 6 × 10−3

*P value is for the comparison of resistance of ESBL producers with non-producers. AM: Ampicilline; AMC: Amoxicilline/clavulanic acid; AMX: Amoxicilline; IPM: Imipenem; CTX: Cefotaxime; CRO: Ceftriaxone; CIP: Ciprofloxacine; NOR: Norfloxacine; AN: Amikacine; G: Gentamicine; SXT: Trimethoprim/Sulfamethoxazole. 
According to the biochemical detection of beta-lactamase production, 87.0% of isolates were positive in the acidimetric test, hydrolyzing penicillin G. Of the 84 isolates screened for ESBL production by the double disk test, 29 (35.5%) were positive using either cefotaxime or ceftriaxone. Among these ESBL producers, 21 were from urinary tract infection (UTI) and 8 from other infection sites. Non-urinary infection sites appear disproportionately represented among ESBL producers relative to their share of all isolates (27.6% versus 14%). This could be explained by the small number of samples drawn from non-urinary infections. Comparison of the susceptibility rates of ESBL producers with those of non-producers showed similar values for amoxicillin, ampicillin and imipenem; no statistically significant difference between the two groups was observed (Table 1). For all other tested antibiotics, ESBL strains were more resistant than non-ESBL strains. According to the PCR, the genotypes TEM and SHV were distributed as follows: blaTEM 65 (77.4%) and blaSHV 17 (20.2%). These genotypes occurred singly in 54 isolates (64.3%), with 51 (94.4%) for blaTEM and 3 (5.6%) for blaSHV. Both TEM and SHV genes were present in 14 isolates (16.7%) and absent in 16 isolates (19%), including 10 non-ESBL strains and 6 ESBL strains. Of the 29 isolates with an ESBL phenotype, the PCR analysis revealed 72.4% of strains with the genotype TEM, 24.1% with the genotype SHV, and 5 strains harbouring both genes. Figure 1 shows the agarose gel of PCR products following amplification of SHV genes. Figure 1. Agarose gel of PCR products following amplification of SHV genes. M: molecular weight marker; lanes 02 to 08: SHV-positive samples; T0: negative probe without DNA. 
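The genotype tallies in this paragraph are internally consistent. A quick check, with the counts copied directly from the text:

```python
# Consistency check of the genotype counts reported in the Results:
# 84 isolates; blaTEM in 65 (77.4%), blaSHV in 17 (20.2%); 51 TEM-only,
# 3 SHV-only, 14 carrying both genes, 16 carrying neither.
tem_only, shv_only, both, neither = 51, 3, 14, 16

assert tem_only + shv_only + both + neither == 84  # all isolates accounted for
assert tem_only + both == 65                       # blaTEM total
assert shv_only + both == 17                       # blaSHV total
assert tem_only + shv_only == 54                   # genes occurring singly

print(round(65 / 84 * 100, 1), round(17 / 84 * 100, 1))  # 77.4 20.2
```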
As shown in Table 2, there was no difference in resistance rates to ceftriaxone and cefotaxime, and no difference was found between the inhibition diameters of the two antibiotics with respect to the TEM and SHV genotypes.

Table 2. Association between ESBL genotype and antibiotic susceptibility profile

Genotype | Presence | S | I | R | p-value (Fisher’s exact test)
TEM       | + | 0 (0%) | 0 (0%)    | 21 (100%)  | 0.27
TEM       | − | 0 (0%) | 1 (12.5%) | 7 (87.5%)  |
SHV       | + | 0 (0%) | 0 (0%)    | 7 (100%)   | 0.76
SHV       | − | 0 (0%) | 1 (4.5%)  | 21 (95.5%) |
TEM + SHV | + | 0 (0%) | 0 (0%)    | 5 (100%)   | 0.54
TEM + SHV | − | 0 (0%) | 1 (16.7%) | 5 (83.3%)  |

Counts are n (%) for cefotaxime/ceftriaxone. S = sensitive; I = intermediate; R = resistant. + indicates the presence of the genotype; − indicates its absence.
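The Fisher’s exact p-values in Tables 1 and 2 can be recomputed from the counts alone. The sketch below is an illustration only (the authors used Epi Info, and exact tail definitions can differ slightly between implementations); it implements the standard two-sided hypergeometric formulation with the Python standard library and checks two comparisons from Table 1.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of every table with the same margins whose
    hypergeometric probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):
        # P(x resistant in row 1) under fixed margins (hypergeometric)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Amoxicillin/clavulanic acid (AMC), Table 1:
# 24/29 ESBL producers resistant vs 15/55 non-producers resistant.
p_amc = fisher_exact_two_sided(24, 5, 15, 40)

# Ampicillin (AM), Table 1: 29/29 vs 50/55 resistant -- not significant.
p_am = fisher_exact_two_sided(29, 0, 50, 5)

print(f"AMC: p = {p_amc:.1e}  AM: p = {p_am:.2f}")
```

With these counts the AMC comparison comes out far below 0.05 while the ampicillin comparison does not, matching the pattern (though not necessarily the exact rounded values) reported in Table 1.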
Conclusions
This study reveals the presence of ESBL-producing Escherichia coli in clinical isolates from several hospitals in Cotonou. The use of some first-line antibiotics, such as penicillins and trimethoprim/sulfamethoxazole, appears inappropriate. Antibiotic resistance surveillance and the determination of the molecular characteristics of ESBL isolates are essential to ensure the judicious use of antimicrobial drugs.
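The conclusion about first-line agents can be read directly off the overall resistance rates in Table 1. A minimal, hypothetical sketch (the 20% cut-off is a commonly cited empiric-therapy threshold, not one stated in this study):

```python
# Overall resistance rates (%) taken from Table 1.
resistance = {
    "AM": 97.6, "AMC": 85.7, "AMX": 95.2, "IPM": 3.6, "CTX": 56.5,
    "CRO": 44.7, "CIP": 91.7, "NOR": 53.6, "AN": 83.3, "G": 45.2, "SXT": 86.9,
}

THRESHOLD = 20.0  # assumed empiric-use cut-off (% resistant), illustration only

# Keep only drugs whose resistance rate does not exceed the cut-off.
suitable = sorted(drug for drug, r in resistance.items() if r <= THRESHOLD)
print("Candidates for empiric therapy:", suitable)  # ['IPM']
```

Only imipenem clears the cut-off, in line with the observation that the carbapenem remained the most active agent.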
[ "Bacterial strains", "Antibiotic susceptibility testing and detection of ESBL", "Biochemical detection of beta-lactamase production", "Detection of beta-lactamase genes", "Statistical analysis" ]
[ "The study was carried out in three hospitals in Cotonou, from September 2012 to April 2013. Consecutive, non-repeated nosocomial Escherichia coli isolates obtained from urine, pus, vaginal swab, sperm, blood, spinal fluid and catheter samples received in the bacteriology laboratories were analyzed. The isolated microorganisms were considered of nosocomial origin if they were isolated from patients admitted to the hospitals for 48 hours or more. The main pathogens were identified by cultural characteristics and standard biochemical procedures and were confirmed with the API 20 E (Biomérieux, Marcy l’Etoile, France) identification system.", "Antibiotic susceptibility testing was performed by the disc diffusion method on Mueller Hinton agar (Bio-Rad, Marne la Coquette, France) according to the recommendations of the Antibiogram Committee of the French Society for Microbiology (Comité de l’Antibiogramme de la Société Française de Microbiologie) [9]. The following antibiotic discs (drug concentration in μg) were tested: amoxicillin (25 μg), amoxicillin/clavulanic acid (20/10 μg), ampicillin (10 μg), imipenem (10 μg), cefotaxime (30 μg), ceftriaxone (30 μg), ciprofloxacin (5 μg), norfloxacin (5 μg), amikacin (30 μg), gentamicin (15 μg) and trimethoprim/sulfamethoxazole (1.25/23.75 μg), all from Bio-Rad (Marne la Coquette, France).\nESBL phenotypes were detected by double-disk synergy according to the method described by Jarlier et al. [10]. Disks of cefotaxime and ceftriaxone were placed 20 mm from an amoxicillin/clavulanate disk. Enhancement of the inhibition zone of the third-generation cephalosporin toward the amoxicillin/clavulanate disk indicated the possible presence of an ESBL. Escherichia coli ATCC 25922 was used as control.", "The presence of beta-lactamase was tested by an acidimetric method using benzylpenicillin as substrate [11]. A single colony was resuspended and mixed with the indicator solution. 
The indicator solution was prepared by adding 1 ml of sterile distilled water and 100 μl of 1% phenol red solution to a vial of one million units of sodium benzylpenicillin (Crystapen, Glaxo). A solution of 1 N sodium hydroxide was added until the development of a violet color (pH 8.5). Several colonies were suspended in NaCl 9‰ to obtain a dense suspension. 150 μl of the penicillin phenol red solution was added and the color development was observed within 1 hour. The solution turned yellow in the presence of beta-lactamase.", "All the strains were further analyzed by PCR to detect beta-lactamase genes. Total DNA extraction was performed using the heat-shock method. Briefly, a single bacterial colony was inoculated into 5 ml of Luria-Bertani broth (Bio-Rad, Marne la Coquette, France) and incubated for 20 h at 37°C. Cells from 1.5 ml of the overnight culture were harvested by centrifugation at 7000 RPM for 5 min. After the supernatant was removed, the pellet was washed twice with sterile water and resuspended in 500 μl of sterile water. This suspension was incubated at 95°C for 10 min. The supernatant was stored at −20°C for PCR analysis.\nThe presence of the beta-lactamase genes, blaTEM and blaSHV, was detected using specific primers: for the TEM genes, OT-1-F [5′-TTGGGTGCACGAGTGGGTTA-3′] and OT-2-R [5′-TAATTGTTGCCGGGAAGCTA-3′], which amplified a 465 bp fragment [12]; for the SHV genes, SHV-A [5′-CACTCAAGGATGTATTGTG-3′] and SHV-B [5′-TTAGCGTTGCCAGTGCTCG-3′], which amplified a fragment of 885 bp [13].\nAmplification reactions were performed in a volume of 30 μl containing 4 μl of supernatant, (volume) PCR buffer (1x), MgCl2 (1.5 mM), (volume) dNTPs (200 μM), (volume) primer (0.5 mM) and 0.2 μl of Taq DNA polymerase (1.5 U). A Biometra thermal cycler was used for the amplification. 
The cycling conditions were the following:\nTEM: initial denaturation at 94°C for 5 min, followed by 30 cycles of denaturation at 94°C for 30 s, annealing at 52°C for 30 s, extension at 72°C for 1 min;\nSHV: initial denaturation at 96°C for 15 s, followed by 30 cycles of denaturation at 96°C for 15 s, annealing at 50°C for 15 s, extension at 72°C for 2 min. Both PCR programs were followed by a final extension step of 10 min.\nAmplified PCR products were separated on 1.5% agarose gels, stained with ethidium bromide and visualized under UV illumination.", "Data were analyzed with Epi Info® version 3.5.4. Differences in antibiotic susceptibility among groups were statistically analyzed by Fisher’s exact test. A P-value < 0.05 was considered significant." ]
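For reference, the two cycling programs can be written down as data. The sketch below is illustrative only (the 72°C temperature for the final 10 min extension is an assumption, since the text states only its duration) and estimates each program’s total hold time, ignoring ramp times between steps.

```python
def program_seconds(initial, cycles, steps, final):
    """Total hold time of a PCR program, ignoring ramping between steps."""
    per_cycle = sum(sec for _temp, sec in steps)
    return initial[1] + cycles * per_cycle + final[1]

# Steps are (temperature in °C, hold time in seconds), per the protocol above.
TEM = dict(initial=(94, 5 * 60), cycles=30,
           steps=[(94, 30), (52, 30), (72, 60)], final=(72, 10 * 60))
SHV = dict(initial=(96, 15), cycles=30,
           steps=[(96, 15), (50, 15), (72, 120)], final=(72, 10 * 60))

for name, prog in [("TEM", TEM), ("SHV", SHV)]:
    total = program_seconds(**prog)
    print(f"{name}: {total} s (~{total / 60:.0f} min)")
```

This gives 4500 s (about 75 min) for the TEM program and 5115 s (about 85 min) for SHV, excluding ramp times.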
[ null, null, null, null, null ]
[ "Background", "Materials and methods", "Bacterial strains", "Antibiotic susceptibility testing and detection of ESBL", "Biochemical detection of beta-lactamase production", "Detection of beta-lactamase genes", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "Antibiotic resistance is a paramount issue in medical practice. Production of β-lactamases is an important means by which Gram-negative bacteria exhibit resistance to β-lactam antibiotics [1]. An increasing prevalence of pathogens producing extended-spectrum β-lactamases (ESBLs) has been shown worldwide in hospitalized patients as well as in out-patients. Infections with ESBL-producing organisms are associated with higher rates of mortality, morbidity and healthcare expenditure. ESBLs are often plasmid mediated; though they occur predominantly in Escherichia coli and Klebsiella species, they have also been described in other genera of the Enterobacteriaceae [1,2].\nMultidrug resistance in ESBL producers limits therapeutic options and subsequently facilitates the dissemination of these bacterial strains. Several studies have reported increasing resistance rates to commonly prescribed antibiotics such as ampicillin, trimethoprim/sulfamethoxazole and ciprofloxacin in clinical isolates of Escherichia coli [3-6].\nAntibiotic resistance varies according to geographic location and is directly proportional to the use and misuse of antibiotics. Understanding the effect of drug resistance is crucial because of its deep impact on the treatment of infections. In Benin, little is known about resistance to third-generation cephalosporins or about multidrug resistance in Escherichia coli. As antibiotic treatment must rely on the antimicrobial susceptibility pattern, current knowledge of susceptibility is essential for appropriate therapy. 
Previous studies reported the presence of TEM and SHV in nosocomial and community isolated Escherichia coli strains in Benin [7,8].\nThe aim of the present study was to characterize clinical isolates of Escherichia coli obtained from several infections in Cotonou in order to (i) determine the susceptibility patterns to antibiotics, (ii) evaluate the prevalence of ESBL and (iii) identify the genes involved in the resistance.", " Bacterial strains The study was carried out in three hospitals in Cotonou, from September 2012 to April 2013. Consecutive, non-repeated nosocomial Escherichia coli isolates obtained from urine, pus, vaginal swab, sperm, blood, spinal fluid and catheter samples received in the bacteriology laboratories were analyzed. The isolated microorganisms were considered of nosocomial origin if they were isolated from patients admitted to the hospitals for 48 hours or more. The main pathogens were identified by cultural characteristics and standard biochemical procedures and were confirmed with the API 20 E (Biomérieux, Marcy l’Etoile, France) identification system.\n Antibiotic susceptibility testing and detection of ESBL Antibiotic susceptibility testing was performed by the disc diffusion method on Mueller Hinton agar (Bio-Rad, Marne la Coquette, France) according to the recommendations of the Antibiogram Committee of the French Society for Microbiology (Comité de l’Antibiogramme de la Société Française de Microbiologie) [9]. The following antibiotic discs (drug concentration in μg) were tested: amoxicillin (25 μg), amoxicillin/clavulanic acid (20/10 μg), ampicillin (10 μg), imipenem (10 μg), cefotaxime (30 μg), ceftriaxone (30 μg), ciprofloxacin (5 μg), norfloxacin (5 μg), amikacin (30 μg), gentamicin (15 μg) and trimethoprim/sulfamethoxazole (1.25/23.75 μg), all from Bio-Rad (Marne la Coquette, France).\nESBL phenotypes were detected by double-disk synergy according to the method described by Jarlier et al. [10]. Disks of cefotaxime and ceftriaxone were placed 20 mm from an amoxicillin/clavulanate disk. Enhancement of the inhibition zone of the third-generation cephalosporin toward the amoxicillin/clavulanate disk indicated the possible presence of an ESBL. Escherichia coli ATCC 25922 was used as control.\n Biochemical detection of beta-lactamase production The presence of beta-lactamase was tested by an acidimetric method using benzylpenicillin as substrate [11]. A single colony was resuspended and mixed with the indicator solution. The indicator solution was prepared by adding 1 ml of sterile distilled water and 100 μl of 1% phenol red solution to a vial of one million units of sodium benzylpenicillin (Crystapen, Glaxo). A solution of 1 N sodium hydroxide was added until the development of a violet color (pH 8.5). Several colonies were suspended in NaCl 9‰ to obtain a dense suspension. 150 μl of the penicillin phenol red solution was added and the color development was observed within 1 hour. The solution turned yellow in the presence of beta-lactamase.\n Detection of beta-lactamase genes All the strains were further analyzed by PCR to detect beta-lactamase genes. Total DNA extraction was performed using the heat-shock method. Briefly, a single bacterial colony was inoculated into 5 ml of Luria-Bertani broth (Bio-Rad, Marne la Coquette, France) and incubated for 20 h at 37°C. Cells from 1.5 ml of the overnight culture were harvested by centrifugation at 7000 RPM for 5 min. After the supernatant was removed, the pellet was washed twice with sterile water and resuspended in 500 μl of sterile water. This suspension was incubated at 95°C for 10 min. The supernatant was stored at −20°C for PCR analysis.\nThe presence of the beta-lactamase genes, blaTEM and blaSHV, was detected using specific primers: for the TEM genes, OT-1-F [5′-TTGGGTGCACGAGTGGGTTA-3′] and OT-2-R [5′-TAATTGTTGCCGGGAAGCTA-3′], which amplified a 465 bp fragment [12]; for the SHV genes, SHV-A [5′-CACTCAAGGATGTATTGTG-3′] and SHV-B [5′-TTAGCGTTGCCAGTGCTCG-3′], which amplified a fragment of 885 bp [13].\nAmplification reactions were performed in a volume of 30 μl containing 4 μl of supernatant, (volume) PCR buffer (1x), MgCl2 (1.5 mM), (volume) dNTPs (200 μM), (volume) primer (0.5 mM) and 0.2 μl of Taq DNA polymerase (1.5 U). A Biometra thermal cycler was used for the amplification. The cycling conditions were the following:\nTEM: initial denaturation at 94°C for 5 min, followed by 30 cycles of denaturation at 94°C for 30 s, annealing at 52°C for 30 s, extension at 72°C for 1 min;\nSHV: initial denaturation at 96°C for 15 s, followed by 30 cycles of denaturation at 96°C for 15 s, annealing at 50°C for 15 s, extension at 72°C for 2 min. Both PCR programs were followed by a final extension step of 10 min.\nAmplified PCR products were separated on 1.5% agarose gels, stained with ethidium bromide and visualized under UV illumination.\n Statistical analysis Data were analyzed with Epi Info® version 3.5.4. Differences in antibiotic susceptibility among groups were statistically analyzed by Fisher’s exact test. A P-value < 0.05 was considered significant.", "The study was carried out in three hospitals in Cotonou, from September 2012 to April 2013. Consecutive, non-repeated nosocomial Escherichia coli isolates obtained from urine, pus, vaginal swab, sperm, blood, spinal fluid and catheter samples received in the bacteriology laboratories were analyzed. The isolated microorganisms were considered of nosocomial origin if they were isolated from patients admitted to the hospitals for 48 hours or more. The main pathogens were identified by cultural characteristics and standard biochemical procedures and were confirmed with the API 20 E (Biomérieux, Marcy l’Etoile, France) identification system.", "Antibiotic susceptibility testing was performed by the disc diffusion method on Mueller Hinton agar (Bio-Rad, Marne la Coquette, France) according to the recommendations of the Antibiogram Committee of the French Society for Microbiology (Comité de l’Antibiogramme de la Société Française de Microbiologie) [9]. 
The following antibiotic discs (drug concentration in μg) were tested: amoxicillin (25 μg), amoxicillin/clavulanic acid (20/10 μg), ampicillin (10 μg), imipenem (10 μg), cefotaxime (30 μg), ceftriaxone (30 μg), ciprofloxacin (5 μg), norfloxacin (5 μg), amikacin (30 μg), gentamicin (15 μg) and trimethoprim/sulfamethoxazole (1.25/23.75 μg), all from Bio-Rad (Marne la Coquette, France).\nESBL phenotypes were detected by double-disk synergy according to the method described by Jarlier et al. [10]. Disks of cefotaxime and ceftriaxone were placed 20 mm from an amoxicillin/clavulanate disk. Enhancement of the inhibition zone of the third-generation cephalosporin toward the amoxicillin/clavulanate disk indicated the possible presence of an ESBL. Escherichia coli ATCC 25922 was used as control.", "The presence of beta-lactamase was tested by an acidimetric method using benzylpenicillin as substrate [11]. A single colony was resuspended and mixed with the indicator solution. The indicator solution was prepared by adding 1 ml of sterile distilled water and 100 μl of 1% phenol red solution to a vial of one million units of sodium benzylpenicillin (Crystapen, Glaxo). A solution of 1 N sodium hydroxide was added until the development of a violet color (pH 8.5). Several colonies were suspended in NaCl 9‰ to obtain a dense suspension. 150 μl of the penicillin phenol red solution was added and the color development was observed within 1 hour. The solution turned yellow in the presence of beta-lactamase.", "All the strains were further analyzed by PCR to detect beta-lactamase genes. Total DNA extraction was performed using the heat-shock method. Briefly, a single bacterial colony was inoculated into 5 ml of Luria-Bertani broth (Bio-Rad, Marne la Coquette, France) and incubated for 20 h at 37°C. Cells from 1.5 ml of the overnight culture were harvested by centrifugation at 7000 RPM for 5 min. 
After the supernatant was removed, the pellet was washed twice with sterile water and resuspended in 500 μl of sterile water. This suspension was incubated at 95°C for 10 min. The supernatant was stored at −20°C for PCR analysis.\nThe presence of the beta-lactamase genes, blaTEM and blaSHV, was detected using specific primers: for the TEM genes, OT-1-F [5′-TTGGGTGCACGAGTGGGTTA-3′] and OT-2-R [5′-TAATTGTTGCCGGGAAGCTA-3′], which amplified a 465 bp fragment [12]; for the SHV genes, SHV-A [5′-CACTCAAGGATGTATTGTG-3′] and SHV-B [5′-TTAGCGTTGCCAGTGCTCG-3′], which amplified a fragment of 885 bp [13].\nAmplification reactions were performed in a volume of 30 μl containing 4 μl of supernatant, (volume) PCR buffer (1x), MgCl2 (1.5 mM), (volume) dNTPs (200 μM), (volume) primer (0.5 mM) and 0.2 μl of Taq DNA polymerase (1.5 U). A Biometra thermal cycler was used for the amplification. The cycling conditions were the following:\nTEM: initial denaturation at 94°C for 5 min, followed by 30 cycles of denaturation at 94°C for 30 s, annealing at 52°C for 30 s, extension at 72°C for 1 min;\nSHV: initial denaturation at 96°C for 15 s, followed by 30 cycles of denaturation at 96°C for 15 s, annealing at 50°C for 15 s, extension at 72°C for 2 min. Both PCR programs were followed by a final extension step of 10 min.\nAmplified PCR products were separated on 1.5% agarose gels, stained with ethidium bromide and visualized under UV illumination.", "Data were analyzed with Epi Info® version 3.5.4. Differences in antibiotic susceptibility among groups were statistically analyzed by Fisher’s exact test. A P-value < 0.05 was considered significant.", "A total of 84 nosocomial Escherichia coli isolates were included in this study. The majority of isolates (72 strains; 85.7%) were from urine samples. Specimens from pus represented 4.76% of the isolates, followed by vaginal swab and spinal fluid with 2.38% each. 
Isolates from sperm and catheter samples represented 1.2% each.\nThe antibiotic susceptibility test revealed that the most efficient antibiotics were imipenem (96.4% susceptibility rate) followed by ceftriaxone (58.3%) and gentamicin (54.8%). High resistance rates were observed with amoxicillin (92.8%), ampicillin (94%) and trimethoprim/sulfamethoxazole (85.7%). Resistance to amoxicillin/clavulanic acid was 85.7%, with a high rate of intermediate resistance (46.7%). We observed homogeneity in the resistance profile of ESBL producers, which were multidrug resistant, with resistance to at least 8 of the 11 antibiotics tested (Table 1).\nTable 1. Antibiotic susceptibility pattern of ESBL and non-ESBL producing isolates\nAntibiotic | Resistant (%) | Susceptible (%) | ESBL resistant (n = 29) | Non-ESBL resistant (n = 55) | p-value*\nAM | 97.6 | 2.4 | 29 (100%) | 50 (91%) | 0.4\nAMC | 85.7 | 14.3 | 24 (82.7%) | 15 (27.3%) | 10^-4\nAMX | 95.2 | 4.8 | 29 (100%) | 49 (89.1%) | 0.17\nIPM | 3.6 | 96.4 | 1 (3.4%) | 0 (0%) | 0.3\nCTX | 56.5 | 40.5 | 28 (96.5%) | 1 (1.8%) | <10^-9\nCRO | 44.7 | 58.3 | 28 (96.5%) | 1 (1.8%) | 10^-3\nCIP | 91.7 | 8.3 | 29 (100%) | 14 (25.5%) | 10^-10\nNOR | 53.6 | 46.4 | 28 (96.5%) | 14 (25.5%) | <10^-9\nAN | 83.3 | 16.7 | 21 (72.4%) | 19 (34.5%) | 0.04\nG | 45.2 | 54.8 | 24 (82.7%) | 12 (21.8%) | 10^-7\nSXT | 86.9 | 13.1 | 29 (100%) | 43 (78.2%) | 6×10^-3\n*P-value is for the comparison of resistance of ESBL producers with non-producers.\nAM: ampicillin; AMC: amoxicillin/clavulanic acid; AMX: amoxicillin; IPM: imipenem; CTX: cefotaxime; CRO: ceftriaxone; CIP: ciprofloxacin; NOR: norfloxacin; AN: amikacin; G: gentamicin; SXT: trimethoprim/sulfamethoxazole.\nAccording to the biochemical detection of beta-lactamase production, 87.0% of isolates were positive by the acidimetric test, hydrolyzing penicillin G.\nOf the 84 isolates screened for ESBL production by the double-disk test, 29 (35.5%) were positive with either cefotaxime or ceftriaxone. Among these ESBL producers, 21 were from urinary tract infection (UTI) and 8 from other infection sites. The other infection sites appear disproportionately represented compared with UTI (27.6% versus 14%), which could be explained by the small number of samples drawn from non-urinary infections.\nComparison of susceptibility rates of ESBL producers with those of non-producers showed similar values for amoxicillin, ampicillin and imipenem; no statistically significant difference between the two groups was observed (Table 1). For all other tested antibiotics, ESBL strains were more resistant than non-ESBL strains.\nAccording to the PCR, the TEM and SHV genotypes were distributed as follows: blaTEM in 65 isolates (77.4%) and blaSHV in 17 (20.2%). These genotypes occurred singly in 54 isolates (64.3%): 51 (94.4%) with blaTEM and 3 (5.6%) with blaSHV. Both TEM and SHV genes were present in 14 isolates (16.7%) and absent in 16 isolates (19%), including 10 non-ESBL strains and 6 ESBL strains.\nOf the 29 isolates with an ESBL phenotype, PCR analysis revealed 72.4% of strains with the TEM genotype, 24.1% with the SHV genotype, and 5 strains harboring both genes. Figure 1 shows an agarose gel of PCR products following amplification of SHV genes.\nFigure 1. Agarose gel of PCR products following amplification of SHV genes. M: molecular weight marker; lanes 02 to 08: SHV-positive samples; T0: negative control without DNA.\nAs shown in Table 2, there was no difference in resistance rates to ceftriaxone and cefotaxime, and no difference was found between the inhibition diameters of the two antibiotics with respect to the TEM and SHV genotypes.\nTable 2. Association between ESBL genotype and antibiotic susceptibility profile\nGenotype | Presence | S | I | R | p-value (Fisher’s exact test)\nTEM | + | 0 (0%) | 0 (0%) | 21 (100%) | 0.27\nTEM | − | 0 (0%) | 1 (12.5%) | 7 (87.5%) |\nSHV | + | 0 (0%) | 0 (0%) | 7 (100%) | 0.76\nSHV | − | 0 (0%) | 1 (4.5%) | 21 (95.5%) |\nTEM + SHV | + | 0 (0%) | 0 (0%) | 5 (100%) | 0.54\nTEM + SHV | − | 0 (0%) | 1 (16.7%) | 5 (83.3%) |\nCounts are n (%) for cefotaxime/ceftriaxone. S = sensitive; I = intermediate; R = resistant. + indicates the presence of the genotype; − indicates its absence.", "Our study was carried out in three hospitals in Cotonou, Benin. Of the 84 isolates tested, 72 (85.7%) were from urinary tract infections (UTI). This finding is similar to results previously described by several studies [7,14-16]. This was expected, since clinical isolates of Escherichia coli are predominantly responsible for UTI [17,18].\nThe highest susceptibility of E. coli isolates was to imipenem (96.4%). In 2007 we showed that the resistance rate to imipenem was 5% in ESBL producers and 2% in non-ESBL isolates [7]. A similarly high susceptibility (93%) was observed by Muvunyi et al. in Rwanda [19]. Studies conducted in Iran [20], Morocco [21] and Nigeria [22] reported similarly high susceptibility of E. coli isolates to imipenem, at rates of 98.4%, 96.7% and 92.5% respectively. This finding may be due to the stability and high activity of carbapenems against most beta-lactamases.\nHigh resistance rates were found to antibiotics commonly used in Benin, such as ampicillin (97.6%), amoxicillin (95.2%) and trimethoprim/sulfamethoxazole (86.9%). 
The observed high rates may be due to uncontrolled consumption, a consequence of easy access to inefficient and cheap antibiotics.\nAmong the tested fluoroquinolones, only 8.3% of the isolates were sensitive to ciprofloxacin. Norfloxacin was more efficient, with 44.6% of strains sensitive, but lower than the result reported by Thakur et al. [6] with 52.5% sensitivity. Previous studies in Benin reported lower resistance rates to ciprofloxacin compared with our study. Ahoyo et al. reported 48% and 16% resistance to ciprofloxacin in ESBL and non-ESBL E. coli from nosocomial infections, respectively [7]. Our survey conducted in 2004 showed resistance rates of 18% in ESBL producers and 14% in non-ESBL isolates from community-acquired UTI [8]. The higher rate of resistance to ciprofloxacin in the current study could be explained by intensive use of ciprofloxacin in the past decade.\nLower resistance rates to ciprofloxacin, usually prescribed in uncomplicated UTI and other infections, have been reported in Nepal (45%) [6], in India (52.80%) [23] and in Iran (60%) [20]. It has been shown that uncontrolled use of quinolones in past years led to growing resistance to these antibiotics, particularly ciprofloxacin, worldwide [24,25]. The use of ciprofloxacin has also selected for ESBL producers, due to the widespread co-detection of both resistance mechanisms [26,27].\nA high resistance rate to amoxicillin/clavulanic acid was also observed (85.7%). This finding is in agreement with previous investigations in Benin. In 2007, we reported resistance rates of 97.5% in ESBL and 53% in non-ESBL E. coli strains from nosocomial infections [7]. In our previous study performed in 2004, we detected 72% resistance in ESBL and 9% in non-ESBL E. 
coli isolates from out-patients [8].\nWe showed in this study that 87% of isolated strains were positive for beta-lactamase detection by the acidimetric method, indicating that the most important mechanism of resistance to beta-lactam antibiotics is the production of beta-lactamase.\nThe high prevalence of ESBL-producing strains (35.5%) is similar to rates generally reported in developing countries.\nOur finding is very close to the ESBL prevalence reported in India (35.45%) [28] and similar to the prevalence in Tanzania (39.1%) [29]. Previous studies in Benin revealed relatively high prevalence rates of ESBL-producing E. coli. In 2007, Ahoyo et al. showed a prevalence of 22% in isolates from various nosocomial infections. In 2004, we found that 14.8% of E. coli strains isolated from urinary tract infection (UTI) were ESBL producers [8]. Lower ESBL prevalence was described in Morocco (1.3%) in strains isolated from UTI [30] and in Cameroon (16%) in strains isolated from feces in the community [25].\nAmong the ESBL producers, the other infection sites appear disproportionately represented compared with UTI (27.6% versus 14%). This could be explained by the small number of samples drawn from non-urinary infections.\nThe comparison between ESBL-producing and non-ESBL strains showed that ESBL producers were significantly more resistant to cephalosporins, quinolones, aminoglycosides, trimethoprim/sulfamethoxazole and amoxicillin/clavulanic acid than non-producers. The genes encoding ESBLs are usually located on transferable plasmids that may also carry other resistance determinants, such as those for resistance to aminoglycosides, tetracyclines, chloramphenicol, trimethoprim, sulphonamides and quinolones [31,32].\nThe TEM genotype was predominant in ESBL and non-ESBL isolates, with 72.4% and 80% respectively. This is generally observed in E. coli. The remaining 6 ESBL strains, which were non-TEM and non-SHV, could harbor CTX-M genes. 
Widespread dissemination of these genes has been described in Africa and elsewhere [14,25,31].\nOf the 65 strains harboring TEM genes, 59 showed a positive result for the detection of enzyme production by hydrolysis of penicillin. Of the 17 SHV-positive strains, 15 (80.95%) were positive in this test.\nIn this study, the phenotypic screening for ESBL was based on resistance to cefotaxime or ceftriaxone. The presence of the TEM and SHV genotypes could not predict the resistance pattern to these cephalosporins. Our finding is consistent with the results reported by Maina et al., who found no significant association between the TEM and SHV genotypes and susceptibility to cefotaxime and ceftriaxone [33].", "This study reveals the presence of ESBL-producing Escherichia coli in clinical isolates from several hospitals in Cotonou. The use of some first-line antibiotics, such as penicillins and trimethoprim/sulfamethoxazole, appears inappropriate. Antibiotic resistance surveillance and the determination of the molecular characteristics of ESBL isolates are essential to ensure the judicious use of antimicrobial drugs." ]
[ "introduction", "materials|methods", null, null, null, null, null, "results", "discussion", "conclusion" ]
[ "\nEscherichia coli\n", "ESBL", "Resistance gene" ]
Background: Antibiotic resistance is a paramount issue in medical practice. Production of β-lactamases is an important means by which Gram-negative bacteria exhibit resistance to β-lactam antibiotics [1]. An increasing prevalence of pathogens producing extended-spectrum β-lactamases (ESBLs) has been reported worldwide, in hospitalized patients as well as in out-patients. Infections with ESBL-producing organisms are associated with higher rates of mortality, morbidity and healthcare expenditure. ESBLs are often plasmid mediated; although they occur predominantly in Escherichia coli and Klebsiella species, they have also been described in other genera of the Enterobacteriaceae [1,2]. Multidrug resistance in ESBL producers limits therapeutic options and subsequently facilitates the dissemination of these bacterial strains. Several studies have reported increasing resistance rates to commonly prescribed antibiotics such as ampicillin, trimethoprim/sulfamethoxazole and ciprofloxacin in clinical isolates of Escherichia coli [3-6]. Antibiotic resistance varies according to geographic location and is directly proportional to the use and misuse of antibiotics. Understanding the effect of drug resistance is crucial because of its deep impact on the treatment of infections. In Benin, little is known about resistance to third-generation cephalosporins or multidrug resistance in Escherichia coli. As antibiotic treatment must rely on antimicrobial susceptibility patterns, current knowledge of susceptibility is essential for appropriate therapy. Previous studies reported the presence of TEM and SHV in nosocomial and community-isolated Escherichia coli strains in Benin [7,8]. The aim of the present study was to characterize clinical isolates of Escherichia coli obtained from several infections in Cotonou in order to (i) determine the susceptibility patterns to antibiotics, (ii) evaluate the prevalence of ESBL and (iii) identify the genes involved in the resistance. 
Materials and methods: Bacterial strains The study was carried out in three hospitals in Cotonou, from September 2012 to April 2013. Consecutive, non-repeated nosocomial Escherichia coli isolates obtained from urine, pus, vaginal swab, sperm, blood, spinal fluid and catheter samples received in the bacteriology laboratories were analyzed. The isolated microorganisms were considered of nosocomial origin if they were isolated from patients admitted to the hospitals for 48 hours or more. The main pathogens were identified by cultural characteristics and standard biochemical procedures and were confirmed with the API 20 E (Biomérieux, Marcy l’Etoile, France) identification system. Antibiotic susceptibility testing and detection of ESBL Antibiotic susceptibility was performed by the disc diffusion method on Mueller Hinton agar (Bio-Rad, Marne la Coquette, France) according to the recommendations of the Antibiogram Committee of the French Society for Microbiology (Comité de l’Antibiogramme de la Société Française de Microbiologie) [9]. 
The following antibiotic discs (drug concentration in μg) were tested: amoxicillin (25 μg), amoxicillin/clavulanic acid (20/10 μg), ampicillin (10 μg), imipenem (10 μg), cefotaxime (30 μg), ceftriaxone (30 μg), ciprofloxacin (5 μg), norfloxacin (5 μg), amikacin (30 μg), gentamicin (15 μg) and trimethoprim/sulfamethoxazole (1.25/23.75 μg), all from Bio-Rad (Marne la Coquette, France). ESBL phenotypes were detected by double-disk synergy according to the method described by Jarlier et al. [10]. Disks of cefotaxime and ceftriaxone were placed 20 mm from an amoxicillin/clavulanate disk. Enhancement of the inhibition zone of the third-generation cephalosporin toward the amoxicillin/clavulanate disk indicated the possible presence of an ESBL. Escherichia coli ATCC 25922 was used as control. 
Biochemical detection of beta-lactamase production The presence of beta-lactamase was tested by an acidimetric method using benzylpenicillin as substrate [11]. The indicator solution was prepared by adding 1 ml of sterile distilled water and 100 μl of 1% phenol red solution to a vial of one million units of sodium benzylpenicillin (Crystapen, Glaxo). A solution of 1 N sodium hydroxide was added until the development of a violet color (pH 8.5). Several colonies were suspended in NaCl 9‰ to obtain a dense suspension, 150 μl of the penicillin phenol red solution was added, and the color development was observed within 1 hour. The solution turned yellow in the presence of beta-lactamase. Detection of beta-lactamase genes All the strains were further analyzed by PCR to detect beta-lactamase genes. Total DNA extraction was performed using the heat-shock method. Briefly, a single bacterial colony was inoculated into 5 ml of Luria-Bertani broth (Bio-Rad, Marne la Coquette, France) and incubated for 20 h at 37°C. Cells from 1.5 ml of the overnight culture were harvested by centrifugation at 7000 rpm for 5 min. 
After the supernatant was removed, the pellet was washed twice with sterile water and resuspended in 500 μl of sterile water. This suspension was incubated at 95°C for 10 min. The supernatant was stored at −20°C for PCR analysis. The presence of the beta-lactamase genes blaTEM and blaSHV was detected using specific primers: for the TEM genes, OT-1-F [5′-TTGGGTGCACGAGTGGGTTA-3′] and OT-2-R [5′-TAATTGTTGCCGGGAAGCTA-3′], which amplified a 465 bp fragment [12]; for the SHV genes, SHV-A [5′-CACTCAAGGATGTATTGTG-3′] and SHV-B [5′-TTAGCGTTGCCAGTGCTCG-3′], which amplified a fragment of 885 bp [13]. Amplification reactions were performed in a volume of 30 μl containing 4 μl of supernatant, (volume) PCR buffer (1x), MgCl2 (1.5 mM), (volume) dNTPs (200 μM), (volume) primer (0.5 mM) and 0.2 μl of Taq DNA polymerase (1.5 U). A Biometra thermal cycler was used for the amplification. The cycling conditions were as follows: TEM: initial denaturation at 94°C for 5 min, followed by 30 cycles of denaturation at 94°C for 30 s, annealing at 52°C for 30 s and extension at 72°C for 1 min; SHV: initial denaturation at 96°C for 15 s, followed by 30 cycles of denaturation at 96°C for 15 s, annealing at 50°C for 15 s and extension at 72°C for 2 min. Both PCR programs ended with a final extension step of 10 min. Amplified PCR products were separated on 1.5% agarose gels, stained with ethidium bromide and visualized under UV illumination. Statistical analysis Data were analyzed with Epi Info® version 3.5.4. Differences in antibiotic susceptibility among groups were analyzed with the Fisher exact test. A P-value < 0.05 was considered significant. 
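The Fisher exact comparison described above can be sketched with the Python standard library alone. The function below is an illustrative reconstruction of the two-sided test, not the authors' Epi Info analysis; the example counts are taken from the ciprofloxacin row of Table 1 (29/29 ESBL producers resistant versus 14/55 non-producers).

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Enumerates every table with the same row and column sums and adds up
    the hypergeometric probabilities of all tables at least as extreme
    (i.e. no more probable) than the observed one.
    """
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def prob(x):  # P(top-left cell == x) with margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # tiny tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Ciprofloxacin resistance, Table 1:
# ESBL producers: 29 resistant, 0 susceptible
# non-ESBL:       14 resistant, 41 susceptible
p = fisher_exact_two_sided([[29, 0], [14, 41]])
print(p)  # far below 0.05, consistent with the reported p ~ 1e-10
```

Because one cell is zero, the observed table is the most extreme possible, so the tiny p-value matches the order of magnitude reported in Table 1.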
We observed homogeneity in the resistance profile of the ESBL-producers, which were multidrug resistant, with resistance to at least 8 of the 11 antibiotics tested (Table 1).

Table 1. Antibiotic susceptibility pattern of ESBL and non-ESBL producing isolates

Antibiotic | Overall R (%) | Overall S (%) | ESBL resistant (n = 29) | Non-ESBL resistant (n = 55) | p-value*
AM  | 97.6 | 2.4  | 29 (100%)  | 50 (91%)   | 0.4
AMC | 85.7 | 14.3 | 24 (82.7%) | 15 (27.3%) | 10^−4
AMX | 95.2 | 4.8  | 29 (100%)  | 49 (89.1%) | 0.17
IPM | 3.6  | 96.4 | 01 (3.4%)  | 00 (0%)    | 0.3
CTX | 56.5 | 40.5 | 28 (96.5%) | 01 (1.8%)  | <10^−9
CRO | 44.7 | 58.3 | 28 (96.5%) | 01 (1.8%)  | 10^−3
CIP | 91.7 | 8.3  | 29 (100%)  | 14 (25.5%) | 10^−10
NOR | 53.6 | 46.4 | 28 (96.5%) | 14 (25.5%) | <10^−9
AN  | 83.3 | 16.7 | 21 (72.4%) | 19 (34.5%) | 0.04
G   | 45.2 | 54.8 | 24 (82.7%) | 12 (21.8%) | 10^−7
SXT | 86.9 | 13.1 | 29 (100%)  | 43 (78.2%) | 6 × 10^−3

*P value is for the comparison of resistance of ESBL-producers with non-producers. AM: ampicillin; AMC: amoxicillin/clavulanic acid; AMX: amoxicillin; IPM: imipenem; CTX: cefotaxime; CRO: ceftriaxone; CIP: ciprofloxacin; NOR: norfloxacin; AN: amikacin; G: gentamicin; SXT: trimethoprim/sulfamethoxazole.

According to the biochemical detection of beta-lactamase production, 87.0% of isolates were positive in the acidimetric test, hydrolyzing penicillin G. Of the 84 isolates screened for ESBL production by the double-disk test, 29 (35.5%) were positive using either cefotaxime or ceftriaxone. Among these ESBL-producers, 21 were from urinary tract infections (UTI) and 8 from other infection sites. Non-urinary infection sites appear to be disproportionately represented compared with UTI (27.6% versus 14%). This could be explained by the small number of samples drawn from non-urinary infections. 
Comparison of the susceptibility rates of ESBL producers with those of ESBL non-producers showed similar values for amoxicillin, ampicillin and imipenem; no statistically significant difference between the two groups was observed (Table 1). For all other tested antibiotics, ESBL strains were more resistant than non-ESBL strains. According to the PCR, the TEM and SHV genotypes were distributed as follows: blaTEM, 65 (77.4%) and blaSHV, 17 (20.2%). These genotypes occurred singly in 54 isolates (64.3%), with 51 (94.4%) for blaTEM and 3 (5.6%) for blaSHV. Both TEM and SHV genes were present in 14 isolates (16.7%) and absent in 16 isolates (19%), including 10 non-ESBL strains and 6 ESBL strains. Of the 29 isolates with an ESBL phenotype, PCR analysis revealed 72.4% of strains with the TEM genotype and 24.1% with the SHV genotype; 5 strains harbored both genes. Figure 1 shows the agarose gel of PCR products following amplification of SHV genes.

Figure 1. Agarose gel of PCR products following amplification of SHV genes. M: molecular weight marker; lanes 02 to 08: SHV-positive samples; T0: negative control without DNA.

As shown in Table 2, there was no difference in the resistance rate against ceftriaxone and cefotaxime, and no difference was found between the inhibition diameters of the two antibiotics with respect to the TEM and SHV genotypes.

Table 2. Association between ESBL genotype and antibiotic sensibility profile (cefotaxime/ceftriaxone), n (%)

Genotype  | Presence | S      | I         | R          | p-value (Fisher's exact test)
TEM       | +        | 0 (0%) | 0 (0%)    | 21 (100%)  | 0.27
TEM       | −        | 0 (0%) | 1 (12.5%) | 7 (87.5%)  |
SHV       | +        | 0 (0%) | 0 (0%)    | 7 (100%)   | 0.76
SHV       | −        | 0 (0%) | 1 (4.5%)  | 21 (95.5%) |
TEM + SHV | +        | 0 (0%) | 0 (0%)    | 5 (100%)   | 0.54
TEM + SHV | −        | 0 (0%) | 1 (16.7%) | 5 (83.3%)  |

S = sensitive; I = intermediate; R = resistant. + indicates the presence of the genotype; − indicates its absence. 
Discussion: Our study was carried out in three hospitals in Cotonou, Benin. Of the 84 isolates tested, 72 (85.7%) were from urinary tract infections (UTI). This finding is similar to results previously described in several studies [7,14-16] and was expected, since clinical isolates of Escherichia coli are predominantly responsible for UTI [17,18]. The highest susceptibility of E. coli isolates was found to imipenem (96.4%). In 2007, we showed that the resistance rate to imipenem was 5% in ESBL producers and 2% in non-ESBL isolates [7]. A similarly high susceptibility (93%) was observed by Muvunyi et al. in Rwanda [19]. Studies conducted in Iran [20], Morocco [21] and Nigeria [22] also reported high susceptibility of E. coli isolates to imipenem, at rates of 98.4%, 96.7% and 92.5% respectively. This finding may be due to the stability and high activity of carbapenems against most beta-lactamases. High resistance rates were found for antibiotics commonly used in Benin, such as ampicillin (97.6%), amoxicillin (95.2%) and trimethoprim/sulfamethoxazole (86.9%). These high rates may be due to uncontrolled consumption, a consequence of easy access to ineffective and cheap antibiotics. Among the tested fluoroquinolones, only 8.3% of the isolates were sensitive to ciprofloxacin. Norfloxacin was more efficient, with 44.6% of strains sensitive, although this is lower than the 52.5% sensitivity reported by Thakur et al. [6]. Previous studies in Benin reported lower resistance rates to ciprofloxacin than our study: Ahoyo et al. reported 48% and 16% resistance to ciprofloxacin in ESBL and non-ESBL E. coli from nosocomial infections, respectively [7]. 
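The p-values in Table 2 can be reproduced by hand. For the TEM row, the 2 × 2 comparison (resistant versus not resistant to cefotaxime/ceftriaxone) has fixed margins that permit only two possible tables, so Fisher's exact test reduces to two hypergeometric terms. The snippet below is an illustrative check, not part of the original analysis:

```python
from math import comb

# TEM row of Table 2: TEM+ isolates 21 resistant / 0 not resistant;
# TEM- isolates 7 resistant / 1 not resistant (the single intermediate
# isolate counted as "not resistant").
# With row sums 21 and 8 and column sums 28 and 1 fixed, the lone
# non-resistant isolate can sit in either row, giving two tables.
denom = comb(29, 28)
p_observed = comb(21, 21) * comb(8, 7) / denom      # isolate in TEM- row
p_alternative = comb(21, 20) * comb(8, 8) / denom   # isolate in TEM+ row

# Two-sided Fisher's exact p: sum the probabilities of all tables no
# more probable than the observed one (here only the observed table).
p = p_observed + (p_alternative if p_alternative <= p_observed else 0.0)
print(round(p, 2))  # 0.28, matching the order of the reported p = 0.27
```

The result, 8/29 ≈ 0.276, agrees with the non-significant p-value the authors report for the TEM genotype.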
Our survey conducted in 2004 showed resistance rates of 18% in ESBL producers and 14% in non-ESBL isolates from community-acquired UTI [8]. The higher rate of resistance to ciprofloxacin in the current study could be explained by the intensive use of ciprofloxacin over the past decade. Lower resistance rates to ciprofloxacin, which is usually prescribed for uncomplicated UTI and other infections, have been reported in Nepal (45%) [6], India (52.80%) [23] and Iran (60%) [20]. It has been shown that the uncontrolled use of quinolones in past years has led to growing resistance to these antibiotics, particularly ciprofloxacin, worldwide [24,25]. The use of ciprofloxacin has also selected for ESBL producers, owing to the widespread co-detection of both resistance mechanisms [26,27]. A high resistance rate to amoxicillin/clavulanic acid was also observed (85.7%). This finding is in agreement with previous investigations in Benin. In 2007, we reported resistance rates of 97.5% in ESBL and 53% in non-ESBL E. coli strains from nosocomial infections [7]. In our previous study performed in 2004, we detected 72% resistance in ESBL and 9% in non-ESBL E. coli isolates from out-patients [8]. We showed in this study that 87% of the isolated strains were positive for beta-lactamase detection by the acidimetric method, indicating that production of beta-lactamase is the most important mechanism of resistance to beta-lactam antibiotics. The high prevalence of ESBL-producing strains (35.5%) is similar to rates generally reported in developing countries. Our finding is very close to the ESBL prevalence reported in India (35.45%) [28] and similar to the prevalence in Tanzania (39.1%) [29]. Previous studies in Benin revealed relatively high prevalence rates of ESBL-producing E. coli. In 2007, Ahoyo et al. showed a prevalence of 22% in isolates from various nosocomial infections. In 2004, we found that 14.8% of E. coli strains isolated from urinary tract infections (UTI) were ESBL producers [8]. 
Lower ESBL prevalence was described in Morocco (1.3%) in strains isolated from UTI [30] and in Cameroon (16%) in strains isolated from feces in the community [25]. Among the ESBL-producers, non-urinary infection sites appear to be disproportionately represented compared with UTI (27.6% versus 14%). This could be explained by the small number of samples drawn from non-urinary infections. The comparison between ESBL-producing and non-ESBL strains showed that ESBL-producers were significantly more resistant to cephalosporins, quinolones, aminoglycosides, trimethoprim/sulfamethoxazole and amoxicillin/clavulanic acid than non-ESBL producers. The genes encoding ESBLs are usually located on transferable plasmids that may also carry other resistance determinants, such as those for resistance to aminoglycosides, tetracyclines, chloramphenicol, trimethoprim, sulfonamides, and quinolones [31,32]. The genotype TEM was predominant in ESBL and non-ESBL isolates, with 72.4% and 80% respectively. This pattern is generally observed in E. coli. The remaining 6 ESBL strains, which were neither TEM nor SHV, could harbor CTX-M genes. Widespread dissemination of these genes has been described in Africa and elsewhere [14,25,31]. Of the 65 strains harboring TEM genes, 59 showed a positive result for the detection of enzyme production by hydrolysis of penicillin. Of the 17 SHV-positive strains, 15 (80.95%) were positive for this test. In this study, the phenotypic screening for ESBL was performed using resistance to cefotaxime or ceftriaxone. The presence of the TEM and SHV genotypes could not predict the resistance pattern to these cephalosporins. Our finding is consistent with the results reported by Maina et al., who found no significant association between the TEM and SHV genotypes and susceptibility to cefotaxime and ceftriaxone [33]. Conclusions: This study reveals the presence of ESBL-producing Escherichia coli in clinical isolates from several hospitals in Cotonou. 
The use of some first-line treatment antibiotics such as penicillin and trimethoprim/sulfamethoxazole seems inappropriate. Antibiotic resistance surveillance and the determination of the molecular characteristics of ESBL isolates are essential to ensure the judicious use of antimicrobial drugs.
Background: Beta-lactams are the most commonly used group of antimicrobials worldwide. The presence of extended-spectrum beta-lactamases (ESBL) significantly affects the treatment of infections due to multidrug-resistant strains of Gram-negative bacilli. The aim of this study was to characterize the beta-lactamase resistance genes in Escherichia coli isolated from nosocomial infections in Cotonou, Benin. Methods: Escherichia coli strains were isolated from various biological samples such as urine, pus, vaginal swab, sperm, blood, spinal fluid and catheters. Isolated bacteria were tested against eleven commonly used antibiotics by the disc diffusion method according to NCCLS criteria for resistance analysis. Beta-lactamase production was determined by an acidimetric method with benzylpenicillin. Microbiological characterization of ESBL enzymes was done by the double-disc synergy test, and the resistance genes TEM and SHV were screened by specific PCR. Results: The ESBL phenotype was detected in 29 isolates (35.5%). The most active antibiotic was imipenem (96.4% susceptibility rate) followed by ceftriaxone (58.3%) and gentamicin (54.8%). High resistance rates were observed with amoxicillin (92.8%), ampicillin (94%) and trimethoprim/sulfamethoxazole (85.7%). The genotype TEM was predominant in ESBL and non-ESBL isolates, with 72.4% and 80% respectively. SHV-type beta-lactamase genes occurred in 24.1% of ESBL strains and in 18.1% of non-ESBL isolates. Conclusions: This study revealed the presence of ESBL-producing Escherichia coli in Cotonou. It also demonstrated high resistance rates to antibiotics commonly used for the treatment of infections. Continuous monitoring and judicious antibiotic usage are required.
Background: Antibiotic resistance is a paramount issue in medical practice. Production of β-lactamases is an important means by which Gram-negative bacteria exhibit resistance to β-lactam antibiotics [1]. An increasing prevalence of pathogens producing extended-spectrum β-lactamases (ESBLs) has been reported worldwide, in hospitalized patients as well as in out-patients. Infections with ESBL-producing organisms are associated with higher rates of mortality, morbidity and healthcare expenditure. ESBLs are often plasmid mediated; although they occur predominantly in Escherichia coli and Klebsiella species, they have also been described in other genera of the Enterobacteriaceae [1,2]. Multidrug resistance in ESBL producers limits therapeutic options and subsequently facilitates the dissemination of these bacterial strains. Several studies have reported increasing resistance rates to commonly prescribed antibiotics such as ampicillin, trimethoprim/sulfamethoxazole and ciprofloxacin in clinical isolates of Escherichia coli [3-6]. Antibiotic resistance varies according to geographic location and is directly proportional to the use and misuse of antibiotics. Understanding the effect of drug resistance is crucial because of its deep impact on the treatment of infections. In Benin, little is known about resistance to third-generation cephalosporins or multidrug resistance in Escherichia coli. As antibiotic treatment must rely on antimicrobial susceptibility patterns, current knowledge of susceptibility is essential for appropriate therapy. Previous studies reported the presence of TEM and SHV in nosocomial and community-isolated Escherichia coli strains in Benin [7,8]. The aim of the present study was to characterize clinical isolates of Escherichia coli obtained from several infections in Cotonou in order to (i) determine the susceptibility patterns to antibiotics, (ii) evaluate the prevalence of ESBL and (iii) identify the genes involved in the resistance. 
Conclusions: This study reveals the presence of ESBL-producing Escherichia coli in clinical isolates from several hospitals in Cotonou. The use of some first-line treatment antibiotics such as penicillin and trimethoprim/sulfamethoxazole seems inappropriate. Antibiotic resistance surveillance and the determination of the molecular characteristics of ESBL isolates are essential to ensure the judicious use of antimicrobial drugs.
Background: Beta-lactams are the most commonly used group of antimicrobials worldwide. The presence of extended-spectrum beta-lactamases (ESBL) significantly affects the treatment of infections due to multidrug-resistant strains of Gram-negative bacilli. The aim of this study was to characterize the beta-lactamase resistance genes in Escherichia coli isolated from nosocomial infections in Cotonou, Benin. Methods: Escherichia coli strains were isolated from various biological samples such as urine, pus, vaginal swab, sperm, blood, spinal fluid and catheters. Isolated bacteria were tested against eleven commonly used antibiotics by the disc diffusion method according to NCCLS criteria for resistance analysis. Beta-lactamase production was determined by an acidimetric method with benzylpenicillin. Microbiological characterization of ESBL enzymes was done by the double-disc synergy test, and the resistance genes TEM and SHV were screened by specific PCR. Results: The ESBL phenotype was detected in 29 isolates (35.5%). The most active antibiotic was imipenem (96.4% susceptibility rate) followed by ceftriaxone (58.3%) and gentamicin (54.8%). High resistance rates were observed with amoxicillin (92.8%), ampicillin (94%) and trimethoprim/sulfamethoxazole (85.7%). The genotype TEM was predominant in ESBL and non-ESBL isolates, with 72.4% and 80% respectively. SHV-type beta-lactamase genes occurred in 24.1% of ESBL strains and in 18.1% of non-ESBL isolates. Conclusions: This study revealed the presence of ESBL-producing Escherichia coli in Cotonou. It also demonstrated high resistance rates to antibiotics commonly used for the treatment of infections. Continuous monitoring and judicious antibiotic usage are required.
5,417
309
10
[ "esbl", "μg", "resistance", "isolates", "shv", "10", "30", "strains", "genes", "non" ]
[ "test", "test" ]
null
[CONTENT] Escherichia coli | ESBL | Resistance gene [SUMMARY]
null
[CONTENT] Escherichia coli | ESBL | Resistance gene [SUMMARY]
[CONTENT] Escherichia coli | ESBL | Resistance gene [SUMMARY]
[CONTENT] Escherichia coli | ESBL | Resistance gene [SUMMARY]
[CONTENT] Escherichia coli | ESBL | Resistance gene [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Benin | Cross Infection | Drug Resistance, Multiple, Bacterial | Escherichia coli | Escherichia coli Infections | Genotype | Humans | Microbial Sensitivity Tests | beta-Lactamases | beta-Lactams [SUMMARY]
null
[CONTENT] Anti-Bacterial Agents | Benin | Cross Infection | Drug Resistance, Multiple, Bacterial | Escherichia coli | Escherichia coli Infections | Genotype | Humans | Microbial Sensitivity Tests | beta-Lactamases | beta-Lactams [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Benin | Cross Infection | Drug Resistance, Multiple, Bacterial | Escherichia coli | Escherichia coli Infections | Genotype | Humans | Microbial Sensitivity Tests | beta-Lactamases | beta-Lactams [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Benin | Cross Infection | Drug Resistance, Multiple, Bacterial | Escherichia coli | Escherichia coli Infections | Genotype | Humans | Microbial Sensitivity Tests | beta-Lactamases | beta-Lactams [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Benin | Cross Infection | Drug Resistance, Multiple, Bacterial | Escherichia coli | Escherichia coli Infections | Genotype | Humans | Microbial Sensitivity Tests | beta-Lactamases | beta-Lactams [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] esbl | μg | resistance | isolates | shv | 10 | 30 | strains | genes | non [SUMMARY]
null
[CONTENT] esbl | μg | resistance | isolates | shv | 10 | 30 | strains | genes | non [SUMMARY]
[CONTENT] esbl | μg | resistance | isolates | shv | 10 | 30 | strains | genes | non [SUMMARY]
[CONTENT] esbl | μg | resistance | isolates | shv | 10 | 30 | strains | genes | non [SUMMARY]
[CONTENT] esbl | μg | resistance | isolates | shv | 10 | 30 | strains | genes | non [SUMMARY]
[CONTENT] resistance | antibiotics | coli | escherichia | escherichia coli | infections | drug resistance | increasing | studies reported | lactamases [SUMMARY]
null
[CONTENT] esbl | genotype | shv | isolates | non | 10 | producers | resistance | 100 | table [SUMMARY]
[CONTENT] use | antibiotics | isolates | esbl | escherichia coli clinical isolates | esbl producing escherichia coli | surveillance determination molecular | characteristics esbl | characteristics esbl isolates | characteristics esbl isolates primordial [SUMMARY]
[CONTENT] μg | esbl | resistance | solution | isolates | 10 | antibiotics | 30 | shv | analyzed [SUMMARY]
[CONTENT] μg | esbl | resistance | solution | isolates | 10 | antibiotics | 30 | shv | analyzed [SUMMARY]
[CONTENT] ||| ESBL | gram ||| Escherichia | Cotonou | Benin [SUMMARY]
null
[CONTENT] 29 | 35.5% ||| 96.4% | 58.3% | 54.8% ||| 92.8% | 94% | 85.7% ||| TEM | ESBL | 72.4% | 80% ||| SHV | 24.1% | ESBL | 18.1% [SUMMARY]
[CONTENT] ESBL | Eschericiha | Cotonou ||| ||| [SUMMARY]
[CONTENT] ||| ESBL | gram ||| Escherichia | Cotonou | Benin ||| Escherichia ||| eleven | NCCLS ||| ||| ESBL | TEM | SHV | PCR ||| ||| 29 | 35.5% ||| 96.4% | 58.3% | 54.8% ||| 92.8% | 94% | 85.7% ||| TEM | ESBL | 72.4% | 80% ||| SHV | 24.1% | ESBL | 18.1% ||| ESBL | Eschericiha | Cotonou ||| ||| [SUMMARY]
[CONTENT] ||| ESBL | gram ||| Escherichia | Cotonou | Benin ||| Escherichia ||| eleven | NCCLS ||| ||| ESBL | TEM | SHV | PCR ||| ||| 29 | 35.5% ||| 96.4% | 58.3% | 54.8% ||| 92.8% | 94% | 85.7% ||| TEM | ESBL | 72.4% | 80% ||| SHV | 24.1% | ESBL | 18.1% ||| ESBL | Eschericiha | Cotonou ||| ||| [SUMMARY]
Incidence and Seasonal Variation of Distal Radius Fractures in Korea: a Population-based Study.
29359536
The present study aimed to investigate the incidence and seasonal variation of distal radius fractures (DRFs) in Korea.
BACKGROUND
We analyzed a nationwide database acquired from the Korean Health Insurance Review and Assessment Service from 2011 to 2015. We used International Classification of Diseases, 10th revision codes and procedure codes to identify patients of all ages with newly diagnosed DRFs.
METHODS
An average of about 130,000 DRFs occurred annually in Korea. The incidence of DRF by age group was highest in the 10 to 14-year-old group for males and in the 70 to 79-year-old group for females, with a rapid increase in incidence after 50 years of age. The peak incidence of DRF occurred during winter; however, the winter incidence varied greatly from year to year compared with that of other seasons. The incidence of DRFs during the winter season was correlated with the average temperature.
RESULTS
The annual incidence of DRF was about 130,000 cases in Korea. The incidence increased during intense cold surges in winter. Active preventive measures are recommended, especially in women over 50 years of age, considering the higher incidence in this group.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Child", "Child, Preschool", "Databases, Factual", "Female", "Humans", "Incidence", "Infant", "Infant, Newborn", "Male", "Middle Aged", "Radius Fractures", "Republic of Korea", "Seasons", "Young Adult" ]
5785624
INTRODUCTION
Distal radius fractures (DRFs) are one of the most common fractures not only among adults but also among children and adolescents [1,2]. More than 640,000 cases of DRFs are reported annually in the United States [3], and almost 71,000 cases are reported in the British adult population [4]. The high incidence of DRF causes a substantial economic burden and other social problems including decreased school attendance, lost work hours, loss of independence, and lasting disability [5]. Therefore, DRF prevention is important, and it is essential to understand the epidemiology of DRFs to achieve this. Since most DRFs are caused by low-energy trauma, such as a fall from a standing height [6], slippery weather conditions may cause fracture epidemics [7]. Thus, the incidence of DRF, which shows a clear seasonal variation, is known to increase in winter [8,9]. However, most previous epidemiological studies of the incidence and seasonal variation of DRFs have been confined to European countries and North America [7,10]. Few studies have examined seasonal variation in DRFs in Asia [11]. Lee et al. [12] reported seasonal variations in DRF incidence in Korea; however, their study had some limitations: it had a short-term study period and was conducted in a single center. Thus, our aim in the present study was to examine the incidence and seasonal variation of DRFs in Korea based on an analysis of nationwide data acquired from the Korean Health Insurance Review and Assessment Service (HIRA).
METHODS
Data source The authors analyzed nationwide data obtained from the HIRA from 2011 to 2015. In Korea, 97% of the entire population is legally obligated to enroll in the National Health Insurance (NHI) program. Patients only pay about 30% of the total medical cost to healthcare institutions, and all healthcare providers submit claims data for inpatient and outpatient management, including diagnostic codes, which are classified according to the International Classification of Diseases, 10th revision (ICD-10), procedure codes, prescription records, demographic information, and direct medical costs to the HIRA to request reimbursement for the remaining 70% of the medical cost from the NHI service. Of the remaining 3% of the population not registered in the NHI program, excluding illegal residents, most receive healthcare coverage through the Medical Aid Program. The claims data for patients covered by the Medical Aid Program are also reviewed by the HIRA. Hence, the medical records of almost all newly admitted or hospitalized patients in healthcare institutions in Korea are prospectively recorded by the HIRA.
Data collection The authors used ICD-10 codes and procedure codes to identify patients of all ages with newly diagnosed DRFs in Korea between 2011 and 2015 (Table 1). Although HIRA data provide patient identifiers, if a patient with a DRF makes multiple visits to a healthcare institution, claims data for the number of visits are generated. To avoid the risk of multiple counting, the authors used a previously reported method,13 as follows. First, patients with surgically treated DRFs were identified with ICD-10 codes (S52.5 and S52.6) and operation codes (N0603, N0613, N0607, N0617, N0993, N0994, and N0983) (Table 1). Each operation code was counted as a single case. However, percutaneous pinning and external fixation are often performed simultaneously. Therefore, if a patient received the two operations on the same day, they were considered to be performed on a single DRF. The same criterion was applied when a patient received open reduction with internal fixation and external fixation on the same day. Subsequently, in order to identify patients with DRFs who received conservative treatment, those who underwent surgical treatment were excluded from the HIRA data using patient identifiers, and those with splint or cast codes (T6020, T6030, T6151, and T6152) for DRF ICD-10 codes (S52.5 and S52.6) were included (Table 1). For conservative treatment, multiple splint or cast codes are commonly entered for a single case of DRF because an initially applied splint can be substituted by a cast at a later stage, or a long-arm cast can be changed to a short-arm cast, etc. For this reason, additional codes entered after a period of 6 months following the initial entry of splint or cast codes were recounted.14 ICD-10 = International Classification of Diseases, 10th revision, ORIF = open reduction with internal fixation. We examined patient data to identify the year and month in which the fracture occurred, and the patient's age and sex. We first counted the number of DRFs for each year and then calculated age-adjusted and sex-specific incidence rates per 100,000 persons with DRFs using the 2010 United States population as the standard population. Estimated year-specific, age-specific, and sex-specific populations in Korea were obtained from the “Statistics Korea” website (http://www.kosis.kr). Patients were divided into groups according to age (5-year intervals), and then the incidence of DRFs per age group per study year was calculated according to the population of each age group in Korea for a given study year (per 100,000 persons). We divided the year into 4 seasons to examine the seasonal variation in the incidence of DRFs; March through May was defined as spring, June through August as summer, September through November as autumn, and December through February as winter. Seasonal variations in the incidence of DRFs were examined according to year and age group (15-year intervals). Lastly, the relationship between the monthly incidence of DRFs and the monthly national average temperature in Korea during winter was investigated.
Statistical analysis The annual percentage changes in the age-adjusted incidence rate of DRFs were calculated from 2011 to 2015 using joinpoint regression analysis (Joinpoint Regression Program, version 4.3.1.0; National Cancer Institute, Bethesda, MD, USA). All other data sets were analyzed using SAS statistical software version 9.13 (SAS Institute, Cary, NC, USA). Univariate analysis was conducted using the t-test. The correlation between the monthly number of DRFs and the national monthly mean temperatures in winter (https://data.kma.go.kr) was analyzed using the Spearman rank test. Values of P < 0.05 were considered significant.
Ethics statement This study protocol was exempted for review by the Institutional Review Board of the Hanyang University Hospital (HYUH 2017-06-027) in accordance with the exemption criteria.
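The Spearman rank test applied above (monthly DRF counts against national monthly mean winter temperature) can be sketched without any statistics library. All monthly figures below are invented for illustration; they are not HIRA data.

```python
# Dependency-free sketch of a Spearman rank correlation, as used to relate
# monthly winter temperatures to monthly DRF counts. Data are hypothetical.

def ranks(xs):
    """Return 1-based ranks; ties receive the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical winter months: mean temperature (deg C) vs. DRF count.
mean_temp_c = [-1.2, -4.5, 0.3, -2.8, -0.9, -5.1, 1.0, -3.3, -1.7, -0.2, -4.0, -2.1]
drf_counts = [15000, 16900, 14100, 16100, 15200, 17400, 13800, 16300, 15600, 14400, 16800, 15900]

rho = spearman_rho(mean_temp_c, drf_counts)
print(f"Spearman rho = {rho:.3f}")
```

With real data one would instead call `scipy.stats.spearmanr`, which also returns the P value; the sign of rho carries the seasonal claim (colder months, more fractures).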
RESULTS
A total of 647,655 DRFs (621,451 patients) occurred from 2011 to 2015; thus, an average of about 130,000 DRFs occurred annually in Korea. Of these, 259,453 cases (249,457 patients) involved male patients and 388,202 cases (371,994 patients) involved female patients, a male to female ratio of 1:1.5. The mean age of patients with DRFs was 47.2 years (standard deviation [SD] ± 25.5). The mean age of men was 30.2 years (SD ± 23.2), while that of women was significantly higher at 58.6 years (SD ± 20.2, P < 0.001). The total number of DRFs increased from 2011 to 2013 (128,000 and 137,054, respectively) and then decreased after 2013 (129,687 in 2014 and 116,750 in 2015) (Table 2). The age-adjusted incidence rate per 100,000 persons increased between 2011 and 2012 (278.0 and 291.7, respectively) and then decreased after 2012 (288.3 in 2013 and 246.0 in 2015) (Fig. 1, Table 2). The annual percentage change in the age-adjusted rate over the 5 years was −2.9% (95% confidence interval [CI], −8.2% to 2.6%), which was not statistically significant (P = 0.190). aUse of the United States population in 2010 as the control. Age-adjusted and sex-specific rate of distal radius fractures per 100,000 persons in Korea from 2011 to 2015. The 5-year age group-specific number of DRFs from 2011 to 2015 was highest in the 10 to 14-year-old age group (124,591 cases, 19%), followed by the 55 to 59-year-old age group (70,978 cases, 11%) and the 60 to 64-year-old age group (61,608 cases, 10%) (Fig. 2). Among those under 45 years of age, the incidence was higher in males than in females; in particular, boys accounted for 82% (101,796 cases) of DRF cases in the 10 to 14-year-old age group. From 45 years of age onward, women had a higher incidence of DRFs than their male counterparts, and 82% (317,687) of the 387,607 DRFs in persons over 50 years occurred in the female population (Fig. 2, Table 3).
The incidence rate per 100,000 persons by age group peaked in the 10 to 14-year-old age group, then decreased, and increased again with age (Fig. 3A). In the female population, the annual incidence rate per 100,000 persons was highest in the 70 to 79-year-old age group, while the 10 to 14-year-old age group showed the highest rate in the male population (Fig. 3B and C). The total number of cases with distal radius fractures, by age group, from 2011 to 2015. Incidence rate of distal radius fractures per 100,000 persons, by age group — (A) Total population, (B) Men, and (C) Women. During the study period, the number of DRFs was highest in winter (33%), followed by autumn (23%), spring (23%), and summer (22%) (Table 4). The annual change in the number of DRFs in winter was more prominent than in other seasons (Fig. 4, Table 4). While the incidence proportion of DRFs in children, adolescents, and young men under 30 years of age peaked during spring, females over 15 years and men over 30 years of age had the highest incidence proportion during winter (Fig. 5). During the study period, the highest mean monthly number of DRFs occurred in January (15,722 cases, 12%), followed by December (15,508 cases, 12%) and February (11,321 cases, 9%) (Fig. 6). As the national monthly mean winter temperature increased, the monthly number of DRFs tended to decrease, and the correlation was significant (P = 0.003, r = −0.704) (Fig. 7). Values are presented as number (%). Seasonal variation in the incidence of distal radius fractures by year in the total population. Seasonal variation in the incidence of distal radius fractures by age group from 2011 to 2015 — (A) Men and (B) Women. The mean monthly number of distal radius fractures from 2011 to 2015. The correlation between the monthly number of distal radius fractures and the national monthly mean winter temperatures.
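The annual percentage change (APC) quoted above comes from joinpoint regression; with no joinpoints it reduces to an ordinary log-linear least-squares fit, APC = (e^slope - 1) * 100. A sketch using the quoted age-adjusted rates follows; the 2014 rate is not quoted in the text, so the series fills it with an invented placeholder.

```python
# Sketch of the annual-percent-change calculation: regress ln(rate) on year,
# then convert the slope to a percentage. The 2014 value (270.0) is invented.
import math

years = [2011, 2012, 2013, 2014, 2015]
rates = [278.0, 291.7, 288.3, 270.0, 246.0]  # per 100,000; 2014 is a placeholder

n = len(years)
xs = [y - years[0] for y in years]          # center years at 0..4
ys = [math.log(r) for r in rates]           # log-linear model
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
apc = (math.exp(slope) - 1) * 100
print(f"APC = {apc:.1f}% per year")
```

The full joinpoint analysis additionally searches for years where the trend changes and tests each segment's slope, which is why the published estimate (−2.9%) need not match a single-segment fit exactly.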
null
null
[ "Data source", "Data collection", "Statistical analysis", "Ethics statement" ]
[ "The authors analyzed nationwide data obtained from the HIRA from 2011 to 2015. In Korea, 97% of the entire population is legally obligated to enroll in the National Health Insurance (NHI) program. Patients only pay about 30% of the total medical cost to healthcare institutions, and all healthcare providers submit claims data for inpatient and outpatient management, including diagnostic codes, which are classified according to the International Classification of Diseases, 10th revision (ICD-10), procedure codes, prescription records, demographic information, and direct medical costs to the HIRA to request reimbursement for the remaining 70% of the medical cost from the NHI service. Of the remaining 3% of the population not registered in the NHI program, excluding illegal residents, most receive healthcare coverage through the Medical Aid Program. The claims data for patients covered by the Medical Aid Program are also reviewed by the HIRA. Hence, the medical records of almost all newly admitted or hospitalized patients in healthcare institutions in Korea are prospectively recorded by the HIRA.", "The authors used ICD-10 codes and procedure codes to identify patients of all ages with newly diagnosed DRFs in Korea between 2011 and 2015 (Table 1). Although HIRA data provide patient identifiers, if a patient with a DRF makes multiple visits to a healthcare institution, claims data for the number of visits are generated. To avoid the risk of multiple counting, the authors used a previously reported method,13 as follows. First, patients with surgically treated DRFs were identified with ICD-10 codes (S52.5 and S52.6) and operation codes (N0603, N0613, N0607, N0617, N0993, N0994, and N0983) (Table 1). Each operation code was counted as a single case. However, percutaneous pinning and external fixation are often performed simultaneously. Therefore, if a patient received the two operations on the same day, they were considered to be performed on a single DRF. 
The same criterion was applied when a patient received open reduction with internal fixation and external fixation on the same day. Subsequently, in order to identify patients with DRFs who received conservative treatment, those who underwent surgical treatment were excluded from the HIRA data using patient identifiers, and those with splint or cast codes (T6020, T6030, T6151, and T6152) for DRF ICD-10 codes (S52.5 and S52.6) were included (Table 1). For conservative treatment, multiple splint or cast codes are commonly entered for a single case of DRF because an initially applied splint can be substituted by a cast at a later stage, or a long-arm cast can be changed to a short-arm cast, etc. For this reason, additional codes entered after a period of 6 months following the initial entry of splint or cast codes were recounted.14\nICD-10 = International Classification of Diseases, 10th revision, ORIF = open reduction with internal fixation.\nWe examined patient data to identify the year and month in which the fracture occurred, and the patient's age and sex. We first counted the number of DRFs for each year and then calculated age-adjusted and sex-specific incidence rates per 100,000 persons with DRFs using the 2010 United States population as the standard population. Estimated year-specific, age-specific, and sex-specific populations in Korea were obtained from the “Statistics Korea” website (http://www.kosis.kr). Patients were divided into groups according to age (5-year intervals), and then the incidence of DRFs per age group per study year was calculated according to the population of each age group in Korea for a given study year (per 100,000 persons).\nWe divided the year into 4 seasons to examine the seasonal variation in the incidence of DRFs; March through May was defined as spring, June through August as summer, September through November as autumn, and December through February as winter. 
Seasonal variations in the incidence of DRFs were examined according to year and age group (15-year intervals). Lastly, the relationship between the monthly incidence of DRFs and the monthly national average temperature in Korea during winter was investigated.", "The annual percentage changes in the age-adjusted incidence rate of DRFs were calculated from 2011 to 2015 using joinpoint regression analysis (Joinpoint Regression Program, version 4.3.1.0; National Cancer Institute, Bethesda, MD, USA). All other data sets were analyzed using SAS statistical software version 9.13 (SAS Institute, Cary, NC, USA). Univariate analysis was conducted using the t-test. The correlation between the monthly number of DRFs and the national monthly mean temperatures in winter (https://data.kma.go.kr) was analyzed using the Spearman rank test. Values of P < 0.05 were considered significant.", "This study protocol was exempted for review by the Institutional Review Board of the Hanyang University Hospital (HYUH 2017-06-027) in accordance with the exemption criteria." ]
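The age-adjusted rates described in the methods come from direct standardization: each age band's crude rate is weighted by that band's share of the 2010 US standard population. A minimal sketch follows; the age bands, case counts, and populations below are invented for illustration, not the study's data.

```python
# Direct standardization sketch for an age-adjusted incidence rate.
# Tuples are (cases, korean_population, us_standard_population) per age band.
# All numbers are hypothetical.
age_bands = [
    (24000, 2_900_000, 20_000_000),  # e.g. 10-14 y
    (14000, 4_200_000, 21_000_000),  # e.g. 55-59 y
    (12000, 2_600_000, 16_000_000),  # e.g. 60-64 y
]

std_total = sum(std for _, _, std in age_bands)

# Weight each band's crude rate by its share of the standard population,
# then scale to cases per 100,000 persons.
adjusted_per_100k = sum(
    (cases / pop) * (std / std_total) for cases, pop, std in age_bands
) * 100_000

print(f"age-adjusted rate = {adjusted_per_100k:.1f} per 100,000")
```

The adjusted rate always lies between the lowest and highest band-specific crude rates; using a fixed standard population is what makes rates comparable across years despite Korea's shifting age structure.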
[ null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Data source", "Data collection", "Statistical analysis", "Ethics statement", "RESULTS", "DISCUSSION" ]
[ "Distal radius fractures (DRFs) are one of the most common fractures not only among adults but also among children and adolescents [1,2]. More than 640,000 cases of DRFs are reported annually in the United States [3], and almost 71,000 cases are reported in the British adult population [4]. The high incidence of DRF causes a substantial economic burden and other social problems including decreased school attendance, lost work hours, loss of independence, and lasting disability [5]. Therefore, DRF prevention is important, and it is essential to understand the epidemiology of DRFs to achieve this.\nSince most DRFs are caused by low-energy trauma, such as a fall from a standing height [6], slippery weather conditions may cause fracture epidemics [7]. Thus, the incidence of DRF, which shows a clear seasonal variation, is known to increase in winter [8,9]. However, most previous epidemiological studies of the incidence and seasonal variation of DRFs have been confined to European countries and North America [7,10]. Few studies have examined seasonal variation in DRFs in Asia [11]. Lee et al. [12] reported seasonal variations in DRF incidence in Korea; however, their study had some limitations: it had a short-term study period and was conducted in a single center.\nThus, our aim in the present study was to examine the incidence and seasonal variation of DRFs in Korea based on an analysis of nationwide data acquired from the Korean Health Insurance Review and Assessment Service (HIRA).", " Data source The authors analyzed nationwide data obtained from the HIRA from 2011 to 2015. In Korea, 97% of the entire population is legally obligated to enroll in the National Health Insurance (NHI) program. 
Patients only pay about 30% of the total medical cost to healthcare institutions, and all healthcare providers submit claims data for inpatient and outpatient management, including diagnostic codes, which are classified according to the International Classification of Diseases, 10th revision (ICD-10), procedure codes, prescription records, demographic information, and direct medical costs to the HIRA to request reimbursement for the remaining 70% of the medical cost from the NHI service. Of the remaining 3% of the population not registered in the NHI program, excluding illegal residents, most receive healthcare coverage through the Medical Aid Program. The claims data for patients covered by the Medical Aid Program are also reviewed by the HIRA. Hence, the medical records of almost all newly admitted or hospitalized patients in healthcare institutions in Korea are prospectively recorded by the HIRA.\nThe authors analyzed nationwide data obtained from the HIRA from 2011 to 2015. In Korea, 97% of the entire population is legally obligated to enroll in the National Health Insurance (NHI) program. Patients only pay about 30% of the total medical cost to healthcare institutions, and all healthcare providers submit claims data for inpatient and outpatient management, including diagnostic codes, which are classified according to the International Classification of Diseases, 10th revision (ICD-10), procedure codes, prescription records, demographic information, and direct medical costs to the HIRA to request reimbursement for the remaining 70% of the medical cost from the NHI service. Of the remaining 3% of the population not registered in the NHI program, excluding illegal residents, most receive healthcare coverage through the Medical Aid Program. The claims data for patients covered by the Medical Aid Program are also reviewed by the HIRA. 
Hence, the medical records of almost all newly admitted or hospitalized patients in healthcare institutions in Korea are prospectively recorded by the HIRA.\n Data collection The authors used ICD-10 codes and procedure codes to identify patients of all ages with newly diagnosed DRFs in Korea between 2011 and 2015 (Table 1). Although HIRA data provide patient identifiers, if a patient with a DRF makes multiple visits to a healthcare institution, claims data for the number of visits are generated. To avoid the risk of multiple counting, the authors used a previously reported method,13 as follows. First, patients with surgically treated DRFs were identified with ICD-10 codes (S52.5 and S52.6) and operation codes (N0603, N0613, N0607, N0617, N0993, N0994, and N0983) (Table 1). Each operation code was counted as a single case. However, percutaneous pinning and external fixation are often performed simultaneously. Therefore, if a patient received the two operations on the same day, they were considered to be performed on a single DRF. The same criterion was applied when a patient received open reduction with internal fixation and external fixation on the same day. Subsequently, in order to identify patients with DRFs who received conservative treatment, those who underwent surgical treatment were excluded from the HIRA data using patient identifiers, and those with splint or cast codes (T6020, T6030, T6151, and T6152) for DRF ICD-10 codes (S52.5 and S52.6) were included (Table 1). For conservative treatment, multiple splint or cast codes are commonly entered for a single case of DRF because an initially applied splint can be substituted by a cast at a later stage, or a long-arm cast can be changed to a short-arm cast, etc. 
For this reason, additional codes entered after a period of 6 months following the initial entry of splint or cast codes were recounted.14\nICD-10 = International Classification of Diseases, 10th revision, ORIF = open reduction with internal fixation.\nWe examined patient data to identify the year and month in which the fracture occurred, and the patient's age and sex. We first counted the number of DRFs for each year and then calculated age-adjusted and sex-specific incidence rates per 100,000 persons with DRFs using the 2010 United States population as the standard population. Estimated year-specific, age-specific, and sex-specific populations in Korea were obtained from the “Statistics Korea” website (http://www.kosis.kr). Patients were divided into groups according to age (5-year intervals), and then the incidence of DRFs per age group per study year was calculated according to the population of each age group in Korea for a given study year (per 100,000 persons).\nWe divided the year into 4 seasons to examine the seasonal variation in the incidence of DRFs; March through May was defined as spring, June through August as summer, September through November as autumn, and December through February as winter. Seasonal variations in the incidence of DRFs were examined according to year and age group (15-year intervals). Lastly, the relationship between the monthly incidence of DRFs and the monthly national average temperature in Korea during winter was investigated.\nThe authors used ICD-10 codes and procedure codes to identify patients of all ages with newly diagnosed DRFs in Korea between 2011 and 2015 (Table 1). Although HIRA data provide patient identifiers, if a patient with a DRF makes multiple visits to a healthcare institution, claims data for the number of visits are generated. To avoid the risk of multiple counting, the authors used a previously reported method,13 as follows. 
Statistical analysis

The annual percentage changes in the age-adjusted incidence rate of DRFs were calculated from 2011 to 2015 using joinpoint regression analysis (Joinpoint Regression Program, version 4.3.1.0; National Cancer Institute, Bethesda, MD, USA). All other data sets were analyzed using SAS statistical software version 9.13 (SAS Institute, Cary, NC, USA). Univariate analysis was conducted using the t-test. The correlation between the monthly number of DRFs and the national monthly mean temperatures in winter (https://data.kma.go.kr) was analyzed using the Spearman rank test. Values of P < 0.05 were considered significant.
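With no joinpoints, joinpoint regression reduces to a log-linear fit of the rate on calendar year, and the annual percentage change (APC) is 100 × (e^b − 1), where b is the fitted slope. The following is a minimal zero-joinpoint sketch, not the NCI Joinpoint program itself, which additionally searches for change points and computes confidence intervals.

```python
import math

def annual_percent_change(years, rates):
    """APC from an ordinary least-squares fit of log(rate) = a + b * year:
    APC = 100 * (exp(b) - 1)."""
    n = len(years)
    logs = [math.log(r) for r in rates]
    ybar = sum(years) / n
    lbar = sum(logs) / n
    # Slope of the log-linear fit (closed-form OLS for one predictor).
    b = sum((y - ybar) * (l - lbar) for y, l in zip(years, logs)) / \
        sum((y - ybar) ** 2 for y in years)
    return 100.0 * (math.exp(b) - 1.0)
```

A rate series that falls by exactly 3% each year yields an APC of −3.0, matching the interpretation of the −2.9% estimate reported below.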
Ethics statement

This study protocol was exempted from review by the Institutional Review Board of Hanyang University Hospital (HYUH 2017-06-027) in accordance with the exemption criteria.
RESULTS

A total of 647,655 DRFs (621,451 patients) occurred from 2011 to 2015; thus, an average of about 130,000 DRFs occurred annually in Korea. Of these, 259,453 cases (249,457 patients) involved male patients and 388,202 cases (371,994 patients) involved female patients, a male-to-female ratio of 1:1.5. The mean age of patients with DRFs was 47.2 years (standard deviation [SD] ± 25.5).
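The age-adjusted rates reported in this study come from direct standardization: each age group's crude rate is weighted by that group's share of the standard population (here, the 2010 United States population). A minimal sketch with hypothetical age groups and counts:

```python
def age_adjusted_rate(cases, population, std_population):
    """Directly standardized rate per 100,000 persons.

    All three dicts are keyed by age group: observed case counts, the
    study population at risk, and the standard population used as weights.
    """
    total_std = sum(std_population.values())
    rate = 0.0
    for group, std_pop in std_population.items():
        crude = cases[group] / population[group]   # group-specific crude rate
        rate += crude * (std_pop / total_std)      # weight by standard share
    return rate * 100_000
```

For example, with two hypothetical groups whose crude rates are 10 and 90 per 100,000 and standard-population weights of 3:1, the adjusted rate is 30 per 100,000, reflecting the heavier weight on the low-rate group.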
The mean age of men was 30.2 years (SD ± 23.2), while that of women was significantly higher at 58.6 years (SD ± 20.2, P < 0.001). The total number of DRFs increased from 2011 to 2013 (128,000 and 137,054, respectively) and then decreased after 2013 (129,687 in 2014 and 116,750 in 2015) (Table 2). The age-adjusted incidence rate per 100,000 persons increased between 2011 and 2012 (278.0 and 291.7, respectively) and then decreased after 2012 (288.33 in 2013 and 246.00 in 2015) (Fig. 1, Table 2). The annual percentage change in the age-adjusted rate over the 5 years was −2.9% (95% confidence interval [CI], −8.2% to 2.6%), which was not statistically significant (P = 0.190).

aUse of the United States population in 2010 as the control.

Age-adjusted and sex-specific rate of distal radius fractures per 100,000 persons in Korea from 2011 to 2015.

The 5-year age group-specific number of DRFs from 2011 to 2015 was highest in the 10 to 14-year-old age group (124,591 cases, 19%), followed by the 55 to 59-year-old (70,978 cases, 11%) and 60 to 64-year-old (61,608 cases, 10%) age groups (Fig. 2). Among those under 45 years old, the incidence was higher in males than in females; in particular, boys accounted for 82% (101,796 cases) of DRF cases in the 10 to 14-year-old age group. Women aged 45 years and older had a higher incidence of DRFs than their male counterparts, and 82% (317,687) of the 387,607 DRFs in persons over 50 years occurred in the female population (Fig. 2, Table 3). The incidence rate per 100,000 persons by age group peaked in the 10 to 14-year-old group, then decreased, and increased again with advancing age (Fig. 3A). In the female population, the annual incidence rate per 100,000 persons was highest in the 70 to 79-year-old age group, while the 10 to 14-year-old age group showed the highest rate in the male population (Fig. 3B and C).

The total number of cases with distal radius fractures, by age group, from 2011 to 2015.

Incidence rate of distal radius fractures per 100,000 persons, by age group — (A) Total population, (B) Men, and (C) Women.

During the study period, the number of DRFs was highest in winter (33%), followed by autumn (23%), spring (23%), and summer (22%) (Table 4). The annual change in the number of DRFs in winter was more prominent than in other seasons (Fig. 4, Table 4). While the incidence proportion of DRFs in children, adolescents, and young men under 30 years old peaked during spring, females over 15 years old and men over 30 years old had the highest incidence proportion during winter (Fig. 5). During the study period, the highest mean monthly number of DRFs occurred in January (15,722 cases, 12%), followed by December (15,508 cases, 12%) and February (11,321 cases, 9%) (Fig. 6). As the national monthly mean winter temperature increased, the monthly number of DRFs tended to decrease, and the correlation was significant (P = 0.003, r = −0.704) (Fig. 7).

Values are presented as number (%).

Seasonal variation in the incidence of distal radius fractures by year in the total population.

Seasonal variation in the incidence of distal radius fractures by age group from 2011 to 2015 — (A) Men and (B) Women.

The mean monthly number of distal radius fractures from 2011 to 2015.

The correlation between the monthly number of distal radius fractures and the national monthly mean winter temperatures.

DISCUSSION

An average of 130,000 cases of DRFs occurred annually from 2011 to 2015 in Korea. The annual age-adjusted incidence rate per 100,000 persons was between 240 and 300, but no statistically significant annual change was observed. The incidence of DRFs in males was highest in children and adolescents, while in females it was highest in those in their 70s. Further, females showed a markedly increased DRF incidence after the age of 50 years.
DRFs reached peak incidence during winter. However, the annual incidence of DRFs occurring in winter varied greatly compared with that in other seasons and correlated with the average temperature during the winter season. Children, adolescents, and young men under 30 years old had a peak incidence proportion of DRFs during spring, whereas females over 15 years old and males over 30 years old had the highest incidence proportion during winter.

In studies using national claims data, the number of DRFs may vary slightly from study to study depending on the assessment methods.1415 Kwon et al.14 examined DRF incidence and related mortality in patients aged ≥ 50 years between 2008 and 2012, and reported 65,654 and 74,720 DRFs in 2011 and 2012, respectively. In our study, however, the numbers of DRFs in patients aged ≥ 50 years were 72,890 and 81,694 in 2011 and 2012, respectively, higher than those of the previous study (Table 3). We considered conservatively treated patients with multiple procedure codes for splints or casts within 6 months as a single DRF, similar to the method used by Kwon et al.14 However, patients who received two operations were counted as having two DRFs, on the assumption that they had bilateral DRFs. If the primary surgery was not successful, the patient needed reoperation; in our study, such cases were also counted as 2 DRFs. Our method therefore has the potential to overestimate the number of DRFs, but we consider such cases to be very rare. Thus, the method we used to assess the number of DRFs produced slightly different results than those reported by the previous study, but this difference was less than 10%.14

In our study, the incidence rates of DRFs in children and adolescents between 10 and 14 years of age were high, especially in males, similar to previous studies.16 The peak incidence rate is known to correlate with the pubertal growth spurt;17 the process of bone mineralization cannot keep up with the abrupt increase in new bone development, resulting in bones that are particularly susceptible to fracture. Physical activity in these age groups is vigorous,18 and most pediatric forearm fractures are reported to be related to play or sports activities.1920 In particular, the incidence proportion of DRFs in children and adolescents was previously reported to be high in spring.1921 This observation is consistent with our finding that children, adolescents, and males younger than 30 years old, who engage in vigorous physical activities, showed a high incidence proportion of DRFs in spring. These results are considered due in part to spring weather being well suited to play and sports activities.

We also found that the incidence rate of DRFs notably increased in individuals over 50 years of age, particularly in women, similar to the findings of previous studies.422 The incidence of DRFs increases in older women due to various risk factors, including decreased bone mineral density (BMD), low vitamin D levels, and postural balance decline.232425 Postural balance decline in older women is associated with decreases in lower limb strength, sensory system function (vision, vestibular, and proprioception), and flexibility.26 Fu et al.27 reported that sedentary women aged 40 to 60 years improved postural stability through a balance training program. However, the importance of such balance training is often overlooked compared with BMD evaluation.23 Prevention of osteoporosis and balance decline is needed to reduce the incidence of DRFs in women 50 years old and older.

Several previous studies in Europe and North America have confirmed that the incidence of DRFs increases in winter, a pattern reported to be associated with snowy and icy conditions.7810 However, in East Asia, including China, Japan, and Korea, few studies have investigated the seasonal variation in the incidence of DRFs. The marked decrease in temperature during winter in East Asia is accompanied by widespread inflows of cold continental air, called cold surges or cold waves.28 A cold surge is one of the most characteristic weather phenomena of the East Asian winter monsoon on synoptic time scales.29 Therefore, seasonal variations in the incidence of DRFs may occur due to these seasonal characteristics in East Asia. All age groups, excluding those usually associated with high levels of physical activity, showed a high incidence proportion of DRFs during winter.

Bischoff-Ferrari et al.9 reported that higher winter temperatures were inversely related to the risk of DRFs. In this study, the incidence of DRFs tended to decrease as the national mean winter temperature increased. Such weather effects resulted in annual variation of the DRF incidence rate in winter. During winter, under an intense cold surge, proper maintenance of roads is necessary to reduce the incidence of DRFs.

Although this study employed a large sample size based on a nationwide database, it has some limitations. First, the correlation between the incidence of DRFs and snowfall could not be analyzed: although the Korean Meteorological Administration (https://data.kma.go.kr) provides precipitation data covering both rainfall and snowfall, it does not distinguish between the two. Second, since the HIRA data did not provide the cause of injury, an analysis of the injury mechanism was not possible. Third, primary surgery and reoperations could not be distinguished in the coding system. When 2 operation codes were entered for a single patient, it was assumed that the second code was entered for the opposite side rather than for a reoperation; with this method, patients undergoing reoperation were counted as having two DRFs, and the number of DRFs may therefore have been overestimated. Finally, there is the possibility of some coding errors in such a large database.

In conclusion, the annual incidence of DRFs was about 130,000 in Korea. Children, adolescents, and men under 30 years old had a high incidence of DRFs in spring, whereas the incidence in females over 15 years old and men over 30 years old was highest in winter. The incidence of DRFs in winter varied notably from year to year and increased under intense cold surges. Active preventive measures are essential, especially in women 50 years or older, given the high DRF incidence in this group.
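The winter temperature association discussed above was quantified with a Spearman rank correlation (r = −0.704 between monthly DRF counts and monthly mean winter temperatures). In practice a statistics package (e.g., scipy.stats.spearmanr) would be used; the following dependency-free sketch shows the underlying computation, the Pearson correlation of the ranks, with average ranks assigned to ties. The example data are hypothetical.

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors,
    with tied values assigned their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend j over a block of tied values.
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0  # average 1-based rank for the block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Monthly counts that decrease strictly as temperature rises give rho = −1; the reported −0.704 indicates a strong but imperfect monotone decline.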
Keywords: Distal Radius Fracture; Epidemiology; Incidence; Seasonal Variation
INTRODUCTION: Distal radius fractures (DRFs) are one of the most common fractures not only among adults but also among children and adolescents.12 More than 640,000 cases of DRFs are reported annually in the United States,3 and almost 71,000 cases are reported in the British adult population.4 The high incidence of DRF causes a substantial economic burden and other social problems including decreased school attendance, lost work hours, loss of independence, and lasting disability.5 Therefore, DRF prevention is important, and it is essential to understand the epidemiology of DRFs to achieve this. Since most DRFs are caused by low-energy trauma, such as a fall from a standing height,6 slippery weather conditions may cause fracture epidemics.7 Thus, the incidence of DRF, which shows a clear seasonal variation, is known to increase in winter.89 However, most previous epidemiological studies of the incidence and seasonal variation of DRFs have been confined to European countries and North America.710 Few studies have examined seasonal variation in DRFs in Asia.11 Lee et al.12 reported seasonal variations in DRF incidence in Korea; however, their study had some limitations; it had a short-term study period and was conducted in a single center. Thus, our aim in the present study was to examine the incidence and seasonal variation of DRFs in Korea based on an analysis of nationwide data acquired from the Korean Health Insurance Review and Assessment Service (HIRA). METHODS: Data source The authors analyzed nationwide data obtained from the HIRA from 2011 to 2015. In Korea, 97% of the entire population is legally obligated to enroll in the National Health Insurance (NHI) program. 
Patients only pay about 30% of the total medical cost to healthcare institutions, and all healthcare providers submit claims data for inpatient and outpatient management, including diagnostic codes, which are classified according to the International Classification of Diseases, 10th revision (ICD-10), procedure codes, prescription records, demographic information, and direct medical costs to the HIRA to request reimbursement for the remaining 70% of the medical cost from the NHI service. Of the remaining 3% of the population not registered in the NHI program, excluding illegal residents, most receive healthcare coverage through the Medical Aid Program. The claims data for patients covered by the Medical Aid Program are also reviewed by the HIRA. Hence, the medical records of almost all newly admitted or hospitalized patients in healthcare institutions in Korea are prospectively recorded by the HIRA. The authors analyzed nationwide data obtained from the HIRA from 2011 to 2015. In Korea, 97% of the entire population is legally obligated to enroll in the National Health Insurance (NHI) program. Patients only pay about 30% of the total medical cost to healthcare institutions, and all healthcare providers submit claims data for inpatient and outpatient management, including diagnostic codes, which are classified according to the International Classification of Diseases, 10th revision (ICD-10), procedure codes, prescription records, demographic information, and direct medical costs to the HIRA to request reimbursement for the remaining 70% of the medical cost from the NHI service. Of the remaining 3% of the population not registered in the NHI program, excluding illegal residents, most receive healthcare coverage through the Medical Aid Program. The claims data for patients covered by the Medical Aid Program are also reviewed by the HIRA. 
Hence, the medical records of almost all newly admitted or hospitalized patients in healthcare institutions in Korea are prospectively recorded by the HIRA. Data collection The authors used ICD-10 codes and procedure codes to identify patients of all ages with newly diagnosed DRFs in Korea between 2011 and 2015 (Table 1). Although HIRA data provide patient identifiers, if a patient with a DRF makes multiple visits to a healthcare institution, claims data for the number of visits are generated. To avoid the risk of multiple counting, the authors used a previously reported method,13 as follows. First, patients with surgically treated DRFs were identified with ICD-10 codes (S52.5 and S52.6) and operation codes (N0603, N0613, N0607, N0617, N0993, N0994, and N0983) (Table 1). Each operation code was counted as a single case. However, percutaneous pinning and external fixation are often performed simultaneously. Therefore, if a patient received the two operations on the same day, they were considered to be performed on a single DRF. The same criterion was applied when a patient received open reduction with internal fixation and external fixation on the same day. Subsequently, in order to identify patients with DRFs who received conservative treatment, those who underwent surgical treatment were excluded from the HIRA data using patient identifiers, and those with splint or cast codes (T6020, T6030, T6151, and T6152) for DRF ICD-10 codes (S52.5 and S52.6) were included (Table 1). For conservative treatment, multiple splint or cast codes are commonly entered for a single case of DRF because an initially applied splint can be substituted by a cast at a later stage, or a long-arm cast can be changed to a short-arm cast, etc. For this reason, additional codes entered after a period of 6 months following the initial entry of splint or cast codes were recounted.14 ICD-10 = International Classification of Diseases, 10th revision, ORIF = open reduction with internal fixation. 
We examined patient data to identify the year and month in which the fracture occurred, and the patient's age and sex. We first counted the number of DRFs for each year and then calculated age-adjusted and sex-specific incidence rates per 100,000 persons with DRFs using the 2010 United States population as the standard population. Estimated year-specific, age-specific, and sex-specific populations in Korea were obtained from the “Statistics Korea” website (http://www.kosis.kr). Patients were divided into groups according to age (5-year intervals), and then the incidence of DRFs per age group per study year was calculated according to the population of each age group in Korea for a given study year (per 100,000 persons). We divided the year into 4 seasons to examine the seasonal variation in the incidence of DRFs; March through May was defined as spring, June through August as summer, September through November as autumn, and December through February as winter. Seasonal variations in the incidence of DRFs were examined according to year and age group (15-year intervals). Lastly, the relationship between the monthly incidence of DRFs and the monthly national average temperature in Korea during winter was investigated. The authors used ICD-10 codes and procedure codes to identify patients of all ages with newly diagnosed DRFs in Korea between 2011 and 2015 (Table 1). Although HIRA data provide patient identifiers, if a patient with a DRF makes multiple visits to a healthcare institution, claims data for the number of visits are generated. To avoid the risk of multiple counting, the authors used a previously reported method,13 as follows. First, patients with surgically treated DRFs were identified with ICD-10 codes (S52.5 and S52.6) and operation codes (N0603, N0613, N0607, N0617, N0993, N0994, and N0983) (Table 1). Each operation code was counted as a single case. However, percutaneous pinning and external fixation are often performed simultaneously. 
Therefore, if a patient received the two operations on the same day, they were considered to be performed on a single DRF. The same criterion was applied when a patient received open reduction with internal fixation and external fixation on the same day. Subsequently, in order to identify patients with DRFs who received conservative treatment, those who underwent surgical treatment were excluded from the HIRA data using patient identifiers, and those with splint or cast codes (T6020, T6030, T6151, and T6152) for DRF ICD-10 codes (S52.5 and S52.6) were included (Table 1). For conservative treatment, multiple splint or cast codes are commonly entered for a single case of DRF because an initially applied splint can be substituted by a cast at a later stage, or a long-arm cast can be changed to a short-arm cast, etc. For this reason, additional codes entered after a period of 6 months following the initial entry of splint or cast codes were recounted.14 ICD-10 = International Classification of Diseases, 10th revision, ORIF = open reduction with internal fixation. We examined patient data to identify the year and month in which the fracture occurred, and the patient's age and sex. We first counted the number of DRFs for each year and then calculated age-adjusted and sex-specific incidence rates per 100,000 persons with DRFs using the 2010 United States population as the standard population. Estimated year-specific, age-specific, and sex-specific populations in Korea were obtained from the “Statistics Korea” website (http://www.kosis.kr). Patients were divided into groups according to age (5-year intervals), and then the incidence of DRFs per age group per study year was calculated according to the population of each age group in Korea for a given study year (per 100,000 persons). 
We divided the year into 4 seasons to examine the seasonal variation in the incidence of DRFs; March through May was defined as spring, June through August as summer, September through November as autumn, and December through February as winter. Seasonal variations in the incidence of DRFs were examined according to year and age group (15-year intervals). Lastly, the relationship between the monthly incidence of DRFs and the monthly national average temperature in Korea during winter was investigated. Statistical analysis The annual percentage changes in the age-adjusted incidence rate of DRFs were calculated from 2011 to 2015 using joinpoint regression analysis (Joinpoint Regression Program, version 4.3.1.0; National Cancer Institute, Bethesda, MD, USA). All other data sets were analyzed using SAS statistical software version 9.13 (SAS Institute, Cary, NC, USA). Univariate analysis was conducted using the t-test. The correlation between the monthly number of DRFs and the national monthly mean temperatures in winter (https://data.kma.go.kr) was analyzed using the Spearman rank test. Values of P < 0.05 were considered significant. The annual percentage changes in the age-adjusted incidence rate of DRFs were calculated from 2011 to 2015 using joinpoint regression analysis (Joinpoint Regression Program, version 4.3.1.0; National Cancer Institute, Bethesda, MD, USA). All other data sets were analyzed using SAS statistical software version 9.13 (SAS Institute, Cary, NC, USA). Univariate analysis was conducted using the t-test. The correlation between the monthly number of DRFs and the national monthly mean temperatures in winter (https://data.kma.go.kr) was analyzed using the Spearman rank test. Values of P < 0.05 were considered significant. Ethics statement This study protocol was exempted for review by the Institutional Review Board of the Hanyang University Hospital (HYUH 2017-06-027) in accordance with the exemption criteria. 
This study protocol was exempted for review by the Institutional Review Board of the Hanyang University Hospital (HYUH 2017-06-027) in accordance with the exemption criteria. Data source: The authors analyzed nationwide data obtained from the HIRA from 2011 to 2015. In Korea, 97% of the entire population is legally obligated to enroll in the National Health Insurance (NHI) program. Patients only pay about 30% of the total medical cost to healthcare institutions, and all healthcare providers submit claims data for inpatient and outpatient management, including diagnostic codes, which are classified according to the International Classification of Diseases, 10th revision (ICD-10), procedure codes, prescription records, demographic information, and direct medical costs to the HIRA to request reimbursement for the remaining 70% of the medical cost from the NHI service. Of the remaining 3% of the population not registered in the NHI program, excluding illegal residents, most receive healthcare coverage through the Medical Aid Program. The claims data for patients covered by the Medical Aid Program are also reviewed by the HIRA. Hence, the medical records of almost all newly admitted or hospitalized patients in healthcare institutions in Korea are prospectively recorded by the HIRA. Data collection: The authors used ICD-10 codes and procedure codes to identify patients of all ages with newly diagnosed DRFs in Korea between 2011 and 2015 (Table 1). Although HIRA data provide patient identifiers, if a patient with a DRF makes multiple visits to a healthcare institution, claims data for the number of visits are generated. To avoid the risk of multiple counting, the authors used a previously reported method,13 as follows. First, patients with surgically treated DRFs were identified with ICD-10 codes (S52.5 and S52.6) and operation codes (N0603, N0613, N0607, N0617, N0993, N0994, and N0983) (Table 1). Each operation code was counted as a single case. 
However, percutaneous pinning and external fixation are often performed simultaneously. Therefore, if a patient received the two operations on the same day, they were considered to be performed on a single DRF. The same criterion was applied when a patient received open reduction with internal fixation and external fixation on the same day. Subsequently, in order to identify patients with DRFs who received conservative treatment, those who underwent surgical treatment were excluded from the HIRA data using patient identifiers, and those with splint or cast codes (T6020, T6030, T6151, and T6152) for DRF ICD-10 codes (S52.5 and S52.6) were included (Table 1). For conservative treatment, multiple splint or cast codes are commonly entered for a single case of DRF because an initially applied splint can be substituted by a cast at a later stage, or a long-arm cast can be changed to a short-arm cast, etc. For this reason, additional codes entered after a period of 6 months following the initial entry of splint or cast codes were recounted.14 (Table 1 abbreviations: ICD-10 = International Classification of Diseases, 10th revision; ORIF = open reduction with internal fixation.) We examined patient data to identify the year and month in which the fracture occurred, and the patient's age and sex. We first counted the number of DRFs for each year and then calculated age-adjusted and sex-specific incidence rates per 100,000 persons with DRFs using the 2010 United States population as the standard population. Estimated year-specific, age-specific, and sex-specific populations in Korea were obtained from the “Statistics Korea” website (http://www.kosis.kr). Patients were divided into groups according to age (5-year intervals), and then the incidence of DRFs per age group per study year was calculated according to the population of each age group in Korea for a given study year (per 100,000 persons).
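The case-counting rules above (each operation code is one case, two operations for the same patient on the same day collapse into one DRF, and splint/cast codes are recounted only after 6 months from the initial entry) amount to a small deduplication algorithm. Below is a minimal sketch, assuming claims are given as (patient_id, service_date) pairs and approximating the 6-month window as 183 days; it is a reconstruction for illustration, not the authors' actual code.

```python
from datetime import date

def count_surgical_drfs(claims):
    """Each distinct (patient, date) pair is one DRF: two operation codes
    entered for the same patient on the same day (e.g. pinning plus
    external fixation) are treated as a single fracture."""
    return len({(pid, d) for pid, d in claims})

def count_conservative_drfs(claims, window_days=183):
    """Splint/cast codes within ~6 months of a patient's initially
    counted code belong to the same fracture; a code entered after
    the window starts a new case."""
    cases = 0
    first_entry = {}
    for pid, d in sorted(claims):
        if pid not in first_entry or (d - first_entry[pid]).days > window_days:
            cases += 1
            first_entry[pid] = d
    return cases
```

Patients operated on two different days are counted as two fractures under this scheme, which matches the overcounting caveat the paper discusses for bilateral cases and reoperations.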
RESULTS: A total of 647,655 DRFs (621,451 patients) occurred from 2011 to 2015; thus, an average of about 130,000 DRFs occurred annually in Korea. Of these, 259,453 cases (249,457 patients) involved male patients, and 388,202 cases (371,994 patients) involved female patients, with a male to female ratio of 1.50. The mean age of patients with DRFs was 47.2 years (standard deviation [SD] ± 25.5). The mean age of men was 30.2 years (SD ± 23.2), while that of women was significantly higher at 58.6 years (SD ± 20.2, P < 0.001).
The total number of DRFs increased from 2011 to 2013 (from 128,000 to 137,054), and then decreased after 2013 (129,687 in 2014 and 116,750 in 2015) (Table 2). The age-adjusted incidence rate per 100,000 persons increased between 2011 and 2012 (278.0 and 291.7, respectively), and then decreased after 2012 (288.33 in 2013 and 246.00 in 2015) (Fig. 1, Table 2). The annual percentage change in the age-adjusted rate for 5 years was calculated as −2.9% (95% confidence interval [CI], −8.2% to 2.6%), which was not statistically significant (P = 0.190). (a) The United States population in 2010 was used as the standard. (Fig. 1: Age-adjusted and sex-specific rate of distal radius fractures per 100,000 persons in Korea from 2011 to 2015.) The 5-year age group-specific number of DRFs from 2011 to 2015 was highest in the 10 to 14-year-old age group (124,591 cases, 19%), followed by the 55 to 59-year-old age group (70,978 cases, 11%), and the 60 to 64-year-old age group (61,608 cases, 10%) (Fig. 2). Among those under 45 years of age, the incidence was higher in male than in female individuals; in particular, boys accounted for 82% (101,796 cases) of DRF cases in the 10 to 14-year-old age group. Women aged 45 years and older had a higher incidence of DRFs than their male counterparts, and 82% (317,687 DRFs) of 387,607 DRFs in persons over 50 years occurred in the female population (Fig. 2, Table 3). The incidence rate per 100,000 persons by age group peaked in the 10 to 14-year-old age group, then decreased, and increased again with advancing age (Fig. 3A). In the female population, the annual incidence rate per 100,000 persons was highest in the 70 to 79-year-old age group, while the 10 to 14-year-old age group showed the highest incidence rate in the male population (Fig. 3B and C). (Fig. 2: The total number of cases with distal radius fractures, by age group, from 2011 to 2015.) (Fig. 3: Incidence rate of distal radius fractures per 100,000 persons, by age group: (A) total population, (B) men, and (C) women.)
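The annual percentage change quoted above comes from joinpoint regression; with no significant joinpoint, the calculation reduces to a single log-linear least-squares fit, with APC = (e^b − 1) × 100 for slope b. Below is a minimal sketch of that simplified calculation; the 2014 rate is not quoted in the text, so the fit uses only the four reported values and therefore only approximates the published −2.9%.

```python
import math

def annual_percent_change(years, rates):
    """Single-segment approximation of the joinpoint APC:
    fit log(rate) = a + b * year by ordinary least squares,
    then APC = (exp(b) - 1) * 100."""
    logs = [math.log(r) for r in rates]
    n = len(years)
    mx = sum(years) / n
    my = sum(logs) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(years, logs)) / \
            sum((x - mx) ** 2 for x in years)
    return (math.exp(slope) - 1) * 100

# Age-adjusted rates per 100,000 reported in the text (2014 omitted)
years = [2011, 2012, 2013, 2015]
rates = [278.0, 291.7, 288.33, 246.00]
apc = annual_percent_change(years, rates)  # about -3.4% with these four points
```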
During the study period, the number of DRFs was highest in winter (33%), followed by autumn (23%), spring (23%), and summer (22%) (Table 4). The annual change in the number of DRFs in winter was more prominent than in other seasons (Fig. 4, Table 4). While the incidence proportion of DRFs in children, adolescents, and young men under 30 years of age peaked during spring, females over 15 years and men over 30 years of age had the highest incidence proportion during winter (Fig. 5). During the study period, the highest mean monthly number of DRFs occurred in January (15,722 cases, 12%), followed by December (15,508 cases, 12%), and February (11,321 cases, 9%) (Fig. 6). As the national monthly mean winter temperature increased, the monthly number of DRFs tended to decrease, and the correlation was significant (P = 0.003, r = −0.704) (Fig. 7). (Table 4: values are presented as number (%).) (Fig. 4: Seasonal variation in the incidence of distal radius fractures by year in the total population.) (Fig. 5: Seasonal variation in the incidence of distal radius fractures by age group from 2011 to 2015: (A) men and (B) women.) (Fig. 6: The mean monthly number of distal radius fractures from 2011 to 2015.) (Fig. 7: The correlation between the monthly number of distal radius fractures and the national monthly mean winter temperatures.)
Children, adolescents, and young men under 30 years of age had a peak incidence proportion of DRFs during spring, and females over 15 years and males over 30 years of age had the highest incidence proportion of DRFs during winter. In studies using national claims data, the number of DRFs may vary slightly from study to study depending on their assessment methods.1415 Kwon et al.14 examined the DRF incidence and related mortality in patients aged ≥ 50 years between 2008 and 2012, and reported 65,654 and 74,720 DRFs in 2011 and 2012, respectively. In our study, however, the numbers of DRFs in patients aged ≥ 50 years were 72,890 and 81,694 in 2011 and 2012, respectively, which were higher than those of the previous study (Table 3). We considered conservatively treated patients with multiple procedure codes for splints or casts within 6 months as a single DRF, similar to the method used by Kwon et al.14 However, patients who received two operations were counted as two DRFs, assuming that they had bilateral DRFs. If the primary surgery was unsuccessful and a reoperation was performed, these cases were also counted as 2 DRFs. Our method therefore has the potential to overestimate the number of DRFs, but we consider such cases to be very rare. Thus, the method we used to assess the number of DRFs produced slightly different results than those reported by the previous study, but this difference was less than 10%.14 In our study, the incidence rates of DRFs in children and adolescents between 10 and 14 years of age were high, especially in males, similar to those of previous studies.16 The peak incidence rate is known to correlate with the pubertal growth spurt;17 the process of bone mineralization cannot keep up with the abrupt increase in new bone development, resulting in bones that are particularly susceptible to fracture.
It is known that physical activities among these age groups are vigorous18 and most pediatric forearm fractures are reported to correlate with playing or sports activities.1920 In particular, the incidence proportion of DRFs in children and adolescents was previously reported to be high in spring.1921 This observation is consistent with our finding that children, adolescents, and males younger than 30 years of age, who engage in vigorous physical activities, showed a high incidence proportion of DRFs in spring. Such results are likely due in part to spring weather being well suited to play and sports activities. We also found that the incidence rate of DRFs notably increased in individuals over 50 years of age, particularly in women, which was similar to the findings of previous studies.422 The incidence of DRFs increased in older women due to various risk factors, including decreased bone mineral density (BMD), low vitamin D levels, and postural balance decline.232425 Postural balance decline in older women is associated with a decrease in lower limb strength, sensory system function (vision, vestibular, and proprioception), and flexibility.26 Fu et al.27 reported that sedentary women aged 40 to 60 years improved postural stability through a balance training program. However, the importance of such balance training is often overlooked compared to BMD evaluation.23 The prevention of osteoporosis and balance decline is needed to reduce the incidence of DRFs in women aged 50 years and older. Several previous studies in Europe and North America have confirmed that the incidence of DRFs increases in the winter, and this pattern is reported to be associated with snowy and icy conditions.7810 However, in East Asia, including China, Japan, and Korea, there have been few studies investigating the seasonal variation in the incidence of DRFs.
The marked decrease in temperature during winter in East Asia is accompanied by widespread inflows of cold continental air, called cold surges or cold waves.28 A cold surge is one of the most important characteristic weather phenomena of the East Asian winter monsoon on synoptic time scales.29 Therefore, seasonal variations in the incidence of DRFs may occur due to such seasonal characteristics in East Asia. All age groups, excluding those usually associated with high-level physical activities, showed a high incidence proportion of DRF during winter. Bischoff-Ferrari et al.9 reported that higher winter temperatures were inversely related to the risk for DRFs. In this study, the incidence of DRFs tended to decrease as the national mean temperature in winter increased. Such effects from weather conditions resulted in annual variations of the DRF incidence rate in winter. During winter, when under an intense cold surge, proper maintenance of roads is necessary in order to reduce the incidence of DRFs. Although this study employed a large sample size, based on a nationwide database, it also has some limitations. First, the correlation between the incidence of DRFs and snowfall could not be analyzed. Although the Korean Meteorological Administration (https://data.kma.go.kr) provides precipitation data, which covers both rainfall and snowfall, it does not distinguish between these 2 parameters. Thus, we could not analyze the correlation between the incidence of DRFs and snowfall. Second, since the HIRA data did not provide the cause of injury, an analysis on the injury mechanism was not available. Third, primary surgery and reoperations could not be distinguished in the coding system. As such, when 2 operation codes were entered for a single patient, it was assumed that the second operation code was entered for the opposite side, rather than for reoperation. 
With this method, patients who underwent reoperation were counted as two DRFs, and thus the number of DRFs may have been overestimated. Finally, there is the possibility of some code errors in such a large database. In conclusion, the annual incidence of DRFs was approximately 130,000 in Korea. Children, adolescents, and men under 30 years of age had a high incidence of DRFs in spring. However, the incidence in females over 15 years and men over 30 years of age was highest in winter. The incidence of DRFs in winter varied notably from year to year. In winter, the DRF incidence increased under an intense cold surge. Active preventive measures are essential, especially in women 50 years of age or older, given the high DRF incidence in this group.
Background: The present study aimed to investigate the incidence and seasonal variation of distal radius fractures (DRFs) in Korea. Methods: We analyzed a nationwide database acquired from the Korean Health Insurance Review and Assessment Service from 2011 to 2015. We used International Classification of Diseases, 10th revision codes and procedure codes to identify patients of all ages with newly diagnosed DRFs. Results: An average of about 130,000 DRFs occurred annually in Korea. The incidence of DRFs by age group was highest in the 10 to 14-year-old age group for males and in the 70 to 79-year-old age group for females, with a rapid increase in incidence after the age of 50 years. The peak incidence of DRFs occurred during winter; however, the winter incidence varied greatly from year to year compared with that of other seasons. The incidence of DRFs during the winter season was correlated with the average temperature. Conclusions: The annual incidence of DRFs was approximately 130,000 in Korea. The incidence increased under an intense cold surge during winter. Active preventive measures are recommended, especially in women over 50 years of age, considering the higher incidence in this age group.
null
null
5,224
216
8
[ "drfs", "incidence", "year", "age", "data", "codes", "patients", "winter", "korea", "incidence drfs" ]
[ "test", "test" ]
null
null
[CONTENT] Distal Radius Fracture | Epidemiology | Incidence | Seasonal Variation [SUMMARY]
[CONTENT] Distal Radius Fracture | Epidemiology | Incidence | Seasonal Variation [SUMMARY]
[CONTENT] Distal Radius Fracture | Epidemiology | Incidence | Seasonal Variation [SUMMARY]
null
[CONTENT] Distal Radius Fracture | Epidemiology | Incidence | Seasonal Variation [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Child | Child, Preschool | Databases, Factual | Female | Humans | Incidence | Infant | Infant, Newborn | Male | Middle Aged | Radius Fractures | Republic of Korea | Seasons | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Child | Child, Preschool | Databases, Factual | Female | Humans | Incidence | Infant | Infant, Newborn | Male | Middle Aged | Radius Fractures | Republic of Korea | Seasons | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Child | Child, Preschool | Databases, Factual | Female | Humans | Incidence | Infant | Infant, Newborn | Male | Middle Aged | Radius Fractures | Republic of Korea | Seasons | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Child | Child, Preschool | Databases, Factual | Female | Humans | Incidence | Infant | Infant, Newborn | Male | Middle Aged | Radius Fractures | Republic of Korea | Seasons | Young Adult [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] drfs | incidence | year | age | data | codes | patients | winter | korea | incidence drfs [SUMMARY]
[CONTENT] drfs | incidence | year | age | data | codes | patients | winter | korea | incidence drfs [SUMMARY]
[CONTENT] drfs | incidence | year | age | data | codes | patients | winter | korea | incidence drfs [SUMMARY]
null
[CONTENT] drfs | incidence | year | age | data | codes | patients | winter | korea | incidence drfs [SUMMARY]
null
[CONTENT] drfs | variation drfs | seasonal variation drfs | seasonal | incidence | drf | seasonal variation | variation | incidence seasonal | incidence seasonal variation [SUMMARY]
[CONTENT] codes | year | drfs | patient | medical | cast | data | age | patients | icd 10 [SUMMARY]
[CONTENT] age | old | year old | age group | fig | year | group | cases | year old age group | old age group [SUMMARY]
null
[CONTENT] drfs | incidence | year | age | codes | medical | data | patients | study | drf [SUMMARY]
null
[CONTENT] Korea [SUMMARY]
[CONTENT] the Korean Health Insurance Review and Assessment Service | 2011 to 2015 ||| International Classification of Diseases | 10th [SUMMARY]
[CONTENT] about 130,000 | annually | Korea ||| DRF | 10 to | 14-year-old | 70s | 50 years ||| DRF | winter | annually ||| the winter season [SUMMARY]
null
[CONTENT] Korea ||| the Korean Health Insurance Review and Assessment Service | 2011 to 2015 ||| International Classification of Diseases | 10th ||| about 130,000 | annually | Korea ||| DRF | 10 to | 14-year-old | 70s | 50 years ||| DRF | winter | annually ||| the winter season ||| annual | DRF | 130,000 | Korea ||| winter ||| 50 years [SUMMARY]
null
Early weight-bearing after periacetabular osteotomy leads to a high incidence of postoperative pelvic fractures.
25015753
It has not been shown whether accelerated rehabilitation following periacetabular osteotomy (PAO) is effective for early recovery. The purpose of this retrospective study was to compare complication rates in patients with standard and accelerated rehabilitation protocols who underwent PAO.
BACKGROUND
Between January 2002 and August 2011, patients with a lateral center-edge (CE) angle of < 20°, showing good joint congruency with the hip in abduction, pre- or early stage of osteoarthritis, and age younger than 60 years were included in this study. We evaluated 156 hips in 138 patients, with a mean age at the time of surgery of 30 years. Full weight-bearing with two crutches started 2 months postoperatively in 73 patients (80 hips) with the standard rehabilitation protocol. In 65 patients (76 hips) with the accelerated rehabilitation protocol, postoperative strengthening of the hip, thigh and core musculature was begun on the day of surgery as tolerated. The exercise program included active hip range of motion, and gentle isometric hamstring and quadriceps muscle sets; these exercises were performed for 30 minutes in the morning and 30 minutes in the afternoon with a physical therapist every weekday for 6 weeks. Full weight-bearing with two axillary crutches started on the day of surgery as tolerated. Complications were evaluated for 2 years.
METHODS
The clinical results at the time of follow-up were similar in the two groups. The average periods between the osteotomy and full-weight-bearing walking without support were 4.2 months and 6.9 months in patients with the accelerated and standard rehabilitation protocols (P < 0.001), indicating that the accelerated rehabilitation protocol could achieve earlier recovery of patients. However, postoperative fractures of the ischial ramus and posterior column of the pelvis were more frequently found in patients with the accelerated rehabilitation protocol (8/76) than in those with the standard rehabilitation protocol (1/80) (P = 0.013).
RESULTS
The accelerated rehabilitation protocol seems to have advantages for early muscle recovery in patients undergoing PAO; however, postoperative pelvic fracture rates were unacceptably high in patients with this protocol.
CONCLUSION
[ "Acetabulum", "Adolescent", "Adult", "Crutches", "Exercise Therapy", "Female", "Follow-Up Studies", "Fractures, Bone", "Hip Dislocation", "Humans", "Incidence", "Ischium", "Isometric Contraction", "Male", "Muscle Strength", "Osteotomy", "Pelvic Bones", "Postoperative Complications", "Radiography", "Range of Motion, Articular", "Resistance Training", "Retrospective Studies", "Weight-Bearing" ]
4100493
Background
The efficacy of an accelerated multimodal intervention in order to shorten the time of recovery after surgery has been reported [1-3]. The cost-effectiveness of clinical pathways including an accelerated perioperative care and rehabilitation intervention following total hip and knee arthroplasty (THA and TKA) has been shown [1]. Various reorienting acetabular osteotomies have been described [4-6]. The advantage of therapeutic exercise after periacetabular osteotomy (PAO) has been reported to promote the activity levels of patients to return to sports [7]. However, it has not been shown whether an accelerated perioperative care and rehabilitation intervention following PAO is effective. We have performed PAO through an Ollier lateral U transtrochanteric approach since 1990 with consistent surgical indications and techniques [8]. We hypothesized that an accelerated protocol after PAO is effective for shortening the time of recovery and decreasing the rate of perioperative complications. The purpose of this retrospective study was to compare complication rates including the incidences of postoperative fractures in patients with standard and accelerated rehabilitation protocols who underwent PAO.
Methods
The study design was approved by the Ethics Committee of Asahikawa Medical University. All investigations were conducted in conformity with ethical principles of research. Written informed consent for this study was obtained from all patients. We retrospectively assessed all patients who had been managed using PAO between January 2002 and August 2011. During this period, 163 PAOs were performed in 145 patients for the treatment of acetabular dysplasia in adolescents and adults. All the patients reported moderate to severe hip pain. Surgical indications for the PAO included a lateral center-edge (CE) angle [9] of < 20° on anteroposterior radiographs, showing good joint congruency with the hip in abduction, pre- or early stage of grade 0, 1 or 2 osteoarthritis [10], and age younger than 60 years. Contraindications for the osteotomy were poor joint congruency showing partial narrowing or disappearance of the joint space with the hip in abduction, a severely deformed femoral head, advanced stage of Tönnis grade 3 osteoarthritis and age older than 60 years. In one patient (one hip), PAO was combined with an intertrochanteric valgus osteotomy. This patient was excluded from analysis because femoral osteotomies might affect the postoperative rehabilitation process. Six patients (six hips) were lost to follow-up. We evaluated the remaining 138 patients (156 hips). Nineteen patients were male and 119 were female. The left side was treated in 82 hips and the right side in 74 hips. Eighteen of the 138 patients underwent bilateral surgery. The average age of patients at the time of surgery was 30 years (range 11–59 years), and the average patient weight was 55.8 kg (range 39–83 kg). The operative techniques were described previously [8]. All of the procedures were performed by one surgeon who had performed more than 200 PAOs before the study periods. Complications were evaluated for 2 years. During the study period, two rehabilitation protocols were applied (Table 1). 
For 76 consecutive hips in 65 patients between January 2004 and December 2008, an accelerated rehabilitation protocol was applied to achieve early postoperative recovery and functional improvement of patients. Postoperative strengthening of the hip, thigh and core musculature was begun on the day after surgery as tolerated. The exercise program included active hip range of motion, and gentle isometric hamstring and quadriceps muscle sets; these exercises were performed for 30 minutes in the morning and 30 minutes in the afternoon with a physical therapist every weekday for six weeks for all the patients. Full weight-bearing with two axillary crutches started on the day after surgery as tolerated. All weight-bearing exercises were performed under the guidance of physical therapists, who promoted full weight-bearing using a weight scale. One-hour sessions of outpatient physical therapy muscle exercises continued 2 times per week for 3 months postoperatively. (Table 1: Comparison of standard and accelerated rehabilitation protocols.) In the other period, a standard rehabilitation protocol was applied for 73 patients (80 hips). Since January 2009, the standard protocol has been applied again for all patients because postoperative pelvic fractures occurred frequently during the period in which the accelerated rehabilitation protocol was applied. The patient was allowed to use a wheelchair, and active range of motion, quadriceps and straight leg-raising exercises were begun on the first postoperative day. Non-weight-bearing walking with two crutches was also allowed as tolerated. Muscle exercises for 20 minutes per day with a physical therapist were performed every weekday for 6 weeks for all the patients. All weight-bearing exercises were performed under the guidance of physical therapists. Prophylaxis against deep-vein thrombosis was not routinely administered for either group.
Only high-risk patients with a previous history of thrombosis were managed with aspirin for 2 weeks postoperatively. Clinical and radiographic evaluations were performed preoperatively, and 1, 2, 4, 6, 12, 18 and 24 months postoperatively. The Harris hip score was also evaluated preoperatively and at the time of 2-year follow-up visits. Preoperative and postoperative clinical data were collected from the patients' charts. Perioperative complications that occurred within 12 months after surgery included deep infection, pulmonary embolism, avascular necrosis of the acetabular fragment, displacement of the greater trochanter, fracture of the pubic ramus, ischial ramus or posterior column of the pelvis, nonunion of the pubis and heterotopic ossifications. Fractures were defined as discontinuities of the bone other than the osteotomies that did not occur during surgery. Conventional anteroposterior and lateral radiographs were obtained with the patient in the supine position for each evaluation. The CE angle, the acetabular head index (AHI) [11] and the acetabular angle of Sharp [12] were measured. The head lateralization index was measured [13]. The presence of the cross-over sign of acetabular retroversion was recorded. Gender, follow-up period, bilateral or unilateral involvement, severity of osteoarthritis, CE angle, angle of Sharp, AHI and head lateralization index were comparable between the standard and accelerated rehabilitation protocol groups (Table 2). Radiographic images were transferred to Image J software (National Institutes of Health, Bethesda, Maryland, USA) on personal computers, and measurements were performed with an accuracy of ±0.01 mm and ±0.01 degrees. (Table 2: Comparison of data on the patients, clinical outcomes, and radiographic evaluations in the standard and accelerated protocol groups.) An orthopaedic instructor who specialized in hip surgery and imaging analyses of the hip joint made all radiographic measurements.
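The lateral CE angle measured above is a plane-geometry quantity: the angle between a vertical line through the femoral head center and the line from that center to the lateral edge of the acetabulum. The sketch below computes it from two hypothetical 2D landmark coordinates (in pixels, with x pointing laterally and y superiorly); it illustrates the definition only and is not the authors' ImageJ workflow.

```python
import math

def lateral_ce_angle(head_center, lateral_edge):
    """Wiberg lateral center-edge angle in degrees from 2D landmarks,
    with x increasing laterally and y increasing superiorly: the angle
    between the vertical through the femoral head center and the line
    to the lateral acetabular edge (negative if the edge lies medially)."""
    dx = lateral_edge[0] - head_center[0]
    dy = lateral_edge[1] - head_center[1]
    return math.degrees(math.atan2(dx, dy))

# Hypothetical landmarks: edge 10 px lateral and 30 px superior to the center
angle = lateral_ce_angle((100.0, 100.0), (110.0, 130.0))  # about 18.4 degrees, dysplastic (< 20)
```

A negative value corresponds to an edge medial to the head center, consistent with the dysplastic preoperative hips reported here (mean CE angle −2°).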
Preoperative and periodic postoperative radiographic analyses were performed in a blinded fashion for all patients. Univariate analyses included the chi-square test, the Mann–Whitney U test and the Wilcoxon signed rank test where appropriate. Preoperative and postoperative Harris hip scores were compared by the Wilcoxon signed rank test. The chi-square test was used for analyses of the clinical factors. The Wilcoxon signed rank test was used for analyses of preoperative and postoperative center-edge angle, acetabular head index, Sharp angle and head lateralization index. A probability value less than 0.05 was considered significant. Statistical analyses were performed using SPSS software, version 19.0 (SPSS Inc., Chicago, Illinois, USA).
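The Wilcoxon signed rank test applied above to paired pre- and postoperative scores can be illustrated by computing its test statistic directly. This is a generic sketch (the study used SPSS): it ranks the absolute pre/post differences, with zero differences dropped and ties sharing average ranks, and returns the smaller of the positive- and negative-rank sums; the p-value lookup is omitted, and the example scores are hypothetical.

```python
def wilcoxon_w(pre, post):
    """Wilcoxon signed-rank statistic W for paired samples."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):  # assign average ranks to tied |differences|
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)

# Hypothetical Harris hip scores for five hips, before and after PAO
pre = [69, 72, 58, 80, 65]
post = [91, 88, 84, 95, 90]
w = wilcoxon_w(pre, post)  # all hips improved, so W = 0
```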
Results
The Harris hip score increased from a preoperative overall average of 69 points (range 52–95 points) to 91 points (range 54–100 points) at the most recent follow-up (P < 0.001). Overall, 146 hips (94%) had a hip score equal to or greater than 80 points. The mean Harris hip scores at preoperative and follow-up evaluation were similar in the two groups (Table 2). The overall mean CE angle increased (P < 0.001) from −2° ± 9° preoperatively to 35° ± 6° postoperatively, the mean AHI increased (P < 0.001) from 54 ± 9 to 89 ± 7 and the mean Sharp angle decreased (P < 0.001) from 52° ± 4° to 40° ± 4°. These improvements were similar in the two groups (Table 2). Preoperative cross-over signs were observed in 14 (9%) hips, which were absent postoperatively. The preoperative Tönnis grade of osteoarthritis was grade 0 in 39 (25%) hips, grade 1 in 114 (73%) hips and grade 2 in three (2%) hips. Three (2%) hips underwent THA. The average period between the osteotomy and walking without support was 4.2 months (range 2.0-10.5 months) in 65 patients with the accelerated rehabilitation protocol and 6.9 months (range 2.5-15.0 months) in 73 patients with the standard rehabilitation protocol (P < 0.001), indicating that the accelerated rehabilitation protocol could achieve earlier recovery of patients. There were no intraoperative fractures as determined by intraoperative inspection and postoperative radiographs in any patients. Postoperative fractures of the ischial ramus and posterior column of the pelvis were more frequently found in patients with the accelerated rehabilitation protocol (8/76) than in those with the standard rehabilitation protocol (1/80) (P = 0.013). Other perioperative complications were similar in the two groups (Table 3). Postoperative fractures occurred at an average of 2.0 months (range 0.2-12 months) postoperatively. Slight pain or dullness continued for an average of 3.5 months (range 2.0-12.0 months) in patients with fractures. 
The mean time to union of ischial fracture was 9.0 months. All fractures united without surgical intervention (Figures 1, 2, 3, 4). There was no loss of correction. One postoperative deep infection extended to septic arthritis, which healed after surgical debridement. One hip with displacement of the greater trochanter was revised and refixed using a metal cable grip system. Bone union was obtained. No patient had an injury to the great vessels or major nerves. No patient had symptoms resulting from damage to the lateral femoral cutaneous nerves. (Table 3: Comparison of perioperative complications in the standard and accelerated protocol groups.) (Figures 1-4: Radiographic findings of a 39-year-old woman. A preoperative radiograph showing acetabular dysplasia of the right hip. A radiograph of the same patient made 1 week after periacetabular osteotomy showing good coverage of the femoral head without fracture of the posterior column; the accelerated rehabilitation protocol was applied to this patient. A radiograph of the same patient made 4 weeks after osteotomy; the patient reported right buttock pain 3.5 weeks postoperatively, and a fracture of the posterior column (arrow) was revealed, which was not seen in Figure 1. A radiograph of the same patient made 2 years after osteotomy showing solid union of the fracture; the right buttock pain continued for 3 months and then decreased, and the patient reported no hip pain. The Harris hip score was 98 points at the time of follow-up.)
Conclusion
The accelerated rehabilitation protocol seems to offer advantages for early muscle recovery in patients undergoing PAO; however, postoperative pelvic fracture rates were unacceptably high with this protocol. Although these fractures do not seem to influence long-term clinical results after PAO, it remains unclear whether the accelerated rehabilitation protocol is worth applying. Surgeons should be aware that postoperative pelvic fracture rates may increase if they adopt it. We now prefer the standard rehabilitation protocol for patients undergoing PAO.
Discussion
Many authors have reported success with accelerated rehabilitation protocols after THA and TKA [1-3]. Immediate mobilization on the day of surgery has been reported to decrease the length of stay without adverse effects on complications or readmissions. A previous randomized clinical trial showed that an accelerated perioperative care and rehabilitation protocol can be both cost-saving and effective, with a reduction in the length of hospital stay and a gain in health-related quality of life [2]. However, it has not been clear whether accelerated rehabilitation is also effective for patients undergoing PAO.

The average period between the osteotomy and full weight-bearing walking without support was shorter in patients on the accelerated rehabilitation protocol in this study, indicating that the protocol had advantages for early muscle recovery after PAO.

However, the incidence of postoperative pelvic fractures was higher with the accelerated rehabilitation protocol. Patients undergoing PAO differ in several respects from those undergoing THA. Because PAO is an osteotomy, adequate time is necessary for pelvic bone union and healing, whereas most total joint replacements require no bone union or healing; this is the fundamental difference between the two procedures. The load transmission patterns through the pelvic ring change soon after PAO. Kaku et al. reported that load transfer through the pelvis is higher in the superior pubic ramus than in the inferior ramus [17]. Because load can be transferred only through the inferior pubic ramus and the ischium after PAO, the increased load might cause high strain, resulting in postoperative stress fractures of the ischial ramus. The load transmitted through the posterior column of the pelvis also increases soon after PAO. These increases in load were presumably greater in patients on the accelerated rehabilitation protocol, especially in the first several weeks postoperatively, which might explain the high fracture rates of the ischial ramus and the posterior column.

Espinosa et al. described an extremely low incidence of ischial fractures (0.9%) and suggested that the polygonal shape of the PAO is not the root cause of ischial fractures [18]. They proposed that substantial weakening of the bone could occur while performing the ischial cuts during the PAO. Fractures of the ischial ramus and the posterior column of the pelvis occurred in three of 210 hips with the standard rehabilitation protocol in the present study, which coincides with their results. In contrast, those fractures occurred in seven of seventy-six hips with the accelerated rehabilitation protocol, a much higher rate, and one that was unacceptably high compared with previous studies (Table 4). All fractures in this study healed uneventfully with nonoperative treatment. Because there were no documented injuries preceding the fractures, it must be assumed that normal loads were sufficient to cause stress fractures in patients on the accelerated rehabilitation protocol. Although these fractures were more frequent with the accelerated rehabilitation protocol, they do not appear to have influenced the two-year outcomes after PAO.

Table 4. Literature review of postoperative fixation loss or fracture after periacetabular osteotomies.

Our study has several limitations. The design was retrospective and included a relatively small number of patients, which limits the statistical power. Our results are representative only of Asian patients with short stature and low body mass index, and they may not be applicable to Caucasian patients.

Abbreviations
PAO: Periacetabular osteotomy; THA: Total hip arthroplasty; TKA: Total knee arthroplasty; CE angle: Center-edge angle; AHI: Acetabular head index.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
HI assembled and analyzed the data, and wrote the manuscript. HT, TS and YN collected clinical follow-up data and analyzed the data. HI and TM performed the surgery, recorded the intraoperative data, and arranged intraoperative photography. TM was the head of department and principal investigator. All authors read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2474/15/234/prepub
Keywords: Periacetabular osteotomy; Accelerated rehabilitation protocol; Complications
Background: The efficacy of an accelerated multimodal intervention in order to shorten the time of recovery after surgery has been reported [1-3]. The cost-effectiveness of clinical pathways including an accelerated perioperative care and rehabilitation intervention following total hip and knee arthroplasty (THA and TKA) has been shown [1]. Various reorienting acetabular osteotomies have been described [4-6]. The advantage of therapeutic exercise after periacetabular osteotomy (PAO) has been reported to promote the activity levels of patients to return to sports [7]. However, it has not been shown whether an accelerated perioperative care and rehabilitation intervention following PAO is effective. We have performed PAO through an Ollier lateral U transtrochanteric approach since 1990 with consistent surgical indications and techniques [8]. We hypothesized that an accelerated protocol after PAO is effective for shortening the time of recovery and decreasing the rate of perioperative complications. The purpose of this retrospective study was to compare complication rates including the incidences of postoperative fractures in patients with standard and accelerated rehabilitation protocols who underwent PAO. Methods: The study design was approved by the Ethics Committee of Asahikawa Medical University. All investigations were conducted in conformity with ethical principles of research. Written informed consent for this study was obtained from all patients. We retrospectively assessed all patients who had been managed using PAO between January 2002 and August 2011. During this period, 163 PAOs were performed in 145 patients for the treatment of acetabular dysplasia in adolescents and adults. All the patients reported moderate to severe hip pain. 
Surgical indications for the PAO included a lateral center-edge (CE) angle [9] of < 20° on anteroposterior radiographs, showing good joint congruency with the hip in abduction, pre- or early stage of grade 0, 1 or 2 osteoarthritis [10], and age younger than 60 years. Contraindications for the osteotomy were poor joint congruency showing partial narrowing or disappearance of the joint space with the hip in abduction, a severely deformed femoral head, advanced stage of Tönnis grade 3 osteoarthritis and age older than 60 years. In one patient (one hip), PAO was combined with an intertrochanteric valgus osteotomy. This patient was excluded from analysis because femoral osteotomies might affect the postoperative rehabilitation process. Six patients (six hips) were lost to follow-up. We evaluated the remaining 138 patients (156 hips). Nineteen patients were male and 119 were female. The left side was treated in 82 hips and the right side in 74 hips. Eighteen of the 138 patients underwent bilateral surgery. The average age of patients at the time of surgery was 30 years (range 11–59 years), and the average patient weight was 55.8 kg (range 39–83 kg). The operative techniques were described previously [8]. All of the procedures were performed by one surgeon who had performed more than 200 PAOs before the study periods. Complications were evaluated for 2 years. During the study period, two rehabilitation protocols were applied (Table 1). For 76 consecutive hips in 65 patients between January 2004 and December 2008, an accelerated rehabilitation protocol was applied to achieve early postoperative recovery and functional improvement of patients. Postoperative strengthening of the hip, thigh and core musculature was begun on the day after surgery as tolerated. 
The exercise program included active hip range of motion, and gentle isometric hamstring and quadriceps muscle sets; these exercises were performed for 30 minutes in the morning and 30 minutes in the afternoon with a physical therapist every weekday for six weeks for all the patients. Full weight-bearing with two axillary crutches started on the day after surgery as tolerated. All weight-bearing exercises were performed under the guidance of physical therapists. Full weight-bearing was promoted using the weight scale by therapists. Muscle exercises of 1 hour with outpatient physical therapy continued 2 times per week for 3 months postoperatively. Comparison of standard and accelerated rehabilitation protocols In the other period, a standard rehabilitation protocol was applied for 73 patients (80 hips). Since January 2009, a standard protocol has been applied again for all patients because postoperative pelvic fractures occurred frequently during the period in which the accelerated rehabilitation protocol was applied. The patient was allowed to use a wheelchair, and active range of motion, quadriceps and straight leg-raising exercises were begun on the first postoperative day. Non-weight-bearing walking with two crutches was also allowed as tolerated. Muscle exercises for 20 minutes per day with a physical therapist were performed every weekday for 6 weeks for all the patients. All weight-bearing exercises were performed under the guidance of physical therapists. Prophylaxis against deep-vein thrombosis was not routinely administered for both groups. Only high-risk patients with a previous history of thrombosis were managed with aspirin for 2 weeks postoperatively. Clinical and radiographic evaluations were performed preoperatively, and 1, 2, 4, 6, 12, 18 and 24 months postoperatively. The Harris hip score was also evaluated preoperatively and at the time of 2-year follow-up visits. 
Preoperative and postoperative clinical data were collected from charts of the patients. Perioperative complications that occurred within 12 months after surgery included deep infection, pulmonary embolism, avascular necrosis of the acetabular fragment, displacement of the greater trochanter, fracture of the pubic ramus, ischial ramus or posterior column of the pelvis, nonunion of pubis and heterotopic ossifications. Fractures were defined as discontinuities of the bone other than the osteotomies that did not occur during surgery. Conventional anteroposterior and lateral radiographs were obtained with the patient in the supine position for each evaluation. The CE angle, the acetabular head index (AHI) [11] and the acetabular angle of Sharp [12] were measured. The head lateralization index was measured [13]. The presence of the cross-over sign of acetabular retroversion was recorded. Gender, follow-up period, bilateral or unilateral involvement, severity of osteoarthritis, CE angle, angle of Sharp, AHI and head lateralization index were comparable between the standard and accelerated rehabilitation protocol groups (Table 2). Radiographic images were transferred to Image J software (National Institutes of Health, Bethesda, Maryland, USA) on personal computers, and measurements were performed with an accuracy of ±0.01 mm and ±0.01 degrees. Comparison of data on the patients, clinical outcomes, and radiographic evaluations in the standard and accelerated protocol groups An orthopaedic instructor who specialized in hip surgery and imaging analyses of the hip joint made all radiographic measurements. Preoperative and periodic postoperative radiographic analyses were performed in a blinded fashion for all patients. Univariate analyses included the chi-square test, the Mann–Whitney U test and the Wilcoxon signed rank test where appropriate. Preoperative and postoperative Harris hip scores were compared by the Wilcoxon signed rank test. 
The chi-square test was used for analyses of the clinical factors. The Wilcoxon signed rank test was used for analyses of preoperative and postoperative center-edge angle, acetabular head index, Sharp angle and head lateralization index. A probability value less than 0.05 was considered significant. Statistical analyses were performed using SPSS software, version 19.0 (SPSS Inc., Chicago, Illinois, USA). Results: The Harris hip score increased from a preoperative overall average of 69 points (range 52–95 points) to 91 points (range 54–100 points) at the most recent follow-up (P < 0.001). Overall, 146 hips (94%) had a hip score equal to or greater than 80 points. The mean Harris hip scores at preoperative and follow-up evaluation were similar in the two groups (Table 2). The overall mean CE angle increased (P < 0.001) from −2° ± 9° preoperatively to 35° ± 6° postoperatively, the mean AHI increased (P < 0.001) from 54 ± 9 to 89 ± 7 and the mean Sharp angle decreased (P < 0.001) from 52° ± 4° to 40° ± 4°. These improvements were similar in the two groups (Table 2). Preoperative cross-over signs were observed in 14 (9%) hips, which were absent postoperatively. The preoperative Tönnis grade of osteoarthritis was grade 0 in 39 (25%) hips, grade 1 in 114 (73%) hips and grade 2 in three (2%) hips. Three (2%) hips underwent THA. The average period between the osteotomy and walking without support was 4.2 months (range 2.0-10.5 months) in 65 patients with the accelerated rehabilitation protocol and 6.9 months (range 2.5-15.0 months) in 73 patients with the standard rehabilitation protocol (P < 0.001), indicating that the accelerated rehabilitation protocol could achieve earlier recovery of patients. There were no intraoperative fractures as determined by intraoperative inspection and postoperative radiographs in any patients. 
Postoperative fractures of the ischial ramus and posterior column of the pelvis were more frequently found in patients with the accelerated rehabilitation protocol (8/76) than in those with the standard rehabilitation protocol (1/80) (P = 0.013). Other perioperative complications were similar in the two groups (Table 3). Postoperative fractures occurred at an average of 2.0 months (range 0.2-12 months) postoperatively. Slight pain or dullness continued for an average of 3.5 months (range 2.0-12.0 months) in patients with fractures. The mean time to union of ischial fracture was 9.0 months. All fractures united without surgical intervention (Figures 1, 2, 3, 4). There was no loss of corrections. One postoperative deep infection extended to septic arthritis, which healed by surgical debridement. One hip with displacement of the greater trochanter was revised and refixed using a metal cable grip system. Bone union could be obtained. No patient had an injury to the great vessels or major nerves. No patient had symptoms resulting from damage to the lateral femoral cutaneous nerves. Comparison of perioperative complications in the standard and accelerated protocol groups Radiographic findings of a 39-year-old woman. A preoperative radiograph showing acetabular dysplasia of the right hip. A radiograph of the same patient made 1 week after periacetabular osteotomy showing good coverage of the femoral head without fracture of the posterior column. The accelerated rehabilitation protocol was applied to the patient. A radiograph of the same patient made 4 weeks after osteotomy. The patient reported right buttock pain 3.5 weeks postoperatively. A fracture of the posterior column (arrow) was revealed, which was not seen in Figure 1. A radiograph of the same patient made 2 years after osteotomy showing solid union of the fracture. The right buttock pain continued for 3 months and then decreased. The patient reported no hip pain. 
The Harris hip score was 98 points at the time of follow-up. Discussion: Many authors have reported success with accelerated rehabilitation protocols after THA and TKA [1-3]. Immediate mobilization on the day of surgery has been reported to decrease the length of stay without adverse effects on complications or readmissions. A previous randomized clinical trial showed that an accelerated perioperative care and rehabilitation protocol can be both cost-saving and effective, with a reduction in the length of hospital stay and a gain in health-related quality of life [2]. However, it has not been clear whether accelerated rehabilitation for patients undergoing PAO is also effective. The average period between the osteotomy and full weight-bearing walking without support was shorter in patients with the accelerated rehabilitation protocol in this study. This indicates that the accelerated rehabilitation protocol had advantages for early muscle recovery in patients undergoing PAO. However, the incidence of postoperative pelvic fractures was higher in patients with the accelerated rehabilitation protocol. Patients with PAO may exhibit several different features compared with those with THA. Because PAO is osteotomy surgery, proper time is necessary for pelvic bone union and healing. There is no need for bone union or healing in most total joint replacements. It seems that this is the fundamental difference between these two surgical procedures. The load transmission patterns through the pelvic ring change soon after PAO. Kaku et al. reported that load transfer through the pelvis is higher in the superior pubic ramus than in the inferior ramus [17]. Because the load can be transferred only through the inferior pubic ramus and the ischium after PAO, increased load might cause high strain, which results in stress fractures of the ischial ramus postoperatively. The load transmitted through the posterior column of the pelvis also increased soon after PAO. 
These increases in load seemed higher in patients with the accelerated rehabilitation protocol, especially for several weeks postoperatively, which might have caused high fracture rates of the ischial ramus and the posterior column. Espinosa et al. reported that the incidence of ischial fractures was extremely low (0.9%) and suggested that the polygonal shape of the PAO is not the root cause of ischial fractures [18]. They proposed that substantial weakening of the bone could occur while performing the ischial cuts during the PAO. Fractures of the ischial ramus and the posterior column of the pelvis occurred in three of 210 hips with the standard rehabilitation protocol in the present study, which coincides with their results. In contrast, those fractures occurred in seven of 76 hips with the accelerated rehabilitation protocol, a much higher rate than with the standard rehabilitation protocol. This fracture rate was unacceptably high compared with those in previous studies (Table 4). All fractures in this study healed uneventfully with nonoperative treatment. Because there were no documented injuries preceding the fractures, it must be assumed that normal loads were sufficient to cause stress fractures in patients with the accelerated rehabilitation protocol. Although postoperative fractures of the ischial ramus and posterior column of the pelvis were more frequent in patients with the accelerated rehabilitation protocol, they do not seem to have influenced the two-year outcomes after PAO. Literature review of postoperative fixation loss or fracture after periacetabular osteotomies Our study has several limitations. Our study design was retrospective and included a relatively small number of patients, which limits the statistical power. Our results are representative only of Asian patients with short stature and low body mass index, and may not be applicable to Caucasian patients. 
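The headline comparison above (8 of 76 hips in the accelerated group vs 1 of 80 in the standard group, P = 0.013) can be checked with Fisher's exact test; the article does not state which test produced this P value, so the choice of test here is an assumption. A minimal sketch using only the Python standard library:

```python
# Hedged sketch: two-sided Fisher exact test on postoperative pelvic fracture
# counts, accelerated (8 of 76 hips) vs standard (1 of 80 hips) protocol.
# Which test the authors actually used is not stated in the text.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact P for the 2x2 table [[a, b], [c, d]], summing
    all hypergeometric probabilities no larger than the observed one."""
    n1, n2, k = a + b, c + d, a + c          # row totals, first-column total
    total = comb(n1 + n2, k)
    def prob(x):                             # P(first cell == x)
        return comb(n1, x) * comb(n2, k - x) / total
    p_obs = prob(a)
    lo, hi = max(0, k - n2), min(k, n1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

p_value = fisher_exact_two_sided(8, 68, 1, 79)
print(f"two-sided P = {p_value:.3f}")
```

The P value from this construction lands close to, though not necessarily identical with, the reported 0.013; small differences are expected if the authors used a different test or sidedness.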
Conclusion: The accelerated rehabilitation protocol seems to have advantages for early muscle recovery in patients undergoing PAO; however, postoperative pelvic fracture rates were unacceptably high in patients with this protocol. It is still unclear whether it is worth applying the accelerated rehabilitation protocol after PAO. These fractures do not seem to influence long-term clinical results after PAO. Surgeons must be aware that postoperative pelvic fracture rates may increase if they apply the accelerated rehabilitation protocol. We now prefer the standard rehabilitation protocol for patients undergoing PAO. Abbreviations: PAO: Periacetabular osteotomy; THA: Total hip arthroplasty; TKA: Total knee arthroplasty; CE angle: Center-edge angle; AHI: Acetabular head index. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: HI assembled and analyzed the data, and wrote the manuscript. HT, TS and YN collected clinical follow-up data and analyzed the data. HI and TM performed the surgery, recorded the intraoperative data, and arranged intraoperative photography. TM was the head of department and principal investigator. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/15/234/prepub
Background: It has not been shown whether accelerated rehabilitation following periacetabular osteotomy (PAO) is effective for early recovery. The purpose of this retrospective study was to compare complication rates in patients with standard and accelerated rehabilitation protocols who underwent PAO. Methods: Between January 2002 and August 2011, patients with a lateral center-edge (CE) angle of < 20°, showing good joint congruency with the hip in abduction, pre- or early stage of osteoarthritis, and age younger than 60 years were included in this study. We evaluated 156 hips in 138 patients, with a mean age at the time of surgery of 30 years. Full weight-bearing with two crutches started 2 months postoperatively in 73 patients (80 hips) with the standard rehabilitation protocol. In 65 patients (76 hips) with the accelerated rehabilitation protocol, postoperative strengthening of the hip, thigh and core musculature was begun on the day of surgery as tolerated. The exercise program included active hip range of motion, and gentle isometric hamstring and quadriceps muscle sets; these exercises were performed for 30 minutes in the morning and 30 minutes in the afternoon with a physical therapist every weekday for 6 weeks. Full weight-bearing with two axillary crutches started on the day of surgery as tolerated. Complications were evaluated for 2 years. Results: The clinical results at the time of follow-up were similar in the two groups. The average periods between the osteotomy and full-weight-bearing walking without support were 4.2 months and 6.9 months in patients with the accelerated and standard rehabilitation protocols (P < 0.001), indicating that the accelerated rehabilitation protocol could achieve earlier recovery of patients. 
However, postoperative fractures of the ischial ramus and posterior column of the pelvis were more frequently found in patients with the accelerated rehabilitation protocol (8/76) than in those with the standard rehabilitation protocol (1/80) (P = 0.013). Conclusions: The accelerated rehabilitation protocol seems to have advantages for early muscle recovery in patients undergoing PAO; however, postoperative pelvic fracture rates were unacceptably high in patients with this protocol.
Background: The efficacy of an accelerated multimodal intervention in order to shorten the time of recovery after surgery has been reported [1-3]. The cost-effectiveness of clinical pathways including an accelerated perioperative care and rehabilitation intervention following total hip and knee arthroplasty (THA and TKA) has been shown [1]. Various reorienting acetabular osteotomies have been described [4-6]. Therapeutic exercise after periacetabular osteotomy (PAO) has been reported to promote patients' activity levels and return to sports [7]. However, it has not been shown whether an accelerated perioperative care and rehabilitation intervention following PAO is effective. We have performed PAO through an Ollier lateral U transtrochanteric approach since 1990 with consistent surgical indications and techniques [8]. We hypothesized that an accelerated protocol after PAO is effective for shortening the time of recovery and decreasing the rate of perioperative complications. The purpose of this retrospective study was to compare complication rates, including the incidences of postoperative fractures, in patients with standard and accelerated rehabilitation protocols who underwent PAO.
[CONTENT] Periacetabular osteotomy | Accelerated rehabilitation protocol | Complications [SUMMARY]
[CONTENT] Acetabulum | Adolescent | Adult | Crutches | Exercise Therapy | Female | Follow-Up Studies | Fractures, Bone | Hip Dislocation | Humans | Incidence | Ischium | Isometric Contraction | Male | Muscle Strength | Osteotomy | Pelvic Bones | Postoperative Complications | Radiography | Range of Motion, Articular | Resistance Training | Retrospective Studies | Weight-Bearing [SUMMARY]
Improvement of drug dose calculations by classroom teaching or e-learning: a randomised controlled trial in nurses.
25344483
Insufficient skills in drug dose calculations increase the risk for medication errors. Even experienced nurses may struggle with such calculations. Learning flexibility and cost considerations make e-learning interesting as an alternative to classroom teaching. This study compared the learning outcome and risk of error after a course in drug dose calculations for nurses with the two methods.
INTRODUCTION
In a randomised controlled open study, nurses from hospitals and primary healthcare were randomised to either e-learning or classroom teaching. Before and after a 2-day course, the nurses underwent a multiple choice test in drug dose calculations: 14 tasks with four alternative answers (score 0-14), and a statement regarding the certainty of each answer (score 0-3). High risk of error was defined as being certain that an incorrect answer was correct. The results are given as the mean (SD).
METHODS
16 men and 167 women participated in the study, aged 42.0 (9.5) years with a working experience of 12.3 (9.5) years. The number of correct answers after e-learning was 11.6 (2.0) and after classroom teaching 11.9 (2.0) (p=0.18, NS); improvements were 0.5 (1.6) and 0.9 (2.2), respectively (p=0.07, NS). Classroom learning was significantly superior to e-learning among participants with a pretest score below 9. In favour of e-learning was its higher rated specific value for the working situation. There was no difference in risk of error between groups after the course (p=0.77).
RESULTS
The study showed no differences in learning outcome or risk of error between e-learning and classroom teaching in drug dose calculations. The overall learning outcome was small. Weak precourse knowledge was associated with better outcome after classroom teaching.
CONCLUSIONS
[ "Adult", "Computer-Assisted Instruction", "Drug Dosage Calculations", "Education, Nursing", "Female", "Humans", "Male", "Medication Errors", "Middle Aged", "Quality Improvement" ]
4212177
Introduction
From international reviews and reports of adverse drug events, incorrect doses account for up to one-third of the events.1–3 Many health professionals find drug dose calculations difficult. The majority of medical students are unable to calculate the mass of a drug in solution correctly, and around half the doctors are unable to convert drug doses correctly from a percentage concentration or dilution to mass concentration.4 5 Nurses carry out practical drug management after the physicians’ prescriptions both in hospitals and primary healthcare. In Norway, a faultless test in drug dose calculations during nursing education is required to become a registered nurse.6 Both nursing students and experienced nurses have problems with drug dose calculations, and nursing students early in the programme showed limited basic skills in arithmetic.7–10 We have shown a high risk of error in conversion of units in 10% of registered nurses in an earlier study.11 E-learning was introduced with the internet in the early 1990s, and has been increasingly used in medical and healthcare education. E-learning is independent of time and place, and the training is easier to organise in the health services than classroom teaching, and at a lower cost. A meta-analysis from 2009 summarised more than 200 studies in health professions education, and concluded that e-learning is associated with large positive effects compared with no intervention, but compared with other interventions the effects are generally small.12 There is a lack of drug dose calculation studies where different didactic methods are compared. The objective of this study was to compare the learning outcome, certainty and risk of error in drug dose calculations after courses with either self-directed e-learning or conventional classroom teaching. Further aims were to study factors associated with the learning outcome and risk of error.
Methods
Design A randomised controlled open study with a parallel group design. Participants Registered nurses working in two hospitals and three municipalities in Eastern Norway were recruited to participate in the study. Inclusion criteria were nurses with at least 1 year of work experience in a 50% part-time job or more. Excluded were nurses working in outpatient clinics, those who did not administer drugs and any who did not master the Norwegian language sufficiently. The study was performed from September 2007 to April 2009. Interventions At inclusion, all participants completed a form with relevant background characteristics, and nine statements from the General Health Questionnaire (GHQ 30).13 Quality of Life tools are often used to explore psychological well-being. The GHQ 30 contains the dimensions of a sense of coping and self-esteem/well-being, and was used to evaluate to what extent the nurses’ sense of coping affected their calculation skills. The nurses performed a multiple choice (MCQ) test in drug dose calculations. The questions were standard calculation tasks for bachelor students in nursing at university colleges. The test was taken either on paper or on an internet website. The time available for the test was 1 h, and the participants were allowed to use a calculator. After the test, the nurses were randomised to one of two 2-day courses in drug dose calculations. 
One group was assigned to a self-directed, interactive internet-based e-learning course developed at a Norwegian university college. The other was assigned to a 1-day conventional classroom course and a 1-day self-study. The content of the two courses was the same: a review of the basic theory of the different types of calculations, followed by examples and exercises. The topics covered were conversion between units; formulas for dose, quantity and strength; infusions; and dilutions. The e-learning group continued with interactive tests, hints and suggested solutions. They had access to a collection of tests with feedback on answers, and a printout of the compendium was available. The classroom group had a 1-day lecture covering the basic theory; exercises in groups; discussion in a plenary session and an individual test at the end of the day. The second day was self-study, with a textbook including exercises used at the same college.14 Two to four weeks after the course, the nurses were retested in drug dose calculations with a similar MCQ test as the pretest. 
Sample size Studies testing drug dose calculations in nurses have shown a mean score of 75% (SD 15%).15–17 In a study with 14 questions, this is equivalent to a score of 10.5 (SD 2.1). To detect a difference of one correct answer between the two didactic methods with a strength of 0.8 and α<0.05, it was necessary to include 74 participants in each group. Owing to the likely dropouts, the aim was to randomise 180 participating nurses. 
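The sample size arithmetic above can be reproduced with the standard normal-approximation formula for comparing two means; the authors' exact method is not stated, so this sketch (Python standard library only) is illustrative. It yields a figure close to the reported 74 per group; small differences can arise from the exact correction used (e.g. for the t distribution).

```python
# Sketch of the power calculation: detect a difference of delta = 1 correct
# answer with SD = 2.1, power 0.8 and two-sided alpha 0.05, using the
# normal-approximation formula n = 2 * ((z_a + z_b) * sd / delta)^2 per group.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.8
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

print(n_per_group(delta=1.0, sd=2.1))
```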
Randomisation At inclusion, each nurse was stratified according to five workplaces: internal medicine, surgery or psychiatric wards in hospitals, and nursing home or ambulatory care in primary healthcare. Immediately after submission of the pretest, the participants were randomised to one of the two didactic methods by predefined computer-generated lists for each stratum.
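A stratified allocation of this kind can be sketched as follows; the article does not describe how its computer-generated lists were produced, so the block-randomisation scheme below (randomly shuffled blocks of four within each stratum) is an assumption for illustration only.

```python
# Hypothetical sketch of stratified block randomisation: within each of the
# five workplace strata, allocations are generated in shuffled blocks of four
# (two per arm), keeping the two arms balanced within every stratum.
import random

STRATA = ["internal medicine", "surgery", "psychiatric ward",
          "nursing home", "ambulatory care"]

def allocation_lists(n_per_stratum, block_size=4, seed=0):
    rng = random.Random(seed)       # fixed seed -> reproducible lists
    lists = {}
    for stratum in STRATA:
        allocations = []
        while len(allocations) < n_per_stratum:
            block = ["e-learning", "classroom"] * (block_size // 2)
            rng.shuffle(block)      # random order within each balanced block
            allocations.extend(block)
        lists[stratum] = allocations[:n_per_stratum]
    return lists

lists = allocation_lists(n_per_stratum=36)
print({stratum: lists[stratum][:4] for stratum in STRATA})
```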
Results
In total, 212 registered nurses were included in the study, and 183 were eligible for randomisation. Figure 1 shows the flow of participants throughout the study, and table 1 summarises the participant characteristics and the pretest results. The two groups were well balanced with respect to baseline characteristics. Of the 183 nurses, 79 (43%) were recruited from hospitals (48 from surgery departments, including intensive care units; 23 from internal medicine wards; 8 from psychiatric wards) and 104 (57%) from primary healthcare (52 from nursing homes and 52 from ambulatory healthcare). Nearly half of the nurses (48%) performed drug dose calculations weekly or more often. Participant flow chart. Participants’ characteristics and pretest results The results are given as mean (SD in brackets), or number of participants (proportion in brackets). *Scale: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day. †Upper secondary school. ‡Scale: 0=more/better than usual, 1=as usual, 2=less/worse than usual, 3=much less/worse than usual. Statistical tests: t test, Mann-Whitney U test, χ2 test, Fisher exact test. There was a tendency for more dropouts in the e-learning group: 18.4% vs 9.9% (p=0.10). The dropouts did not differ from those who completed the study regarding the workplace: 12 from hospitals and 14 from primary healthcare (p=0.74), or pretest result: score 10.5 vs 11.1, 95% CI for difference −1.5:+0.2 (p=0.13). Knowledge, learning outcome and risk of error The test results before and after the course are shown in figure 2, and the upper part of table 2 gives the main results after e-learning and classroom teaching. No significant difference between the two didactic methods was detected for the overall test score, certainty or risk of error. The overall knowledge score improved from 11.1 (2.0) to 11.8 (2.0) (p<0.001). Before and after the course, 20 (10.9%) and 37 (20.2%) participants, respectively, completed a faultless test. 
The overall risk of error decreased after the course from 1.5 (0.3) to 1.4 (0.3) (p<0.001), but 41 nurses (22%) showed an increased risk, 20 from the e-learning group and 21 from the classroom group. This proportion is within the limits of what could appear by coincidence from a normal distribution (24%), and with a mean learning outcome of 0.7 (0.2). Test results in drug dose calculations. Main results after course in drug dose calculations Results are given as mean (SD). Statistical test: Mann-Whitney U test. *Auxiliary subgroup analysis. An analysis of the 141 participants who completed the study according to the protocol did not alter the main finding that there was no difference between the two didactic methods. The overall knowledge score improved from 11.1 (2.0) to 12.0 (2.0) (p<0.001). Table 3 gives the results as the proportion of correct answers and the proportion of answers with a high risk of error within each calculation topic before and after the course. The test results in each topic for the two didactic methods showed that the classroom group scored significantly better after the course in conversion of units: 86% correct answers vs 78% (p<0.001), with no difference in the other topics. Overall, there were significant differences between the four topics in knowledge and risk of error both before and after the course, p<0.001 (Friedman's test). Sense of coping or self-esteem/well-being was not affected by the course for either of the groups, data not shown. Knowledge and high risk of error within each calculation topic before and after course Results are given as mean (SD). Statistical test: Wilcoxon signed-rank test, Friedman's test. Factors significantly associated with good learning outcome and reduction in the risk of error after the course are given in table 4. Among these factors, the randomisation to classroom teaching was significantly better in learning outcome, adjusted for other variables. 
Both low pretest knowledge and certainty score were associated with a reduced risk of error after the course, as were being a man and working in hospital. Self-evaluations of coping and self-esteem/well-being were neither associated with learning outcome nor with risk of error. The total R2 changes for the variables significantly associated with good learning outcome and risk of error were 0.28 and 0.18, respectively. Factors significantly associated with learning outcome and reduction in risk of error after course in drug dose calculations Multivariable regression analysis with all participant characteristics included as possible factors (n=183). Statistical test: linear regression analysis, after bivariable correlation tests Pearson and Spearman. 
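The between-group comparisons in these results rely on the Mann-Whitney U test. As a minimal illustration of how the U statistic is formed from pooled midranks, here is a standard-library sketch; the two score lists are invented, not the study data:

```python
# Minimal Mann-Whitney U statistic with midranks for ties (stdlib only).
# The score lists below are invented for illustration; NOT the study data.
def mann_whitney_u(x, y):
    pooled = [(v, 0) for v in x] + [(v, 1) for v in y]
    pooled.sort(key=lambda t: t[0])
    rank_sum_x = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                   # j is one past the current tie group
        midrank = (i + 1 + j) / 2    # mean of the 1-based ranks i+1 .. j
        rank_sum_x += midrank * sum(1 for k in range(i, j) if pooled[k][1] == 0)
        i = j
    return rank_sum_x - len(x) * (len(x) + 1) / 2  # U for the first sample

e_learning = [11, 12, 10, 13, 11]   # hypothetical posttest scores
classroom = [12, 13, 11, 12, 14]    # hypothetical posttest scores
print("U =", mann_whitney_u(e_learning, classroom))
```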
The overall knowledge score improved from 11.1 (2.0) to 12.0 (2.0) (p<0.001). Table 3 gives the results as the proportion of correct answers and the proportion of answers with a high risk of error within each calculation topic before and after the course. The test results in each topic for the two didactic methods showed that the classroom group scored significantly better after the course in conversion of units: 86% correct answers vs 78% (p<0.001), with no difference in the other topics. Overall, there were significant differences between the four topics in knowledge and risk of error both before and after the course, p<0.001 (Friedman's test). Sense of coping or self-esteem/well-being was not affected by the course for either of the groups, data not shown. Knowledge and high risk of error within each calculation topic before and after course Results are given as mean (SD). Statistical test: Wilcoxon signed-rank test, Friedman's test. Factors significantly associated with good learning outcome and reduction in the risk of error after the course are given in table 4. Among these factors, the randomisation to classroom teaching was significantly better in learning outcome, adjusted for other variables. Both low pretest knowledge and certainty score were associated with a reduced risk of error after the course, as were being a man and working in hospital. Self-evaluations of coping and self-esteem/well-being were neither associated with learning outcome nor with risk of error. The total R2 changes for the variables significantly associated with good learning outcome and risk of error were 0.28 and 0.18, respectively. Factors significantly associated with learning outcome and reduction in risk of error after course in drug dose calculations Multivariable regression analysis with all participant characteristics included as possible factors (n=183). Statistical test: linear regression analysis, after bivariable correlation tests Pearson and Spearman. 
Course evaluation
Nearly all (97.5%) of the participants stated a need for training courses in drug dose calculations. The evaluation after the course showed no difference between the didactic methods in the expressed degree of difficulty or course satisfaction, data not shown. The specific value of the course for working situations was scored 3.1 (0.7) in the e-learning group and 2.7 (0.7) in the classroom group (p<0.001).
Auxiliary analyses
A post hoc analysis for subgroups with a pretest knowledge score ≥9 and <9 is given in the lower part of table 2. For participants with a low prescore, classroom teaching gave a significantly better learning outcome and reduced risk of error after the course. The overall knowledge score improved in the high score group from 11.6 (1.4) to 12.0 (1.9) and in the low score group from 7.2 (1.0) to 9.9 (2.3), and the difference in learning outcome was highly significant (p<0.001).
Conclusion
The study was not able to demonstrate any differences between e-learning and classroom teaching in drug dose calculations with respect to learning outcome, certainty or risk of error. The overall learning outcome was without practical significance, and conversion of units was the only topic that was significantly improved after the course. An independent factor in favour of classroom teaching was weak pretest knowledge, while factors favouring e-learning could be the need for training in relevant work-specific tasks and time-effective repetition.
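The risk-of-error measure reported above combines each answer's correctness with the nurse's self-rated certainty (0–3): a confident correct answer scores 1 (low risk), any answer given with low certainty scores 2 (moderate risk), and a confident wrong answer scores 3 (high risk). A minimal sketch of that scoring rule in Python (function names are illustrative, not from the study):

```python
def risk_score(correct: bool, certainty: int) -> int:
    """Risk-of-error score for one question, on the study's 1-3 scale.

    certainty is the self-rated certainty: 0=very uncertain .. 3=very certain.
    """
    if certainty <= 1:
        return 2  # low certainty, any answer: moderate risk (would seek help)
    return 1 if correct else 3  # high certainty: low risk if correct, high if wrong


def overall_risk(answers):
    """Mean risk score over (correct, certainty) pairs, e.g. the 14 MCQs."""
    return sum(risk_score(c, cert) for c, cert in answers) / len(answers)
```

With this scoring, a participant answering every question correctly with high certainty would have an overall risk of 1.0, so the reported group means (1.5 before the course, 1.4 after) fall between low and moderate risk.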
[ "Design", "Participants", "Interventions", "Sample size", "Randomisation", "Data collection", "Participant characteristics", "Outcomes", "Drug dose calculation test and certainty in calculations", "Risk of error", "Course evaluation", "Ethical considerations", "Data analysis", "Knowledge, learning outcome and risk of error", "Course evaluation", "Auxiliary analyses", "Drug dose calculation skills", "Risk of error", "Importance for practice", "Study limitations" ]
[ "A randomised controlled open study with a parallel group design.", "Registered nurses working in two hospitals and three municipalities in Eastern Norway were recruited to participate in the study. Inclusion criteria were nurses with at least 1 year of work experience in a 50% part-time job or more. Excluded were nurses working in outpatient clinics, those who did not administer drugs and any who did not master the Norwegian language sufficiently. The study was performed from September 2007 to April 2009.", "At inclusion, all participants completed a form with relevant background characteristics, and nine statements from the General Health Questionnaire (GHQ 30).13 Quality of Life tools are often used to explore psychological well-being. The GHQ 30 contains the dimensions of a sense of coping and self-esteem/well-being, and was used to evaluate to what extent the nurses’ sense of coping affected their calculation skills. The nurses performed a multiple choice (MCQ) test in drug dose calculations. The questions were standard calculation tasks for bachelor students in nursing at university colleges. The test was taken either on paper or on an internet website. The time available for the test was 1 h, and the participants were allowed to use a calculator.\nAfter the test, the nurses were randomised to one of two 2-day courses in drug dose calculations. One group was assigned to a self-directed, interactive internet-based e-learning course developed at a Norwegian university college. The other was assigned to a 1-day conventional classroom course and a 1-day self-study. The content of the two courses was the same: a review of the basic theory of the different types of calculations, followed by examples and exercises. The topics covered were conversion between units; formulas for dose, quantity and strength; infusions; and dilutions. The e-learning group continued with interactive tests, hints and suggested solutions. 
They had access to a collection of tests with feedback on answers, and a printout of the compendium was available. The classroom group had a 1-day lecture covering the basic theory; exercises in groups; discussion in a plenary session and an individual test at the end of the day. The second day was self-study, with a textbook including exercises used at the same college.14 Two to four weeks after the course, the nurses were retested in drug dose calculations with an MCQ test similar to the pretest.", "Studies testing drug dose calculations in nurses have shown a mean score of 75% (SD 15%).15–17 In a study with 14 questions, this is equivalent to a score of 10.5 (SD 2.1). To detect a difference of one correct answer between the two didactic methods with a power of 0.8 and α<0.05, it was necessary to include 74 participants in each group. Owing to the likely dropouts, the aim was to randomise 180 participating nurses.", "At inclusion, each nurse was stratified according to five workplaces: internal medicine, surgery or psychiatric wards in hospitals, and nursing home or ambulatory care in primary healthcare. Immediately after submission of the pretest, the participants were randomised to one of the two didactic methods by predefined computer-generated lists for each stratum.", " Participant characteristics The following background characteristics were recorded: age; gender; childhood and education as a nurse in or outside of Norway; length of work experience as a nurse in at least a 50% part-time job; part-time job percentage in the past 12 months; present workplace in a specific hospital department (surgery, internal medicine or psychiatry) or primary healthcare (nursing home or ambulatory care); and frequency of drug dose calculation tasks at work, score 0–3: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day. 
Further educational background was recorded (yes/no): mathematics beyond the first mandatory year at upper secondary school; other education prior to nursing; postgraduate specialisation and courses in drug dose calculations during the past 3 years. The participants registered motivation for the courses in drug dose calculations, rated as 1=very unmotivated, 2=relatively unmotivated, 3=relatively motivated, 4=very motivated.\nIn addition, the participants were asked to consider statements from GHQ 30, in the context of performing medication tasks: five regarding coping (finding life a struggle; being able to enjoy normal activities; feeling reasonably happy; getting scared or panicky for no good reason and being capable of making decisions), and four regarding self-esteem/well-being (overall doing things well; satisfied with the way they have carried out their task; managing to keep busy and occupied; and managing as well as most people in the same situation). The ratings of these statements were 0–3: 0=more/better than usual, 1=as usual, 2=less/worse than usual and 3=much less/worse than usual; ‘as usual’ was defined as the normal state.\n Outcomes Drug dose calculation test and certainty in calculations A drug dose calculation test was performed before and after the course: 14 MCQs with four alternative answers. The topics were as follows (number of questions in brackets): conversion of units (7); formulas for calculation of dose, quantity or strength (4); infusions (2); and dilutions (1). For each question, the participants indicated a self-estimated certainty, graded from 0 to 3: 0=very uncertain, and would search for help; 1=relatively uncertain, and would probably search for help; 2=relatively certain, and would probably not search for help; and 3=very certain, and would not search for help. The questionnaires used are enclosed as online supplementary additional file 1.\n Risk of error Risk of error was estimated by combining knowledge and certainty for each question rated on a scale from 1 to 3, devised for the study. Correct answer combined with relatively or high certainty was regarded as a low risk of error (score=1), any answer combined with relatively or very low certainty was regarded as a moderate risk of error (score=2), and being very or relatively certain that an incorrect answer was correct was regarded as a high risk of error (score=3).\n Course evaluation After the course, the nurses recorded their assessment of the level of difficulty of the course related to their own prior knowledge (1=very difficult, 2=relatively difficult, 3=relatively easy, 4=very easy); and course satisfaction (1=very unsatisfied, 2=relatively unsatisfied, 3=relatively satisfied, 4=very satisfied). An evaluation of the usefulness of the specific course in drug dose calculations in daily work as a nurse was rated from 1=very small, 2=relatively small, 3=relatively large to 4=very large.\n Ethical considerations All participants gave written informed consent. The tests were performed de-identified. A list connecting the study participant number to the names was kept until after the retest, in case any of the participants had forgotten their number. To protect the participants from any consequences because of the test, the data were made anonymous before the analysis.\nEven if the study might uncover that individuals showed a high risk of medication errors due to lacking calculation skills, it was considered ethically justifiable not to be able to expose their identity to their employer.\n Data analysis The analysis was performed with intention-to-treat analyses. In addition, a per protocol analysis was performed for the main results. Depending on data distribution, comparisons between groups were analysed with a χ2 or Fisher’s exact test, a t test or Mann-Whitney U test, analysis of variance, Friedman, and Pearson or Spearman tests for correlations, and a Wilcoxon signed-rank test for paired comparisons before and after the course. All variables possibly associated with the learning outcome and change in risk of error were entered in linear regression analyses to identify independent predictors.18 Two-tailed significance tests were used, and a p value <0.05 was considered statistically significant.\nThe protocol contained instructions for handling missing data. Unanswered questions were scored as ‘incorrect answer’, and unanswered certainty scores as ‘very uncertain’. For participants who did not take the test after the course, the result from the pretest (last observation) was carried forward. The analysis was performed with SPSS V.18.0 (SPSS Inc, Chicago, Illinois, USA). All results are given as the mean and (SD) if not otherwise indicated.", "The following background characteristics were recorded: age; gender; childhood and education as a nurse in or outside of Norway; length of work experience as a nurse in at least a 50% part-time job; part-time job percentage in the past 12 months; present workplace in a specific hospital department (surgery, internal medicine or psychiatry) or primary healthcare (nursing home or ambulatory care); and frequency of drug dose calculation tasks at work, score 0–3: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day. Further educational background was recorded (yes/no): mathematics beyond the first mandatory year at upper secondary school; other education prior to nursing; postgraduate specialisation and courses in drug dose calculations during the past 3 years. The participants registered motivation for the courses in drug dose calculations, rated as 1=very unmotivated, 2=relatively unmotivated, 3=relatively motivated, 4=very motivated.\nIn addition, the participants were asked to consider statements from GHQ 30, in the context of performing medication tasks: five regarding coping (finding life a struggle; being able to enjoy normal activities; feeling reasonably happy; getting scared or panicky for no good reason and being capable of making decisions), and four regarding self-esteem/well-being (overall doing things well; satisfied with the way they have carried out their task; managing to keep busy and occupied; and managing as well as most people in the same situation). 
The ratings of these statements were 0–3: 0=more/better than usual, 1=as usual, 2=less/worse than usual and 3=much less/worse than usual; ‘as usual’ was defined as the normal state.", " Drug dose calculation test and certainty in calculations A drug dose calculation test was performed before and after the course: 14 MCQs with four alternative answers. The topics were as follows (number of questions in brackets): conversion of units (7); formulas for calculation of dose, quantity or strength (4); infusions (2); and dilutions (1). For each question, the participants indicated a self-estimated certainty, graded from 0 to 3: 0=very uncertain, and would search for help; 1=relatively uncertain, and would probably search for help; 2=relatively certain, and would probably not search for help; and 3=very certain, and would not search for help. The questionnaires used are enclosed as online supplementary additional file 1.\n Risk of error Risk of error was estimated by combining knowledge and certainty for each question rated on a scale from 1 to 3, devised for the study. Correct answer combined with relatively or high certainty was regarded as a low risk of error (score=1), any answer combined with relatively or very low certainty was regarded as a moderate risk of error (score=2), and being very or relatively certain that an incorrect answer was correct was regarded as a high risk of error (score=3).\n Course evaluation After the course, the nurses recorded their assessment of the level of difficulty of the course related to their own prior knowledge (1=very difficult, 2=relatively difficult, 3=relatively easy, 4=very easy); and course satisfaction (1=very unsatisfied, 2=relatively unsatisfied, 3=relatively satisfied, 4=very satisfied). An evaluation of the usefulness of the specific course in drug dose calculations in daily work as a nurse was rated from 1=very small, 2=relatively small, 3=relatively large to 4=very large.", "A drug dose calculation test was performed before and after the course: 14 MCQs with four alternative answers. 
The topics were as follows (number of questions in brackets): conversion of units (7); formulas for calculation of dose, quantity or strength (4); infusions (2); and dilutions (1). For each question, the participants indicated a self-estimated certainty, graded from 0 to 3: 0=very uncertain, and would search for help; 1=relatively uncertain, and would probably search for help; 2=relatively certain, and would probably not search for help; and 3=very certain, and would not search for help. The questionnaires used are enclosed as online supplementary additional file 1.", "Risk of error was estimated by combining knowledge and certainty for each question rated on a scale from 1 to 3, devised for the study. Correct answer combined with relatively or high certainty was regarded as a low risk of error (score=1), any answer combined with relatively or very low certainty was regarded as a moderate risk of error (score=2), and being very or relatively certain that an incorrect answer was correct was regarded as a high risk of error (score=3).", "After the course, the nurses recorded their assessment of the level of difficulty of the course related to their own prior knowledge (1=very difficult, 2=relatively difficult, 3=relatively easy, 4=very easy); and course satisfaction (1=very unsatisfied, 2=relatively unsatisfied, 3=relatively satisfied, 4=very satisfied). An evaluation of the usefulness of the specific course in drug dose calculations in daily work as a nurse was rated from 1=very small, 2=relatively small, 3=relatively large to 4=very large.", "All participants gave written informed consent. The tests were performed de-identified. A list connecting the study participant number to the names was kept until after the retest, in case any of the participants had forgotten their number. 
To protect the participants from any consequences because of the test, the data were made anonymous before the analysis.\nEven if the study might uncover that individuals showed a high risk of medication errors due to lacking calculation skills, it was considered ethically justifiable not to be able to expose their identity to their employer.", "The analysis was performed with intention-to-treat analyses. In addition, a per protocol analysis was performed for the main results. Depending on data distribution, comparisons between groups were analysed with a χ2 or Fisher’s exact test, a t test or Mann-Whitney U test, analysis of variance, Friedman, and Pearson or Spearman tests for correlations, and a Wilcoxon signed-rank test for paired comparisons before and after the course. All variables possibly associated with the learning outcome and change in risk of error were entered in linear regression analyses to identify independent predictors.18 Two-tailed significance tests were used, and a p value <0.05 was considered statistically significant.\nThe protocol contained instructions for handling missing data. Unanswered questions were scored as ‘incorrect answer’, and unanswered certainty scores as ‘very uncertain’. For participants who did not take the test after the course, the result from the pretest (last observation) was carried forward. The analysis was performed with SPSS V.18.0 (SPSS Inc, Chicago, Illinois, USA). All results are given as the mean and (SD) if not otherwise indicated.", "The test results before and after the course are shown in figure 2, and the upper part of table 2 gives the main results after e-learning and classroom teaching. No significant difference between the two didactic methods was detected for the overall test score, certainty or risk of error. The overall knowledge score improved from 11.1 (2.0) to 11.8 (2.0) (p<0.001). Before and after the course, 20 (10.9%) and 37 (20.2%) participants, respectively, completed a faultless test. 
The overall risk of error decreased after the course from 1.5 (0.3) to 1.4 (0.3) (p<0.001), but 41 nurses (22%) showed an increased risk, 20 from the e-learning group and 21 from the classroom group. This proportion is within the limits of what could appear by coincidence from a normal distribution (24%), and with a mean learning outcome of 0.7 (0.2).\nTest results in drug dose calculations.\nMain results after course in drug dose calculations\nResults are given as mean (SD).\nStatistical test: Mann-Whitney U test.\n*Auxiliary subgroup analysis.\nAn analysis of the 141 participants who completed the study according to the protocol did not alter the main finding that there was no difference between the two didactic methods. The overall knowledge score improved from 11.1 (2.0) to 12.0 (2.0) (p<0.001).\nTable 3 gives the results as the proportion of correct answers and the proportion of answers with a high risk of error within each calculation topic before and after the course. The test results in each topic for the two didactic methods showed that the classroom group scored significantly better after the course in conversion of units: 86% correct answers vs 78% (p<0.001), with no difference in the other topics. Overall, there were significant differences between the four topics in knowledge and risk of error both before and after the course, p<0.001 (Friedman's test). Sense of coping or self-esteem/well-being was not affected by the course for either of the groups, data not shown.\nKnowledge and high risk of error within each calculation topic before and after course\nResults are given as mean (SD).\nStatistical test: Wilcoxon signed-rank test, Friedman's test.\nFactors significantly associated with good learning outcome and reduction in the risk of error after the course are given in table 4. Among these factors, the randomisation to classroom teaching was significantly better in learning outcome, adjusted for other variables. 
Both a low pretest knowledge score and a low pretest certainty score were associated with a reduced risk of error after the course, as were being a man and working in hospital. Self-evaluations of coping and self-esteem/well-being were associated with neither learning outcome nor risk of error. The total R2 changes for the variables significantly associated with good learning outcome and risk of error were 0.28 and 0.18, respectively.\nFactors significantly associated with learning outcome and reduction in risk of error after course in drug dose calculations\nMultivariable regression analysis with all participant characteristics included as possible factors (n=183).\nStatistical test: linear regression analysis, after bivariable correlation tests (Pearson and Spearman).", "Nearly all (97.5%) of the participants stated a need for training courses in drug dose calculations.\nThe evaluation after the course showed no difference between the didactic methods in the expressed degree of difficulty or course satisfaction, data not shown. The specific value of the course for working situations was scored 3.1 (0.7) in the e-learning group and 2.7 (0.7) in the classroom group (p<0.001).", "A post hoc analysis for subgroups with a pretest knowledge score ≥9 and <9 is given in the lower part of table 2. For participants with a low prescore, classroom teaching gave a significantly better learning outcome and reduced risk of error after the course. The overall knowledge score improved in the high score group from 11.6 (1.4) to 12.0 (1.9) and in the low score group from 7.2 (1.0) to 9.9 (2.3), and the difference in learning outcome was highly significant (p<0.001).", "The study was not able to demonstrate an overall difference in learning outcome between the two didactic methods, either of statistical or clinical importance. Both methods resulted in improvement of drug dose calculations after the course, although the learning outcome was smaller than what was defined as clinically relevant. 
Adjusted for other contributing factors for learning outcome in the multivariable analysis, the classroom method was statistically superior to e-learning, as was the case for the subgroup with a low pretest result. This finding from the post hoc analysis was probably the only outcome that could have a meaningful practical implication for the choice of learning strategy, if reproduced in new studies. These results were in accordance with a meta-analysis of 201 trials comparing e-learning with other methods.19 The review concluded that any educational action gives a positive outcome, regardless of the method. E-learning works compared with no intervention, but tested against conventional methods it is difficult to detect any differences.\nDrug dose calculations are not advanced in a mathematical sense. The basic arithmetic functions of addition, subtraction, multiplication or division are needed to handle decimals and fractions. What seems to be challenging is conceptually understanding the information conveyed by the concentration denomination (per cent or mass per unit volume), and setting up the right calculation for the relationship between dose (mass), volume (amount) and concentration (strength). A standard labelling to mass per unit volume has been strongly recommended.20\n21\nThe fact that only 1 out of 10 nurses performed a faultless pretest was not surprising, given previous findings. In a study by McMullan, only 5% of the nurses achieved 80% correct calculations.22 Although statistically significant, the limited overall learning outcome after the courses was somewhat disappointing, with only 2 out of 10 with faultless tests. It seemed that the incorrect calculations were more frequent in conversion of units, the least complex task in the mathematical sense. The conversion of units improved the most after the course, while the learning outcome in the arithmetic tasks of infusions and dilutions was unchanged. 
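The relationship discussed above — dose (mass) = concentration (strength) × volume (amount), with per cent strengths first converted to mass per unit volume — can be made concrete in a short sketch. This is illustrative only; the function names are invented and are not part of the study materials:

```python
def percent_to_mg_per_ml(percent_w_v: float) -> float:
    """Convert a % w/v strength (grams per 100 ml) to mg/ml.

    1% w/v = 1 g / 100 ml = 1000 mg / 100 ml = 10 mg/ml.
    """
    return percent_w_v * 10.0


def volume_for_dose(dose_mg: float, strength_mg_per_ml: float) -> float:
    """Volume (ml) to draw up for a prescribed dose: volume = dose / concentration."""
    return dose_mg / strength_mg_per_ml


# Example: a 2% w/v solution is 20 mg/ml, so a prescribed 60 mg dose
# corresponds to 60 / 20 = 3 ml.
```

Labelling products directly in mass per unit volume, as recommended in the text, removes the first conversion step and leaves only the dose/concentration division.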
This has also been observed by other investigators, and supports the view that the challenges in drug dose calculations are more likely due to a poor conceptual understanding.10 Recent papers address the importance of including conceptual (understanding the problem), calculation (dosage computation) and technical measurement (dosage measurement) competence in teaching nurses in vocational mathematics, with models to help them understand the ‘what’, the ‘why’ and the ‘how’ in dosage problem solving.23\n24", "The study was not able to demonstrate any difference in the risk of error between the e-learning and classroom groups, either before or after the course. Asking for certainty in each calculation made it possible for the nurses to express whether they normally would have consulted others or not when doing the calculation. Being certain that an incorrect answer was correct was regarded as an adequate estimate for a high risk of error. To the best of our knowledge, such a method for estimating a risk of error from a test situation is not described by others, and may be a contribution to future research. Owing to the low learning outcome, one could fear that increased certainty would lead to an increased risk of error. Therefore, it was satisfying that the overall risk of error declined after the course with both methods. Although a proportion of 22% with an increased risk of error after taking the course seemed alarming, it was within the limit of what could occur by chance, due to the small learning outcome. However, one may speculate that taking courses may increase the risk of error, if the feeling of being secure is increased without a corresponding improvement of knowledge. 
This might have implications for the need for follow-up after courses.\nThe factors that were associated with a reduced risk of error after the calculation course could indicate who might benefit from training like this: men; nurses working in hospital; and those with a low pretest score and a low pretest certainty score. This supports the finding in the auxiliary analysis that nurses with weak drug dose calculation skills benefit the most from taking courses. Nevertheless, the risk of error demonstrated in the study did not necessarily reflect the real risk of adverse events affecting patients, as the test situation cannot measure how often miscalculations were performed or how serious the clinical implications might be for any patient. Such studies still need to be done.", "The fact that 48% of the participants in the study performed drug dose calculations at least weekly was more than anticipated. It has been a common perception that the need for most nurses to calculate drug doses is small in today's clinical practice. The reported extent of calculations underscores the importance of good skills in this field.\nWhen the need for continuous improvement and maintenance of skills is identified, the time and resources available will be decisive for the possibility to implement further training activities. E-learning is often the preferred choice in health services institutions, as it is both flexible and cost-effective. In our study, the e-learning group stated a higher specific value of the course for working situations, although the course content was similar in both methods. However, this method also had more dropouts and a lesser learning outcome for those with low skills. 
In a review article commenting on the results of a meta-analysis of e-learning and conventional instruction methods, Cook argues that, rather than more comparative studies, further research should focus on the conditions (how and when) under which e-learning is a preferable method.12\nAn implication of the findings could be to let nurses regularly attend an e-learning course followed by a screening test to uncover weak calculation topics. Those who need further training should be offered a more tailored follow-up. Others have also documented that a combination of different learning and teaching strategies results in better retention of drug calculation skills compared with lectures alone.23 Further studies of the effect of the introduction of drug dose calculation apps would also be of interest, as would more authentic observation studies in a high fidelity simulation environment, as reported from a Scottish NHS study.26\nThe higher specific value of the course for working situations reported by the e-learning group, despite similar course content in both methods, may be explained by the flexibility of the e-learning course, which allowed the participants to concentrate on the items that were considered difficult and relevant for their work, while the classroom group had to follow through the whole programme. Nearly all the nurses themselves realised that they needed more training in drug dose calculations, and an important factor was that motivation for the course was associated with a good learning outcome in the study. This indicates that the professional leadership in health institutions should facilitate and encourage the nurses to improve their skills further in drug dose calculations.\nIn addition to regular training in calculations, written procedures for specific dilutions and infusions used in the wards would be of importance as quality assurance for improved patient safety. 
This must be a part of the management responsibility.", "The participants in this study were recruited through the management line, and the study population represents a limited part of the total nurse population. We assume that nurses with low calculation skills would, to a lesser degree, volunteer for such a study, and hence presume that the calculation skills in clinical practice are lower than shown in this study. External validity might be an issue in studies with voluntary participation, and extrapolation of the findings of the study to all registered nurses should be performed with caution.\nSome may question the quality of the course content and duration or the teaching conditions of the courses, especially since the learning outcome of the courses was not convincing. However, the main aim of the study was to compare the two didactic methods. Also, to ensure a fair comparison and similar content of the courses, the subject teacher, who was part of the group that developed the e-learning course, was also responsible for the classroom lectures. Since the teacher had an interest in both didactic methods, the likelihood of her arranging the courses in favour of one of them was regarded as small. The questionnaire used was the same as that used to test the nursing students, and the calculation tasks were considered to be in accordance with the tasks performed in nursing practice.\nAnother limitation could be the controlled test conditions, without the time pressure and interruptions that are often the case in a stressful work situation, which tend to give better results than in reality. On the other hand, the calculation test situation itself may be stressful for nurses, since many have struggled to pass a similar test during their studies.\nSelecting two dimensions from the GHQ 30 questionnaire may also be a methodological limitation. 
Although no correlation between the outcomes and coping or well-being/self-esteem was detected, using only parts of the tool excluded the possibility of detecting an association between psychological well-being in general and drug dose calculation skills." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Design", "Participants", "Interventions", "Sample size", "Randomisation", "Data collection", "Participant characteristics", "Outcomes", "Drug dose calculation test and certainty in calculations", "Risk of error", "Course evaluation", "Ethical considerations", "Data analysis", "Results", "Knowledge, learning outcome and risk of error", "Course evaluation", "Auxiliary analyses", "Discussion", "Drug dose calculation skills", "Risk of error", "Importance for practice", "Study limitations", "Conclusion", "Supplementary Material" ]
[ "From international reviews and reports of adverse drug events, incorrect doses account for up to one-third of the events.1–3 Many health professionals find drug dose calculations difficult. The majority of medical students are unable to calculate the mass of a drug in solution correctly, and around half the doctors are unable to convert drug doses correctly from a percentage concentration or dilution to mass concentration.4\n5 Nurses carry out practical drug management after the physicians’ prescriptions both in hospitals and primary healthcare. In Norway, a faultless test in drug dose calculations during nursing education is required to become a registered nurse.6 Both nursing students and experienced nurses have problems with drug dose calculations, and nursing students early in the programme showed limited basic skills in arithmetic.7–10 We have shown a high risk of error in conversion of units in 10% of registered nurses in an earlier study.11\nE-learning was introduced with the internet in the early 1990s, and has been increasingly used in medical and healthcare education. E-learning is independent of time and place, and the training is easier to organise in the health services than classroom teaching, and at a lower cost. A meta-analysis from 2009 summarised more than 200 studies in health professions education, and concluded that e-learning is associated with large positive effects compared with no intervention, but compared with other interventions the effects are generally small.12 There is a lack of drug dose calculation studies where different didactic methods are compared.\nThe objective of this study was to compare the learning outcome, certainty and risk of error in drug dose calculations after courses with either self-directed e-learning or conventional classroom teaching. 
Further aims were to study factors associated with the learning outcome and risk of error.", " Design A randomised controlled open study with a parallel group design.\n Participants Registered nurses working in two hospitals and three municipalities in Eastern Norway were recruited to participate in the study. Inclusion criteria were nurses with at least 1 year of work experience in a 50% part-time job or more. Excluded were nurses working in outpatient clinics, those who did not administer drugs and any who did not master the Norwegian language sufficiently. The study was performed from September 2007 to April 2009.\n Interventions At inclusion, all participants completed a form with relevant background characteristics, and nine statements from the General Health Questionnaire (GHQ 30).13 Quality of Life tools are often used to explore psychological well-being. The GHQ 30 contains the dimensions of a sense of coping and self-esteem/well-being, and was used to evaluate to what extent the nurses’ sense of coping affected their calculation skills. The nurses performed a multiple choice (MCQ) test in drug dose calculations. The questions were standard calculation tasks for bachelor students in nursing at university colleges. The test was taken either on paper or on an internet website. 
The time available for the test was 1 h, and the participants were allowed to use a calculator.\nAfter the test, the nurses were randomised to one of two 2-day courses in drug dose calculations. One group was assigned to a self-directed, interactive internet-based e-learning course developed at a Norwegian university college. The other was assigned to a 1-day conventional classroom course and a 1-day self-study. The content of the two courses was the same: a review of the basic theory of the different types of calculations, followed by examples and exercises. The topics covered were conversion between units; formulas for dose, quantity and strength; infusions; and dilutions. The e-learning group continued with interactive tests, hints and suggested solutions. They had access to a collection of tests with feedback on answers, and a printout of the compendium was available. The classroom group had 1 day lecture covering the basic theory; exercises in groups; discussion in a plenary session and an individual test at the end of the day. The second day was self-study, with a textbook including exercises used at the same college.14 Two to four weeks after the course, the nurses were retested in drug dose calculations with a similar MCQ test as the pretest.\n Sample size Studies testing drug dose calculations in nurses have shown a mean score of 75% (SD 15%).15–17 In a study with 14 questions, this is equivalent to a score of 10.5 (SD 2.1). To detect a difference of one correct answer between the two didactic methods with a strength of 0.8 and α<0.05, it was necessary to include 74 participants in each group. Owing to the likely dropouts, the aim was to randomise 180 participating nurses.\n Randomisation At inclusion, each nurse was stratified according to five workplaces: internal medicine, surgery or psychiatric wards in hospitals, and nursing home or ambulatory care in primary healthcare. Immediately after submission of the pretest, the participants were randomised to one of the two didactic methods by predefined computer-generated lists for each stratum.", "A randomised controlled open study with a parallel group design.", "Registered nurses working in two hospitals and three municipalities in Eastern Norway were recruited to participate in the study. Inclusion criteria were nurses with at least 1 year of work experience in a 50% part-time job or more. Excluded were nurses working in outpatient clinics, those who did not administer drugs and any who did not master the Norwegian language sufficiently. The study was performed from September 2007 to April 2009.", "At inclusion, all participants completed a form with relevant background characteristics, and nine statements from the General Health Questionnaire (GHQ 30).13 Quality of Life tools are often used to explore psychological well-being. The GHQ 30 contains the dimensions of a sense of coping and self-esteem/well-being, and was used to evaluate to what extent the nurses’ sense of coping affected their calculation skills. 
The nurses performed a multiple choice (MCQ) test in drug dose calculations. The questions were standard calculation tasks for bachelor students in nursing at university colleges. The test was taken either on paper or on an internet website. The time available for the test was 1 h, and the participants were allowed to use a calculator.\nAfter the test, the nurses were randomised to one of two 2-day courses in drug dose calculations. One group was assigned to a self-directed, interactive internet-based e-learning course developed at a Norwegian university college. The other was assigned to a 1-day conventional classroom course and a 1-day self-study. The content of the two courses was the same: a review of the basic theory of the different types of calculations, followed by examples and exercises. The topics covered were conversion between units; formulas for dose, quantity and strength; infusions; and dilutions. The e-learning group continued with interactive tests, hints and suggested solutions. They had access to a collection of tests with feedback on answers, and a printout of the compendium was available. The classroom group had 1 day lecture covering the basic theory; exercises in groups; discussion in a plenary session and an individual test at the end of the day. The second day was self-study, with a textbook including exercises used at the same college.14 Two to four weeks after the course, the nurses were retested in drug dose calculations with a similar MCQ test as the pretest.", "Studies testing drug dose calculations in nurses have shown a mean score of 75% (SD 15%).15–17 In a study with 14 questions, this is equivalent to a score of 10.5 (SD 2.1). To detect a difference of one correct answer between the two didactic methods with a strength of 0.8 and α<0.05, it was necessary to include 74 participants in each group. 
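The sample-size reasoning above can be reproduced with the standard normal-approximation formula for comparing two means. The sketch below is the textbook formula, not necessarily the authors' exact computation — their figure of 74 per group suggests an additional correction or a different approximation:

```python
from math import ceil
from statistics import NormalDist


def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for detecting a mean difference `delta`
    between two groups with common standard deviation `sd`:

        n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_power) ** 2 * (sd / delta) ** 2)


# Detecting a 1-point difference with SD 2.1 gives roughly 70 per group
# under the normal approximation.
```

Inflating the target to 180 randomised nurses, as the authors did, is the usual guard against dropouts eroding power.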
Owing to the likely dropouts, the aim was to randomise 180 participating nurses.", "At inclusion, each nurse was stratified according to five workplaces: internal medicine, surgery or psychiatric wards in hospitals, and nursing home or ambulatory care in primary healthcare. Immediately after submission of the pretest, the participants were randomised to one of the two didactic methods by predefined computer-generated lists for each stratum.", " Participant characteristics The following background characteristics were recorded: age; gender; childhood and education as a nurse in or outside of Norway; length of work experience as a nurse in at least a 50% part-time job; part-time job percentage in the past 12 months; present workplace in a specific hospital department (surgery, internal medicine or psychiatry) or primary healthcare (nursing home or ambulatory care); and frequency of drug dose calculation tasks at work, score 0–3: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day. Further educational background was recorded (yes/no): mathematics beyond the first mandatory year at upper secondary school; other education prior to nursing; postgraduate specialisation and courses in drug dose calculations during the past 3 years. The participants registered motivation for the courses in drug dose calculations, rated as 1=very unmotivated, 2=relatively unmotivated, 3=relatively motivated, 4=very motivated.\nIn addition, the participants were asked to consider statements from GHQ 30, in the context of performing medication tasks: five regarding coping (finding life a struggle; being able to enjoy normal activities; feeling reasonably happy; getting scared or panicky for no good reason and being capable of making decisions), and four regarding self-esteem/well-being (overall doing things well; satisfied with the way they have carried out their task; managing to keep busy and occupied; and managing as well as most people in the same situation). 
The ratings of these statements were 0–3: 0=more/better than usual, 1=as usual, 2=less/worse than usual and 3=much less/worse than usual; ‘as usual’ was defined as the normal state.\n Outcomes Drug dose calculation test and certainty in calculations A drug dose calculation test was performed before and after the course: 14 MCQs with four alternative answers. The topics were as follows (number of questions in brackets): conversion of units (7), formulas for calculation of dose, quantity or strength (4), infusions (2) and dilutions (1). For each question, the participants indicated a self-estimated certainty, graded from 0 to 3: 0=very uncertain, and would search for help; 1=relatively uncertain, and would probably search for help; 2=relatively certain, and would probably not search for help; and 3=very certain, and would not search for help. The questionnaires used are enclosed as online supplementary additional file 1.\n Risk of error Risk of error was estimated by combining knowledge and certainty for each question rated on a scale from 1 to 3, devised for the study. 
Correct answer combined with relatively or high certainty was regarded as a low risk of error (score=1), any answer combined with relatively or very low certainty was regarded as a moderate risk of error (score=2), and being very or relatively certain that an incorrect answer was correct was regarded as a high risk of error (score=3).\nRisk of error was estimated by combining knowledge and certainty for each question rated on a scale from 1 to 3, devised for the study. Correct answer combined with relatively or high certainty was regarded as a low risk of error (score=1), any answer combined with relatively or very low certainty was regarded as a moderate risk of error (score=2), and being very or relatively certain that an incorrect answer was correct was regarded as a high risk of error (score=3).\n Course evaluation After the course, the nurses recorded their assessment of the level of difficulty of the course related to their own prior knowledge (1=very difficult, 2=relatively difficult, 3=relatively easy, 4=very easy); and course satisfaction (1=very unsatisfied, 2=relatively unsatisfied, 3=relatively satisfied, 4=very satisfied). An evaluation of the usefulness of the specific course in drug dose calculations in daily work as a nurse was rated from 1=very small, 2=relatively small, 3=relatively large to 4=very large.\nAfter the course, the nurses recorded their assessment of the level of difficulty of the course related to their own prior knowledge (1=very difficult, 2=relatively difficult, 3=relatively easy, 4=very easy); and course satisfaction (1=very unsatisfied, 2=relatively unsatisfied, 3=relatively satisfied, 4=very satisfied). 
An evaluation of the usefulness of the specific course in drug dose calculations in daily work as a nurse was rated from 1=very small, 2=relatively small, 3=relatively large to 4=very large.\n Drug dose calculation test and certainty in calculations A drug dose calculation test was performed before and after the course: 14 MCQs with four alternative answers. The topics were as follows (number of questions in brackets): conversion of units,7 formulas for calculation of dose, quantity or strength,4 infusions2 and dilutions.1 For each question, the participants indicated a self-estimated certainty, graded from 0 to 3: 0=very uncertain, and would search for help; 1=relatively uncertain, and would probably search for help; 2=relatively certain, and would probably not search for help; and 3=very certain, and would not search for help. The questionnaires used are enclosed as online supplementary additional file 1.\nA drug dose calculation test was performed before and after the course: 14 MCQs with four alternative answers. The topics were as follows (number of questions in brackets): conversion of units,7 formulas for calculation of dose, quantity or strength,4 infusions2 and dilutions.1 For each question, the participants indicated a self-estimated certainty, graded from 0 to 3: 0=very uncertain, and would search for help; 1=relatively uncertain, and would probably search for help; 2=relatively certain, and would probably not search for help; and 3=very certain, and would not search for help. The questionnaires used are enclosed as online supplementary additional file 1.\n Risk of error Risk of error was estimated by combining knowledge and certainty for each question rated on a scale from 1 to 3, devised for the study. 
Ethical considerations
All participants gave written informed consent. The tests were performed de-identified.
A list connecting the study participant numbers to the names was kept until after the retest, in case any of the participants had forgotten their number. To protect the participants from any consequences of the test, the data were made anonymous before the analysis. Even if the study might uncover individuals with a high risk of medication errors due to lacking calculation skills, it was considered ethically justifiable not to be able to expose their identity to their employer.

Data analysis
The analysis was performed with intention-to-treat analyses. In addition, a per protocol analysis was performed for the main results. Depending on the data distribution, comparisons between groups were analysed with a χ2 or Fisher's exact test, a t test or Mann-Whitney U test, analysis of variance or Friedman's test, and Pearson or Spearman tests for correlations; a Wilcoxon signed-rank test was used for paired comparisons before and after the course. All variables possibly associated with the learning outcome and change in risk of error were entered in linear regression analyses to identify independent predictors.18 Two-tailed significance tests were used, and a p value <0.05 was considered statistically significant.

The protocol contained instructions for handling missing data. Unanswered questions were scored as 'incorrect answer', and unanswered certainty scores as 'very uncertain'.
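A minimal sketch of these missing-data rules, together with the last-observation-carried-forward rule for a missing post-course test; the record layout is hypothetical, as the protocol did not specify code.

```python
def impute_item(answer_correct, certainty):
    """Apply the protocol's per-item missing-data rules (sketch).

    An unanswered question is scored as incorrect; a missing certainty
    rating is scored as 0 (= 'very uncertain').
    """
    return (bool(answer_correct) if answer_correct is not None else False,
            certainty if certainty is not None else 0)


def posttest_score(pre_score, post_score):
    """Last observation carried forward (sketch): if the post-course
    test is missing, the pretest result is used instead."""
    return post_score if post_score is not None else pre_score
```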
For participants who did not take the test after the course, the result from the pretest (last observation) was carried forward. The analysis was performed with SPSS V.18.0 (SPSS Inc, Chicago, Illinois, USA). All results are given as the mean (SD) if not otherwise indicated.

The following background characteristics were recorded: age; gender; childhood and education as a nurse in or outside of Norway; length of work experience as a nurse in at least a 50% part-time job; part-time job percentage in the past 12 months; present workplace in a specific hospital department (surgery, internal medicine or psychiatry) or primary healthcare (nursing home or ambulatory care); and frequency of drug dose calculation tasks at work, scored 0–3: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day.
Further educational background was recorded (yes/no): mathematics beyond the first mandatory year at upper secondary school; other education prior to nursing; postgraduate specialisation; and courses in drug dose calculations during the past 3 years. The participants rated their motivation for the courses in drug dose calculations as 1=very unmotivated, 2=relatively unmotivated, 3=relatively motivated, 4=very motivated.

In addition, the participants were asked to consider statements from GHQ 30, in the context of performing medication tasks: five regarding coping (finding life a struggle; being able to enjoy normal activities; feeling reasonably happy; getting scared or panicky for no good reason; and being capable of making decisions), and four regarding self-esteem/well-being (overall doing things well; satisfied with the way they have carried out their task; managing to keep busy and occupied; and managing as well as most people in the same situation). These statements were rated 0–3: 0=more/better than usual, 1=as usual, 2=less/worse than usual and 3=much less/worse than usual; 'as usual' was defined as the normal state.
In total, 212 registered nurses were included in the study, and 183 were eligible for randomisation. Figure 1 shows the flow of participants throughout the study, and table 1 summarises the participant characteristics and the pretest results. The two groups were well balanced with respect to baseline characteristics. Of the 183 nurses, 79 (43%) were recruited from hospitals (48 from surgery departments, including intensive care units; 23 from internal medicine wards; 8 from psychiatric wards) and 104 (57%) from primary healthcare (52 from nursing homes and 52 from ambulatory healthcare).
Nearly half of the nurses (48%) performed drug dose calculations weekly or more often.

Figure 1: Participant flow chart.

Table 1: Participants' characteristics and pretest results. The results are given as mean (SD) or number of participants (proportion). *Scale: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day. †Upper secondary school. ‡Scale: 0=more/better than usual, 1=as usual, 2=less/worse than usual, 3=much less/worse than usual. Statistical tests: t test, Mann-Whitney U test, χ2 test, Fisher's exact test.

There was a tendency towards more dropouts in the e-learning group: 18.4% vs 9.9% (p=0.10). The dropouts did not differ from those who completed the study with regard to workplace (12 from hospitals and 14 from primary healthcare, p=0.74) or pretest result (score 10.5 vs 11.1, 95% CI for difference −1.5 to +0.2, p=0.13).

Knowledge, learning outcome and risk of error
The test results before and after the course are shown in figure 2, and the upper part of table 2 gives the main results after e-learning and classroom teaching. No significant difference between the two didactic methods was detected for the overall test score, certainty or risk of error. The overall knowledge score improved from 11.1 (2.0) to 11.8 (2.0) (p<0.001). Before and after the course, 20 (10.9%) and 37 (20.2%) participants, respectively, completed a faultless test. The overall risk of error decreased after the course from 1.5 (0.3) to 1.4 (0.3) (p<0.001), but 41 nurses (22%) showed an increased risk: 20 from the e-learning group and 21 from the classroom group.
Given a mean learning outcome of 0.7 (0.2), this proportion is within the limits of what could arise by chance under a normal distribution (24%).

Figure 2: Test results in drug dose calculations.

Table 2: Main results after the course in drug dose calculations. Results are given as mean (SD). Statistical test: Mann-Whitney U test. *Auxiliary subgroup analysis.

An analysis of the 141 participants who completed the study according to the protocol did not alter the main finding that there was no difference between the two didactic methods. The overall knowledge score improved from 11.1 (2.0) to 12.0 (2.0) (p<0.001).

Table 3 gives the results as the proportion of correct answers and the proportion of answers with a high risk of error within each calculation topic before and after the course. The test results in each topic for the two didactic methods showed that the classroom group scored significantly better after the course in conversion of units (86% vs 78% correct answers, p<0.001), with no difference in the other topics. Overall, there were significant differences between the four topics in knowledge and risk of error both before and after the course, p<0.001 (Friedman's test). Sense of coping or self-esteem/well-being was not affected by the course in either group (data not shown).

Table 3: Knowledge and high risk of error within each calculation topic before and after the course. Results are given as mean (SD). Statistical tests: Wilcoxon signed-rank test, Friedman's test.

Factors significantly associated with a good learning outcome and a reduction in the risk of error after the course are given in table 4. Among these factors, randomisation to classroom teaching was associated with a significantly better learning outcome, adjusted for the other variables. Both a low pretest knowledge score and a low pretest certainty score were associated with a reduced risk of error after the course, as were being a man and working in hospital.
Self-evaluations of coping and self-esteem/well-being were associated neither with learning outcome nor with risk of error. The total R2 changes for the variables significantly associated with good learning outcome and risk of error were 0.28 and 0.18, respectively.

Table 4: Factors significantly associated with learning outcome and reduction in risk of error after the course in drug dose calculations. Multivariable regression analysis with all participant characteristics included as possible factors (n=183). Statistical tests: linear regression analysis, after bivariable correlation tests (Pearson and Spearman).

Course evaluation
Nearly all (97.5%) of the participants stated a need for training courses in drug dose calculations. The evaluation after the course showed no difference between the didactic methods in the expressed degree of difficulty or course satisfaction (data not shown). The specific value of the course for working situations was scored 3.1 (0.7) in the e-learning group and 2.7 (0.7) in the classroom group (p<0.001).

Auxiliary analyses
A post hoc analysis for subgroups with a pretest knowledge score ≥9 and <9 is given in the lower part of table 2. For participants with a low prescore, classroom teaching gave a significantly better learning outcome and a reduced risk of error after the course. The overall knowledge score improved in the high score group from 11.6 (1.4) to 12.0 (1.9) and in the low score group from 7.2 (1.0) to 9.9 (2.3), and the difference in learning outcome was highly significant (p<0.001).
Drug dose calculation skills
The study was not able to demonstrate an overall difference in learning outcome between the two didactic methods, of either statistical or clinical importance. Both methods resulted in improved drug dose calculations after the course, although the learning outcome was smaller than what was defined as clinically relevant. Adjusted for other contributing factors in the multivariable analysis, the classroom method was statistically superior to e-learning, as it was for the subgroup with a low pretest result. This finding from the post hoc analysis was probably the only outcome that could have a meaningful practical implication for the choice of learning strategy, if reproduced in new studies. These results were in accordance with a meta-analysis of 201 trials comparing e-learning with other methods.19 The review summarised that any educational action gives a positive outcome, regardless of the method: e-learning works compared with no intervention, but tested against conventional methods it is difficult to detect any differences.

Drug dose calculations are not advanced in a mathematical sense. The basic arithmetic functions of addition, subtraction, multiplication or division are needed to handle decimals and fractions.
What seems to be challenging is to conceptually understand the difference in information conveyed by the concentration denomination (per cent or mass per unit volume), or the ability to set up the right calculation for the relationship between dose or mass, volume or amount, and concentration or strength. Standard labelling as mass per unit volume has been strongly recommended.20 21

The fact that only 1 out of 10 nurses performed a faultless pretest was not surprising in light of previous findings. In a study by McMullan, only 5% of the nurses achieved 80% correct calculations.22 Although statistically significant, the limited overall learning outcome after the courses was somewhat disappointing, with only 2 out of 10 completing faultless tests. Incorrect calculations were more frequent in conversion of units, the least complex task in the mathematical sense. Conversion of units improved the most after the course, while the learning outcome in the arithmetic tasks of infusions and dilutions was unchanged. This has also been observed by other investigators, and supports the view that the challenges in drug dose calculations are more likely due to a poor conceptual understanding.10 Recent papers address the importance of including conceptual (understanding the problem), calculation (dosage computation) and technical measurement (dosage measurement) competence when teaching nurses vocational mathematics, with models to help them understand the 'what', the 'why' and the 'how' in dosage problem solving.23 24
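The conceptual step can be made concrete with a hypothetical worked example (not a question from the study's test): a concentration stated in per cent (w/v) must first be converted to mass per unit volume before the volume for a prescribed dose can be set up as volume = dose / concentration.

```python
def percent_to_mg_per_ml(percent: float) -> float:
    """Convert a w/v percentage concentration to mg/mL.

    1% w/v = 1 g per 100 mL = 1000 mg per 100 mL = 10 mg/mL.
    """
    return percent * 10.0


def volume_for_dose(dose_mg: float, concentration_mg_per_ml: float) -> float:
    """Volume (mL) to draw up for a dose: dose / concentration."""
    return dose_mg / concentration_mg_per_ml
```

For example, a 30 mg dose from a 1% solution (10 mg/mL) requires 3 mL; misreading the per cent label as 1 mg/mL would give a tenfold volume error, which is the kind of conceptual slip the discussion above describes.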
This has also been observed by other investigators, and supports the view that the challenges in drug dose calculations are more likely due to poor conceptual understanding.10 Recent papers address the importance of including conceptual (understanding the problem), calculation (dosage computation) and technical measurement (dosage measurement) competence when teaching nurses vocational mathematics, with models to help them understand the 'what', the 'why' and the 'how' in dosage problem solving.23 24

Risk of error

The study was not able to demonstrate any difference in the risk of error between the e-learning and classroom groups, either before or after the course. Asking for certainty in each calculation made it possible for the nurses to express whether they normally would have consulted others when doing the calculation. Being certain that an incorrect answer was correct was regarded as an adequate estimate of a high risk of error. To the best of our knowledge, such a method for estimating risk of error from a test situation has not been described by others, and may be a contribution to future research. Owing to the low learning outcome, one could fear that increased certainty would lead to an increased risk of error. It was therefore satisfying that the overall risk of error declined after the course with both methods. Although a proportion of 22% with an increased risk of error after taking the course seemed alarming, it was within the limit of what could occur by chance, given the small learning outcome. However, one may speculate that taking courses could increase the risk of error if the feeling of being secure increases without a corresponding improvement in knowledge.
This might have implications for the need for follow-up after courses.

The factors that were associated with a reduced risk of error after the calculation course could indicate who might benefit from training like this: being male, working in a hospital, a low pretest score and a low pretest certainty score. This supports the finding in the auxiliary analysis that nurses with weak drug dose calculation skills benefit the most from taking courses. Nevertheless, the risk of error demonstrated in the study does not necessarily reflect the real risk of adverse events affecting patients, as the test situation cannot measure how often miscalculations are performed or how serious the clinical implications might be for any patient. Such studies still need to be done.

Importance for practice

The fact that 48% of the participants in the study performed drug dose calculations at least weekly was more than anticipated. It has been a common perception that most nurses have little need to calculate drug doses in today's clinical practice. The reported extent of calculations underscores the importance of good skills in this field.

When the need for continuous improvement and maintenance of skills is identified, the time and resources available will be decisive for the possibility of implementing further training activities. E-learning is often the preferred choice in health services institutions, as it is both flexible and cost-effective. In our study, the e-learning group stated a higher specific value of the course for working situations, although the course content was similar in both methods. However, this method also had more dropouts and a smaller learning outcome for those with low skills.
In a review article commenting on the results of a meta-analysis of e-learning and conventional instruction methods, Cook argues that rather than more comparative studies, further research should focus on the conditions (how and when) under which e-learning is a preferable method.12

An implication of the findings could be to let nurses regularly attend an e-learning course followed by a screening test to uncover weak calculation topics. Those who need further training should be offered a more tailored follow-up. Others have also documented that a combination of different learning and teaching strategies results in better retention of drug calculation skills compared with lectures alone.23 Further studies of the effect of the introduction of drug dose calculation apps would also be of interest, as well as more authentic observation studies in a high-fidelity simulation environment, as reported from a Scottish NHS study.26

Interestingly, the e-learning group stated a higher specific value of the course for working situations, although the course content was similar in both methods. This may be explained by the flexibility of the e-learning course, which allowed the participants to concentrate on the items they considered difficult and relevant for their work, while the classroom group had to follow the whole programme. Nearly all the nurses realised that they needed more training in drug dose calculations, and an important factor was that motivation for the course was associated with a good learning outcome in the study. This indicates that the professional leadership in health institutions should facilitate and encourage nurses to improve their drug dose calculation skills further.

In addition to regular training in calculations, written procedures for the specific dilutions and infusions used in the wards would be important as quality assurance for improved patient safety. This must be part of the management responsibility.

Study limitations

The participants in this study were recruited through the management line, and the study population represents a limited part of the total nurse population. We assume that nurses with low calculation skills would, to a lesser degree, volunteer for such a study, and hence presume that calculation skills in clinical practice would be lower than shown in this study.
External validity might be an issue in studies with voluntary participation, and extrapolation of the findings to all registered nurses should be performed with caution.

Some may question the quality of the course content, duration or teaching conditions, especially since the learning outcome of the courses was not convincing. However, the main aim of the study was to compare the two didactic methods. Also, to ensure a fair comparison and similar content of the courses, the subject teacher, who was part of the group that developed the e-learning course, was also responsible for the classroom lectures. Since the teacher had an interest in both didactic methods, the probability that she would arrange the courses in favour of one of them was regarded as small. The questionnaire used was the same as that used to test nursing students, and the calculation tasks were considered to be in accordance with the tasks performed in nursing practice.

Another limitation could be the controlled test conditions, without the time pressure and interruptions that are often present in a stressful work situation, which tend to produce better results than in reality. On the other hand, the calculation test situation itself may be stressful for nurses, since many have struggled to pass a similar test during their studies.

Selecting two dimensions from the GHQ 30 questionnaire may also be a methodological limitation. Although no correlation between the outcomes and coping or well-being/self-esteem was detected, using only parts of the tool excluded the possibility of detecting an association between psychological well-being in general and drug dose calculation skills.
Conclusions

The study was not able to demonstrate any differences between e-learning and classroom teaching in drug dose calculations with respect to learning outcome, certainty or risk of error. The overall learning outcome was without practical significance, and conversion of units was the only topic that was significantly improved after the course. An independent factor in favour of classroom teaching was weak pretest knowledge, while factors suggesting use of e-learning could be the need for training in relevant work-specific tasks and time-effective repetition.
Subject areas: Health services administration & management; Medical education & training
Introduction: From international reviews and reports of adverse drug events, incorrect doses account for up to one-third of the events.1–3 Many health professionals find drug dose calculations difficult. The majority of medical students are unable to calculate the mass of a drug in solution correctly, and around half the doctors are unable to convert drug doses correctly from a percentage concentration or dilution to mass concentration.4 5 Nurses carry out practical drug management after the physicians’ prescriptions both in hospitals and primary healthcare. In Norway, a faultless test in drug dose calculations during nursing education is required to become a registered nurse.6 Both nursing students and experienced nurses have problems with drug dose calculations, and nursing students early in the programme showed limited basic skills in arithmetic.7–10 We have shown a high risk of error in conversion of units in 10% of registered nurses in an earlier study.11 E-learning was introduced with the internet in the early 1990s, and has been increasingly used in medical and healthcare education. E-learning is independent of time and place, and the training is easier to organise in the health services than classroom teaching, and at a lower cost. A meta-analysis from 2009 summarised more than 200 studies in health professions education, and concluded that e-learning is associated with large positive effects compared with no intervention, but compared with other interventions the effects are generally small.12 There is a lack of drug dose calculation studies where different didactic methods are compared. The objective of this study was to compare the learning outcome, certainty and risk of error in drug dose calculations after courses with either self-directed e-learning or conventional classroom teaching. Further aims were to study factors associated with the learning outcome and risk of error. 
Methods:

Design
A randomised controlled open study with a parallel group design.

Participants
Registered nurses working in two hospitals and three municipalities in Eastern Norway were recruited to participate in the study. Inclusion criteria were nurses with at least 1 year of work experience in a 50% part-time position or more. Excluded were nurses working in outpatient clinics, those who did not administer drugs and those who did not master the Norwegian language sufficiently. The study was performed from September 2007 to April 2009.

Interventions
At inclusion, all participants completed a form with relevant background characteristics, and nine statements from the General Health Questionnaire (GHQ 30).13 Quality of life tools are often used to explore psychological well-being. The GHQ 30 contains the dimensions of a sense of coping and self-esteem/well-being, and was used to evaluate to what extent the nurses' sense of coping affected their calculation skills. The nurses performed a multiple choice (MCQ) test in drug dose calculations. The questions were standard calculation tasks for bachelor students in nursing at university colleges. The test was taken either on paper or on an internet website. The time available for the test was 1 h, and the participants were allowed to use a calculator. After the test, the nurses were randomised to one of two 2-day courses in drug dose calculations. One group was assigned to a self-directed, interactive internet-based e-learning course developed at a Norwegian university college. The other was assigned to a 1-day conventional classroom course and a 1-day self-study. The content of the two courses was the same: a review of the basic theory of the different types of calculations, followed by examples and exercises. The topics covered were conversion between units; formulas for dose, quantity and strength; infusions; and dilutions. The e-learning group continued with interactive tests, hints and suggested solutions. They had access to a collection of tests with feedback on answers, and a printout of the compendium was available. The classroom group had a 1-day lecture covering the basic theory; exercises in groups; discussion in a plenary session and an individual test at the end of the day. The second day was self-study, with a textbook including exercises used at the same college.14 Two to four weeks after the course, the nurses were retested in drug dose calculations with an MCQ test similar to the pretest.

Sample size
Studies testing drug dose calculations in nurses have shown a mean score of 75% (SD 15%).15–17 In a study with 14 questions, this is equivalent to a score of 10.5 (SD 2.1). To detect a difference of one correct answer between the two didactic methods with a power of 0.8 and α<0.05, it was necessary to include 74 participants in each group. Owing to likely dropouts, the aim was to randomise 180 participating nurses.
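The sample size reasoning above can be reproduced with the standard two-sample normal-approximation formula for comparing two means. The sketch below is illustrative only, not the authors' actual computation; a small discrepancy with the reported 74 per group is expected, since the exact figure depends on the formula and software used.

```python
import math

# Approximate per-group sample size for detecting a difference `delta`
# between two means with common standard deviation `sd`, using the
# normal-approximation formula n = 2 * ((z_alpha + z_beta) * sd / delta)^2.
# z values correspond to two-sided alpha = 0.05 and power = 0.80.
def n_per_group(delta, sd, z_alpha=1.959964, z_beta=0.841621):
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Difference of one correct answer, SD of 2.1 correct answers (14-item test).
n = n_per_group(delta=1.0, sd=2.1)
print(n)  # 70 with this approximation; the paper reports 74 per group,
          # presumably from a slightly more conservative calculation.
```

With either figure, the target of randomising 180 nurses leaves a margin of roughly 15–20% for dropouts across the two groups.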
Randomisation At inclusion, each nurse was stratified according to five workplaces: internal medicine, surgery or psychiatric wards in hospitals, and nursing home or ambulatory care in primary healthcare. Immediately after submission of the pretest, the participants were randomised to one of the two didactic methods by predefined computer-generated lists for each stratum. At inclusion, each nurse was stratified according to five workplaces: internal medicine, surgery or psychiatric wards in hospitals, and nursing home or ambulatory care in primary healthcare. Immediately after submission of the pretest, the participants were randomised to one of the two didactic methods by predefined computer-generated lists for each stratum. Design: A randomised controlled open study with a parallel group design. Participants: Registered nurses working in two hospitals and three municipalities in Eastern Norway were recruited to participate in the study. Inclusion criteria were nurses with at least 1 year of work experience in a 50% part-time job or more. Excluded were nurses working in outpatient clinics, those who did not administer drugs and any who did not master the Norwegian language sufficiently. The study was performed from September 2007 to April 2009. Interventions: At inclusion, all participants completed a form with relevant background characteristics, and nine statements from the General Health Questionnaire (GHQ 30).13 Quality of Life tools are often used to explore psychological well-being. The GHQ 30 contains the dimensions of a sense of coping and self-esteem/well-being, and was used to evaluate to what extent the nurses’ sense of coping affected their calculation skills. The nurses performed a multiple choice (MCQ) test in drug dose calculations. The questions were standard calculation tasks for bachelor students in nursing at university colleges. The test was taken either on paper or on an internet website. 
The time available for the test was 1 h, and the participants were allowed to use a calculator. After the test, the nurses were randomised to one of two 2-day courses in drug dose calculations. One group was assigned to a self-directed, interactive internet-based e-learning course developed at a Norwegian university college. The other was assigned to a 1-day conventional classroom course and a 1-day self-study. The content of the two courses was the same: a review of the basic theory of the different types of calculations, followed by examples and exercises. The topics covered were conversion between units; formulas for dose, quantity and strength; infusions; and dilutions. The e-learning group continued with interactive tests, hints and suggested solutions. They had access to a collection of tests with feedback on answers, and a printout of the compendium was available. The classroom group had a 1-day lecture covering the basic theory, exercises in groups, discussion in a plenary session and an individual test at the end of the day. The second day was self-study, with a textbook including exercises used at the same college.14 Two to four weeks after the course, the nurses were retested in drug dose calculations with an MCQ test similar to the pretest.

Sample size: Studies testing drug dose calculations in nurses have shown a mean score of 75% (SD 15%).15–17 In a study with 14 questions, this is equivalent to a score of 10.5 (SD 2.1). To detect a difference of one correct answer between the two didactic methods with a power of 0.8 and α<0.05, it was necessary to include 74 participants in each group. Owing to likely dropouts, the aim was to randomise 180 participating nurses.

Randomisation: At inclusion, each nurse was stratified according to five workplaces: internal medicine, surgery or psychiatric wards in hospitals, and nursing home or ambulatory care in primary healthcare.
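The sample size arithmetic can be reproduced with the standard normal-approximation formula for comparing two means. This is an illustrative sketch, not taken from the paper: the z-values are the conventional ones, and the result (about 70 per group) lands close to, but not exactly at, the reported 74, which may reflect an additional small-sample correction.

```python
import math

# Two-sample comparison of means, normal approximation.
# From the text: SD = 2.1 correct answers (15% of 14 questions),
# detectable difference = 1 correct answer, two-sided alpha = 0.05, power = 0.8.
z_alpha = 1.96   # z for two-sided 5% significance
z_beta = 0.84    # z for 80% power
sd = 0.15 * 14   # 2.1 correct answers
delta = 1.0      # difference to detect

n_per_group = (z_alpha + z_beta) ** 2 * 2 * sd ** 2 / delta ** 2
print(math.ceil(n_per_group))  # about 70 with this approximation
```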
Immediately after submission of the pretest, the participants were randomised to one of the two didactic methods by predefined computer-generated lists for each stratum.

Data collection: Participant characteristics. The following background characteristics were recorded: age; gender; childhood and education as a nurse in or outside of Norway; length of work experience as a nurse in at least a 50% part-time job; part-time job percentage in the past 12 months; present workplace in a specific hospital department (surgery, internal medicine or psychiatry) or primary healthcare (nursing home or ambulatory care); and frequency of drug dose calculation tasks at work, score 0–3: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day. Further educational background was recorded (yes/no): mathematics beyond the first mandatory year at upper secondary school; other education prior to nursing; postgraduate specialisation and courses in drug dose calculations during the past 3 years. The participants registered motivation for the courses in drug dose calculations, rated as 1=very unmotivated, 2=relatively unmotivated, 3=relatively motivated, 4=very motivated. In addition, the participants were asked to consider statements from GHQ 30, in the context of performing medication tasks: five regarding coping (finding life a struggle; being able to enjoy normal activities; feeling reasonably happy; getting scared or panicky for no good reason and being capable of making decisions), and four regarding self-esteem/well-being (overall doing things well; satisfied with the way they have carried out their task; managing to keep busy and occupied; and managing as well as most people in the same situation). The ratings of these statements were 0–3: 0=more/better than usual, 1=as usual, 2=less/worse than usual and 3=much less/worse than usual; ‘as usual’ was defined as the normal state.
Outcomes: Drug dose calculation test and certainty in calculations. A drug dose calculation test was performed before and after the course: 14 MCQs with four alternative answers.
The topics were as follows (number of questions in brackets): conversion of units (7); formulas for calculation of dose, quantity or strength (4); infusions (2); and dilutions (1). For each question, the participants indicated a self-estimated certainty, graded from 0 to 3: 0=very uncertain, and would search for help; 1=relatively uncertain, and would probably search for help; 2=relatively certain, and would probably not search for help; and 3=very certain, and would not search for help. The questionnaires used are enclosed as online supplementary additional file 1.

Risk of error: Risk of error was estimated by combining knowledge and certainty for each question, rated on a scale from 1 to 3 devised for the study.
A correct answer combined with relatively high or very high certainty was regarded as a low risk of error (score=1); any answer combined with relatively low or very low certainty was regarded as a moderate risk of error (score=2); and being very or relatively certain that an incorrect answer was correct was regarded as a high risk of error (score=3).
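The risk-of-error rule combines a question's correctness with the 0–3 self-rated certainty. As a compact restatement, the mapping might be written as follows (an illustrative sketch; the function name is hypothetical, not from the study):

```python
def risk_of_error(correct: bool, certainty: int) -> int:
    """Map one question to a risk score: 1=low, 2=moderate, 3=high.

    certainty is the self-rated 0-3 scale
    (0=very uncertain ... 3=very certain).
    """
    if certainty <= 1:           # very or relatively uncertain: moderate risk
        return 2
    return 1 if correct else 3   # certain and correct: low; certain but wrong: high
```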
Course evaluation: After the course, the nurses recorded their assessment of the level of difficulty of the course related to their own prior knowledge (1=very difficult, 2=relatively difficult, 3=relatively easy, 4=very easy); and course satisfaction (1=very unsatisfied, 2=relatively unsatisfied, 3=relatively satisfied, 4=very satisfied). An evaluation of the usefulness of the specific course in drug dose calculations in daily work as a nurse was rated from 1=very small, 2=relatively small, 3=relatively large to 4=very large.

Ethical considerations: All participants gave written informed consent. The tests were performed de-identified. A list connecting the study participant number to the names was kept until after the retest, in case any of the participants had forgotten their number. To protect the participants from any consequences because of the test, the data were made anonymous before the analysis. Even if the study might uncover that individuals showed a high risk of medication errors due to lacking calculation skills, it was considered ethically justifiable not to be able to expose their identity to their employer.
Data analysis: The analysis was performed with intention-to-treat analyses. In addition, a per protocol analysis was performed for the main results. Depending on data distribution, comparisons between groups were analysed with a χ2 or Fisher’s exact test, a t test or Mann-Whitney U test, analysis of variance, Friedman’s test, and Pearson or Spearman tests for correlations, and a Wilcoxon signed-rank test for paired comparisons before and after the course. All variables possibly associated with the learning outcome and change in risk of error were entered in linear regression analyses to identify independent predictors.18 Two-tailed significance tests were used, and a p value <0.05 was considered statistically significant. The protocol contained instructions for handling missing data. Unanswered questions were scored as ‘incorrect answer’, and unanswered certainty scores as ‘very uncertain’. For participants who did not take the test after the course, the result from the pretest (last observation) was carried forward. The analysis was performed with SPSS V.18.0 (SPSS Inc, Chicago, Illinois, USA). All results are given as the mean and (SD) if not otherwise indicated.
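The protocol's missing-data rules (unanswered questions scored as incorrect, unanswered certainty scored as 'very uncertain', and the pretest result carried forward for participants with no post-test) can be restated as a small sketch. Function and parameter names here are illustrative, not from the study:

```python
def clean_question(answer_correct, certainty):
    """Apply the per-question rules; None marks a missing value."""
    correct = answer_correct if answer_correct is not None else False  # unanswered = incorrect
    cert = certainty if certainty is not None else 0                   # unanswered = very uncertain (0)
    return correct, cert

def final_score(pretest_score, posttest_score):
    """Last observation carried forward when the post-test is missing."""
    return posttest_score if posttest_score is not None else pretest_score
```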
Results: In total, 212 registered nurses were included in the study, and 183 were eligible for randomisation. Figure 1 shows the flow of participants throughout the study, and table 1 summarises the participant characteristics and the pretest results. The two groups were well balanced with respect to baseline characteristics. Of the 183 nurses, 79 (43%) were recruited from hospitals (48 from surgery departments, including intensive care units; 23 from internal medicine wards; 8 from psychiatric wards) and 104 (57%) from primary healthcare (52 from nursing homes and 52 from ambulatory healthcare). Nearly half of the nurses (48%) performed drug dose calculations weekly or more often.

[Figure 1: Participant flow chart.]

[Table 1: Participants’ characteristics and pretest results. Results are given as mean (SD) or number of participants (proportion). *Scale: 0=less than monthly, 1=monthly, 2=weekly, 3=every working day. †Upper secondary school. ‡Scale: 0=more/better than usual, 1=as usual, 2=less/worse than usual, 3=much less/worse than usual. Statistical tests: t test, Mann-Whitney U test, χ2 test, Fisher’s exact test.]

There was a tendency for more dropouts in the e-learning group: 18.4% vs 9.9% (p=0.10). The dropouts did not differ from those who completed the study regarding workplace (12 from hospitals and 14 from primary healthcare, p=0.74) or pretest result (score 10.5 vs 11.1; 95% CI for difference −1.5 to +0.2; p=0.13).
Knowledge, learning outcome and risk of error: The test results before and after the course are shown in figure 2, and the upper part of table 2 gives the main results after e-learning and classroom teaching. No significant difference between the two didactic methods was detected for the overall test score, certainty or risk of error. The overall knowledge score improved from 11.1 (2.0) to 11.8 (2.0) (p<0.001). Before and after the course, 20 (10.9%) and 37 (20.2%) participants, respectively, completed a faultless test. The overall risk of error decreased after the course from 1.5 (0.3) to 1.4 (0.3) (p<0.001), but 41 nurses (22%) showed an increased risk, 20 from the e-learning group and 21 from the classroom group. This proportion is within the limits of what could appear by chance under a normal distribution (24%), given a mean learning outcome of 0.7 (0.2).

[Figure 2: Test results in drug dose calculations.]

[Table 2: Main results after course in drug dose calculations. Results are given as mean (SD). Statistical test: Mann-Whitney U test. *Auxiliary subgroup analysis.]

An analysis of the 141 participants who completed the study according to the protocol did not alter the main finding that there was no difference between the two didactic methods. The overall knowledge score improved from 11.1 (2.0) to 12.0 (2.0) (p<0.001). Table 3 gives the results as the proportion of correct answers and the proportion of answers with a high risk of error within each calculation topic before and after the course. The test results in each topic for the two didactic methods showed that the classroom group scored significantly better after the course in conversion of units (86% correct answers vs 78%, p<0.001), with no difference in the other topics. Overall, there were significant differences between the four topics in knowledge and risk of error both before and after the course, p<0.001 (Friedman’s test).
Sense of coping and self-esteem/well-being were not affected by the course in either group (data not shown).

[Table 3: Knowledge and high risk of error within each calculation topic before and after the course. Results are given as mean (SD). Statistical tests: Wilcoxon signed-rank test, Friedman’s test.]

Factors significantly associated with a good learning outcome and a reduction in the risk of error after the course are given in table 4. Among these factors, randomisation to classroom teaching was associated with a significantly better learning outcome, adjusted for the other variables. Both a low pretest knowledge score and a low certainty score were associated with a reduced risk of error after the course, as were being a man and working in hospital. Self-evaluations of coping and self-esteem/well-being were associated with neither learning outcome nor risk of error. The total R2 changes for the variables significantly associated with good learning outcome and risk of error were 0.28 and 0.18, respectively.

[Table 4: Factors significantly associated with learning outcome and reduction in risk of error after the course in drug dose calculations. Multivariable regression analysis with all participant characteristics included as possible factors (n=183). Statistical test: linear regression analysis, after bivariable correlation tests (Pearson and Spearman).]

Course evaluation: Nearly all (97.5%) of the participants stated a need for training courses in drug dose calculations. The evaluation after the course showed no difference between the didactic methods in the expressed degree of difficulty or course satisfaction (data not shown). The specific value of the course for working situations was scored 3.1 (0.7) in the e-learning group and 2.7 (0.7) in the classroom group (p<0.001).

Auxiliary analyses: A post hoc analysis for subgroups with a pretest knowledge score ≥9 and <9 is given in the lower part of table 2. For participants with a low prescore, classroom teaching gave a significantly better learning outcome and a reduced risk of error after the course.
The overall knowledge score improved in the high score group from 11.6 (1.4) to 12.0 (1.9) and in the low score group from 7.2 (1.0) to 9.9 (2.3), and the difference in learning outcome was highly significant (p<0.001). A post hoc analysis for subgroups with a pretest knowledge score ≥9 and <9 is given in the lower part of table 2. For participants with a low prescore, classroom teaching gave a significantly better learning outcome and reduced risk of error after the course. The overall knowledge score improved in the high score group from 11.6 (1.4) to 12.0 (1.9) and in the low score group from 7.2 (1.0) to 9.9 (2.3), and the difference in learning outcome was highly significant (p<0.001). Knowledge, learning outcome and risk of error: The test results before and after the course are shown in figure 2, and the upper part of table 2 gives the main results after e-learning and classroom teaching. No significant difference between the two didactic methods was detected for the overall test score, certainty or risk of error. The overall knowledge score improved from 11.1 (2.0) to 11.8 (2.0) (p<0.001). Before and after the course, 20 (10.9%) and 37 (20.2%) participants, respectively, completed a faultless test. The overall risk of error decreased after the course from 1.5 (0.3) to 1.4 (0.3) (p<0.001), but 41 nurses (22%) showed an increased risk, 20 from the e-learning group and 21 from the classroom group. This proportion is within the limits of what could appear by coincidence from a normal distribution (24%), and with a mean learning outcome of 0.7 (0.2). Test results in drug dose calculations. Main results after course in drug dose calculations Results are given as mean (SD). Statistical test: Mann-Whitney U test. *Auxiliary subgroup analysis. An analysis of the 141 participants who completed the study according to the protocol did not alter the main finding that there was no difference between the two didactic methods. 
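The claim that 22% of nurses showing an increased risk is within chance expectation (24%) can be illustrated under a simple normality assumption. The SD used here is illustrative, not taken from the study; with a mean improvement of 0.7 and a change-score SD of about 1.0, the quoted 24% chance expectation is reproduced:

```python
from scipy.stats import norm

mu = 0.7      # mean learning outcome reported above
sigma = 1.0   # assumed SD of individual change scores (illustrative)

# If individual change scores ~ N(mu, sigma), the fraction expected to
# show a decline purely by chance is Phi(-mu/sigma).
p_decline = norm.cdf(-mu / sigma)
print(f"{p_decline:.0%}")  # → 24%
```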
The overall knowledge score improved from 11.1 (2.0) to 12.0 (2.0) (p<0.001). Table 3 gives the results as the proportion of correct answers and the proportion of answers with a high risk of error within each calculation topic before and after the course. The test results in each topic for the two didactic methods showed that the classroom group scored significantly better after the course in conversion of units: 86% correct answers vs 78% (p<0.001), with no difference in the other topics. Overall, there were significant differences between the four topics in knowledge and risk of error both before and after the course, p<0.001 (Friedman's test). Sense of coping or self-esteem/well-being was not affected by the course for either of the groups, data not shown. Knowledge and high risk of error within each calculation topic before and after course Results are given as mean (SD). Statistical test: Wilcoxon signed-rank test, Friedman's test. Factors significantly associated with good learning outcome and reduction in the risk of error after the course are given in table 4. Among these factors, the randomisation to classroom teaching was significantly better in learning outcome, adjusted for other variables. Both low pretest knowledge and certainty score were associated with a reduced risk of error after the course, as were being a man and working in hospital. Self-evaluations of coping and self-esteem/well-being were neither associated with learning outcome nor with risk of error. The total R2 changes for the variables significantly associated with good learning outcome and risk of error were 0.28 and 0.18, respectively. Factors significantly associated with learning outcome and reduction in risk of error after course in drug dose calculations Multivariable regression analysis with all participant characteristics included as possible factors (n=183). Statistical test: linear regression analysis, after bivariable correlation tests Pearson and Spearman. 
Course evaluation: Nearly all (97.5%) of the participants stated a need for training courses in drug dose calculations. The evaluation after the course showed no difference between the didactic methods in the expressed degree of difficulty or course satisfaction, data not shown. The specific value of the course for working situations was scored 3.1 (0.7) in the e-learning group and 2.7 (0.7) in the classroom group (p<0.001). Auxiliary analyses: A post hoc analysis for subgroups with a pretest knowledge score ≥9 and <9 is given in the lower part of table 2. For participants with a low prescore, classroom teaching gave a significantly better learning outcome and reduced risk of error after the course. The overall knowledge score improved in the high score group from 11.6 (1.4) to 12.0 (1.9) and in the low score group from 7.2 (1.0) to 9.9 (2.3), and the difference in learning outcome was highly significant (p<0.001). Discussion: Drug dose calculation skills: The study was not able to demonstrate an overall difference in learning outcome between the two didactic methods, either of statistical or clinical importance. Both methods resulted in improvement of drug dose calculations after the course, although the learning outcome was smaller than what was defined as clinically relevant. Adjusted for other contributing factors for learning outcome in the multivariable analysis, the classroom method was statistically superior to e-learning, and so was the case for a subgroup with a low pretest result. This finding from the post hoc analysis was probably the only outcome that could have a meaningful practical implication for choice of learning strategy, if reproduced in new studies. These results were in accordance with a meta-analysis of 201 trials comparing e-learning with other methods.19 The review summarised that any educational action gives a positive outcome, regardless of the method.
E-learning works compared with no intervention, but when tested against conventional methods it is difficult to detect any differences. Drug dose calculations are not advanced in a mathematical sense: only the basic arithmetic functions of addition, subtraction, multiplication and division, applied to decimals and fractions, are needed. What seems to be challenging is to conceptually understand the difference in information from the concentration denomination (per cent vs mass per unit volume), and the ability to set up the right calculation for the relationship between dose (mass), volume (amount) and concentration (strength). A standard labelling as mass per unit volume has been strongly recommended.20 21 The fact that only 1 out of 10 nurses performed a faultless pretest was not surprising, given what has previously been shown. In a study by McMullan, only 5% of the nurses achieved 80% correct calculations.22 Although statistically significant, the limited overall learning outcome after the courses was somewhat disappointing, with only 2 out of 10 completing faultless tests. Incorrect calculations were more frequent in conversion of units, the least complex task in the mathematical sense. Conversion of units improved the most after the course, while the learning outcome in the arithmetic tasks of infusions and dilutions was unchanged.
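The dose–volume–concentration relationship referred to above reduces to a one-line calculation. The helper function and figures here are illustrative, not taken from the study's test items:

```python
def volume_to_draw(dose_mg, concentration_mg_per_ml):
    """Volume (ml) needed to deliver dose_mg from a solution of the given strength."""
    return dose_mg / concentration_mg_per_ml

# Example: 125 mg ordered, ampoule labelled 50 mg/ml
print(volume_to_draw(125, 50))  # → 2.5
```

Labelling strength as mass per unit volume (e.g. 50 mg/ml rather than 5%) lets the nurse apply this relationship directly, which is one reason such labelling has been recommended.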
This has also been observed by other investigators, and supports the view that the challenges in drug dose calculations are more likely due to a poor conceptual understanding.10 Recent papers address the importance of including conceptual (understanding the problem), calculation (dosage computation) and technical measurement (dosage measurement) competence in teaching nurses in vocational mathematics, with models to help them understand the ‘what’, the ‘why’ and the ‘how’ in dosage problem solving.23 24 Risk of error: The study was not able to demonstrate any difference in the risk of error between the e-learning and classroom groups, either before or after the course. Asking for certainty in each calculation made it possible for the nurses to express whether they normally would have consulted others or not when doing the calculation. Being certain that an incorrect answer was correct was regarded as an adequate estimate for a high risk of error.
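The scoring idea described above can be sketched as follows. This is a minimal illustration of the principle only; the actual instrument's certainty scale and scoring rules are assumptions here:

```python
def high_risk_flags(answers):
    """answers: list of (is_correct, certainty) pairs, certainty on an
    assumed 1-3 scale where 3 means 'certain, would not consult others'.
    An incorrect answer given with full certainty is flagged as high risk."""
    return [(not correct) and certainty == 3 for correct, certainty in answers]

# One nurse's test: correct/certain, wrong/certain, wrong/unsure, correct/unsure
answers = [(True, 3), (False, 3), (False, 1), (True, 2)]
print(sum(high_risk_flags(answers)))  # → 1 high-risk answer
```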
To the best of our knowledge, such a method for estimating a risk of error from a test situation is not described by others, and may be a contribution to future research. Owing to the low learning outcome, one could fear that increased certainty would lead to an increased risk of error. Therefore, it was satisfying that the overall risk of error declined after the course with both methods. Although a proportion of 22% with an increased risk of error after taking the course seemed alarming, it was within the limit of what could occur by chance, due to the small learning outcome. However, one may speculate that taking courses may increase the risk of error, if the feeling of being secure is increased without a corresponding improvement of knowledge. This might have implications for the need of follow-up after courses. The factors that were associated with a reduced risk of error after the calculation course could indicate who might benefit from training like this: being a man; working in hospital; low pretest score and low pretest certainty score. This supports the finding in the auxiliary analysis that nurses with weak drug dose calculation skills benefit the most from taking courses. Nevertheless, the risk of error demonstrated in the study did not necessarily reflect the real risk of adverse events affecting patients, as the test situation cannot measure how often miscalculations were performed or how serious the clinical implications might be for any patient. Such studies still need to be done. Importance for practice: The fact that 48% of the participants in the study performed drug dose calculations at least weekly was more than anticipated. It has been a common perception that the need for most nurses to calculate drug doses is small in today's clinical practice. The reported extent of calculations underscores the importance of good skills in this field.
When the need for continuous improvement and maintenance of skills is identified, the time and resources available will be decisive for the possibility to implement further training activities. E-learning is often the preferred choice in health services institutions, as it is both flexible and cost-effective. In our study, the e-learning group stated a higher specific value of the course for working situations, although the course content was similar in both methods. However, this method also had more dropouts and a smaller learning outcome for those with low skills. In a review article commenting on the results of a meta-analysis of e-learning and conventional instruction methods, Cook claims that rather than more comparative studies, further research should focus on the conditions (how and when) under which e-learning is a preferable method.12 An implication of the findings can be to let nurses regularly attend an e-learning course followed by a screening test to uncover weak calculation topics. Those who need further training should be offered a more tailored follow-up. Others have also documented that a combination of different learning and teaching strategies results in better retention of drug calculation skills compared with lectures alone.23 Further studies of the effect of the introduction of drug dose calculation apps would also be of interest, as would more authentic observation studies in a high-fidelity simulation environment, as reported from a Scottish NHS study.26 The higher specific value the e-learning group assigned to the course for working situations may be explained by the flexibility of the e-learning course, which allowed the participants to concentrate on the items that were considered difficult and relevant for their work, while the classroom group had to follow through the whole programme.
Nearly all the nurses themselves realised that they needed more training in drug dose calculations, and an important factor was that motivation for the course was associated with a good learning outcome in the study. This indicates that the professional leadership in health institutions should facilitate and encourage the nurses to improve their skills further in drug dose calculations. In addition to regular training in calculations, written procedures for the specific dilutions and infusions used in the wards would be important as quality assurance for improved patient safety. This must be a part of the management responsibility.
Study limitations: The participants in this study were recruited through the management line, and the study population represents a limited part of the total nurse population. We assume that nurses with low calculation skills would, to a lesser degree, volunteer for such a study, and hence presume that the calculation skills in clinical practice would be lower than shown in this study. External validity might be an issue in studies with voluntary participation, and extrapolation of the findings of the study to all registered nurses should be performed with caution. Some may question the quality of the course content and duration or teaching conditions of the courses, especially since the learning outcome of the courses was not convincing. However, the main aim of the study was to compare the two didactic methods. Also, to ensure a fair comparison and similar content of the courses, the subject teacher, who was part of the group that developed the e-learning course, was also responsible for the classroom lectures. Since the teacher had an interest in both didactic methods, the probability that she would affect the course arrangements in favour of one of them was regarded as small. The questionnaire used was the same as that used to test the nursing students, and the calculation tasks were considered to be in accordance with the tasks performed in nursing practice. Another limitation could be the controlled test conditions, without the time pressure and interruptions that are often the case in a stressful work situation, which tend towards better results than in reality. On the other hand, the calculation test situation itself may be stressful for the nurses, since many have struggled to pass a similar test during their studies. Selecting two dimensions from the GHQ 30 questionnaire may also be a methodological limitation.
Although no correlation between the outcomes and coping or well-being/self-esteem was detected, the usage of only parts of the tool excluded the possibility of detecting an association between physiological well-being in general and drug dose calculation skills. The participants in this study were recruited through the management line, and the study population represents a limited part of the total nurse population. We assume that nurses with low calculation skills would, to a lesser degree, volunteer for such a study, and hence presume that the calculation skills in clinical practice would be lower than shown in this study. External validity might be an issue in studies with voluntary participation, and extrapolation of the findings of the study to all registered nurses should be performed with caution. Some may question the quality of the course content and duration or teaching conditions of the courses, especially since the learning outcome of the courses were not convincing. However, the main aim for the study was to compare the two didactic methods. Also, to ensure a fair comparison and similar content of the courses, the subject teacher, who was a part of the group that developed the e-learning course, was also responsible for the classroom lectures. Since the teacher had an interest in both didactic methods, the probability for her to affect the course arrangements in favour of one of them was regarded as small. The questionnaire used was the same as that used to test the nursing students, and the calculation tasks were considered to be in accordance with the tasks that were performed in the nursing practice. Another limitation could be the controlled test conditions, without time pressure and interruptions that are often the case in a stressful work situation, which tend towards better results than in reality. 
On the other hand, the calculation test situation itself may be stressful for the nurses, since many have struggled to pass a similar test during their studies. Selecting two dimensions from the GHQ 30 questionnaire may also be a methodological limitation. Although no correlation between the outcomes and coping or well-being/self-esteem was detected, the usage of only parts of the tool excluded the possibility of detecting an association between physiological well-being in general and drug dose calculation skills. Drug dose calculation skills: The study was not able to demonstrate an overall difference in learning outcome between the two didactic methods, either of statistical or clinical importance. Both methods resulted in improvement of drug dose calculations after the course, although the learning outcome was smaller than what was defined as clinically relevant. Adjusted for other contributing factors for learning outcome in the multivariable analysis, the classroom method was statistically superior to e-learning, and so was the case for a subgroup with a low pretest result. This finding from the post hoc analysis was probably the only outcome that could have a meaningful practical implication for choice of learning strategy, if reproduced in new studies. These results were in accordance with a meta-analysis of 201 trials comparing e-learning with other methods.19 The review summarised that any educational action gives a positive outcome, regardless of the method. E-learning works compared with no intervention, but tested against conventional methods it is difficult to detect any differences. Drug dose calculations are not advanced in a mathematical sense. The basic arithmetic functions of addition, subtraction, multiplication or division are needed to decide decimals and fractions. 
What seems to be challenging is to conceptually understand the difference in information from the concentration denomination: per cent or mass per unit volume, or the ability to set up the right calculation for the relationship between dose or mass, volume or amount and concentration or strength. A standard labelling to mass per unit volume has been strongly recommended.20 21 The fact that only 1 out of 10 nurses performed a faultless pretest was not surprising, from what is previously shown. In a study by McMullan, only 5% of the nurses achieved 80% correct calculations.22 Although statistically significant, the limited overall learning outcome after the courses was somewhat disappointing, with only 2 out of 10 with faultless tests. It seemed that the incorrect calculations were more frequent in conversion of units, the least complex task in the mathematical sense. The conversion of units improved the most after the course, while the learning outcome in the arithmetic tasks of infusions and dilutions were unchanged. This has also been observed by other investigators, and supports the view that the challenges in drug dose calculations are more likely due to a poor conceptual understanding.10 Recent papers address the importance of including conceptual (understanding the problem), calculation (dosage computation) and technical measurement (dosage measurement) competence in teaching nurses in vocational mathematics, with models to help them understand the ‘what’, the ‘why’ and the ‘how’ in dosage problem solving.23 24 Risk of error: The study was not able to demonstrate any difference in the risk of error between the e-learning and classroom groups, either before or after the course. Asking for certainty in each calculation made it possible for the nurses to express whether they normally would have consulted others or not when doing the calculation. Being certain that an incorrect answer was correct was regarded as an adequate estimate for a high risk of error. 
To the best of our knowledge, such a method for estimating a risk of error from a test situation is not described by others, and may be a contribution to future research. Owing to the low learning outcome, one could fear that increased certainty would lead to an increased risk of error. Therefore, it was satisfying that the overall risk of error declined after the course with both methods. Although a proportion of 22% with an increased risk of error after taking the course seemed alarming, it was within the limit of what could occur by chance, due to the small learning outcome. However, one may speculate that taking courses may increase the risk of error, if the feeling of being secure is increased without a corresponding improvement of knowledge. This might have implications for the need of follow-up after courses. The factors that were associated with a reduced risk of error after the calculation course could indicate who might benefit from training like this: being a man; working in hospital; low pretest score and low pretest certainty score. This supports the finding in the auxiliary analysis that nurses with weak drug dose calculation skills benefit the most from taking courses. Nevertheless, the risk of error demonstrated in the study did not necessarily reflect the real risk of adverse events affecting patients, as the test situation cannot measure how often miscalculations were performed or how serious the clinical implications might be for any patient. Such studies still need to be done. Importance for practice: The fact that 48% of the participants in the study performed drug dose calculations at least weekly was more than anticipated. It has been a common perception that the need for most nurses to calculate drug doses is small in today's clinical practice. The reported extent of calculations underscores the importance of good skills in this field. 
When the need for continuous improvement and maintenance of skills is identified, the time and resources available will be decisive for the possibility to implement further training activities. E-learning is more often a preferred choice in health services institutions, as it is both flexible and cost-effective. In our study, the e-learning group stated a higher specific value of the course for working situations, although the course content was similar in both methods. However, this method also had more dropouts and a lesser learning outcome for those with low skills. In a review article commenting on the results of a meta-analysis of e-learning and conventional instruction methods, Cook claims that rather than more comparative studies, further research should focus on conditions (how and when) under which e-learning is a preferable method.12 An implication of the findings can be to let nurses regularly attend an e-learning course followed by a screening test to uncover the weak calculation topics. Those who need further training should be offered a more tailored follow-up. Others have also documented that a combination of different learning and teaching strategies do result in better retention of drug calculation skills compared with lectures alone.23 Further studies of the effect of the introduction of drug dose calculation apps would also be of interest, as well as more authentic observation studies in a high fidelity simulation environment, as reported from a Scottish HHS study.26 Interestingly, the e-learning group stated a higher specific value of the course for working situations, although the course content was similar in both methods. This may be explained by the flexibility of the e-learning course, which allowed the participants to concentrate on the items that were considered difficult and relevant for their work, while the classroom group had to follow through the whole programme. 
Nearly all the nurses themselves realised that they needed more training in drug dose calculations, and an important factor was that motivation for the course was associated with a good learning outcome in the study. This indicates that the professional leadership in health institutions should facilitate and encourage nurses to further improve their skills in drug dose calculations. In addition to regular training in calculations, written procedures for the specific dilutions and infusions used in the wards would be important as quality assurance for improved patient safety. This must be part of the management responsibility. Study limitations: The participants in this study were recruited through the management line, and the study population represents a limited part of the total nurse population. We assume that nurses with low calculation skills would, to a lesser degree, volunteer for such a study, and hence presume that the calculation skills in clinical practice are lower than shown in this study. External validity might be an issue in studies with voluntary participation, and extrapolation of the findings to all registered nurses should be performed with caution. Some may question the quality of the course content, the duration or the teaching conditions of the courses, especially since the learning outcome of the courses was not convincing. However, the main aim of the study was to compare the two didactic methods. Also, to ensure a fair comparison and similar content of the courses, the subject teacher, who was part of the group that developed the e-learning course, was also responsible for the classroom lectures. Since the teacher had an interest in both didactic methods, the probability that she would affect the course arrangements in favour of one of them was regarded as small.
The questionnaire used was the same as that used to test nursing students, and the calculation tasks were considered to be in accordance with the tasks performed in nursing practice. Another limitation could be the controlled test conditions, without the time pressure and interruptions that are often the case in a stressful work situation, which tend to produce better results than would be achieved in reality. On the other hand, the calculation test situation itself may be stressful for nurses, since many have struggled to pass a similar test during their studies. Selecting two dimensions from the GHQ-30 questionnaire may also be a methodological limitation. Although no correlation between the outcomes and coping or well-being/self-esteem was detected, using only parts of the tool excluded the possibility of detecting an association between psychological well-being in general and drug dose calculation skills. Conclusion: The study was not able to demonstrate any differences between e-learning and classroom teaching in drug dose calculations with respect to learning outcome, certainty or risk of error. The overall learning outcome was without practical significance, and conversion of units was the only topic that was significantly improved after the course. An independent factor in favour of classroom teaching was weak pretest knowledge, while factors suggesting use of e-learning could be the need for training in relevant work-specific tasks and time-effective repetition.
Background: Insufficient skills in drug dose calculations increase the risk of medication errors. Even experienced nurses may struggle with such calculations. Learning flexibility and cost considerations make e-learning interesting as an alternative to classroom teaching. This study compared the learning outcome and risk of error after a course in drug dose calculations for nurses with the two methods. Methods: In a randomised controlled open study, nurses from hospitals and primary healthcare were randomised to either e-learning or classroom teaching. Before and after a 2-day course, the nurses underwent a multiple-choice test in drug dose calculations: 14 tasks with four alternative answers (score 0-14), and a statement regarding the certainty of each answer (score 0-3). High risk of error was defined as being certain that an incorrect answer was correct. The results are given as mean (SD). Results: 16 men and 167 women participated in the study, aged 42.0 (9.5) years with a working experience of 12.3 (9.5) years. The number of correct answers after e-learning was 11.6 (2.0) and after classroom teaching 11.9 (2.0) (p=0.18, NS); the improvements were 0.5 (1.6) and 0.9 (2.2), respectively (p=0.07, NS). Classroom learning was significantly superior to e-learning among participants with a pretest score below 9. In favour of e-learning was a higher rated specific value of the course for the working situation. There was no difference in risk of error between the groups after the course (p=0.77). Conclusions: The study showed no differences in learning outcome or risk of error between e-learning and classroom teaching in drug dose calculations. The overall learning outcome was small. Weak precourse knowledge was associated with a better outcome after classroom teaching.
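The abstract's scoring rule (an answer counts towards high risk of error when the respondent is certain but wrong) can be sketched in code. This is a minimal illustration with hypothetical data, not the study's instrument; the function name and data format are assumptions.

```python
# Sketch of the test's scoring scheme (hypothetical data, not the study's).
# Each task records the chosen answer, the correct answer, and a
# certainty rating on the 0-3 scale used in the study.
CERTAIN = 3  # highest certainty level

def score_test(responses):
    """responses: list of (chosen, correct, certainty) tuples."""
    # Knowledge score: one point per correct answer (range 0-14 in the study).
    score = sum(1 for chosen, correct, _ in responses if chosen == correct)
    # High risk of error: being certain that an incorrect answer was correct.
    high_risk = sum(1 for chosen, correct, certainty in responses
                    if chosen != correct and certainty == CERTAIN)
    return score, high_risk

score, high_risk = score_test([("a", "a", 3), ("b", "c", 3), ("d", "d", 1)])
# Two correct answers; one confident miss counts as high risk of error.
```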
Introduction: In international reviews and reports of adverse drug events, incorrect doses account for up to one-third of the events.1–3 Many health professionals find drug dose calculations difficult. The majority of medical students are unable to calculate the mass of a drug in solution correctly, and around half of doctors are unable to convert drug doses correctly from a percentage concentration or dilution to a mass concentration.4 5 Nurses carry out practical drug management following physicians' prescriptions, both in hospitals and in primary healthcare. In Norway, a faultless test in drug dose calculations during nursing education is required to become a registered nurse.6 Both nursing students and experienced nurses have problems with drug dose calculations, and nursing students early in the programme showed limited basic skills in arithmetic.7–10 In an earlier study, we found a high risk of error in conversion of units in 10% of registered nurses.11 E-learning was introduced with the internet in the early 1990s, and has been increasingly used in medical and healthcare education. E-learning is independent of time and place, and the training is easier to organise in the health services than classroom teaching, and at a lower cost. A meta-analysis from 2009 summarised more than 200 studies in health professions education, and concluded that e-learning is associated with large positive effects compared with no intervention, but that compared with other interventions the effects are generally small.12 There is a lack of drug dose calculation studies in which different didactic methods are compared. The objective of this study was to compare the learning outcome, certainty and risk of error in drug dose calculations after courses with either self-directed e-learning or conventional classroom teaching. Further aims were to study factors associated with the learning outcome and risk of error.
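The conversions mentioned above (percentage concentration or dilution ratio to mass concentration) are simple arithmetic; a minimal sketch, with illustrative drug examples that are not taken from the paper:

```python
# Converting percentage concentration and dilution ratios to mass
# concentration (mg/mL). Illustrative helpers, not from the study.

def percent_to_mg_per_ml(percent_w_v):
    # 1% w/v = 1 g per 100 mL = 10 mg/mL
    return percent_w_v * 10.0

def dilution_to_mg_per_ml(ratio_denominator):
    # A 1:1000 dilution = 1 g per 1000 mL = 1 mg/mL
    return 1000.0 / ratio_denominator

adrenaline = dilution_to_mg_per_ml(1000)   # 1:1000 -> 1 mg/mL
lidocaine = percent_to_mg_per_ml(2)        # 2% w/v -> 20 mg/mL
```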
[ "course", "learning", "risk", "test", "error", "risk error", "dose", "drug", "drug dose", "study" ]
[ "test", "test" ]
[CONTENT] HEALTH SERVICES ADMINISTRATION & MANAGEMENT | MEDICAL EDUCATION & TRAINING [SUMMARY]
[CONTENT] Adult | Computer-Assisted Instruction | Drug Dosage Calculations | Education, Nursing | Female | Humans | Male | Medication Errors | Middle Aged | Quality Improvement [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] course | learning | risk | test | error | risk error | dose | drug | drug dose | study [SUMMARY]
[CONTENT] drug | learning | education | compared | students | health | dose calculations nursing | correctly | medical | drug dose calculations nursing [SUMMARY]
[CONTENT] day | nurses | exercises | test | group | study | randomised | inclusion | calculations | self [SUMMARY]
[CONTENT] course | test | risk | error | risk error | results | 001 | learning | significantly | score [SUMMARY]
[CONTENT] learning | classroom teaching | teaching | effective repetition | respect learning | respect learning outcome | practical significance | respect learning outcome certainty | practical significance conversion | teaching weak [SUMMARY]
[CONTENT] course | relatively | learning | risk | risk error | error | test | study | dose | drug [SUMMARY]
[CONTENT] ||| ||| ||| two [SUMMARY]
[CONTENT] ||| 2-day | 14 | four | 0-3 ||| ||| [SUMMARY]
[CONTENT] 16 | 167 | 42.0 | 9.5) years | 12.3 | 9.5) years ||| 11.6 | 2.0 | 11.9 | 2.0 | NS | 0.5 | 1.6 | 0.9 | 2.2 | NS ||| 9 ||| ||| [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| two ||| ||| 2-day | 14 | four | 0-3 ||| ||| ||| 16 | 167 | 42.0 | 9.5) years | 12.3 | 9.5) years ||| 11.6 | 2.0 | 11.9 | 2.0 | NS | 0.5 | 1.6 | 0.9 | 2.2 | NS ||| 9 ||| ||| ||| ||| ||| [SUMMARY]
Ketamine administration in healthy volunteers reproduces aberrant agency experiences associated with schizophrenia.
21302161
Aberrant experience of agency is characteristic of schizophrenia. An understanding of the neurobiological basis of such experience is therefore of considerable importance for developing successful models of the disease. We aimed to characterise the effects of ketamine, a drug model for psychosis, on sense of agency (SoA). SoA is associated with a subjective compression of the temporal interval between an action and its effects: This is known as "intentional binding". This action-effect binding provides an indirect measure of SoA. Previous research has found that the magnitude of binding is exaggerated in patients with schizophrenia. We therefore investigated whether ketamine administration to otherwise healthy adults induced a similar pattern of binding.
INTRODUCTION
14 right-handed healthy participants (8 female; mean age 22.4 years) were given low-dose ketamine (target plasma concentration 100 ng/mL) and completed the binding task. They also underwent structured clinical interviews.
METHODS
Ketamine mimicked the performance of schizophrenia patients on the intentional binding task, significantly increasing binding relative to placebo. The size of this effect also correlated with aberrant bodily experiences engendered by the drug.
RESULTS
These data suggest that ketamine may be able to mimic certain aberrant agency experiences that characterise schizophrenia. The link to individual changes in bodily experience suggests that the fundamental change produced by the drug has wider consequences in terms of individuals' experiences of their bodies and movements.
CONCLUSIONS
[ "Adult", "Dose-Response Relationship, Drug", "Excitatory Amino Acid Antagonists", "Female", "Healthy Volunteers", "Humans", "Judgment", "Ketamine", "Male", "Models, Psychological", "Psychoses, Substance-Induced", "Schizophrenia", "Sensation", "Young Adult" ]
3144485
INTRODUCTION
Administration of the anaesthetic agent, ketamine, to healthy participants produces a state that resembles schizophrenia (Ghoneim, Hinrichs, Mewaldt, & Petersen, 1985; Krystal et al., 1994; Lahti, Weiler, Tamara Michaelidis, Parwani, & Tamminga, 2001). Although there are notable differences between the ketamine state and established schizophrenic illness (for example, ketamine does not reliably produce auditory hallucinations; Fletcher & Honey, 2006), ketamine does produce a range of symptoms associated with endogenous psychosis, including perceptual changes, ideas of reference, thought disorder, and some negative symptoms (Ghoneim et al., 1985; Krystal et al., 1994; Lahti et al., 2001; Mason, Morgan, Stefanovic, & Curran, 2008; Morgan, Mofeez, Brandner, Bromley, & Curran, 2004; Pomarol-Clotet et al., 2006). In addition, a number of cognitive changes produced by ketamine are comparable to those seen in schizophrenia (e.g., learning: Corlett, Murray, et al., 2007; memory: Fletcher & Honey, 2006; attention: Oranje et al., 2000; language: Covington et al., 2007). Overall, the effects of ketamine are most strikingly characteristic of the earliest stages of psychosis (Corlett, Honey, & Fletcher, 2007). Moreover, ketamine causes changes in brain activity that overlap with those reported in schizophrenia (Breier, Malhotra, Pinals, Weisenfeld, & Pickar, 1997; Corlett et al., 2006; Vollenweider, Leenders, Oye, Hell, & Angst, 1997; Vollenweider, Leenders, Scharfetter, et al., 1997). An important next step is to explore the effects of ketamine in greater detail and to exploit the potential that this approach offers for relating cognitive-behavioural function to subjective experiences in psychosis. Schizophrenia is associated with important changes in the experience of voluntary action such as those that occur in delusions of control (Frith, 1992). 
Although it has received little formal documentation, ketamine also, in our experience, alters the way that participants experience their own actions. For example, participants sometimes report that they don't feel fully in control of their own actions (“I don't feel in control of my muscles …,” and “… as though someone else was controlling my movements;” Pomarol-Clotet et al., 2006). Given these observations, together with the perceptuomotor abnormalities in schizophrenia, the current study was set up to characterise the effects of ketamine on a task examining voluntary actions and their sensory consequences. Sense of agency (SoA) refers to the experience of initiating and controlling voluntary action to achieve effects in the outside world. Sense of agency is a background feeling that accompanies most of our actions. Perhaps because of its ubiquity, it has proved difficult to isolate and measure experimentally. Recently, action-related changes in time perception have been proposed as a proxy for SoA (Haggard, Clark, & Kalogeras, 2002; Moore & Haggard, 2008; Moore, Lagnado, Deal, & Haggard, 2009). Situations that elicit SoA are associated with systematic changes in the temporal experience of actions and outcomes: There is a subjective compression of the interval between the action and the outcome. This relation between SoA and subjective time is revealed in the intentional binding paradigm developed by Haggard et al. (2002). In an agency condition, in which participants’ actions produced outcome tones, participants judged the time of an action or the time of the subsequent tone, in separate blocks of trials. Actions were perceived as occurring later in time compared to a nonagency (baseline) condition in which participants’ actions did not produce tones. In addition, a tone that followed the action was perceived as occurring earlier in time compared to a nonagency (baseline) condition involving tones but no actions. 
Importantly, these shifts were only found for voluntary actions: When the outcome was caused by an involuntary movement the reverse pattern of results was observed (actions perceived earlier and outcomes perceived later than their respective baseline estimates). Increased SoA is therefore associated with a later awareness of the action, and an earlier awareness of the outcome. This effect is robust and has been consistently replicated (see, for example, Engbert & Wohlschläger, 2007; Engbert, Wohlschläger, & Haggard, 2008; Moore, Wegner, & Haggard, 2009; Tsakiris & Haggard, 2003). It has also been shown that these changes in the subjective experience of time correlate with explicit higher order changes in the sense of agency, as measured using subjective rating scales (Ebert & Wegner, 2010; Moore & Haggard, 2010). In this way, intentional binding offers a precise, implicit measure of SoA. Of primary interest to the present study is the fact that the binding effect, defined as the temporal attraction between voluntary action and outcome, is greater in people with schizophrenia (Haggard, Martin, Taylor-Clarke, Jeannerod, & Franck, 2003; Voss et al., 2010). That is, people with schizophrenia show increased intentional binding. Our principal aim here was to determine whether ketamine also induced increased binding, as previously reported in schizophrenia. We also investigated the relationship between this implicit measure of SoA and subjective experiences of dissociation and psychotic-like phenomena produced by the drug as measured using the Clinician-Administered Dissociative States Scale (CADSS; Bremner et al., 1998). Here we focused our analysis on changes in the subjective experience of one's own body, since sense of ownership (SoO) over one's body and SoA may be related. For example, in healthy individuals SoA for a voluntary action may strongly depend on a SoO (Gallagher, 2000, 2007; Tsakiris, Schütz-Bosbach, & Gallagher, 2007). 
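The binding measures described above reduce to shifts in mean judgement error relative to baseline; a minimal sketch with made-up values in milliseconds (not data from any of the cited studies):

```python
# Intentional binding as shifts in judgement error relative to baseline
# conditions (made-up example values in ms, not the studies' data).

def mean(xs):
    return sum(xs) / len(xs)

def binding_shift(agency_errors, baseline_errors):
    """Positive shift = perceived later than in the baseline condition."""
    return mean(agency_errors) - mean(baseline_errors)

# Actions shift later, towards the tone: positive action binding.
action_binding = binding_shift([30, 40, 50], [-10, 0, 10])
# Tones shift earlier, towards the action: negative tone binding.
tone_binding = binding_shift([-60, -50, -40], [0, 10, 20])
# Overall binding = action binding minus tone binding; larger values
# mean stronger subjective compression of the action-outcome interval.
overall = action_binding - tone_binding
```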
The reverse relationship may also hold, whereby the neurocognitive processes that give rise to sense of agency also contribute to SoO (Tsakiris, Prabhu, & Haggard, 2006). Dissociative symptoms, such as depersonalisation, are a common effect of the ketamine challenge (Goff & Coyle, 2001). Furthermore, there is frequent co-occurrence of depersonalisation and abnormal bodily experience (Sierra, Baker, Medford, & David, 2005; Simeon et al., 2008). Although not typically associated with established schizophrenic illness, depersonalisation appears to be associated with the schizophrenia prodrome (Goff & Coyle, 2001; Krystal et al., 1994). Therefore, given the link between bodily experience and sense of agency, and the common disruption of bodily experience engendered by the ketamine challenge, the body perception subscale on the CADSS questionnaire was of primary interest.
RESULTS
During the intentional binding task, the target plasma ketamine concentration was 100 ng/mL, and the mean (± SD) measured ketamine plasma concentration was 157 ± 36 ng/mL. Ketamine effects on intentional binding: Table 1 presents the binding effects for keypresses and tones (mean shifts from baseline) for the 14 participants who completed the task. (Table 1: Mean judgement errors in ms (SD across subjects) and shifts relative to baseline conditions in ms.) These data show that in the agency conditions on both placebo and ketamine, keypresses were bound towards tones and tones were bound back towards keypresses. This is consistent with the intentional binding effect, as previously reported (e.g., Engbert et al., 2008; Haggard et al., 2002; Moore & Haggard, 2008). Table 1 (final column) also presents the overall binding measure (keypress binding minus tone binding). These data show that overall binding was greater under ketamine than under placebo. A paired-samples t-test revealed that this difference was significant, t(13) = 2.79, p = .008 (one-tailed). Follow-up paired-samples t-tests suggest that this difference is due to differences in binding for actions towards tones, t(13) = 2.35, p = .036 (two-tailed), rather than differences in binding for tones towards actions, t(13) = 0.242, p = .812 (two-tailed). Furthermore, this exaggerated binding appears to be driven by changes in baseline action judgements; isolated actions on ketamine were perceived as occurring significantly earlier than on placebo, t(13) = 2.59, p = .023 (two-tailed). Intentional binding is an implicit measure of SoA. These findings therefore suggest that SoA is exaggerated under ketamine, which is consistent with previous data on patients with schizophrenia (Haggard et al., 2003; Voss et al., 2010). The relation between binding and body perception: We also examined the strength of correlation between binding on ketamine and scores on the CADSS assessment. The overall main effect of ketamine was generated by changes in action binding. Therefore, our correlations were based on this binding measure. There was no significant correlation between action binding and the overall CADSS score, r = .197, p = .499 (two-tailed). Further analyses focused on the body perception subscale.
There was a significant positive correlation between action binding and Item 6 on the CADSS, which asks, “Do you feel disconnected from your own body?,” r = .549, p = .042 (two-tailed) (see Figure 2). This suggests that the more participants felt disconnected from their bodies on ketamine, the greater the intentional binding effect. There was no significant correlation between action binding and Item 7 on the CADSS, which asks, “Does your sense of your own body feel changed: for instance, does your own body feel unusually large or unusually small?,” r = .208, p = .476 (two-tailed). (Figure 2: Scatter plot showing the significant correlation between action binding and CADSS Item 6 (“Do you feel disconnected from your own body?”) on ketamine, 100 ng/mL.)
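The correlational analyses here are ordinary Pearson correlations between per-participant action binding and CADSS item scores; a minimal sketch with simulated values (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Simulated per-participant values: action binding (ms) on ketamine and a
# CADSS body-perception item rating. Perfectly linear here, so r = 1.0.
action_binding = [10, 20, 30, 40, 50]
cadss_item = [0, 1, 2, 3, 4]
r = pearson_r(action_binding, cadss_item)
```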
Control analyses: The CADSS questionnaire also measures changes in time perception. Given the temporal nature of our SoA measure, we investigated the putative relation between binding and general changes in time perception. There were no significant correlations between action binding and the time perception items on the scale (Item 1, “Do things seem to be moving in slow motion?,” r = .198, p = .498; Item 12, “Does this experience seem to take much longer than you would have expected?,” r = −.337, p = .238; Item 13, “Do things seem to be happening very quickly, as if there is a lifetime in a moment?,” r = .265, p = .360). This suggests that changes in action binding were not related to general changes in the subjective experience of time. To determine the presence of possible drug order effects in our data, we compared mean overall binding on ketamine versus placebo, introducing “order” (ketamine first vs. placebo first) as a between-subjects variable. We found no significant main effect of “order,” F(1, 12) = 0.381, p = .548, and no significant interaction, F(1, 12) = 0.889, p = .364. This suggests that changes in binding were not linked to drug order. We also compared standard deviations of time estimates across repeated trials. These provide a measure of perceptual timing variability, with higher standard deviations reflecting inconsistent timing performance. This may indicate difficulty in using the clock for timing judgements, erratic allocation of attention either to the action/tone or to the clock, or general confusion. The increase in binding on ketamine was driven by differences in the binding of actions towards tones, so we focus on the standard deviation of action time estimates. On ketamine the mean standard deviation was 77 ms (SD = 32), while on placebo it was 67 ms (SD = 18). Despite the numerical increase, the difference in mean standard deviation was not significant, t(13) = 1.149, p = .271 (two-tailed). This suggests that changes in action binding were not related to general changes in timing ability. In a final control analysis, we investigated whether there was a significant reduction in the speed of the self-paced response on ketamine, as it could be that changes in binding are related to changes in motor function. On ketamine the mean response latency was 3798 ms (SD = 1580), whereas on placebo it was 3538 ms (SD = 1160). Despite the numerical increase in response latency, this difference was not significant, t(13) = 0.945, p = .362 (two-tailed). This suggests that changes in action binding were not related to changes in motor function (as measured by the response latency).
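The within-subject comparisons reported throughout these results are paired-samples t-tests on the two drug conditions; a minimal sketch of the statistic, with simulated per-participant overall binding values rather than the study's data:

```python
import math

def paired_t(xs, ys):
    """Paired-samples t statistic for two within-subject conditions."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Simulated overall binding (ms) per participant on ketamine vs placebo.
ketamine = [120, 140, 110, 150, 130]
placebo = [100, 115, 95, 120, 110]
t_stat = paired_t(ketamine, placebo)  # positive: greater binding on ketamine
```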
DISCUSSION
We investigated whether the psychotomimetic effects of ketamine extend to producing aberrant agency experiences associated with schizophrenia. On the intentional binding task under placebo conditions, the expected binding effect (a compression of the subjective interval between action and outcome; Haggard et al., 2002) was observed. Under ketamine this effect was exaggerated, as has been previously reported in people with schizophrenia (Haggard et al., 2003; Voss et al., 2010). The effect of ketamine on action–outcome binding is intriguing: The exaggerated effect was driven primarily by an increase in binding of actions towards the tone, rather than binding of tones back towards actions. Action binding represents the difference between action time estimates in the agency condition and action time estimates in the baseline condition. Previous studies have found that the experience of isolated action, as in the baseline condition, is anticipatory: On average, participants are aware of moving slightly before the actual onset of movement (Haggard, Newman, & Magno, 1999; Libet et al., 1983). This suggests that motor experience in this context is not based on feedback generated by the actual movement itself. If it were, one would expect a slightly delayed awareness of moving owing to inherent delays in the transmission of sensory information to the brain (Obhi, Planetta, & Scantlebury, 2009). Instead, it has been proposed that the experience of isolated action is linked to processes occurring prior to movement onset (Haggard, 2003). In our data, the baseline experience of action on ketamine was significantly earlier than on placebo, whereas the baseline action awareness on placebo was, unusually, slightly delayed relative to the actual keypress. This pattern of results suggests that the drug may have exaggerated the putative influence of action preparation on the experience of action. 
However, although baseline action experience is generally anticipatory, the intentional binding effect in healthy adults shows that causing an external event through one's own actions (as in our agency condition) draws the temporal experience of action towards that event (Engbert & Wohlschläger, 2007; Haggard et al., 2002; Moore & Haggard, 2008; Moore, Lagnado, et al., 2009). It is this shift in action experience that represents the binding effect for actions, and it was present on both placebo and ketamine. However, the magnitude of the shift was significantly greater on ketamine. It appears, therefore, that the presence of the tone exerted a particularly strong influence on action experience. In short, although ketamine has a strong effect on action experience when the action occurs without a perceptual consequence, we cannot interpret the drug's effect merely in terms of this baseline action experience. Rather, the significantly greater subjective shift, on ketamine, in the experience of action towards the tone means that a full explanation of the effects of ketamine must take into account the experience of action in both the absence and the presence of the tone. Thus, bringing together the key results from the intentional binding task, ketamine appears to boost the influence of action preparation on action awareness, but also to boost the influence of the effects of action (a tone) on action awareness. This combination may seem paradoxical. However, several results suggest that the action experience is in fact a synthesis of a range of different events occurring over an extended time period between preparation and consequence (Banks & Isham, 2009; Haggard, 2005; Haggard, Cartledge, Dafydd, & Oakley, 2004; Lau, Rogers, & Passingham, 2007; Moore & Haggard, 2008; Moore, Wegner, et al., 2009). 
In normal circumstances, action awareness is likely to be the result of integration of efferent and afferent processes in the sensorimotor system (Moore & Haggard, 2008; Moore, Wegner, et al., 2009; Synofzik, Vosgerau, & Newen, 2008). On ketamine, however, the processes underlying this normal process of integration may be compromised. To this extent, our results are consistent with a ketamine-induced deficit in monitoring action signals. Participants appeared to feel dissociated from their own actions while on ketamine, since their representations of their own actions were susceptible to influences from other events, such as their original intentions and their subsequent effects. Confirmation of this dissociative interpretation comes from the correlations found between intentional binding and the specific CADSS item concerning the feeling of disconnection from the body. Taken together, these findings suggest that ketamine may preferentially influence a neural system for monitoring action. As a result of this deficit, actions on ketamine become mutable and vulnerable to capture by other events. However, given the apparently tight coupling of SoO and SoA, the fact that increased SoA was associated with an increase in the feeling of disconnection from one's body may be surprising. Dissociations between SoO and SoA are not uncommon in psychiatric illness such as schizophrenia. For example, a patient with passivity phenomena will recognise their actions as the movements of their own body (preserved SoO) but will experience their actions as produced by an external force (reduced SoA). However, these dissociations cannot explain our finding that an increase in SoA was associated with reduced SoO on ketamine. 
The mutability hypothesis discussed earlier may provide an explanation: If ketamine engenders mutability in the experience of action, then the more one's experience of action is “captured” by external sensory events the greater the externalisation of bodily experience may be, resulting in the feeling of “disconnection” from one's own body. What might be the neurochemical and neuroanatomical basis of the hyperbinding effect we observed? One possibility is that hyperbinding is the product of aberrant prediction error signalling. Prediction error refers to the mismatch between expectation and occurrence, and is used as a teaching signal to drive causal associations between events (Dickinson, 2001). Although midbrain dopamine neurons may signal a reward prediction error (Schultz & Dickinson, 2000), others have argued that their activity profile may reflect a novelty, salience, or surprise signal used by organisms to judge whether or not they caused a surprising event to happen (Redgrave & Gurney, 2006). We have previously shown that ketamine induces prediction error responses to predictable events and thus increases the salience of those events (Corlett et al., 2006). Neurochemically, ketamine may increase dopamine and glutamate corelease, in the mesocortical pathway between the midbrain and prefrontal cortex (Corlett et al., 2006; Corlett, Honey, et al., 2007). Such signalling has been suggested to register surprise and permit its explanation (Lavin et al., 2005). Since associations between intention, action, and outcome are well learned, the ketamine induced hyperbinding effect we report presently may reflect inappropriate salience of action–outcome causal associations, via aberrant prediction error signalling. 
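The prediction-error idea invoked above can be made concrete with the classic delta rule from associative learning theory. This is a deliberate simplification for illustration, not a model fitted in the present study: the error is the mismatch between outcome and expectation, and with repeated action–tone pairings it shrinks as the association is learned. Aberrant signalling of the kind attributed to ketamine would correspond to a persistently large error despite a well-learned association.

```python
# Minimal sketch of prediction-error-driven learning (the delta /
# Rescorla-Wagner rule). Illustrative simplification only.
def update(value, outcome, learning_rate=0.3):
    prediction_error = outcome - value  # mismatch between occurrence and expectation
    return value + learning_rate * prediction_error

value = 0.0                      # initial expectation that the action causes a tone
for _ in range(20):              # repeated action -> tone pairings
    value = update(value, outcome=1.0)  # the tone reliably follows the action

# Once the outcome is well predicted, the residual error is near zero.
residual_error = 1.0 - value
```

After learning, `residual_error` is tiny; on the account sketched in the text, ketamine would inflate this signal even for fully predictable action–outcome pairings, lending them inappropriate salience.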
Our findings overall are compatible with the notion that the execution of action and SoA may be linked by a simple computational principle (minimising prediction error), which, when perturbed, could explain the varied phenomenology of psychosis (Corlett, Frith, & Fletcher, 2009; Fletcher & Frith, 2009). The hyperbinding found previously in schizophrenia patients (Haggard et al., 2003; Voss et al., 2010), and here found also with ketamine, suggests an exaggerated SoA. A number of other studies, using different paradigms, have reported data that are consistent with this interpretation. For example, people with schizophrenia (including those experiencing passivity symptoms) are more likely than healthy controls to attribute the source of distorted or ambiguous visual feedback of an action to themselves (Daprati et al., 1997; Fourneret et al., 2002; Franck et al., 2001; Schnell et al., 2008). This suggests a tendency towards over-attribution of sensory consequences of movement to oneself (Synofzik et al., 2008). However, these data are at odds with the feeling of reduced SoA that is typically reported by patients. One solution to this paradox is offered by Franck et al. (2001), who have suggested that patients with passivity symptoms have a tendency towards self-attribution of extraneous events (see also Daprati et al., 1997). This could result in a feeling of being influenced when observing another action, and hyperassociation when observing action outcomes. In short, it may be possible to recognise strongly the outcomes of one's actions while at the same time feeling a diminished sense of agency for the actions themselves. This implies a distinction between feeling one is the author of action on the one hand, and feeling one is the author of an effect on the other. This putative distinction would be usefully explored in future studies. 
It should also be noted that exaggerated SoA may be associated with certain schizophrenia subtypes, particularly those with self-referential symptoms. For example, patients with persecutory delusions feel a greater sense of control over action outcomes compared with healthy and patient controls (Kaney & Bentall, 1992). Therefore, the exaggerated agency effects shown in previous patient studies could be driven by the presence of patients with self-referential symptoms in these samples. Intriguingly, self-referential symptoms are also a common effect of the ketamine challenge (Corlett, Honey, & Fletcher, 2007; Honey et al., 2006). It may be, therefore, that the increased SoA found in the current study is associated with this specific effect of the drug. Our study shows that ketamine can mimic aberrant agency experiences associated with schizophrenia, but certain limitations of the task used should be noted. Unlike previous intentional binding studies (Haggard et al., 2002), we did not include any involuntary movement conditions. Using transcranial magnetic stimulation to induce involuntary movements, Haggard et al. (2002) showed that the binding of actions and outcomes was specific to voluntary, self-generated movement. In fact, when involuntary transcranial magnetic stimulation (TMS)-induced movements were followed by tones, they found a temporal repulsion between involuntary movement and tone. We did not include this TMS condition because the focus of our investigation was whether ketamine increased the magnitude of intentional binding for voluntary actions. It would be interesting in the future to explore the effect of the ketamine challenge on this “repulsion” effect. Limitations of the paradigm should also be noted. Intentional binding represents an implicit measure of agency experience. 
That is, participants are not required to make explicit agency judgements, such as the attribution of an observed movement to its correct origin (as in Farrer & Frith, 2002, for example). Implicit measures have certain advantages, such as the quantification of subjective experience, and the mitigation of demand effects. Also, such tasks may allow us to detect subtle perceptual and cognitive changes engendered by the drug and relate them to the early stages of psychosis. However, there are certain drawbacks. Primarily, implicit measures will fail to capture the broader phenomenology of SoA, in particular the highly complex phenomenology associated with delusions of agency in established psychosis. In the current study this limitation was mitigated somewhat by the observation that changes in these subtle implicit measures correlate with participants’ self-reports of drug-induced changes in body experience. Finally, limitations of the ketamine model of schizophrenia should also be acknowledged. For example, ketamine produces a range of symptoms associated with endogenous psychosis (arguably a broader range than other drug models of the disease; Krystal et al., 1994), but there are notable exceptions (Fletcher & Honey, 2006). Furthermore, ketamine produces changes that are not necessarily associated with schizophrenia, such as euphoria (Fletcher & Honey, 2006). Although it is important to acknowledge limitations of the drug model, we do not feel they undermine our interpretation of the present data, given the fact that these data are consistent with schizophrenic psychopathology and replicate previous behavioural data from patients with the disease (Haggard et al., 2003; Voss et al., 2010). Despite these caveats, this study shows that the psychotomimetic property of ketamine may extend to aberrant experiences of agency associated with schizophrenia. 
In particular, ketamine mimics the exaggerated intentional binding effect that has been found in association with the disease. The pattern of results suggested a mutable experience of action on ketamine, consistent with a deficit in the neural circuits for action monitoring. We believe that these findings may be explained in terms of changes in stimulus salience via aberrant prediction error signalling. Ketamine may be a valuable psychopharmacological model of aberrant agency experiences found in schizophrenia. To this extent, it could be used to elucidate the neurobiological and psychological basis of such aberrant experiences.
[ "Participants", "Experimental design", "Infusion protocol", "Intentional binding", "Clinical assessment", "Ketamine effects on intentional binding", "The relation between binding and body perception", "Control analyses", "DISCUSSION" ]
[ "Eighteen right-handed healthy volunteers were recruited (eight female; mean age = 22.4, range = 19–26; mean NART IQ = 114 [± 7]). The study was approved by Addenbrooke's NHS Trust Research Ethics Committee. Participants provided written, informed consent.\nOne participant was excluded from the analysis on the basis of a preexisting history of psychiatric illness (although all participants were screened for the presence of psychiatric illness in themselves and relatives prior to taking part in the study, this participant only disclosed this information after testing). Three participants failed to complete the intentional binding task owing to nausea produced by the drug infusion. Therefore, 14 participants were included in the final analysis.\nThese same participants also completed other cognitive tasks, unrelated to SoA, during infusion. It is planned to publish those results elsewhere.", "The study used a double-blind, placebo-controlled, randomised, within-subjects design.", "Participants were administered placebo (saline) or racemic ketamine (2 mg/mL) as an intravenous infusion using a target-controlled infusion system comprising a computer which implemented Stanpump software (S Shafer; http://www.opentci.org/doku.php?id=code:code) to control a syringe driver infusion pump (Graseby 3500; Graseby Medical Ltd, Watford, UK). Stanpump was programmed to use a two-compartmental pharmacokinetic model (Rigby-Jones, Sneyd, & Absalom, 2006), to implement a complex infusion profile designed to achieve prespecified plasma ketamine concentrations.\nDuring the drug session, participants received first low-dose ketamine (plasma target 100 ng/mL) and then higher dose (plasma target 200 ng/mL). The intentional binding task was completed at the low dose. Drug and placebo sessions were separated by at least 1 week. Participants also underwent a clinical rating (see later). The order of drug and placebo visits was counterbalanced across all 18 participants initially recruited. 
Of the 14 participants who were included in the final analysis, eight participants completed the ketamine session first.", "Participants watched a computer screen on which a hand rotated around a clock-face (marked at conventional “5-minute” intervals) (see Figure 1). Each full rotation lasted 2560 ms. In the agency condition, participants pressed a key with their right index finger at a time of their choosing. This keypress produced a tone after a delay of 250 ms. The clock-hand continued rotating for a random period of time (between 1500 ms and 2500 ms). This ensures that the finishing position of the clock-hand is not informative with respect to where it was when the action or tone occurred (see Libet, Gleason, Wright, & Pearl, 1983). When the hand stopped rotating, participants verbally reported the time of their keypress or the subsequent tone. These judgements were blocked, so participants only made a single type of estimate on each trial in each block. To make the time estimates, participants reported the position of the hand on the clock face when they either pressed the key or heard the tone. Participants completed a block of 20 action estimate trials and a block of 20 tone estimate trials.\nTrial structure in the agency condition (following Haggard et al., 2002). Participants pressed the key at a time of their choosing, which produced a tone after a delay of 250 ms. Participants judged where the clock hand was when they pressed the key or when they heard the tone, in separate blocks of trials.\nThey completed two further 20 trial baseline blocks of time estimates. In one block (baseline action) participants pressed the key at a time of their choosing. However, the keypress never produced a tone, and on each trial participants reported the time of the keypress. In the other block (baseline effect) participants made no keypresses. Instead, a tone would sound at a random time on each trial and participants reported the time of the tone. 
The order of agency and baseline blocks was randomised anew for each participant. All blocks (baseline and agency) were performed during the drug/saline infusion.\nFor our analysis we calculated an overall measure of intentional binding. We first calculated the binding effect for actions and tones individually. Action binding is found by subtracting the mean time estimate in the baseline action condition from the mean time estimate of actions in the agency condition. Tone binding is found by subtracting the mean time estimate in the baseline tone condition from the mean time estimate of tones in the agency condition. The overall measure of intentional binding was calculated by combining action and tone binding (i.e., action binding minus tone binding). To determine the effect of ketamine on intentional binding (and therefore SoA), this overall measure of intentional binding was compared within subjects (ketamine vs. placebo; paired-samples t-test).", "The Clinician-Administered Dissociative States Scale (CADSS; Bremner et al., 1998) was administered at both 100 ng/mL and 200 ng/mL. There are five subscales, each of which consists of items (questions), and participants’ responses are coded on a 5-point scale (0 = “Not at all” through to 4 = “Extremely”). As discussed, our analysis focused primarily on the body perception category. We assessed the strength of correlation between scores on items relating to body perception at 100 ng/mL with binding on ketamine.", "Table 1 presents the binding effects for keypresses and tones (mean shifts from baseline) for the 14 participants who completed the task. These data show that in the agency conditions on both placebo and ketamine, keypresses were bound towards tones and tones were bound back towards keypresses. 
This is consistent with the intentional binding effect, as previously reported (e.g., Engbert et al., 2008; Haggard et al., 2002; Moore & Haggard, 2008).\nMean judgement errors in ms (SD across subjects) and shifts relative to baseline conditions in ms\nTable 1 (final column) also presents the overall binding measure (keypress binding minus tone binding). These data show that overall binding was greater under ketamine compared with placebo. A paired-samples t-test revealed that this difference was significant, t(13) = 2.79, p = .008 (one-tailed). Follow-up paired sample t-tests suggest that this difference is due to differences in binding for actions towards tones, t(13) = 2.35, p = .036 (two-tailed) rather than differences in binding for tones towards actions, t(13) = 0.242, p = .812 (two-tailed). Furthermore, this exaggerated binding appears to be driven by changes in baseline action judgements; isolated actions on ketamine were perceived as occurring significantly earlier than on placebo, t(13) = 2.59, p = .023 (two-tailed). Intentional binding is an implicit measure of SoA. These findings therefore suggest that SoA is exaggerated under ketamine, which is consistent with previous data on patients with schizophrenia (Haggard et al., 2003; Voss et al., 2010).", "We also examined the strength of correlation between binding on ketamine and scores on the CADSS assessment. The overall main effect of ketamine was generated by changes in action binding. Therefore, our correlations were based on this binding measure. There was no significant correlation between action binding and the overall CADSS score, r = .197, p = .499 (two-tailed). Further analyses focused on the body perception subscale. There was a significant positive correlation between action binding and Item 6 on the CADSS, which asks, “Do you feel disconnected from your own body?,” r = .549, p = .042 (two-tailed) (see Figure 2). 
This suggests that the more participants felt disconnected from their bodies on ketamine, the greater the intentional binding effect. There was no significant correlation between action binding and Item 7 on the CADSS which asks “Does your sense of your own body feel changed: for instance, does your own body feel unusually large or unusually small?,” r = .208, p = .476 (two-tailed).\nScatter plot showing the significant correlation between action binding and CADSS Item 6 (“Do you feel disconnected from your own body?”) on ketamine (100 ng/mL).", "The CADSS questionnaire also measures changes in time perception. Given the temporal nature of our SoA measure we investigated the putative relation between binding and general changes in time perception. There were no significant correlations between action binding and time perception items on the scale (Item 1 “Do things seem to be moving in slow motion?,” r = .198, p = .498; Item 12 “Does this experience seem to take much longer than you would have expected?,” r = −.337, p = .238; Item 13 “Do things seem to be happening very quickly, as if there is a lifetime in a moment?,” r = .265, p = .360). This suggests that changes in action binding were not related to general changes in the subjective experience of time.\nTo determine the presence of possible drug order effects in our data we compared mean overall binding on ketamine versus placebo, introducing “order” (ketamine first vs. placebo first) as a between-subjects variable. We found no significant main effect of “order,” F(1, 12) = 0.381, p = .548, and no significant interaction, F(1, 12) = 0.889, p = .364. This suggests that changes in binding were not linked to drug order.\nWe also compared standard deviations of time estimates across repeated trials. These provide a measure of perceptual timing variability, with higher standard deviations reflecting inconsistent timing performance. 
This may indicate difficulty in using the clock for timing judgements, erratic allocation of attention either to the action/tone or to the clock, or general confusion. The increase in binding on ketamine was driven by differences in the binding of actions towards tones, so we focus on standard deviation of action time estimates. On ketamine the mean standard deviation was 77 ms (SD = 32) while on placebo it was 67 ms (SD = 18). Despite the numerical increase, the difference in mean standard deviation was not significant, t(13) = 1.149, p = .271 (two-tailed). This suggests that changes in action binding were not related to general changes in timing ability.\nIn a final control analysis, we investigated whether there was a significant reduction in the speed of the self-paced response on ketamine, as it could be that changes in binding are related to changes in motor function. On ketamine the mean response latency was 3798 ms (SD = 1580), whereas on placebo it was 3538 ms (SD = 1160). Despite the numerical increase in response latency, this difference was not significant, t(13) = 0.945, p = .362 (two-tailed). This suggests that changes in action binding were not related to changes in motor function (as measured by the response latency).", "We investigated whether the psychotomimetic effects of ketamine extend to producing aberrant agency experiences associated with schizophrenia. On the intentional binding task under placebo conditions, the expected binding effect (a compression of the subjective interval between action and outcome; Haggard et al., 2002) was observed. Under ketamine this effect was exaggerated, as has been previously reported in people with schizophrenia (Haggard et al., 2003; Voss et al., 2010).\nThe effect of ketamine on action–outcome binding is intriguing: The exaggerated effect was driven primarily by an increase in binding of actions towards the tone, rather than binding of tones back towards actions. 
Action binding represents the difference between action time estimates in the agency condition and action time estimates in the baseline condition. Previous studies have found that the experience of isolated action, as in the baseline condition, is anticipatory: On average, participants are aware of moving slightly before the actual onset of movement (Haggard, Newman, & Magno, 1999; Libet et al., 1983). This suggests that motor experience in this context is not based on feedback generated by the actual movement itself. If it were, one would expect a slightly delayed awareness of moving owing to inherent delays in the transmission of sensory information to the brain (Obhi, Planetta, & Scantlebury, 2009). Instead, it has been proposed that the experience of isolated action is linked to processes occurring prior to movement onset (Haggard, 2003). In our data, the baseline experience of action on ketamine was significantly earlier than on placebo, whereas the baseline action awareness on placebo was, unusually, slightly delayed relative to the actual keypress. This pattern of results suggests that the drug may have exaggerated the putative influence of action preparation on the experience of action.\nHowever, although baseline action experience is generally anticipatory, the intentional binding effect in healthy adults shows that causing an external event through one's own actions (as in our agency condition) draws the temporal experience of action towards that event (Engbert & Wohlschläger, 2007; Haggard et al., 2002; Moore & Haggard, 2008; Moore, Lagnado, et al., 2009). It is this shift in action experience that represents the binding effect for actions, and it was present on both placebo and ketamine. However, the magnitude of the shift was significantly greater on ketamine. It appears, therefore, that the presence of the tone exerted a particularly strong influence on action experience. 
In short, although ketamine has a strong effect on action experience when the action occurs without a perceptual consequence, we cannot interpret the drug's effect merely in terms of this baseline action experience. Rather, the significantly greater subjective shift, on ketamine, in the experience of action towards the tone means that a full explanation of the effects of ketamine must take into account the experience of action in both the absence and the presence of the tone.\nThus, bringing together the key results from the intentional binding task, ketamine appears to boost the influence of action preparation on action awareness, but also to boost the influence of the effects of action (a tone) on action awareness. This combination may seem paradoxical. However, several results suggest that the action experience is in fact a synthesis of a range of different events occurring over an extended time period between preparation and consequence (Banks & Isham, 2009; Haggard, 2005; Haggard, Cartledge, Dafydd, & Oakley, 2004; Lau, Rogers, & Passingham, 2007; Moore & Haggard, 2008; Moore, Wegner, et al., 2009). In normal circumstances, action awareness is likely to be the result of integration of efferent and afferent processes in the sensorimotor system (Moore & Haggard, 2008; Moore, Wegner, et al., 2009; Synofzik, Vosgerau, & Newen, 2008). On ketamine, however, the processes underlying this normal process of integration may be compromised.\nTo this extent, our results are consistent with a ketamine-induced deficit in monitoring action signals. Participants appeared to feel dissociated from their own actions while on ketamine, since their representations of their own actions were susceptible to influences from other events, such as their original intentions and their subsequent effects. 
Confirmation of this dissociative interpretation comes from the correlations found between intentional binding and the specific CADSS item concerning the feeling of disconnection from the body. Taken together, these findings suggest that ketamine may preferentially influence a neural system for monitoring action. As a result of this deficit, actions on ketamine become mutable and vulnerable to capture by other events. However, given the apparently tight coupling of SoO and SoA, the fact that increased SoA was associated with an increase in the feeling of disconnection from one's body may be surprising. Dissociations between SoO and SoA are not uncommon in psychiatric illness such as schizophrenia. For example, a patient with passivity phenomena will recognise their actions as the movements of their own body (preserved SoO) but will experience their actions as produced by an external force (reduced SoA). However, these dissociations cannot explain our finding that an increase in SoA was associated with reduced SoO on ketamine. The mutability hypothesis discussed earlier may provide an explanation: If ketamine engenders mutability in the experience of action, then the more one's experience of action is “captured” by external sensory events the greater the externalisation of bodily experience may be, resulting in the feeling of “disconnection” from one's own body.\nWhat might be the neurochemical and neuroanatomical basis of the hyperbinding effect we observed? One possibility is that hyperbinding is the product of aberrant prediction error signalling. Prediction error refers to the mismatch between expectation and occurrence, and is used as a teaching signal to drive causal associations between events (Dickinson, 2001). 
Although midbrain dopamine neurons may signal a reward prediction error (Schultz & Dickinson, 2000), others have argued that their activity profile may reflect a novelty, salience, or surprise signal used by organisms to judge whether or not they caused a surprising event to happen (Redgrave & Gurney, 2006). We have previously shown that ketamine induces prediction error responses to predictable events and thus increases the salience of those events (Corlett et al., 2006). Neurochemically, ketamine may increase dopamine and glutamate corelease, in the mesocortical pathway between the midbrain and prefrontal cortex (Corlett et al., 2006; Corlett, Honey, et al., 2007). Such signalling has been suggested to register surprise and permit its explanation (Lavin et al., 2005). Since associations between intention, action, and outcome are well learned, the ketamine induced hyperbinding effect we report presently may reflect inappropriate salience of action–outcome causal associations, via aberrant prediction error signalling. Our findings overall are compatible with the notion that the execution of action and SoA may be linked by a simple computational principle (minimising prediction error), which, when perturbed, could explain the varied phenomenology of psychosis (Corlett, Frith, & Fletcher, 2009; Fletcher & Frith, 2009).\nThe hyperbinding found previously in schizophrenia patients (Haggard et al., 2003; Voss et al., 2010), and here found also with ketamine, suggests an exaggerated SoA. A number of other studies, using different paradigms, have reported data that are consistent with this interpretation. For example, people with schizophrenia (including those experiencing passivity symptoms) are more likely than healthy controls to attribute the source of distorted or ambiguous visual feedback of an action to themselves (Daprati et al., 1997; Fourneret et al., 2002; Franck et al., 2001; Schnell et al., 2008). 
This suggests a tendency towards over-attribution of sensory consequences of movement to oneself (Synofzik et al., 2008). However, these data are at odds with the feeling of reduced SoA that is typically reported by patients. One solution to this paradox is offered by Franck et al. (2001), who have suggested that patients with passivity symptoms have a tendency towards self-attribution of extraneous events (see also Daprati et al., 1997). This could result in a feeling of being influenced when observing another action, and hyperassociation when observing action outcomes. In short, it may be possible to recognise strongly the outcomes of one's actions while at the same time feeling a diminished sense of agency for the actions themselves. This implies a distinction between feeling one is the author of action on the one hand, and feeling one is the author of an effect on the other. This putative distinction would be usefully explored in future studies.

It should also be noted that exaggerated SoA may be associated with certain schizophrenia subtypes, particularly those with self-referential symptoms. For example, patients with persecutory delusions feel a greater sense of control over action outcomes compared with healthy and patient controls (Kaney & Bentall, 1992). Therefore, the exaggerated agency effects shown in previous patient studies could be driven by the presence of patients with self-referential symptoms in these samples. Intriguingly, self-referential symptoms are also a common effect of the ketamine challenge (Corlett, Honey, & Fletcher, 2007; Honey et al., 2006). It may be, therefore, that the increased SoA found in the current study is associated with this specific effect of the drug.

Our study shows that ketamine can mimic aberrant agency experiences associated with schizophrenia, but certain limitations of the task used should be noted.
Unlike previous intentional binding studies (Haggard et al., 2002), we did not include any involuntary movement conditions. Using transcranial magnetic stimulation (TMS) to induce involuntary movements, Haggard et al. (2002) showed that the binding of actions and outcomes was specific to voluntary, self-generated movement. In fact, when involuntary TMS-induced movements were followed by tones, they found a temporal repulsion between involuntary movement and tone. We did not include this TMS condition because the focus of our investigation was whether ketamine increased the magnitude of intentional binding for voluntary actions. It would be interesting in the future to explore the effect of the ketamine challenge on this “repulsion” effect.

Limitations of the paradigm should also be noted. Intentional binding represents an implicit measure of agency experience. That is, participants are not required to make explicit agency judgements, such as the attribution of an observed movement to its correct origin (as in Farrer & Frith, 2002, for example). Implicit measures have certain advantages, such as the quantification of subjective experience and the mitigation of demand effects. Also, such tasks may allow us to detect subtle perceptual and cognitive changes engendered by the drug and relate them to the early stages of psychosis. However, there are certain drawbacks. Primarily, implicit measures will fail to capture the broader phenomenology of SoA, in particular the highly complex phenomenology associated with delusions of agency in established psychosis. In the current study this limitation was mitigated somewhat by the observation that changes in these subtle implicit measures correlate with participants' self-reports of drug-induced changes in body experience.

Finally, limitations of the ketamine model of schizophrenia should also be acknowledged.
For example, ketamine produces a range of symptoms associated with endogenous psychosis (arguably a broader range than other drug models of the disease; Krystal et al., 1994), but there are notable exceptions (Fletcher & Honey, 2006). Furthermore, ketamine produces changes that are not necessarily associated with schizophrenia, such as euphoria (Fletcher & Honey, 2006). Although it is important to acknowledge limitations of the drug model, we do not feel they undermine our interpretation of the present data, given that these data are consistent with schizophrenic psychopathology and replicate previous behavioural data from patients with the disease (Haggard et al., 2003; Voss et al., 2010).

Despite these caveats, this study shows that the psychotomimetic property of ketamine may extend to aberrant experiences of agency associated with schizophrenia. In particular, ketamine mimics the exaggerated intentional binding effect that has been found in association with the disease. The pattern of results suggested a mutable experience of action on ketamine, consistent with a deficit in the neural circuits for action monitoring. We believe that these findings may be explained in terms of changes in stimulus salience via aberrant prediction error signalling. Ketamine may be a valuable psychopharmacological model of aberrant agency experiences found in schizophrenia. To this extent, it could be used to elucidate the neurobiological and psychological basis of such aberrant experiences.
INTRODUCTION

Administration of the anaesthetic agent, ketamine, to healthy participants produces a state that resembles schizophrenia (Ghoneim, Hinrichs, Mewaldt, & Petersen, 1985; Krystal et al., 1994; Lahti, Weiler, Tamara Michaelidis, Parwani, & Tamminga, 2001). Although there are notable differences between the ketamine state and established schizophrenic illness (for example, ketamine does not reliably produce auditory hallucinations; Fletcher & Honey, 2006), ketamine does produce a range of symptoms associated with endogenous psychosis, including perceptual changes, ideas of reference, thought disorder, and some negative symptoms (Ghoneim et al., 1985; Krystal et al., 1994; Lahti et al., 2001; Mason, Morgan, Stefanovic, & Curran, 2008; Morgan, Mofeez, Brandner, Bromley, & Curran, 2004; Pomarol-Clotet et al., 2006). In addition, a number of cognitive changes produced by ketamine are comparable to those seen in schizophrenia (e.g., learning: Corlett, Murray, et al., 2007; memory: Fletcher & Honey, 2006; attention: Oranje et al., 2000; language: Covington et al., 2007). Overall, the effects of ketamine are most strikingly characteristic of the earliest stages of psychosis (Corlett, Honey, & Fletcher, 2007). Moreover, ketamine causes changes in brain activity that overlap with those reported in schizophrenia (Breier, Malhotra, Pinals, Weisenfeld, & Pickar, 1997; Corlett et al., 2006; Vollenweider, Leenders, Oye, Hell, & Angst, 1997; Vollenweider, Leenders, Scharfetter, et al., 1997). An important next step is to explore the effects of ketamine in greater detail and to exploit the potential that this approach offers for relating cognitive-behavioural function to subjective experiences in psychosis.

Schizophrenia is associated with important changes in the experience of voluntary action, such as those that occur in delusions of control (Frith, 1992).
Although it has received little formal documentation, ketamine also, in our experience, alters the way that participants experience their own actions. For example, participants sometimes report that they don't feel fully in control of their own actions (“I don't feel in control of my muscles …,” and “… as though someone else was controlling my movements;” Pomarol-Clotet et al., 2006). Given these observations, together with the perceptuomotor abnormalities in schizophrenia, the current study was set up to characterise the effects of ketamine on a task examining voluntary actions and their sensory consequences.

Sense of agency (SoA) refers to the experience of initiating and controlling voluntary action to achieve effects in the outside world. Sense of agency is a background feeling that accompanies most of our actions. Perhaps because of its ubiquity, it has proved difficult to isolate and measure experimentally. Recently, action-related changes in time perception have been proposed as a proxy for SoA (Haggard, Clark, & Kalogeras, 2002; Moore & Haggard, 2008; Moore, Lagnado, Deal, & Haggard, 2009).

Situations that elicit SoA are associated with systematic changes in the temporal experience of actions and outcomes: There is a subjective compression of the interval between the action and the outcome. This relation between SoA and subjective time is revealed in the intentional binding paradigm developed by Haggard et al. (2002). In an agency condition, in which participants' actions produced outcome tones, participants judged the time of an action or the time of the subsequent tone, in separate blocks of trials. Actions were perceived as occurring later in time compared to a nonagency (baseline) condition in which participants' actions did not produce tones. In addition, a tone that followed the action was perceived as occurring earlier in time compared to a nonagency (baseline) condition involving tones but no actions.
Importantly, these shifts were only found for voluntary actions: When the outcome was caused by an involuntary movement the reverse pattern of results was observed (actions perceived earlier and outcomes perceived later than their respective baseline estimates).

Increased SoA is therefore associated with a later awareness of the action, and an earlier awareness of the outcome. This effect is robust and has been consistently replicated (see, for example, Engbert & Wohlschläger, 2007; Engbert, Wohlschläger, & Haggard, 2008; Moore, Wegner, & Haggard, 2009; Tsakiris & Haggard, 2003). It has also been shown that these changes in the subjective experience of time correlate with explicit higher order changes in the sense of agency, as measured using subjective rating scales (Ebert & Wegner, 2010; Moore & Haggard, 2010). In this way, intentional binding offers a precise, implicit measure of SoA.

Of primary interest to the present study is the fact that the binding effect, defined as the temporal attraction between voluntary action and outcome, is greater in people with schizophrenia (Haggard, Martin, Taylor-Clarke, Jeannerod, & Franck, 2003; Voss et al., 2010). That is, people with schizophrenia show increased intentional binding. Our principal aim here was to determine whether ketamine also induced increased binding, as previously reported in schizophrenia.

We also investigated the relationship between this implicit measure of SoA and subjective experiences of dissociation and psychotic-like phenomena produced by the drug, as measured using the Clinician-Administered Dissociative States Scale (CADSS; Bremner et al., 1998). Here we focused our analysis on changes in the subjective experience of one's own body, since sense of ownership (SoO) over one's body and SoA may be related. For example, in healthy individuals SoA for a voluntary action may strongly depend on a SoO (Gallagher, 2000, 2007; Tsakiris, Schütz-Bosbach, & Gallagher, 2007).
The reverse relationship may also hold, whereby the neurocognitive processes that give rise to sense of agency also contribute to SoO (Tsakiris, Prabhu, & Haggard, 2006).

Dissociative symptoms, such as depersonalisation, are a common effect of the ketamine challenge (Goff & Coyle, 2001). Furthermore, there is frequent co-occurrence of depersonalisation and abnormal bodily experience (Sierra, Baker, Medford, & David, 2005; Simeon et al., 2008). Although not typically associated with established schizophrenic illness, depersonalisation appears to be associated with the schizophrenia prodrome (Goff & Coyle, 2001; Krystal et al., 1994). Therefore, given the link between bodily experience and sense of agency, and the common disruption of bodily experience engendered by the ketamine challenge, the body perception subscale of the CADSS questionnaire was of primary interest.

MATERIALS AND METHODS

Participants

Eighteen right-handed healthy volunteers were recruited (eight female; mean age = 22.4, range = 19–26; mean NART IQ = 114 [± 7]). The study was approved by Addenbrooke's NHS Trust Research Ethics Committee. Participants provided written, informed consent.

One participant was excluded from the analysis on the basis of a preexisting history of psychiatric illness (although all participants were screened for the presence of psychiatric illness in themselves and relatives prior to taking part in the study, this participant only disclosed this information after testing). Three participants failed to complete the intentional binding task owing to nausea produced by the drug infusion. Therefore, 14 participants were included in the final analysis.

These same participants also completed other cognitive tasks, unrelated to SoA, during infusion. Those results will be published elsewhere.

Experimental design

The study used a double-blind, placebo-controlled, randomised, within-subjects design.

Infusion protocol

Participants were administered placebo (saline) or racemic ketamine (2 mg/mL) as an intravenous infusion using a target-controlled infusion system comprising a computer which implemented Stanpump software (S Shafer; http://www.opentci.org/doku.php?id=code:code) to control a syringe driver infusion pump (Graseby 3500; Graseby Medical Ltd, Watford, UK). Stanpump was programmed to use a two-compartmental pharmacokinetic model (Rigby-Jones, Sneyd, & Absalom, 2006) to implement a complex infusion profile designed to achieve prespecified plasma ketamine concentrations.

During the drug session, participants received first low-dose ketamine (plasma target 100 ng/mL) and then a higher dose (plasma target 200 ng/mL). The intentional binding task was completed at the low dose. Drug and placebo sessions were separated by at least 1 week. Participants also underwent a clinical rating (see later). The order of drug and placebo visits was counterbalanced across all 18 participants initially recruited. Of the 14 participants who were included in the final analysis, eight completed the ketamine session first.

Intentional binding

Participants watched a computer screen on which a hand rotated around a clock face (marked at conventional “5-minute” intervals; see Figure 1). Each full rotation lasted 2560 ms. In the agency condition, participants pressed a key with their right index finger at a time of their choosing. This keypress produced a tone after a delay of 250 ms. The clock hand continued rotating for a random period of time (between 1500 ms and 2500 ms). This ensured that the finishing position of the clock hand was not informative with respect to where it was when the action or tone occurred (see Libet, Gleason, Wright, & Pearl, 1983).
When the hand stopped rotating, participants verbally reported the time of their keypress or the subsequent tone. These judgements were blocked, so participants only made a single type of estimate on each trial in each block. To make the time estimates, participants reported the position of the hand on the clock face when they either pressed the key or heard the tone. Participants completed a block of 20 action estimate trials and a block of 20 tone estimate trials.

Figure 1. Trial structure in the agency condition (following Haggard et al., 2002). Participants pressed the key at a time of their choosing, which produced a tone after a delay of 250 ms. Participants judged where the clock hand was when they pressed the key or when they heard the tone, in separate blocks of trials.

They completed two further 20-trial baseline blocks of time estimates. In one block (baseline action) participants pressed the key at a time of their choosing. However, the keypress never produced a tone, and on each trial participants reported the time of the keypress. In the other block (baseline effect) participants made no keypresses. Instead, a tone sounded at a random time on each trial and participants reported the time of the tone. The order of agency and baseline blocks was randomised anew for each participant. All blocks (baseline and agency) were performed during the drug/saline infusion.

For our analysis we calculated an overall measure of intentional binding. We first calculated the binding effect for actions and tones individually. Action binding is found by subtracting the mean time estimate in the baseline action condition from the mean time estimate of actions in the agency condition. Tone binding is found by subtracting the mean time estimate in the baseline tone condition from the mean time estimate of tones in the agency condition. The overall measure of intentional binding was calculated by combining action and tone binding (i.e., action binding minus tone binding).
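The binding measures described above reduce to simple differences of means. The following sketch is not the study's analysis code; the judgement errors are hypothetical numbers chosen only to illustrate the arithmetic:

```python
# Sketch of the intentional binding measures: judgement errors are
# reported-minus-actual times (ms). All values below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def binding_measures(agency_action, baseline_action, agency_tone, baseline_tone):
    """Return (action_binding, tone_binding, overall_binding).

    Action binding: agency-condition action estimates minus baseline action
    estimates (positive = action perceived later, shifted towards the tone).
    Tone binding: agency-condition tone estimates minus baseline tone
    estimates (negative = tone perceived earlier, shifted towards the action).
    Overall binding combines the two (action binding minus tone binding).
    """
    action_binding = mean(agency_action) - mean(baseline_action)
    tone_binding = mean(agency_tone) - mean(baseline_tone)
    return action_binding, tone_binding, action_binding - tone_binding

# Hypothetical judgement errors for one participant (ms):
a_bind, t_bind, overall = binding_measures(
    agency_action=[15, 20, 25],   # actions judged later when they cause a tone
    baseline_action=[-5, 0, 5],   # isolated actions
    agency_tone=[-40, -50, -60],  # tones judged earlier when caused by an action
    baseline_tone=[10, 0, -10],   # isolated tones
)
```

With these illustrative values, action binding is positive and tone binding negative, so the overall measure sums the two attractions into a single positive score, mirroring how Table 1's final column is constructed.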
To determine the effect of ketamine on intentional binding (and therefore SoA), this overall measure of intentional binding was compared within subjects (ketamine vs. placebo; paired-samples t-test).

Clinical assessment

The Clinician-Administered Dissociative States Scale (CADSS; Bremner et al., 1998) was administered at both 100 ng/mL and 200 ng/mL. There are five subscales, each of which consists of items (questions), and participants' responses are coded on a 5-point scale (0 = “Not at all” through to 4 = “Extremely”). As discussed, our analysis focused primarily on the body perception category. We assessed the strength of correlation between scores on items relating to body perception at 100 ng/mL and binding on ketamine.

RESULTS

During the intentional binding task, the target plasma ketamine concentration was 100 ng/mL, and the mean (± SD) measured ketamine plasma concentration was 157 ± 36 ng/mL.

Ketamine effects on intentional binding

Table 1 presents the binding effects for keypresses and tones (mean shifts from baseline) for the 14 participants who completed the task. These data show that in the agency conditions on both placebo and ketamine, keypresses were bound towards tones and tones were bound back towards keypresses. This is consistent with the intentional binding effect, as previously reported (e.g., Engbert et al., 2008; Haggard et al., 2002; Moore & Haggard, 2008).

Table 1. Mean judgement errors in ms (SD across subjects) and shifts relative to baseline conditions in ms.

Table 1 (final column) also presents the overall binding measure (keypress binding minus tone binding). These data show that overall binding was greater under ketamine compared with placebo. A paired-samples t-test revealed that this difference was significant, t(13) = 2.79, p = .008 (one-tailed). Follow-up paired-samples t-tests suggest that this difference is due to differences in binding for actions towards tones, t(13) = 2.35, p = .036 (two-tailed), rather than differences in binding for tones towards actions, t(13) = 0.242, p = .812 (two-tailed). Furthermore, this exaggerated binding appears to be driven by changes in baseline action judgements; isolated actions on ketamine were perceived as occurring significantly earlier than on placebo, t(13) = 2.59, p = .023 (two-tailed). Intentional binding is an implicit measure of SoA.
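The within-subjects comparison used above is a standard paired-samples t-test on each participant's overall binding under ketamine versus placebo. The sketch below mirrors that computation only; the binding scores are hypothetical, not the study's data:

```python
# Paired-samples t statistic for matched ketamine vs. placebo scores.
# The data here are hypothetical; only the computation is illustrated.
import math

def paired_t(xs, ys):
    """Return the paired-samples t statistic for two matched lists."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)  # mean difference over its standard error

# Hypothetical overall binding scores (ms) for five participants:
ketamine = [300, 280, 350, 310, 290]
placebo = [250, 260, 300, 270, 240]
t_stat = paired_t(ketamine, placebo)  # positive t = greater binding on ketamine
```

Because every participant contributes to both conditions, the test is run on the per-participant differences, which is what makes the within-subjects design sensitive to a consistent drug effect.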
These findings therefore suggest that SoA is exaggerated under ketamine, which is consistent with previous data on patients with schizophrenia (Haggard et al., 2003; Voss et al., 2010).\nTable 1 presents the binding effects for keypresses and tones (mean shifts from baseline) for the 14 participants who completed the task. These data show that in the agency conditions on both placebo and ketamine, keypresses were bound towards tones and tones were bound back towards keypresses. This is consistent with the intentional binding effect, as previously reported (e.g., Engbert et al., 2008; Haggard et al., 2002; Moore & Haggard, 2008).\nMean judgement errors in ms (SD across subjects) and shifts relative to baseline conditions in ms\nTable 1 (final column) also presents the overall binding measure (keypress binding minus tone binding). These data show that overall binding was greater under ketamine compared with placebo. A paired-samples t-test revealed that this difference was significant, t(13) = 2.79, p = .008 (onetailed). Follow-up paired sample t-tests suggest that this difference is due to differences in binding for actions towards tones, t(13) = 2.35, p = .036 (two-tailed) rather than differences in binding for tones towards actions, t(13) = 0.242, p = .812 (two-tailed). Furthermore, this exaggerated binding appears to be driven by changes in baseline action judgements; isolated actions on ketamine were perceived as occurring significantly earlier than on placebo, t(13) = 2.59, p = .023 (two-tailed). Intentional binding is an implicit measure of SoA. These findings therefore suggest that SoA is exaggerated under ketamine, which is consistent with previous data on patients with schizophrenia (Haggard et al., 2003; Voss et al., 2010).\n The relation between binding and body perception We also examined the strength of correlation between binding on ketamine and scores on the CADSS assessment. 
The overall main effect of ketamine was generated by changes in action binding. Therefore, our correlations were based on this binding measure. There was no significant correlation between action binding and the overall CADSS score, r = .197, p = .499 (two-tailed). Further analyses focused on the body perception subscale. There was a significant positive correlation between action binding and Item 6 on the CADSS, which asks, “Do you feel disconnected from your own body?,” r = .549, p = .042 (two-tailed) (see Figure 2). This suggests that the more participants felt disconnected from their bodies on ketamine, the greater the intentional binding effect. There was no significant correlation between action binding and Item 7 on the CADSS, which asks, “Does your sense of your own body feel changed: for instance, does your own body feel unusually large or unusually small?,” r = .208, p = .476 (two-tailed).

Figure 2. Scatter plot showing the significant correlation between action binding and CADSS Item 6 (“Do you feel disconnected from your own body?”) on ketamine (100 ng/mL).
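The coefficients reported here are standard Pearson product-moment correlations; a minimal sketch of the computation, using hypothetical scores rather than the study data, is:

```python
import math
from statistics import mean

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length samples:
    # covariance of the two variables divided by the product of their
    # (unnormalised) standard deviations.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical action-binding scores (ms) and CADSS Item 6 ratings
binding = [40, 55, 20, 75, 60, 35, 80]
item6 = [1, 2, 0, 3, 2, 1, 4]
r = pearson_r(binding, item6)  # positive r: more disconnection, more binding
```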
Control analyses

The CADSS questionnaire also measures changes in time perception. Given the temporal nature of our SoA measure, we investigated the putative relation between binding and general changes in time perception. There were no significant correlations between action binding and the time perception items on the scale (Item 1, “Do things seem to be moving in slow motion?,” r = .198, p = .498; Item 12, “Does this experience seem to take much longer than you would have expected?,” r = −.337, p = .238; Item 13, “Do things seem to be happening very quickly, as if there is a lifetime in a moment?,” r = .265, p = .360). This suggests that changes in action binding were not related to general changes in the subjective experience of time.

To determine the presence of possible drug order effects in our data, we compared mean overall binding on ketamine versus placebo, introducing “order” (ketamine first vs. placebo first) as a between-subjects variable. We found no significant main effect of “order,” F(1, 12) = 0.381, p = .548, and no significant interaction, F(1, 12) = 0.889, p = .364. This suggests that changes in binding were not linked to drug order.

We also compared standard deviations of time estimates across repeated trials. These provide a measure of perceptual timing variability, with higher standard deviations reflecting inconsistent timing performance. This may indicate difficulty in using the clock for timing judgements, erratic allocation of attention either to the action/tone or to the clock, or general confusion.
The increase in binding on ketamine was driven by differences in the binding of actions towards tones, so we focus on the standard deviation of action time estimates. On ketamine the mean standard deviation was 77 ms (SD = 32), while on placebo it was 67 ms (SD = 18). Despite the numerical increase, the difference in mean standard deviation was not significant, t(13) = 1.149, p = .271 (two-tailed). This suggests that changes in action binding were not related to general changes in timing ability.

In a final control analysis, we investigated whether there was a significant reduction in the speed of the self-paced response on ketamine, as changes in binding could be related to changes in motor function. On ketamine the mean response latency was 3798 ms (SD = 1580), whereas on placebo it was 3538 ms (SD = 1160). Despite the numerical increase in response latency, this difference was not significant, t(13) = 0.945, p = .362 (two-tailed). This suggests that changes in action binding were not related to changes in motor function (as measured by response latency).

DISCUSSION: We investigated whether the psychotomimetic effects of ketamine extend to producing aberrant agency experiences associated with schizophrenia. On the intentional binding task under placebo conditions, the expected binding effect (a compression of the subjective interval between action and outcome; Haggard et al., 2002) was observed.
Under ketamine this effect was exaggerated, as has been previously reported in people with schizophrenia (Haggard et al., 2003; Voss et al., 2010).

The effect of ketamine on action–outcome binding is intriguing: The exaggerated effect was driven primarily by an increase in binding of actions towards the tone, rather than binding of tones back towards actions. Action binding represents the difference between action time estimates in the agency condition and action time estimates in the baseline condition. Previous studies have found that the experience of isolated action, as in the baseline condition, is anticipatory: On average, participants are aware of moving slightly before the actual onset of movement (Haggard, Newman, & Magno, 1999; Libet et al., 1983). This suggests that motor experience in this context is not based on feedback generated by the actual movement itself. If it were, one would expect a slightly delayed awareness of moving owing to inherent delays in the transmission of sensory information to the brain (Obhi, Planetta, & Scantlebury, 2009). Instead, it has been proposed that the experience of isolated action is linked to processes occurring prior to movement onset (Haggard, 2003). In our data, the baseline experience of action on ketamine was significantly earlier than on placebo, whereas the baseline action awareness on placebo was, unusually, slightly delayed relative to the actual keypress. This pattern of results suggests that the drug may have exaggerated the putative influence of action preparation on the experience of action.

However, although baseline action experience is generally anticipatory, the intentional binding effect in healthy adults shows that causing an external event through one's own actions (as in our agency condition) draws the temporal experience of action towards that event (Engbert & Wohlschläger, 2007; Haggard et al., 2002; Moore & Haggard, 2008; Moore, Lagnado, et al., 2009).
It is this shift in action experience that represents the binding effect for actions, and it was present on both placebo and ketamine. However, the magnitude of the shift was significantly greater on ketamine. It appears, therefore, that the presence of the tone exerted a particularly strong influence on action experience. In short, although ketamine has a strong effect on action experience when the action occurs without a perceptual consequence, we cannot interpret the drug's effect merely in terms of this baseline action experience. Rather, the significantly greater subjective shift, on ketamine, in the experience of action towards the tone means that a full explanation of the effects of ketamine must take into account the experience of action in both the absence and the presence of the tone.

Thus, bringing together the key results from the intentional binding task, ketamine appears to boost the influence of action preparation on action awareness, but also to boost the influence of the effects of action (a tone) on action awareness. This combination may seem paradoxical. However, several results suggest that the action experience is in fact a synthesis of a range of different events occurring over an extended time period between preparation and consequence (Banks & Isham, 2009; Haggard, 2005; Haggard, Cartledge, Dafydd, & Oakley, 2004; Lau, Rogers, & Passingham, 2007; Moore & Haggard, 2008; Moore, Wegner, et al., 2009). In normal circumstances, action awareness is likely to be the result of integration of efferent and afferent processes in the sensorimotor system (Moore & Haggard, 2008; Moore, Wegner, et al., 2009; Synofzik, Vosgerau, & Newen, 2008). On ketamine, however, the processes underlying this normal process of integration may be compromised.

To this extent, our results are consistent with a ketamine-induced deficit in monitoring action signals.
Participants appeared to feel dissociated from their own actions while on ketamine, since their representations of their own actions were susceptible to influences from other events, such as their original intentions and their subsequent effects. Confirmation of this dissociative interpretation comes from the correlations found between intentional binding and the specific CADSS item concerning the feeling of disconnection from the body. Taken together, these findings suggest that ketamine may preferentially influence a neural system for monitoring action. As a result of this deficit, actions on ketamine become mutable and vulnerable to capture by other events. However, given the apparently tight coupling of SoO and SoA, the fact that increased SoA was associated with an increase in the feeling of disconnection from one's body may be surprising. Dissociations between SoO and SoA are not uncommon in psychiatric illness such as schizophrenia. For example, a patient with passivity phenomena will recognise their actions as the movements of their own body (preserved SoO) but will experience their actions as produced by an external force (reduced SoA). However, these dissociations cannot explain our finding that an increase in SoA was associated with reduced SoO on ketamine. The mutability hypothesis discussed earlier may provide an explanation: If ketamine engenders mutability in the experience of action, then the more one's experience of action is “captured” by external sensory events the greater the externalisation of bodily experience may be, resulting in the feeling of “disconnection” from one's own body.

What might be the neurochemical and neuroanatomical basis of the hyperbinding effect we observed? One possibility is that hyperbinding is the product of aberrant prediction error signalling. Prediction error refers to the mismatch between expectation and occurrence, and is used as a teaching signal to drive causal associations between events (Dickinson, 2001).
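The teaching-signal idea can be made concrete with a minimal Rescorla-Wagner-style update, in which learning is proportional to the mismatch between outcome and expectation. This is a generic illustration of the principle, not a model fitted in the study:

```python
def rescorla_wagner(outcomes, alpha=0.3):
    # Associative strength v is nudged towards each observed outcome by a
    # fraction (alpha) of the prediction error delta = outcome - expectation.
    v = 0.0
    history = []
    for outcome in outcomes:
        delta = outcome - v   # prediction error: surprise drives learning
        v += alpha * delta
        history.append(v)
    return history

# Expectation climbs towards 1.0 as a fully predictable outcome repeats, so
# prediction error shrinks; aberrant signalling would keep it inappropriately
# large even for well-learned action-outcome associations.
trajectory = rescorla_wagner([1, 1, 1, 1, 1])
```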
Although midbrain dopamine neurons may signal a reward prediction error (Schultz & Dickinson, 2000), others have argued that their activity profile may reflect a novelty, salience, or surprise signal used by organisms to judge whether or not they caused a surprising event to happen (Redgrave & Gurney, 2006). We have previously shown that ketamine induces prediction error responses to predictable events and thus increases the salience of those events (Corlett et al., 2006). Neurochemically, ketamine may increase dopamine and glutamate corelease in the mesocortical pathway between the midbrain and prefrontal cortex (Corlett et al., 2006; Corlett, Honey, et al., 2007). Such signalling has been suggested to register surprise and permit its explanation (Lavin et al., 2005). Since associations between intention, action, and outcome are well learned, the ketamine-induced hyperbinding effect we report here may reflect inappropriate salience of action–outcome causal associations, via aberrant prediction error signalling. Our findings overall are compatible with the notion that the execution of action and SoA may be linked by a simple computational principle (minimising prediction error), which, when perturbed, could explain the varied phenomenology of psychosis (Corlett, Frith, & Fletcher, 2009; Fletcher & Frith, 2009).

The hyperbinding found previously in schizophrenia patients (Haggard et al., 2003; Voss et al., 2010), and here found also with ketamine, suggests an exaggerated SoA. A number of other studies, using different paradigms, have reported data that are consistent with this interpretation. For example, people with schizophrenia (including those experiencing passivity symptoms) are more likely than healthy controls to attribute the source of distorted or ambiguous visual feedback of an action to themselves (Daprati et al., 1997; Fourneret et al., 2002; Franck et al., 2001; Schnell et al., 2008).
This suggests a tendency towards over-attribution of sensory consequences of movement to oneself (Synofzik et al., 2008). However, these data are at odds with the feeling of reduced SoA that is typically reported by patients. One solution to this paradox is offered by Franck et al. (2001), who have suggested that patients with passivity symptoms have a tendency towards self-attribution of extraneous events (see also Daprati et al., 1997). This could result in a feeling of being influenced when observing another action, and hyperassociation when observing action outcomes. In short, it may be possible to recognise strongly the outcomes of one's actions while at the same time feeling a diminished sense of agency for the actions themselves. This implies a distinction between feeling one is the author of action on the one hand, and feeling one is the author of an effect on the other. This putative distinction would be usefully explored in future studies.

It should also be noted that exaggerated SoA may be associated with certain schizophrenia subtypes, particularly those with self-referential symptoms. For example, patients with persecutory delusions feel a greater sense of control over action outcomes compared with healthy and patient controls (Kaney & Bentall, 1992). Therefore, the exaggerated agency effects shown in previous patient studies could be driven by the presence of patients with self-referential symptoms in these samples. Intriguingly, self-referential symptoms are also a common effect of the ketamine challenge (Corlett, Honey, & Fletcher, 2007; Honey et al., 2006). It may be, therefore, that the increased SoA found in the current study is associated with this specific effect of the drug.

Our study shows that ketamine can mimic aberrant agency experiences associated with schizophrenia, but certain limitations of the task used should be noted.
Unlike previous intentional binding studies (Haggard et al., 2002), we did not include any involuntary movement conditions. Using transcranial magnetic stimulation (TMS) to induce involuntary movements, Haggard et al. (2002) showed that the binding of actions and outcomes was specific to voluntary, self-generated movement. In fact, when involuntary TMS-induced movements were followed by tones, they found a temporal repulsion between involuntary movement and tone. We did not include this TMS condition because the focus of our investigation was whether ketamine increased the magnitude of intentional binding for voluntary actions. It would be interesting in the future to explore the effect of the ketamine challenge on this “repulsion” effect.

Limitations of the paradigm should also be noted. Intentional binding represents an implicit measure of agency experience. That is, participants are not required to make explicit agency judgements, such as the attribution of an observed movement to its correct origin (as in Farrer & Frith, 2002, for example). Implicit measures have certain advantages, such as the quantification of subjective experience and the mitigation of demand effects. Also, such tasks may allow us to detect subtle perceptual and cognitive changes engendered by the drug and relate them to the early stages of psychosis. However, there are certain drawbacks. Primarily, implicit measures will fail to capture the broader phenomenology of SoA, in particular the highly complex phenomenology associated with delusions of agency in established psychosis. In the current study this limitation was mitigated somewhat by the observation that changes in these subtle implicit measures correlate with participants' self-reports of drug-induced changes in body experience.

Finally, limitations of the ketamine model of schizophrenia should also be acknowledged.
For example, ketamine produces a range of symptoms associated with endogenous psychosis (arguably a broader range than other drug models of the disease; Krystal et al., 1994), but there are notable exceptions (Fletcher & Honey, 2006). Furthermore, ketamine produces changes that are not necessarily associated with schizophrenia, such as euphoria (Fletcher & Honey, 2006). Although it is important to acknowledge limitations of the drug model, we do not feel they undermine our interpretation of the present data, given the fact that these data are consistent with schizophrenic psychopathology and replicate previous behavioural data from patients with the disease (Haggard et al., 2003; Voss et al., 2010).

Despite these caveats, this study shows that the psychotomimetic property of ketamine may extend to aberrant experiences of agency associated with schizophrenia. In particular, ketamine mimics the exaggerated intentional binding effect that has been found in association with the disease. The pattern of results suggested a mutable experience of action on ketamine, consistent with a deficit in the neural circuits for action monitoring. We believe that these findings may be explained in terms of changes in stimulus salience via aberrant prediction error signalling. Ketamine may be a valuable psychopharmacological model of aberrant agency experiences found in schizophrenia. To this extent, it could be used to elucidate the neurobiological and psychological basis of such aberrant experiences.
Keywords: Action-outcome binding; Ketamine; Schizophrenia; Sense of agency; Volition; Voluntary action
INTRODUCTION: Administration of the anaesthetic agent, ketamine, to healthy participants produces a state that resembles schizophrenia (Ghoneim, Hinrichs, Mewaldt, & Petersen, 1985; Krystal et al., 1994; Lahti, Weiler, Tamara Michaelidis, Parwani, & Tamminga, 2001). Although there are notable differences between the ketamine state and established schizophrenic illness (for example, ketamine does not reliably produce auditory hallucinations; Fletcher & Honey, 2006), ketamine does produce a range of symptoms associated with endogenous psychosis, including perceptual changes, ideas of reference, thought disorder, and some negative symptoms (Ghoneim et al., 1985; Krystal et al., 1994; Lahti et al., 2001; Mason, Morgan, Stefanovic, & Curran, 2008; Morgan, Mofeez, Brandner, Bromley, & Curran, 2004; Pomarol-Clotet et al., 2006). In addition, a number of cognitive changes produced by ketamine are comparable to those seen in schizophrenia (e.g., learning: Corlett, Murray, et al., 2007; memory: Fletcher & Honey, 2006; attention: Oranje et al., 2000; language: Covington et al., 2007). Overall, the effects of ketamine are most strikingly characteristic of the earliest stages of psychosis (Corlett, Honey, & Fletcher, 2007). Moreover, ketamine causes changes in brain activity that overlap with those reported in schizophrenia (Breier, Malhotra, Pinals, Weisenfeld, & Pickar, 1997; Corlett et al., 2006; Vollenweider, Leenders, Oye, Hell, & Angst, 1997; Vollenweider, Leenders, Scharfetter, et al., 1997). An important next step is to explore the effects of ketamine in greater detail and to exploit the potential that this approach offers for relating cognitive-behavioural function to subjective experiences in psychosis. Schizophrenia is associated with important changes in the experience of voluntary action such as those that occur in delusions of control (Frith, 1992). 
Although it has received little formal documentation, ketamine also, in our experience, alters the way that participants experience their own actions. For example, participants sometimes report that they don't feel fully in control of their own actions (“I don't feel in control of my muscles …,” and “… as though someone else was controlling my movements;” Pomarol-Clotet et al., 2006). Given these observations, together with the perceptuomotor abnormalities in schizophrenia, the current study was set up to characterise the effects of ketamine on a task examining voluntary actions and their sensory consequences. Sense of agency (SoA) refers to the experience of initiating and controlling voluntary action to achieve effects in the outside world. Sense of agency is a background feeling that accompanies most of our actions. Perhaps because of its ubiquity, it has proved difficult to isolate and measure experimentally. Recently, action-related changes in time perception have been proposed as a proxy for SoA (Haggard, Clark, & Kalogeras, 2002; Moore & Haggard, 2008; Moore, Lagnado, Deal, & Haggard, 2009). Situations that elicit SoA are associated with systematic changes in the temporal experience of actions and outcomes: There is a subjective compression of the interval between the action and the outcome. This relation between SoA and subjective time is revealed in the intentional binding paradigm developed by Haggard et al. (2002). In an agency condition, in which participants’ actions produced outcome tones, participants judged the time of an action or the time of the subsequent tone, in separate blocks of trials. Actions were perceived as occurring later in time compared to a nonagency (baseline) condition in which participants’ actions did not produce tones. In addition, a tone that followed the action was perceived as occurring earlier in time compared to a nonagency (baseline) condition involving tones but no actions. 
Importantly, these shifts were only found for voluntary actions: When the outcome was caused by an involuntary movement the reverse pattern of results was observed (actions perceived earlier and outcomes perceived later than their respective baseline estimates). Increased SoA is therefore associated with a later awareness of the action, and an earlier awareness of the outcome. This effect is robust and has been consistently replicated (see, for example, Engbert & Wohlschläger, 2007; Engbert, Wohlschläger, & Haggard, 2008; Moore, Wegner, & Haggard, 2009; Tsakiris & Haggard, 2003). It has also been shown that these changes in the subjective experience of time correlate with explicit higher order changes in the sense of agency, as measured using subjective rating scales (Ebert & Wegner, 2010; Moore & Haggard, 2010). In this way, intentional binding offers a precise, implicit measure of SoA. Of primary interest to the present study is the fact that the binding effect, defined as the temporal attraction between voluntary action and outcome, is greater in people with schizophrenia (Haggard, Martin, Taylor-Clarke, Jeannerod, & Franck, 2003; Voss et al., 2010). That is, people with schizophrenia show increased intentional binding. Our principal aim here was to determine whether ketamine also induced increased binding, as previously reported in schizophrenia. We also investigated the relationship between this implicit measure of SoA and subjective experiences of dissociation and psychotic-like phenomena produced by the drug as measured using the Clinician-Administered Dissociative States Scale (CADSS; Bremner et al., 1998). Here we focused our analysis on changes in the subjective experience of one's own body, since sense of ownership (SoO) over one's body and SoA may be related. For example, in healthy individuals SoA for a voluntary action may strongly depend on a SoO (Gallagher, 2000, 2007; Tsakiris, Schütz-Bosbach, & Gallagher, 2007). 
The reverse relationship may also hold, whereby the neurocognitive processes that give rise to sense of agency also contribute to SoO (Tsakiris, Prabhu, & Haggard, 2006). Dissociative symptoms, such as depersonalisation, are a common effect of the ketamine challenge (Goff & Coyle, 2001). Furthermore, there is frequent co-occurrence of depersonalisation and abnormal bodily experience (Sierra, Baker, Medford, & David, 2005; Simeon et al., 2008). Although not typically associated with established schizophrenic illness, depersonalisation appears to be associated with the schizophrenia prodrome (Goff & Coyle, 2001; Krystal et al., 1994). Therefore, given the link between bodily experience and sense of agency, and the common disruption of bodily experience engendered by the ketamine challenge, the body perception subscale on the CADSS questionnaire was of primary interest.

MATERIALS AND METHODS: Participants: Eighteen right-handed healthy volunteers were recruited (eight female; mean age = 22.4, range = 19–26; mean NART IQ = 114 [± 7]). The study was approved by Addenbrooke's NHS Trust Research Ethics Committee. Participants provided written, informed consent. One participant was excluded from the analysis on the basis of a preexisting history of psychiatric illness (although all participants were screened for the presence of psychiatric illness in themselves and relatives prior to taking part in the study, this participant only disclosed this information after testing). Three participants failed to complete the intentional binding task owing to nausea produced by the drug infusion. Therefore, 14 participants were included in the final analysis. These same participants also completed other cognitive tasks, unrelated to SoA, during infusion. It is planned to publish those results elsewhere.

Experimental design: The study used a double-blind, placebo-controlled, randomised, within-subjects design.

Infusion protocol: Participants were administered placebo (saline) or racemic ketamine (2 mg/mL) as an intravenous infusion using a target-controlled infusion system comprising a computer which implemented Stanpump software (S Shafer; http://www.opentci.org/doku.php?id=code:code) to control a syringe driver infusion pump (Graseby 3500; Graseby Medical Ltd, Watford, UK). Stanpump was programmed to use a two-compartmental pharmacokinetic model (Rigby-Jones, Sneyd, & Absalom, 2006), to implement a complex infusion profile designed to achieve prespecified plasma ketamine concentrations. During the drug session, participants received first a low dose of ketamine (plasma target 100 ng/mL) and then a higher dose (plasma target 200 ng/mL). The intentional binding task was completed at the low dose. Drug and placebo sessions were separated by at least 1 week. Participants also underwent a clinical rating (see later). The order of drug and placebo visits was counterbalanced across all 18 participants initially recruited.
Of the 14 participants who were included in the final analysis, eight participants completed the ketamine session first.

Intentional binding: Participants watched a computer screen on which a hand rotated around a clock-face (marked at conventional “5-minute” intervals) (see Figure 1). Each full rotation lasted 2560 ms. In the agency condition, participants pressed a key with their right index finger at a time of their choosing. This keypress produced a tone after a delay of 250 ms. The clock-hand continued rotating for a random period of time (between 1500 ms and 2500 ms). This ensures that the finishing position of the clock-hand is not informative with respect to where it was when the action or tone occurred (see Libet, Gleason, Wright, & Pearl, 1983).
When the hand stopped rotating, participants verbally reported the time of their keypress or the subsequent tone. These judgements were blocked, so participants only made a single type of estimate on each trial in each block. To make the time estimates, participants reported the position of the hand on the clock face when they either pressed the key or heard the tone. Participants completed a block of 20 action estimate trials and a block of 20 tone estimate trials.

Figure 1: Trial structure in the agency condition (following Haggard et al., 2002). Participants pressed the key at a time of their choosing, which produced a tone after a delay of 250 ms. Participants judged where the clock hand was when they pressed the key or when they heard the tone, in separate blocks of trials.

They completed two further 20-trial baseline blocks of time estimates. In one block (baseline action) participants pressed the key at a time of their choosing. However, the keypress never produced a tone, and on each trial participants reported the time of the keypress. In the other block (baseline effect) participants made no keypresses. Instead, a tone would sound at a random time on each trial and participants reported the time of the tone. The order of agency and baseline blocks was randomised anew for each participant. All blocks (baseline and agency) were performed during the drug/saline infusion. For our analysis we calculated an overall measure of intentional binding. We first calculated the binding effect for actions and tones individually. Action binding is found by subtracting the mean time estimate in the baseline action condition from the mean time estimate of actions in the agency condition. Tone binding is found by subtracting the mean time estimate in the baseline tone condition from the mean time estimate of tones in the agency condition. The overall measure of intentional binding was calculated by combining action and tone binding (i.e., action binding minus tone binding).
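These binding calculations can be sketched in a few lines of Python. The helper names and all data values below are hypothetical, for illustration only; the clock conversion assumes the conventional 60-unit clock face.

```python
from statistics import mean

ROTATION_MS = 2560  # duration of one full clock-hand rotation

def clock_to_ms(clock_minutes):
    # Convert a reported clock position ("minutes" on a 60-unit face)
    # into milliseconds within one rotation.
    return clock_minutes / 60 * ROTATION_MS

def binding(agency_errors, baseline_errors):
    # Perceptual shift (ms): mean judgement error in the agency condition
    # minus mean judgement error in the matching baseline condition.
    return mean(agency_errors) - mean(baseline_errors)

# Hypothetical per-trial judgement errors (reported minus actual event time, ms)
action_binding = binding([10, 20, 15], [-40, -30, -35])  # positive: actions drawn later
tone_binding = binding([-60, -50, -70], [20, 10, 15])    # negative: tones drawn earlier
overall_binding = action_binding - tone_binding          # larger value = stronger binding
```

With these toy numbers, action binding is +50 ms, tone binding is −75 ms, and the overall measure is 125 ms; larger overall values correspond to a stronger compression of the action–outcome interval.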
To determine the effect of ketamine on intentional binding (and therefore SoA), this overall measure of intentional binding was compared within subjects (ketamine vs. placebo; paired-samples t-test).

Clinical assessment: The Clinician-Administered Dissociative States Scale (CADSS; Bremner et al., 1998) was administered at both 100 ng/mL and 200 ng/mL. There are five subscales, each of which consists of items (questions), and participants’ responses are coded on a 5-point scale (0 = “Not at all” through to 4 = “Extremely”). As discussed, our analysis focused primarily on the body perception category. We assessed the strength of correlation between scores on items relating to body perception at 100 ng/mL with binding on ketamine.
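For illustration, the two statistical steps just described (a within-subjects comparison of overall binding on ketamine versus placebo, and a correlation between binding and CADSS item scores) could be sketched as below using only the standard library. The data and variable names are hypothetical, not the study's actual values.

```python
import math
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    # Paired-samples t statistic: mean of within-subject differences
    # divided by its standard error.
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def pearson_r(x, y):
    # Pearson correlation between two equal-length samples.
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical overall binding scores (ms), one per participant
ketamine = [120, 150, 90, 200]
placebo = [119, 148, 87, 196]
t_stat = paired_t(ketamine, placebo)

# Hypothetical CADSS Item 6 ratings (0-4) against action binding (ms)
r = pearson_r([0, 1, 2, 3], [40, 55, 60, 80])
```

In practice one would use a statistics package (e.g., `scipy.stats.ttest_rel` and `scipy.stats.pearsonr`) to obtain p-values as well; the sketch only shows where the reported t and r statistics come from.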
RESULTS: During the intentional binding task, the target plasma ketamine concentration was 100 ng/mL, and the measured ketamine plasma concentration (mean ± SD) was 157 ± 36 ng/mL.
Ketamine effects on intentional binding: Table 1 presents the binding effects for keypresses and tones (mean shifts from baseline) for the 14 participants who completed the task. These data show that in the agency conditions on both placebo and ketamine, keypresses were bound towards tones and tones were bound back towards keypresses. This is consistent with the intentional binding effect, as previously reported (e.g., Engbert et al., 2008; Haggard et al., 2002; Moore & Haggard, 2008).

Table 1: Mean judgement errors in ms (SD across subjects) and shifts relative to baseline conditions in ms.

Table 1 (final column) also presents the overall binding measure (keypress binding minus tone binding). These data show that overall binding was greater under ketamine compared with placebo. A paired-samples t-test revealed that this difference was significant, t(13) = 2.79, p = .008 (one-tailed). Follow-up paired-samples t-tests suggest that this difference is due to differences in binding for actions towards tones, t(13) = 2.35, p = .036 (two-tailed), rather than differences in binding for tones towards actions, t(13) = 0.242, p = .812 (two-tailed). Furthermore, this exaggerated binding appears to be driven by changes in baseline action judgements; isolated actions on ketamine were perceived as occurring significantly earlier than on placebo, t(13) = 2.59, p = .023 (two-tailed). Intentional binding is an implicit measure of SoA. These findings therefore suggest that SoA is exaggerated under ketamine, which is consistent with previous data on patients with schizophrenia (Haggard et al., 2003; Voss et al., 2010).

The relation between binding and body perception: We also examined the strength of correlation between binding on ketamine and scores on the CADSS assessment. The overall main effect of ketamine was generated by changes in action binding. Therefore, our correlations were based on this binding measure. There was no significant correlation between action binding and the overall CADSS score, r = .197, p = .499 (two-tailed). Further analyses focused on the body perception subscale. There was a significant positive correlation between action binding and Item 6 on the CADSS, which asks, “Do you feel disconnected from your own body?,” r = .549, p = .042 (two-tailed) (see Figure 2).
This suggests that the more participants felt disconnected from their bodies on ketamine, the greater the intentional binding effect. There was no significant correlation between action binding and Item 7 on the CADSS, which asks, “Does your sense of your own body feel changed: for instance, does your own body feel unusually large or unusually small?,” r = .208, p = .476 (two-tailed).

Figure 2: Scatter plot showing the significant correlation between action binding and CADSS Item 6 (“Do you feel disconnected from your own body?”) on ketamine (100 ng/mL).

Control analyses: The CADSS questionnaire also measures changes in time perception. Given the temporal nature of our SoA measure we investigated the putative relation between binding and general changes in time perception.
There were no significant correlations between action binding and time perception items on the scale (Item 1 “Do things seem to be moving in slow motion?,” r = .198, p = .498; Item 12 “Does this experience seem to take much longer than you would have expected?,” r = −.337, p = .238; Item 13 “Do things seem to be happening very quickly, as if there is a lifetime in a moment?,” r = .265, p = .360). This suggests that changes in action binding were not related to general changes in the subjective experience of time. To determine the presence of possible drug order effects in our data we compared mean overall binding on ketamine versus placebo, introducing “order” (ketamine first vs. placebo first) as a between-subjects variable. We found no significant main effect of “order,” F(1, 12) = 0.381, p = .548, and no significant interaction, F(1, 12) = 0.889, p = .364. This suggests that changes in binding were not linked to drug order. We also compared standard deviations of time estimates across repeated trials. These provide a measure of perceptual timing variability, with higher standard deviations reflecting inconsistent timing performance. This may indicate difficulty in using the clock for timing judgements, erratic allocation of attention either to the action/tone or to the clock, or general confusion. The increase in binding on ketamine was driven by differences in the binding of actions towards tones, so we focus on standard deviation of action time estimates. On ketamine the mean standard deviation was 77 ms (SD = 32) while on placebo it was 67 ms (SD = 18). Despite the numerical increase, the difference in mean standard deviation was not significant, t(13) = 1.149, p = .271 (two-tailed). This suggests that changes in action binding were not related to general changes in timing ability.
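The timing-variability check just described can be sketched as follows: a within-subject SD of action time estimates is computed per participant and then averaged across the sample for each drug condition. The data here are hypothetical.

```python
from statistics import mean, stdev

def mean_within_subject_sd(per_subject_errors):
    # per_subject_errors: one list of per-trial judgement errors (ms) per
    # participant. Higher values indicate less consistent timing performance.
    return mean(stdev(trials) for trials in per_subject_errors)

# Hypothetical action time estimates for two participants in one condition
variability = mean_within_subject_sd([[0, 10, 20], [5, 5, 35]])
```

The resulting per-condition means (here 77 ms on ketamine vs. 67 ms on placebo in the actual data) would then be compared with a paired-samples t-test, as in the main binding analysis.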
In a final control analysis, we investigated whether there was a significant reduction in the speed of the self-paced response on ketamine, as it could be that changes in binding are related to changes in motor function. On ketamine the mean response latency was 3798 ms (SD = 1580), whereas on placebo it was 3538 ms (SD = 1160). Despite the numerical increase in response latency, this difference was not significant, t(13) = 0.945, p = .362 (two-tailed). This suggests that changes in action binding were not related to changes in motor function (as measured by the response latency).

DISCUSSION: We investigated whether the psychotomimetic effects of ketamine extend to producing aberrant agency experiences associated with schizophrenia. On the intentional binding task under placebo conditions, the expected binding effect (a compression of the subjective interval between action and outcome; Haggard et al., 2002) was observed. Under ketamine this effect was exaggerated, as has been previously reported in people with schizophrenia (Haggard et al., 2003; Voss et al., 2010). The effect of ketamine on action–outcome binding is intriguing: The exaggerated effect was driven primarily by an increase in binding of actions towards the tone, rather than binding of tones back towards actions. Action binding represents the difference between action time estimates in the agency condition and action time estimates in the baseline condition. Previous studies have found that the experience of isolated action, as in the baseline condition, is anticipatory: On average, participants are aware of moving slightly before the actual onset of movement (Haggard, Newman, & Magno, 1999; Libet et al., 1983).
This suggests that motor experience in this context is not based on feedback generated by the actual movement itself. If it were, one would expect a slightly delayed awareness of moving owing to inherent delays in the transmission of sensory information to the brain (Obhi, Planetta, & Scantlebury, 2009). Instead, it has been proposed that the experience of isolated action is linked to processes occurring prior to movement onset (Haggard, 2003). In our data, the baseline experience of action on ketamine was significantly earlier than on placebo, whereas the baseline action awareness on placebo was, unusually, slightly delayed relative to the actual keypress. This pattern of results suggests that the drug may have exaggerated the putative influence of action preparation on the experience of action. However, although baseline action experience is generally anticipatory, the intentional binding effect in healthy adults shows that causing an external event through one's own actions (as in our agency condition) draws the temporal experience of action towards that event (Engbert & Wohlschläger, 2007; Haggard et al., 2002; Moore & Haggard, 2008; Moore, Lagnado, et al., 2009). It is this shift in action experience that represents the binding effect for actions, and it was present on both placebo and ketamine. However, the magnitude of the shift was significantly greater on ketamine. It appears, therefore, that the presence of the tone exerted a particularly strong influence on action experience. In short, although ketamine has a strong effect on action experience when the action occurs without a perceptual consequence, we cannot interpret the drug's effect merely in terms of this baseline action experience. Rather, the significantly greater subjective shift, on ketamine, in the experience of action towards the tone means that a full explanation of the effects of ketamine must take into account the experience of action in both the absence and the presence of the tone. 
Thus, bringing together the key results from the intentional binding task, ketamine appears to boost the influence of action preparation on action awareness, but also to boost the influence of the effects of action (a tone) on action awareness. This combination may seem paradoxical. However, several results suggest that the action experience is in fact a synthesis of a range of different events occurring over an extended time period between preparation and consequence (Banks & Isham, 2009; Haggard, 2005; Haggard, Cartledge, Dafydd, & Oakley, 2004; Lau, Rogers, & Passingham, 2007; Moore & Haggard, 2008; Moore, Wegner, et al., 2009). In normal circumstances, action awareness is likely to be the result of integration of efferent and afferent processes in the sensorimotor system (Moore & Haggard, 2008; Moore, Wegner, et al., 2009; Synofzik, Vosgerau, & Newen, 2008). On ketamine, however, the processes underlying this normal process of integration may be compromised. To this extent, our results are consistent with a ketamine-induced deficit in monitoring action signals. Participants appeared to feel dissociated from their own actions while on ketamine, since their representations of their own actions were susceptible to influences from other events, such as their original intentions and their subsequent effects. Confirmation of this dissociative interpretation comes from the correlations found between intentional binding and the specific CADSS item concerning the feeling of disconnection from the body. Taken together, these findings suggest that ketamine may preferentially influence a neural system for monitoring action. As a result of this deficit, actions on ketamine become mutable and vulnerable to capture by other events. However, given the apparently tight coupling of SoO and SoA, the fact that increased SoA was associated with an increase in the feeling of disconnection from one's body may be surprising. 
Dissociations between SoO and SoA are not uncommon in psychiatric illnesses such as schizophrenia. For example, a patient with passivity phenomena will recognise their actions as the movements of their own body (preserved SoO) but will experience their actions as produced by an external force (reduced SoA). However, these dissociations cannot explain our finding that an increase in SoA was associated with reduced SoO on ketamine. The mutability hypothesis discussed earlier may provide an explanation: If ketamine engenders mutability in the experience of action, then the more one's experience of action is “captured” by external sensory events the greater the externalisation of bodily experience may be, resulting in the feeling of “disconnection” from one's own body. What might be the neurochemical and neuroanatomical basis of the hyperbinding effect we observed? One possibility is that hyperbinding is the product of aberrant prediction error signalling. Prediction error refers to the mismatch between expectation and occurrence, and is used as a teaching signal to drive causal associations between events (Dickinson, 2001). Although midbrain dopamine neurons may signal a reward prediction error (Schultz & Dickinson, 2000), others have argued that their activity profile may reflect a novelty, salience, or surprise signal used by organisms to judge whether or not they caused a surprising event to happen (Redgrave & Gurney, 2006). We have previously shown that ketamine induces prediction error responses to predictable events and thus increases the salience of those events (Corlett et al., 2006). Neurochemically, ketamine may increase dopamine and glutamate corelease in the mesocortical pathway between the midbrain and prefrontal cortex (Corlett et al., 2006; Corlett, Honey, et al., 2007). Such signalling has been suggested to register surprise and permit its explanation (Lavin et al., 2005).
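The prediction-error account sketched above can be made concrete with the classic delta rule, in which associative strength is updated in proportion to the mismatch between outcome and expectation. The following is an illustrative toy simulation, not a model fitted in the study; the learning rate, outcome value, and function name `update_weight` are arbitrary assumptions:

```python
# Toy illustration of prediction-error ("delta rule") learning. All values
# are arbitrary; the study itself reports no such simulation.

def update_weight(w, outcome, alpha=0.2):
    """Move associative strength w towards the outcome by alpha * prediction error."""
    delta = outcome - w  # prediction error: occurrence minus expectation
    return w + alpha * delta

# A well-learned action->tone association: across repeated, fully predictable
# trials the weight approaches the outcome and prediction error decays to zero.
w = 0.0
errors = []
for _ in range(20):
    errors.append(1.0 - w)
    w = update_weight(w, outcome=1.0)
```

On the account in the text, ketamine would correspond to an inappropriately large delta even for a fully predicted outcome, lending the action–outcome association spurious salience.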
Since associations between intention, action, and outcome are well learned, the ketamine-induced hyperbinding effect we report here may reflect inappropriate salience of action–outcome causal associations, via aberrant prediction error signalling. Our findings overall are compatible with the notion that the execution of action and SoA may be linked by a simple computational principle (minimising prediction error), which, when perturbed, could explain the varied phenomenology of psychosis (Corlett, Frith, & Fletcher, 2009; Fletcher & Frith, 2009). The hyperbinding found previously in schizophrenia patients (Haggard et al., 2003; Voss et al., 2010), and here found also with ketamine, suggests an exaggerated SoA. A number of other studies, using different paradigms, have reported data that are consistent with this interpretation. For example, people with schizophrenia (including those experiencing passivity symptoms) are more likely than healthy controls to attribute the source of distorted or ambiguous visual feedback of an action to themselves (Daprati et al., 1997; Fourneret et al., 2002; Franck et al., 2001; Schnell et al., 2008). This suggests a tendency towards over-attribution of sensory consequences of movement to oneself (Synofzik et al., 2008). However, these data are at odds with the feeling of reduced SoA that is typically reported by patients. One solution to this paradox is offered by Franck et al. (2001), who have suggested that patients with passivity symptoms have a tendency towards self-attribution of extraneous events (see also Daprati et al., 1997). This could result in a feeling of being influenced when observing another action, and hyperassociation when observing action outcomes. In short, it may be possible to recognise strongly the outcomes of one's actions while at the same time feeling a diminished sense of agency for the actions themselves.
This implies a distinction between feeling one is the author of action on the one hand, and feeling one is the author of an effect on the other. This putative distinction would be usefully explored in future studies. It should also be noted that exaggerated SoA may be associated with certain schizophrenia subtypes, particularly those with self-referential symptoms. For example, patients with persecutory delusions feel a greater sense of control over action outcomes compared with healthy and patient controls (Kaney & Bentall, 1992). Therefore, the exaggerated agency effects shown in previous patient studies could be driven by the presence of patients with self-referential symptoms in these samples. Intriguingly, self-referential symptoms are also a common effect of the ketamine challenge (Corlett, Honey, & Fletcher, 2007; Honey et al., 2006). It may be, therefore, that the increased SoA found in the current study is associated with this specific effect of the drug. Our study shows that ketamine can mimic aberrant agency experiences associated with schizophrenia, but certain limitations of the task used should be noted. Unlike previous intentional binding studies (Haggard et al., 2002), we did not include any involuntary movement conditions. Using transcranial magnetic stimulation to induce involuntary movements, Haggard et al. (2002) showed that the binding of actions and outcomes was specific to voluntary, self-generated movement. In fact, when involuntary transcranial magnetic stimulation (TMS)-induced movements were followed by tones, they found a temporal repulsion between involuntary movement and tone. We did not include this TMS condition because the focus of our investigation was whether ketamine increased the magnitude of intentional binding for voluntary actions. It would be interesting in the future to explore the effect of the ketamine challenge on this “repulsion” effect. Limitations of the paradigm should also be noted. 
Intentional binding represents an implicit measure of agency experience. That is, participants are not required to make explicit agency judgements, such as the attribution of an observed movement to its correct origin (as in Farrer & Frith, 2002, for example). Implicit measures have certain advantages, such as the quantification of subjective experience, and the mitigation of demand effects. Also, such tasks may allow us to detect subtle perceptual and cognitive changes engendered by the drug and relate them to the early stages of psychosis. However, there are certain drawbacks. Primarily, implicit measures will fail to capture the broader phenomenology of SoA, in particular the highly complex phenomenology associated with delusions of agency in established psychosis. In the current study this limitation was mitigated somewhat by the observation that changes in these subtle implicit measures correlate with participants’ self-reports of drug-induced changes in body experience. Finally, limitations of the ketamine model of schizophrenia should also be acknowledged. For example, ketamine produces a range of symptoms associated with endogenous psychosis (arguably a broader range than other drug models of the disease; Krystal et al., 1994), but there are notable exceptions (Fletcher & Honey, 2006). Furthermore, ketamine produces changes that are not necessarily associated with schizophrenia, such as euphoria (Fletcher & Honey, 2006). Although it is important to acknowledge limitations of the drug model, we do not feel they undermine our interpretation of the present data, given the fact that these data are consistent with schizophrenic psychopathology and replicate previous behavioural data from patients with the disease (Haggard et al., 2003; Voss et al., 2010). Despite these caveats, this study shows that the psychotomimetic property of ketamine may extend to aberrant experiences of agency associated with schizophrenia. 
In particular, ketamine mimics the exaggerated intentional binding effect that has been found in association with the disease. The pattern of results suggested a mutable experience of action on ketamine, consistent with a deficit in the neural circuits for action monitoring. We believe that these findings may be explained in terms of changes in stimulus salience via aberrant prediction error signalling. Ketamine may be a valuable psychopharmacological model of aberrant agency experiences found in schizophrenia. To this extent, it could be used to elucidate the neurobiological and psychological basis of such aberrant experiences.
Background: Aberrant experience of agency is characteristic of schizophrenia. An understanding of the neurobiological basis of such experience is therefore of considerable importance for developing successful models of the disease. We aimed to characterise the effects of ketamine, a drug model for psychosis, on sense of agency (SoA). SoA is associated with a subjective compression of the temporal interval between an action and its effects: This is known as "intentional binding". This action-effect binding provides an indirect measure of SoA. Previous research has found that the magnitude of binding is exaggerated in patients with schizophrenia. We therefore investigated whether ketamine administration to otherwise healthy adults induced a similar pattern of binding. Methods: 14 right-handed healthy participants (8 female; mean age 22.4 years) were given low-dose ketamine (100 ng/mL plasma) and completed the binding task. They also underwent structured clinical interviews. Results: Ketamine mimicked the performance of schizophrenia patients on the intentional binding task, significantly increasing binding relative to placebo. The size of this effect also correlated with aberrant bodily experiences engendered by the drug. Conclusions: These data suggest that ketamine may be able to mimic certain aberrant agency experiences that characterise schizophrenia. The link to individual changes in bodily experience suggests that the fundamental change produced by the drug has wider consequences in terms of individuals' experiences of their bodies and movements.
INTRODUCTION: Administration of the anaesthetic agent, ketamine, to healthy participants produces a state that resembles schizophrenia (Ghoneim, Hinrichs, Mewaldt, & Petersen, 1985; Krystal et al., 1994; Lahti, Weiler, Tamara Michaelidis, Parwani, & Tamminga, 2001). Although there are notable differences between the ketamine state and established schizophrenic illness (for example, ketamine does not reliably produce auditory hallucinations; Fletcher & Honey, 2006), ketamine does produce a range of symptoms associated with endogenous psychosis, including perceptual changes, ideas of reference, thought disorder, and some negative symptoms (Ghoneim et al., 1985; Krystal et al., 1994; Lahti et al., 2001; Mason, Morgan, Stefanovic, & Curran, 2008; Morgan, Mofeez, Brandner, Bromley, & Curran, 2004; Pomarol-Clotet et al., 2006). In addition, a number of cognitive changes produced by ketamine are comparable to those seen in schizophrenia (e.g., learning: Corlett, Murray, et al., 2007; memory: Fletcher & Honey, 2006; attention: Oranje et al., 2000; language: Covington et al., 2007). Overall, the effects of ketamine are most strikingly characteristic of the earliest stages of psychosis (Corlett, Honey, & Fletcher, 2007). Moreover, ketamine causes changes in brain activity that overlap with those reported in schizophrenia (Breier, Malhotra, Pinals, Weisenfeld, & Pickar, 1997; Corlett et al., 2006; Vollenweider, Leenders, Oye, Hell, & Angst, 1997; Vollenweider, Leenders, Scharfetter, et al., 1997). An important next step is to explore the effects of ketamine in greater detail and to exploit the potential that this approach offers for relating cognitive-behavioural function to subjective experiences in psychosis. Schizophrenia is associated with important changes in the experience of voluntary action such as those that occur in delusions of control (Frith, 1992). 
Although it has received little formal documentation, ketamine also, in our experience, alters the way that participants experience their own actions. For example, participants sometimes report that they don't feel fully in control of their own actions (“I don't feel in control of my muscles …,” and “… as though someone else was controlling my movements;” Pomarol-Clotet et al., 2006). Given these observations, together with the perceptuomotor abnormalities in schizophrenia, the current study was set up to characterise the effects of ketamine on a task examining voluntary actions and their sensory consequences. Sense of agency (SoA) refers to the experience of initiating and controlling voluntary action to achieve effects in the outside world. Sense of agency is a background feeling that accompanies most of our actions. Perhaps because of its ubiquity, it has proved difficult to isolate and measure experimentally. Recently, action-related changes in time perception have been proposed as a proxy for SoA (Haggard, Clark, & Kalogeras, 2002; Moore & Haggard, 2008; Moore, Lagnado, Deal, & Haggard, 2009). Situations that elicit SoA are associated with systematic changes in the temporal experience of actions and outcomes: There is a subjective compression of the interval between the action and the outcome. This relation between SoA and subjective time is revealed in the intentional binding paradigm developed by Haggard et al. (2002). In an agency condition, in which participants’ actions produced outcome tones, participants judged the time of an action or the time of the subsequent tone, in separate blocks of trials. Actions were perceived as occurring later in time compared to a nonagency (baseline) condition in which participants’ actions did not produce tones. In addition, a tone that followed the action was perceived as occurring earlier in time compared to a nonagency (baseline) condition involving tones but no actions. 
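The binding measures described in this paradigm reduce to simple arithmetic on judgement errors (judged event time minus actual event time). A minimal sketch of that arithmetic, using hypothetical millisecond values and a helper name (`binding_shift`) of our own choosing:

```python
# Hedged sketch of how intentional binding scores are derived from
# Libet-clock judgement errors (judged time minus actual time, in ms).
# All numbers are hypothetical, not the study's data.
from statistics import mean

def binding_shift(agency_errors, baseline_errors):
    """Mean shift in perceived timing between agency and baseline blocks."""
    return mean(agency_errors) - mean(baseline_errors)

# Actions: perceived LATER in the agency condition -> positive shift (towards tone).
action_binding = binding_shift(agency_errors=[15, 25, 20], baseline_errors=[-10, -5, 0])
# Tones: perceived EARLIER in the agency condition -> negative shift (towards action).
tone_binding = binding_shift(agency_errors=[-40, -30, -50], baseline_errors=[10, 0, 5])
# Overall binding: total subjective compression of the action-tone interval.
overall_binding = action_binding - tone_binding
```

Positive action binding (actions judged later when they cause a tone) together with negative tone binding (tones judged earlier when caused) yields the subjective compression of the action–outcome interval.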
Importantly, these shifts were only found for voluntary actions: When the outcome was caused by an involuntary movement the reverse pattern of results was observed (actions perceived earlier and outcomes perceived later than their respective baseline estimates). Increased SoA is therefore associated with a later awareness of the action, and an earlier awareness of the outcome. This effect is robust and has been consistently replicated (see, for example, Engbert & Wohlschläger, 2007; Engbert, Wohlschläger, & Haggard, 2008; Moore, Wegner, & Haggard, 2009; Tsakiris & Haggard, 2003). It has also been shown that these changes in the subjective experience of time correlate with explicit higher order changes in the sense of agency, as measured using subjective rating scales (Ebert & Wegner, 2010; Moore & Haggard, 2010). In this way, intentional binding offers a precise, implicit measure of SoA. Of primary interest to the present study is the fact that the binding effect, defined as the temporal attraction between voluntary action and outcome, is greater in people with schizophrenia (Haggard, Martin, Taylor-Clarke, Jeannerod, & Franck, 2003; Voss et al., 2010). That is, people with schizophrenia show increased intentional binding. Our principal aim here was to determine whether ketamine also induced increased binding, as previously reported in schizophrenia. We also investigated the relationship between this implicit measure of SoA and subjective experiences of dissociation and psychotic-like phenomena produced by the drug as measured using the Clinician-Administered Dissociative States Scale (CADSS; Bremner et al., 1998). Here we focused our analysis on changes in the subjective experience of one's own body, since sense of ownership (SoO) over one's body and SoA may be related. For example, in healthy individuals SoA for a voluntary action may strongly depend on a SoO (Gallagher, 2000, 2007; Tsakiris, Schütz-Bosbach, & Gallagher, 2007). 
The reverse relationship may also hold, whereby the neurocognitive processes that give rise to sense of agency also contribute to SoO (Tsakiris, Prabhu, & Haggard, 2006). Dissociative symptoms, such as depersonalisation, are a common effect of the ketamine challenge (Goff & Coyle, 2001). Furthermore, there is frequent co-occurrence of depersonalisation and abnormal bodily experience (Sierra, Baker, Medford, & David, 2005; Simeon et al., 2008). Although not typically associated with established schizophrenic illness, depersonalisation appears to be associated with the schizophrenia prodrome (Goff & Coyle, 2001; Krystal et al., 1994). Therefore, given the link between bodily experience and sense of agency, and the common disruption of bodily experience engendered by the ketamine challenge, the body perception subscale on the CADSS questionnaire was of primary interest.
Background: Aberrant experience of agency is characteristic of schizophrenia. An understanding of the neurobiological basis of such experience is therefore of considerable importance for developing successful models of the disease. We aimed to characterise the effects of ketamine, a drug model for psychosis, on sense of agency (SoA). SoA is associated with a subjective compression of the temporal interval between an action and its effects: This is known as "intentional binding". This action-effect binding provides an indirect measure of SoA. Previous research has found that the magnitude of binding is exaggerated in patients with schizophrenia. We therefore investigated whether ketamine administration to otherwise healthy adults induced a similar pattern of binding. Methods: 14 right-handed healthy participants (8 female; mean age 22.4 years) were given low-dose ketamine (100 ng/mL plasma) and completed the binding task. They also underwent structured clinical interviews. Results: Ketamine mimicked the performance of schizophrenia patients on the intentional binding task, significantly increasing binding relative to placebo. The size of this effect also correlated with aberrant bodily experiences engendered by the drug. Conclusions: These data suggest that ketamine may be able to mimic certain aberrant agency experiences that characterise schizophrenia. The link to individual changes in bodily experience suggests that the fundamental change produced by the drug has wider consequences in terms of individuals' experiences of their bodies and movements.
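The implicit measure described in the abstract (binding as a subjective compression of the action–tone interval) is typically quantified from clock-judgment errors in baseline versus operant blocks. A minimal sketch of that computation follows; the function name and all numbers are illustrative, not the study's data or analysis code:

```python
from statistics import mean

def binding_scores(base_action_err, op_action_err, base_tone_err, op_tone_err):
    """Intentional-binding scores from judgment errors (ms).

    Errors are (judged time - actual time). In operant blocks the action is
    followed by a tone; binding shows up as the action being judged later
    and the tone earlier than in their respective baseline blocks.
    """
    action_shift = mean(op_action_err) - mean(base_action_err)  # > 0: action drawn toward tone
    tone_shift = mean(op_tone_err) - mean(base_tone_err)        # < 0: tone drawn toward action
    total_binding = action_shift - tone_shift                   # overall interval compression
    return action_shift, tone_shift, total_binding

# Illustrative judgment errors only (ms)
a, t, total = binding_scores(
    base_action_err=[2, -5, 6], op_action_err=[20, 12, 28],
    base_tone_err=[10, 4, 7], op_tone_err=[-40, -52, -46],
)
```

On this convention, a larger `total_binding` on ketamine than on placebo would correspond to the hyperbinding reported above.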
10,196
267
12
[ "binding", "ketamine", "action", "participants", "time", "tone", "changes", "intentional binding", "intentional", "actions" ]
[ "test", "test" ]
null
[CONTENT] Action-outcome binding | Ketamine | Schizophrenia | Sense of agency | Volition | Voluntary action [SUMMARY]
null
[CONTENT] Action-outcome binding | Ketamine | Schizophrenia | Sense of agency | Volition | Voluntary action [SUMMARY]
[CONTENT] Action-outcome binding | Ketamine | Schizophrenia | Sense of agency | Volition | Voluntary action [SUMMARY]
[CONTENT] Action-outcome binding | Ketamine | Schizophrenia | Sense of agency | Volition | Voluntary action [SUMMARY]
[CONTENT] Action-outcome binding | Ketamine | Schizophrenia | Sense of agency | Volition | Voluntary action [SUMMARY]
[CONTENT] Adult | Dose-Response Relationship, Drug | Excitatory Amino Acid Antagonists | Female | Healthy Volunteers | Humans | Judgment | Ketamine | Male | Models, Psychological | Psychoses, Substance-Induced | Schizophrenia | Sensation | Young Adult [SUMMARY]
null
[CONTENT] Adult | Dose-Response Relationship, Drug | Excitatory Amino Acid Antagonists | Female | Healthy Volunteers | Humans | Judgment | Ketamine | Male | Models, Psychological | Psychoses, Substance-Induced | Schizophrenia | Sensation | Young Adult [SUMMARY]
[CONTENT] Adult | Dose-Response Relationship, Drug | Excitatory Amino Acid Antagonists | Female | Healthy Volunteers | Humans | Judgment | Ketamine | Male | Models, Psychological | Psychoses, Substance-Induced | Schizophrenia | Sensation | Young Adult [SUMMARY]
[CONTENT] Adult | Dose-Response Relationship, Drug | Excitatory Amino Acid Antagonists | Female | Healthy Volunteers | Humans | Judgment | Ketamine | Male | Models, Psychological | Psychoses, Substance-Induced | Schizophrenia | Sensation | Young Adult [SUMMARY]
[CONTENT] Adult | Dose-Response Relationship, Drug | Excitatory Amino Acid Antagonists | Female | Healthy Volunteers | Humans | Judgment | Ketamine | Male | Models, Psychological | Psychoses, Substance-Induced | Schizophrenia | Sensation | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] binding | ketamine | action | participants | time | tone | changes | intentional binding | intentional | actions [SUMMARY]
null
[CONTENT] binding | ketamine | action | participants | time | tone | changes | intentional binding | intentional | actions [SUMMARY]
[CONTENT] binding | ketamine | action | participants | time | tone | changes | intentional binding | intentional | actions [SUMMARY]
[CONTENT] binding | ketamine | action | participants | time | tone | changes | intentional binding | intentional | actions [SUMMARY]
[CONTENT] binding | ketamine | action | participants | time | tone | changes | intentional binding | intentional | actions [SUMMARY]
[CONTENT] experience | schizophrenia | actions | haggard | ketamine | associated | 2007 | voluntary | changes | subjective [SUMMARY]
null
[CONTENT] binding | significant | changes | ketamine | action | tailed | 13 | action binding | standard | sd [SUMMARY]
[CONTENT] action | experience | ketamine | experience action | associated | aberrant | events | haggard | effect | schizophrenia [SUMMARY]
[CONTENT] binding | ketamine | action | participants | time | changes | tone | placebo | significant | mean [SUMMARY]
[CONTENT] binding | ketamine | action | participants | time | changes | tone | placebo | significant | mean [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| [SUMMARY]
null
[CONTENT] Ketamine ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| ||| 14 | 8 female | age 22.4 years | 100 ng/mL ||| ||| ||| Ketamine ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| ||| 14 | 8 female | age 22.4 years | 100 ng/mL ||| ||| ||| Ketamine ||| ||| ||| [SUMMARY]
Out-of-pocket expenditures for childbirth in the context of the Janani Suraksha Yojana (JSY) cash transfer program to promote facility births: who pays and how much? Studies from Madhya Pradesh, India.
27142657
High out-of-pocket expenditures (OOPE) make delivery care difficult to access for a large proportion of India's population. Given that home deliveries increase the risk of maternal mortality, in 2005 the Indian Government implemented the Janani Suraksha Yojana (JSY) program to incentivize poor women to deliver in public health facilities by providing a cash transfer upon discharge. We study the OOPE among JSY beneficiaries and women who deliver at home, and predictors of OOPE in two districts of Madhya Pradesh.
BACKGROUND
From September 2013 to April 2015, a cross-sectional community-based survey was performed. All recently delivered women were interviewed to elicit delivery costs, socio-demographic characteristics and delivery-related information.
METHODS
Most women (n = 1995, 84 %) delivered in a JSY public health facility; the remaining 16 % (n = 386) delivered at home. Women who delivered under the JSY program had a higher median (IQR) OOPE ($8, 3-18) than those who delivered at home ($6, 2-13). Among JSY beneficiaries, the poorest women had twice the net gain ($20) of the wealthiest ($10) after the cash transfer. Informal payments (64 %) and food/baby items (77 %) were the two most common sources of OOPE. OOPE among JSY beneficiaries was pro-poor: poorer women made proportionally smaller expenditures than wealthier women. In an adjusted model, delivering in a JSY public facility increased the odds of incurring expenditures (OR: 1.58, 95 % CI: 1.11-2.25) but led to a 16 % decrease (95 % CI: 0.73-0.96) in the amount paid compared to home deliveries.
RESULTS
OOPE is prevalent among JSY beneficiaries as well as in home deliveries. In JSY, OOPE varies by income quintile: wealthier quintiles pay more OOPE. However, the cash incentive is adequate to provide a net gain for all quintiles. OOPE was largely due to indirect costs and not direct medical payments. The program seems to be effective in providing financial protection for the most vulnerable groups.
CONCLUSIONS
[ "Adult", "Cross-Sectional Studies", "Female", "Health Care Costs", "Health Expenditures", "Health Services Accessibility", "Humans", "India", "Maternal Health Services", "Parturition", "Pregnancy" ]
4855911
Background
India’s health care services are largely financed by out-of-pocket payments (71 %) at the point of care [1]. High out-of-pocket expenditures (OOPE) make health services, including care for childbirth, difficult to access for a large proportion of its population especially the poor. In 2005, institutional delivery was nearly 6.5 times higher among Indian women belonging to the highest wealth quintile (84 %) compared to poorest quintile women (13 %) [2]. Evidence suggests maternal mortality can be reduced when deliveries are conducted by skilled birth attendants and women have access to emergency obstetric care, given the unpredictable nature of life-threatening complications that can occur at the time of childbirth [3]. Poor women, who have the least access to such care, bear a disproportionate burden of maternal mortality. Therefore governments in many low income settings, with high burdens of maternal mortality, have initiated special programs to draw women into facilities to give birth (instead of at home), where such care can be provided. Given the high number of maternal deaths in India (one-fifth of the global count) and a low institutional delivery proportion (39 % in 2005); the Indian government launched a conditional cash transfer program to promote institutional delivery among poor women the same year [2, 4]. The program, Janani Suraksha Yojana (JSY or safe motherhood program), provides women $23 (INR 1400) upon discharge after giving birth in a public health facility. JSY, the largest cash transfer program in the world, is funded by the central government of India (GOI) while implementation is managed by states. Eligibility, incentive amount and uptake differ across the states [5, 6]. In the ten years since the program began, institutional delivery rates have increased to 74 % and more than 106 million women have benefited with the GOI spending 16.4 billion USD on the program [7–9]. 
Previous studies have found a considerable proportion of Indian women cite financial access barriers as one of the main reasons for not having an institutional delivery [10–12]. The JSY program was expected to draw these mothers into public facilities with the assumption that they would receive a free delivery in addition to receiving the cash incentive of $23. While all services in the Indian public sector are supposed to be free, in reality OOPE during childbirth is common in these facilities [13–17]. Although there are a number of reports on the JSY [18–22], none focus on OOPE in the context of high JSY program uptake [13–17, 23]. The magnitude of OOPE incurred among these JSY beneficiaries is unknown. In addition, we do not know whether the cash incentive offsets OOPE in the same facility, and if it does, the extent to which it does. The question also remains whether the level of OOPE paid by JSY beneficiaries of different socio-economic status is similar, and whether women who participate in the JSY program actually have higher OOPE than women who give birth at home. We studied the OOPE among JSY beneficiaries and compared this OOPE to that incurred by women who delivered at home in two districts of Madhya Pradesh, India. We also described the extent to which the JSY cash transfer defrayed the OOPE for JSY beneficiaries. Among both groups of women, we studied predictors of OOPE, and how OOPE varied with wealth status.
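The study's central comparison, whether the $23 incentive offsets what women pay out of pocket, reduces to simple arithmetic over individual expenditures. A minimal sketch, with illustrative OOPE values (the function names and data are not from the study):

```python
from statistics import median

JSY_INCENTIVE = 23  # USD cash transfer on discharge (INR 1400), per the program description

def net_gain(oope_list, incentive=JSY_INCENTIVE):
    """Median net gain = cash incentive minus out-of-pocket expenditure."""
    return median(incentive - x for x in oope_list)

def share_exceeding(oope_list, incentive=JSY_INCENTIVE):
    """Fraction of women whose OOPE exceeds the cash benefit."""
    return sum(x > incentive for x in oope_list) / len(oope_list)

# Illustrative: three beneficiaries paying $3, $8 and $18
gain = net_gain([3, 8, 18])          # median of 20, 15, 5 -> 15
frac = share_exceeding([3, 8, 18, 30])
```

A positive median net gain within a wealth quintile is what the paper later reports as the incentive "defraying" OOPE for that group.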
null
null
Results
Descriptive sample characteristics for the sample

As depicted in Table 1, the majority of women (n = 1995, 84 %) in our study delivered in a JSY public health facility. The remaining 16 % (n = 386) delivered at home. The median age of the study sample was 23 years and 29 % (n = 692) had no formal education. More than a third (n = 932) of the women were primiparous. The main reason given for home deliveries was that the baby came unexpectedly and quickly (n = 312, 52 %). Other reasons included: planned to have a home delivery (n = 97, 16 %), transportation-related issues (n = 95, 16 %), no one to accompany them to the hospital (n = 58, 10 %) and other (n = 37, 6 %). Only one woman replied she could not afford to deliver in a health facility (results not shown).

Table 1. Background characteristics, median and inter-quartile range (IQR) of gross OOPE (in U.S. dollars) for women who delivered in a JSY facility or at home. Column percentages (%).

| Background characteristic | All women n (%) | All: Median (IQR) | JSY beneficiary n (%) | JSY: Median (IQR) | Home delivery n (%) | Home: Median (IQR) |
|---|---|---|---|---|---|---|
| Total | 2381 (100) | 8 (3–17) | 1995 (84) | 8 (3–18) | 386 (16) | 6 (2–13) |
| Age in years (median, IQR) | 23 (21–25) | – | 22 (20–25) | – | 25 (22–27) | – |
| District 1 | 1405 (59) | 14 (7–23)* | 1251 (63) | 14 (7–23)* | 154 (40) | 12 (5–23)* |
| District 2 | 976 (41) | 3 (1–7) | 744 (37) | 3 (1–7) | 232 (60) | 3 (1–7) |
| Education in years (median, IQR) | 5 (0–8) | – | 5 (0–8) | – | 4 (0–7) | – |
| Wealth: 1st quintile (poorest) | 519 (22) | 3 (1–7)** | 361 (18) | 3 (1–6)** | 158 (40) | 3 (0–7)** |
| Wealth: 2nd quintile | 505 (21) | 7 (2–13) | 414 (21) | 6 (2–15) | 91 (24) | 7 (2–12) |
| Wealth: 3rd quintile | 499 (21) | 11 (5–18) | 434 (22) | 11 (5–18) | 65 (17) | 10 (3–18) |
| Wealth: 4th quintile | 476 (20) | 14 (7–23) | 429 (21) | 14 (7–23) | 47 (12) | 8 (3–19) |
| Wealth: 5th quintile (least poor) | 382 (16) | 13 (7–25) | 357 (18) | 13 (7–24) | 25 (7) | 17 (8–29) |
| Caste: Scheduled Caste (SC) | 599 (25) | 10 (4–19)** | 502 (25) | 11 (5–20)** | 97 (25) | 8 (2–17)** |
| Caste: Other backward caste (OBC) | 904 (38) | 12 (5–22) | 807 (40) | 12 (5–22) | 97 (25) | 8 (4–17) |
| Caste: Scheduled Tribe (ST) | 598 (25) | 3 (1–6) | 430 (22) | 3 (1–6) | 168 (44) | 3 (1–7) |
| Caste: General | 280 (12) | 13 (4–21) | 256 (13) | 12 (4–22) | 24 (6) | 16 (5–20) |
| Birth order: 1st child | 932 (39) | 8 (3–19)** | 858 (43) | 9 (3–20)** | 74 (19) | 7 (3–12)** |
| Birth order: 2nd child | 838 (35) | 8 (3–17) | 688 (34) | 9 (3–17) | 150 (39) | 7 (2–15) |
| Birth order: 3rd child | 382 (16) | 5 (2–14) | 292 (15) | 7 (2–15) | 90 (23) | 4 (1–12) |
| Birth order: 4th or more child | 229 (10) | 7 (2–15) | 157 (8) | 8 (2–17) | 72 (19) | 4 (0–9) |
| Vaginal delivery | 2303 (97) | 8 (3–17)* | 1917 (96) | 8 (3–17)* | 386 (100) | 6 (2–13) |
| Cesarean section delivery | 78 (3) | 50 (21–93) | 78 (4) | 50 (21–93) | 0 (0) | – |
| Cost: Medicine, supplies and procedures | 224 (10) | 3 (1–8) | 181 (10) | 3 (1–7) | 43 (11) | 7 (3–8) |
| Cost: Informal payments | 1445 (65) | 5 (2–8) | 1187 (65) | 5 (2–9) | 258 (68) | 5 (3–8) |
| Cost: Food/baby items | 1695 (77) | 5 (3–8) | 1489 (81) | 5 (3–8) | 206 (55) | 3 (2–8) |
| Cost: Transportation | 328 (17) | 3 (1–8) | 328 (17) | 3 (1–8) | – | – |

*Wilcoxon-Mann–Whitney test, p-value ≤0.05; **Kruskal Wallis test, p-value ≤0.05. Column comparisons made.

Who had any OOPE?

Ninety-one percent (n = 2172) of the sample reported having OOPE; 92 % of JSY beneficiaries and 85 % of women who delivered at home. From the descriptive analysis in Table 1, women who delivered under the JSY program had a significantly higher median (IQR) OOPE ($8, 3–18) compared to women who delivered at home ($6, 2–13). The median (IQR) OOPE significantly differed between district one ($14, 7–23) and district two ($3, 1–7). The median OOPE increased with household wealth for women who delivered in a JSY facility or at home. This pattern was similar in both districts (data not shown). Women from the scheduled tribe caste paid the least OOPE (median $3, IQR 1–6) (Table 1). Women who delivered by caesarean section paid more than six times (median $50, IQR 21–93) the amount compared to women who delivered vaginally (median $8, IQR 3–17).

Does the JSY cash incentive defray OOPE?
Among the women who delivered in a JSY public facility, only a quarter (n = 504) received the cash incentive upon discharge; 68 % (n = 1353) were told to come back to receive the money. Assuming all JSY beneficiaries eventually receive the cash incentive, they would have a median net gain (i.e., JSY incentive – OOPE) of $11. The net gain was larger in district 2 ($19) compared to district 1 ($8) (data not shown). As demonstrated in Fig. 1, women from the poorest wealth quintile had twice the net gain ($20) versus the wealthiest quintile ($10). Only 4 % (21/519) from the poorest quintile incurred OOPE greater than the value of the JSY cash benefit.

Fig. 1. Gross OOPE and Net Gain (gross OOPE minus the incentive $23) for women who delivered in a JSY facility (n = 1995), U.S. dollars.
Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures

Breakdown of OOPE among JSY and home deliveries

Among the JSY and home deliveries, 92 % (n = 2213) provided disaggregated cost information. Informal payments (64 %) and food/baby items (77 %) were the two most common sources of OOPE (Table 1). The proportion of women incurring OOPE for food and cloth items was higher for JSY beneficiaries (81 %) than for home mothers (55 %). Among the two groups, the median cost for food and baby items was significantly higher ($5) for JSY beneficiaries compared to home deliveries ($3). No other significant differences were found. OOPE differences by district were found: the breakdown of costs by category was similar for JSY beneficiaries and home deliveries with the exception of the proportion of informal payments in district 2 (Fig. 2). Informal payments constituted only 5 % of the total OOPE for JSY beneficiaries in district 2 compared to 43 % in district 1. The proportion did not differ between districts among home mothers. Among JSY beneficiaries, a higher proportion of wealthy women incurred OOPE, and they had higher median OOPE, compared to the poorest quintile.

Fig. 2. Breakdown of out-of-pocket expenditures by cost categories for JSY beneficiaries and women who delivered at home by district, n = 2213*. Legend: JSY: Janani Suraksha Yojana; *this graph includes only women who were able to provide disaggregated costs (93 %)

Inequalities in OOPE among JSY and home deliveries

Figure 3 displays the concentration curves for OOPE among JSY beneficiaries and home deliveries. Since both curves lie below the line of equality, OOPE for both JSY beneficiaries and home mothers was found to be progressive, indicating that poorer households make proportionally less OOP payments during childbirth compared to wealthier households. JSY beneficiaries had less progressive OOPE (CI = 0.189) compared to women who delivered at home (CI = 0.293); however, this difference was not significant. The difference was also not significant when each district was analyzed separately.

Fig. 3. Concentration curve for JSY beneficiaries and women who delivered at home. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures

Impact of JSY on OOPE

When adjusting for confounders, women who delivered in a JSY public facility had 1.58 higher odds (95 % CI: 1.11–2.25) of incurring any OOPE than women who delivered at home (Table 2, model 1). Women from district 1 had twice the odds (95 % CI: 1.30–3.18) of having OOPE compared to district 2.
Wealth, caste and birth order were not significant predictors of incurring any OOPE.Table 2Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386)Background characteristicsPart 1 of model: AOR n = 238195 % confidence intervalPart 2 of model: IRR n = 217295 % confidence intervalPlace of delivery Home 1.00 1.00  JSY (Public Facility)1.58(1.11–2.25)*0.84(0.73–0.96)*Districts District #2 1.00 1.00  District #12.03(1.30–3.18)*2.36(2.06–2.69)**Household wealth 1st quintile (Poorest) 1.00 1.00  2nd quintile1.49(0.98–2.27)1.19(1.02–1.38)* 3rd quintile1.53(0.91–2.56)1.33(1.13–1.56)** 4th quintile1.14(0.65–1.99)1.31(1.11–1.55)** 5th quintile (Least Poor)1.21(0.64–2.31)1.34(1.02–1.49)*Caste Scheduled Caste (SC) 1.00 1.00  Other backward caste (OBC)1.15(0.76–1.73)1.14(1.01–1.28)* Scheduled Tribe (ST)0.97(0.62–1.52)0.70(0.60–0.81)** General1.19(0.63–2.22)1.07(0.91–1.26)Birth order 1st child 1.00 1.00  2nd child1.21(0.82–1.79)0.82(0.73–0.92)* 3rd child1.03(0.63–1.68)0.71(0.60–0.83)** 4th or more child0.64(0.35–1.17)0.69(0.56–0.86)*Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386) Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 However in model 2 among the women who had incurred any OOPE (Table 2), those who delivered under the JSY program paid 16 % less (95 % CI: 0.73–0.96) OOPE than women who delivered at home. Women from district 1 paid more than twice (95 % CI: 2.06–2.69) the OOPE compared to district 2. Increased wealth was also significantly related to higher OOPE. Conversely, being from a scheduled tribe and birth order (women with more children) were related to having lower OOPE. 
When adjusting for confounders, women who delivered in a JSY public facility had 1.58 higher odds (95 % CI: 1.11–2.25) to incur any OOPE than women who delivered at home (Table 2, model 1). Women from district 1 had twice the odds (95 % CI: 1.30–3.18) of having OOPE compared to district 2. Wealth, caste and birth order were not significant predictors of incurring any OOPE.Table 2Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386)Background characteristicsPart 1 of model: AOR n = 238195 % confidence intervalPart 2 of model: IRR n = 217295 % confidence intervalPlace of delivery Home 1.00 1.00  JSY (Public Facility)1.58(1.11–2.25)*0.84(0.73–0.96)*Districts District #2 1.00 1.00  District #12.03(1.30–3.18)*2.36(2.06–2.69)**Household wealth 1st quintile (Poorest) 1.00 1.00  2nd quintile1.49(0.98–2.27)1.19(1.02–1.38)* 3rd quintile1.53(0.91–2.56)1.33(1.13–1.56)** 4th quintile1.14(0.65–1.99)1.31(1.11–1.55)** 5th quintile (Least Poor)1.21(0.64–2.31)1.34(1.02–1.49)*Caste Scheduled Caste (SC) 1.00 1.00  Other backward caste (OBC)1.15(0.76–1.73)1.14(1.01–1.28)* Scheduled Tribe (ST)0.97(0.62–1.52)0.70(0.60–0.81)** General1.19(0.63–2.22)1.07(0.91–1.26)Birth order 1st child 1.00 1.00  2nd child1.21(0.82–1.79)0.82(0.73–0.92)* 3rd child1.03(0.63–1.68)0.71(0.60–0.83)** 4th or more child0.64(0.35–1.17)0.69(0.56–0.86)*Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386) Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 However in model 2 among the women who had incurred any OOPE (Table 2), those who delivered under the JSY program paid 16 % less (95 % CI: 0.73–0.96) OOPE than women who delivered at home. 
Women from district 1 paid more than twice the OOPE (95 % CI: 2.06–2.69) compared to district 2. Increased wealth was also significantly associated with higher OOPE. Conversely, belonging to a scheduled tribe and higher birth order (having more children) were associated with lower OOPE.
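The two-part structure behind these estimates — one model for whether any OOPE is incurred, a second for how much is paid by those who incur any — can be illustrated with a minimal numpy simulation. This is a schematic sketch with synthetic values, not the study's data or code; only the decomposition identity is the point.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic delivery costs shaped like the study's outcome:
# roughly 9 % zeros (no expense) and right-skewed positive amounts.
n = 2381
had_expense = rng.random(n) < 0.91            # part 1 outcome: any expense at all?
amount = rng.lognormal(mean=2.0, sigma=1.0, size=n)
oope = np.where(had_expense, amount, 0.0)

# Part 1 of a two-part model targets P(OOPE > 0);
# part 2 targets E[OOPE | OOPE > 0] among spenders only.
p_any = (oope > 0).mean()
mean_positive = oope[oope > 0].mean()

# The two parts recombine into the unconditional mean expenditure:
# E[OOPE] = P(OOPE > 0) * E[OOPE | OOPE > 0].
print(p_any * mean_positive, oope.mean())
```

In the study itself, part 1 is a binary logistic regression (reported as AORs) and part 2 a negative-binomial GLM (reported as IRRs); the identity above is what motivates splitting the estimation into those two pieces.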
Conclusion
OOPE is still prevalent among women who deliver under the JSY program as well as among those who deliver at home. Under the JSY, OOPE varies by income quintile: wealthier women pay more OOPE. There is a net gain for all quintiles once the incentive is taken into consideration, with the highest gains occurring among the poorest women. OOPE was largely due to indirect costs such as informal payments and food and cloth items for the baby, not direct medical payments. Further, we found that OOP payments under the program were progressive, with the most disadvantaged wealth quintile making proportionally less OOPE than wealthier women. Being a JSY beneficiary was associated with increased odds of incurring OOPE, but at the same time with a lower amount of OOPE incurred compared to women who delivered at home. While wealth was not a predictor of having any OOPE, it was an indicator of how much a woman would pay. The program appears effective in providing financial protection for the most vulnerable groups (i.e., women from poorer households and disadvantaged castes).
[ "Study area", "Study design and sampling", "Data collection", "Definitions and variables used", "Analysis", "Multivariable analysis", "Ethical considerations", "Descriptive sample characteristics for the sample", "Who had any OOPE?", "Does the JSY cash incentive defray OOPE?", "Breakdown of OOPE among JSY and Home deliveries", "Inequalities in OOPE among JSY and home deliveries", "Impact of JSY on OOPE", "Free delivery under the JSY program?", "Does JSY reduce financial access barriers?", "JSY role in reducing OOPE inequalities and inequities: wealth quintiles and OOPE in the JSY", "Methodological considerations", "Ethical considerations" ]
[ "The study took place in two of the 51 administrative districts that make up Madhya Pradesh (MP), a state in central India. With a population of 71 million, it is one of India’s largest states [24]. MP on the whole has poorer socio-economic and health indicators relative to country (India) averages; 24 % of the population lives below the poverty line, and the state has a maternal mortality ratio (MMR) of 190 per 100,000 live births [25]. In MP, all women are eligible to be beneficiaries of the JSY program, regardless of their poverty status, if they deliver in a public health facility. MP has had one of the highest uptakes of JSY in the country [26].\nThe two study districts were purposively selected to represent different socio-economic and geographic areas of MP. They were also study areas of a larger project (MATIND) [27] on the JSY program, in which this study was nested. Each study district (population 1.5–2 million) is divided into 5 administrative blocks (100,000–200,000 people per block), and each block comprises villages of approximately 1000–10,000 people. District one, situated on the western side of the state, has relatively better socio-economic characteristics, a lower MMR (176) and a higher uptake of the JSY program (72 %) [26]. In comparison, district two, which lies on the eastern side of the state, has generally poorer socio-economic characteristics, a higher MMR (361), and a lower JSY uptake (59 %) [26]. The districts differ not only in socio-economic characteristics but also in geographical features: the first district is relatively more urban with flat terrain, while the second district is mostly rural, with several hill ranges and dense forested areas.", "A community-based cross-sectional study was conducted from September 2013 to April 2015. Study villages were selected through multi-stage sampling in the two districts. In the first district, all blocks were included. 
In the second district, two of the five blocks were purposively selected, one in the north and the other in the south of the district. Within the selected blocks, villages with between 200 and 10,000 inhabitants were included in the sampling frame. All villages were stratified into two groups based on a five kilometer (km) distance from a public health facility that conducted more than 10 deliveries a month. Probability proportional to size sampling was used to select the individual study villages, ensuring that the number of sample units in each stratum was allocated in proportion to its share of the total population. In total, 247 villages were selected: 101 within and 146 outside the five km radius of a facility.", "The study participants (i.e., women who had just given birth in each village) were identified with the help of local community health workers (Accredited Social Health Activists), local crèche workers, and traditional birth attendants (Dai). The community workers were incentivized to report births to the research team within two days of delivery. As an additional measure to ensure all births were identified, research teams frequently visited the study villages throughout the recruitment period. Trained research assistants interviewed recently delivered mothers at a health facility or in their home within the first week of delivery. They elicited information on out-of-pocket expenditures (OOPE) for the current childbirth and other details, including maternal socio-demographic characteristics, birth order, household wealth, location of delivery, and type of delivery. Family members provided additional information on the costs if the mother could not recall or did not know.\nDuring the recruitment period, 2779 births were reported and 2615 women were enrolled in the study. 
The reasons given for not enrolling in the study were (i) the mother had migrated back to her resident village (n = 108), (ii) the home was not accessible to the research team (n = 52), (iii) maternal death (n = 8) and (iv) refusal to participate (n = 2). A small number of women (n = 8) reported that they did not know the cost of the delivery and were thus excluded from the analysis. Women who delivered in a private facility (n = 226) were removed from the analysis, as the study focused on the JSY program and OOPE. Moreover, these women have significantly different characteristics: they tend to be wealthier and live in urban settings, and they are not the primary focus of the JSY program.", "The main outcome of interest, OOPE, was the sum of the following expenditures (i.e., gross OOPE): 1. Medicine, supplies and procedures (i.e., delivery costs, medicines, supplies, blood transfusions, diagnostic tests, and anesthesia); 2. Informal payments (i.e., expenditures reported as ‘rewards’ paid by the women/families to the staff for assisting their care in the facility; for the women who delivered at home, cash given to the dai (the traditional birth attendant who conducts home deliveries) was classified as an informal payment); 3. Food/cloth (i.e., food consumed during the hospital stay or at home in relation to the delivery and cloths used for the infant); and 4. Transportation costs (i.e., all costs for the mother and her attendants associated with reaching the health facility for delivery). The OOPE were collected in Indian rupees (INR) and converted to U.S. dollars (US$) using the exchange rate at the time of the study of 60 INR to US$1.\nIndependent variables: socio-demographic characteristics included age and education as continuous variables and caste as a categorical variable. Birth order was categorized by number of live births up to four (or more). 
Women’s household wealth was assessed using a standard technique involving principal component analysis to construct a wealth index based on the socio-economic characteristics developed for the standard of living index in the National Family Health Survey. The variables included 20 household assets, structural material of the dwelling, sanitation provisions, and land ownership [28, 29]. The household wealth index was calculated for the entire study sample, and the score was then categorized into five quintiles. We used the wealth index as a proxy measurement for wealth, as income is a difficult variable to collect in low-income settings. The mother could have delivered under the JSY program at a public facility or at home. JSY beneficiaries were women who delivered in a public health facility and were thus eligible to receive the cash incentive. Type of delivery was classified as vaginal or cesarean section. We define the ‘net gain’ as the JSY incentive ($23) minus the OOPE.", "Univariate, bivariate and multivariable statistics were used to describe and analyze the data. The dependent variable, OOPE, was not normally distributed, so the median and interquartile range (IQR) were used to describe the data. Wilcoxon-Mann-Whitney and Kruskal Wallis tests were used to compare OOPE differences between groups.\nInequalities in OOPE were analyzed using the concentration curve and concentration index as demonstrated by Wagstaff et al. [30]. The concentration curve plots the cumulative percentage of OOPE (y-axis) against the cumulative percentage of the women, ranked by their wealth index from the poorest to the richest households (x-axis) [31]. We used the concentration curve to graphically display the inequalities in OOPE in this population. The concentration index (CI), which ranges from −1 to 1, allowed us to quantify the inequality in the OOPE. The CI is defined graphically as twice the area between the concentration curve and the line of equality [32]. 
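A compact numpy sketch of these two constructions — a first-principal-component asset index split into quintiles, and a concentration index computed from wealth-ranked OOPE via the covariance formula — may help make them concrete. This is an illustrative reconstruction, not the authors' code; the function names, the 0/1 asset data, and the covariance form of the CI are assumptions.

```python
import numpy as np

def wealth_scores(assets):
    """Score households on the first principal component of a 0/1 asset
    matrix (rows = households, columns = asset indicators), the standard
    asset-index construction. The sign of a principal component is
    arbitrary, so the score is oriented so owning more assets raises it."""
    X = np.asarray(assets, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize columns
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # vt[0] = first PC
    score = X @ vt[0]
    if np.corrcoef(score, np.asarray(assets, float).sum(axis=1))[0, 1] < 0:
        score = -score
    return score

def quintiles(score):
    """Assign wealth quintiles 1 (poorest) .. 5 (least poor) from scores."""
    cuts = np.percentile(score, [20, 40, 60, 80])
    return np.digitize(score, cuts) + 1

def concentration_index(oope, wealth):
    """Concentration index of OOPE over wealth rank, via the covariance
    form CI = 2 * cov(oope, rank) / mean(oope). A positive value means
    payments rise with wealth (a progressive distribution)."""
    y = np.asarray(oope, dtype=float)[np.argsort(wealth)]  # poorest first
    n = len(y)
    r = (np.arange(1, n + 1) - 0.5) / n                    # fractional rank
    return 2.0 * np.cov(y, r, bias=True)[0, 1] / y.mean()
```

For example, OOPE that rises with wealth rank yields a positive index (progressive), while a constant OOPE across households yields an index of zero.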
A positive value indicates a progressive system in which the wealthy bear a greater share of OOPE than the poor, while a negative value indicates the opposite (a regressive system).", "A two-part model, developed as part of the Rand Health Insurance Experiment, was used for the multivariable regression to assess determinants of OOPE [33–36]. This model is commonly used when studying health expenditures because it accommodates the significant number of zeros (no expense incurred) and the right-skewed, long-tailed distribution of the remaining expenditures, which fits our data. The first part, a binary logistic model, was used to understand the predictors associated with any OOPE and estimated the odds of a woman incurring any OOPE. We present the coefficients as adjusted odds ratios (AOR) and 95 % confidence intervals.\nThe second part of the model, a generalized linear model (GLM) with a negative binomial distribution, analyzed the determinants of OOPE among women who reported any OOPE. This model estimated the OOPE for women who reported incurring an expense. We adjusted for observable characteristics that may influence delivery-related OOPE based on previous literature and context-specific knowledge. We present the coefficients as incident rate ratios (IRR) and 95 % confidence intervals.\nMulticollinearity was assessed by calculating the mean variance inflation factor (VIF = 1.53), which did not show evidence of collinearity. No interaction existed between the variables in the model. In all the models, p-values <0.05 were considered significant.", "The study was described to all study participants. Informed consent was obtained from the participants before they were enrolled in the study and responded to the questionnaire. Anonymity and confidentiality were ensured for all women. 
Ethical approval for the study was granted by the Ethics Committee from the authors’ institution.", "As depicted in Table 1, the majority of women (n = 1995, 84 %) in our study delivered in a JSY public health facility. The remaining 16 % (n = 386) delivered at home. The median age of the study sample was 23 years and 29 % (n = 692) had no formal education. More than a third (n = 932) of the women were primiparous. The main reason given for home deliveries was that the baby came unexpectedly and quickly (n = 312, 52 %). Other reasons included: planned to have a home delivery (n = 97, 16 %), transportation-related issues (n = 95, 16 %), no one to accompany them to the hospital (n = 58, 10 %) and other (n = 37, 6 %). Only one woman replied she could not afford to deliver in a health facility (results not shown).

Table 1. Background characteristics and median (IQR) gross OOPE (in U.S. dollars) for women who delivered in a JSY facility or at home; column percentages (%)

| Background characteristics | All women, n (%) | All women, median (IQR) | JSY beneficiary, n (%) | JSY, median (IQR) | Home delivery, n (%) | Home, median (IQR) |
|---|---|---|---|---|---|---|
| Total | 2381 (100) | 8 (3–17) | 1995 (84) | 8 (3–18) | 386 (16) | 6 (2–13) |
| Age in years, median (IQR) | – | 23 (21–25) | – | 22 (20–25) | – | 25 (22–27) |
| District 1 | 1405 (59) | 14 (7–23)* | 1251 (63) | 14 (7–23)* | 154 (40) | 12 (5–23)* |
| District 2 | 976 (41) | 3 (1–7) | 744 (37) | 3 (1–7) | 232 (60) | 3 (1–7) |
| Education in years, median (IQR) | – | 5 (0–8) | – | 5 (0–8) | – | 4 (0–7) |
| Wealth: 1st quintile (poorest) | 519 (22) | 3 (1–7)** | 361 (18) | 3 (1–6)** | 158 (40) | 3 (0–7)** |
| Wealth: 2nd quintile | 505 (21) | 7 (2–13) | 414 (21) | 6 (2–15) | 91 (24) | 7 (2–12) |
| Wealth: 3rd quintile | 499 (21) | 11 (5–18) | 434 (22) | 11 (5–18) | 65 (17) | 10 (3–18) |
| Wealth: 4th quintile | 476 (20) | 14 (7–23) | 429 (21) | 14 (7–23) | 47 (12) | 8 (3–19) |
| Wealth: 5th quintile (least poor) | 382 (16) | 13 (7–25) | 357 (18) | 13 (7–24) | 25 (7) | 17 (8–29) |
| Caste: Scheduled Caste (SC) | 599 (25) | 10 (4–19)** | 502 (25) | 11 (5–20)** | 97 (25) | 8 (2–17)** |
| Caste: Other backward caste (OBC) | 904 (38) | 12 (5–22) | 807 (40) | 12 (5–22) | 97 (25) | 8 (4–17) |
| Caste: Scheduled Tribe (ST) | 598 (25) | 3 (1–6) | 430 (22) | 3 (1–6) | 168 (44) | 3 (1–7) |
| Caste: General | 280 (12) | 13 (4–21) | 256 (13) | 12 (4–22) | 24 (6) | 16 (5–20) |
| Birth order: 1st child | 932 (39) | 8 (3–19)** | 858 (43) | 9 (3–20)** | 74 (19) | 7 (3–12)** |
| Birth order: 2nd child | 838 (35) | 8 (3–17) | 688 (34) | 9 (3–17) | 150 (39) | 7 (2–15) |
| Birth order: 3rd child | 382 (16) | 5 (2–14) | 292 (15) | 7 (2–15) | 90 (23) | 4 (1–12) |
| Birth order: 4th or more child | 229 (10) | 7 (2–15) | 157 (8) | 8 (2–17) | 72 (19) | 4 (0–9) |
| Vaginal delivery | 2303 (97) | 8 (3–17)* | 1917 (96) | 8 (3–17)* | 386 (100) | 6 (2–13) |
| Cesarean section delivery | 78 (3) | 50 (21–93) | 78 (4) | 50 (21–93) | 0 (0) | – |
| Cost: medicine, supplies and procedures | 224 (10) | 3 (1–8) | 181 (10) | 3 (1–7) | 43 (11) | 7 (3–8) |
| Cost: informal payments | 1445 (65) | 5 (2–8) | 1187 (65) | 5 (2–9) | 258 (68) | 5 (3–8) |
| Cost: food/baby items | 1695 (77) | 5 (3–8) | 1489 (81) | 5 (3–8) | 206 (55) | 3 (2–8) |
| Cost: transportation | 328 (17) | 3 (1–8) | 328 (17) | 3 (1–8) | – | – |

*Wilcoxon-Mann–Whitney test, p-value ≤0.05; **Kruskal Wallis test, p-value ≤0.05. Column comparisons made.", "Ninety-one percent (n = 2172) of the sample reported having OOPE: 92 % of JSY beneficiaries and 85 % of women who delivered at home. From the descriptive analysis in Table 1, women who delivered under the JSY program had a significantly higher median (IQR) OOPE ($8, 3–18) compared to women who delivered at home ($6, 2–13). The median (IQR) OOPE also differed significantly between district one ($14, 7–23) and district two ($3, 1–7). The median OOPE increased with household wealth for women who delivered in a JSY facility or at home. This pattern was similar in both districts (data not shown). Women from the scheduled tribe caste paid the least OOPE (median $3, IQR 1–6) (Table 1). 
Women who delivered by caesarean section paid more than six times the amount (median $50, IQR 21–93) compared to women who delivered vaginally (median $8, IQR 3–17).", "Among the women who delivered in a JSY public facility, only a quarter (n = 504) received the cash incentive upon discharge; 68 % (n = 1353) were told to come back to receive the money. Assuming all JSY beneficiaries eventually receive the cash incentive, they would have a median net gain (i.e., JSY incentive – OOPE) of $11. The net gain was larger in district 2 ($19) compared to district 1 ($8) (data not shown). As demonstrated in Fig. 1, women from the poorest wealth quintile had twice the net gain ($20) versus the wealthiest quintile ($10). Only 4 % (21/519) from the poorest quintile incurred OOPE greater than the value of the JSY cash benefit.
Fig. 1 Gross OOPE and net gain (the $23 incentive net of gross OOPE) for women who delivered in a JSY facility (n = 1995), U.S. dollars. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures
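The 'net gain' bookkeeping described above (JSY incentive minus OOPE) can be sketched as follows. The $23 incentive and the roughly $8 median, right-skewed OOPE shape come from the text; the lognormal distribution and everything else here are assumptions for illustration, not the study data.

```python
import numpy as np

rng = np.random.default_rng(1)
INCENTIVE = 23.0  # JSY cash transfer in US$ (about INR 1400 at 60 INR per US$)

# Hypothetical OOPE for 1995 JSY beneficiaries: right-skewed, median near $8.
oope = rng.lognormal(mean=np.log(8.0), sigma=1.0, size=1995)

net_gain = INCENTIVE - oope                       # the paper's 'net gain' definition
median_net_gain = np.median(net_gain)
share_out_of_pocket = (oope > INCENTIVE).mean()   # OOPE exceeding the incentive

print(f"median net gain: ${median_net_gain:.2f}")
print(f"share with OOPE above the incentive: {share_out_of_pocket:.1%}")
```

Because net gain is a monotone transform of OOPE, the median net gain equals the incentive minus the median OOPE, which is why quintile-level medians (as in Fig. 1) can be read straight off the OOPE medians.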
Informal payments constituted only 5 % of the total OOPE for JSY beneficiaries in district 2 compared to 43 % in district 1. The proportion did not differ between districts among home mothers. Among JSY beneficiaries, a higher proportion of wealthy women incurred OOPE, and they had higher median OOPE, compared to the poorest quintile.
Fig. 2 Breakdown of out-of-pocket expenditures by cost categories for JSY beneficiaries and women who delivered at home, by district, n = 2213*. Legend: JSY: Janani Suraksha Yojana; *This graph includes only women who were able to provide disaggregated costs (93 %)", "Figure 3 displays the concentration curves for OOPE among JSY beneficiaries and home deliveries. Since both curves lie below the line of equality, OOPE for both JSY beneficiaries and home mothers was found to be progressive, indicating that poorer households make proportionally less OOP payments during childbirth compared to wealthier households. JSY beneficiaries had less progressive OOPE (CI = 0.189) compared to women who delivered at home (CI = 0.293); however, this difference was not significant. The difference was also not significant when each district was analyzed separately.
Fig. 3 Concentration curves for JSY beneficiaries and women who delivered at home. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures", "When adjusting for confounders, women who delivered in a JSY public facility had 1.58 times higher odds (95 % CI: 1.11–2.25) of incurring any OOPE than women who delivered at home (Table 2, model 1). 
Women from district 1 had twice the odds (95 % CI: 1.30–3.18) of having OOPE compared to district 2. Wealth, caste and birth order were not significant predictors of incurring any OOPE (Table 2). However, in model 2, among the women who had incurred any OOPE (Table 2), those who delivered under the JSY program paid 16 % less OOPE (95 % CI: 0.73–0.96) than women who delivered at home. Women from district 1 paid more than twice the OOPE (95 % CI: 2.06–2.69) compared to district 2. Increased wealth was also significantly related to higher OOPE. 
Conversely, belonging to a scheduled tribe and higher birth order (having more children) were associated with lower OOPE.", "High OOPE are a well-known constraint to the utilization of delivery services where ready access to cash is not available for many rural households, especially the poor whose ability to pay can be temporal or mainly rely on seasonal production like farms [39]. The vast majority of women in our study incurred some amount of OOPE. While these women paid less as JSY beneficiaries compared to women who delivered at home, they did not have a free delivery as intended by the public health care system. As mentioned above, in this study the direct medical costs were not the driving force behind OOPE, but informal payments to the staff. A qualitative study from the same area found women knew they were not supposed to make payments to the staff but did anyway out of fear of not receiving appropriate care in a timely manner [40].\nAlthough from the perspective of the Indian government, the primary purpose of the cash transfer is to incentivize women for a free delivery in a public health facility and not to cover OOPE, the same qualitative study found many women do in fact intend to use it to compensate for the OOPE incurred [40]. Only 25 % of the women who delivered in a JSY public facility in our study received the cash incentive upon discharge. While a previous study that took place in the same area found 85 % of the women received the benefit within two weeks of delivery [41], if the cash benefit is expected by the women to cover their OOPE, it needs to be received upon discharge. 
These bureaucratic issues related to receiving the JSY cash incentive and informal payments to public health facility staff members undermine the program and could potentially cause future uptake problems if not addressed.", "Population based surveys have shown a significant proportion of women report cost as the major barrier to an institutional delivery, [2, 37] and other research reports support this [10, 23, 42]. However, our model showed JSY beneficiaries had lower OOPE compared to women who delivered at home. In addition, it is important to note the JSY beneficiaries in our study received a net gain, especially among the poorest quintiles, after receiving the cash incentive ($23). Although this is to be expected since the poorest quintiles paid the least, regardless this would imply the program is reaching and assisting the most vulnerable groups. In our setting the incentive was more than adequate to cover the OOPE, while an earlier study from Orissa found the magnitude of the cash incentive was not large enough to compensate for the entire OOPE amount [43]. Another Indian study reported the JSY program provided women with some financial protection, though it was limited and did not cover the entire sum [14].\nFurther, the women in our study who delivered at home did not cite financial barriers as the justification for a home delivery. A recent qualitative study from the same area also found that cost was not a deterrent for most of the study participants who delivered at home [40]. This implies other access barriers persist, some of which may be remedied but not necessarily by a cash transfer.", "Our concentration curves showed that the OOPE for women who delivered under the JSY program was pro-poor; poorer households made proportionally less OOP payments during childbirth compared to wealthier households (i.e., a progressive system). From a strict equality perspective, the distribution was not equal. 
However, some would argue that wealthier households have the means to pay for services while poorer ones do not [44]. The OOPE may not be equal; nevertheless, the program is making OOPE more equitable and fair.\nIn general, health care financing systems that are progressive tend to have a redistributive nature. Taxation, under which wealthier households pay more than poorer ones, is one example. Social service provision by the government (e.g., offering free delivery care) is another [44]. So while vulnerable groups often have more healthcare needs, despite contributing less they are able to obtain the same service as their wealthier counterparts. Yet it is unknown whether the poorest women are in fact receiving the same level of service. In our study, when comparing the prevalence of different costs and the median expenditure between the poorest and wealthiest quintiles of JSY mothers, informal payments appear more prevalent among the wealthiest quintile, and higher amounts are paid. This may simply reflect that wealthier women have more disposable income to spend; alternatively, wealthier women may be informally paying for better services even within the same facilities. A program like JSY has the power to promote equality and equity in access to delivery care, and while several argue the overall quality of care administered in JSY public facilities is low [45–49], it is still important to ensure all women receive the best care possible regardless of whether they have the means to pay for it.\nWe have presented differences in OOPE between the two study districts throughout the paper. In district 1, a woman is more likely to have any OOPE and to pay higher amounts. Since the OOPE amount was higher in district 1, JSY beneficiaries’ net gain there was also smaller. These findings were not surprising considering the heterogeneity between the districts. 
The women are much poorer in district 2 compared to district 1, so this would affect the amount of money they spent. This also has implications for the relative worth of the incentive to women in their respective districts. Another possible explanation is the sampling methodology: while district 1 included all blocks, district 2 included only two of the five.", "Though there have been some studies assessing the OOPE experienced during childbirth in India, few have looked at this from an equity perspective under the JSY program. The previous studies used secondary data that did not allow for cost disaggregation, assessed a period when JSY coverage and overall institutional delivery were low, or were small in size [13–17, 23]. Furthermore, in some studies the costs were collected for deliveries that occurred in the previous five years, and thus probably suffered from substantial recall bias. Considering how quickly the JSY program has increased institutional birth proportions in such a short period of time, it is important to have recent data that reflect the current situation.\nReports from many Asian countries have found that families borrow money to pay for maternity-related costs and are thus forced to forgo essential items like food and education to repay the loans. These costs have a ripple effect on the family for years to come [50]. We did not enquire about the financing sources used to pay for the hospital delivery costs. This limits our ability to understand the role of JSY in providing financial protection and reducing subsequent impoverishment. So while this study has shown that JSY beneficiaries have reduced OOPE compared to women who delivered at home, further research is needed to understand the magnitude of the reduction in relation to the family’s overall poverty status.\nMany studies highlight the limitations (e.g., recall bias and over/underreporting) associated with collecting health expenditure data [51–53]. 
Cost data were collected shortly after delivery and triangulated with other family members to minimize recall bias. A disaggregated cost collection design was used to improve accuracy and avoid underreporting of expenditures. While our study design minimized recall bias, as the study participants were interviewed within a week of delivery, we do need to acknowledge the possibility of underreporting for women who were interviewed in a health facility because of staff presence.\nThe sex of the infant has been reported in the literature as a determinant of OOPE [14]. Data were not collected on the sex of the infant; therefore, we could not adjust for it in the model. It is reasonable to assume that in this setting the infant’s sex would influence OOPE, as it is well documented that families make higher informal payments when a male child is born [14]. This could affect the precision of the analysis, and the significance of other explanatory variables could be overestimated. However, we have no reason to think that the sex of the infant is differently distributed between the two groups.\nThere could be an argument for the payments made to the dai to be considered formal payments and classified as payment for ‘medicines, supplies and procedures’. However, we chose to classify these payments as informal because (a) the dai is not formally trained, (b) she is not part of the formal health system, and (c) remuneration to the dai is negotiable, i.e., there are no fixed stipulated fees; she is often remunerated partly in cash and partly in kind, based on the ability of the mother’s family to pay and their relationship with the dai.\nSampling: the districts were sampled in the same way, except for the number of blocks chosen. The process by which women were selected for the study was the same in both districts. 
As the districts in the state and in the rest of the country are very heterogeneous, our results should be generalized with caution.", "Ethical approval for the study was granted by the Ethics Committee of R.D. Gardi Medical College (Ujjain, India) and Karolinska Institutet (Stockholm, Sweden), reference # 2010/1671–31/5." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study area", "Study design and sampling", "Data collection", "Definitions and variables used", "Analysis", "Multivariable analysis", "Ethical considerations", "Results", "Descriptive sample characteristics for the sample", "Who had any OOPE?", "Does the JSY cash incentive defray OOPE?", "Breakdown of OOPE among JSY and Home deliveries", "Inequalities in OOPE among JSY and home deliveries", "Impact of JSY on OOPE", "Discussion", "Free delivery under the JSY program?", "Does JSY reduce financial access barriers?", "JSY role in reducing OOPE inequalities and inequities: wealth quintiles and OOPE in the JSY", "Methodological considerations", "Conclusion", "Ethical considerations" ]
[ "India’s health care services are largely financed by out-of-pocket payments (71 %) at the point of care [1]. High out-of-pocket expenditures (OOPE) make health services, including care for childbirth, difficult to access for a large proportion of its population especially the poor. In 2005, institutional delivery was nearly 6.5 times higher among Indian women belonging to the highest wealth quintile (84 %) compared to poorest quintile women (13 %) [2].\nEvidence suggests maternal mortality can be reduced when deliveries are conducted by skilled birth attendants and women have access to emergency obstetric care, given the unpredictable nature of life-threatening complications that can occur at the time of childbirth [3]. Poor women, who have the least access to such care, bear a disproportionate burden of maternal mortality.\nTherefore governments in many low income settings, with high burdens of maternal mortality, have initiated special programs to draw women into facilities to give birth (instead of at home), where such care can be provided. Given the high number of maternal deaths in India (one-fifth of the global count) and a low institutional delivery proportion (39 % in 2005); the Indian government launched a conditional cash transfer program to promote institutional delivery among poor women the same year [2, 4]. The program, Janani Suraksha Yojana (JSY or safe motherhood program), provides women $23 (INR 1400) upon discharge after giving birth in a public health facility.\nJSY, the largest cash transfer program in the world, is funded by the central government of India (GOI) while implementation is managed by states. Eligibility, incentive amount and uptake differ across the states [5, 6]. 
In the ten years since the program began, institutional delivery rates have increased to 74 % and more than 106 million women have benefited, with the GOI spending 16.4 billion USD on the program [7–9].\nPrevious studies have found a considerable proportion of Indian women cite financial access barriers as one of the main reasons for not having an institutional delivery [10–12]. The JSY program was expected to draw these mothers into public facilities with the assumption that they will receive a free delivery in addition to receiving the cash incentive of $23. While all services in the Indian public sector are supposed to be free, in reality OOPE during childbirth is common among these facilities [13–17].\nAlthough there are a number of reports on the JSY [18–22], none focus on OOPE in the context of high JSY program uptake [13–17, 23]. The magnitude of OOPE incurred among these JSY beneficiaries is unknown. In addition, we do not know if the cash incentive offsets OOPE in the same facility and, if so, to what extent. Also the question remains whether the level of OOPE paid by JSY beneficiaries of different socio-economic status is similar. Further, it is unknown whether women who participate in the JSY program actually have higher OOPE than women who give birth at home. We studied the OOPE among JSY beneficiaries and compared this OOPE to that incurred by women who delivered at home in two districts of Madhya Pradesh, India. We also described the extent to which the JSY cash transfer defrayed the OOPE for JSY beneficiaries. Among both groups of women, we studied predictors of OOPE, and how OOPE varied with wealth status.", " Study area The study took place in two of the 51 administrative districts that make up Madhya Pradesh (MP), a state in central India. With a population of 71 million, it is one of India’s largest states [24].
MP on the whole has poorer socio-economic and health indicators relative to country (India) averages: 24 % of the population lives below the poverty line, and the state has a maternal mortality ratio (MMR) of 190 per 100,000 live births [25]. In MP, all women are eligible to be beneficiaries of the JSY program regardless of their poverty status, if they deliver in a public health facility. MP has had one of the highest uptakes of JSY in the country [26].\nThe two study districts were purposively selected to represent different socio-economic and geographic areas of MP. They were also study areas of a larger project (MATIND) [27] to study the JSY program, in which this study was nested. Each study district (population 1.5–2 million) is divided into 5 administrative blocks (100,000–200,000 people per block); each block comprises villages with approximate populations of 1000–10,000 people per village. District one, situated on the western side of the state, has relatively better socio-economic characteristics, a lower MMR (176) and a higher uptake of the JSY program (72 %) [26]. In comparison, district two, which lies on the eastern side of the state, has generally poorer socio-economic characteristics, a higher MMR (361), and a lower JSY uptake (59 %) [26]. The districts differ not only on socio-economic characteristics but also in geographical features. The first district is relatively more urban and has flat geographical terrain, while the second district is mostly rural, with several hill ranges as well as dense forested areas.\n Study design and sampling A community-based cross-sectional study was conducted from September 2013 to April 2015. Study villages were selected through multi-stage sampling in the two districts. In the first district, all blocks were included. In the second district, two of the five blocks were purposively selected, one in the north and the other in the south of the district. Within the selected blocks, villages that had between 200 and 10,000 inhabitants were included in the sampling frame.
All villages were stratified into two groups based on a five kilometer (km) distance from a public health facility that conducted more than 10 deliveries in a month. Probability proportionate to size was used to select the individual study villages, ensuring the number of sample units in each stratum would be allocated in proportion to their share in the total population. In total, 247 villages were selected: 101 within and 146 outside the five km radius of a facility.\n Data collection The study participants (i.e., women who had recently given birth in each village) were identified with the help of local community health workers (Accredited Social Health Activists), local crèche workers, and traditional birth attendants (Dai). The community workers were incentivized to report births to the research team within two days of delivery.
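The probability-proportionate-to-size (PPS) village selection described above can be illustrated with a minimal systematic-PPS sketch. The village sizes, seed, and function name below are hypothetical, not the study's actual sampling frame:

```python
import random

def pps_sample(village_sizes, n_select, seed=2013):
    """Systematic PPS: select village indices with probability proportional
    to population size, using equally spaced points on the cumulative-size scale."""
    total = sum(village_sizes)
    step = total / n_select
    start = random.Random(seed).uniform(0, step)
    targets = [start + k * step for k in range(n_select)]
    selected, cum, t = [], 0.0, 0
    for idx, size in enumerate(village_sizes):
        cum += size
        # a village larger than `step` can be hit by more than one point
        while t < len(targets) and targets[t] <= cum:
            selected.append(idx)
            t += 1
    return selected

# Hypothetical frame: 20 villages of 200-10,000 inhabitants
sizes = [200, 500, 800, 1000, 1200, 1500, 2000, 2500, 3000, 3500,
         4000, 4500, 5000, 5500, 6000, 6500, 7000, 8000, 9000, 10000]
chosen = pps_sample(sizes, 5)
```

Larger villages span wider intervals on the cumulative scale and are therefore more likely to be hit by one of the equally spaced points, which is what allocates the sample in proportion to population share.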
As an additional measure to ensure all births were identified, research teams frequently visited the study villages throughout the recruitment period. Trained research assistants interviewed recently delivered mothers at a health facility or in their home within the first week of delivery. They elicited information on out-of-pocket expenditures (OOPE) for the current childbirth and other details including maternal socio-demographic characteristics, birth order, household wealth, location of delivery, and type of delivery. Family members provided additional information on the costs if the mother could not recall or did not know.\nDuring the recruitment period, 2779 births were reported and 2615 women were enrolled in the study. The reasons given for not enrolling in the study were (i) the mother had migrated back to her resident village (n = 108), (ii) the home was not accessible to the research team (n = 52), (iii) maternal death (n = 8) and (iv) refusal to participate (n = 2). A small number of women (n = 8) reported that they did not know the cost of the delivery and were thus excluded from the analysis. Women who delivered in a private facility (n = 226) were removed from the analysis as the study focused on the JSY program and OOPE. These women also have significantly different characteristics; they tend to be wealthier, live in urban settings and are not the primary focus of the JSY program.\n Definitions and variables used The main outcome of interest, OOPE, was the sum of the following expenditures (i.e., gross OOPE): 1. Medicine, supplies and procedures (i.e., delivery costs, medicines, supplies, blood transfusions, diagnostic tests, and anesthesia); 2. Informal payments (i.e., expenditures reported as ‘rewards’ paid by the women/families to the staff for assisting their care in the facility). For the women who delivered at home, cash given to the dai (traditional birth attendant who conducts home deliveries) was classified as an informal payment; 3. Food/Cloth (i.e., food consumed during hospital stay or at home in relation to the delivery and cloths used for the infant); and 4.
Transportation costs (i.e., all costs for the mother and her attendants associated with reaching the health facility for delivery). The OOPE were collected in Indian rupees (INR) and converted to U.S. dollars (US$) using the exchange rate at the time of the study of 60 INR to US$1.\nIndependent variables: socio-demographic characteristics included age and education as continuous variables and caste1 as a categorical variable. Birth order was categorized by number of live births up to four (or more). Women’s household wealth was assessed using a standard technique involving principal component analysis to construct a wealth index based on socio-economic characteristics developed for the standard of living index in the National Family Health Survey. The variables included 20 household assets, structural material of the dwelling, sanitation provisions, and land ownership [28, 29]. The household wealth index was calculated for the entire study sample, then the score was categorized into five quintiles. We used the wealth index as a proxy measurement for wealth, as income is a difficult variable to collect in low-income settings. A mother could have delivered under the JSY program at a public facility or at home. JSY beneficiaries were any women who delivered in a public health facility and were thus eligible to receive the cash incentive. Type of delivery was classified into vaginal or cesarean section. We define the ‘net gain’ as the JSY incentive ($23) minus the OOPE.\n Analysis Univariate, bivariate and multivariable statistics were used to describe and analyze the data.
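A minimal sketch of the asset-based wealth index described above, assuming a toy binary asset matrix (the study used 20 assets plus housing, sanitation, and land variables). The power-iteration PCA and quintile cut here are illustrative, not the authors' exact procedure:

```python
import math

def standardize(col):
    """z-score a column; constant columns map to zeros."""
    n = len(col)
    mu = sum(col) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in col) / n)
    return [0.0 if sd == 0 else (v - mu) / sd for v in col]

def wealth_quintiles(households):
    """Score each household on the first principal component of its
    standardized asset indicators, then split into quintiles 1..5."""
    cols = [standardize(list(c)) for c in zip(*households)]
    X = [list(r) for r in zip(*cols)]  # standardized rows
    n, p = len(X), len(cols)
    # covariance matrix C = X'X / n
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(p)]
         for a in range(p)]
    # power iteration for the leading eigenvector
    v = [1.0] * p
    for _ in range(100):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    if sum(v) < 0:  # orient so that owning assets raises the score
        v = [-x for x in v]
    scores = [sum(X[i][j] * v[j] for j in range(p)) for i in range(n)]
    order = sorted(range(n), key=lambda i: scores[i])
    quintile = [0] * n
    for rank, i in enumerate(order):
        quintile[i] = rank * 5 // n + 1  # 1 = poorest ... 5 = least poor
    return quintile
```

With positively correlated asset indicators, the leading eigenvector has all-positive loadings, so households owning more assets receive higher scores and land in higher quintiles.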
The dependent variable, OOPE, was not normally distributed; thus, the median and interquartile range (IQR) were used to describe the data. Wilcoxon-Mann-Whitney and Kruskal-Wallis tests were used to compare OOPE differences between groups.\nInequalities in OOPE were analyzed using the concentration curve and concentration index as demonstrated by Wagstaff et al. [30]. The concentration curve plots the cumulative percentage of OOPE (y-axis) against the cumulative percentage of the women, ranked by their wealth index from the poorest to the richest households (x-axis) [31]. We used the concentration curve to graphically display the inequalities in OOPE in this population. The concentration index (CI), which ranges from −1 to 1, allowed us to quantify the inequality in OOPE. The CI is defined graphically as twice the area between the concentration curve and the line of equality [32]. A positive value indicates a progressive system, where the wealthy bear a larger proportion of OOPE compared to the poor, while a negative value indicates the opposite (a regressive system).\n Multivariable analysis A two-part model, developed as part of the Rand Health Insurance Experiment, was used for the multivariable regression to assess determinants of OOPE [33–36]. This model is commonly used when studying health expenditures to accommodate the significant number of zeros (no expense incurred) and for its distribution (i.e., right skewed with a long tail), which fits our data. The first part, a binary logistic model, was used to understand the predictors associated with any OOPE and estimated the odds of a woman incurring any OOPE. We present the coefficients as adjusted odds ratios (AOR) with 95 % confidence intervals.\nThe second part of the model, a generalized linear model (GLM) with a negative binomial distribution, analyzed the determinants of OOPE among women who reported any OOPE. This model estimated the OOPE for women who reported incurring an expense. We adjusted for observable characteristics that may influence delivery-related OOPE based on previous literature and context-specific knowledge. We present the coefficients as incidence rate ratios (IRR) with 95 % confidence intervals.\nMulticollinearity was assessed by calculating the mean variance inflation factor (VIF = 1.53), which did not show evidence of collinearity. No interaction existed between the variables in the model. In all models, p-values <0.05 were considered significant.\n Ethical considerations The study was described to all study participants. Informed consent was obtained from the participants before they were enrolled in the study and responded to the questionnaire. Anonymity and confidentiality were ensured to all women.
Ethical approval for the study was granted by the Ethics Committee from the authors’ institution.", "The study took place in two of the 51 administrative districts that make up Madhya Pradesh (MP), a state in central India. With a population of 71 million, it is one of India’s largest states [24]. MP on the whole has poorer socio-economic and health indicators relative to country (India) averages: 24 % of the population lives below the poverty line, and the state has a maternal mortality ratio (MMR) of 190 per 100,000 live births [25]. In MP, all women are eligible to be beneficiaries of the JSY program regardless of their poverty status, if they deliver in a public health facility. MP has had one of the highest uptakes of JSY in the country [26].\nThe two study districts were purposively selected to represent different socio-economic and geographic areas of MP. They were also study areas of a larger project (MATIND) [27] to study the JSY program, in which this study was nested. Each study district (population 1.5–2 million) is divided into 5 administrative blocks (100,000–200,000 people per block); each block comprises villages with approximate populations of 1000–10,000 people per village. District one, situated on the western side of the state, has relatively better socio-economic characteristics, a lower MMR (176) and a higher uptake of the JSY program (72 %) [26]. In comparison, district two, which lies on the eastern side of the state, has generally poorer socio-economic characteristics, a higher MMR (361), and a lower JSY uptake (59 %) [26]. The districts differ not only on socio-economic characteristics but also in geographical features. The first district is relatively more urban and has flat geographical terrain, while the second district is mostly rural, with several hill ranges as well as dense forested areas.", "A community-based cross-sectional study was conducted from September 2013 to April 2015.
Study villages were selected through multi-stage sampling in the two districts. In the first district, all blocks were included. In the second district, two of the five blocks were purposively selected, one in the north and the other in the south of the district. Within the selected blocks, villages that had between 200 and 10,000 inhabitants were included in the sampling frame. All villages were stratified into two groups based on a five kilometer (km) distance from a public health facility that conducted more than 10 deliveries in a month. Probability proportionate to size was used to select the individual study villages, ensuring the number of sample units in each stratum would be allocated in proportion to their share in the total population. In total, 247 villages were selected: 101 within and 146 outside the five km radius of a facility.", "The study participants (i.e., women who had recently given birth in each village) were identified with the help of local community health workers (Accredited Social Health Activists), local crèche workers, and traditional birth attendants (Dai). The community workers were incentivized to report births to the research team within two days of delivery. As an additional measure to ensure all births were identified, research teams frequently visited the study villages throughout the recruitment period. Trained research assistants interviewed recently delivered mothers at a health facility or in their home within the first week of delivery. They elicited information on out-of-pocket expenditures (OOPE) for the current childbirth and other details including maternal socio-demographic characteristics, birth order, household wealth, location of delivery, and type of delivery. Family members provided additional information on the costs if the mother could not recall or did not know.\nDuring the recruitment period, 2779 births were reported and 2615 women were enrolled in the study.
The reasons given for not enrolling in the study were (i) the mother had migrated back to her resident village (n = 108), (ii) the home was not accessible to the research team (n = 52), (iii) maternal death (n = 8) and (iv) refusal to participate (n = 2). A small number of women (n = 8) reported that they did not know the cost of the delivery and were thus excluded from the analysis. Women who delivered in a private facility (n = 226) were removed from the analysis as the study focused on the JSY program and OOPE. These women also have significantly different characteristics; they tend to be wealthier, live in urban settings and are not the primary focus of the JSY program.", "The main outcome of interest, OOPE, was the sum of the following expenditures (i.e., gross OOPE): 1. Medicine, supplies and procedures (i.e., delivery costs, medicines, supplies, blood transfusions, diagnostic tests, and anesthesia); 2. Informal payments (i.e., expenditures reported as ‘rewards’ paid by the women/families to the staff for assisting their care in the facility). For the women who delivered at home, cash given to the dai (traditional birth attendant who conducts home deliveries) was classified as an informal payment; 3. Food/Cloth (i.e., food consumed during hospital stay or at home in relation to the delivery and cloths used for the infant); and 4. Transportation costs (i.e., all costs for the mother and her attendants associated with reaching the health facility for delivery). The OOPE were collected in Indian rupees (INR) and converted to U.S. dollars (US$) using the exchange rate at the time of the study of 60 INR to US$1.\nIndependent variables: socio-demographic characteristics included age and education as continuous variables and caste1 as a categorical variable. Birth order was categorized by number of live births up to four (or more).
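The gross-OOPE aggregation, currency conversion, and ‘net gain’ definitions in this section amount to simple arithmetic; a minimal sketch with hypothetical amounts (the function names and example values are illustrative, not study data):

```python
INR_PER_USD = 60        # exchange rate used in the study
JSY_INCENTIVE_USD = 23  # JSY cash transfer (INR 1400)

def gross_oope_usd(medicine_inr, informal_inr, food_cloth_inr, transport_inr):
    """Gross OOPE: sum of the four cost categories (reported in INR),
    converted to U.S. dollars."""
    return (medicine_inr + informal_inr + food_cloth_inr + transport_inr) / INR_PER_USD

def net_gain_usd(oope_usd):
    """Net gain = JSY incentive minus OOPE; a negative value means the
    incentive was fully consumed by delivery-related spending."""
    return JSY_INCENTIVE_USD - oope_usd

# Hypothetical beneficiary: INR 300 + 120 + 60 + 120 = INR 600 => $10 OOPE
example_oope = gross_oope_usd(300, 120, 60, 120)
example_net = net_gain_usd(example_oope)
```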
Women’s household wealth was assessed using a standard technique involving principal component analysis to construct a wealth index based on socio-economic characteristics developed for the standard of living index in the National Family Health Survey. The variables included 20 household assets, structural material of the dwelling, sanitation provisions, and land ownership [28, 29]. The household wealth index was calculated for the entire study sample, then the score was categorized into five quintiles. We used the wealth index as a proxy measurement for wealth, as income is a difficult variable to collect in low-income settings. A mother could have delivered under the JSY program at a public facility or at home. JSY beneficiaries were any women who delivered in a public health facility and were thus eligible to receive the cash incentive. Type of delivery was classified into vaginal or cesarean section. We define the ‘net gain’ as the JSY incentive ($23) minus the OOPE.", "Univariate, bivariate and multivariable statistics were used to describe and analyze the data. The dependent variable, OOPE, was not normally distributed; thus, the median and interquartile range (IQR) were used to describe the data. Wilcoxon-Mann-Whitney and Kruskal-Wallis tests were used to compare OOPE differences between groups.\nInequalities in OOPE were analyzed using the concentration curve and concentration index as demonstrated by Wagstaff et al. [30]. The concentration curve plots the cumulative percentage of OOPE (y-axis) against the cumulative percentage of the women, ranked by their wealth index from the poorest to the richest households (x-axis) [31]. We used the concentration curve to graphically display the inequalities in OOPE in this population. The concentration index (CI), which ranges from −1 to 1, allowed us to quantify the inequality in OOPE. The CI is defined graphically as twice the area between the concentration curve and the line of equality [32].
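The concentration index just described can also be computed from the fractional-rank covariance form, which is algebraically equivalent to twice the area between the concentration curve and the line of equality. A minimal sketch with illustrative data (not study values):

```python
def concentration_index(oope, wealth_score):
    """CI in [-1, 1]: positive means OOPE is concentrated among the
    better-off; negative means it falls disproportionately on the poor.
    Uses CI = 2 * cov(y, r) / mean(y), with r the fractional wealth rank."""
    n = len(oope)
    # rank women from poorest to richest; fractional rank r_i = (i + 0.5)/n
    order = sorted(range(n), key=lambda i: wealth_score[i])
    y = [oope[i] for i in order]
    r = [(i + 0.5) / n for i in range(n)]
    mu = sum(y) / n
    if mu == 0:
        return 0.0
    cov = sum(yi * ri for yi, ri in zip(y, r)) / n - mu * 0.5  # mean(r) = 0.5
    return 2 * cov / mu
```

When everyone spends the same, the curve coincides with the line of equality and the index is zero; if all spending falls on the richest (or poorest) household, the index approaches its positive (or negative) extreme of 1 − 1/n.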
A positive value indicates a progressive system, where the wealthy bear a larger proportion of OOPE compared to the poor, while a negative value indicates the opposite (a regressive system).", "A two-part model, developed as part of the Rand Health Insurance Experiment, was used for the multivariable regression to assess determinants of OOPE [33–36]. This model is commonly used when studying health expenditures to accommodate the significant number of zeros (no expense incurred) and for its distribution (i.e., right skewed with a long tail), which fits our data. The first part, a binary logistic model, was used to understand the predictors associated with any OOPE and estimated the odds of a woman incurring any OOPE. We present the coefficients as adjusted odds ratios (AOR) with 95 % confidence intervals.\nThe second part of the model, a generalized linear model (GLM) with a negative binomial distribution, analyzed the determinants of OOPE among women who reported any OOPE. This model estimated the OOPE for women who reported incurring an expense. We adjusted for observable characteristics that may influence delivery-related OOPE based on previous literature and context-specific knowledge. We present the coefficients as incidence rate ratios (IRR) with 95 % confidence intervals.\nMulticollinearity was assessed by calculating the mean variance inflation factor (VIF = 1.53), which did not show evidence of collinearity. No interaction existed between the variables in the model. In all models, p-values <0.05 were considered significant.", "The study was described to all study participants. Informed consent was obtained from the participants before they were enrolled in the study and responded to the questionnaire. Anonymity and confidentiality were ensured to all women.
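The two parts of the model described above combine into an unconditional expectation: part one gives the probability of any OOPE, part two the expected amount among spenders (log link, so exp(coefficient) is the IRR), and their product is E[OOPE | x]. A sketch with hypothetical coefficients (not the study's estimates):

```python
import math

# Illustrative fitted coefficients for a single covariate x (hypothetical)
B0, B1 = 1.2, -0.4  # part 1 (logit): intercept and slope; AOR = exp(B1)
G0, G1 = 2.0, 0.3   # part 2 (log-link GLM): intercept and slope; IRR = exp(G1)

def p_any_oope(x):
    """Part 1: probability a woman incurs any OOPE (logistic model)."""
    return 1.0 / (1.0 + math.exp(-(B0 + B1 * x)))

def mean_oope_if_any(x):
    """Part 2: expected OOPE among women who spent anything (log link)."""
    return math.exp(G0 + G1 * x)

def expected_oope(x):
    """Unconditional expectation: Pr(OOPE > 0) * E[OOPE | OOPE > 0]."""
    return p_any_oope(x) * mean_oope_if_any(x)
```

The log link makes part-two coefficients multiplicative: a one-unit increase in x scales the conditional mean by exp(G1), which is why they are reported as rate ratios.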
Ethical approval for the study was granted by the Ethics Committee from the authors’ institution.", " Descriptive sample characteristics for the sample As depicted in Table 1, the majority of women (n = 1995, 84 %) in our study delivered in a JSY public health facility. The remaining 16 % (n = 386) delivered at home. The median age of the study sample was 23 years and 29 % (n = 692) had no formal education. More than a third (n = 932) of the women were primiparous. The main reason given for home deliveries was that the baby came unexpectedly and quickly (n = 312, 52 %). Other reasons included: planned to have a home delivery (n = 97, 16 %), transportation-related issues (n = 95, 16 %), no one to accompany them to the hospital (n = 58, 10 %) and other (n = 37, 6 %). Only one woman replied that she could not afford to deliver in a health facility (results not shown).\nTable 1. Background characteristics, median and inter-quartile range (IQR) of gross OOPE (in U.S. dollars) for women who delivered in a JSY facility or at home.
Column percentages (%)All womenJSY beneficiaryHome deliveryBackground characteristicsn (%)Median (IQR)n (%)Median (IQR)n (%)Median (IQR)Total2381 (100)8 (3–17)1995 (84)8 (3–18)386 (16)6 (2–13)Age in years (median, IQR)23 (21–25)–22 (20–25)–25 (22–27)–Districts District 11405 (59)14 (7–23)*1251 (63)14 (7–23)*154 (40)12 (5–23)* District 2976 (41)3 (1–7)744 (37)3 (1–7)232 (60)3 (1–7)Education in years (median, IQR)5 (0–8)–5 (0–8)–4 (0–7)–Household wealth 1st quintile (Poorest)519 (22)3 (1–7)**361 (18)3 (1–6)**158 (40)3 (0–7)** 2nd quintile505 (21)7 (2–13)414 (21)6 (2–15)91 (24)7 (2–12) 3rd quintile499 (21)11 (5–18)434 (22)11 (5–18)65 (17)10 (3–18) 4th quintile476 (20)14 (7–23)429 (21)14 (7–23)47 (12)8 (3–19) 5th quintile (Least Poor)382 (16)13 (7–25)357 (18)13 (7–24)25 (7)17 (8–29)Caste Scheduled Caste (SC)599 (25)10 (4–19)**502 (25)11 (5–20)**97 (25)8 (2–17)** Other backward caste (OBC)904 (38)12 (5–22)807 (40)12 (5–22)97 (25)8 (4–17) Scheduled Tribe (ST)598 (25)3 (1–6)430 (22)3 (1–6)168 (44)3 (1–7) General280 (12)13 (4–21)256 (13)12 (4–22)24 (6)16 (5–20)Birth Order 1st child932 (39)8 (3–19)**858 (43)9 (3–20)**74 (19)7 (3–12)** 2nd child838 (35)8 (3–17)688 (34)9 (3–17)150 (39)7 (2–15) 3rd child382 (16)5 (2–14)292 (15)7 (2–15)90 (23)4 (1–12) 4th or more child229 (10)7 (2–15)157 (8)8 (2–17)72 (19)4 (0–9)Type of Delivery Vaginal Delivery2303 (97)8 (3–17)*\n1917 (96)8 (3–17)*386 (100)6 (2–13) Cesarean Section Delivery78 (3)50 (21–93)78 (4)50 (21–93)0 (0)-Cost Categories Medicine, supplies and procedures224 (10)3 (1–8)181 (10)3 (1–7)43 (11)7 (3–8) Informal payments1445 (65)5 (2–8)1187 (65)5 (2–9)258 (68)5 (3–8) Food/baby items1695 (77)5 (3–8)1489 (81)5 (3–8)206 (55)3 (2–8) Transportation328 (17)3 (1–8)328 (17)3 (1–8)––*Wilcoxon-Mann–Whitney test, p-value ≤0.05; **Kruskal Wallis test, p-value ≤0.05. Column comparisons made\nBackground characteristics, median and inter-quartile range (IQR) of gross OOPE (in U.S. dollars) for women who delivered in a JSY facility or at home. 
As depicted in Table 1, the majority of women in our study (n = 1995, 84 %) delivered in a JSY public health facility; the remaining 16 % (n = 386) delivered at home. The median age of the sample was 23 years, and 29 % (n = 692) had no formal education. More than a third (n = 932) of the women were primiparous. The main reason given for home delivery was that the baby came unexpectedly and quickly (n = 312, 52 %). Other reasons included a planned home delivery (n = 97, 16 %), transportation-related issues (n = 95, 16 %), having no one to accompany them to the hospital (n = 58, 10 %), and other (n = 37, 6 %). Only one woman replied that she could not afford to deliver in a health facility (results not shown).
Table 1. Background characteristics, median and inter-quartile range (IQR) of gross OOPE (in U.S. dollars) for women who delivered in a JSY facility or at home. Column percentages (%); cells show n (%), median (IQR).

Background characteristics | All women | JSY beneficiary | Home delivery
Total | 2381 (100), 8 (3–17) | 1995 (84), 8 (3–18) | 386 (16), 6 (2–13)
Age in years, median (IQR) | 23 (21–25) | 22 (20–25) | 25 (22–27)
Districts
 District 1 | 1405 (59), 14 (7–23)* | 1251 (63), 14 (7–23)* | 154 (40), 12 (5–23)*
 District 2 | 976 (41), 3 (1–7) | 744 (37), 3 (1–7) | 232 (60), 3 (1–7)
Education in years, median (IQR) | 5 (0–8) | 5 (0–8) | 4 (0–7)
Household wealth
 1st quintile (poorest) | 519 (22), 3 (1–7)** | 361 (18), 3 (1–6)** | 158 (40), 3 (0–7)**
 2nd quintile | 505 (21), 7 (2–13) | 414 (21), 6 (2–15) | 91 (24), 7 (2–12)
 3rd quintile | 499 (21), 11 (5–18) | 434 (22), 11 (5–18) | 65 (17), 10 (3–18)
 4th quintile | 476 (20), 14 (7–23) | 429 (21), 14 (7–23) | 47 (12), 8 (3–19)
 5th quintile (least poor) | 382 (16), 13 (7–25) | 357 (18), 13 (7–24) | 25 (7), 17 (8–29)
Caste
 Scheduled Caste (SC) | 599 (25), 10 (4–19)** | 502 (25), 11 (5–20)** | 97 (25), 8 (2–17)**
 Other Backward Caste (OBC) | 904 (38), 12 (5–22) | 807 (40), 12 (5–22) | 97 (25), 8 (4–17)
 Scheduled Tribe (ST) | 598 (25), 3 (1–6) | 430 (22), 3 (1–6) | 168 (44), 3 (1–7)
 General | 280 (12), 13 (4–21) | 256 (13), 12 (4–22) | 24 (6), 16 (5–20)
Birth order
 1st child | 932 (39), 8 (3–19)** | 858 (43), 9 (3–20)** | 74 (19), 7 (3–12)**
 2nd child | 838 (35), 8 (3–17) | 688 (34), 9 (3–17) | 150 (39), 7 (2–15)
 3rd child | 382 (16), 5 (2–14) | 292 (15), 7 (2–15) | 90 (23), 4 (1–12)
 4th or more child | 229 (10), 7 (2–15) | 157 (8), 8 (2–17) | 72 (19), 4 (0–9)
Type of delivery
 Vaginal delivery | 2303 (97), 8 (3–17)* | 1917 (96), 8 (3–17)* | 386 (100), 6 (2–13)
 Caesarean section delivery | 78 (3), 50 (21–93) | 78 (4), 50 (21–93) | 0 (0), –
Cost categories
 Medicine, supplies and procedures | 224 (10), 3 (1–8) | 181 (10), 3 (1–7) | 43 (11), 7 (3–8)
 Informal payments | 1445 (65), 5 (2–8) | 1187 (65), 5 (2–9) | 258 (68), 5 (3–8)
 Food/baby items | 1695 (77), 5 (3–8) | 1489 (81), 5 (3–8) | 206 (55), 3 (2–8)
 Transportation | 328 (17), 3 (1–8) | 328 (17), 3 (1–8) | –

*Wilcoxon–Mann–Whitney test, p-value ≤ 0.05; **Kruskal–Wallis test, p-value ≤ 0.05. Column comparisons made.
Who had any OOPE?

Ninety-one percent (n = 2172) of the sample reported having OOPE: 92 % of JSY beneficiaries and 85 % of women who delivered at home. In the descriptive analysis (Table 1), women who delivered under the JSY program had a significantly higher median OOPE ($8, IQR 3–18) than women who delivered at home ($6, IQR 2–13). Median OOPE also differed significantly between district 1 ($14, IQR 7–23) and district 2 ($3, IQR 1–7). Median OOPE increased with household wealth both for women who delivered in a JSY facility and for those who delivered at home, and this pattern was similar in both districts (data not shown). Women from the Scheduled Tribe caste paid the least OOPE (median $3, IQR 1–6) (Table 1). Women who delivered by caesarean section paid more than six times as much (median $50, IQR 21–93) as women who delivered vaginally (median $8, IQR 3–17).

Does the JSY cash incentive defray OOPE?
Among the women who delivered in a JSY public facility, only a quarter (n = 504) received the cash incentive upon discharge; 68 % (n = 1353) were told to come back to receive the money. Assuming all JSY beneficiaries eventually receive the cash incentive, they would have a median net gain (i.e., JSY incentive minus OOPE) of $11. The net gain was larger in district 2 ($19) than in district 1 ($8) (data not shown). As shown in Fig. 1, women from the poorest wealth quintile had twice the net gain ($20) of the wealthiest quintile ($10). Only 4 % (21/519) of the poorest quintile incurred OOPE greater than the value of the JSY cash benefit.

Fig. 1 Gross OOPE and net gain (the $23 incentive minus gross OOPE) for women who delivered in a JSY facility (n = 1995), U.S. dollars. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures
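The net-gain arithmetic can be sketched as follows. This is purely illustrative: the $23 incentive is the figure cited in the study, but the OOPE amounts below are hypothetical, not the study data.

```python
# Illustrative sketch (hypothetical OOPE values): net gain per woman is the
# JSY incentive minus her gross out-of-pocket expenditure.
from statistics import median

JSY_INCENTIVE = 23  # USD, the cash benefit cited in the study


def net_gain(oope_amounts, incentive=JSY_INCENTIVE):
    """Return per-woman net gains (incentive minus gross OOPE)."""
    return [incentive - oope for oope in oope_amounts]


# Hypothetical OOPE values for a handful of deliveries; a large expense
# (e.g., a caesarean section) can exceed the benefit, giving a negative gain.
oope = [3, 8, 12, 17, 50]
gains = net_gain(oope)
print(gains)          # [20, 15, 11, 6, -27]
print(median(gains))  # 11
```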
Breakdown of OOPE among JSY and home deliveries

Among the JSY and home deliveries, 92 % (n = 2213) provided disaggregated cost information. Informal payments (64 %) and food/baby items (77 %) were the two most common sources of OOPE (Table 1). The proportion of women incurring OOPE for food and cloth items was higher among JSY beneficiaries (81 %) than among home mothers (55 %). Between the two groups, the median cost of food and baby items was significantly higher for JSY beneficiaries ($5) than for home deliveries ($3). No other significant differences were found.

OOPE also differed by district. The breakdown of costs by category was similar for JSY beneficiaries and home deliveries, with the exception of the proportion of informal payments in district 2 (Fig. 2). Informal payments constituted only 5 % of total OOPE for JSY beneficiaries in district 2, compared to 43 % in district 1. The proportion did not differ between districts among home mothers. Among JSY beneficiaries, a higher proportion of wealthy women incurred OOPE, and they had higher median OOPE, compared to the poorest quintile.

Fig. 2 Breakdown of out-of-pocket expenditures by cost categories for JSY beneficiaries and women who delivered at home, by district, n = 2213*. Legend: JSY: Janani Suraksha Yojana; *This graph includes only women who were able to provide disaggregated costs (93 %)
Inequalities in OOPE among JSY and home deliveries

Figure 3 displays the concentration curves for OOPE among JSY beneficiaries and home deliveries.
Since both curves lie below the line of equality, OOPE for both JSY beneficiaries and home mothers was progressive, indicating that poorer households make proportionally smaller OOP payments during childbirth than wealthier households. OOPE for JSY beneficiaries was less progressive (CI = 0.189) than for women who delivered at home (CI = 0.293); however, this difference was not significant, nor was it significant when each district was analyzed separately.

Fig. 3 Concentration curves for JSY beneficiaries and women who delivered at home. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures

Impact of JSY on OOPE

When adjusting for confounders, women who delivered in a JSY public facility had 1.58 times higher odds (95 % CI: 1.11–2.25) of incurring any OOPE than women who delivered at home (Table 2, model 1).
Women from district 1 had twice the odds (95 % CI: 1.30–3.18) of having any OOPE compared to district 2. Wealth, caste and birth order were not significant predictors of incurring any OOPE.

Table 2. Two-part model of OOPE ($) among women who delivered in a JSY facility (n = 1995) and at home (n = 386). Part 1 of the model: adjusted odds ratio (AOR), n = 2381; part 2: incidence rate ratio (IRR), n = 2172; 95 % confidence intervals in parentheses.

Background characteristics | Part 1: AOR (95 % CI) | Part 2: IRR (95 % CI)
Place of delivery
 Home | 1.00 | 1.00
 JSY (public facility) | 1.58 (1.11–2.25)* | 0.84 (0.73–0.96)*
Districts
 District #2 | 1.00 | 1.00
 District #1 | 2.03 (1.30–3.18)* | 2.36 (2.06–2.69)**
Household wealth
 1st quintile (poorest) | 1.00 | 1.00
 2nd quintile | 1.49 (0.98–2.27) | 1.19 (1.02–1.38)*
 3rd quintile | 1.53 (0.91–2.56) | 1.33 (1.13–1.56)**
 4th quintile | 1.14 (0.65–1.99) | 1.31 (1.11–1.55)**
 5th quintile (least poor) | 1.21 (0.64–2.31) | 1.34 (1.02–1.49)*
Caste
 Scheduled Caste (SC) | 1.00 | 1.00
 Other Backward Caste (OBC) | 1.15 (0.76–1.73) | 1.14 (1.01–1.28)*
 Scheduled Tribe (ST) | 0.97 (0.62–1.52) | 0.70 (0.60–0.81)**
 General | 1.19 (0.63–2.22) | 1.07 (0.91–1.26)
Birth order
 1st child | 1.00 | 1.00
 2nd child | 1.21 (0.82–1.79) | 0.82 (0.73–0.92)*
 3rd child | 1.03 (0.63–1.68) | 0.71 (0.60–0.83)**
 4th or more child | 0.64 (0.35–1.17) | 0.69 (0.56–0.86)*

Adjusted for age, education and delivery type; JSY: Janani Suraksha Yojana; AOR: adjusted odds ratio; IRR: incidence rate ratio; *p-value ≤ 0.05; **p-value ≤ 0.001.

However, in model 2, among the women who incurred any OOPE (Table 2), those who delivered under the JSY program paid 16 % less (95 % CI: 0.73–0.96) than women who delivered at home. Women from district 1 paid more than twice as much (95 % CI: 2.06–2.69) as those in district 2. Increased wealth was also significantly associated with higher OOPE.
Conversely, being from a Scheduled Tribe and higher birth order (women with more children) were associated with lower OOPE.
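The logic of the two-part model can be illustrated with a short sketch. The payer proportions (85 % and 92 %) and the part-2 IRR of 0.84 are taken from the results above; the $10 conditional mean for home deliveries is a hypothetical value used only to show how the two parts combine.

```python
# A two-part model separates (1) whether any OOPE occurs from (2) how much
# is paid by those with positive OOPE. Overall expected OOPE is the product.

def expected_oope(p_any, mean_if_any):
    """E[OOPE] = P(OOPE > 0) * E[OOPE | OOPE > 0]."""
    return p_any * mean_if_any


# Hypothetical conditional mean of $10 for home deliveries; the IRR of 0.84
# from part 2 scales down the conditional amount for JSY beneficiaries.
home = expected_oope(p_any=0.85, mean_if_any=10.0)
jsy = expected_oope(p_any=0.92, mean_if_any=10.0 * 0.84)
print(round(home, 2), round(jsy, 2))  # more JSY women pay, but they pay less
```

Under these assumptions, the JSY group has a higher chance of paying anything yet a lower expected amount overall, mirroring the seemingly opposite directions of the AOR and the IRR.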
Discussion

This is the first study to our knowledge that models predictors of OOPE among JSY beneficiaries and women who deliver at home. We found that the large majority of women in our study reported some kind of OOPE. Among JSY beneficiaries, the poorest women had the largest net gain when the cash transfer was taken into account. Further, we found OOP payments under the program were progressive, with the most disadvantaged wealth quintiles making proportionally less OOPE than wealthier women. Being a JSY beneficiary was associated with increased odds of incurring OOPE, but also with a 16 % decrease in the amount of OOPE incurred compared to women who delivered at home. Finally, wealth was not a predictor of having any OOPE, but it was an indicator of how much a woman would pay.

In our study, the descriptive and multivariable analyses demonstrated that OOPE was smallest in the most vulnerable groups (Scheduled Tribes, the poorest wealth quintiles and multiparous women). This finding is consistent with another Indian study, by Mohanty et al. [14], of the District Level Household and Facility Survey (DLHS-3), a national-level survey conducted in 2007–2008 covering births in the preceding five years [37]. They also found the amount of OOPE increased with wealth and decreased with birth order. Although there is no universal consensus on why a wealth gradient in OOPE exists, many studies show a positive relationship between income and health expenditures [38]. In other words, the wealthy pay more because they can afford to.

In the descriptive analysis, we found OOPE was higher for women who delivered under the JSY program. However, when adjusting for possible confounders, the model showed JSY beneficiaries actually had lower OOPE than women who delivered at home.
Another Indian study, also using data from the DLHS-3, found deliveries conducted under the JSY program at a public facility had higher OOPE than home deliveries [15]. This difference was not surprising, as only a descriptive analysis was performed and possible confounders were not taken into consideration.

Another probable explanation for these differences is that Mohanty et al. reported the amount of OOPE has declined over time for women who delivered in public facilities [14]. Further, in 2011 the government of India launched a program complementary to JSY, Janani Shishu Suraksha Karyakram (JSSK), to eliminate OOPE for pregnant women and ensure delivery care free of cost. Medicines, medical supplies, procedures (surgery, diagnostics and x-rays), food, user fees and referral transport were all supposed to be covered under this program [7].

We found indirect costs (informal payments to staff, food and items purchased for the infant) comprised the largest proportion of maternal expenses. Direct costs (medicines, supplies and delivery services) constituted the smallest share of spending among both JSY beneficiaries (4 %) and women who delivered at home (9 %). Conversely, Skordis-Worral et al. [23] found that in urban Indian slums direct medical costs were responsible for the majority of OOPE. Context is extremely important when interpreting results of studies performed in India, given the heterogeneity between different parts of the country and the stark differences between rural and urban areas. Our study was performed in a rural setting; the poor in urban and rural settings experience different kinds of access issues, which may explain some of the difference in results.

Free delivery under the JSY program?
High OOPE is a well-known constraint on the utilization of delivery services where ready access to cash is lacking for many rural households, especially the poor, whose ability to pay can be intermittent or rely mainly on seasonal production such as farming [39]. The vast majority of women in our study incurred some amount of OOPE. While these women paid less as JSY beneficiaries than women who delivered at home, they did not have a free delivery as intended by the public health care system. As mentioned above, the driving force behind OOPE in this study was not direct medical costs but informal payments to staff. A qualitative study from the same area found women knew they were not supposed to make payments to the staff but did so anyway, out of fear of not receiving appropriate care in a timely manner [40].

Although from the perspective of the Indian government the primary purpose of the cash transfer is to incentivize women to have a free delivery in a public health facility, not to cover OOPE, the same qualitative study found many women do in fact intend to use it to compensate for the OOPE incurred [40]. Only 25 % of the women who delivered in a JSY public facility in our study received the cash incentive upon discharge. While a previous study in the same area found 85 % of women received the benefit within two weeks of delivery [41], if women expect the cash benefit to cover their OOPE, it needs to be received upon discharge. These bureaucratic issues around receiving the JSY cash incentive, together with informal payments to public health facility staff, undermine the program and could cause future uptake problems if not addressed.
Does JSY reduce financial access barriers?

Population-based surveys have shown that a significant proportion of women report cost as the major barrier to an institutional delivery [2, 37], and other research supports this [10, 23, 42]. However, our model showed JSY beneficiaries had lower OOPE than women who delivered at home. In addition, it is important to note that the JSY beneficiaries in our study received a net gain after receiving the cash incentive ($23), especially among the poorest quintiles.
Although this is to be expected, since the poorest quintiles paid the least, it implies the program is reaching and assisting the most vulnerable groups. In our setting the incentive was more than adequate to cover the OOPE, while an earlier study from Orissa found the magnitude of the cash incentive was not large enough to compensate for the entire OOPE amount [43]. Another Indian study reported the JSY program provided women with some financial protection, though it was limited and did not cover the entire sum [14].

Further, the women in our study who delivered at home did not cite financial barriers as the justification for a home delivery. A recent qualitative study from the same area also found that cost was not a deterrent for most of the study participants who delivered at home [40]. This implies other access barriers persist, some of which may be remedied, but not necessarily by a cash transfer.

JSY role in reducing OOPE inequalities and inequities: wealth quintiles and OOPE in the JSY

Our concentration curves showed that the OOPE for women who delivered under the JSY program was pro-poor; poorer households made proportionally smaller OOP payments during childbirth compared to wealthier households (i.e., a progressive system). From a strict equality perspective, the distribution was not equal. However, some would argue the wealthier households have the means to pay for services while the poorer ones do not [44]. The OOPE may not be equal; nevertheless, the program is making OOPE more equitable and fair.

In general, health care financing systems that are progressive tend to have a redistributive nature. Taxes, where wealthier households pay more than poorer ones, are one example. Social service provision by the government (e.g., offering free delivery care) is another [44]. So while vulnerable groups often have more healthcare needs, despite contributing less they are able to obtain the same service as their wealthier counterparts. Yet it is unknown whether the poorest women are in fact acquiring the same level of service. In our study, when comparing the prevalence of different costs and the median expenditure between the poorest and wealthiest quintiles of JSY mothers, informal payments appear more prevalent among the wealthiest quintile, and higher amounts are paid.
This may simply reflect that wealthier women have more disposable income to spend, or it may be that wealthier women informally pay for better services even within the same facilities. A program like JSY has the power to promote equality and equity in access to delivery care, and while several argue the overall quality of care administered in JSY public facilities is low [45–49], it is still important to ensure all women receive the best care possible regardless of whether they have the means to pay for it.

We have presented differences in OOPE between the two study districts throughout the paper. In district 1, a woman is more likely to incur OOPE and to pay higher amounts. Since the OOPE amount was higher in district 1, JSY beneficiaries’ net gain was also smaller. These findings were not surprising considering the heterogeneity between the districts. The women are much poorer in district 2 than in district 1, which would affect the amount of money they spent. This also has an implication for the relative worth of the incentive to women in their respective districts. Another reason could be attributed to the sampling methodology: while district 1 included all blocks, district 2 only included two of the five.
Though there have been some studies assessing the OOPE experienced during childbirth in India, few have looked at this from an equity perspective under the JSY program. Previous studies used secondary data that did not allow for cost disaggregation, assessed a period when JSY coverage and overall institutional delivery were low, or were small in size [13–17, 23]. Furthermore, in some studies the costs were collected for deliveries that occurred in the previous five years, and thus probably suffered from substantial recall bias. Considering how quickly the JSY program has increased institutional birth proportions in such a short period of time, it is important to have recent data that reflect the current situation.

Reports have found that in many Asian countries families borrow money to pay for maternity-related costs and are then forced to forego essential items like food and education to repay the loans. These costs have a ripple effect on the family for years to come [50]. We did not enquire about the financing sources used to pay for the hospital delivery costs. This limits our ability to understand the role of JSY in providing financial protection and reducing subsequent impoverishment. So while this study has shown JSY beneficiaries have reduced OOPE compared to women who delivered at home, further research is needed to understand the magnitude of the reduction in relation to the family’s overall poverty status.

Many studies highlight the limitations (e.g., recall bias and over/underreporting) associated with collecting health expenditure data [51–53]. Cost data were collected shortly after delivery and triangulated with other family members to minimize recall bias. A disaggregated cost collection design was used to improve accuracy and avoid underreporting of expenditures.
While our study design minimized recall bias, as the study participants were interviewed within a week of delivery, we acknowledge the possibility of underreporting among women who were interviewed in a health facility because of staff presence.

The sex of the infant has been reported in the literature as a determinant of OOPE [14]. Data were not collected on the sex of the infant, therefore we could not adjust for it in the model. It is reasonable to assume that in this setting the infant’s sex would influence OOPE, as it is well documented that families make higher informal payments when a male child is born [14]. This could affect the precision of the analysis, and the significance of other explanatory variables could be overestimated. However, we have no reason to think that the sex of the infant is differently distributed between the two groups.

There could be an argument for the payments made to the dai to be considered formal payments and classified as payments for ‘medicines, supplies and procedures’. However, we chose to classify these payments as informal because (a) the dai is not formally trained, (b) she is not part of the formal health system, and (c) remuneration to the dai is negotiable, i.e., there are no fixed stipulated fees; she is often remunerated partly in cash and partly in kind, based on the ability of the mother’s family to pay and their relationship with the dai.

Sampling: the districts were sampled in the exact same way with the exception of the number of blocks chosen; the process of how women were selected for the study was the same in both districts. As the districts in the state and in the rest of the country are very heterogeneous, our results should be generalized with caution.

Conclusions

OOPE is still prevalent among women who deliver under the JSY program as well as in home deliveries. In the JSY, OOPE varies by income quintile; wealthier women pay more.
There is a net gain for all quintiles when the incentive is taken into consideration; the highest gains occur for the poorest women. OOPE was largely due to indirect costs such as informal payments, food and cloth items for the baby, and not direct medical payments. Further, we found OOP payments under the program were progressive, with the most disadvantaged wealth quintile making proportionally less OOPE compared to wealthier women. Being a JSY beneficiary led to increased odds of incurring OOPE, but at the same time a decrease in the amount of OOPE incurred compared to women who delivered at home. While wealth was not a predictor of having OOPE, it was an indicator of how much women would pay. The program seems to be effective in providing financial protection for the most vulnerable groups (i.e., women from poorer households and disadvantaged castes).

Ethical approval for the study was granted by the Ethics Committee of R.D. Gardi Medical College (Ujjain, India) and Karolinska Institutet (Stockholm, Sweden), reference # 2010/1671–31/5.
Keywords: Out-of-pocket expenditure, Conditional cash transfer, Maternal health, Facility based delivery, India
Background

India’s health care services are largely financed by out-of-pocket payments (71 %) at the point of care [1]. High out-of-pocket expenditures (OOPE) make health services, including care for childbirth, difficult to access for a large proportion of the population, especially the poor. In 2005, institutional delivery was nearly 6.5 times higher among Indian women belonging to the highest wealth quintile (84 %) than among women in the poorest quintile (13 %) [2]. Evidence suggests maternal mortality can be reduced when deliveries are conducted by skilled birth attendants and women have access to emergency obstetric care, given the unpredictable nature of life-threatening complications that can occur at the time of childbirth [3]. Poor women, who have the least access to such care, bear a disproportionate burden of maternal mortality. Therefore, governments in many low income settings with high burdens of maternal mortality have initiated special programs to draw women into facilities to give birth (instead of at home), where such care can be provided. Given the high number of maternal deaths in India (one-fifth of the global count) and a low institutional delivery proportion (39 % in 2005), the Indian government launched a conditional cash transfer program the same year to promote institutional delivery among poor women [2, 4]. The program, Janani Suraksha Yojana (JSY, or safe motherhood program), provides women $23 (INR 1400) upon discharge after giving birth in a public health facility. JSY, the largest cash transfer program in the world, is funded by the central government of India (GOI), while implementation is managed by the states. Eligibility, incentive amount and uptake differ across the states [5, 6]. In the ten years since the program began, institutional delivery rates have increased to 74 % and more than 106 million women have benefited, with the GOI spending 16.4 billion USD on the program [7–9].
Previous studies have found a considerable proportion of Indian women cite financial access barriers as one of the main reasons for not having an institutional delivery [10–12]. The JSY program was expected to draw these mothers into public facilities with the assumption that they would receive a free delivery in addition to the cash incentive of $23. While all services in the Indian public sector are supposed to be free, in reality OOPE during childbirth is common in these facilities [13–17]. Although there are a number of reports on the JSY [18–22], none focus on OOPE in the context of high JSY program uptake [13–17, 23]. The magnitude of OOPE incurred among JSY beneficiaries is unknown. In addition, we do not know whether the cash incentive offsets OOPE in the same facility, and if so, to what extent. It also remains unclear whether the level of OOPE paid by JSY beneficiaries of different socio-economic status is similar, and whether women who participate in the JSY program actually have higher OOPE than women who give birth at home. We studied the OOPE among JSY beneficiaries and compared it to the OOPE incurred by women who delivered at home in two districts of Madhya Pradesh, India. We also described the extent to which the JSY cash transfer defrayed the OOPE for JSY beneficiaries. Among both groups of women, we studied predictors of OOPE and how OOPE varied with wealth status.

Methods

Study area

The study took place in two of the 51 administrative districts that make up Madhya Pradesh (MP), a state in central India. With a population of 71 million, it is one of India’s largest states [24]. MP on the whole has poorer socio-economic and health indicators relative to country averages; 24 % of the population lives below the poverty line, and the state has a maternal mortality ratio (MMR) of 190 per 100,000 live births [25].
In MP, all women are eligible to be beneficiaries of the JSY program, regardless of their poverty status, if they deliver in a public health facility. MP has had one of the highest uptakes of JSY in the country [26]. The two study districts were purposively selected to represent different socio-economic and geographic areas of MP. They were also study areas of a larger project (MATIND) [27] to study the JSY program, in which this study was nested. Each study district (population 1.5–2 million) is divided into 5 administrative blocks (100,000–200,000 people per block); each block comprises villages with approximately 1000–10,000 people per village. District one, situated on the western side of the state, has relatively better socio-economic characteristics, a lower MMR (176) and a higher uptake of the JSY program (72 %) [26]. In comparison, district two, which lies on the eastern side of the state, has generally poorer socio-economic characteristics, a higher MMR (361), and a lower JSY uptake (59 %) [26]. The districts differ not only on socio-economic characteristics but also in geographical features. The first district is relatively more urban with a flat terrain, while the second is mostly rural, with several hill ranges as well as dense forested areas.

Study design and sampling

A community-based cross-sectional study was conducted from September 2013 to April 2015. Study villages were selected through multi-stage sampling in the two districts. In the first district, all blocks were included. In the second district, two of the five blocks were purposively selected, one in the north and the other in the south of the district. Within the selected blocks, villages with between 200 and 10,000 inhabitants were included in the sampling frame. All villages were stratified into two groups based on a five kilometer (km) distance from a public health facility that conducted more than 10 deliveries in a month. Probability proportionate to size was used to select the individual study villages, ensuring the number of sample units in each stratum would be allocated in proportion to their share of the total population.
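The stratified, probability-proportionate-to-size selection described above can be sketched as follows. This is a minimal illustration, not the study's actual sampling frame: the village names, populations, and sample size are made up, and selection is done with replacement for simplicity.

```python
import random

def pps_sample(villages, n, seed=1):
    """Draw n villages with probability proportional to population size
    (with replacement, as a simple sketch)."""
    rng = random.Random(seed)
    if n == 0:
        return []
    return rng.choices(villages, weights=[v["pop"] for v in villages], k=n)

# Illustrative frame, stratified by distance to a public facility
# conducting >10 deliveries/month (within vs. beyond 5 km).
frame = [
    {"name": "V1", "pop": 4000, "within_5km": True},
    {"name": "V2", "pop": 4500, "within_5km": False},
    {"name": "V3", "pop": 3500, "within_5km": True},
    {"name": "V4", "pop": 9000, "within_5km": False},
]
within = [v for v in frame if v["within_5km"]]
outside = [v for v in frame if not v["within_5km"]]

# Allocate the sample to each stratum in proportion to its population share.
total_pop = sum(v["pop"] for v in frame)
n_total = 4
n_within = round(n_total * sum(v["pop"] for v in within) / total_pop)
sample = pps_sample(within, n_within) + pps_sample(outside, n_total - n_within)
```

Here the within-5-km stratum holds 7500 of 21,000 people, so it is allocated one of the four draws and the remaining three go to the outside stratum.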
In total, 247 villages were selected, 101 and 146 villages within and outside the five km radius of a facility, respectively.

Data collection

The study participants (i.e., women who had just given birth in each village) were identified with the help of local community health workers (Accredited Social Health Activists), local crèche workers, and traditional birth attendants (dais). The community workers were incentivized to report births to the research team within two days of delivery. As an additional measure to ensure all births were identified, research teams frequently visited the study villages throughout the recruitment period. Trained research assistants interviewed recently delivered mothers at a health facility or in their home within the first week of delivery.
They elicited information on out-of-pocket expenditures (OOPE) for the current childbirth and other details, including maternal socio-demographic characteristics, birth order, household wealth, location of delivery, and type of delivery. Family members provided additional information on the costs if the mother could not recall or did not know. During the recruitment period, 2779 births were reported and 2615 women were enrolled in the study. The reasons for not enrolling were (i) the mother had migrated back to her resident village (n = 108), (ii) the home was not accessible to the research team (n = 52), (iii) maternal death (n = 8), and (iv) two women refused to participate. A small number of women (n = 8) reported that they did not know the cost of the delivery and were thus excluded from the analysis. Women who delivered in a private facility (n = 226) were removed from the analysis, as the study focused on the JSY program and OOPE. These women also have significantly different characteristics; they tend to be wealthier, live in urban settings, and are not the primary focus of the JSY program.

Definitions and variables used

The main outcome of interest, OOPE, was the sum of the following expenditures (i.e., gross OOPE):

1. Medicine, supplies and procedures (i.e., delivery costs, medicines, supplies, blood transfusions, diagnostic tests, and anesthesia);
2. Informal payments (i.e., expenditures reported as ‘rewards’ paid by the women/families to the staff for assisting their care in the facility; for women who delivered at home, cash given to the dai (traditional birth attendant who conducts home deliveries) was classified as an informal payment);
3. Food/cloth (i.e., food consumed during the hospital stay or at home in relation to the delivery, and cloths used for the infant); and
4. Transportation costs (i.e., all costs for the mother and her attendants associated with reaching the health facility for delivery).
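Gross OOPE is simply the sum of these four components. A minimal sketch of the aggregation, using the study's conversion rate (60 INR per US$1) and the JSY incentive (INR 1400, about $23); the cost amounts themselves are illustrative, not study data:

```python
# Illustrative cost record for one JSY delivery (amounts in INR).
costs_inr = {
    "medicine_supplies_procedures": 150,
    "informal_payments": 400,   # 'rewards' paid to facility staff
    "food_cloth": 250,
    "transport": 100,
}

INR_PER_USD = 60          # exchange rate used in the study
JSY_INCENTIVE_INR = 1400  # the $23 cash incentive

gross_oope_inr = sum(costs_inr.values())
gross_oope_usd = gross_oope_inr / INR_PER_USD
# 'Net gain' as defined in the study: incentive minus OOPE;
# positive means the incentive more than covered the OOPE.
net_gain_inr = JSY_INCENTIVE_INR - gross_oope_inr
```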
The OOPE were collected in Indian rupees (INR) and converted to U.S. dollars (US$) using the exchange rate at the time of the study of 60 INR to US$1. Independent variables: Socio-demographic characteristics included age and education as continuous variables and caste1 as a categorical variable. Birth order was categorized by number of live births up to four (or more). Women’s household wealth was assessed by using a standard technique involving principal component analysis to construct a wealth index based on socio-economic characteristics developed for the standard of living index in the National Family Health Survey. The variables included 20 household assets, structural material of dwelling, sanitation provisions, and land ownership [28, 29]. The household wealth index was calculated for the entire study sample, then the score was categorized into five quintiles. We used the wealth index as a proxy measurement for wealth as income is a difficult variable to collect in low-income settings. The mother could have delivered under the JSY program at public facility or at home. JSY beneficiaries were any woman who delivered in a public health facility and thus eligible to receive the cash incentive. Type of delivery was classified into vaginal or cesarean section. We define the ‘net gain’ as the JSY incentive ($23) minus the OOPE. The main outcome of interest, OOPE, was a sum of the following expenditures (i.e., gross OOPE): 1. Medicine, supplies and procedures (i.e., delivery costs, medicines, supplies, blood transfusions, diagnostic tests, and anesthesia); 2. Informal payments (i.e., expenditures reported as ‘rewards’ paid by the women/families to the staff for assisting their care in the facility. For the women who delivered at home, cash given to the dai (traditional birth attendant who conducts home deliveries) was classified as an informal payment; 3. 
Analysis
Univariate, bivariate and multivariable statistics were used to describe and analyze the data. The dependent variable, OOPE, was not normally distributed, so the median and interquartile range (IQR) were used to describe the data. Wilcoxon–Mann–Whitney and Kruskal–Wallis tests were used to compare OOPE differences between groups.
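The descriptive statistics and the rank-based two-group comparison can be sketched with the Python standard library; the data below are invented, and a real analysis would use a statistics package to obtain p-values:

```python
# Sketch of the descriptive statistics used (median, IQR) plus a hand-rolled
# Mann-Whitney U statistic (no normal approximation, ties count 0.5) to
# illustrate the rank-based group comparison. All data are made up.
from statistics import median, quantiles

def iqr_bounds(xs):
    """Return (Q1, Q3) using the default 'exclusive' quartile method."""
    q1, _, q3 = quantiles(xs, n=4)
    return q1, q3

def mann_whitney_u(a, b):
    """U statistic: count of pairs (x in a, y in b) with x > y; ties add 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

jsy = [3, 5, 8, 12, 18, 25]   # hypothetical OOPE (US$), JSY deliveries
home = [2, 4, 6, 9, 13]       # hypothetical OOPE (US$), home deliveries
print(median(jsy), iqr_bounds(jsy), mann_whitney_u(jsy, home))
```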
Inequalities in OOPE were analyzed using the concentration curve and concentration index as demonstrated by Wagstaff et al. [30]. The concentration curve plots the cumulative percentage of OOPE (y-axis) against the cumulative percentage of women, ranked by their wealth index from the poorest to the richest households (x-axis) [31]. We used the concentration curve to display the inequalities in OOPE in this population graphically. The concentration index (CI), which ranges from −1 to 1, allowed us to quantify the inequality in OOPE. The CI is defined graphically as twice the area between the concentration curve and the line of equality [32]. A positive value indicates a progressive system, in which the wealthy bear a larger share of OOPE than the poor, while a negative value indicates the opposite (a regressive system).
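A common computational form of the concentration index is twice the covariance between the outcome and the fractional wealth rank, divided by the mean outcome; a minimal sketch with invented values:

```python
# Sketch of the concentration index: CI = 2 * cov(OOPE, fractional wealth rank) / mean(OOPE).
# Households must be ordered from poorest to richest; values are invented.

def concentration_index(oope_by_wealth_rank):
    """oope_by_wealth_rank: OOPE values ordered from poorest to richest household."""
    n = len(oope_by_wealth_rank)
    mu = sum(oope_by_wealth_rank) / n
    ranks = [(i + 0.5) / n for i in range(n)]  # fractional ranks in (0, 1)
    rbar = sum(ranks) / n
    cov = sum((y - mu) * (r - rbar)
              for y, r in zip(oope_by_wealth_rank, ranks)) / n
    return 2 * cov / mu

# OOPE rising with wealth -> positive (progressive) index
print(concentration_index([2, 4, 6, 8, 10]))
```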
Multivariable analysis
A two-part model, developed as part of the RAND Health Insurance Experiment, was used for the multivariable regression to assess determinants of OOPE [33–36]. This model is commonly used in studies of health expenditures because it accommodates a large number of zeros (no expense incurred) and a right-skewed, long-tailed distribution, which fits our data. The first part, a binary logistic model, was used to identify predictors associated with any OOPE and estimated the odds of a woman incurring any OOPE. We present the coefficients as adjusted odds ratios (AOR) with 95 % confidence intervals. The second part of the model, a generalized linear model (GLM) with a negative binomial distribution, analyzed the determinants of OOPE among women who reported any OOPE; it estimated the OOPE for women who reported incurring an expense. We adjusted for observable characteristics that may influence delivery-related OOPE, based on previous literature and context-specific knowledge. We present these coefficients as incidence rate ratios (IRR) with 95 % confidence intervals. Multicollinearity was assessed by calculating the mean variance inflation factor (VIF = 1.53), which showed no evidence of collinearity. No interactions were found between the variables in the model. In all models, p-values <0.05 were considered significant.
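A fitted two-part model combines its parts multiplicatively, E[Y] = P(Y > 0) × E[Y | Y > 0]. The sketch below illustrates that combination with invented placeholder coefficients (not the study's estimates), using a logit link for part 1 and a log link as in a negative binomial GLM for part 2:

```python
# Sketch of how a fitted two-part model is combined to predict expected OOPE.
# All coefficients and covariate values below are invented placeholders.
import math

def logistic_prob(x, beta):
    """Part 1: probability of incurring any OOPE (logit link)."""
    z = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

def glm_mean(x, gamma):
    """Part 2: expected OOPE given any OOPE (log link)."""
    return math.exp(sum(g * xi for g, xi in zip(gamma, x)))

def expected_oope(x, beta, gamma):
    """Unconditional expectation: E[Y] = P(Y > 0) * E[Y | Y > 0]."""
    return logistic_prob(x, beta) * glm_mean(x, gamma)

x = [1.0, 1.0, 0.0]        # intercept, JSY-delivery indicator, district-1 indicator
beta = [2.0, 0.46, 0.71]   # placeholder logistic coefficients
gamma = [2.1, -0.17, 0.86] # placeholder log-link coefficients
print(round(expected_oope(x, beta, gamma), 2))
```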
Ethical considerations
The study was described to all study participants. Informed consent was obtained from the participants before they were enrolled in the study and responded to the questionnaire. Anonymity and confidentiality were assured for all women. Ethical approval for the study was granted by the Ethics Committee of the authors' institution.

Study area: The study took place in two of the 51 administrative districts that make up Madhya Pradesh (MP), a state in central India. With a population of 71 million, it is one of India's largest states [24].
MP as a whole has poorer socio-economic and health indicators than the national averages; 24 % of its population lives below the poverty line, and the state has a maternal mortality ratio (MMR) of 190 per 100,000 live births [25]. In MP, all women who deliver in a public health facility are eligible to be beneficiaries of the JSY program, regardless of their poverty status. MP has had one of the highest uptakes of JSY in the country [26]. The two study districts were purposively selected to represent different socio-economic and geographic areas of MP. They were also study areas of a larger project (MATIND) [27] on the JSY program, in which this study was nested. Each study district (population 1.5–2 million) is divided into 5 administrative blocks (100,000–200,000 people per block), and each block comprises villages of approximately 1000–10,000 people. District one, situated on the western side of the state, has relatively better socio-economic characteristics, a lower MMR (176) and a higher uptake of the JSY program (72 %) [26]. In comparison, district two, which lies on the eastern side of the state, has generally poorer socio-economic characteristics, a higher MMR (361), and a lower JSY uptake (59 %) [26]. The districts differ not only in socio-economic characteristics but also in geography: the first district is relatively more urban with flat terrain, while the second is mostly rural, with several hill ranges and dense forested areas.

Study design and sampling: A community-based cross-sectional study was conducted from September 2013 to April 2015. Study villages were selected through multi-stage sampling in the two districts. In the first district, all blocks were included. In the second district, two of the five blocks were purposively selected, one in the north and the other in the south of the district.
Within the selected blocks, villages with between 200 and 10,000 inhabitants were included in the sampling frame. All villages were stratified into two groups based on a five kilometer (km) distance from a public health facility conducting more than 10 deliveries a month. Probability proportionate to size was used to select the individual study villages, ensuring that the number of sample units in each stratum was allocated in proportion to its share of the total population. In total, 247 villages were selected: 101 within and 146 outside the five km radius of a facility.

Data collection: The study participants (i.e., women in each village who had just given birth) were identified with the help of local community health workers (Accredited Social Health Activists), local crèche workers, and traditional birth attendants (dais). The community workers were incentivized to report births to the research team within two days of delivery. As an additional measure to ensure all births were identified, research teams visited the study villages frequently throughout the recruitment period. Trained research assistants interviewed recently delivered mothers at a health facility or in their home within the first week of delivery.
Results

Descriptive sample characteristics
As depicted in Table 1, the majority of women in our study (n = 1995, 84 %) delivered in a JSY public health facility. The remaining 16 % (n = 386) delivered at home.
The median age of the study sample was 23 years, and 29 % (n = 692) had no formal education. More than a third (n = 932) of the women were primiparous. The main reason given for home delivery was that the baby came unexpectedly and quickly (n = 312, 52 %). Other reasons included: a planned home delivery (n = 97, 16 %), transportation-related issues (n = 95, 16 %), no one to accompany them to the hospital (n = 58, 10 %) and other (n = 37, 6 %). Only one woman replied that she could not afford to deliver in a health facility (results not shown).

Table 1. Background characteristics, with median and inter-quartile range (IQR) of gross OOPE (in U.S. dollars), for women who delivered in a JSY facility or at home. Column percentages (%).

Characteristic                      | All women n (%), Median (IQR) | JSY beneficiary n (%), Median (IQR) | Home delivery n (%), Median (IQR)
Total                               | 2381 (100), 8 (3–17)   | 1995 (84), 8 (3–18)   | 386 (16), 6 (2–13)
Age in years, median (IQR)          | 23 (21–25)             | 22 (20–25)            | 25 (22–27)
Districts
  District 1                        | 1405 (59), 14 (7–23)*  | 1251 (63), 14 (7–23)* | 154 (40), 12 (5–23)*
  District 2                        | 976 (41), 3 (1–7)      | 744 (37), 3 (1–7)     | 232 (60), 3 (1–7)
Education in years, median (IQR)    | 5 (0–8)                | 5 (0–8)               | 4 (0–7)
Household wealth
  1st quintile (poorest)            | 519 (22), 3 (1–7)**    | 361 (18), 3 (1–6)**   | 158 (40), 3 (0–7)**
  2nd quintile                      | 505 (21), 7 (2–13)     | 414 (21), 6 (2–15)    | 91 (24), 7 (2–12)
  3rd quintile                      | 499 (21), 11 (5–18)    | 434 (22), 11 (5–18)   | 65 (17), 10 (3–18)
  4th quintile                      | 476 (20), 14 (7–23)    | 429 (21), 14 (7–23)   | 47 (12), 8 (3–19)
  5th quintile (least poor)         | 382 (16), 13 (7–25)    | 357 (18), 13 (7–24)   | 25 (7), 17 (8–29)
Caste
  Scheduled Caste (SC)              | 599 (25), 10 (4–19)**  | 502 (25), 11 (5–20)** | 97 (25), 8 (2–17)**
  Other backward caste (OBC)        | 904 (38), 12 (5–22)    | 807 (40), 12 (5–22)   | 97 (25), 8 (4–17)
  Scheduled Tribe (ST)              | 598 (25), 3 (1–6)      | 430 (22), 3 (1–6)     | 168 (44), 3 (1–7)
  General                           | 280 (12), 13 (4–21)    | 256 (13), 12 (4–22)   | 24 (6), 16 (5–20)
Birth order
  1st child                         | 932 (39), 8 (3–19)**   | 858 (43), 9 (3–20)**  | 74 (19), 7 (3–12)**
  2nd child                         | 838 (35), 8 (3–17)     | 688 (34), 9 (3–17)    | 150 (39), 7 (2–15)
  3rd child                         | 382 (16), 5 (2–14)     | 292 (15), 7 (2–15)    | 90 (23), 4 (1–12)
  4th or more child                 | 229 (10), 7 (2–15)     | 157 (8), 8 (2–17)     | 72 (19), 4 (0–9)
Type of delivery
  Vaginal delivery                  | 2303 (97), 8 (3–17)*   | 1917 (96), 8 (3–17)*  | 386 (100), 6 (2–13)
  Cesarean section delivery         | 78 (3), 50 (21–93)     | 78 (4), 50 (21–93)    | 0 (0), –
Cost categories
  Medicine, supplies and procedures | 224 (10), 3 (1–8)      | 181 (10), 3 (1–7)     | 43 (11), 7 (3–8)
  Informal payments                 | 1445 (65), 5 (2–8)     | 1187 (65), 5 (2–9)    | 258 (68), 5 (3–8)
  Food/baby items                   | 1695 (77), 5 (3–8)     | 1489 (81), 5 (3–8)    | 206 (55), 3 (2–8)
  Transportation                    | 328 (17), 3 (1–8)      | 328 (17), 3 (1–8)     | –

*Wilcoxon–Mann–Whitney test, p-value ≤0.05; **Kruskal–Wallis test, p-value ≤0.05. Column comparisons made.
Who had any OOPE?
Ninety-one percent (n = 2172) of the sample reported OOPE: 92 % of JSY beneficiaries and 85 % of women who delivered at home. In the descriptive analysis in Table 1, women who delivered under the JSY program had a significantly higher median OOPE ($8, IQR 3–18) than women who delivered at home ($6, IQR 2–13). Median OOPE also differed significantly between district one ($14, IQR 7–23) and district two ($3, IQR 1–7). Median OOPE increased with household wealth both for women who delivered in a JSY facility and for those who delivered at home; this pattern was similar in both districts (data not shown). Women from the scheduled tribe caste paid the least OOPE (median $3, IQR 1–6) (Table 1). Women who delivered by caesarean section paid more than six times as much (median $50, IQR 21–93) as women who delivered vaginally (median $8, IQR 3–17).

Does the JSY cash incentive defray OOPE?
Among the women who delivered in a JSY public facility, only a quarter (n = 504) received the cash incentive upon discharge; 68 % (n = 1353) were told to come back to receive the money. Assuming all JSY beneficiaries eventually receive the cash incentive, they would have a median net gain (i.e., JSY incentive minus OOPE) of $11. The net gain was larger in district 2 ($19) than in district 1 ($8) (data not shown). As demonstrated in Fig. 1, women from the poorest wealth quintile had twice the net gain ($20) of the wealthiest quintile ($10). Only 4 % (21/519) of women from the poorest quintile incurred OOPE greater than the value of the JSY cash benefit.

Fig. 1. Gross OOPE and net gain (the $23 incentive minus gross OOPE) for women who delivered in a JSY facility (n = 1995), U.S. dollars. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures.
Breakdown of OOPE among JSY and home deliveries
Among the JSY and home deliveries, 92 % (n = 2213) provided disaggregated cost information. Informal payments (64 %) and food/baby items (77 %) were the two most common sources of OOPE (Table 1). The proportion of women incurring OOPE for food and cloth items was higher among JSY beneficiaries (81 %) than among mothers who delivered at home (55 %). Between the two groups, the median cost of food and baby items was significantly higher for JSY beneficiaries ($5) than for home deliveries ($3). No other significant differences were found. OOPE also differed by district. The breakdown of costs by category was similar for JSY beneficiaries and home deliveries, with the exception of the proportion of informal payments in district 2 (Fig. 2): informal payments constituted only 5 % of total OOPE for JSY beneficiaries in district 2, compared to 43 % in district 1. This proportion did not differ between districts among mothers who delivered at home. Among JSY beneficiaries, a higher proportion of wealthy women incurred OOPE, and they had higher median OOPE, compared to the poorest quintile.

Fig. 2. Breakdown of out-of-pocket expenditures by cost categories for JSY beneficiaries and women who delivered at home, by district, n = 2213*. Legend: JSY: Janani Suraksha Yojana; *This graph includes only women who were able to provide disaggregated costs (93 %).
Inequalities in OOPE among JSY and home deliveries
Figure 3 displays the concentration curves for OOPE among JSY beneficiaries and home deliveries.
Since both curves lie below the line of equality, OOPE for both JSY beneficiaries and mothers who delivered at home was progressive, indicating that poorer households make proportionally smaller OOP payments during childbirth than wealthier households. JSY beneficiaries had less progressive OOPE (CI = 0.189) than women who delivered at home (CI = 0.293); however, this difference was not significant, nor was it significant when each district was analyzed separately.

Fig. 3. Concentration curves for JSY beneficiaries and women who delivered at home. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures.

Impact of JSY on OOPE
When adjusting for confounders, women who delivered in a JSY public facility had 1.58 times higher odds (95 % CI: 1.11–2.25) of incurring any OOPE than women who delivered at home (Table 2, model 1). Women from district 1 had twice the odds (95 % CI: 1.30–3.18) of having any OOPE compared to district 2.
Wealth, caste and birth order were not significant predictors of incurring any OOPE.Table 2Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386)Background characteristicsPart 1 of model: AOR n = 238195 % confidence intervalPart 2 of model: IRR n = 217295 % confidence intervalPlace of delivery Home 1.00 1.00  JSY (Public Facility)1.58(1.11–2.25)*0.84(0.73–0.96)*Districts District #2 1.00 1.00  District #12.03(1.30–3.18)*2.36(2.06–2.69)**Household wealth 1st quintile (Poorest) 1.00 1.00  2nd quintile1.49(0.98–2.27)1.19(1.02–1.38)* 3rd quintile1.53(0.91–2.56)1.33(1.13–1.56)** 4th quintile1.14(0.65–1.99)1.31(1.11–1.55)** 5th quintile (Least Poor)1.21(0.64–2.31)1.34(1.02–1.49)*Caste Scheduled Caste (SC) 1.00 1.00  Other backward caste (OBC)1.15(0.76–1.73)1.14(1.01–1.28)* Scheduled Tribe (ST)0.97(0.62–1.52)0.70(0.60–0.81)** General1.19(0.63–2.22)1.07(0.91–1.26)Birth order 1st child 1.00 1.00  2nd child1.21(0.82–1.79)0.82(0.73–0.92)* 3rd child1.03(0.63–1.68)0.71(0.60–0.83)** 4th or more child0.64(0.35–1.17)0.69(0.56–0.86)*Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386) Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 However in model 2 among the women who had incurred any OOPE (Table 2), those who delivered under the JSY program paid 16 % less (95 % CI: 0.73–0.96) OOPE than women who delivered at home. Women from district 1 paid more than twice (95 % CI: 2.06–2.69) the OOPE compared to district 2. Increased wealth was also significantly related to higher OOPE. Conversely, being from a scheduled tribe and birth order (women with more children) were related to having lower OOPE. 
When adjusting for confounders, women who delivered in a JSY public facility had 1.58 higher odds (95 % CI: 1.11–2.25) to incur any OOPE than women who delivered at home (Table 2, model 1). Women from district 1 had twice the odds (95 % CI: 1.30–3.18) of having OOPE compared to district 2. Wealth, caste and birth order were not significant predictors of incurring any OOPE.Table 2Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386)Background characteristicsPart 1 of model: AOR n = 238195 % confidence intervalPart 2 of model: IRR n = 217295 % confidence intervalPlace of delivery Home 1.00 1.00  JSY (Public Facility)1.58(1.11–2.25)*0.84(0.73–0.96)*Districts District #2 1.00 1.00  District #12.03(1.30–3.18)*2.36(2.06–2.69)**Household wealth 1st quintile (Poorest) 1.00 1.00  2nd quintile1.49(0.98–2.27)1.19(1.02–1.38)* 3rd quintile1.53(0.91–2.56)1.33(1.13–1.56)** 4th quintile1.14(0.65–1.99)1.31(1.11–1.55)** 5th quintile (Least Poor)1.21(0.64–2.31)1.34(1.02–1.49)*Caste Scheduled Caste (SC) 1.00 1.00  Other backward caste (OBC)1.15(0.76–1.73)1.14(1.01–1.28)* Scheduled Tribe (ST)0.97(0.62–1.52)0.70(0.60–0.81)** General1.19(0.63–2.22)1.07(0.91–1.26)Birth order 1st child 1.00 1.00  2nd child1.21(0.82–1.79)0.82(0.73–0.92)* 3rd child1.03(0.63–1.68)0.71(0.60–0.83)** 4th or more child0.64(0.35–1.17)0.69(0.56–0.86)*Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 Two part model OOPE ($) among women who delivered in a JSY facility (n = 1995) and delivered at home (n = 386) Adjusted for age, education and delivery type; JSY Janani Suraksha Yojana, AOR Adjusted Odds Ratio, IRR Incidence Rate Ratio, *p-value ≤0.05; **p-value ≤0.001 However in model 2 among the women who had incurred any OOPE (Table 2), those who delivered under the JSY program paid 16 % less (95 % CI: 0.73–0.96) OOPE than women who delivered at home. 
Descriptive sample characteristics: As shown in Table 1, the majority of women in our study (n = 1995, 84 %) delivered in a JSY public health facility; the remaining 16 % (n = 386) delivered at home. The median age of the sample was 23 years, and 29 % (n = 692) had no formal education. More than a third (n = 932) of the women were primiparous. The main reason given for a home delivery was that the baby came unexpectedly and quickly (n = 312, 52 %). Other reasons included a planned home delivery (n = 97, 16 %), transportation-related issues (n = 95, 16 %), having no one to accompany them to the hospital (n = 58, 10 %), and other reasons (n = 37, 6 %). Only one woman replied that she could not afford to deliver in a health facility (results not shown).

Table 1 Background characteristics, median and inter-quartile range (IQR) of gross OOPE (in U.S. dollars) for women who delivered in a JSY facility or at home.
Column percentages (%).

| Background characteristic | All women n (%) | Median (IQR) | JSY beneficiary n (%) | Median (IQR) | Home delivery n (%) | Median (IQR) |
|---|---|---|---|---|---|---|
| Total | 2381 (100) | 8 (3–17) | 1995 (84) | 8 (3–18) | 386 (16) | 6 (2–13) |
| Age in years (median, IQR) | 23 (21–25) | – | 22 (20–25) | – | 25 (22–27) | – |
| District 1 | 1405 (59) | 14 (7–23)* | 1251 (63) | 14 (7–23)* | 154 (40) | 12 (5–23)* |
| District 2 | 976 (41) | 3 (1–7) | 744 (37) | 3 (1–7) | 232 (60) | 3 (1–7) |
| Education in years (median, IQR) | 5 (0–8) | – | 5 (0–8) | – | 4 (0–7) | – |
| Household wealth: 1st quintile (poorest) | 519 (22) | 3 (1–7)** | 361 (18) | 3 (1–6)** | 158 (40) | 3 (0–7)** |
| 2nd quintile | 505 (21) | 7 (2–13) | 414 (21) | 6 (2–15) | 91 (24) | 7 (2–12) |
| 3rd quintile | 499 (21) | 11 (5–18) | 434 (22) | 11 (5–18) | 65 (17) | 10 (3–18) |
| 4th quintile | 476 (20) | 14 (7–23) | 429 (21) | 14 (7–23) | 47 (12) | 8 (3–19) |
| 5th quintile (least poor) | 382 (16) | 13 (7–25) | 357 (18) | 13 (7–24) | 25 (7) | 17 (8–29) |
| Caste: Scheduled Caste (SC) | 599 (25) | 10 (4–19)** | 502 (25) | 11 (5–20)** | 97 (25) | 8 (2–17)** |
| Other backward caste (OBC) | 904 (38) | 12 (5–22) | 807 (40) | 12 (5–22) | 97 (25) | 8 (4–17) |
| Scheduled Tribe (ST) | 598 (25) | 3 (1–6) | 430 (22) | 3 (1–6) | 168 (44) | 3 (1–7) |
| General | 280 (12) | 13 (4–21) | 256 (13) | 12 (4–22) | 24 (6) | 16 (5–20) |
| Birth order: 1st child | 932 (39) | 8 (3–19)** | 858 (43) | 9 (3–20)** | 74 (19) | 7 (3–12)** |
| 2nd child | 838 (35) | 8 (3–17) | 688 (34) | 9 (3–17) | 150 (39) | 7 (2–15) |
| 3rd child | 382 (16) | 5 (2–14) | 292 (15) | 7 (2–15) | 90 (23) | 4 (1–12) |
| 4th or later child | 229 (10) | 7 (2–15) | 157 (8) | 8 (2–17) | 72 (19) | 4 (0–9) |
| Vaginal delivery | 2303 (97) | 8 (3–17)* | 1917 (96) | 8 (3–17)* | 386 (100) | 6 (2–13) |
| Caesarean section delivery | 78 (3) | 50 (21–93) | 78 (4) | 50 (21–93) | 0 (0) | – |
| Cost: medicine, supplies and procedures | 224 (10) | 3 (1–8) | 181 (10) | 3 (1–7) | 43 (11) | 7 (3–8) |
| Informal payments | 1445 (65) | 5 (2–8) | 1187 (65) | 5 (2–9) | 258 (68) | 5 (3–8) |
| Food/baby items | 1695 (77) | 5 (3–8) | 1489 (81) | 5 (3–8) | 206 (55) | 3 (2–8) |
| Transportation | 328 (17) | 3 (1–8) | 328 (17) | 3 (1–8) | – | – |

*Wilcoxon-Mann-Whitney test, p-value ≤0.05; **Kruskal-Wallis test, p-value ≤0.05. Column comparisons made.
Who had any OOPE? Ninety-one percent (n = 2172) of the sample reported OOPE: 92 % of JSY beneficiaries and 85 % of women who delivered at home. In the descriptive analysis (Table 1), women who delivered under the JSY program had a significantly higher median OOPE ($8, IQR 3–18) than women who delivered at home ($6, IQR 2–13). Median OOPE also differed significantly between district 1 ($14, IQR 7–23) and district 2 ($3, IQR 1–7). Median OOPE increased with household wealth both for women who delivered in a JSY facility and for those who delivered at home, a pattern seen in both districts (data not shown). Women from scheduled tribes paid the least OOPE (median $3, IQR 1–6) (Table 1). Women who delivered by caesarean section paid more than six times as much (median $50, IQR 21–93) as women who delivered vaginally (median $8, IQR 3–17).

Does the JSY cash incentive defray OOPE? Among the women who delivered in a JSY public facility, only a quarter (n = 504) received the cash incentive upon discharge; 68 % (n = 1353) were told to come back to receive the money. Assuming all JSY beneficiaries eventually receive the cash incentive, they would have a median net gain (i.e., JSY incentive minus OOPE) of $11. The net gain was larger in district 2 ($19) than in district 1 ($8) (data not shown). As shown in Fig. 1, women from the poorest wealth quintile had twice the net gain ($20) of the wealthiest quintile ($10). Only 4 % (21/519) of the poorest quintile incurred OOPE greater than the value of the JSY cash benefit.

Fig. 1 Gross OOPE and net gain (the JSY incentive of $23 minus gross OOPE) for women who delivered in a JSY facility (n = 1995), U.S. dollars.
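The net-gain calculation described above is straightforward arithmetic: the fixed $23 incentive minus each woman's gross OOPE, summarized by its median and by the share of women whose OOPE exceeds the benefit. A minimal sketch (not the study's code; the OOPE values below are hypothetical, not the study data):

```python
# Sketch of the "net gain" computation described in the text.
# Assumption: the flat JSY incentive of $23; OOPE values are illustrative.
import statistics

INCENTIVE = 23  # JSY cash benefit in U.S. dollars


def net_gain(oope):
    """Net gain = incentive minus gross out-of-pocket expenditure."""
    return [INCENTIVE - x for x in oope]


def share_exceeding_incentive(oope):
    """Fraction of women whose OOPE exceeded the cash benefit."""
    return sum(x > INCENTIVE for x in oope) / len(oope)


oope_sample = [3, 8, 14, 25, 6]        # hypothetical gross OOPE values ($)
gains = net_gain(oope_sample)          # [20, 15, 9, -2, 17]
print(statistics.median(gains))        # median net gain -> 15
print(share_exceeding_incentive(oope_sample))  # -> 0.2
```

A negative entry in `gains` flags a woman for whom the incentive did not cover her OOPE, the situation reported for only 4 % of the poorest quintile.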
Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures

Breakdown of OOPE among JSY and home deliveries: Among the JSY and home deliveries, 92 % (n = 2213) provided disaggregated cost information. Informal payments (64 %) and food/baby items (77 %) were the two most common sources of OOPE (Table 1). The proportion of women incurring OOPE for food and baby items was higher among JSY beneficiaries (81 %) than among home mothers (55 %), and the median cost for food and baby items was significantly higher for JSY beneficiaries ($5) than for home deliveries ($3). No other significant differences between the two groups were found. OOPE also differed by district. The breakdown of costs by category was similar for JSY beneficiaries and home deliveries, with the exception of the proportion of informal payments in district 2 (Fig. 2): informal payments constituted only 5 % of total OOPE for JSY beneficiaries in district 2, compared to 43 % in district 1. The proportion did not differ between districts among home mothers. Among JSY beneficiaries, a higher proportion of wealthy women incurred OOPE, and their median OOPE was higher, compared to the poorest quintile.

Fig. 2 Breakdown of out-of-pocket expenditures by cost categories for JSY beneficiaries and women who delivered at home, by district, n = 2213*. Legend: JSY: Janani Suraksha Yojana; *This graph includes only women who were able to provide disaggregated costs (93 %)
Inequalities in OOPE among JSY and home deliveries: Figure 3 displays the concentration curves for OOPE among JSY beneficiaries and home deliveries. Since both curves lie below the line of equality, OOPE for both JSY beneficiaries and home mothers was progressive, indicating that poorer households make proportionally smaller OOP payments during childbirth than wealthier households. JSY beneficiaries had less progressive OOPE (concentration index = 0.189) than women who delivered at home (concentration index = 0.293); however, this difference was not significant, nor was it significant when each district was analyzed separately.

Fig. 3 Concentration curves for JSY beneficiaries and women who delivered at home. Legend: JSY: Janani Suraksha Yojana; OOPE: out-of-pocket expenditures

Impact of JSY on OOPE: When adjusting for confounders, women who delivered in a JSY public facility had 1.58 times higher odds (95 % CI: 1.11–2.25) of incurring any OOPE than women who delivered at home (Table 2, model 1). Women from district 1 had twice the odds (95 % CI: 1.30–3.18) of incurring OOPE compared to district 2.
Wealth, caste and birth order were not significant predictors of incurring any OOPE.

Table 2 Two-part model of OOPE ($) among women who delivered in a JSY facility (n = 1995) and women who delivered at home (n = 386)

| Background characteristic | Part 1 of model: AOR (n = 2381) | 95 % confidence interval | Part 2 of model: IRR (n = 2172) | 95 % confidence interval |
|---|---|---|---|---|
| Place of delivery: Home | 1.00 | – | 1.00 | – |
| JSY (public facility) | 1.58 | (1.11–2.25)* | 0.84 | (0.73–0.96)* |
| Districts: District 2 | 1.00 | – | 1.00 | – |
| District 1 | 2.03 | (1.30–3.18)* | 2.36 | (2.06–2.69)** |
| Household wealth: 1st quintile (poorest) | 1.00 | – | 1.00 | – |
| 2nd quintile | 1.49 | (0.98–2.27) | 1.19 | (1.02–1.38)* |
| 3rd quintile | 1.53 | (0.91–2.56) | 1.33 | (1.13–1.56)** |
| 4th quintile | 1.14 | (0.65–1.99) | 1.31 | (1.11–1.55)** |
| 5th quintile (least poor) | 1.21 | (0.64–2.31) | 1.34 | (1.02–1.49)* |
| Caste: Scheduled Caste (SC) | 1.00 | – | 1.00 | – |
| Other backward caste (OBC) | 1.15 | (0.76–1.73) | 1.14 | (1.01–1.28)* |
| Scheduled Tribe (ST) | 0.97 | (0.62–1.52) | 0.70 | (0.60–0.81)** |
| General | 1.19 | (0.63–2.22) | 1.07 | (0.91–1.26) |
| Birth order: 1st child | 1.00 | – | 1.00 | – |
| 2nd child | 1.21 | (0.82–1.79) | 0.82 | (0.73–0.92)* |
| 3rd child | 1.03 | (0.63–1.68) | 0.71 | (0.60–0.83)** |
| 4th or later child | 0.64 | (0.35–1.17) | 0.69 | (0.56–0.86)* |

Adjusted for age, education and delivery type. JSY Janani Suraksha Yojana; AOR adjusted odds ratio; IRR incidence rate ratio; *p-value ≤0.05; **p-value ≤0.001.

However, in part 2 of the model, among the women who had incurred any OOPE (Table 2), those who delivered under the JSY program paid 16 % less OOPE (IRR 0.84, 95 % CI: 0.73–0.96) than women who delivered at home. Women from district 1 paid more than twice the OOPE (95 % CI: 2.06–2.69) of women from district 2. Greater wealth was also significantly associated with higher OOPE. Conversely, belonging to a scheduled tribe and higher birth order (women with more children) were associated with lower OOPE.
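The logic of a two-part model can be sketched briefly: part 1 models whether any OOPE is incurred at all, part 2 models the amount among those with positive OOPE, and expected OOPE is the product of the two. This is a minimal illustration with hypothetical values, not the authors' estimation code (which fitted a logistic model yielding AORs for part 1 and a model yielding IRRs for part 2):

```python
# Sketch of the two-part ("hurdle") structure used for Table 2.
# Assumption: illustrative cost values only, not the study data.
def two_part_split(costs):
    """Return (binary any-OOPE indicators for part 1,
    positive costs for part 2)."""
    any_oope = [1 if c > 0 else 0 for c in costs]
    positive = [c for c in costs if c > 0]
    return any_oope, positive


def expected_oope(p_any, mean_if_any):
    """E[OOPE] = P(OOPE > 0) * E[OOPE | OOPE > 0]."""
    return p_any * mean_if_any


costs = [0, 0, 8, 6, 14]                 # hypothetical OOPE values ($)
part1, part2 = two_part_split(costs)
p_any = sum(part1) / len(part1)          # probability of any OOPE (0.6)
mean_pos = sum(part2) / len(part2)       # mean cost among spenders
print(expected_oope(p_any, mean_pos))    # overall expected OOPE
```

Splitting the model this way is what allows JSY delivery to raise the odds of incurring any OOPE (part 1) while lowering the amount paid among those who do incur it (part 2), as reported in the text.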
Discussion: To our knowledge, this is the first study to model predictors of OOPE among both JSY beneficiaries and women who deliver at home. We found that the large majority of women in our study reported some form of OOPE. Among JSY beneficiaries, the poorest women had the largest net gain once the cash transfer was taken into account. Further, OOP payments under the program were progressive, with the most disadvantaged wealth quintiles making proportionally smaller OOP payments than wealthier women. Being a JSY beneficiary was associated with increased odds of incurring OOPE, but also with a 16 % decrease in the amount of OOPE incurred, compared to delivering at home. Finally, wealth did not predict whether a woman incurred OOPE, but it did predict how much she paid. In our study, the descriptive and multivariable analyses showed that OOPE was smallest in the most vulnerable groups (scheduled tribes, the poorest wealth quintiles and multiparous women). This finding is consistent with another Indian study by Mohanty et al. [14], based on the District Level Household and Facility Survey (DLHS-3), a national survey conducted in 2007–2008 covering births in the preceding five years [37]; they also found that the amount of OOPE increased with wealth and decreased with birth order. Although there is no universal consensus on why a wealth gradient in OOPE exists, many studies show a positive relationship between income and health expenditures [38]; in other words, the wealthy pay more because they can afford to. In the descriptive analysis, we found that OOPE was higher for women who delivered under the JSY program. However, when adjusting for possible confounders, the model showed that JSY beneficiaries actually had lower OOPE than women who delivered at home. Another Indian study, also using DLHS-3 data, found that deliveries conducted under the JSY program at a public facility had higher OOPE than home deliveries [15].
This difference is not surprising, as only a descriptive analysis was performed and possible confounders were not taken into consideration. Another probable explanation is that, as Mohanty et al. reported, the amount of OOPE has declined over time for women who delivered in public facilities [14]. Further, in 2011 the government of India launched a program complementary to JSY, Janani Shishu Suraksha Karyakram (JSSK), to eliminate OOPE for pregnant women and ensure that delivery care is free of cost; medicines, medical supplies, procedures (surgery, diagnostics and x-rays), food, user fees and referral transport were all supposed to be covered under this program [7]. We found that indirect costs (informal payments to staff, food and items purchased for the infant) comprised the largest proportion of maternal expenses, while direct costs (medicines, supplies and delivery services) constituted the smallest share of spending among both JSY beneficiaries (4 %) and women who delivered at home (9 %). Conversely, Skordis-Worrall et al. [23] found in urban Indian slums that direct medical costs were responsible for the majority of OOPE. Context is extremely important when interpreting results from studies performed in India, given the heterogeneity between different parts of the country and the stark differences between rural and urban areas. Our study was performed in a rural setting; the poor in urban and rural settings experience different kinds of access issues, which may explain some of the difference in results.
Free delivery under the JSY program?: High OOPE are a well-known constraint on the utilization of delivery services where ready access to cash is limited, as it is for many rural households, especially the poor, whose ability to pay may be temporary or depend mainly on seasonal production such as farming [39]. The vast majority of women in our study incurred some amount of OOPE. While these women paid less as JSY beneficiaries than women who delivered at home, they did not have a free delivery as intended by the public health care system. As noted above, direct medical costs were not the driving force behind OOPE in this study; informal payments to staff were. A qualitative study from the same area found that women knew they were not supposed to make payments to staff but did so anyway, out of fear of not receiving appropriate care in a timely manner [40]. Although from the Indian government's perspective the primary purpose of the cash transfer is to incentivize women to have a free delivery in a public health facility, not to cover OOPE, the same qualitative study found that many women do in fact intend to use it to compensate for the OOPE incurred [40]. Only 25 % of the women who delivered in a JSY public facility in our study received the cash incentive upon discharge. While a previous study in the same area found that 85 % of women received the benefit within two weeks of delivery [41], if women expect the cash benefit to cover their OOPE, it needs to be received upon discharge. These bureaucratic delays in receiving the JSY cash incentive, together with informal payments to public facility staff, undermine the program and could cause future uptake problems if not addressed.

Does JSY reduce financial access barriers?: Population-based surveys have shown that a significant proportion of women report cost as the major barrier to an institutional delivery [2, 37], and other research supports this [10, 23, 42].
However, our model showed that JSY beneficiaries had lower OOPE than women who delivered at home. It is also important to note that the JSY beneficiaries in our study received a net gain after the cash incentive ($23), especially in the poorest quintiles. Although this is to be expected, since the poorest quintiles paid the least, it nevertheless implies the program is reaching and assisting the most vulnerable groups. In our setting the incentive was more than adequate to cover the OOPE, whereas an earlier study from Orissa found the cash incentive was not large enough to compensate for the entire OOPE amount [43]. Another Indian study reported that the JSY program provided women with some financial protection, though it was limited and did not cover the entire sum [14]. Further, the women in our study who delivered at home did not cite financial barriers as the reason for a home delivery. A recent qualitative study from the same area likewise found that cost was not a deterrent for most participants who delivered at home [40]. This implies other access barriers persist, some of which may be remedied, but not necessarily by a cash transfer.

JSY's role in reducing OOPE inequalities and inequities: wealth quintiles and OOPE in the JSY: Our concentration curves showed that OOPE for women who delivered under the JSY program was pro-poor: poorer households made proportionally smaller OOP payments during childbirth than wealthier households (i.e., a progressive system). From a strict equality perspective, the distribution was not equal. However, some would argue that wealthier households have the means to pay for services while poorer ones do not [44]. The OOPE may not be equal; nevertheless, the program is making OOPE more equitable and fair. In general, health care financing systems that are progressive tend to be redistributive.
Taxation, under which wealthier households pay more than poorer ones, is one example. Social service provision by the government (e.g., offering free delivery care) is another [44]. So while vulnerable groups often have more healthcare needs, they are able to obtain the same service as their wealthier counterparts despite contributing less. Yet, it is unknown whether the poorest women are in fact acquiring the same level of service. In our study, when comparing the prevalence of different costs and the median expenditure between the poorest and wealthiest quintiles of JSY mothers, it appears there is a higher prevalence of informal payments among the wealthiest quintile, and higher amounts are paid. This may simply reflect the fact that wealthier women have more disposable income to spend, or it may be that wealthier women informally pay for better services even within the same facilities. A program like JSY has the power to promote equality and equity in access to delivery care, and while several authors argue the overall quality of care administered in JSY public facilities is low [45–49], it is still important to ensure all women receive the best care possible regardless of whether they have the means to pay for it. We have presented differences in OOPE between the two study districts throughout the paper. In district 1, a woman is more likely to incur an OOPE and pay higher amounts. Since the OOPE amount was higher in district 1, JSY beneficiaries’ net gain was also smaller there. These findings were not surprising considering the heterogeneity between the districts. The women are much poorer in district 2 than in district 1, which would have an impact on the amount of money they spent. This also has an implication for the relative worth of the incentive to women in their respective districts. Another reason could be attributed to the sampling methodology: while district 1 included all blocks, district 2 only included 2 of its 5 blocks.
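The progressivity assessment described above rests on concentration curves: cumulative OOP payment shares plotted against the population ranked poorest to richest, with a positive concentration index meaning payments are concentrated among the better-off. A minimal sketch of that calculation is below; the per-quintile mean OOPE values are hypothetical, not the study's data.

```python
# Concentration curve and index for OOP payments across equal-sized
# wealth quintiles (ranked poorest -> richest). Hypothetical data.

def concentration_curve(quintile_means):
    """Cumulative payment shares at population shares 0, .2, .4, .6, .8, 1."""
    total = sum(quintile_means)
    shares, cum = [0.0], 0.0
    for m in quintile_means:
        cum += m / total
        shares.append(cum)
    return shares

def concentration_index(quintile_means):
    """C = 1 - 2 * area under the concentration curve (trapezoidal rule).
    Positive C: payments concentrated among the rich, i.e. poorer
    households pay proportionally less (the pro-poor pattern in the text)."""
    y = concentration_curve(quintile_means)
    area = sum((y[i] + y[i + 1]) / 2 * 0.2 for i in range(5))
    return 1 - 2 * area

oope = [4, 6, 8, 12, 20]  # hypothetical mean OOPE by quintile, poorest first
print(concentration_curve(oope))
print(round(concentration_index(oope), 3))  # positive -> progressive payments
```

With equal payments in every quintile the curve coincides with the 45° line and the index is zero; the hypothetical gradient above yields a positive index, mirroring the pro-poor distribution reported in the study.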
Methodological considerations: Though there have been some studies assessing the OOPE experienced during childbirth in India, few have looked at this from an equity perspective under the JSY program. The previous studies used secondary data that did not allow for cost disaggregation, assessed a time period when JSY coverage and overall institutional delivery were low, or were small in size [13–17, 23]. Furthermore, in some studies the costs were collected for deliveries that occurred in the previous five years, and thus probably suffered from substantial recall bias. Considering how quickly the JSY program has increased institutional birth proportions in such a short period of time, it is important to have recent data that reflect the current situation. Reports from many Asian countries have found that families borrow money to pay for maternal-related costs and are forced to forego essential items like food and education to repay the loans; these costs have a ripple effect on the family for years to come [50]. We did not enquire about the financing sources used to pay for the hospital delivery costs. This limits our ability to understand the role of JSY in providing financial protection and reducing subsequent impoverishment. So while this study has shown JSY beneficiaries have reduced OOPE compared to women who delivered at home, further research is needed to understand the magnitude of the reduction in relation to the family’s overall poverty status. Many studies highlight the limitations (e.g., recall bias and over/underreporting) associated with collecting health expenditure data [51–53]. Cost data were collected shortly after delivery and triangulated with other family members to minimize recall bias. A disaggregated cost collection design was used to improve accuracy and avoid underreporting of expenditures.
While our study design minimized recall bias, as the study participants were interviewed within a week of delivery, we do need to acknowledge the possibility of underreporting for women who were interviewed in a health facility because of staff presence. The sex of the infant has been reported in the literature as a determinant of OOPE [14]. Data were not collected on the sex of the infant, therefore we could not adjust for it in the model. It is reasonable to assume that in this setting the infant’s sex would influence OOPE, as it is well documented that families make higher informal payments when a male child is born [14]. This could affect the precision of the analysis, and the significance of other explanatory variables could be overestimated. However, we do not have any reason to think that the sex of the infant is distributed differently between the two groups. There could be an argument for the payments made to the dai to be considered a formal payment and classified as payment for ‘medicines, supplies and procedures’. However, we chose to classify these payments as informal because (a) the dai is not formally trained, (b) she is not a part of the formal health system, and (c) remuneration to the dai is negotiable, i.e., there are no fixed stipulated fees; she is often remunerated partly in cash and partly in kind, based on the ability of the mother's family to pay and their relationship with the dai. Sampling: The two districts were sampled in the same way, with the exception of the number of blocks chosen, and the process by which women were selected for the study was identical in both districts. As the districts in the state and in the rest of the country are very heterogeneous, our results should be generalized with caution. Conclusion: OOPE is still prevalent among women who deliver under the JSY program as well as in home deliveries. In JSY, OOPE varies by income quintile; the wealthier women pay more OOPE.
There is a net gain for all quintiles when the incentive is taken into consideration, with the highest gains occurring for the poorest women. OOPE was largely due to indirect costs like informal payments, food and cloth items for the baby, and not direct medical payments. Further, we found OOP payments under the program were progressive, with the most disadvantaged wealth quintile making proportionally less OOPE compared to wealthier women. Being a JSY beneficiary led to increased odds of incurring OOPE, but at the same time a decrease in the amount of OOPE incurred compared to women who delivered at home. While wealth was not a predictor of having OOPE, it was an indicator of how much the women would pay. The program seems to be effective in providing financial protection for the most vulnerable groups (i.e., women from poorer households and disadvantaged castes). Ethical considerations: Ethical approval for the study was granted by the Ethics Committee of R.D. Gardi Medical College (Ujjain, India) and Karolinska Institutet (Stockholm, Sweden), reference # 2010/1671–31/5.
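The net-gain arithmetic behind the quintile comparison is simply the cash incentive minus what each quintile paid out of pocket. The sketch below uses the $23 (INR 1400) incentive stated in the paper; the per-quintile median OOPE values are hypothetical placeholders (chosen so the endpoints echo the $20 and $10 net gains the study reports for the poorest and wealthiest quintiles).

```python
# Net gain per wealth quintile = cash incentive - median OOPE.
# JSY_INCENTIVE is from the paper; the OOPE figures are hypothetical.

JSY_INCENTIVE = 23  # USD (INR 1400), paid on discharge from a public facility

median_oope = {"poorest": 3, "q2": 5, "q3": 8, "q4": 10, "wealthiest": 13}

net_gain = {q: JSY_INCENTIVE - cost for q, cost in median_oope.items()}
print(net_gain)  # the poorest quintile retains the largest share of the transfer
```

Because poorer quintiles pay less OOPE, the same flat transfer leaves them with a larger net gain, which is the redistributive effect described above.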
Background: High out-of-pocket expenditures (OOPE) make delivery care difficult to access for a large proportion of India's population. Given that home deliveries increase the risk of maternal mortality, in 2005 the Indian Government implemented the Janani Suraksha Yojana (JSY) program to incentivize poor women to deliver in public health facilities by providing a cash transfer upon discharge. We studied the OOPE among JSY beneficiaries and women who delivered at home, and the predictors of OOPE, in two districts of Madhya Pradesh. Methods: From September 2013 to April 2015, a cross-sectional community-based survey was performed. All recently delivered women were interviewed to elicit delivery costs, socio-demographic characteristics and delivery-related information. Results: Most women (n = 1995, 84 %) delivered in a JSY public health facility; the remaining 16 % (n = 386) delivered at home. Women who delivered under the JSY program had a higher median (IQR) OOPE ($8, 3-18) compared to those who delivered at home ($6, 2-13). Among JSY beneficiaries, the poorest women had twice the net gain ($20) of the wealthiest ($10) after the cash transfer. Informal payments (64 %) and food/baby items (77 %) were the two most common sources of OOPE. OOPE among JSY beneficiaries was pro-poor: poorer women made proportionally less expenditure than wealthier women. In an adjusted model, delivering in a JSY public facility was associated with increased odds of incurring expenditures (OR: 1.58, 95 % CI: 1.11-2.25) but at the same time with a 16 % (95 % CI: 0.73-0.96) decrease in the amount paid compared to home deliveries. Conclusions: OOPE is prevalent among JSY beneficiaries as well as in home deliveries. In JSY, OOPE varies by income quintile: wealthier quintiles pay more OOPE. However, the cash incentive is adequate to provide a net gain for all quintiles. OOPE was largely due to indirect costs and not direct medical payments. The program seems to be effective in providing financial protection for the most vulnerable groups.
Background: India’s health care services are largely financed by out-of-pocket payments (71 %) at the point of care [1]. High out-of-pocket expenditures (OOPE) make health services, including care for childbirth, difficult to access for a large proportion of its population, especially the poor. In 2005, institutional delivery was nearly 6.5 times higher among Indian women belonging to the highest wealth quintile (84 %) compared to women in the poorest quintile (13 %) [2]. Evidence suggests maternal mortality can be reduced when deliveries are conducted by skilled birth attendants and women have access to emergency obstetric care, given the unpredictable nature of life-threatening complications that can occur at the time of childbirth [3]. Poor women, who have the least access to such care, bear a disproportionate burden of maternal mortality. Therefore, governments in many low-income settings with high burdens of maternal mortality have initiated special programs to draw women into facilities to give birth (instead of at home), where such care can be provided. Given the high number of maternal deaths in India (one-fifth of the global count) and a low institutional delivery proportion (39 % in 2005), the Indian government launched a conditional cash transfer program the same year to promote institutional delivery among poor women [2, 4]. The program, Janani Suraksha Yojana (JSY or safe motherhood program), provides women $23 (INR 1400) upon discharge after giving birth in a public health facility. JSY, the largest cash transfer program in the world, is funded by the central government of India (GOI), while implementation is managed by the states. Eligibility, incentive amount and uptake differ across the states [5, 6]. In the ten years since the program began, institutional delivery rates have increased to 74 % and more than 106 million women have benefited, with the GOI spending 16.4 billion USD on the program [7–9].
Previous studies have found a considerable proportion of Indian women cite financial access barriers as one of the main reasons for not having an institutional delivery [10–12]. The JSY program was expected to draw these mothers into public facilities with the assumption that they would receive a free delivery in addition to receiving the cash incentive of $23. While all services in the Indian public sector are supposed to be free, in reality OOPE during childbirth is common in these facilities [13–17]. Although there are a number of reports on the JSY [18–22], none focus on OOPE in the context of high JSY program uptake [13–17, 23]. The magnitude of OOPE incurred among these JSY beneficiaries is unknown. In addition, we do not know whether the cash incentive offsets OOPE in the same facility, and if so, to what extent. Also, the question remains whether the level of OOPE paid by JSY beneficiaries of different socio-economic status is similar, and whether women who participate in the JSY program actually have higher OOPE than women who give birth at home. We studied the OOPE among JSY beneficiaries and compared this OOPE to that incurred by women who delivered at home in two districts of Madhya Pradesh, India. We also described the extent to which the JSY cash transfer defrayed the OOPE for JSY beneficiaries. Among both groups of women, we studied predictors of OOPE, and how OOPE varied with wealth status. Conclusion: OOPE is still prevalent among women who deliver under the JSY program as well as in home deliveries. In JSY, OOPE varies by income quintile; the wealthier women pay more OOPE. There is a net gain for all quintiles when the incentive is taken into consideration, with the highest gains occurring for the poorest women. OOPE was largely due to indirect costs like informal payments, food and cloth items for the baby, and not direct medical payments.
Further, we found OOP payments under the program were progressive with the most disadvantaged wealth quintile making proportionally less OOPE compared to wealthier women. Being a JSY beneficiary led to increased odds of incurring OOPE, but at the same time a decrease in the amount of OOPE incurred compared to women who delivered at home. While wealth was not a predictor of having OOPE, it was an indicator of how much the women would pay. The program seems to be effective in providing financial protection for the most vulnerable groups (i.e., women from poorer households and disadvantaged castes).
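The finding that being a JSY beneficiary raised the odds of incurring any OOPE while lowering the amount paid is the kind of result a two-part expenditure model produces: a logistic part for whether any payment occurred, and a part for the (log) amount among payers. The sketch below only illustrates how such coefficients are read; the coefficient values are hypothetical, chosen to echo the reported odds ratio of 1.58 and the roughly 16 % decrease in the amount paid.

```python
import math

# Interpreting a two-part expenditure model (illustrative only; the
# coefficients below are hypothetical, not the study's fitted values).

# Part 1: logistic regression on whether any OOPE was incurred.
b_facility_logit = math.log(1.58)          # hypothetical coefficient
odds_ratio = math.exp(b_facility_logit)     # >1: higher odds of any OOPE

# Part 2: regression on log(OOPE amount) among women with positive OOPE.
b_facility_log_amount = math.log(0.84)      # hypothetical coefficient
pct_change = (math.exp(b_facility_log_amount) - 1) * 100  # negative: lower amount

print(round(odds_ratio, 2), round(pct_change, 1))
```

Exponentiating a logit coefficient gives an odds ratio, and exponentiating a log-amount coefficient gives a multiplicative effect on the amount, which is why the same exposure can plausibly raise the probability of paying while shrinking the sum paid.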
17,728
409
23
[ "oope", "women", "jsy", "study", "home", "delivered", "district", "women delivered", "delivery", "facility" ]
[ "test", "test" ]
null
[CONTENT] Out-of-pocket expenditure | Conditional cash transfer | Maternal health | Facility based delivery | India [SUMMARY]
null
[CONTENT] Out-of-pocket expenditure | Conditional cash transfer | Maternal health | Facility based delivery | India [SUMMARY]
[CONTENT] Out-of-pocket expenditure | Conditional cash transfer | Maternal health | Facility based delivery | India [SUMMARY]
[CONTENT] Out-of-pocket expenditure | Conditional cash transfer | Maternal health | Facility based delivery | India [SUMMARY]
[CONTENT] Out-of-pocket expenditure | Conditional cash transfer | Maternal health | Facility based delivery | India [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Female | Health Care Costs | Health Expenditures | Health Services Accessibility | Humans | India | Maternal Health Services | Parturition | Pregnancy [SUMMARY]
null
[CONTENT] Adult | Cross-Sectional Studies | Female | Health Care Costs | Health Expenditures | Health Services Accessibility | Humans | India | Maternal Health Services | Parturition | Pregnancy [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Female | Health Care Costs | Health Expenditures | Health Services Accessibility | Humans | India | Maternal Health Services | Parturition | Pregnancy [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Female | Health Care Costs | Health Expenditures | Health Services Accessibility | Humans | India | Maternal Health Services | Parturition | Pregnancy [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Female | Health Care Costs | Health Expenditures | Health Services Accessibility | Humans | India | Maternal Health Services | Parturition | Pregnancy [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] oope | women | jsy | study | home | delivered | district | women delivered | delivery | facility [SUMMARY]
null
[CONTENT] oope | women | jsy | study | home | delivered | district | women delivered | delivery | facility [SUMMARY]
[CONTENT] oope | women | jsy | study | home | delivered | district | women delivered | delivery | facility [SUMMARY]
[CONTENT] oope | women | jsy | study | home | delivered | district | women delivered | delivery | facility [SUMMARY]
[CONTENT] oope | women | jsy | study | home | delivered | district | women delivered | delivery | facility [SUMMARY]
[CONTENT] oope | women | jsy | care | program | institutional delivery | institutional | high | access | cash [SUMMARY]
null
[CONTENT] jsy | oope | home | women | delivered | median | 17 | 00 | district | 21 [SUMMARY]
[CONTENT] oope | women | disadvantaged | women pay | wealthier women | payments | pay | program | wealthier | quintile [SUMMARY]
[CONTENT] oope | women | jsy | study | home | delivered | district | delivery | women delivered | jsy beneficiaries [SUMMARY]
[CONTENT] oope | women | jsy | study | home | delivered | district | delivery | women delivered | jsy beneficiaries [SUMMARY]
[CONTENT] OOPE | India ||| 2005 | the Indian Government | the Janani Suraksha Yojana | JSY ||| OOPE | JSY | OOPE | two | Madhya Pradesh [SUMMARY]
null
[CONTENT] 1995 | 84 % | JSY | 16 % | 386 ||| JSY | IQR OOPE | 8 | 3-18 | 6 | 2-13 ||| JSY | 20 | 10 ||| 64 % | 77 % | two | OOPE ||| JSY ||| JSY | 1.58 | 95 % | CI | 1.11-2.25 | 16 % | 95 % | CI | 0.73 [SUMMARY]
[CONTENT] OOPE | JSY ||| JSY | OOPE ||| ||| OOPE ||| [SUMMARY]
[CONTENT] OOPE | India ||| 2005 | the Indian Government | the Janani Suraksha Yojana | JSY ||| OOPE | JSY | OOPE | two | Madhya Pradesh ||| September 2013 to April 2015 ||| ||| 1995 | 84 % | JSY | 16 % | 386 ||| JSY | IQR OOPE | 8 | 3-18 | 6 | 2-13 ||| JSY | 20 | 10 ||| 64 % | 77 % | two | OOPE ||| JSY ||| JSY | 1.58 | 95 % | CI | 1.11-2.25 | 16 % | 95 % | CI | 0.73 ||| OOPE | JSY ||| JSY | OOPE ||| ||| OOPE ||| [SUMMARY]
[CONTENT] OOPE | India ||| 2005 | the Indian Government | the Janani Suraksha Yojana | JSY ||| OOPE | JSY | OOPE | two | Madhya Pradesh ||| September 2013 to April 2015 ||| ||| 1995 | 84 % | JSY | 16 % | 386 ||| JSY | IQR OOPE | 8 | 3-18 | 6 | 2-13 ||| JSY | 20 | 10 ||| 64 % | 77 % | two | OOPE ||| JSY ||| JSY | 1.58 | 95 % | CI | 1.11-2.25 | 16 % | 95 % | CI | 0.73 ||| OOPE | JSY ||| JSY | OOPE ||| ||| OOPE ||| [SUMMARY]
GPR39 (zinc receptor) knockout mice exhibit depression-like behavior and CREB/BDNF down-regulation in the hippocampus.
25609596
Zinc may act as a neurotransmitter in the central nervous system by activation of the GPR39 metabotropic receptors.
BACKGROUND
In the present study, we investigated whether GPR39 knockout would cause depressive-like and/or anxiety-like behavior, as measured by the forced swim test, tail suspension test, and light/dark test. We also investigated whether lack of GPR39 would change levels of cAMP response element-binding protein (CREB), brain-derived neurotrophic factor (BDNF) and tropomyosin related kinase B (TrkB) protein in the hippocampus and frontal cortex of GPR39 knockout mice subjected to the forced swim test, as measured by Western-blot analysis.
METHODS
In this study, GPR39 knockout mice showed an increased immobility time in both the forced swim test and the tail suspension test, indicating depressive-like behavior, and displayed an anxiety-like phenotype. GPR39 knockout mice had lower CREB and BDNF levels in the hippocampus, but not in the frontal cortex, which indicates region specificity for the impaired CREB/BDNF pathway (which is important in the antidepressant response) in the absence of GPR39. There were no changes in TrkB protein in either structure. In the present study, we also investigated activity in the hypothalamus-pituitary-adrenal axis under both zinc- and GPR39-deficient conditions. Zinc-deficient mice had higher serum corticosterone levels and lower glucocorticoid receptor levels in the hippocampus and frontal cortex.
RESULTS
There were no changes in the GPR39 knockout mice in comparison with the wild-type control mice, which does not support a role of GPR39 in hypothalamus-pituitary-adrenal axis regulation. The results of this study indicate the involvement of the GPR39 Zn(2+)-sensing receptor in the pathophysiology of depression with component of anxiety.
CONCLUSIONS
[ "Animals", "Brain-Derived Neurotrophic Factor", "CREB-Binding Protein", "Corticosterone", "Dark Adaptation", "Depression", "Disease Models, Animal", "Down-Regulation", "Hindlimb Suspension", "Hippocampus", "Immobility Response, Tonic", "Male", "Mice", "Mice, Inbred C57BL", "Mice, Knockout", "Motor Activity", "Receptor, trkB", "Receptors, G-Protein-Coupled", "Swimming", "Time Factors", "Zinc" ]
4360246
Introduction
Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the direct pathomechanism of depression being unknown, and this contributes to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b), and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013; Swardfager et al., 2013a).
Zinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in the brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cAMP response element following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013). In the present study, we investigated behavior in mice lacking GPR39, as well as the hypothalamus-pituitary-adrenal (HPA) axis and proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.
Methods
Animals All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków. CD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum. GPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used. As with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts. All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków. CD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum. GPR39 (−/−) male mice as described by Holst et al. 
(2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used. As with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts. Forced Swim Test The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test. The immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described FST as being a means of evaluating potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured. The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test. The immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. 
Because Porsolt et al. (1977) described the FST as a means of evaluating the potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.
Tail Suspension Test WT and GPR39 KO mice were subjected to the tail suspension test (TST) as previously described by Młyniec and Nowak (2012). Animals were fastened with medical adhesive tape by the tail 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually.
Locomotor Activity Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.
Light/Dark Test WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to freely explore. The following parameters were quantified in the test: 1) time spent in lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossing (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).
Zinc Concentration The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve the final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used, with a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8 mA). The detection limit for Zn was about 0.4mg/L.
Corticosterone Assay The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum with ethanol. The ethanol-serum extract was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and with a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal: samples were incubated for 10 minutes at 4°C with 0.2mL of a 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.
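The log-logit calculation is a standard way to linearize a competitive RIA standard curve: the logit of the bound fraction B/B0 is fitted as a linear function of log(dose), and sample concentrations are read off the inverted line. The sketch below is an illustration with invented counts and doses, not the authors' actual procedure or data:

```python
import numpy as np

def log_logit(b, b0):
    """Logit of the bound fraction B/B0 used to linearize an RIA standard curve."""
    p = np.asarray(b, dtype=float) / b0
    return np.log(p / (1.0 - p))

# Hypothetical standard curve: counts bound at known corticosterone doses (ng/mL).
doses = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
bound = np.array([8200.0, 6900.0, 5200.0, 3500.0, 2100.0])
b0 = 9500.0  # counts bound with no unlabeled corticosterone

# Fit logit(B/B0) as a linear function of log(dose).
slope, intercept = np.polyfit(np.log(doses), log_logit(bound, b0), 1)

def estimate_dose(counts):
    """Invert the fitted line to read a sample concentration off the curve."""
    y = log_logit(counts, b0)
    return float(np.exp((y - intercept) / slope))

sample_conc = estimate_dose(4300.0)  # falls between the 100 and 200 ng/mL standards
```

The slope is negative because more unlabeled hormone displaces more tracer, lowering the bound fraction.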
Western-Blot Analysis Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of the proteins CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice had previously been subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis took place. The frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). The membranes were then incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G secondary antibody (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.
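The final densitometric step (each band divided by its own lane's loading control, then typically expressed relative to the wild-type mean) is simple enough to sketch. The density values and group labels below are invented for illustration, not measured data:

```python
import numpy as np

# Hypothetical band densities from densitometry for one target protein (e.g. BDNF)
# and its loading control, one value per animal.
target = np.array([1.10, 0.95, 1.05, 0.80, 0.72, 0.78])
loading_control = np.array([1.00, 0.98, 1.02, 1.01, 0.99, 1.00])
genotype = np.array(["WT", "WT", "WT", "KO", "KO", "KO"])

# Normalize each band to its own lane's loading control.
normalized = target / loading_control

# Express every animal relative to the wild-type mean (100%).
wt_mean = normalized[genotype == "WT"].mean()
percent_of_wt = 100.0 * normalized / wt_mean
ko_mean = percent_of_wt[genotype == "KO"].mean()  # below 100% indicates reduced protein
```

Normalizing within each lane corrects for unequal loading before the genotypes are compared.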
Statistical Analysis The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant.
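The reported comparisons (mean±SEM with an unpaired two-tailed Student t test, α = 0.05) were run in GraphPad Prism; the same arithmetic can be reproduced in plain NumPy. The immobility times below are invented for illustration, not the study's data:

```python
import numpy as np

def sem(x):
    """Standard error of the mean, as used for the mean ± SEM figures."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / np.sqrt(len(x))

def unpaired_t(a, b):
    """Unpaired Student t statistic with pooled variance (equal-variance form)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return float((a.mean() - b.mean()) / np.sqrt(pooled_var * (1.0 / na + 1.0 / nb)))

# Invented immobility times (seconds): 7 WT and 6 KO animals, so df = 11.
wt = np.array([110.0, 125.0, 98.0, 132.0, 120.0, 105.0, 115.0])
ko = np.array([150.0, 168.0, 142.0, 175.0, 160.0, 155.0])

sem_wt = sem(wt)
t_stat = unpaired_t(wt, ko)  # compare |t| with the critical value for df = 11
```

A negative t here simply reflects that the first group (WT) has the lower mean immobility.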
Discussion
None.
[ "Introduction", "Animals", "Forced Swim Test", "Tail Suspension Test", "Locomotor Activity", "Light/Dark Test", "Zinc Concentration", "Corticosterone Assay", "Western-Blot Analysis", "Statistical Analysis", "Results", "Behavioral Studies of Gpr39 KO Mice", "Serum Zinc Concentration in GPR39 KO Mice", "CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice", "Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice", "GR Protein Levels in Zinc-Deficient and GPR39 KO Mice", "Discussion" ]
[ "Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the direct pathomechanism of depression being unknown, and this leads to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). 
Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b), and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013;\nSwardfager et al., 2013a).\nZinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in the brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). The GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cyclic adenosine monophosphate (cAMP) following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of the GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of the GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013). 
In the present study, we investigated behavior in mice lacking GPR39, as well as the hypothalamus-pituitary-adrenal (HPA) axis and proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.", "All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków.\nCD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum.\nGPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used.\nAs with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts.", "The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. 
The total duration of immobility after adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test.\nThe immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described the FST as a means of evaluating the potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.", "WT and GPR39 KO mice were subjected to the tail suspension test (TST) as previously described by Młyniec and Nowak (2012). Animals were fastened with medical adhesive tape by the tail 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually.", "Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.", "WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to freely explore. The following parameters were quantified in the test: 1) time spent in lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossing (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).", "The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). 
This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve the final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used, with a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8 mA). The detection limit for Zn was about 0.4mg/L.", "The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum with ethanol. The ethanol-serum extract was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and with a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal: samples were incubated for 10 minutes at 4°C with 0.2mL of a 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.", "Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of such proteins as CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice had previously been subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis took place.\nThe frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. 
After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated in a secondary antibody with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.", "The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant.", " Behavioral Studies of Gpr39 KO Mice Before experiments, mice were weighed. There were no differences between WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of the GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found a more significant increase in immobility time in GPR39 KO vs WT using a modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05, **p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=0.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.\n Serum Zinc Concentration in GPR39 KO Mice There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790].\n CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice The effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nIn a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).\n Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.\n GR Protein Levels in Zinc-Deficient and GPR39 KO Mice Administration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.", "Before experiments, mice were weighed. There were no differences between WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of the GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found a more significant increase in immobility time in GPR39 KO vs WT using a modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05, **p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. 
*p < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=0.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.", "There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790].", "The effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nIn a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).", "The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. 
A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.", "Administration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.", "In the present study, we found that elimination of GPR39 leads to the development of depressive-like behavior, as measured by the FST and TST. Additionally, we found decreased entries into the lit compartment and line crossing and increased immobility time in the light/dark test, indicating an anxiety-like phenotype in GPR39 KO mice. Although not statistically significant, we also observed some tendencies towards less time spent in the lit compartment (decreased by 17%) and increased freezing behavior (28%) in GPR39 KO compared to WT mice.\nThe GPR39 was found to be activated by zinc ions (Holst et al., 2007; Yasuda et al., 2007), and the link between zinc and depression is well known (Maes et al., 1997, 1999; Szewczyk et al., 2011; Swardfager et al., 2013a, 2013b). Ishitobi et al. 
(2012) did not find behavioral abnormalities in the FST after administering antisense of DNA for GPR39-1b. In our present study, mice with general GPR39 KO were used. GPR39-1a full-length isoform was found to be a receptor for zinc ions, whereas GPR39-1b, corresponding to the 5-transmembrane truncated form (Egerod et al., 2007), did not respond to zinc stimulation, which means that the GPR39-1b splice variant is not a receptor of Zn2+ (Yasuda and Ishida, 2014).\nActivation of the GPR39 triggers diverse neuronal pathways (Holst et al., 2004, 2007; Popovics and Stewart, 2011) that may be involved in neuroprotection (Depoortere, 2012). Zinc stimulates GPR39 activity, which activates the Gαs, Gαq, and Gα12/13 pathways (Holst et al., 2007). The Gαq pathway triggers diverse downstream kinases and mediates CREB activation and cyclic adenosine monophosphate response element-dependent transcription. Our previous study showed decreased CREB, BDNF, and TrkB proteins in the hippocampus of mice under zinc-deficient conditions (Młyniec et al., 2014b). Moreover, disruption of the CaM/CaMKII/CREB signaling pathway was found after administration of a zinc-deficient diet for 5 weeks (Gao et al., 2011). Since GPR39 was discovered to be a receptor for zinc, we previously investigated whether GPR39 may be involved in the pathophysiology of depression and suicide behavior. In our postmortem study, we found GPR39 down-regulation in the hippocampus and the frontal cortex of suicide victims (Młyniec et al., 2014b). In the present study, we investigated whether GPR39 KO would decrease levels of such proteins as CREB, BDNF, and TrkB, which were also found to be impaired in depression in suicide victims (Dwivedi et al., 2003; Pandey et al., 2007). 
Indeed, we found that lack of the GPR39 gene causes CREB and BDNF reduction in the hippocampus, but not in the frontal cortex, suggesting that the hippocampus might be a specific region for CREB and BDNF down-regulation in the absence of a zinc receptor. The CA3 region of the hippocampus seems to be strongly involved in zinc neurotransmission. Besser et al. (2009) found that the GPR39 is activated by synaptically released zinc ions in the CA3 area of the hippocampus. This activation triggers Ca2+ signaling and Ca2+/calmodulin-dependent kinase II, suggesting that it has a role in neuron survival/neuroplasticity in this brain area (Besser et al., 2009), which is of importance in antidepressant action. In this study, we did not find any changes in TrkB levels in either the hippocampus or frontal cortex; in the case of the hippocampus, this may be a compensatory mechanism, and it needs further investigation.\nThere is strong evidence that zinc deficiency leads to hyperactivation of the HPA axis (Watanabe et al., 2010; Takeda and Tamano, 2012; Młyniec et al., 2012, 2013a), which is activated as a reaction to stress. The activity of the HPA axis is regulated by negative feedback through GRs, which are present in the brain, mainly in the hippocampus (Herman et al., 2005). This mechanism was shown to be impaired in mood disorders. In the present study, we compared corticosterone and GR levels in zinc-deficient mice and GPR39 KO mice. We found an elevated corticosterone concentration in the serum and decreased GR levels in the hippocampus and frontal cortex of mice receiving a zinc-deficient diet. However, there were no changes in corticosterone or GR levels in GPR39 KO mice in comparison with WT mice. This suggests that the depressive-like behavior observed in mice lacking the GPR39 gene is not due to higher corticosterone concentrations and that there is no link between GPR39 and the HPA axis.
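All of the group comparisons reported in this article (immobility times, corticosterone, GR, CREB, BDNF, and TrkB levels) are unpaired two-group Student t tests reported as t(df) with a P value. A minimal stdlib-only sketch of that statistic is below; the immobility values are hypothetical illustrations, not the study's raw data:

```python
import math
from statistics import mean

def students_t(a, b):
    """Unpaired Student t statistic with pooled variance, as in the paper's
    t(df) reports, where df = n1 + n2 - 2."""
    n1, n2 = len(a), len(b)
    m1, m2 = mean(a), mean(b)
    ss1 = sum((x - m1) ** 2 for x in a)
    ss2 = sum((x - m2) ** 2 for x in b)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2

# Hypothetical immobility times in seconds for n=7 KO and n=6 WT mice.
ko = [262.4, 251.0, 280.6, 244.8, 270.2, 258.3, 266.1]
wt = [210.0, 198.5, 225.1, 240.3, 205.7, 218.9]

t, df = students_t(ko, wt)
print(f"t({df}) = {t:.2f}")
```

With n=7 and n=6 this yields df=11, matching the t(11) comparisons reported for the serum zinc data; the two-tailed P value is then read from the t distribution with that df.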
In the present study, we did not find any changes in the serum zinc level in GPR39 KO mice in comparison with WT mice; together with the unchanged corticosterone, this points to a possible correlation between serum zinc and serum corticosterone levels.\nThe depressive-like changes with an anxiety component observed in GPR39 KO mice may result from glutamatergic abnormalities such as those found in zinc deficiency, but this requires further investigation. Zinc, as an NMDA receptor antagonist, modulates the glutamatergic system, which is overexcited during depression. Zinc co-released with glutamate from “gluzinergic” neurons modulates excitability of the brain by attenuating glutamate release (Frederickson et al., 2005). The GPR39 zinc receptor seems to be involved in the mechanism regulating the amount of glutamate in the brain (Besser et al., 2009). Activation of the GPR39 up-regulates KCC2 and thereby enhances Cl− efflux in the postsynaptic neurons, which may potentiate γ-aminobutyric acid type A (GABAA) receptor-mediated inhibition (Chorin et al., 2011).\nOur present study shows that deletion of GPR39 leads to depressive-like behaviors in animals, which may be relevant to depressive disorders in humans. Decreased levels of CREB and BDNF proteins in the hippocampus of GPR39 KO mice support the involvement of GPR39 in the synthesis of CREB and BDNF, proteins that are important in neuronal plasticity and the antidepressant response." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Animals", "Forced Swim Test", "Tail Suspension Test", "Locomotor Activity", "Light/Dark Test", "Zinc Concentration", "Corticosterone Assay", "Western-Blot Analysis", "Statistical Analysis", "Results", "Behavioral Studies of Gpr39 KO Mice", "Serum Zinc Concentration in GPR39 KO Mice", "CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice", "Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice", "GR Protein Levels in Zinc-Deficient and GPR39 KO Mice", "Discussion" ]
[ "Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants probably reflects the fact that the direct pathomechanism of depression remains unknown, and this contributes to the high suicide rate. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a).
Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b), and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013; Swardfager et al., 2013a).\nZinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in the brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). The GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cyclic adenosine monophosphate (cAMP) response element following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of the GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of the GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013).
In the present study, we investigated behavior in mice lacking GPR39, as well as the hypothalamic-pituitary-adrenal (HPA) axis and proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.", " Animals All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków.\nCD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum.\nGPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used.\nAs with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions.
GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts.\n Forced Swim Test The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test.\nThe immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al.
(1977) described FST as being a means of evaluating potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.\n Tail Suspension Test WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Młyniec and Nowak (2012). Animals were fastened with medical adhesive tape by the tail 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually.\n Locomotor Activity Locomotor activity was measured by photoresistor actometers.
The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.\n Light/Dark Test WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to freely explore. The following parameters were quantified in the test: 1) time spent in lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossing (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).\n Zinc Concentration The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b).
This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve the final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used as well as a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8 mA). The detection limits for Zn were about 0.4mg/L.\n Corticosterone Assay The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum by ethanol. This extract (ethanol-serum) was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and with a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal. Incubation time for the samples was established for 10 minutes at 4°C with 0.2mL of 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335).
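The conversion from counted radioactivity to a concentration in such a competitive radioimmunoassay rests on a log-logit standard curve: the logit of the bound tracer fraction is linear in the logarithm of the dose, so a least-squares line fitted to the standards can be inverted for unknown samples. A minimal sketch follows; the calibration pairs (dose, fraction of tracer bound) are hypothetical, not the assay's actual standards:

```python
import math

# Hypothetical RIA calibration: fraction of tracer bound (B/B0) for known
# corticosterone standards (ng/mL). Illustrative values only.
standards = [(1.0, 0.82), (5.0, 0.55), (25.0, 0.28), (125.0, 0.10)]

def logit(p):
    return math.log(p / (1.0 - p))

# Fit logit(B/B0) = a + b * log10(dose) by ordinary least squares:
# the log-logit transform makes the sigmoid standard curve linear.
xs = [math.log10(dose) for dose, bound in standards]
ys = [logit(bound) for dose, bound in standards]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def dose_from_bound(bound_fraction):
    """Invert the fitted log-logit line: B/B0 -> concentration (ng/mL)."""
    return 10 ** ((logit(bound_fraction) - a) / b)

print(f"sample at B/B0 = 0.40 -> {dose_from_bound(0.40):.1f} ng/mL")
```

Because binding falls as dose rises, the fitted slope is negative and the inversion maps lower bound fractions to higher concentrations.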
The corticosterone content was calculated using a log-logit transformation.\n Western-Blot Analysis Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of such proteins as CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice were previously subjected to the FST. After rapid decapitation of the mice (24 hours after FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis took place.\nThe frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes.
To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated in a secondary antibody with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.\n Statistical Analysis The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant.", "All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków.\nCD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum.\nGPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used.\nAs with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts.", "The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test.\nThe immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al.
(1977) described FST as being a means of evaluating potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.", "WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Młyniec and Nowak (2012). Animals were fastened with medical adhesive tape by the tail 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually.", "Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.", "WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to freely explore. The following parameters were quantified in the test: 1) time spent in lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossing (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).", "The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve the final concentration of 5mg/L.
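In total reflection X-ray fluorescence with an internal standard, quantification reduces to a ratio: the analyte concentration follows from its fluorescence intensity relative to the standard's, scaled by a relative sensitivity factor. A minimal sketch under that standard TXRF relation; the peak intensities and sensitivity factor below are hypothetical, with only the 5 mg/L gallium concentration taken from the text:

```python
# TXRF internal-standard quantification:
#   C_Zn = C_Ga * (I_Zn / I_Ga) / S_Zn
# where C_Ga is the known internal-standard concentration, I are measured
# peak intensities, and S_Zn is the sensitivity of Zn relative to Ga.
GA_CONC_MG_L = 5.0  # gallium internal standard concentration, as in the text

def zinc_conc(i_zn, i_ga, rel_sensitivity_zn):
    """Serum Zn concentration (mg/L) from hypothetical peak intensities."""
    return GA_CONC_MG_L * (i_zn / i_ga) / rel_sensitivity_zn

# Hypothetical peak areas (counts) and relative sensitivity factor:
c = zinc_conc(i_zn=4200.0, i_ga=11000.0, rel_sensitivity_zn=1.05)
print(f"serum Zn ≈ {c:.2f} mg/L")
```

With these illustrative numbers the result lands near the ~1.7 to 1.8 mg/L serum zinc values reported for the mice, but the inputs are invented for the sketch.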
For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used as well as a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8 mA). The detection limits for Zn were about 0.4mg/L.", "The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum by ethanol. This extract (ethanol-serum) was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and with a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal. Incubation time for the samples was established for 10 minutes at 4°C with 0.2mL of 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.", "Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of such proteins as CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice were previously subjected to the FST. After rapid decapitation of the mice (24 hours after FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis took place.\nThe frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). 
The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated in a secondary antibody with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.", "The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant.", " Behavioral Studies of Gpr39 KO Mice Before experiments, mice were weighed. There were no differences between WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of the GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. 
We found a more pronounced increase in immobility time in GPR39 KO vs WT using a modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05, **p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=0.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO Mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.\n Serum Zinc Concentration in GPR39 KO Mice There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790].\n CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice The effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3.
GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nIn a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).\nThe effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nIn a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). 
There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).\n Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.\nThe effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.\n GR Protein Levels in Zinc-Deficient and GPR39 KO Mice Administration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. 
Values are the means ± SEM of 6 to 7 animals per group. * p < 0.05 vs. proper control.\nAdministration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. * p < 0.05 vs. proper control.", "Before experiments, mice were weighed. There were no differences between WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of the GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found a more significant increase in immobility time in GPR39 KO vs WT using a modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. * p < 0.05, ** p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. 
p* < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR 39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.", "There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790].", "The effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nIn a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).", "The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. 
A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT mice [t(9)=1.298, P=.2266] (Figure 4B).

Figure 4. The effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.

GR Protein Levels in Zinc-Deficient and GPR39 KO Mice: Administration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor (GR) levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figures 5A and 5B, respectively). There were no changes in GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figures 5C and 5D, respectively).

Figure 5. The effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.

Discussion: In the present study, we found that elimination of GPR39 leads to the development of depressive-like behavior, as measured by the FST and TST. Additionally, we found decreased entries into the lit compartment, decreased line crossing, and increased immobility time in the light/dark test, indicating an anxiety-like phenotype in GPR39 KO mice. Although not statistically significant, we also observed tendencies towards less time spent in the lit compartment (decreased by 17%) and increased freezing behavior (28%) in GPR39 KO compared with WT mice.

GPR39 was found to be activated by zinc ions (Holst et al., 2007; Yasuda et al., 2007), and the link between zinc and depression is well known (Maes et al., 1997, 1999; Szewczyk et al., 2011; Swardfager et al., 2013a, 2013b). Ishitobi et al.
(2012) did not find behavioral abnormalities in the FST after administering antisense DNA for GPR39-1b. In our present study, mice with a general GPR39 KO were used. The full-length GPR39-1a isoform was found to be a receptor for zinc ions, whereas GPR39-1b, corresponding to the 5-transmembrane truncated form (Egerod et al., 2007), did not respond to zinc stimulation, which means that the GPR39-1b splice variant is not a receptor for Zn2+ (Yasuda and Ishida, 2014).

Activation of GPR39 triggers diverse neuronal pathways (Holst et al., 2004, 2007; Popovics and Stewart, 2011) that may be involved in neuroprotection (Depoortere, 2012). Zinc stimulates GPR39 activity, which activates the Gαs, Gαq, and Gα12/13 pathways (Holst et al., 2007). The Gαq pathway triggers diverse downstream kinases and mediates CREB activation and cyclic adenosine monophosphate response element-dependent transcription. Our previous study showed decreased CREB, BDNF, and TrkB proteins in the hippocampus of mice under zinc-deficient conditions (Młyniec et al., 2014b). Moreover, disruption of the CaM/CaMKII/CREB signaling pathway was found after administration of a zinc-deficient diet for 5 weeks (Gao et al., 2011). Since GPR39 was discovered to be a receptor for zinc, we previously investigated whether GPR39 may be involved in the pathophysiology of depression and suicidal behavior. In our postmortem study, we found GPR39 down-regulation in the hippocampus and frontal cortex of suicide victims (Młyniec et al., 2014b). In the present study, we investigated whether GPR39 KO would decrease levels of proteins such as CREB, BDNF, and TrkB, which have also been found to be impaired in depression and in suicide victims (Dwivedi et al., 2003; Pandey et al., 2007).
Indeed, we found that lack of the GPR39 gene causes CREB and BDNF reduction in the hippocampus, but not in the frontal cortex, suggesting that the hippocampus might be a specific region for CREB and BDNF down-regulation in the absence of the zinc receptor. The CA3 region of the hippocampus seems to be strongly involved in zinc neurotransmission. Besser et al. (2009) found that GPR39 is activated by synaptically released zinc ions in the CA3 area of the hippocampus. This activation triggers Ca2+ and Ca2+/calmodulin kinase II signaling, suggesting a role in neuron survival/neuroplasticity in this brain area (Besser et al., 2009), which is of importance in antidepressant action. In this study, we did not find any changes in TrkB levels in either the hippocampus or frontal cortex; in the case of the hippocampus, this may reflect a compensatory mechanism, and it needs further investigation.

There is strong evidence that zinc deficiency leads to hyperactivation of the HPA axis (Watanabe et al., 2010; Takeda and Tamano, 2012; Młyniec et al., 2012, 2013a), which is activated as a reaction to stress. The activity of the HPA axis is regulated by negative feedback through glucocorticoid receptors (GRs), which are present in the brain, mainly in the hippocampus (Herman et al., 2005). This mechanism has been shown to be impaired in mood disorders. In the present study, we compared corticosterone and GR levels in zinc-deficient and GPR39 KO mice. We found an elevated corticosterone concentration in the serum and decreased GR levels in the hippocampus and frontal cortex of mice receiving a zinc-deficient diet. However, there were no changes in corticosterone or GR levels in GPR39 KO mice in comparison with WT mice. This suggests that the depressive-like behavior observed in mice lacking the GPR39 gene is not due to higher corticosterone concentrations and that there is no direct link between GPR39 and the HPA axis.
In the present study, we did not find any changes in the serum zinc level in GPR39 KO mice in comparison with WT mice, which is consistent with a correlation between serum zinc and serum corticosterone.

Depressive-like changes with a component of anxiety observed in GPR39 KO mice may result from glutamatergic abnormalities of the kind found under zinc deficiency, but this requires further investigation. Zinc, as an NMDA receptor antagonist, modulates the glutamatergic system, which is overexcited during depression. Zinc co-released with glutamate from "gluzinergic" neurons modulates the excitability of the brain by attenuating glutamate release (Frederickson et al., 2005). The GPR39 zinc receptor seems to be involved in the mechanism regulating the amount of glutamate in the brain (Besser et al., 2009). Activation of GPR39 up-regulates KCC2 and thereby enhances Cl− efflux in postsynaptic neurons, which may potentiate γ-aminobutyric acidA-mediated inhibition (Chorin et al., 2011).

Our present study shows that deletion of GPR39 leads to depressive-like behaviors in animals, which may be relevant to depressive disorders in humans. Decreased levels of CREB and BDNF proteins in the hippocampus of GPR39 KO mice support the involvement of GPR39 in the synthesis of CREB and BDNF, proteins that are important in neuronal plasticity and the antidepressant response.
Keywords: GPR39, zinc receptor, depression, HPA axis, CREB
Introduction: Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the direct pathomechanism of depression being unknown, and this contributes to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b) and that zinc supplementation may produce antidepressant effects, alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013; Swardfager et al., 2013a).
Zinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cyclic adenosine monophosphate (cAMP) response element following inositol 1,4,5-trisphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013). In the present study, we investigated behavior in mice lacking GPR39, as well as hypothalamic-pituitary-adrenal (HPA) axis activity and levels of proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.

Methods: Animals: All of the procedures were conducted according to the National Institutes of Health Animal Care and Use Committee guidelines and were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków.
CD-1 male mice (~22 g) were housed with a natural day-night cycle, a temperature of 22±2°C, and humidity of 55±5%. The mice received a zinc-adequate (33.5 mg Zn/kg) or zinc-deficient (0.2 mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. Access to food and water was ad libitum. GPR39 (−/−) male mice, as described by Holst et al. (2009), were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction, using specific primers for the WT sequence and one specific primer for the insert sequence. As with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc content.

Forced Swim Test: The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after the adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test. The immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described the FST as a means of evaluating the potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.

Tail Suspension Test: WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Młyniec and Nowak (2012). Animals were fastened by the tail with medical adhesive tape 30 cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time (when mice hung passively without limb movement) was scored manually.

Locomotor Activity: Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.

Light/Dark Test: WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to explore freely. The following parameters were quantified: 1) time spent in the lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossings (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).

Zinc Concentration: The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence (TXRF) as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample (20 mL) as an internal standard to achieve a final concentration of 5 mg/L. For all measurements, the TXRF spectrometer Nanohunter (Rigaku) was used, with a single measurement time of 2000 seconds and a Mo X-ray tube (50 kV, 0.8 mA). The detection limit for Zn was about 0.4 mg/L.
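The internal-standard quantification underlying TXRF can be sketched numerically. This is an illustrative reconstruction, not the authors' code: an element's concentration is obtained from its fluorescence peak intensity relative to the gallium spike, scaled by relative sensitivity factors. The intensities and sensitivities below are hypothetical example values; only the 5 mg/L gallium concentration comes from the text.

```python
# Illustrative sketch (not the authors' code): internal-standard
# quantification as used in TXRF. All numeric inputs except the
# 5 mg/L gallium spike are hypothetical.

def txrf_concentration(n_analyte, n_standard, s_analyte, s_standard, c_standard):
    """Analyte concentration (mg/L) from peak intensities (counts),
    relative sensitivity factors, and the known internal-standard
    concentration (mg/L)."""
    return (n_analyte / n_standard) * (s_standard / s_analyte) * c_standard

# Hypothetical zinc peak of 12000 counts vs 36000 counts for the
# 5 mg/L gallium spike, with assumed sensitivities of 1.0 (Zn), 1.2 (Ga).
zn = txrf_concentration(12000, 36000, 1.0, 1.2, 5.0)
print(round(zn, 2))  # 2.0
```

Any concentration below the stated ~0.4 mg/L detection limit would be reported as not detectable rather than as a numeric value.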
Corticosterone Assay: The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum with ethanol. The ethanol-serum extract was dried under a nitrogen stream and then dissolved in 0.1 mL of 0.05 mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and with a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal; samples were incubated for 10 minutes at 4°C with 0.2 mL of a 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator, and the radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.

Western-Blot Analysis: Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and zinc-deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice had previously been subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis. The frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein in the supernatant was determined (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). The membranes were then incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G secondary antibody (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.
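The loading-control normalization described for the densitometry can be sketched numerically. This is an illustrative sketch, not the authors' analysis script: the band densities are hypothetical arbitrary units, and expressing KO lanes relative to the WT mean is one common convention, assumed here for illustration.

```python
# Illustrative sketch (not the authors' code): per-lane normalization of
# a target band (e.g. CREB) to its loading-control band, then expressing
# KO lanes as a fraction of the wild-type mean. All densities are
# hypothetical arbitrary units.

def normalize(band, loading_control):
    """Ratio of target-band density to loading-control density, per lane."""
    return [b / lc for b, lc in zip(band, loading_control)]

wt_creb = normalize([1520, 1610, 1480], [2000, 2100, 1950])
ko_creb = normalize([1100, 1050, 1180], [2050, 1980, 2010])

wt_mean = sum(wt_creb) / len(wt_creb)
# Express every KO lane relative to the WT mean (WT average becomes 1.0)
ko_relative = [x / wt_mean for x in ko_creb]
print(round(sum(ko_relative) / len(ko_relative), 2))  # 0.72
```

With these hypothetical numbers the KO group would sit at roughly 72% of the WT level, the kind of reduction a bar chart such as Figure 3A summarizes.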
The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band. Statistical Analysis The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant. The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant. Animals: All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków. CD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum. GPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used. 
As with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with an appropriate zinc content. Forced Swim Test: The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility was measured during the final 4 minutes of the test, after a 2-minute adaptation period. Immobility time in the FST reflects behavioral despair, with prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described the FST as a means of evaluating the potential antidepressant properties of drugs, we also prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which the duration of immobility was measured. Tail Suspension Test: WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Mlyniec and Nowak (2012). Animals were fastened by the tail with medical adhesive tape and suspended 30cm below a flat surface for 6 minutes. During this period, the total immobility time (when mice hung passively without limb movement) was scored manually. Locomotor Activity: Locomotor activity was measured by photoresistor actometers. Mice were placed individually in an actometer, and the number of times the light beams were crossed by GPR39 KO or WT mice was counted at 2, 4, 6, and 8 minutes of the test. Light/Dark Test: WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments connected by a small opening located in the center of the common partition.
Mice were individually placed in the apparatus for 10 minutes and allowed to explore freely. The following parameters were quantified: 1) time spent in the lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossings (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters). Zinc Concentration: The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve a final concentration of 5mg/L. All measurements used the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku), with a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8mA). The detection limit for Zn was about 0.4mg/L. Corticosterone Assay: The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum with ethanol. The ethanol-serum extract was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with 0.1mL of a 1,2,6,7-[3H]-corticosterone solution and 0.1mL of a corticosterone antibody solution (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal: samples were incubated for 10 minutes at 4°C with 0.2mL of a 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator, and the radioactivity was measured in a counter (Beckman LS 335). The corticosterone content was calculated using a log-logit transformation.
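The final calculation step, the log-logit transformation, linearizes the sigmoid standard curve of a competitive radioimmunoassay: logit(B/B0) is fitted as a straight line against log10(dose), and unknown concentrations are read off the inverted line. A sketch under assumed values (the doses and bound fractions below are hypothetical illustrations, not data from this assay):

```python
import math

def logit(p):
    # logit of the bound fraction B/B0
    return math.log(p / (1.0 - p))

def fit_line(xs, ys):
    # ordinary least-squares fit y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# hypothetical standards: (dose in ng/mL, bound fraction B/B0)
standards = [(0.5, 0.85), (1.0, 0.72), (2.0, 0.55), (4.0, 0.38), (8.0, 0.24)]
a, b = fit_line([math.log10(d) for d, _ in standards],
                [logit(f) for _, f in standards])

def dose_from_bound_fraction(bb0):
    # invert the fitted log-logit line to recover concentration
    return 10 ** ((logit(bb0) - a) / b)
```

Because unlabeled corticosterone displaces the tracer, the bound fraction falls as dose rises, so the fitted slope is negative.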
Western-Blot Analysis: Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of the CREB, BDNF, and TrkB proteins were also determined, as described by Młyniec et al. (2014b). All mice were previously subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis. The frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (by semi-dry transfer) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). The membranes were then incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G secondary antibody (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad).
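The densitometric analysis and group comparison reduce to simple arithmetic: divide each target band density by its lane's loading-control density, summarize each group as mean ± SEM, and compare groups with a two-sample Student t test. A minimal sketch of that arithmetic (the band densities are hypothetical illustrative numbers, not values from these blots; the authors' actual analysis used Image Lab and GraphPad Prism):

```python
import statistics

def normalize(bands, control):
    # divide each target band density by its lane's loading-control density
    return [b / c for b, c in zip(bands, control)]

def mean_sem(values):
    # group summary: mean and standard error of the mean
    return statistics.mean(values), statistics.stdev(values) / len(values) ** 0.5

def t_statistic(a, b):
    # two-sample Student t test (pooled variance), df = len(a) + len(b) - 2
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# hypothetical densities for one protein: WT vs KO lanes with loading controls
wt = normalize([1200.0, 980.0, 1100.0], [1500.0, 1400.0, 1450.0])
ko = normalize([640.0, 710.0, 590.0], [1480.0, 1390.0, 1510.0])
t = t_statistic(wt, ko)
```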
To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.
Statistical Analysis: The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered statistically significant.
Results: Behavioral Studies of GPR39 KO Mice: Before experiments, mice were weighed. There were no differences in body weight between the WT and GPR39 KO groups [t(10)=0.2715, P=.7916]. The effect of deletion of GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found an even more significant increase in immobility time in GPR39 KO vs WT mice using a modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2). The effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05, **p < 0.001 vs wild-type control. The effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05 vs wild-type control. There were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=0.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1). The Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.
In the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, decreased line crossing, and enhanced immobility time compared with WT control mice (Table 2). Behavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO Mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.
Serum Zinc Concentration in GPR39 KO Mice: There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790].
CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice: The effect of deletion of GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figure 3A and D, respectively). The effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control. In a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).
Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice: The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT mice [t(9)=1.298, P=.2266] (Figure 4B). The effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.
GR Protein Levels in Zinc-Deficient and GPR39 KO Mice: Administration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively). The effect of a zinc-deficient diet (A, B) or GPR39 knockout (C, D) on glucocorticoid receptor levels in the hippocampus (A, C) and frontal cortex (B, D) of mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.
Discussion: In the present study, we found that elimination of GPR39 leads to the development of depressive-like behavior, as measured by the FST and TST. Additionally, we found decreased entries into the lit compartment, decreased line crossing, and increased immobility time in the light/dark test, indicating an anxiety-like phenotype in GPR39 KO mice.
Although not statistically significant, we also observed tendencies towards less time spent in the lit compartment (decreased by 17%) and increased freezing behavior (28%) in GPR39 KO compared with WT mice. The GPR39 was found to be activated by zinc ions (Holst et al., 2007; Yasuda et al., 2007), and the link between zinc and depression is well known (Maes et al., 1997, 1999; Szewczyk et al., 2011; Swardfager et al., 2013a, 2013b). Ishitobi et al. (2012) did not find behavioral abnormalities in the FST after administering antisense DNA for GPR39-1b. In our present study, mice with a general GPR39 KO were used. The full-length GPR39-1a isoform was found to be a receptor for zinc ions, whereas GPR39-1b, corresponding to the 5-transmembrane truncated form (Egerod et al., 2007), did not respond to zinc stimulation, which means that the GPR39-1b splice variant is not a receptor for Zn2+ (Yasuda and Ishida, 2014). Activation of the GPR39 triggers diverse neuronal pathways (Holst et al., 2004, 2007; Popovics and Stewart, 2011) that may be involved in neuroprotection (Depoortere, 2012). Zinc stimulates GPR39 activity, which activates the Gαs, Gαq, and Gα12/13 pathways (Holst et al., 2007). The Gαq pathway triggers diverse downstream kinases and mediates CREB activation and cyclic adenosine monophosphate response element-dependent transcription. Our previous study showed decreased CREB, BDNF, and TrkB proteins in the hippocampus of mice under zinc-deficient conditions (Młyniec et al., 2014b). Moreover, disruption of the CaM/CaMKII/CREB signaling pathway was found after administration of a zinc-deficient diet for 5 weeks (Gao et al., 2011). Since GPR39 was discovered to be a receptor for zinc, we previously investigated whether GPR39 may be involved in the pathophysiology of depression and suicidal behavior. In our postmortem study, we found GPR39 down-regulation in the hippocampus and the frontal cortex of suicide victims (Młyniec et al., 2014b).
In the present study, we investigated whether GPR39 KO would decrease levels of such proteins as CREB, BDNF, and TrkB, which were also found to be impaired in depression in suicide victims (Dwivedi et al., 2003; Pandey et al., 2007). Indeed, we found that lack of the GPR39 gene causes CREB and BDNF reduction in the hippocampus, but not in the frontal cortex, suggesting that the hippocampus might be a specific region for CREB and BDNF down-regulation in the absence of a zinc receptor. The CA3 region of the hippocampus seems to be strongly involved in zinc neurotransmission. Besser et al. (2009) found that the GPR39 is activated by synaptically released zinc ions in the CA3 area of the hippocampus. This activation triggers Ca2+ and Ca2+/calmodulin kinase II, suggesting that it has a role in neuron survival/neuroplasticity in this brain area (Besser et al., 2009), which is of importance in antidepressant action. In this study, we did not find any changes in TrkB levels in either the hippocampus or frontal cortex; in the case of the hippocampus, this may be a compensatory mechanism, and it needs further investigation. There is strong evidence that zinc deficiency leads to hyperactivation of the HPA axis (Watanabe et al., 2010; Takeda and Tamano, 2012; Młyniec et al., 2012, 2013a), which is activated as a reaction to stress. The activity of the HPA axis is regulated by negative feedback through GR receptors that are present in the brain, mainly in the hippocampus (Herman et al., 2005). This mechanism was shown to be impaired in mood disorders. In the present study, we compared corticosterone and GR receptor levels in zinc-deficient and GPR39 KOs. We found an elevated corticosterone concentration in the serum and decreased GR levels in the hippocampus and frontal cortex of mice receiving a zinc-deficient diet. However, there were no changes in corticosterone or GR levels in GPR39 KO mice in comparison with WT mice. 
This suggests that the depressive-like behavior observed in mice lacking the GPR39 gene is not due to higher corticosterone concentrations and that there is no link between GPR39 and the HPA axis. In the present study, we also did not find any changes in the serum zinc level in GPR39 KO mice in comparison with WT mice, which indicates a possible correlation between serum zinc and serum corticosterone. The depressive-like changes with a component of anxiety observed in GPR39 KO mice may result from glutamatergic abnormalities of the kind found under zinc deficiency, but this requires further investigation. Zinc, as an NMDA receptor antagonist, modulates the glutamatergic system, which is overexcited during depression. Zinc co-released with glutamate from "gluzinergic" neurons modulates the excitability of the brain by attenuating glutamate release (Frederickson et al., 2005). The GPR39 zinc receptor seems to be involved in the mechanism regulating the amount of glutamate in the brain (Besser et al., 2009). Activation of the GPR39 up-regulates KCC2 and thereby enhances Cl− efflux in postsynaptic neurons, which may potentiate γ-aminobutyric acidA-mediated inhibition (Chorin et al., 2011). Our present study shows that deletion of GPR39 leads to depressive-like behaviors in animals, which may be relevant to depressive disorders in humans. Decreased levels of CREB and BDNF proteins in the hippocampus of GPR39 KO mice support the involvement of GPR39 in the synthesis of CREB and BDNF, proteins that are important in neuronal plasticity and the antidepressant response.
Background: Zinc may act as a neurotransmitter in the central nervous system by activation of the GPR39 metabotropic receptors. Methods: In the present study, we investigated whether GPR39 knockout would cause depressive-like and/or anxiety-like behavior, as measured by the forced swim test, tail suspension test, and light/dark test. We also investigated whether lack of GPR39 would change the levels of cAMP response element-binding protein (CREB), brain-derived neurotrophic factor (BDNF), and tropomyosin-related kinase B (TrkB) protein in the hippocampus and frontal cortex of GPR39 knockout mice subjected to the forced swim test, as measured by Western-blot analysis. Results: In this study, GPR39 knockout mice showed an increased immobility time in both the forced swim test and tail suspension test, indicating depressive-like behavior, and displayed an anxiety-like phenotype. GPR39 knockout mice had lower CREB and BDNF levels in the hippocampus, but not in the frontal cortex, which indicates region specificity for the impaired CREB/BDNF pathway (which is important in the antidepressant response) in the absence of GPR39. There were no changes in TrkB protein in either structure. In the present study, we also investigated activity of the hypothalamus-pituitary-adrenal axis under both zinc- and GPR39-deficient conditions. Zinc-deficient mice had higher serum corticosterone levels and lower glucocorticoid receptor levels in the hippocampus and frontal cortex. Conclusions: There were no such changes in the GPR39 knockout mice in comparison with the wild-type control mice, which does not support a role of GPR39 in hypothalamus-pituitary-adrenal axis regulation. The results of this study indicate the involvement of the GPR39 Zn(2+)-sensing receptor in the pathophysiology of depression with a component of anxiety.
Introduction: Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the direct pathomechanism of depression being unknown, and this contributes to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b) and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013; Swardfager et al., 2013a).
Zinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). The GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cyclic adenosine monophosphate (cAMP) response element following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of the GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depressive-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of the GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013). In the present study, we investigated behavior in mice lacking GPR39, as well as the hypothalamus-pituitary-adrenal (HPA) axis and proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.
Background: Zinc may act as a neurotransmitter in the central nervous system by activation of the GPR39 metabotropic receptors. Methods: In the present study, we investigated whether GPR39 knockout would cause depressive-like and/or anxiety-like behavior, as measured by the forced swim test, tail suspension test, and light/dark test. We also investigated whether lack of GPR39 would change levels of cAMP response element-binding protein (CREB),brain-derived neurotrophic factor (BDNF) and tropomyosin related kinase B (TrkB) protein in the hippocampus and frontal cortex of GPR39 knockout mice subjected to the forced swim test, as measured by Western-blot analysis. Results: In this study, GPR39 knockout mice showed an increased immobility time in both the forced swim test and tail suspension test, indicating depressive-like behavior and displayed anxiety-like phenotype. GPR39 knockout mice had lower CREB and BDNF levels in the hippocampus, but not in the frontal cortex, which indicates region specificity for the impaired CREB/BDNF pathway (which is important in antidepressant response) in the absence of GPR39. There were no changes in TrkB protein in either structure. In the present study, we also investigated activity in the hypothalamus-pituitary-adrenal axis under both zinc- and GPR39-deficient conditions. Zinc-deficient mice had higher serum corticosterone levels and lower glucocorticoid receptor levels in the hippocampus and frontal cortex. Conclusions: There were no changes in the GPR39 knockout mice in comparison with the wild-type control mice, which does not support a role of GPR39 in hypothalamus-pituitary-adrenal axis regulation. The results of this study indicate the involvement of the GPR39 Zn(2+)-sensing receptor in the pathophysiology of depression with component of anxiety.
9,135
330
18
[ "mice", "gpr39", "ko", "gpr39 ko", "zinc", "wt", "ko mice", "time", "gpr39 ko mice", "test" ]
[ "test", "test" ]
null
[CONTENT] GPR39 | zinc receptor | depression | HPA axis | CREB [SUMMARY]
null
[CONTENT] Animals | Brain-Derived Neurotrophic Factor | CREB-Binding Protein | Corticosterone | Dark Adaptation | Depression | Disease Models, Animal | Down-Regulation | Hindlimb Suspension | Hippocampus | Immobility Response, Tonic | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Motor Activity | Receptor, trkB | Receptors, G-Protein-Coupled | Swimming | Time Factors | Zinc [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] mice | gpr39 | ko | gpr39 ko | zinc | wt | ko mice | time | gpr39 ko mice | test [SUMMARY]
null
[CONTENT] 2013 | depression | antidepressant | treatment | zinc | involved | swardfager | patients | high | 2012 [SUMMARY]
[CONTENT] mice | described | test | anti | minutes | membranes | corticosterone | time | immobility | gpr39 [SUMMARY]
null
[CONTENT] zinc | gpr39 | found | study | hippocampus | present | 2007 | present study | creb | mice [SUMMARY]
[CONTENT] mice | gpr39 | ko | zinc | gpr39 ko | wt | test | immobility | time | corticosterone [SUMMARY]
[CONTENT] Zinc [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] Zinc ||| ||| ||| ||| ||| CREB | CREB ||| ||| ||| ||| ||| [SUMMARY]
Increased Risk of Atrial Fibrillation in the Early Period after Herpes Zoster Infection: a Nationwide Population-based Case-control Study.
29805341
Herpes zoster (HZ) is an inflammatory disease that can result in autonomic dysfunction, which in turn may lead to atrial fibrillation (AF).
BACKGROUND
From the Korean National Health Insurance Service database of 738,559 subjects, patients newly diagnosed with HZ (n = 30,685) between 2004 and 2011, with no history of HZ or AF, were identified. For the non-HZ control group, 122,740 age- and sex-matched subjects were selected. AF development in the first two years following HZ diagnosis and during the overall follow-up period was compared among the severe (requiring hospitalization, n = 2,213), mild (n = 28,472), and non-HZ (n = 122,740) groups.
METHODS
There were 2,204 (1.4%) patients diagnosed with AF during follow-up, and 825 (0.5%) were diagnosed within the first two years after HZ. The severe HZ group showed higher rates of AF development (6.4 per 1,000 patient-years [PTPY]) compared to the mild-HZ group (2.9 PTPY) and the non-HZ group (2.7 PTPY). The risk of developing AF was higher in the first two years after HZ diagnosis in the severe HZ group (10.6 PTPY vs. 2.7 PTPY in the mild-HZ group and 2.6 PTPY in the non-HZ group).
RESULTS
Severe HZ requiring hospitalization is associated with an increased risk of incident AF, and the risk is higher in the first two years following HZ diagnosis.
CONCLUSION
[ "Adult", "Aged", "Atrial Fibrillation", "Case-Control Studies", "Databases, Factual", "Female", "Herpesvirus 3, Human", "Humans", "Incidence", "Male", "Middle Aged", "Risk Factors", "Severity of Illness Index", "Varicella Zoster Virus Infection" ]
5966375
INTRODUCTION
The incidence of atrial fibrillation (AF), the most common type of clinically significant cardiac arrhythmia, has increased over the last decade.1 AF is associated with high medical costs, as well as an increased risk of ischemic stroke and death.2 Although the precise mechanisms involved in AF are not well understood, a growing body of evidence indicates that inflammation and the autonomic nervous system are involved in the pathogenesis of AF.345 Clinical and experimental studies have suggested that AF triggers and drivers in the pulmonary veins and left atrial wall are at least partially modulated by the autonomic nervous system.6 Herpes zoster (HZ) is caused by reactivation of the latent varicella-zoster virus (VZV) in the cranial nerve, dorsal root, or autonomic ganglia, with spread of the virus along the sensory nerve to the dermatome, producing radiculoneuritis and cutaneous manifestations. Unvaccinated persons over 85 years of age have a 50% risk of developing HZ.78 HZ may result in serious neurologic or inflammatory sequelae, and severe cases often require hospitalization (up to 3% of HZ cases).9 The neurologic sequelae of HZ are well known and can include autonomic dysfunction.1011 The vagus nerve or its ganglia can be involved in HZ, resulting in dysphagia, nausea, vomiting, gastric upset, or cardiac irregularities.12 Given the pathogenesis of the two diseases, AF can occur as a complication of HZ, but the relationship between AF and HZ has not been studied widely. We hypothesized that HZ may be associated with AF development.
METHODS
Study population
From the Korean National Health Insurance Service (NHIS) database, we used a random multistage representative sample of 1,034,777 people. Consequently, a total of 738,559 subjects were selected for analysis after patients less than 20 years of age and patients who had been treated for either AF or HZ during the screening period (2002–2004) were excluded. Newly diagnosed HZ patients (n = 30,685) were identified using health insurance claims data from 2005 to 2011. We defined the date on which each patient was first diagnosed with HZ as the index date. On the same index date, four age- and sex-matched non-HZ control subjects (n = 122,740) were selected for each HZ patient. A flowchart of the patient enrollment process is shown in Supplementary Fig. 1. Follow-up data were obtained up to 2013.
Database
Records from the NHIS database included patients' sociodemographic information, their use of inpatient and outpatient services, and pharmacy dispensing claims. The majority (97.1%) of the Korean population (approximately 50 million people) is covered by the mandatory NHIS. Diagnoses were confirmed using the International Classification of Disease, Tenth Revision, Clinical Modification (ICD-10-CM) codes. In the present study, multistage sampling was performed, and we used a random sample of one million people selected from the NHIS database. The staff of the NHIS confirmed that the study cohort was a representative sample of the general Korean population for the period 2002 to 2013.13
Definition
Instances of newly diagnosed AF (I480–I484, I489) identified during follow-up were analyzed as in our previous studies,1415 and the HZ and non-HZ groups were compared. Patients who were diagnosed with mitral stenosis (I050, I052, and I059) or those who had mechanical heart valves (Z952–Z954) implanted from 2002 to 2013 were excluded from the analysis. The presence and severity of HZ were determined on the basis of diagnostic and procedure codes recorded during the study period from 2005 to 2011. An HZ patient was identified by the diagnostic code for HZ (B02) together with receipt of either oral anti-viral medication for at least five days or an anti-viral injection more than once. Hospitalization due to HZ was obtained from the database. A newly diagnosed HZ patient requiring simultaneous hospitalization was defined as a severe HZ case. The remaining HZ patients, who were treated as outpatients, were considered mild HZ cases. The epidemiology of HZ in Korea has been previously reported using NHIS database records.16
Statistical analysis
Baseline characteristics are presented as mean and standard deviation, or numbers and percentages. The differences between continuous values were assessed using an unpaired two-tailed t-test for normally distributed continuous variables and a Mann-Whitney rank-sum test for skewed variables. The χ2 test was used for the comparison of nominal variables. We used risk set sampling for selection of the age- and sex-matched control subjects. Incidence rates were described as the number of events per 1,000 patient-years (PTPY). Hazard ratios (HRs) and the corresponding 95% confidence intervals (CIs) were calculated using Cox proportional hazard models. Comparison of cumulative event rates between the HZ and non-HZ groups was based on Kaplan-Meier censoring estimates and performed with the log-rank test. Crude risks were analyzed in the overall cohort, and adjusted results were subsequently calculated with multivariable Cox regression analyses. Subgroup analyses of multiple cardiovascular risk factors were subsequently performed. The level of statistical significance was set at P < 0.05, and all statistical analyses were performed using SAS version 9.3 (SAS Institute, Cary, NC, USA) and SPSS (IBM® SPSS® Statistics version 22; IBM Corp., Armonk, NY, USA).
Ethics statement
The database is open to any researcher whose study protocols have been approved by the official review committee. This study was approved by the Institutional Review Board of Seoul National University Hospital, Seoul, Korea (approval No. 1607-032-773).
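The statistical analysis above expresses incidence as events per 1,000 patient-years (PTPY). As a minimal, stdlib-only sketch of how such a rate is computed (the follow-up data below are hypothetical, not the study records):

```python
def incidence_per_1000_py(events, total_person_years):
    """Incidence rate per 1,000 patient-years: events / person-years * 1000."""
    if total_person_years <= 0:
        raise ValueError("person-years must be positive")
    return 1000.0 * events / total_person_years

# Hypothetical cohort: each tuple is (years observed, AF event occurred).
follow_up = [(3.0, False), (5.5, True), (2.0, False), (8.0, True), (4.5, False)]

# Each subject contributes their observed time at risk, whether or not
# they had an event (this is what distinguishes a rate from a proportion).
person_years = sum(years for years, _ in follow_up)
events = sum(1 for _, event in follow_up if event)
rate = incidence_per_1000_py(events, person_years)
```

For this toy cohort, 2 events over 23.0 person-years give roughly 87 per 1,000 PTPY; the study's figures (e.g., 6.4 PTPY for severe HZ) come from the same arithmetic applied to the NHIS cohort.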
RESULTS
Baseline characteristics
A total of 153,425 adults (aged over 20) with no history of AF or HZ were analyzed. The baseline characteristics and demographic features of patients according to HZ severity are described in Table 1. The mean age of the total study population was 53.8 ± 15.2 years, and 60.1% of the patients were female. The HZ group had significantly more comorbidities than the non-HZ group, and the severe HZ group had the highest CHA2DS2-VASc score of all three groups. A total of 2,204 (1.4%) patients were diagnosed with new AF, and among them 825 (0.5%) patients were diagnosed within two years of their index (diagnosis) date.
Data are presented as means ± standard deviation (range) or number (%).
HZ = herpes zoster, TIA = transient ischemic attack, AF = atrial fibrillation.
Relative risk of AF development according to HZ severity
The HZ group showed a higher incidence of AF than the non-HZ group. The number of events, calculated incidence rates, and unadjusted and adjusted HRs for AF and stroke according to HZ severity are described in Table 2. Those with severe HZ were at significantly higher risk of developing AF (n = 66, 6.4 per 1,000 PTPY) compared to those with mild HZ (n = 384, 2.9 PTPY) and the non-HZ group (n = 1,732, 2.7 PTPY) (Fig. 1A). In a landmark analysis (Fig. 1B), the risk of developing AF was significantly greater for severe HZ cases in the two-year period following HZ diagnosis; however, after the two-year period, the relative risk of developing AF did not differ significantly among the three groups.
HZ = herpes zoster, AF = atrial fibrillation, HR = hazard ratio, CI = confidence interval.
aIncidence rates were calculated per 1,000 patient-years, within the population who were over 20 years old and not previously diagnosed with AF; bMultivariate adjusted hazard ratios were calculated by Cox regression models, including income level, resident area, and CHA2DS2-VASc score as covariates.
Cumulative incidence of AF. (A) Kaplan-Meier curves with cumulative hazards of overall AF development. (B) Landmark analysis at 2 years.
AF = atrial fibrillation, HZ = herpes zoster.
Severe HZ vs. mild HZ or non-HZ patients
We re-grouped the patients into two groups: a severe HZ group and a control group (mild HZ plus non-HZ patients), because the overall incidence of AF was not significantly different between mild HZ patients and the non-HZ group. The baseline characteristics of the two groups are described in Supplementary Table 1. The overall cumulative incidence of AF within the two-year period following HZ diagnosis was 3.0% in the severe HZ group and 1.4% in the control group (HR, 2.30; 95% CI, 1.81–2.95; P < 0.001). Analysis after adjustment for CHA2DS2-VASc score and socio-economic status (income status and residence) showed the same results as the original analysis (HR, 1.46; 95% CI, 1.14–1.87; P = 0.003 in the severe HZ group). The detailed data are shown in Supplementary Table 2 and Supplementary Fig. 2. The risk of AF development was more pronounced in the two-year period following HZ diagnosis in the severe HZ group.
Subgroup analyses
Formal testing for interactions showed that the overall AF development rates in the severe HZ group and the control group were consistent across multiple subgroups (Fig. 2). Across the two subgroups defined by CHA2DS2-VASc score (cut-off of 3), the risk of AF development was significantly higher in the severe HZ group (Supplementary Tables 3 and 4). The relationship between AF and HZ was strong in the first two years after diagnosis.
Subgroup analyses for AF risk.
AF = atrial fibrillation, HZ = herpes zoster, HR = hazard ratio, CI = confidence interval.
Propensity score matched analysis
Although age and sex were matched between the HZ and non-HZ groups, the patients with HZ had more comorbidities. Therefore, we performed 1:2 propensity score matching; the demographic features of the matched population are shown in Supplementary Table 5. Comorbidities and socioeconomic status (hypertension, diabetes, dyslipidemia, previous stroke/transient ischemic attack [TIA], heart failure, end-stage renal disease, chronic obstructive pulmonary disease, previous ischemic heart disease, low income level, and urban residence), in addition to age and sex, were matched between the HZ and non-HZ groups. Kaplan-Meier curves from the propensity score matched cohort according to HZ severity are presented in Supplementary Table 6 and Supplementary Fig. 3. In accordance with the overall cohort analysis, the risk of AF development was more obvious in the first 2 years after HZ diagnosis, while the relative risk of AF beyond 2 years from the index date did not differ significantly among the three severity groups.
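The 1:2 propensity score matching described above pairs each HZ patient with two non-HZ controls of similar propensity score. A greatly simplified sketch of one common approach, greedy nearest-neighbour matching without replacement within a caliper, is shown below; the caliper value, identifiers, and scores are illustrative assumptions, not the study's actual procedure:

```python
def match_1_to_2(cases, controls, caliper=0.05):
    """Greedy 1:2 nearest-neighbour matching on propensity score.

    cases/controls: lists of (subject_id, propensity_score).
    Returns {case_id: [control_id, control_id]} for cases that found
    two controls within the caliper; controls are used without replacement.
    """
    available = sorted(controls, key=lambda c: c[1])
    matches = {}
    for case_id, score in sorted(cases, key=lambda c: c[1]):
        picked = []
        for _ in range(2):
            # Closest remaining control by absolute score distance.
            best = min(available, key=lambda c: abs(c[1] - score), default=None)
            if best is None or abs(best[1] - score) > caliper:
                break  # no acceptable control left within the caliper
            available.remove(best)
            picked.append(best[0])
        if len(picked) == 2:
            matches[case_id] = picked
    return matches

# Hypothetical subjects: (id, propensity score).
cases = [("hz1", 0.30), ("hz2", 0.70)]
controls = [("c1", 0.29), ("c2", 0.31), ("c3", 0.68), ("c4", 0.72), ("c5", 0.95)]
matches = match_1_to_2(cases, controls)
```

In practice the propensity score itself would first be estimated (e.g., by logistic regression on the listed covariates), and matching implementations typically add tie-breaking, caliper tuning, and balance diagnostics; this sketch only illustrates the pairing step.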
null
null
[ "Study population", "Database", "Definition", "Statistical analysis", "Ethics statement", "Baseline characteristics", "Relative risk of AF development according to HZ severity", "Severe HZ vs. mild HZ or non-HZ patients", "Subgroup analyses", "Propensity score matched analysis" ]
[ "From the Korean National Health Insurance Service (NHIS) database, we used a random multistage representative sample of 1,034,777 people. Consequently, a total of 738,559 subjects were selected for analysis after patients less than 20 years of age and patients who had been treated for either AF or HZ during the screening period (2002–2004) were excluded. Newly diagnosed HZ patients (n = 30,685) were identified using health insurance claims data from 2005 to 2011. We defined the date on which each patient was first diagnosed with HZ as the index date. On the same index date, four age- and sex-matched non-HZ control subjects (n = 122,740) were selected for each HZ patient. A flowchart of the patient enrollment process is shown in Supplementary Fig. 1. Follow-up data was obtained up to 2013.", "Records from the NHIS database included patients' sociodemographic information, their use of inpatient and outpatient services, and pharmacy dispensing claims. The majority (97.1%) of the Korean population (approximately 50 million people) is covered by the mandatory NHIS. Diagnoses were confirmed using the International Classification of Disease, Tenth Revision, Clinical Modification (ICD-10-CM) codes. In the present study, multistage sampling was performed, and we used a random sample of one million people selected from the NHIS database. The staff of the NHIS confirmed that the study cohort was a representative sample of the general Korean population for the period 2002 to 2013.13", "Instances of newly diagnosed AF (I480–I484, I489) identified during follow-up were analyzed as in our previous studies1415 and the HZ and non-HZ group were compared. 
Patients who were diagnosed with mitral stenosis (I050, I052, and I059) or those who had mechanical heart valves (Z952–Z954) implanted from 2002 to 2013 were excluded from the analysis.\nThe presence and severity of HZ was determined on the basis of diagnostic and procedure codes recorded during the study period from 2005 to 2011. A HZ patient was identified by the diagnostic code for HZ (B02) and received either oral anti-viral medication for at least five days, or an anti-viral injection more than once. Hospitalization due to HZ was obtained from the database. A newly diagnosed HZ patient requiring simultaneous hospitalization was defined as a severe HZ case. The remaining HZ patients, who were treated as outpatients, were considered mild HZ cases. The epidemiology of HZ in Korea has been previously reported using NHIS database records.16", "Baseline characteristics are presented as mean and standard deviation, or numbers and percentages. The differences between continuous values were assessed using an unpaired two-tailed t-test for normally distributed continuous variables and a Mann-Whitney rank-sum test for skewed variables. The χ2 test was used for the comparison of nominal variables. We used risk set sampling for selection of the age- and sex-matched control subjects. Incidence rates were described as the number of events per 1,000 patient-years (PTPY). Hazard ratios (HRs) and the corresponding 95% confidence intervals (CIs) were calculated using Cox proportional hazard models. Comparison of cumulative event rates between HZ and non-HZ groups was based on Kaplan-Meier censoring estimates and performed with the log-rank test. Crude risks were analyzed in the overall cohort, and adjusted results were subsequently calculated with the multivariable Cox regression analyses. Subgroup analyses of multiple cardiovascular risk factors were subsequently performed. 
The level of statistical significance was set at P < 0.05, and all statistical analyses were performed using SAS version 9.3 (SAS Institute, Cary, NC, USA) and SPSS (IBM® SPSS® Statistics version 22; IBM Corp., Armonk, NY, USA).", "The database is open to any researcher whose study protocols have been approved by the official review committee. This study was approved by Institutional Review Board of Seoul National University Hospital, Seoul, Korea (approval No. 1607-032-773).", "A total of 153,425 adults (aged over 20) with no history of AF or HZ were analyzed. The baseline characteristics and demographic features of patients according to HZ severity are described in Table 1. The mean age of the total study population was 53.8 ± 15.2 years, and 60.1% of the patients were female. The HZ group had significantly more comorbidities than the non-HZ group, and the severe HZ group had the highest CHA2DS2-VASc score of all three groups. A total of 2,204 (1.4%) patients were diagnosed with new AF, and among them 825 (0.5%) patients were diagnosed within two years of their index (diagnosis) date.\nData are presented as means ± standard deviation (range) or number (%).\nHZ = herpes zoster, TIA = transient ischemic attack, AF = atrial fibrillation.", "The HZ group showed a higher incidence of AF than the non-HZ group. The number of events, calculated incidence rates, and unadjusted and adjusted HRs for AF and stroke according to HZ severity are described in Table 2. Those with severe HZ were at significantly higher risk of developing AF (n = 66, 6.4 per 1,000 PTPY) compared to those with mild HZ (n = 384, 2.9 PTPY) and the non-HZ group (n = 1,732, 2.7 PTPY) (Fig. 1A). In a landmark analysis (Fig. 
1B), the risk of developing AF was significantly greater for severe HZ cases in the two-year period following HZ diagnosis; however, after the two-year period, the relative risk of developing AF did not differ significantly in the three groups.\nHZ = herpes zoster, AF = atrial fibrillation, HR = hazard ratio, CI = confidence interval.\naIncidence rates were calculated per 1,000 patient-years, within the population who were over 20 years old and not previously diagnosed with AF; bMultivariate adjusted hazard ratios were calculated by Cox regression models, including income level, resident area, CHA2DS2-VASc score as covariates.\nCumulative incidence of AF. (A) Kaplan-Meier curves with cumulative hazards of overall AF development. (B) Landmark analysis at 2 years.\nAF = atrial fibrillation, HZ = herpes zoster.", "We re-grouped the patients into two groups: a severe HZ group and a control group (mild HZ plus non-HZ patients). We did this because the overall incidence of AF was not significantly different for mild HZ patients and the non-HZ group. The baseline characteristics of the two groups are described in Supplementary Table 1. The overall cumulative incidence of AF in the severe HZ group and in the control group within the two-year period following HZ diagnosis, was 3.0% and 1.4%, respectively (HR, 2.30; 95% CI, 1.81–2.95; P < 0.001). Analysis after adjustment for CHA2DS2-VASc score and socio-economic status (income status and residence) showed the same results as the original analysis (HR, 1.46; 95% CI, 1.14–1.87; P = 0.003 in the severe HZ group). The detailed data is shown in Supplementary Table 2 and Supplementary Fig. 2. The risk of AF development was more pronounced in the two-year period following HZ diagnosis in the severe HZ group.", "Formal testing for interactions showed that the overall AF development rates in the severe HZ group and the control group were consistent across multiple subgroups (Fig. 2). 
Across the two subgroups defined by CHA2DS2-VASc (cut-off score of 3), the risk of AF development was significantly higher in the severe HZ group (Supplementary Tables 3 and 4). The relationship between AF and HZ was strong in the first two years after diagnosis.\nSubgroup analyses for AF risk.\nAF = atrial fibrillation, HZ = herpes zoster, HR = hazard ratio, CI = confidence interval.", "Although age and sex were matched between the HZ and non-HZ groups, the patients with HZ had more comorbidities. Therefore, we performed 1:2 propensity score matching; the demographic features of the matched population are presented in Supplementary Table 5. In addition to age and sex, comorbidities and socioeconomic status (hypertension, diabetes, dyslipidemia, previous stroke/transient ischemic attack [TIA], heart failure, end-stage renal disease, chronic obstructive pulmonary disease, previous ischemic heart disease, low income level, and urban residence) were matched between the HZ and non-HZ groups. Kaplan-Meier curves from the propensity score matched cohort according to HZ severity are presented in Supplementary Table 6 and Supplementary Fig. 3. Consistent with the overall cohort analysis, the risk of AF development was more evident within 2 years of HZ diagnosis, while the relative risk of AF beyond 2 years from the index date did not differ significantly among the three severity groups." ]
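The 1:2 propensity score matching described above can be sketched in outline. This is an illustrative sketch, not the authors' SAS/SPSS code: it assumes propensity scores have already been estimated (e.g., by logistic regression on the listed covariates) and performs greedy nearest-neighbour matching without replacement within a caliper; the function name and caliper value are hypothetical.

```python
# Illustrative sketch only (the study used SAS/SPSS, not this code): greedy
# 1:2 nearest-neighbour matching on precomputed propensity scores, without
# replacement, restricted to a caliper. Names and caliper are hypothetical.
def match_1_to_2(treated, controls, caliper=0.05):
    """treated, controls: lists of (subject_id, propensity_score).
    Returns {treated_id: [control_id, control_id]} for fully matched cases."""
    pool = dict(controls)                      # control_id -> score, unmatched
    matches = {}
    for tid, ps in sorted(treated, key=lambda t: t[1]):
        picked = []
        for _ in range(2):                     # two controls per HZ case
            if not pool:
                break
            cid = min(pool, key=lambda c: abs(pool[c] - ps))
            if abs(pool[cid] - ps) > caliper:
                break                          # nearest control too far away
            picked.append((cid, pool.pop(cid)))
        if len(picked) == 2:
            matches[tid] = [cid for cid, _ in picked]
        else:
            pool.update(picked)                # return unused controls to pool
    return matches
```

Cases whose nearest available control falls outside the caliper remain unmatched, which mirrors the usual trade-off of caliper matching: better covariate balance at the cost of a smaller matched cohort.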
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study population", "Database", "Definition", "Statistical analysis", "Ethics statement", "RESULTS", "Baseline characteristics", "Relative risk of AF development according to HZ severity", "Severe HZ vs. mild HZ or non-HZ patients", "Subgroup analyses", "Propensity score matched analysis", "DISCUSSION" ]
[ "The incidence of atrial fibrillation (AF), the most common type of clinically significant cardiac arrhythmia, has increased over the last decade.1 AF is associated with high medical costs, as well as an increased risk of ischemic stroke and death.2 Although the precise mechanisms involved in AF are not well understood, a growing body of evidence indicates that inflammation and the autonomic nervous system are involved in the pathogenesis of AF.345 Clinical and experimental studies have suggested that AF triggers and drivers in the pulmonary veins and left atrial wall are at least partially modulated by the autonomic nervous system.6\nHerpes zoster (HZ) is caused by reactivation of the latent varicella-zoster virus (VZV) in the cranial nerve, dorsal root, or autonomic ganglia, with spread of the virus along the sensory nerve to the dermatome producing radiculoneuritis and cutaneous manifestations. Unvaccinated persons over 85 years of age have a 50% risk of developing HZ.78 HZ may result in serious neurologic or inflammatory sequelae, and severe cases often require hospitalization (up to 3% of HZ cases).9 The neurologic sequelae of HZ are well-known and can include autonomic dysfunction.1011 The vagus nerve or its ganglia can be involved in HZ, resulting in dysphagia, nausea, vomiting, gastric upset, or cardiac irregularities.12\nGiven the pathogenesis of the two diseases, AF could occur as a complication of HZ, but the relationship between AF and HZ has not been widely studied. We hypothesized that HZ may be associated with AF development.", " Study population From the Korean National Health Insurance Service (NHIS) database, we used a random multistage representative sample of 1,034,777 people. Consequently, a total of 738,559 subjects were selected for analysis after patients less than 20 years of age and patients who had been treated for either AF or HZ during the screening period (2002–2004) were excluded. 
Newly diagnosed HZ patients (n = 30,685) were identified using health insurance claims data from 2005 to 2011. We defined the date on which each patient was first diagnosed with HZ as the index date. On the same index date, four age- and sex-matched non-HZ control subjects (n = 122,740) were selected for each HZ patient. A flowchart of the patient enrollment process is shown in Supplementary Fig. 1. Follow-up data was obtained up to 2013.\n Database Records from the NHIS database included patients' sociodemographic information, their use of inpatient and outpatient services, and pharmacy dispensing claims. The majority (97.1%) of the Korean population (approximately 50 million people) is covered by the mandatory NHIS. Diagnoses were confirmed using the International Classification of Disease, Tenth Revision, Clinical Modification (ICD-10-CM) codes. In the present study, multistage sampling was performed, and we used a random sample of one million people selected from the NHIS database. 
The staff of the NHIS confirmed that the study cohort was a representative sample of the general Korean population for the period 2002 to 2013.13\n Definition Instances of newly diagnosed AF (I480–I484, I489) identified during follow-up were analyzed as in our previous studies1415 and the HZ and non-HZ group were compared. Patients who were diagnosed with mitral stenosis (I050, I052, and I059) or those who had mechanical heart valves (Z952–Z954) implanted from 2002 to 2013 were excluded from the analysis.\nThe presence and severity of HZ was determined on the basis of diagnostic and procedure codes recorded during the study period from 2005 to 2011. A HZ patient was identified by the diagnostic code for HZ (B02) and received either oral anti-viral medication for at least five days, or an anti-viral injection more than once. Hospitalization due to HZ was obtained from the database. A newly diagnosed HZ patient requiring simultaneous hospitalization was defined as a severe HZ case. The remaining HZ patients, who were treated as outpatients, were considered mild HZ cases. 
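The case definition above is rule-based and can be expressed compactly. A minimal sketch, assuming hypothetical claim fields (`dx_codes`, `oral_antiviral_days`, `antiviral_injections`, `hospitalized`) rather than actual NHIS field names:

```python
# Sketch of the case definition described above. Field names are illustrative
# placeholders, not actual NHIS claim fields.
def classify_hz(claim):
    """Return 'severe', 'mild', or None (does not meet the HZ definition)."""
    if not any(code.startswith("B02") for code in claim["dx_codes"]):
        return None
    treated = (claim["oral_antiviral_days"] >= 5      # oral course >= 5 days
               or claim["antiviral_injections"] > 1)  # injected more than once
    if not treated:
        return None
    # Hospitalization at diagnosis distinguishes severe from mild HZ.
    return "severe" if claim["hospitalized"] else "mild"
```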
The epidemiology of HZ in Korea has been previously reported using NHIS database records.16\n Statistical analysis Baseline characteristics are presented as mean and standard deviation, or numbers and percentages. The differences between continuous values were assessed using an unpaired two-tailed t-test for normally distributed continuous variables and a Mann-Whitney rank-sum test for skewed variables. The χ2 test was used for the comparison of nominal variables. We used risk set sampling for selection of the age- and sex-matched control subjects. Incidence rates were described as the number of events per 1,000 patient-years (PTPY). Hazard ratios (HRs) and the corresponding 95% confidence intervals (CIs) were calculated using Cox proportional hazard models. Comparison of cumulative event rates between HZ and non-HZ groups was based on Kaplan-Meier censoring estimates and performed with the log-rank test. 
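The Kaplan-Meier censoring estimate mentioned above can be illustrated with a minimal product-limit estimator. This is a sketch only; the study itself computed these curves in SAS/SPSS.

```python
# Minimal product-limit (Kaplan-Meier) estimator for illustration; the study
# itself computed these curves in SAS/SPSS.
def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = AF diagnosed, 0 = censored.
    Returns [(time, survival_probability)] at each event time."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d:
            surv *= 1 - d / at_risk            # product-limit update
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # events and censorings leave
    return curve
```

Censored subjects reduce the risk set without stepping the curve down, which is exactly why the method handles the unequal follow-up inherent in a claims cohort.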
Crude risks were analyzed in the overall cohort, and adjusted results were subsequently calculated with the multivariable Cox regression analyses. Subgroup analyses of multiple cardiovascular risk factors were subsequently performed. The level of statistical significance was set at P < 0.05, and all statistical analyses were performed using SAS version 9.3 (SAS Institute, Cary, NC, USA) and SPSS (IBM® SPSS® Statistics version 22; IBM Corp., Armonk, NY, USA).\n Ethics statement The database is open to any researcher whose study protocols have been approved by the official review committee. This study was approved by Institutional Review Board of Seoul National University Hospital, Seoul, Korea (approval No. 
1607-032-773).", "From the Korean National Health Insurance Service (NHIS) database, we used a random multistage representative sample of 1,034,777 people. Consequently, a total of 738,559 subjects were selected for analysis after patients less than 20 years of age and patients who had been treated for either AF or HZ during the screening period (2002–2004) were excluded. Newly diagnosed HZ patients (n = 30,685) were identified using health insurance claims data from 2005 to 2011. We defined the date on which each patient was first diagnosed with HZ as the index date. On the same index date, four age- and sex-matched non-HZ control subjects (n = 122,740) were selected for each HZ patient. A flowchart of the patient enrollment process is shown in Supplementary Fig. 1. Follow-up data was obtained up to 2013.", "Records from the NHIS database included patients' sociodemographic information, their use of inpatient and outpatient services, and pharmacy dispensing claims. The majority (97.1%) of the Korean population (approximately 50 million people) is covered by the mandatory NHIS. Diagnoses were confirmed using the International Classification of Disease, Tenth Revision, Clinical Modification (ICD-10-CM) codes. In the present study, multistage sampling was performed, and we used a random sample of one million people selected from the NHIS database. The staff of the NHIS confirmed that the study cohort was a representative sample of the general Korean population for the period 2002 to 2013.13", "Instances of newly diagnosed AF (I480–I484, I489) identified during follow-up were analyzed as in our previous studies1415 and the HZ and non-HZ group were compared. 
Patients who were diagnosed with mitral stenosis (I050, I052, and I059) or those who had mechanical heart valves (Z952–Z954) implanted from 2002 to 2013 were excluded from the analysis.\nThe presence and severity of HZ was determined on the basis of diagnostic and procedure codes recorded during the study period from 2005 to 2011. A HZ patient was identified by the diagnostic code for HZ (B02) and received either oral anti-viral medication for at least five days, or an anti-viral injection more than once. Hospitalization due to HZ was obtained from the database. A newly diagnosed HZ patient requiring simultaneous hospitalization was defined as a severe HZ case. The remaining HZ patients, who were treated as outpatients, were considered mild HZ cases. The epidemiology of HZ in Korea has been previously reported using NHIS database records.16", "Baseline characteristics are presented as mean and standard deviation, or numbers and percentages. The differences between continuous values were assessed using an unpaired two-tailed t-test for normally distributed continuous variables and a Mann-Whitney rank-sum test for skewed variables. The χ2 test was used for the comparison of nominal variables. We used risk set sampling for selection of the age- and sex-matched control subjects. Incidence rates were described as the number of events per 1,000 patient-years (PTPY). Hazard ratios (HRs) and the corresponding 95% confidence intervals (CIs) were calculated using Cox proportional hazard models. Comparison of cumulative event rates between HZ and non-HZ groups was based on Kaplan-Meier censoring estimates and performed with the log-rank test. Crude risks were analyzed in the overall cohort, and adjusted results were subsequently calculated with the multivariable Cox regression analyses. Subgroup analyses of multiple cardiovascular risk factors were subsequently performed. 
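The incidence-rate convention used above (events per 1,000 patient-years) is straightforward to compute. A small sketch with illustrative data; the aggregate form follows the per-1,000-PTPY figures reported in Table 2.

```python
# Events per 1,000 patient-years (PTPY), the incidence convention used above.
def incidence_per_1000_py(events, years):
    """events: 0/1 outcome flags per subject; years: follow-up per subject."""
    return 1000 * sum(events) / sum(years)

def rate_from_totals(n_events, patient_years):
    """Same rate from aggregate counts (as reported in Table 2)."""
    return 1000 * n_events / patient_years
```

For example, the severe-HZ figures above (66 events at 6.4 per 1,000 PTPY) imply roughly 10,300 patient-years of follow-up in that group.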
The level of statistical significance was set at P < 0.05, and all statistical analyses were performed using SAS version 9.3 (SAS Institute, Cary, NC, USA) and SPSS (IBM® SPSS® Statistics version 22; IBM Corp., Armonk, NY, USA).", "The database is open to any researcher whose study protocols have been approved by the official review committee. This study was approved by Institutional Review Board of Seoul National University Hospital, Seoul, Korea (approval No. 1607-032-773).", 
" Baseline characteristics A total of 153,425 adults (aged over 20) with no history of AF or HZ were analyzed. The baseline characteristics and demographic features of patients according to HZ severity are described in Table 1. The mean age of the total study population was 53.8 ± 15.2 years, and 60.1% of the patients were female. The HZ group had significantly more comorbidities than the non-HZ group, and the severe HZ group had the highest CHA2DS2-VASc score of all three groups. A total of 2,204 (1.4%) patients were diagnosed with new AF, and among them 825 (0.5%) patients were diagnosed within two years of their index (diagnosis) date.\nData are presented as means ± standard deviation (range) or number (%).\nHZ = herpes zoster, TIA = transient ischemic attack, AF = atrial fibrillation.\n Relative risk of AF development according to HZ severity The HZ group showed a higher incidence of AF than the non-HZ group. The number of events, calculated incidence rates, and unadjusted and adjusted HRs for AF and stroke according to HZ severity are described in Table 2. Those with severe HZ were at significantly higher risk of developing AF (n = 66, 6.4 per 1,000 PTPY) compared to those with mild HZ (n = 384, 2.9 PTPY) and the non-HZ group (n = 1,732, 2.7 PTPY) (Fig. 1A). In a landmark analysis (Fig. 1B), the risk of developing AF was significantly greater for severe HZ cases in the two-year period following HZ diagnosis; however, after the two-year period, the relative risk of developing AF did not differ significantly in the three groups.\nHZ = herpes zoster, AF = atrial fibrillation, HR = hazard ratio, CI = confidence interval.\naIncidence rates were calculated per 1,000 patient-years, within the population who were over 20 years old and not previously diagnosed with AF; bMultivariate adjusted hazard ratios were calculated by Cox regression models, including income level, resident area, CHA2DS2-VASc score as covariates.\nCumulative incidence of AF. (A) Kaplan-Meier curves with cumulative hazards of overall AF development. (B) Landmark analysis at 2 years.\nAF = atrial fibrillation, HZ = herpes zoster.\n Severe HZ vs. mild HZ or non-HZ patients We re-grouped the patients into two groups: a severe HZ group and a control group (mild HZ plus non-HZ patients). We did this because the overall incidence of AF was not significantly different for mild HZ patients and the non-HZ group. The baseline characteristics of the two groups are described in Supplementary Table 1. The overall cumulative incidence of AF in the severe HZ group and in the control group within the two-year period following HZ diagnosis, was 3.0% and 1.4%, respectively (HR, 2.30; 95% CI, 1.81–2.95; P < 0.001). Analysis after adjustment for CHA2DS2-VASc score and socio-economic status (income status and residence) showed the same results as the original analysis (HR, 1.46; 95% CI, 1.14–1.87; P = 0.003 in the severe HZ group). The detailed data is shown in Supplementary Table 2 and Supplementary Fig. 2. The risk of AF development was more pronounced in the two-year period following HZ diagnosis in the severe HZ group.\n Subgroup analyses Formal testing for interactions showed that the overall AF development rates in the severe HZ group and the control group were consistent across multiple subgroups (Fig. 2). Across the two subgroups defined by CHA2DS2-VASc (cut-off score of 3), the risk of AF development was significantly higher in the severe HZ group (Supplementary Tables 3 and 4). The relationship between AF and HZ was strong in the first two years after diagnosis.\nSubgroup analyses for AF risk.\nAF = atrial fibrillation, HZ = herpes zoster, HR = hazard ratio, CI = confidence interval.\n Propensity score matched analysis Although age and sex were matched between the HZ and non-HZ groups, the patients with HZ had more comorbidities. Therefore, we performed 1:2 propensity score matching; the demographic features of the matched population are presented in Supplementary Table 5. In addition to age and sex, comorbidities and socioeconomic status (hypertension, diabetes, dyslipidemia, previous stroke/transient ischemic attack [TIA], heart failure, end-stage renal disease, chronic obstructive pulmonary disease, previous ischemic heart disease, low income level, and urban residence) were matched between the HZ and non-HZ groups. Kaplan-Meier curves from the propensity score matched cohort according to HZ severity are presented in Supplementary Table 6 and Supplementary Fig. 3. Consistent with the overall cohort analysis, the risk of AF development was more evident within 2 years of HZ diagnosis, while the relative risk of AF beyond 2 years from the index date did not differ significantly among the three severity groups.", 
"A total of 153,425 adults (aged over 20) with no history of AF or HZ were analyzed. The baseline characteristics and demographic features of patients according to HZ severity are described in Table 1. The mean age of the total study population was 53.8 ± 15.2 years, and 60.1% of the patients were female. The HZ group had significantly more comorbidities than the non-HZ group, and the severe HZ group had the highest CHA2DS2-VASc score of all three groups. A total of 2,204 (1.4%) patients were diagnosed with new AF, and among them 825 (0.5%) patients were diagnosed within two years of their index (diagnosis) date.\nData are presented as means ± standard deviation (range) or number (%).\nHZ = herpes zoster, TIA = transient ischemic attack, AF = atrial fibrillation.", "The HZ group showed a higher incidence of AF than the non-HZ group. The number of events, calculated incidence rates, and unadjusted and adjusted HRs for AF and stroke according to HZ severity are described in Table 2. Those with severe HZ were at significantly higher risk of developing AF (n = 66, 6.4 per 1,000 PTPY) compared to those with mild HZ (n = 384, 2.9 PTPY) and the non-HZ group (n = 1,732, 2.7 PTPY) (Fig. 1A). In a landmark analysis (Fig. 
1B), the risk of developing AF was significantly greater for severe HZ cases in the two-year period following HZ diagnosis; however, after the two-year period, the relative risk of developing AF did not differ significantly in the three groups.\nHZ = herpes zoster, AF = atrial fibrillation, HR = hazard ratio, CI = confidence interval.\naIncidence rates were calculated per 1,000 patient-years, within the population who were over 20 years old and not previously diagnosed with AF; bMultivariate adjusted hazard ratios were calculated by Cox regression models, including income level, resident area, CHA2DS2-VASc score as covariates.\nCumulative incidence of AF. (A) Kaplan-Meier curves with cumulative hazards of overall AF development. (B) Landmark analysis at 2 years.\nAF = atrial fibrillation, HZ = herpes zoster.", "We re-grouped the patients into two groups: a severe HZ group and a control group (mild HZ plus non-HZ patients). We did this because the overall incidence of AF was not significantly different for mild HZ patients and the non-HZ group. The baseline characteristics of the two groups are described in Supplementary Table 1. The overall cumulative incidence of AF in the severe HZ group and in the control group within the two-year period following HZ diagnosis, was 3.0% and 1.4%, respectively (HR, 2.30; 95% CI, 1.81–2.95; P < 0.001). Analysis after adjustment for CHA2DS2-VASc score and socio-economic status (income status and residence) showed the same results as the original analysis (HR, 1.46; 95% CI, 1.14–1.87; P = 0.003 in the severe HZ group). The detailed data is shown in Supplementary Table 2 and Supplementary Fig. 2. The risk of AF development was more pronounced in the two-year period following HZ diagnosis in the severe HZ group.", "Formal testing for interactions showed that the overall AF development rates in the severe HZ group and the control group were consistent across multiple subgroups (Fig. 2). 
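The 2-year landmark analysis described above splits follow-up into an early window and a re-baselined late window. A minimal sketch of that split (illustrative only; subjects with an event before the landmark contribute only to the early window):

```python
# Sketch of the 2-year landmark split: the early window keeps everyone with
# follow-up truncated at the landmark; the late window re-baselines subjects
# who were still event-free and under follow-up at 2 years.
def landmark_split(subjects, landmark=2.0):
    """subjects: list of (follow_up_years, had_af). Returns (early, late)."""
    early, late = [], []
    for t, event in subjects:
        if t <= landmark:
            early.append((t, event))           # event or censoring before 2 y
        else:
            early.append((landmark, 0))        # censored at the landmark
            late.append((t - landmark, event)) # clock restarts at 2 years
    return early, late
```

Comparing groups separately within each window is what lets the analysis show an excess risk confined to the first two years after HZ diagnosis.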
Across the two subgroups defined by CHA2DS2-VASc (cut-off score of 3), the risk of AF development was significantly higher in the severe HZ group (Supplementary Tables 3 and 4). The relationship between AF and HZ was strong in the first two years after diagnosis.\nSubgroup analyses for AF risk.\nAF = atrial fibrillation, HZ = herpes zoster, HR = hazard ratio, CI = confidence interval.", "Although age and sex were matched between the HZ and non-HZ groups, the patients with HZ had more comorbidities. Therefore, we performed 1:2 propensity score matching; the demographic features of the matched population are presented in Supplementary Table 5. In addition to age and sex, comorbidities and socioeconomic status (hypertension, diabetes, dyslipidemia, previous stroke/transient ischemic attack [TIA], heart failure, end-stage renal disease, chronic obstructive pulmonary disease, previous ischemic heart disease, low income level, and urban residence) were matched between the HZ and non-HZ groups. Kaplan-Meier curves from the propensity score matched cohort according to HZ severity are presented in Supplementary Table 6 and Supplementary Fig. 3. Consistent with the overall cohort analysis, the risk of AF development was more evident within 2 years of HZ diagnosis, while the relative risk of AF beyond 2 years from the index date did not differ significantly among the three severity groups.", "In our nationwide case-control cohort study, we found an association between HZ and the development of AF. The main finding was that patients with severe HZ requiring hospitalization have an increased risk of incident AF within the two-year period following HZ diagnosis.\nInflammation is a well-established pathogenic process in atherosclerosis. Many studies have supported a close link between AF and inflammatory processes. 
A high incidence of AF has been reported in patients after cardiac surgery, a well-known state of intense inflammation.1718 Furthermore, an elevated serum C-reactive protein level has been reported to be related to AF development, increased AF burden, and higher recurrence rates after catheter ablation and electrical cardioversion for AF.19 Previous studies have demonstrated that a higher interleukin (IL)-6 level is associated with a higher risk of incident AF following coronary artery bypass graft surgery20 and a higher AF recurrence rate following cardioversion.21 A nationwide population-based study on herpes simplex infection reported that this viral inflammatory disease could be associated with an increased risk of AF.22 Bacterial infections such as Helicobacter pylori and Chlamydia pneumoniae have also shown an association with the occurrence of AF.23 However, there has been no population-based epidemiology study on the association between VZV and AF. Given that HZ involves chronic latent inflammation, there is a strong possibility that it is associated with AF.\nThe intrinsic cardiac autonomic nervous system plays a critical role in the initiation and maintenance of AF, and sympathovagal discharges are common triggers for paroxysmal AF.24 There are many ganglionic plexuses around the atria, which are markedly influenced by cholinergic as well as sympathetic innervation.25 Changes in autonomic tone before the onset of paroxysmal AF have been reported, suggesting autonomic dysfunction may lead to the development of AF.26 HZ can involve nerve ganglia, resulting in autonomic dysfunction. Alternatively, VZV reactivation may cause neuritis of the sympathetic and autonomic ganglia. 
HZ is reported to affect the autonomic nervous system of the colon due to the centripetal spread of the virus from the dorsal ganglion.27 In addition, the vagus nerve or its ganglia can be involved in HZ, resulting in cardiac irregularities.12 The reactivation may involve multiple loci, including the dermatome and symptomatic ganglia that innervate the heart, leading to cardiovascular disease. The immune response has been shown to play an important role in the ganglia during VZV reactivation. Unfortunately, we could not find direct evidence that HZ involves cardiac ganglionic plexuses. Based on the literature cited above, we assumed that HZ might lead to AF development via cardiac autonomic nervous system involvement.\nTo our knowledge, this is the first report to demonstrate an increased risk of developing AF in the early period after a particular infection. The development of AF shortly after infection may be associated with HZ. In our cohort, about two percent of patients developed newly diagnosed AF within two years of HZ diagnosis, and this difference drove the overall difference in risk. AF can be asymptomatic, and delayed diagnosis is common in real-world clinical situations; thus, the risk of incident AF may actually be higher within the short-term period following HZ diagnosis.\nBecause of the increasing incidence of AF and the ever-increasing public health burden it produces, significant efforts have been made recently in both risk factor identification and modifications for AF prevention. 
A number of traditional risk factors for AF have been identified, many of which are also associated with cardiovascular diseases, and common preventive strategies focus on conventional modification of cardiovascular risk factors such as obesity, glucose control, and blood pressure control.\nFurthermore, given that HZ is an inflammatory disease involving the nervous system, it can be prevented with vaccination.28 A significant proportion of AF cases could be prevented by HZ vaccination, which has the potential to reduce the associated socio-economic burden.\nOne of the strengths of this study is the large comparison cohort stratified by age and gender. The effects of diabetes, hypertension, and hyperlipidemia on cardiovascular diseases could be adjusted for in the Cox proportional hazards models. Subgroup analyses were performed to ensure the consistency of the results. However, several limitations of the present study remain. First, personal information such as smoking habits, physical activity, and body mass index was not available from this registry database. In addition, echocardiographic parameters, such as left atrial dimension and left ventricular ejection fraction, and electrocardiographic results were absent. Therefore, we were not able to control effectively for all potential confounders, although we tried to adjust for important comorbidities in the Cox regression model. Second, HZ was diagnosed using ICD-10-CM codes and was not confirmed by molecular biological detection. The accuracy of HZ diagnosis could not be fully ascertained. Third, selection bias due to hospitalization might affect the detection rate of AF. Routine ECG screening during hospitalization might increase the chance of detecting AF, so AF would be more likely to be detected in hospitalized severe HZ patients than in the non-hospitalized control group. 
However, we found that incident AF occurred consistently throughout the 2 years after HZ diagnosis, and patients are unlikely to remain hospitalized for 2 years because of HZ. Although there might be selection bias during the early admission period in severe HZ patients, the consistent occurrence of AF during the 2 years of follow-up after HZ development supports our final conclusion. Lastly, while we reported a significant association between AF and HZ, these results were derived from an observational database. As we were not able to conclude whether HZ was the direct cause of the increased AF incidence, a further prospective and large-scale trial is necessary to confirm the findings of the present study.\nOur results suggest that severe HZ requiring hospitalization is a possible risk factor for AF, and that AF incidence is greater for severe HZ patients in the two-year period following HZ diagnosis." ]
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "Atrial Fibrillation", "Herpes Zoster", "Inflammation", "Autonomic Dysfunction" ]
INTRODUCTION: The incidence of atrial fibrillation (AF), the most common type of clinically significant cardiac arrhythmia, has increased over the last decade.1 AF is associated with high medical costs, as well as an increased risk of ischemic stroke and death.2 Although the precise mechanisms involved in AF are not well understood, a growing body of evidence indicates that inflammation and the autonomic nervous system are involved in the pathogenesis of AF.345 Clinical and experimental studies have suggested that AF triggers and drivers in the pulmonary veins and left atrial wall are at least partially modulated by the autonomic nervous system.6 Herpes zoster (HZ) is caused by reactivation of the latent varicella-zoster virus (VZV) in the cranial nerve, dorsal root, or autonomic ganglia, with spread of the virus along the sensory nerve to the dermatome, producing radiculoneuritis and cutaneous manifestations. Unvaccinated persons over 85 years of age have a 50% risk of developing HZ.78 HZ may result in serious neurologic or inflammatory sequelae, and severe cases often require hospitalization (up to 3% of HZ cases).9 The neurologic sequelae of HZ are well known and can include autonomic dysfunction.1011 The vagus nerve or its ganglia can be involved in HZ, resulting in dysphagia, nausea, vomiting, gastric upset, or cardiac irregularities.12 Given the pathogenesis of the two diseases, AF can occur as a complication of HZ, but the relationship between AF and HZ has not been studied widely. We hypothesized that HZ may be associated with AF development. METHODS: Study population From the Korean National Health Insurance Service (NHIS) database, we used a random multistage representative sample of 1,034,777 people. Consequently, a total of 738,559 subjects were selected for analysis after patients less than 20 years of age and patients who had been treated for either AF or HZ during the screening period (2002–2004) were excluded. 
Newly diagnosed HZ patients (n = 30,685) were identified using health insurance claims data from 2005 to 2011. We defined the date on which each patient was first diagnosed with HZ as the index date. On the same index date, four age- and sex-matched non-HZ control subjects (n = 122,740) were selected for each HZ patient. A flowchart of the patient enrollment process is shown in Supplementary Fig. 1. Follow-up data were obtained up to 2013. Database Records from the NHIS database included patients' sociodemographic information, their use of inpatient and outpatient services, and pharmacy dispensing claims. The majority (97.1%) of the Korean population (approximately 50 million people) is covered by the mandatory NHIS. Diagnoses were confirmed using the International Classification of Disease, Tenth Revision, Clinical Modification (ICD-10-CM) codes. In the present study, multistage sampling was performed, and we used a random sample of one million people selected from the NHIS database. 
The staff of the NHIS confirmed that the study cohort was a representative sample of the general Korean population for the period 2002 to 2013.13 Definition Instances of newly diagnosed AF (I480–I484, I489) identified during follow-up were analyzed as in our previous studies1415 and the HZ and non-HZ groups were compared. Patients who were diagnosed with mitral stenosis (I050, I052, and I059) or those who had mechanical heart valves (Z952–Z954) implanted from 2002 to 2013 were excluded from the analysis. The presence and severity of HZ were determined on the basis of diagnostic and procedure codes recorded during the study period from 2005 to 2011. An HZ patient was identified by the diagnostic code for HZ (B02) plus receipt of either oral anti-viral medication for at least five days or an anti-viral injection more than once. Hospitalization due to HZ was obtained from the database. A newly diagnosed HZ patient requiring simultaneous hospitalization was defined as a severe HZ case. The remaining HZ patients, who were treated as outpatients, were considered mild HZ cases. 
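The case and severity definition above can be expressed as a small classifier over claims-style records. This is an illustrative sketch, not the authors' code: the record layout (dict keys such as `dx_codes` or `oral_antiviral_days`) is a hypothetical stand-in for the actual NHIS fields.

```python
# Sketch of the paper's HZ case definition: diagnostic code B02 plus
# either >= 5 days of oral anti-viral medication or more than one
# anti-viral injection; hospitalized cases are "severe", outpatient
# cases are "mild". The record format is hypothetical.

def classify_hz(record):
    """Return 'severe', 'mild', or None for a claims-style record."""
    has_dx = "B02" in record.get("dx_codes", [])
    treated = (record.get("oral_antiviral_days", 0) >= 5
               or record.get("antiviral_injections", 0) > 1)
    if not (has_dx and treated):
        return None  # does not meet the HZ case definition
    return "severe" if record.get("hospitalized", False) else "mild"
```

For example, a record with code B02, seven days of oral anti-virals, and a hospitalization flag would be classified as a severe case.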
The epidemiology of HZ in Korea has been previously reported using NHIS database records.16 Statistical analysis Baseline characteristics are presented as mean and standard deviation, or numbers and percentages. The differences between continuous values were assessed using an unpaired two-tailed t-test for normally distributed continuous variables and a Mann-Whitney rank-sum test for skewed variables. The χ2 test was used for the comparison of nominal variables. We used risk set sampling for selection of the age- and sex-matched control subjects. Incidence rates were described as the number of events per 1,000 patient-years (PTPY). Hazard ratios (HRs) and the corresponding 95% confidence intervals (CIs) were calculated using Cox proportional hazard models. Comparison of cumulative event rates between the HZ and non-HZ groups was based on Kaplan-Meier censoring estimates and performed with the log-rank test. 
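The incidence-rate convention used above (events per 1,000 patient-years) can be illustrated with a short helper. A minimal sketch, assuming a hypothetical input layout of `(follow_up_years, had_event)` tuples per subject rather than the authors' actual data format:

```python
# Sketch: incidence rate as events per 1,000 patient-years (PTPY).
# Each subject contributes their follow-up time as person-time;
# the rate is (number of events / total person-years) * 1000.

def incidence_per_1000_py(subjects):
    """subjects: iterable of (follow_up_years, had_event) tuples."""
    events = sum(1 for _, had_event in subjects if had_event)
    person_years = sum(t for t, _ in subjects)
    return 1000.0 * events / person_years
```

On this convention, the paper's 66 events at 6.4 PTPY in the severe HZ group would correspond to roughly 10,300 patient-years of follow-up.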
Crude risks were analyzed in the overall cohort, and adjusted results were subsequently calculated with the multivariable Cox regression analyses. Subgroup analyses of multiple cardiovascular risk factors were subsequently performed. The level of statistical significance was set at P < 0.05, and all statistical analyses were performed using SAS version 9.3 (SAS Institute, Cary, NC, USA) and SPSS (IBM® SPSS® Statistics version 22; IBM Corp., Armonk, NY, USA). Ethics statement The database is open to any researcher whose study protocols have been approved by the official review committee. This study was approved by Institutional Review Board of Seoul National University Hospital, Seoul, Korea (approval No. 1607-032-773). 
RESULTS: Baseline characteristics A total of 153,425 adults (aged over 20) with no history of AF or HZ were analyzed. The baseline characteristics and demographic features of patients according to HZ severity are described in Table 1. The mean age of the total study population was 53.8 ± 15.2 years, and 60.1% of the patients were female. The HZ group had significantly more comorbidities than the non-HZ group, and the severe HZ group had the highest CHA2DS2-VASc score of all three groups. A total of 2,204 (1.4%) patients were diagnosed with new AF, and among them 825 (0.5%) patients were diagnosed within two years of their index (diagnosis) date. Data are presented as means ± standard deviation (range) or number (%). 
HZ = herpes zoster, TIA = transient ischemic attack, AF = atrial fibrillation. Relative risk of AF development according to HZ severity The HZ group showed a higher incidence of AF than the non-HZ group. The number of events, calculated incidence rates, and unadjusted and adjusted HRs for AF and stroke according to HZ severity are described in Table 2. Those with severe HZ were at significantly higher risk of developing AF (n = 66, 6.4 per 1,000 PTPY) compared to those with mild HZ (n = 384, 2.9 PTPY) and the non-HZ group (n = 1,732, 2.7 PTPY) (Fig. 1A). In a landmark analysis (Fig. 1B), the risk of developing AF was significantly greater for severe HZ cases in the two-year period following HZ diagnosis; however, after the two-year period, the relative risk of developing AF did not differ significantly among the three groups. HZ = herpes zoster, AF = atrial fibrillation, HR = hazard ratio, CI = confidence interval. aIncidence rates were calculated per 1,000 patient-years, within the population who were over 20 years old and not previously diagnosed with AF; bMultivariate adjusted hazard ratios were calculated by Cox regression models, including income level, resident area, and CHA2DS2-VASc score as covariates. Cumulative incidence of AF. (A) Kaplan-Meier curves with cumulative hazards of overall AF development. (B) Landmark analysis at 2 years. AF = atrial fibrillation, HZ = herpes zoster. Severe HZ vs. mild HZ or non-HZ patients We re-grouped the patients into two groups: a severe HZ group and a control group (mild HZ plus non-HZ patients). We did this because the overall incidence of AF was not significantly different for mild HZ patients and the non-HZ group. The baseline characteristics of the two groups are described in Supplementary Table 1. The overall cumulative incidence of AF in the severe HZ group and in the control group within the two-year period following HZ diagnosis was 3.0% and 1.4%, respectively (HR, 2.30; 95% CI, 1.81–2.95; P < 0.001). Analysis after adjustment for CHA2DS2-VASc score and socio-economic status (income status and residence) showed the same results as the original analysis (HR, 1.46; 95% CI, 1.14–1.87; P = 0.003 in the severe HZ group). The detailed data are shown in Supplementary Table 2 and Supplementary Fig. 2. The risk of AF development was more pronounced in the two-year period following HZ diagnosis in the severe HZ group. Subgroup analyses Formal testing for interactions showed that the overall AF development rates in the severe HZ group and the control group were consistent across multiple subgroups (Fig. 2). Across the two subgroups defined by CHA2DS2-VASc (cut-off score of 3), the risk of AF development was significantly higher in the severe HZ group (Supplementary Tables 3 and 4). The relationship between AF and HZ was strong in the first two years after diagnosis. Subgroup analyses for AF risk. AF = atrial fibrillation, HZ = herpes zoster, HR = hazard ratio, CI = confidence interval. 
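The two-year landmark analysis above amounts to splitting each subject's follow-up at the landmark and analyzing the early and late windows separately. A minimal sketch, assuming a hypothetical `(follow_up_years, had_event)` input format rather than the authors' actual data:

```python
# Sketch of a 2-year landmark split: follow-up is divided at the
# landmark so early (<= 2 years) and late (> 2 years) risk can be
# compared separately. Subjects with an event or censoring before the
# landmark contribute only to the early window; the rest are censored
# at the landmark in the early window and re-enter at time zero of
# the late window.

LANDMARK = 2.0

def landmark_split(subjects, landmark=LANDMARK):
    early, late = [], []
    for t, had_event in subjects:
        if t <= landmark:
            early.append((t, had_event))            # event/censoring before landmark
        else:
            early.append((landmark, False))          # censored at the landmark
            late.append((t - landmark, had_event))   # clock restarts at the landmark
    return early, late
```

Hazard ratios estimated within each window then give the "within 2 years" and "beyond 2 years" comparisons reported for Fig. 1B.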
Propensity score matched analysis Although age and sex were matched between the HZ and non-HZ groups, the patients with HZ had more comorbidities. Therefore, we performed 1:2 propensity score matching; the demographic features of the matched population are shown in Supplementary Table 5. Comorbidities and socioeconomic status (hypertension, diabetes, dyslipidemia, previous stroke/transient ischemic attack [TIA], heart failure, end-stage renal disease, chronic obstructive pulmonary disease, previous ischemic heart disease, low income level, and urban residence), in addition to age and sex, were matched between the HZ and non-HZ groups. Kaplan-Meier curves from the propensity score matched cohort according to HZ severity are presented in Supplementary Table 6 and Supplementary Fig. 3. In accordance with the overall cohort analysis, the risk of AF development was more evident within 2 years after HZ diagnosis, while the relative risk of AF beyond 2 years from the index date did not differ significantly among the three severity groups. 
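The 1:2 propensity score matching reported above can be sketched as greedy nearest-neighbor matching without replacement on precomputed scores (which would come from, e.g., a logistic regression on the listed comorbidities). The caliper value and data layout here are illustrative assumptions, not the authors' actual procedure:

```python
# Sketch of greedy 1:2 nearest-neighbor propensity-score matching
# without replacement. Propensity scores are assumed precomputed;
# the 0.05 caliper is an arbitrary illustrative choice.

def match_1_to_2(case_scores, control_scores, caliper=0.05):
    """Return {case_index: [control_index, control_index]} matches."""
    available = dict(enumerate(control_scores))
    matches = {}
    for i, s in enumerate(case_scores):
        picked = []
        for _ in range(2):  # 1:2 matching ratio
            if not available:
                break
            # nearest remaining control by propensity-score distance
            j = min(available, key=lambda k: abs(available[k] - s))
            if abs(available[j] - s) <= caliper:
                picked.append(j)
                del available[j]  # without replacement
            else:
                break
        if len(picked) == 2:  # keep only fully matched cases
            matches[i] = picked
    return matches
```

Cases that cannot find two controls within the caliper are dropped, which is one common way such matched cohorts are formed.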
DISCUSSION: In our nationwide case-control cohort study, we found an association between HZ and the development of AF. The main finding was that patients with severe HZ requiring hospitalization have an increased risk of incident AF within the two-year period following HZ diagnosis. An inflammatory process is a well-established pathogenic mechanism in exacerbated atherosclerosis. Many studies have supported a close link between AF and inflammatory processes. 
A high incidence of AF in patients after cardiac surgery, a well-known state of intense inflammatory process was reported.1718 Furthermore, an elevated serum C-reactive protein level has been reported to be related to AF development, increased AF burden, and higher recurrence rates after catheter ablation and electrical cardioversion for AF.19 Previous studies have demonstrated that a higher interleukin (IL)-6 level is associated with a higher risk of incident AF following coronary artery bypass graft surgery20 and a higher AF recurrence rate following cardioversion.21 The nationwide population based study on Herpes simplex infection reported that the viral inflammatory disease could be associated with an increased risk of AF.22 Bacterial infections such as Helicobacter pylori and Chlamydia pneumoniae have also shown an association with the occurrence of AF.23 However, there has been no population-based epidemiology study on the association between VZV and AF. Given that HZ involves a chronic latent inflammation, there is a strong possibility that it is associated with AF. The intrinsic cardiac autonomic nervous system plays a critical role in the initiation and maintenance of AF, and sympathovagal discharges are common triggers for paroxysmal AF.24 There are many ganglionic plexuses around the atria, which is markedly influenced by cholinergic as well as sympathetic innervation.25 Changes in autonomic tone before the onset of paroxysmal AF have been reported, suggesting autonomic dysfunction may lead to the development of AF.26 HZ can involve nerve ganglia, resulting in autonomic dysfunction. Alternatively, VZV reactivations may cause the neuritis of the sympathetic and autonomic ganglia. 
HZ is reported to affect the autonomic nervous system of the colon due to centripetal spread of the virus from the dorsal ganglion.27 In addition, the vagus nerve or its ganglia can be involved in HZ, resulting in cardiac irregularities.12 Reactivation may involve multiple loci, including the dermatome and symptomatic ganglia that innervate the heart, leading to cardiovascular disease. The immune response has been shown to play an important role in the ganglia during VZV reactivation. Unfortunately, we could not find direct evidence that HZ involves the cardiac ganglionic plexuses. Based on the literature cited above, we assumed that HZ might lead to AF development via involvement of the cardiac autonomic nervous system. To our knowledge, this is the first report to demonstrate an increased risk of developing AF in the early period after a particular infection. The development of AF shortly after infection may be associated with HZ. In our cohort, about two percent of patients developed newly diagnosed AF within two years of HZ diagnosis, and this accounted for the overall risk difference. AF can be asymptomatic and delayed diagnosis is common in real-world clinical practice; thus, the risk of incident AF may actually be higher within the short-term period following HZ diagnosis. Because of the increasing incidence of AF and the ever-increasing public health burden it produces, significant efforts have been made recently in both risk factor identification and modification for AF prevention. A number of traditional risk factors for AF have been identified, many of which are also associated with cardiovascular diseases, and common preventive strategies focus on conventional modification of cardiovascular risk factors such as obesity, glucose control, and blood pressure control. 
Furthermore, given that HZ is an inflammatory disease involving the nervous system, it can be prevented with vaccination.28 A significant proportion of AF cases could therefore be prevented by HZ vaccination, which has the potential to reduce the associated medical and socioeconomic burden. One of the strengths of this study is the large comparison cohort stratified by age and sex. The effects of diabetes, hypertension, and hyperlipidemia on cardiovascular diseases could be adjusted for in the Cox proportional hazards models. Subgroup analyses were performed to ensure the consistency of the results. However, several limitations of the present study remain. First, personal information such as smoking habits, physical activity, and body mass index was not available from this registry database. In addition, echocardiographic parameters, such as left atrial dimension and left ventricular ejection fraction, and electrocardiographic results were absent. Therefore, we were not able to control effectively for all potential confounders, although we tried to adjust for important comorbidities in the Cox regression model. Second, HZ was diagnosed using ICD-10-CM codes and was not confirmed by molecular biological detection, so the accuracy of HZ diagnosis could not be fully ascertained. Third, selection bias due to hospitalization might have affected the detection rate of AF: routine ECG screening during hospitalization might increase the chance of detecting AF, so hospitalized severe HZ patients would have a greater chance of AF detection than the non-hospitalized control group. However, we found that incident AF occurred consistently throughout the 2 years after HZ diagnosis, and patients are unlikely to remain hospitalized for 2 years because of HZ. Although there might be selection bias during the early admission period in severe HZ patients, the consistent occurrence of AF during 2 years of follow-up after HZ development supports our final conclusion. 
Lastly, while we reported a significant association between AF and HZ, these results were derived from an observational database. As we were not able to conclude whether HZ was the direct cause of the increased AF incidence, a further prospective and large-scale trial is necessary to confirm the findings of the present study. Our results suggest that severe HZ requiring hospitalization is a possible risk factor for AF, and that AF incidence is greater for severe HZ patients in the two-year period following HZ diagnosis.
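The 1:2 propensity score matching used in the sensitivity analysis can be illustrated with a minimal sketch. This is not the study's actual implementation (which matched on many NHIS covariates); here, greedy nearest-neighbor matching within a caliper is applied to hypothetical, precomputed propensity scores, and all identifiers and values are invented for illustration.

```python
# Illustrative sketch (not the study's code): greedy 1:2 nearest-neighbor
# propensity score matching within a caliper, on hypothetical scores.

def match_1_to_2(cases, controls, caliper=0.05):
    """cases/controls: dicts of id -> propensity score.
    Returns {case_id: [control_id, control_id]} for cases with 2 matches."""
    available = dict(controls)  # copy so controls are consumed without replacement
    matches = {}
    # Process cases in descending score order (a common convention).
    for cid, ps in sorted(cases.items(), key=lambda kv: -kv[1]):
        picked = []
        for _ in range(2):
            best = min(
                (k for k in available if abs(available[k] - ps) <= caliper),
                key=lambda k: abs(available[k] - ps),
                default=None,
            )
            if best is None:
                break  # caliper exhausted; case left unmatched
            picked.append(best)
            del available[best]
        if len(picked) == 2:
            matches[cid] = picked
    return matches

# Hypothetical HZ cases and non-HZ controls with precomputed scores
hz = {"p1": 0.62, "p2": 0.35}
non_hz = {"c1": 0.60, "c2": 0.63, "c3": 0.34, "c4": 0.37, "c5": 0.90}
print(match_1_to_2(hz, non_hz))
```

In practice the scores would come from a logistic regression of HZ status on the listed comorbidities and socioeconomic variables; the greedy caliper step above is only one of several matching strategies.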
Background: Herpes zoster (HZ) is a chronic inflammatory disease that can result in autonomic dysfunction, which may lead to atrial fibrillation (AF). Methods: From the Korean National Health Insurance Service database of 738,559 subjects, patients newly diagnosed with HZ (n = 30,685) between 2004 and 2011, with no history of HZ or AF, were identified. For the non-HZ control group, 122,740 age- and sex-matched subjects were selected. AF development in the first two years following HZ diagnosis, and during the overall follow-up period, was compared among the severe (requiring hospitalization, n = 2,213), mild (n = 28,472), and non-HZ (n = 122,740) groups. Results: There were 2,204 (1.4%) patients diagnosed with AF during follow-up, and 825 (0.5%) were diagnosed within the first two years after HZ. The severe HZ group showed a higher rate of AF development (6.4 per 1,000 patient-years [PTPY]) compared to the mild-HZ group (2.9 PTPY) and non-HZ group (2.7 PTPY). The risk of developing AF was higher in the first two years after HZ diagnosis in the severe HZ group (10.6 PTPY vs. 2.7 PTPY in the mild-HZ group and 2.6 PTPY in the non-HZ group). Conclusions: Severe HZ requiring hospitalization is associated with an increased risk of incident AF, and the risk is higher in the first two years following HZ diagnosis.
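The incidence rates quoted in the abstract are expressed per 1,000 patient-years: events divided by accumulated follow-up time, scaled by 1,000. A minimal sketch with hypothetical event and follow-up counts (the study's underlying patient-year totals are not reported here, so these numbers are invented to reproduce the arithmetic, not the data):

```python
# How a rate "per 1,000 patient-years" is computed: incident events divided
# by total follow-up time in person-years, scaled by 1,000.

def rate_per_1000_py(events, person_years):
    return 1000 * events / person_years

# Hypothetical: 140 incident AF cases accumulated over 21,875 patient-years
print(round(rate_per_1000_py(140, 21_875), 1))  # 6.4
```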
[ "hz", "af", "group", "patients", "hz group", "risk", "severe", "severe hz", "years", "non" ]
[CONTENT] Atrial Fibrillation | Herpes Zoster | Inflammation | Autonomic Dysfunction [SUMMARY]
[CONTENT] Adult | Aged | Atrial Fibrillation | Case-Control Studies | Databases, Factual | Female | Herpesvirus 3, Human | Humans | Incidence | Male | Middle Aged | Risk Factors | Severity of Illness Index | Varicella Zoster Virus Infection [SUMMARY]
[CONTENT] hz | af | group | patients | hz group | risk | severe | severe hz | years | non [SUMMARY]
[CONTENT] autonomic | hz | af | involved | nerve | sequelae | pathogenesis | result | neurologic | associated [SUMMARY]
[CONTENT] hz | nhis | database | patient | test | study | patients | nhis database | diagnosed | variables [SUMMARY]
[CONTENT] hz | af | group | hz group | severe hz group | severe hz | severe | significantly | supplementary | patients [SUMMARY]
[CONTENT] hz | af | group | hz group | patients | risk | severe hz | severe | severe hz group | study [SUMMARY]
[CONTENT] the Korean National Health Insurance Service | 738,559 | 30,685 | between 2004 and 2011 | HZ ||| 122,740 ||| the first two-years | HZ | 2,213 | 28,472 | non-HZ | 122,740 [SUMMARY]
[CONTENT] 2,204 | 1.4% | 825 | 0.5% | the first two years | HZ ||| 6.4 | 1,000 ||| 2.9 | 2.7 ||| the first two-years | HZ | 10.6 | 2.7 | 2.6 [SUMMARY]
[CONTENT] ||| the Korean National Health Insurance Service | 738,559 | 30,685 | between 2004 and 2011 | HZ ||| 122,740 ||| the first two-years | HZ | 2,213 | 28,472 | non-HZ | 122,740 ||| 2,204 | 1.4% | 825 | 0.5% | the first two years | HZ ||| 6.4 | 1,000 ||| 2.9 | 2.7 ||| the first two-years | HZ | 10.6 | 2.7 | 2.6 ||| the first two-years | HZ [SUMMARY]
Eight Weeks of a High Dose of Curcumin Supplementation May Attenuate Performance Decrements Following Muscle-Damaging Exercise.
31340534
It is known that unaccustomed exercise-especially when it has an eccentric component-causes muscle damage and subsequent performance decrements. Attenuating muscle damage may improve performance and recovery, allowing for improved training quality and adaptations. Therefore, the current study sought to examine the effect of two doses of curcumin supplementation on performance decrements following downhill running.
BACKGROUND
Sixty-three physically active men and women (21 ± 2 y; 70.0 ± 13.7 kg; 169.3 ± 15.2 cm; 25.6 ± 14.3 body mass index (BMI), 32 women, 31 men) were randomly assigned to ingest 250 mg of CurcuWIN® (50 mg of curcuminoids), 1000 mg of CurcuWIN® (200 mg of curcuminoids), or a corn starch placebo (PLA) for eight weeks in a double-blind, randomized, placebo-controlled parallel design. At the end of the supplementation period, subjects completed a downhill running protocol intended to induce muscle damage. Muscle function using isokinetic dynamometry and perceived soreness was assessed prior to and at 1 h, 24 h, 48 h, and 72 h post-downhill run.
METHODS
Isokinetic peak extension torque did not change in the 200-mg dose, while significant reductions occurred in the PLA and 50-mg groups through the first 24 h of recovery. Isokinetic peak flexion torque and power both decreased in the 50-mg group, while no change was observed in the PLA or 200-mg groups. All the groups experienced no changes in isokinetic extension power and isometric average peak torque. Soreness was significantly increased in all the groups compared to the baseline. Non-significant improvements in total soreness were observed for the 200-mg group, but these changes failed to reach statistical significance.
RESULTS
When compared to changes observed against PLA, a 200-mg dose of curcumin attenuated reductions in some but not all observed changes in performance and soreness after completion of a downhill running bout. Additionally, a 50-mg dose appears to offer no advantage to changes observed in the PLA and 200-mg groups.
CONCLUSION
[ "Adult", "Athletic Performance", "Curcumin", "Dietary Supplements", "Double-Blind Method", "Exercise", "Female", "Humans", "Male", "Myalgia", "Performance-Enhancing Substances", "Running", "Time Factors", "Treatment Outcome", "Young Adult" ]
6683062
1. Introduction
Even in trained athletes, a novel or unaccustomed exercise bout, especially one that emphasizes eccentric contractions, can cause microscopic intramuscular tears and an exaggerated inflammatory response [1,2], which is generally referred to as exercise-induced muscle damage (EIMD) [3]. The subsequent muscular pain and restriction of movement from EIMD can limit an athlete’s performance [4]. Thus, strategies that can attenuate performance decrements associated with EIMD should result in higher training quality and, hypothetically, greater exercise training adaptations. Consequently, many strategies have been proposed to treat or prevent EIMD [5], but there is still no scientific consensus on the most effective strategy for all individuals [6]. Curcumin, the bioactive component (2–5% by weight) of the spice turmeric, has a long history of medicinal use due to its anti-inflammatory and antioxidant properties [7]. The United States Food and Drug Administration has listed curcumin as GRAS (generally recognized as safe), and curcumin-containing supplements have been approved for human ingestion [8]. Curcumin is a polyphenol that is considered a “nutraceutical”, a dietary agent with pharmaceutical or therapeutic properties. Curcumin’s potential as a treatment strategy in place of targeted pharmaceuticals is promising because it is a highly pleiotropic molecule that interacts with multiple inflammatory pathways [1,2]. Previous research has indicated that curcumin may help alleviate performance decrements following intense, challenging exercise. For example, initial research in mice indicated that curcumin supplementation led to greater voluntary activity and improved running performance compared to placebo-supplemented mice after eccentric exercise [9]. Similar effects of curcumin following EIMD have been reported in human subjects. One study reported that curcumin supplementation reduces pain and tenderness [10], while Drobnic et al. 
[11] reported a reduction in muscular trauma in the posterior and medial thigh following a downhill run with curcumin supplementation, along with a moderate reduction in pain. In contrast, Nicol et al. [12] reported that curcumin moderately reduced pain during exercise, but had little effect on muscle function. Regarding the impact of specific curcuminoids (as opposed to curcumin supplementation) in response to exercise and muscle damage, no published literature is available at the current time. A systematic review by Gaffey et al. [13] concluded that insufficient evidence exists to support the ability of curcuminoids to relieve pain and improve function. Importantly, the authors highlighted a general lack of evidence and poor study quality in the existing literature base. The mixed findings in previous studies may have resulted from limited bioavailability due to the formulation of the supplement [14]. A major limitation to the therapeutic potential of curcumin is its poor solubility, low absorption from the gut, rapid metabolism, and systemic elimination [15]. Curcumin is primarily excreted through the feces, never reaching detectable levels in circulation [16]. Previously, it has been shown that even high doses of orally administered curcumin (e.g., 10–12 g) result in little-to-no appearance of curcuminoids in circulation [17]. Various methods have been developed to increase the bioavailability of curcumin involving emulsions, nanocrystals, and liposomes, with varying degrees of success [18]. A recent formulation of curcumin (CurcuWIN®) combines cellulosic derivatives with other natural antioxidants (tocopherol and ascorbyl palmitate). This formulation has previously been shown to increase absorption 45.9-fold over a standardized curcumin mixture and 5.8- to 34.9-fold over other formulations, while being well tolerated with no reported adverse events [19]. 
As part of participating in regular exercise training programs, much interest exists in strategies to reduce decrements in performance and improve training quality. As a strategy to reduce soreness, loss of muscle function, and inflammation, curcumin is a potentially useful nutritional target, yet longstanding shortcomings with the ingredient have limited its potential. Therefore, the purpose of this study was to examine, in a double-blind, placebo-controlled fashion, whether a novel form of curcumin would attenuate performance decrements and reduce inflammation following a downhill running bout. Beyond examining the impact of curcumin availability, an additional study question focused on the identification of dose-dependent outcomes associated with curcumin administration relative to its ability to attenuate performance changes. We hypothesized that, in a dose-dependent fashion, curcumin would attenuate changes in performance indicators after completion of a downhill bout of treadmill running.
3. Results
3.1. Isokinetic Peak Flexion Torque
Using mixed factorial ANOVA, no significant group × time interaction (p = 0.60) effect was found, while a significant main effect for time (p < 0.001) was identified for changes in peak flexion torque values (Figure 2). In PLA, no significant changes across time (p = 0.15) were identified, while significant changes across time were found for the 50-mg dose (p = 0.03), and the 200-mg dose tended (p = 0.052) to change across time. In the 50-mg dose group, peak flexion torque was significantly reduced in comparison to baseline values after one hour (95% CI: 2.46–9.48, p = 0.02), 24 h (95% CI: 2.34–9.60, p = 0.03), and 48 h (95% CI: 3.25–9.79, p = 0.02). In the 200-mg dose group, peak flexion torque was significantly reduced against baseline values after one hour (95% CI: 1.42–6.36, p = 0.04), and returned to baseline values at all the other time points.
3.2. Isokinetic Average Peak Flexion Torque
No significant group × time interaction (p = 0.55) effect was found, while a significant main effect for time (p < 0.001) was identified for changes in average peak flexion torque. A statistical tendency for change was observed in both the PLA (p = 0.08) and 200-mg (p = 0.07) groups, while values in the 50-mg dose group (p = 0.03) changed significantly across time. Pairwise comparisons in both the PLA (95% CI: 2.47–9.04, p = 0.046) and 200-mg (95% CI: 2.16–6.86, p = 0.006) groups indicated that average peak flexion torque values were reduced one hour after damage in comparison to baseline values, but returned to baseline after that initial drop in performance. Values in the 50-mg group exhibited more changes, being significantly reduced one hour (95% CI: 2.23–9.40, p = 0.03) and 48 h (95% CI: 2.45–8.57, p = 0.04) after damage, with values 24 h after damage tending to be reduced (95% CI: 1.58–9.48, p = 0.06).
3.3. Isokinetic Peak Extension Torque
Using mixed factorial ANOVA, no significant group × time interaction (p = 0.52) effect was found, while a significant main effect for time (p < 0.001) was identified for changes in peak extension torque values (Figure 3). Values changed significantly across time for the PLA (p = 0.002) and 50-mg (p = 0.04) groups, while no significant change was exhibited for the 200-mg dose (p = 0.16). In PLA, peak extension torque was significantly reduced one hour (95% CI: 8.10–18.04, p = 0.01) and 24 h (95% CI: 3.84–18.56, p = 0.03) after completion of the downhill running bout in comparison to pre-damage values. In the 50-mg group, peak extension torque was significantly reduced in comparison to baseline (95% CI: 11.02–24.85, p < 0.001) one hour after completion of the damaging exercise, but returned to baseline values at all the other time points. No significant changes across time (p = 0.159) were observed for the 200-mg dose.
3.4. Isokinetic Average Peak Extension Torque
Changes in average peak extension torque values indicated no significant group × time interaction (p = 0.578) in conjunction with a significant main effect for time (p < 0.001). Individual pairwise comparisons within each condition revealed a similar pattern of change: significant changes across time were found for PLA (p < 0.001), 50 mg (p = 0.03), and 200 mg (p = 0.03). Pairwise comparisons for PLA indicated that average peak extension torque values were significantly lower one hour (95% CI: 6.83–18.88, p = 0.02) after the running bout when compared to baseline levels. Individual comparisons in the 50-mg group revealed that values were significantly lower than baseline after one hour (95% CI: 9.72–24.39, p < 0.001) and 24 h (95% CI: 3.48–23.07, p = 0.04). Similarly, the 200-mg group exhibited a significant reduction from baseline after one hour (95% CI: 6.59–16.37, p = 0.002).
3.5. Isokinetic Peak Extension Power
No significant group × time interaction (p = 0.39) effect was found for changes in peak extension power across the supplementation groups (Figure 4), while a significant main effect over time was found (p = 0.002). Within-group changes in the PLA group revealed a tendency for values to change (p = 0.08); individual pairwise comparisons indicated that peak extension power was significantly reduced (95% CI: 6.16–17.69, p = 0.04) one hour after downhill treadmill running. Similarly, within-group changes in the 50-mg group indicated a statistical trend (p = 0.051) for peak extension power values to change across time, with the only significant change occurring one hour (95% CI: 6.86–22.30, p = 0.03) after completion of the treadmill exercise bout. The 200-mg group also exhibited a tendency for power values to change across time (p = 0.09), with values one hour (95% CI: 4.24–15.29, p = 0.03) after completion of the damage bout significantly lower than baseline, and values after 24 h (95% CI: 2.51–10.91, p = 0.09) tending to be lower.
3.6. Isokinetic Peak Flexion Power
No significant group × time interaction (p = 0.96) effect was found in any of the supplementation groups for peak flexion power (Figure 5), while a significant main effect over time was found (p < 0.001). Within-group changes in the PLA group revealed no significant change across time (p = 0.14), values in the 50-mg group showed a significant within-group change (p = 0.049), and values in the 200-mg group tended to change (p = 0.08). Significant reductions from baseline in peak flexion power were observed one hour after running for both PLA (95% CI: 3.29–12.03, p = 0.03) and 200 mg (95% CI: 3.21–8.51, p = 0.01), while values in the 50-mg group were significantly reduced from baseline after one hour (95% CI: 0.001–0.088, p = 0.04) and 24 h (95% CI: 0.85–6.74, p = 0.03), and tended to be lower after 48 h (95% CI: 1.18–9.73, p = 0.08).
3.7. Isometric Peak Torque
No significant group × time interaction (p = 0.57) was found, while a significant main effect over time was found (p = 0.046). Within-group changes in the PLA group revealed no significant change across time (p = 0.43), while values in both the 50-mg (p = 0.01) and 200-mg (p = 0.02) groups decreased significantly across time. As expected, no individual pairwise comparisons within PLA revealed any statistically significant changes (p > 0.05). Significant reductions from baseline in isometric peak torque were observed one hour (95% CI: 8.73–19.16, p < 0.001) and 24 h (95% CI: 7.95–23.62, p = 0.004) after exercise in the 50-mg group. Within the 200-mg group, isometric peak torque values were significantly reduced 24 h after exercise (95% CI: 1.69–13.79, p = 0.02), but were similar to baseline values at all the other time points.
3.8. Isometric Average Peak Torque
No significant group × time interaction (p = 0.52) was found, while a significant main effect over time was found (p = 0.001) for isometric average peak torque values. Within-group changes in the PLA (p = 0.06) and 50-mg (p = 0.11) groups were not different from baseline, while the 200-mg group exhibited significant reductions across time (p = 0.02), with no individual pairwise comparisons reaching statistical significance.
3.9. Perceived Soreness
Statistically significant main effects for time were identified, indicating that perceived thigh soreness levels significantly increased in response to the exercise bout. Anterior thigh soreness (p < 0.001), posterior thigh soreness (p < 0.001), and total thigh soreness (p < 0.001) all increased significantly across time. No significant group × time interactions were identified for anterior thigh soreness (p = 0.73), posterior thigh soreness (p = 0.73), or total thigh soreness (p = 0.30). Non-significant improvements in exercise-induced total thigh soreness indicated that the 200-mg group reported 26%, 20%, and 8% less soreness immediately, 24 h, and 48 h after exercise, respectively, than the soreness levels reported in the PLA and 50-mg groups. However, these differences failed to reach statistical significance (Figure 6).
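The within-group pairwise comparisons above are reported as changes with 95% CIs. As a rough illustration of how such an interval is formed (not the authors' actual ANOVA procedure), the following computes a paired mean difference and a t-based 95% CI for hypothetical pre/post torque values from eight invented subjects, using the critical value t(0.975, 7 df) ≈ 2.365:

```python
import statistics

# Illustrative sketch (not the study's analysis): paired mean difference
# with an approximate t-based 95% CI for n hypothetical subjects.

def paired_diff_ci(pre, post, t_crit=2.365):
    """Mean of (pre - post) with a 95% CI; t_crit assumes n = 8 (7 df)."""
    diffs = [a - b for a, b in zip(pre, post)]
    mean = statistics.mean(diffs)
    sem = statistics.stdev(diffs) / len(diffs) ** 0.5  # standard error of the mean
    return mean, (mean - t_crit * sem, mean + t_crit * sem)

pre = [210, 198, 225, 240, 205, 218, 230, 212]   # hypothetical torque (Nm), baseline
post = [200, 190, 215, 236, 196, 210, 221, 205]  # hypothetical torque, 1 h post-run
mean, (lo, hi) = paired_diff_ci(pre, post)
print(mean, round(lo, 1), round(hi, 1))
```

An interval excluding zero, as here, corresponds to a significant within-group reduction of the kind reported in the results above.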
5. Conclusions
In conclusion, results from the present study highlight the ability of a high dose of CurcuWIN® (a 1000-mg dose delivering 200 mg of curcuminoids) to prevent the decreases in peak extension torque observed one and 24 h after muscle-damaging exercise. In comparison, a lower dose of CurcuWIN® (delivering 50 mg of curcuminoids) did not attenuate these performance changes, exhibiting a pattern similar to that observed with PLA. While this study investigated changes in performance, future studies should also investigate more objective measures of muscle damage (blood markers such as creatine kinase or myoglobin) in both trained and untrained cohorts.
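For reference, the dose arithmetic behind the two CurcuWIN® conditions can be sketched as follows. The 20% curcuminoid load (200 mg per 1000 mg CurcuWIN®) and the typical natural-curcumin ratios are taken from the methods text; the function itself is illustrative, not part of the study:

```python
# Curcuminoid load stated in the study: 200 mg curcuminoids per 1000 mg CurcuWIN.
CURCUMINOID_LOAD = 200 / 1000

# Typical ratios for commercially available natural curcumin, as cited
# in the methods [22].
RATIOS = {
    "curcumin": 0.715,
    "demethoxycurcumin": 0.194,
    "bisdemethoxycurcumin": 0.091,
}

def curcuminoids_per_dose(curcuwin_mg: float) -> dict:
    """Break a CurcuWIN dose (mg) into its expected curcuminoid content (mg)."""
    total_mg = curcuwin_mg * CURCUMINOID_LOAD
    return {name: round(total_mg * frac, 1) for name, frac in RATIOS.items()}

print(curcuminoids_per_dose(1000))  # high dose: 200 mg total curcuminoids
print(curcuminoids_per_dose(250))   # low dose: 50 mg total curcuminoids
```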
[ "2. Materials and Methods", "2.2. Experimental Protocol", "2.3. Supplementation", "2.4. Muscle Function Assessment", "2.5. Statistical Analysis", "3.1. Isokinetic Peak Flexion Torque", "3.2. Isokinetic Average Peak Flexion Torque", "3.3. Isokinetic Peak Extension Torque", "3.4. Isokinetic Average Peak Extension Torque", "3.5. Isokinetic Peak Extension Power", "3.6. Isokinetic Peak Flexion Power", "3.7. Isometric Peak Torque", "3.8. Isometric Average Peak Torque", "3.9. Perceived Soreness" ]
[ "The data reported in this study were collected as part of a randomized, double-blind, placebo-controlled investigation that examined the effects of eight weeks of curcumin supplementation on blood flow, exercise performance, and muscle damage. The data involving blood flow have already been published [20]. This study was conducted in accordance with the Declaration of Helsinki guidelines and registered with the ISRCTN registry (ISRCTN90184217). All the procedures involving human subjects were approved by the Institutional Review Board of Texas Christian University for use of human subjects in research (IRB approval: 1410-105-1410). Written consent was obtained from all the participants prior to any participation.\n 2.1. Participants Men and women aged 19 to 29 years were considered eligible for participation if he or she was a non-smoker and free of any musculoskeletal, medical, or metabolic contraindications to exercise. The health status and activity levels of potential participants were determined by completion of a medical history form and a physical activity record. Further, participants had to be low to moderately trained, which was defined as meeting the current American College of Sports Medicine (ACSM) guidelines of at least 150 min of moderate aerobic activity, or 75 min per week of vigorous aerobic activity per week for at least the past three months [21]. Exclusion criteria included women who were pregnant or lactating, any participation in another clinical trial or consumption of an investigational product within the previous 30 days, receiving regular treatment with anti-inflammatory/analgesic/antioxidant drugs in the previous month, or the use of any ergogenic aid during the nine-week period prior to recruitment.\nA total of 106 participants (53M, 53F) were initially screened for study participation. 
Of this cohort, 32 additional participants (15M, 17F) were excluded from study involvement because they did not meet the study’s inclusion and exclusion criteria. Consequently, a total of 74 subjects were randomly assigned to a supplementation group and, from there, an additional 11 participants (7M, 4F) withdrew from the study due to other priorities (i.e., school), resulting in a total of 63 (31M, 32F) study participants who completed the study. The subjects’ characteristics are presented in Table 1.\n 2.2. Experimental Protocol A summary of the study design is presented in Figure 1. All the testing was performed in the Exercise Physiology Laboratory within the Kinesiology Department at Texas Christian University. Prior to supplementation, and 3 to 7 days after a familiarization visit, participants returned to the laboratory after having abstained from any strenuous physical activity for at least 48 h. Additionally, participants were asked to avoid coffee, alcohol, and non-steroidal anti-inflammatory drugs at least 24 h prior to any trial. Baseline testing consisted of a muscle function assessment and a maximal aerobic capacity (VO2max) test.\nAfter 56 days of supplementation, the participants reported to the lab to repeat all the muscle function assessments. The following day, participants performed a downhill run to induce muscle damage. The downhill run was performed on a modified TMX 3030c (Trackmaster, Newton, KS, USA) at a –15% grade. Prior to the downhill run, participants warmed up for five minutes at a 0% grade at a speed equivalent to 65% VO2Max. American College of Sports Medicine metabolic calculations for the estimation of energy expenditure during running [21] were utilized to determine the treadmill speed—on a level grade—that would approximately elicit 65% VO2max. Gas exchange was continuously measured (Parvo Medics, Sandy, UT, USA), and the treadmill speed was adjusted after two minutes to match 65% VO2Max. After the five-minute warm-up, participants completed a 45-min downhill run at a –15% grade and speed equivalent to 65% VO2Max. 
Muscle function was assessed one hour after completion of the downhill running bout. Participants returned to the lab at 24 h, 48 h, and 72 h following the downhill run for follow-up muscle function testing.\n 2.3. 
Supplementation The investigational product (CurcuWIN®; OmniActive Health Technologies Ltd., Mumbai, India) contained turmeric extract (20–28%), a hydrophilic carrier (63–75%), cellulosic derivatives (10–40%), and natural antioxidants (1–3%) [19]. During the consent and familiarization process, participants were educated on which foods contained turmeric, and were asked to avoid those foods for the duration of the study while maintaining their normal diet. Participants were randomly assigned in a double-blind manner to either a placebo (corn starch, PLA), low dose of curcumin (50 mg of curcuminoids = 250 mg CurcuWIN®), or high dose of curcumin (200 mg of curcuminoids = 1000 mg CurcuWIN®) supplementation. Commercially available natural curcumin contains three curcuminoids in the following ratios: curcumin (71.5%), demethoxycurcumin (19.4%), and bisdemethoxycurcumin (9.1%) [22]. The day following baseline testing, participants were asked to ingest one dose with breakfast, lunch, and dinner for a total of three doses per day. To assist in blinding, all the doses required the ingestion of one capsule that was identical in shape, size, and color. As part of compliance monitoring to the supplementation regimen, participants were provided the capsules on a weekly basis in a pill container that provided seven doses and were asked to return the empty containers. A side effects questionnaire was also completed when receiving capsules. Compliance was set at ≥80%, and participants not meeting compliance were removed from the study.\n 2.4. Muscle Function Assessment Muscle function was determined by assessing isokinetic and isometric peak torque as well as isokinetic power using a Biodex dynamometer (System 3; Biodex Medical Systems, Shirley, NY, USA) on the participant’s self-reported dominant leg. All the participants were familiarized with the testing protocols during his or her familiarization visit prior to supplementation. Before muscle function assessment, participants performed a five-minute self-paced warm up on a treadmill. Then, each participant was seated with their knee aligned with the lever arm axis of the dynamometer. The dynamometer warm-up consisted of three concentric-only extension and flexion repetitions at 50% of perceived maximal force production. 
Following the warm-up, participants were given a 90-s recovery.\nIsokinetic peak torque and power were tested at a speed of 60° * s−1 through a modified range of motion, which began with their leg at approximately 120° knee flexion and continued through full extension of the knee (180° knee flexion). Once in the starting position, participants began each set of repetitions by forcefully extending their leg against the resistance through the full range of motion before forcefully flexing their knee back to the starting position. Participants repeated this motion for five continuous repetitions at maximal effort. Peak torque and power values for both extension and flexion were recorded as individual peak values and as average values across all the completed repetitions. After the isokinetic exercise, participants were given a three-minute rest. Then, isometric strength was assessed by three maximal voluntary extensions of the knee, each lasting five seconds against a fixed resistance arm at an angle of 120°. A one-minute rest was given between each repetition, and all the repetitions were performed at maximal effort. The highest isometric torque values were recorded as peak isometric torque in foot–pounds (ft–lbs) of torque, and the average peak torque value was also computed. Acute assessments of both isokinetic and isometric force and power are regularly used to assess acute changes in both dynamic and static force production in response to various forms of exercise stimuli [23,24]. Test–retest reliability using similar isokinetic devices with similar protocols and controls consistently yielded high measures of reliability and coefficients of variation less than 5% [25].\nAs an indirect indicator of muscle damage, perceived levels of anterior, posterior, and total soreness of the knee extensors were assessed by all participants using a 100-mm visual analog scale. 
Soreness was assessed along a 100-mm scale (0 mm = no soreness, 100 mm = extreme soreness) for each time point (pre-exercise, immediately (0 h), 24 h, 48 h, and 72 h post-exercise) by drawing a line perpendicular to the continuum line extending from 0 mm to 100 mm. Soreness was evaluated by measuring the distance of each mark from 0 and rounded up to the nearest millimeter.\n 2.5. Statistical Analysis Statistical analyses were performed using SPSS V.25 (IBM Corporation; Armonk, NY, USA). Primary outcomes were identified as peak torque values, while secondary outcomes were identified as average torque and power assessments. All the outcome measures were initially analyzed by a repeated measures analysis of variance (ANOVA) with three factors: gender (two levels), treatment (three levels), and time (six levels). Normality was assessed using the Shapiro–Wilk test with several variables violating the normality assumptions. 
All the non-normal variables were transformed using log10 before completing ANOVA procedures. All the data reported throughout are raw data. Homogeneity of variance was assessed using Mauchly’s test of sphericity. When the homogeneity of variance assumption was violated, the Greenhouse–Geisser correction was applied when epsilon was <0.75, and the Huynh–Feldt correction was applied when epsilon was >0.75. Outside of expected outcomes (greater strength), gender contributed little to the overall model and did not impact treatment analysis; thus, gender was removed from the model and 3 (group) × 6 (time) mixed factorial ANOVA with repeated measures on time were subsequently used to assess the differences across time between groups. Tukey post hoc procedures were used when a significant finding (p ≤ 0.05) or trend (0.05 < p ≤ 0.10) was identified. Significant main effects for time were fully decomposed by completing factorial ANOVA with repeated measures on time. If significant differences in time or trends were identified, pairwise comparisons with Bonferroni corrections applied to the confidence interval were evaluated to determine differences between time points for each condition.", "A summary of the study design is presented in Figure 1. All the testing was performed in the Exercise Physiology Laboratory within the Kinesiology Department at Texas Christian University. Prior to supplementation, and 3 to 7 days after a familiarization visit, participants returned to the laboratory after having abstained from any strenuous physical activity for at least 48 h. Additionally, participants were asked to avoid coffee, alcohol, and non-steroidal anti-inflammatory drugs at least 24 h prior to any trial. Baseline testing consisted of a muscle function assessment and a maximal aerobic capacity (VO2max) test.\nAfter 56 days of supplementation, the participants reported to the lab to repeat all the muscle function assessments. The following day, participants performed a downhill run to induce muscle damage. The downhill run was performed on a modified TMX 3030c (Trackmaster, Newton, KS, USA) at a –15% grade. 
Prior to the downhill run, participants warmed up for five minutes at a 0% grade at a speed equivalent to 65% VO2Max. American College of Sports Medicine metabolic calculations for the estimation of energy expenditure during running [21] were utilized to determine the treadmill speed—on a level grade—that would approximately elicit 65% VO2max. Gas exchange was continuously measured (Parvo Medics, Sandy, UT, USA), and the treadmill speed was adjusted after two minutes to match 65% VO2Max. After the five-minute warm-up, participants completed a 45-min downhill run at a –15% grade and speed equivalent to 65% VO2Max. Muscle function was assessed one hour after completion of the downhill running bout. Participants returned to the lab at 24 h, 48 h, and 72 h following the downhill run for follow-up muscle function testing.", "The investigational product (CurcuWIN®; OmniActive Health Technologies Ltd., Mumbai, India) contained turmeric extract (20–28%), a hydrophilic carrier (63–75%), cellulosic derivatives (10–40%), and natural antioxidants (1–3%) [19]. During the consent and familiarization process, participants were educated on which foods contained turmeric, and were asked to avoid those foods for the duration of the study while maintaining their normal diet. Participants were randomly assigned in a double-blind manner to either a placebo (corn starch, PLA), low dose of curcumin (50 mg of curcuminoids = 250 mg CurcuWIN®), or high dose of curcumin (200 mg of curcuminoids = 1000 mg CurcuWIN®) supplementation. Commercially available natural curcumin contains three curcuminoids in the following ratios: curcumin (71.5%), demethoxycurcumin (19.4%), and bisdemethoxycurcumin (9.1%) [22]. The day following baseline testing, participants were asked to ingest one dose with breakfast, lunch, and dinner for a total of three doses per day. To assist in blinding, all the doses required the ingestion of one capsule that was identical in shape, size, and color. 
As part of compliance monitoring to the supplementation regimen, participants were provided the capsules on a weekly basis in a pill container that provided seven doses and were asked to return the empty containers. A side effects questionnaire was also completed when receiving capsules. Compliance was set at ≥80%, and participants not meeting compliance were removed from the study.", "Muscle function was determined by assessing isokinetic and isometric peak torque as well as isokinetic power using a Biodex dynamometer (System 3; Biodex Medical Systems, Shirley, NY, USA) on the participant’s self-reported dominant leg. All the participants were familiarized with the testing protocols during his or her familiarization visit prior to supplementation. Before muscle function assessment, participants performed a five-minute self-paced warm up on a treadmill. Then, each participant was seated with their knee aligned with the lever arm axis of the dynamometer. The dynamometer warm-up consisted of three concentric-only extension and flexion repetitions at 50% of perceived maximal force production. Following the warm-up, participants were given a 90-s recovery.\nIsokinetic peak torque and power were tested at a speed of 60° * s−1 through a modified range of motion, which began with their leg at approximately 120° knee flexion and continued through full extension of the knee (180° knee flexion). Once in the starting position, participants began each set of repetitions by forcefully extending their leg against the resistance through the full range of motion before forcefully flexing their knee back to the starting position. Participants repeated this motion for five continuous repetitions at maximal effort. Peak torque and power values for both extension and flexion were recorded as individual peak values and as average values across all the completed repetitions. After the isokinetic exercise, participants were given a three-minute rest. 
Then, isometric strength was assessed by three maximal voluntary extensions of the knee, each lasting five seconds against a fixed resistance arm at an angle of 120°. A one-minute rest was given between each repetition, and all the repetitions were performed at maximal effort. The highest isometric torque values were recorded as peak isometric torque in foot–pounds (ft–lbs) of torque, and the average peak torque value was also computed. Acute assessments of both isokinetic and isometric force and power are regularly used to assess acute changes in both dynamic and static force production in response to various forms of exercise stimuli [23,24]. Test–retest reliability using similar isokinetic devices with similar protocols and controls consistently yielded high measures of reliability and coefficients of variation less than 5% [25].\nAs an indirect indicator of muscle damage, perceived levels of anterior, posterior, and total soreness of the knee extensors were assessed by all participants using a 100-mm visual analog scale. Soreness was assessed along a 100-mm scale (0 mm = no soreness, 100 mm = extreme soreness) for each time point (pre-exercise, immediately (0 h), 24 h, 48 h, and 72 h post-exercise) by drawing a line perpendicular to the continuum line extending from 0 mm to 100 mm. Soreness was evaluated by measuring the distance of each mark from 0 and rounded up to the nearest millimeter.", "Statistical analyses were performed using SPSS V.25 (IBM Corporation; Armonk, NY, USA). Primary outcomes were identified as peak torque values, while secondary outcomes were identified as average torque and power assessments. All the outcome measures were initially analyzed by a repeated measures analysis of variance (ANOVA) with three factors: gender (two levels), treatment (three levels), and time (six levels). Normality was assessed using the Shapiro–Wilk test with several variables violating the normality assumptions. 
All the non-normal variables were transformed using log10 before completing ANOVA procedures. All the data reported throughout are raw data. Homogeneity of variance was assessed using Mauchly’s test of sphericity. When the homogeneity of variance assumption was violated, the Greenhouse–Geisser correction was applied when epsilon was <0.75, and the Huynh–Feldt correction was applied when epsilon was >0.75. Outside of expected outcomes (greater strength), gender contributed little to the overall model and did not impact treatment analysis; thus, gender was removed from the model and 3 (group) × 6 (time) mixed factorial ANOVA with repeated measures on time were subsequently used to assess the differences across time between groups. Tukey post hoc procedures were used when a significant finding (p ≤ 0.05) or trend (0.05 < p ≤ 0.10) was identified. Significant main effects for time were fully decomposed by completing factorial ANOVA with repeated measures on time. If significant differences in time or trends were identified, pairwise comparisons with Bonferroni corrections applied to the confidence interval were evaluated to determine differences between time points for each condition.", "Using mixed factorial ANOVA, no significant group x time interaction (p = 0.60) effects were realized, while a significant main effect for time (p < 0.001) was identified for changes in peak flexion torque values (Figure 2). Regarding PLA, no significant changes across time (p = 0.15) were identified, while significant changes across time were found for the 50-mg dose (p = 0.03), and the 200-mg dose tended (p = 0.052) to exhibit changes across time. In the 50-mg dose group, peak flexion torque was significantly reduced in comparison to baseline values after one hour (95% CI: 2.46–9.48, p = 0.02), 24 h (95% CI: 2.34–9.60, p = 0.03), and 48 h (95% CI: 3.25–9.79, p = 0.02). 
In the 200-mg dose group, peak flexion torque was significantly reduced against baseline values after one hour (95% CI: 1.42–6.36, p = 0.04), and returned to baseline values for all the other time points.", "No significant group × time interaction (p = 0.55) effects were realized, while a significant main effect for time (p < 0.001) was identified for changes in the average peak flexion torque. A statistical tendency for change was observed in both the PLA (p = 0.08) and 200-mg (p = 0.07) groups, while values in the 50-mg dose (p = 0.03) exhibited a significant change across time. Pairwise comparisons in both the PLA (95% CI: 2.47–9.04, p = 0.046) and 200-mg (95% CI: 2.16–6.86, p = 0.006) groups indicated that the average peak flexion torque values were reduced one hour after damage in comparison to baseline values, but returned to baseline values after that initial drop in performance. Values in the 50-mg group exhibited more changes, with significantly reduced values being observed one hour (95% CI: 2.23–9.40, p = 0.03) and 48 h (95% CI: 2.45–8.57, p = 0.04) after damage, with values 24 h after damage tending to be reduced (95% CI: 1.58–9.48, p = 0.06).", "Using mixed factorial ANOVA, no significant group x time interaction (p = 0.52) effects were realized, while a significant main effect for time (p < 0.001) was identified for changes in peak extension torque values (Figure 3). Values changed significantly across time for the PLA (p = 0.002) and 50-mg (p = 0.04) groups, while no significant change was exhibited for the 200-mg dose (p = 0.16). In PLA, peak extension torque was significantly reduced one hour (95% CI: 8.10–18.04, p = 0.01) and 24 h (95% CI: 3.84–18.56, p = 0.03) after completion of the downhill running bout in comparison to pre-damage values. 
Changes in the 50-mg group indicated that peak extension torque was significantly reduced in comparison to baseline values (95% CI: 11.02–24.85, p < 0.001) one hour after completion of the damaging exercise, but returned to baseline values for all the other time points. No significant changes across time (p = 0.159) were observed for the 200-mg dose.", "Changes in average peak extension torque values indicated no significant group x time interaction (p = 0.578) in conjunction with a significant main effect for time (p < 0.001). Individual pairwise comparisons within each condition between individual time points revealed a similar pattern of change. In this respect, significant changes across time were found for PLA (p < 0.001), 50 mg (p = 0.03), and 200 mg (p = 0.03). Pairwise comparisons for PLA indicated that average peak extension torque values were significantly lower one hour (95% CI: 6.83–18.88, p = 0.02) after the running bout when compared to baseline levels. Individual comparison in the 50-mg group revealed that values were significantly lower than baseline after one hour (95% CI: 9.72–24.39, p < 0.001) and 24 h (95% CI: 3.48–23.07, p = 0.04). Similarly, changes within the 200-mg group exhibited a significant reduction from baseline after one hour (95% CI: 6.59–16.37, p = 0.002).", "No significant group x time interaction (p = 0.39) effect was found for changes in peak extension power in all the supplementation groups against time (Figure 4). A significant main effect over time was found (p = 0.002). Within-group changes in the PLA group revealed a tendency for values to change (p = 0.08). Individual pairwise comparisons in PLA indicated that peak extension power data was significantly reduced (95% CI: 6.16–17.69, p = 0.04) one hour after downhill treadmill running. Similarly, within-group changes in the 50-mg group indicated a statistical trend (p = 0.051) for peak extension power values to change across time. 
Again, pairwise comparisons revealed that the only significant changes for peak extension power occurred one hour (95% CI: 6.86–22.30, p = 0.03) after completion of the treadmill exercise bout. Changes in the 200-mg group also exhibited a tendency for power values to change across time (p = 0.09), with individual pairwise comparisons indicating that values one hour (95% CI: 4.24–15.29, p = 0.03) after completion of the damage bout were significantly lower than baseline, while changes after 24 h (95% CI: 2.51–10.91, p = 0.09) tended to be lower.", "No significant group × time interaction (p = 0.96) effect was found in any of the supplementation groups for peak flexion power (Figure 5). A significant main effect over time was found (p < 0.001). Within-group changes in the PLA group revealed no significant change across time (p = 0.14), values in the 50-mg group revealed a significant within-group change (p = 0.049), and values for the 200-mg dose tended to change (p = 0.08). Significant reductions from baseline in peak flexion power were observed one hour after running for both PLA (95% CI: 3.29–12.03, p = 0.03) and 200 mg (95% CI: 3.21–8.51, p = 0.01), while values in the 50-mg group were significantly reduced from baseline after one hour (0.001–0.088, p = 0.04) and 24 h (95% CI: 0.85–6.74, p = 0.03), and tended to be lower after 48 h (95% CI: 1.18–9.73, p = 0.08).", "No significant group × time interaction (p = 0.57) was found for isometric peak torque, while a significant main effect over time was found (p = 0.046). Within-group changes in the PLA group revealed no significant change across time (p = 0.43), while values in both the 50-mg (p = 0.01) and 200-mg groups (p = 0.02) significantly decreased across time. As expected, no individual pairwise comparisons within PLA revealed any statistically significant changes (p > 0.05). 
Significant reductions from baseline in isometric peak torque were observed one hour (95% CI: 8.73–19.16, p < 0.001) and 24 h (95% CI: 7.95–23.62, p = 0.004) after exercise in the 50-mg group. Within the 200-mg group, isometric peak torque values were significantly reduced 24 h after exercise (95% CI: 1.69–13.79, p = 0.02), but were similar to the baseline values at all the other time points. ", "No significant group × time interaction (p = 0.52) was found, while a significant main effect over time was found (p = 0.001) for isometric average peak torque values. Within-group changes in the PLA (p = 0.06) and 50-mg groups (p = 0.11) were not significant, while changes in the 200-mg group exhibited significant reductions across time (p = 0.02), with no individual pairwise comparisons reaching statistical significance.", "Statistically significant main effects for time were identified, indicating that perceived thigh soreness levels significantly increased in response to the exercise bout. Anterior thigh soreness (p < 0.001), posterior thigh soreness (p < 0.001), and total thigh soreness (p < 0.001) were all found to significantly increase across time. No significant group × time interactions were identified for anterior thigh soreness (p = 0.73), posterior thigh soreness (p = 0.73), or total thigh soreness (p = 0.30). For exercise-induced total thigh soreness, the 200-mg group reported 26%, 20%, and 8% less soreness immediately, 24 h, and 48 h after exercise, respectively, than the levels reported in the PLA and 50-mg groups; however, these differences failed to reach statistical significance (Figure 6)." ]
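The within-group follow-ups above rely on pairwise comparisons with a Bonferroni correction (described in the statistical analysis section). A minimal sketch of that adjustment is shown below; the p-values are illustrative placeholders, not data from this study.

```python
# Bonferroni-corrected pairwise follow-up comparisons of post-exercise time
# points against baseline. All p-values here are made-up placeholders, NOT
# values taken from the study above.
alpha = 0.05
raw_p = {"1h": 0.008, "24h": 0.012, "48h": 0.040, "72h": 0.300}  # each vs. baseline

m = len(raw_p)  # number of pairwise comparisons against baseline

def bonferroni(p, m):
    """Bonferroni-adjusted p-value: multiply by the number of tests, capped at 1."""
    return min(1.0, p * m)

adjusted = {tp: bonferroni(p, m) for tp, p in raw_p.items()}
significant = [tp for tp, p in adjusted.items() if p < alpha]  # ["1h", "24h"]
```

The same idea applies whether the correction is expressed as adjusted p-values (as here) or as a tightened per-comparison alpha of 0.05/m; the two forms are equivalent.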
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Participants", "2.2. Experimental Protocol", "2.3. Supplementation", "2.4. Muscle Function Assessment", "2.5. Statistical Analysis", "3. Results", "3.1. Isokinetic Peak Flexion Torque", "3.2. Isokinetic Average Peak Flexion Torque", "3.3. Isokinetic Peak Extension Torque", "3.4. Isokinetic Average Peak Extension Torque", "3.5. Isokinetic Peak Extension Power", "3.6. Isokinetic Peak Flexion Power", "3.7. Isometric Peak Torque", "3.8. Isometric Average Peak Torque", "3.9. Perceived Soreness", "4. Discussion", "5. Conclusions" ]
[ "Even in trained athletes, a novel or unaccustomed exercise bout, especially those that emphasize eccentric contractions, can cause microscopic intramuscular tears and an exaggerated inflammatory response [1,2], which is generally referred to as exercise-induced muscle damage (EIMD) [3]. The subsequent muscular pain and restriction of movement from EIMD can limit an athlete’s performance [4]. Thus, strategies that can attenuate performance decrements associated with EIMD should result in higher training quality and hypothetically greater exercise training adaptations. Consequently, many strategies have been proposed to treat or prevent EIMD [5], but there is still no scientific consensus on the most effective strategy for all individuals [6].\nCurcumin, the bioactive component (2–5% by weight) of the spice herb turmeric, has a long history of medicinal use due to its anti-inflammatory and antioxidant properties [7]. The United States Food and Drug Administration has listed curcumin as GRAS (generally recognized as safe), and curcumin-containing supplements have been approved for human ingestion [8]. Curcumin is a polyphenol that is considered to be a “nutraceutical”, or a dietary agent with pharmaceutical or therapeutic properties. The capacity for curcumin as a treatment strategy in favor of targeted pharmaceuticals is promising due to curcumin being a highly pleiotropic molecule that interacts with multiple inflammatory pathways [1,2].\nPrevious research has indicated that curcumin may help alleviate performance decrements following intense, challenging exercise. For example, initial research in mice indicated that curcumin supplementation led to greater voluntary activity and improved running performance compared to placebo-supplemented mice after eccentric exercise [9]. Similar effects of curcumin following EIMD have been reported in human subjects. One study reported that curcumin supplementation reduces pain and tenderness [10], while Drobnic et al. 
[11] reported a reduction in muscular trauma in the posterior and medial thigh following a downhill run with curcumin supplementation along with a moderate reduction in pain. In contrast, Nicol et al. [12] reported that curcumin moderately reduced pain during exercise, but had little effect on muscle function. Relative to the impact of specific curcuminoids (as opposed to curcumin supplementation) in response to exercise and muscle damage, no published literature is available at the current time. A systematic review by Gaffey et al. [13] concluded that insufficient evidence exists to support the ability of curcuminoids to relieve pain and improve function. Importantly, the authors highlighted a general lack of evidence and poor study quality from the existing literature base.\nThe mixed findings in previous studies may have been a result of the limited bioavailability due to formulation of the supplement [14]. A major limitation to the therapeutic potential of curcumin is its poor solubility, low absorption from the gut, rapid metabolism, and systemic elimination [15]. Curcumin is primarily excreted through the feces, never reaching detectable levels in circulation [16]. Previously, it has been shown that high doses of orally administered curcumin (e.g., 10–12 g) have resulted in little-to-no appearance of curcuminoids in circulation [17]. Various methods have been developed to increase the bioavailability of curcumin involving emulsions, nanocrystals, and liposomes, with varying degrees of success [18]. A recent formulation of curcumin (CurcuWIN®) involves the combination of cellulosic derivatives and other natural antioxidants (tocopherol and ascorbyl palmitate). 
This formulation has been shown previously to increase absorption 45.9-fold over a standardized curcumin mixture and 5.8- to 34.9-fold over other formulations, while being well tolerated with no reported adverse events [19].\nAmong individuals participating in regular exercise training programs, much interest exists in strategies to reduce decrements in performance and improve training quality. As a strategy to reduce soreness, loss of muscle function, and inflammation, curcumin serves as a potentially useful nutritional target to aid in accomplishing these goals, yet longstanding shortcomings with the ingredient have limited its potential. Therefore, the purpose of this study was to examine, in a double-blind, placebo-controlled fashion, whether a novel form of curcumin would attenuate performance decrements and reduce inflammation following a downhill running bout. Beyond examining the impact of curcumin availability, an additional study question focused upon the identification of dose-dependent outcomes associated with curcumin administration relative to its ability to attenuate performance changes. We hypothesized that, in a dose-dependent fashion, curcumin would attenuate changes in performance indicators after completion of a downhill bout of treadmill running.", "The data reported in this study were collected as part of a randomized, double-blind, placebo-controlled investigation that examined the effects of eight weeks of curcumin supplementation on blood flow, exercise performance, and muscle damage. The data involving blood flow have already been published [20]. This study was conducted in accordance with the Declaration of Helsinki guidelines and registered with the ISRCTN registry (ISRCTN90184217). All the procedures involving human subjects were approved by the Institutional Review Board of Texas Christian University for use of human subjects in research (IRB approval: 1410-105-1410). 
Written consent was obtained from all the participants prior to any participation.\n 2.1. Participants Men and women aged 19 to 29 years were considered eligible for participation if he or she was a non-smoker and free of any musculoskeletal, medical, or metabolic contraindications to exercise. The health status and activity levels of potential participants were determined by completion of a medical history form and a physical activity record. Further, participants had to be low to moderately trained, which was defined as meeting the current American College of Sports Medicine (ACSM) guidelines of at least 150 min of moderate aerobic activity, or 75 min per week of vigorous aerobic activity per week for at least the past three months [21]. Exclusion criteria included women who were pregnant or lactating, any participation in another clinical trial or consumption of an investigational product within the previous 30 days, receiving regular treatment with anti-inflammatory/analgesic/antioxidant drugs in the previous month, or the use of any ergogenic aid during the nine-week period prior to recruitment.\nA total of 106 participants (53M, 53F) were initially screened for study participation. Of this cohort, 32 additional participants (15M, 17F) were excluded from study involvement due to them not meeting the study’s inclusion or exclusion criteria. Consequently, a total of 74 subjects were randomly assigned to a supplementation group and from there, an additional 11 participants (7M, 4F) withdrew from the study due to different priorities, i.e., school, resulting in a total of 63 (31M, 32F) study participants who completed the study. The subjects’ characteristics are presented in Table 1.\n 2.2. Experimental Protocol A summary of the study design is presented in Figure 1. All the testing was performed in the Exercise Physiology Laboratory within the Kinesiology Department at Texas Christian University. Prior to supplementation, and 3 to 7 days after a familiarization visit, participants returned to the laboratory after having abstained from any strenuous physical activity for at least 48 h. Additionally, participants were asked to avoid coffee, alcohol, and non-steroidal anti-inflammatory drugs at least 24 h prior to any trial. 
Baseline testing consisted of a muscle function assessment and a maximal aerobic capacity (VO2max) test.\nAfter 56 days of supplementation, the participants reported to the lab to repeat all the muscle function assessments. The following day, participants performed a downhill run to induce muscle damage. The downhill run was performed on a modified TMX 3030c (Trackmaster, Newton, KS, USA) at a –15% grade. Prior to the downhill run, participants warmed up for five minutes at a 0% grade at a speed equivalent to 65% VO2Max. American College of Sports Medicine metabolic calculations for the estimation of energy expenditure during running [21] were utilized to determine the treadmill speed—on a level grade—that would approximately elicit 65% VO2max. Gas exchange was continuously measured (Parvo Medics, Sandy, UT, USA), and the treadmill speed was adjusted after two minutes to match 65% VO2Max. After the five-minute warm-up, participants completed a 45-min downhill run at a –15% grade and speed equivalent to 65% VO2Max. Muscle function was assessed one hour after completion of the downhill running bout. Participants returned to the lab at 24 h, 48 h, and 72 h following the downhill run for follow-up muscle function testing.\n 2.3. Supplementation The investigational product (CurcuWIN®; OmniActive Health Technologies Ltd., Mumbai, India) contained turmeric extract (20–28%), a hydrophilic carrier (63–75%), cellulosic derivatives (10–40%), and natural antioxidants (1–3%) [19]. During the consent and familiarization process, participants were educated on which foods contained turmeric, and were asked to avoid those foods for the duration of the study while maintaining their normal diet. Participants were randomly assigned in a double-blind manner to either a placebo (corn starch, PLA), low dose of curcumin (50 mg of curcuminoids = 250 mg CurcuWIN®), or high dose of curcumin (200 mg of curcuminoids = 1000 mg CurcuWIN®) supplementation. Commercially available natural curcumin contains three curcuminoids in the following ratios: curcumin (71.5%), demethoxycurcumin (19.4%), and bisdemethoxycurcumin (9.1%) [22]. 
The day following baseline testing, participants were asked to ingest one dose with breakfast, lunch, and dinner for a total of three doses per day. To assist in blinding, all the doses required the ingestion of one capsule that was identical in shape, size, and color. As part of compliance monitoring to the supplementation regimen, participants were provided the capsules on a weekly basis in a pill container that provided seven doses and were asked to return the empty containers. A side effects questionnaire was also completed when receiving capsules. Compliance was set at ≥80%, and participants not meeting compliance were removed from the study.\n 2.4. Muscle Function Assessment Muscle function was determined by assessing isokinetic and isometric peak torque as well as isokinetic power using a Biodex dynamometer (System 3; Biodex Medical Systems, Shirley, NY, USA) on the participant’s self-reported dominant leg. All the participants were familiarized with the testing protocols during his or her familiarization visit prior to supplementation. Before muscle function assessment, participants performed a five-minute self-paced warm up on a treadmill. Then, each participant was seated with their knee aligned with the lever arm axis of the dynamometer. The dynamometer warm-up consisted of three concentric-only extension and flexion repetitions at 50% of perceived maximal force production. Following the warm-up, participants were given a 90-s recovery.\nIsokinetic peak torque and power were tested at a speed of 60° * s−1 through a modified range of motion, which began with their leg at approximately 120° knee flexion and continued through full extension of the knee (180° knee flexion). Once in the starting position, participants began each set of repetitions by forcefully extending their leg against the resistance through the full range of motion before forcefully flexing their knee back to the starting position. Participants repeated this motion for five continuous repetitions at maximal effort. Peak torque and power values for both extension and flexion were recorded as individual peak values and as average values across all the completed repetitions. After the isokinetic exercise, participants were given a three-minute rest. 
Then, isometric strength was assessed by three maximal voluntary extensions of the knee, each lasting five seconds against a fixed resistance arm at an angle of 120°. A one-minute rest was given between each repetition, and all the repetitions were performed at maximal effort. The highest isometric torque values were recorded as peak isometric torque in foot–pounds (ft–lbs) of torque, and the average peak torque value was also computed. Acute assessments of both isokinetic and isometric force and power are regularly used to assess acute changes in both dynamic and static force production in response to various forms of exercise stimuli [23,24]. Test–retest reliability using similar isokinetic devices with similar protocols and controls consistently yielded high measures of reliability and coefficients of variation less than 5% [25].\nAs an indirect indicator of muscle damage, perceived levels of anterior, posterior, and total soreness of the knee extensors were assessed by all participants using a 100-mm visual analog scale. Soreness was assessed along a 100-mm scale (0 mm = no soreness, 100 mm = extreme soreness) for each time point (pre-exercise, immediately (0 h), 24 h, 48 h, and 72 h post-exercise) by drawing a line perpendicular to the continuum line extending from 0 mm to 100 mm. Soreness was evaluated by measuring the distance of each mark from 0 and rounded up to the nearest millimeter.\n 2.5. Statistical Analysis Statistical analyses were performed using SPSS V.25 (IBM Corporation; Armonk, NY, USA). Primary outcomes were identified as peak torque values, while secondary outcomes were identified as average torque and power assessments. All the outcome measures were initially analyzed by a repeated measures analysis of variance (ANOVA) with three factors: gender (two levels), treatment (three levels), and time (six levels). Normality was assessed using the Shapiro–Wilk test, with several variables violating the normality assumptions. All the non-normal variables were transformed using log10 before completing ANOVA procedures. All the data reported throughout are the raw data. Sphericity was assessed using Mauchly’s test. When the sphericity assumption was violated, the Greenhouse–Geisser correction was applied when epsilon was <0.75, and the Huynh–Feldt correction was applied when epsilon was >0.75. 
Outside of expected outcomes (greater strength), gender contributed little to the overall model and did not impact treatment analysis; thus, gender was removed from the model, and 3 (group) × 6 (time) mixed factorial ANOVA with repeated measures on time were subsequently used to assess the differences across time between groups. Tukey post hoc procedures were used when a significant finding (p ≤ 0.05) or trend (0.05 < p ≤ 0.10) was identified. Significant main effects for time were fully decomposed by completing factorial ANOVA with repeated measures on time. If significant differences in time or trends were identified, pairwise comparisons with Bonferroni corrections applied to the confidence interval were evaluated to determine differences between time points for each condition.", "Men and women aged 19 to 29 years were considered eligible for participation if he or she was a non-smoker and free of any musculoskeletal, medical, or metabolic contraindications to exercise. The health status and activity levels of potential participants were determined by completion of a medical history form and a physical activity record. Further, participants had to be low to moderately trained, which was defined as meeting the current American College of Sports Medicine (ACSM) guidelines of at least 150 min of moderate aerobic activity, or 75 min per week of vigorous aerobic activity per week for at least the past three months [21]. Exclusion criteria included women who were pregnant or lactating, any participation in another clinical trial or consumption of an investigational product within the previous 30 days, receiving regular treatment with anti-inflammatory/analgesic/antioxidant drugs in the previous month, or the use of any ergogenic aid during the nine-week period prior to recruitment.\nA total of 106 participants (53M, 53F) were initially screened for study participation. 
Of this cohort, 32 additional participants (15M, 17F) were excluded from study involvement due to them not meeting the study’s inclusion or exclusion criteria. Consequently, a total of 74 subjects were randomly assigned to a supplementation group and from there, an additional 11 participants (7M, 4F) withdrew from the study due to different priorities, i.e., school, resulting in a total of 63 (31M, 32F) study participants who completed the study. The subjects’ characteristics are presented in Table 1.", "A summary of the study design is presented in Figure 1. All the testing was performed in the Exercise Physiology Laboratory within the Kinesiology Department at Texas Christian University. Prior to supplementation, and 3 to 7 days after a familiarization visit, participants returned to the laboratory after having abstained from any strenuous physical activity for at least 48 h. Additionally, participants were asked to avoid coffee, alcohol, and non-steroidal anti-inflammatory drugs at least 24 h prior to any trial. Baseline testing consisted of a muscle function assessment and a maximal aerobic capacity (VO2max) test.\nAfter 56 days of supplementation, the participants reported to the lab to repeat all the muscle function assessments. The following day, participants performed a downhill run to induce muscle damage. The downhill run was performed on a modified TMX 3030c (Trackmaster, Newton, KS, USA) at a –15% grade. Prior to the downhill run, participants warmed up for five minutes at a 0% grade at a speed equivalent to 65% VO2Max. American College of Sports Medicine metabolic calculations for the estimation of energy expenditure during running [21] were utilized to determine the treadmill speed—on a level grade—that would approximately elicit 65% VO2max. Gas exchange was continuously measured (Parvo Medics, Sandy, UT, USA), and the treadmill speed was adjusted after two minutes to match 65% VO2Max. 
After the five-minute warm-up, participants completed a 45-min downhill run at a −15% grade and a speed equivalent to 65% VO2max. Muscle function was assessed one hour after completion of the downhill running bout. Participants returned to the lab at 24 h, 48 h, and 72 h following the downhill run for follow-up muscle function testing.

The investigational product (CurcuWIN®; OmniActive Health Technologies Ltd., Mumbai, India) contained turmeric extract (20–28%), a hydrophilic carrier (63–75%), cellulosic derivatives (10–40%), and natural antioxidants (1–3%) [19]. During the consent and familiarization process, participants were educated on which foods contain turmeric and were asked to avoid those foods for the duration of the study while otherwise maintaining their normal diet. Participants were randomly assigned in a double-blind manner to a placebo (corn starch, PLA), a low dose of curcumin (50 mg of curcuminoids = 250 mg CurcuWIN®), or a high dose of curcumin (200 mg of curcuminoids = 1000 mg CurcuWIN®). Commercially available natural curcumin contains three curcuminoids in the following ratios: curcumin (71.5%), demethoxycurcumin (19.4%), and bisdemethoxycurcumin (9.1%) [22]. The day following baseline testing, participants began ingesting one dose with breakfast, lunch, and dinner, for a total of three doses per day. To assist in blinding, all doses consisted of one capsule identical in shape, size, and color. To monitor compliance with the supplementation regimen, participants were provided the capsules weekly in a pill container holding seven doses and were asked to return the empty containers. A side effects questionnaire was also completed when receiving capsules.
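As a rough illustration of the curcuminoid ratios quoted above, the composition of a given curcuminoid dose can be computed; applying the typical natural ratios [22] to the doses used in this study is an assumption made here for illustration only:

```python
# Typical curcuminoid composition of natural curcumin extract [22];
# applying these ratios to the study doses is an illustrative assumption.
RATIOS = {
    "curcumin": 0.715,
    "demethoxycurcumin": 0.194,
    "bisdemethoxycurcumin": 0.091,
}

def curcuminoid_breakdown(total_mg):
    """Split a total curcuminoid dose (mg) by the typical natural ratios."""
    return {name: round(total_mg * frac, 1) for name, frac in RATIOS.items()}

high_dose = curcuminoid_breakdown(200)  # the 200-mg curcuminoid dose
```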
Compliance was set at ≥80%, and participants not meeting compliance were removed from the study.

Muscle function was determined by assessing isokinetic and isometric peak torque, as well as isokinetic power, using a Biodex dynamometer (System 3; Biodex Medical Systems, Shirley, NY, USA) on the participant's self-reported dominant leg. All participants were familiarized with the testing protocols during the familiarization visit prior to supplementation. Before each muscle function assessment, participants performed a five-minute self-paced warm-up on a treadmill. Each participant was then seated with the knee aligned with the lever-arm axis of the dynamometer. The dynamometer warm-up consisted of three concentric-only extension and flexion repetitions at 50% of perceived maximal force production, followed by a 90-s recovery.
Isokinetic peak torque and power were tested at a speed of 60°·s−1 through a modified range of motion, beginning with the leg at approximately 120° of knee flexion and continuing through full extension of the knee (180° of knee flexion). Once in the starting position, participants began each set of repetitions by forcefully extending the leg against the resistance through the full range of motion before forcefully flexing the knee back to the starting position. Participants repeated this motion for five continuous repetitions at maximal effort. Peak torque and power values for both extension and flexion were recorded as individual peak values and as average values across all completed repetitions. After the isokinetic exercise, participants were given a three-minute rest. Isometric strength was then assessed by three maximal voluntary extensions of the knee, each lasting five seconds, against a fixed resistance arm at an angle of 120°. A one-minute rest was given between repetitions, and all repetitions were performed at maximal effort.
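The per-set reduction described above (recording both the single best repetition and the average across repetitions) amounts to a simple summary, sketched here with made-up values:

```python
def summarize_torque(reps):
    """Return (peak, average) torque across a set of repetitions,
    mirroring the per-set reduction described in the text."""
    peak = max(reps)
    avg = sum(reps) / len(reps)
    return peak, avg

# Illustrative torque values in ft-lbs; not study data
peak, avg = summarize_torque([118.2, 121.5, 119.8, 117.0, 120.1])
```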
The highest isometric torque value was recorded as peak isometric torque in foot–pounds (ft–lbs), and the average peak torque value was also computed. Acute assessments of both isokinetic and isometric force and power are regularly used to assess acute changes in both dynamic and static force production in response to various forms of exercise stimuli [23,24]. Test–retest reliability using similar isokinetic devices with similar protocols and controls has consistently yielded high measures of reliability and coefficients of variation of less than 5% [25].
As an indirect indicator of muscle damage, perceived levels of anterior, posterior, and total soreness of the knee extensors were assessed by all participants using a 100-mm visual analog scale (0 mm = no soreness, 100 mm = extreme soreness). At each time point (pre-exercise, immediately (0 h), 24 h, 48 h, and 72 h post-exercise), participants drew a line perpendicular to the continuum line extending from 0 mm to 100 mm. Soreness was scored by measuring the distance of each mark from 0, rounded to the nearest millimeter.

Statistical analyses were performed using SPSS V.25 (IBM Corporation; Armonk, NY, USA). Primary outcomes were peak torque values, while secondary outcomes were average torque and power assessments. All outcome measures were initially analyzed by a repeated measures analysis of variance (ANOVA) with three factors: gender (two levels), treatment (three levels), and time (six levels). Normality was assessed using the Shapiro–Wilk test; several variables violated the normality assumption and were transformed using log10 before completing ANOVA procedures. All data reported throughout are the raw (untransformed) data. Sphericity was assessed using Mauchly's test.
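The normality screen and log10 transform described above can be sketched as follows (SciPy is assumed available; the input values are fabricated placeholders, not study data):

```python
import numpy as np
from scipy import stats

def screen_and_transform(values, alpha=0.05):
    """Shapiro-Wilk normality screen; log10-transform when violated,
    mirroring the ANOVA preprocessing described in the text."""
    values = np.asarray(values, dtype=float)
    _, p = stats.shapiro(values)
    if p < alpha:                       # normality rejected
        return np.log10(values), True   # transformed before ANOVA
    return values, False                # analyzed as-is
```

Note that, as stated above, the transformed values were used only for the ANOVA procedures; the reported descriptive data remained on the raw scale.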
When the sphericity assumption was violated, the Greenhouse–Geisser correction was applied when epsilon was <0.75, and the Huynh–Feldt correction was applied when epsilon was ≥0.75. Outside of expected outcomes (greater strength), gender contributed little to the overall model and did not impact treatment analysis; thus, gender was removed from the model, and 3 (group) × 6 (time) mixed factorial ANOVAs with repeated measures on time were subsequently used to assess differences across time between groups. Tukey post hoc procedures were used when a significant finding (p ≤ 0.05) or trend (0.05 < p ≤ 0.10) was identified. Significant main effects for time were fully decomposed by completing factorial ANOVAs with repeated measures on time within each condition. If significant differences or trends across time were identified, pairwise comparisons with Bonferroni corrections applied to the confidence intervals were evaluated to determine differences between time points for each condition.

3.1. Isokinetic Peak Flexion Torque
Using mixed factorial ANOVA, no significant group × time interaction (p = 0.60) was realized, while a significant main effect for time (p < 0.001) was identified for changes in peak flexion torque values (Figure 2). Regarding PLA, no significant changes across time (p = 0.15) were identified, while significant changes across time were found for the 50-mg dose (p = 0.03), and the 200-mg dose tended (p = 0.052) to exhibit changes across time. In the 50-mg dose group, peak flexion torque was significantly reduced in comparison to baseline values after one hour (95% CI: 2.46–9.48, p = 0.02), 24 h (95% CI: 2.34–9.60, p = 0.03), and 48 h (95% CI: 3.25–9.79, p = 0.02).
In the 200-mg dose group, peak flexion torque was significantly reduced against baseline values after one hour (95% CI: 1.42–6.36, p = 0.04), and returned to baseline values at all other time points.

3.2. Isokinetic Average Peak Flexion Torque
No significant group × time interaction (p = 0.55) was realized, while a significant main effect for time (p < 0.001) was identified for changes in the average peak flexion torque. A statistical tendency for change was observed in both the PLA (p = 0.08) and 200-mg (p = 0.07) groups, while values in the 50-mg dose (p = 0.03) exhibited a significant change across time. Pairwise comparisons in both the PLA (95% CI: 2.47–9.04, p = 0.046) and 200-mg (95% CI: 2.16–6.86, p = 0.006) groups indicated that average peak flexion torque values were reduced one hour after damage in comparison to baseline values, but returned to baseline after that initial drop in performance.
Values in the 50-mg group exhibited more changes, with significantly reduced values observed one hour (95% CI: 2.23–9.40, p = 0.03) and 48 h (95% CI: 2.45–8.57, p = 0.04) after damage, and values 24 h after damage tending to be reduced (95% CI: 1.58–9.48, p = 0.06).

3.3. Isokinetic Peak Extension Torque
Using mixed factorial ANOVA, no significant group × time interaction (p = 0.52) was realized, while a significant main effect for time (p < 0.001) was identified for changes in peak extension torque values (Figure 3). Values changed significantly across time for the PLA (p = 0.002) and 50-mg (p = 0.04) groups, while no significant change was exhibited for the 200-mg dose (p = 0.16). In PLA, peak extension torque was significantly reduced one hour (95% CI: 8.10–18.04, p = 0.01) and 24 h (95% CI: 3.84–18.56, p = 0.03) after completion of the downhill running bout in comparison to pre-damage values.
Changes in the 50-mg group indicated that peak extension torque was significantly reduced in comparison to baseline values (95% CI: 11.02–24.85, p < 0.001) one hour after completion of the damaging exercise, but returned to baseline at all other time points. No significant changes across time (p = 0.159) were observed for the 200-mg dose.

3.4. Isokinetic Average Peak Extension Torque
Changes in average peak extension torque values indicated no significant group × time interaction (p = 0.578) in conjunction with a significant main effect for time (p < 0.001). Individual pairwise comparisons within each condition between individual time points revealed a similar pattern of change. In this respect, significant changes across time were found for PLA (p < 0.001), 50 mg (p = 0.03), and 200 mg (p = 0.03). Pairwise comparisons for PLA indicated that average peak extension torque values were significantly lower one hour (95% CI: 6.83–18.88, p = 0.02) after the running bout when compared to baseline levels.
Individual comparisons in the 50-mg group revealed that values were significantly lower than baseline after one hour (95% CI: 9.72–24.39, p < 0.001) and 24 h (95% CI: 3.48–23.07, p = 0.04). Similarly, changes within the 200-mg group exhibited a significant reduction from baseline after one hour (95% CI: 6.59–16.37, p = 0.002).

3.5. Isokinetic Peak Extension Power
No significant group × time interaction (p = 0.39) was found for changes in peak extension power across the supplementation groups (Figure 4). A significant main effect over time was found (p = 0.002). Within-group changes in the PLA group revealed a tendency for values to change (p = 0.08). Individual pairwise comparisons in PLA indicated that peak extension power was significantly reduced (95% CI: 6.16–17.69, p = 0.04) one hour after downhill treadmill running. Similarly, within-group changes in the 50-mg group indicated a statistical trend (p = 0.051) for peak extension power values to change across time.
Again, pairwise comparisons revealed that the only significant change for peak extension power occurred one hour (95% CI: 6.86–22.30, p = 0.03) after completion of the treadmill exercise bout. Changes in the 200-mg group also exhibited a tendency for power values to change across time (p = 0.09), with individual pairwise comparisons indicating that values one hour (95% CI: 4.24–15.29, p = 0.03) after completion of the damage bout were significantly lower than baseline, while changes after 24 h (95% CI: 2.51–10.91, p = 0.09) tended to be lower.

3.6. Isokinetic Peak Flexion Power
No significant group × time interaction (p = 0.96) was found in any of the supplementation groups for peak flexion power (Figure 5). A significant main effect over time was found (p < 0.001).
Within-group changes in the PLA group revealed no significant change across time (p = 0.14), values in the 50-mg group revealed a significant within-group change (p = 0.049), and values for the 200-mg group tended to change (p = 0.08). Significant reductions from baseline in peak flexion power were observed one hour after running for both PLA (95% CI: 3.29–12.03, p = 0.03) and 200 mg (95% CI: 3.21–8.51, p = 0.01), while values in the 50-mg group were significantly reduced from baseline after one hour (95% CI: 0.001–0.088, p = 0.04) and 24 h (95% CI: 0.85–6.74, p = 0.03), and tended to be lower after 48 h (95% CI: 1.18–9.73, p = 0.08).

3.7. Isometric Peak Torque
No significant group × time interaction (p = 0.57) was found, while a significant main effect over time was found (p = 0.046). Within-group changes in the PLA group revealed no significant change across time (p = 0.43), while values in both the 50-mg (p = 0.01) and 200-mg (p = 0.02) groups significantly decreased across time. As expected, no individual pairwise comparisons within PLA revealed any statistically significant changes (p > 0.05).
Significant reductions from baseline in isometric peak torque were observed one hour (95% CI: 8.73–19.16, p < 0.001) and 24 h (95% CI: 7.95–23.62, p = 0.004) after exercise in the 50-mg group. Within the 200-mg group, isometric peak torque values were significantly reduced 24 h after exercise (95% CI: 1.69–13.79, p = 0.02), but were similar to baseline values at all other time points.

3.8. Isometric Average Peak Torque
No significant group × time interaction (p = 0.52) was found, while a significant main effect over time was found (p = 0.001) for isometric average peak torque values. Within-group changes in the PLA (p = 0.06) and 50-mg (p = 0.11) groups were not different from baseline, while changes in the 200-mg group exhibited significant reductions across time (p = 0.02), with no individual pairwise comparisons reaching statistical significance.
3.9. Perceived Soreness
Statistically significant main effects for time were identified, indicating that perceived thigh soreness significantly increased in response to the exercise bout. Anterior thigh soreness (p < 0.001), posterior thigh soreness (p < 0.001), and total thigh soreness (p < 0.001) all significantly increased across time. No significant group × time interactions were identified for anterior thigh soreness (p = 0.73), posterior thigh soreness (p = 0.73), or total thigh soreness (p = 0.30). Non-significant improvements in exercise-induced total thigh soreness indicated that the 200-mg group reported 26%, 20%, and 8% less soreness immediately, 24 h, and 48 h after exercise, respectively, than the soreness levels reported in the PLA and 50-mg groups. However, these differences failed to reach statistical significance (Figure 6).
The main findings of this study indicate widespread decreases in torque and power after completion of a one-hour bout of downhill running. Responses in both the PLA and 200-mg groups were largely similar across time, while the 50-mg dose failed to respond in a similar fashion for any measured variable in comparison to either PLA or the 200-mg dose. In particular, isokinetic peak extension torque values experienced significant reductions in PLA and 50 mg across time, while no changes were found in the 200-mg group. Several other measured variables indicated that the 200-mg and PLA groups responded in a similar fashion, with the smallest decrements in performance occurring in the 200-mg group, while changes in both of these groups were more favorable than those outlined in the 50-mg group. Additionally, a single bout of downhill treadmill running led to widespread significant increases in perceived muscle soreness, but no differences between groups were identified.
Despite proven preclinical efficacy, curcumin and other curcuminoids are known for poor solubility, low absorption from the gut, rapid metabolism, and rapid systemic elimination, which contribute to an overall low oral bioavailability [26].
The curcumin used in the current study has been shown to have a 45.9-fold higher absorption than standard curcumin [19]. Notably, this form of curcumin has previously been shown to produce a clinically meaningful improvement in endothelial function, as measured by flow-mediated dilation, when a dose of 200 mg of curcuminoids was delivered [20]. Curcumin has previously been shown to increase vasodilation in a manner similar to exercise [27], and curcumin ingestion combined with aerobic exercise training is more effective at reducing left ventricular afterload than either intervention alone [28]. In addition, curcumin inhibits the conversion of amino acids from muscle into glucose (gluconeogenesis) [14]. From a performance and muscle damage recovery perspective, limited work is available regarding the role of curcumin. In an animal running model, curcumin (a 10-mg pellet added to food for three consecutive days) was shown to significantly improve running time to exhaustion and voluntary physical activity in rats exposed to a single bout of downhill running [9]. Additionally, curcumin treatment in the downhill running rats returned circulating levels of tumor necrosis factor-alpha to levels similar to those of the placebo within 48 h of completing the treadmill running [9]. In addition, Vitadell et al. [29] injected rats with curcumin and subjected them to either hindlimb unloading or standard caging. After seven days, curcumin treatment significantly attenuated the loss of soleus mass and myofiber cross-sectional area, while also sustaining greater maintenance of force production attributes. Additionally, curcumin treatment significantly blunted changes in protein and lipid oxidation in unloaded, treated rats compared to untreated rats. In one of the few studies conducted in humans to examine the impact of curcumin treatment on recovery from damaging exercise, Nicol et al.
[12] had 17 men supplement in a double-blind, placebo-controlled fashion with either curcumin (5000 mg of curcuminoids per dose) or placebo for 2.5 days leading up to and 2.5 days after completing an eccentric exercise protocol. Jump performance, creatine kinase, and pain indicators were assessed along with changes in circulating cytokines. It was concluded that curcumin reduced pain associated with soreness and may improve jumping performance in the days following muscle damage. These outcomes are in accordance with the changes seen in peak extension torque values in the present study, but are not entirely supported by the reported soreness levels in the present study. While the most favorable pattern of change was observed in the 200-mg group (Figure 6), these changes failed to reach statistically significant levels in comparison to PLA and the 50-mg dose. Briefly, the 200-mg curcumin dose exhibited no loss of torque production, while significant reductions were noted one hour and 24 h after completion of the damage bout in the PLA group (Figure 3). Changes in peak flexion torque values revealed a similar pattern of change for the PLA and 200-mg groups, with immediate significant reductions in peak flexion torque but a return to baseline values by the next time point at 24 h. Unexpectedly, peak flexion torque values were still significantly reduced at 24 h and 48 h (Figure 2) after completion of the damage bout in the 50-mg group. These divergent changes suggest that a dose–response relationship may exist in the ability of curcumin to mitigate losses of force production after damaging exercise. A similar pattern of change was observed for total thigh soreness, with the 200-mg dose exhibiting smaller perturbations in reported soreness, but the magnitude of these values failed to reach statistical significance. 
Similar patterns of change were noted for all the other variables, with significant reductions occurring in all the groups for isokinetic extension and flexion power (Figure 4 and Figure 5, respectively).\nSeveral strengths and limitations are present in this study that require discussion. First, the current study was the first investigation in humans to examine the ability of different dosages of curcumin (50 and 200 mg of curcuminoids) to support recovery from damaging exercise using a novel formulation that has been previously shown to optimize bioavailability [19]. In addition, at eight weeks, the current investigation was one of the longer curcumin supplementation studies to date, with the majority of studies supplementing in varying amounts for 1 to 4 weeks. A key limitation of the supplementation protocol used in the present study was the cessation of supplementation after the damage bout. While speculative, it remains possible that continuing supplementation through all the recovery time points may have provided better support to mitigate the changes observed, as this has been highlighted as a confounding factor surrounding research of this nature [5,25]. This consideration may also help to explain why various mean changes were observed in multiple outcome measures, but the overall magnitude of change failed to yield statistical significance. Beyond these points, the already limited research in humans becomes even more convoluted when study cohorts comprising differing demographics and different forms of exercise are utilized. In particular, the present study is the only investigation to include female participants. While the influence of gender in the present study was found to be statistically negligible, known differences exist between how males and females respond to damaging exercise [30]. 
As such, these confounding influences could seemingly interact with the known ability of curcumin to change the observed responses to inflammation and oxidative stress. Finally, it is challenging to understand why changes in peak extension torque reached statistically significant thresholds in the 200-mg dose, but such changes failed to be realized when investigating the changes seen in peak flexion torque from the same exercise bout. While merely speculative, these changes may be due to a greater magnitude of stress placed on the knee extensors throughout the downhill running bout as opposed to the knee flexors, with this greater stress being differentially impacted by the curcumin supplementation. Irrespective of these differences, results from the present study demonstrate consistently better performance outcomes in the higher dose of curcumin (200 mg) when compared to the lower dose (50 mg). In a somewhat unexpected fashion, the placebo group responded more favorably than the 50-mg dosage in all but one variable (isokinetic peak extension torque), while responding similarly to the 200-mg group. It is possible that a certain threshold of curcumin (or curcuminoids) is needed to exert any impact on our measured outcomes, and consequently, the 200-mg dose crossed this threshold, while the 50-mg dose did not. When considered in combination with the cessation of supplementation after conclusion of the damage bout, the lack of circulating curcuminoids in the 50-mg dose (versus the 200-mg dose) is considered to be the most likely reason for our dichotomous outcomes from a dose perspective. Another key consideration when interpreting our results in comparison to previous findings is the different pattern of exercise stress and damage that is employed. In the present study, a downhill running protocol was utilized, which is an established protocol to instigate metabolic as well as mechanical stress [6]. 
Other damage models, such as the one utilized in the Nicol protocol [12], employ high volumes of eccentric resistance exercise contractions, resulting in high levels of mechanical overload on the involved musculature, and have been clearly established as causing muscle damage [6].", "In conclusion, results from the present study highlight the ability of a high dose of CurcuWIN® (1000-mg dose delivering 200 mg of curcuminoids) to prevent the observed decreases in peak extension torque values seen one and 24 h after muscle-damaging exercise. In comparison, a lower dose of CurcuWIN® (delivering 50 mg of curcuminoids) was unable to attenuate performance changes even to the pattern observed in PLA. While this study investigated changes in performance, future studies should also investigate more objective measures of muscle damage (blood markers such as creatine kinase or myoglobin) in cohorts of trained and untrained samples. " ]
[ "intro", null, "subjects", null, null, null, null, "results", null, null, null, null, null, null, null, null, null, "discussion", "conclusions" ]
[ "athletic performance", "muscle-damaging exercise", "recovery", "downhill run" ]
1. Introduction: Even in trained athletes, a novel or unaccustomed exercise bout, especially one that emphasizes eccentric contractions, can cause microscopic intramuscular tears and an exaggerated inflammatory response [1,2], which is generally referred to as exercise-induced muscle damage (EIMD) [3]. The subsequent muscular pain and restriction of movement from EIMD can limit an athlete’s performance [4]. Thus, strategies that can attenuate performance decrements associated with EIMD should result in higher training quality and hypothetically greater exercise training adaptations. Consequently, many strategies have been proposed to treat or prevent EIMD [5], but there is still no scientific consensus on the most effective strategy for all individuals [6]. Curcumin, the bioactive component (2–5% by weight) of the spice herb turmeric, has a long history of medicinal use due to its anti-inflammatory and antioxidant properties [7]. The United States Food and Drug Administration has listed curcumin as GRAS (generally recognized as safe), and curcumin-containing supplements have been approved for human ingestion [8]. Curcumin is a polyphenol that is considered to be a “nutraceutical”, or a dietary agent with pharmaceutical or therapeutic properties. Curcumin is promising as a treatment strategy, relative to targeted pharmaceuticals, because it is a highly pleiotropic molecule that interacts with multiple inflammatory pathways [1,2]. Previous research has indicated that curcumin may help alleviate performance decrements following intense, challenging exercise. For example, initial research in mice indicated that curcumin supplementation led to greater voluntary activity and improved running performance compared to placebo-supplemented mice after eccentric exercise [9]. Similar effects of curcumin following EIMD have been reported in human subjects. 
One study reported that curcumin supplementation reduces pain and tenderness [10], while Drobnic et al. [11] reported a reduction in muscular trauma in the posterior and medial thigh following a downhill run with curcumin supplementation, along with a moderate reduction in pain. In contrast, Nicol et al. [12] reported that curcumin moderately reduced pain during exercise, but had little effect on muscle function. Relative to the impact of specific curcuminoids (as opposed to curcumin supplementation) in response to exercise and muscle damage, no published literature is available at the current time. A systematic review by Gaffey et al. [13] concluded that insufficient evidence exists to support the ability of curcuminoids to relieve pain and improve function. Importantly, the authors highlighted a general lack of evidence and poor study quality in the existing literature base. The mixed findings in previous studies may have been a result of limited bioavailability due to the formulation of the supplement [14]. A major limitation to the therapeutic potential of curcumin is its poor solubility, low absorption from the gut, rapid metabolism, and systemic elimination [15]. Curcumin is primarily excreted through the feces, never reaching detectable levels in circulation [16]. Previously, it has been shown that high doses of orally administered curcumin (e.g., 10–12 g) have resulted in little to no appearance of curcuminoids in circulation [17]. Various methods have been developed to increase the bioavailability of curcumin involving emulsions, nanocrystals, and liposomes, with varying degrees of success [18]. A recent formulation of curcumin (CurcuWIN®) combines cellulosic derivatives with other natural antioxidants (tocopherol and ascorbyl palmitate). 
This formulation has been shown previously to increase absorption 45.9-fold over a standardized curcumin mixture and 5.8- to 34.9-fold over other formulations, while being well tolerated with no reported adverse events [19]. Among individuals participating in regular exercise training programs, much interest exists in strategies to reduce decrements in performance and improve training quality. As a strategy to reduce soreness, loss of muscle function, and inflammation, curcumin serves as a potentially useful nutritional target to aid in accomplishing these goals, yet longstanding shortcomings with the ingredient have limited its potential. Therefore, the purpose of this study was to examine, in a double-blind, placebo-controlled fashion, whether a novel form of curcumin would attenuate performance decrements and reduce inflammation following a downhill running bout. Beyond examining the impact of curcumin availability, an additional study question focused upon the identification of dose-dependent outcomes associated with curcumin administration relative to its ability to attenuate performance changes. We hypothesized that, in a dose-dependent fashion, curcumin would attenuate changes in performance indicators after completion of a downhill bout of treadmill running. 2. Materials and Methods: The data reported in this study were collected as part of a randomized, double-blind, placebo-controlled investigation that examined the effects of eight weeks of curcumin supplementation on blood flow, exercise performance, and muscle damage. The data involving blood flow have already been published [20]. This study was conducted in accordance with the Declaration of Helsinki guidelines and registered with the ISRCTN registry (ISRCTN90184217). All the procedures involving human subjects were approved by the Institutional Review Board of Texas Christian University for use of human subjects in research (IRB approval: 1410-105-1410). 
Written consent was obtained from all the participants prior to any participation. 2.1. Participants Men and women aged 19 to 29 years were considered eligible for participation if he or she was a non-smoker and free of any musculoskeletal, medical, or metabolic contraindications to exercise. The health status and activity levels of potential participants were determined by completion of a medical history form and a physical activity record. Further, participants had to be low to moderately trained, which was defined as meeting the current American College of Sports Medicine (ACSM) guidelines of at least 150 min of moderate aerobic activity, or 75 min per week of vigorous aerobic activity per week for at least the past three months [21]. Exclusion criteria included women who were pregnant or lactating, any participation in another clinical trial or consumption of an investigational product within the previous 30 days, receiving regular treatment with anti-inflammatory/analgesic/antioxidant drugs in the previous month, or the use of any ergogenic aid during the nine-week period prior to recruitment. A total of 106 participants (53M, 53F) were initially screened for study participation. Of this cohort, 32 additional participants (15M, 17F) were excluded from study involvement due to them not meeting the study’s inclusion or exclusion criteria. Consequently, a total of 74 subjects were randomly assigned to a supplementation group and from there, an additional 11 participants (7M, 4F) withdrew from the study due to different priorities, i.e., school, resulting in a total of 63 (31M, 32F) study participants who completed the study. The subjects’ characteristics are presented in Table 1. 
2.2. Experimental Protocol A summary of the study design is presented in Figure 1. All the testing was performed in the Exercise Physiology Laboratory within the Kinesiology Department at Texas Christian University. Prior to supplementation, and 3 to 7 days after a familiarization visit, participants returned to the laboratory after having abstained from any strenuous physical activity for at least 48 h. Additionally, participants were asked to avoid coffee, alcohol, and non-steroidal anti-inflammatory drugs at least 24 h prior to any trial. 
Baseline testing consisted of a muscle function assessment and a maximal aerobic capacity (VO2max) test. After 56 days of supplementation, the participants reported to the lab to repeat all the muscle function assessments. The following day, participants performed a downhill run to induce muscle damage. The downhill run was performed on a modified TMX 3030c (Trackmaster, Newton, KS, USA) at a –15% grade. Prior to the downhill run, participants warmed up for five minutes at a 0% grade at a speed equivalent to 65% VO2Max. American College of Sports Medicine metabolic calculations for the estimation of energy expenditure during running [21] were utilized to determine the treadmill speed—on a level grade—that would approximately elicit 65% VO2max. Gas exchange was continuously measured (Parvo Medics, Sandy, UT, USA), and the treadmill speed was adjusted after two minutes to match 65% VO2Max. After the five-minute warm-up, participants completed a 45-min downhill run at a –15% grade and speed equivalent to 65% VO2Max. Muscle function was assessed one hour after completion of the downhill running bout. Participants returned to the lab at 24 h, 48 h, and 72 h following the downhill run for follow-up muscle function testing. 
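The level-grade speed targeting 65% VO2max can be recovered from the ACSM metabolic equation for running (VO2 = 0.2·speed + 0.9·speed·grade + 3.5, with speed in m/min and VO2 in mL·kg−1·min−1). A minimal sketch of that arithmetic follows; the function name and example values are illustrative, not taken from the study:

```python
def level_speed_for_vo2_fraction(vo2max, fraction=0.65):
    """Solve the ACSM running equation VO2 = 0.2*speed + 0.9*speed*grade + 3.5
    for speed (m/min) on a level grade (grade = 0)."""
    target_vo2 = fraction * vo2max      # mL/kg/min
    return (target_vo2 - 3.5) / 0.2     # m/min

# Illustrative runner with VO2max = 50 mL/kg/min:
speed_m_min = level_speed_for_vo2_fraction(50.0)  # (32.5 - 3.5) / 0.2 = 145 m/min
speed_km_h = speed_m_min * 60 / 1000              # 8.7 km/h
```

In the protocol above, this estimate is only a starting point; the speed was then adjusted against continuously measured gas exchange to match 65% VO2max.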
2.3. Supplementation The investigational product (CurcuWIN®; OmniActive Health Technologies Ltd., Mumbai, India) contained turmeric extract (20–28%), a hydrophilic carrier (63–75%), cellulosic derivatives (10–40%), and natural antioxidants (1–3%) [19]. During the consent and familiarization process, participants were educated on which foods contained turmeric, and were asked to avoid those foods for the duration of the study while maintaining their normal diet. Participants were randomly assigned in a double-blind manner to either a placebo (corn starch, PLA), low dose of curcumin (50 mg of curcuminoids = 250 mg CurcuWIN®), or high dose of curcumin (200 mg of curcuminoids = 1000 mg CurcuWIN®) supplementation. Commercially available natural curcumin contains three curcuminoids in the following ratios: curcumin (71.5%), demethoxycurcumin (19.4%), and bisdemethoxycurcumin (9.1%) [22]. 
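Applying the natural-curcumin ratios quoted above to a dose gives an approximate per-dose curcuminoid composition. This is illustrative arithmetic only; the actual curcuminoid profile of CurcuWIN® may differ from the natural ratios:

```python
# Curcuminoid ratios of commercially available natural curcumin [22].
RATIOS = {
    "curcumin": 0.715,
    "demethoxycurcumin": 0.194,
    "bisdemethoxycurcumin": 0.091,
}

def curcuminoid_breakdown(total_mg):
    """Split a total curcuminoid dose (mg) according to the natural ratios."""
    return {name: round(frac * total_mg, 1) for name, frac in RATIOS.items()}

# The 200-mg curcuminoid dose splits (approximately) as:
# {'curcumin': 143.0, 'demethoxycurcumin': 38.8, 'bisdemethoxycurcumin': 18.2}
breakdown_200 = curcuminoid_breakdown(200)
```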
The day following baseline testing, participants were asked to ingest one dose with breakfast, lunch, and dinner for a total of three doses per day. To assist in blinding, all the doses required the ingestion of one capsule that was identical in shape, size, and color. As part of compliance monitoring to the supplementation regimen, participants were provided the capsules on a weekly basis in a pill container that provided seven doses and were asked to return the empty containers. A side effects questionnaire was also completed when receiving capsules. Compliance was set at ≥80%, and participants not meeting compliance were removed from the study. 
2.4. Muscle Function Assessment Muscle function was determined by assessing isokinetic and isometric peak torque as well as isokinetic power using a Biodex dynamometer (System 3; Biodex Medical Systems, Shirley, NY, USA) on the participant’s self-reported dominant leg. All the participants were familiarized with the testing protocols during his or her familiarization visit prior to supplementation. Before muscle function assessment, participants performed a five-minute self-paced warm up on a treadmill. Then, each participant was seated with their knee aligned with the lever arm axis of the dynamometer. The dynamometer warm-up consisted of three concentric-only extension and flexion repetitions at 50% of perceived maximal force production. Following the warm-up, participants were given a 90-s recovery. Isokinetic peak torque and power were tested at a speed of 60° * s−1 through a modified range of motion, which began with their leg at approximately 120° knee flexion and continued through full extension of the knee (180° knee flexion). Once in the starting position, participants began each set of repetitions by forcefully extending their leg against the resistance through the full range of motion before forcefully flexing their knee back to the starting position. Participants repeated this motion for five continuous repetitions at maximal effort. Peak torque and power values for both extension and flexion were recorded as individual peak values and as average values across all the completed repetitions. After the isokinetic exercise, participants were given a three-minute rest. 
Then, isometric strength was assessed by three maximal voluntary extensions of the knee, each lasting five seconds against a fixed resistance arm at an angle of 120°. A one-minute rest was given between each repetition, and all the repetitions were performed at maximal effort. The highest isometric torque values were recorded as peak isometric torque in foot–pounds (ft–lbs) of torque, and the average peak torque value was also computed. Acute assessments of both isokinetic and isometric force and power are regularly used to assess acute changes in both dynamic and static force production in response to various forms of exercise stimuli [23,24]. Test–retest reliability using similar isokinetic devices with similar protocols and controls consistently yielded high measures of reliability and coefficients of variation less than 5% [25]. As an indirect indicator of muscle damage, perceived levels of anterior, posterior, and total soreness of the knee extensors were assessed by all participants using a 100-mm visual analog scale. Soreness was assessed along a 100-mm scale (0 mm = no soreness, 100 mm = extreme soreness) for each time point (pre-exercise, immediately (0 h), 24 h, 48 h, and 72 h post-exercise) by drawing a line perpendicular to the continuum line extending from 0 mm to 100 mm. Soreness was evaluated by measuring the distance of each mark from 0 and rounded up to the nearest millimeter. 
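The reduction of the five isokinetic repetitions into individual peak and average peak values, as described above, can be sketched as follows. The torque traces are hypothetical sample data, not study measurements:

```python
# Hypothetical torque samples (ft-lbs) across five isokinetic extension repetitions.
reps = [
    [10, 95, 120, 88],
    [12, 90, 115, 85],
    [11, 92, 118, 86],
    [9, 89, 112, 84],
    [10, 87, 110, 83],
]

rep_peaks = [max(rep) for rep in reps]                 # best torque within each repetition
peak_torque = max(rep_peaks)                           # single-repetition peak -> 120
average_peak_torque = sum(rep_peaks) / len(rep_peaks)  # mean of per-rep peaks -> 115.0
```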
2.5. Statistical Analysis Statistical analyses were performed using SPSS V.25 (IBM Corporation; Armonk, NY, USA). Primary outcomes were identified as peak torque values, while secondary outcomes were identified as average torque and power assessments. All the outcome measures were initially analyzed by a repeated measures analysis of variance (ANOVA) with three factors: gender (two levels), treatment (three levels), and time (six levels). Normality was assessed using the Shapiro–Wilk test with several variables violating the normality assumptions. All the non-normal variables were transformed using log10 before completing ANOVA procedures. All the data reported throughout is the raw data. Homogeneity of variance was assessed using Mauchly’s test of sphericity. When the homogeneity of variance assumption was violated, the Greenhouse–Geisser correction was applied when epsilon was <0.75, and the Huynh–Feldt correction was applied when epsilon was >0.75. Outside of expected outcomes (greater strength), gender contributed little to the overall model and did not impact treatment analysis; thus, gender was removed from the model and 3 (group) × 6 (time) mixed factorial ANOVA with repeated measures on time were subsequently used to assess the differences across time between groups. Tukey post hoc procedures were used when a significant finding (p ≤ 0.05) or trend (0.05 < p ≤ 0.10) was identified. 
Significant main effects for time were fully decomposed by completing factorial ANOVA with repeated measures on time. If significant differences in time or trends were identified, pairwise comparisons with Bonferroni corrections applied to the confidence interval were evaluated to determine differences between time points for each condition. 
If significant differences in time or trends were identified, pairwise comparisons with Bonferroni corrections applied to the confidence interval were evaluated to determine between time points for each condition. 2.1. Participants: Men and women aged 19 to 29 years were considered eligible for participation if he or she was a non-smoker and free of any musculoskeletal, medical, or metabolic contraindications to exercise. The health status and activity levels of potential participants were determined by completion of a medical history form and a physical activity record. Further, participants had to be low to moderately trained, which was defined as meeting the current American College of Sports Medicine (ACSM) guidelines of at least 150 min of moderate aerobic activity, or 75 min per week of vigorous aerobic activity per week for at least the past three months [21]. Exclusion criteria included women who were pregnant or lactating, any participation in another clinical trial or consumption of an investigational product within the previous 30 days, receiving regular treatment with anti-inflammatory/analgesic/antioxidant drugs in the previous month, or the use of any ergogenic aid during the nine-week period prior to recruitment. A total of 106 participants (53M, 53F) were initially screened for study participation. Of this cohort, 32 additional participants (15M, 17F) were excluded from study involvement due to them not meeting the study’s inclusion or exclusion criteria. Consequently, a total of 74 subjects were randomly assigned to a supplementation group and from there, an additional 11 participants (7M, 4F) withdrew from the study due to different priorities, i.e., school, resulting in a total of 63 (31M, 32F) study participants who completed the study. The subjects’ characteristics are presented in Table 1. 2.2. Experimental Protocol: A summary of the study design is presented in Figure 1. 
All the testing was performed in the Exercise Physiology Laboratory within the Kinesiology Department at Texas Christian University. Prior to supplementation, and 3 to 7 days after a familiarization visit, participants returned to the laboratory after having abstained from any strenuous physical activity for at least 48 h. Additionally, participants were asked to avoid coffee, alcohol, and non-steroidal anti-inflammatory drugs for at least 24 h prior to any trial. Baseline testing consisted of a muscle function assessment and a maximal aerobic capacity (VO2max) test. After 56 days of supplementation, the participants reported to the lab to repeat all the muscle function assessments. The following day, participants performed a downhill run to induce muscle damage. The downhill run was performed on a modified TMX 3030c treadmill (Trackmaster, Newton, KS, USA) at a −15% grade. Prior to the downhill run, participants warmed up for five minutes at a 0% grade at a speed equivalent to 65% VO2max. American College of Sports Medicine metabolic calculations for the estimation of energy expenditure during running [21] were utilized to determine the treadmill speed—on a level grade—that would approximately elicit 65% VO2max. Gas exchange was continuously measured (Parvo Medics, Sandy, UT, USA), and the treadmill speed was adjusted after two minutes to match 65% VO2max. After the five-minute warm-up, participants completed a 45-min downhill run at a −15% grade and a speed equivalent to 65% VO2max. Muscle function was assessed one hour after completion of the downhill running bout. Participants returned to the lab at 24 h, 48 h, and 72 h following the downhill run for follow-up muscle function testing. 2.3. Supplementation: The investigational product (CurcuWIN®; OmniActive Health Technologies Ltd., Mumbai, India) contained turmeric extract (20–28%), a hydrophilic carrier (63–75%), cellulosic derivatives (10–40%), and natural antioxidants (1–3%) [19].
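The level-grade speed selection described in Section 2.2 rests on the ACSM running equation, VO2 (mL·kg−1·min−1) = 0.2·S + 0.9·S·G + 3.5, with S in m·min−1 and G as the fractional grade. A minimal sketch of solving it for speed at a 0% grade (the VO2max value is illustrative only):

```python
def level_running_speed(vo2max, fraction=0.65):
    """Treadmill speed (m/min) expected to elicit a given fraction of
    VO2max on a level grade, from the ACSM running equation:
    VO2 = 0.2 * speed + 0.9 * speed * grade + 3.5.
    With grade = 0, this rearranges to speed = (VO2 - 3.5) / 0.2."""
    target_vo2 = fraction * vo2max
    return (target_vo2 - 3.5) / 0.2

# Illustrative: a participant with VO2max = 50 mL/kg/min
speed = level_running_speed(50.0)   # 145.0 m/min, about 8.7 km/h
```

In the protocol itself this estimate was only a starting point; the speed was then adjusted against measured gas exchange during the warm-up.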
During the consent and familiarization process, participants were educated on which foods contained turmeric and were asked to avoid those foods for the duration of the study while maintaining their normal diet. Participants were randomly assigned in a double-blind manner to either placebo (corn starch, PLA), low-dose curcumin (50 mg of curcuminoids = 250 mg CurcuWIN®), or high-dose curcumin (200 mg of curcuminoids = 1000 mg CurcuWIN®) supplementation. Commercially available natural curcumin contains three curcuminoids in the following ratios: curcumin (71.5%), demethoxycurcumin (19.4%), and bisdemethoxycurcumin (9.1%) [22]. The day following baseline testing, participants were asked to ingest one dose with breakfast, lunch, and dinner for a total of three doses per day. To assist in blinding, all the doses required the ingestion of one capsule that was identical in shape, size, and color. To monitor compliance with the supplementation regimen, participants were provided the capsules on a weekly basis in a pill container holding seven doses and were asked to return the empty containers. A side effects questionnaire was also completed when receiving capsules. Compliance was set at ≥80%, and participants not meeting compliance were removed from the study. 2.4. Muscle Function Assessment: Muscle function was determined by assessing isokinetic and isometric peak torque as well as isokinetic power using a Biodex dynamometer (System 3; Biodex Medical Systems, Shirley, NY, USA) on the participant's self-reported dominant leg. All the participants were familiarized with the testing protocols during their familiarization visit prior to supplementation. Before muscle function assessment, participants performed a five-minute self-paced warm-up on a treadmill. Then, each participant was seated with their knee aligned with the lever arm axis of the dynamometer.
The dynamometer warm-up consisted of three concentric-only extension and flexion repetitions at 50% of perceived maximal force production. Following the warm-up, participants were given a 90-s recovery. Isokinetic peak torque and power were tested at a speed of 60°·s−1 through a modified range of motion, which began with the leg at approximately 120° of knee flexion and continued through full extension of the knee (180° of knee flexion). Once in the starting position, participants began each set of repetitions by forcefully extending their leg against the resistance through the full range of motion before forcefully flexing their knee back to the starting position. Participants repeated this motion for five continuous repetitions at maximal effort. Peak torque and power values for both extension and flexion were recorded as individual peak values and as average values across all the completed repetitions. After the isokinetic exercise, participants were given a three-minute rest. Then, isometric strength was assessed by three maximal voluntary extensions of the knee, each lasting five seconds, against a fixed resistance arm at an angle of 120°. A one-minute rest was given between repetitions, and all the repetitions were performed at maximal effort. The highest isometric torque value was recorded as peak isometric torque in foot-pounds (ft-lbs), and the average peak torque value was also computed. Acute assessments of both isokinetic and isometric force and power are regularly used to assess acute changes in both dynamic and static force production in response to various forms of exercise stimuli [23,24]. Test–retest reliability using similar isokinetic devices with similar protocols and controls has consistently yielded high measures of reliability and coefficients of variation of less than 5% [25].
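The recording of peak and average values across the five maximal repetitions, and the <5% coefficient of variation cited for test–retest reliability, can be sketched as follows (the torque values are illustrative, not study data):

```python
from statistics import mean, stdev

def torque_summary(reps):
    """Peak and average torque across a set of maximal repetitions,
    mirroring how individual peak and average values were recorded."""
    return max(reps), mean(reps)

def coefficient_of_variation(values):
    """CV (%) = SD / mean * 100; similar isokinetic protocols report
    CVs below 5%, taken as good test-retest reliability."""
    return stdev(values) / mean(values) * 100

# Illustrative extension torques (ft-lbs) from five repetitions
peak, avg = torque_summary([100.0, 110.0, 105.0, 95.0, 102.0])
```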
As an indirect indicator of muscle damage, perceived levels of anterior, posterior, and total soreness of the knee extensors were assessed by all participants using a 100-mm visual analog scale. Soreness was assessed along a 100-mm continuum (0 mm = no soreness, 100 mm = extreme soreness) at each time point (pre-exercise and immediately (0 h), 24 h, 48 h, and 72 h post-exercise) by drawing a line perpendicular to the continuum line extending from 0 mm to 100 mm. Soreness was evaluated by measuring the distance of each mark from 0 and rounding up to the nearest millimeter. 2.5. Statistical Analysis: Statistical analyses were performed using SPSS V.25 (IBM Corporation; Armonk, NY, USA). Primary outcomes were identified as peak torque values, while secondary outcomes were identified as average torque and power assessments. All the outcome measures were initially analyzed by a repeated measures analysis of variance (ANOVA) with three factors: gender (two levels), treatment (three levels), and time (six levels). Normality was assessed using the Shapiro–Wilk test, with several variables violating the normality assumption. All the non-normal variables were transformed using log10 before completing ANOVA procedures; all the data reported throughout are the raw (untransformed) data. Sphericity was assessed using Mauchly's test. When the sphericity assumption was violated, the Greenhouse–Geisser correction was applied when epsilon was <0.75, and the Huynh–Feldt correction was applied when epsilon was >0.75. Outside of expected outcomes (greater strength), gender contributed little to the overall model and did not impact treatment analysis; thus, gender was removed from the model, and a 3 (group) × 6 (time) mixed factorial ANOVA with repeated measures on time was subsequently used to assess the differences across time between groups. Tukey post hoc procedures were used when a significant finding (p ≤ 0.05) or trend (0.05 < p ≤ 0.10) was identified.
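The analyses were run in SPSS; purely as a sketch of the decision rules described in Section 2.5 (not the authors' code), the normality screen and the choice of sphericity correction could be expressed as:

```python
import math

def transform_if_non_normal(values, shapiro_p, alpha=0.05):
    """Log10-transform a variable only when the Shapiro-Wilk p-value
    indicates a normality violation; the ANOVA then runs on the
    transformed values, while the raw data are what get reported."""
    if shapiro_p < alpha:
        return [math.log10(v) for v in values]
    return list(values)

def sphericity_correction(epsilon):
    """Choose the epsilon correction when Mauchly's test is violated:
    Greenhouse-Geisser for epsilon < 0.75, Huynh-Feldt otherwise."""
    return "Greenhouse-Geisser" if epsilon < 0.75 else "Huynh-Feldt"
```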
Significant main effects for time were fully decomposed by completing factorial ANOVA with repeated measures on time. If significant differences or trends across time were identified, pairwise comparisons with Bonferroni corrections applied to the confidence interval were evaluated to determine differences between time points for each condition.
3. Results:
3.1. Isokinetic Peak Flexion Torque: Using mixed factorial ANOVA, no significant group × time interaction (p = 0.60) effects were realized, while a significant main effect for time (p < 0.001) was identified for changes in peak flexion torque values (Figure 2). Regarding PLA, no significant changes across time (p = 0.15) were identified, while significant changes across time were found for the 50-mg dose (p = 0.03), and the 200-mg dose tended (p = 0.052) to exhibit changes across time. In the 50-mg dose group, peak flexion torque was significantly reduced in comparison to baseline values after one hour (95% CI: 2.46–9.48, p = 0.02), 24 h (95% CI: 2.34–9.60, p = 0.03), and 48 h (95% CI: 3.25–9.79, p = 0.02). In the 200-mg dose group, peak flexion torque was significantly reduced against baseline values after one hour (95% CI: 1.42–6.36, p = 0.04), and returned to baseline values for all the other time points.
3.2. Isokinetic Average Peak Flexion Torque: No significant group × time interaction (p = 0.55) effects were realized, while a significant main effect for time (p < 0.001) was identified for changes in the average peak flexion torque. A statistical tendency for change was observed in both the PLA (p = 0.08) and 200-mg (p = 0.07) groups, while values in the 50-mg dose (p = 0.03) exhibited a significant change across time. Pairwise comparisons in both the PLA (95% CI: 2.47–9.04, p = 0.046) and 200-mg (95% CI: 2.16–6.86, p = 0.006) groups indicated that the average peak flexion torque values were reduced one hour after damage in comparison to baseline values, but returned to baseline values after that initial drop in performance. Values in the 50-mg group exhibited more changes, with significantly reduced values being observed one hour (95% CI: 2.23–9.40, p = 0.03) and 48 h (95% CI: 2.45–8.57, p = 0.04) after damage, with values 24 h after damage tending to be reduced (95% CI: 1.58–9.48, p = 0.06).
3.3. Isokinetic Peak Extension Torque: Using mixed factorial ANOVA, no significant group × time interaction (p = 0.52) effects were realized, while a significant main effect for time (p < 0.001) was identified for changes in peak extension torque values (Figure 3). Values changed significantly across time for the PLA (p = 0.002) and 50-mg (p = 0.04) groups, while no significant change across time was exhibited for the 200-mg dose (p = 0.16). In PLA, peak extension torque was significantly reduced one hour (95% CI: 8.10–18.04, p = 0.01) and 24 h (95% CI: 3.84–18.56, p = 0.03) after completion of the downhill running bout in comparison to pre-damage values. Changes in the 50-mg group indicated that peak extension torque values were significantly reduced in comparison to baseline (95% CI: 11.02–24.85, p < 0.001) one hour after completion of the damaging exercise, but returned to baseline values for all the other time points.
3.4. Isokinetic Average Peak Extension Torque: Changes in average peak extension torque values indicated no significant group × time interaction (p = 0.578) in conjunction with a significant main effect for time (p < 0.001). Individual pairwise comparisons within each condition between individual time points revealed a similar pattern of change. In this respect, significant changes across time were found for PLA (p < 0.001), 50 mg (p = 0.03), and 200 mg (p = 0.03). Pairwise comparisons for PLA indicated that average peak extension torque values were significantly lower one hour (95% CI: 6.83–18.88, p = 0.02) after the running bout when compared to baseline levels. Individual comparisons in the 50-mg group revealed that values were significantly lower than baseline after one hour (95% CI: 9.72–24.39, p < 0.001) and 24 h (95% CI: 3.48–23.07, p = 0.04). Similarly, changes within the 200-mg group exhibited a significant reduction from baseline after one hour (95% CI: 6.59–16.37, p = 0.002).
3.5. Isokinetic Peak Extension Power: No significant group × time interaction (p = 0.39) effect was found for changes in peak extension power across time in any supplementation group (Figure 4). A significant main effect over time was found (p = 0.002). Within-group changes in the PLA group revealed a tendency for values to change (p = 0.08). Individual pairwise comparisons in PLA indicated that peak extension power was significantly reduced (95% CI: 6.16–17.69, p = 0.04) one hour after downhill treadmill running. Similarly, within-group changes in the 50-mg group indicated a statistical trend (p = 0.051) for peak extension power values to change across time. Again, pairwise comparisons revealed that the only significant change for peak extension power occurred one hour (95% CI: 6.86–22.30, p = 0.03) after completion of the treadmill exercise bout. Changes in the 200-mg group also exhibited a tendency for power values to change across time (p = 0.09), with individual pairwise comparisons indicating that values one hour (95% CI: 4.24–15.29, p = 0.03) after completion of the damage bout were significantly lower than baseline, while changes after 24 h (95% CI: 2.51–10.91, p = 0.09) tended to be lower.
3.6. Isokinetic Peak Flexion Power: No significant group × time interaction (p = 0.96) effect was found in any of the supplementation groups for peak flexion power (Figure 5). A significant main effect over time was found (p < 0.001). Within-group changes in the PLA group revealed no significant change across time (p = 0.14), while values in the 50-mg group revealed a significant within-group change (p = 0.049) and values for the 200-mg group tended to change (p = 0.08). Significant reductions from baseline in peak flexion power were observed one hour after running for both PLA (95% CI: 3.29–12.03, p = 0.03) and 200 mg (95% CI: 3.21–8.51, p = 0.01), while values in the 50-mg group were significantly reduced from baseline after one hour (95% CI: 0.001–0.088, p = 0.04) and 24 h (95% CI: 0.85–6.74, p = 0.03), and tended to be lower after 48 h (95% CI: 1.18–9.73, p = 0.08).
3.7. Isometric Peak Torque: No significant group × time interaction (p = 0.57) was found, while a significant main effect over time was found (p = 0.046). Within-group changes in the PLA group revealed no significant change across time (p = 0.43), while values in both the 50-mg (p = 0.01) and 200-mg (p = 0.02) groups significantly decreased across time. As expected, no individual pairwise comparisons within PLA revealed any statistically significant changes (p > 0.05). Significant reductions from baseline in isometric peak torque were observed one hour (95% CI: 8.73–19.16, p < 0.001) and 24 h (95% CI: 7.95–23.62, p = 0.004) after exercise in the 50-mg group. Within the 200-mg group, isometric peak torque values were significantly reduced 24 h after exercise (95% CI: 1.69–13.79, p = 0.02), but were similar to the baseline values at all the other time points.
3.8. Isometric Average Peak Torque: No significant group × time interaction (p = 0.52) was found, while a significant main effect over time was found (p = 0.001) for isometric average peak torque values. Within-group changes in the PLA (p = 0.06) and 50-mg (p = 0.11) groups were not different from baseline, while changes in the 200-mg group exhibited significant reductions across time (p = 0.02), with no individual pairwise comparisons reaching statistical significance.
3.9. Perceived Soreness: Statistically significant main effects for time were identified, indicating that perceived thigh soreness levels significantly increased in response to the exercise bout. Anterior thigh soreness (p < 0.001), posterior thigh soreness (p < 0.001), and total thigh soreness (p < 0.001) were all found to significantly increase across time. No significant group × time interactions were identified for anterior thigh soreness (p = 0.73), posterior thigh soreness (p = 0.73), or total thigh soreness (p = 0.30). Non-significant improvements in exercise-induced total thigh soreness indicated that the 200-mg group reported 26%, 20%, and 8% less soreness immediately, 24 h, and 48 h after exercise, respectively, than the soreness levels reported in the PLA and 50-mg groups. However, these differences failed to reach statistical significance (Figure 6). 4.
Discussion: The main findings of this study indicate widespread decreases in torque and power after completion of a one-hour bout of downhill running. Responses in both the PLA and 200-mg groups were largely similar across time, while the 50-mg dose failed to respond in a similar fashion for any measured variable in comparison to either PLA or the 200-mg dose. In particular, isokinetic peak extension torque values experienced significant reductions in the PLA and 50-mg groups across time, while no changes were found in the 200-mg group. Several other measured variables indicated that the 200-mg and PLA groups responded in a similar fashion, with the smallest decrements in performance occurring in the 200-mg group, while changes in both of these groups were more favorable than those outlined in the 50-mg group. Additionally, a single bout of downhill treadmill running led to widespread significant increases in perceived muscle soreness, but no differences between groups were identified. Despite its proven preclinical efficacy, curcumin and other curcuminoids are known for poor solubility, low absorption from the gut, rapid metabolism, and rapid systemic elimination, which contribute to an overall low oral bioavailability [26]. The curcumin used in the current study has been shown to have 45.9-fold higher absorption than standard curcumin [19]. Notably, the form of curcumin used in the current study has previously been shown to produce a clinically meaningful improvement in endothelial function, as measured by flow-mediated dilation, when a dose of 200 mg of curcuminoids was delivered [20]. Curcumin has previously been shown to increase vasodilation similar to exercise [27], and curcumin ingestion combined with aerobic exercise training is more effective than curcumin ingestion or aerobic exercise training alone at reducing left ventricular afterload [28]. In addition, curcumin inhibits the conversion of amino acids from muscle into glucose (gluconeogenesis) [14]. 
From a performance and muscle damage recovery perspective, limited work is available regarding the role of curcumin. In an animal running model, curcumin (a 10-mg pellet added to food for three consecutive days) was shown to significantly improve running time to exhaustion and voluntary physical activity in rats exposed to a single bout of downhill running [9]. Additionally, curcumin treatment in the downhill running rats restored circulating levels of tumor necrosis factor-alpha to levels similar to those of the placebo within 48 h of completing the treadmill running [9]. In addition, Vitadello et al. [29] injected rats with curcumin and subjected them to either hindlimb unloading or standard caging. After seven days, curcumin treatment significantly attenuated the loss of soleus mass and myofiber cross-sectional area, while also sustaining greater maintenance of force production attributes. Additionally, curcumin treatment significantly blunted changes in protein and lipid oxidation in unloaded, treated rats compared to untreated ones. In one of the few studies conducted in humans to examine the impact of curcumin treatment on recovery from damaging exercise, Nicol et al. [12] had 17 men supplement in a double-blind, placebo-controlled fashion with either curcumin (5000 mg of curcuminoids per dose) or placebo for 2.5 days leading up to and 2.5 days after completing an eccentric exercise protocol. Jump performance, creatine kinase, and pain indicators were assessed along with changes in circulating cytokines. It was concluded that curcumin reduced pain associated with soreness and may improve jumping performance in the days following muscle damage. These outcomes are in accordance with the changes seen in peak extension torque values in the present study, but are not entirely supported by the reported soreness levels in the present study. 
While the most favorable pattern of change was observed in the 200-mg group (Figure 6), these changes failed to reach statistically significant levels in comparison to PLA and the 50-mg dose. Briefly, the 200-mg curcumin dose exhibited no loss of torque production, while significant reductions were noted one hour and 24 h after completion of the damage bout in the PLA group (Figure 3). Changes in peak flexion torque values revealed a similar pattern of change for the PLA and 200-mg groups, with immediate significant reductions in peak flexion torque but a return to baseline values by the next time point at 24 h. Unexpectedly, peak flexion torque values were still significantly reduced at 24 h and 48 h (Figure 2) after completion of the damage bout in the 50-mg group. These divergent changes suggest that a dose–response relationship may exist in the ability of curcumin to mitigate losses of force production after damaging exercise. A similar pattern of change was observed for total thigh soreness, with the 200-mg dose exhibiting smaller perturbations in reported soreness, but the magnitude of these values failed to reach statistical significance. Similar patterns of change were noted for all the other variables, with significant reductions occurring in all the groups for isokinetic extension and flexion power (Figure 4 and Figure 5, respectively). Several strengths and limitations are present in this study that require discussion. First, the current study was the first investigation in humans to examine the ability of different dosages of curcumin (50 and 200 mg of curcuminoids) to support recovery from damaging exercise using a novel formulation that has previously been shown to optimize bioavailability [19]. In addition, at eight weeks, the current investigation was one of the longer curcumin supplementation studies to date, with the majority of studies supplementing in varying amounts for 1 to 4 weeks. 
A key limitation of the supplementation protocol used in the present study was the cessation of supplementation after the damage bout. While speculative, it remains possible that continuing supplementation through all the recovery time points may have provided better support to mitigate the changes observed, as this has been observed and highlighted as a confounding factor in research of this nature [5,25]. This consideration may also help to explain why mean changes were observed in multiple outcome measures, but the overall magnitude of change failed to yield statistical significance. Beyond these points, the already limited research in humans becomes even more convoluted when study cohorts comprising differing demographics and different forms of exercise are utilized. In particular, the present study is the only investigation to include data from female participants. While the influence of sex in the present study was considered to be statistically irrelevant, known differences exist between how males and females respond to damaging exercise [30]. As such, these confounding influences could interact with the known ability of curcumin to alter the observed responses to inflammation and oxidative stress. Finally, it is challenging to understand why changes in peak extension torque reached statistically significant thresholds in the 200-mg dose, but such changes were not realized when investigating the changes seen in peak flexion torque from the same exercise bout. While merely speculative, these differences may be due to a greater magnitude of stress placed on the knee extensors than on the knee flexors throughout the downhill running bout, with this greater stress being differentially impacted by the curcumin supplementation. Irrespective of these differences, results from the present study demonstrate consistently better performance outcomes with the higher dose of curcumin (200 mg) when compared to the lower dose (50 mg). 
In a somewhat unexpected fashion, the placebo group responded more favorably than the 50-mg dose in all but one variable (isokinetic peak extension torque), while the PLA group responded similarly to the 200-mg group. It is possible that a certain threshold of curcumin (or curcuminoids) is needed to exert any impact on our measured outcomes, and consequently, the 200-mg dose crossed this threshold while the 50-mg dose did not. When considered in combination with the cessation of supplementation after conclusion of the damage bout, the lack of circulating curcuminoids in the 50-mg dose (versus the 200-mg dose) is considered the most likely reason for our dichotomous outcomes from a dose perspective. Another key consideration when interpreting our results in comparison to previous findings is the different patterns of exercise stress and damage that are employed. In the present study, a downhill running protocol was utilized, which is an established protocol for instigating metabolic as well as mechanical stress [6]. Other damage models, such as the one utilized in the Nicol protocol [12], employ high volumes of eccentric resistance exercise contractions that result in high levels of mechanical overload on the involved musculature, and have been clearly established as causing muscle damage [6]. 5. Conclusions: In conclusion, results from the present study highlight the ability of a high dose of CurcuWIN® (a 1000-mg dose delivering 200 mg of curcuminoids) to prevent the decreases in peak extension torque values seen one and 24 h after muscle-damaging exercise. In comparison, a lower dose of CurcuWIN® (delivering 50 mg of curcuminoids) was unable to attenuate performance changes in a pattern similar to that observed in PLA. 
While this study investigated changes in performance, future studies should also investigate more objective markers of muscle damage (blood markers such as creatine kinase or myoglobin) in trained and untrained cohorts.
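The results sections above report each pairwise comparison as a mean reduction from baseline with a 95% confidence interval. A minimal sketch of how such a comparison is computed, using a paired t-test on synthetic torque values (illustrative numbers, not the study's data), could look like this:

```python
import math
import statistics

# Synthetic peak-torque values (Nm) for 8 hypothetical subjects at baseline
# and one hour post-exercise; NOT the study's data, purely illustrative.
baseline = [210, 198, 225, 240, 205, 232, 218, 226]
one_hour = [195, 185, 210, 228, 190, 220, 204, 210]

# Paired differences (baseline minus post-exercise): positive = reduction.
diffs = [b - p for b, p in zip(baseline, one_hour)]
n = len(diffs)
mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)

# Two-sided 95% critical value of Student's t with n - 1 = 7 df (tabulated).
T_CRIT = 2.365
t_stat = mean_diff / se
ci = (mean_diff - T_CRIT * se, mean_diff + T_CRIT * se)

print(f"mean reduction = {mean_diff:.1f} Nm, t = {t_stat:.2f}, "
      f"95% CI: {ci[0]:.2f}-{ci[1]:.2f}")
```

A confidence interval that excludes zero, as in the significant pairwise comparisons reported above, corresponds to p < 0.05 for the paired test.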
Background: It is known that unaccustomed exercise, especially when it has an eccentric component, causes muscle damage and subsequent performance decrements. Attenuating muscle damage may improve performance and recovery, allowing for improved training quality and adaptations. Therefore, the current study sought to examine the effect of two doses of curcumin supplementation on performance decrements following downhill running. Methods: Sixty-three physically active men and women (21 ± 2 y; 70.0 ± 13.7 kg; 169.3 ± 15.2 cm; body mass index (BMI) 25.6 ± 14.3 kg/m²; 32 women, 31 men) were randomly assigned to ingest 250 mg of CurcuWIN® (50 mg of curcuminoids), 1000 mg of CurcuWIN® (200 mg of curcuminoids), or a corn starch placebo (PLA) for eight weeks in a double-blind, randomized, placebo-controlled parallel design. At the end of the supplementation period, subjects completed a downhill running protocol intended to induce muscle damage. Muscle function (via isokinetic dynamometry) and perceived soreness were assessed prior to and at 1 h, 24 h, 48 h, and 72 h post-downhill run. Results: Isokinetic peak extension torque did not change in the 200-mg group, while significant reductions occurred in the PLA and 50-mg groups through the first 24 h of recovery. Isokinetic peak flexion torque and power both decreased in the 50-mg group, while no change was observed in the PLA or 200-mg groups. No group experienced changes in isokinetic extension power or isometric average peak torque. Soreness was significantly increased in all the groups compared to baseline. Improvements in total soreness were observed for the 200-mg group, but these changes failed to reach statistical significance. Conclusions: When compared against PLA, a 200-mg dose of curcumin attenuated reductions in some, but not all, observed changes in performance and soreness after completion of a downhill running bout. 
Additionally, a 50-mg dose appears to offer no advantage relative to the changes observed in the PLA and 200-mg groups.
1. Introduction: Even in trained athletes, a novel or unaccustomed exercise bout, especially one that emphasizes eccentric contractions, can cause microscopic intramuscular tears and an exaggerated inflammatory response [1,2], which is generally referred to as exercise-induced muscle damage (EIMD) [3]. The subsequent muscular pain and restriction of movement from EIMD can limit an athlete’s performance [4]. Thus, strategies that can attenuate performance decrements associated with EIMD should result in higher training quality and, hypothetically, greater exercise training adaptations. Consequently, many strategies have been proposed to treat or prevent EIMD [5], but there is still no scientific consensus on the most effective strategy for all individuals [6]. Curcumin, the bioactive component (2–5% by weight) of the spice herb turmeric, has a long history of medicinal use due to its anti-inflammatory and antioxidant properties [7]. The United States Food and Drug Administration has listed curcumin as GRAS (generally recognized as safe), and curcumin-containing supplements have been approved for human ingestion [8]. Curcumin is a polyphenol that is considered a “nutraceutical”, or a dietary agent with pharmaceutical or therapeutic properties. The potential for curcumin as a treatment strategy, in place of targeted pharmaceuticals, is promising because curcumin is a highly pleiotropic molecule that interacts with multiple inflammatory pathways [1,2]. Previous research has indicated that curcumin may help alleviate performance decrements following intense, challenging exercise. For example, initial research in mice indicated that curcumin supplementation led to greater voluntary activity and improved running performance compared to placebo-supplemented mice after eccentric exercise [9]. Similar effects of curcumin following EIMD have been reported in human subjects. 
One study reported that curcumin supplementation reduces pain and tenderness [10], while Drobnic et al. [11] reported a reduction in muscular trauma in the posterior and medial thigh following a downhill run with curcumin supplementation, along with a moderate reduction in pain. In contrast, Nicol et al. [12] reported that curcumin moderately reduced pain during exercise but had little effect on muscle function. Relative to the impact of specific curcuminoids (as opposed to curcumin supplementation) in response to exercise and muscle damage, no published literature is available at the current time. A systematic review by Gaffey et al. [13] concluded that insufficient evidence exists to support the ability of curcuminoids to relieve pain and improve function. Importantly, the authors highlighted a general lack of evidence and poor study quality in the existing literature base. The mixed findings in previous studies may have been a result of limited bioavailability due to the formulation of the supplement [14]. A major limitation to the therapeutic potential of curcumin is its poor solubility, low absorption from the gut, rapid metabolism, and systemic elimination [15]. Curcumin is primarily excreted through the feces, never reaching detectable levels in circulation [16]. Previously, it has been shown that even high doses of orally administered curcumin (e.g., 10–12 g) result in little-to-no appearance of curcuminoids in circulation [17]. Various methods have been developed to increase the bioavailability of curcumin involving emulsions, nanocrystals, and liposomes, with varying degrees of success [18]. A recent formulation of curcumin (CurcuWIN®) involves the combination of cellulosic derivatives and other natural antioxidants (tocopherol and ascorbyl palmitate). 
This formulation has previously been shown to increase absorption 45.9-fold over a standardized curcumin mixture, and to increase absorption 5.8- to 34.9-fold relative to other formulations, while being well tolerated with no reported adverse events [19]. Among those participating in regular exercise training programs, much interest exists in strategies to reduce decrements in performance and improve training quality. As a strategy to reduce soreness, loss of muscle function, and inflammation, curcumin serves as a potentially useful nutritional target for accomplishing these goals, yet longstanding shortcomings with the ingredient have limited its potential. Therefore, the purpose of this study was to examine, in a double-blind, placebo-controlled fashion, whether a novel form of curcumin would attenuate performance decrements and reduce inflammation following a downhill running bout. Beyond examining the impact of curcumin availability, an additional study question focused on the identification of dose-dependent outcomes associated with curcumin administration relative to its ability to attenuate performance changes. We hypothesized that, in a dose-dependent fashion, curcumin would attenuate changes in performance indicators after completion of a downhill bout of treadmill running. 5. Conclusions: In conclusion, results from the present study highlight the ability of a high dose of CurcuWIN® (a 1000-mg dose delivering 200 mg of curcuminoids) to prevent the decreases in peak extension torque values seen one and 24 h after muscle-damaging exercise. In comparison, a lower dose of CurcuWIN® (delivering 50 mg of curcuminoids) was unable to attenuate performance changes in a pattern similar to that observed in PLA. 
While this study investigated changes in performance, future studies should also investigate more objective markers of muscle damage (blood markers such as creatine kinase or myoglobin) in trained and untrained cohorts.
Background: It is known that unaccustomed exercise, especially when it has an eccentric component, causes muscle damage and subsequent performance decrements. Attenuating muscle damage may improve performance and recovery, allowing for improved training quality and adaptations. Therefore, the current study sought to examine the effect of two doses of curcumin supplementation on performance decrements following downhill running. Methods: Sixty-three physically active men and women (21 ± 2 y; 70.0 ± 13.7 kg; 169.3 ± 15.2 cm; body mass index (BMI) 25.6 ± 14.3 kg/m²; 32 women, 31 men) were randomly assigned to ingest 250 mg of CurcuWIN® (50 mg of curcuminoids), 1000 mg of CurcuWIN® (200 mg of curcuminoids), or a corn starch placebo (PLA) for eight weeks in a double-blind, randomized, placebo-controlled parallel design. At the end of the supplementation period, subjects completed a downhill running protocol intended to induce muscle damage. Muscle function (via isokinetic dynamometry) and perceived soreness were assessed prior to and at 1 h, 24 h, 48 h, and 72 h post-downhill run. Results: Isokinetic peak extension torque did not change in the 200-mg group, while significant reductions occurred in the PLA and 50-mg groups through the first 24 h of recovery. Isokinetic peak flexion torque and power both decreased in the 50-mg group, while no change was observed in the PLA or 200-mg groups. No group experienced changes in isokinetic extension power or isometric average peak torque. Soreness was significantly increased in all the groups compared to baseline. Improvements in total soreness were observed for the 200-mg group, but these changes failed to reach statistical significance. Conclusions: When compared against PLA, a 200-mg dose of curcumin attenuated reductions in some, but not all, observed changes in performance and soreness after completion of a downhill running bout. 
Additionally, a 50-mg dose appears to offer no advantage relative to the changes observed in the PLA and 200-mg groups.
13,518
400
19
[ "time", "mg", "significant", "values", "group", "peak", "changes", "torque", "95", "95 ci" ]
[ "test", "test" ]
null
[CONTENT] athletic performance | muscle-damaging exercise | recovery | downhill run [SUMMARY]
null
[CONTENT] athletic performance | muscle-damaging exercise | recovery | downhill run [SUMMARY]
[CONTENT] athletic performance | muscle-damaging exercise | recovery | downhill run [SUMMARY]
[CONTENT] athletic performance | muscle-damaging exercise | recovery | downhill run [SUMMARY]
[CONTENT] athletic performance | muscle-damaging exercise | recovery | downhill run [SUMMARY]
[CONTENT] Adult | Athletic Performance | Curcumin | Dietary Supplements | Double-Blind Method | Exercise | Female | Humans | Male | Myalgia | Performance-Enhancing Substances | Running | Time Factors | Treatment Outcome | Young Adult [SUMMARY]
null
[CONTENT] Adult | Athletic Performance | Curcumin | Dietary Supplements | Double-Blind Method | Exercise | Female | Humans | Male | Myalgia | Performance-Enhancing Substances | Running | Time Factors | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adult | Athletic Performance | Curcumin | Dietary Supplements | Double-Blind Method | Exercise | Female | Humans | Male | Myalgia | Performance-Enhancing Substances | Running | Time Factors | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adult | Athletic Performance | Curcumin | Dietary Supplements | Double-Blind Method | Exercise | Female | Humans | Male | Myalgia | Performance-Enhancing Substances | Running | Time Factors | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adult | Athletic Performance | Curcumin | Dietary Supplements | Double-Blind Method | Exercise | Female | Humans | Male | Myalgia | Performance-Enhancing Substances | Running | Time Factors | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] time | mg | significant | values | group | peak | changes | torque | 95 | 95 ci [SUMMARY]
null
[CONTENT] time | mg | significant | values | group | peak | changes | torque | 95 | 95 ci [SUMMARY]
[CONTENT] time | mg | significant | values | group | peak | changes | torque | 95 | 95 ci [SUMMARY]
[CONTENT] time | mg | significant | values | group | peak | changes | torque | 95 | 95 ci [SUMMARY]
[CONTENT] time | mg | significant | values | group | peak | changes | torque | 95 | 95 ci [SUMMARY]
[CONTENT] curcumin | eimd | performance | pain | attenuate | training | decrements | exercise | curcumin supplementation | reduce [SUMMARY]
null
[CONTENT] 95 | ci | 95 ci | significant | time | group | mg | values | peak | changes [SUMMARY]
[CONTENT] delivering | dose curcuwin | dose | mg curcuminoids | curcuwin | curcuminoids | muscle | performance | mg | study [SUMMARY]
[CONTENT] time | mg | significant | 95 | 95 ci | ci | group | values | participants | changes [SUMMARY]
[CONTENT] time | mg | significant | 95 | 95 ci | ci | group | values | participants | changes [SUMMARY]
[CONTENT] ||| ||| two [SUMMARY]
null
[CONTENT] 200 | PLA | 50 | first | 24 ||| 50 | PLA | 200 ||| ||| ||| 200 [SUMMARY]
[CONTENT] PLA | 200 ||| 50 | PLA | 200 [SUMMARY]
[CONTENT] ||| ||| two ||| Sixty-three | 21 | 2 | 70.0 | 13.7 kg | 169.3 | 15.2 cm | 25.6 | 14.3 | BMI | 32 | 31 | 250 | 50 | 1000 | 200 | PLA | eight weeks ||| ||| 1 | 24 | 48 | 72 ||| ||| 200 | PLA | 50 | first | 24 ||| 50 | PLA | 200 ||| ||| ||| 200 ||| PLA | 200 ||| 50 | PLA | 200 [SUMMARY]
[CONTENT] ||| ||| two ||| Sixty-three | 21 | 2 | 70.0 | 13.7 kg | 169.3 | 15.2 cm | 25.6 | 14.3 | BMI | 32 | 31 | 250 | 50 | 1000 | 200 | PLA | eight weeks ||| ||| 1 | 24 | 48 | 72 ||| ||| 200 | PLA | 50 | first | 24 ||| 50 | PLA | 200 ||| ||| ||| 200 ||| PLA | 200 ||| 50 | PLA | 200 [SUMMARY]
Evaluation of low density polyethylene and nylon for delivery of synthetic mosquito attractants.
22992518
Synthetic odour baits present an unexploited potential for sampling, surveillance and control of malaria and other mosquito vectors. However, application of such baits is impeded by the unavailability of robust odour delivery devices that perform reliably under field conditions. In the present study the suitability of low density polyethylene (LDPE) and nylon strips for dispensing synthetic attractants of host-seeking Anopheles gambiae mosquitoes was evaluated.
BACKGROUND
Baseline experiments assessed the numbers of An. gambiae mosquitoes caught in response to low density polyethylene (LDPE) sachets filled with attractants, attractant-treated nylon strips, control LDPE sachets, and control nylon strips placed in separate MM-X traps. Residual attraction of An. gambiae to attractant-treated nylon strips was determined subsequently. The effects of sheet thickness and surface area on numbers of mosquitoes caught in MM-X traps containing the synthetic kairomone blend dispensed from LDPE sachets and nylon strips were also evaluated. Various treatments were tested through randomized 4 × 4 Latin Square experimental designs under semi-field conditions in western Kenya.
METHODS
Attractant-treated nylon strips collected 5.6 times more An. gambiae mosquitoes than LDPE sachets filled with the same attractants. The attractant-impregnated nylon strips were consistently more attractive (76.95%; n = 9,120) than sachets containing the same attractants (18.59%; n = 2,203), control nylon strips (2.17%; n = 257) and control LDPE sachets (2.29%; n = 271) up to 40 days post-treatment (P < 0.001). The higher catches of mosquitoes achieved with nylon strips were unrelated to differences in surface area between nylon strips and LDPE sachets. The proportion of mosquitoes trapped when individual components of the attractant were dispensed in LDPE sachets of optimized sheet thicknesses was significantly higher than when 0.03-mm sachets were used (P < 0.001).
RESULTS
Nylon strips continuously dispense synthetic mosquito attractants several weeks post-treatment. This, added to the superior performance of nylon strips relative to LDPE material in dispensing synthetic mosquito attractants, opens up the opportunity for showcasing the effectiveness of odour-baited devices for sampling, surveillance and control of disease vectors.
CONCLUSION
[ "Animals", "Anopheles", "Chemotactic Factors", "Kenya", "Mosquito Control", "Nylons", "Polyethylene", "Time Factors" ]
3480916
Background
The effectiveness of odour-baited tools for sampling, surveillance and control of insect vectors is strongly influenced by the selected odour delivery device [1,2]. Low density polyethylene (LDPE) materials have proved useful because odour baits are released at predictable rates and do not need to be replenished over prolonged periods of time [1,3]. However, these attributes may not guarantee maximal mosquito trap catches without prior optimization of sheet thickness and surface area [1,2,4]. Since LDPE sachets are prone to leakage, further searches for slow-release materials and techniques are warranted for the optimal release of odorants. In a previous eight-day study we reported on the efficacy of nylon fabric (90% polyamide and 10% spandex) as a tool for dispensing odours [3]. A potent synthetic mosquito attractant, namely Ifakara blend 1 (hereafter referred to as blend IB1), was used to evaluate open glass vials, LDPE and nylon as delivery tools. Nylon strips impregnated with blend IB1 attracted 5.83 and 1.78 times more Anopheles gambiae Giles sensu stricto (hereafter referred to as An. gambiae) mosquitoes than solutions of attractants dispensed from glass vials and LDPE sachets, respectively [3]. However, in the case of nylon strips each chemical component of the attractant was applied at its optimal concentration, whereas such optimization had not been implemented in advance for LDPE sachets. In this study we re-evaluated the suitability of nylon versus LDPE as materials for dispensing synthetic mosquito attractants. We pursued four specific aims, i.e., 
(i) comparison of nylon strips and LDPE sachets as materials for releasing synthetic mosquito attractants, (ii) assessment of the residual activity of attractant-baited nylon strips and LDPE sachets on host-seeking mosquitoes, (iii) determination of the effect of LDPE sheet thickness on attraction of mosquitoes to synthetic attractants, and (iv) comparison of surface area effects on attraction of mosquitoes to attractants administered through nylon strips versus LDPE sachets.
Methods
The study was carried out at the Thomas Odhiambo Campus of the International Centre of Insect Physiology and Ecology (icipe) located near Mbita Point Township in western Kenya between April 2010 and January 2011. Mosquitoes: The Mbita strain of An. gambiae was used for all experiments. For maintenance of this strain, mosquito eggs were placed in plastic trays containing filtered water from Lake Victoria. Larvae were fed on Tetramin® baby fish food three times per day. Pupae were collected daily, put in clean cups half-filled with filtered lake water and then placed in mesh-covered cages (30 × 30 × 30 cm). Emerging adult mosquitoes were fed on 6% glucose solution. General procedures: The experiments were conducted under semi-field conditions in a screen-walled greenhouse measuring 11 m × 7 m × 2.8 m, with the roof apex standing 3.4 m high. Four treatments, including two negative controls, were evaluated in each experimental run. A total of 200 adult female mosquitoes aged 3–5 days old were utilized for individual bioassays conducted between 20:00 and 06:30 h. The mosquitoes were starved for 8 h with no prior access to blood meals. Only water presented on cotton towels on top of mosquito holding cups was provided. Mosquitoes attracted to each treatment were sampled using MM-X traps (American Biophysics, North Kingstown, RI, USA). The nylon strips and LDPE sachets were suspended inside the plume tubes of separate traps, where a fan blew air over them to expel the attractant plume as indicated in our previous study [3]. 
Latex gloves were worn when hanging odour dispensers in the traps to avoid contamination. Trap positions were rotated to minimise positional effects. The traps were placed 1 m away from the edges of the greenhouse [4-6]. Each trap was marked and used for one specific treatment throughout the experiments. The number of mosquitoes collected per trap was counted and used both as an estimate of the attractiveness of the baits and as an indicator of the suitability of the dispensing materials. Each morning the traps were cleaned using 70% methanol solution. Mosquitoes that were not trapped were recaptured from the greenhouse using manual aspirators and killed. Temperature and relative humidity in the greenhouse were recorded using data loggers (Tinytag®). Whereas all experiments were conducted for 12 nights, responses of mosquitoes to residual release from attractant-treated nylon strips were evaluated for 40 nights and repeated three times. Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets: A 4 × 4 Latin square experimental design was conducted incorporating LDPE sachets filled with IB1, IB1-treated nylon strips, LDPE sachets filled with water (hereafter termed control LDPE sachets) and water-treated nylon strips (hereafter termed control nylon strips) as treatments. Sheet thicknesses of LDPE sachets, each measuring 2.5 cm × 2.5 cm (surface area 12.5 cm2), were optimized for individual chemical components of blend IB1 [7]. These were 0.2 mm (distilled water, propionic, butanoic, pentanoic, and 3-methylbutanoic acid), 0.1 mm (heptanoic and octanoic acid), 0.05 mm (lactic acid) and 0.03 mm (tetradecanoic acid and ammonia solution). Depending on treatment, LDPE sachets were filled with either 1 ml of the attractant compound or solvent. Individual nylon strips measuring 26.5 cm × 1 cm (surface area 53 cm2) were separately soaked in 1 ml of each of the chemical constituents of blend IB1 at their optimal concentrations [3,7]. The strips were air-dried at room temperature for 5 h before the start of experiments. 
Whereas attractant-treated nylon strips were freshly prepared each day, LDPE sachets filled with IB1 were re-used throughout the 12 days of the study and replaced upon leakage or depletion of individual components. Carbon dioxide, produced from 250 g of sucrose dissolved in 2 l of tap water containing 17.5 g of yeast [3,5,8], was supplied through silicon gas tubing at a flow rate of approximately 63 ml/min into traps baited with IB1-treated nylon strips or LDPE sachets filled with IB1, but not into traps with control nylon strips or control LDPE sachets. Individual LDPE sachets containing chemicals were weighed before and after each experiment to determine how much of each component of the blend had been released. Control LDPE sachets and LDPE sachets filled with IB1 were stored in a refrigerator at 4 °C between experimental runs.

Residual activity of attractant-treated nylon strips on host-seeking mosquitoes
In our previous study we noted a potential disadvantage of nylon strips, i.e. that they tend to dry up quickly, so that no more active ingredient may be available after long hours of trap operation [3]. We designed experiments to address this shortcoming. A 4 × 4 Latin square experimental design was used to evaluate residual attraction of An. gambiae to IB1-treated nylon strips and LDPE sachets filled with IB1. The four treatments were (i) LDPE sachets filled with IB1, (ii) IB1-treated nylon strips, (iii) control LDPE sachets and (iv) control nylon strips. The number of mosquitoes attracted to each treatment over a period of 40 nights was recorded daily and the proportions trapped were calculated. The experiment was replicated three times. Analysis of the data revealed no need to prepare fresh nylon strips daily; thus, nylon strips were re-used in subsequent experiments. Control LDPE sachets and IB1-filled LDPE sachets were also re-used, with individual sachets replenished upon depletion of their contents.
Sachets containing butanoic, pentanoic, 3-methylbutanoic, heptanoic and octanoic acid were replaced every 10–14 nights.

Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae
Direct exposure of IB1-treated nylon to environmental conditions may have led to higher release rates of attractant volatiles, resulting in more mosquitoes being attracted relative to LDPE sachets of optimal sheet thickness containing the same attractants. We hypothesized that increasing the release rates of all components in the blend, by dispensing every component from IB1-filled LDPE sachets of 0.03 mm sheet thickness (hereafter 0.03 mm-LDPE or 0.03 mm-sachet), could enhance the numbers of mosquitoes attracted. A sheet thickness of 0.03 mm was selected because it was the thinnest available LDPE material and had been used in our previous investigations [7]. This hypothesis was tested by comparing An. gambiae capture rates with sachets of optimized, variable thickness versus uniform 0.03 mm sachets. The sachets were weighed daily before and after each experiment to verify differences in volatile release rates. The carbon dioxide component of the blend was delivered separately through silicon tubing. A randomised 4 × 4 Latin square experimental design was adopted. The treatments were (a) LDPE sachets with optimized sheet thicknesses for all components of IB1, (b) each component of IB1 dispensed in LDPE sachets of 0.03 mm sheet thickness, (c) control LDPE sachets with optimized sheet thicknesses and (d) control LDPE sachets of 0.03 mm sheet thickness.
Response of mosquitoes to attractants applied on nylon versus 0.03 mm LDPE sachets
In addition to investigating the effect of volatile release rates on mosquito behaviour, we compared the numbers of An. gambiae attracted to IB1 either filled into LDPE sachets of uniform sheet thickness (0.03 mm) or applied on nylon strips. The treatments were (a) IB1-treated nylon strips, (b) each component of IB1 dispensed in 0.03 mm-LDPE sachets, (c) control nylon strips and (d) control 0.03 mm-LDPE sachets. A randomised 4 × 4 Latin square experimental design was adopted. The sachets and nylon strips had surface areas of 12.5 cm² and 53 cm², respectively.

Effects of dispenser surface area on attraction of mosquitoes
As the higher mosquito catches associated with IB1-treated nylon strips could not be explained by the strips being freshly treated prior to each experiment, we tested whether the variation in catches was due to differences in surface area.
The LDPE sachets and nylon strips used in the previous experiments of this study had surface areas of 12.5 cm² and 53 cm², respectively; thus, the strips released odorants over a larger surface area than the sachets. We designed two sets of 4 × 4 Latin square experiments to test whether the larger surface area of the nylon strips was responsible for their higher mosquito catches. The four treatments were (a) IB1-treated nylon strips, (b) LDPE sachets filled with IB1, (c) control nylon strips and (d) control LDPE sachets. Enlarged LDPE sachets were likewise filled with 1 ml of attractant or solvent. In the first set of experiments, the surface areas of control and attractant-filled LDPE sachets were enlarged (2.5 cm wide × 10.6 cm long × 2 sides of the sachet) to equal the surface area of the nylon strips. In the second set of experiments, a piece of absorbent material (a nylon strip) was placed inside the enlarged (53 cm²) control and attractant-filled LDPE sachets to ensure that blend IB1 was spread evenly over the entire inner surface of the sachets. Each set of experiments was replicated 12 times. All other experimental procedures were as described in the previous sections.

Data analysis
The relative efficacy of each treatment was defined as the percentage of female mosquitoes caught in the trap containing either of the two release materials impregnated or filled with synthetic attractants or solvent. To investigate the effect of residual activity of attractant-treated materials on capture rates, we used the baseline-category logit model [9]. The nominal response variable was the attractant type, with four categories (IB1-containing LDPE sachets, IB1-treated nylon strips, control nylon strips and control LDPE sachets) and with day and trap position as covariates. We estimated the odds that mosquitoes chose the other attractants instead of IB1-treated nylon strips over time, while adjusting for trap position. The Mann-Whitney U test was used to estimate the effect of sheet thickness of LDPE sachets on the release rates of the IB1 components except carbon dioxide. To investigate the effects of surface area and sheet thickness of the release material on mosquito catches, we fitted a Poisson regression model controlling for trap position. Analyses were performed in SAS v9.2 (SAS Institute Inc.), with tests at the 5% significance level.
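The models themselves were fitted in SAS. As an illustrative stand-in only (the counts below are hypothetical, not the study's data), a Poisson regression of nightly trap counts on a treatment indicator can be fitted by iteratively reweighted least squares (IRLS), the standard Newton-type algorithm for GLMs:

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    eta = np.log(y + 0.5)                          # common starting value
    beta = np.linalg.lstsq(X, eta, rcond=None)[0]
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                           # fitted means
        z = eta + (y - mu) / mu                    # working response
        XtW = X.T * mu                             # X'W with W = diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)   # weighted least squares step
    return beta

# Hypothetical nightly catches: four nights with a baited trap, four with a control
y = np.array([38, 41, 35, 44, 2, 3, 1, 4], dtype=float)
treated = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
X = np.column_stack([np.ones_like(y), treated])    # intercept + treatment dummy

beta = poisson_irls(X, y)
rate_ratio = np.exp(beta[1])                       # catch-rate ratio, treated vs control
```

With a simple group-indicator design like this, the fitted Poisson means coincide with the group sample means, so `exp(beta[1])` is just the ratio of mean catches; the study's actual model additionally controlled for trap position.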
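The 4 × 4 Latin square rotations used throughout the experiments can be generated mechanically. The following sketch is illustrative only (the treatment labels and the cyclic rotation rule are assumptions, not the study's actual schedule); it assigns each of four treatments to each trap position exactly once over four nights:

```python
treatments = ["IB1 nylon", "IB1 LDPE", "control nylon", "control LDPE"]
n = len(treatments)

# Cyclic 4 x 4 Latin square: rows = nights, columns = trap positions.
square = [[treatments[(night + pos) % n] for pos in range(n)]
          for night in range(n)]

for night, row in enumerate(square, start=1):
    print(f"night {night}: {row}")

# Each treatment appears exactly once per night and once per trap position.
assert all(set(row) == set(treatments) for row in square)
assert all({square[r][c] for r in range(n)} == set(treatments) for c in range(n))
```

Repeating such a square three times gives a 12-night run, matching the replication used in most experiments above.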
Results
Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets
The 12-day period over which these experiments were conducted was characterized by a mean temperature of 22.18 ± 0.08 °C and a mean relative humidity of 86.15 ± 1.56% within the screen-walled greenhouse. Out of 2,400 female An. gambiae mosquitoes released, 51.88% (n = 1,235) were caught in the four treatment traps. Of these catches, 77.73%, 18.62%, 1.78% and 1.86% were trapped by IB1-treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets, respectively (Figure 1). Baseline-category logit model results revealed that IB1-impregnated nylon strips attracted, on average, 5.6 times more mosquitoes than LDPE sachets filled with IB1 (P < 0.001). Whereas there was no significant difference in the proportion of mosquitoes attracted to control nylon strips versus control LDPE sachets (P = 0.436), both controls attracted significantly fewer mosquitoes than nylon strips or LDPE sachets containing blend IB1 (P < 0.001). The day effect was not significant (P = 0.056) and was therefore excluded from the final model; trap position, however, was an important determinant of mosquito catches (P < 0.001). These experiments provided baseline information for the subsequent investigations conducted during the study.

Figure 1. Proportions of mosquitoes caught in MM-X traps containing IB1-treated nylon strips, LDPE sachets filled with blend IB1, control nylon strips and control LDPE sachets. Mean mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.

Residual activity of attractant-treated nylon strips on host-seeking mosquitoes
A total of 11,851 (49.38%) mosquitoes were attracted and collected over 120 nights (three replicates of 40 nights each). The proportions of mosquitoes caught over time differed among treatments (P < 0.001) (Figure 2). Attractant-treated nylon strips repeatedly trapped the highest proportion of mosquitoes, without re-application of the attractant blend, up to 40 days post-treatment. During this period the treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets attracted 76.95% (n = 9,120), 18.59% (n = 2,203), 2.17% (n = 257) and 2.29% (n = 271) of the mosquitoes, respectively.
There was also a significant increase over time in the proportion of mosquitoes choosing LDPE sachets filled with IB1 (P < 0.001), but not for control nylon strips (P = 0.051) or control LDPE sachets (P = 0.071). In contrast, the numbers of mosquitoes attracted to IB1-impregnated nylon strips decreased considerably over time (P < 0.002). Nevertheless, the treated strips were consistently preferred to LDPE sachets filled with IB1 (Figure 2).

Figure 2. Proportions of mosquitoes caught in traps containing IB1-treated nylon strips (―), LDPE sachets filled with blend IB1 (−−--), control nylon strips (—♦--) and control LDPE sachets (− × − × −) over time. Lines and symbols representing mosquito catches due to control nylon strips and control LDPE sachets are superimposed on each other. Open (IB1-treated nylon strips) and closed circles (LDPE sachets filled with IB1) represent observed values. Lines represent the baseline-category logit model fit, showing trends in the proportions of mosquitoes attracted over time.

Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae
Here, LDPE sachets with sheet thickness either optimized for each chemical constituent of the attractant [7] or kept uniform (0.03 mm) were evaluated. Out of 2,400 mosquitoes released, 51.17% were trapped (Table 1). Whereas trap position was not a significant factor (P = 0.183), attraction of mosquitoes to the different traps was influenced by LDPE sheet thickness (P < 0.001). Delivery of attractant components through sachets with optimized sheet thicknesses resulted in significantly higher mosquito catches than uniform 0.03 mm-sachets (P < 0.001). There was no difference in mosquito catches between the two types of control LDPE sachets (P = 0.111).

Table 1. Effect of polyethylene sheet thickness on attraction of An. gambiae to attractant-baited sachets. N refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (± S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).

The effect of porosity due to differences in sheet thickness of LDPE sachets on the release rates of the various chemicals emitted from blend IB1 was also investigated.
Mann-Whitney U tests indicated that sheet thickness had a significant effect on the release rates of propionic acid, pentanoic acid, heptanoic acid, distilled water and lactic acid (P = 0.04, 0.03, 0.02, 0.01 and 0.02, respectively). However, the release rates of butanoic acid, 3-methylbutanoic acid, octanoic acid, tetradecanoic acid and ammonia did not depend on the sheet thickness of the LDPE sachets (P = 0.722, 0.97, 0.30, 0.23 and 0.87, respectively) (Figure 3).

Figure 3. Effect of LDPE sheet thickness on release rates of the chemical constituents of the mosquito attractant Ifakara blend 1 (IB1). Release rates are shown from sachets whose sheet thickness had been optimised for each chemical component of the blend (open bars) or kept uniform at 0.03 mm for all constituents (shaded bars). The optimised LDPE sheet thicknesses were 0.2 mm [distilled water (H2O), propionic (C3), butanoic (C4), pentanoic (C5) and 3-methylbutanoic acid (3MC4)], 0.1 mm [heptanoic (C7) and octanoic acid (C8)], 0.05 mm [lactic acid (LA)] and 0.03 mm [tetradecanoic acid (C14) and ammonia solution (NH3)]. Odour release rates represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean odour release rates measured in ng/h.
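Release rates were inferred by weighing sachets before and after each experiment, and thickness groups were compared with Mann-Whitney U tests. A minimal sketch of both steps, using entirely hypothetical sachet weights and an assumed 10.5 h nightly operating period (20:00–06:30):

```python
from scipy.stats import mannwhitneyu

HOURS = 10.5  # assumed nightly trap operation time (20:00-06:30)

def release_rate_ng_per_h(before_g, after_g, hours=HOURS):
    """Release rate from sachet mass loss, converted from grams to ng/h."""
    return (before_g - after_g) * 1e9 / hours

# Hypothetical before/after sachet weights (g) for one compound,
# dispensed from thick (0.2 mm) versus thin (0.03 mm) sachets.
thick = [release_rate_ng_per_h(b, a) for b, a in
         [(1.00120, 1.00109), (1.00109, 1.00089), (1.00089, 1.00074)]]
thin = [release_rate_ng_per_h(b, a) for b, a in
        [(1.00300, 1.00210), (1.00210, 1.00129), (1.00129, 1.00034)]]

# Two-sided Mann-Whitney U test comparing the two thickness groups
stat, p = mannwhitneyu(thick, thin, alternative="two-sided")
```

With samples this small and no ties, SciPy computes an exact p-value; complete separation of three observations per group cannot reach significance at the 5% level, which is one reason the study accumulated many nights per treatment.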
Response of mosquitoes to attractant-treated nylon versus thin-sheeted polyethylene sachets
Additional studies confirmed that mosquitoes preferred attractant-treated nylon strips to attractants contained in 0.03 mm-LDPE sachets (P < 0.001). Overall, 49.63% (n = 1,191) of the released mosquitoes were recaptured. Of these, 84.50%, 11.07%, 2.26% and 1.68% were found in traps baited with attractant-treated nylon strips, 0.03 mm-LDPE sachets filled with IB1, control nylon strips and control 0.03 mm-LDPE sachets, respectively (Figure 4). The numbers of mosquitoes caught by the control strips and control sachets were not significantly different (P = 0.309).

Figure 4. Proportions of mosquitoes caught by traps containing IB1-treated nylon strips, 0.03 mm-LDPE sachets filled with blend IB1, control 0.03 mm-LDPE sachets and control nylon strips. Mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.
Effects of dispenser surface area on attraction of mosquitoes
The LDPE sachets and nylon strips used to dispense blend IB1 in the preceding experiments had total surface areas of 12.5 cm² and 53 cm², respectively. Follow-up experiments were therefore conducted in which the LDPE sachets were enlarged (2.5 cm × 10.6 cm × 2 sides) to equal the surface area of the nylon strips. Attractant-treated nylon strips caught significantly more mosquitoes than attractants contained in enlarged LDPE sachets, both with (P < 0.001) and without (P < 0.001) an inner lining of absorbent material (Table 2). Thus, the higher attraction of mosquitoes to IB1-treated nylon strips was not neutralized by equalizing the surface area or by spreading the attractants uniformly over the inner surface of the LDPE sachets. Mosquito responses to traps containing control nylon strips versus control LDPE sachets, with or without the absorbent nylon material, did not differ (P = 0.173 and P = 0.556, respectively). Trap position had a significant effect on catches (P < 0.001 in both cases).

Table 2. Behavioural responses of mosquitoes towards attractant-treated polyethylene sachets lined with nylon versus nylon strips treated with the same attractant. N refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (± S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).
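As a quick consistency check, the headline percentages of the residual-activity experiment can be reproduced from the raw counts reported above (200 females released per night over 120 nights; the computed nylon-strip share, 76.96% at two decimals, matches the quoted 76.95% to within rounding):

```python
# Raw catch counts from the residual-activity experiment (120 nights)
counts = {
    "IB1-treated nylon strips": 9120,
    "LDPE sachets filled with IB1": 2203,
    "control nylon strips": 257,
    "control LDPE sachets": 271,
}

total_caught = sum(counts.values())   # total mosquitoes collected
released = 200 * 120                  # 200 females per night, 120 nights

recapture_pct = 100 * total_caught / released
shares = {name: 100 * n / total_caught for name, n in counts.items()}
```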
Conclusion
This study demonstrates that nylon strips are a potent and sustainable release material for dispensing synthetic mosquito attractants. Attractant-treated nylon strips can evidently be used for prolonged periods without re-applying the attractant blend. Treating nylon surfaces with attractants therefore presents an opportunity for use in long-lasting odour-baited devices for sampling, surveillance and control of disease vectors.
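The 4 × 4 Latin square rotation used throughout these experiments (four treatments rotated across four trap positions so that each treatment occupies each position exactly once) can be sketched as follows; this is an illustrative cyclic construction, not the exact randomisation used in the study:

```python
# Cyclic 4 x 4 Latin square: rows are nights, columns are trap
# positions; each treatment occupies each position exactly once.
TREATMENTS = ["IB1 nylon", "IB1 LDPE", "control nylon", "control LDPE"]

def latin_square(treatments):
    n = len(treatments)
    return [[treatments[(night + pos) % n] for pos in range(n)]
            for night in range(n)]

square = latin_square(TREATMENTS)
for row in square:                      # each night uses every treatment
    assert sorted(row) == sorted(TREATMENTS)
for col in zip(*square):                # each position sees every treatment
    assert sorted(col) == sorted(TREATMENTS)
```

In practice the study randomised the square; the balancing property checked by the assertions is what controls for the strong trap-position effect reported in the results.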
[ "Background", "Mosquitoes", "General procedures", "Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets", "Residual activity of attractant-treated nylon strips on host seeking mosquitoes", "Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae", "Response of mosquitoes to attractants applied on nylon versus 0.03 mm LDPE sachets", "Effects of dispenser surface area on attraction of mosquitoes", "Data analysis", "Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets", "Residual activity of attractant-treated nylon strips on host-seeking mosquitoes", "Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae", "Response of mosquitoes to attractant-treated nylon versus thin-sheeted polyethylene sachets", "Effects of dispenser surface area on attraction of mosquitoes", "Competing interests", "Authors’ contributions" ]
[ "The effectiveness of odour-baited tools for sampling, surveillance and control of insect vectors is strongly influenced by the selected odour delivery device [1,2]. Low density polyethylene (LDPE) materials have proved useful because odour baits are released at predictable rates and do not need to be replenished over prolonged periods of time [1,3]. However, these attributes may not guarantee maximal mosquito trap catches without prior optimization of sheet thickness and surface area [1,2,4]. Since LDPE sachets are prone to leakage, further searches for slow-release materials and techniques is warranted for the optimal release of odorants.\nIn a previous eight-day study we reported on the efficacy of nylon fabric (90% polyamide and 10% spandex) as a tool for dispensing odours [3]. A potent synthetic mosquito attractant namely Ifakara blend 1 (hereafter referred to as blend IB1) was used to evaluate open glass vials, LDPE and nylon as delivery tools. Nylon strips impregnated with blend IB1 attracted 5.83 and 1.78 times more Anopheles gambiae Giles sensu stricto (hereafter referred to as An. gambiae) mosquitoes than solutions of attractants dispensed from glass vials and LDPE sachets, respectively [3]. However, in the case of nylon strips each chemical component of the attractant was applied at its optimal concentration whereas such optimization had not been implemented in advance for LDPE sachets.\nIn this study we re-evaluated the suitability of nylon versus LDPE as materials for dispensing synthetic mosquito attractants. We pursued four specific aims i.e. 
(i) comparison of nylon strips and LDPE sachets as materials for releasing synthetic mosquito attractants, (ii) assessment of the residual activity of attractant-baited nylon strips and LDPE sachets on host-seeking mosquitoes, (iii) determination of the effect of LDPE sheet thickness on attraction of mosquitoes to synthetic attractants, and (iv) comparison of surface area effects on attraction of mosquitoes to attractants administered through nylon strips versus LDPE sachets.", "The Mbita strain of An. gambiae was used for all experiments. For maintenance of this strain, mosquito eggs were placed in plastic trays containing filtered water from Lake Victoria. Larvae were fed on Tetramin® baby fish food three times per day. Pupae were collected daily, put in clean cups half-filled with filtered lake water and then placed in mesh-covered cages (30 × 30 × 30 cm). Emerging adult mosquitoes were fed on 6% glucose solution.", "The experiments were conducted under semi-field conditions in a screen-walled greenhouse measuring 11 m × 7 m × 2.8 m, with the roof apex standing 3.4 m high. Four treatments including two negative controls were evaluated in each experimental run. A total of 200 adult female mosquitoes aged 3–5 days old were utilized for individual bioassays conducted between 20:00 and 06:30 h. The mosquitoes were starved for 8 h with no prior access to blood meals. Only water presented on cotton towels on top of mosquito holding cups was provided. Mosquitoes attracted to each treatment were sampled using MM-X traps (American Biophysics, North Kingstown, RI, USA). The nylon strips and LDPE sachets were suspended inside the plume tubes of separate traps where a fan blew air over them to expel the attractant plume as indicated in our previous study [3]. Latex gloves were worn when hanging odour dispensers in the traps to avoid contamination.\nTrap positions were rotated to minimise positional effects. 
The traps were placed 1 m away from the edges of the greenhouse [4-6]. Each trap was marked and used for one specific treatment throughout the experiments. The number of mosquitoes collected per trap was counted and used both as an estimate for the attractiveness of the baits and an indicator for the suitability of dispensing materials. Each morning the traps were cleaned using 70% methanol solution. Mosquitoes that were not trapped were recaptured from the greenhouse using manual aspirators and killed. Temperature and relative humidity in the greenhouse were recorded using data loggers (Tinytag®). Whereas all experiments were conducted for 12 nights, responses of mosquitoes to residual release from attractant-treated nylon strips were evaluated for 40 nights and repeated three times.", "A 4 × 4 Latin square experimental design was conducted incorporating LDPE sachets filled with IB1, IB1-treated nylon strips, LDPE sachets filled with water (hereafter termed control LDPE sachets) and water-treated nylon strips (hereafter termed control nylon strips) as treatments. Sheet thicknesses of LDPE sachets each measuring 2.5 cm × 2.5 cm (surface area 12.5 cm2) were optimized for individual chemical components of blend IB1 [7]. These were 0.2 mm (distilled water, propionic, butanoic, pentanoic, and 3-methylbutanoic acid), 0.1 mm (heptanoic and octanoic acid), 0.05 mm (lactic acid) and 0.03 mm (tetradecanoic acid and ammonia solution). Depending on treatment, LDPE sachets were filled with either 1 ml of the attractant compound or solvent. Individual nylon strips measuring 26.5 cm × 1 cm (surface area 53 cm2) were separately soaked in 1 ml of each of the chemical constituents of blend IB1 at their optimal concentrations [3,7]. The strips were air-dried at room temperature for 5 h before the start of experiments. 
Whereas attractant-treated nylon strips were freshly prepared each day, LDPE sachets filled with IB1 were re-used throughout the 12 days of the study and replaced upon leakage or depletion of individual components. Carbon dioxide, produced from 250 g of sucrose dissolved in 2 l of tap water containing 17.5 g of yeast [3,5,8], was supplied through silicon gas tubing at a flow rate of approximately 63 ml/min into traps baited with IB1-treated nylon strips or LDPE sachets filled with IB1 only, and not with control nylon or LDPE sachets. Individual LDPE sachets containing chemicals were weighed before and after each experiment to determine how much of the individual components of the blend had been released. Control LDPE sachets and LDPE sachets filled with IB1 were stored in the refrigerator at 4 °C between experimental runs.", "In our previous study we noted the potential disadvantage of nylon strips, i.e. that they tend to dry up quickly so no more active ingredient may be available following long hours of trap operation [3]. We designed experiments aimed at addressing this shortcoming. A 4 × 4 Latin square experimental design was used to evaluate residual attraction of An. gambiae to IB1-treated nylon strips and LDPE sachets filled with IB1. The four treatments included (i) LDPE sachets filled with IB1, (ii) IB1-treated nylon strips, (iii) control LDPE sachets and (iv) control nylon strips. The number of mosquitoes attracted to each treatment over a period of 40 nights was recorded daily and proportions trapped were calculated. The experiment was replicated three times. Analysis of data revealed no need to prepare fresh nylon strips daily. Thus, nylon strips were re-used in subsequent experiments. Whereas control LDPE sachets and IB1-filled LDPE sachets were also re-used, individual sachets were replenished upon depletion of contents. 
Sachets containing butanoic, pentanoic, 3-methylbutanoic, heptanoic and octanoic acid were replaced after every 10–14 nights.", "Direct exposure of IB1-treated nylon to environmental conditions may have led to higher release rates of attractant volatiles, resulting in more mosquitoes being attracted relative to LDPE sachets of optimal sheet thicknesses containing the same attractants. We hypothesized that increasing release rates by using IB1-filled LDPE sachets of 0.03 mm sheet thickness for all components in the blend (hereafter indicated as 0.03 mm-LDPE or 0.03 mm-sachet) could enhance the numbers of mosquitoes attracted. A sheet thickness of 0.03 mm was selected because it was the thinnest available LDPE material and had been used in our previous investigations [7]. This hypothesis was tested by comparing An. gambiae mosquito capture rates with sachets of variable thickness versus 0.03 mm sachets. The sachets were weighed daily before and after each experiment to verify differences in volatile release rates. The carbon dioxide component of the blend was delivered separately through silicon tubing. A randomised 4 × 4 Latin square experimental design was adopted. The treatments included (a) LDPE sachets with optimized sheet thicknesses for all components of IB1, (b) each component of IB1 dispensed in LDPE sachets of 0.03 mm sheet thickness, (c) control LDPE sachets with optimal sheet thicknesses for all components of IB1, and (d) control LDPE sachets with 0.03 mm sheet thickness.", "In addition to investigating the effect of volatile release rates on mosquito behaviour, we compared numbers of An. gambiae mosquitoes attracted to blend IB1 filled into LDPE sachets of uniform sheet thickness (0.03 mm) or applied on nylon strips. The following treatments were tested: (a) IB1-treated nylon strips, (b) each component of IB1 dispensed in 0.03 mm-LDPE sachets, (c) control nylon strips and (d) control 0.03 mm-LDPE sachets. 
A randomised 4 × 4 Latin square experimental design was adopted. The sachets and nylon strips had surface areas of 12.5 cm2 and 53 cm2, respectively.", "As higher mosquito catches associated with IB1-treated nylon strips could not be explained by the strips being freshly treated prior to each experiment, we tested whether variations in mosquito catches were due to differences in surface area. The LDPE sachets and nylon strips used in previous experiments of this study had surface areas of 12.5 cm2 and 53 cm2, respectively. Thus, the strips released odorants over a larger surface area than the LDPE sachets. We designed two sets of 4 × 4 Latin square experiments to test whether the larger surface area of nylon strips was responsible for the higher mosquito catches. The four treatments included (a) IB1-treated nylon strips, (b) LDPE sachets filled with IB1, (c) control nylon strips and (d) control LDPE sachets. Enlarged LDPE sachets were similarly filled with one ml of attractant or solvent. In the first set of experiments, surface areas of control and attractant-filled LDPE sachets were enlarged (2.5 cm wide × 10.6 cm long × 2 sides of the sachet) to equal the surface area of nylon strips. In the second set of experiments, a piece of absorbent material (nylon strip) was placed inside enlarged (53 cm2) control and attractant-filled LDPE sachets to ensure that blend IB1 was evenly spread over the entire inner surface of the sachets. Each set of experiments was replicated 12 times. All other experimental procedures were similar to those described in previous sections.", "The relative efficacy of each treatment was defined as a percentage of female mosquitoes caught in the traps containing either of the two release materials impregnated or filled with synthetic attractants or solvent. In order to investigate the effect of residual activity of attractant-treated materials on capture rates, we used the baseline-category logit model [9]. 
The nominal response variable was defined as the attractant type with four categories (IB1-containing LDPE sachets, IB1-treated nylon strips, control nylon strips, and control LDPE sachets), with day and trap position as covariates. We estimated the odds that mosquitoes chose other attractants instead of IB1-treated nylon strips over time, while adjusting for trap position. The Mann–Whitney U test was used to estimate the effect of sheet thickness of LDPE sachets on release rates of IB1 components except carbon dioxide. To investigate the effect of surface area and sheet thickness of the release material on mosquito catches, we fitted a Poisson regression model controlling for trap position. The analyses were performed using SAS v9.2 (SAS Institute Inc.) with tests performed at the 5% level.", "The 12-day period over which experiments were conducted was characterized by a mean temperature and relative humidity of 22.18 ± 0.08 °C and 86.15 ± 1.56%, respectively, within the screen-walled greenhouse. Out of 2,400 female An. gambiae mosquitoes released, 51.88% (n = 1,235) were caught in the four treatment traps. Of these catches, 77.73%, 18.62%, 1.78% and 1.86% were trapped by IB1-treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets, respectively (Figure 1). Baseline-category logit model results revealed that IB1-impregnated nylon strips attracted, on average, 5.6 times more mosquitoes than LDPE sachets filled with IB1 (P < 0.001). Whereas there was no significant difference in the proportion of mosquitoes attracted to control nylon strips and control LDPE sachets (P = 0.436), these treatments attracted significantly fewer mosquitoes than nylon and LDPE sachets containing blend IB1 (P < 0.001). Day effect was not significant (P = 0.056), and was therefore excluded from the final model. However, trap position was an important determinant of mosquito catches (P < 0.001). 
These experiments provided baseline information for subsequent investigations conducted during the study.\nProportions of mosquitoes caught in MM-X traps containing IB1-treated nylon strips, LDPE sachets filled with blend IB1, control nylon strips and control LDPE sachets. Mean mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.", "A total of 11,851 (49.38%) mosquitoes were attracted and collected over 120 nights (i.e. three replicates of 40 days each). The proportions of mosquitoes caught over time differed among treatments (P < 0.001) (Figure 2). Attractant-treated nylon strips repeatedly trapped the highest proportion of mosquitoes without re-applying the attractant blend up to 40 days post-treatment. During this period the treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets attracted 76.95% (n = 9,120), 18.59% (n = 2,203), 2.17% (n = 257) and 2.29% (n = 271) of the mosquitoes, respectively. There was also a significant increase over time in the proportion of mosquitoes choosing LDPE sachets filled with IB1 (P < 0.001), but not for control nylon strips (P = 0.051) and control LDPE sachets (P = 0.071). In contrast, the numbers of mosquitoes attracted to IB1-impregnated nylon strips decreased considerably over time (P < 0.002). However, they were consistently preferred to LDPE sachets filled with IB1 (Figure 2).\nProportions of mosquitoes caught in traps containing IB1-treated nylon strips (―), LDPE sachets filled with blend IB1 (−−--), control nylon strips (—♦--) and control LDPE sachets (− × − × −) over time. Lines and symbols representing mosquito catches due to control nylon strips and control LDPE sachets are superimposed over each other. Open (IB1-treated nylon strips) and closed circles (LDPE sachets filled with IB1) represent observed values. 
Lines represent the Baseline-category logit model fit showing trends of proportions of mosquitoes attracted over time.", "Here LDPE sachets with sheet thickness optimized [7] or kept uniform (0.03 mm) for each chemical constituent of the attractant were evaluated. Out of 2,400 mosquitoes released, 51.17% were trapped (Table 1). Whereas trap position was not a significant factor (P = 0.183), attraction of mosquitoes to different traps was influenced by LDPE sheet thickness (P < 0.001). Delivery of attractant components through sachets with optimized sheet thicknesses resulted in a significant increase in mosquito catches as opposed to uniform 0.03 mm-sachets (P < 0.001). There was no difference in mosquito catches between both types of control LDPE sachets (P = 0.111).\n\nEffect of polyethylene sheet thickness on attraction of \n\nAn. gambiae\n\n to attractant baited sachets\n\nN refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (±S.E) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).\nThe effect of porosity due to differences in sheet thickness of LDPE sachets on release rates of various chemicals emitted from blend IB1 was also investigated. Mann Whitney-U tests indicated that sheet thickness had a significant effect on the release rates of propionic acid, pentanoic acid, heptanoic acid, distilled water and lactic acid (P = 0.04, 0.03, 0.02, 0.01 and 0.02, respectively). However, release rates of butanoic acid, 3-methylbutanoic acid, octanoic acid, tetradecanoic acid, and ammonia were not dependent on sheet thickness of LDPE sachets (P = 0.722, 0.97, 0.30, 0.23, and 0.87, respectively) (Figure 3).\nEffect of LDPE sheet thickness on release rates of chemical constituents contained in the mosquito attractant Ifakara blend 1 (IB1). 
Release rates from sachets, the sheet thickness of which had been optimised for all chemical components of the blend (open bars) or kept uniform (0.03 mm-sheets) for all the chemical constituents (shaded bars), are shown. The optimised LDPE sheet thicknesses were 0.2 mm [distilled water (H2O), propionic (C3), butanoic (C4), pentanoic (C5), and 3-methylbutanoic acid (3MC4)], 0.1 mm [heptanoic (C7) and octanoic acid (C8)], 0.05 mm [lactic acid (LA)] and 0.03 mm [tetradecanoic acid (C14) and ammonia solution (NH3)]. Odour release rates represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean odour release rates measured in ng/h.", "Additional studies confirmed that mosquitoes preferred attractant-treated nylon strips compared to attractants contained in 0.03 mm-LDPE sachets (P < 0.001). Overall, 49.63% (n = 1191) of released mosquitoes were recaptured. Of these, 84.50%, 11.07%, 2.26%, and 1.68% were found in traps baited with attractant-treated nylon strips, LDPE sachets (0.03 mm) filled with IB1, control nylon strips and control LDPE sachets (0.03 mm), respectively (Figure 4). The numbers of mosquitoes caught by control strips and sachets were not significantly different (P = 0.309).\nProportions of mosquitoes caught by traps containing IB1-treated nylon strips, 0.03 mm-LDPE sachets filled with blend IB1, control 0.03 mm-LDPE sachets and control nylon strips. Mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.", "The LDPE sachets and nylon strips used to dispense blend IB1 in preceding experiments of this study had total surface areas of 12.5 cm2 and 53 cm2, respectively. Follow-up experiments were conducted in which LDPE sachets were enlarged (2.5 cm × 10.6 cm × 2) to equal the surface area of the nylon strips. 
Attractant-treated nylon strips caught significantly more mosquitoes than attractants contained in enlarged LDPE sachets with (P < 0.001) and without an inner lining of absorbent material (P < 0.001) (Table 2). Thus, the higher attraction of mosquitoes to IB1-treated nylon strips was not neutralized by equalizing the surface area or spreading the attractants uniformly over the inner surface of the LDPE sachets. Mosquito responses to traps containing control nylon strips versus control LDPE sachets with or without the absorbent nylon material did not differ (P = 0.173 and P = 0.556, respectively). Position had a significant effect on trap catches (P < 0.001 in both cases).\nBehavioural responses of mosquitoes towards attractant-treated polyethylene sachets lined with nylon versus nylon strips treated with a similar attractant\nN refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (± S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).", "The authors declare that they have no competing interests.", "WRM, CKM, WT, and RCS designed the study; CKM and PO conducted the research; BO and WRM analysed the data; CKM, WRM, RCS, WT and JJAvL wrote the paper. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Mosquitoes", "General procedures", "Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets", "Residual activity of attractant-treated nylon strips on host seeking mosquitoes", "Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae", "Response of mosquitoes to attractants applied on nylon versus 0.03 mm LDPE sachets", "Effects of dispenser surface area on attraction of mosquitoes", "Data analysis", "Results", "Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets", "Residual activity of attractant-treated nylon strips on host-seeking mosquitoes", "Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae", "Response of mosquitoes to attractant-treated nylon versus thin-sheeted polyethylene sachets", "Effects of dispenser surface area on attraction of mosquitoes", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions" ]
[ "The effectiveness of odour-baited tools for sampling, surveillance and control of insect vectors is strongly influenced by the selected odour delivery device [1,2]. Low density polyethylene (LDPE) materials have proved useful because odour baits are released at predictable rates and do not need to be replenished over prolonged periods of time [1,3]. However, these attributes may not guarantee maximal mosquito trap catches without prior optimization of sheet thickness and surface area [1,2,4]. Since LDPE sachets are prone to leakage, further searches for slow-release materials and techniques is warranted for the optimal release of odorants.\nIn a previous eight-day study we reported on the efficacy of nylon fabric (90% polyamide and 10% spandex) as a tool for dispensing odours [3]. A potent synthetic mosquito attractant namely Ifakara blend 1 (hereafter referred to as blend IB1) was used to evaluate open glass vials, LDPE and nylon as delivery tools. Nylon strips impregnated with blend IB1 attracted 5.83 and 1.78 times more Anopheles gambiae Giles sensu stricto (hereafter referred to as An. gambiae) mosquitoes than solutions of attractants dispensed from glass vials and LDPE sachets, respectively [3]. However, in the case of nylon strips each chemical component of the attractant was applied at its optimal concentration whereas such optimization had not been implemented in advance for LDPE sachets.\nIn this study we re-evaluated the suitability of nylon versus LDPE as materials for dispensing synthetic mosquito attractants. We pursued four specific aims i.e. 
(i) comparison of nylon strips and LDPE sachets as materials for releasing synthetic mosquito attractants, (ii) assessment of the residual activity of attractant-baited nylon strips and LDPE sachets on host-seeking mosquitoes, (iii) determination of the effect of LDPE sheet thickness on attraction of mosquitoes to synthetic attractants, and (iv) comparison of surface area effects on attraction of mosquitoes to attractants administered through nylon strips versus LDPE sachets.", "The study was carried out at the Thomas Odhiambo Campus of the International Centre of Insect Physiology and Ecology (icipe) located near Mbita Point Township in western Kenya between April 2010 and January 2011.\n Mosquitoes The Mbita strain of An. gambiae was used for all experiments. For maintenance of this strain, mosquito eggs were placed in plastic trays containing filtered water from Lake Victoria. Larvae were fed on Tetramin® baby fish food three times per day. Pupae were collected daily, put in clean cups half-filled with filtered lake water and then placed in mesh-covered cages (30 × 30 × 30 cm). Emerging adult mosquitoes were fed on 6% glucose solution.\n General procedures The experiments were conducted under semi-field conditions in a screen-walled greenhouse measuring 11 m × 7 m × 2.8 m, with the roof apex standing 3.4 m high. Four treatments including two negative controls were evaluated in each experimental run. 
A total of 200 adult female mosquitoes aged 3–5 days old were utilized for individual bioassays conducted between 20:00 and 06:30 h. The mosquitoes were starved for 8 h with no prior access to blood meals. Only water presented on cotton towels on top of mosquito holding cups was provided. Mosquitoes attracted to each treatment were sampled using MM-X traps (American Biophysics, North Kingstown, RI, USA). The nylon strips and LDPE sachets were suspended inside the plume tubes of separate traps where a fan blew air over them to expel the attractant plume as indicated in our previous study [3]. Latex gloves were worn when hanging odour dispensers in the traps to avoid contamination.\nTrap positions were rotated to minimise positional effects. The traps were placed 1 m away from the edges of the greenhouse [4-6]. Each trap was marked and used for one specific treatment throughout the experiments. The number of mosquitoes collected per trap was counted and used both as an estimate for the attractiveness of the baits and an indicator for the suitability of dispensing materials. Each morning the traps were cleaned using 70% methanol solution. Mosquitoes that were not trapped were recaptured from the greenhouse using manual aspirators and killed. Temperature and relative humidity in the greenhouse were recorded using data loggers (Tinytag®). Whereas all experiments were conducted for 12 nights, responses of mosquitoes to residual release from attractant-treated nylon strips were evaluated for 40 nights and repeated three times.\n Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets A 4 × 4 Latin square experimental design was conducted incorporating LDPE sachets filled with IB1, IB1-treated nylon strips, LDPE sachets filled with water (hereafter termed control LDPE sachets) and water-treated nylon strips (hereafter termed control nylon strips) as treatments. Sheet thicknesses of LDPE sachets each measuring 2.5 cm × 2.5 cm (surface area 12.5 cm2) were optimized for individual chemical components of blend IB1 [7]. 
These were 0.2 mm (distilled water, propionic, butanoic, pentanoic, and 3-methylbutanoic acid), 0.1 mm (heptanoic and octanoic acid), 0.05 mm (lactic acid) and 0.03 mm (tetradecanoic acid and ammonia solution). Depending on treatment, LDPE sachets were filled with either 1 ml of the attractant compound or solvent. Individual nylon strips measuring 26.5 cm × 1 cm (surface area 53 cm2) were separately soaked in 1 ml of each of the chemical constituents of blend IB1 at their optimal concentrations [3,7]. The strips were air-dried at room temperature for 5 h before the start of experiments. Whereas attractant-treated nylon strips were freshly prepared each day, LDPE sachets filled with IB1 were re-used throughout the 12 days of the study and replaced upon leakage or depletion of individual components. Carbon dioxide, produced from 250 g of sucrose dissolved in 2 l of tap water containing 17.5 g of yeast [3,5,8], was supplied through silicon gas tubing at a flow rate of approximately 63 ml/min into traps baited with IB1-treated nylon strips or LDPE sachets filled with IB1 only, and not with control nylon or LDPE sachets. Individual LDPE sachets containing chemicals were weighed before and after each experiment to determine how much of the individual components of the blend had been released. Control LDPE sachets and LDPE sachets filled with IB1 were stored in the refrigerator at 4 °C between experimental runs.\n Residual activity of attractant-treated nylon strips on host seeking mosquitoes In our previous study we noted the potential disadvantage of nylon strips, i.e. that they tend to dry up quickly so no more active ingredient may be available following long hours of trap operation [3]. We designed experiments aimed at addressing this shortcoming. A 4 × 4 Latin square experimental design was used to evaluate residual attraction of An. gambiae to IB1-treated nylon strips and LDPE sachets filled with IB1. 
The four treatments included (i) LDPE sachets filled with IB1, (ii) IB1-treated nylon strips, (iii) control LDPE sachets and (iv) control nylon strips. The number of mosquitoes attracted to each treatment over a period of 40 nights was recorded daily, and the proportions trapped were calculated. The experiment was replicated three times. Analysis of the data revealed no need to prepare fresh nylon strips daily; thus, nylon strips were re-used in subsequent experiments. Whereas control LDPE sachets and IB1-filled LDPE sachets were also re-used, individual sachets were replenished upon depletion of their contents. Sachets containing butanoic, pentanoic, 3-methylbutanoic, heptanoic and octanoic acid were replaced every 10–14 nights.

Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae

Direct exposure of IB1-treated nylon to environmental conditions may have led to higher release rates of attractant volatiles, resulting in more mosquitoes being attracted relative to LDPE sachets of optimal sheet thicknesses containing the same attractants. We hypothesized that increasing the release rates of all components in the blend, by dispensing them in IB1-filled LDPE sachets of 0.03 mm sheet thickness (hereafter indicated as 0.03 mm-LDPE or 0.03 mm-sachet), could enhance the numbers of mosquitoes attracted. A sheet thickness of 0.03 mm was selected because it was the thinnest available LDPE material and had been used in our previous investigations [7]. This hypothesis was tested by comparing An. gambiae capture rates with sachets of optimized, variable thicknesses versus uniform 0.03 mm sachets. The sachets were weighed daily before and after each experiment to verify differences in volatile release rates. The carbon dioxide component of the blend was delivered separately through silicone tubing. A randomised 4 × 4 Latin square experimental design was adopted. The treatments included (a) LDPE sachets with optimized sheet thicknesses for all components of IB1, (b) each component of IB1 dispensed in LDPE sachets of 0.03 mm sheet thickness, (c) control LDPE sachets with optimal sheet thicknesses for all components of IB1, and (d) control LDPE sachets of 0.03 mm sheet thickness.

Response of mosquitoes to attractants applied on nylon versus 0.03 mm LDPE sachets

In addition to investigating the effect of volatile release rates on mosquito behaviour, we compared the numbers of An. gambiae mosquitoes attracted to blend IB1 dispensed in LDPE sachets of uniform sheet thickness (0.03 mm) versus applied on nylon strips. The following treatments were tested: (a) IB1-treated nylon strips, (b) each component of IB1 dispensed in 0.03 mm-LDPE sachets, (c) control nylon strips and (d) control 0.03 mm-LDPE sachets. A randomised 4 × 4 Latin square experimental design was adopted. The sachets and nylon strips had surface areas of 12.5 cm² and 53 cm², respectively.

Effects of dispenser surface area on attraction of mosquitoes

As the higher mosquito catches associated with IB1-treated nylon strips could not be explained by the strips being freshly treated prior to each experiment, we tested whether variations in mosquito catches were due to differences in surface area. The LDPE sachets and nylon strips used in the previous experiments of this study had surface areas of 12.5 cm² and 53 cm², respectively; thus, the strips released odorants over a larger surface area than the LDPE sachets. We designed two sets of 4 × 4 Latin square experiments to test whether the larger surface area of the nylon strips was responsible for the higher mosquito catches. The four treatments included (a) IB1-treated nylon strips, (b) LDPE sachets filled with IB1, (c) control nylon strips and (d) control LDPE sachets. Enlarged LDPE sachets were likewise filled with 1 ml of attractant or solvent. In the first set of experiments, the surface areas of control and attractant-filled LDPE sachets were enlarged (2.5 cm wide × 10.6 cm long × 2 sides of the sachet) to equal the surface area of the nylon strips. In the second set of experiments, a piece of absorbent material (a nylon strip) was placed inside the enlarged (53 cm²) control and attractant-filled LDPE sachets to ensure that blend IB1 was spread evenly over the entire inner surface of the sachets. Each set of experiments was replicated 12 times. All other experimental procedures were similar to those described in previous sections.

Data analysis

The relative efficacy of each treatment was defined as the percentage of female mosquitoes caught in the traps containing either of the two release materials impregnated or filled with synthetic attractants or solvent. To investigate the effect of residual activity of attractant-treated materials on capture rates, we used the baseline-category logit model [9]. The nominal response variable was defined as the attractant type with four categories (IB1-containing LDPE sachets, IB1-treated nylon strips, control nylon strips, and control LDPE sachets), with day and trap position as covariates. We estimated the odds that mosquitoes chose other attractants instead of IB1-treated nylon strips over time, while adjusting for trap position.
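The core quantity of a baseline-category logit model, the log-odds of each response category relative to the baseline (here, IB1-treated nylon strips), can be illustrated with a few lines of Python. The counts below are hypothetical, not the study's data, and a full model (as fitted in SAS here) would additionally regress these log-odds on day and trap position.

```python
import math

# Hypothetical nightly trap counts (illustrative only; not the study's data).
counts = {
    "IB1-treated nylon strips": 160,   # baseline category
    "IB1-filled LDPE sachets": 38,
    "control nylon strips": 4,
    "control LDPE sachets": 4,
}

baseline = "IB1-treated nylon strips"

# Baseline-category logit: log-odds of choosing category j rather than the
# baseline category; negative values mean the baseline is preferred.
log_odds = {
    cat: math.log(n / counts[baseline])
    for cat, n in counts.items()
    if cat != baseline
}

for cat, lo in log_odds.items():
    print(f"log-odds({cat} vs baseline) = {lo:.2f}")
```

With these illustrative counts all log-odds are negative, mirroring the study's finding that every alternative was chosen less often than the IB1-treated nylon strips.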
The Mann-Whitney U test was used to estimate the effect of the sheet thickness of LDPE sachets on the release rates of IB1 components except carbon dioxide. To investigate the effects of surface area and sheet thickness of the release material on mosquito catches, we fitted a Poisson regression model controlling for trap position. The analyses were performed using SAS v9.2 (SAS Institute Inc.), with tests performed at the 5% significance level.

The Mbita strain of An. gambiae was used for all experiments. For maintenance of this strain, mosquito eggs were placed in plastic trays containing filtered water from Lake Victoria. Larvae were fed on Tetramin® baby fish food three times per day. Pupae were collected daily, put in clean cups half-filled with filtered lake water and then placed in mesh-covered cages (30 × 30 × 30 cm).
Emerging adult mosquitoes were fed on 6% glucose solution.

The experiments were conducted under semi-field conditions in a screen-walled greenhouse measuring 11 m × 7 m × 2.8 m, with the roof apex standing 3.4 m high. Four treatments, including two negative controls, were evaluated in each experimental run. A total of 200 adult female mosquitoes aged 3–5 days old were used for individual bioassays conducted between 20:00 and 06:30 h. The mosquitoes were starved for 8 h with no prior access to blood meals; only water, presented on cotton towels on top of the mosquito holding cups, was provided. Mosquitoes attracted to each treatment were sampled using MM-X traps (American Biophysics, North Kingstown, RI, USA). The nylon strips and LDPE sachets were suspended inside the plume tubes of separate traps, where a fan blew air over them to expel the attractant plume as indicated in our previous study [3]. Latex gloves were worn when hanging odour dispensers in the traps to avoid contamination.

Trap positions were rotated to minimise positional effects. The traps were placed 1 m away from the edges of the greenhouse [4-6]. Each trap was marked and used for one specific treatment throughout the experiments. The number of mosquitoes collected per trap was counted and used both as an estimate of the attractiveness of the baits and as an indicator of the suitability of the dispensing materials. Each morning the traps were cleaned using 70% methanol solution. Mosquitoes that were not trapped were recaptured from the greenhouse using manual aspirators and killed. Temperature and relative humidity in the greenhouse were recorded using data loggers (Tinytag®). Whereas all experiments were conducted for 12 nights, responses of mosquitoes to residual release from attractant-treated nylon strips were evaluated for 40 nights and repeated three times.

Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets

The 12-day period over which experiments were conducted was characterized by a mean temperature of 22.18 ± 0.08 °C and a mean relative humidity of 86.15 ± 1.56% within the screen-walled greenhouse. Out of 2,400 female An. gambiae mosquitoes released, 51.88% (n = 1,235) were caught in the four treatment traps. Of these catches, 77.73%, 18.62%, 1.78% and 1.86% were trapped by IB1-treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets, respectively (Figure 1). Baseline-category logit model results revealed that IB1-impregnated nylon strips attracted, on average, 5.6 times more mosquitoes than LDPE sachets filled with IB1 (P < 0.001). Whereas there was no significant difference in the proportions of mosquitoes attracted to control nylon strips and control LDPE sachets (P = 0.436), these treatments attracted significantly fewer mosquitoes than nylon strips and LDPE sachets containing blend IB1 (P < 0.001). The day effect was not significant (P = 0.056) and was therefore excluded from the final model. However, trap position was an important determinant of mosquito catches (P < 0.001). These experiments provided baseline information for subsequent investigations conducted during the study.

Figure 1. Proportions of mosquitoes caught in MM-X traps containing IB1-treated nylon strips, LDPE sachets filled with blend IB1, control nylon strips and control LDPE sachets. Mean mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.

Residual activity of attractant-treated nylon strips on host-seeking mosquitoes

A total of 11,851 (49.38%) mosquitoes were attracted and collected over 120 nights (three replicates of 40 nights each). The proportions of mosquitoes caught over time differed among treatments (P < 0.001) (Figure 2). Attractant-treated nylon strips repeatedly trapped the highest proportion of mosquitoes, without re-application of the attractant blend, up to 40 days post-treatment.
During this period, the treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets attracted 76.95% (n = 9,120), 18.59% (n = 2,203), 2.17% (n = 257) and 2.29% (n = 271) of the mosquitoes, respectively. There was also a significant increase over time in the proportion of mosquitoes choosing LDPE sachets filled with IB1 (P < 0.001), but not for control nylon strips (P = 0.051) or control LDPE sachets (P = 0.071). In contrast, the numbers of mosquitoes attracted to IB1-impregnated nylon strips decreased considerably over time (P < 0.002). However, they were consistently preferred to LDPE sachets filled with IB1 (Figure 2).

Figure 2. Proportions of mosquitoes caught over time in traps containing IB1-treated nylon strips (―), LDPE sachets filled with blend IB1 (−−--), control nylon strips (—♦--) and control LDPE sachets (− × − × −). Lines and symbols representing mosquito catches due to control nylon strips and control LDPE sachets are superimposed on each other. Open (IB1-treated nylon strips) and closed circles (LDPE sachets filled with IB1) represent observed values. Lines represent the baseline-category logit model fit, showing trends in the proportions of mosquitoes attracted over time.

Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae

Here, LDPE sachets with sheet thickness optimized [7] or kept uniform (0.03 mm) for each chemical constituent of the attractant were evaluated. Out of 2,400 mosquitoes released, 51.17% were trapped (Table 1). Whereas trap position was not a significant factor (P = 0.183), attraction of mosquitoes to the different traps was influenced by LDPE sheet thickness (P < 0.001). Delivery of attractant components through sachets with optimized sheet thicknesses resulted in significantly higher mosquito catches than uniform 0.03 mm-sachets (P < 0.001). There was no difference in mosquito catches between the two types of control LDPE sachets (P = 0.111).

Table 1. Effect of polyethylene sheet thickness on attraction of An. gambiae to attractant-baited sachets. N refers to the number of replicates and n to the total number of mosquitoes trapped.
Mean (±S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).

The effect of porosity, due to differences in the sheet thickness of LDPE sachets, on the release rates of the various chemicals emitted from blend IB1 was also investigated. Mann-Whitney U tests indicated that sheet thickness had a significant effect on the release rates of propionic acid, pentanoic acid, heptanoic acid, distilled water and lactic acid (P = 0.04, 0.03, 0.02, 0.01 and 0.02, respectively). However, the release rates of butanoic acid, 3-methylbutanoic acid, octanoic acid, tetradecanoic acid, and ammonia did not depend on the sheet thickness of the LDPE sachets (P = 0.722, 0.97, 0.30, 0.23, and 0.87, respectively) (Figure 3).

Figure 3. Effect of LDPE sheet thickness on release rates of the chemical constituents of the mosquito attractant Ifakara blend 1 (IB1). Release rates are shown for sachets whose sheet thickness had been optimised for each chemical component of the blend (open bars) or kept uniform (0.03 mm sheets) for all constituents (shaded bars). The optimised LDPE sheet thicknesses were 0.2 mm [distilled water (H2O), propionic (C3), butanoic (C4), pentanoic (C5), and 3-methylbutanoic acid (3MC4)], 0.1 mm [heptanoic (C7) and octanoic acid (C8)], 0.05 mm [lactic acid (LA)] and 0.03 mm [tetradecanoic acid (C14) and ammonia solution (NH3)]. Odour release rates represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean odour release rates, measured in ng/h.

Response of mosquitoes to attractant-treated nylon versus thin-sheeted polyethylene sachets

Additional studies confirmed that mosquitoes preferred attractant-treated nylon strips to attractants contained in 0.03 mm-LDPE sachets (P < 0.001). Overall, 49.63% (n = 1,191) of the released mosquitoes were recaptured. Of these, 84.50%, 11.07%, 2.26%, and 1.68% were found in traps baited with attractant-treated nylon strips, 0.03 mm-LDPE sachets filled with IB1, control nylon strips and control 0.03 mm-LDPE sachets, respectively (Figure 4). The numbers of mosquitoes caught by the control strips and control sachets were not significantly different (P = 0.309).

Figure 4. Proportions of mosquitoes caught by traps containing IB1-treated nylon strips, 0.03 mm-LDPE sachets filled with blend IB1, control 0.03 mm-LDPE sachets and control nylon strips. Mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.

Effects of dispenser surface area on attraction of mosquitoes

The LDPE sachets and nylon strips used to dispense blend IB1 in the preceding experiments of this study had total surface areas of 12.5 cm² and 53 cm², respectively. Follow-up experiments were therefore conducted in which the LDPE sachets were enlarged (2.5 cm × 10.6 cm × 2 sides) to equal the surface area of the nylon strips. Attractant-treated nylon strips caught significantly more mosquitoes than attractants contained in enlarged LDPE sachets both with (P < 0.001) and without an inner lining of absorbent material (P < 0.001) (Table 2). Thus, the higher attraction of mosquitoes to IB1-treated nylon strips was not neutralized by equalizing the surface area or by spreading the attractant uniformly over the inner surface of the LDPE sachets. Mosquito responses to traps containing control nylon strips versus control LDPE sachets, with or without the absorbent nylon material, were not different (P = 0.173 and P = 0.556, respectively). Position had a significant effect on trap catches (P < 0.001 in both cases).

Table 2. Behavioural responses of mosquitoes towards attractant-treated polyethylene sachets lined with nylon versus nylon strips treated with a similar attractant. N refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (±S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).
Thus, higher attraction of mosquitoes to IB1-treated nylon strips was not neutralized by equalized surface area or uniform spread of attractants over the inner surface area of LDPE sachets. Mosquito responses to traps containing control nylon strips versus control LDPE sachets with or without the absorbent nylon material were not different ((P = 0.173 and P = 0.556, respectively). Position had a significant effect on trap catches (P < 0 .001 in both cases).\nBehavioural responses of mosquitoes towards attractant treated polyethylene sachets lined with nylon versus nylon strips treated with a similar attractant\nN refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (±S.E) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).", "The 12-day period over which experiments were conducted was characterized by a mean temperature and relative humidity of 22.18 ± 0.080C and 86.15 ± 1.56%, respectively, within the screen-walled greenhouse. Out of 2,400 female An. gambiae mosquitoes released, 51.88% (n = 1,235) were caught in the four treatment traps. Of these catches, 77.73%, 18.62%, 1.78% and 1.86% were trapped by IB1-treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets, respectively (Figure 1). Baseline-category logit model results revealed that IB1-impregnated nylon strips attracted, on average, 5.6 times more mosquitoes than LDPE sachets filled with IB1 (P < 0.001). Whereas there was no significant difference in the proportion of mosquitoes attracted to control nylon strips and control LDPE sachets (P = 0.436), these treatments attracted significantly fewer mosquitoes than nylon and LDPE sachets containing blend IB1 (P < 0.001). Day effect was not significant (P = 0.056), and was therefore excluded from the final model. However, trap position was an important determinant of mosquito catches (P < 0.001). 
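The baseline-category logit model used above works on the log of each treatment's catch probability relative to a reference category. A minimal sketch of that underlying quantity, using hypothetical catch counts and illustrative treatment names (not the study's data):

```python
import math

# Hypothetical nightly catch counts per treatment (not the study's data)
catches = {
    "nylon_IB1": 160,
    "ldpe_IB1": 38,
    "nylon_control": 4,
    "ldpe_control": 4,
}

baseline = "ldpe_control"  # reference category of the baseline-category logit

# Baseline-category logits: log of each category's count (hence probability)
# relative to the reference category, log(p_j / p_baseline)
logits = {
    trt: math.log(n / catches[baseline])
    for trt, n in catches.items()
    if trt != baseline
}

for trt, logit in logits.items():
    print(f"{trt}: logit vs {baseline} = {logit:.2f}, "
          f"odds ratio = {math.exp(logit):.1f}")
```

A fitted model additionally adjusts these logits for covariates such as trap position and day, which is why a modelled ratio can differ from the raw proportions.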
These experiments provided baseline information for the subsequent investigations conducted during the study.

Figure 1. Proportions of mosquitoes caught in MM-X traps containing IB1-treated nylon strips, LDPE sachets filled with blend IB1, control nylon strips and control LDPE sachets. Mean mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.

Residual activity of attractant-treated nylon strips on host-seeking mosquitoes

A total of 11,851 (49.38%) mosquitoes were attracted and collected over 120 nights (i.e. three replicates of 40 nights each). The proportions of mosquitoes caught over time differed among treatments (P < 0.001) (Figure 2). Attractant-treated nylon strips repeatedly trapped the highest proportion of mosquitoes, without re-application of the attractant blend, up to 40 days post-treatment. During this period the treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets attracted 76.95% (n = 9,120), 18.59% (n = 2,203), 2.17% (n = 257) and 2.29% (n = 271) of the mosquitoes, respectively. There was also a significant increase over time in the proportion of mosquitoes choosing LDPE sachets filled with IB1 (P < 0.001), but not for control nylon strips (P = 0.051) or control LDPE sachets (P = 0.071). In contrast, the number of mosquitoes attracted to IB1-impregnated nylon strips decreased considerably over time (P < 0.002). Nevertheless, the nylon strips were consistently preferred over LDPE sachets filled with IB1 (Figure 2).

Figure 2. Proportions of mosquitoes caught over time in traps containing IB1-treated nylon strips (―), LDPE sachets filled with blend IB1 (−−--), control nylon strips (—♦--) and control LDPE sachets (− × − × −). Lines and symbols representing catches for control nylon strips and control LDPE sachets are superimposed on each other. Open (IB1-treated nylon strips) and closed circles (LDPE sachets filled with IB1) represent observed values.
Lines represent the Baseline-category logit model fit showing trends of proportions of mosquitoes attracted over time.
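The release-rate comparisons between optimised and uniform 0.03 mm sachets earlier in this section relied on Mann–Whitney U tests. A minimal sketch using scipy, with hypothetical release-rate measurements rather than the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical daily release rates (ng/h) of one IB1 component, measured
# from sachets with optimised (0.2 mm) versus uniform (0.03 mm) sheets.
optimised = [410, 395, 388, 402, 417, 399]
uniform = [455, 470, 468, 490, 452, 473]

# Two-sided Mann-Whitney U test: does sheet thickness shift the release rate?
stat, p_value = mannwhitneyu(optimised, uniform, alternative="two-sided")
print(f"U = {stat}, P = {p_value:.4f}")
```

The Mann–Whitney U test compares the rank distributions of the two samples, so it needs no normality assumption, which suits small sets of daily release-rate measurements.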
Discussion

This study demonstrates that nylon strips can act as a sustainable matrix for dispensing synthetic attractants of host-seeking An. gambiae mosquitoes, performing much better than low-density polyethylene (LDPE) sachets. Remarkably, attractant-treated nylon strips continued to attract mosquitoes without re-application of the blend and remained consistently more attractive than LDPE sachets filled with the same attractants over a period of 40 nights post-treatment. The higher catches associated with nylon strips were apparently not due to the smaller surface area of the LDPE sachets, uneven spread of the attractant over their inner surfaces, or LDPE sheet thickness.

The baseline experiments reported herein confirm the findings of our previous studies, in which nylon strips provided a better release matrix for delivering synthetic attractants of host-seeking An. gambiae mosquitoes than LDPE sachets or open glass vials [3].
LDPE and nylon differ in physico-chemical characteristics, such as porosity and chemical binding affinity, that may explain the observed differences in mosquito catches through their effects on the release rates of odorant volatiles [1,10,11]. Although the use of LDPE sachets allows attractant release rates to be adjusted, release rates from nylon have yet to be determined, e.g. through headspace sampling at the trap outlet.

That IB1-treated nylon strips remained consistently more attractive to host-seeking An. gambiae mosquitoes than LDPE sachets filled with the same attractants for up to 40 days post-treatment provides clear evidence of inherent residual activity. This finding corroborates related studies in which nylon stockings impregnated with human emanations remained attractive to An. gambiae mosquitoes for several weeks [12-14]. Blend IB1 impregnated on the nylon strips may have been subject to bacterial degradation over the prolonged experimental period, which may have resulted in the release of additional components beyond those originally present on the strips [15-17]. However, the present study did not investigate the presence of microbes or of additional attractant compounds on ageing IB1-treated nylon strips.

The current study shows that attractant-treated nylon strips can be re-used for at least 40 consecutive days as baits for host-seeking An. gambiae mosquitoes, thereby reducing the costs of odorants and nylon strips, as well as the time and labour needed to prepare fresh baits. These attributes are consistent with those of long-lasting fabric materials impregnated with mosquito repellents or insecticides [18,19]. The availability of long-lasting mosquito-attractant fabrics is interesting, as these could potentially be combined with mosquito pathogens such as entomopathogenic fungi or bacteria [20]. Thus, a cheap and effective tool for intercepting and eliminating host-seeking mosquitoes could be exploited for vector-borne disease control.
However, further testing is needed to establish the maximal duration of residual activity of the attractant-treated strips.

Contrary to our expectations, LDPE sachets optimized for release rates and surface area caught fewer mosquitoes than nylon strips. The release rates of some compounds (propionic, pentanoic, heptanoic and lactic acid, and water) increased significantly when uniformly thinner-sheeted sachets were used. Because the sheet thickness of LDPE sachets determines volatile release rates, the composition of the released blend may have changed in a way that reduced its attractiveness to An. gambiae mosquitoes [1,21]. We conclude that blend ratio and concentration affect the orientation and capture rates of insect vectors in odour-baited systems [22,23].

Although LDPE sachets have been used effectively to release attractants for tsetse flies and other insect pests [1,2], they attracted fewer mosquitoes than nylon strips when both were treated or filled with the same blend of attractants. This could be explained by differences in the optimized sheet thicknesses of the LDPE sachets, and in the physical and chemical characteristics of the odorants, used for attracting tsetse flies versus mosquitoes [24]. Moreover, the trap designs used to collect the two insect vectors also differed [3,25,26]. Delivery of synthetic attractant components through sachets with standardized sheet thickness and surface area has yielded consistent mosquito catches under laboratory and semi-field conditions [17]. Whereas nylon strips were associated with higher mosquito catches, we currently lack information on the release rates of the odorants they dispensed.
Accurate release rates have been established for odorants delivered through LDPE sachets [4]; such chemical measurements should also be carried out for nylon, as they would allow a direct comparison of the aerial odorant concentrations that host-seeking mosquitoes actually encounter.

Conclusions

This study demonstrates that nylon strips present a potent and sustainable release material for dispensing synthetic mosquito attractants. Attractant-treated nylon strips can apparently be used over prolonged periods without re-application of the attractant blend. Treatment of nylon surfaces with attractants presents an opportunity for use in long-lasting odour-baited devices for sampling, surveillance and control of disease vectors.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

WRM, CKM, WT and RCS designed the study; CKM and PO conducted the research; BO and WRM analysed the data; CKM, WRM, RCS, WT and JJAvL wrote the paper. All authors read and approved the final manuscript.
Keywords: Mosquito, Trapping, Attractant, Odour release system
Background: The effectiveness of odour-baited tools for sampling, surveillance and control of insect vectors is strongly influenced by the selected odour delivery device [1,2]. Low-density polyethylene (LDPE) materials have proved useful because odour baits are released at predictable rates and do not need to be replenished over prolonged periods of time [1,3]. However, these attributes may not guarantee maximal mosquito trap catches without prior optimization of sheet thickness and surface area [1,2,4]. Since LDPE sachets are also prone to leakage, further searches for slow-release materials and techniques are warranted to achieve optimal release of odorants. In a previous eight-day study we reported on the efficacy of nylon fabric (90% polyamide and 10% spandex) as a tool for dispensing odours [3]. A potent synthetic mosquito attractant, Ifakara blend 1 (hereafter blend IB1), was used to evaluate open glass vials, LDPE sachets and nylon as delivery tools. Nylon strips impregnated with blend IB1 attracted 5.83 and 1.78 times more Anopheles gambiae Giles sensu stricto (hereafter An. gambiae) mosquitoes than solutions of attractants dispensed from glass vials and LDPE sachets, respectively [3]. However, in the case of the nylon strips each chemical component of the attractant was applied at its optimal concentration, whereas no such optimization had been implemented in advance for the LDPE sachets. In this study we therefore re-evaluated the suitability of nylon versus LDPE as materials for dispensing synthetic mosquito attractants. We pursued four specific aims, i.e.
(i) comparison of nylon strips and LDPE sachets as materials for releasing synthetic mosquito attractants; (ii) assessment of the residual activity of attractant-baited nylon strips and LDPE sachets on host-seeking mosquitoes; (iii) determination of the effect of LDPE sheet thickness on the attraction of mosquitoes to synthetic attractants; and (iv) comparison of surface area effects on the attraction of mosquitoes to attractants administered through nylon strips versus LDPE sachets.

Methods: The study was carried out at the Thomas Odhiambo Campus of the International Centre of Insect Physiology and Ecology (icipe), located near Mbita Point Township in western Kenya, between April 2010 and January 2011.

Mosquitoes

The Mbita strain of An. gambiae was used for all experiments. To maintain this strain, mosquito eggs were placed in plastic trays containing filtered water from Lake Victoria. Larvae were fed on Tetramin® baby fish food three times per day. Pupae were collected daily, put in clean cups half-filled with filtered lake water and placed in mesh-covered cages (30 × 30 × 30 cm). Emerging adult mosquitoes were fed on a 6% glucose solution.

General procedures

The experiments were conducted under semi-field conditions in a screen-walled greenhouse measuring 11 m × 7 m × 2.8 m, with the roof apex standing 3.4 m high. Four treatments, including two negative controls, were evaluated in each experimental run. A total of 200 adult female mosquitoes aged 3–5 days were used for each bioassay, conducted between 20:00 and 06:30 h. The mosquitoes were starved for 8 h with no prior access to blood meals; only water, presented on cotton towels on top of the mosquito holding cups, was provided. Mosquitoes attracted to each treatment were sampled using MM-X traps (American Biophysics, North Kingstown, RI, USA). The nylon strips and LDPE sachets were suspended inside the plume tubes of separate traps, where a fan blew air over them to expel the attractant plume, as in our previous study [3]. Latex gloves were worn when hanging odour dispensers in the traps to avoid contamination. Trap positions were rotated to minimise positional effects, and the traps were placed 1 m away from the edges of the greenhouse [4-6]. Each trap was marked and used for one specific treatment throughout the experiments. The number of mosquitoes collected per trap was counted and used both as an estimate of the attractiveness of the baits and as an indicator of the suitability of the dispensing materials. Each morning the traps were cleaned using a 70% methanol solution. Mosquitoes that were not trapped were recaptured from the greenhouse using manual aspirators and killed. Temperature and relative humidity in the greenhouse were recorded using data loggers (Tinytag®). Whereas all other experiments were conducted for 12 nights, responses of mosquitoes to residual release from attractant-treated nylon strips were evaluated for 40 nights and repeated three times.

Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets

A 4 × 4 Latin square experimental design was used, incorporating LDPE sachets filled with IB1, IB1-treated nylon strips, LDPE sachets filled with water (hereafter control LDPE sachets) and water-treated nylon strips (hereafter control nylon strips) as treatments. The sheet thicknesses of the LDPE sachets, each measuring 2.5 cm × 2.5 cm (surface area 12.5 cm2), were optimized for the individual chemical components of blend IB1 [7].
These were 0.2 mm (distilled water, propionic, butanoic, pentanoic and 3-methylbutanoic acid), 0.1 mm (heptanoic and octanoic acid), 0.05 mm (lactic acid) and 0.03 mm (tetradecanoic acid and ammonia solution). Depending on treatment, LDPE sachets were filled with 1 ml of either the attractant compound or the solvent. Individual nylon strips measuring 26.5 cm × 1 cm (surface area 53 cm2) were separately soaked in 1 ml of each chemical constituent of blend IB1 at its optimal concentration [3,7]. The strips were air-dried at room temperature for 5 h before the start of experiments. Whereas attractant-treated nylon strips were freshly prepared each day, LDPE sachets filled with IB1 were re-used throughout the 12 days of the study and replaced upon leakage or depletion of individual components. Carbon dioxide, produced from 250 g of sucrose dissolved in 2 l of tap water containing 17.5 g of yeast [3,5,8], was supplied through silicon gas tubing at a flow rate of approximately 63 ml/min into the traps baited with IB1-treated nylon strips or with LDPE sachets filled with IB1, but not into the control traps. Individual LDPE sachets containing chemicals were weighed before and after each experiment to determine how much of each component of the blend had been released. Control LDPE sachets and LDPE sachets filled with IB1 were stored in a refrigerator at 4°C between experimental runs.

Residual activity of attractant-treated nylon strips on host-seeking mosquitoes

In our previous study we noted a potential disadvantage of nylon strips, i.e. that they tend to dry up quickly, so that no active ingredient may remain after long hours of trap operation [3]. We designed experiments to address this shortcoming. A 4 × 4 Latin square experimental design was used to evaluate residual attraction of An. gambiae to IB1-treated nylon strips and LDPE sachets filled with IB1.
The four treatments included (i) LDPE sachets filled with IB1, (ii) IB1-treated nylon strips, (iii) control LDPE sachets and (iv) control nylon strips. The number of mosquitoes attracted to each treatment over a period of 40 nights was recorded daily, and the proportions trapped were calculated. The experiment was replicated three times. Analysis of the data revealed no need to prepare fresh nylon strips daily; thus, nylon strips were re-used in subsequent experiments. Control LDPE sachets and IB1-filled LDPE sachets were also re-used, with individual sachets replenished upon depletion of their contents. Sachets containing butanoic, pentanoic, 3-methylbutanoic, heptanoic and octanoic acid were replaced every 10–14 nights.

Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae

Direct exposure of IB1-treated nylon to environmental conditions may have led to higher release rates of attractant volatiles, resulting in more mosquitoes being attracted relative to LDPE sachets of optimal sheet thickness containing the same attractants. We hypothesized that increasing the release rates of all components of the blend, by using IB1-filled LDPE sachets of 0.03 mm sheet thickness for every component (hereafter 0.03 mm-LDPE or 0.03 mm-sachet), could increase the number of mosquitoes attracted. A sheet thickness of 0.03 mm was selected because it was the thinnest available LDPE material and had been used in our previous investigations [7]. This hypothesis was tested by comparing An. gambiae capture rates for sachets of optimized, variable thickness versus 0.03 mm sachets. The sachets were weighed daily before and after each experiment to verify differences in volatile release rates. The carbon dioxide component of the blend was delivered separately through silicon tubing. A randomised 4 × 4 Latin square experimental design was adopted. The treatments included (a) LDPE sachets with optimized sheet thicknesses for all components of IB1, (b) each component of IB1 dispensed in LDPE sachets of 0.03 mm sheet thickness, (c) control LDPE sachets with optimal sheet thicknesses for all components of IB1, and (d) control LDPE sachets of 0.03 mm sheet thickness.

Response of mosquitoes to attractants applied on nylon versus 0.03 mm LDPE sachets

In addition to investigating the effect of volatile release rates on mosquito behaviour, we compared the numbers of An. gambiae attracted to IB1 dispensed in LDPE sachets of uniform sheet thickness (0.03 mm) versus IB1 applied on nylon strips. The following treatments were tested: (a) IB1-treated nylon strips, (b) each component of IB1 dispensed in 0.03 mm-LDPE sachets, (c) control nylon strips and (d) control 0.03 mm-LDPE sachets. A randomised 4 × 4 Latin square experimental design was adopted. The sachets and nylon strips had surface areas of 12.5 cm2 and 53 cm2, respectively.

Effects of dispenser surface area on attraction of mosquitoes

As the higher mosquito catches associated with IB1-treated nylon strips could not be explained by the strips being freshly treated before each experiment, we tested whether the variation in catches was due to differences in surface area. The LDPE sachets and nylon strips used in the previous experiments of this study had surface areas of 12.5 cm2 and 53 cm2, respectively; thus, the strips released odorants over a larger surface area than the sachets. We designed two sets of 4 × 4 Latin square experiments to test whether the larger surface area of the nylon strips was responsible for their higher mosquito catches. The four treatments included (a) IB1-treated nylon strips, (b) LDPE sachets filled with IB1, (c) control nylon strips and (d) control LDPE sachets. Enlarged LDPE sachets were likewise filled with 1 ml of attractant or solvent. In the first set of experiments, the surface areas of control and attractant-filled LDPE sachets were enlarged (2.5 cm wide × 10.6 cm long × 2 sides) to equal the surface area of the nylon strips. In the second set, a piece of absorbent material (a nylon strip) was placed inside the enlarged (53 cm2) control and attractant-filled LDPE sachets to ensure that blend IB1 was spread evenly over the entire inner surface. Each set of experiments was replicated 12 times. All other experimental procedures were similar to those described in previous sections.
The LDPE sachets and nylon strips used in previous experiments of this study had surface areas of 12.5 cm2 and 53 cm2, respectively. Thus, the strips released odorants over a larger surface area than the LDPE sachets. We designed two sets of 4 × 4 Latin square experiments to test whether the larger surface area of nylon strips was responsible for the higher mosquito catches. The four treatments included (a) IB1-treated nylon strips, (b) LDPE sachets filled with IB1, (c) control nylon strips and (d) control LDPE sachets. Enlarged LDPE sachets were similarly filled with one ml of attractant or solvent. In the first set of experiments, surface areas of control and attractant-filled LDPE sachets were enlarged (2.5 cm wide × 10.6 cm long × 2 sides of the sachet) to equal the surface area of nylon strips. In the second set of experiments, a piece of absorbent material (nylon strip) was placed inside enlarged (53 cm2) control and attractant-filled LDPE sachets to ensure that blend IB1 was evenly spread over the entire inner surface of the sachets. Each set of experiments was replicated 12 times. All other experimental procedures were similar to those described in previous sections. Data analysis The relative efficacy of each treatment was defined as a percentage of female mosquitoes caught in the traps containing either of the two release materials impregnated or filled with synthetic attractants or solvent. In order to investigate the effect of residual activity of attractant-treated materials on capture rates, we used the baseline-category logit model [9]. The nominal response variable was defined as the attractant type with four categories: IB1-containing LDPE sachets, IB1-treated nylon strips, control nylon strips, and control LDPE sachets with day and trap position as covariates. We estimated the odds that mosquitoes chose other attractants instead of IB1-treated nylon strips over time, while adjusting for trap-position. 
The Mann Whitney-U test was used to estimate the effect of sheet thickness of LDPE sachets on release rates of IB1 components except carbon dioxide. To investigate the effect of surface area and sheet thickness on the release material on mosquito catches, we fitted a Poisson regression model controlling for trap position. The analyses were performed using SAS v9.2 (SAS Institute Inc.) with tests performed at 5% level. The relative efficacy of each treatment was defined as a percentage of female mosquitoes caught in the traps containing either of the two release materials impregnated or filled with synthetic attractants or solvent. In order to investigate the effect of residual activity of attractant-treated materials on capture rates, we used the baseline-category logit model [9]. The nominal response variable was defined as the attractant type with four categories: IB1-containing LDPE sachets, IB1-treated nylon strips, control nylon strips, and control LDPE sachets with day and trap position as covariates. We estimated the odds that mosquitoes chose other attractants instead of IB1-treated nylon strips over time, while adjusting for trap-position. The Mann Whitney-U test was used to estimate the effect of sheet thickness of LDPE sachets on release rates of IB1 components except carbon dioxide. To investigate the effect of surface area and sheet thickness on the release material on mosquito catches, we fitted a Poisson regression model controlling for trap position. The analyses were performed using SAS v9.2 (SAS Institute Inc.) with tests performed at 5% level. Mosquitoes: The Mbita strain of An. gambiae was used for all experiments. For maintenance of this strain, mosquito eggs were placed in plastic trays containing filtered water from Lake Victoria. Larvae were fed on Tetramin® baby fish food three times per day. Pupae were collected daily, put in clean cups half-filled with filtered lake water and then placed in mesh-covered cages (30 × 30 × 30 cm). 
Emerging adult mosquitoes were fed on 6% glucose solution. General procedures: The experiments were conducted under semi-field conditions in a screen-walled greenhouse measuring 11 m × 7 m × 2.8 m, with the roof apex standing 3.4 m high. Four treatments, including two negative controls, were evaluated in each experimental run. A total of 200 adult female mosquitoes aged 3–5 days were used for individual bioassays conducted between 20:00 and 06:30 h. The mosquitoes were starved for 8 h beforehand with no prior access to blood meals; only water presented on cotton towels on top of the mosquito holding cups was provided. Mosquitoes attracted to each treatment were sampled using MM-X traps (American Biophysics, North Kingstown, RI, USA). The nylon strips and LDPE sachets were suspended inside the plume tubes of separate traps, where a fan blew air over them to expel the attractant plume as described in our previous study [3]. Latex gloves were worn when hanging odour dispensers in the traps to avoid contamination. Trap positions were rotated to minimise positional effects. The traps were placed 1 m away from the edges of the greenhouse [4-6]. Each trap was marked and used for one specific treatment throughout the experiments. The number of mosquitoes collected per trap was counted and used both as an estimate of the attractiveness of the baits and as an indicator of the suitability of the dispensing materials. Each morning the traps were cleaned using 70% methanol solution. Mosquitoes that were not trapped were recaptured from the greenhouse using manual aspirators and killed. Temperature and relative humidity in the greenhouse were recorded using data loggers (Tinytag®). Whereas all experiments were conducted for 12 nights, responses of mosquitoes to residual release from attractant-treated nylon strips were evaluated for 40 nights and repeated three times.
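The 4 × 4 Latin square rotation used throughout these experiments (four treatments, four trap positions, rotated so that every treatment occupies every position) can be sketched as follows. This is our own illustration of the design, not the authors' actual schedule; the treatment labels and the cyclic-shift construction are assumptions for the example.

```python
# Sketch of a cyclic 4x4 Latin square assigning four treatments to four
# trap positions: over four nights, every treatment occupies every
# position exactly once, cancelling positional effects.
TREATMENTS = [
    "IB1 nylon strip",
    "IB1 LDPE sachet",
    "control nylon strip",
    "control LDPE sachet",
]

def latin_square_schedule(treatments):
    """Return schedule[night][position] -> treatment, by cyclic rotation."""
    n = len(treatments)
    return [[treatments[(night + pos) % n] for pos in range(n)]
            for night in range(n)]

schedule = latin_square_schedule(TREATMENTS)

# Each night uses every treatment once...
for night in schedule:
    assert sorted(night) == sorted(TREATMENTS)
# ...and each position sees every treatment exactly once across nights.
for pos in range(4):
    assert sorted(row[pos] for row in schedule) == sorted(TREATMENTS)
```

Any Latin square (not only the cyclic one) satisfies these two balance properties; the cyclic construction is simply the easiest to generate.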
Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets: A 4 × 4 Latin square experimental design was used, incorporating LDPE sachets filled with IB1, IB1-treated nylon strips, LDPE sachets filled with water (hereafter termed control LDPE sachets) and water-treated nylon strips (hereafter termed control nylon strips) as treatments. Sheet thicknesses of LDPE sachets, each measuring 2.5 cm × 2.5 cm (surface area 12.5 cm²), were optimized for the individual chemical components of blend IB1 [7]. These were 0.2 mm (distilled water, propionic, butanoic, pentanoic, and 3-methylbutanoic acid), 0.1 mm (heptanoic and octanoic acid), 0.05 mm (lactic acid) and 0.03 mm (tetradecanoic acid and ammonia solution). Depending on treatment, LDPE sachets were filled with either 1 ml of the attractant compound or solvent. Individual nylon strips measuring 26.5 cm × 1 cm (surface area 53 cm²) were separately soaked in 1 ml of each of the chemical constituents of blend IB1 at their optimal concentrations [3,7]. The strips were air-dried at room temperature for 5 h before the start of experiments. Whereas attractant-treated nylon strips were freshly prepared each day, LDPE sachets filled with IB1 were re-used throughout the 12 days of the study and replaced upon leakage or depletion of individual components. Carbon dioxide, produced from 250 g of sucrose dissolved in 2 l of tap water containing 17.5 g of yeast [3,5,8], was supplied through silicon gas tubing at a flow rate of approximately 63 ml/min into traps baited with IB1-treated nylon strips or LDPE sachets filled with IB1, but not into traps with control nylon strips or control LDPE sachets. Individual LDPE sachets containing chemicals were weighed before and after each experiment to determine how much of the individual components of the blend had been released. Control LDPE sachets and LDPE sachets filled with IB1 were stored in a refrigerator at 4°C between experimental runs.
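The before-and-after weighings described above yield each component's mean release rate as mass loss divided by exposure time. A minimal sketch of that arithmetic follows; the masses and exposure time below are hypothetical numbers for illustration, not measurements from the study.

```python
# Sketch (hypothetical numbers): estimating a sachet's volatile release
# rate from before/after weighings. Masses in grams, exposure in hours;
# the study reports release rates in ng/h.
def release_rate_ng_per_h(mass_before_g, mass_after_g, hours):
    """Mean release rate over the exposure period, in ng/h."""
    loss_g = mass_before_g - mass_after_g
    return loss_g * 1e9 / hours  # 1 g = 1e9 ng

# e.g. a sachet losing 0.5 mg over one night of trap operation (10.5 h):
rate = release_rate_ng_per_h(10.2405, 10.2400, 10.5)
```

Weighing before and after each run, as the authors did, averages out any within-night variation in emission; only the mean rate over the exposure period is recoverable this way.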
Residual activity of attractant-treated nylon strips on host seeking mosquitoes: In our previous study we noted a potential disadvantage of nylon strips, i.e. that they tend to dry up quickly, so that no active ingredient may remain available after long hours of trap operation [3]. We designed experiments aimed at addressing this shortcoming. A 4 × 4 Latin square experimental design was used to evaluate residual attraction of An. gambiae to IB1-treated nylon strips and LDPE sachets filled with IB1. The four treatments included (i) LDPE sachets filled with IB1, (ii) IB1-treated nylon strips, (iii) control LDPE sachets and (iv) control nylon strips. The number of mosquitoes attracted to each treatment over a period of 40 nights was recorded daily and the proportions trapped were calculated. The experiment was replicated three times. Analysis of the data revealed no need to prepare fresh nylon strips daily; thus, nylon strips were re-used in subsequent experiments. Whereas control LDPE sachets and IB1-filled LDPE sachets were also re-used, individual sachets were replenished upon depletion of contents. Sachets containing butanoic, pentanoic, 3-methylbutanoic, heptanoic and octanoic acid were replaced after every 10–14 nights. Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae: Direct exposure of IB1-treated nylon to environmental conditions may have led to higher release rates of attractant volatiles, resulting in more mosquitoes being attracted relative to LDPE sachets of optimal sheet thicknesses containing the same attractants. We hypothesized that increasing the release rates of all components in the blend by using IB1-filled LDPE sachets of 0.03 mm sheet thickness (hereafter indicated as 0.03 mm-LDPE or 0.03 mm-sachet) could enhance the numbers of mosquitoes attracted.
A sheet thickness of 0.03 mm was selected because it was the thinnest available LDPE material and had been used in our previous investigations [7]. This hypothesis was tested by comparing An. gambiae capture rates with sachets of component-optimized thicknesses versus uniform 0.03 mm sachets. The sachets were weighed daily before and after each experiment to verify differences in volatile release rates. The carbon dioxide component of the blend was delivered separately through silicon tubing. A randomised 4 × 4 Latin square experimental design was adopted. The treatments included (a) LDPE sachets with optimized sheet thicknesses for all components of IB1, (b) each component of IB1 dispensed in LDPE sachets of 0.03 mm sheet thickness, (c) control LDPE sachets with optimal sheet thicknesses for all components of IB1, and (d) control LDPE sachets with 0.03 mm sheet thickness. Response of mosquitoes to attractants applied on nylon versus 0.03 mm LDPE sachets: In addition to investigating the effect of volatile release rates on mosquito behaviour, we compared the numbers of An. gambiae mosquitoes attracted to IB1 dispensed in LDPE sachets of uniform sheet thickness (0.03 mm) or applied on nylon strips. The following treatments were tested: (a) IB1-treated nylon strips, (b) each component of IB1 dispensed in 0.03 mm-LDPE sachets, (c) control nylon strips and (d) control 0.03 mm-LDPE sachets. A randomised 4 × 4 Latin square experimental design was adopted. The sachets and nylon strips had surface areas of 12.5 cm² and 53 cm², respectively. Effects of dispenser surface area on attraction of mosquitoes: As the higher mosquito catches associated with IB1-treated nylon strips could not be explained by the strips being freshly treated prior to each experiment, we tested whether variations in mosquito catches were due to differences in surface area. The LDPE sachets and nylon strips used in previous experiments of this study had surface areas of 12.5 cm² and 53 cm², respectively.
Thus, the strips released odorants over a larger surface area than the LDPE sachets. We designed two sets of 4 × 4 Latin square experiments to test whether the larger surface area of nylon strips was responsible for the higher mosquito catches. The four treatments included (a) IB1-treated nylon strips, (b) LDPE sachets filled with IB1, (c) control nylon strips and (d) control LDPE sachets. Enlarged LDPE sachets were similarly filled with 1 ml of attractant or solvent. In the first set of experiments, the surface areas of control and attractant-filled LDPE sachets were enlarged (2.5 cm wide × 10.6 cm long × 2 sides of the sachet) to equal the surface area of the nylon strips. In the second set of experiments, a piece of absorbent material (a nylon strip) was placed inside enlarged (53 cm²) control and attractant-filled LDPE sachets to ensure that blend IB1 was evenly spread over the entire inner surface of the sachets. Each set of experiments was replicated 12 times. All other experimental procedures were similar to those described in previous sections. Data analysis: The relative efficacy of each treatment was defined as the percentage of female mosquitoes caught in traps containing either of the two release materials, impregnated or filled with synthetic attractants or solvent. To investigate the effect of residual activity of attractant-treated materials on capture rates, we used the baseline-category logit model [9]. The nominal response variable was defined as the attractant type with four categories (IB1-containing LDPE sachets, IB1-treated nylon strips, control nylon strips, and control LDPE sachets), with day and trap position as covariates. We estimated the odds that mosquitoes chose other attractants instead of IB1-treated nylon strips over time, while adjusting for trap position. The Mann–Whitney U test was used to estimate the effect of sheet thickness of LDPE sachets on the release rates of IB1 components except carbon dioxide.
To investigate the effect of surface area and sheet thickness of the release material on mosquito catches, we fitted a Poisson regression model controlling for trap position. The analyses were performed using SAS v9.2 (SAS Institute Inc.), with tests performed at the 5% significance level. Results: Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets The 12-day period over which experiments were conducted was characterized by a mean temperature and relative humidity of 22.18 ± 0.08°C and 86.15 ± 1.56%, respectively, within the screen-walled greenhouse. Out of 2,400 female An. gambiae mosquitoes released, 51.88% (n = 1,235) were caught in the four treatment traps. Of these catches, 77.73%, 18.62%, 1.78% and 1.86% were trapped by IB1-treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets, respectively (Figure 1). Baseline-category logit model results revealed that IB1-impregnated nylon strips attracted, on average, 5.6 times more mosquitoes than LDPE sachets filled with IB1 (P < 0.001). Whereas there was no significant difference in the proportion of mosquitoes attracted to control nylon strips and control LDPE sachets (P = 0.436), these treatments attracted significantly fewer mosquitoes than nylon strips and LDPE sachets containing blend IB1 (P < 0.001). The day effect was not significant (P = 0.056) and was therefore excluded from the final model. However, trap position was an important determinant of mosquito catches (P < 0.001). These experiments provided baseline information for subsequent investigations conducted during the study. Proportions of mosquitoes caught in MM-X traps containing IB1-treated nylon strips, LDPE sachets filled with blend IB1, control nylon strips and control LDPE sachets. Mean mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.
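The baseline-category logit structure underlying these odds estimates can be sketched as follows, with IB1-treated nylon strips as the baseline category: each non-baseline category j gets a linear predictor for log(P_j / P_baseline), and probabilities follow from a softmax. The coefficients below are illustrative placeholders chosen by us, not the fitted values from the paper.

```python
import math

# Sketch of a baseline-category (multinomial) logit with baseline
# category "IB1 nylon": log(P_j / P_baseline) = a_j + b_j * day.
def category_probs(day, coefs):
    """coefs: {category: (intercept, slope)}; baseline has (0.0, 0.0)."""
    scores = {c: math.exp(a + b * day) for c, (a, b) in coefs.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

coefs = {
    "IB1 nylon": (0.0, 0.0),      # baseline category, fixed at zero
    "IB1 LDPE": (-1.4, 0.01),     # illustrative values only
    "control nylon": (-3.6, 0.0),
    "control LDPE": (-3.5, 0.0),
}
probs = category_probs(day=1, coefs=coefs)
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

With this parameterization, exp(a_j + b_j·day) is directly the odds of a trapped mosquito being in category j rather than in the baseline, which is the quantity the paper reports (e.g. the 5.6-fold preference for treated nylon over IB1-filled sachets).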
Residual activity of attractant-treated nylon strips on host-seeking mosquitoes A total of 11,851 (49.38%) mosquitoes were attracted and collected over 120 nights (i.e. three replicates of 40 days each). The proportions of mosquitoes caught over time differed among treatments (P < 0.001) (Figure 2).
Attractant-treated nylon strips repeatedly trapped the highest proportion of mosquitoes without re-applying the attractant blend up to 40 days post-treatment. During this period the treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets attracted 76.95% (n = 9,120), 18.59% (n = 2,203), 2.17% (n = 257) and 2.29% (n = 271) of the mosquitoes, respectively. There was also a significant increase over time in the proportion of mosquitoes choosing LDPE sachets filled with IB1 (P < 0.001), but not for control nylon strips (P = 0.051) and control LDPE sachets (P = 0.071). In contrast, the numbers of mosquitoes attracted to IB1-impregnated nylon strips decreased considerably over time (P < 0.002). However, they were consistently preferred to LDPE sachets filled with IB1 (Figure 2). Proportions of mosquitoes caught in traps containing IB1-treated nylon strips (―), LDPE sachets filled with blend IB1 (−−--), control nylon strips (—♦--) and control LDPE sachets (− × − × −) over time. Lines and symbols representing mosquito catches due to control nylon strips and control LDPE sachets are superimposed over each other. Open (IB1-treated nylon strips) and closed circles (LDPE sachets filled with IB1) represent observed values. Lines represent the baseline-category logit model fit showing trends of proportions of mosquitoes attracted over time.
Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae Here LDPE sachets with sheet thickness optimized [7] or kept uniform (0.03 mm) for each chemical constituent of the attractant were evaluated. Out of 2,400 mosquitoes released, 51.17% were trapped (Table 1). Whereas trap position was not a significant factor (P = 0.183), attraction of mosquitoes to different traps was influenced by LDPE sheet thickness (P < 0.001). Delivery of attractant components through sachets with optimized sheet thicknesses resulted in a significant increase in mosquito catches as opposed to uniform 0.03 mm-sachets (P < 0.001). There was no difference in mosquito catches between both types of control LDPE sachets (P = 0.111).
Effect of polyethylene sheet thickness on attraction of An. gambiae to attractant-baited sachets N refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (±S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05). The effect of porosity due to differences in sheet thickness of LDPE sachets on the release rates of the various chemicals emitted from blend IB1 was also investigated. Mann–Whitney U tests indicated that sheet thickness had a significant effect on the release rates of propionic acid, pentanoic acid, heptanoic acid, distilled water and lactic acid (P = 0.04, 0.03, 0.02, 0.01 and 0.02, respectively). However, release rates of butanoic acid, 3-methylbutanoic acid, octanoic acid, tetradecanoic acid, and ammonia were not dependent on sheet thickness of LDPE sachets (P = 0.722, 0.97, 0.30, 0.23, and 0.87, respectively) (Figure 3). Effect of LDPE sheet thickness on release rates of chemical constituents contained in the mosquito attractant Ifakara blend 1 (IB1). Release rates from sachets whose sheet thickness had been optimised for each chemical component of the blend (open bars) or kept uniform (0.03 mm sheets) for all the chemical constituents (shaded bars) are shown. The optimised LDPE sheet thicknesses were 0.2 mm [distilled water (H2O), propionic (C3), butanoic (C4), pentanoic (C5), and 3-methylbutanoic acid (3MC4)], 0.1 mm [heptanoic (C7) and octanoic acid (C8)], 0.05 mm [lactic acid (LA)] and 0.03 mm [tetradecanoic acid (C14) and ammonia solution (NH3)]. Odour release rates represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean odour release rates measured in ng/h.
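The Mann–Whitney U comparison of release rates between optimized- and uniform-thickness sachets can be sketched with a plain rank-count implementation of the U statistic. The daily release-rate values below are hypothetical; a full analysis would also derive a p-value from U (e.g. via the normal approximation or an exact table).

```python
# Sketch: Mann-Whitney U statistic for two independent samples,
# counting, for each x, how many y it exceeds (ties count as half).
def mann_whitney_u(xs, ys):
    """U statistic for sample xs versus sample ys."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical daily release rates (ng/h) for one blend component:
optimized = [120.0, 95.0, 130.0, 110.0]   # optimized-thickness sachets
uniform = [260.0, 240.0, 90.0, 300.0]     # uniform 0.03 mm sachets
u = mann_whitney_u(optimized, uniform)
# Sanity check: the two U statistics always sum to len(xs) * len(ys).
assert u + mann_whitney_u(uniform, optimized) == len(optimized) * len(uniform)
```

A small U (relative to n·m/2) indicates the first sample tends to sit below the second, which is the pattern expected if thinner sheets release odorants faster.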
Response of mosquitoes to attractant-treated nylon versus thin-sheeted polyethylene sachets Additional studies confirmed that mosquitoes preferred attractant-treated nylon strips compared to attractants contained in 0.03 mm-LDPE sachets (P < 0.001). Overall, 49.63% (n = 1191) of released mosquitoes were recaptured. Of these, 84.50%, 11.07%, 2.26%, and 1.68% were found in traps baited with attractant-treated nylon strips, LDPE sachets (0.03 mm) filled with IB1, control nylon strips and control LDPE sachets (0.03 mm), respectively (Figure 4). The numbers of mosquitoes caught by control strips and sachets were not significantly different (P = 0.309). Proportions of mosquitoes caught by traps containing IB1-treated nylon strips, 0.03 mm-LDPE sachets filled with blend IB1, control 0.03 mm-LDPE sachets and control nylon strips. Mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches.
Effects of dispenser surface area on attraction of mosquitoes The LDPE sachets and nylon strips used to dispense blend IB1 in preceding experiments of this study had total surface areas of 12.5 cm² and 53 cm², respectively. Follow-up experiments were conducted in which LDPE sachets were enlarged (2.5 cm × 10.6 cm × 2) to equal the surface area of the nylon strips. Attractant-treated nylon strips caught significantly more mosquitoes than attractants contained in enlarged LDPE sachets with (P < 0.001) and without an inner lining of absorbent material (P < 0.001) (Table 2). Thus, the higher attraction of mosquitoes to IB1-treated nylon strips was not neutralized by equalizing the surface area or spreading the attractants uniformly over the inner surface of the LDPE sachets. Mosquito responses to traps containing control nylon strips versus control LDPE sachets with or without the absorbent nylon material were not different (P = 0.173 and P = 0.556, respectively). Position had a significant effect on trap catches (P < 0.001 in both cases). Behavioural responses of mosquitoes towards attractant-treated polyethylene sachets lined with nylon versus nylon strips treated with a similar attractant N refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (±S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05).
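The Poisson regression used for these trap counts models the log of the expected nightly catch as a sum of treatment and trap-position effects. A minimal sketch of that structure follows, with illustrative coefficients chosen by us (not fitted values); it shows why controlling for position leaves a clean multiplicative treatment effect.

```python
import math

# Sketch of the Poisson regression structure for trap-count data:
# E[count] = exp(intercept + treatment_effect + position_effect).
def expected_catch(intercept, treatment_effect, position_effect):
    return math.exp(intercept + treatment_effect + position_effect)

# Illustrative (hypothetical) effects: nylon strips vs LDPE sachets
# compared at the same trap position.
b0 = 1.0                 # intercept
pos = 0.2                # shared trap-position effect
nylon, ldpe = 1.6, 0.1   # hypothetical treatment coefficients

ratio = expected_catch(b0, nylon, pos) / expected_catch(b0, ldpe, pos)
# The position effect cancels in the ratio, leaving exp(1.6 - 0.1):
assert abs(ratio - math.exp(1.5)) < 1e-9
```

This cancellation is the point of including trap position as a covariate: the estimated treatment rate ratio is the same at every position, so positional imbalances in the Latin square cannot masquerade as dispenser effects.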
Response of mosquitoes to attractant-treated nylon strips versus LDPE sachets: The 12-day period over which experiments were conducted was characterized by a mean temperature and relative humidity of 22.18 ± 0.08°C and 86.15 ± 1.56%, respectively, within the screen-walled greenhouse. Out of 2,400 female An. gambiae mosquitoes released, 51.88% (n = 1,235) were caught in the four treatment traps. Of these catches, 77.73%, 18.62%, 1.78% and 1.86% were trapped by IB1-treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets, respectively (Figure 1). 
Baseline-category logit model results revealed that IB1-impregnated nylon strips attracted, on average, 5.6 times more mosquitoes than LDPE sachets filled with IB1 (P < 0.001). Whereas there was no significant difference in the proportion of mosquitoes attracted to control nylon strips and control LDPE sachets (P = 0.436), these treatments attracted significantly fewer mosquitoes than nylon and LDPE sachets containing blend IB1 (P < 0.001). Day effect was not significant (P = 0.056), and was therefore excluded from the final model. However, trap position was an important determinant of mosquito catches (P < 0.001). These experiments provided baseline information for subsequent investigations conducted during the study. Proportions of mosquitoes caught in MM-X traps containing IB1-treated nylon strips, LDPE sachets filled with blend IB1, control nylon strips and control LDPE sachets. Mean mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches. Residual activity of attractant-treated nylon strips on host-seeking mosquitoes: A total of 11,851 (49.38%) mosquitoes were attracted and collected over 120 nights (i.e. three replicates of 40 days each). The proportions of mosquitoes caught over time differed among treatments (P < 0.001) (Figure 2). Attractant-treated nylon strips repeatedly trapped the highest proportion of mosquitoes without re-applying the attractant blend up to 40 days post-treatment. During this period the treated nylon strips, LDPE sachets filled with IB1, control nylon strips and control LDPE sachets attracted 76.95% (n = 9,120), 18.59% (n = 2,203), 2.17% (n = 257) and 2.29% (n = 271) of the mosquitoes, respectively. There was also a significant increase over time in the proportion of mosquitoes choosing LDPE sachets filled with IB1 (P < 0.001), but not for control nylon strips (P = 0.051) and control LDPE sachets (P = 0.071). 
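As context for the baseline-category logit results above, the model's core quantity is the log-odds of each trap category against a chosen baseline category. A minimal pure-Python sketch of that quantity, using hypothetical trap counts (illustrative numbers only, not the study's raw data):

```python
from math import log

# Hypothetical catch totals for the four treatments
# (illustrative numbers, not the study's raw data).
counts = {
    "nylon_IB1": 912,
    "ldpe_IB1": 220,
    "nylon_control": 26,
    "ldpe_control": 27,
}

total = sum(counts.values())
proportions = {k: v / total for k, v in counts.items()}

# Baseline-category logits: log(p_j / p_baseline) for each
# non-baseline category, here taking the control LDPE sachet
# as the baseline category.
baseline = "ldpe_control"
logits = {
    k: log(p / proportions[baseline])
    for k, p in proportions.items()
    if k != baseline
}

# Fold difference between the two attractant dispensers.
fold = counts["nylon_IB1"] / counts["ldpe_IB1"]
print(f"nylon vs LDPE fold difference: {fold:.2f}")
print({k: round(v, 3) for k, v in logits.items()})
```

A full analysis would estimate these logits with treatment, day and trap-position covariates via multinomial (baseline-category) logistic regression; the sketch only makes explicit the quantity being modelled.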
In contrast, the numbers of mosquitoes attracted to IB1-impregnated nylon strips decreased considerably over time (P < 0.002). However, they were consistently preferred to LDPE sachets filled with IB1 (Figure 2). Proportions of mosquitoes caught in traps containing IB1-treated nylon strips (―), LDPE sachets filled with blend IB1 (−−--), control nylon strips (—♦--) and control LDPE sachets (− × − × −) over time. Lines and symbols representing mosquito catches due to control nylon strips and control LDPE sachets are superimposed over each other. Open (IB1-treated nylon strips) and closed circles (LDPE sachets filled with IB1) represent observed values. Lines represent the Baseline-category logit model fit showing trends of proportions of mosquitoes attracted over time. Sheet thickness of LDPE sachets containing attractants and its effect on attraction of An. gambiae: Here LDPE sachets with sheet thickness optimized [7] or kept uniform (0.03 mm) for each chemical constituent of the attractant were evaluated. Out of 2,400 mosquitoes released, 51.17% were trapped (Table 1). Whereas trap position was not a significant factor (P = 0.183), attraction of mosquitoes to different traps was influenced by LDPE sheet thickness (P < 0.001). Delivery of attractant components through sachets with optimized sheet thicknesses resulted in a significant increase in mosquito catches as opposed to uniform 0.03 mm-sachets (P < 0.001). There was no difference in mosquito catches between both types of control LDPE sachets (P = 0.111). Effect of polyethylene sheet thickness on attraction of An. gambiae to attractant baited sachets N refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (±S.E) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05). The effect of porosity due to differences in sheet thickness of LDPE sachets on release rates of various chemicals emitted from blend IB1 was also investigated. 
Mann-Whitney U tests indicated that sheet thickness had a significant effect on the release rates of propionic acid, pentanoic acid, heptanoic acid, distilled water and lactic acid (P = 0.04, 0.03, 0.02, 0.01 and 0.02, respectively). However, release rates of butanoic acid, 3-methylbutanoic acid, octanoic acid, tetradecanoic acid, and ammonia were not dependent on sheet thickness of LDPE sachets (P = 0.722, 0.97, 0.30, 0.23, and 0.87, respectively) (Figure 3). Effect of LDPE sheet thickness on release rates of chemical constituents contained in the mosquito attractant Ifakara blend 1 (IB1). Release rates from sachets, the sheet thickness of which had been optimised for all chemical components of the blend (open bars) or kept uniform (0.03 mm-sheets) for all the chemical constituents (shaded bars), are shown. The optimised LDPE sheet thicknesses were 0.2 mm [distilled water (H2O), propionic (C3), butanoic (C4), pentanoic (C5), and 3-methylbutanoic acid (3MC4)], 0.1 mm [heptanoic (C7) and octanoic acid (C8)], 0.05 mm [lactic acid (LA)] and 0.03 mm [tetradecanoic acid (C14) and ammonia solution (NH3)]. Odour release rates represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean odour release rates measured in ng/h. Response of mosquitoes to attractant-treated nylon versus thin-sheeted polyethylene sachets: Additional studies confirmed that mosquitoes preferred attractant-treated nylon strips compared to attractants contained in 0.03 mm-LDPE sachets (P < 0.001). Overall, 49.63% (n = 1,191) of released mosquitoes were recaptured. Of these, 84.50%, 11.07%, 2.26%, and 1.68% were found in traps baited with attractant-treated nylon strips, LDPE sachets (0.03 mm) filled with IB1, control nylon strips and control LDPE sachets (0.03 mm), respectively (Figure 4). The numbers of mosquitoes caught by control strips and sachets were not significantly different (P = 0.309). 
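The release-rate comparisons above rely on the Mann-Whitney U test. A self-contained sketch of the U statistic in its pairwise form (ties counted as half), applied to hypothetical release-rate samples in ng/h — the values are stand-ins, not the measured data:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y: the number of pairs
    (xi, yj) with xi > yj, counting ties as 0.5 each."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical release rates (ng/h) from 0.2 mm vs 0.03 mm sachets.
thick = [12.1, 10.8, 11.5, 12.9]
thin = [18.3, 17.6, 19.0, 18.1]

u = mann_whitney_u(thick, thin)
print(f"U = {u} out of {len(thick) * len(thin)} pairs")
```

Under the null hypothesis U sits near n·m/2; here U = 0, the most extreme value, which for n = m = 4 corresponds to a two-sided exact P of about 0.03. A production analysis would use scipy.stats.mannwhitneyu rather than this hand-rolled version.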
Proportions of mosquitoes caught by traps containing IB1-treated nylon strips, 0.03 mm-LDPE sachets filled with blend IB1, control 0.03 mm-LDPE sachets and control nylon strips. Mosquito catches represented by bars with different letters differ significantly (P < 0.05). Error bars represent the standard error of the mean proportion of mosquito catches. Effects of dispenser surface area on attraction of mosquitoes: The LDPE sachets and nylon strips used to dispense blend IB1 in preceding experiments of this study had total surface areas of 12.5 cm2 and 53 cm2, respectively. Follow-up experiments were conducted in which LDPE sachets were enlarged (2.5 cm × 10.6 cm × 2) to equal the surface area of the nylon strips. Attractant-treated nylon strips caught significantly more mosquitoes than attractants contained in enlarged LDPE sachets with (P < 0.001) and without an inner lining of absorbent material (P < 0.001) (Table 2). Thus, higher attraction of mosquitoes to IB1-treated nylon strips was not neutralized by equalized surface area or uniform spread of attractants over the inner surface area of LDPE sachets. Mosquito responses to traps containing control nylon strips versus control LDPE sachets with or without the absorbent nylon material were not different (P = 0.173 and P = 0.556, respectively). Position had a significant effect on trap catches (P < 0.001 in both cases). Behavioural responses of mosquitoes towards attractant-treated polyethylene sachets lined with nylon versus nylon strips treated with a similar attractant. N refers to the number of replicates and n to the total number of mosquitoes trapped. Mean (±S.E.) numbers of mosquitoes trapped with different letter superscripts differ significantly (P < 0.05). Discussion: This study demonstrates that nylon strips can act as a sustainable matrix for dispensing synthetic attractants of host-seeking An. gambiae mosquitoes, performing much better than low density polyethylene (LDPE) sachets. 
It was remarkable that attractant-treated nylon strips continued to attract mosquitoes without re-application and remained consistently more attractive than LDPE sachets filled with the same attractants over a period of 40 nights post-treatment. The higher catches of mosquitoes associated with nylon strips were apparently not due to smaller surface area, uneven spread of the attractant on inner surfaces or LDPE sheet thickness. The baseline experiments reported herein confirm findings of our previous studies in which nylon strips were found to provide a better release matrix for delivering synthetic attractants of host-seeking An. gambiae mosquitoes than did LDPE sachets or open glass vials [3]. LDPE and nylon differ in physico-chemical characteristics such as porosity and chemical binding affinity that may explain the observed differences in mosquito catches through their effects on the release rate of odorant volatiles [1,10,11]. Although the use of LDPE sachets allows the adjustment of attractant release rates, release rates from nylon have yet to be determined, e.g. through headspace sampling at the trap outlet. That IB1-treated nylon strips remained consistently more attractive to host-seeking An. gambiae mosquitoes than LDPE sachets filled with the same attractants for a period of up to 40 days post-treatment is strong evidence of inherent residual activity. This finding corroborates that of related studies where nylon stockings impregnated with human emanations remained attractive to An. gambiae mosquitoes for several weeks [12-14]. Blend IB1 impregnated on nylon strips may have been subject to bacterial degradation over the prolonged experimental time. This may have resulted in the release of additional components beyond those originally present on the nylon strips [15-17]. However, the present study did not investigate the presence of microbes or additional attractant compounds on aging IB1-treated nylon strips. 
The current study shows that attractant-treated nylon strips can be re-used for at least 40 consecutive days as baits for host-seeking An. gambiae mosquitoes, thereby reducing the costs of odorants and nylon strips, and the time and labour used to prepare fresh baits. These attributes are consistent with those associated with long-lasting fabric materials impregnated with mosquito repellents or insecticides [18,19]. The availability of long-lasting mosquito-attractant fabrics is interesting as these can potentially be combined with mosquito pathogens such as entomopathogenic fungi or bacteria [20]. Thus, a cheap and effective tool for intercepting and eliminating host-seeking mosquitoes can be exploited for vector-borne disease control. However, further testing is needed to examine the maximal duration of residual activity of the attractant-treated strips. Contrary to our expectations, LDPE sachets optimized for release rates and surface area caught fewer mosquitoes than nylon strips. The release rate of some compounds (propionic, pentanoic, heptanoic, lactic acid and water) was significantly increased when uniformly thinner-sheeted sachets were utilized. Because sheet thickness of LDPE sachets is a determinant of volatile release rate, the composition of the volatile blend released may have changed so as to negatively affect attractiveness to An. gambiae mosquitoes [1,21]. We conclude that blend ratio and concentration affect orientation and capture rates of insect vectors with odour-baited systems [22,23]. Although LDPE sachets have been effectively used to release attractants for tsetse flies and other insect pests [1,2], they attracted fewer mosquitoes compared to nylon strips when both were treated or filled with the same blend of attractants. This could be explained by differences in optimized sheet thicknesses of LDPE sachets and physical and chemical characteristics of the odorants used for attraction of tsetse flies versus those used for mosquitoes [24]. 
Moreover, trap designs used for collection of both insect vectors were also different [3,25,26]. Delivery of synthetic attractant components through sachets with standardized sheet thickness and surface area has demonstrated consistent mosquito catches under laboratory and semi-field conditions [17]. Whereas nylon strips were associated with higher mosquito catches, we currently lack information on the release rates of the odorants dispensed. Accurate release rates have been established for odorants delivered through LDPE sachets [4], and such chemical measurements should also be done for nylon, as this allows for a direct comparison of the active aerial odorant concentration that host-seeking mosquitoes encounter. Conclusion: This study demonstrates that nylon strips present a potent and sustainable release material for dispensing synthetic mosquito attractants. Apparently, attractant-treated nylon strips can be used over prolonged time without re-applying the attractant blend. Treatment of nylon surfaces with attractants presents an opportunity for use in long-lasting odour-baited devices for sampling, surveillance and control of disease vectors. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: WRM, CKM, WT, and RCS designed the study; CKM and PO conducted the research; BO and WRM analysed the data; CKM, WRM, RCS, WT and JJAvL wrote the paper. All authors read and approved the final manuscript.
Background: Synthetic odour baits present an unexploited potential for sampling, surveillance and control of malaria and other mosquito vectors. However, application of such baits is impeded by the unavailability of robust odour delivery devices that perform reliably under field conditions. In the present study the suitability of low density polyethylene (LDPE) and nylon strips for dispensing synthetic attractants of host-seeking Anopheles gambiae mosquitoes was evaluated. Methods: Baseline experiments assessed the numbers of An. gambiae mosquitoes caught in response to low density polyethylene (LDPE) sachets filled with attractants, attractant-treated nylon strips, control LDPE sachets, and control nylon strips placed in separate MM-X traps. Residual attraction of An. gambiae to attractant-treated nylon strips was determined subsequently. The effects of sheet thickness and surface area on numbers of mosquitoes caught in MM-X traps containing the synthetic kairomone blend dispensed from LDPE sachets and nylon strips were also evaluated. Various treatments were tested through randomized 4 × 4 Latin Square experimental designs under semi-field conditions in western Kenya. Results: Attractant-treated nylon strips collected 5.6 times more An. gambiae mosquitoes than LDPE sachets filled with the same attractants. The attractant-impregnated nylon strips were consistently more attractive (76.95%; n = 9,120) than sachets containing the same attractants (18.59%; n = 2,203), control nylon strips (2.17%; n = 257) and control LDPE sachets (2.29%; n = 271) up to 40 days post-treatment (P < 0.001). The higher catches of mosquitoes achieved with nylon strips were unrelated to differences in surface area between nylon strips and LDPE sachets. The proportion of mosquitoes trapped when individual components of the attractant were dispensed in LDPE sachets of optimized sheet thicknesses was significantly higher than when 0.03 mm-sachets were used (P < 0.001). 
Conclusions: Nylon strips continuously dispense synthetic mosquito attractants several weeks post treatment. This, added to the superior performance of nylon strips relative to LDPE material in dispensing synthetic mosquito attractants, opens up the opportunity for showcasing the effectiveness of odour-baited devices for sampling, surveillance and control of disease vectors.
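The randomized 4 × 4 Latin Square designs mentioned in the Methods rotate each treatment through each trap position exactly once. A minimal sketch of generating such a rotation (the treatment names and the cyclic-shift construction are illustrative, not the study's actual randomization procedure):

```python
import random

def latin_square(items, seed=None):
    """Build an n x n Latin square by cyclic shifts of a shuffled item
    order, then shuffle the rows: every item appears exactly once in
    each row (night) and each column (trap position)."""
    rng = random.Random(seed)
    order = list(items)
    rng.shuffle(order)
    n = len(order)
    rows = [[order[(r + c) % n] for c in range(n)] for r in range(n)]
    rng.shuffle(rows)
    return rows

treatments = ["nylon + IB1", "LDPE + IB1", "nylon control", "LDPE control"]
square = latin_square(treatments, seed=42)
for night, row in enumerate(square, start=1):
    print(f"night {night}: {row}")
```

Rotating treatments over positions this way is what lets trap-position effects (which the Results show were often significant) be separated from treatment effects.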
Background: The effectiveness of odour-baited tools for sampling, surveillance and control of insect vectors is strongly influenced by the selected odour delivery device [1,2]. Low density polyethylene (LDPE) materials have proved useful because odour baits are released at predictable rates and do not need to be replenished over prolonged periods of time [1,3]. However, these attributes may not guarantee maximal mosquito trap catches without prior optimization of sheet thickness and surface area [1,2,4]. Since LDPE sachets are prone to leakage, further search for slow-release materials and techniques is warranted for the optimal release of odorants. In a previous eight-day study we reported on the efficacy of nylon fabric (90% polyamide and 10% spandex) as a tool for dispensing odours [3]. A potent synthetic mosquito attractant, namely Ifakara blend 1 (hereafter referred to as blend IB1), was used to evaluate open glass vials, LDPE and nylon as delivery tools. Nylon strips impregnated with blend IB1 attracted 5.83 and 1.78 times more Anopheles gambiae Giles sensu stricto (hereafter referred to as An. gambiae) mosquitoes than solutions of attractants dispensed from glass vials and LDPE sachets, respectively [3]. However, in the case of nylon strips each chemical component of the attractant was applied at its optimal concentration, whereas such optimization had not been implemented in advance for LDPE sachets. In this study we re-evaluated the suitability of nylon versus LDPE as materials for dispensing synthetic mosquito attractants. We pursued four specific aims, i.e. 
(i) comparison of nylon strips and LDPE sachets as materials for releasing synthetic mosquito attractants, (ii) assessment of the residual activity of attractant-baited nylon strips and LDPE sachets on host-seeking mosquitoes, (iii) determination of the effect of LDPE sheet thickness on attraction of mosquitoes to synthetic attractants, and (iv) comparison of surface area effects on attraction of mosquitoes to attractants administered through nylon strips versus LDPE sachets.
Keywords: Mosquito | Trapping | Attractant | Odour release system
MeSH terms: Animals | Anopheles | Chemotactic Factors | Kenya | Mosquito Control | Nylons | Polyethylene | Time Factors
Sensitive Troponin I Assay in Patients with Chest Pain - Association with Significant Coronary Lesions with or Without Renal Failure.
PMID: 29538525
Introduction: Despite having higher sensitivity as compared to conventional troponins, sensitive troponins have lower specificity, mainly in patients with renal failure.
Methods: Retrospective, single-center, observational study. This study included 991 patients divided into two groups: with (N = 681) and without (N = 310) significant coronary lesion. For posterior analysis, the patients were divided into two other groups: with (N = 184) and without (N = 807) chronic renal failure. The commercial ADVIA Centaur® TnI-Ultra assay (Siemens Healthcare Diagnostics) was used. The ROC curve analysis was performed to identify the sensitivity and specificity of the best cutoff point of troponin as a discriminator of the probability of significant coronary lesion. The associations were considered significant when p < 0.05.
Results: The median age was 63 years, and 52% of the patients were of the male sex. The area under the ROC curve between the troponin levels and significant coronary lesions was 0.685 (95% CI: 0.65 - 0.72). In patients with or without renal failure, the areas under the ROC curve were 0.703 (95% CI: 0.66 - 0.74) and 0.608 (95% CI: 0.52 - 0.70), respectively. The best cutoff points to discriminate the presence of significant coronary lesion were: in the general population, 0.605 ng/dL (sensitivity, 63.4%; specificity, 67%); in patients without renal failure, 0.605 ng/dL (sensitivity, 62.7%; specificity, 71%); and in patients with chronic renal failure, 0.515 ng/dL (sensitivity, 80.6%; specificity, 42%).
Conclusion: In patients with chest pain, sensitive troponin I showed a good correlation with significant coronary lesions when its level was greater than 0.605 ng/dL. In patients with chronic renal failure, a significant decrease in specificity was observed in the correlation of troponin levels and severe coronary lesions.
MeSH terms: Biomarkers | Chest Pain | Coronary Disease | Female | Humans | Kidney Failure, Chronic | Male | Middle Aged | ROC Curve | Retrospective Studies | Sensitivity and Specificity | Troponin I
PMCID: 5831304
Introduction
In recent years, cardiology has witnessed the constant development of several biomarkers, among which current sensitive troponins and high-sensitivity troponins, widespread in Brazil and Europe, stand out.1 However, despite the huge gain in sensitivity, allowing early detection of a minimum threshold of myocardial lesion in patients presenting to the emergency department with chest pain, there was a reduction in specificity. As a result, several patients with non-cardiological or non-coronary problems underwent unnecessary and even harmful antithrombotic therapy and invasive coronary stratification.2-5 The adequate troponin level to be considered for the correct interpretation of clinical findings depends on the patient’s characteristics and on the troponin assay used, and should ideally be individualized for each service.2-4,6 Thus, this study was aimed at assessing current sensitive troponin I levels in patients with chest pain and at relating them to the existence of significant coronary lesions, both in the presence and absence of chronic renal failure, in the selected sample.
Methods
Study population This is a retrospective, single-center, observational study, including 991 patients with chest pain admitted to the emergency department of a high-complexity tertiary cardiology center, between May 2013 and May 2015. All patients with chest pain undergoing coronary angiography for suspected unstable angina or non-ST-elevation acute myocardial infarction were included. Presence of ST-segment elevation was the only exclusion criterion. The coronary lesion was considered significant when ≥ 70% on coronary angiography. Chronic renal failure was defined as a creatinine level > 1.5 mg/dL. The patients were divided into two groups: with (N = 681) and without (N = 310) significant coronary lesion. For Receiver Operating Characteristic (ROC) curve analysis, the patients were divided into two other groups: with (N = 184) and without (N = 807) chronic renal failure. The commercial ADVIA Centaur® TnI-Ultra assay (Siemens Healthcare Diagnostics, Tarrytown, NY, USA) was used for current sensitive troponin with a 99th percentile value of 0.04 ng/mL. The flowchart of the management of all patients with chest pain met the criteria established by the last American Heart Association guideline.7-9 Non-ST-elevation acute coronary syndrome was defined as presence of chest pain associated with electrocardiographic changes or troponin elevation/drop on admission or, in the lack thereof, clinical findings and risk factors compatible with unstable angina (chest pain at rest or on minimal exertion, of severe intensity or occurring in a crescendo pattern). The highest troponin level during hospitalization before coronary angiography was considered for analysis, following the every 6-hour marker collection protocol of the institution. 
The following data were obtained: age, sex, presence of diabetes mellitus, systemic arterial hypertension, smoking habit, dyslipidemia, family history of early coronary artery disease, chronic coronary artery disease, previous acute myocardial infarction, creatinine, ST-segment depression or T-wave inversion on the electrocardiogram. This study was submitted to the Ethics Committee in Research and approved by it. All patients provided written informed consent. This is a retrospective, single-center, observational study, including 991 patients with chest pain admitted to the emergency department of a high-complexity tertiary cardiology center, between May 2013 and May 2015. All patients with chest pain undergoing coronary angiography for suspected unstable angina or non-ST-elevation acute myocardial infarction were included. Presence of ST-segment elevation was the only exclusion criterion. The coronary lesion was considered significant when ≥ 70% on coronary angiography. Chronic renal failure was defined as a creatinine level > 1.5 mg/dL. The patients were divided into two groups: with (N = 681) and without (N = 310) significant coronary lesion. For Receiver Operating Characteristic (ROC) curve analysis, the patients were divided into two other groups: with (N = 184) and without (N = 807) chronic renal failure. The commercial ADVIA Centaur® TnI-Ultra assay (Siemens Healthcare Diagnostics, Tarrytown, NY, USA) was used for current sensitive troponin with a 99th percentile value of 0.04 ng/mL. 
Statistical analysis

ROC curve analysis was performed to identify the sensitivity and specificity of the best troponin cutoff point as a discriminator of the probability of significant coronary lesion, with 95% confidence intervals (CI). The analysis was performed for the general population and separately for patients with and without chronic renal failure.

Categorical variables were described as percentages. Continuous variables with non-normal distribution were expressed as medians and interquartile ranges, and those with normal distribution as means and standard deviations. Groups were compared with the chi-square test for categorical variables.
Continuous variables were assessed with the unpaired t test when the Kolmogorov-Smirnov test indicated normal distribution, and with the Mann-Whitney U test otherwise. Both troponin cutoff points analyzed (the 99th percentile of the assay and the best cutoff point found in this study) were entered into the univariate analysis comparing patients with versus without significant coronary lesion.

Multivariate analysis was performed with logistic regression, adopting p < 0.05 as the significance level. All baseline characteristics listed in Table 1 that reached statistical significance on univariate analysis were entered as variables, and the analysis was run separately for each troponin cutoff point assessed (the 99th percentile of the assay and the best cutoff point found in this study).

Table 1. Baseline characteristics and univariate analysis comparing patients with versus without significant coronary lesion. FH: family history; CAD: coronary artery disease; AMI: acute myocardial infarction; chi-square test; unpaired t test; Mann-Whitney U test.

All calculations were performed with the SPSS software, version 10.0.
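The normality-driven choice between the unpaired t test and the Mann-Whitney U test can be sketched as below. This is a minimal sketch with synthetic data, assuming SciPy; the paper's Kolmogorov-Smirnov check is approximated here by `kstest` against a standardized normal (a Lilliefors correction would be more rigorous).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(63, 10, 200)  # e.g. age in patients with lesion (synthetic)
group_b = rng.normal(65, 10, 200)  # e.g. age in patients without lesion (synthetic)


def compare_continuous(a, b, alpha=0.05):
    """Pick the test the way the paper describes: t test if both samples
    look normal on a KS check, Mann-Whitney U otherwise."""
    normal = all(
        stats.kstest((x - x.mean()) / x.std(ddof=1), "norm").pvalue > alpha
        for x in (a, b)
    )
    if normal:
        name, (stat, p) = "t", stats.ttest_ind(a, b)
    else:
        name, (stat, p) = "mannwhitney", stats.mannwhitneyu(a, b)
    return name, p


print(compare_continuous(group_a, group_b))
```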
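Finding the "best cutoff point" on a ROC curve is commonly done by maximizing Youden's J (sensitivity + specificity - 1). A minimal sketch with synthetic troponin values (not the study data) using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Synthetic troponin values (ng/mL): lesion-positive patients skew higher.
y = np.r_[np.ones(681, dtype=int), np.zeros(310, dtype=int)]
tn = np.r_[rng.lognormal(-0.5, 1.2, 681), rng.lognormal(-2.0, 1.2, 310)]

fpr, tpr, thr = roc_curve(y, tn)
j = tpr - fpr                      # Youden's J at each candidate threshold
best = thr[np.argmax(j)]

print(f"AUC={roc_auc_score(y, tn):.3f}  best cutoff={best:.3f} ng/mL "
      f"(sens={tpr[np.argmax(j)]:.2f}, spec={1 - fpr[np.argmax(j)]:.2f})")
```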
Results
The median age was 63 years, and 52% of the patients were male. The area under the ROC curve relating troponin levels to significant coronary lesions was 0.685 (95% CI: 0.65-0.72). In patients with or without renal failure, the areas under the ROC curve were 0.703 (95% CI: 0.66-0.74) and 0.608 (95% CI: 0.52-0.70), respectively. The best cutoff points to discriminate the presence of significant coronary lesion were: in the general population, 0.605 ng/mL (sensitivity, 63.4%; specificity, 67%; positive predictive value, 65.9%; negative predictive value, 64.7%; accuracy, 65.3%; likelihood ratio, 1.9); in patients without renal failure, 0.605 ng/mL (sensitivity, 62.7%; specificity, 71%; accuracy, 66.9%; likelihood ratio, 2.2); and in patients with chronic renal failure, 0.515 ng/mL (sensitivity, 80.6%; specificity, 42%; accuracy, 61.3%; likelihood ratio, 1.4) (Figure 1). In the general population, the level of 0.05 ng/mL (immediately above the 99th percentile) showed a sensitivity of 93.7% and a specificity of 23%. For patients with chronic renal failure to reach a specificity of 67% (as in the general population), the troponin level had to rise to 1.58 ng/mL.

Figure 1. ROC curve identifying the sensitivity and the specificity of the best cutoff point of troponin as a discriminator of the probability of significant coronary lesion. AUC: area under the curve.

Troponin was negative in 143 patients, and in 40.6% of them significant lesions were observed on coronary angiography. In addition, 10.5% of the patients with negative troponin showed ST-segment depression/T-wave inversion on the electrocardiogram.
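The likelihood ratios reported for each operating point follow from the standard definition LR+ = sensitivity / (1 - specificity). As a quick sanity check using only the reported sensitivities and specificities (no patient data):

```python
def lr_positive(sens: float, spec: float) -> float:
    """Positive likelihood ratio at a given ROC operating point."""
    return sens / (1.0 - spec)


# Reported operating points (sensitivity, specificity):
print(round(lr_positive(0.634, 0.67), 2))  # 1.92 -> reported ~1.9 (general population)
print(round(lr_positive(0.627, 0.71), 2))  # 2.16 -> reported ~2.2 (no renal failure)
print(round(lr_positive(0.806, 0.42), 2))  # 1.39 -> reported ~1.4 (chronic renal failure)
```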
Using the gold-standard procedure of cardiac catheterization, the diagnosis of acute coronary syndrome was confirmed in 68.7% of the patients admitted for chest pain. In 9.1% of those without significant coronary lesion on angiography but with positive troponin, the diagnosis of acute coronary syndrome was confirmed by cardiac magnetic resonance. The baseline characteristics of the study population and the univariate analysis between the groups are shown in Table 1.

In multivariate analysis using the 99th percentile of the assay, the groups with and without coronary lesion differed significantly regarding smoking habit (OR = 1.58, p = 0.002), ST-segment depression/T-wave inversion (OR = 2.05, p < 0.0001), and troponin positivity (OR = 3.39, p < 0.0001). When the best troponin cutoff point found in this study was used instead, the groups differed significantly regarding male sex (OR = 1.35, p = 0.039), smoking habit (OR = 1.64, p = 0.001), ST-segment depression/T-wave inversion (OR = 2.22, p < 0.0001), and troponin positivity (OR = 3.39, p < 0.0001). The multivariate analysis results are shown in Table 2.

Table 2. Multivariate analysis comparing patients with versus without significant coronary lesion: A. using the 99th percentile of the troponin assay; B. using the best cutoff point for troponin found in the study. OR: odds ratio; CI: confidence interval.
Conclusion
In the study population of patients with chest pain, sensitive troponin I showed a good correlation with significant coronary lesions when its level was greater than 0.605 ng/mL. In patients with chronic renal failure, a significant decrease in specificity was observed in the correlation between troponin levels and severe coronary lesions.
[ "Study population", "Statistical analysis", "Limitations" ]
[ "This is a retrospective, single-center, observational study, including 991\npatients with chest pain admitted to the emergency department of a\nhigh-complexity tertiary cardiology center, between May 2013 and May 2015.\nAll patients with chest pain undergoing coronary angiography for suspected\nunstable angina or non-ST-elevation acute myocardial infarction were included.\nPresence of ST-segment elevation was the only exclusion criterion. The coronary\nlesion was considered significant when ≥ 70% on coronary angiography.\nChronic renal failure was defined as a creatinine level > 1.5 mg/dL.\nThe patients were divided into two groups: with (N = 681) and without (N = 310)\nsignificant coronary lesion. For Receiver Operating Characteristic (ROC) curve\nanalysis, the patients were divided into two other groups: with (N = 184) and\nwithout (N = 807) chronic renal failure.\nThe commercial ADVIA Centaur® TnI-Ultra assay (Siemens\nHealthcare Diagnostics, Tarrytown, NY, USA) was used for current sensitive\ntroponin with a 99th percentile value of 0.04 ng/mL. The flowchart of\nthe management of all patients with chest pain met the criteria established by\nthe last American Heart Association guideline.7-9\nNon-ST-elevation acute coronary syndrome was defined as presence of chest pain\nassociated with electrocardiographic changes or troponin elevation/drop on\nadmission or, in the lack thereof, clinical findings and risk factors compatible\nwith unstable angina (chest pain at rest or on minimal exertion, of severe\nintensity or occurring in a crescendo pattern). 
The highest\ntroponin level during hospitalization before coronary angiography was considered\nfor analysis, following the every 6-hour marker collection protocol of the\ninstitution.\nThe following data were obtained: age, sex, presence of diabetes mellitus,\nsystemic arterial hypertension, smoking habit, dyslipidemia, family history of\nearly coronary artery disease, chronic coronary artery disease, previous acute\nmyocardial infarction, creatinine, ST-segment depression or T-wave inversion on\nthe electrocardiogram.\nThis study was submitted to the Ethics Committee in Research and approved by it.\nAll patients provided written informed consent.", "The ROC curve analysis was performed to identify the sensitivity and specificity\nof the best cutoff point of troponin as a discriminator of the probability of\nsignificant coronary lesion, and 95% confidence interval (CI) was used. That\nanalysis was performed for the general population and separately for patients\nwith and without chronic renal failure.\nDescriptive analysis of the categorical variables was performed by use of\npercentages. Continuous variables with non-normal distribution were expressed as\nmedians and interquartile intervals, and those with normal distribution, as\nmeans and standard deviations. The comparison between groups was performed by\nuse of the chi-square test for categorical variables. The continuous variables,\nwhen the Kolmogorov-Smirnov test showed normal distribution, were assessed by\nusing the unpaired T test, and when the distribution was not normal, the\nMann-Whitney U test was used. Both troponin cutoff points analyzed (the\n99th percentile of the method and the best cutoff point found in\nthis study) were entered into the univariate analysis. Comparison between\npatients with versus without significant coronary lesion was\nperformed.\nMultivariate analysis was performed with logistic regression, p < 0.05 being\nthe significance level adopted. 
All baseline characteristics listed in Table 1 that reached statistical\nsignificance on univariate analysis were considered as variables in the\nanalysis. Multivariate analysis was performed separately for each troponin\ncutoff point assessed (the 99th percentile of the method and the best\ncutoff point found in this study).\nBaseline characteristics and univariate analysis comparing patients with\nversus without significant coronary lesion\nFH: family history; CAD: coronary artery disease; AMI: acute\nmyocardial infarction;\nchi square test;\nunpaired T test;\nMann-Whitney U test.\nThe calculations were performed with the SPSS software, version 10.0.", "Despite the large case series, this is a retrospective (hindering the blinded\nanalysis) single-center study, with a much higher number of patients without\nchronic renal failure than with it. In addition, we used only one troponin\nassay, and most patients were of the male sex." ]
[ null, null, null ]
[ "Introduction", "Methods", "Study population", "Statistical analysis", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "In recent years, cardiology has witnessed the constant development of several\nbiomarkers, of which, current sensitive troponins and high-sensitivity troponins,\nwidespread in Brazil and Europe, stand out.1\nHowever, despite the huge gain in sensitivity, allowing early detection of a minimum\nthreshold of myocardial lesion in patients presenting to the emergency department\nwith chest pain, there was a reduction in specificity, which resulted in several\npatients with non-cardiological or non-coronary problems undergoing unnecessary and\neven harmful antithrombotic therapy and invasive coronary stratification.2-5 The adequate troponin level to be considered for the correct\ninterpretation of clinical findings depends on the patient’s characteristics and on\nthe troponin assay used, and should be ideally individualized for each\nservice.2-4,6\nThus, this study was aimed at assessing the current sensitive troponin I levels for\npatients with chest pain, in addition to relating them to the existence of\nsignificant coronary lesions both in the presence and absence of chronic renal\nfailure in the sample selected.", " Study population This is a retrospective, single-center, observational study, including 991\npatients with chest pain admitted to the emergency department of a\nhigh-complexity tertiary cardiology center, between May 2013 and May 2015.\nAll patients with chest pain undergoing coronary angiography for suspected\nunstable angina or non-ST-elevation acute myocardial infarction were included.\nPresence of ST-segment elevation was the only exclusion criterion. The coronary\nlesion was considered significant when ≥ 70% on coronary angiography.\nChronic renal failure was defined as a creatinine level > 1.5 mg/dL.\nThe patients were divided into two groups: with (N = 681) and without (N = 310)\nsignificant coronary lesion. 
For Receiver Operating Characteristic (ROC) curve\nanalysis, the patients were divided into two other groups: with (N = 184) and\nwithout (N = 807) chronic renal failure.\nThe commercial ADVIA Centaur® TnI-Ultra assay (Siemens\nHealthcare Diagnostics, Tarrytown, NY, USA) was used for current sensitive\ntroponin with a 99th percentile value of 0.04 ng/mL. The flowchart of\nthe management of all patients with chest pain met the criteria established by\nthe last American Heart Association guideline.7-9\nNon-ST-elevation acute coronary syndrome was defined as presence of chest pain\nassociated with electrocardiographic changes or troponin elevation/drop on\nadmission or, in the lack thereof, clinical findings and risk factors compatible\nwith unstable angina (chest pain at rest or on minimal exertion, of severe\nintensity or occurring in a crescendo pattern). The highest\ntroponin level during hospitalization before coronary angiography was considered\nfor analysis, following the every 6-hour marker collection protocol of the\ninstitution.\nThe following data were obtained: age, sex, presence of diabetes mellitus,\nsystemic arterial hypertension, smoking habit, dyslipidemia, family history of\nearly coronary artery disease, chronic coronary artery disease, previous acute\nmyocardial infarction, creatinine, ST-segment depression or T-wave inversion on\nthe electrocardiogram.\nThis study was submitted to the Ethics Committee in Research and approved by it.\nAll patients provided written informed consent.\nThis is a retrospective, single-center, observational study, including 991\npatients with chest pain admitted to the emergency department of a\nhigh-complexity tertiary cardiology center, between May 2013 and May 2015.\nAll patients with chest pain undergoing coronary angiography for suspected\nunstable angina or non-ST-elevation acute myocardial infarction were included.\nPresence of ST-segment elevation was the only exclusion criterion. 
The coronary\nlesion was considered significant when ≥ 70% on coronary angiography.\nChronic renal failure was defined as a creatinine level > 1.5 mg/dL.\nThe patients were divided into two groups: with (N = 681) and without (N = 310)\nsignificant coronary lesion. For Receiver Operating Characteristic (ROC) curve\nanalysis, the patients were divided into two other groups: with (N = 184) and\nwithout (N = 807) chronic renal failure.\nThe commercial ADVIA Centaur® TnI-Ultra assay (Siemens\nHealthcare Diagnostics, Tarrytown, NY, USA) was used for current sensitive\ntroponin with a 99th percentile value of 0.04 ng/mL. The flowchart of\nthe management of all patients with chest pain met the criteria established by\nthe last American Heart Association guideline.7-9\nNon-ST-elevation acute coronary syndrome was defined as presence of chest pain\nassociated with electrocardiographic changes or troponin elevation/drop on\nadmission or, in the lack thereof, clinical findings and risk factors compatible\nwith unstable angina (chest pain at rest or on minimal exertion, of severe\nintensity or occurring in a crescendo pattern). 
The highest\ntroponin level during hospitalization before coronary angiography was considered\nfor analysis, following the every 6-hour marker collection protocol of the\ninstitution.\nThe following data were obtained: age, sex, presence of diabetes mellitus,\nsystemic arterial hypertension, smoking habit, dyslipidemia, family history of\nearly coronary artery disease, chronic coronary artery disease, previous acute\nmyocardial infarction, creatinine, ST-segment depression or T-wave inversion on\nthe electrocardiogram.\nThis study was submitted to the Ethics Committee in Research and approved by it.\nAll patients provided written informed consent.\n Statistical analysis The ROC curve analysis was performed to identify the sensitivity and specificity\nof the best cutoff point of troponin as a discriminator of the probability of\nsignificant coronary lesion, and 95% confidence interval (CI) was used. That\nanalysis was performed for the general population and separately for patients\nwith and without chronic renal failure.\nDescriptive analysis of the categorical variables was performed by use of\npercentages. Continuous variables with non-normal distribution were expressed as\nmedians and interquartile intervals, and those with normal distribution, as\nmeans and standard deviations. The comparison between groups was performed by\nuse of the chi-square test for categorical variables. The continuous variables,\nwhen the Kolmogorov-Smirnov test showed normal distribution, were assessed by\nusing the unpaired T test, and when the distribution was not normal, the\nMann-Whitney U test was used. Both troponin cutoff points analyzed (the\n99th percentile of the method and the best cutoff point found in\nthis study) were entered into the univariate analysis. Comparison between\npatients with versus without significant coronary lesion was\nperformed.\nMultivariate analysis was performed with logistic regression, p < 0.05 being\nthe significance level adopted. 
All baseline characteristics listed in Table 1 that reached statistical\nsignificance on univariate analysis were considered as variables in the\nanalysis. Multivariate analysis was performed separately for each troponin\ncutoff point assessed (the 99th percentile of the method and the best\ncutoff point found in this study).\nBaseline characteristics and univariate analysis comparing patients with\nversus without significant coronary lesion\nFH: family history; CAD: coronary artery disease; AMI: acute\nmyocardial infarction;\nchi square test;\nunpaired T test;\nMann-Whitney U test.\nThe calculations were performed with the SPSS software, version 10.0.\nThe ROC curve analysis was performed to identify the sensitivity and specificity\nof the best cutoff point of troponin as a discriminator of the probability of\nsignificant coronary lesion, and 95% confidence interval (CI) was used. That\nanalysis was performed for the general population and separately for patients\nwith and without chronic renal failure.\nDescriptive analysis of the categorical variables was performed by use of\npercentages. Continuous variables with non-normal distribution were expressed as\nmedians and interquartile intervals, and those with normal distribution, as\nmeans and standard deviations. The comparison between groups was performed by\nuse of the chi-square test for categorical variables. The continuous variables,\nwhen the Kolmogorov-Smirnov test showed normal distribution, were assessed by\nusing the unpaired T test, and when the distribution was not normal, the\nMann-Whitney U test was used. Both troponin cutoff points analyzed (the\n99th percentile of the method and the best cutoff point found in\nthis study) were entered into the univariate analysis. Comparison between\npatients with versus without significant coronary lesion was\nperformed.\nMultivariate analysis was performed with logistic regression, p < 0.05 being\nthe significance level adopted. 
All baseline characteristics listed in Table 1 that reached statistical\nsignificance on univariate analysis were considered as variables in the\nanalysis. Multivariate analysis was performed separately for each troponin\ncutoff point assessed (the 99th percentile of the method and the best\ncutoff point found in this study).\nBaseline characteristics and univariate analysis comparing patients with\nversus without significant coronary lesion\nFH: family history; CAD: coronary artery disease; AMI: acute\nmyocardial infarction;\nchi square test;\nunpaired T test;\nMann-Whitney U test.\nThe calculations were performed with the SPSS software, version 10.0.", "This is a retrospective, single-center, observational study, including 991\npatients with chest pain admitted to the emergency department of a\nhigh-complexity tertiary cardiology center, between May 2013 and May 2015.\nAll patients with chest pain undergoing coronary angiography for suspected\nunstable angina or non-ST-elevation acute myocardial infarction were included.\nPresence of ST-segment elevation was the only exclusion criterion. The coronary\nlesion was considered significant when ≥ 70% on coronary angiography.\nChronic renal failure was defined as a creatinine level > 1.5 mg/dL.\nThe patients were divided into two groups: with (N = 681) and without (N = 310)\nsignificant coronary lesion. For Receiver Operating Characteristic (ROC) curve\nanalysis, the patients were divided into two other groups: with (N = 184) and\nwithout (N = 807) chronic renal failure.\nThe commercial ADVIA Centaur® TnI-Ultra assay (Siemens\nHealthcare Diagnostics, Tarrytown, NY, USA) was used for current sensitive\ntroponin with a 99th percentile value of 0.04 ng/mL. 
The flowchart of\nthe management of all patients with chest pain met the criteria established by\nthe last American Heart Association guideline.7-9\nNon-ST-elevation acute coronary syndrome was defined as presence of chest pain\nassociated with electrocardiographic changes or troponin elevation/drop on\nadmission or, in the lack thereof, clinical findings and risk factors compatible\nwith unstable angina (chest pain at rest or on minimal exertion, of severe\nintensity or occurring in a crescendo pattern). The highest\ntroponin level during hospitalization before coronary angiography was considered\nfor analysis, following the every 6-hour marker collection protocol of the\ninstitution.\nThe following data were obtained: age, sex, presence of diabetes mellitus,\nsystemic arterial hypertension, smoking habit, dyslipidemia, family history of\nearly coronary artery disease, chronic coronary artery disease, previous acute\nmyocardial infarction, creatinine, ST-segment depression or T-wave inversion on\nthe electrocardiogram.\nThis study was submitted to the Ethics Committee in Research and approved by it.\nAll patients provided written informed consent.", "The ROC curve analysis was performed to identify the sensitivity and specificity\nof the best cutoff point of troponin as a discriminator of the probability of\nsignificant coronary lesion, and 95% confidence interval (CI) was used. That\nanalysis was performed for the general population and separately for patients\nwith and without chronic renal failure.\nDescriptive analysis of the categorical variables was performed by use of\npercentages. Continuous variables with non-normal distribution were expressed as\nmedians and interquartile intervals, and those with normal distribution, as\nmeans and standard deviations. The comparison between groups was performed by\nuse of the chi-square test for categorical variables. 
The continuous variables,\nwhen the Kolmogorov-Smirnov test showed normal distribution, were assessed by\nusing the unpaired T test, and when the distribution was not normal, the\nMann-Whitney U test was used. Both troponin cutoff points analyzed (the\n99th percentile of the method and the best cutoff point found in\nthis study) were entered into the univariate analysis. Comparison between\npatients with versus without significant coronary lesion was\nperformed.\nMultivariate analysis was performed with logistic regression, p < 0.05 being\nthe significance level adopted. All baseline characteristics listed in Table 1 that reached statistical\nsignificance on univariate analysis were considered as variables in the\nanalysis. Multivariate analysis was performed separately for each troponin\ncutoff point assessed (the 99th percentile of the method and the best\ncutoff point found in this study).\nBaseline characteristics and univariate analysis comparing patients with\nversus without significant coronary lesion\nFH: family history; CAD: coronary artery disease; AMI: acute\nmyocardial infarction;\nchi square test;\nunpaired T test;\nMann-Whitney U test.\nThe calculations were performed with the SPSS software, version 10.0.", "The median age was 63 years, and 52% of the patients were of the male sex. The area\nunder the ROC curve between the troponin levels and significant coronary lesions was\n0.685 (95% CI: 0.65 - 0.72). In patients with or without renal failure, the areas\nunder the ROC curve were 0.703 (95% CI: 0.66 - 0.74) and 0.608 (95% CI: 0.52 -\n0.70), respectively. 
The best cutoff points to discriminate the presence of\nsignificant coronary lesion were: in the general population, 0.605 ng/dL\n(sensitivity, 63.4%; specificity, 67%; positive predictive value, 65.9%; negative\npredictive value, 64.7%; accuracy, 65.3%; and likelihood ratio, 1.9); in patients\nwithout renal failure, 0.605 ng/dL (sensitivity, 62.7%; specificity, 71%; accuracy,\n66.9%; and likelihood ratio, 2.2); and in patients with chronic renal failure, 0.515\nng/dL (sensitivity, 80.6%; specificity, 42%; accuracy, 61.3%; and likelihood ratio,\n1.4) (Figure 1). In the general population, the\nlevel of 0.05 ng/dL (immediately above the 99th percentile) showed\nsensitivity of 93.7% and specificity of 23%. For patients with chronic renal failure\nto reach a specificity of 67% (as in the general population), an elevation in the\ntroponin level to 1.58 ng/dL was necessary.\n\nFigure 1ROC curve identifying the sensitivity and the specificity of the best\ncutoff point of troponin as a discriminator of the probability of\nsignificant coronary lesion. AUC: area under the curve.\n\nROC curve identifying the sensitivity and the specificity of the best\ncutoff point of troponin as a discriminator of the probability of\nsignificant coronary lesion. AUC: area under the curve.\nTroponin was negative in 143 patients, and, in 40.6% of them, significant lesions\nwere observed on coronary angiography. In addition, 10.5% of those patients with\nnegative troponin showed ST-segment depression/T-wave inversion on\nelectrocardiogram. Using the gold-standard procedure of cardiac catheterization, the\nacute coronary syndrome diagnosis was confirmed in 68.7% of the patients admitted\ndue to chest pain. In 9.1% of those without significant coronary lesion on coronary\nangiography and with positive troponin, the acute coronary syndrome diagnosis was\nconfirmed by cardiac magnetic resonance. 
The baseline characteristics of the\npopulation studied and the univariate analysis between the groups are shown in Table 1.\nIn multivariate analysis, considering the 99th percentile of the method,\nthere were significant differences between the groups with and without coronary\nlesion regarding smoking habit (OR = 1.58, p = 0.002), ST-segment depression/T-wave\ninversion (OR = 2.05, p < 0.0001) and troponin positivity (OR = 3.39, p <\n0.0001), respectively. However, when considering the best troponin cutoff point\nfound in this study, there were significant differences between the groups with and\nwithout coronary lesion regarding the male sex (OR = 1.35, p = 0.039), smoking habit\n(OR = 1.64, p = 0.001), ST-segment depression/T-wave inversion (OR = 2.22, p <\n0.0001) and troponin positivity (OR = 3.39, p < 0.0001), respectively. The\nmultivariate analysis results are shown in Table\n2.\nMultivariate analysis comparing patients with versus without\nsignificant coronary lesion: A. Using the 99th percentile of the\ntroponin assay; B. using the best cutoff point for troponin found in the\nstudy\nOR: odds ratio; CI: confidence interval.", "The results of this study in the Brazilian population are in accordance with those of\nrecently published literature. Troponin positivity without association with coronary\nangiographic findings was observed in 31.3% of the patients. In addition, better\nspecificity values were only achieved with a troponin cutoff point of 0.605 ng/dL,\napproximately 15 times the 99th percentile of the method. 
When assessing\nthe subgroup with renal failure, that level is even higher, hindering its correct\ninterpretation.\nIn a study published in 2012 derived from the Scottish Heart Health Extended Cohort,\nblood samples were collected and high-sensitivity troponin I levels were measured.\nThe results showed that, in a population of 15340 individuals, 31.7% of the men and\n18.1% of the women had high high-sensitivity troponin with no clinical manifestation\nat the time of blood collection, highlighting the problem of the specificity of the\nmethod. Positivity and worse prognosis were correlated in the long run (p <\n0.0001), as reported in other studies.4,10-12 That prevalence of troponin positivity not related\nto acute coronary artery disease is similar to that found in our study, although we\nassessed specifically patients with chest pain.\nLikewise, a prospective cohort study of 6304 patients with chest pain presenting to\nthe emergency department has reported positive high-sensitivity troponin T in 39% of\nthe cases diagnosed as non-coronary.13\nIrfan et al.14 have conducted an\nobservational multicenter study with 1181 patients hospitalized because of\nnon-cardiac causes, 15% of whom had positive high-sensitivity troponin T. Of the\nmajor factors related to that unexpected elevation, the presence of kidney\ndysfunction was identified as a significantly influencing factor. In addition, once\nagain, patients with elevated troponin were at higher risk for death (HR = 3.0; p =\n0.02).14\nIn individuals older than 75 years, high-sensitivity troponin T was assessed in the\ncontext of chest pain, being measured at baseline and 3-4 hours. Approximately 27%\nof the patients were classified as having acute coronary syndrome. The sensitivity\nand specificity found in that population were 88% and 38%, respectively. 
The greater the initial level or the increase (mainly absolute) in the subsequent measures, the higher the specificity found.15 That specificity value is greater than the one we found in the general population, probably because our sample, drawn from a referral tertiary cardiology center, included more patients with other heart diseases. The concept of variation in the levels of sensitive troponin and high-sensitivity troponin in different measurements has been studied, and establishing a correlation between the amplitude of variability and the probability of coronary artery disease has been consecutively attempted. In addition, amplitude can be relative (expressed as percentages) or absolute, with possible implications and distinct interpretations.1 A retrospective study published in 2014, including 1054 patients with chest pain, assessed the variability related to high-sensitivity troponin T. Approximately 40% of the patients showed alteration in at least one measurement. Even with a variation greater than 20% as compared to the initial level, the specificity did not exceed 70%.16 Assessing specifically the same current sensitive troponin assay used in this study, in 2013 Bonaca et al.17 published a study comparing current sensitive troponin I versus high-sensitivity troponin I in 381 patients with chest pain at the emergency department. Those authors found sensitivity values for the two assays of 94% and 97%, and negative predictive values of 98% and 99%, respectively, with no significant difference.17 Another similar study of 1807 patients with non-ST-segment elevation acute coronary syndrome has shown no significant difference regarding prognosis when comparing the positivity of current sensitive troponin I versus high-sensitivity troponin I.18 Differently from the findings of those studies and using the same assay, ours showed a much lower specificity, of only 23%, when using the 99th percentile of the method. 
That shows the importance of assessing each center’s population, respecting their specific individualities. In alignment with that, the meta-analysis published in 2014 with 17 studies and 8644 patients with chest pain compared the use of high-sensitivity troponin with that of conventional troponin. There were differences regarding sensitivity (88.4% vs. 74.9%; p < 0.001) and specificity (81.6% vs. 93.8%; p < 0.001), respectively. Despite that increase in sensitivity with high-sensitivity troponin, the number of patients with the final diagnosis of myocardial infarction and the need for additional tests for ischemia did not differ between the groups, showing no additional clinical advantage with the use of high-sensitivity troponin.2 Finally, some studies have validated the new troponin assays.1,19,20 The study conducted in 2015 compared seven assays of current sensitive troponins and high-sensitivity troponin in 2813 patients with chest pain, and with (16%) or without kidney dysfunction. Of the patients with nephropathy, in only 45-80% of those with positive troponin, the final diagnosis was myocardial infarction. The optimal cutoff point varied from 1.9 to 3.4 times that of the general population to detect acute coronary artery disease. Assessing only the same current sensitive troponin assay used in this study, in 27% of those with positive troponin, the final diagnosis of myocardial infarction was ruled out. The area under the curve of accuracy of that assay decreased from 0.92 to 0.87 (p = 0.013), comparing the general population with the patients with kidney dysfunction.19 That cutoff point elevation is in accordance with our findings, showing a clear specificity reduction in the group of patients with nephropathy. Limitations: Despite the large case series, this is a retrospective (hindering the blinded analysis) single-center study, with a much higher number of patients without chronic renal failure than with it. 
In addition, we used only one troponin assay, and most patients were of the male sex.", "Despite the large case series, this is a retrospective (hindering the blinded analysis) single-center study, with a much higher number of patients without chronic renal failure than with it. In addition, we used only one troponin assay, and most patients were of the male sex.", "In the study population of patients with chest pain, sensitive troponin I showed a good correlation with significant coronary lesions when its level was greater than 0.605 ng/dL. In patients with chronic renal failure, a significant decrease in specificity was observed in the correlation of troponin levels and severe coronary lesions." ]
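The performance measures quoted throughout this discussion (sensitivity, specificity, predictive values, accuracy, likelihood ratio) all derive from a 2×2 table of troponin positivity versus significant coronary lesion. A minimal sketch, using hypothetical counts rather than the study's data:

```python
# Diagnostic performance measures from a 2x2 table.
# tp/fp/fn/tn counts below are hypothetical, for illustration only.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    return {
        "sens": sens,
        "spec": spec,
        "ppv": tp / (tp + fp),             # positive predictive value
        "npv": tn / (tn + fn),             # negative predictive value
        "acc": (tp + tn) / (tp + fp + fn + tn),
        "lr+": sens / (1 - spec),          # positive likelihood ratio
    }

m = diagnostic_metrics(tp=80, fp=30, fn=20, tn=70)
```

With these invented counts the sketch gives sensitivity 0.80 and specificity 0.70; the positive likelihood ratio of about 2.7 reads as "a positive test multiplies the odds of disease by roughly 2.7".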
[ "intro", "methods", null, null, "results", "discussion", null, "conclusions" ]
[ "Troponin I", "Chest Pain", "Coronary Artery Disease", "Renal Insufficiency, Chronic", "Biomarkers" ]
Introduction: In recent years, cardiology has witnessed the constant development of several biomarkers, of which, current sensitive troponins and high-sensitivity troponins, widespread in Brazil and Europe, stand out.1 However, despite the huge gain in sensitivity, allowing early detection of a minimum threshold of myocardial lesion in patients presenting to the emergency department with chest pain, there was a reduction in specificity, which resulted in several patients with non-cardiological or non-coronary problems undergoing unnecessary and even harmful antithrombotic therapy and invasive coronary stratification.2-5 The adequate troponin level to be considered for the correct interpretation of clinical findings depends on the patient’s characteristics and on the troponin assay used, and should be ideally individualized for each service.2-4,6 Thus, this study was aimed at assessing the current sensitive troponin I levels for patients with chest pain, in addition to relating them to the existence of significant coronary lesions both in the presence and absence of chronic renal failure in the sample selected. Methods: Study population: This is a retrospective, single-center, observational study, including 991 patients with chest pain admitted to the emergency department of a high-complexity tertiary cardiology center, between May 2013 and May 2015. All patients with chest pain undergoing coronary angiography for suspected unstable angina or non-ST-elevation acute myocardial infarction were included. Presence of ST-segment elevation was the only exclusion criterion. The coronary lesion was considered significant when ≥ 70% on coronary angiography. Chronic renal failure was defined as a creatinine level > 1.5 mg/dL. The patients were divided into two groups: with (N = 681) and without (N = 310) significant coronary lesion. 
For Receiver Operating Characteristic (ROC) curve analysis, the patients were divided into two other groups: with (N = 184) and without (N = 807) chronic renal failure. The commercial ADVIA Centaur® TnI-Ultra assay (Siemens Healthcare Diagnostics, Tarrytown, NY, USA) was used for current sensitive troponin with a 99th percentile value of 0.04 ng/mL. The flowchart of the management of all patients with chest pain met the criteria established by the last American Heart Association guideline.7-9 Non-ST-elevation acute coronary syndrome was defined as presence of chest pain associated with electrocardiographic changes or troponin elevation/drop on admission or, in the lack thereof, clinical findings and risk factors compatible with unstable angina (chest pain at rest or on minimal exertion, of severe intensity or occurring in a crescendo pattern). The highest troponin level during hospitalization before coronary angiography was considered for analysis, following the every 6-hour marker collection protocol of the institution. The following data were obtained: age, sex, presence of diabetes mellitus, systemic arterial hypertension, smoking habit, dyslipidemia, family history of early coronary artery disease, chronic coronary artery disease, previous acute myocardial infarction, creatinine, ST-segment depression or T-wave inversion on the electrocardiogram. This study was submitted to the Ethics Committee in Research and approved by it. All patients provided written informed consent. 
Statistical analysis: The ROC curve analysis was performed to identify the sensitivity and specificity of the best cutoff point of troponin as a discriminator of the probability of significant coronary lesion, and 95% confidence interval (CI) was used. That analysis was performed for the general population and separately for patients with and without chronic renal failure. Descriptive analysis of the categorical variables was performed by use of percentages. Continuous variables with non-normal distribution were expressed as medians and interquartile intervals, and those with normal distribution, as means and standard deviations. The comparison between groups was performed by use of the chi-square test for categorical variables. The continuous variables, when the Kolmogorov-Smirnov test showed normal distribution, were assessed by using the unpaired T test, and when the distribution was not normal, the Mann-Whitney U test was used. Both troponin cutoff points analyzed (the 99th percentile of the method and the best cutoff point found in this study) were entered into the univariate analysis. Comparison between patients with versus without significant coronary lesion was performed. Multivariate analysis was performed with logistic regression, p < 0.05 being the significance level adopted. All baseline characteristics listed in Table 1 that reached statistical significance on univariate analysis were considered as variables in the analysis. Multivariate analysis was performed separately for each troponin cutoff point assessed (the 99th percentile of the method and the best cutoff point found in this study). Baseline characteristics and univariate analysis comparing patients with versus without significant coronary lesion. FH: family history; CAD: coronary artery disease; AMI: acute myocardial infarction; chi square test; unpaired T test; Mann-Whitney U test. The calculations were performed with the SPSS software, version 10.0. 
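The "best cutoff point" from a ROC curve is not uniquely defined; a common rule, assumed here since the text does not name one, is to take the threshold maximizing Youden's J (sensitivity + specificity − 1). A pure-Python sketch of that sweep on made-up scores and labels (the actual analysis was done in SPSS):

```python
# Sweep candidate thresholds and keep the one maximizing Youden's J.
# Scores/labels are invented; label 1 = significant coronary lesion.
def best_cutoff(scores, labels):
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
        fn = sum(s < t and y == 1 for s, y in zip(scores, labels))
        tn = sum(s < t and y == 0 for s, y in zip(scores, labels))
        fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
        j = tp / (tp + fn) + tn / (tn + fp) - 1  # sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = [0.02, 0.03, 0.05, 0.10, 0.40, 0.61, 0.90, 1.50]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
cutoff, youden = best_cutoff(scores, labels)
```

On real data one would also plot the full curve and report the area under it, as done for Figure 1.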
Results: The median age was 63 years, and 52% of the patients were of the male sex. The area under the ROC curve between the troponin levels and significant coronary lesions was 0.685 (95% CI: 0.65 - 0.72). In patients with or without renal failure, the areas under the ROC curve were 0.703 (95% CI: 0.66 - 0.74) and 0.608 (95% CI: 0.52 - 0.70), respectively. The best cutoff points to discriminate the presence of significant coronary lesion were: in the general population, 0.605 ng/dL (sensitivity, 63.4%; specificity, 67%; positive predictive value, 65.9%; negative predictive value, 64.7%; accuracy, 65.3%; and likelihood ratio, 1.9); in patients without renal failure, 0.605 ng/dL (sensitivity, 62.7%; specificity, 71%; accuracy, 66.9%; and likelihood ratio, 2.2); and in patients with chronic renal failure, 0.515 ng/dL (sensitivity, 80.6%; specificity, 42%; accuracy, 61.3%; and likelihood ratio, 1.4) (Figure 1). In the general population, the level of 0.05 ng/dL (immediately above the 99th percentile) showed sensitivity of 93.7% and specificity of 23%. For patients with chronic renal failure to reach a specificity of 67% (as in the general population), an elevation in the troponin level to 1.58 ng/dL was necessary. Figure 1: ROC curve identifying the sensitivity and the specificity of the best cutoff point of troponin as a discriminator of the probability of significant coronary lesion. AUC: area under the curve. 
Troponin was negative in 143 patients, and, in 40.6% of them, significant lesions were observed on coronary angiography. In addition, 10.5% of those patients with negative troponin showed ST-segment depression/T-wave inversion on electrocardiogram. Using the gold-standard procedure of cardiac catheterization, the acute coronary syndrome diagnosis was confirmed in 68.7% of the patients admitted due to chest pain. In 9.1% of those without significant coronary lesion on coronary angiography and with positive troponin, the acute coronary syndrome diagnosis was confirmed by cardiac magnetic resonance. The baseline characteristics of the population studied and the univariate analysis between the groups are shown in Table 1. In multivariate analysis, considering the 99th percentile of the method, there were significant differences between the groups with and without coronary lesion regarding smoking habit (OR = 1.58, p = 0.002), ST-segment depression/T-wave inversion (OR = 2.05, p < 0.0001) and troponin positivity (OR = 3.39, p < 0.0001), respectively. However, when considering the best troponin cutoff point found in this study, there were significant differences between the groups with and without coronary lesion regarding the male sex (OR = 1.35, p = 0.039), smoking habit (OR = 1.64, p = 0.001), ST-segment depression/T-wave inversion (OR = 2.22, p < 0.0001) and troponin positivity (OR = 3.39, p < 0.0001), respectively. The multivariate analysis results are shown in Table 2. Multivariate analysis comparing patients with versus without significant coronary lesion: A. Using the 99th percentile of the troponin assay; B. using the best cutoff point for troponin found in the study. OR: odds ratio; CI: confidence interval. 
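The odds ratios above come from logistic regression; for a single binary factor such as smoking, the unadjusted OR and a Wald-type 95% CI can be obtained directly from a 2×2 table. The counts below are invented for illustration, not taken from the study:

```python
import math

# Unadjusted odds ratio with a Wald 95% CI from a 2x2 table (hypothetical counts).
# a: exposed with lesion, b: exposed without, c: unexposed with, d: unexposed without.
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=50, b=20, c=30, d=40)
```

A CI excluding 1 corresponds to p < 0.05 for that factor; the adjusted ORs in Table 2 additionally control for the other baseline variables that were significant on univariate analysis.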
Discussion: The results of this study in the Brazilian population are in accordance with those of recently published literature. Troponin positivity without association with coronary angiographic findings was observed in 31.3% of the patients. In addition, better specificity values were only achieved with a troponin cutoff point of 0.605 ng/dL, approximately 15 times the 99th percentile of the method. When assessing the subgroup with renal failure, that level is even higher, hindering its correct interpretation. In a study published in 2012 derived from the Scottish Heart Health Extended Cohort, blood samples were collected and high-sensitivity troponin I levels were measured. The results showed that, in a population of 15340 individuals, 31.7% of the men and 18.1% of the women had high high-sensitivity troponin with no clinical manifestation at the time of blood collection, highlighting the problem of the specificity of the method. Positivity and worse prognosis were correlated in the long run (p < 0.0001), as reported in other studies.4,10-12 That prevalence of troponin positivity not related to acute coronary artery disease is similar to that found in our study, although we assessed specifically patients with chest pain. Likewise, a prospective cohort study of 6304 patients with chest pain presenting to the emergency department has reported positive high-sensitivity troponin T in 39% of the cases diagnosed as non-coronary.13 Irfan et al.14 have conducted an observational multicenter study with 1181 patients hospitalized because of non-cardiac causes, 15% of whom had positive high-sensitivity troponin T. Of the major factors related to that unexpected elevation, the presence of kidney dysfunction was identified as a significantly influencing factor. 
In addition, once again, patients with elevated troponin were at higher risk for death (HR = 3.0; p = 0.02).14 In individuals older than 75 years, high-sensitivity troponin T was assessed in the context of chest pain, being measured at baseline and 3-4 hours. Approximately 27% of the patients were classified as having acute coronary syndrome. The sensitivity and specificity found in that population were 88% and 38%, respectively. The greater the initial level or the increase (mainly absolute) in the subsequent measures, the higher the specificity found.15 That specificity value is greater than the one we found in the general population, probably because our sample, drawn from a referral tertiary cardiology center, included more patients with other heart diseases. The concept of variation in the levels of sensitive troponin and high-sensitivity troponin in different measurements has been studied, and establishing a correlation between the amplitude of variability and the probability of coronary artery disease has been consecutively attempted. In addition, amplitude can be relative (expressed as percentages) or absolute, with possible implications and distinct interpretations.1 A retrospective study published in 2014, including 1054 patients with chest pain, assessed the variability related to high-sensitivity troponin T. Approximately 40% of the patients showed alteration in at least one measurement. Even with a variation greater than 20% as compared to the initial level, the specificity did not exceed 70%.16 Assessing specifically the same current sensitive troponin assay used in this study, in 2013 Bonaca et al.17 published a study comparing current sensitive troponin I versus high-sensitivity troponin I in 381 patients with chest pain at the emergency department. 
Those authors found sensitivity values for the two assays of 94% and 97%, and negative predictive values of 98% and 99%, respectively, with no significant difference.17 Another similar study of 1807 patients with non-ST-segment elevation acute coronary syndrome has shown no significant difference regarding prognosis when comparing the positivity of current sensitive troponin I versus high-sensitivity troponin I.18 Differently from the findings of those studies and using the same assay, ours showed a much lower specificity, of only 23%, when using the 99th percentile of the method. That shows the importance of assessing each center’s population, respecting their specific individualities. In alignment with that, the meta-analysis published in 2014 with 17 studies and 8644 patients with chest pain compared the use of high-sensitivity troponin with that of conventional troponin. There were differences regarding sensitivity (88.4% vs. 74.9%; p < 0.001) and specificity (81.6% vs. 93.8%; p < 0.001), respectively. Despite that increase in sensitivity with high-sensitivity troponin, the number of patients with the final diagnosis of myocardial infarction and the need for additional tests for ischemia did not differ between the groups, showing no additional clinical advantage with the use of high-sensitivity troponin.2 Finally, some studies have validated the new troponin assays.1,19,20 The study conducted in 2015 compared seven assays of current sensitive troponins and high-sensitivity troponin in 2813 patients with chest pain, and with (16%) or without kidney dysfunction. Of the patients with nephropathy, in only 45-80% of those with positive troponin, the final diagnosis was myocardial infarction. The optimal cutoff point varied from 1.9 to 3.4 times that of the general population to detect acute coronary artery disease. 
Assessing only the same current sensitive troponin assay used in this study, in 27% of those with positive troponin, the final diagnosis of myocardial infarction was ruled out. The area under the curve of accuracy of that assay decreased from 0.92 to 0.87 (p = 0.013), comparing the general population with the patients with kidney dysfunction.19 That cutoff point elevation is in accordance with our findings, showing a clear specificity reduction in the group of patients with nephropathy. Limitations: Despite the large case series, this is a retrospective (hindering the blinded analysis) single-center study, with a much higher number of patients without chronic renal failure than with it. In addition, we used only one troponin assay, and most patients were of the male sex. Conclusion: In the study population of patients with chest pain, sensitive troponin I showed a good correlation with significant coronary lesions when its level was greater than 0.605 ng/dL. In patients with chronic renal failure, a significant decrease in specificity was observed in the correlation of troponin levels and severe coronary lesions.
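The methods compare non-normally distributed continuous variables (such as troponin) with the Mann-Whitney U test. Its core statistic simply counts, over all cross-group pairs, how often a value from one group exceeds a value from the other (ties count one half); SPSS, used in the study, adds the p-value on top. A tiny sketch with made-up troponin levels:

```python
# Mann-Whitney U statistic, pure Python; all values below are invented.
def mann_whitney_u(x, y):
    # U near len(x)*len(y) (or near 0) indicates a location shift between
    # the groups; U near len(x)*len(y)/2 indicates no shift.
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

lesion = [0.61, 0.90, 1.20]      # hypothetical troponin, lesion group
no_lesion = [0.02, 0.05, 0.61]   # hypothetical troponin, no-lesion group
u = mann_whitney_u(lesion, no_lesion)
```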
Background: Despite having higher sensitivity as compared to conventional troponins, sensitive troponins have lower specificity, mainly in patients with renal failure. Methods: Retrospective, single-center, observational. This study included 991 patients divided into two groups: with (N = 681) and without (N = 310) significant coronary lesion. For posterior analysis, the patients were divided into two other groups: with (N = 184) and without (N = 807) chronic renal failure. The commercial ADVIA Centaur® TnI-Ultra assay (Siemens Healthcare Diagnostics) was used. The ROC curve analysis was performed to identify the sensitivity and specificity of the best cutoff point of troponin as a discriminator of the probability of significant coronary lesion. The associations were considered significant when p < 0.05. Results: The median age was 63 years, and 52% of the patients were of the male sex. The area under the ROC curve between the troponin levels and significant coronary lesions was 0.685 (95% CI: 0.65 - 0.72). In patients with or without renal failure, the areas under the ROC curve were 0.703 (95% CI: 0.66 - 0.74) and 0.608 (95% CI: 0.52 - 0.70), respectively. The best cutoff points to discriminate the presence of significant coronary lesion were: in the general population, 0.605 ng/dL (sensitivity, 63.4%; specificity, 67%); in patients without renal failure, 0.605 ng/dL (sensitivity, 62.7%; specificity, 71%); and in patients with chronic renal failure, 0.515 ng/dL (sensitivity, 80.6%; specificity, 42%). Conclusions: In patients with chest pain, sensitive troponin I showed a good correlation with significant coronary lesions when its level was greater than 0.605 ng/dL. In patients with chronic renal failure, a significant decrease in specificity was observed in the correlation of troponin levels and severe coronary lesions.
Introduction: In recent years, cardiology has witnessed the constant development of several biomarkers, of which, current sensitive troponins and high-sensitivity troponins, widespread in Brazil and Europe, stand out.1 However, despite the huge gain in sensitivity, allowing early detection of a minimum threshold of myocardial lesion in patients presenting to the emergency department with chest pain, there was a reduction in specificity, which resulted in several patients with non-cardiological or non-coronary problems undergoing unnecessary and even harmful antithrombotic therapy and invasive coronary stratification.2-5 The adequate troponin level to be considered for the correct interpretation of clinical findings depends on the patient’s characteristics and on the troponin assay used, and should be ideally individualized for each service.2-4,6 Thus, this study was aimed at assessing the current sensitive troponin I levels for patients with chest pain, in addition to relating them to the existence of significant coronary lesions both in the presence and absence of chronic renal failure in the sample selected. Conclusion: In the study population of patients with chest pain, sensitive troponin I showed a good correlation with significant coronary lesions when its level was greater than 0.605 ng/dL. In patients with chronic renal failure, a significant decrease in specificity was observed in the correlation of troponin levels and severe coronary lesions.
4,648
377
8
[ "troponin", "patients", "coronary", "analysis", "study", "significant", "sensitivity", "chest pain", "chest", "pain" ]
[ "test", "test" ]
[CONTENT] Troponin I | Chest Pain | Coronary Artery Disease | Renal Insufficiency, Chronic | Biomarkers [SUMMARY]
[CONTENT] Biomarkers | Chest Pain | Coronary Disease | Female | Humans | Kidney Failure, Chronic | Male | Middle Aged | ROC Curve | Retrospective Studies | Sensitivity and Specificity | Troponin I [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] troponin | patients | coronary | analysis | study | significant | sensitivity | chest pain | chest | pain [SUMMARY]
[CONTENT] troponins | coronary | current sensitive | current | troponin | patients | non | sensitivity | sensitive | resulted patients [SUMMARY]
[CONTENT] performed | test | analysis | coronary | variables | patients | normal | distribution | analysis performed | cutoff [SUMMARY]
[CONTENT] coronary | troponin | coronary lesion | significant | ratio | lesion | best | ng dl | patients | specificity [SUMMARY]
[CONTENT] correlation | lesions | coronary lesions | troponin levels severe coronary | pain sensitive troponin | good correlation | good | population patients chest pain | population patients chest | pain sensitive [SUMMARY]
[CONTENT] patients | coronary | troponin | analysis | performed | test | significant | sensitivity | pain | chest pain [SUMMARY]
[CONTENT] ||| 991 | two | 681 | 310 ||| two | 184 | 807 ||| Siemens Healthcare Diagnostics ||| ROC ||| [SUMMARY]
[CONTENT] 63 years | 52% ||| ROC | 0.685 | 95% | CI | 0.65 - 0.72 ||| ROC | 0.703 | 95% | CI | 0.66 | 0.608 | 95% | CI | 0.52 - 0.70 ||| 0.605 ng/dL | 63.4% | 67% | 0.605 ng/dL | 62.7% | 71% | 0.515 ng/dL | 80.6% | 42% [SUMMARY]
[CONTENT] 0.605 ng/dL. [SUMMARY]
[CONTENT] ||| ||| 991 | two | 681 | 310 ||| two | 184 | 807 ||| Siemens Healthcare Diagnostics ||| ROC ||| ||| ||| 63 years | 52% ||| ROC | 0.685 | 95% | CI | 0.65 - 0.72 ||| ROC | 0.703 | 95% | CI | 0.66 | 0.608 | 95% | CI | 0.52 - 0.70 ||| 0.605 ng/dL | 63.4% | 67% | 0.605 ng/dL | 62.7% | 71% | 0.515 ng/dL | 80.6% | 42% ||| 0.605 ng/dL. [SUMMARY]
Impact of smoking and chewing tobacco on arsenic-induced skin lesions.
20064784
We recently reported that the main reason for the documented higher prevalence of arsenic-related skin lesions among men than among women is the result of less efficient arsenic metabolism.
BACKGROUND
We used a population-based case-referent study that showed increased risk for skin lesions in relation to chronic arsenic exposure via drinking water in Bangladesh and randomly selected 526 of the referents (random sample of inhabitants > 4 years old; 47% male) and all 504 cases (54% male) with arsenic-related skin lesions to measure arsenic metabolites [methylarsonic acid (MA) and dimethylarsinic acid (DMA)] in urine using high-performance liquid chromatography (HPLC) and inductively coupled plasma mass spectrometry (ICPMS).
METHODS
The odds ratio for skin lesions was almost three times higher in the highest tertile of urinary %MA than in the lowest tertile. Men who smoked cigarettes and bidis (locally produced cigarettes; 33% of referents, 58% of cases) had a significantly higher risk for skin lesions than did nonsmoking men; this association decreased slightly after accounting for arsenic metabolism. Only two women smoked, but women who chewed tobacco (21% of referents, 43% of cases) had a considerably higher risk of skin lesions than did women who did not use tobacco. The odds ratio (OR) for women who chewed tobacco and who had ≤ 7.9% MA was 3.8 [95% confidence interval (CI), 1.4-10] compared with women in the same MA tertile who did not use tobacco. In the highest tertile of %MA or %inorganic arsenic (iAs), women who chewed tobacco had ORs of 7.3 and 7.5, respectively, compared with women in the lowest tertiles who did not use tobacco.
RESULTS
The increased risk of arsenic-related skin lesions in male smokers compared with nonsmokers appears to be partly explained by impaired arsenic methylation, while there seemed to be an excess risk due to interaction between chewing tobacco and arsenic metabolism in women.
CONCLUSION
[ "Adult", "Arsenic", "Arsenicals", "Cacodylic Acid", "Female", "Humans", "Male", "Odds Ratio", "Regression Analysis", "Skin Diseases", "Smoking", "Tobacco, Smokeless", "Young Adult" ]
2854731
null
null
Data collection
The field teams interviewed all individuals about their history of water consumption and the water sources used, including location, during each calendar year since 1970 or since birth if later than 1970, as described previously in more detail (Rahman et al. 2006b). Data on socioeconomic status (SES) were collected from the HDSS database and were defined in terms of assets relevant to these rural settings (Rahman et al. 2006b). Information on tobacco use, obtained in the interviews, was divided into cigarette smoking, bidi (locally produced cigarettes) smoking, or chewing tobacco, the latter consisting of dried tobacco leaves or zarda (a type of chewing tobacco, often used with areca nut, slaked lime, and betel leaves). Water samples were collected from all functional wells in the area and stored at −20°C (n = 13,286). Arsenic concentrations in drinking water were determined using atomic absorption spectrometry with hydride generation, with addition of hydrochloric acid and potassium iodine combined with heating (Wahed et al. 2006). For samples with concentrations below the limit of detection (LOD; 1.0 μg/L), half the LOD was used in the calculations.
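The half-LOD substitution for non-detects described above is a one-line rule; a minimal sketch, assuming non-detects are recorded as numeric values below the LOD:

```python
LOD = 1.0  # ug/L, the limit of detection reported in the text

def substitute_half_lod(concentrations, lod=LOD):
    """Replace concentrations below the detection limit with LOD/2."""
    return [c if c >= lod else lod / 2 for c in concentrations]

# Hypothetical well-water measurements in ug/L; 0.4 is below the LOD.
print(substitute_half_lod([57.3, 0.4, 12.0]))  # [57.3, 0.5, 12.0]
```

Half-LOD substitution is a simple convention; other imputation schemes (e.g. LOD/sqrt(2) or regression on order statistics) exist, but the text specifies LOD/2.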
Results
Sex-specific characteristics of the cases and referents are presented in Table 1. The cases were more often men than women (54% vs. 46%) and older than the referents (overall, 40 vs. 31 years old). The cases also had higher SES, as measured by household asset scores, and they smoked cigarettes or bidis (men only) or used zarda more often than did the referents. The three different measures of arsenic exposure, that is, cumulative lifetime exposure to arsenic in drinking water (micrograms per liter × years), average lifetime exposure to arsenic (micrograms per liter), and current total exposure to iAs, as measured by the concentration of arsenic metabolites in urine, are also presented in Table 1. The cases had significantly higher lifetime cumulative arsenic exposure and average lifetime arsenic exposure than did the referents but not higher urinary arsenic concentrations. The number of years of tobacco use and the number of tobacco products used per day among cases and referents are shown in Supplemental Material, Table S1 (doi:10.1289/ehp.0900728). The pattern of tobacco use differed markedly between men (46% smokers, 11% chewing tobacco) and women [only two female smokers, 31% chewing tobacco (see Supplemental Material, Table S2)]. We analyzed men and women separately. The associations between the different forms of tobacco use and urinary arsenic metabolites are shown in Table 2. All forms of tobacco use were associated with an increased percentage of MA and a decreased percentage of DMA, whereas the percentage of iAs and the urinary arsenic concentration did not vary much by tobacco use. Men who smoked cigarettes or bidis had significantly higher risk for skin lesions than did men who did not use tobacco (OR = 1.8; 95% CI, 1.1–3.1; adjusted for age, SES, and cumulative arsenic exposure; Table 3). We observed a marked drop in the OR after adjusting for age, SES, and cumulative arsenic exposure (crude OR = 3.2). 
Entering the different covariables independently showed that age was the main confounding factor. The adjusted OR in men chewing tobacco was not significantly increased (adjusted OR = 0.80; 95% CI, 0.36–1.7; Table 3); however, the number of individuals was low. Because very few women smoked, we could not determine the effect on the risk for skin lesions among women. The multivariable-adjusted model that included all women who used tobacco (mainly chewing tobacco) showed that they had considerably higher prevalence of skin lesions than did women who did not use tobacco (adjusted OR = 2.4; 95% CI, 1.4–3.9; Table 3). Excluding the two women who smoked cigarettes did not change the associations. To test for the influence of arsenic methylation on the association between tobacco use and arsenic-related skin lesions, we also adjusted the OR values for %MA in urine. As shown in Table 3, the OR for men who smoked changed from 1.8 to 1.4 (95% CI, 0.80–2.4) after additional adjustment for %MA. Compared with women who did not use any tobacco, the OR for women who chewed tobacco changed from 2.4 to 2.0 (95% CI, 1.2–3.4) after adjusting for %MA and remained significantly increased. To further evaluate interactions between smoking and arsenic metabolism on the risk for skin lesions, we stratified both tobacco use and urinary arsenic metabolites and analyzed the joint effects. The increasing risk for skin lesions with increasing proportion of MA and decreasing proportion of DMA was more pronounced among nontobacco using men and women (Table 4). The OR for men who used tobacco (mostly smokers, because few chewed tobacco) with ≤ 7.9 %MA was 1.8 (95% CI, 0.53–6.3; p = 0.34), compared with men in the same tertile of %MA who did not use tobacco. Men within the highest tertile of %MA who used tobacco had an OR of 4.4 (95% CI, 1.7–11). 
This OR was essentially the same as the sum of the OR for nontobacco users in the highest %MA tertile and that for tobacco users in the lowest %MA tertile minus the baseline OR (OR values 3.8 + 1.8 − 1.0 = 4.6). Similar results were obtained for %iAs; the ORjoint was 2.8, which is approximately equal to the sum (minus 1) of the OR for nonsmokers in the highest %iAs tertile (OR 1.6) and that for smokers in the lowest %iAs (OR = 1.9). Among women, the OR for tobacco users (mainly chewing tobacco) with ≤ 7.9 %MA in urine was 3.8 (95% CI, 1.4–10), compared with nonusers of tobacco within the same tertile (p = 0.009; Table 5). Women in the highest %MA tertile who chewed tobacco had an OR of 7.3 (95% CI, 3.4–15), compared with women in the lowest %MA group who did not use tobacco. This OR was higher than the predicted additive risks (5.7), although the RERI was not statistically significant (1.5; 95% CI, −3.7 to 6.7). Similarly, the ORjoint for women in the highest %iAs tertile who used tobacco was 7.5, which was higher than the predicted additive risks (2.5). The RERI was 5.0 (95% CI, −1.3 to 11.4), the attributable proportion due to interaction was 0.67 (0.36–0.98), and the synergy index was 4.4 (1.2–16), which indicated a biologic interaction.
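The interaction measures quoted above (RERI, attributable proportion, synergy index) follow directly from the joint and single-exposure ORs. A sketch using the reported joint OR of 7.5; the split of the two single-exposure ORs (1.5 and 2.0) is hypothetical, since the text only implies their additive prediction of 2.5:

```python
def additive_interaction(or11, or10, or01):
    """Rothman-style additive-interaction measures from odds ratios.

    or11: OR for joint exposure; or10, or01: ORs for each exposure alone."""
    reri = or11 - or10 - or01 + 1                 # relative excess risk due to interaction
    ap = reri / or11                              # attributable proportion due to interaction
    s = (or11 - 1) / ((or10 - 1) + (or01 - 1))    # synergy index
    return reri, ap, s

# or11 = 7.5 is reported; or10 = 1.5 and or01 = 2.0 are a hypothetical
# split consistent with the reported additive prediction of 2.5.
reri, ap, s = additive_interaction(7.5, 1.5, 2.0)
print(round(reri, 1), round(ap, 2), round(s, 1))  # 5.0 0.67 4.3
```

The RERI of 5.0 and attributable proportion of 0.67 reproduce the published values; the synergy index comes out near the published 4.4, which was presumably computed from unrounded estimates.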
null
null
[ "Study population", "Arsenic exposure", "Urine arsenic measures", "Statistical analyses" ]
[ "This study is part of a population-based case–referent study concerning the development of skin lesions in relation to arsenic exposure via drinking water carried out in Matlab, a rural area 53 km southeast of Dhaka, Bangladesh (Rahman et al. 2006a, 2006b). More than 60% of the tube wells in this area have concentrations above 50 μg/L, which is the Bangladeshi standard for drinking water, and 70% are above the WHO maximum guideline of 10 μg As/L. In Matlab, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) has been running a comprehensive Health and Demographic Surveillance System (HDSS) that covers a population of 220,000.\nWe obtained informed consent from all participants, and the study was approved by both the ICDDR,B Ethical Review Committee and the ethics committee at the Karolinska Institutet in Stockholm. Mitigating activities such as painting wells with elevated arsenic concentrations and installing filters were initiated as described elsewhere (Jakariya et al. 2005; Rahman et al. 2006b).", "Both the average and the cumulative historical arsenic exposure were calculated as the time-weighted mean arsenic concentration of drinking water of all sources used since 1970 or birth. The cumulative arsenic exposure was calculated by summing the arsenic concentration multiplied by the number of years of usage (micrograms per liter × years) for all water sources used since 1970. From the interviews of the participants regarding their water consumption history, we were able to collect data on which year the participant started to drink well water (after 1970) and thereby the age at first exposure to tube well water. Very few wells were constructed before 1970, at which time the registration of the wells in the HDSS began.", "Spot urine samples were collected in 20 mL polyethylene containers and stored at −20°C. 
We randomly selected 526 samples from all referents and all 504 cases for analysis of arsenic metabolites in urine for evaluation of the individual arsenic methylation efficiency. The individual pattern of arsenic metabolites in urine is shown to be fairly stable over time (Concha et al. 2002; Kile et al. 2009; Steinmaus et al. 2005). Speciation analysis of arsenic metabolites in urine was performed by an inductively coupled plasma mass spectrometer (ICP-MS; Agilent 7500ce, Agilent Technologies, Waldbronn, Germany) together with an Agilent 1100 chromatographic system equipped with solvent degasser, auto sampler, and temperature-controlled column. For the separation of trivalent arsenic [As(III)], MA, DMA, and pentavalent arsenic [As(V)], a Hamilton PRP-X100 anion exchange column (4.6 mm × 250 mm) was used (Lindberg et al. 2006, 2007). For quality control, we used a Japanese reference urine certified for arsenic (Yoshinaga et al. 2000) and a spiked urine sample. Mean concentrations of As(III), DMA, MA, and As(V) in the reference urine were 3.9 ± 0.4, 43 ± 3, 3.0 ± 0.4, and 0.1 ± 0.1 (n = 79 during an 8-week period), respectively. The mean concentrations of the spiked urine samples of As(III), DMA, MA, and As(V) were 1.5 ± 0.6, 58 ± 3, 10 ± 0.6, and 16 ± 1.3 (n = 82 during an 8-week period), respectively. The results of interlaboratory comparisons are described in more detail elsewhere (Lindberg et al. 2007). Arsenic concentrations in urine were adjusted to the average specific gravity in this population (1.012 g/cm3), as measured by a refractometer (Uricon-Ne, ATAGO Co. Ltd, Tokyo, Japan) to adjust for variation in dilution in the urine samples (Nermell et al. 2008).\nWe also measured the arsenic concentrations of various forms of cigarettes, using ICP-MS after microwave-assisted acid digestion at high temperature and pressure as previously described for blood and breast milk (Fangstrom et al. 
2008).", "We used SPSS 14.0 for Windows (SPSS Inc., Chicago, IL, USA) to perform all statistical analyses. Multivariate logistic regression analyses were performed to estimate the odds ratios (ORs) and corresponding confidence intervals (CIs) for having skin lesions at different proportions of arsenic metabolites in urine. Continuous variables were stratified into tertiles when estimating the ORs for having skin lesions. All multivariate associations were simultaneously adjusted for sex (man/woman), age (continuous), SES (continuous), tobacco use (no/yes), and cumulative arsenic exposure (continuous). We chose to adjust all models for cumulative arsenic exposure instead of average arsenic exposure or urinary arsenic concentrations. The cumulative arsenic exposure influenced the associations slightly more than did the average arsenic exposure, and adjusting for the urinary arsenic concentrations did not make any difference in the models when tested (data not shown). We calculated p for trend by treating the categorical variables as continuous variables in the models. To evaluate potential biologic interactions between the different risk factors, that is, arsenic metabolism and smoking, on arsenic-related skin lesions, we calculated the relative excess risk due to additive interactions (RERI) and corresponding CIs (Ahlbom and Alfredsson 2005). These calculations were performed according to Andersson et al. (2005), using an Excel sheet (EpiNET 2008). We used p-values < 0.05 for statistical significance." ]
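The exposure metrics defined in the methods above reduce to a few lines: cumulative exposure is the sum of concentration times years over all water sources, and average exposure is the time-weighted mean. A sketch over a hypothetical per-source drinking-water history:

```python
def exposure_metrics(history):
    """history: list of (arsenic_ug_per_L, years_used) tuples, one per water source.

    Returns (cumulative exposure in ug/L x years, time-weighted average in ug/L)."""
    cumulative = sum(conc * years for conc, years in history)
    total_years = sum(years for _, years in history)
    average = cumulative / total_years
    return cumulative, average

# Hypothetical history: 100 ug/L for 10 years, then 25 ug/L for 10 years.
print(exposure_metrics([(100, 10), (25, 10)]))  # (1250, 62.5)
```

The example values are invented; the study derived the per-source concentrations from well measurements and the years of use from the interview-based water histories.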
[ "methods", null, null, null ]
[ "Materials and Methods", "Study population", "Cases and referents recruitment", "Data collection", "Arsenic exposure", "Urine arsenic measures", "Statistical analyses", "Results", "Discussion" ]
[ " Study population This study is part of a population-based case–referent study concerning the development of skin lesions in relation to arsenic exposure via drinking water carried out in Matlab, a rural area 53 km southeast of Dhaka, Bangladesh (Rahman et al. 2006a, 2006b). More than 60% of the tube wells in this area have concentrations above 50 μg/L, which is the Bangladeshi standard for drinking water, and 70% are above the WHO maximum guideline of 10 μg As/L. In Matlab, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) has been running a comprehensive Health and Demographic Surveillance System (HDSS) that covers a population of 220,000.\nWe obtained informed consent from all participants, and the study was approved by both the ICDDR,B Ethical Review Committee and the ethics committee at the Karolinska Institutet in Stockholm. Mitigating activities such as painting wells with elevated arsenic concentrations and installing filters were initiated as described elsewhere (Jakariya et al. 2005; Rahman et al. 2006b).\nThis study is part of a population-based case–referent study concerning the development of skin lesions in relation to arsenic exposure via drinking water carried out in Matlab, a rural area 53 km southeast of Dhaka, Bangladesh (Rahman et al. 2006a, 2006b). More than 60% of the tube wells in this area have concentrations above 50 μg/L, which is the Bangladeshi standard for drinking water, and 70% are above the WHO maximum guideline of 10 μg As/L. In Matlab, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) has been running a comprehensive Health and Demographic Surveillance System (HDSS) that covers a population of 220,000.\nWe obtained informed consent from all participants, and the study was approved by both the ICDDR,B Ethical Review Committee and the ethics committee at the Karolinska Institutet in Stockholm. 
Mitigating activities such as painting wells with elevated arsenic concentrations and installing filters were initiated as described elsewhere (Jakariya et al. 2005; Rahman et al. 2006b).\n Cases and referents recruitment The recruitment of persons with skin lesions (504 cases) and the selection of referents (1,830 referents) are described elsewhere (Rahman et al. 2006a). In short, all residents > 4 years of age who had lived in the study area for at least 6 months were eligible for the present study (180,811 individuals). In total, 166,934 (92%) of the eligible individuals were interviewed and examined for skin lesions by well-trained field teams, and 1,682 suspected cases were referred to the study physicians (one male and one female) at the health centers. Eventually, 504 cases were diagnosed with arsenic-induced skin lesions (defined as hyperpigmentation, hypopigmentation, and keratosis). For details regarding the ascertainment of cases, see Rahman et al. (2006a). All cases provided urine samples. Referents were randomly selected from the HDSS database with the criteria of more than 4 years of age, living in the area for at least 6 months, and drinking water from the area at least once a week. Selected referents were interviewed and invited to attend the clinic to be examined for skin lesions by a physician and to provide urine samples. A total of 1,579 referents attended the clinic.\nThe recruitment of persons with skin lesions (504 cases) and the selection of referents (1,830 referents) are described elsewhere (Rahman et al. 2006a). In short, all residents > 4 years of age who had lived in the study area for at least 6 months were eligible for the present study (180,811 individuals). In total, 166,934 (92%) of the eligible individuals were interviewed and examined for skin lesions by well-trained field teams, and 1,682 suspected cases were referred to the study physicians (one male and one female) at the health centers. 
Eventually, 504 cases were diagnosed with arsenic-induced skin lesions (defined as hyperpigmentation, hypopigmentation, and keratosis). For details regarding the ascertainment of cases, see Rahman et al. (2006a). All cases provided urine samples. Referents were randomly selected from the HDSS database with the criteria of more than 4 years of age, living in the area for at least 6 months, and drinking water from the area at least once a week. Selected referents were interviewed and invited to attend the clinic to be examined for skin lesions by a physician and to provide urine samples. A total of 1,579 referents attended the clinic.\n Data collection The field teams interviewed all individuals about their history of water consumption and the water sources used, including location, during each calendar year since 1970 or since birth if later than 1970, as described previously in more detail (Rahman et al. 2006b). Data on socioeconomic status (SES) were collected from the HDSS database and were defined in terms of assets relevant to these rural settings (Rahman et al. 2006b). Information on tobacco use, obtained in the interviews, was divided into cigarette smoking, bidi (locally produced cigarettes) smoking, or chewing tobacco, the latter consisting of dried tobacco leaves or zarda (a type of chewing tobacco, often used with areca nut, slaked lime, and betel leaves). Water samples were collected from all functional wells in the area and stored at −20°C (n = 13,286). Arsenic concentrations in drinking water were determined using atomic absorption spectrometry with hydride generation, with addition of hydrochloric acid and potassium iodine combined with heating (Wahed et al. 2006). 
For samples with concentrations below the limit of detection (LOD; 1.0 μg/L), half the LOD was used in the calculations.\nThe field teams interviewed all individuals about their history of water consumption and the water sources used, including location, during each calendar year since 1970 or since birth if later than 1970, as described previously in more detail (Rahman et al. 2006b). Data on socioeconomic status (SES) were collected from the HDSS database and were defined in terms of assets relevant to these rural settings (Rahman et al. 2006b). Information on tobacco use, obtained in the interviews, was divided into cigarette smoking, bidi (locally produced cigarettes) smoking, or chewing tobacco, the latter consisting of dried tobacco leaves or zarda (a type of chewing tobacco, often used with areca nut, slaked lime, and betel leaves). Water samples were collected from all functional wells in the area and stored at −20°C (n = 13,286). Arsenic concentrations in drinking water were determined using atomic absorption spectrometry with hydride generation, with addition of hydrochloric acid and potassium iodine combined with heating (Wahed et al. 2006). For samples with concentrations below the limit of detection (LOD; 1.0 μg/L), half the LOD was used in the calculations.\n Arsenic exposure Both the average and the cumulative historical arsenic exposure were calculated as the time-weighted mean arsenic concentration of drinking water of all sources used since 1970 or birth. The cumulative arsenic exposure was calculated by summing the arsenic concentration multiplied by the number of years of usage (micrograms per liter × years) for all water sources used since 1970. From the interviews of the participants regarding their water consumption history, we were able to collect data on which year the participant started to drink well water (after 1970) and thereby the age at first exposure to tube well water. 
Very few wells were constructed before 1970, at which time the registration of the wells in the HDSS began.\nBoth the average and the cumulative historical arsenic exposure were calculated as the time-weighted mean arsenic concentration of drinking water of all sources used since 1970 or birth. The cumulative arsenic exposure was calculated by summing the arsenic concentration multiplied by the number of years of usage (micrograms per liter × years) for all water sources used since 1970. From the interviews of the participants regarding their water consumption history, we were able to collect data on which year the participant started to drink well water (after 1970) and thereby the age at first exposure to tube well water. Very few wells were constructed before 1970, at which time the registration of the wells in the HDSS began.\n Urine arsenic measures Spot urine samples were collected in 20 mL polyethylene containers and stored at −20°C. We randomly selected 526 samples from all referents and all 504 cases for analysis of arsenic metabolites in urine for evaluation of the individual arsenic methylation efficiency. The individual pattern of arsenic metabolites in urine is shown to be fairly stable over time (Concha et al. 2002; Kile et al. 2009; Steinmaus et al. 2005). Speciation analysis of arsenic metabolites in urine was performed by an inductively coupled plasma mass spectrometer (ICP-MS; Agilent 7500ce, Agilent Technologies, Waldbronn, Germany) together with an Agilent 1100 chromatographic system equipped with solvent degasser, auto sampler, and temperature-controlled column. For the separation of trivalent arsenic [As(III)], MA, DMA, and pentavalent arsenic [As(V)], a Hamilton PRP-X100 anion exchange column (4.6 mm × 250 mm) was used (Lindberg et al. 2006, 2007). For quality control, we used a Japanese reference urine certified for arsenic (Yoshinaga et al. 2000) and a spiked urine sample. 
Mean concentrations of As(III), DMA, MA, and As(V) in the reference urine were 3.9 ± 0.4, 43 ± 3, 3.0 ± 0.4, and 0.1 ± 0.1 (n = 79 during an 8-week period), respectively. The mean concentrations in the spiked urine samples of As(III), DMA, MA, and As(V) were 1.5 ± 0.6, 58 ± 3, 10 ± 0.6, and 16 ± 1.3 (n = 82 during an 8-week period), respectively. The results of interlaboratory comparisons are described in more detail elsewhere (Lindberg et al. 2007). Arsenic concentrations in urine were adjusted to the average specific gravity in this population (1.012 g/cm3), as measured by a refractometer (Uricon-Ne, ATAGO Co. Ltd, Tokyo, Japan), to compensate for variation in dilution of the urine samples (Nermell et al. 2008).

We also measured the arsenic concentrations of various forms of cigarettes, using ICP-MS after microwave-assisted acid digestion at high temperature and pressure, as previously described for blood and breast milk (Fangstrom et al. 2008).

Statistical analyses

We used SPSS 14.0 for Windows (SPSS Inc., Chicago, IL, USA) to perform all statistical analyses. Multivariate logistic regression analyses were performed to estimate the odds ratios (ORs) and corresponding confidence intervals (CIs) for having skin lesions at different proportions of arsenic metabolites in urine. Continuous variables were stratified into tertiles when estimating the ORs for having skin lesions. All multivariate associations were simultaneously adjusted for sex (man/woman), age (continuous), SES (continuous), tobacco use (no/yes), and cumulative arsenic exposure (continuous). We chose to adjust all models for cumulative arsenic exposure rather than average arsenic exposure or urinary arsenic concentrations: the cumulative exposure influenced the associations slightly more than did the average exposure, and adjusting for the urinary arsenic concentrations did not change the models when tested (data not shown). We calculated p for trend by treating the categorical variables as continuous variables in the models. To evaluate potential biologic interactions between the different risk factors, that is, arsenic metabolism and smoking, on arsenic-related skin lesions, we calculated the relative excess risk due to interaction (RERI) and corresponding CIs (Ahlbom and Alfredsson 2005). These calculations were performed according to Andersson et al. (2005), using an Excel sheet (EpiNET 2008). We considered p-values < 0.05 statistically significant.

Study population

This study is part of a population-based case–referent study concerning the development of skin lesions in relation to arsenic exposure via drinking water, carried out in Matlab, a rural area 53 km southeast of Dhaka, Bangladesh (Rahman et al. 2006a, 2006b). More than 60% of the tube wells in this area have concentrations above 50 μg/L, which is the Bangladeshi standard for drinking water, and 70% are above the WHO maximum guideline of 10 μg As/L. In Matlab, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) has been running a comprehensive Health and Demographic Surveillance System (HDSS) that covers a population of 220,000.

We obtained informed consent from all participants, and the study was approved by both the ICDDR,B Ethical Review Committee and the ethics committee at the Karolinska Institutet in Stockholm. Mitigating activities, such as painting wells with elevated arsenic concentrations and installing filters, were initiated as described elsewhere (Jakariya et al. 2005; Rahman et al. 2006b).

Cases and referents recruitment

The recruitment of persons with skin lesions (504 cases) and the selection of referents (1,830 referents) are described elsewhere (Rahman et al. 2006a). In short, all residents > 4 years of age who had lived in the study area for at least 6 months were eligible for the present study (180,811 individuals). In total, 166,934 (92%) of the eligible individuals were interviewed and examined for skin lesions by well-trained field teams, and 1,682 suspected cases were referred to the study physicians (one male and one female) at the health centers. Eventually, 504 cases were diagnosed with arsenic-induced skin lesions (defined as hyperpigmentation, hypopigmentation, and keratosis). For details regarding the ascertainment of cases, see Rahman et al. (2006a). All cases provided urine samples.
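The urinary metabolite concentrations from these samples were normalized for dilution using specific gravity, with the population mean of 1.012 g/cm3 as the reference (Urine arsenic measures, above). A commonly used form of that adjustment scales each concentration by the ratio of (reference SG − 1) to (sample SG − 1); the concentrations and sample SG values below are illustrative, not study data.

```python
def sg_adjust(conc_ug_per_l, sample_sg, ref_sg=1.012):
    """Adjust a urinary concentration for dilution via specific gravity.

    Scales the measured concentration to what it would be at the
    reference specific gravity (here the population mean, 1.012 g/cm3).
    """
    return conc_ug_per_l * (ref_sg - 1.0) / (sample_sg - 1.0)

# Illustrative values only: a dilute sample (SG 1.006) is scaled up,
# a concentrated sample (SG 1.024) is scaled down.
print(round(sg_adjust(50.0, 1.006), 1))  # 100.0
print(round(sg_adjust(50.0, 1.024), 1))  # 25.0
```

A sample at exactly the reference specific gravity is left unchanged, which makes the adjusted values directly comparable across participants.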
Referents were randomly selected from the HDSS database with the criteria of being more than 4 years of age, having lived in the area for at least 6 months, and drinking water from the area at least once a week. Selected referents were interviewed and invited to attend the clinic to be examined for skin lesions by a physician and to provide urine samples. A total of 1,579 referents attended the clinic.

Data collection

The field teams interviewed all individuals about their history of water consumption and the water sources used, including location, during each calendar year since 1970, or since birth if born later than 1970, as described previously in more detail (Rahman et al. 2006b). Data on socioeconomic status (SES) were collected from the HDSS database and were defined in terms of assets relevant to these rural settings (Rahman et al. 2006b). Information on tobacco use, obtained in the interviews, was divided into cigarette smoking, bidi (locally produced cigarette) smoking, or chewing tobacco, the latter consisting of dried tobacco leaves or zarda (a type of chewing tobacco, often used with areca nut, slaked lime, and betel leaves). Water samples were collected from all functional wells in the area and stored at −20°C (n = 13,286). Arsenic concentrations in drinking water were determined using atomic absorption spectrometry with hydride generation, with addition of hydrochloric acid and potassium iodide combined with heating (Wahed et al. 2006). For samples with concentrations below the limit of detection (LOD; 1.0 μg/L), half the LOD was used in the calculations.
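The two historical exposure measures defined in the Methods, the time-weighted average (micrograms per liter) and the cumulative exposure (micrograms per liter × years), together with the LOD/2 substitution for non-detect water samples, can be sketched as follows. The well-water history values are illustrative, not study data.

```python
LOD = 1.0  # limit of detection for arsenic in water, ug/L

def censor(conc):
    """Apply the LOD/2 substitution for non-detect samples."""
    return conc if conc >= LOD else LOD / 2.0

def exposure_metrics(history):
    """history: list of (arsenic_ug_per_l, years_used) per water source.

    Returns (cumulative exposure in ug/L x years,
             time-weighted average in ug/L).
    """
    cumulative = sum(censor(conc) * years for conc, years in history)
    total_years = sum(years for _, years in history)
    average = cumulative / total_years
    return cumulative, average

# Illustrative drinking-water history: 10 years at 120 ug/L,
# 5 years at 40 ug/L, and 5 years below the LOD.
cum, avg = exposure_metrics([(120.0, 10), (40.0, 5), (0.0, 5)])
print(cum, avg)  # 1402.5 70.125
```

Because the average is just the cumulative exposure divided by the total years of use, the two measures are highly correlated; the study adjusted its models for the cumulative form, which influenced the associations slightly more.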
Results

Sex-specific characteristics of the cases and referents are presented in Table 1. The cases were more often men than women (54% vs. 46%) and were older than the referents (overall, 40 vs. 31 years old). The cases also had higher SES, as measured by household asset scores, and they smoked cigarettes or bidis (men only) or used zarda more often than did the referents. The three measures of arsenic exposure, that is, cumulative lifetime exposure to arsenic in drinking water (micrograms per liter × years), average lifetime exposure to arsenic (micrograms per liter), and current total exposure to iAs, as measured by the concentration of arsenic metabolites in urine, are also presented in Table 1. The cases had significantly higher lifetime cumulative arsenic exposure and average lifetime arsenic exposure than did the referents, but not higher urinary arsenic concentrations.

The number of years of tobacco use and the number of tobacco products used per day among cases and referents are shown in Supplemental Material, Table S1 (doi:10.1289/ehp.0900728). The pattern of tobacco use differed markedly between men (46% smokers, 11% chewing tobacco) and women [only two female smokers, 31% chewing tobacco (see Supplemental Material, Table S2)]. We therefore analyzed men and women separately. The associations between the different forms of tobacco use and urinary arsenic metabolites are shown in Table 2. All forms of tobacco use were associated with an increased percentage of MA and a decreased percentage of DMA, whereas the percentage of iAs and the urinary arsenic concentration did not vary much by tobacco use.

Men who smoked cigarettes or bidis had significantly higher risk for skin lesions than did men who did not use tobacco (OR = 1.8; 95% CI, 1.1–3.1; adjusted for age, SES, and cumulative arsenic exposure; Table 3).
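ORs of this kind come from the multivariable logistic models described under Statistical analyses. For a single binary exposure, the crude (unadjusted) OR and a Woolf-type 95% CI can be computed directly from a 2×2 table of exposure by case status; the counts below are hypothetical and do not reproduce the study's estimates.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Crude OR and 95% CI from a 2x2 table.

    a = exposed cases, b = exposed referents,
    c = unexposed cases, d = unexposed referents.
    The CI uses the Woolf (log-scale) standard error.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not study data: OR = 2.0 for these counts.
or_, lo, hi = odds_ratio(60, 90, 100, 300)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Adjusted ORs, as reported in Table 3, instead come from exponentiating the exposure coefficient of a logistic model that also includes age, SES, and cumulative arsenic exposure, which is why they can differ markedly from the crude value.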
We observed a marked drop in the OR after adjusting for age, SES, and cumulative arsenic exposure (crude OR = 3.2). Entering the different covariables independently showed that age was the main confounding factor. The adjusted OR in men chewing tobacco was not significantly increased (adjusted OR = 0.80; 95% CI, 0.36–1.7; Table 3); however, the number of individuals was low. Because very few women smoked, we could not determine the effect on the risk for skin lesions among women. The multivariable-adjusted model that included all women who used tobacco (mainly chewing tobacco) showed that they had a considerably higher prevalence of skin lesions than did women who did not use tobacco (adjusted OR = 2.4; 95% CI, 1.4–3.9; Table 3). Excluding the two women who smoked cigarettes did not change the associations. To test for the influence of arsenic methylation on the association between tobacco use and arsenic-related skin lesions, we also adjusted the OR values for %MA in urine. As shown in Table 3, the OR for men who smoked changed from 1.8 to 1.4 (95% CI, 0.80–2.4) after additional adjustment for %MA. Compared with women who did not use any tobacco, the OR for women who chewed tobacco changed from 2.4 to 2.0 (95% CI, 1.2–3.4) after adjusting for %MA and remained significantly increased.

To further evaluate interactions between smoking and arsenic metabolism on the risk for skin lesions, we stratified both tobacco use and urinary arsenic metabolites and analyzed the joint effects. The increasing risk for skin lesions with increasing proportion of MA and decreasing proportion of DMA was more pronounced among non–tobacco-using men and women (Table 4). The OR for men who used tobacco (mostly smokers, because few chewed tobacco) with ≤ 7.9 %MA was 1.8 (95% CI, 0.53–6.3; p = 0.34), compared with men in the same tertile of %MA who did not use tobacco. Men within the highest tertile of %MA who used tobacco had an OR of 4.4 (95% CI, 1.7–11).
This OR was essentially the same as the sum of the OR for nontobacco users in the highest %MA tertile and that for tobacco users in the lowest %MA tertile, minus the baseline OR (3.8 + 1.8 − 1.0 = 4.6). Similar results were obtained for %iAs: the ORjoint was 2.8, approximately equal to the sum (minus 1) of the OR for nonsmokers in the highest %iAs tertile (1.6) and that for smokers in the lowest %iAs tertile (1.9). Among women, the OR for tobacco users (mainly chewing tobacco) with ≤ 7.9 %MA in urine was 3.8 (95% CI, 1.4–10), compared with nonusers of tobacco within the same tertile (p = 0.009; Table 5). Women in the highest %MA tertile who chewed tobacco had an OR of 7.3 (95% CI, 3.4–15), compared with women in the lowest %MA group who did not use tobacco. This OR was higher than the predicted additive risk (5.7), although the RERI was not statistically significant (1.5; 95% CI, −3.7 to 6.7). Similarly, the ORjoint for women in the highest %iAs tertile who used tobacco was 7.5, higher than the predicted additive risk (2.5). The RERI was 5.0 (95% CI, −1.3 to 11.4), the attributable proportion due to interaction was 0.67 (0.36–0.98), and the synergy index was 4.4 (1.2–16), which indicates a biologic interaction.

Discussion

This population-based case–referent study in Bangladesh is the first to evaluate the combined effects of arsenic exposure, arsenic metabolism, and tobacco use on the risk of arsenic-related skin effects. All forms of tobacco use were associated with less efficient methylation of arsenic. Among men, there appeared to be an additive effect of poor arsenic methylation (high %iAs and high %MA) and smoking on the development of arsenic-induced skin lesions, although a high %MA increased the risk more than did smoking. Because very few women smoked cigarettes or bidis, an interaction between arsenic methylation and smoking in women could not be evaluated.
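The additive-interaction measures used in this study (RERI, attributable proportion, and synergy index; Ahlbom and Alfredsson 2005) follow directly from the joint and single-exposure ORs. As a check on the additivity arithmetic reported in the Results, the sketch below uses the men's %MA estimates from Table 4 (joint OR 4.4, high-%MA only 3.8, tobacco only 1.8); the CIs require the model covariance matrix and are not reproduced here.

```python
def additive_interaction(or11, or10, or01):
    """Additive-interaction measures from odds ratios.

    or11: joint-exposure OR; or10 and or01: single-exposure ORs
    (the doubly unexposed group is the baseline, OR = 1).
    Returns RERI, attributable proportion (AP), and synergy index (SI).
    """
    reri = or11 - or10 - or01 + 1.0
    ap = reri / or11
    si = (or11 - 1.0) / ((or10 - 1.0) + (or01 - 1.0))
    return reri, ap, si

# Men, highest %MA tertile x tobacco use (ORs from Table 4):
reri, ap, si = additive_interaction(4.4, 3.8, 1.8)
print(round(reri, 1))  # -0.2: the joint OR is close to the additive prediction
```

A RERI near zero, as here, indicates no departure from additivity, whereas the women's %iAs stratum (joint OR 7.5 against a predicted additive 2.5) yields the RERI of 5.0 reported above.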
Another new finding in the present study was that tobacco chewing, which is much more common among Bangladeshi women than smoking, was also a risk factor for developing arsenic-related skin lesions in women. The high ORs for skin lesions among women who chewed tobacco and were in the highest tertiles of %iAs or %MA (7.5 and 7.3, respectively), compared with non–tobacco-using women with efficient arsenic methylation, suggest an interaction, although the RERI values were not quite significant. For men, an association between chewing tobacco and skin lesions was observed in the crude analysis only, but the sample size was small and the CIs wide. Further studies in larger cohorts are warranted for firm conclusions concerning the biologic interactions between the various forms of tobacco use, arsenic exposure, and arsenic metabolism. In any case, the use of various forms of tobacco should be considered in the risk assessment of arsenic and in comparisons of arsenic-related health risks among populations.

Tobacco smoking has been identified as an independent risk factor for nonmelanoma skin cancer (De Hertog et al. 2001; Grant 2008; Grodstein et al. 1995), psoriasis (Setty et al. 2007), and premature skin aging (Just et al. 2007; Morita 2007), but few studies have investigated the modifying effect of smoking on arsenic-related hyperpigmentation and hyperkeratosis. Chen et al. (2006) reported a significant synergistic effect between the highest level of arsenic exposure via drinking water (> 113 μg/L) and tobacco smoking on the risk of skin lesions among Bangladeshi men, but much weaker interaction effects among women. These authors suggested that the interaction could be due to immunosuppression caused by tobacco smoking, inhibition of arsenic methylation, or the prevalent smoking of bidis, the filterless, locally produced cigarettes with raw tobacco.
Bidis are popular in rural areas of Bangladesh and are claimed to contain more carcinogenic substances than do cigarettes.

In the present study, the effect of smoking on arsenic-related skin lesions was studied only in men because, by tradition, very few women in Bangladesh are smokers. According to the personal interviews, almost half the men were smokers, whereas only two of more than 500 women smoked. It is highly unlikely that we have any differential misclassification in the data on tobacco use: smoking and other forms of tobacco use carry no stigma and are in no way considered by the study population to be associated with the arsenic-induced skin lesions. The increased risk of skin lesions among men who smoked was not due to additional exposure to arsenic via smoking. The concentrations of arsenic in different brands of cigarettes from local shops in Matlab ranged between 0.13 and 0.29 μg/g (mean 0.21 μg/g, n = 5), and between 0.24 and 0.27 μg/g (mean 0.25 μg/g, n = 3) for bidis. Thus, the amount of arsenic inhaled by smoking, for example, 10 cigarettes or bidis per day could be estimated at about 2 μg. Even though a considerable part of this arsenic is absorbed in the lungs, the arsenic uptake from smoking is negligible compared with that from drinking water (70% of the wells had > 10 μg As/L; Rahman et al. 2006b) and food (Lindberg et al. 2008a).

Instead, we found that smoking was associated with higher %MA in urine, which is a known risk factor for arsenic-related skin effects (Ahsan et al. 2007; Chen et al. 2003a; Del Razo et al. 1997; Hsueh et al. 1997; Lindberg et al. 2008a; Yu et al. 2000). Elevated %MA in urine may be related to the highly toxic trivalent intermediate metabolite MA(III) (Ganyc et al. 2007; Vega et al. 2001) in the tissues, including skin (Vahter 2002). The decrease in the second step of arsenic methylation (to DMA) by smoking is in agreement with previous findings (Hopenhayn-Rich et al. 1996; Lindberg et al.
2008b); however, the mechanism by which this occurs is not clear. It may be that smoking inhibits the specific AS3MT involved in arsenic methylation or impairs one-carbon metabolism in general. Cigarette smoking is known to increase serum homocysteine concentration (O’Callaghan et al. 2002; Refsum et al. 2006), which, via the concurrent accumulation of S-adenosylhomocysteine, exerts a strong inhibition of S-adenosylmethionine–dependent transmethylation reactions, including those of arsenic (Gamble et al. 2005; Marafante and Vahter 1984). Smokers also tend to have lower levels of folate and vitamins B6 and B12 (O’Callaghan et al. 2002), all of which are essential for homocysteine metabolism. The smoking-related increase in homocysteine is most likely less pronounced in women, in whom the estrogen-dependent upregulation of endogenous choline synthesis may, via oxidation to betaine, contribute to remethylation of homocysteine (Zeisel 2007). Indeed, Chen et al. (2006) observed a much stronger effect of smoking on arsenic-related skin lesions in men than in women.

Another new finding in this study is that chewing tobacco is a risk factor for arsenic-related skin effects in women and that there appears to be an interaction with poor metabolism of arsenic. For men, the effect of chewing tobacco on the risk of skin lesions seems to be much smaller, but the number of cases is too small for a firm conclusion. About 31% of the women reported chewing tobacco, locally called shada (dried tobacco leaves) or zarda (processed tobacco leaves in a paste) (Choudhury et al. 2007). In zarda, the tobacco is often mixed with sliced areca nut, lime, and sometimes a leaf of the piper betel plant, and the adverse effects may be caused by this mixture rather than the tobacco alone. Areca nut, the seed of Areca catechu, is used in a variety of chewed products, often mixed with tobacco or betel leaves.
Both betel quid and areca nut have been associated with increased risk for oral epithelial malignancy (IARC 2004b; Lee et al. 2008). In a previous study in Bangladesh, McCarty et al. (2006) reported that betel nut use, but not tobacco chewing or cigarette smoking, was associated with skin lesions. Because that study was matched on sex, it is not known whether the associations were sex dependent. In the present study, zarda contained about half a microgram of arsenic per gram of tobacco (0.33–0.54 μg/g; mean 0.45 μg/g; n = 8), which is about twice as much as in cigarettes and bidis. Still, the arsenic exposure from chewing zarda was low compared with that from the drinking water and food.
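The back-of-the-envelope intake comparison above can be reproduced from the measured tobacco concentrations. The tobacco mass per cigarette and the daily water intake and concentration below are assumptions made for illustration, not study measurements.

```python
# Measured mean arsenic concentrations (ug/g, from this study):
CIGARETTE_AS = 0.21
ZARDA_AS = 0.45

# Assumptions (not study data): ~1 g tobacco per cigarette,
# 10 cigarettes/day; 2 L drinking water/day at 100 ug As/L.
TOBACCO_G_PER_CIG = 1.0
smoke_daily = CIGARETTE_AS * TOBACCO_G_PER_CIG * 10   # ug/day in smoke
water_daily = 2.0 * 100.0                             # ug/day from water

print(round(smoke_daily, 1))  # 2.1 ug/day, matching the ~2 ug estimate
print(smoke_daily / water_daily)  # roughly 1% of the water intake
```

Even before accounting for incomplete absorption in the lungs, the inhaled dose is two orders of magnitude below the drinking-water dose under these assumptions, supporting the conclusion that the smoking effect is not driven by added arsenic intake.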
Keywords: arsenic; interactions; metabolism; skin lesions; smoking; tobacco; urine metabolites
Materials and Methods: Study population This study is part of a population-based case–referent study concerning the development of skin lesions in relation to arsenic exposure via drinking water carried out in Matlab, a rural area 53 km southeast of Dhaka, Bangladesh (Rahman et al. 2006a, 2006b). More than 60% of the tube wells in this area have concentrations above 50 μg/L, which is the Bangladeshi standard for drinking water, and 70% are above the WHO maximum guideline of 10 μg As/L. In Matlab, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) has been running a comprehensive Health and Demographic Surveillance System (HDSS) that covers a population of 220,000. We obtained informed consent from all participants, and the study was approved by both the ICDDR,B Ethical Review Committee and the ethics committee at the Karolinska Institutet in Stockholm. Mitigating activities such as painting wells with elevated arsenic concentrations and installing filters were initiated as described elsewhere (Jakariya et al. 2005; Rahman et al. 2006b). This study is part of a population-based case–referent study concerning the development of skin lesions in relation to arsenic exposure via drinking water carried out in Matlab, a rural area 53 km southeast of Dhaka, Bangladesh (Rahman et al. 2006a, 2006b). More than 60% of the tube wells in this area have concentrations above 50 μg/L, which is the Bangladeshi standard for drinking water, and 70% are above the WHO maximum guideline of 10 μg As/L. In Matlab, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) has been running a comprehensive Health and Demographic Surveillance System (HDSS) that covers a population of 220,000. We obtained informed consent from all participants, and the study was approved by both the ICDDR,B Ethical Review Committee and the ethics committee at the Karolinska Institutet in Stockholm. 
Mitigating activities such as painting wells with elevated arsenic concentrations and installing filters were initiated as described elsewhere (Jakariya et al. 2005; Rahman et al. 2006b). Cases and referents recruitment The recruitment of persons with skin lesions (504 cases) and the selection of referents (1,830 referents) are described elsewhere (Rahman et al. 2006a). In short, all residents > 4 years of age who had lived in the study area for at least 6 months were eligible for the present study (180,811 individuals). In total, 166,934 (92%) of the eligible individuals were interviewed and examined for skin lesions by well-trained field teams, and 1,682 suspected cases were referred to the study physicians (one male and one female) at the health centers. Eventually, 504 cases were diagnosed with arsenic-induced skin lesions (defined as hyperpigmentation, hypopigmentation, and keratosis). For details regarding the ascertainment of cases, see Rahman et al. (2006a). All cases provided urine samples. Referents were randomly selected from the HDSS database with the criteria of more than 4 years of age, living in the area for at least 6 months, and drinking water from the area at least once a week. Selected referents were interviewed and invited to attend the clinic to be examined for skin lesions by a physician and to provide urine samples. A total of 1,579 referents attended the clinic. The recruitment of persons with skin lesions (504 cases) and the selection of referents (1,830 referents) are described elsewhere (Rahman et al. 2006a). In short, all residents > 4 years of age who had lived in the study area for at least 6 months were eligible for the present study (180,811 individuals). In total, 166,934 (92%) of the eligible individuals were interviewed and examined for skin lesions by well-trained field teams, and 1,682 suspected cases were referred to the study physicians (one male and one female) at the health centers. 
Eventually, 504 cases were diagnosed with arsenic-induced skin lesions (defined as hyperpigmentation, hypopigmentation, and keratosis). For details regarding the ascertainment of cases, see Rahman et al. (2006a). All cases provided urine samples. Referents were randomly selected from the HDSS database with the criteria of more than 4 years of age, living in the area for at least 6 months, and drinking water from the area at least once a week. Selected referents were interviewed and invited to attend the clinic to be examined for skin lesions by a physician and to provide urine samples. A total of 1,579 referents attended the clinic. Data collection The field teams interviewed all individuals about their history of water consumption and the water sources used, including location, during each calendar year since 1970 or since birth if later than 1970, as described previously in more detail (Rahman et al. 2006b). Data on socioeconomic status (SES) were collected from the HDSS database and were defined in terms of assets relevant to these rural settings (Rahman et al. 2006b). Information on tobacco use, obtained in the interviews, was divided into cigarette smoking, bidi (locally produced cigarettes) smoking, or chewing tobacco, the latter consisting of dried tobacco leaves or zarda (a type of chewing tobacco, often used with areca nut, slaked lime, and betel leaves). Water samples were collected from all functional wells in the area and stored at −20°C (n = 13,286). Arsenic concentrations in drinking water were determined using atomic absorption spectrometry with hydride generation, with addition of hydrochloric acid and potassium iodine combined with heating (Wahed et al. 2006). For samples with concentrations below the limit of detection (LOD; 1.0 μg/L), half the LOD was used in the calculations. 
The field teams interviewed all individuals about their history of water consumption and the water sources used, including location, during each calendar year since 1970 or since birth if later than 1970, as described previously in more detail (Rahman et al. 2006b). Data on socioeconomic status (SES) were collected from the HDSS database and were defined in terms of assets relevant to these rural settings (Rahman et al. 2006b). Information on tobacco use, obtained in the interviews, was divided into cigarette smoking, bidi (locally produced cigarettes) smoking, or chewing tobacco, the latter consisting of dried tobacco leaves or zarda (a type of chewing tobacco, often used with areca nut, slaked lime, and betel leaves). Water samples were collected from all functional wells in the area and stored at −20°C (n = 13,286). Arsenic concentrations in drinking water were determined using atomic absorption spectrometry with hydride generation, with addition of hydrochloric acid and potassium iodine combined with heating (Wahed et al. 2006). For samples with concentrations below the limit of detection (LOD; 1.0 μg/L), half the LOD was used in the calculations. Arsenic exposure Both the average and the cumulative historical arsenic exposure were calculated as the time-weighted mean arsenic concentration of drinking water of all sources used since 1970 or birth. The cumulative arsenic exposure was calculated by summing the arsenic concentration multiplied by the number of years of usage (micrograms per liter × years) for all water sources used since 1970. From the interviews of the participants regarding their water consumption history, we were able to collect data on which year the participant started to drink well water (after 1970) and thereby the age at first exposure to tube well water. Very few wells were constructed before 1970, at which time the registration of the wells in the HDSS began. 
Both the average and the cumulative historical arsenic exposure were calculated as the time-weighted mean arsenic concentration of drinking water of all sources used since 1970 or birth. The cumulative arsenic exposure was calculated by summing the arsenic concentration multiplied by the number of years of usage (micrograms per liter × years) for all water sources used since 1970. From the interviews of the participants regarding their water consumption history, we were able to collect data on which year the participant started to drink well water (after 1970) and thereby the age at first exposure to tube well water. Very few wells were constructed before 1970, at which time the registration of the wells in the HDSS began. Urine arsenic measures Spot urine samples were collected in 20 mL polyethylene containers and stored at −20°C. We randomly selected 526 samples from all referents and all 504 cases for analysis of arsenic metabolites in urine for evaluation of the individual arsenic methylation efficiency. The individual pattern of arsenic metabolites in urine is shown to be fairly stable over time (Concha et al. 2002; Kile et al. 2009; Steinmaus et al. 2005). Speciation analysis of arsenic metabolites in urine was performed by an inductively coupled plasma mass spectrometer (ICP-MS; Agilent 7500ce, Agilent Technologies, Waldbronn, Germany) together with an Agilent 1100 chromatographic system equipped with solvent degasser, auto sampler, and temperature-controlled column. For the separation of trivalent arsenic [As(III)], MA, DMA, and pentavalent arsenic [As(V)], a Hamilton PRP-X100 anion exchange column (4.6 mm × 250 mm) was used (Lindberg et al. 2006, 2007). For quality control, we used a Japanese reference urine certified for arsenic (Yoshinaga et al. 2000) and a spiked urine sample. Mean concentrations of As(III), DMA, MA, and As(V) in the reference urine were 3.9 ± 0.4, 43 ± 3, 3.0 ± 0.4, and 0.1 ± 0.1 (n = 79 during an 8-week period), respectively. 
The mean concentrations of the spiked urine samples of As(III), DMA, MA, and As(V) were 1.5 ± 0.6, 58 ± 3, 10 ± 0.6, and 16 ± 1.3 (n = 82 during an 8-week period), respectively. The results of interlaboratory comparisons are described in more detail elsewhere (Lindberg et al. 2007). Arsenic concentrations in urine were adjusted to the average specific gravity in this population (1.012 g/cm3), as measured by a refractometer (Uricon-Ne, ATAGO Co. Ltd, Tokyo, Japan) to adjust for variation in dilution in the urine samples (Nermell et al. 2008). We also measured the arsenic concentrations of various forms of cigarettes, using ICP-MS after microwave-assisted acid digestion at high temperature and pressure as previously described for blood and breast milk (Fangstrom et al. 2008).
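The specific-gravity adjustment cited above (Nermell et al. 2008) is commonly computed by scaling each concentration by the ratio of (mean SG − 1) to (sample SG − 1); a sketch under that assumption (the sample values below are hypothetical, not measurements from this study):

```python
MEAN_SG = 1.012  # average specific gravity in this population, g/cm3


def sg_adjust(concentration, sample_sg, mean_sg=MEAN_SG):
    """Adjust a urinary arsenic concentration for urine dilution.
    Assumed form: concentration * (mean SG - 1) / (sample SG - 1)."""
    return concentration * (mean_sg - 1.0) / (sample_sg - 1.0)


# A dilute sample (SG 1.006) is scaled up toward the population average;
# a concentrated sample (SG 1.024) is scaled down.
dilute = sg_adjust(50.0, 1.006)        # ~100 µg/L
concentrated = sg_adjust(50.0, 1.024)  # ~25 µg/L
```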
Statistical analyses: We used SPSS 14.0 for Windows (SPSS Inc., Chicago, IL, USA) to perform all statistical analyses. Multivariate logistic regression analyses were performed to estimate the odds ratios (ORs) and corresponding confidence intervals (CIs) for having skin lesions at different proportions of arsenic metabolites in urine. Continuous variables were stratified into tertiles when estimating the ORs for having skin lesions. All multivariate associations were simultaneously adjusted for sex (man/woman), age (continuous), SES (continuous), tobacco use (no/yes), and cumulative arsenic exposure (continuous). We chose to adjust all models for cumulative arsenic exposure instead of average arsenic exposure or urinary arsenic concentrations. The cumulative arsenic exposure influenced the associations slightly more than did the average arsenic exposure, and adjusting for the urinary arsenic concentrations did not make any difference in the models when tested (data not shown).
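The tertile stratification and the odds ratios underlying these models can be illustrated with standard epidemiologic arithmetic (this is an illustration, not the study's SPSS procedure; the counts are hypothetical):

```python
def tertile_cuts(values):
    """Return the two cut points splitting a continuous variable into tertiles."""
    ordered = sorted(values)
    n = len(ordered)
    return ordered[n // 3], ordered[2 * n // 3]


def crude_or(cases_exp, refs_exp, cases_ref, refs_ref):
    """Crude odds ratio of an upper tertile versus the lowest (reference) tertile:
    (cases/referents in the tertile) / (cases/referents in the reference group)."""
    return (cases_exp / refs_exp) / (cases_ref / refs_ref)


cuts = tertile_cuts(list(range(9)))  # cut points for nine ordered values
or_top = crude_or(30, 60, 10, 80)    # (30/60) / (10/80) = 4.0
```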
We calculated p for trend by treating the categorical variables as continuous variables in the models. To evaluate potential biologic interactions between the different risk factors, that is, arsenic metabolism and smoking, on arsenic-related skin lesions, we calculated the relative excess risk due to additive interactions (RERI) and corresponding CIs (Ahlbom and Alfredsson 2005). These calculations were performed according to Andersson et al. (2005), using an Excel sheet (EpiNET 2008). We used p-values < 0.05 for statistical significance. Study population: This study is part of a population-based case–referent study concerning the development of skin lesions in relation to arsenic exposure via drinking water carried out in Matlab, a rural area 53 km southeast of Dhaka, Bangladesh (Rahman et al. 2006a, 2006b). More than 60% of the tube wells in this area have concentrations above 50 μg/L, which is the Bangladeshi standard for drinking water, and 70% are above the WHO maximum guideline of 10 μg As/L. In Matlab, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) has been running a comprehensive Health and Demographic Surveillance System (HDSS) that covers a population of 220,000. We obtained informed consent from all participants, and the study was approved by both the ICDDR,B Ethical Review Committee and the ethics committee at the Karolinska Institutet in Stockholm. Mitigating activities such as painting wells with elevated arsenic concentrations and installing filters were initiated as described elsewhere (Jakariya et al. 2005; Rahman et al. 2006b). Cases and referents recruitment: The recruitment of persons with skin lesions (504 cases) and the selection of referents (1,830 referents) are described elsewhere (Rahman et al. 2006a). In short, all residents > 4 years of age who had lived in the study area for at least 6 months were eligible for the present study (180,811 individuals). In total, 166,934 (92%) of the eligible individuals were interviewed and examined for skin lesions by well-trained field teams, and 1,682 suspected cases were referred to the study physicians (one male and one female) at the health centers. Eventually, 504 cases were diagnosed with arsenic-induced skin lesions (defined as hyperpigmentation, hypopigmentation, and keratosis). For details regarding the ascertainment of cases, see Rahman et al. (2006a). All cases provided urine samples.
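The additive-interaction measures named in the statistical analyses above (RERI, attributable proportion, synergy index; Ahlbom and Alfredsson 2005; Andersson et al. 2005) follow directly from the three odds ratios of a joint-exposure analysis. A sketch (the OR values below are illustrative, chosen only to mirror the magnitudes reported in this study, not the actual stratum-specific estimates):

```python
def reri(or_joint, or_a, or_b):
    """Relative excess risk due to interaction: RERI = OR11 - OR10 - OR01 + 1."""
    return or_joint - or_a - or_b + 1.0


def attributable_proportion(or_joint, or_a, or_b):
    """Proportion of the joint effect attributable to interaction: AP = RERI / OR11."""
    return reri(or_joint, or_a, or_b) / or_joint


def synergy_index(or_joint, or_a, or_b):
    """S = (OR11 - 1) / ((OR10 - 1) + (OR01 - 1)); S > 1 suggests synergy."""
    return (or_joint - 1.0) / ((or_a - 1.0) + (or_b - 1.0))


# A purely additive joint effect (OR11 = OR10 + OR01 - 1) gives RERI = 0:
additive = reri(4.6, 3.8, 1.8)
# A super-additive case (an illustrative split of a joint OR of 7.5):
excess = reri(7.5, 2.0, 1.5)
```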
Referents were randomly selected from the HDSS database with the criteria of more than 4 years of age, living in the area for at least 6 months, and drinking water from the area at least once a week. Selected referents were interviewed and invited to attend the clinic to be examined for skin lesions by a physician and to provide urine samples. A total of 1,579 referents attended the clinic.
Results: Sex-specific characteristics of the cases and referents are presented in Table 1. The cases were more often men than women (54% vs. 46%) and older than the referents (overall, 40 vs. 31 years old). The cases also had higher SES, as measured by household asset scores, and they smoked cigarettes or bidis (men only) or used zarda more often than did the referents. The three different measures of arsenic exposure, that is, cumulative lifetime exposure to arsenic in drinking water (micrograms per liter × years), average lifetime exposure to arsenic (micrograms per liter), and current total exposure to iAs, as measured by the concentration of arsenic metabolites in urine, are also presented in Table 1. The cases had significantly higher lifetime cumulative arsenic exposure and average lifetime arsenic exposure than did the referents but not higher urinary arsenic concentrations. The number of years of tobacco use and the number of tobacco products used per day among cases and referents are shown in Supplemental Material, Table S1 (doi:10.1289/ehp.0900728). The pattern of tobacco use differed markedly between men (46% smokers, 11% chewing tobacco) and women [only two female smokers, 31% chewing tobacco (see Supplemental Material, Table S2)]. We analyzed men and women separately. The associations between the different forms of tobacco use and urinary arsenic metabolites are shown in Table 2. All forms of tobacco use were associated with an increased percentage of MA and a decreased percentage of DMA, whereas the percentage of iAs and the urinary arsenic concentration did not vary much by tobacco use. Men who smoked cigarettes or bidis had significantly higher risk for skin lesions than did men who did not use tobacco (OR = 1.8; 95% CI, 1.1–3.1; adjusted for age, SES, and cumulative arsenic exposure; Table 3).
We observed a marked drop in the OR after adjusting for age, SES, and cumulative arsenic exposure (crude OR = 3.2). Entering the different covariables independently showed that age was the main confounding factor. The adjusted OR in men chewing tobacco was not significantly increased (adjusted OR = 0.80; 95% CI, 0.36–1.7; Table 3); however, the number of individuals was low. Because very few women smoked, we could not determine the effect on the risk for skin lesions among women. The multivariable-adjusted model that included all women who used tobacco (mainly chewing tobacco) showed that they had considerably higher prevalence of skin lesions than did women who did not use tobacco (adjusted OR = 2.4; 95% CI, 1.4–3.9; Table 3). Excluding the two women who smoked cigarettes did not change the associations. To test for the influence of arsenic methylation on the association between tobacco use and arsenic-related skin lesions, we also adjusted the OR values for %MA in urine. As shown in Table 3, the OR for men who smoked changed from 1.8 to 1.4 (95% CI, 0.80–2.4) after additional adjustment for %MA. Compared with women who did not use any tobacco, the OR for women who chewed tobacco changed from 2.4 to 2.0 (95% CI, 1.2–3.4) after adjusting for %MA and remained significantly increased. To further evaluate interactions between smoking and arsenic metabolism on the risk for skin lesions, we stratified both tobacco use and urinary arsenic metabolites and analyzed the joint effects. The increasing risk for skin lesions with increasing proportion of MA and decreasing proportion of DMA was more pronounced among nontobacco using men and women (Table 4). The OR for men who used tobacco (mostly smokers, because few chewed tobacco) with ≤ 7.9 %MA was 1.8 (95% CI, 0.53–6.3; p = 0.34), compared with men in the same tertile of %MA who did not use tobacco. Men within the highest tertile of %MA who used tobacco had an OR of 4.4 (95% CI, 1.7–11). 
This OR was essentially the same as the sum of the OR for nontobacco users in the highest %MA tertile and that for tobacco users in the lowest %MA tertile minus the baseline OR (OR values 3.8 + 1.8 − 1.0 = 4.6). Similar results were obtained for %iAs; the ORjoint was 2.8, which is approximately equal to the sum (minus 1) of the OR for nonsmokers in the highest %iAs tertile (OR 1.6) and that for smokers in the lowest %iAs (OR = 1.9). Among women, the OR for tobacco users (mainly chewing tobacco) with ≤ 7.9 %MA in urine was 3.8 (95% CI, 1.4–10), compared with nonusers of tobacco within the same tertile (p = 0.009; Table 5). Women in the highest %MA tertile who chewed tobacco had an OR of 7.3 (95% CI, 3.4–15), compared with women in the lowest %MA group who did not use tobacco. This OR was higher than the predicted additive risks (5.7), although the RERI was not statistically significant (1.5; 95% CI, −3.7 to 6.7). Similarly, the ORjoint for women in the highest %iAs tertile who used tobacco was 7.5, which was higher than the predicted additive risks (2.5). The RERI was 5.0 (95% CI, −1.3 to 11.4), the attributable proportion due to interaction was 0.67 (0.36–0.98), and the synergy index was 4.4 (1.2–16), which indicated a biologic interaction. Discussion: This population-based case–referent study in Bangladesh is the first to evaluate the combined effects of arsenic exposure, arsenic metabolism, and use of tobacco for the risk of arsenic-related skin effects. All forms of tobacco use were associated with less efficient methylation of arsenic. Among men, there appeared to be an additive effect of poor arsenic methylation (high iAs and high %MA) and smoking for the development of arsenic-induced skin lesions, although a high %MA increased the risk more than did smoking. Because very few women smoked cigarettes or bidis, an interaction between arsenic methylation and smoking in women could not be evaluated. 
Another new finding in the present study was that tobacco chewing, which is much more common among Bangladeshi women than smoking, was also a risk factor for developing arsenic-related skin lesions in women. The high ORs for skin lesions among the women who chewed tobacco in the highest tertiles of %iAs or %MA (7.5 and 7.3, respectively), compared with nontobacco using women with efficient arsenic methylation, suggest an interaction, although the RERI values were not quite significant. For men, an association between chewing tobacco and skin lesions was observed in the crude analysis only, but the sample size was small and the CIs wide. Further studies on larger cohorts are warranted for firm conclusions concerning the biologic interactions between various tobacco use, arsenic exposure, and arsenic metabolism. In any case, the use of various forms of tobacco should be considered in the risk assessment of arsenic and in the comparison of arsenic-related health risks among populations. Tobacco smoking has been identified as an independent risk factor of nonmelanoma skin cancer (De Hertog et al. 2001; Grant 2008; Grodstein et al. 1995), psoriasis (Setty et al. 2007), and premature skin aging (Just et al. 2007; Morita 2007), but few studies have investigated the modifying effect of smoking on the arsenic-related hyperpigmentation and hyperkeratosis. Chen et al. (2006) reported a significant synergistic effect between the highest level of arsenic exposure via drinking water (> 113 μg/L) and tobacco smoking for the risk of skin lesions among Bangladeshi men, but much weaker interaction effects among women. These authors suggested that the interaction could be due to immunosuppression caused by tobacco smoking, inhibition of arsenic methylation, or the prevalent smoking of bidis, the filterless, locally produced cigarettes with raw tobacco. Bidis are popular in rural areas in Bangladesh and are claimed to contain more carcinogenic substances than do cigarettes. 
In the present study, the effect of smoking on arsenic-related skin lesions was studied only in men, because, by tradition, very few women in Bangladesh are smokers. According to personal interviews, almost half the men were smokers, whereas only two of more than 500 women smoked. It is highly unlikely that we have any differential misclassification in the data on tobacco use. Smoking or other forms of tobacco use are not linked to any stigma and are in no way considered to be associated with the arsenic-induced skin lesions by the study population. The increased risk of skin lesions among men who smoked was not due to additional exposure to arsenic via smoking. The concentrations of arsenic in different brands of cigarettes from local shops in Matlab ranged between 0.13 and 0.29 μg/g (mean 0.21 μg/g, n = 5) and between 0.24 and 0.27 μg/g (mean 0.25 μg/g, n = 3) for bidis. Thus, it could be estimated that the inhaled amount of arsenic by smoking 10 cigarettes or bidis/day, for example, was about 2 μg. Even though a considerable part of this arsenic is absorbed in the lungs, the arsenic uptake from smoking is negligible compared with that from drinking water (70% of the wells had > 10 μg As/L; Rahman et al. 2006b) and food (Lindberg et al. 2008a). Instead, we found that smoking was associated with higher %MA in urine, which is a known risk factor for arsenic-related skin effects (Ahsan et al. 2007; Chen et al. 2003a; Del Razo et al. 1997; Hsueh et al. 1997; Lindberg et al. 2008a; Yu et al. 2000). Elevated %MA in urine may be related to the highly toxic trivalent intermediate metabolite MA(III) (Ganyc et al. 2007; Vega et al. 2001) in the tissues, including skin (Vahter 2002). The decrease in the second step in the methylation of arsenic (to DMA) by smoking is in agreement with previous findings (Hopenhayn-Rich et al. 1996; Lindberg et al. 2008b); however, the mechanism by which this occurs is not clear. 
It may be that smoking inhibits the specific AS3MT involved in arsenic methylation or impairs one-carbon metabolism in general. Cigarette smoking is known to increase serum homocysteine concentration (O’Callaghan et al. 2002; Refsum et al. 2006), which, via the concurrent accumulation of S-adenosylhomocysteine, exerts a strong inhibition of S-adenosylmethionine–dependent transmethylation reactions, including those of arsenic (Gamble et al. 2005; Marafante and Vahter 1984). Smokers also tend to have lower levels of folate and vitamin B6 and B12 (O’Callaghan et al. 2002), all of which are essential for homocysteine metabolism. The smoking-related increase in homocysteine is most likely less pronounced in women, in whom the estrogen-dependent upregulation of endogenous choline synthesis may, via oxidation to betaine, contribute to remethylation of homocysteine (Zeisel 2007). Indeed, the study by Chen et al. (2006) observed a much stronger effect of smoking on arsenic-related skin lesions in men than in women. Another new finding in this study is that chewing tobacco is a risk factor for arsenic-related skin effects for women and that there appears to be an interaction with poor metabolism of arsenic. For men, the effect of chewing tobacco on the risk of skin lesion seems to be much less, but the number of cases is too small for a firm conclusion. About 31% of the women reported chewing tobacco, which locally is called shada (dried tobacco leaves) or zarda (processed tobacco leaves in a paste) (Choudhury et al. 2007). In zarda, the tobacco is often mixed with sliced areca nut, lime, and sometimes a leaf of the piper betel plant, and the adverse effects may be caused by this mixture rather than the tobacco alone. Areca nut, the seed of Areca catechu, is used in a variety of chewed products, often mixed with tobacco or betel leaves. Both betel quid and areca nut have been associated with increased risk for oral epithelial malignancy (IARC 2004b; Lee et al. 2008). 
In a previous study in Bangladesh, McCarty et al. (2006) reported that betel nut use, but not tobacco chewing or cigarette smoking, was associated with skin lesions. As that study matched by sex, it is not known if the associations were sex dependent. In the present study, zarda contained about half a microgram of arsenic per gram of tobacco (0.33–0.54 μg/g; mean 0.45 μg/g; n = 8), which is about twice as much as in cigarettes and bidis. Still, the arsenic exposure from chewing zarda was low compared with that from the drinking water and food.
Background: We recently reported that the main reason for the documented higher prevalence of arsenic-related skin lesions among men than among women is the result of less efficient arsenic metabolism. Methods: We used a population-based case–referent study that showed increased risk for skin lesions in relation to chronic arsenic exposure via drinking water in Bangladesh and randomly selected 526 of the referents (random sample of inhabitants > 4 years old; 47% male) and all 504 cases (54% male) with arsenic-related skin lesions to measure arsenic metabolites [methylarsonic acid (MA) and dimethylarsinic acid (DMA)] in urine using high-performance liquid chromatography (HPLC) and inductively coupled plasma mass spectrometry (ICPMS). Results: The odds ratio for skin lesions was almost three times higher in the highest tertile of urinary %MA than in the lowest tertile. Men who smoked cigarettes and bidis (locally produced cigarettes; 33% of referents, 58% of cases) had a significantly higher risk for skin lesions than did nonsmoking men; this association decreased slightly after accounting for arsenic metabolism. Only two women smoked, but women who chewed tobacco (21% of referents, 43% of cases) had a considerably higher risk of skin lesions than did women who did not use tobacco. The odds ratio (OR) for women who chewed tobacco and who had ≤ 7.9 %MA was 3.8 [95% confidence interval (CI), 1.4–10] compared with women in the same MA tertile who did not use tobacco. In the highest tertile of %MA or %inorganic arsenic (iAs), women who chewed tobacco had ORs of 7.3 and 7.5, respectively, compared with women in the lowest tertiles who did not use tobacco. Conclusions: The increased risk of arsenic-related skin lesions in male smokers compared with nonsmokers appears to be partly explained by impaired arsenic methylation, while there seemed to be an excess risk due to interaction between chewing tobacco and arsenic metabolism in women.
null
null
6,999
389
9
[ "arsenic", "tobacco", "skin", "urine", "exposure", "lesions", "skin lesions", "water", "arsenic exposure", "concentrations" ]
[ "test", "test" ]
null
null
null
null
[CONTENT] arsenic | interactions | metabolism | skin lesions | smoking | tobacco | urine metabolites [SUMMARY]
[CONTENT] arsenic | interactions | metabolism | skin lesions | smoking | tobacco | urine metabolites [SUMMARY]
null
[CONTENT] arsenic | interactions | metabolism | skin lesions | smoking | tobacco | urine metabolites [SUMMARY]
null
null
[CONTENT] Adult | Arsenic | Arsenicals | Cacodylic Acid | Female | Humans | Male | Odds Ratio | Regression Analysis | Skin Diseases | Smoking | Tobacco, Smokeless | Young Adult [SUMMARY]
[CONTENT] Adult | Arsenic | Arsenicals | Cacodylic Acid | Female | Humans | Male | Odds Ratio | Regression Analysis | Skin Diseases | Smoking | Tobacco, Smokeless | Young Adult [SUMMARY]
null
[CONTENT] Adult | Arsenic | Arsenicals | Cacodylic Acid | Female | Humans | Male | Odds Ratio | Regression Analysis | Skin Diseases | Smoking | Tobacco, Smokeless | Young Adult [SUMMARY]
null
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
null
[CONTENT] arsenic | tobacco | skin | urine | exposure | lesions | skin lesions | water | arsenic exposure | concentrations [SUMMARY]
[CONTENT] arsenic | tobacco | skin | urine | exposure | lesions | skin lesions | water | arsenic exposure | concentrations [SUMMARY]
null
[CONTENT] arsenic | tobacco | skin | urine | exposure | lesions | skin lesions | water | arsenic exposure | concentrations [SUMMARY]
null
null
[CONTENT] tobacco | water | lod | leaves | 1970 | collected | chewing | chewing tobacco | 2006b | rahman 2006b [SUMMARY]
[CONTENT] tobacco | women | 95 | table | ci | 95 ci | men | ma | tertile | use [SUMMARY]
null
[CONTENT] arsenic | tobacco | water | skin | exposure | urine | lesions | skin lesions | study | arsenic exposure [SUMMARY]
null
null
[CONTENT] Bangladesh | 526 | 4 years old | 47% | 504 | 54% | MA | ICPMS [SUMMARY]
[CONTENT] almost three ||| 33% | 58% ||| Only two | 21% | 43% ||| 3.8 ||| 95% | CI | 1.4 | MA ||| MA | %inorganic arsenic (iAs | 7.3 | 7.5 [SUMMARY]
null
[CONTENT] ||| Bangladesh | 526 | 4 years old | 47% | 504 | 54% | MA | ICPMS ||| ||| almost three ||| 33% | 58% ||| Only two | 21% | 43% ||| 3.8 ||| 95% | CI | 1.4 | MA ||| MA | %inorganic arsenic (iAs | 7.3 | 7.5 ||| impaired arsenic methylation [SUMMARY]
null
Re-irradiation of recurrent glioblastoma multiforme using 11C-methionine PET/CT/MRI image fusion for hypofractionated stereotactic radiotherapy by intensity modulated radiation therapy.
25123357
This research paper presents a valid treatment strategy for recurrent glioblastoma multiforme (GBM) using hypofractionated stereotactic radiotherapy by intensity modulated radiation therapy (HS-IMRT) planned with 11C-methionine positron emission tomography (MET-PET)/computed tomography (CT)/magnetic resonance imaging (MRI) fusion.
BACKGROUND
Twenty-one patients with recurrent GBM received HS-IMRT planned by MET-PET/CT/MRI. The region of increased amino acid tracer uptake on MET-PET was defined as the gross tumor volume (GTV). The planning target volume encompassed the GTV by a 3-mm margin. Treatment was performed with a total dose of 25–35 Gy, given as 5–7 Gy daily for 5 days.
METHODS
With a median follow-up of 12 months, median overall survival time (OS) was 11 months from the start of HS-IMRT, with a 6-month and 1-year survival rate of 71.4% and 38.1%, respectively. Karnofsky performance status was a significant prognostic factor of OS as tested by univariate and multivariate analysis. Re-operation rate was 4.8% for radiation necrosis. No other acute or late toxicity Grade 3 or higher was observed.
RESULTS
This is the first prospective study of biologic imaging optimized HS-IMRT in recurrent GBM. HS-IMRT with PET data seems to be well tolerated and resulted in a median survival time of 11 months after HS-IMRT.
CONCLUSIONS
[ "Adult", "Aged", "Brain Neoplasms", "Carbon Radioisotopes", "Chemotherapy, Adjuvant", "Combined Modality Therapy", "Female", "Glioblastoma", "Humans", "Kaplan-Meier Estimate", "Magnetic Resonance Imaging", "Male", "Methionine", "Middle Aged", "Multimodal Imaging", "Neoplasm Recurrence, Local", "Positron-Emission Tomography", "Proportional Hazards Models", "Radiopharmaceuticals", "Radiosurgery", "Radiotherapy Planning, Computer-Assisted", "Radiotherapy, Intensity-Modulated", "Tomography, X-Ray Computed", "Young Adult" ]
4155106
Background
In recurrent gliomas retreated with radiation therapy, precise dose delivery is extremely important in order to reduce the risk of normal brain toxicity. Recently, novel treatment modalities with increased radiation dose-target conformality, such as intensity modulated radiation therapy (IMRT), have been introduced [1–4]. While IMRT has superior target isodose coverage compared to other external radiation techniques in scenarios involving geometrically complex target volumes adjacent to radiosensitive tissues, planning and delivery in IMRT are resource intensive and require specific and costly software and hardware. Gross tumor volume (GTV) delineation in gliomas has been traditionally based on computed tomography (CT) and magnetic resonance imaging (MRI). However, 11C-methionine positron emission tomography (MET-PET) for high-grade gliomas was recently demonstrated to have an improved specificity and sensitivity and is the rationale for the integration of biologic imaging in the treatment planning [5–8]. In previous studies using MET-PET/MRI image fusion, we demonstrated that biologic imaging helps to detect tumor infiltration in regions with a non-specific MRI appearance in a significant number of patients [9–11]. Moreover, non-specific post-radiotherapeutic changes (e.g., radiation necrosis, gliosis, unspecific blood–brain barrier disturbance) could be differentiated from tumor tissue with a higher accuracy [12–14]. A recent study demonstrated that MET-PET could improve the ability to identify areas with a high risk of local failure in GBM patients [15]. Based on the prior PET studies, we hypothesized that an approach of hypofractionated stereotactic radiotherapy by IMRT (HS-IMRT) with the use of MET-PET data would be an effective strategy for recurrent GBM. 
This prospective study was designed to measure the acute and late toxicity of patients treated with HS-IMRT planned by MET-PET, response of recurrent GBM to this treatment, overall survival (OS), and the time to disease progression after treatment.
Methods
Patient eligibility
Patients were recruited from September 2007 to August 2011. Adult patients (aged ≥ 18 years) with histopathologic confirmation of GBM and local tumor recurrence were eligible. Primary treatment consisted of subtotal surgical resection in all patients. All patients had a Karnofsky performance status (KPS) ≥ 60 and had previously received external postoperative radiotherapy to a mean and median dose of 60 Gy (range 54–68 Gy) with concomitant and adjuvant temozolomide (TMZ) chemotherapy. Additional inclusion criteria were first macroscopic relapse at the original site and adequate bone marrow, hepatic, and renal function. In all cases, a multidisciplinary panel judged the resectability of the lesion before inclusion; only patients with non-resectable lesions were included in the study.

Study design
This prospective nonrandomized single-institution study was approved by the Institutional Review Board of the Department of Radiation Oncology, Kizawa Memorial Hospital.
Informed consent was obtained from each subject after disclosure of the potential risks of HS-IMRT and discussion of potential alternative treatments. Baseline evaluation included gadolinium-enhanced brain MRI and MET-PET, complete physical and neurological examination, and blood and urine tests within 2 weeks before treatment. After completion of HS-IMRT, patients underwent a physical and neurological examination and repeat brain MRI and MET-PET. TMZ chemotherapy was continued in a manner consistent with standard clinical use after the HS-IMRT course.

Imaging: CT
CT (matrix size: 512 × 512, FOV 50 × 50 cm) was performed using a helical CT scanner (Light Speed; General Electric, Waukesha, WI). The patient's head was immobilized in a commercially available stereotactic mask, and scans were acquired with a 2.5-mm slice thickness without a gap.
Imaging: MRI
MRI (matrix size: 256 × 256, FOV 25 × 25 cm) for radiation treatment planning was performed using a 1.5-T instrument (Light Speed; General Electric) with a standard head coil and without rigid immobilization. An axial three-dimensional gradient-echo T1-weighted sequence with gadolinium and a 2.0-mm slice thickness was acquired from the foramen magnum to the vertex, perpendicular to the main magnetic field.

Imaging: MET-PET
An ADVANCE NXi Imaging System (General Electric Yokokawa Medical System, Hino-shi, Tokyo), which provides 35 transaxial images at 4.25-mm intervals, was used for PET scanning. The crystal width was 4.0 mm (transaxial), the in-plane spatial resolution (full width at half-maximum) was 4.8 mm, and scans were performed in standard two-dimensional mode. Before the emission scans, a 3-min transmission scan was performed to correct for photon attenuation using a ring source containing 68Ge. A dose of 7.0 MBq/kg of MET was injected intravenously, and emission scans were acquired for 30 min, beginning 5 min after injection. During MET-PET data acquisition, head motion was continuously monitored using laser beams projected onto ink markers drawn on the forehead skin and corrected as necessary. PET/CT and MRI volumes were fused using commercially available software (Syntegra, Philips Medical System, Fitchburg, WI) with a combination of automatic and manual methods.
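The weight-based tracer dosing above (7.0 MBq/kg) is a direct multiplication. As a minimal illustration, assuming a hypothetical 60-kg patient (not a study value):

```python
# Injected 11C-methionine activity scales with body weight: 7.0 MBq per kg,
# as specified in the imaging protocol above.
DOSE_PER_KG_MBQ = 7.0

def injected_activity_mbq(body_weight_kg):
    """Total injected MET activity in MBq for a given body weight."""
    return DOSE_PER_KG_MBQ * body_weight_kg

activity = injected_activity_mbq(60.0)  # assumed 60-kg patient -> 420.0 MBq
```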
Fusion accuracy was evaluated by consensus of 3 expert radiologists, each with 15 years of experience, using anatomical fiducials such as the eyeballs, lacrimal glands, and lateral ventricles.

Radiation technique
Target volumes were delineated on the registered PET/CT-MRI images and used to plan HS-IMRT delivery. In this preliminary study, the GTV was defined as the area of high uptake on MET-PET (Figure 1). This high-uptake region was defined by a threshold ratio of lesion to normal tissue radioisotope counts per pixel of at least 1.3.
A previous study of high-grade gliomas, comparing local MET uptake with the histology of stereotactically guided biopsies, demonstrated a sensitivity of 87% and a specificity of 89% for the detection of tumor tissue at the same threshold of 1.3-fold MET uptake relative to normal brain tissue [16]. This threshold was further validated in a study comparing diffusion tensor imaging with MET uptake [17]. Although GTV delineation used the automatic contouring mode of the treatment planning system (Philips Pinnacle v9.0), the final GTV was confirmed by consensus among 3 observers based on the co-registered MRI and PET data. The PET/MRI fusion image was positioned using the CT scanner integrated with the Helical TomoTherapy unit (TomoTherapy Inc., Madison, WI). Finally, the GTV was expanded uniformly by 3 mm to generate the planning target volume (PTV).
The prescribed dose for re-irradiation was based on tumor volume, prior radiation dose, time since external postoperative radiotherapy, and proximity of the lesion to eloquent brain or radiosensitive structures. Radiosensitive structures, including the brainstem, optic chiasm, lenses, optic nerves, and cerebral cortex, were outlined, and dose-volume histograms for each structure were obtained to ensure that the doses delivered to them were tolerable. Considering these conditions, the dose to the PTV was prescribed to the 80-95% isodose line, and the total dose ranged from 25 Gy to 35 Gy per patient. The dose maps and dose-volume histograms of a representative case are illustrated in Figure 2.
Figure 1. An example of a target planned for hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B) 11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). The threshold for increased MET uptake was set at ≥1.3 in the contiguous tumor region.
Figure 2. A dose map and dose-volume histogram of a representative case, showing the prescribed dose to the planning target volume (PTV) and the doses to organs at risk.
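The voxel-wise threshold rule used for GTV delineation (uptake ≥ 1.3 × normal brain) can be sketched as follows. This is a minimal illustration, not the clinical contouring pipeline; the toy 3×3 slice and the normal-brain reference value are assumptions:

```python
# Minimal sketch of threshold-based GTV delineation on a MET-PET slice.
# Assumptions: `pet_slice` holds tracer counts per pixel, and
# `normal_reference` is the mean count in contralateral normal brain.

THRESHOLD = 1.3  # lesion-to-normal uptake ratio used in the study

def gtv_mask(pet_slice, normal_reference, threshold=THRESHOLD):
    """Return a boolean mask marking pixels with uptake >= threshold x normal."""
    return [[pixel / normal_reference >= threshold for pixel in row]
            for row in pet_slice]

# Toy 3x3 slice: two "hot" pixels relative to a normal reference of 100 counts.
slice_counts = [[100, 110, 95],
                [105, 140, 120],
                [90, 100, 135]]
mask = gtv_mask(slice_counts, normal_reference=100)
```

In the actual workflow the mask would be grown only over the contiguous tumor region and then reviewed by the observers, as described above.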
Chemotherapy
After the HS-IMRT course, TMZ chemotherapy was administered at 200 mg/m2/day for 5 days. Cycles were repeated every 28 days, with 3 or more cycles after HS-IMRT, and treatment was discontinued upon unequivocal progression or severe toxicity. TMZ was not administered to patients who refused treatment or did not meet the inclusion criteria, which were: KPS score of 50 or higher; good major organ function (bone marrow, liver, kidney, etc.) on routine laboratory studies; and expected survival of more than 3 months.

Follow-up
Regular serial neurological and radiological examinations were performed at 1 month after completion of treatment and every 2 months thereafter, or in the event of neurological decline. Follow-up examinations included MRI and MET-PET imaging. If MRI revealed further enlargement of the enhanced mass, the lesion was diagnosed as "local progression", and the day on which MRI first revealed lesion enlargement was defined as the date of progression. However, in cases with low MET uptake on PET within the MRI-enhanced lesion, the diagnosis was changed to "radiation necrosis".
The MET-PET criterion for differentiating local progression from radiation necrosis followed a previous MET-PET study by Takenaka et al. [18]. However, PET examination could not be performed in all patients during follow-up; in cases without PET examination, the diagnosis of radiation necrosis was based on pathologic examination or clinical course. Cases in which lesions shrank spontaneously or decreased in size during corticosteroid treatment were also classified as "radiation necrosis". A diagnosis of "distant failure" was defined as the appearance of a new enhanced lesion distant from the original tumor site. Either local progression or distant failure was defined as disease progression. Acute and late toxicities were graded according to the Common Terminology Criteria for Adverse Events (version 4).
Statistical analysis
Survival events were defined as death from any cause for OS and as disease progression for progression-free survival (PFS). OS and PFS were measured from the date of HS-IMRT to the date of the documented event and estimated with the Kaplan-Meier method. Tumor- and therapy-related variables were tested for correlation with survival using the log-rank test: age (≥50 vs. <50), KPS (≥70 vs. <70), and combined TMZ chemotherapy (yes vs. no). P-values of less than 0.05 were considered significant. Prognostic factors were further evaluated in a multivariate stepwise Cox regression analysis.
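The OS and PFS curves rest on the Kaplan-Meier product-limit estimator. As a rough sketch of how such curves and the median survival are derived (a minimal pure-Python illustration with made-up survival times, not the study data):

```python
# Kaplan-Meier product-limit estimator.
# `data` is a list of (time_in_months, event) pairs: event=1 for death or
# progression, event=0 for censoring. Returns the survival curve as
# (time, survival probability) steps at each event time.

def kaplan_meier(data):
    data = sorted(data)
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for time, event in data if time == t and event == 1)
        censored = sum(1 for time, event in data if time == t and event == 0)
        if deaths:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= deaths + censored
        i += deaths + censored
    return curve

def median_survival(curve):
    """First time at which the survival estimate drops to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached

# Illustrative cohort of 7 patients (months, event); not the study data.
sample = [(3, 1), (5, 1), (6, 0), (11, 1), (12, 1), (14, 0), (20, 1)]
curve = kaplan_meier(sample)
```

The survival rate at a fixed horizon (e.g., 6 months or 1 year, as reported in the Results) is then simply the curve value at the last event time not exceeding that horizon. In practice this analysis would be done with a validated statistics package rather than hand-rolled code.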
Results
Twenty-one patients (18 men, 3 women) with histologically confirmed GBM were enrolled in this trial (Table 1). The median patient age was 53.9 years (range 22–76), and the median KPS was 80 (range 60–90).

Table 1. Patient characteristics
Parameter | n (%)
Age ≥50 years | 13 (62)
Age <50 years | 8 (38)
Male | 18 (86)
Female | 3 (14)
KPS score ≥70 | 16 (76)
KPS score <70 | 5 (24)
Combined TMZ chemotherapy, yes | 13 (62)
Combined TMZ chemotherapy, no | 8 (38)
Abbreviations: KPS, Karnofsky performance status; TMZ, temozolomide.

The median interval between external postoperative radiotherapy and study enrollment was 12 months (range, 3–48 months). HS-IMRT was delivered in 5 fractions, with a total PTV dose of 25 Gy in 6 patients, 30 Gy in 11 patients, and 35 Gy in 4 patients, given as 5 to 7 Gy daily over 5 days. The average PTV was 27.4 ± 24.1 cm3 (range, 3.4–102.9 cm3). All 21 patients completed the prescribed HS-IMRT course. Overall, thirteen patients (62%) received combined modality treatment with TMZ.

Toxicity assessment
No patients demonstrated significant acute toxicity, and all patients were able to complete the prescribed radiation dose without interruption.
In the late phase, Grade 2 radiation necrosis was observed in 1 patient, although the lesion decreased in size during corticosteroid treatment. One patient, who received a radiation dose of 25.0 Gy, experienced Grade 4 radiation necrosis presenting as mental deterioration 4 months after HS-IMRT; a second necrotomy was required 5 months after HS-IMRT. Although the patient's clinical condition was stable for a long period after the second surgery, disease progression occurred 40 months after HS-IMRT, causing death. The actuarial reoperation rate for radiation necrosis was 4.8%.

Outcomes
With a median follow-up of 12 months, the median OS was 11 months from the date of HS-IMRT, with 6-month and 1-year survival rates of 71.4% and 38.1%, respectively (Figure 3A). Survival rates by age, KPS, and TMZ chemotherapy are shown in Figure 4A-C. Median OS was 12 months for patients with KPS of 70 or higher versus 5 months for patients with KPS below 70; KPS was a significant prognostic factor for OS on univariate analysis (p < 0.001). Median OS was 12 months for patients who received combined TMZ chemotherapy versus 6 months for those who did not, a difference that approached significance (p = 0.079). In the multivariate model, only KPS remained statistically significant (p = 0.005) (Table 2).

Figure 3. (A) Overall survival and (B) progression-free survival for all patients from the date of re-irradiation. The median overall survival time was 11 months, and the median progression-free survival time was 6 months.
Figure 4. Overall survival rates among subgroups defined by (A) age, (B) Karnofsky performance status (KPS), and (C) combined temozolomide (TMZ) chemotherapy.

Table 2. Analysis of prognostic variables for overall survival. Abbreviations as in Table 1. Statistical analyses were performed with the log-rank test (univariate) and Cox's proportional hazards model (multivariate).

The median PFS was 6 months from the date of HS-IMRT, with 6-month and 1-year progression-free rates of 38.1% and 14.3%, respectively (Figure 3B). KPS was a significant prognostic factor for PFS on univariate analysis (p = 0.016); on multivariate analysis, none of the variables significantly predicted PFS (Table 3). A representative case, in which a follow-up MET-PET scan was performed to improve diagnostic accuracy, is shown in Figure 5.

Table 3. Analysis of prognostic variables for progression-free survival
Variable | Median survival (months) | Univariate p* | Multivariate p†
Age ≥50 | 6 | 0.279 |
Age <50 | 6 | |
KPS ≥70 | 6 | 0.016 | 0.059
KPS <70 | 5 | |
Combined TMZ chemotherapy, yes | 6 | 0.447 | 0.479
Combined TMZ chemotherapy, no | 5 | |
Abbreviations as in Table 1. *Log-rank test; †Cox's proportional hazards model.

Figure 5. Recurrent glioblastoma multiforme (GBM) in a 55-year-old man before and after hypofractionated stereotactic radiotherapy by intensity modulated radiation therapy (HS-IMRT). Before HS-IMRT, two enhanced lesions (long and short arrows) were demonstrated in the left temporal lobe on T1-weighted magnetic resonance imaging (A). 11C-methionine positron emission tomography (MET-PET) demonstrated high MET uptake in the region of the short arrow, which was defined as the gross tumor volume (red line) (B). Five months after HS-IMRT, there was no tumor recurrence at the lesion marked by the long arrow (C and D). The enhanced lesion (short arrow) had increased in size (C), although MET uptake had decreased relative to normal tissue, suggesting necrotic change in the irradiated region (D). The patient had no neurologic deficits or quality-of-life issues, and KPS was 90%.
Conclusions
This is the first prospective study to use biologic imaging, via MET-PET/CT/MRI image fusion, to optimize HS-IMRT in recurrent GBM. A low frequency of side effects was observed. HS-IMRT planned with PET data appears to be well tolerated and resulted in a median survival time of 11 months after HS-IMRT, although a properly designed randomized trial is necessary to firmly establish whether the present regimen is superior to other treatment methods for recurrent GBM.
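The GTV-delineation rule described in the Methods flags voxels whose MET uptake is at least 1.3 times that of normal tissue. A minimal sketch of that thresholding step follows; the uptake values and the normal-tissue reference mean are hypothetical, and the actual clinical contouring was done in the Pinnacle planning system, not in code:

```python
def high_uptake_mask(uptake, normal_mean, ratio=1.3):
    """Flag pixels whose tumor-to-normal uptake ratio meets the threshold.

    uptake:      2-D list of MET counts per pixel (hypothetical values)
    normal_mean: mean counts in a normal-tissue reference region
                 (an assumption for illustration; the paper uses a
                 lesion-to-normal-tissue count index of >= 1.3)
    Returns a 2-D 0/1 mask of the same shape.
    """
    return [[1 if v / normal_mean >= ratio else 0 for v in row]
            for row in uptake]

# Hypothetical 3x3 uptake grid; normal reference mean = 10.0
grid = [[9.0, 14.0, 12.0],
        [13.5, 20.0, 13.0],
        [10.0, 12.9, 15.0]]
print(high_uptake_mask(grid, 10.0))
# -> [[0, 1, 0], [1, 1, 1], [0, 0, 1]]
```

In the study the resulting high-uptake region was additionally required to be contiguous with the tumor and was confirmed by three observers before being expanded by 3 mm into the PTV.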
[ "Background", "Patients eligibility", "Study design", "Imaging: CT", "Imaging: MRI", "Imaging: MET-PET", "Radiation technique", "Chemotherapy", "Follow-up", "Statistical analysis", "Toxicity assessment", "Outcomes" ]
[ "In recurrent gliomas retreated with radiation therapy, precise dose delivery is extremely important in order to reduce the risk of normal brain toxicity. Recently, novel treatment modalities with increased radiation dose-target conformality, such as intensity modulated radiation therapy (IMRT), have been introduced\n[1–4]. While IMRT has superior target isodose coverage compared to other external radiation techniques in scenarios involving geometrically complex target volumes adjacent to radiosensitive tissues, planning and delivery in IMRT are resource intensive and require specific and costly software and hardware.\nGross tumor volume (GTV) delineation in gliomas has been traditionally based on computed tomography (CT) and magnetic resonance imaging (MRI). However, 11C-methionine positron emission tomography (MET-PET) for high-grade gliomas was recently demonstrated to have an improved specificity and sensitivity and is the rationale for the integration of biologic imaging in the treatment planning\n[5–8]. In previous studies using MET-PET/MRI image fusion, we demonstrated that biologic imaging helps to detect tumor infiltration in regions with a non-specific MRI appearance in a significant number of patients\n[9–11]. Moreover, non-specific post-radiotherapeutic changes (e.g., radiation necrosis, gliosis, unspecific blood–brain barrier disturbance) could be differentiated from tumor tissue with a higher accuracy\n[12–14]. A recent study demonstrated that MET-PET could improve the ability to identify areas with a high risk of local failure in GBM patients\n[15].\nBased on the prior PET studies, we hypothesized that an approach of hypofractionated stereotactic radiotherapy by IMRT (HS-IMRT) with the use of MET-PET data would be an effective strategy for recurrent GBM. 
This prospective study was designed to measure the acute and late toxicity of patients treated with HS-IMRT planned by MET-PET, response of recurrent GBM to this treatment, overall survival (OS), and the time to disease progression after treatment.", "Patients were recruited from September 2007 to August 2011. Adult patients (aged ≥ 18 years) with histopathologic confirmation of GBM who had local recurrent tumor were eligible. Primary treatment consisted of subtotal surgical resection in all patients. All patients had a Karnofsky performance status (KPS) ≥ 60 and were previously treated with external postoperative radiotherapy to a mean and median dose of 60 Gy (range 54 Gy–68 Gy) with concomitant and adjuvant Temozolomide (TMZ) chemotherapy. Additional inclusion criteria consisted of the following: age 18 years or older; first macroscopical relapse at the original site; and adequate bone marrow, hepatic, and renal function. In all cases, a multidisciplinary panel judged the resectability of the lesion before inclusion in this study. Thus patients with non-resectable lesions were included in the study.", "This prospective nonrandomized single-institution study was approved by the Department of Radiation Oncology of Kizawa Memorial Hospital Institutional Review Board. Informed consent was obtained from each subject after disclosing the potential risks of HS-IMRT and discussion of potential alternative treatments. Baseline evaluation included gadolinium-enhanced brain MRI and MET-PET, complete physical and neurological examination, and blood and urine tests within 2 weeks before treatment. After completion of HS-IMRT, patients underwent a physical and neurological examination and a repeat brain MRI and MET-PET. TMZ chemotherapy was continued in a manner consistent with standard clinical use after HS-IMRT course.", "CT (matrix size: 512 × 512, FOV 50 × 50 cm) was performed using a helical CT instrument (Light Speed; General Electric, Waukesha, WI). 
The patient head was immobilized in a commercially available stereotactic mask, and scans were performed with a 2.5-mm slice thickness without a gap.", "MRI (matrix size: 256 × 256, FOV 25 × 25 cm) for radiation treatment planning was performed using a 1.5-T instrument (Light Speed; General Electric). A standard head coil without rigid immobilization was used. An axial, three-dimensional gradient echo T1-weighted sequence with gadolinium and 2.0-mm slice thickness were acquired from the foramen magnum to the vertex, perpendicular to the main magnetic field.", "An ADVANCE NXi Imaging System (General Electric Yokokawa Medical System, Hino-shi, Tokyo), which provides 35 transaxial images at 4.25-mm intervals, was used for PET scanning. A crystal width of 4.0 mm (transaxial) was used. The in-plane spatial resolution (full width at half-maximum) was 4.8 mm, and scans were performed in a standard two-dimensional mode. Before emission scans were performed, a 3-min transmission scan was performed to correct photon attenuation using a ring source containing 68Ge. A dose of 7.0 MBq/kg of MET was injected intravenously. Emission scans were acquired for 30 min, beginning 5 min after MET injection. During MET-PET data acquisition, head motion was continuously monitored using laser beams projected onto ink markers drawn over the forehead skin and corrected as necessary. PET/CT and MRI volumes were fused using commercially available software (Syntegra, Philips Medical System, Fitchburg, WI) using a combination of automatic and manual methods. Fusion accuracy was evaluated by a consensus of 3 expert radiologists with 15 years of experience using anatomical fiducials such as the eyeball, lacrimal glands, and lateral ventricles.", "Target volumes were delineated on the registered PET/CT-MRI images and were used to plan HS-IMRT delivery. In this preliminary study, GTV was defined as using an area of high uptake on MET-PET (Figure \n1). 
This high-uptake region was defined using a threshold value for the lesion versus normal tissue counts of radioisotope per pixel index of at least 1.3. A previous study with high-grade gliomas, comparing the exact local MET uptake with histology of stereotaxically guided biopsies, demonstrated a sensitivity of 87% and specificity of 89% for the detection of tumor tissue at the same threshold of 1.3-fold MET uptake relative to normal brain tissue\n[16]. Further, this threshold was validated in a study using diffusion tensor imaging in comparison with MET uptake\n[17]. Although the delineation of GTV was done by using the automatic contouring mode (Philips Pinnacle v9.0 treatment planning system), the final determination of GTV was confirmed by consensus among 3 observers based on the co-registered MRI and PET data. The PET/MRI fusion image was positioned properly by CT scans equipped with Helical TomoTherapy (TomoTherapy Inc., Madison, WI). Finally, the GTV was expanded uniformly by 3 mm to generate the planning target volume (PTV). The prescribed dose for re-irradiation was based on tumor volume, prior radiation dose, time since external postoperative radiotherapy, and location of the lesion with proximity to eloquent brain or radiosensitive structures. Radiosensitive structures, including the brainstem, optic chiasm, lens, optic nerves, and cerebral cortex, were outlined, and dose-volume histograms for each structure were obtained to ensure that doses delivered to these structures were tolerable. Considering these different conditions, the dose for PTV was prescribed using the 80-95% isodose line, and the total doses were arranged from 25 Gy to 35 Gy in each patient. The dose maps and dose-volume histograms of a representative case are illustrated in Figure \n2.Figure 1\nAn example of a target planned for a hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. 
(B)\n11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). The threshold for increased MET uptake was set to ≥1.3 in the contiguous tumor region.Figure 2\nA dose map and dose-volume histogram of a representative case. Prescribed dose for planning target volume (PTV) and the doses of organs at risk were demonstrated.", "After HS-IMRT course, a dose of 200 mg/m2/day for 5 days with TMZ chemotherapy was administered. Cycles were repeated every 28 days with 3 or more cycles after HS-IMRT. Treatment was discontinued when unequivocal progression or severe toxicity occurred. TMZ was not administrated in patients who refused treatment or did not meet inclusion criteria. Patient inclusion criteria for TMZ chemotherapy consisted of the following: KPS score 50 or higher; evidence of good maintenance of major organ function (bone marrow, liver, kidney, etc.) in routine laboratory studies; expected to survive more than 3 months.", "Regular serial neurological and radiological examinations were initially performed at 1 month after completion of treatment and then every 2 months thereafter or in the event of neurological decay. Follow-up examinations included MRI and MET-PET imaging. 
If the MRI revealed further enlargement of the enhanced mass, the lesion was diagnosed as “local progression”, and the day on which MRI first revealed lesion enlargement was defined as the date of progression. However, in cases with low MET uptake on PET in the MRI-enhanced lesion, the diagnosis was changed to “radiation necrosis”. The criterion of MET-PET differential diagnosis between local progression and radiation necrosis was utilized in a previous MET-PET study by Takenaka et al.\n[18]. However, PET examination could not be performed in all patients during follow-up. In cases without PET examination, diagnosis of radiation necrosis was based on pathologic examination or clinical course. Cases in which lesions showed spontaneous shrinkage or decreased in size during corticosteroid treatment were also defined as “radiation necrosis”. A diagnosis of “distant failure” was defined as the appearance of a new enhanced lesion distant from the original tumor site. Either local progression or distant failure was defined as disease progression. Acute and late toxicities were determined based on the Common Terminology Criteria for Adverse Events (version 4).", "Survival events were defined as death from any cause for OS and as disease progression for progression-free survival (PFS). OS and PFS were analyzed from the date of HS-IMRT to the date of the documented event using the Kaplan-Meier method. Tumor- and therapy-related variables were tested for a possible correlation with survival, using the Log-rank test. Variables included age (≥50 vs. <50), KPS (≥70 vs. <70), and combined TMZ chemotherapy (“yes” vs. “no”). P-values of less than 0.05 were considered significant. Prognostic factors were further evaluated in a multivariate stepwise Cox regression analysis.", "No patients demonstrated significant acute toxicity, and all patients were able to complete the prescribed radiation dose without interruption. 
In the late phase, Grade 2 radiation necrosis was observed in 1 patient, although the lesion decreased in size during corticosteroid treatment. One patient, who received a radiation dose of 25.0 Gy, experienced Grade 4 radiation necrosis in the form of mental deterioration 4 months after HS-IMRT. In this case, a second necrotomy was required 5 months after HS-IMRT. Although the patient's clinical condition was stable for a long period after the second surgery, disease progression occurred 40 months after HS-IMRT, causing death. The actuarial reoperation rate was 4.8% for radiation necrosis.", "With a median follow-up of 12 months, the median OS was 11 months from the date of HS-IMRT, with 6-month and 1-year survival rates of 71.4% and 38.1%, respectively (Figure 3A). The survival rates by age, KPS, and TMZ chemotherapy are shown in Figure 4A-C. OS of patients with KPS 70 or over was 12 months versus 5 months for patients with KPS less than 70. KPS was a significant prognostic factor of OS as tested by univariate analysis (p < 0.001). OS of patients who received combined TMZ chemotherapy was 12 months versus 6 months for patients who did not receive chemotherapy. The difference between the two groups approached significance (p = 0.079). In the multivariate model, only KPS remained statistically significant (p = 0.005) (Table 2).
Figure 3 (A) Overall survival and (B) Progression-free survival for all patients from the date of re-irradiation. The median overall survival time was 11 months, and the median progression-free survival time was 6 months from the date of re-irradiation.
Figure 4 Overall survival rates among different subgroups by (A) age, (B) Karnofsky performance status (KPS), and (C) combined temozolomide (TMZ) chemotherapy.
Table 2 Analysis of prognostic variables for overall survival
Abbreviations as in Table 1. Statistical analyses were performed with *Log-rank test and †Cox's proportional hazards model.
The median PFS was 6 months from the date of HS-IMRT treatment, with 6-month and 1-year progression-free rates of 38.1% and 14.3%, respectively (Figure 3B). KPS was the significant prognostic factor for PFS as tested by univariate analysis (p = 0.016). On multivariate analysis, none of the variables were significantly predictive of PFS (Table 3).
A representative case is shown in Figure 5, in which a follow-up MET-PET scan was performed to improve the diagnostic accuracy.
Table 3 Analysis of prognostic variables for progression-free survival
Variables | Median survival (months) | Univariate analysis* p value | Multivariate analysis† p value
Age ≥50 | 6 | 0.279 |
Age <50 | 6 | |
KPS ≥70 | 6 | 0.016 | 0.059
KPS <70 | 5 | |
Combined TMZ chemotherapy, Yes | 6 | 0.447 | 0.479
Combined TMZ chemotherapy, No | 5 | |
Abbreviations as in Table 1. Statistical analyses were performed with *Log-rank test and †Cox's proportional hazards model.
Figure 5 Recurrent glioblastoma multiforme (GBM) in a 55-year-old man before and after hypofractionated stereotactic radiotherapy by intensity modulated radiation therapy (HS-IMRT). Before HS-IMRT, two enhanced lesions (long and short arrows) were demonstrated in the left temporal lobe on T1-weighted magnetic resonance imaging (A). 11C-methionine positron emission tomography (MET-PET) demonstrated high MET uptake in the region of the short arrow, which was defined as the Gross Tumor Volume (red line) (B). Five months after HS-IMRT, there was no tumor recurrence at the lesion (long arrow, C & D). The enhanced lesion (short arrow) had increased in size (C), although MET uptake decreased relative to normal tissue, suggesting a necrotic change in the irradiated region (D). The patient had no neurologic deficits or quality-of-life issues. KPS was 90%." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients eligibility", "Study design", "Imaging: CT", "Imaging: MRI", "Imaging: MET-PET", "Radiation technique", "Chemotherapy", "Follow-up", "Statistical analysis", "Results", "Toxicity assessment", "Outcomes", "Discussion", "Conclusions" ]
[ "In recurrent gliomas retreated with radiation therapy, precise dose delivery is extremely important in order to reduce the risk of normal brain toxicity. Recently, novel treatment modalities with increased radiation dose-target conformality, such as intensity modulated radiation therapy (IMRT), have been introduced\n[1–4]. While IMRT has superior target isodose coverage compared to other external radiation techniques in scenarios involving geometrically complex target volumes adjacent to radiosensitive tissues, planning and delivery in IMRT are resource intensive and require specific and costly software and hardware.\nGross tumor volume (GTV) delineation in gliomas has been traditionally based on computed tomography (CT) and magnetic resonance imaging (MRI). However, 11C-methionine positron emission tomography (MET-PET) for high-grade gliomas was recently demonstrated to have an improved specificity and sensitivity and is the rationale for the integration of biologic imaging in the treatment planning\n[5–8]. In previous studies using MET-PET/MRI image fusion, we demonstrated that biologic imaging helps to detect tumor infiltration in regions with a non-specific MRI appearance in a significant number of patients\n[9–11]. Moreover, non-specific post-radiotherapeutic changes (e.g., radiation necrosis, gliosis, unspecific blood–brain barrier disturbance) could be differentiated from tumor tissue with a higher accuracy\n[12–14]. A recent study demonstrated that MET-PET could improve the ability to identify areas with a high risk of local failure in GBM patients\n[15].\nBased on the prior PET studies, we hypothesized that an approach of hypofractionated stereotactic radiotherapy by IMRT (HS-IMRT) with the use of MET-PET data would be an effective strategy for recurrent GBM. 
This prospective study was designed to measure the acute and late toxicity of patients treated with HS-IMRT planned by MET-PET, response of recurrent GBM to this treatment, overall survival (OS), and the time to disease progression after treatment.", " Patients eligibility Patients were recruited from September 2007 to August 2011. Adult patients (aged ≥ 18 years) with histopathologic confirmation of GBM who had local recurrent tumor were eligible. Primary treatment consisted of subtotal surgical resection in all patients. All patients had a Karnofsky performance status (KPS) ≥ 60 and were previously treated with external postoperative radiotherapy to a mean and median dose of 60 Gy (range 54 Gy–68 Gy) with concomitant and adjuvant Temozolomide (TMZ) chemotherapy. Additional inclusion criteria consisted of the following: age 18 years or older; first macroscopical relapse at the original site; and adequate bone marrow, hepatic, and renal function. In all cases, a multidisciplinary panel judged the resectability of the lesion before inclusion in this study. Thus patients with non-resectable lesions were included in the study.\n Study design This prospective nonrandomized single-institution study was approved by the Department of Radiation Oncology of Kizawa Memorial Hospital Institutional Review Board. Informed consent was obtained from each subject after disclosing the potential risks of HS-IMRT and discussion of potential alternative treatments. Baseline evaluation included gadolinium-enhanced brain MRI and MET-PET, complete physical and neurological examination, and blood and urine tests within 2 weeks before treatment. After completion of HS-IMRT, patients underwent a physical and neurological examination and a repeat brain MRI and MET-PET. TMZ chemotherapy was continued in a manner consistent with standard clinical use after HS-IMRT course.\n Imaging: CT CT (matrix size: 512 × 512, FOV 50 × 50 cm) was performed using a helical CT instrument (Light Speed; General Electric, Waukesha, WI). The patient head was immobilized in a commercially available stereotactic mask, and scans were performed with a 2.5-mm slice thickness without a gap.\n Imaging: MRI MRI (matrix size: 256 × 256, FOV 25 × 25 cm) for radiation treatment planning was performed using a 1.5-T instrument (Light Speed; General Electric). A standard head coil without rigid immobilization was used. An axial, three-dimensional gradient echo T1-weighted sequence with gadolinium and 2.0-mm slice thickness were acquired from the foramen magnum to the vertex, perpendicular to the main magnetic field.\n Imaging: MET-PET An ADVANCE NXi Imaging System (General Electric Yokokawa Medical System, Hino-shi, Tokyo), which provides 35 transaxial images at 4.25-mm intervals, was used for PET scanning. A crystal width of 4.0 mm (transaxial) was used. The in-plane spatial resolution (full width at half-maximum) was 4.8 mm, and scans were performed in a standard two-dimensional mode. Before emission scans were performed, a 3-min transmission scan was performed to correct photon attenuation using a ring source containing 68Ge. A dose of 7.0 MBq/kg of MET was injected intravenously. Emission scans were acquired for 30 min, beginning 5 min after MET injection. During MET-PET data acquisition, head motion was continuously monitored using laser beams projected onto ink markers drawn over the forehead skin and corrected as necessary. PET/CT and MRI volumes were fused using commercially available software (Syntegra, Philips Medical System, Fitchburg, WI) using a combination of automatic and manual methods. Fusion accuracy was evaluated by a consensus of 3 expert radiologists with 15 years of experience using anatomical fiducials such as the eyeball, lacrimal glands, and lateral ventricles.\n Radiation technique Target volumes were delineated on the registered PET/CT-MRI images and were used to plan HS-IMRT delivery. In this preliminary study, GTV was defined as using an area of high uptake on MET-PET (Figure \n1). This high-uptake region was defined using a threshold value for the lesion versus normal tissue counts of radioisotope per pixel index of at least 1.3. 
A previous study with high-grade gliomas, comparing the exact local MET uptake with histology of stereotaxically guided biopsies, demonstrated a sensitivity of 87% and specificity of 89% for the detection of tumor tissue at the same threshold of 1.3-fold MET uptake relative to normal brain tissue\n[16]. Further, this threshold was validated in a study using diffusion tensor imaging in comparison with MET uptake\n[17]. Although the delineation of GTV was done by using the automatic contouring mode (Philips Pinnacle v9.0 treatment planning system), the final determination of GTV was confirmed by consensus among 3 observers based on the co-registered MRI and PET data. The PET/MRI fusion image was positioned properly by CT scans equipped with Helical TomoTherapy (TomoTherapy Inc., Madison, WI). Finally, the GTV was expanded uniformly by 3 mm to generate the planning target volume (PTV). The prescribed dose for re-irradiation was based on tumor volume, prior radiation dose, time since external postoperative radiotherapy, and location of the lesion with proximity to eloquent brain or radiosensitive structures. Radiosensitive structures, including the brainstem, optic chiasm, lens, optic nerves, and cerebral cortex, were outlined, and dose-volume histograms for each structure were obtained to ensure that doses delivered to these structures were tolerable. Considering these different conditions, the dose for PTV was prescribed using the 80-95% isodose line, and the total doses were arranged from 25 Gy to 35 Gy in each patient. The dose maps and dose-volume histograms of a representative case are illustrated in Figure \n2.Figure 1\nAn example of a target planned for a hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B)\n11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). 
The threshold for increased MET uptake was set to ≥1.3 in the contiguous tumor region.Figure 2\nA dose map and dose-volume histogram of a representative case. Prescribed dose for planning target volume (PTV) and the doses of organs at risk were demonstrated.\n\nAn example of a target planned for a hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B)\n11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). The threshold for increased MET uptake was set to ≥1.3 in the contiguous tumor region.\n\nA dose map and dose-volume histogram of a representative case. Prescribed dose for planning target volume (PTV) and the doses of organs at risk were demonstrated.\nTarget volumes were delineated on the registered PET/CT-MRI images and were used to plan HS-IMRT delivery. In this preliminary study, GTV was defined as using an area of high uptake on MET-PET (Figure \n1). This high-uptake region was defined using a threshold value for the lesion versus normal tissue counts of radioisotope per pixel index of at least 1.3. A previous study with high-grade gliomas, comparing the exact local MET uptake with histology of stereotaxically guided biopsies, demonstrated a sensitivity of 87% and specificity of 89% for the detection of tumor tissue at the same threshold of 1.3-fold MET uptake relative to normal brain tissue\n[16]. Further, this threshold was validated in a study using diffusion tensor imaging in comparison with MET uptake\n[17]. Although the delineation of GTV was done by using the automatic contouring mode (Philips Pinnacle v9.0 treatment planning system), the final determination of GTV was confirmed by consensus among 3 observers based on the co-registered MRI and PET data. 
The PET/MRI fusion image was positioned properly by CT scans equipped with Helical TomoTherapy (TomoTherapy Inc., Madison, WI). Finally, the GTV was expanded uniformly by 3 mm to generate the planning target volume (PTV). The prescribed dose for re-irradiation was based on tumor volume, prior radiation dose, time since external postoperative radiotherapy, and location of the lesion with proximity to eloquent brain or radiosensitive structures. Radiosensitive structures, including the brainstem, optic chiasm, lens, optic nerves, and cerebral cortex, were outlined, and dose-volume histograms for each structure were obtained to ensure that doses delivered to these structures were tolerable. Considering these different conditions, the dose for PTV was prescribed using the 80-95% isodose line, and the total doses were arranged from 25 Gy to 35 Gy in each patient. The dose maps and dose-volume histograms of a representative case are illustrated in Figure \n2.Figure 1\nAn example of a target planned for a hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B)\n11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). The threshold for increased MET uptake was set to ≥1.3 in the contiguous tumor region.Figure 2\nA dose map and dose-volume histogram of a representative case. Prescribed dose for planning target volume (PTV) and the doses of organs at risk were demonstrated.\n\nAn example of a target planned for a hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B)\n11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). 
Chemotherapy
After the HS-IMRT course, TMZ chemotherapy was administered at a dose of 200 mg/m2/day for 5 days. Cycles were repeated every 28 days, with 3 or more cycles given after HS-IMRT. Treatment was discontinued upon unequivocal progression or severe toxicity. TMZ was not administered to patients who refused treatment or did not meet the inclusion criteria. The inclusion criteria for TMZ chemotherapy were as follows: KPS score of 50 or higher; evidence of well-maintained major organ function (bone marrow, liver, kidney, etc.) on routine laboratory studies; and expected survival of more than 3 months.
Follow-up
Regular serial neurological and radiological examinations were performed initially at 1 month after completion of treatment and every 2 months thereafter, or in the event of neurological decline. Follow-up examinations included MRI and MET-PET imaging. If MRI revealed further enlargement of the enhanced mass, the lesion was diagnosed as “local progression”, and the day on which MRI first revealed lesion enlargement was defined as the date of progression.
However, in cases with low MET uptake on PET within the MRI-enhanced lesion, the diagnosis was changed to “radiation necrosis”. The MET-PET criterion for the differential diagnosis between local progression and radiation necrosis followed a previous MET-PET study by Takenaka et al. [18]. However, not all patients could undergo PET examination during follow-up. In cases without PET examination, the diagnosis of radiation necrosis was based on pathologic examination or the clinical course. Cases in which lesions shrank spontaneously or decreased in size during corticosteroid treatment were also classified as “radiation necrosis”. “Distant failure” was defined as the appearance of a new enhanced lesion distant from the original tumor site. Either local progression or distant failure was defined as disease progression. Acute and late toxicities were graded according to the Common Terminology Criteria for Adverse Events (version 4).
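The follow-up classification rules above amount to a small decision procedure, sketched below. The function name and inputs are hypothetical; the specific uptake cutoff of 1.3 is assumed here for illustration (the study cites Takenaka et al. for the actual MET-PET criterion).

```python
# Hedged sketch of the study's working definitions for follow-up findings.
# The 1.3 uptake cutoff is an illustrative assumption, not the published
# Takenaka criterion.
def classify_followup(mri_enlarged, new_distant_lesion, met_uptake_ratio=None):
    """Classify a follow-up finding.

    met_uptake_ratio: lesion-to-normal MET uptake on PET, or None when no
    PET was obtained (necrosis is then judged by pathology or clinical course).
    """
    if new_distant_lesion:
        return "distant failure"  # counts as disease progression
    if not mri_enlarged:
        return "no progression"
    if met_uptake_ratio is None:
        return "indeterminate: needs pathology or clinical course"
    # Enlarging enhancement with low MET uptake suggests treatment effect
    return ("local progression" if met_uptake_ratio >= 1.3
            else "radiation necrosis")
```

Per the text, either "local progression" or "distant failure" would then be counted as the progression event for PFS.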
Statistical analysis
Survival events were defined as death from any cause for OS and as disease progression for progression-free survival (PFS). OS and PFS were measured from the date of HS-IMRT to the date of the documented event and estimated using the Kaplan-Meier method. Tumor- and therapy-related variables were tested for a possible correlation with survival using the log-rank test. Variables included age (≥50 vs. <50), KPS (≥70 vs. <70), and combined TMZ chemotherapy (yes vs. no). P-values of less than 0.05 were considered significant. Prognostic factors were further evaluated in a multivariate stepwise Cox regression analysis.
Patient eligibility
Patients were recruited from September 2007 to August 2011. Adult patients (aged ≥18 years) with histopathologic confirmation of GBM who had locally recurrent tumor were eligible. Primary treatment consisted of subtotal surgical resection in all patients.
All patients had a Karnofsky performance status (KPS) ≥60 and had previously received external postoperative radiotherapy to a mean and median dose of 60 Gy (range 54-68 Gy) with concomitant and adjuvant temozolomide (TMZ) chemotherapy. Additional inclusion criteria were: age 18 years or older; first macroscopic relapse at the original site; and adequate bone marrow, hepatic, and renal function. In all cases, a multidisciplinary panel judged the resectability of the lesion before inclusion; thus, only patients whose lesions were judged non-resectable were included in the study.
Study design
This prospective, nonrandomized, single-institution study was approved by the Institutional Review Board of the Department of Radiation Oncology, Kizawa Memorial Hospital. Informed consent was obtained from each subject after disclosure of the potential risks of HS-IMRT and discussion of potential alternative treatments. Baseline evaluation included gadolinium-enhanced brain MRI and MET-PET, complete physical and neurological examination, and blood and urine tests within 2 weeks before treatment. After completion of HS-IMRT, patients underwent a physical and neurological examination and repeat brain MRI and MET-PET. TMZ chemotherapy was continued in a manner consistent with standard clinical use after the HS-IMRT course.
CT imaging
CT (matrix size: 512 × 512, FOV 50 × 50 cm) was performed using a helical CT scanner (Light Speed; General Electric, Waukesha, WI). The patient's head was immobilized in a commercially available stereotactic mask, and scans were acquired with a 2.5-mm slice thickness without a gap.
MRI
MRI (matrix size: 256 × 256, FOV 25 × 25 cm) for radiation treatment planning was performed on a 1.5-T scanner (Light Speed; General Electric). A standard head coil without rigid immobilization was used.
An axial three-dimensional gradient-echo T1-weighted sequence with gadolinium and a 2.0-mm slice thickness was acquired from the foramen magnum to the vertex, perpendicular to the main magnetic field.
MET-PET
An ADVANCE NXi Imaging System (General Electric Yokokawa Medical System, Hino-shi, Tokyo), which provides 35 transaxial images at 4.25-mm intervals, was used for PET scanning. The crystal width was 4.0 mm (transaxial). The in-plane spatial resolution (full width at half-maximum) was 4.8 mm, and scans were acquired in standard two-dimensional mode. Before the emission scans, a 3-min transmission scan was performed to correct for photon attenuation using a ring source containing 68Ge. A dose of 7.0 MBq/kg of MET was injected intravenously. Emission scans were acquired for 30 min, beginning 5 min after MET injection. During MET-PET data acquisition, head motion was continuously monitored using laser beams projected onto ink markers drawn on the forehead skin and corrected as necessary. PET/CT and MRI volumes were fused using commercially available software (Syntegra, Philips Medical System, Fitchburg, WI) with a combination of automatic and manual methods. Fusion accuracy was evaluated by consensus of 3 expert radiologists, each with 15 years of experience, using anatomical fiducials such as the eyeballs, lacrimal glands, and lateral ventricles.
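The injection protocol above (7.0 MBq/kg, emission scan from 5 to 35 min post-injection) can be made concrete with a small calculation. The 70-kg patient weight below is a hypothetical example, and the 11C physical half-life (about 20.4 min) is standard isotope data, not a value from the paper.

```python
# Back-of-the-envelope sketch for the MET-PET protocol: injected activity
# at 7.0 MBq/kg, and the fraction of 11C signal remaining across the
# 5-35 min post-injection acquisition window.
import math

HALF_LIFE_C11_MIN = 20.4  # physical half-life of 11C, ~20.4 minutes

def injected_activity_mbq(weight_kg, dose_per_kg=7.0):
    """Total injected activity in MBq for a given patient weight."""
    return weight_kg * dose_per_kg

def remaining_fraction(minutes):
    """Fraction of 11C activity left after `minutes` of decay."""
    return math.exp(-math.log(2) * minutes / HALF_LIFE_C11_MIN)

activity = injected_activity_mbq(70.0)    # 490 MBq for a 70-kg patient
frac_at_start = remaining_fraction(5.0)   # scan starts 5 min post-injection
frac_at_end = remaining_fraction(35.0)    # scan ends 35 min post-injection
```

The rapid decay (under a third of the activity remains by the end of the 30-min acquisition) is one reason 11C tracers require an on-site cyclotron and tight scan timing.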
Results
Twenty-one patients (18 men, 3 women) with histologically confirmed GBM were enrolled in this trial (Table 1).
The median patient age was 53.9 years (range 22-76), and the median KPS was 80 (range 60-90).

Table 1. Patient characteristics, n (%)
  Age, years: ≥50, 13 (62); <50, 8 (38)
  Gender: male, 18 (86); female, 3 (14)
  KPS score: ≥70, 16 (76); <70, 5 (24)
  Combined TMZ chemotherapy: yes, 13 (62); no, 8 (38)
Abbreviations: KPS, Karnofsky performance status; TMZ, temozolomide.

The median elapsed time between external postoperative radiotherapy and study enrollment was 12 months (range 3-48 months). HS-IMRT was delivered in 5 fractions, with a total PTV dose of 25 Gy in 6 patients, 30 Gy in 11 patients, and 35 Gy in 4 patients, given as 5- to 7-Gy daily fractions over 5 days. The average PTV was 27.4 ± 24.1 cm3 (range 3.4-102.9 cm3). All 21 patients completed the prescribed HS-IMRT course. Overall, thirteen patients (62%) received combined modality treatment with TMZ.
Toxicity assessment
No patients demonstrated significant acute toxicity, and all patients completed the prescribed radiation dose without interruption.
In the late phase, Grade 2 radiation necrosis was observed in 1 patient, although the lesion decreased in size during corticosteroid treatment. One patient, who received a radiation dose of 25.0 Gy, experienced Grade 4 radiation necrosis in the form of mental deterioration 4 months after HS-IMRT. In this case, a second necrotomy was required 5 months after HS-IMRT. Although the patient's clinical condition was stable for a long period after the second surgery, disease progression occurred 40 months after HS-IMRT, causing death. The actuarial reoperation rate for radiation necrosis was 4.8%.
Outcomes
With a median follow-up of 12 months, the median OS was 11 months from the date of HS-IMRT, with 6-month and 1-year survival rates of 71.4% and 38.1%, respectively (Figure 3A). The survival rates by age, KPS, and TMZ chemotherapy are shown in Figure 4A-C. OS of patients with KPS 70 or higher was 12 months, versus 5 months for patients with KPS less than 70; KPS was a significant prognostic factor for OS on univariate analysis (p < 0.001). OS of patients who received combined TMZ chemotherapy was 12 months, versus 6 months for patients who did not receive chemotherapy; the difference between the two groups approached significance (p = 0.079). In the multivariate model, only KPS remained statistically significant (p = 0.005) (Table 2).
Figure 3: (A) Overall survival and (B) progression-free survival for all patients from the date of re-irradiation. The median overall survival time was 11 months, and the median progression-free survival time was 6 months.
Figure 4: Overall survival rates among subgroups by (A) age, (B) Karnofsky performance status (KPS), and (C) combined temozolomide (TMZ) chemotherapy.
Table 2. Analysis of prognostic variables for overall survival. Abbreviations as in Table 1. Statistical analyses were performed with the *log-rank test and †Cox's proportional hazards model.
The median PFS was 6 months from the date of HS-IMRT, with 6-month and 1-year progression-free survival rates of 38.1% and 14.3%, respectively (Figure 3B). KPS was the only significant prognostic factor for PFS on univariate analysis (p = 0.016). On multivariate analysis, none of the variables was significantly predictive of PFS (Table 3).
A representative case in which a follow-up MET-PET scan was performed to improve diagnostic accuracy is shown in Figure 5.

Table 3. Analysis of prognostic variables for progression-free survival (median PFS in months)
  Age: ≥50, 6 months; <50, 6 months; univariate p = 0.279
  KPS: ≥70, 6 months; <70, 5 months; univariate p = 0.016, multivariate p = 0.059
  Combined TMZ chemotherapy: yes, 6 months; no, 5 months; univariate p = 0.447, multivariate p = 0.479
Abbreviations as in Table 1. Statistical analyses were performed with the *log-rank test and †Cox's proportional hazards model.

Figure 5: Recurrent glioblastoma multiforme (GBM) in a 55-year-old man before and after hypofractionated stereotactic radiotherapy by intensity modulated radiation therapy (HS-IMRT). Before HS-IMRT, two enhanced lesions (long and short arrows) were demonstrated in the left temporal lobe on T1-weighted magnetic resonance imaging (A). 11C-methionine positron emission tomography (MET-PET) demonstrated high MET uptake in the region of the short arrow, which was defined as the gross tumor volume (red line) (B). Five months after HS-IMRT, there was no tumor recurrence at the lesion marked by the long arrow (C and D).
The enhanced lesion (short arrow) increased in size (C), although MET uptake decreased relative to normal tissue, suggesting necrotic change in the irradiated region (D). The patient had no neurologic deficits or quality-of-life issues, and his KPS was 90.
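The Kaplan-Meier method used for the OS and PFS estimates above can be sketched in a few lines. The event times below are hypothetical illustrations, not the study's patient data; real analyses would use a validated package (e.g., the lifelines library) rather than this toy.

```python
# Minimal Kaplan-Meier product-limit estimator, sketching the survival
# methodology described in the Statistical analysis section.
# Times are follow-up in months; events: 1 = death/progression, 0 = censored.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time."""
    at_risk = len(times)
    survival, curve = 1.0, []
    for t in sorted(set(times)):
        # number of events at this time
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            survival *= 1.0 - deaths / at_risk  # S(t) *= (1 - d_i / n_i)
            curve.append((t, survival))
        # both events and censored subjects leave the risk set after t
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

# Hypothetical 5-patient cohort: 4 events and 1 censored observation
times = [2, 5, 5, 8, 12]
events = [1, 1, 0, 1, 1]
curve = kaplan_meier(times, events)
```

Censored subjects (event flag 0) reduce the risk set without forcing a step in the curve, which is exactly why median survival from a Kaplan-Meier fit can differ from the naive median of observed times.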
Discussion
Recently, much work has been done on various fractionation regimens and dose escalation with IMRT for GBM. These studies report relatively favorable survival results [1-4], although a distinct advantage over conventional radiation therapy has not been demonstrated. However, if a precisely conformal treatment technique such as HS-IMRT is used, and a greater biologic dose is delivered to the infiltrating tumor through hypofractionation, it may be possible to deliver an effective therapy that increases patient survival without increasing morbidity. To meet these requirements, contouring of the target volume is of critical importance.
PET is a newer imaging method that can improve the visualization of biological processes. In recent PET studies, analysis of metabolic and histologic characteristics provided evidence that regionally high MET uptake correlates with malignant pathologic features [6-8, 16]. Furthermore, nonspecific post-therapeutic changes could be differentiated from tumor tissue with higher accuracy [12-14].
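The "greater biologic dose through hypofractionation" mentioned above can be made concrete with the standard linear-quadratic biologically effective dose (BED) formula. The α/β values below are common textbook assumptions (10 Gy for tumor), not values reported in this paper.

```python
# Illustrative BED/EQD2 calculation for the hypofractionated schedules
# used in this study (5 fractions of 5-7 Gy). Standard linear-quadratic
# model; alpha/beta = 10 Gy is an assumed, typical tumor value.

def bed(n_fractions, dose_per_fraction, alpha_beta):
    """BED = n * d * (1 + d / (alpha/beta)), in Gy."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2-Gy fractions: BED / (1 + 2 / (alpha/beta))."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1 + 2 / alpha_beta)

# 35 Gy in 5 fractions of 7 Gy, assumed tumor alpha/beta = 10 Gy
tumor_bed = bed(5, 7.0, 10.0)    # 59.5 Gy
tumor_eqd2 = eqd2(5, 7.0, 10.0)  # ~49.6 Gy in 2-Gy equivalents
```

Under these assumptions, 35 Gy in 5 fractions is biologically "hotter" per gray than the same physical dose conventionally fractionated, which is the rationale for pairing hypofractionation with tight, PET-guided target volumes.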
These findings support the notion that complementary information derived from MET uptake may be helpful in developing individualized, patient-tailored therapy strategies for patients with recurrent GBM.

This prospective study was designed to measure the acute and late toxicity of patients treated with HS-IMRT planned by MET-PET, the response of recurrent GBM to this treatment, OS, and the time to disease progression after treatment. We found that the median OS from re-irradiation was 11 months and that the 6-month and 1-year OS rates were 71.4% and 38.1%, respectively (Figure 3). The survival results of our study appeared favorable compared with prior studies of other hypofractionated stereotactic radiotherapy regimens in recurrent malignant glioma [19–22]. We also observed an improved toxicity profile, supporting the view that hypofractionated stereotactic radiotherapy should be considered standard salvage therapy for previously irradiated high-grade gliomas [23]. We hypothesized that MET-PET/CT/MRI image fusion could facilitate target volume delineation and normal tissue sparing. This might lead to an improved therapeutic ratio, especially in pre-treated patients, in whom post-therapy changes on CT or MRI, such as contrast enhancement, are difficult to distinguish from tumor recurrence (Figure 5).

In our study, KPS ≥ 70% was associated with longer patient survival, as previously described for recurrent GBM [21]. In addition, we observed a trend toward a beneficial effect on OS when TMZ chemotherapy was combined (Figure 4C, Table 2). We reasoned that the addition of TMZ might be particularly effective if the radiation dose to normal brain tissue was limited by better targeting. However, the impact of TMZ, along with the methylation status of O-6-methylguanine-DNA methyltransferase (MGMT), on survival was not systematically evaluated in the present study.
This study has a selection bias that necessitates prospective studies, as patients with methylated MGMT or in better clinical condition were more likely to have received adjuvant TMZ chemotherapy. Nevertheless, the results of this study support our initial work and further establish the efficacy of this HS-IMRT regimen combined with TMZ.

Both single-fraction stereotactic radiosurgery and brachytherapy have been reported to have modest utility as palliative salvage interventions; however, both have been associated with high rates of re-operation because of the associated toxicity [24, 25]. Our initial experience with HS-IMRT used 5- to 7-Gy fractions to a maximum of 35 Gy and yielded an actuarial reoperation rate of 4.8% for radiation necrosis, providing additional support that this dose and fraction size is well tolerated. In our opinion, MET-PET/CT/MRI image fusion could facilitate target volume delineation and normal tissue sparing, which might lead to an improved therapeutic ratio. MET-PET/CT/MRI image fusion planning, which incorporates biological behavior into re-irradiation, seems not only to decrease the likelihood of geographic misses in target volume definition, but also to facilitate normal tissue sparing and toxicity reduction, especially when a conformal treatment technique such as HS-IMRT is implemented.

Conclusions
This is the first prospective study using biologic imaging to optimize HS-IMRT with MET-PET/CT/MRI image fusion in recurrent GBM. A low frequency of side effects was observed. HS-IMRT with PET data seems to be well tolerated and resulted in a median survival time of 11 months after HS-IMRT, although a properly designed randomized trial is necessary to firmly establish whether the present regimen is superior to other treatment methods for recurrent GBM.
Keywords: Recurrent glioblastoma multiforme; Hypofractionated stereotactic radiotherapy; Intensity modulated radiation therapy; 11C-methionine PET
Background
In recurrent gliomas retreated with radiation therapy, precise dose delivery is extremely important in order to reduce the risk of normal brain toxicity. Recently, novel treatment modalities with increased radiation dose-target conformality, such as intensity modulated radiation therapy (IMRT), have been introduced [1–4]. IMRT offers superior target isodose coverage compared with other external radiation techniques in scenarios involving geometrically complex target volumes adjacent to radiosensitive tissues, although planning and delivery are resource intensive and require specific and costly software and hardware. Gross tumor volume (GTV) delineation in gliomas has traditionally been based on computed tomography (CT) and magnetic resonance imaging (MRI). However, 11C-methionine positron emission tomography (MET-PET) for high-grade gliomas was recently demonstrated to have improved specificity and sensitivity, which provides the rationale for integrating biologic imaging into treatment planning [5–8]. In previous studies using MET-PET/MRI image fusion, we demonstrated that biologic imaging helps to detect tumor infiltration in regions with a non-specific MRI appearance in a significant number of patients [9–11]. Moreover, non-specific post-radiotherapeutic changes (e.g., radiation necrosis, gliosis, unspecific blood–brain barrier disturbance) could be differentiated from tumor tissue with higher accuracy [12–14]. A recent study demonstrated that MET-PET could improve the ability to identify areas with a high risk of local failure in GBM patients [15]. Based on these prior PET studies, we hypothesized that hypofractionated stereotactic radiotherapy by IMRT (HS-IMRT) with the use of MET-PET data would be an effective strategy for recurrent GBM.
This prospective study was designed to measure the acute and late toxicity of patients treated with HS-IMRT planned by MET-PET, the response of recurrent GBM to this treatment, overall survival (OS), and the time to disease progression after treatment.

Methods
Patient eligibility
Patients were recruited from September 2007 to August 2011. Adult patients (aged ≥ 18 years) with histopathologic confirmation of GBM who had locally recurrent tumor were eligible. Primary treatment consisted of subtotal surgical resection in all patients. All patients had a Karnofsky performance status (KPS) ≥ 60 and had previously been treated with external postoperative radiotherapy to a mean and median dose of 60 Gy (range 54–68 Gy) with concomitant and adjuvant temozolomide (TMZ) chemotherapy. Additional inclusion criteria were: age 18 years or older; first macroscopic relapse at the original site; and adequate bone marrow, hepatic, and renal function. In all cases, a multidisciplinary panel judged the resectability of the lesion before inclusion in this study; thus, patients with non-resectable lesions were included in the study.

Study design
This prospective nonrandomized single-institution study was approved by the Institutional Review Board of the Department of Radiation Oncology, Kizawa Memorial Hospital. Informed consent was obtained from each subject after disclosure of the potential risks of HS-IMRT and discussion of potential alternative treatments. Baseline evaluation included gadolinium-enhanced brain MRI and MET-PET, complete physical and neurological examination, and blood and urine tests within 2 weeks before treatment. After completion of HS-IMRT, patients underwent a physical and neurological examination and repeat brain MRI and MET-PET. TMZ chemotherapy was continued in a manner consistent with standard clinical use after the HS-IMRT course.

Imaging: CT
CT (matrix size: 512 × 512, FOV 50 × 50 cm) was performed using a helical CT instrument (Light Speed; General Electric, Waukesha, WI). The patient's head was immobilized in a commercially available stereotactic mask, and scans were performed with a 2.5-mm slice thickness without a gap.

Imaging: MRI
MRI (matrix size: 256 × 256, FOV 25 × 25 cm) for radiation treatment planning was performed using a 1.5-T instrument (Light Speed; General Electric). A standard head coil without rigid immobilization was used. An axial, three-dimensional gradient-echo T1-weighted sequence with gadolinium and 2.0-mm slice thickness was acquired from the foramen magnum to the vertex, perpendicular to the main magnetic field.

Imaging: MET-PET
An ADVANCE NXi Imaging System (General Electric Yokokawa Medical System, Hino-shi, Tokyo), which provides 35 transaxial images at 4.25-mm intervals, was used for PET scanning. A crystal width of 4.0 mm (transaxial) was used. The in-plane spatial resolution (full width at half-maximum) was 4.8 mm, and scans were performed in a standard two-dimensional mode. Before the emission scans, a 3-min transmission scan was performed to correct photon attenuation using a ring source containing 68Ge. A dose of 7.0 MBq/kg of MET was injected intravenously. Emission scans were acquired for 30 min, beginning 5 min after MET injection. During MET-PET data acquisition, head motion was continuously monitored using laser beams projected onto ink markers drawn on the forehead skin and corrected as necessary. PET/CT and MRI volumes were fused using commercially available software (Syntegra, Philips Medical System, Fitchburg, WI) with a combination of automatic and manual methods. Fusion accuracy was evaluated by consensus of 3 expert radiologists with 15 years of experience, using anatomical fiducials such as the eyeballs, lacrimal glands, and lateral ventricles.

Radiation technique
Target volumes were delineated on the registered PET/CT-MRI images and were used to plan HS-IMRT delivery. In this preliminary study, the GTV was defined as the area of high uptake on MET-PET (Figure 1). This high-uptake region was defined using a threshold of at least 1.3 for the lesion-to-normal-tissue ratio of radioisotope counts per pixel.
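In computational terms, this delineation step is a per-voxel cutoff. The sketch below is purely illustrative, with hypothetical uptake values; the study itself used the Philips Pinnacle automatic contouring mode with observer consensus, not code like this:

```python
# Sketch of threshold-based GTV delineation (illustrative only).
# A voxel joins the GTV if its MET uptake is >= 1.3 x mean normal-tissue uptake.

def gtv_mask(uptake, normal_mean, ratio=1.3):
    """Return a boolean mask: True where uptake >= ratio * normal_mean."""
    cutoff = ratio * normal_mean
    return [[voxel >= cutoff for voxel in row] for row in uptake]

# Hypothetical 3x4 uptake slice (arbitrary units); normal tissue averages 100.
slice_uptake = [
    [ 90, 120, 135, 140],
    [100, 131, 180, 130],
    [ 95, 105, 129, 110],
]
mask = gtv_mask(slice_uptake, normal_mean=100)
n_gtv = sum(v for row in mask for v in row)  # count of voxels at or above the 1.3 cutoff
```

In a clinical system the resulting mask would additionally be restricted to the contiguous tumor region, as the threshold definition above requires.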
A previous study in high-grade gliomas, comparing local MET uptake with the histology of stereotactically guided biopsies, demonstrated a sensitivity of 87% and a specificity of 89% for the detection of tumor tissue at this same threshold of 1.3-fold MET uptake relative to normal brain tissue [16]. Furthermore, this threshold was validated in a study using diffusion tensor imaging in comparison with MET uptake [17]. Although delineation of the GTV was done using the automatic contouring mode (Philips Pinnacle v9.0 treatment planning system), the final GTV was confirmed by consensus among 3 observers based on the co-registered MRI and PET data. The PET/MRI fusion image was positioned using the CT scans of the Helical TomoTherapy unit (TomoTherapy Inc., Madison, WI). Finally, the GTV was expanded uniformly by 3 mm to generate the planning target volume (PTV). The prescribed dose for re-irradiation was based on tumor volume, prior radiation dose, time since external postoperative radiotherapy, and proximity of the lesion to eloquent brain or radiosensitive structures. Radiosensitive structures, including the brainstem, optic chiasm, lenses, optic nerves, and cerebral cortex, were outlined, and dose-volume histograms for each structure were obtained to ensure that the doses delivered to these structures were tolerable. Considering these conditions, the dose for the PTV was prescribed to the 80-95% isodose line, and total doses ranged from 25 Gy to 35 Gy across patients. The dose maps and dose-volume histograms of a representative case are illustrated in Figure 2.

Figure 1. An example of a target planned for hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B) 11C-methionine positron emission tomography (MET-PET). The gross tumor volume was defined as the region with high MET uptake (yellow line); the threshold for increased MET uptake was set to ≥1.3 in the contiguous tumor region.

Figure 2. A dose map and dose-volume histogram of a representative case.
The prescribed dose for the planning target volume (PTV) and the doses to organs at risk are shown.

Chemotherapy
After the HS-IMRT course, TMZ chemotherapy was administered at a dose of 200 mg/m2/day for 5 days. Cycles were repeated every 28 days, with 3 or more cycles given after HS-IMRT. Treatment was discontinued when unequivocal progression or severe toxicity occurred. TMZ was not administered to patients who refused treatment or did not meet the inclusion criteria. Inclusion criteria for TMZ chemotherapy were: KPS score of 50 or higher; evidence of well-maintained major organ function (bone marrow, liver, kidney, etc.) on routine laboratory studies; and expected survival of more than 3 months.

Follow-up
Regular serial neurological and radiological examinations were performed at 1 month after completion of treatment and every 2 months thereafter, or in the event of neurological decline. Follow-up examinations included MRI and MET-PET imaging. If MRI revealed further enlargement of the enhancing mass, the lesion was diagnosed as "local progression", and the day on which MRI first revealed lesion enlargement was defined as the date of progression. However, in cases with low MET uptake on PET in the MRI-enhanced lesion, the diagnosis was changed to "radiation necrosis". This MET-PET criterion for the differential diagnosis between local progression and radiation necrosis was used in a previous MET-PET study by Takenaka et al. [18]. However, not all patients could undergo PET examination during follow-up; in cases without PET examination, the diagnosis of radiation necrosis was based on pathologic examination or clinical course. Cases in which lesions showed spontaneous shrinkage or decreased in size during corticosteroid treatment were also defined as "radiation necrosis". A diagnosis of "distant failure" was defined as the appearance of a new enhancing lesion distant from the original tumor site. Either local progression or distant failure was defined as disease progression. Acute and late toxicities were graded according to the Common Terminology Criteria for Adverse Events (version 4).

Statistical analysis
Survival events were defined as death from any cause for OS and as disease progression for progression-free survival (PFS). OS and PFS were analyzed from the date of HS-IMRT to the date of the documented event using the Kaplan-Meier method. Tumor- and therapy-related variables were tested for a possible correlation with survival using the log-rank test. Variables included age (≥50 vs. <50), KPS (≥70 vs. <70), and combined TMZ chemotherapy (yes vs. no). P-values of less than 0.05 were considered significant. Prognostic factors were further evaluated in a multivariate stepwise Cox regression analysis.
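As a sketch of the product-limit method named above, the following toy implementation (using invented follow-up data, not the trial's, and plain Python rather than the statistical software a study like this would use) multiplies the factor (1 - d_i/n_i) at each event time:

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative sketch only).
# Input: (time, event) pairs, event=1 for death/progression, 0 for censored.

def kaplan_meier(data):
    """Return [(time, S(t))] at each time where at least one event occurred."""
    data = sorted(data)
    n = len(data)
    s = 1.0                # running survival probability S(t)
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        deaths = sum(1 for time, ev in data if time == t and ev == 1)
        at_risk = sum(1 for time, ev in data if time >= t)
        if deaths:
            s *= 1 - deaths / at_risk
            curve.append((t, s))
        # advance past all records sharing this time
        i += sum(1 for time, ev in data if time == t)
    return curve

# Hypothetical follow-up times in months (not the trial's actual data).
toy = [(3, 1), (5, 1), (7, 0), (11, 1), (14, 0), (20, 1)]
curve = kaplan_meier(toy)  # e.g. S(3) = 5/6, S(5) = 5/6 * 4/5, ...
```

Censored patients (event = 0) leave the risk set without reducing S(t), which is what distinguishes this estimator from a naive proportion of survivors.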
All patients had a Karnofsky performance status (KPS) ≥ 60 and were previously treated with external postoperative radiotherapy to a mean and median dose of 60 Gy (range 54 Gy–68 Gy) with concomitant and adjuvant Temozolomide (TMZ) chemotherapy. Additional inclusion criteria consisted of the following: age 18 years or older; first macroscopical relapse at the original site; and adequate bone marrow, hepatic, and renal function. In all cases, a multidisciplinary panel judged the resectability of the lesion before inclusion in this study. Thus patients with non-resectable lesions were included in the study. Study design: This prospective nonrandomized single-institution study was approved by the Department of Radiation Oncology of Kizawa Memorial Hospital Institutional Review Board. Informed consent was obtained from each subject after disclosing the potential risks of HS-IMRT and discussion of potential alternative treatments. Baseline evaluation included gadolinium-enhanced brain MRI and MET-PET, complete physical and neurological examination, and blood and urine tests within 2 weeks before treatment. After completion of HS-IMRT, patients underwent a physical and neurological examination and a repeat brain MRI and MET-PET. TMZ chemotherapy was continued in a manner consistent with standard clinical use after HS-IMRT course. Imaging: CT: CT (matrix size: 512 × 512, FOV 50 × 50 cm) was performed using a helical CT instrument (Light Speed; General Electric, Waukesha, WI). The patient head was immobilized in a commercially available stereotactic mask, and scans were performed with a 2.5-mm slice thickness without a gap. Imaging: MRI: MRI (matrix size: 256 × 256, FOV 25 × 25 cm) for radiation treatment planning was performed using a 1.5-T instrument (Light Speed; General Electric). A standard head coil without rigid immobilization was used. 
An axial, three-dimensional gradient echo T1-weighted sequence with gadolinium and 2.0-mm slice thickness were acquired from the foramen magnum to the vertex, perpendicular to the main magnetic field. Imaging: MET-PET: An ADVANCE NXi Imaging System (General Electric Yokokawa Medical System, Hino-shi, Tokyo), which provides 35 transaxial images at 4.25-mm intervals, was used for PET scanning. A crystal width of 4.0 mm (transaxial) was used. The in-plane spatial resolution (full width at half-maximum) was 4.8 mm, and scans were performed in a standard two-dimensional mode. Before emission scans were performed, a 3-min transmission scan was performed to correct photon attenuation using a ring source containing 68Ge. A dose of 7.0 MBq/kg of MET was injected intravenously. Emission scans were acquired for 30 min, beginning 5 min after MET injection. During MET-PET data acquisition, head motion was continuously monitored using laser beams projected onto ink markers drawn over the forehead skin and corrected as necessary. PET/CT and MRI volumes were fused using commercially available software (Syntegra, Philips Medical System, Fitchburg, WI) using a combination of automatic and manual methods. Fusion accuracy was evaluated by a consensus of 3 expert radiologists with 15 years of experience using anatomical fiducials such as the eyeball, lacrimal glands, and lateral ventricles. Radiation technique: Target volumes were delineated on the registered PET/CT-MRI images and were used to plan HS-IMRT delivery. In this preliminary study, GTV was defined as using an area of high uptake on MET-PET (Figure  1). This high-uptake region was defined using a threshold value for the lesion versus normal tissue counts of radioisotope per pixel index of at least 1.3. 
A previous study with high-grade gliomas, comparing the exact local MET uptake with histology of stereotaxically guided biopsies, demonstrated a sensitivity of 87% and specificity of 89% for the detection of tumor tissue at the same threshold of 1.3-fold MET uptake relative to normal brain tissue [16]. Further, this threshold was validated in a study using diffusion tensor imaging in comparison with MET uptake [17]. Although the delineation of GTV was done by using the automatic contouring mode (Philips Pinnacle v9.0 treatment planning system), the final determination of GTV was confirmed by consensus among 3 observers based on the co-registered MRI and PET data. The PET/MRI fusion image was positioned properly by CT scans equipped with Helical TomoTherapy (TomoTherapy Inc., Madison, WI). Finally, the GTV was expanded uniformly by 3 mm to generate the planning target volume (PTV). The prescribed dose for re-irradiation was based on tumor volume, prior radiation dose, time since external postoperative radiotherapy, and location of the lesion with proximity to eloquent brain or radiosensitive structures. Radiosensitive structures, including the brainstem, optic chiasm, lens, optic nerves, and cerebral cortex, were outlined, and dose-volume histograms for each structure were obtained to ensure that doses delivered to these structures were tolerable. Considering these different conditions, the dose for PTV was prescribed using the 80-95% isodose line, and the total doses were arranged from 25 Gy to 35 Gy in each patient. The dose maps and dose-volume histograms of a representative case are illustrated in Figure  2.Figure 1 An example of a target planned for a hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B) 11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). 
The threshold for increased MET uptake was set to ≥1.3 in the contiguous tumor region.Figure 2 A dose map and dose-volume histogram of a representative case. Prescribed dose for planning target volume (PTV) and the doses of organs at risk were demonstrated. An example of a target planned for a hypofractionated stereotactic radiotherapy using intensity modulated radiation therapy. (A) Contrast-enhanced T1-weighted magnetic resonance imaging. (B) 11C-methionine positron emission tomography (MET-PET). Gross tumor volume was defined as the region with high MET uptake (yellow line). The threshold for increased MET uptake was set to ≥1.3 in the contiguous tumor region. A dose map and dose-volume histogram of a representative case. Prescribed dose for planning target volume (PTV) and the doses of organs at risk were demonstrated. Chemotherapy: After HS-IMRT course, a dose of 200 mg/m2/day for 5 days with TMZ chemotherapy was administered. Cycles were repeated every 28 days with 3 or more cycles after HS-IMRT. Treatment was discontinued when unequivocal progression or severe toxicity occurred. TMZ was not administrated in patients who refused treatment or did not meet inclusion criteria. Patient inclusion criteria for TMZ chemotherapy consisted of the following: KPS score 50 or higher; evidence of good maintenance of major organ function (bone marrow, liver, kidney, etc.) in routine laboratory studies; expected to survive more than 3 months. Follow-up: Regular serial neurological and radiological examinations were initially performed at 1 month after completion of treatment and then every 2 months thereafter or in the event of neurological decay. Follow-up examinations included MRI and MET-PET imaging. If the MRI revealed further enlargement of the enhanced mass, the lesion was diagnosed as “local progression”, and the day on which MRI first revealed lesion enlargement was defined as the date of progression. 
However, in cases with low MET uptake on PET within the MRI-enhanced lesion, the diagnosis was changed to "radiation necrosis". The MET-PET criterion for differentiating local progression from radiation necrosis followed that of a previous MET-PET study by Takenaka et al. [18]. However, PET examination could not be performed in all patients during follow-up. In cases without PET examination, the diagnosis of radiation necrosis was based on pathologic examination or clinical course. Cases in which lesions shrank spontaneously or decreased in size during corticosteroid treatment were also defined as "radiation necrosis". "Distant failure" was defined as the appearance of a new enhanced lesion distant from the original tumor site. Either local progression or distant failure was defined as disease progression. Acute and late toxicities were graded according to the Common Terminology Criteria for Adverse Events (version 4). Statistical analysis: Survival events were defined as death from any cause for OS and as disease progression for progression-free survival (PFS). OS and PFS were analyzed from the date of HS-IMRT to the date of the documented event using the Kaplan-Meier method. Tumor- and therapy-related variables were tested for a possible correlation with survival using the log-rank test. Variables included age (≥50 vs. <50), KPS (≥70 vs. <70), and combined TMZ chemotherapy (yes vs. no). P-values of less than 0.05 were considered significant. Prognostic factors were further evaluated in a multivariate stepwise Cox regression analysis. Results: Twenty-one patients (18 men, 3 women) with histologically confirmed GBM were enrolled in this trial (Table 1).
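The OS/PFS analysis described under "Statistical analysis" relies on the Kaplan-Meier estimator. A minimal pure-Python sketch (the event times and censoring flags below are hypothetical examples, not the study data) illustrates how the survival curve and median survival are obtained:

```python
# Minimal Kaplan-Meier estimator. Times are months of follow-up;
# events: 1 = death/progression observed, 0 = censored.
# The cohort below is hypothetical, NOT the study data.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each time where at least one event occurred."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        if deaths:
            # step the curve: multiply by the conditional survival at t
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        # remove everyone (events and censored) with this time from the risk set
        while i < len(data) and data[i][0] == t:
            n_at_risk -= 1
            i += 1
    return curve

def median_survival(curve):
    """First time at which S(t) drops to 0.5 or below (None if it never does)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None

# hypothetical cohort of 8 patients
times  = [3, 5, 6, 6, 9, 11, 14, 18]
events = [1, 1, 1, 0, 1, 1,  0,  1]
curve = kaplan_meier(times, events)
print(curve)
print("median survival:", median_survival(curve), "months")
```

The log-rank test and Cox regression used for the prognostic-factor analysis build on the same risk-set bookkeeping shown here.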
The median patient age was 53.9 years (range 22-76), and the median KPS was 80 (range 60-90).

Table 1. Patient characteristics
Parameter | n (%)
Age ≥50 years | 13 (62)
Age <50 years | 8 (38)
Male | 18 (86)
Female | 3 (14)
KPS ≥70 | 16 (76)
KPS <70 | 5 (24)
Combined TMZ chemotherapy, yes | 13 (62)
Combined TMZ chemotherapy, no | 8 (38)
Abbreviations: KPS, Karnofsky performance status; TMZ, temozolomide.

The median elapsed time between external postoperative radiotherapy and study enrollment was 12 months (range, 3-48 months). HS-IMRT was delivered in 5 fractions, with a total PTV dose of 25 Gy in 6 patients, 30 Gy in 11 patients, and 35 Gy in 4 patients, given as 5- to 7-Gy daily fractions over 5 days. The average PTV was 27.4 ± 24.1 cm3 (range, 3.4-102.9 cm3). All 21 patients completed the prescribed HS-IMRT course. Overall, thirteen patients (62%) received combined modality treatment with TMZ.

Toxicity assessment: No patients demonstrated significant acute toxicity, and all patients completed the prescribed radiation dose without interruption. In the late phase, Grade 2 radiation necrosis was observed in 1 patient, although the lesion decreased in size during corticosteroid treatment. One patient, who received a radiation dose of 25.0 Gy, experienced Grade 4 radiation necrosis in the form of mental deterioration 4 months after HS-IMRT; a second necrotomy was required 5 months after HS-IMRT. Although the patient's clinical condition was stable for a long period after the second surgery, disease progression occurred 40 months after HS-IMRT, causing death. The actuarial reoperation rate for radiation necrosis was 4.8%.

Outcomes: With a median follow-up of 12 months, the median OS was 11 months from the date of HS-IMRT, with 6-month and 1-year survival rates of 71.4% and 38.1%, respectively (Figure 3A). Survival rates by age, KPS, and TMZ chemotherapy are shown in Figure 4A-C. Median OS was 12 months for patients with KPS of 70 or over versus 5 months for patients with KPS less than 70; KPS was a significant prognostic factor for OS on univariate analysis (p < 0.001). Median OS was 12 months for patients who received combined TMZ chemotherapy versus 6 months for those who did not, a difference that approached significance (p = 0.079). In the multivariate model, only KPS remained statistically significant (p = 0.005) (Table 2).

Figure 3. (A) Overall survival and (B) progression-free survival for all patients from the date of re-irradiation. The median overall survival time was 11 months, and the median progression-free survival time was 6 months from the date of re-irradiation.

Figure 4. Overall survival rates among different subgroups by (A) age, (B) Karnofsky performance status (KPS), and (C) combined temozolomide (TMZ) chemotherapy.

Table 2. Analysis of prognostic variables for overall survival. Abbreviations as in Table 1. Statistical analyses were performed with the *log-rank test and †Cox's proportional hazards model.

The median PFS was 6 months from the date of HS-IMRT, with 6-month and 1-year progression-free survival rates of 38.1% and 14.3%, respectively (Figure 3B). KPS was a significant prognostic factor for PFS on univariate analysis (p = 0.016). On multivariate analysis, none of the variables were significantly predictive of PFS (Table 3). A representative case, in which a follow-up MET-PET scan was performed to improve diagnostic accuracy, is shown in Figure 5.

Table 3. Analysis of prognostic variables for progression-free survival
Variable | Median survival (months) | Univariate p* | Multivariate p†
Age ≥50 | 6 | 0.279 | -
Age <50 | 6 | - | -
KPS ≥70 | 6 | 0.016 | 0.059
KPS <70 | 5 | - | -
Combined TMZ chemotherapy, yes | 6 | 0.447 | 0.479
Combined TMZ chemotherapy, no | 5 | - | -
Abbreviations as in Table 1. Statistical analyses were performed with the *log-rank test and †Cox's proportional hazards model.

Figure 5. Recurrent glioblastoma multiforme (GBM) in a 55-year-old man before and after hypofractionated stereotactic radiotherapy by intensity modulated radiation therapy (HS-IMRT). Before HS-IMRT, two enhanced lesions (long and short arrows) were demonstrated in the left temporal lobe on T1-weighted magnetic resonance imaging (A). 11C-methionine positron emission tomography (MET-PET) demonstrated high MET uptake in the region of the short arrow, which was defined as the gross tumor volume (red line) (B). Five months after HS-IMRT, there was no tumor recurrence at the lesion marked by the long arrow (C and D). The enhanced lesion (short arrow) had increased in size (C), although MET uptake decreased relative to normal tissue, suggesting a necrotic change in the irradiated region (D).
The patient had no neurologic deficits or quality-of-life issues; KPS was 90%.

Discussion: Recently, much work has been performed on various fractionation regimens and dose escalation with IMRT for GBM. These studies reveal relatively favorable survival results [1-4], although a distinct advantage over conventional radiation therapy has not been demonstrated. However, if a precise conformal treatment technique such as HS-IMRT is utilized, and a greater biologic dose to the infiltrating tumor is achieved through hypofractionation, it may be possible to deliver an effective therapy that increases patient survival without increasing morbidity. To meet these requirements, contouring of the target volume is of critical importance.
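As a concrete illustration of the target delineation used in this study, the following sketch applies the MET uptake threshold (≥1.3× normal brain) to define a GTV and then expands it uniformly by 3 mm to a PTV. It runs on a hypothetical 2-D uptake-ratio map with an assumed 1-mm voxel size; the values are illustrative, not study data:

```python
# Threshold-based GTV definition and uniform GTV -> PTV expansion, sketched on
# a toy 2-D grid. Voxel size (1 mm) and uptake values are assumptions.

VOXEL_MM = 1.0      # assumed isotropic voxel size
THRESHOLD = 1.3     # MET uptake ratio vs. normal brain (from the study)
MARGIN_MM = 3.0     # GTV -> PTV expansion (from the study)

def gtv_mask(uptake_ratio):
    """Binary mask of voxels at or above the uptake-ratio threshold."""
    return [[v >= THRESHOLD for v in row] for row in uptake_ratio]

def expand(mask, margin_mm, voxel_mm=VOXEL_MM):
    """Uniform margin: mark every voxel within margin_mm (Euclidean) of the GTV."""
    r = margin_mm / voxel_mm
    rows, cols = len(mask), len(mask[0])
    seeds = [(i, j) for i in range(rows) for j in range(cols) if mask[i][j]]
    return [[any((i - si) ** 2 + (j - sj) ** 2 <= r * r for si, sj in seeds)
             for j in range(cols)] for i in range(rows)]

# toy 7x7 uptake-ratio map with a small hot spot near the center
uptake = [[1.0] * 7 for _ in range(7)]
uptake[3][3] = 1.8
uptake[3][4] = 1.5
gtv = gtv_mask(uptake)
ptv = expand(gtv, MARGIN_MM)
print("GTV voxels:", sum(map(sum, gtv)))
print("PTV voxels:", sum(map(sum, ptv)))
```

In clinical systems this thresholding and expansion is performed in 3-D by the planning software (here, Pinnacle's automatic contouring mode), but the principle is the same.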
PET is a newer imaging method that can improve visualization of biological processes. In recent PET studies, analysis of metabolic and histologic characteristics provided evidence that regionally high MET uptake correlates with malignant pathologic features [6-8, 16]. Furthermore, nonspecific post-therapeutic changes could be differentiated from tumor tissue with higher accuracy [12-14]. These findings support the notion that complementary information derived from MET uptake may help develop individualized, patient-tailored therapy strategies in patients with recurrent GBM. This prospective study was designed to measure the acute and late toxicity of patients treated with HS-IMRT planned by MET-PET, the response of recurrent GBM to this treatment, OS, and the time to disease progression after treatment. We found that the median OS from re-irradiation was 11 months and that the 6-month and 1-year OS rates were 71.4% and 38.1%, respectively (Figure 3). The survival results of our study appear favorable compared with prior studies of hypofractionated stereotactic radiotherapy in recurrent malignant glioma [19-22]. We also observed an improved toxicity profile, suggesting that hypofractionated stereotactic radiotherapy should be considered a standard salvage therapy for previously irradiated high-grade gliomas [23]. We hypothesized that MET-PET/CT/MRI image fusion could facilitate target volume delineation and normal tissue sparing. This might lead to an improved therapeutic ratio, especially in pre-treated patients, in whom post-therapy changes on CT or MRI, such as contrast enhancement, are difficult to distinguish from tumor recurrence (Figure 5). In our study, KPS ≥ 70 was associated with longer patient survival, as previously described for recurrent GBM [21]. In addition, we observed a trend toward a beneficial effect on OS when TMZ chemotherapy was combined (Figure 4C, Table 2).
We estimated that the addition of TMZ might be particularly effective if the radiation dose to normal brain tissue were limited by better targeting. However, the impact of TMZ, along with the methylation status of O-6-methylguanine-DNA methyltransferase (MGMT), on survival was not systematically evaluated in the present study. This study has a selection bias that necessitates prospective studies, as patients with methylated MGMT or in better clinical condition were more likely to have received adjuvant TMZ chemotherapy. Nevertheless, the results of this study support our initial work and further establish the efficacy of this HS-IMRT regimen combined with TMZ. Both single-fraction stereotactic radiosurgery and brachytherapy have been reported to have modest utility as palliative salvage interventions; however, both have been associated with high rates of re-operation because of the associated toxicity [24, 25]. Our initial experience with HS-IMRT used 5- to 7-Gy fractions to a maximum of 35 Gy, with an actuarial reoperation rate of 4.8% for radiation necrosis, providing additional support that this dose and fraction size are well tolerated. In our opinion, MET-PET/CT/MRI image fusion could facilitate target volume delineation and normal tissue sparing, which might lead to an improved therapeutic ratio. MET-PET/CT/MRI image fusion planning, which examines biological behavior at re-irradiation, seems not only to decrease the likelihood of geographic misses in target volume definition but also to facilitate normal tissue sparing and toxicity reduction, especially when a conformal treatment technique such as HS-IMRT is implemented. Conclusions: This is the first prospective study using biologic imaging to optimize HS-IMRT with MET-PET/CT/MRI image fusion in recurrent GBM. A low frequency of side effects was observed.
HS-IMRT with PET data seems to be well tolerated and resulted in a median survival time of 11 months after HS-IMRT, although a properly designed randomized trial is necessary to firmly establish whether the present regimen is superior to other treatment methods for recurrent GBM.
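For context, the biologic intensity of the three 5-fraction regimens used in this study (5, 6, or 7 Gy per fraction) is often compared using the linear-quadratic biologically effective dose (BED). The paper does not report BED, so this calculation is purely illustrative, with the conventional assumption of α/β = 10 Gy for tumor:

```python
# Linear-quadratic biologically effective dose (BED), in Gy.
# Illustrative only; not part of the paper's methods. alpha/beta = 10 Gy
# is a conventional assumption for tumor tissue.

def bed(n_fractions, dose_per_fraction, alpha_beta=10.0):
    """BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# the three regimens used in this study: 5 fractions of 5, 6, or 7 Gy
for d in (5.0, 6.0, 7.0):
    print(f"5 x {d:.0f} Gy = {5 * d:.0f} Gy total, BED10 = {bed(5, d):.1f} Gy")
```

The larger fraction sizes raise the BED faster than the physical dose, which is the "greater biologic dose through hypofractionation" referred to in the Discussion.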
Background: This research paper presents a valid treatment strategy for recurrent glioblastoma multiforme (GBM) using hypofractionated stereotactic radiotherapy by intensity modulated radiation therapy (HS-IMRT) planned with 11C-methionine positron emission tomography (MET-PET)/computed tomography (CT)/magnetic resonance imaging (MRI) fusion. Methods: Twenty-one patients with recurrent GBM received HS-IMRT planned by MET-PET/CT/MRI. The region of increased amino acid tracer uptake on MET-PET was defined as the gross tumor volume (GTV). The planning target volume encompassed the GTV by a 3-mm margin. Treatment was performed with a total dose of 25- to 35-Gy, given as 5- to 7-Gy daily for 5 days. Results: With a median follow-up of 12 months, median overall survival time (OS) was 11 months from the start of HS-IMRT, with a 6-month and 1-year survival rate of 71.4% and 38.1%, respectively. Karnofsky performance status was a significant prognostic factor of OS as tested by univariate and multivariate analysis. Re-operation rate was 4.8% for radiation necrosis. No other acute or late toxicity Grade 3 or higher was observed. Conclusions: This is the first prospective study of biologic imaging optimized HS-IMRT in recurrent GBM. HS-IMRT with PET data seems to be well tolerated and resulted in a median survival time of 11 months after HS-IMRT.
Background: In recurrent gliomas retreated with radiation therapy, precise dose delivery is extremely important in order to reduce the risk of normal brain toxicity. Recently, novel treatment modalities with increased radiation dose-target conformality, such as intensity modulated radiation therapy (IMRT), have been introduced [1–4]. While IMRT has superior target isodose coverage compared with other external radiation techniques for geometrically complex target volumes adjacent to radiosensitive tissues, IMRT planning and delivery are resource intensive and require specific and costly software and hardware. Gross tumor volume (GTV) delineation in gliomas has traditionally been based on computed tomography (CT) and magnetic resonance imaging (MRI). However, 11C-methionine positron emission tomography (MET-PET) for high-grade gliomas was recently demonstrated to have improved specificity and sensitivity, which is the rationale for integrating biologic imaging into treatment planning [5–8]. In previous studies using MET-PET/MRI image fusion, we demonstrated that biologic imaging helps to detect tumor infiltration in regions with a non-specific MRI appearance in a significant number of patients [9–11]. Moreover, non-specific post-radiotherapeutic changes (e.g., radiation necrosis, gliosis, unspecific blood–brain barrier disturbance) could be differentiated from tumor tissue with higher accuracy [12–14]. A recent study demonstrated that MET-PET could improve the ability to identify areas with a high risk of local failure in GBM patients [15]. Based on the prior PET studies, we hypothesized that an approach of hypofractionated stereotactic radiotherapy by IMRT (HS-IMRT) with the use of MET-PET data would be an effective strategy for recurrent GBM.
This prospective study was designed to measure the acute and late toxicity of patients treated with HS-IMRT planned by MET-PET, the response of recurrent GBM to this treatment, overall survival (OS), and the time to disease progression after treatment.
[ "met", "pet", "imrt", "survival", "hs", "hs imrt", "patients", "months", "dose", "radiation" ]
[CONTENT] Recurrent glioblastoma multiforme | Hypofractionated stereotactic radiotherapy | Intensity modulated radiation therapy | 11C-methionine PET [SUMMARY]
[CONTENT] Adult | Aged | Brain Neoplasms | Carbon Radioisotopes | Chemotherapy, Adjuvant | Combined Modality Therapy | Female | Glioblastoma | Humans | Kaplan-Meier Estimate | Magnetic Resonance Imaging | Male | Methionine | Middle Aged | Multimodal Imaging | Neoplasm Recurrence, Local | Positron-Emission Tomography | Proportional Hazards Models | Radiopharmaceuticals | Radiosurgery | Radiotherapy Planning, Computer-Assisted | Radiotherapy, Intensity-Modulated | Tomography, X-Ray Computed | Young Adult [SUMMARY]
[CONTENT] met | pet | imrt | survival | hs | hs imrt | patients | months | dose | radiation [SUMMARY]
[CONTENT] specific | pet | imrt | radiation | met pet | met | target | gliomas | non specific | biologic imaging [SUMMARY]
[CONTENT] met | pet | dose | volume | uptake | mri | defined | met uptake | tumor | threshold [SUMMARY]
[CONTENT] survival | months | arrow | kps | overall | median | table | overall survival | short arrow | short [SUMMARY]
[CONTENT] recurrent gbm | hs imrt | hs | imrt | recurrent | gbm | data tolerated resulted | mri image fusion recurrent | data tolerated | observed hs [SUMMARY]
[CONTENT] met | pet | imrt | hs imrt | hs | survival | radiation | patients | months | dose [SUMMARY]
[CONTENT] GBM | 11C | MET [SUMMARY]
[CONTENT] Twenty-one | GBM | MET-PET/CT/MRI ||| MET-PET | GTV ||| GTV | 3-mm ||| 25- | 35 | 7 | daily | 5 days [SUMMARY]
[CONTENT] 12 months | 11 months | 6-month | 1-year | 71.4% | 38.1% ||| Karnofsky ||| 4.8% ||| Grade 3 [SUMMARY]
[CONTENT] first | GBM ||| PET | 11 months [SUMMARY]
[CONTENT] GBM | 11C | MET ||| Twenty-one | GBM | MET-PET/CT/MRI ||| MET-PET | GTV ||| GTV | 3-mm ||| 25- | 35 | 7 | daily | 5 days ||| 12 months | 11 months | 6-month | 1-year | 71.4% | 38.1% ||| Karnofsky ||| 4.8% ||| Grade 3 ||| first | GBM ||| PET | 11 months [SUMMARY]
CKD.QLD: chronic kidney disease surveillance and research in Queensland, Australia.
23115138
Chronic kidney disease (CKD) is recognized as a major public health problem in Australia with significant mortality, morbidity and economic burden. However, there is no comprehensive surveillance programme to collect, collate and analyse data on CKD in a systematic way.
BACKGROUND
We describe an initiative called CKD Queensland (CKD.QLD), which was established in 2009 to address this deficiency, and outline the processes and progress made to date. The foundation is a CKD Registry of all CKD patients attending public health renal services in Queensland, and patient recruitment and data capture have started.
METHODS
We have established through early work of CKD.QLD that there are over 11,500 CKD patients attending public renal services in Queensland, and these are the target population for our registry. Progress so far includes conducting two CKD clinic site surveys, consenting over 3000 patients into the registry and initiation of baseline data analysis of the first 600 patients enrolled at the Royal Brisbane and Women's Hospital (RBWH) site. In addition, research studies in dietary intake and CKD outcomes and in models of care in CKD patient management are underway.
RESULTS
Through the CKD Registry, we will define the distribution of CKD patients referred to renal practices in the public system in Queensland by region, remoteness, age, gender, ethnicity and socioeconomic status. We will define the clinical characteristics of those patients, and the CKD associations, stages, co-morbidities and current management. We will follow the course and outcomes in individuals over time, as well as group trends over time. Through our activities and outcomes, we are aiming to provide a nidus for other states in Australia to join in a national CKD registry and network.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Child", "Child, Preschool", "Clinical Trials as Topic", "Female", "Glomerular Filtration Rate", "Health Surveys", "Humans", "Infant", "Infant, Newborn", "Longitudinal Studies", "Male", "Middle Aged", "Patient Selection", "Population Surveillance", "Prevalence", "Prognosis", "Quality of Life", "Queensland", "Registries", "Renal Insufficiency, Chronic", "Research Design", "Young Adult" ]
3484715
Introduction
Chronic kidney disease (CKD) in Australia is a major public health problem. At least one biomarker of kidney injury has been found in one in seven Australian adults over the age of 25 years [1, 2]. CKD contributed to nearly 10% of deaths in 2006 and over 1.1 million hospitalizations in 2006–07 [3], both of which are underestimates driven by the incompleteness of CKD-related diagnostic coding. Excellent data of treated end-stage kidney disease (ESKD) patients are available from the Australia and New Zealand Dialysis and Transplant Registry (ANZDATA) [4]. Based on their reports, the incidence of treated ESKD in Australia is projected to increase by 80% between 2009 and 2020 [5]. In addition, incidence and prevalence of ESKD is more common in the Aboriginal and Torres Strait Islander population, who represent almost 10% of new cases of treated ESKD, despite being only 2.5% of the Australian population [6]. However, people on renal replacement therapy (RRT) represent only a small proportion of the CKD population. There are no existing systematic mechanisms to estimate the incidence, prevalence and distribution of severity of Stages 1–4 CKD in Australia. The best available source of population-based CKD rates is The Australian Diabetes, Obesity and Lifestyle (AusDiab) study [1, 2]. However, this study is outdated and may have overestimated CKD prevalence, as it did not collect information on two occasions 3 months apart and its estimates may include some cases of spurious or temporary elevations of serum creatinine or of acute kidney injury. Studies like the Bettering the Evaluation And Care of Health (BEACH) Program [7] are continuously gathering excellent data on the burden of CKD in primary practice in the Australian population through recording of General Practice (GP) consultations. However, this study sacrifices meaningful detail to capture a wide array of summative health indices. 
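The CKD stages discussed above follow the standard KDIGO eGFR (G-stage) thresholds. A minimal sketch of that classification rule, for orientation only; the function name and structure are illustrative and are not part of CKD.QLD itself:

```python
# Illustrative sketch: map an eGFR value (mL/min/1.73 m^2) to its KDIGO
# GFR category. Thresholds are the standard KDIGO cut-points; note that
# G1/G2 alone do not constitute CKD without other markers of kidney damage.

def ckd_g_stage(egfr: float) -> str:
    """Return the KDIGO GFR category for a given eGFR value."""
    if egfr >= 90:
        return "G1"   # normal or high (CKD only if kidney damage markers present)
    if egfr >= 60:
        return "G2"   # mildly decreased (same caveat as G1)
    if egfr >= 45:
        return "G3a"  # mildly to moderately decreased
    if egfr >= 30:
        return "G3b"  # moderately to severely decreased
    if egfr >= 15:
        return "G4"   # severely decreased
    return "G5"       # kidney failure
```

This also illustrates why a single measurement (as in AusDiab) can misclassify: KDIGO additionally requires the abnormality to persist for at least 3 months before it counts as chronic.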
A unique study called the ‘45 and Up Study’ [8] is currently recruiting 250 000 men and women aged 45 and over from the general population in the state of New South Wales, Australia. This study is expected to provide a range of health-related information, including risk factors for CKD, through a self-administered questionnaire and capture of additional information through linkage of health-related data sets. Notwithstanding these important initiatives, major gaps in our knowledge of CKD will remain. To address this deficiency, the Australian Government, through the Australian Institute of Health and Welfare (AIHW), established the National Centre for Monitoring Chronic Kidney Disease (NCM CKD) in 2007 [9]. The NCM CKD is located in the Cardiovascular, Diabetes and Kidney Unit at the AIHW, and the Medical Director of Kidney Health Australia (KHA) is the Chairman of the Advisory Committee. The Centre works with the AIHW and other organizations, using existing data sets to collect, aggregate, link, analyse and estimate future disease burdens. They have produced important documents on ESKD and on CKD, including its relationships with other conditions such as diabetes and cardiovascular disease [10–14]. However, the Agency's work is also impeded by the lack of population-based CKD data. The NCM CKD has therefore proposed a conceptual framework for building a surveillance system to monitor CKD with reference to Australia [9]. Consistent with the vision and strategy of the NCM CKD, we have established a programme for surveillance, including the generation of an original data set, practice improvement and research for CKD, across the entire referred renal practice network in the public health system in Queensland, through an entity called CKD Queensland (CKD.QLD). It is unique within Australia and one of the few comprehensive CKD surveillance entities globally. Queensland has a current population of 4.6 million, or 20% of the Australian population.
The number of retired individuals aged over 65 who are at high risk for CKD is 493 525, and this figure is expected to increase over time [15]. Queensland's multiethnic mix includes large numbers of Chinese, other Asian and Indian people, Pacific Islanders, Maori and more recently, Africans. It also includes 146 400 Indigenous Australians, the second largest number in any state [16]. Queensland had 3511 prevalent RRT patients at the end of 2009 [17], ∼20% of the prevalent patients in the ANZDATA Registry, consistent with Queensland's proportion of the total Australian population. Queensland has 80 nephrologists and 20 CKD nurses and nurse practitioners spread across 15 Health Service districts [18]. The demographic profile of Queensland is comparable to the rest of Australia (Table 1).

Table 1. 2006 Australian Census: Queensland versus Australia(a)

                               Queensland   % of persons   Australia    % of persons
Total persons (residents)      3 904 532    —              19 855 288   —
  Males                        1 935 381    49.6           9 799 252    49.4
  Females                      1 969 151    50.4           10 056 036   50.6
  Indigenous persons           127 578      3.3            455 031      2.3
Age group
  0–4 years                    257 077      6.6            1 260 405    6.3
  5–14 years                   549 455      14.1           2 676 807    13.5
  15–24 years                  539 206      13.8           2 704 276    13.6
  25–54 years                  1 638 354    42.0           8 376 751    42.2
  55–64 years                  437 550      11.2           2 192 675    11.0
  65 years and over            482 891      12.4           2 644 374    13.3
  Median age of persons        36           —              37           —
Language spoken at home
  English only                 3 371 684    86.4           15 581 333   78.5
  Mandarin                     24 447       0.6            220 601      1.1
  Italian                      22 032       0.6            316 890      1.6
  Cantonese                    19 627       0.5            244 553      1.2
  Vietnamese                   17 145       0.4            194 855      1.0
  German                       14 743       0.4            75 636       0.4
Income (persons ≥15 years), $ weekly
  Median individual income     476          —              466          —
  Median household income      1033         —              1027         —
  Median family income         1154         —              1171         —

(a) Adapted from 2006 Australian Census, Quickstats-Queensland http://www.censusdata.abs.gov.au/.
The objectives of CKD.QLD include establishing a Registry and an ongoing surveillance system of all the CKD patients in renal practices in the public health system in the state of Queensland. We will define CKD distribution and characteristics, evaluate longitudinal CKD population trends and outcomes, identify treatment gaps, promote best practice and conduct clinical trials in collaboration with the Australian Kidney Trials Network (AKTN) [19]. We are also exploring models of CKD health service delivery and aim to develop CKD education and training streams. This will provide a platform for continuous practice improvement and research and build knowledge and capacity in the care of CKD patients (Figure 1). Fig. 1. CKD.QLD research operating platforms.
Conclusions
CKD.QLD has evolved as a platform for research and practice improvement in CKD management in Australia. We have successfully established the Registry and are on target to recruit 5000 CKD patients by mid-2012. In the process, we have developed a systematic and comprehensive surveillance system for referred CKD monitoring in Australia. The Registry will also facilitate and inform data cross-linkage between the initial stages of CKD and treated ESKD as reported in ANZDATA. We have initiated major research studies of various aspects of CKD management. It is anticipated that the activities and products of CKD.QLD will be a benchmark for other states in Australia in understanding and managing CKD across the nation. We expect that the findings by demographic group generated through the CKD.QLD registry mechanism may be extrapolated to population groups in other states/territories and their accuracy evaluated. We anticipate that CKD.QLD will coalesce with other states and regions in Australia into a broader National CKD Registry and network. In the longer term, CKD.QLD will continue to grow international collaborations and contribute significantly towards a global improvement of CKD management.
[ "Introduction", "Structure and organization of CKD.QLD", "Online resources", "Governance", "Progress to date", "Survey 1 of public health CKD services, Queensland", "Survey 2 of public health CKD services, Queensland", "Queensland CKD Registry", "Patient recruitment", "Data collection", "CKD models of care", "Clinical trials", "Biomarker research", "Collaborations", "Significance of CKD.QLD", "Future directions", "Funding", "Conflict of interest statement" ]
[ "Chronic kidney disease (CKD) in Australia is a major public health problem. At least one biomarker of kidney injury has been found in one in seven Australian adults over the age of 25 years [1, 2]. CKD contributed to nearly 10% of deaths in 2006 and over 1.1 million hospitalizations in 2006–07 [3], both of which are underestimates driven by the incompleteness of CKD-related diagnostic coding.\nExcellent data of treated end-stage kidney disease (ESKD) patients are available from the Australia and New Zealand Dialysis and Transplant Registry (ANZDATA) [4]. Based on their reports, the incidence of treated ESKD in Australia is projected to increase by 80% between 2009 and 2020 [5]. In addition, incidence and prevalence of ESKD is more common in the Aboriginal and Torres Strait Islander population, who represent almost 10% of new cases of treated ESKD, despite being only 2.5% of the Australian population [6]. However, people on renal replacement therapy (RRT) represent only a small proportion of the CKD population.\nThere are no existing systematic mechanisms to estimate the incidence, prevalence and distribution of severity of Stages 1–4 CKD in Australia. The best available source of population-based CKD rates is The Australian Diabetes, Obesity and Lifestyle (AusDiab) study [1, 2]. However, this study is outdated and may have overestimated CKD prevalence, as it did not collect information on two occasions 3 months apart and its estimates may include some cases of spurious or temporary elevations of serum creatinine or of acute kidney injury. Studies like the Bettering the Evaluation And Care of Health (BEACH) Program [7] are continuously gathering excellent data on the burden of CKD in primary practice in the Australian population through recording of General Practice (GP) consultations. However, this study sacrifices meaningful detail to capture a wide array of summative health indices. 
A unique study called the ‘45 and Up Study’ [8] is currently recruiting 250 000 men and women aged 45 and over from the general population in the state of New South Wales, Australia. This study is expected to provide a range of health-related information, including risk factors for CKD, through a self-administered questionnaire and capture of additional information through linkage of health-related data sets.\nNotwithstanding these important initiatives, major gaps in our knowledge of CKD will remain. To address this deficiency, the Australian Government, through the Australian Institute of Health and Welfare (AIHW), established the National Centre for Monitoring Chronic Kidney Disease (NCM CKD) in 2007 [9]. The NCM CKD is located in the Cardiovascular, Diabetes and Kidney Unit at the AIHW, and the Medical Director of Kidney Health Australia (KHA) is the Chairman of the Advisory Committee. The Centre works with the AIHW and other organizations, using existing data sets to collect, aggregate, link, analyse and estimate future disease burdens. They have produced important documents on ESKD and on CKD, including its relationships with other conditions such as diabetes and cardiovascular disease [10–14]. However, the Agency's work is also impeded by the lack of population-based CKD data. The NCM CKD has therefore proposed a conceptual framework for building a surveillance system to monitor CKD with reference to Australia [9].\nConsistent with the vision and strategy of the NCM CKD, we have established a programme for surveillance, including the generation of an original data set, practice improvement and research for CKD, across the entire referred renal practice network in the public health system in Queensland, through an entity called CKD Queensland (CKD.QLD). It is unique within Australia and one of the few comprehensive CKD surveillance entities globally. Queensland has a current population of 4.6 million, or 20% of the Australian population. 
The number of retired individuals aged over 65 who are at high risk for CKD is 493 525, and this figure is expected to increase over time [15]. Queensland's multiethnic mix includes large numbers of Chinese, other Asian and Indian people, Pacific Islanders, Maori and more recently, Africans. It also includes 146 400 Indigenous Australians, the second largest number in any state [16]. Queensland had 3511 prevalent RRT patients at the end of 2009 [17], ∼20% of the prevalent patients in the ANZDATA Registry, consistent with Queensland's proportion of the total Australian population. Queensland has 80 nephrologists and 20 CKD nurses and nurse practitioners spread across 15 Health Service districts [18]. The demographic profile of Queensland is comparable to the rest of Australia (Table 1).\nTable 1. 2006 Australian Census: Queensland versus Australia(a)\nColumns: Queensland / % of persons / Australia / % of persons\nTotal persons (residents): 3 904 532 / — / 19 855 288 / —\nMales: 1 935 381 / 49.6 / 9 799 252 / 49.4\nFemales: 1 969 151 / 50.4 / 10 056 036 / 50.6\nIndigenous persons: 127 578 / 3.3 / 455 031 / 2.3\nAge group 0–4 years: 257 077 / 6.6 / 1 260 405 / 6.3\nAge group 5–14 years: 549 455 / 14.1 / 2 676 807 / 13.5\nAge group 15–24 years: 539 206 / 13.8 / 2 704 276 / 13.6\nAge group 25–54 years: 1 638 354 / 42.0 / 8 376 751 / 42.2\nAge group 55–64 years: 437 550 / 11.2 / 2 192 675 / 11.0\nAge group 65 years and over: 482 891 / 12.4 / 2 644 374 / 13.3\nMedian age of persons: 36 / — / 37 / —\nLanguage spoken at home, English only: 3 371 684 / 86.4 / 15 581 333 / 78.5\nMandarin: 24 447 / 0.6 / 220 601 / 1.1\nItalian: 22 032 / 0.6 / 316 890 / 1.6\nCantonese: 19 627 / 0.5 / 244 553 / 1.2\nVietnamese: 17 145 / 0.4 / 194 855 / 1.0\nGerman: 14 743 / 0.4 / 75 636 / 0.4\nIncome (persons ≥15 years), $ weekly, median individual income: 476 / 466\nMedian household income: 1033 / 1027\nMedian family income: 1154 / 1171\n(a) Adapted from 2006 Australian Census, Quickstats-Queensland http://www.censusdata.abs.gov.au/.\nThe objectives of CKD.QLD include establishing a Registry and an ongoing surveillance system of all the CKD 
patients in renal practices in the public health system in the state of Queensland. We will define CKD distribution and characteristics, evaluate longitudinal CKD population trends and outcomes, identify treatment gaps, promote best practice and conduct clinical trials in collaboration with the Australian Kidney Trials Network (AKTN) [19]. We are also exploring models of CKD health service delivery and aim to develop CKD education and training streams. This will provide a platform for continuous practice improvement and research and build knowledge and capacity in the care of CKD patients (Figure 1).\nFig. 1. CKD.QLD research operating platforms.", "CKD.QLD arose from collaboration between the Centre for Chronic Disease at the University of Queensland (UQ) and Kidney Research in the Department of Renal Medicine at the Royal Brisbane and Women's Hospital (RBWH) in Brisbane, Queensland, Australia. It has expanded to include all public health renal practices in Queensland. Everyone involved in the care of renal patients in Queensland, encompassing medical, nursing, allied health and scientists, has become a partner in the initiative. Participating Queensland Health Renal Services are shown in Figure 2.\nFig. 2. Participating Queensland Health Renal services.", "The CKD.QLD website [20] is a key resource for communicating activities and network outcomes between the participating renal services distributed throughout Queensland, which is Australia's second largest state, measuring more than 1.72 million km2, 25% of Australia's land mass. It is four times the size of Japan, nearly six times the size of the UK and more than twice the size of Texas in the USA. 
The website informs and interfaces between CKD.QLD and the rest of the world, with exponential growth in access to the website internationally as well as nationally.", "A Management Committee governs CKD.QLD with specific subcommittees that drive key activities such as specific research projects [20].", "Substantial investment of personnel and resources has led to impressive progress as outlined below.\n Survey 1 of public health CKD services, Queensland All public renal services were eligible and agreed to participate in the CKD.QLD collaborative. The initial consultative and foundation work involved a broad clinical audit and profile of CKD in Queensland, through face-to-face meetings with senior providers at each site. It was completed in March 2010 and the results presented at the Australian and New Zealand Society of Nephrology (ANZSN) Annual Scientific Meeting in 2010 [Nephrology 2010; 15 (Suppl 4), 29]. This survey gathered information on the structure and functioning of services, including the burden of CKD, available resources and the processes involved in the delivery of clinical care. We established that all sites collected CKD patient data, usually in an Excel spreadsheet. This encouraged us to proceed to the next step of populating a central data repository (CDR). It also involved exploratory snapshot visits to private CKD practices, which revealed large numbers of patients in those settings, but substantial overlap with patients in the public system.\n Survey 2 of public health CKD services, Queensland Public renal services were resurveyed. Site Survey 2 was completed in March 2011 with results shared at the ANZSN 2011 conference [Nephrology 2011; 16 (Suppl 1), 37: 52]. This survey aimed to profile CKD management in more detail and employed a web-based questionnaire completed by senior renal medical and nursing staff at each Queensland Health (QH) renal service. The response rate of 100% demonstrated the commitment of Public Renal Service Providers to CKD.QLD. This survey indicated that there are 11 668 CKD patients potentially available for recruitment into the CKD.QLD registry in QH public CKD clinics. The majority of these patients (90%) are seen by nephrologists as the clinics are hospital based. Multidisciplinary CKD clinics are available throughout the state with variable allied health resources. CKD Nurse Practitioner (NP)-run clinics, especially in the community setting, are an increasing trend with five NPs seeing CKD patients independently and another 10 NPs actively taking part in CKD patient management. This survey also provided useful information on the clinical management of CKD including risk factor modification, such as blood pressure (BP) therapy, HbA1c monitoring and management of complications and referral patterns for RRT [Nephrology 2011; 16 (Suppl 1), 37: 53]. Angiotensin-converting enzyme inhibitors (ACEI) form the cornerstone of BP control and proteinuria reduction. Dual therapy with angiotensin receptor blockers is utilized in <50% of cases. HbA1c levels are followed in up to 85% of cases to assess the control of diabetes. Lipid-lowering agents, including statins, are increasingly used in the majority of cases (>90%) but not with the intention of slowing CKD progression. These findings from the survey laid the foundation and collaborative approach for the more refined and structured next phase of CKD Registry activity of defining the characteristics of all potentially recruitable patients across Queensland.", "All public renal services were eligible and agreed to participate in the CKD.QLD collaborative. The initial consultative and foundation work involved a broad clinical audit and profile of CKD in Queensland, through face-to-face meetings with senior providers at each site. It was completed in March 2010 and the results presented at the Australian and New Zealand Society of Nephrology (ANZSN) Annual Scientific Meeting in 2010 [Nephrology 2010; 15 (Suppl 4), 29]. This survey gathered information on the structure and functioning of services, including the burden of CKD, available resources and the processes involved in the delivery of clinical care. We established that all sites collected CKD patient data, usually in an Excel spreadsheet. This encouraged us to proceed to the next step of populating a central data repository (CDR). It also involved exploratory snapshot visits to private CKD practices, which revealed large numbers of patients in those settings, but substantial overlap with patients in the public system.", "Public renal services were resurveyed. Site Survey 2 was completed in March 2011 with results shared at the ANZSN 2011 conference [Nephrology 2011; 16 (Suppl 1), 37: 52]. 
This survey aimed to profile CKD management in more detail and employed a web-based questionnaire completed by senior renal medical and nursing staff at each Queensland Health (QH) renal service. The response rate of 100% demonstrated the commitment of Public Renal Service Providers to CKD.QLD. This survey indicated that there are 11 668 CKD patients potentially available for recruitment into the CKD.QLD registry in QH public CKD clinics. The majority of these patients (90%) are seen by nephrologists as the clinics are hospital based. Multidisciplinary CKD clinics are available throughout the state with variable allied health resources. CKD Nurse Practitioner (NP)-run clinics, especially in the community setting, are an increasing trend with five NPs seeing CKD patients independently and another 10 NPs actively taking part in CKD patient management. This survey also provided useful information on the clinical management of CKD including risk factor modification, such as blood pressure (BP) therapy, HbA1c monitoring and management of complications and referral patterns for RRT [Nephrology 2011; 16 (Suppl 1), 37: 53]. Angiotensin-converting enzyme inhibitors (ACEI) form the cornerstone of BP control and proteinuria reduction. Dual therapy with angiotensin receptor blockers is utilized in <50% of cases. HbA1c levels are followed in up to 85% of cases to assess the control of diabetes. Lipid-lowering agents, including statins, are increasingly used in the majority of cases (>90%) but not with the intention of slowing CKD progression. These findings from the survey laid the foundation and collaborative approach for the more refined and structured next phase of CKD Registry activity of defining the characteristics of all potentially recruitable patients across Queensland.", "The establishment of a CKD Registry in Queensland is a key objective of CKD.QLD. It is a major achievement given that no such system is available in Australia and very few exist globally. 
The structure and logistics for a Registry were developed during a Database Workshop held in mid-2010, which defined ethical issues, governance structure, roles and responsibilities and terms of reference. Queensland state-wide ethical approval of the Registry was obtained in November 2010, allowing applications for Site Specific Governance Approvals for each Renal Service site to proceed. Governance approval has now been achieved for all 12 primary sites, and consenting and centralized data collection has started. Approval at four smaller affiliated hospitals with facilities to care for CKD patients is in process, and these sites are expected to begin recruitment soon. The primary objective of the CKD.QLD Registry is to narrow the knowledge gap currently existing in CKD. A database will be populated from the CDR with the flexibility to scale to population level for epidemiological purposes or single patient level for clinical care. The latter will allow the longitudinal assessment of putative parameters or biomarkers recorded at the baseline, including data from additional testing. Such predictors can be incorporated into evaluations of models of CKD care, intensity of surveillance and interventions, and customization of individual therapy. It is envisioned that the Registry will be a valuable resource to cross-link data of CKD patients entering the ANZDATA Registry. 
The goals of the CKD.QLD Registry are summarized in Table 2.\nTable 2. Objectives of CKD.QLD Registry\nTo characterize patients within the CKD.QLD Registry\nTo evaluate longitudinal population trends in CKD\nTo document the course and outcomes of CKD patients\nTo develop predictors of prognosis and responsiveness to interventions\nTo identify treatment gaps and support and promote best CKD practice\nTo evaluate models of CKD health service delivery\nTo support CKD clinical trials\nTo develop a platform to link clinical data to a repository of biological samples for CKD biomarker research\nTo develop educational streams to improve knowledge and management of CKD\nTo establish data linkage of all stages of CKD with existing data sets such as the ANZDATA Registry\nTo develop research capability in healthcare workers", "Consenting and recruitment of both incident and accrued prevalent patients (the resource-intensive component) has progressed very well across all CKD practices. We are on target, with 1450 patients having given consent by the end of March 2012 and a negligible refusal rate. Given this positive experience, we will seek ethical approval for an ‘opt out’ consent protocol rather than the current ‘opt in’ recruitment system. 
The Australian National Health and Medical Research Council (NHMRC) is developing guidelines for an ‘opt out’ consent method, and CKD.QLD is engaged in the development process. The collaborative timeline is to recruit 3000 patients by mid-2012 and 11 000 by the end of the year. Informing and consenting incident CKD patients as they are referred to clinics will be a less labour-intensive process, performed by the Renal Service provider seeing the patient for their first consultation as a proposed part of standard care. Analysis of the first 600 patients recruited to the registry at the RBWH site has begun, and the profile and characteristics of this group will be reported in the near future. Similar analyses of data from other sites will follow.", "CKD.QLD facilitates initiation and start-up of the Registry at each site by providing patient information, assistance with consenting prevalent patients and refinement of data capture protocols. The CDR has the flexibility to accommodate data feeds from sources ranging from Excel spreadsheets to proprietary software such as Audit4 [21]. The CDR is housed at a QH site in the Centre for Chronic Disease, RBWH.", "The combination of increased referrals and earlier referral has put significant strain on resources and required a shift in the focus and strategy of delivering renal care. The previous concentration of hospital-based speciality practices is diffusing towards community CKD clinics and primary care. Practicing CKD NPs are complementing medical renal management across the state of Queensland, and these different models of CKD care are the targets for the nursing research initiatives of CKD.QLD.", "The CKD patient platform provides excellent opportunities for large-scale clinical trials, and several studies are currently in progress. The CKD Nursing Models of Care (CKiDNaP Study) is developing operational definitions for the NP role and the expert renal nurse in the clinical care of patients with CKD. 
This study is capturing the multidisciplinary team's acceptance of the NP role in the delivery of CKD care. Potentially modifiable dietary factors in CKD are being characterized in another study. Changes in dietary factors will be mapped and correlated over time with patient outcomes. This study has a target recruitment of 500 patients across multiple sites. Once completed, it is expected to provide useful information on dietary intake, body composition and their influence on cardiovascular outcomes in CKD patients. Studies in development will evaluate the earlier introduction of palliative care during CKD management, focusing on pain management, treatment of depression and symptom control, especially in the elderly. Planned future studies include randomized controlled trials assessing the effects of fish oil and allopurinol in mitigating the course of CKD. Our close collaboration with the AKTN, including shared investigators, is a major benefit.", "The search for biomarkers that accurately represent the onset, progress and prognosis of kidney disease is a priority of international research. Currently, a multitude of candidate markers are reported in the literature [22], but validation in the clinical setting is problematic [23]. Our registry provides an opportunity to integrate CKD clinical practice with applied biomarker research and post-marketing evaluation. Novel biomarker discovery and validation is based at the Institute for Molecular Bioscience (IMB), University of Queensland, Australia. In addition, a critical link has been established through an international biomarker research and validation collaboration between the Proof Centre of Excellence, Canada, and CKD.QLD. 
We aim to recruit 2100 patients into a biomarker study across three sites in Queensland to test the transferability of the results of the Canadian Study of Prediction of Risk and Evolution to Dialysis, Death and Interim Cardiovascular Events Over Time (CanPREDICT) study [24] to the Australian context. The study protocol includes the development of a biobank to store samples for future studies, an important planned component of CKD.QLD.", "CKD.QLD investigators have established major collaborations with important stakeholders at various levels. In Queensland, CKD.QLD is affiliated with two non-government kidney health advocacy agencies, KHA and the Kidney Support Network (KSN). An important relationship has been established with the Queensland Aboriginal Islander Health Council (QAIHC), the peak body representing the community-controlled indigenous health-care sector. Nationally, CKD.QLD is in discussion with Western Australia, the Northern Territory, Victoria and New South Wales on progressing a National Network in CKD research, a concept supported by key renal bodies, including KHA and ANZDATA. Internationally, we collaborate with CKD investigators in British Columbia, Canada.", "We know of no other comprehensive regional or state/territory-wide longitudinal Registry of CKD patients in renal speciality practices in Australia. There is as yet no comprehensive data source on people with Stages 1–4 CKD and no precedent for cross-linking their clinical data to existing data sets of mortality or RRT-dependent ESKD. This initiative in Queensland will establish the progress and prognosis of CKD in earlier stages in the sequence of health, risk, unrecognized CKD, recognized and referred CKD and ESKD. The successful development of the CKD Registry has substantial beneficial outcomes for many other groups. 
Some of the end-users who will benefit from CKD.QLD activities are given in Table 3.\nTable 3. End-users who will benefit from the activities and outcomes of CKD.QLD\nCKD patients, through better outcomes\nPractitioners, from structured and efficient support systems and models of care\nHealth systems, from more efficient models of care\nThe taxpayer, from cost containment achieved through more efficient services\nThe general population, through better health awareness\nCommercial interests, through increased identification of CKD and its better management, the clinical trials platform, new diagnostic agents and opportunities for private insurance", "The next major task is to aggregate data collected from all CKD.QLD sites, which will be supplied in different data formats. We have now completed recruitment of the first 1450 patients; a data audit is being performed, and missing information will be sought from the primary sources. Preliminary analysis of already accrued patient information will help us to refine the process, improve the methodology and review uniform data collection parameters.\nFuture projects will include sampling of CKD patients in Queensland private practices, sampling from other specialities such as cardiology, gerontology and endocrinology, and sampling within primary care and general practice. 
This collaboration will allow us to develop primary and secondary preventive programmes targeting CKD.", "Funding for the establishment of CKD.QLD has been provided by Prof W.E.H.'s NHMRC Australian Fellowship, the Colonial Foundation (Melbourne, Australia), AMGEN Australia, and Queensland Health.", "None declared." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Structure and organization of CKD.QLD", "Online resources", "Governance", "Progress to date", "Survey 1 of public health CKD services, Queensland", "Survey 2 of public health CKD services, Queensland", "Queensland CKD Registry", "Patient recruitment", "Data collection", "CKD models of care", "Clinical trials", "Biomarker research", "Collaborations", "Significance of CKD.QLD", "Future directions", "Conclusions", "Funding", "Conflict of interest statement" ]
[ "Chronic kidney disease (CKD) in Australia is a major public health problem. At least one biomarker of kidney injury has been found in one in seven Australian adults over the age of 25 years [1, 2]. CKD contributed to nearly 10% of deaths in 2006 and over 1.1 million hospitalizations in 2006–07 [3], both of which are underestimates driven by the incompleteness of CKD-related diagnostic coding.\nExcellent data on treated end-stage kidney disease (ESKD) patients are available from the Australia and New Zealand Dialysis and Transplant Registry (ANZDATA) [4]. Based on their reports, the incidence of treated ESKD in Australia is projected to increase by 80% between 2009 and 2020 [5]. In addition, the incidence and prevalence of ESKD are higher in the Aboriginal and Torres Strait Islander population, who represent almost 10% of new cases of treated ESKD despite being only 2.5% of the Australian population [6]. However, people on renal replacement therapy (RRT) represent only a small proportion of the CKD population.\nThere are no existing systematic mechanisms to estimate the incidence, prevalence and distribution of severity of Stages 1–4 CKD in Australia. The best available source of population-based CKD rates is The Australian Diabetes, Obesity and Lifestyle (AusDiab) study [1, 2]. However, this study is outdated and may have overestimated CKD prevalence: as it did not collect information on two occasions 3 months apart, its estimates may include some cases of spurious or temporary elevations of serum creatinine or of acute kidney injury. Studies like the Bettering the Evaluation And Care of Health (BEACH) Program [7] are continuously gathering excellent data on the burden of CKD in primary practice in the Australian population through recording of General Practice (GP) consultations. However, this study sacrifices meaningful detail to capture a wide array of summative health indices. 
A unique study called the ‘45 and Up Study’ [8] is currently recruiting 250 000 men and women aged 45 and over from the general population in the state of New South Wales, Australia. This study is expected to provide a range of health-related information, including risk factors for CKD, through a self-administered questionnaire and capture of additional information through linkage of health-related data sets.\nNotwithstanding these important initiatives, major gaps in our knowledge of CKD will remain. To address this deficiency, the Australian Government, through the Australian Institute of Health and Welfare (AIHW), established the National Centre for Monitoring Chronic Kidney Disease (NCM CKD) in 2007 [9]. The NCM CKD is located in the Cardiovascular, Diabetes and Kidney Unit at the AIHW, and the Medical Director of Kidney Health Australia (KHA) is the Chairman of the Advisory Committee. The Centre works with the AIHW and other organizations, using existing data sets to collect, aggregate, link and analyse information and to estimate future disease burdens. They have produced important documents on ESKD and on CKD, including its relationships with other conditions such as diabetes and cardiovascular disease [10–14]. However, the Agency's work is also impeded by the lack of population-based CKD data. The NCM CKD has therefore proposed a conceptual framework for building a surveillance system to monitor CKD with reference to Australia [9].\nConsistent with the vision and strategy of the NCM CKD, we have established a programme for surveillance, including the generation of an original data set, practice improvement and research for CKD, across the entire referred renal practice network in the public health system in Queensland, through an entity called CKD Queensland (CKD.QLD). It is unique within Australia and one of the few comprehensive CKD surveillance entities globally. Queensland has a current population of 4.6 million, or 20% of the Australian population. 
The number of retired individuals aged over 65 who are at high risk for CKD is 493 525, and this figure is expected to increase over time [15]. Queensland's multiethnic mix includes large numbers of Chinese, other Asian and Indian people, Pacific Islanders, Maori and, more recently, Africans. It also includes 146 400 Indigenous Australians, the second largest number in any state [16]. Queensland had 3511 prevalent RRT patients at the end of 2009 [17], ∼20% of the prevalent patients in the ANZDATA Registry, consistent with Queensland's proportion of the total Australian population. Queensland has 80 nephrologists and 20 CKD nurses and nurse practitioners spread across 15 Health Service districts [18]. The demographic profile of Queensland is comparable to the rest of Australia (Table 1).\nTable 1. 2006 Australian Census: Queensland versus Australia(a)\n | Queensland | % of persons | Australia | % of persons\nTotal persons (residents) | 3 904 532 | — | 19 855 288 | —\nMales | 1 935 381 | 49.6 | 9 799 252 | 49.4\nFemales | 1 969 151 | 50.4 | 10 056 036 | 50.6\nIndigenous persons | 127 578 | 3.3 | 455 031 | 2.3\nAge group: 0–4 years | 257 077 | 6.6 | 1 260 405 | 6.3\nAge group: 5–14 years | 549 455 | 14.1 | 2 676 807 | 13.5\nAge group: 15–24 years | 539 206 | 13.8 | 2 704 276 | 13.6\nAge group: 25–54 years | 1 638 354 | 42.0 | 8 376 751 | 42.2\nAge group: 55–64 years | 437 550 | 11.2 | 2 192 675 | 11.0\nAge group: 65 years and over | 482 891 | 12.4 | 2 644 374 | 13.3\nMedian age of persons | 36 | — | 37 | —\nLanguage spoken at home: English only | 3 371 684 | 86.4 | 15 581 333 | 78.5\nLanguage spoken at home: Mandarin | 24 447 | 0.6 | 220 601 | 1.1\nLanguage spoken at home: Italian | 22 032 | 0.6 | 316 890 | 1.6\nLanguage spoken at home: Cantonese | 19 627 | 0.5 | 244 553 | 1.2\nLanguage spoken at home: Vietnamese | 17 145 | 0.4 | 194 855 | 1.0\nLanguage spoken at home: German | 14 743 | 0.4 | 75 636 | 0.4\nIncome (persons ≥15 years), $ weekly: Median individual income | 476 | — | 466 | —\nIncome (persons ≥15 years), $ weekly: Median household income | 1033 | — | 1027 | —\nIncome (persons ≥15 years), $ weekly: Median family income | 1154 | — | 1171 | —\n(a) Adapted from 2006 Australian Census, Quickstats-Queensland http://www.censusdata.abs.gov.au/.\nThe objectives of CKD.QLD include establishing a Registry and an ongoing surveillance system of all the CKD 
patients in renal practices in the public health system in the state of Queensland. We will define CKD distribution and characteristics, evaluate longitudinal CKD population trends and outcomes, identify treatment gaps, promote best practice and conduct clinical trials in collaboration with the Australian Kidney Trials Network (AKTN) [19]. We are also exploring models of CKD health service delivery and aim to develop CKD education and training streams. This will provide a platform for continuous practice improvement and research and build knowledge and capacity in the care of CKD patients (Figure 1).\nFig. 1. CKD.QLD research operating platforms.", "CKD.QLD arose from collaboration between the Centre for Chronic Disease at the University of Queensland (UQ) and Kidney Research in the Department of Renal Medicine at the Royal Brisbane and Women's Hospital (RBWH) in Brisbane, Queensland, Australia. It has expanded to include all public health renal practices in Queensland. All those involved in the care of renal patients in Queensland, encompassing medical, nursing and allied health staff and scientists, have become partners in the initiative. Participating Queensland Health Renal Services are shown in Figure 2.\nFig. 2. Participating Queensland Health Renal services.", "The CKD.QLD website [20] is a key resource for communicating activities and network outcomes between the participating renal services distributed throughout Queensland, which is Australia's second largest state measuring more than 1.72 million km2, 25% of Australia's land mass. It is four times the size of Japan, nearly six times the size of the UK and more than twice the size of Texas in the USA. 
The website serves as an interface between CKD.QLD and the rest of the world, with rapidly growing access to the site both nationally and internationally.", "A Management Committee governs CKD.QLD, with subcommittees that drive key activities such as specific research projects [20].", "Substantial investment of personnel and resources has led to impressive progress, as outlined below.\n Survey 1 of public health CKD services, Queensland All public renal services were eligible and agreed to participate in the CKD.QLD collaborative. The initial consultative and foundation work involved a broad clinical audit and profile of CKD in Queensland, through face-to-face meetings with senior providers at each site. It was completed in March 2010 and the results presented at the Australian and New Zealand Society of Nephrology (ANZSN) Annual Scientific Meeting in 2010 [Nephrology 2010; 15 (Suppl 4), 29]. This survey gathered information on the structure and functioning of services, including the burden of CKD, available resources and the processes involved in the delivery of clinical care. We established that all sites collected CKD patient data, usually in an Excel spreadsheet. This encouraged us to proceed to the next step of populating a central data repository (CDR). It also involved exploratory snapshot visits to private CKD practices, which revealed large numbers of patients in those settings, but substantial overlap with patients in the public system.\n Survey 2 of public health CKD services, Queensland Public renal services were resurveyed. Site Survey 2 was completed in March 2011 with results shared at the ANZSN 2011 conference [Nephrology 2011; 16 (Suppl 1), 37: 52]. This survey aimed to profile CKD management in more detail and employed a web-based questionnaire completed by senior renal medical and nursing staff at each Queensland Health (QH) renal service. The response rate of 100% demonstrated the commitment of Public Renal Service Providers to CKD.QLD. This survey indicated that there are 11 668 CKD patients potentially available for recruitment into the CKD.QLD registry in QH public CKD clinics. The majority of these patients (90%) are seen by nephrologists, as the clinics are hospital based. Multidisciplinary CKD clinics are available throughout the state with variable allied health resources. CKD Nurse Practitioner (NP)-run clinics, especially in the community setting, are an increasing trend, with five NPs seeing CKD patients independently and another 10 NPs actively taking part in CKD patient management. This survey also provided useful information on the clinical management of CKD, including risk factor modification such as blood pressure (BP) therapy, HbA1c monitoring, management of complications and referral patterns for RRT [Nephrology 2011; 16 (Suppl 1), 37: 53]. Angiotensin-converting enzyme inhibitors (ACEI) form the cornerstone of BP control and proteinuria reduction. Dual therapy with angiotensin receptor blockers is utilized in <50% of cases. HbA1c levels are followed in up to 85% of cases to assess the control of diabetes. Lipid-lowering agents, including statins, are increasingly used in the majority of cases (>90%) but not with the intention of slowing CKD progression. These findings from the survey laid the foundation and collaborative approach for the more refined and structured next phase of CKD Registry activity: defining the characteristics of all potentially recruitable patients across Queensland.", "All public renal services were eligible and agreed to participate in the CKD.QLD collaborative. The initial consultative and foundation work involved a broad clinical audit and profile of CKD in Queensland, through face-to-face meetings with senior providers at each site. It was completed in March 2010 and the results presented at the Australian and New Zealand Society of Nephrology (ANZSN) Annual Scientific Meeting in 2010 [Nephrology 2010; 15 (Suppl 4), 29]. This survey gathered information on the structure and functioning of services, including the burden of CKD, available resources and the processes involved in the delivery of clinical care. We established that all sites collected CKD patient data, usually in an Excel spreadsheet. This encouraged us to proceed to the next step of populating a central data repository (CDR). It also involved exploratory snapshot visits to private CKD practices, which revealed large numbers of patients in those settings, but substantial overlap with patients in the public system.", "Public renal services were resurveyed. Site Survey 2 was completed in March 2011 with results shared at the ANZSN 2011 conference [Nephrology 2011; 16 (Suppl 1), 37: 52]. 
This survey aimed to profile CKD management in more detail and employed a web-based questionnaire completed by senior renal medical and nursing staff at each Queensland Health (QH) renal service. The response rate of 100% demonstrated the commitment of Public Renal Service Providers to CKD.QLD. This survey indicated that there are 11 668 CKD patients potentially available for recruitment into the CKD.QLD registry in QH public CKD clinics. The majority of these patients (90%) are seen by nephrologists, as the clinics are hospital based. Multidisciplinary CKD clinics are available throughout the state with variable allied health resources. CKD Nurse Practitioner (NP)-run clinics, especially in the community setting, are an increasing trend, with five NPs seeing CKD patients independently and another 10 NPs actively taking part in CKD patient management. This survey also provided useful information on the clinical management of CKD, including risk factor modification such as blood pressure (BP) therapy, HbA1c monitoring, management of complications and referral patterns for RRT [Nephrology 2011; 16 (Suppl 1), 37: 53]. Angiotensin-converting enzyme inhibitors (ACEI) form the cornerstone of BP control and proteinuria reduction. Dual therapy with angiotensin receptor blockers is utilized in <50% of cases. HbA1c levels are followed in up to 85% of cases to assess the control of diabetes. Lipid-lowering agents, including statins, are increasingly used in the majority of cases (>90%) but not with the intention of slowing CKD progression. These findings from the survey laid the foundation and collaborative approach for the more refined and structured next phase of CKD Registry activity: defining the characteristics of all potentially recruitable patients across Queensland.", "The establishment of a CKD Registry in Queensland is a key objective of CKD.QLD. It is a major achievement given that no such system is available in Australia and very few exist globally. 
The structure and logistics for a Registry were developed during a Database Workshop held in mid-2010, which defined ethical issues, governance structure, roles and responsibilities and terms of reference. Queensland state-wide ethical approval of the Registry was obtained in November 2010, allowing applications for Site Specific Governance Approvals for each Renal Service site to proceed. Governance approval has now been achieved for all 12 primary sites, and consenting and centralized data collection has started. Approval at four smaller affiliated hospitals with facilities to care for CKD patients is in process, and these sites are expected to begin recruitment soon. The primary objective of the CKD.QLD Registry is to narrow the knowledge gap currently existing in CKD. A database will be populated from the CDR with the flexibility to scale to population level for epidemiological purposes or to single patient level for clinical care. The latter will allow the longitudinal assessment of putative parameters or biomarkers recorded at baseline, including data from additional testing. Such predictors can be incorporated into evaluations of models of CKD care, intensity of surveillance and interventions, and customization of individual therapy. It is envisioned that the Registry will be a valuable resource to cross-link data of CKD patients entering the ANZDATA Registry. 
The goals of the CKD.QLD Registry are summarized in Table 2.\nTable 2. Objectives of CKD.QLD Registry\nTo characterize patients within the CKD.QLD Registry\nTo evaluate longitudinal population trends in CKD\nTo document the course and outcomes of CKD patients\nTo develop predictors of prognosis and responsiveness to interventions\nTo identify treatment gaps and support and promote best CKD practice\nTo evaluate models of CKD health service delivery\nTo support CKD clinical trials\nTo develop a platform to link clinical data to a repository of biological samples for CKD biomarker research\nTo develop educational streams to improve knowledge and management of CKD\nTo establish data linkage of all stages of CKD with existing data sets such as the ANZDATA Registry\nTo develop research capability in healthcare workers", "Consenting and recruitment of both incident and accrued prevalent patients (the resource-intensive component) has progressed very well across all CKD practices. We are on target, with 1450 patients having given consent by the end of March 2012 and a negligible refusal rate. Given this positive experience, we will seek ethical approval for an ‘opt out’ consent protocol rather than the current ‘opt in’ recruitment system. 
The Australian National Health and Medical Research Council (NHMRC) is developing guidelines for an ‘opt out’ consent method, and CKD.QLD is engaged in the development process. The collaborative timeline is to recruit 3000 patients by mid-2012 and 11 000 by the end of the year. Informing and consenting incident CKD patients as they are referred to clinics will be a less labour-intensive process, performed by the Renal Service provider seeing the patient for their first consultation as a proposed part of standard care. Analysis of the first 600 patients recruited to the registry at the RBWH site has begun, and the profile and characteristics of this group will be reported in the near future. Similar analyses of data from other sites will follow.", "CKD.QLD facilitates initiation and start-up of the Registry at each site by providing patient information, assistance with consenting prevalent patients and refinement of data capture protocols. The CDR has the flexibility to accommodate data feeds from sources ranging from Excel spreadsheets to proprietary software such as Audit4 [21]. The CDR is housed at a QH site in the Centre for Chronic Disease, RBWH.", "The combination of increased referrals and earlier referral has put significant strain on resources and required a shift in the focus and strategy of delivering renal care. The previous concentration of hospital-based speciality practices is diffusing towards community CKD clinics and primary care. Practicing CKD NPs are complementing medical renal management across the state of Queensland, and these different models of CKD care are the targets for the nursing research initiatives of CKD.QLD.", "The CKD patient platform provides excellent opportunities for large-scale clinical trials, and several studies are currently in progress. The CKD Nursing Models of Care (CKiDNaP Study) is developing operational definitions for the NP role and the expert renal nurse in the clinical care of patients with CKD. 
This study is capturing the multidisciplinary team's acceptance of the NP role in the delivery of CKD care. Potentially modifiable dietary factors in CKD are being characterized in another study. Changes in the dietary factors will be mapped and correlated over time with patient outcomes. This study has a target recruitment of 500 patients across multiple sites. Once completed, this study is expected to provide useful information on dietary intake, body composition and their influence on cardiovascular outcomes in CKD patients. Studies in development will evaluate the introduction of palliative care earlier and during CKD management, focusing on pain management, treatment of depression and symptom control, especially in the elderly. Planned future studies include randomized controlled trials assessing the effects of fish oil and allopurinol in mitigating the course of CKD. Our close collaboration and shared investigators with the AKTN is a major benefit.", "The search for biomarkers that accurately represents the onset, progress and prognosis of kidney disease is a priority of international research. Currently, a multitude of candidate markers are reported in the literature [22], but validation in the clinical setting is problematic [23]. Our registry provides an opportunity to integrate CKD clinical practices with applied biomarker research and post-marketing evaluation. Novel biomarker discovery and validation is based at the Institute for Molecular Bioscience (IMB), University of Queensland, Australia. In addition, a critical link has been established with international biomarker research collaboration and validation, between the Proof Centre of Excellence, Canada, and CKD.QLD. 
We are aiming at a recruitment target of 2100 patients into a biomarker study across three sites in Queensland to test the transferability of the Canadian Study of Prediction of Risk and Evolution to Dialysis, Death and Interim Cardiovascular Events Over Time study (CanPREDICT study) [24] results to the Australian context. This study protocol includes the development of a biobank to store samples for future studies, an important planned future development of CKD.QLD.", "CKD.QLD investigators have established major collaborations with important stakeholders at various levels. In Queensland, CKD.QLD is affiliated with two non-government kidney health advocacy agencies, KHA and Kidney Support Network (KSN). An important relationship is established with the Queensland Aboriginal Islander Health Council (QAIHC), the peak body representing the community controlled indigenous health-care sector. Nationally, CKD.QLD is in discussion with Western Australia, the Northern Territory, Victoria and New South Wales in progressing a National Network in CKD research, a concept that is supported by key renal bodies, including KHA and ANZDATA. Internationally, we are in collaboration with CKD investigators in British Columbia, Canada.", "We know of no other comprehensive regional or state/territory-wide longitudinal Registry of CKD patients in renal speciality practices in Australia. There is as yet no comprehensive data source on people with Stages 1–4 CKD and no precedence to cross-link their clinical data to existing data sets of mortality or RRT dependent ESKD. This initiative in Queensland will establish the progress and prognosis of CKD in earlier stages in the sequence of health, risk, unrecognized CKD, recognized and referred CKD and ESKD. The successful development of the CKD Registry has substantial beneficial outcomes for many other groups. 
Some of the end-users who will benefit from CKD.QLD activities are given in Table 3.\nTable 3.End-users who will benefit from the activities and outcomes of CKD.QLD\nCKD patients, through better outcomesPractitioners, from structured and efficient support systems and models of careHealth systems, from more efficient models of careThe taxpayer, from cost containment achieved through more efficient servicesThe general population, through better health awarenessCommercial interests, through increased identification of CKD and its better management, the clinical trials platform, new diagnostic agents and opportunities for private insurance\nEnd-users who will benefit from the activities and outcomes of CKD.QLD\nCKD patients, through better outcomes\nPractitioners, from structured and efficient support systems and models of care\nHealth systems, from more efficient models of care\nThe taxpayer, from cost containment achieved through more efficient services\nThe general population, through better health awareness\nCommercial interests, through increased identification of CKD and its better management, the clinical trials platform, new diagnostic agents and opportunities for private insurance", "The next major task is to aggregate data collected from all CKD.QLD sites, which will be supplied in different data formats. We have now completed recruitment of the first 1450 patients and data audit is being performed and missing information will be sought from the primary sources. Preliminary analysis of already accrued patient information will help us to refine the process, improve the methodology and review uniform data collection parameters.\nFuture projects will include sampling of CKD patients in Queensland private practices, sampling from other specialities such as cardiology, gerontology and endocrinology, and sampling within primary care and general practice. 
This collaboration will allow us to develop primary and secondary preventive programmes targeting CKD.", "CKD.QLD has evolved as a platform for research and practice improvement in CKD management in Australia. We have successfully established the Registry and are on target to recruit 5000 CKD patients by mid-2012. In the process, we have developed a systematic and comprehensive surveillance system for referred CKD monitoring in Australia. The Registry will also facilitate and inform data cross-linkage between the initial stages of CKD with that of treated ESKD as reported in ANZDATA. We have initiated major research studies of various aspects of CKD management. It is anticipated that the activities and products of CKD.QLD will be a benchmark for other states in Australia in understanding and managing CKD across the nation. We expect the findings by demographic group generated through the CKD.QLD registry mechanism may be extrapolated to populations groups in other states/territories and their accuracy evaluated. We anticipate that CKD.QLD will coalesce with other states and regions in Australia into a broader National CKD Registry and network. In the longer term CKD.QLD will continue to grow international collaborations and contribute significantly towards a global improvement of CKD management.", "Funding for the establishment of CKD.QLD has been provided by Prof W.E.H. NHMRC Australian Fellowship, the Colonial Foundation (Melbourne, Australia), AMGEN Australia, and Queensland Health.", "None declared." ]
Keywords: chronic kidney disease, registry, surveillance
Introduction: Chronic kidney disease (CKD) in Australia is a major public health problem. At least one biomarker of kidney injury has been found in one in seven Australian adults over the age of 25 years [1, 2]. CKD contributed to nearly 10% of deaths in 2006 and over 1.1 million hospitalizations in 2006–07 [3], both of which are underestimates driven by the incompleteness of CKD-related diagnostic coding. Excellent data on treated end-stage kidney disease (ESKD) patients are available from the Australia and New Zealand Dialysis and Transplant Registry (ANZDATA) [4]. Based on their reports, the incidence of treated ESKD in Australia is projected to increase by 80% between 2009 and 2020 [5]. In addition, the incidence and prevalence of ESKD are higher in the Aboriginal and Torres Strait Islander population, who represent almost 10% of new cases of treated ESKD despite being only 2.5% of the Australian population [6]. However, people on renal replacement therapy (RRT) represent only a small proportion of the CKD population. There are no existing systematic mechanisms to estimate the incidence, prevalence and distribution of severity of Stages 1–4 CKD in Australia. The best available source of population-based CKD rates is the Australian Diabetes, Obesity and Lifestyle (AusDiab) study [1, 2]. However, this study is outdated and may have overestimated CKD prevalence, as it did not collect information on two occasions 3 months apart, so its estimates may include some cases of spurious or temporary elevations of serum creatinine or of acute kidney injury. Studies like the Bettering the Evaluation And Care of Health (BEACH) Program [7] are continuously gathering excellent data on the burden of CKD in primary practice in the Australian population through recording of General Practice (GP) consultations. However, this study sacrifices meaningful detail to capture a wide array of summative health indices.
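The CKD stages referred to throughout this article follow the standard KDOQI classification by estimated GFR, which the text assumes but does not restate. A minimal illustrative sketch (the function name and interface are ours, not part of any CKD.QLD tooling):

```python
def ckd_stage(egfr, kidney_damage=False):
    """Map an eGFR value (mL/min/1.73 m^2) to a KDOQI CKD stage.

    Stages 1-2 additionally require evidence of kidney damage
    (e.g. proteinuria); returns None when staging criteria are not met.
    """
    if egfr >= 90:
        return 1 if kidney_damage else None  # normal eGFR, damage required
    if egfr >= 60:
        return 2 if kidney_damage else None  # mildly reduced, damage required
    if egfr >= 30:
        return 3
    if egfr >= 15:
        return 4
    return 5  # kidney failure; ESKD if treated with RRT
```

Registries such as the one described here typically record both the stage and the underlying eGFR, since stage boundaries alone hide progression within a stage.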
A unique study called the ‘45 and Up Study’ [8] is currently recruiting 250 000 men and women aged 45 and over from the general population in the state of New South Wales, Australia. This study is expected to provide a range of health-related information, including risk factors for CKD, through a self-administered questionnaire and capture of additional information through linkage of health-related data sets. Notwithstanding these important initiatives, major gaps in our knowledge of CKD will remain. To address this deficiency, the Australian Government, through the Australian Institute of Health and Welfare (AIHW), established the National Centre for Monitoring Chronic Kidney Disease (NCM CKD) in 2007 [9]. The NCM CKD is located in the Cardiovascular, Diabetes and Kidney Unit at the AIHW, and the Medical Director of Kidney Health Australia (KHA) is the Chairman of the Advisory Committee. The Centre works with the AIHW and other organizations, using existing data sets to collect, aggregate, link and analyse information and to estimate future disease burdens. They have produced important documents on ESKD and on CKD, including its relationships with other conditions such as diabetes and cardiovascular disease [10–14]. However, the Agency's work is also impeded by the lack of population-based CKD data. The NCM CKD has therefore proposed a conceptual framework for building a surveillance system to monitor CKD with reference to Australia [9]. Consistent with the vision and strategy of the NCM CKD, we have established a programme for surveillance, including the generation of an original data set, practice improvement and research for CKD across the entire referred renal practice network in the public health system in Queensland, through an entity called CKD Queensland (CKD.QLD). It is unique within Australia and one of the few comprehensive CKD surveillance entities globally. Queensland has a current population of 4.6 million, or 20% of the Australian population.
The number of retired individuals aged over 65, who are at high risk for CKD, is 493 525, and this figure is expected to increase over time [15]. Queensland's multiethnic mix includes large numbers of Chinese, other Asian and Indian people, Pacific Islanders, Maori and, more recently, Africans. It also includes 146 400 Indigenous Australians, the second largest number in any state [16]. Queensland had 3511 prevalent RRT patients at the end of 2009 [17], ∼20% of the prevalent patients in the ANZDATA Registry, consistent with Queensland's proportion of the total Australian population. Queensland has 80 nephrologists and 20 CKD nurses and nurse practitioners spread across 15 Health Service districts [18]. The demographic profile of Queensland is comparable to the rest of Australia (Table 1).

Table 1. 2006 Australian Census: Queensland versus Australia(a)

                                 Queensland   % of persons   Australia    % of persons
Total persons (residents)        3 904 532    —              19 855 288   —
  Males                          1 935 381    49.6           9 799 252    49.4
  Females                        1 969 151    50.4           10 056 036   50.6
  Indigenous persons             127 578      3.3            455 031      2.3
Age group
  0–4 years                      257 077      6.6            1 260 405    6.3
  5–14 years                     549 455      14.1           2 676 807    13.5
  15–24 years                    539 206      13.8           2 704 276    13.6
  25–54 years                    1 638 354    42.0           8 376 751    42.2
  55–64 years                    437 550      11.2           2 192 675    11.0
  65 years and over              482 891      12.4           2 644 374    13.3
  Median age of persons          36           —              37           —
Language spoken at home
  English only                   3 371 684    86.4           15 581 333   78.5
  Mandarin                       24 447       0.6            220 601      1.1
  Italian                        22 032       0.6            316 890      1.6
  Cantonese                      19 627       0.5            244 553      1.2
  Vietnamese                     17 145       0.4            194 855      1.0
  German                         14 743       0.4            75 636       0.4
Income (persons ≥15 years), $ weekly
  Median individual income       476          —              466          —
  Median household income        1033         —              1027         —
  Median family income           1154         —              1171         —

(a) Adapted from the 2006 Australian Census, QuickStats-Queensland, http://www.censusdata.abs.gov.au/.
The objectives of CKD.QLD include establishing a Registry and an ongoing surveillance system of all the CKD patients in renal practices in the public health system in the state of Queensland. We will define CKD distribution and characteristics, evaluate longitudinal CKD population trends and outcomes, identify treatment gaps, promote best practice and conduct clinical trials in collaboration with the Australian Kidney Trials Network (AKTN) [19]. We are also exploring models of CKD health service delivery and aim to develop CKD education and training streams. This will provide a platform for continuous practice improvement and research and build knowledge and capacity in the care of CKD patients (Figure 1). Fig. 1. CKD.QLD research operating platforms. Structure and organization of CKD.QLD: CKD.QLD arose from collaboration between the Centre for Chronic Disease at the University of Queensland (UQ) and Kidney Research in the Department of Renal Medicine at the Royal Brisbane and Women's Hospital (RBWH) in Brisbane, Queensland, Australia. It has expanded to include all public health renal practices in Queensland. Everyone involved in the care of renal patients in Queensland, encompassing medical, nursing, allied health and scientific staff, has become a partner in the initiative. Participating Queensland Health Renal Services are shown in Figure 2. Fig. 2. Participating Queensland Health Renal services. Online resources: The CKD.QLD website [20] is a key resource for communicating activities and network outcomes between the participating renal services distributed throughout Queensland. Queensland is Australia's second largest state, measuring more than 1.72 million km2, or 25% of Australia's land mass; it is four times the size of Japan, nearly six times the size of the UK and more than twice the size of Texas in the USA.
The website informs and interfaces between CKD.QLD and the rest of the world, with exponential growth in access to the website internationally as well as nationally. Governance: A Management Committee governs CKD.QLD, with specific subcommittees that drive key activities such as specific research projects [20]. Progress to date: Substantial investment of personnel and resources has led to impressive progress, as outlined below. Survey 1 of public health CKD services, Queensland: All public renal services were eligible and agreed to participate in the CKD.QLD collaborative. The initial consultative and foundation work involved a broad clinical audit and profile of CKD in Queensland, through face-to-face meetings with senior providers at each site. It was completed in March 2010 and the results presented at the Australian and New Zealand Society of Nephrology (ANZSN) Annual Scientific Meeting in 2010 [Nephrology 2010; 15 (Suppl 4), 29]. This survey gathered information on the structure and functioning of services, including the burden of CKD, available resources and the processes involved in the delivery of clinical care. We established that all sites collected CKD patient data, usually in an Excel spreadsheet. This encouraged us to proceed to the next step of populating a central data repository (CDR). It also involved exploratory snapshot visits to private CKD practices, which revealed large numbers of patients in those settings, but substantial overlap with patients in the public system. Survey 2 of public health CKD services, Queensland: Public renal services were resurveyed. Site Survey 2 was completed in March 2011, with results shared at the ANZSN 2011 conference [Nephrology 2011; 16 (Suppl 1), 37: 52]. This survey aimed to profile CKD management in more detail and employed a web-based questionnaire completed by senior renal medical and nursing staff at each Queensland Health (QH) renal service. The response rate of 100% demonstrated the commitment of Public Renal Service Providers to CKD.QLD. This survey indicated that there are 11 668 CKD patients potentially available for recruitment into the CKD.QLD registry in QH public CKD clinics. The majority of these patients (90%) are seen by nephrologists, as the clinics are hospital based. Multidisciplinary CKD clinics are available throughout the state with variable allied health resources. CKD Nurse Practitioner (NP)-run clinics, especially in the community setting, are an increasing trend, with five NPs seeing CKD patients independently and another 10 NPs actively taking part in CKD patient management. This survey also provided useful information on the clinical management of CKD, including risk factor modification, such as blood pressure (BP) therapy, HbA1c monitoring and management of complications, and referral patterns for RRT [Nephrology 2011; 16 (Suppl 1), 37: 53]. Angiotensin-converting enzyme inhibitors (ACEI) form the cornerstone of BP control and proteinuria reduction. Dual therapy with angiotensin receptor blockers is utilized in <50% of cases. HbA1c levels are followed in up to 85% of cases to assess the control of diabetes. Lipid-lowering agents, including statins, are increasingly used in the majority of cases (>90%) but not with the intention of slowing CKD progression. These findings from the survey laid the foundation and collaborative approach for the more refined and structured next phase of CKD Registry activity: defining the characteristics of all potentially recruitable patients across Queensland. Queensland CKD Registry: The establishment of a CKD Registry in Queensland is a key objective of CKD.QLD.
It is a major achievement given that no such system is available in Australia and very few exist globally. The structure and logistics for a Registry were developed during a Database Workshop held in mid-2010, which defined ethical issues, governance structure, roles and responsibilities and terms of reference. Queensland state-wide ethical approval of the Registry was obtained in November 2010, allowing applications for Site Specific Governance Approvals for each Renal Service site to proceed. Governance approval has now been achieved for all 12 primary sites, and consenting and centralized data collection have started. Approval for four smaller affiliated hospitals with facilities to care for CKD patients is in process, and these sites are expected to begin recruitment soon. The primary objective of the CKD.QLD Registry is to narrow the knowledge gap currently existing in CKD. A database will be populated from the CDR with the flexibility to scale to population level for epidemiological purposes or to single patient level for clinical care. The latter will allow the longitudinal assessment of putative parameters or biomarkers recorded at baseline, including data from additional testing. Such predictors can be incorporated into evaluations of models of CKD care, intensity of surveillance and interventions, and customization of individual therapy. It is envisioned that the Registry will be a valuable resource for cross-linking data of CKD patients entering the ANZDATA Registry. The goals of the CKD.QLD Registry are summarized in Table 2.
Table 2. Objectives of CKD.QLD Registry
- To characterize patients within the CKD.QLD Registry
- To evaluate longitudinal population trends in CKD
- To document the course and outcomes of CKD patients
- To develop predictors of prognosis and responsiveness to interventions
- To identify treatment gaps and support and promote best CKD practice
- To evaluate models of CKD health service delivery
- To support CKD clinical trials
- To develop a platform to link clinical data to a repository of biological samples for CKD biomarker research
- To develop educational streams to improve knowledge and management of CKD
- To establish data linkage of all stages of CKD with existing data sets such as the ANZDATA Registry
- To develop research capability in healthcare workers

Patient recruitment: Consenting and recruitment of both incident and accrued prevalent patients (the resource-intensive component) has progressed very well across all CKD practices. We are on target, with 1450 patients having given consent by the end of March 2012 and a negligible refusal rate. With this positive experience, we will seek ethical approval for an ‘opt-out’ consent protocol rather than the current ‘opt-in’ recruitment system.
The Australian National Health and Medical Research Council (NHMRC) is developing guidelines for the opt-out consent method, and CKD.QLD is engaged in the development process. The collaborative timeline is to recruit 3000 patients by mid-2012 and 11 000 by the end of the year. Informing and consenting of incident CKD patients as they are referred to clinics will be a less labour-intensive process, performed by the Renal Service provider seeing the patient for their first consultation as a proposed part of standard care. The analysis of the first 600 patients recruited to the registry at the RBWH site has begun, and the profile and characteristics of this group will be reported in the near future. Similar analysis of data from other sites will follow. Data collection: CKD.QLD facilitates initiation and start-up of the Registry at each site by providing patient information, assistance with consenting prevalent patients and refinement of data capture protocols. The CDR has the flexibility to accommodate data feeds from sources ranging from Excel spreadsheets to proprietary software such as Audit4 [21]. The CDR is housed at a QH site in the Centre for Chronic Disease, RBWH. CKD models of care: The combination of increased referrals and earlier referral has put significant strain on resources and required a shift in the focus and strategy of delivering renal care. The previous concentration of hospital-based speciality practices is diffusing towards community CKD clinics and primary care. Practicing CKD NPs are complementing medical renal management across the state of Queensland, and these different models of CKD care are the targets for the nursing research initiatives of CKD.QLD. Clinical trials: The CKD patient platform provides excellent opportunities for large-scale clinical trials, and several studies are currently in progress.
The CKD Nursing Models of Care (CKiDNaP) Study is developing operational definitions for the NP and expert renal nurse roles in the clinical care of patients with CKD. This study is capturing the multidisciplinary team's acceptance of the NP role in the delivery of CKD care. Potentially modifiable dietary factors in CKD are being characterized in another study. Changes in dietary factors will be mapped and correlated over time with patient outcomes. This study has a target recruitment of 500 patients across multiple sites. Once completed, it is expected to provide useful information on dietary intake, body composition and their influence on cardiovascular outcomes in CKD patients. Studies in development will evaluate the earlier introduction of palliative care during CKD management, focusing on pain management, treatment of depression and symptom control, especially in the elderly. Planned future studies include randomized controlled trials assessing the effects of fish oil and allopurinol in mitigating the course of CKD. Our close collaboration and shared investigators with the AKTN are a major benefit. Biomarker research: The search for biomarkers that accurately represent the onset, progress and prognosis of kidney disease is a priority of international research. Currently, a multitude of candidate markers are reported in the literature [22], but validation in the clinical setting is problematic [23]. Our registry provides an opportunity to integrate CKD clinical practices with applied biomarker research and post-marketing evaluation. Novel biomarker discovery and validation is based at the Institute for Molecular Bioscience (IMB), University of Queensland, Australia. In addition, a critical link has been established through an international biomarker research and validation collaboration between the Proof Centre of Excellence, Canada, and CKD.QLD.
We are aiming to recruit 2100 patients into a biomarker study across three sites in Queensland to test the transferability of the results of the Canadian Study of Prediction of Risk and Evolution to Dialysis, Death and Interim Cardiovascular Events Over Time (CanPREDICT study) [24] to the Australian context. This study protocol includes the development of a biobank to store samples for future studies, an important planned future development of CKD.QLD. Collaborations: CKD.QLD investigators have established major collaborations with important stakeholders at various levels. In Queensland, CKD.QLD is affiliated with two non-government kidney health advocacy agencies, KHA and the Kidney Support Network (KSN). An important relationship has been established with the Queensland Aboriginal Islander Health Council (QAIHC), the peak body representing the community-controlled indigenous health-care sector. Nationally, CKD.QLD is in discussion with Western Australia, the Northern Territory, Victoria and New South Wales on progressing a National Network in CKD research, a concept that is supported by key renal bodies, including KHA and ANZDATA. Internationally, we are collaborating with CKD investigators in British Columbia, Canada. Significance of CKD.QLD: We know of no other comprehensive regional or state/territory-wide longitudinal Registry of CKD patients in renal speciality practices in Australia. There is as yet no comprehensive data source on people with Stages 1–4 CKD and no precedent for cross-linking their clinical data to existing data sets of mortality or RRT-dependent ESKD. This initiative in Queensland will establish the progress and prognosis of CKD in its earlier stages, in the sequence of health, risk, unrecognized CKD, recognized and referred CKD, and ESKD. The successful development of the CKD Registry has substantial beneficial outcomes for many other groups. Some of the end-users who will benefit from CKD.QLD activities are given in Table 3.
Table 3. End-users who will benefit from the activities and outcomes of CKD.QLD
- CKD patients, through better outcomes
- Practitioners, from structured and efficient support systems and models of care
- Health systems, from more efficient models of care
- The taxpayer, from cost containment achieved through more efficient services
- The general population, through better health awareness
- Commercial interests, through increased identification of CKD and its better management, the clinical trials platform, new diagnostic agents and opportunities for private insurance

Future directions: The next major task is to aggregate the data collected from all CKD.QLD sites, which are supplied in different data formats. We have now completed recruitment of the first 1450 patients; a data audit is being performed, and missing information will be sought from the primary sources. Preliminary analysis of the patient information already accrued will help us refine the process, improve the methodology and review uniform data collection parameters. Future projects will include sampling of CKD patients in Queensland private practices, sampling from other specialities such as cardiology, gerontology and endocrinology, and sampling within primary care and general practice. This collaboration will allow us to develop primary and secondary preventive programmes targeting CKD.
Conclusions: CKD.QLD has evolved as a platform for research and practice improvement in CKD management in Australia. We have successfully established the Registry and are on target to recruit 5000 CKD patients by mid-2012. In the process, we have developed a systematic and comprehensive surveillance system for monitoring referred CKD in Australia. The Registry will also facilitate and inform data cross-linkage between the initial stages of CKD and treated ESKD as reported in ANZDATA. We have initiated major research studies of various aspects of CKD management. It is anticipated that the activities and products of CKD.QLD will be a benchmark for other states in Australia in understanding and managing CKD across the nation. We expect that the findings by demographic group generated through the CKD.QLD registry mechanism may be extrapolated to population groups in other states/territories and their accuracy evaluated. We anticipate that CKD.QLD will coalesce with other states and regions in Australia into a broader National CKD Registry and network. In the longer term, CKD.QLD will continue to grow international collaborations and contribute significantly towards a global improvement in CKD management. Funding: Funding for the establishment of CKD.QLD has been provided by Prof W.E.H.'s NHMRC Australian Fellowship, the Colonial Foundation (Melbourne, Australia), AMGEN Australia, and Queensland Health. Conflict of interest statement: None declared.
Background: Chronic kidney disease (CKD) is recognized as a major public health problem in Australia with significant mortality, morbidity and economic burden. However, there is no comprehensive surveillance programme to collect, collate and analyse data on CKD in a systematic way. Methods: We describe an initiative called CKD Queensland (CKD.QLD), which was established in 2009 to address this deficiency, and outline the processes and progress made to date. The foundation is a CKD Registry of all CKD patients attending public health renal services in Queensland, and patient recruitment and data capture have started. Results: We have established through early work of CKD.QLD that there are over 11,500 CKD patients attending public renal services in Queensland, and these are the target population for our registry. Progress so far includes conducting two CKD clinic site surveys, consenting over 3000 patients into the registry and initiation of baseline data analysis of the first 600 patients enrolled at the Royal Brisbane and Women's Hospital (RBWH) site. In addition, research studies in dietary intake and CKD outcomes and in models of care in CKD patient management are underway. Conclusions: Through the CKD Registry, we will define the distribution of CKD patients referred to renal practices in the public system in Queensland by region, remoteness, age, gender, ethnicity and socioeconomic status. We will define the clinical characteristics of those patients, and the CKD associations, stages, co-morbidities and current management. We will follow the course and outcomes in individuals over time, as well as group trends over time. Through our activities and outcomes, we are aiming to provide a nidus for other states in Australia to join in a national CKD registry and network.
Introduction: Chronic kidney disease (CKD) in Australia is a major public health problem. At least one biomarker of kidney injury has been found in one in seven Australian adults over the age of 25 years [1, 2]. CKD contributed to nearly 10% of deaths in 2006 and over 1.1 million hospitalizations in 2006–07 [3]; both figures are underestimates, driven by the incompleteness of CKD-related diagnostic coding. Excellent data on treated end-stage kidney disease (ESKD) patients are available from the Australia and New Zealand Dialysis and Transplant Registry (ANZDATA) [4]. Based on their reports, the incidence of treated ESKD in Australia is projected to increase by 80% between 2009 and 2020 [5]. In addition, the incidence and prevalence of ESKD are higher in the Aboriginal and Torres Strait Islander population, who represent almost 10% of new cases of treated ESKD despite being only 2.5% of the Australian population [6]. However, people on renal replacement therapy (RRT) represent only a small proportion of the CKD population. There are no existing systematic mechanisms to estimate the incidence, prevalence and distribution of severity of Stages 1–4 CKD in Australia. The best available source of population-based CKD rates is The Australian Diabetes, Obesity and Lifestyle (AusDiab) study [1, 2]. However, this study is outdated and may have overestimated CKD prevalence: it did not collect information on two occasions 3 months apart (as required to confirm chronicity), so its estimates may include some cases of spurious or temporary elevation of serum creatinine, or of acute kidney injury. Studies like the Bettering the Evaluation And Care of Health (BEACH) Program [7] continuously gather excellent data on the burden of CKD in primary practice in the Australian population through the recording of General Practice (GP) consultations. However, this study sacrifices meaningful detail to capture a wide array of summative health indices.
A unique study called the ‘45 and Up Study’ [8] is currently recruiting 250 000 men and women aged 45 and over from the general population in the state of New South Wales, Australia. This study is expected to provide a range of health-related information, including risk factors for CKD, through a self-administered questionnaire and the capture of additional information through linkage of health-related data sets. Notwithstanding these important initiatives, major gaps in our knowledge of CKD will remain. To address this deficiency, the Australian Government, through the Australian Institute of Health and Welfare (AIHW), established the National Centre for Monitoring Chronic Kidney Disease (NCM CKD) in 2007 [9]. The NCM CKD is located in the Cardiovascular, Diabetes and Kidney Unit at the AIHW, and the Medical Director of Kidney Health Australia (KHA) is the Chairman of its Advisory Committee. The Centre works with the AIHW and other organizations, using existing data sets to collect, aggregate, link and analyse data and to estimate future disease burdens. They have produced important documents on ESKD and on CKD, including its relationships with other conditions such as diabetes and cardiovascular disease [10–14]. However, the Agency's work is also impeded by the lack of population-based CKD data. The NCM CKD has therefore proposed a conceptual framework for building a surveillance system to monitor CKD with reference to Australia [9]. Consistent with the vision and strategy of the NCM CKD, we have established a programme for surveillance, including the generation of an original data set, practice improvement and research for CKD across the entire referred renal practice network in the public health system in Queensland, through an entity called CKD Queensland (CKD.QLD). It is unique within Australia and one of the few comprehensive CKD surveillance entities globally. Queensland has a current population of 4.6 million, or 20% of the Australian population.
The number of retired individuals aged over 65 who are at high risk for CKD is 493 525, and this figure is expected to increase over time [15]. Queensland's multiethnic mix includes large numbers of Chinese, other Asian and Indian people, Pacific Islanders, Maori and, more recently, Africans. It also includes 146 400 Indigenous Australians, the second largest number in any state [16]. Queensland had 3511 prevalent RRT patients at the end of 2009 [17], ∼20% of the prevalent patients in the ANZDATA Registry, consistent with Queensland's proportion of the total Australian population. Queensland has 80 nephrologists and 20 CKD nurses and nurse practitioners spread across 15 Health Service districts [18]. The demographic profile of Queensland is comparable to the rest of Australia (Table 1).

Table 1. 2006 Australian Census: Queensland versus Australia (a)

                                   Queensland   % of persons   Australia    % of persons
Total persons (residents)          3 904 532    —              19 855 288   —
  Males                            1 935 381    49.6           9 799 252    49.4
  Females                          1 969 151    50.4           10 056 036   50.6
  Indigenous persons               127 578      3.3            455 031      2.3
Age group
  0–4 years                        257 077      6.6            1 260 405    6.3
  5–14 years                       549 455      14.1           2 676 807    13.5
  15–24 years                      539 206      13.8           2 704 276    13.6
  25–54 years                      1 638 354    42.0           8 376 751    42.2
  55–64 years                      437 550      11.2           2 192 675    11.0
  65 years and over                482 891      12.4           2 644 374    13.3
  Median age of persons (years)    36           —              37           —
Language spoken at home
  English only                     3 371 684    86.4           15 581 333   78.5
  Mandarin                         24 447       0.6            220 601      1.1
  Italian                          22 032       0.6            316 890      1.6
  Cantonese                        19 627       0.5            244 553      1.2
  Vietnamese                       17 145       0.4            194 855      1.0
  German                           14 743       0.4            75 636       0.4
Income (persons ≥ 15 years), $ weekly
  Median individual income         476          —              466          —
  Median household income          1033         —              1027         —
  Median family income             1154         —              1171         —

(a) Adapted from the 2006 Australian Census, QuickStats-Queensland, http://www.censusdata.abs.gov.au/.
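The percentage columns in Table 1 can be re-derived directly from the raw counts. A minimal sketch, using only values taken from the table itself:

```python
# Re-derive two of the "% of persons" entries in Table 1 from the raw counts.
# All figures are copied from the 2006 Census table above.
qld_males, qld_total = 1_935_381, 3_904_532
aus_males, aus_total = 9_799_252, 19_855_288

qld_male_pct = round(qld_males / qld_total * 100, 1)  # table reports 49.6
aus_male_pct = round(aus_males / aus_total * 100, 1)  # table reports 49.4
```

The same one-line check applies to any row of the table, which is a quick way to catch transcription errors in reconstructed census data.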
The objectives of CKD.QLD include establishing a Registry and an ongoing surveillance system of all the CKD patients in renal practices in the public health system in the state of Queensland. We will define CKD distribution and characteristics, evaluate longitudinal CKD population trends and outcomes, identify treatment gaps, promote best practice and conduct clinical trials in collaboration with the Australian Kidney Trials Network (AKTN) [19]. We are also exploring models of CKD health service delivery and aim to develop CKD education and training streams. This will provide a platform for continuous practice improvement and research and build knowledge and capacity in the care of CKD patients (Figure 1).

Fig. 1. CKD.QLD research operating platforms.
Keywords: chronic kidney disease; registry; surveillance
MeSH terms: Adolescent; Adult; Aged; Child; Child, Preschool; Clinical Trials as Topic; Female; Glomerular Filtration Rate; Health Surveys; Humans; Infant; Infant, Newborn; Longitudinal Studies; Male; Middle Aged; Patient Selection; Population Surveillance; Prevalence; Prognosis; Quality of Life; Queensland; Registries; Renal Insufficiency, Chronic; Research Design; Young Adult
Analysis of Risk Factors for Myringosclerosis Formation after Ventilation Tube Insertion.
PMID: 32242343
BACKGROUND: This study examined possible risk factors for myringosclerosis formation after ventilation tube insertion (VTI).
METHODS: A retrospective study was performed in a single tertiary referral center. A total of 582 patients who underwent VTI were enrolled in this study. Patients were divided into two groups based on the presence or absence of myringosclerosis: MS+ and MS-. Characteristics of patients were collected through medical chart review; these included age, gender, nature and duration of effusion, type of ventilation tube (VT), duration and frequency of VTI, incidence of post-VTI infection, incidence of intraoperative bleeding, and presence of postoperative perforation. Incidences of risk factors for myringosclerosis and the severity of myringosclerosis in association with possible risk factors were analyzed.
RESULTS: Myringosclerosis developed in 168 of 582 patients (28.9%) after VTI. Patients in the MS+ group had an older mean age than those in the MS- group. The rates of myringosclerosis were higher in patients with older age, serous otitis media, type 2 VT, post-VTI perforation, and frequent VTI. However, there were no differences in occurrence of myringosclerosis based on gender, duration of effusion, duration of VT placement, incidence of post-VTI infection, or incidence of intraoperative bleeding. The severity of myringosclerosis was associated with the duration of effusion and frequency of VTI.
CONCLUSION: Older age, serous effusion, type 2 VT, presence of post-VTI perforation, and frequent VTI may be risk factors for myringosclerosis after VTI; the severity of myringosclerosis may vary based on the duration of effusion and frequency of VTI.
MeSH terms: Adolescent; Adult; Humans; Incidence; Middle Aged; Middle Ear Ventilation; Myringosclerosis; Postoperative Complications; Retrospective Studies; Risk Factors; Young Adult
PMCID: 7131898
INTRODUCTION
Tympanosclerosis is associated with sclerotic changes in the tympanic membrane (TM), middle ear cavity, ossicular chain, or (rarely) mastoid cavity; it is characterized by the deposition of degenerated hyaline and calcification of collagen fibrils in the submucosal layer. Tympanosclerosis limited to the TM is known as myringosclerosis [1], and can be defined as a localized tissue reaction involving hyalinization and calcification of the middle fibrous layer of the TM [2]. Myringosclerosis is generally asymptomatic, but may cause conductive or mixed-type hearing loss if it affects the movement of the TM, ossicular chain, or windows [3].

Myringosclerosis is the most common complication after ventilation tube insertion (VTI) [4, 5], which has been used for the treatment of otitis media with effusion (OME) [5, 6]. The etiology and pathogenesis of this condition have not been fully elucidated; however, VTI, middle ear infection, trauma, genetic tendency, and increased formation of oxygen-derived free radicals have been proposed as risk factors for myringosclerosis [7]. Sclerotic changes caused by myringosclerosis could render the TM stiff and inflexible, with fixation of the ossicular chain [8]. Therefore, a small myringosclerosis may not affect hearing, while a more extensive plaque may lead to the onset of conductive hearing loss [9]. Hearing loss caused by OME itself or by myringosclerosis in childhood could have a detrimental effect on the development of auditory temporal processing and on language development [10].

Although VTI is the most common procedure used for the treatment of OME and is presumed to be associated with myringosclerosis, there have been only a few studies of potential risk factors for myringosclerosis after VTI [7, 11]. This study was performed to investigate possible risk factors for myringosclerosis formation after VTI and to determine factors affecting the severity of myringosclerosis after VTI.
METHODS
Patients
The study population consisted of 582 patients who underwent VTI for OME between January 2011 and March 2016 at Chungnam National University Hospital (Daejeon, Korea). Unilateral VTI was performed in 364 patients, while bilateral VTI was performed in 218 patients. All patients had undergone monthly follow-up for at least 6 months postoperatively. Patients with craniofacial anomalies, such as cleft lip and palate or Treacher-Collins syndrome, as well as those with insufficient medical records or otoscopic images, were excluded from the study. Patients with myringosclerosis before VTI were also excluded. In patients with bilateral VTI, only the left ear was included in the analysis of risk factors, in order to reduce the potential for bias originating from differences in myringosclerosis formation between ears in each patient.

Data collection
Patient data and clinical information were retrospectively collected through medical records review.
Patient data collected included the following variables: age; gender; VT type; duration of effusion; characteristics of effusion; duration of VT placement; frequency of VTI; and incidences of intraoperative bleeding, post-VTI infection, and post-VTI perforation; these data were collected from operation records and otoendoscopic imaging. The point of myringosclerosis development was regarded as the date of initial detection after VTI.

Study protocol
Myringosclerosis was divided into five grades according to severity: grade 0, no myringosclerosis; grade 1, myringosclerosis involving a single quadrant of the TM; grade 2, myringosclerosis involving two quadrants of the TM, regardless of the particular quadrants involved; grade 3, myringosclerosis involving three quadrants of the TM; grade 4, myringosclerosis involving all quadrants of the TM (Fig. 1).

A small myringotomy was made radially on the anterior-inferior quadrant of the TM. Effusion in the middle ear was aspirated and a VT was placed into the TM. The characteristics of the effusion and the presence of intraoperative bleeding were determined intraoperatively. For all patients, silicone VTs (Paparella Type; Medtronic Xomed, Minneapolis, MN, USA) were used; there were two types of VT: type 1 had an internal diameter of 1.14 mm and type 2 had an internal diameter of 1.52 mm.

The duration of effusion was defined as the time from initial detection of middle ear effusion to the time of surgery; the duration of VT placement was defined as the time from VTI to extrusion of the VT from the TM, whether myringosclerosis formed before or after tube extrusion. A perforation of the TM that persisted for more than 3 months after removal of the tube was regarded as a post-VTI perforation.

Statistical analysis
Statistical analysis was performed using SPSS for Windows, version 22 (SPSS Inc., Chicago, IL, USA). Independent t-tests were used for comparisons of age, frequency of VTI, duration of effusion, and duration of VT placement between the MS+ and MS− groups.
The chi-squared test was used for comparisons between the two groups with regard to gender, characteristics of effusion, tube type, post-VTI infection, intraoperative bleeding, and post-VTI perforation. In addition, one-way analysis of variance was performed to analyze the associations between severity of myringosclerosis and potential risk factors, including the frequency of VTI, duration of OME, and duration of VT placement. Post hoc analysis was performed using Tukey's test to assess potential factors affecting the severity of myringosclerosis development. In all analyses, P < 0.05 was considered to indicate statistical significance.

Ethics statement
The study protocol was approved by the Institutional Review Board (IRB) of Chungnam National University Hospital (IRB No. 2016-09-021), and the study was performed according to the approved protocol.
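The analyses described in this section (independent t-tests, chi-squared tests, and one-way ANOVA with Tukey post hoc testing) were run in SPSS; the same pipeline can be sketched in Python with SciPy. All sample values below are illustrative placeholders, not the study data:

```python
# Sketch of the statistical tests described above, using SciPy instead of SPSS.
# Every sample below is invented for illustration; none are study data.
from scipy import stats

# Independent t-test: a continuous variable (e.g. age) in MS+ vs MS- groups
ms_pos_age = [42, 55, 38, 61, 29, 47, 58, 50]
ms_neg_age = [31, 24, 40, 36, 28, 33, 22, 35]
t_stat, p_ttest = stats.ttest_ind(ms_pos_age, ms_neg_age)

# Chi-squared test: a categorical factor vs myringosclerosis (hypothetical 2x2 counts)
contingency = [[90, 220],   # factor present: MS+, MS-
               [78, 194]]   # factor absent:  MS+, MS-
chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)

# One-way ANOVA: e.g. duration of effusion (months) across severity-grade groups
grade_0 = [3, 4, 5, 4, 3]
grade_1_2 = [5, 6, 7, 6, 5]
grade_3_4 = [9, 11, 10, 12, 10]
f_stat, p_anova = stats.f_oneway(grade_0, grade_1_2, grade_3_4)
```

When the ANOVA is significant, pairwise Tukey comparisons analogous to the paper's post hoc step could follow, e.g. with `pairwise_tukeyhsd` from `statsmodels.stats.multicomp`.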
RESULTS
A total of 582 patients underwent VTI (men:women = 308:274). The patients ranged in age from 0 to 88 years, with a mean ± standard deviation (SD) age of 35.6 ± 27.8 years. Of these 582 patients, 168 (28.9%) had myringosclerosis (MS+ group) and 414 (71.1%) did not (MS− group). The mean ± SD age of the MS+ group was 39.84 ± 26.17 years, while that of the MS− group was 33.84 ± 28.29 years (P < 0.05) (Table 1). VTI was performed most commonly in patients 0–9 years old (190 patients, 32.65%), followed by patients 50–59 years old (90 patients, 15.46%); distinct myringosclerosis formation rates were observed in each age group (Fig. 2). Patients in this study who were < 6 years old had a lower rate of myringosclerosis formation than patients ≥ 6 years old (P < 0.05) (Fig. 3).

Table 1 legend: Values are expressed as means ± standard deviations or numbers. MS+ = with myringosclerosis, MS− = without myringosclerosis, VTI = ventilation tube insertion, VT = ventilation tube. (a) Statistically significant risk factors, P < 0.05.

Myringosclerosis was observed in 87 of 308 men (28.25%) and in 81 of 274 women (29.56%); thus, there was no significant gender-related difference in the incidence of myringosclerosis. With regard to possible risk factors, serous effusion, type 2 VT, post-VTI perforation, and frequent VTIs were associated with the formation of myringosclerosis after VTI (P < 0.05). Post-VTI infection, intraoperative bleeding, duration of effusion, and duration of VT placement were not significantly related to myringosclerosis formation (Table 1).
A logistic regression analysis was performed to examine the associations of the explanatory variables serous effusion, type 2 VT, post-VTI perforation, and frequent VTIs, further confirming the significance of each variable in the formation of myringosclerosis (P < 0.01, P = 0.028, P = 0.002, and P = 0.002, respectively). Among the possible risk factors for myringosclerosis formation, the duration of effusion and frequency of VTI were related to the severity of myringosclerosis. The duration of effusion was longer in the grade 3 and 4 group than in the grade 0 group, while the frequency of VTI was higher in the grade 1 and 2 group than in the grade 0 group (P < 0.05) (Fig. 4).

Fig. 4 legend: VT = ventilation tube, VTI = ventilation tube insertion. *P < 0.05.
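As a quick plausibility check, the gender comparison reported above can be reproduced from the published counts (87 of 308 men and 81 of 274 women with myringosclerosis). This is a sketch assuming a standard chi-squared test on the 2×2 table; the exact SPSS output in the paper may differ slightly:

```python
# 2x2 contingency table built from the counts reported in the Results:
# rows = gender, columns = (myringosclerosis, no myringosclerosis)
from scipy.stats import chi2_contingency

men = (87, 308 - 87)      # 87/308 men (28.25%) developed myringosclerosis
women = (81, 274 - 81)    # 81/274 women (29.56%)
chi2, p, dof, expected = chi2_contingency([men, women])

# Overall incidence reported in the paper: 168 of 582 patients (28.9%)
incidence = (87 + 81) / (308 + 274)
```

The resulting P value is well above 0.05, consistent with the reported absence of a gender-related difference in myringosclerosis incidence.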
null
null
[ "Patients", "Data collection", "Study protocol", "Statistical analysis", "Ethics statement" ]
[ "The study population consisted of 582 patients who underwent VTI for OME between January 2011 and March 2016 at Chungnam National University Hospital (Daejeon, Korea). Unilateral VTI was performed in 364 patients, while bilateral VTI was performed in 218 patients. All patients had undergone monthly follow-up for at least 6 months postoperatively. Patients with craniofacial anomalies, such as cleft lip and palate or Treacher-Collins syndrome, as well as those with insufficient medical records or otoscopic images, were excluded from the study. Patients with myringosclerosis before VTI were also excluded. In patients with bilateral VTI, only the left ear was included in the analysis of risk factors, in order to reduce the potential for bias originating from differences in myringosclerosis formation between ears in each patient.", "Patient data and clinical information were retrospectively collected through medical records review. Patient data collected included the following variables: age; gender; VT type; duration of effusion; characteristics of effusion; duration of VT placement; frequency of VTI; and incidences of intraoperative bleeding, post-VTI infection, and post-VTI perforation; these data were collected from operation records and otoendoscopic imaging. The point of myringosclerosis development was regarded as the date of initial detection after VTI.", "Myringosclerosis was divided into five grades according to severity: grade 0, no myringosclerosis; grade 1, myringosclerosis involving a single quadrant of the TM; grade 2, myringosclerosis involving two quadrants of the TM, regardless of the particular quadrants involved; grade 3, myringosclerosis involving three quadrants of the TM; grade 4, myringosclerosis involving all quadrants of the TM (Fig. 1).\nA small myringotomy was made radially on the anterior-inferior quadrant of the TM. Effusion in the middle ear was aspirated and a VT was placed into the TM. 
The characteristics of the effusion and presence of intraoperative bleeding were determined intraoperatively. For all patients, silicone VTs (Paparella Type; Medtronic Xomed, Minneapolis, MN, USA) were used; there were two types of VTs: type 1 had an internal diameter of 1.14 mm and type 2 had an internal diameter of 1.52 mm. The duration of effusion was defined as the time from initial detection of middle ear effusion to the time of surgery; duration of VT placement was defined from VTI to the extrusion of the VT from the TM whether the myringosclerosis was formed before or after tube extrusion. If the perforation of the TM continued for more than 3 months after removal of the tube, it was regarded as post-VTI perforation.", "Statistical analysis was performed using SPSS for Windows, version 22 (SPSS Inc., Chicago, IL, USA). Independent t-tests were used for comparisons of age, frequency of VTI, duration of effusion, and duration of VT placement between the MS+ and MS− groups. The chi-squared test was used for comparisons between the two groups with regard to gender, characteristics of effusion, tube type, post-VTI infection, intraoperative bleeding, and post-VTI perforation. In addition, 1-way analysis of variance was performed to analyze the associations between severity of myringosclerosis and potential risk factors, including the frequency of VTI, duration of OME, and duration of VT placement. Post hoc analysis was performed using Tukey's test to assess potential factors affecting the severity of myringosclerosis development. In all analyses, P < 0.05 was considered to indicate statistical significance.", "The study protocol was approved by the Institutional Review Board (IRB) of Chungnam National University Hospital (IRB No. 2016-09-021) and the study was performed according to the approved protocol." ]
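The severity grading in the study protocol reduces to counting how many of the four TM quadrants show myringosclerosis, regardless of which particular quadrants are involved. A minimal sketch (the quadrant labels are illustrative; the source names only the anterior-inferior and posterior-inferior quadrants explicitly):

```python
# Severity grade per the study protocol: grade 0 = no myringosclerosis,
# grade 1 = one quadrant involved, ..., grade 4 = all four quadrants.

TM_QUADRANTS = {"anterior-superior", "anterior-inferior",
                "posterior-superior", "posterior-inferior"}

def myringosclerosis_grade(involved_quadrants):
    """Return the 0-4 grade from the set of involved TM quadrants.

    `involved_quadrants` is any subset of TM_QUADRANTS; labels outside
    the four quadrants are ignored.
    """
    involved = set(involved_quadrants) & TM_QUADRANTS
    return len(involved)

print(myringosclerosis_grade(set()))                                       # 0
print(myringosclerosis_grade({"anterior-inferior", "posterior-inferior"}))  # 2
```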
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Patients", "Data collection", "Study protocol", "Statistical analysis", "Ethics statement", "RESULTS", "DISCUSSION" ]
[ "Tympanosclerosis is associated with sclerotic changes in the tympanic membrane (TM), middle ear cavity, ossicular chain, or (rarely) mastoid cavity; it is characterized by the deposition of degenerated hyaline and calcification of collagen fibrils in the submucosal layer. Tympanosclerosis limited to the TM is known as myringosclerosis,1 and can be defined as a localized tissue reaction involving hyalinization and calcification of the middle fibrous layer of the TM.2 Myringosclerosis is generally asymptomatic, but may cause conductive or mixed type hearing loss if it affects the movement of the TM, ossicular chain, or windows.3\nMyringosclerosis is the most common complication after ventilation tube insertion (VTI),45 which has been used for the treatment of otitis media with effusion (OME).56 The etiology and pathogenesis of this condition have not been fully elucidated; however, VTI, middle ear infection, trauma, genetic tendency, and increased formation of oxygen-derived free radicals have been proposed as risk factors for myringosclerosis.7 Sclerotic changes caused by myringosclerosis could render the TM stiff and inflexible, with fixation of the ossicular chain.8 Therefore, a small myringosclerosis may not affect hearing, while a more extensive plaque may lead to the onset of conductive hearing loss.9 Hearing loss caused by OME itself or myringosclerosis in childhood could result in a detrimental effect on the development of auditory temporal processing, and language development.10\nAlthough VTI is the most common procedure used for the treatment of OME and is presumed to be associated with myringosclerosis, there have only been a few studies of potential risk factors for myringosclerosis after VTI.711 This study was performed to investigate possible risk factors for myringosclerosis formation after VTI and to determine factors affecting the severity of myringosclerosis after VTI.", " Patients The study population consisted of 582 patients who underwent VTI 
for OME between January 2011 and March 2016 at Chungnam National University Hospital (Daejeon, Korea). Unilateral VTI was performed in 364 patients, while bilateral VTI was performed in 218 patients. All patients had undergone monthly follow-up for at least 6 months postoperatively. Patients with craniofacial anomalies, such as cleft lip and palate or Treacher-Collins syndrome, as well as those with insufficient medical records or otoscopic images, were excluded from the study. Patients with myringosclerosis before VTI were also excluded. In patients with bilateral VTI, only the left ear was included in the analysis of risk factors, in order to reduce the potential for bias originating from differences in myringosclerosis formation between ears in each patient.\n Data collection Patient data and clinical information were retrospectively collected through medical records review. 
Patient data collected included the following variables: age; gender; VT type; duration of effusion; characteristics of effusion; duration of VT placement; frequency of VTI; and incidences of intraoperative bleeding, post-VTI infection, and post-VTI perforation; these data were collected from operation records and otoendoscopic imaging. The point of myringosclerosis development was regarded as the date of initial detection after VTI.\n Study protocol Myringosclerosis was divided into five grades according to severity: grade 0, no myringosclerosis; grade 1, myringosclerosis involving a single quadrant of the TM; grade 2, myringosclerosis involving two quadrants of the TM, regardless of the particular quadrants involved; grade 3, myringosclerosis involving three quadrants of the TM; grade 4, myringosclerosis involving all quadrants of the TM (Fig. 1).\nA small myringotomy was made radially on the anterior-inferior quadrant of the TM. Effusion in the middle ear was aspirated and a VT was placed into the TM. The characteristics of the effusion and presence of intraoperative bleeding were determined intraoperatively. For all patients, silicone VTs (Paparella Type; Medtronic Xomed, Minneapolis, MN, USA) were used; there were two types of VTs: type 1 had an internal diameter of 1.14 mm and type 2 had an internal diameter of 1.52 mm. 
The duration of effusion was defined as the time from initial detection of middle ear effusion to the time of surgery; duration of VT placement was defined from VTI to the extrusion of the VT from the TM, regardless of whether the myringosclerosis formed before or after tube extrusion. If the perforation of the TM continued for more than 3 months after removal of the tube, it was regarded as post-VTI perforation.\n Statistical analysis Statistical analysis was performed using SPSS for Windows, version 22 (SPSS Inc., Chicago, IL, USA). Independent t-tests were used for comparisons of age, frequency of VTI, duration of effusion, and duration of VT placement between the MS+ and MS− groups. 
The chi-squared test was used for comparisons between the two groups with regard to gender, characteristics of effusion, tube type, post-VTI infection, intraoperative bleeding, and post-VTI perforation. In addition, 1-way analysis of variance was performed to analyze the associations between severity of myringosclerosis and potential risk factors, including the frequency of VTI, duration of OME, and duration of VT placement. Post hoc analysis was performed using Tukey's test to assess potential factors affecting the severity of myringosclerosis development. In all analyses, P < 0.05 was considered to indicate statistical significance.\n Ethics statement The study protocol was approved by the Institutional Review Board (IRB) of Chungnam National University Hospital (IRB No. 2016-09-021) and the study was performed according to the approved protocol.", "The study population consisted of 582 patients who underwent VTI for OME between January 2011 and March 2016 at Chungnam National University Hospital (Daejeon, Korea). Unilateral VTI was performed in 364 patients, while bilateral VTI was performed in 218 patients. All patients had undergone monthly follow-up for at least 6 months postoperatively. Patients with craniofacial anomalies, such as cleft lip and palate or Treacher-Collins syndrome, as well as those with insufficient medical records or otoscopic images, were excluded from the study. Patients with myringosclerosis before VTI were also excluded. In patients with bilateral VTI, only the left ear was included in the analysis of risk factors, in order to reduce the potential for bias originating from differences in myringosclerosis formation between ears in each patient.", "Patient data and clinical information were retrospectively collected through medical records review. Patient data collected included the following variables: age; gender; VT type; duration of effusion; characteristics of effusion; duration of VT placement; frequency of VTI; and incidences of intraoperative bleeding, post-VTI infection, and post-VTI perforation; these data were collected from operation records and otoendoscopic imaging. The point of myringosclerosis development was regarded as the date of initial detection after VTI.", "Myringosclerosis was divided into five grades according to severity: grade 0, no myringosclerosis; grade 1, myringosclerosis involving a single quadrant of the TM; grade 2, myringosclerosis involving two quadrants of the TM, regardless of the particular quadrants involved; grade 3, myringosclerosis involving three quadrants of the TM; grade 4, myringosclerosis involving all quadrants of the TM (Fig. 1).\nA small myringotomy was made radially on the anterior-inferior quadrant of the TM. 
Effusion in the middle ear was aspirated and a VT was placed into the TM. The characteristics of the effusion and presence of intraoperative bleeding were determined intraoperatively. For all patients, silicone VTs (Paparella Type; Medtronic Xomed, Minneapolis, MN, USA) were used; there were two types of VTs: type 1 had an internal diameter of 1.14 mm and type 2 had an internal diameter of 1.52 mm. The duration of effusion was defined as the time from initial detection of middle ear effusion to the time of surgery; duration of VT placement was defined from VTI to the extrusion of the VT from the TM whether the myringosclerosis was formed before or after tube extrusion. If the perforation of the TM continued for more than 3 months after removal of the tube, it was regarded as post-VTI perforation.", "Statistical analysis was performed using SPSS for Windows, version 22 (SPSS Inc., Chicago, IL, USA). Independent t-tests were used for comparisons of age, frequency of VTI, duration of effusion, and duration of VT placement between the MS+ and MS− groups. The chi-squared test was used for comparisons between the two groups with regard to gender, characteristics of effusion, tube type, post-VTI infection, intraoperative bleeding, and post-VTI perforation. In addition, 1-way analysis of variance was performed to analyze the associations between severity of myringosclerosis and potential risk factors, including the frequency of VTI, duration of OME, and duration of VT placement. Post hoc analysis was performed using Tukey's test to assess potential factors affecting the severity of myringosclerosis development. In all analyses, P < 0.05 was considered to indicate statistical significance.", "The study protocol was approved by the Institutional Review Board (IRB) of Chungnam National University Hospital (IRB No. 2016-09-021) and the study was performed according to the approved protocol.", "A total of 582 patients underwent VTI (men:women = 308:274). 
The patients ranged in age from 0 to 88 years, with a mean ± standard deviation (SD) age of 35.6 ± 27.8 years. Of these 582 patients, 168 (28.9%) had myringosclerosis (MS+ group) and 414 (71.1%) did not have myringosclerosis (MS− group). The mean ± SD age of the MS+ group was 39.84 ± 26.17 years, while that of the MS− group was 33.84 ± 28.29 years (P < 0.05) (Table 1). VTI was performed most commonly in patients 0–9 years old (190 patients, 32.65%), followed by patients 50–59 years old (90 patients, 15.46%); myringosclerosis formation rates differed across age groups (Fig. 2). Patients in this study who were < 6 years old had a lower rate of myringosclerosis formation than patients ≥ 6 years old (P < 0.05) (Fig. 3).\nValues are expressed as mean ± standard deviation or number.\nMS+ = with myringosclerosis, MS− = without myringosclerosis, VTI = ventilation tube insertion, VT = ventilation tube.\naStatistically significant risk factors, P < 0.05.\nMS+ = with myringosclerosis, MS− = without myringosclerosis.\nMyringosclerosis was observed in 87 of 308 male patients (28.25%) and in 81 of 274 female patients (29.56%); thus, there was no significant gender-related difference in the incidence of myringosclerosis. With regard to possible risk factors, serous effusion, type 2 VT, post-VTI perforation, and frequent VTIs were associated with the formation of myringosclerosis after VTI (P < 0.05). Post-VTI infection, intraoperative bleeding, duration of effusion, and duration of VT placement were not significantly related to myringosclerosis formation (Table 1). 
A logistic regression analysis was performed to examine the associations of the explanatory variables serous effusion, type 2 VT, post-VTI perforation, and frequent VTIs, further confirming the significance of each variable in the formation of myringosclerosis (P < 0.01, P = 0.028, P = 0.002, and P = 0.002, respectively).\nAmong the possible risk factors for myringosclerosis formation, the duration of effusion and frequency of VTI were related to the severity of myringosclerosis. The duration of effusion was longer in the grade 3 and 4 group than in the grade 0 group, while the frequency of VTI was higher in the grade 1 and 2 group than in the grade 0 group (P < 0.05) (Fig. 4).\nVT = ventilation tube, VTI = ventilation tube insertion.\n*P < 0.05.", "In the present study, the rate of myringosclerosis formation after VTI was 28.9%, which was consistent with the findings of previous reports (32% and 35%).1112 VTI related to OME was most commonly performed in patients 0–9 years old, followed by patients 50–59 years old; these increased incidences may be related to dysfunction of the immature Eustachian tube and OME induced by aging of the Eustachian tube, respectively. These findings were supported by the results of previous studies regarding changes in the Eustachian tube lumen and cartilage calcification with age.1314 However, the incidence of myringosclerosis after VTI was highest among patients 20–29 years (68.2%) and lowest among patients 0–9 years (17.9%), although it was difficult to identify a clear statistical tendency due to differences in the numbers of patients in each group. 
According to a prior study of 126 school children who were 5–12 years old, the incidence of OME was much lower in children aged ≥ 6 years.15 The opening function of the Eustachian tube has been reported to improve significantly with age, and improvement is most common in pre-school age patients (3–7 years old).16 Thus, maturation of the structure (length) and function (active opening mechanism) of the Eustachian tube and maturation of the immune system by the age of 6 years could be associated with the observed reduction in the incidence of otitis media.17 When patients in the present study were stratified based on the age threshold of 6 years, the incidence of myringosclerosis formation after VTI was found to be higher in the group ≥ 6 years old than in the group < 6 years old. Our findings suggested that the degree of Eustachian tube maturation may be related to the formation of myringosclerosis and the occurrence of OME, although further studies are needed to confirm this hypothesis.\nVarious classification methods have been proposed based on histology or morphology of myringosclerosis, and most are related to the process by which myringosclerosis develops.3 Selcuk et al.3 classified myringosclerosis into three types based on the maturation of tympanosclerosis plaque: type I is characterized by fibroblasts, calcium crystals, and loose connective tissue under microscopic examination; these “pearl” tissues can be easily removed. Type II is characterized by calcification foci and large bundles of collagen, while type III is characterized by diffuse calcification and chondroblast-like cells. These histological findings show the sequence of myringosclerosis maturation. Akyildiz18 divided tympanosclerotic plaque into two types based on surgical detachment features: type 1 plaque is soft and easily removable with some calcium accumulation; in contrast, type 2 plaque is hard, white, fragile, and cannot be easily removed from peripheral tissues. 
However, these previously proposed classification systems require biopsy, which causes difficulty in preoperative evaluation of myringosclerosis. In the present study, we classified myringosclerosis based on severity in the TM.\nThe pathogenesis of myringosclerosis has not been fully elucidated. Mattsson et al.19 reported that free oxygen radicals generated by hyperoxic conditions in the middle ear resulted in the accumulation of sclerotic deposits. Karlidağ et al.20 reported that lower concentrations of antioxidants may increase the damage created by free oxygen radicals, providing an environment more favorable for the formation of myringosclerosis. Healed inflammation or scar tissue after recurrent inflammation have also been suggested as possible factors underlying the pathogenesis of myringosclerosis.821 There is no established treatment for myringosclerosis; however, some studies have demonstrated that the formation of myringosclerosis could be reduced by using antioxidants to reduce free radicals, as well as by applying topical vitamin E, N-acetylcysteine, sodium thiosulfate, and ciprofloxacin.2519222324 Erdurak et al.25 reported that VTI with radiofrequency myringotomy, instead of incisional myringotomy with a cold knife, could reduce the formation of myringosclerosis.\nChronic inflammation of the middle ear and trauma to the TM are regarded as the most important factors related to the formation of myringosclerosis; however, other factors have not been investigated in detail.9 Yaman et al.7 reported that age was not a significant factor associated with the formation of myringosclerosis, although it was identified as a risk factor for the formation of myringosclerosis in this study. 
Although results vary among studies, the majority of studies (including the present study) have indicated that gender is not a risk factor for myringosclerosis formation.71126 The duration of VT placement was considered to be an important factor in our study and in previous studies.711\nOther risk factors have been analyzed in previous studies. Patients with a greater number of otitis episodes in the past year and those with a greater number of otorrhea episodes in the past year had higher rates of myringosclerosis after VTI.11 In addition, the anterior-inferior quadrant was identified as the highest risk quadrant for myringosclerosis formation, compared to the posterior-inferior quadrant, as a site of myringotomy.11 Yaman et al.7 reported that myringosclerosis was more likely to develop after VTI than after myringotomy alone. Koc and Uneri27 reported a higher rate of tympanosclerosis in patients with atherosclerosis, in comparison to the normal population; this result suggested that a genetic predisposition may be an additional risk factor for myringosclerosis formation. In the present study, the severity of myringosclerosis formation after VTI was related to the duration of effusion and frequency of VTI. Longer duration of effusion could result in a longer exposure time to middle ear inflammation, which may lead to more severe myringosclerosis formation.\nThere were some limitations in our study. Due to the retrospective nature of the study and the range of factors related to the formation of myringosclerosis, confounding factors may not have been effectively controlled, thereby reducing the impact of the findings.\nThis study identified the following risk factors for the formation of myringosclerosis after VTI: older age at onset, nature of middle ear effusion, post-VTI perforation, type 2 VT, and frequent VTI. A longer duration of effusion and more frequent VTI were associated with increased myringosclerosis severity. 
These risk factors for myringosclerosis formation may be useful in prediction of myringosclerosis before VTI, as well as in prevention after the procedure." ]
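The severity analysis described above (1-way ANOVA relating myringosclerosis grade groups to duration of effusion and frequency of VTI, followed by Tukey's post hoc test) can be sketched as follows. The F statistic is computed by hand for clarity; the group values are hypothetical illustration data, not the study's data, and in practice scipy.stats.f_oneway plus a Tukey procedure would be used.

```python
# One-way ANOVA F statistic across severity groups, as in the study's
# analysis of duration of effusion by myringosclerosis grade.

def f_oneway_statistic(*groups):
    """F statistic for a one-way ANOVA over the given groups."""
    k = len(groups)                     # number of groups
    n = sum(len(g) for g in groups)     # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical effusion durations (months) per grade group -- NOT study data.
grade_0 = [1, 2, 2, 3]
grade_1_2 = [2, 3, 3, 4]
grade_3_4 = [6, 7, 8, 7]
f = f_oneway_statistic(grade_0, grade_1_2, grade_3_4)
# F(2, 9) critical value at alpha = 0.05 is about 4.26
print(f > 4.26)  # True
```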
[ "intro", "methods", null, null, null, null, null, "results", "discussion" ]
[ "Otitis Media", "Myringosclerosis", "Ventilation Tube", "Middle Ear", "Risk Factor" ]
INTRODUCTION: Tympanosclerosis is associated with sclerotic changes in the tympanic membrane (TM), middle ear cavity, ossicular chain, or (rarely) mastoid cavity; it is characterized by the deposition of degenerated hyaline and calcification of collagen fibrils in the submucosal layer. Tympanosclerosis limited to the TM is known as myringosclerosis,1 and can be defined as a localized tissue reaction involving hyalinization and calcification of the middle fibrous layer of the TM.2 Myringosclerosis is generally asymptomatic, but may cause conductive or mixed type hearing loss if it affects the movement of the TM, ossicular chain, or windows.3 Myringosclerosis is the most common complication after ventilation tube insertion (VTI),45 which has been used for the treatment of otitis media with effusion (OME).56 The etiology and pathogenesis of this condition have not been fully elucidated; however, VTI, middle ear infection, trauma, genetic tendency, and increased formation of oxygen-derived free radicals have been proposed as risk factors for myringosclerosis.7 Sclerotic changes caused by myringosclerosis could render the TM stiff and inflexible, with fixation of the ossicular chain.8 Therefore, a small myringosclerosis may not affect hearing, while a more extensive plaque may lead to the onset of conductive hearing loss.9 Hearing loss caused by OME itself or myringosclerosis in childhood could result in a detrimental effect on the development of auditory temporal processing, and language development.10 Although VTI is the most common procedure used for the treatment of OME and is presumed to be associated with myringosclerosis, there have only been a few studies of potential risk factors for myringosclerosis after VTI.711 This study was performed to investigate possible risk factors for myringosclerosis formation after VTI and to determine factors affecting the severity of myringosclerosis after VTI. 
METHODS: Patients The study population consisted of 582 patients who underwent VTI for OME between January 2011 and March 2016 at Chungnam National University Hospital (Daejeon, Korea). Unilateral VTI was performed in 364 patients, while bilateral VTI was performed in 218 patients. All patients had undergone monthly follow-up for at least 6 months postoperatively. Patients with craniofacial anomalies, such as cleft lip and palate or Treacher-Collins syndrome, as well as those with insufficient medical records or otoscopic images, were excluded from the study. Patients with myringosclerosis before VTI were also excluded. In patients with bilateral VTI, only the left ear was included in the analysis of risk factors, in order to reduce the potential for bias originating from differences in myringosclerosis formation between ears in each patient. Data collection Patient data and clinical information were retrospectively collected through medical records review. 
Patient data collected included the following variables: age; gender; VT type; duration of effusion; characteristics of effusion; duration of VT placement; frequency of VTI; and incidences of intraoperative bleeding, post-VTI infection, and post-VTI perforation; these data were collected from operation records and otoendoscopic imaging. The point of myringosclerosis development was regarded as the date of initial detection after VTI. Study protocol Myringosclerosis was divided into five grades according to severity: grade 0, no myringosclerosis; grade 1, myringosclerosis involving a single quadrant of the TM; grade 2, myringosclerosis involving two quadrants of the TM, regardless of the particular quadrants involved; grade 3, myringosclerosis involving three quadrants of the TM; grade 4, myringosclerosis involving all quadrants of the TM (Fig. 1). A small myringotomy was made radially on the anterior-inferior quadrant of the TM. Effusion in the middle ear was aspirated and a VT was placed into the TM. The characteristics of the effusion and presence of intraoperative bleeding were determined intraoperatively. For all patients, silicone VTs (Paparella Type; Medtronic Xomed, Minneapolis, MN, USA) were used; there were two types of VTs: type 1 had an internal diameter of 1.14 mm and type 2 had an internal diameter of 1.52 mm. 
The duration of effusion was defined as the time from initial detection of middle ear effusion to the time of surgery; the duration of VT placement was defined as the time from VTI to extrusion of the VT from the TM, regardless of whether myringosclerosis formed before or after tube extrusion. If a perforation of the TM persisted for more than 3 months after removal of the tube, it was regarded as post-VTI perforation. Statistical analysis: Statistical analysis was performed using SPSS for Windows, version 22 (SPSS Inc., Chicago, IL, USA). Independent t-tests were used for comparisons of age, frequency of VTI, duration of effusion, and duration of VT placement between the MS+ and MS− groups. 
The chi-squared test was used for comparisons between the two groups with regard to gender, characteristics of effusion, tube type, post-VTI infection, intraoperative bleeding, and post-VTI perforation. In addition, one-way analysis of variance was performed to analyze the associations between the severity of myringosclerosis and potential risk factors, including the frequency of VTI, duration of OME, and duration of VT placement. Post hoc analysis was performed using Tukey's test to assess potential factors affecting the severity of myringosclerosis development. In all analyses, P < 0.05 was considered to indicate statistical significance. Ethics statement: The study protocol was approved by the Institutional Review Board (IRB) of Chungnam National University Hospital (IRB No. 2016-09-021) and the study was performed according to the approved protocol. 
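The independent t-test used for the continuous variables (age, VTI frequency, durations) can be sketched in pure Python as the pooled-variance two-sample t statistic; the data below are illustrative only, not the study's measurements:

```python
from statistics import mean, variance
from math import sqrt

def students_t(a, b):
    """Independent two-sample t statistic with pooled variance,
    as used for MS+ vs MS- group comparisons of continuous variables."""
    na, nb = len(a), len(b)
    # pooled sample variance, df = na + nb - 2
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Illustrative data: |t| would then be compared against the critical
# value for df = na + nb - 2 at alpha = 0.05.
t = students_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```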
RESULTS: A total of 582 patients underwent VTI (male:female = 308:274). 
The patients ranged in age from 0 to 88 years, with a mean ± standard deviation (SD) age of 35.6 ± 27.8 years. Of these 582 patients, 168 (28.9%) had myringosclerosis (MS+ group) and 414 (71.1%) did not (MS− group). The mean ± SD age of the MS+ group was 39.84 ± 26.17 years, while that of the MS− group was 33.84 ± 28.29 years (P < 0.05) (Table 1). VTI was performed most commonly in patients 0–9 years old (190 patients, 32.65%), followed by patients 50–59 years old (90 patients, 15.46%); distinct myringosclerosis formation rates were observed in each age group (Fig. 2). Patients who were < 6 years old had a lower rate of myringosclerosis formation than patients ≥ 6 years old (P < 0.05) (Fig. 3). Values are expressed as mean ± standard deviation or number. MS+ = with myringosclerosis, MS− = without myringosclerosis, VTI = ventilation tube insertion, VT = ventilation tube. aStatistically significant risk factors, P < 0.05. Myringosclerosis was observed in 87 of 308 male patients (28.25%) and in 81 of 274 female patients (29.56%); thus, there was no significant gender-related difference in the incidence of myringosclerosis. With regard to possible risk factors, serous effusion, type 2 VT, post-VTI perforation, and frequent VTIs were associated with the formation of myringosclerosis after VTI (P < 0.05). Post-VTI infection, intraoperative bleeding, duration of effusion, and duration of VT placement were not significantly related to myringosclerosis formation (Table 1). A logistic regression analysis of these explanatory variables (serous effusion, type 2 VT, post-VTI perforation, and frequent VTIs) further confirmed the significance of each in the formation of myringosclerosis (P < 0.01, P = 0.028, P = 0.002, and P = 0.002, respectively). 
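The reported gender comparison can be reproduced directly from the counts above (87 of 308 males and 81 of 274 females with myringosclerosis). A minimal Pearson chi-squared computation for the 2 × 2 table; the df = 1 critical value of 3.841 at α = 0.05 is a standard table value:

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (rows = gender, cols = MS+/MS-)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Counts from the Results: males 87 MS+ / 221 MS-, females 81 MS+ / 193 MS-
chi2 = chi_squared_2x2([[87, 221], [81, 193]])
# chi2 is far below the df = 1 critical value 3.841, consistent with the
# paper's finding of no significant gender-related difference.
```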
Among the possible risk factors for myringosclerosis formation, the duration of effusion and frequency of VTI were related to the severity of myringosclerosis. The duration of effusion was longer in the grade 3 and 4 group than in the grade 0 group, while the frequency of VTI was higher in the grade 1 and 2 group than in the grade 0 group (P < 0.05) (Fig. 4). VT = ventilation tube, VTI = ventilation tube insertion. *P < 0.05. DISCUSSION: In the present study, the rate of myringosclerosis formation after VTI was 28.9%, which was consistent with the findings of previous reports (32% and 35%) [11,12]. VTI related to OME was most commonly performed in patients 0–9 years old, followed by patients 50–59 years old; these increased incidences may be related to dysfunction of the immature Eustachian tube and to OME induced by aging of the Eustachian tube, respectively. These findings were supported by the results of previous studies regarding changes in the Eustachian tube lumen and cartilage calcification with age [13,14]. However, the incidence of myringosclerosis after VTI was highest among patients 20–29 years old (68.2%) and lowest among patients 0–9 years old (17.9%), although it was difficult to identify a clear statistical tendency due to differences in the numbers of patients in each group. 
According to a prior study of 126 school children who were 5–12 years old, the incidence of OME was much lower in children aged ≥ 6 years [15]. The opening function of the Eustachian tube has been reported to improve significantly with age, and improvement is most common in pre-school age patients (3–7 years old) [16]. Thus, maturation of the structure (length) and function (active opening mechanism) of the Eustachian tube and maturation of the immune system by the age of 6 years could be associated with the observed reduction in the incidence of otitis media [17]. When patients in the present study were stratified based on the age threshold of 6 years, the incidence of myringosclerosis formation after VTI was found to be higher in the group ≥ 6 years old than in the group < 6 years old. Our findings suggested that the degree of Eustachian tube maturation may be related to the formation of myringosclerosis and the occurrence of OME, although further studies are needed to confirm this hypothesis. Various classification methods have been proposed based on the histology or morphology of myringosclerosis, and most are related to the process by which myringosclerosis develops [3]. Selcuk et al. [3] classified myringosclerosis into three types based on the maturation of tympanosclerosis plaque: type I is characterized by fibroblasts, calcium crystals, and loose connective tissue under microscopic examination; these “pearl” tissues can be easily removed. Type II is characterized by calcification foci and large bundles of collagen, while type III is characterized by diffuse calcification and chondroblast-like cells. These histological findings show the sequence of myringosclerosis maturation. Akyildiz [18] divided tympanosclerotic plaque into two types based on surgical detachment features: type 1 plaque is soft and easily removable, with some calcium accumulation; in contrast, type 2 plaque is hard, white, and fragile, and cannot be easily removed from peripheral tissues. 
However, these previously proposed classification systems require biopsy, which makes preoperative evaluation of myringosclerosis difficult. In the present study, we classified myringosclerosis based on its severity in the TM. The pathogenesis of myringosclerosis has not been fully elucidated. Mattsson et al. [19] reported that free oxygen radicals generated by hyperoxic conditions in the middle ear resulted in the accumulation of sclerotic deposits. Karlidağ et al. [20] reported that lower concentrations of antioxidants may increase the damage created by free oxygen radicals, providing an environment more favorable for the formation of myringosclerosis. Healed inflammation and scar tissue after recurrent inflammation have also been suggested as possible factors underlying the pathogenesis of myringosclerosis [8,21]. There is no established treatment for myringosclerosis; however, some studies have demonstrated that the formation of myringosclerosis could be reduced by using antioxidants to reduce free radicals, as well as by applying topical vitamin E, N-acetylcysteine, sodium thiosulfate, and ciprofloxacin [19,22,23,24,25]. Erdurak et al. [25] reported that VTI with radiofrequency myringotomy, instead of incisional myringotomy with a cold knife, could reduce the formation of myringosclerosis. Chronic inflammation of the middle ear and trauma to the TM are regarded as the most important factors related to the formation of myringosclerosis; however, other factors have not been investigated in detail [9]. Yaman et al. [7] reported that age was not a significant factor associated with the formation of myringosclerosis, although it was identified as a risk factor for the formation of myringosclerosis in this study. 
Although results vary among studies, the majority of studies (including the present study) have indicated that gender is not a risk factor for myringosclerosis formation [7,11,26]. The duration of VT placement was considered to be an important factor in our study and in previous studies [7,11]. Other risk factors have been analyzed in previous studies. Patients with a greater number of otitis episodes in the past year and those with a greater number of otorrhea episodes in the past year had higher rates of myringosclerosis after VTI [11]. In addition, the anterior-inferior quadrant was identified as the highest-risk quadrant for myringosclerosis formation, compared to the posterior-inferior quadrant, as a site of myringotomy [11]. Yaman et al. [7] reported that myringosclerosis was more likely to develop after VTI than after myringotomy alone. Koc and Uneri [27] reported a higher rate of tympanosclerosis in patients with atherosclerosis, in comparison to the normal population; this result suggested that a genetic predisposition may be an additional risk factor for myringosclerosis formation. In the present study, the severity of myringosclerosis formation after VTI was related to the duration of effusion and the frequency of VTI. A longer duration of effusion could result in a longer exposure time to middle ear inflammation, which may lead to more severe myringosclerosis formation. There were some limitations in our study. Due to the retrospective nature of the study and the range of factors related to the formation of myringosclerosis, confounding factors may not have been effectively controlled, thereby reducing the impact of the findings. This study identified the following risk factors for the formation of myringosclerosis after VTI: older age at onset, nature of middle ear effusion, post-VTI perforation, type 2 VT, and frequent VTI. A longer duration of effusion and more frequent VTI were associated with increased myringosclerosis severity. 
These risk factors may be useful for predicting myringosclerosis before VTI, as well as for preventing it after the procedure.
Background: This study examined possible risk factors for myringosclerosis formation after ventilation tube insertion (VTI). Methods: A retrospective study was performed in a single tertiary referral center. A total of 582 patients who underwent VTI were enrolled in this study. Patients were divided into two groups based on the presence or absence of myringosclerosis: MS+ and MS-. Characteristics of patients were collected through medical chart review; these included age, gender, nature and duration of effusion, type of ventilation tube (VT), duration and frequency of VTI, incidence of post-VTI infection, incidence of intraoperative bleeding, and presence of postoperative perforation. Incidences of risk factors for myringosclerosis and the severity of myringosclerosis in association with possible risk factors were analyzed. Results: Myringosclerosis developed in 168 of 582 patients (28.9%) after VTI. Patients in the MS+ group had an older mean age than those in the MS- group. The rates of myringosclerosis were higher in patients with older age, serous otitis media, type 2 VT, post-VTI perforation, and frequent VTI. However, there were no differences in occurrence of myringosclerosis based on gender, duration of effusion, duration of VT placement, incidence of post-VTI infection, or incidence of intraoperative bleeding. The severity of myringosclerosis was associated with the duration of effusion and frequency of VTI. Conclusions: Older age, serous effusion, type 2 VT, presence of post-VTI perforation, and frequent VTI may be risk factors for myringosclerosis after VTI; the severity of myringosclerosis may vary based on the duration of effusion and frequency of VTI.
null
null
4,046
309
9
[ "myringosclerosis", "vti", "patients", "effusion", "duration", "tm", "vt", "formation", "study", "type" ]
[ "test", "test" ]
null
null
[CONTENT] Otitis Media | Myringosclerosis | Ventilation Tube | Middle Ear | Risk Factor [SUMMARY]
[CONTENT] Otitis Media | Myringosclerosis | Ventilation Tube | Middle Ear | Risk Factor [SUMMARY]
[CONTENT] Otitis Media | Myringosclerosis | Ventilation Tube | Middle Ear | Risk Factor [SUMMARY]
null
[CONTENT] Otitis Media | Myringosclerosis | Ventilation Tube | Middle Ear | Risk Factor [SUMMARY]
null
[CONTENT] Adolescent | Adult | Humans | Incidence | Middle Aged | Middle Ear Ventilation | Myringosclerosis | Postoperative Complications | Retrospective Studies | Risk Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Humans | Incidence | Middle Aged | Middle Ear Ventilation | Myringosclerosis | Postoperative Complications | Retrospective Studies | Risk Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Humans | Incidence | Middle Aged | Middle Ear Ventilation | Myringosclerosis | Postoperative Complications | Retrospective Studies | Risk Factors | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Humans | Incidence | Middle Aged | Middle Ear Ventilation | Myringosclerosis | Postoperative Complications | Retrospective Studies | Risk Factors | Young Adult [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] myringosclerosis | vti | patients | effusion | duration | tm | vt | formation | study | type [SUMMARY]
[CONTENT] myringosclerosis | vti | patients | effusion | duration | tm | vt | formation | study | type [SUMMARY]
[CONTENT] myringosclerosis | vti | patients | effusion | duration | tm | vt | formation | study | type [SUMMARY]
null
[CONTENT] myringosclerosis | vti | patients | effusion | duration | tm | vt | formation | study | type [SUMMARY]
null
[CONTENT] myringosclerosis | hearing | tm | hearing loss | ossicular chain | chain | ossicular | loss | vti | factors myringosclerosis [SUMMARY]
[CONTENT] vti | tm | myringosclerosis | patients | grade myringosclerosis | duration | effusion | grade | vt | grade myringosclerosis involving [SUMMARY]
[CONTENT] group | ms | myringosclerosis | years | patients | myringosclerosis ms | ms myringosclerosis | 05 | vti | ms myringosclerosis ms [SUMMARY]
null
[CONTENT] myringosclerosis | vti | patients | tm | duration | effusion | vt | study | formation | grade [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| 582 ||| two ||| VT ||| [SUMMARY]
[CONTENT] 168 | 582 | 28.9% ||| MS- ||| 2 ||| ||| [SUMMARY]
null
[CONTENT] ||| ||| ||| 582 ||| two ||| VT ||| ||| ||| 168 | 582 | 28.9% ||| MS- ||| 2 ||| ||| ||| 2 [SUMMARY]
null
Bioavailability, distribution and clearance of tracheally-instilled and gavaged uncoated or silica-coated zinc oxide nanoparticles.
25183210
Nanoparticle pharmacokinetics and biological effects are influenced by several factors. We assessed the effects of amorphous SiO2 coating on the pharmacokinetics of zinc oxide nanoparticles (ZnO NPs) following intratracheal (IT) instillation and gavage in rats.
BACKGROUND
Uncoated and SiO2-coated ZnO NPs were neutron-activated and IT-instilled at 1 mg/kg or gavaged at 5 mg/kg. Rats were followed over 28 days post-IT, and over 7 days post-gavage. Tissue samples were analyzed for 65Zn radioactivity. Pulmonary responses to instilled NPs were also evaluated at 24 hours.
METHODS
SiO2-coated ZnO elicited significantly higher inflammatory responses than uncoated NPs. Pulmonary clearance of both 65ZnO NPs was biphasic with a rapid initial t1/2 (0.2 - 0.3 hours), and a slower terminal t1/2 of 1.2 days (SiO2-coated ZnO) and 1.7 days (ZnO). Both NPs were almost completely cleared by day 7 (>98%). With IT-instilled 65ZnO NPs, significantly more 65Zn was found in skeletal muscle, liver, skin, kidneys, cecum and blood on day 2 in uncoated than SiO2-coated NPs. By 28 days, extrapulmonary levels of 65Zn from both NPs significantly decreased. However, 65Zn levels in skeletal muscle, skin and blood remained higher from uncoated NPs. Interestingly, 65Zn levels in bone marrow and thoracic lymph nodes were higher from coated 65ZnO NPs. More 65Zn was excreted in the urine from rats instilled with SiO2-coated 65ZnO NPs. After 7 days post-gavage, only 7.4% (uncoated) and 6.7% (coated) of 65Zn dose were measured in all tissues combined. As with instilled NPs, after gavage significantly more 65Zn was measured in skeletal muscle from uncoated NPs and less in thoracic lymph nodes. More 65Zn was excreted in the urine and feces with coated than uncoated 65ZnO NPs. However, over 95% of the total dose of both NPs was eliminated in the feces by day 7.
RESULTS
Although SiO2-coated ZnO NPs were more inflammogenic, the overall lung clearance rate was not affected. However, SiO2 coating altered the tissue distribution of 65Zn in some extrapulmonary tissues. For both IT instillation and gavage administration, SiO2 coating enhanced transport of 65Zn to thoracic lymph nodes and decreased transport to the skeletal muscle.
CONCLUSIONS
[ "Administration, Oral", "Animals", "Biological Availability", "Half-Life", "Inhalation Exposure", "Lung", "Lymph Nodes", "Male", "Metabolic Clearance Rate", "Muscle, Skeletal", "Nanoparticles", "Pneumonia", "Rats", "Rats, Wistar", "Silicon Dioxide", "Tissue Distribution", "Zinc Oxide" ]
4237897
Background
Zinc oxide nanoparticles (ZnO NPs) are widely used in consumer products, including ceramics, cosmetics, plastics, sealants, toners and foods [1]. They are a common component in a range of technologies, including sensors, light emitting diodes, and solar cells due to their semiconducting and optical properties [2]. ZnO NPs filter both UV-A and UV-B radiation but remain transparent in the visible spectrum [3]. For this reason, ZnO NPs are commonly added to sunscreens [4] and other cosmetic products. Furthermore, advanced technologies have made the large-scale production of ZnO NPs possible [5]. Health concerns have been raised due to the growing evidence of the potential toxicity of ZnO NPs. Reduced pulmonary function in humans was observed 24 hours after inhalation of ultrafine (<100 nm) ZnO [6]. Ultrafine ZnO has also been shown to cause DNA damage in HepG2 cells and neurotoxicity due to the formation of reactive oxygen species (ROS) [7],[8]. Recently, we and others have demonstrated that ZnO NPs can cause DNA damage in TK6 and H9T3 cells [9],[10]. ZnO NPs dissolve in aqueous solutions, releasing Zn2+ ions that may in turn cause cytotoxicity and DNA damage to cells [9],[11]–[13]. Studies have shown that changing the surface characteristics of certain NPs may alter the biologic responses of cells [14],[15]. Developing strategies to reduce the toxicity of ZnO NPs without changing their core properties (safer-by-design approach) is an active area of research. Xia et al. [16] showed that doping ZnO NPs with iron could reduce the rate of ZnO dissolution and the toxic effects in zebrafish embryos and in rat and mouse lungs. We also showed that encapsulation of ZnO NPs with amorphous SiO2 reduced the dissolution of Zn2+ ions in biological media, and reduced cell cytotoxicity and DNA damage in vitro [17]. Surface characteristics of NPs, such as their chemical and molecular structure, influence their pharmacokinetic behavior [18]–[20]. 
Surface chemistry influences the adsorption of phospholipids, proteins and other components of lung surfactants in the formation of a particle corona, which may regulate the overall nanoparticle pharmacokinetics and biological responses [19]. Coronas have been shown to influence the dynamics of cellular uptake, localization, biodistribution, and biological effects of NPs [21],[22]. Coating of NPs with amorphous silica is a promising technique to enhance colloidal stability and biocompatibility for theranostics [23],[24]. A recent study by Chen et al. showed that coating gold nanorods with silica can amplify the photoacoustic response without altering optical absorption [25]. Furthermore, coating magnetic NPs with amorphous silica enhances particle stability and reduces their cytotoxicity in a human bronchial epithelium cell line model [26]. Amorphous SiO2 is generally considered relatively biologically inert [27], and is commonly used in cosmetic and personal care products, and as a negative control in some nanoparticle toxicity screening assays [28]. However, Napierska et al. demonstrated the size-dependent cytotoxic effects of amorphous silica in vitro [29]. They concluded that the surface area of amorphous silica is an important determinant of cytotoxicity. An in vivo study using a rat model demonstrated that the pulmonary toxicity and inflammatory responses to amorphous silica are transient [30]. Moreover, SiO2-coated nanoceria induced minimal lung injury and inflammation [31]. It has also been demonstrated that SiO2 coating improves nanoparticle biocompatibility in vitro for a variety of nanomaterials, including Ag [32], Y2O3 [33], and ZnO [17]. We have recently developed methods for the gas-phase synthesis of metal and metal oxide NPs by a modified flame spray pyrolysis (FSP) reactor. Coating metal oxide NPs with amorphous SiO2 involves the encapsulation of the core NPs in flight with a nanothin amorphous SiO2 layer [34]. 
An important advantage of flame-made NPs is their high purity. Flame synthesis is a high-temperature process that leaves no organic contamination on the particle surface. Furthermore, the presence of SiO2 does not influence the optoelectronic properties of the core ZnO nanorods. Thus, they retain their desired high transparency in the visible spectrum and UV absorption rendering them suitable for UV blocking applications [17]. The SiO2 coating has been demonstrated to reduce ZnO nanorod toxicity by mitigating their dissolution and generation of ions in solutions, and by preventing the immediate contact between the core particle and mammalian cells. For ZnO NPs, such a hermetic SiO2 coating reduces ZnO dissolution while preserving the optical properties and band-gap energy of the ZnO core [17]. Studies examining nanoparticle structure-pharmacokinetic relationships have established that plasma protein binding profiles correlate with circulation half-lives [27]. However, studies evaluating the relationship between surface modifications, lung clearance kinetics, and pulmonary effects are lacking. Thus, we sought to study the effects of amorphous SiO2 coating on ZnO pulmonary effects and on pharmacokinetics of 65Zn when radioactive 65ZnO and SiO2-coated 65ZnO nanorods are administered by intratracheal instillation (IT) and gavage. We explored how the SiO2 coating affected acute toxicity and inflammatory responses in the lungs, as well as 65Zn clearance and tissue distribution after IT instillation over a period of 28 days. The translocation of the 65Zn from the stomach to other organs was also quantified for up to 7 days after gavage. Finally, we examined how the SiO2 coating affected the urinary and fecal excretion of 65Zn during the entire observation period.
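The pulmonary clearance the study sets out to measure is commonly described by a two-compartment (biexponential) model. The sketch below uses the terminal half-life reported for uncoated 65ZnO (1.7 days); the fast-phase fraction (50% of the burden) and fast half-life (0.25 hours, within the reported 0.2-0.3 hour range) are assumed values, so the numbers are illustrative only:

```python
from math import log, exp

def retained_fraction(t_days, fast_frac, t_half_fast_days, t_half_slow_days):
    """Fraction of the instilled lung burden remaining at time t under a
    biexponential model: f*exp(-k1*t) + (1-f)*exp(-k2*t), with each rate
    constant k = ln(2) / half-life."""
    k_fast = log(2) / t_half_fast_days
    k_slow = log(2) / t_half_slow_days
    return (fast_frac * exp(-k_fast * t_days)
            + (1 - fast_frac) * exp(-k_slow * t_days))

# Terminal t1/2 = 1.7 d (uncoated ZnO, from the study); fast-phase
# parameters (50% of burden, t1/2 = 0.25 h) are assumptions.
remaining_day7 = retained_fraction(7, 0.5, 0.25 / 24, 1.7)
# Under these assumptions, under 3% of the burden remains at day 7,
# in line with the reported near-complete (>98%) clearance.
```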
Methods
Synthesis of ZnO and SiO2-coated ZnO NPs: The synthesis of these NPs was reported in detail elsewhere [17]. In brief, uncoated and SiO2-coated ZnO particles were synthesized by flame spray pyrolysis (FSP) of zinc naphthenate (Sigma-Aldrich, St. Louis, MO, USA) dissolved in ethanol (Sigma-Aldrich) at a precursor molarity of 0.5 M. The precursor solution was fed through a stainless steel capillary at 5 ml/min, dispersed by 5 L/min O2 (purity > 99%, pressure drop at nozzle tip: pdrop = 2 bar) (Air Gas, Berwyn, PA, USA) and combusted. A premixed methane-oxygen (1.5 L/min, 3.2 L/min) supporting flame was used to ignite the spray. Oxygen (Air Gas, purity > 99%) sheath gas was used at 40 L/min. Core particles were coated in-flight by the swirl-injection of hexamethyldisiloxane (HMDSO) (Sigma-Aldrich) through a torus ring with 16 jets at an injection height of 200 mm above the FSP burner. A total gas flow of 16 L/min, consisting of N2 carrying HMDSO vapor and pure N2, was injected through the torus ring jets. HMDSO vapor was obtained by bubbling N2 gas through liquid HMDSO (500 ml), maintained at a controlled temperature using a temperature-controlled water bath. 
Characterization of ZnO and SiO2-coated ZnO NPs: The morphology of these NPs was examined by electron microscopy. Uncoated and SiO2-coated ZnO NPs were dispersed in ethanol at a concentration of 1 mg/ml in 50 ml polyethylene conical tubes and sonicated at 246 J/ml (Branson Sonifier S-450A, Swedesboro, NJ, USA). The samples were deposited onto lacey carbon TEM grids. All grids were imaged with a JEOL 2100. The primary particle size was determined by X-ray diffraction (XRD). XRD patterns for uncoated ZnO and SiO2-coated ZnO NPs were obtained using a Scintag XDS2000 powder diffractometer (Cu Kα, λ = 0.154 nm, 40 kV, 40 mA, step size = 0.02°). One hundred mg of each sample was placed onto the diffractometer stage and analyzed over a range of 2θ = 20-70°. Major diffraction peaks were identified using the Inorganic Crystal Structure Database (ICSD) for wurtzite (ZnO) crystals. The crystal size was determined by applying the Debye-Scherrer shape equation to the Gaussian fit of the major diffraction peak. The specific surface area was obtained using the Brunauer-Emmett-Teller (BET) method. The samples were degassed in N2 for at least 1 hour at 150°C before obtaining five-point N2 adsorption at 77 K (Micromeritics Tristar 3000, Norcross, GA, USA). 
The samples were deposited onto lacey carbon TEM grids. All grids were imaged with a JEOL 2100. The primary particle size was determined by X-ray diffraction (XRD). XRD patterns for uncoated ZnO and SiO2-coated ZnO NPs were obtained using a Scintag XDS2000 powder diffractometer (Cu Kα, λ = 0.154 nm, 40 kV, 40 mA, stepsize = 0.02°). One hundred mg of each sample was placed onto the diffractometer stage and analyzed from a range of 2θ = 20-70°. Major diffraction peaks were identified using the Inorganic Crystal Structure Database (ICSD) for wurtzite (ZnO) crystals. The crystal size was determined by applying the Debye-Scherrer Shape Equation to the Gaussian fit of the major diffraction peak. The specific surface area was obtained using the Brunauer-Emmet-Teller (BET) method. The samples were degassed in N2 for at least 1 hour at 150°C before obtaining five-point N2 adsorption at 77 K (Micrometrics Tristar 3000, Norcross, GA, USA). Neutron activation of NPs The NPs with and without the SiO2 coating were neutron-activated at the Massachusetts Institute of Technology (MIT) Nuclear Reactor Laboratory (Cambridge, MA). Samples were irradiated with a thermal neutron flux of 5 × 1013 n/cm2s for 120 hours. The resulting 65Zn radioisotope has a half-life of 244.3 days and a primary gamma energy peak of 1115 keV. The relative specific activities for 65Zn were 37.7 ± 5.0 kBq/mg for SiO2-coated 65ZnO and 41.7 ± 7.2 kBq/mg for 65ZnO NPs. The NPs with and without the SiO2 coating were neutron-activated at the Massachusetts Institute of Technology (MIT) Nuclear Reactor Laboratory (Cambridge, MA). Samples were irradiated with a thermal neutron flux of 5 × 1013 n/cm2s for 120 hours. The resulting 65Zn radioisotope has a half-life of 244.3 days and a primary gamma energy peak of 1115 keV. The relative specific activities for 65Zn were 37.7 ± 5.0 kBq/mg for SiO2-coated 65ZnO and 41.7 ± 7.2 kBq/mg for 65ZnO NPs. 
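The crystal-size calculation used in the characterization above (the Debye-Scherrer equation applied to the Gaussian fit of the major diffraction peak) can be sketched as follows. This is an illustrative example, not the study's analysis code: the shape factor K = 0.9 and the peak position/width are assumed values chosen for demonstration.

```python
import math

def scherrer_crystal_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size in nm from the Debye-Scherrer equation:
    D = K * lambda / (beta * cos(theta)), where beta is the peak FWHM
    in radians and theta is half the 2-theta diffraction angle."""
    beta = math.radians(fwhm_deg)             # FWHM of the Gaussian peak fit
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha wavelength (0.154 nm) as in the study; the ZnO (101) peak
# position (~36.3 deg 2-theta) and the 0.30 deg FWHM are assumed values.
size_nm = scherrer_crystal_size(0.154, 0.30, 36.3)
```

With these assumed peak parameters the estimate comes out near 28 nm, the same order as the reported crystal sizes; the actual result depends entirely on the fitted peak width.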
Preparation and characterization of ZnO and SiO2-coated ZnO nanoparticle suspensions

Uncoated and SiO2-coated ZnO NPs were dispersed using a protocol previously described [82],[36]. The NPs were dispersed in deionized water at a concentration of 0.66 mg/ml (IT instillation) or 10 mg/ml (gavage). Sonication was performed in deionized water to minimize the formation of reactive oxygen species. Samples were thoroughly mixed immediately prior to administration. Dispersions of NPs were analyzed for hydrodynamic diameter (dH), polydispersity index (PdI), and zeta potential (ζ) by dynamic light scattering (DLS) using a Zetasizer Nano-ZS (Malvern Instruments, Worcestershire, UK).

Animals

The protocols used in this study were approved by the Harvard Medical Area Animal Care and Use Committee. Nine-week-old male Wistar Han rats were purchased from Charles River Laboratories (Wilmington, MA). Rats were housed in pairs in polypropylene cages and allowed to acclimate for 1 week before the studies were initiated. Rats were maintained on a 12-hour light/dark cycle. Food and water were provided ad libitum.

Pulmonary responses – Bronchoalveolar lavage and analyses

This experiment was performed to determine pulmonary responses to instilled NPs. A group of rats (mean weight 264 ± 15 g) was intratracheally instilled with either an uncoated ZnO or a SiO2-coated ZnO NP suspension at a dose of 0, 0.2 or 1.0 mg/kg. The particle suspensions were delivered to the lungs through the trachea in a volume of 1.5 ml/kg. Twenty-four hours later, rats were euthanized under anesthesia by exsanguination through a cut in the abdominal aorta. The trachea was exposed and cannulated. The lungs were then lavaged 12 times with 3 ml of 0.9% sterile PBS without calcium and magnesium ions. The cells of all washes were separated from the supernatant by centrifugation (350 × g at 4°C for 10 min). Total cell counts and hemoglobin measurements were made from the cell pellets. After staining the cells, a differential cell count was performed. The supernatant of the first two washes was clarified by centrifugation (14,500 × g at 4°C for 30 min) and used for standard spectrophotometric assays of lactate dehydrogenase (LDH), myeloperoxidase (MPO) and albumin.

Pharmacokinetics of 65Zn

The mean weight of rats at the start of the experiment was 285 ± 3 g. Two groups of rats (29 rats/NP type) were intratracheally instilled with 65ZnO NPs or SiO2-coated 65ZnO NPs at a 1 mg/kg dose (1.5 ml/kg, 0.66 mg/ml). For fecal/urine collection, each rat was placed in an individual metabolic cage containing food and water for a 24-hour period at selected time points (0–24 hours, 2–3 days, 6–7 days, 9–10 days, 13–14 days, 20–21 days, and 27–28 days post-IT instillation). All samples were analyzed for total 65Zn activity, expressed as % of the instilled 65Zn dose. Fecal and urinary clearance curves were generated and used to estimate the daily cumulative excretion. Groups of 8 rats were humanely sacrificed at 5 minutes, 2 days, and 7 days, and 5 rats/group at 28 days; the number of collected fecal/urine samples therefore decreased over time. Another cohort of 20 rats was dosed with 65ZnO (n = 10) or SiO2-coated 65ZnO (n = 10) by gavage at a 5 mg/kg dose (0.5 ml/kg, 10 mg/ml). One group of 5 rats per NP type was humanely sacrificed at 5 minutes and immediately dissected. Another group of 5 rats per NP type was individually placed in metabolic cages, as described above, and 24-hour samples of urine and feces were collected at 0–1 day, 2–3 days, and 6–7 days post-gavage. The remaining rats were sacrificed at 7 days.

At each endpoint, rats were euthanized and dissected, and the whole brain, spleen, kidneys, heart, liver, lungs, GI tract, testes, thoracic lymph nodes, blood (10 ml, separated into plasma and RBC), bone marrow (from femoral bones), bone (both femurs), skin (2 × 3 inches), and skeletal muscle (from 4 sites) were collected. The 65Zn radioactivity in each sample was measured with a WIZARD gamma counter (PerkinElmer, Inc., Waltham, MA). The number of disintegrations per minute was determined from the counts per minute and the counting efficiency. The efficiency of the gamma counter was derived by counting multiple aliquots of NP samples and relating them to the specific activities measured at the MIT Nuclear Reactor Laboratory; we estimated that the counter had an efficiency of ~52%. The 65Zn radioactivity was expressed as kBq/g tissue and as the percentage of the administered dose in each organ. All radioactivity data were adjusted for physical decay over the entire observation period. The radioactivity in organs and tissues not measured in their entirety was estimated from each tissue's percentage of total body weight: skeletal muscle, 40%; bone marrow, 3.2%; peripheral blood, 7%; skin, 19%; and bone, 6% [83],[84]. Based on the 65Zn specific activity (kBq/mg NP) and the tissue 65Zn concentration, the amount of Zn derived from each NP was calculated for each tissue examined (ng Zn/g tissue).

Statistical analyses

Differences between groups in 65Zn tissue distribution and in cellular and biochemical parameters measured in bronchoalveolar lavage were analyzed using multivariate analysis of variance (MANOVA) with REGWQ (Ryan-Einot-Gabriel-Welsch range) and Tukey post hoc tests in SAS statistical software (SAS Institute, Cary, NC). The lung clearance half-life was estimated by fitting a two-phase (biexponential) model using R v. 3.1.0 [85].
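The activity bookkeeping described above (counts converted to disintegrations via the ~52% counter efficiency, correction for physical decay of 65Zn, and conversion to Zn mass via the NP specific activity) can be sketched as follows. The function names and numeric examples are ours, and the Zn mass fraction of ZnO used in the last step is an assumption about how NP mass maps to Zn mass, shown here for the uncoated particles only.

```python
import math

ZN65_HALF_LIFE_DAYS = 244.3   # physical half-life of 65Zn
COUNTER_EFFICIENCY = 0.52     # estimated gamma-counter efficiency

def activity_kbq(cpm, efficiency=COUNTER_EFFICIENCY):
    """Convert measured counts per minute to activity in kBq:
    dpm = cpm / efficiency; 1 Bq = 60 dpm; 1 kBq = 1000 Bq."""
    dpm = cpm / efficiency
    return dpm / 60.0 / 1000.0

def decay_corrected(activity, elapsed_days, half_life=ZN65_HALF_LIFE_DAYS):
    """Correct a measured activity back to administration time,
    i.e. undo the physical decay accrued over elapsed_days."""
    return activity * math.exp(math.log(2.0) * elapsed_days / half_life)

def zn_per_gram_ng(tissue_kbq_per_g, specific_activity_kbq_per_mg,
                   zn_mass_fraction=65.38 / 81.38):
    """ng of NP-derived Zn per g of tissue. The default Zn mass
    fraction (~0.80, from Zn and ZnO molar masses) is an assumed
    NP-mass-to-Zn-mass conversion for uncoated ZnO."""
    mg_np_per_g = tissue_kbq_per_g / specific_activity_kbq_per_mg
    return mg_np_per_g * zn_mass_fraction * 1e6  # mg -> ng

# Worked example with invented numbers: 31,200 cpm measured one half-life
# (244.3 days) after instillation.
measured = activity_kbq(31200)                 # ~1.0 kBq at counting time
at_dosing = decay_corrected(measured, 244.3)   # ~2.0 kBq at instillation
```

Dividing the decay-corrected tissue activity by the instilled activity then gives the % of dose per organ reported in the Results.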
Results
Synthesis and characterization of ZnO and SiO2-coated ZnO NPs

Uncoated and SiO2-coated ZnO NPs were made by flame spray pyrolysis using the Versatile Engineered Nanomaterial Generation System at Harvard University [35],[17]. The detailed physicochemical and morphological characterization of these NPs was reported earlier [36],[17]. The ZnO primary NPs had a rod-like shape with an aspect ratio of 2:1 to 8:1 (Figure 1) [37],[17]. Flame-made nanoparticles typically exhibit a lognormal size distribution with a geometric standard deviation of σg = 1.45 [38]. In the SiO2-coated ZnO nanorods, a nanothin (~4.6 ± 2.5 nm) amorphous SiO2 layer encapsulated the ZnO core [17] (Figure 1B). The amorphous nature of the silica coating was verified by X-ray diffraction (XRD) and electron microscopy analyses [17]. The average crystal sizes of uncoated and SiO2-coated NPs were 29 and 28 nm, respectively [39]. Their specific surface areas (SSA) were 41 m2/g (uncoated) and 55 m2/g (SiO2-coated) [40]. The lower density of SiO2 compared to ZnO contributes to the higher SSA of the SiO2-coated than the uncoated NPs. The extent of the SiO2 coating was assessed by X-ray photoelectron spectroscopy and photocatalytic experiments. These data showed that less than 5% of ZnO NPs were uncoated, as some of the freshly formed core ZnO NPs may escape the coating process [41],[17]. Furthermore, ZnO dissolution from the SiO2-coated nanorods was significantly lower than from the uncoated NPs in culture medium over 24 h [17]. The Zn2+ ion concentration reached equilibrium after 6 hours for the coated NPs (~20%), while the uncoated NPs dissolved at a constant rate up to 24 hours [17]. For both the IT and gavage routes, the NPs were dispersed in deionized water by sonication at 242 J/ml. The hydrodynamic diameters were 165 ± 3 nm (SiO2-coated) and 221 ± 3 nm (uncoated). The zeta potential values in these suspensions were 23 ± 0.4 mV (uncoated) and −16.2 ± 1.2 mV (SiO2-coated). The zeta potential differences between these two types of NPs were observed over a pH range of 2.5–8.0 [17], which includes the pH conditions in the airways/alveoli and the small and large intestines. The post-irradiation hydrodynamic diameter and zeta potential in water suspension were similar to those of the pristine NPs used in the lung toxicity/inflammation experiments.

Physicochemical characterization of test materials. Transmission electron micrographs of uncoated ZnO (A) and SiO2-coated ZnO (B) NPs. The thin silica coating of approximately 5 nm is shown in B, inset.

Pulmonary responses to intratracheally instilled ZnO and SiO2-coated ZnO

We compared the pulmonary responses to uncoated versus SiO2-coated ZnO NPs at 24 hours after IT instillation in rats. Groups of 4–6 rats received 0, 0.2 or 1 mg/kg of either type of NP. We found that IT-instilled coated and uncoated ZnO NPs induced dose-dependent injury and inflammation, evident as increased neutrophils and elevated levels of myeloperoxidase (MPO), albumin and lactate dehydrogenase (LDH) in the bronchoalveolar lavage (BAL) fluid at 24 hours post-instillation (Figure 2). At the lower dose of 0.2 mg/kg, only the rats instilled with SiO2-coated ZnO (n = 4) showed elevated neutrophil, LDH, MPO, and albumin levels.
At 1 mg/kg, however, both types of NPs induced injury and inflammation to the same extent, except that MPO was higher in rats instilled with SiO2-coated ZnO NPs.

Cellular and biochemical parameters of lung injury and inflammation in bronchoalveolar lavage (BAL). Tracheally instilled ZnO and SiO2-coated ZnO induced dose-dependent lung injury and inflammation at 24 hours. (A) Significant increases in BAL neutrophils were observed at 1 mg/kg of both NPs (n = 6/group). At the lower dose of 0.2 mg/kg (n = 4–6/group), only the SiO2-coated ZnO (n = 4) induced significant neutrophil influx in the lungs. (B) Similarly, significant increases in LDH, myeloperoxidase and albumin were observed at 1 mg/kg of both NPs and at 0.2 mg/kg of SiO2-coated ZnO. (*P < 0.05 vs. control; #P < 0.05, SiO2-coated ZnO versus ZnO.)

Pharmacokinetics of intratracheally instilled uncoated or SiO2-coated 65ZnO NPs

Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. Both 65ZnO NPs and SiO2-coated 65ZnO NPs exhibited biphasic clearance, with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours). No significant difference in initial clearance was observed between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% of the dose remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small differences. At 28 days, only 0.14 ± 0.01% of the SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs.

Lung clearance of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid, with only 16–18% of the dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. By the end of the experiment, 65Zn was nearly gone (less than 0.3% of the dose). Although statistically higher levels of 65ZnO NPs than of SiO2-coated 65ZnO NPs remained in the lungs at 7 and 28 days, the curves show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days; n = 5 at 28 days.)

However, analyses of selected extrapulmonary tissues showed significant differences (Figure 4).
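The biphasic lung clearance just described corresponds to a two-phase (biexponential) retention model, the same family of model used for the half-life estimates in the statistical analyses. The sketch below is illustrative only: the 80/20 split between fast and slow phases is an assumed value, not a parameter fitted from the study's data.

```python
import math

def biexp_retention(t_hours, frac_fast, t_half_fast, t_half_slow):
    """Fraction of the instilled dose remaining at time t under
    R(t) = A*exp(-k1*t) + (1 - A)*exp(-k2*t), with k = ln(2)/t_half
    for each phase and A the fast-phase fraction."""
    k1 = math.log(2.0) / t_half_fast
    k2 = math.log(2.0) / t_half_slow
    return (frac_fast * math.exp(-k1 * t_hours)
            + (1.0 - frac_fast) * math.exp(-k2 * t_hours))

# Reported half-lives for uncoated 65ZnO: 0.3 h (initial) and 42 h
# (terminal); the 0.8 fast-phase fraction is a hypothetical choice.
remaining_at_2_days = biexp_retention(48.0, 0.8, 0.3, 42.0)
```

With these assumptions roughly 9% of the dose would remain at 2 days; the measured 16.1% implies a larger slow-phase fraction, which is what fitting both parameters to the lung retention data would recover.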
Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. These tissue differences became more pronounced at later time points. At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs had translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum (Table 1). At 7 and 28 days, the overall differences in 65Zn content in these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin. Interestingly, higher percentages of the 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where they increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher for uncoated than for SiO2-coated 65ZnO NPs (Tables 1, 2 and 3 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as in organs not analyzed, such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues.

Extrapulmonary distribution of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined: blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59–72% of the dose was detected in extrapulmonary organs. 65Zn levels then decreased over time, to 25–37% by day 28. Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs.

Tissue distribution of 65Zn at 2 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % of instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Tissue distribution of 65Zn at 7 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % of instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Tissue distribution of 65Zn at 28 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % of instilled dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Distribution of 65Zn 7 days after gavage administration of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % of gavaged dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Urinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A).

Fecal and urinary excretion of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days.
The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). Only about 1% of the 65Zn dose was excreted in the urine (B). Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. Overall, both 65ZnO NPs and SiO2-coated 65ZnO NPs exhibited a biphasic clearance with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours). No significant difference was observed on the initial clearance between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small (in magnitude) differences. At 28 days, only 0.14 ± 0.01% of SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs. Lung clearance of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid with only 16-18% of dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. And by the end of experiment, 65Zn was nearly gone (less than 0.3% of dose). Although statistically higher levels of 65ZnO NPs than of SiO2-coated 65ZnO NPs remained in the lungs at 7 and 28 days, the graphs show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days, n = 5 at 28 days). However, analyses of the selected extrapulmonary tissues showed significant differences (Figure 4). Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. 
These tissue differences became more pronounced at later time points. At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum than from SiO2-coated 65ZnO NPs (Table 1). At 7 and 28 days, the overall differences in the 65Zn contents in these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin. Interestingly, higher percentages of 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where it increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher in uncoated than SiO2-coated 65ZnO NPs (Tables 1, 23 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as organs not analyzed such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues. Extrapulmonary distribution of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined. It included blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was a rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59-72% of the dose was detected in extrapulmonary organs. Then, 65Zn levels decreased over time to 25-37% by day 28. Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs. 
Tissue distribution of 65 Zn at 2 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65 Zn at 7 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65 Zn at 28 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Distribution of 65 Zn 7 days after gavage administration of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% gavaged dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Urinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A). Fecal and urinary excretion of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days. The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). 
Only about 1% of the 65Zn dose was excreted in the urine (B).
Pharmacokinetics of gavaged uncoated or SiO2-coated 65ZnO NPs
Absorption of 65Zn from the gut was studied at 5 minutes and 7 days post-gavage of uncoated or SiO2-coated 65ZnO NPs. Nearly 100% of the dose was recovered at 5 minutes in the stomach for both types of NPs (Figure 6A). The 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). However, significantly higher percentages of the total dose were still detected in the blood, bone marrow, skin, testes, kidneys, spleen and liver in rats gavaged with uncoated 65ZnO NPs (data not shown). After 7 days, low levels of 65Zn from both types of NPs (<1% of the original dose) were measured in all organs except the bone, skeletal muscle and skin (Figure 6B, Table 4). Higher levels of 65Zn were observed in the skeletal muscle from uncoated than from coated 65ZnO NPs at this time point (Table 4). However, similar to the IT-instillation data, the thoracic lymph nodes retained more 65Zn from the SiO2-coated than from the uncoated 65ZnO NPs. Urinary excretion of 65Zn was also much lower than fecal excretion post-gavage. The urinary excretion of 65Zn in rats gavaged with SiO2-coated 65ZnO NPs was significantly higher than in rats gavaged with uncoated 65ZnO NPs (Figure 7B). The fecal excretion in the gavaged rats was higher than in IT-instilled rats. Despite a significant difference in fecal excretion during the first day post-gavage, nearly 95% of the dose for both types of NPs was excreted in the feces by day 7 (Figure 7A). Tissue distribution of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are % dose of administered 65Zn in different organs.
(A) At 5 minutes post-gavage, the 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). (B) At day 7, significantly more 65Zn was absorbed and retained in non-GIT tissues (6.9% for uncoated, 6.0% for coated 65ZnO NPs). Significantly more 65Zn was measured in skeletal muscle in rats gavaged with uncoated versus coated 65ZnO NPs. (Note: RBC: red blood cell; sk muscl: skeletal muscle; sm int: small intestine; large int: large intestine). Fecal and urinary excretion of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 7 days. Similar to the IT-instilled groups, the predominant excretion pathway was via the feces. Ninety-five percent of the administered 65Zn was excreted in both groups by day 7 (A). Only 0.1% of the 65Zn dose was excreted in the urine (B).
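All of the tissue percentages reported above come from gamma counting. Per the Methods, counts per minute were converted to disintegrations using the counter's ~52% efficiency, decay-corrected over the 244.3-day physical half-life of 65Zn, and expressed as a fraction of the administered activity (specific activity × dosed mass). A minimal sketch of that arithmetic; the count rate, elapsed time and dose mass below are hypothetical illustration values, not measurements from the study:

```python
import math

ZN65_HALF_LIFE_DAYS = 244.3   # physical half-life of 65Zn (from the Methods)
COUNTER_EFFICIENCY = 0.52     # gamma-counter efficiency estimated by the authors

def percent_of_dose(cpm, days_elapsed, specific_activity_kbq_per_mg, dose_mg):
    """Convert a tissue count rate (counts/min) into % of administered dose."""
    dpm = cpm / COUNTER_EFFICIENCY                 # counts/min -> disintegrations/min
    bq = dpm / 60.0                                # 1 Bq = 1 disintegration per second
    decay = 2.0 ** (-days_elapsed / ZN65_HALF_LIFE_DAYS)
    bq_at_dosing = bq / decay                      # correct back to administration time
    administered_bq = specific_activity_kbq_per_mg * 1e3 * dose_mg
    return 100.0 * bq_at_dosing / administered_bq
```

With the uncoated-NP specific activity of 41.7 kBq/mg and a 1 mg/kg dose in a hypothetical 285 g rat (0.285 mg of NPs), a sample counting about 37,000 cpm on the day of dosing works out to roughly 10% of the administered dose.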
Conclusions
We examined the influence of a nanothin (~4.5 nm) amorphous SiO2 coating on ZnO NPs on the pharmacokinetics of 65Zn following IT instillation and gavage of neutron-activated NPs. The SiO2 coating did not affect the clearance of 65Zn from the lungs. However, the extrapulmonary translocation and distribution of 65Zn from coated versus uncoated 65ZnO NPs differed significantly in some tissues. The SiO2 coating resulted in lower translocation of instilled 65Zn to the skeletal muscle, skin and heart, and it also reduced 65Zn translocation to skeletal muscle post-gavage. For both routes of administration, the SiO2 coating enhanced the transport of 65Zn to the thoracic lymph nodes.
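The biphasic lung clearance underlying these conclusions (fast-phase t1/2 ≈ 0.2–0.3 h, slow-phase t1/2 ≈ 29–42 h, per the Results) corresponds to a two-exponential retention model; the authors fit it in R, but the model itself is simple to evaluate. A sketch in Python: the fast-phase fraction of 0.63 is a hypothetical value chosen so that ~16% of the dose remains at 2 days, as observed for uncoated 65ZnO; it is not a parameter reported in the paper.

```python
import math

def biexponential_retention(t_hours, fast_fraction, t_half_fast, t_half_slow):
    """Two-phase lung retention R(t) = f*exp(-k1*t) + (1-f)*exp(-k2*t),
    with rate constants k = ln(2)/t_half for each phase."""
    k1 = math.log(2) / t_half_fast
    k2 = math.log(2) / t_half_slow
    return (fast_fraction * math.exp(-k1 * t_hours)
            + (1 - fast_fraction) * math.exp(-k2 * t_hours))

# Uncoated 65ZnO: t1/2 = 0.3 h (fast) and 42 h (slow); f = 0.63 is hypothetical
remaining_2d = biexponential_retention(48, 0.63, 0.3, 42)   # ~0.17 of the dose
```

With these parameters the model also predicts ~2% remaining at 7 days, in line with the reported 1.9%, which is why a two-phase model fits this data set better than a single exponential.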
[ "Background", "Synthesis and characterization of ZnO and SiO2-coated ZnO NPs", "Pulmonary responses to intratracheally instilled ZnO and SiO2-coated ZnO", "Pharmacokinetics of intratracheally-instilled uncoated or SiO2-coated 65ZnO NPs", "Pharmacokinetics of gavaged uncoated or SiO2-coated 65ZnO NPs", "Synthesis of ZnO and SiO2-coated ZnO NPs", "Characterization of ZnO and SiO2-coated ZnO NPs", "Neutron activation of NPs", "Preparation and characterization of ZnO and SiO2 -coated ZnO nanoparticle suspensions", "Animals", "Pulmonary responses – Bronchoalveolar lavage and analyses", "Pharmacokinetics of 65Zn", "Statistical analyses", "Competing interests", "Authors’ contributions" ]
[ "Zinc oxide nanoparticles (ZnO NPs) are widely used in consumer products, including ceramics, cosmetics, plastics, sealants, toners and foods [1]. They are a common component in a range of technologies, including sensors, light emitting diodes, and solar cells due to their semiconducting and optical properties [2]. ZnO NPs filter both UV-A and UV-B radiation but remain transparent in the visible spectrum [3]. For this reason, ZnO NPs are commonly added to sunscreens [4] and other cosmetic products. Furthermore, advanced technologies have made the large-scale production of ZnO NPs possible [5]. Health concerns have been raised due to the growing evidence of the potential toxicity of ZnO NPs. Reduced pulmonary function in humans was observed 24 hours after inhalation of ultrafine (<100 nm) ZnO [6]. It has also been shown to cause DNA damage in HepG2 cells and neurotoxicity due to the formation of reactive oxygen species (ROS) [7],[8]. Recently, others and we have demonstrated that ZnO NPs can cause DNA damage in TK6 and H9T3 cells [9],[10]. ZnO NPs dissolve in aqueous solutions, releasing Zn2+ ions that may in turn cause cytotoxicity and DNA damage to cells [9],[11]–[13].\nStudies have shown that changing the surface characteristics of certain NPs may alter the biologic responses of cells [14],[15]. Developing strategies to reduce the toxicity of ZnO NPs without changing their core properties (safer-by-design approach) is an active area of research. Xia et al. [16] showed that doping ZnO NPs with iron could reduce the rate of ZnO dissolution and the toxic effects in zebra fish embryos and rat and mouse lungs [16]. We also showed that encapsulation of ZnO NPs with amorphous SiO2 reduced the dissolution of Zn2+ ions in biological media, and reduced cell cytotoxicity and DNA damage in vitro [17]. Surface characteristics of NPs, such as their chemical and molecular structure, influence their pharmacokinetic behavior [18]–[20]. 
Surface chemistry influences the adsorption of phospholipids, proteins and other components of lung surfactants in the formation of a particle corona, which may regulate the overall nanoparticle pharmacokinetics and biological responses [19]. Coronas have been shown to influence the dynamics of cellular uptake, localization, biodistribution, and biological effects of NPs [21],[22].\nCoating of NPs with amorphous silica is a promising technique to enhance colloidal stability and biocompatibility for theranostics [23],[24]. A recent study by Chen et al. showed that coating gold nanorods with silica can amplify the photoacoustic response without altering optical absorption [25]. Furthermore, coating magnetic NPs with amorphous silica enhances particle stability and reduces its cytotoxicity in a human bronchial epithelium cell line model [26]. Amorphous SiO2 is generally considered relatively biologically inert [27], and is commonly used in cosmetic and personal care products, and as a negative control in some nanoparticle toxicity screening assays [28]. However, Napierska et al. demonstrated the size-dependent cytotoxic effects of amorphous silica in vitro[29]. They concluded that the surface area of amorphous silica is an important determinant of cytotoxicity. An in vivo study using a rat model demonstrated that the pulmonary toxicity and inflammatory responses to amorphous silica are transient [30]. Moreover, SiO2-coated nanoceria induced minimal lung injury and inflammation [31]. It has also been demonstrated that SiO2 coating improves nanoparticle biocompatibility in vitro for a variety of nanomaterials, including Ag [32], Y2O3[33], and ZnO [17]. We have recently developed methods for the gas-phase synthesis of metal and metal oxide NPs by a modified flame spray pyrolysis (FSP) reactor. Coating metal oxide NPs with amorphous SiO2 involves the encapsulation of the core NPs in flight with a nanothin amorphous SiO2 layer [34]. 
An important advantage of flame-made NPs is their high purity. Flame synthesis is a high-temperature process that leaves no organic contamination on the particle surface. Furthermore, the presence of SiO2 does not influence the optoelectronic properties of the core ZnO nanorods. Thus, they retain their desired high transparency in the visible spectrum and UV absorption rendering them suitable for UV blocking applications [17]. The SiO2 coating has been demonstrated to reduce ZnO nanorod toxicity by mitigating their dissolution and generation of ions in solutions, and by preventing the immediate contact between the core particle and mammalian cells. For ZnO NPs, such a hermetic SiO2 coating reduces ZnO dissolution while preserving the optical properties and band-gap energy of the ZnO core [17].\nStudies examining nanoparticle structure-pharmacokinetic relationships have established that plasma protein binding profiles correlate with circulation half-lives [27]. However, studies evaluating the relationship between surface modifications, lung clearance kinetics, and pulmonary effects are lacking. Thus, we sought to study the effects of amorphous SiO2 coating on ZnO pulmonary effects and on pharmacokinetics of 65Zn when radioactive 65ZnO and SiO2-coated 65ZnO nanorods are administered by intratracheal instillation (IT) and gavage. We explored how the SiO2 coating affected acute toxicity and inflammatory responses in the lungs, as well as 65Zn clearance and tissue distribution after IT instillation over a period of 28 days. The translocation of the 65Zn from the stomach to other organs was also quantified for up to 7 days after gavage. Finally, we examined how the SiO2 coating affected the urinary and fecal excretion of 65Zn during the entire observation period.", "Uncoated and SiO2-coated ZnO NPs were made by flame spray pyrolysis using the Versatile Engineered Nanomaterial Generation System at Harvard University [35],[17]. 
The detailed physicochemical and morphological characterization of these NPs was reported earlier [36],[17]. The ZnO primary NPs had a rod-like shape with an aspect ratio of 2:1 to 8:1 (Figure 1) [37],[17]. Flame-made nanoparticles typically exhibit a lognormal size distribution with a geometric standard deviation of σg = 1.45 [38]. To create the SiO2-coated ZnO nanorods, a nanothin (~4.6 ± 2.5 nm) amorphous SiO2 layer encapsulated the ZnO core [17] (Figure 1B). The amorphous nature of the silica coating was verified by X-ray diffraction (XRD) and electron microscopy analyses [17]. The average crystal sizes of uncoated and SiO2-coated NPs were 29 and 28 nm, respectively [39]. Their specific surface areas (SSA) were 41 m2/g (uncoated) and 55 m2/g (SiO2-coated) [40]. The lower density of SiO2 compared to ZnO contributes to the higher SSA of the SiO2-coated ZnO than uncoated NPs. The extent of the SiO2 coating was assessed by X-ray photoelectron spectroscopy and photocatalytic experiments. These data showed that less than 5% of ZnO NPs were uncoated, as some of the freshly-formed core ZnO NPs may escape the coating process [41],[17]. Furthermore, the ZnO dissolution of the SiO2-coated nanorods was significantly lower than the uncoated NPs in culture medium over 24 h [17]. The Zn2+ ion concentration reached equilibrium after 6 hours for the coated NPs (~20%), while the uncoated ones dissolved at a constant rate up to 24 hours [17]. For both IT and gavage routes, the NPs were dispersed in deionized water by sonication at 242 J/ml. The hydrodynamic diameters were 165 ± 3 nm (SiO2-coated) and 221 ± 3 nm (uncoated). The zeta potential values in these suspensions were 23 ± 0.4 mV (uncoated) and −16.2 ± 1.2 mV (SiO2-coated). The zeta potential differences between these two types of NPs were observed at a pH range of 2.5-8.0 [17], which includes the pH conditions in the airways/alveoli and small and large intestines. 
The post-irradiation hydrodynamic diameter and zeta potential in water suspension were similar to those of pristine NPs used in the lung toxicity/inflammation experiments.\nPhysicochemical characterization of test materials. Transmission electron micrograph of uncoated ZnO (A) and SiO2-coated ZnO (B) NPs. The thin silica coating of approximately 5 nm is shown in B, inset.", "We compared the pulmonary responses to uncoated versus SiO2-coated ZnO NPs at 24 hours after IT instillation in rats. Groups of 4–6 rats received 0, 0.2 or 1 mg/kg of either type of NP. We found that IT-instilled coated and uncoated ZnO NPs induced a dose-dependent injury and inflammation evident by increased neutrophils, elevated levels of myeloperoxidase (MPO), albumin and lactate dehydrogenase (LDH) in the bronchoalveolar lavage (BAL) fluid at 24 hours post-instillation (Figure 2). At the lower dose of 0.2 mg/kg, only the SiO2-coated ZnO instilled rats (n = 4) showed elevated neutrophils, LDH, MPO, and albumin levels. But at 1 mg/kg, both types of NPs induced injury and inflammation to the same extent, except that MPO was higher in rats instilled with SiO2-coated ZnO NPs.\nCellular and biochemical parameters of lung injury and inflammation in bronchoalveolar lavage (BAL). Tracheally instilled ZnO and SiO2-coated ZnO induced a dose-dependent lung injury and inflammation at 24 hours. (A) Significant increases in BAL neutrophils were observed at 1 mg/kg of both NPs (n = 6/group). At the lower dose of 0.2 mg/kg (n = 4-6/group), only the SiO2-coated ZnO (n = 4) induced significant neutrophil influx in the lungs. (B) Similarly, significant increases in LDH, myeloperoxidase and albumin were observed at 1 mg/kg of both NPs, and at 0.2 mg of SiO2-coated ZnO. (*P < 0.05, vs. control, #P < 0.05, SiO2-coated ZnO versus ZnO).", "Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. 
Overall, both 65ZnO NPs and SiO2-coated 65ZnO NPs exhibited a biphasic clearance with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours). No significant difference was observed in the initial clearance between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small (in magnitude) differences. At 28 days, only 0.14 ± 0.01% of SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs.\nLung clearance of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid, with only 16-18% of the dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. By the end of the experiment, 65Zn was nearly gone (less than 0.3% of the dose). Although statistically higher levels of 65ZnO NPs than of SiO2-coated 65ZnO NPs remained in the lungs at 7 and 28 days, the graphs show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days, n = 5 at 28 days).\nHowever, analyses of the selected extrapulmonary tissues showed significant differences (Figure 4). Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. These tissue differences became more pronounced at later time points. At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum than from SiO2-coated 65ZnO NPs (Table 1). 
At 7 and 28 days, the overall differences in the 65Zn contents in these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin. Interestingly, higher percentages of the 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where they increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher for uncoated than for SiO2-coated 65ZnO NPs (Tables 1, 2, 3 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as in organs not analyzed, such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues.\nExtrapulmonary distribution of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined. These included blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59-72% of the dose was detected in extrapulmonary organs. 65Zn levels then decreased over time, to 25-37% by day 28. 
Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs.\nTissue distribution of 65Zn at 2 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.\nTissue distribution of 65Zn at 7 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.\nTissue distribution of 65Zn at 28 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE% instilled dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.\nDistribution of 65Zn 7 days after gavage administration of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE% gavaged dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.\nUrinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A).\nFecal and urinary excretion of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. 
Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days. The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). Only about 1% of the 65Zn dose was excreted in the urine (B).", "Absorption of 65Zn from the gut was studied at 5 minutes and 7 days post-gavage of uncoated or SiO2-coated 65ZnO NPs. Nearly 100% of the dose was recovered at 5 minutes in the stomach for both types of NPs (Figure 6A). The 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). However, significantly higher percentages of the total dose were still detected in the blood, bone marrow, skin, testes, kidneys, spleen and liver in rats gavaged with uncoated 65ZnO NPs (data not shown). After 7 days, low levels of 65Zn from both types of NPs (<1% of the original dose) were measured in all organs except the bone, skeletal muscle and skin (Figure 6B, Table 4). Higher levels of 65Zn were observed in the skeletal muscle from uncoated than from coated 65ZnO NPs at this time point (Table 4). However, similar to the IT-instillation data, the thoracic lymph nodes retained more 65Zn from the SiO2-coated than from the uncoated 65ZnO NPs. Urinary excretion of 65Zn was also much lower than fecal excretion post-gavage. The urinary excretion of 65Zn in rats gavaged with SiO2-coated 65ZnO NPs was significantly higher than in rats gavaged with uncoated 65ZnO NPs (Figure 7B). The fecal excretion in the gavaged rats was higher than in IT-instilled rats. Despite a significant difference in fecal excretion during the first day post-gavage, nearly 95% of the dose for both types of NPs was excreted in the feces by day 7 (Figure 7A).\nTissue distribution of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are % dose of administered 65Zn in different organs. 
(A) At 5 minutes post-gavage, the 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). (B) At day 7, significantly more 65Zn was absorbed and retained in non-GIT tissues (6.9% for uncoated, 6.0% for coated 65ZnO NPs). Significantly more 65Zn was measured in skeletal muscle in rats gavaged with uncoated versus coated 65ZnO NPs. (Note: RBC: red blood cell; sk muscl: skeletal muscle; sm int: small intestine; large int: large intestine).\nFecal and urinary excretion of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 7 days. Similar to the IT-instilled groups, the predominant excretion pathway was via the feces. Ninety-five percent of the administered 65Zn was excreted in both groups by day 7 (A). Only 0.1% of the 65Zn dose was excreted in the urine (B).", "The synthesis of these NPs was reported in detail elsewhere [17]. In brief, uncoated and SiO2-coated ZnO particles were synthesized by flame spray pyrolysis (FSP) of zinc naphthenate (Sigma-Aldrich, St. Louis, MO, USA) dissolved in ethanol (Sigma-Aldrich) at a precursor molarity of 0.5 M. The precursor solution was fed through a stainless steel capillary at 5 ml/min, dispersed by 5 L/min O2 (purity > 99%, pressure drop at nozzle tip: pdrop = 2 bar) (Air Gas, Berwyn, PA, USA) and combusted. A premixed methane-oxygen (1.5 L/min, 3.2 L/min) supporting flame was used to ignite the spray. Oxygen (Air Gas, purity > 99%) sheath gas was used at 40 L/min. Core particles were coated in-flight by the swirl-injection of hexamethyldisiloxane (HMDSO) (Sigma Aldrich) through a torus ring with 16 jets at an injection height of 200 mm above the FSP burner. A total gas flow of 16 L/min, consisting of N2 carrying HMDSO vapor and pure N2, was injected through the torus ring jets. 
HMDSO vapor was obtained by bubbling N2 gas through liquid HMDSO (500 ml), maintained at a controlled temperature using a temperature-controlled water bath.", "The morphology of these NPs was examined by electron microscopy. Uncoated and SiO2-coated ZnO NPs were dispersed in ethanol at a concentration of 1 mg/ml in 50 ml polyethylene conical tubes and sonicated at 246 J/ml (Branson Sonifier S-450A, Swedesboro, NJ, USA). The samples were deposited onto lacey carbon TEM grids. All grids were imaged with a JEOL 2100. The primary particle size was determined by X-ray diffraction (XRD). XRD patterns for uncoated ZnO and SiO2-coated ZnO NPs were obtained using a Scintag XDS2000 powder diffractometer (Cu Kα, λ = 0.154 nm, 40 kV, 40 mA, stepsize = 0.02°). One hundred mg of each sample was placed onto the diffractometer stage and analyzed from a range of 2θ = 20-70°. Major diffraction peaks were identified using the Inorganic Crystal Structure Database (ICSD) for wurtzite (ZnO) crystals. The crystal size was determined by applying the Debye-Scherrer Shape Equation to the Gaussian fit of the major diffraction peak. The specific surface area was obtained using the Brunauer-Emmet-Teller (BET) method. The samples were degassed in N2 for at least 1 hour at 150°C before obtaining five-point N2 adsorption at 77 K (Micrometrics Tristar 3000, Norcross, GA, USA).", "The NPs with and without the SiO2 coating were neutron-activated at the Massachusetts Institute of Technology (MIT) Nuclear Reactor Laboratory (Cambridge, MA). Samples were irradiated with a thermal neutron flux of 5 × 1013 n/cm2s for 120 hours. The resulting 65Zn radioisotope has a half-life of 244.3 days and a primary gamma energy peak of 1115 keV. The relative specific activities for 65Zn were 37.7 ± 5.0 kBq/mg for SiO2-coated 65ZnO and 41.7 ± 7.2 kBq/mg for 65ZnO NPs.", "Uncoated and SiO2-coated ZnO NPs were dispersed using a protocol previously described [82],[36]. 
The NPs were dispersed in deionized water at a concentration of 0.66 mg/ml (IT) or 10 mg/ml (gavage). Sonication was performed in deionized water to minimize the formation of reactive oxygen species. Samples were thoroughly mixed immediately prior to instillation. Dispersions of NPs were analyzed for hydrodynamic diameter (dH), polydispersity index (PdI), and zeta potential (ζ) by DLS using a Zetasizer Nano-ZS (Malvern Instruments, Worcestershire, UK).", "The protocols used in this study were approved by the Harvard Medical Area Animal Care and Use Committee. Nine-week-old male Wistar Han rats were purchased from Charles River Laboratories (Wilmington, MA). Rats were housed in pairs in polypropylene cages and allowed to acclimate for 1 week before the studies were initiated. Rats were maintained on a 12-hour light/dark cycle. Food and water were provided ad libitum.", "This experiment was performed to determine pulmonary responses to instilled NPs. A group of rats (mean wt. 264 ± 15 g) was intratracheally instilled with either an uncoated ZnO or SiO2 -coated ZnO NP suspension at a 0, 0.2 or 1.0 mg/kg dose. The particle suspensions were delivered to the lungs through the trachea in a volume of 1.5 ml/kg. Twenty-four hours later, rats were euthanized via exsanguination with a cut in the abdominal aorta while under anesthesia. The trachea was exposed and cannulated. The lungs were then lavaged 12 times, with 3 ml of 0.9% sterile PBS, without calcium and magnesium ions. The cells of all washes were separated from the supernatant by centrifugation (350 × g at 4°C for 10 min). Total cell count and hemoglobin measurements were made from the cell pellets. After staining the cells, a differential cell count was performed. 
The supernatant of the two first washes was clarified via centrifugation (14,500 × g at 4°C for 30 min), and used for standard spectrophotometric assays for lactate dehydrogenase (LDH), myeloperoxidase (MPO) and albumin.", "The mean weight of rats at the start of the experiment was 285 ± 3 g. Two groups of rats (29 rats/NP) were intratracheally instilled with 65ZnO NPs or with SiO2-coated 65ZnO NPs at a 1 mg/kg dose (1.5 ml/kg, 0.66 mg/ml). Rats were placed in metabolic cages containing food and water, as previously described. Twenty four-hour samples of feces and urine were collected at selected time points (0–24 hours, 2–3 days, 6–7 days, 9–10 days, 13–14 days, 20–21 days, and 27–28 days post-IT instillation). Fecal/urine collection was accomplished by placing each rat in individual metabolic cage containing food and water during each 24-hour period. All samples were analyzed for total 65Zn activity, and expressed as % of instilled 65Zn dose. Fecal and urine clearance curves were generated and were used to estimate the daily cumulative excretion. Groups of 8 rats were humanely sacrificed at 5 minutes, 2 days, 7 days, and 5 rats/group at 28 days. Therefore, the number of collected fecal/urine samples decreased over time.\nAnother cohort of 20 rats was dosed with 65ZnO (n = 10) or SiO2-coated 65ZnO (n = 10) by gavage at a 5 mg/kg dose (0.5 ml/kg, 10 mg/ml). One group of 5 rats was humanely sacrificed at 5 minutes and immediately dissected. Another group of 5 rats was individually placed in metabolic cages, as previously described, and 24-hour samples of urine and feces were collected at 0–1 day, 2–3 days, and 6–7 days post-gavage. 
The remaining rats were sacrificed at 7 days.\nAt each endpoint, rats were euthanized and dissected, and the whole brain, spleen, kidneys, heart, liver, lungs, GI tract, testes, thoracic lymph nodes, blood (10 ml, separated into plasma and RBC), bone marrow (from femoral bones), bone (both femurs), skin (2 × 3 inches), and skeletal muscle (from 4 sites) were collected. The 65Zn radioactivity present in each sample was measured with a WIZARD Gamma Counter (PerkinElmer, Inc., Waltham, MA). The number of disintegrations per minute was determined from the counts per minute and the counting efficiency. The efficiency of the gamma counter was derived from counting multiple aliquots of NP samples and relating them to the specific activities measured at Massachusetts Institute of Technology Nuclear Reactor. We estimated that the counter had an efficiency of ~52%. The 65Zn radioactivity was expressed as kBq/g tissue and the percentage of administered dose in each organ. All radioactivity data were adjusted for physical decay over the entire observation period. The radioactivity in organs and tissues not measured in their entirety was estimated as a percentage of total body weight as: skeletal muscle, 40%; bone marrow, 3.2%; peripheral blood, 7%; skin, 19%; and bone, 6% [83],[84]. Based on the 65Zn specific activity (kBq/mg NP) and tissue 65Zn concentration, the amount of Zn derived from each NP was calculated for each tissue examined (ng Zn/g tissue).", "Differences in the 65Zn tissue distribution and in cellular and biochemical parameters measured in bronchoalveolar lavage between groups were analyzed using multivariate analysis of variance (MANOVA) with REGWQ (Ryan-Einot-Gabriel-Welch based on range) and Tukey post hoc tests using SAS Statistical Analysis software (SAS Institute, Cary, NC). The lung clearance half-life was estimated by a two-phase estimation by a biexponential model using the R Program v. 
3.1.0 [85].", "The authors declare that they have no competing interests.", "NVK, KMM, RMM, and JDB designed and performed the lung toxicity and pharmacokinetic studies. TCD performed statistical analyses. PD and GAS synthesized and characterized the NPs. This manuscript was written by NVK, RMM, and KMM and revised by JDB, GS, PD and RMM. All authors read, corrected and approved the manuscript." ]
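The two quantitative steps in the Methods above — converting gamma counts to % of administered dose (counter efficiency ~52%, with correction for physical decay of 65Zn, half-life ~244 days) and estimating biphasic lung-clearance half-lives from a biexponential model — can be sketched as follows. This is a hedged illustration only: the study fit its model in R, SciPy is substituted here, and the retention values below are made-up points loosely shaped like the reported time course, not the raw data.

```python
import numpy as np
from scipy.optimize import curve_fit

ZN65_HALF_LIFE_DAYS = 243.9   # physical half-life of 65Zn
COUNTER_EFFICIENCY = 0.52     # gamma-counter efficiency reported in the Methods

def percent_administered_dose(cpm, days_since_dosing, administered_dpm):
    """Counts per minute -> % of administered dose, correcting for counter
    efficiency and for physical decay of 65Zn since dosing."""
    dpm = cpm / COUNTER_EFFICIENCY
    decay = 0.5 ** (days_since_dosing / ZN65_HALF_LIFE_DAYS)
    return 100.0 * dpm / (administered_dpm * decay)

def biexponential(t, a, k_fast, b, k_slow):
    """Two-phase retention model: fast-clearing pool + slow-clearing pool."""
    return a * np.exp(-k_fast * t) + b * np.exp(-k_slow * t)

# Illustrative lung-retention points (% dose vs. hours post-IT instillation).
t_hours = np.array([0.083, 24.0, 48.0, 168.0, 672.0])
retained = np.array([95.0, 35.0, 16.0, 1.9, 0.3])

params, _ = curve_fit(biexponential, t_hours, retained,
                      p0=(80.0, 2.0, 20.0, 0.02), maxfev=20000)
half_lives = sorted(np.log(2) / np.array([params[1], params[3]]))
print(f"fast t1/2 ~ {half_lives[0]:.2f} h, slow t1/2 ~ {half_lives[1]:.1f} h")
```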
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Results", "Synthesis and characterization of ZnO and SiO2-coated ZnO NPs", "Pulmonary responses to intratracheally instilled ZnO and SiO2-coated ZnO", "Pharmacokinetics of intratracheally-instilled uncoated or SiO2-coated 65ZnO NPs", "Pharmacokinetics of gavaged uncoated or SiO2-coated 65ZnO NPs", "Discussion", "Conclusions", "Methods", "Synthesis of ZnO and SiO2-coated ZnO NPs", "Characterization of ZnO and SiO2-coated ZnO NPs", "Neutron activation of NPs", "Preparation and characterization of ZnO and SiO2 -coated ZnO nanoparticle suspensions", "Animals", "Pulmonary responses – Bronchoalveolar lavage and analyses", "Pharmacokinetics of 65Zn", "Statistical analyses", "Competing interests", "Authors’ contributions" ]
[ "Zinc oxide nanoparticles (ZnO NPs) are widely used in consumer products, including ceramics, cosmetics, plastics, sealants, toners and foods [1]. They are a common component in a range of technologies, including sensors, light emitting diodes, and solar cells due to their semiconducting and optical properties [2]. ZnO NPs filter both UV-A and UV-B radiation but remain transparent in the visible spectrum [3]. For this reason, ZnO NPs are commonly added to sunscreens [4] and other cosmetic products. Furthermore, advanced technologies have made the large-scale production of ZnO NPs possible [5]. Health concerns have been raised due to the growing evidence of the potential toxicity of ZnO NPs. Reduced pulmonary function in humans was observed 24 hours after inhalation of ultrafine (<100 nm) ZnO [6]. It has also been shown to cause DNA damage in HepG2 cells and neurotoxicity due to the formation of reactive oxygen species (ROS) [7],[8]. Recently, others and we have demonstrated that ZnO NPs can cause DNA damage in TK6 and H9T3 cells [9],[10]. ZnO NPs dissolve in aqueous solutions, releasing Zn2+ ions that may in turn cause cytotoxicity and DNA damage to cells [9],[11]–[13].\nStudies have shown that changing the surface characteristics of certain NPs may alter the biologic responses of cells [14],[15]. Developing strategies to reduce the toxicity of ZnO NPs without changing their core properties (safer-by-design approach) is an active area of research. Xia et al. [16] showed that doping ZnO NPs with iron could reduce the rate of ZnO dissolution and the toxic effects in zebra fish embryos and rat and mouse lungs [16]. We also showed that encapsulation of ZnO NPs with amorphous SiO2 reduced the dissolution of Zn2+ ions in biological media, and reduced cell cytotoxicity and DNA damage in vitro [17]. Surface characteristics of NPs, such as their chemical and molecular structure, influence their pharmacokinetic behavior [18]–[20]. 
Surface chemistry influences the adsorption of phospholipids, proteins and other components of lung surfactants in the formation of a particle corona, which may regulate the overall nanoparticle pharmacokinetics and biological responses [19]. Coronas have been shown to influence the dynamics of cellular uptake, localization, biodistribution, and biological effects of NPs [21],[22].\nCoating of NPs with amorphous silica is a promising technique to enhance colloidal stability and biocompatibility for theranostics [23],[24]. A recent study by Chen et al. showed that coating gold nanorods with silica can amplify the photoacoustic response without altering optical absorption [25]. Furthermore, coating magnetic NPs with amorphous silica enhances particle stability and reduces their cytotoxicity in a human bronchial epithelium cell line model [26]. Amorphous SiO2 is generally considered relatively biologically inert [27], and is commonly used in cosmetic and personal care products, and as a negative control in some nanoparticle toxicity screening assays [28]. However, Napierska et al. demonstrated the size-dependent cytotoxic effects of amorphous silica in vitro [29]. They concluded that the surface area of amorphous silica is an important determinant of cytotoxicity. An in vivo study using a rat model demonstrated that the pulmonary toxicity and inflammatory responses to amorphous silica are transient [30]. Moreover, SiO2-coated nanoceria induced minimal lung injury and inflammation [31]. It has also been demonstrated that SiO2 coating improves nanoparticle biocompatibility in vitro for a variety of nanomaterials, including Ag [32], Y2O3 [33], and ZnO [17]. We have recently developed methods for the gas-phase synthesis of metal and metal oxide NPs by a modified flame spray pyrolysis (FSP) reactor. Coating metal oxide NPs with amorphous SiO2 involves the encapsulation of the core NPs in flight with a nanothin amorphous SiO2 layer [34]. 
An important advantage of flame-made NPs is their high purity. Flame synthesis is a high-temperature process that leaves no organic contamination on the particle surface. Furthermore, the presence of SiO2 does not influence the optoelectronic properties of the core ZnO nanorods. Thus, they retain their desired high transparency in the visible spectrum and UV absorption, rendering them suitable for UV-blocking applications [17]. The SiO2 coating has been demonstrated to reduce ZnO nanorod toxicity by mitigating their dissolution and generation of ions in solutions, and by preventing immediate contact between the core particle and mammalian cells. For ZnO NPs, such a hermetic SiO2 coating reduces ZnO dissolution while preserving the optical properties and band-gap energy of the ZnO core [17].\nStudies examining nanoparticle structure-pharmacokinetic relationships have established that plasma protein binding profiles correlate with circulation half-lives [27]. However, studies evaluating the relationship between surface modifications, lung clearance kinetics, and pulmonary effects are lacking. Thus, we sought to study the effects of the amorphous SiO2 coating on the pulmonary effects of ZnO and on the pharmacokinetics of 65Zn when radioactive 65ZnO and SiO2-coated 65ZnO nanorods are administered by intratracheal instillation (IT) and gavage. We explored how the SiO2 coating affected acute toxicity and inflammatory responses in the lungs, as well as 65Zn clearance and tissue distribution, after IT instillation over a period of 28 days. The translocation of 65Zn from the stomach to other organs was also quantified for up to 7 days after gavage. 
Finally, we examined how the SiO2 coating affected the urinary and fecal excretion of 65Zn during the entire observation period.", " Synthesis and characterization of ZnO and SiO2-coated ZnO NPs Uncoated and SiO2-coated ZnO NPs were made by flame spray pyrolysis using the Versatile Engineered Nanomaterial Generation System at Harvard University [35],[17]. The detailed physicochemical and morphological characterization of these NPs was reported earlier [36],[17]. The ZnO primary NPs had a rod-like shape with an aspect ratio of 2:1 to 8:1 (Figure 1) [37],[17]. Flame-made nanoparticles typically exhibit a lognormal size distribution with a geometric standard deviation of σg = 1.45 [38]. To create the SiO2-coated ZnO nanorods, a nanothin (~4.6 ± 2.5 nm) amorphous SiO2 layer encapsulated the ZnO core [17] (Figure 1B). The amorphous nature of the silica coating was verified by X-ray diffraction (XRD) and electron microscopy analyses [17]. The average crystal sizes of uncoated and SiO2-coated NPs were 29 and 28 nm, respectively [39]. Their specific surface areas (SSA) were 41 m2/g (uncoated) and 55 m2/g (SiO2-coated) [40]. The lower density of SiO2 compared to ZnO contributes to the higher SSA of the SiO2-coated relative to the uncoated NPs. The extent of the SiO2 coating was assessed by X-ray photoelectron spectroscopy and photocatalytic experiments. These data showed that less than 5% of ZnO NPs were uncoated, as some of the freshly-formed core ZnO NPs may escape the coating process [41],[17]. Furthermore, the ZnO dissolution of the SiO2-coated nanorods was significantly lower than that of the uncoated NPs in culture medium over 24 h [17]. The Zn2+ ion concentration reached equilibrium after 6 hours for the coated NPs (~20%), while the uncoated ones dissolved at a constant rate up to 24 hours [17]. For both IT and gavage routes, the NPs were dispersed in deionized water by sonication at 242 J/ml. The hydrodynamic diameters were 165 ± 3 nm (SiO2-coated) and 221 ± 3 nm (uncoated). 
The zeta potential values in these suspensions were 23 ± 0.4 mV (uncoated) and −16.2 ± 1.2 mV (SiO2-coated). The zeta potential differences between these two types of NPs were observed at a pH range of 2.5-8.0 [17], which includes the pH conditions in the airways/alveoli and small and large intestines. The post-irradiation hydrodynamic diameter and zeta potential in water suspension were similar to those of pristine NPs used in the lung toxicity/inflammation experiments.\nPhysicochemical characterization of test materials. Transmission electron micrograph of uncoated ZnO (A) and SiO2-coated ZnO (B) NPs. The thin silica coating of approximately 5 nm is shown in B, inset.\n Pulmonary responses to intratracheally instilled ZnO and SiO2-coated ZnO We compared the pulmonary responses to uncoated versus SiO2-coated ZnO NPs at 24 hours after IT instillation in rats. Groups of 4–6 rats received 0, 0.2 or 1 mg/kg of either type of NP. We found that IT-instilled coated and uncoated ZnO NPs induced a dose-dependent injury and inflammation evident by increased neutrophils, elevated levels of myeloperoxidase (MPO), albumin and lactate dehydrogenase (LDH) in the bronchoalveolar lavage (BAL) fluid at 24 hours post-instillation (Figure 2). At the lower dose of 0.2 mg/kg, only the SiO2-coated ZnO instilled rats (n = 4) showed elevated neutrophils, LDH, MPO, and albumin levels. 
But at 1 mg/kg, both types of NPs induced injury and inflammation to the same extent, except that MPO was higher in rats instilled with SiO2-coated ZnO NPs.\nCellular and biochemical parameters of lung injury and inflammation in bronchoalveolar lavage (BAL). Tracheally instilled ZnO and SiO2-coated ZnO induced a dose-dependent lung injury and inflammation at 24 hours. (A) Significant increases in BAL neutrophils were observed at 1 mg/kg of both NPs (n = 6/group). At the lower dose of 0.2 mg/kg (n = 4-6/group), only the SiO2-coated ZnO (n = 4) induced significant neutrophil influx in the lungs. (B) Similarly, significant increases in LDH, myeloperoxidase and albumin were observed at 1 mg/kg of both NPs, and at 0.2 mg/kg of SiO2-coated ZnO. (*P < 0.05, vs. control, #P < 0.05, SiO2-coated ZnO versus ZnO).\n Pharmacokinetics of intratracheally-instilled uncoated or SiO2-coated 65ZnO NPs Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. Overall, both 65ZnO NPs and SiO2-coated 65ZnO NPs exhibited a biphasic clearance with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours). No significant difference was observed in the initial clearance between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small (in magnitude) differences. At 28 days, only 0.14 ± 0.01% of SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs.\nLung clearance of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid, with only 16-18% of the dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. By the end of the experiment, 65Zn was nearly gone (less than 0.3% of the dose). Although statistically higher levels of 65ZnO NPs than of SiO2-coated 65ZnO NPs remained in the lungs at 7 and 28 days, the graphs show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days; n = 5 at 28 days).\nHowever, analyses of the selected extrapulmonary tissues showed significant differences (Figure 4). 
Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. These tissue differences became more pronounced at later time points. At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum than from SiO2-coated 65ZnO NPs (Table 1). At 7 and 28 days, the overall differences in the 65Zn contents in these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin. Interestingly, higher percentages of 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where they increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher in uncoated than SiO2-coated 65ZnO NPs (Tables 1, 2, 3 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as organs not analyzed such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues.\nExtrapulmonary distribution of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined. These included blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59-72% of the dose was detected in extrapulmonary organs. 
Then, 65Zn levels decreased over time to 25-37% by day 28. Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs.\n\nTissue distribution of 65Zn at 2 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats\n\nData are mean ± SE % instilled dose, n = 8/group.\nTotal recovered = sum of 65Zn in analyzed organs, feces and urine.\n*P < 0.05, ZnO > SiO2-coated ZnO.\n#P < 0.05, SiO2-coated ZnO > ZnO.\n\nTissue distribution of 65Zn at 7 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats\n\nData are mean ± SE % instilled dose, n = 8/group.\nTotal recovered = sum of 65Zn in analyzed organs, feces and urine.\n*P < 0.05, ZnO > SiO2-coated ZnO.\n#P < 0.05, SiO2-coated ZnO > ZnO.\n\nTissue distribution of 65Zn at 28 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats\n\nData are mean ± SE % instilled dose, n = 5/group.\nTotal recovered = sum of 65Zn in analyzed organs, feces and urine.\n*P < 0.05, ZnO > SiO2-coated ZnO.\n#P < 0.05, SiO2-coated ZnO > ZnO.\n\nDistribution of 65Zn 7 days after gavage administration of 65ZnO or SiO2-coated 65ZnO NPs in rats\n\nData are mean ± SE % gavaged dose, n = 5/group.\nTotal recovered = sum of 65Zn in analyzed organs, feces and urine.\n*P < 0.05, ZnO > SiO2-coated ZnO.\n#P < 0.05, SiO2-coated ZnO > ZnO.\nUrinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 
46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A).\nFecal and urinary excretion of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days. The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). Only about 1% of the 65Zn dose was excreted in the urine (B).\n Pharmacokinetics of gavaged uncoated or SiO2-coated 65ZnO NPs Absorption of 65Zn from the gut was studied at 5 minutes and 7 days post-gavage of uncoated or SiO2-coated 65ZnO NPs. Nearly 100% of the dose was recovered at 5 minutes in the stomach for both types of NPs (Figure 6A). The 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). However, significantly higher percentages of the total dose were still detected in the blood, bone marrow, skin, testes, kidneys, spleen and liver in rats gavaged with uncoated 65ZnO NPs (data not shown). After 7 days, low levels of 65Zn from both types of NPs (<1% of the original dose) were measured in all organs except the bone, skeletal muscle and skin (Figure 6B, Table 4). Higher levels of 65Zn were observed in the skeletal muscle from uncoated than from coated 65ZnO NPs at this time point (Table 4). However, similar to the IT-instillation data, the thoracic lymph nodes retained more 65Zn from the SiO2-coated than the uncoated 65ZnO NPs. Urinary excretion of 65Zn was also much lower than fecal excretion post-gavage. 
The urinary excretion of 65Zn in rats gavaged with SiO2-coated 65ZnO NPs was significantly higher than in rats gavaged with uncoated 65ZnO NPs (Figure 7B). The fecal excretion in the gavaged rats was higher than in IT-instilled rats. Despite a significant difference in fecal excretion during the first day post-gavage, nearly 95% of the dose for both types of NPs was excreted in the feces by day 7 (Figure 7A).\nTissue distribution of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are % dose of administered 65Zn in different organs. (A) At 5 minutes post-gavage, the 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). (B) At day 7, significantly more 65Zn was absorbed and retained in non-GIT tissues (6.9% for uncoated, 6.0% for coated 65ZnO NPs). Significantly more 65Zn was measured in skeletal muscle in rats gavaged with uncoated versus coated 65ZnO NPs. (Note: RBC: red blood cell; sk muscl: skeletal muscle; sm int: small intestine; large int: large intestine).\nFecal and urinary excretion of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 7 days. Similar to the IT-instilled groups, the predominant excretion pathway was via the feces. Ninety-five percent of the administered 65Zn was excreted in both groups by day 7 (A). Only 0.1% of the 65Zn dose was excreted in the urine (B).", "Uncoated and SiO2-coated ZnO NPs were made by flame spray pyrolysis using the Versatile Engineered Nanomaterial Generation System at Harvard University [35],[17]. The detailed physicochemical and morphological characterization of these NPs was reported earlier [36],[17]. The ZnO primary NPs had a rod-like shape with an aspect ratio of 2:1 to 8:1 (Figure 1) [37],[17]. Flame-made nanoparticles typically exhibit a lognormal size distribution with a geometric standard deviation of σg = 1.45 [38]. To create the SiO2-coated ZnO nanorods, a nanothin (~4.6 ± 2.5 nm) amorphous SiO2 layer encapsulated the ZnO core [17] (Figure 1B). The amorphous nature of the silica coating was verified by X-ray diffraction (XRD) and electron microscopy analyses [17]. The average crystal sizes of uncoated and SiO2-coated NPs were 29 and 28 nm, respectively [39]. Their specific surface areas (SSA) were 41 m2/g (uncoated) and 55 m2/g (SiO2-coated) [40]. The lower density of SiO2 compared to ZnO contributes to the higher SSA of the SiO2-coated relative to the uncoated NPs. The extent of the SiO2 coating was assessed by X-ray photoelectron spectroscopy and photocatalytic experiments. These data showed that less than 5% of ZnO NPs were uncoated, as some of the freshly-formed core ZnO NPs may escape the coating process [41],[17]. Furthermore, the ZnO dissolution of the SiO2-coated nanorods was significantly lower than that of the uncoated NPs in culture medium over 24 h [17]. The Zn2+ ion concentration reached equilibrium after 6 hours for the coated NPs (~20%), while the uncoated ones dissolved at a constant rate up to 24 hours [17]. For both IT and gavage routes, the NPs were dispersed in deionized water by sonication at 242 J/ml. The hydrodynamic diameters were 165 ± 3 nm (SiO2-coated) and 221 ± 3 nm (uncoated). The zeta potential values in these suspensions were 23 ± 0.4 mV (uncoated) and −16.2 ± 1.2 mV (SiO2-coated). 
The zeta potential differences between these two types of NPs were observed over a pH range of 2.5-8.0 [17], which spans the pH conditions in the airways/alveoli and the small and large intestines. The post-irradiation hydrodynamic diameter and zeta potential in water suspension were similar to those of the pristine NPs used in the lung toxicity/inflammation experiments.

Physicochemical characterization of test materials. Transmission electron micrograph of uncoated ZnO (A) and SiO2-coated ZnO (B) NPs. The thin silica coating of approximately 5 nm is shown in B, inset.

We compared the pulmonary responses to uncoated versus SiO2-coated ZnO NPs at 24 hours after IT instillation in rats. Groups of 4-6 rats received 0, 0.2 or 1 mg/kg of either type of NP. IT-instilled coated and uncoated ZnO NPs induced dose-dependent injury and inflammation, evidenced by increased neutrophils and elevated levels of myeloperoxidase (MPO), albumin and lactate dehydrogenase (LDH) in the bronchoalveolar lavage (BAL) fluid at 24 hours post-instillation (Figure 2). At the lower dose of 0.2 mg/kg, only the rats instilled with SiO2-coated ZnO (n = 4) showed elevated neutrophil, LDH, MPO and albumin levels. At 1 mg/kg, however, both types of NPs induced injury and inflammation to the same extent, except that MPO was higher in rats instilled with SiO2-coated ZnO NPs.

Cellular and biochemical parameters of lung injury and inflammation in bronchoalveolar lavage (BAL). Tracheally instilled ZnO and SiO2-coated ZnO induced dose-dependent lung injury and inflammation at 24 hours. (A) Significant increases in BAL neutrophils were observed at 1 mg/kg of both NPs (n = 6/group). At the lower dose of 0.2 mg/kg (n = 4-6/group), only the SiO2-coated ZnO (n = 4) induced significant neutrophil influx in the lungs. (B) Similarly, significant increases in LDH, myeloperoxidase and albumin were observed at 1 mg/kg of both NPs, and at 0.2 mg/kg of SiO2-coated ZnO. (*P < 0.05, vs.
control, #P < 0.05, SiO2-coated ZnO versus ZnO).

Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. Both 65ZnO and SiO2-coated 65ZnO NPs exhibited biphasic clearance, with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours). No significant difference was observed in the initial clearance between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% of the dose remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small differences. At 28 days, only 0.14 ± 0.01% of the SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs.

Lung clearance of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid, with only 16-18% of the dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. By the end of the experiment, less than 0.3% of the dose remained. Although statistically more 65ZnO than SiO2-coated 65ZnO remained in the lungs at 7 and 28 days, the graphs show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days; n = 5 at 28 days).

However, analyses of the selected extrapulmonary tissues showed significant differences (Figure 4). Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. These tissue differences became more pronounced at later time points.
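The biphasic lung clearance described above follows a two-compartment biexponential form. A minimal sketch of that model is below; the study reports only the two half-lives, so the fast-phase fraction used here is a hypothetical value for illustration, not a fitted parameter:

```python
import math

LN2 = math.log(2)

def retained_fraction(t_h, t_half_fast, t_half_slow, fast_fraction):
    """Fraction of the instilled dose remaining in the lungs at time t_h (hours)."""
    return (fast_fraction * math.exp(-LN2 * t_h / t_half_fast)
            + (1 - fast_fraction) * math.exp(-LN2 * t_h / t_half_slow))

# Reported half-lives: uncoated 65ZnO 0.3 h / 42 h; SiO2-coated 0.2 h / 29 h.
# fast_fraction = 0.8 is an assumed amplitude, chosen only to illustrate the shape.
for label, fast, slow in (("uncoated", 0.3, 42.0), ("SiO2-coated", 0.2, 29.0)):
    at_2d = retained_fraction(48, fast, slow, fast_fraction=0.8)
    print(f"{label}: {100 * at_2d:.1f}% retained at 2 days")
```

With the fast phase essentially complete within hours, the retained fraction at day 2 is governed almost entirely by the slow terminal half-life, which is why the two particle types show nearly identical clearance curves.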
At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum than from SiO2-coated 65ZnO NPs (Table 1). At 7 and 28 days, the overall differences in the 65Zn contents in these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin. Interestingly, higher percentages of the 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where they increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher for uncoated than for SiO2-coated 65ZnO NPs (Tables 1, 2, 3 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as in organs not analyzed, such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues.

Extrapulmonary distribution of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined. These included blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59-72% of the dose was detected in extrapulmonary organs. 65Zn levels then decreased over time to 25-37% by day 28.
Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs.

Table 1. Tissue distribution of 65Zn at 2 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Table 2. Tissue distribution of 65Zn at 7 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Table 3. Tissue distribution of 65Zn at 28 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % instilled dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Table 4. Distribution of 65Zn 7 days after gavage administration of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % gavaged dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO.

Urinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A).

Fecal and urinary excretion of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs.
Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days. The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). Only about 1% of the 65Zn dose was excreted in the urine (B).

Absorption of 65Zn from the gut was studied at 5 minutes and 7 days after gavage of uncoated or SiO2-coated 65ZnO NPs. Nearly 100% of the dose was recovered at 5 minutes in the stomach for both types of NPs (Figure 6A). The 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). However, significantly higher percentages of the total dose were still detected in the blood, bone marrow, skin, testes, kidneys, spleen and liver of rats gavaged with uncoated 65ZnO NPs (data not shown). After 7 days, low levels of 65Zn from both types of NPs (<1% of the original dose) were measured in all organs except the bone, skeletal muscle and skin (Figure 6B, Table 4). Higher levels of 65Zn were observed in the skeletal muscle from uncoated than from coated 65ZnO NPs at this time point (Table 4). However, similar to the IT-instillation data, the thoracic lymph nodes retained more 65Zn from the SiO2-coated than from the uncoated 65ZnO NPs. Urinary excretion of 65Zn was also much lower than fecal excretion post-gavage. The urinary excretion of 65Zn in rats gavaged with SiO2-coated 65ZnO NPs was significantly higher than in rats gavaged with uncoated 65ZnO NPs (Figure 7B). Fecal excretion in the gavaged rats was higher than in IT-instilled rats. Despite a significant difference in fecal excretion during the first day post-gavage, nearly 95% of the dose for both types of NPs was excreted in the feces by day 7 (Figure 7A).

Tissue distribution of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are % dose of administered 65Zn in different organs.
(A) At 5 minutes post-gavage, the 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). (B) At day 7, significantly more 65Zn was absorbed and retained in non-GIT tissues (6.9% for uncoated, 6.0% for coated 65ZnO NPs). Significantly more 65Zn was measured in the skeletal muscle of rats gavaged with uncoated versus coated 65ZnO NPs. (Note: RBC: red blood cell; sk muscl: skeletal muscle; sm int: small intestine; large int: large intestine).

Fecal and urinary excretion of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 7 days. Similar to the IT-instilled groups, the predominant excretion pathway was via the feces. Ninety-five percent of the gavaged 65Zn was excreted in both groups by day 7 (A). Only 0.1% of the 65Zn dose was excreted in the urine (B).

Nanoparticles can be released into the workplace environment during the production and handling of nanomaterials [42]. For example, studies have shown that ZnO NPs were released during an abrasion test of commercially available two-pack polyurethane coatings containing ZnO NPs [43]. This suggests the likelihood of NP emission during activities related to the handling of nano-enabled products. In this study we describe the acute pulmonary responses to ZnO NPs and the pharmacokinetics of Zn from ZnO or SiO2-coated ZnO NPs in male Wistar Han rats. To track Zn for biokinetic studies in rats, we neutron-activated the NPs to change the stable isotope 64Zn into radioactive 65Zn, suitable for detection over long-term studies. The agglomerate size and zeta potential in water suspension were similar to those of pristine ZnO NPs. Using these radioactive NPs, we evaluated the influence of an amorphous silica coating on the clearance, bioavailability and excretion of 65Zn following intratracheal instillation and gavage of 65ZnO and SiO2-coated 65ZnO NPs.
We have shown previously that hermetic encapsulation of ZnO NPs with a thin layer of amorphous SiO2 reduces the dissolution of Zn2+ ions in biological media, DNA damage in vitro [17] and cellular toxicity [36]. Since the SiO2 coating does not affect the optoelectronic properties of the ZnO core, these coatings may be employed in sunscreens and UV filters. This could be a strategy to reduce ZnO toxicity while maintaining the intended performance of ZnO NPs.

Intratracheal instillation differs from inhalation exposure in terms of particle distribution, dose rate, clearance, NP agglomerate surface properties, and pattern of injury [44],[45]. A study by Baisch et al. reported a higher inflammatory response following intratracheal instillation than following whole-body inhalation for single and repeated exposures to titanium dioxide NPs when deposited doses were held constant [46]. Although IT instillation does not directly model inhalation exposure, it is a reliable method for administering a precise dose to the lungs for biokinetic studies. Based on our previous data [17], we hypothesized that the silica coating would attenuate zinc-induced lung injury and inflammation by reducing the available zinc ions. We have also shown that pulmonary toxicity was reduced in rats that inhaled nanoceria coated with amorphous SiO2 compared with uncoated nanoceria. Surprisingly, the in vivo lung responses in the present study showed the opposite. Several previous studies have shown that amorphous silica can cause injury and inflammation when inhaled at high doses [47]-[51]. However, it has also been shown that the lung injury and inflammatory responses to amorphous silica are transient [27]. In this study, SiO2-coated ZnO NPs induced more lung injury/inflammation than uncoated ZnO, even at a low dose at which uncoated ZnO had no effects. Considering that the effective density of ZnO NPs is reduced by the silica coating (ZnO: 5.6 g/cm3 vs.
SiO2-coated ZnO: estimated 4.1 g/cm3), it is possible that the particle number concentration is higher for the coated NPs at an equivalent mass. It is also likely that the silica coating elicits more inflammation than the ZnO NPs, and silica may act in concert with dissolved Zn ions to cause more lung injury. Furthermore, surface coating with amorphous silica also changed the zeta potential of the ZnO NPs from positive (23.0 ± 0.4 mV, uncoated) to negative (−16.2 ± 1.2 mV, SiO2-coated), decreasing the likelihood of agglomeration and sedimentation of the SiO2-coated NP suspension in aqueous systems. The reduced agglomeration of the SiO2-coated ZnO NPs may increase the available NP surface area, which may facilitate biointeractions with lung cells and thus induce a higher toxic/inflammatory response. It has also been reported that surface charge may influence the lung translocation rates of NPs [52]. For example, the adsorption of endogenous proteins such as albumin to the surface of charged NPs increases their hydrodynamic diameter and alters their translocation rate [53]. It has also been shown that NPs with zwitterionic cysteine and polar PEG ligands on their surface translocate rapidly to the mediastinal lymph nodes. Additionally, a higher surface charge density has been shown to increase the adsorption of proteins on NPs [54], while zwitterionic or neutral organic coatings have been shown to prevent the adsorption of serum proteins [18]. A recent study also showed that the nanoparticle protein corona can alter uptake by macrophages [55].

Our results demonstrate that ZnO and SiO2-coated ZnO NPs are both cleared rapidly and almost completely from the lungs by 28 days after IT instillation. In the lungs, NPs may be cleared via different pathways: by dissolution before or after alveolar macrophage uptake, by phagocytic cells in the lymph nodes, or by translocation across the alveolar epithelium into the blood circulation [56].
Since ZnO NPs have been shown to dissolve in culture medium and in endosomes [57], it is not surprising that lung clearance of 65ZnO NPs was rapid compared to that of poorly soluble NPs such as cerium oxide [58] and titanium dioxide [59]. The clearance of radioactive 65Zn from the lungs includes translocation of the NPs themselves as well as dissolution of 65ZnO, which is an important clearance mechanism [60]. As shown previously, the silica coating reduced the dissolution of ZnO NPs in culture medium [17], suggesting that dissolution and clearance in vivo may also be reduced. However, the silica coating appeared to modestly but significantly enhance the amount of cleared 65Zn at days 7 and 28. The significance of this observation needs further investigation.

Despite similar clearance from the lungs over 28 days, translocation of 65Zn from uncoated ZnO NPs was significantly higher than from coated ZnO NPs in some of the examined extrapulmonary tissues, especially skeletal muscle. In these extrapulmonary tissues, the measured 65Zn is more likely to be dissolved Zn rather than intact 65ZnO. The amount of 65Zn was greatest in the skeletal muscle, liver, skin, and bone for both particle types. The selective retention of 65Zn in those tissues might be explained, in part, by the fact that 85% of the total body zinc is present in skeletal muscle and bone [61]. There was clearance of 65Zn from most of the extrapulmonary tissues we examined over time (day 2 to day 28), except in bone, where 65Zn levels increased. The skin and skeletal muscle exhibited faster clearance with coated than with uncoated NPs. 65Zn from both particle types was largely excreted in the feces, presumably via pancreato-biliary secretion and, to a lesser extent, via mucociliary clearance of instilled NPs [62].
A study investigating the pharmacokinetic behavior of inhaled iridium NPs showed that they accumulated in soft connective tissue (2%) and bone, including bone marrow (5%) [63].

Although this study indicates that the SiO2 coating modestly reduces the translocation of 65Zn to the blood, skin, kidneys, heart, liver and skeletal muscle, it is unclear whether the SiO2-coated ZnO NPs dissolve at a different rate in vivo, and whether 65Zn is in particulate or ionic form when it reaches the circulation and bone. ZnO NPs have been shown to dissolve rapidly under acidic conditions (pH 4.5) and are more likely to remain intact around neutral pH [64]. ZnO NPs entering the phagolysosomal compartments of alveolar macrophages or neutrophils are therefore likely to encounter conditions favorable for dissolution. Our previous study suggested that the SiO2 coating is stable in vitro and exhibits low dissolution in biological media (<8% over 24 hours) [17]. Thus, it is possible that the SiO2-coated NPs remain in particulate form for a longer period of time. There are data showing that gold, silver, TiO2, polystyrene and carbon particles in the size range of 5-100 nm can cross the air-blood barrier and reach the blood circulation and extrapulmonary organs [65]-[71].

The SiO2 coating significantly increased the levels of 65Zn in the bone and bone marrow (Table 3). We note that zinc is essential to the development and maintenance of bone. Zinc is known to play a major role in bone growth in mammals [72], and is required for protein synthesis in osteoblasts [73]. It can also inhibit the development of osteoclasts from bone marrow cells, thereby reducing bone resorption [74],[75]. Radioactive 65Zn from uncoated and coated 65ZnO NPs also translocated to the skin, skeletal muscle, liver, heart, small intestine, testes, and brain (but to a lesser extent than to the bone and bone marrow).
It is important to note that of the 16 extrapulmonary tissues examined at 28 days after IT instillation, 4 had a higher 65Zn content from uncoated ZnO than from coated ZnO (blood, skin, skeletal muscle and heart) (Table 3). This suggests that an amorphous silica coating on NPs may reduce Zn retention, and hence its potential toxicity, when it would otherwise accumulate at high levels in those organs. Whether coating modifications such as thicker or different coatings can further reduce Zn bioavailability warrants further investigation. Significantly more 65Zn from SiO2-coated ZnO was excreted in the urine, most likely in ionic form.

Oral exposure to ZnO NPs is relevant from an environmental health perspective. ZnO is widely used as a nutritional supplement and as a food additive [76]. Because zinc is an essential trace element, it is routinely added to animal food products and fertilizer [75]. Due to its antimicrobial properties, there is increasing interest in adding ZnO to polymers in food packaging and preservative films to prevent bacterial growth [77]. It is possible that ZnO in sunscreens, ointments, and other cosmetics can be accidentally ingested, especially by children. The biokinetic behavior of NPs in the gastrointestinal tract may be influenced by particle surface charge. Positively charged particles are attracted to negatively charged mucus, while negatively charged particles directly contact epithelial cell surfaces [78]. A study by Paek et al. investigating the effect of surface charge on the biokinetics of Zn over 4 hours after oral administration of ZnO NPs showed that negatively charged NPs were absorbed more than positively charged ZnO NPs [79]. However, no effect on tissue distribution was observed. This is in contrast to our findings at 7 days post-gavage, when coating of ZnO NPs with amorphous SiO2 (negative zeta potential) increased retention in the thoracic lymph nodes compared to uncoated ZnO NPs (positive zeta potential).
Our study also showed that low levels of 65Zn were retained in the blood, skeletal muscle, bone and skin from both coated and uncoated 65ZnO NPs (Table 4). Most of the gavaged dose (over 90%) was excreted in the feces by day 3, indicating rapid clearance of ZnO NPs, consistent with previous reports. Another study examined the pharmacokinetics of ZnO NPs (125, 250 and 500 mg/kg) after single and repeated (90-day) oral administration [80]. It found that plasma Zn concentration increased significantly in a dose-dependent manner, but decreased significantly within 24 hours post-administration, suggesting that the systemic clearance of ZnO NPs is rapid even at these high doses. In another study, Baek et al. examined the pharmacokinetics of 20 nm and 70 nm citrate-modified ZnO NPs at doses of 50, 300 and 2000 mg/kg [81]. Similar to our results, they showed that ZnO NPs were not readily absorbed into the bloodstream after single-dose oral administration. The tissue distributions of Zn from both 20 nm and 70 nm ZnO NPs were similar, mainly to the liver, lung and kidneys. That study also reported predominant excretion of Zn in the feces, with the smaller 20 nm particles cleared more rapidly than the 70 nm NPs.

In summary, the results presented here show that uncoated 65ZnO NPs produced higher levels of 65Zn in multiple organs following intratracheal instillation or gavage, particularly in skeletal muscle. This suggests that coating with amorphous silica can reduce tissue Zn concentrations and the associated potential toxicity. Interestingly, the bioavailability of Zn from SiO2-coated 65ZnO was higher in the thoracic lymph nodes and bone. Additionally, the excretion of 65Zn was higher for SiO2-coated 65ZnO NPs by both routes, suggesting enhanced hepatobiliary excretion. Our data indicate that silica coating alters the pharmacokinetic behavior of ZnO NPs, but the effect was not as dramatic as anticipated.
As physicochemical modifications of NPs for specialized applications become more common, it is necessary to understand their influence on the fate, metabolism and toxicity of these nanoparticles.

We examined the influence of a 4.5 nm SiO2 coating on ZnO NPs on the 65Zn pharmacokinetics following IT instillation and gavage of neutron-activated NPs. The SiO2 coating does not affect the clearance of 65Zn from the lungs. However, the extrapulmonary translocation and distribution of 65Zn from coated versus uncoated 65ZnO NPs were significantly altered in some tissues. The SiO2 coating resulted in lower translocation of instilled 65Zn to the skeletal muscle, skin and heart. The SiO2 coating also reduced 65Zn translocation to skeletal muscle post-gavage. For both routes of administration, the SiO2 coating enhanced the transport of 65Zn to the thoracic lymph nodes.

Synthesis of ZnO and SiO2-coated ZnO NPs
The synthesis of these NPs was reported in detail elsewhere [17]. In brief, uncoated and SiO2-coated ZnO particles were synthesized by flame spray pyrolysis (FSP) of zinc naphthenate (Sigma-Aldrich, St. Louis, MO, USA) dissolved in ethanol (Sigma-Aldrich) at a precursor molarity of 0.5 M. The precursor solution was fed through a stainless steel capillary at 5 ml/min, dispersed by 5 L/min O2 (purity > 99%, pressure drop at nozzle tip: pdrop = 2 bar) (Air Gas, Berwyn, PA, USA) and combusted. A premixed methane-oxygen (1.5 L/min, 3.2 L/min) supporting flame was used to ignite the spray. Oxygen (Air Gas, purity > 99%) sheath gas was used at 40 L/min. Core particles were coated in-flight by swirl-injection of hexamethyldisiloxane (HMDSO) (Sigma-Aldrich) through a torus ring with 16 jets at an injection height of 200 mm above the FSP burner. A total gas flow of 16 L/min, consisting of N2 carrying HMDSO vapor and pure N2, was injected through the torus ring jets.
HMDSO vapor was obtained by bubbling N2 gas through liquid HMDSO (500 ml) maintained at a controlled temperature in a water bath.

Characterization of ZnO and SiO2-coated ZnO NPs
The morphology of these NPs was examined by electron microscopy. Uncoated and SiO2-coated ZnO NPs were dispersed in ethanol at a concentration of 1 mg/ml in 50 ml polyethylene conical tubes and sonicated at 246 J/ml (Branson Sonifier S-450A, Swedesboro, NJ, USA). The samples were deposited onto lacey carbon TEM grids. All grids were imaged with a JEOL 2100. The primary particle size was determined by X-ray diffraction (XRD). XRD patterns for uncoated ZnO and SiO2-coated ZnO NPs were obtained using a Scintag XDS2000 powder diffractometer (Cu Kα, λ = 0.154 nm, 40 kV, 40 mA, step size = 0.02°).
One hundred mg of each sample was placed onto the diffractometer stage and analyzed over a range of 2θ = 20-70°. Major diffraction peaks were identified using the Inorganic Crystal Structure Database (ICSD) for wurtzite (ZnO) crystals. The crystal size was determined by applying the Debye-Scherrer shape equation to the Gaussian fit of the major diffraction peak. The specific surface area was obtained using the Brunauer-Emmett-Teller (BET) method. The samples were degassed in N2 for at least 1 hour at 150°C before obtaining five-point N2 adsorption at 77 K (Micromeritics Tristar 3000, Norcross, GA, USA).
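The Scherrer crystal-size calculation applied to the fitted diffraction peak can be reproduced as follows. The peak position and FWHM used in the example are illustrative inputs typical of the wurtzite ZnO (101) reflection, not values reported in the paper:

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.154, k=0.9):
    """Crystallite size d = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM in radians and theta is half the 2-theta position;
    K = 0.9 is the commonly used shape factor, and 0.154 nm is the Cu K-alpha
    wavelength given in the text.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative: a ~0.29 deg FWHM peak near 2-theta = 36.3 deg gives ~29 nm,
# the crystal size reported for the uncoated ZnO NPs.
print(f"{scherrer_size_nm(0.29, 36.3):.0f} nm")
```

Note that the Scherrer estimate reflects coherently diffracting domain size, which is why it can differ from the hydrodynamic diameters measured by DLS for agglomerated suspensions.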
Neutron activation of NPs
The NPs with and without the SiO2 coating were neutron-activated at the Massachusetts Institute of Technology (MIT) Nuclear Reactor Laboratory (Cambridge, MA). Samples were irradiated with a thermal neutron flux of 5 × 1013 n/cm2s for 120 hours. The resulting 65Zn radioisotope has a half-life of 244.3 days and a primary gamma energy peak of 1115 keV. The relative specific activities for 65Zn were 37.7 ± 5.0 kBq/mg for SiO2-coated 65ZnO and 41.7 ± 7.2 kBq/mg for 65ZnO NPs.

Preparation and characterization of ZnO and SiO2-coated ZnO nanoparticle suspensions
Uncoated and SiO2-coated ZnO NPs were dispersed using a protocol previously described [82],[36]. The NPs were dispersed in deionized water at a concentration of 0.66 mg/ml (IT) or 10 mg/ml (gavage).
Sonication was performed in deionized water to minimize the formation of reactive oxygen species. Samples were thoroughly mixed immediately prior to instillation. Dispersions of NPs were analyzed for hydrodynamic diameter (dH), polydispersity index (PdI), and zeta potential (ζ) by DLS using a Zetasizer Nano-ZS (Malvern Instruments, Worcestershire, UK).

Animals
The protocols used in this study were approved by the Harvard Medical Area Animal Care and Use Committee. Nine-week-old male Wistar Han rats were purchased from Charles River Laboratories (Wilmington, MA). Rats were housed in pairs in polypropylene cages and allowed to acclimate for 1 week before the studies were initiated. Rats were maintained on a 12-hour light/dark cycle. Food and water were provided ad libitum.

Pulmonary responses – Bronchoalveolar lavage and analyses
This experiment was performed to determine pulmonary responses to instilled NPs. A group of rats (mean wt. 264 ± 15 g) was intratracheally instilled with either an uncoated ZnO or a SiO2-coated ZnO NP suspension at a 0, 0.2 or 1.0 mg/kg dose. The particle suspensions were delivered to the lungs through the trachea in a volume of 1.5 ml/kg. Twenty-four hours later, rats were euthanized via exsanguination through a cut in the abdominal aorta while under anesthesia. The trachea was exposed and cannulated. The lungs were then lavaged 12 times with 3 ml of 0.9% sterile PBS without calcium and magnesium ions. The cells of all washes were separated from the supernatant by centrifugation (350 × g at 4°C for 10 min).
Total cell count and hemoglobin measurements were made from the cell pellets. After staining the cells, a differential cell count was performed. The supernatant of the first two washes was clarified via centrifugation (14,500 × g at 4°C for 30 min) and used for standard spectrophotometric assays for lactate dehydrogenase (LDH), myeloperoxidase (MPO) and albumin.\n Pharmacokinetics of 65Zn The mean weight of rats at the start of the experiment was 285 ± 3 g. Two groups of rats (29 rats/NP) were intratracheally instilled with 65ZnO NPs or with SiO2-coated 65ZnO NPs at a 1 mg/kg dose (1.5 ml/kg, 0.66 mg/ml). Rats were placed in metabolic cages containing food and water, as previously described. Twenty-four-hour samples of feces and urine were collected at selected time points (0–24 hours, 2–3 days, 6–7 days, 9–10 days, 13–14 days, 20–21 days, and 27–28 days post-IT instillation).
Fecal/urine collection was accomplished by placing each rat in an individual metabolic cage containing food and water during each 24-hour period. All samples were analyzed for total 65Zn activity, expressed as % of the instilled 65Zn dose. Fecal and urine clearance curves were generated and used to estimate the daily cumulative excretion. Groups of 8 rats were humanely sacrificed at 5 minutes, 2 days, and 7 days, and a group of 5 rats at 28 days; the number of collected fecal/urine samples therefore decreased over time.\nAnother cohort of 20 rats was dosed with 65ZnO (n = 10) or SiO2-coated 65ZnO (n = 10) by gavage at a 5 mg/kg dose (0.5 ml/kg, 10 mg/ml). One group of 5 rats was humanely sacrificed at 5 minutes and immediately dissected. Another group of 5 rats was individually placed in metabolic cages, as previously described, and 24-hour samples of urine and feces were collected at 0–1 day, 2–3 days, and 6–7 days post-gavage. The remaining rats were sacrificed at 7 days.\nAt each endpoint, rats were euthanized and dissected, and the whole brain, spleen, kidneys, heart, liver, lungs, GI tract, testes, thoracic lymph nodes, blood (10 ml, separated into plasma and RBC), bone marrow (from femoral bones), bone (both femurs), skin (2 × 3 inches), and skeletal muscle (from 4 sites) were collected. The 65Zn radioactivity present in each sample was measured with a WIZARD Gamma Counter (PerkinElmer, Inc., Waltham, MA). The number of disintegrations per minute was determined from the counts per minute and the counting efficiency. The efficiency of the gamma counter was derived by counting multiple aliquots of NP samples and relating them to the specific activities measured at the Massachusetts Institute of Technology Nuclear Reactor. We estimated that the counter had an efficiency of ~52%. The 65Zn radioactivity was expressed as kBq/g tissue and as the percentage of administered dose in each organ.
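The counting arithmetic described above (counts per minute converted to disintegrations per minute via the ~52% counter efficiency, correction for the physical decay of 65Zn with its 244.3-day half-life, and normalization to the instilled dose) can be sketched as follows; the sample count rate, tissue, and timing in the example are hypothetical, not measured values from the study.

```python
import math

ZN65_HALF_LIFE_DAYS = 244.3   # physical half-life of 65Zn (from the methods)
COUNTER_EFFICIENCY = 0.52     # estimated WIZARD gamma counter efficiency

def cpm_to_kbq(cpm, efficiency=COUNTER_EFFICIENCY):
    """Convert counts per minute to activity in kBq (1 Bq = 60 dpm)."""
    dpm = cpm / efficiency            # disintegrations per minute
    return dpm / 60.0 / 1000.0        # dpm -> Bq -> kBq

def decay_correct(activity_kbq, days_elapsed, half_life=ZN65_HALF_LIFE_DAYS):
    """Correct a measured activity back to the time of administration."""
    return activity_kbq * math.exp(math.log(2) * days_elapsed / half_life)

# Hypothetical example: a tissue sample counted at 6000 cpm, 28 days post-instillation.
measured_kbq = cpm_to_kbq(6000)
corrected_kbq = decay_correct(measured_kbq, 28)

# Express as % of instilled dose: 1 mg/kg in a 285 g rat at 41.7 kBq/mg (uncoated 65ZnO)
instilled_kbq = 0.285 * 41.7
percent_dose = 100.0 * corrected_kbq / instilled_kbq
```

Dividing the result by the sample mass would give the kBq/g tissue values reported in the tables.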
All radioactivity data were adjusted for physical decay over the entire observation period. The radioactivity in organs and tissues not measured in their entirety was estimated as a percentage of total body weight as: skeletal muscle, 40%; bone marrow, 3.2%; peripheral blood, 7%; skin, 19%; and bone, 6% [83],[84]. Based on the 65Zn specific activity (kBq/mg NP) and tissue 65Zn concentration, the amount of Zn derived from each NP was calculated for each tissue examined (ng Zn/g tissue).\n Statistical analyses Differences in the 65Zn tissue distribution and in cellular and biochemical parameters measured in bronchoalveolar lavage between groups were analyzed using multivariate analysis of variance (MANOVA) with REGWQ (Ryan-Einot-Gabriel-Welch based on range) and Tukey post hoc tests using SAS Statistical Analysis software (SAS Institute, Cary, NC). The lung clearance half-life was estimated by fitting a biexponential (two-phase) model using the R Program v.
3.1.0 [85].", "The synthesis of these NPs was reported in detail elsewhere [17]. In brief, uncoated and SiO2-coated ZnO particles were synthesized by flame spray pyrolysis (FSP) of zinc naphthenate (Sigma-Aldrich, St. Louis, MO, USA) dissolved in ethanol (Sigma-Aldrich) at a precursor molarity of 0.5 M. The precursor solution was fed through a stainless steel capillary at 5 ml/min, dispersed by 5 L/min O2 (purity > 99%, pressure drop at nozzle tip: pdrop = 2 bar) (Air Gas, Berwyn, PA, USA) and combusted. A premixed methane-oxygen (1.5 L/min, 3.2 L/min) supporting flame was used to ignite the spray. Oxygen (Air Gas, purity > 99%) sheath gas was used at 40 L/min. Core particles were coated in-flight by the swirl-injection of hexamethyldisiloxane (HMDSO) (Sigma Aldrich) through a torus ring with 16 jets at an injection height of 200 mm above the FSP burner. A total gas flow of 16 L/min, consisting of N2 carrying HMDSO vapor and pure N2, was injected through the torus ring jets. HMDSO vapor was obtained by bubbling N2 gas through liquid HMDSO (500 ml), maintained at a controlled temperature using a temperature-controlled water bath.", "The morphology of these NPs was examined by electron microscopy. Uncoated and SiO2-coated ZnO NPs were dispersed in ethanol at a concentration of 1 mg/ml in 50 ml polyethylene conical tubes and sonicated at 246 J/ml (Branson Sonifier S-450A, Swedesboro, NJ, USA). The samples were deposited onto lacey carbon TEM grids. All grids were imaged with a JEOL 2100.
The primary particle size was determined by X-ray diffraction (XRD). XRD patterns for uncoated ZnO and SiO2-coated ZnO NPs were obtained using a Scintag XDS2000 powder diffractometer (Cu Kα, λ = 0.154 nm, 40 kV, 40 mA, step size = 0.02°). One hundred mg of each sample was placed onto the diffractometer stage and analyzed over a range of 2θ = 20-70°. Major diffraction peaks were identified using the Inorganic Crystal Structure Database (ICSD) for wurtzite (ZnO) crystals. The crystal size was determined by applying the Debye-Scherrer shape equation to the Gaussian fit of the major diffraction peak. The specific surface area was obtained using the Brunauer-Emmett-Teller (BET) method. The samples were degassed in N2 for at least 1 hour at 150°C before obtaining five-point N2 adsorption at 77 K (Micromeritics TriStar 3000, Norcross, GA, USA).", "The authors declare that they have no competing interests.", "NVK, KMM, RMM, and JDB designed and performed the lung toxicity and pharmacokinetic studies. TCD performed statistical analyses. PD and GAS synthesized and characterized the NPs. This manuscript was written by NVK, RMM, and KMM and revised by JDB, GS, PD and RMM. All authors read, corrected and approved the manuscript." ]
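The Debye-Scherrer crystal-size estimate and the sphere-equivalent diameter implied by the BET surface area, both described in the characterization methods above, can be sketched as follows; the 0.30° peak width and the bulk ZnO density are illustrative handbook-style assumptions, not values reported in the study.

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154, k=0.9):
    """Debye-Scherrer: D = K*lambda / (beta*cos(theta)), beta = peak FWHM in radians."""
    theta = math.radians(two_theta_deg / 2.0)   # theta is half the diffraction angle
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

def bet_diameter_nm(ssa_m2_per_g, density_g_per_cm3):
    """Sphere-equivalent primary-particle diameter from BET SSA: d = 6/(rho*SSA)."""
    return 6000.0 / (density_g_per_cm3 * ssa_m2_per_g)

# ZnO (101) wurtzite reflection near 2-theta = 36.3 deg; the 0.30 deg FWHM is assumed.
d_xrd = scherrer_size_nm(36.3, 0.30)   # ~28 nm, close to the reported crystal size
# Uncoated ZnO: SSA = 41 m^2/g; bulk ZnO density ~5.61 g/cm^3 (handbook value)
d_bet = bet_diameter_nm(41, 5.61)      # ~26 nm
```

Agreement between the XRD crystal size and the BET-equivalent diameter is a common consistency check for weakly agglomerated flame-made particles.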
[ null, "results", null, null, null, null, "discussion", "conclusions", "methods", null, null, null, null, null, null, null, null, null, null ]
[ "Zinc oxide", "Nanoparticles", "Pharmacokinetics", "Bioavailability", "Silica coating", "Nanotoxicology" ]
Background: Zinc oxide nanoparticles (ZnO NPs) are widely used in consumer products, including ceramics, cosmetics, plastics, sealants, toners and foods [1]. They are a common component in a range of technologies, including sensors, light-emitting diodes, and solar cells, due to their semiconducting and optical properties [2]. ZnO NPs filter both UV-A and UV-B radiation but remain transparent in the visible spectrum [3]. For this reason, ZnO NPs are commonly added to sunscreens [4] and other cosmetic products. Furthermore, advanced technologies have made the large-scale production of ZnO NPs possible [5]. Health concerns have been raised due to the growing evidence of the potential toxicity of ZnO NPs. Reduced pulmonary function in humans was observed 24 hours after inhalation of ultrafine (<100 nm) ZnO [6]. ZnO has also been shown to cause DNA damage in HepG2 cells and neurotoxicity due to the formation of reactive oxygen species (ROS) [7],[8]. Recently, we and others have demonstrated that ZnO NPs can cause DNA damage in TK6 and H9T3 cells [9],[10]. ZnO NPs dissolve in aqueous solutions, releasing Zn2+ ions that may in turn cause cytotoxicity and DNA damage to cells [9],[11]–[13]. Studies have shown that changing the surface characteristics of certain NPs may alter the biologic responses of cells [14],[15]. Developing strategies to reduce the toxicity of ZnO NPs without changing their core properties (a safer-by-design approach) is an active area of research. Xia et al. showed that doping ZnO NPs with iron could reduce the rate of ZnO dissolution and the toxic effects in zebrafish embryos and rat and mouse lungs [16]. We also showed that encapsulation of ZnO NPs with amorphous SiO2 reduced the dissolution of Zn2+ ions in biological media, and reduced cell cytotoxicity and DNA damage in vitro [17]. Surface characteristics of NPs, such as their chemical and molecular structure, influence their pharmacokinetic behavior [18]–[20].
Surface chemistry influences the adsorption of phospholipids, proteins and other components of lung surfactant in the formation of a particle corona, which may regulate the overall nanoparticle pharmacokinetics and biological responses [19]. Coronas have been shown to influence the dynamics of cellular uptake, localization, biodistribution, and biological effects of NPs [21],[22]. Coating of NPs with amorphous silica is a promising technique to enhance colloidal stability and biocompatibility for theranostics [23],[24]. A recent study by Chen et al. showed that coating gold nanorods with silica can amplify the photoacoustic response without altering optical absorption [25]. Furthermore, coating magnetic NPs with amorphous silica enhances particle stability and reduces their cytotoxicity in a human bronchial epithelium cell line model [26]. Amorphous SiO2 is generally considered relatively biologically inert [27], and is commonly used in cosmetic and personal care products, and as a negative control in some nanoparticle toxicity screening assays [28]. However, Napierska et al. demonstrated size-dependent cytotoxic effects of amorphous silica in vitro [29]. They concluded that the surface area of amorphous silica is an important determinant of cytotoxicity. An in vivo study using a rat model demonstrated that the pulmonary toxicity and inflammatory responses to amorphous silica are transient [30]. Moreover, SiO2-coated nanoceria induced minimal lung injury and inflammation [31]. It has also been demonstrated that SiO2 coating improves nanoparticle biocompatibility in vitro for a variety of nanomaterials, including Ag [32], Y2O3 [33], and ZnO [17]. We have recently developed methods for the gas-phase synthesis of metal and metal oxide NPs using a modified flame spray pyrolysis (FSP) reactor. Coating metal oxide NPs with amorphous SiO2 involves the encapsulation of the core NPs in flight with a nanothin amorphous SiO2 layer [34].
An important advantage of flame-made NPs is their high purity. Flame synthesis is a high-temperature process that leaves no organic contamination on the particle surface. Furthermore, the presence of SiO2 does not influence the optoelectronic properties of the core ZnO nanorods. Thus, they retain their desired high transparency in the visible spectrum and UV absorption, rendering them suitable for UV-blocking applications [17]. The SiO2 coating has been demonstrated to reduce ZnO nanorod toxicity by mitigating their dissolution and generation of ions in solution, and by preventing immediate contact between the core particle and mammalian cells. For ZnO NPs, such a hermetic SiO2 coating reduces ZnO dissolution while preserving the optical properties and band-gap energy of the ZnO core [17]. Studies examining nanoparticle structure-pharmacokinetic relationships have established that plasma protein binding profiles correlate with circulation half-lives [27]. However, studies evaluating the relationship between surface modifications, lung clearance kinetics, and pulmonary effects are lacking. Thus, we sought to study the effects of an amorphous SiO2 coating on the pulmonary effects of ZnO and on the pharmacokinetics of 65Zn when radioactive 65ZnO and SiO2-coated 65ZnO nanorods are administered by intratracheal instillation (IT) and gavage. We explored how the SiO2 coating affected acute toxicity and inflammatory responses in the lungs, as well as 65Zn clearance and tissue distribution, over a period of 28 days after IT instillation. The translocation of 65Zn from the stomach to other organs was also quantified for up to 7 days after gavage. Finally, we examined how the SiO2 coating affected the urinary and fecal excretion of 65Zn during the entire observation period.
Results: Synthesis and characterization of ZnO and SiO2-coated ZnO NPs Uncoated and SiO2-coated ZnO NPs were made by flame spray pyrolysis using the Versatile Engineered Nanomaterial Generation System at Harvard University [35],[17]. The detailed physicochemical and morphological characterization of these NPs was reported earlier [36],[17]. The ZnO primary NPs had a rod-like shape with an aspect ratio of 2:1 to 8:1 (Figure 1) [37],[17]. Flame-made nanoparticles typically exhibit a lognormal size distribution with a geometric standard deviation of σg = 1.45 [38]. To create the SiO2-coated ZnO nanorods, a nanothin (~4.6 ± 2.5 nm) amorphous SiO2 layer encapsulated the ZnO core [17] (Figure 1B). The amorphous nature of the silica coating was verified by X-ray diffraction (XRD) and electron microscopy analyses [17]. The average crystal sizes of uncoated and SiO2-coated NPs were 29 and 28 nm, respectively [39]. Their specific surface areas (SSA) were 41 m2/g (uncoated) and 55 m2/g (SiO2-coated) [40]. The lower density of SiO2 compared to ZnO contributes to the higher SSA of the SiO2-coated ZnO relative to the uncoated NPs. The extent of the SiO2 coating was assessed by X-ray photoelectron spectroscopy and photocatalytic experiments. These data showed that less than 5% of ZnO NPs were uncoated, as some of the freshly-formed core ZnO NPs may escape the coating process [41],[17]. Furthermore, the ZnO dissolution of the SiO2-coated nanorods was significantly lower than that of the uncoated NPs in culture medium over 24 h [17]. The Zn2+ ion concentration reached equilibrium after 6 hours for the coated NPs (~20%), while the uncoated ones dissolved at a constant rate up to 24 hours [17]. For both IT and gavage routes, the NPs were dispersed in deionized water by sonication at 242 J/ml. The hydrodynamic diameters were 165 ± 3 nm (SiO2-coated) and 221 ± 3 nm (uncoated). The zeta potential values in these suspensions were 23 ± 0.4 mV (uncoated) and −16.2 ± 1.2 mV (SiO2-coated).
The zeta potential differences between these two types of NPs were observed at a pH range of 2.5-8.0 [17], which includes the pH conditions in the airways/alveoli and small and large intestines. The post-irradiation hydrodynamic diameter and zeta potential in water suspension were similar to those of pristine NPs used in the lung toxicity/inflammation experiments. Physicochemical characterization of test materials. Transmission electron micrograph of uncoated ZnO (A) and SiO2-coated ZnO (B) NPs. The thin silica coating of approximately 5 nm is shown in B, inset. Pulmonary responses to intratracheally instilled ZnO and SiO2-coated ZnO We compared the pulmonary responses to uncoated versus SiO2-coated ZnO NPs at 24 hours after IT instillation in rats. Groups of 4–6 rats received 0, 0.2 or 1 mg/kg of either type of NP. We found that IT-instilled coated and uncoated ZnO NPs induced a dose-dependent injury and inflammation evident by increased neutrophils, elevated levels of myeloperoxidase (MPO), albumin and lactate dehydrogenase (LDH) in the bronchoalveolar lavage (BAL) fluid at 24 hours post-instillation (Figure 2). At the lower dose of 0.2 mg/kg, only the SiO2-coated ZnO instilled rats (n = 4) showed elevated neutrophils, LDH, MPO, and albumin levels.
But at 1 mg/kg, both types of NPs induced injury and inflammation to the same extent, except that MPO was higher in rats instilled with SiO2-coated ZnO NPs. Cellular and biochemical parameters of lung injury and inflammation in bronchoalveolar lavage (BAL). Tracheally instilled ZnO and SiO2-coated ZnO induced a dose-dependent lung injury and inflammation at 24 hours. (A) Significant increases in BAL neutrophils were observed at 1 mg/kg of both NPs (n = 6/group). At the lower dose of 0.2 mg/kg (n = 4-6/group), only the SiO2-coated ZnO (n = 4) induced significant neutrophil influx in the lungs. (B) Similarly, significant increases in LDH, myeloperoxidase and albumin were observed at 1 mg/kg of both NPs, and at 0.2 mg/kg of SiO2-coated ZnO. (*P < 0.05, vs. control; #P < 0.05, SiO2-coated ZnO versus ZnO). Pharmacokinetics of intratracheally-instilled uncoated or SiO2-coated 65ZnO NPs Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. Overall, both 65ZnO NPs and SiO2-coated 65ZnO NPs exhibited biphasic clearance with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours). No significant difference was observed in the initial clearance between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small (in magnitude) differences. At 28 days, only 0.14 ± 0.01% of SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs. Lung clearance of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid, with only 16-18% of the dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. By the end of the experiment, 65Zn was nearly gone (less than 0.3% of the dose). Although statistically higher levels of 65ZnO NPs than of SiO2-coated 65ZnO NPs remained in the lungs at 7 and 28 days, the graphs show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days; n = 5 at 28 days). However, analyses of the selected extrapulmonary tissues showed significant differences (Figure 4).
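The biphasic clearance reported above corresponds to a biexponential model of the form A(t) = A1·e^(−k1·t) + A2·e^(−k2·t), where each half-life equals ln 2/k. In the study the parameters were fitted in R; the minimal Python sketch below only evaluates the model using the reported half-lives for uncoated 65ZnO (0.3 h fast, 42 h slow), and the 80/20 split between fast and slow pools is an illustrative assumption, not a fitted value.

```python
import math

def biexponential(t_hours, a1, t_half1, a2, t_half2):
    """Two-phase clearance: fraction of dose remaining at time t (half-lives in hours)."""
    k1 = math.log(2) / t_half1
    k2 = math.log(2) / t_half2
    return a1 * math.exp(-k1 * t_hours) + a2 * math.exp(-k2 * t_hours)

# Reported uncoated-65ZnO half-lives: fast phase 0.3 h, slow phase 42 h.
# The 0.8/0.2 pool amplitudes are assumed for illustration only.
remaining_2d = biexponential(48, 0.8, 0.3, 0.2, 42)   # fraction remaining at 2 days
```

By 48 hours the fast phase has decayed away entirely, so the remaining fraction is governed by the slow-phase amplitude and half-life alone.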
Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. These tissue differences became more pronounced at later time points. At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum than from SiO2-coated 65ZnO NPs (Table 1). At 7 and 28 days, the overall differences in the 65Zn contents in these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin. Interestingly, higher percentages of 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where it increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher in uncoated than SiO2-coated 65ZnO NPs (Tables 1, 23 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as organs not analyzed such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues. Extrapulmonary distribution of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined. It included blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was a rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59-72% of the dose was detected in extrapulmonary organs. 
Then, 65Zn levels decreased over time to 25-37% by day 28. Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs. Tissue distribution of 65 Zn at 2 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65 Zn at 7 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65 Zn at 28 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Distribution of 65 Zn 7 days after gavage administration of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% gavaged dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Urinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A). Fecal and urinary excretion of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days. 
The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). Only about 1% of the 65Zn dose was excreted in the urine (B). Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. Overall, both 65ZnO NPs and SiO2-coated 65ZnO NPs exhibited a biphasic clearance with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours). No significant difference was observed on the initial clearance between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small (in magnitude) differences. At 28 days, only 0.14 ± 0.01% of SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs. Lung clearance of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid with only 16-18% of dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. And by the end of experiment, 65Zn was nearly gone (less than 0.3% of dose). Although statistically higher levels of 65ZnO NPs than of SiO2-coated 65ZnO NPs remained in the lungs at 7 and 28 days, the graphs show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days, n = 5 at 28 days). However, analyses of the selected extrapulmonary tissues showed significant differences (Figure 4). Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. 
These tissue differences became more pronounced at later time points. At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum than from SiO2-coated 65ZnO NPs (Table 1). At 7 and 28 days, the overall differences in the 65Zn contents in these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin. Interestingly, higher percentages of 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where it increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher in uncoated than SiO2-coated 65ZnO NPs (Tables 1, 23 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as organs not analyzed such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues. Extrapulmonary distribution of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined. It included blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was a rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59-72% of the dose was detected in extrapulmonary organs. Then, 65Zn levels decreased over time to 25-37% by day 28. Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs. 
Tissue distribution of 65 Zn at 2 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65 Zn at 7 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65 Zn at 28 days after intratracheal instillation of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% instilled dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Distribution of 65 Zn 7 days after gavage administration of 65 ZnO or SiO 2 -coated 65 ZnO NPs in rats Data are mean ± SE% gavaged dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. # P <0.05, SiO2-coated ZnO > ZnO. Urinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A). Fecal and urinary excretion of65Zn post-IT instillation of65ZnO and SiO2-coated65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days. The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). 
Only about 1% of the 65Zn dose was excreted in the urine (B). Pharmacokinetics of gavaged uncoated or SiO2-coated 65ZnO NPs Absorption of 65Zn from the gut was studied at 5 minutes and 7 days post-gavage of uncoated or SiO2-coated 65ZnO NPs. Nearly 100% of the dose was recovered at 5 minutes in the stomach for both types of NPs (Figure 6A). The 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). However, significantly higher percentages of total dose were still detected in the blood, bone marrow, skin, testes, kidneys, spleen and liver in rats instilled with uncoated 65ZnO NPs (data not shown). After 7 days, low levels of 65Zn from both types of NPs (<1% original dose) were measured in all organs except the bone, skeletal muscle and skin (Figure 6B, Table 4). Higher levels of 65Zn were observed in the skeletal muscle from uncoated than from coated 65ZnO NPs at this time point (Table 4). However, similar to the IT-instillation data, the thoracic lymph nodes retained more 65Zn from the SiO2-coated than the uncoated 65ZnO NPs. Urinary excretion of 65Zn was also much lower than fecal excretion post-gavage. The urinary excretion of 65Zn in rats gavaged with SiO2-coated 65ZnO NPs was significantly higher than in rats gavaged with uncoated 65ZnO NPs (Figure 7B). The fecal excretion in the gavaged rats was higher than in IT-instilled rats. Despite a significant difference in fecal excretion during the first day post-gavage, nearly 95% of the dose for both types of NPs was excreted in the feces by day 7 (Figure 7A). Tissue distribution of65Zn post-gavage of65ZnO and SiO2-coated65ZnO NPs. Data are % dose of administered 65Zn in different organs. (A) At 5 minutes post-gavage, the 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). 
(B) At day 7, significantly more 65Zn was absorbed and retained in non-GIT tissues (6.9% for uncoated, 6.0% for coated 65ZnO NPs). Significantly more 65Zn was measured in skeletal muscle in rat gavaged with uncoated versus coated 65ZnO NPs. (Note: RBC: red blood cell; sk muscl: skeletal muscle; sm int: small intestine: large int: large intestine). Fecal and urinary excretion of65Zn post-gavage of65ZnO and SiO2-coated65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 7 days. Similar to the IT-instilled groups, the predominant excretion pathway was via the feces. Ninety five % of the instilled 65Zn was excreted in both groups by day 7 (A). Only 0.1% of the 65Zn dose was excreted in the urine (B). Absorption of 65Zn from the gut was studied at 5 minutes and 7 days post-gavage of uncoated or SiO2-coated 65ZnO NPs. Nearly 100% of the dose was recovered at 5 minutes in the stomach for both types of NPs (Figure 6A). The 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). However, significantly higher percentages of total dose were still detected in the blood, bone marrow, skin, testes, kidneys, spleen and liver in rats instilled with uncoated 65ZnO NPs (data not shown). After 7 days, low levels of 65Zn from both types of NPs (<1% original dose) were measured in all organs except the bone, skeletal muscle and skin (Figure 6B, Table 4). Higher levels of 65Zn were observed in the skeletal muscle from uncoated than from coated 65ZnO NPs at this time point (Table 4). However, similar to the IT-instillation data, the thoracic lymph nodes retained more 65Zn from the SiO2-coated than the uncoated 65ZnO NPs. Urinary excretion of 65Zn was also much lower than fecal excretion post-gavage. The urinary excretion of 65Zn in rats gavaged with SiO2-coated 65ZnO NPs was significantly higher than in rats gavaged with uncoated 65ZnO NPs (Figure 7B). 
Synthesis and characterization of ZnO and SiO2-coated ZnO NPs:
Uncoated and SiO2-coated ZnO NPs were made by flame spray pyrolysis using the Versatile Engineered Nanomaterial Generation System at Harvard University [35],[17]. The detailed physicochemical and morphological characterization of these NPs was reported earlier [36],[17]. The ZnO primary NPs had a rod-like shape with an aspect ratio of 2:1 to 8:1 (Figure 1) [37],[17]. Flame-made nanoparticles typically exhibit a lognormal size distribution with a geometric standard deviation of σg = 1.45 [38]. The SiO2-coated ZnO nanorods were created by encapsulating the ZnO core in a nanothin (~4.6 ± 2.5 nm) amorphous SiO2 layer [17] (Figure 1B).
The amorphous nature of the silica coating was verified by X-ray diffraction (XRD) and electron microscopy analyses [17]. The average crystal sizes of the uncoated and SiO2-coated NPs were 29 and 28 nm, respectively [39]. Their specific surface areas (SSA) were 41 m2/g (uncoated) and 55 m2/g (SiO2-coated) [40]. The lower density of SiO2 compared to ZnO contributes to the higher SSA of the SiO2-coated ZnO relative to the uncoated NPs. The extent of the SiO2 coating was assessed by X-ray photoelectron spectroscopy and photocatalytic experiments. These data showed that less than 5% of the ZnO NPs were uncoated, as some of the freshly formed core ZnO NPs may escape the coating process [41],[17]. Furthermore, the ZnO dissolution of the SiO2-coated nanorods was significantly lower than that of the uncoated NPs in culture medium over 24 h [17]. The Zn2+ ion concentration reached equilibrium after 6 hours for the coated NPs (~20%), while the uncoated NPs dissolved at a constant rate up to 24 hours [17]. For both IT and gavage routes, the NPs were dispersed in deionized water by sonication at 242 J/ml. The hydrodynamic diameters were 165 ± 3 nm (SiO2-coated) and 221 ± 3 nm (uncoated). The zeta potential values in these suspensions were 23 ± 0.4 mV (uncoated) and −16.2 ± 1.2 mV (SiO2-coated). The zeta potential differences between these two types of NPs were observed over a pH range of 2.5-8.0 [17], which includes the pH conditions in the airways/alveoli and the small and large intestines. The post-irradiation hydrodynamic diameter and zeta potential in water suspension were similar to those of the pristine NPs used in the lung toxicity/inflammation experiments. Physicochemical characterization of test materials. Transmission electron micrograph of uncoated ZnO (A) and SiO2-coated ZnO (B) NPs. The thin silica coating of approximately 5 nm is shown in B, inset.
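The lognormal size statistics quoted above (σg = 1.45) can be made concrete with a short sketch. This is an illustration only: the count median diameter is assumed here to be near the ~29 nm crystal size, which is not a value reported in the characterization.

```python
import math

def lognormal_fraction(d_lo, d_hi, cmd, sigma_g):
    """Fraction of particles with diameter in [d_lo, d_hi] nm for a
    lognormal number distribution with count median diameter `cmd`
    and geometric standard deviation `sigma_g`."""
    def cdf(d):
        z = (math.log(d) - math.log(cmd)) / math.log(sigma_g)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return cdf(d_hi) - cdf(d_lo)

# Assumed CMD of 29 nm with the reported sigma_g of 1.45:
frac_20_40 = lognormal_fraction(20, 40, cmd=29, sigma_g=1.45)
```

Under these assumptions, about 65% of the particles fall between 20 and 40 nm; by construction, half of them lie below the median diameter.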
Pulmonary responses to intratracheally instilled ZnO and SiO2-coated ZnO:
We compared the pulmonary responses to uncoated versus SiO2-coated ZnO NPs at 24 hours after IT instillation in rats. Groups of 4–6 rats received 0, 0.2 or 1 mg/kg of either type of NP. We found that IT-instilled coated and uncoated ZnO NPs induced dose-dependent injury and inflammation, evident as increased neutrophils and elevated levels of myeloperoxidase (MPO), albumin and lactate dehydrogenase (LDH) in the bronchoalveolar lavage (BAL) fluid at 24 hours post-instillation (Figure 2). At the lower dose of 0.2 mg/kg, only the rats instilled with SiO2-coated ZnO (n = 4) showed elevated neutrophil, LDH, MPO, and albumin levels. At 1 mg/kg, however, both types of NPs induced injury and inflammation to the same extent, except that MPO was higher in rats instilled with SiO2-coated ZnO NPs. Cellular and biochemical parameters of lung injury and inflammation in bronchoalveolar lavage (BAL). Tracheally instilled ZnO and SiO2-coated ZnO induced dose-dependent lung injury and inflammation at 24 hours. (A) Significant increases in BAL neutrophils were observed at 1 mg/kg of both NPs (n = 6/group). At the lower dose of 0.2 mg/kg (n = 4-6/group), only the SiO2-coated ZnO (n = 4) induced significant neutrophil influx in the lungs. (B) Similarly, significant increases in LDH, myeloperoxidase and albumin were observed at 1 mg/kg of both NPs, and at 0.2 mg/kg of SiO2-coated ZnO. (*P < 0.05, vs. control; #P < 0.05, SiO2-coated ZnO versus ZnO).
Pharmacokinetics of intratracheally-instilled uncoated or SiO2-coated 65ZnO NPs:
Clearance of instilled uncoated or SiO2-coated 65ZnO NPs from the lungs is shown in Figure 3. Overall, both 65ZnO NPs and SiO2-coated 65ZnO NPs exhibited a biphasic clearance with a rapid initial phase (t1/2: 65ZnO = 0.3 hours; SiO2-coated 65ZnO = 0.2 hours) and a slower terminal phase (t1/2: 65ZnO = 42 hours; SiO2-coated 65ZnO = 29 hours).
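The biphasic clearance just described maps onto a standard biexponential retention model. The sketch below uses the reported half-lives for uncoated 65ZnO; the fast/slow phase split is an assumed illustrative value, since half-lives rather than phase fractions are reported.

```python
import math

def remaining_fraction(t_h, fast_frac, t_half_fast_h, t_half_slow_h):
    """Biexponential lung retention: fraction of the instilled dose
    remaining at time t (hours), modeled as a fast phase plus a slow
    terminal phase, each decaying with its own half-life."""
    k_fast = math.log(2) / t_half_fast_h
    k_slow = math.log(2) / t_half_slow_h
    return (fast_frac * math.exp(-k_fast * t_h)
            + (1.0 - fast_frac) * math.exp(-k_slow * t_h))

# Reported half-lives for uncoated 65ZnO (0.3 h fast, 42 h slow);
# the 65% fast-phase fraction is a hypothetical choice for illustration:
r_2d = remaining_fraction(48, fast_frac=0.65, t_half_fast_h=0.3, t_half_slow_h=42)
```

With this assumed split, about 16% of the dose remains at 2 days, in the range of the measured 16.1%.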
No significant difference was observed in the initial clearance between the two types of NPs. At 2 days, 18.1 ± 2.1% and 16.1 ± 2.0% of the dose remained in the lungs for the SiO2-coated and uncoated 65ZnO NPs, respectively. At 7 and 28 days post-IT instillation, we observed statistically significant but small differences. At 28 days, only 0.14 ± 0.01% of the SiO2-coated 65ZnO and 0.28 ± 0.05% of the uncoated 65ZnO NPs remained in the lungs. Lung clearance of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. The percentages of instilled 65Zn measured in the whole lungs are shown over a period of 28 days. The clearance of 65Zn was rapid, with only 16-18% of the dose remaining at 2 days. By day 7, only 1.1% (SiO2-coated 65ZnO NPs) and 1.9% (65ZnO NPs) were measured in the lungs. By the end of the experiment, 65Zn was nearly gone (less than 0.3% of the dose). Although statistically higher levels of 65ZnO NPs than of SiO2-coated 65ZnO NPs remained in the lungs at 7 and 28 days, the graphs show nearly identical clearance kinetics. (n = 8 rats at 5 minutes, 2 days, and 7 days; n = 5 at 28 days). However, analyses of the selected extrapulmonary tissues showed significant differences (Figure 4). Even at the earliest time point of 5 minutes post-IT instillation, significantly more 65Zn was detected in the blood (0.47% vs. 0.25%) and heart (0.03% vs. 0.01%) of rats instilled with the uncoated 65ZnO NPs. These tissue differences became more pronounced at later time points. At 2 days post-IT instillation, more 65Zn from uncoated 65ZnO NPs translocated to the blood, skeletal muscle, kidneys, heart, liver and cecum than from SiO2-coated 65ZnO NPs (Table 1). At 7 and 28 days, the overall differences in the 65Zn contents of these tissues remained the same. As shown in Tables 2 and 3, significantly higher fractions of the 65Zn from uncoated 65ZnO NPs than from SiO2-coated 65ZnO NPs were found in the blood, skeletal muscle, heart, liver and skin.
Interestingly, higher percentages of the 65Zn dose from the SiO2-coated 65ZnO NPs were found in the thoracic lymph nodes and bone marrow (Tables 2 and 4). Radioactive 65Zn levels decreased from 2 to 28 days in all tissues except bone, where they increased for both types of NPs. Additionally, we found that the total recovered 65Zn in examined tissues, feces and urine was significantly higher for uncoated than for SiO2-coated 65ZnO NPs (Tables 1, 2, 3 and Figure 5). Since the thoracic lymph nodes had higher 65Zn in the latter group at all time points (Tables 1, 2 and 3), we speculate that the unaccounted radioactivity may have been in other lymph nodes as well as organs not analyzed, such as adipose tissue, pancreas, adrenals, teeth, nails, tendons, and nasal tissues. Extrapulmonary distribution of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are % of instilled dose recovered in all secondary tissues examined. These included blood, thoracic lymph nodes, bone, bone marrow, skin, brain, skeletal muscle, testes, kidneys, heart, liver, and the gastrointestinal tract. There was rapid absorption and accumulation of 65Zn in secondary tissues. At day 2, 59-72% of the dose was detected in extrapulmonary organs. Then, 65Zn levels decreased over time to 25-37% by day 28. Significantly more 65Zn was detected in secondary organs at all time points in rats instilled with uncoated 65ZnO NPs. Tissue distribution of 65Zn at 2 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65Zn at 7 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % instilled dose, n = 8/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO.
#P < 0.05, SiO2-coated ZnO > ZnO. Tissue distribution of 65Zn at 28 days after intratracheal instillation of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % instilled dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO. Distribution of 65Zn 7 days after gavage administration of 65ZnO or SiO2-coated 65ZnO NPs in rats. Data are mean ± SE % gavaged dose, n = 5/group. Total recovered = sum of 65Zn in analyzed organs, feces and urine. *P < 0.05, ZnO > SiO2-coated ZnO. #P < 0.05, SiO2-coated ZnO > ZnO. Urinary excretion of 65Zn was much lower than fecal excretion in both groups. The urinary excretion of 65Zn in rats instilled with SiO2-coated 65ZnO NPs was significantly higher than in those instilled with uncoated 65ZnO NPs (Figure 5B). Although the fecal excretion rates appeared similar, slightly but significantly more 65Zn (50.04 ± 0.96% vs. 46.68 ± 0.76%) was eliminated via the feces over 28 days in rats instilled with uncoated 65ZnO NPs (Figure 5A). Fecal and urinary excretion of 65Zn post-IT instillation of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 28 days. The predominant excretion pathway was via the feces. Approximately half of the instilled 65Zn was excreted in the feces in both groups over 28 days (A). Only about 1% of the 65Zn dose was excreted in the urine (B).
Pharmacokinetics of gavaged uncoated or SiO2-coated 65ZnO NPs:
Absorption of 65Zn from the gut was studied at 5 minutes and 7 days post-gavage of uncoated or SiO2-coated 65ZnO NPs. Nearly 100% of the dose was recovered at 5 minutes in the stomach for both types of NPs (Figure 6A). The 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs).
However, significantly higher percentages of the total dose were still detected in the blood, bone marrow, skin, testes, kidneys, spleen and liver in rats gavaged with uncoated 65ZnO NPs (data not shown). After 7 days, low levels of 65Zn from both types of NPs (<1% of the original dose) were measured in all organs except the bone, skeletal muscle and skin (Figure 6B, Table 4). Higher levels of 65Zn were observed in the skeletal muscle from uncoated than from coated 65ZnO NPs at this time point (Table 4). However, similar to the IT-instillation data, the thoracic lymph nodes retained more 65Zn from the SiO2-coated than from the uncoated 65ZnO NPs. Urinary excretion of 65Zn was also much lower than fecal excretion post-gavage. The urinary excretion of 65Zn in rats gavaged with SiO2-coated 65ZnO NPs was significantly higher than in rats gavaged with uncoated 65ZnO NPs (Figure 7B). The fecal excretion in the gavaged rats was higher than in IT-instilled rats. Despite a significant difference in fecal excretion during the first day post-gavage, nearly 95% of the dose for both types of NPs was excreted in the feces by day 7 (Figure 7A). Tissue distribution of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are % dose of administered 65Zn in different organs. (A) At 5 minutes post-gavage, the 65Zn levels in tissues other than the gastrointestinal tract were much lower (0.3% for uncoated, 0.05% for coated 65ZnO NPs). (B) At day 7, significantly more 65Zn was absorbed and retained in non-GIT tissues (6.9% for uncoated, 6.0% for coated 65ZnO NPs). Significantly more 65Zn was measured in skeletal muscle in rats gavaged with uncoated versus coated 65ZnO NPs. (Note: RBC: red blood cell; sk muscl: skeletal muscle; sm int: small intestine; large int: large intestine). Fecal and urinary excretion of 65Zn post-gavage of 65ZnO and SiO2-coated 65ZnO NPs. Data are estimated cumulative urinary or fecal excretion of 65Zn over 7 days.
Similar to the IT-instilled groups, the predominant excretion pathway was via the feces. Ninety-five percent of the administered 65Zn was excreted in both groups by day 7 (A). Only 0.1% of the 65Zn dose was excreted in the urine (B).
Discussion:
Nanoparticles can be released into the workplace environment during the production and handling of nanomaterials [42]. For example, studies have shown that ZnO NPs were released during an abrasion test of commercially available two-pack polyurethane coatings containing ZnO NPs [43]. This suggests that NPs are likely to be emitted during activities related to the handling of nano-enabled products. In this study, we describe the acute pulmonary responses to ZnO NPs and the pharmacokinetics of Zn from ZnO or SiO2-coated ZnO NPs in male Wistar Han rats. To track Zn for biokinetic studies in rats, we neutron-activated the NPs, converting the stable isotope 64Zn into radioactive 65Zn, suitable for detection over long-term studies. The agglomerate size and zeta potential in water suspension were similar to those of pristine ZnO NPs. Using these radioactive NPs, we evaluated the influence of an amorphous silica coating on the clearance, bioavailability and excretion of 65Zn following intratracheal instillation and gavage of 65ZnO and SiO2-coated 65ZnO NPs. We have shown previously that hermetic encapsulation of ZnO NPs in a thin layer of amorphous SiO2 reduces the dissolution of Zn2+ ions in biological media, DNA damage in vitro [17] and cellular toxicity [36]. Since the SiO2 coating does not affect the optoelectronic properties of the ZnO core, such coatings may be employed in sunscreens and UV filters. This could be a strategy to reduce ZnO toxicity while maintaining the intended performance of ZnO NPs. Intratracheal instillation differs from inhalation exposure in terms of particle distribution, dose rate, clearance, NP agglomerate surface properties, and pattern of injury [44],[45]. A study by Baisch et al.
reported a higher inflammatory response following intratracheal instillation compared to whole-body inhalation for single and repeated exposures to titanium dioxide NPs when deposited doses were held constant [46]. Although IT instillation does not directly model inhalation exposure, it is a reliable method for administering a precise dose to the lungs for biokinetic studies. Based on our previous data [17], we hypothesized that the silica coating might mitigate zinc-induced lung injury and inflammation by reducing the availability of zinc ions. We have also shown that the pulmonary toxicity of inhaled nanoceria in rats was reduced when the same nanoceria carried an amorphous SiO2 coating. Surprisingly, the in vivo lung responses in the present study showed the opposite. Several previous studies have shown that amorphous silica can cause injury and inflammation when inhaled at high doses [47]–[51]. However, it has also been shown that the lung injury and inflammatory responses to amorphous silica are transient [27]. In this study, SiO2-coated ZnO NPs induced more lung injury/inflammation than uncoated ZnO, even at a low dose at which uncoated ZnO had no effects. Considering that the effective density of ZnO NPs is reduced by the silica coating (ZnO: 5.6 g/cm3 vs. SiO2-coated ZnO: estimated 4.1 g/cm3), it is possible that the particle number concentration is higher for an equivalent mass of coated NPs. It is also likely that the silica coating elicits more inflammation than the ZnO NPs. Silica may act in concert with dissolved Zn ions, causing more lung injury. Furthermore, surface coating with amorphous silica also changed the zeta potential of the ZnO NPs from positive (23.0 ± 0.4 mV, uncoated) to negative (−16.2 ± 1.2 mV, SiO2-coated), decreasing the likelihood of agglomeration and sedimentation of the SiO2-coated NP suspension in aqueous systems.
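The effective-density argument above can be quantified with a back-of-the-envelope count of particles per unit mass. The ~29 nm volume-equivalent sphere is an assumption for illustration; real nanorods and agglomerates would shift the absolute numbers but not the density ratio.

```python
import math

def particles_per_mg(density_g_cm3, diameter_nm):
    """Number of spheres of the given volume-equivalent diameter
    contained in 1 mg of material at the given density."""
    radius_cm = diameter_nm * 1e-7 / 2.0
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    particle_mass_g = density_g_cm3 * volume_cm3
    return 1e-3 / particle_mass_g

n_uncoated = particles_per_mg(5.6, 29)   # ZnO
n_coated = particles_per_mg(4.1, 29)     # SiO2-coated ZnO (estimated density)
ratio = n_coated / n_uncoated            # equals 5.6/4.1, about 1.37x more particles
```

At equal administered mass and equal particle size, the lower-density coated material therefore delivers roughly 37% more particles.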
The reduced agglomeration of the SiO2-coated ZnO NPs may increase the available NP surface area, which may facilitate biointeractions with lung cells and thus induce a higher toxic/inflammatory response. It has also been reported that surface charge may influence the lung translocation rates of NPs [52]. For example, the adsorption of endogenous proteins like albumin to the surface of charged NPs increases their hydrodynamic diameter and alters their translocation rate [53]. It has also been shown that zwitterionic cysteine and polar PEG ligands on the NP surface cause rapid translocation to the mediastinal lymph nodes. Additionally, a higher surface charge density has been shown to increase the adsorption of proteins on NPs [54], while zwitterionic or neutral organic coatings have been shown to prevent adsorption of serum proteins [18]. A recent study also showed that the nanoparticle protein corona can alter uptake by macrophages [55]. Our results demonstrate that ZnO and SiO2-coated ZnO NPs are both cleared rapidly and almost completely from the lungs by 28 days after IT instillation. In the lungs, NPs may be cleared via different pathways. They may be cleared by dissolution before or after alveolar macrophage uptake, by phagocytic cells in the lymph nodes, or by translocation across the alveolar epithelium into the blood circulation [56]. Since ZnO NPs have been shown to dissolve in culture medium and in endosomes [57], it is not surprising that lung clearance of 65ZnO NPs was rapid compared to that of poorly soluble NPs of cerium oxide [58] and titanium dioxide [59]. The clearance of radioactive 65Zn from the lungs includes translocation of the NPs themselves as well as dissolution of 65ZnO, which is an important clearance mechanism [60]. As shown previously, the silica coating reduced the dissolution of ZnO NPs in culture medium [17], suggesting that dissolution and clearance in vivo may also be reduced.
However, the silica coating appeared to modestly but significantly enhance the amount of 65Zn cleared at days 7 and 28. The significance of this observation needs further investigation. Despite similar clearance from the lungs over 28 days, translocation of 65Zn from uncoated ZnO NPs was significantly higher than from coated ZnO NPs in some of the examined extrapulmonary tissues, especially skeletal muscle. In these extrapulmonary tissues, the measured 65Zn is more likely to be dissolved Zn rather than intact 65ZnO. The amount of 65Zn was greatest in the skeletal muscle, liver, skin, and bone for both particle types. The selective retention of 65Zn in those tissues might be explained, in part, by the fact that 85% of total body zinc is present in skeletal muscle and bone [61]. There was clearance of 65Zn from most of the extrapulmonary tissues we examined over time (day 2 to day 28), except in bone, where 65Zn levels increased. The skin and skeletal muscle exhibited faster clearance with coated than with uncoated NPs. 65Zn from both particle types was largely excreted in the feces, presumably via pancreato-biliary secretion and, to a lesser extent, via mucociliary clearance of instilled NPs [62]. A study investigating the pharmacokinetic behavior of inhaled iridium NPs showed that they accumulated in soft connective tissue (2%) and bone, including bone marrow (5%) [63]. Although this study indicates that the SiO2 coating modestly reduces the translocation of 65Zn to the blood, skin, kidneys, heart, liver and skeletal muscle, it is unclear whether the SiO2-coated ZnO NPs dissolve at a different rate in vivo, and whether 65Zn is in particulate or ionic form when it reaches the circulation and bone. ZnO NPs have been shown to rapidly dissolve under acidic conditions (pH 4.5) and are more likely to remain intact at near-neutral pH [64]. 
It is likely that ZnO NPs entering the phagolysosomal compartments of alveolar macrophages or neutrophils encounter conditions favorable for dissolution. Our previous study suggested that the SiO2 coating is stable in vitro and exhibits low dissolution in biological media (<8% over 24 hours) [17]. Thus, it is possible that the SiO2-coated NPs remain in particulate form for a longer period of time. There are data showing that gold, silver, TiO2, polystyrene and carbon particles in the 5–100 nm size range can cross the air-blood barrier and reach the blood circulation and extrapulmonary organs [65]–[71]. The SiO2 coating significantly increased the levels of 65Zn in the bone and bone marrow (Table 3). We note that zinc is essential to the development and maintenance of bone. Zinc is known to play a major role in bone growth in mammals [72] and is required for protein synthesis in osteoblasts [73]. It can also inhibit the development of osteoclasts from bone marrow cells, thereby reducing bone resorption [74],[75]. Radioactive 65Zn from uncoated and coated 65ZnO NPs also translocated to the skin, skeletal muscle, liver, heart, small intestine, testes, and brain (but to a lesser extent than to the bone and bone marrow). It is important to note that of the 16 extrapulmonary tissues examined at 28 days after IT instillation, 4 had a higher 65Zn content from uncoated ZnO than from coated ZnO (blood, skin, skeletal muscle and heart) (Table 3). This suggests that an amorphous silica coating may reduce Zn retention and its potential toxicity when Zn accumulates at high levels in those organs. Whether coating modifications, such as thicker or different coatings, can further reduce Zn bioavailability warrants further investigation. Significantly more 65Zn from SiO2-coated ZnO was excreted in the urine, most likely as the ionic form of Zn. 
Oral exposure to ZnO NPs is relevant from an environmental health perspective. ZnO is widely used as a nutritional supplement and as a food additive [76]. Because zinc is an essential trace element, it is routinely added to animal food products and fertilizer [75]. Due to its antimicrobial properties, there is increasing interest in adding ZnO to polymers in food packaging and preservative films to prevent bacterial growth [77]. It is also possible that ZnO in sunscreens, ointments, and other cosmetics is accidentally ingested, especially by children. The biokinetic behavior of NPs in the gastrointestinal tract may be influenced by particle surface charge: positively charged particles are attracted to negatively charged mucus, while negatively charged particles directly contact epithelial cell surfaces [78]. A study by Paek et al. investigating the effect of surface charge on the biokinetics of Zn over 4 hours after oral administration of ZnO NPs showed that negatively charged NPs were absorbed more than positively charged ZnO NPs [79]; however, no effect on tissue distribution was observed. This is in contrast to our findings at 7 days post-gavage, when coating of ZnO NPs with amorphous SiO2 (negative zeta potential) increased retention in the thoracic lymph nodes compared to uncoated ZnO NPs (positive zeta potential). Our study also showed that low levels of 65Zn were retained in the blood, skeletal muscle, bone and skin from both coated and uncoated 65ZnO NPs (Table 4). Most of the gavaged dose (over 90%) was excreted in the feces by day 3, indicating rapid clearance of ZnO NPs, consistent with previous reports. Another study reported the pharmacokinetics of ZnO NPs (125, 250 and 500 mg/kg) after single and repeated (90-day) oral administration [80]. 
That study found that plasma Zn concentration increased significantly in a dose-dependent manner but decreased significantly within 24 hours post-administration, suggesting that the systemic clearance of ZnO NPs is rapid even at these high doses. In another study, Baek et al. examined the pharmacokinetics of 20 nm and 70 nm citrate-modified ZnO NPs at doses of 50, 300 and 2000 mg/kg [81]. Similar to our results, they showed that ZnO NPs were not readily absorbed into the bloodstream after single-dose oral administration. The tissue distributions of Zn from both 20 nm and 70 nm ZnO NPs were similar, mainly to the liver, lung and kidneys. The study also reported predominant excretion of Zn in the feces, with the smaller 20 nm particles cleared more rapidly than the 70 nm NPs. In summary, the results presented here show that uncoated 65ZnO NPs resulted in higher levels of 65Zn in multiple organs following intratracheal instillation or gavage, particularly in skeletal muscle. This suggests that coating with amorphous silica can reduce tissue Zn concentration and its potential toxicity. Interestingly, the bioavailability of Zn from SiO2-coated 65ZnO was higher in the thoracic lymph nodes and bone. Additionally, the excretion of 65Zn was higher for SiO2-coated 65ZnO NPs by both routes, suggesting enhanced hepatobiliary excretion. Our data indicate that silica coating alters the pharmacokinetic behavior of ZnO NPs, although the effect was not as dramatic as anticipated. With increasing physicochemical modification of NPs for specialized applications, it is necessary to understand how such modifications influence the fate, metabolism and toxicity of these nanoparticles. Conclusions: We examined the influence of a 4.5 nm amorphous SiO2 coating on the pharmacokinetics of 65Zn following IT instillation and gavage of neutron-activated ZnO NPs. The SiO2 coating did not affect the clearance of 65Zn from the lungs. 
However, the extrapulmonary translocation and distribution of 65Zn from coated versus uncoated 65ZnO NPs were significantly altered in some tissues. The SiO2 coating resulted in lower translocation of instilled 65Zn to the skeletal muscle, skin and heart. The SiO2 coating also reduced 65Zn translocation to skeletal muscle post-gavage. For both routes of administration, the SiO2 coating enhanced the transport of 65Zn to the thoracic lymph nodes. Methods: Synthesis of ZnO and SiO2-coated ZnO NPs The synthesis of these NPs was reported in detail elsewhere [17]. In brief, uncoated and SiO2-coated ZnO particles were synthesized by flame spray pyrolysis (FSP) of zinc naphthenate (Sigma-Aldrich, St. Louis, MO, USA) dissolved in ethanol (Sigma-Aldrich) at a precursor molarity of 0.5 M. The precursor solution was fed through a stainless steel capillary at 5 ml/min, dispersed by 5 L/min O2 (purity > 99%, pressure drop at nozzle tip: pdrop = 2 bar) (Air Gas, Berwyn, PA, USA) and combusted. A premixed methane-oxygen (1.5 L/min, 3.2 L/min) supporting flame was used to ignite the spray. Oxygen (Air Gas, purity > 99%) sheath gas was used at 40 L/min. Core particles were coated in-flight by the swirl-injection of hexamethyldisiloxane (HMDSO) (Sigma-Aldrich) through a torus ring with 16 jets at an injection height of 200 mm above the FSP burner. A total gas flow of 16 L/min, consisting of N2 carrying HMDSO vapor and pure N2, was injected through the torus ring jets. HMDSO vapor was obtained by bubbling N2 gas through liquid HMDSO (500 ml) maintained at a controlled temperature in a water bath. 
Characterization of ZnO and SiO2-coated ZnO NPs The morphology of these NPs was examined by electron microscopy. Uncoated and SiO2-coated ZnO NPs were dispersed in ethanol at a concentration of 1 mg/ml in 50 ml polyethylene conical tubes and sonicated at 246 J/ml (Branson Sonifier S-450A, Swedesboro, NJ, USA). The samples were deposited onto lacey carbon TEM grids, and all grids were imaged with a JEOL 2100. The primary particle size was determined by X-ray diffraction (XRD). XRD patterns for uncoated ZnO and SiO2-coated ZnO NPs were obtained using a Scintag XDS2000 powder diffractometer (Cu Kα, λ = 0.154 nm, 40 kV, 40 mA, step size = 0.02°). One hundred mg of each sample was placed onto the diffractometer stage and analyzed over the range 2θ = 20–70°. Major diffraction peaks were identified using the Inorganic Crystal Structure Database (ICSD) for wurtzite (ZnO) crystals. The crystal size was determined by applying the Debye-Scherrer equation to the Gaussian fit of the major diffraction peak. The specific surface area was obtained using the Brunauer-Emmett-Teller (BET) method. 
The samples were degassed in N2 for at least 1 hour at 150°C before obtaining five-point N2 adsorption at 77 K (Micromeritics Tristar 3000, Norcross, GA, USA). Neutron activation of NPs The NPs with and without the SiO2 coating were neutron-activated at the Massachusetts Institute of Technology (MIT) Nuclear Reactor Laboratory (Cambridge, MA). Samples were irradiated with a thermal neutron flux of 5 × 1013 n/cm2s for 120 hours. The resulting 65Zn radioisotope has a half-life of 244.3 days and a primary gamma energy peak of 1115 keV. The relative specific activities for 65Zn were 37.7 ± 5.0 kBq/mg for SiO2-coated 65ZnO and 41.7 ± 7.2 kBq/mg for uncoated 65ZnO NPs. 
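The Debye-Scherrer crystallite-size estimate used for the XRD analysis above can be sketched as follows. The peak position is a typical ZnO wurtzite (101) reflection and the 0.25° FWHM is a hypothetical input for illustration; neither is a measured value from this study.

```python
# Debye-Scherrer estimate of crystallite size from an XRD peak:
# size = K * lambda / (beta * cos(theta)), beta = FWHM in radians.
import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.154, k: float = 0.9) -> float:
    beta = math.radians(fwhm_deg)            # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# ZnO wurtzite (101) reflection near 2-theta = 36.3 deg with an
# assumed 0.25 deg FWHM (hypothetical) gives roughly 33 nm:
print(round(scherrer_size_nm(0.25, 36.3), 1))
```

Note that the shape factor K (0.9 here) depends on crystallite geometry, so Scherrer sizes are order-of-magnitude estimates rather than exact diameters.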
Preparation and characterization of ZnO and SiO2-coated ZnO nanoparticle suspensions Uncoated and SiO2-coated ZnO NPs were dispersed using a protocol previously described [82],[36]. The NPs were dispersed in deionized water at a concentration of 0.66 mg/ml (IT) or 10 mg/ml (gavage). Sonication was performed in deionized water to minimize the formation of reactive oxygen species. Samples were thoroughly mixed immediately prior to instillation. Dispersions of NPs were analyzed for hydrodynamic diameter (dH), polydispersity index (PdI), and zeta potential (ζ) by dynamic light scattering (DLS) using a Zetasizer Nano-ZS (Malvern Instruments, Worcestershire, UK). Animals The protocols used in this study were approved by the Harvard Medical Area Animal Care and Use Committee. Nine-week-old male Wistar Han rats were purchased from Charles River Laboratories (Wilmington, MA). 
Rats were housed in pairs in polypropylene cages and allowed to acclimate for 1 week before the studies were initiated. Rats were maintained on a 12-hour light/dark cycle, and food and water were provided ad libitum. Pulmonary responses – Bronchoalveolar lavage and analyses This experiment was performed to determine pulmonary responses to instilled NPs. A group of rats (mean wt. 264 ± 15 g) was intratracheally instilled with either an uncoated ZnO or SiO2-coated ZnO NP suspension at a 0, 0.2 or 1.0 mg/kg dose. The particle suspensions were delivered to the lungs through the trachea in a volume of 1.5 ml/kg. Twenty-four hours later, rats were euthanized via exsanguination through a cut in the abdominal aorta while under anesthesia. The trachea was exposed and cannulated, and the lungs were lavaged 12 times with 3 ml of 0.9% sterile PBS without calcium and magnesium ions. The cells of all washes were separated from the supernatant by centrifugation (350 × g at 4°C for 10 min). Total cell count and hemoglobin measurements were made from the cell pellets, and a differential cell count was performed after staining the cells. The supernatant of the first two washes was clarified via centrifugation (14,500 × g at 4°C for 30 min) and used for standard spectrophotometric assays for lactate dehydrogenase (LDH), myeloperoxidase (MPO) and albumin. Pharmacokinetics of 65Zn The mean weight of rats at the start of the experiment was 285 ± 3 g. Two groups of rats (29 rats/NP) were intratracheally instilled with 65ZnO NPs or with SiO2-coated 65ZnO NPs at a 1 mg/kg dose (1.5 ml/kg, 0.66 mg/ml). Rats were placed in metabolic cages containing food and water, as previously described. Twenty-four-hour samples of feces and urine were collected at selected time points (0–24 hours, 2–3 days, 6–7 days, 9–10 days, 13–14 days, 20–21 days, and 27–28 days post-IT instillation). Fecal/urine collection was accomplished by placing each rat in an individual metabolic cage containing food and water during each 24-hour period. All samples were analyzed for total 65Zn activity, expressed as % of the instilled 65Zn dose. Fecal and urine clearance curves were generated and used to estimate the daily cumulative excretion. Groups of 8 rats were humanely sacrificed at 5 minutes, 2 days and 7 days, and 5 rats/group at 28 days. 
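The dosing arithmetic used in these experiments follows directly from the suspension concentrations and administration volumes stated in the text, and can be cross-checked as:

```python
# Cross-check of the dosing arithmetic stated in the methods:
# delivered dose (mg/kg) = suspension concentration (mg/ml) x volume (ml/kg).
def dose_mg_per_kg(conc_mg_ml: float, volume_ml_kg: float) -> float:
    return conc_mg_ml * volume_ml_kg

it_dose = dose_mg_per_kg(0.66, 1.5)      # IT instillation: 0.99 ~ 1 mg/kg
gavage_dose = dose_mg_per_kg(10.0, 0.5)  # gavage: 5 mg/kg
print(it_dose, gavage_dose)
```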
Therefore, the number of collected fecal/urine samples decreased over time. Another cohort of 20 rats was dosed with 65ZnO (n = 10) or SiO2-coated 65ZnO (n = 10) by gavage at a 5 mg/kg dose (0.5 ml/kg, 10 mg/ml). One group of 5 rats was humanely sacrificed at 5 minutes and immediately dissected. Another group of 5 rats was individually placed in metabolic cages, as previously described, and 24-hour samples of urine and feces were collected at 0–1 day, 2–3 days, and 6–7 days post-gavage. The remaining rats were sacrificed at 7 days. At each endpoint, rats were euthanized and dissected, and the whole brain, spleen, kidneys, heart, liver, lungs, GI tract, testes, thoracic lymph nodes, blood (10 ml, separated into plasma and RBC), bone marrow (from femoral bones), bone (both femurs), skin (2 × 3 inches), and skeletal muscle (from 4 sites) were collected. The 65Zn radioactivity present in each sample was measured with a WIZARD Gamma Counter (PerkinElmer, Inc., Waltham, MA). The number of disintegrations per minute was determined from the counts per minute and the counting efficiency. The efficiency of the gamma counter was derived from counting multiple aliquots of NP samples and relating them to the specific activities measured at the Massachusetts Institute of Technology Nuclear Reactor. We estimated that the counter had an efficiency of ~52%. The 65Zn radioactivity was expressed as kBq/g tissue and as the percentage of the administered dose in each organ. All radioactivity data were adjusted for physical decay over the entire observation period. The radioactivity in organs and tissues not measured in their entirety was estimated as a percentage of total body weight, as follows: skeletal muscle, 40%; bone marrow, 3.2%; peripheral blood, 7%; skin, 19%; and bone, 6% [83],[84]. Based on the 65Zn specific activity (kBq/mg NP) and tissue 65Zn concentration, the amount of Zn derived from each NP was calculated for each tissue examined (ng Zn/g tissue). 
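The activity bookkeeping described above (counts to dpm, decay correction, whole-organ scaling, percent of instilled dose) can be sketched as below. The counting efficiency (~52%), the 244.3-day 65Zn half-life, the tissue mass fractions, and the specific activity come from the text; the sample count rate and body weight are hypothetical inputs for illustration.

```python
# Sketch of the activity bookkeeping: counts -> dpm -> decay-corrected
# kBq -> whole-organ estimate -> percent of instilled dose.
HALF_LIFE_DAYS = 244.3  # physical half-life of 65Zn
EFFICIENCY = 0.52       # estimated gamma-counter efficiency

def dpm_from_cpm(cpm: float) -> float:
    """Disintegrations per minute from counts per minute."""
    return cpm / EFFICIENCY

def decay_correct(activity: float, days_elapsed: float) -> float:
    """Back-correct a measured activity to the administration date."""
    return activity * 2 ** (days_elapsed / HALF_LIFE_DAYS)

def whole_organ(activity_per_g: float, body_wt_g: float,
                mass_fraction: float) -> float:
    """Scale a per-gram activity to an organ measured only in part,
    e.g. skeletal muscle assumed to be 40% of body weight."""
    return activity_per_g * body_wt_g * mass_fraction

# Hypothetical muscle aliquot: 1200 cpm/g measured 28 days post-IT.
dpm_per_g = dpm_from_cpm(1200.0)
kbq_per_g = decay_correct(dpm_per_g / 60.0 / 1000.0, 28.0)  # dpm -> kBq
muscle_kbq = whole_organ(kbq_per_g, body_wt_g=285.0, mass_fraction=0.40)
instilled_kbq = 41.7 * (1.0 * 0.285)  # 41.7 kBq/mg x 1 mg/kg x 0.285 kg
print(100.0 * muscle_kbq / instilled_kbq)  # percent of instilled dose
```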
Statistical analyses Differences in the 65Zn tissue distribution and in cellular and biochemical parameters measured in bronchoalveolar lavage between groups were analyzed using multivariate analysis of variance (MANOVA) with REGWQ (Ryan-Einot-Gabriel-Welsch based on range) and Tukey post hoc tests using SAS Statistical Analysis software (SAS Institute, Cary, NC). The lung clearance half-life was estimated by fitting a two-phase (biexponential) model using R v. 3.1.0 [85]. 
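As a minimal sketch of the half-life estimation (the full analysis fits a biexponential model in R), the slow terminal phase can be extracted from late-time retention data by log-linear regression, assuming the fast phase has already decayed away. The data points below are synthetic, generated with a 1.7-day terminal half-life to match the reported uncoated-ZnO value.

```python
# Terminal-phase half-life from lung retention data, assuming the late
# points are log-linear: fit ln(retention) vs time, then t1/2 = ln2 / k.
import math

def terminal_half_life(times_d, retention):
    """Least-squares slope of ln(retention) vs time gives -k."""
    logs = [math.log(r) for r in retention]
    n = len(times_d)
    t_mean = sum(times_d) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times_d, logs))
             / sum((t - t_mean) ** 2 for t in times_d))
    return math.log(2) / -slope

# Synthetic late-phase points decaying with t1/2 = 1.7 days:
times = [2.0, 4.0, 7.0]
ret = [100.0 * 0.5 ** (t / 1.7) for t in times]
print(round(terminal_half_life(times, ret), 2))  # recovers 1.7
```

A full biexponential fit additionally estimates the fast component (here on the order of 0.2-0.3 hours), which log-linear regression on late points deliberately ignores.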
Based on the 65Zn specific activity (kBq/mg NP) and the tissue 65Zn concentration, the amount of Zn derived from each NP was calculated for each tissue examined (ng Zn/g tissue). Statistical analyses: Differences between groups in the 65Zn tissue distribution and in the cellular and biochemical parameters measured in bronchoalveolar lavage were analyzed using multivariate analysis of variance (MANOVA) with REGWQ (Ryan-Einot-Gabriel-Welch based on range) and Tukey post hoc tests using SAS Statistical Analysis software (SAS Institute, Cary, NC). The lung clearance half-life was estimated by two-phase estimation with a biexponential model using the R Program v. 3.1.0 [85]. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: NVK, KMM, RMM, and JDB designed and performed the lung toxicity and pharmacokinetic studies. TCD performed statistical analyses. PD and GAS synthesized and characterized the NPs. The manuscript was written by NVK, RMM, and KMM and revised by JDB, GS, PD and RMM. All authors read, corrected and approved the manuscript.
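The authors fitted the two-phase (biexponential) lung clearance model in R. The sketch below illustrates the same idea with the classic curve-peeling approach, assuming noiseless retention data of the form A1·exp(-k1·t) + A2·exp(-k2·t); this is our illustration, not the authors' code:

```python
import math

def _linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def peel_biexponential(times, retention, n_early=3, n_terminal=3):
    """Estimate (fast_t_half, slow_t_half) in the units of `times` by curve peeling."""
    # 1. Fit the terminal (slow) phase on the last points in log space.
    ks, bs = _linfit(times[-n_terminal:],
                     [math.log(r) for r in retention[-n_terminal:]])
    slow_t_half = math.log(2) / -ks
    # 2. Peel off the slow phase and fit the early residual (fast phase).
    fast_pts = []
    for t, r in zip(times[:n_early], retention[:n_early]):
        resid = r - math.exp(bs + ks * t)
        if resid > 0:
            fast_pts.append((t, math.log(resid)))
    kf, _ = _linfit([t for t, _ in fast_pts], [y for _, y in fast_pts])
    return math.log(2) / -kf, slow_t_half
```

With noisy data a nonlinear least-squares fit (as in R's `nls`) is the more robust choice; curve peeling mainly serves to show the two-phase structure and to seed such a fit.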
Background: Nanoparticle pharmacokinetics and biological effects are influenced by several factors. We assessed the effects of amorphous SiO2 coating on the pharmacokinetics of zinc oxide nanoparticles (ZnO NPs) following intratracheal (IT) instillation and gavage in rats. Methods: Uncoated and SiO2-coated ZnO NPs were neutron-activated and IT-instilled at 1 mg/kg or gavaged at 5 mg/kg. Rats were followed over 28 days post-IT, and over 7 days post-gavage. Tissue samples were analyzed for 65Zn radioactivity. Pulmonary responses to instilled NPs were also evaluated at 24 hours. Results: SiO2-coated ZnO elicited significantly higher inflammatory responses than uncoated NPs. Pulmonary clearance of both 65ZnO NPs was biphasic with a rapid initial t1/2 (0.2 - 0.3 hours), and a slower terminal t1/2 of 1.2 days (SiO2-coated ZnO) and 1.7 days (ZnO). Both NPs were almost completely cleared by day 7 (>98%). With IT-instilled 65ZnO NPs, significantly more 65Zn was found in skeletal muscle, liver, skin, kidneys, cecum and blood on day 2 in uncoated than SiO2-coated NPs. By 28 days, extrapulmonary levels of 65Zn from both NPs significantly decreased. However, 65Zn levels in skeletal muscle, skin and blood remained higher from uncoated NPs. Interestingly, 65Zn levels in bone marrow and thoracic lymph nodes were higher from coated 65ZnO NPs. More 65Zn was excreted in the urine from rats instilled with SiO2-coated 65ZnO NPs. After 7 days post-gavage, only 7.4% (uncoated) and 6.7% (coated) of 65Zn dose were measured in all tissues combined. As with instilled NPs, after gavage significantly more 65Zn was measured in skeletal muscle from uncoated NPs and less in thoracic lymph nodes. More 65Zn was excreted in the urine and feces with coated than uncoated 65ZnO NPs. However, over 95% of the total dose of both NPs was eliminated in the feces by day 7. Conclusions: Although SiO2-coated ZnO NPs were more inflammogenic, the overall lung clearance rate was not affected. 
However, SiO2 coating altered the tissue distribution of 65Zn in some extrapulmonary tissues. For both IT instillation and gavage administration, SiO2 coating enhanced transport of 65Zn to thoracic lymph nodes and decreased transport to the skeletal muscle.
Background: Zinc oxide nanoparticles (ZnO NPs) are widely used in consumer products, including ceramics, cosmetics, plastics, sealants, toners and foods [1]. They are a common component in a range of technologies, including sensors, light-emitting diodes, and solar cells, due to their semiconducting and optical properties [2]. ZnO NPs filter both UV-A and UV-B radiation but remain transparent in the visible spectrum [3]. For this reason, ZnO NPs are commonly added to sunscreens [4] and other cosmetic products. Furthermore, advanced technologies have made the large-scale production of ZnO NPs possible [5]. Health concerns have been raised due to the growing evidence of the potential toxicity of ZnO NPs. Reduced pulmonary function in humans was observed 24 hours after inhalation of ultrafine (<100 nm) ZnO [6]. ZnO has also been shown to cause DNA damage in HepG2 cells and neurotoxicity due to the formation of reactive oxygen species (ROS) [7],[8]. Recently, we and others have demonstrated that ZnO NPs can cause DNA damage in TK6 and H9T3 cells [9],[10]. ZnO NPs dissolve in aqueous solutions, releasing Zn2+ ions that may in turn cause cytotoxicity and DNA damage to cells [9],[11]–[13]. Studies have shown that changing the surface characteristics of certain NPs may alter the biologic responses of cells [14],[15]. Developing strategies to reduce the toxicity of ZnO NPs without changing their core properties (a safer-by-design approach) is an active area of research. Xia et al. [16] showed that doping ZnO NPs with iron could reduce the rate of ZnO dissolution and the toxic effects in zebrafish embryos and in rat and mouse lungs. We also showed that encapsulation of ZnO NPs with amorphous SiO2 reduced the dissolution of Zn2+ ions in biological media, and reduced cell cytotoxicity and DNA damage in vitro [17]. Surface characteristics of NPs, such as their chemical and molecular structure, influence their pharmacokinetic behavior [18]–[20]. 
Surface chemistry influences the adsorption of phospholipids, proteins and other components of lung surfactant in the formation of a particle corona, which may regulate overall nanoparticle pharmacokinetics and biological responses [19]. Coronas have been shown to influence the dynamics of cellular uptake, localization, biodistribution, and biological effects of NPs [21],[22]. Coating NPs with amorphous silica is a promising technique to enhance colloidal stability and biocompatibility for theranostics [23],[24]. A recent study by Chen et al. showed that coating gold nanorods with silica can amplify the photoacoustic response without altering optical absorption [25]. Furthermore, coating magnetic NPs with amorphous silica enhances particle stability and reduces their cytotoxicity in a human bronchial epithelium cell line model [26]. Amorphous SiO2 is generally considered relatively biologically inert [27], and is commonly used in cosmetic and personal care products, and as a negative control in some nanoparticle toxicity screening assays [28]. However, Napierska et al. demonstrated the size-dependent cytotoxic effects of amorphous silica in vitro [29]. They concluded that the surface area of amorphous silica is an important determinant of cytotoxicity. An in vivo study using a rat model demonstrated that the pulmonary toxicity and inflammatory responses to amorphous silica are transient [30]. Moreover, SiO2-coated nanoceria induced minimal lung injury and inflammation [31]. It has also been demonstrated that SiO2 coating improves nanoparticle biocompatibility in vitro for a variety of nanomaterials, including Ag [32], Y2O3 [33], and ZnO [17]. We have recently developed methods for the gas-phase synthesis of metal and metal oxide NPs by a modified flame spray pyrolysis (FSP) reactor. Coating metal oxide NPs with amorphous SiO2 involves the encapsulation of the core NPs in flight with a nanothin amorphous SiO2 layer [34]. 
An important advantage of flame-made NPs is their high purity. Flame synthesis is a high-temperature process that leaves no organic contamination on the particle surface. Furthermore, the presence of SiO2 does not influence the optoelectronic properties of the core ZnO nanorods. Thus, they retain their desired high transparency in the visible spectrum and UV absorption, rendering them suitable for UV-blocking applications [17]. The SiO2 coating has been demonstrated to reduce ZnO nanorod toxicity by mitigating their dissolution and generation of ions in solution, and by preventing immediate contact between the core particle and mammalian cells. For ZnO NPs, such a hermetic SiO2 coating reduces ZnO dissolution while preserving the optical properties and band-gap energy of the ZnO core [17]. Studies examining nanoparticle structure-pharmacokinetic relationships have established that plasma protein binding profiles correlate with circulation half-lives [27]. However, studies evaluating the relationship between surface modifications, lung clearance kinetics, and pulmonary effects are lacking. Thus, we sought to study the effects of an amorphous SiO2 coating on the pulmonary effects of ZnO and on the pharmacokinetics of 65Zn when radioactive 65ZnO and SiO2-coated 65ZnO nanorods are administered by intratracheal (IT) instillation and gavage. We explored how the SiO2 coating affected acute toxicity and inflammatory responses in the lungs, as well as 65Zn clearance and tissue distribution, after IT instillation over a period of 28 days. The translocation of 65Zn from the stomach to other organs was also quantified for up to 7 days after gavage. Finally, we examined how the SiO2 coating affected the urinary and fecal excretion of 65Zn during the entire observation period. Conclusions: We examined the influence of a 4.5 nm SiO2 coating on ZnO NPs on the 65Zn pharmacokinetics following IT instillation and gavage of neutron-activated NPs. 
The SiO2 coating does not affect the clearance of 65Zn from the lungs. However, the extrapulmonary translocation and distribution of 65Zn from coated versus uncoated 65ZnO NPs were significantly altered in some tissues. The SiO2 coating resulted in lower translocation of instilled 65Zn to the skeletal muscle, skin and heart. The SiO2 coating also reduced 65Zn translocation to skeletal muscle post-gavage. For both routes of administration, the SiO2 coating enhanced the transport of 65Zn to the thoracic lymph nodes.
Background: Nanoparticle pharmacokinetics and biological effects are influenced by several factors. We assessed the effects of amorphous SiO2 coating on the pharmacokinetics of zinc oxide nanoparticles (ZnO NPs) following intratracheal (IT) instillation and gavage in rats. Methods: Uncoated and SiO2-coated ZnO NPs were neutron-activated and IT-instilled at 1 mg/kg or gavaged at 5 mg/kg. Rats were followed over 28 days post-IT, and over 7 days post-gavage. Tissue samples were analyzed for 65Zn radioactivity. Pulmonary responses to instilled NPs were also evaluated at 24 hours. Results: SiO2-coated ZnO elicited significantly higher inflammatory responses than uncoated NPs. Pulmonary clearance of both 65ZnO NPs was biphasic with a rapid initial t1/2 (0.2 - 0.3 hours), and a slower terminal t1/2 of 1.2 days (SiO2-coated ZnO) and 1.7 days (ZnO). Both NPs were almost completely cleared by day 7 (>98%). With IT-instilled 65ZnO NPs, significantly more 65Zn was found in skeletal muscle, liver, skin, kidneys, cecum and blood on day 2 in uncoated than SiO2-coated NPs. By 28 days, extrapulmonary levels of 65Zn from both NPs significantly decreased. However, 65Zn levels in skeletal muscle, skin and blood remained higher from uncoated NPs. Interestingly, 65Zn levels in bone marrow and thoracic lymph nodes were higher from coated 65ZnO NPs. More 65Zn was excreted in the urine from rats instilled with SiO2-coated 65ZnO NPs. After 7 days post-gavage, only 7.4% (uncoated) and 6.7% (coated) of 65Zn dose were measured in all tissues combined. As with instilled NPs, after gavage significantly more 65Zn was measured in skeletal muscle from uncoated NPs and less in thoracic lymph nodes. More 65Zn was excreted in the urine and feces with coated than uncoated 65ZnO NPs. However, over 95% of the total dose of both NPs was eliminated in the feces by day 7. Conclusions: Although SiO2-coated ZnO NPs were more inflammogenic, the overall lung clearance rate was not affected. 
However, SiO2 coating altered the tissue distribution of 65Zn in some extrapulmonary tissues. For both IT instillation and gavage administration, SiO2 coating enhanced transport of 65Zn to thoracic lymph nodes and decreased transport to the skeletal muscle.
17,840
444
19
[ "nps", "zno", "sio2", "coated", "65zn", "sio2 coated", "65zno", "uncoated", "days", "65zno nps" ]
[ "test", "test" ]
[CONTENT] Zinc oxide | Nanoparticles | Pharmacokinetics | Bioavailability | Silica coating | Nanotoxicology [SUMMARY]
[CONTENT] Zinc oxide | Nanoparticles | Pharmacokinetics | Bioavailability | Silica coating | Nanotoxicology [SUMMARY]
[CONTENT] Zinc oxide | Nanoparticles | Pharmacokinetics | Bioavailability | Silica coating | Nanotoxicology [SUMMARY]
[CONTENT] Zinc oxide | Nanoparticles | Pharmacokinetics | Bioavailability | Silica coating | Nanotoxicology [SUMMARY]
[CONTENT] Zinc oxide | Nanoparticles | Pharmacokinetics | Bioavailability | Silica coating | Nanotoxicology [SUMMARY]
[CONTENT] Zinc oxide | Nanoparticles | Pharmacokinetics | Bioavailability | Silica coating | Nanotoxicology [SUMMARY]
[CONTENT] Administration, Oral | Animals | Biological Availability | Half-Life | Inhalation Exposure | Lung | Lymph Nodes | Male | Metabolic Clearance Rate | Muscle, Skeletal | Nanoparticles | Pneumonia | Rats | Rats, Wistar | Silicon Dioxide | Tissue Distribution | Zinc Oxide [SUMMARY]
[CONTENT] Administration, Oral | Animals | Biological Availability | Half-Life | Inhalation Exposure | Lung | Lymph Nodes | Male | Metabolic Clearance Rate | Muscle, Skeletal | Nanoparticles | Pneumonia | Rats | Rats, Wistar | Silicon Dioxide | Tissue Distribution | Zinc Oxide [SUMMARY]
[CONTENT] Administration, Oral | Animals | Biological Availability | Half-Life | Inhalation Exposure | Lung | Lymph Nodes | Male | Metabolic Clearance Rate | Muscle, Skeletal | Nanoparticles | Pneumonia | Rats | Rats, Wistar | Silicon Dioxide | Tissue Distribution | Zinc Oxide [SUMMARY]
[CONTENT] Administration, Oral | Animals | Biological Availability | Half-Life | Inhalation Exposure | Lung | Lymph Nodes | Male | Metabolic Clearance Rate | Muscle, Skeletal | Nanoparticles | Pneumonia | Rats | Rats, Wistar | Silicon Dioxide | Tissue Distribution | Zinc Oxide [SUMMARY]
[CONTENT] Administration, Oral | Animals | Biological Availability | Half-Life | Inhalation Exposure | Lung | Lymph Nodes | Male | Metabolic Clearance Rate | Muscle, Skeletal | Nanoparticles | Pneumonia | Rats | Rats, Wistar | Silicon Dioxide | Tissue Distribution | Zinc Oxide [SUMMARY]
[CONTENT] Administration, Oral | Animals | Biological Availability | Half-Life | Inhalation Exposure | Lung | Lymph Nodes | Male | Metabolic Clearance Rate | Muscle, Skeletal | Nanoparticles | Pneumonia | Rats | Rats, Wistar | Silicon Dioxide | Tissue Distribution | Zinc Oxide [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] nps | zno | sio2 | coated | 65zn | sio2 coated | 65zno | uncoated | days | 65zno nps [SUMMARY]
[CONTENT] nps | zno | sio2 | coated | 65zn | sio2 coated | 65zno | uncoated | days | 65zno nps [SUMMARY]
[CONTENT] nps | zno | sio2 | coated | 65zn | sio2 coated | 65zno | uncoated | days | 65zno nps [SUMMARY]
[CONTENT] nps | zno | sio2 | coated | 65zn | sio2 coated | 65zno | uncoated | days | 65zno nps [SUMMARY]
[CONTENT] nps | zno | sio2 | coated | 65zn | sio2 coated | 65zno | uncoated | days | 65zno nps [SUMMARY]
[CONTENT] nps | zno | sio2 | coated | 65zn | sio2 coated | 65zno | uncoated | days | 65zno nps [SUMMARY]
[CONTENT] zno | amorphous | nps | coating | zno nps | effects | demonstrated | sio2 | amorphous silica | silica [SUMMARY]
[CONTENT] rats | ml | days | min | mg | samples | zno | nps | 10 | 65zn [SUMMARY]
[CONTENT] nps | zno | coated | sio2 | 65zno | 65zn | 65zno nps | sio2 coated | uncoated | days [SUMMARY]
[CONTENT] 65zn | sio2 coating | coating | translocation | sio2 | muscle | skeletal muscle | skeletal | nps | skeletal muscle skin heart [SUMMARY]
[CONTENT] nps | zno | 65zn | sio2 | coated | sio2 coated | rats | 65zno | uncoated | days [SUMMARY]
[CONTENT] nps | zno | 65zn | sio2 | coated | sio2 coated | rats | 65zno | uncoated | days [SUMMARY]
[CONTENT] ||| SiO2 [SUMMARY]
[CONTENT] ZnO NPs | 1 mg/kg | 5 mg/kg ||| 28 days | 7 days ||| 65Zn ||| NPs | 24 hours [SUMMARY]
[CONTENT] SiO2 | ZnO ||| Pulmonary | 65ZnO NPs | 0.2 - 0.3 hours | 1.2 days | SiO2 | 1.7 days ||| NPs | day 7 | 98% ||| 65ZnO NPs | day 2 | SiO2 | NPs ||| 28 days | 65Zn | NPs ||| NPs ||| 65ZnO ||| SiO2 | 65ZnO ||| 7 days | only 7.4% | 6.7% | 65Zn ||| NPs | NPs ||| ||| over 95% | NPs | day 7 [SUMMARY]
[CONTENT] SiO2 | ZnO NPs ||| SiO2 | 65Zn ||| SiO2 | 65Zn [SUMMARY]
[CONTENT] ||| SiO2 ||| ZnO NPs | 1 mg/kg | 5 mg/kg ||| 28 days | 7 days ||| 65Zn ||| NPs | 24 hours ||| ZnO ||| Pulmonary | 65ZnO NPs | 0.2 - 0.3 hours | 1.2 days | SiO2 | 1.7 days ||| NPs | day 7 | 98% ||| 65ZnO NPs | day 2 | SiO2 | NPs ||| 28 days | 65Zn | NPs ||| NPs ||| 65ZnO ||| SiO2 | 65ZnO ||| 7 days | only 7.4% | 6.7% | 65Zn ||| NPs | NPs ||| ||| over 95% | NPs | day 7 ||| ZnO NPs ||| SiO2 | 65Zn ||| SiO2 | 65Zn [SUMMARY]
[CONTENT] ||| SiO2 ||| ZnO NPs | 1 mg/kg | 5 mg/kg ||| 28 days | 7 days ||| 65Zn ||| NPs | 24 hours ||| ZnO ||| Pulmonary | 65ZnO NPs | 0.2 - 0.3 hours | 1.2 days | SiO2 | 1.7 days ||| NPs | day 7 | 98% ||| 65ZnO NPs | day 2 | SiO2 | NPs ||| 28 days | 65Zn | NPs ||| NPs ||| 65ZnO ||| SiO2 | 65ZnO ||| 7 days | only 7.4% | 6.7% | 65Zn ||| NPs | NPs ||| ||| over 95% | NPs | day 7 ||| ZnO NPs ||| SiO2 | 65Zn ||| SiO2 | 65Zn [SUMMARY]
Factors influencing compliance with infection control practice in Japanese dentists.
24463798
In recent years, dentists have had more opportunities to treat patients infected with blood-borne pathogens. Although compliance with infection control practice (ICP) in dental practice is required, it is still not sufficiently widespread in Japan.
BACKGROUND
In a questionnaire-based cross-sectional study in 2009, 2134 dentists in Aichi prefecture, Japan, were surveyed. They were asked for their demographic characteristics, willingness to treat HIV/AIDS patients, and knowledge about universal/standard precautions and ICP.
METHODS
Many ICP items were significantly associated with age, specialty in oral surgery, number of patients treated per day, willingness to treat HIV/AIDS patients, and knowledge about the universal/standard precautions. In the logistic regression model, knowledge about the precautions was significantly associated with all ICP items. Among participants with disadvantageous characteristics for ICP (ie, age ≥50 years, being a general dentist, and treating ≤35 patients/day), knowledge about the universal/standard precautions had a greater impact on exchanging the handpiece for each patient and installing extra-oral vacuum aspiration in those aged ≥50 years than in those who treated ≤35 patients per day.
RESULTS
Knowledge about the meaning of universal/standard precautions is the most significant predictor of compliance with ICPs among Japanese dentists.
CONCLUSION
[ "Attitude of Health Personnel", "Cross-Sectional Studies", "Dentists", "Female", "Guideline Adherence", "Humans", "Infection Control", "Japan", "Male", "Middle Aged", "Surveys and Questionnaires" ]
7767590
Introduction
Occupational exposure to blood and body fluids is a serious concern for health care workers (HCWs) and presents a major risk for the transmission of infections such as human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV).1,2 The Occupational Safety and Health Administration (OSHA) published its blood-borne pathogens rule in 1991,3 which requires training of all workers at risk, implementation of universal precautions, and monitoring of compliance. In 1996, the US Centers for Disease Control and Prevention (CDC) combined universal precautions with body-substance isolation recommendations in the “standard precautions,”4 and have advised HCWs to practice regular personal hygiene; use protective barriers such as gloves and gowns whenever there is a chance of contact with mucous membranes, blood or body fluids of patients; and dispose of sharps, body fluids, and other clinical waste properly.5,6 The standard precautions are the basic level of infection control that should be used in the care of all patients. Currently, the standard precautions are routinely taken in the clinical practice of developed countries. However, the rate of infection control practice (ICP) in Japan is still lower than that in other developed countries.7 There are reports documenting the association between some characteristics of HCWs and adherence to ICPs.8-14 Being younger, having a specialty in oral surgery, and treating more patients per day were factors influencing compliance with ICP. Because ICP is considered one of the health behaviors, the practice of ICP is expected to be associated with the attitude and knowledge of HCWs. We therefore hypothesized that attitude and knowledge concerning infection control may affect the compliance of HCWs with ICP. 
There are reports on the attitude and knowledge of dentists concerning infection control in several countries.15-21 We previously reported an evaluation of the attitude and knowledge concerning infection control of dentists in Japan.7 Few studies have analyzed the correlation between attitude and knowledge and infection control practice.18,22 However, little information is available on the impact of attitudes and knowledge on ICP among dentists in private practice. We conducted this study to identify factors associated with their compliance with ICP.
null
null
Results
Characteristics of the surveyed population The distribution of the participants by characteristics is shown in Table 1. Most participants were male (95.5%); about 70% were aged 50 years or older; 85% were general dentists. Three-fourths of the dentists treated 35 or fewer patients per day. Table 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. Only 21.4% knew about universal/standard precautions. Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%). 
Factors affecting ICP Table 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence to all items of ICP than others. Specialists had a significantly higher adherence to all items of ICP except “wearing mask” than generalists. Those aged 49 years or younger had a higher adherence to six items of ICP (ie, wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists with willingness to treat HIV/AIDS patients had a significantly higher level of adherence to seven items of ICP (ie, wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. Women reported significantly higher rates of wearing protective eyewear and providing education, and a lower rate of wearing masks, than men. To identify the independent factors affecting dentists’ adherence to ICP, we used logistic regression analysis and found results similar to those of the univariate analysis. The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. Being 49 years old or younger and having a specialty had significant negative associations with participating in clinical lectures for infection control. Other significant associations observed in univariate analysis disappeared after controlling for confounding factors. Three ICP items (ie, wearing protective eyewear, exchanging handpiece, installing extra-oral vacuum) that were practiced with a frequency of <50% were compared between those who had disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day) with and without knowledge about universal/standard precautions (Table 4). For wearing protective eyewear, the odds ratio was highest for those who had fewer visits. On the contrary, for exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those aged 50 years or more.
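The subgroup comparisons above rest on odds ratios from 2×2 tables. As an illustration only (the counts below are hypothetical, not taken from the study), an odds ratio with a Woolf-type 95% confidence interval can be computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf's method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: dentists with vs. without knowledge of the precautions,
# cross-tabulated against practicing a given ICP item.
or_, lo, hi = odds_ratio_ci(30, 70, 200, 1800)
```

If the confidence interval excludes 1, the association between knowledge and the ICP item is statistically significant at the 5% level; the multivariable odds ratios in the study additionally adjust for confounders via logistic regression.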
null
null
[ "TAKE-HOME MESSAGE", "\nCharacteristics of the surveyed population", "\nFactors affecting ICP" ]
[ "The standard precautions are the basic level of infection\ncontrol that should be used in the care of all patients.\n\nKnowledge about the universal/standard precautions is the\nmost significant predictor of compliance with ICP.\n\nAround one-third of studied dentists had willingness to treat\nHIV/AIDS patients.\n\nThe major concern for health care workers is exposure to\nblood and body fluids and transmission of infections such\nas HIV, hepatitis B and C viruses.", "\nThe distribution of the participants by characteristics is shown in Table 1. Most of participants were male (95.5%); about 70% aged 50 years or older; 85% were general dentists. Three-forth of dentists treated 35 or less patients per day.\n\nTable 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. Only 21.4% knew about universal/standard precautions. Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%).", "\nTable 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence in practicing all items of ICP than others. Specialist had a significantly higher adherence to all items of ICP but “wearing mask” than generalist. 
Those aged 49 years or younger had a higher adherence in six items of ICP (ie , wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists with willingness to treat HIV/AIDS patients had a significantly higher level of adherence to seven items of ICP (ie , wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. Women reported significantly higher rates of wearing glass and providing education and lower rate of wearing mask than men.\n\nTo identify the independent factors affecting dentists adherence to ICP, we used a logistic regression analysis and found similar results earlier found in univariate analysis. The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. Being 49 years old or younger and having specialty had significant negative associations with participating in clinical lecture for infection control. Other significant associations observed in univariate analysis disappeared after controlling for confounding factors.\n\nThree ICP items (ie , wearing glass, exchanging handpiece, installing extra-oral vacuum) that practiced with a frequency of <50% were compared between those who had disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day) with and without knowledge about universal/standard precautions (Table 4). For wearing glass, the odds ratio was highest for those who had fewer visits. On the contrary, in exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those who aged 50 years or more." ]
[ null, null, null ]
[ "TAKE-HOME MESSAGE", "Introduction", "Materials and Methods", "Results", "\nCharacteristics of the surveyed population", "\nFactors affecting ICP", "Discussion", "Conflicts of Interest:", "Financial Support:" ]
[ "The standard precautions are the basic level of infection\ncontrol that should be used in the care of all patients.\n\nKnowledge about the universal/standard precautions is the\nmost significant predictor of compliance with ICP.\n\nAround one-third of studied dentists had willingness to treat\nHIV/AIDS patients.\n\nThe major concern for health care workers is exposure to\nblood and body fluids and transmission of infections such\nas HIV, hepatitis B and C viruses.", "\nOccupational exposure to blood and body fluids is a serious concern for health care workers (HCWs) and presents a major risk for the transmission of infections such as human immuno-deficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV).1,2 The Occupational Safety and Health Administration (OSHA) published its blood-borne pathogens rule in 1991,3 which requires training of all workers at risk, implementation of universal precautions, and monitoring of compliance. In 1996, the US Centers for Disease Control and Prevention (CDC) combined universal precautions with body-substance isolation recommendations in the “standard precautions,”4and have advised HCWs to practice regular personal hygiene; use protective barriers such as gloves and gown, whenever there is chance of contact with mucous membranes, blood and body fluids of patients; and dispose of sharps, body fluids, and other clinical waste properly.5,6 The standard precautions are the basic level of infection control that should be used in the care of all patients. Currently, the standard precaution is routinely taken in clinical practice of developed counties. However, the rate of infection control practice (ICP) in Japan is still lower than that in other developed countries.7\n\nThere are reports documenting the association between some characteristics of HCWs and adherence to ICPs.8-14 Being younger, having specialty in oral surgery, and treating more patients per day were the factors influencing the compliance with ICP. 
Because ICP is considered a health behavior, the practice of ICP is expected to be associated with the attitude and knowledge of HCWs. We therefore hypothesized that attitude and knowledge concerning infection control may affect the compliance of HCWs with ICP. There are reports on the attitude and knowledge of dentists concerning infection control in several countries.15-21 We previously reported an evaluation of the attitude and knowledge concerning infection control of dentists in Japan.7 Few studies have analyzed the correlation between attitude and knowledge and infection control practice.18,22 However, little information is available on the impact of attitudes and knowledge on ICP in dentists in private practice. We conducted this study to identify factors associated with their compliance with ICP.", "\nWe studied dentists of Aichi prefecture between August and October 2011. A self-administered questionnaire was sent to all 3316 directors of private dental offices listed in the Aichi prefecture Dental Association. The questionnaire was accompanied by a letter of endorsement signed by the President of the Aichi prefecture Dental Association and a letter of introduction signed by the research team emphasizing the importance of this study and ensuring the anonymity of the answers. Of the 3316 questionnaires distributed, 2350 (70.9%) were returned; 216 subjects were excluded from the analysis because of missing data in their questionnaires.\n\nDentists were asked to complete the questionnaire. 
Items of the questionnaire included characteristics of dentists (gender, age, specialty in oral surgery, and the number of patients treated per day); their willingness to treat patients with HIV/AIDS; knowledge about universal/standard precautions; and adherence to ICP (i.e., wearing protective eyewear for treatment, wearing a mask for treatment, wearing gloves for treatment, exchanging the handpiece for each patient, providing education for preventing infection, preparing an office infection control manual, participating in clinical lectures for infection control, HBV immunization, and installing extra-oral vacuum aspiration).\n\nUnder Japan’s dental health service system, dentists can establish their own private dental offices offering services in the specialties of dentistry, oral surgery, pedodontics or orthodontics. In terms of specialty in oral surgery, we classified dentists into two groups—“specialists” and “general dentists.” Specialists were dentists who had established private dental offices, had a specialty in oral surgery, and did relatively difficult oral surgeries as well as restorative and prosthodontic treatment. General dentists were those who had established private dental offices, did not have a specialty in oral surgery, and mainly did restorative and prosthodontic treatment, as well as basic oral surgeries such as tooth extraction. The number of patients treated per day was recorded for each private dental office. The dentists were dichotomously categorized according to their age into “49 years or younger” and “50 years or older.” The number of visits was dichotomously categorized as 35 or fewer visits/day and 36 or more.\n\nEthical approval for the study was provided by the Aichi prefecture Dental Association. 
An agreement form clarifying that no direct benefit could be expected from participating in this study and that all data collected were confidential and anonymous was sent to each participant.\n\nTo analyze the associations of adherence to ICP with characteristics, willingness to treat HIV/AIDS patients, and knowledge about universal/standard precautions, the χ2 test and logistic regression analysis were used. The dependent variable was following ICPs, and the independent variables were the characteristics of participants and their attitude and knowledge. Statistical analyses were carried out with SPSS® for Windows® ver. 12. A p value <0.05 was considered statistically significant.", " \nCharacteristics of the surveyed population \nThe distribution of the participants by characteristics is shown in Table 1. Most participants were male (95.5%); about 70% were aged 50 years or older; 85% were general dentists. Three-fourths of the dentists treated 35 or fewer patients per day.\n\nTable 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. Only 21.4% knew about universal/standard precautions. Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%).\n\nThe distribution of the participants by characteristics is shown in Table 1. Most participants were male (95.5%); about 70% were aged 50 years or older; 85% were general dentists. 
Three-fourths of the dentists treated 35 or fewer patients per day.\n\nTable 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. Only 21.4% knew about universal/standard precautions. Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%).\n \nFactors affecting ICP \nTable 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence in practicing all items of ICP than others. Specialists had significantly higher adherence to all items of ICP except “wearing mask” than general dentists. Those aged 49 years or younger had higher adherence to six items of ICP (i.e., wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists with willingness to treat HIV/AIDS patients had a significantly higher level of adherence to seven items of ICP (i.e., wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. Women reported significantly higher rates of wearing protective eyewear and providing education, and a lower rate of wearing masks, than men.\n\nTo identify the independent factors affecting dentists’ adherence to ICP, we used a logistic regression analysis and found results similar to those of the univariate analysis. 
The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. Being 49 years old or younger and having a specialty had significant negative associations with participating in clinical lectures for infection control. Other significant associations observed in the univariate analysis disappeared after controlling for confounding factors.\n\nThree ICP items (i.e., wearing protective eyewear, exchanging handpiece, installing extra-oral vacuum aspiration) that were practiced with a frequency of <50% were compared, among those with disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day), between dentists with and without knowledge about universal/standard precautions (Table 4). For wearing protective eyewear, the odds ratio was highest for those with fewer visits. On the contrary, for exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those aged 50 years or more.\n\nTable 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence in practicing all items of ICP than others. Specialists had significantly higher adherence to all items of ICP except “wearing mask” than general dentists. Those aged 49 years or younger had higher adherence to six items of ICP (i.e., wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists with willingness to treat HIV/AIDS patients had a significantly higher level of adherence to seven items of ICP (i.e., wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. 
Women reported significantly higher rates of wearing protective eyewear and providing education, and a lower rate of wearing masks, than men.\n\nTo identify the independent factors affecting dentists’ adherence to ICP, we used a logistic regression analysis and found results similar to those of the univariate analysis. The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. Being 49 years old or younger and having a specialty had significant negative associations with participating in clinical lectures for infection control. Other significant associations observed in the univariate analysis disappeared after controlling for confounding factors.\n\nThree ICP items (i.e., wearing protective eyewear, exchanging handpiece, installing extra-oral vacuum aspiration) that were practiced with a frequency of <50% were compared, among those with disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day), between dentists with and without knowledge about universal/standard precautions (Table 4). For wearing protective eyewear, the odds ratio was highest for those with fewer visits. On the contrary, for exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those aged 50 years or more.", "\nThe distribution of the participants by characteristics is shown in Table 1. Most participants were male (95.5%); about 70% were aged 50 years or older; 85% were general dentists. Three-fourths of the dentists treated 35 or fewer patients per day.\n\nTable 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. Only 21.4% knew about universal/standard precautions. 
Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%).", "\nTable 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence in practicing all items of ICP than others. Specialists had significantly higher adherence to all items of ICP except “wearing mask” than general dentists. Those aged 49 years or younger had higher adherence to six items of ICP (i.e., wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists with willingness to treat HIV/AIDS patients had a significantly higher level of adherence to seven items of ICP (i.e., wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. Women reported significantly higher rates of wearing protective eyewear and providing education, and a lower rate of wearing masks, than men.\n\nTo identify the independent factors affecting dentists’ adherence to ICP, we used a logistic regression analysis and found results similar to those of the univariate analysis. The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. Being 49 years old or younger and having a specialty had significant negative associations with participating in clinical lectures for infection control. 
Other significant associations observed in the univariate analysis disappeared after controlling for confounding factors.\n\nThree ICP items (i.e., wearing protective eyewear, exchanging handpiece, installing extra-oral vacuum aspiration) that were practiced with a frequency of <50% were compared, among those with disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day), between dentists with and without knowledge about universal/standard precautions (Table 4). For wearing protective eyewear, the odds ratio was highest for those with fewer visits. On the contrary, for exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those aged 50 years or more.", "\nAround one-third of the studied dentists were willing to treat HIV/AIDS patients. The rate, however, was reported as 15.6% by Aizawa, et al.23 The observed difference may be attributed to the region and the study population of the two studies. The rate in the present study is nevertheless higher than in other previous reports,7 reflecting an increase in the willingness of dentists to treat patients with AIDS/HIV.\n\nWe found that compliance of dentists with ICP was neither complete nor uniform across the various ICP items. About 97% of dentists reported wearing a mask during treatment. As mask wearing has been routine among Japanese dentists, this result was expected. The rate of “always wearing gloves for treatment” was 80%, although gloves were not routinely worn before. The increased number of patients with infectious diseases and greater attention to ICP may contribute to the high frequency of wearing gloves, participating in clinical lectures for infection control, and providing infection prevention education to clinical staff. On the contrary, the rates of exchanging the handpiece for each patient and installing extra-oral vacuum aspiration were lower than 30%. 
The most likely reason for lower compliance with these ICPs would be financial issues.\n\nDentists aged 49 years or younger, specialists in oral surgery, and dentists who treated 36 or more patients per day had higher compliance with ICPs than others, which is in agreement with the findings of previous studies.7-13 Gender differences were variable among ICP items. On the contrary, there are reports demonstrating that female dentists had better compliance with ICPs in Canada.24 Gender differences in adherence to ICP may depend on nationality. Furthermore, the small number of female dentists in our study may explain this observation.\n\nIn the logistic regression analysis, only knowledge about universal/standard precautions had a significant association with all items of ICP. Among the items, knowledge about universal/standard precautions had the largest independent effect on ICP compliance. These results show that having knowledge about universal/standard precautions is the most important factor for compliance with ICP. It is commonly known that acquiring knowledge is one of the potential promoting factors for better health behavior. The ICP items included in the questionnaire in the present study are defined in, or involved in a broad sense in, the standard precautions.14 Personal protective equipment for HCWs includes gloves, goggles, and masks. Education of HCWs is also defined in the standard precautions. Exchanging the handpiece for each patient is considered one of the measures for patient care equipment and instruments/devices. Extra-oral vacuum aspiration is considered one of the environmental measures for avoiding nosocomial infection. Although HBV vaccination is not included in the safe work practices for preventing exposure to blood-borne pathogens, both share the goal of protecting HCWs from infection by common pathogens. Dentists who have knowledge about universal/standard precautions had greater concern about ICP. 
It is reasonable that having knowledge about universal/standard precautions has a marked effect on subjects’ compliance with ICP. Cleveland demonstrated that dentists who perceived that following the latest infection control recommendations was extremely important were most likely to implement the recommendations.25 This finding supports our results.\n\nThe logistic regression model indicated that knowledge about the universal/standard precautions had a greater impact on adherence to ICP than willingness to treat HIV/AIDS patients. A dentist who has insufficient knowledge about ICP cannot adequately prepare for the treatment of patients with infectious diseases. It is thus important to acquire knowledge about universal/standard precautions for better compliance with ICP. On the other hand, Askarian and Assadian could not find a significant correlation between attitude and knowledge of infection control and ICP in dentists.18 Factors other than attitude and knowledge may affect compliance with ICP.\n\nThe impact of knowledge about universal/standard precautions on improving compliance with handpiece exchange or extra-oral vacuum aspiration was lower in dentists with fewer patients than in older dentists. Dentists with fewer visits per day may consider such ICPs a financial burden, since fewer patients means smaller income. Other studies have also reported that some dentists see ICPs as a financial burden.10,12 On the contrary, older age may not be a substantial hindrance to ICP. Poor compliance with ICP in older dentists may be attributed to their lower knowledge, probably because of the great advances in the field of ICP after their graduation. 
It has been shown that educational intervention in HCWs improved their compliance with ICP.26-29 Among the groups with disadvantageous characteristics, the biggest effect of education on compliance with ICPs was observed in older dentists.\n\nAttitudes and knowledge for the prevention of nosocomial infection, and compliance with ICP, are still not sufficient in many countries worldwide.20,21,30,31 As concluded in these papers, providing correct information on ICP requires implementation of a curriculum in dental schools and continuing dental education for practicing dentists.\n\nThis study had some limitations. It could not measure the association between the overall levels of attitude and knowledge and compliance with ICP in the study population. Moreover, other attitude and knowledge items are thought to have an impact on compliance with ICP. Further studies are needed to explore further associations between attitude and knowledge items and adherence to ICP.", "\nNone declared.", "\nNone declared." ]
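The Methods section above describes a χ2 test and a logistic regression with ICP adherence as the dependent variable and an odds ratio as the effect measure. The original analysis was done in SPSS; as a minimal illustrative sketch only (not the authors' code, and using made-up counts, not the study data), the same two steps can be reproduced with stdlib Python:

```python
# Hypothetical sketch of the analysis described in Methods: a 2x2 chi-square
# test and a single-predictor logistic regression yielding an odds ratio.
# All numbers below are invented for illustration, not taken from the study.
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def fit_logistic(xs, ys, lr=0.1, epochs=10000):
    """Single-predictor logistic regression fitted by batch gradient descent
    on the mean negative log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / len(xs)
        b1 -= lr * g1 / len(xs)
    return b0, b1

# Made-up 2x2 table: knowledge of standard precautions vs. adherence to one ICP item.
chi2 = chi2_2x2(40, 10, 30, 20)

# Same made-up data as individual records: x = has knowledge (1/0), y = adheres (1/0).
xs = [1] * 50 + [0] * 50
ys = [1] * 40 + [0] * 10 + [1] * 30 + [0] * 20
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)  # exp(coefficient) = OR for knowledge -> adherence
```

For a binary predictor the fitted odds ratio matches the cross-product ratio of the 2x2 table, which is why reporting exp(coefficient) from the regression is equivalent to the table-based OR in the unadjusted case.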
[ null, "introduction", "materials and methods", "results", null, null, "discussion", "COI-statement", "financial support:" ]
[ "Infection control", "dental", "Dentists", "Universal precautions", "Knowledge", "Attitude", "HIV", "Acquired immunodeficiency syndrome" ]
TAKE-HOME MESSAGE: The standard precautions are the basic level of infection control that should be used in the care of all patients. Knowledge about the universal/standard precautions is the most significant predictor of compliance with ICP. Around one-third of the studied dentists were willing to treat HIV/AIDS patients. The major concern for health care workers is exposure to blood and body fluids and transmission of infections such as HIV, hepatitis B and C viruses. Introduction: Occupational exposure to blood and body fluids is a serious concern for health care workers (HCWs) and presents a major risk for the transmission of infections such as human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV).1,2 The Occupational Safety and Health Administration (OSHA) published its blood-borne pathogens rule in 1991,3 which requires training of all workers at risk, implementation of universal precautions, and monitoring of compliance. In 1996, the US Centers for Disease Control and Prevention (CDC) combined universal precautions with body-substance isolation recommendations in the “standard precautions,”4 and advised HCWs to practice regular personal hygiene; use protective barriers such as gloves and gowns whenever there is a chance of contact with mucous membranes, blood, or body fluids of patients; and dispose of sharps, body fluids, and other clinical waste properly.5,6 The standard precautions are the basic level of infection control that should be used in the care of all patients. Currently, standard precautions are routinely observed in clinical practice in developed countries. 
However, the rate of infection control practice (ICP) in Japan is still lower than that in other developed countries.7 There are reports documenting the association between some characteristics of HCWs and adherence to ICPs.8-14 Being younger, having a specialty in oral surgery, and treating more patients per day were factors influencing compliance with ICP. Because ICP is considered a health behavior, the practice of ICP is expected to be associated with the attitude and knowledge of HCWs. We therefore hypothesized that attitude and knowledge concerning infection control may affect the compliance of HCWs with ICP. There are reports on the attitude and knowledge of dentists concerning infection control in several countries.15-21 We previously reported an evaluation of the attitude and knowledge concerning infection control of dentists in Japan.7 Few studies have analyzed the correlation between attitude and knowledge and infection control practice.18,22 However, little information is available on the impact of attitudes and knowledge on ICP in dentists in private practice. We conducted this study to identify factors associated with their compliance with ICP. Materials and Methods: We studied dentists of Aichi prefecture between August and October 2011. A self-administered questionnaire was sent to all 3316 directors of private dental offices listed in the Aichi prefecture Dental Association. The questionnaire was accompanied by a letter of endorsement signed by the President of the Aichi prefecture Dental Association and a letter of introduction signed by the research team emphasizing the importance of this study and ensuring the anonymity of the answers. Of the 3316 questionnaires distributed, 2350 (70.9%) were returned; 216 subjects were excluded from the analysis because of missing data in their questionnaires. Dentists were asked to complete the questionnaire. 
Items of the questionnaire included characteristics of dentists (gender, age, specialty in oral surgery, and the number of patients treated per day); their willingness to treat patients with HIV/AIDS; knowledge about universal/standard precautions; and adherence to ICP (i.e., wearing protective eyewear for treatment, wearing a mask for treatment, wearing gloves for treatment, exchanging the handpiece for each patient, providing education for preventing infection, preparing an office infection control manual, participating in clinical lectures for infection control, HBV immunization, and installing extra-oral vacuum aspiration). Under Japan’s dental health service system, dentists can establish their own private dental offices offering services in the specialties of dentistry, oral surgery, pedodontics or orthodontics. In terms of specialty in oral surgery, we classified dentists into two groups—“specialists” and “general dentists.” Specialists were dentists who had established private dental offices, had a specialty in oral surgery, and did relatively difficult oral surgeries as well as restorative and prosthodontic treatment. General dentists were those who had established private dental offices, did not have a specialty in oral surgery, and mainly did restorative and prosthodontic treatment, as well as basic oral surgeries such as tooth extraction. The number of patients treated per day was recorded for each private dental office. The dentists were dichotomously categorized according to their age into “49 years or younger” and “50 years or older.” The number of visits was dichotomously categorized as 35 or fewer visits/day and 36 or more. Ethical approval for the study was provided by the Aichi prefecture Dental Association. An agreement form clarifying that no direct benefit could be expected from participating in this study and that all data collected were confidential and anonymous was sent to each participant. 
To analyze the associations of adherence to ICP with characteristics, willingness to treat HIV/AIDS patients, and knowledge about universal/standard precautions, the χ2 test and logistic regression analysis were used. The dependent variable was following ICPs, and the independent variables were the characteristics of participants and their attitude and knowledge. Statistical analyses were carried out with SPSS® for Windows® ver. 12. A p value <0.05 was considered statistically significant. Results: Characteristics of the surveyed population The distribution of the participants by characteristics is shown in Table 1. Most participants were male (95.5%); about 70% were aged 50 years or older; 85% were general dentists. Three-fourths of the dentists treated 35 or fewer patients per day. Table 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. Only 21.4% knew about universal/standard precautions. Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%). The distribution of the participants by characteristics is shown in Table 1. Most participants were male (95.5%); about 70% were aged 50 years or older; 85% were general dentists. Three-fourths of the dentists treated 35 or fewer patients per day. Table 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. 
Only 21.4% knew about universal/standard precautions. Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%). Factors affecting ICP Table 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence in practicing all items of ICP than others. Specialists had significantly higher adherence to all items of ICP except “wearing mask” than general dentists. Those aged 49 years or younger had higher adherence to six items of ICP (i.e., wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists with willingness to treat HIV/AIDS patients had a significantly higher level of adherence to seven items of ICP (i.e., wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. Women reported significantly higher rates of wearing protective eyewear and providing education, and a lower rate of wearing masks, than men. To identify the independent factors affecting dentists’ adherence to ICP, we used a logistic regression analysis and found results similar to those of the univariate analysis. The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. Being 49 years old or younger and having a specialty had significant negative associations with participating in clinical lectures for infection control. 
Other significant associations observed in the univariate analysis disappeared after controlling for confounding factors. Three ICP items (i.e., wearing protective eyewear, exchanging handpiece, installing extra-oral vacuum aspiration) that were practiced with a frequency of <50% were compared, among those with disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day), between dentists with and without knowledge about universal/standard precautions (Table 4). For wearing protective eyewear, the odds ratio was highest for those with fewer visits. On the contrary, for exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those aged 50 years or more. Table 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence in practicing all items of ICP than others. Specialists had significantly higher adherence to all items of ICP except “wearing mask” than general dentists. Those aged 49 years or younger had higher adherence to six items of ICP (i.e., wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists with willingness to treat HIV/AIDS patients had a significantly higher level of adherence to seven items of ICP (i.e., wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. Women reported significantly higher rates of wearing protective eyewear and providing education, and a lower rate of wearing masks, than men. To identify the independent factors affecting dentists’ adherence to ICP, we used a logistic regression analysis and found results similar to those of the univariate analysis. The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. 
Being 49 years old or younger and having a specialty had significant negative associations with participating in clinical lectures for infection control. Other significant associations observed in the univariate analysis disappeared after controlling for confounding factors. Three ICP items (i.e., wearing protective eyewear, exchanging handpiece, installing extra-oral vacuum aspiration) that were practiced with a frequency of <50% were compared, among those with disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day), between dentists with and without knowledge about universal/standard precautions (Table 4). For wearing protective eyewear, the odds ratio was highest for those with fewer visits. On the contrary, for exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those aged 50 years or more. Characteristics of the surveyed population: The distribution of the participants by characteristics is shown in Table 1. Most participants were male (95.5%); about 70% were aged 50 years or older; 85% were general dentists. Three-fourths of the dentists treated 35 or fewer patients per day. Table 2 shows the distribution of all participants by their willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and adherence to ICP. Only 32.3% of respondents reported willingness to treat HIV-infected patients in their practice. Only 21.4% knew about universal/standard precautions. Regarding ICPs, 97.6% of the dentists wore masks during dental treatment and 87.1% provided education about infection prevention for clinical staff. The least commonly reported ICP was “installing extra-oral vacuum aspiration” (22.6%), followed by “exchanging handpiece in each patient” (27.6%) and “wearing protective eyewear for treatment” (37.0%). 
Factors affecting ICP: Table 3 shows the associations between ICPs and dentists’ demographic characteristics, willingness to treat HIV-infected patients in their practice, and knowledge about universal/standard precautions. Dentists who treated 36 or more patients per day and those who knew about universal/standard precautions had a higher level of adherence to all ICP items than others. Specialists had significantly higher adherence to all ICP items except “wearing mask” than generalists. Those aged 49 years or younger had higher adherence to six ICP items (i.e., wearing mask, glove, handpiece, education, manual, vaccine) than those aged 50 or older. Dentists willing to treat HIV/AIDS patients had a significantly higher level of adherence to seven ICP items (i.e., wearing glove, handpiece, education, lecture, manual, vaccine, vacuum) than others. Women reported significantly higher rates of wearing protective eyewear and providing education, and a lower rate of wearing mask, than men. To identify the independent factors affecting dentists’ adherence to ICP, we used logistic regression analysis and found results similar to those of the univariate analysis. The associations of knowledge about universal/standard precautions with all ICPs persisted after adjusting for other confounding variables. Being 49 years old or younger and having a specialty had significant negative associations with participating in clinical lectures for infection control. Other significant associations observed in univariate analysis disappeared after controlling for confounding factors. Three ICP items (i.e., wearing protective eyewear, exchanging handpiece, installing extra-oral vacuum) that were practiced with a frequency of <50% were compared, among those with disadvantageous characteristics (age ≥50 years, being a general dentist, treating ≤35 patients/day), between dentists with and without knowledge about universal/standard precautions (Table 4). 
For wearing protective eyewear, the odds ratio was highest for those who had fewer visits. In contrast, for exchanging handpiece and installing extra-oral vacuum aspiration, the odds ratios were highest for those aged 50 years or more. Discussion: Around one-third of the studied dentists were willing to treat HIV/AIDS patients. The rate, however, was reported as 15.6% by Aizawa et al.23 The observed difference may be attributed to the region and the study population of the two studies. The rate in the present study is, nevertheless, higher than that in other previous reports,7 reflecting an increase in the willingness of dentists to treat patients with AIDS/HIV. We found that dentists’ compliance with ICP was neither complete nor uniform across ICP items. About 97% of dentists reported wearing a mask during treatment. As mask wearing has been routine among Japanese dentists, this result was expected. The rate of “always wearing gloves for treatment” was 80%, although gloves were not routinely worn before. The increased number of patients with infectious diseases and greater attention to ICP may contribute to the high frequency of wearing gloves, participating in clinical lectures for infection control, and providing infection-prevention education for clinical staff. In contrast, the rates of exchanging the handpiece for each patient and installing extra-oral vacuum aspiration were lower than 30%. The most likely reason for lower compliance with these ICPs is financial. Dentists aged 49 years or younger, specialists in oral surgery, and dentists who treated 36 or more patients per day had higher compliance with ICPs than others, which is in agreement with findings of previous studies.7-13 Gender differences were variable among ICP items. In contrast, there are reports demonstrating that female dentists had better compliance with ICPs in Canada.24 Gender differences in adherence to ICP may depend on nationality. 
Furthermore, the small number of female dentists in our study may explain this observation. In the logistic regression analysis, only knowledge about universal/standard precautions had a significant association with all ICP items. Among the factors examined, knowledge about universal/standard precautions had the largest independent effect on ICP compliance. These results show that knowledge about universal/standard precautions is the most important factor for compliance with ICP. It is commonly known that acquiring knowledge is one of the potential promoting factors for better health behavior. The ICP items included in the questionnaire in the present study are defined in, or broadly covered by, the standard precautions.14 Personal protective equipment for HCWs includes gloves, goggles, and masks. Education of HCWs is also defined in the standard precautions. Exchanging the handpiece for each patient falls under patient-care equipment and instruments/devices. Extra-oral vacuum aspiration can be considered an environmental measure for avoiding nosocomial infection. Although HBV vaccination is not included in the safe work practices to prevent exposure to blood-borne pathogens, both share the goal of protecting HCWs from infection by common pathogens. Dentists who had knowledge about universal/standard precautions had a higher concern about ICP. It is therefore reasonable that knowledge about universal/standard precautions markedly affects compliance with ICP. Cleveland demonstrated that dentists who perceived that following the latest infection control recommendations was extremely important were most likely to implement the recommendations.25 This finding supports our results. The logistic regression model indicated that knowledge about the universal/standard precautions had a greater impact on adherence to ICP than willingness to treat HIV/AIDS patients. 
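The crude versus confounder-adjusted odds ratios discussed above can be illustrated with a small sketch. The study itself used multivariable logistic regression; the Mantel-Haenszel estimator below is a simpler stand-in for stratum-based adjustment, and all counts are hypothetical, not taken from the paper.

```python
def crude_or(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Confounder-adjusted odds ratio pooled over strata.

    strata: iterable of (a, b, c, d) 2x2 tables, one per level of the
    confounder (e.g., age group)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts: adherence to one ICP item by knowledge of
# standard precautions, stratified by age group (<50 vs >=50 years).
young = (40, 10, 60, 40)
old = (15, 15, 30, 70)
print(crude_or(*young))                   # OR within the younger stratum
print(mantel_haenszel_or([young, old]))   # age-adjusted OR
```

The adjusted estimate pools the stratum-specific tables, so a confounder that differs between strata no longer inflates or masks the association.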
A dentist who has insufficient knowledge about ICP cannot prepare adequately for the treatment of patients with infectious diseases. It is thus important to acquire knowledge about universal/standard precautions for better compliance with ICP. On the other hand, Askarian and Assadian could not find a significant correlation between attitude and knowledge of infection control and ICP in dentists.18 Factors other than attitude and knowledge may have an impact on compliance with ICP. The impact of knowledge about universal/standard precautions on improving compliance with handpiece exchange or extra-oral vacuum aspiration was lower in dentists with fewer patients than in older dentists. Dentists with fewer visits per day may consider such ICPs a financial burden, since fewer patients means a smaller income. Other studies have also reported that some dentists see ICPs as a financial burden.10,12 In contrast, older age may not be a substantial hindrance to ICP. Poor compliance with ICP in older dentists may be attributed to their lower knowledge, probably because of the great advances in the field of ICP after their graduation. It has been shown that educational interventions in HCWs improved their compliance with ICP.26-29 Among the group with disadvantageous characteristics, the biggest effect of education on compliance with ICPs was observed in older dentists. Attitudes and knowledge regarding the prevention of nosocomial infection, and compliance with ICP, are still not sufficient in many countries around the world.20,21,30,31 As concluded in these papers, providing correct information on ICP requires implementation in dental school curricula and continuing dental education for practicing dentists. This study had some limitations. It could not measure the association between the overall levels of attitude and knowledge and compliance with ICP in the study population. 
Moreover, other attitude and knowledge items are thought to have an impact on compliance with ICP. Further studies are needed to find further associations between attitude and knowledge items and adherence to ICP. Conflicts of Interest: None declared. Financial Support: None declared.
Background: In recent years, dentists have had more opportunities to treat patients infected with blood-borne pathogens. Although compliance with infection control practice (ICP) in dental practice is required, it is still not sufficiently widespread in Japan. Methods: In a questionnaire-based cross-sectional study in 2009, 2134 dentists in Aichi prefecture, Japan, were surveyed. They were asked about their demographic characteristics, willingness to treat HIV/AIDS patients, knowledge about universal/standard precautions, and ICP. Results: Many ICP items had significant associations with age, specialty in oral surgery, number of patients treated per day, willingness to treat HIV/AIDS patients, and knowledge about the universal/standard precautions. In the logistic regression model, knowledge about the precautions had significant associations with all ICP items. Among participants with disadvantageous characteristics for ICP (ie, age ≥50 years, being a general dentist, and treating ≤35 patients/day), knowledge about the universal/standard precautions had a greater impact on exchanging the handpiece for each patient and installing extra-oral vacuum in those aged ≥50 years than in those who treated ≤35 patients per day. Conclusions: Knowledge about the meaning of universal/standard precautions is the most significant predictor of compliance with ICPs among Japanese dentists.
null
null
3,740
248
9
[ "icp", "dentists", "knowledge", "patients", "standard", "precautions", "wearing", "universal", "standard precautions", "universal standard" ]
[ "test", "test" ]
null
null
null
[CONTENT] Infection control | dental | Dentists | Universal precautions | Knowledge | Attitude | HIV | Acquired immunodeficiency syndrome [SUMMARY]
null
[CONTENT] Infection control | dental | Dentists | Universal precautions | Knowledge | Attitude | HIV | Acquired immunodeficiency syndrome [SUMMARY]
null
[CONTENT] Infection control | dental | Dentists | Universal precautions | Knowledge | Attitude | HIV | Acquired immunodeficiency syndrome [SUMMARY]
null
[CONTENT] Attitude of Health Personnel | Cross-Sectional Studies | Dentists | Female | Guideline Adherence | Humans | Infection Control | Japan | Male | Middle Aged | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] Attitude of Health Personnel | Cross-Sectional Studies | Dentists | Female | Guideline Adherence | Humans | Infection Control | Japan | Male | Middle Aged | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] Attitude of Health Personnel | Cross-Sectional Studies | Dentists | Female | Guideline Adherence | Humans | Infection Control | Japan | Male | Middle Aged | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] icp | dentists | knowledge | patients | standard | precautions | wearing | universal | standard precautions | universal standard [SUMMARY]
null
[CONTENT] icp | dentists | knowledge | patients | standard | precautions | wearing | universal | standard precautions | universal standard [SUMMARY]
null
[CONTENT] icp | dentists | knowledge | patients | standard | precautions | wearing | universal | standard precautions | universal standard [SUMMARY]
null
[CONTENT] hcws | practice | attitude knowledge | attitude | control | body | icp | infection control | virus | concerning infection [SUMMARY]
null
[CONTENT] wearing | icp | higher | patients | dentists | items | 50 | universal standard | universal standard precautions | adherence [SUMMARY]
null
[CONTENT] declared | icp | dentists | patients | standard | knowledge | precautions | wearing | standard precautions | universal standard [SUMMARY]
null
[CONTENT] recent years ||| ICP | Japan [SUMMARY]
null
[CONTENT] ICP ||| ||| ICP | age ≥50 years | age of ≥50 years [SUMMARY]
null
[CONTENT] recent years ||| ICP | Japan ||| 2009 | 2134 | Aichi | Japan ||| ICP ||| ||| ICP ||| ||| ICP | age ≥50 years | age of ≥50 years ||| Japanese [SUMMARY]
null
Nocardia Colonization: A Risk Factor for Lung Deterioration in Cystic Fibrosis Patients?
26125407
Cystic fibrosis (CF) patients are predisposed to infection and colonization with different microbes. Some cause deterioration of lung functions, while others are colonizers without clear pathogenic effects. Our aim was to understand the effects of Nocardia species in sputum cultures on the course of lung disease in CF patients.
BACKGROUND
A retrospective study analyzed the impact of positive Nocardia spp. in the sputum of 19 CF patients over a period of 10 years, comparing them with similar-status patients without Nocardia growth. Pulmonary function tests (PFTs) were used as indicators of lung disease severity, and the rate of decline in function per year was calculated.
MATERIAL AND METHODS
No significant difference in PFTs of CF patients with positive Nocardia in sputum was found between sub-groups defined by number of episodes of growth, background variables, or treatment plans. The yearly decline in PFTs was similar to that recognized in CF patients. The control group patients showed similar background data. However, a small difference was found in the rate of decline of PFTs, implying a possibly slower rate of progression of lung disease in the control group.
RESULTS
The prognosis of lung disease in CF patients colonized with Nocardia does not seem to differ based on persistence of growth on cultures, different treatment plans, or risk factors. Apparently, Nocardia does not cause a deterioration of lung function over time. However, there may be a trend toward faster decline in PFTs compared with similar-status CF patients without isolation of this microorganism in their sputum.
CONCLUSIONS
[ "Adolescent", "Adult", "Case-Control Studies", "Child", "Cystic Fibrosis", "Female", "Humans", "Longitudinal Studies", "Male", "Middle Aged", "Nocardia", "Respiratory Function Tests", "Risk Factors", "Sputum", "Young Adult" ]
4498443
Background
Nocardia species are aerobic Gram-positive filamentous bacteria of the Actinomycetes order, which are natural inhabitants of soil and water throughout the world. Pulmonary nocardiosis, the most common clinical presentation of infection, is usually acquired by direct inhalation of contaminated soil. Symptoms may include cough, shortness of breath, chest pain, hemoptysis, fever, night sweats, weight loss, and fatigue. The chest radiograph may be variable, with nodular and/or consolidation infiltrates as well as cavitary lesions and pleural effusions. Risk factors for acquisition of infection include depressed cell immunity, but up to one-third of patients are immunocompetent [1–3]. Furthermore, Nocardia may be identified in the respiratory tract without apparent infection. This colonization is encountered in patients with underlying structural lung disease, such as bronchiectasis and CF [4]. Published data on Nocardia in cystic fibrosis are sparse and include mostly case reports [5–9]. These reports concluded that isolation of Nocardia spp. from sputum usually represents colonization rather than infection. One larger retrospective study of 17 CF patients with a sputum culture positive for Nocardia spp. [10] showed that treatment of Nocardia-positive sputum in CF has no value in improving pulmonary function. No data have been published comparing the natural course of cystic fibrosis patients with positive Nocardia sputum cultures to matched CF patients without Nocardia. In our study, we describe our patients with Nocardia spp. in sputum over a period of 10 years, comparing the severity of lung disease and the rate of pulmonary function deterioration during the study period between patients with a single episode of growth and patients with recurrent isolations. We also compare these colonized patients to CF control patients of similar status but without Nocardia growth. 
Therefore, the aim of the study was to determine if colonization by the bacteria affects the natural history of CF disease.
Control group data
A group of control patients paired with the 19 patients with positive Nocardia cultures was formed; they were matched by mutation class, age, and sex (Table 1). The McNemar test was used to check the pairing of background details and confounders between cases and controls. The sex distribution of the control group was 12 (63.2%) male patients and 7 (36.8%) females. The mean age at the end of the study was 25 years for both groups. Related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the control group patients are summarized in Table 1. Eleven patients (58%) underwent CT imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. No statistical differences were found between the background variables, demographic data, chronic treatments and medications, related diseases, prevalence of other microorganisms in sputum, or findings on CT between the study group and the control group.
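The McNemar pairing check mentioned above operates only on discordant pairs (pairs where case and control disagree on a binary characteristic). A minimal sketch, with hypothetical counts rather than the study's data:

```python
import math

def mcnemar_test(b, c, correction=True):
    """McNemar chi-square for paired binary data.

    b, c: counts of discordant pairs (case-yes/control-no and
    case-no/control-yes). Returns (chi2, two-sided p) with 1 df;
    the continuity correction is the standard Edwards adjustment.
    """
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    chi2 = num / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical: among 19 matched pairs, 3 discordant one way, 2 the other.
chi2, p = mcnemar_test(3, 2)
print(chi2, p)
```

With so few discordant pairs the statistic is near zero and the p-value near 1, consistent with the reported absence of statistical differences between cases and controls.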
Results
Demographics
Of the 160 CF patients treated at our clinic, 19 (12%) were found to have Nocardia spp. isolated from their sputum at some point during their clinical follow-up during the years 2003–2013. The sex distribution was 13 (68.4%) male patients and 6 (31.6%) females. The mean age at first episode was 20.5 years (SD 10.2), ranging from 8.6 to 51 years of age. Eleven patients (58%) had only 1 episode of being Nocardia-positive. Six patients (32%) had recurrent growth of 3–4 episodes, and 2 patients (10%) had chronically positive sputum for Nocardia until the end of the study period (10–11 episodes). There was no evidence of extra-pulmonary disease or dissemination in any of the patients. Fifty-three percent of patients were symptomatic during episodes of Nocardia isolation. The symptoms reported included dyspnea, cough, fever, chest pain, and hemoptysis. Parameters evaluated, such as related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum, are summarized in Table 1. Sixteen patients (84%) underwent computerized tomography imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. None had specific lesions attributed to Nocardia. 
Medication and antibiotic treatment
Sixty-three percent of the patients were not hospitalized during the episodes of positive Nocardia cultures. Of the patients treated with IV Ceftazidime (37%), Piperacillin-Tazobactam, or Meropenem combined with Amikacin, 3 were treated IV for less than 2 weeks and continued treatment orally, and 4 received treatment for a duration of 3–4 weeks. Most patients (63%) received antibiotic treatment with Ciprofloxacin during the episodes, for a duration of less than 2 weeks (10%), 3–4 weeks (32%), 1–2 months (16%), or continuously (5%). Other antibiotic treatments, such as Fucidic acid, Clarithromycin, Sporanox, or Voriconazole, were given for other indications in 37% of the patients. After the isolation, specific anti-Nocardial treatments, such as Cotrimoxazole or Minocycline, were used in only 3 patients (16%), for a duration of over 3 months in 2 of them and 3–4 weeks in 1. 
Pulmonary function tests
Pulmonary function tests were obtained 6 months before the first episode of positive Nocardia culture, during the episode, and 6 months afterwards. No statistically significant difference was found during this year in FEV1 (p=0.5), FVC (p=0.15), or FEF 25–75% (p=0.84). Moreover, no differences were found in mean changes in PFTs during this year between sub-groups of patients: those who were hospitalized, those receiving steroids, those treated with intravenous or oral antibiotics (including azithromycin, ciprofloxacin, or anti-Nocardial treatment), those on continuous antibiotic inhalations, and those who were symptomatic, compared to patients who were not. In addition, no statistical difference in the change in pulmonary functions between the time of isolation and 6 months afterwards was found when comparing the patients who received intravenous or oral anti-Nocardial treatment to those who did not (FEV1 p=0.6, FVC p=0.9, FEF25–75% p=0.72). Comparing PFTs during the episode between symptomatic and asymptomatic patients, no difference was found in mean FEV1 or FVC; however, a significantly lower mean FEF25–75% was found among the symptomatic patients (35.7% vs. 73.3%, p=0.01). 
Pulmonary function tests of study group and control patients were obtained at the beginning and at the end of the study period on similar dates for each pair (Figure 1). No statistically significant difference was found between the groups for any mean PFT (FEV1, FVC, FEF 25–75%) at the beginning of the study (79.6% vs. 75.4%, p=0.39; 87% vs. 86%, p=1.0; 71.8% vs. 57.3%, p=0.07, respectively). The mean decline of PFTs per year during the study period was calculated for both groups; the comparison showed a trend toward significance in FEV1 (−1.87% vs. −0.5%, p=0.08) and FVC (−1.2% vs. +0.2%, p=0.09), favoring the patients without isolation of Nocardia. No difference between the groups was found in the decline of FEF 25–75% (−3% vs. −1.6%, p=0.2). In the evaluation of PFTs from the beginning until the end of the study period, no difference was found in the yearly decline rates of PFTs (FEV1, FVC, or FEF 25–75%) between patients with a single episode of Nocardia isolation in culture and those with recurrent episodes (p=0.55, 0.97, and 0.24, respectively). Furthermore, no significant difference was found in the mean rate of decline per year of FEV1 (p=0.72), FVC (p=0.21), or FEF 25–75% (p=0.36) in the years prior to the first positive Nocardia culture compared to the years following the isolation until the end of the study period.
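The yearly-decline comparison above can be sketched as follows. The function names and all values are hypothetical; the study compared the real paired data with the Wilcoxon signed-rank test.

```python
def annual_change(first_pct, last_pct, years):
    """Change in percent-predicted lung function per year.
    Negative values indicate decline."""
    return (last_pct - first_pct) / years

def mean_paired_difference(case_rates, control_rates):
    """Mean within-pair difference in yearly change (case - control),
    pairing each Nocardia-positive patient with its matched control."""
    diffs = [x - y for x, y in zip(case_rates, control_rates)]
    return sum(diffs) / len(diffs)

# Hypothetical FEV1 (% predicted) at start/end of a 10-year follow-up.
cases = [annual_change(80, 62, 10), annual_change(75, 58, 10)]
controls = [annual_change(78, 73, 10), annual_change(74, 70, 10)]
print(mean_paired_difference(cases, controls))
```

Annualizing each patient's change before comparing keeps pairs with slightly different follow-up windows on the same scale.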
Conclusions
As shown in the literature, our study supports the fact that Nocardia isolation in sputum of CF patients is usually due to colonization without true infection, and its presence does not significantly affect the natural history or pulmonary deterioration of their lung disease. Furthermore, no difference was found between patients with different risk factors for infection, those who were symptomatic during the episodes of growth and those who were treated accordingly, compared to the others. Therefore, it seems the need for Nocardia treatment should be evaluated on an individual basis and in the context of the clinical picture. Our study found that the rate of decline in lung functions per year in patients of similar status but without isolation of Nocardia showed a slower trend than average for CF patients. It seems possible that these patients have less colonization of different microbes in their sputum, but larger, multi-center studies are needed to confirm this.
[ "Background", "Study design", "Statistical analyses", "Demographics", "Medication and antibiotic treatment", "Pulmonary function tests" ]
[ "Nocardia species are aerobic Gram-positive filamentous bacteria of the Actinomycetes order, which are natural inhabitants of soil and water throughout the world. Pulmonary nocardiosis, the most common clinical presentation of infection, is usually acquired by direct inhalation of contaminated soil. Symptoms may include cough, shortness of breath, chest pain, hemoptysis, fever, night sweats, weight loss, and fatigue. The chest radiograph may be variable with nodular and/or consolidation infiltrate as well as cavitary lesions and pleural effusions. Risk factors for acquisition of infection include depressed cell immunity, but up to one-third of patients are immunocompetent [1–3]. Furthermore, Nocardia may be identified in the respiratory tract without apparent infection. This colonization is encountered in patients with underlying structural lung disease, such as bronchiectasis and CF [4].\nPublished data on Nocardia in cystic fibrosis is sparse and includes mostly case reports [5–9]. Their conclusion was that isolation of Nocardia spp. from sputum usually represents colonization rather than infection. One larger retrospective study of 17 CF patients with a sputum culture positive for Nocardia spp. [10] showed that treatment of Nocardia-positive sputum in CF has no value in improving pulmonary function.\nNo data has been published comparing the natural course of cystic fibrosis patients with positive Nocardia sputum cultures to matched CF patients without Nocardia. In our study we describe our patients with Nocardia spp. in sputum over a period of 10 years, comparing the severity of lung disease and the rate of pulmonary function deterioration during the study period between those patients with a single episode of growth to patients with recurrent isolations. We also compare these colonized patients to CF control patients of similar status but without Nocardia growth. 
Therefore, the aim of the study was to determine if colonization by the bacteria affects the natural history of CF disease.", "Our study was a longitudinal, paired, case-control study of cystic fibrosis patients with and without isolation of Nocardia spp. in sputum from 2003 until December 2013. We collected all positive sputum cultures for Nocardia spp. in CF patients from our clinic – the Israeli National Center for Cystic Fibrosis, Safra Children’s Hospital, Sheba Medical Center Israel – found during the study period, using a laboratory microbiology database. All medical records were reviewed and the following data were collected for analysis: age, sex, CFTR mutations, and co-morbidities such as pancreatic sufficiency, CF-related diabetes, liver disease, allergic bronchopulmonary aspergillosis (ABPA), recurrent distal intestinal obstruction syndrome (DIOS), and hemoptysis. Other sputum cultures were recorded for the colonization of microbes such as Pseudomonas aeruginosa, Staphylococcus aureus, Aspergillus fumigatus, and Mycobacterium abscessus. Pulmonary function test (PFT) recordings using percent-predicted values adjusted for age, sex, and height were obtained at different timepoints. FEV1, FEF25–75, and FVC were primarily obtained during 2003 or the first PFTs found in the patients’ files (3 of the 19 patients had first PFTs in the years 2006 to 2009 due to young age or late diagnosis). Next, PFTs were obtained 6 months preceding the first episode of positive Nocardia culture, during the episode and 6 months following it. Additionally, updated PFTs were obtained from 2013 (aside from 1 patient whose last record of PFTs was in 2009 due to lung transplantation). The number of episodes of Nocardia growth was recorded. Each episode was defined as positive sputum cultures for Nocardia separated by 2 months’ time, with or without a specific treatment trial. 
Symptoms of pulmonary exacerbation during the episode were recorded, as was the need for hospitalization and antibiotic treatment, either intravenous or oral. Concurrent treatments such as Azithromycin, steroids, and chronic use of inhaled antibiotics were recorded. The use of specific anti-Nocardia antibiotics was reported, as was the duration of treatment. Lastly, the number of patients undergoing computerized tomography was recorded, as well as any pathological findings.\nEach patient was then matched with a control cystic fibrosis patient of the same mutation class, age, and sex, but without Nocardia isolation in sputum. Medical records of the control group were reviewed for the same study period as the other member of their matched pair, and similar data were collected for comparison. PFTs were compared, as was the trend of decline during the study period for each matched pair.\nThe study was approved by the Institutional Review Board.", "Categorical variables were reported using frequency and percentage. Continuous variables were described using mean (SD), median (IQR), and range (min-max). Continuous variables were tested for normal distribution using the Kolmogorov-Smirnov test, Q-Q plots, and histograms. The comparison between pulmonary function tests 6 months before the first episode, during the episode, and 6 months after the episode was conducted using the Friedman test. The Mann-Whitney test was used to compare pulmonary function tests of symptomatic patients with those of the other patients, and of patients treated for Nocardia with those not treated. A one-sample Wilcoxon signed rank test was used to evaluate whether the decline of FEV1 per year during the study period differed from 1.5–2% (the average mean rate of decline per year for cystic fibrosis patients). Categorical variables were compared using the chi-square test or Fisher’s exact test. 
Paired categorical variables were compared using the McNemar test, and paired continuous variables were compared using the Wilcoxon signed rank test.", "Of the 160 CF patients treated at our clinic, 19 (12%) were found to have Nocardia spp. isolated from their sputum at some point during their clinical follow-up during the years 2003–2013. The sex distribution was 13 (68.4%) male patients and 6 (31.6%) females. The mean age at first episode was 20.5 years (SD 10.2), ranging from 8.6 years to 51 years of age. Eleven patients (58%) had only 1 episode of being Nocardia-positive. Six patients had recurrent growth of 3–4 episodes (32%) and 2 patients (10%) were found to have chronic positive sputum for Nocardia until the end of the study period (10–11 episodes). There was no evidence of extra-pulmonary disease or dissemination in any of the patients. Fifty-three percent of the patients were symptomatic during episodes of Nocardia isolation. The symptoms reported included dyspnea, cough, fever, chest pain, and hemoptysis. Parameters evaluated such as related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the patients are summarized in Table 1. Sixteen patients (84%) underwent computerized tomography imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. None had specific lesions attributed to Nocardia.", "Sixty-three percent of the patients were not hospitalized during the episodes of positive Nocardia cultures. Of the patients treated with IV Ceftazidime (37%), Piperacillin-Tazobactam or Meropenem combined with Amikacin, 3 were treated IV for less than 2 weeks and continued treatment orally and 4 received treatment for the duration of 3–4 weeks. Most patients (63%) received antibiotic treatment with Ciprofloxacin during the episodes, for a duration of less than 2 weeks (10%), 3–4 weeks (32%), 1–2 months (16%) or continuously (5%). 
Other antibiotic treatments such as Fusidic acid, Clarithromycin, Sporanox, or Voriconazole for other indications were given in 37% of the patients. After the isolation, specific anti-Nocardial treatments, such as Cotrimoxazole or Minocycline, were used in only 3 patients (16%), for a duration of over 3 months in 2 of them and 3–4 weeks in 1.", "Pulmonary function tests were obtained 6 months prior to the first episode of positive Nocardia culture, during the episode, and 6 months afterwards. No statistically significant difference was found during this year in FEV1 (p=0.5), FVC (p=0.15) or FEF 25–75% (p=0.84).\nMoreover, no differences were found in mean changes in PFTs during this year between different groups of patients divided into those who were hospitalized, those receiving steroids, those treated with intravenous or oral antibiotics including azithromycin or ciprofloxacin or anti-Nocardial treatment, those on continuous antibiotic inhalations, and those who were symptomatic compared to patients who were not. In addition, no statistical difference in the change in pulmonary functions between the time of isolation and 6 months afterwards was found when comparing the patients who received intravenous or oral anti-Nocardial treatment to those who did not (FEV1 (p=0.6), FVC (p=0.9), FEF25–75% (p=0.72)).\nComparing PFTs during the episode of symptomatic versus asymptomatic patients, no difference was found in mean FEV1 or FVC. However, a significantly lower mean FEF25–75% was found among the symptomatic patients (35.7% vs. 73.3%, p=0.01).\nPulmonary function tests of study group and control patients were obtained at the beginning and at the end of the study period on similar dates for each pair (Figure 1). No statistically significant difference was found between the groups for all mean PFTs (FEV1, FVC, FEF 25–75%) at the beginning of the study (79.6% vs. 75.4% p=0.39, 87% vs. 86% p=1.0, 71.8% vs. 57.3% p=0.07, respectively). 
The mean decline of PFTs per year during the study period was calculated for both groups of patients, and comparing them showed a trend toward significance in FEV1 (−1.87% vs. −0.5% p=0.08) and FVC (−1.2% vs. +0.2% p=0.09), favoring the patients without isolation of Nocardia. No difference between the groups was found in the decline of FEF 25–75% (−3% vs. −1.6% p=0.2).\nIn the evaluation of PFTs from the beginning until the end of the study period, no difference was found in the yearly decline rates of PFTs (FEV1, FVC or FEF 25–75%) between patients with a single episode of isolation of Nocardia in culture and those with recurrent episodes (p=0.55, 0.97, and 0.24, respectively). Furthermore, no significant difference was found in the mean rate of decline per year of FEV1 (p=0.72), FVC (p=0.21) or FEF 25–75% (p=0.36) in the years prior to the first positive Nocardia culture compared to the years following the isolation until the end of the study period." ]
[ null, "methods", null, null, null, null ]
[ "Background", "Material and Methods", "Study design", "Statistical analyses", "Results", "Demographics", "Medication and antibiotic treatment", "Control group data", "Pulmonary function tests", "Discussion", "Conclusions" ]
[ "Nocardia species are aerobic Gram-positive filamentous bacteria of the Actinomycetes order, which are natural inhabitants of soil and water throughout the world. Pulmonary nocardiosis, the most common clinical presentation of infection, is usually acquired by direct inhalation of contaminated soil. Symptoms may include cough, shortness of breath, chest pain, hemoptysis, fever, night sweats, weight loss, and fatigue. The chest radiograph may be variable, with nodular and/or consolidative infiltrates as well as cavitary lesions and pleural effusions. Risk factors for acquisition of infection include depressed cell-mediated immunity, but up to one-third of patients are immunocompetent [1–3]. Furthermore, Nocardia may be identified in the respiratory tract without apparent infection. This colonization is encountered in patients with underlying structural lung disease, such as bronchiectasis and CF [4].\nPublished data on Nocardia in cystic fibrosis are sparse and include mostly case reports [5–9]. These reports concluded that isolation of Nocardia spp. from sputum usually represents colonization rather than infection. One larger retrospective study of 17 CF patients with a sputum culture positive for Nocardia spp. [10] showed that treatment of Nocardia-positive sputum in CF has no value in improving pulmonary function.\nNo data have been published comparing the natural course of cystic fibrosis patients with positive Nocardia sputum cultures to matched CF patients without Nocardia. In our study we describe our patients with Nocardia spp. in sputum over a period of 10 years, comparing the severity of lung disease and the rate of pulmonary function deterioration during the study period between patients with a single episode of growth and patients with recurrent isolations. We also compare these colonized patients to CF control patients of similar status but without Nocardia growth. 
Therefore, the aim of the study was to determine if colonization by the bacteria affects the natural history of CF disease.", " Study design Our study was a longitudinal, paired, case-control study of cystic fibrosis patients with and without isolation of Nocardia spp. in sputum from 2003 until December 2013. We collected all positive sputum cultures for Nocardia spp. in CF patients from our clinic – the Israeli National Center for Cystic Fibrosis, Safra Children’s Hospital, Sheba Medical Center Israel – found during the study period, using a laboratory microbiology database. All medical records were reviewed and the following data were collected for analysis: age, sex, CFTR mutations, and co-morbidities such as pancreatic sufficiency, CF-related diabetes, liver disease, allergic bronchopulmonary aspergillosis (ABPA), recurrent distal intestinal obstruction syndrome (DIOS), and hemoptysis. Other sputum cultures were recorded for the colonization of microbes such as Pseudomonas aeruginosa, Staphylococcus aureus, Aspergillus fumigatus, and Mycobacterium abscessus. Pulmonary function test (PFT) recordings using percent-predicted values adjusted for age, sex, and height were obtained at different timepoints. FEV1, FEF25–75, and FVC were primarily obtained during 2003 or the first PFTs found in the patients’ files (3 of the 19 patients had first PFTs in the years 2006 to 2009 due to young age or late diagnosis). Next, PFTs were obtained 6 months preceding the first episode of positive Nocardia culture, during the episode and 6 months following it. Additionally, updated PFTs were obtained from 2013 (aside from 1 patient whose last record of PFTs was in 2009 due to lung transplantation). The number of episodes of Nocardia growth was recorded. Each episode was defined as positive sputum cultures for Nocardia separated by 2 months’ time, with or without a specific treatment trial. 
Symptoms of pulmonary exacerbation during the episode were recorded, as was the need for hospitalization and antibiotic treatment, either intravenous or oral. Concurrent treatments such as Azithromycin, steroids, and chronic use of inhaled antibiotics were recorded. The use of specific anti-Nocardia antibiotics was reported, as was the duration of treatment. Lastly, the number of patients undergoing computerized tomography was recorded, as well as any pathological findings.\nEach patient was then matched with a control cystic fibrosis patient of the same mutation class, age, and sex, but without Nocardia isolation in sputum. Medical records of the control group were reviewed for the same study period as the other member of their matched pair, and similar data were collected for comparison. PFTs were compared, as was the trend of decline during the study period for each matched pair.\nThe study was approved by the Institutional Review Board.\n Statistical analyses Categorical variables were reported using frequency and percentage. Continuous variables were described using mean (SD), median (IQR), and range (min-max). 
Continuous variables were tested for normal distribution using the Kolmogorov-Smirnov test, Q-Q plots, and histograms. The comparison between pulmonary function tests 6 months before the first episode, during the episode, and 6 months after the episode was conducted using the Friedman test. The Mann-Whitney test was used to compare pulmonary function tests of symptomatic patients with those of the other patients, and of patients treated for Nocardia with those not treated. A one-sample Wilcoxon signed rank test was used to evaluate whether the decline of FEV1 per year during the study period differed from 1.5–2% (the average mean rate of decline per year for cystic fibrosis patients). Categorical variables were compared using the chi-square test or Fisher’s exact test. Paired categorical variables were compared using the McNemar test, and paired continuous variables were compared using the Wilcoxon signed rank test.", "Our study was a longitudinal, paired, case-control study of cystic fibrosis patients with and without isolation of Nocardia spp. in sputum from 2003 until December 2013. We collected all positive sputum cultures for Nocardia spp. in CF patients from our clinic – the Israeli National Center for Cystic Fibrosis, Safra Children’s Hospital, Sheba Medical Center Israel – found during the study period, using a laboratory microbiology database. All medical records were reviewed and the following data were collected for analysis: age, sex, CFTR mutations, and co-morbidities such as pancreatic sufficiency, CF-related diabetes, liver disease, allergic bronchopulmonary aspergillosis (ABPA), recurrent distal intestinal obstruction syndrome (DIOS), and hemoptysis. Other sputum cultures were recorded for the colonization of microbes such as Pseudomonas aeruginosa, Staphylococcus aureus, Aspergillus fumigatus, and Mycobacterium abscessus. Pulmonary function test (PFT) recordings using percent-predicted values adjusted for age, sex, and height were obtained at different timepoints. FEV1, FEF25–75, and FVC were primarily obtained during 2003 or the first PFTs found in the patients’ files (3 of the 19 patients had first PFTs in the years 2006 to 2009 due to young age or late diagnosis). Next, PFTs were obtained 6 months preceding the first episode of positive Nocardia culture, during the episode and 6 months following it. Additionally, updated PFTs were obtained from 2013 (aside from 1 patient whose last record of PFTs was in 2009 due to lung transplantation). The number of episodes of Nocardia growth was recorded. Each episode was defined as positive sputum cultures for Nocardia separated by 2 months’ time, with or without a specific treatment trial. 
Symptoms of pulmonary exacerbation during the episode were recorded, as was the need for hospitalization and antibiotic treatment, either intravenous or oral. Concurrent treatments such as Azithromycin, steroids, and chronic use of inhaled antibiotics were recorded. The use of specific anti-Nocardia antibiotics was reported, as was the duration of treatment. Lastly, the number of patients undergoing computerized tomography was recorded, as well as any pathological findings.\nEach patient was then matched with a control cystic fibrosis patient of the same mutation class, age, and sex, but without Nocardia isolation in sputum. Medical records of the control group were reviewed for the same study period as the other member of their matched pair, and similar data were collected for comparison. PFTs were compared, as was the trend of decline during the study period for each matched pair.\nThe study was approved by the Institutional Review Board.", "Categorical variables were reported using frequency and percentage. Continuous variables were described using mean (SD), median (IQR), and range (min-max). Continuous variables were tested for normal distribution using the Kolmogorov-Smirnov test, Q-Q plots, and histograms. The comparison between pulmonary function tests 6 months before the first episode, during the episode, and 6 months after the episode was conducted using the Friedman test. The Mann-Whitney test was used to compare pulmonary function tests of symptomatic patients with those of the other patients, and of patients treated for Nocardia with those not treated. A one-sample Wilcoxon signed rank test was used to evaluate whether the decline of FEV1 per year during the study period differed from 1.5–2% (the average mean rate of decline per year for cystic fibrosis patients). Categorical variables were compared using the chi-square test or Fisher’s exact test. 
Paired categorical variables were compared using the McNemar test, and paired continuous variables were compared using the Wilcoxon signed rank test.", " Demographics Of the 160 CF patients treated at our clinic, 19 (12%) were found to have Nocardia spp. isolated from their sputum at some point during their clinical follow-up during the years 2003–2013. The sex distribution was 13 (68.4%) male patients and 6 (31.6%) females. The mean age at first episode was 20.5 years (SD 10.2), ranging from 8.6 years to 51 years of age. Eleven patients (58%) had only 1 episode of being Nocardia-positive. Six patients had recurrent growth of 3–4 episodes (32%) and 2 patients (10%) were found to have chronic positive sputum for Nocardia until the end of the study period (10–11 episodes). There was no evidence of extra-pulmonary disease or dissemination in any of the patients. Fifty-three percent of the patients were symptomatic during episodes of Nocardia isolation. The symptoms reported included dyspnea, cough, fever, chest pain, and hemoptysis. Parameters evaluated such as related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the patients are summarized in Table 1. Sixteen patients (84%) underwent computerized tomography imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. None had specific lesions attributed to Nocardia.\n Medication and antibiotic treatment Sixty-three percent of the patients were not hospitalized during the episodes of positive Nocardia cultures. Of the patients treated with IV Ceftazidime (37%), Piperacillin-Tazobactam or Meropenem combined with Amikacin, 3 were treated IV for less than 2 weeks and continued treatment orally and 4 received treatment for the duration of 3–4 weeks. Most patients (63%) received antibiotic treatment with Ciprofloxacin during the episodes, for a duration of less than 2 weeks (10%), 3–4 weeks (32%), 1–2 months (16%) or continuously (5%). Other antibiotic treatments such as Fusidic acid, Clarithromycin, Sporanox, or Voriconazole for other indications were given in 37% of the patients. After the isolation, specific anti-Nocardial treatments, such as Cotrimoxazole or Minocycline, were used in only 3 patients (16%), for a duration of over 3 months in 2 of them and 3–4 weeks in 1.\n Control group data A group of control patients paired with the 19 patients with positive Nocardia cultures was formed; they were matched by mutation class, age, and sex (Table 1). The McNemar test was used to check the pairing of background details and confounders between cases and controls.\nThe sex distribution of the control group was 12 (63.2%) male patients and 7 (36.8%) females. The mean age at the end of the study was 25 years for both groups. Related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the control group patients are summarized in Table 1. Eleven patients (58%) underwent CT imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy.\nNo statistical differences were found between the background variables, demographic data, chronic treatments and medications, related diseases, prevalence of other microorganisms in sputum, or findings on CT between the study group and the control group.\n Pulmonary function tests Pulmonary function tests were obtained 6 months prior to the first episode of positive Nocardia culture, during the episode, and 6 months afterwards. No statistically significant difference was found during this year in FEV1 (p=0.5), FVC (p=0.15) or FEF 25–75% (p=0.84).\nMoreover, no differences were found in mean changes in PFTs during this year between different groups of patients divided into those who were hospitalized, those receiving steroids, those treated with intravenous or oral antibiotics including azithromycin or ciprofloxacin or anti-Nocardial treatment, those on continuous antibiotic inhalations, and those who were symptomatic compared to patients who were not. In addition, no statistical difference in the change in pulmonary functions between the time of isolation and 6 months afterwards was found when comparing the patients who received intravenous or oral anti-Nocardial treatment to those who did not (FEV1 (p=0.6), FVC (p=0.9), FEF25–75% (p=0.72)).\nComparing PFTs during the episode of symptomatic versus asymptomatic patients, no difference was found in mean FEV1 or FVC. However, a significantly lower mean FEF25–75% was found among the symptomatic patients (35.7% vs. 73.3%, p=0.01).\nPulmonary function tests of study group and control patients were obtained at the beginning and at the end of the study period on similar dates for each pair (Figure 1). No statistically significant difference was found between the groups for all mean PFTs (FEV1, FVC, FEF 25–75%) at the beginning of the study (79.6% vs. 75.4% p=0.39, 87% vs. 86% p=1.0, 71.8% vs. 57.3% p=0.07, respectively). The mean decline of PFTs per year during the study period was calculated for both groups of patients, and comparing them showed a trend toward significance in FEV1 (−1.87% vs. −0.5% p=0.08) and FVC (−1.2% vs. +0.2% p=0.09), favoring the patients without isolation of Nocardia. No difference between the groups was found in the decline of FEF 25–75% (−3% vs. −1.6% p=0.2).\nIn the evaluation of PFTs from the beginning until the end of the study period, no difference was found in the yearly decline rates of PFTs (FEV1, FVC or FEF 25–75%) between patients with a single episode of isolation of Nocardia in culture and those with recurrent episodes (p=0.55, 0.97, and 0.24, respectively). Furthermore, no significant difference was found in the mean rate of decline per year of FEV1 (p=0.72), FVC (p=0.21) or FEF 25–75% (p=0.36) in the years prior to the first positive Nocardia culture compared to the years following the isolation until the end of the study period.", "Of the 160 CF patients treated at our clinic, 19 (12%) were found to have Nocardia spp. isolated from their sputum at some point during their clinical follow-up during the years 2003–2013. The sex distribution was 13 (68.4%) male patients and 6 (31.6%) females. The mean age at first episode was 20.5 years (SD 10.2), ranging from 8.6 years to 51 years of age. Eleven patients (58%) had only 1 episode of being Nocardia-positive. Six patients had recurrent growth of 3–4 episodes (32%) and 2 patients (10%) were found to have chronic positive sputum for Nocardia until the end of the study period (10–11 episodes). There was no evidence of extra-pulmonary disease or dissemination in any of the patients. Fifty-three percent of the patients were symptomatic during episodes of Nocardia isolation. The symptoms reported included dyspnea, cough, fever, chest pain, and hemoptysis. Parameters evaluated such as related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the patients are summarized in Table 1. Sixteen patients (84%) underwent computerized tomography imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. 
None had specific lesions attributed to Nocardia.", "Sixty-three percent of the patients were not hospitalized during the episodes of positive Nocardia cultures. Of the patients treated with IV Ceftazidime (37%), Piperacillin-Tazobactam or Meropenem combined with Amikacin, 3 were treated IV for less than 2 weeks and continued treatment orally, and 4 received treatment for a duration of 3–4 weeks. Most patients (63%) received antibiotic treatment with Ciprofloxacin during the episodes, for a duration of less than 2 weeks (10%), 3–4 weeks (32%), 1–2 months (16%) or continuously (5%). Other antibiotic treatments such as Fucidic acid, Clarithromycin, Sporanox, or Voriconazole for other indications were given to 37% of the patients. After the isolation, specific anti-Nocardial treatments, such as Cotrimoxazole or Minocycline, were used in only 3 patients (16%), for a duration of over 3 months in 2 of them and 3–4 weeks in 1.", "A group of control patients paired with the 19 patients with positive Nocardia cultures was formed; they were matched by mutation class, age, and sex (Table 1). The McNemar test was used to check the pairing of background details and confounders between cases and controls.\nThe sex distribution of the control group was 12 (63.2%) male patients and 7 (36.8%) females. The mean age at the end of the study was 25 years for both groups. Related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the control group patients are summarized in Table 1. Eleven patients (58%) underwent CT imaging during the study period.
All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy.\nNo statistical differences were found between the background variables, demographic data, chronic treatments and medications, related diseases, prevalence of other microorganisms in sputum, or findings on CT between the study group and the control group.", "Pulmonary function tests were obtained 6 months previous to the first episode of positive Nocardia culture, during the episode, and 6 months afterwards. No statistically significant difference was found during this year in FEV1 (p=0.5), FVC (p=0.15) or FEF 25–75% (p=0.84).\nMoreover, no differences were found in mean changes in PFTs during this year between different groups of patients divided into those who were hospitalized, those receiving steroids, those treated with intravenous or oral antibiotics including azithromycin or ciprofloxacin or anti-Nocardial treatment, those on continuous antibiotic inhalations, and those who were symptomatic compared to patients who were not. In addition, no statistical difference in the change in pulmonary functions between the time of isolation and 6 months afterwards was found when comparing the patients who received intravenous or oral anti-Nocardial treatment to those who did not (FEV1 (p=0.6) FVC (p=0.9) FEF25–75% (p=0.72)).\nComparing PFTs during the episode of symptomatic versus asymptomatic patients, no difference was found in mean FEV1 or FVC. However, a significantly lower mean FEF25–75% was found among the symptomatic patients (35.7% vs. 73.3%, p=0.01).\nPulmonary function tests of study group and control patients were obtained at the beginning and at the end of the study period on similar dates for each pair (Figure 1). No statistically significant difference was found between the groups for all mean PFTs (FEV1, FVC, FEF 25–75%) at the beginning of the study (79.6% vs. 75.4% p=0.39, 87% vs. 86% p=1.0, 71.8% vs. 57.3% p= 0.07, respectively). 
The mean decline of PFTs per year during the study period was calculated for both groups of patients, and comparing them showed a trend to significance in FEV1 (−1.87% vs. −0.5% p=0.08) and FVC (−1.2% vs. +0.2% p=0.09), favoring the patients without isolation of Nocardia. No difference between the groups was found in the decline of FEF 25–75% (−3% vs. −1.6% p=0.2).\nIn the evaluation of PFTs from the beginning and until the end of the study period, no difference was found in the yearly decline rates of PFTs (FEV1, FVC or FEF 25–75%) between patients with a single episode of isolation of Nocardia in culture and those with recurrent episodes (p=0.55, 0.97, and 0.24, respectively). Furthermore, no significant difference was found in the mean rate of decline per year of FEV1 (p=0.72), FVC (p=0.21) or FEF 25–75% (p=0.36) in the years prior to the first positive Nocardia culture compared to the years following the isolation until the end of the study period.", "Patients with cystic fibrosis have a predisposition to become colonized and infected by a large array of microbes. Some bacteria, such as Pseudomonas aeruginosa, Staphylococcus aureus, and Stenotrophomonas maltophilia, accelerate the deterioration of pulmonary disease, while others have little or no data to define their role as a pathogen or colonizer in the cystic fibrosis patient, and the benefits of treatment after their isolation are unknown [11,12].\nThe literature describing Nocardia in CF consists mostly of case reports, including 1 report describing 3 patients in Australia [5] and a report of 9 patients in Spain [6], which concluded that isolation represents colonization rather than infection. Three more case reports of children with CF and Nocardia isolation were published and found other risk factors for infection, such as corticosteroid treatment for ABPA [7–9].
Most patients described were treated with cotrimoxazole, but the outcomes for the treated versus the untreated patients were similar or unknown [5–9].\nTo the best of our knowledge, the present study is the first to evaluate such a large cohort of patients, 25% of them carriers of a unique mutation prevalent in Israel (W1282X), for such a long period of time (10 years). We found no statistically significant difference in PFTs before or after the Nocardia isolation. No difference in PFTs were found between patients with a single episode versus those with recurrent Nocardia growth, nor in those treated with antibiotics, some for prolonged periods of time, versus those who were not treated. There was no difference in mean FEV1 or FVC between symptomatic versus asymptomatic patients during the episode, but a significantly lower mean FEF 25–75% was found among the symptomatic patients (35.7 vs. 73.3, p=0.01). Regarding this finding, we hypothesized that a graver disease of the smaller airways and perhaps a more parenchymal involvement could result in subjective pulmonary symptomatology.\nIn this study we expanded the follow-up period and evaluated the changes in pulmonary functions throughout the entire period of 10 years and not only during the year of the episode of Nocardia isolation, realizing that in some cases a bacteria could colonize the lungs without growing in our sputum cultures. Furthermore, we compared our study group patients to paired CF patients with similar status as their peers by mutation class, age, and sex, but no Nocardia isolation, a comparison not carried out until this time. In these patients we found a mean decline of FEV1 per year of −1.87%, similar to the average rate of decline known for CF patients. 
No difference in the rate of decline was found comparing the period prior to the positive Nocardia culture to the years following the isolation until the end of the study period.\nSurprisingly, the mean decline of FEV1 per year for our control group CF patients was -0.5%. Comparing the rates for both groups showed a trend to significance in FEV1 (p=0.08) and FVC (p=0.09), with no difference in FEF25–75% (p=0.2), meaning that the cystic fibrosis patients who had positive sputum cultures with Nocardia declined faster than matched controls without isolation of this microbe. It is possible that this difference was not statistically significant because of the small study group. Since it does not seem that Nocardia causes a deterioration of lung functions, we hypothesized that this microbe might grow more often in patients with more significant lung disease and worse prognosis. Perhaps other genetic or environmental factors affect the lungs of this group of patients or protect the lungs of the “healthier” patients and differentiate them from each other. These factors will need to be studied in larger groups of patients.\nIn a previous retrospective study from Arizona [10], 17 patients with CF had a sputum culture positive for Nocardia spp. during the years 1997–2007 (of 123 CF patients – 14%), a similar percentage as in our clinic. Several patients had persistent positive Nocardia sputum cultures despite repeated antibiotic courses and did not appear to have different outcomes from the ones with a single isolated culture. As opposed to our center, in Arizona 78% of episodes were treated with antibiotics. Despite this, mean FEV1 and FVC values showed no significant linear trend before, during, and after an episode of positive Nocardia isolation in sputum, which led to the conclusion that treatment likely has no value in improving pulmonary function. 
Our study strengthens our colleagues' conclusions.\nOptimal treatment recommendations for pulmonary Nocardiosis have not been firmly established due to variable in vitro antimicrobial susceptibility patterns. Sulfonamides have been the antimicrobials of choice since they were introduced 50 years ago [13], but resistance has been described lately in up to 15% of the isolates [14]. Alternative antibiotics used orally are minocycline or doxycycline, while some strains of Nocardia are moderately sensitive to amoxicillin-clavulanate or to fluoroquinolones. Several intravenous antibiotics are also effective against Nocardia, such as carbapenems, ceftriaxone, cefotaxime, amikacin, and linezolid. When treating pulmonary Nocardiosis, combination therapy should be considered and a duration of 6–12 months of therapy is recommended. However, as shown in the literature and highlighted in our study, in CF patients the need to treat as well as the duration of treatment is less well defined.\nDespite all this, it is believed that Nocardia eradication or a very long period of treatment might be needed in some patients with CF, especially if they are to undergo lung transplantation or are in need of prolonged and high doses of corticosteroids, such as for treatment of ABPA. The reason for this is that the corticosteroids as well as the immunosuppressive regimen during the post-operative period are well-established risk factors for invasive Nocardiosis and disseminated disease that lead to a high rate of mortality [15,16].\nIndeed, a mediastinal mass and pericardial effusion with cardiac tamponade were diagnosed as an invasive Nocardial infection in a renal transplant immunosuppressed patient [17].\nThere are some limitations to our study. The major limitation is the small study sample, making firm conclusions difficult. However, it is the largest study group assessed so far, to the best of our knowledge.
It seems that a multi-center study with more cases could provide more information and greater statistical power. The retrospective method of our study is another limitation. Accordingly, we significantly expanded the period of follow-up of the patients to the entire study period, enabling us to estimate yearly changes in pulmonary functions. Another limitation of our study was the lack of differentiation of Nocardia into different species and establishment of drug sensitivity in our microbiologic laboratory, which could possibly have separated the patients into different sub-groups according to the virulence and resistance of the specific microbe.", "As shown in the literature, our study supports the fact that Nocardia isolation in sputum of CF patients is usually due to colonization without true infection, and its presence does not significantly affect the natural history or pulmonary deterioration of their lung disease. Furthermore, no difference was found between patients with different risk factors for infection, those who were symptomatic during the episodes of growth, and those who were treated accordingly, compared to the others. Therefore, it seems the need for Nocardia treatment should be evaluated on an individual basis and in the context of the clinical picture. Our study found that the rate of decline in lung functions per year in patients of similar status but without isolation of Nocardia showed a slower trend than average for CF patients. It seems possible that these patients have less colonization of different microbes in their sputum, but larger, multi-center studies are needed to confirm this." ]
[ null, "materials|methods", "methods", null, "results", null, null, "methods", null, "discussion", "conclusions" ]
[ "Cystic Fibrosis", "Lung Diseases", "Nocardia", "Respiratory Function Tests" ]
Background: Nocardia species are aerobic Gram-positive filamentous bacteria of the Actinomycetes order, which are natural inhabitants of soil and water throughout the world. Pulmonary nocardiosis, the most common clinical presentation of infection, is usually acquired by direct inhalation of contaminated soil. Symptoms may include cough, shortness of breath, chest pain, hemoptysis, fever, night sweats, weight loss, and fatigue. The chest radiograph may be variable, with nodular and/or consolidative infiltrates as well as cavitary lesions and pleural effusions. Risk factors for acquisition of infection include depressed cell immunity, but up to one-third of patients are immunocompetent [1–3]. Furthermore, Nocardia may be identified in the respiratory tract without apparent infection. This colonization is encountered in patients with underlying structural lung disease, such as bronchiectasis and CF [4]. Published data on Nocardia in cystic fibrosis are sparse and include mostly case reports [5–9], whose conclusion was that isolation of Nocardia spp. from sputum usually represents colonization rather than infection. One larger retrospective study of 17 CF patients with a sputum culture positive for Nocardia spp. [10] showed that treatment of Nocardia-positive sputum in CF has no value in improving pulmonary function. No data have been published comparing the natural course of cystic fibrosis patients with positive Nocardia sputum cultures to matched CF patients without Nocardia. In our study we describe our patients with Nocardia spp. in sputum over a period of 10 years, comparing the severity of lung disease and the rate of pulmonary function deterioration during the study period between patients with a single episode of growth and patients with recurrent isolations. We also compare these colonized patients to CF control patients of similar status but without Nocardia growth.
Therefore, the aim of the study was to determine if colonization by the bacteria affects the natural history of CF disease. Material and Methods: Study design Our study was a longitudinal, paired, case-control study of cystic fibrosis patients with and without isolation of Nocardia spp. in sputum from 2003 until December 2013. We collected all positive sputum cultures for Nocardia spp. in CF patients from our clinic – the Israeli National Center for Cystic Fibrosis, Safra Children’s Hospital, Sheba Medical Center Israel – found during the study period, using a laboratory microbiology database. All medical records were reviewed and the following data were collected for analysis: age, sex, CFTR mutations, and co-morbidities such as pancreatic sufficiency, CF-related diabetes, liver disease, allergic bronchopulmonary aspergillosis (ABPA), recurrent distal intestinal obstruction syndrome (DIOS), and hemoptysis. Other sputum cultures were recorded for the colonization of microbes such as Pseudomonas aeruginosa, Staphylococcus aureus, Aspergillus fumigatus, and Mycobacterium abscessus. Pulmonary function test (PFT) recordings using percent-predicted values adjusted for age, sex, and height were obtained at different timepoints. FEV1, FEF25–75, and FVC were primarily obtained during 2003 or the first PFTs found in the patients’ files (3 of the 19 patients had first PFTs in the years 2006 to 2009 due to young age or late diagnosis). Next, PFTs were obtained 6 months preceding the first episode of positive Nocardia culture, during the episode and 6 months following it. Additionally, updated PFTs were obtained from 2013 (aside from 1 patient whose last record of PFTs was in 2009 due to lung transplantation). The number of episodes of Nocardia growth was recorded. Each episode was defined as positive sputum cultures for Nocardia separated by 2 months’ time, with or without a specific treatment trial. 
Symptoms of pulmonary exacerbation during the episode were recorded, as were the need for hospitalization and antibiotic treatment, either intravenous or oral. Concurrent treatments such as Azithromycin, steroids, and chronic use of inhaled antibiotics were recorded. The use of specific anti-Nocardia antibiotics was reported, as was the duration of treatment. Lastly, the number of patients undergoing computerized tomography was recorded, as well as pathological findings. Each patient was then matched with a control cystic fibrosis patient of the same mutation class, age, and sex, but without Nocardia isolation in sputum. Medical records of the control group were reviewed for the same study period as the other member of their matched pair, and similar data were collected for comparison. PFTs were compared, as was the trend of decline during the study period for each matched pair. The study was approved by the Institutional Review Board. Statistical analyses Categorical variables were reported using frequency and percentage. Continuous variables were described using mean (SD), median (IQR), and range (min-max).
Continuous variables were tested for normal distribution using the Kolmogorov-Smirnov test, Q-Q plots, and histograms. The comparison between pulmonary function tests 6 months before the first episode, during the episode, and 6 months after the episode was conducted using the Friedman test. The Mann-Whitney test was used to compare pulmonary function tests of symptomatic patients versus the others, and of those treated for Nocardia versus those who were not. A one-sample Wilcoxon signed rank test was used to evaluate whether the decline of FEV1 per year during the study period differed from 1.5–2% (the average mean rate of decline per year for cystic fibrosis patients). Categorical variables were compared using the chi-square test or Fisher's exact test. Paired categorical variables were compared using the McNemar test, and paired continuous variables were compared using the Wilcoxon signed rank test.
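The nonparametric workflow described above maps directly onto standard statistical library calls. The sketch below is a minimal illustration using scipy.stats with invented FEV1 percent-predicted values (not the study data); the variable names and numbers are assumptions for demonstration only.

```python
# Illustrative sketch of the nonparametric tests described above, run on
# synthetic FEV1 values (NOT the study data). All numbers are invented.
from scipy import stats

# Percent-predicted FEV1 for 8 hypothetical patients at three timepoints:
# 6 months before, during, and 6 months after a Nocardia-positive culture.
before = [78, 65, 82, 55, 90, 71, 60, 84]
during = [76, 66, 80, 54, 89, 70, 58, 83]
after  = [77, 64, 81, 55, 88, 70, 59, 82]

# Friedman test: repeated measures on the same patients across timepoints.
stat, p = stats.friedmanchisquare(before, during, after)
print(f"Friedman: chi2={stat:.2f}, p={p:.3f}")

# Mann-Whitney U: symptomatic vs. asymptomatic patients (independent groups).
symptomatic = [35, 40, 32, 38]
asymptomatic = [70, 75, 68, 80]
u, p = stats.mannwhitneyu(symptomatic, asymptomatic, alternative="two-sided")
print(f"Mann-Whitney: U={u}, p={p:.3f}")

# Wilcoxon signed-rank: paired yearly FEV1 decline, case vs. matched control.
case_decline = [-2.1, -1.5, -2.4, -0.9, -1.8]
control_decline = [-0.4, -0.6, 0.1, -0.8, -0.3]
w, p = stats.wilcoxon(case_decline, control_decline)
print(f"Wilcoxon: W={w}, p={p:.3f}")
```

The same calling pattern extends to FVC and FEF 25–75%; a real analysis would substitute each matched pair's measured percent-predicted values and decline rates.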
Results: Demographics Of the 160 CF patients treated at our clinic, 19 (12%) were found to have Nocardia spp.
isolated from their sputum at some point during their clinical follow-up during the years 2003–2013. The sex distribution was 13 (68.4%) male patients and 6 (31.6%) females. The mean age at first episode was 20.5 years (SD 10.2), ranging from 8.6 years to 51 years of age. Eleven patients (58%) had only 1 episode of being Nocardia-positive. Six patients had recurrent growth of 3–4 episodes (32%) and 2 patients (10%) were found to have chronic positive sputum for Nocardia until the end of the study period (10–11 episodes). There was no evidence of extra-pulmonary disease or dissemination in any of the patients. There were 53% of patients symptomatic during episodes of Nocardia isolation. The symptoms reported included dyspnea, cough, fever, chest pain, and hemoptysis. Parameters evaluated such as related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the patients are summarized in Table 1. Sixteen patients (84%) underwent computerized tomography imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. None had specific lesions attributed to Nocardia. Of the 160 CF patients treated at our clinic, 19 (12%) were found to have Nocardia spp. isolated from their sputum at some point during their clinical follow-up during the years 2003–2013. The sex distribution was 13 (68.4%) male patients and 6 (31.6%) females. The mean age at first episode was 20.5 years (SD 10.2), ranging from 8.6 years to 51 years of age. Eleven patients (58%) had only 1 episode of being Nocardia-positive. Six patients had recurrent growth of 3–4 episodes (32%) and 2 patients (10%) were found to have chronic positive sputum for Nocardia until the end of the study period (10–11 episodes). There was no evidence of extra-pulmonary disease or dissemination in any of the patients. There were 53% of patients symptomatic during episodes of Nocardia isolation. 
The symptoms reported included dyspnea, cough, fever, chest pain, and hemoptysis. Parameters evaluated such as related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the patients are summarized in Table 1. Sixteen patients (84%) underwent computerized tomography imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. None had specific lesions attributed to Nocardia. Medication and antibiotic treatment Sixty-three percent of the patients were not hospitalized during the episodes of positive Nocardia cultures. Of the patients treated with IV Ceftazidime (37%), Piperacillin-Tazobactam or Meropenem combined with Amikacin, 3 were treated IV for less than 2 weeks and continued treatment orally and 4 received treatment for the duration of 3–4 weeks. Most patients (63%) received antibiotic treatment with Ciprofloxacin during the episodes, for a duration of less than 2 weeks (10%), 3–4 weeks (32%), 1–2 months (16%) or continuously (5%). Other antibiotic treatments such as Fucidic acid, Clarithromycin, Sporanox, or Voriconazole for other indications were given in 37% of the patients. After the isolation, specific anti-Nocardial treatments, such as Cotrimoxazole or Minocycline, were used in only 3 patients (16%), for a duration of over 3 months in 2 of them and 3–4 weeks in 1. Sixty-three percent of the patients were not hospitalized during the episodes of positive Nocardia cultures. Of the patients treated with IV Ceftazidime (37%), Piperacillin-Tazobactam or Meropenem combined with Amikacin, 3 were treated IV for less than 2 weeks and continued treatment orally and 4 received treatment for the duration of 3–4 weeks. Most patients (63%) received antibiotic treatment with Ciprofloxacin during the episodes, for a duration of less than 2 weeks (10%), 3–4 weeks (32%), 1–2 months (16%) or continuously (5%). 
Other antibiotic treatments such as Fucidic acid, Clarithromycin, Sporanox, or Voriconazole for other indications were given in 37% of the patients. After the isolation, specific anti-Nocardial treatments, such as Cotrimoxazole or Minocycline, were used in only 3 patients (16%), for a duration of over 3 months in 2 of them and 3–4 weeks in 1. Control group data A group of control patients paired with the 19 patients with positive Nocardia cultures was formed; they were matched by mutation class, age, and sex (Table 1). The McNemar test was used o check the pairing of background details and confounders between cases and controls. The sex distribution of the control group was 12 (63.2%) male patients and 7 (36.8%) females. The mean age at the end of the study was 25 years for both groups. Related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the control group patients are summarized in Table 1. Eleven patients (58%) underwent CT imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. No statistical differences were found between the background variables, demographic data, chronic treatments and medications, related diseases, prevalence of other microorganisms in sputum, or findings on CT between the study group and the control group. A group of control patients paired with the 19 patients with positive Nocardia cultures was formed; they were matched by mutation class, age, and sex (Table 1). The McNemar test was used o check the pairing of background details and confounders between cases and controls. The sex distribution of the control group was 12 (63.2%) male patients and 7 (36.8%) females. The mean age at the end of the study was 25 years for both groups. Related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the control group patients are summarized in Table 1. 
Pulmonary function tests Pulmonary function tests were obtained 6 months prior to the first episode of positive Nocardia culture, during the episode, and 6 months afterwards. No statistically significant difference was found during this year in FEV1 (p=0.5), FVC (p=0.15) or FEF 25–75% (p=0.84). Moreover, no differences were found in mean changes in PFTs during this year between different groups of patients divided into those who were hospitalized, those receiving steroids, those treated with intravenous or oral antibiotics including azithromycin or ciprofloxacin or anti-Nocardial treatment, those on continuous antibiotic inhalations, and those who were symptomatic compared to patients who were not. In addition, no statistical difference in the change in pulmonary functions between the time of isolation and 6 months afterwards was found when comparing the patients who received intravenous or oral anti-Nocardial treatment to those who did not (FEV1 (p=0.6) FVC (p=0.9) FEF25–75% (p=0.72)). Comparing PFTs during the episode of symptomatic versus asymptomatic patients, no difference was found in mean FEV1 or FVC. However, a significantly lower mean FEF25–75% was found among the symptomatic patients (35.7% vs. 73.3%, p=0.01). Pulmonary function tests of study group and control patients were obtained at the beginning and at the end of the study period on similar dates for each pair (Figure 1). No statistically significant difference was found between the groups for all mean PFTs (FEV1, FVC, FEF 25–75%) at the beginning of the study (79.6% vs. 75.4% p=0.39, 87% vs.
86% p=1.0, 71.8% vs. 57.3% p=0.07, respectively). The mean decline of PFTs per year during the study period was calculated for both groups of patients, and comparing them showed a trend to significance in FEV1 (−1.87% vs. −0.5% p=0.08) and FVC (−1.2% vs. +0.2% p=0.09), favoring the patients without isolation of Nocardia. No difference between the groups was found in the decline of FEF 25–75% (−3% vs. −1.6% p=0.2). In the evaluation of PFTs from the beginning and until the end of the study period, no difference was found in the yearly decline rates of PFTs (FEV1, FVC or FEF 25–75%) between patients with a single episode of isolation of Nocardia in culture and those with recurrent episodes (p=0.55, 0.97, and 0.24, respectively). Furthermore, no significant difference was found in the mean rate of decline per year of FEV1 (p=0.72), FVC (p=0.21) or FEF 25–75% (p=0.36) in the years prior to the first positive Nocardia culture compared to the years following the isolation until the end of the study period.
Demographics: Of the 160 CF patients treated at our clinic, 19 (12%) were found to have Nocardia spp. isolated from their sputum at some point during their clinical follow-up over the years 2003–2013. The sex distribution was 13 (68.4%) male patients and 6 (31.6%) females. The mean age at first episode was 20.5 years (SD 10.2), ranging from 8.6 years to 51 years of age.
Eleven patients (58%) had only 1 episode of being Nocardia-positive. Six patients had recurrent growth of 3–4 episodes (32%) and 2 patients (10%) were found to have chronic positive sputum for Nocardia until the end of the study period (10–11 episodes). There was no evidence of extra-pulmonary disease or dissemination in any of the patients. Fifty-three percent of patients were symptomatic during episodes of Nocardia isolation. The symptoms reported included dyspnea, cough, fever, chest pain, and hemoptysis. Parameters evaluated, such as related diseases, chronic antibiotic or steroid treatment, and the frequency of additional microbes isolated from the patients' sputum, are summarized in Table 1. Sixteen patients (84%) underwent computerized tomography imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. None had specific lesions attributed to Nocardia. Medication and antibiotic treatment: Sixty-three percent of the patients were not hospitalized during the episodes of positive Nocardia cultures. Of the patients treated with IV Ceftazidime (37%), Piperacillin-Tazobactam or Meropenem combined with Amikacin, 3 were treated IV for less than 2 weeks and continued treatment orally and 4 received treatment for the duration of 3–4 weeks. Most patients (63%) received antibiotic treatment with Ciprofloxacin during the episodes, for a duration of less than 2 weeks (10%), 3–4 weeks (32%), 1–2 months (16%) or continuously (5%). Other antibiotic treatments such as Fusidic acid, Clarithromycin, Sporanox, or Voriconazole for other indications were given in 37% of the patients. After the isolation, specific anti-Nocardial treatments, such as Cotrimoxazole or Minocycline, were used in only 3 patients (16%), for a duration of over 3 months in 2 of them and 3–4 weeks in 1.
Control group data: A group of control patients paired with the 19 patients with positive Nocardia cultures was formed; they were matched by mutation class, age, and sex (Table 1). The McNemar test was used to check the pairing of background details and confounders between cases and controls. The sex distribution of the control group was 12 (63.2%) male patients and 7 (36.8%) females. The mean age at the end of the study was 25 years for both groups. Related diseases, chronic antibiotic or steroid treatment, and frequency of additional microbes isolated from sputum of the control group patients are summarized in Table 1. Eleven patients (58%) underwent CT imaging during the study period. All had manifestations of cystic fibrosis, such as bronchiectasis and lymphadenopathy. No statistical differences were found between the background variables, demographic data, chronic treatments and medications, related diseases, prevalence of other microorganisms in sputum, or findings on CT between the study group and the control group. Pulmonary function tests: Pulmonary function tests were obtained 6 months prior to the first episode of positive Nocardia culture, during the episode, and 6 months afterwards. No statistically significant difference was found during this year in FEV1 (p=0.5), FVC (p=0.15) or FEF 25–75% (p=0.84). Moreover, no differences were found in mean changes in PFTs during this year between different groups of patients divided into those who were hospitalized, those receiving steroids, those treated with intravenous or oral antibiotics including azithromycin or ciprofloxacin or anti-Nocardial treatment, those on continuous antibiotic inhalations, and those who were symptomatic compared to patients who were not.
In addition, no statistical difference in the change in pulmonary functions between the time of isolation and 6 months afterwards was found when comparing the patients who received intravenous or oral anti-Nocardial treatment to those who did not (FEV1 (p=0.6) FVC (p=0.9) FEF25–75% (p=0.72)). Comparing PFTs during the episode of symptomatic versus asymptomatic patients, no difference was found in mean FEV1 or FVC. However, a significantly lower mean FEF25–75% was found among the symptomatic patients (35.7% vs. 73.3%, p=0.01). Pulmonary function tests of study group and control patients were obtained at the beginning and at the end of the study period on similar dates for each pair (Figure 1). No statistically significant difference was found between the groups for all mean PFTs (FEV1, FVC, FEF 25–75%) at the beginning of the study (79.6% vs. 75.4% p=0.39, 87% vs. 86% p=1.0, 71.8% vs. 57.3% p=0.07, respectively). The mean decline of PFTs per year during the study period was calculated for both groups of patients, and comparing them showed a trend to significance in FEV1 (−1.87% vs. −0.5% p=0.08) and FVC (−1.2% vs. +0.2% p=0.09), favoring the patients without isolation of Nocardia. No difference between the groups was found in the decline of FEF 25–75% (−3% vs. −1.6% p=0.2). In the evaluation of PFTs from the beginning and until the end of the study period, no difference was found in the yearly decline rates of PFTs (FEV1, FVC or FEF 25–75%) between patients with a single episode of isolation of Nocardia in culture and those with recurrent episodes (p=0.55, 0.97, and 0.24, respectively). Furthermore, no significant difference was found in the mean rate of decline per year of FEV1 (p=0.72), FVC (p=0.21) or FEF 25–75% (p=0.36) in the years prior to the first positive Nocardia culture compared to the years following the isolation until the end of the study period.
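The per-year decline rates quoted above are rates of change of %predicted lung function over follow-up. One standard way to compute such a rate for a single patient is the least-squares slope of %predicted against time; a small self-contained sketch (the visit values below are invented for illustration, not patient data):

```python
def yearly_decline(years, pct_predicted):
    """Least-squares slope of lung function (%predicted) versus time,
    in percentage points per year; negative values indicate decline."""
    n = len(years)
    mean_t = sum(years) / n
    mean_f = sum(pct_predicted) / n
    num = sum((t - mean_t) * (f - mean_f) for t, f in zip(years, pct_predicted))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

# Illustrative FEV1 %predicted measurements over 4 yearly visits
slope = yearly_decline([0, 1, 2, 3], [80.0, 78.5, 76.0, 74.5])
```

Averaging such per-patient slopes within each group gives group-level mean declines of the kind compared between the study and control groups.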
Discussion: Patients with cystic fibrosis have a predisposition to become colonized and infected by a large array of microbes. Some bacteria, such as Pseudomonas aeruginosa, Staphylococcus aureus, and Stenotrophomonas maltophilia, accelerate the deterioration of pulmonary disease, while for others there are little or no data to define their role as pathogen or colonizer in the cystic fibrosis patient, and the benefit of treatment after their isolation is unknown [11,12]. The literature describing Nocardia in CF consists mostly of case reports, including 1 report describing 3 patients in Australia [5] and a report of 9 patients in Spain [6], which concluded that isolation represents colonization rather than infection. Three more case reports of children with CF and Nocardia isolation were published and found other risk factors for infection, such as corticosteroid treatment for ABPA [7–9]. Most patients described were treated with cotrimoxazole, but the outcomes for the treated versus the untreated patients were similar or unknown [5–9]. To the best of our knowledge, the present study is the first to evaluate such a large cohort of patients, 25% of them carriers of a unique mutation prevalent in Israel (W1282X), for such a long period of time (10 years). We found no statistically significant difference in PFTs before or after the Nocardia isolation. No difference in PFTs was found between patients with a single episode versus those with recurrent Nocardia growth, nor in those treated with antibiotics, some for prolonged periods of time, versus those who were not treated. There was no difference in mean FEV1 or FVC between symptomatic and asymptomatic patients during the episode, but a significantly lower mean FEF 25–75% was found among the symptomatic patients (35.7% vs. 73.3%, p=0.01). Regarding this finding, we hypothesized that more severe disease of the smaller airways, and perhaps greater parenchymal involvement, could result in subjective pulmonary symptomatology.
In this study we expanded the follow-up period and evaluated the changes in pulmonary functions throughout the entire period of 10 years, not only during the year of the episode of Nocardia isolation, recognizing that in some cases a bacterium could colonize the lungs without growing in our sputum cultures. Furthermore, we compared our study group patients to paired CF patients of similar status, matched by mutation class, age, and sex, but without Nocardia isolation, a comparison not previously carried out. In these patients we found a mean decline of FEV1 per year of −1.87%, similar to the average rate of decline known for CF patients. No difference in the rate of decline was found comparing the period prior to the positive Nocardia culture to the years following the isolation until the end of the study period. Surprisingly, the mean decline of FEV1 per year for our control group CF patients was −0.5%. Comparing the rates for both groups showed a trend to significance in FEV1 (p=0.08) and FVC (p=0.09), with no difference in FEF25–75% (p=0.2), meaning that the cystic fibrosis patients who had positive sputum cultures with Nocardia declined faster than matched controls without isolation of this microbe. It is possible that this difference was not statistically significant because of the small study group. Since it does not seem that Nocardia causes a deterioration of lung functions, we hypothesized that this microbe might grow more often in patients with more significant lung disease and worse prognosis. Perhaps other genetic or environmental factors affect the lungs of this group of patients or protect the lungs of the “healthier” patients and differentiate them from each other. These factors will need to be studied in larger groups of patients. In a previous retrospective study from Arizona [10], 17 patients with CF had a sputum culture positive for Nocardia spp. during the years 1997–2007 (of 123 CF patients – 14%), a percentage similar to that in our clinic.
Several patients had persistent positive Nocardia sputum cultures despite repeated antibiotic courses and did not appear to have different outcomes from the ones with a single isolated culture. As opposed to our center, in Arizona 78% of episodes were treated with antibiotics. Despite this, mean FEV1 and FVC values showed no significant linear trend before, during, and after an episode of positive Nocardia isolation in sputum, which led to the conclusion that treatment likely has no value in improving pulmonary function. Our study strengthens our colleagues' conclusions. Optimal treatment recommendations for pulmonary nocardiosis have not been firmly established due to variable in vitro antimicrobial susceptibility patterns. Sulfonamides have been the antimicrobials of choice since they were introduced 50 years ago [13], but resistance has recently been described in up to 15% of isolates [14]. Alternative antibiotics used orally are minocycline or doxycycline, while some strains of Nocardia are moderately sensitive to amoxicillin-clavulanate or to fluoroquinolones. Several intravenous antibiotics are also effective against Nocardia, such as carbapenems, ceftriaxone, cefotaxime, amikacin, and linezolid. When treating pulmonary nocardiosis, combination therapy should be considered and a duration of 6–12 months of therapy is recommended. However, as shown in the literature and highlighted in our study, in CF patients the need to treat, as well as the duration of treatment, is less well defined. Despite all this, it is believed that Nocardia eradication or a very long period of treatment might be needed in some patients with CF, especially if they are to undergo lung transplantation or are in need of prolonged, high doses of corticosteroids, such as for treatment of ABPA.
The reason for this is that corticosteroids, as well as the immunosuppressive regimen during the post-operative period, are well-established risk factors for invasive nocardiosis and disseminated disease, which lead to a high rate of mortality [15,16]. Indeed, a mediastinal mass and pericardial effusion with cardiac tamponade were diagnosed as an invasive Nocardial infection in an immunosuppressed renal transplant patient [17]. There are some limitations to our study. The major limitation is the small study sample, making firm conclusions difficult. However, it is the largest study group assessed so far, to the best of our knowledge. It seems that a multi-center study with more cases could provide more information and better statistics. The retrospective method of our study is another limitation. Accordingly, we significantly expanded the period of follow-up of the patients to the entire study period, enabling us to estimate yearly changes in pulmonary functions. Another limitation of our study was the lack of differentiation of Nocardia into species and of drug-sensitivity testing in our microbiologic laboratory, which could have separated the patients into sub-groups according to the virulence and resistance of the specific microbe. Conclusions: As shown in the literature, our study supports the fact that Nocardia isolation in sputum of CF patients is usually due to colonization without true infection, and its presence does not significantly affect the natural history or pulmonary deterioration of their lung disease. Furthermore, no difference was found between patients with different risk factors for infection, those who were symptomatic during the episodes of growth and those who were treated accordingly, compared to the others. Therefore, it seems the need for Nocardia treatment should be evaluated on an individual basis and in the context of the clinical picture.
Our study found that the rate of decline in lung functions per year in patients of similar status but without isolation of Nocardia showed a slower trend than average for CF patients. It seems possible that these patients have less colonization of different microbes in their sputum, but larger, multi-center studies are needed to confirm this.
Background: Cystic fibrosis (CF) patients are predisposed to infection and colonization with different microbes. Some cause deterioration of lung functions, while others are colonizers without clear pathogenic effects. Our aim was to understand the effects of Nocardia species in sputum cultures on the course of lung disease in CF patients. Methods: A retrospective study analyzing the impact of positive Nocardia spp. in sputum of 19 CF patients over a period of 10 years, comparing them with similar status patients without Nocardia growth. Pulmonary function tests (PFTs) are used as indicators of lung disease severity and decline rate in functions per year is calculated. Results: No significant difference in PFTs of CF patients with positive Nocardia in sputum was found in different sub-groups according to number of episodes of growth, background variables, or treatment plans. The yearly decline in PFTs was similar to that recognized in CF patients. The control group patients showed similar background data. However, a small difference was found in the rate of decline of their PFTs, which implies a possibly slower rate of progression of lung disease. Conclusions: The prognosis of lung disease in CF patients colonized with Nocardia does not seem to differ based on the persistence of growth on cultures, different treatment plans or risk factors. Apparently, Nocardia does not cause a deterioration of lung functions with time. However, it may show a trend to faster decline in PFTs compared to similar status CF patients without isolation of this microorganism in their sputum.
Background: Nocardia species are aerobic Gram-positive filamentous bacteria of the Actinomycetes order, which are natural inhabitants of soil and water throughout the world. Pulmonary nocardiosis, the most common clinical presentation of infection, is usually acquired by direct inhalation of contaminated soil. Symptoms may include cough, shortness of breath, chest pain, hemoptysis, fever, night sweats, weight loss, and fatigue. The chest radiograph may be variable with nodular and/or consolidation infiltrate as well as cavitary lesions and pleural effusions. Risk factors for acquisition of infection include depressed cell immunity, but up to one-third of patients are immunocompetent [1–3]. Furthermore, Nocardia may be identified in the respiratory tract without apparent infection. This colonization is encountered in patients with underlying structural lung disease, such as bronchiectasis and CF [4]. Published data on Nocardia in cystic fibrosis is sparse and includes mostly case reports [5–9]. Their conclusion was that isolation of Nocardia spp. from sputum usually represents colonization rather than infection. One larger retrospective study of 17 CF patients with a sputum culture positive for Nocardia spp. [10] showed that treatment of Nocardia-positive sputum in CF has no value in improving pulmonary function. No data has been published comparing the natural course of cystic fibrosis patients with positive Nocardia sputum cultures to matched CF patients without Nocardia. In our study we describe our patients with Nocardia spp. in sputum over a period of 10 years, comparing the severity of lung disease and the rate of pulmonary function deterioration during the study period between those patients with a single episode of growth to patients with recurrent isolations. We also compare these colonized patients to CF control patients of similar status but without Nocardia growth. 
Therefore, the aim of the study was to determine if colonization by the bacteria affects the natural history of CF disease. Conclusions: As shown in the literature, our study supports the fact that Nocardia isolation in sputum of CF patients is usually due to colonization without true infection, and its presence does not significantly affect the natural history or pulmonary deterioration of their lung disease. Furthermore, no difference was found between patients with different risk factors for infection, those who were symptomatic during the episodes of growth and those who were treated accordingly, compared to the others. Therefore, it seems the need for Nocardia treatment should be evaluated on an individual basis and in the context of the clinical picture. Our study found that the rate of decline in lung functions per year in patients of similar status but without isolation of Nocardia showed a slower trend than average for CF patients. It seems possible that these patients have less colonization of different microbes in their sputum, but larger, multi-center studies are needed to confirm this.
Background: Cystic fibrosis (CF) patients are predisposed to infection and colonization with different microbes. Some cause deterioration of lung functions, while others are colonizers without clear pathogenic effects. Our aim was to understand the effects of Nocardia species in sputum cultures on the course of lung disease in CF patients. Methods: A retrospective study analyzing the impact of positive Nocardia spp. in sputum of 19 CF patients over a period of 10 years, comparing them with similar status patients without Nocardia growth. Pulmonary function tests (PFTs) are used as indicators of lung disease severity and decline rate in functions per year is calculated. Results: No significant difference in PFTs of CF patients with positive Nocardia in sputum was found in different sub-groups according to number of episodes of growth, background variables, or treatment plans. The yearly decline in PFTs was similar to that recognized in CF patients. The control group patients showed similar background data. However, a small difference was found in the rate of decline of their PFTs, which implies a possibly slower rate of progression of lung disease. Conclusions: The prognosis of lung disease in CF patients colonized with Nocardia does not seem to differ based on the persistence of growth on cultures, different treatment plans or risk factors. Apparently, Nocardia does not cause a deterioration of lung functions with time. However, it may show a trend to faster decline in PFTs compared to similar status CF patients without isolation of this microorganism in their sputum.
7,292
287
11
[ "patients", "nocardia", "study", "found", "episode", "period", "sputum", "treatment", "pfts", "pulmonary" ]
[ "test", "test" ]
[CONTENT] Cystic Fibrosis | Lung Diseases | Nocardia | Respiratory Function Tests [SUMMARY]
[CONTENT] Cystic Fibrosis | Lung Diseases | Nocardia | Respiratory Function Tests [SUMMARY]
[CONTENT] Cystic Fibrosis | Lung Diseases | Nocardia | Respiratory Function Tests [SUMMARY]
[CONTENT] Cystic Fibrosis | Lung Diseases | Nocardia | Respiratory Function Tests [SUMMARY]
[CONTENT] Cystic Fibrosis | Lung Diseases | Nocardia | Respiratory Function Tests [SUMMARY]
[CONTENT] Cystic Fibrosis | Lung Diseases | Nocardia | Respiratory Function Tests [SUMMARY]
[CONTENT] Adolescent | Adult | Case-Control Studies | Child | Cystic Fibrosis | Female | Humans | Longitudinal Studies | Male | Middle Aged | Nocardia | Respiratory Function Tests | Risk Factors | Sputum | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Case-Control Studies | Child | Cystic Fibrosis | Female | Humans | Longitudinal Studies | Male | Middle Aged | Nocardia | Respiratory Function Tests | Risk Factors | Sputum | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Case-Control Studies | Child | Cystic Fibrosis | Female | Humans | Longitudinal Studies | Male | Middle Aged | Nocardia | Respiratory Function Tests | Risk Factors | Sputum | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Case-Control Studies | Child | Cystic Fibrosis | Female | Humans | Longitudinal Studies | Male | Middle Aged | Nocardia | Respiratory Function Tests | Risk Factors | Sputum | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Case-Control Studies | Child | Cystic Fibrosis | Female | Humans | Longitudinal Studies | Male | Middle Aged | Nocardia | Respiratory Function Tests | Risk Factors | Sputum | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Case-Control Studies | Child | Cystic Fibrosis | Female | Humans | Longitudinal Studies | Male | Middle Aged | Nocardia | Respiratory Function Tests | Risk Factors | Sputum | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] patients | nocardia | study | found | episode | period | sputum | treatment | pfts | pulmonary [SUMMARY]
[CONTENT] patients | nocardia | study | found | episode | period | sputum | treatment | pfts | pulmonary [SUMMARY]
[CONTENT] patients | nocardia | study | found | episode | period | sputum | treatment | pfts | pulmonary [SUMMARY]
[CONTENT] patients | nocardia | study | found | episode | period | sputum | treatment | pfts | pulmonary [SUMMARY]
[CONTENT] patients | nocardia | study | found | episode | period | sputum | treatment | pfts | pulmonary [SUMMARY]
[CONTENT] patients | nocardia | study | found | episode | period | sputum | treatment | pfts | pulmonary [SUMMARY]
[CONTENT] nocardia | patients | cf | infection | natural | sputum | soil | patients nocardia | include | colonization [SUMMARY]
[CONTENT] group | control | control group | patients | background | ct | diseases | table | group control | related diseases [SUMMARY]
[CONTENT] patients | found | vs | 75 | difference | study | weeks | fvc | 25 | mean [SUMMARY]
[CONTENT] patients | infection | lung | colonization | different | nocardia | cf patients | cf | nocardia treatment evaluated | nocardia treatment [SUMMARY]
[CONTENT] patients | nocardia | study | sputum | found | episode | pfts | test | period | variables [SUMMARY]
[CONTENT] patients | nocardia | study | sputum | found | episode | pfts | test | period | variables [SUMMARY]
[CONTENT] Cystic ||| ||| Nocardia | CF [SUMMARY]
[CONTENT] Nocardia ||| 19 | 10 years | Nocardia ||| [SUMMARY]
[CONTENT] CF | Nocardia ||| yearly | CF ||| ||| [SUMMARY]
[CONTENT] CF | Nocardia ||| Nocardia ||| [SUMMARY]
[CONTENT] ||| ||| Nocardia | CF ||| Nocardia ||| 19 | 10 years | Nocardia ||| ||| ||| CF | Nocardia ||| yearly | CF ||| ||| ||| CF | Nocardia ||| Nocardia ||| [SUMMARY]
[CONTENT] ||| ||| Nocardia | CF ||| Nocardia ||| 19 | 10 years | Nocardia ||| ||| ||| CF | Nocardia ||| yearly | CF ||| ||| ||| CF | Nocardia ||| Nocardia ||| [SUMMARY]
Physician self-reported treatment of brain metastases according to patients' clinical and demographic factors and physician practice setting.
23136987
Limited data guide radiotherapy choices for patients with brain metastases. This survey aimed to identify patient, physician, and practice setting variables associated with reported preferences for different treatment techniques.
BACKGROUND
277 members of the American Society for Radiation Oncology (6% of surveyed physicians) completed a survey regarding treatment preferences for 21 hypothetical patients with brain metastases. Treatment choices included combinations of whole brain radiation therapy (WBRT), stereotactic radiosurgery (SRS), and surgery. Vignettes varied histology, extracranial disease status, Karnofsky Performance Status (KPS), presence of neurologic deficits, lesion size and number. Multivariate generalized estimating equation regression models were used to estimate odds ratios.
METHOD
For a hypothetical patient with 3 lesions or 8 lesions, 21% and 91% of physicians, respectively, chose WBRT alone, compared with 1% selecting WBRT alone for a patient with 1 lesion. 51% chose WBRT alone for a patient with active extracranial disease or KPS=50%. 40% chose SRS alone for an 80-year-old patient with 1 lesion, compared to 29% for a 55-year-old patient. Multivariate modeling detailed factors associated with SRS use, including availability of SRS within one's practice (OR 2.22, 95% CI 1.46–3.37).
RESULTS
Poor prognostic factors, such as advanced age, poor performance status, or active extracranial disease, correspond with an increase in physicians' reported preference for using WBRT. When controlling for clinical factors, equipment access was independently associated with choice of SRS. The large variability in preferences suggests that more information about the relative harms and benefits of these options is needed to guide decision-making.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Brain Neoplasms", "Carcinoma, Non-Small-Cell Lung", "Choice Behavior", "Combined Modality Therapy", "Cranial Irradiation", "Data Collection", "Demography", "Female", "Humans", "Karnofsky Performance Status", "Lung Neoplasms", "Male", "Melanoma", "Middle Aged", "Neurosurgery", "Patient Selection", "Physicians", "Professional Practice", "Professional Practice Location", "Radiosurgery", "Self Report", "Socioeconomic Factors" ]
3533820
Background
Brain metastases are the most common intracranial tumor, occurring in 20-40% of cancer patients and accounting for 20% of cancer deaths annually [1]. Median survival is 1–2 months with corticosteroids alone [2] or six months with whole brain radiation therapy (WBRT) [3,4]. A major advance in the treatment of these patients was the addition of surgery to WBRT for treatment of a single metastasis, which improved local control, distant intracranial control and neurologic survival compared to either modality alone [5,6]. A retrospective study demonstrated differential survival among patients undergoing WBRT according to recursive partitioning analysis (RPA) classes [7]; further prognostic refinements have incorporated histology and number of lesions [8]. More recently, stereotactic radiosurgery (SRS) has been used alone or with WBRT in patients with up to 4 metastases. When compared with WBRT alone, the addition of SRS has improved local control, functional autonomy and survival [5,9-11]. However, WBRT can have significant toxicities, including fatigue, drowsiness and suppressed appetite, and long-term difficulties with learning, memory, concentration, and depression [12-14]. The use of SRS alone controls limited disease and delays the time until WBRT is necessary for distant intracranial progression [12,15,16]. In most clinical trials of therapies for brain metastases, patients have been selected on the basis of having few metastases, stable extracranial disease, and excellent performance status. In clinical practice, patients with brain metastases are a heterogeneous population, and decision-making requires the synthesis of multiple variables. The objective of this survey of radiation oncologists was to identify patient factors, physician characteristics, and practice setting variables associated with physicians’ preferred use of different techniques for treating brain metastases.
This survey aimed to generate data that would allow physicians to: (1) compare their practice patterns to a national sample; (2) assess the influence of their practice environment on treatment choice; and (3) generate new hypotheses regarding appropriate treatment.
Methods
This project was approved by the IRB of Harvard Medical School. The survey was launched online, and physician members of the American Society for Therapeutic Radiology and Oncology (ASTRO) were emailed a recruitment letter. Eligibility criteria included respondent status as a U.S. or Canadian physician in the ASTRO database, a valid email address, and current management of patients with brain metastases, as reflected by the screener question. Respondents linked directly to the survey from the email, and there was no incentive for survey participation.
Data collection
Data were de-identified and collected through the online survey tool for one month. We emailed surveys to 4357 physician members of ASTRO on September 26, 2008, and the survey was closed on October 26, 2008. 417 respondents answered at least one question, and 277 answered all demographic and clinical questions, for a response rate of 6%. Despite our low response rate, physician respondents were representative of practicing radiation oncologists when compared to respondents to the American College of Radiology’s (ACR) Survey of Radiation Oncologists. Our sample was similar to the ACR survey on selected characteristics such as sex (73% male in our survey, 77% in ACR), age (62% ages 35–54 in our survey, 65% in ACR) and private practice status (52% in our survey, 48% in ACR) [17]. However, it was not possible to assess interest in SRS or palliative care, or use of advanced technology, among those included in the ACR sample, which limits the comparison. The survey was designed to: (1) describe radiation oncologists’ patterns of treatment of patients with brain metastases; and (2) identify clinical, demographic, and practice setting factors associated with treatment patterns. To test physician practices, a series of short hypothetical clinical vignettes was developed to assess respondents’ preferred treatment modalities.
Vignettes have been demonstrated to be a valid study tool when compared with actual clinical practice patterns [18]. Treatment options for each vignette were identical: WBRT alone; WBRT with SRS; SRS alone; WBRT with surgery; or no treatment. We constructed 3 versions of a reference vignette: the first with 1 metastasis, the next with 3 metastases, and one with 8 metastases. Each reference vignette described a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, Karnofsky Performance Status (KPS) 80%, and asymptomatic, small brain lesion(s). For each of these 3 vignettes, we asked about 6 additional patients, modifying a single variable: melanoma histology, active extracranial disease, KPS 50%, presence of neurologic deficit, age of 80 years, and large lesion (Figure 1: Variations under assumptions of 1, 3, and 8 metastases in each Reference Patient). The survey sequentially varied the characteristics of each Reference Patient to create vignettes for Patients 1–6. The effect of each variation was evaluated under assumptions of 1, 3, and 8 lesions, respective to each vignette’s Reference Patient. Other survey items assessed factors related to the patient, physician, or practice setting. These questions included physician demographics, practice environment, availability of SRS, and opinions about the nature of intracranial disease and the toxicity of its treatment. A copy of our survey is included as supplementary material (Additional file 1: Appendix 1). Data regarding non-respondents were not collected.
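The factorial structure described above (3 reference vignettes, each paired with 6 single-variable modifications, for 21 vignettes in total) can be sketched programmatically. This is an illustrative reconstruction; the field names below are invented for the sketch and are not taken from the survey instrument:

```python
from itertools import product

# Reference patient: 55 year-old, NSCLC, inactive extracranial disease,
# KPS 80%, asymptomatic small lesion(s)
reference = {
    "age": 55, "histology": "NSCLC", "extracranial": "inactive",
    "kps": 80, "deficit": False, "size": "small",
}

# The reference itself plus six single-variable modifications
variations = [
    {},                           # unmodified reference patient
    {"histology": "melanoma"},    # radioresistant histology
    {"extracranial": "active"},   # active extracranial disease
    {"kps": 50},                  # poor performance status
    {"deficit": True},            # focal neurologic deficit
    {"age": 80},                  # advanced age
    {"size": "large"},            # large lesion
]

# Each variation is evaluated under 1, 3, and 8 lesions: 3 x 7 = 21 vignettes
vignettes = [
    {**reference, **change, "n_lesions": n}
    for n, change in product([1, 3, 8], variations)
]
```

Each respondent answered all 21 vignettes, which is why the analysis below treats responses as repeated measurements clustered within physician.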
Statistical analysis
Effects of patient clinical characteristics on treatment choices
For the four-category treatment choice responses (WBRT alone, WBRT with SRS, SRS alone, or surgery with WBRT), we used a series of multivariate binomial generalized estimating equation (GEE) models to estimate odds ratios that measured the effects of each change in patients’ clinical characteristics on the odds of each of the 4 treatment choices relative to the odds of the remaining 3 alternatives. Since each vignette represented a repeat measurement on a physician, we considered treatment choices as correlated observations clustered within individual physicians. We used an exchangeable correlation structure to account for the correlation of physician responses between vignettes. Graphical techniques were used to assess model adequacy. We chose a series of binomial models to model a multi-category response because of the lack of available statistical software to implement multi-category GEE models with an exchangeable correlation structure [19].
Effects of patient and physician characteristics on odds of including SRS
We grouped treatment responses that included SRS (SRS alone or WBRT with SRS) and compared them with the 3 remaining alternatives as a combined reference group (WBRT, WBRT with surgery, or no treatment) in a binomial GEE model that included patient clinical, physician, and practice setting characteristics as covariates. These groupings were created to allow for exploration of factors contributing to integrating advanced technology (SRS) into the treatment plan, despite the fact that each treatment approach may have different clinical indications, as explored through the above-detailed models. Working correlations and clustering were treated as in the previous models. All parameter estimates were tested for statistical significance at the 0.05 level. SAS® software version 9.2 was used in all analyses.
Results
Physician demographics and practice environment
The characteristics of our survey respondents are shown in Table 1. Sixty percent of respondents were in single-specialty group practices. Most practices were hospital-based, academic (38%) or private (30%). Seventy-six percent of respondents treated 10–50 patients with brain metastases per year. Forty-four percent of respondents performed SRS, while 35% had a colleague at their institution who performed SRS. Sixty-one percent of respondents had LINAC-based SRS, and 18% had no SRS equipment. Table 1: Distribution of Physician Characteristics (N=277). 1 Respondents were permitted to select more than one modality. 2 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases. * Stereotactic radiosurgery. Physicians’ responses to the 21 vignettes varied substantially (Table 2). Multivariable modeling revealed clinical factors influencing treatment selection (Tables 3, 4, 5; complete results in Additional file 2: Appendix 2). Table 2: Unadjusted Response (in %) Among Radiation Oncologists (N=277). 1 Abbreviations as follows: Whole Brain Radiation Therapy (WBRT); Whole Brain Radiation Therapy with Stereotactic Radiosurgery (WBRT+SRS); Stereotactic Radiosurgery (SRS); Surgery with Whole Brain Radiation Therapy (SURG+WBRT). 2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion. 3 Patient characteristics were varied sequentially with each patient differing by a single characteristic from the reference patient as shown in Figure 1. * Whole brain radiation therapy. † Stereotactic radiosurgery.
Table 3: Odds Ratios for Choice of WBRT* Alone versus SRS† Alone. Table 4: Odds Ratios for Choice of WBRT Alone versus WBRT with SRS. Table 5: Odds Ratios for Choice of WBRT with SRS versus SRS Alone. Notes: Odds ratios (OR) are quoted with their 95% confidence intervals in parentheses. "*" denotes odds ratios significant at the 0.05 level. The odds ratios compare the odds of choosing each given treatment with the odds of choosing the treatments serving as reference categories. * Whole brain radiation therapy. † Stereotactic radiosurgery. †† Confidence intervals. § Karnofsky Performance Status. ¶ Non-Small Cell Lung Cancer.
Whole brain radiation therapy alone
WBRT alone was selected frequently, particularly for patients with 8 metastases. For the 80 year-old patient with 3 or 8 metastases, WBRT was commonly preferred (52% and 96%, vs. 21% and 91%, respectively, for the 55 year-old patient; Table 2). Even for a patient with a single metastasis, 56% of respondents preferred WBRT alone if that patient had KPS 50%; 51% would choose WBRT if the patient had active extracranial disease. In adjusted analyses, all of the clinical variables except radioresistant (melanoma) histology, namely KPS 50%, active extracranial disease, age of 80 years, presence of focal neurologic deficits, and large lesion, were associated with a higher likelihood of respondents preferring WBRT alone versus either SRS alone (Table 3) or WBRT with SRS (Table 4).
Addition of surgery
For the reference patient with a single metastasis, 44% of respondents selected surgery with WBRT, although most respondents selected a non-operative approach that included SRS (26% WBRT with SRS; 29% SRS alone, for a total of 55% of respondents). When the reference vignette was revised to include the presence of focal neurologic deficits, the distribution of responses was similar for those with 1 lesion, with 48% of respondents preferring surgery with WBRT. When considering patients with a single, large lesion, the percentage of respondents choosing surgery with WBRT increased from 44% to 63%. After adjusting for all other clinical factors, respondents were more likely to choose surgery with WBRT rather than WBRT alone for patients with large versus smaller lesions (OR=1.9, 95% CI 1.3-2.8). For 3 or 8 lesions, age 80, active extracranial disease, and KPS 50%, respondents were more likely to choose WBRT alone than surgery with WBRT (Additional file 2: Appendix 2). Melanoma histology and presence of neurologic deficits did not correlate with respondents’ selections.
Addition of stereotactic radiosurgery
SRS was commonly preferred by respondents for patients with 3 lesions (23% SRS alone; 54% SRS with WBRT, Table 2), and it largely replaced the use of surgery for the older patient with a single lesion (25% WBRT with SRS; 40% SRS alone). Presence of neurological deficits and large lesion size were associated with physicians’ preference for WBRT with SRS over SRS alone (Table 5). However, older age, poorer performance status, and melanoma histology were associated with less frequent selection of WBRT with SRS versus SRS alone.
Use of stereotactic radiosurgery
Multivariable analysis was performed to identify which factors were independently associated with including SRS as part of treatment (SRS alone or WBRT with SRS) compared to all other treatment choices (WBRT, WBRT with surgery, no treatment), adjusting for all other characteristics in Table 6. Number of metastases was strongly associated with treatment preferences: after adjustment for all other factors in the model, respondents were significantly more likely to favor SRS for 3 lesions than for 1 (OR=2.22, 95% CI 1.96-2.51), and physicians were 5 times less likely to choose an approach that included SRS for a patient with 8 lesions relative to a patient with 1 lesion (OR=0.19, 95% CI 0.15-0.23). Table 6: Results of logistic regression model showing the reported use of SRS* as part of treatment for brain metastases according to multiple clinical, sociodemographic, and practice setting factors. 1 Including SRS was defined as either use of SRS alone or with Whole Brain Radiation Therapy (WBRT). 2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion. 3 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases. * Stereotactic radiosurgery. † Whole brain radiation therapy. Across all clinical vignettes, after adjusting for all other factors, poor KPS (OR=0.38, 95% CI 0.31-0.46), active extracranial disease (OR=0.56, 95% CI 0.47-0.65), and large lesion (OR=0.58, 95% CI 0.47-0.71) remained strongly negatively associated with the choice of SRS, while melanoma histology (OR=2.84, 95% CI 2.45-3.29) and advanced age (OR=1.23, 95% CI 1.07-1.41) were positively associated with the choice of SRS. Physician access was the strongest factor associated with choosing SRS as part of treatment.
Respondents with SRS capability in their own practice were more likely to favor its use for hypothetical patients than those without it (OR=2.22, 95% CI 1.46-3.37). As expected, those physicians who personally used SRS were more likely to recommend it than those who did not have it or use it personally in their practice (OR=3.57, 95% CI 2.42-5.26). Patient volume and physician seniority were examined, but were not associated with SRS use.
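Odds ratios and confidence intervals of the kind reported throughout this section are obtained by exponentiating a model coefficient and its Wald interval. As a purely illustrative check, the coefficient and standard error below are back-calculated so the result matches the reported SRS-access OR of 2.22 (95% CI 1.46-3.37); they are not study outputs:

```python
import numpy as np

# Hypothetical log-odds coefficient and standard error, chosen so the
# exponentiated values reproduce OR 2.22 with 95% CI 1.46-3.37
beta, se = 0.798, 0.213

or_point = np.exp(beta)             # point estimate of the odds ratio
ci_low = np.exp(beta - 1.96 * se)   # lower 95% Wald limit
ci_high = np.exp(beta + 1.96 * se)  # upper 95% Wald limit
```

Because the interval is symmetric on the log-odds scale, it is asymmetric around the odds ratio itself, which is why published CIs like 1.46-3.37 are not centered on 2.22.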
Conclusions
Although many patients with cancer develop brain metastases, there are few data to guide treatment decisions. Our study demonstrates significant heterogeneity among radiation oncologists in general clinical practice, even for patients with identical clinical characteristics. Certain non-clinical factors, such as access to SRS, appear to be key drivers of the use of advanced technology. This finding raises the question of what additional incentives could be driving treatment selection in the absence of gold-standard evidence of the superiority of a single approach over other alternatives. Our findings from this survey also underscore the likely uncertainty or disagreement that may exist among radiation oncologists about the relative harms and benefits of different treatment approaches. This uncertainty is likely related to the lack of prospective randomized studies that compare specific single- and multi-modality approaches for the treatment of brain metastases. More research is needed that directly compares the effectiveness of these approaches across a variety of clinical circumstances. It would also be important to investigate underlying non-clinical factors, such as physician environment, reimbursement, and technology access, which likely contribute to the observed heterogeneity of care for patients with brain metastases.
[ "Background", "Data collection", "Statistical analysis", "\nEffects of patient clinical characteristics on treatment choices\n", "Effects of patient &physician characteristics on odds of including SRS", "Physician demographics and practice environment", "Whole brain radiation therapy alone", "Addition of surgery", "Addition of stereotactic radiosurgery", "Use of stereotactic radiosurgery", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Brain metastases are the most common intracranial tumor, occurring in 20-40% of cancer patients and accounting for 20% of cancer deaths annually [1]. Median survival is 1–2 months with corticosteroids alone [2] or six months with whole brain radiation therapy (WBRT) [3,4].\nA major advance in the treatment of these patients was addition of surgery to WBRT for treatment of a single metastasis, which improved local control, distant intracranial control and neurologic survival compared to either modality alone [5,6]. A retrospective study demonstrated differential survival among patients undergoing WBRT according to recursive partitioning analysis (RPA) classes [7]; further prognostic refinements have incorporated histology and number of lesions [8].\nMore recently, stereotactic radiosurgery (SRS) has been used alone or with WBRT in patients with up to 4 metastases. When compared with WBRT alone, the addition of SRS has improved local control, functional autonomy and survival [5,9-11]. However, WBRT can have significant toxicities, including fatigue, drowsiness and suppressed appetite, and long-term difficulties with learning, memory, concentration, and depression [12-14]. The use of SRS alone controls limited disease and delays the time until WBRT is necessary for distant intracranial progression [12,15,16].\nIn most clinical trials of therapies for brain metastases, patients have been selected on the basis of having few metastases, stable extracranial disease, and excellent performance status. In clinical practice, patients with brain metastases are a heterogeneous population, and decision-making requires the synthesis of multiple variables.\nThe objective of this survey of radiation oncologists was to identify patient factors, physician characteristics, and practice setting variables associated with physicians’ preferred use of different techniques for treating brain metastases. 
This survey aimed to generate data that would allow physicians to: (1) compare their practice patterns to a national sample; (2) assess the influence of their practice environment on treatment choice; and (3) generate new hypotheses regarding appropriate treatment.", "Data was de-identified and collected through the online survey tool for one month. We emailed surveys to 4357 physician members of ASTRO on September 26, 2008, and the survey was closed on October 26, 2008. 417 respondents answered at least one question, and 277 answered all demographic and clinical questions, for a response rate of 6%. Despite our low response rate, physician respondents were representative of practicing radiation oncologists when compared to respondents to the American College of Radiology’s (ACR) Survey of Radiation Oncologists. Our sample was similar to the ACR survey on selected characteristics such as sex (73% male in our survey, 77% in ACR), age (62% ages 35–54 in our survey, 65% in ACR) and being in private practice (52% in our survey, 48% in ACR) [17]. However, it was not possible to assess interest in SRS or palliative care, or use of advanced technology, among those included in the ACR sample, which limits the comparison.\nThe survey was designed to: (1) describe radiation oncologists’ patterns of treatment of patients with brain metastases; and (2) identify clinical, demographic, and practice setting factors associated with treatment patterns. To test physician practices, a series of short hypothetical clinical vignettes were developed to assess respondents’ preferred treatment modalities. Vignettes have been demonstrated to be a valid study tool when compared with actual clinical practice patterns [18]. Treatment options for each vignette were identical: WBRT alone; WBRT with SRS; SRS alone; WBRT with surgery; or no treatment. We constructed 3 versions of a reference vignette: the first with 1 metastasis, the next with 3 metastases, and one with 8 metastases. 
Each reference vignette described a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, Karnofsky Performance Status (KPS) 80%, and asymptomatic, small brain lesion(s). For each of these 3 vignettes, we asked about 6 additional patients, modifying a single variable: melanoma histology, active extracranial disease, KPS 50%, presence of neurologic deficit, age of 80 years old, and large lesion (Figure 1).\nVariations under assumptions of 1, 3, 8 metastases in each Reference Patient. The survey sequentially varied the characteristics of each Reference Patient to create vignettes for Patients 1–6. The effect of each variation was evaluated under assumptions of 1, 3, and 8 lesions, respective to each vignette’s Reference Patient.\nOther survey items assessed factors related to the patient, physician, or practice setting. These questions included physician demographics, practice environment, availability of SRS, and opinions about the nature of intracranial disease and the toxicity of its treatment. A copy of our survey is included as supplementary material (Additional file 1: Appendix 1). Data regarding non-respondents were not collected.", " \nEffects of patient clinical characteristics on treatment choices\n For the four category treatment choice responses (WBRT alone, WBRT with SRS, SRS alone, or surgery with WBRT), we used a series of multivariate binomial generalized estimating equation (GEE) models to estimate odds ratios that measured the effects of each change in patients’ clinical characteristics on the odds of each of 4 treatments choices relative to the odds of the remaining 3 alternatives. Since each vignette represented a repeat measurement on a physician, we considered treatment choices as correlated observations clustered within individual physicians. We used an exchangeable correlation structure to account for the correlation of physician responses between vignettes. Graphical techniques were used to assess model adequacy. 
We chose to use a series of binomial models to model a multi-category response because of the lack of available statistical software to implement multi-category GEE models with exchangeable correlation structure [19].\n Effects of patient &physician characteristics on odds of including SRS We grouped treatment responses that included SRS (SRS or WBRT with SRS) and compared them with the 3 remaining alternatives as a combined reference group (WBRT, WBRT with surgery, or no treatment) in a binomial GEE model that included patient clinical, physician and practice setting characteristics as covariates. These groupings were created to allow for exploration of factors contributing to integrating advanced technology (SRS) into the treatment plan, despite the fact that each treatment approach may have different clinical indications, as explored through the above-detailed models. Working correlations and clustering were treated as in the previous models.\nAll parameter estimates were tested for statistical significance at the 0.05 level. SAS® software version 9.2 was used in all analyses.", "For the four category treatment choice responses (WBRT alone, WBRT with SRS, SRS alone, or surgery with WBRT), we used a series of multivariate binomial generalized estimating equation (GEE) models to estimate odds ratios that measured the effects of each change in patients’ clinical characteristics on the odds of each of 4 treatments choices relative to the odds of the remaining 3 alternatives. Since each vignette represented a repeat measurement on a physician, we considered treatment choices as correlated observations clustered within individual physicians. We used an exchangeable correlation structure to account for the correlation of physician responses between vignettes. Graphical techniques were used to assess model adequacy. 
We chose to use a series of binomial models to model a multi-category response because of the lack of available statistical software to implement multi-category GEE models with exchangeable correlation structure [19].", "We grouped treatment responses that included SRS (SRS or WBRT with SRS) and compared them with the 3 remaining alternatives as a combined reference group (WBRT, WBRT with surgery, or no treatment) in a binomial GEE model that included patient clinical, physician and practice setting characteristics as covariates. These groupings were created to allow for exploration of factors contributing to integrating advanced technology (SRS) into the treatment plan, despite the fact that each treatment approach may have different clinical indications, as explored through the above-detailed models. Working correlations and clustering were treated as in the previous models.\nAll parameter estimates were tested for statistical significance at the 0.05 level. SAS® software version 9.2 was used in all analyses.", "The characteristics of our survey respondents are shown in Table 1. Sixty percent of respondents were in single-specialty group practices. Most practices were hospital-based, academic (38%) or private (30%). Seventy-six percent of respondents treated 10–50 patients with brain metastases per year. Forty-four percent of respondents performed SRS, while 35% had a colleague at their institution who performed SRS. Sixty-one percent of respondents had LINAC-based SRS, and 18% had no SRS equipment.\nDistribution of Physician Characteristics (N=277)\n1 Respondents were permitted to select more than one modality.\n2 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases.\n* Stereotactic radiosurgery.\nPhysicians’ responses to the 21 vignettes varied substantially (Table 2). 
Multivariable modeling revealed clinical factors influencing treatment selection (Tables 3, 4, 5; complete results in Additional file 2: Appendix 2).\nUnadjusted Response (in %) Among Radiation Oncologist (N=277)\n1 Abbreviations as follows: Whole Brain Radiation Therapy (WBRT); Whole Brain Radiation Therapy with Stereotactic Radiosurgery (WBRT+SRS); Stereotactic Radiosurgery (SRS); Surgery with Whole Brain Radiation Therapy (SURG+WBRT).\n2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion.\n3 Patient characteristics were varied sequentially with each patient differing by a single characteristic from the reference patient as shown in Figure 1.\n* Whole brain radiation therapy.\n† Stereotactic radiosurgery.\nOdds Ratios for Choice of WBRT* alone versus SRS† Alone\nOdds Ratios for Choice of WBRT alone versus WBRT with SRS\nOdds Ratios for Choice of WBRT with SRS versus SRS alone\nNotes\nOdds ratios (OR) are quoted with their 95% confidence intervals in parentheses. \"*\" Denotes significant odds ratios at the 0.05 level.\nThe odds ratios compare odds of choosing each given treatment, with the odds of choosing the treatments serving as reference categories.\n* Whole brain radiation therapy.\n† Stereotactic radiosurgery.\n†† Confidence intervals.\n§ Karnofsky Performance Status.\n¶ Non-Small Cell Lung Cancer.", "WBRT alone was selected frequently, particularly for patients with 8 metastases. For the 80 year-old patient with 3 or 8 metastases, WBRT was commonly preferred (52% and 96% vs. 21% and 91%, respectively, for the 55-year old patient, Table 2). Even for a patient with a single metastasis, 56% of respondents preferred WBRT alone if that patient had KPS 50%; 51% would choose WBRT if the patient had active extracranial disease. 
In adjusted analyses, all of the clinical variables (melanoma histology, KPS 50%, active extracranial disease, age of 80 years old, presence of focal neurologic deficits, and large lesion) were associated with a higher likelihood of respondents preferring WBRT alone versus either SRS alone (Table 3) or WBRT with SRS (Table 4), except for radioresistant histology.", "For the reference patient with a single metastasis, 44% of respondents selected surgery with WBRT, although most respondents selected a non-operative approach that included SRS (26% WBRT with SRS; 29% SRS alone, for a total of 55% of respondents). When the reference vignette was revised to include the presence of focal neurologic deficits, the distribution of responses was similar for those with 1 lesion, with 48% of respondents preferring surgery with WBRT. When considering patients with a single, large lesion, the percent of respondents choosing surgery with WBRT increased from 44% to 63%. After adjusting for all other clinical factors, respondents were more likely to choose surgery with WBRT rather than WBRT alone for patients with large versus smaller lesions (OR=1.9, 95% CI 1.3-2.8). For 3 or 8 lesions, age 80, active extracranial disease, and KPS 50%, respondents were more likely to choose WBRT alone than surgery with WBRT (Additional file 2: Appendix 2). Melanoma histology and presence of neurologic deficits did not correlate with respondents’ selections.", "SRS was commonly preferred by respondents for patients with 3 lesions (23% SRS alone; 54% SRS with WBRT, Table 2), and it largely replaced the use of surgery for the older patient with a single lesion (25% WBRT with SRS; 40% chose SRS alone). Presence of neurological deficits and large lesion size were associated with physicians’ preference for WBRT with SRS over SRS alone (Table 5). 
However, older age, poorer performance status and melanoma histology were associated with less frequent selection of WBRT with SRS versus SRS alone.", "Multivariable analysis was performed to identify which factors were independently associated with including SRS as part of treatment (SRS or WBRT with SRS) compared to all other treatment choices (WBRT, WBRT with surgery, no treatment), adjusting for all other characteristics in Table 6. Number of metastases was strongly associated with treatment preferences: after adjustment for all other factors in the model, respondents were significantly more likely to favor SRS for 3 lesions than for 1 (OR=2.22, 95% CI 1.96-2.51), and physicians were 5 times less likely to choose an approach that included SRS for a patient with 8 lesions relative to patients with 1 lesion (OR=0.19, 95% CI 0.15-0.23).\nResults of logistic regression model showing the reported use of SRS* as part of treatment for brain metastases according to multiple clinical, sociodemographic, and practice setting factors2\n1 Including SRS was defined as either use of SRS alone or with Whole Brain Radiation Therapy (WBRT).\n2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion.\n3 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases.\n* Stereotactic radiosurgery.\n† Whole brain radiation therapy.\nAcross all clinical vignettes, after adjusting for all other factors, poor KPS (OR=0.38, 95% CI 0.31-0.46), active extracranial disease (OR=0.56, 95% CI 0.47-0.65), and large lesion (OR=0.58, 95% CI 0.47-0.71) remained strongly negatively associated with the choice of SRS, while melanoma histology (OR=2.84, 95% CI 2.45-3.29) and advanced age (OR=1.23, 95% CI 1.07-1.41) were positively associated with choice of SRS. 
Physician access was the strongest factor associated with choosing SRS as part of treatment. Respondents with SRS capability in their own practice were more likely to favor its use for hypothetical patients than those without it (OR=2.22, 95% CI 1.46-3.37). As expected, those physicians who personally used SRS were more likely to recommend it than those who did not have it or use it personally in their practice (OR=3.57, 95% CI 2.42-5.26). Patient volume and physician seniority were examined, but were not associated with SRS use.", "WBRT: Whole brain radiation therapy; SRS: Stereotactic radiosurgery; KPS: Karnofsky performance status; RPA: Recursive partitioning analysis; ASTRO: American society for therapeutic radiation oncology; ACR: American college of radiology; GEE: Generalized estimating equation; NCCN: National comprehensive cancer network.", "Dr. Ramakrishna has received speaker’s honoraria from and prepared educational materials for Brainlab Ag, Heimstetten, Germany. The remaining authors have no conflicts of interest to disclose.", "NR and MK conceived of the study, designed the survey, and completed data collection. MK, KU, SM, and AP performed statistical analysis and data interpretation. MK, SM, and AP drafted the manuscript. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data collection", "Statistical analysis", "Effects of patient clinical characteristics on treatment choices", "Effects of patient & physician characteristics on odds of including SRS", "Results", "Physician demographics and practice environment", "Whole brain radiation therapy alone", "Addition of surgery", "Addition of stereotactic radiosurgery", "Use of stereotactic radiosurgery", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Supplementary Material" ]
[ "Brain metastases are the most common intracranial tumor, occurring in 20-40% of cancer patients and accounting for 20% of cancer deaths annually [1]. Median survival is 1–2 months with corticosteroids alone [2] or six months with whole brain radiation therapy (WBRT) [3,4].\nA major advance in the treatment of these patients was addition of surgery to WBRT for treatment of a single metastasis, which improved local control, distant intracranial control and neurologic survival compared to either modality alone [5,6]. A retrospective study demonstrated differential survival among patients undergoing WBRT according to recursive partitioning analysis (RPA) classes [7]; further prognostic refinements have incorporated histology and number of lesions [8].\nMore recently, stereotactic radiosurgery (SRS) has been used alone or with WBRT in patients with up to 4 metastases. When compared with WBRT alone, the addition of SRS has improved local control, functional autonomy and survival [5,9-11]. However, WBRT can have significant toxicities, including fatigue, drowsiness and suppressed appetite, and long-term difficulties with learning, memory, concentration, and depression [12-14]. The use of SRS alone controls limited disease and delays the time until WBRT is necessary for distant intracranial progression [12,15,16].\nIn most clinical trials of therapies for brain metastases, patients have been selected on the basis of having few metastases, stable extracranial disease, and excellent performance status. In clinical practice, patients with brain metastases are a heterogeneous population, and decision-making requires the synthesis of multiple variables.\nThe objective of this survey of radiation oncologists was to identify patient factors, physician characteristics, and practice setting variables associated with physicians’ preferred use of different techniques for treating brain metastases. 
This survey aimed to generate data that would allow physicians to: (1) compare their practice patterns to a national sample; (2) assess the influence of their practice environment on treatment choice; and (3) generate new hypotheses regarding appropriate treatment.", "This project was approved by the IRB of Harvard Medical School. The survey was launched online, and physician members of the American Society for Therapeutic Radiology and Oncology (ASTRO) were emailed a recruitment letter. Eligibility criteria included respondent status as a U.S. or Canadian physician in the ASTRO database, valid email address, and current management of patients with brain metastases, as reflected by the screener question. Respondents linked directly to the survey from the email, and there was no incentive for survey participation.\n Data collection Data was de-identified and collected through the online survey tool for one month. We emailed surveys to 4357 physician members of ASTRO on September 26, 2008, and the survey was closed on October 26, 2008. 417 respondents answered at least one question, and 277 answered all demographic and clinical questions, for a response rate of 6%. Despite our low response rate, physician respondents were representative of practicing radiation oncologists when compared to respondents to the American College of Radiology’s (ACR) Survey of Radiation Oncologists. Our sample was similar to the ACR survey on selected characteristics such as sex (73% male in our survey, 77% in ACR), age (62% ages 35–54 in our survey, 65% in ACR) and being in private practice (52% in our survey, 48% in ACR) [17]. 
However, it was not possible to assess interest in SRS or palliative care, or use of advanced technology, among those included in the ACR sample, which limits the comparison.\nThe survey was designed to: (1) describe radiation oncologists’ patterns of treatment of patients with brain metastases; and (2) identify clinical, demographic, and practice setting factors associated with treatment patterns. To test physician practices, a series of short hypothetical clinical vignettes were developed to assess respondents’ preferred treatment modalities. Vignettes have been demonstrated to be a valid study tool when compared with actual clinical practice patterns [18]. Treatment options for each vignette were identical: WBRT alone; WBRT with SRS; SRS alone; WBRT with surgery; or no treatment. We constructed 3 versions of a reference vignette: the first with 1 metastasis, the next with 3 metastases, and one with 8 metastases. Each reference vignette described a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, Karnofsky Performance Status (KPS) 80%, and asymptomatic, small brain lesion(s). For each of these 3 vignettes, we asked about 6 additional patients, modifying a single variable: melanoma histology, active extracranial disease, KPS 50%, presence of neurologic deficit, age of 80 years old, and large lesion (Figure 1).\nVariations under assumptions of 1, 3, 8 metastases in each Reference Patient. The survey sequentially varied the characteristics of each Reference Patient to create vignettes for Patients 1–6. The effect of each variation was evaluated under assumptions of 1, 3, and 8 lesions, respective to each vignette’s Reference Patient.\nOther survey items assessed factors related to the patient, physician, or practice setting. These questions included physician demographics, practice environment, availability of SRS, and opinions about the nature of intracranial disease and the toxicity of its treatment. 
A copy of our survey is included as supplementary material (Additional file 1: Appendix 1). Data regarding non-respondents were not collected.\nData was de-identified and collected through the online survey tool for one month. We emailed surveys to 4357 physician members of ASTRO on September 26, 2008, and the survey was closed on October 26, 2008. 417 respondents answered at least one question, and 277 answered all demographic and clinical questions, for a response rate of 6%. Despite our low response rate, physician respondents were representative of practicing radiation oncologists when compared to respondents to the American College of Radiology’s (ACR) Survey of Radiation Oncologists. Our sample was similar to the ACR survey on selected characteristics such as sex (73% male in our survey, 77% in ACR), age (62% ages 35–54 in our survey, 65% in ACR) and being in private practice (52% in our survey, 48% in ACR) [17]. However, it was not possible to assess interest in SRS or palliative care, or use of advanced technology, among those included in the ACR sample, which limits the comparison.\nThe survey was designed to: (1) describe radiation oncologists’ patterns of treatment of patients with brain metastases; and (2) identify clinical, demographic, and practice setting factors associated with treatment patterns. To test physician practices, a series of short hypothetical clinical vignettes were developed to assess respondents’ preferred treatment modalities. Vignettes have been demonstrated to be a valid study tool when compared with actual clinical practice patterns [18]. Treatment options for each vignette were identical: WBRT alone; WBRT with SRS; SRS alone; WBRT with surgery; or no treatment. We constructed 3 versions of a reference vignette: the first with 1 metastasis, the next with 3 metastases, and one with 8 metastases. 
Each reference vignette described a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, Karnofsky Performance Status (KPS) 80%, and asymptomatic, small brain lesion(s). For each of these 3 vignettes, we asked about 6 additional patients, modifying a single variable: melanoma histology, active extracranial disease, KPS 50%, presence of neurologic deficit, age of 80 years old, and large lesion (Figure 1).\nVariations under assumptions of 1, 3, 8 metastases in each Reference Patient. The survey sequentially varied the characteristics of each Reference Patient to create vignettes for Patients 1–6. The effect of each variation was evaluated under assumptions of 1, 3, and 8 lesions, respective to each vignette’s Reference Patient.\nOther survey items assessed factors related to the patient, physician, or practice setting. These questions included physician demographics, practice environment, availability of SRS, and opinions about the nature of intracranial disease and the toxicity of its treatment. A copy of our survey is included as supplementary material (Additional file 1: Appendix 1). Data regarding non-respondents were not collected.\n Statistical analysis \nEffects of patient clinical characteristics on treatment choices\n For the four category treatment choice responses (WBRT alone, WBRT with SRS, SRS alone, or surgery with WBRT), we used a series of multivariate binomial generalized estimating equation (GEE) models to estimate odds ratios that measured the effects of each change in patients’ clinical characteristics on the odds of each of 4 treatments choices relative to the odds of the remaining 3 alternatives. Since each vignette represented a repeat measurement on a physician, we considered treatment choices as correlated observations clustered within individual physicians. We used an exchangeable correlation structure to account for the correlation of physician responses between vignettes. 
Graphical techniques were used to assess model adequacy. We chose to use a series of binomial models to model a multi-category response because of the lack of available statistical software to implement multi-category GEE models with exchangeable correlation structure [19].\nFor the four category treatment choice responses (WBRT alone, WBRT with SRS, SRS alone, or surgery with WBRT), we used a series of multivariate binomial generalized estimating equation (GEE) models to estimate odds ratios that measured the effects of each change in patients’ clinical characteristics on the odds of each of 4 treatments choices relative to the odds of the remaining 3 alternatives. Since each vignette represented a repeat measurement on a physician, we considered treatment choices as correlated observations clustered within individual physicians. We used an exchangeable correlation structure to account for the correlation of physician responses between vignettes. Graphical techniques were used to assess model adequacy. We chose to use a series of binomial models to model a multi-category response because of the lack of available statistical software to implement multi-category GEE models with exchangeable correlation structure [19].\n Effects of patient &physician characteristics on odds of including SRS We grouped treatment responses that included SRS (SRS or WBRT with SRS) and compared them with the 3 remaining alternatives as a combined reference group (WBRT, WBRT with surgery, or no treatment) in a binomial GEE model that included patient clinical, physician and practice setting characteristics as covariates. These groupings were created to allow for exploration of factors contributing to integrating advanced technology (SRS) into the treatment plan, despite the fact that each treatment approach may have different clinical indications, as explored through the above-detailed models. 
Working correlations and clustering were treated as in the previous models.\nAll parameter estimates were tested for statistical significance at the 0.05 level. SAS® software version 9.2 was used in all analyses.\nWe grouped treatment responses that included SRS (SRS or WBRT with SRS) and compared them with the 3 remaining alternatives as a combined reference group (WBRT, WBRT with surgery, or no treatment) in a binomial GEE model that included patient clinical, physician and practice setting characteristics as covariates. These groupings were created to allow for exploration of factors contributing to integrating advanced technology (SRS) into the treatment plan, despite the fact that each treatment approach may have different clinical indications, as explored through the above-detailed models. Working correlations and clustering were treated as in the previous models.\nAll parameter estimates were tested for statistical significance at the 0.05 level. SAS® software version 9.2 was used in all analyses.\n \nEffects of patient clinical characteristics on treatment choices\n For the four category treatment choice responses (WBRT alone, WBRT with SRS, SRS alone, or surgery with WBRT), we used a series of multivariate binomial generalized estimating equation (GEE) models to estimate odds ratios that measured the effects of each change in patients’ clinical characteristics on the odds of each of 4 treatments choices relative to the odds of the remaining 3 alternatives. Since each vignette represented a repeat measurement on a physician, we considered treatment choices as correlated observations clustered within individual physicians. We used an exchangeable correlation structure to account for the correlation of physician responses between vignettes. Graphical techniques were used to assess model adequacy. 
Data was de-identified and collected through the online survey tool for one month. We emailed surveys to 4357 physician members of ASTRO on September 26, 2008, and the survey was closed on October 26, 2008. 417 respondents answered at least one question, and 277 answered all demographic and clinical questions, for a response rate of 6%. Despite our low response rate, physician respondents were representative of practicing radiation oncologists when compared to respondents to the American College of Radiology’s (ACR) Survey of Radiation Oncologists. Our sample was similar to the ACR survey on selected characteristics such as sex (73% male in our survey, 77% in ACR), age (62% ages 35–54 in our survey, 65% in ACR) and being in private practice (52% in our survey, 48% in ACR) [17].
However, it was not possible to assess interest in SRS or palliative care, or use of advanced technology, among those included in the ACR sample, which limits the comparison.

The survey was designed to: (1) describe radiation oncologists’ patterns of treatment of patients with brain metastases; and (2) identify clinical, demographic, and practice setting factors associated with treatment patterns. To test physician practices, a series of short hypothetical clinical vignettes were developed to assess respondents’ preferred treatment modalities. Vignettes have been demonstrated to be a valid study tool when compared with actual clinical practice patterns [18]. Treatment options for each vignette were identical: WBRT alone; WBRT with SRS; SRS alone; WBRT with surgery; or no treatment. We constructed 3 versions of a reference vignette: the first with 1 metastasis, the next with 3 metastases, and one with 8 metastases. Each reference vignette described a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, Karnofsky Performance Status (KPS) 80%, and asymptomatic, small brain lesion(s). For each of these 3 vignettes, we asked about 6 additional patients, modifying a single variable: melanoma histology, active extracranial disease, KPS 50%, presence of neurologic deficit, age of 80 years old, and large lesion (Figure 1).

Figure 1. Variations under assumptions of 1, 3, 8 metastases in each Reference Patient. The survey sequentially varied the characteristics of each Reference Patient to create vignettes for Patients 1–6. The effect of each variation was evaluated under assumptions of 1, 3, and 8 lesions, respective to each vignette’s Reference Patient.

Other survey items assessed factors related to the patient, physician, or practice setting. These questions included physician demographics, practice environment, availability of SRS, and opinions about the nature of intracranial disease and the toxicity of its treatment.
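The vignette design described above is a small factorial grid: three reference patients (1, 3, or 8 metastases), each paired with six single-variable modifications, giving 21 vignettes in total. A hypothetical sketch of that grid (the field names and encodings are assumptions for illustration):

```python
# Illustrative reconstruction of the 21-vignette grid: a reference patient
# varied one characteristic at a time, crossed with 1, 3, or 8 metastases.
# Field names are assumed; the clinical values come from the survey design.

REFERENCE = {
    "age": 55, "histology": "NSCLC", "extracranial_disease": "inactive",
    "kps": 80, "neuro_deficit": False, "lesion_size": "small",
}

# Each variation changes exactly one characteristic of the reference patient.
VARIATIONS = [
    {"histology": "melanoma"},
    {"extracranial_disease": "active"},
    {"kps": 50},
    {"neuro_deficit": True},
    {"age": 80},
    {"lesion_size": "large"},
]

def build_vignettes():
    vignettes = []
    for n_mets in (1, 3, 8):
        base = dict(REFERENCE, n_metastases=n_mets)
        vignettes.append(base)                      # reference patient
        for change in VARIATIONS:
            vignettes.append(dict(base, **change))  # single-variable variant
    return vignettes

print(len(build_vignettes()))  # 21 = 3 lesion counts x (1 reference + 6 variants)
```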
A copy of our survey is included as supplementary material (Additional file 1: Appendix 1). Data regarding non-respondents were not collected.

Physician demographics and practice environment

The characteristics of our survey respondents are shown in Table 1.
Sixty percent of respondents were in single-specialty group practices. Most practices were hospital-based, academic (38%) or private (30%). Seventy-six percent of respondents treated 10–50 patients with brain metastases per year. Forty-four percent of respondents performed SRS, while 35% had a colleague at their institution who performed SRS. Sixty-one percent of respondents had LINAC-based SRS, and 18% had no SRS equipment.

Table 1. Distribution of Physician Characteristics (N=277)
1 Respondents were permitted to select more than one modality.
2 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases.
* Stereotactic radiosurgery.

Physicians’ responses to the 21 vignettes varied substantially (Table 2). Multivariable modeling revealed clinical factors influencing treatment selection (Tables 3, 4, 5; complete results in Additional file 2: Appendix 2).

Table 2. Unadjusted Response (in %) Among Radiation Oncologists (N=277)
1 Abbreviations as follows: Whole Brain Radiation Therapy (WBRT); Whole Brain Radiation Therapy with Stereotactic Radiosurgery (WBRT+SRS); Stereotactic Radiosurgery (SRS); Surgery with Whole Brain Radiation Therapy (SURG+WBRT).
2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion.
3 Patient characteristics were varied sequentially with each patient differing by a single characteristic from the reference patient as shown in Figure 1.
* Whole brain radiation therapy.
† Stereotactic radiosurgery.

Table 3. Odds Ratios for Choice of WBRT* alone versus SRS† alone
Table 4. Odds Ratios for Choice of WBRT alone versus WBRT with SRS
Table 5. Odds Ratios for Choice of WBRT with SRS versus SRS alone
Notes: Odds ratios (OR) are quoted with their 95% confidence intervals in parentheses. "*" Denotes significant odds ratios at the 0.05 level. The odds ratios compare the odds of choosing each given treatment with the odds of choosing the treatments serving as reference categories.
* Whole brain radiation therapy.
† Stereotactic radiosurgery.
†† Confidence intervals.
§ Karnofsky Performance Status.
¶ Non-Small Cell Lung Cancer.

Whole brain radiation therapy alone

WBRT alone was selected frequently, particularly for patients with 8 metastases. For the 80 year-old patient with 3 or 8 metastases, WBRT was commonly preferred (52% and 96% vs. 21% and 91%, respectively, for the 55-year old patient, Table 2). Even for a patient with a single metastasis, 56% of respondents preferred WBRT alone if that patient had KPS 50%; 51% would choose WBRT if the patient had active extracranial disease.
In adjusted analyses, all of the clinical variables (melanoma histology, KPS 50%, active extracranial disease, age of 80 years old, presence of focal neurologic deficits, and large lesion) were associated with a higher likelihood of respondents preferring WBRT alone versus either SRS alone (Table 3) or WBRT with SRS (Table 4), except for radioresistant histology.

Addition of surgery

For the reference patient with a single metastasis, 44% of respondents selected surgery with WBRT, although most respondents selected a non-operative approach that included SRS (26% WBRT with SRS; 29% SRS alone, for a total of 55% of respondents). When the reference vignette was revised to include the presence of focal neurologic deficits, the distribution of responses was similar for those with 1 lesion, with 48% of respondents preferring surgery with WBRT. When considering patients with a single, large lesion, the percent of respondents choosing surgery with WBRT increased from 44% to 63%. After adjusting for all other clinical factors, respondents were more likely to choose surgery with WBRT rather than WBRT alone for patients with large versus smaller lesions (OR=1.9, 95% CI 1.3-2.8). For 3 or 8 lesions, age 80, active extracranial disease, and KPS 50%, respondents were more likely to choose WBRT alone than surgery with WBRT (Additional file 2: Appendix 2). Melanoma histology and presence of neurologic deficits did not correlate with respondents’ selections.

Addition of stereotactic radiosurgery

SRS was commonly preferred by respondents for patients with 3 lesions (23% SRS alone; 54% SRS with WBRT, Table 2), and it largely replaced the use of surgery for the older patient with a single lesion (25% WBRT with SRS; 40% chose SRS alone). Presence of neurological deficits and large lesion size were associated with physicians’ preference for WBRT with SRS over SRS alone (Table 5). However, older age, poorer performance status, and melanoma histology were associated with less frequent selection of WBRT with SRS versus SRS alone.

Use of stereotactic radiosurgery

Multivariable analysis was performed to identify which factors were independently associated with including SRS as part of treatment (SRS or WBRT with SRS) compared to all other treatment choices (WBRT, WBRT with surgery, no treatment), adjusting for all other characteristics in Table 6.
Number of metastases was strongly associated with treatment preferences: after adjustment for all other factors in the model, respondents were significantly more likely to favor SRS for 3 lesions than for 1 (OR=2.22, 95% CI 1.96-2.51), and physicians were 5 times less likely to choose an approach that included SRS for a patient with 8 lesions relative to patients with 1 lesion (OR=0.19, 95% CI 0.15-0.23).

Table 6. Results of logistic regression model showing the reported use of SRS* as part of treatment for brain metastases according to multiple clinical, sociodemographic, and practice setting factors2
1 Including SRS was defined as either use of SRS alone or with Whole Brain Radiation Therapy (WBRT).
2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion.
3 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases.
* Stereotactic radiosurgery.
† Whole brain radiation therapy.

Across all clinical vignettes, after adjusting for all other factors, poor KPS (OR=0.38, 95% CI 0.31-0.46), active extracranial disease (OR=0.56, 95% CI 0.47-0.65), and large lesion (OR=0.58, 95% CI 0.47-0.71) remained strongly negatively associated with the choice of SRS, while melanoma histology (OR=2.84, 95% CI 2.45-3.29) and advanced age (OR=1.23, 95% CI 1.07-1.41) were positively associated with choice of SRS. Physician access was the strongest factor associated with choosing SRS as part of treatment. Respondents with SRS capability in their own practice were more likely to favor its use for hypothetical patients than those without it (OR=2.22, 95% CI 1.46-3.37). As expected, those physicians who personally used SRS were more likely to recommend it than those who did not have it or use it personally in their practice (OR=3.57, 95% CI 2.42-5.26). Patient volume and physician seniority were examined, but were not associated with SRS use.
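For intuition about the odds ratios quoted throughout this section, an unadjusted OR and Wald 95% confidence interval can be computed from a 2×2 table of counts; note that the study's estimates come from adjusted GEE models, not raw tables, and the counts below are invented for illustration:

```python
import math

# Back-of-envelope odds ratio with a Wald 95% CI from a 2x2 table of counts
# (e.g., on-site SRS capability vs. not, against chose-SRS vs. other).
# Illustrative only: the ORs reported above are adjusted GEE estimates.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = outcome yes/no in group 1; c, d = outcome yes/no in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Toy counts: 60/40 chose SRS with on-site capability vs. 40/60 without.
or_, lo, hi = odds_ratio_ci(60, 40, 40, 60)
print(round(or_, 2))  # 2.25
```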
Physician access was the strongest factor associated with choosing SRS as part of treatment. Respondents with SRS capability in their own practice were more likely to favor its use for hypothetical patients than those without it (OR=2.22, 95% CI 1.46-3.37). As expected, those physicians who personally used SRS were more likely to recommend it than those who did not have it or use it personally in their practice (OR=3.57, 95% CI 2.42-5.26). Patient volume and physician seniority were examined, but were not associated with SRS use.", "The characteristics of our survey respondents are shown in Table 1. Sixty percent of respondents were in single-specialty group practices. Most practices were hospital-based, academic (38%) or private (30%). Seventy-six percent of respondents treated 10–50 patients with brain metastases per year. Forty-four percent of respondents performed SRS, while 35% had a colleague at their institution who performed SRS. Sixty-one percent of respondents had LINAC-based SRS, and 18% had no SRS equipment.\nDistribution of Physician Characteristics (N=277)\n1 Respondents were permitted to select more than one modality.\n2 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases.\n* Stereotactic radiosurgery.\nPhysicians’ responses to the 21 vignettes varied substantially (Table 2). 
Multivariable modeling revealed clinical factors influencing treatment selection (Tables 3, 4, 5; complete results in Additional file 2: Appendix 2).\nUnadjusted Response (in %) Among Radiation Oncologist (N=277)\n1 Abbreviations as follows: Whole Brain Radiation Therapy (WBRT); Whole Brain Radiation Therapy with Stereotactic Radiosurgery (WBRT+SRS); Stereotactic Radiosurgery (SRS); Surgery with Whole Brain Radiation Therapy (SURG+WBRT).\n2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion.\n3 Patient characteristics were varied sequentially with each patient differing by a single characteristic from the reference patient as shown in Figure 1.\n* Whole brain radiation therapy.\n† Stereotactic radiosurgery.\nOdds Ratios for Choice of WBRT* alone versus SRS† Alone\nOdds Ratios for Choice of WBRT alone versus WBRT with SRS\nOdds Ratios for Choice of WBRT with SRS versus SRS alone\nNotes\nOdds ratios (OR) are quoted with their 95% confidence intervals in parentheses. \"*\" Denotes significant odds ratios at the 0.05 level.\nThe odds ratios compare odds of choosing each given treatment, with the odds of choosing the treatments serving as reference categories.\n* Whole brain radiation therapy.\n† Stereotactic radiosurgery.\n†† Confidence intervals.\n§ Karnofsky Performance Status.\n¶ Non-Small Cell Lung Cancer.", "WBRT alone was selected frequently, particularly for patients with 8 metastases. For the 80 year-old patient with 3 or 8 metastases, WBRT was commonly preferred (52% and 96% vs. 21% and 91%, respectively, for the 55-year old patient, Table 2). Even for a patient with a single metastasis, 56% of respondents preferred WBRT alone if that patient had KPS 50%; 51% would choose WBRT if the patient had active extracranial disease. 
In adjusted analyses, all of the clinical variables (melanoma histology, KPS 50%, active extracranial disease, age of 80 years old, presence of focal neurologic deficits, and large lesion) were associated with a higher likelihood of respondents preferring WBRT alone versus either SRS alone (Table 3) or WBRT with SRS (Table 4), except for radioresistant histology.", "For the reference patient with a single metastasis, 44% of respondents selected surgery with WBRT, although most respondents selected a non-operative approach that included SRS (26% WBRT with SRS; 29% SRS alone, for a total of 55% of respondents). When the reference vignette was revised to include the presence of focal neurologic deficits, the distribution of responses was similar for those with 1 lesion, with 48% of respondents preferring surgery with WBRT. When considering patients with a single, large lesion, the percent of respondents choosing surgery with WBRT increased from 44% to 63%. After adjusting for all other clinical factors, respondents were more likely to choose surgery with WBRT rather than WBRT alone for patients with large versus smaller lesions (OR=1.9, 95% CI 1.3-2.8). For 3 or 8 lesions, age 80, active extracranial disease, and KPS 50%, respondents were more likely to choose WBRT alone than surgery with WBRT (Additional file 2: Appendix 2). Melanoma histology and presence of neurologic deficits did not correlate with respondents’ selections.", "SRS was commonly preferred by respondents for patients with 3 lesions (23% SRS alone; 54% SRS with WBRT, Table 2), and it largely replaced the use of surgery for the older patient with a single lesion (25% WBRT with SRS; 40% chose SRS alone). Presence of neurological deficits and large lesion size were associated with physicians’ preference for WBRT with SRS over SRS alone (Table 5). 
However, older age, poorer performance status and melanoma histology were associated with less frequent selection of WBRT with SRS versus SRS alone.", "Multivariable analysis was performed to identify which factors were independently associated with including SRS as part of treatment (SRS or WBRT with SRS) compared to all other treatment choices (WBRT, WBRT with surgery, no treatment), adjusting for all other characteristics in Table 6. Number of metastases was strongly associated with treatment preferences: after adjustment for all other factors in the model, respondents were significantly more likely to favor SRS for 3 lesions than for 1 (OR=2.22, 95% CI 1.96-2.51), and physicians were 5 times less likely to choose an approach that included SRS for a patient with 8 lesions relative to patients with 1 lesion (OR=0.19, 95% CI 0.15-0.23).\n\nResults of logistic regression model showing the reported use of SRS\n\n* \n\nas part of treatment for brain metastases according to multiple clinical, sociodemographic, and practice setting factors\n\n2\n\n\n1 Including SRS was defined as either use of SRS alone or with Whole Brain Radiation Therapy (WBRT).\n2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion.\n3 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases.\n* Stereotactic radiosurgery.\n† Whole brain radiation therapy.\nAcross all clinical vignettes, after adjusting for all other factors, poor KPS (OR=0.38, 95% CI 0.31-0.46), active extracranial disease (OR=0.56, 95% CI 0.47-0.65), and large lesion (OR=0.58, 95% CI 0.47-0.71) remained strongly negatively associated with the choice of SRS, while melanoma histology (OR=2.84, 95% CI 2.45-3.29) and advanced age (OR=1.23, 95% CI 1.07-1.41) were positively associated with choice of SRS. 
Physician access was the strongest factor associated with choosing SRS as part of treatment. Respondents with SRS capability in their own practice were more likely to favor its use for hypothetical patients than those without it (OR=2.22, 95% CI 1.46-3.37). As expected, those physicians who personally used SRS were more likely to recommend it than those who did not have it or use it personally in their practice (OR=3.57, 95% CI 2.42-5.26). Patient volume and physician seniority were examined, but were not associated with SRS use.

Treatment of patients with brain metastases is heterogeneous. WBRT is a standard therapy, with the addition of surgery or SRS to WBRT, or SRS used alone, reserved for selected patients on the basis of their clinical characteristics. One potential advantage of local therapy may be avoiding the toxicity of WBRT [12-14]. However, SRS, when used alone, has several disadvantages. SRS alone has been shown to be inferior to the combination of SRS with WBRT for durable local control and distant intracranial control [15]. When studying patients initially undergoing any local therapy – surgery or SRS – more patients required salvage if treated without WBRT [20]. Long-term cognitive outcomes have been shown to be more closely correlated with intracranial progression than with treatment modality, emphasizing the significance of intracranial control over short-term side effects [21,22].

Given the limited scope of current studies and the variability in outcomes, National Comprehensive Cancer Network (NCCN) guidelines allow for a wide range of treatment options including WBRT, surgical resection, or SRS, alone or in combination [23]. Previous reviews of treatment patterns have demonstrated stable rates of surgery since the 1980s, with an increasing use of SRS [24].
Despite clinical trials limiting eligible patients to those with limited central nervous system disease, a recent survey demonstrated that more than half of physician respondents would consider using SRS as an initial treatment for patients with 5 or more intracranial lesions [25]. The increased utilization of SRS, as well as the persistent heterogeneity in practice, may reflect the time required for research to disseminate into clinical practice, or the time to purchase and adopt new technologies.

With mixed evidence and a heterogeneous patient population, treatment decision-making is complex. Significantly, our study demonstrates that although clinical factors, such as number of lesions and patient age, affected treatment selection, physician practice environment had a strong, independent effect on the use of SRS.

Factors related to the patient's clinical condition affected treatment selection. There was increased use of WBRT for increasing number of lesions, which is consistent with the lack of evidence to support the use of local techniques for patients with numerous metastases. However, we observed that a substantial proportion of physicians still chose SRS as part of their approach for patients with multiple lesions, particularly for patients with 3 lesions. The increased use of SRS with 3 lesions as compared with 1 was possibly due to the use of surgery for a substantial proportion of patients with 1 lesion, and due to the use of SRS combined with WBRT in patients with 3 lesions. Interestingly, physicians overall selected WBRT for patients with 1, 3, or 8 lesions more often for patients who were frail (increased age, low KPS) and might suffer increased morbidity from WBRT. This finding was unexpected, since WBRT has been shown to cause side effects that might be difficult for frail patients with limited life expectancy to tolerate, such as increasing fatigue, worsening physical function, and deterioration of appetite [7,14,26].
Additional clinical factors may influence treatment selection but were not addressed in this study, including tumor location and surgical accessibility; additional treatment options not evaluated include the use of SRS in combination with surgery, chemotherapy, and the role of hospice.

Practice environment and clinical expertise also influenced the use of SRS, even when controlling for clinical factors. Although practice type was not associated with the preference for SRS, the availability of SRS was significantly associated with its use, indicating that patients are more likely to receive this treatment if the physician they see practices it herself or has it available within her practice. This pattern of care could lead to under- or over-utilization of SRS: patients may have treatment guided more by a provider's practice than by the patient's clinical condition. Previous studies have demonstrated the association of physician specialization, board certification, treatment volume, and time in practice with other cancer-related treatment decisions [27,28]. For example, diagnostic imaging use has increased when such imaging is performed at a self-referred facility [29]. Similarly, radiation oncologists may prescribe complex treatment approaches more frequently when they have access to the facilities or equipment. Alternatively, this propensity for increased use of SRS with easy access may relate to physicians' familiarity with their own clinical outcomes when using new technology. Our respondents may also have rates of access to SRS that are not comparable to those available nationwide, since the ACR survey did not report on the availability of SRS equipment.

Our study has several limitations due to its reliance on physician self-report as a proxy for practice, its timing, and the limited number of respondents. Clinical scenarios were hypothetical and treatment options were limited.
Although physician surveys have shown a strong correlation between vignettes and actual practice [18], further objective validation of these data would be desirable, as the vignettes used in this survey were novel. Respondents to this survey were predominantly radiation oncologists, whose treatment decisions may be greatly influenced by other members of the interdisciplinary oncology team not represented in this survey. Rates of radiosurgery utilization more than doubled between 2000 and 2005, so continued increases in the use of radiosurgery could have occurred since the completion of this survey [30]. Additional research has been published since 2008 that may have resulted in further shifts in practice patterns.

The limited number of respondents to our survey limits the generalizability of our findings. The response rate of 6% may indicate that the practice patterns outlined in this study are specific to a subgroup of clinicians with particular interest or expertise in radiosurgery and may not be indicative of global patterns of care. Although respondents were similar to those in the ACR survey, the comparison is limited by the nature of the variables available; key issues, such as expertise with SRS or volume of patients with brain metastases, were not available in the ACR survey for comparison. However, ours is the first study to document practice patterns using vignettes in this clinical setting.

Although many patients with cancer develop brain metastases, there is little data to guide treatment decisions. Our study demonstrates significant heterogeneity among radiation oncologists in general clinical practice, even for patients with identical clinical characteristics. Certain non-clinical factors, such as access to SRS, appear to be key drivers of the use of advanced technology.
This finding raises the question of what additional incentives could be driving treatment selection in the absence of gold-standard evidence of the superiority of a single approach over other alternatives. Our findings from this survey also underscore the likely uncertainty or disagreement that may exist among radiation oncologists about the relative harms and benefits of different treatment approaches. This uncertainty is likely related to the lack of prospective randomized studies that compare specific single- and multi-modality approaches for the treatment of brain metastases. More research is needed that directly compares the effectiveness of these approaches for a variety of different clinical circumstances. It would also be important to investigate underlying non-clinical factors, such as physician environment, reimbursement, and technology access, which likely contribute to the observed heterogeneity of care for patients with brain metastases.

Abbreviations: WBRT: Whole brain radiation therapy; SRS: Stereotactic radiosurgery; KPS: Karnofsky performance status; RPA: Recursive partitioning analysis; ASTRO: American Society for Therapeutic Radiation Oncology; ACR: American College of Radiology; GEE: Generalized estimating equation; NCCN: National Comprehensive Cancer Network.

Competing interests: Dr. Ramakrishna has received speaker's honoraria from and prepared educational materials for Brainlab Ag, Heimstetten, Germany. The remaining authors have no conflicts of interest to disclose.

Authors' contributions: NR and MK conceived of the study, designed the survey, and completed data collection. MK, KU, SM, and AP performed statistical analysis and data interpretation. MK, SM, and AP drafted the manuscript. All authors read and approved the final manuscript.

Supplementary material: Appendix 1. Complete physician survey. Appendix 2. Odds Ratios and Confidence Intervals Comparing the Odds of Treatment Choices for Different Patient Characteristics.
Keywords: Brain metastases; Stereotactic radiosurgery; Whole brain radiation therapy; Treatment patterns; Physician survey.
Background: Brain metastases are the most common intracranial tumor, occurring in 20-40% of cancer patients and accounting for 20% of cancer deaths annually [1]. Median survival is 1–2 months with corticosteroids alone [2] or six months with whole brain radiation therapy (WBRT) [3,4]. A major advance in the treatment of these patients was the addition of surgery to WBRT for treatment of a single metastasis, which improved local control, distant intracranial control, and neurologic survival compared to either modality alone [5,6]. A retrospective study demonstrated differential survival among patients undergoing WBRT according to recursive partitioning analysis (RPA) classes [7]; further prognostic refinements have incorporated histology and number of lesions [8]. More recently, stereotactic radiosurgery (SRS) has been used alone or with WBRT in patients with up to 4 metastases. When compared with WBRT alone, the addition of SRS has improved local control, functional autonomy and survival [5,9-11]. However, WBRT can have significant toxicities, including fatigue, drowsiness and suppressed appetite, and long-term difficulties with learning, memory, concentration, and depression [12-14]. The use of SRS alone controls limited disease and delays the time until WBRT is necessary for distant intracranial progression [12,15,16]. In most clinical trials of therapies for brain metastases, patients have been selected on the basis of having few metastases, stable extracranial disease, and excellent performance status. In clinical practice, patients with brain metastases are a heterogeneous population, and decision-making requires the synthesis of multiple variables. The objective of this survey of radiation oncologists was to identify patient factors, physician characteristics, and practice setting variables associated with physicians' preferred use of different techniques for treating brain metastases.
This survey aimed to generate data that would allow physicians to: (1) compare their practice patterns to a national sample; (2) assess the influence of their practice environment on treatment choice; and (3) generate new hypotheses regarding appropriate treatment.

Methods: This project was approved by the IRB of Harvard Medical School. The survey was launched online, and physician members of the American Society for Therapeutic Radiology and Oncology (ASTRO) were emailed a recruitment letter. Eligibility criteria included respondent status as a U.S. or Canadian physician in the ASTRO database, a valid email address, and current management of patients with brain metastases, as reflected by the screener question. Respondents linked directly to the survey from the email, and there was no incentive for survey participation.

Data collection

Data were de-identified and collected through the online survey tool for one month. We emailed surveys to 4357 physician members of ASTRO on September 26, 2008, and the survey was closed on October 26, 2008. 417 respondents answered at least one question, and 277 answered all demographic and clinical questions, for a response rate of 6%. Despite our low response rate, physician respondents were representative of practicing radiation oncologists when compared to respondents to the American College of Radiology's (ACR) Survey of Radiation Oncologists. Our sample was similar to the ACR survey on selected characteristics such as sex (73% male in our survey, 77% in ACR), age (62% ages 35–54 in our survey, 65% in ACR), and being in private practice (52% in our survey, 48% in ACR) [17]. However, it was not possible to assess interest in SRS or palliative care, or use of advanced technology, among those included in the ACR sample, which limits the comparison.
The survey was designed to: (1) describe radiation oncologists' patterns of treatment of patients with brain metastases; and (2) identify clinical, demographic, and practice setting factors associated with treatment patterns. To test physician practices, a series of short hypothetical clinical vignettes was developed to assess respondents' preferred treatment modalities. Vignettes have been demonstrated to be a valid study tool when compared with actual clinical practice patterns [18]. Treatment options for each vignette were identical: WBRT alone; WBRT with SRS; SRS alone; WBRT with surgery; or no treatment. We constructed 3 versions of a reference vignette: the first with 1 metastasis, the next with 3 metastases, and one with 8 metastases. Each reference vignette described a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, Karnofsky Performance Status (KPS) 80%, and asymptomatic, small brain lesion(s). For each of these 3 vignettes, we asked about 6 additional patients, modifying a single variable: melanoma histology, active extracranial disease, KPS 50%, presence of neurologic deficit, age of 80 years old, and large lesion (Figure 1).

Figure 1. Variations under assumptions of 1, 3, and 8 metastases in each Reference Patient. The survey sequentially varied the characteristics of each Reference Patient to create vignettes for Patients 1–6. The effect of each variation was evaluated under assumptions of 1, 3, and 8 lesions, respective to each vignette's Reference Patient.

Other survey items assessed factors related to the patient, physician, or practice setting. These questions included physician demographics, practice environment, availability of SRS, and opinions about the nature of intracranial disease and the toxicity of its treatment. A copy of our survey is included as supplementary material (Additional file 1: Appendix 1). Data regarding non-respondents were not collected.
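The vignette grid described above (3 lesion counts, each with one reference patient plus 6 single-variable variants, for 21 vignettes in total) can be sketched in a few lines. The field names below are illustrative placeholders, not the authors' own variable names:

```python
# Hypothetical reconstruction of the survey's vignette grid.
REFERENCE = {
    "age": 55, "histology": "NSCLC", "extracranial_disease": "inactive",
    "kps": 80, "neurologic_deficit": False, "lesion_size": "small",
}
# Each variant changes exactly one characteristic of the reference patient.
SINGLE_VARIANTS = [
    {"histology": "melanoma"},
    {"extracranial_disease": "active"},
    {"kps": 50},
    {"neurologic_deficit": True},
    {"age": 80},
    {"lesion_size": "large"},
]

def build_vignettes():
    """Return the full grid: 3 lesion counts x (1 reference + 6 variants)."""
    vignettes = []
    for n_lesions in (1, 3, 8):
        base = dict(REFERENCE, n_lesions=n_lesions)
        vignettes.append(base)
        for change in SINGLE_VARIANTS:
            vignettes.append(dict(base, **change))
    return vignettes

print(len(build_vignettes()))  # prints 21
```

Laying the grid out this way makes the repeated-measures structure explicit: every physician answers all 21 cells, which is why the analysis below treats responses as clustered within physicians.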
Statistical analysis

Effects of patient clinical characteristics on treatment choices

For the four category treatment choice responses (WBRT alone, WBRT with SRS, SRS alone, or surgery with WBRT), we used a series of multivariate binomial generalized estimating equation (GEE) models to estimate odds ratios that measured the effects of each change in patients' clinical characteristics on the odds of each of the 4 treatment choices relative to the odds of the remaining 3 alternatives. Since each vignette represented a repeat measurement on a physician, we considered treatment choices as correlated observations clustered within individual physicians. We used an exchangeable correlation structure to account for the correlation of physician responses between vignettes. Graphical techniques were used to assess model adequacy.
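The exchangeable working correlation assumes a single common correlation, alpha, between any two of a physician's responses. A minimal sketch of that matrix structure (the models themselves were fit in SAS; the `alpha` value below is a placeholder for illustration, not an estimate from the study):

```python
def exchangeable_corr(n, alpha):
    """Working correlation matrix for a cluster of n responses from one
    physician: 1.0 on the diagonal, a common correlation alpha elsewhere."""
    return [[1.0 if i == j else alpha for j in range(n)] for i in range(n)]

# For a physician who answered 3 vignettes, with an assumed alpha of 0.4:
R = exchangeable_corr(3, 0.4)
# R == [[1.0, 0.4, 0.4], [0.4, 1.0, 0.4], [0.4, 0.4, 1.0]]
```

The single-parameter structure is what makes "exchangeable" cheap to estimate: no matter how many vignettes a physician answered, only one within-cluster correlation is fit.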
We chose to use a series of binomial models to model a multi-category response because of the lack of available statistical software to implement multi-category GEE models with an exchangeable correlation structure [19].

Effects of patient and physician characteristics on odds of including SRS

We grouped treatment responses that included SRS (SRS alone or WBRT with SRS) and compared them with the 3 remaining alternatives as a combined reference group (WBRT alone, WBRT with surgery, or no treatment) in a binomial GEE model that included patient clinical, physician, and practice setting characteristics as covariates. These groupings were created to allow for exploration of the factors contributing to integrating advanced technology (SRS) into the treatment plan, despite the fact that each treatment approach may have different clinical indications, as explored through the models detailed above. Working correlations and clustering were treated as in the previous models.
All parameter estimates were tested for statistical significance at the 0.05 level. SAS® software version 9.2 was used in all analyses.
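The odds ratios and 95% confidence intervals reported in the Results follow from the fitted model coefficients by exponentiation: OR = exp(beta), with a Wald interval exp(beta ± 1.96·SE) on the log-odds scale. A small sketch (the coefficient and standard error below are back-calculated for illustration, not taken from the study's actual model output):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic/GEE coefficient and its standard error into an
    odds ratio with a 95% Wald confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# A coefficient of about 0.80 with SE of about 0.063 reproduces an odds
# ratio near the OR=2.22 (95% CI 1.96-2.51) reported for 3 vs. 1 lesions.
or_, lo, hi = odds_ratio_ci(0.7975, 0.063)
```

Because the interval is symmetric on the log scale, the reported CIs are asymmetric around the OR itself, which is why values such as 2.22 (1.96-2.51) sit slightly closer to the lower bound.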
Results: Physician demographics and practice environment: The characteristics of our survey respondents are shown in Table 1. Sixty percent of respondents were in single-specialty group practices. Most practices were hospital-based, either academic (38%) or private (30%). Seventy-six percent of respondents treated 10–50 patients with brain metastases per year. Forty-four percent of respondents performed SRS, while 35% had a colleague at their institution who performed SRS. Sixty-one percent of respondents had LINAC-based SRS, and 18% had no SRS equipment.

Table 1. Distribution of Physician Characteristics (N=277). 1 Respondents were permitted to select more than one modality. 2 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases. * Stereotactic radiosurgery.

Physicians’ responses to the 21 vignettes varied substantially (Table 2). Multivariable modeling revealed the clinical factors influencing treatment selection (Tables 3, 4, 5; complete results in Additional file 2: Appendix 2).

Table 2. Unadjusted Responses (in %) Among Radiation Oncologists (N=277). 1 Abbreviations: Whole Brain Radiation Therapy (WBRT); Whole Brain Radiation Therapy with Stereotactic Radiosurgery (WBRT+SRS); Stereotactic Radiosurgery (SRS); Surgery with Whole Brain Radiation Therapy (SURG+WBRT). 2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion. 3 Patient characteristics were varied sequentially, with each patient differing by a single characteristic from the reference patient, as shown in Figure 1. * Whole brain radiation therapy. † Stereotactic radiosurgery.
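As a brief note on how the quoted figures relate: the odds ratios and 95% confidence intervals reported in Tables 3, 4, 5 and Table 6 follow from the GEE coefficient estimates via OR = exp(β) and CI = exp(β ± 1.96·SE). A minimal worked example, where β and SE are purely illustrative values, not the study's estimates:

```python
import math

# Illustrative only: beta and se are made-up values, not the study's estimates.
beta, se = 0.64, 0.20

or_point = math.exp(beta)                 # odds ratio
ci_low = math.exp(beta - 1.96 * se)       # lower 95% confidence limit
ci_high = math.exp(beta + 1.96 * se)      # upper 95% confidence limit

print(f"OR={or_point:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# → OR=1.90 (95% CI 1.28-2.81)
```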
Tables 3, 4, 5. Odds Ratios for Choice of WBRT* Alone versus SRS† Alone; of WBRT Alone versus WBRT with SRS; and of WBRT with SRS versus SRS Alone. Odds ratios (OR) are quoted with their 95% confidence intervals†† in parentheses. "*" denotes odds ratios significant at the 0.05 level. The odds ratios compare the odds of choosing each given treatment with the odds of choosing the treatment serving as the reference category. * Whole brain radiation therapy. † Stereotactic radiosurgery. †† Confidence intervals. § Karnofsky Performance Status. ¶ Non-Small Cell Lung Cancer.

Whole brain radiation therapy alone: WBRT alone was selected frequently, particularly for patients with 8 metastases. For the 80 year-old patient with 3 or 8 metastases, WBRT alone was commonly preferred (52% and 96%, vs. 21% and 91%, respectively, for the 55 year-old patient; Table 2). Even for a patient with a single metastasis, 56% of respondents preferred WBRT alone if that patient had KPS 50%, and 51% would choose WBRT alone if the patient had active extracranial disease. In adjusted analyses, all of the clinical variables (KPS 50%, active extracranial disease, age of 80 years, presence of focal neurologic deficits, and large lesion), with the exception of radioresistant melanoma histology, were associated with a higher likelihood of respondents preferring WBRT alone versus either SRS alone (Table 3) or WBRT with SRS (Table 4).

Addition of surgery: For the reference patient with a single metastasis, 44% of respondents selected surgery with WBRT, although most respondents selected a non-operative approach that included SRS (26% WBRT with SRS and 29% SRS alone, for a total of 55% of respondents). When the reference vignette was revised to include focal neurologic deficits, the distribution of responses for a single lesion was similar, with 48% of respondents preferring surgery with WBRT. For patients with a single, large lesion, the percentage of respondents choosing surgery with WBRT increased from 44% to 63%. After adjusting for all other clinical factors, respondents were more likely to choose surgery with WBRT rather than WBRT alone for patients with large versus smaller lesions (OR=1.9, 95% CI 1.3-2.8). For 3 or 8 lesions, age 80, active extracranial disease, and KPS 50%, respondents were more likely to choose WBRT alone than surgery with WBRT (Additional file 2: Appendix 2). Melanoma histology and presence of neurologic deficits did not correlate with respondents’ selections.

Addition of stereotactic radiosurgery: SRS was commonly preferred by respondents for patients with 3 lesions (23% SRS alone; 54% SRS with WBRT; Table 2), and it largely replaced the use of surgery for the older patient with a single lesion (25% chose WBRT with SRS; 40% chose SRS alone). Presence of neurological deficits and large lesion size were associated with physicians’ preference for WBRT with SRS over SRS alone (Table 5). However, older age, poorer performance status, and melanoma histology were associated with less frequent selection of WBRT with SRS versus SRS alone.
Use of stereotactic radiosurgery: Multivariable analysis was performed to identify the factors independently associated with including SRS as part of treatment (SRS alone or WBRT with SRS) compared with all other treatment choices (WBRT, WBRT with surgery, or no treatment), adjusting for all other characteristics in Table 6. Number of metastases was strongly associated with treatment preferences: after adjustment for all other factors in the model, respondents were significantly more likely to favor SRS for 3 lesions than for 1 (OR=2.22, 95% CI 1.96-2.51), and physicians were 5 times less likely to choose an approach that included SRS for a patient with 8 lesions than for a patient with 1 lesion (OR=0.19, 95% CI 0.15-0.23).

Table 6. Results of the logistic regression model showing the reported use of SRS* as part of treatment1 for brain metastases, according to multiple clinical, sociodemographic, and practice setting factors2. 1 Including SRS was defined as use of SRS either alone or with Whole Brain Radiation Therapy (WBRT). 2 The reference patient was a 55 year-old patient with non-small cell lung cancer, inactive extracranial disease, KPS 80%, and an asymptomatic, small brain lesion. 3 Personal experience includes the respondent personally being treated for brain metastases, or having had a friend or family member treated for brain metastases. * Stereotactic radiosurgery. † Whole brain radiation therapy.

Across all clinical vignettes, after adjusting for all other factors, poor KPS (OR=0.38, 95% CI 0.31-0.46), active extracranial disease (OR=0.56, 95% CI 0.47-0.65), and large lesion (OR=0.58, 95% CI 0.47-0.71) remained strongly negatively associated with the choice of SRS, while melanoma histology (OR=2.84, 95% CI 2.45-3.29) and advanced age (OR=1.23, 95% CI 1.07-1.41) were positively associated with the choice of SRS. Physician access was the factor most strongly associated with choosing SRS as part of treatment: respondents with SRS capability in their own practice were more likely to favor its use for hypothetical patients than those without it (OR=2.22, 95% CI 1.46-3.37), and, as expected, physicians who personally used SRS were more likely to recommend it than those who did not have it or use it personally in their practice (OR=3.57, 95% CI 2.42-5.26). Patient volume and physician seniority were examined but were not associated with SRS use.

Discussion: Treatment of patients with brain metastases is heterogeneous. WBRT is a standard therapy, with the addition of surgery or SRS to WBRT, or SRS used alone, reserved for selected patients on the basis of their clinical characteristics. One potential advantage of local therapy may be avoiding the toxicity of WBRT [12-14]. However, SRS used alone has several disadvantages. SRS alone has been shown to be inferior to the combination of SRS with WBRT for durable local control and distant intracranial control [15]. Among patients initially undergoing any local therapy – surgery or SRS – more patients required salvage treatment if treated without WBRT [20]. Long-term cognitive outcomes have been shown to be more closely correlated with intracranial progression than with treatment modality, emphasizing the significance of intracranial control over short-term side effects [21,22]. Given the limited scope of current studies and the variability in outcomes, National Comprehensive Cancer Network (NCCN) guidelines allow for a wide range of treatment options, including WBRT, surgical resection, or SRS, alone or in combination [23]. Previous reviews of treatment patterns have demonstrated stable rates of surgery since the 1980s, with increasing use of SRS [24].
Despite clinical trials limiting eligible patients to those with limited central nervous system disease, a recent survey demonstrated that more than half of physician respondents would consider using SRS as an initial treatment for patients with 5 or more intracranial lesions [25]. The increased utilization of SRS, as well as the persistent heterogeneity in practice, may reflect the time required for research findings to disseminate into clinical practice, or the time required to purchase and adopt new technologies. With mixed evidence and a heterogeneous patient population, treatment decision-making is complex. Significantly, our study demonstrates that although clinical factors, such as number of lesions and patient age, affected treatment selection, physician practice environment had a strong, independent effect on the use of SRS.

Factors related to the patient’s clinical condition affected treatment selection. Use of WBRT increased with the number of lesions, consistent with the lack of evidence supporting local techniques for patients with numerous metastases. However, a substantial proportion of physicians still chose SRS as part of their approach for patients with multiple lesions, particularly for patients with 3 lesions. The increased use of SRS for 3 lesions compared with 1 was possibly due to the use of surgery for a substantial proportion of patients with 1 lesion, and to the use of SRS combined with WBRT in patients with 3 lesions. Interestingly, physicians overall selected WBRT for patients with 1, 3, or 8 lesions more often when patients were frail (increased age, low KPS) and might suffer increased morbidity from WBRT. This finding was unexpected, since WBRT has been shown to cause side effects that might be difficult for frail patients with limited life expectancy to tolerate, such as increasing fatigue, worsening physical function, and deterioration of appetite [7,14,26]. Additional clinical factors that may influence treatment selection were not addressed in this study, including tumor location and surgical accessibility; treatment options not evaluated include SRS in combination with surgery, chemotherapy, and the role of hospice.

Practice environment and clinical expertise also influenced the use of SRS, even when controlling for clinical factors. Although practice type was not associated with the preference for SRS, the availability of SRS was significantly associated with its use, indicating that patients are more likely to receive this treatment if the physician they see practices it herself or has it available within her practice. This pattern of care could lead to under- or over-utilization of SRS: patients may have treatment guided more by a provider’s practice than by the patient’s clinical condition. Previous studies have demonstrated associations of physician specialization, board certification, treatment volume, and time in practice with other cancer-related treatment decisions [27,28]. For example, use of diagnostic imaging has increased when such imaging is performed at a self-referred facility [29]. Similarly, radiation oncologists may prescribe complex treatment approaches more frequently when they have access to the necessary facilities or equipment. Alternatively, this propensity for increased use of SRS with easy access may relate to physicians’ familiarity with their own clinical outcomes when using new technology. Our respondents may also have rates of access to SRS that are not comparable to those available nationwide, since the ACR survey did not report on the availability of SRS equipment.

Our study has several limitations due to its reliance on physician self-report as a proxy for practice, its timing, and the limited number of respondents. Clinical scenarios were hypothetical and treatment options were limited. Although physician surveys have shown a strong correlation between vignettes and actual practice [18], further objective validation of these data would be desirable, as the vignettes used in this survey were novel. Respondents to this survey were predominantly radiation oncologists, whose treatment decisions may be greatly influenced by other members of the interdisciplinary oncology team not represented in this survey. Rates of radiosurgery utilization more than doubled between 2000 and 2005, so continued increases in the use of radiosurgery could have occurred since the completion of this survey [30]. Additional research published since 2008 may have resulted in further shifts in practice patterns. The limited number of respondents to our survey limits the generalizability of our findings. The response rate of 6% may indicate that the practice patterns outlined in this study are specific to a subgroup of clinicians with particular interest or expertise in radiosurgery and may not be indicative of global patterns of care. Although respondents were similar to those in the ACR survey, the comparison is limited by the nature of the variables available; key items, such as expertise with SRS or volume of patients with brain metastases, were not available in the ACR survey for comparison. However, ours is the first study to document practice patterns using vignettes in this clinical setting.

Conclusions: Although many patients with cancer develop brain metastases, there are limited data to guide treatment decisions. Our study demonstrates significant heterogeneity among radiation oncologists in general clinical practice, even for patients with identical clinical characteristics. Certain non-clinical factors, such as access to SRS, appear to be key drivers of the use of advanced technology.
This finding raises the question of what additional incentives could be driving treatment selection in the absence of gold-standard evidence of the superiority of a single approach over the alternatives. Our findings from this survey also underscore the likely uncertainty or disagreement among radiation oncologists about the relative harms and benefits of different treatment approaches. This uncertainty is likely related to the lack of prospective randomized studies comparing specific single- and multi-modality approaches for the treatment of brain metastases. More research is needed that directly compares the effectiveness of these approaches across a variety of clinical circumstances. It would also be important to investigate underlying non-clinical factors, such as physician environment, reimbursement, and technology access, which likely contribute to the observed heterogeneity of care for patients with brain metastases.

Abbreviations: WBRT: Whole brain radiation therapy; SRS: Stereotactic radiosurgery; KPS: Karnofsky performance status; RPA: Recursive partitioning analysis; ASTRO: American Society for Therapeutic Radiation Oncology; ACR: American College of Radiology; GEE: Generalized estimating equation; NCCN: National Comprehensive Cancer Network.

Competing interests: Dr. Ramakrishna has received speaker’s honoraria from and prepared educational materials for Brainlab AG, Heimstetten, Germany. The remaining authors have no conflicts of interest to disclose.

Authors’ contributions: NR and MK conceived of the study, designed the survey, and completed data collection. MK, KU, SM, and AP performed statistical analysis and data interpretation. MK, SM, and AP drafted the manuscript. All authors read and approved the final manuscript.

Supplementary Material: Appendix 1. Complete physician survey. Appendix 2. Odds Ratios and Confidence Intervals Comparing the Odds of Treatment Choices for Different Patient Characteristics.
Background: Limited data guide radiotherapy choices for patients with brain metastases. This survey aimed to identify patient, physician, and practice setting variables associated with reported preferences for different treatment techniques. Methods: 277 members of the American Society for Radiation Oncology (6% of surveyed physicians) completed a survey regarding treatment preferences for 21 hypothetical patients with brain metastases. Treatment choices included combinations of whole brain radiation therapy (WBRT), stereotactic radiosurgery (SRS), and surgery. Vignettes varied histology, extracranial disease status, Karnofsky Performance Status (KPS), presence of neurologic deficits, lesion size and number. Multivariate generalized estimating equation regression models were used to estimate odds ratios. Results: For a hypothetical patient with 3 lesions or 8 lesions, 21% and 91% of physicians, respectively, chose WBRT alone, compared with 1% selecting WBRT alone for a patient with 1 lesion. 51% chose WBRT alone for a patient with active extracranial disease or KPS=50%. 40% chose SRS alone for an 80 year-old patient with 1 lesion, compared to 29% for a 55 year-old patient. Multivariate modeling detailed factors associated with SRS use, including availability of SRS within one's practice (OR 2.22, 95% CI 1.46-3.37). Conclusions: Poor prognostic factors, such as advanced age, poor performance status, or active extracranial disease, correspond with an increase in physicians' reported preference for using WBRT. When controlling for clinical factors, equipment access was independently associated with choice of SRS. The large variability in preferences suggests that more information about the relative harms and benefits of these options is needed to guide decision-making.
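The abstract's odds ratios (e.g., OR 2.22, 95% CI 1.46-3.37 for on-site SRS availability) come from multivariate GEE models. As a simplified, hypothetical illustration of the underlying quantity, the sketch below computes a crude odds ratio and a Wald-type 95% CI from a 2×2 table; the counts are made up for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald-type 95% CI from a 2x2 table:
    a/b = outcome present/absent among the exposed,
    c/d = outcome present/absent among the unexposed."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Made-up counts: physicians choosing SRS vs. not, with/without on-site SRS
print(odds_ratio_ci(60, 40, 30, 50))  # OR = 2.5 with its 95% CI
```

The paper's GEE models additionally adjust for covariates and for clustering of the 21 vignette responses within each physician, which this crude calculation does not capture.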
Background: Brain metastases are the most common intracranial tumor, occurring in 20-40% of cancer patients and accounting for 20% of cancer deaths annually [1]. Median survival is 1–2 months with corticosteroids alone [2] or six months with whole brain radiation therapy (WBRT) [3,4]. A major advance in the treatment of these patients was the addition of surgery to WBRT for treatment of a single metastasis, which improved local control, distant intracranial control, and neurologic survival compared to either modality alone [5,6]. A retrospective study demonstrated differential survival among patients undergoing WBRT according to recursive partitioning analysis (RPA) classes [7]; further prognostic refinements have incorporated histology and number of lesions [8]. More recently, stereotactic radiosurgery (SRS) has been used alone or with WBRT in patients with up to 4 metastases. When compared with WBRT alone, the addition of SRS has improved local control, functional autonomy, and survival [5,9-11]. However, WBRT can have significant toxicities, including fatigue, drowsiness, and suppressed appetite, and long-term difficulties with learning, memory, concentration, and depression [12-14]. The use of SRS alone controls limited disease and delays the time until WBRT is necessary for distant intracranial progression [12,15,16]. In most clinical trials of therapies for brain metastases, patients have been selected on the basis of having few metastases, stable extracranial disease, and excellent performance status. In clinical practice, patients with brain metastases are a heterogeneous population, and decision-making requires the synthesis of multiple variables. The objective of this survey of radiation oncologists was to identify patient factors, physician characteristics, and practice setting variables associated with physicians’ preferred use of different techniques for treating brain metastases.
This survey aimed to generate data that would allow physicians to: (1) compare their practice patterns to a national sample; (2) assess the influence of their practice environment on treatment choice; and (3) generate new hypotheses regarding appropriate treatment.
10,202
323
18
[ "srs", "wbrt", "treatment", "patient", "respondents", "clinical", "brain", "patients", "physician", "survey" ]
[ "test", "test" ]
[CONTENT] Brain metastases | Stereotactic radiosurgery | Whole brain radiation therapy | Treatment patterns | Physician survey [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Brain Neoplasms | Carcinoma, Non-Small-Cell Lung | Choice Behavior | Combined Modality Therapy | Cranial Irradiation | Data Collection | Demography | Female | Humans | Karnofsky Performance Status | Lung Neoplasms | Male | Melanoma | Middle Aged | Neurosurgery | Patient Selection | Physicians | Professional Practice | Professional Practice Location | Radiosurgery | Self Report | Socioeconomic Factors [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] srs | wbrt | treatment | patient | respondents | clinical | brain | patients | physician | survey [SUMMARY]
[CONTENT] survival | metastases | wbrt | control | brain | patients | intracranial | brain metastases | practice | months [SUMMARY]
[CONTENT] treatment | models | survey | srs | wbrt | physician | category | clinical | acr | correlation [SUMMARY]
[CONTENT] srs | wbrt | respondents | 95 | 95 ci | ci | brain | patient | table | radiation therapy [SUMMARY]
[CONTENT] approaches | clinical | non clinical | non clinical factors | uncertainty | likely | heterogeneity | treatment | brain metastases | brain [SUMMARY]
[CONTENT] srs | wbrt | treatment | respondents | patient | odds | brain | clinical | patients | metastases [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 277 | the American Society for Radiation Oncology | 6% | 21 ||| WBRT ||| Karnofsky Performance Status | KPS ||| [SUMMARY]
[CONTENT] 3 | 8 | 21% and 91% | WBRT | 1% | WBRT | 1 ||| 51% | WBRT | KPS=50% ||| 40% | SRS | 80 year-old | 1 | 29% | 55 year-old ||| SRS | SRS | 2.22 | 95% | CI | 1.46-3.37 [SUMMARY]
[CONTENT] WBRT ||| SRS ||| [SUMMARY]
[CONTENT] ||| ||| 277 | the American Society for Radiation Oncology | 6% | 21 ||| WBRT ||| Karnofsky Performance Status | KPS ||| ||| 3 | 8 | 21% and 91% | WBRT | 1% | WBRT | 1 ||| 51% | WBRT | KPS=50% ||| 40% | SRS | 80 year-old | 1 | 29% | 55 year-old ||| SRS | SRS | 2.22 | 95% | CI | 1.46-3.37 ||| WBRT ||| SRS ||| [SUMMARY]
Birth preparedness and complication readiness among women of child bearing age group in Goba woreda, Oromia region, Ethiopia.
25132227
Birth preparedness and complication readiness is the process of planning for normal birth and anticipating the actions needed in case of an emergency. It is also a strategy to promote the timely use of skilled maternal care, especially during childbirth, based on the theory that preparing for childbirth reduces delays in obtaining this care. Therefore, the aim of this study was to assess birth preparedness and complication readiness among women of child bearing age group in Goba woreda, Oromia region, Ethiopia.
BACKGROUND
A community based cross sectional study was conducted in Goba woreda, Oromia region, Ethiopia. Multistage sampling was employed. Descriptive statistics and binary and multiple logistic regression analyses were conducted. Statistical significance was declared at P < 0.05.
METHODS
Only 29.9% of the respondents were prepared for birth and its complications, and only 82 (14.6%) study participants were knowledgeable about birth preparedness and complication readiness. Variables with a statistically significant association with women's birth preparedness and complication readiness were attending up to primary education (AOR = 3.24, 95% CI = 1.75, 6.02), attending secondary or higher education (AOR = 2.88, 95% CI = 1.34, 6.15), antenatal care follow up (AOR = 8.07, 95% CI = 2.41, 27.00), knowledge of key danger signs during pregnancy (AOR = 1.74, 95% CI = 1.06, 2.88), and knowledge of key danger signs during the postpartum period (AOR = 2.08, 95% CI = 1.20, 3.60).
RESULTS
Only a small number of respondents were prepared for birth and its complications. Furthermore, the vast majority of women were not knowledgeable about birth preparedness and complication readiness. Residence, educational status, ANC follow up, knowledge of key danger signs during pregnancy and the postpartum period were independent predictors of birth preparedness and complication readiness.
CONCLUSIONS
[ "Adult", "Cholestyramine Resin", "Cross-Sectional Studies", "Educational Status", "Ethiopia", "Female", "Health Knowledge, Attitudes, Practice", "Humans", "Parturition", "Pregnancy", "Pregnancy Complications", "Prenatal Care", "Rural Population", "Urban Population", "Young Adult" ]
4148918
Background
Globally, maternal mortality remains a public health challenge [1]. The World Health Organization (WHO) estimated that 529,000 women die annually from maternal causes. Ninety-nine percent of these deaths occur in less developed countries. The situation is most dire for women in Sub-Saharan Africa, where one of every 16 women dies of pregnancy-related causes during her lifetime, compared with only 1 in 2,800 women in developed regions [2]. The global maternal mortality ratio (MMR) decreased from 422 (358-505) in 1980 to 320 (272-388) in 1990, and was 251 (221-289) per 100,000 live births in 2008. The yearly rate of decline of the global MMR since 1990 was 1.3% (1.0-1.5). More than 50% of all maternal deaths in 2008 occurred in only six countries (India, Nigeria, Pakistan, Afghanistan, Ethiopia, and the Democratic Republic of the Congo) [3]. In Ethiopia, according to the 2011 Ethiopia Demographic and Health Survey (EDHS) report, the MMR remains high at 676 deaths per 100,000 live births. However, only 10% of births in Ethiopia occur in a health facility, while 90% of women deliver at home. As reasons for not delivering at a health facility, more than six women in ten (61%) stated that a facility delivery was not necessary, and three in every ten (30%) stated that it was not customary [4]. Birth preparedness and complication readiness (BP/CR) is a comprehensive package aimed at promoting timely access to skilled maternal and neonatal services. It promotes active preparation and decision making for delivery by pregnant women and their families. This stems from the fact that every pregnant woman faces the risk of sudden, unpredictable, life-threatening complications that could end in death or injury to herself or her infant. BP/CR is a relatively common strategy employed by numerous groups implementing safe motherhood programs worldwide [5, 6]. In many societies, cultural beliefs and lack of awareness inhibit advance preparation for delivery and the expected baby.
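The quoted figures are internally consistent: the 1.3% yearly rate of decline since 1990 can be recovered from the 1990 and 2008 MMR point estimates, assuming a constant exponential rate of change.

```python
mmr_1990, mmr_2008 = 320, 251            # MMR per 100,000 live births [3]
years = 2008 - 1990
# Average annual rate of decline under constant exponential change
annual_decline = 1 - (mmr_2008 / mmr_1990) ** (1 / years)
print(f"{annual_decline:.1%}")  # → 1.3%
```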
Since no action is taken before delivery, the family tries to act only when labor begins. The majority of pregnant women and their families do not know how to recognize the danger signs of complications. When complications occur, the unprepared family wastes a great deal of time recognizing the problem, getting organized, getting money, finding transport, and reaching the appropriate referral facility [7]. It is difficult to predict which pregnancy, delivery, or post-delivery period will experience complications; hence a birth preparedness and complication readiness plan is recommended, on the notion that every pregnancy carries risk [6]. The BP/CR strategy encourages women to be informed of the danger signs of obstetric complications and emergencies, choose a preferred birth place and attendant at birth, make advance arrangements with the attendant, arrange transport to a skilled care site in case of emergency, save or arrange alternative funds for the costs of skilled and emergency care, and find a companion to be with the woman at birth or to accompany her to the source of emergency care. Other measures include identifying a compatible blood donor in case of hemorrhage, obtaining permission from the head of household to seek skilled care in the event that a birth emergency occurs in his absence, and arranging a source of household support to provide temporary family care during her absence [6, 8]. Despite various efforts by governmental and nongovernmental organizations working on maternal issues, pregnant women have not been found to be well prepared for birth and its complications. For example, only 47.8% of women who had already given birth in Indore city, India [9], and 35% of pregnant women in Uganda [10] were prepared for birth and its complications.
Additionally, according to research done in some parts of Ethiopia, only 22% of pregnant women in Adigrat town [7] and 17% of pregnant women in Aleta Wondo in the southern region [11] were prepared for birth and its complications. Although some studies have been conducted on this issue in Ethiopia, they mainly cover urban areas, and there are differences in the socio-demographic and cultural conditions of the country's different regions. Therefore, the aim of this study was to assess birth preparedness and complication readiness among women of child bearing age group in Goba woreda, Oromia region, Ethiopia.
Methods
Study area and period The study was conducted in Goba woreda, Bale zone, from April 1-28, 2013. Bale zone is administratively divided into woredas and smaller kebeles. Goba woreda is one of the woredas in Bale zone, Oromia region, Ethiopia, located 444 km from Addis Ababa. Currently, the woreda has 24 rural and 2 urban kebeles. Based on figures from the 2007 census, the woreda has an estimated total population of 73,653, of whom 37,427 were females and 32,916 (44.7%) were urban dwellers [12]. The estimated total numbers of women of reproductive age and of pregnant women in the woreda were 16,277 and 2,725, respectively. The woreda has one health post in each kebele, 4 health centers, one hospital, and a blood bank. One ambulance from the hospital, as well as the Red Cross society, serves the community by providing transportation from home to the health facility ([13], Goba woreda health office: Annual health report, unpublished). Study design A community based cross sectional study was conducted.
Women who gave birth in the last 12 months, regardless of their birth outcome, were included in the study; women who were severely ill or mentally or physically incapable of being interviewed were excluded. Sample size and sampling procedure Sample size The study employed the single population proportion sample size determination formula, using a 22% proportion (p) of birth preparedness and complication readiness [7], a 95% confidence interval, and a 5% margin of error (where n is the desired sample size, Z is the value of the standard normal variable at the 95% confidence interval, p is the proportion of birth preparedness and complication readiness, and d is the 5% margin of error). Since multistage sampling was employed, the calculated sample was multiplied by a design effect of 2 to control for the effect of using a sampling method other than simple random sampling, and a 10% contingency for non-response was added. The final sample size was therefore 580. Sampling procedure Multistage sampling was employed to select study subjects.
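The sample size arithmetic described above can be reproduced directly (Z = 1.96 for a 95% confidence interval):

```python
z, p, d = 1.96, 0.22, 0.05        # 95% CI, proportion from [7], 5% margin of error
n0 = z**2 * p * (1 - p) / d**2    # single population proportion formula
n = n0 * 2                        # design effect of 2 for multistage sampling
n_final = n * 1.10                # plus 10% contingency for non-response
print(round(n0), round(n_final))  # → 264 580
```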
First, all kebeles in the woreda were stratified into urban and rural. Goba woreda comprises 24 rural kebeles and one town administration (consisting of two urban kebeles), making a total of 26 kebeles. To obtain a representative and adequate sample of kebeles for the woreda, one third of the kebeles were selected. On this basis, 9 kebeles were chosen by simple random sampling from the total of 26: according to the urban/rural strata, one kebele from the 2 urban kebeles and 8 from the 24 rural kebeles. Then, the total sample size (n = 580) was allocated proportionally to the size of the selected kebeles. Finally, systematic sampling was employed to select the study subjects in each kebele until the desired number was obtained. To select the first household in each kebele, a landmark common to almost all kebeles, the health post, was identified. A pen was spun and the direction pointed to by its tip was followed. The first household was selected by simple random sampling (lottery method) from the houses within the initial sampling interval of that kebele. Subsequent households were then selected systematically, at every Kth interval, with K calculated separately for each kebele because the number of households varies from kebele to kebele. When a study participant could not be interviewed for some reason (e.g., absenteeism), up to three attempts were made, after which she was considered a non-respondent. If a household did not include a woman who met the inclusion criteria, the next household was substituted. Moreover, if a household contained more than one candidate, one of them was selected randomly by the lottery method.
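The per-kebele selection described above can be sketched as follows. The household count and per-kebele quota below are hypothetical; only the Kth-interval rule and the random start within the first interval follow the text.

```python
import random

def systematic_sample(n_households, n_needed, seed=None):
    """Select every Kth household after a random start in the first interval."""
    k = n_households // n_needed        # sampling interval K for this kebele
    rng = random.Random(seed)
    start = rng.randrange(k)            # first household by 'lottery' within interval
    return [start + i * k for i in range(n_needed)]

# e.g. a hypothetical kebele with 900 households contributing 60 interviews (K = 15)
sample = systematic_sample(900, 60, seed=1)
print(len(sample), sample[:3])
```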
Moreover, if a household contained more than one eligible woman, one of them was selected randomly by the lottery method.

Operational and term definitions

Birth prepared and complication ready: a woman was considered prepared for birth and its complications if she identified four or more components from the birth preparedness and complication readiness items [7].

Skilled provider: persons with midwifery skills (physicians, nurses, midwives, and health officers) who can manage normal deliveries and diagnose, manage, or refer obstetric complications.

Knowledge of key danger signs of pregnancy: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of pregnancy; otherwise she was considered not knowledgeable.

Knowledge of key danger signs of labor: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of labor; otherwise not knowledgeable.

Knowledge of key danger signs of the postpartum period: a woman was considered knowledgeable if she spontaneously mentioned at least two of the three key danger signs of the postpartum period; otherwise not knowledgeable.

Knowledge of birth preparedness and complication readiness: a woman was considered knowledgeable if she spontaneously mentioned at least four items of the birth preparedness and complication readiness knowledge questions; otherwise not knowledgeable.

Data collection tool and procedure

A validated structured questionnaire adapted from the survey tools developed by the JHPIEGO Maternal and Neonatal Health program was used. Information on socio-demographic characteristics was collected. Additionally, women were asked about their knowledge of key danger signs during pregnancy, delivery, and the postpartum period, and about actual problems requiring referral. Furthermore, the study subjects were asked about their BP/CR practice, with their spontaneous answers used to check whether they had practiced the operationally defined BP/CR components. These were: identifying a place of delivery, planning for a skilled assistant during delivery, saving money for an obstetric emergency, planning a mode of transport to the place of delivery during an emergency, planning a blood donor for an obstetric emergency, detecting early signs of an emergency, and identifying an institution with 24-hour emergency obstetric care services. Eight diploma nurses fluent in Amharic and Afan Oromo collected the data through face-to-face interviews. In addition to the principal investigators, four Bachelor of Science (BSc) health professionals were recruited to supervise the data collection process.
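The operational classification (prepared if four or more of the seven BP/CR components were practiced) can be sketched as below; the component labels are shorthand for the questionnaire items, not the exact wording.

```python
def classify_bpcr(practiced_components):
    """Classify a respondent per the study's operational definition:
    'prepared' if she practiced four or more of the seven BP/CR components."""
    # The seven operationally defined BP/CR components.
    components = {
        "identified place of delivery",
        "planned skilled assistant",
        "saved money",
        "planned transport",
        "planned blood donor",
        "detected early danger signs",
        "identified 24-hour EmOC facility",
    }
    n = len(components & set(practiced_components))
    return "prepared" if n >= 4 else "not prepared"
```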
Data quality control

The quality of the data was assured by using a validated questionnaire and by translation, back-translation, and pretesting of the questionnaire. The questionnaire was translated from English into Amharic and Afan Oromo by one translator and back into English by other translators, who were health professionals, to check its consistency. The pretest was done on 5% of the total sample size in Robe town. Content and face validity were checked by a reproductive health expert. Additionally, after the pretest, Cronbach's alpha was calculated to check the internal consistency of the tool; its value for the birth preparedness and complication readiness items was 0.86.
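The internal-consistency check can be sketched with the standard Cronbach's alpha formula (a minimal pure-Python illustration, not the study's actual computation):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for respondents' item scores
    (rows = respondents, columns = items)."""
    k = len(item_scores[0])  # number of items
    def variance(values):    # sample variance (n - 1 denominator)
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)
    item_vars = [variance([row[j] for row in item_scores]) for j in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

A value of 0.86, as reported for the BP/CR items, indicates good internal consistency.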
Data collectors and supervisors were trained for two days on the study instrument and data collection procedures. The principal investigator and the supervisors closely monitored the overall data collection activities.

Data processing and analysis

The data were checked for completeness and consistency, then cleaned, coded, and entered into a computer using the Statistical Package for the Social Sciences (SPSS) for Windows, version 16.0. Descriptive statistics were computed to determine the prevalence of birth preparedness and complication readiness. Binary logistic regression analysis was performed to identify factors associated with birth preparedness and complication readiness. Then, to control for possible confounders, multiple logistic regression was computed with a 95% confidence interval. A p value < 0.05 in the binary logistic regression was used both to select candidate variables for the multiple logistic regression analysis and to declare variables statistically significant.
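As a minimal illustration of the descriptive step (the study itself used SPSS 16.0), the prevalence of preparedness with a normal-approximation 95% CI can be computed as follows, using the study's reported counts of 168 prepared women out of 562 respondents:

```python
import math

def prevalence_with_ci(n_events, n_total, z=1.96):
    """Point prevalence and normal-approximation (Wald) 95% CI."""
    p = n_events / n_total
    se = math.sqrt(p * (1 - p) / n_total)
    return p, (p - z * se, p + z * se)

p, (lo, hi) = prevalence_with_ci(168, 562)  # prevalence of BP/CR, about 29.9%
```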
Ethical consideration

The proposal was approved by the Ethical Review Committee of the College of Medicine and Health Sciences of Madawalabu University. Letters of permission were obtained from the Bale zone health department and the Goba woreda health office. Consent was obtained from the study subjects after the study objectives and procedures were explained, and their right to refuse or to withdraw from the study at any time was assured.
Results
Out of the 580 women planned for the study, 8 (1.4%) refused to respond, 10 (1.7%) could not be found after three visits, and 562 were successfully interviewed, yielding a response rate of 97%.

Socio-demographic characteristics

The mean age of the respondents was 26.6 (SD ± 5.9) years. The largest group of respondents, 33.8%, was aged 21-25 years, and the fewest were aged 20 years or younger. Muslim and Orthodox Tewahido were the dominant religions, each accounting for 49.1%. Most participants, 266 (47.3%), were educated up to primary school, followed by those who had not attended formal education, 161 (28.6%). Four hundred ten (73%) respondents were Oromo, followed by Amhara, 139 (24.7%). The vast majority (85.2%) of respondents were housewives, and 531 (94.5%) were married. The mean family size and monthly family income were 5.0 (SD ± 2.1) and 1267.9 (SD ± 1298.7) Ethiopian Birr, respectively. About 224 (39.9%) study subjects had no financial income of their own (Table 1).

Table 1: Socio-demographic characteristics of the respondents, Goba woreda, Oromia region, Ethiopia, April, 2013

Variable                      | Frequency | Percent
Residence
  Urban                       | 116       | 20.6
  Rural                       | 446       | 79.4
Age
  ≤ 20                        | 94        | 16.7
  21-25                       | 190       | 33.8
  26-30                       | 164       | 29.2
  > 30                        | 114       | 20.3
  Mean (± SD)                 | 26.6 (± 5.9)
Religion
  Muslim                      | 276       | 49.1
  Orthodox                    | 276       | 49.1
  Protestant                  | 7         | 1.2
  Other*                      | 3         | 0.5
Marital status
  Married/in union            | 531       | 94.5
  Single                      | 14        | 2.1
  Widowed                     | 8         | 1.4
  Divorced                    | 6         | 1.1
  Separated                   | 3         | 0.5
Ethnicity
  Oromo                       | 410       | 73.0
  Amhara                      | 139       | 24.7
  Tigre                       | 2         | 0.4
  Gamo                        | 2         | 0.4
  Other**                     | 9         | 1.6
Educational level
  None                        | 161       | 28.6
  Read and write              | 16        | 2.8
  Primary                     | 266       | 47.3
  Secondary and above         | 119       | 21.2
Occupation
  Housewife                   | 479       | 85.2
  Gov't employee              | 21        | 3.7
  Private employee            | 14        | 2.5
  Merchant                    | 42        | 7.7
  Other***                    | 6         | 1.1
Monthly income of the women+
  None                        | 224       | 39.9
  (None-100]                  | 56        | 10.0
  101-300                     | 135       | 24.0
  ≥ 301                       | 147       | 26.2
Total family income+
  ≤ 100                       | 12        | 2.1
  101-300                     | 60        | 10.7
  ≥ 301                       | 487       | 87.1
  Mean (± SD)                 | 1267.9 (± 1298.7)
Family size
  ≤ 4                         | 281       | 50.0
  5-6                         | 162       | 28.8
  ≥ 7                         | 119       | 21.1
  Mean (± SD)                 | 5.0 (± 2.1)

*Catholic and Wakefeta; **Welayita, Gurage; ***daily laborer. +Currency is measured in Ethiopian Birr.

Birth preparedness and complication readiness

Out of the total respondents, 301 (53.6%) reported that they had never heard the term birth preparedness and complication readiness. For the majority of respondents who had heard of it, 61.0%, the source of information was community health workers (Figure 1).

Figure 1: Sources of information about birth preparedness and complication readiness as reported by study participants in Goba woreda, Oromia region, Ethiopia, April, 2013. N.B. Percentages do not sum to 100% because multiple responses were possible.
Among the respondents who knew about birth preparedness and complication readiness, 415 (87.6%) spontaneously mentioned preparing essential items for clean delivery and the postpartum period, followed by saving money, 330 (69.6%) (Table 2). In summary, only 82 (14.6%) study subjects were knowledgeable about birth preparedness and complication readiness.

Table 2: Knowledge of respondents about preparation for birth and its complications, Goba woreda, Ethiopia, April, 2013 (N = 474)

S. No | Variable                                                                    | Frequency | Percent
1     | Identify place of delivery                                                  | 214       | 45.1
2     | Save money                                                                  | 330       | 69.6
3     | Prepare essential items for clean delivery & postpartum period              | 415       | 87.6
4     | Identify skilled provider                                                   | 34        | 7.2
5     | Being aware of the signs of an emergency & the need to act immediately      | 16        | 3.4
6     | Designating a decision maker on her behalf                                  | 4         | 0.8
7     | Arranging a way to communicate with a source of help                        | 33        | 2.7
8     | Arranging emergency funds                                                   | 67        | 5.5
9     | Identify a mode of transportation                                           | 25        | 5.3
10    | Arranging blood donors                                                      | 15        | 3.2
11    | Identifying the nearest institution with 24-hour functioning EmOC services  | 58        | 12.2

N.B. Percentages do not sum to 100% because multiple responses were possible.

Generally, only 168 (29.9%) study subjects were prepared for birth and its complications in their last pregnancy, whereas the remaining 394 (70.1%) were not (Table 3).

Table 3: Practices of respondents on preparation for birth/complications, Goba woreda, Oromia region, Ethiopia, April, 2013

S. No | Variable                                                              | Frequency | Percent
1     | Identified place of delivery                                          | 432       | 76.9
2     | Planned skilled assistant during delivery                             | 297       | 52.8
3     | Saved money for obstetric emergency                                   | 388       | 69.0
4     | Planned a mode of transport to place of delivery during emergency     | 353       | 62.8
5     | Planned blood donor during obstetric emergency                        | 51        | 9.1
6     | Detected early signs of an emergency                                  | 177       | 31.5
7     | Identified institution with 24-hr EmOC services                       | 267       | 47.5

N.B. Percentages do not sum to 100% because multiple responses were possible.
Factors associated with birth preparedness and complication readiness

On binary logistic regression, place of residence, occupation, educational level, family size, ANC follow-up, knowledge of danger signs during pregnancy, labour, and the postnatal period, as well as gravidity and parity, showed statistically significant associations with birth preparedness and complication readiness. Multiple logistic regression analysis was then computed to control for possible confounders and to explore the association between the selected independent variables and birth preparedness and complication readiness.

The odds of birth preparedness and complication readiness were two times greater among urban residents than rural residents (AOR = 2.01, 95% CI = 1.20, 3.36). Maternal education was also a predictor: the odds among women who had attended primary education, and secondary or higher education, were roughly three times those of women with no formal education (AOR = 3.24, 95% CI = 1.75, 6.02 and AOR = 2.88, 95% CI = 1.34, 6.15, respectively). Furthermore, the odds were eight times greater among women with ANC follow-up than among those without (AOR = 8.07, 95% CI = 2.41, 27.00). The odds among women knowledgeable about key danger signs during pregnancy were nearly two times those of women who were not knowledgeable (AOR = 1.74, 95% CI = 1.06, 2.88). Similarly, the odds among respondents knowledgeable about key danger signs during the postpartum period were two times those of respondents who lacked this knowledge (AOR = 2.08, 95% CI = 1.20, 3.60) (Table 4).

Table 4: Association of selected socio-demographic and obstetric factors of respondents with preparation for birth and its complications, Goba woreda, Oromia region, Ethiopia, April, 2013

Variable                | Not prepared (%) | Prepared (%) | COR (95% CI)        | AOR (95% CI)
Residence
  Urban                 | 57 (49.1)        | 59 (50.9)    | 3.2 (2.09, 4.88)    | 2.01 (1.20, 3.36)
  Rural                 | 337 (75.6)       | 109 (24.4)   | 1                   | 1
Marital status
  In marital union      | 370 (69.7)       | 161 (30.3)   | 1.49 (0.63, 3.53)   |
  Not in marital union  | 24 (77.4)        | 7 (22.6)     | 1                   |
Occupation
  Housewife             | 349 (72.9)       | 130 (27.1)   | 1                   | 1
  Gov't employee        | 9 (42.9)         | 12 (57.1)    | 3.57 (1.47, 8.69)   | 1.39 (0.48, 4.04)
  Private employee      | 9 (64.3)         | 5 (35.7)     | 1.49 (0.49, 4.53)   | 1.15 (0.33, 4.01)
  Merchant              | 24 (57.1)        | 18 (42.9)    | 2.01 (1.05, 3.83)   | 1.45 (0.68, 3.07)
  Other***              | 3 (50)           | 3 (50)       | 2.68 (0.53, 13.4)   | 1.69 (0.22, 13.11)
Age
  ≤ 20                  | 70 (74.5)        | 24 (25.5)    | 1                   |
  21-25                 | 125 (65.8)       | 65 (34.2)    | 1.51 (0.87, 2.63)   |
  26-30                 | 119 (72.6)       | 45 (27.4)    | 1.10 (0.62, 1.96)   |
  > 30                  | 80 (70.2)        | 34 (29.8)    | 1.24 (0.67, 2.28)   |
Educational level
  None                  | 144 (89.4)       | 17 (10.6)    | 1                   | 1
  Read and write        | 12 (75)          | 4 (25)       | 2.82 (0.81, 9.73)   | 1.96 (0.49, 7.79)
  Primary               | 173 (65)         | 93 (35)      | 4.55 (2.59, 7.99)   | 3.24 (1.75, 6.02)
  Secondary and above   | 65 (54.6)        | 54 (45.4)    | 7.03 (3.79, 13.06)  | 2.88 (1.34, 6.15)
Total family income
  ≤ 100                 | 11 (91.7)        | 1 (8.3)      | 3.34 (0.39, 28.24)  |
  101-300               | 46 (76.7)        | 14 (23.3)    | 5.03 (0.64, 39.37)  |
  ≥ 301                 | 334 (68.6)       | 153 (31.4)   | 1                   |
Family size
  ≤ 4                   | 177 (63.9)       | 100 (36.1)   | 2.35 (1.40, 3.95)   | 1.19 (0.46, 3.04)
  5-6                   | 117 (72.7)       | 44 (27.3)    | 1.57 (0.88, 2.78)   | 1.22 (0.61, 2.46)
  ≥ 7                   | 96 (80.7)        | 23 (19.3)    | 1                   | 1
ANC follow-up
  Yes                   | 313 (65.5)       | 165 (34.5)   | 14.23 (4.42, 45.75) | 8.07 (2.41, 27.00)
  No                    | 81 (96.4)        | 3 (3.6)      | 1                   | 1
Knowledge of danger signs during pregnancy
  Not knowledgeable     | 92 (51.4)        | 87 (48.6)    | 1                   | 1
  Knowledgeable         | 302 (78.9)       | 81 (21.1)    | 3.52 (2.40, 5.16)   | 1.74 (1.06, 2.88)
Knowledge of danger signs during labour
  Not knowledgeable     | 321 (78.3)       | 89 (21.7)    | 1                   | 1
  Knowledgeable         | 73 (48)          | 79 (52)      | 3.90 (2.62, 5.79)   | 1.66 (0.97, 2.84)
Knowledge of danger signs during postnatal period
  Not knowledgeable     | 336 (76.7)       | 102 (23.3)   | 1                   | 1
  Knowledgeable         | 58 (46.8)        | 66 (53.2)    | 3.74 (2.47, 5.68)   | 2.08 (1.20, 3.60)
Gravidity
  1                     | 103 (64)         | 58 (36)      | 1.86 (1.17, 2.96)   | 0.32 (0.05, 2.02)
  2-3                   | 142 (68.6)       | 65 (31.4)    | 1.51 (0.97, 2.36)   | 0.39 (0.09, 1.61)
  ≥ 4                   | 149 (76.8)       | 45 (23.2)    | 1                   | 1
Parity
  First                 | 109 (64.5)       | 60 (35.5)    | 1.90 (1.19, 3.0)    | 2.64 (0.38, 18.17)
  Second                | 76 (61.8)        | 47 (38.2)    | 2.14 (1.29, 3.54)   | 3.91 (0.81, 18.90)
  Third                 | 67 (77)          | 20 (23)      | 1.03 (0.56, 1.90)   | 1.39 (0.46, 5.62)
  Fourth and above      | 142 (77.6)       | 41 (22.4)    | 1                   | 1
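As a consistency check (a sketch, not part of the original analysis), the crude odds ratio and its Wald 95% CI for ANC follow-up can be reproduced from the cell counts reported for that row with the standard 2×2 formulas:

```python
import math

def crude_or_with_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a, b = prepared / not prepared among exposed;
    c, d = prepared / not prepared among unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log-odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# ANC follow-up: prepared 165 / not prepared 313 (yes) vs 3 / 81 (no)
or_, lo, hi = crude_or_with_ci(165, 313, 3, 81)
# Matches the tabulated COR of 14.23 (4.42, 45.75) up to rounding.
```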
Conclusion
Only a small number of respondents were found to be prepared for birth and its complications in their last pregnancy. Place of residence, educational status, ANC follow-up, and knowledge of key danger signs during pregnancy and the postpartum period were independent predictors of birth preparedness and complication readiness.

Recommendation

The study revealed that only a few women were well prepared for birth and its complications. Therefore, the Ministry of Health, the Oromia region health bureau, the Bale zone health department, the Goba woreda health office, and other partner organizations working on maternal health should intensify their efforts to improve women's birth preparedness and complication readiness. Education was found to be one of the predictors of BP/CR; the Goba woreda health office, in collaboration with other stakeholders such as the Goba woreda education office, should therefore further strengthen efforts to empower women through education. Antenatal care follow-up showed a statistically significant association with birth preparedness and complication readiness, so health professionals should give due emphasis during antenatal care to birth preparedness and complication readiness plans in order to improve access to skilled and emergency obstetric care. Finally, although the majority of women attended ANC, only a very small number of respondents were prepared for birth and its complications; further research on the quality of ANC, with a focus on birth preparedness and complication readiness, is needed to assess whether health professionals appropriately advise and provide health information on the topic.
[ "Globally, maternal mortality remains a public health challenge\n[1]. World Health Organization (WHO) estimated that 529,000 women die annually from maternal causes. Ninety nine percent of these deaths occur in less developed countries. The situation is most dire for women in Sub-Saharan Africa, where one of every 16 women dies of pregnancy related causes during her lifetime, compared with only 1 in 2,800 women in developed regions\n[2].\nThe global maternal mortality ratio (MMR) decreased from 422 (358-505) in 1980 to 320 (272-388) in 1990, and was 251 (221-289) per 100 000 live births in 2008. The yearly rate of decline of the global MMR since 1990 was 1.3% (1.0-1.5). More than 50% of all maternal deaths were in only six countries in 2008 (India, Nigeria, Pakistan, Afghanistan, Ethiopia, and the Democratic Republic of the Congo)\n[3]. In Ethiopia, according to 2011 Ethiopia Demographic and Health Survey (EDHS) report, MMR remains high which is 676 deaths per 100,000 live births. However, only 10% of births in Ethiopia occur in health facility while 90% women deliver at home. Reasons for not delivering at health facility for more than six women in ten (61%) stated that a health facility delivery was not necessary, and three in every ten (30%) stated that it was not customary\n[4].\nBirth preparedness and complication readiness (BP/CR) is a comprehensive package aimed at promoting timely access to skilled maternal and neonatal services. It promotes active preparation and decision making for delivery by pregnant women and their families. This stems from the fact that every pregnant woman faces risk of sudden and unpredictable life threatening complications that could end in death or injury to herself or to her infant. 
BP/CR is a relatively common strategy employed by numerous groups implementing safe motherhood programs around the world\n[5, 6].\nIn many societies around the world, cultural beliefs and lack of awareness inhibit advance preparation for delivery and the expected baby. Since no action is taken prior to the delivery, the family tries to act only when labor begins. The majority of pregnant women and their families do not know how to recognize the danger signs of complications. When complications occur, the unprepared family will waste a great deal of time in recognizing the problem, getting organized, getting money, finding transport and reaching the appropriate referral facility\n[7].\nIt is difficult to predict which pregnancy, delivery or post delivery period will experience complications; hence a birth preparedness and complication readiness plan is recommended with the notion that every pregnancy carries risk\n[6]. The BP/CR strategy encourages women to be informed of the danger signs of obstetric complications and emergencies, choose a preferred birth place and attendant at birth, make advance arrangements with the attendant at birth, arrange for transport to a skilled care site in case of emergency, save or arrange alternative funds for the costs of skilled and emergency care, and find a companion to be with the woman at birth or to accompany her to a source of emergency care. Other measures include identifying a compatible blood donor in case of hemorrhage, obtaining permission from the head of household to seek skilled care in the event that a birth emergency occurs in his absence, and arranging a source of household support to provide temporary family care during her absence\n[6, 8].\nDespite various efforts from government as well as nongovernmental organizations working on maternal issues, pregnant women have not been found to be well prepared for birth and its complications. 
For example, only 47.8% of women who had already given birth in Indore city in India\n[9] and 35% of pregnant women in Uganda were prepared for birth and its complications\n[10]. Additionally, according to research done in some parts of Ethiopia, only 22% of pregnant women in Adigrat town\n[7] and 17% of pregnant women in Aleta Wondo of the southern region\n[11] were prepared for birth and its complications. Even though some studies have been conducted on this issue in Ethiopia, they mainly cover urban areas. Additionally, there are some differences in the socio demographic and cultural conditions of the different regions of the country. Therefore, the aim of this study was to assess birth preparedness and complication readiness among women of childbearing age in Goba woreda, Oromia region, Ethiopia.", "The study was conducted in Goba woreda, Bale zone from April 1-28, 2013. Bale zone is administratively divided into woredas and smaller kebeles. Goba woreda is one of the woredas in Bale zone, Oromia region of Ethiopia and is located 444 km from Addis Ababa. Currently, the woreda has 24 rural and 2 urban kebeles. Based on figures obtained from the 2007 census, this woreda has an estimated total population of 73,653, of whom 37,427 were females; 32,916 (44.7%) of the population were urban dwellers\n[12]. The estimated total numbers of women of reproductive age and pregnant women in the woreda were 16,277 and 2,725 respectively. The woreda has one health post in each kebele, 4 health centers, one Hospital and a Blood Bank. One ambulance from the hospital, as well as the Red Cross society, serves the community in providing transportation from home to the health facility (\n[13], Goba woreda health office: Annual health report, unpublished).", "A community based cross sectional study was conducted. 
Women who gave birth in the last 12 months, regardless of their birth outcome, were included in the study, and women who were severely ill or mentally and physically not capable of being interviewed were excluded from the study.", " Sample size The study employed single population proportion sample size determination formula. Twenty two percent proportion (p) of birth preparedness and complication readiness\n[7] with 95% CI, and 5% marginal error (where n is desired sample size, Z is value of standard normal variable at 95% confidence interval and, p is proportion of Birth preparedness and complication readiness and d is marginal error which is 5%) was considered to calculate the sample size. Since multistage sampling was employed, the calculated sample was multiplied by 2 for design effects to control the effect of sampling that could happen due to using sampling method other than simple random sampling, and 10% contingency for non respondents were also added. After all, the final sample size became 580.", "The study employed single population proportion sample size determination formula. 
Twenty two percent proportion (p) of birth preparedness and complication readiness\n[7] with 95% CI, and 5% marginal error (where n is desired sample size, Z is value of standard normal variable at 95% confidence interval and, p is proportion of Birth preparedness and complication readiness and d is marginal error which is 5%) was considered to calculate the sample size. Since multistage sampling was employed, the calculated sample was multiplied by 2 for design effects to control the effect of sampling that could happen due to using sampling method other than simple random sampling, and 10% contingency for non respondents were also added. After all, the final sample size became 580.", "Multistage sampling was employed to select study subjects. First, all kebeles in the woreda were stratified into urban and rural. Goba Woreda constitutes 24 rural kebeles and one town administration (consisting of two urban kebeles), making a total of 26 kebeles. To obtain a representative and adequate sample of kebeles for the woreda, one third of the kebeles were selected. Based on the above calculation, 9 kebeles were chosen using simple random sampling from the total 26 kebeles. According to the strata, i.e. urban and rural residence, one kebele from the 2 urban kebeles and 8 kebeles from the 24 rural kebeles were selected using simple random sampling. Then, the total sample size (n = 580) was allocated proportionally to the size of the selected kebeles. Finally, systematic sampling was employed to select the study subjects in each kebele until the desired sample size was obtained. To select the first household in each kebele, a landmark common to almost all kebeles, the health post, was first identified. A pen was spun and the direction pointed to by the tip of the pen was followed. 
To select the first household, one of the houses included within the initial sampling interval of each kebele was selected by simple random sampling (lottery method). Then, the next household was selected through a systematic sampling technique, that is, every Kth household, where K was calculated for each kebele because the number of households varies from one kebele to another. When a study participant could not be interviewed for some reason (e.g. absenteeism), up to three attempts were made to interview the respondent; after that, they were considered non respondents. On the other hand, if the household did not include a woman who met the inclusion criteria, the next household was substituted. Moreover, if the household contained more than one candidate, one of them was selected randomly by lottery method.", "Birth preparedness and complication ready: A woman was considered as prepared for birth and its complication if she identified four or more components from the birth preparedness and complication readiness items\n[7].\nSkilled provider: persons with midwifery skills (Physicians, Nurses, Midwives, and Health Officers) who can manage normal deliveries and diagnose, manage or refer obstetric complications.\nKnowledge of key danger signs of pregnancy: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of pregnancy; otherwise not knowledgeable.\nKnowledge of key danger signs of labor: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of labor; otherwise not knowledgeable.\nKnowledge of key danger signs of the post partum period: a woman was considered knowledgeable if she spontaneously mentioned at least two out of the three key danger signs of the post partum period; otherwise not knowledgeable.\nKnowledge of birth preparedness and complication readiness: A woman was considered knowledgeable if she could spontaneously mention 
at least 4 items of the knowledge of birth preparedness and complication readiness questions; otherwise not knowledgeable.", "A validated structured questionnaire adapted from the survey tools developed by the JHPIEGO Maternal and Neonatal Health program was used. Information on socio demographic characteristics was collected. Additionally, knowledge of key danger signs during pregnancy, delivery and the postpartum period, and actual problems requiring referral, were asked about. Furthermore, the study subjects were asked about their BP/CR practice, with their spontaneous answers awaited to check whether they had practiced the operationally defined BP/CR components. These were identifying a place of delivery, planning a skilled assistant during delivery, saving money for obstetric emergency, planning a mode of transport to the place of delivery during emergency, planning a blood donor during obstetric emergency, detecting early signs of emergency and identifying an institution with 24 hour emergency obstetric care services. Eight diploma Nurses who were fluent in speaking Amharic and Afan Oromo were involved in the data collection. They collected the data through face to face interviews. In addition to the principal investigators, four Bachelor of Science degree (BSc) holder health professionals were recruited and supervised the data collection process.", "Data quality was assured by using a validated questionnaire and by translation, retranslation and pretesting of the questionnaire. The questionnaire was translated from English to Amharic and Afan Oromo by one translator and back to English by other translators, who were health professionals, to check its consistency. The pretest was done on 5% of the total sample size in Robe town. Content and face validity were checked by a reproductive health expert. Additionally, after the pretest, to check the internal consistency of the tool, Cronbach's alpha was calculated; its value for the birth preparedness and complication readiness items was 0.86. 
Data collectors and supervisors were trained for two days on the study instrument and data collection procedure. The principal investigator and the supervisors strictly followed the overall data collection activities.", "The data were checked for completeness and consistency. Then, they were cleaned, coded and entered into a computer using the Statistical Package for Social Sciences (SPSS) for Windows, version 16.0. Descriptive statistics were computed to determine the prevalence of birth preparedness and complication readiness. Moreover, binary logistic regression analysis was performed to identify factors associated with birth preparedness and complication readiness. Then, to control for the effect of possible confounders, multiple logistic regression was computed with a confidence interval of 95%. A P value < 0.05 on binary logistic regression was considered to select candidate variables for multiple logistic regression analysis as well as to declare variables statistically significant.", "The proposal was approved by the Ethical Review Committee of the College of Medicine and Health Sciences of Madawalabu University. Furthermore, letters of permission were obtained from the Bale zone health department and Goba woreda health office. Consent was obtained from the study subjects after explaining the study objectives and procedures, and their right to refuse to participate in the study at any time was also assured.", "The mean age of the respondents was 26.6 (SD ± 5.9) years. The largest group of respondents, 33.8%, was between 21 and 25 years of age, and the smallest group was aged 20 years or less. Muslim and Orthodox Tewahido were the dominant religions, each accounting for 49.1%. Most participants, 266 (47.3%), reported that they were educated up to primary school, followed by those who did not attend formal education; 161 (28.6%). Four hundred ten (73%) respondents were Oromo by ethnicity, followed by Amhara; 139 (24.7%). 
The vast majority (85.2%) of respondents were housewives. Regarding the marital status of study subjects, 531 (94.5%) of them were married. The mean family size and monthly income of the participants were 5.0 (SD ± 2.1) and 1267.9 (SD ± 1298.7) Ethiopian Birr respectively. About 224 (39.9%) study subjects did not have any financial income (Table 1).\nTable 1\nSocio-demographic characteristics of the respondents, Goba woreda, Oromia region, Ethiopia, April, 2013. Values are frequency (percent).\nResidence: Urban 116 (20.6); Rural 446 (79.4)\nAge: ≤20 94 (16.7); 21-25 190 (33.8); 26-30 164 (29.2); >30 114 (20.3); mean (±SD) 26.6 (±5.9)\nReligion: Muslim 276 (49.1); Orthodox 276 (49.1); Protestant 7 (1.2); Other* 3 (0.5)\nMarital status: Married/in union 531 (94.5); Single 14 (2.1); Widowed 8 (1.4); Divorced 6 (1.1); Separated 3 (0.5)\nEthnicity: Oromo 410 (73); Amhara 139 (24.7); Tigre 2 (0.4); Gamo 2 (0.4); Other** 9 (1.6)\nEducational level: None 161 (28.6); Read and write 16 (2.8); Primary 266 (47.3); Secondary and above 119 (21.2)\nOccupation: Housewife 479 (85.2); Gov't employee 21 (3.7); Private employee 14 (2.5); Merchant 42 (7.7); Other*** 6 (1.1)\nMonthly income of the women+: None 224 (39.9); (None-100] 56 (10); 101-300 135 (24); ≥301 147 (26.2)\nTotal family income+: ≤100 12 (2.1); 101-300 60 (10.7); ≥301 487 (87.1); mean (±SD) 1267.9 (±1298.7)\nFamily size: ≤4 281 (50); 5-6 162 (28.8); ≥7 119 (21.1); mean (±SD) 5.0 (±2.1)\n*indicates Catholic and Wakefeta, **indicates Welayita and Gurage, ***indicates daily laborer.\n+Currency is measured in Ethiopian Birr.", "Out of the total respondents, 301 (53.6%) study participants reported that they had never heard the term birth preparedness and complication readiness. 
For the majority of respondents (61.0%), the source of information was community health workers (Figure 1).\nFigure 1\nSources of information about birth preparedness and complication readiness as reported by study participants in Goba woreda, Oromia region, Ethiopia, April, 2013. N.B. Percentages do not sum to 100% because multiple responses were possible.\nAmong those respondents who knew about birth preparedness and complication readiness, 415 (87.6%) spontaneously identified and mentioned preparing essential items for clean delivery and the post partum period, followed by saving money; 330 (69.6%) (Table 2). In summary, only 82 (14.6%) study subjects were knowledgeable about birth preparedness and complication readiness.\nTable 2\nKnowledge of respondents about preparation for birth and its complication, Goba woreda, Ethiopia, April, 2013 (N = 474). Values are frequency (percent).\n1. Identify place of delivery 214 (45.1)\n2. Save money 330 (69.6)\n3. Prepare essential items for clean delivery & post partum period 415 (87.6)\n4. Identify skilled provider 34 (7.2)\n5. Being aware of the signs of an emergency & the need to act immediately 16 (3.4)\n6. Designating a decision maker on her behalf 4 (0.8)\n7. Arranging a way to communicate with a source of help 33 (2.7)\n8. Arranging emergency funds 67 (5.5)\n9. Identify a mode of transportation 25 (5.3)\n10. Arranging blood donors 15 (3.2)\n11. Identifying the nearest institution that has 24 hours functioning EmOC services 58 (12.2)\nN.B. Percentages do not sum to 100% because multiple responses were possible.\nGenerally, only 168 (29.9%) study subjects were prepared for birth and its complication in their last pregnancy, whereas the remaining 394 (70.1%) were not (Table 3).\nTable 3\nPractices of respondents on preparation for birth/complication, Goba woreda, Oromia region, Ethiopia, April, 2013. Values are frequency (percent).\n1. Identify place of delivery 432 (76.9)\n2. Plan skilled assistant during delivery 297 (52.8)\n3. Saved money for obstetric emergency 388 (69)\n4. Plan a mode of transport to place of delivery during emergency 353 (62.8)\n5. Plan blood donor during obstetric emergency 51 (9.1)\n6. Detect early signs of an emergency 177 (31.5)\n7. Identified institution with 24 hr EmOC services 267 (47.5)\nN.B. Percentages do not sum to 100% because multiple responses were possible.", "On binary logistic regression, place of residence, occupation, educational level, family size, ANC follow up, knowledge of danger signs during pregnancy, labour and the postnatal period, as well as gravidity and parity, were found to have a statistically significant association with birth preparedness and complication readiness.\nMultiple logistic regression analysis was also computed to control for possible confounders and explore the association between selected independent variables and birth preparedness and complication readiness. The odds of birth preparedness and complication readiness were two times greater among urban residents when compared to rural residents (AOR = 2.01, 95% CI = 1.20, 3.36). Additionally, the educational level of the mother was also found to be a predictor of birth preparedness and complication readiness. 
The odds of birth preparedness and complication readiness among women who attended primary education, and secondary and higher education, were three and nearly three times greater, respectively, than among those who did not attend formal education (AOR = 3.24, 95% CI = 1.75, 6.02 and AOR = 2.88, 95% CI = 1.34, 6.15). Furthermore, the odds of birth preparedness and complication readiness were eight times greater among women who had ANC follow up when compared with women who did not (AOR = 8.07, 95% CI = 2.41, 27.00). Besides, the odds of birth preparedness and complication readiness among women knowledgeable about key danger signs during pregnancy were nearly two times greater than among women who were not knowledgeable (AOR = 1.74, 95% CI = 1.06, 2.88). Similarly, the odds of birth preparedness and complication readiness among respondents knowledgeable about key danger signs during the postpartum period were two times greater than among those who lacked such knowledge (AOR = 2.08, 95% CI = 1.20, 3.60) (Table 4).\nTable 4\nAssociation of selected socio-demographic and obstetric factors of respondents with preparation for birth and its complication, Goba woreda, Oromia region, Ethiopia, April, 2013. Rows give: not prepared n (%), prepared n (%), COR (95% CI), AOR (95% CI); 1 = reference category.\nResidence: Urban 57 (49.1), 59 (50.9), COR 3.2 (2.09, 4.88), AOR 2.01 (1.20, 3.36); Rural 337 (75.6), 109 (24.4), 1, 1\nMarital status: In marital union 370 (69.7), 161 (30.3), COR 1.49 (0.63, 3.53); Not in marital union 24 (77.4), 7 (22.6), 1\nOccupation: Housewife 349 (72.9), 130 (27), 1, 1; Gov't employee 9 (42.9), 12 (57.1), COR 3.57 (1.47, 8.69), AOR 1.39 (0.48, 4.04); Private employee 9 (64.3), 5 (35.7), COR 1.49 (0.49, 4.53), AOR 1.15 (0.33, 4.01); Merchant 24 (57), 18 (42.9), COR 2.01 (1.05, 3.83), AOR 1.45 (0.68, 3.07); Other*** 3 (50), 3 (50), COR 2.68 (0.53, 13.4), AOR 1.69 (0.22, 13.11)\nAge: ≤20 70 (74.5), 24 (25.5), 1; 21-25 125 (65.8), 65 (34.2), COR 1.51 (0.87, 2.63); 26-30 119 (72.6), 45 (27.4), COR 1.10 (0.62, 1.96); >30 80 (70.2), 34 (29.8), COR 1.24 (0.67, 2.28)\nEducational level: None 144 (89.4), 17 (10.6), 1, 1; Read and write 12 (75), 4 (25), COR 2.82 (0.81, 9.73), AOR 1.96 (0.49, 7.79); Primary 173 (65), 93 (35), COR 4.55 (2.59, 7.99), AOR 3.24 (1.75, 6.02); Secondary and above 65 (54.6), 54 (45.4), COR 7.03 (3.79, 13.06), AOR 2.88 (1.34, 6.15)\nTotal family income: ≤100 11 (91.7), 1 (8.3), COR 3.34 (0.39, 28.24); 101-300 46 (76.7), 14 (23.3), COR 5.03 (0.64, 39.37); ≥301 334 (68.6), 153 (31.4), 1\nFamily size: ≤4 177 (63.9), 100 (36.1), COR 2.35 (1.40, 3.95), AOR 1.19 (0.46, 3.04); 5-6 117 (72.7), 44 (27.3), COR 1.57 (0.88, 2.78), AOR 1.22 (0.61, 2.46); ≥7 96 (80.7), 23 (19.3), 1, 1\nANC follow up: Yes 313 (65.5), 165 (34.5), COR 14.23 (4.42, 45.75), AOR 8.07 (2.41, 27.00); No 81 (96.4), 3 (3.6), 1, 1\nKnowledge status of danger signs during pregnancy: Not knowledgeable 92 (51.4), 87 (48.6), 1, 1; Knowledgeable 302 (78.9), 81 (21.1), COR 3.52 (2.40, 5.16), AOR 1.74 (1.06, 2.88)\nKnowledge status of danger signs during labour: Not knowledgeable 321 (78.3), 89 (21.7), 1, 1; Knowledgeable 73 (48), 79 (52), COR 3.90 (2.62, 5.79), AOR 1.66 (0.97, 2.84)\nKnowledge status of danger signs during postnatal period: Not knowledgeable 336 (76.7), 102 (23.3), 1, 1; Knowledgeable 58 (46.8), 66 (53.2), COR 3.74 (2.47, 5.68), AOR 2.08 (1.20, 3.60)\nGravidity: 1 103 (64), 58 (36), COR 1.86 (1.17, 2.96), AOR 0.32 (0.05, 2.02); 2-3 142 (68.6), 65 (31.4), COR 1.51 (0.97, 2.36), AOR 0.39 (0.09, 1.61); ≥4 149 (76.8), 45 (23.2), 1, 1\nParity: First 109 (64.5), 60 (35.5), COR 1.90 (1.19, 3.0), AOR 2.64 (0.38, 18.17); Second 76 (61.8), 47 (38.2), COR 2.14 (1.29, 3.54), AOR 3.91 (0.81, 18.90); Third 67 (77), 20 (23), COR 1.03 (0.56, 1.90), AOR 1.39 (0.46, 5.62); Fourth and above 142 (77.6), 41 (22.4), 1, 1", "The study revealed that only a few women were well prepared for birth and its complications. Therefore, the Ministry of Health, Oromia region health bureau, Bale zone health department, Goba woreda health office as well as other partner organizations that are working on maternal health issues should work hard to improve the birth preparedness and complication readiness of women.\nEducation was found to be one of the predictors of BP/CR. Therefore, the Goba woreda health office, in collaboration with other stakeholders such as the Goba woreda education office, should further strengthen its efforts to empower women through education.\nAntenatal care follow up was found to have a statistically significant association with birth preparedness and complication readiness. Therefore, health professionals providing antenatal care should give due emphasis to birth preparedness and complication readiness plans to improve access to skilled and emergency obstetric care.\nEven though the majority of women attended ANC, only a very small number of respondents were prepared for birth and its complications. Therefore, interested researchers should conduct further studies on the quality of ANC, with a focus on birth preparedness and complication readiness, to assess whether health professionals appropriately advise and provide health information concerning birth preparedness and complication readiness." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study area and period", "Study design", "Sample size and sampling procedure", "Sample size", "Sampling procedure", "Operational and term definitions", "Data collection tool and procedure", "Data quality control", "Data processing and analysis", "Ethical consideration", "Results", "Socio demographic characteristics", "Birth preparedness and complication readiness", "Factors associated with birth preparedness and complication readiness", "Discussion", "Conclusion", "Recommendation" ]
[ "Globally, maternal mortality remains a public health challenge\n[1]. The World Health Organization (WHO) estimated that 529,000 women die annually from maternal causes. Ninety-nine percent of these deaths occur in less developed countries. The situation is most dire for women in Sub-Saharan Africa, where one of every 16 women dies of pregnancy related causes during her lifetime, compared with only 1 in 2,800 women in developed regions\n[2].\nThe global maternal mortality ratio (MMR) decreased from 422 (358-505) in 1980 to 320 (272-388) in 1990, and was 251 (221-289) per 100,000 live births in 2008. The yearly rate of decline of the global MMR since 1990 was 1.3% (1.0-1.5). More than 50% of all maternal deaths occurred in only six countries in 2008 (India, Nigeria, Pakistan, Afghanistan, Ethiopia, and the Democratic Republic of the Congo)\n[3]. In Ethiopia, according to the 2011 Ethiopia Demographic and Health Survey (EDHS) report, the MMR remains high at 676 deaths per 100,000 live births. However, only 10% of births in Ethiopia occur in a health facility, while 90% of women deliver at home. Regarding reasons for not delivering at a health facility, more than six women in ten (61%) stated that a health facility delivery was not necessary, and three in every ten (30%) stated that it was not customary\n[4].\nBirth preparedness and complication readiness (BP/CR) is a comprehensive package aimed at promoting timely access to skilled maternal and neonatal services. It promotes active preparation and decision making for delivery by pregnant women and their families. This stems from the fact that every pregnant woman faces a risk of sudden and unpredictable life threatening complications that could end in death or injury to herself or to her infant. 
BP/CR is a relatively common strategy employed by numerous groups implementing safe motherhood programs around the world\n[5, 6].\nIn many societies around the world, cultural beliefs and lack of awareness inhibit advance preparation for delivery and the expected baby. Since no action is taken prior to the delivery, the family tries to act only when labor begins. The majority of pregnant women and their families do not know how to recognize the danger signs of complications. When complications occur, the unprepared family will waste a great deal of time in recognizing the problem, getting organized, getting money, finding transport and reaching the appropriate referral facility\n[7].\nIt is difficult to predict which pregnancy, delivery or post delivery period will experience complications; hence a birth preparedness and complication readiness plan is recommended with the notion that every pregnancy carries risk\n[6]. The BP/CR strategy encourages women to be informed of the danger signs of obstetric complications and emergencies, choose a preferred birth place and attendant at birth, make advance arrangements with the attendant at birth, arrange for transport to a skilled care site in case of emergency, save or arrange alternative funds for the costs of skilled and emergency care, and find a companion to be with the woman at birth or to accompany her to a source of emergency care. Other measures include identifying a compatible blood donor in case of hemorrhage, obtaining permission from the head of household to seek skilled care in the event that a birth emergency occurs in his absence, and arranging a source of household support to provide temporary family care during her absence\n[6, 8].\nDespite various efforts from government as well as nongovernmental organizations working on maternal issues, pregnant women have not been found to be well prepared for birth and its complications. 
For example, only 47.8% women who have already given birth in Indore city in India\n[9] and 35% of pregnant women in Uganda were prepared for birth and its complication\n[10]. Additionally, according to the research done in some part of Ethiopia, only 22% of pregnant women in Adigrat town\n[7] and 17% of pregnant women in Aleta Wondo of southern region\n[11] were prepared for birth and its complication. Even though there are some studies which were conducted on this similar issue in Ethiopia, they mainly encompass the urban area. Additionally, there are some differences in socio demographic and cultural conditions of these different regions of the country. Therefore, the aim of this study was to assess birth preparedness and complication readiness among women of child bearing age group in Goba woreda, Oromia region, Ethiopia.", " Study area and period The study was conducted in Goba woreda, Bale zone from April 1-28, 2013. Bale zone is administratively divided in to woreda and smaller kebele. Goba woreda is one of the woreda in Bale zone, Oromia region of Ethiopia and located 444 km from Addis Ababa. Currently, the woreda has 24 rural and 2 urban kebeles. Based on figures obtained from 2007 census, this woreda has an estimated total population of 73,653; of whom 37,427 were females and 32,916 (44.7%) of the population were urban dwellers\n[12]. The estimated total number of women of reproductive age and pregnant women in the woreda were 16,277 and 2725 respectively. The woreda have one health post in each kebele, 4 health centers, one Hospital and Blood Bank. One ambulance from hospital as well as Red Cross society serves the community in providing transportation service from home to the health facility (\n[13], Goba woreda health office: Annual health report, unpublished]).\nThe study was conducted in Goba woreda, Bale zone from April 1-28, 2013. Bale zone is administratively divided in to woreda and smaller kebele. 
Goba woreda is one of the woreda in Bale zone, Oromia region of Ethiopia and located 444 km from Addis Ababa. Currently, the woreda has 24 rural and 2 urban kebeles. Based on figures obtained from 2007 census, this woreda has an estimated total population of 73,653; of whom 37,427 were females and 32,916 (44.7%) of the population were urban dwellers\n[12]. The estimated total number of women of reproductive age and pregnant women in the woreda were 16,277 and 2725 respectively. The woreda have one health post in each kebele, 4 health centers, one Hospital and Blood Bank. One ambulance from hospital as well as Red Cross society serves the community in providing transportation service from home to the health facility (\n[13], Goba woreda health office: Annual health report, unpublished]).\n Study design A community based cross sectional study was conducted. Women who gave birth in the last 12 months regardless of their birth outcome were included in the study and those women who were severely ill, mentally and physically not capable of being interviewed were excluded from the study.\nA community based cross sectional study was conducted. Women who gave birth in the last 12 months regardless of their birth outcome were included in the study and those women who were severely ill, mentally and physically not capable of being interviewed were excluded from the study.\n Sample size and sampling procedure Sample size The study employed single population proportion sample size determination formula. Twenty two percent proportion (p) of birth preparedness and complication readiness\n[7] with 95% CI, and 5% marginal error (where n is desired sample size, Z is value of standard normal variable at 95% confidence interval and, p is proportion of Birth preparedness and complication readiness and d is marginal error which is 5%) was considered to calculate the sample size. 
Since multistage sampling was employed, the calculated sample was multiplied by a design effect of 2 to account for using a sampling method other than simple random sampling, and a 10% contingency for non-response was added. The final sample size was therefore 580.
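The calculation described above can be reproduced as follows (a minimal sketch; Z = 1.96 for a 95% CI, and intermediate values are left unrounded until the end, which reproduces the reported 580):

```python
def bpcr_sample_size(p=0.22, d=0.05, z=1.96, design_effect=2, nonresponse=0.10):
    """Single population proportion formula, with a design effect for
    multistage sampling and a contingency for non-response."""
    n0 = (z ** 2) * p * (1 - p) / d ** 2   # base sample size (~264)
    n = n0 * design_effect                 # multiply by 2 for design effect
    n_final = n * (1 + nonresponse)        # add 10% for non-respondents
    return round(n_final)

print(bpcr_sample_size())  # 580
```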
Sampling procedure
Multistage sampling was employed to select study subjects. First, all kebeles in the woreda were stratified into urban and rural. Goba woreda comprises 24 rural kebeles and one town administration (consisting of two urban kebeles), for a total of 26 kebeles. To obtain a representative and adequate sample, one third of the kebeles were selected: 9 kebeles were chosen by simple random sampling from the total of 26. Within the strata, one of the 2 urban kebeles and 8 of the 24 rural kebeles were selected by simple random sampling. The total sample size (n = 580) was then allocated proportionally to the size of the selected kebeles. Finally, systematic sampling was used to select study subjects in each kebele until the desired number was obtained. To choose a starting point in each kebele, a landmark common to almost all kebeles, the health post, was identified. A pen was spun and the direction indicated by its tip was followed.
To select the first household, one of the houses within the initial sampling interval of each kebele was chosen by simple random sampling (lottery method). Subsequent households were then selected systematically, taking every Kth household, with K calculated separately for each kebele because the number of households varies between kebeles. When a selected participant could not be interviewed for some reason (e.g. absence), up to three attempts were made before she was recorded as a non-respondent. If a household contained no woman meeting the inclusion criteria, the next household was substituted; if it contained more than one eligible woman, one was chosen at random by the lottery method.
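The proportional allocation and per-kebele sampling interval K described above can be sketched as follows. The kebele labels and household counts are illustrative assumptions, not the study's actual figures:

```python
import random

def allocate_and_interval(households_per_kebele, total_n=580):
    """Allocate the total sample proportionally to kebele size and
    compute each kebele's systematic sampling interval K, plus a
    random starting household within the first interval."""
    total_hh = sum(households_per_kebele.values())
    plan = {}
    for kebele, hh in households_per_kebele.items():
        n_k = max(1, round(total_n * hh / total_hh))  # proportional allocation
        k = max(1, hh // n_k)                         # sampling interval K
        start = random.randint(1, k)                  # random start (lottery)
        plan[kebele] = {"n": n_k, "K": k, "first_household": start}
    return plan

# Hypothetical household counts for the 9 selected kebeles.
example = {f"kebele_{i}": c for i, c in enumerate(
    [900, 650, 700, 800, 550, 600, 750, 500, 850], start=1)}
print(allocate_and_interval(example))
```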
Operational and term definitions
Birth preparedness and complication readiness: a woman was considered prepared for birth and its complications if she identified four or more components from the birth preparedness and complication readiness items [7].
Skilled provider: a person with midwifery skills (physician, nurse, midwife or health officer) who can manage normal deliveries and diagnose, manage or refer obstetric complications.
Knowledge of key danger signs of pregnancy: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of pregnancy; otherwise, not knowledgeable.
Knowledge of key danger signs of labor: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of labor; otherwise, not knowledgeable.
Knowledge of key danger signs of the postpartum period: a woman was considered knowledgeable if she spontaneously mentioned at least two of the three key danger signs of the postpartum period; otherwise, not knowledgeable.
Knowledge of birth preparedness and complication readiness: a woman was considered knowledgeable if she spontaneously mentioned at least 4 items from the birth preparedness and complication readiness knowledge questions; otherwise, not knowledgeable.

Data collection tool and procedure
A validated structured questionnaire adapted from the survey tools developed by the JHPIEGO Maternal and Neonatal Health program was used. Information on socio-demographic characteristics was collected, and women were asked about their knowledge of key danger signs during pregnancy, delivery and the postpartum period, and about actual problems requiring referral.
Furthermore, the study subjects were asked about their BP/CR practices, with their spontaneous answers used to check whether they had practiced the operationally defined BP/CR components: identifying a place of delivery, planning for a skilled attendant at delivery, saving money for an obstetric emergency, planning a mode of transport to the place of delivery in an emergency, planning a blood donor for an obstetric emergency, detecting early signs of an emergency, and identifying an institution with 24-hour emergency obstetric care services. Eight diploma nurses fluent in Amharic and Afan Oromo collected the data through face-to-face interviews.
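The operational cut-off (four or more of the seven BP/CR components) applied to these practice items can be expressed as a simple scoring rule. The component keys below are paraphrases of the items listed, not the questionnaire's exact wording:

```python
# The seven BP/CR practice components asked about in the survey (paraphrased).
BPCR_COMPONENTS = [
    "identified_place_of_delivery",
    "planned_skilled_birth_attendant",
    "saved_money_for_emergency",
    "planned_emergency_transport",
    "planned_blood_donor",
    "detected_early_danger_signs",
    "identified_24hr_emoc_facility",
]

def is_prepared(responses: dict) -> bool:
    """Classify a woman as prepared for birth and its complications
    if she practiced four or more of the seven components."""
    score = sum(bool(responses.get(c)) for c in BPCR_COMPONENTS)
    return score >= 4

example = {
    "identified_place_of_delivery": True,
    "planned_skilled_birth_attendant": True,
    "saved_money_for_emergency": True,
    "planned_emergency_transport": True,
}
print(is_prepared(example))  # True (4 of 7 components)
```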
In addition to the principal investigators, four health professionals holding Bachelor of Science (BSc) degrees were recruited to supervise the data collection process.

Data quality control
Data quality was assured through the use of a validated questionnaire, translation, back-translation and pretesting. The questionnaire was translated from English into Amharic and Afan Oromo by one translator and back into English by other translators, all health professionals, to check its consistency. The pretest was done on 5% of the total sample size in Robe town. Content and face validity were checked by a reproductive health expert. After the pretest, the internal consistency of the tool was assessed: the Cronbach's alpha value for the birth preparedness and complication readiness items was 0.86. Data collectors and supervisors were trained for two days on the study instrument and data collection procedures.
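Cronbach's alpha, used above to check internal consistency, can be computed from item-level scores. A minimal sketch, using made-up 0/1 responses rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: list of equal-length lists, one per questionnaire item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 0/1 responses of 6 women to 3 BP/CR items.
items = [
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.81
```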
The principal investigator and the supervisors closely monitored the overall data collection activities.

Data processing and analysis
The data were checked for completeness and consistency, then cleaned, coded and entered into a computer using the Statistical Package for the Social Sciences (SPSS), Windows version 16.0. Descriptive statistics were computed to determine the prevalence of birth preparedness and complication readiness. Binary logistic regression was performed to identify factors associated with birth preparedness and complication readiness. Then, to control for possible confounders, multiple logistic regression was computed with 95% confidence intervals. A p value < 0.05 in the binary logistic regression was used both to select candidate variables for the multiple logistic regression and to declare variables statistically significant.

Ethical consideration
The proposal was approved by the Ethical Review Committee of the College of Medicine and Health Sciences of Madawalabu University, and letters of permission were obtained from the Bale zone health department and the Goba woreda health office. Consent was obtained from the study subjects after the study objectives and procedures were explained, and their right to refuse participation at any time was assured.
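The analysis itself was run in SPSS 16.0; as a rough illustration of the underlying model, the sketch below fits a logistic regression by plain gradient descent and reports a crude odds ratio. All data here are simulated, and the single-predictor fit stands in for the bivariable screening step, not the full multivariable model:

```python
import math, random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression (illustrative stand-in
    for the SPSS analysis); returns [intercept, coefficients...]."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)                      # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))       # predicted probability
            err = p - yi
            grad[0] += err
            for j in range(k):
                grad[j + 1] += err * xi[j]
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Simulated data: outcome = prepared (1/0); predictor = urban residence.
random.seed(1)
urban = [random.random() < 0.2 for _ in range(400)]
prepared = [1 if random.random() < (0.6 if u else 0.2) else 0 for u in urban]
X = [[1.0 if u else 0.0] for u in urban]
w = fit_logistic(X, prepared)
odds_ratio = math.exp(w[1])
print(f"crude odds ratio for urban residence: {odds_ratio:.2f}")
```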
Of the 580 women planned for the study, 8 (1.4%) refused to respond, 10 (1.7%) could not be found on three visits, and 562 were successfully interviewed, yielding a response rate of 97%.

Socio-demographic characteristics
The mean age of the respondents was 26.6 (SD ± 5.9) years. The largest group of respondents, 33.8%, were aged 21-25 years, and the fewest were aged 20 years or younger. Muslim and Orthodox Tewahido were the dominant religions, each accounting for 49.1%. Most participants, 266 (47.3%), were educated up to primary school, followed by those who had not attended formal education, 161 (28.6%). Four hundred ten (73%) respondents were Oromo by ethnicity, followed by Amhara, 139 (24.7%). The vast majority (85.2%) of respondents were housewives, and 531 (94.5%) were married. The mean family size and monthly income of the participants were 5.0 (SD ± 2.1) persons and 1267.9 (SD ± 1298.7) Ethiopian Birr, respectively. About 224 (39.9%) study subjects had no financial income of their own (Table 1).

Table 1. Socio-demographic characteristics of the respondents, Goba woreda, Oromia region, Ethiopia, April 2013
Residence: Urban 116 (20.6%); Rural 446 (79.4%)
Age (years): ≤20 94 (16.7%); 21-25 190 (33.8%); 26-30 164 (29.2%); >30 114 (20.3%); mean (±SD) 26.6 (±5.9)
Religion: Muslim 276 (49.1%); Orthodox 276 (49.1%); Protestant 7 (1.2%); Other* 3 (0.5%)
Marital status: Married/in union 531 (94.5%); Single 14 (2.1%); Widowed 8 (1.4%); Divorced 6 (1.1%); Separated 3 (0.5%)
Ethnicity: Oromo 410 (73%); Amhara 139 (24.7%); Tigre 2 (0.4%); Gamo 2 (0.4%); Other** 9 (1.6%)
Educational level: None 161 (28.6%); Read and write 16 (2.8%); Primary 266 (47.3%); Secondary and above 119 (21.2%)
Occupation: Housewife 479 (85.2%); Government employee 21 (3.7%); Private employee 14 (2.5%); Merchant 42 (7.7%); Other*** 6 (1.1%)
Monthly income of the woman+: None 224 (39.9%); (None-100] 56 (10%); 101-300 135 (24%); ≥301 147 (26.2%)
Total family income+: ≤100 12 (2.1%); 101-300 60 (10.7%); ≥301 487 (87.1%); mean (±SD) 1267.9 (±1298.7)
Family size: ≤4 281 (50%); 5-6 162 (28.8%); ≥7 119 (21.1%); mean (±SD) 5.0 (±2.1)
*Catholic and Wakefeta; **Welayita, Gurage; ***daily laborer. +Currency is measured in Ethiopian Birr.

Birth preparedness and complication readiness
Of the total respondents, 301 (53.6%) reported that they had never heard the term birth preparedness and complication readiness. For the majority of respondents, 61.0%, the source of information was community health workers (Figure 1).

Figure 1. Sources of information about birth preparedness and complication readiness as reported by study participants in Goba woreda, Oromia region, Ethiopia, April 2013. N.B. Percentages do not sum to 100% because multiple responses were possible.

Among respondents who knew about birth preparedness and complication readiness, the item most often spontaneously mentioned was preparing essential items for clean delivery and the postpartum period, 415 (87.6%), followed by saving money, 330 (69.6%) (Table 2).
In summary, only 82 (14.6%) study subjects were knowledgeable about birth preparedness and complication readiness.\nTable 2. Knowledge of respondents about preparation for birth and its complications, Goba woreda, Ethiopia, April, 2013 (N = 474)\nS. No | Variable | Frequency | Percent\n1 | Identify place of delivery | 214 | 45.1\n2 | Save money | 330 | 69.6\n3 | Prepare essential items for clean delivery & postpartum period | 415 | 87.6\n4 | Identify skilled provider | 34 | 7.2\n5 | Being aware of the signs of an emergency & the need to act immediately | 16 | 3.4\n6 | Designating a decision maker on her behalf | 4 | 0.8\n7 | Arranging a way to communicate with a source of help | 33 | 2.7\n8 | Arranging emergency funds | 67 | 5.5\n9 | Identify a mode of transportation | 25 | 5.3\n10 | Arranging blood donors | 15 | 3.2\n11 | Identifying the nearest institution that has 24-hour functioning EmOC services | 58 | 12.2\nN.B. Percentages do not sum to 100% because multiple responses were possible.\nGenerally, only 168 (29.9%) study subjects were prepared for birth and its complications in their last pregnancy, whereas the remaining 394 (70.1%) were not (Table 3).\nTable 3. Practices of respondents on preparation for birth/complications, Goba woreda, Oromia region, Ethiopia, April, 2013\nS. No | Variable | Frequency | Percent\n1 | Identify place of delivery | 432 | 76.9\n2 | Plan skilled assistant during delivery | 297 | 52.8\n3 | Saved money for obstetric emergency | 388 | 69\n4 | Plan a mode of transport to place of delivery during emergency | 353 | 62.8\n5 | Plan blood donor during obstetric emergency | 51 | 9.1\n6 | Detect early signs of an emergency | 177 | 31.5\n7 | Identified institution with 24-hr EmOC services | 267 | 47.5\nN.B. Percentages do not sum to 100% because multiple responses were possible.\n Factors associated with birth preparedness and complication readiness On binary logistic regression, place of residence, occupation, educational level, family size, ANC follow-up, knowledge of danger signs during pregnancy, labour and the postnatal period, as well as gravidity and parity were found to have a statistically significant association with birth preparedness and complication readiness.\nMultiple logistic regression analysis was then computed to control for possible confounders and to explore the association between the selected independent variables and birth preparedness and complication readiness.
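The crude odds ratios (COR) reported for the bivariable analysis can be recovered directly from the prepared/not-prepared counts; for instance, the residence rows of Table 4 give COR ≈ 3.2. A sketch using the Woolf log-method confidence interval (an assumption on my part; the paper does not state which CI method was used, so the interval differs from the reported one only by rounding/method):

```python
import math

def crude_odds_ratio(a, b, c, d):
    """2x2 table: a,b = exposed prepared/not prepared; c,d = unexposed.
    Returns the crude OR with a Woolf (log-method) 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
    return or_, lo, hi

# Residence rows of Table 4: urban 59 prepared / 57 not; rural 109 / 337
cor, lo, hi = crude_odds_ratio(59, 57, 109, 337)
print(f"COR = {cor:.2f} (95% CI {lo:.2f}, {hi:.2f})")  # COR = 3.20 (95% CI 2.10, 4.89)
```

This is close to the reported 3.2 (2.09, 4.88); the adjusted odds ratios (AOR) additionally require fitting the multivariable logistic regression model and cannot be reproduced from the table counts alone.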
The odds of birth preparedness and complication readiness were two times greater among urban residents than among rural residents (AOR = 2.01, 95% CI = 1.20, 3.36). The educational level of the mother was also found to be a predictor of birth preparedness and complication readiness: the odds among women who had attended primary education, and secondary and higher education, were about three times higher than among those who had not attended formal education (AOR = 3.24, 95% CI = 1.75, 6.02 and AOR = 2.88, 95% CI = 1.34, 6.15, respectively). Furthermore, the odds of birth preparedness and complication readiness were eight times greater among women who had ANC follow-up than among women who did not (AOR = 8.07, 95% CI = 2.41, 27.00). Besides, the odds among women knowledgeable about key danger signs during pregnancy were nearly two times greater than among women who were not knowledgeable (AOR = 1.74, 95% CI = 1.06, 2.88). Similarly, the odds among respondents knowledgeable about key danger signs during the postpartum period were two times greater than among those who lacked this knowledge (AOR = 2.08, 95% CI = 1.20, 3.60) (Table 4).\nTable 4. Association of selected socio-demographic and obstetric factors of respondents with preparation for birth and its complications, Goba woreda, Oromia region, Ethiopia, April, 2013\nVariable | Not prepared (%) | Prepared (%) | COR (95% CI) | AOR (95% CI)\nResidence: Urban | 57 (49.1) | 59 (50.9) | 3.2 (2.09, 4.88) | 2.01 (1.20, 3.36)\nResidence: Rural | 337 (75.6) | 109 (24.4) | 1 | 1\nMarital status: In marital union | 370 (69.7) | 161 (30.3) | 1.49 (0.63, 3.53) |\nMarital status: Not in marital union | 24 (77.4) | 7 (22.6) | 1 |\nOccupation: Housewife | 349 (72.9) | 130 (27) | 1 | 1\nOccupation: Gov’t employee | 9 (42.9) | 12 (57.1) | 3.57 (1.47, 8.69) | 1.39 (0.48, 4.04)\nOccupation: Private employee | 9 (64.3) | 5 (35.7) | 1.49 (0.49, 4.53) | 1.15 (0.33, 4.01)\nOccupation: Merchant | 24 (57) | 18 (42.9) | 2.01 (1.05, 3.83) | 1.45 (0.68, 3.07)\nOccupation: Other*** | 3 (50) | 3 (50) | 2.68 (0.53, 13.4) | 1.69 (0.22, 13.11)\nAge: ≤20 | 70 (74.5) | 24 (25.5) | 1 |\nAge: 21-25 | 125 (65.8) | 65 (34.2) | 1.51 (0.87, 2.63) |\nAge: 26-30 | 119 (72.6) | 45 (27.4) | 1.10 (0.62, 1.96) |\nAge: >30 | 80 (70.2) | 34 (29.8) | 1.24 (0.67, 2.28) |\nEducational level: None | 144 (89.4) | 17 (10.6) | 1 | 1\nEducational level: Read and write | 12 (75) | 4 (25) | 2.82 (0.81, 9.73) | 1.96 (0.49, 7.79)\nEducational level: Primary | 173 (65) | 93 (35) | 4.55 (2.59, 7.99) | 3.24 (1.75, 6.02)\nEducational level: Secondary and above | 65 (54.6) | 54 (45.4) | 7.03 (3.79, 13.06) | 2.88 (1.34, 6.15)\nTotal family income: ≤100 | 11 (91.7) | 1 (8.3) | 3.34 (0.39, 28.24) |\nTotal family income: 101-300 | 46 (76.7) | 14 (23.3) | 5.03 (0.64, 39.37) |\nTotal family income: ≥301 | 334 (68.6) | 153 (31.4) | 1 |\nFamily size: ≤4 | 177 (63.9) | 100 (36.1) | 2.35 (1.40, 3.95) | 1.19 (0.46, 3.04)\nFamily size: 5-6 | 117 (72.7) | 44 (27.3) | 1.57 (0.88, 2.78) | 1.22 (0.61, 2.46)\nFamily size: ≥7 | 96 (80.7) | 23 (19.3) | 1 | 1\nANC follow-up: Yes | 313 (65.5) | 165 (34.5) | 14.23 (4.42, 45.75) | 8.07 (2.41, 27.00)\nANC follow-up: No | 81 (96.4) | 3 (3.6) | 1 | 1\nDanger signs during pregnancy: Not knowledgeable | 302 (78.9) | 81 (21.1) | 1 | 1\nDanger signs during pregnancy: Knowledgeable | 92 (51.4) | 87 (48.6) | 3.52 (2.40, 5.16) | 1.74 (1.06, 2.88)\nDanger signs during labour: Not knowledgeable | 321 (78.3) | 89 (21.7) | 1 | 1\nDanger signs during labour: Knowledgeable | 73 (48) | 79 (52) | 3.90 (2.62, 5.79) | 1.66 (0.97, 2.84)\nDanger signs during postnatal period: Not knowledgeable | 336 (76.7) | 102 (23.3) | 1 | 1\nDanger signs during postnatal period: Knowledgeable | 58 (46.8) | 66 (53.2) | 3.74 (2.47, 5.68) | 2.08 (1.20, 3.60)\nGravidity: 1 | 103 (64) | 58 (36) | 1.86 (1.17, 2.96) | 0.32 (0.05, 2.02)\nGravidity: 2-3 | 142 (68.6) | 65 (31.4) | 1.51 (0.97, 2.36) | 0.39 (0.09, 1.61)\nGravidity: ≥4 | 149 (76.8) | 45 (23.2) | 1 | 1\nParity: First | 109 (64.5) | 60 (35.5) | 1.90 (1.19, 3.0) | 2.64 (0.38, 18.17)\nParity: Second | 76 (61.8) | 47 (38.2) | 2.14 (1.29, 3.54) | 3.91 (0.81, 18.90)\nParity: Third | 67 (77) | 20 (23) | 1.03 (0.56, 1.90) | 1.39 (0.46, 5.62)\nParity: Fourth and above | 142 (77.6) | 41 (22.4) | 1 | 1\n", "The mean age of the respondents was 26.6 (SD ± 5.9) years. The majority of respondents (33.8%) were in the 21-25 year age group, and the fewest were aged 20 years or below. Muslim and Orthodox Tewahido were the dominant religions, each accounting for 49.1%. Most participants, 266 (47.3%), reported that they had been educated up to primary school, followed by those who had not attended formal education, 161 (28.6%). Four hundred ten (73%) respondents were Oromo by ethnicity, followed by Amhara, 139 (24.7%). The vast majority (85.2%) of respondents were housewives. Regarding marital status, 531 (94.5%) of the study subjects were married. The mean family size and monthly income of the participants were 5.0 (SD ± 2.1) and 1267.9 (SD ± 1298.7) Ethiopian Birr, respectively.
About 224 (39.9%) study subjects did not have any financial income (Table 1).\nTable 1. Socio-demographic characteristics of the respondents, Goba woreda, Oromia region, Ethiopia, April, 2013\nVariable | Frequency | Percent\nResidence: Urban | 116 | 20.6\nResidence: Rural | 446 | 79.4\nAge: ≤20 | 94 | 16.7\nAge: 21-25 | 190 | 33.8\nAge: 26-30 | 164 | 29.2\nAge: >30 | 114 | 20.3\nAge: Mean (±SD) | 26.6 (±5.9)\nReligion: Muslim | 276 | 49.1\nReligion: Orthodox | 276 | 49.1\nReligion: Protestant | 7 | 1.2\nReligion: Other* | 3 | 0.5\nMarital status: Married/in union | 531 | 94.5\nMarital status: Single | 14 | 2.1\nMarital status: Widowed | 8 | 1.4\nMarital status: Divorced | 6 | 1.1\nMarital status: Separated | 3 | 0.5\nEthnicity: Oromo | 410 | 73\nEthnicity: Amhara | 139 | 24.7\nEthnicity: Tigre | 2 | 0.4\nEthnicity: Gamo | 2 | 0.4\nEthnicity: Other** | 9 | 1.6\nEducational level: None | 161 | 28.6\nEducational level: Read and write | 16 | 2.8\nEducational level: Primary | 266 | 47.3\nEducational level: Secondary and above | 119 | 21.2\nOccupation: Housewife | 479 | 85.2\nOccupation: Gov’t employee | 21 | 3.7\nOccupation: Private employee | 14 | 2.5\nOccupation: Merchant | 42 | 7.7\nOccupation: Other*** | 6 | 1.1\nMonthly income of the women+: None | 224 | 39.9\nMonthly income of the women+: (None-100] | 56 | 10\nMonthly income of the women+: 101-300 | 135 | 24\nMonthly income of the women+: ≥301 | 147 | 26.2\nTotal family income+: ≤100 | 12 | 2.1\nTotal family income+: 101-300 | 60 | 10.7\nTotal family income+: ≥301 | 487 | 87.1\nTotal family income+: Mean (±SD) | 1267.9 (±1298.7)\nFamily size: ≤4 | 281 | 50\nFamily size: 5-6 | 162 | 28.8\nFamily size: ≥7 | 119 | 21.1\nFamily size: Mean (±SD) | 5.0 (±2.1)\n*Catholic and Wakefeta; **Welayita, Gurage; ***daily laborer. +Currency is measured in Ethiopian Birr.", "Out of the total respondents, 301 (53.6%) study participants reported that they had never heard the term birth preparedness and complication readiness. The source of information for the majority of respondents (61.0%) was community health workers (Figure 1).\nFigure 1. Sources of information about birth preparedness and complication readiness as reported by study participants in Goba woreda, Oromia region, Ethiopia, April, 2013. N.B.
Percentages do not sum to 100% because multiple responses were possible.\nAmong those respondents who knew about birth preparedness and complication readiness, 415 (87.6%) spontaneously mentioned preparing essential items for clean delivery and the postpartum period, followed by saving money, 330 (69.6%) (Table 2). In summary, only 82 (14.6%) study subjects were knowledgeable about birth preparedness and complication readiness.\nTable 2. Knowledge of respondents about preparation for birth and its complications, Goba woreda, Ethiopia, April, 2013 (N = 474)\nS. No | Variable | Frequency | Percent\n1 | Identify place of delivery | 214 | 45.1\n2 | Save money | 330 | 69.6\n3 | Prepare essential items for clean delivery & postpartum period | 415 | 87.6\n4 | Identify skilled provider | 34 | 7.2\n5 | Being aware of the signs of an emergency & the need to act immediately | 16 | 3.4\n6 | Designating a decision maker on her behalf | 4 | 0.8\n7 | Arranging a way to communicate with a source of help | 33 | 2.7\n8 | Arranging emergency funds | 67 | 5.5\n9 | Identify a mode of transportation | 25 | 5.3\n10 | Arranging blood donors | 15 | 3.2\n11 | Identifying the nearest institution that has 24-hour functioning EmOC services | 58 | 12.2\nN.B. Percentages do not sum to 100% because multiple responses were possible.\nGenerally, only 168 (29.9%) study subjects were prepared for birth and its complications in their last pregnancy, whereas the remaining 394 (70.1%) were not (Table 3).\nTable 3. Practices of respondents on preparation for birth/complications, Goba woreda, Oromia region, Ethiopia, April, 2013\nS. No | Variable | Frequency | Percent\n1 | Identify place of delivery | 432 | 76.9\n2 | Plan skilled assistant during delivery | 297 | 52.8\n3 | Saved money for obstetric emergency | 388 | 69\n4 | Plan a mode of transport to place of delivery during emergency | 353 | 62.8\n5 | Plan blood donor during obstetric emergency | 51 | 9.1\n6 | Detect early signs of an emergency | 177 | 31.5\n7 | Identified institution with 24-hr EmOC services | 267 | 47.5\nN.B. Percentages do not sum to 100% because multiple responses were possible.", "On binary logistic regression, place of residence, occupation, educational level, family size, ANC follow-up, knowledge of danger signs during pregnancy, labour and the postnatal period, as well as gravidity and parity were found to have a statistically significant association with birth preparedness and complication readiness.\nMultiple logistic regression analysis was then computed to control for possible confounders and to explore the association between the selected independent variables and birth preparedness and complication readiness. The odds of birth preparedness and complication readiness were two times greater among urban residents than among rural residents (AOR = 2.01, 95% CI = 1.20, 3.36). Additionally, the educational level of the mother was found to be a predictor of birth preparedness and complication readiness: the odds among women who had attended primary education, and secondary and higher education, were about three times higher than among those who had not attended formal education (AOR = 3.24, 95% CI = 1.75, 6.02 and AOR = 2.88, 95% CI = 1.34, 6.15, respectively).
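A caution when reading statements such as "eight times greater odds": an odds ratio compares odds, not probabilities, and the two diverge as the outcome becomes common. A small illustrative conversion (my own sketch, using the crude ANC counts from Table 4, not the adjusted model):

```python
# Converting between probability and odds shows why "8x greater odds"
# is not the same as "8x more likely".
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

p_no_anc = 3 / 84                 # prepared among the 84 women without ANC follow-up (3.6%)
odds_no_anc = prob_to_odds(p_no_anc)
cor_anc = 14.23                   # crude OR for ANC follow-up reported in Table 4
p_anc = odds_to_prob(odds_no_anc * cor_anc)
print(f"{p_no_anc:.1%} -> {p_anc:.1%}")  # 3.6% -> 34.5%
```

Applying the crude OR of 14.23 to the baseline odds reproduces the 34.5% preparedness observed among women with ANC follow-up, i.e. roughly a ten-fold, not fourteen-fold, increase in probability.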
Furthermore, the odds of birth preparedness and complication readiness were eight times greater among women who had ANC follow-up than among women who did not (AOR = 8.07, 95% CI = 2.41, 27.00). Besides, the odds among women knowledgeable about key danger signs during pregnancy were nearly two times greater than among women who were not knowledgeable (AOR = 1.74, 95% CI = 1.06, 2.88). Similarly, the odds among respondents knowledgeable about key danger signs during the postpartum period were two times greater than among those who lacked this knowledge (AOR = 2.08, 95% CI = 1.20, 3.60) (Table 4).\nTable 4\nVariable | Not prepared (%) | Prepared (%) | COR (95% CI) | AOR (95% CI)\nResidence: Urban | 57 (49.1) | 59 (50.9) | 3.2 (2.09, 4.88) | 2.01 (1.20, 3.36)\nResidence: Rural | 337 (75.6) | 109 (24.4) | 1 | 1\nMarital status: In marital union | 370 (69.7) | 161 (30.3) | 1.49 (0.63, 3.53) |\nMarital status: Not in marital union | 24 (77.4) | 7 (22.6) | 1 |\nOccupation: Housewife | 349 (72.9) | 130 (27) | 1 | 1\nOccupation: Gov’t employee | 9 (42.9) | 12 (57.1) | 3.57 (1.47, 8.69) | 1.39 (0.48, 4.04)\nOccupation: Private employee | 9 (64.3) | 5 (35.7) | 1.49 (0.49, 4.53) | 1.15 (0.33, 4.01)\nOccupation: Merchant | 24 (57) | 18 (42.9) | 2.01 (1.05, 3.83) | 1.45 (0.68, 3.07)\nOccupation: Other*** | 3 (50) | 3 (50) | 2.68 (0.53, 13.4) | 1.69 (0.22, 13.11)\nAge: ≤20 | 70 (74.5) | 24 (25.5) | 1 |\nAge: 21-25 | 125 (65.8) | 65 (34.2) | 1.51 (0.87, 2.63) |\nAge: 26-30 | 119 (72.6) | 45 (27.4) | 1.10 (0.62, 1.96) |\nAge: >30 | 80 (70.2) | 34 (29.8) | 1.24 (0.67, 2.28) |\nEducational level: None | 144 (89.4) | 17 (10.6) | 1 | 1\nEducational level: Read and write | 12 (75) | 4 (25) | 2.82 (0.81, 9.73) | 1.96 (0.49, 7.79)\nEducational level: Primary | 173 (65) | 93 (35) | 4.55 (2.59, 7.99) | 3.24 (1.75, 6.02)\nEducational level: Secondary and above | 65 (54.6) | 54 (45.4) | 7.03 (3.79, 13.06) | 2.88 (1.34, 6.15)\nTotal family income: ≤100 | 11 (91.7) | 1 (8.3) | 3.34 (0.39, 28.24) |\nTotal family income: 101-300 | 46 (76.7) | 14 (23.3) | 5.03 (0.64, 39.37) |\nTotal family income: ≥301 | 334 (68.6) | 153 (31.4) | 1 |\nFamily size: ≤4 | 177 (63.9) | 100 (36.1) | 2.35 (1.40, 3.95) | 1.19 (0.46, 3.04)\nFamily size: 5-6 | 117 (72.7) | 44 (27.3) | 1.57 (0.88, 2.78) | 1.22 (0.61, 2.46)\nFamily size: ≥7 | 96 (80.7) | 23 (19.3) | 1 | 1\nANC follow-up: Yes | 313 (65.5) | 165 (34.5) | 14.23 (4.42, 45.75) | 8.07 (2.41, 27.00)\nANC follow-up: No | 81 (96.4) | 3 (3.6) | 1 | 1\nDanger signs during pregnancy: Not knowledgeable | 302 (78.9) | 81 (21.1) | 1 | 1\nDanger signs during pregnancy: Knowledgeable | 92 (51.4) | 87 (48.6) | 3.52 (2.40, 5.16) | 1.74 (1.06, 2.88)\nDanger signs during labour: Not knowledgeable | 321 (78.3) | 89 (21.7) | 1 | 1\nDanger signs during labour: Knowledgeable | 73 (48) | 79 (52) | 3.90 (2.62, 5.79) | 1.66 (0.97, 2.84)\nDanger signs during postnatal period: Not knowledgeable | 336 (76.7) | 102 (23.3) | 1 | 1\nDanger signs during postnatal period: Knowledgeable | 58 (46.8) | 66 (53.2) | 3.74 (2.47, 5.68) | 2.08 (1.20, 3.60)\nGravidity: 1 | 103 (64) | 58 (36) | 1.86 (1.17, 2.96) | 0.32 (0.05, 2.02)\nGravidity: 2-3 | 142 (68.6) | 65 (31.4) | 1.51 (0.97, 2.36) | 0.39 (0.09, 1.61)\nGravidity: ≥4 | 149 (76.8) | 45 (23.2) | 1 | 1\nParity: First | 109 (64.5) | 60 (35.5) | 1.90 (1.19, 3.0) | 2.64 (0.38, 18.17)\nParity: Second | 76 (61.8) | 47 (38.2) | 2.14 (1.29, 3.54) | 3.91 (0.81, 18.90)\nParity: Third | 67 (77) | 20 (23) | 1.03 (0.56, 1.90) | 1.39 (0.46, 5.62)\nParity: Fourth and above | 142 (77.6) | 41 (22.4) | 1 | 1\nAssociation of selected socio-demographic and obstetric factors of respondents
with preparation for birth and its complication, Goba woreda, Oromia region, Ethiopia, April, 2013\n", "This study showed that only a small proportion of the respondents were prepared for birth and its complications in their last pregnancy. The finding does not agree with the result of the study done in rural Uganda, Mbarara district\n[10]. On the other hand, the result is almost consistent with another study done among pregnant women in Aleta Wondo Woreda, Sidama zone, Southern Ethiopia\n[11], and it is partly in agreement with the finding of a study conducted in Adigrat town, North Ethiopia\n[7]. The difference from the rural Uganda, Mbarara district study could be that, unlike the current study, it included a mixture of study subjects: pregnant women were considered in addition to women who had given birth in the preceding 12 months, so pregnant women might have reported their future plans rather than what they had actually done. On top of this, the majority of respondents in rural Uganda, Mbarara district attended four or more ANC visits, giving them more opportunity to receive information about birth preparedness and complication readiness than the current study subjects, of whom only a few attended four or more ANC visits.\nThe most commonly mentioned practices in the study were identifying a place of delivery and saving money, which may be explained by the fact that both women and their partners knew that money is required to facilitate referral in case of complications, followed by planning a skilled assistant and identifying an institution with 24-hour emergency obstetric care. This is nearly comparable with a community-based study in rural Uganda, Mbarara district, where the majority of respondents identified a skilled provider, saved money, identified means of transport, and bought delivery kits/birth materials during their most recent pregnancy\n[10].\nIn contrast to the practice of BP/CR among women, only a small number of study participants, 82 (14.6%), were knowledgeable about birth preparedness and complication readiness. The implication of this finding is that women may practise some BP/CR components without understanding their rationale; their continued practice in the future is therefore in question because of this knowledge gap.\nRegarding factors affecting birth preparedness and complication readiness, the study found that the educational status of the women and ANC follow-up had statistically significant associations with birth preparedness and complication readiness. This finding is also supported by another community-based study done in Adigrat town, North Ethiopia\n[7]. A possible explanation is that educated women may have better access to information from different sources, such as reading materials, while women with ANC follow-up can receive advice and health information from health professionals, which helps them prepare for birth and its complications.\nAs a strength, this study tried to minimize selection bias by employing a community-based design with a probability sampling method. Additionally, recall bias was reduced by involving only women who had given birth in the 12 months preceding the study.\nOn the other hand, its limitation is that the direction of causation cannot be ascribed to the relationships found, because of the cross-sectional study design.", "Only a small number of respondents were found to be prepared for birth and its complications in their last pregnancy. Place of residence, educational status, ANC follow-up, and knowledge of key danger signs during pregnancy as well as the postpartum period were independent predictors of birth preparedness and complication readiness.\n Recommendation The study revealed that only a few women were well prepared for birth and its complications. Therefore, the Ministry of Health, Oromia region health bureau, Bale zone health department, and Goba woreda health office, as well as other partner organizations working on maternal health issues, should work hard to improve the birth preparedness and complication readiness of women.\nEducation was found to be one of the predictors of BP/CR. Therefore, the Goba woreda health office, in collaboration with other stakeholders such as the Goba woreda education office, should further strengthen efforts to empower women with education.\nAntenatal care follow-up was found to have a statistically significant association with birth preparedness and complication readiness. Therefore, health professionals should give due emphasis during antenatal care to birth preparedness and complication readiness plans, to improve access to skilled and emergency obstetric care.\nEven though the majority of women attended ANC, only a very small number of the respondents were prepared for birth and its complications.
Therefore, any interested researcher should conduct a further study on the quality of ANC, with a focus on birth preparedness and complication readiness, to assess whether health professionals appropriately advise and provide health information concerning birth preparedness and complication readiness.", "The study revealed that only a few women were well prepared for birth and its complications.
Therefore, the Ministry of Health, Oromia region health bureau, Bale zone health department, and Goba woreda health office, as well as other partner organizations working on maternal health issues, should work hard to improve the birth preparedness and complication readiness of women.\nEducation was found to be one of the predictors of BP/CR. Therefore, the Goba woreda health office, in collaboration with other stakeholders such as the Goba woreda education office, should further strengthen efforts to empower women with education.\nAntenatal care follow-up was found to have a statistically significant association with birth preparedness and complication readiness. Therefore, health professionals should give due emphasis during antenatal care to birth preparedness and complication readiness plans, to improve access to skilled and emergency obstetric care.\nEven though the majority of women attended ANC, only a very small number of the respondents were prepared for birth and its complications. Therefore, any interested researcher should conduct a further study on the quality of ANC, with a focus on birth preparedness and complication readiness, to assess whether health professionals appropriately advise and provide health information concerning birth preparedness and complication readiness."
[ null, "methods", null, null, null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusions", null ]
Keywords: Birth preparedness, Complication readiness, Goba woreda, Ethiopia
Background: Globally, maternal mortality remains a public health challenge [1]. The World Health Organization (WHO) estimated that 529,000 women die annually from maternal causes. Ninety-nine percent of these deaths occur in less developed countries. The situation is most dire for women in Sub-Saharan Africa, where one of every 16 women dies of pregnancy-related causes during her lifetime, compared with only 1 in 2,800 women in developed regions [2]. The global maternal mortality ratio (MMR) decreased from 422 (358-505) per 100,000 live births in 1980 to 320 (272-388) in 1990, and was 251 (221-289) in 2008. The yearly rate of decline of the global MMR since 1990 was 1.3% (1.0-1.5). More than 50% of all maternal deaths in 2008 occurred in only six countries (India, Nigeria, Pakistan, Afghanistan, Ethiopia, and the Democratic Republic of the Congo) [3]. In Ethiopia, according to the 2011 Ethiopia Demographic and Health Survey (EDHS) report, the MMR remains high at 676 deaths per 100,000 live births. Only 10% of births in Ethiopia occur in a health facility, while 90% of women deliver at home. Among the reasons given for not delivering at a health facility, more than six women in ten (61%) stated that a health facility delivery was not necessary, and three in every ten (30%) stated that it was not customary [4]. Birth preparedness and complication readiness (BP/CR) is a comprehensive package aimed at promoting timely access to skilled maternal and neonatal services. It promotes active preparation and decision making for delivery by pregnant women and their families. This stems from the fact that every pregnant woman faces a risk of sudden and unpredictable life-threatening complications that could end in death or injury to herself or her infant. BP/CR is a relatively common strategy employed by numerous groups implementing safe motherhood programs around the world [5, 6].
In many societies around the world, cultural beliefs and lack of awareness inhibit preparation in advance for the delivery and the expected baby. Since no action is taken before the delivery, the family tries to act only when labor begins. The majority of pregnant women and their families do not know how to recognize the danger signs of complications. When complications occur, the unprepared family wastes a great deal of time in recognizing the problem, getting organized, getting money, finding transport, and reaching the appropriate referral facility [7]. It is difficult to predict which pregnancy, delivery, or post-delivery period will experience complications; hence, a birth preparedness and complication readiness plan is recommended, with the notion that every pregnancy carries risk [6]. The BP/CR strategy encourages women to be informed of the danger signs of obstetric complications and emergencies, to choose a preferred birth place and attendant at birth, to make advance arrangements with the attendant, to arrange transport to a skilled care site in case of emergency, to save or arrange alternative funds for the costs of skilled and emergency care, and to find a companion to be with the woman at birth or to accompany her to the source of emergency care. Other measures include identifying a compatible blood donor in case of hemorrhage, obtaining permission from the head of the household to seek skilled care in the event that a birth emergency occurs in his absence, and arranging a source of household support to provide temporary family care during her absence [6, 8]. Despite efforts by government as well as nongovernmental organizations working on maternal issues, pregnant women have not been found to be well prepared for birth and its complications. For example, only 47.8% of women who had already given birth in Indore city, India [9], and 35% of pregnant women in Uganda [10] were prepared for birth and its complications.
Additionally, according to research done in some parts of Ethiopia, only 22% of pregnant women in Adigrat town [7] and 17% of pregnant women in Aleta Wondo in the southern region [11] were prepared for birth and its complications. Although some studies have been conducted on this issue in Ethiopia, they mainly cover urban areas, and there are differences in the socio-demographic and cultural conditions of the country's regions. Therefore, the aim of this study was to assess birth preparedness and complication readiness among women of childbearing age in Goba woreda, Oromia region, Ethiopia.

Methods:

Study area and period: The study was conducted in Goba woreda, Bale zone, from April 1-28, 2013. Bale zone is administratively divided into woredas and smaller kebeles. Goba woreda is one of the woredas in Bale zone, Oromia region, Ethiopia, located 444 km from Addis Ababa. Currently, the woreda has 24 rural and 2 urban kebeles. Based on figures from the 2007 census, the woreda has an estimated total population of 73,653, of whom 37,427 are female; 32,916 (44.7%) of the population are urban dwellers [12]. The estimated numbers of women of reproductive age and of pregnant women in the woreda were 16,277 and 2,725, respectively. The woreda has one health post in each kebele, 4 health centers, one hospital, and a blood bank. One ambulance from the hospital, as well as the Red Cross society, serves the community by providing transportation from home to the health facility ([13], Goba woreda health office: Annual health report, unpublished).

Study design: A community based cross sectional study was conducted. Women who gave birth in the last 12 months, regardless of their birth outcome, were included in the study; women who were severely ill or mentally or physically incapable of being interviewed were excluded.

Sample size and sampling procedure: Sample size: The study employed the single population proportion sample size determination formula. A proportion (p) of 22% for birth preparedness and complication readiness [7], a 95% confidence interval, and a 5% margin of error were used to calculate the sample size (where n is the desired sample size, Z is the value of the standard normal variable at the 95% confidence interval, p is the proportion of birth preparedness and complication readiness, and d is the 5% margin of error). Since multistage sampling was employed, the calculated sample was multiplied by 2 for the design effect, to control for sampling effects arising from using a method other than simple random sampling, and a 10% contingency for non-respondents was added.
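As a cross-check, the sample-size arithmetic described above can be reproduced in a few lines. This is a sketch of the standard single population proportion formula n = Z²p(1−p)/d² with the values stated in the text; the paper does not state its rounding convention, so ceiling at each step lands on 581 rather than the reported 580:

```python
import math

# Single population proportion formula: n = Z^2 * p * (1 - p) / d^2
Z = 1.96   # standard normal value for a 95% confidence interval
p = 0.22   # proportion prepared for birth, from the Adigrat study [7]
d = 0.05   # margin of error

n_base = math.ceil(Z**2 * p * (1 - p) / d**2)  # 263.7 -> 264
n_design = n_base * 2                          # design effect of 2 -> 528
n_final = math.ceil(n_design * 1.10)           # +10% non-response -> 581 (~580 reported)
print(n_base, n_design, n_final)
```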
After all, the final sample size became 580.

Sampling procedure: Multistage sampling was employed to select study subjects. First, all kebeles in the woreda were stratified into urban and rural. Goba woreda comprises 24 rural kebeles and one town administration (consisting of two urban kebeles), for a total of 26 kebeles. To obtain a representative and adequate sample of kebeles for the woreda, one third of the kebeles were selected; on this basis, 9 kebeles were chosen by simple random sampling from the total of 26. According to the strata, i.e., urban and rural residence, one kebele from the 2 urban kebeles and 8 kebeles from the 24 rural kebeles were selected using simple random sampling. Then, the total sample size (n = 580) was allocated proportionally to the size of the selected kebeles. Finally, systematic sampling was employed to select the study subjects in each kebele until the desired number was obtained. To select the first household in each kebele, a landmark common to almost all kebeles, the health post, was identified. A pen was spun, and the direction pointed by its tip was followed.
To select the first household, one of the houses falling within the initial sampling interval of each kebele was selected by simple random sampling (lottery method). Subsequent households were then selected by systematic sampling, i.e., every Kth household, with K calculated separately for each kebele because the number of households varies from kebele to kebele. When a study participant could not be interviewed for some reason (e.g., absenteeism), up to three attempts were made to interview her; after that, she was considered a non-respondent. If a household did not include a woman who met the inclusion criteria, the next household was substituted. If a household contained more than one candidate, one of them was chosen randomly by the lottery method.

Operational and term definitions:
Birth preparedness and complication ready: a woman was considered prepared for birth and its complications if she identified four or more components from the birth preparedness and complication readiness items [7].
Skilled provider: persons with midwifery skills (physicians, nurses, midwives, and health officers) who can manage normal deliveries and diagnose, manage, or refer obstetric complications.
Knowledge of key danger signs of pregnancy: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of pregnancy; otherwise, she was considered not knowledgeable.
Knowledge of key danger signs of labor: a woman was considered knowledgeable if she spontaneously mentioned at least three key danger signs of labor; otherwise, not knowledgeable.
Knowledge of key danger signs of the postpartum period: a woman was considered knowledgeable if she spontaneously mentioned at least two of the three key danger signs of the postpartum period; otherwise, not knowledgeable.
Knowledge of birth preparedness and complication readiness: a woman was considered knowledgeable if she spontaneously mentioned at least 4 items from the knowledge of birth preparedness and complication readiness questions; otherwise, not knowledgeable.

Data collection tool and procedure: A validated structured questionnaire adapted from the survey tools developed by the JHPIEGO Maternal and Neonatal Health program was used. Information on socio-demographic characteristics was collected. Additionally, women were asked about their knowledge of key danger signs during pregnancy, delivery, and the postpartum period, and about actual problems they experienced that required referral.
Furthermore, the study subjects were asked about their BP/CR practice, with their spontaneous answers used to check whether they had practiced the operationally defined BP/CR components. These were: identifying the place of delivery, planning for a skilled attendant during delivery, saving money for obstetric emergencies, planning the mode of transport to the place of delivery during an emergency, planning for a blood donor during an obstetric emergency, detecting early signs of an emergency, and identifying an institution with 24-hour emergency obstetric care services. Eight diploma nurses fluent in Amharic and Afan Oromo collected the data through face-to-face interviews.
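The seven BP/CR components listed above, together with the operational definition (a woman is classified as prepared if she identified four or more components), amount to a simple scoring rule. A minimal sketch, with shorthand component names of our own:

```python
# Shorthand names for the seven operationally defined BP/CR components
BPCR_COMPONENTS = {
    "place_of_delivery", "skilled_attendant", "money_saved",
    "transport_plan", "blood_donor", "danger_sign_detection",
    "facility_with_24h_emoc",
}

def is_prepared(practiced: set, threshold: int = 4) -> bool:
    """Prepared for birth and its complications if >= 4 of the 7 components."""
    return len(practiced & BPCR_COMPONENTS) >= threshold

print(is_prepared({"place_of_delivery", "money_saved", "transport_plan"}))  # False
print(is_prepared({"place_of_delivery", "money_saved", "transport_plan",
                   "skilled_attendant"}))                                   # True
```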
In addition to the principal investigators, four health professionals holding Bachelor of Science (BSc) degrees were recruited to supervise the data collection process.

Data quality control: Data quality was assured by using a validated questionnaire and by translation, back-translation, and pretesting of the questionnaire. The questionnaire was translated from English into Amharic and Afan Oromo by one translator and back into English by other translators, who were health professionals, to check its consistency. The pretest was done on 5% of the total sample size in Robe town. Content and face validity were checked by a reproductive health expert. Additionally, after the pretest, Cronbach's alpha was calculated to check the internal consistency of the tool; its value for the birth preparedness and complication readiness items was 0.86. Data collectors and supervisors were trained for two days on the study instrument and data collection procedure. The principal investigator and the supervisors strictly followed the overall data collection activities.
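Cronbach's alpha, reported above as 0.86 for the BP/CR items, follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch on made-up 0/1 responses (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per questionnaire item."""
    k = len(items)
    item_vars = sum(pvariance(scores) for scores in items)
    totals = [sum(row) for row in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# hypothetical 0/1 responses from five respondents to three items
items = [[1, 1, 0, 1, 0],
         [1, 0, 0, 1, 0],
         [1, 1, 0, 1, 1]]
print(round(cronbach_alpha(items), 3))
```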
Data processing and analysis: The data were checked for completeness and consistency, then cleaned, coded, and entered into a computer using the Statistical Package for the Social Sciences (SPSS), Windows version 16.0. Descriptive statistics were computed to determine the prevalence of birth preparedness and complication readiness. Binary logistic regression analysis was performed to identify factors associated with birth preparedness and complication readiness; then, to control for possible confounders, multiple logistic regression was computed with a 95% confidence interval. A p value < 0.05 in the binary logistic regression was used both to select candidate variables for the multiple logistic regression analysis and to declare variables statistically significant.

Ethical consideration: The proposal was approved by the Ethical Review Committee of the College of Medicine and Health Sciences of Madawalabu University. Furthermore, letters of permission were obtained from the Bale zone health department and the Goba woreda health office. Consent was obtained from the study subjects after explaining the study objectives and procedures, and their right to refuse to participate in the study at any time was assured.
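The logistic-regression analysis described under data processing reports associations as odds ratios. As a minimal illustration of the underlying measure, a crude odds ratio with a 95% Wald confidence interval can be computed from a 2×2 table; the counts below are hypothetical, not study data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a=exposed cases, b=exposed non-cases,
    c=unexposed cases, d=unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: prepared vs not prepared, by ANC follow-up
print(odds_ratio_ci(40, 160, 20, 342))
```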
Sample size and sampling procedure: Sample size The study employed single population proportion sample size determination formula. Twenty two percent proportion (p) of birth preparedness and complication readiness [7] with 95% CI, and 5% marginal error (where n is desired sample size, Z is value of standard normal variable at 95% confidence interval and, p is proportion of Birth preparedness and complication readiness and d is marginal error which is 5%) was considered to calculate the sample size. Since multistage sampling was employed, the calculated sample was multiplied by 2 for design effects to control the effect of sampling that could happen due to using sampling method other than simple random sampling, and 10% contingency for non respondents were also added. After all, the final sample size became 580. The study employed single population proportion sample size determination formula. Twenty two percent proportion (p) of birth preparedness and complication readiness [7] with 95% CI, and 5% marginal error (where n is desired sample size, Z is value of standard normal variable at 95% confidence interval and, p is proportion of Birth preparedness and complication readiness and d is marginal error which is 5%) was considered to calculate the sample size. Since multistage sampling was employed, the calculated sample was multiplied by 2 for design effects to control the effect of sampling that could happen due to using sampling method other than simple random sampling, and 10% contingency for non respondents were also added. After all, the final sample size became 580. Sample size: The study employed single population proportion sample size determination formula. 
Twenty two percent proportion (p) of birth preparedness and complication readiness [7] with 95% CI, and 5% marginal error (where n is desired sample size, Z is value of standard normal variable at 95% confidence interval and, p is proportion of Birth preparedness and complication readiness and d is marginal error which is 5%) was considered to calculate the sample size. Since multistage sampling was employed, the calculated sample was multiplied by 2 for design effects to control the effect of sampling that could happen due to using sampling method other than simple random sampling, and 10% contingency for non respondents were also added. After all, the final sample size became 580. Sampling procedure: Multistage sampling was employed to select study subjects. First, all kebeles in the woreda were stratified in to urban and rural. Goba Woreda constitutes 24 rural kebeles and one town administrative (consisting of two urban kebeles) that makes up a total of 26 kebeles. To determine representative sample of kebeles for the woreda and got adequate sample, 1/3rd of the kebeles were selected. Based on the above calculation, 9 kebeles were chosen using simple random sampling from the total 26 kebeles. According to the strata i.e. urban and rural residence, a kebele from the 2 urban and 8 kebeles from the 24 rural kebeles were selected using simple random sampling technique. Then, the total sample size (n =580) was allocated proportionally to the size of the selected kebeles. Finally, systematic sampling was employed to select the study subjects in each kebele until the desired numbers of sample was obtained. To select the first house hold in each kebele, first a land mark which is common in almost all kebeles that is health post was identified. A pen was spin and the direction pointed by the tip of the pen was followed. 
To select the first house hold, one of the house which was included under the initial sampling interval of each kebele was selected by simple random sampling; lottery method. Then, the next house hold was selected through systematic sampling technique that is every Kth interval household which was calculated for each kebele because the numbers of households vary from one kebele to another kebele. In a case when the study participants were not be able to be interviewed for some reason (e.g. absenteeism), attempt was made for three times to interview the respondent and after all, they were considered as non respondents. On the other hand, if the household did not include women who meet the inclusion criteria, the next household was substituted. Moreover, if the household contained more than one candidate, one of them was taken randomly by employing lottery method. Operational and term definitions: Birth preparedness and complication ready: A woman was considered as prepared for birth and its complication if she identified four and more components from birth preparedness complication readiness item [7]. Skilled provider: are persons with midwifery skills (Physicians, Nurses, Midwives, and Health Officers) who can manage normal deliveries and diagnose, manage or refer obstetric complications. Knowledge on key danger signs of pregnancy: a woman was considered as knowledgeable if she spontaneously mentioned at least three key danger signs of pregnancy otherwise not knowledgeable. Knowledge on key danger signs of labor: a woman was considered as knowledgeable if she spontaneously mentioned at least three key danger signs of labor otherwise not knowledgeable. Knowledge on key danger signs of post partum: A woman was considered knowledgeable if she spontaneously mentioned at least two out of the three key danger signs of post partum period otherwise not knowledgeable. 
Knowledge of birth preparedness and complication readiness: A woman was considered knowledgeable if she can spontaneously mentioned at least 4 item of knowledge of birth preparedness and complication readiness question otherwise not knowledgeable. Data collection tool and procedure: Validated structured questionnaire adapted from the survey tools developed by JHPIEGO Maternal Neonatal Health program was used. Information on socio demographic characteristic was collected. Additionally, knowledge of key danger signs during pregnancy, delivery and postpartum period and its actual problem which require referral were asked. Furthermore, the study subjects were asked their BP/CR practice waiting their spontaneous answer to check whether they practiced those operationally defined BP/CR component. These were identifying place of delivery, plan of skilled assistant during delivery, saving money for obstetric emergency, plan of mode of transport to place of delivery during emergency, plan of blood donor during obstetric emergency, detecting early signs of emergency and identifying institution with 24 hour emergency obstetric care services. Eight diploma Nurses who are fluent in speaking Amharic and Afan Oromo were involved in the data collection. They collected the data through face to face interview. In addition to the principal investigators, four Bachelor of Science degree (BSc) holder health professionals were recruited and supervised the data collection process. Data quality control: The Qualities of data were assured by using validated questionnaire, translation, retranslation and pretesting of the questionnaire. The questionnaire was translated from English language to Amharic and Afan Oromo by a translator and back to English language by second other translators who were health professionals to compare its consistency. The pretest was done on 5% of the total sample size in Robe town. Content and face validity were checked by reproductive health expert. 
Additionally, after the pretest, Cronbach's alpha was calculated to check the internal consistency of the tool; its value for the birth preparedness and complication readiness items was 0.86. Data collectors and supervisors were trained for two days on the study instrument and data collection procedures. The principal investigator and the supervisors closely followed the overall data collection activities. Data processing and analysis: The data were checked for completeness and consistency, then cleaned, coded, and entered into a computer using the Statistical Package for the Social Sciences (SPSS) for Windows, version 16.0. Descriptive statistics were computed to determine the prevalence of birth preparedness and complication readiness. Binary logistic regression analysis was performed to identify factors associated with birth preparedness and complication readiness. Then, to control for possible confounders, multiple logistic regression was computed with 95% confidence intervals. A p value < 0.05 on binary logistic regression was used both to select candidate variables for the multiple logistic regression analysis and to declare variables statistically significant. Ethical consideration: The proposal was approved by the Ethical Review Committee of the College of Medicine and Health Sciences of Madawalabu University. Furthermore, letters of permission were obtained from the Bale zone health department and Goba woreda health office. Consent was obtained from the study subjects after explaining the study objectives and procedures, and their right to refuse or to withdraw from the study at any time was assured. Results: Of the 580 women planned for the study, 8 (1.4%) refused to respond, 10 (1.7%) were not found after three visits, and 562 were successfully interviewed, yielding a response rate of 97%.
Socio-demographic characteristics

The mean age of the respondents was 26.6 (SD ± 5.9) years. The largest group of respondents, 33.8%, was in the 21-25 year age group, and the fewest were aged 20 years or younger. Muslim and Orthodox Tewahido were the dominant religions, each accounting for 49.1%. Most participants, 266 (47.3%), were educated up to primary school, followed by those who had not attended formal education, 161 (28.6%). Four hundred ten (73%) respondents were Oromo by ethnicity, followed by Amhara, 139 (24.7%). The vast majority (85.2%) of respondents were housewives, and 531 (94.5%) were married. The mean family size and monthly income of the participants were 5.0 (SD ± 2.1) and 1267.9 (SD ± 1298.7) Ethiopian Birr, respectively. About 224 (39.9%) study subjects had no financial income of their own (Table 1).

Table 1 Socio-demographic characteristics of the respondents, Goba woreda, Oromia region, Ethiopia, April, 2013

Variable: Frequency (Percent)
Residence: Urban 116 (20.6); Rural 446 (79.4)
Age: ≤ 20 94 (16.7); 21-25 190 (33.8); 26-30 164 (29.2); > 30 114 (20.3); Mean (± SD) 26.6 (± 5.9)
Religion: Muslim 276 (49.1); Orthodox 276 (49.1); Protestant 7 (1.2); Other* 3 (0.5)
Marital status: Married/in union 531 (94.5); Single 14 (2.1); Widowed 8 (1.4); Divorced 6 (1.1); Separated 3 (0.5)
Ethnicity: Oromo 410 (73); Amhara 139 (24.7); Tigre 2 (0.4); Gamo 2 (0.4); Other** 9 (1.6)
Educational level: None 161 (28.6); Read and write 16 (2.8); Primary 266 (47.3); Secondary and above 119 (21.2)
Occupation: Housewife 479 (85.2); Gov't employee 21 (3.7); Private employee 14 (2.5); Merchant 42 (7.7); Other*** 6 (1.1)
Monthly income of the women+: None 224 (39.9); (None-100] 56 (10); 101-300 135 (24); ≥ 301 147 (26.2)
Total family income+: ≤ 100 12 (2.1); 101-300 60 (10.7); ≥ 301 487 (87.1); Mean (± SD) 1267.9 (± 1298.7)
Family size: ≤ 4 281 (50); 5-6 162 (28.8); ≥ 7 119 (21.1); Mean (± SD) 5.0 (± 2.1)

*Catholic and Wakefeta; **Welayita, Gurage; ***daily laborer. +Currency is measured in Ethiopian Birr.
Birth preparedness and complication readiness

Out of the total respondents, 301 (53.6%) reported that they had never heard the term birth preparedness and complication readiness. For the majority of respondents, 61.0%, the source of information was community health workers (Figure 1).

Figure 1 Sources of information about birth preparedness and complication readiness as reported by study participants in Goba woreda, Oromia region, Ethiopia, April, 2013. N.B. Percentages do not sum to 100% because multiple responses were possible.

Among respondents who knew about birth preparedness and complication readiness, 415 (87.6%) spontaneously mentioned preparing essential items for clean delivery and the postpartum period, followed by saving money, 330 (69.6%) (Table 2). In summary, only 82 (14.6%) study subjects were knowledgeable about birth preparedness and complication readiness.

Table 2 Knowledge of respondents about preparation for birth and its complications, Goba woreda, Ethiopia, April, 2013 (N = 474)

S. No. Variable: Frequency (Percent)
1 Identify place of delivery: 214 (45.1)
2 Save money: 330 (69.6)
3 Prepare essential items for clean delivery & postpartum period: 415 (87.6)
4 Identify skilled provider: 34 (7.2)
5 Being aware of the signs of an emergency & the need to act immediately: 16 (3.4)
6 Designating a decision maker on her behalf: 4 (0.8)
7 Arranging a way to communicate with a source of help: 33 (2.7)
8 Arranging emergency funds: 67 (5.5)
9 Identify a mode of transportation: 25 (5.3)
10 Arranging blood donors: 15 (3.2)
11 Identifying the nearest institution that has 24-hour functioning EmOC services: 58 (12.2)
N.B. Percentages do not sum to 100% because multiple responses were possible.

Generally, only 168 (29.9%) study subjects were prepared for birth and its complications in their last pregnancy, whereas the remaining 394 (70.1%) were not (Table 3).

Table 3 Practices of respondents on preparation for birth/complications, Goba woreda, Oromia region, Ethiopia, April, 2013

S. No. Variable: Frequency (Percent)
1 Identified place of delivery: 432 (76.9)
2 Planned skilled assistant during delivery: 297 (52.8)
3 Saved money for obstetric emergency: 388 (69)
4 Planned a mode of transport to place of delivery during emergency: 353 (62.8)
5 Planned blood donor during obstetric emergency: 51 (9.1)
6 Detected early signs of an emergency: 177 (31.5)
7 Identified institution with 24-hr EmOC services: 267 (47.5)
N.B. Percentages do not sum to 100% because multiple responses were possible.
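The "prepared" classification behind the 168 (29.9%) summary figure follows the operational definition in the methods: practising four or more of the seven BP/CR components. That scoring rule can be expressed as a small function; the component names and the example record below are illustrative, not from the dataset:

```python
# The seven operationally defined BP/CR practice components
# (names are hypothetical labels for illustration).
COMPONENTS = (
    "identified_place_of_delivery", "planned_skilled_assistant",
    "saved_money", "planned_transport", "planned_blood_donor",
    "detected_early_signs", "identified_24hr_emoc_institution",
)

def is_prepared(responses):
    """A woman counts as prepared if she practised >= 4 components.
    responses: dict mapping component name to True/False."""
    return sum(bool(responses.get(c)) for c in COMPONENTS) >= 4

# Illustrative respondent record with four components practised.
woman = {"identified_place_of_delivery": True, "saved_money": True,
         "planned_transport": True, "planned_skilled_assistant": True}
print(is_prepared(woman))  # True
```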
Factors associated with birth preparedness and complication readiness

On binary logistic regression, place of residence, occupation, educational level, family size, ANC follow up, knowledge of danger signs during pregnancy, labour, and the postnatal period, as well as gravidity and parity, showed statistically significant associations with birth preparedness and complication readiness. Multiple logistic regression analysis was then computed to control for possible confounders and to explore the association between the selected independent variables and birth preparedness and complication readiness. The odds of birth preparedness and complication readiness were two times greater among urban residents than among rural residents (AOR = 2.01, 95% CI = 1.20, 3.36). The mother's educational level was also a predictor of birth preparedness and complication readiness: the odds among women who had attended primary, and secondary or higher, education were three and nearly three times those of women who had not attended formal education (AOR = 3.24, 95% CI = 1.75, 6.02 and AOR = 2.88, 95% CI = 1.34, 6.15, respectively). Furthermore, the odds of birth preparedness and complication readiness were eight times greater among women who had ANC follow up than among women who did not (AOR = 8.07, 95% CI = 2.41, 27.00). The odds among women knowledgeable about key danger signs during pregnancy were nearly two times those of women who were not knowledgeable (AOR = 1.74, 95% CI = 1.06, 2.88). Similarly, the odds among respondents knowledgeable about key danger signs during the postpartum period were two times those of respondents who lacked this knowledge (AOR = 2.08, 95% CI = 1.20, 3.60) (Table 4).

Table 4 Association of selected socio-demographic and obstetric factors of respondents with preparation for birth and its complications, Goba woreda, Oromia region, Ethiopia, April, 2013

Variable: Not prepared (%) | Prepared (%) | COR (95% CI) | AOR (95% CI)
Residence — Urban: 57 (49.1) | 59 (50.9) | 3.2 (2.09, 4.88) | 2.01 (1.20, 3.36); Rural: 337 (75.6) | 109 (24.4) | 1 | 1
Marital status — In marital union: 370 (69.7) | 161 (30.3) | 1.49 (0.63, 3.53); Not in marital union: 24 (77.4) | 7 (22.6) | 1
Occupation — Housewife: 349 (72.9) | 130 (27) | 1 | 1; Gov't employee: 9 (42.9) | 12 (57.1) | 3.57 (1.47, 8.69) | 1.39 (0.48, 4.04); Private employee: 9 (64.3) | 5 (35.7) | 1.49 (0.49, 4.53) | 1.15 (0.33, 4.01); Merchant: 24 (57) | 18 (42.9) | 2.01 (1.05, 3.83) | 1.45 (0.68, 3.07); Other***: 3 (50) | 3 (50) | 2.68 (0.53, 13.4) | 1.69 (0.22, 13.11)
Age — ≤ 20: 70 (74.5) | 24 (25.5) | 1; 21-25: 125 (65.8) | 65 (34.2) | 1.51 (0.87, 2.63); 26-30: 119 (72.6) | 45 (27.4) | 1.10 (0.62, 1.96); > 30: 80 (70.2) | 34 (29.8) | 1.24 (0.67, 2.28)
Educational level — None: 144 (89.4) | 17 (10.6) | 1 | 1; Read and write: 12 (75) | 4 (25) | 2.82 (0.81, 9.73) | 1.96 (0.49, 7.79); Primary: 173 (65) | 93 (35) | 4.55 (2.59, 7.99) | 3.24 (1.75, 6.02); Secondary and above: 65 (54.6) | 54 (45.4) | 7.03 (3.79, 13.06) | 2.88 (1.34, 6.15)
Total family income — ≤ 100: 11 (91.7) | 1 (8.3) | 3.34 (0.39, 28.24); 101-300: 46 (76.7) | 14 (23.3) | 5.03 (0.64, 39.37); ≥ 301: 334 (68.6) | 153 (31.4) | 1
Family size — ≤ 4: 177 (63.9) | 100 (36.1) | 2.35 (1.40, 3.95) | 1.19 (0.46, 3.04); 5-6: 117 (72.7) | 44 (27.3) | 1.57 (0.88, 2.78) | 1.22 (0.61, 2.46); ≥ 7: 96 (80.7) | 23 (19.3) | 1 | 1
ANC follow up — Yes: 313 (65.5) | 165 (34.5) | 14.23 (4.42, 45.75) | 8.07 (2.41, 27.00); No: 81 (96.4) | 3 (3.6) | 1 | 1
Knowledge of danger signs during pregnancy — Not knowledgeable: 92 (51.4) | 87 (48.6) | 1 | 1; Knowledgeable: 302 (78.9) | 81 (21.1) | 3.52 (2.40, 5.16) | 1.74 (1.06, 2.88)
Knowledge of danger signs during labour — Not knowledgeable: 321 (78.3) | 89 (21.7) | 1 | 1; Knowledgeable: 73 (48) | 79 (52) | 3.90 (2.62, 5.79) | 1.66 (0.97, 2.84)
Knowledge of danger signs during postnatal period — Not knowledgeable: 336 (76.7) | 102 (23.3) | 1 | 1; Knowledgeable: 58 (46.8) | 66 (53.2) | 3.74 (2.47, 5.68) | 2.08 (1.20, 3.60)
Gravidity — 1: 103 (64) | 58 (36) | 1.86 (1.17, 2.96) | 0.32 (0.05, 2.02); 2-3: 142 (68.6) | 65 (31.4) | 1.51 (0.97, 2.36) | 0.39 (0.09, 1.61); ≥ 4: 149 (76.8) | 45 (23.2) | 1 | 1
Parity — First: 109 (64.5) | 60 (35.5) | 1.90 (1.19, 3.0) | 2.64 (0.38, 18.17); Second: 76 (61.8) | 47 (38.2) | 2.14 (1.29, 3.54) | 3.91 (0.81, 18.90); Third: 67 (77) | 20 (23) | 1.03 (0.56, 1.90) | 1.39 (0.46, 5.62); Fourth and above: 142 (77.6) | 41 (22.4) | 1 | 1
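As a sanity check on the crude odds ratios in Table 4, a COR and Wald 95% CI can be recomputed from the raw counts. Shown here for residence (urban: 59 prepared, 57 not prepared; rural: 109 prepared, 337 not prepared), assuming the usual 2 × 2 formula with a normal approximation on the log-odds scale:

```python
import math

def crude_or(a, b, c, d):
    """Odds ratio for a 2x2 table (a, b = exposed prepared / not
    prepared; c, d = unexposed prepared / not prepared) with a
    Wald 95% confidence interval on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = crude_or(59, 57, 109, 337)
print(round(or_, 1), round(lo, 2), round(hi, 2))  # 3.2 2.1 4.89
```

This reproduces the reported COR of 3.2 (2.09, 4.88) for residence up to rounding.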
Discussion: This study showed that only a small proportion of the respondents had been prepared for birth and its complications in their last pregnancy. This finding does not agree with the result of a study done in rural Uganda, Mbarara district [10]. On the other hand, the result is broadly consistent with another study done among pregnant women in Aleta Wondo woreda, Sidama zone, Southern Ethiopia [11], and is close to the finding of a study conducted in Adigrat town, North Ethiopia [7]. The difference from the rural Uganda (Mbarara district) study could be that, unlike the current study, the former included a mixture of study subjects: pregnant women in addition to women who had given birth in the previous 12 months. Pregnant women might report their future plans rather than what they had actually done. Moreover, the majority of respondents in Mbarara district attended ANC for four or more visits, giving them more opportunity to receive information on birth preparedness and complication readiness, whereas only a few of the current study subjects attended four or more ANC visits. The most commonly mentioned practices in this study were identifying a place of delivery; saving money, which may be explained by the fact that both women and their partners may know that money is required to facilitate referral in case of complications; planning a skilled assistant; and identifying an institution with 24-hour emergency obstetric care. This is nearly comparable with the community-based study in rural Uganda, Mbarara district, where the majority of respondents identified a skilled provider, saved money, identified a means of transport, and bought delivery kits/birth materials during their most recent pregnancy [10].
In contrast to the practice of BP/CR, only a small number of study participants, 82 (14.6%), were knowledgeable about birth preparedness and complication readiness. This implies that women may practice some BP/CR components without understanding their rationale, so their sustained practice in the future is questionable given this knowledge gap. Regarding factors affecting birth preparedness and complication readiness, the study found that women's educational status and ANC follow-up had statistically significant associations with BP/CR. This finding is supported by another community-based study done in Adigrat town, North Ethiopia [7]. A plausible explanation is that educated women have better access to information from different sources, such as reading materials, and women with ANC follow-up can receive advice and health information from health professionals, which helps them prepare for birth and its complications. As a strength, this study tried to minimize selection bias by employing a community-based design with a probability sampling method. Additionally, recall bias was reduced by involving only women who had given birth in the 12 months preceding the study. Its limitation is that the direction of causation cannot be ascribed to the relationships found, because of the cross-sectional study design. Conclusion: Only a small number of respondents were found to be prepared for birth and its complications in their last pregnancy. Place of residence, educational status, ANC follow-up, and knowledge of key danger signs during pregnancy and the postpartum period were independent predictors of birth preparedness and complication readiness.
Recommendation: The study revealed that only a few women were well prepared for birth and its complications. Therefore, the Ministry of Health, the Oromia region health bureau, the Bale zone health department, the Goba woreda health office, and other partner organizations working on maternal health should work to improve women's birth preparedness and complication readiness. Education was found to be one of the predictors of BP/CR; therefore, the Goba woreda health office, in collaboration with stakeholders such as the Goba woreda education office, should further strengthen efforts to empower women through education. Antenatal care follow-up had a statistically significant association with birth preparedness and complication readiness; therefore, health professionals should give due emphasis during antenatal care to birth preparedness and complication readiness plans, to improve access to skilled and emergency obstetric care. Even though the majority of women attended ANC, only a very small number of respondents were prepared for birth and its complications; further study on the quality of ANC, focusing on birth preparedness and complication readiness, is needed to assess whether health professionals appropriately advise and provide health information on the topic.
Background: Birth preparedness and complication readiness is the process of planning for normal birth and anticipating the actions needed in case of an emergency. It is also a strategy to promote the timely use of skilled maternal care, especially during childbirth, based on the theory that preparing for childbirth reduces delays in obtaining this care. Therefore, the aim of this study was to assess birth preparedness and complication readiness among women of childbearing age in Goba woreda, Oromia region, Ethiopia. Methods: A community-based cross-sectional study was conducted in Goba woreda, Oromia region, Ethiopia. Multistage sampling was employed. Descriptive, binary and multiple logistic regression analyses were conducted. Tests were declared statistically significant at P < 0.05. Results: Only 29.9% of the respondents were prepared for birth and its complications, and only 82 (14.6%) study participants were knowledgeable about birth preparedness and complication readiness. Variables with a statistically significant association with birth preparedness and complication readiness were attending up to primary education (AOR = 3.24, 95% CI = 1.75, 6.02), attending secondary and higher education (AOR = 2.88, 95% CI = 1.34, 6.15), antenatal care follow-up (AOR = 8.07, 95% CI = 2.41, 27.00), knowledge of key danger signs during pregnancy (AOR = 1.74, 95% CI = 1.06, 2.88), and knowledge of key danger signs during the postpartum period (AOR = 2.08, 95% CI = 1.20, 3.60). Conclusions: Only a small number of respondents were prepared for birth and its complications. Furthermore, the vast majority of women were not knowledgeable about birth preparedness and complication readiness. Residence, educational status, ANC follow-up, and knowledge of key danger signs during pregnancy and the postpartum period were independent predictors of birth preparedness and complication readiness.
Background: Globally, maternal mortality remains a public health challenge [1]. The World Health Organization (WHO) estimated that 529,000 women die annually from maternal causes; ninety-nine percent of these deaths occur in less developed countries. The situation is most dire for women in Sub-Saharan Africa, where one of every 16 women dies of pregnancy-related causes during her lifetime, compared with only 1 in 2,800 women in developed regions [2]. The global maternal mortality ratio (MMR) decreased from 422 (358-505) in 1980 to 320 (272-388) in 1990, and was 251 (221-289) per 100,000 live births in 2008. The yearly rate of decline of the global MMR since 1990 was 1.3% (1.0-1.5). More than 50% of all maternal deaths in 2008 occurred in only six countries (India, Nigeria, Pakistan, Afghanistan, Ethiopia, and the Democratic Republic of the Congo) [3]. In Ethiopia, according to the 2011 Ethiopia Demographic and Health Survey (EDHS) report, the MMR remains high at 676 deaths per 100,000 live births. Yet only 10% of births in Ethiopia occur in a health facility, while 90% of women deliver at home. Among the reasons given for not delivering at a health facility, more than six women in ten (61%) stated that a facility delivery was not necessary, and three in every ten (30%) stated that it was not customary [4]. Birth preparedness and complication readiness (BP/CR) is a comprehensive package aimed at promoting timely access to skilled maternal and neonatal services. It promotes active preparation and decision making for delivery by pregnant women and their families. This stems from the fact that every pregnant woman faces the risk of sudden and unpredictable life-threatening complications that could end in death or injury to herself or to her infant. BP/CR is a relatively common strategy employed by numerous groups implementing safe motherhood programs around the world [5, 6].
In many societies, cultural beliefs and lack of awareness inhibit preparation in advance for delivery and the expected baby. Since no action is taken before delivery, the family tries to act only when labor begins. The majority of pregnant women and their families do not know how to recognize the danger signs of complications. When complications occur, the unprepared family wastes a great deal of time recognizing the problem, getting organized, getting money, finding transport and reaching the appropriate referral facility [7]. It is difficult to predict which pregnancy, delivery or post-delivery period will experience complications; hence a birth preparedness and complication readiness plan is recommended under the notion that every pregnancy carries risk [6]. The BP/CR strategy encourages women to be informed of the danger signs of obstetric complications and emergencies, choose a preferred birth place and birth attendant, make advance arrangements with the attendant, arrange transport to a skilled care site in case of emergency, save or arrange alternative funds for the costs of skilled and emergency care, and find a companion to be with the woman at birth or to accompany her to an emergency care source. Other measures include identifying a compatible blood donor in case of hemorrhage, obtaining permission from the head of household to seek skilled care in the event that a birth emergency occurs in his absence, and arranging a source of household support to provide temporary family care during the woman's absence [6, 8]. Despite various efforts from government and nongovernmental organizations working on maternal issues, pregnant women have not been found to be well prepared for birth and its complications. For example, only 47.8% of women who had already given birth in Indore city, India [9], and 35% of pregnant women in Uganda were prepared for birth and its complications [10].
Additionally, according to research done in some parts of Ethiopia, only 22% of pregnant women in Adigrat town [7] and 17% of pregnant women in Aleta Wondo in the southern region [11] were prepared for birth and its complications. Although some studies have been conducted on this issue in Ethiopia, they mainly cover urban areas, and there are differences in the socio-demographic and cultural conditions of the country's different regions. Therefore, the aim of this study was to assess birth preparedness and complication readiness among women of childbearing age in Goba woreda, Oromia region, Ethiopia.
Keywords: Birth preparedness; Complication readiness; Goba woreda; Ethiopia
MeSH terms: Adult; Cholestyramine Resin; Cross-Sectional Studies; Educational Status; Ethiopia; Female; Health Knowledge, Attitudes, Practice; Humans; Parturition; Pregnancy; Pregnancy Complications; Prenatal Care; Rural Population; Urban Population; Young Adult
Reviewing the connection between speech and obstructive sleep apnea.
PMID: 26897500
Background: Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). The altered UA structure or function in OSA speakers has led to the hypothesis that automatic analysis of speech could support OSA assessment. In this paper we critically review several approaches using speech analysis and machine learning techniques for OSA detection, and discuss the limitations that can arise when using machine learning techniques for diagnostic applications.
Methods: A large speech database including 426 male Spanish speakers suspected of suffering from OSA and referred to a sleep disorders unit was used to study the clinical validity of several proposals that use machine learning techniques to predict the apnea-hypopnea index (AHI) or classify individuals according to their OSA severity. The AHI describes the severity of the patient's condition. We first evaluate AHI prediction using state-of-the-art speaker recognition technologies: speech spectral information is modelled using supervector or i-vector techniques, and the AHI is predicted through support vector regression (SVR). Using the same database we then critically review several previously proposed OSA classification approaches. The influence and possible interference of other clinical variables or characteristics available for our OSA population (age, height, weight, body mass index, and cervical perimeter) are also studied.
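Support vector regression, used above to predict the AHI, differs from ordinary least squares in that it penalizes only prediction errors larger than a tolerance ε. A minimal sketch of that ε-insensitive loss is shown below; it is illustrative only (the paper's experiments ran full SVR on supervector/i-vector features), and the ε value is an arbitrary choice for the example.

```python
def eps_insensitive_loss(y_true, y_pred, eps=1.0):
    """Mean epsilon-insensitive loss: errors within +/-eps cost nothing,
    larger errors are charged linearly beyond the eps margin."""
    return sum(max(0.0, abs(t - p) - eps)
               for t, p in zip(y_true, y_pred)) / len(y_true)

# An AHI prediction off by 0.5 with eps=1.0 costs nothing; off by 3.0 costs 2.0
print(eps_insensitive_loss([30.0], [30.5]))  # 0.0
print(eps_insensitive_loss([30.0], [33.0]))  # 2.0
```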
Results: The poor results obtained when estimating the AHI using supervectors or i-vectors followed by SVR contrast with the positive results reported by previous research. This prompted a careful review of those approaches, including testing some reported methods on our database. Several methodological limitations and deficiencies were detected that may have led to overoptimistic results.
Conclusion: The methodological deficiencies observed after critically reviewing previous research are relevant examples of potential pitfalls when using machine learning techniques for diagnostic applications. We found two common limitations that can explain the likelihood of false discovery in previous research: (1) the use of prediction models derived from sources, such as speech, that are also correlated with other patient characteristics (age, height, sex, …) acting as confounding factors; and (2) overfitting of feature selection and validation methods when working with a high number of variables relative to the number of cases. We hope this study not only serves as a useful example of relevant issues when using machine learning for medical diagnosis, but also helps guide further research on the connection between speech and OSA.
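The second pitfall named above, selecting features on the full data set before validation, can be illustrated with pure noise: when there are many candidate variables and few cases, some feature will always look strongly "predictive" of the labels even though no real signal exists. The toy sketch below is my own illustration under arbitrary sizes (40 cases, 500 features), not the paper's experiment.

```python
import random

random.seed(0)
n_cases, n_features = 40, 500          # few cases, many candidate variables

# Pure noise features; labels are a fixed pattern unrelated to the features,
# so any observed feature-label correlation is spurious.
X = [[random.gauss(0.0, 1.0) for _ in range(n_features)] for _ in range(n_cases)]
y = [i % 2 for i in range(n_cases)]

def abs_corr(col, labels):
    """Absolute Pearson correlation between one feature column and the labels."""
    n = len(col)
    mx, my = sum(col) / n, sum(labels) / n
    sx = sum((v - mx) ** 2 for v in col) ** 0.5
    sy = sum((l - my) ** 2 for l in labels) ** 0.5
    return abs(sum((v - mx) * (l - my) for v, l in zip(col, labels)) / (sx * sy))

# Flawed protocol: pick the best-looking feature using ALL the data,
# before any cross-validation split.
best_corr = max(abs_corr([row[j] for row in X], y) for j in range(n_features))
print(round(best_corr, 2))  # spuriously large, despite zero real signal
```

Validating a model built on that "best" feature with the same data would report an apparently useful predictor; selection must happen inside each cross-validation fold to avoid this bias.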
[ "Adult", "Aged", "Aged, 80 and over", "Diagnosis, Computer-Assisted", "Female", "Humans", "Machine Learning", "Male", "Middle Aged", "Polysomnography", "Sleep Apnea, Obstructive", "Speech", "Young Adult" ]
4761156
Background
Sleep disorders are receiving increased attention as a cause of daytime sleepiness, impaired work and traffic accidents, and they are associated with hypertension, heart failure, arrhythmia, and diabetes. Among sleep disorders, obstructive sleep apnea (OSA) is the most frequent one [1]. OSA is characterized by recurring episodes of breathing pauses during sleep, greater than 10 s at a time, caused by a blockage of the upper airway (UA) at the level of the pharynx. The gold standard for sleep apnea diagnosis is the polysomnography (PSG) test [2]. This test requires an overnight stay of the patient at the sleep unit of a hospital to monitor breathing patterns, heart rhythm and limb movements. As a result of this test, the apnea–hypopnea index (AHI) is computed as the average number of apnea and hypopnea episodes (total and partial breath cessation episodes, respectively) per hour of sleep. Because of its high reliability, this index is used to describe the severity of the patient's condition: a low AHI (AHI < 10) indicates a healthy subject, an AHI between 10 and 30 a mild OSA patient, and an AHI above 30 severe OSA. Waiting lists for PSG may exceed one year in some countries, such as Spain [3]. Therefore, faster and less costly alternatives have been proposed for early OSA detection and severity assessment, and speech-based methods are among them. The rationale for using speech analysis in OSA assessment can be found in early works such as the one by Davidson et al. [4], where the evolutionary changes in the acquisition of speech are connected, on an anatomical basis, to the appearance of OSA. Several studies have shown physical alterations in OSA patients such as craniofacial abnormalities, dental occlusion, a longer distance between the hyoid bone and the mandibular plane, relaxed pharyngeal soft tissues, a large tongue base, etc., that generally cause a longer and more collapsible upper airway (UA).
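The AHI severity bands just described can be written as a small helper function. This is a sketch of the thresholds stated in the text; the label strings are mine, not a clinical standard.

```python
def osa_severity(ahi):
    """Bucket an apnea-hypopnea index using the thresholds in the text:
    AHI < 10 healthy/low, 10 <= AHI <= 30 mild OSA, AHI > 30 severe OSA."""
    if ahi < 10:
        return "healthy / low AHI"
    if ahi <= 30:
        return "mild OSA"
    return "severe OSA"

print(osa_severity(5), "|", osa_severity(20), "|", osa_severity(45))
# healthy / low AHI | mild OSA | severe OSA
```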
Consequently, abnormal or particular speech features may be expected in OSA speakers, arising from an altered structure or function of their UA. Early approaches to speech-based OSA detection can be found in [5] and [6]. In [5], the authors used perceptive speech descriptors (related to articulation, phonation and resonance) to correctly identify 96.3% of normal (healthy) subjects, though only 63.0% of sleep apnea speakers were detected. The use of acoustic analysis of speech for OSA detection was first presented in [7] and [8]. Fiz et al. [7] examined the harmonic structure of vowel spectra, finding a narrower frequency range for OSA speakers, which may point to differences in laryngeal behavior between OSA and non-OSA speakers. Later on, Robb et al. [8] presented an acoustic analysis of vocal tract formant frequencies and bandwidths, thus focusing on the supra-laryngeal level where OSA-related alterations should have a larger impact according to the pathogenesis of the disorder. These early contributions have driven recent proposals for using automatic speech processing in OSA detection such as [9–14]. Different approaches, generally using machine learning techniques, have been studied for the Hebrew [9, 14] and Spanish [10–13] languages. Results have been reported for different types of speech (i.e., sustained and/or continuous speech) [9, 11, 13], different speech features [9, 12, 13], and models of different linguistic units [11]. Speech recorded in two distinct positions, upright (seated) and supine (stretched), has also been considered [13, 15]. Despite the positive results reported in these previous studies (including ours), as presented in the section "Discussion", we have found contradictory results when applying the proposed methods to our large clinical database of speech samples from 426 male OSA speakers. The next section describes a new method for estimating the AHI using state-of-the-art speaker voice characterization technologies.

This same approach has recently been tested and shown to be effective in estimating other characteristics of speaker populations, such as age [16] and height [17]. However, as shown in the section "Results", it achieves only very limited performance for AHI prediction. These poor results contrast with the positive results reported by previous research and motivated us to review that research carefully. The review (presented in the section "Discussion") reveals some common limitations and deficiencies in the development and validation of machine learning techniques, such as overfitting and false discovery (i.e., finding spurious or indirect associations) [18], that may have led to overoptimistic results. Our study can therefore serve as an important and useful example of the potential pitfalls in developing machine learning techniques for diagnostic applications, as is being identified by the biomedical engineering research community [19]. As we conclude at the end of the paper, we hope this study will be useful not only for the development of machine learning techniques in biomedical engineering research, but also for guiding future research on the connection between speech and OSA.
Results
Clinical variables estimation

Results in Tables 3 and 4 show performance when using speech to estimate age and height. As mentioned before, the purpose of these tests is to validate our procedure by comparing these results with those reported in the recent references [16] and [17]. Table 3 shows that our height estimation performance (in terms of both MAE and correlation coefficient) is comparable to, and when using i-vectors better than, that reported in [17]. However, age estimation results (Table 4) are slightly worse than those in [16]. A plausible explanation is that the population in [16] includes a majority of young people, between 20 and 30 years old, while most of our OSA speakers are well above 45 years old. According to [16], speech recordings from young speakers can be discriminated better than those from older ones. In any case, our results are very similar to results published previously by other authors, which is a good indicator of the validity of our methods.

Table 3 Speakers' height estimation results

| Regression method | Mean absolute error (cm) | Correlation coefficient (ρ) |
|---|---|---|
| I-vector–LSSVR [17] | 6.2 | 0.41 (b) |
| Supervector–SVR | 5.37 | 0.34 (a) |
| I-vector–SVR | 5.06 | 0.45 (a) |

(a) These values are significant beyond the 0.01 level of confidence
(b) Level of confidence is not reported

Table 4 Speakers' age estimation results

| Regression method | Mean absolute error (years) | Correlation coefficient (ρ) |
|---|---|---|
| I-vector–WCCN–SVR [16] | 6.0 | 0.77 (b) |
| Supervector–SVR | 7.75 | 0.66 (a) |
| I-vector–SVR | 7.87 | 0.63 (a) |

(a) These values are significant beyond the 0.01 level of confidence
(b) Level of confidence is not reported

Prediction results using i-vectors and supervectors for all our clinical variables are listed in Tables 5, 6 and 7.

Table 5 Speakers' clinical variables estimation using supervector–SVR (linear kernel)

| Clinical variable | MAE | ρ |
|---|---|---|
| AHI | 14.26 | 0.17 |
| Height (cm) | 5.37 | 0.34 |
| Age (years) | 7.75 | 0.66 |
| Weight (kg) | 12.58 | 0.31 |
| BMI (kg/m²) | 3.81 | 0.23 |
| CP (cm) | 2.29 | 0.42 |

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

Table 6 Speakers' clinical variables estimation using i-vector–SVR (linear kernel)

Mean absolute error (MAE) by i-vector dimension:

| Clinical variable | 400 | 300 | 200 | 100 | 50 | 30 |
|---|---|---|---|---|---|---|
| AHI | 13.68 | 13.64 | 13.55 | 13.23 | 13.40 | 13.85 |
| Height (cm) | 5.21 | 5.23 | 5.11 | 5.06 | 5.29 | 5.38 |
| Age (years) | 8.16 | 7.87 | 8.11 | 8.29 | 8.77 | 9.16 |
| Weight (kg) | 12.31 | 12.23 | 12.25 | 11.86 | 12.16 | 12.31 |
| BMI (kg/m²) | 3.59 | 3.65 | 3.67 | 3.69 | 3.74 | 3.80 |
| CP (cm) | 2.28 | 2.26 | 2.20 | 2.26 | 2.31 | 2.42 |

Correlation coefficient (ρ) by i-vector dimension:

| Clinical variable | 400 | 300 | 200 | 100 | 50 | 30 |
|---|---|---|---|---|---|---|
| AHI | 0.23 | 0.21 | 0.24 | 0.30 | 0.27 | 0.20 |
| Height (cm) | 0.40 | 0.41 | 0.43 | 0.45 | 0.36 | 0.34 |
| Age (years) | 0.61 | 0.63 | 0.61 | 0.59 | 0.52 | 0.44 |
| Weight (kg) | 0.34 | 0.35 | 0.36 | 0.39 | 0.35 | 0.31 |
| BMI (kg/m²) | 0.33 | 0.30 | 0.29 | 0.28 | 0.26 | 0.18 |
| CP (cm) | 0.44 | 0.45 | 0.49 | 0.47 | 0.44 | 0.32 |

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

Table 7 Speakers' clinical variables estimation using i-vector–SVR (RBF kernel)

Mean absolute error (MAE) by i-vector dimension:

| Clinical variable | 400 | 300 | 200 | 100 | 50 | 30 |
|---|---|---|---|---|---|---|
| AHI | 14.04 | 13.91 | 13.63 | 13.48 | 13.84 | 14.12 |
| Height (cm) | 5.28 | 5.23 | 5.16 | 5.24 | 5.46 | 5.43 |
| Age (years) | 9.46 | 9.22 | 8.29 | 8.68 | 9.10 | 9.53 |
| Weight (kg) | 12.39 | 12.82 | 12.18 | 12.11 | 12.27 | 12.59 |
| BMI (kg/m²) | 3.73 | 3.70 | 3.66 | 3.68 | 3.72 | 3.77 |
| CP (cm) | 2.38 | 2.42 | 2.32 | 2.34 | 2.42 | 2.44 |

Correlation coefficient (ρ) by i-vector dimension:

| Clinical variable | 400 | 300 | 200 | 100 | 50 | 30 |
|---|---|---|---|---|---|---|
| AHI | 0.00 | 0.17 | 0.25 | 0.26 | 0.18 | 0.02 |
| Height (cm) | 0.40 | 0.41 | 0.42 | 0.41 | 0.29 | 0.32 |
| Age (years) | 0.42 | 0.51 | 0.61 | 0.57 | 0.50 | 0.41 |
| Weight (kg) | 0.29 | 0.18 | 0.32 | 0.35 | 0.34 | 0.24 |
| BMI (kg/m²) | 0.20 | 0.18 | 0.27 | 0.27 | 0.21 | 0.14 |
| CP (cm) | 0.31 | 0.26 | 0.42 | 0.40 | 0.31 | 0.26 |

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

As pointed out before, for supervectors (Table 5) only a linear kernel was evaluated, because the very large supervector dimension (>1000) makes it inadvisable to map these data into a higher-dimensional space. Tables 6 and 7 show that, for i-vectors, estimation results using linear and RBF kernels are very similar. These tables also show that i-vectors and supervectors reach similar results for almost all clinical variables.

AHI classification

Table 8 shows classification results in terms of sensitivity, specificity and area under the ROC curve when classifying our population into OSA subjects and healthy individuals based on the estimated AHI values. That is, supervectors or i-vectors are first used to estimate the AHI with SVR, and subjects are then classified as OSA individuals when their estimated AHI is above ten; otherwise they are classified as healthy. The i-vector results in Table 8 were obtained with an i-vector dimensionality of 100, as this provided the best AHI estimation results (see Table 6).

Table 8 OSA classification using estimated AHI values

| Feature | Accuracy (%) | Sensitivity (%) | Specificity (%) | ROC AUC |
|---|---|---|---|---|
| Supervectors | 68 | 89 | 18 | 0.58 |
| I-vectors (dim 100) | 71 | 92 | 20 | 0.64 |

We are aware that better results could be obtained by using supervectors or i-vectors as inputs to a classification algorithm such as an SVM; however, the results in Table 8 were obtained only to provide figures that will be used in the section "Discussion" to compare our results with those from previous research (Table 9).

Table 9 Test characteristics of previous research using speech analysis and machine learning for AHI classification and regression

| Study | Population characteristics | Correct classification rate (%) | Sensitivity (%) | Specificity (%) | Correlation coefficient |
|---|---|---|---|---|---|
| GMMs [10] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 81 | 77.5 | 85 | – |
| HMMs [11] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 85 | – | – | – |
| Several feature selection and classification schemes [13] | 248 subjects (AHI ≤5: 48 men, 79 women; AHI ≥30: 101 men, 20 women) | 82.85 | 81.49 | 84.69 | – |
| Feature selection and GMMs [9] | 93 subjects (AHI ≤5: 14 female; AHI >5: 19 female) (AHI ≤10: 12 male; AHI >10: 48 male) | – | 86 (female), 84 (male) | 83 (female), 79 (male) | – |
| Feature selection and GMMs [41] | 103 male subjects (AHI ≤10: 25 male; AHI >10: 78 male) | 80 | 80.65 | 80 | – |
| Feature selection, supervectors and SVR [14] | 131 males | – | – | – | 0.67 (a) |
| I-vectors/supervectors and SVR (this study) | 426 males (AHI <10: 125 male; AHI ≥10: 301 male) | 71.06 | 92.92 | 20.6 | 0.30 |

(a) Results using speech features plus age and BMI
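The threshold-at-ten decision rule described above maps each estimated AHI value to a binary OSA/healthy label, from which accuracy, sensitivity and specificity follow directly. A minimal NumPy sketch (the AHI arrays below are hypothetical toy values, not the study's data):

```python
import numpy as np

def osa_classification_metrics(ahi_true, ahi_est, threshold=10.0):
    """Label subjects OSA (AHI above threshold) vs healthy, then score
    the labels derived from estimated AHI against the PSG-confirmed ones."""
    y_true = np.asarray(ahi_true) > threshold   # True = OSA, False = healthy
    y_pred = np.asarray(ahi_est) > threshold

    tp = np.sum(y_pred & y_true)     # OSA correctly detected
    tn = np.sum(~y_pred & ~y_true)   # healthy correctly detected
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)

    return {
        "accuracy": (tp + tn) / y_true.size,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }

# Hypothetical example: 4 OSA subjects followed by 4 healthy ones.
true_ahi = [35, 22, 15, 12, 9, 8, 5, 2]
est_ahi = [30, 18, 11, 8, 12, 6, 4, 3]   # regression output
m = osa_classification_metrics(true_ahi, est_ahi)
```

The large gap between sensitivity and specificity in Table 8 is what this kind of breakdown exposes: with far more OSA than healthy subjects, a regressor biased toward high AHI values scores well on sensitivity while missing most healthy individuals.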
Conclusions
This study represents an important and useful example of the potential pitfalls in the development of machine learning techniques for diagnostic applications. The contradictory results obtained using state-of-the-art speech processing and machine learning for OSA assessment over, to the best of our knowledge, the largest database used in this kind of study led us to undertake a critical review of previous studies reporting positive results on the connection between OSA and speech. As has been identified in different fields by the biomedical research community, several limitations in the development of machine learning techniques were observed and, when possible, experimentally studied. In line with other studies on these pitfalls [19, 38], the main deficiencies detected are: the impact of a limited size of training and evaluation datasets on performance evaluation, the likelihood of false discovery or spurious associations due to the presence of confounding variables, and the risk of overfitting when feature selection techniques are applied over large numbers of variables with only limited training data available. In conclusion, we believe that our study and results can be useful both to sensitize the biomedical engineering research community to the potential pitfalls of using machine learning for medical diagnosis, and to guide further research on the connection between speech and OSA. In this latter respect, we believe there is an open way for future research seeking new insights into this connection using different acoustic features, languages, speaking styles, or recording positions. However, besides properly addressing the methodological issues of machine learning, any new advance should carefully explore and report any possible indirect association between speech and AHI mediated through other clinical variables, or any other factor that could affect speech, such as speakers' dialect, gender or mood state.
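The overfitting pitfall named above (feature selection over many variables with little data) can be made concrete with a short simulation: when the most correlated features are selected using the full dataset and performance is then measured on that same data, a strong apparent association emerges even from pure noise. This is a synthetic toy demonstration, not a re-analysis of any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 40, 300, 10            # few samples, many candidate features
X = rng.standard_normal((n, p))  # pure-noise "acoustic features"
y = rng.standard_normal(n)       # pure-noise "clinical variable"

def top_k_by_correlation(X, y, k):
    # Rank features by |Pearson correlation| with the target.
    Xc = X - X.mean(0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(r))[:k]

# Pitfall: select features on ALL data, fit least squares, evaluate in-sample.
sel = top_k_by_correlation(X, y, k)
beta, *_ = np.linalg.lstsq(X[:, sel], y, rcond=None)
biased_corr = np.corrcoef(y, X[:, sel] @ beta)[0, 1]

# Proper protocol: selection and fitting restricted to a training split,
# evaluation on held-out data; the correlation then fluctuates around zero.
tr, te = np.arange(20), np.arange(20, 40)
sel2 = top_k_by_correlation(X[tr], y[tr], k)
beta2, *_ = np.linalg.lstsq(X[tr][:, sel2], y[tr], rcond=None)
held_out_corr = np.corrcoef(y[te], X[te][:, sel2] @ beta2)[0, 1]
```

With these sizes the biased protocol reliably produces a large positive correlation from noise alone, which is exactly the failure mode the critical review attributes to several earlier OSA-speech studies.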
Methods

Subjects and experimental design

The population under study is composed of 426 male subjects presenting symptoms of OSA, such as excessive daytime sleepiness, snoring, choking during sleep, or somnolent driving, during a preliminary interview with a pneumonologist. Several clinical variables were collected for each individual: age, height, weight, body mass index (BMI, defined as the weight in kilograms divided by the square of the height in meters, kg/m²) and cervical perimeter (CP, the neck circumference in centimeters measured at the level of the cricothyroid membrane). This database has been recorded at the Hospital Quirón de Málaga (Spain) since 2010 and is, to the best of our knowledge, the largest database used in this kind of study. The database contains 597 speakers: 426 males and 171 females. Our study had no impact on the diagnosis process of patients or on their possible medical treatment; therefore the Hospital did not consider it necessary to seek approval from its ethics committee. Before starting the study, participants were notified about the research and their informed consent was obtained. Statistics of the clinical variables for the male population in this study are summarized in Table 1.

Table 1 Descriptive statistics on the 426 male subjects

| Clinical variable | Mean | SD | Range |
|---|---|---|---|
| AHI | 22.5 | 18.1 | 0.0–102.0 |
| Weight (kg) | 91.7 | 17.3 | 61.0–162.0 |
| Height (cm) | 175.3 | 7.1 | 152.0–197.0 |
| BMI (kg/m²) | 29.8 | 5.1 | 20.1–52.1 |
| Age (years) | 48.8 | 12.5 | 20.0–85.0 |
| Cervical perimeter (cm) | 42.2 | 3.2 | 34.0–53.0 |

AHI apnea–hypopnea index, BMI body mass index, SD standard deviation

The diagnosis for each patient was confirmed by specialized medical staff through PSG, obtaining the AHI on the basis of the number of apnea and hypopnea episodes. Patients' speech was recorded prior to PSG. All speakers read the same 4 sentences and sustained a complete set of Spanish vowels (i.e., a, e, i, o, u). Sentences were designed to cover relevant linguistic/phonetic contexts related to peculiarities of OSA voices (see details in [12]). Recordings were made in a room with low noise, with patients in an upright or seated position. The recording equipment was a standard laptop with a USB SP500 Plantronics headset. Speech was recorded at a sampling frequency of 50 kHz and encoded in 16 bits; it was then down-sampled to 16 kHz before processing.

Problem formulation

Our major aim is to test whether state-of-the-art speaker voice characterization technologies that have already been shown to be effective in estimating speaker characteristics such as age [16] and height [17] can also be effective in estimating the AHI. It is important to point out that, besides predicting the AHI from speech samples, we also tested the performance of these same techniques in estimating the other clinical variables (age, height, weight, BMI and CP). We think this evaluation is relevant for two main reasons: firstly, to validate our methodology by comparing our results estimating age, height and BMI with those previously reported over general speaker populations (such as [16, 17, 20]); and secondly, to identify correlations between speech and other clinical variables that can increase the likelihood of false discovery based on spurious or indirect associations [18] between these clinical variables and the AHI. This second aspect will be relevant when presenting the critical review of previous approaches to OSA assessment in the section "Discussion".

Consequently, our study can be formulated as a machine learning regression problem as follows: we are given a training dataset of speech recordings and clinical variable information

$$\mathbf{S}_{\text{tr}}^{j} = \left\{ \mathbf{x}_{n}, y_{n}^{j} \right\}_{n=1}^{N}$$

where $\mathbf{x}_{n} \in \Re^{p}$ denotes the acoustic representation of the nth utterance of the training dataset and $y_{n}^{j} \in \Re$ denotes the corresponding value of the clinical variable for the speaker of that utterance; j indexes a particular variable in the set of V clinical variables (j = 1, 2, …, V; i.e., AHI, age, height, weight, BMI, CP).

The goal is to design an estimator function $f^{j}$ for each clinical variable such that, for an utterance of an unseen testing speaker $\mathbf{x}_{\text{tst}}$, the difference between the estimated value of that clinical variable $\hat{y}^{j} = f^{j}(\mathbf{x}_{\text{tst}})$ and its actual value $y^{j}$ is minimized.

Once this regression problem has been formulated, two main issues must be addressed: (1) what acoustic representation and model will be used for a given utterance $\mathbf{x}_{n}$, and (2) how to design the regression or estimator functions $f^{j}$.

Acoustic representation of OSA-related sounds

Besides the linguistic message, speech signals carry important information about speakers, mainly related to their particular physical or physiological characteristics.
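The regression formulation above — one estimator f^j per clinical variable, trained on pairs (x_n, y_n^j) and scored with MAE and the correlation coefficient — can be sketched as follows. For a self-contained illustration, a closed-form ridge regressor stands in for SVR (the study itself uses LIBSVM), and all data and names here are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 50                     # training utterances, feature dimension
X = rng.standard_normal((N, P))    # x_n: fixed-length acoustic vectors

# y^j for a toy set of clinical variables (synthetic, with linear signal).
clinical = {
    "AHI": X @ rng.standard_normal(P) + rng.standard_normal(N),
    "age": X @ rng.standard_normal(P) + rng.standard_normal(N),
}

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: a stand-in for the SVR estimator f^j."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

# One estimator f^j per clinical variable, scored by MAE and Pearson rho
# (the two figures reported in the results tables).
scores = {}
for name, y in clinical.items():
    w = fit_ridge(X, y)
    y_hat = X @ w
    scores[name] = (mae(y, y_hat), np.corrcoef(y, y_hat)[0, 1])
```

In the study proper, the in-sample fit shown here is of course replaced by cross-validated evaluation on unseen speakers.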
This has been the basis for the development of automatic speaker recognition systems, automatic detection of vocal fold pathologies, emotional/psychological state recognition, as well as age and weight (or BMI) estimation. In a similar vein, the specific characteristics of the UA in OSA individuals have led to the hypothesis that OSA can be detected through automatic acoustic analysis of speech sounds.

To represent OSA-specific acoustic information, the speech records in our database include read speech of four sentences that were designed to contain specific distinctive sounds to discriminate between healthy and OSA speakers. The design of these four sentences was done according to the reference research in [5] and [6], where Fox et al. identify a set of speech descriptors in OSA speakers related to articulation, phonation and resonance. For example, the third sentence in our corpus includes mostly nasal sounds to detect the expected resonance anomalies in OSA individuals (the details on the design criteria for this corpus can be found in [12]). Additionally, to exclude any acoustic factor not related to OSA discrimination, speech signal acquisition was done in a room with low noise using a single high-quality microphone (USB SP500 Plantronics headset).

Once we had a set of speech utterances containing OSA-specific sounds, collected under a controlled recording environment, speech signals were processed at a sampling frequency of 16 kHz to have a precise wide-band representation of all the relevant information in the speech spectrum. As Fig. 1 illustrates, each sentence was analyzed in speech segments (i.e., frames) of 20 ms duration with an overlap of 10 ms; each speech frame was multiplied by a Hamming window. The spectral envelope of each frame was then represented using mel-frequency cepstral coefficients (MFCCs). MFCCs provide a spectral envelope representation of speech sounds extensively used in automatic speech and speaker recognition [21, 22], pathological voice detection, age, height and BMI estimation [16, 17, 20], etc. MFCCs have also been used in previous research on speech-based OSA detection [9–11] and [14].

Fig. 1 Acoustic representation of utterances

In the MFCC representation, the spectrum magnitude of each speech frame is first obtained as the absolute value of its DFT (discrete Fourier transform). Then a filterbank of triangular filters spaced on a frequency scale based on the human perception system (i.e., the mel scale) is used to obtain a vector with the log-energies of each filter (see Fig. 1). Finally, a discrete cosine transform (DCT) is applied to the vector of log filterbank energies to produce a compact set of decorrelated MFCC coefficients. Additionally, in order to represent spectral change over time, the MFCCs are extended with their first-order time derivatives (velocity or delta ΔMFCCs); more details on MFCC parametrization can be found in [23]. Thus, in our experiments, the acoustic information in each speech frame i is represented by a D-dimensional observation vector O_i that includes 19 MFCCs + 19 ΔMFCCs, so D = 38. The extraction of MFCCs was performed using the HTK software (htk.eng.cam.ac.uk); see Table 2 for details on the DFT order, number of triangular filters, etc.

Table 2 Implementation tools

| Tool (a) | Function name | Function description | Parameters |
|---|---|---|---|
| HTK | HCopy | Extract the MFCC coefficients | No. DFT bins = 512; No. filters = 26; No. MFCC coeff. = 19; No. ΔMFCC coeff. = 19 |
| MSR Identity Toolbox (b) | GMM_em | GMM–UBM training | No. mixtures = 512; No. of expectation–maximization iterations = 10; Feature sub-sampling factor = 1 |
| | MapAdapt | GMM adaptation | Adaptation algorithm = MAP; No. mixtures = 512; MAP relevance factor = 10 |
| | Train_tv_space | Total variability matrix training | Dimension of total variability matrix = {400, 300, 200, 100, 50, 30}; Number of iterations = 5 |
| | Extract_ivector | I-vector extraction | Dimension of total variability matrix = {400, 300, 200, 100, 50, 30} |
| LIBSVM | SVM_train | SVR training | Grid-search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7 |
| | SVM_predict | SVR regression | Grid-search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7 |

(a) All the implementation tools were used under the Linux Ubuntu 12.04 LTS operating system
(b) Executed on Matlab 2014a

Utterance modelling

Due to the natural variability in speech production, different utterances of the same sentence will exhibit variable durations and thus will be represented by a variable-length sequence O of observation vectors:

$$\underline{\mathbf{O}} = \left[ \mathbf{O}_{1}, \mathbf{O}_{2}, \ldots, \mathbf{O}_{NF} \right] \qquad (1)$$

where O_i is the D-dimensional observation vector at frame i and NF is the number of frames, which is variable due to the different durations when reading the same sentence. This variable-length sequence cannot be the input to a regression algorithm such as support vector regression (SVR), which will be the estimator function $f^{j}$ used to predict $y^{j}$ (the AHI and the other clinical variables: age, height, weight, BMI and CP). Consequently, the sequence of observations O must be mapped into a vector with fixed dimension.
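The front-end described above (20 ms frames with a 10 ms hop, Hamming window, DFT magnitude, triangular mel filterbank, log, DCT, plus deltas) can be sketched in NumPy as below. This is an illustrative approximation, not the exact HTK HCopy configuration of Table 2; it also shows why two utterances of different duration yield different numbers of frames NF but the same per-frame dimension D = 38:

```python
import numpy as np

FS, FRAME, HOP = 16000, 320, 160        # 16 kHz; 20 ms frames, 10 ms hop
NFFT, NFILT, NCEP = 512, 26, 19

def mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def imel(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank():
    # NFILT triangular filters with edges spaced uniformly on the mel scale.
    edges = imel(np.linspace(mel(0.0), mel(FS / 2.0), NFILT + 2))
    bins = np.floor((NFFT // 2 + 1) * edges / (FS / 2.0)).astype(int)
    fb = np.zeros((NFILT, NFFT // 2 + 1))
    for i in range(NFILT):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        for b in range(lo, c):
            fb[i, b] = (b - lo) / max(c - lo, 1)   # rising edge
        for b in range(c, hi):
            fb[i, b] = (hi - b) / max(hi - c, 1)   # falling edge
    return fb

def mfcc_with_deltas(x):
    n_frames = 1 + (len(x) - FRAME) // HOP
    frames = np.stack([x[i * HOP: i * HOP + FRAME] for i in range(n_frames)])
    frames = frames * np.hamming(FRAME)
    spec = np.abs(np.fft.rfft(frames, NFFT))          # DFT magnitude
    loge = np.log(spec @ mel_filterbank().T + 1e-10)  # log filterbank energies
    # DCT-II to decorrelate; keep coefficients 1..NCEP.
    k = np.arange(NFILT)
    dct = np.cos(np.pi * np.outer(np.arange(1, NCEP + 1),
                                  (2 * k + 1) / (2 * NFILT)))
    cep = loge @ dct.T                                # 19 MFCCs per frame
    delta = np.gradient(cep, axis=0)                  # 19 delta MFCCs
    return np.hstack([cep, delta])                    # shape (NF, 38)

short = mfcc_with_deltas(np.random.default_rng(2).standard_normal(8000))
long_ = mfcc_with_deltas(np.random.default_rng(3).standard_normal(16000))
```

The two outputs have shapes (49, 38) and (99, 38): identical frame dimension D, different NF, which is exactly the variable-length problem the supervector and i-vector models solve.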
In our method, this has been done using two modeling approaches, referred to as supervectors and i-vectors, which have been successfully applied to speaker recognition [24], language recognition [25], speaker age estimation [16], speaker height estimation [17] and accent recognition [26]. We think that their success in those challenging tasks, where speech contains significant sources of interfering intra-speaker variability (speaker weight, height, etc.), is a reasonable guarantee for exploring their use in estimating the AHI and other clinical variables in our OSA population.

It is also important to point out that we have avoided the use of feature selection procedures because, as will be discussed in the section "Discussion", we believe these have led to over-fitted results in several previous studies in this field. For that reason, our approach evaluates both the high-dimensional acoustic modelling provided by supervectors and the low-dimensional i-vector representation based on subspace projection. These two techniques are described below.

Supervectors

Both the supervector and i-vector modelling approaches start by fitting a Gaussian mixture model (GMM) to the sequence of observations O. A GMM (see [23, 27]) consists of a weighted sum of K D-dimensional Gaussian components, where, in our case, D is the dimension of the MFCC observation vectors. Each ith Gaussian component is represented by a mean vector (µ_i) of dimension D and a D × D covariance matrix (Σ_i). Due to limited data, it is not possible to accurately fit a separate GMM to a short utterance, especially when using a high number of Gaussian components (i.e., large K). Consequently, GMMs are obtained using adaptation techniques from a universal background model (UBM), which is also a GMM, trained on a large database containing speech from a large number of different speakers [23]. Therefore, as Fig. 2 illustrates, the variable-length sequence O of vectors of a given utterance is used to adapt the GMM–UBM, generating an adapted GMM where only the means (µ_i) are adapted.

Fig. 2 GMM and supervector modelling

In the supervector modelling approach [21], the adapted GMM means (µ_i) are extracted and concatenated (appending one after the other) into a single high-dimensional vector s called the GMM mean supervector:

$$\mathbf{s} = \begin{bmatrix} \boldsymbol{\mu}_{1} \\ \boldsymbol{\mu}_{2} \\ \vdots \\ \boldsymbol{\mu}_{K} \end{bmatrix} \qquad (2)$$

The resulting fixed-length supervector, of size K × D, is now suitable as input to a regression algorithm, such as SVR, to predict the AHI and the other clinical variables.

As summarized in Table 2, in our experiments GMM–UBM training, GMM adaptation and supervector generation were done using the MSR Identity Toolbox for Matlab™ [28] running on Matlab 2014a on Linux Ubuntu 12.04 LTS. As also shown in Table 2, to have a precise acoustic representation of each sentence, a GMM with K = 512 components was used, resulting in a high-dimensional supervector of size K × D = 512 × 38 = 19,456 (D = 38 being the dimension of the MFCC observation vectors O_i).

As mentioned before, training the GMM–UBM requires a considerable amount of development data to represent a global acoustic space.
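The adapt-and-concatenate procedure above can be sketched with a toy diagonal-covariance GMM. This is a minimal illustration of mean-only MAP adaptation with relevance factor r (the study uses the MSR Identity Toolbox with K = 512 and a relevance factor of 10); all sizes below are toy values:

```python
import numpy as np

def map_adapt_supervector(frames, ubm_means, ubm_vars, ubm_weights, r=10.0):
    """Mean-only MAP adaptation of a diagonal-covariance GMM-UBM, followed
    by concatenation of the adapted means into a (K*D,) supervector.

    frames: (NF, D) observation vectors of one utterance.
    ubm_*:  (K, D) means/variances and (K,) weights of the UBM.
    """
    # Log-likelihood of each frame under each Gaussian (up to a constant).
    diff = frames[:, None, :] - ubm_means[None, :, :]          # (NF, K, D)
    ll = -0.5 * np.sum(diff ** 2 / ubm_vars + np.log(ubm_vars), axis=2)
    ll += np.log(ubm_weights)
    # Posterior responsibilities via a softmax over components.
    ll -= ll.max(axis=1, keepdims=True)
    post = np.exp(ll)
    post /= post.sum(axis=1, keepdims=True)                    # (NF, K)

    nk = post.sum(axis=0)                                      # soft counts
    xbar = (post.T @ frames) / np.maximum(nk, 1e-10)[:, None]  # (K, D)
    alpha = (nk / (nk + r))[:, None]                           # data/prior mix
    adapted = alpha * xbar + (1.0 - alpha) * ubm_means         # adapted means
    return adapted.reshape(-1)                                 # supervector

rng = np.random.default_rng(4)
K, D, NF = 8, 4, 200
sv = map_adapt_supervector(
    frames=rng.standard_normal((NF, D)),
    ubm_means=rng.standard_normal((K, D)),
    ubm_vars=np.ones((K, D)),
    ubm_weights=np.full(K, 1.0 / K),
)
```

Components with few assigned frames (small soft count n_k) stay close to the UBM prior, while well-observed components move toward the utterance data, which is what makes the supervector stable even for short utterances.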
Therefore, for development we used several large databases containing microphone speech sampled at 16 kHz and covering a wide range of phonetic variability from continuous/read Spanish speech (see, for example, ALBAYZIN [29], one of the databases we used). The whole development dataset includes 25,451 speech recordings from 940 speakers. Among them, 126 speakers with a confirmed OSA diagnosis, not used in the tests, were also included to reflect OSA-specific characteristics of speech.

Beyond the success of high-dimensional supervectors, a new paradigm called the i-vector has emerged and is now widely used by the speaker recognition community [24]. The i-vector model relies on the definition of a low-dimensional total variability subspace and can be described in the GMM mean supervector space by:

s = m + Tw   (3)

where s is the GMM mean supervector representing an utterance and m is the mean supervector obtained from the GMM–UBM, which can be considered a global acoustic representation independent of utterance, speaker, health and clinical condition. T is a rectangular low-rank matrix representing the primary directions of the total acoustic variability observed in a large development speech database, and w is a low-dimensional random vector with a standard normal distribution. In short, Eq. (3) can be viewed as a simple factor analysis projecting the high-dimensional supervector s (on the order of thousands of components) onto the low-dimensional (on the order of hundreds) factor vector, identity vector or i-vector w.
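The role of Eq. (3) can be illustrated with a small, self-contained sketch. The real total variability matrix T is trained with the EM factor-analysis procedure of [30]; here, as a deliberately simplified stand-in, PCA over synthetic development supervectors provides the low-rank projection, purely to show the dimensionality reduction involved (all sizes are toy values, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, R = 64, 38, 30                  # toy sizes; the paper uses K = 512 and R from 30 to 400
dev = rng.normal(size=(200, K * D))   # synthetic development supervectors, one per speaker

m = dev.mean(axis=0)                  # global mean supervector (the role of m in Eq. 3)
X = dev - m
# Leading right singular vectors stand in for the columns of T
_, _, Vt = np.linalg.svd(X, full_matrices=False)
T = Vt[:R].T                          # (K*D, R) low-rank "total variability" matrix

s = rng.normal(size=K * D)            # supervector of a test utterance
w = T.T @ (s - m)                     # low-dimensional "i-vector"-like representation
print(s.size, "->", w.size)           # 2432 -> 30
```

The key point mirrored here is purely the shape change: a supervector with thousands of components is mapped to a vector with tens to hundreds of components.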
T is named the total variability matrix, and the components of the i-vector w are the total factors that represent the acoustic information in the reduced total variability space. Compared to supervectors, total variability modeling using i-vectors has the advantage of projecting the high-dimensional GMM supervectors into a low-dimensional subspace where most of the speaker-specific variability is captured.

Automatic speech recognition systems typically use i-vectors with a dimensionality of 400. In our tests the total variability matrix T was estimated using the same development data described before for training the GMM–UBM, and we evaluated subspace projections for i-vectors with dimensions ranging from 30 to 400. Efficient procedures for training T and MAP adaptation of i-vectors can be found in [30]. In our tests we used the implementation provided by the MSR Identity Toolbox for Matlab™ [28] running over Matlab 2014a on Ubuntu 12.04 LTS (see the details in Table 2).

Once an utterance is represented by a fixed-length vector, supervector or i-vector, SVR is employed as the estimator function f^j to predict y^j, i.e., the AHI and the other clinical variables (age, height, weight, BMI and CP).

SVR is a function approximation approach developed as a regression version of the
widely known Support Vector Machine (SVM) classifier [31]. When using SVR, the input variable (i-vector/supervector) is first mapped onto a high-dimensional feature space by a non-linear mapping performed by the kernel function. The kernel yields the new high-dimensional features through a similarity measure between points in the original feature space. Once the mapping onto the high-dimensional space is done, a linear model is constructed in that space by finding the optimal hyperplane for which most of the training samples lie within an ε-margin (ε-insensitive zone) around it [31].

The generalization performance of SVR depends on a good setting of the two hyperparameters (ε, C) and of the kernel parameters.
The parameter \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$\\in$$\\end{document}∈ controls the width of the \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$\\in$$\\end{document}∈-insensitive zone, used to fit the training data. The width of the \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$\\in$$\\end{document}∈-insensitive zone determines the level of accuracy of approximation function. It relies entirely on the target values of the training set. 
The parameter C determines the trade-off between the model complexity, controlled by ε, and the degree to which deviations larger than the ε-insensitive zone are tolerated in the optimization of the hyperplane. Finally, the kernel parameters depend on the type of similarity measure used.

In this paper, SVR is applied to estimate the clinical variables, and linear and radial basis function (RBF) kernels were tested to approximate the estimator function f^j. Both linear and RBF kernels were tested for i-vectors, but only linear kernels were considered for supervectors, because their large dimensionality makes it inadvisable to map them into a still higher-dimensional space. SVR training and testing were implemented using LIBSVM [32] running on Linux Ubuntu 12.04 LTS.
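The ε-SVR estimator with both kernels can be sketched as follows. This is an illustrative Python version on synthetic data (scikit-learn's SVR wraps LIBSVM, the library used in the paper, but the feature vectors, targets and hyperparameter values below are placeholders, not the paper's configuration).

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-ins: 300 utterances represented by 100-dim "i-vectors",
# with a target that plays the role of the AHI.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))
y = X[:, 0] * 5 + 20 + rng.normal(scale=2, size=300)

# epsilon sets the width of the insensitive zone; C the complexity/deviation trade-off.
linear_svr = SVR(kernel="linear", C=1.0, epsilon=0.5)
rbf_svr = SVR(kernel="rbf", C=1.0, epsilon=0.5, gamma="scale")

linear_svr.fit(X[:250], y[:250])
rbf_svr.fit(X[:250], y[:250])
for name, model in [("linear", linear_svr), ("rbf", rbf_svr)]:
    mae = np.mean(np.abs(model.predict(X[250:]) - y[250:]))
    print(name, round(float(mae), 2))
```

With a linear underlying relation, the linear kernel recovers the target closely, while the RBF kernel needs more data in 100 dimensions; this is the same high-dimensionality concern that ruled out RBF kernels for supervectors.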
Table 2 describes the details of use for this software together with all the parameters used in our tests.

To evaluate the proposed method of using supervectors or i-vectors to predict the AHI and the other clinical variables (age, height, weight, BMI and CP), we measure both the mean absolute error (MAE) and the Pearson correlation coefficient (ρ). MAE gives the average absolute difference between actual and estimated values, while ρ evaluates their linear relationship. As will be seen in the section "Results", the correlation coefficients between estimated and actual AHI values were often very small. We therefore considered it informative to report p-values for the correlation coefficients, i.e., the probability of observing such a correlation if the true correlation were zero (null hypothesis).

Although the main objective of our method is to evaluate the capability of speech to predict or estimate the AHI, in the section "Discussion" we also review previous research aimed at classifying or discriminating between subjects with OSA (AHI ≥ 10) and without OSA (AHI < 10). Therefore, we performed some additional tests using our estimated AHI values to classify subjects as OSA (predicted AHI ≥ 10) and non-OSA (predicted AHI < 10).
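The two regression metrics just described, MAE and Pearson's ρ with its p-value, can be computed directly with numpy and scipy. The data below are synthetic stand-ins for actual and estimated AHI values; this is only a sketch of the metric computation, not the paper's evaluation code.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
actual = rng.uniform(0, 100, size=200)               # stand-in for true AHI values
estimated = actual + rng.normal(scale=15, size=200)  # noisy predictions

# Mean absolute error: average absolute difference between actual and estimated values
mae = np.mean(np.abs(estimated - actual))

# Pearson correlation with its p-value (probability of such a correlation under rho = 0)
rho, p_value = pearsonr(actual, estimated)
print(round(float(mae), 1), round(float(rho), 2), p_value < 0.01)
```

Reporting the p-value alongside ρ matters here because a small ρ on a few hundred subjects can still be statistically indistinguishable from zero.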
In these classification tests, performance was measured in terms of sensitivity, specificity and the area under the ROC curve.

In order to train the SVR regression model (function f^j) and predict the y^j variables (the AHI and the other clinical variables), we employed k-fold cross-validation and grid search for finding the optimal SVR parameters. The whole process is presented in Fig. 3. Firstly, to guarantee that all speakers are involved in the test, the dataset is split into k equal-sized subsamples with no speakers in common. Then, of the k subsamples, a single subsample is retained for testing and the remaining k−1 subsamples are used as the training dataset. Results are reported for k = 10.

Fig. 3 Representation of k-fold cross-validation and grid search for SVR regression and prediction of clinical variables

Furthermore, as Fig. 3 also illustrates, in each cross-validation loop the optimal hyperparameters (ε, C) of the SVR models are obtained through grid search using fivefold cross-validation on the training data. The ranges for this grid search are detailed in Table 2.

Results in Tables 3 and 4 show performance when using speech to estimate age and height.
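The k-fold cross-validation with inner grid search of Fig. 3 can be sketched in a few lines of Python. This is an illustrative version on synthetic data with placeholder grid ranges (the paper's actual ranges are in its Table 2); it assumes one feature vector per speaker, so a plain KFold split already guarantees no speaker is shared between training and test folds.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                     # one feature vector per speaker
y = X[:, 0] * 3 + rng.normal(scale=0.5, size=120)  # synthetic clinical target

param_grid = {"C": [0.1, 1, 10], "epsilon": [0.1, 0.5]}   # placeholder ranges
outer = KFold(n_splits=10, shuffle=True, random_state=0)   # k = 10, as in the paper
abs_errors = []
for train_idx, test_idx in outer.split(X):
    # Inner fivefold grid search uses ONLY the training folds of this loop
    search = GridSearchCV(SVR(kernel="linear"), param_grid, cv=5)
    search.fit(X[train_idx], y[train_idx])
    # Best model (refit on all training folds) is applied once to the held-out fold
    pred = search.predict(X[test_idx])
    abs_errors.extend(np.abs(pred - y[test_idx]))
print("MAE:", round(float(np.mean(abs_errors)), 2))
```

Keeping the grid search strictly inside each training partition is the point of the protocol: the held-out fold never influences the hyperparameter choice.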
As mentioned before, the purpose of these tests is to validate our procedure by comparing these results to those reported in the recent references [16] and [17]. Table 3 shows that our height estimation performance (both in terms of MAE and correlation coefficient) is comparable to, and when using i-vectors better than, that in [17]. However, estimation results for age (Table 4) are slightly worse than those in [16]. A plausible explanation is that the population in [16] includes a majority of young people, between 20 and 30 years old, while most of our OSA speakers are well above 45 years old. According to [16], speech recordings from young speakers can be discriminated better than those from older ones. In any case, our results are very similar to results published previously by other authors, which is a good indicator of the validity of our methods.

Table 3 Speakers' height estimation results

Regression method | Mean absolute error (cm) | Correlation coefficient (ρ)
I-vector–LSSVR [17] | 6.2 | 0.41b
Supervector–SVR | 5.37 | 0.34a
I-vector–SVR | 5.06 | 0.45a

a These values are significant beyond the 0.01 level of confidence
b Level of confidence is not reported

Table 4 Speakers' age estimation results

Regression method | Mean absolute error (years) | Correlation coefficient (ρ)
I-vector–WCCN–SVR [16] | 6.0 | 0.77b
Supervector–SVR | 7.75 | 0.66a
I-vector–SVR | 7.87 | 0.63a

a These values are significant beyond the 0.01 level of confidence
b Level of confidence is not reported

Prediction results using i-vectors and supervectors for all our clinical variables are listed in Tables 5, 6 and 7.

Table 5 Speakers' clinical variables estimation using supervector–SVR (linear kernel)

Clinical variable | MAE | ρ
AHI | 14.26 | 0.17
Height (cm) | 5.37 | 0.34
Age (years) | 7.75 | 0.66
Weight (kg) | 12.58 | 0.31
BMI (kg/m2) | 3.81 | 0.23
CP (cm) | 2.29 | 0.42

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

Table 6 Speakers' clinical variables estimation using i-vector–SVR (linear kernel)

Mean absolute error (MAE) by i-vector dimension:
Clinical variable | 400 | 300 | 200 | 100 | 50 | 30
AHI | 13.68 | 13.64 | 13.55 | 13.23 | 13.40 | 13.85
Height (cm) | 5.21 | 5.23 | 5.11 | 5.06 | 5.29 | 5.38
Age (years) | 8.16 | 7.87 | 8.11 | 8.29 | 8.77 | 9.16
Weight (kg) | 12.31 | 12.23 | 12.25 | 11.86 | 12.16 | 12.31
BMI (kg/m2) | 3.59 | 3.65 | 3.67 | 3.69 | 3.74 | 3.80
CP (cm) | 2.28 | 2.26 | 2.20 | 2.26 | 2.31 | 2.42

Correlation coefficient (ρ) by i-vector dimension:
Clinical variable | 400 | 300 | 200 | 100 | 50 | 30
AHI | 0.23 | 0.21 | 0.24 | 0.30 | 0.27 | 0.20
Height (cm) | 0.40 | 0.41 | 0.43 | 0.45 | 0.36 | 0.34
Age (years) | 0.61 | 0.63 | 0.61 | 0.59 | 0.52 | 0.44
Weight (kg) | 0.34 | 0.35 | 0.36 | 0.39 | 0.35 | 0.31
BMI (kg/m2) | 0.33 | 0.30 | 0.29 | 0.28 | 0.26 | 0.18
CP (cm) | 0.44 | 0.45 | 0.49 | 0.47 | 0.44 | 0.32

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

Table 7 Speakers' clinical variables estimation using i-vector–SVR (RBF kernel)

Mean absolute error (MAE) by i-vector dimension:
Clinical variable | 400 | 300 | 200 | 100 | 50 | 30
AHI | 14.04 | 13.91 | 13.63 | 13.48 | 13.84 | 14.12
Height (cm) | 5.28 | 5.23 | 5.16 | 5.24 | 5.46 | 5.43
Age (years) | 9.46 | 9.22 | 8.29 | 8.68 | 9.10 | 9.53
Weight (kg) | 12.39 | 12.82 | 12.18 | 12.11 | 12.27 | 12.59
BMI (kg/m2) | 3.73 | 3.70 | 3.66 | 3.68 | 3.72 | 3.77
CP (cm) | 2.38 | 2.42 | 2.32 | 2.34 | 2.42 | 2.44

Correlation coefficient (ρ) by i-vector dimension:
Clinical variable | 400 | 300 | 200 | 100 | 50 | 30
AHI | 0.00 | 0.17 | 0.25 | 0.26 | 0.18 | 0.02
Height (cm) | 0.40 | 0.41 | 0.42 | 0.41 | 0.29 | 0.32
Age (years) | 0.42 | 0.51 | 0.61 | 0.57 | 0.50 | 0.41
Weight (kg) | 0.29 | 0.18 | 0.32 | 0.35 | 0.34 | 0.24
BMI (kg/m2) | 0.20 | 0.18 | 0.27 | 0.27 | 0.21 | 0.14
CP (cm) | 0.31 | 0.26 | 0.42 | 0.40 | 0.31 | 0.26

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

As pointed out before, for supervectors (Table 5) only a linear kernel was evaluated, because the very large supervector dimension (>1000) makes it inadvisable to map this data into a still higher-dimensional space.

Tables 6 and 7 show that, for i-vectors, estimation results using linear and RBF kernels are very similar. These tables also show that i-vectors and supervectors reach similar results for almost all clinical variables.

Table 8 shows classification results in terms of sensitivity, specificity and area under the ROC curve when classifying our population into OSA subjects and healthy individuals based on the estimated AHI values. That is, supervectors or i-vectors are first used to estimate the AHI using SVR, and subjects are then classified as OSA individuals when their estimated AHI is above ten; otherwise they are classified as healthy.
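The threshold-based evaluation just described can be sketched as follows, on synthetic true and estimated AHI values (illustrative only; the numbers bear no relation to Table 8). Sensitivity and specificity come from the thresholded labels, while the ROC AUC scores the continuous AHI estimate itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
true_ahi = rng.uniform(0, 60, size=300)              # stand-in for PSG-derived AHI
est_ahi = true_ahi + rng.normal(scale=12, size=300)  # imperfect speech-based estimates

y_true = true_ahi >= 10          # OSA-positive by the clinical criterion
y_pred = est_ahi >= 10           # OSA-positive by the estimated AHI

tp = np.sum(y_pred & y_true)     # correctly detected OSA subjects
tn = np.sum(~y_pred & ~y_true)   # correctly detected healthy subjects
sensitivity = tp / np.sum(y_true)
specificity = tn / np.sum(~y_true)
auc = roc_auc_score(y_true, est_ahi)   # threshold-free summary of the estimate
print(round(float(sensitivity), 2), round(float(specificity), 2), round(float(auc), 2))
```

Note how a single threshold can give high sensitivity with low specificity (or vice versa); the AUC summarizes performance over all possible thresholds, which is why Table 8 reports both.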
The results in Table 8 using i-vectors were obtained for an i-vector dimensionality of 100, as this provided the best AHI estimation results (see Table 6).

Table 8 OSA classification using estimated AHI values

Feature | Accuracy (%) | Sensitivity (%) | Specificity (%) | ROC AUC
Supervectors | 68 | 89 | 18 | 0.58
I-vectors (dim 100) | 71 | 92 | 20 | 0.64

We are aware that better results could be obtained using supervectors or i-vectors as inputs to a classification algorithm such as SVM; however, the results in Table 8 were obtained only to provide figures that are used in the section "Discussion" to compare our results with those from previous research (Table 9).

Table 9 Test characteristics of previous research using speech analysis and machine learning for AHI classification and regression

Study | Population characteristics | Correct classification rate (%) | Sensitivity (%) | Specificity (%) | Correlation coefficient
GMMs [10] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 81 | 77.5 | 85 | –
HMMs [11] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 85 | – | – | –
Several feature selection and classification schemes [13] | 248 subjects (AHI ≤5: 48 male, 79 female; AHI ≥30: 101 male, 20 female) | 82.85 | 81.49 | 84.69 | –
Feature selection and GMMs [9] | 93 subjects (AHI ≤5: 14 female; AHI >5: 19 female) (AHI ≤10: 12 male; AHI >10: 48 male) | – | 86, 83 | 84, 79 | –
Feature selection and GMMs [41] | 103 male subjects (AHI ≤10: 25 male; AHI >10: 78 male) | 80 | 80.65 | 80 | –
Feature selection, supervectors and SVR [14] | 131 males | – | – | – | 0.67a
I-vectors/supervectors and SVR (this study) | 426 males (AHI <10: 125 male; AHI ≥10: 301 male) | 71.06 | 92.92 | 20.6 | 0.30

a Results using speech features plus age and BMI
Sleep disorders are receiving increased attention as a cause of daytime sleepiness, impaired work performance and traffic accidents, and they are associated with hypertension, heart failure, arrhythmia, and diabetes. Among sleep disorders, obstructive sleep apnea (OSA) is the most frequent one [1]. OSA is characterized by recurring episodes of breathing pauses during sleep, greater than 10 s at a time, caused by a blockage of the upper airway (UA) at the level of the pharynx.

The gold standard for sleep apnea diagnosis is the polysomnography (PSG) test [2]. This test requires an overnight stay of the patient at the sleep unit of a hospital to monitor breathing patterns, heart rhythm and limb movements. As a result of this test, the apnea–hypopnea index (AHI) is computed as the average number of apnea and hypopnea episodes (total and partial breath-cessation episodes, respectively) per hour of sleep. Because of its high reliability, this index is used to describe the severity of the patient's condition: a low AHI (AHI <10) indicates a healthy subject, 10 ≤ AHI ≤ 30 a mild OSA patient, while an AHI above 30 is associated with severe OSA. Waiting lists for PSG may exceed 1 year in some countries, such as Spain [3]. Therefore, faster and less costly alternatives have been proposed for early OSA detection and severity assessment, and speech-based methods are among them.

The rationale for using speech analysis in OSA assessment can be found in early works such as the one by Davidson et al. [4], where the evolutionary changes in the acquisition of speech are connected to the appearance of OSA on an anatomical basis. Several studies have shown physical alterations in OSA patients, such as craniofacial abnormalities, dental occlusion, a longer distance between the hyoid bone and the mandibular plane, relaxed pharyngeal soft tissues, a large tongue base, etc., that generally cause a longer and more collapsible upper airway (UA).
Consequently, abnormal or distinctive speech features may be expected in OSA speakers as a result of the altered structure or function of their UA.

Early approaches to speech-based OSA detection can be found in [5] and [6]. In [5] the authors used perceptual speech descriptors (related to articulation, phonation and resonance) to correctly identify 96.3 % of normal (healthy) subjects, though only 63.0 % of sleep apnea speakers were detected. The use of acoustic analysis of speech for OSA detection was first presented in [7] and [8]. Fiz et al. [7] examined the harmonic structure of vowel spectra, finding a narrower frequency range for OSA speakers, which may point to differences in laryngeal behavior between OSA and non-OSA speakers. Later on, Robb et al. [8] presented an acoustic analysis of vocal tract formant frequencies and bandwidths, thus focusing on the supra-laryngeal level, where OSA-related alterations should have a larger impact according to the pathogenesis of the disorder.

These early contributions have driven recent proposals for using automatic speech processing in OSA detection such as [9–14]. Different approaches, generally using machine learning techniques, have been studied for the Hebrew [9, 14] and Spanish [10–13] languages. Results have been reported for different types of speech (i.e., sustained and/or continuous speech) [9, 11, 13], different speech features [9, 12, 13], and the modeling of different linguistic units [11]. Speech recorded in two distinct positions, upright or seated and supine or stretched, has also been considered [13, 15].

Despite the positive results reported in these previous studies (including ours), as will be presented in the section "Discussion", we have found contradictory results when applying the proposed methods to our large clinical database composed of speech samples from 426 OSA male speakers. The next section describes a new method for estimating the AHI using state-of-the-art speaker voice characterization technologies.
This same approach has recently been tested and demonstrated to be effective in the estimation of other characteristics in speaker populations, such as age [16] and height [17]. However, as can be seen in the section "Results", only very limited performance is found when this approach is used for AHI prediction. These poor results contrast with the positive results reported by previous research, which motivated us to review that research carefully. The review (presented in the section "Discussion") reveals some common limitations and deficiencies in the development and validation of machine learning techniques, such as overfitting and false discovery (i.e., finding spurious or indirect associations) [18], that may have led to overoptimistic previous results. Therefore, our study can represent an important and useful example to illustrate the potential pitfalls in the development of machine learning techniques for diagnostic applications, as is being identified by the biomedical engineering research community [19].

As we conclude at the end of the paper, we not only hope that our study can be useful for the development of machine learning techniques in biomedical engineering research; we also think it can help guide future research on the connection between speech and OSA.

Subjects and experimental design

The population under study is composed of 426 male subjects presenting symptoms of OSA during a preliminary interview with a pneumonologist, such as excessive daytime sleepiness, snoring, choking during sleep, or somnolent driving. Several clinical variables were collected for each individual: age, height, weight, body mass index (BMI, defined as the weight in kilograms divided by the square of the height in meters, kg/m2) and cervical perimeter (CP, a measure of the neck circumference, in centimeters, at the level of the cricothyroid membrane).
This database has been recorded at the Hospital Quirón de Málaga (Spain) since 2010 and is, to the best of our knowledge, the largest database used in this kind of study. The database contains 597 speakers: 426 males and 171 females. Our study had no impact on the diagnostic process of the patients or on their possible medical treatment; therefore, the Hospital did not consider it necessary to seek approval from its ethics committee. Before starting the study, participants were notified about the research and their informed consent was obtained. Statistics of the clinical variables for the male population in this study are summarized in Table 1.

Table 1 Descriptive statistics on the 426 male subjects

Clinical variable | Mean | SD | Range
AHI | 22.5 | 18.1 | 0.0–102.0
Weight (kg) | 91.7 | 17.3 | 61.0–162.0
Height (cm) | 175.3 | 7.1 | 152.0–197.0
BMI (kg/m2) | 29.8 | 5.1 | 20.1–52.1
Age (years) | 48.8 | 12.5 | 20.0–85.0
Cervical perimeter (cm) | 42.2 | 3.2 | 34.0–53.0

AHI apnea–hypopnea index, BMI body mass index, SD standard deviation

The diagnosis for each patient was confirmed by specialized medical staff through PSG, obtaining the AHI on the basis of the number of apnea and hypopnea episodes. Patients' speech was recorded prior to the PSG. All speakers read the same 4 sentences and sustained a complete set of Spanish vowels [i.e., a, o, u]. Sentences were designed to cover linguistic/phonetic contexts relevant to the peculiarities of OSA voices (see details in [12]). Recordings were made in a room with low noise, with patients in an upright or seated position. The recording equipment was a standard laptop with a USB Plantronics SP500 headset. Speech was recorded at a sampling frequency of 50 kHz and encoded in 16 bits.
Afterwards it was down-sampled to 16 kHz before processing.

Problem formulation

Our major aim is to test whether state-of-the-art speaker voice characterization technologies that have already been demonstrated to be effective in estimating speaker characteristics such as age [16] and height [17] could also be effective in estimating the AHI. It is important to point out that, besides predicting the AHI from speech samples, we also tested the performance of these same techniques in estimating the other clinical variables (age, height, weight, BMI and CP). We think this evaluation is relevant for two main reasons: firstly, to validate our methodology by comparing our results for age, height and BMI with those previously reported over general speaker populations (such as [16, 17, 20]); and secondly, to identify correlations between speech and other clinical variables that can increase the likelihood of false discovery based on spurious or indirect associations [18] between these clinical variables and the AHI.
This second aspect will be relevant when presenting the critical review of previous approaches to OSA assessment in the section “Discussion”.

Consequently, our study can be formulated as a machine learning regression problem as follows: we are given a training dataset of speech recordings and clinical variable information $\mathbf{S}_{\text{tr}}^{j} = \left\{ \mathbf{x}_{n}, y_{n}^{j} \right\}_{n=1}^{N}$, where $\mathbf{x}_{n} \in \Re^{p}$ denotes the acoustic representation of the n-th utterance in the training dataset and $y_{n}^{j} \in \Re$ denotes the corresponding value of the clinical variable for the speaker of that utterance; j indexes a particular variable in the set of V clinical variables (j = 1, 2, … V; i.e., AHI, age, height, weight, BMI, CP).

The goal is to design an estimator function $f^{j}$ for each clinical variable such that, for an utterance of an unseen test speaker $\mathbf{x}_{\text{tst}}$, the difference between the estimated value of that clinical variable, $\hat{y}^{j} = f^{j}\left( \mathbf{x}_{\text{tst}} \right)$, and its actual value $y^{j}$ is minimized.

Once this regression problem has been formulated, two main issues must be addressed: (1) what acoustic representation and model will be used for a given utterance $\mathbf{x}_{n}$, and (2) how to design the regression (estimator) functions $f^{j}$.

Acoustic representation of OSA-related sounds

Besides the linguistic message, speech signals carry important information about speakers, mainly related to their particular physical or physiological characteristics. This has been the basis for the development of automatic speaker recognition systems, automatic detection of vocal fold pathologies, emotional/psychological state recognition, and age and weight (or BMI) estimation. In a similar vein, the specific characteristics of the upper airway (UA) in OSA individuals have led to the hypothesis that OSA can be detected through automatic acoustic analysis of speech sounds.

To represent OSA-specific acoustic information, the speech records in our database include read speech of four sentences designed to contain distinctive sounds that discriminate between healthy and OSA speakers. The design of these four sentences followed the reference research in [5] and [6], where Fox et al. identify a set of speech descriptors in OSA speakers related to articulation, phonation and resonance. For example, the third sentence in our corpus includes mostly nasal sounds, to detect the expected resonance anomalies in OSA individuals (the details on the design criteria for this corpus can be found in [12]).
Additionally, to exclude acoustic factors unrelated to OSA discrimination, speech signal acquisition was done in a low-noise room using a single high-quality microphone (USB SP500 Plantronics headset).

Once we had a set of speech utterances containing OSA-specific sounds, collected in a controlled recording environment, the signals were processed at a sampling frequency of 16 kHz to obtain a precise wide-band representation of all the relevant information in the speech spectrum. As Fig. 1 illustrates, each sentence was analyzed in speech segments (i.e., frames) of 20 ms duration with an overlap of 10 ms; each speech frame was multiplied by a Hamming window. The spectral envelope of each frame was then represented using mel-frequency cepstral coefficients (MFCCs). MFCCs provide a spectral envelope representation of speech sounds extensively used in automatic speech and speaker recognition [21, 22], pathological voice detection, and age, height and BMI estimation [16, 17, 20]. MFCCs have also been used in previous research on speech-based OSA detection [9–11] and [14].

Fig. 1 Acoustic representation of utterances

In the MFCC representation, the spectrum magnitude of each speech frame is first obtained as the absolute value of its DFT (discrete Fourier transform). A filterbank of triangular filters spaced on a frequency scale based on the human perception system (the mel scale) is then used to obtain a vector with the log-energies of each filter (see Fig. 1). Finally, a discrete cosine transform (DCT) is applied to the vector of log filterbank energies to produce a compact set of decorrelated MFCC coefficients. Additionally, to represent spectral change over time, the MFCCs are extended with their first-order (velocity, or delta ΔMFCC) time derivatives (more details on MFCC parametrization can be found in [23]).

In our experiments, the acoustic information in each speech frame i is thus represented by a D-dimensional vector Oi, called the observation vector, that includes 19 MFCC + 19 ΔMFCC parameters, so D = 38. MFCC extraction was performed using the HTK software (htk.eng.cam.ac.uk); see Table 2 for details on the DFT order, number of triangular filters, etc.

Table 2 Implementation tools

Tool: HTK
- HCopy — extract the MFCC coefficients. Parameters: no. DFT bins = 512; no. filters = 26; no. MFCC coeff. = 19; no. ΔMFCC coeff. = 19.

Tool: MSR Identity ToolBox (b)
- GMM_em — GMM–UBM training. Parameters: no. mixtures = 512; no. of expectation–maximization iterations = 10; feature sub-sampling factor = 1.
- MapAdapt — GMM adaptation. Parameters: adaptation algorithm = MAP; no. mixtures = 512; MAP relevance factor = 10.
- Train_tv_space — total variability matrix training. Parameters: dimension of total variability matrix = {400, 300, 200, 100, 50, 30}; number of iterations = 5.
- Extract_ivector — i-vector training. Parameters: dimension of total variability matrix = {400, 300, 200, 100, 50, 30}.

Tool: LIBSVM
- SVM_train — SVR training. Grid search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7.
- SVM_predict — SVR regression. Grid search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7.

(a) All the implementation tools were used under Linux Ubuntu 12.04 LTS
Operating System. (b) Executed on Matlab 2014a.

Utterance modelling

Due to the natural variability of speech production, different utterances of the same sentence will have different durations and will therefore be represented by a variable-length sequence $\underline{\mathbf{O}}$ of observation vectors:

$$\underline{\mathbf{O}} = \left[ \mathbf{O}_{1}, \mathbf{O}_{2}, \ldots, \mathbf{O}_{NF} \right] \quad (1)$$

where $\mathbf{O}_{i}$ is the D-dimensional observation vector at frame i and NF is the number of frames, which varies with the duration of each reading of the sentence. This variable-length sequence cannot be the input to a regression algorithm such as support vector regression (SVR), which will serve as the estimator function $f^{j}$ used to predict $y^{j}$ (the AHI and the other clinical variables: age, height, weight, BMI and CP).

Consequently, the sequence of observations $\underline{\mathbf{O}}$ must be mapped into a vector of fixed dimension.
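The front-end described above, and the variable-length problem it creates, can be sketched in Python. This is a simplified illustration with a generic mel filterbank — not the exact HTK HCopy configuration, which also applies pre-emphasis and cepstral liftering — but it follows the same pipeline: 20 ms Hamming-windowed frames with a 10 ms hop, DFT magnitude, 26 triangular mel filters, log, DCT-II to 19 coefficients, plus first-order deltas. The signal lengths are hypothetical.

```python
import numpy as np

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    """Triangular filters spaced on the mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        fb[i, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)   # rising slope
        fb[i, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)   # falling slope
    return fb

def mfcc_with_deltas(x, sr=16000, n_fft=512, n_filters=26, n_coeff=19):
    frame, hop = 20 * sr // 1000, 10 * sr // 1000     # 20 ms frames, 10 ms hop
    win = np.hamming(frame)
    fb = mel_filterbank(n_filters, n_fft, sr)
    # DCT-II basis over the log filterbank energies (coefficients 1..19)
    dct = np.cos(np.pi * np.outer(np.arange(1, n_coeff + 1),
                                  np.arange(n_filters) + 0.5) / n_filters)
    nf = 1 + (len(x) - frame) // hop
    c = np.array([dct @ np.log(fb @ np.abs(
            np.fft.rfft(win * x[t * hop:t * hop + frame], n_fft)) + 1e-10)
                  for t in range(nf)])                # (NF, 19) static MFCCs
    return np.hstack([c, np.gradient(c, axis=0)])    # append ΔMFCCs -> (NF, 38)

# Two hypothetical readings of the same sentence, 2.3 s and 3.1 s long:
rng = np.random.default_rng(0)
O_fast = mfcc_with_deltas(rng.standard_normal(36800))   # 2.3 s at 16 kHz
O_slow = mfcc_with_deltas(rng.standard_normal(49600))   # 3.1 s at 16 kHz
# Same sentence, different NF (229 vs. 309 frames of dimension 38):
# a fixed-length mapping is needed before regression.
```

The differing first dimensions of `O_fast` and `O_slow` are exactly the reason the supervector and i-vector mappings of the next subsections are needed.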
In our method, this has been done using two modeling approaches, referred to as supervectors and i-vectors, which have been successfully applied to speaker recognition [24], language recognition [25], speaker age estimation [16], speaker height estimation [17] and accent recognition [26]. We think that their success in those challenging tasks, where speech contains significant sources of interfering intra-speaker variability (speaker weight, height, etc.), is a reasonable guarantee for exploring their use in estimating the AHI and other clinical variables in our OSA population.

It is also important to point out that we have avoided the use of feature selection procedures because, as will be discussed in the section “Discussion”, we believe such procedures have led to over-fitted results in several previous studies in this field. For that reason, our approach evaluates both the high-dimensional acoustic modelling provided by supervectors and the low-dimensional i-vector representation based on subspace projection. These two techniques are described below.

Supervectors

Both the supervector and i-vector modelling approaches start by fitting a Gaussian mixture model (GMM) to the sequence of observations $\underline{\mathbf{O}}$. A GMM (see [23, 27]) consists of a weighted sum of K D-dimensional Gaussian components, where, in our case, D is the dimension of the MFCC observation vectors. Each i-th Gaussian component is represented by a mean vector (µi) of dimension D and a D × D covariance matrix (Σi). Due to limited data, it is not possible to accurately fit a separate GMM for a short utterance, especially when using a high number of Gaussian components (i.e., large K). Consequently, GMMs are obtained using adaptation techniques from a universal background model (UBM), which is itself a GMM trained on a large database containing speech from many different speakers [23]. Therefore, as Fig. 2 illustrates, the variable-length sequence $\underline{\mathbf{O}}$ of vectors of a given utterance is used to adapt the GMM–UBM, generating an adapted GMM in which only the means (µi) are adapted.

Fig. 2 GMM and supervector modelling

In the supervector modelling approach [21], the adapted GMM means (µi) are extracted and concatenated (appending one after the other) into a single high-dimensional vector s called the GMM mean supervector:

$$\mathbf{s} = \begin{bmatrix} \boldsymbol{\mu}_{1} \\ \boldsymbol{\mu}_{2} \\ \vdots \\ \boldsymbol{\mu}_{K} \end{bmatrix} \quad (2)$$

The resulting fixed-length supervector, of size K × D, is now suitable as input to a regression algorithm, such as SVR, to predict the AHI and the other clinical variables.

As summarized in Table 2, in our experiments GMM–UBM training, GMM adaptation and supervector generation were done using the MSR Identity ToolBox for Matlab™ [28], running on Matlab 2014a under Linux Ubuntu 12.04 LTS. As also shown in Table 2, to obtain a precise acoustic representation for each sentence, a GMM with K = 512 components was used, resulting in a high-dimensional supervector of size K × D = 512 × 38 = 19,456 (D = 38 is the dimension of the MFCC observation vectors Oi).

As mentioned before, training the GMM–UBM requires a considerable amount of development data to represent a global acoustic space. For development we therefore used several large databases containing microphone speech sampled at 16 kHz, covering a wide range of phonetic variability in continuous/read Spanish speech (see, for example, ALBAYZIN [29], one of the databases we used). The whole development dataset includes 25,451 speech recordings from 940 speakers.
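The means-only MAP adaptation and concatenation described above can be sketched in numpy on a toy diagonal-covariance UBM with K = 4 components (the experiments use K = 512, giving a 512 × 38 = 19,456-dimensional supervector) and the MAP relevance factor r = 10 from Table 2. The function name and toy data are illustrative assumptions, not the MSR Identity ToolBox implementation.

```python
import numpy as np

def map_adapt_supervector(O, ubm_means, ubm_vars, ubm_w, r=10.0):
    """Relevance-MAP adaptation of UBM means (means-only, diagonal
    covariances), then concatenation into a GMM mean supervector."""
    K, D = ubm_means.shape
    # log w_i + log N(o_t | mu_i, diag(var_i)) for every frame/component pair
    ll = np.stack([
        -0.5 * (np.sum(np.log(2 * np.pi * ubm_vars[i]))
                + np.sum((O - ubm_means[i]) ** 2 / ubm_vars[i], axis=1))
        for i in range(K)], axis=1) + np.log(ubm_w)
    post = np.exp(ll - ll.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)           # frame posteriors
    n = post.sum(axis=0)                              # zero-order stats n_i
    E = (post.T @ O) / np.maximum(n[:, None], 1e-10)  # first-order stats E_i
    alpha = (n / (n + r))[:, None]                    # data-dependent weight
    mu_adapted = alpha * E + (1 - alpha) * ubm_means  # adapted means
    return mu_adapted.reshape(-1)                     # supervector, size K*D

# Toy UBM (K = 4, D = 38) and a 120-frame utterance:
rng = np.random.default_rng(1)
ubm_means = rng.standard_normal((4, 38))
s = map_adapt_supervector(rng.standard_normal((120, 38)),
                          ubm_means, np.ones((4, 38)), np.full(4, 0.25))
# s has fixed size K*D = 152 regardless of the number of input frames
```

Components that see little data (small n_i) keep means close to the UBM, which is what makes the mapping robust for short utterances.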
Among them, 126 speakers with a confirmed OSA diagnosis, not used for tests, were also included to reflect the OSA-specific characteristics of speech.

I-vectors

Beyond the success of high-dimensional supervectors, a newer paradigm called the i-vector has been introduced and is now widely used by the speaker recognition community [24]. The i-vector model relies on the definition of a low-dimensional total variability subspace and can be described in the GMM mean supervector space by:

$$\mathbf{s} = \mathbf{m} + \mathbf{Tw} \quad (3)$$

where s is the GMM mean supervector representing an utterance and m is the mean supervector obtained from the GMM–UBM, which can be considered a global acoustic representation independent of utterance, speaker, health and clinical condition. T is a rectangular low-rank matrix representing the primary directions of total acoustic variability observed in a large development speech database, and w is a low-dimensional random vector with a standard normal distribution. In short, Eq. (3) can be viewed as a simple factor analysis projecting the high-dimensional (on the order of thousands) supervector s onto the low-dimensional (on the order of hundreds) factor vector, identity vector or i-vector w. T is called the total variability matrix, and the components of the i-vector w are the total factors that represent the acoustic information in the reduced total variability space.
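A simplified sketch of recovering w from Eq. (3): the posterior mean of w under its standard-normal prior, with a diagonal covariance Σ. Real i-vector extraction (e.g., [30]) accumulates per-component Baum–Welch statistics; here that bookkeeping is collapsed into a single supervector residual, so this illustrates the subspace projection only, not the toolbox algorithm. All dimensions and data below are toy assumptions.

```python
import numpy as np

def ivector(s, m, T, sigma):
    """Posterior mean of w in s = m + T w, standard-normal prior on w,
    diagonal covariance sigma (given as a vector)."""
    Tt_Sinv = T.T / sigma                 # T' Sigma^{-1}
    L = np.eye(T.shape[1]) + Tt_Sinv @ T  # posterior precision of w
    return np.linalg.solve(L, Tt_Sinv @ (s - m))

# Toy dimensions: 200-dim supervector space, 30-dim total variability space
rng = np.random.default_rng(0)
T = rng.standard_normal((200, 30))    # toy total variability matrix
m = np.zeros(200)                     # toy UBM mean supervector
w_true = rng.standard_normal(30)
w_hat = ivector(m + T @ w_true, m, T, np.ones(200))
# w_hat closely recovers w_true, up to mild shrinkage from the prior
```

The prior term `np.eye(...)` in the posterior precision is what shrinks poorly observed directions toward zero, mirroring the regularizing role of the standard-normal assumption on w.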
Compared to supervectors, the total variability modeling using i-vectors has the advantage of projecting the high dimensionality of GMM supervectors into a low-dimensional subspace, where most of the speaker-specific variability is captured.\nAutomatic speech recognition systems typically use i-vectors with dimensionality of 400. In our tests the total variability matrix T was estimated using the same development data described before for training the GMM–UBM, and we evaluated subspace projections for i-vectors with different dimensions ranging from 30 to 400. Efficient procedure for training T and MAP adaptation of i-vectors can be found in [30]. In our tests we use the implementation provided by MSR Identity ToolBox for Matlab™ [28] running over Matlab 2014a on Ubunutu 12.04 LTS (see the details in Table 2).\nBeyond the success of high-dimensional supervectors, a new paradigm called i-vector has been successfully and is widely used by the speaker recognition community [24]. The i-vector model relies on the definition of a low-dimensional total variability subspace and can be described in the GMM mean supervector space by:3\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$${\\mathbf{s}} = {\\mathbf{m}} + {\\mathbf{Tw}}$$\\end{document}s=m+Twwhere s is the GMM mean supervector representing an utterance and m is the mean supervector obtained from the UBM GMM–UBM, which can be considered a global acoustic representation independent from utterance, speaker, health and clinical condition. 
T is a rectangular low rank matrix representing the primary directions of total acoustic variability observed in a large development speech database, and w is a low dimensional random vector having a standard normal distribution. In short, Eq. (3) can be viewed as a simple factor analysis for projecting the high-dimensional (in order of thousands) supervector s to the low-dimensional (in order of hundreds) factor vector, identity vector or i-vector w. T is named the total variability matrix and the components of i-vector w are the total factors that represent the acoustic information in the reduced total variability space. Compared to supervectors, the total variability modeling using i-vectors has the advantage of projecting the high dimensionality of GMM supervectors into a low-dimensional subspace, where most of the speaker-specific variability is captured.\nAutomatic speech recognition systems typically use i-vectors with dimensionality of 400. In our tests the total variability matrix T was estimated using the same development data described before for training the GMM–UBM, and we evaluated subspace projections for i-vectors with different dimensions ranging from 30 to 400. Efficient procedure for training T and MAP adaptation of i-vectors can be found in [30]. 
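As a concrete illustration of Eq. (3), the sketch below computes the MAP point estimate of w from an utterance's Baum–Welch statistics using a toy, randomly initialized T. The dimensions, the random statistics and the helper name `extract_ivector` are illustrative assumptions, not the MSR Identity ToolBox implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (the paper uses K = 512 mixtures, D = 38 features and
# i-vector dimensions between 30 and 400; everything is shrunk here).
K, D, R = 8, 4, 3          # mixtures, feature dim, i-vector dim
CD = K * D                 # supervector dimension

m = rng.normal(size=CD)              # UBM mean supervector
T = 0.1 * rng.normal(size=(CD, R))   # total variability matrix (low rank)
Sigma = np.ones(CD)                  # flattened diagonal UBM covariances

def extract_ivector(N, F_raw):
    """MAP point estimate of w in s = m + Tw from Baum-Welch statistics.
    N: (K,) zeroth-order occupancy counts; F_raw: (CD,) first-order stats."""
    N_full = np.repeat(N, D)             # expand counts over feature dims
    F = F_raw - N_full * m               # center stats around the UBM means
    # Posterior precision L = I + T' diag(N/Sigma) T; mean = L^-1 T' Sigma^-1 F
    L = np.eye(R) + T.T @ (N_full[:, None] / Sigma[:, None] * T)
    return np.linalg.solve(L, T.T @ (F / Sigma))

N = rng.uniform(1, 50, size=K)       # fake statistics for one utterance
F_raw = rng.normal(size=CD)
w = extract_ivector(N, F_raw)
print(w.shape)                       # a fixed, low-dimensional representation
```

Whatever the utterance duration, the result is a fixed-length vector of dimension R, which is what makes i-vectors usable as regression inputs.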
Regression using SVR

Once an utterance is represented by a fixed-length vector, supervector or i-vector, SVR is employed as the estimator function f^j to predict y^j, i.e., the AHI and the other clinical variables (age, height, weight, BMI and CP).

SVR is a function approximation approach developed as a regression version of the widely known Support Vector Machine (SVM) classifier [31]. When using SVR, the input variable (i-vector/supervector) is first mapped onto a high-dimensional feature space by a non-linear mapping performed by the kernel function. The kernel yields the new high-dimensional features through a similarity measure between points in the original feature space.
Once the mapping onto a high-dimensional space is done, a linear model is constructed in this feature space by finding the optimal hyperplane around which most of the training samples lie within an ε-margin (ε-insensitive zone) [31].

The generalization performance of SVR depends on a good setting of two hyperparameters (ε, C) and of the kernel parameters. The parameter ε controls the width of the ε-insensitive zone used to fit the training data: this width determines the level of accuracy of the approximation function and relies entirely on the target values of the training set. The parameter C determines the trade-off between the model complexity, controlled by ε, and the degree to which deviations larger than the ε-insensitive zone are tolerated in the optimization of the hyperplane. Finally, the kernel parameters depend on the type of similarity measure used.

In this paper, SVR is applied to estimate the clinical variables, and linear and radial basis function (RBF) kernels were tested to approximate the estimator function f^j. Both linear and RBF kernels were tested for i-vectors, but only linear kernels were considered for supervectors because their large dimensionality makes mapping them into an even higher-dimensional space inadvisable. SVR training and testing were implemented using LIBSVM [32] running on Linux Ubuntu 12.04 LTS.
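A minimal sketch of this regression setup, using scikit-learn's SVR (which wraps LIBSVM) rather than the LIBSVM tools used in the paper; the data, dimensions and grid values are hypothetical placeholders that only loosely mirror the ranges in Table 2.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical data: one 100-dimensional i-vector per speaker, AHI targets.
X = rng.normal(size=(120, 100))
y = rng.uniform(0, 60, size=120)

# epsilon sets the insensitive zone, C the complexity trade-off; both are
# searched on a power-of-two grid, as in the paper's setup.
grid = {"C": [2.0 ** k for k in (-3, 0, 3)],
        "epsilon": [2.0 ** k for k in (-3, 0, 3)]}
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5)
search.fit(X, y)

pred = search.predict(X[:5])         # estimated AHI for five utterances
print(search.best_params_, pred.shape)
```

Swapping `kernel="rbf"` for `kernel="linear"` gives the variant used for supervectors, where the input is already very high-dimensional.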
Table 2 describes the details of use for this software together with all the parameters used in our tests.

Performance metrics

To evaluate the proposed method of using supervectors or i-vectors to predict or estimate the AHI and the other clinical variables (age, height, weight, BMI and CP), we measure both the mean absolute error (MAE) and the Pearson correlation coefficient (ρ). MAE provides the average absolute difference between actual and estimated values, while ρ evaluates their linear relationship. As we will see in the section "Results", correlation coefficients between estimated and actual AHI values were often very small. Therefore, we considered it informative to report p-values for the correlation coefficients, i.e., the probability of observing such a correlation if the true correlation were in fact zero (null hypothesis).

Although the main objective of our method is to evaluate the capability of using speech to predict or estimate the AHI, in the section "Discussion" we also review previous research aiming to classify or discriminate between subjects with OSA (AHI ≥ 10) and without OSA (defined by an AHI < 10). Therefore, we performed some additional tests using our estimated AHI values to classify subjects as OSA (predicted AHI ≥ 10) and non-OSA (predicted AHI < 10). In these classification tests, performance was measured in terms of sensitivity, specificity and the area under the ROC curve.
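The metrics above can be sketched as follows on hypothetical actual/estimated AHI values (the data are random placeholders; scipy and scikit-learn supply the statistics):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(2)
actual = rng.uniform(0, 60, size=100)              # placeholder true AHI
estimated = actual + rng.normal(0, 10, size=100)   # placeholder predictions

mae = np.mean(np.abs(actual - estimated))          # mean absolute error
rho, p_value = pearsonr(actual, estimated)         # correlation + p-value

# Screening at the AHI >= 10 threshold used for the classification tests
true_osa, pred_osa = actual >= 10, estimated >= 10
tn, fp, fn, tp = confusion_matrix(true_osa, pred_osa).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(true_osa, estimated)           # area under the ROC curve
print(f"MAE={mae:.1f} rho={rho:.2f} p={p_value:.3g} AUC={auc:.2f}")
```

Note that the AUC is computed from the continuous AHI estimates, while sensitivity and specificity depend on the fixed threshold of 10.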
k-fold cross-validation and grid-search

In order to train the SVR regression model (function f^j) and predict the y^j variables (AHI and the other clinical variables), we employed k-fold cross-validation together with a grid search for the optimal SVR parameters. The whole process is presented in Fig. 3. Firstly, to guarantee that all speakers are involved in the test, the dataset is split into k equal-sized subsamples with no speakers in common. Then, of the k subsamples, a single subsample is retained for testing and the remaining k−1 subsamples are used as the training dataset.
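A compact sketch of this speaker-disjoint k-fold scheme with an inner grid search, under assumed toy data (scikit-learn's `GroupKFold` enforces the no-shared-speakers constraint; the feature dimensions and grid values are illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, GroupKFold
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Hypothetical data: 60 speakers, 4 utterances each, 50-dim features.
n_spk, per_spk, dim = 60, 4, 50
X = rng.normal(size=(n_spk * per_spk, dim))
y = rng.uniform(0, 60, size=n_spk * per_spk)
speakers = np.repeat(np.arange(n_spk), per_spk)

outer = GroupKFold(n_splits=10)          # no speaker in train and test at once
grid = {"C": [0.1, 1.0, 10.0], "epsilon": [0.5, 1.0, 2.0]}
predictions = np.empty_like(y)

for train_idx, test_idx in outer.split(X, y, groups=speakers):
    # Inner fivefold grid search restricted to the training folds
    inner = GridSearchCV(SVR(kernel="linear"), grid, cv=5)
    inner.fit(X[train_idx], y[train_idx])
    predictions[test_idx] = inner.predict(X[test_idx])

print(predictions.shape)                 # one held-out estimate per utterance
```

Because every utterance falls in exactly one outer test fold, the loop yields a held-out prediction for each speaker, which is what the reported MAE and ρ are computed on.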
Results were reported for k = 10.

Fig. 3 Representation of k-fold cross-validation and grid search for SVR regression and predicting clinical variables

Furthermore, as Fig. 3 also illustrates, in each cross-validation loop the optimal hyperparameters (ε, C) of the SVR models are obtained through grid search using a fivefold cross-validation on the training data. The ranges for this grid search are detailed in Table 2.

The population under study is composed of 426 male subjects presenting symptoms of OSA during a preliminary interview with a pneumonologist, such as excessive daytime sleepiness, snoring, choking during sleep, or somnolent driving. Several clinical variables were collected for each individual: age, height, weight, body mass index (BMI, defined as the weight in kilograms divided by the square of the height in meters, kg/m2) and cervical perimeter (CP, a measure of the neck circumference, in centimeters, at the level of the cricothyroid membrane). This database has been recorded at the Hospital Quirón de Málaga (Spain) since 2010 and is, to the best of our knowledge, the largest database used in this kind of study. The database contains 597 speakers: 426 males and 171 females. Our study had no impact on the diagnosis process of patients or on their possible medical treatment; therefore, the Hospital did not consider it necessary to seek approval from its ethics committee. Before starting the study, participants were notified about the research and their informed consent was obtained.
Statistics of the clinical variables for the male population in this study are summarized in Table 1.

Table 1 Descriptive statistics on the 426 male subjects

Clinical variable          Mean    SD     Range
AHI                        22.5    18.1   0.0–102.0
Weight (kg)                91.7    17.3   61.0–162.0
Height (cm)                175.3   7.1    152.0–197.0
BMI (kg/m2)                29.8    5.1    20.1–52.1
Age (years)                48.8    12.5   20.0–85.0
Cervical perimeter (cm)    42.2    3.2    34.0–53.0

AHI apnea–hypopnea index, BMI body mass index, SD standard deviation

The diagnosis for each patient was confirmed by specialized medical staff through PSG, obtaining the AHI on the basis of the number of apnea and hypopnea episodes. Patients' speech was recorded prior to PSG. All speakers read the same four sentences and sustained the complete set of Spanish vowels (i.e., a, e, i, o, u). The sentences were designed to cover relevant linguistic/phonetic contexts related to peculiarities of OSA voices (see details in [12]). Recordings were made in a room with low noise, with patients in an upright or seated position. The recording equipment was a standard laptop with a USB SP500 Plantronics headset. Speech was recorded at a sampling frequency of 50 kHz and encoded in 16 bits; it was afterwards down-sampled to 16 kHz before processing.

Our major aim is to test whether state-of-the-art speaker voice characterization technologies that have already been demonstrated to be effective in the estimation of speaker characteristics such as age [16] and height [17] could also be effective in estimating the AHI. It is important to point out that, besides predicting the AHI from speech samples, we also tested the performance of these same techniques when estimating the other clinical variables (age, height, weight, BMI and CP).
We think this evaluation is relevant for two main reasons: firstly, to validate our methodology by comparing our results when estimating age, height and BMI with those previously reported over general speaker populations (such as [16, 17, 20]); and secondly, to identify correlations between speech and other clinical variables that can increase the likelihood of false discovery based on spurious or indirect associations [18] between these clinical variables and the AHI. This second aspect will be relevant when presenting the critical review of previous approaches to OSA assessment in the section "Discussion".

Consequently, our study can be formulated as a machine learning regression problem as follows: we are given a training dataset of speech recordings and clinical variable information S_tr^j = {x_n, y_n^j}, n = 1, …, N, where x_n ∈ ℝ^p denotes the acoustic representation of the nth utterance in the training dataset and y_n^j ∈ ℝ denotes the corresponding value of the clinical variable for the speaker of that utterance; j indexes a particular variable in the set of V clinical variables (j = 1, 2, …, V; i.e., AHI, age, height, weight, BMI, CP).

The goal is to design an estimator function f^j for each clinical variable such that, for an utterance of an unseen testing speaker x_tst, the difference between the estimated value ŷ^j = f^j(x_tst) and the actual value y^j is minimized.

Once this regression problem has been formulated, two main issues must be addressed: (1) what acoustic representation and model will be used for a given utterance x_n, and (2) how to design the regression or estimator functions f^j.

Besides the linguistic message, speech signals carry important information about speakers, mainly related to their particular physical or physiological characteristics. This has been the basis for the development of automatic speaker recognition systems, automatic detection of vocal fold pathologies, emotional/psychological state recognition, as well as age and weight (or BMI) estimation. In a similar vein, the specific characteristics of the UA in OSA individuals have led to the hypothesis that OSA can be detected through automatic acoustic analysis of speech sounds.

To represent OSA-specific acoustic information, the speech records in our database include read speech of four sentences designed to contain specific distinctive sounds that discriminate between healthy and OSA speakers. The design of these four sentences followed the reference research in [5] and [6], where Fox et al. identify a set of speech descriptors in OSA speakers related to articulation, phonation and resonance. For example, the third sentence in our corpus includes mostly nasal sounds to detect the expected resonance anomalies in OSA individuals (the details on the design criteria for this corpus can be found in [12]).
Additionally, to exclude any acoustic factor not related to OSA discrimination, speech signal acquisition was done in a room with low noise using a single high-quality microphone (USB SP500 Plantronics headset).

Once we have a set of speech utterances containing OSA-specific sounds, collected in a controlled recording environment, the speech signals were processed at a sampling frequency of 16 kHz to obtain a precise wide-band representation of all the relevant information in the speech spectrum. As Fig. 1 illustrates, each sentence was analyzed in speech segments (i.e., frames) of 20 ms duration with an overlap of 10 ms; each speech frame was multiplied by a Hamming window. The spectral envelope of each frame was then represented using mel-frequency cepstral coefficients (MFCCs). MFCCs provide a spectral envelope representation of speech sounds extensively used in automatic speech and speaker recognition [21, 22], pathological voice detection, and age, height and BMI estimation [16, 17, 20]. MFCCs have also been used in previous research on speech-based OSA detection [9–11] and [14].

Fig. 1 Acoustic representation of utterances

In the MFCC representation, the spectrum magnitude of each speech frame is first obtained as the absolute value of its DFT (discrete Fourier transform). Then a filterbank of triangular filters spaced on a frequency scale based on the human perception system (i.e., the mel scale) is used to obtain a vector with the log-energies of each filter (see Fig. 1). Finally, a discrete cosine transform (DCT) is applied over the vector of log filterbank energies to produce a compact set of decorrelated MFCC coefficients. Additionally, in order to represent the spectral change over time, the MFCCs are extended with their first-order (velocity or delta, ΔMFCC) time derivatives (more details on MFCC parametrization can be found in [23]).
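The frame-level pipeline just described (|DFT| → mel filterbank → log → DCT) can be sketched as follows. This is a simplified illustration, not the HTK HCopy configuration, and `mfcc_like` is a hypothetical helper operating on a random placeholder signal.

```python
import numpy as np

def mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def imel(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_like(frames, sr=16000, n_fft=512, n_filters=26, n_ceps=19):
    """frames: (n_frames, frame_len) already-windowed speech frames."""
    mag = np.abs(np.fft.rfft(frames, n_fft))           # spectrum magnitude
    # Triangular filters equally spaced on the mel scale
    pts = imel(np.linspace(0.0, mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l: fbank[i, l:c] = np.linspace(0.0, 1.0, c - l, endpoint=False)
        if r > c: fbank[i, c:r] = np.linspace(1.0, 0.0, r - c, endpoint=False)
    log_e = np.log(mag @ fbank.T + 1e-10)              # log filterbank energies
    # DCT-II over the log energies, keeping coefficients c1..c19
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(1, n_ceps + 1), n + 0.5) / n_filters)
    return log_e @ dct.T

# 20 ms frames (320 samples at 16 kHz) with 10 ms overlap, Hamming-windowed
sig = np.random.default_rng(4).normal(size=16000)      # 1 s placeholder signal
frames = np.stack([sig[s:s + 320] for s in range(0, len(sig) - 320, 160)])
ceps = mfcc_like(frames * np.hamming(320))
print(ceps.shape)
```

Appending frame-to-frame differences of these coefficients would give the ΔMFCC half of the 38-dimensional observation vectors used in the paper.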
So far, in our experiments, the acoustic information in each speech frame i is represented by a D-dimensional vector O_i, called the observation vector, that includes 19 MFCC + 19 ΔMFCC parameters, thus D = 38. The extraction of MFCCs was performed using the HTK software (htk.eng.cam.ac.uk); see Table 2 for the details on DFT order, number of triangular filters, etc.

Table 2 Implementation tools

Tool (a)                  Function name    Function description               Parameters
HTK                       HCopy            Extract the MFCC coefficients      No. DFT bins = 512; No. filters = 26; No. MFCC coeff. = 19; No. ΔMFCC coeff. = 19
MSR Identity ToolBox (b)  GMM_em           GMM–UBM training                   No. mixtures = 512; No. of expectation-maximization iterations = 10; Feature sub-sampling factor = 1
                          MapAdapt         GMM adaptation                     Adaptation algorithm = MAP; No. mixtures = 512; MAP relevance factor = 10
                          Train_tv_space   Total variability matrix training  Dimension of total variability matrix = {400, 300, 200, 100, 50, 30}; Number of iterations = 5
                          Extract_ivector  i-vector training                  Dimension of total variability matrix = {400, 300, 200, 100, 50, 30}
LIBSVM                    SVM_train        SVR training                       Grid search parameters: C, model complexity = −20:20; ε, insensitive zone = 2^−7:2^7
                          SVM_predict      SVR regression                     Grid search parameters: C, model complexity = −20:20; ε, insensitive zone = 2^−7:2^7

(a) All the implementation tools were used under the Linux Ubuntu 12.04 LTS operating system
(b) Executed on Matlab 2014a

Due to the natural variability in speech production, different utterances corresponding to the same sentence will exhibit variable duration and thus will be represented by a variable-length sequence O of observation vectors:

O = [O_1, O_2, …, O_NF]    (1)

where O_i is the D-dimensional observation vector at frame i and NF is the number of frames, which will vary due to the different durations when reading the same sentence.
This variable-length sequence cannot be the input to a regression algorithm such as support vector regression (SVR), which will be the estimator function f^j used to predict y^j (y^j being the AHI and the other clinical variables: age, height, weight, BMI and CP).

Consequently, the sequence of observations O must be mapped into a vector with fixed dimension. In our method, this has been done using two modeling approaches, referred to as supervectors and i-vectors, which have been successfully applied to speaker recognition [24], language recognition [25], speaker age estimation [16], speaker height estimation [17] and accent recognition [26].
We think that their success in those challenging tasks, where speech contains significant sources of interfering intra-speaker variability (speaker weight, height, etc.), is a reasonable guarantee for exploring their use in estimating the AHI and other clinical variables in our OSA population.

It is also important to point out that we have avoided the use of feature selection procedures because, as will be discussed in the section “Discussion”, we believe this has led to over-fitted results in several previous studies in this field. For that reason, our approach evaluates both the high-dimensional acoustic modelling provided by supervectors and the low-dimensional i-vector representations based on subspace projection. These two techniques are described below.

Both the supervector and i-vector modelling approaches start by fitting a Gaussian mixture model (GMM) to the sequence of observations O. A GMM (see [23, 27]) consists of a weighted sum of K D-dimensional Gaussian components, where, in our case, D is the dimension of the MFCC observation vectors. Each i-th Gaussian component is represented by a mean vector (µi) of dimension D and a D × D covariance matrix (Σi). Due to limited data, it is not possible to accurately fit a separate GMM for a short utterance, especially when using a high number of Gaussian components (i.e., large K). Consequently, GMMs are obtained using adaptation techniques from a universal background model (UBM), which is also a GMM, trained on a large database containing speech from a large number of different speakers [23]. Therefore, as Fig. 2 illustrates, the variable-length sequence O of vectors of a given utterance is used to adapt the GMM–UBM, generating an adapted GMM where only the means (µi) are adapted.

Fig. 2 GMM and supervector modelling

In the supervector modelling approach [21], the adapted GMM means (µi) are extracted and concatenated (appending one after the other) into a single high-dimensional vector s that is called the GMM mean supervector:

s = [µ1; µ2; …; µK]   (2)

The resulting fixed-length supervector, of size K × D, is now suitable to be used as input to a regression algorithm, such as SVR, to predict the AHI and the other clinical variables.

As summarized in Table 2, in our experiments GMM–UBM training, GMM adaptation and supervector generation were done using the MSR Identity ToolBox for Matlab™ [28] running over Matlab 2014a on Linux Ubuntu 12.04 LTS. As also shown in Table 2, to have a precise acoustic representation for each sentence a GMM with K = 512 components was used, resulting in a high-dimensional supervector of size K × D = 512 × 38 = 19,456 (D = 38 being the dimension of the MFCC observation vectors Oi).

As mentioned before, training the GMM–UBM requires a considerable amount of development data to represent a global acoustic space. Therefore, for development we used several large databases containing microphone speech sampled at 16 kHz, covering a wide range of phonetic variability from continuous/read Spanish speech (see, for example, ALBAYZIN [29], one of the databases we used). The whole development dataset includes 25,451 speech recordings from 940 speakers.
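Table 2 states that adaptation is mean-only MAP with relevance factor 10. As an illustration of that step and of the supervector construction in Eq. (2), here is a numpy sketch on toy data (diagonal covariances assumed; this is not the MSR Identity ToolBox code, just the textbook computation):

```python
import numpy as np

def map_adapt_means(X, weights, means, covs, r=10.0):
    """Mean-only MAP adaptation of a GMM-UBM with relevance factor r.
    X: (NF, D) observation vectors of one utterance.
    weights: (K,), means: (K, D), covs: (K, D) diagonal covariances."""
    # Component log-densities -> posterior responsibilities per frame
    log_p = np.stack([
        -0.5 * (np.sum((X - means[k]) ** 2 / covs[k], axis=1)
                + np.sum(np.log(2 * np.pi * covs[k])))
        + np.log(weights[k])
        for k in range(len(weights))
    ], axis=1)                                   # (NF, K)
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)    # (NF, K)

    n = gamma.sum(axis=0)                        # soft counts, (K,)
    Ex = (gamma.T @ X) / np.maximum(n, 1e-10)[:, None]  # first-order stats
    alpha = (n / (n + r))[:, None]               # data/prior interpolation
    return alpha * Ex + (1 - alpha) * means      # adapted means, (K, D)

def supervector(adapted_means):
    """Concatenate adapted means into a single K*D vector (Eq. 2)."""
    return adapted_means.reshape(-1)

# Toy UBM with K = 4 components in D = 38 dimensions
rng = np.random.default_rng(0)
K, D = 4, 38
w = np.full(K, 1 / K)
mu = rng.standard_normal((K, D))
cov = np.ones((K, D))
X = rng.standard_normal((120, D))                # a 120-frame utterance
s = supervector(map_adapt_means(X, w, mu, cov))
print(s.shape)  # (152,)
```

With the paper's K = 512 and D = 38, the same reshape yields the 19,456-dimensional supervector; a very large relevance factor r would leave the means essentially at the UBM prior.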
Among them, 126 speakers with a confirmed OSA diagnosis, not used for the tests, were also included to reflect OSA-specific characteristics of speech.

Beyond the success of high-dimensional supervectors, a newer paradigm called the i-vector has become widely used by the speaker recognition community [24]. The i-vector model relies on the definition of a low-dimensional total variability subspace and can be described in the GMM mean supervector space by:

s = m + Tw   (3)

where s is the GMM mean supervector representing an utterance and m is the mean supervector obtained from the GMM–UBM, which can be considered a global acoustic representation independent of utterance, speaker, health and clinical condition. T is a rectangular low-rank matrix representing the primary directions of total acoustic variability observed in a large development speech database, and w is a low-dimensional random vector having a standard normal distribution. In short, Eq. (3) can be viewed as a simple factor analysis for projecting the high-dimensional (on the order of thousands) supervector s onto the low-dimensional (on the order of hundreds) factor vector, identity vector or i-vector w. T is named the total variability matrix, and the components of the i-vector w are the total factors that represent the acoustic information in the reduced total variability space.
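Under the standard total variability model of Eq. (3), the point estimate of w for an utterance has a closed form in terms of the utterance's zeroth- and first-order statistics, w = (I + TᵀΣ⁻¹NT)⁻¹ TᵀΣ⁻¹f̃. The numpy sketch below illustrates that computation on randomly generated toy statistics (it is not the MSR Identity ToolBox implementation; a diagonal-covariance UBM is assumed):

```python
import numpy as np

def extract_ivector(T, Sigma, n, f_centered):
    """Point estimate of the i-vector w for Eq. (3):
        w = (I + T' Sigma^-1 N T)^-1  T' Sigma^-1 f~
    T: (K*D, R) total variability matrix
    Sigma: (K*D,) stacked diagonal UBM covariances
    n: (K,) zeroth-order (soft count) statistics per component
    f_centered: (K*D,) first-order statistics centered on the UBM means."""
    KD, R = T.shape
    D = KD // len(n)
    N = np.repeat(n, D)                        # expand counts per dimension
    TtSinv = T.T * (1.0 / Sigma)               # T' Sigma^-1, shape (R, K*D)
    precision = np.eye(R) + (TtSinv * N) @ T   # I + T' Sigma^-1 N T
    return np.linalg.solve(precision, TtSinv @ f_centered)

# Toy setup: 8 mixtures, 38-dim features, 100-dim i-vector
rng = np.random.default_rng(1)
K, D, R = 8, 38, 100
T = 0.1 * rng.standard_normal((K * D, R))
Sigma = np.ones(K * D)
n = rng.uniform(1, 50, size=K)
f = 0.1 * rng.standard_normal(K * D)
w = extract_ivector(T, Sigma, n, f)
print(w.shape)  # (100,)
```

Note how the standard-normal prior on w appears as the identity term in the precision matrix: with no observed statistics the estimate collapses to the prior mean, zero.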
Compared to supervectors, total variability modelling using i-vectors has the advantage of projecting the high dimensionality of GMM supervectors into a low-dimensional subspace, where most of the speaker-specific variability is captured.

Automatic speech recognition systems typically use i-vectors with a dimensionality of 400. In our tests the total variability matrix T was estimated using the same development data described before for training the GMM–UBM, and we evaluated subspace projections for i-vectors with dimensions ranging from 30 to 400. An efficient procedure for training T and for MAP adaptation of i-vectors can be found in [30]. In our tests we used the implementation provided by the MSR Identity ToolBox for Matlab™ [28] running over Matlab 2014a on Ubuntu 12.04 LTS (see the details in Table 2).

Once an utterance is represented by a fixed-length vector, supervector or i-vector, SVR is employed as the estimator function f^j to predict y^j, i.e., the AHI and the other clinical variables (age, height, weight, BMI and CP).

SVR is a function approximation approach developed as a regression version of the widely known support vector machine (SVM) classifier [31].
When using SVR, the input variable (i-vector/supervector) is first mapped onto a high-dimensional feature space by a non-linear mapping. The mapping is performed by the kernel function, which yields the new high-dimensional features through a similarity measure between points in the original feature space. Once the mapping onto the high-dimensional space is done, a linear model is constructed in this feature space by finding the optimal hyperplane such that most of the training samples lie within an ε-margin (ε-insensitive zone) around this hyperplane [31].

The generalization performance of SVR depends on a good setting of two hyperparameters (ε, C) and of the kernel parameters. The parameter ε controls the width of the ε-insensitive zone used to fit the training data. The width of the ε-insensitive zone determines the level of accuracy of the approximation function, and it depends entirely on the target values of the training set. The parameter C determines the trade-off between the model complexity, controlled by ε, and the degree to which deviations larger than the ε-insensitive zone are tolerated in the optimization of the hyperplane. Finally, the kernel parameters depend on the type of similarity measure used.

In this paper, SVR is applied to estimate the clinical variables, and linear and radial basis function (RBF) kernels were tested to approximate the estimator function f^j. In our study, both linear and RBF kernels were tested for i-vectors, but only linear kernels were considered for supervectors because their large dimensionality makes mapping them into a still higher-dimensional space inadvisable. SVR training and testing were implemented using LIBSVM [32] running on Linux Ubuntu 12.04 LTS.
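The paper's experiments use LIBSVM. Purely to illustrate the roles of C and ε described above, the following numpy sketch fits a linear SVR in primal form by subgradient descent on the ε-insensitive loss (a toy stand-in for didactic purposes, not the LIBSVM solver):

```python
import numpy as np

def linear_svr_fit(X, y, C=100.0, eps=0.05, lr=0.001, iters=50000):
    """Minimise 0.5*||w||^2 + C * mean(max(0, |X w + b - y| - eps))
    by subgradient descent. eps sets the insensitive-zone half-width;
    C weights tube violations against model flatness."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(iters):
        r = X @ w + b - y                               # residuals
        g = np.where(np.abs(r) > eps, np.sign(r), 0.0)  # loss subgradient
        w -= lr * (w + C * (X.T @ g) / n)
        b -= lr * C * g.mean()
    return w, b

# Toy data: y = 2x + 1 with small noise
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.02 * rng.standard_normal(200)
w, b = linear_svr_fit(X, y)
```

Points whose residual lies inside the ±ε tube contribute nothing to the loss; increasing C penalizes tube violations more strongly (tighter fit), while increasing ε widens the tube and yields a flatter, simpler model — exactly the trade-off the grid search in Table 2 explores.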
Table 2 describes the usage details for this software, together with all the parameters used in our tests.

To evaluate the proposed method of using supervectors or i-vectors to predict or estimate the AHI and the other clinical variables (age, height, weight, BMI and CP), we measure both the mean absolute error (MAE) and the Pearson correlation coefficient (ρ). MAE provides the average absolute difference between actual and estimated values, while ρ evaluates their linear relationship. As we will see in the section “Results”, correlation coefficients between estimated and actual AHI values were often very small. Therefore, we considered it informative to report p-values for the correlation coefficients, i.e., the probability that they are in fact zero (null hypothesis).

Although the main objective of our method is to evaluate the capability of using speech to predict or estimate the AHI, in the section “Discussion” we also review previous research that aims to classify or discriminate between subjects with OSA (AHI ≥10) and without OSA (AHI <10). Therefore, we performed some additional tests using our estimated AHI values to classify subjects as OSA (predicted AHI ≥10) or non-OSA (predicted AHI <10).
In these classification tests, performance was measured in terms of sensitivity, specificity and the area under the ROC curve.

In order to train the SVR regression model (function f^j) and predict the y^j variables (AHI and the other clinical variables), we employed k-fold cross-validation and grid search for finding the optimal SVR parameters. The whole process is presented in Fig. 3. First, to guarantee that all speakers are involved in the test, the dataset is split into k equal-sized subsamples with no speakers in common. Then, of the k subsamples, a single subsample is retained for testing and the remaining k−1 subsamples are used as the training dataset. Results are reported for k = 10.

Fig. 3 Representation of k-fold cross-validation and grid search for SVR regression and prediction of clinical variables

Furthermore, as Fig. 3 also illustrates, in each cross-validation loop the optimal hyperparameters (ε, C) of the SVR models are obtained through grid search using a fivefold cross-validation on the training data.
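All of the evaluation metrics above are simple to compute from paired actual/estimated AHI values. As an illustration on toy data (our own helper, using the rank-comparison formulation of the ROC AUC):

```python
import numpy as np

def evaluate_ahi(y_true, y_pred, threshold=10.0):
    """MAE, Pearson correlation, and OSA screening metrics at an AHI cut-off."""
    mae = np.mean(np.abs(y_true - y_pred))
    rho = np.corrcoef(y_true, y_pred)[0, 1]
    pos = y_true >= threshold                 # subjects with OSA (AHI >= 10)
    pred_pos = y_pred >= threshold            # subjects predicted as OSA
    sensitivity = np.mean(pred_pos[pos])
    specificity = np.mean(~pred_pos[~pos])
    # AUC = P(estimated AHI of a random OSA subject > that of a random non-OSA one)
    diffs = y_pred[pos][:, None] - y_pred[~pos][None, :]
    auc = np.mean((diffs > 0) + 0.5 * (diffs == 0))
    return mae, rho, sensitivity, specificity, auc

# Toy example with six subjects (actual AHI vs. estimated AHI)
y_true = np.array([2.0, 5.0, 8.0, 12.0, 20.0, 35.0])
y_pred = np.array([4.0, 6.0, 11.0, 9.0, 18.0, 30.0])
print(evaluate_ahi(y_true, y_pred))
```

Note that the AUC is computed from the continuous estimated AHI values, whereas sensitivity and specificity depend on the fixed cut-off of 10.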
The ranges for this grid search are detailed in Table 2.

Clinical variables estimation

Results in Tables 3 and 4 show performance when using speech to estimate age and height. As mentioned before, the purpose of these tests is to validate our procedure by comparing our results to those reported in the recent references [16] and [17]. Table 3 shows that our estimation performance for height (both in terms of MAE and correlation coefficient) is comparable to, and when using i-vectors better than, that in [17]. However, estimation results for age (Table 4) are slightly worse than those in [16]. A plausible explanation is that the population in [16] includes a majority of young people, between 20 and 30 years old, while most of our OSA speakers are well above 45 years old. According to [16], speech records from young speakers can be discriminated better than those from older ones. In any case, our results are very similar to results published previously by other authors, which is a good indicator of the validity of our methods.

Table 3 Speakers’ height estimation results

Regression method | Mean absolute error (cm) | Correlation coefficient (ρ)
I-vector–LSSVR [17] | 6.2 | 0.41 (b)
Supervector–SVR | 5.37 | 0.34 (a)
I-vector–SVR | 5.06 | 0.45 (a)

(a) These values are significant beyond the 0.01 level of confidence
(b) Level of confidence is not reported

Table 4 Speakers’ age estimation results

Regression method | Mean absolute error (years) | Correlation coefficient (ρ)
I-vector–WCCN–SVR [16] | 6.0 | 0.77 (b)
Supervector–SVR | 7.75 | 0.66 (a)
I-vector–SVR | 7.87 | 0.63 (a)

(a) These values are significant beyond the 0.01 level of confidence
(b) Level of confidence is not reported

Prediction results using i-vectors and supervectors for all our clinical variables are listed in Tables 5, 6 and 7.

Table 5 Speakers’ clinical variables estimation using supervector–SVR (linear kernel)

Clinical variable | MAE | ρ
AHI | 14.26 | 0.17
Height (cm) | 5.37 | 0.34
Age (years) | 7.75 | 0.66
Weight (kg) | 12.58 | 0.31
BMI (kg/m2) | 3.81 | 0.23
CP (cm) | 2.29 | 0.42

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter
The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence

Table 6 Speakers’ clinical variables estimation using i-vectors–SVR (linear kernel); values given per i-vector dimension

Clinical variable | MAE (dim 400/300/200/100/50/30) | ρ (dim 400/300/200/100/50/30)
AHI | 13.68/13.64/13.55/13.23/13.40/13.85 | 0.23/0.21/0.24/0.30/0.27/0.20
Height (cm) | 5.21/5.23/5.11/5.06/5.29/5.38 | 0.40/0.41/0.43/0.45/0.36/0.34
Age (years) | 8.16/7.87/8.11/8.29/8.77/9.16 | 0.61/0.63/0.61/0.59/0.52/0.44
Weight (kg) | 12.31/12.23/12.25/11.86/12.16/12.31 | 0.34/0.35/0.36/0.39/0.35/0.31
BMI (kg/m2) | 3.59/3.65/3.67/3.69/3.74/3.80 | 0.33/0.30/0.29/0.28/0.26/0.18
CP (cm) | 2.28/2.26/2.20/2.26/2.31/2.42 | 0.44/0.45/0.49/0.47/0.44/0.32

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter
The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence

Table 7 Speakers’ clinical variables estimation using i-vectors–SVR (RBF kernel); values given per i-vector dimension

Clinical variable | MAE (dim 400/300/200/100/50/30) | ρ (dim 400/300/200/100/50/30)
AHI | 14.04/13.91/13.63/13.48/13.84/14.12 | 0.00/0.17/0.25/0.26/0.18/0.02
Height (cm) | 5.28/5.23/5.16/5.24/5.46/5.43 | 0.40/0.41/0.42/0.41/0.29/0.32
Age (years) | 9.46/9.22/8.29/8.68/9.10/9.53 | 0.42/0.51/0.61/0.57/0.50/0.41
Weight (kg) | 12.39/12.82/12.18/12.11/12.27/12.59 | 0.29/0.18/0.32/0.35/0.34/0.24
BMI (kg/m2) | 3.73/3.70/3.66/3.68/3.72/3.77 | 0.20/0.18/0.27/0.27/0.21/0.14
CP (cm) | 2.38/2.42/2.32/2.34/2.42/2.44 | 0.31/0.26/0.42/0.40/0.31/0.26

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter
The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence

As pointed out before, for supervectors (Table 5) only a linear kernel was evaluated, because the very large supervector dimension (>1000) makes mapping this data into a still higher-dimensional space inadvisable.

Tables 6 and 7 show that for i-vectors, estimation results using linear and RBF kernels are very similar. These tables also show that both i-vectors and supervectors reach similar results for almost all clinical variables.
AHI classification

Table 8 shows classification results, in terms of sensitivity, specificity and area under the ROC curve, when classifying our population into OSA subjects and healthy individuals based on the estimated AHI values. That is, supervectors or i-vectors are first used to estimate the AHI using SVR, and subjects are then classified as OSA individuals when their estimated AHI is above ten; otherwise they are classified as healthy. The results in Table 8 using i-vectors were obtained for an i-vector dimensionality of 100, as this provided the best AHI estimation results (see Table 6).

Table 8 OSA classification using estimated AHI values

Feature | Accuracy (%) | Sensitivity (%) | Specificity (%) | ROC AUC
Supervectors | 68 | 89 | 18 | 0.58
I-vectors (dim 100) | 71 | 92 | 20 | 0.64

We are aware that better results could be obtained using supervectors or i-vectors as inputs to a classification algorithm such as SVM; however, the results in Table 8 were obtained only to provide figures that will be used in the section “Discussion” to compare our results with those from previous research (Table 9).

Table 9 Test characteristics of previous research using speech analysis and machine learning for AHI classification and regression

Study | Population characteristics | Correct classification rate (%) | Sensitivity (%) | Specificity (%) | Correlation coefficient
GMMs [10] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 81 | 77.5 | 85 | –
HMMs [11] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 85 | – | – | –
Several feature selection and classification schemes [13] | 248 subjects (AHI ≤5: 48 male, 79 women; AHI ≥30: 101 male, 20 women) | 82.85 | 81.49 | 84.69 | –
Feature selection and GMMs [9] | 93 subjects (AHI ≤5: 14 female; AHI >5: 19 female) (AHI ≤10: 12 male; AHI >10: 48 male) | – | 86 / 83 | 84 / 79 | –
Feature selection and GMMs [41] | 103 male subjects (AHI ≤10: 25 male; AHI >10: 78 male) | 80 | 80.65 | 80 | –
Feature selection, supervectors and SVR [14] | 131 males | – | – | – | 0.67 (a)
I-vectors/supervectors and SVR (this study) | 426 males (AHI <10: 125 male; AHI ≥10: 301 male) | 71.06 | 92.92 | 20.6 | 0.30

(a) Results using speech features plus age and BMI
In any case, our results are very similar to those published previously by other authors, which is a good indicator of the validity of our methods.\nTable 3 Speakers’ height estimation results\nRegression method | Mean absolute error (cm) | Correlation coefficient (ρ)\nI-vector–LSSVR [17] | 6.2 | 0.41b\nSupervector–SVR | 5.37 | 0.34a\nI-vector–SVR | 5.06 | 0.45a\naThese values are significant beyond the 0.01 level of confidence\nbLevel of confidence is not reported\nTable 4 Speakers’ age estimation results\nRegression method | Mean absolute error (years) | Correlation coefficient (ρ)\nI-vector–WCCN–SVR [16] | 6.0 | 0.77b\nSupervector–SVR | 7.75 | 0.66a\nI-vector–SVR | 7.87 | 0.63a\naThese values are significant beyond the 0.01 level of confidence\nbLevel of confidence is not reported\nPrediction results using i-vectors and supervectors for all our clinical variables are listed in Tables 5, 6 and 7.\nTable 5 Speakers’ clinical variables estimation using supervector–SVR (linear kernel)\nClinical variable | MAE | ρ\nAHI | 14.26 | 0.17\nHeight (cm) | 5.37 | 0.34\nAge (years) | 7.75 | 0.66\nWeight (kg) | 12.58 | 0.31\nBMI (kg/m2) | 3.81 | 0.23\nCP (cm) | 2.29 | 0.42\nAHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence\nTable 6 Speakers’ clinical variables estimation using i-vector–SVR (linear kernel); values are given for i-vector dimensions 400, 300, 200, 100, 50 and 30\nClinical variable | MAE (dims 400/300/200/100/50/30) | ρ (dims 400/300/200/100/50/30)\nAHI | 13.68, 13.64, 13.55, 13.23, 13.40, 13.85 | 0.23, 0.21, 0.24, 0.30, 0.27, 0.20\nHeight (cm) | 5.21, 5.23, 5.11, 5.06, 5.29, 5.38 | 0.40, 0.41, 0.43, 0.45, 0.36, 0.34\nAge (years) | 8.16, 7.87, 8.11, 8.29, 8.77, 9.16 | 0.61, 0.63, 0.61, 0.59, 0.52, 0.44\nWeight (kg) | 12.31, 12.23, 12.25, 11.86, 12.16, 12.31 | 0.34, 0.35, 0.36, 0.39, 0.35, 0.31\nBMI (kg/m2) | 3.59, 3.65, 3.67, 3.69, 3.74, 3.80 | 0.33, 0.30, 0.29, 0.28, 0.26, 0.18\nCP (cm) | 2.28, 2.26, 2.20, 2.26, 2.31, 2.42 | 0.44, 0.45, 0.49, 0.47, 0.44, 0.32\nAHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence\nTable 7 Speakers’ clinical variables estimation using i-vector–SVR (RBF kernel); values are given for i-vector dimensions 400, 300, 200, 100, 50 and 30\nClinical variable | MAE (dims 400/300/200/100/50/30) | ρ (dims 400/300/200/100/50/30)\nAHI | 14.04, 13.91, 13.63, 13.48, 13.84, 14.12 | 0.00, 0.17, 0.25, 0.26, 0.18, 0.02\nHeight (cm) | 5.28, 5.23, 5.16, 5.24, 5.46, 5.43 | 0.40, 0.41, 0.42, 0.41, 0.29, 0.32\nAge (years) | 9.46, 9.22, 8.29, 8.68, 9.10, 9.53 | 0.42, 0.51, 0.61, 0.57, 0.50, 0.41\nWeight (kg) | 12.39, 12.82, 12.18, 12.11, 12.27, 12.59 | 0.29, 0.18, 0.32, 0.35, 0.34, 0.24\nBMI (kg/m2) | 3.73, 3.70, 3.66, 3.68, 3.72, 3.77 | 0.20, 0.18, 0.27, 0.27, 0.21, 0.14\nCP (cm) | 2.38, 2.42, 2.32, 2.34, 2.42, 2.44 | 0.31, 0.26, 0.42, 0.40, 0.31, 0.26\nAHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence\nAs pointed out before, for supervectors (Table 5) only a linear kernel was evaluated, because the very large supervector dimension (>1000) makes it inadvisable to map the data into a higher-dimensional space.\nTables 6 and 7 show that for i-vectors, estimation results using linear and RBF kernels are very similar. 
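The two figures of merit reported throughout Tables 3, 4, 5, 6 and 7, mean absolute error (MAE) and the correlation coefficient (ρ), can be computed with a few lines of plain Python (our own illustrative sketch; the toy values below are not from our data):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error between reference and estimated values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pearson_rho(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(ssx * ssy)

# Toy check: estimates with a constant offset of +2 from the reference
true_ahi = [5.0, 12.0, 25.0, 40.0]
est_ahi = [7.0, 14.0, 27.0, 42.0]
print(mae(true_ahi, est_ahi))          # 2.0
print(pearson_rho(true_ahi, est_ahi))  # 1.0 (a constant offset does not affect rho)
```

Note that ρ is insensitive to constant offsets and scale, which is why MAE and ρ are reported together in the tables.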
These tables also show that both i-vectors and supervectors reach similar results for almost all clinical variables.", "Table 8 shows classification results in terms of sensitivity, specificity and area under the ROC curve when classifying our population as OSA subjects or healthy individuals based on the estimated AHI values. That is, first supervectors or i-vectors are used to estimate the AHI using SVR, and then subjects are classified as OSA individuals when their estimated AHI is above ten; otherwise they are classified as healthy. The results in Table 8 using i-vectors were obtained for an i-vector dimensionality of 100, as this provided the best AHI estimation results (see Table 6).\nTable 8 OSA classification using estimated AHI values\nFeature | Accuracy (%) | Sensitivity (%) | Specificity (%) | ROC AUC\nSupervectors | 68 | 89 | 18 | 0.58\nI-vectors (dim 100) | 71 | 92 | 20 | 0.64\nWe are aware that better results could be obtained using supervectors or i-vectors as inputs to a classification algorithm such as SVM; however, the results in Table 8 were obtained only to provide figures that will be used in the section “Discussion” to compare our results with those from previous research (Table 9).Table 9Test characteristics of previous research using speech analysis and machine learning for AHI classification and regressionStudyPopulation characteristicsClassificationRegressionCorrect classification rate (%)Sensitivity (%)Specificity (%)Correlation coefficientGMMs [10]80 male subjects(AHI <10: 40 men, AHI >30: 40 men)8177.585_HMMs [11]80 male subjects(AHI <10: 40 men, AHI >30: 40 men)85___Several feature selection and classification schemes [13]248 subjects(AHI ≤5: 48 male, 79 women; AHI ≥30: 101 male, 20 women)82.8581.4984.69_Feature selection and GMMs [9]93 subjects(AHI ≤5: 14 female; AHI >5: 19 female)(AHI ≤10: 12 male; AHI >10: 48 male)_86838479_Feature selection and GMMs [41]103 male subjects(AHI ≤10: 25 male; AHI >10: 78 male)8080.6580_Feature 
selection, supervectors and SVR [14]131 males___0.67a\nI-vectors/supervectors and SVR this study426 males(AHI <10: 125 male; AHI ≥10: 301 male)71.0692.9220.60.30\naResults using speech features plus age and BMI\nTest characteristics of previous research using speech analysis and machine learning for AHI classification and regression", "Overall, results in Tables 5–7 indicate poor performance when estimating AHI from acoustic speech information; the best results are obtained using SVR on the i-vector acoustic representation with dimensionality 100 (ρ = 0.30). Better performance is obtained when predicting the other clinical variables: the best results were for i-vectors and a linear-kernel SVR (see Table 6), with correlation coefficient ρ = 0.63 for age, followed by CP (ρ = 0.49), height (ρ = 0.45), weight (ρ = 0.39) and BMI (ρ = 0.33).\nNevertheless, the most interesting discussion arises when comparing these results with those reported in previous research.\nAs stated before, our results when estimating age and height are comparable to those previously published in [16] and [17]. Previous research has also demonstrated moderate results (similar to ours) when estimating speakers’ weight and CP from speech (for example, see [33] and [34]). The lower accuracy when estimating BMI has also been reported in [35]. More positive results have only recently been presented in [20], although their authors have questioned them for possible overfitting, as they used machine learning after feature selection over a large set of acoustic features.\nHowever, our AHI estimation results contrast markedly with those reported in previous research connecting speech and OSA. 
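The two-stage scheme evaluated in Table 8 (first estimate the AHI with SVR, then label a subject as OSA when the estimate is above ten) and the sensitivity and specificity derived from it can be sketched as follows, using made-up numbers rather than our data:

```python
def classify_osa(estimated_ahi, threshold=10.0):
    """Second stage: label a subject as OSA when the estimated AHI exceeds the threshold."""
    return estimated_ahi > threshold

def sens_spec(true_labels, pred_labels):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pairs = list(zip(true_labels, pred_labels))
    tp = sum(1 for t, p in pairs if t and p)
    fn = sum(1 for t, p in pairs if t and not p)
    tn = sum(1 for t, p in pairs if not t and not p)
    fp = sum(1 for t, p in pairs if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

# Made-up reference AHI values and regression estimates for six subjects
true_ahi = [4, 8, 15, 35, 22, 6]
est_ahi = [6, 12, 14, 30, 9, 5]
true_lab = [a >= 10 for a in true_ahi]         # true OSA status
pred_lab = [classify_osa(a) for a in est_ahi]  # thresholded estimates
print(sens_spec(true_lab, pred_lab))           # both are 2/3 in this toy example
```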
Therefore, we decided to undertake a critical review of previous studies (including ours), which led us to identify possible machine learning issues similar to those reported in [19].\nA first discrepancy, though not related to machine learning issues, was addressed in our research [36], where we found notable differences with the seminal work by Robb et al. [8]. In [8], statistically significant differences between OSA and non-OSA speakers were found for several formant frequencies and bandwidths extracted from sustained vowels, while our study in [36] only revealed very weak correlations with two formant bandwidths. In this case, the discrepancy can be mainly attributed to the small and biased sample in Robb’s exploratory analysis (10 OSA and 10 non-OSA subjects, including extreme AHI differences between individuals), while in our study [36] we explored a larger sample of 241 male subjects representing a wide range of AHI values.\nTable 9 summarizes the most relevant existing research proposals using automatic speech analysis and machine learning for OSA assessment.\nWe start by reviewing our own previous positive results presented in [10–12]. In [10] and [11], speech samples from control (AHI <10) and OSA (AHI >30) individuals were used to train a binary machine learning classifier for severe OSA detection. Healthy and OSA speakers were thus classified using two models: one trained to represent OSA voices and the other to model healthy voices. Two different approaches were investigated: (1) a text-independent approach using two GMMs [10], and (2) a text-dependent approach using two Hidden Markov Models (HMMs) [11]. Correct classification rates were 80 and 85 % for GMMs and HMMs, respectively. These promising results contrast with both the weak correlation between speech and AHI and the low OSA classification performance we have found in this study. 
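The two-model scheme of [10, 11] can be illustrated with a deliberately simplified sketch: one generative model per class and a maximum-likelihood decision. We use a single one-dimensional Gaussian per class on synthetic data; the actual systems modeled sequences of acoustic features with GMMs or HMMs:

```python
import math
import random

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, std) to one class's training samples."""
    m = sum(samples) / len(samples)
    var = sum((x - m) ** 2 for x in samples) / len(samples)
    return m, math.sqrt(var)

def log_likelihood(x, params):
    m, s = params
    return -math.log(s * math.sqrt(2 * math.pi)) - (x - m) ** 2 / (2 * s ** 2)

random.seed(0)
# Synthetic 1-D "feature" with separated class means standing in for healthy vs. OSA voices
healthy = [random.gauss(0.0, 1.0) for _ in range(200)]
osa = [random.gauss(2.0, 1.0) for _ in range(200)]
model_h = fit_gaussian(healthy)
model_o = fit_gaussian(osa)

# Classify unseen samples by the higher class likelihood
test_set = [(random.gauss(0.0, 1.0), "H") for _ in range(100)] + \
           [(random.gauss(2.0, 1.0), "O") for _ in range(100)]
pred = ["O" if log_likelihood(x, model_o) > log_likelihood(x, model_h) else "H"
        for x, _ in test_set]
accuracy = sum(p == t for p, (_, t) in zip(pred, test_set)) / len(test_set)
print(accuracy)  # well above chance for these well-separated synthetic classes
```

The sketch also makes the pitfall discussed below tangible: its accuracy depends entirely on how separated (and how representative) the two training populations are, not on any intrinsic property of the decision rule.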
Consequently, we repeated the experiments of [10] and [11] on the same database used in this paper, and found that performance degraded significantly, achieving correct classification rates of only 63 % for GMMs and 67 % for HMMs. This important reduction in performance can again be attributed to the very limited database (40 controls and 40 OSA speakers with AHI >30) used in [10] and [11], while now we have 125 controls (AHI <10) and 118 OSA subjects (AHI >30). As pointed out in [19], the sizes of the training and evaluation sets are important factors in gaining a reasonable understanding of the performance of any classifier. Furthermore, another relevant factor that can explain this degradation in performance is that those 40 controls in [10] and [11] were asymptomatic individuals, selected so that the control and OSA populations were matched as closely as possible in terms of age and BMI. In our new database, by contrast, all individuals (i.e., controls and OSA) are suspected to suffer from OSA, as they were referred to a sleep disorders unit (as indicated before, the control population was defined by AHI <10); so, for example, most of them are heavy snorers. A third possible cause of previous over-optimistic results lies in indirect associations between speech and AHI mediated through other clinical variables (see correlation coefficients between AHI and other clinical variables in Table 10). More specifically, as discussed in [9], speech acoustic features can be less correlated with AHI than with clinical variables such as age or BMI, which are good predictors of AHI [37]. Therefore, a population of controls and OSA speakers with significant differences in these confounding variables can be prone to false discovery of discrimination results due to the underlying differences in these confounders and not in AHI. 
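This confounding mechanism can be made concrete with a small simulation (entirely synthetic numbers): a "speech feature" driven only by age appears strongly correlated with an AHI that is also driven by age, and the association disappears once age is controlled for by regressing it out:

```python
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(ssx * ssy)

def residuals(y, x):
    """Residuals of a simple least-squares regression of y on x (removes the x effect)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return [b - (my + slope * (a - mx)) for a, b in zip(x, y)]

random.seed(1)
age = [random.gauss(50, 10) for _ in range(500)]
feature = [a + random.gauss(0, 5) for a in age]  # "speech feature" driven only by age
ahi = [a + random.gauss(0, 5) for a in age]      # AHI also driven only by age

raw = pearson(feature, ahi)                                      # spuriously strong
partial = pearson(residuals(feature, age), residuals(ahi, age))  # near zero
print(round(raw, 2), round(partial, 2))
```

Here the feature carries no direct information about AHI at all, yet the raw correlation is strong; only the age-adjusted (partial) correlation reveals this.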
This confounding effect was reported in our research [12], where OSA detection results using 16 speech features (many of them similar to those traditionally used in detecting voice pathologies, such as HNR, jitter, shimmer,…) were degraded when tested on a database designed to avoid statistically significant differences in age and BMI.\nTable 10 Spearman’s correlation between clinical variables\nFeature | AHI | Weight | Height | BMI | Age | CP\nAHI | 1 | 0.41a | −0.007 | 0.44a | 0.16a | 0.40a\nWeight | 0.41a | 1 | 0.40a | 0.89a | −0.11a | 0.71a\nHeight | −0.007 | 0.40a | 1 | −0.02 | −0.35a | 0.13a\nBMI | 0.44a | 0.89a | −0.02 | 1 | 0.04 | 0.72a\nAge | 0.16a | −0.11a | −0.35a | 0.04 | 1 | 0.16a\nCP | 0.40a | 0.71a | 0.13a | 0.72a | 0.16a | 1\naThe correlation coefficients (ρ) are significant beyond the 0.01 level of confidence\nThe same critical demand to explore and report significant differences in confounding speaker features such as age, height, BMI, etc., must be extended to any other factor that could affect speech, such as speakers’ dialect, gender, mood state, and so forth. In fact, we believe this is an issue that can explain the good discrimination results when detecting severe OSA reported in [13]. The study by Solan-Casals et al. [13] analyzes both sustained and connected speech, with recordings from two distinct positions, upright or seated and supine or stretched. The reason for recording in two distinct positions, which was also preliminarily explored in [15], is that, due to anatomical and functional abnormalities in OSA individuals, different body positions can affect their vocal tract differently, potentially yielding more discriminative acoustic features. Solan-Casals et al. evaluate several feature selection, feature combination (i.e., PCA) and classification schemes (Bayesian classifiers, KNN, Support Vector Machines, Neural Networks, AdaBoost). 
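As an aside on Table 10: Spearman's ρ is simply a Pearson correlation computed on ranks, which is why it captures monotonic but nonlinear associations between clinical variables. A compact implementation (our own sketch, using average ranks for ties) is:

```python
import math

def ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    ssx = sum((a - mx) ** 2 for a in rx)
    ssy = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(ssx * ssy)

# Monotonic but nonlinear relation: Spearman's rho is exactly 1
print(spearman([1, 2, 3, 4, 5], [1, 4, 9, 16, 25]))  # 1.0
```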
The best results are achieved when using a genetic algorithm for feature selection. An interesting result in [13] is that positive discrimination results, i.e., correct classification rate, sensitivity and specificity all above 80 %, were only obtained when classifying between extreme cases: severe OSA (AHI ≥30) and controls (AHI ≤5), while a notable reduction in performance was obtained when trying to classify “in-between” cases, i.e., cases with AHI between 5 and 30. Solan-Casals et al. conclude that “for intermediate cases where upper-airway closure may not be so pronounced (thus voice not much affected), we cannot rely on voice alone for making a good discrimination between OSA and non-OSA.”\nAt first glance, this conclusion of [13] could be linked to our weak estimation and classification results for the broad range of AHI values using acoustic speech information. However, there are two critical issues that can be identified in this study. First, feature selection is applied over a high number of features (253) compared to the number of cases (248). Though the authors report the use of cross-validation for the development and evaluation of different classification algorithms, there is no clear indication of which data were used for feature selection. At this point, it is worth noting that the i-vector subspace projection in our study was trained using a development database completely different from the one used for training and testing our SVR regression model. Without this precaution, as discussed in several studies [19, 38], feature selection can lead to over-fitted results based on a set of “ad-hoc” selected features. A second highly relevant issue in [13] is that, when evaluating the classification performance between extreme cases (see Table 7 in [13]), the OSA and control groups contain very different percentages of male and female speakers: 48 men/79 women in control vs. 101 men/20 women in OSA. 
This notable imbalance between female and male percentages in the control and OSA groups is clearly due to the significantly lower prevalence of OSA in women compared to men [39]. Consequently, considering the important acoustic differences between female and male voices [40], this makes gender a strong confounding factor that could also explain the good classification results. To illustrate these issues, we studied the best discriminative feature reported in [13], which is the mean value of the harmonics-to-noise ratio (HNR) measured for the sustained vowel /a/ recorded in seated position (MEAN_HNR_VA_A in [13]). A small p value, p < 0.0001, was reported in [13] using a Wilcoxon two-sample test of difference in medians for MEAN_HNR_VA_A values in the control and OSA groups. As our database also contains speech records of a sustained /a/ recorded in seated position for both 426 male individuals and 171 female speakers, we performed Wilcoxon two-sample tests for MEAN_HNR_VA_A values contrasting: (a) a group of male speakers vs a group of female speakers, and (b) a group of extreme OSA male speakers (AHI ≥30) with another of male controls (AHI ≤5). The results, presented in Table 11, clearly reveal that while significant differences (p < 0.0001) appear when contrasting female and male voices (as already reported in other studies such as [40]), no significant differences are found between the extreme OSA groups including only male speakers (p = 0.06). 
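The Wilcoxon two-sample (rank-sum) test used for these contrasts can be sketched via its large-sample normal approximation (our simplified version; it ignores tie and continuity corrections, so it is only indicative):

```python
import math

def rank_sum_p(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum test p value via the normal approximation."""
    pooled = sorted(sample_a + sample_b)

    def rank(v):
        # 1-based rank of v in the pooled sample (ties not averaged in this sketch)
        return pooled.index(v) + 1

    n1, n2 = len(sample_a), len(sample_b)
    w = sum(rank(v) for v in sample_a)           # rank sum of sample A
    mu = n1 * (n1 + n2 + 1) / 2                  # mean of W under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    # Two-sided p value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Clearly shifted samples give a small p value; identical samples do not
low = [10.1, 11.2, 12.3, 13.4, 14.5, 15.6, 16.7, 17.8]
high = [20.1, 21.2, 22.3, 23.4, 24.5, 25.6, 26.7, 27.8]
print(rank_sum_p(low, high) < 0.01)  # True
```

For real analyses a library routine with exact small-sample tables and tie handling should be preferred; the sketch only conveys the mechanics of ranking and the normal approximation.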
These contrasts are therefore an illustrative example of how gender can act as a strong confounding factor.\nTable 11 Wilcoxon two-sample tests for MEAN_HNR_VA_A contrasting gender and groups of extreme OSA male speakers\n | Female | Male | p value | Male (AHI ≤5) | Male (AHI ≥30) | p value\nMedian | 19.43 | 17.07 | <0.0001 | 17.46 | 16.38 | 0.06\nSD | 3.98 | 4.23 | | 3.89 | 4.32 |\n# Samples | 171 | 426 | | 69 | 129 |\nThe connection between OSA and speech analysis has also been studied for the Hebrew language, mainly in [9] and [14]. Following the same approach previously described for [10], the work in [9] uses two GMMs to classify between OSA and non-OSA speakers. However, differently from [10], acoustic feature selection is performed before GMM modelling. The experimental protocol presented by Goldshtein et al. in [9] properly separates female and male speakers. Different AHI thresholds are used to define the OSA and non-OSA groups: an AHI threshold of 5 is used for women and 10 for men. Reported results achieved a specificity of 83 % and sensitivity of 79 % for OSA detection in males, and 86 and 84 % for females (see Table 9). A major limitation in this study is again the small number of cases under study: a total of 60 male speakers (12 controls/48 OSA) and 33 female subjects (14 controls/19 OSA). Besides the low reliability of such small samples, again a critical issue, both in [9] and [14], is the use of feature selection techniques over a large number of acoustic parameters (sometimes on the order of hundreds) when only very limited training data is available. The same research group reported in [41] a decrease in performance using the same techniques as in [9] but over a different database with 103 males. According to Kriboy et al. 
in [41], this mismatch could be explained by the use of a different database with more subjects and a different balance in terms of possible confounding factors (BMI, age, etc.).\nIt is also particularly relevant to analyze the good AHI estimation results reported by Kriboy et al. in [14], because they used a prediction scheme very close to the one we have presented in this paper: GMM supervectors are used in combination with SVR to estimate AHI. Nevertheless, differently from our study, feature selection is again first used to select the five most discriminative features from a set of 71 acoustic features, and then GMM mean supervectors are trained for that small number of features. Although the experimental protocol in [14] separates training and validation data to avoid over-fitting, the set of selected features was composed of five high-order cepstral and LPC coefficients (a15, ΔΔc9, a17, ΔΔc12, c16), which are difficult to interpret or justify. Both cepstral and LPC coefficients are commonly used to represent the acoustic spectral information in speech signals, but higher-order coefficients are generally less informative and noisier. Another notable limitation in validating the results of [14] is that SVR regression is applied after adding two clinical variables, age and BMI, to the speech supervector generated from the five selected features. These two clinical variables are well-known predictors of AHI [37]. So it would have been advisable to first report AHI estimation results using only supervectors representing speech acoustic features, then present results using only age and BMI, and finally give results for supervectors extended with age and BMI.\nTo contribute to reviewing these results, we applied the same estimation procedure described in [14] to our database. The first row in Table 12 shows prediction results for AHI using only speech supervectors with the same set of five selected features as in [14]. 
The second row presents estimation performance when using only BMI and age. The third row includes the results using the supervector of acoustic features extended with BMI and age.\nTable 12 Speakers’ AHI estimation using a supervector generated by five high-order cepstral and LPC coefficients [14]\nSet of variables | MAE | Correlation coefficient (ρ) | p value\na15, ΔΔc9, a17, ΔΔc12, c16 | 14.33 | 0.12 | 0.008\nAGE + BMI | 12.96 | 0.38 | <0.00001\n(a15, ΔΔc9, a17, ΔΔc12, c16) + AGE + BMI | 12.24 | 0.46 | <0.00001\np values are given for the correlation coefficient (ρ)\nAs can be seen in Table 12, estimation results are mainly driven by the presence of BMI and age, and a very poor correlation (ρ = 0.12) is obtained when only the set of five selected speech features is used. Therefore, it is reasonable to conclude that the well-known correlation between AHI and BMI and age [37, 42], together with possible over-fitting from feature selection on a high number of features compared to the number of cases, may explain the optimistic results presented in [14].\nWe acknowledge several limitations in our work that should be addressed in future research. Results presented in this paper are limited to speech from Spanish speakers, so comparisons with other languages will require a more careful analysis of language-dependent acoustic traits in OSA voices. Another limitation of our study is that it has only considered male speakers. As our database now includes a substantial number of female speakers, extending this study to female voices could be especially interesting, as the apnea disease is still not well researched in women. Considering also some recent studies such as [43], we should acknowledge the limitation of i-vectors in representing relevant segmental (non-cepstral) and supra-segmental speaker information. 
Therefore, subspace projection techniques could also be explored over other speech acoustic features previously related to OSA, such as nasality [9, 10], voice turbulence [13, 44] or specific co-articulation trajectories. Finally, a comparative analysis of results for the two different recording positions (as proposed in [15]) should be addressed.", "This study can represent an important and useful example to illustrate the potential pitfalls in the development of machine learning techniques for diagnostic applications. The contradictory results using state-of-the-art speech processing and machine learning for OSA assessment over, to the best of our knowledge, the largest database used in this kind of study led us to undertake a critical review of previous studies reporting positive results in connecting OSA and speech. As is being identified in different fields by the biomedical research community, several limitations in the development of machine learning techniques were observed and, when possible, experimentally studied. In line with other similar studies on these pitfalls [19, 38], the main detected deficiencies are: the impact of the limited size of training and evaluation datasets on performance evaluation, the likelihood of false discovery or spurious associations due to the presence of confounding variables, and the risk of overfitting when feature selection techniques are applied over large numbers of variables when only limited training data is available.\nIn conclusion, we believe that our study and results could be useful both to sensitize the biomedical engineering research community to the potential pitfalls when using machine learning for medical diagnosis, and to guide further research on the connection between speech and OSA. In this latter aspect, we believe the way is open for future research seeking new insights into this connection using different acoustic features, languages, speaking styles, or recording positions. 
However, besides properly addressing the methodological issues when using machine learning, any new advance should carefully explore and report on any possible indirect association between speech and AHI mediated through other clinical variables, or any other factor that could affect speech, such as speakers’ dialect, gender or mood state." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, null, "results", null, null, "discussion", "conclusion" ]
[ "Obstructive sleep apnea", "Speech", "Clinical variables", "Speaker’s voice characterization", "Supervector", "Gaussian mixture models", "i-vector", "Support vector regression" ]
Background: Sleep disorders are receiving increased attention as a cause of daytime sleepiness, impaired work and traffic accidents, and are associated with hypertension, heart failure, arrhythmia, and diabetes. Among sleep disorders, obstructive sleep apnea (OSA) is the most frequent one [1]. OSA is characterized by recurring episodes of breathing pauses during sleep, greater than 10 s at a time, caused by a blockage of the upper airway (UA) at the level of the pharynx. The gold standard for sleep apnea diagnosis is the polysomnography (PSG) test [2]. This test requires an overnight stay of the patient at the sleep unit within a hospital to monitor breathing patterns, heart rhythm and limb movements. As a result of this test, the apnea–hypopnea index (AHI) is computed as the average number of apnea and hypopnea episodes (total and partial breath cessation episodes, respectively) per hour of sleep. Because of its high reliability, this index is used to describe the severity of patients’ condition: a low AHI (<10) indicates a healthy subject, 10 ≤ AHI ≤ 30 a mild OSA patient, while an AHI above 30 is associated with severe OSA. Waiting lists for PSG may exceed 1 year in some countries such as Spain [3]. Therefore, faster and less costly alternatives have been proposed for early OSA detection and severity assessment, and speech-based methods are among them. The rationale for using speech analysis in OSA assessment can be found in early works such as the one by Davidson et al. [4], where the evolutionary changes in the acquisition of speech are connected to the appearance of OSA from an anatomical basis. Several studies have shown physical alterations in OSA patients such as craniofacial abnormalities, dental occlusion, longer distance between the hyoid bone and the mandibular plane, relaxed pharyngeal soft tissues, large tongue base, etc., that generally cause a longer and more collapsible upper airway (UA). 
Consequently, abnormal or particular speech features in OSA speakers may be expected from an altered structure or function of their UA. Early approaches to speech-based OSA detection can be found in [5] and [6]. In [5], the authors used perceptive speech descriptors (related to articulation, phonation and resonance) to correctly identify 96.3 % of normal (healthy) subjects, though only 63.0 % of sleep apnea speakers were detected. The use of acoustic analysis of speech for OSA detection was first presented in [7] and [8]. Fiz et al. [7] examined the harmonic structure of vowel spectra, finding a narrower frequency range for OSA speakers, which may point to differences in laryngeal behavior between OSA and non-OSA speakers. Later on, Robb et al. [8] presented an acoustic analysis of vocal tract formant frequencies and bandwidths, thus focusing on the supra-laryngeal level, where OSA-related alterations should have a larger impact according to the pathogenesis of the disorder. These early contributions have driven recent proposals for using automatic speech processing in OSA detection such as [9–14]. Different approaches, generally using machine learning techniques, have been studied for the Hebrew [9, 14] and Spanish [10–13] languages. Results have been reported for different types of speech (i.e., sustained and/or continuous speech) [9, 11, 13], different speech features [9, 12, 13], and modeling of different linguistic units [11]. Speech recorded from two distinct positions, upright or seated and supine or stretched, has also been considered [13, 15]. Despite the positive results reported in these previous studies (including ours), as will be presented in the section “Discussion”, we have found contradictory results when applying the proposed methods to our large clinical database composed of speech samples from 426 OSA male speakers. The next section describes a new method for estimating the AHI using state-of-the-art speaker’s voice characterization technologies. 
This same approach has recently been tested and demonstrated to be effective in the estimation of other characteristics in speaker populations, such as age [16] and height [17]. However, as can be seen in the section “Results”, only very limited performance is found when this approach is used for AHI prediction. These poor results contrast with the positive results reported by previous research and motivated us to review them carefully. The review (presented in the section “Discussion”) reveals common limitations and deficiencies when developing and validating machine learning techniques, such as overfitting and false discovery (i.e., finding spurious or indirect associations) [18], which may have led to previous overoptimistic results. Therefore, our study can represent an important and useful example to illustrate the potential pitfalls in the development of machine learning techniques for diagnostic applications, as is being identified by the biomedical engineering research community [19]. As we conclude at the end of the paper, we not only hope that our study could be useful to help in the development of machine learning techniques in biomedical engineering research, we also think it can help in guiding any future research on the connection between speech and OSA. Methods: Subjects and experimental design The population under study is composed of 426 male subjects presenting symptoms of OSA during a preliminary interview with a pneumonologist, such as excessive daytime sleepiness, snoring, choking during sleep, or somnolent driving. Several clinical variables were collected for each individual: age, height, weight, body-mass index (BMI, defined as the weight in kilograms divided by the square of the height in meters, kg/m2) and cervical perimeter (CP, measure of the neck circumference, in centimeters, at the level of the cricothyroid membrane). 
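For instance, the BMI definition above, applied to the mean weight and height of our male population (91.7 kg and 175.3 cm, see Table 1), gives a value close to the reported mean BMI of 29.8 kg/m2 (the BMI of the average subject need not coincide exactly with the average BMI):

```python
def bmi(weight_kg, height_cm):
    """Body-mass index: weight (kg) divided by the squared height (m)."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

# Mean weight and height of the 426 male subjects (Table 1)
print(round(bmi(91.7, 175.3), 1))  # 29.8
```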
This database has been recorded at the Hospital Quirón de Málaga (Spain) since 2010 and is, to the best of our knowledge, the largest database used in this kind of study. The database contains 597 speakers: 426 males and 171 females. Our study had no impact on the diagnosis process of patients or on their possible medical treatment; therefore, the Hospital did not consider it necessary to seek approval from their ethics committee. Before starting the study, participants were notified about the research and their informed consent was obtained. Statistics of the clinical variables for the male population in this study are summarized in Table 1.\nTable 1 Descriptive statistics on the 426 male subjects\nClinical variable | Mean | SD | Range\nAHI | 22.5 | 18.1 | 0.0–102.0\nWeight (kg) | 91.7 | 17.3 | 61.0–162.0\nHeight (cm) | 175.3 | 7.1 | 152.0–197.0\nBMI (kg/m2) | 29.8 | 5.1 | 20.1–52.1\nAge (years) | 48.8 | 12.5 | 20.0–85.0\nCervical perimeter (cm) | 42.2 | 3.2 | 34.0–53.0\nAHI apnea–hypopnea index, BMI body mass index, SD standard deviation\nThe diagnosis for each patient was confirmed by specialized medical staff through PSG, obtaining the AHI on the basis of the number of apnea and hypopnea episodes. Patients’ speech was recorded prior to PSG. All speakers read the same 4 sentences and sustained a complete set of Spanish vowels [a, e, i, o, u]. Sentences were designed to cover relevant linguistic/phonetic contexts related to peculiarities in OSA voices (see details in [12]). Recordings were made in a room with low noise, with patients in an upright (seated) position. The recording equipment was a standard laptop with a USB SP500 Plantronics headset. Speech was recorded at a sampling frequency of 50 kHz and encoded in 16 bits. Afterwards, it was down-sampled to 16 kHz before processing. 
Problem formulation

Our major aim is to test whether state-of-the-art speaker voice characterization technologies that have already been demonstrated to be effective in estimating speaker characteristics such as age [16] and height [17] could also be effective in estimating the AHI. It is important to point out that, besides predicting the AHI from speech samples, we also tested the performance of these same techniques when estimating the other clinical variables (age, height, weight, BMI and CP). We think this evaluation is relevant for two main reasons: firstly, to validate our methodology, by comparing our results in estimating age, height and BMI with those previously reported for general speaker populations (such as [16, 17, 20]); and secondly, to identify correlations between speech and other clinical variables that can increase the likelihood of false discovery based on spurious or indirect associations [18] between these clinical variables and the AHI. This second aspect will be relevant when presenting the critical review of previous approaches to OSA assessment in the section "Discussion".
Consequently, our study can be formulated as a machine learning regression problem as follows: we are given a training dataset of speech recordings and clinical variable information S_tr^j = {(x_n, y_n^j)}, n = 1, …, N, where x_n ∈ ℝ^p denotes the acoustic representation of the nth utterance in the training dataset and y_n^j ∈ ℝ denotes the corresponding value of the clinical variable for the speaker of that utterance; j indexes a particular variable in the set of V clinical variables (j = 1, 2, …, V; i.e., AHI, age, height, weight, BMI, CP).
The goal is to design an estimator function f^j for each clinical variable, such that for an utterance of an unseen test speaker x_tst, the difference between the estimated value of that clinical variable, ŷ^j = f^j(x_tst), and its actual value y^j is minimized. Once this regression problem has been formulated, two main issues must be addressed: (1) what acoustic representation and model will be used for a given utterance x_n, and (2) how to design the regression or estimator functions f^j.
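The estimator functions f^j are later implemented in this study with support vector regression (LIBSVM; see Table 2). As a rough sketch of the same idea, not the authors' exact setup, here is a grid-searched ε-SVR in scikit-learn on synthetic stand-in data:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Toy stand-in for the real data: x_n are acoustic feature vectors,
# y_n the clinical variable (e.g., AHI) for the speaker of utterance n.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))             # N utterances, p-dimensional
y = X[:, 0] * 5.0 + rng.normal(size=200)   # synthetic target

# Epsilon-insensitive SVR with a grid search over model complexity C
# and the insensitive zone epsilon (cf. the LIBSVM settings in Table 2;
# the grid here is deliberately small for illustration).
param_grid = {
    "svr__C": [2.0**k for k in (-5, 0, 5)],
    "svr__epsilon": [2.0**k for k in (-3, 0, 3)],
}
pipe = make_pipeline(StandardScaler(), SVR(kernel="linear"))
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
y_hat = search.predict(X[:5])
print(y_hat.shape)  # (5,)
```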
Acoustic representation of OSA-related sounds

Besides the linguistic message, speech signals carry important information about speakers, mainly related to their particular physical or physiological characteristics. This has been the basis for the development of automatic speaker recognition systems, automatic detection of vocal fold pathologies, recognition of emotional/psychological state, and estimation of age and weight (or BMI). In a similar vein, the specific characteristics of the UA in OSA individuals have led to the hypothesis that OSA can be detected through automatic acoustic analysis of speech sounds. To represent OSA-specific acoustic information, the speech records in our database include read speech of four sentences that were designed to contain specific distinctive sounds for discriminating between healthy and OSA speakers. The design of these four sentences followed the reference research in [5] and [6], where Fox et al. identify a set of speech descriptors in OSA speakers related to articulation, phonation and resonance. For example, the third sentence in our corpus consists mostly of nasal sounds, intended to detect the expected resonance anomalies in OSA individuals (the details of the design criteria for this corpus can be found in [12]). Additionally, to exclude any acoustic factor not related to OSA discrimination, the speech signals were acquired in a room with low noise using a single high-quality microphone (USB SP500 Plantronics headset). Once we had a set of speech utterances containing OSA-specific sounds, collected in a controlled recording environment, the speech signals were processed at a sampling frequency of 16 kHz to obtain a precise wide-band representation of all the relevant information in the speech spectrum. As Fig. 1 illustrates, each sentence was analyzed in speech segments (i.e., frames) of 20 ms duration with an overlap of 10 ms; each speech frame was multiplied by a Hamming window.
The spectral envelope of each frame was then represented using mel-frequency cepstral coefficients (MFCCs). MFCCs provide a spectral envelope representation of speech sounds extensively used in automatic speech and speaker recognition [21, 22], pathological voice detection, and age, height and BMI estimation [16, 17, 20]. MFCCs have also been used in previous research on speech-based OSA detection [9–11] and [14] (Fig. 1: Acoustic representation of utterances). In the MFCC representation, the magnitude spectrum of each speech frame is first obtained as the absolute value of its DFT (discrete Fourier transform). Then a filterbank of triangular filters, spaced on a frequency scale based on the human auditory system (i.e., the mel scale), is used to obtain a vector with the log-energies of each filter (see Fig. 1). Finally, a discrete cosine transform (DCT) is applied to the vector of log filterbank energies to produce a compact set of decorrelated MFCC coefficients. Additionally, in order to represent spectral change over time, the MFCCs are extended with their first-order time derivatives (velocity or delta ΔMFCCs); more details on MFCC parametrization can be found in [23]. Thus, in our experiments, the acoustic information in each speech frame i is represented by a D-dimensional vector O_i, called the observation vector, that includes 19 MFCCs + 19 ΔMFCCs, so D = 38. The extraction of MFCCs was performed using the HTK software (htk.eng.cam.ac.uk); see Table 2 for details on the DFT order, number of triangular filters, etc.

Table 2 Implementation tools

Tool (a)                 | Function name   | Function description              | Parameters
HTK                      | HCopy           | Extract the MFCC coefficients     | No. DFT bins = 512; No. filters = 26; No. MFCC coeff. = 19; No. ΔMFCC coeff. = 19
MSR Identity ToolBox (b) | GMM_em          | GMM–UBM training                  | No. mixtures = 512; No. of expectation–maximization iterations = 10; Feature sub-sampling factor = 1
MSR Identity ToolBox (b) | MapAdapt        | GMM adaptation                    | Adaptation algorithm = MAP; No. mixtures = 512; MAP relevance factor = 10
MSR Identity ToolBox (b) | Train_tv_space  | Total variability matrix training | Dimension of total variability matrix = {400, 300, 200, 100, 50, 30}; Number of iterations = 5
MSR Identity ToolBox (b) | Extract_ivector | I-vector extraction               | Dimension of total variability matrix = {400, 300, 200, 100, 50, 30}
LIBSVM                   | SVM_train       | SVR training                      | Grid search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7
LIBSVM                   | SVM_predict     | SVR regression                    | Grid search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7

(a) All the implementation tools were used under the Linux Ubuntu 12.04 LTS operating system
(b) Executed on Matlab 2014a
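A minimal sketch of the MFCC pipeline described above (20 ms Hamming-windowed frames with a 10 ms hop, |DFT|, mel filterbank log-energies, then a DCT). This is an illustrative re-implementation, not the HTK HCopy code actually used, and the ΔMFCC derivatives are omitted for brevity:

```python
import numpy as np
from numpy.fft import rfft
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0**(m / 2595.0) - 1.0)

def mfcc(signal, fs=16000, frame_ms=20, hop_ms=10,
         n_fft=512, n_filters=26, n_ceps=19):
    """Frame-based MFCC extraction following the steps in the text."""
    frame_len = fs * frame_ms // 1000
    hop = fs * hop_ms // 1000
    window = np.hamming(frame_len)

    # Triangular filters spaced on the mel scale between 0 and fs/2.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, mid):
            fbank[i, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fbank[i, k] = (hi - k) / max(hi - mid, 1)

    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(rfft(frame, n_fft))        # magnitude spectrum
        energies = np.log(fbank @ spectrum + 1e-10)  # log filterbank energies
        # DCT of the log-energies; keep 19 coefficients, skipping c0.
        frames.append(dct(energies, norm="ortho")[1:n_ceps + 1])
    return np.array(frames)  # shape (NF, 19)

sig = np.random.randn(16000)  # 1 s of audio at 16 kHz
C = mfcc(sig)
print(C.shape)  # (99, 19)
```

In the full pipeline each row would be extended with its delta coefficients (e.g., a finite difference along the time axis) to give the D = 38 observation vectors used in the paper.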
Utterance modelling

Due to the natural variability of speech production, different utterances of the same sentence will have variable durations and will thus be represented by a variable-length sequence O of observation vectors:

O = [O_1, O_2, …, O_NF]    (1)

where O_i is the D-dimensional observation vector at frame i and NF is the number of frames, which varies with the duration of each reading of the sentence. This variable-length sequence cannot be the input to a regression algorithm such as support vector regression (SVR), which will be the estimator function f^j used to predict y^j (y^j being the AHI or one of the other clinical variables: age, height, weight, BMI and CP). Consequently, the sequence of observations O must be mapped into a vector of fixed dimension. In our method, this is done using two modeling approaches, referred to as supervectors and i-vectors, which have been successfully applied to speaker recognition [24], language recognition [25], speaker age estimation [16], speaker height estimation [17] and accent recognition [26]. We think that their success in those challenging tasks, where speech contains significant sources of interfering intra-speaker variability (speaker weight, height, etc.), is a reasonable guarantee for exploring their use in estimating the AHI and the other clinical variables in our OSA population. It is also important to point out that we have avoided the use of feature selection procedures because, as discussed in the section "Discussion", we believe they have led to over-fitted results in several previous studies in this field.
It is for that reason that in our approach we evaluate both the high-dimensional acoustic modelling provided by supervectors and the low-dimensional i-vector representation based on subspace projection. These two techniques are described below.
Supervectors

Both the supervector and i-vector modelling approaches start by fitting a Gaussian mixture model (GMM) to the sequence of observations O. A GMM (see [23, 27]) consists of a weighted sum of K D-dimensional Gaussian components, where, in our case, D is the dimension of the MFCC observation vectors. Each ith Gaussian component is represented by a mean vector (µ_i) of dimension D and a D × D covariance matrix (Σ_i). Due to limited data, it is not possible to accurately fit a separate GMM to a short utterance, especially when using a high number of Gaussian components (i.e., large K). Consequently, GMMs are obtained through adaptation from a universal background model (UBM), which is also a GMM, trained on a large database containing speech from a large number of different speakers [23]. Therefore, as Fig. 2 illustrates, the variable-length sequence O of vectors of a given utterance is used to adapt the GMM–UBM, generating an adapted GMM in which only the means (µ_i) are adapted (Fig. 2: GMM and supervector modelling).

In the supervector modelling approach [21], the adapted GMM means (µ_i) are extracted and concatenated (one after the other) into a single high-dimensional vector s called the GMM mean supervector:

s = [µ_1; µ_2; …; µ_K]    (2)

The resulting fixed-length supervector, of size K × D, is now suitable as input to a regression algorithm, such as SVR, to predict the AHI and the other clinical variables. As summarized in Table 2, in our experiments GMM–UBM training, GMM adaptation and supervector generation were done using the MSR Identity ToolBox for Matlab™ [28] running on Matlab 2014a under Linux Ubuntu 12.04 LTS. As also shown in Table 2, to obtain a precise acoustic representation of each sentence, a GMM with K = 512 components was used, resulting in a high-dimensional supervector of size K × D = 512 × 38 = 19,456 (D = 38 is the dimension of the MFCC observation vectors O_i). As mentioned before, training the GMM–UBM requires a considerable amount of development data to represent a global acoustic space. Therefore, for development we used several large databases containing microphonic speech sampled at 16 kHz, covering a wide range of phonetic variability in continuous/read Spanish speech (see, for example, ALBAYZIN [29], one of the databases we used).
The whole development dataset includes 25,451 speech recordings from 940 speakers. Among them, 126 speakers with a confirmed OSA diagnosis, not used for testing, were also included to reflect OSA-specific characteristics of speech.
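The mean-only MAP adaptation and supervector concatenation described above can be sketched as follows. This is an illustrative version using scikit-learn's GaussianMixture as the UBM, with toy dimensions instead of the paper's K = 512, D = 38:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_supervector(ubm: GaussianMixture, X: np.ndarray,
                          relevance: float = 10.0) -> np.ndarray:
    """Relevance-MAP adaptation of the UBM means to one utterance X
    (NF x D frames), followed by concatenation into a K*D supervector."""
    gamma = ubm.predict_proba(X)                 # (NF, K) responsibilities
    n = gamma.sum(axis=0)                        # soft counts per component
    # First-order sufficient statistics (posterior-weighted frame means).
    Ex = (gamma.T @ X) / np.maximum(n, 1e-10)[:, None]
    alpha = n / (n + relevance)                  # adaptation coefficients
    adapted_means = alpha[:, None] * Ex + (1.0 - alpha)[:, None] * ubm.means_
    return adapted_means.reshape(-1)             # supervector of size K*D

# Toy example with K = 4 components in D = 3 dimensions
# (the paper uses K = 512 and D = 38, giving 19,456 dimensions).
rng = np.random.default_rng(1)
ubm = GaussianMixture(n_components=4, covariance_type="diag", random_state=1)
ubm.fit(rng.normal(size=(500, 3)))               # stand-in "development data"
s = map_adapt_supervector(ubm, rng.normal(size=(80, 3)))
print(s.shape)  # (12,)
```

The relevance factor of 10 mirrors the MAP setting listed in Table 2; components with few assigned frames stay close to the UBM means, while well-observed components move toward the utterance statistics.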
I-vectors

Beyond the success of high-dimensional supervectors, a new paradigm called the i-vector has been successfully adopted and is now widely used by the speaker recognition community [24]. The i-vector model relies on the definition of a low-dimensional total variability subspace and can be described in the GMM mean supervector space by:

$$\mathbf{s} = \mathbf{m} + \mathbf{T}\mathbf{w} \quad (3)$$

where s is the GMM mean supervector representing an utterance and m is the mean supervector obtained from the GMM–UBM, which can be considered a global acoustic representation independent of utterance, speaker, health and clinical condition. T is a rectangular low-rank matrix representing the primary directions of total acoustic variability observed in a large development speech database, and w is a low-dimensional random vector with a standard normal distribution. In short, Eq. (3) can be viewed as a simple factor analysis projecting the high-dimensional (on the order of thousands) supervector s onto the low-dimensional (on the order of hundreds) factor vector, identity vector or i-vector w. T is named the total variability matrix, and the components of the i-vector w are the total factors that represent the acoustic information in the reduced total variability space. Compared to supervectors, total variability modelling using i-vectors has the advantage of projecting the high-dimensional GMM supervectors into a low-dimensional subspace where most of the speaker-specific variability is captured. Automatic speaker recognition systems typically use i-vectors with a dimensionality of 400.
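Eq. (3) can be illustrated with a toy projection. A real i-vector extractor computes the posterior mean of w from Baum–Welch statistics [30]; this sketch instead recovers w by least squares in a small noise-free example, purely to show the dimensionality reduction from supervector space to total variability space.

```python
# Toy illustration of Eq. (3), s = m + T w: projecting a supervector onto a
# low-dimensional total variability subspace. Dimensions are reduced here;
# the paper uses supervectors of size 19,456 and i-vectors of up to 400.
import numpy as np

rng = np.random.default_rng(1)
sv_dim, iv_dim = 200, 10                      # toy sizes

m = rng.standard_normal(sv_dim)               # UBM mean supervector
T = rng.standard_normal((sv_dim, iv_dim))     # total variability matrix
w_true = rng.standard_normal(iv_dim)          # latent total factors
s = m + T @ w_true                            # utterance supervector

# Least-squares estimate of the i-vector (exact in this noise-free toy).
w_hat, *_ = np.linalg.lstsq(T, s - m, rcond=None)
print(np.allclose(w_hat, w_true))             # True
```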
In our tests the total variability matrix T was estimated using the same development data described above for training the GMM–UBM, and we evaluated subspace projections for i-vectors with dimensions ranging from 30 to 400. Efficient procedures for training T and for MAP adaptation of i-vectors can be found in [30]. In our tests we used the implementation provided by the MSR Identity Toolbox for Matlab™ [28] running on Matlab 2014a on Ubuntu 12.04 LTS (see the details in Table 2).
Regression using SVR

Once an utterance is represented by a fixed-length vector, supervector or i-vector, SVR is employed as the estimator function $f^j$ to predict $y^j$, i.e., the AHI and the other clinical variables (age, height, weight, BMI and CP). SVR is a function approximation approach developed as a regression version of the widely known Support Vector Machine (SVM) classifier [31]. When using SVR, the input variable (i-vector/supervector) is first mapped onto a high-dimensional feature space by a non-linear mapping. The mapping is performed by the kernel function.
The kernel yields the new high-dimensional features through a similarity measure between points in the original feature space. Once the mapping onto the high-dimensional space is done, a linear model is constructed in this feature space by finding the optimal hyperplane such that most of the training samples lie within an ε-margin (ε-insensitive zone) around this hyperplane [31]. The generalization performance of SVR depends on a good setting of two hyperparameters (ε, C) and of the kernel parameters. The parameter ε controls the width of the ε-insensitive zone used to fit the training data.
The width of the ε-insensitive zone determines the level of accuracy of the approximation function, and it relies entirely on the target values of the training set. The parameter C determines the trade-off between the model complexity, controlled by ε, and the degree to which deviations larger than the ε-insensitive zone are tolerated in the optimization of the hyperplane. Finally, the kernel parameters depend on the type of similarity measure used. In this paper, SVR is applied to estimate the clinical variables, and linear and radial basis function (RBF) kernels were tested to approximate the estimator function $f^j$. In our study, both linear and RBF kernels were tested for i-vectors, but only linear kernels were considered for supervectors because their large dimensionality makes it inadvisable to map them into an even higher-dimensional space. SVR training and testing were implemented using LIBSVM [32] running on Linux Ubuntu 12.04 LTS.
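A minimal sketch of ε-SVR on fixed-length utterance vectors follows (random stand-ins for i-vectors; scikit-learn's SVR wraps LIBSVM, which the paper uses directly). The data and the C and ε values are illustrative, not the grid-searched ones.

```python
# Minimal sketch of epsilon-SVR predicting a clinical variable such as the AHI
# from fixed-length utterance vectors (synthetic stand-ins for i-vectors).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 50))               # 120 utterances, 50-dim vectors
y = X[:, 0] * 5 + 20 + rng.normal(0, 0.5, 120)   # synthetic AHI-like target

# Linear kernel (the only choice used for high-dimensional supervectors);
# in the paper C and epsilon are chosen by grid search rather than fixed.
model = SVR(kernel='linear', C=1.0, epsilon=0.1).fit(X, y)
pred = model.predict(X)
mae = np.mean(np.abs(pred - y))                  # mean absolute error
print(round(float(mae), 2))
```

Swapping `kernel='linear'` for `kernel='rbf'` gives the RBF variant tested for i-vectors.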
Table 2 describes the details of use of this software, together with all the parameters used in our tests.
Performance metrics

To evaluate the proposed method of using supervectors or i-vectors to predict or estimate the AHI and the other clinical variables (age, height, weight, BMI and CP), we measure both the mean absolute error (MAE) and the Pearson correlation coefficient (ρ). The MAE provides the average absolute difference between actual and estimated values, while ρ evaluates their linear relationship. As we will see in the section “Results”, the correlation coefficients between estimated and actual AHI values were often very small. Therefore, we considered it informative to report p-values for the correlation coefficients, i.e., the probability that they are in fact zero (null hypothesis). Although the main objective of our method is to evaluate the capability of speech to predict or estimate the AHI, in the section “Discussion” we also review previous research aiming to classify or discriminate between subjects with OSA (AHI ≥ 10) and without OSA (defined by an AHI < 10). Therefore, we performed some additional tests using our estimated AHI values to classify subjects as OSA (predicted AHI ≥ 10) and non-OSA (predicted AHI < 10). In these classification tests, performance was measured in terms of sensitivity, specificity and the area under the ROC curve.
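These metrics can be sketched as follows; the AHI values below are synthetic, and the AHI ≥ 10 threshold follows the definition above.

```python
# Sketch of the evaluation metrics: MAE, Pearson correlation (with p-value),
# and OSA classification metrics obtained by thresholding AHI at 10.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
ahi_true = rng.uniform(0, 60, 100)                # synthetic ground-truth AHI
ahi_pred = ahi_true + rng.normal(0, 8, 100)       # imperfect estimates

mae = np.mean(np.abs(ahi_pred - ahi_true))        # average absolute error
rho, p_value = pearsonr(ahi_true, ahi_pred)       # linear association + H0 test

osa_true = ahi_true >= 10                         # OSA defined as AHI >= 10
osa_pred = ahi_pred >= 10
sensitivity = np.mean(osa_pred[osa_true])         # true positive rate
specificity = np.mean(~osa_pred[~osa_true])       # true negative rate
auc = roc_auc_score(osa_true, ahi_pred)           # raw estimate as ranking score
print(mae, rho, p_value, sensitivity, specificity, auc)
```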
k-fold cross-validation and grid-search

In order to train the SVR regression models (functions $f^j$) and predict the $y^j$ variables (the AHI and the other clinical variables), we employed k-fold cross-validation and grid search to find the optimal SVR parameters. The whole process is presented in Fig. 3. First, to guarantee that all speakers are involved in the test, the dataset is split into k equal-sized subsamples with no speakers in common. Then, of the k subsamples, a single subsample is retained for testing and the remaining k − 1 subsamples are used as the training dataset. Results were reported for k = 10.
Fig. 3 Representation of k-fold cross-validation and grid search for SVR regression and prediction of clinical variables

Furthermore, as Fig. 3 also illustrates, in each cross-validation loop the optimal hyperparameters (ε, C) of the SVR models are obtained through grid search using a fivefold cross-validation on the training data. The ranges for this grid search are detailed in Table 2.
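A sketch of this protocol with scikit-learn: GroupKFold keeps utterances of the same speaker in a single fold for the outer split, and GridSearchCV performs the inner fivefold search over (C, ε). The data are synthetic and the grid is truncated; note that the inner folds here are plain (not speaker-disjoint), a simplification of the paper's setup.

```python
# Sketch of the nested evaluation: a 10-fold outer split with no speakers
# shared between folds, and an inner fivefold grid search over (C, epsilon).
import numpy as np
from sklearn.model_selection import GroupKFold, GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n_utt, dim = 200, 20
X = rng.standard_normal((n_utt, dim))
y = X[:, 0] * 10 + 25                      # synthetic clinical target
speakers = np.repeat(np.arange(40), 5)     # 40 speakers x 5 utterances each

# Truncated grid; Table 2 searches much wider power-of-two ranges.
param_grid = {'C': [2.0**p for p in (-3, 0, 3)],
              'epsilon': [2.0**p for p in (-3, 0, 3)]}

outer = GroupKFold(n_splits=10)            # speaker-disjoint outer folds
errors = []
for train_idx, test_idx in outer.split(X, y, groups=speakers):
    search = GridSearchCV(SVR(kernel='linear'), param_grid, cv=5,
                          scoring='neg_mean_absolute_error')
    search.fit(X[train_idx], y[train_idx])         # inner grid search
    pred = search.predict(X[test_idx])             # best model, unseen speakers
    errors.append(np.mean(np.abs(pred - y[test_idx])))
print(round(float(np.mean(errors)), 2))            # cross-validated MAE
```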
Subjects and experimental design

The population under study is composed of 426 male subjects presenting symptoms of OSA, such as excessive daytime sleepiness, snoring, choking during sleep, or somnolent driving, during a preliminary interview with a pneumonologist. Several clinical variables were collected for each individual: age, height, weight, body mass index (BMI, defined as the weight in kilograms divided by the square of the height in meters, kg/m2) and cervical perimeter (CP, a measure of the neck circumference, in centimeters, at the level of the cricothyroid membrane). This database has been recorded at the Hospital Quirón de Málaga (Spain) since 2010 and is, to the best of our knowledge, the largest database used in this kind of study. The database contains 597 speakers: 426 males and 171 females. Our study had no impact on the diagnosis process of patients or on their possible medical treatment; therefore, the Hospital did not consider it necessary to seek approval from its ethics committee. Before starting the study, participants were notified about the research and their informed consent was obtained.
Statistics of the clinical variables for the male population in this study are summarized in Table 1.

Table 1 Descriptive statistics on the 426 male subjects

Clinical variable         Mean    SD     Range
AHI                       22.5    18.1   0.0–102.0
Weight (kg)               91.7    17.3   61.0–162.0
Height (cm)               175.3   7.1    152.0–197.0
BMI (kg/m2)               29.8    5.1    20.1–52.1
Age (years)               48.8    12.5   20.0–85.0
Cervical perimeter (cm)   42.2    3.2    34.0–53.0

AHI apnea–hypopnea index, BMI body mass index, SD standard deviation

The diagnosis for each patient was confirmed by specialized medical staff through PSG, obtaining the AHI on the basis of the number of apnea and hypopnea episodes. Patients' speech was recorded prior to PSG. All speakers read the same four sentences and sustained a complete set of Spanish vowels (i.e., a, o, u). The sentences were designed to cover relevant linguistic/phonetic contexts related to peculiarities of OSA voices (see details in [12]). Recordings were made in a room with low noise, with patients in an upright, seated position. The recording equipment was a standard laptop with a USB SP500 Plantronics headset. Speech was recorded at a sampling frequency of 50 kHz and encoded in 16 bits; it was afterwards down-sampled to 16 kHz before processing.

Problem formulation

Our major aim is to test whether state-of-the-art speaker voice characterization technologies, which have already demonstrated their effectiveness in estimating speaker characteristics such as age [16] and height [17], could also be effective in estimating the AHI. It is important to point out that, besides predicting the AHI from speech samples, we also tested the performance of these same techniques when estimating the other clinical variables (age, height, weight, BMI and CP).
We think this evaluation is relevant for two main reasons: first, to validate our methodology by comparing our results in estimating age, height and BMI with those previously reported for general speaker populations (such as [16, 17, 20]); and second, to identify correlations between speech and other clinical variables that can increase the likelihood of false discovery based on spurious or indirect associations [18] between these clinical variables and the AHI. This second aspect will be relevant when presenting the critical review of previous approaches to OSA assessment in the section “Discussion”. Consequently, our study can be formulated as a machine learning regression problem as follows: we are given a training dataset of speech recordings and clinical variable information $\mathbf{S}_{\text{tr}}^{j} = \{\mathbf{x}_n, y_n^j\}_{n=1}^N$, where $\mathbf{x}_n \in \Re^p$ denotes the acoustic representation of the nth utterance of the training dataset and $y_n^j \in \Re$ denotes the corresponding value of the clinical variable for the speaker of that utterance; j corresponds to a particular variable in the set of V clinical variables (j = 1, 2, …, V; i.e., AHI, age,
height, weight, BMI, CP). The goal is to design an estimator function $f^j$ for each clinical variable such that, for an utterance of an unseen testing speaker $\mathbf{x}_{\text{tst}}$, the difference between the estimated value of that particular clinical variable $\hat{y}^j = f^j(\mathbf{x}_{\text{tst}})$ and its actual value $y^j$ is minimized. Once this regression problem has been formulated, two main issues must be addressed: (1) what acoustic representation and model will be used for a given utterance $\mathbf{x}_n$, and (2) how to design the regression or estimator functions $f^j$.

Acoustic representation of OSA-related sounds

Besides the linguistic message, speech signals carry important information about speakers, mainly related to their particular physical or physiological characteristics.
This has been the basis for the development of automatic speaker recognition systems, automatic detection of vocal fold pathologies, emotional/psychological state recognition, as well as age and weight (or BMI) estimation. In a similar vein, the specific characteristics of the UA in OSA individuals have led to the hypothesis that OSA can be detected through automatic acoustic analysis of speech sounds. To represent OSA-specific acoustic information, the speech records in our database include read speech of four sentences that were designed to contain specific distinctive sounds to discriminate between healthy and OSA speakers. The design of these four sentences was done according to the reference research in [5] and [6], where Fox et al. identify a set of speech descriptors in OSA speakers related to articulation, phonation and resonance. For example, the third sentence in our corpus includes mostly nasal sounds to detect the expected resonance anomalies in OSA individuals (the details on the design criteria for this corpus can be found in [12]). Additionally, to exclude any acoustic factor not related to OSA discrimination, the speech signal acquisition was done in a room with low noise using a single high-quality microphone (USB SP500 Plantronics headset). Once we had a set of speech utterances containing OSA-specific sounds, collected under a controlled recording environment, speech signals were processed at a sampling frequency of 16 kHz to obtain a precise wide-band representation of all the relevant information in the speech spectrum. As Fig. 1 illustrates, each sentence was analyzed in speech segments (i.e., frames) of 20 ms duration with an overlap of 10 ms; each speech frame was multiplied by a Hamming window. The spectral envelope of each frame was then represented using mel-frequency cepstral coefficients (MFCCs).
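As a rough illustration of this frame-based front-end (20 ms Hamming frames with 10 ms hop, DFT magnitude, triangular mel filterbank, log, DCT, plus first-order deltas, detailed next), a simplified NumPy sketch; filter normalization and the delta computation are simplified relative to HTK's HCopy.

```python
# Minimal MFCC front-end sketch: framing, Hamming window, DFT magnitude,
# triangular mel filterbank, log, DCT, and first-order deltas (D = 38).
import numpy as np
from scipy.fftpack import dct

def mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def imel(m): return 700.0 * (10.0**(m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, n_filt=26, n_ceps=19):
    frame, hop = int(0.020 * sr), int(0.010 * sr)         # 20 ms / 10 ms
    n_frames = 1 + (len(signal) - frame) // hop
    idx = np.arange(frame) + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame)              # windowed frames
    mag = np.abs(np.fft.rfft(frames, n_fft))              # spectrum magnitude
    # Triangular filters spaced uniformly on the mel scale.
    edges = imel(np.linspace(mel(0), mel(sr / 2), n_filt + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fbank = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(n_filt):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logE = np.log(mag @ fbank.T + 1e-10)                  # log filterbank energies
    ceps = dct(logE, type=2, axis=1, norm='ortho')[:, 1:n_ceps + 1]
    deltas = np.gradient(ceps, axis=0)                    # simple delta-MFCCs
    return np.hstack([ceps, deltas])                      # (n_frames, 38)

O = mfcc(np.random.default_rng(0).standard_normal(16000))  # 1 s of noise
print(O.shape)
```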
MFCCs provide a spectral envelope representation of speech sounds extensively used in automatic speech and speaker recognition [21, 22], pathological voice detection, and age, height and BMI estimation [16, 17, 20]. MFCCs have also been used in previous research on speech-based OSA detection [9–11] and [14].

Fig. 1 Acoustic representation of utterances

In the MFCC representation, the spectrum magnitude of each speech frame is first obtained as the absolute value of its DFT (discrete Fourier transform). Then a filterbank of triangular filters, spaced on a frequency scale based on the human perception system (i.e., the Mel scale), is used to obtain a vector with the log-energies of each filter (see Fig. 1). Finally, a discrete cosine transform (DCT) is applied over the vector of log filterbank energies to produce a compact set of decorrelated MFCC coefficients. Additionally, in order to represent the spectral change over time, the MFCCs are extended with their first-order (velocity or delta ΔMFCCs) time derivatives (more details on MFCC parametrization can be found in [23]). Thus, in our experiments, in each speech frame i the acoustic information is represented by a D-dimensional vector Oi, called the observation vector, that includes 19 MFCCs + 19 ΔMFCCs, so D = 38. The extraction of MFCCs was performed using the HTK software (htk.eng.cam.ac.uk); see Table 2 for the details on the DFT order, number of triangular filters, etc.

Table 2 Implementation tools

Tool(a)                  Function name    Function description               Parameters
HTK                      HCopy            Extract the MFCC coefficients      No. DFT bins = 512; No. filters = 26; No. MFCC coeff. = 19; No. ΔMFCC coeff. = 19
MSR Identity ToolBox(b)  GMM_em           GMM–UBM training                   No. mixtures = 512; No. of expectation-maximization iterations = 10; Feature sub-sampling factor = 1
                         MapAdapt         GMM adaptation                     Adaptation algorithm = MAP; No. mixtures = 512; MAP relevance factor = 10
                         Train_tv_space   Total variability matrix training  Dimension of total variability matrix = {400, 300, 200, 100, 50, 30}; Number of iterations = 5
                         Extract_ivector  I-vector extraction                Dimension of total variability matrix = {400, 300, 200, 100, 50, 30}
LIBSVM                   SVM_train        SVR training                       Grid search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7
                         SVM_predict      SVR regression                     Grid search parameters: C (model complexity) = −20:20; ε (insensitive zone) = 2^−7:2^7

(a) All the implementation tools were used under the Linux Ubuntu 12.04 LTS operating system
(b) Executed on Matlab 2014a

Utterance modelling

Due to the natural variability in speech production, different utterances of the same sentence will exhibit variable duration and will thus be represented by a variable-length sequence O of observation vectors:

$$\underline{\mathbf{O}} = [\mathbf{O}_1, \mathbf{O}_2, \ldots, \mathbf{O}_{NF}] \quad (1)$$

where Oi is the D-dimensional observation vector at frame i and NF is the number of frames, which varies due to the different durations when reading the
same sentence. This variable-length sequence cannot be the input to a regression algorithm such as support vector regression (SVR), which will be the estimator function f^j used to predict y^j (y^j being the AHI or one of the other clinical variables: age, height, weight, BMI and CP). Consequently, the sequence of observations O must be mapped into a vector of fixed dimension. In our method this has been done using two modelling approaches, referred to as supervectors and i-vectors, which have been successfully applied to speaker recognition [24], language recognition [25], speaker age estimation [16], speaker height estimation [17] and accent recognition [26]. We think that their success in those challenging tasks, where speech contains significant sources of interfering intra-speaker variability (speaker weight, height, etc.), is a reasonable guarantee for exploring their use in estimating the AHI and other clinical variables in our OSA population. It is also important to point out that we have avoided the use of feature selection procedures because, as will be commented in the section “Discussion”, we believe this has led to over-fitted results in several previous studies in this field.
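Before turning to utterance-level modelling, the frame-level front-end can be made concrete. The following is a minimal NumPy sketch, not a reproduction of HTK's HCopy, of the MFCC pipeline described above (DFT magnitude → triangular Mel filterbank → log energies → DCT → deltas), using the Table 2 settings (512 DFT bins, 26 filters, 19 coefficients); it deliberately omits HTK details such as pre-emphasis, windowing and liftering, and the random frames stand in for real speech.

```python
import numpy as np

def mel(f):
    """Convert frequency in Hz to the Mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    """Triangular filters equally spaced on the Mel scale (rows: filters)."""
    mel_pts = np.linspace(mel(0.0), mel(sr / 2), n_filters + 2)
    hz_pts = 700.0 * (10.0 ** (mel_pts / 2595.0) - 1.0)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):                      # rising edge
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling edge
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(frame, fb, n_coeff=19):
    """|DFT| -> log Mel filterbank energies -> DCT-II -> MFCCs (c1..c19)."""
    spectrum = np.abs(np.fft.rfft(frame, n=2 * (fb.shape[1] - 1)))
    log_e = np.log(fb @ spectrum + 1e-10)
    n = fb.shape[0]
    dct = np.cos(np.pi / n * np.outer(np.arange(1, n_coeff + 1),
                                      np.arange(n) + 0.5))
    return dct @ log_e

fb = mel_filterbank()
rng = np.random.default_rng(0)
frames = rng.standard_normal((100, 400))     # NF = 100 frames of 25 ms at 16 kHz
C = np.array([mfcc(f, fb) for f in frames])  # (NF, 19) static MFCCs
delta = np.gradient(C, axis=0)               # first-order (velocity) ΔMFCCs
O = np.hstack([C, delta])                    # observation vectors O_i, D = 38
```

Each row of `O` corresponds to one observation vector O_i of dimension D = 38 (19 MFCCs + 19 ΔMFCCs), as used throughout the experiments.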
It is for that reason that in our approach we evaluate both the high-dimensional acoustic modelling provided by supervectors and the low-dimensional i-vector representation based on subspace projection. These two techniques are described below.

Supervectors: Both the supervector and the i-vector modelling approaches start by fitting a Gaussian mixture model (GMM) to the sequence of observations O. A GMM (see [23, 27]) consists of a weighted sum of K D-dimensional Gaussian components, where, in our case, D is the dimension of the MFCC observation vectors. Each i-th Gaussian component is represented by a mean vector µi of dimension D and a D × D covariance matrix Σi. Due to limited data, it is not possible to accurately fit a separate GMM to a short utterance, especially when using a high number of Gaussian components (i.e., large K). Consequently, GMMs are obtained using adaptation techniques from a universal background model (UBM), which is also a GMM, trained on a large database containing speech from a large number of different speakers [23]. Therefore, as Fig. 2 illustrates, the variable-length sequence O of vectors of a given utterance is used to adapt the GMM–UBM, generating an adapted GMM in which only the means µi are adapted.

Fig. 2 GMM and supervector modelling

In the supervector modelling approach [21], the adapted GMM means µi are extracted and concatenated (appending one after the other) into a single high-dimensional vector s called the GMM mean supervector:

(2)  s = [µ1; µ2; …; µK]

The resulting fixed-length supervector, of size K × D, is now suitable for use as input to a regression algorithm, such as SVR, to predict the AHI and the other clinical variables. As summarized in Table 2, in our experiments GMM–UBM training, GMM adaptation and supervector generation were done using the MSR Identity ToolBox for Matlab™ [28] running over Matlab 2014a on Linux Ubuntu 12.04 LTS. As also shown in Table 2, to have a precise acoustic representation for each sentence a GMM with K = 512 components was used, resulting in a high-dimensional supervector of size K × D = 512 × 38 = 19,456 (D = 38 being the dimension of the MFCC observation vectors Oi). As mentioned before, training the GMM–UBM requires a considerable amount of development data to represent a global acoustic space. Therefore, for development we used several large databases containing microphone speech sampled at 16 kHz, covering a wide range of phonetic variability from continuous/read Spanish speech (for example ALBAYZIN [29], one of the databases we used). The whole development dataset includes 25,451 speech recordings from 940 speakers. Among them, 126 speakers with a confirmed diagnosis of OSA, and not used for tests, were also included to reflect OSA-specific characteristics of speech.
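As a concrete illustration of the adaptation step, the sketch below performs relevance-MAP adaptation of the UBM means on one utterance and stacks them into a supervector (Eq. 2). It is a toy NumPy version with a diagonal-covariance UBM of K = 4 components in D = 2 dimensions for readability (the paper's configuration is K = 512, D = 38 via MSR Identity ToolBox's MapAdapt with relevance factor 10); the random UBM and frames are placeholders, not trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy diagonal-covariance UBM (the paper uses K = 512, D = 38).
K, D = 4, 2
weights = np.full(K, 1.0 / K)
means = rng.standard_normal((K, D))   # UBM means mu_i
variances = np.ones((K, D))           # diagonal covariances

def posteriors(X):
    """Per-frame responsibilities gamma[t, i] = P(component i | O_t)."""
    log_p = -0.5 * ((((X[:, None, :] - means) ** 2) / variances).sum(-1)
                    + np.log(2 * np.pi * variances).sum(-1)) + np.log(weights)
    log_p -= log_p.max(axis=1, keepdims=True)   # stabilize before exp
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)

def map_adapt_means(X, relevance=10.0):
    """Relevance-MAP adaptation of the UBM means to one utterance."""
    g = posteriors(X)                               # (NF, K)
    n = g.sum(axis=0)                               # zeroth-order stats n_i
    Ex = (g.T @ X) / np.maximum(n, 1e-10)[:, None]  # first-order stats E_i[x]
    alpha = n / (n + relevance)                     # data-dependent weight
    # Components with little data stay close to the UBM mean.
    return alpha[:, None] * Ex + (1 - alpha[:, None]) * means

X = rng.standard_normal((300, D))     # one utterance: NF frames of O_t
adapted = map_adapt_means(X)
supervector = adapted.reshape(-1)     # concatenated adapted means, size K * D
```

With the paper's real configuration the same concatenation yields the 19,456-dimensional supervector used as SVR input.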
I-vectors: Beyond the success of high-dimensional supervectors, a newer paradigm called the i-vector has been successfully and widely adopted by the speaker recognition community [24]. The i-vector model relies on the definition of a low-dimensional total variability subspace and can be described in the GMM mean supervector space by:

(3)  s = m + Tw

where s is the GMM mean supervector representing an utterance and m is the mean supervector obtained from the GMM–UBM, which can be considered a global acoustic representation independent of utterance, speaker, health and clinical condition. T is a rectangular low-rank matrix representing the primary directions of total acoustic variability observed in a large development speech database, and w is a low-dimensional random vector with a standard normal distribution. In short, Eq. (3) can be viewed as a simple factor analysis projecting the high-dimensional (on the order of thousands) supervector s onto the low-dimensional (on the order of hundreds) factor vector, identity vector or i-vector w. T is named the total variability matrix, and the components of the i-vector w are the total factors that represent the acoustic information in the reduced total variability space. Compared to supervectors, total variability modelling using i-vectors has the advantage of projecting the high-dimensional GMM supervector onto a low-dimensional subspace in which most of the speaker-specific variability is captured. Automatic speaker recognition systems typically use i-vectors with a dimensionality of 400.
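The factor-analysis view of Eq. (3) can be illustrated with a deliberately simplified NumPy sketch: given a trained total variability matrix T and UBM mean supervector m (both random placeholders here), a point estimate of w is recovered by least squares. A full i-vector extractor instead weights the solution by per-component occupation counts and covariances and includes the standard-normal prior on w (the Kenny/Dehak posterior formulation cited as [30]); this block only shows the subspace-projection idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions: supervector space K*D, total-variability subspace R
# (the paper explores R in {30, 50, 100, 200, 300, 400}).
KD, R = 20, 5
m = rng.standard_normal(KD)        # UBM mean supervector (placeholder)
T = rng.standard_normal((KD, R))   # total variability matrix (trained offline)

def ivector_ls(s):
    """Least-squares point estimate of w in s = m + T w.

    Illustration only: the exact extractor uses occupation-weighted
    statistics, the UBM covariances, and a standard-normal prior on w.
    """
    w, *_ = np.linalg.lstsq(T, s - m, rcond=None)
    return w

s = m + T @ rng.standard_normal(R)  # a supervector lying in the subspace
w = ivector_ls(s)                   # its low-dimensional i-vector
```

The point of the projection is visible in the shapes: `s` lives in the 20-dimensional (really thousands-dimensional) supervector space, while `w` has only R components.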
In our tests the total variability matrix T was estimated using the same development data described above for training the GMM–UBM, and we evaluated subspace projections onto i-vectors of different dimensions ranging from 30 to 400. Efficient procedures for training T and for MAP estimation of i-vectors can be found in [30]. In our tests we used the implementation provided by the MSR Identity ToolBox for Matlab™ [28] running over Matlab 2014a on Ubuntu 12.04 LTS (see the details in Table 2).

Regression using SVR: Once an utterance is represented by a fixed-length vector, supervector or i-vector, SVR is employed as the estimator function f^j to predict y^j, i.e., the AHI and the other clinical variables (age, height, weight, BMI and CP). SVR is a function approximation approach developed as a regression version of the widely known support vector machine (SVM) classifier [31]. When using SVR, the input variable (i-vector/supervector) is first mapped onto a high-dimensional feature space by a non-linear mapping performed by the kernel function. The kernel yields the new high-dimensional features through a similarity measure between points in the original feature space.
Once the mapping onto a high-dimensional space is done, a linear model is constructed in this feature space by finding the optimal hyperplane such that most of the training samples lie within an ϵ-margin (ϵ-insensitive zone) around this hyperplane [31]. The generalization performance of SVR depends on a good setting of the two hyperparameters (ϵ, C) and of the kernel parameters. The parameter ϵ controls the width of the ϵ-insensitive zone used to fit the training data. The width of the ϵ-insensitive zone determines the level of accuracy of the approximation function and relies entirely on the target values of the training set. The parameter C determines the trade-off between the model complexity, controlled by ϵ, and the degree to which deviations larger than the ϵ-insensitive zone are tolerated in the optimization of the hyperplane. Finally, the kernel parameters depend on the type of similarity measure used. In this paper SVR is applied to estimate the clinical variables, and linear and radial basis function (RBF) kernels were tested to approximate the estimator function f^j. In our study both linear and RBF kernels were tested for i-vectors, but only linear kernels were considered for supervectors, because their large dimensionality makes it inadvisable to map them into a still higher-dimensional space. SVR training and testing were implemented using LIBSVM [32] running on Linux Ubuntu 12.04 LTS.
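To make the role of ϵ concrete, the few lines below (plain Python, with illustrative values only) implement the ϵ-insensitive loss that SVR minimizes: predictions inside the ϵ-tube around the hyperplane cost nothing, while larger deviations incur a linear cost whose overall weight against model complexity is set by C.

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """SVR loss: zero inside the epsilon-tube, linear outside it."""
    return max(0.0, abs(y_true - y_pred) - eps)

# An AHI prediction 0.5 units off is "free" with eps = 1.0;
# one 3 units off is penalized only for the 2 units beyond the tube.
inside = eps_insensitive_loss(10.0, 10.5, eps=1.0)   # 0.0
outside = eps_insensitive_loss(10.0, 13.0, eps=1.0)  # 2.0
```

Widening ϵ therefore tolerates coarser fits (fewer support vectors), while shrinking it pushes the model toward exact interpolation of the training targets.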
Table 2 describes the details of use of this software together with all the parameters used in our tests.

Performance metrics: To evaluate the proposed method of using supervectors or i-vectors to predict or estimate the AHI and the other clinical variables (age, height, weight, BMI and CP) we measure both the mean absolute error (MAE) and the Pearson correlation coefficient (ρ). The MAE provides the average absolute difference between actual and estimated values, while ρ evaluates their linear relationship. As will be seen in the section “Results”, correlation coefficients between estimated and actual AHI values were often very small. Therefore, we considered it informative to report p-values for the correlation coefficients, i.e., the probability of observing them under the null hypothesis that the true correlation is zero. Although the main objective of our method is to evaluate the capability of using speech to predict or estimate the AHI, in the section “Discussion” we also review previous research that aims to classify or discriminate between subjects with OSA (AHI ≥10) and without OSA (defined by an AHI <10). Therefore, we performed some additional tests using our estimated AHI values to classify subjects as OSA (predicted AHI ≥10) or non-OSA (predicted AHI <10). In these classification tests performance was measured in terms of sensitivity, specificity and the area under the ROC curve.
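The metrics above are simple to state precisely. A small plain-Python sketch (the AHI values are made up purely for illustration) computes MAE, Pearson ρ, and the sensitivity/specificity of the derived OSA decision at the AHI = 10 threshold used in the paper:

```python
import math

def mae(actual, pred):
    """Mean absolute error between actual and estimated values."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def pearson(actual, pred):
    """Pearson correlation coefficient (linear relationship)."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(pred) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, pred))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (sa * sp)

def sens_spec(actual, pred, threshold=10.0):
    """OSA decision: positive when AHI >= threshold (10 in the paper)."""
    tp = sum(a >= threshold and p >= threshold for a, p in zip(actual, pred))
    fn = sum(a >= threshold and p < threshold for a, p in zip(actual, pred))
    tn = sum(a < threshold and p < threshold for a, p in zip(actual, pred))
    fp = sum(a < threshold and p >= threshold for a, p in zip(actual, pred))
    return tp / (tp + fn), tn / (tn + fp)

actual = [5.0, 8.0, 12.0, 30.0, 45.0]   # hypothetical polysomnography AHI
pred = [7.0, 11.0, 15.0, 22.0, 40.0]    # hypothetical speech-based estimates
```

For these toy values the MAE is 4.2 and, at the threshold of 10, one healthy subject is misclassified as OSA, which is exactly the kind of low-specificity behaviour reported later in Table 8.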
k-fold cross-validation and grid-search: In order to train the SVR regression model (function f^j) and predict the y^j variables (AHI and the other clinical variables) we employed k-fold cross-validation together with a grid search for finding the optimal SVR parameters. The whole process is presented in Fig. 3. First, to guarantee that all speakers are involved in the test, the dataset is split into k equal-sized subsamples with no speakers in common. Then, of the k subsamples, a single subsample is retained for testing and the remaining k−1 subsamples are used as the training dataset. Results are reported for k = 10.

Fig. 3 Representation of k-fold cross-validation and grid search for SVR regression and predicting clinical variables

Furthermore, as Fig. 3 also illustrates, in each cross-validation loop the optimal hyperparameters (ϵ, C) of the SVR models are obtained through grid search using fivefold cross-validation on the training data. The ranges for this grid search are detailed in Table 2.

Results

Clinical variables estimation: Results in Tables 3 and 4 show performance when using speech to estimate age and height. As mentioned before, the purpose of these tests is to validate our procedure by comparing these results to those reported in the recent references [16] and [17].
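Looking back at the cross-validation protocol just described, its key constraint, no speaker shared between folds, can be sketched in a few lines of plain Python (round-robin assignment of speakers to folds; the speaker/utterance labels are hypothetical). The inner fivefold grid search over (ϵ, C) would then run only on the k−1 training folds of each loop.

```python
def speaker_kfold(utterances, k=10):
    """Split (speaker_id, utterance) pairs into k speaker-disjoint folds.

    All utterances of a speaker land in the same fold, so no speaker
    appears in both the training and the test partition of any loop.
    """
    speakers = sorted({spk for spk, _ in utterances})
    fold_of = {spk: i % k for i, spk in enumerate(speakers)}
    folds = [[] for _ in range(k)]
    for spk, utt in utterances:
        folds[fold_of[spk]].append((spk, utt))
    return folds

# Hypothetical corpus: 20 speakers with 4 utterances each.
data = [(f"spk{i}", f"utt{j}") for i in range(20) for j in range(4)]
folds = speaker_kfold(data, k=10)
```

Each of the 10 folds is used once as the test set while the other 9 train the SVR, so every speaker contributes exactly one test appearance.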
Table 3 shows that our estimation performance for height (both in terms of MAE and correlation coefficient) is comparable to, and when using i-vectors better than, that reported in [17]. However, the estimation results for age (Table 4) are slightly worse than in [16]. A plausible explanation is that the population in [16] includes a majority of young people, between 20 and 30 years old, while most of our OSA speakers are well above 45 years old. According to [16], speech recordings from young speakers can be discriminated better than those from older ones. In any case, our results are very similar to results published previously by other authors, which is a good indicator of the validity of our methods.

Table 3 Speakers’ height estimation results

Regression method        Mean absolute error (cm)   Correlation coefficient (ρ)
I-vector–LSSVR [17]      6.2                        0.41(b)
Supervector–SVR          5.37                       0.34(a)
I-vector–SVR             5.06                       0.45(a)

(a) These values are significant beyond the 0.01 level of confidence
(b) Level of confidence is not reported

Table 4 Speakers’ age estimation results

Regression method          Mean absolute error (years)   Correlation coefficient (ρ)
I-vector–WCCN–SVR [16]     6.0                           0.77(b)
Supervector–SVR            7.75                          0.66(a)
I-vector–SVR               7.87                          0.63(a)

(a) These values are significant beyond the 0.01 level of confidence
(b) Level of confidence is not reported

Prediction results using i-vectors and supervectors for all our clinical variables are listed in Tables 5, 6 and 7.

Table 5 Speakers’ clinical variables estimation using supervector–SVR (linear kernel)

Clinical variable   MAE     ρ
AHI                 14.26   0.17
Height (cm)         5.37    0.34
Age (years)         7.75    0.66
Weight (kg)         12.58   0.31
BMI (kg/m2)         3.81    0.23
CP (cm)             2.29    0.42

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence

Table 6 Speakers’ clinical variables estimation using i-vector–SVR (linear kernel)

                    Mean absolute error (MAE), by i-vector dim     Correlation coefficient (ρ), by i-vector dim
Clinical variable   400    300    200    100    50     30          400   300   200   100   50    30
AHI                 13.68  13.64  13.55  13.23  13.40  13.85       0.23  0.21  0.24  0.30  0.27  0.20
Height (cm)         5.21   5.23   5.11   5.06   5.29   5.38        0.40  0.41  0.43  0.45  0.36  0.34
Age (years)         8.16   7.87   8.11   8.29   8.77   9.16        0.61  0.63  0.61  0.59  0.52  0.44
Weight (kg)         12.31  12.23  12.25  11.86  12.16  12.31       0.34  0.35  0.36  0.39  0.35  0.31
BMI (kg/m2)         3.59   3.65   3.67   3.69   3.74   3.80        0.33  0.30  0.29  0.28  0.26  0.18
CP (cm)             2.28   2.26   2.20   2.26   2.31   2.42        0.44  0.45  0.49  0.47  0.44  0.32

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence

Table 7 Speakers’ clinical variables estimation using i-vector–SVR (RBF kernel)

                    Mean absolute error (MAE), by i-vector dim     Correlation coefficient (ρ), by i-vector dim
Clinical variable   400    300    200    100    50     30          400   300   200   100   50    30
AHI                 14.04  13.91  13.63  13.48  13.84  14.12       0.00  0.17  0.25  0.26  0.18  0.02
Height (cm)         5.28   5.23   5.16   5.24   5.46   5.43        0.40  0.41  0.42  0.41  0.29  0.32
Age (years)         9.46   9.22   8.29   8.68   9.10   9.53        0.42  0.51  0.61  0.57  0.50  0.41
Weight (kg)         12.39  12.82  12.18  12.11  12.27  12.59       0.29  0.18  0.32  0.35  0.34  0.24
BMI (kg/m2)         3.73   3.70   3.66   3.68   3.72   3.77        0.20  0.18  0.27  0.27  0.21  0.14
CP (cm)             2.38   2.42   2.32   2.34   2.42   2.44        0.31  0.26  0.42  0.40  0.31  0.26

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence

As pointed out before, for supervectors (Table 5) only a linear kernel was evaluated, because the very large supervector dimension (>1000) makes it inadvisable to map this data into a higher-dimensional space. Tables 6 and 7 show that for i-vectors the estimation results using linear and RBF kernels are very similar. These tables also show that both i-vectors and supervectors reach similar results for almost all clinical variables.
AHI classification: Table 8 shows classification results in terms of sensitivity, specificity and area under the ROC curve when classifying our population into OSA subjects and healthy individuals based on the estimated AHI values. That is, first supervectors or i-vectors are used to estimate the AHI using SVR, and then subjects are classified as OSA individuals when their estimated AHI is above ten; otherwise they are classified as healthy. The results in Table 8 using i-vectors were obtained for an i-vector dimensionality of 100, as this provided the best AHI estimation results (see Table 6).

Table 8 OSA classification using estimated AHI values

Feature               Accuracy (%)   Sensitivity (%)   Specificity (%)   ROC AUC
Supervectors          68             89                18                0.58
I-vectors (dim 100)   71             92                20                0.64

We are aware that better results could be obtained using supervectors or i-vectors as inputs to a classification algorithm such as an SVM; however, the results in Table 8 were obtained only to provide figures that will be used in the section “Discussion” to compare our results with those from previous research (Table 9).

Table 9 Test characteristics of previous research using speech analysis and machine learning for AHI classification and regression

Study                                          Population characteristics                                                        Classification: CCR / Sens / Spec (%)    Regression: ρ
GMMs [10]                                      80 male subjects (AHI <10: 40 men; AHI >30: 40 men)                               81 / 77.5 / 85                           –
HMMs [11]                                      80 male subjects (AHI <10: 40 men; AHI >30: 40 men)                               85 / – / –                               –
Several feature selection and
classification schemes [13]                    248 subjects (AHI ≤5: 48 male, 79 women; AHI ≥30: 101 male, 20 women)             82.85 / 81.49 / 84.69                    –
Feature selection and GMMs [9]                 93 subjects (AHI ≤5: 14 female; AHI >5: 19 female) (AHI ≤10: 12 male;             – / 86 (F), 84 (M) / 83 (F), 79 (M)      –
                                               AHI >10: 48 male)
Feature selection and GMMs [41]                103 male subjects (AHI ≤10: 25 male; AHI >10: 78 male)                            80 / 80.65 / 80                          –
Feature selection, supervectors and SVR [14]   131 males                                                                         – / – / –                                0.67(a)
I-vectors/supervectors and SVR, this study     426 males (AHI <10: 125 male; AHI ≥10: 301 male)                                  71.06 / 92.92 / 20.6                     0.30

(a) Results using speech features plus age and BMI
In any case, our results are very similar to those published previously by other authors, which is a good indicator of the validity of our methods.

Table 3  Speakers' height estimation results

Regression method    | Mean absolute error (cm) | Correlation coefficient (ρ)
I-vector–LSSVR [17]  | 6.2  | 0.41 (b)
Supervector–SVR      | 5.37 | 0.34 (a)
I-vector–SVR         | 5.06 | 0.45 (a)

(a) These values are significant beyond the 0.01 level of confidence. (b) Level of confidence is not reported.

Table 4  Speakers' age estimation results

Regression method       | Mean absolute error (years) | Correlation coefficient (ρ)
I-vector–WCCN–SVR [16]  | 6.0  | 0.77 (b)
Supervector–SVR         | 7.75 | 0.66 (a)
I-vector–SVR            | 7.87 | 0.63 (a)

(a) These values are significant beyond the 0.01 level of confidence. (b) Level of confidence is not reported.

Prediction results using i-vectors and supervectors for all our clinical variables are listed in Tables 5, 6 and 7.

Table 5  Speakers' clinical variables estimation using supervector–SVR (linear kernel)

Clinical variable | MAE   | ρ
AHI               | 14.26 | 0.17
Height (cm)       | 5.37  | 0.34
Age (years)       | 7.75  | 0.66
Weight (kg)       | 12.58 | 0.31
BMI (kg/m2)       | 3.81  | 0.23
CP (cm)           | 2.29  | 0.42

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

Table 6  Speakers' clinical variables estimation using i-vectors–SVR (linear kernel)

Clinical variable | MAE (i-vector dim 400/300/200/100/50/30)    | ρ (i-vector dim 400/300/200/100/50/30)
AHI               | 13.68 / 13.64 / 13.55 / 13.23 / 13.40 / 13.85 | 0.23 / 0.21 / 0.24 / 0.30 / 0.27 / 0.20
Height (cm)       | 5.21 / 5.23 / 5.11 / 5.06 / 5.29 / 5.38       | 0.40 / 0.41 / 0.43 / 0.45 / 0.36 / 0.34
Age (years)       | 8.16 / 7.87 / 8.11 / 8.29 / 8.77 / 9.16       | 0.61 / 0.63 / 0.61 / 0.59 / 0.52 / 0.44
Weight (kg)       | 12.31 / 12.23 / 12.25 / 11.86 / 12.16 / 12.31 | 0.34 / 0.35 / 0.36 / 0.39 / 0.35 / 0.31
BMI (kg/m2)       | 3.59 / 3.65 / 3.67 / 3.69 / 3.74 / 3.80       | 0.33 / 0.30 / 0.29 / 0.28 / 0.26 / 0.18
CP (cm)           | 2.28 / 2.26 / 2.20 / 2.26 / 2.31 / 2.42       | 0.44 / 0.45 / 0.49 / 0.47 / 0.44 / 0.32

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

Table 7  Speakers' clinical variables estimation using i-vectors–SVR (RBF kernel)

Clinical variable | MAE (i-vector dim 400/300/200/100/50/30)    | ρ (i-vector dim 400/300/200/100/50/30)
AHI               | 14.04 / 13.91 / 13.63 / 13.48 / 13.84 / 14.12 | 0.00 / 0.17 / 0.25 / 0.26 / 0.18 / 0.02
Height (cm)       | 5.28 / 5.23 / 5.16 / 5.24 / 5.46 / 5.43       | 0.40 / 0.41 / 0.42 / 0.41 / 0.29 / 0.32
Age (years)       | 9.46 / 9.22 / 8.29 / 8.68 / 9.10 / 9.53       | 0.42 / 0.51 / 0.61 / 0.57 / 0.50 / 0.41
Weight (kg)       | 12.39 / 12.82 / 12.18 / 12.11 / 12.27 / 12.59 | 0.29 / 0.18 / 0.32 / 0.35 / 0.34 / 0.24
BMI (kg/m2)       | 3.73 / 3.70 / 3.66 / 3.68 / 3.72 / 3.77       | 0.20 / 0.18 / 0.27 / 0.27 / 0.21 / 0.14
CP (cm)           | 2.38 / 2.42 / 2.32 / 2.34 / 2.42 / 2.44       | 0.31 / 0.26 / 0.42 / 0.40 / 0.31 / 0.26

AHI apnea–hypopnea index, BMI body mass index, CP cervical perimeter. The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

As pointed out before, for supervectors (Table 5) only a linear kernel was evaluated, because the very large supervector dimension (>1000) makes it inadvisable to map these data into a higher-dimensional space. Tables 6 and 7 show that for i-vectors, estimation results using linear and RBF kernels are very similar.
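The evaluation loop behind these tables (SVR with a linear or RBF kernel, scored by MAE and a correlation coefficient) can be sketched as follows. This is only an illustration: the random matrix stands in for the real i-vectors extracted from speech, the target is synthetic, and the use of Pearson's correlation for the regression score is our assumption.

```python
# Hedged sketch of the Tables 5-7 evaluation: SVR with linear and RBF
# kernels over fixed-dimension feature vectors, scored by MAE and
# correlation. The random matrix is a stand-in for 100-dim i-vectors.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))                                  # stand-in "i-vectors"
y = 5 + X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200)   # stand-in clinical variable

for kernel in ("linear", "rbf"):
    y_hat = cross_val_predict(SVR(kernel=kernel, C=1.0), X, y, cv=5)
    mae = float(np.mean(np.abs(y_hat - y)))
    rho, p = pearsonr(y_hat, y)
    print(f"{kernel}: MAE={mae:.2f}, rho={rho:.2f}")
```

Cross-validated prediction (`cross_val_predict`) keeps each subject's estimate out-of-fold, matching the leave-out evaluation style used for the regression results.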
These tables also show that i-vectors and supervectors reach similar results for almost all clinical variables.

AHI classification:
Table 8 shows classification results in terms of sensitivity, specificity and area under the ROC curve when classifying our population as OSA subjects or healthy individuals based on the estimated AHI values. That is, supervectors or i-vectors are first used to estimate the AHI through SVR, and subjects are then classified as OSA individuals when their estimated AHI is above ten; otherwise they are classified as healthy. The results in Table 8 using i-vectors were obtained for an i-vector dimensionality of 100, as this provided the best AHI estimation results (see Table 6).

Table 8  OSA classification using estimated AHI values

Feature             | Accuracy (%) | Sensitivity (%) | Specificity (%) | ROC AUC
Supervectors        | 68 | 89 | 18 | 0.58
I-vectors (dim 100) | 71 | 92 | 20 | 0.64

We are aware that better results could be obtained using supervectors or i-vectors as inputs to a classification algorithm such as SVM; however, the results in Table 8 were obtained only to provide figures that will be used in the "Discussion" section to compare our results with those from previous research (Table 9).

Table 9  Test characteristics of previous research using speech analysis and machine learning for AHI classification and regression

Study | Population characteristics | Correct classification rate (%) | Sensitivity (%) | Specificity (%) | Correlation coefficient
GMMs [10] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 81 | 77.5 | 85 | –
HMMs [11] | 80 male subjects (AHI <10: 40 men; AHI >30: 40 men) | 85 | – | – | –
Several feature selection and classification schemes [13] | 248 subjects (AHI ≤5: 48 male, 79 female; AHI ≥30: 101 male, 20 female) | 82.85 | 81.49 | 84.69 | –
Feature selection and GMMs [9] | 93 subjects (females: AHI ≤5: 14, AHI >5: 19; males: AHI ≤10: 12, AHI >10: 48) | – | 84 (female), 79 (male) | 86 (female), 83 (male) | –
Feature selection and GMMs [41] | 103 male subjects (AHI ≤10: 25; AHI >10: 78) | 80 | 80.65 | 80 | –
Feature selection, supervectors and SVR [14] | 131 males | – | – | – | 0.67 (a)
I-vectors/supervectors and SVR (this study) | 426 males (AHI <10: 125; AHI ≥10: 301) | 71.06 | 92.92 | 20.6 | 0.30

(a) Results using speech features plus age and BMI.

Discussion:
Overall, the results in Tables 5–7 indicate poor performance when estimating AHI from acoustic speech information; the best results are obtained using SVR over the i-vector acoustic representation with dimensionality 100 (ρ = 0.30). Better performance is obtained when predicting the other clinical variables: the best results were for i-vectors and an SVR linear kernel (see Table 6), with correlation coefficient ρ = 0.63 for age, followed by CP (ρ = 0.49), height (ρ = 0.45), weight (ρ = 0.39) and BMI (ρ = 0.33). Nevertheless, the most interesting discussion arises when comparing these results with those reported in previous research. As stated before, our results when estimating age and height are comparable to those previously published in [16] and [17]. Previous research has also demonstrated moderate results (similar to ours) when estimating speakers' weight and CP from speech (for example, see [33] and [34]). The lower success when estimating BMI has also been reported in [35]. More positive results have only recently been presented in [20], although their authors have questioned them for possible overfitting, as they used machine learning after feature selection over a large set of acoustic features. However, our AHI estimation results contrast markedly with those reported in previous research connecting speech and OSA. Therefore, we undertook a critical review of previous studies (including ours), which led us to identify possible machine learning issues similar to those reported in [19].
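The thresholding step behind Table 8 (label a subject as OSA when the estimated AHI is at or above ten, then score against the PSG ground truth) reduces to a few lines. The sketch below uses hypothetical toy values, not study data, and computes the ROC AUC through the Mann–Whitney rank-sum identity over the continuous estimated AHI.

```python
import numpy as np

def osa_screening_metrics(ahi_true, ahi_est, threshold=10.0):
    """Score OSA screening based on estimated AHI, as in Table 8:
    subjects with estimated AHI >= threshold are labelled OSA."""
    t = np.asarray(ahi_true) >= threshold      # ground-truth OSA labels
    p = np.asarray(ahi_est) >= threshold       # predicted labels
    tp = np.sum(t & p); tn = np.sum(~t & ~p)
    fp = np.sum(~t & p); fn = np.sum(t & ~p)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / t.size
    # ROC AUC via the Mann-Whitney rank identity on the continuous
    # estimated AHI (ties are ignored in this sketch)
    ranks = np.asarray(ahi_est, dtype=float).argsort().argsort() + 1
    auc = (ranks[t].sum() - t.sum() * (t.sum() + 1) / 2) / (t.sum() * (~t).sum())
    return acc, sens, spec, auc

# Toy example (hypothetical AHI values)
print(osa_screening_metrics([5, 8, 15, 40, 12, 3], [6, 12, 11, 30, 9, 4]))
```

Note how a regressor biased toward the majority class can score high sensitivity but very low specificity, exactly the pattern of the supervector and i-vector rows in Table 8.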
A first discrepancy, though not related to machine learning issues, was addressed in our research [36], where we found notable differences with the seminal work by Robb et al. [8]. In [8], statistically significant differences between OSA and non-OSA speakers were found for several formant frequencies and bandwidths extracted from sustained vowels, while our study in [36] only revealed very weak correlations with two formant bandwidths. In this case, the discrepancy can be mainly attributed to the small and biased sample in Robb's exploratory analysis (10 OSA and 10 non-OSA subjects, including extreme AHI differences between individuals), while our study [36] explored a larger sample of 241 male subjects representing a wide range of AHI values. Table 9 summarizes the most relevant existing research proposals using automatic speech analysis and machine learning for OSA assessment. We start by reviewing our own previous positive results presented in [10–12]. In [10] and [11], speech samples from control (AHI <10) and OSA (AHI >30) individuals were used to train a binary machine learning classifier for severe OSA detection. Healthy and OSA speakers were thus classified using two models: one trained to represent OSA voices and the other to model healthy voices. Two different approaches were researched: (1) a text-independent approach using two GMMs [10], and (2) a text-dependent approach using two Hidden Markov Models (HMMs) [11]. Correct classification rates were 80 % and 85 % for GMMs and HMMs, respectively. These promising results contrast with both the weak correlation between speech and AHI and the low OSA classification performance we have found in this study. Consequently, we repeated the experiments of [10] and [11] on the same database used in this paper, and found that performance degraded significantly, now achieving correct classification rates of only 63 % for GMMs and 67 % for HMMs.
This important reduction in performance can again be attributed to the very limited database (40 controls and 40 OSA speakers with AHI >30) used in [10] and [11], whereas we now have 125 controls (AHI <10) and 118 OSA subjects (AHI >30). As pointed out in [19], the sizes of the training and evaluation sets are important factors in gaining a reasonable understanding of the performance of any classifier. Furthermore, another relevant factor that can explain this degradation in performance is that the 40 controls in [10] and [11] were asymptomatic individuals, selected so that the control and OSA populations were matched as closely as possible in terms of age and BMI. In our new database, by contrast, all individuals (i.e., controls and OSA) are suspected of suffering from OSA, as they were referred to a sleep disorders unit (as indicated before, the control population was defined by AHI <10); for example, most of them are heavy snorers. A third possible cause of previous over-optimistic results lies in indirect associations between speech and AHI mediated through other clinical variables (see the correlation coefficients between AHI and the other clinical variables in Table 10). More specifically, as discussed in [9], speech acoustic features can be less correlated with AHI than with clinical variables such as age or BMI, which are good predictors of AHI [37]. Therefore, a population of control and OSA speakers with significant differences in these confounding variables is prone to false discovery of discrimination results driven by the underlying differences in the confounders rather than in AHI.
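A confounder check of this kind boils down to pairwise Spearman correlations between the clinical variables. A minimal sketch, using synthetic stand-ins for the patient records (the deliberate AHI–BMI dependence below is illustrative, not study data):

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-ins for three clinical variables; AHI is generated with
# a deliberate dependence on BMI to mimic a Table-10-style association.
rng = np.random.default_rng(1)
bmi = rng.normal(30, 5, size=300)
ahi = 1.5 * bmi + rng.normal(0, 15, size=300)
age = rng.normal(50, 12, size=300)

# Columns are treated as variables; rho is a symmetric 3x3 matrix.
rho, pval = spearmanr(np.column_stack([ahi, bmi, age]))
# Off-diagonal entries with small p-values flag variables that could
# confound any speech-AHI association.
print(np.round(rho, 2))
```

Any feature set whose correlation with such confounders exceeds its correlation with AHI itself is a candidate for exactly the false-discovery mechanism discussed here.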
This fact was reported in our research [12], where OSA detection results using 16 speech features (many of them similar to those traditionally used in detecting voice pathologies, such as HNR, jitter and shimmer) degraded when tested on a database designed to avoid statistically significant differences in age and BMI.

Table 10  Spearman's correlation between clinical variables

Feature | AHI      | Weight    | Height    | BMI      | Age       | CP
AHI     | 1        | 0.41 (a)  | −0.007    | 0.44 (a) | 0.16 (a)  | 0.40 (a)
Weight  | 0.41 (a) | 1         | 0.40 (a)  | 0.89 (a) | −0.11 (a) | 0.71 (a)
Height  | −0.007   | 0.40 (a)  | 1         | −0.02    | −0.35 (a) | 0.13 (a)
BMI     | 0.44 (a) | 0.89 (a)  | −0.02     | 1        | 0.04      | 0.72 (a)
Age     | 0.16 (a) | −0.11 (a) | −0.35 (a) | 0.04     | 1         | 0.16 (a)
CP      | 0.40 (a) | 0.71 (a)  | 0.13 (a)  | 0.72 (a) | 0.16 (a)  | 1

(a) The correlation coefficients (ρ) are significant beyond the 0.01 level of confidence.

The same critical demand to explore and report significant differences in confounding speaker features such as age, height and BMI must be extended to any other factor that could affect speech, such as speakers' dialect, gender or mood state. In fact, we believe this issue can explain the good discrimination results for severe OSA detection reported in [13]. The study by Solé-Casals et al. [13] analyzes both sustained and connected speech, with recordings from two distinct positions: upright (seated) and supine (stretched). The reason for recording in two distinct positions, also preliminarily explored in [15], is that due to anatomical and functional abnormalities in OSA individuals, different body positions can affect their vocal tract differently, therefore presenting more discriminative acoustic features. Solé-Casals et al. evaluate several feature selection, feature combination (i.e., PCA) and classification schemes (Bayesian classifiers, KNN, support vector machines, neural networks, AdaBoost). The best results are achieved when using a genetic algorithm for feature selection.
An interesting result in [13] is that positive discrimination results (i.e., correct classification rate, sensitivity and specificity all above 80 %) were only obtained when classifying between extreme cases: severe OSA (AHI ≥30) and controls (AHI ≤5), while a notable reduction in performance was observed when trying to classify "in-between" cases, i.e., cases with AHI between 5 and 30. Solé-Casals et al. conclude that "for intermediate cases where upper-airway closure may not be so pronounced (thus voice not much affected), we cannot rely on voice alone for making a good discrimination between OSA and non-OSA." At first glance, this conclusion of [13] could be linked to our weak estimation and classification results over the broad range of AHI values using acoustic speech information. However, two critical issues can be identified in this study. First, feature selection is applied over a high number of features (253) compared to the number of cases (248). Though the authors report the use of cross-validation for the development and evaluation of different classification algorithms, there is no clear indication of what data were used for feature selection. At this point, it is worth noting that the i-vector subspace projection in our study was trained on a development database completely different from the one used for training and testing our SVR regression model. Without this precaution, as discussed in several studies [19, 38], feature selection can lead to over-fitted results based on a set of "ad hoc" selected features. A second highly relevant issue in [13] is that when evaluating the classification performance between extreme cases (see Table 7 in [13]), the OSA and control groups contain very different percentages of male and female speakers: 48 men/79 women in the control group vs. 101 men/20 women in the OSA group.
This notable imbalance between female and male percentages in the control and OSA groups is clearly due to the significantly lower prevalence of OSA in women compared to men [39]. Consequently, considering the important acoustic differences between female and male voices [40], gender becomes a strong confounding factor that could also explain the good classification results. To illustrate these issues, we have studied the best discriminative feature reported in [13], which is the mean value of the harmonics-to-noise ratio (HNR) measured for the sustained vowel /a/ recorded in seated position (MEAN_HNR_VA_A in [13]). A small p value, p < 0.0001, was reported in [13] using a Wilcoxon two-sample test of difference in medians for MEAN_HNR_VA_A values in the control and OSA groups. As our database also contains speech records of the sustained /a/ recorded in seated position for both 426 male and 171 female speakers, we performed Wilcoxon two-sample tests for MEAN_HNR_VA_A values contrasting: (a) a group of male speakers vs. a group of female speakers, and (b) a group of extreme OSA male speakers (AHI ≥30) vs. a group of male controls (AHI ≤5). The results presented in Table 11 clearly reveal that while significant differences (p < 0.0001) appear when contrasting female and male voices (as already reported in other studies such as [40]), no significant differences are found between the extreme OSA groups including only male speakers (p = 0.06).
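The gender contrast above is a two-sample Wilcoxon rank-sum (Mann–Whitney) comparison. A hedged sketch with synthetic HNR-like values drawn around the medians of Table 11 (the real tests use the measured per-speaker MEAN_HNR_VA_A values):

```python
import numpy as np
from scipy.stats import mannwhitneyu  # two-sample Wilcoxon rank-sum test

# Synthetic stand-ins: HNR-like values around the Table 11 medians
# (female ~19.4, male ~17.1), with the same group sizes (171 vs 426).
rng = np.random.default_rng(2)
hnr_female = rng.normal(19.4, 4.0, size=171)
hnr_male = rng.normal(17.1, 4.2, size=426)

stat, p = mannwhitneyu(hnr_female, hnr_male, alternative="two-sided")
print(f"p = {p:.2g}")  # a tiny p value: the two groups differ in location
```

With a location shift this large and these sample sizes, the test rejects easily, mirroring the p < 0.0001 gender contrast; run on two male-only groups with nearly equal medians, the same test would not reject, mirroring the p = 0.06 result.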
This is therefore an illustrative example of how gender can act as a strong confounding factor.

Table 11  Wilcoxon two-sample tests for MEAN_HNR_VA_A contrasting gender, and contrasting extreme OSA male groups

MEAN_HNR_VA_A | Female | Male  | p value | Male (AHI ≤5) | Male (AHI ≥30) | p value
Median        | 19.43  | 17.07 | <0.0001 | 17.46         | 16.38          | 0.06
SD            | 3.98   | 4.23  |         | 3.89          | 4.32           |
# Samples     | 171    | 426   |         | 69            | 129            |

The connection between OSA and speech analysis has also been studied for the Hebrew language, mainly in [9] and [14]. Following the same approach previously described for [10], the work in [9] uses two GMMs to classify between OSA and non-OSA speakers. However, differently from [10], acoustic feature selection is performed before GMM modelling. The experimental protocol presented by Goldshtein et al. in [9] properly separates female and male speakers. Different AHI thresholds are used to define the OSA and non-OSA groups: an AHI threshold of 5 is used for women and 10 for men. Reported results achieved a specificity of 83 % and sensitivity of 79 % for OSA detection in males, and 86 and 84 % respectively for females (see Table 9). A major limitation of this study is again the small number of cases: a total of 60 male speakers (12 controls/48 OSA) and 33 female subjects (14 controls/19 OSA). Besides the low reliability of such small samples, a critical issue in both [9] and [14] is again the use of feature selection techniques over a large number of acoustic parameters (sometimes on the order of hundreds) when only very limited training data is available. The same research group reported in [41] a decrease in performance using the same techniques as in [9] but over a different database with 103 males. According to Kriboy et al.
in [41], this mismatch could be explained by the use of a different database with more subjects and a different balance in terms of possible confounding factors (BMI, age, etc.). It is also particularly relevant to analyze the good AHI estimation results reported by Kriboy et al. in [14], because they used a prediction scheme very close to the one presented in this paper: GMM supervectors are used in combination with SVR to estimate AHI. Nevertheless, differently from our study, feature selection is again first used to select the five most discriminative features from a set of 71 acoustic features, and GMM mean supervectors are then trained on that small number of features. Although the experimental protocol in [14] separates training and validation data to avoid over-fitting, the set of selected features was composed of five high-order cepstral and LPC coefficients (a15, ΔΔc9, a17, ΔΔc12, c16), which are difficult to interpret or justify. Both cepstral and LPC coefficients are commonly used to represent the acoustic spectral information in speech signals, but higher-order coefficients are generally less informative and noisier. Another notable limitation to validating the results in [14] is that SVR regression is applied after adding two clinical variables, age and BMI, to the speech supervector generated from the five selected features. These two clinical variables are well-known predictors of AHI [37]. It would therefore have been advisable first to report AHI estimation results using only supervectors representing speech acoustic features, then to present results using only age and BMI, and finally to give results for supervectors extended with age and BMI. To contribute to the review of these results, we applied the same estimation procedure described in [14] to our database. The first row in Table 12 shows prediction results for AHI using only speech supervectors including the same set of five selected features as in [14].
The second row presents estimation performance when using only BMI and age. The third row gives the results using the supervector of acoustic features extended with BMI and age.

Table 12  Speakers' AHI estimation using a supervector generated by five high-order cepstral and LPC coefficients [14]

Set of variables                          | MAE   | Correlation coefficient (ρ) | p value
a15, ΔΔc9, a17, ΔΔc12, c16                | 14.33 | 0.12 | 0.008
AGE + BMI                                 | 12.96 | 0.38 | <0.00001
(a15, ΔΔc9, a17, ΔΔc12, c16) + AGE + BMI  | 12.24 | 0.46 | <0.00001

p values are given for the correlation coefficient (ρ).

As can be seen in Table 12, estimation results are mainly driven by the presence of BMI and age, and a very poor correlation (ρ = 0.12) is obtained when only the set of five selected speech features is used. Therefore, it is reasonable to conclude that the well-known correlation of AHI with BMI and age [37, 42], together with possible over-fitting from feature selection over a high number of features compared to the number of cases, may explain the optimistic results presented in [14]. We acknowledge several limitations in our work that should be addressed in future research. The results presented in this paper are limited to speech from Spanish speakers, so comparisons with other languages will require a more careful analysis of language-dependent acoustic traits in OSA voices. Another limitation of our study is that it has only considered male speakers. As our database now includes a substantial number of female speakers, extending this study to female voices could be especially interesting, as apnea is still not well researched in women. Considering also some recent studies such as [43], we should acknowledge the limitation of i-vectors in representing relevant segmental (non-cepstral) and supra-segmental speaker information.
Therefore, subspace projection techniques could also be explored over other speech acoustic features previously related to OSA, such as nasality [9, 10], voice turbulence [13, 44] or specific co-articulation trajectories. Finally, a comparative analysis of results for the two different recording positions (as proposed in [15]) should be addressed.

Conclusions:
This study can represent an important and useful example to illustrate the potential pitfalls in the development of machine learning techniques for diagnostic applications. The contradictory results obtained using state-of-the-art speech processing and machine learning for OSA assessment over, to the best of our knowledge, the largest database used in this kind of study led us to undertake a critical review of previous studies reporting positive results in connecting OSA and speech. As is being identified across different fields by the biomedical research community, several limitations in the development of machine learning techniques were observed and, when possible, experimentally studied. In line with other similar studies on these pitfalls [19, 38], the main deficiencies detected are: the impact of the limited size of training and evaluation datasets on performance evaluation; the likelihood of false discovery or spurious associations due to the presence of confounding variables; and the risk of overfitting when feature selection techniques are applied over large numbers of variables with only limited training data available. In conclusion, we believe that our study and results could be useful both to sensitize the biomedical engineering research community to the potential pitfalls of using machine learning for medical diagnosis, and to guide further research on the connection between speech and OSA. In this latter respect, we believe there is an open path for future research looking for new insights into this connection using different acoustic features, languages, speaking styles or recording positions.
However, besides properly addressing the methodological issues when using machine learning, any new advance should carefully explore and report on any possible indirect association between speech and AHI mediated through other clinical variables, or any other factor that could affect speech, such as speakers' dialect, gender or mood state.
Background:
Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). The altered UA structure or function in OSA speakers has led to the hypothesis that automatic analysis of speech could be useful for OSA assessment. In this paper we critically review several approaches using speech analysis and machine learning techniques for OSA detection, and discuss the limitations that can arise when using machine learning techniques for diagnostic applications.

Methods:
A large speech database including 426 male Spanish speakers suspected of suffering from OSA and referred to a sleep disorders unit was used to study the clinical validity of several proposals using machine learning techniques to predict the apnea–hypopnea index (AHI) or to classify individuals according to their OSA severity. The AHI describes the severity of the patient's condition. We first evaluate AHI prediction using state-of-the-art speaker recognition technologies: speech spectral information is modelled using supervector or i-vector techniques, and AHI is predicted through support vector regression (SVR). Using the same database, we then critically review several previously proposed OSA classification approaches. The influence and possible interference of other clinical variables or characteristics available for our OSA population (age, height, weight, body mass index, and cervical perimeter) are also studied.

Results:
The poor results obtained when estimating AHI using supervectors or i-vectors followed by SVR contrast with the positive results reported by previous research. This fact prompted a careful review of these approaches, also testing some reported results over our database. Several methodological limitations and deficiencies were detected that may have led to over-optimistic results.

Conclusions:
The methodological deficiencies observed after critically reviewing previous research can be relevant examples of potential pitfalls when using machine learning techniques for diagnostic applications. We have found two common limitations that can explain the likelihood of false discovery in previous research: (1) the use of prediction models derived from sources, such as speech, that are also correlated with other patient characteristics (age, height, sex, …) acting as confounding factors; and (2) overfitting of feature selection and validation methods when working with a high number of variables compared to the number of cases. We hope this study will not only serve as a useful example of relevant issues when using machine learning for medical diagnosis, but will also help guide further research on the connection between speech and OSA.
Background:
Sleep disorders are receiving increased attention as a cause of daytime sleepiness, impaired work performance and traffic accidents, and they are associated with hypertension, heart failure, arrhythmia and diabetes. Among sleep disorders, obstructive sleep apnea (OSA) is the most frequent one [1]. OSA is characterized by recurring episodes of breathing pauses during sleep, greater than 10 s at a time, caused by a blockage of the upper airway (UA) at the level of the pharynx. The gold standard for sleep apnea diagnosis is the polysomnography (PSG) test [2]. This test requires an overnight stay of the patient at the sleep unit of a hospital to monitor breathing patterns, heart rhythm and limb movements. As a result of this test, the apnea–hypopnea index (AHI) is computed as the average number of apnea and hypopnea episodes (total and partial breath cessation episodes, respectively) per hour of sleep. Because of its high reliability, this index is used to describe the severity of the patient's condition: a low AHI (AHI <10) indicates a healthy subject, 10 ≤ AHI ≤ 30 indicates a mild OSA patient, and an AHI above 30 is associated with severe OSA. Waiting lists for PSG may exceed 1 year in some countries such as Spain [3]. Therefore, faster and less costly alternatives have been proposed for early OSA detection and severity assessment, and speech-based methods are among them. The rationale of using speech analysis in OSA assessment can be found in early works such as the one by Davidson et al. [4], where the evolutionary changes in the acquisition of speech are connected to the appearance of OSA on an anatomical basis. Several studies have shown physical alterations in OSA patients, such as craniofacial abnormalities, dental occlusion, a longer distance between the hyoid bone and the mandibular plane, relaxed pharyngeal soft tissues and a large tongue base, that generally cause a longer and more collapsible upper airway (UA).
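The AHI severity bands defined above (AHI <10 healthy/control, 10–30 mild, >30 severe) can be sketched as a small helper; the function name is ours, not from the paper:

```python
def ahi_severity(ahi: float) -> str:
    """AHI bands as defined in the text: AHI < 10 healthy/control,
    10 <= AHI <= 30 mild OSA, AHI > 30 severe OSA."""
    if ahi < 10:
        return "healthy"
    if ahi <= 30:
        return "mild"
    return "severe"
```

These are exactly the cut-offs used later to define the control (AHI <10) and severe OSA (AHI >30) groups in the classification experiments.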
Consequently, abnormal or particular speech features may be expected in OSA speakers owing to the altered structure or function of their UA. Early approaches to speech-based OSA detection can be found in [5] and [6]. In [5], the authors used perceptive speech descriptors (related to articulation, phonation and resonance) to correctly identify 96.3 % of normal (healthy) subjects, though only 63.0 % of sleep apnea speakers were detected. The use of acoustic analysis of speech for OSA detection was first presented in [7] and [8]. Fiz et al. [7] examined the harmonic structure of vowel spectra, finding a narrower frequency range for OSA speakers, which may point to differences in laryngeal behavior between OSA and non-OSA speakers. Later on, Robb et al. [8] presented an acoustic analysis of vocal tract formant frequencies and bandwidths, thus focusing on the supra-laryngeal level, where OSA-related alterations should have a larger impact according to the pathogenesis of the disorder. These early contributions have driven recent proposals for using automatic speech processing in OSA detection, such as [9–14]. Different approaches, generally using machine learning techniques, have been studied for the Hebrew [9, 14] and Spanish [10–13] languages. Results have been reported for different types of speech (i.e., sustained and/or continuous speech) [9, 11, 13], different speech features [9, 12, 13], and the modeling of different linguistic units [11]. Speech recorded from two distinct positions, upright or seated and supine or stretched, has also been considered [13, 15]. Despite the positive results reported in these previous studies (including ours), as will be presented in the "Discussion" section, we have found contradictory results when applying the proposed methods over our large clinical database composed of speech samples from 426 OSA male speakers. The next section describes a new method for estimating the AHI using state-of-the-art speaker voice characterization technologies.
This same approach has recently been tested and demonstrated to be effective in the estimation of other speaker characteristics such as age [16] and height [17]. However, as can be seen in the "Results" section, only very limited performance is found when this approach is used for AHI prediction. These poor results contrast with the positive results reported by previous research and motivated their careful review. The review (presented in the "Discussion" section) reveals some common limitations and deficiencies in the development and validation of machine learning techniques, such as overfitting and false discovery (i.e., finding spurious or indirect associations) [18], that may have led to over-optimistic previous results. Therefore, our study can represent an important and useful example to illustrate the potential pitfalls in the development of machine learning techniques for diagnostic applications, as is being identified by the biomedical engineering research community [19]. As we conclude at the end of the paper, we hope that our study will not only be useful in helping the development of machine learning techniques in biomedical engineering research, but will also help guide future research on the connection between speech and OSA.
Background: Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). The altered UA structure or function in OSA speakers has led to the hypothesis that automatic analysis of speech could be used for OSA assessment. In this paper we critically review several approaches that use speech analysis and machine learning techniques for OSA detection, and discuss the limitations that can arise when using machine learning techniques for diagnostic applications. Methods: A large speech database including 426 male Spanish speakers suspected of suffering from OSA and referred to a sleep disorders unit was used to study the clinical validity of several proposals that use machine learning techniques to predict the apnea-hypopnea index (AHI), which describes the severity of a patient's condition, or to classify individuals according to their OSA severity. We first evaluate AHI prediction using state-of-the-art speaker recognition technologies: speech spectral information is modelled using supervector or i-vector techniques, and AHI is predicted through support vector regression (SVR). Using the same database, we then critically review several previously proposed OSA classification approaches. The influence and possible interference of other clinical variables or characteristics available for our OSA population (age, height, weight, body mass index, and cervical perimeter) are also studied. Results: The poor results obtained when estimating AHI using supervectors or i-vectors followed by SVR contrast with the positive results reported by previous research. This prompted a careful review of those approaches, including testing some reported results over our database. Several methodological limitations and deficiencies were detected that may have led to overoptimistic results. 
Conclusions: The methodological deficiencies observed after critically reviewing previous research can serve as relevant examples of potential pitfalls when using machine learning techniques for diagnostic applications. We found two common limitations that can explain the likelihood of false discovery in previous research: (1) the use of prediction models derived from sources, such as speech, that are also correlated with other patient characteristics (age, height, sex,…) acting as confounding factors; and (2) overfitting of feature selection and validation methods when working with a high number of variables compared to the number of cases. We hope this study will not only be a useful example of relevant issues when using machine learning for medical diagnosis, but will also help guide further research on the connection between speech and OSA.
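The regression stage of the pipeline the abstract describes (fixed-length speaker embeddings mapped to AHI through support vector regression) can be sketched as below. This is a hedged illustration only: the GMM supervector / i-vector extraction is not reproduced, and all data, dimensions and hyperparameters are synthetic assumptions, not the paper's.

```python
# Minimal sketch of the SVR regression stage on i-vector-like embeddings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n_speakers, dim = 300, 400                     # e.g. 400-dim embeddings (assumption)
X = rng.normal(size=(n_speakers, dim))         # stand-ins for supervectors / i-vectors
# Synthetic AHI with a weak dependence on a few embedding dimensions:
ahi = np.clip(30 + X[:, :5].sum(axis=1) * 4
              + rng.normal(scale=5, size=n_speakers), 0, None)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X[:200], ahi[:200])                  # train on 200 speakers
pred = model.predict(X[200:])                  # predict AHI for 100 held-out speakers
```

A held-out split like this is the minimum needed to report honest prediction error; the paper's point is that even this setup, on a large database, gave only poor AHI estimates.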
22,260
464
16
[ "usepackage", "ahi", "document", "speech", "osa", "clinical", "results", "variables", "end", "vector" ]
[ "test", "test" ]
null
[CONTENT] Obstructive sleep apnea | Speech | Clinical variables | Speaker’s voice characterization | Supervector | Gaussian mixture models | i-vector | Support vector regression [SUMMARY]
null
[CONTENT] Obstructive sleep apnea | Speech | Clinical variables | Speaker’s voice characterization | Supervector | Gaussian mixture models | i-vector | Support vector regression [SUMMARY]
[CONTENT] Obstructive sleep apnea | Speech | Clinical variables | Speaker’s voice characterization | Supervector | Gaussian mixture models | i-vector | Support vector regression [SUMMARY]
[CONTENT] Obstructive sleep apnea | Speech | Clinical variables | Speaker’s voice characterization | Supervector | Gaussian mixture models | i-vector | Support vector regression [SUMMARY]
[CONTENT] Obstructive sleep apnea | Speech | Clinical variables | Speaker’s voice characterization | Supervector | Gaussian mixture models | i-vector | Support vector regression [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Diagnosis, Computer-Assisted | Female | Humans | Machine Learning | Male | Middle Aged | Polysomnography | Sleep Apnea, Obstructive | Speech | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Diagnosis, Computer-Assisted | Female | Humans | Machine Learning | Male | Middle Aged | Polysomnography | Sleep Apnea, Obstructive | Speech | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Diagnosis, Computer-Assisted | Female | Humans | Machine Learning | Male | Middle Aged | Polysomnography | Sleep Apnea, Obstructive | Speech | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Diagnosis, Computer-Assisted | Female | Humans | Machine Learning | Male | Middle Aged | Polysomnography | Sleep Apnea, Obstructive | Speech | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Diagnosis, Computer-Assisted | Female | Humans | Machine Learning | Male | Middle Aged | Polysomnography | Sleep Apnea, Obstructive | Speech | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] usepackage | ahi | document | speech | osa | clinical | results | variables | end | vector [SUMMARY]
null
[CONTENT] usepackage | ahi | document | speech | osa | clinical | results | variables | end | vector [SUMMARY]
[CONTENT] usepackage | ahi | document | speech | osa | clinical | results | variables | end | vector [SUMMARY]
[CONTENT] usepackage | ahi | document | speech | osa | clinical | results | variables | end | vector [SUMMARY]
[CONTENT] usepackage | ahi | document | speech | osa | clinical | results | variables | end | vector [SUMMARY]
[CONTENT] osa | sleep | speech | early | learning techniques | machine learning techniques | results | apnea | sleep apnea | osa detection [SUMMARY]
null
[CONTENT] ahi | confidence | index | significant 01 level | 01 | significant 01 | 01 level | correlation | estimation | results [SUMMARY]
[CONTENT] learning | machine learning | pitfalls | machine | research | studies | speech | biomedical | machine learning techniques | potential [SUMMARY]
[CONTENT] usepackage | ahi | document | speech | osa | results | male | gmm | 10 | vectors [SUMMARY]
[CONTENT] usepackage | ahi | document | speech | osa | results | male | gmm | 10 | vectors [SUMMARY]
[CONTENT] UA ||| UA ||| [SUMMARY]
null
[CONTENT] SVR ||| ||| [SUMMARY]
[CONTENT] ||| two | 1 | 2 ||| [SUMMARY]
[CONTENT] UA ||| UA ||| ||| 426 | Spanish ||| ||| first | AHI ||| ||| ||| ||| SVR ||| ||| ||| ||| two | 1 | 2 ||| [SUMMARY]
[CONTENT] UA ||| UA ||| ||| 426 | Spanish ||| ||| first | AHI ||| ||| ||| ||| SVR ||| ||| ||| ||| two | 1 | 2 ||| [SUMMARY]
A cluster randomised controlled trial of a telephone-based intervention targeting the home food environment of preschoolers (The Healthy Habits Trial): the effect on parent fruit and vegetable consumption.
25540041
The home food environment is an important setting for the development of dietary patterns in childhood. Interventions that support parents to modify the home food environment for their children, however, may also improve parent diet. The purpose of this study was to assess the impact of a telephone-based intervention targeting the home food environment of preschool children on the fruit and vegetable consumption of parents.
BACKGROUND
In 2010, 394 parents of 3-5 year-old children from 30 preschools in the Hunter region of Australia were recruited to this cluster randomised controlled trial and were randomly assigned to an intervention or control group. Intervention group parents received four weekly 30-minute telephone calls and written resources. The scripted calls focused on fruit and vegetable availability and accessibility, parental role-modelling, and supportive home food routines. Two items from the Australian National Nutrition Survey were used to assess the average number of serves of fruit and vegetables consumed each day by parents at baseline and 2-, 6-, 12-, and 18-months later, using generalised estimating equations (adjusted for baseline values and clustering by preschool) and an intention-to-treat approach.
METHODS
At each follow-up, vegetable consumption among intervention parents significantly exceeded that of controls. At 2-months the difference was 0.71 serves (95% CI: 0.58-0.85, p < 0.0001), and at 18-months the difference was 0.36 serves (95% CI: 0.10-0.61, p = 0.0067). Fruit consumption among intervention parents significantly exceeded that of control parents at the 2-, 12- and 18-month follow-ups; the difference at 2-months was 0.26 serves (95% CI: 0.12-0.40, p = 0.0003), and 0.26 serves was maintained at 18-months (95% CI: 0.10-0.43, p = 0.0015).
RESULTS
A four-contact telephone-based intervention that focuses on changing characteristics of preschoolers' home food environment can increase parents' fruit and vegetable consumption. (ANZCTR12609000820202).
CONCLUSIONS
[ "Adult", "Australia", "Child, Preschool", "Cluster Analysis", "Diet", "Feeding Behavior", "Female", "Follow-Up Studies", "Food, Organic", "Fruit", "Health Behavior", "Health Promotion", "Humans", "Male", "Socioeconomic Factors", "Telephone", "Treatment Outcome", "Vegetables" ]
4304182
Background
From a young age many children fail to meet minimum dietary guidelines for fruit and vegetable consumption [1,2]. Dietary patterns established in childhood track into adulthood [3], and insufficient childhood consumption of fruit and vegetables is linked to an increased risk of chronic disease in adults [4,5]. As such, increasing the fruit and vegetable consumption of children represents a global health priority [6]. Ecological models suggest that the home food environment is an important setting for the development of dietary patterns in childhood [7]. Parents are gatekeepers to the home food environment [7,8] through making foods available and accessible to children [9], role modelling [10], and establishing eating rules and encouraging specific eating behaviours [11]. In addition, parents’ own behaviours may be modified by the environment they establish for their children. As such, interventions that focus on changing the home food environment may impact the dietary patterns of parents as well as their children. A recent systematic review of interventions to increase fruit and vegetable consumption among children aged 0–5 years [12] identified only one study attempting to change the home food environment. A four-contact, home-visiting program increased parent fruit and vegetable consumption, but did not change child consumption [13]. Since the review, the Infant Feeding Activity and Nutrition Trial (InFANT) showed that a parent intervention to improve infant diet, which also included strategies targeting some aspects of the home food environment, improved maternal diet quality [14]. The intervention was delivered via parenting groups over the first 18 months of the infants’ lives, and focused on teaching parenting skills to support the development of positive diet behaviours in infancy. 
Furthermore, the Healthy Habits trial demonstrated that a four-contact telephone-based intervention with parents that focused on creating a supportive home food environment could increase fruit and vegetable consumption among 3–5 year-old children [15]. After 12 months, the combined fruit and vegetable consumption of intervention children was significantly greater than consumption of control children [16]. However, after 18 months the difference between groups was no longer significant [16]. This intervention supported parents to make changes to their home food environment associated with higher fruit and vegetable consumption in children: increasing the availability and accessibility of fruit and vegetables in the home, increasing parental role-modelling, and introducing supportive home food routines [17-19]. There is a well-established literature regarding effective interventions to increase adult fruit and vegetable consumption, with systematic reviews finding evidence in favour of behavioural interventions [20,21], specifically those utilising face-to-face nutrition education or counselling [22]. A review of environmental approaches to encourage healthy eating highlighted the potential for home-based environmental interventions; however, the authors concluded there was a paucity of such interventions [23]. The current study attempts to address this gap in the literature and investigate the efficacy of a home food environment intervention on the dietary behaviours of parents. Specifically, this paper describes the changes in the average daily serves of fruit and of vegetables consumed by parents at 2-, 6-, 12- and 18-months, the secondary outcomes of the Healthy Habits trial. It was hypothesised that consumption among intervention parents would exceed that of controls at each follow-up time point.
null
null
Results
A description of the study response rates and attrition is provided in detail elsewhere [15,16]. Of the 394 parents recruited into the study, 78% of those allocated to the intervention group and 88% of those allocated to the control group provided 18-month follow-up data (Figure 1: CONSORT flowchart). Most parents who completed the baseline survey were female (96%) and university educated (47%), with a household income over AUS$100,000 per year (41%); the average age was 35.4 years (SD = 5.4), and parents had an average of 2.3 children under 16 years (SD = 0.8) [32]. The participant characteristics by group are shown in Table 1.

Table 1. Participant characteristics at baseline [15], mean (SD) or %.

  Characteristic                           Intervention (n = 208)   Control (n = 186)
  Age (years)                              35.2 (5.6)               35.7 (5.0)
  Gender, female                           95.2%                    96.8%
  University education                     45.2%                    49.5%
  Annual household income (>AU$100,000)    42.4%                    40.2%
  Aboriginal or Torres Strait Islander     1.0%                     3.2%
  Number of children <16 y.o.              2.3 (0.8)                2.3 (0.7)

Vegetable consumption
At each follow-up, intervention parents consumed significantly more vegetable serves than control parents (Table 2). Effect sizes ranged from 0.36 to 0.71 serves per day (approximately 27–53 grams) [33]. Although the sensitivity analysis attenuated the effect size (0.22-0.57 serves per day), the between-group difference remained statistically significant at each follow-up (Table 3).

Table 2. Changes in participant consumption of vegetables and fruit (mean daily serves) using all available data. Adjusted values (*) account for the baseline value of daily vegetable or fruit serves, respectively, and for clustering by preschool.

  Vegetable consumption
            Control (SD)   Control* (SD)   Intervention (SD)   Intervention* (SD)   Difference* (95% CI)   P-value
  Baseline  3.05 (1.34)    -               3.25 (1.32)         -                    -                      -
  2 mon     3.08 (1.26)    3.11 (0.68)     3.91 (1.41)         3.92 (0.64)          0.71 (0.58-0.85)       <0.0001
  6 mon     3.03 (1.51)    3.04 (0.68)     3.61 (1.40)         3.60 (0.63)          0.43 (0.19-0.68)       0.0005
  12 mon    3.04 (1.37)    3.01 (0.80)     3.66 (1.77)         3.65 (0.74)          0.51 (0.30-0.73)       <0.0001
  18 mon    3.06 (1.24)    3.06 (0.52)     3.53 (1.36)         3.53 (0.48)          0.36 (0.10-0.61)       0.0067

  Fruit consumption
            Control (SD)   Control* (SD)   Intervention (SD)   Intervention* (SD)   Difference* (95% CI)   P-value
  Baseline  1.76 (1.03)    -               1.83 (1.08)         -                    -                      -
  2 mon     1.82 (1.04)    1.83 (0.69)     2.17 (1.08)         2.16 (0.74)          0.26 (0.12-0.40)       0.0003
  6 mon     1.95 (0.97)    1.96 (0.61)     2.04 (1.08)         2.04 (0.63)          0.06 (−0.14-0.26)      0.5405
  12 mon    1.81 (1.07)    1.81 (0.53)     2.08 (1.08)         2.09 (0.59)          0.22 (0.04-0.39)       0.0153
  18 mon    1.93 (1.03)    1.93 (0.54)     2.24 (1.07)         2.24 (0.57)          0.26 (0.10-0.43)       0.0015

Table 3. Changes in participant consumption of vegetables and fruit (mean daily serves): sensitivity analysis using baseline observation carried forward. Adjusted values (*) account for the baseline value of daily vegetable or fruit serves, respectively, and for clustering by preschool.

  Vegetable consumption
            Control (SD)   Control* (SD)   Intervention (SD)   Intervention* (SD)   Difference* (95% CI)   P-value
  Baseline  3.05 (1.34)    -               3.25 (1.32)         -                    -                      -
  2 mon     3.09 (1.24)    3.11 (0.73)     3.82 (1.45)         3.79 (0.72)          0.57 (0.46-0.68)       <0.0001
  6 mon     3.11 (1.49)    3.10 (0.78)     3.56 (1.44)         3.54 (0.76)          0.32 (0.11-0.53)       0.0029
  12 mon    3.05 (1.34)    3.03 (0.89)     3.56 (1.73)         3.55 (0.87)          0.39 (0.20-0.57)       <0.0001
  18 mon    3.11 (1.24)    3.11 (0.69)     3.44 (1.41)         3.43 (0.68)          0.22 (0.01-0.43)       0.0374

  Fruit consumption
            Control (SD)   Control* (SD)   Intervention (SD)   Intervention* (SD)   Difference* (95% CI)   P-value
  Baseline  1.76 (1.03)    -               1.83 (1.08)         -                    -                      -
  2 mon     1.81 (1.04)    1.81 (0.71)     2.08 (1.06)         2.08 (0.75)          0.21 (0.08-0.34)       0.0015
  6 mon     1.91 (1.01)    1.92 (0.70)     2.03 (1.10)         2.03 (0.73)          0.06 (−0.11-0.23)      0.4961
  12 mon    1.79 (1.07)    1.79 (0.60)     1.99 (1.04)         2.00 (0.63)          0.16 (0.01-0.31)       0.0382
  18 mon    1.89 (1.04)    1.89 (0.63)     2.13 (1.09)         2.13 (0.66)          0.19 (0.04-0.34)       0.0139

Fruit consumption
With the exception of 6-months, fruit consumption in the intervention group exceeded that of the control group at each follow-up, with effect sizes of 0.06-0.26 serves per day (9–39 grams) [33] (Table 2). The between-group difference remained significant when baseline values were substituted for missing values in the sensitivity analysis (0.06-0.21 serves per day) (Table 3).
Conclusion
A four-contact telephone-based intervention that focuses on changing characteristics of preschoolers’ home food environment can increase parents’ fruit and vegetable consumption. These results could inform the development of public health nutrition interventions attempting to improve the diet of preschoolers and their parents.
[ "Intervention", "Control", "Data collection & measures", "Analysis", "Vegetable consumption", "Fruit consumption" ]
[ "The intervention, described in greater detail elsewhere [24], was developed in conjunction with a multi-disciplinary advisory group that included accredited practicing dietitians, psychologists specialising in parenting, and health promotion professionals. The intervention was conceptually guided by the model of family-based intervention used in the treatment and prevention of childhood obesity, as proposed by Golan and colleagues [25]. The model is grounded in socio-ecological theory and attempts to introduce new familial norms regarding healthy eating. The intervention was previously piloted in a small sample which demonstrated effectiveness, feasibility of delivery, and acceptability to parents [26]. The intervention was delivered through a series of four weekly 30-minute telephone calls. The calls were scripted and delivered by experienced telephone interviewers. The script content and homework activities were tailored based on parents’ responses and focused on; fruit and vegetable availability and accessibility, parental role-modelling, and supportive home food routines (e.g. children having set meal and snack times). Behaviour change techniques such as goal setting and role-modelling [27] were built into the script. The intervention also consisted of written resources. Intervention parents received the ‘Healthy Budget Bites’ cookbook, which was developed locally and was specifically designed to encourage healthy eating through the provision of simple, inexpensive recipes [28]. The Healthy Habits guidebook was designed to accompany each of the calls. It provided a summary of the content of each call, as well as factsheets with more detail about each of the included topics. There was dedicated space in the guidebook where participants were encouraged to record their goals and activities for each week. Intervention parents also received a pad of weekly meal planner templates. The intervention was delivered between April and December 2010. 
Approximately 6% of all intervention calls were monitored by members of the research team, with results indicating that 97% of key content areas of the script were covered, and in 80% of calls the telephone interviewers “rarely” deviated from the script [15]. Of the intervention parents, 181 (87%) completed all intervention calls [15], and the median number of call attempts per completed intervention call was three attempts for the Week 1 call, and two attempts for the calls in Weeks 2, 3 & 4.", "Parents were mailed printed information about Australian dietary guidelines [29]. They received no further contact until the follow-up assessments.", "Telephone interviewers, blind to group allocation, collected data at baseline (from April to October 2010) and 2-, 6-, 12-, and 18-months later. Items from the Australian National Nutrition Survey [30] were used to assess the average daily serves of fruit and vegetables consumed by parents. (How many serves of vegetables do you usually eat each day? One adult serve is a ½ cup of cooked vegetables or 1 cup of salad vegetables. How many serves of fruit do you usually eat each day? An adult serve is 1 medium piece or 2 small pieces of fruit or 1 cup of diced pieces). A study of 1,598 Australian adults found these items were significantly associated with biomarkers of fruit and vegetable intake (alpha- and beta-carotene and red-cell folate) [31].", "At each follow-up, generalised estimating equations were used to compare parents’ mean daily fruit and vegetable serves between groups. Fruit and vegetable outcomes were analysed separately. The analyses were adjusted for clustering by preschool and baseline values (i.e. baseline daily serves of vegetables and fruits respectively). An intention-to-treat approach was utilised, whereby participants were analysed based on the group to which they were originally allocated. All available data was used in the initial analysis. 
A sensitivity analysis was also conducted whereby baseline values were substituted for missing data (Baseline Observation Carried Forward). The trial was powered based upon the primary outcome (children’s fruit and vegetable consumption). However, using the same assumptions it was calculated that the sample would allow a between group detectable difference of 0.33 and 0.43 daily serves of fruit and vegetables respectively, with 80% power at the 0.05 significance level, after 18 months.", "At each follow-up, intervention parents consumed significantly more vegetable serves than control parents (Table 2). Effect sizes ranged from 0.36 to 0.71 serves per day (approximately 27–53 grams) [33]. Although the sensitivity analysis attenuated the effect size (0.22-0.57 serves per day), the between-group difference remained statistically significant at each follow-up (Table 3).Table 2\nChanges in participant consumption of vegetables and fruit (mean daily serves) using all available data\n\nVegetable consumption (Mean daily serves)\n\nFruit consumption (Mean daily serves)\n\nControl (SD)\n\nControl* (SD)\n\nIntervention (SD)\n\nIntervention* (SD)\n\nBetween group difference* (95% CI)\n\nP-value\n\nControl (SD)\n\nControl** (SD)\n\nIntervention (SD)\n\nIntervention** (SD)\n\nBetween group difference ** (95% CI)\n\nP-value\nBaseline3.05 (1.34)-3.25 (1.32)---1.76 (1.03)-1.83 (1.08)---2 mon3.08 (1.26)3.11 (0.68)3.91 (1.41)3.92 (0.64)0.71 (0.58-0.85)<0.00011.82 (1.04)1.83 (0.69)2.17 (1.08)2.16 (0.74)0.26 (0.12-0.40)0.00036 mon3.03 (1.51)3.04 (0.68)3.61 (1.40)3.60 (0.63)0.43 (0.19-0.68)0.00051.95 (0.97)1.96 (0.61)2.04 (1.08)2.04 (0.63)0.06 (−0.14-0.26)0.540512 mon3.04 (1.37)3.01 (0.80)3.66 (1.77)3.65 (0.74)0.51 (0.30-0.73<0.00011.81 (1.07)1.81 (0.53)2.08 (1.08)2.09 (0.59)0.22 (0.04-0.39)0.015318 mon3.06 (1.24)3.06 (0.52)3.53 (1.36)3.53 (0.48)0.36 (0.10-0.61)0.00671.93 (1.03)1.93 (0.54)2.24 (1.07)2.24 (0.57)0.26 (0.10-0.43)0.0015*Adjusted for baseline value of daily vegetable serves 
and clustering by preschool.**Adjusted for baseline value of daily fruit serves and clustering by preschool.Table 3\nChanges in participant consumption of vegetables and fruit (mean daily serves): sensitivity analysis using baseline observation carried forward\n\nVegetable consumption (Mean daily serves)\n\nFruit consumption (Mean daily serves)\n\nControl (SD)\n\nControl* (SD)\n\nIntervention (SD)\n\nIntervention* (SD)\n\nBetween group difference * (95% CI)\n\nP-value\n\nControl (SD)\n\nControl** (SD)\n\nIntervention (SD)\n\nIntervention** (SD)\n\nBetween group difference ** (95% CI)\n\nP-value\nBaseline3.05 (1.34)-3.25 (1.32)---1.76 (1.03)-1.83 (1.08)---2 mon3.09 (1.24)3.11 (0.73)3.82 (1.45)3.79 (0.72)0.57 (0.46-0.68)<0.00011.81 (1.04)1.81 (0.71)2.08 (1.06)2.08 (0.75)0.21 (0.08-0.34)0.00156 mon3.11 (1.49)3.10 (0.78)3.56 (1.44)3.54 (0.76)0.32 (0.11-0.53)0.00291.91 (1.01)1.92 (0.70)2.03 (1.10)2.03 (0.73)0.06 (−0.11-0.23)0.496112 mon3.05 (1.34)3.03 (0.89)3.56 (1.73)3.55 (0.87)0.39 (0.20-0.57)<0.00011.79 (1.07)1.79 (0.60)1.99 (1.04)2.00 (0.63)0.16 (0.01-0.310.038218 mon3.11 (1.24)3.11 (0.69)3.44 (1.41)3.43 (0.68)0.22 (0.01-0.43)0.03741.89 (1.04)1.89 (0.63)2.13 (1.09)2.13 (0.66)0.19 (0.04-0.34)0.0139*Adjusted for baseline value of daily vegetable serves and clustering by preschool.**Adjusted for baseline value of daily fruit serves and clustering by preschool.\n\nChanges in participant consumption of vegetables and fruit (mean daily serves) using all available data\n\n*Adjusted for baseline value of daily vegetable serves and clustering by preschool.\n**Adjusted for baseline value of daily fruit serves and clustering by preschool.\n\nChanges in participant consumption of vegetables and fruit (mean daily serves): sensitivity analysis using baseline observation carried forward\n\n*Adjusted for baseline value of daily vegetable serves and clustering by preschool.\n**Adjusted for baseline value of daily fruit serves and clustering by preschool.", "With the exception of 
6-months, fruit consumption in the intervention group exceeded that of the control group at each follow-up, with effect sizes of 0.06-0.26 serves per day (9–39 grams) [33] (Table 2). The between-group difference remained significant when baseline values were substituted for missing values in the sensitivity analysis (0.06-0.21 serves per day) (Table 3)." ]
[ null, null, null, null, null, null ]
[ "Background", "Methods", "Intervention", "Control", "Data collection & measures", "Analysis", "Results", "Vegetable consumption", "Fruit consumption", "Discussion", "Conclusion" ]
[ "From a young age many children fail to meet minimum dietary guidelines for fruit and vegetable consumption [1,2]. Dietary patterns established in childhood track into adulthood [3], and insufficient childhood consumption of fruit and vegetables is linked to an increased risk of chronic disease in adults [4,5]. As such increasing the fruit and vegetable consumption of children represents a global health priority [6]. Ecological models suggest that the home food environment is an important setting for the development of dietary patterns in childhood [7]. Parents are gatekeepers to the home food environment [7,8] through: making foods available and accessible to children [9]: role modelling [10]; and establishing eating rules and encouraging specific eating behaviours [11]. In addition, parents’ own behaviours may be modified by the environment they establish for their children. As such, interventions that focus on changing the home food environment may impact the dietary patterns of parents as well as their children.\nA recent systematic review of interventions to increase fruit and vegetable consumption among children aged 0–5 years [12] identified only one study attempting to change the home food environment. A four-contact, home-visiting program increased parent fruit and vegetable consumption, but did not change child consumption [13]. Since the review, the Infant Feeding Activity and Nutrition Trial (InFANT) showed that a parent intervention to improve infant diet, which also included strategies targeting some aspects of the home food environment, improved maternal diet quality [14]. The intervention was delivered via parenting groups over the first 18 months of the infants’ life, and focused on teaching parenting skills to support the development of positive diet behaviours in infancy. 
Furthermore, the Healthy Habits trial demonstrated that a four-contact telephone-based intervention with parents that focused on creating a supportive home food environment could increase fruit and vegetable consumption among 3–5 year-old children [15]. After 12 months, the combined fruit and vegetable consumption of intervention children was significantly greater than consumption of control children [16]. However, after 18 months the difference between groups was no longer significant [16]. This intervention supported parents to make changes to their home food environment associated with higher fruit and vegetable consumption in children: increasing the availability and accessibility of fruit and vegetables in the home, increasing parental role-modelling, and introducing supportive home food routines [17-19].\nThere is a well-established literature regarding effective interventions to increase adult fruit and vegetable consumption, with systematic reviews finding evidence in favour of behavioural interventions [20,21], and specifically those utilising face-to-face nutrition education or counselling [22]. A review of environmental approaches to encourage healthy eating highlighted the potential for home-based environmental interventions; however, the authors concluded there was a paucity of such interventions [23]. The current study attempts to address this gap in the literature and investigate the efficacy of a home food environment intervention on the dietary behaviours of parents. Specifically, this paper describes the changes in the average daily serves of fruit and of vegetables consumed by parents at 2-, 6-, 12- and 18-months, the secondary outcomes of the Healthy Habits trial. 
It was hypothesised that consumption among intervention parents would exceed that of controls at each follow-up time point.", "The trial was registered with the Australian New Zealand Clinical Trials Registry on Sept 21 2009 (ANZCTR12609000820202) and approved by the Hunter New England Health Human Research Ethics Committee. A detailed description of the methods employed in this cluster randomised controlled trial has been published elsewhere and is described briefly below [24]. Parents of 3–5 year-old children were recruited to the trial from 30 preschools in the Hunter region of NSW, Australia. Parents were allocated to an intervention (telephone support) or control condition (written information) using block randomisation in a 1:1 ratio based on the preschool of recruitment. Preschool randomisation was conducted by an independent statistician using a random number function in Microsoft Excel. Following collection of baseline data, parents were notified of their group allocation by letter.\n Intervention The intervention, described in greater detail elsewhere [24], was developed in conjunction with a multi-disciplinary advisory group that included accredited practicing dietitians, psychologists specialising in parenting, and health promotion professionals. The intervention was conceptually guided by the model of family-based intervention used in the treatment and prevention of childhood obesity, as proposed by Golan and colleagues [25]. The model is grounded in socio-ecological theory and attempts to introduce new familial norms regarding healthy eating. The intervention was previously piloted in a small sample which demonstrated effectiveness, feasibility of delivery, and acceptability to parents [26]. The intervention was delivered through a series of four weekly 30-minute telephone calls. The calls were scripted and delivered by experienced telephone interviewers. 
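The cluster allocation described above (preschools randomised 1:1 with a random number function) can be sketched as follows; the seed and preschool identifiers are illustrative assumptions, not details from the trial.

```python
import random

def randomise_preschools(preschool_ids, seed=42):
    """Sketch of 1:1 block randomisation at the cluster (preschool) level:
    shuffle the preschools and split the list in half.
    The seed and identifiers are illustrative only."""
    ids = list(preschool_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

# 30 hypothetical preschool identifiers, matching the trial's 30 clusters
allocation = randomise_preschools([f"preschool_{i:02d}" for i in range(1, 31)])
```

Each parent would then inherit the allocation of their preschool of recruitment, which is why the analyses later adjust for clustering by preschool.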
The script content and homework activities were tailored based on parents’ responses and focused on: fruit and vegetable availability and accessibility; parental role-modelling; and supportive home food routines (e.g. children having set meal and snack times). Behaviour change techniques such as goal setting and role-modelling [27] were built into the script. The intervention also consisted of written resources. Intervention parents received the ‘Healthy Budget Bites’ cookbook, which was developed locally and was specifically designed to encourage healthy eating through the provision of simple, inexpensive recipes [28]. The Healthy Habits guidebook was designed to accompany each of the calls. It provided a summary of the content of each call, as well as factsheets with more detail about each of the included topics. There was dedicated space in the guidebook where participants were encouraged to record their goals and activities for each week. Intervention parents also received a pad of weekly meal planner templates. The intervention was delivered between April and December 2010. Approximately 6% of all intervention calls were monitored by members of the research team, with results indicating that 97% of key content areas of the script were covered, and in 80% of calls the telephone interviewers “rarely” deviated from the script [15]. Of the intervention parents, 181 (87%) completed all intervention calls [15], and the median number of call attempts per completed intervention call was three attempts for the Week 1 call, and two attempts for the calls in Weeks 2, 3 & 4.\n Control Parents were mailed printed information about Australian dietary guidelines [29]. They received no further contact until the follow-up assessments.\n Data collection & measures Telephone interviewers, blind to group allocation, collected data at baseline (from April to October 2010) and 2-, 6-, 12-, and 18-months later. Items from the Australian National Nutrition Survey [30] were used to assess the average daily serves of fruit and vegetables consumed by parents. (How many serves of vegetables do you usually eat each day? One adult serve is a ½ cup of cooked vegetables or 1 cup of salad vegetables. How many serves of fruit do you usually eat each day? An adult serve is 1 medium piece or 2 small pieces of fruit or 1 cup of diced pieces). 
A study of 1,598 Australian adults found these items were significantly associated with biomarkers of fruit and vegetable intake (alpha- and beta-carotene and red-cell folate) [31].\n Analysis At each follow-up, generalised estimating equations were used to compare parents’ mean daily fruit and vegetable serves between groups. Fruit and vegetable outcomes were analysed separately. The analyses were adjusted for clustering by preschool and baseline values (i.e. baseline daily serves of vegetables and fruits respectively). An intention-to-treat approach was utilised, whereby participants were analysed based on the group to which they were originally allocated. All available data were used in the initial analysis. A sensitivity analysis was also conducted whereby baseline values were substituted for missing data (Baseline Observation Carried Forward). The trial was powered based upon the primary outcome (children’s fruit and vegetable consumption). 
However, using the same assumptions it was calculated that the sample would allow a between-group detectable difference of 0.33 daily serves of fruit and 0.43 daily serves of vegetables, with 80% power at the 0.05 significance level, after 18 months.", "The intervention, described in greater detail elsewhere [24], was developed in conjunction with a multi-disciplinary advisory group that included accredited practicing dietitians, psychologists specialising in parenting, and health promotion professionals. The intervention was conceptually guided by the model of family-based intervention used in the treatment and prevention of childhood obesity, as proposed by Golan and colleagues [25]. The model is grounded in socio-ecological theory and attempts to introduce new familial norms regarding healthy eating. The intervention was previously piloted in a small sample which demonstrated effectiveness, feasibility of delivery, and acceptability to parents [26]. The intervention was delivered through a series of four weekly 30-minute telephone calls. The calls were scripted and delivered by experienced telephone interviewers. The script content and homework activities were tailored based on parents’ responses and focused on: fruit and vegetable availability and accessibility; parental role-modelling; and supportive home food routines (e.g. children having set meal and snack times). Behaviour change techniques such as goal setting and role-modelling [27] were built into the script. The intervention also consisted of written resources. Intervention parents received the ‘Healthy Budget Bites’ cookbook, which was developed locally and was specifically designed to encourage healthy eating through the provision of simple, inexpensive recipes [28]. The Healthy Habits guidebook was designed to accompany each of the calls. It provided a summary of the content of each call, as well as factsheets with more detail about each of the included topics. 
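The data handling behind the analyses described in the Analysis section can be illustrated with a minimal sketch: Baseline Observation Carried Forward (BOCF) imputation followed by a crude between-group mean difference. The records are made up, and the trial itself used generalised estimating equations additionally adjusting for baseline serves and clustering by preschool, which this sketch does not reproduce.

```python
def bocf(records):
    """Baseline Observation Carried Forward: replace a missing
    follow-up value (None) with that participant's baseline value."""
    imputed = []
    for r in records:
        value = r["followup"] if r["followup"] is not None else r["baseline"]
        imputed.append({**r, "followup": value})
    return imputed

def mean_difference(records):
    """Crude unadjusted between-group difference in follow-up serves.
    (The trial used GEE, also adjusting for baseline serves and
    clustering by preschool; that adjustment is omitted here.)"""
    by_group = {"intervention": [], "control": []}
    for r in records:
        by_group[r["group"]].append(r["followup"])
    mean = lambda xs: sum(xs) / len(xs)
    return mean(by_group["intervention"]) - mean(by_group["control"])

# Made-up example records: group, baseline serves, follow-up serves
records = [
    {"group": "intervention", "baseline": 3.0, "followup": 4.0},
    {"group": "intervention", "baseline": 3.5, "followup": None},  # dropout
    {"group": "control", "baseline": 3.0, "followup": 3.0},
    {"group": "control", "baseline": 2.5, "followup": 3.5},
]
diff = mean_difference(bocf(records))
```

Because the dropout's baseline is carried forward, the sensitivity estimate is typically pulled toward no change, which is consistent with the attenuated effect sizes reported for the BOCF analysis below.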
There was dedicated space in the guidebook where participants were encouraged to record their goals and activities for each week. Intervention parents also received a pad of weekly meal planner templates. The intervention was delivered between April and December 2010. Approximately 6% of all intervention calls were monitored by members of the research team, with results indicating that 97% of key content areas of the script were covered, and in 80% of calls the telephone interviewers “rarely” deviated from the script [15]. Of the intervention parents, 181 (87%) completed all intervention calls [15], and the median number of call attempts per completed intervention call was three attempts for the Week 1 call, and two attempts for the calls in Weeks 2, 3 & 4.", "Parents were mailed printed information about Australian dietary guidelines [29]. They received no further contact until the follow-up assessments.", "Telephone interviewers, blind to group allocation, collected data at baseline (from April to October 2010) and 2-, 6-, 12-, and 18-months later. Items from the Australian National Nutrition Survey [30] were used to assess the average daily serves of fruit and vegetables consumed by parents. (How many serves of vegetables do you usually eat each day? One adult serve is a ½ cup of cooked vegetables or 1 cup of salad vegetables. How many serves of fruit do you usually eat each day? An adult serve is 1 medium piece or 2 small pieces of fruit or 1 cup of diced pieces). A study of 1,598 Australian adults found these items were significantly associated with biomarkers of fruit and vegetable intake (alpha- and beta-carotene and red-cell folate) [31].", "At each follow-up, generalised estimating equations were used to compare parents’ mean daily fruit and vegetable serves between groups. Fruit and vegetable outcomes were analysed separately. The analyses were adjusted for clustering by preschool and baseline values (i.e. 
baseline daily serves of vegetables and fruits respectively). An intention-to-treat approach was utilised, whereby participants were analysed based on the group to which they were originally allocated. All available data were used in the initial analysis. A sensitivity analysis was also conducted whereby baseline values were substituted for missing data (Baseline Observation Carried Forward). The trial was powered based upon the primary outcome (children’s fruit and vegetable consumption). However, using the same assumptions it was calculated that the sample would allow a between-group detectable difference of 0.33 daily serves of fruit and 0.43 daily serves of vegetables, with 80% power at the 0.05 significance level, after 18 months.", "Study response rates and attrition are described in detail elsewhere [15,16]. Of the 394 parents recruited into the study, 78% of those allocated to the intervention group and 88% of those allocated to the control group provided 18-month follow-up data (Figure 1).\nFigure 1\nCONSORT flowchart.\nMost parents who completed the baseline survey were female (96%) and university educated (47%), with a household income over AUS$100,000 per year (41%); the average age was 35.4 years (SD = 5.4), and parents had an average of 2.3 children under 16 years (SD = 0.8) [32]. The participant characteristics by group are shown in Table 1.\nTable 1\nParticipant characteristics at baseline [15]\nCharacteristic: Intervention (n = 208) mean (SD)/%; Control (n = 186) mean (SD)/%\nAge (years): 35.2 (5.6); 35.7 (5.0)\nGender (female): 95.2%; 96.8%\nUniversity education: 45.2%; 49.5%\nAnnual household income (>AU$100,000): 42.4%; 40.2%\nAboriginal or Torres Strait Islander: 1.0%; 3.2%\nNumber of children <16 y.o.: 2.3 (0.8); 2.3 (0.7)\n Vegetable consumption At each follow-up, intervention parents consumed significantly more vegetable serves than control parents (Table 2). 
Effect sizes ranged from 0.36 to 0.71 serves per day (approximately 27–53 grams) [33]. Although the sensitivity analysis attenuated the effect sizes (0.22-0.57 serves per day), the between-group difference remained statistically significant at each follow-up (Table 3).\nTable 2\nChanges in participant consumption of vegetables and fruit (mean daily serves) using all available data\nVegetable consumption (mean daily serves):\nBaseline: Control 3.05 (1.34); Intervention 3.25 (1.32)\n2 mon: Control 3.08 (1.26), adjusted* 3.11 (0.68); Intervention 3.91 (1.41), adjusted* 3.92 (0.64); between-group difference* 0.71 (95% CI 0.58 to 0.85); P < 0.0001\n6 mon: Control 3.03 (1.51), adjusted* 3.04 (0.68); Intervention 3.61 (1.40), adjusted* 3.60 (0.63); between-group difference* 0.43 (95% CI 0.19 to 0.68); P = 0.0005\n12 mon: Control 3.04 (1.37), adjusted* 3.01 (0.80); Intervention 3.66 (1.77), adjusted* 3.65 (0.74); between-group difference* 0.51 (95% CI 0.30 to 0.73); P < 0.0001\n18 mon: Control 3.06 (1.24), adjusted* 3.06 (0.52); Intervention 3.53 (1.36), adjusted* 3.53 (0.48); between-group difference* 0.36 (95% CI 0.10 to 0.61); P = 0.0067\nFruit consumption (mean daily serves):\nBaseline: Control 1.76 (1.03); Intervention 1.83 (1.08)\n2 mon: Control 1.82 (1.04), adjusted** 1.83 (0.69); Intervention 2.17 (1.08), adjusted** 2.16 (0.74); between-group difference** 0.26 (95% CI 0.12 to 0.40); P = 0.0003\n6 mon: Control 1.95 (0.97), adjusted** 1.96 (0.61); Intervention 2.04 (1.08), adjusted** 2.04 (0.63); between-group difference** 0.06 (95% CI −0.14 to 0.26); P = 0.5405\n12 mon: Control 1.81 (1.07), adjusted** 1.81 (0.53); Intervention 2.08 (1.08), adjusted** 2.09 (0.59); between-group difference** 0.22 (95% CI 0.04 to 0.39); P = 0.0153\n18 mon: Control 1.93 (1.03), adjusted** 1.93 (0.54); Intervention 2.24 (1.07), adjusted** 2.24 (0.57); between-group difference** 0.26 (95% CI 0.10 to 0.43); P = 0.0015\n*Adjusted for baseline value of daily vegetable serves and clustering by preschool.\n**Adjusted for baseline value of daily fruit serves and clustering by preschool.\nTable 3\nChanges in participant consumption of vegetables and fruit (mean daily serves): sensitivity analysis using baseline observation carried forward\nVegetable consumption (mean daily serves):\nBaseline: Control 3.05 (1.34); Intervention 3.25 (1.32)\n2 mon: Control 3.09 (1.24), adjusted* 3.11 (0.73); Intervention 3.82 (1.45), adjusted* 3.79 (0.72); between-group difference* 0.57 (95% CI 0.46 to 0.68); P < 0.0001\n6 mon: Control 3.11 (1.49), adjusted* 3.10 (0.78); Intervention 3.56 (1.44), adjusted* 3.54 (0.76); between-group difference* 0.32 (95% CI 0.11 to 0.53); P = 0.0029\n12 mon: Control 3.05 (1.34), adjusted* 3.03 (0.89); Intervention 3.56 (1.73), adjusted* 3.55 (0.87); between-group difference* 0.39 (95% CI 0.20 to 0.57); P < 0.0001\n18 mon: Control 3.11 (1.24), adjusted* 3.11 (0.69); Intervention 3.44 (1.41), adjusted* 3.43 (0.68); between-group difference* 0.22 (95% CI 0.01 to 0.43); P = 0.0374\nFruit consumption (mean daily serves):\nBaseline: Control 1.76 (1.03); Intervention 1.83 (1.08)\n2 mon: Control 1.81 (1.04), adjusted** 1.81 (0.71); Intervention 2.08 (1.06), adjusted** 2.08 (0.75); between-group difference** 0.21 (95% CI 0.08 to 0.34); P = 0.0015\n6 mon: Control 1.91 (1.01), adjusted** 1.92 (0.70); Intervention 2.03 (1.10), adjusted** 2.03 (0.73); between-group difference** 0.06 (95% CI −0.11 to 0.23); P = 0.4961\n12 mon: Control 1.79 (1.07), adjusted** 1.79 (0.60); Intervention 1.99 (1.04), adjusted** 2.00 (0.63); between-group difference** 0.16 (95% CI 0.01 to 0.31); P = 0.0382\n18 mon: Control 1.89 (1.04), adjusted** 1.89 (0.63); Intervention 2.13 (1.09), adjusted** 2.13 (0.66); between-group difference** 0.19 (95% CI 0.04 to 0.34); P = 0.0139\n*Adjusted for baseline value of daily vegetable serves and clustering by preschool.\n**Adjusted for baseline value of daily fruit serves and clustering by preschool.\n Fruit consumption With the exception of 6-months, fruit consumption in the intervention group exceeded that of the control group at each follow-up, with effect sizes of 0.06-0.26 serves per day (9–39 grams) [33] (Table 2). The between-group difference remained significant when baseline values were substituted for missing values in the sensitivity analysis (0.06-0.21 serves per day) (Table 3).", "At each follow-up, intervention parents consumed significantly more vegetable serves than control parents (Table 2). Effect sizes ranged from 0.36 to 0.71 serves per day (approximately 27–53 grams) [33]. Although the sensitivity analysis attenuated the effect sizes (0.22-0.57 serves per day), the between-group difference remained statistically significant at each follow-up (Table 3).", "With the exception of 6-months, fruit consumption in the intervention group exceeded that of the control group at each follow-up, with effect sizes of 0.06-0.26 serves per day (9–39 grams) [33] (Table 2). 
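The gram equivalents quoted alongside the effect sizes above are consistent with Australian standard serve sizes (roughly 75 g for a vegetable serve and 150 g for a fruit serve, matching the guideline definitions quoted in the Methods); assuming those serve sizes, the conversion is:

```python
# Assumed Australian standard serve sizes in grams (vegetable ~75 g,
# fruit ~150 g); these reproduce the gram ranges quoted in the Results.
SERVE_GRAMS = {"vegetable": 75.0, "fruit": 150.0}

def serves_to_grams(serves, food):
    """Convert average daily serves to approximate grams per day."""
    return serves * SERVE_GRAMS[food]

# Vegetable effect sizes 0.36-0.71 serves/day -> roughly 27-53 g/day
# Fruit effect sizes 0.06-0.26 serves/day -> roughly 9-39 g/day
```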
The between-group difference remained significant when baseline values were substituted for missing values in the sensitivity analysis (0.06-0.21 serves per day) (Table 3).", "Findings suggest that a telephone-based intervention focused on changing the home food environment of preschool children can increase their parents’ fruit and vegetable consumption, and that increases are sustained up to 18 months post-baseline. A systematic review found that primary prevention interventions targeting healthy adults increased combined fruit and vegetable consumption by 0.1-1.4 serves per day [22]. The increases in consumption in this trial ranged from 0.49 to 0.97 combined serves per day, with an average increase of 0.7 combined serves per day, consistent both with these review findings and with the average increase identified in a systematic review of behavioural interventions to increase fruit and vegetable consumption (0.6 serves per day) [20]. Furthermore, these findings are consistent with other telephone-delivered interventions to increase the fruit and vegetable consumption of adults [34]. Although these reviewed trials [35-37] were of a longer duration and included a greater number of calls than the current trial, telephone interventions consisting of fewer calls have also been successful in changing behavioural outcomes [38,39]. Most notably, a trial of an intervention consisting of written materials, a written plan, and two to three telephone calls (the most comparable intervention approach) increased combined fruit and vegetable consumption by 1.4 serves per day among primary care patients after 12 weeks [40]. The magnitude of the observed effect appears to have clinical significance, with meta-analyses conducted by the World Cancer Research Fund indicating that each 50 gram per day increase in vegetable intake reduced the risks of stomach, oesophageal, and mouth/pharynx/larynx cancers by 15%, 31% and 28% respectively [41]. 
This suggests that intervention approaches that target the home food environment may produce improvements in diet and reduce associated disease risks.\nThese significant findings are particularly noteworthy given that they represent secondary trial outcomes, with the primary intervention aim being to increase preschoolers’ fruit and vegetable consumption. In fact, the changes in secondary trial outcomes were sustained for longer than the primary outcome (child fruit and vegetable consumption) [16]. This may be in part due to the intervention being delivered directly to parents rather than children, and accords with the theoretical underpinnings of the intervention: that changes in familial norms and behaviours are antecedent to behaviour change for children [25]. This is most clearly illustrated through the intervention strategy of role-modelling, which directly relies on changes in parents’ consumption to facilitate changes in children’s consumption. Furthermore, the home food environment strategies that were targeted as part of the intervention required greater input from parents. For example, although increasing the availability of fruit and vegetables in the home is associated with higher consumption among both adults and children [17-19,42], adults must respond more actively to this strategy by making changes in their food purchasing and food preparation behaviours. Findings from child obesity treatment studies suggest that treating parents alone may be more effective than treating the parent and child together [43]. 
Although results from the Healthy Habits trial suggest dietary interventions involving parents can be effective [15,16], it is recommended that future trials of dietary interventions investigate the relative effectiveness of strategies targeting parents alone versus parents and children combined.\nThe non-significant finding for fruit consumption at 6-months reflects a slight increase in consumption of controls coinciding with a slight decrease in consumption of intervention parents. The increase in fruit consumption in the control group most likely reflects increases in the seasonal availability of fruits over the Australian summer period. This argument is strengthened by the similarly elevated fruit consumption among controls at the 18-month follow-up (i.e. the summer period the following year). The decrease in fruit consumption in the intervention group may reflect the typical attenuation of effect size over time [44]. Strategies that help maintain intervention effects are important to maximise the long-term benefits of dietary interventions. The results of this trial suggest that approximately 4 to 5 months post-intervention may be a critical point for the delivery of intervention maintenance strategies. However, further research is warranted.\nThe trial findings should be considered in conjunction with the limitations and strengths of this study. Strengths included the randomised controlled design, and standardised delivery of intervention scripts and data collection surveys. Use of self-reported, brief dietary measures may not represent optimal assessment of dietary intake and represents a limitation of the trial. More accurate assessments may result from alternative assessment methods, such as food records [45].\nIt is recommended that future trials investigate whether changes to the home food environment mediate changes in the fruit and vegetable consumption of children and their parents. 
A recent related study demonstrated that changes in child consumption of non-core foods were mediated by changes in the home food environment [46]. Identification of mediators of fruit and vegetable consumption could facilitate intervention streamlining. Beyond the cost efficiency afforded by telephone-delivery [47], this trial provides preliminary evidence of an additional efficiency; simultaneous increases in the fruit and vegetable consumption of preschool children [15,16] and their parents. Interventions targeting characteristics of the home food environment therefore appear to have substantial public health utility.", "A four-contact telephone-based intervention that focuses on changing characteristics of preschoolers’ home food environment can increase parents’ fruit and vegetable consumption. These results could inform the development of public health nutrition interventions attempting to improve the diet of preschoolers and their parents." ]
[ "introduction", "materials|methods", null, null, null, null, "results", null, null, "discussion", "conclusion" ]
[ "Fruit", "Vegetable", "Randomised controlled trial", "Intervention", "Telephone", "Family", "Home food environment", "Preschool", "Child" ]
Background: From a young age many children fail to meet minimum dietary guidelines for fruit and vegetable consumption [1,2]. Dietary patterns established in childhood track into adulthood [3], and insufficient childhood consumption of fruit and vegetables is linked to an increased risk of chronic disease in adults [4,5]. As such, increasing the fruit and vegetable consumption of children represents a global health priority [6]. Ecological models suggest that the home food environment is an important setting for the development of dietary patterns in childhood [7]. Parents are gatekeepers to the home food environment [7,8] through: making foods available and accessible to children [9]; role modelling [10]; and establishing eating rules and encouraging specific eating behaviours [11]. In addition, parents’ own behaviours may be modified by the environment they establish for their children. As such, interventions that focus on changing the home food environment may impact the dietary patterns of parents as well as their children. A recent systematic review of interventions to increase fruit and vegetable consumption among children aged 0–5 years [12] identified only one study attempting to change the home food environment. A four-contact, home-visiting program increased parent fruit and vegetable consumption, but did not change child consumption [13]. Since the review, the Infant Feeding Activity and Nutrition Trial (InFANT) showed that a parent intervention to improve infant diet, which also included strategies targeting some aspects of the home food environment, improved maternal diet quality [14]. The intervention was delivered via parenting groups over the first 18 months of the infants’ life, and focused on teaching parenting skills to support the development of positive diet behaviours in infancy. 
Furthermore, the Healthy Habits trial demonstrated that a four-contact telephone-based intervention with parents that focused on creating a supportive home food environment could increase fruit and vegetable consumption among 3–5 year-old children [15]. After 12 months, the combined fruit and vegetable consumption of intervention children was significantly greater than consumption of control children [16]. However, after 18 months the difference between groups was no longer significant [16]. This intervention supported parents to make changes to their home food environment associated with higher fruit and vegetable consumption in children: increasing the availability and accessibility of fruit and vegetables in the home, increasing parental role-modelling, and introducing supportive home food routines [17-19]. There is a well-established literature regarding effective interventions to increase adult fruit and vegetable consumption, with systematic reviews finding evidence in favour of behavioural interventions [20,21], and specifically those utilising face-to-face nutrition education or counselling [22]. A review of environmental approaches to encourage healthy eating highlighted the potential for home-based environmental interventions; however, the authors concluded there was a paucity of such interventions [23]. The current study attempts to address this gap in the literature and investigate the efficacy of a home food environment intervention on the dietary behaviours of parents. Specifically, this paper describes the changes in the average daily serves of fruit and of vegetables consumed by parents, at 2-, 6-, 12- and 18-months, the secondary outcomes of the Healthy Habits trial. It was hypothesised that consumption among intervention parents would exceed that of controls at each follow-up time point. 
Methods: The trial was registered with the Australian New Zealand Clinical Trials Registry on Sept 21 2009 (ANZCTR12609000820202) and approved by the Hunter New England Health Human Research Ethics Committee. A detailed description of the methods employed in this cluster randomised controlled trial has been published elsewhere and is described briefly below [24]. Parents of 3–5 year-old children were recruited to the trial from 30 preschools in the Hunter region of NSW, Australia. Parents were allocated to an intervention (telephone support) or control condition (written information) using block randomisation in a 1:1 ratio based on the preschool of recruitment. Preschool randomisation was conducted by an independent statistician using a random number function in Microsoft Excel. Following collection of baseline data, parents were notified of their group allocation by letter. Intervention: The intervention, described in greater detail elsewhere [24], was developed in conjunction with a multi-disciplinary advisory group that included accredited practicing dietitians, psychologists specialising in parenting, and health promotion professionals. The intervention was conceptually guided by the model of family-based intervention used in the treatment and prevention of childhood obesity, as proposed by Golan and colleagues [25]. The model is grounded in socio-ecological theory and attempts to introduce new familial norms regarding healthy eating. The intervention was previously piloted in a small sample which demonstrated effectiveness, feasibility of delivery, and acceptability to parents [26]. The intervention was delivered through a series of four weekly 30-minute telephone calls. The calls were scripted and delivered by experienced telephone interviewers. 
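The 1:1 cluster allocation step can be sketched as follows. This is an illustrative reconstruction only: the trial's statistician used a random number function in Microsoft Excel, not this code, and the preschool identifiers below are hypothetical.

```python
import random

def randomise_preschools(preschools, seed=2009):
    """Allocate preschools 1:1 to intervention or control by shuffling
    and splitting in half. Illustrative sketch, not the trial's method."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = list(preschools)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "intervention": sorted(shuffled[:half]),
        "control": sorted(shuffled[half:]),
    }

# Hypothetical IDs standing in for the 30 recruited preschools.
groups = randomise_preschools([f"preschool_{i:02d}" for i in range(1, 31)])
assert len(groups["intervention"]) == len(groups["control"]) == 15
```

A shuffle-and-split of the full list guarantees the 1:1 ratio exactly; a blocked scheme would additionally balance allocation within smaller blocks as recruitment proceeds.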
The script content and homework activities were tailored based on parents’ responses and focused on: fruit and vegetable availability and accessibility, parental role-modelling, and supportive home food routines (e.g. children having set meal and snack times). Behaviour change techniques such as goal setting and role-modelling [27] were built into the script. The intervention also consisted of written resources. Intervention parents received the ‘Healthy Budget Bites’ cookbook, which was developed locally and was specifically designed to encourage healthy eating through the provision of simple, inexpensive recipes [28]. The Healthy Habits guidebook was designed to accompany each of the calls. It provided a summary of the content of each call, as well as factsheets with more detail about each of the included topics. There was dedicated space in the guidebook where participants were encouraged to record their goals and activities for each week. Intervention parents also received a pad of weekly meal planner templates. The intervention was delivered between April and December 2010. Approximately 6% of all intervention calls were monitored by members of the research team, with results indicating that 97% of key content areas of the script were covered, and in 80% of calls the telephone interviewers “rarely” deviated from the script [15]. Of the intervention parents, 181 (87%) completed all intervention calls [15], and the median number of call attempts per completed intervention call was three attempts for the Week 1 call, and two attempts for the calls in Weeks 2, 3 & 4. Control: Parents were mailed printed information about Australian dietary guidelines [29]. They received no further contact until the follow-up assessments. Data collection & measures: Telephone interviewers, blind to group allocation, collected data at baseline (from April to October 2010) and 2-, 6-, 12-, and 18-months later. Items from the Australian National Nutrition Survey [30] were used to assess the average daily serves of fruit and vegetables consumed by parents. (How many serves of vegetables do you usually eat each day? One adult serve is a ½ cup of cooked vegetables or 1 cup of salad vegetables. How many serves of fruit do you usually eat each day? An adult serve is 1 medium piece or 2 small pieces of fruit or 1 cup of diced pieces). A study of 1,598 Australian adults found these items were significantly associated with biomarkers of fruit and vegetable intake (alpha- and beta-carotene and red-cell folate) [31]. Analysis: At each follow-up, generalised estimating equations were used to compare parents’ mean daily fruit and vegetable serves between groups. Fruit and vegetable outcomes were analysed separately. The analyses were adjusted for clustering by preschool and baseline values (i.e. baseline daily serves of vegetables and fruits respectively). An intention-to-treat approach was utilised, whereby participants were analysed based on the group to which they were originally allocated. All available data were used in the initial analysis. A sensitivity analysis was also conducted whereby baseline values were substituted for missing data (Baseline Observation Carried Forward). The trial was powered based upon the primary outcome (children’s fruit and vegetable consumption). However, using the same assumptions it was calculated that the sample would allow a between-group detectable difference of 0.33 and 0.43 daily serves of fruit and vegetables respectively, with 80% power at the 0.05 significance level, after 18 months. 
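The Baseline Observation Carried Forward substitution used in the sensitivity analysis can be sketched in a few lines. This is a minimal illustration with made-up participant IDs and values, not the trial's analysis code.

```python
def bocf(baseline, follow_up):
    """Baseline Observation Carried Forward: replace a missing
    follow-up value (None) with the participant's baseline value."""
    return {
        pid: follow_up.get(pid) if follow_up.get(pid) is not None else base
        for pid, base in baseline.items()
    }

# Hypothetical daily vegetable serves for three participants;
# participant "p2" is lost to follow-up at 18 months.
baseline = {"p1": 3.0, "p2": 2.5, "p3": 3.5}
follow_up_18m = {"p1": 3.5, "p2": None, "p3": 4.0}

assert bocf(baseline, follow_up_18m) == {"p1": 3.5, "p2": 2.5, "p3": 4.0}
```

Because dropouts are assumed to have made no change from baseline, BOCF is a conservative imputation and typically attenuates effect sizes, consistent with the attenuation reported in the Results.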
Results: A description of the study response rates and attrition is provided in detail elsewhere [15,16]. Of the 394 parents recruited into the study, 78% of those allocated to the intervention group and 88% of those allocated to the control group provided 18-month follow-up data (Figure 1: CONSORT flowchart). Most parents who completed the baseline survey were female (96%) and university educated (47%), with a household income over AUS$100,000 per year (41%); the average age was 35.4 years (SD = 5.4), and parents had an average of 2.3 children under 16 years (SD = 0.8) [32]. The participant characteristics by group are shown in Table 1.

Table 1: Participant characteristics at baseline [15]
Characteristic | Intervention n = 208, mean (SD)/% | Control n = 186, mean (SD)/%
Age (years) | 35.2 (5.6) | 35.7 (5.0)
Gender (female) | 95.2% | 96.8%
University education | 45.2% | 49.5%
Annual household income (>AU$100,000) | 42.4% | 40.2%
Aboriginal or Torres Strait Islander | 1.0% | 3.2%
Number of children <16 y.o. | 2.3 (0.8) | 2.3 (0.7)

Vegetable consumption: At each follow-up, intervention parents consumed significantly more vegetable serves than control parents (Table 2). 
Effect sizes ranged from 0.36 to 0.71 serves per day (approximately 27–53 grams) [33]. Although the sensitivity analysis attenuated the effect sizes (0.22-0.57 serves per day), the between-group difference remained statistically significant at each follow-up (Table 3).

Table 2: Changes in participant consumption of vegetables and fruit (mean daily serves) using all available data

Vegetable consumption (mean daily serves):
Time | Control (SD) | Control* (SD) | Intervention (SD) | Intervention* (SD) | Between-group difference* (95% CI) | P-value
Baseline | 3.05 (1.34) | - | 3.25 (1.32) | - | - | -
2 mon | 3.08 (1.26) | 3.11 (0.68) | 3.91 (1.41) | 3.92 (0.64) | 0.71 (0.58-0.85) | <0.0001
6 mon | 3.03 (1.51) | 3.04 (0.68) | 3.61 (1.40) | 3.60 (0.63) | 0.43 (0.19-0.68) | 0.0005
12 mon | 3.04 (1.37) | 3.01 (0.80) | 3.66 (1.77) | 3.65 (0.74) | 0.51 (0.30-0.73) | <0.0001
18 mon | 3.06 (1.24) | 3.06 (0.52) | 3.53 (1.36) | 3.53 (0.48) | 0.36 (0.10-0.61) | 0.0067

Fruit consumption (mean daily serves):
Time | Control (SD) | Control** (SD) | Intervention (SD) | Intervention** (SD) | Between-group difference** (95% CI) | P-value
Baseline | 1.76 (1.03) | - | 1.83 (1.08) | - | - | -
2 mon | 1.82 (1.04) | 1.83 (0.69) | 2.17 (1.08) | 2.16 (0.74) | 0.26 (0.12-0.40) | 0.0003
6 mon | 1.95 (0.97) | 1.96 (0.61) | 2.04 (1.08) | 2.04 (0.63) | 0.06 (−0.14-0.26) | 0.5405
12 mon | 1.81 (1.07) | 1.81 (0.53) | 2.08 (1.08) | 2.09 (0.59) | 0.22 (0.04-0.39) | 0.0153
18 mon | 1.93 (1.03) | 1.93 (0.54) | 2.24 (1.07) | 2.24 (0.57) | 0.26 (0.10-0.43) | 0.0015

*Adjusted for baseline value of daily vegetable serves and clustering by preschool.
**Adjusted for baseline value of daily fruit serves and clustering by preschool.

Table 3: Changes in participant consumption of vegetables and fruit (mean daily serves): sensitivity analysis using baseline observation carried forward

Vegetable consumption (mean daily serves):
Time | Control (SD) | Control* (SD) | Intervention (SD) | Intervention* (SD) | Between-group difference* (95% CI) | P-value
Baseline | 3.05 (1.34) | - | 3.25 (1.32) | - | - | -
2 mon | 3.09 (1.24) | 3.11 (0.73) | 3.82 (1.45) | 3.79 (0.72) | 0.57 (0.46-0.68) | <0.0001
6 mon | 3.11 (1.49) | 3.10 (0.78) | 3.56 (1.44) | 3.54 (0.76) | 0.32 (0.11-0.53) | 0.0029
12 mon | 3.05 (1.34) | 3.03 (0.89) | 3.56 (1.73) | 3.55 (0.87) | 0.39 (0.20-0.57) | <0.0001
18 mon | 3.11 (1.24) | 3.11 (0.69) | 3.44 (1.41) | 3.43 (0.68) | 0.22 (0.01-0.43) | 0.0374

Fruit consumption (mean daily serves):
Time | Control (SD) | Control** (SD) | Intervention (SD) | Intervention** (SD) | Between-group difference** (95% CI) | P-value
Baseline | 1.76 (1.03) | - | 1.83 (1.08) | - | - | -
2 mon | 1.81 (1.04) | 1.81 (0.71) | 2.08 (1.06) | 2.08 (0.75) | 0.21 (0.08-0.34) | 0.0015
6 mon | 1.91 (1.01) | 1.92 (0.70) | 2.03 (1.10) | 2.03 (0.73) | 0.06 (−0.11-0.23) | 0.4961
12 mon | 1.79 (1.07) | 1.79 (0.60) | 1.99 (1.04) | 2.00 (0.63) | 0.16 (0.01-0.31) | 0.0382
18 mon | 1.89 (1.04) | 1.89 (0.63) | 2.13 (1.09) | 2.13 (0.66) | 0.19 (0.04-0.34) | 0.0139

*Adjusted for baseline value of daily vegetable serves and clustering by preschool.
**Adjusted for baseline value of daily fruit serves and clustering by preschool.

Fruit consumption: With the exception of 6-months, fruit consumption in the intervention group exceeded that of the control group at each follow-up, with effect sizes of 0.06-0.26 serves per day (9–39 grams) [33] (Table 2). The between-group difference remained significant when baseline values were substituted for missing values in the sensitivity analysis (0.06-0.21 serves per day) (Table 3). 
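The gram equivalents quoted alongside the effect sizes can be recovered from per-serve weights; the conversion below assumes roughly 75 g per vegetable serve and 150 g per fruit serve, factors inferred from the reported gram ranges rather than stated explicitly in the text.

```python
# Approximate grams per adult serve, inferred from the gram ranges
# reported in the Results (assumption, not stated in the article).
GRAMS_PER_VEG_SERVE = 75
GRAMS_PER_FRUIT_SERVE = 150

def serves_to_grams(serves, grams_per_serve):
    """Convert a daily-serves effect size to approximate grams per day."""
    return round(serves * grams_per_serve)

# Vegetable effect sizes of 0.36-0.71 serves/day -> roughly 27-53 g/day.
assert serves_to_grams(0.36, GRAMS_PER_VEG_SERVE) == 27
assert serves_to_grams(0.71, GRAMS_PER_VEG_SERVE) == 53
# Fruit effect sizes of 0.06-0.26 serves/day -> roughly 9-39 g/day.
assert serves_to_grams(0.06, GRAMS_PER_FRUIT_SERVE) == 9
assert serves_to_grams(0.26, GRAMS_PER_FRUIT_SERVE) == 39
```

These factors also put the vegetable effect sizes in context of the 50 g/day increments used in the World Cancer Research Fund meta-analyses cited in the Discussion.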
Discussion: Findings suggest that a telephone-based intervention focused on changing the home food environment of preschool children can increase their parents’ fruit and vegetable consumption, and that increases are sustained up to 18-months post-baseline. A systematic review found that primary prevention interventions targeting healthy adults increased combined fruit and vegetable consumption by 0.1-1.4 serves per day [22]. The increases in consumption in this trial ranged from 0.49 to 0.97 combined serves per day, with an average increase of 0.7 combined serves per day, and are consistent with these review findings, and with the average increase identified in a systematic review of behavioural interventions to increase fruit and vegetable consumption (0.6 serves per day) [20]. Furthermore, these findings are consistent with other telephone-delivered interventions to increase the fruit and vegetable consumption of adults [34]. Although these reviewed trials [35-37] were of a longer duration and included a greater number of calls than the current trial, telephone interventions consisting of fewer calls have also been successful in changing behavioural outcomes [38,39]. Most notably, a trial of an intervention consisting of written materials, a written plan, and two to three telephone calls (the most comparable intervention approach) increased combined fruit and vegetable consumption by 1.4 serves per day among primary care patients after 12 weeks [40]. The magnitude of the observed effect appears to have clinical significance, with meta-analyses conducted by the World Cancer Research Fund indicating that each 50 gram increase of vegetables per day reduced the risks of stomach, oesophageal, and mouth/pharynx/larynx cancers by 15%, 31% and 28% respectively [41]. This suggests that intervention approaches that target the home food environment may produce improvements in diet and reduce associated disease risks. 
These significant findings are particularly noteworthy given that they represent secondary trial outcomes, the primary intervention aim being to increase preschoolers' fruit and vegetable consumption. In fact, the changes in secondary trial outcomes were sustained for longer than the primary outcome (child fruit and vegetable consumption) [16]. This may be due in part to the intervention being delivered directly to parents rather than children, and accords with the theoretical underpinnings of the intervention: that changes in familial norms and behaviours are antecedent to behaviour change for children [25]. This is most clearly illustrated by the intervention strategy of role-modelling, which directly relies on changes in parents' consumption to facilitate changes in children's consumption. Furthermore, the home food environment strategies targeted as part of the intervention required greater input from parents. For example, although increasing the availability of fruit and vegetables in the home is associated with higher consumption among both adults and children [17-19,42], adults must respond more actively to this strategy by changing their food purchasing and food preparation behaviours. Findings from child obesity treatment studies suggest that treating parents alone may be more effective than treating the parent and child together [43]. Although results from the Healthy Habits trial suggest that dietary interventions involving parents can be effective [15,16], it is recommended that future trials of dietary interventions investigate the relative effectiveness of strategies targeting parents alone versus parents and children combined. The non-significant finding for fruit consumption at 6-months reflects a slight increase in consumption among controls coinciding with a slight decrease in consumption among intervention parents.
The increase in fruit consumption in the control group most likely reflects increases in the seasonal availability of fruits over the Australian summer period. This argument is strengthened by the similarly elevated fruit consumption among controls at the 18-month follow-up (i.e. the summer period of the following year). The decrease in fruit consumption in the intervention group may reflect the typical attenuation of effect size over time [44]. Strategies that help maintain intervention effects are important to maximise the long-term benefits of dietary interventions. The results of this trial suggest that approximately 4 to 5 months post-intervention may be a critical point for the delivery of intervention maintenance strategies; however, further research is warranted. The trial findings should be considered in conjunction with the limitations and strengths of this study. Strengths included the randomised controlled design, and standardised delivery of intervention scripts and data collection surveys. Use of self-reported, brief dietary measures may not represent optimal assessment of dietary intake and is a limitation of the trial. More accurate assessments may result from alternative assessment methods, such as food records [45]. It is recommended that future trials investigate whether changes to the home food environment mediate changes in the fruit and vegetable consumption of children and their parents. A recent related study demonstrated that changes in child consumption of non-core foods were mediated by changes in the home food environment [46]. Identification of mediators of fruit and vegetable consumption could facilitate intervention streamlining. Beyond the cost efficiency afforded by telephone delivery [47], this trial provides preliminary evidence of an additional efficiency: simultaneous increases in the fruit and vegetable consumption of preschool children [15,16] and their parents.
Interventions targeting characteristics of the home food environment therefore appear to have substantial public health utility. Conclusion: A four-contact telephone-based intervention that focuses on changing characteristics of preschoolers’ home food environment can increase parents’ fruit and vegetable consumption. These results could inform the development of public health nutrition interventions attempting to improve the diet of preschoolers and their parents.
Background: The home food environment is an important setting for the development of dietary patterns in childhood. Interventions that support parents to modify the home food environment for their children, however, may also improve parent diet. The purpose of this study was to assess the impact of a telephone-based intervention targeting the home food environment of preschool children on the fruit and vegetable consumption of parents. Methods: In 2010, 394 parents of 3-5 year-old children from 30 preschools in the Hunter region of Australia were recruited to this cluster randomised controlled trial and were randomly assigned to an intervention or control group. Intervention group parents received four weekly 30-minute telephone calls and written resources. The scripted calls focused on fruit and vegetable availability and accessibility, parental role-modelling, and supportive home food routines. Two items from the Australian National Nutrition Survey were used to assess the average number of serves of fruit and vegetables consumed each day by parents at baseline and 2-, 6-, 12-, and 18-months later; analyses used generalised estimating equations (adjusted for baseline values and clustering by preschool) and an intention-to-treat approach. Results: At each follow-up, vegetable consumption among intervention parents significantly exceeded that of controls. At 2-months the difference was 0.71 serves (95% CI: 0.58-0.85, p < 0.0001), and at 18-months the difference was 0.36 serves (95% CI: 0.10-0.61, p = 0.0067). Fruit consumption among intervention parents significantly exceeded that of control parents at the 2-, 12-, and 18-month follow-ups: the difference at 2-months was 0.26 serves (95% CI: 0.12-0.40, p = 0.0003), and a difference of 0.26 serves was maintained at 18-months (95% CI: 0.10-0.43, p = 0.0015).
Conclusions: A four-contact telephone-based intervention that focuses on changing characteristics of preschoolers' home food environment can increase parents' fruit and vegetable consumption. (ANZCTR12609000820202).
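The trial's analysis used generalised estimating equations adjusted for baseline values and clustering by preschool; that full model is not reproduced here. The following numpy sketch, on hypothetical data, shows only the simpler baseline-adjusted between-group comparison that such a model generalises. All variable names and values are illustrative, not the trial's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical follow-up data: baseline-adjusted comparison of daily
# vegetable serves between intervention (1) and control (0) parents.
# The trial itself used GEE with clustering by preschool; this ordinary
# least-squares sketch shows only the baseline-adjustment step.
n = 200
group = rng.integers(0, 2, n)          # 0 = control, 1 = intervention
baseline = rng.normal(3.1, 1.3, n)     # baseline serves/day
# Simulate follow-up with a true adjusted group effect of 0.7 serves/day.
followup = 0.4 * baseline + 0.7 * group + rng.normal(1.8, 0.5, n)

# Design matrix: intercept, group indicator, baseline value.
X = np.column_stack([np.ones(n), group, baseline])
beta, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(f"adjusted between-group difference: {beta[1]:.2f} serves/day")
```

In the real analysis the exchangeable within-preschool correlation would widen the confidence interval relative to this naive fit.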
Background: From a young age many children fail to meet minimum dietary guidelines for fruit and vegetable consumption [1,2]. Dietary patterns established in childhood track into adulthood [3], and insufficient childhood consumption of fruit and vegetables is linked to an increased risk of chronic disease in adults [4,5]. As such increasing the fruit and vegetable consumption of children represents a global health priority [6]. Ecological models suggest that the home food environment is an important setting for the development of dietary patterns in childhood [7]. Parents are gatekeepers to the home food environment [7,8] through: making foods available and accessible to children [9]: role modelling [10]; and establishing eating rules and encouraging specific eating behaviours [11]. In addition, parents’ own behaviours may be modified by the environment they establish for their children. As such, interventions that focus on changing the home food environment may impact the dietary patterns of parents as well as their children. A recent systematic review of interventions to increase fruit and vegetable consumption among children aged 0–5 years [12] identified only one study attempting to change the home food environment. A four-contact, home-visiting program increased parent fruit and vegetable consumption, but did not change child consumption [13]. Since the review, the Infant Feeding Activity and Nutrition Trial (InFANT) showed that a parent intervention to improve infant diet, which also included strategies targeting some aspects of the home food environment, improved maternal diet quality [14]. The intervention was delivered via parenting groups over the first 18 months of the infants’ life, and focused on teaching parenting skills to support the development of positive diet behaviours in infancy. 
Furthermore, the Healthy Habits trial, demonstrated that a four-contact telephone-based intervention with parents that focused on creating a supportive home food environment could increase fruit and vegetable consumption among 3–5 year-old children [15]. After 12 months, the combined fruit and vegetable consumption of intervention children was significantly greater than consumption of control children [16]. However, after 18 months the difference between groups was no longer significant [16]. This intervention supported parents to make changes to their home food environment associated with higher fruit and vegetable consumption in children; increasing the availability and accessibility of fruit and vegetables in the home, increasing parental role-modelling, and introducing supportive home food routines [17-19]. There is a well-established literature regarding effective interventions to increase adult fruit and vegetable consumption, with systematic reviews finding evidence in favour of behavioural interventions [20,21], and specifically those utilising face-to-face nutrition education or counselling [22]. A review of environmental approaches to encourage healthy eating highlighted the potential for home-based environmental interventions, however the authors concluded there was a paucity of such interventions [23]. The current study attempts to address this gap in the literature and investigate the efficacy of a home food environment intervention on the dietary behaviours of parents. Specifically, this paper describes the changes in the average daily serves of fruit and of vegetables consumed by parents, at 2-, 6- 12- and 18-months, the secondary outcomes of the Healthy Habits trial. It was hypothesised that consumption among intervention parents would exceed that of controls at each follow-up time point. 
Keywords: Fruit | Vegetable | Randomised controlled trial | Intervention | Telephone | Family | Home food environment | Preschool | Child
MeSH terms: Adult | Australia | Child, Preschool | Cluster Analysis | Diet | Feeding Behavior | Female | Follow-Up Studies | Food, Organic | Fruit | Health Behavior | Health Promotion | Humans | Male | Socioeconomic Factors | Telephone | Treatment Outcome | Vegetables
Title: Association of quantitative CT lung density measurements and lung function decline in World Trade Center workers (PMID 33244876; PMCID 8149480)

Background: Occupational exposures at the WTC site after 11 September 2001 have been associated with presumably inflammatory chronic lower airway diseases.

Methods: We examined the trajectories of expiratory air flow decline in a group of 1,321 former WTC workers and volunteers with at least three periodic spirometries, using QCT-measured low (LAV%, at −950 HU) and high (HAV%, from −600 to −250 HU) attenuation volume percent. We calculated the individual regression line slopes for first-second forced expiratory volume (FEV1 slope), identified subjects with rapidly declining ("accelerated decliners") and increasing ("improved") lung function, and compared them to subjects with "intermediate" (0 to −66.5 mL/year) FEV1 slopes. We then used multinomial logistic regression to model those three trajectories against the two lung attenuation metrics.

Results: The mean longitudinal FEV1 slopes for the entire study population, and for its intermediate, decliner, and improved subgroups, were, respectively, −40.4, −34.3, −106.5, and 37.6 mL/year. In unadjusted and adjusted analyses, LAV% and HAV% were both associated with "accelerated decliner" status (ORadj 2.37, 95% CI 1.41-3.97, and ORadj 1.77, 95% CI 1.08-2.89, respectively), compared to the intermediate decline group.

Conclusions: Longitudinal FEV1 decline in this cohort, known to be associated with a QCT proximal airway inflammation metric, is also associated with QCT indicators of increased and decreased lung density. The improved FEV1 trajectory did not seem to be associated with lung density metrics.

MeSH terms: Child | Female | Forced Expiratory Volume | Humans | Lung | Lung Diseases | Male | Occupational Exposure | September 11 Terrorist Attacks | Tomography, X-Ray Computed
Introduction: Occupational exposures at the World Trade Center (WTC) disaster site in 2001-2002 have been associated with a variety of adverse health effects,1 including chronic lower airway diseases.1, 2 The predominant spirometric abnormality has been a nonspecific pattern of reduced forced vital capacity (FVC),3, 4 while findings of airflow obstruction, bronchodilator response, or airway hyperresponsiveness are less frequent.1, 4 We previously reported on mostly mild chest computed tomography (CT) parenchymal findings (eg, interstitial abnormalities and emphysema).5 Quantitative chest computed tomography (QCT) measurements6 can help to characterize the presumed inflammatory changes underlying those heterogeneous WTC-related lower airway disorders and lung function changes, even if mild. Although post-WTC longitudinal follow-up of lung function among workers in the WTC occupational cohorts has suggested a normal age-related mean expiratory flow decline,7, 8, 9 that average hides widely divergent longitudinal lung function trajectories.9 We hypothesized that both increased (eg, from early interstitial lung disease) and decreased (eg, from early emphysema) lung density markers could help to explain the adverse lung function trajectories in a cohort under long-term health surveillance. In this study, we used longitudinal spirometry data to examine those trajectories, and the association of QCT metrics of increased and decreased lung density with accelerated FEV1 decline among WTC rescue and recovery workers.
Methods:

Subjects and clinical data acquisition: All subjects were members of the WTC General Responders Cohort (GRC) and participated in the screening, surveillance, and clinical programs of the WTC Clinical Center of Excellence at Mount Sinai Medical Center (New York, NY). They were also part of the subcohort (n = 1,641) evaluated by the WTC Pulmonary Evaluation Unit (WTC PEU), who underwent chest CT scanning between 2003 and 2012 as part of their diagnostic evaluation. Details on subject recruitment, eligibility criteria, and screening and surveillance protocols have been previously reported.10 In brief, participants were all workers and volunteers who performed rescue, recovery, and service restoration duties at the WTC disaster site from 11 September 2001 to June 2002. This open cohort includes all occupational groups deployed at the WTC disaster site.11 Beginning in July 2002, all subjects underwent a baseline screening evaluation, which included questionnaires on respiratory and other symptoms, pre-WTC- and WTC-related occupational exposures, physical examination (including height and weight), laboratory testing, and spirometry. Subsequent ("monitoring") health surveillance visits included a similar evaluation at 12- to 18-month intervals, and clinical services were offered for individualized diagnostic and treatment services.1, 2

CT imaging procedures: All CT studies were obtained at Mount Sinai Hospital, in General Electric® or Siemens® multidetector row chest CT scanners. Chest CT studies were performed using an institutional clinical protocol12 with a radiation dose at 120 kVp and a mean of 146 (SD 69) mAs, with subjects in the supine position, noise correction, and routine periodic scanner calibration. CT scans were obtained from the lung apices to the bases in a single breath hold at maximum inspiration, and we excluded those with section thicknesses exceeding 1.5 mm, contrast administration, or respiratory or motion artifacts. All deidentified and coded chest CT images were stored and cataloged during the past 5 years in the WTC PEU Chest CT Image Archive (ClinicalTrials.gov identifier NCT03295279).5
Inclusion criteria and QCT systems: Inclusion into this study required that the WTC workers had (1) at least three screening and surveillance spirometries, (2) an adequate quality study for QCT measurements of their lung parenchyma performed with the Simba system (http://www.via.cornell.edu/simba/simba),13 and (3) complete data for all covariates of interest. None of the chest CTs was obtained for investigation of acute processes (infections, congestive heart failure, etc.). For this study, we selected the first adequate quality and available chest CT scan for each subject. For descriptive purposes, and as previously reported,5 each study was also read by research radiologists using the International Classification of High-resolution Computed Tomography for Occupational and Environmental Respiratory Diseases (ICOERD)14 to classify and grade dichotomously and/or semiquantitatively eight main types of abnormalities.
Spirometry: Spirometries were performed using a single device, the EasyOne® portable flow device (ndd Medizintechnik AG, Zurich, Switzerland). Bronchodilator response (BDR) was assessed at least once in the majority of subjects (most often at their baseline visit) by repeating spirometry 15 minutes after administration of 180 mcg of albuterol via metered dose inhaler and disposable spacer. Predicted values for spirometric measurements were calculated for all subjects' acceptable tests, based on reference equations from the third National Health and Nutrition Examination Survey (NHANES III),15 and all testing, quality assurance, ventilatory impairment pattern definitions, and interpretative approaches followed American Thoracic Society recommendations.16, 17, 18 We selected spirometries for this study if they had a computer quality grade of A or B, or C if at least five trials had been obtained.18
Measurements: We defined a priori, modeled, and plotted the three types of longitudinal post-WTC FEV1 trajectories (see below, under Statistical analysis). For the multinomial logistic regression model (see below) with the three trajectories of longitudinal FEV1 as outcomes, we used as main predictors the two QCT-measured lung density indicators. For low parenchymal density (radiological attenuation), suggestive of emphysema, we used low attenuation volume percent at −950 HU (LAV%, also known as EI950). Categorization was necessary given the variable distribution, using a cut point of 2.5% based on previously published findings in a nonsmoking healthy multiethnic population.19 For high lung parenchymal density, which could suggest interstitial disease changes, we used high attenuation volume percent from −600 to −250 HU (HAV%). Although a study in smokers used a cut point of 10%20 for HAV%, we felt that using the top decile (6.25%) as the cut point for our study was more appropriate, and equally necessary given the variable distribution.
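The two density metrics described above reduce to threshold counts over the lung voxels' Hounsfield unit (HU) values. A minimal sketch on synthetic voxel data; the actual Simba pipeline first segments the lungs from the CT volume, which is not shown here, and the synthetic values are illustrative only:

```python
import numpy as np

def lav_percent(hu: np.ndarray, threshold: float = -950.0) -> float:
    """Low attenuation volume percent: share of lung voxels below threshold HU."""
    return 100.0 * np.mean(hu < threshold)

def hav_percent(hu: np.ndarray, lo: float = -600.0, hi: float = -250.0) -> float:
    """High attenuation volume percent: share of lung voxels within [lo, hi] HU."""
    return 100.0 * np.mean((hu >= lo) & (hu <= hi))

# Synthetic lung voxels (normal lung parenchyma is roughly -950 to -700 HU).
rng = np.random.default_rng(1)
lung_hu = rng.normal(-850, 120, 100_000)
lav = lav_percent(lung_hu)
hav = hav_percent(lung_hu)
# Dichotomise at the study's cut points: LAV% > 2.5, HAV% > 6.25 (top decile).
print(f"LAV% = {lav:.1f} (categorised as high if > 2.5)")
print(f"HAV% = {hav:.1f} (categorised as high if > 6.25)")
```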
Statistical analysis: We used linear regression to calculate the slope of FEV1 (FEV1slope) for each of 1,321 subjects who had a minimum of three periodic measurements.21 We then estimated the average group decline, and its standard deviation, to classify FEV1slopes into three trajectories defined as follows: accelerated FEV1 decline ("accelerated decliner") by an FEV1slope < −66.5 mL/year (ie, exceeding the group mean + 0.5 SD), excessive FEV1 gain ("improved") by an FEV1slope > 0 mL/year, and intermediate FEV1 decline ("intermediate decliner") by an FEV1slope between 0 and −66.5 mL/year (ie, between 0 and the group mean + 0.5 SD). We also estimated the root mean squared error (RMSE) as an indicator of group FEV1 variability.9 Descriptive statistics included means and standard deviations (SDs), and medians and interquartile ranges (IQRs) for normally and non-normally distributed continuous variables, respectively; counts and proportions were used for categorical variables.
In addition, we used standardized differences (StD)22 for the descriptive comparisons of the ICOERD findings with the outcomes (FEV1 trajectories) and main predictors (QCT lung density indicators), with an StD > 0.2 suggesting a potentially important difference.

Covariates included in all multivariable models were age on 11 September 2001; sex; height; ethnicity/race (Latino, non‐Latino White, and non‐Latino other races); body mass index (BMI) at first evaluation and individual BMI trajectory by linear regression (BMI slope, in kg/m2/year); FEV1 percent predicted at baseline; evidence of bronchodilator response at any visit (BDRany, dichotomous); smoking status (never, former, or current smoker); and smoking intensity (in pack‐years) at the baseline examination. Occupational WTC exposure indicators included WTC arrival within 48 hours (dichotomous) and WTC exposure duration (in days, or per 100‐day units).1, 9 A subject was considered a never smoker if they had smoked less than 20 packs of cigarettes (or 12 oz. of tobacco) in a lifetime, or less than 1 cigarette/day (or 1 cigar/week) for 1 year. A minimum of 12 months without tobacco use was required to deem a subject a former smoker. In the model with QCT predictors, we included scanner manufacturer and slice thickness (in mm) to adjust for potentially important CT scan technical differences.

We first used linear mixed random effects modeling to plot the longitudinal FEV1 trajectories described above. Mean absolute FEV1 was estimated for each 1‐year period between 2001 and 2015 (the actual follow‐up year was rounded). In this multivariable model, age on 11 September 2001, BMI slope, height, sex, ethnicity/race, and bronchodilator response (BDRany) were included as fixed effects, with the first three centered at the cohort mean values. Random intercepts accounted for between‐subject variability, and repeated measures correlations accounted for intra‐subject variability.
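The standardized difference used in the descriptive comparisons can be illustrated for a continuous variable as the difference in group means over the pooled standard deviation; the 0.2 threshold is the one stated in the text. This is a minimal sketch under that common formulation (a related formula exists for binary variables, not shown here), and the function name is hypothetical.

```python
import numpy as np

def standardized_difference(x, y):
    """Standardized difference between two groups for a continuous
    variable: |mean(x) - mean(y)| divided by the pooled SD.
    Values > 0.2 suggest a potentially important difference."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    pooled_sd = np.sqrt((np.var(x, ddof=1) + np.var(y, ddof=1)) / 2.0)
    return abs(x.mean() - y.mean()) / pooled_sd
```

Unlike a P value, the StD does not scale with sample size, which is why it suits descriptive balance checks between trajectory subgroups.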
To estimate the effect of increased and decreased lung density on longitudinal FEV1 trajectories after 11 September 2001, we used multinomial logistic regression for the fully specified multivariable analysis of the association between the QCT lung density indicators and the categorical FEV1 trajectory outcome ("accelerated decline," "intermediate decline," and "improved lung function"). We excluded multicollinearity by the variance inflation factor method and tested for interactions. In this analysis, we used multiple imputation (MI) with full conditional specification to account for missing QCT measurements. Results of the complete case and imputed analyses were similar, with effects in the same direction and of the same statistical significance, so we report only the latter. Model performance was assessed by means of the c statistic. A two‐sided P value less than 0.05 defined statistical significance. SAS, version 9.4 (SAS Institute, Cary, NC), was used for all analyses.
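The core of the multinomial (softmax) logistic regression can be sketched in a few lines of numpy. This is an illustrative fit by gradient ascent on the log‐likelihood, not the SAS procedure used in the study, and it omits the covariates, multiple imputation, and the c statistic; adjusted odds ratios would come from exponentiating coefficients relative to the reference category.

```python
import numpy as np

def fit_multinomial(X, y, n_classes, lr=0.1, n_iter=3000):
    """Multinomial (softmax) logistic regression via gradient ascent.
    X: (n, p) predictor matrix; y: integer labels 0..n_classes-1.
    Returns W of shape (p + 1, n_classes); row 0 is the intercept."""
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])       # prepend intercept column
    W = np.zeros((p + 1, n_classes))
    Y = np.eye(n_classes)[y]                   # one-hot labels
    for _ in range(n_iter):
        Z = Xb @ W
        Z -= Z.max(axis=1, keepdims=True)      # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)      # class probabilities
        W += lr * Xb.T @ (Y - P) / n           # log-likelihood gradient step
    return W

def predict(W, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (Xb @ W).argmax(axis=1)
```

With the intermediate decline group as the reference category, exp(W) for the high‐LAV% and high‐HAV% columns would play the role of the odds ratios reported in Table 2.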
RESULTS
Figure 1 shows the study flow chart. Table 1 shows that the study population consisted of 1,321 subjects who had at least three technically acceptable spirometries, for a total of 7,446 studies between July 2002 and June 2016 (median 6, IQR 4–7 per subject). The mean age on 11 September 2001 was 42.1 (SD 9) years; more than 80% were male, and more than 80% were either overweight or obese (mean BMI 29.2, SD 4.9 kg/m2). The mean longitudinal FEV1slope for all subjects was −40.4 mL/year (SD = 52.2 mL/year, RMSE = 0.17 mL/year). Figure 2 displays the trajectories in absolute FEV1 from 2002 to 2016, as revealed by the linear mixed random effects model. The mean longitudinal FEV1slopes for the intermediate decline (n = 876, 66.3%), accelerated decline (n = 280, 21.2%), and improved FEV1 (n = 165, 12.5%) subgroups were, respectively, −34.3 (SD = 16.9, RMSE = 0.15) mL/year, −106.5 (SD 47.1, RMSE 0.21) mL/year, and 37.6 (SD 55.2, RMSE 0.21) mL/year. The observed differences from the intermediate decline subgroup in mean FEV1 at the last follow‐up visit were significant for both the improved (0.26 L, P = 0.004) and the accelerated decline (−0.44 L, P < 0.0001) subgroups.

Figure 1: Study flowchart.

Figure 2: Post‐11 September 2001 grouped longitudinal FEV1 trajectories in 1,321 former nonfirefighting WTC workers. Yearly mean FEV1 in liters from 2002 to 2016, adjusted for WTC occupational exposure (arrival within 48 hours of the disaster), age on 11 September 2001, sex, height, ethnicity/race, BMI slope, baseline smoking status, and BDRany. The three post‐11 September 2001 FEV1 trajectories are: accelerated decline (purple circle, solid line), intermediate decline (green triangle, short dash), and improved FEV1 (yellow square, long dash). Error bars show the standard error of the mean. Numbers below the x‐axis represent the sample size at each time point. The dotted vertical line represents 11 September 2001. For reference, selected chest CT scans were obtained a median of 7.09 (IQR 5.75–8.61) years after 11 September 2001 (illustrated by the thick bar above the horizontal time line).

Table 1: Patient characteristics and unadjusted comparisons of the accelerated decline and improved groups versus the intermediate decline group. Statistically significant differences are bolded; P values are from 2‐sample comparisons of the accelerated decline and improved groups, respectively, with the intermediate decline group; medians (IQR) are reported for variables with skewed distributions.

To examine the association of increased and decreased lung density with the post‐WTC FEV1 trajectories described above, we selected the first available chest CT scan with those measurements, obtained a median of 7.09 (IQR 5.75–8.61) years after 11 September 2001. Unadjusted comparisons with the intermediate decliners (Table 1) showed that the accelerated decliners were significantly more likely to have high categorical LAV% and HAV% (P < 0.001 and P = 0.009, respectively), to be male, taller, non‐Latino White, and ever smokers, to gain weight over time, to have a higher baseline FEV1% predicted, to have BDRany, and to have a shorter WTC exposure duration. Compared with the intermediate decliners, the improved FEV1 subjects were more likely to have high LAV% (P = 0.039), to be male, younger, and taller, to have a higher baseline BMI, and to lose weight on follow‐up. They also had a lower baseline FEV1% predicted, were more likely to have BDRany, to have arrived early at the WTC disaster site, and to have had a shorter WTC exposure duration. As expected, the mostly subtle radiological findings associated with high HAV% were linear and ground glass opacities, inhomogeneous attenuation, and honeycombing, while high LAV% was associated mainly with emphysema, but also with the presence of any well‐rounded or large opacities (Table OS1).
We found no unadjusted (Table OS1) or adjusted23 (data not presented) association between the visual radiologic abnormalities and the lung function trajectories. Because emphysema and interstitial lung disease share some associated causal factors (eg, tobacco and occupational exposures) and can coexist,24, 25 we checked for overlap between the two QCT predictors (ie, confirmed their independence). Only one study subject, who had both emphysema and inhomogeneous attenuation, had both high HAV% and high LAV% as defined, so the two predictors were essentially independent of each other.

Table 2 shows the results of the multinomial logistic regression analysis of lung density measures and lung function trajectories in the 1,321 subjects, comparing the QCT lung density metrics (LAV% and HAV%) between the accelerated decliner (n = 280) and improved (n = 165) subgroups, respectively, versus the intermediate decliner (n = 876) subgroup; significant associations are bolded. In this analysis, both categorical LAV% and HAV% were significantly associated with accelerated FEV1 decline (ORadj 2.37, 95% CI 1.41–3.97, and ORadj 1.77, 95% CI 1.08–2.89, respectively), but not with improved FEV1.
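For readers reproducing the unadjusted comparisons, an odds ratio with a Woolf (log‐scale) 95% confidence interval can be computed from 2x2 counts as below. This is a generic illustrative sketch with hypothetical counts; the ORs reported in Table 2 are adjusted estimates from the multinomial model, not this calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% CI from a
    2x2 table: a exposed / b unexposed among cases,
    c exposed / d unexposed among controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

A CI whose lower bound exceeds 1 (as for the adjusted LAV% and HAV% estimates in Table 2) indicates a statistically significant positive association at the 0.05 level.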
INTRODUCTION

Occupational exposures at the World Trade Center (WTC) disaster site in 2001‐2002 have been associated with a variety of adverse health effects,1 including chronic lower airway diseases.1, 2 The predominant spirometric abnormality has been a nonspecific reduced forced vital capacity (FVC) pattern,3, 4 while findings of airflow obstruction, bronchodilator response, or airway hyperresponsiveness are less frequent.1, 4 We previously reported on mostly mild chest computed tomography (CT) parenchymal findings (eg, interstitial abnormalities and emphysema).5 Quantitative chest computed tomography (QCT) measurements6 can help to characterize the presumed inflammatory changes underlying those heterogeneous WTC‐related lower airway disorders and lung function changes, even if mild. Although post‐WTC longitudinal follow‐up of lung function among workers in the WTC occupational cohorts has suggested a normal age‐related mean expiratory flow decline,7, 8, 9 that average hides widely divergent longitudinal lung function trajectories.9 We hypothesized that both increased (eg, from early interstitial lung disease) and decreased (eg, from early emphysema) lung density markers could help to explain the adverse lung function trajectories in a cohort under long‐term health surveillance. In this study, we used longitudinal spirometry data to examine those trajectories, and the association of QCT metrics of increased and decreased lung density with accelerated FEV1 decline among WTC rescue and recovery workers.

METHODS

Subjects and clinical data acquisition

All subjects were members of the WTC General Responders Cohort (GRC) and participated in the screening, surveillance, and clinical programs of the WTC Clinical Center of Excellence at Mount Sinai Medical Center (New York, NY). They were also part of the subcohort (n = 1,641) evaluated by the WTC Pulmonary Evaluation Unit (WTC PEU), who underwent chest CT scanning between 2003 and 2012 as part of their diagnostic evaluation. Details on subject recruitment, eligibility criteria, and screening and surveillance protocols have been previously reported.10 In brief, participants were all workers and volunteers who performed rescue, recovery, and service restoration duties at the WTC disaster site from 11 September 2001 to June 2002. This open cohort includes all occupational groups deployed at the WTC disaster site.11 Beginning in July 2002, all subjects underwent a baseline screening evaluation, which included questionnaires on respiratory and other symptoms, pre‐WTC‐ and WTC‐related occupational exposures, physical examination (including height and weight), laboratory testing, and spirometry. Subsequent ("monitoring") health surveillance visits included a similar evaluation at 12‐ to 18‐month intervals, and clinical services were offered for individualized diagnostic and treatment services.1, 2

CT imaging procedures

All CT studies were obtained at Mount Sinai Hospital, on General Electric® or Siemens® multidetector row chest CT scanners. Chest CT studies were performed using an institutional clinical protocol12 with a radiation dose at 120 kVp and a mean of 146 (SD 69) mAs, with subjects in the supine position, noise correction, and routine periodic scanner calibration. CT scans were obtained from the lung apices to the bases in a single breath hold at maximum inspiration, and we excluded those with section thicknesses exceeding 1.5 mm, contrast administration, or respiratory or motion artifacts. All deidentified and coded chest CT images were stored and cataloged during the past 5 years in the WTC PEU Chest CT Image Archive (ClinicalTrials.gov identifier NCT03295279).5

Inclusion criteria and QCT systems

Inclusion into this study required that the WTC workers had (1) at least three screening and surveillance spirometries, (2) an adequate quality study for QCT measurements of the lung parenchyma, performed with the Simba system (http://www.via.cornell.edu/simba/simba),13 and (3) complete data for all covariates of interest. None of the chest CT scans was obtained for investigation of acute processes (infections, congestive heart failure, etc.). For this study, we selected the first available, adequate quality chest CT scan for each subject. For descriptive purposes, and as previously reported,5 each study was also read by research radiologists using the International Classification of High‐resolution Computed Tomography for Occupational and Environmental Respiratory Diseases (ICOERD)14 to classify and grade, dichotomously and/or semiquantitatively, eight main types of abnormalities.

Spirometry

Spirometries were performed using a single device, the EasyOne® portable flow device (ndd Medizintechnik AG, Zurich, Switzerland). Bronchodilator response (BDR) was assessed at least once in the majority of subjects (most often at their baseline visit) by repeating spirometry 15 minutes after administration of 180 mcg of albuterol via metered dose inhaler and disposable spacer.

AUTHOR CONTRIBUTIONS

de la Hoz, Estépar, and Celedón designed and oversaw the study and selected analytical strategies. Liu, Antoniak, Jeon, Weber, and Doucette performed all statistical analyses. Reeves performed the QCT measurements and Xu participated in the radiological readings (ICOERD data). All authors contributed to writing, reviewed and revised the drafts, and approved the final manuscript. de la Hoz had full access to all the data in the study and had final responsibility for the decision to submit for publication.

ETHICS STATEMENT

The study was conducted in accordance with the guidelines for human studies of the amended Declaration of Helsinki. The Mount Sinai Program for the Protection of Human Subjects (HS12‐00925) reviewed and approved the study protocol.

Funding information

This work was supported by grants U01 OH010401 and U01 OH011697 (RED, PI), and contract 200‐2017‐93325 (WTC General Responders Cohort Data Center) from the Centers for Disease Control and Prevention/National Institute for Occupational Safety and Health (CDCP/NIOSH). KA's work was funded by a short‐term research training program for minority students from the National Heart, Lung, and Blood Institute (grant R25 HL108857).
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Subjects and clinical data acquisition", "CT imaging procedures", "Inclusion criteria and QCT systems", "Spirometry", "Measurements", "Statistical analysis", "RESULTS", "DISCUSSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTIONS", "ETHICS STATEMENT", "Funding information", "Supporting information" ]
[ "Occupational exposures at the World Trade Center (WTC) disaster site in 2001‐2002 have been associated with a variety of adverse health effects,1 including chronic lower airway diseases.1, 2 The predominant spirometric abnormality has been a nonspecific reduced forced vital capacity (FVC) patterns,3, 4 and findings of airflow obstruction, bronchodilator response, or airway hyperresponsiveness are less frequent.1, 4 We previously reported on mostly mild chest computed tomography (CT) parenchymal findings (eg, interstitial abnormalities and emphysema).5 Quantitative chest computed tomography (QCT) measurements6 can help to characterize the presumed inflammatory changes underlying those heterogeneous WTC‐related lower airway disorders and lung function changes, even if mild. Although post‐WTC longitudinal follow‐up of lung function among workers in the WTC occupational cohorts have suggested a normal age‐related mean expiratory flow decline,7, 8, 9 that average hides widely divergent longitudinal lung function trajectories.9 We hypothesized that both increased (eg, from early interstitial lung disease) and decreased (eg, from early emphysema) lung density markers could help to explain the adverse lung function trajectories in a cohort under long‐term health surveillance. In this study, we used longitudinal spirometry data to examine those trajectories, and the association of QCT metrics of increased and decreased lung density with accelerated FEV1 decline among WTC rescue and recovery workers.", " Subjects and clinical data acquisition All subjects were members of the WTC General Responders Cohort (GRC) and participated in the screening, surveillance, and clinical programs of the WTC Clinical Center of Excellence at Mount Sinai Medical Center (New York, NY). They were also part of the subcohort (n = 1,641) evaluated by the WTC Pulmonary Evaluation Unit (WTC PEU), who underwent chest CT scanning between 2003 and 2012, as part of their diagnostic evaluation. 
Details on subject recruitment, eligibility criteria, and screening and surveillance protocols have been previously reported.10 In brief, participants were all workers and volunteers who performed rescue, recovery, and service restoration duties at the WTC disaster site from 11 September 2001 to June 2002. This open cohort includes all occupational groups deployed at the WTC disaster site.11 Beginning in July 2002, all subjects underwent a baseline screening evaluation, which included questionnaires on respiratory and other symptoms, pre‐WTC‐ and WTC‐related occupational exposures, physical examination (including height and weight), laboratory testing, and spirometry. Subsequent (“monitoring”) health surveillance visits included a similar evaluation at 12‐ to 18‐month intervals, and clinical services were offered for individualized diagnostic and treatment services.1, 2\n\nAll subjects were members of the WTC General Responders Cohort (GRC) and participated in the screening, surveillance, and clinical programs of the WTC Clinical Center of Excellence at Mount Sinai Medical Center (New York, NY). They were also part of the subcohort (n = 1,641) evaluated by the WTC Pulmonary Evaluation Unit (WTC PEU), who underwent chest CT scanning between 2003 and 2012, as part of their diagnostic evaluation. Details on subject recruitment, eligibility criteria, and screening and surveillance protocols have been previously reported.10 In brief, participants were all workers and volunteers who performed rescue, recovery, and service restoration duties at the WTC disaster site from 11 September 2001 to June 2002. This open cohort includes all occupational groups deployed at the WTC disaster site.11 Beginning in July 2002, all subjects underwent a baseline screening evaluation, which included questionnaires on respiratory and other symptoms, pre‐WTC‐ and WTC‐related occupational exposures, physical examination (including height and weight), laboratory testing, and spirometry. 
Subsequent (“monitoring”) health surveillance visits included a similar evaluation at 12‐ to 18‐month intervals, and clinical services were offered for individualized diagnostic and treatment services.1, 2\n\n CT imaging procedures All CT studies were obtained at Mount Sinai Hospital, in General Electric® or Siemens® multidetector row chest CT scanners. Chest CT studies were performed using an institutional clinical protocol12 with a radiation dose at 120 kVp, and a mean of 146 (SD 69) mAs, with subjects in the supine position, noise correction, and routine periodic scanner calibration. CT scans were obtained from the lung apices to the bases in a single breath hold at maximum inspiration, and we excluded those with section thicknesses exceeding 1.5 mm, contrast administration, or respiratory or motion artifacts. All deidentified and coded chest CT images were stored and cataloged during the past 5 years in the WTC PEU Chest CT Image Archive (ClinicalTrials.gov identifier NCT03295279).5\n\nAll CT studies were obtained at Mount Sinai Hospital, in General Electric® or Siemens® multidetector row chest CT scanners. Chest CT studies were performed using an institutional clinical protocol12 with a radiation dose at 120 kVp, and a mean of 146 (SD 69) mAs, with subjects in the supine position, noise correction, and routine periodic scanner calibration. CT scans were obtained from the lung apices to the bases in a single breath hold at maximum inspiration, and we excluded those with section thicknesses exceeding 1.5 mm, contrast administration, or respiratory or motion artifacts. 
All deidentified and coded chest CT images were stored and cataloged during the past 5 years in the WTC PEU Chest CT Image Archive (ClinicalTrials.gov identifier NCT03295279).5\n\n Inclusion criteria and QCT systems Inclusion into this study required that the WTC workers had (1) at least three screening and surveillance spirometries, (2) adequate quality study for QCT measurements of their lung parenchyma performed with the Simba system (http://www.via.cornell.edu/simba/simba),13 and (3) complete data for all covariates of interest. None of the chest CT was obtained for investigation of acute processes (infections, congestive heart failure, etc.). For this study, we selected the first adequate quality and available chest CT scan on each subject. For descriptive purposes, and as previously reported,5 each study was also read by research radiologists using the International Classification of High‐resolution Computed Tomography for Occupational and Environmental Respiratory Diseases (ICOERD)14 to classify and grade dichotomously and/or semiquantitatively eight main types of abnormalities.\nInclusion into this study required that the WTC workers had (1) at least three screening and surveillance spirometries, (2) adequate quality study for QCT measurements of their lung parenchyma performed with the Simba system (http://www.via.cornell.edu/simba/simba),13 and (3) complete data for all covariates of interest. None of the chest CT was obtained for investigation of acute processes (infections, congestive heart failure, etc.). For this study, we selected the first adequate quality and available chest CT scan on each subject. 
For descriptive purposes, and as previously reported,5 each study was also read by research radiologists using the International Classification of High‐resolution Computed Tomography for Occupational and Environmental Respiratory Diseases (ICOERD)14 to classify and grade dichotomously and/or semiquantitatively eight main types of abnormalities.\n Spirometry Spirometries were performed using a single device, the EasyOne® portable flow device (ndd Medizintechnik AG, Zurich, Switzerland). Bronchodilator response (BDR) was assessed at least once in the majority of subjects (most often at their baseline visit) by repeating spirometry 15 minutes after administration of 180 mcg of albuterol via metered dose inhaler and disposable spacer. Predicted values for spirometric measurements were calculated for all subjects’ acceptable tests, based on reference equations from the third National Health and Nutrition Examination Survey (NHANES III),15 and all testing, quality assurance, ventilatory impairment pattern definitions, and interpretative approaches followed American Thoracic Society recommendations.16, 17, 18 We selected spirometries for this study if they had a computer quality grade of A or B, or C if at least five trials had been obtained.18\n\nSpirometries were performed using a single device, the EasyOne® portable flow device (ndd Medizintechnik AG, Zurich, Switzerland). Bronchodilator response (BDR) was assessed at least once in the majority of subjects (most often at their baseline visit) by repeating spirometry 15 minutes after administration of 180 mcg of albuterol via metered dose inhaler and disposable spacer. 
Measurements

We defined a priori, modeled, and plotted the three types of longitudinal post‐WTC FEV1 trajectories (see Statistical analysis, below). For the multinomial logistic regression model (see below), with the three trajectories of longitudinal FEV1 as outcomes, we used the two QCT‐measured lung density indicators as the main predictors. For low parenchymal density (radiological attenuation), suggestive of emphysema, we used the low attenuation volume percent below −950 HU (LAV%, also known as EI950). Categorization was necessary given the variable's distribution; we used a cut point of 2.5%, based on previously published findings in a nonsmoking, healthy, multiethnic population.19 For high lung parenchymal density, which could suggest interstitial disease changes, we used the high attenuation volume percent from −600 to −250 HU (HAV%). Although a study in smokers used a cut point of 10% for HAV%,20 we considered the top decile (6.25%) a more appropriate cut point for our study; categorization was equally necessary given the variable's distribution.
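Both density indicators are voxel-count fractions of the segmented lung parenchyma. The sketch below is a minimal NumPy reimplementation for illustration; the study itself used the Simba system, and the function names and thresholding code here are our own.

```python
import numpy as np

def lung_density_metrics(hu_values):
    """LAV% (voxels below -950 HU) and HAV% (voxels from -600 to -250 HU),
    each as a percentage of all lung-parenchyma voxels."""
    hu = np.asarray(hu_values, dtype=float)
    total = hu.size
    lav_pct = 100.0 * np.count_nonzero(hu < -950) / total
    hav_pct = 100.0 * np.count_nonzero((hu >= -600) & (hu <= -250)) / total
    return lav_pct, hav_pct

def categorize_density(lav_pct, hav_pct, lav_cut=2.5, hav_cut=6.25):
    """Dichotomize at the study's cut points: 2.5% for LAV% and the
    top decile (6.25%) for HAV%."""
    return {"high_LAV": lav_pct > lav_cut, "high_HAV": hav_pct > hav_cut}
```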
Statistical analysis

We used linear regression to calculate the slope of FEV1 (FEV1slope) for each of the 1,321 subjects who had a minimum of three periodic measurements.21 We then estimated the average group decline and its standard deviation to classify FEV1slopes into three trajectories, defined as follows: accelerated FEV1 decline ("accelerated decliner"), FEV1slope < −66.5 mL/year (ie, exceeding the group mean + 0.5 SD); excessive FEV1 gain ("improved"), FEV1slope > 0 mL/year; and intermediate FEV1 decline ("intermediate decliner"), FEV1slope between 0 and −66.5 mL/year (ie, between 0 and the group mean + 0.5 SD). We also estimated the root mean squared error (RMSE) as an indicator of group FEV1 variability.9

Descriptive statistics included means and standard deviations (SDs) for normally distributed continuous variables, medians and interquartile ranges (IQRs) for non‐normally distributed continuous variables, and counts and proportions for categorical variables. Unadjusted bivariate analyses included the t‐test, chi‐squared test, or Wilcoxon rank test, as appropriate.
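The per-subject slope and three-group classification can be sketched as follows. This is an illustrative NumPy reimplementation of the definitions in the text, not the study's SAS code; the cohort-specific cut point of −66.5 mL/year (group mean + 0.5 SD) is taken from the text.

```python
import numpy as np

def fev1_slope(years, fev1_ml):
    """Per-subject FEV1 slope in mL/year by ordinary least squares."""
    slope, _intercept = np.polyfit(years, fev1_ml, deg=1)
    return slope

def classify_trajectory(slope_ml_per_year, cut=-66.5):
    """Trajectory groups as defined in the text; `cut` is the group
    mean + 0.5 SD (-66.5 mL/year in this cohort)."""
    if slope_ml_per_year < cut:
        return "accelerated decliner"
    if slope_ml_per_year > 0:
        return "improved"
    return "intermediate decliner"
```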
We also employed standardized differences (StD)22 for the descriptive comparisons of the ICOERD findings with the outcomes (FEV1 trajectories) and main predictors (QCT lung density indicators), with an StD > 0.2 suggesting a potentially important difference.

Covariates included in all multivariable models were age on 11 September 2001, sex, height, ethnicity/race (Latino, non‐Latino White, and non‐Latino other races), body mass index (BMI) at first evaluation and individual BMI trajectories by linear regression (BMI slope, in kg/m2/year), FEV1 percent predicted at baseline, evidence of bronchodilator response at any visit (BDRany, dichotomous), smoking status (never, former, and current smokers), and smoking intensity (in pack‐years) at the baseline examination. Occupational WTC exposure indicators included WTC arrival within 48 hours (dichotomous) and WTC exposure duration (in days, or per 100‐day units).1, 9 A subject was considered a never smoker if (s)he had smoked less than 20 packs of cigarettes (or 12 oz of tobacco) in a lifetime, or less than 1 cigarette/day (or 1 cigar/week) for 1 year. A minimum of 12 months without tobacco use was required to deem a subject a former smoker. In the model with QCT predictors, we included scanner manufacturer and slice thickness (in mm) to adjust for potentially important CT scan technical differences.

We first used linear mixed random effects modeling to plot the longitudinal FEV1 trajectories described above. Mean absolute FEV1 was estimated for each 1‐year period between 2001 and 2015 (the actual follow‐up year was rounded). In this multivariable model, age on 11 September 2001, BMI slope, height, sex, ethnicity/race, and bronchodilator response (BDRany) were included as fixed effects, centering the first three at the cohort mean values.
Random intercepts accounted for between‐subject variability, and repeated measures correlations accounted for intra‐subject variability.

To estimate the effect of increased and decreased lung density on longitudinal FEV1 trajectories after 11 September 2001, we employed multinomial logistic regression for the fully specified multivariable analysis of the association between QCT lung density indicators and the categorical outcome of FEV1 trajectories ("accelerated decline," "intermediate decline," and "improved lung function"). We excluded multicollinearity by the variance inflation factor method and tested for interactions. In this analysis, we used multiple imputation (MI) with full conditional specification to account for missing QCT measurements. Because the results of the complete case and imputed analyses were similar, with effects of the same direction and statistical significance, we report only the latter. Model performance was assessed by means of the c statistic. A two‐sided P value less than 0.05 defined statistical significance. SAS, version 9.4 (SAS Institute, Cary, NC), was used for all analyses.
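For readers who want to see the mechanics of a multinomial model, the minimal sketch below fits a softmax (multinomial logistic) regression by gradient ascent on the log-likelihood. It is a didactic stand-in assuming NumPy only; the study itself used SAS 9.4 with multiple imputation, which this sketch does not reproduce.

```python
import numpy as np

def fit_multinomial_logit(X, y, n_classes, lr=0.1, n_iter=5000):
    """Multinomial logistic regression via batch gradient ascent.

    X: (n, p) design matrix (include an intercept column yourself).
    y: (n,) integer class labels in {0, ..., n_classes - 1}.
    Returns a (p, n_classes) coefficient matrix."""
    n, p = X.shape
    W = np.zeros((p, n_classes))
    Y = np.eye(n_classes)[y]                 # one-hot encode the outcome
    for _ in range(n_iter):
        Z = X @ W
        Z -= Z.max(axis=1, keepdims=True)    # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)    # softmax class probabilities
        W += lr * X.T @ (Y - P) / n          # ascend the log-likelihood
    return W

def predict(X, W):
    """Most probable class for each row of X."""
    return np.argmax(X @ W, axis=1)
```

In the study's setting, the three FEV1 trajectory groups would play the role of the outcome classes and the dichotomized LAV% and HAV% (plus covariates) the columns of the design matrix.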
Subjects and clinical data acquisition

All subjects were members of the WTC General Responders Cohort (GRC) and participated in the screening, surveillance, and clinical programs of the WTC Clinical Center of Excellence at Mount Sinai Medical Center (New York, NY). They were also part of the subcohort (n = 1,641) evaluated by the WTC Pulmonary Evaluation Unit (WTC PEU), who underwent chest CT scanning between 2003 and 2012 as part of their diagnostic evaluation.
Details on subject recruitment, eligibility criteria, and screening and surveillance protocols have been previously reported.10 In brief, participants were all workers and volunteers who performed rescue, recovery, and service restoration duties at the WTC disaster site from 11 September 2001 to June 2002. This open cohort includes all occupational groups deployed at the WTC disaster site.11 Beginning in July 2002, all subjects underwent a baseline screening evaluation, which included questionnaires on respiratory and other symptoms, pre‐WTC‐ and WTC‐related occupational exposures, physical examination (including height and weight), laboratory testing, and spirometry. Subsequent ("monitoring") health surveillance visits included a similar evaluation at 12‐ to 18‐month intervals, and clinical services were offered for individualized diagnosis and treatment.1, 2

Chest CT acquisition

All CT studies were obtained at Mount Sinai Hospital, on General Electric® or Siemens® multidetector row chest CT scanners. Chest CT studies were performed using an institutional clinical protocol12 with a radiation dose at 120 kVp and a mean of 146 (SD 69) mAs, with subjects in the supine position, noise correction, and routine periodic scanner calibration. CT scans were obtained from the lung apices to the bases in a single breath hold at maximum inspiration, and we excluded those with section thicknesses exceeding 1.5 mm, contrast administration, or respiratory or motion artifacts.
Results

Figure 1 shows the study flow chart. Table 1 shows that the study population consisted of 1,321 subjects who had at least three technically acceptable spirometries, with a total of 7,446 spirometries (median 6, IQR 4–7 per subject) between July 2002 and June 2016. The mean age on 11 September 2001 was 42.1 (SD 9) years, more than 80% of subjects were male, and more than 80% were either overweight or obese (mean BMI 29.2, SD 4.9 kg/m2). The mean longitudinal FEV1slope for all subjects was −40.4 mL/year (SD 52.2 mL/year, RMSE 0.17 mL/year). Figure 2 displays the trajectories in absolute FEV1 from 2002 to 2016, as revealed by the linear mixed random effects model.
The mean longitudinal FEV1 slopes for the intermediate decline (n = 876, 66.3%), accelerated decline (n = 280, 21.2%), and improved FEV1 (n = 165, 12.5%) subgroups were, respectively, −34.3 (SD 16.9, RMSE 0.15) mL/year, −106.5 (SD 47.1, RMSE 0.21) mL/year, and 37.6 (SD 55.2, RMSE 0.21) mL/year. The observed differences from the intermediate decline subgroup in mean FEV1 at the last follow‐up visit were significant for both the improved (0.26 L, P = 0.004) and the accelerated decline (−0.44 L, P < 0.0001) subgroups.

Figure 1. Study flowchart.

Figure 2. Post‐11 September 2001 grouped longitudinal FEV1 trajectories in 1,321 former nonfirefighting WTC workers. Yearly mean FEV1 in liters from 2002 to 2016, adjusted for WTC occupational exposure (arrival within 48 hours of the disaster), age on 11 September 2001, sex, height, ethnicity/race, BMI slope, baseline smoking status, and BDRany. The three post‐11 September 2001 FEV1 trajectories are: accelerated decline (purple circle, solid line), intermediate decline (green triangle, short dash), and improved FEV1 (yellow square, long dash). Error bars show the standard error of the mean. Numbers below the x‐axis represent the sample size at each time point. The dotted vertical line represents 11 September 2001.
For reference, the selected chest CT scans were obtained a median of 7.09 (IQR 5.75–8.61) years after 11 September 2001 (illustrated by the thick bar above the horizontal time line).

Table 1. Patient characteristics and unadjusted comparisons of the accelerated decline and improved groups versus the intermediate decline group. Statistically significant differences are bolded. P values are for 2‐sample comparisons between the accelerated and intermediate decline groups, and between the improved and intermediate decline groups, respectively. Medians (IQR) are reported for skewed distributions.

To examine the association of increased and decreased lung density with the aforementioned post‐WTC FEV1 trajectories, we selected the first available chest CT scan with those measurements, obtained a median of 7.09 (IQR 5.75–8.61) years after 11 September 2001. Unadjusted comparisons with the intermediate decliners (Table 1) showed that the accelerated decliners were significantly more likely to have higher categorical LAV% and HAV% (P < 0.001 and P = 0.009, respectively), to be male, taller, non‐Latino White, and ever smokers, to gain weight over time, to have higher baseline FEV1% predicted and BDRany, and to have a shorter WTC exposure duration. Compared with the intermediate decliners, the improved FEV1 subjects were more likely to have high LAV% (P = 0.039), to be male, younger, and taller, to have a higher BMI at baseline, and to lose weight on follow‐up. They also had a lower baseline FEV1% predicted, and were more likely to have BDRany, to have arrived early at the WTC disaster site, and to have had a shorter WTC exposure duration. As expected, the mostly subtle radiological findings associated with high HAV% were linear and ground glass opacities, inhomogeneous attenuation, and honeycombing, while high LAV% was mainly associated with emphysema, but also with the presence of any well‐rounded or large opacities (Table OS1).
We found no unadjusted (Table OS1) or adjusted23 (data not presented) association between the visual radiologic abnormalities and the lung function trajectories.

Because emphysema and interstitial lung disease share some associated causal factors (eg, tobacco and occupational exposures) and can coexist,24, 25 we checked for overlap between the two QCT predictors (ie, confirmed their independence). Only one study subject, with both emphysema and inhomogeneous attenuation, had both high HAV% and high LAV% as defined, so the two predictors were essentially independent of each other. Table 2 shows the results of the multinomial logistic regression analysis of lung density measures and lung function trajectories in 1,321 subjects. In this analysis, both categorical LAV% and HAV% were significantly associated with accelerated FEV1 decline (ORadj, 95% CI: 2.37, 1.41–3.97, and 1.77, 1.08–2.89, respectively), but not with improved FEV1.

Table 2. Comparisons of QCT lung density metrics (LAV% and HAV%) between the accelerated decliner (n = 280) and improved (n = 165) subgroups, respectively, versus the intermediate decliner (n = 876) subgroup. Significant associations are bolded.

Discussion

In this study, we extend our previous assessment9 of the rate of FEV1 decline in a diverse group of former WTC workers11 over a longer period of time. We also demonstrate the association of accelerated FEV1 decline with QCT‐measured decreased (LAV%) and increased (HAV%) lung density metrics. The subgroup with abnormal lung function gain or improvement, which had an inconsistent association with proximal airway wall thickness in our previous study,9 failed to demonstrate a significant association with either LAV% or HAV% in this study.

The recently described9, 26 divergent long‐term FEV1 trajectories in the WTC worker and volunteer cohorts motivated an examination of QCT metrics and lung function decline.
We previously established that wall area percent, a QCT metric of proximal airway inflammation, is associated with accelerated FEV1 decline.9 We hypothesized that changes in lung density, which QCT may help to identify at early stages, could also be associated with the diverging trajectories (accelerated decline and improvement). Chronic parenchymal lung diseases associated with decreased (eg, emphysema) and increased (eg, interstitial lung disease) lung density usually have a long latency after the inciting exposures, and the included CT studies were obtained a median of 7.1 years after 11 September 2001. Our previous findings, based on systematic and semiquantitative readings of the CT scans, noted a prevalence of usually mild emphysema and interstitial lung disease, each in about 10% of this cohort.5 Because the findings were so mild, we hypothesized that QCT density metrics corresponding to those abnormalities could quantify their extent more precisely, and could assess whether they were significantly associated with adverse functional outcomes, even if quantitatively mild. Our findings suggest that they are, that further follow‐up is warranted, and that QCT can play a role in longitudinal respiratory surveillance of this cohort.

Extending our previous study9 to better delineate the longitudinal FEV1 trajectories further suggests that the improved function group may have experienced the earliest and deepest lung function decrease (ie, they were the "rapid accelerated decliners" after the WTC exposures), but also demonstrated recovery toward their higher pre‐WTC FEV1 levels, as suggested by a WTC firefighters' study.26 This recovery can be accounted for by a calculated difference in pre‐WTC predicted FEV1 of 190 mL in favor of the improved subjects compared with the intermediate decline subjects (data not presented), which is in turn expected given the sex and height differences between the subgroups (Table 2).
The accelerated decline subgroup had a similarly higher pre‐WTC predicted FEV1 than the intermediate group, which further underscores the severity of their longitudinal functional loss. Although our models suggested several important predictive factors for those divergent trajectories, further characterization and follow‐up are warranted.

The QCT emphysema metric (LAV%) has been used extensively in studies of smokers and COPD (reviewed in reference 27). The QCT high lung density metric (HAV%), by contrast, has been used less often, mainly in the investigation of interstitial disease changes in COPD,20 of adverse effects on lung function in pulmonary fibrosis patients,28 and of cigarette smoking.29 Aside from studies with a predominance of smokers, HAV% in subjects from the Framingham Heart Study30 was also modestly predictive of lung volume loss. The interstitial abnormalities measured by HAV% may be reversible. Our findings, and those of previous studies, support the longitudinal and quantitative investigation of increased and decreased lung density in occupationally exposed cohorts like that of the WTC responders.

The pathogenesis of both COPD and interstitial fibrosis31, 32 is thought to share trajectories marked by a series of pulmonary tissue injuries, including those caused by environmental toxicants such as tobacco and occupational vapors, dusts, and gases, as well as by dysregulated aging,20, 33, 34 developmental, and other factors. The factors that determine the diverging pathophysiologic paths between the two diseases, however, remain to be elucidated. While the two diseases can indeed coexist,24, 25 we found no substantial evidence of overlap of their respective QCT markers in our study group.
While there is thus far no conclusively documented evidence of an association of WTC‐related exposures with the causation of either COPD or interstitial fibrosis in the WTC cohorts, our models included both tobacco smoking and WTC occupational exposure variables as covariates rather than as main predictors. Properly designed studies are needed to elucidate their potential causative roles.

The strengths of this study include the richness of the patient population, the amount of data available for important covariates, and the availability of extensive and detailed imaging data, unique among WTC cohort studies. This study also has some limitations. We did not assess the effect of pre‐WTC occupational exposures,5 of smoking after the baseline examination, or of smoking intensity. Periodic cross‐sectional assessments of smoking status in this cohort, moreover, suggest steadily decreasing current smoking rates.3 Our study subjects underwent chest CT scanning for a variety of reasons, often unrelated to abnormal lung function (eg, investigation of nodules or atelectatic densities). This represents an inevitable selection bias, but one that allowed us to obtain QCT measurements. We measured the indicators of lung density on CTs performed a median of 7.1 years after 11 September 2001, at a time when the three FEV1 trajectories were already clearly established (Figure 2). Our observations in this and our previous study on the divergent FEV1 trajectories9 were similar to those reported in the WTC firefighters' cohort,26 where the overall longitudinal FEV1 decline was −32 mL/year, versus −40.4 mL/year in our cohort. Preliminary data on the Mount Sinai WTC General Responders Cohort (n = 15,753), which includes our cohort, indicate a longitudinal FEV1 decline of −33.2 mL/year,4 which further supports our a priori trajectory classifications.
We chose FEV1 for this study because of its reliability and repeatability, particularly in the setting of large screening and surveillance spirometry programs. All WTC occupational studies with spirometry data have shown nonobstructive low FVC as the largely predominant spirometric impairment,3, 4 and thus essentially parallel trajectories of FEV1 and FVC.7 Future studies should examine, however, the differential effect of our and other QCT markers in patient subgroups. We lacked comparison QCT imaging data from a well‐defined control group of occupationally and WTC‐unexposed, totally asymptomatic subjects with normal spirometry and chest radiographs, but we believe that our cut point criteria for the QCT lung density metrics were adequate for these analyses. We used retrospective chest CT imaging data, which were subject to slight variations in protocols over time. However, most studies were performed on a very small number of scanners, at a single location, and with intended technical consistency. In addition, quality control was exerted to exclude CT studies that did not meet technical standards for quantitative imaging analyses and, as a result, our models did not suggest significant effects of scanner brand or slice thickness.

As the potential for the development of chronic lung diseases looms over this cohort,35, 36 this and our previous studies1, 5, 9 underscore the need for surveillance and suggest a potential role for both qualitative and quantitative chest CT in the ongoing evaluation of lung function changes and disease transitions in this cohort.

Disclosure

The authors had no other relevant financial conflict of interest disclosures to make. The contents of this article are the sole responsibility of the authors and do not necessarily represent the official views of the CDCP/NIOSH.
The funding agencies had no role in the study design, in the collection, analysis, or interpretation of the data, in the writing of the report, or in the decision to submit this article for publication.

Author contributions: de la Hoz, Estépar, and Celedón designed and oversaw the study and selected analytical strategies. Liu, Antoniak, Jeon, Weber, and Doucette performed all statistical analyses. Reeves performed the QCT measurements and Xu participated in the radiological readings (ICOERD data). All authors contributed to writing, reviewed and revised the drafts, and approved the final manuscript. de la Hoz had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Ethics: The study was conducted in accordance with the guidelines for human studies of the amended Declaration of Helsinki. The Mount Sinai Program for the Protection of Human Subjects (HS12‐00925) reviewed and approved the study protocol.

Funding: This work was supported by grants U01 OH010401 and U01 OH011697 (RED, PI), and contract 200‐2017‐93325 (WTC General Responders Cohort Data Center) from the Centers for Disease Control and Prevention/National Institute for Occupational Safety and Health (CDCP/NIOSH). KA’s work was funded by a short‐term research training program for minority students from the National Heart, Lung, and Blood Institute (grant R25 HL108857).

Supporting information: Table S1.
Keywords: CT–lung; helical computed tomography; imaging of the chest; inhalation injury; lung function decline; lung function trajectories; multivariate analysis of prognostic factors; occupational respiratory diseases; World Trade Center‐related lung disease.
INTRODUCTION: Occupational exposures at the World Trade Center (WTC) disaster site in 2001‐2002 have been associated with a variety of adverse health effects,1 including chronic lower airway diseases.1, 2 The predominant spirometric abnormality has been a nonspecific reduced forced vital capacity (FVC) pattern,3, 4 and findings of airflow obstruction, bronchodilator response, or airway hyperresponsiveness are less frequent.1, 4 We previously reported on mostly mild chest computed tomography (CT) parenchymal findings (eg, interstitial abnormalities and emphysema).5 Quantitative chest computed tomography (QCT) measurements6 can help to characterize the presumed inflammatory changes underlying those heterogeneous WTC‐related lower airway disorders and lung function changes, even if mild. Although post‐WTC longitudinal follow‐up of lung function among workers in the WTC occupational cohorts has suggested a normal age‐related mean expiratory flow decline,7, 8, 9 that average hides widely divergent longitudinal lung function trajectories.9 We hypothesized that both increased (eg, from early interstitial lung disease) and decreased (eg, from early emphysema) lung density markers could help to explain the adverse lung function trajectories in a cohort under long‐term health surveillance. In this study, we used longitudinal spirometry data to examine those trajectories, and the association of QCT metrics of increased and decreased lung density with accelerated FEV1 decline among WTC rescue and recovery workers.

METHODS: Subjects and clinical data acquisition: All subjects were members of the WTC General Responders Cohort (GRC) and participated in the screening, surveillance, and clinical programs of the WTC Clinical Center of Excellence at Mount Sinai Medical Center (New York, NY).
They were also part of the subcohort (n = 1,641) evaluated by the WTC Pulmonary Evaluation Unit (WTC PEU), who underwent chest CT scanning between 2003 and 2012, as part of their diagnostic evaluation. Details on subject recruitment, eligibility criteria, and screening and surveillance protocols have been previously reported.10 In brief, participants were all workers and volunteers who performed rescue, recovery, and service restoration duties at the WTC disaster site from 11 September 2001 to June 2002. This open cohort includes all occupational groups deployed at the WTC disaster site.11 Beginning in July 2002, all subjects underwent a baseline screening evaluation, which included questionnaires on respiratory and other symptoms, pre‐WTC‐ and WTC‐related occupational exposures, physical examination (including height and weight), laboratory testing, and spirometry. Subsequent (“monitoring”) health surveillance visits included a similar evaluation at 12‐ to 18‐month intervals, and clinical services were offered for individualized diagnostic and treatment services.1, 2

CT imaging procedures: All CT studies were obtained at Mount Sinai Hospital, in General Electric® or Siemens® multidetector row chest CT scanners. Chest CT studies were performed using an institutional clinical protocol12 with a radiation dose at 120 kVp, and a mean of 146 (SD 69) mAs, with subjects in the supine position, noise correction, and routine periodic scanner calibration. CT scans were obtained from the lung apices to the bases in a single breath hold at maximum inspiration, and we excluded those with section thicknesses exceeding 1.5 mm, contrast administration, or respiratory or motion artifacts. All deidentified and coded chest CT images were stored and cataloged during the past 5 years in the WTC PEU Chest CT Image Archive (ClinicalTrials.gov identifier NCT03295279).5
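The technical exclusion rules for QCT eligibility (section thickness over 1.5 mm, contrast administration, motion artifacts) amount to a simple predicate over per‐study metadata. A minimal sketch in Python; the record field names are illustrative, not taken from the WTC PEU image archive:

```python
# Hypothetical per-study metadata records; field names are illustrative only.
def usable_for_qct(study):
    """Encode the stated exclusions: section thickness > 1.5 mm,
    contrast administration, or respiratory/motion artifacts."""
    return (study["slice_thickness_mm"] <= 1.5
            and not study["contrast"]
            and not study["motion_artifact"])

studies = [
    {"slice_thickness_mm": 2.5, "contrast": False, "motion_artifact": False},
    {"slice_thickness_mm": 1.25, "contrast": False, "motion_artifact": False},
    {"slice_thickness_mm": 1.0, "contrast": True, "motion_artifact": False},
]
usable = [s for s in studies if usable_for_qct(s)]
print(len(usable))  # 1 — only the thin-section, non-contrast, artifact-free scan
```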
Inclusion criteria and QCT systems: Inclusion into this study required that the WTC workers had (1) at least three screening and surveillance spirometries, (2) an adequate‐quality study for QCT measurements of the lung parenchyma performed with the Simba system (http://www.via.cornell.edu/simba/simba),13 and (3) complete data for all covariates of interest. None of the chest CT scans was obtained for investigation of acute processes (infections, congestive heart failure, etc.). For this study, we selected the first adequate‐quality and available chest CT scan on each subject. For descriptive purposes, and as previously reported,5 each study was also read by research radiologists using the International Classification of High‐resolution Computed Tomography for Occupational and Environmental Respiratory Diseases (ICOERD)14 to classify and grade dichotomously and/or semiquantitatively eight main types of abnormalities.
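The three inclusion criteria can be read as a conjunction over each worker's records. A minimal sketch, assuming hypothetical field names (none come from the cohort database):

```python
def eligible(worker):
    """The three stated criteria: >= 3 acceptable spirometries, an
    adequate-quality QCT study, and complete covariate data."""
    return (len(worker["spirometries"]) >= 3
            and worker["qct"] is not None
            and all(v is not None for v in worker["covariates"].values()))

worker = {
    "spirometries": [2002, 2004, 2006, 2008],  # visit years with acceptable tests
    "qct": {"lav_pct": 1.8, "hav_pct": 4.2},   # illustrative Simba-derived metrics
    "covariates": {"age": 42, "sex": "M", "height_cm": 175, "bmi": 29.2},
}
print(eligible(worker))  # True
```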
Spirometry: Spirometries were performed using a single device, the EasyOne® portable flow device (ndd Medizintechnik AG, Zurich, Switzerland). Bronchodilator response (BDR) was assessed at least once in the majority of subjects (most often at their baseline visit) by repeating spirometry 15 minutes after administration of 180 mcg of albuterol via metered dose inhaler and disposable spacer. Predicted values for spirometric measurements were calculated for all subjects’ acceptable tests, based on reference equations from the third National Health and Nutrition Examination Survey (NHANES III),15 and all testing, quality assurance, ventilatory impairment pattern definitions, and interpretative approaches followed American Thoracic Society recommendations.16, 17, 18 We selected spirometries for this study if they had a computer quality grade of A or B, or C if at least five trials had been obtained.18

Measurements: We defined a priori, modeled, and plotted the three types of longitudinal post‐WTC FEV1 trajectories (see below, under Statistical Analysis). For the multinomial logistic regression model (see below) with the three trajectories of longitudinal FEV1 as outcomes, we used as main predictors the two QCT‐measured lung density indicators. For low parenchymal density (radiological attenuation), suggestive of emphysema, we used low attenuation volume percent at −950 HU (LAV%, also known as EI950). Categorization was necessary given the variable distribution, using a cut point of 2.5%, based on previously published findings in a nonsmoking healthy multiethnic population.19 For high lung parenchymal density, which could suggest interstitial disease changes, we used high attenuation volume percent from −600 to −250 HU (HAV%). Although a study in smokers used a cut point of 10%20 for HAV%, we felt that using the top decile (6.25%) as the cut point for our study was more appropriate, and equally necessary given the variable distribution.
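The two density metrics are voxel‐count fractions over the segmented lung: LAV% is the percentage of voxels below −950 HU, and HAV% the percentage between −600 and −250 HU (whether those bounds are inclusive is an implementation detail not specified here). A minimal sketch with an invented voxel sample, applying the 2.5% and 6.25% cut points described above:

```python
def lav_pct(hu_values):
    """Low attenuation volume %: fraction of lung voxels below -950 HU."""
    return 100.0 * sum(1 for hu in hu_values if hu < -950) / len(hu_values)

def hav_pct(hu_values):
    """High attenuation volume %: fraction of voxels in [-600, -250] HU
    (bounds treated as inclusive here, an assumption)."""
    return 100.0 * sum(1 for hu in hu_values if -600 <= hu <= -250) / len(hu_values)

# Toy voxel sample standing in for a segmented lung volume.
voxels = [-980] * 3 + [-850] * 90 + [-400] * 7
print(lav_pct(voxels), hav_pct(voxels))  # 3.0 7.0
high_lav = lav_pct(voxels) > 2.5    # emphysema-leaning marker
high_hav = hav_pct(voxels) > 6.25   # interstitial-leaning marker
```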
Statistical analysis: We used linear regression to calculate the slope of FEV1 (FEV1slope) on each of 1,321 subjects who had a minimum of three periodic measurements.21 We then estimated the average group decline, and its standard deviation, to classify FEV1slopes into three trajectories defined as follows: accelerated FEV1 decline (“accelerated decliner”) by an FEV1slope < −66.5 mL/year (ie, exceeding the group mean + 0.5 SD), excessive FEV1 gain (“improved”) by an FEV1slope > 0 mL/year, and intermediate FEV1 decline (“intermediate decliner”) by an FEV1slope between 0 and −66.5 mL/year (ie, between 0 and the group mean + 0.5 SD). We also estimated the root mean squared error (RMSE) as an indicator of group FEV1 variability.9 Descriptive statistics included means and standard deviations (SDs), and medians and interquartile ranges (IQR) for normally and non‐normally distributed continuous variables, respectively; counts and proportions were used for categorical variables. Unadjusted bivariate analyses included the t‐test, chi‐squared test, or Wilcoxon rank test, as appropriate. We also employed standardized differences (StD)22 for the descriptive comparisons of the ICOERD findings with the outcomes (FEV1 trajectories) and main predictors (QCT lung density indicators), with a StD > 0.2 suggesting a potentially important difference. Covariates included in all multivariable models were age on 11 September 2001, sex, height, ethnicity/race (Latino, non‐Latino White, and non‐Latino other races), body mass index (BMI) at first evaluation and individual BMI trajectories by linear regression (BMI slope, in kg/m2/year), FEV1 percent predicted at baseline, evidence of bronchodilator response at any visit (BDRany, dichotomous), smoking status (never, former, and current smokers), and smoking intensity (in pack‐years) at the baseline examination.
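The per‐subject slope and the a priori cut points translate directly into code. A minimal sketch with a toy FEV1 series; the ordinary least‐squares slope below stands in for the per‐subject linear regression, and the data are invented:

```python
def slope_ml_per_year(years, fev1_ml):
    """Ordinary least-squares slope of FEV1 (mL) on follow-up time (years)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(fev1_ml) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, fev1_ml))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

def classify(slope, cut=-66.5):
    """Trajectory labels from the a priori cut point (group mean + 0.5 SD)."""
    if slope < cut:
        return "accelerated decliner"
    if slope > 0:
        return "improved"
    return "intermediate decliner"

# Toy series: FEV1 falling 100 mL/year over five annual visits.
yrs = [0, 1, 2, 3, 4]
fev = [3500, 3400, 3300, 3200, 3100]
s = slope_ml_per_year(yrs, fev)
print(s, classify(s))  # -100.0 accelerated decliner
```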
Occupational WTC exposure indicators included WTC arrival within 48 hours (dichotomous), and WTC exposure duration (in days, or per 100‐day units).1, 9 A subject was considered a never smoker if (s)he had smoked less than 20 packs of cigarettes (or 12 oz. of tobacco) in a lifetime, or less than 1 cigarette/day (or 1 cigar/week) for 1 year. A minimum of 12 months without tobacco use was required to deem a subject a former smoker. In the model with QCT predictors, we included scanner manufacturer and slice thickness (in mm) to adjust for potentially important CT scan technical differences. We first used linear mixed random effects modeling to plot the above described longitudinal FEV1 trajectories. Mean absolute FEV1 was estimated for each 1‐year period between 2001 and 2015 (the actual follow‐up year was rounded). In this multivariable model, age on 11 September 2001, BMI slope, height, sex, ethnicity/race, and bronchodilator response (BDRany) were included as fixed effects, centering the first three at the mean values for the cohort. Random intercepts accounted for between‐subject variability, and repeated measures correlations accounted for intra‐subject variability. In order to estimate the effect of increased and decreased lung density on longitudinal FEV1 trajectories after 11 September 2001, we employed multinomial logistic regression for the fully specified multivariable analysis of the association between QCT lung density indicators and the categorical outcome of FEV1 trajectories (“accelerated decline,” “intermediate decline,” and “improved lung function”). We excluded multicollinearity by the variance inflation factor method and tested for interactions. In this analysis, we used multiple imputation (MI) with full conditional specification to account for missing QCT measurements. Results of complete case and imputed analyses were similar, the observed effects had the same direction and statistical significance, so we only report the latter. 
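The smoking‐status definitions above combine lifetime‐consumption and cessation rules. A simplified sketch; the cigarette/cigar/loose‐tobacco equivalences are collapsed into the 20‐pack criterion as an illustrative shortcut, so this is not the exact study algorithm:

```python
def smoking_status(lifetime_packs, months_since_quit, currently_smokes):
    """Rule-of-thumb encoding of the definitions above (simplified):
    'never' below the lifetime-consumption threshold; 'former' only
    after at least 12 months without tobacco use."""
    if lifetime_packs < 20:
        return "never"
    if currently_smokes:
        return "current"
    return "former" if months_since_quit >= 12 else "current"

print(smoking_status(5, 0, False))     # never
print(smoking_status(300, 24, False))  # former
print(smoking_status(300, 6, False))   # current (quit < 12 months ago)
```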
Model performance was assessed by means of the c statistic. A two‐sided p value less than 0.05 defined statistical significance. The SAS program, version 9.4 (SAS Institute, Cary, NC) was used for all analyses.
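The standardized differences used for the descriptive comparisons can be computed for a continuous variable as the difference in group means over the pooled SD; this is one common formulation (the paper cites ref. 22 for its exact form), shown here with invented data:

```python
import math
from statistics import mean, stdev

def standardized_difference(a, b):
    """Standardized difference between two groups for a continuous
    variable: (mean_a - mean_b) / sqrt((sd_a^2 + sd_b^2) / 2).
    One common formulation; assumed, not verified against ref. 22."""
    sa, sb = stdev(a), stdev(b)
    return (mean(a) - mean(b)) / math.sqrt((sa ** 2 + sb ** 2) / 2)

g1 = [2.1, 2.4, 2.0, 2.6, 2.3]  # invented values for one subgroup
g2 = [2.0, 2.2, 1.9, 2.1, 2.3]  # invented values for the comparison group
std = standardized_difference(g1, g2)
print(abs(std) > 0.2)  # True — flagged as a potentially important difference
```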
RESULTS: Figure 1 shows the study flow chart. Table 1 shows that the study population consisted of 1,321 subjects who had at least three technically acceptable spirometries, with a total, median, and IQR per subject of 7,446, 6, and 4–7 studies, respectively, between July 2002 and June 2016. The mean age on 11 September 2001 was 42.1 (SD 9) years, and more than 80% were both male and either overweight or obese (mean BMI 29.2, SD 4.9 kg/m2). The mean longitudinal FEV1slope for all subjects was −40.4 mL/year (SD = 52.2 mL/year, RMSE = 0.17 mL/year). Figure 2 displays the trajectories in absolute FEV1 from 2002 to 2016, as revealed by the linear mixed random effects model. The mean longitudinal FEV1 slopes for the intermediate decline (n = 876, 66.3%), accelerated decline (n = 280, 21.2%), and improved FEV1 (n = 165, 12.5%) subgroups were, respectively, −34.3 (SD = 16.9, RMSE = 0.15) mL/year, −106.5 (SD 47.1, RMSE 0.21) mL/year, and 37.6 (SD 55.2, RMSE 0.21) mL/year.
The observed differences with the intermediate decline subgroup in mean FEV1 at the last follow‐up visit were significant for both the improved (0.26 L, P = 0.004) and the accelerated decline (−0.44 L, P < 0.0001) subgroups.

Figure 1: Study flowchart.

Figure 2: Post‐11 September 2001 grouped longitudinal FEV1 trajectories in 1,321 former nonfirefighting WTC workers. Yearly mean FEV1 in liters from 2002 to 2016, adjusted for WTC occupational exposure (arrival within 48 hours of the disaster), age on 11 September 2001, sex, height, ethnicity/race, BMI slope, baseline smoking status, and BDRany. The three post‐11 September 2001 FEV1 trajectories are: accelerated decline (purple circle, solid line), intermediate decline (green triangle, short dash), and improved FEV1 (yellow square, long dash). Error bars show the standard error of the mean. Numbers below the x‐axis represent the sample size at each time point. The dotted vertical line represents 11 September 2001. For reference, selected chest CT scans were obtained a median of 7.09 (IQR 5.75–8.61) years after 11 September 2001 (illustrated by the thick bar above the horizontal time line).

Table 1: Patient characteristics and unadjusted comparisons of the decliner and improved groups versus the intermediate decline group. Statistically significant differences are bolded. P‐values are for 2‐sample comparisons between the accelerated and intermediate decline groups, and between the improved and intermediate decline groups. Medians (IQR) are reported due to skewed distributions.

In order to examine the association of increased and decreased lung density with the aforementioned post‐WTC FEV1 trajectories, we selected the first available chest CT scan with those measurements, obtained a median of 7.09 (IQR 5.75–8.61) years after 11 September 2001.
Unadjusted comparisons to the intermediate decliners (Table 1) showed that the “accelerated decliners” were significantly more likely to have higher categorical LAV% and HAV% (P < 0.001 and P = 0.009, respectively), to be male, taller, non‐Latino/White, and ever smokers, to gain weight over time, and to have higher baseline FEV1% predicted, BDRany, and shorter WTC exposure duration. Compared with the intermediate decliners, the improved FEV1 subjects were more likely to have high LAV% (P = 0.039), to be male, younger, and taller, to have higher BMI at baseline, and to lose weight on follow‐up. They also had a lower baseline FEV1% predicted, were more likely to have BDRany, to have arrived early at the WTC disaster site, and to have had a shorter WTC exposure duration. As expected, the mostly subtle radiological findings associated with high HAV% were linear and ground glass opacities, inhomogeneous attenuation, and honeycombing, while high LAV% was mainly associated with emphysema, but also with the presence of any well‐rounded or large opacities (Table OS1). We found no unadjusted (Table OS1) or adjusted23 (data not presented) association between the visual radiologic abnormalities and the lung function trajectories. As emphysema and interstitial lung disease share some causal factors (eg, tobacco and occupational exposures) and can coexist,24,25 we checked for overlap between the two QCT predictors. Only one study subject, with both emphysema and inhomogeneous attenuation, had both high HAV% and high LAV% as defined, so the two predictors were essentially independent of each other. Table 2 shows the results of the multinomial logistic regression analysis of lung density measures and lung function trajectories in 1,321 subjects.
In this analysis, both categorical LAV% and HAV% were significantly associated with accelerated FEV1 decline (ORadj, 95% CI: 2.37, 1.41–3.97, and 1.77, 1.08–2.89, respectively), but not with improved FEV1. Table 2: Comparisons of QCT lung density metrics (LAV% and HAV%) between the accelerated decliner (n = 280) and the improved (n = 165) subgroups, respectively, versus the intermediate decliner (n = 876) subgroup. Significant associations are bolded. DISCUSSION: In this study, we extend our previous assessment9 of the rate of FEV1 decline in a diverse group of former WTC workers11 over a longer period of time. We also demonstrated the association of accelerated FEV1 decline with QCT‐measured decreased (LAV%) and increased (HAV%) lung density metrics. The subgroup with abnormal lung function gain or improvement, which had an inconsistent association with proximal airway wall thickness in our previous study,9 failed to demonstrate a significant association with either LAV% or HAV% in this study. The recently described9, 26 divergent long‐term FEV1 trajectories in the WTC worker and volunteer cohorts motivated an examination of QCT metrics and lung function decline. We previously established that wall area percent, a QCT metric of proximal airway inflammation, is associated with accelerated FEV1 decline.9 We hypothesized that changes in lung density, which QCT may help to identify at early stages, could also be associated with the diverging trajectories (accelerated decline and improvement). Chronic parenchymal lung diseases associated with decreased (eg, emphysema) and increased (eg, interstitial lung disease) lung density usually have a long latency after inciting exposures, and the included CT studies were obtained a median of 7.1 years after 11 September 2001.
Our previous systematic, semiquantitative readings of the CT scans noted usually mild emphysema and interstitial lung disease, each in about 10% of this cohort.5 Because the findings were so mild, we hypothesized that QCT density metrics corresponding to those abnormalities could be used to quantify their extent more precisely, and to assess whether they were significantly associated with adverse functional outcomes, even if quantitatively mild. Our findings suggest that they are, that further follow‐up is warranted, and that QCT can play a role in longitudinal respiratory surveillance of this cohort. Extending our previous study9 to delineate the longitudinal FEV1 trajectories better further suggests that the improved function group may have experienced the earliest and deepest lung function decrease (i.e., they were the “rapid accelerated decliners” after the WTC exposures) but then demonstrated recovery toward higher pre‐WTC FEV1 levels, as suggested by a WTC firefighters’ study.26 The latter can be accounted for by a calculated difference in pre‐WTC predicted FEV1 of 190 mL in favor of the improved, compared to the intermediate decline, subjects (data not presented), which is in turn expected given the sex and height differences between the subgroups (Table 2). The accelerated decline subgroup had a similarly higher pre‐WTC predicted FEV1 than the intermediate group, which further underscores the severity of their longitudinal functional loss. Although our models suggested several important predictive factors for those divergent trajectories, further characterization and follow‐up are warranted. The QCT emphysema metric (LAV%) has been used extensively in studies of smokers and COPD (reviewed in 27).
The QCT high lung density metric (HAV%), in contrast, has been used less often, mainly in the investigation of interstitial changes in COPD,20 and of adverse effects on lung function in patients with pulmonary fibrosis28 or from cigarette smoking.29 Aside from studies with a predominance of smokers, HAV% was also modestly predictive of lung volume loss in subjects from the Framingham Heart Study.30 The interstitial abnormalities measured by HAV% may be reversible. Our findings, and those of previous studies, support the longitudinal and quantitative investigation of increased and decreased lung density in occupationally exposed cohorts like that of the WTC responders. The pathogenesis of both COPD and interstitial fibrosis31,32 is thought to share trajectories marked by a series of pulmonary tissue injuries, including those caused by environmental toxicants such as tobacco, occupational vapors, dust, and gases, as well as by dysregulated aging,20,33,34 developmental, and other factors. The factors that determine the diverging pathophysiologic paths between the two, however, remain to be elucidated. While both diseases can indeed coexist,24,25 we found no substantial evidence of overlap of their respective QCT markers in our study group. While there is thus far no conclusively documented evidence that WTC‐related exposures cause either COPD or interstitial fibrosis in the WTC cohorts, our models included both tobacco smoking and WTC occupational exposure variables as covariates, not as main predictors. Properly designed studies are needed to elucidate their potential causative roles. The strengths of this study are the richness of the patient population, the amount of data available for important covariates, and the availability of extensive and detailed imaging data, unique among WTC cohort studies. This study also has some limitations.
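Both attenuation metrics reduce to simple voxel counting over the segmented-lung Hounsfield unit (HU) values. A minimal sketch, assuming LAV% counts voxels at or below −950 HU (the conventional emphysema threshold; the study's abstract states only "−950 HU") and HAV% counts voxels in the −600 to −250 HU band inclusive; the function name is illustrative:

```python
def attenuation_volume_percents(lung_hu):
    """Return (LAV%, HAV%) for a flat list of segmented-lung voxel HU values.

    LAV%: percent of voxels <= -950 HU (assumed low-attenuation cutoff).
    HAV%: percent of voxels in [-600, -250] HU (high-attenuation band).
    """
    n = len(lung_hu)
    lav = 100.0 * sum(1 for v in lung_hu if v <= -950) / n
    hav = 100.0 * sum(1 for v in lung_hu if -600 <= v <= -250) / n
    return lav, hav

# Ten illustrative lung voxels: 2 emphysema-range, 4 high-attenuation-range
lav, hav = attenuation_volume_percents(
    [-980, -1000, -900, -860, -700, -550, -500, -400, -300, -100]
)
print(lav, hav)
```

In practice these counts are taken over the full 3D lung segmentation of the CT volume; the categorical "high LAV%" and "high HAV%" predictors in the study are then derived from such percentages.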
We did not assess the effect of pre‐WTC occupational exposures,5 of smoking after the baseline examination, or of smoking intensity. Periodic cross‐sectional assessments of smoking status in this cohort, moreover, suggest steadily decreasing current smoking rates.3 Our study subjects underwent chest CT scanning for a variety of reasons, often unrelated to abnormal lung function (eg, investigation of nodules, atelectatic densities, etc.). This represents an unavoidable selection bias, although it is also what allowed us to obtain QCT measurements. We measured the indicators of lung density on CTs performed a median of 7.1 years after 11 September 2001, at a time when the three FEV1 trajectories were already clearly established (Figure 1). Our observations in this and our previous study on the divergent FEV1 trajectories9 were similar to those reported in the WTC firefighters’ cohort,26 where the overall longitudinal FEV1 decline was −32 mL/year, versus −40.4 mL/year in our cohort. Preliminary data on the Mount Sinai WTC General Responders Cohort (n = 15,753), which includes our cohort, indicate a longitudinal FEV1 decline of −33.2 mL/year,4 further supporting our a priori trajectory classifications. We chose FEV1 for this study because of its reliability and repeatability, particularly in the setting of large screening and surveillance spirometry programs. All WTC occupational studies with spirometry data have shown nonobstructive low FVC as the largely predominant spirometric impairment,3,4 and thus essentially parallel trajectories of FEV1 and FVC.7 Future studies should, however, examine the differential effect of our and other QCT markers in subgroups of patients.
We lacked comparison QCT imaging data from a well‐defined control group of occupationally and WTC unexposed, totally asymptomatic subjects with normal spirometry and chest radiographs, but we believe that our QCT lung density metric cutoff criteria were adequate for these analyses. We used retrospective chest CT imaging data, which were subject to slight variations in protocols over time. However, most studies were performed on a very small number of scanners, at a single location, and with intended technical consistency. In addition, quality control was applied to exclude CT scan studies that did not meet technical standards for quantitative imaging analyses, and, as a result, our models did not suggest significant effects of scanner brand or slice thickness. As the potential for the development of chronic lung diseases looms over this cohort,35,36 this and our previous studies1,5,9 underscore the need for surveillance and suggest a potential role for both qualitative and quantitative chest CT in the ongoing evaluation of lung function changes and disease transitions in this cohort. CONFLICT OF INTEREST: The authors had no other relevant financial conflicts of interest to disclose. The contents of this article are the sole responsibility of the authors and do not necessarily represent the official views of the CDCP/NIOSH. The funding agencies had no role in the study design; in the collection, analysis, or interpretation of the data; in the writing of the report; or in the decision to submit this article for publication. AUTHOR CONTRIBUTIONS: de la Hoz, Estépar, and Celedón designed and oversaw the study and selected analytical strategies. Liu, Antoniak, Jeon, Weber, and Doucette performed all statistical analyses. Reeves performed the QCT measurements and Xu participated in the radiological readings (ICOERD data). All authors contributed to writing, reviewed and revised the drafts, and approved the final manuscript.
de la Hoz had full access to all the data in the study and had final responsibility for the decision to submit for publication. ETHICS STATEMENT: The study was conducted in accordance with the guidelines for human studies of the amended Declaration of Helsinki. The Mount Sinai Program for the Protection of Human Subjects (HS12‐00925) reviewed and approved the study protocol. Funding information: This work was supported by grants U01 OH010401 and U01 OH011697 (RED, PI), and contract 200‐2017‐93325 (WTC General Responders Cohort Data Center) from the Centers for Disease Control and Prevention/National Institute for Occupational Safety and Health (CDCP/NIOSH). KA’s work was funded by a short‐term research training program for minority students from the National Heart, Lung, and Blood Institute (grant R25 HL108857). Supporting information: Table S1 Click here for additional data file.
Background: Occupational exposures at the WTC site after 11 September 2001 have been associated with presumably inflammatory chronic lower airway diseases. Methods: We examined the trajectories of expiratory air flow decline in a group of 1,321 former WTC workers and volunteers with at least three periodic spirometries, using QCT-measured low (LAV%, -950 HU) and high (HAV%, from -600 to -250 HU) attenuation volume percent. We calculated the individual regression line slopes for first-second forced expiratory volume (FEV1 slope), identified subjects with rapidly declining ("accelerated decliners") and increasing ("improved") FEV1, and compared them to subjects with an "intermediate" (0 to -66.5 mL/year) FEV1 slope. We then used multinomial logistic regression to model those three trajectories against the two lung attenuation metrics. Results: The mean longitudinal FEV1 slopes for the entire study population and its intermediate, decliner, and improved subgroups were, respectively, -40.4, -34.3, -106.5, and 37.6 mL/year. In unadjusted and adjusted analyses, LAV% and HAV% were both associated with "accelerated decliner" status (ORadj, 95% CI: 2.37, 1.41-3.97, and 1.77, 1.08-2.89, respectively), compared to the intermediate decline group. Conclusions: Longitudinal FEV1 decline in this cohort, known to be associated with a QCT proximal airway inflammation metric, is also associated with QCT indicators of increased and decreased lung density. The improved FEV1 trajectory did not appear to be associated with the lung density metrics.
null
null
7,755
294
15
[ "wtc", "fev1", "lung", "study", "ct", "trajectories", "qct", "decline", "density", "year" ]
[ "test", "test" ]
null
null
[CONTENT] CT–lung | helical computed tomography | imaging of the chest | inhalation injury | lung function decline | lung function trajectories | multivariate analysis of prognostic factors | occupational respiratory diseases | World Trade Center‐related lung disease [SUMMARY]
[CONTENT] CT–lung | helical computed tomography | imaging of the chest | inhalation injury | lung function decline | lung function trajectories | multivariate analysis of prognostic factors | occupational respiratory diseases | World Trade Center‐related lung disease [SUMMARY]
[CONTENT] CT–lung | helical computed tomography | imaging of the chest | inhalation injury | lung function decline | lung function trajectories | multivariate analysis of prognostic factors | occupational respiratory diseases | World Trade Center‐related lung disease [SUMMARY]
null
[CONTENT] CT–lung | helical computed tomography | imaging of the chest | inhalation injury | lung function decline | lung function trajectories | multivariate analysis of prognostic factors | occupational respiratory diseases | World Trade Center‐related lung disease [SUMMARY]
null
[CONTENT] Child | Female | Forced Expiratory Volume | Humans | Lung | Lung Diseases | Male | Occupational Exposure | September 11 Terrorist Attacks | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Child | Female | Forced Expiratory Volume | Humans | Lung | Lung Diseases | Male | Occupational Exposure | September 11 Terrorist Attacks | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Child | Female | Forced Expiratory Volume | Humans | Lung | Lung Diseases | Male | Occupational Exposure | September 11 Terrorist Attacks | Tomography, X-Ray Computed [SUMMARY]
null
[CONTENT] Child | Female | Forced Expiratory Volume | Humans | Lung | Lung Diseases | Male | Occupational Exposure | September 11 Terrorist Attacks | Tomography, X-Ray Computed [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] wtc | fev1 | lung | study | ct | trajectories | qct | decline | density | year [SUMMARY]
[CONTENT] wtc | fev1 | lung | study | ct | trajectories | qct | decline | density | year [SUMMARY]
[CONTENT] wtc | fev1 | lung | study | ct | trajectories | qct | decline | density | year [SUMMARY]
null
[CONTENT] wtc | fev1 | lung | study | ct | trajectories | qct | decline | density | year [SUMMARY]
null
[CONTENT] lung | airway | lung function | function | eg | wtc | chest computed tomography | chest computed | lower airway | eg early [SUMMARY]
[CONTENT] fev1 | wtc | included | ct | year | trajectories | chest ct | mean | chest | lung [SUMMARY]
[CONTENT] fev1 | decline | intermediate | improved | mean | september 2001 | september | 11 september 2001 | 11 september | 11 [SUMMARY]
null
[CONTENT] fev1 | wtc | lung | study | ct | trajectories | decline | chest | data | qct [SUMMARY]
null
[CONTENT] WTC | 11 September 2001 [SUMMARY]
[CONTENT] 1,321 | WTC | at least three | HU ||| first ||| ||| three | two [SUMMARY]
[CONTENT] FEV1 | 37.6 mL/year ||| LAV% | ORadj | 95% | CI | 2.37 | 1.41 | 1.77 | 1.08 [SUMMARY]
null
[CONTENT] WTC | 11 September 2001 ||| 1,321 | WTC | at least three | HU ||| first ||| ||| three | two ||| FEV1 | 37.6 mL/year ||| LAV% | ORadj | 95% | CI | 2.37 | 1.41 | 1.77 | 1.08 ||| ||| [SUMMARY]
null
Decreased oxygen extraction during cardiopulmonary exercise test in patients with chronic fatigue syndrome.
24456560
The insufficient metabolic adaptation to exercise in Chronic Fatigue Syndrome (CFS) is still being debated and poorly understood.
BACKGROUND
We analysed the cardiopulmonary exercise tests of CFS patients, idiopathic chronic fatigue (CFI) patients and healthy visitors. Continuous non-invasive measurement of the cardiac output by Nexfin (BMEYE B.V. Amsterdam, the Netherlands) was added to the cardiopulmonary exercise tests. The peak oxygen extraction by muscle cells and the increase of cardiac output relative to the increase of oxygen uptake (ΔQ'/ΔV'O₂) were measured, calculated from the cardiac output and the oxygen uptake during incremental exercise.
METHODS
The peak oxygen extraction by muscle cells was 10.83 ± 2.80 ml/100 ml in 178 CFS women, 11.62 ± 2.90 ml/100 ml in 172 CFI, and 13.45 ± 2.72 ml/100 ml in 11 healthy women (ANOVA: P=0.001); it was 13.66 ± 3.31 ml/100 ml in 25 CFS men, 14.63 ± 4.38 ml/100 ml in 51 CFI, and 19.52 ± 6.53 ml/100 ml in 7 healthy men (ANOVA: P=0.008). The ΔQ'/ΔV'O₂ was > 6 L/L (normal ΔQ'/ΔV'O₂ ≈ 5 L/L) in 70% of the patients and in 22% of the healthy group.
RESULTS
Low oxygen uptake by muscle cells causes exercise intolerance in a majority of CFS patients, indicating insufficient metabolic adaptation to incremental exercise. The high increase of the cardiac output relative to the increase of oxygen uptake argues against deconditioning as a cause for physical impairment in these patients.
CONCLUSION
[ "Adult", "Demography", "Exercise Test", "Fatigue Syndrome, Chronic", "Female", "Heart", "Humans", "Lung", "Male", "Oxygen", "Oxygen Consumption" ]
3903040
Background
Exercise intolerance is a frequent complaint from patients who meet the criteria for Chronic Fatigue Syndrome/Myalgic Encephalitis (CFS) and Idiopathic Chronic Fatigue (CFI) [1]. Objective tests for physical impairment measure the maximal oxygen uptake (peak V’O2) during a cardiopulmonary exercise test (CPET) [2-4]. Most studies agree that peak V’O2 is lower in CFS, but we need to understand the cause of the lower peak V’O2 to explain the pathogenesis. The V’O2 depends on the uptake, transport and metabolism of oxygen in the muscle cells during physical exercise. In most CPET studies in CFS patients, the limitation of peak V’O2 is not attributed to a lower uptake and transport of oxygen to the muscle. A lower metabolic capacity of the muscle cell would change the demand for oxygen and thus lower the oxygen extraction (C(a-v)O2) and increase the cardiac output relative to V’O2 (ΔQ’/ΔV’O2) [5,6]. In previous studies we, and others, did not find impaired mitochondrial activity to be a cause for a lower peak V’O2[4,7], but abnormal mitochondrial activity was reported by some in CFS and CFI [8-10]. The aim of the present retrospective study was to determine to what extent the physical impairment in CFS and CFI was attributable to changes in uptake, transport and metabolism of oxygen in the muscle cells.
Methods
Data was collected from patients who attended the CFS Medical Centre Amsterdam. The data of sedentary men and women, physically active less than 1 hour per week, was added to the patient database. This group comprised visitors to the centre for cardiopulmonary exercise tests, check-ups and training program advice over the same period. We obtained information about the health status of this group, but laboratory tests were not included. Subjects using medication that could possibly influence the pulmonary, cardiovascular or immunologic system, or cellular respiration, were not included in the study. Chronic fatigue patients were assigned to the CFS or CFI group according to the criteria of Fukuda [1]. Operational criteria for assignment were: a score of >40 on the fatigue subscale of the Checklist Individual Strength (CIS-20) [11]; scores of ≤35 for Vitality, ≤62.5 for Social Functioning and ≤50 for Role-Physical on the SF-36 [12]; and ≥4 positive scores (≥7.5) on the additional symptoms of the CDC Symptom Inventory-DLV [13] for the diagnosis of CFS, or ≤4 positive scores (≥7.5) on the additional symptoms of the CDC Symptom Inventory-DLV for the diagnosis of idiopathic chronic fatigue (CFI). The cognitive function of patients was screened with the Shifting Attentional Test Visual of the Amsterdam Neuropsychological Tasks [14]. The normal Z-score in this test is -2 to +2. All patients completed a CPET as part of the diagnostic procedure. The protocol of the CPET was described before [4]. All subjects performed a symptom-limited CPET on a cycle ergometer (Excalibur, Lode, Groningen, The Netherlands) as described by Wassermann et al. [15]: 3 min without activity, 3 min of unloaded pedalling, followed by cycling against increasing resistance until exhaustion (ramp protocol), and concluded by 3 min of cycling without resistance. The work rate increase was estimated from history, physical examination, gender, weight and height.
Verbal encouragement was used to maximise performance during the last phase of incremental exercise. Exhaustion of the leg muscles was the limiting symptom in all participants. The V’E, V’O2, V’CO2 and oxygen saturation were continuously measured (Metasoft). The ECG was continuously recorded and blood pressure was measured every 2 min. Maximal exercise capacity was expressed as peak oxygen uptake per kilogram bodyweight and as a percentage of predicted maximal oxygen uptake [16]. Non-invasive stroke volume measurement by continuous beat-to-beat pulse contour analysis (Nexfin) was added to the standard measurements of the CPET [17]. Oxygen extraction and ΔQ’/ΔV’O2 were calculated from the oxygen uptake and the cardiac output (C(a-v)O2 = 100 × V’O2/CO and ΔQ’/ΔV’O2 = (Q’max − Q’rest)/(V’O2max − V’O2rest)) [18]. All subjects signed an informed consent for the use of the data for analysis and publication. Statistical analysis was conducted using the IBM Statistical Package for the Social Sciences (19.0 for Windows, Chicago, Ill, US). The results are presented as the mean ± standard deviation (SD). Kolmogorov-Smirnov tests with Lilliefors significance correction for normality showed that the data were normally distributed. Differences between groups were tested with Analysis of Variance (ANOVA) and Bonferroni post hoc analysis. The Pearson product-moment correlation coefficient was calculated as a measure of the strength of the relationship between 2 variables. Statistical significance of two-tailed tests was determined by an alpha level of 0.05.
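The two derived quantities follow directly from the formulas stated above. A minimal sketch, with V'O2 and cardiac output both in L/min so that C(a-v)O2 comes out in mL O2 per 100 mL of blood; the function names and the example numbers are ours, not from the study:

```python
def oxygen_extraction(vo2_l_min, cardiac_output_l_min):
    """C(a-v)O2 = 100 * V'O2 / Q', in mL O2 per 100 mL blood."""
    return 100.0 * vo2_l_min / cardiac_output_l_min

def delta_q_over_delta_vo2(q_rest, q_max, vo2_rest, vo2_max):
    """ΔQ'/ΔV'O2: rise of cardiac output per unit rise of oxygen uptake (L/L)."""
    return (q_max - q_rest) / (vo2_max - vo2_rest)

# Illustrative peak values: V'O2 2.0 L/min at a cardiac output of 15 L/min,
# rising from resting values of 0.3 L/min and 5 L/min respectively.
print(oxygen_extraction(2.0, 15.0))                  # ~13.3 mL/100 mL
print(delta_q_over_delta_vo2(5.0, 15.0, 0.3, 2.0))   # ~5.9 L/L
```

A ΔQ'/ΔV'O2 well above the normal ≈5 L/L, together with a low peak extraction, is the pattern the study interprets as a metabolic limitation rather than deconditioning.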
Results
Data was collected and analysed from a total of 444 subjects who visited the CFS/ME Medical Centre Amsterdam between June 2008 and June 2013: 203 CFS patients (178 women), 223 CFI patients (172 women) and 18 healthy visitors (11 women) (Table 1). Demographic data and results Demographic data and results of cardiopulmonary exercise tests in patients with chronic fatigue syndrome, idiopathic chronic fatigue and healthy humans during rest, at the anaerobic threshold and maximal workload. BSA: body surface area, HR: heart rate, V’O2: oxygen uptake, V’O2/pred.: oxygen uptake as percentage of predicted, SVI stroke volume index, ΔQ’/ΔV’O2: increase of cardiac output relative to the increase of oxygen uptake. Post hoc Bonferroni analysis revealed that the body weight of healthy males was higher than the body weight of male CFI patients (P = 0.036, 95% CI: 0.70; 27.95). Haemoglobin was not different in CFS and CFI patients (Table 1). In female patients haemoglobin was not related to O2 extraction (r = 0.077, P = 0.155) or to the increase of cardiac output relative to oxygen uptake (ΔQ’/ΔV’O2) (r = -0.008, P = 0.882). In male patients haemoglobin was related to O2 extraction (r = 0.245, P = 0.047) but not to ΔQ’/ΔV’O2 (r = -0.140, P = 0.273). The pulmonary ventilation tests showed no difference between the CFS, CFI and healthy groups (data not shown) and the cardiac index was similar in the groups at any time during the CPET. Blood pressure was within normal limits in all tests. At the anaerobic threshold, O2 extraction in healthy women was higher than in CFS (P = 0.008, 95% CI: 0.378; 3.273). The O2 extraction in healthy men was higher than in CFS (P = 0.044, 95% CI: 0.064; 6.221) and higher than in CFI (P = 0.023, 95% CI: 0.355; 6.172). At peak exercise O2 extraction was higher in healthy women than in CFS (P = 0.010, 95% CI: 0.49; 4.74) and higher in CFI than in CFS (P = 0.030, 95% CI: 0.06; 1.52). 
The O2 extraction was higher in healthy men than in CFS (P = 0.006, 95% CI: 1.36; 10.36) and higher than in CFI (P = 0.018, 95% CI: 0.65; 9.13). The lowest level of O2 extraction at maximal workload was 10.0 ml/100 ml in the group of healthy women, 5.4 ml/100 ml in CFI and 4.4 ml/100 ml in female CFS patients. The lowest O2 extraction at maximal workload in males was 12.5 ml/100 ml in the healthy group, 8.2 ml/100 ml in CFI and 6.9 ml/100 ml in CFS patients. The increase of cardiac output relative to oxygen uptake (ΔQ’/ΔV’O2) was lower in healthy women than in CFS (P = 0.011, 95% CI: 0.45; 4.72) and in healthy men than in CFS (P = 0.037, 95% CI: 0.08; 3.58). In the fatigue patients, a low oxygen extraction (≤10 ml/100 ml) coincided with an increased response time of 2.35 ± 0.19 versus 1.79 ± 0.13 (P = 0.001, 95% CI: -1.19; -0.33) in the Shifting Attentional Test Visual of the ANT.
Conclusions
CPET with continuous measurement of cardiac output by Nexfin allowed assessment of the presence and severity of metabolic causes of exercise intolerance. This retrospective study showed that a low oxygen extraction and a high ΔQ’/ΔV’O2 were consistent with a metabolic cause of exercise intolerance in 70% of CFS patients.
[ "Background", "Limitations", "Competing interests", "Authors’ contributions" ]
[ "Exercise intolerance is a frequent complaint from patients who meet the criteria for Chronic Fatigue Syndrome/Myalgic Encephalitis (CFS) and Idiopathic Chronic Fatigue (CFI) [1].\nObjective tests for physical impairment measure the maximal oxygen uptake (peak V’O2) during a cardiopulmonary exercise test (CPET) [2-4]. Most studies agree that peak V’O2 is lower in CFS, but we need to understand the cause of the lower peak V’O2 to explain the pathogenesis.\nThe V’O2 depends on the uptake, transport and metabolism of oxygen in the muscle cells during physical exercise. In most CPET studies in CFS patients, the limitation of peak V’O2 is not attributed to a lower uptake and transport of oxygen to the muscle. A lower metabolic capacity of the muscle cell would change the demand for oxygen and thus lower the oxygen extraction (C(a-v)O2) and increase the cardiac output relative to V’O2 (ΔQ’/ΔV’O2) [5,6]. In previous studies we, and others, did not find impaired mitochondrial activity to be a cause for a lower peak V’O2[4,7], but abnormal mitochondrial activity was reported by some in CFS and CFI [8-10].\nThe aim of the present retrospective study was to determine to what extent the physical impairment in CFS and CFI was attributable to changes in uptake, transport and metabolism of oxygen in the muscle cells.", "The validity of the results of this retrospective observational study is limited by the uncontrolled inclusion of the participants. The exercise protocol for healthy visitors was not different from the protocol for fatigued patients but we have no laboratory data of healthy visitors. Therefore we cannot exclude less than optimal results in this group due to unknown diseases. The accuracy and precision of the assessment of stroke volume by Nexfin in CFS patients during CPET was not reported yet. For accuracy we used the data of comparable studies in healthy subjects, the results of a study of repeated CPET’s is needed for the assessment of precision. 
We cannot exclude influence of haemoglobin on the oxygen transport capacity of the blood in healthy visitors.", "The authors declare that they do not have competing interests.", "RV and IV contributed to the collection and analysis of the data. The first drafts of the paper were written by RV and RV and IV contributed to the final version of the article. Both authors read and approved the final manuscript." ]
[ null, null, null, null ]
[ "Background", "Methods", "Results", "Discussion", "Limitations", "Conclusions", "Competing interests", "Authors’ contributions" ]
[ "Exercise intolerance is a frequent complaint from patients who meet the criteria for Chronic Fatigue Syndrome/Myalgic Encephalitis (CFS) and Idiopathic Chronic Fatigue (CFI) [1].\nObjective tests for physical impairment measure the maximal oxygen uptake (peak V’O2) during a cardiopulmonary exercise test (CPET) [2-4]. Most studies agree that peak V’O2 is lower in CFS, but we need to understand the cause of the lower peak V’O2 to explain the pathogenesis.\nThe V’O2 depends on the uptake, transport and metabolism of oxygen in the muscle cells during physical exercise. In most CPET studies in CFS patients, the limitation of peak V’O2 is not attributed to a lower uptake and transport of oxygen to the muscle. A lower metabolic capacity of the muscle cell would change the demand for oxygen and thus lower the oxygen extraction (C(a-v)O2) and increase the cardiac output relative to V’O2 (ΔQ’/ΔV’O2) [5,6]. In previous studies we, and others, did not find impaired mitochondrial activity to be a cause for a lower peak V’O2[4,7], but abnormal mitochondrial activity was reported by some in CFS and CFI [8-10].\nThe aim of the present retrospective study was to determine to what extent the physical impairment in CFS and CFI was attributable to changes in uptake, transport and metabolism of oxygen in the muscle cells.", "Data was collected from patients who attended the CFS Medical Centre Amsterdam. The data of sedentary men and women, physically active less than 1 hour per week, was added to the patient database. This group comprised visitors to the centre for cardiopulmonary exercise tests, check-ups and training program advice over the same period. We obtained information about the health status of this group, but laboratory tests were not included. Subjects using medication that could possibly influence the pulmonary, cardiovascular, immunologic system or cellular respiration were not included in the study. 
Chronic fatigue patients were assigned to the CFS or CFI group according to the criteria of Fukuda [1]. Operational criteria for assignment were: a score of >40 on the fatigue subscale of the Checklist Individual Strength (CIS-20) [11], score of ≤35 for vitality, ≤ 62.5 for Social Functioning and ≤50 for Role-Physical on the SF-36 [12] and ≥4 positive scores (≥7.5) of the additional symptoms of the CDC Symptom Inventory –DLV [13] for the diagnosis of CFS and ≤4 positive scores (≥7.5) of the additional symptoms of the CDC Symptom Inventory –DLV for the diagnosis of idiopathic chronic fatigue (CFI).\nThe cognitive function of patients was screened with the Shifting Attentional Test Visual of the Amsterdam Neuropsychological Tasks [14]. Normal Z-score in this test is -2 to +2.\nAll patients completed a CPET as part of the diagnostic procedure. The protocol of the CPET was described before [4]. All subjects performed a symptom limited CPET on a cycle ergometer (Excalibur, Lode, Groningen, The Netherlands) as described by Wassermann et al. [15]: 3 min without activity, 3 min of unloaded pedalling, followed by cycling against increasing resistance until exhaustion (ramp protocol) and concluded by 3 min cycling without resistance. The work rate increase was estimated from history, physical examination, gender, weight and height. Verbal encouragement was used to maximise performance during the last phase of incremental exercise. Exhaustion of the leg muscles was the limiting symptom in all participants. The V’E, V’O2, V’CO2 and oxygen saturation were continuously measured (Metasoft). The ECG was continuously recorded and blood pressure was measured every 2 min. Maximal exercise capacity was expressed as peak oxygen uptake per kilogram bodyweight and as percentage of predicted maximal oxygen uptake [16]. Non-invasive stroke volume measurement by continuous beat-to-beat pulse contour analysis (Nexfin) was added to the standard measurements of the CPET [17]. 
Oxygen extraction and ΔQ’/ΔV’O2 were calculated from the oxygen uptake and the cardiac output (C(a-v)O2 = 100 × V’O2/CO and ΔQ’/ΔV’O2 = (Q’max-Q’rest)/(V’O2max-V’O2rest)) [18]. All subjects signed an informed consent for the use of the data for analysis and publication.\nStatistical analysis was conducted using the IBM Statistical Package for the Social Sciences (19.0 for Windows, Chicago, IL, USA). The results were presented as the mean ± standard deviation (SD). Kolmogorov-Smirnov tests with Lilliefors significance correction for normality showed that the data were normally distributed. Differences between groups were tested with Analysis of Variance (ANOVA) and Bonferroni post hoc analysis. Pearson product-moment correlation coefficient was calculated as a measure of the strength of the relationship between 2 variables. Statistical significance of two-tailed tests was determined by an alpha level of 0.05.", "Data was collected and analysed from a total of 444 subjects who visited the CFS/ME Medical Centre Amsterdam between June 2008 and June 2013: 203 CFS patients (178 women), 223 CFI patients (172 women) and 18 healthy visitors (11 women) (Table 1).\nDemographic data and results\nDemographic data and results of cardiopulmonary exercise tests in patients with chronic fatigue syndrome, idiopathic chronic fatigue and healthy humans during rest, at the anaerobic threshold and maximal workload. BSA: body surface area, HR: heart rate, V’O2: oxygen uptake, V’O2/pred.: oxygen uptake as percentage of predicted, SVI: stroke volume index, ΔQ’/ΔV’O2: increase of cardiac output relative to the increase of oxygen uptake.\nPost hoc Bonferroni analysis revealed that the body weight of healthy males was higher than the body weight of male CFI patients (P = 0.036, 95% CI: 0.70; 27.95).\nHaemoglobin was not different in CFS and CFI patients (Table 1). 
In female patients haemoglobin was not related to O2 extraction (r = 0.077, P = 0.155) or to the increase of cardiac output relative to oxygen uptake (ΔQ’/ΔV’O2) (r = -0.008, P = 0.882). In male patients haemoglobin was related to O2 extraction (r = 0.245, P = 0.047) but not to ΔQ’/ΔV’O2 (r = -0.140, P = 0.273).\nThe pulmonary ventilation tests showed no difference between the CFS, CFI and healthy groups (data not shown) and the cardiac index was similar in the groups at any time during the CPET. Blood pressure was within normal limits in all tests.\nAt the anaerobic threshold, O2 extraction in healthy women was higher than in CFS (P = 0.008, 95% CI: 0.378; 3.273). The O2 extraction in healthy men was higher than in CFS (P = 0.044, 95% CI: 0.064; 6.221) and higher than in CFI (P = 0.023, 95% CI: 0.355; 6.172).\nAt peak exercise O2 extraction was higher in healthy women than in CFS (P = 0.010, 95% CI: 0.49; 4.74) and higher in CFI than in CFS (P = 0.030, 95% CI: 0.06; 1.52). The O2 extraction was higher in healthy men than in CFS (P = 0.006, 95% CI: 1.36; 10.36) and higher than in CFI (P = 0.018, 95% CI: 0.65; 9.13).\nThe lowest level of O2 extraction at maximal workload was 10.0 ml/100 ml in the group of healthy women, 5.4 ml/100 ml in CFI and 4.4 ml/100 ml in female CFS patients. 
The lowest O2 extraction at maximal workload in males was 12.5 ml/100 ml in the healthy group, 8.2 ml/100 ml in the CFI and 6.9 ml/100 ml in CFS patients.\nThe increase of cardiac output relative to oxygen uptake (ΔQ’/ΔV’O2) was lower in healthy women than in CFS (P = 0.011, 95% CI: 0.45; 4.72) and in healthy men than in CFS (P = 0.037, 95% CI: 0.08; 3.58).\nIn the fatigue patients a low oxygen extraction (≤10 ml/100 ml) coincided with an increased response time of 2.35 ± 0.19 versus 1.79 ± 0.13 (P = 0.001, 95% CI: -1.19; -0.33) in the Shifting Attentional Test Visual of the ANT.", "The lower maximal exercise capacity (peak V’O2) of CFS and CFI patients was related to a lower oxygen uptake of the muscle cells (C(a-v)O2) and a higher increase of cardiac output relative to V’O2 (ΔQ’/ΔV’O2) than in healthy men and women.\nThe lower peak V’O2 was not explained by an impairment of ventilation. The stroke volume and cardiac output increased during the exercise test, but at no level of effort was a consistent difference seen between the three groups, indicating a normal adaptation of the heart to increasing workload in CFS patients. This result was in accordance with previous studies [2,4,19]. Stroke volume in men (n = 83) during the exercise test was not different from values reported by Nexfin [17], gas rebreathing [5] and impedance cardiography [20]. Resting values were 78.9 ± 12.8 ml in this study, 80 ± 9 ml by Nexfin and 73.8 ± 10.1 ml by impedance cardiography, and at peak 100.7 ± 18.0 ml in this study, 107.5 ± 7.2 ml by gas rebreathing and 97.9 ± 6.4 ml by impedance cardiography. Peak cardiac output in men was 192.0 ± 92.9 ml/kg/min in this study and 212 ± 37 ml/kg/min by acetylene rebreathing [6].\nWe have no haemoglobin data of the healthy visitors. It is possible that the haemoglobin values of patients, although within normal limits [21], were lower than the values of healthy visitors. 
The mean value of the healthy female visitors would need to be ±10 mmol/l (1 mmol Hb ≈ 1.5 ml/100 ml O2 extraction) to explain the difference in O2 extraction by a difference in haemoglobin [15]. A high cardiac index, caused by low haemoglobin, would also have been present during rest [22], but we found no difference between the 3 groups. In men haemoglobin correlated with oxygen extraction, but explained only 6% of the variance at peak V’O2.\nThe oxygen extraction increases during incremental exercise, and a lower value of peak oxygen extraction in the CFS and CFI groups might be attributed to insufficient effort, caused by lack of motivation. This, however, would not explain the lower oxygen extraction at the anaerobic threshold and the higher slope of the ΔQ’/ΔV’O2. Another cause for the lower V’O2 in CFS and CFI could be the deconditioning in these patients, but the value of the increase of cardiac output relative to the oxygen uptake (ΔQ’/ΔV’O2) is independent of motivation and deconditioning [6] (normal ΔQ’/ΔV’O2 ≈ 5).\nThe most probable cause for the low peak V’O2, the low oxygen extraction and the high ΔQ’/ΔV’O2 in CFS and CFI patients was an attenuated cell metabolism. The low oxygen extraction during exercise was also reported in mitochondrial pathology [6], systemic lupus erythematosus [23], HIV [5] and myophosphorylase deficiency [24].\nThis result is also not in contradiction to the abnormal proton handling that was reported during and after cessation of exercise in CFS patients [25,26].\nThe peak oxygen extraction in men and women who performed at the same level as or better than the reference population of sedentary healthy subjects [27] was never less than 10 ml/100 ml, as reported for healthy male subjects [28] (Figure 1). The same lower limit of 10 ml/100 ml was also reported in heart failure patients [29]. The O2 extraction of healthy participants in this study was higher than 10 ml/100 ml (Figure 1). 
The mean O2 extraction at maximal workload in fatigue patients was much lower and comparable to asymptomatic HIV-infected individuals (10.8 ± 0.5 ml/100 ml) [5].\nThe peak oxygen uptake as percentage of predicted relative to the peak oxygen extraction. Vertical line: predicted peak oxygen uptake. Horizontal line: lowest oxygen extraction reported for healthy humans. No-CFS: Idiopathic Chronic Fatigue, CFS: Chronic Fatigue Syndrome.\nThe peak V’O2 of 73 CFS patients and 59 CFI patients was the same as or higher than the mean peak V’O2 of healthy sedentary people [27]. All CFS and CFI patients, however, experienced a physical impairment that was severe enough for the diagnosis. The conclusion must be that the subjective experience of physical impairment and the objective peak V’O2 in the CPET are not identical.\nIf the mitochondrial system is intact in CFS patients [4], the low oxygen extraction in a subgroup of CFS patients may indicate a downregulation of the activity in vivo. A downregulation by a factor that is involved in the activity of the immune system would explain the same phenomenon in SLE [23], different from damaged mitochondria in HIV [5,30]. The lower oxygen extraction and higher ΔQ’/ΔV’O2, however, do not differentiate between downregulation of cell metabolism and congenital or acquired mitochondrial pathology in CFS patients. The abnormal results of the Shifting Attentional Test Visual of the ANT suggest that the impaired oxygen uptake is not limited to the muscle cells.\n Limitations The validity of the results of this retrospective observational study is limited by the uncontrolled inclusion of the participants. The exercise protocol for healthy visitors was not different from the protocol for fatigued patients, but we have no laboratory data of healthy visitors. Therefore we cannot exclude less than optimal results in this group due to unknown diseases. 
The accuracy and precision of the assessment of stroke volume by Nexfin in CFS patients during CPET has not yet been reported. For accuracy we used the data of comparable studies in healthy subjects; the results of a study of repeated CPETs are needed for the assessment of precision. We cannot exclude an influence of haemoglobin on the oxygen transport capacity of the blood in healthy visitors.", "The validity of the results of this retrospective observational study is limited by the uncontrolled inclusion of the participants. The exercise protocol for healthy visitors was not different from the protocol for fatigued patients, but we have no laboratory data of healthy visitors. Therefore we cannot exclude less than optimal results in this group due to unknown diseases. The accuracy and precision of the assessment of stroke volume by Nexfin in CFS patients during CPET has not yet been reported. For accuracy we used the data of comparable studies in healthy subjects; the results of a study of repeated CPETs are needed for the assessment of precision. 
We cannot exclude an influence of haemoglobin on the oxygen transport capacity of the blood in healthy visitors.", "CPET with continuous measurement of cardiac output by Nexfin allowed for the calculation of the presence and severity of metabolic causes of exercise intolerance. This retrospective study showed that a low oxygen extraction and a high ΔQ’/ΔV’O2 were consistent with a metabolic cause for exercise intolerance in 70% of CFS patients.", "The authors declare that they do not have competing interests.", "RV and IV contributed to the collection and analysis of the data. The first drafts of the paper were written by RV, and RV and IV contributed to the final version of the article. Both authors read and approved the final manuscript." ]
[ null, "methods", "results", "discussion", null, "conclusions", null, null ]
[ "Chronic fatigue syndrome", "Exercise test", "Exercise intolerance", "Oxygen extraction" ]
Background: Exercise intolerance is a frequent complaint from patients who meet the criteria for Chronic Fatigue Syndrome/Myalgic Encephalitis (CFS) and Idiopathic Chronic Fatigue (CFI) [1]. Objective tests for physical impairment measure the maximal oxygen uptake (peak V’O2) during a cardiopulmonary exercise test (CPET) [2-4]. Most studies agree that peak V’O2 is lower in CFS, but we need to understand the cause of the lower peak V’O2 to explain the pathogenesis. The V’O2 depends on the uptake, transport and metabolism of oxygen in the muscle cells during physical exercise. In most CPET studies in CFS patients, the limitation of peak V’O2 is not attributed to a lower uptake and transport of oxygen to the muscle. A lower metabolic capacity of the muscle cell would change the demand for oxygen and thus lower the oxygen extraction (C(a-v)O2) and increase the cardiac output relative to V’O2 (ΔQ’/ΔV’O2) [5,6]. In previous studies we, and others, did not find impaired mitochondrial activity to be a cause for a lower peak V’O2[4,7], but abnormal mitochondrial activity was reported by some in CFS and CFI [8-10]. The aim of the present retrospective study was to determine to what extent the physical impairment in CFS and CFI was attributable to changes in uptake, transport and metabolism of oxygen in the muscle cells. Methods: Data was collected from patients who attended the CFS Medical Centre Amsterdam. The data of sedentary men and women, physically active less than 1 hour per week, was added to the patient database. This group comprised visitors to the centre for cardiopulmonary exercise tests, check-ups and training program advice over the same period. We obtained information about the health status of this group, but laboratory tests were not included. Subjects using medication that could possibly influence the pulmonary, cardiovascular, immunologic system or cellular respiration were not included in the study. 
Chronic fatigue patients were assigned to the CFS or CFI group according to the criteria of Fukuda [1]. Operational criteria for assignment were: a score of >40 on the fatigue subscale of the Checklist Individual Strength (CIS-20) [11], a score of ≤35 for Vitality, ≤62.5 for Social Functioning and ≤50 for Role-Physical on the SF-36 [12], and ≥4 positive scores (≥7.5) on the additional symptoms of the CDC Symptom Inventory–DLV [13] for the diagnosis of CFS, or ≤4 positive scores (≥7.5) on the additional symptoms of the CDC Symptom Inventory–DLV for the diagnosis of idiopathic chronic fatigue (CFI). The cognitive function of patients was screened with the Shifting Attentional Test Visual of the Amsterdam Neuropsychological Tasks [14]. The normal Z-score in this test is -2 to +2. All patients completed a CPET as part of the diagnostic procedure. The protocol of the CPET was described before [4]. All subjects performed a symptom-limited CPET on a cycle ergometer (Excalibur, Lode, Groningen, The Netherlands) as described by Wasserman et al. [15]: 3 min without activity, 3 min of unloaded pedalling, followed by cycling against increasing resistance until exhaustion (ramp protocol) and concluded with 3 min of cycling without resistance. The work rate increase was estimated from history, physical examination, gender, weight and height. Verbal encouragement was used to maximise performance during the last phase of incremental exercise. Exhaustion of the leg muscles was the limiting symptom in all participants. The V’E, V’O2, V’CO2 and oxygen saturation were continuously measured (Metasoft). The ECG was continuously recorded and blood pressure was measured every 2 min. Maximal exercise capacity was expressed as peak oxygen uptake per kilogram bodyweight and as percentage of predicted maximal oxygen uptake [16]. Non-invasive stroke volume measurement by continuous beat-to-beat pulse contour analysis (Nexfin) was added to the standard measurements of the CPET [17]. 
Oxygen extraction and ΔQ’/ΔV’O2 were calculated from the oxygen uptake and the cardiac output (C(a-v)O2 = 100 × V’O2/CO and ΔQ’/ΔV’O2 = (Q’max-Q’rest)/(V’O2max-V’O2rest)) [18]. All subjects signed an informed consent for the use of the data for analysis and publication. Statistical analysis was conducted using the IBM Statistical Package for the Social Sciences (19.0 for Windows, Chicago, IL, USA). The results were presented as the mean ± standard deviation (SD). Kolmogorov-Smirnov tests with Lilliefors significance correction for normality showed that the data were normally distributed. Differences between groups were tested with Analysis of Variance (ANOVA) and Bonferroni post hoc analysis. Pearson product-moment correlation coefficient was calculated as a measure of the strength of the relationship between 2 variables. Statistical significance of two-tailed tests was determined by an alpha level of 0.05. Results: Data was collected and analysed from a total of 444 subjects who visited the CFS/ME Medical Centre Amsterdam between June 2008 and June 2013: 203 CFS patients (178 women), 223 CFI patients (172 women) and 18 healthy visitors (11 women) (Table 1). Demographic data and results Demographic data and results of cardiopulmonary exercise tests in patients with chronic fatigue syndrome, idiopathic chronic fatigue and healthy humans during rest, at the anaerobic threshold and maximal workload. BSA: body surface area, HR: heart rate, V’O2: oxygen uptake, V’O2/pred.: oxygen uptake as percentage of predicted, SVI: stroke volume index, ΔQ’/ΔV’O2: increase of cardiac output relative to the increase of oxygen uptake. Post hoc Bonferroni analysis revealed that the body weight of healthy males was higher than the body weight of male CFI patients (P = 0.036, 95% CI: 0.70; 27.95). Haemoglobin was not different in CFS and CFI patients (Table 1). 
In female patients haemoglobin was not related to O2 extraction (r = 0.077, P = 0.155) or to the increase of cardiac output relative to oxygen uptake (ΔQ’/ΔV’O2) (r = -0.008, P = 0.882). In male patients haemoglobin was related to O2 extraction (r = 0.245, P = 0.047) but not to ΔQ’/ΔV’O2 (r = -0.140, P = 0.273). The pulmonary ventilation tests showed no difference between the CFS, CFI and healthy groups (data not shown) and the cardiac index was similar in the groups at any time during the CPET. Blood pressure was within normal limits in all tests. At the anaerobic threshold, O2 extraction in healthy women was higher than in CFS (P = 0.008, 95% CI: 0.378; 3.273). The O2 extraction in healthy men was higher than in CFS (P = 0.044, 95% CI: 0.064; 6.221) and higher than in CFI (P = 0.023, 95% CI: 0.355; 6.172). At peak exercise O2 extraction was higher in healthy women than in CFS (P = 0.010, 95% CI: 0.49; 4.74) and higher in CFI than in CFS (P = 0.030, 95% CI: 0.06; 1.52). The O2 extraction was higher in healthy men than in CFS (P = 0.006, 95% CI: 1.36; 10.36) and higher than in CFI (P = 0.018, 95% CI: 0.65; 9.13). The lowest level of O2 extraction at maximal workload was 10.0 ml/100 ml in the group of healthy women, 5.4 ml/100 ml in CFI and 4.4 ml/100 ml in female CFS patients. The lowest O2 extraction at maximal workload in males was 12.5 ml/100 ml in the healthy group, 8.2 ml/100 ml in the CFI and 6.9 ml/100 ml in CFS patients. The increase of cardiac output relative to oxygen uptake (ΔQ’/ΔV’O2) was lower in healthy women than in CFS (P = 0.011, 95% CI: 0.45; 4.72) and in healthy men than in CFS (P = 0.037, 95% CI: 0.08; 3.58). In the fatigue patients a low oxygen extraction (≤10 ml/100 ml) coincided with an increased response time of 2.35 ± 0.19 versus 1.79 ± 0.13 (P = 0.001, 95% CI: -1.19; -0.33) in the Shifting Attentional Test Visual of the ANT. 
Discussion: The lower maximal exercise capacity (peak V’O2) of CFS and CFI patients was related to a lower oxygen uptake of the muscle cells (C(a-v)O2) and a higher increase of cardiac output relative to V’O2 (ΔQ’/ΔV’O2) than in healthy men and women. The lower peak V’O2 was not explained by an impairment of ventilation. The stroke volume and cardiac output increased during the exercise test, but at no level of effort was a consistent difference seen between the three groups, indicating a normal adaptation of the heart to increasing workload in CFS patients. This result was in accordance with previous studies [2,4,19]. Stroke volume in men (n = 83) during the exercise test was not different from values reported by Nexfin [17], gas rebreathing [5] and impedance cardiography [20]. Resting values were 78.9 ± 12.8 ml in this study, 80 ± 9 ml by Nexfin and 73.8 ± 10.1 ml by impedance cardiography, and at peak 100.7 ± 18.0 ml in this study, 107.5 ± 7.2 ml by gas rebreathing and 97.9 ± 6.4 ml by impedance cardiography. Peak cardiac output in men was 192.0 ± 92.9 ml/kg/min in this study and 212 ± 37 ml/kg/min by acetylene rebreathing [6]. We have no haemoglobin data of the healthy visitors. It is possible that the haemoglobin values of patients, although within normal limits [21], were lower than the values of healthy visitors. The mean value of the healthy female visitors would need to be ±10 mmol/l (1 mmol Hb ≈ 1.5 ml/100 ml O2 extraction) to explain the difference in O2 extraction by a difference in haemoglobin [15]. A high cardiac index, caused by low haemoglobin, would also have been present during rest [22], but we found no difference between the 3 groups. In men haemoglobin correlated with oxygen extraction, but explained only 6% of the variance at peak V’O2. The oxygen extraction increases during incremental exercise, and a lower value of peak oxygen extraction in the CFS and CFI groups might be attributed to insufficient effort, caused by lack of motivation. 
This, however, would not explain the lower oxygen extraction at the anaerobic threshold and the higher slope of the ΔQ’/ΔV’O2. Another cause for the lower V’O2 in CFS and CFI could be the deconditioning in these patients, but the value of the increase of cardiac output relative to the oxygen uptake (ΔQ’/ΔV’O2) is independent of motivation and deconditioning [6] (normal ΔQ’/ΔV’O2 ≈ 5). The most probable cause for the low peak V’O2, the low oxygen extraction and the high ΔQ’/ΔV’O2 in CFS and CFI patients was an attenuated cell metabolism. The low oxygen extraction during exercise was also reported in mitochondrial pathology [6], systemic lupus erythematosus [23], HIV [5] and myophosphorylase deficiency [24]. This result is also not in contradiction to the abnormal proton handling that was reported during and after cessation of exercise in CFS patients [25,26]. The peak oxygen extraction in men and women who performed at the same level as or better than the reference population of sedentary healthy subjects [27] was never less than 10 ml/100 ml, as reported for healthy male subjects [28] (Figure 1). The same lower limit of 10 ml/100 ml was also reported in heart failure patients [29]. The O2 extraction of healthy participants in this study was higher than 10 ml/100 ml (Figure 1). The mean O2 extraction at maximal workload in fatigue patients was much lower and comparable to asymptomatic HIV-infected individuals (10.8 ± 0.5 ml/100 ml) [5]. The peak oxygen uptake as percentage of predicted relative to the peak oxygen extraction. Vertical line: predicted peak oxygen uptake. Horizontal line: lowest oxygen extraction reported for healthy humans. No-CFS: Idiopathic Chronic Fatigue, CFS: Chronic Fatigue Syndrome. The peak V’O2 of 73 CFS patients and 59 CFI patients was the same as or higher than the mean peak V’O2 of healthy sedentary people [27]. All CFS and CFI patients, however, experienced a physical impairment that was severe enough for the diagnosis. 
The conclusion must be that the subjective experience of physical impairment and the objective peak V’O2 in the CPET are not identical. If the mitochondrial system is intact in CFS patients [4], the low oxygen extraction in a subgroup of CFS patients may indicate a downregulation of the activity in vivo. A downregulation by a factor that is involved in the activity of the immune system would explain the same phenomenon in SLE [23], different from damaged mitochondria in HIV [5,30]. The lower oxygen extraction and higher ΔQ’/ΔV’O2, however, do not differentiate between downregulation of cell metabolism and congenital or acquired mitochondrial pathology in CFS patients. The abnormal results of the Shifting Attentional Test Visual of the ANT suggest that the impaired oxygen uptake is not limited to the muscle cells. Limitations The validity of the results of this retrospective observational study is limited by the uncontrolled inclusion of the participants. The exercise protocol for healthy visitors was not different from the protocol for fatigued patients, but we have no laboratory data of healthy visitors. Therefore we cannot exclude less than optimal results in this group due to unknown diseases. The accuracy and precision of the assessment of stroke volume by Nexfin in CFS patients during CPET has not yet been reported. For accuracy we used the data of comparable studies in healthy subjects; the results of a study of repeated CPETs are needed for the assessment of precision. We cannot exclude an influence of haemoglobin on the oxygen transport capacity of the blood in healthy visitors. 
Limitations: The validity of the results of this retrospective observational study is limited by the uncontrolled inclusion of the participants. The exercise protocol for healthy visitors was not different from the protocol for fatigued patients, but we have no laboratory data of healthy visitors. Therefore we cannot exclude less than optimal results in this group due to unknown diseases. The accuracy and precision of the assessment of stroke volume by Nexfin in CFS patients during CPET has not yet been reported. For accuracy we used the data of comparable studies in healthy subjects; the results of a study of repeated CPETs are needed for the assessment of precision. We cannot exclude an influence of haemoglobin on the oxygen transport capacity of the blood in healthy visitors. Conclusions: CPET with continuous measurement of cardiac output by Nexfin allowed for the calculation of the presence and severity of metabolic causes of exercise intolerance. This retrospective study showed that a low oxygen extraction and a high ΔQ’/ΔV’O2 were consistent with a metabolic cause for exercise intolerance in 70% of CFS patients. Competing interests: The authors declare that they do not have competing interests. Authors’ contributions: RV and IV contributed to the collection and analysis of the data. The first drafts of the paper were written by RV, and RV and IV contributed to the final version of the article. Both authors read and approved the final manuscript.
Background: The insufficient metabolic adaptation to exercise in Chronic Fatigue Syndrome (CFS) is still being debated and poorly understood. Methods: We analysed the cardiopulmonary exercise tests of CFS patients, idiopathic chronic fatigue (CFI) patients and healthy visitors. Continuous non-invasive measurement of the cardiac output by Nexfin (BMEYE B.V. Amsterdam, the Netherlands) was added to the cardiopulmonary exercise tests. The peak oxygen extraction by muscle cells and the increase of cardiac output relative to the increase of oxygen uptake (ΔQ'/ΔV'O₂) were measured, calculated from the cardiac output and the oxygen uptake during incremental exercise. Results: The peak oxygen extraction by muscle cells was 10.83 ± 2.80 ml/100 ml in 178 CFS women, 11.62 ± 2.90 ml/100 ml in 172 CFI, and 13.45 ± 2.72 ml/100 ml in 11 healthy women (ANOVA: P=0.001), 13.66 ± 3.31 ml/100 ml in 25 CFS men, 14.63 ± 4.38 ml/100 ml in 51 CFI, and 19.52 ± 6.53 ml/100 ml in 7 healthy men (ANOVA: P=0.008). The ΔQ'/ΔV'O₂ was > 6 L/L (normal ΔQ'/ΔV'O₂ ≈ 5 L/L) in 70% of the patients and in 22% of the healthy group. Conclusions: Low oxygen uptake by muscle cells causes exercise intolerance in a majority of CFS patients, indicating insufficient metabolic adaptation to incremental exercise. The high increase of the cardiac output relative to the increase of oxygen uptake argues against deconditioning as a cause for physical impairment in these patients.
Background: Exercise intolerance is a frequent complaint from patients who meet the criteria for Chronic Fatigue Syndrome/Myalgic Encephalitis (CFS) and Idiopathic Chronic Fatigue (CFI) [1]. Objective tests for physical impairment measure the maximal oxygen uptake (peak V’O2) during a cardiopulmonary exercise test (CPET) [2-4]. Most studies agree that peak V’O2 is lower in CFS, but we need to understand the cause of the lower peak V’O2 to explain the pathogenesis. The V’O2 depends on the uptake, transport and metabolism of oxygen in the muscle cells during physical exercise. In most CPET studies in CFS patients, the limitation of peak V’O2 is not attributed to a lower uptake and transport of oxygen to the muscle. A lower metabolic capacity of the muscle cell would change the demand for oxygen and thus lower the oxygen extraction (C(a-v)O2) and increase the cardiac output relative to V’O2 (ΔQ’/ΔV’O2) [5,6]. In previous studies we, and others, did not find impaired mitochondrial activity to be a cause for a lower peak V’O2[4,7], but abnormal mitochondrial activity was reported by some in CFS and CFI [8-10]. The aim of the present retrospective study was to determine to what extent the physical impairment in CFS and CFI was attributable to changes in uptake, transport and metabolism of oxygen in the muscle cells. Conclusions: CPET with continuous measuring of cardiac output by Nexfin allowed for the calculation of the presence and severity of metabolic causes of exercise intolerance. This retrospective study showed that a low oxygen extraction and a high ΔQ’/ΔV’O2 were consistent with a metabolic cause for exercise intolerance in 70% of CFS patients.
Background: The insufficient metabolic adaptation to exercise in Chronic Fatigue Syndrome (CFS) is still being debated and poorly understood. Methods: We analysed the cardiopulmonary exercise tests of CFS patients, idiopathic chronic fatigue (CFI) patients and healthy visitors. Continuous non-invasive measurement of the cardiac output by Nexfin (BMEYE B.V. Amsterdam, the Netherlands) was added to the cardiopulmonary exercise tests. The peak oxygen extraction by muscle cells and the increase of cardiac output relative to the increase of oxygen uptake (ΔQ'/ΔV'O₂) were measured, calculated from the cardiac output and the oxygen uptake during incremental exercise. Results: The peak oxygen extraction by muscle cells was 10.83 ± 2.80 ml/100 ml in 178 CFS women, 11.62 ± 2.90 ml/100 ml in 172 CFI, and 13.45 ± 2.72 ml/100 ml in 11 healthy women (ANOVA: P=0.001), 13.66 ± 3.31 ml/100 ml in 25 CFS men, 14.63 ± 4.38 ml/100 ml in 51 CFI, and 19.52 ± 6.53 ml/100 ml in 7 healthy men (ANOVA: P=0.008). The ΔQ'/ΔV'O₂ was > 6 L/L (normal ΔQ'/ΔV'O₂ ≈ 5 L/L) in 70% of the patients and in 22% of the healthy group. Conclusions: Low oxygen uptake by muscle cells causes exercise intolerance in a majority of CFS patients, indicating insufficient metabolic adaptation to incremental exercise. The high increase of the cardiac output relative to the increase of oxygen uptake argues against deconditioning as a cause for physical impairment in these patients.
3,013
280
8
[ "o2", "cfs", "patients", "oxygen", "healthy", "ml", "extraction", "peak", "cfi", "exercise" ]
[ "test", "test" ]
[CONTENT] Chronic fatigue syndrome | Exercise test | Exercise intolerance | Oxygen extraction [SUMMARY]
[CONTENT] Adult | Demography | Exercise Test | Fatigue Syndrome, Chronic | Female | Heart | Humans | Lung | Male | Oxygen | Oxygen Consumption [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] o2 | cfs | patients | oxygen | healthy | ml | extraction | peak | cfi | exercise [SUMMARY]
[CONTENT] o2 | lower | peak o2 | muscle | peak | uptake transport | oxygen muscle | oxygen | uptake | cfs [SUMMARY]
[CONTENT] symptom | analysis | min | score | statistical | tests | oxygen | data | o2 | inventory dlv [SUMMARY]
[CONTENT] 95 | ci | 95 ci | ml | healthy | o2 | o2 extraction | higher | cfs | ml 100 [SUMMARY]
[CONTENT] intolerance | metabolic | exercise intolerance | causes | continuous measuring cardiac output | continuous measuring cardiac | continuous measuring | o2 consistent | intolerance retrospective study showed | intolerance retrospective study [SUMMARY]
[CONTENT] o2 | healthy | cfs | oxygen | patients | ml | authors | extraction | exercise | authors declare competing [SUMMARY]
[CONTENT] CFS [SUMMARY]
[CONTENT] CFS ||| Continuous non-invasive | Nexfin | B.V. Amsterdam | the Netherlands ||| [SUMMARY]
[CONTENT] 10.83 | 2.80 ||| 178 | CFS | 11.62 | 2.90 | 172 | 13.45 | 2.72 ||| 11 | 13.66 | 3.31 ml/100 ml | 25 | CFS | 14.63 | 4.38 | 51 | 19.52 ± | 6.53 ||| 7 ||| 6 ||| 70% | 22% [SUMMARY]
[CONTENT] CFS ||| [SUMMARY]
[CONTENT] CFS ||| CFS ||| Continuous non-invasive | Nexfin | B.V. Amsterdam | the Netherlands ||| ||| ||| 10.83 | 2.80 ||| 178 | CFS | 11.62 | 2.90 | 172 | 13.45 | 2.72 ||| 11 | 13.66 | 3.31 ml/100 ml | 25 | CFS | 14.63 | 4.38 | 51 | 19.52 ± | 6.53 ||| 7 ||| 6 ||| 70% | 22% ||| CFS ||| [SUMMARY]
Impact of medical and psychiatric multi-morbidity on mortality in diabetes: emerging evidence.
25138206
Multi-morbidity, or the presence of multiple chronic diseases, is a major problem in clinical care and is associated with worse outcomes. Additionally, the presence of mental health conditions, such as depression, anxiety, etc., has further negative impact on clinical outcomes. However, most health systems are generally configured for management of individual diseases instead of multi-morbidity. The study examined the prevalence and differential impact of medical and psychiatric multi-morbidity on risk of death in adults with diabetes.
BACKGROUND
A national cohort of 625,903 veterans with type 2 diabetes was created by linking multiple patient and administrative files from 2002 through 2006. The main outcome was time to death. Primary independent variables were numbers of medical and psychiatric comorbidities over the study period. Covariates included age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region. Cox regression was used to model the association between time to death and multi-morbidity adjusting for relevant covariates.
METHODS
Hypertension (78%) and depression (13%) were the most prevalent medical and psychiatric comorbidities, respectively; 23% had 3+ medical comorbidities, 3% had 2+ psychiatric comorbidities and 22% died. Among medical comorbidities, mortality risk was highest in those with congestive heart failure (hazard ratio, HR = 1.92; 95% CI 1.89-1.95), lung disease (HR = 1.42; 95% CI 1.40-1.44) and cerebrovascular disease (HR = 1.39; 95% CI 1.37-1.40). Among psychiatric comorbidities, mortality risk was highest in those with substance abuse (HR = 1.50; 95% CI 1.46-1.54), psychoses (HR = 1.16; 95% CI 1.14-1.19) and depression (HR = 1.05; 95% CI 1.03-1.07). There was an interaction between medical and psychiatric comorbidity (p = 0.003), so stratified analyses were performed. HRs for the effect of 3+ medical comorbidities (2.63, 2.66, 2.15) remained high across levels of psychiatric comorbidities (0, 1, 2+), respectively. HRs for the effect of 2+ psychiatric comorbidities (1.69, 1.63, 1.42, 1.38) declined across levels of medical comorbidity (0, 1, 2, 3+), respectively.
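The hazard-ratio confidence intervals quoted above are symmetric on the log scale (95% CI = exp(β ± 1.96·SE), where β = log HR). A small sketch, not the authors' code, showing how the implied standard error can be recovered from a reported interval and the interval reconstructed from it:

```python
import math

def log_hr_se(ci_low, ci_high):
    """Standard error of log(HR) implied by a 95% CI,
    since the interval is exp(beta +/- 1.96*se)."""
    return (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

def ci_from_hr(hr, se):
    """Reconstruct the 95% CI from an HR and the SE of log(HR)."""
    beta = math.log(hr)
    return (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))

# CHF estimate reported above: HR = 1.92, 95% CI 1.89-1.95
se = log_hr_se(1.89, 1.95)
low, high = ci_from_hr(1.92, se)
print(round(low, 2), round(high, 2))  # 1.89 1.95
```

The round trip works because the point estimate sits at the geometric mean of the interval bounds, as expected for a log-scale CI.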
RESULTS
Medical and psychiatric multi-morbidity are significant predictors of mortality among older adults (veterans) with type 2 diabetes with a graded response as multimorbidity increases.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Cohort Studies", "Comorbidity", "Diabetes Mellitus, Type 2", "Female", "Follow-Up Studies", "Heart Failure", "Humans", "Hypertension", "Male", "Middle Aged", "Prevalence", "Prognosis", "Psychotic Disorders", "South Carolina", "Survival Rate", "Veterans", "Young Adult" ]
4144689
Background
Multi-morbidity, or the presence of multiple chronic diseases, is a major problem in clinical care and is associated with worse outcomes [1-3]. People with multi-morbid conditions also have a higher medication burden [4] and, not unexpectedly, have worse medication adherence [5]. The prevalence of multi-morbidity is high (>50%) in middle-aged and older populations [6]. However, most health systems are generally configured for management of individual diseases instead of multi-morbidity. A recent review article suggests the number of studies testing interventions for patients with multi-morbid disease has increased, but the findings showed that organizational interventions with a broader focus and patient-specific interventions not linked to care delivery had little impact on health outcomes [7]. The presence of mental health conditions (e.g., depression, anxiety, etc.) in people with multi-morbidity has adverse impact on clinical outcomes [8,9]. A study of the impact of psychiatric conditions (depression/anxiety, substance abuse, psychotic or bipolar disorder) on mortality among individuals with diabetes indicated that alcohol and drug abuse/dependence was associated with a 22% higher mortality [10]. Chronic conditions like diabetes tend to occur in clusters (with other cardiovascular conditions like hypertension, heart disease and stroke) [11]. However, very little is known about the separate and combined impact of medical and psychiatric conditions on mortality and which has the strongest impact on risk of death in individuals with diabetes. Few studies have carefully assessed the impact of both medical and psychiatric multi-morbidity on mortality in middle-aged and older adults with diabetes. Older aged and elderly individuals have greater disease burden and are more likely to have multimorbidity and experience the adverse outcomes associated with multimorbidity. 
Because of the detailed clinical data available in the Veterans Health Administration (VHA) databases, the older age, and the ability to track patients over long periods of time, veterans are an ideal population to tease apart the impact of medical and psychiatric multi-morbidity on mortality. However, the focus of this study is to examine the impact of total burden of disease rather than specific comorbidities, so we used a national sample of veterans with type 2 diabetes followed over 5 years to examine the prevalence and differential impact of medical and psychiatric multi-morbidity on risk of death.
Methods
Study population: A national cohort of veterans with type 2 diabetes was created by linking multiple patient and administrative files from the VHA National Patient Care and Pharmacy Benefits Management (PBM) databases. We used a previously validated algorithm [12,13] for identifying veterans with diabetes. Veterans were included in the cohort if they had: 1) type 2 diabetes defined by two or more International Classification of Diseases, Ninth Revision (ICD-9) codes for diabetes (250, 357.2, 362.0, and 366.41) in the previous 24 months (2000 and 2001) and during 2002 from inpatient stays and/or outpatient visits on separate days (excluding codes from lab tests and other non-clinician visits); or 2) prescriptions for insulin or oral hypoglycemic agents (VA classes HS501 or HS502, respectively; to capture those without a diabetes ICD-9 code) in 2002. PBM data were available during the entire period of analysis. When the data were merged based on the criteria above, the total sample included 832,000 veterans. We excluded those not taking prescription medications for diabetes (n = 201,255) and added those who had one ICD-9 code for diabetes and prescriptions filled in 2002 (n = 60,493); 3,660 were excluded due to death prior to 2002, missing age or no service connection. The subset with complete data resulted in a final cohort of 625,903 veterans. The Department of Veterans Affairs maintains data through the VA Information Resource Center (VIReC). Data were requested from and approved for use by VIReC following data use agreement requirements. The study was approved by the Medical University of South Carolina Institutional Review Board (IRB) and the Ralph H. Johnson Veterans Affairs Medical Center Research and Development committee.
Outcome measure: The main outcome measure was time to death. Veterans were followed from time of entry into the study until death, loss to follow-up, or through December 2006. A subject was considered censored if alive by December 2006.
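The two-pronged inclusion rule described above (two or more diabetes ICD-9 codes on separate days, or a fill for VA drug class HS501/HS502) can be sketched as a simple filter. This is a hypothetical illustration; the field names and record layout are invented, not the authors' actual data structures:

```python
# Hypothetical sketch of the cohort inclusion rule -- not the authors' code.
# A veteran qualifies with 2+ diabetes ICD-9 codes on separate days,
# or any insulin/oral-agent fill (VA classes HS501/HS502).
DIABETES_ICD9 = {"250", "357.2", "362.0", "366.41"}
DIABETES_DRUG_CLASSES = {"HS501", "HS502"}

def includes(patient):
    """patient: dict with 'dx' = [(icd9_code, date), ...], 'rx' = [va_class, ...]."""
    code_days = {day for code, day in patient.get("dx", [])
                 # 250.xx subcodes count via their 3-digit root
                 if code.split(".")[0] in DIABETES_ICD9 or code in DIABETES_ICD9}
    has_two_codes = len(code_days) >= 2          # separate days required
    has_rx = any(c in DIABETES_DRUG_CLASSES for c in patient.get("rx", []))
    return has_two_codes or has_rx

print(includes({"dx": [("250.00", "2002-03-01"), ("250.00", "2002-07-15")]}))  # True
print(includes({"dx": [("250.00", "2002-03-01")], "rx": []}))                  # False
print(includes({"dx": [], "rx": ["HS501"]}))                                   # True
```

The second case shows why the pharmacy prong matters: a single diagnostic code alone does not qualify, but a drug fill captures such patients.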
Primary covariates: The primary covariates were medical comorbidity and psychiatric comorbidity, both defined as the count of diseases for each subject throughout the study period. All comorbidities were dichotomized as present or absent, where presence was determined by ICD-9 codes at entry into the cohort based on a previously validated algorithm in veterans [13]. Medical comorbidity variables included anemia, cancer, cardiovascular disease (CVD), cerebrovascular disease, congestive heart failure (CHF), fluid and electrolyte disorders, hypertension, hypothyroidism, liver disease, lung conditions (chronic pulmonary disease, pulmonary circulation disease), obesity, peripheral vascular disease, and other (acquired immunodeficiency syndrome (AIDS), rheumatoid arthritis, renal failure, peptic ulcer disease and bleeding, weight loss). Psychiatric comorbidities included psychoses, substance abuse (alcohol abuse, drug abuse) and depression [13].
Demographic variables: We controlled for seven demographic variables. Age was treated as continuous and centered at a mean of 66 years. Race/ethnicity included four categories, with non-Hispanic white (NHW) serving as the reference group. Race/ethnicity was retrieved from the 2002 outpatient and inpatient [Medical SAS] data sets. When missing or unknown, the variable was supplemented using the inpatient race1-race6 fields from the 2003 [Medical SAS] data sets, the outpatient race1-race7 fields from the 2004 [Medical SAS] data sets, and the VA Vital Status Centers for Medicare and Medicaid Services (CMS) field for race. Gender, marital status, and location of residence (urban versus rural or highly rural) were dichotomous. Highly rural was categorized as rural according to the VA definition of rurality [14]. Percentage service-connectedness, representing the degree of disability due to illness or injury that was aggravated by or incurred in military service, was treated as dichotomous (1 = >50%, 0 = <50%). Region, which accounts for the five geographic regions of the country, was treated as a categorical variable: Northeast [VISNs 1, 2, 3], Mid-Atlantic [VISNs 4, 5, 6, 9, 10], South [VISNs 7, 8, 16, 17], Midwest [VISNs 11, 12, 15, 19, 23], and West [VISNs 18, 20, 21, 22] [15].
Statistical analysis: In preliminary analyses, crude associations were examined between mortality and all measured covariates using chi-square tests for categorical variables and t-tests for continuous variables. Cox regression methods were used to model the association between time to death and medical and psychiatric comorbidity after adjusting for known covariates. Time to death was defined as the number of months from time of entry into the cohort to time of death or censoring (i.e., day last seen or May 2006). For the Cox model, the appropriateness of the proportionality assumption was determined by testing the coefficients of the interactions of time with the respective covariate in multivariate analyses. Initially, Cox models for each of the medical and psychiatric comorbidities were fitted adjusting for all covariates (race, socio-demographics). Then an interaction between medical and psychiatric comorbidity was tested to check whether the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity. HR estimates of medical comorbidity for each level of psychiatric comorbidity, and estimates for levels of psychiatric comorbidity for levels of medical comorbidity, are reported since there was a significant interaction (p = 0.003). The Kaplan-Meier method was used to plot the survival functions for both medical and psychiatric comorbidities separately. Residual analysis was used to assess goodness-of-fit of each of the models. All data analyses were conducted using SAS 9.3 [16].
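The Kaplan-Meier estimate referenced above is the product over event times t_i of (1 - d_i/n_i), where d_i is the number of deaths at t_i and n_i is the number still at risk just before t_i. A compact illustrative implementation (the study itself used SAS 9.3):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up in months; events: 1 = death, 0 = censored.
    Returns [(t, S(t))] at each distinct event (death) time."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)  # still at risk at t
        if d:
            surv *= 1.0 - d / n
            curve.append((t, surv))
    return curve

# Toy data: deaths at months 2 and 5, censorings at months 3 and 6
print(kaplan_meier([2, 3, 5, 6], [1, 0, 1, 0]))
# [(2, 0.75), (5, 0.375)]
```

The step at month 5 drops by half rather than a quarter because the censored subject at month 3 has left the risk set, which is exactly why Kaplan-Meier handles right-censored follow-up correctly.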
null
null
Conclusions
The findings of this study clearly demonstrate that medical and psychiatric multi-morbidity are significant predictors of mortality among middle-aged and older adults (veterans) with type 2 diabetes, with a graded response as medical multimorbidity increases. The average survival time was about 2 years, with 52% of deaths occurring in the first 2 years of follow-up and 9% in the final year of follow-up. Among medical comorbidities, mortality risk was highest in those with congestive heart failure, lung disease and cerebrovascular disease; among psychiatric comorbidities, mortality risk was highest in those with substance abuse, psychoses and depression. There was an interaction between medical and psychiatric comorbidity, indicating that the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity, so stratified analyses were performed. HRs for the effect of 3+ medical comorbidities remained high across levels of psychiatric comorbidities (0, 1, 2+), while HRs for the effect of 2+ psychiatric comorbidities declined across levels of medical comorbidity (0, 1, 2, 3+). Interestingly, comorbid hypertension and obesity were associated with lower mortality risk in this sample of veterans with type 2 diabetes. Since more than two-thirds of obese veterans also have comorbid hypertension [17], it is likely that the reduced mortality risk associated with hypertension was demonstrated among obese veterans. These findings may be explained by multiple interventions within the VA system from 2000-2010 (the time period of this study) that were successful in achieving blood pressure control among veterans, who are provided access to healthcare services and to a very affordable formulary of anti-hypertensive medications [18].
Furthermore, one should consider that this analysis examined data generated shortly after diabetes was designated a CVD risk equivalent along with tobacco smoking, hypertension, and hyperlipidemia [19]. In that case, it is likely that a strong awareness of CVD as a leading cause of death, growing evidence of behavioral management and treatment effectiveness from multiple large-scale RCTs (such as the Hypertension Optimal Treatment (HOT) trial, the United Kingdom Prospective Diabetes Study (UKPDS), the Diabetes Prevention Program (DPP), Action to Control Cardiovascular Risk in Diabetes (ACCORD), and the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)), and broad public health efforts led by guidelines from several national organizations (the American Heart Association (AHA), the American Diabetes Association (ADA), and the National Heart, Lung, and Blood Institute (NHLBI)) to aggressively manage CVD risk factors among people with diabetes not only reduced CVD mortality rates but also helped to reduce the impact of comorbid hypertension and obesity on mortality in people with diabetes. Medical comorbidities exerted the greatest influence on mortality and had significantly stronger effects on hazard of death than psychiatric comorbidities. In stratified analyses, the risk of death with increasing medical comorbidity remained high across each level of psychiatric comorbidity, with the greatest risk seen in individuals with 3+ medical comorbidities (HR range 2.15-2.66) compared to no psychiatric comorbidity. In contrast, the risk of death with increasing psychiatric comorbidity was not consistently high across levels of medical comorbidity. Instead, the risk of death with 2+ psychiatric comorbidities declined as the number of medical comorbidities increased. In other words, among those with 2 or more psychiatric comorbidities, baseline risk (no medical comorbidity) is higher, while the risk associated with additional medical comorbidity is lower.
One explanation could be increased care associated with multiple medical and psychiatric comorbidities. However, the mortality risk from psychiatric comorbidities remained significant, emphasizing the need for greater attention to identifying and treating psychiatric comorbidities, instead of focusing on medical comorbidities alone. There is growing awareness of the burden of multi-morbidity and the need to develop new treatment paradigms that account for multi-morbidity. This is particularly important for certain populations, such as the elderly, veterans, and low-income and ethnic minorities with diabetes, where the prevalence of multi-morbidity is high and the risk for adverse outcomes increases exponentially with increasing levels of multi-morbidity [20-22]. The current approach to treatment for diabetes, focused on single or related disease (e.g. diabetes and CVD) management, may no longer be appropriate based on emerging evidence of the high prevalence of multi-morbidity and its detrimental effect on health outcomes, including polypharmacy, increased morbidity, disability, health utilization/cost and mortality [5,7,23-26]. As the paradigm of care shifts to patient-centered care, strategies are needed to treat the individual as a whole and coordinate care while accounting for medical and psychiatric multi-morbidity. This will be particularly important in middle-aged and older adults, where our findings show that multi-morbidity is associated with significant increases in mortality. Newer evidence suggests that interventions linked to care delivery and focused on specific patient needs (e.g., medication management or functional status) are effective for improving outcomes among patients with multi-morbidity [7]. Our findings further highlight the need to integrate care for medical and psychiatric conditions and to address the fragmentation of health care that currently exists with separate coverages for medical and mental health conditions.
Some of the strategies will require policy changes in how mental health services are covered and in the reimbursement structure in primary care settings. Strengths of our study include the study population, its longitudinal design with 5 years of follow-up data, the extensive data available on comorbidities, and our ability to identify racial/ethnic group in over 90% of the cohort. To place this study in context, we note its limitations. First, the VA medical record either omits or fails to update many important social, environmental and behavioral variables, including socio-economic status, diet, physical activity, weight and alcohol consumption, as well as disease measures such as diabetes duration. Second, there is no way to account for the lag time between disease onset and diagnosis; thus, the timing of onset of comorbidity cannot be accurately determined in this retrospective cohort analysis. Finally, no measures of disease control were accounted for, so the findings are generalizable only to veterans with the diagnoses included in this study. Nevertheless, our findings are important and show that, in this national longitudinal cohort of veterans with diabetes followed over 5 years, multi-morbidity was associated with increased risk of death, with a graded response in mortality risk with increasing numbers of medical and psychiatric comorbidities.
[ "Background", "Study population", "Outcome measure", "Primary covariates", "Demographic variables", "Statistical analysis", "Results and discussion", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Multi-morbidity, or the presence of multiple chronic diseases, is a major problem in clinical care and is associated with worse outcomes [1-3]. People with multi-morbid conditions also have a higher medication burden [4] and, not unexpectedly, have worse medication adherence [5]. The prevalence of multi-morbidity is high (>50%) in middle-aged and older populations [6]. However, most health systems are generally configured for management of individual diseases instead of multi-morbidity. A recent review article suggests the number of studies testing interventions for patients with multi-morbid disease has increased, but the findings showed that organizational interventions with a broader focus and patient-specific interventions not linked to care delivery had little impact on health outcomes [7].\nThe presence of mental health conditions (e.g., depression, anxiety, etc.) in people with multi-morbidity has adverse impact on clinical outcomes [8,9]. A study of the impact of psychiatric conditions (depression/anxiety, substance abuse, psychotic or bipolar disorder) on mortality among individuals with diabetes indicated that alcohol and drug abuse/dependence was associated with a 22% higher mortality [10]. Chronic conditions like diabetes tend to occur in clusters (with other cardiovascular conditions like hypertension, heart disease and stroke) [11]. However, very little is known about the separate and combined impact of medical and psychiatric conditions on mortality and which has the strongest impact on risk of death in individuals with diabetes. Few studies have carefully assessed the impact of both medical and psychiatric multi-morbidity on mortality in middle-aged and older adults with diabetes.\nOlder aged and elderly individuals have greater disease burden and are more likely to have multimorbidity and experience the adverse outcomes associated with multimorbidity. 
Because of the detailed clinical data available in the Veterans Health Administration (VHA) databases, the older age, and the ability to track patients over long periods of time, veterans are an ideal population to tease apart the impact of medical and psychiatric multi-morbidity on mortality. However, the focus of this study is to examine the impact of total burden of disease rather than specific comorbidities, so we used a national sample of veterans with type 2 diabetes followed over 5 years to examine the prevalence and differential impact of medical and psychiatric multi-morbidity on risk of death.", "A national cohort of veterans with type 2 diabetes was created by linking multiple patient and administrative files from the VHA National Patient Care and Pharmacy Benefits Management (PBM) databases. We used a previously validated algorithm [12,13] for identifying veterans with diabetes. Veterans were included in the cohort if they had: 1) type 2 diabetes defined by two or more International Classification of Diseases, Ninth Revision (ICD-9) codes for diabetes (250, 357.2, 362.0, and 366.41) in the previous 24 months (2000 and 2001) and during 2002 from inpatient stays and/or outpatient visits on separate days (excluding codes from lab tests and other non-clinician visits); or 2) prescriptions for insulin or oral hypoglycemic agents (VA classes HS501 or HS502, respectively; to capture those without a diabetes ICD-9 code) in 2002. PBM data were available during the entire period of analysis. When the data were merged based on the criteria above, the total sample included 832,000 veterans. We excluded those not taking prescription medications for diabetes (n = 201,255) and added those who had one ICD-9 code for diabetes and prescriptions filled in 2002 (n = 60,493); and 3,660 were excluded due to death prior to 2002, missing age or no service connection. The subset with complete data resulted in a final cohort of 625,903 veterans. 
The Department of Veterans Affairs maintains data through the VA Information Resource Center (VIReC). Data was requested from and approved for use by VIReC following data use agreement requirements. The study was approved by the Medical University of South Carolina Institutional Review Board (IRB) and the Ralph H. Johnson Veterans Affairs Medical Center Research and Development committee.", "The main outcome measure was time to death. Veterans were followed from time of entry into the study until death, loss to follow-up, or through December 2006. A subject was considered censored if alive by December 2006.", "The primary covariates were medical comorbidity and psychiatric comorbidity both defined as the count of diseases for each subject throughout the study period. All comorbidities were dichotomized as present or absent where presence was determined by ICD-9 codes at entry into the cohort based on a previously validated algorithm in veterans [13]. Medical comorbidity variables included anemia, cancer, cardiovascular disease (CVD), cerebrovascular disease, congestive heart failure (CHF), fluid and electrolyte disorders, hypertension, hypothyroidism, liver disease, lung conditions (chronic pulmonary disease, pulmonary circulation disease), obesity, peripheral vascular disease, and other (acquired immunodeficiency syndrome–AIDS, rheumatoid arthritis, renal failure, peptic ulcer disease and bleeding, weight loss). Psychiatric comorbidities included psychoses, substance abuse (alcohol abuse, drug abuse) and depression [13].", "We controlled for seven demographic variables. Age was treated as continuous and centered at a mean of 66 years. Race/ethnicity included four categories with non-Hispanic white (NHW) serving as the reference group. Race/ethnicity was retrieved from the 2002 outpatient and inpatient [Medical SAS] data sets. 
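The primary covariates described above are counts of dichotomized conditions, later grouped into the categories 0, 1, 2, and 3 or more (medical) and 0, 1, and 2 (psychiatric) for stratification. A minimal sketch of that collapsing step, with illustrative flag names:

```python
def burden_category(flags, cap=3):
    """Collapse present/absent comorbidity flags into the count categories
    used for stratification: 0, 1, 2, ..., cap (cap meaning 'cap or more')."""
    return min(sum(bool(v) for v in flags.values()), cap)

# Hypothetical subject: four medical comorbidities -> category "3 or more"
medical = {"hypertension": True, "CHF": True, "obesity": True,
           "cerebrovascular_disease": True, "cancer": False}
# One psychiatric comorbidity -> category 1 (psychiatric capped at 2)
psychiatric = {"depression": True, "psychoses": False, "substance_abuse": False}
```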
When missing or unknown, the variable was supplemented using the inpatient race1-race6 fields from the 2003 [Medical SAS] data sets, the outpatient race1-race7 fields from the 2004 [Medical SAS] data sets, and the VA Vital Status Centers for Medicare and Medicaid Services (CMS) field for race. Gender, marital status, and location of residence (urban versus rural or highly rural) were dichotomous. Highly rural was categorized as rural according to the VA definition of rurality [14]. Percentage service-connectedness, representing the degree of disability due to illness or injury that was aggravated by or incurred in military service, was treated as dichotomous (1 = ≥50%, 0 = <50%). Region, which accounts for the five geographic regions of the country, was treated as a categorical variable: Northeast [VISNs 1, 2, 3], Mid-Atlantic [VISNs 4, 5, 6, 9, 10], South [VISNs 7, 8, 16, 17], Midwest [VISNs 11, 12, 15, 19, 23], and West [VISNs 18, 20, 21, 22] [15].", "In preliminary analyses, crude associations were examined between mortality and all measured covariates using chi-square tests for categorical variables and t-tests for continuous variables. Cox regression methods were used to model the association between time to death and medical and psychiatric comorbidity after adjusting for known covariates. Time to death was defined as the number of months from time of entry into the cohort to time of death or censoring (i.e., day last seen or May 2006). For the Cox model, appropriateness of the assumption of proportionality was determined by testing the coefficients of the interactions of time with the respective covariate in multivariate analyses. 
Then an interaction between medical and psychiatric comorbidity was tested to check whether the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity. HR estimates of medical comorbidity for each level of psychiatric comorbidity and estimates for levels of psychiatric comorbidity for levels of medical comorbidity are reported since there was significant interaction (p = 0.003). The Kaplan-Meier method was used to plot the survival functions for both medical and psychiatric comorbidities separately. Residual analysis was used to assess goodness-of-fit of each of the models. All data analyses were conducted using SAS 9.3 [16].", "The study population consisted of 625,903 veterans with diabetes who were followed until death, loss to follow-up, or through December 2006. The mean age was 65.4 years (sd 11.1) with a range of 18–100 years. Those with diabetes and no comorbid diagnosis comprised 10.6% of the sample. Hypertension was the most common medical comorbidity (78.2%) and depression was the most common psychiatric comorbidity (12.7%). Over one fifth of the population (22.5%) had three or more medical comorbidities and 3.2% of the population had two or more psychiatric comorbidities. During the follow-up period, 21.9% of individuals in the cohort died. Table 1 shows the demographic and comorbidity profile of the study population. The average survival time of those who died was 1.97 years. About one quarter of deaths occurred in each of the first (25.8%) and second (26.2%) years of follow-up, with 9.1% occurring in the final year of follow-up.\nDemographic and Comorbidity Profile of Veterans*\n*All numbers represent percentages unless otherwise indicated.\nThe mortality risks associated with individual medical and psychiatric comorbidities, after adjusting for demographic factors (age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region), are presented in Table 2. 
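The Kaplan-Meier method used for the survival curves can be illustrated with a minimal product-limit estimator. This toy implementation is a sketch of the method only, not the SAS procedure the authors used.

```python
from collections import Counter

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up durations (e.g., months); events: 1 = died, 0 = censored.
    Returns {event_time: S(t)} evaluated at each death time."""
    deaths = Counter(t for t, e in zip(times, events) if e)
    leaving = Counter(times)          # deaths plus censorings at each time
    at_risk = len(times)
    s, curve = 1.0, {}
    for t in sorted(leaving):
        d = deaths.get(t, 0)
        if d:
            s *= 1 - d / at_risk      # multiply in the conditional survival
            curve[t] = s
        at_risk -= leaving[t]         # drop everyone who leaves at time t
    return curve
```

For example, `kaplan_meier([1, 2, 2, 3], [1, 0, 1, 0])` gives S(1) = 0.75 (one death among four at risk) and S(2) = 0.5 (one death among the three still at risk).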
Of the medical comorbidities examined, congestive heart failure, present in 11.2% of the population, carried the greatest mortality risk: patients with congestive heart failure were almost twice as likely to die as those without (HR = 1.92). Lung and cerebrovascular disease, present in 13.8% and 11.4% of the population, respectively, also carried high mortality risk: patients with lung disease had a 42% greater risk of mortality than those without lung disease, and patients with cerebrovascular disease had a 39% greater risk than those without cerebrovascular disease. Conversely, hypertension and obesity each carried a lower mortality risk. Each of the psychiatric comorbidities examined carried a significantly increased mortality risk: 5% for depression, 16% for psychoses, and 50% for substance abuse. Unadjusted Kaplan-Meier survival curves, presented in Figure 1, illustrate the effect of medical comorbidity burden on probability of survival stratified by number of psychiatric comorbidities (i.e., 0, 1 and 2). Within each stratum of psychiatric comorbidity burden, there was a clear graded relationship between number of medical comorbidities and probability of survival. Specifically, within each stratum the probability of survival was highest in those with zero medical comorbidities and lowest in those with 3 or more medical comorbidities. In Figure 2, unadjusted Kaplan-Meier survival curves illustrate the effect of psychiatric comorbidity burden on probability of survival stratified by number of medical comorbidities (i.e., 0, 1, 2 and 3 or more). In contrast to the clearly separated curves in Figure 1, Figure 2 illustrates a similar probability of survival across levels of psychiatric comorbidity burden in veterans with zero and 1 medical comorbidity. 
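The percentage figures quoted above are simply the hazard ratios re-expressed as percent excess hazard; a one-line helper makes the conversion explicit (the function name is ours, for illustration).

```python
def pct_excess_hazard(hr):
    """Re-express a hazard ratio as percent change in the hazard rate."""
    return round((hr - 1) * 100)

# HR 1.92 for congestive heart failure -> 92% higher hazard;
# 1.42 (lung disease) -> 42%; 1.39 (cerebrovascular disease) -> 39%;
# 1.50 (substance abuse) -> 50%, matching the percentages in the text
```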
Moreover, while there is some differentiation in survival across levels of psychiatric comorbidity burden in veterans with 2 and 3 or more medical comorbidities, survival probability did not clearly decrease with increased psychiatric comorbidity burden.\nHazard Ratio and 95% Confidence Interval Estimates for Each Disease Category* (ordered by decreasing level of risk)\n*Adjusted for sociodemographic characteristics including age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region.\nUnadjusted Kaplan-Meier Survival Curves for Medical Comorbidity at Each Level of Psychiatric Comorbidity.\nUnadjusted Kaplan-Meier Survival Curves for Psychiatric Comorbidity at Each Level of Medical Comorbidity.\nTable 3 shows the HR and corresponding 95% CI for the association between number of medical comorbidities (i.e., 0, 1, 2 and 3 or more) and mortality stratified by number of psychiatric comorbidities (i.e., 0, 1 and 2) after adjusting for demographic factors (age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region). The mortality hazards associated with having 2 or 3 or more medical comorbidities as compared to no medical comorbidities were 1.54 and 2.63, respectively, in veterans with diabetes without any psychiatric comorbidity. Similar patterns in mortality hazards were seen in veterans with a single psychiatric comorbidity among those with 2 or 3 or more medical comorbidities compared to those with no medical comorbidities (HRs of 1.51 and 2.66, respectively). 
Among veterans with two psychiatric comorbidities, the mortality hazards associated with having 2 or 3 or more medical comorbidities, relative to none, decreased only slightly (HRs of 1.30 and 2.15, respectively).\nHazard Ratios (HR)* and 95% Confidence Interval (CI) for Effect of Medical and Psychiatric Comorbidity Burden on Mortality among Veterans with Diabetes\n*Interaction terms were used to determine adjusted hazard ratios for the effect of medical comorbidity within strata of psychiatric comorbidity (a) as well as for the effect of psychiatric comorbidity within strata of medical comorbidity (b). Adjusted for sociodemographic characteristics including age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region.\nAdditionally, Table 3 shows the HR and corresponding 95% confidence intervals (CI) for the association between number of psychiatric comorbidities (i.e., 0, 1 and 2) and mortality stratified by number of medical comorbidities (i.e., 0, 1, 2 and 3 or more) after adjusting for demographic factors. The mortality hazard associated with having a single as compared to zero psychiatric comorbidities remained constant across all medical comorbidity categories, with respective hazard ratios of 1.23 and 1.24 in veterans with zero and 3 or more medical comorbidities. In contrast, the mortality hazard associated with having two as compared to zero psychiatric comorbidities decreased from 1.69 in veterans with zero medical comorbidities to 1.38 in veterans with 3 or more medical comorbidities.", "The authors declare that they have no competing interests.", "LE conceived study concept and design. MG and LE were responsible for acquisition of data. CL, LE, MG, YZ, and KH analyzed and interpreted data. CL, KH, and MG drafted the manuscript. Critical revision of the manuscript for important intellectual content was provided by CL, KH, and LE. Study supervision was provided by LE. 
All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6823/14/68/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Outcome measure", "Primary covariates", "Demographic variables", "Statistical analysis", "Results and discussion", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Multi-morbidity, or the presence of multiple chronic diseases, is a major problem in clinical care and is associated with worse outcomes [1-3]. People with multi-morbid conditions also have a higher medication burden [4] and, not unexpectedly, have worse medication adherence [5]. The prevalence of multi-morbidity is high (>50%) in middle-aged and older populations [6]. However, most health systems are configured for management of individual diseases rather than multi-morbidity. A recent review article suggests the number of studies testing interventions for patients with multi-morbid disease has increased, but the findings showed that organizational interventions with a broader focus and patient-specific interventions not linked to care delivery had little impact on health outcomes [7].\nThe presence of mental health conditions (e.g., depression, anxiety, etc.) in people with multi-morbidity has an adverse impact on clinical outcomes [8,9]. A study of the impact of psychiatric conditions (depression/anxiety, substance abuse, psychotic or bipolar disorder) on mortality among individuals with diabetes indicated that alcohol and drug abuse/dependence was associated with a 22% higher mortality [10]. Chronic conditions like diabetes tend to occur in clusters (with other cardiovascular conditions like hypertension, heart disease and stroke) [11]. However, very little is known about the separate and combined impact of medical and psychiatric conditions on mortality, or about which has the strongest impact on risk of death, in individuals with diabetes. Few studies have carefully assessed the impact of both medical and psychiatric multi-morbidity on mortality in middle-aged and older adults with diabetes.\nOlder and elderly individuals have a greater disease burden, are more likely to have multi-morbidity, and are more likely to experience its associated adverse outcomes. 
Because of the detailed clinical data available in the Veterans Health Administration (VHA) databases, the older age of the population, and the ability to track patients over long periods of time, veterans are an ideal population in which to tease apart the impact of medical and psychiatric multi-morbidity on mortality. The focus of this study, however, is the impact of total burden of disease rather than of specific comorbidities; we therefore used a national sample of veterans with type 2 diabetes followed over 5 years to examine the prevalence and differential impact of medical and psychiatric multi-morbidity on risk of death.", " Study population A national cohort of veterans with type 2 diabetes was created by linking multiple patient and administrative files from the VHA National Patient Care and Pharmacy Benefits Management (PBM) databases. We used a previously validated algorithm [12,13] for identifying veterans with diabetes. Veterans were included in the cohort if they had: 1) type 2 diabetes defined by two or more International Classification of Diseases, Ninth Revision (ICD-9) codes for diabetes (250, 357.2, 362.0, and 366.41) in the previous 24 months (2000 and 2001) and during 2002 from inpatient stays and/or outpatient visits on separate days (excluding codes from lab tests and other non-clinician visits); or 2) prescriptions for insulin or oral hypoglycemic agents (VA classes HS501 or HS502, respectively; to capture those without a diabetes ICD-9 code) in 2002. PBM data were available during the entire period of analysis. When the data were merged based on the criteria above, the total sample included 832,000 veterans. We excluded those not taking prescription medications for diabetes (n = 201,255) and added those who had one ICD-9 code for diabetes and prescriptions filled in 2002 (n = 60,493); a further 3,660 were excluded due to death prior to 2002, missing age, or no service connection. The subset with complete data resulted in a final cohort of 625,903 veterans. 
The Department of Veterans Affairs maintains data through the VA Information Resource Center (VIReC). Data was requested from and approved for use by VIReC following data use agreement requirements. The study was approved by the Medical University of South Carolina Institutional Review Board (IRB) and the Ralph H. Johnson Veterans Affairs Medical Center Research and Development committee.\n Outcome measure The main outcome measure was time to death. Veterans were followed from time of entry into the study until death, loss to follow-up, or through December 2006. A subject was considered censored if alive by December 2006.\n Primary covariates The primary covariates were medical comorbidity and psychiatric comorbidity both defined as the count of diseases for each subject throughout the study period. All comorbidities were dichotomized as present or absent where presence was determined by ICD-9 codes at entry into the cohort based on a previously validated algorithm in veterans [13]. Medical comorbidity variables included anemia, cancer, cardiovascular disease (CVD), cerebrovascular disease, congestive heart failure (CHF), fluid and electrolyte disorders, hypertension, hypothyroidism, liver disease, lung conditions (chronic pulmonary disease, pulmonary circulation disease), obesity, peripheral vascular disease, and other (acquired immunodeficiency syndrome–AIDS, rheumatoid arthritis, renal failure, peptic ulcer disease and bleeding, weight loss). Psychiatric comorbidities included psychoses, substance abuse (alcohol abuse, drug abuse) and depression [13].\n Demographic variables We controlled for seven demographic variables. Age was treated as continuous and centered at a mean of 66 years. Race/ethnicity included four categories with non-Hispanic white (NHW) serving as the reference group. Race/ethnicity was retrieved from the 2002 outpatient and inpatient [Medical SAS] data sets. When missing or unknown, the variable was supplemented using the inpatient race1-race6 fields from the 2003 [Medical SAS] data sets, the outpatient race1-race7 fields from the 2004 [Medical SAS] data sets, and the VA Vital Status Centers for Medicare and Medicaid Services (CMS) field for race. Gender, marital status, and location of residence (urban versus rural or highly rural) were dichotomous. Highly rural was categorized as rural according to the VA definition of rurality [14]. Percentage service-connectedness, representing the degree of disability due to illness or injury that was aggravated by or incurred in military service, was treated as dichotomous (1= > 50%, 0 = <50%). Region, which accounts for the five geographic regions of the country, was treated as a categorical variable: Northeast [VISNs 1, 2, 3], Mid-Atlantic [VISNs 4, 5, 6, 9, 10], South [VISNs 7, 8, 16, 17], Midwest [VISNs 11, 12, 15, 19, 23], and West [VISNs 18, 20, 21, 22] [15].\n Statistical analysis In preliminary analyses, crude associations were examined between mortality and all measured covariates using chi-square tests for categorical variables and t-tests for continuous variables. Cox regression methods were used to model the association between time to death and medical and psychiatric comorbidity after adjusting for known covariates. Time to death was defined as the number of months from time of entry into the cohort to time of death or censoring (i.e., day last seen or May 2006). For the Cox model, appropriateness of the assumption of proportionality was determined by testing the coefficients of the interactions of time with the respective covariate in multivariate analyses. Initially, Cox models for each of the medical and psychiatric comorbidities were fitted adjusting for all covariates (race, socio-demographics). Then an interaction between medical and psychiatric comorbidity was tested to check whether the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity. HR estimates of medical comorbidity for each level of psychiatric comorbidity and estimates for levels of psychiatric comorbidity for levels of medical comorbidity are reported since there was significant interaction (p = 0.003). The Kaplan-Meier method was used to plot the survival functions for both medical and psychiatric comorbidities separately. Residual analysis was used to assess goodness-of-fit of each of the models. All data analyses were conducted using SAS 9.3 [16].", "A national cohort of veterans with type 2 diabetes was created by linking multiple patient and administrative files from the VHA National Patient Care and Pharmacy Benefits Management (PBM) databases. We used a previously validated algorithm [12,13] for identifying veterans with diabetes. Veterans were included in the cohort if they had: 1) type 2 diabetes defined by two or more International Classification of Diseases, Ninth Revision (ICD-9) codes for diabetes (250, 357.2, 362.0, and 366.41) in the previous 24 months (2000 and 2001) and during 2002 from inpatient stays and/or outpatient visits on separate days (excluding codes from lab tests and other non-clinician visits); or 2) prescriptions for insulin or oral hypoglycemic agents (VA classes HS501 or HS502, respectively; to capture those without a diabetes ICD-9 code) in 2002. PBM data were available during the entire period of analysis. When the data were merged based on the criteria above, the total sample included 832,000 veterans. We excluded those not taking prescription medications for diabetes (n = 201,255) and added those who had one ICD-9 code for diabetes and prescriptions filled in 2002 (n = 60,493); and 3,660 were excluded due to death prior to 2002, missing age or no service connection. The subset with complete data resulted in a final cohort of 625,903 veterans. The Department of Veterans Affairs maintains data through the VA Information Resource Center (VIReC). 
Data was requested from and approved for use by VIReC following data use agreement requirements. The study was approved by the Medical University of South Carolina Institutional Review Board (IRB) and the Ralph H. Johnson Veterans Affairs Medical Center Research and Development committee.", "The main outcome measure was time to death. Veterans were followed from time of entry into the study until death, loss to follow-up, or through December 2006. A subject was considered censored if alive by December 2006.", "The primary covariates were medical comorbidity and psychiatric comorbidity both defined as the count of diseases for each subject throughout the study period. All comorbidities were dichotomized as present or absent where presence was determined by ICD-9 codes at entry into the cohort based on a previously validated algorithm in veterans [13]. Medical comorbidity variables included anemia, cancer, cardiovascular disease (CVD), cerebrovascular disease, congestive heart failure (CHF), fluid and electrolyte disorders, hypertension, hypothyroidism, liver disease, lung conditions (chronic pulmonary disease, pulmonary circulation disease), obesity, peripheral vascular disease, and other (acquired immunodeficiency syndrome–AIDS, rheumatoid arthritis, renal failure, peptic ulcer disease and bleeding, weight loss). Psychiatric comorbidities included psychoses, substance abuse (alcohol abuse, drug abuse) and depression [13].", "We controlled for seven demographic variables. Age was treated as continuous and centered at a mean of 66 years. Race/ethnicity included four categories with non-Hispanic white (NHW) serving as the reference group. Race/ethnicity was retrieved from the 2002 outpatient and inpatient [Medical SAS] data sets. 
When missing or unknown, the variable was supplemented using the inpatient race1-race6 fields from the 2003 [Medical SAS] data sets, the outpatient race1-race7 fields from the 2004 [Medical SAS] data sets, and the VA Vital Status Centers for Medicare and Medicaid Services (CMS) field for race. Gender, marital status, and location of residence (urban versus rural or highly rural) were dichotomous. Highly rural was categorized as rural according to the VA definition of rurality [14]. Percentage service-connectedness, representing the degree of disability due to illness or injury that was aggravated by or incurred in military service, was treated as dichotomous (1 = ≥50%, 0 = <50%). Region, which accounts for the five geographic regions of the country, was treated as a categorical variable: Northeast [VISNs 1, 2, 3], Mid-Atlantic [VISNs 4, 5, 6, 9, 10], South [VISNs 7, 8, 16, 17], Midwest [VISNs 11, 12, 15, 19, 23], and West [VISNs 18, 20, 21, 22] [15].", "In preliminary analyses, crude associations were examined between mortality and all measured covariates using chi-square tests for categorical variables and t-tests for continuous variables. Cox regression methods were used to model the association between time to death and medical and psychiatric comorbidity after adjusting for known covariates. Time to death was defined as the number of months from time of entry into the cohort to time of death or censoring (i.e., day last seen or May 2006). For the Cox model, appropriateness of the assumption of proportionality was determined by testing the coefficients of the interactions of time with the respective covariate in multivariate analyses. Initially, Cox models for each of the medical and psychiatric comorbidities were fitted adjusting for all covariates (race, socio-demographics). 
Then an interaction between medical and psychiatric comorbidity was tested to check whether the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity. HR estimates of medical comorbidity for each level of psychiatric comorbidity and estimates for levels of psychiatric comorbidity for levels of medical comorbidity are reported since there was significant interaction (p = 0.003). The Kaplan-Meier method was used to plot the survival functions for both medical and psychiatric comorbidities separately. Residual analysis was used to assess goodness-of-fit of each of the models. All data analyses were conducted using SAS 9.3 [16].", "The study population consisted of 625,903 veterans with diabetes who were followed until death, loss to follow-up, or through December 2006. The mean age was 65.4 years (sd 11.1) with a range of 18–100 years. Those with diabetes and no comorbid diagnosis comprised 10.6% of the sample. Hypertension was the most common medical comorbidity (78.2%) and depression was the most common psychiatric comorbidity (12.7%). Over one fifth of the population (22.5%) had three or more medical comorbidities and 3.2% of the population had two or more psychiatric comorbidities. During the follow-up period, 21.9% of individuals in the cohort died. Table 1 shows the demographic and comorbidity profile of the study population. The average survival time of those who died was 1.97 years. About one quarter of deaths occurred in each of the first (25.8%) and second (26.2%) years of follow-up, with 9.1% occurring in the final year of follow-up.\nDemographic and Comorbidity Profile of Veterans*\n*All numbers represent percentages unless otherwise indicated.\nThe mortality risks associated with individual medical and psychiatric comorbidities, after adjusting for demographic factors (age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region), are presented in Table 2. 
Of the medical comorbidities examined, congestive heart failure, present in 11.2% of the population, carried the greatest mortality risk: patients with congestive heart failure were almost twice as likely to die as those without it (HR = 1.92). Lung and cerebrovascular disease, present in 13.8% and 11.4% of the population, respectively, also carried high mortality risk: patients with lung disease had a 42% greater risk of mortality than those without lung disease, and patients with cerebrovascular disease a 39% greater risk than those without cerebrovascular disease. In contrast, hypertension and obesity each carried a lower mortality risk. Each of the psychiatric comorbidities examined also carried significantly increased mortality risk: 5% for depression, 16% for psychoses, and 50% for substance abuse. Unadjusted Kaplan-Meier survival curves presented in Figure 1 illustrate the effect of medical comorbidity burden on probability of survival stratified by number of psychiatric comorbidities (i.e., 0, 1 and 2). Within each stratum of psychiatric comorbidity burden, there was a clear graded relationship between number of medical comorbidities and probability of survival. Specifically, within each stratum the probability of survival was highest in those with zero medical comorbidities and lowest in those with 3 or more medical comorbidities. In Figure 2, unadjusted Kaplan-Meier survival curves illustrate the effect of psychiatric comorbidity burden on probability of survival stratified by number of medical comorbidities (i.e., 0, 1, 2 and 3 or more). In contrast to the distinct survival curves at each level of medical comorbidity, Figure 2 shows a similar probability of survival across levels of psychiatric comorbidity burden in veterans with zero and 1 medical comorbidity. 
Moreover, while there is some differentiation in survival across levels of psychiatric comorbidity burden in veterans with 2 and 3 or more medical comorbidities, survival probability did not clearly decrease with increased psychiatric comorbidity burden.\nHazard Ratio and 95% Confidence Interval Estimates for Each Disease Category* (ordered by decreasing level of risk)\n*Adjusted for sociodemographic characteristics including age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region.\nUnadjusted Kaplan-Meier Survival Curves for Medical Comorbidity at Each Level of Psychiatric Comorbidity.\nUnadjusted Kaplan-Meier Survival Curves for Psychiatric Comorbidity at Each Level of Medical Comorbidity.\nTable 3 shows the HR and corresponding 95% CI for the association between number of medical comorbidities (i.e., 0, 1, 2 and 3 or more) and mortality stratified by number of psychiatric comorbidities (i.e., 0, 1 and 2) after adjusting for demographic factors (age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region). The mortality hazards associated with having 2 or 3 or more medical comorbidities as compared to no medical comorbidities were 1.54 and 2.63, respectively, in veterans with diabetes without any psychiatric comorbidity. Similar patterns in mortality hazards were seen in veterans with a single psychiatric comorbidity among those with 2 or 3 or more medical comorbidities compared to those with no medical comorbidities (HRs of 1.51 and 2.66, respectively). 
Among veterans with two psychiatric comorbidities, the mortality hazards associated with having 2 or 3 or more medical comorbidities, as compared to no medical comorbidity, were only slightly lower (HRs of 1.30 and 2.15, respectively).\nHazard Ratios (HR)* and 95% Confidence Interval (CI) for Effect of Medical and Psychiatric Comorbidity Burden on Mortality among Veterans with Diabetes\n*Interaction terms were used to determine adjusted hazard ratios for the effect of medical comorbidity within strata of psychiatric comorbidity^a as well as for the effect of psychiatric comorbidity within strata of medical comorbidity^b. Adjusted for sociodemographic characteristics including age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region.\nAdditionally, Table 3 shows the HR and corresponding 95% confidence intervals (CI) for the association between number of psychiatric comorbidities (i.e., 0, 1 and 2) and mortality stratified by number of medical comorbidities (i.e., 0, 1, 2 and 3 or more) after adjusting for demographic factors. The mortality hazard associated with having a single as compared to zero psychiatric comorbidities remained constant across all medical comorbidity categories, with hazard ratios of 1.23 and 1.24 in veterans with zero and 3 or more medical comorbidities, respectively. In contrast, the mortality hazard associated with having two as compared to zero psychiatric comorbidities decreased from 1.69 in veterans with zero medical comorbidities to 1.38 in veterans with 3 or more medical comorbidities.", "The findings of this study clearly demonstrate that the total burden of medical and psychiatric multi-morbidity is a significant predictor of mortality among middle-aged and older adults (veterans) with type 2 diabetes, with a graded increase in risk as medical multi-morbidity increases. 
The average survival time was about 2 years with 52% of deaths occurring in the first 2 years of follow-up and 9% in the final year of follow-up. Among medical comorbidities, mortality risk was highest in those with congestive heart failure, lung disease and cerebrovascular disease; while among psychiatric comorbidities, mortality risk was highest in those with substance abuse, psychoses and depression. There was an interaction between medical and psychiatric comorbidity indicating that the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity so stratified analyses were performed. HRs for effect of 3+ medical comorbidities remained high across levels of psychiatric comorbidities (0, 1, 2+), while HRs for effect of 2+ psychiatric comorbidities declined across levels of medical comorbidity (0, 1, 2, 3+).\nInterestingly, comorbid hypertension and obesity were associated with lower mortality risk in this sample of veterans with type 2 diabetes. Since more than two-thirds of obese veterans also have comorbid hypertension [17], it is likely that the reduced mortality risk associated with hypertension was demonstrated among obese veterans. These findings may be explained by multiple interventions within the VA system from 2000–2010 (encompasses the time period of this study) that were successful in achieving blood pressure control among veterans who are provided access to healthcare services and to a very affordable formulary of anti-hypertensive medications [18]. Furthermore, one should consider that this analysis examined data generated shortly after diabetes was designated as a CVD risk equivalent along with tobacco smoking, hypertension, and hyperlipidemia [19]. 
In that case, it is likely that three forces combined to reduce not only CVD mortality rates but also the impact of comorbid hypertension and obesity on mortality in people with diabetes: a strong awareness of CVD as a leading cause of death; growing evidence of behavioral management and treatment effectiveness from multiple large-scale RCTs (such as the Hypertension Optimal Treatment (HOT) trial, the United Kingdom Prospective Diabetes Study (UKPDS), the Diabetes Prevention Program (DPP), the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial, and the Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)); and broad public health efforts, led by guidelines from several national organizations (the American Heart Association (AHA), the American Diabetes Association (ADA), and the National Heart, Lung, and Blood Institute (NHLBI)), to aggressively manage CVD risk factors among people with diabetes.\nMedical comorbidities exerted the greatest influence on mortality and had significantly stronger effects on hazard of death than psychiatric comorbidities. In stratified analyses, the risk of death with increasing medical comorbidity remained high across each level of psychiatric comorbidity, with the greatest risk seen in individuals with 3+ medical comorbidities compared to none (HRs ranging from 2.15 to 2.66 across psychiatric comorbidity strata). In contrast, the risk of death with increasing psychiatric comorbidity was not consistently high across levels of medical comorbidity. Instead, the risk of death with 2+ psychiatric comorbidities declined as the number of medical comorbidities increased. In other words, among those with 2 or more psychiatric comorbidities, baseline risk (no medical comorbidity) is higher while the risk associated with additional medical comorbidity is lower. One explanation could be the increased care associated with multiple medical and psychiatric comorbidities. 
However, the mortality risk from psychiatric comorbidities remained significant, emphasizing the need for greater attention to identifying and treating psychiatric comorbidities, rather than focusing on medical comorbidities alone.\nThere is growing awareness of the burden of multi-morbidity and the need to develop new treatment paradigms that account for it. This is particularly important for certain populations, such as the elderly, veterans, and low-income and ethnic minority populations with diabetes, where the prevalence of multi-morbidity is high and the risk for adverse outcomes increases exponentially with increasing levels of multi-morbidity [20-22]. The current approach to diabetes treatment, focused on management of single or related diseases (e.g., diabetes and CVD), may no longer be appropriate given emerging evidence of the high prevalence of multi-morbidity and its detrimental effect on health outcomes, including polypharmacy, increased morbidity, disability, health utilization/cost and mortality [5,7,23-26]. As the paradigm of care shifts to patient-centered care, strategies are needed to treat the individual as a whole and coordinate care while accounting for medical and psychiatric multi-morbidity. This will be particularly important in middle-aged and older adults, where our findings show that multi-morbidity is associated with significant increases in mortality. Newer evidence suggests that interventions linked to care delivery and focused on specific patient needs (e.g., medication management or functional status) are effective for improving outcomes among patients with multi-morbidity [7]. Our findings further highlight the need to integrate care for medical and psychiatric conditions and to address the fragmentation of health care that currently exists with separate coverage for medical and mental health conditions. 
Some of these strategies will require policy changes in how mental health services are covered and in reimbursement structures in primary care settings.\nStrengths of our study include the study population, its longitudinal design with 5 years of follow-up data, the extensive data available on comorbidities, and our ability to identify racial/ethnic group in over 90% of the cohort. To place this study in context, we report on its limitations. First, the VA medical record either omits or fails to update many important social, environmental and behavioral variables, including socio-economic status, diet, physical activity, weight and alcohol consumption, as well as disease measures such as diabetes duration. Second, there is no way to account for the lag time between disease onset and diagnosis; thus, the timing of onset of comorbidity cannot be accurately determined in this retrospective cohort analysis. Finally, no measures of disease control were accounted for, so the findings are generalizable only to veterans with the diagnoses included in this study.\nNevertheless, our findings are important and show that, in this national longitudinal cohort of veterans with diabetes followed over 5 years, multi-morbidity was associated with an increased risk of death, with a graded increase in mortality risk with increasing numbers of medical and psychiatric comorbidities.", "The authors declare that they have no competing interests.", "LE conceived the study concept and design. MG and LE were responsible for acquisition of data. CL, LE, MG, YZ, and KH analyzed and interpreted data. CL, KH, and MG drafted the manuscript. Critical revision of the manuscript for important intellectual content was provided by CL, KH, and LE. Study supervision was provided by LE. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6823/14/68/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, "conclusions", null, null, null ]
[ "Diabetes", "Comorbidity", "Mortality", "Elderly", "Veterans" ]
Background: Multi-morbidity, or the presence of multiple chronic diseases, is a major problem in clinical care and is associated with worse outcomes [1-3]. People with multi-morbid conditions also have a higher medication burden [4] and, not unexpectedly, worse medication adherence [5]. The prevalence of multi-morbidity is high (>50%) in middle-aged and older populations [6]. However, most health systems are generally configured for management of individual diseases rather than multi-morbidity. A recent review article suggests that the number of studies testing interventions for patients with multi-morbid disease has increased, but the findings showed that organizational interventions with a broader focus and patient-specific interventions not linked to care delivery had little impact on health outcomes [7]. The presence of mental health conditions (e.g., depression, anxiety) in people with multi-morbidity has an adverse impact on clinical outcomes [8,9]. A study of the impact of psychiatric conditions (depression/anxiety, substance abuse, psychotic or bipolar disorder) on mortality among individuals with diabetes indicated that alcohol and drug abuse/dependence was associated with a 22% higher mortality [10]. Chronic conditions like diabetes tend to occur in clusters (with other cardiovascular conditions like hypertension, heart disease and stroke) [11]. However, very little is known about the separate and combined impact of medical and psychiatric conditions on mortality, or which has the strongest impact on risk of death in individuals with diabetes. Few studies have carefully assessed the impact of both medical and psychiatric multi-morbidity on mortality in middle-aged and older adults with diabetes. Older and elderly individuals have a greater disease burden and are more likely to have multi-morbidity and to experience the adverse outcomes associated with it. 
Because of the detailed clinical data available in the Veterans Health Administration (VHA) databases, the older age, and the ability to track patients over long periods of time, veterans are an ideal population to tease apart the impact of medical and psychiatric multi-morbidity on mortality. However, the focus of this study is to examine the impact of total burden of disease rather than specific comorbidities, so we used a national sample of veterans with type 2 diabetes followed over 5 years to examine the prevalence and differential impact of medical and psychiatric multi-morbidity on risk of death. Methods: Study population A national cohort of veterans with type 2 diabetes was created by linking multiple patient and administrative files from the VHA National Patient Care and Pharmacy Benefits Management (PBM) databases. We used a previously validated algorithm [12,13] for identifying veterans with diabetes. Veterans were included in the cohort if they had: 1) type 2 diabetes defined by two or more International Classification of Diseases, Ninth Revision (ICD-9) codes for diabetes (250, 357.2, 362.0, and 366.41) in the previous 24 months (2000 and 2001) and during 2002 from inpatient stays and/or outpatient visits on separate days (excluding codes from lab tests and other non-clinician visits); or 2) prescriptions for insulin or oral hypoglycemic agents (VA classes HS501 or HS502, respectively; to capture those without a diabetes ICD-9 code) in 2002. PBM data were available during the entire period of analysis. When the data were merged based on the criteria above, the total sample included 832,000 veterans. We excluded those not taking prescription medications for diabetes (n = 201,255) and added those who had one ICD-9 code for diabetes and prescriptions filled in 2002 (n = 60,493); and 3,660 were excluded due to death prior to 2002, missing age or no service connection. The subset with complete data resulted in a final cohort of 625,903 veterans. 
The Department of Veterans Affairs maintains data through the VA Information Resource Center (VIReC). Data was requested from and approved for use by VIReC following data use agreement requirements. The study was approved by the Medical University of South Carolina Institutional Review Board (IRB) and the Ralph H. Johnson Veterans Affairs Medical Center Research and Development committee. A national cohort of veterans with type 2 diabetes was created by linking multiple patient and administrative files from the VHA National Patient Care and Pharmacy Benefits Management (PBM) databases. We used a previously validated algorithm [12,13] for identifying veterans with diabetes. Veterans were included in the cohort if they had: 1) type 2 diabetes defined by two or more International Classification of Diseases, Ninth Revision (ICD-9) codes for diabetes (250, 357.2, 362.0, and 366.41) in the previous 24 months (2000 and 2001) and during 2002 from inpatient stays and/or outpatient visits on separate days (excluding codes from lab tests and other non-clinician visits); or 2) prescriptions for insulin or oral hypoglycemic agents (VA classes HS501 or HS502, respectively; to capture those without a diabetes ICD-9 code) in 2002. PBM data were available during the entire period of analysis. When the data were merged based on the criteria above, the total sample included 832,000 veterans. We excluded those not taking prescription medications for diabetes (n = 201,255) and added those who had one ICD-9 code for diabetes and prescriptions filled in 2002 (n = 60,493); and 3,660 were excluded due to death prior to 2002, missing age or no service connection. The subset with complete data resulted in a final cohort of 625,903 veterans. The Department of Veterans Affairs maintains data through the VA Information Resource Center (VIReC). Data was requested from and approved for use by VIReC following data use agreement requirements. 
The study was approved by the Medical University of South Carolina Institutional Review Board (IRB) and the Ralph H. Johnson Veterans Affairs Medical Center Research and Development committee. Outcome measure The main outcome measure was time to death. Veterans were followed from time of entry into the study until death, loss to follow-up, or through December 2006. A subject was considered censored if alive by December 2006. The main outcome measure was time to death. Veterans were followed from time of entry into the study until death, loss to follow-up, or through December 2006. A subject was considered censored if alive by December 2006. Primary covariates The primary covariates were medical comorbidity and psychiatric comorbidity both defined as the count of diseases for each subject throughout the study period. All comorbidities were dichotomized as present or absent where presence was determined by ICD-9 codes at entry into the cohort based on a previously validated algorithm in veterans [13]. Medical comorbidity variables included anemia, cancer, cardiovascular disease (CVD), cerebrovascular disease, congestive heart failure (CHF), fluid and electrolyte disorders, hypertension, hypothyroidism, liver disease, lung conditions (chronic pulmonary disease, pulmonary circulation disease), obesity, peripheral vascular disease, and other (acquired immunodeficiency syndrome–AIDS, rheumatoid arthritis, renal failure, peptic ulcer disease and bleeding, weight loss). Psychiatric comorbidities included psychoses, substance abuse (alcohol abuse, drug abuse) and depression [13]. The primary covariates were medical comorbidity and psychiatric comorbidity both defined as the count of diseases for each subject throughout the study period. All comorbidities were dichotomized as present or absent where presence was determined by ICD-9 codes at entry into the cohort based on a previously validated algorithm in veterans [13]. 
Medical comorbidity variables included anemia, cancer, cardiovascular disease (CVD), cerebrovascular disease, congestive heart failure (CHF), fluid and electrolyte disorders, hypertension, hypothyroidism, liver disease, lung conditions (chronic pulmonary disease, pulmonary circulation disease), obesity, peripheral vascular disease, and other (acquired immunodeficiency syndrome–AIDS, rheumatoid arthritis, renal failure, peptic ulcer disease and bleeding, weight loss). Psychiatric comorbidities included psychoses, substance abuse (alcohol abuse, drug abuse) and depression [13]. Demographic variables We controlled for seven demographic variables. Age was treated as continuous and centered at a mean of 66 years. Race/ethnicity included four categories with non-Hispanic white (NHW) serving as the reference group. Race/ethnicity was retrieved from the 2002 outpatient and inpatient [Medical SAS] data sets. When missing or unknown, the variable was supplemented using the inpatient race1-race6 fields from the 2003 [Medical SAS] data sets, the outpatient race1-race7 fields from the 2004 [Medical SAS] data sets, and the VA Vital Status Centers for Medicare and Medicaid Services (CMS) field for race. Gender, marital status, and location of residence (urban versus rural or highly rural) were dichotomous. Highly rural was categorized as rural according to the VA definition of rurality [14]. Percentage service-connectedness, representing the degree of disability due to illness or injury that was aggravated by or incurred in military service, was treated as dichotomous (1= > 50%, 0 = <50%). Region, which accounts for the five geographic regions of the country, was treated as a categorical variable: Northeast [VISNs 1, 2, 3], Mid-Atlantic [VISNs 4, 5, 6, 9, 10], South [VISNs 7, 8, 16, 17], Midwest [VISNs 11, 12, 15, 19, 23], and West [VISNs 18, 20, 21, 22] [15]. We controlled for seven demographic variables. Age was treated as continuous and centered at a mean of 66 years. 
Race/ethnicity included four categories with non-Hispanic white (NHW) serving as the reference group. Race/ethnicity was retrieved from the 2002 outpatient and inpatient [Medical SAS] data sets. When missing or unknown, the variable was supplemented using the inpatient race1-race6 fields from the 2003 [Medical SAS] data sets, the outpatient race1-race7 fields from the 2004 [Medical SAS] data sets, and the VA Vital Status Centers for Medicare and Medicaid Services (CMS) field for race. Gender, marital status, and location of residence (urban versus rural or highly rural) were dichotomous. Highly rural was categorized as rural according to the VA definition of rurality [14]. Percentage service-connectedness, representing the degree of disability due to illness or injury that was aggravated by or incurred in military service, was treated as dichotomous (1= > 50%, 0 = <50%). Region, which accounts for the five geographic regions of the country, was treated as a categorical variable: Northeast [VISNs 1, 2, 3], Mid-Atlantic [VISNs 4, 5, 6, 9, 10], South [VISNs 7, 8, 16, 17], Midwest [VISNs 11, 12, 15, 19, 23], and West [VISNs 18, 20, 21, 22] [15]. Statistical analysis In preliminary analyses, crude associations were examined between mortality and all measured covariates using chi-square tests for categorical variables and t-tests for continuous variables. Cox regression methods were used to model the association between time to death and medical and psychiatric comorbidity after adjusting for known covariates. Time to death was defined as the number of months from time of entry into the cohort to time of death or censoring (i.e., day last seen or May 2006). For the Cox model, appropriateness of the assumption of proportionality was determined by testing the coefficients of the interactions of time with the respective covariate in multivariate analyses. 
Initially, Cox models for each of the medical and psychiatric comorbidities were fitted adjusting for all covariates (race, socio-demographics). Then an interaction between medical and psychiatric comorbidity was tested to check whether the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity. HR estimates of medical comorbidity for each level of psychiatric comorbidity and estimates for levels of psychiatric comorbidity for levels of medical comorbidity are reported since there was significant interaction (p = 0.003). The Kaplan-Meier method was used to plot the survival functions for both medical and psychiatric comorbidities separately. Residual analysis was used to assess goodness-of-fit of each of the models. All data analyses were conducted using SAS 9.3 [16]. In preliminary analyses, crude associations were examined between mortality and all measured covariates using chi-square tests for categorical variables and t-tests for continuous variables. Cox regression methods were used to model the association between time to death and medical and psychiatric comorbidity after adjusting for known covariates. Time to death was defined as the number of months from time of entry into the cohort to time of death or censoring (i.e., day last seen or May 2006). For the Cox model, appropriateness of the assumption of proportionality was determined by testing the coefficients of the interactions of time with the respective covariate in multivariate analyses. Initially, Cox models for each of the medical and psychiatric comorbidities were fitted adjusting for all covariates (race, socio-demographics). Then an interaction between medical and psychiatric comorbidity was tested to check whether the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity. 
HR estimates of medical comorbidity for each level of psychiatric comorbidity and estimates for levels of psychiatric comorbidity for levels of medical comorbidity are reported since there was significant interaction (p = 0.003). The Kaplan-Meier method was used to plot the survival functions for both medical and psychiatric comorbidities separately. Residual analysis was used to assess goodness-of-fit of each of the models. All data analyses were conducted using SAS 9.3 [16]. Study population: A national cohort of veterans with type 2 diabetes was created by linking multiple patient and administrative files from the VHA National Patient Care and Pharmacy Benefits Management (PBM) databases. We used a previously validated algorithm [12,13] for identifying veterans with diabetes. Veterans were included in the cohort if they had: 1) type 2 diabetes defined by two or more International Classification of Diseases, Ninth Revision (ICD-9) codes for diabetes (250, 357.2, 362.0, and 366.41) in the previous 24 months (2000 and 2001) and during 2002 from inpatient stays and/or outpatient visits on separate days (excluding codes from lab tests and other non-clinician visits); or 2) prescriptions for insulin or oral hypoglycemic agents (VA classes HS501 or HS502, respectively; to capture those without a diabetes ICD-9 code) in 2002. PBM data were available during the entire period of analysis. When the data were merged based on the criteria above, the total sample included 832,000 veterans. We excluded those not taking prescription medications for diabetes (n = 201,255) and added those who had one ICD-9 code for diabetes and prescriptions filled in 2002 (n = 60,493); and 3,660 were excluded due to death prior to 2002, missing age or no service connection. The subset with complete data resulted in a final cohort of 625,903 veterans. The Department of Veterans Affairs maintains data through the VA Information Resource Center (VIReC). 
Data was requested from and approved for use by VIReC following data use agreement requirements. The study was approved by the Medical University of South Carolina Institutional Review Board (IRB) and the Ralph H. Johnson Veterans Affairs Medical Center Research and Development committee. Outcome measure: The main outcome measure was time to death. Veterans were followed from time of entry into the study until death, loss to follow-up, or through December 2006. A subject was considered censored if alive by December 2006. Primary covariates: The primary covariates were medical comorbidity and psychiatric comorbidity both defined as the count of diseases for each subject throughout the study period. All comorbidities were dichotomized as present or absent where presence was determined by ICD-9 codes at entry into the cohort based on a previously validated algorithm in veterans [13]. Medical comorbidity variables included anemia, cancer, cardiovascular disease (CVD), cerebrovascular disease, congestive heart failure (CHF), fluid and electrolyte disorders, hypertension, hypothyroidism, liver disease, lung conditions (chronic pulmonary disease, pulmonary circulation disease), obesity, peripheral vascular disease, and other (acquired immunodeficiency syndrome–AIDS, rheumatoid arthritis, renal failure, peptic ulcer disease and bleeding, weight loss). Psychiatric comorbidities included psychoses, substance abuse (alcohol abuse, drug abuse) and depression [13]. Demographic variables: We controlled for seven demographic variables. Age was treated as continuous and centered at a mean of 66 years. Race/ethnicity included four categories with non-Hispanic white (NHW) serving as the reference group. Race/ethnicity was retrieved from the 2002 outpatient and inpatient [Medical SAS] data sets. 
When missing or unknown, the variable was supplemented using the inpatient race1-race6 fields from the 2003 [Medical SAS] data sets, the outpatient race1-race7 fields from the 2004 [Medical SAS] data sets, and the VA Vital Status Centers for Medicare and Medicaid Services (CMS) field for race. Gender, marital status, and location of residence (urban versus rural or highly rural) were dichotomous. Highly rural was categorized as rural according to the VA definition of rurality [14]. Percentage service-connectedness, representing the degree of disability due to illness or injury that was aggravated by or incurred in military service, was treated as dichotomous (1= > 50%, 0 = <50%). Region, which accounts for the five geographic regions of the country, was treated as a categorical variable: Northeast [VISNs 1, 2, 3], Mid-Atlantic [VISNs 4, 5, 6, 9, 10], South [VISNs 7, 8, 16, 17], Midwest [VISNs 11, 12, 15, 19, 23], and West [VISNs 18, 20, 21, 22] [15]. Statistical analysis: In preliminary analyses, crude associations were examined between mortality and all measured covariates using chi-square tests for categorical variables and t-tests for continuous variables. Cox regression methods were used to model the association between time to death and medical and psychiatric comorbidity after adjusting for known covariates. Time to death was defined as the number of months from time of entry into the cohort to time of death or censoring (i.e., day last seen or May 2006). For the Cox model, appropriateness of the assumption of proportionality was determined by testing the coefficients of the interactions of time with the respective covariate in multivariate analyses. Initially, Cox models for each of the medical and psychiatric comorbidities were fitted adjusting for all covariates (race, socio-demographics). 
Then an interaction between medical and psychiatric comorbidity was tested to check whether the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity. Because there was a significant interaction (p = 0.003), HR estimates of medical comorbidity for each level of psychiatric comorbidity, and estimates for levels of psychiatric comorbidity at each level of medical comorbidity, are reported. The Kaplan-Meier method was used to plot the survival functions for medical and psychiatric comorbidities separately. Residual analysis was used to assess goodness-of-fit of each of the models. All data analyses were conducted using SAS 9.3 [16]. Results and discussion: The study population consisted of 625,903 veterans with diabetes who were followed until death, loss to follow-up, or through December 2006. The mean age was 65.4 years (sd 11.1) with a range of 18–100 years. Those with diabetes and no comorbid diagnosis comprised 10.6% of the sample. Hypertension was the most common medical comorbidity (78.2%) and depression was the most common psychiatric comorbidity (12.7%). Over one fifth of the population (22.5%) had three or more medical comorbidities, and 3.2% had two or more psychiatric comorbidities. During the follow-up period, 21.9% of individuals in the cohort died. Table 1 shows the demographic and comorbidity profile of the study population. The average survival time of those who died was 1.97 years. About one quarter of deaths occurred in each of the first (25.8%) and second (26.2%) years of follow-up, with 9.1% occurring in the final year of follow-up. Demographic and Comorbidity Profile of Veterans* *All numbers represent percentages unless otherwise indicated. 
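As a methodological aside, the Kaplan-Meier product-limit estimator behind the survival curves described in the analysis is simple enough to sketch directly. The snippet below is an illustrative pure-Python version run on synthetic data; it is not the study's code (all of the paper's analyses were conducted in SAS 9.3), and the example numbers have no connection to the cohort.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times  -- follow-up time for each subject
    events -- 1 if the subject died at that time, 0 if censored
    Returns a list of (event time, survival probability) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)   # subjects still under observation
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        n = at_risk        # number at risk just before time t
        deaths = 0
        # Handle ties: group every subject sharing this time point.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:
            # Survival is multiplied by (1 - d/n) at each event time.
            survival *= 1 - deaths / n
            curve.append((t, survival))
    return curve

# Synthetic example: 6 subjects, deaths at t = 2 and t = 4, the rest censored.
print(kaplan_meier([1, 2, 3, 4, 5, 6], [0, 1, 0, 1, 0, 0]))
```

Censored subjects leave the risk set without triggering a drop in the curve, which is why the estimate steps down only at observed death times; stratified curves like those in Figures 1 and 2 are this estimator applied separately within each comorbidity stratum.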
The mortality risks associated with individual medical and psychiatric comorbidities after adjusting for demographic factors (age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region) are presented in Table 2. Of the medical comorbidities examined, congestive heart failure, present in 11.2% of the population, carried the greatest mortality risk: patients with congestive heart failure were almost twice as likely to die as those without (HR = 1.92). Lung and cerebrovascular disease, present in 13.8% and 11.4% of the population, respectively, also carried high mortality risk, with patients with lung disease having 42% greater risk of mortality than those without lung disease and patients with cerebrovascular disease having 39% greater risk than those without cerebrovascular disease. In contrast, hypertension and obesity each carried a lower mortality risk. Of the psychiatric comorbidities examined, depression, psychoses and substance abuse carried significant excess mortality risks of 5%, 16% and 50%, respectively. Unadjusted Kaplan-Meier survival curves presented in Figure 1 illustrate the effect of medical comorbidity burden on probability of survival stratified by number of psychiatric comorbidities (i.e., 0, 1 and 2). Within each stratum of psychiatric comorbidity burden, there was a clear graded relationship between number of medical comorbidities and probability of survival: probability of survival was highest in those with zero medical comorbidities and lowest in those with 3 or more medical comorbidities. In Figure 2, unadjusted Kaplan-Meier survival curves illustrate the effect of psychiatric comorbidity burden on probability of survival stratified by number of medical comorbidities (i.e., 0, 1, 2 and 3 or more). 
In contrast to the clearly separated survival curves across levels of medical comorbidity in Figure 1, Figure 2 illustrates a similar probability of survival across levels of psychiatric comorbidity burden in veterans with zero and 1 medical comorbidity. Moreover, while there was some differentiation in survival across levels of psychiatric comorbidity burden in veterans with 2 and 3 or more medical comorbidities, survival probability did not clearly decrease with increased psychiatric comorbidity burden. Hazard Ratio and 95% Confidence Interval Estimates for Each Disease Category* (ordered by decreasing level of risk) *Adjusted for sociodemographic characteristics including age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region. Unadjusted Kaplan-Meier Survival Curves for Medical Comorbidity at Each Level of Psychiatric Comorbidity. Unadjusted Kaplan-Meier Survival Curves for Psychiatric Comorbidity at Each Level of Medical Comorbidity. Table 3 shows the HR and corresponding 95% CI for the association between number of medical comorbidities (i.e., 0, 1, 2 and 3 or more) and mortality stratified by number of psychiatric comorbidities (i.e., 0, 1 and 2) after adjusting for demographic factors (age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region). The mortality hazards associated with having 2 or 3 or more medical comorbidities, as compared to no medical comorbidities, were 1.54 and 2.63, respectively, in veterans with diabetes without any psychiatric comorbidity. Similar patterns in mortality hazards were seen in veterans with a single psychiatric comorbidity among those with 2 or 3 or more medical comorbidities compared to those with no medical comorbidities (HRs of 1.51 and 2.66, respectively). 
In veterans with two psychiatric comorbidities, the mortality hazards associated with greater medical comorbidity were only slightly attenuated (HR 1.30 for 2 and HR 2.15 for 3 or more medical comorbidities, compared to no medical comorbidity). Hazard Ratios (HR)* and 95% Confidence Interval (CI) for Effect of Medical and Psychiatric Comorbidity Burden on Mortality among Veterans with Diabetes *Interaction terms were used to determine adjusted hazard ratios for the effect of medical comorbidity within strata of psychiatric comorbiditya as well as for the effect of psychiatric comorbidity within strata of medical comorbidityb. Adjusted for sociodemographic characteristics including age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region. Additionally, Table 3 shows the HR and corresponding 95% confidence intervals (CI) for the association between number of psychiatric comorbidities (i.e., 0, 1 and 2) and mortality stratified by number of medical comorbidities (i.e., 0, 1, 2 and 3 or more) after adjusting for demographic factors. The mortality hazard associated with having a single as compared to zero psychiatric comorbidities remained nearly constant across all medical comorbidity categories, with hazard ratios of 1.23 and 1.24 in veterans with zero and 3 or more medical comorbidities, respectively. In contrast, the mortality hazard associated with having two as compared to zero psychiatric comorbidities decreased from 1.69 in veterans with zero medical comorbidities to 1.38 in veterans with 3 or more medical comorbidities. Conclusions: The findings of this study clearly demonstrate that the total burden of medical and psychiatric multi-morbidity is a significant predictor of mortality among middle-aged and older adults (veterans) with type 2 diabetes, with a graded response as medical multimorbidity increases. 
The average survival time was about 2 years, with 52% of deaths occurring in the first 2 years of follow-up and 9% in the final year of follow-up. Among medical comorbidities, mortality risk was highest in those with congestive heart failure, lung disease and cerebrovascular disease, while among psychiatric comorbidities, mortality risk was highest in those with substance abuse, psychoses and depression. There was an interaction between medical and psychiatric comorbidity, indicating that the association between mortality and medical comorbidity was modified by the presence of psychiatric comorbidity, so stratified analyses were performed. HRs for the effect of 3+ medical comorbidities remained high across levels of psychiatric comorbidity (0, 1, 2+), while HRs for the effect of 2+ psychiatric comorbidities declined across levels of medical comorbidity (0, 1, 2, 3+). Interestingly, comorbid hypertension and obesity were associated with lower mortality risk in this sample of veterans with type 2 diabetes. Since more than two-thirds of obese veterans also have comorbid hypertension [17], it is likely that the reduced mortality risk associated with hypertension was also reflected among obese veterans. These findings may be explained by multiple interventions within the VA system from 2000–2010 (encompassing the time period of this study) that were successful in achieving blood pressure control among veterans, who are provided access to healthcare services and to a very affordable formulary of anti-hypertensive medications [18]. Furthermore, one should consider that this analysis examined data generated shortly after diabetes was designated as a CVD risk equivalent alongside tobacco smoking, hypertension, and hyperlipidemia [19]. 
In that context, it is likely that a strong awareness of CVD as a leading cause of death, growing evidence of behavioral management and treatment effectiveness from multiple large-scale RCTs (including the Hypertension Optimal Treatment (HOT) trial, the United Kingdom Prospective Diabetes Study (UKPDS), the Diabetes Prevention Program (DPP), Action to Control Cardiovascular Risk in Diabetes (ACCORD), and the Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)), and broad public health efforts led by guidelines from several national organizations (the American Heart Association (AHA), American Diabetes Association (ADA), and National Heart, Lung, and Blood Institute (NHLBI)) to aggressively manage CVD risk factors among people with diabetes not only reduced CVD mortality rates but also helped to reduce the impact of comorbid hypertension and obesity on mortality in people with diabetes. Medical comorbidities exerted the greatest influence on mortality and had significantly stronger effects on hazard of death than psychiatric comorbidities. In stratified analyses, the risk of death with increasing medical comorbidity remained high across each level of psychiatric comorbidity, with the greatest risk seen in individuals with 3+ medical comorbidities (HR range 2.15-2.66) compared to those with no medical comorbidity. In contrast, the risk of death with increasing psychiatric comorbidity was not consistently high across levels of medical comorbidity. Instead, the risk of death with 2+ psychiatric comorbidities declined as the number of medical comorbidities increased. In other words, among those with 2 or more psychiatric comorbidities, baseline risk (no medical comorbidity) was higher while the risk associated with additional medical comorbidity was lower. One explanation could be the increased care associated with multiple medical and psychiatric comorbidities. 
However, the mortality risk from psychiatric comorbidities remained significant, emphasizing the need for greater attention to identifying and treating psychiatric comorbidities rather than focusing on medical comorbidities alone. There is growing awareness of the burden of multi-morbidity and the need to develop new treatment paradigms that account for it. This is particularly important for certain populations, such as the elderly, veterans, and low-income and ethnic-minority adults with diabetes, where the prevalence of multi-morbidity is high and the risk of adverse outcomes increases exponentially with increasing levels of multi-morbidity [20-22]. The current approach to treatment for diabetes, focused on single or related disease (e.g., diabetes and CVD) management, may no longer be appropriate given emerging evidence of the high prevalence of multi-morbidity and its detrimental effects on health outcomes, including polypharmacy, increased morbidity, disability, health utilization/cost and mortality [5,7,23-26]. As the paradigm of care shifts to patient-centered care, strategies are needed to treat the individual as a whole and coordinate care while accounting for medical and psychiatric multi-morbidity. This will be particularly important in middle-aged and older adults, where our findings show that multi-morbidity is associated with significant increases in mortality. Newer evidence suggests that interventions linked to care delivery and focused on specific patient needs (e.g., medication management or functional status) are effective for improving outcomes among patients with multi-morbidity [7]. Our findings further highlight the need to integrate care for medical and psychiatric conditions and to address the fragmentation of health care that currently exists with separate coverage for medical and mental health conditions. 
Some of these strategies will require policy changes in how mental health services are covered and in reimbursement structures in primary care settings. Strengths of our study include the study population, its longitudinal design with 5 years of follow-up data, the extensive data available on comorbidities, and our ability to identify racial/ethnic group in over 90% of the cohort. To place this study in context, we report its limitations. First, the VA medical record either omits or fails to update many important social, environmental and behavioral variables, including socio-economic status, diet, physical activity, weight and alcohol consumption, as well as disease measures such as diabetes duration. Second, there is no way to account for the lag time between disease onset and diagnosis; thus, the timing of onset of comorbidity cannot be accurately determined in this retrospective cohort analysis. Finally, no measures of disease control were accounted for, so the findings are generalizable only to veterans with the diagnoses included in this study. Nevertheless, our findings are important and show that, in this national longitudinal cohort of veterans with diabetes followed over 5 years, multi-morbidity was associated with increased risk of death, with a graded increase in mortality risk with increasing numbers of medical and psychiatric comorbidities. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: LE conceived the study concept and design. MG and LE were responsible for acquisition of data. CL, LE, MG, YZ, and KH analyzed and interpreted data. CL, KH, and MG drafted the manuscript. Critical revision of the manuscript for important intellectual content was provided by CL, KH, and LE. Study supervision was provided by LE. All authors read and approved the final manuscript. 
Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6823/14/68/prepub
Background: Multi-morbidity, or the presence of multiple chronic diseases, is a major problem in clinical care and is associated with worse outcomes. Additionally, the presence of mental health conditions, such as depression and anxiety, has further negative impact on clinical outcomes. However, most health systems are generally configured for management of individual diseases instead of multi-morbidity. The study examined the prevalence and differential impact of medical and psychiatric multi-morbidity on risk of death in adults with diabetes. Methods: A national cohort of 625,903 veterans with type 2 diabetes was created by linking multiple patient and administrative files from 2002 through 2006. The main outcome was time to death. Primary independent variables were numbers of medical and psychiatric comorbidities over the study period. Covariates included age, gender, race/ethnicity, marital status, area of residence, service connection, and geographic region. Cox regression was used to model the association between time to death and multi-morbidity adjusting for relevant covariates. Results: Hypertension (78%) and depression (13%) were the most prevalent medical and psychiatric comorbidities, respectively; 23% had 3+ medical comorbidities, 3% had 2+ psychiatric comorbidities and 22% died. Among medical comorbidities, mortality risk was highest in those with congestive heart failure (hazard ratio, HR = 1.92; 95% CI 1.89-1.95), lung disease (HR = 1.42; 95% CI 1.40-1.44) and cerebrovascular disease (HR = 1.39; 95% CI 1.37-1.40). Among psychiatric comorbidities, mortality risk was highest in those with substance abuse (HR = 1.50; 95% CI 1.46-1.54), psychoses (HR = 1.16; 95% CI 1.14-1.19) and depression (HR = 1.05; 95% CI 1.03-1.07). There was an interaction between medical and psychiatric comorbidity (p = 0.003) so stratified analyses were performed. 
HRs for effect of 3+ medical comorbidity (2.63, 2.66, 2.15) remained high across levels of psychiatric comorbidities (0, 1, 2+), respectively. HRs for effect of 2+ psychiatric comorbidity (1.69, 1.63, 1.42, 1.38) declined across levels of medical comorbidity (0, 1, 2, 3+), respectively. Conclusions: Medical and psychiatric multi-morbidity are significant predictors of mortality among older adults (veterans) with type 2 diabetes with a graded response as multimorbidity increases.
Background: Multi-morbidity, or the presence of multiple chronic diseases, is a major problem in clinical care and is associated with worse outcomes [1-3]. People with multi-morbid conditions also have a higher medication burden [4] and, not unexpectedly, have worse medication adherence [5]. The prevalence of multi-morbidity is high (>50%) in middle-aged and older populations [6]. However, most health systems are generally configured for management of individual diseases instead of multi-morbidity. A recent review article suggests the number of studies testing interventions for patients with multi-morbid disease has increased, but the findings showed that organizational interventions with a broader focus and patient-specific interventions not linked to care delivery had little impact on health outcomes [7]. The presence of mental health conditions (e.g., depression, anxiety, etc.) in people with multi-morbidity has adverse impact on clinical outcomes [8,9]. A study of the impact of psychiatric conditions (depression/anxiety, substance abuse, psychotic or bipolar disorder) on mortality among individuals with diabetes indicated that alcohol and drug abuse/dependence was associated with a 22% higher mortality [10]. Chronic conditions like diabetes tend to occur in clusters (with other cardiovascular conditions like hypertension, heart disease and stroke) [11]. However, very little is known about the separate and combined impact of medical and psychiatric conditions on mortality and which has the strongest impact on risk of death in individuals with diabetes. Few studies have carefully assessed the impact of both medical and psychiatric multi-morbidity on mortality in middle-aged and older adults with diabetes. Older aged and elderly individuals have greater disease burden and are more likely to have multimorbidity and experience the adverse outcomes associated with multimorbidity. 
Because of the detailed clinical data available in the Veterans Health Administration (VHA) databases, the older age, and the ability to track patients over long periods of time, veterans are an ideal population to tease apart the impact of medical and psychiatric multi-morbidity on mortality. However, the focus of this study is to examine the impact of total burden of disease rather than specific comorbidities, so we used a national sample of veterans with type 2 diabetes followed over 5 years to examine the prevalence and differential impact of medical and psychiatric multi-morbidity on risk of death. 
whole_article_text_length: 6,209
whole_article_abstract_length: 490
num_sections: 12
[ "medical", "psychiatric", "comorbidity", "comorbidities", "veterans", "diabetes", "mortality", "disease", "data", "psychiatric comorbidity" ]
[ "test", "test" ]
[CONTENT] Diabetes | Comorbidity | Mortality | Elderly | Veterans [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cohort Studies | Comorbidity | Diabetes Mellitus, Type 2 | Female | Follow-Up Studies | Heart Failure | Humans | Hypertension | Male | Middle Aged | Prevalence | Prognosis | Psychotic Disorders | South Carolina | Survival Rate | Veterans | Young Adult [SUMMARY]
[CONTENT] impact | multi | morbidity | multi morbidity | conditions | impact medical psychiatric | impact medical | outcomes | older | health [SUMMARY]
[CONTENT] comorbidity | medical | data | psychiatric | veterans | diabetes | disease | visns | time | 2002 [SUMMARY]
null
[CONTENT] risk | comorbidities | psychiatric | medical | morbidity | diabetes | multi morbidity | multi | mortality | comorbidity [SUMMARY]
[CONTENT] medical | comorbidity | psychiatric | comorbidities | veterans | disease | diabetes | data | mortality | psychiatric comorbidity [SUMMARY]
[CONTENT] medical | comorbidity | psychiatric | comorbidities | veterans | disease | diabetes | data | mortality | psychiatric comorbidity [SUMMARY]
[CONTENT] ||| ||| ||| [SUMMARY]
[CONTENT] 625,903 | 2 | 2002 | 2006 ||| ||| ||| ||| [SUMMARY]
null
[CONTENT] 2 [SUMMARY]
[CONTENT] ||| ||| ||| ||| 625,903 | 2 | 2002 | 2006 ||| ||| ||| ||| ||| 78% | 13% | 23% | 3 | 3% | 2 | 22% ||| 1.92 | 95% | CI | 1.89 | Lung | 1.42 | 95% | CI | 1.40-1.44 | 1.39 | 95% | CI | 1.37 ||| 1.50 | 95% | CI | 1.46-1.54 | 1.16 | 95% | CI | 1.14 | 1.05 | 95% | CI | 1.03-1.07 ||| 0.003 ||| 3 | 2.63 | 2.66 | 2.15 | 1 | 2+ ||| 2+ | 1.69 | 1.63 | 1.42 | 1.38 | 1 | 2 | 3+ ||| 2 [SUMMARY]
[CONTENT] ||| ||| ||| ||| 625,903 | 2 | 2002 | 2006 ||| ||| ||| ||| ||| 78% | 13% | 23% | 3 | 3% | 2 | 22% ||| 1.92 | 95% | CI | 1.89 | Lung | 1.42 | 95% | CI | 1.40-1.44 | 1.39 | 95% | CI | 1.37 ||| 1.50 | 95% | CI | 1.46-1.54 | 1.16 | 95% | CI | 1.14 | 1.05 | 95% | CI | 1.03-1.07 ||| 0.003 ||| 3 | 2.63 | 2.66 | 2.15 | 1 | 2+ ||| 2+ | 1.69 | 1.63 | 1.42 | 1.38 | 1 | 2 | 3+ ||| 2 [SUMMARY]
Understanding the social context of fatal road traffic collisions among young people: a qualitative analysis of narrative text in coroners' records.
24460955
Deaths and injuries on the road remain a major cause of premature death among young people across the world. Routinely collected data usually focuses on the mechanism of road traffic collisions and basic demographic data of those involved. This study aimed to supplement these routine sources with a thematic analysis of narrative text contained in coroners' records, to explore the wider social context in which collisions occur.
BACKGROUND
Thematic analysis of narrative text from coroners' records for thirty-four fatalities among young people (16-24 year olds), resulting from thirty road traffic collisions in a rural county in the south of England over the period 2005-2010.
METHODS
Six key themes emerged: social driving, driving experience, interest in motor vehicles, driving behaviour, perception of driving ability, and emotional distress. Social driving (defined as a group of related behaviours including: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; driving where alcohol or drugs were a feature of the journey) was identified as a common feature across cases.
RESULTS
Analysis of the wider social context in which road traffic collisions involving young people occur can provide important information for understanding why collisions happen and for developing targeted interventions to prevent them. It can complement routinely collected data, which often focus on events immediately preceding a collision. Qualitative analysis of narrative text in coroners' records may provide a way of obtaining this type of information. These findings provide additional support for the case for Graduated Driver Licensing programmes to reduce collisions involving young people, and also suggest that road safety interventions need to take a more community development approach, recognising the importance of social context and focusing on the social networks of young people.
CONCLUSIONS
[ "Accidents, Traffic", "Adolescent", "Automobile Driving", "Coroners and Medical Examiners", "Female", "Humans", "Male", "Psychology", "Qualitative Research", "Social Behavior", "Stress, Psychological", "United Kingdom", "Young Adult" ]
3913375
Background
Globally, road traffic collisions (RTCs) are the leading cause of death in people aged 15–19, and the second highest cause of death in 20–25 year-olds [1]. In the UK around 300 young people aged 16–29 were killed when driving or riding vehicles, and over 4000 seriously injured during 2011 [2]. RTCs are of particular concern in rural areas in the UK, with the highest proportion of RTC-related fatalities and injuries occurring on rural roads [3]. In general, UK figures present a more favourable picture than many other countries; however, recent UK policy documents suggest that more work can be done to reduce road deaths and injuries [4]. Research suggests that there may be specific risk factors for RTCs which are unique to, or elevated in, young people compared with older adults. These include: limited driving experience; night-time driving; fatigue; particular risks for young men [5]. Other age-specific risk factors may include: personality characteristics; driving ability; demographic factors; perceived environment; driving environment; and developmental factors [6]. To develop effective prevention programmes the factors associated with RTCs must be identified and understood. In the UK, routine road traffic injury data are compiled via STATS19, a national database detailing the nature of a collision, the location, and a record of casualty involvement. These data are collected at the scene of the collision by an attending police officer. Although such quantitative data provide valuable intelligence for the monitoring and prevention of RTCs and related casualties, other data sources (including those facilitating qualitative analysis) may complement existing routine data, thus aiding action on road safety [7-9]. For instance, narrative text in particular can provide more detail on the events surrounding unintentional injury and death [8]. However, analysis of narrative text in the injury field has largely sought to quantify that data [8]. 
Qualitative methods, including thematic analysis of narrative text, offer opportunities for an in-depth examination of phenomena [10]. A qualitative approach can complement quantitative methods by taking into account the wider social context of the crash and examining the attitudes and experiences of those connected to the event.
Study aims
In other countries, most notably Australia, the value of coroners’ records for informing public health action is recognised, particularly relating to injury prevention [11,12]. Previous studies in England and Wales have examined coroners’ records for public health purposes, but the focus has most often been on suicide prevention rather than issues such as road traffic fatalities [13]. Coroners’ records contain a range of narrative text that is conducive to qualitative analysis, including witness statements, police reports and court transcripts.
Therefore, this study used a thematic analysis of narrative text, contained in coroners’ records of fatal RTCs among young people (aged 16–24) in a rural county in the south west of England, to explore whether these might complement existing data sources and help identify further areas for prevention.
Methods
Population
A rural county located in South West England was selected as the setting for this research, as the region has been identified as having a significantly high road fatality rate and local authorities wanted to conduct more research into this area [3,14]. Between 2005 and 2010 in the county, 35 young people aged 16–24 were killed in an RTC, either as a driver of a motorcar or rider of a motorcycle, or as a passenger alongside a person of that age.
Sample selection
All RTC-related fatalities among 16–24 year old drivers in the selected area occurring between 2005 and 2010 were eligible for inclusion. Passenger fatalities among 16–24 year olds were also eligible, provided the driver of the vehicle was also aged 16 to 24. The age range was selected in response to evidence that drivers aged 16–24 are greatly overrepresented in road death statistics when compared with other, more experienced drivers [15].
Data collection
Permission to view and extract data from the coroners’ records was sought and obtained from the coroner for the area.
Eligible cases were obtained during visits to HM Coroner, with data extraction taking place on site. A single coroner covered the whole county. Prior to the beginning of the study, a qualitative data extraction tool and framework was developed by the research team, following a review of the literature, and tested during a pilot data collection visit to HM Coroner. The tool consisted of a number of headings based on known risk factors identified by Shope [6] during a comprehensive review and synthesis of quantitative and qualitative research findings relating to RTCs among young drivers. Identified risk factors included: personality characteristics (e.g., level of aggression); developmental factors (e.g., sleep patterns); driving ability (e.g., driving skill); demographic factors; perceived environment (e.g., risk perception); and driving environment (e.g., weather conditions) [6]. The tool was piloted on seven of the RTCs included in this study. Following the guidelines for inductive qualitative research [16,17], this process allowed the researchers to assess whether the pre-specified headings were applicable to the available data, and to identify any additional areas which might provide in-depth, rich descriptions to aid our understanding of fatal RTCs among young people. Four additional headings were identified and added to the tool during the pilot visit: reason for travel; driving behaviour at the time of the collision; safety features of the vehicle; and vehicle condition. All data extracted at the pilot visit were reviewed in light of the amended data extraction tool during subsequent data collection. Records for each fatality, including police reports, witness statements and the inquest report pertaining to each RTC, were reviewed on site in the Coroner’s office over a four-day period. Qualitative data concerning each of the framework headings were extracted verbatim from each record directly into a computerised version of the data extraction tool. Demographic data and basic quantitative descriptive data about the RTC, including vehicle details, travel environment, and collision outcomes, were also collected from a combination of STATS19 data and information in the coroners’ records where available. Where multiple fatalities were recorded from one collision, data on each fatality were extracted. Descriptive information for each case was anonymised at the Coroner’s office. Qualitative data containing information that had the potential to identify a specific individual were anonymised further following discussion among the researchers (PP, EB, SG, ET, SW, MM).
Data analysis
Information recorded in the data extraction form for all cases was imported into NVivo (QSR International, version 9) verbatim. Each case was explored using Thematic Analysis (TA), a useful method for “identifying, analysing and reporting patterns within data” [18]. One researcher (EB) read through each case multiple times to aid familiarisation. The coding process was predominantly based upon the headings identified in the pre-defined data extraction tool. The data obtained from each case were independently reviewed by one researcher (EB), and the data included under each pre-defined heading of the extraction tool were analysed separately. Notably, the researcher was careful not to be restricted by these pre-defined concepts, allowing additional codes to emerge from the data inductively. Following this process, the researcher reflected on the codes and allowed wider themes to emerge, with clear examples of each theme taken from the data to illustrate and support the findings of the analysis. To confirm accuracy and interpretation of the data during the coding process and at theme development, findings were discussed and agreed between two researchers (EB and PP) and also among the project steering group (SG, ET, SW, MM).
Ethical considerations
The Ethics Committee of the University of the West of England declared that no ethical approval was required for this study. Given the highly sensitive nature of the research, care was taken to remove any identifiable content from the data.
RATS guidelines
The authors confirm that this study adheres to the RATS guidelines on qualitative research.
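The extraction framework described above (pre-defined headings, one free-text field per heading per case, later tallied across cases) can be sketched as a simple data structure. This is a minimal illustration only, not the authors' actual tool: the heading names follow Shope's risk-factor categories plus the four headings added at piloting, while the case identifiers and extract text are hypothetical.

```python
# Minimal sketch (not the authors' actual tool): a data-extraction
# framework whose headings follow Shope's risk factors plus the four
# headings added during the pilot visit. Case content is invented
# purely for illustration.

EXTRACTION_HEADINGS = [
    "personality characteristics",
    "developmental factors",
    "driving ability",
    "demographic factors",
    "perceived environment",
    "driving environment",
    # headings added after the pilot visit
    "reason for travel",
    "driving behaviour at the time of the collision",
    "safety features of the vehicle",
    "vehicle condition",
]

def new_case_record(case_id):
    """Blank record: one list of verbatim extracts per framework heading."""
    return {"case_id": case_id, **{h: [] for h in EXTRACTION_HEADINGS}}

def tally_headings(cases):
    """Count how many cases contain at least one extract under each heading."""
    return {h: sum(1 for c in cases if c[h]) for h in EXTRACTION_HEADINGS}

# Illustrative use with two invented cases
case1 = new_case_record("Case 1")
case1["reason for travel"].append("driving friends home from a party")
case2 = new_case_record("Case 2")
case2["reason for travel"].append("no specific destination")
case2["driving environment"].append("wet road, night-time")

counts = tally_headings([case1, case2])
print(counts["reason for travel"])    # 2
print(counts["driving environment"])  # 1
```

A structure like this mirrors how verbatim text under each pre-defined heading can be reviewed separately and then summarised across cases before thematic coding.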
Results
Sample characteristics are available in Table 1. Six key themes emerged. These were: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress. Sample characteristics (n = 34) Social driving A particular theme arising from the thematic analysis was what we have termed “social driving”. This term has been used to encompass a group of related behaviours, and included: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; driving where alcohol or drugs were a feature of the journey). The majority of young drivers were found to be engaged in social driving behaviour prior to, or at the time of, the collision. Many individuals were reportedly travelling with friends to or from a social event, and some of the young people were driving without a specific purpose or destination. Driving without a specific destination appeared to be a specific social activity, allowing the young people to meet up in their cars and socialise together, often occurring at night: “He had a hectic social life and almost seemed to be out of the house more than in… once he passed his driving test [Case 30] would travel around with his friends.” Case 30, Driver, Male “Usually every weekend I’ll get a call about 2 or 3 o’clock in the morning…saying “Can you come get me?”… So I get up and go and pick them up straight away.” Case 4, Driver, Male Nine cases of social driving were from cars which contained at least one passenger. In one case, a driver was transporting passengers home from a social event in the early hours of the morning. The passengers were in high spirits, and were reportedly distracting the driver throughout the journey. 
The police report stated: “…that they had been messing about in the back of the car, trying to put their hands over his eyes and slapping his face while he was driving.” Case 4, Driver, Male A particular theme arising from the thematic analysis was what we have termed “social driving”. This term has been used to encompass a group of related behaviours, and included: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; driving where alcohol or drugs were a feature of the journey). The majority of young drivers were found to be engaged in social driving behaviour prior to, or at the time of, the collision. Many individuals were reportedly travelling with friends to or from a social event, and some of the young people were driving without a specific purpose or destination. Driving without a specific destination appeared to be a specific social activity, allowing the young people to meet up in their cars and socialise together, often occurring at night: “He had a hectic social life and almost seemed to be out of the house more than in… once he passed his driving test [Case 30] would travel around with his friends.” Case 30, Driver, Male “Usually every weekend I’ll get a call about 2 or 3 o’clock in the morning…saying “Can you come get me?”… So I get up and go and pick them up straight away.” Case 4, Driver, Male Nine cases of social driving were from cars which contained at least one passenger. In one case, a driver was transporting passengers home from a social event in the early hours of the morning. The passengers were in high spirits, and were reportedly distracting the driver throughout the journey. 
The police report stated: “…that they had been messing about in the back of the car, trying to put their hands over his eyes and slapping his face while he was driving.” Case 4, Driver, Male Driving experience A number of cases involved people with very limited driving experience, with many passing their driving test within the year prior to their collision. In one case, the driver had only passed their test on the previous day. One police report stated: “The driver had limited experience as a driver, having passed his test three months prior to this incident…The driver was confronted with another driver, coupled with a situation that he reacted harshly to, which caused him to lose control of his vehicle.” Case 18, Driver, Male In contrast, some individuals were relatively experienced drivers given their age. In six cases, driving experience had been obtained through choice of career as a vehicle driver. One parent reported: “He had licences in CBT/ATV/HGV and trailer tractors as well as a car. He had a couple of specialist licences, such as forklift and cherry picker.” Case 3, Driver, Male Some drivers had limited experience with their vehicle. Over one quarter of cases had been driving the vehicle in which they had their collision for a short time only (weeks or months). In one case, the vehicle had been purchased on the day before their collision. In another case, the driver had recently bought a new vehicle with a large engine size. The driver’s friend commented: “Approximately 6 to 8 weeks ago [he] purchased a 2.2 litre… he only passed his driving test about 12 months ago.” Case 12, Driver, Male A number of cases involved people with very limited driving experience, with many passing their driving test within the year prior to their collision. In one case, the driver had only passed their test on the previous day. 
One police report stated: “The driver had limited experience as a driver, having passed his test three months prior to this incident…The driver was confronted with another driver, coupled with a situation that he reacted harshly to, which caused him to lose control of his vehicle.” Case 18, Driver, Male In contrast, some individuals were relatively experienced drivers given their age. In six cases, driving experience had been obtained through choice of career as a vehicle driver. One parent reported: “He had licences in CBT/ATV/HGV and trailer tractors as well as a car. He had a couple of specialist licences, such as forklift and cherry picker.” Case 3, Driver, Male Some drivers had limited experience with their vehicle. Over one quarter of cases had been driving the vehicle in which they had their collision for a short time only (weeks or months). In one case, the vehicle had been purchased on the day before their collision. In another case, the driver had recently bought a new vehicle with a large engine size. The driver’s friend commented: “Approximately 6 to 8 weeks ago [he] purchased a 2.2 litre… he only passed his driving test about 12 months ago.” Case 12, Driver, Male Interest in motor vehicles Many young people had a particular interest in motor vehicles. There were several examples when an individual reportedly spent time caring for their vehicle, making sure it was kept in a good quality condition, both aesthetically and mechanically. There were also reports of vehicle modification. In one case, a car had been modified from a 1400 cc engine to a 1600 cc engine. In one statement, a colleague of the deceased reported: “He always talked about his car and how he loved to drive it” Case 12, Driver, Male In another example, a Mother stated: “It has surprised me that [Case 15] has died as a result of a car accident as she would not wish to damage her car in any way. Her car was her pride and joy. 
She was that particular she would even pick parking spaces away from other vehicles so that it wouldn’t get marked or damaged.” Case 15, Driver, Female Many young people had a particular interest in motor vehicles. There were several examples when an individual reportedly spent time caring for their vehicle, making sure it was kept in a good quality condition, both aesthetically and mechanically. There were also reports of vehicle modification. In one case, a car had been modified from a 1400 cc engine to a 1600 cc engine. In one statement, a colleague of the deceased reported: “He always talked about his car and how he loved to drive it” Case 12, Driver, Male In another example, a Mother stated: “It has surprised me that [Case 15] has died as a result of a car accident as she would not wish to damage her car in any way. Her car was her pride and joy. She was that particular she would even pick parking spaces away from other vehicles so that it wouldn’t get marked or damaged.” Case 15, Driver, Female Risky driving behaviour Dangerous driving behaviour was identified in the majority of cases. This included: driving at speed, tailgating, racing friends and undertaking hazardous overtaking manoeuvres. Notably, excessive speed was referred to in nearly all cases. In one statement, a witness commented: “My first impression was that it was travelling far too fast to negotiate the bend safely....I could see his hands turning the steering wheel to his right in a large movement, his whole body movement and body language gave me the impression of panic.” Case 28, Driver, Male In three cases, it was noted that male driving style may be affected by the presence of other males in the car, causing them to behave differently to normal. In one case, a female friend commented: “He always drove safely with me in the car. [Case 18] had mentioned that he drove faster when he had the lads in the car. 
I had not experienced him driving excessively fast myself.” Case 18, Driver, Male Dangerous driving behaviour was identified in the majority of cases. This included: driving at speed, tailgating, racing friends and undertaking hazardous overtaking manoeuvres. Notably, excessive speed was referred to in nearly all cases. In one statement, a witness commented: “My first impression was that it was travelling far too fast to negotiate the bend safely....I could see his hands turning the steering wheel to his right in a large movement, his whole body movement and body language gave me the impression of panic.” Case 28, Driver, Male In three cases, it was noted that male driving style may be affected by the presence of other males in the car, causing them to behave differently to normal. In one case, a female friend commented: “He always drove safely with me in the car. [Case 18] had mentioned that he drove faster when he had the lads in the car. I had not experienced him driving excessively fast myself.” Case 18, Driver, Male Perception of driving ability There was a sense of overconfidence and an inflated view of driving skills and ability among many cases. One case concerned two cars that were involved in a race. On interviewing the male driver of the second car, who was unharmed, the police reported: “… He agreed that he had been driving 2 to 3 car lengths behind [Case 15] at approximately 80 mph. He did not consider this to be an unsafe following distance.” Case 15, Driver, Female In another example, a rear-seat passenger stated: “I also saw at least one large arrow shape, indicated to our left. I knew this to mean that we should stay on our own side of the road. [Front seat passenger] was shouting “[Case25], what are you doing you are not going to make that” or similar. I became aware that we were now on the offside of the road…As this was happening I heard [Front seat passenger] shout a second time. 
This sounded much more urgent than before as he said, “we’re not going to make that”.” Case 25, Driver, Male The perception of driving ability was linked to a lack of adherence to safety regulations. Although the majority of young people observed safety regulations, the decision not to wear a seatbelt or helmet suggests that such drivers were confident in their ability to drive without incident. In one case, despite being involved in a collision in the week leading up to his death, the driver chose not to wear a seatbelt, as described by a friend: “I can say that [Case 13] was not in the habit of wearing a seatbelt as he found it too restrictive. I had asked him if he was wearing one when he hit the van [referring to prior collision]. He said he had not but had been able to brace himself on that occasion against the steering wheel.” Case 13, Driver, Male There were many instances where statements contained others’ perceptions of the driver’s driving ability. People’s perceptions of the deceased’s driving ability were shown to differ; in some statements, people were highly complementary, while other statements highlighted a less favourable view, and in some cases were visibly critical of the deceased’s driving ability. The majority of statements presented a positive perception of the driving ability of the deceased. This is interesting, given the knowledge that many collisions were associated with risky or dangerous driving behaviour. The partner of one individual reported: “[Case 24] was an excellent driver especially considering that he was still young and didn’t have a lot of experience. He never drove fast with me or [their baby] in the car and certainly wouldn’t do if the roads were potentially risky. He constantly talked about other accidents he’d seen to and from work, which always reassured me that he’d drive safely.” Case 24, Driver, Male However, other statements were not so positive. 
One father reported of his son: “He was in my eyes a typical young driver. He had a few bumps and things. I would say he was a confident driver but at times over confident. He sometimes drove and I would say stop, drop me off. I think his driving just needed maturity.” Case 21, Driver, Male Emotional distress In our study emotional distress immediately prior to the RTC was identified in over one quarter of the cases. This mainly referred to family or personal relationship problems, or financial difficulties, issues which may have distracted the driver from their driving role. In one case, a driver was struggling to cope with relationship and financial demands.
As one police report stated: “It is highly likely on the evidence available that [Case 24] was in a hurry to get home…it is possible there were other things on his mind, for example the issue of the rent on his flat.” Case 24, Driver, Male A further statement, provided by the mother of the deceased reported: “[Case 24’s]…relationship with… [His girlfriend] was very rocky throughout the time they were together. On the Sunday before the accident he was at our house with all of his stuff saying that he’d had enough of his relationship with… [his girlfriend] and wanted to return home.” Case 24, Driver, Male Relationship difficulties were identified from a further 6 cases. In one passenger fatality a police report stated: “[The driver] is married and [Case 9] has recently separated from her long term boyfriend. [The driver] and [Case 9] have been involved in a relationship for some weeks preceding the collision. [The driver] had informed his wife of his affair on [the day prior to the collision] when his wife asked him to leave their marital home.” Case 9, Passenger, Female
Conclusions
This study used qualitative thematic analysis of narrative text in coroners’ records to examine the social context of fatal road traffic collisions in young people. It found themes consistent with other frameworks [6], but used the richness of the qualitative data to examine in greater depth the previously identified risk factors. These in-depth findings provide additional support for the case for Graduated Driver Licensing programmes to reduce collisions involving young people, and also suggest that road safety interventions need to take a more community development approach, recognising the importance of social context and focusing on social networks of young people. Further research in the UK and other countries could replicate this approach, to build on the work presented here and the pre-existing frameworks.
Background
Globally, road traffic collisions (RTCs) are the leading cause of death in people aged 15–19, and the second highest cause of death in 20–25 year-olds [1]. In the UK around 300 young people aged 16–29 were killed when driving or riding vehicles, and over 4,000 were seriously injured during 2011 [2]. RTCs are of particular concern in rural areas in the UK, with the highest proportion of RTC-related fatalities and injuries occurring on rural roads [3]. In general, UK figures present a more favourable picture than many other countries; however, recent UK policy documents suggest that more can be done to reduce road deaths and injuries [4].

Research suggests that there may be specific risk factors for RTCs which are unique to, or elevated in, young people compared with older adults. These include: limited driving experience; night-time driving; fatigue; and particular risks for young men [5]. Other age-specific risk factors may include: personality characteristics; driving ability; demographic factors; perceived environment; driving environment; and developmental factors [6].

To develop effective prevention programmes the factors associated with RTCs must be identified and understood. In the UK, routine road traffic injury data are compiled via STATS19, a national database detailing the nature of a collision, the location, and a record of casualty involvement. These data are collected at the scene of the collision by an attending police officer. Although such quantitative data provide valuable intelligence for the monitoring and prevention of RTCs and related casualties, other data sources (including those facilitating qualitative analysis) may complement existing routine data, thus aiding action on road safety [7-9]. For instance, narrative text in particular can provide more detail on the events surrounding unintentional injury and death [8]. However, analysis of narrative text in the injury field has largely sought to quantify that data [8]. Qualitative methods, including thematic analysis of narrative text, offer opportunities for an in-depth examination of phenomena [10]. A qualitative approach can complement quantitative methods by taking into account the wider social context of the crash and examining the attitudes and experiences of those connected to the event.

Study aims
In other countries, most notably Australia, the value of coroners’ records for informing public health action is recognised, particularly relating to injury prevention [11,12]. Previous studies in England and Wales have examined coroners’ records for public health purposes, but the focus has most often been on suicide prevention rather than issues such as road traffic fatalities [13]. Coroners’ records contain a range of narrative text that is conducive to qualitative analysis, including witness statements, police reports and court transcripts. Therefore, this study used a thematic analysis of narrative text, contained in coroners’ records of fatal RTCs among young people (aged 16–24) in a rural county in the south west of England, to explore if these might complement existing data sources and help identify further areas for prevention.

Population
A rural county located in South West England was selected as the setting for this research as the region has been identified as having a significantly high road fatality rate and local authorities wanted to conduct more research into this area [3,14]. Between 2005 and 2010 in the county, 35 young people aged 16–24 were killed in an RTC, either as a driver of a motorcar or rider of a motorcycle, or as a passenger alongside a person of that age.

Sample selection
All RTC-related fatalities among 16–24 year old drivers in the selected area occurring between 2005 and 2010 were eligible for inclusion. Passenger fatalities among 16–24 year olds were also eligible, provided the driver of the vehicle was also aged 16 to 24. The age range was selected in response to evidence that drivers aged 16–24 are greatly overrepresented in road death statistics when compared with other, more experienced drivers [15].

Data collection
Permission to view and extract data from the coroners’ records was sought and obtained from the coroner for the area. Eligible cases were obtained during visits to HM Coroner, with data extraction taking place on site. A single coroner covered the whole county. Prior to the beginning of the study a qualitative data extraction tool and framework was developed by the research team, following a review of the literature, and tested during a pilot data collection visit to HM Coroner. The tool consisted of a number of headings based on known risk factors identified by Shope [6] during a comprehensive review and synthesis of quantitative and qualitative research findings relating to RTCs among young drivers. Identified risk factors included: personality characteristics (e.g., level of aggression); developmental factors (e.g., sleep patterns); driving ability (e.g., driving skill); demographic factors; perceived environment (e.g., risk perception); and driving environment (e.g., weather conditions) [6]. The tool was piloted on seven of the RTCs included in this study. Following the guidelines for inductive qualitative research [16,17], this process allowed the researchers to assess whether the pre-specified headings were applicable to the available data, and to identify any additional areas which might provide in-depth, rich descriptions to aid our understanding of fatal RTCs among young people. Four additional headings were identified and added to the tool during the pilot visit: reason for travel; driving behaviour at the time of the collision; safety features of the vehicle; and vehicle condition. All data extracted at the pilot visit were reviewed in light of the amended data extraction tool during subsequent data collection. Records for each fatality, including police reports, witness statements and the inquest report pertaining to each RTC, were reviewed on site in the Coroner’s office over a four-day period. Qualitative data concerning each of the framework headings were extracted verbatim from each record directly into a computerised version of the data extraction tool. Demographic data and basic quantitative descriptive data about the RTC, including vehicle details, travel environment and collision outcomes, were also collected from a combination of STATS19 data and information from sources in the coroners’ records where available. Where multiple fatalities were recorded from one collision, data on each fatality were extracted. Descriptive information for each case was anonymised at the Coroner’s office. Qualitative data containing information that had the potential to identify a specific individual were anonymised further following discussion among researchers (PP, EB, SG, ET, SW, MM).

Data analysis
Information recorded in the data extraction form for all cases was imported into NVivo (QSR International 9) verbatim. Each case was explored using Thematic Analysis (TA), a useful method for “identifying, analysing and reporting patterns within data” [18]. One researcher (EB) read through each case multiple times to aid familiarisation. The coding process was predominantly based upon the headings identified in the pre-defined data extraction tool. The data obtained from each case were independently reviewed by one researcher (EB) and the data included under each pre-defined heading of the extraction tool were analysed separately. Notably, the researcher was careful not to be restricted by these pre-defined concepts, allowing additional codes to emerge from the data inductively. Following this process, the researcher reflected on the codes and allowed wider themes to emerge, with clear examples of each theme taken from the data to illustrate and support the findings of the analysis. To confirm accuracy and interpretation of the data during the coding process and at theme development, findings were discussed and agreed between two researchers (EB and PP) and also among the project steering group (SG, ET, SW, MM).

Ethical considerations
The Ethics Committee of the University of the West of England declared that no ethical approval was required for this study. Given the highly sensitive nature of the research, care was taken to remove any identifiable content from the data.

RATS guidelines
The authors confirm that this study adheres to the RATS guidelines on qualitative research.

Social driving
A particular theme arising from the thematic analysis was what we have termed “social driving”. This term encompasses a group of related behaviours, including: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; and driving where alcohol or drugs were a feature of the journey.

The majority of young drivers were found to be engaged in social driving behaviour prior to, or at the time of, the collision. Many individuals were reportedly travelling with friends to or from a social event, and some of the young people were driving without a specific purpose or destination. Driving without a specific destination appeared to be a specific social activity, allowing the young people to meet up in their cars and socialise together, often occurring at night:

“He had a hectic social life and almost seemed to be out of the house more than in… once he passed his driving test [Case 30] would travel around with his friends.”
Case 30, Driver, Male

“Usually every weekend I’ll get a call about 2 or 3 o’clock in the morning…saying “Can you come get me?”… So I get up and go and pick them up straight away.”
Case 4, Driver, Male

Nine cases of social driving were from cars which contained at least one passenger. In one case, a driver was transporting passengers home from a social event in the early hours of the morning. The passengers were in high spirits, and were reportedly distracting the driver throughout the journey. The police report stated:

“…that they had been messing about in the back of the car, trying to put their hands over his eyes and slapping his face while he was driving.”
Case 4, Driver, Male

Driving experience
A number of cases involved people with very limited driving experience, with many having passed their driving test within the year prior to their collision. In one case, the driver had passed their test only the previous day. One police report stated:

“The driver had limited experience as a driver, having passed his test three months prior to this incident…The driver was confronted with another driver, coupled with a situation that he reacted harshly to, which caused him to lose control of his vehicle.”
Case 18, Driver, Male

In contrast, some individuals were relatively experienced drivers given their age. In six cases, driving experience had been obtained through choice of career as a vehicle driver. One parent reported:

“He had licences in CBT/ATV/HGV and trailer tractors as well as a car. He had a couple of specialist licences, such as forklift and cherry picker.”
Case 3, Driver, Male

Some drivers had limited experience with their vehicle. Over one quarter of cases had been driving the vehicle in which they had their collision for only a short time (weeks or months). In one case, the vehicle had been purchased the day before the collision. In another case, the driver had recently bought a new vehicle with a large engine size. The driver’s friend commented:

“Approximately 6 to 8 weeks ago [he] purchased a 2.2 litre… he only passed his driving test about 12 months ago.”
Case 12, Driver, Male

Interest in motor vehicles
Many young people had a particular interest in motor vehicles. There were several examples where an individual reportedly spent time caring for their vehicle, making sure it was kept in good condition, both aesthetically and mechanically. There were also reports of vehicle modification. In one case, a car had been modified from a 1400 cc engine to a 1600 cc engine. In one statement, a colleague of the deceased reported:

“He always talked about his car and how he loved to drive it”
Case 12, Driver, Male

In another example, a mother stated:

“It has surprised me that [Case 15] has died as a result of a car accident as she would not wish to damage her car in any way. Her car was her pride and joy. She was that particular she would even pick parking spaces away from other vehicles so that it wouldn’t get marked or damaged.”
Case 15, Driver, Female

Main findings
Thematic analysis identified six themes: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress. Notably, multiple themes were identified from each case, indicating that there may be numerous factors which influence the causes and characteristics of RTCs among young people. The findings of the present study were consistent with previous research identifying risk factors concerning RTCs involving young people [1,6,19-22].

The high prevalence of social driving among young people is likely to relate to the rural setting where the study took place, and may not be replicated in urban areas. Unlike urban dwellers, people who live in a rural area often have limited access to public transport and therefore need a car for travel. For young people in particular, transport is required for school, work and maintaining a social life [23]. In this study, social driving behaviours referred to: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; or driving where alcohol or drugs were present. The numerous attributes associated with social driving behaviour highlight the difficulties in understanding the impact of different factors leading up to a fatal RTC. The association between driving and the social environment has received some research attention in recent years; for example, the influence of passengers upon driving ability [6]. However, there has been less consideration of “social driving” as a driving ‘culture’ among young adults [24], as observed in the present study.

Risky driving behaviour was another regular feature of RTCs among young people in the study area, a finding consistent with previous research [20,25,26]. Risky driving behaviour is often associated with driving inexperience; young people have been shown to underestimate risk and overestimate their driving ability [27,28]. Although the acquisition of driving knowledge and skills is important if competent driving ability is to be achieved, it is also essential to obtain driving experience through practice [6].

Emotional distress at the time of, or leading up to, the collision was identified in numerous cases. Previous studies investigating the causes and characteristics of RTCs have indicated that psychological factors may influence driving behaviour in young people [6,29-31]. The results from previous research imply that interventions could be targeted towards specific ‘at-risk’ groups, or possibly that emotional distress could be included in general road safety awareness campaigns; for example, warning people about the dangers of driving when emotionally distracted. It was notable that the mental distress identified in the current study was often triggered by a recent traumatic event.

Implications of the findings
The finding that groups of young people engage in social driving behaviour, and the existence of large social networks of young drivers in and around the rural market towns, lends support to the development of interventions that target these groups using a multi-strategy approach in their specific social context. There is evidence that such community-based interventions have the potential to reduce child and adolescent unintentional RTC injuries [32]. They can include interventions such as education, legislation, and social and environmental interventions [33] in an attempt to change community norms and behaviour. Given the limited evidence of effectiveness of educational training alone [24], and that education-based training interventions have also been criticised for failing to address the social and lifestyle factors that are associated with risky driving behaviour among young people [34], there would seem to be a need to develop novel community-based interventions targeted at a specific group of young drivers.

The findings of this study also lend support to prevention programmes such as Graduated Driver Licensing (GDL), which have been effective in reducing young driver collision rates in parts of the USA, Canada, Australia and New Zealand [1,35,36]. GDL schemes involve restrictions for newly qualified drivers which may include: limitations on accompanying passengers, restrictions on night-time driving, and lower thresholds for blood alcohol concentration. In the UK it is predicted that such a programme (night-time restrictions between 9 pm and 6 am, and no 15–24 year old passengers) could save more than 114 lives and result in 872 fewer serious injuries each year among younger drivers [37]. In the current study, if night-time restrictions had been imposed at the time of these collisions, it is estimated that 27% of casualties may have been prevented. The ability of GDL to disrupt the social networks identified in this research is also significant. However, given the rural nature of the study area, legislation restricting night-time and passenger journeys may be difficult to enforce and may also result in young people becoming socially isolated and unable to participate in school or work activities [38]. An ideal GDL approach would minimise such costs, while maximising the potentially significant benefits in terms of preventing deaths and injuries on the road.

Parental supervision of young drivers could provide an alternative approach to restricting driving through legislation, as parents may have the ability to moderate their child’s driving behaviour and may contribute to decisions or funding about vehicle purchase [39]. However, as observed in the present study, parents often perceived their child to be a competent, responsible driver, despite many of the collisions being associated with risky driving behaviour. Further, while parents appear to be aware of the risks associated with new drivers, they are also keen to encourage independence in their children and reduce the need to transport them to various events [39]. A small number of interventions have investigated the impact of parental supervision, with somewhat mixed overall findings [40,41]. Further exploration of interventions involving parental supervision would therefore be desirable, but at present they do not appear to be an adequate substitute for GDL.

Strengths and limitations
To our knowledge, this is the first study to undertake a detailed qualitative thematic analysis of narrative text contained in coroners’ records of fatal RTCs among young people. The records provided a comprehensive report on each collision, and the majority of case files contained collision information that went beyond consideration of the physical crash. In some post-collision interviews with family or friends of the deceased, the wider social context of the collision was explored, with inclusion of, for example, details of family circumstances. The records therefore complemented routinely collected STATS19 data, which focus primarily on physical crash conditions and acute actions immediately before the collision.

Although the number of cases was relatively small, we feel that we achieved saturation for identification of themes, and the qualitative nature of the study allowed for a thorough, in-depth analysis of cases in a way that is not routine in road fatality analysis. Comparison with existing theoretical models and literature on risk factors for young people allowed us to locate our findings within a wider UK and international context, adding to the existing evidence base.

The study underlines the important role that in-depth data sources such as coroners’ records can play in informing prevention efforts. Countries should facilitate easier access to such data, as has happened in Australia with the development of the National Coroners Information System [42]. The current system of multiple autonomous coroners leads to under-utilisation of coroners’ data in the UK and elsewhere, representing a missed opportunity for public health [43].

Competing interests
The authors declare that they have no competing interests.

Authors’ contributions
The study was conceived by PP. PP and EB were responsible for data collection. PP and EB analysed the data and discussed the findings with SG, ET, SW, and MM. PP drafted the first version of the manuscript. All authors provided critical edits and revisions to the manuscript, and reviewed and approved the final version.

Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/14/78/prepub
[ "Background", "Study aims", "Methods", "Population", "Sample selection", "Data collection", "Data analysis", "Ethical considerations", "RATS Guidelines", "Results", "Social driving", "Driving experience", "Interest in motor vehicles", "Risky driving behaviour", "Perception of driving ability", "Emotional distress", "Discussion", "Main findings", "Implications of the findings", "Strengths and limitations", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Globally, road traffic collisions (RTCs) are the leading cause of death in people aged 15–19, and the second highest cause of death in 20–25 year-olds [1]. In the UK around 300 young people aged 16–29 were killed when driving or riding vehicles, and over 4000 seriously injured during 2011 [2]. RTCs are of particular concern in rural areas in the UK, with the highest proportion of RTC-related fatalities and injuries occurring on rural roads [3]. In general, UK figures present a more favourable picture than many other countries; however, recent UK policy documents suggest that more work can be done to reduce road deaths and injuries [4].\nResearch suggests that there may be specific risk factors for RTCs which are unique to, or elevated in, young people compared with older adults. These include: limited driving experience; night-time driving; fatigue; particular risks for young men [5]. Other age-specific risk factors may include: personality characteristics; driving ability; demographic factors; perceived environment; driving environment; and developmental factors [6].\nTo develop effective prevention programmes the factors associated with RTCs must be identified and understood. In the UK, routine road traffic injury data are compiled via STATS19, a national database detailing the nature of a collision, the location, and a record of casualty involvement. These data are collected at the scene of the collision by an attending police officer. Although such quantitative data provide valuable intelligence for the monitoring and prevention of RTCs and related casualties, other data sources (including those facilitating qualitative analysis) may complement existing routine data, thus aiding action on road safety [7-9]. For instance, narrative text in particular can provide more detail on the events surrounding unintentional injury and death [8]. However, analysis of narrative text in the injury field has largely sought to quantify that data [8]. 
Qualitative methods, including thematic analysis of narrative text, offer opportunities for an in-depth examination of phenomena [10]. A qualitative approach can complement quantitative methods by taking into account the wider social context of the crash and examining the attitudes and experiences of those connected to the event.\n Study aims In other countries, most notably Australia, the value of coroners’ records for informing public health action is recognised, particularly relating to injury prevention [11,12]. Previous studies in England and Wales have examined coroners’ records for public health purposes, but the focus has most often been on suicide prevention rather than issues such as road traffic fatalities [13]. Coroners’ records contain a range of narrative text that is conducive to qualitative analysis, including witness statements, police reports and court transcripts. Therefore, this study used a thematic analysis of narrative text contained in coroners’ records of fatal RTCs among young people (aged 16–24) in a rural county in the south west of England, to explore whether these might complement existing data sources and help identify further areas for prevention.", "In other countries, most notably Australia, the value of coroners’ records for informing public health action is recognised, particularly relating to injury prevention [11,12]. Previous studies in England and Wales have examined coroners’ records for public health purposes, but the focus has most often been on suicide prevention rather than issues such as road traffic fatalities [13]. Coroners’ records contain a range of narrative text that is conducive to qualitative analysis, including witness statements, police reports and court transcripts. Therefore, this study used a thematic analysis of narrative text contained in coroners’ records of fatal RTCs among young people (aged 16–24) in a rural county in the south west of England, to explore whether these might complement existing data sources and help identify further areas for prevention.", " Population A rural county located in South West England was selected as the setting for this research, as the region has been identified as having a significantly high road fatality rate and local authorities wanted to conduct more research into this area [3,14]. Between 2005 and 2010 in the county, 35 young people aged 16–24 were killed in an RTC, either as a driver of a motorcar or rider of a motorcycle, or as a passenger alongside a person of that age.\n Sample selection All RTC-related fatalities among 16–24 year old drivers in the selected area occurring between 2005 and 2010 were eligible for inclusion. Passenger fatalities among 16–24 year olds were also eligible, provided the driver of the vehicle was also aged 16 to 24. The age range was selected in response to evidence that drivers aged 16–24 are greatly overrepresented in road death statistics when compared with other, more experienced drivers [15].\n Data collection Permission to view and extract data from the coroners’ records was sought and obtained from the coroner for the area. Eligible cases were obtained during visits to HM Coroner, with data extraction taking place on site. A single coroner covered the whole county. Prior to the beginning of the study, a qualitative data extraction tool and framework was developed by the research team, following a review of the literature, and tested during a pilot data collection visit to HM Coroner. The tool consisted of a number of headings based on known risk factors identified by Shope [6] during a comprehensive review and synthesis of quantitative and qualitative research findings relating to RTCs among young drivers. 
Identified risk factors included: personality characteristics (e.g., level of aggression); developmental factors (e.g., sleep patterns); driving ability (e.g., driving skill); demographic factors; perceived environment (e.g., risk perception); and, driving environment (e.g., weather conditions) [6]. The tool was piloted on seven of the RTCs included in this study. Following the guidelines for inductive qualitative research [16,17], this process allowed the researchers to assess whether the pre-specified headings were applicable to the available data, and to identify any additional areas which may provide in-depth rich descriptions which may aid our understanding of fatal RTCs among young people. Four additional headings were identified and added to the tool during the pilot visit: reason for travel; driving behaviour at the time of the collision; safety features of the vehicle; and, vehicle condition. All data extracted at the pilot visit were reviewed in light of the amended data extraction tool during subsequent data collection. Records for each fatality, including police reports, witness statements and the inquest report pertaining to each RTC were reviewed on site in the Coroner’s office over a four day period. Qualitative data concerning each of the framework headings were extracted verbatim from each record directly into a computerised version of the data extraction tool. Demographic data and basic quantitative descriptive data about the RTC, including vehicle details, travel environment, and collision outcomes were also collected from a combination of STATS19 data and information collected from sources in the coroners’ records where available. Where multiple fatalities were recorded from one collision, data on each fatality were extracted. Descriptive information for each case was anonymised at the Coroner’s office. 
Qualitative data containing information that had the potential to identify a specific individual were anonymised further following discussion among the researchers (PP, EB, SG, ET, SW, MM).\n Data analysis Information recorded in the data extraction form for all cases was imported into NVivo (QSR International 9) verbatim. Each case was explored using Thematic Analysis (TA), a useful method for “identifying, analysing and reporting patterns within data” [18]. One researcher (EB) read through each case multiple times to aid familiarisation. The coding process was predominantly based upon the headings identified in the pre-defined data extraction tool. The data obtained from each case were independently reviewed by one researcher (EB), and the data included under each pre-defined heading of the extraction tool were analysed separately. Notably, the researcher was careful not to be restricted by these pre-defined concepts, allowing additional codes to emerge from the data inductively. Following this process, the researcher reflected on the codes and allowed wider themes to emerge, with clear examples of each theme taken from the data to illustrate and support the findings of the analysis. To confirm accuracy and interpretation of the data during the coding process and at theme development, findings were discussed and agreed between two researchers (EB and PP) and also among the project steering group (SG, ET, SW, MM).\n Ethical considerations The Ethics Committee of the University of the West of England declared that no ethical approval was required for this study. Given the highly sensitive nature of the research, care was taken to remove any identifiable content from the data.\n RATS Guidelines The authors confirm that this study adheres to the RATS guidelines on qualitative research.", "A rural county located in South West England was selected as the setting for this research, as the region has been identified as having a significantly high road fatality rate and local authorities wanted to conduct more research into this area [3,14]. Between 2005 and 2010 in the county, 35 young people aged 16–24 were killed in an RTC, either as a driver of a motorcar or rider of a motorcycle, or as a passenger alongside a person of that age.", "All RTC-related fatalities among 16–24 year old drivers in the selected area occurring between 2005 and 2010 were eligible for inclusion. Passenger fatalities among 16–24 year olds were also eligible, provided the driver of the vehicle was also aged 16 to 24. The age range was selected in response to evidence that drivers aged 16–24 are greatly overrepresented in road death statistics when compared with other, more experienced drivers [15].", "Permission to view and extract data from the coroners’ records was sought and obtained from the coroner for the area. Eligible cases were obtained during visits to HM Coroner, with data extraction taking place on site. A single coroner covered the whole county. Prior to the beginning of the study, a qualitative data extraction tool and framework was developed by the research team, following a review of the literature, and tested during a pilot data collection visit to HM Coroner. 
The tool consisted of a number of headings based on known risk factors identified by Shope [6] during a comprehensive review and synthesis of quantitative and qualitative research findings relating to RTCs among young drivers. Identified risk factors included: personality characteristics (e.g., level of aggression); developmental factors (e.g., sleep patterns); driving ability (e.g., driving skill); demographic factors; perceived environment (e.g., risk perception); and, driving environment (e.g., weather conditions) [6]. The tool was piloted on seven of the RTCs included in this study. Following the guidelines for inductive qualitative research [16,17], this process allowed the researchers to assess whether the pre-specified headings were applicable to the available data, and to identify any additional areas which may provide in-depth rich descriptions which may aid our understanding of fatal RTCs among young people. Four additional headings were identified and added to the tool during the pilot visit: reason for travel; driving behaviour at the time of the collision; safety features of the vehicle; and, vehicle condition. All data extracted at the pilot visit were reviewed in light of the amended data extraction tool during subsequent data collection. Records for each fatality, including police reports, witness statements and the inquest report pertaining to each RTC were reviewed on site in the Coroner’s office over a four day period. Qualitative data concerning each of the framework headings were extracted verbatim from each record directly into a computerised version of the data extraction tool. Demographic data and basic quantitative descriptive data about the RTC, including vehicle details, travel environment, and collision outcomes were also collected from a combination of STATS19 data and information collected from sources in the coroners’ records where available. 
Where multiple fatalities were recorded from one collision, data on each fatality were extracted. Descriptive information for each case was anonymised at the Coroner’s office. Qualitative data containing information that had the potential to identify a specific individual were anonymised further following discussion among the researchers (PP, EB, SG, ET, SW, MM).", "Information recorded in the data extraction form for all cases was imported into NVivo (QSR International 9) verbatim. Each case was explored using Thematic Analysis (TA), a useful method for “identifying, analysing and reporting patterns within data” [18]. One researcher (EB) read through each case multiple times to aid familiarisation. The coding process was predominantly based upon the headings identified in the pre-defined data extraction tool. The data obtained from each case were independently reviewed by one researcher (EB), and the data included under each pre-defined heading of the extraction tool were analysed separately. Notably, the researcher was careful not to be restricted by these pre-defined concepts, allowing additional codes to emerge from the data inductively. Following this process, the researcher reflected on the codes and allowed wider themes to emerge, with clear examples of each theme taken from the data to illustrate and support the findings of the analysis. To confirm accuracy and interpretation of the data during the coding process and at theme development, findings were discussed and agreed between two researchers (EB and PP) and also among the project steering group (SG, ET, SW, MM).", "The Ethics Committee of the University of the West of England declared that no ethical approval was required for this study. Given the highly sensitive nature of the research, care was taken to remove any identifiable content from the data.", "The authors confirm that this study adheres to the RATS guidelines on qualitative research.", "Sample characteristics are available in Table 1. 
Six key themes emerged. These were: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress.\nSample characteristics (n = 34)\n Social driving A particular theme arising from the thematic analysis was what we have termed “social driving”. This term has been used to encompass a group of related behaviours, and included: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; driving where alcohol or drugs were a feature of the journey).\nThe majority of young drivers were found to be engaged in social driving behaviour prior to, or at the time of, the collision. Many individuals were reportedly travelling with friends to or from a social event, and some of the young people were driving without a specific purpose or destination. Driving without a specific destination appeared to be a specific social activity, allowing the young people to meet up in their cars and socialise together, often occurring at night:\n“He had a hectic social life and almost seemed to be out of the house more than in… once he passed his driving test [Case 30] would travel around with his friends.”\nCase 30, Driver, Male\n“Usually every weekend I’ll get a call about 2 or 3 o’clock in the morning…saying “Can you come get me?”… So I get up and go and pick them up straight away.”\nCase 4, Driver, Male\nNine cases of social driving were from cars which contained at least one passenger. In one case, a driver was transporting passengers home from a social event in the early hours of the morning. The passengers were in high spirits, and were reportedly distracting the driver throughout the journey. 
The police report stated:\n“…that they had been messing about in the back of the car, trying to put their hands over his eyes and slapping his face while he was driving.”\nCase 4, Driver, Male\nA particular theme arising from the thematic analysis was what we have termed “social driving”. This term has been used to encompass a group of related behaviours, and included: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; driving where alcohol or drugs were a feature of the journey).\nThe majority of young drivers were found to be engaged in social driving behaviour prior to, or at the time of, the collision. Many individuals were reportedly travelling with friends to or from a social event, and some of the young people were driving without a specific purpose or destination. Driving without a specific destination appeared to be a specific social activity, allowing the young people to meet up in their cars and socialise together, often occurring at night:\n“He had a hectic social life and almost seemed to be out of the house more than in… once he passed his driving test [Case 30] would travel around with his friends.”\nCase 30, Driver, Male\n“Usually every weekend I’ll get a call about 2 or 3 o’clock in the morning…saying “Can you come get me?”… So I get up and go and pick them up straight away.”\nCase 4, Driver, Male\nNine cases of social driving were from cars which contained at least one passenger. In one case, a driver was transporting passengers home from a social event in the early hours of the morning. The passengers were in high spirits, and were reportedly distracting the driver throughout the journey. 
The police report stated:\n“…that they had been messing about in the back of the car, trying to put their hands over his eyes and slapping his face while he was driving.”\nCase 4, Driver, Male\n Driving experience A number of cases involved people with very limited driving experience, with many passing their driving test within the year prior to their collision. In one case, the driver had only passed their test on the previous day.\nOne police report stated:\n“The driver had limited experience as a driver, having passed his test three months prior to this incident…The driver was confronted with another driver, coupled with a situation that he reacted harshly to, which caused him to lose control of his vehicle.”\nCase 18, Driver, Male\nIn contrast, some individuals were relatively experienced drivers given their age. In six cases, driving experience had been obtained through choice of career as a vehicle driver. One parent reported:\n“He had licences in CBT/ATV/HGV and trailer tractors as well as a car. He had a couple of specialist licences, such as forklift and cherry picker.”\nCase 3, Driver, Male\nSome drivers had limited experience with their vehicle. Over one quarter of cases had been driving the vehicle in which they had their collision for a short time only (weeks or months). In one case, the vehicle had been purchased on the day before their collision. In another case, the driver had recently bought a new vehicle with a large engine size. The driver’s friend commented:\n“Approximately 6 to 8 weeks ago [he] purchased a 2.2 litre… he only passed his driving test about 12 months ago.”\nCase 12, Driver, Male\nA number of cases involved people with very limited driving experience, with many passing their driving test within the year prior to their collision. 
In one case, the driver had only passed their test on the previous day.\nOne police report stated:\n“The driver had limited experience as a driver, having passed his test three months prior to this incident…The driver was confronted with another driver, coupled with a situation that he reacted harshly to, which caused him to lose control of his vehicle.”\nCase 18, Driver, Male\nIn contrast, some individuals were relatively experienced drivers given their age. In six cases, driving experience had been obtained through choice of career as a vehicle driver. One parent reported:\n“He had licences in CBT/ATV/HGV and trailer tractors as well as a car. He had a couple of specialist licences, such as forklift and cherry picker.”\nCase 3, Driver, Male\nSome drivers had limited experience with their vehicle. Over one quarter of cases had been driving the vehicle in which they had their collision for a short time only (weeks or months). In one case, the vehicle had been purchased on the day before their collision. In another case, the driver had recently bought a new vehicle with a large engine size. The driver’s friend commented:\n“Approximately 6 to 8 weeks ago [he] purchased a 2.2 litre… he only passed his driving test about 12 months ago.”\nCase 12, Driver, Male\n Interest in motor vehicles Many young people had a particular interest in motor vehicles. There were several examples when an individual reportedly spent time caring for their vehicle, making sure it was kept in a good quality condition, both aesthetically and mechanically. There were also reports of vehicle modification. In one case, a car had been modified from a 1400 cc engine to a 1600 cc engine. In one statement, a colleague of the deceased reported:\n“He always talked about his car and how he loved to drive it”\nCase 12, Driver, Male\nIn another example, a Mother stated:\n“It has surprised me that [Case 15] has died as a result of a car accident as she would not wish to damage her car in any way. 
Her car was her pride and joy. She was that particular she would even pick parking spaces away from other vehicles so that it wouldn’t get marked or damaged.”\nCase 15, Driver, Female\nMany young people had a particular interest in motor vehicles. There were several examples when an individual reportedly spent time caring for their vehicle, making sure it was kept in a good quality condition, both aesthetically and mechanically. There were also reports of vehicle modification. In one case, a car had been modified from a 1400 cc engine to a 1600 cc engine. In one statement, a colleague of the deceased reported:\n“He always talked about his car and how he loved to drive it”\nCase 12, Driver, Male\nIn another example, a Mother stated:\n“It has surprised me that [Case 15] has died as a result of a car accident as she would not wish to damage her car in any way. Her car was her pride and joy. She was that particular she would even pick parking spaces away from other vehicles so that it wouldn’t get marked or damaged.”\nCase 15, Driver, Female\n Risky driving behaviour Dangerous driving behaviour was identified in the majority of cases. This included: driving at speed, tailgating, racing friends and undertaking hazardous overtaking manoeuvres. Notably, excessive speed was referred to in nearly all cases. In one statement, a witness commented:\n“My first impression was that it was travelling far too fast to negotiate the bend safely....I could see his hands turning the steering wheel to his right in a large movement, his whole body movement and body language gave me the impression of panic.”\nCase 28, Driver, Male\nIn three cases, it was noted that male driving style may be affected by the presence of other males in the car, causing them to behave differently to normal. In one case, a female friend commented:\n“He always drove safely with me in the car. [Case 18] had mentioned that he drove faster when he had the lads in the car. 
I had not experienced him driving excessively fast myself.”\nCase 18, Driver, Male\n Perception of driving ability There was a sense of overconfidence and an inflated view of driving skills and ability among many cases. One case concerned two cars that were involved in a race. On interviewing the male driver of the second car, who was unharmed, the police reported:\n“… He agreed that he had been driving 2 to 3 car lengths behind [Case 15] at approximately 80 mph. He did not consider this to be an unsafe following distance.”\nCase 15, Driver, Female\nIn another example, a rear-seat passenger stated:\n“I also saw at least one large arrow shape, indicated to our left. I knew this to mean that we should stay on our own side of the road. [Front seat passenger] was shouting “[Case25], what are you doing you are not going to make that” or similar. I became aware that we were now on the offside of the road…As this was happening I heard [Front seat passenger] shout a second time. 
This sounded much more urgent than before as he said, “we’re not going to make that”.”\nCase 25, Driver, Male\nThe perception of driving ability was linked to a lack of adherence to safety regulations. Although the majority of young people observed safety regulations, the decision not to wear a seatbelt or helmet suggests that such drivers were confident in their ability to drive without incident. In one case, despite being involved in a collision in the week leading up to his death, the driver chose not to wear a seatbelt, as described by a friend:\n“I can say that [Case 13] was not in the habit of wearing a seatbelt as he found it too restrictive. I had asked him if he was wearing one when he hit the van [referring to prior collision]. He said he had not but had been able to brace himself on that occasion against the steering wheel.”\nCase 13, Driver, Male\nThere were many instances where statements contained others’ perceptions of the driver’s driving ability. People’s perceptions of the deceased’s driving ability were shown to differ; in some statements, people were highly complimentary, while other statements highlighted a less favourable view, and in some cases were visibly critical of the deceased’s driving ability. The majority of statements presented a positive perception of the driving ability of the deceased. This is interesting, given the knowledge that many collisions were associated with risky or dangerous driving behaviour. The partner of one individual reported:\n“[Case 24] was an excellent driver especially considering that he was still young and didn’t have a lot of experience. He never drove fast with me or [their baby] in the car and certainly wouldn’t do if the roads were potentially risky. He constantly talked about other accidents he’d seen to and from work, which always reassured me that he’d drive safely.”\nCase 24, Driver, Male\nHowever, other statements were not so positive. 
One father reported of his son:\n“He was in my eyes a typical young driver. He had a few bumps and things. I would say he was a confident driver but at times over confident. He sometimes drove and I would say stop, drop me off. I think his driving just needed maturity.”\nCase 21, Driver, Male\n Emotional distress In our study emotional distress immediately prior to the RTC was identified in over one quarter of the cases. This mainly referred to family or personal relationship problems, or financial difficulties, issues which may have distracted the driver from their driving role. In one case, a driver was struggling to cope with relationship and financial demands. 
As one police report stated:\n“It is highly likely on the evidence available that [Case 24] was in a hurry to get home…it is possible there were other things on his mind, for example the issue of the rent on his flat.”\nCase 24, Driver, Male\nA further statement, provided by the mother of the deceased reported:\n“[Case 24’s]…relationship with… [His girlfriend] was very rocky throughout the time they were together. On the Sunday before the accident he was at our house with all of his stuff saying that he’d had enough of his relationship with… [his girlfriend] and wanted to return home.”\nCase 24, Driver, Male\nRelationship difficulties were identified from a further 6 cases. In one passenger fatality a police report stated:\n“[The driver] is married and [Case 9] has recently separated from her long term boyfriend. [The driver] and [Case 9] have been involved in a relationship for some weeks preceding the collision. [The driver] had informed his wife of his affair on [the day prior to the collision] when his wife asked him to leave their marital home.”\nCase 9, Passenger, Female", "A particular theme arising from the thematic analysis was what we have termed “social driving”. This term has been used to encompass a group of related behaviours, and included: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; driving where alcohol or drugs were a feature of the journey.\nThe majority of young drivers were found to be engaged in social driving behaviour prior to, or at the time of, the collision. Many individuals were reportedly travelling with friends to or from a social event, and some of the young people were driving without a specific purpose or destination. 
Driving without a specific destination appeared to be a specific social activity, allowing the young people to meet up in their cars and socialise together, often occurring at night:\n“He had a hectic social life and almost seemed to be out of the house more than in… once he passed his driving test [Case 30] would travel around with his friends.”\nCase 30, Driver, Male\n“Usually every weekend I’ll get a call about 2 or 3 o’clock in the morning…saying “Can you come get me?”… So I get up and go and pick them up straight away.”\nCase 4, Driver, Male\nNine cases of social driving were from cars which contained at least one passenger. In one case, a driver was transporting passengers home from a social event in the early hours of the morning. The passengers were in high spirits, and were reportedly distracting the driver throughout the journey. The police report stated:\n“…that they had been messing about in the back of the car, trying to put their hands over his eyes and slapping his face while he was driving.”\nCase 4, Driver, Male", "A number of cases involved people with very limited driving experience, with many passing their driving test within the year prior to their collision. In one case, the driver had only passed their test on the previous day.\nOne police report stated:\n“The driver had limited experience as a driver, having passed his test three months prior to this incident…The driver was confronted with another driver, coupled with a situation that he reacted harshly to, which caused him to lose control of his vehicle.”\nCase 18, Driver, Male\nIn contrast, some individuals were relatively experienced drivers given their age. In six cases, driving experience had been obtained through choice of career as a vehicle driver. One parent reported:\n“He had licences in CBT/ATV/HGV and trailer tractors as well as a car. 
He had a couple of specialist licences, such as forklift and cherry picker.”\nCase 3, Driver, Male\nSome drivers had limited experience with their vehicle. Over one quarter of cases had been driving the vehicle in which they had their collision for a short time only (weeks or months). In one case, the vehicle had been purchased on the day before their collision. In another case, the driver had recently bought a new vehicle with a large engine size. The driver’s friend commented:\n“Approximately 6 to 8 weeks ago [he] purchased a 2.2 litre… he only passed his driving test about 12 months ago.”\nCase 12, Driver, Male", "Many young people had a particular interest in motor vehicles. There were several examples when an individual reportedly spent time caring for their vehicle, making sure it was kept in a good quality condition, both aesthetically and mechanically. There were also reports of vehicle modification. In one case, a car had been modified from a 1400 cc engine to a 1600 cc engine. In one statement, a colleague of the deceased reported:\n“He always talked about his car and how he loved to drive it”\nCase 12, Driver, Male\nIn another example, a Mother stated:\n“It has surprised me that [Case 15] has died as a result of a car accident as she would not wish to damage her car in any way. Her car was her pride and joy. She was that particular she would even pick parking spaces away from other vehicles so that it wouldn’t get marked or damaged.”\nCase 15, Driver, Female", "Dangerous driving behaviour was identified in the majority of cases. This included: driving at speed, tailgating, racing friends and undertaking hazardous overtaking manoeuvres. Notably, excessive speed was referred to in nearly all cases. 
In one statement, a witness commented:\n“My first impression was that it was travelling far too fast to negotiate the bend safely....I could see his hands turning the steering wheel to his right in a large movement, his whole body movement and body language gave me the impression of panic.”\nCase 28, Driver, Male\nIn three cases, it was noted that male driving style may be affected by the presence of other males in the car, causing them to behave differently to normal. In one case, a female friend commented:\n“He always drove safely with me in the car. [Case 18] had mentioned that he drove faster when he had the lads in the car. I had not experienced him driving excessively fast myself.”\nCase 18, Driver, Male", "There was a sense of overconfidence and an inflated view of driving skills and ability among many cases. One case concerned two cars that were involved in a race. On interviewing the male driver of the second car, who was unharmed, the police reported:\n“… He agreed that he had been driving 2 to 3 car lengths behind [Case 15] at approximately 80 mph. He did not consider this to be an unsafe following distance.”\nCase 15, Driver, Female\nIn another example, a rear-seat passenger stated:\n“I also saw at least one large arrow shape, indicated to our left. I knew this to mean that we should stay on our own side of the road. [Front seat passenger] was shouting “[Case25], what are you doing you are not going to make that” or similar. I became aware that we were now on the offside of the road…As this was happening I heard [Front seat passenger] shout a second time. This sounded much more urgent than before as he said, “we’re not going to make that”.”\nCase 25, Driver, Male\nThe perception of driving ability was linked to a lack of adherence to safety regulations. Although the majority of young people observed safety regulations, the decision not to wear a seatbelt or helmet suggests that such drivers were confident in their ability to drive without incident. 
In one case, despite being involved in a collision in the week leading up to his death, the driver chose not to wear a seatbelt, as described by a friend:\n“I can say that [Case 13] was not in the habit of wearing a seatbelt as he found it too restrictive. I had asked him if he was wearing one when he hit the van [referring to prior collision]. He said he had not but had been able to brace himself on that occasion against the steering wheel.”\nCase 13, Driver, Male\nThere were many instances where statements contained others’ perceptions of the driver’s driving ability. People’s perceptions of the deceased’s driving ability were shown to differ; in some statements, people were highly complimentary, while other statements highlighted a less favourable view, and in some cases were visibly critical of the deceased’s driving ability. The majority of statements presented a positive perception of the driving ability of the deceased. This is interesting, given the knowledge that many collisions were associated with risky or dangerous driving behaviour. The partner of one individual reported:\n“[Case 24] was an excellent driver especially considering that he was still young and didn’t have a lot of experience. He never drove fast with me or [their baby] in the car and certainly wouldn’t do if the roads were potentially risky. He constantly talked about other accidents he’d seen to and from work, which always reassured me that he’d drive safely.”\nCase 24, Driver, Male\nHowever, other statements were not so positive. One father reported of his son:\n“He was in my eyes a typical young driver. He had a few bumps and things. I would say he was a confident driver but at times over confident. He sometimes drove and I would say stop, drop me off. I think his driving just needed maturity.”\nCase 21, Driver, Male", "In our study emotional distress immediately prior to the RTC was identified in over one quarter of the cases. 
This mainly referred to family or personal relationship problems, or financial difficulties, issues which may have distracted the driver from their driving role. In one case, a driver was struggling to cope with relationship and financial demands. As one police report stated:\n“It is highly likely on the evidence available that [Case 24] was in a hurry to get home…it is possible there were other things on his mind, for example the issue of the rent on his flat.”\nCase 24, Driver, Male\nA further statement, provided by the mother of the deceased reported:\n“[Case 24’s]…relationship with… [His girlfriend] was very rocky throughout the time they were together. On the Sunday before the accident he was at our house with all of his stuff saying that he’d had enough of his relationship with… [his girlfriend] and wanted to return home.”\nCase 24, Driver, Male\nRelationship difficulties were identified from a further 6 cases. In one passenger fatality a police report stated:\n“[The driver] is married and [Case 9] has recently separated from her long term boyfriend. [The driver] and [Case 9] have been involved in a relationship for some weeks preceding the collision. [The driver] had informed his wife of his affair on [the day prior to the collision] when his wife asked him to leave their marital home.”\nCase 9, Passenger, Female", " Main findings Thematic analysis identified six themes: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress. Notably, multiple themes were identified from each case, indicating that there may be numerous factors which influence the causes and characteristics of RTCs among young people. 
The findings of the present study were consistent with previous research identifying risk factors concerning RTCs involving young people [1,6,19-22].\nThe high prevalence of social driving among young people is likely to relate to the rural setting where the study took place, and may not be replicated in urban areas. Unlike urban dwellers, people who live in a rural area often have limited access to public transport and therefore need a car for travel. For young people in particular, transport is required for school, work and maintaining a social life [23]. In this study, social driving behaviours referred to: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; or driving where alcohol or drugs were present. The numerous attributes associated with social driving behaviour highlight the difficulties in understanding the impact of different factors leading up to a fatal RTC. The association between driving and the social environment has received some research attention in recent years; for example, the influence of passengers upon driving ability [6]. However, there has been less consideration of “social driving” as a driving ‘culture’ among young adults [24], as observed in the present study.\nRisky driving behaviour was another regular feature of RTCs among young people in the study area, and is a finding that is consistent with previous research [20,25,26]. Risky driving behaviour is often associated with driving inexperience; young people have been shown to underestimate risk and overestimate their driving ability [27,28]. Although the acquisition of driving knowledge and skills are important if competent driving ability is to be achieved, it is also essential to obtain driving experience through practice [6].\nEmotional distress at the time, or leading up to, the collision was identified from numerous cases. 
Previous studies investigating the causes and characteristics of RTCs have indicated that psychological factors may influence driving behaviour in young people [6,29-31]. The results from previous research imply that interventions could be targeted towards specific ‘at-risk’ groups, or possibly that emotional distress could be included in general road safety awareness campaigns; for example, warning people about the dangers of driving when emotionally distracted. It was notable that the mental distress identified in the current study was often triggered by a recent traumatic event.\n Implications of the findings The finding that groups of young people engage in social driving behaviour, and the existence of large social networks of young drivers in and around the rural market towns, lends support to the development of interventions that target these groups using a multi-strategy approach in their specific social context. 
There is evidence that such community-based interventions have the potential to reduce child and adolescent unintentional RTC injuries [32]. They can include interventions such as education, legislation, social and environmental interventions [33] in an attempt to change community norms and behaviour. Given the limited evidence of effectiveness of educational training alone [24], and that education-based training interventions have also been criticised for failing to address the social and lifestyle factors that are associated with risky driving behaviour among young people [34], there would seem to be a need to develop novel community-based interventions targeted at a specific group of young drivers.\nThe findings of this study also lend support to prevention programmes such as Graduated Driver Licensing (GDL), which have been effective in reducing young driver collision rates in parts of the USA, Canada, Australia and New Zealand [1,35,36]. GDL schemes involve restrictions for newly qualified drivers which may include: limitations on accompanying passengers, restrictions on night-time driving, and lower thresholds for blood alcohol concentration. In the UK it is predicted that such a programme (night-time restrictions between 9 pm-6 am, and no 15–24 year old passengers) could save more than 114 lives and result in 872 fewer serious injuries each year among younger drivers [37]. In the current study, if night-time restrictions had been imposed at the time of these collisions, it is estimated that 27% of casualties may have been prevented. The ability of GDL to disrupt the social networks identified in this research is also significant. However, given the rural nature of the study area, legislation restricting night-time and passenger journeys may be difficult to enforce and also result in young people becoming socially isolated and unable to participate in school or work activities [38]. 
An ideal GDL approach would minimise such costs, while maximising the potentially significant benefits in terms of preventing deaths and injuries on the road.\nParental supervision of young drivers could provide an alternative approach to restricting driving through legislation as parents may have the ability to moderate their child’s driving behaviour and may contribute to decisions or funding about vehicle purchase [39]. However, as observed in the present study, parents often perceived their child to be a competent, responsible driver, despite many of the collisions being associated with risky driving behaviour. Further, while parents appear to be aware of the risks associated with new drivers, they are also keen to encourage independence in their children and reduce the need to transport them to various events [39]. A small number of interventions have investigated the impact of parental supervision, resulting in somewhat mixed overall findings [40,41]. Further exploration of interventions involving parental supervision would therefore be desirable, but at present they do not appear to be an adequate substitute for GDL.\n Strengths and limitations To our knowledge, this is the first study to undertake a detailed qualitative thematic analysis of narrative text contained in coroners’ records of fatal RTCs among young people. The records provided a comprehensive report on each collision, and the majority of case files contained collision information that went beyond consideration of the physical crash. In some post-collision interviews with family or friends of the deceased, the wider social context of the collision was explored with inclusion of, for example, details of family circumstances. 
The records therefore complemented routinely collected STATS19 data, which focus primarily on physical crash conditions and acute actions immediately before the collision.\nAlthough the number of cases was relatively small, we did feel that we achieved saturation for identification of themes, and the qualitative nature of the study allowed for a thorough, in-depth analysis of cases in a way that is not routine in road fatality analysis. Comparison with existing theoretical models and literature on risk factors for young people allowed us to locate our findings within a wider UK and international context, adding to the existing evidence base.\nThe study underlines the important role that in-depth data sources such as coroners’ records can play in informing prevention efforts. Countries should facilitate easier access to such data, as has happened in Australia with the development of the National Coroners Information System [42]. The current system of multiple autonomous coroners leads to under-utilisation of coroners’ data in the UK and elsewhere, representing a missed opportunity for public health [43].", "Thematic analysis identified six themes: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress. Notably, multiple themes were identified from each case, indicating that there may be numerous factors which influence the causes and characteristics of RTCs among young people. The findings of the present study were consistent with previous research identifying risk factors concerning RTCs involving young people [1,6,19-22].\nThe high prevalence of social driving among young people is likely to relate to the rural setting where the study took place, and may not be replicated in urban areas. Unlike urban dwellers, people who live in a rural area often have limited access to public transport and therefore need a car for travel. 
For young people in particular, transport is required for school, work and maintaining a social life [23]. In this study, social driving behaviours referred to: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; or driving where alcohol or drugs were present. The numerous attributes associated with social driving behaviour highlight the difficulties in understanding the impact of different factors leading up to a fatal RTC. The association between driving and the social environment has received some research attention in recent years; for example, the influence of passengers upon driving ability [6]. However, there has been less consideration of “social driving” as a driving ‘culture’ among young adults [24], as observed in the present study.\nRisky driving behaviour was another regular feature of RTCs among young people in the study area, a finding that is consistent with previous research [20,25,26]. Risky driving behaviour is often associated with driving inexperience; young people have been shown to underestimate risk and overestimate their driving ability [27,28]. Although the acquisition of driving knowledge and skills is important if competent driving ability is to be achieved, it is also essential to obtain driving experience through practice [6].\nEmotional distress at the time of, or leading up to, the collision was identified from numerous cases. Previous studies investigating the causes and characteristics of RTCs have indicated that psychological factors may influence driving behaviour in young people [6,29-31]. The results from previous research imply that interventions could be targeted towards specific ‘at-risk’ groups, or possibly that emotional distress could be included in general road safety awareness campaigns; for example, warning people about the dangers of driving when emotionally distracted. 
It was notable that the mental distress identified in the current study was often triggered by a recent traumatic event.", "The finding that groups of young people engage in social driving behaviour, and the existence of large social networks of young drivers in and around the rural market towns, lends support to the development of interventions that target these groups using a multi-strategy approach in their specific social context. There is evidence that such community based interventions have the potential to reduce child and adolescent unintentional RTC injuries [32]. They can include interventions such as education, legislation, social and environmental interventions [33] in an attempt to change community norms and behaviour.", "This study used qualitative thematic analysis of narrative text in coroners’ records to examine the social context of fatal road traffic collisions in young people. It found themes consistent with other frameworks [6], but used the richness of the qualitative data to examine in greater depth the previously identified risk factors. 
These in-depth findings provide additional support for the case for Graduated Driver Licensing programmes to reduce collisions involving young people, and also suggest that road safety interventions need to take a more community development approach, recognising the importance of social context and focusing on social networks of young people. Further research in the UK and other countries could replicate this approach, to build on the work presented here and the pre-existing frameworks.", "The authors declare that they have no competing interests.", "The study was conceived by PP. PP and EB were responsible for data collection. PP and EB analysed the data and discussed the findings with SG, ET, SW, and MM. PP drafted the first version of the manuscript. All authors provided critical edits and revisions to the manuscript, and reviewed and approved the final version.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/14/78/prepub\n" ]
[ "Road traffic fatalities", "Young people", "Qualitative", "Narrative text", "Coroners’ records" ]
Background: Globally, road traffic collisions (RTCs) are the leading cause of death in people aged 15–19, and the second highest cause of death in 20–25 year-olds [1]. In the UK around 300 young people aged 16–29 were killed when driving or riding vehicles, and over 4000 seriously injured during 2011 [2]. RTCs are of particular concern in rural areas in the UK, with the highest proportion of RTC-related fatalities and injuries occurring on rural roads [3]. In general, UK figures present a more favourable picture than many other countries; however, recent UK policy documents suggest that more work can be done to reduce road deaths and injuries [4]. Research suggests that there may be specific risk factors for RTCs which are unique to, or elevated in, young people compared with older adults. These include: limited driving experience; night-time driving; fatigue; particular risks for young men [5]. Other age-specific risk factors may include: personality characteristics; driving ability; demographic factors; perceived environment; driving environment; and developmental factors [6]. To develop effective prevention programmes the factors associated with RTCs must be identified and understood. In the UK, routine road traffic injury data are compiled via STATS19, a national database detailing the nature of a collision, the location, and a record of casualty involvement. These data are collected at the scene of the collision by an attending police officer. Although such quantitative data provide valuable intelligence for the monitoring and prevention of RTCs and related casualties, other data sources (including those facilitating qualitative analysis) may complement existing routine data, thus aiding action on road safety [7-9]. For instance, narrative text in particular can provide more detail on the events surrounding unintentional injury and death [8]. However, analysis of narrative text in the injury field has largely sought to quantify that data [8]. 
Qualitative methods, including thematic analysis of narrative text, offer opportunities for an in-depth examination of phenomena [10]. A qualitative approach can complement quantitative methods, by taking into account the wider social context of the crash and examining attitudes and experiences of those connected to the event. Study aims: In other countries, most notably Australia, the value of coroners’ records for informing public health action is recognised, particularly relating to injury prevention [11,12]. Previous studies in England and Wales have examined coroners’ records for public health purposes, but the focus has most often been on suicide prevention rather than issues such as road traffic fatalities [13]. Coroners’ records contain a range of narrative text that is conducive to qualitative analysis, including witness statements, police reports and court transcripts. Therefore, this study used a thematic analysis of narrative text, contained in coroners’ records of fatal RTCs among young people (aged 16–24) in a rural county in the south west of England, to explore if these might complement existing data sources and help identify further areas for prevention. Methods: Population: A rural county located in South West England was selected as the setting for this research as the region has been identified as having a significantly high road fatality rate and local authorities wanted to conduct more research into this area [3,14]. Between 2005 and 2010 in the county, 35 young people aged 16–24 were killed in an RTC, either as a driver of a motorcar or rider of a motorcycle, or as a passenger alongside a person of that age. Sample selection: All RTC-related fatalities among 16–24 year old drivers in the selected area occurring between 2005 and 2010 were eligible for inclusion. Passenger fatalities among 16–24 year olds were also eligible, provided the driver of the vehicle was also aged 16 to 24. The age range was selected in response to evidence that drivers aged 16–24 are greatly overrepresented in road death statistics when compared with other, more experienced drivers [15]. Data collection: Permission to view and extract data from the coroners’ records was sought and obtained from the coroner for the area. Eligible cases were obtained during visits to HM Coroner, with data extraction taking place on site. A single coroner covered the whole county. Prior to the beginning of the study, a qualitative data extraction tool and framework was developed by the research team, following a review of the literature, and tested during a pilot data collection visit to HM Coroner. The tool consisted of a number of headings based on known risk factors identified by Shope [6] during a comprehensive review and synthesis of quantitative and qualitative research findings relating to RTCs among young drivers. Identified risk factors included: personality characteristics (e.g., level of aggression); developmental factors (e.g., sleep patterns); driving ability (e.g., driving skill); demographic factors; perceived environment (e.g., risk perception); and driving environment (e.g., weather conditions) [6]. The tool was piloted on seven of the RTCs included in this study. Following the guidelines for inductive qualitative research [16,17], this process allowed the researchers to assess whether the pre-specified headings were applicable to the available data, and to identify any additional areas which might provide in-depth, rich descriptions to aid our understanding of fatal RTCs among young people. Four additional headings were identified and added to the tool during the pilot visit: reason for travel; driving behaviour at the time of the collision; safety features of the vehicle; and vehicle condition. All data extracted at the pilot visit were reviewed in light of the amended data extraction tool during subsequent data collection. Records for each fatality, including police reports, witness statements and the inquest report pertaining to each RTC, were reviewed on site in the Coroner’s office over a four-day period. Qualitative data concerning each of the framework headings were extracted verbatim from each record directly into a computerised version of the data extraction tool. Demographic data and basic quantitative descriptive data about the RTC, including vehicle details, travel environment, and collision outcomes, were also collected from a combination of STATS19 data and information from sources in the coroners’ records where available. Where multiple fatalities were recorded from one collision, data on each fatality were extracted. Descriptive information for each case was anonymised at the Coroner’s office. Qualitative data containing information that had the potential to identify a specific individual were anonymised further following discussion among researchers (PP, EB, SG, ET, SW, MM). Data analysis: Information recorded in the data extraction form for all cases was imported into NVivo (QSR International 9) verbatim. Each case was explored using Thematic Analysis (TA), a useful method for “identifying, analysing and reporting patterns within data” [18]. One researcher (EB) read through each case multiple times to aid familiarisation. The coding process was predominantly based upon the headings identified in the pre-defined data extraction tool. The data obtained from each case were independently reviewed by one researcher (EB), and the data included under each pre-defined heading of the extraction tool were analysed separately. Notably, the researcher was careful not to be restricted by these pre-defined concepts, allowing additional codes to emerge from the data inductively. Following this process, the researcher reflected on the codes and allowed wider themes to emerge, with clear examples of each theme taken from the data to illustrate and support the findings of the analysis. To confirm accuracy and interpretation of the data during the coding process and at theme development, findings were discussed and agreed between two researchers (EB and PP) and also among the project steering group (SG, ET, SW, MM). Ethical considerations: The Ethics Committee of the University of the West of England declared that no ethical approval was required for this study. Given the highly sensitive nature of the research, care was taken to remove any identifiable content from the data. RATS Guidelines: The authors confirm that this study adheres to the RATS guidelines on qualitative research. 
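The coding process described above assigns multiple themes to each case, and once themes are finalised a per-case tally follows directly. A minimal sketch with invented case data (the actual coding was carried out in NVivo, and these case labels and theme assignments are hypothetical):

```python
from collections import Counter

# Invented theme assignments for illustration -- the real coding lives in NVivo.
coded_cases = {
    "case_01": {"social driving", "risky driving behaviour"},
    "case_02": {"social driving", "driving experience", "emotional distress"},
    "case_03": {"interest in motor vehicles", "perception of driving ability"},
    "case_04": {"social driving", "risky driving behaviour", "emotional distress"},
}

# Tally how many cases each theme was identified in (themes co-occur within cases).
theme_counts = Counter(t for themes in coded_cases.values() for t in themes)

for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}/{len(coded_cases)} cases")
```

A tally of this kind is descriptive only; it supports the observation that multiple themes were identified per case without implying any statistical inference.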
Results: Sample characteristics are available in Table 1. Six key themes emerged: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress. Sample characteristics (n = 34)

Social driving: A particular theme arising from the thematic analysis was what we have termed “social driving”. This term encompasses a group of related behaviours: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; and driving where alcohol or drugs were a feature of the journey. The majority of young drivers were found to have engaged in social driving behaviour prior to, or at the time of, the collision. Many individuals were reportedly travelling with friends to or from a social event, and some of the young people were driving without a specific purpose or destination. Driving without a specific destination appeared to be a social activity in its own right, allowing the young people to meet up in their cars and socialise together, often at night: “He had a hectic social life and almost seemed to be out of the house more than in… once he passed his driving test [Case 30] would travel around with his friends.” Case 30, Driver, Male “Usually every weekend I’ll get a call about 2 or 3 o’clock in the morning…saying “Can you come get me?”… So I get up and go and pick them up straight away.” Case 4, Driver, Male In nine of the social driving cases, the car contained at least one passenger. In one case, a driver was transporting passengers home from a social event in the early hours of the morning. The passengers were in high spirits, and were reportedly distracting the driver throughout the journey.
The police report stated: “…that they had been messing about in the back of the car, trying to put their hands over his eyes and slapping his face while he was driving.” Case 4, Driver, Male

Driving experience: A number of cases involved people with very limited driving experience, with many passing their driving test within the year prior to their collision. In one case, the driver had only passed their test on the previous day. One police report stated: “The driver had limited experience as a driver, having passed his test three months prior to this incident…The driver was confronted with another driver, coupled with a situation that he reacted harshly to, which caused him to lose control of his vehicle.” Case 18, Driver, Male In contrast, some individuals were relatively experienced drivers given their age. In six cases, driving experience had been obtained through choice of career as a vehicle driver. One parent reported: “He had licences in CBT/ATV/HGV and trailer tractors as well as a car. He had a couple of specialist licences, such as forklift and cherry picker.” Case 3, Driver, Male Some drivers had limited experience with their vehicle. Over one quarter of cases had been driving the vehicle in which they had their collision for a short time only (weeks or months). In one case, the vehicle had been purchased on the day before their collision. In another case, the driver had recently bought a new vehicle with a large engine size. The driver’s friend commented: “Approximately 6 to 8 weeks ago [he] purchased a 2.2 litre… he only passed his driving test about 12 months ago.” Case 12, Driver, Male

Interest in motor vehicles: Many young people had a particular interest in motor vehicles. There were several examples where an individual reportedly spent time caring for their vehicle, making sure it was kept in good condition, both aesthetically and mechanically. There were also reports of vehicle modification. In one case, a car had been modified from a 1400 cc engine to a 1600 cc engine. In one statement, a colleague of the deceased reported: “He always talked about his car and how he loved to drive it” Case 12, Driver, Male In another example, a mother stated: “It has surprised me that [Case 15] has died as a result of a car accident as she would not wish to damage her car in any way. Her car was her pride and joy.
She was that particular she would even pick parking spaces away from other vehicles so that it wouldn’t get marked or damaged.” Case 15, Driver, Female

Risky driving behaviour: Dangerous driving behaviour was identified in the majority of cases. This included: driving at speed, tailgating, racing friends and undertaking hazardous overtaking manoeuvres. Notably, excessive speed was referred to in nearly all cases. In one statement, a witness commented: “My first impression was that it was travelling far too fast to negotiate the bend safely....I could see his hands turning the steering wheel to his right in a large movement, his whole body movement and body language gave me the impression of panic.” Case 28, Driver, Male In three cases, it was noted that male driving style may be affected by the presence of other males in the car, causing them to behave differently to normal. In one case, a female friend commented: “He always drove safely with me in the car. [Case 18] had mentioned that he drove faster when he had the lads in the car.
I had not experienced him driving excessively fast myself.” Case 18, Driver, Male

Perception of driving ability: There was a sense of overconfidence and an inflated view of driving skills and ability among many cases. One case concerned two cars that were involved in a race. On interviewing the male driver of the second car, who was unharmed, the police reported: “… He agreed that he had been driving 2 to 3 car lengths behind [Case 15] at approximately 80 mph. He did not consider this to be an unsafe following distance.” Case 15, Driver, Female In another example, a rear-seat passenger stated: “I also saw at least one large arrow shape, indicated to our left. I knew this to mean that we should stay on our own side of the road. [Front seat passenger] was shouting “[Case 25], what are you doing you are not going to make that” or similar. I became aware that we were now on the offside of the road…As this was happening I heard [Front seat passenger] shout a second time.
This sounded much more urgent than before as he said, “we’re not going to make that”.” Case 25, Driver, Male The perception of driving ability was linked to a lack of adherence to safety regulations. Although the majority of young people observed safety regulations, the decision not to wear a seatbelt or helmet suggests that such drivers were confident in their ability to drive without incident. In one case, despite being involved in a collision in the week leading up to his death, the driver chose not to wear a seatbelt, as described by a friend: “I can say that [Case 13] was not in the habit of wearing a seatbelt as he found it too restrictive. I had asked him if he was wearing one when he hit the van [referring to prior collision]. He said he had not but had been able to brace himself on that occasion against the steering wheel.” Case 13, Driver, Male Many statements contained others’ perceptions of the deceased’s driving ability, and these perceptions differed: some were highly complimentary, others less favourable, and some openly critical. The majority of statements presented a positive perception of the driving ability of the deceased. This is striking, given that many collisions were associated with risky or dangerous driving behaviour. The partner of one individual reported: “[Case 24] was an excellent driver especially considering that he was still young and didn’t have a lot of experience. He never drove fast with me or [their baby] in the car and certainly wouldn’t do if the roads were potentially risky. He constantly talked about other accidents he’d seen to and from work, which always reassured me that he’d drive safely.” Case 24, Driver, Male However, other statements were not so positive.
One father reported of his son: “He was in my eyes a typical young driver. He had a few bumps and things. I would say he was a confident driver but at times over confident. He sometimes drove and I would say stop, drop me off. I think his driving just needed maturity.” Case 21, Driver, Male

Emotional distress: In our study, emotional distress immediately prior to the RTC was identified in over one quarter of the cases. This mainly referred to family or personal relationship problems, or financial difficulties: issues which may have distracted the driver from their driving role. In one case, a driver was struggling to cope with relationship and financial demands.
As one police report stated: “It is highly likely on the evidence available that [Case 24] was in a hurry to get home…it is possible there were other things on his mind, for example the issue of the rent on his flat.” Case 24, Driver, Male A further statement, provided by the mother of the deceased, reported: “[Case 24’s]…relationship with… [His girlfriend] was very rocky throughout the time they were together. On the Sunday before the accident he was at our house with all of his stuff saying that he’d had enough of his relationship with… [his girlfriend] and wanted to return home.” Case 24, Driver, Male Relationship difficulties were identified in a further six cases. In one passenger fatality, a police report stated: “[The driver] is married and [Case 9] has recently separated from her long term boyfriend. [The driver] and [Case 9] have been involved in a relationship for some weeks preceding the collision. [The driver] had informed his wife of his affair on [the day prior to the collision] when his wife asked him to leave their marital home.” Case 9, Passenger, Female

Discussion:

Main findings: Thematic analysis identified six themes: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress. Notably, multiple themes were identified from each case, indicating that numerous factors may influence the causes and characteristics of RTCs among young people.
The findings of the present study were consistent with previous research identifying risk factors concerning RTCs involving young people [1,6,19-22]. The high prevalence of social driving among young people is likely to relate to the rural setting where the study took place, and may not be replicated in urban areas. Unlike urban dwellers, people who live in a rural area often have limited access to public transport and therefore need a car for travel. For young people in particular, transport is required for school, work and maintaining a social life [23]. In this study, social driving behaviours referred to: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; or driving where alcohol or drugs were present. The numerous attributes associated with social driving behaviour highlight the difficulties in understanding the impact of different factors leading up to a fatal RTC. The association between driving and the social environment has received some research attention in recent years; for example, the influence of passengers upon driving ability [6]. However, there has been less consideration of “social driving” as a driving ‘culture’ among young adults [24], as observed in the present study. Risky driving behaviour was another regular feature of RTCs among young people in the study area, a finding consistent with previous research [20,25,26]. Risky driving behaviour is often associated with driving inexperience; young people have been shown to underestimate risk and overestimate their driving ability [27,28]. Although the acquisition of driving knowledge and skills is important if competent driving ability is to be achieved, it is also essential to obtain driving experience through practice [6]. Emotional distress at the time of, or leading up to, the collision was identified in numerous cases.
Previous studies investigating the causes and characteristics of RTCs have indicated that psychological factors may influence driving behaviour in young people [6,29-31]. The results from previous research imply that interventions could be targeted towards specific ‘at-risk’ groups, or that emotional distress could be included in general road safety awareness campaigns; for example, warning people about the dangers of driving when emotionally distracted. It was notable that the mental distress identified in the current study was often triggered by a recent traumatic event.

Implications of the findings: The finding that groups of young people engage in social driving behaviour, and the existence of large social networks of young drivers in and around the rural market towns, lend support to the development of interventions that target these groups using a multi-strategy approach in their specific social context.
There is evidence that such community based interventions have the potential to reduce child and adolescent unintentional RTC injuries [32]. They can include interventions such as education, legislation, social and environmental interventions [33] in an attempt to change community norms and behaviour. Given the limited evidence of effectiveness of educational training alone [24], and that education-based training interventions have also been criticised for failing to address the social and lifestyle factors that are associated with risky driving behaviour among young people [34], there would seem to be a need to develop novel community-based interventions targeted at a specific group of young drivers. The findings of this study also lend support to prevention programmes such as Graduated Driver Licensing (GDL), which have been effective in reducing young driver collision rates in parts of the USA, Canada, Australia and New Zealand [1,35,36]. GDL schemes involve restrictions for newly qualified drivers which may include: limitations on accompanying passengers, restrictions on night-time driving, and lower thresholds for blood alcohol concentration. In the UK it is predicted that such a programme (night-time restrictions between 9 pm-6 am, and no 15–24 year old passengers) could save more than 114 lives and result in 872 fewer serious injuries each year among younger drivers [37]. In the current study, if night-time restrictions had been imposed at the time of these collisions, it is estimated that 27% of causalities may have been prevented. The ability of GDL to disrupt the social networks identified in this research is also significant. However, given the rural nature of the study area, legislation restricting night-time and passenger journeys may be difficult to enforce and also result in young people becoming socially isolated and unable to participate in school or work activities [38]. 
An ideal GDL approach would minimise such costs, while maximising the potentially significant benefits in terms of preventing deaths and injuries on the road. Parental supervision of young drivers could provide an alternative approach to restricting driving through legislation as parents may have the ability to moderate their child’s driving behaviour and may contribute to decisions or funding about vehicle purchase [39]. However, as observed in the present study, parents often perceived their child to be a competent, responsible driver, despite many of the collisions being associated with risky driving behaviour. Further, while parents appear to be aware of the risks associated with new drivers, they are also keen to encourage independence in their children and reduce the need to transport them to various events [39]. A small number of interventions have investigated the impact of parental supervision, resulting in somewhat mixed overall findings [40,41]. Further exploration of interventions involving parental supervision would therefore be desirable, but at present they do not appear to be an adequate substitute for GDL. The finding that groups of young people engage in social driving behaviour, and the existence of large social networks of young drivers in and around the rural market towns, lends support to the development of interventions that target these groups using a multi-strategy approach in their specific social context. There is evidence that such community based interventions have the potential to reduce child and adolescent unintentional RTC injuries [32]. They can include interventions such as education, legislation, social and environmental interventions [33] in an attempt to change community norms and behaviour. 
Given the limited evidence of effectiveness of educational training alone [24], and that education-based training interventions have also been criticised for failing to address the social and lifestyle factors that are associated with risky driving behaviour among young people [34], there would seem to be a need to develop novel community-based interventions targeted at a specific group of young drivers. The findings of this study also lend support to prevention programmes such as Graduated Driver Licensing (GDL), which have been effective in reducing young driver collision rates in parts of the USA, Canada, Australia and New Zealand [1,35,36]. GDL schemes involve restrictions for newly qualified drivers which may include: limitations on accompanying passengers, restrictions on night-time driving, and lower thresholds for blood alcohol concentration. In the UK it is predicted that such a programme (night-time restrictions between 9 pm-6 am, and no 15–24 year old passengers) could save more than 114 lives and result in 872 fewer serious injuries each year among younger drivers [37]. In the current study, if night-time restrictions had been imposed at the time of these collisions, it is estimated that 27% of causalities may have been prevented. The ability of GDL to disrupt the social networks identified in this research is also significant. However, given the rural nature of the study area, legislation restricting night-time and passenger journeys may be difficult to enforce and also result in young people becoming socially isolated and unable to participate in school or work activities [38]. An ideal GDL approach would minimise such costs, while maximising the potentially significant benefits in terms of preventing deaths and injuries on the road. 
Parental supervision of young drivers could provide an alternative approach to restricting driving through legislation as parents may have the ability to moderate their child’s driving behaviour and may contribute to decisions or funding about vehicle purchase [39]. However, as observed in the present study, parents often perceived their child to be a competent, responsible driver, despite many of the collisions being associated with risky driving behaviour. Further, while parents appear to be aware of the risks associated with new drivers, they are also keen to encourage independence in their children and reduce the need to transport them to various events [39]. A small number of interventions have investigated the impact of parental supervision, resulting in somewhat mixed overall findings [40,41]. Further exploration of interventions involving parental supervision would therefore be desirable, but at present they do not appear to be an adequate substitute for GDL. Strengths and limitations To our knowledge, this is the first study to undertake a detailed qualitative thematic analysis of narrative text contained in coroners’ records of fatal RTCs among young people. The records provided a comprehensive report on each collision, and the majority of case files contained collision information that went beyond consideration of the physical crash. In some post-collision interviews with family or friends of the deceased, the wider social context of the collision was explored with inclusion of, for example, details of family circumstances. The records therefore complemented routinely collected STATS19 data, which focus primarily on physical crash conditions and acute actions immediately before the collision. 
Although the number of cases was relatively small, we did feel that we achieved saturation for identification of themes, and the qualitative nature of the study allowed for a thorough, in-depth analysis of cases in a way that is not routine in road fatality analysis. Comparison with existing theoretical models and literature on risk factors for young people allowed us to locate our findings within a wider UK and international context, adding to the existing evidence base. The study underlines the important role that in-depth data sources such as coroners’ records can play in informing prevention efforts. Countries should facilitate easier access to such data, as has happened in Australia with the development of the National Coroners Information System [42]. The current system of multiple autonomous coroners leads to under-utilisation of coroners’ data in the UK and elsewhere, representing a missed opportunity for public health [43]. To our knowledge, this is the first study to undertake a detailed qualitative thematic analysis of narrative text contained in coroners’ records of fatal RTCs among young people. The records provided a comprehensive report on each collision, and the majority of case files contained collision information that went beyond consideration of the physical crash. In some post-collision interviews with family or friends of the deceased, the wider social context of the collision was explored with inclusion of, for example, details of family circumstances. The records therefore complemented routinely collected STATS19 data, which focus primarily on physical crash conditions and acute actions immediately before the collision. Although the number of cases was relatively small, we did feel that we achieved saturation for identification of themes, and the qualitative nature of the study allowed for a thorough, in-depth analysis of cases in a way that is not routine in road fatality analysis. 
Comparison with existing theoretical models and literature on risk factors for young people allowed us to locate our findings within a wider UK and international context, adding to the existing evidence base. The study underlines the important role that in-depth data sources such as coroners’ records can play in informing prevention efforts. Countries should facilitate easier access to such data, as has happened in Australia with the development of the National Coroners Information System [42]. The current system of multiple autonomous coroners leads to under-utilisation of coroners’ data in the UK and elsewhere, representing a missed opportunity for public health [43]. Main findings: Thematic analysis identified six themes: social driving, driving experience, interest in motor vehicles, risky driving behaviour, perception of driving ability, and emotional distress. Notably, multiple themes were identified from each case, indicating that there may be numerous factors which influence the causes and characteristics of RTCs among young people. The findings of the present study were consistent with previous research identifying risk factors concerning RTCs involving young people [1,6,19-22]. The high prevalence of social driving among young people is likely to relate to the rural setting where the study took place, and may not be replicated in urban areas. Unlike urban dwellers, people who live in a rural area often have limited access to public transport and therefore need a car for travel. For young people in particular, transport is required for school, work and maintaining a social life [23]. In this study, social driving behaviours referred to: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; or driving where alcohol or drugs were present. 
The numerous attributes associated with social driving behaviour highlight the difficulties in understanding the impact of different factors leading up to a fatal RTC. The association between driving and the social environment has received some research attention in recent years; for example, the influence of passengers upon driving ability [6]. However, there has been less consideration of “social driving” as a driving ‘culture’ among young adults [24], as observed in the present study. Risky driving behaviour was another regular feature of RTCs among young people in the study area, and is a finding that is consistent with previous research [20,25,26]. Risky driving behaviour is often associated with driving inexperience; young people have been shown to underestimate risk and overestimate their driving ability [27,28]. Although the acquisition of driving knowledge and skills are important if competent driving ability is to be achieved, it is also essential to obtain driving experience through practice [6]. Emotional distress at the time, or leading up to, the collision was identified from numerous cases. Previous studies investigating the causes and characteristics of RTCs have indicated that psychological factors may influence driving behaviour in young people [6,29-31]. The results from previous research imply that interventions could be targeted towards specific ‘at-risk’ groups, or possibly that emotional distress could be included in general road safety awareness campaigns; for example, warning people about the dangers of driving when emotionally distracted. It was notable that the mental distress identified in the current study was often triggered by a recent traumatic event. 
Implications of the findings: The finding that groups of young people engage in social driving behaviour, and the existence of large social networks of young drivers in and around the rural market towns, lends support to the development of interventions that target these groups using a multi-strategy approach in their specific social context. There is evidence that such community based interventions have the potential to reduce child and adolescent unintentional RTC injuries [32]. They can include interventions such as education, legislation, social and environmental interventions [33] in an attempt to change community norms and behaviour. Given the limited evidence of effectiveness of educational training alone [24], and that education-based training interventions have also been criticised for failing to address the social and lifestyle factors that are associated with risky driving behaviour among young people [34], there would seem to be a need to develop novel community-based interventions targeted at a specific group of young drivers. The findings of this study also lend support to prevention programmes such as Graduated Driver Licensing (GDL), which have been effective in reducing young driver collision rates in parts of the USA, Canada, Australia and New Zealand [1,35,36]. GDL schemes involve restrictions for newly qualified drivers which may include: limitations on accompanying passengers, restrictions on night-time driving, and lower thresholds for blood alcohol concentration. In the UK it is predicted that such a programme (night-time restrictions between 9 pm-6 am, and no 15–24 year old passengers) could save more than 114 lives and result in 872 fewer serious injuries each year among younger drivers [37]. In the current study, if night-time restrictions had been imposed at the time of these collisions, it is estimated that 27% of causalities may have been prevented. 
The ability of GDL to disrupt the social networks identified in this research is also significant. However, given the rural nature of the study area, legislation restricting night-time and passenger journeys may be difficult to enforce and also result in young people becoming socially isolated and unable to participate in school or work activities [38]. An ideal GDL approach would minimise such costs, while maximising the potentially significant benefits in terms of preventing deaths and injuries on the road. Parental supervision of young drivers could provide an alternative approach to restricting driving through legislation as parents may have the ability to moderate their child’s driving behaviour and may contribute to decisions or funding about vehicle purchase [39]. However, as observed in the present study, parents often perceived their child to be a competent, responsible driver, despite many of the collisions being associated with risky driving behaviour. Further, while parents appear to be aware of the risks associated with new drivers, they are also keen to encourage independence in their children and reduce the need to transport them to various events [39]. A small number of interventions have investigated the impact of parental supervision, resulting in somewhat mixed overall findings [40,41]. Further exploration of interventions involving parental supervision would therefore be desirable, but at present they do not appear to be an adequate substitute for GDL. Strengths and limitations: To our knowledge, this is the first study to undertake a detailed qualitative thematic analysis of narrative text contained in coroners’ records of fatal RTCs among young people. The records provided a comprehensive report on each collision, and the majority of case files contained collision information that went beyond consideration of the physical crash. 
In some post-collision interviews with family or friends of the deceased, the wider social context of the collision was explored with inclusion of, for example, details of family circumstances. The records therefore complemented routinely collected STATS19 data, which focus primarily on physical crash conditions and acute actions immediately before the collision. Although the number of cases was relatively small, we did feel that we achieved saturation for identification of themes, and the qualitative nature of the study allowed for a thorough, in-depth analysis of cases in a way that is not routine in road fatality analysis. Comparison with existing theoretical models and literature on risk factors for young people allowed us to locate our findings within a wider UK and international context, adding to the existing evidence base. The study underlines the important role that in-depth data sources such as coroners’ records can play in informing prevention efforts. Countries should facilitate easier access to such data, as has happened in Australia with the development of the National Coroners Information System [42]. The current system of multiple autonomous coroners leads to under-utilisation of coroners’ data in the UK and elsewhere, representing a missed opportunity for public health [43]. Conclusions: This study used qualitative thematic analysis of narrative text in coroners’ records to examine the social context of fatal road traffic collisions in young people. It found themes consistent with other frameworks [6], but used the richness of the qualitative data to examine in greater depth the previously identified risk factors. 
These in-depth findings provide additional support for the case for Graduated Driver Licensing programmes to reduce collisions involving young people, and also suggest that road safety interventions need to take a more community development approach, recognising the importance of social context and focusing on social networks of young people. Further research in the UK and other countries could replicate this approach, to build on the work presented here and the pre-existing frameworks. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: The study was conceived by PP. PP and EB were responsible for data collection. PP and EB analysed the data and discussed the findings with SG, ET, SW, and MM. PP drafted the first version of the manuscript. All authors provided critical edits and revisions to the manuscript, and reviewed and approved the final version. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/14/78/prepub
Background: Deaths and injuries on the road remain a major cause of premature death among young people across the world. Routinely collected data usually focus on the mechanism of road traffic collisions and basic demographic data of those involved. This study aimed to supplement these routine sources with a thematic analysis of narrative text contained in coroners' records, to explore the wider social context in which collisions occur. Methods: Thematic analysis of narrative text from coroners' records, retrieved from thirty-four fatalities among young people (16-24 year olds) occurring as a result of thirty road traffic collisions in a rural county in the south of England over the period 2005-2010. Results: Six key themes emerged: social driving, driving experience, interest in motor vehicles, driving behaviour, perception of driving ability, and emotional distress. Social driving (defined as a group of related behaviours including: driving as a social event in itself (i.e. without a pre-specified destination); driving to or from a social event; driving with accompanying passengers; driving late at night; driving where alcohol or drugs were a feature of the journey) was identified as a common feature across cases. Conclusions: Analysis of the wider social context in which road traffic collisions occur in young people can provide important information for understanding why collisions happen and for developing targeted interventions to prevent them. It can complement routinely collected data, which often focus on events immediately preceding a collision. Qualitative analysis of narrative text in coroners' records may provide a means of obtaining this type of information. 
These findings provide additional support for the case for Graduated Driver Licensing programmes to reduce collisions involving young people, and also suggest that road safety interventions need to take a more community development approach, recognising the importance of social context and focusing on social networks of young people.
Background: Globally, road traffic collisions (RTCs) are the leading cause of death in people aged 15–19, and the second highest cause of death in 20–25 year-olds [1]. In the UK around 300 young people aged 16–29 were killed when driving or riding vehicles, and over 4000 seriously injured during 2011 [2]. RTCs are of particular concern in rural areas in the UK, with the highest proportion of RTC-related fatalities and injuries occurring on rural roads [3]. In general, UK figures present a more favourable picture than many other countries; however, recent UK policy documents suggest that more work can be done to reduce road deaths and injuries [4]. Research suggests that there may be specific risk factors for RTCs which are unique to, or elevated in, young people compared with older adults. These include: limited driving experience; night-time driving; fatigue; particular risks for young men [5]. Other age-specific risk factors may include: personality characteristics; driving ability; demographic factors; perceived environment; driving environment; and developmental factors [6]. To develop effective prevention programmes the factors associated with RTCs must be identified and understood. In the UK, routine road traffic injury data are compiled via STATS19, a national database detailing the nature of a collision, the location, and a record of casualty involvement. These data are collected at the scene of the collision by an attending police officer. Although such quantitative data provide valuable intelligence for the monitoring and prevention of RTCs and related casualties, other data sources (including those facilitating qualitative analysis) may complement existing routine data, thus aiding action on road safety [7-9]. For instance, narrative text in particular can provide more detail on the events surrounding unintentional injury and death [8]. However, analysis of narrative text in the injury field has largely sought to quantify that data [8]. 
Qualitative methods, including thematic analysis of narrative text, offer opportunities for an in-depth examination of phenomena [10]. A qualitative approach can complement quantitative methods by taking into account the wider social context of the crash and examining the attitudes and experiences of those connected to the event. Study aims: In other countries, most notably Australia, the value of coroners' records for informing public health action is recognised, particularly in relation to injury prevention [11,12]. Previous studies in England and Wales have examined coroners' records for public health purposes, but the focus has most often been on suicide prevention rather than issues such as road traffic fatalities [13]. Coroners' records contain a range of narrative text that is conducive to qualitative analysis, including witness statements, police reports and court transcripts. Therefore, this study used a thematic analysis of narrative text, contained in coroners' records of fatal RTCs among young people (aged 16–24) in a rural county in the south west of England, to explore whether these might complement existing data sources and help identify further areas for prevention.
14,399
354
24
[ "driving", "driver", "case", "data", "young", "social", "people", "young people", "study", "collision" ]
[ "test", "test" ]
[CONTENT] Road traffic fatalities | Young people | Qualitative | Narrative text | Coroners’ records [SUMMARY]
[CONTENT] Accidents, Traffic | Adolescent | Automobile Driving | Coroners and Medical Examiners | Female | Humans | Male | Psychology | Qualitative Research | Social Behavior | Stress, Psychological | United Kingdom | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] driving | driver | case | data | young | social | people | young people | study | collision [SUMMARY]
[CONTENT] driving | driver | case | data | young | social | people | young people | study | collision [SUMMARY]
[CONTENT] driving | driver | case | data | young | social | people | young people | study | collision [SUMMARY]
[CONTENT] driving | driver | case | data | young | social | people | young people | study | collision [SUMMARY]
[CONTENT] driving | driver | case | data | young | social | people | young people | study | collision [SUMMARY]
[CONTENT] driving | driver | case | data | young | social | people | young people | study | collision [SUMMARY]
[CONTENT] prevention | coroners | coroners records | records | text | narrative | narrative text | injury | rtcs | data [SUMMARY]
[CONTENT] data | tool | extraction | coroner | data extraction | extraction tool | headings | 16 | research | researcher [SUMMARY]
[CONTENT] driver | driving | case | male | driver male | car | social | relationship | case driver | cases [SUMMARY]
[CONTENT] examine | frameworks | social | approach | context | social context | collisions | depth | young | young people [SUMMARY]
[CONTENT] driving | driver | data | case | social | young | people | study | male | young people [SUMMARY]
[CONTENT] driving | driver | data | case | social | young | people | study | male | young people [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] Coroners | thirty-four | 16-24 year olds | thirty | England [SUMMARY]
[CONTENT] Six ||| late at night [SUMMARY]
[CONTENT] ||| ||| ||| Graduated Driver Licensing [SUMMARY]
[CONTENT] ||| ||| ||| Coroners | thirty-four | 16-24 year olds | thirty | England ||| ||| Six ||| late at night ||| ||| ||| ||| Graduated Driver Licensing [SUMMARY]
[CONTENT] ||| ||| ||| Coroners | thirty-four | 16-24 year olds | thirty | England ||| ||| Six ||| late at night ||| ||| ||| ||| Graduated Driver Licensing [SUMMARY]
Impact of patent ductus arteriosus and subsequent therapy with ibuprofen on the release of S-100B and oxidative stress index in preterm infants.
25542161
Hemodynamically significant patent ductus arteriosus (hsPDA) leads to injury in tissues/organs by reducing perfusion of organs and causing oxidative stress. The purpose of this study was to evaluate the oxidant/antioxidant status in preterm infants with hsPDA by measuring the total antioxidant capacity and total oxidant status and to assess neuronal damage due to oxidant stress related to hsPDA.
BACKGROUND
This prospective study included 37 low-birth-weight infants with echocardiographically diagnosed hsPDA treated with oral ibuprofen and a control group of 40 infants without PDA. Blood samples were taken from all infants, and then the total antioxidant capacity (TAC), total oxidant status (TOS), and S-100B protein levels were assessed, and the oxidative stress index was calculated before and after therapy.
MATERIAL AND METHODS
The mean pre-therapy TOS level and oxidative stress index (OSI) value of the patients with hsPDA were significantly higher, but TAC level was lower than in the control group. There were no statistically significant differences in the mean post-therapy values of TOS, TAC, OSI, and S-100B protein between the two groups.
RESULTS
hsPDA may cause cellular injury by increasing oxidative stress and impairing tissue perfusion; however, the brain can compensate for oxidative stress and impaired tissue perfusion through well-developed autoregulation systems to decrease tissue injury.
CONCLUSIONS
[ "Antioxidants", "Case-Control Studies", "Ductus Arteriosus, Patent", "Female", "Humans", "Ibuprofen", "Infant, Newborn", "Infant, Premature", "Male", "Oxidative Stress", "S100 Calcium Binding Protein beta Subunit" ]
4283821
Background
The left-to-right shunt in hemodynamically significant patent ductus arteriosus (hsPDA) in premature newborns leads to alveolar edema and decreased lung compliance by increasing pulmonary blood flow and capillary permeability. Under this condition, the infant potentially requires more oxygen and higher mechanical ventilation support [1]. Furthermore, hsPDA may cause hypoperfusion of vital organs, leading to pathologies such as necrotizing enterocolitis (NEC), bronchopulmonary dysplasia (BPD), and acute renal insufficiency [2,3]. In addition, intraventricular hemorrhage (IVH) resulting from sudden change in pressure and damage to the white matter may be caused by defective perfusion of the brain [4,5]. Although hsPDA is definitely related to these morbidities, its causative role has not been fully determined [6]. Patients with congenital heart diseases that lead to hypoperfusion, ischemia, and chronic hypoxia cannot meet the biological requirements of the tissues and are strongly confronted with oxygen radicals [7]. To prevent the adverse effects of free radicals and oxidants such as lipid peroxidant, the body activates antioxidant defense mechanisms that include: glutathione reductase, glutathione peroxidase, superoxide dismutase, and catalase [8]. In the first days of life, premature infants have higher concentrations of hydroperoxide in their erythrocyte membrane and lower antioxidant defense compared with those of term infants [9]. S-100B protein and neuron-specific enolase (NSE) are the most valuable biochemical markers of neuronal damage and glial activation in premature infants [37]. S-100B is a calcium-binding protein found particularly in Schwann and astroglial cells [10]. It increases after head trauma, subarachnoid hemorrhage, paralysis, and cardiac pathologies, resulting in neurologic damage [11]. 
The purpose of this study was to evaluate the oxidant/antioxidant status in preterm infants with hsPDA by measuring the total antioxidant capacity (TAC) and total oxidant status (TOS) and to assess the neuronal damage due to oxidant stress related to hsPDA.
Clinical and laboratory data
The study was performed in a prospective manner. The decision for therapy of the newborns diagnosed with hsPDA was made by a neonatologist. The SpO2, arterial blood pressure, and support ventilation of the newborns were closely followed-up. The cranial ultrasonographies of all patients were performed on postnatal day 1 and 5 by a neonatologist experienced in transfontanel ultrasonography. Complications such as IVH, BPD, and NEC were recorded. BPD was diagnosed according to the diagnostic criteria of the U.S. National Institutes of Health [13]. After taking basal blood samples, the patients in the test group received a single dose ibuprofen via orogastric catheter (10 mg/kg/day on day 1; 5 mg/kg/day on day 2; 5 mg/kg/day on day 3). Twelve hours after the third dose of ibuprofen, the patients were reassessed with ECHO, and once again blood samples were taken from those whose PDA had closed. The blood samples were taken either from the umbilical venous catheter or from the peripheral veins, centrifuged at 5000 rpm for 10 min, and the sera obtained were kept at −80°C until the time of analysis.
Clinical findings
The clinical, perinatal, and natal features of the hsPDA patients (test group) and the control group patients are shown in Table 1. There were no significant differences between the two groups in terms of birth weight, birth week, sex, antenatal steroid use, Apgar score, surfactant therapy, type of delivery, and inotropic support, but the mean duration of mechanical ventilation support in the PDA group was significantly longer than that in the control group (p<0.001).
Conclusions
The presence of hsPDA in premature infants, in whom the oxidant defense mechanism is not fully developed, causes an increase in total oxidant status and oxidative stress index, which leads to tissue/organ damage. We determined that in patients with hsPDA no significant cerebral injury developed before or after therapy. To determine the rate of cerebral damage in patients with hsPDA, studies with a larger number of patients are needed to assess the correlation between results obtained from devices such as NIRS, which measures local tissue oxygenation, and serum NSE and S-100B levels, which are accepted as the best markers of cerebral damage.
[ "Background", "Patient population", "Echocardiographic evaluation of patent ductus arteriosus", "Calculation of total antioxidant capacity", "Calculation of total oxidant status", "Oxidative stress index", "Serum S100B measurement", "Statistical analyses", "Results", "TAC, TOS, OSI, and S-100B protein values" ]
[ "The left-to-right shunt in hemodynamically significant patent ductus arteriosus (hsPDA) in premature newborns leads to alveolar edema and decreased lung compliance by increasing pulmonary blood flow and capillary permeability. Under this condition, the infant potentially requires more oxygen and higher mechanical ventilation support [1]. Furthermore, hsPDA may cause hypoperfusion of vital organs, leading to pathologies such as necrotizing enterocolitis (NEC), bronchopulmonary dysplasia (BPD), and acute renal insufficiency [2,3]. In addition, intraventricular hemorrhage (IVH) resulting from sudden change in pressure and damage to the white matter may be caused by defective perfusion of the brain [4,5]. Although hsPDA is definitely related to these morbidities, its causative role has not been fully determined [6]. Patients with congenital heart diseases that lead to hypoperfusion, ischemia, and chronic hypoxia cannot meet the biological requirements of the tissues and are strongly confronted with oxygen radicals [7]. To prevent the adverse effects of free radicals and oxidants such as lipid peroxidant, the body activates antioxidant defense mechanisms that include: glutathione reductase, glutathione peroxidase, superoxide dismutase, and catalase [8]. In the first days of life, premature infants have higher concentrations of hydroperoxide in their erythrocyte membrane and lower antioxidant defense compared with those of term infants [9]. S-100B protein and neuron-specific enolase (NSE) are the most valuable biochemical markers of neuronal damage and glial activation in premature infants [37]. S-100B is a calcium-binding protein found particularly in Schwann and astroglial cells [10]. 
It increases after head trauma, subarachnoid hemorrhage, paralysis, and cardiac pathologies, resulting in neurologic damage [11].\nThe purpose of this study was to evaluate the oxidant/antioxidant status in preterm infants with hsPDA by measuring the total antioxidant capacity (TAC) and total oxidant status (TOS) and to assess the neuronal damage due to oxidant stress related to hsPDA.", "The study included 77 infants of the same age group and gestational age hospitalized with the diagnosis of prematurity and respiratory distress syndrome in the Newborn Intensive Care Unit of Yüzüncü Yil University Teaching Hospital, Van, Turkey. The infants were divided into 2 groups. The first group included 37 infants, presumably ≤32 weeks of gestational age, diagnosed with hsPDA between day 3 and 7 using echocardiography (ECHO). The second group included 40 infants with no hsPDA on ECHO examination. The clinical severity of hsPDA was determined according to the PDA scoring system recommended by McNamara et al. [12]. Initially, all patients received fluids at a dose of 70–80 cc/kg/day; on subsequent days the fluid was gradually increased by 10–20 cc/kg/day, reaching a maximum of 150 cc/kg/day. The exclusion criteria for the study were congenital malformations, major congenital cardiac anomaly, genetic or metabolic disease, contraindication for the use of ibuprofen (oliguria or serum creatinine level >150 μmol/L or platelet count <75×10⁹/L), and patients who refused to participate in the study. Seven of the patients initially included in the study were thereafter excluded: 2 of the patients had unstable phase III or greater intraventricular hemorrhage, 3 patients had unclosed PDA after ibuprofen therapy, and 2 patients died of sepsis during the study period. 
Before the initiation of the study, signed permission forms from the families of the infants and approval from the Ethics Council of the University were obtained.", "The patients included in the study were evaluated using ECHO in short and high parasternal axis. Echocardiography was performed once or more than once according to the clinical course of each patient. We used color Doppler echocardiography system with a Vivid S6 6s sector probe (GE Healthcare, GE Medical Systems, Horten, Norway). The diagnosis of hsPDA was reached and ibuprofen therapy was started upon the determination in echocardiographic parasternal long axis, left atrial/aortic root ratio of ≥1.4 mm/kg [2], enlargement of left ventricle, holodiastolic retrograde flow in the descending aorta; and a pulse-waved Doppler turbulent systolic and diastolic flow on the ductus and abnormal antegrade diastolic flow. Tachycardia (≥160/min) and hypotension (less than tenth percentile according to birth weight and age) were evaluated as clinical findings supporting the diagnosis of hsPDA.", "The plasma TAC level was measured using a new automated measurement system developed by Erel [14], which uses hydroxyl radical, one of the most effective biological radicals produced. For measurements, as Reagent 1, the existing ferrous ion solution [o-dianisidine (10 mM), ferrous ion (45 AM) in the Clark and Lubs solution (75 mM, pH 1.8)] was mixed with Reagent 2-hydrogen peroxide [H2O2 (7.5 mM) in the Clark and Lubs solution]. The sequentially produced brown-colored dianisidinyl radical cation and the radicals produced by the hydroxyl radical are strong radicals. Using this method, the antioxidant effect of the sample on a strong free radical produced by hydroxyl radical was measured. The tests gave perfect results at values under 3%. The results are expressed as mmol Trolox Equiv. L-1.", "The plasma TOS level was measured using a new automated measurement system developed by Erel [15]. 
Oxidants present in the sample oxidize the ferrous ion-o-dianisidine complex to ferric ion. The oxidation reaction is mediated by glycerol molecules amply found in the reaction area. The ferric ion forms an orange-colored complex with the xylenol molecule in an acidic setting. The color saturation, which can be measured spectrophotometrically, is based on the total quantity of oxidant substances in the specimen. The results were calibrated with hydrogen peroxide and expressed in terms of micromolar hydrogen peroxide equivalent per liter (μmol H2O2 Equiv. L-1).", "The oxidative stress index (OSI) was taken as the percentage ratio of TOS values to TAS values. Before making the calculations, the values in the TAC test were converted into micromol values, as in the TOS test. The results were calculated using the formula OSI (arbitrary unit) =TOS (μmol H2O2 equivalent/L)/TAS (mmol Trolox equivalent/L) ×10, and the calculated values were defined as random units [15].", "The serum S-100B level was measured using an electrochemiluminescence immunoassay kit (ECLIA, Roche Diagnostics, Germany).", "The results of groups with normal distribution are presented as mean ±SD, and the median was used to present results that showed abnormal distribution. The chi-square test was used to evaluate demographic data between the 2 groups. To determine significant differences between the groups, the unpaired t-test for data with normal distribution and Mann-Whitney U test for data with non-normal distribution were used. In comparing pre-therapy with post-therapy, the paired t-test for data with normal distribution and Wilcoxon test for data with non-normal distribution were performed. To determine the relationship between the variables, for each group, Pearson and Spearman correlation coefficients were used. P values <0.05 were accepted as statistically significant. 
For statistical analysis, SPSS 12.0 (SPSS Inc, Chicago, IL) was used.", " Clinical findings The clinical, perinatal, and natal features of the hsPDA patients (test group) and the control group patients are shown in Table 1. There were no significant differences between the two groups in terms of birth weight, birth week, sex, antenatal steroid use, Apgar score, surfactant therapy, type of delivery, and inotropic support, but the mean duration of mechanical ventilation support in the PDA group was significantly longer than that in the control group (p<0.001).\nThe clinical, perinatal, and natal features of the hsPDA patients (test group) and the control group patients are shown in Table 1. There were no significant differences between the two groups in terms of birth weight, birth week, sex, antenatal steroid use, Apgar score, surfactant therapy, type of delivery, and inotropic support, but the mean duration of mechanical ventilation support in the PDA group was significantly longer than that in the control group (p<0.001).\n TAC, TOS, OSI, and S-100B protein values The mean pre-therapy values of TOS and OSI in the test group were significantly higher, but TAC values were lower than those in the control group (p=0.001, p<0.001, and p=0.023, respectively). The S-100B protein levels in both groups were similar (Table 2).\nThere were no significant differences between the test and control groups in terms of mean post-therapy values of TOS, TAC, OSI, and S-100B protein (p>0.05) (Table 3).\nWhen the pre-therapy TOS, TAC, and OSI values were compared with the post-therapy values, the TOS and OSI values were significantly higher, but TAC values were found to be significantly lower (p<0.001 for all values). The S-100B protein levels in both groups were similar (Table 4).\nThe mean pre-therapy values of TOS and OSI in the test group were significantly higher, but TAC values were lower than those in the control group (p=0.001, p<0.001, and p=0.023, respectively). 
The S-100B protein levels in both groups were similar (Table 2).\nThere were no significant differences between the test and control groups in terms of mean post-therapy values of TOS, TAC, OSI, and S-100B protein (p>0.05) (Table 3).\nWhen the pre-therapy TOS, TAC, and OSI values were compared with the post-therapy values, the TOS and OSI values were significantly higher, but TAC values were found to be significantly lower (p<0.001 for all values). The S-100B protein levels in both groups were similar (Table 4).", "The mean pre-therapy values of TOS and OSI in the test group were significantly higher, but TAC values were lower than those in the control group (p=0.001, p<0.001, and p=0.023, respectively). The S-100B protein levels in both groups were similar (Table 2).\nThere were no significant differences between the test and control groups in terms of mean post-therapy values of TOS, TAC, OSI, and S-100B protein (p>0.05) (Table 3).\nWhen the pre-therapy TOS, TAC, and OSI values were compared with the post-therapy values, the TOS and OSI values were significantly higher, but TAC values were found to be significantly lower (p<0.001 for all values). The S-100B protein levels in both groups were similar (Table 4)." ]
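The OSI calculation described in the Methods can be sketched in Python. This is a minimal illustration of the printed formula OSI (arbitrary unit) = TOS (μmol H2O2 equivalent/L) / TAS (mmol Trolox equivalent/L) × 10; the function and argument names are illustrative, not from the paper.

```python
def oxidative_stress_index(tos_umol_h2o2: float, tac_mmol_trolox: float) -> float:
    """OSI (arbitrary units) per the printed formula:
    OSI = TOS (umol H2O2 equiv./L) / TAC (mmol Trolox equiv./L) * 10.
    Names are illustrative; the x10 scaling follows the formula as printed.
    """
    if tac_mmol_trolox <= 0:
        raise ValueError("TAC must be positive")
    return tos_umol_h2o2 / tac_mmol_trolox * 10

# Example: TOS = 25 umol H2O2 equiv./L, TAC = 1.25 mmol Trolox equiv./L
print(oxidative_stress_index(25.0, 1.25))  # 200.0
```

A higher OSI reflects a larger oxidant burden relative to antioxidant capacity, which is the direction of the pre-therapy difference reported for the hsPDA group.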
[ null, null, null, null, null, null, null, null, "results", null ]
[ "Background", "Material and Methods", "Patient population", "Clinical and laboratory data", "Echocardiographic evaluation of patent ductus arteriosus", "Calculation of total antioxidant capacity", "Calculation of total oxidant status", "Oxidative stress index", "Serum S100B measurement", "Statistical analyses", "Results", "Clinical findings", "TAC, TOS, OSI, and S-100B protein values", "Discussion", "Conclusions" ]
[ "The left-to-right shunt in hemodynamically significant patent ductus arteriosus (hsPDA) in premature newborns leads to alveolar edema and decreased lung compliance by increasing pulmonary blood flow and capillary permeability. Under this condition, the infant potentially requires more oxygen and higher mechanical ventilation support [1]. Furthermore, hsPDA may cause hypoperfusion of vital organs, leading to pathologies such as necrotizing enterocolitis (NEC), bronchopulmonary dysplasia (BPD), and acute renal insufficiency [2,3]. In addition, intraventricular hemorrhage (IVH) resulting from sudden change in pressure and damage to the white matter may be caused by defective perfusion of the brain [4,5]. Although hsPDA is definitely related to these morbidities, its causative role has not been fully determined [6]. Patients with congenital heart diseases that lead to hypoperfusion, ischemia, and chronic hypoxia cannot meet the biological requirements of the tissues and are strongly confronted with oxygen radicals [7]. To prevent the adverse effects of free radicals and oxidants such as lipid peroxidant, the body activates antioxidant defense mechanisms that include: glutathione reductase, glutathione peroxidase, superoxide dismutase, and catalase [8]. In the first days of life, premature infants have higher concentrations of hydroperoxide in their erythrocyte membrane and lower antioxidant defense compared with those of term infants [9]. S-100B protein and neuron-specific enolase (NSE) are the most valuable biochemical markers of neuronal damage and glial activation in premature infants [37]. S-100B is a calcium-binding protein found particularly in Schwann and astroglial cells [10]. 
It increases after head trauma, subarachnoid hemorrhage, paralysis, and cardiac pathologies, resulting in neurologic damage [11].\nThe purpose of this study was to evaluate the oxidant/antioxidant status in preterm infants with hsPDA by measuring the total antioxidant capacity (TAC) and total oxidant status (TOS) and to assess the neuronal damage due to oxidant stress related to hsPDA.", " Patient population The study included 77 infants of the same age group and gestational age hospitalized with the diagnosis of prematurity and respiratory distress syndrome in the Newborn Intensive Care Unit of Yüzüncü Yil University Teaching Hospital, Van, Turkey. The infants were divided into 2 groups. The first group included 37 infants, presumably ≤32 weeks of gestational age, diagnosed with hsPDA between day 3 and 7 using echocardiography (ECHO). The second group included 40 infants with no hsPDA on ECHO examination. The clinical severity of hsPDA was determined according to the PDA scoring system recommended by McNamara et al. [12]. Initially, all patients received fluids at a dose of 70–80 cc/kg/day; on subsequent days the fluid was gradually increased by 10–20 cc/kg/day, reaching a maximum of 150 cc/kg/day. The exclusion criteria for the study were congenital malformations, major congenital cardiac anomaly, genetic or metabolic disease, contraindication for the use of ibuprofen (oliguria or serum creatinine level >150 μmol/L or platelet count <75×10⁹/L), and patients who refused to participate in the study. Seven of the patients initially included in the study were thereafter excluded: 2 of the patients had unstable phase III or greater intraventricular hemorrhage, 3 patients had unclosed PDA after ibuprofen therapy, and 2 patients died of sepsis during the study period. 
Before the initiation of the study, signed permission forms from the families of the infants and approval from the Ethics Council of the University were obtained.\nThe study included 77 infants of the same age group and gestational age hospitalized with the diagnosis of prematurity and respiratory distress syndrome in the Newborn Intensive Care Unit of Yüzüncü Yil University Teaching Hospital, Van, Turkey. The infants were divided into 2 groups. The first group included 37 infants, presumably ≤32 weeks of gestational age, diagnosed with hsPDA between day 3 and 7 using electrocardiography (ECHO). The second group included 40 infants with no hsPDA on ECHO examination. The clinical severity of hsPDA was determined according to the PDA scoring system recommended by McNamara et al. [12]. Initially, all patients received fluids at a dose of 70–80 cc/kg/day; on subsequent days the fluid was gradually increased by 10–20 cc/kg/day, reaching a maximum of 150 cc/kg/day. The exclusion criteria for the study were congenital malformations, major congenital cardiac anomaly, genetic or metabolic disease, contraindication for the use of ibuprofen (oliguria or serum creatinine level >150 μmol/L or platelet count <75×10}9/L), and patients who refused to participate in the study. Seven of the patients initially included in the study were thereafter excluded: 2 of the patients had unstable phase III or greater intraventricular hemorrhage, 3 patients had unclosed PDA after ibuprofen therapy, and 2 patients died of sepsis during the study period. Before the initiation of the study, signed permission forms from the families of the infants and approval from the Ethics Council of the University were obtained.\n Clinical and laboratory data The study was performed in a prospective manner. The decision for therapy of the newborns diagnosed with hsPDA was made by a neonatologist. The SpO2, arterial blood pressure, and support ventilation of the newborns were closely followed-up. 
The cranial ultrasonographies of all patients were performed on postnatal day 1 and 5 by a neonatologist experienced in transfontanel ultrasonography. Complications such as IVH, BPD, and NEC were recorded. BPD was diagnosed according to the diagnostic criteria of the U.S. National Institutes of Health [13]. After taking basal blood samples, the patients in the test group received a single dose ibuprofen via orogastric catheter (10 mg/kg/day on day 1; 5 mg/kg/day on day 2; 5 mg/kg/day on day 3). Twelve hours after the third dose of ibuprofen, the patients were reassessed with ECHO, and once again blood samples were taken from those whose PDA had closed. The blood samples were taken either from the umbilical venous catheter or from the peripheral veins, centrifuged at 5000 rpm for 10 min, and the sera obtained were kept at −80°C until the time of analysis.\nThe study was performed in a prospective manner. The decision for therapy of the newborns diagnosed with hsPDA was made by a neonatologist. The SpO2, arterial blood pressure, and support ventilation of the newborns were closely followed-up. The cranial ultrasonographies of all patients were performed on postnatal day 1 and 5 by a neonatologist experienced in transfontanel ultrasonography. Complications such as IVH, BPD, and NEC were recorded. BPD was diagnosed according to the diagnostic criteria of the U.S. National Institutes of Health [13]. After taking basal blood samples, the patients in the test group received a single dose ibuprofen via orogastric catheter (10 mg/kg/day on day 1; 5 mg/kg/day on day 2; 5 mg/kg/day on day 3). Twelve hours after the third dose of ibuprofen, the patients were reassessed with ECHO, and once again blood samples were taken from those whose PDA had closed. 
The blood samples were taken either from the umbilical venous catheter or from the peripheral veins, centrifuged at 5000 rpm for 10 min, and the sera obtained were kept at −80°C until the time of analysis.\n Echocardiographic evaluation of patent ductus arteriosus The patients included in the study were evaluated using ECHO in short and high parasternal axis. Echocardiography was performed once or more than once according to the clinical course of each patient. We used color Doppler echocardiography system with a Vivid S6 6s sector probe (GE Healthcare, GE Medical Systems, Horten, Norway). The diagnosis of hsPDA was reached and ibuprofen therapy was started upon the determination in echocardiographic parasternal long axis, left atrial/aortic root ratio of ≥1.4 mm/kg [2], enlargement of left ventricle, holodiastolic retrograde flow in the descending aorta; and a pulse-waved Doppler turbulent systolic and diastolic flow on the ductus and abnormal antegrade diastolic flow. Tachycardia (≥160/min) and hypotension (less than tenth percentile according to birth weight and age) were evaluated as clinical findings supporting the diagnosis of hsPDA.\nThe patients included in the study were evaluated using ECHO in short and high parasternal axis. Echocardiography was performed once or more than once according to the clinical course of each patient. We used color Doppler echocardiography system with a Vivid S6 6s sector probe (GE Healthcare, GE Medical Systems, Horten, Norway). The diagnosis of hsPDA was reached and ibuprofen therapy was started upon the determination in echocardiographic parasternal long axis, left atrial/aortic root ratio of ≥1.4 mm/kg [2], enlargement of left ventricle, holodiastolic retrograde flow in the descending aorta; and a pulse-waved Doppler turbulent systolic and diastolic flow on the ductus and abnormal antegrade diastolic flow. 
Tachycardia (≥160/min) and hypotension (less than tenth percentile according to birth weight and age) were evaluated as clinical findings supporting the diagnosis of hsPDA.\n Calculation of total antioxidant capacity The plasma TAC level was measured using a new automated measurement system developed by Erel [14], which uses hydroxyl radical, one of the most effective biological radicals produced. For measurements, as Reagent 1, the existing ferrous ion solution [o-dianisidine (10 mM), ferrous ion (45 AM) in the Clark and Lubs solution (75 mM, pH 1.8)] was mixed with Reagent 2-hydrogen peroxide [H2O2 (7.5 mM) in the Clark and Lubs solution]. The sequentially produced brown-colored dianisidinyl radical cation and the radicals produced by the hydroxyl radical are strong radicals. Using this method, the antioxidant effect of the sample on a strong free radical produced by hydroxyl radical was measured. The tests gave perfect results at values under 3%. The results are expressed as mmol Trolox Equiv. L-1.\nThe plasma TAC level was measured using a new automated measurement system developed by Erel [14], which uses hydroxyl radical, one of the most effective biological radicals produced. For measurements, as Reagent 1, the existing ferrous ion solution [o-dianisidine (10 mM), ferrous ion (45 AM) in the Clark and Lubs solution (75 mM, pH 1.8)] was mixed with Reagent 2-hydrogen peroxide [H2O2 (7.5 mM) in the Clark and Lubs solution]. The sequentially produced brown-colored dianisidinyl radical cation and the radicals produced by the hydroxyl radical are strong radicals. Using this method, the antioxidant effect of the sample on a strong free radical produced by hydroxyl radical was measured. The tests gave perfect results at values under 3%. The results are expressed as mmol Trolox Equiv. L-1.\n Calculation of total oxidant status The plasma TOS level was measured using a new automated measurement system developed by Erel [15]. 
Oxidants present in the sample oxidize the ferrous ion-o-dianisidine complex to ferric ion. The oxidation reaction is mediated by glycerol molecules amply found in the reaction area. The ferric ion forms an orange-colored complex with the xylenol molecule in an acidic setting. The color saturation, which can be measured spectrophotometrically, is based on the total quantity of oxidant substances in the specimen. The results were calibrated with hydrogen peroxide and expressed in terms of micromolar hydrogen peroxide equivalent per liter (μmol H2O2 Equiv. L-1).\nThe plasma TOS level was measured using a new automated measurement system developed by Erel [15]. Oxidants present in the sample oxidize the ferrous ion-o-dianisidine complex to ferric ion. The oxidation reaction is mediated by glycerol molecules amply found in the reaction area. The ferric ion forms an orange-colored complex with the xylenol molecule in an acidic setting. The color saturation, which can be measured spectrophotometrically, is based on the total quantity of oxidant substances in the specimen. The results were calibrated with hydrogen peroxide and expressed in terms of micromolar hydrogen peroxide equivalent per liter (μmol H2O2 Equiv. L-1).\n Oxidative stress index The oxidative stress index (OSI) was taken as the percentage ratio of TOS values to TAS values. Before making the calculations, the values in the TAC test were converted into micromol values, as in the TOS test. The results were calculated using the formula OSI (arbitrary unit) =TOS (μmol H2O2 equivalent/L)/TAS (mmol Trolox equivalent/L) ×10, and the calculated values were defined as random units [15].\nThe oxidative stress index (OSI) was taken as the percentage ratio of TOS values to TAS values. Before making the calculations, the values in the TAC test were converted into micromol values, as in the TOS test. 
\n Serum S100B measurement The serum S-100B level was measured using an electrochemiluminescence immunoassay kit (ECLIA, Roche Diagnostics, Germany).\n Statistical analyses Results for normally distributed data are presented as mean ±SD, and medians are presented for non-normally distributed data. The chi-square test was used to compare demographic data between the 2 groups. To determine significant differences between the groups, the unpaired t-test was used for normally distributed data and the Mann-Whitney U test for non-normally distributed data. In comparing pre-therapy with post-therapy values, the paired t-test was used for normally distributed data and the Wilcoxon test for non-normally distributed data. To determine relationships between the variables, Pearson and Spearman correlation coefficients were used for each group. P values <0.05 were accepted as statistically significant. Statistical analyses were performed with SPSS 12.0 (SPSS Inc, Chicago, IL).
", "The study included 77 infants of similar postnatal age and gestational age hospitalized with the diagnosis of prematurity and respiratory distress syndrome in the Newborn Intensive Care Unit of Yüzüncü Yil University Teaching Hospital, Van, Turkey. The infants were divided into 2 groups. The first group included 37 infants of ≤32 weeks gestational age diagnosed with hsPDA between days 3 and 7 using echocardiography (ECHO). The second group included 40 infants with no hsPDA on ECHO examination. The clinical severity of hsPDA was determined according to the PDA scoring system recommended by McNamara et al. [12]. Initially, all patients received fluids at a dose of 70–80 cc/kg/day; on subsequent days the fluid was gradually increased by 10–20 cc/kg/day, reaching a maximum of 150 cc/kg/day. The exclusion criteria for the study were congenital malformations, major congenital cardiac anomaly, genetic or metabolic disease, contraindication for the use of ibuprofen (oliguria, serum creatinine level >150 μmol/L, or platelet count <75×10⁹/L), and refusal to participate in the study. Seven of the patients initially included in the study were subsequently excluded: 2 patients had unstable grade III or greater intraventricular hemorrhage, 3 patients had unclosed PDA after ibuprofen therapy, and 2 patients died of sepsis during the study period. Before the initiation of the study, signed permission forms from the families of the infants and approval from the Ethics Council of the University were obtained.", "The study was performed in a prospective manner. The decision for therapy of the newborns diagnosed with hsPDA was made by a neonatologist.
The SpO2, arterial blood pressure, and ventilation support of the newborns were closely followed up. Cranial ultrasonography of all patients was performed on postnatal days 1 and 5 by a neonatologist experienced in transfontanel ultrasonography. Complications such as IVH, BPD, and NEC were recorded. BPD was diagnosed according to the diagnostic criteria of the U.S. National Institutes of Health [13]. After basal blood samples were taken, the patients in the test group received ibuprofen via orogastric catheter as a single daily dose (10 mg/kg on day 1; 5 mg/kg on day 2; 5 mg/kg on day 3). Twelve hours after the third dose of ibuprofen, the patients were reassessed with ECHO, and blood samples were taken once again from those whose PDA had closed. The blood samples were taken either from the umbilical venous catheter or from the peripheral veins and centrifuged at 5000 rpm for 10 min, and the sera obtained were kept at −80°C until the time of analysis.", "The patients included in the study were evaluated using ECHO in the short and high parasternal axes. Echocardiography was performed once or more than once according to the clinical course of each patient. We used a color Doppler echocardiography system with a Vivid S6 6s sector probe (GE Healthcare, GE Medical Systems, Horten, Norway). The diagnosis of hsPDA was made, and ibuprofen therapy started, upon determination in the echocardiographic parasternal long axis of a left atrial/aortic root ratio of ≥1.4 mm/kg [2], enlargement of the left ventricle, holodiastolic retrograde flow in the descending aorta, and pulsed-wave Doppler showing turbulent systolic and diastolic flow across the ductus with abnormal antegrade diastolic flow.
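The diagnostic thresholds used in this study (echocardiographic criterion plus the supporting clinical findings of tachycardia ≥160/min and hypotension below the 10th percentile) can be sketched as a hypothetical screening helper. The thresholds are taken from the text, but the way they are combined below (echocardiographic criterion plus at least one supporting finding) is an illustrative assumption, not the authors' algorithm:

```python
def supports_hspda_diagnosis(la_ao_ratio: float,
                             heart_rate_per_min: float,
                             bp_percentile: float) -> bool:
    """Hypothetical hsPDA screen (illustrative only).

    Thresholds from the text:
      - left atrial/aortic root ratio >= 1.4 (echocardiographic criterion)
      - tachycardia: heart rate >= 160/min (supporting finding)
      - hypotension: blood pressure < 10th percentile (supporting finding)
    The combining rule (echo criterion AND >= 1 supporting finding)
    is an assumption for illustration.
    """
    echo_positive = la_ao_ratio >= 1.4
    tachycardia = heart_rate_per_min >= 160
    hypotension = bp_percentile < 10
    return echo_positive and (tachycardia or hypotension)

print(supports_hspda_diagnosis(1.5, 170, 50))  # -> True
print(supports_hspda_diagnosis(1.2, 170, 5))   # -> False
```

In the study itself the diagnosis rested on the full echocardiographic picture interpreted by a neonatologist; this function only encodes the numeric thresholds.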
Tachycardia (≥160/min) and hypotension (below the 10th percentile for birth weight and age) were evaluated as clinical findings supporting the diagnosis of hsPDA.", "The plasma TAC level was measured using a new automated measurement system developed by Erel [14], which uses the hydroxyl radical, one of the most potent biological radicals. For the measurements, Reagent 1, a ferrous ion solution [o-dianisidine (10 mM), ferrous ion (45 μM) in Clark and Lubs solution (75 mM, pH 1.8)], was mixed with Reagent 2 [hydrogen peroxide (H2O2, 7.5 mM) in Clark and Lubs solution]. The sequentially produced brown-colored dianisidinyl radical cation and the radicals produced by the hydroxyl radical are potent radicals. Using this method, the antioxidant effect of the sample against the strong free radicals generated by the hydroxyl radical was measured. The assay showed excellent precision, with values under 3%. The results are expressed as mmol Trolox Equiv./L.", "The plasma TOS level was measured using a new automated measurement system developed by Erel [15]. Oxidants present in the sample oxidize the ferrous ion-o-dianisidine complex to ferric ion. The oxidation reaction is enhanced by glycerol molecules, which are abundant in the reaction medium. The ferric ion forms an orange-colored complex with xylenol orange in an acidic medium. The color intensity, which can be measured spectrophotometrically, is proportional to the total quantity of oxidant substances in the specimen. The results were calibrated with hydrogen peroxide and expressed in terms of micromolar hydrogen peroxide equivalent per liter (μmol H2O2 Equiv./L).", "The oxidative stress index (OSI) was defined as the percentage ratio of the TOS value to the TAC value. Before the calculation, the TAC values were converted into micromolar units, as in the TOS test.
The results were calculated using the formula OSI (arbitrary unit) = TOS (μmol H2O2 equivalent/L)/TAC (mmol Trolox equivalent/L) × 10, and the calculated values were expressed as arbitrary units [15].", "The serum S-100B level was measured using an electrochemiluminescence immunoassay kit (ECLIA, Roche Diagnostics, Germany).", "Results for normally distributed data are presented as mean ±SD, and medians are presented for non-normally distributed data. The chi-square test was used to compare demographic data between the 2 groups. To determine significant differences between the groups, the unpaired t-test was used for normally distributed data and the Mann-Whitney U test for non-normally distributed data. In comparing pre-therapy with post-therapy values, the paired t-test was used for normally distributed data and the Wilcoxon test for non-normally distributed data. To determine relationships between the variables, Pearson and Spearman correlation coefficients were used for each group. P values <0.05 were accepted as statistically significant. Statistical analyses were performed with SPSS 12.0 (SPSS Inc, Chicago, IL).", " Clinical findings The clinical, perinatal, and natal features of the hsPDA patients (test group) and the control group patients are shown in Table 1. There were no significant differences between the two groups in terms of birth weight, birth week, sex, antenatal steroid use, Apgar score, surfactant therapy, type of delivery, and inotropic support, but the mean duration of mechanical ventilation support in the PDA group was significantly longer than that in the control group (p<0.001).
\n TAC, TOS, OSI, and S-100B protein values The mean pre-therapy values of TOS and OSI in the test group were significantly higher, and TAC values were significantly lower, than those in the control group (p=0.001, p<0.001, and p=0.023, respectively). The S-100B protein levels in the two groups were similar (Table 2).\nThere were no significant differences between the test and control groups in terms of mean post-therapy values of TOS, TAC, OSI, and S-100B protein (p>0.05) (Table 3).\nWhen the pre-therapy TOS, TAC, and OSI values were compared with the post-therapy values, pre-therapy TOS and OSI were significantly higher and pre-therapy TAC was significantly lower (p<0.001 for all values). The S-100B protein levels before and after therapy were similar (Table 4).", "The clinical, perinatal, and natal features of the hsPDA patients (test group) and the control group patients are shown in Table 1.
There were no significant differences between the two groups in terms of birth weight, birth week, sex, antenatal steroid use, Apgar score, surfactant therapy, type of delivery, and inotropic support, but the mean duration of mechanical ventilation support in the PDA group was significantly longer than that in the control group (p<0.001).", "The mean pre-therapy values of TOS and OSI in the test group were significantly higher, and TAC values were significantly lower, than those in the control group (p=0.001, p<0.001, and p=0.023, respectively). The S-100B protein levels in the two groups were similar (Table 2).\nThere were no significant differences between the test and control groups in terms of mean post-therapy values of TOS, TAC, OSI, and S-100B protein (p>0.05) (Table 3).\nWhen the pre-therapy TOS, TAC, and OSI values were compared with the post-therapy values, pre-therapy TOS and OSI were significantly higher and pre-therapy TAC was significantly lower (p<0.001 for all values). The S-100B protein levels before and after therapy were similar (Table 4).", "In this study, we evaluated how hsPDA, a cause of significantly increased morbidity and mortality in infants, affects the body's total antioxidant capacity and total oxidant status, as well as the serum S-100B level, one of the most important markers of cerebral damage.\nInfants encounter oxidative stress because they are born into a hyperoxic environment after a relatively hypoxic fetal life. Oxidative stress, together with other factors (decreased prostaglandins E2 and I2 and the role of platelets), enables the ductus arteriosus to close physiologically within the first 3 days of life [2,16]. In the case of PDA, the high oxygen requirement exposes premature infants, who are already prone to oxidative stress, to more free oxygen radicals.
The oxidant defense mechanism and antioxidant enzymatic system become increasingly active toward the last weeks of pregnancy; hence, the antioxidant capacity of premature infants is thought to be low [17,18]. Another study [19] reported that serum antioxidant enzyme activity in premature infants was lower than that in term infants, but oxidant markers were found in similar quantities in both groups of infants. In our test group, the pre-therapy TAC levels were lower than both the post-therapy levels and the TAC levels of the control group, and the patients with hsPDA had significantly higher levels of TOS and OSI than the control patients. The marked decrease in TOS and OSI levels in hsPDA patients after therapy was related to decreased exposure to free oxygen radicals secondary to the closure of the PDA. Free oxygen radicals damage the proteins, carbohydrates, lipids, and nucleic acids in cell membranes [20]. Free oxygen radicals are natural reactive components that are continuously produced in the body and have positive as well as negative effects. The body needs a strong antioxidant system to limit the negative effects of these radicals. The antioxidant system consists of enzymes (catalase, glutathione peroxidase, and superoxide dismutase), vitamins (vitamins A, E, and C), uric acid, and glutathione. When the balance between the oxidant and antioxidant systems is upset in favor of oxidants, various disorders emerge due to increased oxidative stress [21]. Rokicki et al. [22] reported that patients with congenital heart disease with left-to-right shunt have low antioxidant activity but high oxidant activity (with the exception of uric acid). Ercan et al. [21], in their study of 91 patients, reported that serum TAC, TOS, and OSI levels in cyanotic congenital heart disease patients were higher than those in acyanotic congenital heart disease patients and the control group.
The same study [21] found no statistically significant differences between the acyanotic congenital heart disease patients (19 cases of ventricular septal defect, 5 cases of atrial septal defect, and 6 cases of PDA) and control patients in terms of TAC, TOS, and OSI levels. Particularly in patients with PDA, the increase in free oxygen radicals caused by the high oxygen requirement may lead to an imbalance between oxidants and antioxidants in infants in whom the antioxidant system has not yet fully developed. The increasing prevalence of complications such as IVH, BPD, NEC, periventricular leukomalacia, and white matter damage in PDA with no apparent cause might be due to this imbalance. It has been reported that BPD developed later in patients with high levels of peroxidase in urine and tracheal lavage specimens [23,24]. Another study [25] reported high values of TOS and OSI in BPD patients. Aydemir et al. [26] found quite high levels of serum TOS and OSI in patients with NEC, but TAC levels similar to those of the control group. Left ventricular volume overload and the steal phenomenon due to PDA lead to decreased perfusion and oxygenation of vital organs such as the brain. Since the cerebrovascular autoregulation capacity of the immature brain is not fully developed and the existing partial autoregulation tends to be easily upset, the brain is more prone to injury [27]. The critical oxygen level required for sufficient oxygenation of the premature infant's brain is not yet fully known. PDA in premature infants has been reported to be related to IVH and periventricular leukomalacia, and this negative effect of PDA might cause neurodevelopmental disorders [28–31]. Such critical reports on PDA have consequently led to the conclusion that PDA should be closed as soon as possible.
Therefore, based on ECHO findings on the first day of life, surgical ligation performed in the delivery room and medical treatment with indomethacin or ibuprofen have been widely used as therapies for PDA [32,33]. With the introduction of near-infrared spectroscopy (NIRS), which measures regional cerebral tissue oxygen saturation, the effect of PDA on the cerebral oxygenation of very premature infants could be measured [34]. Determination of cerebral oxygenation and oxygen extraction using NIRS showed that cerebral oxygenation and fractional tissue oxygen extraction (FTOE) were decreased in PDA patients, but further injury to the immature brain tissue could be prevented by early diagnosis, appropriate treatment, and maintenance of cerebral oxygenation. In contrast to the report that indomethacin, through vasoconstriction, harms tissue oxygenation and thereby causes tissue damage [35], another study [34] found that the already low cerebral tissue oxygenation did not worsen after indomethacin therapy. Another study that used a similar methodology [36] reported that sudden changes in pressure after surgical ligation were more prominent than after conservative or indomethacin therapy; however, there was no relationship between hsPDA and worsening neuroimaging abnormalities before and after therapy. In our study, when the test cases were compared with control cases and pre-therapy test cases were compared with post-therapy test cases, the S-100B levels, which are expected to rise in direct tissue damage, did not show significant changes.", "The presence of hsPDA in premature infants, in whom the oxidant defense mechanism is not fully developed, causes an increase in total oxidant status and oxidative stress index, which leads to tissue/organ damage. We determined that in patients with hsPDA no significant cerebral injury developed before or after therapy.
To determine the rate of cerebral damage in patients with hsPDA, studies with larger numbers of patients are needed to assess the correlation between the results obtained from devices such as NIRS, which measure local tissue oxygenation, and serum NSE and S-100B levels, which are accepted as the best markers of cerebral damage." ]
[ null, "materials|methods", null, "methods", null, null, null, null, null, null, "results", "results", null, "discussion", "conclusions" ]
[ "Antioxidants", "Ductus Arteriosus, Patent", "Oxidative Stress", "S100 Calcium Binding Protein beta Subunit" ]
Background: The left-to-right shunt in hemodynamically significant patent ductus arteriosus (hsPDA) in premature newborns leads to alveolar edema and decreased lung compliance by increasing pulmonary blood flow and capillary permeability. Under this condition, the infant potentially requires more oxygen and higher mechanical ventilation support [1]. Furthermore, hsPDA may cause hypoperfusion of vital organs, leading to pathologies such as necrotizing enterocolitis (NEC), bronchopulmonary dysplasia (BPD), and acute renal insufficiency [2,3]. In addition, intraventricular hemorrhage (IVH) resulting from sudden changes in pressure, as well as damage to the white matter, may be caused by defective perfusion of the brain [4,5]. Although hsPDA is definitely related to these morbidities, its causative role has not been fully determined [6]. Patients with congenital heart diseases that lead to hypoperfusion, ischemia, and chronic hypoxia cannot meet the biological requirements of the tissues and are strongly confronted with oxygen radicals [7]. To prevent the adverse effects of free radicals and oxidants such as lipid peroxides, the body activates antioxidant defense mechanisms that include glutathione reductase, glutathione peroxidase, superoxide dismutase, and catalase [8]. In the first days of life, premature infants have higher concentrations of hydroperoxide in their erythrocyte membranes and lower antioxidant defense compared with term infants [9]. S-100B protein and neuron-specific enolase (NSE) are the most valuable biochemical markers of neuronal damage and glial activation in premature infants [37]. S-100B is a calcium-binding protein found particularly in Schwann and astroglial cells [10]. It increases after head trauma, subarachnoid hemorrhage, paralysis, and cardiac pathologies resulting in neurologic damage [11].
The purpose of this study was to evaluate the oxidant/antioxidant status in preterm infants with hsPDA by measuring the total antioxidant capacity (TAC) and total oxidant status (TOS) and to assess the neuronal damage due to oxidant stress related to hsPDA. Material and Methods: Patient population The study included 77 infants of similar postnatal age and gestational age hospitalized with the diagnosis of prematurity and respiratory distress syndrome in the Newborn Intensive Care Unit of Yüzüncü Yil University Teaching Hospital, Van, Turkey. The infants were divided into 2 groups. The first group included 37 infants of ≤32 weeks gestational age diagnosed with hsPDA between days 3 and 7 using echocardiography (ECHO). The second group included 40 infants with no hsPDA on ECHO examination. The clinical severity of hsPDA was determined according to the PDA scoring system recommended by McNamara et al. [12]. Initially, all patients received fluids at a dose of 70–80 cc/kg/day; on subsequent days the fluid was gradually increased by 10–20 cc/kg/day, reaching a maximum of 150 cc/kg/day. The exclusion criteria for the study were congenital malformations, major congenital cardiac anomaly, genetic or metabolic disease, contraindication for the use of ibuprofen (oliguria, serum creatinine level >150 μmol/L, or platelet count <75×10⁹/L), and refusal to participate in the study. Seven of the patients initially included in the study were subsequently excluded: 2 patients had unstable grade III or greater intraventricular hemorrhage, 3 patients had unclosed PDA after ibuprofen therapy, and 2 patients died of sepsis during the study period. Before the initiation of the study, signed permission forms from the families of the infants and approval from the Ethics Council of the University were obtained.
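The fluid regimen described above (70–80 cc/kg/day initially, increased by 10–20 cc/kg/day on subsequent days, to a maximum of 150 cc/kg/day) can be sketched as a small helper. The midpoint starting dose and step size used below are assumptions for illustration; the text gives ranges, not fixed values:

```python
def daily_fluid_cc_per_kg(day: int, start: float = 75.0, step: float = 15.0,
                          maximum: float = 150.0) -> float:
    """Hypothetical fluid schedule per the text: 70-80 cc/kg/day on day 1
    (midpoint 75 assumed), increased by 10-20 cc/kg/day (midpoint 15
    assumed) on subsequent days, capped at 150 cc/kg/day."""
    if day < 1:
        raise ValueError("day starts at 1")
    return min(start + (day - 1) * step, maximum)

print(daily_fluid_cc_per_kg(1))  # -> 75.0
print(daily_fluid_cc_per_kg(7))  # -> 150.0 (cap reached)
```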
Clinical and laboratory data The study was performed in a prospective manner. The decision for therapy of the newborns diagnosed with hsPDA was made by a neonatologist. The SpO2, arterial blood pressure, and ventilation support of the newborns were closely followed up. Cranial ultrasonography of all patients was performed on postnatal days 1 and 5 by a neonatologist experienced in transfontanel ultrasonography.
Complications such as IVH, BPD, and NEC were recorded. BPD was diagnosed according to the diagnostic criteria of the U.S. National Institutes of Health [13]. After basal blood samples were taken, the patients in the test group received ibuprofen via orogastric catheter as a single daily dose (10 mg/kg on day 1; 5 mg/kg on day 2; 5 mg/kg on day 3). Twelve hours after the third dose of ibuprofen, the patients were reassessed with ECHO, and blood samples were taken once again from those whose PDA had closed. The blood samples were taken either from the umbilical venous catheter or from the peripheral veins and centrifuged at 5000 rpm for 10 min, and the sera obtained were kept at −80°C until the time of analysis. Echocardiographic evaluation of patent ductus arteriosus The patients included in the study were evaluated using ECHO in the short and high parasternal axes.
Echocardiography was performed once or more than once according to the clinical course of each patient. We used a color Doppler echocardiography system with a Vivid S6 6s sector probe (GE Healthcare, GE Medical Systems, Horten, Norway). The diagnosis of hsPDA was made, and ibuprofen therapy started, upon determination in the echocardiographic parasternal long axis of a left atrial/aortic root ratio of ≥1.4 mm/kg [2], enlargement of the left ventricle, holodiastolic retrograde flow in the descending aorta, and pulsed-wave Doppler showing turbulent systolic and diastolic flow across the ductus with abnormal antegrade diastolic flow. Tachycardia (≥160/min) and hypotension (below the 10th percentile for birth weight and age) were evaluated as clinical findings supporting the diagnosis of hsPDA. Calculation of total antioxidant capacity The plasma TAC level was measured using a new automated measurement system developed by Erel [14], which uses the hydroxyl radical, one of the most potent biological radicals.
For the measurements, Reagent 1, a ferrous ion solution [o-dianisidine (10 mM), ferrous ion (45 μM) in Clark and Lubs solution (75 mM, pH 1.8)], was mixed with Reagent 2 [hydrogen peroxide (H2O2, 7.5 mM) in Clark and Lubs solution]. The sequentially produced brown-colored dianisidinyl radical cation and the radicals produced by the hydroxyl radical are potent radicals. Using this method, the antioxidant effect of the sample against the strong free radicals generated by the hydroxyl radical was measured. The assay showed excellent precision, with values under 3%. The results are expressed as mmol Trolox Equiv./L. Calculation of total oxidant status The plasma TOS level was measured using a new automated measurement system developed by Erel [15]. Oxidants present in the sample oxidize the ferrous ion-o-dianisidine complex to ferric ion. The oxidation reaction is enhanced by glycerol molecules, which are abundant in the reaction medium. The ferric ion forms an orange-colored complex with xylenol orange in an acidic medium. The color intensity, which can be measured spectrophotometrically, is proportional to the total quantity of oxidant substances in the specimen.
The results were calibrated with hydrogen peroxide and expressed in terms of micromolar hydrogen peroxide equivalent per liter (μmol H2O2 Equiv./L). Oxidative stress index The oxidative stress index (OSI) was defined as the percentage ratio of the TOS value to the TAC value. Before the calculation, the TAC values were converted into micromolar units, as in the TOS test. The results were calculated using the formula OSI (arbitrary unit) = TOS (μmol H2O2 equivalent/L)/TAC (mmol Trolox equivalent/L) × 10, and the calculated values were expressed as arbitrary units [15]. Serum S100B measurement The serum S-100B level was measured using an electrochemiluminescence immunoassay kit (ECLIA, Roche Diagnostics, Germany).
Statistical analyses The results of groups with normal distribution are presented as mean ±SD, and the median was use to present results that showed abnormal distribution. The chi-square test was used to evaluate demographic data between the 2 groups. To determine significant differences between the groups, the unpaired t-test for data with normal distribution and Mann-Whitney U test for data with non-normal distribution were used. In comparing pre-therapy with post-therapy, the paired t-test for data with normal distribution and Wilcoxon test for data with non-normal distribution were performed. To determine the relationship between the variables, for each group, Pearson and Spearman correlation coefficients were used. The p values <0.05 of the obtained results was accepted as statistically significant. For statistical analysis, SPSS 12.0 (SPSS Inc, Chicago, IL) was used. The results of groups with normal distribution are presented as mean ±SD, and the median was use to present results that showed abnormal distribution. The chi-square test was used to evaluate demographic data between the 2 groups. To determine significant differences between the groups, the unpaired t-test for data with normal distribution and Mann-Whitney U test for data with non-normal distribution were used. In comparing pre-therapy with post-therapy, the paired t-test for data with normal distribution and Wilcoxon test for data with non-normal distribution were performed. To determine the relationship between the variables, for each group, Pearson and Spearman correlation coefficients were used. The p values <0.05 of the obtained results was accepted as statistically significant. For statistical analysis, SPSS 12.0 (SPSS Inc, Chicago, IL) was used. 
Patient population: The study included 77 infants of the same age group and gestational age hospitalized with the diagnosis of prematurity and respiratory distress syndrome in the Newborn Intensive Care Unit of Yüzüncü Yil University Teaching Hospital, Van, Turkey. The infants were divided into 2 groups. The first group included 37 infants, presumably ≤32 weeks of gestational age, diagnosed with hsPDA between days 3 and 7 using echocardiography (ECHO). The second group included 40 infants with no hsPDA on ECHO examination. The clinical severity of hsPDA was determined according to the PDA scoring system recommended by McNamara et al. [12]. Initially, all patients received fluids at a dose of 70–80 cc/kg/day; on subsequent days the fluid was gradually increased by 10–20 cc/kg/day, reaching a maximum of 150 cc/kg/day. The exclusion criteria for the study were congenital malformations, major congenital cardiac anomaly, genetic or metabolic disease, contraindication for the use of ibuprofen (oliguria, serum creatinine level >150 μmol/L, or platelet count <75×10^9/L), and refusal to participate in the study. Seven of the patients initially included in the study were subsequently excluded: 2 patients had unstable grade III or greater intraventricular hemorrhage, 3 patients had unclosed PDA after ibuprofen therapy, and 2 patients died of sepsis during the study period. Before the initiation of the study, signed consent forms from the families of the infants and approval from the Ethics Council of the University were obtained. Clinical and laboratory data: The study was performed in a prospective manner. The decision for therapy of the newborns diagnosed with hsPDA was made by a neonatologist. The SpO2, arterial blood pressure, and ventilation support of the newborns were closely followed up. Cranial ultrasonography of all patients was performed on postnatal days 1 and 5 by a neonatologist experienced in transfontanel ultrasonography.
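The fluid regimen quoted above (70–80 cc/kg/day initially, increased by 10–20 cc/kg/day up to a 150 cc/kg/day ceiling) can be sketched as a small helper. The midpoint values used for the starting dose and daily increment are illustrative assumptions, not figures reported by the study.

```python
def daily_fluid_cc_per_kg(day, start=75.0, step=15.0, cap=150.0):
    """Illustrative daily fluid target (cc/kg) for the regimen described.

    start and step are midpoints of the quoted ranges (70-80 and 10-20
    cc/kg/day) and are assumptions; cap is the stated 150 cc/kg/day maximum.
    """
    if day < 1:
        raise ValueError("postnatal day starts at 1")
    return min(start + step * (day - 1), cap)
```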
Complications such as IVH, BPD, and NEC were recorded. BPD was diagnosed according to the diagnostic criteria of the U.S. National Institutes of Health [13]. After basal blood samples were taken, the patients in the test group received one daily dose of ibuprofen via orogastric catheter (10 mg/kg/day on day 1; 5 mg/kg/day on day 2; 5 mg/kg/day on day 3). Twelve hours after the third dose of ibuprofen, the patients were reassessed with ECHO, and blood samples were taken once again from those whose PDA had closed. The blood samples were taken either from the umbilical venous catheter or from the peripheral veins and centrifuged at 5000 rpm for 10 min, and the sera obtained were kept at −80°C until the time of analysis. Echocardiographic evaluation of patent ductus arteriosus: The patients included in the study were evaluated using ECHO in the short and high parasternal axes. Echocardiography was performed once or more often according to the clinical course of each patient. We used a color Doppler echocardiography system with a Vivid S6 6s sector probe (GE Healthcare, GE Medical Systems, Horten, Norway). The diagnosis of hsPDA was reached and ibuprofen therapy was started upon the determination, in the echocardiographic parasternal long axis, of a left atrial/aortic root ratio of ≥1.4 mm/kg [2], enlargement of the left ventricle, holodiastolic retrograde flow in the descending aorta, and pulse-waved Doppler turbulent systolic and diastolic flow in the ductus with abnormal antegrade diastolic flow. Tachycardia (≥160/min) and hypotension (less than the tenth percentile according to birth weight and age) were evaluated as clinical findings supporting the diagnosis of hsPDA. Calculation of total antioxidant capacity: The plasma TAC level was measured using a new automated measurement system developed by Erel [14], which uses the hydroxyl radical, one of the most effective biological radicals produced.
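Purely as an illustration, the numeric thresholds quoted above (left atrial/aortic root ratio ≥1.4, tachycardia ≥160/min, hypotension below the tenth percentile) can be collected into simple flags. The Doppler flow findings that the diagnosis also requires are not modelled, and the function name is hypothetical; this is not a diagnostic tool.

```python
def hspda_flags(la_ao_ratio, heart_rate_bpm, hypotensive):
    """Collect the quoted numeric thresholds into flags (illustrative only;
    the Doppler flow criteria required for diagnosis are omitted)."""
    echo_criterion = la_ao_ratio >= 1.4                      # LA/aortic root ratio
    clinical_support = heart_rate_bpm >= 160 or hypotensive  # tachycardia / hypotension
    return {"echo": echo_criterion, "clinical": clinical_support}
```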
For measurements, Reagent 1, a ferrous ion solution [o-dianisidine (10 mM), ferrous ion (45 μM) in Clark and Lubs solution (75 mM, pH 1.8)], was mixed with Reagent 2, hydrogen peroxide [H2O2 (7.5 mM) in Clark and Lubs solution]. The sequentially produced brown-colored dianisidinyl radical cation and the radicals produced by the hydroxyl radical are strong radicals. Using this method, the antioxidant effect of the sample against the strong free radicals produced by the hydroxyl radical was measured. The assay showed excellent precision, with coefficients of variation under 3%. The results are expressed as mmol Trolox Equiv. L−1. Calculation of total oxidant status: The plasma TOS level was measured using a new automated measurement system developed by Erel [15]. Oxidants present in the sample oxidize the ferrous ion–o-dianisidine complex to ferric ion. The oxidation reaction is mediated by glycerol molecules amply present in the reaction medium. The ferric ion forms an orange-colored complex with the xylenol molecule in an acidic setting. The color intensity, which can be measured spectrophotometrically, reflects the total quantity of oxidant substances in the specimen. The results were calibrated with hydrogen peroxide and expressed in terms of micromolar hydrogen peroxide equivalent per liter (μmol H2O2 Equiv. L−1). Oxidative stress index: The oxidative stress index (OSI) was taken as the percentage ratio of TOS values to TAS values. Before making the calculations, the values in the TAC test were converted into micromolar values, as in the TOS test. The results were calculated using the formula OSI (arbitrary unit) = TOS (μmol H2O2 equivalent/L) / TAS (mmol Trolox equivalent/L) × 10, and the calculated values were expressed as arbitrary units [15]. Serum S100B measurement: The serum S-100B level was measured using an electrochemiluminescence immunoassay kit (ECLIA, Roche Diagnostics, Germany).
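The OSI formula above can be transcribed literally as a small helper. Note that the text also mentions converting TAC to micromolar units first, so the exact scaling used in the original method may differ; this sketch simply mirrors the printed formula.

```python
def oxidative_stress_index(tos_umol_h2o2_per_l, tac_mmol_trolox_per_l):
    """Literal transcription of the printed formula:
    OSI (arbitrary units) = TOS (umol H2O2 equiv./L)
                            / TAC (mmol Trolox equiv./L) * 10
    """
    return tos_umol_h2o2_per_l / tac_mmol_trolox_per_l * 10
```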
Statistical analyses: Results of normally distributed variables are presented as mean ±SD; medians are given for variables with non-normal distribution. The chi-square test was used to compare demographic data between the 2 groups. To determine significant differences between the groups, the unpaired t-test was used for data with normal distribution and the Mann-Whitney U test for data with non-normal distribution. In comparing pre-therapy with post-therapy values, the paired t-test was used for data with normal distribution and the Wilcoxon test for data with non-normal distribution. To determine relationships between variables within each group, Pearson and Spearman correlation coefficients were used. P values <0.05 were considered statistically significant. Statistical analyses were performed with SPSS 12.0 (SPSS Inc, Chicago, IL). Results: Clinical findings: The clinical, perinatal, and natal features of the hsPDA patients (test group) and the control group patients are shown in Table 1. There were no significant differences between the two groups in terms of birth weight, birth week, sex, antenatal steroid use, Apgar score, surfactant therapy, type of delivery, or inotropic support, but the mean duration of mechanical ventilation support in the PDA group was significantly longer than that in the control group (p<0.001).
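The test-selection rule described under Statistical analyses can be sketched with SciPy. Shapiro-Wilk is used here as the normality check, which is an assumption: the paper does not state how normality was assessed.

```python
from scipy import stats

def compare_groups(a, b, paired=False, alpha=0.05):
    """Pick a test as described in the paper: (paired) t-test for normally
    distributed data, Wilcoxon / Mann-Whitney U otherwise. Normality is
    checked with Shapiro-Wilk (an assumption; the paper does not name its
    normality test)."""
    normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
    if paired:
        stat, p = stats.ttest_rel(a, b) if normal else stats.wilcoxon(a, b)
    else:
        stat, p = stats.ttest_ind(a, b) if normal else stats.mannwhitneyu(a, b)
    return stat, p
```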
TAC, TOS, OSI, and S-100B protein values: The mean pre-therapy TOS and OSI values in the test group were significantly higher, and TAC values significantly lower, than those in the control group (p=0.001, p<0.001, and p=0.023, respectively). S-100B protein levels were similar in both groups (Table 2). There were no significant differences between the test and control groups in terms of mean post-therapy TOS, TAC, OSI, and S-100B protein values (p>0.05) (Table 3). When pre-therapy TOS, TAC, and OSI values were compared with post-therapy values, the pre-therapy TOS and OSI values were significantly higher and TAC values significantly lower (p<0.001 for all values). S-100B protein levels were again similar (Table 4).
Discussion: In this study, we evaluated how hsPDA, a cause of significantly increased morbidity and mortality in preterm infants, affects the body's total antioxidant capacity, total oxidant status, and serum S-100B level, one of the most important markers of cerebral damage. Infants encounter oxidative stress because they are born into a hyperoxic environment after a relatively hypoxic fetal life. The effect of oxidative stress, together with other factors (decreased prostaglandins E2 and I2 and the role of platelets), enables the ductus arteriosus to physiologically close within the first 3 days of life [2,16]. In the case of PDA, the high requirement for oxygen leads premature infants, who are already prone to oxidative stress, to encounter more free oxygen radicals. The oxidant defense mechanism and antioxidant enzymatic system are increasingly activated towards the last weeks of pregnancy; hence, the antioxidant capacity of premature infants is thought to be low [17,18]. Another study [19] reported that serum antioxidant enzyme activity in premature infants was lower than that in term infants, but oxidant markers were found in similar quantities in both groups of infants.
In the test group, the pre-therapy TAC levels were lower than both the post-therapy levels and the TAC levels of the control group, and the patients with hsPDA had significantly higher levels of TOS and OSI than the control patients. The marked decrease in TOS and OSI levels in hsPDA patients after therapy was related to decreased exposure to free oxygen radicals secondary to the closure of the PDA. Free oxygen radicals damage the proteins, carbohydrates, lipids, and nucleic acids in cell membranes [20]. Free oxygen radicals are natural reactive components that are continuously produced in the body and have positive as well as negative effects. The body needs a strong antioxidant system in order to limit the negative effects of these radicals. The antioxidant system consists of enzymes (catalase, glutathione peroxidase, and superoxide dismutase), vitamins (vitamins A, E, and C), uric acid, and glutathione. When the balance between the oxidant and antioxidant systems is upset in favor of oxidants, various disorders emerge due to increased oxidative stress [21]. Rokicki et al. [22] reported that in patients with congenital heart disease with left-to-right shunt, antioxidant activity is low but oxidant activity is high (with the exception of uric acid). Ercan et al. [21], in their study of 91 patients, reported that serum TAC, TOS, and OSI levels in cyanotic congenital heart disease patients were higher than those in acyanotic congenital heart disease patients and the control group. The same study [21] found no statistically significant differences between the acyanotic congenital heart disease patients (19 cases of ventricular septal defect, 5 cases of atrial septal defect, and 6 cases of PDA) and control patients in terms of TAC, TOS, and OSI levels.
Particularly in patients with PDA, the increase in free oxygen radicals caused by high oxygen requirement may lead to an imbalance between oxidants and antioxidants in infants in whom the antioxidant system has not yet fully developed. The increasing prevalence of complications such as IVH, BPD, NEC, periventricular leukomalacia, and white matter damage in PDA with no apparent reason might be due to this imbalance. It was reported that BPD developed later in patients with high levels of peroxidase in urine and tracheal lavage specimens [23,24]. Another study [25] reported high values of TOS and OSI in BPD patients. Aydemir et al. [26] found quite high levels of serum TOS and OSI in patients with NEC, but TAC levels similar to those of the control group. Left ventricular volume overload and the steal phenomenon due to PDA lead to decreased perfusion and oxygenation of vital organs such as the brain. Since the cerebrovascular autoregulation capacity of the immature brain is not fully developed and the existing partial autoregulation tends to be easily upset, the brain is more prone to injury [27]. The critical oxygen level required for sufficient oxygenation of the premature infant's brain is not yet fully known. PDA in premature infants has been reported to be related to IVH and periventricular leukomalacia, and this negative effect of PDA might cause neurodevelopmental disorders [28–31]. Such critical reports on PDA have consequently led to the conclusion that PDA should be closed as soon as possible. Therefore, based on ECHO findings on the first day of life, surgical ligation performed in the delivery room and medical treatment with indomethacin or ibuprofen have been widely used as therapies for PDA [32,33]. With the introduction of near-infrared spectroscopy (NIRS), which measures regional cerebral tissue oxygen saturation, the effect of PDA on the cerebral oxygenation of very premature infants could be measured [34].
Determination of cerebral oxygenation and oxygen extraction using NIRS showed that cerebral oxygenation and fractional tissue oxygen extraction (FTOE) were decreased in PDA patients, but further injury to the immature brain tissue could be prevented by early diagnosis, appropriate treatment, and maintenance of cerebral oxygenation. In contrast to the report that indomethacin, through vasoconstriction, harms tissue oxygenation and thereby causes tissue damage [35], another study [34] found that the low cerebral tissue perfusion and oxygenation already present did not worsen after indomethacin therapy. Another study using a similar methodology [36] reported that sudden changes in pressure after surgical ligation were more prominent than after conservative and indomethacin therapy; however, there was no relationship between hsPDA and worsening neuroimaging abnormalities before and after therapy. In our study, when the test cases were compared with control cases, and pre-therapy test cases were compared with post-therapy test cases, the S-100B levels, which are expected to rise in direct tissue damage, did not show significant changes. Conclusions: The presence of hsPDA in premature infants, in whom the oxidant defense mechanism is not fully developed, causes an increase in total oxidant status and oxidative stress index, which leads to tissue/organ damage. We determined that no significant cerebral injury developed in patients with hsPDA before or after therapy. To determine the rate of cerebral damage in patients with hsPDA, studies with a larger number of patients are needed to assess the correlation between results obtained from devices such as NIRS, which measure local tissue oxygenation, and serum NSE and S-100B levels, which are accepted to be the best markers of cerebral damage.
Background: Hemodynamically significant patent ductus arteriosus (hsPDA) leads to injury in tissues/organs by reducing perfusion of organs and causing oxidative stress. The purpose of this study was to evaluate the oxidant/antioxidant status in preterm infants with hsPDA by measuring the total antioxidant capacity and total oxidant status and to assess neuronal damage due to oxidant stress related to hsPDA. Methods: This prospective study included 37 low-birth-weight infants with echocardiographically diagnosed hsPDA treated with oral ibuprofen and a control group of 40 infants without PDA. Blood samples were taken from all infants, and then the total antioxidant capacity (TAC), total oxidant status (TOS), and S-100B protein levels were assessed and the oxidative stress index was calculated before and after therapy. Results: The mean pre-therapy TOS level and oxidative stress index (OSI) value of the patients with hsPDA were significantly higher, but the TAC level was lower, than in the control group. There were no statistically significant differences in the mean post-therapy values of TOS, TAC, OSI, and S-100B protein between the two groups. Conclusions: hsPDA may cause cellular injury by increasing oxidative stress and damaging tissue perfusion; however, the brain can compensate for oxidative stress and impaired tissue perfusion through well-developed autoregulation systems to decrease tissue injury.
Background: The left-to-right shunt in hemodynamically significant patent ductus arteriosus (hsPDA) in premature newborns leads to alveolar edema and decreased lung compliance by increasing pulmonary blood flow and capillary permeability. Under this condition, the infant potentially requires more oxygen and higher mechanical ventilation support [1]. Furthermore, hsPDA may cause hypoperfusion of vital organs, leading to pathologies such as necrotizing enterocolitis (NEC), bronchopulmonary dysplasia (BPD), and acute renal insufficiency [2,3]. In addition, intraventricular hemorrhage (IVH) resulting from sudden change in pressure and damage to the white matter may be caused by defective perfusion of the brain [4,5]. Although hsPDA is definitely related to these morbidities, its causative role has not been fully determined [6]. Patients with congenital heart diseases that lead to hypoperfusion, ischemia, and chronic hypoxia cannot meet the biological requirements of the tissues and are strongly confronted with oxygen radicals [7]. To prevent the adverse effects of free radicals and oxidants such as lipid peroxides, the body activates antioxidant defense mechanisms that include glutathione reductase, glutathione peroxidase, superoxide dismutase, and catalase [8]. In the first days of life, premature infants have higher concentrations of hydroperoxide in their erythrocyte membranes and lower antioxidant defense compared with those of term infants [9]. S-100B protein and neuron-specific enolase (NSE) are the most valuable biochemical markers of neuronal damage and glial activation in premature infants [37]. S-100B is a calcium-binding protein found particularly in Schwann and astroglial cells [10]. It increases after head trauma, subarachnoid hemorrhage, paralysis, and cardiac pathologies, resulting in neurologic damage [11].
The purpose of this study was to evaluate the oxidant/antioxidant status in preterm infants with hsPDA by measuring the total antioxidant capacity (TAC) and total oxidant status (TOS) and to assess the neuronal damage due to oxidant stress related to hsPDA.
[CONTENT] Antioxidants | Ductus Arteriosus, Patent | Oxidative Stress | S100 Calcium Binding Protein beta Subunit [SUMMARY]
[CONTENT] Antioxidants | Case-Control Studies | Ductus Arteriosus, Patent | Female | Humans | Ibuprofen | Infant, Newborn | Infant, Premature | Male | Oxidative Stress | S100 Calcium Binding Protein beta Subunit [SUMMARY]
Effect of a single injection of autologous conditioned serum (ACS) on tendon healing in equine naturally occurring tendinopathies.
26113022
Autologous blood-derived biologicals, including autologous conditioned serum (ACS), are frequently used to treat tendinopathies in horses despite limited evidence for their efficacy. The purpose of this study was to describe the effect of a single intralesional injection of ACS in naturally occurring tendinopathies of the equine superficial digital flexor tendon (SDFT) on clinical, ultrasonographic, and histological parameters.
INTRODUCTION
Fifteen horses with 17 naturally occurring tendinopathies of forelimb SDFTs were examined clinically and ultrasonographically (day 0). Injured tendons were randomly assigned to the ACS-treated group (n = 10) receiving a single intralesional ACS injection or included as controls (n = 7) which were either untreated or injected with saline on day 1. All horses participated in a gradually increasing exercise programme and were re-examined nine times at regular intervals until day 190. Needle biopsies were taken from the SDFTs on days 0, 36 and 190 and examined histologically and for the expression of collagen types I and III by immunohistochemistry.
METHODS
In ACS-treated limbs lameness decreased significantly until day 10 after treatment. Swelling (scores) of the SDFT region decreased within the ACS group between 50 and 78 days after treatment. Ultrasonographically, the percentage of the lesion in the tendon was significantly lower and the echogenicity of the lesion (total echo score) was significantly higher 78 and 106 days after intralesional ACS injection compared to controls. Histology revealed that, compared to controls, tenocyte nuclei were more spindle-shaped 36 days after ACS injection. Immunohistochemistry showed that collagen type I expression significantly increased between days 36 and 190 after ACS injection.
RESULTS
Single intralesional ACS injection of equine SDFTs with clinical signs of acute tendinopathy contributes to an early significant reduction of lameness and leads to temporary improvement of ultrasonographic parameters of repair tissue. Intralesional ACS treatment might decrease proliferation of tenocytes 5 weeks after treatment and increase their differentiation as demonstrated by elevated collagen type I expression in the remodelling phase. Potential enhancement of these effects by repeated injections should be tested in future controlled clinical investigations.
CONCLUSIONS
[ "Animals", "Blood Transfusion, Autologous", "Collagen Type I", "Collagen Type III", "Forelimb", "Horse Diseases", "Horses", "Immunohistochemistry", "Male", "Tendinopathy", "Tendons", "Ultrasonography", "Wound Healing" ]
4513386
Introduction
Tendinopathy of the superficial digital flexor tendon (SDFT) is a common injury in Thoroughbred racehorses and other horse breeds and is regarded as a career-limiting disease with a high recurrence rate [1]. Numerous treatment modalities have shown limited success in improving tendon repair [2]. Regenerative therapy aims to restore structure and function after application of biocompatible materials, cells, and bioactive molecules [3, 4]. There is growing knowledge about the clinical effects of potentially regenerative substrates, e.g. mesenchymal stem cells (MSCs) [5, 6] and autologous blood products such as platelet-rich plasma [7, 8], on equine tendinopathies. To date, however, ideal treatment strategies for naturally occurring tendinopathies have not been established [1, 2]. Autologous conditioned serum (ACS; synonyms irap®, Orthokine®, Orthogen, Düsseldorf, Germany) is used for intralesional treatment of tendinopathy in horses but, to the best of our knowledge, its clinical effect is only documented anecdotally [8–10]. ACS is prepared by exposing whole blood samples to glass beads, which has been shown to stimulate the secretion of anti-inflammatory cytokines, including interleukin (IL)-4, IL-10, and IL-1 receptor antagonist (IL-1Ra), in humans [11]. A recent investigation has shown that ACS from equine blood also contains high levels of IL-1Ra and IL-10 [12]. Equine studies have focused on the IL-1Ra-mediated anti-inflammatory effects of ACS [13]; however, in tendon healing, the high concentrations of growth factors such as insulin-like growth factor-1 (IGF-1) and transforming growth factor-beta (TGF-β) may be equally or more important [14–16]. Blood samples from different horses and the use of different kits for the preparation of ACS may lead to differences in the cytokine and growth factor concentrations in vitro [12, 17]. However, the relevance of these differences for the clinical effect is unknown.
ACS was originally described to improve muscle regeneration in a murine muscle contusion model [18] and to exhibit anti-inflammatory effects in an experimental model of carpal osteoarthritis in horses [13] and in a placebo-controlled clinical trial in humans with knee osteoarthritis [19]. The rationale for the use of ACS to treat equine tendinopathies is based on several findings: 1) It was shown in an experimental study that the expression of IL-1β (and matrix metalloproteinase-13) is upregulated following overstrain injury of rat tendons, demonstrating that these molecules are important mediators in the pathogenesis of tendinopathy [15, 20]. 2) IL-1Ra protein and heterologous conditioned serum prepared with the irap® kit reduced the production of prostaglandin E2 by stimulated cells derived from macroscopically normal SDFTs in vitro [21]. 3) Growth factors concentrated in ACS, e.g. IGF-1 and TGF-β, have the potential to attract resident precursor cells, e.g. MSCs and tenoblasts, and to increase cell proliferation during tendon healing [14, 15, 17]. Rat Achilles tendons exposed to ACS in an experimental study showed an enhanced expression of the Col1A1 gene, which led to an increased secretion of type I collagen and accelerated recovery of tendon stiffness and improved histologic maturity of the repair tissue [22]. It was shown in another rodent Achilles tendon transection model that ACS generally increases the expression of basic fibroblast growth factor (bFGF), bone morphogenetic protein-12 and TGF-β1 [16], representing growth factors important for the process of tendon regeneration [15]. The process of tendon healing is mainly divided into three phases which merge into each other. The acute inflammatory phase (<10–14 days) is characterized by phagocytosis and demarcation of injured tendon tissue. 
A fibroproliferative callus is formed during the proliferative phase (4–45 days), while collagen fibrils are organised into tendon bundles during the remodelling or maturation phase (45–120 days) [1, 23]. The aim of the present study was to test the hypotheses that a single intralesional ACS injection into SDFT lesions 1) has a clinically detectable anti-inflammatory effect, 2) leads to improved B-mode ultrasonographic parameters and 3) improves the organization of repair tissue.
Results
Description and history of horses, intralesional injections
Seventeen limbs of 15 horses between 2 and 19 years old (mean 8.46 years) met the inclusion criteria. Ten of the limbs were included in the ACS-treated group (ACS group) and seven served as controls. Limbs allocated to the ACS group belonged to five Warmbloods (50 %), four Thoroughbreds (40 %) and one Arabian (10 %). The limbs of five Warmbloods (71.42 %), one Thoroughbred (14.28 %) and one Half-blood (14.28 %) were included as controls. Of these, one Thoroughbred and one Warmblood with bilateral SDFT lesions served for both groups (Table 1). The horses' history (i.e. high-speed exercise, increased age >12 years) was suggestive of strain-induced tendinopathy in at least 6 of 10 SDFTs (60 %) allocated to the ACS group and in at least 4 of 7 (57 %) control SDFTs. Two tendons had a definitive history of blunt external trauma; one was included in the ACS group and one served as control.

Table 1. Description, clinical history, diagnostic data, and treatment of 15 horses with 17 SDFT lesions

| Horse number | Breed | Age (years) | Gender | Use | Duration of SDFT tendinopathy until initial examination (days) | Reported initiating event | Limb affected | Maximal injury zone | Lesion type | Treatment |
|---|---|---|---|---|---|---|---|---|---|---|
| ACS group | | | | | | | | | | |
| 2241/09 | Thoroughbred | 2 | S | Racing | 2 | Training | RF | 2b | Diffuse | ACS |
| 2240/09 | Thoroughbred | 3 | S | Racing | 2–3 | Training | RF | 2b | Core | ACS |
| 2489/09^a | Thoroughbred | 4 | M | Racing | 7 | Racing | RF | 1b | Core | ACS |
| 2539/09 | Warmblood | 3 | S | Dressage | 14 | Blunt trauma | RF | 2b | Marginal | ACS |
| 1672/10 | Arabian | 17 | G | Pleasure | 14 | Running free | RF | 1b | Core | ACS |
| 6264/10^b | Warmblood | 8 | G | Dressage | 9 | Unknown | RF | 3a | Marginal | ACS |
| 6335/10 | Warmblood | 10 | G | Pleasure | 10 | Unknown | LF | 2b | Diffuse | ACS |
| 4793/10 | Thoroughbred | 3 | S | Racing | 7 | Training | LF | 2b | Core | ACS |
| 6263/10 | Warmblood | 20 | M | Pleasure | 14 | Stumbling at cross country ride | RF | 1b | Core | ACS |
| 2378/10 | Warmblood | 11 | M | Pleasure | 13 | At ride | LF | 2b | Diffuse | ACS |
| Mean age | | 8.1 | | | | | | | | |
| Controls | | | | | | | | | | |
| 2489/09^a | Thoroughbred | 4 | M | Racing | 7 | Racing | LF | 1b | Diffuse | No |
| 6264/10^b | Warmblood | 8 | G | Dressage | 9 | Unknown | LF | 2a | Core | Saline |
| 6111/10 | Warmblood | 5 | M | Jumping | 14 | Kicking himself over the jump | RF | 1b | Marginal | No |
| 6265/10 | Warmblood | 5 | M | Pleasure | 7 | Unknown | RF | 1b | Marginal | Saline |
| 6383/11 | Warmblood | 18 | M | Pleasure | 10 | At cross country ride | LF | 2a | Marginal | No |
| 5461/11 | Warmblood | 14 | G | Police horse | 4 | At gallop on beach | LF | 2b | Diffuse | No |
| 6384/11 | Half-blood | 8 | G | Eventing | 1 | After eventing competition | LF | 2a | Core | No |
| Mean age | | 8.86 | | | | | | | | |

^a,b Horses had bilateral SDFT lesions and served for both the ACS group and as control. ACS, autologous conditioned serum (treated with a single intralesional injection of autologous conditioned serum); G, gelding; LF, left front limb; M, mare; RF, right front limb; S, stallion; Saline, treated with a single intralesional saline injection; SDFT, superficial digital flexor tendon.

In total, ten SDFTs were injected with ACS. Five control SDFTs were not treated and two control tendons received a single intralesional injection of saline.

Lameness
On day 0, the mean degree of lameness was 0.8 in the ACS group and 1.42 in control limbs. One of the two horses with bilateral tendinopathy that served for both the ACS group and as a control was not lame; the second showed a unilateral front limb lameness (the SDFT in the lame limb was treated with ACS). The mean degree of lameness did not differ between groups on any day of examination (p > 0.05). Regardless of treatment modality, all horses became sound by day 36. Compared to day 0, lameness decreased significantly by day 11 (p = 0.046) within the ACS group and by day 36 (p = 0.021) in limbs serving as controls (Fig. 1a).

Fig. 1 Degree of lameness and palpable swelling of tendons. a Degree of lameness of control limbs and those treated with autologous conditioned serum (ACS) over time. b Scores for palpable swelling of ACS-treated and control superficial digital flexor tendons (SDFTs) over time. Mean ± SE. Different letters (ACS normal, control italic) indicate significant differences (p < 0.05) within treatment group. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow: day (d)0 – diagnosis, first tendon biopsy; red arrow: d1 – intralesional injection of ACS/control substance; blue arrows: d36/d190 – second/third tendon biopsy.

Long-term follow-up
Recurrence of tendon injury was not reported in any horse 2 to 4 years post-diagnosis. Of the eight horses with SDFTs allocated only to the ACS group, five (63 %), among them three racehorses, returned to their previous or a higher performance level, two (25 %) died of reasons unrelated to tendinopathy, and one (12 %) was retired due to osteoarthritis of the interphalangeal joints after the observation period. Of the two horses that served for both the ACS group and as controls, one did not resume training and was retired as a broodmare; the second performed as a dressage horse. Of the five horses with tendons serving only as controls, four (80 %) performed in their discipline at the previous or a higher level, and one (20 %) was lost to follow-up.

Signs of inflammation
No statistically significant differences between groups were observed during the entire observation period, including day 0, with regard to scores for swelling, skin surface temperature and sensitivity to palpation. Swelling scores of the SDFT region decreased significantly in the ACS group (p = 0.005) between day 50 (mean score 1.1) and day 78 (mean score 0.7) and remained reduced until the end of the observation period (Fig. 1b). In controls, swelling scores did not decrease significantly during the 27 weeks. Skin surface temperature and sensitivity to palpation scores decreased up to day 22 in both groups and remained at a low level until day 190.
No statistically significant differences between groups were observed during the entire observation period, including day 0, with regard to scores for swelling, skin surface temperature and sensitivity to palpation. Swelling scores of the SDFT region decreased significantly in the ACS group (p = 0.005) between day 50 (mean score 1.1) and day 78 (mean score 0.7) and remained reduced until the end of the observation period (Fig. 1b). In controls, swelling scores did not decrease significantly during 27 weeks. Skin surface temperature and sensitivity to palpation scores decreased up to day 22 within both groups and remained at a low level until day 190. B-mode ultrasonography The horses included presented with core lesions (seven limbs), marginal lesions (five limbs) or diffuse lesions (five limbs) of the SDFT (Table 1). The MIZ of most lesions was located in zone 2b (41.17 % of limbs), followed by zone 1b (35.29 % of limbs), zone 2a (17.64 % of limbs) and zone 3a (5.88 % of limbs). The mean %T-lesion was 21.73 ± 7.16 in the ACS group and 18.51 ± 5.07 in controls on day 0. This parameter was significantly lower (p < 0.05) in the ACS group than in controls on days 78, 106 and 162 (Fig. 2a). TES were significantly lower in the ACS group versus controls on days 78 and 106 (Fig. 2b). There was no difference in T-CSA, TL-CSA (Fig. 2c) or T-FAS between the groups at any time point.Fig. 2Ultrasonographic measurements. a Percent total lesion (%T-Lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. 
ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy Ultrasonographic measurements. a Percent total lesion (%T-Lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy Mean %T-lesion showed a continuous decrease over time within the ACS group which was significant (p = 0.02) for the first time on day 78 compared to day 0. After this period, this parameter remained below the 10 % range. In control tendons, mean %T-lesion continuously remained on a similar elevated level (p > 0.05) throughout the entire observation period (Fig. 2a). Compared to day 0, the mean TES decreased significantly on day 22 and again between days 22 and 78 in the ACS group (p < 0.05), while in controls compared to day 0 the TES decreased significantly (p < 0.05) the first time on day 78 (Fig. 2b). Mean T-CSA did not change significantly throughout the observation period in either group. 
Regarding the progression of TL-CSA within the ACS group over time, it was significantly lower (p < 0.05) from day 78 onwards until the end of the examination period compared to levels on day 0, while values from control tendons remained on a similar level from day 0 to 190 (Fig. 2c). Mean T-FAS decreased significantly between day 0 and day 78 (p = 0.023) within the ACS group; in controls this parameter decreased significantly on day 134 compared to day 0 (p = 0.04). The horses included presented with core lesions (seven limbs), marginal lesions (five limbs) or diffuse lesions (five limbs) of the SDFT (Table 1). The MIZ of most lesions was located in zone 2b (41.17 % of limbs), followed by zone 1b (35.29 % of limbs), zone 2a (17.64 % of limbs) and zone 3a (5.88 % of limbs). The mean %T-lesion was 21.73 ± 7.16 in the ACS group and 18.51 ± 5.07 in controls on day 0. This parameter was significantly lower (p < 0.05) in the ACS group than in controls on days 78, 106 and 162 (Fig. 2a). TES were significantly lower in the ACS group versus controls on days 78 and 106 (Fig. 2b). There was no difference in T-CSA, TL-CSA (Fig. 2c) or T-FAS between the groups at any time point.Fig. 2Ultrasonographic measurements. a Percent total lesion (%T-Lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). 
Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy Ultrasonographic measurements. a Percent total lesion (%T-Lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy Mean %T-lesion showed a continuous decrease over time within the ACS group which was significant (p = 0.02) for the first time on day 78 compared to day 0. After this period, this parameter remained below the 10 % range. In control tendons, mean %T-lesion continuously remained on a similar elevated level (p > 0.05) throughout the entire observation period (Fig. 2a). Compared to day 0, the mean TES decreased significantly on day 22 and again between days 22 and 78 in the ACS group (p < 0.05), while in controls compared to day 0 the TES decreased significantly (p < 0.05) the first time on day 78 (Fig. 2b). Mean T-CSA did not change significantly throughout the observation period in either group. Regarding the progression of TL-CSA within the ACS group over time, it was significantly lower (p < 0.05) from day 78 onwards until the end of the examination period compared to levels on day 0, while values from control tendons remained on a similar level from day 0 to 190 (Fig. 2c). 
Mean T-FAS decreased significantly between day 0 and day 78 (p = 0.023) within the ACS group; in controls this parameter decreased significantly on day 134 compared to day 0 (p = 0.04). Needle biopsies and histology A total of 51 needle biopsies were taken. Pain reaction was mild in 78.43 % of the procedures, mild to moderate in 17.64 % and moderate in 3.92 % of the cases. A moderate to severe or severe pain reaction was not observed. No bleeding was observed after taking biopsies in 25.49 % of the cases, mild bleeding occurred in 58.82 % and moderate bleeding in 15.68 % of cases. Severe bleeding was not observed. Of 51 biopsies, 47 were available for tendon histology [7, 30]. Four biopsies from severely oedematous lesions were not evaluated due to limited tissue content. The intra-class correlation for inter-observer repeatability was 0.72 for fibre structure, 0.88 for fibre alignment, 0.84 for nuclei morphology and 0.92 for variations in cell density. Scores for tenocyte nuclei morphology were significantly lower (i.e. cell nuclei more flattened; p = 0.01) in the ACS group (Fig. 3a) than in controls (Fig. 3b) on day 36 (Fig. 4a). Scores for cell density showed a tendency to be lower (i.e. more uniform) in the ACS group than in controls on day 36 (p = 0.052). Scores for fibre structure, fibre alignment (Fig. 4b), vascularisation, and subscores for structural integrity and metabolic activity did not show differences between the ACS group and controls at any time point.Fig. 3Longitudinal sections of tendon biopsies from superficial digital flexor tendons with tendinopathy. a–d Histopathological specimens stained with haematoxylin & eosin using a 40× objective. Tendons of horses on day 36 after intralesional treatment with autologous conditioned serum (ACS) (horse no. 4793/10; a) and no treatment (control tendon, horse no. 6384/11; b). The number of round cell nuclei was higher in control tendons than in ACS-treated tendons 36 days after treatment. 
Scale bars = 10 μm. Tendon of horse no. 2241/09 1 day before (day 0, c) and 190 days after (d) intralesional treatment with ACS. Alignment of collagen fibres improved significantly between day 0 and day 190 after ACS treatment. Scale bars = 10 μm. Tendon of horse no. 2240/09 36 days (e) and 190 days (f) after intralesional treatment with ACS. Immunohistochemistry revealed a significant increase of collagen type I expression between day 36 and day 190 after ACS treatment. Scale bars = 20 μmFig. 4Histologic scores and collagen type I content of superficial digital flexor tendons. a Histologic scores for morphology of tenocyte nuclei in tendon biopsies taken from autologous conditioned serum (ACS)-treated versus control superficial digital flexor tendons (SDFTs) at different time points during the examination period of 190 days. b Histologic scores for fibre alignment in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. c Percentage of collagen type I content determined immunohistochemically in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. Day (d)0 = day the diagnosis was made; d36/d190 = 36/190 days after tendinopathy was diagnosed. *p < 0.05. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated) Longitudinal sections of tendon biopsies from superficial digital flexor tendons with tendinopathy. a–d Histopathological specimens stained with haematoxylin & eosin using a 40× objective. Tendons of horses on day 36 after intralesional treatment with autologous conditioned serum (ACS) (horse no. 4793/10; a) and no treatment (control tendon, horse no. 6384/11; b). The number of round cell nuclei was higher in control tendons than in ACS-treated tendons 36 days after treatment. Scale bars = 10 μm. 
Tendon of horse no. 2241/09 1 day before (day 0, c) and 190 days after (d) intralesional treatment with ACS. Alignment of collagen fibres improved significantly between day 0 and day 190 after ACS treatment. Scale bars = 10 μm. Tendon of horse no. 2240/09 36 days (e) and 190 days (f) after intralesional treatment with ACS. Immunohistochemistry revealed a significant increase of collagen type I expression between day 36 and day 190 after ACS treatment. Scale bars = 20 μm Histologic scores and collagen type I content of superficial digital flexor tendons. a Histologic scores for morphology of tenocyte nuclei in tendon biopsies taken from autologous conditioned serum (ACS)-treated versus control superficial digital flexor tendons (SDFTs) at different time points during the examination period of 190 days. b Histologic scores for fibre alignment in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. c Percentage of collagen type I content determined immunohistochemically in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. Day (d)0 = day the diagnosis was made; d36/d190 = 36/190 days after tendinopathy was diagnosed. *p < 0.05. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated) With regard to development of fibre alignment during the healing of tendons treated with ACS, scores for this parameter were significantly lower (i.e. fibres were more regularly ordered; p = 0.04) in biopsies taken at the end of the observation period (day 190; Fig. 3d, Fig. 4b) than in those taken on day 0 (Fig. 3c, Fig. 4b). In control tendons, scores for fibre alignment (Fig. 4b), scores for morphology of tenocyte nuclei (Fig. 4a) and scores for cell density decreased significantly (i.e. 
tissue morphology improved; p < 0.05) between day 36 and day 190 biopsies. There were no differences between the ACS group and controls with regard to collagen type I and collagen type III expression in biopsies taken on days 0, 36 and 190. Within the ACS group, the collagen type I content increased significantly (p = 0.03) between the biopsies taken on day 36 (Fig. 3e) and day 190 (Fig. 3f), while it remained at the same level (p > 0.05) in controls (Fig. 4c). The collagen type III content showed a tendency (p = 0.056) to decrease in the ACS group between samples taken on days 0 and 190. A total of 51 needle biopsies were taken. Pain reaction was mild in 78.43 % of the procedures, mild to moderate in 17.64 % and moderate in 3.92 % of the cases. A moderate to severe or severe pain reaction was not observed. No bleeding was observed after taking biopsies in 25.49 % of the cases, mild bleeding occurred in 58.82 % and moderate bleeding in 15.68 % of cases. Severe bleeding was not observed. Of 51 biopsies, 47 were available for tendon histology [7, 30]. Four biopsies from severely oedematous lesions were not evaluated due to limited tissue content. The intra-class correlation for inter-observer repeatability was 0.72 for fibre structure, 0.88 for fibre alignment, 0.84 for nuclei morphology and 0.92 for variations in cell density. Scores for tenocyte nuclei morphology were significantly lower (i.e. cell nuclei more flattened; p = 0.01) in the ACS group (Fig. 3a) than in controls (Fig. 3b) on day 36 (Fig. 4a). Scores for cell density showed a tendency to be lower (i.e. more uniform) in the ACS group than in controls on day 36 (p = 0.052). Scores for fibre structure, fibre alignment (Fig. 4b), vascularisation, and subscores for structural integrity and metabolic activity did not show differences between the ACS group and controls at any time point.Fig. 3Longitudinal sections of tendon biopsies from superficial digital flexor tendons with tendinopathy. 
a–d Histopathological specimens stained with haematoxylin & eosin using a 40× objective. Tendons of horses on day 36 after intralesional treatment with autologous conditioned serum (ACS) (horse no. 4793/10; a) and no treatment (control tendon, horse no. 6384/11; b). The number of round cell nuclei was higher in control tendons than in ACS-treated tendons 36 days after treatment. Scale bars = 10 μm. Tendon of horse no. 2241/09 1 day before (day 0, c) and 190 days after (d) intralesional treatment with ACS. Alignment of collagen fibres improved significantly between day 0 and day 190 after ACS treatment. Scale bars = 10 μm. Tendon of horse no. 2240/09 36 days (e) and 190 days (f) after intralesional treatment with ACS. Immunohistochemistry revealed a significant increase of collagen type I expression between day 36 and day 190 after ACS treatment. Scale bars = 20 μm Fig. 4 Histologic scores and collagen type I content of superficial digital flexor tendons. a Histologic scores for morphology of tenocyte nuclei in tendon biopsies taken from autologous conditioned serum (ACS)-treated versus control superficial digital flexor tendons (SDFTs) at different time points during the examination period of 190 days. b Histologic scores for fibre alignment in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. c Percentage of collagen type I content determined immunohistochemically in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. Day (d)0 = day the diagnosis was made; d36/d190 = 36/190 days after tendinopathy was diagnosed. *p < 0.05. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated) With regard to development of fibre alignment during the healing of tendons treated with ACS, scores for this parameter were significantly lower (i.e. fibres were more regularly ordered; p = 0.04) in biopsies taken at the end of the observation period (day 190; Fig. 3d, Fig. 4b) than in those taken on day 0 (Fig. 3c, Fig. 4b). In control tendons, scores for fibre alignment (Fig. 4b), scores for morphology of tenocyte nuclei (Fig. 4a) and scores for cell density decreased significantly (i.e. tissue morphology improved; p < 0.05) between day 36 and day 190 biopsies. There were no differences between the ACS group and controls with regard to collagen type I and collagen type III expression in biopsies taken on days 0, 36 and 190. Within the ACS group, the collagen type I content increased significantly (p = 0.03) between the biopsies taken on day 36 (Fig. 3e) and day 190 (Fig. 3f), while it remained at the same level (p > 0.05) in controls (Fig. 4c). The collagen type III content showed a tendency (p = 0.056) to decrease in the ACS group between samples taken on days 0 and 190.
Conclusions
This clinical trial in horses with acute tendinopathies of the SDFT shows that a single intralesional ACS injection contributes to significant reduction of lameness within 10 days and to improvement of ultrasonographic parameters of repair tissue between 11 and 23 weeks after treatment. Intralesional ACS treatment potentially decreases proliferation of tenocytes 5 weeks after treatment and increases their differentiation, as demonstrated by an elevated collagen type I expression in the remodelling phase. Repeated ACS injections should be considered to enhance positive effects. Future controlled long-term investigations should be performed in a larger number of horses to determine the effect on recurrence rate.
[ "Clinical examination", "B-mode ultrasonography", "Intralesional treatment, follow-up examinations and controlled exercise", "Needle biopsies and histologic examinations", "Statistical analysis", "Description and history of horses, intralesional injections", "Lameness", "Long-term follow-up", "Signs of inflammation", "B-mode ultrasonography", "Needle biopsies and histology" ]
[ "All horses were examined clinically on the day of first presentation (day 0). This examination included visual assessment of lameness (5-grade score) [24] and signs of inflammation, which were scored semiquantitatively by palpation (skin surface temperature in the palmar metacarpal region and sensitivity of the SDFT to palpation: 0 = no abnormality, 1 = mild abnormality, 2 = moderate abnormality, and 3 = severe abnormality; swelling of the SDFT was determined by palpation as an increase in diameter relative to normal tendon: 0 = no increase, 1 = increase by factor 1.5, 2 = increase by factor 1.5 to 2, 3 = increase by more than factor 2 [25]).", "All injured tendons were examined with B-mode ultrasound on the day of first presentation (day 0) in a transverse and longitudinal fashion with a 5–7.5 MHz linear scanner (Logiq e, GE Healthcare, Wauwatosa, WI, USA), according to the seven zone designations described previously [26, 27]. Images were stored digitally and analysed according to the following parameters to determine the degree- and time-related changes of the lesions: maximal injury zone (MIZ); type of lesion determined on transverse images in the MIZ (core lesion = centrally located, focal hypo-/anechoic region; marginal lesion = peripherally located, focal hypo-/anechoic region; diffuse lesion = homogeneous or heterogeneous changes in echogenicity of the whole or most parts of the cross-sectional area); summarized cross-sectional areas of the tendon (total cross-sectional area, T-CSA); summarized cross-sectional areas of the lesion (total lesion cross-sectional area, TL-CSA); and percentage of the lesion in the tendon [percent total lesion, %T-lesion = (TL-CSA / T-CSA) × 100]. Echogenicity and fibre alignment were graded semiquantitatively at each zone and the scores for all levels were summarized (total echo score, TES; total fibre alignment score, T-FAS). 
Echogenicity was assigned to 0 (normoechoic), 1 (hypoechoic), 2 (mixed echogenicity), and 3 (anechoic) [27, 28], and fibre alignment was graded according to the estimated percentage of parallel fibres in the lesion: 0 (>75 %), 1 (50–74 %), 2 (25–49 %), and 3 (<25 %) [27, 28]. Analyses of ultrasonograms were performed by one examiner (ML) blinded to the individual treatment modality.", "Ten millilitres of autologous blood were collected by a single venipuncture of one jugular vein into an irap®-10 syringe system (Orthogen, Düsseldorf, Germany). Blood samples were incubated at 37 °C (range 6–9 hours). After centrifugation at 4,000 rotations per minute for 10 minutes (centrifuge: Universal 320, rotor: no. 1624, Hettich, Tuttlingen, Germany), serum was aseptically aspirated from the syringe and passed through a 0.22 μm syringe-driven filter unit (Millex-MP, Millipore Corporation, Carrigtwohill, Co. Cork, Ireland). Depending on the size of the lesion as determined ultrasonographically (TL-CSA), tendons allocated to the ACS group received a single intralesional injection of 1–3 ml through a 22G needle (diameter 0.7 mm, length 30 mm) into the SDFT defect (day 1). Control tendons either received a single injection of a placebo, i.e. 1–3 ml saline through a 22G needle (diameter 0.7 mm, length 30 mm) or were untreated in case the owner declined an intralesional application of saline. Horses were sedated for the intralesional injections with detomidine (0.01–0.03 mg/kg intravenously) and butorphanol (0.04–0.05 mg/kg intravenously), and the medial and lateral palmar nerves were anaesthetized 2 cm distal to the carpometacarpal joints with 2 ml of a 2 % mepivacaine solution. After aseptic preparation of the skin, superficial digital tendon lesions were injected under sonographic guidance at a single site from the lateral aspect of the tendon perpendicularly to its long axis directly into the most hypoechoic areas, i.e. the MIZ while the limb was weight-bearing. 
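The ultrasonographic summary parameters defined above can be sketched in a few lines of Python. The per-zone record layout and the numeric values below are hypothetical; only the formulas (T-CSA and TL-CSA as sums over zones, %T-lesion = TL-CSA / T-CSA × 100, and TES/T-FAS as summed 0–3 semiquantitative scores) follow the text.

```python
# Minimal sketch of the ultrasonographic summary parameters.
# Field names ("csa", "lesion_csa", "echo", "fas") are illustrative
# assumptions; the formulas follow the definitions in the methods.

def summarise_ultrasound(zones):
    """zones: list of per-zone measurements -> dict of summary parameters."""
    t_csa = sum(z["csa"] for z in zones)          # total cross-sectional area
    tl_csa = sum(z["lesion_csa"] for z in zones)  # total lesion cross-sectional area
    return {
        "T-CSA": t_csa,
        "TL-CSA": tl_csa,
        "%T-lesion": tl_csa / t_csa * 100,        # percentage of lesion in the tendon
        "TES": sum(z["echo"] for z in zones),     # summed echogenicity scores (0-3 each)
        "T-FAS": sum(z["fas"] for z in zones),    # summed fibre alignment scores (0-3 each)
    }

# Hypothetical three-zone example (areas in cm^2):
zones = [
    {"csa": 1.0, "lesion_csa": 0.3, "echo": 2, "fas": 2},
    {"csa": 1.2, "lesion_csa": 0.1, "echo": 1, "fas": 1},
    {"csa": 0.8, "lesion_csa": 0.0, "echo": 0, "fas": 0},
]
summary = summarise_ultrasound(zones)
```

In this invented example the lesion occupies 0.4 of 3.0 cm², i.e. a %T-lesion of about 13.3, with TES and T-FAS of 3 each.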
All horses participated in a gradually increasing exercise programme as described previously [7]. The programme started the first day after the reported onset of SDFT tendinopathy. From week 25 to 27 horses were exercised for 25 minutes at a walk and for 15 minutes at a trot.\nHorses were re-examined clinically and ultrasonographically at regular intervals for 27 weeks on days 11, 22, 36, 50, 78, 106, 134, 162, and 190. Thereafter horse owners were advised to gradually increase exercise on an individual basis until the previous level of performance was reached. Data concerning signs of acute tendon injury, the level of performance horses reached and the discipline they were used for were obtained by telephone inquiry with horse owners or trainers until the preparation of the manuscript.", "On days 0, 36 and 190, one needle biopsy was taken aseptically from each SDFT at its MIZ with a 20G automated biopsy needle (Biopsiepistole PlusSpeed™, Peter Pflugbeil GmbH, Zorneding, Germany), with the needle entering the MIZ of the SDFT from distal at a 45° angle while the carpus was flexed approximately 90° and the metacarpophalangeal joint was moderately extended [29]. Pain reaction and intensity of bleeding from the biopsy site were evaluated using an established score [29]. The MIZ was recorded as distance from the accessory carpal bone (cm) so that repeat biopsies were taken from the same anatomic area as the day 0 biopsy while avoiding previous biopsy sites. Limbs were protected with a distal limb bandage for 2 days after taking needle biopsies and after intralesional injections. All needle biopsies were fixed in 10 % formalin, paraffin-embedded, sectioned at a thickness of 1–2 μm, mounted on microscope slides, and stained with haematoxylin and eosin. 
A single histological slide of each biopsy was examined according to a previously described score [7, 30]; findings were graded using a semiquantitative four-point scale (0 = normal appearance, 1 = slightly abnormal, 2 = moderately abnormal, and 3 = markedly abnormal) considering the following parameters: fibre structure (0 = linear, no interruption; 3 = short with early truncation), fibre alignment (0 = regularly ordered; 3 = no pattern identified), morphology of tenocyte nuclei (0 = flat; 3 = round), variations in cell density (0 = uniform; 3 = high regional variation), and vascularisation (0 = absent; 3 = high). Histological sections were independently scored by two observers blinded to horse and treatment modality (FG and ML). In total, five high-power fields (40× magnification) per section were examined and scored. Mean score values were first calculated per observer for each parameter before the values of both examiners were averaged.\nImmunohistochemical analysis of paraffin-embedded tissue sections was used to determine the formation of collagen type I and collagen type III. A commercially available mouse-anti-bovine antibody (NB600-450 anti-COL 1A1, Novus Biologicals, Littleton, CO, USA) and a rabbit-anti-bovine antibody (CL197P anti collagen type III alpha 1 chain, Acris Antibodies GmbH, Herford, Germany) were applied as primary antibodies against collagen type I and collagen type III, respectively. Secondary biotinylated antibodies were obtained from relevant species allowing binding to the primary antibody. Colour production from the chromogen diaminobenzidine tetrachloride was catalysed by streptavidin-conjugated peroxidase (avidin-biotin-complex method) [31]. Finally, the sections were counterstained with haematoxylin. Immunohistochemical cross-reactivity of antibodies with uninjured equine tendon tissue was tested prior to analysis of the needle biopsies. 
Positive control tissues included bovine aorta and bovine tendon for collagen type I antigen-specific antibodies and bovine skin for collagen type III antigen-specific antibodies. In negative control sections, primary antibodies were replaced by appropriately diluted Balb/c mouse ascites and rabbit serum, respectively.\nPhotomicrographs were taken from all immunostained slides (Color View II, 3.3 Megapixel CCD, Soft Imaging System GmbH, Münster, Germany). Quantitative morphometric analysis of the immunoreaction was achieved by determination of the immunostained area using image analysis software (analySIS® 3.1, Soft Imaging System GmbH, Münster, Germany). A threshold for a positive signal was defined and the percentage of positively immunostained area in the tissue section as a whole was calculated [32, 33].", "Analysis of data was performed using SAS™ Version 9.3 (SAS Institute, Cary, NC, USA). The level of significance was set at p < 0.05. All values in the graphs are expressed as arithmetic mean values with standard error (X̄ ± SEM). The assumption of normality was tested using the Kolmogorov-Smirnov test and visual assessment of qq-plots of model residuals. In the case of rejection of normal distribution, distribution-free nonparametric methods were applied. Fisher’s exact test was applied to test the differences between groups on each examination day with regard to the parameters of degree of lameness, swelling and skin surface temperature. 
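The group comparisons with Fisher's exact test can be illustrated for a dichotomized 2×2 case (e.g. lame vs. sound limbs per group on one examination day). This pure-Python sketch is an illustration only — the study used SAS — and the counts below are invented; it handles 2×2 tables via the hypergeometric distribution.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    r1, r2 = a + b, c + d                # row totals (e.g. group sizes)
    c1, n = a + c, a + b + c + d         # first column total, grand total
    def prob(x):                         # P(top-left cell = x) under the null
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Invented counts: 3 of 4 limbs lame in one group vs. 1 of 4 in the other
p = fisher_exact_2x2(3, 1, 1, 3)         # two-sided p ≈ 0.486, not significant
```

The two-sided p-value here matches the usual convention of summing all table probabilities not exceeding the observed one.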
To compare non-normally distributed parameters within a group between examination days, the permutation test for nonparametric analysis of repeated measurements with the Šidák post hoc test for multiple pairwise comparisons was used. The influence of groups and time points on ultrasonographic parameters (T-CSA, TL-CSA, %T-lesion, TES and T-FAS), histology scores and percentages of positively immunostained areas was tested using a two-way analysis of variance for independent samples (groups) and repeated measurements (dependent time points, biopsies), followed by the Tukey post hoc test for multiple pairwise comparisons. The intraclass correlation coefficient was calculated by analysis of variance components to test inter-observer repeatability of histological scores.", "Seventeen limbs of 15 horses between 2 and 19 years old (mean 8.46 years old) met the inclusion criteria. Ten of the limbs were included in the ACS-treated group (ACS group) and seven served as controls. Limbs allocated to the ACS group belonged to five Warmbloods (50 %), four Thoroughbreds (40 %) and one Arabian (10 %). The limbs of five Warmbloods (71.42 %), one Thoroughbred (14.28 %) and one Half-blood (14.28 %) were included as controls. Of these, one Thoroughbred and one Warmblood with bilateral SDFT lesions served for both groups (Table 1). The horses’ history (i.e. high-speed exercise, increased age >12 years) was suggestive of strain-induced tendinopathy in at least 6 of 10 SDFTs (60 %) allocated to the ACS group and in at least 4 of 7 (57 %) control SDFTs. Two tendons had a definitive history of blunt external trauma. 
One was included in the ACS group and one served as control.\nTable 1 Description, clinical history, diagnostic data, and treatment of 15 horses with 17 SDFT lesions\nHorse number | Breed | Age (years) | Gender | For which purpose used | Reported duration of SDFT tendinopathy until initial examination (days) | Reported initiating event | Limb affected | Maximal injury zone | Lesion type | Treatment\nACS group\n2241/09 | Thoroughbred | 2 | S | Racing | 2 | Training | RF | 2b | Diffuse | ACS\n2240/09 | Thoroughbred | 3 | S | Racing | 2–3 | Training | RF | 2b | Core | ACS\n2489/09a | Thoroughbred | 4 | M | Racing | 7 | Racing | RF | 1b | Core | ACS\n2539/09 | Warmblood | 3 | S | Dressage | 14 | Blunt trauma | RF | 2b | Marginal | ACS\n1672/10 | Arabian | 17 | G | Pleasure | 14 | Running free | RF | 1b | Core | ACS\n6264/10b | Warmblood | 8 | G | Dressage | 9 | Unknown | RF | 3a | Marginal | ACS\n6335/10 | Warmblood | 10 | G | Pleasure | 10 | Unknown | LF | 2b | Diffuse | ACS\n4793/10 | Thoroughbred | 3 | S | Racing | 7 | Training | LF | 2b | Core | ACS\n6263/10 | Warmblood | 20 | M | Pleasure | 14 | Stumbling at cross country ride | RF | 1b | Core | ACS\n2378/10 | Warmblood | 11 | M | Pleasure | 13 | At ride | LF | 2b | Diffuse | ACS\nMean age: 8.1\nControls\n2489/09a | Thoroughbred | 4 | M | Racing | 7 | Racing | LF | 1b | Diffuse | No\n6264/10b | Warmblood | 8 | G | Dressage | 9 | Unknown | LF | 2a | Core | Saline\n6111/10 | Warmblood | 5 | M | Jumping | 14 | Kicking himself over the jump | RF | 1b | Marginal | No\n6265/10 | Warmblood | 5 | M | Pleasure | 7 | Unknown | RF | 1b | Marginal | Saline\n6383/11 | Warmblood | 18 | M | Pleasure | 10 | At cross country ride | LF | 2a | Marginal | No\n5461/11 | Warmblood | 14 | G | Police horse | 4 | At gallop on beach | LF | 2b | Diffuse | No\n6384/11 | Half-blood | 8 | G | Eventing | 1 | After eventing competition | LF | 2a | Core | No\nMean age: 8.86\na, b Horses had bilateral SDFT lesions and served for the ACS group and as control. ACS autologous conditioned serum (treated with single intralesional injection of autologous conditioned serum); G gelding, LF left front limb, M mare, RF right front limb, S stallion, Saline treated with single intralesional saline injection, SDFT superficial digital flexor tendon
In total ten SDFTs were injected with ACS. Five control SDFTs were not treated and two control tendons received a single intralesional injection of saline.", "On day 0, the mean degree of lameness was 0.8 in the ACS group. In control limbs, the mean degree of lameness was 1.42 on day 0. One of the two horses with bilateral tendinopathy that served for the ACS group and as a control was not lame. The second one showed a unilateral front limb lameness (SDFT in the lame limb was treated with ACS). The mean degree of lameness did not differ between groups on any day of examination (p > 0.05). Regardless of treatment modality all horses became sound by day 36. Compared to day 0, lameness decreased significantly by day 11 (p = 0.046) within the ACS group and it decreased significantly by day 36 (p = 0.021) in limbs serving as controls (Fig. 1a). Fig. 1 Degree of lameness and palpable swelling of tendons. a Degree of lameness of control limbs and those treated with autologous conditioned serum (ACS) over time. b Scores for palpable swelling of ACS-treated and control superficial digital flexor tendons (SDFTs) over time. Mean ± SE. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy", "Recurrence of tendon injury was not reported in any horse after 2 to 4 years post-diagnosis. Of the eight horses with SDFTs being allocated to the ACS group, five horses (63 %), among these three racehorses, returned to their previous or a higher performance level, two horses (25 %) died of reasons unrelated to tendinopathy, and one horse (12 %) was retired due to osteoarthritis of interphalangeal joints after the observation period. One of the horses which was included in the ACS group and served as a control did not resume training and was retired as a broodmare; the second one performed as a dressage horse. Of the five horses with tendons serving only as control, four individuals (80 %) performed in their discipline at the previous or a higher level, and one horse (20 %) was lost to follow-up.", "No statistically significant differences between groups were observed during the entire observation period, including day 0, with regard to scores for swelling, skin surface temperature and sensitivity to palpation. Swelling scores of the SDFT region decreased significantly in the ACS group (p = 0.005) between day 50 (mean score 1.1) and day 78 (mean score 0.7) and remained reduced until the end of the observation period (Fig. 1b). 
In controls, swelling scores did not decrease significantly during 27 weeks. Skin surface temperature and sensitivity to palpation scores decreased up to day 22 within both groups and remained at a low level until day 190.", "The horses included presented with core lesions (seven limbs), marginal lesions (five limbs) or diffuse lesions (five limbs) of the SDFT (Table 1). The MIZ of most lesions was located in zone 2b (41.17 % of limbs), followed by zone 1b (35.29 % of limbs), zone 2a (17.64 % of limbs) and zone 3a (5.88 % of limbs).\nThe mean %T-lesion was 21.73 ± 7.16 in the ACS group and 18.51 ± 5.07 in controls on day 0. This parameter was significantly lower (p < 0.05) in the ACS group than in controls on days 78, 106 and 162 (Fig. 2a). TES were significantly lower in the ACS group versus controls on days 78 and 106 (Fig. 2b). There was no difference in T-CSA, TL-CSA (Fig. 2c) or T-FAS between the groups at any time point. Fig. 2 Ultrasonographic measurements. a Percent total lesion (%T-Lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy\nMean %T-lesion showed a continuous decrease over time within the ACS group which was significant (p = 0.02) for the first time on day 78 compared to day 0. After this period, this parameter remained below the 10 % range. In control tendons, mean %T-lesion continuously remained on a similar elevated level (p > 0.05) throughout the entire observation period (Fig. 2a).\nCompared to day 0, the mean TES decreased significantly on day 22 and again between days 22 and 78 in the ACS group (p < 0.05), while in controls compared to day 0 the TES decreased significantly (p < 0.05) the first time on day 78 (Fig. 2b).\nMean T-CSA did not change significantly throughout the observation period in either group. Regarding the progression of TL-CSA within the ACS group over time, it was significantly lower (p < 0.05) from day 78 onwards until the end of the examination period compared to levels on day 0, while values from control tendons remained on a similar level from day 0 to 190 (Fig. 2c). Mean T-FAS decreased significantly between day 0 and day 78 (p = 0.023) within the ACS group; in controls this parameter decreased significantly on day 134 compared to day 0 (p = 0.04).", "A total of 51 needle biopsies were taken. Pain reaction was mild in 78.43 % of the procedures, mild to moderate in 17.64 % and moderate in 3.92 % of the cases. 
A moderate to severe or severe pain reaction was not observed. No bleeding was observed after taking biopsies in 25.49 % of the cases, mild bleeding occurred in 58.82 % and moderate bleeding in 15.68 % of cases. Severe bleeding was not observed.\nOf 51 biopsies, 47 were available for tendon histology [7, 30]. Four biopsies from severely oedematous lesions were not evaluated due to limited tissue content. The intra-class correlation for inter-observer repeatability was 0.72 for fibre structure, 0.88 for fibre alignment, 0.84 for nuclei morphology and 0.92 for variations in cell density.\nScores for tenocyte nuclei morphology were significantly lower (i.e. cell nuclei more flattened; p = 0.01) in the ACS group (Fig. 3a) than in controls (Fig. 3b) on day 36 (Fig. 4a). Scores for cell density showed a tendency to be lower (i.e. more uniform) in the ACS group than in controls on day 36 (p = 0.052). Scores for fibre structure, fibre alignment (Fig. 4b), vascularisation, and subscores for structural integrity and metabolic activity did not show differences between the ACS group and controls at any time point.Fig. 3Longitudinal sections of tendon biopsies from superficial digital flexor tendons with tendinopathy. a–d Histopathological specimens stained with haematoxylin & eosin using a 40× objective. Tendons of horses on day 36 after intralesional treatment with autologous conditioned serum (ACS) (horse no. 4793/10; a) and no treatment (control tendon, horse no. 6384/11; b). The number of round cell nuclei was higher in control tendons than in ACS-treated tendons 36 days after treatment. Scale bars = 10 μm. Tendon of horse no. 2241/09 1 day before (day 0, c) and 190 days after (d) intralesional treatment with ACS. Alignment of collagen fibres improved significantly between day 0 and day 190 after ACS treatment. Scale bars = 10 μm. Tendon of horse no. 2240/09 36 days (e) and 190 days (f) after intralesional treatment with ACS. 
Immunohistochemistry revealed a significant increase of collagen type I expression between day 36 and day 190 after ACS treatment. Scale bars = 20 μm Fig. 4 Histologic scores and collagen type I content of superficial digital flexor tendons. a Histologic scores for morphology of tenocyte nuclei in tendon biopsies taken from autologous conditioned serum (ACS)-treated versus control superficial digital flexor tendons (SDFTs) at different time points during the examination period of 190 days. b Histologic scores for fibre alignment in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. c Percentage of collagen type I content determined immunohistochemically in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. Day (d)0 = day the diagnosis was made; d36/d190 = 36/190 days after tendinopathy was diagnosed. *p < 0.05. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated)\nWith regard to development of fibre alignment during the healing of tendons treated with ACS, scores for this parameter were significantly lower (i.e. fibres were more regularly ordered; p = 0.04) in biopsies taken at the end of the observation period (day 190; Fig. 3d, Fig. 4b) than in those taken on day 0 (Fig. 3c, Fig. 4b). In control tendons, scores for fibre alignment (Fig. 4b), scores for morphology of tenocyte nuclei (Fig. 4a) and scores for cell density decreased significantly (i.e. tissue morphology improved; p < 0.05) between day 36 and day 190 biopsies.\nThere were no differences between the ACS group and controls with regard to collagen type I and collagen type III expression in biopsies taken on days 0, 36 and 190. 
Within the ACS group, the collagen type I content increased significantly (p = 0.03) between the biopsies taken on day 36 (Fig. 3e) and day 190 (Fig. 3f), while it remained at the same level (p > 0.05) in controls (Fig. 4c). The collagen type III content showed a tendency (p = 0.056) to decrease in the ACS group between samples taken on days 0 and 190." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Clinical examination", "B-mode ultrasonography", "Intralesional treatment, follow-up examinations and controlled exercise", "Needle biopsies and histologic examinations", "Statistical analysis", "Results", "Description and history of horses, intralesional injections", "Lameness", "Long-term follow-up", "Signs of inflammation", "B-mode ultrasonography", "Needle biopsies and histology", "Discussion", "Conclusions" ]
[ "Tendinopathy of the superficial digital flexor tendon (SDFT) is a common injury in Thoroughbred racehorses and other horse breeds and is regarded as a career-limiting disease with a high recurrence rate [1]. Numerous treatment modalities have shown limited success in improving tendon repair [2]. Regenerative therapy aims to restore structure and function after application of biocompatible materials, cells, and bioactive molecules [3, 4]. There is growing knowledge about the clinical effects of potentially regenerative substrates, e.g. mesenchymal stem cells (MSCs) [5, 6] and autologous blood products such as platelet rich plasma [7, 8] on equine tendinopathies. To date, however, ideal treatment strategies for naturally occurring tendinopathies have not been established [1, 2].\nAutologous conditioned serum (ACS; synonyms irap®, Orthokine®, Orthogen, Düsseldorf, Germany) is used for intralesional treatment of tendinopathy in horses but, to the best of our knowledge, its clinical effect is only documented anecdotally [8–10]. ACS is prepared by exposing whole blood samples to glass beads, which has been shown to stimulate the secretion of anti-inflammatory cytokines, including interleukin (IL)-4 and IL-10 and IL-1 receptor antagonist (IL-1Ra) in humans [11]. A recent investigation has shown that ACS from equine blood also contains high levels of IL-1Ra and IL-10 [12]. Equine studies have focused on the IL-1Ra-mediated anti-inflammatory effects of ACS [13]; however, in tendon healing, the high concentrations of growth factors such as insulin-like growth factor-1 (IGF-1) and transforming growth factor-beta (TGF-β) may be equally or more important [14–16]. Blood samples from different horses and the use of different kits for the preparation of ACS may lead to differences in the cytokine and growth factor concentration in vitro [12, 17]. 
However, the relevance of these differences for the clinical effect is unknown.\nACS was originally described to improve muscle regeneration in a murine muscle contusion model [18] and to exhibit anti-inflammatory effects in an experimental model of carpal osteoarthritis in horses [13] and in a placebo-controlled clinical trial in humans with knee osteoarthritis [19].\nThe rationale for the use of ACS to treat equine tendinopathies is based on several findings: 1) It was shown in an experimental study that the expression of IL-1β (and matrix metalloproteinase-13) is upregulated following overstrain injury of rat tendons, demonstrating that these molecules are important mediators in the pathogenesis of tendinopathy [15, 20]. 2) IL-1Ra protein and heterologous conditioned serum prepared with the irap® kit reduced the production of prostaglandin E2 by stimulated cells derived from macroscopically normal SDFTs in vitro [21]. 3) Growth factors concentrated in ACS, e.g. IGF-1 and TGF-β, have the potential to attract resident precursor cells, e.g. MSCs and tenoblasts, and to increase cell proliferation during tendon healing [14, 15, 17]. Rat Achilles tendons exposed to ACS in an experimental study showed an enhanced expression of the Col1A1 gene, which led to an increased secretion of type I collagen and accelerated recovery of tendon stiffness and improved histologic maturity of the repair tissue [22]. It was shown in another rodent Achilles tendon transection model that ACS generally increases the expression of basic fibroblast growth factor (bFGF), bone morphogenetic protein-12 and TGF-β1 [16], representing growth factors important for the process of tendon regeneration [15].\nThe process of tendon healing is mainly divided into three phases which merge into each other. The acute inflammatory phase (<10–14 days) is characterized by phagocytosis and demarcation of injured tendon tissue. 
A fibroproliferative callus is formed during the proliferative phase (4–45 days), while collagen fibrils are organised into tendon bundles during the remodelling or maturation phase (45–120 days; <3 months) [1, 23].\nThe aim of the present study was to support the hypothesis that a single intralesional ACS injection into SDFT lesions 1) has a clinically detectable anti-inflammatory effect, 2) leads to improved B-mode ultrasonographic parameters and 3) improves the organization of repair tissue.", "The inclusion criterion for client-owned adult horses was a history of acute uni- or bilateral SDFT tendinopathy (tendon disorder) without cutaneous injury but with clinical signs of inflammation being reported to be present for up to 14 days prior to the presentation at the Equine Clinic of the University of Veterinary Medicine, Hannover, Foundation, or to collaborating veterinarians. Horses were only included if the clients agreed to the study design and tendons had not received intralesional injections before. Injured limbs were randomly assigned to the group treated with ACS (n = 10) or to controls (n = 7). The study was carried out between 2009 and 2012 and approved by the animal welfare officer of the University of Veterinary Medicine Hannover, Foundation, Germany, and the ethics committee of the responsible German federal state authority in accordance with the German Animal Welfare Law (Lower Saxony State Office for Consumer Protection and Food Safety, Az. 33.9-42502-05-09A652).\n Clinical examination All horses were examined clinically on the day of first presentation (day 0).
This examination included visual assessment of lameness (5 grade score) [24] and signs of inflammation which were scored semiquantitatively by palpation (skin surface temperature in the palmar metacarpal region and sensitivity of the SDFT to palpation: 0 = no abnormality, 1 = mild abnormality, 2 = moderate abnormality, and 3 = severe abnormality; swelling of the SDFT was determined by palpation as an increase in diameter relative to normal tendon: 0 = no increase, 1 = increase by factor 1.5; 2 = increase by factor 1.5 to 2; 3 = increase by more than factor 2 [25]).\n B-mode ultrasonography All injured tendons were examined with B-mode ultrasound on the day of first presentation (day 0) in a transverse and longitudinal fashion with a 5–7.5 MHz linear scanner (Logiq e, GE Healthcare, Wauwatosa, WI, USA), according to the seven zone designations as described previously [26, 27].
Images were stored digitally and analysed according to the following parameters to determine the degree- and time-related changes of the lesions: maximal injury zone (MIZ), type of lesion determined on transverse images in the MIZ (core lesion = centrally located, focal hypo-/anechoic region; marginal lesion = peripherally located, focal hypo-/anechoic region; diffuse lesion = homogenous or heterogenous changes in echogenicity of the whole/most parts of the cross sectional area), summarized cross-sectional areas of the tendon (total cross-sectional area, T-CSA), summarized cross-sectional areas of the lesion (total lesion cross-sectional area, TL-CSA), and percentage of the lesion in the tendon [percent total lesion, %T-lesion = (TL-CSA / T-CSA) × 100]. Echogenicity and fibre alignment were graded semiquantitatively at each zone and the scores for all levels were summarized (total echo score, TES; total fibre alignment score, T-FAS). Echogenicity was assigned to 0 (normoechoic), 1 (hypoechoic), 2 (mixed echogenicity), and 3 (anechoic) [27, 28], and fibre alignment was graded according to the estimated percentage of parallel fibres in the lesion: 0 (>75 %), 1 (50–74 %), 2 (25–49 %), and 3 (<25 %) [27, 28]. Analyses of ultrasonograms were performed by one examiner (ML) blinded to the individual treatment modality.\n Intralesional treatment, follow-up examinations and controlled exercise Ten millilitres of autologous blood were collected by a single venipuncture of one jugular vein into an irap®-10 syringe system (Orthogen, Düsseldorf, Germany). Blood samples were incubated at 37 °C (range 6–9 hours). After centrifugation at 4,000 rotations per minute for 10 minutes (centrifuge: Universal 320, rotor: no. 1624, Hettich, Tuttlingen, Germany), serum was aseptically aspirated from the syringe and passed through a 0.22 μm syringe-driven filter unit (Millex-MP, Millipore Corporation, Carrigtwohill, Co. Cork, Ireland).
Depending on the size of the lesion as determined ultrasonographically (TL-CSA), tendons allocated to the ACS group received a single intralesional injection of 1–3 ml through a 22G needle (diameter 0.7 mm, length 30 mm) into the SDFT defect (day 1). Control tendons either received a single injection of a placebo, i.e. 1–3 ml saline through a 22G needle (diameter 0.7 mm, length 30 mm) or were untreated in case the owner declined an intralesional application of saline. Horses were sedated for the intralesional injections with detomidine (0.01–0.03 mg/kg intravenously) and butorphanol (0.04–0.05 mg/kg intravenously), and the medial and lateral palmar nerves were anaesthetized 2 cm distal to the carpometacarpal joints with 2 ml of a 2 % mepivacaine solution. After aseptic preparation of the skin, superficial digital tendon lesions were injected under sonographic guidance at a single site from the lateral aspect of the tendon perpendicularly to its long axis directly into the most hypoechoic areas, i.e. the MIZ while the limb was weight-bearing. All horses participated in a gradually increasing exercise programme as described previously [7]. The programme started the first day after the reported onset of SDFT tendinopathy. From week 25 to 27 horses were exercised for 25 minutes at a walk and for 15 minutes at a trot.\nHorses were re-examined clinically and ultrasonographically at regular intervals for 27 weeks on days 11, 22, 36, 50, 78, 106, 134, 162, and 190. Thereafter horse owners were advised to gradually increase exercise on an individual basis until the previous level of performance was reached. 
Data concerning signs of acute tendon injury, the level of performance horses reached and the discipline they were used for were obtained by telephone inquiry with horse owners or trainers until the preparation of the manuscript.\n Needle biopsies and histologic examinations On days 0, 36 and 190, one needle biopsy was taken aseptically from each SDFT at its MIZ with a 20G automated biopsy needle (Biopsiepistole PlusSpeed™, Peter Pflugbeil GmbH, Zorneding, Germany), with the needle entering the MIZ of the SDFT from distal at a 45° angle while the carpus was flexed approximately 90° and the metacarpophalangeal joint was moderately extended [29]. Pain reaction and intensity of bleeding from the biopsy site were evaluated using an established score [29]. The MIZ was recorded as distance from the accessory carpal bone (cm) so that repeat biopsies were taken from the same anatomic area as the day 0 biopsy while avoiding previous biopsy sites. Limbs were protected with a distal limb bandage for 2 days after taking needle biopsies and after intralesional injections. All needle biopsies were fixed in 10 % formalin, paraffin-embedded, sectioned at a thickness of 1–2 μm, mounted on microscope slides, and stained with haematoxylin and eosin.
A single histological slide of each biopsy was examined histologically according to a score described previously [7, 30]; findings were graded using a semiquantitative four-point scale (0 = normal appearance, 1 = slightly abnormal, 2 = moderately abnormal, and 3 = markedly abnormal) considering the following parameters: fibre structure (0 = linear, no interruption; 3 = short with early truncation), fibre alignment (0 = regularly ordered; 3 = no pattern identified), morphology of tenocyte nuclei (0 = flat; 3 = round), variations in cell density (0 = uniform; 3 = high regional variation), and vascularisation (0 = absent; 3 = high). Histological sections were independently scored by two observers blinded to horse and treatment modality (FG and ML). In total, five high power fields (40× magnification) per section were examined and scored. Mean score values determined by each observer were calculated for each parameter (see above) before the score values of both examiners were averaged.\nImmunohistochemical analysis of paraffin-embedded tissue sections was used to determine the formation of collagen type I and collagen type III. A commercially available mouse-anti-bovine antibody (NB600-450 anti-COL 1A1, Novus Biologicals, Littleton, CO, USA) and a rabbit-anti-bovine antibody (CL197P anti collagen type III alpha 1 chain, Acris Antibodies GmbH, Herford, Germany) were applied as primary antibodies against collagen type I and collagen type III, respectively. Secondary biotinylated antibodies were obtained from relevant species allowing binding to the primary antibody. Colour production from the chromogen diaminobenzidine tetrachloride was catalysed by streptavidin-conjugated peroxidase (avidin-biotin-complex-method) [31]. Finally, the sections were counterstained with haematoxylin. Immunohistochemical cross-reactivity of antibodies with uninjured equine tendon tissue was tested prior to analysis of the needle biopsies.
Positive control tissues included bovine aorta and bovine tendon for collagen type I antigen-specific antibodies and bovine skin for collagen type III antigen-specific antibodies. In negative control sections, primary antibodies were replaced by appropriately diluted Balb/c mouse ascites and rabbit serum, respectively.\nPhotomicrographs were taken from all immunostained slides (Color View II, 3.3 Megapixel CCD, Soft Imaging System GmbH, Münster, Germany). Quantitative morphometric analysis of the immunoreaction was achieved by determination of the immunostained area using image analysis software (analySIS® 3.1, Soft Imaging System GmbH, Münster, Germany). A threshold for a positive signal was defined and the percentage of positively immunostained area in the tissue section as a whole was calculated [32, 33].\n Statistical analysis Analysis of data was performed using SAS™ Version 9.3 (SAS Institute, Cary, NC, USA). The level of significance was set at p < 0.05. All values in the graphs are expressed as arithmetic mean values with standard error (X̄ ± SEM). The assumption of normality was tested using the Kolmogorov-Smirnov test and visual assessment of qq-plots of model residuals. In the case of rejection of normal distribution, distribution-free nonparametric methods were applied. Fisher’s exact test was applied to test the differences between groups on each examination day with regard to the parameters of degree of lameness, swelling and skin surface temperature.
To compare not-normally distributed parameters within a group between examination days, the permutation test for nonparametric analysis of repeated measurements with the Šidák post hoc test for multiple pairwise comparisons was used. The influence of groups and time points on ultrasonographic parameters (T-CSA, TL-CSA, %T-lesion, TES and T-FAS), histology scores and percentages of positively immunostained areas were tested using a two-way analysis of variance for independent samples (groups) and repeated measurements (dependent time points, biopsies), followed by the Tukey post hoc test for multiple pairwise comparisons. The intraclass correlation coefficient was calculated by analysis of variance components to test inter-observer repeatability of histological scores.", "All horses were examined clinically on the day of first presentation (day 0). This examination included visual assessment of lameness (5 grade score) [24] and signs of inflammation which were scored semiquantitatively by palpation (skin surface temperature in the palmar metacarpal region and sensitivity of the SDFT to palpation: 0 = no abnormality, 1 = mild abnormality, 2 = moderate abnormality, and 3 = severe abnormality; swelling of the SDFT was determined by palpation as an increase in diameter relative to normal tendon: 0 = no increase, 1 = increase by factor 1.5; 2 = increase by factor 1.5 to 2; 3 = increase by more than factor 2 [25]).", "All injured tendons were examined with B-mode ultrasound on the day of first presentation (day 0) in a transverse and longitudinal fashion with a 5–7.5 MHz linear scanner (Logiq e, GE Healthcare, Wauwatosa, WI, USA), according to the seven zone designations as described previously [26, 27].
Images were stored digitally and analysed according to the following parameters to determine the degree- and time-related changes of the lesions: maximal injury zone (MIZ), type of lesion determined on transverse images in the MIZ (core lesion = centrally located, focal hypo-/anechoic region; marginal lesion = peripherally located, focal hypo-/anechoic region; diffuse lesion = homogenous or heterogenous changes in echogenicity of the whole/most parts of the cross sectional area), summarized cross-sectional areas of the tendon (total cross-sectional area, T-CSA), summarized cross-sectional areas of the lesion (total lesion cross-sectional area, TL-CSA), and percentage of the lesion in the tendon [percent total lesion, %T-lesion = (TL-CSA / T-CSA) × 100]. Echogenicity and fibre alignment were graded semiquantitatively at each zone and the scores for all levels were summarized (total echo score, TES; total fibre alignment score, T-FAS). Echogenicity was assigned to 0 (normoechoic), 1 (hypoechoic), 2 (mixed echogenicity), and 3 (anechoic) [27, 28], and fibre alignment was graded according to the estimated percentage of parallel fibres in the lesion: 0 (>75 %), 1 (50–74 %), 2 (25–49 %), and 3 (<25 %) [27, 28]. Analyses of ultrasonograms were performed by one examiner (ML) blinded to the individual treatment modality.", "Ten millilitres of autologous blood were collected by a single venipuncture of one jugular vein into an irap®-10 syringe system (Orthogen, Düsseldorf, Germany). Blood samples were incubated at 37 °C (range 6–9 hours). After centrifugation at 4,000 rotations per minute for 10 minutes (centrifuge: Universal 320, rotor: no. 1624, Hettich, Tuttlingen, Germany), serum was aseptically aspirated from the syringe and passed through a 0.22 μm syringe-driven filter unit (Millex-MP, Millipore Corporation, Carrigtwohill, Co. Cork, Ireland). 
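As an illustration of how the ultrasonographic indices defined above (T-CSA, TL-CSA, %T-lesion, TES and T-FAS) are derived from per-zone measurements, the following minimal Python sketch aggregates invented example values for three of the seven zones; the zone data and field names are assumptions for demonstration only, not data from the study:

```python
def summarize_scan(zones):
    """Aggregate per-zone ultrasound readings into the summary indices
    described in the text: summed cross-sectional areas (T-CSA, TL-CSA),
    percent total lesion, and summed echo / fibre alignment scores."""
    t_csa = sum(z["csa"] for z in zones)            # total tendon cross-section
    tl_csa = sum(z["lesion_csa"] for z in zones)    # total lesion cross-section
    pct_t_lesion = (tl_csa / t_csa) * 100 if t_csa else 0.0
    tes = sum(z["echo"] for z in zones)             # echogenicity scores (0-3 per zone)
    t_fas = sum(z["fas"] for z in zones)            # fibre alignment scores (0-3 per zone)
    return {"T-CSA": t_csa, "TL-CSA": tl_csa,
            "%T-lesion": pct_t_lesion, "TES": tes, "T-FAS": t_fas}

# Invented example: a core lesion spanning two of three measured zones
zones = [
    {"csa": 1.10, "lesion_csa": 0.00, "echo": 0, "fas": 0},
    {"csa": 1.25, "lesion_csa": 0.40, "echo": 3, "fas": 2},
    {"csa": 1.20, "lesion_csa": 0.25, "echo": 2, "fas": 1},
]
result = summarize_scan(zones)
```

In the study, such indices were computed over all seven zone levels per tendon and compared between groups and time points.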
Depending on the size of the lesion as determined ultrasonographically (TL-CSA), tendons allocated to the ACS group received a single intralesional injection of 1–3 ml through a 22G needle (diameter 0.7 mm, length 30 mm) into the SDFT defect (day 1). Control tendons either received a single injection of a placebo, i.e. 1–3 ml saline through a 22G needle (diameter 0.7 mm, length 30 mm) or were untreated in case the owner declined an intralesional application of saline. Horses were sedated for the intralesional injections with detomidine (0.01–0.03 mg/kg intravenously) and butorphanol (0.04–0.05 mg/kg intravenously), and the medial and lateral palmar nerves were anaesthetized 2 cm distal to the carpometacarpal joints with 2 ml of a 2 % mepivacaine solution. After aseptic preparation of the skin, superficial digital tendon lesions were injected under sonographic guidance at a single site from the lateral aspect of the tendon perpendicularly to its long axis directly into the most hypoechoic areas, i.e. the MIZ while the limb was weight-bearing. All horses participated in a gradually increasing exercise programme as described previously [7]. The programme started the first day after the reported onset of SDFT tendinopathy. From week 25 to 27 horses were exercised for 25 minutes at a walk and for 15 minutes at a trot.\nHorses were re-examined clinically and ultrasonographically at regular intervals for 27 weeks on days 11, 22, 36, 50, 78, 106, 134, 162, and 190. Thereafter horse owners were advised to gradually increase exercise on an individual basis until the previous level of performance was reached. 
Data concerning signs of acute tendon injury, the level of performance horses reached and the discipline they were used for were obtained by telephone inquiry with horse owners or trainers until the preparation of the manuscript.", "On days 0, 36 and 190, one needle biopsy was taken aseptically from each SDFT at its MIZ with a 20G automated biopsy needle (Biopsiepistole PlusSpeed™, Peter Pflugbeil GmbH, Zorneding, Germany), with the needle entering the MIZ of the SDFT from distal at a 45° angle while the carpus was flexed approximately 90° and the metacarpophalangeal joint was moderately extended [29]. Pain reaction and intensity of bleeding from the biopsy site were evaluated using an established score [29]. The MIZ was recorded as distance from the accessory carpal bone (cm) so that repeat biopsies were taken from the same anatomic area as the day 0 biopsy while avoiding previous biopsy sites. Limbs were protected with a distal limb bandage for 2 days after taking needle biopsies and after intralesional injections. All needle biopsies were fixed in 10 % formalin, paraffin-embedded, sectioned at a thickness of 1–2 μm, mounted on microscope slides, and stained with haematoxylin and eosin. A single histological slide of each biopsy was examined histologically according to a score described previously [7, 30]; findings were graded using a semiquantitative four-point scale (0 = normal appearance, 1 = slightly abnormal, 2 = moderately abnormal, and 3 = markedly abnormal) considering the following parameters: fibre structure (0 = linear, no interruption; 3 = short with early truncation), fibre alignment (0 = regularly ordered; 3 = no pattern identified), morphology of tenocyte nuclei (0 = flat; 3 = round), variations in cell density (0 = uniform; 3 = high regional variation), and vascularisation (0 = absent; 3 = high). Histological sections were independently scored by two observers blinded to horse and treatment modality (FG and ML).
In total, five high power fields (40× magnification) per section were examined and scored. Mean score values determined by each observer were calculated for each parameter (see above) before the score values of both examiners were averaged.\nImmunohistochemical analysis of paraffin-embedded tissue sections was used to determine the formation of collagen type I and collagen type III. A commercially available mouse-anti-bovine antibody (NB600-450 anti-COL 1A1, Novus Biologicals, Littleton, CO, USA) and a rabbit-anti-bovine antibody (CL197P anti collagen type III alpha 1 chain, Acris Antibodies GmbH, Herford, Germany) were applied as primary antibodies against collagen type I and collagen type III, respectively. Secondary biotinylated antibodies were obtained from relevant species allowing binding to the primary antibody. Colour production from the chromogen diaminobenzidine tetrachloride was catalysed by streptavidin-conjugated peroxidase (avidin-biotin-complex-method) [31]. Finally, the sections were counterstained with haematoxylin. Immunohistochemical cross-reactivity of antibodies with uninjured equine tendon tissue was tested prior to analysis of the needle biopsies. Positive control tissues included bovine aorta and bovine tendon for collagen type I antigen-specific antibodies and bovine skin for collagen type III antigen-specific antibodies. In negative control sections, primary antibodies were replaced by appropriately diluted Balb/c mouse ascites and rabbit serum, respectively.\nPhotomicrographs were taken from all immunostained slides (Color View II, 3.3 Megapixel CCD, Soft Imaging System GmbH, Münster, Germany). Quantitative morphometric analysis of the immunoreaction was achieved by determination of the immunostained area using image analysis software (analySIS® 3.1, Soft Imaging System GmbH, Münster, Germany).
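The thresholding step of this morphometric quantification (counting the fraction of pixels whose staining signal exceeds a defined cut-off) can be illustrated with a minimal sketch. The study used the commercial analySIS® 3.1 software, not this code; the toy intensity grid and the threshold value of 128 are assumptions for demonstration:

```python
def percent_positive_area(image, threshold):
    """image: 2D list of staining intensities (0-255).
    Returns the percentage of pixels at or above the threshold,
    analogous to the percent positively immunostained area."""
    pixels = [px for row in image for px in row]
    positive = sum(1 for px in pixels if px >= threshold)
    return 100.0 * positive / len(pixels)

# Toy 3x3 "section": four strongly stained pixels out of nine
image = [
    [10, 200, 180],
    [30, 220, 15],
    [190, 25, 40],
]
pct = percent_positive_area(image, threshold=128)
```

In practice the threshold would be calibrated against the positive and negative control sections before batch analysis.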
A threshold for a positive signal was defined and the percentage of positively immunostained area in the tissue section as a whole was calculated [32, 33].

Data were analysed using SAS™ Version 9.3 (SAS Institute, Cary, NC, USA). The level of significance was set at p < 0.05. All values in the graphs are expressed as arithmetic means with standard error (x̄ ± SEM). The assumption of normality was tested using the Kolmogorov–Smirnov test and visual assessment of qq-plots of model residuals. Where normality was rejected, distribution-free nonparametric methods were applied. Fisher's exact test was used to test differences between groups on each examination day with regard to degree of lameness, swelling and skin surface temperature. To compare non-normally distributed parameters within a group between examination days, the permutation test for nonparametric analysis of repeated measurements with the Šidák post hoc test for multiple pairwise comparisons was used. The influence of group and time point on ultrasonographic parameters (T-CSA, TL-CSA, %T-lesion, TES and T-FAS), histology scores and percentages of positively immunostained area was tested using a two-way analysis of variance for independent samples (groups) and repeated measurements (dependent time points, biopsies), followed by the Tukey post hoc test for multiple pairwise comparisons.
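For intuition, the Šidák adjustment used for multiple pairwise comparisons simply inflates each raw p-value according to the number of comparisons m (a generic sketch of the formula, not the SAS procedure actually run):

```python
# Šidák correction: a raw p-value p across m comparisons is
# adjusted to 1 - (1 - p)**m, controlling family-wise error.
def sidak_adjust(p: float, m: int) -> float:
    return 1.0 - (1.0 - p) ** m

# e.g. a raw p = 0.01 across m = 5 pairwise comparisons
adjusted = sidak_adjust(0.01, 5)
print(round(adjusted, 6))  # 0.04901
```

A raw p = 0.01 thus stays below the 0.05 threshold after adjustment for five comparisons, but only barely.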
The intraclass correlation coefficient was calculated by analysis of variance components to test the inter-observer repeatability of histological scores.

Description and history of horses, intralesional injections

Seventeen limbs of 15 horses between 2 and 19 years old (mean 8.46 years) met the inclusion criteria. Ten limbs were included in the ACS-treated group (ACS group) and seven served as controls. Limbs allocated to the ACS group belonged to five Warmbloods (50 %), four Thoroughbreds (40 %) and one Arabian (10 %). The limbs of five Warmbloods (71.42 %), one Thoroughbred (14.28 %) and one Half-blood (14.28 %) were included as controls. Of these, one Thoroughbred and one Warmblood with bilateral SDFT lesions served for both groups (Table 1). The horses' history (i.e. high-speed exercise, increased age >12 years) suggested strain-induced tendinopathy in at least 6 of 10 SDFTs (60 %) allocated to the ACS group and in at least 4 of 7 (57 %) control SDFTs. Two tendons had a definitive history of blunt external trauma.
One was included in the ACS group and one served as control.

Table 1 Description, clinical history, diagnostic data, and treatment of 15 horses with 17 SDFT lesions

Horse no. | Breed | Age (years) | Gender | Use | Duration of tendinopathy until initial examination (days) | Reported initiating event | Limb | Maximal injury zone | Lesion type | Treatment

ACS group
2241/09 | Thoroughbred | 2 | S | Racing | 2 | Training | RF | 2b | Diffuse | ACS
2240/09 | Thoroughbred | 3 | S | Racing | 2–3 | Training | RF | 2b | Core | ACS
2489/09 (a) | Thoroughbred | 4 | M | Racing | 7 | Racing | RF | 1b | Core | ACS
2539/09 | Warmblood | 3 | S | Dressage | 14 | Blunt trauma | RF | 2b | Marginal | ACS
1672/10 | Arabian | 17 | G | Pleasure | 14 | Running free | RF | 1b | Core | ACS
6264/10 (b) | Warmblood | 8 | G | Dressage | 9 | Unknown | RF | 3a | Marginal | ACS
6335/10 | Warmblood | 10 | G | Pleasure | 10 | Unknown | LF | 2b | Diffuse | ACS
4793/10 | Thoroughbred | 3 | S | Racing | 7 | Training | LF | 2b | Core | ACS
6263/10 | Warmblood | 20 | M | Pleasure | 14 | Stumbling at cross country ride | RF | 1b | Core | ACS
2378/10 | Warmblood | 11 | M | Pleasure | 13 | At ride | LF | 2b | Diffuse | ACS
Mean age: 8.1

Controls
2489/09 (a) | Thoroughbred | 4 | M | Racing | 7 | Racing | LF | 1b | Diffuse | No
6264/10 (b) | Warmblood | 8 | G | Dressage | 9 | Unknown | LF | 2a | Core | Saline
6111/10 | Warmblood | 5 | M | Jumping | 14 | Kicking himself over the jump | RF | 1b | Marginal | No
6265/10 | Warmblood | 5 | M | Pleasure | 7 | Unknown | RF | 1b | Marginal | Saline
6383/11 | Warmblood | 18 | M | Pleasure | 10 | At cross country ride | LF | 2a | Marginal | No
5461/11 | Warmblood | 14 | G | Police horse | 4 | At gallop on beach | LF | 2b | Diffuse | No
6384/11 | Half-blood | 8 | G | Eventing | 1 | After eventing competition | LF | 2a | Core | No
Mean age: 8.86

(a, b) Horses had bilateral SDFT lesions and served for the ACS group and as control. ACS = autologous conditioned serum (treated with single intralesional injection of autologous conditioned serum); G = gelding; LF = left front limb; M = mare; RF = right front limb; S = stallion; Saline = treated with single intralesional saline injection; SDFT = superficial digital flexor tendon.
In total, ten SDFTs were injected with ACS. Five control SDFTs were not treated and two control tendons received a single intralesional injection of saline.
Lameness

On day 0, the mean degree of lameness was 0.8 in the ACS group and 1.42 in control limbs. One of the two horses with bilateral tendinopathy that served for the ACS group and as a control was not lame; the second showed a unilateral front limb lameness (the SDFT in the lame limb was treated with ACS). The mean degree of lameness did not differ between groups on any day of examination (p > 0.05). Regardless of treatment modality, all horses became sound by day 36. Compared to day 0, lameness decreased significantly by day 11 (p = 0.046) within the ACS group and by day 36 (p = 0.021) in limbs serving as controls (Fig. 1a).

Fig. 1 Degree of lameness and palpable swelling of tendons. a Degree of lameness of control limbs and those treated with autologous conditioned serum (ACS) over time. b Scores for palpable swelling of ACS-treated and control superficial digital flexor tendons (SDFTs) over time. Mean ± SE. Different letters (ACS normal, control italic) indicate significant differences (p < 0.05) within treatment group. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow: day (d)0, diagnosis and first tendon biopsy; red arrow: d1, intralesional injection of ACS/control substance; blue arrows: d36/d190, second/third tendon biopsy.

Long-term follow-up

Recurrence of tendon injury was not reported in any horse 2 to 4 years post-diagnosis. Of the eight horses with SDFTs allocated to the ACS group, five (63 %), among them three racehorses, returned to their previous or a higher performance level, two (25 %) died of causes unrelated to tendinopathy, and one (12 %) was retired due to osteoarthritis of the interphalangeal joints after the observation period. Of the two horses that served for both the ACS group and the controls, one did not resume training and was retired as a broodmare; the other performed as a dressage horse. Of the five horses with tendons serving only as controls, four (80 %) performed in their discipline at the previous or a higher level, and one (20 %) was lost to follow-up.

Signs of inflammation

No statistically significant differences between groups were observed during the entire observation period, including day 0, with regard to scores for swelling, skin surface temperature and sensitivity to palpation. Swelling scores of the SDFT region decreased significantly in the ACS group (p = 0.005) between day 50 (mean score 1.1) and day 78 (mean score 0.7) and remained reduced until the end of the observation period (Fig. 1b). In controls, swelling scores did not decrease significantly during the 27 weeks. Skin surface temperature and sensitivity to palpation scores decreased up to day 22 within both groups and remained at a low level until day 190.
B-mode ultrasonography

The horses presented with core lesions (seven limbs), marginal lesions (five limbs) or diffuse lesions (five limbs) of the SDFT (Table 1). The MIZ of most lesions was located in zone 2b (41.17 % of limbs), followed by zone 1b (35.29 %), zone 2a (17.64 %) and zone 3a (5.88 %).

On day 0, the mean %T-lesion was 21.73 ± 7.16 in the ACS group and 18.51 ± 5.07 in controls. This parameter was significantly lower (p < 0.05) in the ACS group than in controls on days 78, 106 and 162 (Fig. 2a). TES were significantly lower in the ACS group versus controls on days 78 and 106 (Fig. 2b). There was no difference in T-CSA, TL-CSA (Fig. 2c) or T-FAS between the groups at any time point.

Fig. 2 Ultrasonographic measurements. a Percent total lesion (%T-lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, control italic) indicate significant differences (p < 0.05) within treatment group. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow: day (d)0, diagnosis and first tendon biopsy; red arrow: d1, intralesional injection of ACS/control substance; blue arrows: d36/d190, second/third tendon biopsy.

Mean %T-lesion decreased continuously over time within the ACS group, becoming significant (p = 0.02) for the first time on day 78 compared to day 0; thereafter this parameter remained below 10 %. In control tendons, mean %T-lesion remained at a similarly elevated level (p > 0.05) throughout the entire observation period (Fig. 2a).

Compared to day 0, the mean TES decreased significantly on day 22 and again between days 22 and 78 in the ACS group (p < 0.05), while in controls the TES first decreased significantly (p < 0.05) on day 78 (Fig. 2b).

Mean T-CSA did not change significantly throughout the observation period in either group. Within the ACS group, TL-CSA was significantly lower (p < 0.05) from day 78 until the end of the examination period compared to day 0, while values in control tendons remained at a similar level from day 0 to 190 (Fig. 2c). Mean T-FAS decreased significantly between day 0 and day 78 (p = 0.023) within the ACS group; in controls this parameter decreased significantly on day 134 compared to day 0 (p = 0.04).

Needle biopsies and histology

A total of 51 needle biopsies were taken. Pain reaction was mild in 78.43 % of the procedures, mild to moderate in 17.64 % and moderate in 3.92 %. A moderate to severe or severe pain reaction was not observed. No bleeding was observed after taking biopsies in 25.49 % of cases, mild bleeding occurred in 58.82 % and moderate bleeding in 15.68 %.
Severe bleeding was not observed.

Of the 51 biopsies, 47 were available for tendon histology [7, 30]. Four biopsies from severely oedematous lesions were not evaluated due to limited tissue content. The intraclass correlation for inter-observer repeatability was 0.72 for fibre structure, 0.88 for fibre alignment, 0.84 for nuclei morphology and 0.92 for variations in cell density.

Scores for tenocyte nuclei morphology were significantly lower (i.e. cell nuclei more flattened; p = 0.01) in the ACS group (Fig. 3a) than in controls (Fig. 3b) on day 36 (Fig. 4a). Scores for cell density showed a tendency to be lower (i.e. more uniform) in the ACS group than in controls on day 36 (p = 0.052). Scores for fibre structure, fibre alignment (Fig. 4b) and vascularisation, and subscores for structural integrity and metabolic activity, did not differ between the ACS group and controls at any time point.

Fig. 3 Longitudinal sections of tendon biopsies from superficial digital flexor tendons with tendinopathy. a–d Histopathological specimens stained with haematoxylin and eosin using a 40× objective. Tendons of horses on day 36 after intralesional treatment with autologous conditioned serum (ACS) (horse no. 4793/10; a) and no treatment (control tendon, horse no. 6384/11; b). The number of round cell nuclei was higher in control tendons than in ACS-treated tendons 36 days after treatment. Scale bars = 10 μm. Tendon of horse no. 2241/09 one day before (day 0; c) and 190 days after (d) intralesional treatment with ACS. Alignment of collagen fibres improved significantly between day 0 and day 190 after ACS treatment. Scale bars = 10 μm. Tendon of horse no. 2240/09 36 days (e) and 190 days (f) after intralesional treatment with ACS. Immunohistochemistry revealed a significant increase of collagen type I expression between day 36 and day 190 after ACS treatment. Scale bars = 20 μm.

Fig. 4 Histologic scores and collagen type I content of superficial digital flexor tendons. a Histologic scores for morphology of tenocyte nuclei in tendon biopsies taken from autologous conditioned serum (ACS)-treated versus control superficial digital flexor tendons (SDFTs) at different time points during the 190-day examination period. b Histologic scores for fibre alignment in tendon biopsies taken from ACS-treated versus control SDFTs at different time points. c Percentage of collagen type I content determined immunohistochemically in tendon biopsies taken from ACS-treated versus control SDFTs at different time points. Day (d)0 = day the diagnosis was made; d36/d190 = 36/190 days after tendinopathy was diagnosed. *p < 0.05. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated).

With regard to the development of fibre alignment during healing of ACS-treated tendons, scores for this parameter were significantly lower (i.e. fibres more regularly ordered; p = 0.04) in biopsies taken at the end of the observation period (day 190; Fig. 3d, Fig. 4b) than in those taken on day 0 (Fig. 3c, Fig. 4b). In control tendons, scores for fibre alignment (Fig. 4b), morphology of tenocyte nuclei (Fig. 4a) and cell density decreased significantly (i.e. tissue morphology improved; p < 0.05) between the day 36 and day 190 biopsies.

There were no differences between the ACS group and controls in collagen type I and collagen type III expression in biopsies taken on days 0, 36 and 190. Within the ACS group, collagen type I content increased significantly (p = 0.03) between the biopsies taken on day 36 (Fig. 3e) and day 190 (Fig. 3f), while it remained at the same level (p > 0.05) in controls (Fig. 4c). Collagen type III content showed a tendency (p = 0.056) to decrease in the ACS group between samples taken on days 0 and 190.
One was included in the ACS group and one served as control.

Table 1. Description, clinical history, diagnostic data, and treatment of 15 horses with 17 SDFT lesions

| Horse number | Breed | Age (years) | Gender | For which purpose used | Reported duration of SDFT tendinopathy until initial examination (days) | Reported initiating event | Limb affected | Maximal injury zone | Lesion type | Treatment |
|---|---|---|---|---|---|---|---|---|---|---|
| ACS group |  |  |  |  |  |  |  |  |  |  |
| 2241/09 | Thoroughbred | 2 | S | Racing | 2 | Training | RF | 2b | Diffuse | ACS |
| 2240/09 | Thoroughbred | 3 | S | Racing | 2–3 | Training | RF | 2b | Core | ACS |
| 2489/09 (a) | Thoroughbred | 4 | M | Racing | 7 | Racing | RF | 1b | Core | ACS |
| 2539/09 | Warmblood | 3 | S | Dressage | 14 | Blunt trauma | RF | 2b | Marginal | ACS |
| 1672/10 | Arabian | 17 | G | Pleasure | 14 | Running free | RF | 1b | Core | ACS |
| 6264/10 (b) | Warmblood | 8 | G | Dressage | 9 | Unknown | RF | 3a | Marginal | ACS |
| 6335/10 | Warmblood | 10 | G | Pleasure | 10 | Unknown | LF | 2b | Diffuse | ACS |
| 4793/10 | Thoroughbred | 3 | S | Racing | 7 | Training | LF | 2b | Core | ACS |
| 6263/10 | Warmblood | 20 | M | Pleasure | 14 | Stumbling at cross country ride | RF | 1b | Core | ACS |
| 2378/10 | Warmblood | 11 | M | Pleasure | 13 | At ride | LF | 2b | Diffuse | ACS |
| Mean |  | 8.1 |  |  |  |  |  |  |  |  |
| Controls |  |  |  |  |  |  |  |  |  |  |
| 2489/09 (a) | Thoroughbred | 4 | M | Racing | 7 | Racing | LF | 1b | Diffuse | No |
| 6264/10 (b) | Warmblood | 8 | G | Dressage | 9 | Unknown | LF | 2a | Core | Saline |
| 6111/10 | Warmblood | 5 | M | Jumping | 14 | Kicking himself over the jump | RF | 1b | Marginal | No |
| 6265/10 | Warmblood | 5 | M | Pleasure | 7 | Unknown | RF | 1b | Marginal | Saline |
| 6383/11 | Warmblood | 18 | M | Pleasure | 10 | At cross country ride | LF | 2a | Marginal | No |
| 5461/11 | Warmblood | 14 | G | Police horse | 4 | At gallop on beach | LF | 2b | Diffuse | No |
| 6384/11 | Half-blood | 8 | G | Eventing | 1 | After eventing competition | LF | 2a | Core | No |
| Mean |  | 8.86 |  |  |  |  |  |  |  |  |

(a), (b) Horses had bilateral SDFT lesions and served for the ACS group and as control. ACS = autologous conditioned serum (treated with single intralesional injection of autologous conditioned serum); G = gelding; LF = left front limb; M = mare; RF = right front limb; S = stallion; Saline = treated with single intralesional saline injection; SDFT = superficial digital flexor tendon
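As a quick consistency check, the group sizes and mean ages reported in Table 1 can be recomputed from the table rows. This is an illustrative sketch, not part of the study's analysis; the ages below are transcribed from the table (ages per limb, so the two bilateral horses appear in both groups).

```python
# Ages in years per limb, transcribed from Table 1 (illustrative check only).
acs_ages = {"2241/09": 2, "2240/09": 3, "2489/09": 4, "2539/09": 3,
            "1672/10": 17, "6264/10": 8, "6335/10": 10, "4793/10": 3,
            "6263/10": 20, "2378/10": 11}
control_ages = {"2489/09": 4, "6264/10": 8, "6111/10": 5, "6265/10": 5,
                "6383/11": 18, "5461/11": 14, "6384/11": 8}

mean_acs = sum(acs_ages.values()) / len(acs_ages)
mean_control = sum(control_ages.values()) / len(control_ages)

print(len(acs_ages), round(mean_acs, 2))          # 10 limbs, mean 8.1
print(len(control_ages), round(mean_control, 2))  # 7 limbs, mean 8.86
```

Running this reproduces the group sizes (10 ACS limbs, 7 control limbs) and the per-group mean ages given in the table (8.1 and 8.86 years).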
In total ten SDFTs were injected with ACS. Five control SDFTs were not treated and two control tendons received a single intralesional injection of saline.", "On day 0, the mean degree of lameness was 0.8 in the ACS group. In control limbs, the mean degree of lameness was 1.42 on day 0. One of the two horses with bilateral tendinopathy that served for the ACS group and as a control was not lame. The second one showed a unilateral front limb lameness (SDFT in the lame limb was treated with ACS). The mean degree of lameness did not differ between groups on any day of examination (p > 0.05). Regardless of treatment modality all horses became sound by day 36. Compared to day 0, lameness decreased significantly by day 11 (p = 0.046) within the ACS group and it decreased significantly by day 36 (p = 0.021) in limbs serving as controls (Fig. 1a).
Fig. 1 Degree of lameness and palpable swelling of tendons. a Degree of lameness of control limbs and those treated with autologous conditioned serum (ACS) over time. b Scores for palpable swelling of ACS-treated and control superficial digital flexor tendons (SDFTs) over time. Mean ± SE. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy", "Recurrence of tendon injury was not reported in any horse after 2 to 4 years post-diagnosis. Of the eight horses with SDFTs being allocated to the ACS group, five horses (63 %), among these three racehorses, returned to their previous or a higher performance level, two horses (25 %) died of reasons unrelated to tendinopathy, and one horse (12 %) was retired due to osteoarthritis of interphalangeal joints after the observation period. One of the horses which was included in the ACS group and served as a control did not resume training and was retired as a broodmare; the second one performed as a dressage horse. Of the five horses with tendons serving only as control, four individuals (80 %) performed in their discipline at the previous or a higher level, and one horse (20 %) was lost to follow-up.", "No statistically significant differences between groups were observed during the entire observation period, including day 0, with regard to scores for swelling, skin surface temperature and sensitivity to palpation. Swelling scores of the SDFT region decreased significantly in the ACS group (p = 0.005) between day 50 (mean score 1.1) and day 78 (mean score 0.7) and remained reduced until the end of the observation period (Fig. 1b).
In controls, swelling scores did not decrease significantly during 27 weeks. Skin surface temperature and sensitivity to palpation scores decreased up to day 22 within both groups and remained at a low level until day 190.", "The horses included presented with core lesions (seven limbs), marginal lesions (five limbs) or diffuse lesions (five limbs) of the SDFT (Table 1).
The MIZ of most lesions was located in zone 2b (41.17 % of limbs), followed by zone 1b (35.29 % of limbs), zone 2a (17.64 % of limbs) and zone 3a (5.88 % of limbs).
The mean %T-lesion was 21.73 ± 7.16 in the ACS group and 18.51 ± 5.07 in controls on day 0. This parameter was significantly lower (p < 0.05) in the ACS group than in controls on days 78, 106 and 162 (Fig. 2a). TES were significantly lower in the ACS group versus controls on days 78 and 106 (Fig. 2b). There was no difference in T-CSA, TL-CSA (Fig. 2c) or T-FAS between the groups at any time point.
Fig. 2 Ultrasonographic measurements. a Percent total lesion (%T-Lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, and control italic) indicate significant differences (p < 0.05) within treatment group. ACS-group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy
Mean %T-lesion showed a continuous decrease over time within the ACS group which was significant (p = 0.02) for the first time on day 78 compared to day 0. After this period, this parameter remained below the 10 % range. In control tendons, mean %T-lesion continuously remained on a similar elevated level (p > 0.05) throughout the entire observation period (Fig. 2a).
Compared to day 0, the mean TES decreased significantly on day 22 and again between days 22 and 78 in the ACS group (p < 0.05), while in controls compared to day 0 the TES decreased significantly (p < 0.05) the first time on day 78 (Fig. 2b).
Mean T-CSA did not change significantly throughout the observation period in either group. Regarding the progression of TL-CSA within the ACS group over time, it was significantly lower (p < 0.05) from day 78 onwards until the end of the examination period compared to levels on day 0, while values from control tendons remained on a similar level from day 0 to 190 (Fig. 2c). Mean T-FAS decreased significantly between day 0 and day 78 (p = 0.023) within the ACS group; in controls this parameter decreased significantly on day 134 compared to day 0 (p = 0.04).", "Results of the present study show that a single intralesional ACS injection into SDFT lesions leads to a temporary improvement of ultrasonographic parameters [27, 28]. Transient flattened morphology of tenocyte nuclei and an increased collagen type I expression in ACS-treated tendons over time are indicative of reduced proliferation and increased differentiation of this cell type, respectively [1, 34, 35]. The therapeutic effect of ACS treatment is also demonstrated by an earlier decrease in lameness as compared to controls [2].
The history of the horses was suggestive of strain-induced tendinopathy in the majority of tendons treated with either modality. However, it remains unclear whether the effects shown in the current study equally apply to tendinopathies related to other etiologies. Although the reported duration of tendinopathy of up to 2 weeks before first examination (day 0) referred to clinical signs in the present study, subclinical degeneration is known to precede obvious clinical symptoms of strain-induced tendinopathies, especially in equine athletes [36]. It cannot be excluded that at least some of the tendons on day 0 were not in the inflammatory, but in the early proliferative phase of tendon healing, although neither B-mode ultrasonography nor histology yielded clear evidence of potential chronicity of the tendinopathies.
As an alternative, either more extensive tendon biopsies or ultrasonographic tissue characterization as a noninvasive diagnostic tool could have been used initially to further determine the age of the lesions [37, 38].
As tendon composition and biomechanical properties may vary significantly between horses [39], intraindividual controls (control = contralateral limb) are preferred in experimental settings [7]. For these reasons, both front limbs of two horses showing clinical signs of bilateral SDFT tendinopathy were included in the ACS group and as controls. Unfortunately, this could not be realised in more horses. Only two out of five clients who agreed to their horse or the respective limb being included as a control accepted an intralesional injection of these control tendons. Thus, the objective of creating two separate control groups, one with the lesion left untreated and one with an intralesional saline injection, could not be achieved. The choice of treatment modality for control groups in clinical and experimental trials remains controversial. On the one hand, it is of interest to compare the effect of controlled exercise plus the effect of the substrate injected intralesionally with the effect of controlled exercise alone (= argument against intralesional injection of a control substance into control tendons). On the other hand, the mere puncture and needle decompression of acute tendon defects may have a therapeutic effect independent of the substrate injected [2]. Against that background, it seems preferable to treat control tendons with sham injections to demonstrate the effect of the substrate injected [5, 7] (ACS in the present case).
Tendon biopsies were used because clinical assessment of tendon healing and B-mode ultrasonography alone are limited with regard to sensitivity and reproducibility [40] and longitudinal needle biopsies were established in human [41, 42] and equine surgery [29, 43] as minimally invasive and well-tolerated techniques.
They allow an insight into tendon architecture as well as the immunohistochemical detection of, for example, collagen type I and III [44]. Disadvantages, however, are the potentially therapeutic, albeit unknown, effect of the biopsy process on tendon healing [2, 45], their limited reproducibility and the relatively small volume of tendon tissue harvested [29, 43].\nAlthough mean degree of lameness did not differ between groups, this does not necessarily imply similar functional repair of the lesions, since SDFTs are generally more exposed to maximal load during heavy athletic activities than during trot, i.e. later during rehabilitation [46]. The observation that, compared to day 0, a significant decrease in lameness occurred earlier in the ACS group, i.e. until day 11, compared to the controls (day 36) may be influenced by effects attributed to ACS [8, 11, 13].\nDetection of lameness in horses with bilateral SDFT lesions may be more challenging than detection of unilateral gait abnormality [47]. It cannot be excluded that the two horses with bilateral SDFT tendinopathy may have shown bilateral lameness or an additional contralateral lameness, respectively, if diagnostic analgesia had been performed.\nClinical signs of inflammation were monitored using semiquantitative clinical score systems which may be subject to some inaccuracy. This could have been improved by the use of computerized gait analysis, thermography [48] and measurements of the metacarpal circumference in combination with ultrasonography [49]. None of the inflammatory signs, including swelling, differed between groups, but palpable swelling decreased significantly within the ACS group between days 50 and 78 in contrast to controls. On the one hand this may be attributed to auto- and paracrine effects of ACS on endogenous growth factor expression, since in a rodent Achilles tendon transection model bFGF expression was enhanced but not before 8 weeks, i.e. delayed after ACS injection [16]. 
On the other hand, controlled exercise exerted anti-inflammatory effects on tendons from both groups [50], which did not, however, lead to a significant decrease in swelling in controls.
The decrease in palpable swelling in the ACS group between days 50 and 78 correlates positively with the ultrasonographic finding that TL-CSA and %T-lesion decreased only in the ACS group between day 0 (before treatment) and day 78. T-CSA, however, remained unchanged throughout the entire observation period in both groups. This indicates that the decrease in swelling in the ACS group reflects a decrease in cutaneous, subcutaneous and peritendinous swelling rather than altered tendon thickness in the late proliferation and early remodelling phase (i.e. until around day 45). This may be due to a therapeutic effect of inadvertent reflux of small volumes of ACS into the subcutis during intralesional injection. Ultrasonographic measurements of extratendinous swelling are challenging and were not included in the present study, although they could have been helpful to further confirm findings of palpation. In contrast to the results of this study, rat Achilles tendons showed an increase in tendon thickness after ACS treatment compared to the control tendons [22]. However, comparability is limited since the rat tendons, in contrast to the present study, were sutured and received three treatments of ACS.
The present study shows that, compared to control tendons, a single intralesional injection of ACS leads to a significant reduction of %T-lesion and an increase in echogenicity (TES) 78 and 106 days after treatment. This finding corresponds to an earlier (until day 22) increase in TES and an earlier (until day 78) increase in the percentage of parallel orientated fibre bundles (T-FAS) in the ACS group compared to controls. These effects could be the result of stimulation of repair tissue, i.e.
improved fibrillogenesis in the early proliferative phase of tendon healing (4–45 days after injury) [23, 28, 37] in which most horses were presented and treated. This may have been a consequence of the potential IL-1Ra-mediated anti-inflammatory action which is attributed to ACS by several authors [8, 11, 13], although IL-1Ra concentrations in ACS were not determined in the present study. Another potential pathway may be the supplementation of growth factors, such as IGF-1 and TGF-β, which are supposed to be increased in equine ACS [12, 16, 21]. IGF-1 is known to be decreased for approximately 2 weeks in experimentally induced tendinopathy and a beneficial bolstering effect of exogenous IGF-1 on low endogenous IGF-1 production during the early repair phase of tendinopathy has been hypothesized [34]. ACS has been shown to display significant effects on the endogenous expression of growth factors potentially via auto- and paracrine pathways [16]. It remains unclear why the significant differences between groups with regard to %T-lesion and TES were not consistent until the end of the observation period despite tendencies to significance. This may be attributed to a time-limited effect of ACS (see above). Ultrasonographic tissue characterization has been established in recent years as a more precise alternative to B-mode ultrasonography to monitor the process of tendon healing [37, 38], particularly if only a probe with a relatively low resolution is available as in the present study.\nWith regard to histologic scores, a difference between groups was seen at day 36. Here, cell nuclei were flattened in the ACS group compared to controls, which is suggestive of decreased tenocyte proliferation in the late proliferative phase (4–45 days) [1, 23, 34, 35] as a response to the ACS injection. 
In agreement with this, it has been shown that tendon fibroblasts with a spindle-shaped nucleus have reduced apoptotic and proliferative indices, as demonstrated in human patellar tendons [35]. A consequence might be a decreased cellular production of inelastic collagen type III, which has been described to peak between 3 to 6 weeks after injury in equine experimental studies [34, 51]. However, immunohistochemistry in the present study revealed no difference of collagen type III expression between groups which might be due to considerable variations of different types of collagen between individual lesions, as described for naturally developed lesions in horses [1, 44].\nThe more favourable development of collagen type I expression in the ACS group between days 36 and 190 indicates qualitative improvement [34, 51], such as increased tensile strength of the repair tissue in the remodelling or maturation phase (45–120 days) [36], which is potentially caused by the mechanisms mentioned, i.e. an IL-1Ra-mediated anti-inflammatory mechanism or the supplementation of growth factors, such as IGF-1 and TGF-β. Collagen type I was seen to be elevated for 6 months after injury in equine experimental studies [34]. This rather reflects the progress in control tendons of the present study and correlates with previous findings in naturally injured equine tendons [44]. In contrast to an experimental Achilles tendinopathy model using ACS-treated rats [22], no difference in collagen type I expression was detected between groups in the present study. However, rat Achilles tendons were treated three times at 24-hour intervals with the first injection 24 hours after induction of the lesion, i.e. in the acute inflammatory phase of tendon healing. By contrast, tendons in the present study received only a single intralesional injection of ACS up to 14 days after the onset of clinical symptoms, i.e. mostly at the end of or even after the acute inflammatory phase. 
In the latter investigation, real-time quantitative polymerase chain reaction was used, which allows quantification of mRNA transcription of different collagen types, provided that enough tissue is available. Cytokines such as IL-1Ra and the growth factors IGF-1 and TGF-β have a short half-life and they may be degraded and consumed within a short time period after exogenous application [14–16]. Nevertheless, tendon healing may not only be enhanced by direct binding of cytokines and growth factors to cell surface receptors, but also due to indirect effects by stimulation of endogenous production of growth factors [13, 16, 52]. Therefore, the effect of ACS is potentially enhanced by several consecutive injections [22], as reported anecdotally to be common in equine practice and as recommended for the treatment of joint pathology [13]. A single ACS injection was chosen to determine the effect of a low dose as a basis for research because, to date, neither dose-dependent in vivo studies nor a consensus on the treatment protocol are available for blood products such as ACS. Another aim was to keep the number of factors influencing outcome, such as repeated needle puncture of tendons, as low as possible.\nThe increase in collagen I expression after ACS injection in an experimental rat study did not coincide with an improved maximum load to failure, despite leading to an improvement in tendon stiffness during biomechanical testing [22]. Although biomechanical testing is regarded as the method of choice, it could not be accomplished in the present study due to the inclusion of client-owned horses. In general, the degree of lameness and, to a limited extent, the echo pattern of the injured tendon reflect biomechanical properties. These parameters, however, did not significantly differ between groups in the present study at the end of the observation period. 
Due to the reduced group size, differences in long-term recurrence rate after return of the horses to full exercise could not be calculated statistically. Conclusions: This clinical trial in horses with acute tendinopathies of the SDFT shows that a single intralesional ACS injection contributes to significant reduction of lameness within 10 days and to improvement of ultrasonographic parameters of repair tissue between 11 and 23 weeks after treatment. Intralesional ACS treatment potentially decreases proliferation of tenocytes 5 weeks after treatment and increases their differentiation, as demonstrated by an elevated collagen type I expression in the remodelling phase. Repeated ACS injections should be considered to enhance positive effects. Future controlled long-term investigations should be performed in a larger number of horses to determine the effect on recurrence rate.
Keywords: Horse, Tendon, Ultrasonography, Biopsy, Histology, Collagen, Autologous conditioned serum, ACS, Irap
Introduction: Tendinopathy of the superficial digital flexor tendon (SDFT) is a common injury in Thoroughbred racehorses and other horse breeds and is regarded as a career-limiting disease with a high recurrence rate [1]. Numerous treatment modalities have shown limited success in improving tendon repair [2]. Regenerative therapy aims to restore structure and function after application of biocompatible materials, cells, and bioactive molecules [3, 4]. There is growing knowledge about the clinical effects of potentially regenerative substrates, e.g. mesenchymal stem cells (MSCs) [5, 6] and autologous blood products such as platelet rich plasma [7, 8] on equine tendinopathies. To date, however, ideal treatment strategies for naturally occurring tendinopathies have not been established [1, 2]. Autologous conditioned serum (ACS; synonyms irap®, Orthokine®, Orthogen, Düsseldorf, Germany) is used for intralesional treatment of tendinopathy in horses but, to the best of our knowledge, its clinical effect is only documented anecdotally [8–10]. ACS is prepared by exposing whole blood samples to glass beads, which has been shown to stimulate the secretion of anti-inflammatory cytokines, including interleukin (IL)-4 and IL-10 and IL-1 receptor antagonist (IL-1Ra) in humans [11]. A recent investigation has shown that ACS from equine blood also contains high levels of IL-1Ra and IL-10 [12]. Equine studies have focused on the IL-1Ra-mediated anti-inflammatory effects of ACS [13]; however, in tendon healing, the high concentrations of growth factors such as insulin-like growth factor-1 (IGF-1) and transforming growth factor-beta (TGF-β) may be equally or more important [14–16]. Blood samples from different horses and the use of different kits for the preparation of ACS may lead to differences in the cytokine and growth factor concentration in vitro [12, 17]. However, the relevance of these differences for the clinical effect is unknown. 
ACS was originally described to improve muscle regeneration in a murine muscle contusion model [18] and to exhibit anti-inflammatory effects in an experimental model of carpal osteoarthritis in horses [13] and in a placebo-controlled clinical trial in humans with knee osteoarthritis [19]. The rationale for the use of ACS to treat equine tendinopathies is based on several findings: 1) It was shown in an experimental study that the expression of IL-1β (and matrix metalloproteinase-13) is upregulated following overstrain injury of rat tendons, demonstrating that these molecules are important mediators in the pathogenesis of tendinopathy [15, 20]. 2) IL-1Ra protein and heterologous conditioned serum prepared with the irap® kit reduced the production of prostaglandin E2 by stimulated cells derived from macroscopically normal SDFTs in vitro [21]. 3) Growth factors concentrated in ACS, e.g. IGF-1 and TGF-β, have the potential to attract resident precursor cells, e.g. MSCs and tenoblasts, and to increase cell proliferation during tendon healing [14, 15, 17]. Rat Achilles tendons exposed to ACS in an experimental study showed an enhanced expression of the Col1A1 gene, which led to an increased secretion of type I collagen and accelerated recovery of tendon stiffness and improved histologic maturity of the repair tissue [22]. It was shown in another rodent Achilles tendon transection model that ACS generally increases the expression of basic fibroblast growth factor (bFGF), bone morphogenetic protein-12 and TGF-β1 [16], representing growth factors important for the process of tendon regeneration [15]. The process of tendon healing is mainly divided into three phases which merge into each other. The acute inflammatory phase (<10–14 days) is characterized by phagocytosis and demarcation of injured tendon tissue. 
A fibroproliferative callus is formed during the proliferative phase (4–45 days), while collagen fibrils are organised into tendon bundles during the remodelling or maturation phase (45–120 days; <3 months) [1, 23]. The aim of the present study was to support the hypothesis that a single intralesional ACS injection into SDFT lesions 1) has a clinically detectable anti-inflammatory effect, 2) leads to improved B-mode ultrasonographic parameters and 3) improves the organization of repair tissue. Materials and methods: The inclusion criterion for client-owned adult horses was a history of acute uni- or bilateral SDFT tendinopathy (tendon disorder) without cutaneous injury but with clinical signs of inflammation reported to have been present for up to 14 days prior to presentation at the Equine Clinic of the University of Veterinary Medicine, Hannover, Foundation, or to collaborating veterinarians. Horses were only included if the clients agreed to the study design and the tendons had not received intralesional injections before. Injured limbs were randomly assigned to the group treated with ACS (n = 10) or to controls (n = 7). The study was carried out between 2009 and 2012 and approved by the animal welfare officer of the University of Veterinary Medicine Hannover, Foundation, Germany, and the ethics committee of the responsible German federal state authority in accordance with the German Animal Welfare Law (Lower Saxony State Office for Consumer Protection and Food Safety, Az. 33.9-42502-05-09A652). Clinical examination: All horses were examined clinically on the day of first presentation (day 0).
This examination included visual assessment of lameness (5 grade score) [24] and signs of inflammation, which were scored semiquantitatively by palpation (skin surface temperature in the palmar metacarpal region and sensitivity of the SDFT to palpation: 0 = no abnormality, 1 = mild abnormality, 2 = moderate abnormality, and 3 = severe abnormality; swelling of the SDFT was determined by palpation as an increase in diameter relative to normal tendon: 0 = no increase, 1 = increase by factor 1.5, 2 = increase by factor 1.5 to 2, and 3 = increase by more than factor 2 [25]). B-mode ultrasonography: All injured tendons were examined with B-mode ultrasound on the day of first presentation (day 0) in a transverse and longitudinal fashion with a linear 5–7.5 MHz scanner (Logiq e, GE Healthcare, Wauwatosa, WI, USA), according to the seven zone designations described previously [26, 27].
Images were stored digitally and analysed according to the following parameters to determine the degree- and time-related changes of the lesions: maximal injury zone (MIZ), type of lesion determined on transverse images in the MIZ (core lesion = centrally located, focal hypo-/anechoic region; marginal lesion = peripherally located, focal hypo-/anechoic region; diffuse lesion = homogenous or heterogenous changes in echogenicity of the whole/most parts of the cross sectional area), summarized cross-sectional areas of the tendon (total cross-sectional area, T-CSA), summarized cross-sectional areas of the lesion (total lesion cross-sectional area, TL-CSA), and percentage of the lesion in the tendon [percent total lesion, %T-lesion = (TL-CSA / T-CSA) × 100]. Echogenicity and fibre alignment were graded semiquantitatively at each zone and the scores for all levels were summarized (total echo score, TES; total fibre alignment score, T-FAS). Echogenicity was assigned to 0 (normoechoic), 1 (hypoechoic), 2 (mixed echogenicity), and 3 (anechoic) [27, 28], and fibre alignment was graded according to the estimated percentage of parallel fibres in the lesion: 0 (>75 %), 1 (50–74 %), 2 (25–49 %), and 3 (<25 %) [27, 28]. Analyses of ultrasonograms were performed by one examiner (ML) blinded to the individual treatment modality.
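The summary parameters are simple aggregates over the seven examined zones. As a minimal illustrative sketch (not the software used in the study; the per-zone record layout and field names are assumptions made here for illustration), the aggregation could look like:

```python
def summarize_zones(zones):
    """Aggregate per-zone ultrasound measurements into summary parameters.

    zones: one record per zone with 'csa' (tendon cross-sectional area),
    'lesion_csa' (lesion cross-sectional area), 'echo_score' (0-3) and
    'fas' (fibre alignment score, 0-3). Field names are hypothetical.
    """
    t_csa = sum(z["csa"] for z in zones)          # T-CSA: summed tendon areas
    tl_csa = sum(z["lesion_csa"] for z in zones)  # TL-CSA: summed lesion areas
    return {
        "T-CSA": t_csa,
        "TL-CSA": tl_csa,
        "%T-lesion": tl_csa / t_csa * 100,        # (TL-CSA / T-CSA) x 100
        "TES": sum(z["echo_score"] for z in zones),   # total echo score
        "T-FAS": sum(z["fas"] for z in zones),        # total fibre alignment score
    }
```

With two zones of 1.0 cm² tendon area each and lesion areas of 0.2 and 0.3 cm², for example, %T-lesion evaluates to 25 %.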
Intralesional treatment, follow-up examinations and controlled exercise: Ten millilitres of autologous blood were collected by a single venipuncture of one jugular vein into an irap®-10 syringe system (Orthogen, Düsseldorf, Germany). Blood samples were incubated at 37 °C (range 6–9 hours). After centrifugation at 4,000 rotations per minute for 10 minutes (centrifuge: Universal 320, rotor: no. 1624, Hettich, Tuttlingen, Germany), serum was aseptically aspirated from the syringe and passed through a 0.22 μm syringe-driven filter unit (Millex-MP, Millipore Corporation, Carrigtwohill, Co. Cork, Ireland).
Depending on the size of the lesion as determined ultrasonographically (TL-CSA), tendons allocated to the ACS group received a single intralesional injection of 1–3 ml through a 22G needle (diameter 0.7 mm, length 30 mm) into the SDFT defect (day 1). Control tendons either received a single injection of a placebo, i.e. 1–3 ml saline through a 22G needle (diameter 0.7 mm, length 30 mm) or were untreated in case the owner declined an intralesional application of saline. Horses were sedated for the intralesional injections with detomidine (0.01–0.03 mg/kg intravenously) and butorphanol (0.04–0.05 mg/kg intravenously), and the medial and lateral palmar nerves were anaesthetized 2 cm distal to the carpometacarpal joints with 2 ml of a 2 % mepivacaine solution. After aseptic preparation of the skin, superficial digital tendon lesions were injected under sonographic guidance at a single site from the lateral aspect of the tendon perpendicularly to its long axis directly into the most hypoechoic areas, i.e. the MIZ while the limb was weight-bearing. All horses participated in a gradually increasing exercise programme as described previously [7]. The programme started the first day after the reported onset of SDFT tendinopathy. From week 25 to 27 horses were exercised for 25 minutes at a walk and for 15 minutes at a trot. Horses were re-examined clinically and ultrasonographically at regular intervals for 27 weeks on days 11, 22, 36, 50, 78, 106, 134, 162, and 190. Thereafter horse owners were advised to gradually increase exercise on an individual basis until the previous level of performance was reached. Data concerning signs of acute tendon injury, the level of performance horses reached and the discipline they were used for were obtained by telephone inquiry with horse owners or trainers until the preparation of the manuscript. 
Needle biopsies and histologic examinations: On days 0, 36 and 190, one needle biopsy was taken aseptically from each SDFT at its MIZ with a 20G automated biopsy needle (Biopsiepistole PlusSpeed™, Peter Pflugbeil GmbH, Zorneding, Germany), with the needle entering the MIZ of the SDFT from distal at a 45° angle while the carpus was flexed approximately 90° and the metacarpophalangeal joint was moderately extended [29]. Pain reaction and intensity of bleeding from the biopsy site were evaluated using an established score [29]. The MIZ was recorded as distance from the accessory carpal bone (cm) so that repeat biopsies were taken from the same anatomic area as the day 0 biopsy while avoiding previous biopsy sites. Limbs were protected with a distal limb bandage for 2 days after taking needle biopsies and after intralesional injections. All needle biopsies were fixed in 10 % formalin, paraffin-embedded, sectioned at a thickness of 1–2 μm, mounted on microscope slides, and stained with haematoxylin and eosin.
A single histological slide of each biopsy was examined histologically according to a score described previously [7, 30]; findings were graded using a semiquantitative four-point scale (0 = normal appearance, 1 = slightly abnormal, 2 = moderately abnormal, and 3 = markedly abnormal) considering the following parameters: fibre structure (0 = linear, no interruption; 3 = short with early truncation), fibre alignment (0 = regularly ordered; 3 = no pattern identified), morphology of tenocyte nuclei (0 = flat; 3 = round), variations in cell density (0 = uniform; 3 = high regional variation), and vascularisation (0 = absent; 3 = high). Histological sections were independently scored by two observers blinded to horse and treatment modality (FG and ML). In total, five high power fields (40× magnification) per section were examined and scored. Mean score values determined by each observer were calculated for each parameter (see above) before the score values of both examiners were averaged. Immunohistochemical analysis of paraffin-embedded tissue sections was used to determine the formation of collagen type I and collagen type III. A commercially available mouse-anti-bovine antibody (NB600-450 anti-COL 1A1, Novus Biologicals, Littleton, CO, USA) and a rabbit-anti-bovine antibody (CL197P anti collagen type III alpha 1 chain, Acris Antibodies GmbH, Herford, Germany) were applied as primary antibodies against collagen type I and collagen type III, respectively. Secondary biotinylated antibodies were obtained from relevant species allowing binding to the primary antibody. Colour production from the chromogen diaminobenzidine tetrachloride was catalysed by streptavidin-conjugated peroxidase (avidin-biotin complex method) [31]. Finally, the sections were counterstained with haematoxylin. Immunohistochemical cross-reactivity of antibodies with uninjured equine tendon tissue was tested prior to analysis of the needle biopsies.
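The two-step averaging of the histology scores (mean over the five high-power fields per observer, then mean of the two observer means) can be sketched as follows; this is an illustrative reading of the procedure, not the authors' code, and the observer labels are taken from the text:

```python
def average_scores(fields_by_observer):
    """Two-step averaging of semiquantitative histology scores.

    fields_by_observer: mapping {observer: [score per high-power field]}
    for one histologic parameter. Each observer's field scores are
    averaged first; the observer means are then averaged.
    """
    per_observer = [sum(fields) / len(fields)
                    for fields in fields_by_observer.values()]
    return sum(per_observer) / len(per_observer)
```

For instance, field scores of [1, 1, 2, 2, 1] (observer mean 1.4) and [2, 2, 2, 1, 1] (observer mean 1.6) yield a final score of 1.5 for that parameter.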
Positive control tissues included bovine aorta and bovine tendon for collagen type I antigen-specific antibodies and bovine skin for collagen type III antigen-specific antibodies. In negative control sections, primary antibodies were replaced by appropriately diluted Balb/c mouse ascites and rabbit serum, respectively. Photomicrographs were taken from all immunostained slides (Color View II, 3.3 Megapixel CCD, Soft Imaging System GmbH, Münster, Germany). Quantitative morphometric analysis of the immunoreaction was achieved by determination of the immunostained area using image analysis software (analySIS® 3.1, Soft Imaging System GmbH, Münster, Germany). A threshold for a positive signal was defined and the percentage of positively immunostained area in the tissue section as a whole was calculated [32, 33].
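The threshold-based morphometry amounts to counting pixels at or above the positive-signal threshold and expressing them as a percentage of the section area. A minimal sketch, assuming a flat list of pixel stain intensities (the actual analySIS® workflow is not reproduced here):

```python
def percent_positive(pixels, threshold):
    """Percentage of positively immunostained area in a tissue section.

    pixels: flat list of stain-intensity values covering the section;
    threshold: intensity at or above which a pixel counts as positive.
    Both the representation and the threshold choice are assumptions.
    """
    positive = sum(1 for p in pixels if p >= threshold)
    return positive / len(pixels) * 100
```

With four pixels of intensities [0, 10, 200, 255] and a threshold of 128, two pixels count as positive, giving 50 % immunostained area.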
Statistical analysis: Analysis of data was performed using SAS™ Version 9.3 (SAS Institute, Cary, NC, USA). The level of significance was set at p < 0.05. All values in the graphs are expressed as arithmetic mean values with standard error (mean ± SEM). The assumption of normality was tested using the Kolmogorov-Smirnov test and visual assessment of qq-plots of model residuals. In the case of rejection of normal distribution, distribution-free nonparametric methods were applied. Fisher's exact test was applied to test the differences between groups on each examination day with regard to the parameters of degree of lameness, swelling and skin surface temperature. To compare not-normally distributed parameters within a group between examination days, the permutation test for nonparametric analysis of repeated measurements with the Šidák post hoc test for multiple pairwise comparisons was used.
The influence of groups and time points on ultrasonographic parameters (T-CSA, TL-CSA, %T-lesion, TES and T-FAS), histology scores and percentages of positively immunostained areas was tested using a two-way analysis of variance for independent samples (groups) and repeated measurements (dependent time points, biopsies), followed by the Tukey post hoc test for multiple pairwise comparisons. The intraclass correlation coefficient was calculated by analysis of variance components to test inter-observer repeatability of histological scores.
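For reference, the mean ± SEM reported in the graphs follows the usual definition (sample standard deviation divided by the square root of the sample size); a minimal sketch, independent of the SAS procedures actually used:

```python
from statistics import mean, stdev

def mean_sem(values):
    """Arithmetic mean with standard error of the mean.

    SEM = sample standard deviation / sqrt(n); requires n >= 2.
    """
    m = mean(values)
    sem = stdev(values) / len(values) ** 0.5
    return m, sem
```

For values [1.0, 2.0, 3.0] this gives a mean of 2.0 with SEM = 1/√3 ≈ 0.577.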
The influence of groups and time points on ultrasonongraphic parameters (T-CSA, TL-CSA, %T-lesion, TES and T-FAS), histology scores and percentages of positively immunostained areas were tested using a two-way analysis of variance for independent samples (groups) and repeated measurements (dependent time points, biopsies), followed by the Tukey post hoc test for multiple pairwise comparisons. The intraclass correlation coefficient was calculated by analysis of variance components to test inter-observer repeatability of histological scores. Clinical examination: All horses were examined clinically on the day of first presentation (day 0). This examination included visual assessment of lameness (5 grade score) [24] and signs of inflammation which were scored semiquantitatively by palpation (skin surface temperature in the palmar metacarpal region and sensitivity of the SDFT to palpation: 0 = no abnormality, 1 = mild abnormality, 2 = moderate abnormality, and 3 = severe abnormality; swelling of the SDFT was determined by palpation as an increase in diameter relative to normal tendon: 0 = no increase, 1 = increase by factor 1.5; 2 = increase by factor 1.5 to 2; increase by more than factor 2 [25]). B-mode ultrasonography: All injured tendons were examined with B-mode ultrasound on the day of first presentation (day 0) in a transverse and longitudinal fashion with a linear 5–7.5 MHz linear scanner (Logiq e, GE Healthcare, Wauwatosa, WI, USA), according to the seven zone designations as described previously [26, 27]. 
Images were stored digitally and analysed according to the following parameters to determine the degree- and time-related changes of the lesions: maximal injury zone (MIZ), type of lesion determined on transverse images in the MIZ (core lesion = centrally located, focal hypo-/anechoic region; marginal lesion = peripherally located, focal hypo-/anechoic region; diffuse lesion = homogenous or heterogenous changes in echogenicity of the whole/most parts of the cross sectional area), summarized cross-sectional areas of the tendon (total cross-sectional area, T-CSA), summarized cross-sectional areas of the lesion (total lesion cross-sectional area, TL-CSA), and percentage of the lesion in the tendon [percent total lesion, %T-lesion = (TL-CSA / T-CSA) × 100]. Echogenicity and fibre alignment were graded semiquantitatively at each zone and the scores for all levels were summarized (total echo score, TES; total fibre alignment score, T-FAS). Echogenicity was assigned to 0 (normoechoic), 1 (hypoechoic), 2 (mixed echogenicity), and 3 (anechoic) [27, 28], and fibre alignment was graded according to the estimated percentage of parallel fibres in the lesion: 0 (>75 %), 1 (50–74 %), 2 (25–49 %), and 3 (<25 %) [27, 28]. Analyses of ultrasonograms were performed by one examiner (ML) blinded to the individual treatment modality. Intralesional treatment, follow-up examinations and controlled exercise: Ten millilitres of autologous blood were collected by a single venipuncture of one jugular vein into an irap®-10 syringe system (Orthogen, Düsseldorf, Germany). Blood samples were incubated at 37 °C (range 6–9 hours). After centrifugation at 4,000 rotations per minute for 10 minutes (centrifuge: Universal 320, rotor: no. 1624, Hettich, Tuttlingen, Germany), serum was aseptically aspirated from the syringe and passed through a 0.22 μm syringe-driven filter unit (Millex-MP, Millipore Corporation, Carrigtwohill, Co. Cork, Ireland). 
Depending on the size of the lesion as determined ultrasonographically (TL-CSA), tendons allocated to the ACS group received a single intralesional injection of 1–3 ml through a 22G needle (diameter 0.7 mm, length 30 mm) into the SDFT defect (day 1). Control tendons either received a single injection of a placebo, i.e. 1–3 ml saline through a 22G needle (diameter 0.7 mm, length 30 mm) or were untreated in case the owner declined an intralesional application of saline. Horses were sedated for the intralesional injections with detomidine (0.01–0.03 mg/kg intravenously) and butorphanol (0.04–0.05 mg/kg intravenously), and the medial and lateral palmar nerves were anaesthetized 2 cm distal to the carpometacarpal joints with 2 ml of a 2 % mepivacaine solution. After aseptic preparation of the skin, superficial digital tendon lesions were injected under sonographic guidance at a single site from the lateral aspect of the tendon perpendicularly to its long axis directly into the most hypoechoic areas, i.e. the MIZ while the limb was weight-bearing. All horses participated in a gradually increasing exercise programme as described previously [7]. The programme started the first day after the reported onset of SDFT tendinopathy. From week 25 to 27 horses were exercised for 25 minutes at a walk and for 15 minutes at a trot. Horses were re-examined clinically and ultrasonographically at regular intervals for 27 weeks on days 11, 22, 36, 50, 78, 106, 134, 162, and 190. Thereafter horse owners were advised to gradually increase exercise on an individual basis until the previous level of performance was reached. Data concerning signs of acute tendon injury, the level of performance horses reached and the discipline they were used for were obtained by telephone inquiry with horse owners or trainers until the preparation of the manuscript. 
Needle biopsies and histologic examinations: On days 0, 36 and 190, one needle biopsy was taken aseptically from each SDFT at its MIZ with a 20G automated biopsy needle (Biopsiepistole PlusSpeed™, Peter Pflugbeil GmbH, Zorneding, Germany), with the needle entering the MIZ of the SDFT from distal at a 45° angle while the carpus was flexed approximately 90° and the metacarpophalangeal joint was moderately extended [29]. Pain reaction and intensity of bleeding from the biopsy site were evaluated using an established score [29]. The MIZ was recorded as distance from the accessory carpal bone (cm) so that repeat biopsies were taken from the same anatomic area as the day 0 biopsy while avoiding previous biopsy sites. Limbs were protected with a distal limb bandage for 2 days after taking needle biopsies and after intralesional injections. All needle biopsies were fixed in 10 % formalin, paraffin-embedded, sectioned at a thickness of 1–2 μm, mounted on microscope slides, and stained with haematoxylin and eosin. A single histological slide of each biopsy was examined histologically according to a score described previously [7, 30]; findings were graded using a semiquantitative four-point scale (0 = normal appearance, 1 = slightly abnormal, 2 = moderately abnormal, and 3 = markedly abnormal) considering the following parameters: fibre structure (0 = linear, no interruption; 3 = short with early truncuation), fibre alignment (0 = regularly ordered; 3 = no pattern identified), morphology of tenocyte nuclei (0 = flat; 3 = round), variations in cell density (0 = uniform; 3 = high regional variation), and vascularisation (0 = absent; 3 = high). Histological sections were independently scored by two observers blinded to horse and treatment modality (FG and ML). In total, five high power fields (40× magnification) per section were examined and scored. 
For each parameter, mean score values were first calculated per observer, and the two observer means were then averaged. Immunohistochemical analysis of paraffin-embedded tissue sections was used to determine the formation of collagen type I and collagen type III. A commercially available mouse-anti-bovine antibody (NB600-450 anti-COL 1A1, Novus Biologicals, Littleton, CO, USA) and a rabbit-anti-bovine antibody (CL197P anti-collagen type III alpha 1 chain, Acris Antibodies GmbH, Herford, Germany) were applied as primary antibodies against collagen type I and collagen type III, respectively. Secondary biotinylated antibodies were obtained from species allowing binding to the respective primary antibody. Colour production from the chromogen diaminobenzidine tetrachloride was catalysed by streptavidin-conjugated peroxidase (avidin-biotin complex method) [31]. Finally, the sections were counterstained with haematoxylin. Immunohistochemical cross-reactivity of the antibodies with uninjured equine tendon tissue was tested prior to analysis of the needle biopsies. Positive control tissues included bovine aorta and bovine tendon for the collagen type I antigen-specific antibody and bovine skin for the collagen type III antigen-specific antibody. In negative control sections, primary antibodies were replaced by appropriately diluted Balb/c mouse ascites and rabbit serum, respectively. Photomicrographs were taken of all immunostained slides (Color View II, 3.3 Megapixel CCD, Soft Imaging System GmbH, Münster, Germany). Quantitative morphometric analysis of the immunoreaction was achieved by determining the immunostained area using image analysis software (analySIS® 3.1, Soft Imaging System GmbH, Münster, Germany). A threshold for a positive signal was defined and the percentage of positively immunostained area in the tissue section as a whole was calculated [32, 33].
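The morphometric step (thresholding for a positive signal and expressing the stained area as a percentage of the whole section) can be sketched as follows; the intensity array and threshold are hypothetical, and the study itself used the analySIS® 3.1 software rather than code like this:

```python
import numpy as np

def stained_area_percent(stain_signal, threshold):
    """Percentage of pixels in the section whose immunostain signal
    reaches the positive-signal threshold (hypothetical intensity units)."""
    stain_signal = np.asarray(stain_signal, dtype=float)
    positive = stain_signal >= threshold   # binary mask of stained pixels
    return 100.0 * positive.mean()        # stained fraction of whole section

# e.g. stained_area_percent([[0.9, 0.1], [0.8, 0.2]], 0.5) gives 50.0
```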
Statistical analysis: Analysis of data was performed using SAS™ Version 9.3 (SAS Institute, Cary, NC, USA). The level of significance was set at p < 0.05. All values in the graphs are expressed as arithmetic means with standard error (mean ± SEM). The assumption of normality was tested using the Kolmogorov-Smirnov test and visual assessment of qq-plots of model residuals. Where normality was rejected, distribution-free nonparametric methods were applied. Fisher's exact test was used to test differences between groups on each examination day with regard to degree of lameness, swelling and skin surface temperature. To compare non-normally distributed parameters within a group between examination days, the permutation test for nonparametric analysis of repeated measurements with the Šidák post hoc test for multiple pairwise comparisons was used. The influence of group and time point on ultrasonographic parameters (T-CSA, TL-CSA, %T-lesion, TES and T-FAS), histology scores and percentages of positively immunostained areas was tested using a two-way analysis of variance for independent samples (groups) and repeated measurements (dependent time points, biopsies), followed by the Tukey post hoc test for multiple pairwise comparisons. The intraclass correlation coefficient was calculated by analysis of variance components to test inter-observer repeatability of histological scores.

Results

Description and history of horses, intralesional injections

Seventeen limbs of 15 horses between 2 and 19 years old (mean 8.46 years) met the inclusion criteria. Ten of the limbs were included in the ACS-treated group (ACS group) and seven served as controls.
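The inter-observer repeatability statistic described in the statistical analysis, an intraclass correlation coefficient derived from ANOVA variance components, can be sketched generically as the two-way random-effects, absolute-agreement, single-rater ICC(2,1) of Shrout and Fleiss; this is an illustrative reimplementation, not necessarily the exact SAS variance-component model used in the study:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    computed from ANOVA variance components. ratings: n subjects x k raters."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)                 # per-subject means
    col_means = ratings.mean(axis=0)                 # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject sum of squares
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater sum of squares
    ss_err = ss_total - ss_rows - ss_cols            # residual sum of squares
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfectly agreeing raters give an ICC of 1.0; disagreement lowers it.
```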
Limbs allocated to the ACS group belonged to five Warmbloods (50 %), four Thoroughbreds (40 %) and one Arabian (10 %). The limbs of five Warmbloods (71.42 %), one Thoroughbred (14.28 %) and one Half-blood (14.28 %) were included as controls. Of these, one Thoroughbred and one Warmblood with bilateral SDFT lesions served for both groups (Table 1). The horses' history (i.e. high-speed exercise, increased age >12 years) was suggestive of strain-induced tendinopathy in at least 6 of 10 SDFTs (60 %) allocated to the ACS group and in at least 4 of 7 (57 %) control SDFTs. Two tendons had a definitive history of blunt external trauma; one was included in the ACS group and one served as a control.

Table 1 Description, clinical history, diagnostic data, and treatment of 15 horses with 17 SDFT lesions

Horse number | Breed | Age (years) | Gender | Use | Reported duration of SDFT tendinopathy until initial examination (days) | Reported initiating event | Limb affected | Maximal injury zone | Lesion type | Treatment
ACS group:
2241/09  | Thoroughbred | 2  | S | Racing       | 2   | Training                        | RF | 2b | Diffuse  | ACS
2240/09  | Thoroughbred | 3  | S | Racing       | 2–3 | Training                        | RF | 2b | Core     | ACS
2489/09a | Thoroughbred | 4  | M | Racing       | 7   | Racing                          | RF | 1b | Core     | ACS
2539/09  | Warmblood    | 3  | S | Dressage     | 14  | Blunt trauma                    | RF | 2b | Marginal | ACS
1672/10  | Arabian      | 17 | G | Pleasure     | 14  | Running free                    | RF | 1b | Core     | ACS
6264/10b | Warmblood    | 8  | G | Dressage     | 9   | Unknown                         | RF | 3a | Marginal | ACS
6335/10  | Warmblood    | 10 | G | Pleasure     | 10  | Unknown                         | LF | 2b | Diffuse  | ACS
4793/10  | Thoroughbred | 3  | S | Racing       | 7   | Training                        | LF | 2b | Core     | ACS
6263/10  | Warmblood    | 20 | M | Pleasure     | 14  | Stumbling at cross country ride | RF | 1b | Core     | ACS
2378/10  | Warmblood    | 11 | M | Pleasure     | 13  | At ride                         | LF | 2b | Diffuse  | ACS
Mean age: 8.1
Controls:
2489/09a | Thoroughbred | 4  | M | Racing       | 7   | Racing                          | LF | 1b | Diffuse  | No
6264/10b | Warmblood    | 8  | G | Dressage     | 9   | Unknown                         | LF | 2a | Core     | Saline
6111/10  | Warmblood    | 5  | M | Jumping      | 14  | Kicking himself over the jump   | RF | 1b | Marginal | No
6265/10  | Warmblood    | 5  | M | Pleasure     | 7   | Unknown                         | RF | 1b | Marginal | Saline
6383/11  | Warmblood    | 18 | M | Pleasure     | 10  | At cross country ride           | LF | 2a | Marginal | No
5461/11  | Warmblood    | 14 | G | Police horse | 4   | At gallop on beach              | LF | 2b | Diffuse  | No
6384/11  | Half-blood   | 8  | G | Eventing     | 1   | After eventing competition      | LF | 2a | Core     | No
Mean age: 8.86

a, b Horses had bilateral SDFT lesions and served for the ACS group and as control. ACS = autologous conditioned serum (treated with single intralesional injection of autologous conditioned serum); G = gelding; LF = left front limb; M = mare; RF = right front limb; S = stallion; Saline = treated with single intralesional saline injection; SDFT = superficial digital flexor tendon

In total, ten SDFTs were injected with ACS. Five control SDFTs were not treated and two control tendons received a single intralesional injection of saline.

Lameness

On day 0, the mean degree of lameness was 0.8 in the ACS group and 1.42 in control limbs. One of the two horses with bilateral tendinopathy that served for the ACS group and as a control was not lame; the second showed a unilateral front limb lameness (the SDFT in the lame limb was treated with ACS). The mean degree of lameness did not differ between groups on any examination day (p > 0.05). Regardless of treatment modality, all horses became sound by day 36. Compared to day 0, lameness decreased significantly by day 11 (p = 0.046) within the ACS group and by day 36 (p = 0.021) in control limbs (Fig. 1a).

Fig. 1 Degree of lameness and palpable swelling of tendons. a Degree of lameness of control limbs and those treated with autologous conditioned serum (ACS) over time. b Scores for palpable swelling of ACS-treated and control superficial digital flexor tendons (SDFTs) over time. Mean ± SE. Different letters (ACS normal, control italic) indicate significant differences (p < 0.05) within treatment group. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy
Long-term follow-up

Recurrence of tendon injury was not reported in any horse 2 to 4 years post-diagnosis. Of the eight horses with SDFTs allocated to the ACS group, five (63 %), among these three racehorses, returned to their previous or a higher performance level, two (25 %) died of reasons unrelated to tendinopathy, and one (12 %) was retired due to osteoarthritis of the interphalangeal joints after the observation period. Of the two horses included in both the ACS group and the controls, one did not resume training and was retired as a broodmare; the second performed as a dressage horse. Of the five horses with tendons serving only as controls, four (80 %) performed in their discipline at the previous or a higher level, and one (20 %) was lost to follow-up.
Signs of inflammation

No statistically significant differences between groups were observed during the entire observation period, including day 0, with regard to scores for swelling, skin surface temperature and sensitivity to palpation. Swelling scores of the SDFT region decreased significantly in the ACS group (p = 0.005) between day 50 (mean score 1.1) and day 78 (mean score 0.7) and remained reduced until the end of the observation period (Fig. 1b). In controls, swelling scores did not decrease significantly during the 27 weeks. Skin surface temperature and sensitivity to palpation scores decreased up to day 22 in both groups and remained low until day 190.
B-mode ultrasonography

The horses included presented with core lesions (seven limbs), marginal lesions (five limbs) or diffuse lesions (five limbs) of the SDFT (Table 1). The MIZ of most lesions was located in zone 2b (41.17 % of limbs), followed by zone 1b (35.29 %), zone 2a (17.64 %) and zone 3a (5.88 %). On day 0 the mean %T-lesion was 21.73 ± 7.16 in the ACS group and 18.51 ± 5.07 in controls. This parameter was significantly lower (p < 0.05) in the ACS group than in controls on days 78, 106 and 162 (Fig. 2a). TES were significantly lower in the ACS group versus controls on days 78 and 106 (Fig. 2b). There was no difference in T-CSA, TL-CSA (Fig. 2c) or T-FAS between the groups at any time point.

Fig. 2 Ultrasonographic measurements. a Percent total lesion (%T-lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, control italic) indicate significant differences (p < 0.05) within treatment group. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow day (d)0 – diagnosis, first tendon biopsy; red arrow d1 – intralesional injection of ACS/control substance; blue arrows d36/d190 – second/third tendon biopsy
Mean %T-lesion decreased continuously over time within the ACS group; the decrease was significant for the first time on day 78 compared to day 0 (p = 0.02), and the parameter remained below 10 % thereafter. In control tendons, mean %T-lesion remained at a similarly elevated level (p > 0.05) throughout the entire observation period (Fig. 2a). Compared to day 0, the mean TES decreased significantly on day 22 and again between days 22 and 78 in the ACS group (p < 0.05), while in controls the TES decreased significantly (p < 0.05) for the first time on day 78 (Fig. 2b). Mean T-CSA did not change significantly throughout the observation period in either group. Within the ACS group, TL-CSA was significantly lower (p < 0.05) from day 78 onwards until the end of the examination period compared to day 0, while values from control tendons remained at a similar level from day 0 to 190 (Fig. 2c). Mean T-FAS decreased significantly between day 0 and day 78 (p = 0.023) within the ACS group; in controls this parameter decreased significantly on day 134 compared to day 0 (p = 0.04).
Needle biopsies and histology

A total of 51 needle biopsies were taken. The pain reaction was mild in 78.43 % of the procedures, mild to moderate in 17.64 % and moderate in 3.92 %; a moderate to severe or severe pain reaction was not observed. No bleeding was observed after biopsy in 25.49 % of cases, mild bleeding occurred in 58.82 % and moderate bleeding in 15.68 %; severe bleeding was not observed.
Of the 51 biopsies, 47 were available for tendon histology [7, 30]; four biopsies from severely oedematous lesions were not evaluated due to limited tissue content. The intraclass correlation for inter-observer repeatability was 0.72 for fibre structure, 0.88 for fibre alignment, 0.84 for nuclei morphology and 0.92 for variations in cell density. Scores for tenocyte nuclei morphology were significantly lower (i.e. cell nuclei more flattened; p = 0.01) in the ACS group (Fig. 3a) than in controls (Fig. 3b) on day 36 (Fig. 4a). Scores for cell density showed a tendency to be lower (i.e. more uniform) in the ACS group than in controls on day 36 (p = 0.052). Scores for fibre structure, fibre alignment (Fig. 4b), vascularisation, and subscores for structural integrity and metabolic activity did not differ between the ACS group and controls at any time point.

Fig. 3 Longitudinal sections of tendon biopsies from superficial digital flexor tendons with tendinopathy. a–d Histopathological specimens stained with haematoxylin & eosin using a 40× objective. Tendons of horses on day 36 after intralesional treatment with autologous conditioned serum (ACS) (horse no. 4793/10; a) and no treatment (control tendon, horse no. 6384/11; b). The number of round cell nuclei was higher in control tendons than in ACS-treated tendons 36 days after treatment. Scale bars = 10 μm. Tendon of horse no. 2241/09 1 day before (day 0, c) and 190 days after (d) intralesional treatment with ACS. Alignment of collagen fibres improved significantly between day 0 and day 190 after ACS treatment. Scale bars = 10 μm. Tendon of horse no. 2240/09 36 days (e) and 190 days (f) after intralesional treatment with ACS. Immunohistochemistry revealed a significant increase of collagen type I expression between day 36 and day 190 after ACS treatment. Scale bars = 20 μm

Fig. 4 Histologic scores and collagen type I content of superficial digital flexor tendons. a Histologic scores for morphology of tenocyte nuclei in tendon biopsies taken from autologous conditioned serum (ACS)-treated versus control superficial digital flexor tendons (SDFTs) at different time points during the examination period of 190 days. b Histologic scores for fibre alignment in tendon biopsies taken from ACS-treated versus control SDFTs at different time points. c Percentage of collagen type I content determined immunohistochemically in tendon biopsies taken from ACS-treated versus control SDFTs at different time points. Day (d)0 = day the diagnosis was made; d36/d190 = 36/190 days after tendinopathy was diagnosed. *p < 0.05. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated)

In tendons treated with ACS, scores for fibre alignment were significantly lower (i.e. fibres were more regularly ordered; p = 0.04) in biopsies taken at the end of the observation period (day 190; Fig. 3d, Fig. 4b) than in those taken on day 0 (Fig. 3c, Fig. 4b). In control tendons, scores for fibre alignment (Fig. 4b), morphology of tenocyte nuclei (Fig. 4a) and cell density decreased significantly (i.e. tissue morphology improved; p < 0.05) between the day 36 and day 190 biopsies. There were no differences between the ACS group and controls with regard to collagen type I and collagen type III expression in biopsies taken on days 0, 36 and 190. Within the ACS group, the collagen type I content increased significantly (p = 0.03) between the biopsies taken on day 36 (Fig. 3e) and day 190 (Fig. 3f), while it remained at the same level (p > 0.05) in controls (Fig. 4c). The collagen type III content showed a tendency (p = 0.056) to decrease in the ACS group between samples taken on days 0 and 190.
Within the ACS group, the collagen type I content increased significantly (p = 0.03) between the biopsies taken on day 36 (Fig. 3e) and day 190 (Fig. 3f), while it remained at the same level (p > 0.05) in controls (Fig. 4c). The collagen type III content showed a tendency (p = 0.056) to decrease in the ACS group between samples taken on days 0 and 190. Description and history of horses, intralesional injections: Seventeen limbs of 15 horses between 2 and 19 years old (mean 8.46 years old) met the inclusion criteria. Ten of the limbs were included in the ACS-treated group (ACS group) and seven served as controls. Limbs allocated to the ACS group belonged to five Warmbloods (50 %), four Thoroughbreds (40 %) and one Arabian (10 %). The limbs of five Warmbloods (71.42 %), one Thoroughbred (14.28 %) and one Half-blood (14.28 %) were included as controls. Of these, one Thoroughbred and one Warmblood with bilateral SDFT lesions served for both groups (Table 1). The horses’ history (i.e. high-speed exercise, increased age >12 years) was suggestive of tendinopathy to be strain-induced in at least 6 of 10 SDFTs (60 %) allocated to the ACS group and in at least 4 of 7 (57 %) control SDFTs. Two tendons had a definitive history of blunt external trauma. 
One was included in the ACS group and one served as control.

Table 1. Description, clinical history, diagnostic data, and treatment of 15 horses with 17 SDFT lesions

| Horse number | Breed | Age (years) | Gender | Use | Duration of tendinopathy until initial examination (days) | Reported initiating event | Limb affected | Maximal injury zone | Lesion type | Treatment |
|---|---|---|---|---|---|---|---|---|---|---|
| ACS group | | | | | | | | | | |
| 2241/09 | Thoroughbred | 2 | S | Racing | 2 | Training | RF | 2b | Diffuse | ACS |
| 2240/09 | Thoroughbred | 3 | S | Racing | 2–3 | Training | RF | 2b | Core | ACS |
| 2489/09 a | Thoroughbred | 4 | M | Racing | 7 | Racing | RF | 1b | Core | ACS |
| 2539/09 | Warmblood | 3 | S | Dressage | 14 | Blunt trauma | RF | 2b | Marginal | ACS |
| 1672/10 | Arabian | 17 | G | Pleasure | 14 | Running free | RF | 1b | Core | ACS |
| 6264/10 b | Warmblood | 8 | G | Dressage | 9 | Unknown | RF | 3a | Marginal | ACS |
| 6335/10 | Warmblood | 10 | G | Pleasure | 10 | Unknown | LF | 2b | Diffuse | ACS |
| 4793/10 | Thoroughbred | 3 | S | Racing | 7 | Training | LF | 2b | Core | ACS |
| 6263/10 | Warmblood | 20 | M | Pleasure | 14 | Stumbling at cross country ride | RF | 1b | Core | ACS |
| 2378/10 | Warmblood | 11 | M | Pleasure | 13 | At ride | LF | 2b | Diffuse | ACS |
| Mean | | 8.1 | | | | | | | | |
| Controls | | | | | | | | | | |
| 2489/09 a | Thoroughbred | 4 | M | Racing | 7 | Racing | LF | 1b | Diffuse | No |
| 6264/10 b | Warmblood | 8 | G | Dressage | 9 | Unknown | LF | 2a | Core | Saline |
| 6111/10 | Warmblood | 5 | M | Jumping | 14 | Kicking himself over the jump | RF | 1b | Marginal | No |
| 6265/10 | Warmblood | 5 | M | Pleasure | 7 | Unknown | RF | 1b | Marginal | Saline |
| 6383/11 | Warmblood | 18 | M | Pleasure | 10 | At cross country ride | LF | 2a | Marginal | No |
| 5461/11 | Warmblood | 14 | G | Police horse | 4 | At gallop on beach | LF | 2b | Diffuse | No |
| 6384/11 | Half-blood | 8 | G | Eventing | 1 | After eventing competition | LF | 2a | Core | No |
| Mean | | 8.86 | | | | | | | | |

a, b Horses had bilateral SDFT lesions and served for the ACS group and as control. ACS = autologous conditioned serum (treated with single intralesional injection of autologous conditioned serum); G = gelding; LF = left front limb; M = mare; RF = right front limb; S = stallion; Saline = treated with single intralesional saline injection; SDFT = superficial digital flexor tendon.
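As a quick consistency check, the group mean ages reported in Table 1 can be reproduced from the per-horse ages. A minimal sketch (ages transcribed from the table; the `mean` helper is our own, not from the study):

```python
# Ages (years) transcribed from Table 1. Horses 2489/09 and 6264/10 had
# bilateral SDFT lesions and contribute one limb to each group.
acs_ages = [2, 3, 4, 3, 17, 8, 10, 3, 20, 11]  # ACS group, n = 10 limbs
control_ages = [4, 8, 5, 5, 18, 14, 8]         # controls, n = 7 limbs

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

print(round(mean(acs_ages), 1))      # -> 8.1, as reported for the ACS group
print(round(mean(control_ages), 2))  # -> 8.86, as reported for controls
```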
In total ten SDFTs were injected with ACS. Five control SDFTs were not treated and two control tendons received a single intralesional injection of saline.

Lameness: On day 0, the mean degree of lameness was 0.8 in the ACS group. In control limbs, the mean degree of lameness was 1.42 on day 0. One of the two horses with bilateral tendinopathy that served for the ACS group and as a control was not lame. The second one showed a unilateral front limb lameness (the SDFT in the lame limb was treated with ACS). The mean degree of lameness did not differ between groups on any day of examination (p > 0.05). Regardless of treatment modality, all horses became sound by day 36. Compared to day 0, lameness decreased significantly by day 11 (p = 0.046) within the ACS group and by day 36 (p = 0.021) in limbs serving as controls (Fig. 1a).

Fig. 1 Degree of lameness and palpable swelling of tendons. a Degree of lameness of control limbs and those treated with autologous conditioned serum (ACS) over time. b Scores for palpable swelling of ACS-treated and control superficial digital flexor tendons (SDFTs) over time. Mean ± SE. Different letters (ACS normal, control italic) indicate significant differences (p < 0.05) within treatment group. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow, day (d)0 – diagnosis, first tendon biopsy; red arrow, d1 – intralesional injection of ACS/control substance; blue arrows, d36/d190 – second/third tendon biopsy.

Long-term follow-up: Recurrence of tendon injury was not reported in any horse 2 to 4 years post-diagnosis. Of the eight horses with SDFTs allocated to the ACS group, five (63 %), among these three racehorses, returned to their previous or a higher performance level, two (25 %) died of reasons unrelated to tendinopathy, and one (12 %) was retired due to osteoarthritis of the interphalangeal joints after the observation period. One of the horses which was included in the ACS group and served as a control did not resume training and was retired as a broodmare; the second one performed as a dressage horse. Of the five horses with tendons serving only as controls, four (80 %) performed in their discipline at the previous or a higher level, and one (20 %) was lost to follow-up.

Signs of inflammation: No statistically significant differences between groups were observed during the entire observation period, including day 0, with regard to scores for swelling, skin surface temperature and sensitivity to palpation. Swelling scores of the SDFT region decreased significantly in the ACS group (p = 0.005) between day 50 (mean score 1.1) and day 78 (mean score 0.7) and remained reduced until the end of the observation period (Fig. 1b).
In controls, swelling scores did not decrease significantly during the 27 weeks. Skin surface temperature and sensitivity to palpation scores decreased up to day 22 within both groups and remained at a low level until day 190.

B-mode ultrasonography: The horses included presented with core lesions (seven limbs), marginal lesions (five limbs) or diffuse lesions (five limbs) of the SDFT (Table 1). The MIZ of most lesions was located in zone 2b (41.17 % of limbs), followed by zone 1b (35.29 %), zone 2a (17.64 %) and zone 3a (5.88 %). The mean %T-lesion was 21.73 ± 7.16 in the ACS group and 18.51 ± 5.07 in controls on day 0. This parameter was significantly lower (p < 0.05) in the ACS group than in controls on days 78, 106 and 162 (Fig. 2a). TES were significantly lower in the ACS group versus controls on days 78 and 106 (Fig. 2b). There was no difference in T-CSA, TL-CSA (Fig. 2c) or T-FAS between the groups at any time point.

Fig. 2 Ultrasonographic measurements. a Percent total lesion (%T-lesion) of autologous conditioned serum (ACS)-treated and control superficial digital flexor tendons (SDFTs) over time. b Total echo scores of ACS-treated and control SDFTs over time. c Total lesion cross-sectional area (TL-CSA) of ACS-treated and control SDFTs over time. Mean ± SE. *p < 0.05, between groups. Different letters (ACS normal, control italic) indicate significant differences (p < 0.05) within treatment group. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated). Black arrow, day (d)0 – diagnosis, first tendon biopsy; red arrow, d1 – intralesional injection of ACS/control substance; blue arrows, d36/d190 – second/third tendon biopsy.

Mean %T-lesion showed a continuous decrease over time within the ACS group, which was significant (p = 0.02) for the first time on day 78 compared to day 0. After this period, this parameter remained below the 10 % range. In control tendons, mean %T-lesion remained at a similarly elevated level (p > 0.05) throughout the entire observation period (Fig. 2a). Compared to day 0, the mean TES decreased significantly on day 22 and again between days 22 and 78 in the ACS group (p < 0.05), while in controls the TES first decreased significantly (p < 0.05) compared to day 0 on day 78 (Fig. 2b). Mean T-CSA did not change significantly throughout the observation period in either group. TL-CSA within the ACS group was significantly lower (p < 0.05) from day 78 onwards until the end of the examination period compared to day 0 levels, while values from control tendons remained on a similar level from day 0 to 190 (Fig. 2c). Mean T-FAS decreased significantly between day 0 and day 78 (p = 0.023) within the ACS group; in controls this parameter decreased significantly on day 134 compared to day 0 (p = 0.04).

Needle biopsies and histology: A total of 51 needle biopsies were taken.
Pain reaction was mild in 78.43 % of the procedures, mild to moderate in 17.64 % and moderate in 3.92 % of the cases. A moderate to severe or severe pain reaction was not observed. No bleeding was observed after taking biopsies in 25.49 % of the cases, mild bleeding occurred in 58.82 % and moderate bleeding in 15.68 % of cases. Severe bleeding was not observed. Of 51 biopsies, 47 were available for tendon histology [7, 30]. Four biopsies from severely oedematous lesions were not evaluated due to limited tissue content. The intra-class correlation for inter-observer repeatability was 0.72 for fibre structure, 0.88 for fibre alignment, 0.84 for nuclei morphology and 0.92 for variations in cell density. Scores for tenocyte nuclei morphology were significantly lower (i.e. cell nuclei more flattened; p = 0.01) in the ACS group (Fig. 3a) than in controls (Fig. 3b) on day 36 (Fig. 4a). Scores for cell density showed a tendency to be lower (i.e. more uniform) in the ACS group than in controls on day 36 (p = 0.052). Scores for fibre structure, fibre alignment (Fig. 4b), vascularisation, and subscores for structural integrity and metabolic activity did not show differences between the ACS group and controls at any time point.

Fig. 3 Longitudinal sections of tendon biopsies from superficial digital flexor tendons with tendinopathy. a–d Histopathological specimens stained with haematoxylin & eosin using a 40× objective. Tendons of horses on day 36 after intralesional treatment with autologous conditioned serum (ACS) (horse no. 4793/10; a) and no treatment (control tendon, horse no. 6384/11; b). The number of round cell nuclei was higher in control tendons than in ACS-treated tendons 36 days after treatment. Scale bars = 10 μm. Tendon of horse no. 2241/09 1 day before (day 0; c) and 190 days after (d) intralesional treatment with ACS. Alignment of collagen fibres improved significantly between day 0 and day 190 after ACS treatment. Scale bars = 10 μm. Tendon of horse no. 2240/09 36 days (e) and 190 days (f) after intralesional treatment with ACS. Immunohistochemistry revealed a significant increase of collagen type I expression between day 36 and day 190 after ACS treatment. Scale bars = 20 μm.

Fig. 4 Histologic scores and collagen type I content of superficial digital flexor tendons. a Histologic scores for morphology of tenocyte nuclei in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. b Histologic scores for fibre alignment in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. c Percentage of collagen type I content determined immunohistochemically in tendon biopsies taken from ACS-treated versus control SDFTs at different time points during the examination period of 190 days. Day (d)0 = day the diagnosis was made; d36/d190 = 36/190 days after tendinopathy was diagnosed. *p < 0.05. ACS group, n = 10 limbs (SDFTs treated with a single injection of ACS); controls, n = 7 limbs (SDFTs treated with a single injection of control substance or left untreated).

With regard to the development of fibre alignment during the healing of tendons treated with ACS, scores for this parameter were significantly lower (i.e. fibres were more regularly ordered; p = 0.04) in biopsies taken at the end of the observation period (day 190; Fig. 3d, Fig. 4b) than in those taken on day 0 (Fig. 3c, Fig. 4b). In control tendons, scores for fibre alignment (Fig. 4b), morphology of tenocyte nuclei (Fig. 4a) and cell density decreased significantly (i.e. tissue morphology improved; p < 0.05) between the day 36 and day 190 biopsies. There were no differences between the ACS group and controls with regard to collagen type I and collagen type III expression in biopsies taken on days 0, 36 and 190.
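The inter-observer repeatability figures quoted above are intra-class correlation coefficients; the exact ICC variant used is specified in the study's methods. For illustration only, a two-way ICC(2,1) for absolute agreement between raters can be computed from ANOVA mean squares. A minimal sketch (the score matrix is hypothetical, not study data):

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is a list of rows, one row per subject, one column per rater."""
    n = len(scores)     # subjects (e.g. biopsies)
    k = len(scores[0])  # raters (observers)
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_rows = ss_rows / (n - 1)                           # between-subject MS
    ms_cols = ss_cols / (k - 1)                           # between-rater MS
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual MS
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical histologic scores (0-3) from two observers for five biopsies:
scores = [[0, 0], [1, 1], [2, 1], [3, 3], [2, 2]]
print(round(icc_2_1(scores), 2))  # -> 0.92
```

With identical ratings from all observers the function returns 1.0; disagreement lowers the coefficient towards 0.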
Within the ACS group, the collagen type I content increased significantly (p = 0.03) between the biopsies taken on day 36 (Fig. 3e) and day 190 (Fig. 3f), while it remained at the same level (p > 0.05) in controls (Fig. 4c). The collagen type III content showed a tendency (p = 0.056) to decrease in the ACS group between samples taken on days 0 and 190.

Discussion: Results of the present study show that a single intralesional ACS injection into SDFT lesions leads to a temporary improvement of ultrasonographic parameters [27, 28]. Transient flattened morphology of tenocyte nuclei and increased collagen type I expression in ACS-treated tendons over time are indicative of reduced proliferation and increased differentiation of this cell type, respectively [1, 34, 35]. The therapeutic effect of ACS treatment is also demonstrated by an earlier decrease in lameness compared to controls [2]. The history of the horses was suggestive of strain-induced tendinopathy in the majority of tendons treated with either modality. However, it remains unclear whether the effects shown in the current study apply equally to tendinopathies of other etiologies. Although the reported duration of tendinopathy of up to 2 weeks before first examination (day 0) referred to clinical signs in the present study, subclinical degeneration is known to precede obvious clinical symptoms of strain-induced tendinopathies, especially in equine athletes [36]. It cannot be excluded that at least some of the tendons on day 0 were not in the inflammatory but in the early proliferative phase of tendon healing, although neither B-mode ultrasonography nor histology yielded clear evidence of potential chronicity of the tendinopathies. As an alternative, either more extensive tendon biopsies or ultrasonographic tissue characterization as a noninvasive diagnostic tool could have been used initially to further determine the age of the lesions [37, 38].
As tendon composition and biomechanical properties may vary significantly between horses [39], intraindividual controls (control = contralateral limb) are preferred in experimental settings [7]. For these reasons, both front limbs of two horses showing clinical signs of bilateral SDFT tendinopathy were included in the ACS group and as controls. Unfortunately, this could not be realised in more horses. Only two out of five clients who agreed to their horse or the respective limb being included as a control accepted an intralesional injection of these control tendons. Thus, the objective of creating two separate control groups, one with the lesion left untreated and one with an intralesional saline injection, could not be achieved. The choice of treatment modality for control groups in clinical and experimental trials is controversial. On the one hand, it is of interest to compare the effect of controlled exercise plus the effect of the substrate injected intralesionally with the effect of controlled exercise alone (an argument against intralesional injection of a control substance into control tendons). On the other hand, the mere puncture and needle decompression of acute tendon defects may have a therapeutic effect independent of the substrate injected [2]. Against that background, it seems preferable to treat control tendons with sham injections to demonstrate the effect of the substrate injected [5, 7] (ACS in the present case). Tendon biopsies were used because clinical assessment of tendon healing and B-mode ultrasonography alone are limited with regard to sensitivity and reproducibility [40], and longitudinal needle biopsies have been established in human [41, 42] and equine surgery [29, 43] as minimally invasive and well-tolerated techniques. They allow an insight into tendon architecture as well as the immunohistochemical detection of, for example, collagen types I and III [44].
Disadvantages, however, are the potentially therapeutic, albeit unknown, effect of the biopsy process on tendon healing [2, 45], their limited reproducibility and the relatively small volume of tendon tissue harvested [29, 43]. Although the mean degree of lameness did not differ between groups, this does not necessarily imply similar functional repair of the lesions, since SDFTs are generally exposed to greater maximal loads during heavy athletic activities than during trot, i.e. later during rehabilitation [46]. The observation that, compared to day 0, a significant decrease in lameness occurred earlier in the ACS group (by day 11) than in the controls (by day 36) may be influenced by effects attributed to ACS [8, 11, 13]. Detection of lameness in horses with bilateral SDFT lesions may be more challenging than detection of unilateral gait abnormality [47]. It cannot be excluded that the two horses with bilateral SDFT tendinopathy may have shown bilateral lameness or an additional contralateral lameness, respectively, if diagnostic analgesia had been performed. Clinical signs of inflammation were monitored using semiquantitative clinical score systems, which may be subject to some inaccuracy. This could have been improved by the use of computerized gait analysis, thermography [48] and measurements of the metacarpal circumference in combination with ultrasonography [49]. None of the inflammatory signs, including swelling, differed between groups, but palpable swelling decreased significantly within the ACS group between days 50 and 78, in contrast to controls. On the one hand, this may be attributed to auto- and paracrine effects of ACS on endogenous growth factor expression, since in a rodent Achilles tendon transection model bFGF expression was enhanced, though not before 8 weeks, i.e. with a delay after ACS injection [16].
On the other hand, controlled exercise exerted anti-inflammatory effects on tendons from both groups [50], which did not, however, lead to a significant decrease in swelling in controls. The decrease of palpable swelling in the ACS group between days 50 and 78 correlates positively with the ultrasonographic finding that TL-CSA and %T-lesion decreased only in the ACS group between the pre-treatment examination and day 78. T-CSA, however, remained unchanged throughout the entire observation period in both groups. This indicates that the decrease in swelling in the ACS group reflects a decrease in cutaneous, subcutaneous and peritendinous swelling rather than an altered tendon thickness in the late proliferation and early remodelling phase (i.e. until around day 45). This may be due to a therapeutic effect of inadvertent reflux of small volumes of ACS into the subcutis during intralesional injection. Ultrasonographic measurements of extratendinous swelling are challenging and were not included in the present study, although they could have been helpful to further confirm the findings of palpation. In contrast to the results of this study, rat Achilles tendons showed an increase in tendon thickness after ACS treatment compared to control tendons [22]. However, comparability is limited since the rat tendons, in contrast to the present study, were sutured and received three treatments of ACS. The present study shows that, compared to control tendons, a single intralesional injection of ACS leads to a significant reduction of %T-lesion and an increase in echogenicity (TES) 78 and 106 days after treatment. This finding corresponds to an earlier (until day 22) increase in TES and an earlier (until day 78) increase in the percentage of parallel orientated fibre bundles (T-FAS) in the ACS group compared to controls. These effects could be the result of stimulation of repair tissue, i.e.
improved fibrillogenesis in the early proliferative phase of tendon healing (4–45 days after injury) [23, 28, 37], in which most horses were presented and treated. This may have been a consequence of the potential IL-1Ra-mediated anti-inflammatory action which is attributed to ACS by several authors [8, 11, 13], although IL-1Ra concentrations in ACS were not determined in the present study. Another potential pathway may be the supplementation of growth factors, such as IGF-1 and TGF-β, which are reported to be increased in equine ACS [12, 16, 21]. IGF-1 is known to be decreased for approximately 2 weeks in experimentally induced tendinopathy, and a beneficial bolstering effect of exogenous IGF-1 on low endogenous IGF-1 production during the early repair phase of tendinopathy has been hypothesized [34]. ACS has been shown to display significant effects on the endogenous expression of growth factors, potentially via auto- and paracrine pathways [16]. It remains unclear why the significant differences between groups with regard to %T-lesion and TES were not consistent until the end of the observation period despite tendencies towards significance. This may be attributed to a time-limited effect of ACS (see above). Ultrasonographic tissue characterization has been established in recent years as a more precise alternative to B-mode ultrasonography to monitor the process of tendon healing [37, 38], particularly if, as in the present study, only a probe with a relatively low resolution is available. With regard to histologic scores, a difference between groups was seen at day 36. Here, cell nuclei were flattened in the ACS group compared to controls, which is suggestive of decreased tenocyte proliferation in the late proliferative phase (4–45 days) [1, 23, 34, 35] as a response to the ACS injection.
In agreement with this, tendon fibroblasts with a spindle-shaped nucleus have been shown to have reduced apoptotic and proliferative indices in human patellar tendons [35]. A consequence might be a decreased cellular production of inelastic collagen type III, which has been described to peak between 3 and 6 weeks after injury in equine experimental studies [34, 51]. However, immunohistochemistry in the present study revealed no difference in collagen type III expression between groups, which might be due to considerable variation in the different types of collagen between individual lesions, as described for naturally developed lesions in horses [1, 44]. The more favourable development of collagen type I expression in the ACS group between days 36 and 190 indicates qualitative improvement [34, 51], such as increased tensile strength of the repair tissue in the remodelling or maturation phase (45–120 days) [36], potentially caused by the mechanisms mentioned, i.e. an IL-1Ra-mediated anti-inflammatory mechanism or the supplementation of growth factors such as IGF-1 and TGF-β. Collagen type I was seen to be elevated for 6 months after injury in equine experimental studies [34]. This rather reflects the progress in control tendons of the present study and correlates with previous findings in naturally injured equine tendons [44]. In contrast to an experimental Achilles tendinopathy model using ACS-treated rats [22], no difference in collagen type I expression was detected between groups in the present study. However, the rat Achilles tendons were treated three times at 24-hour intervals, with the first injection 24 hours after induction of the lesion, i.e. in the acute inflammatory phase of tendon healing. By contrast, tendons in the present study received only a single intralesional injection of ACS up to 14 days after the onset of clinical symptoms, i.e. mostly at the end of or even after the acute inflammatory phase.
In the latter investigation, real-time quantitative polymerase chain reaction was used, which allows quantification of mRNA transcription of different collagen types, provided that enough tissue is available. Cytokines such as IL-1Ra and the growth factors IGF-1 and TGF-β have short half-lives and may be degraded and consumed within a short time period after exogenous application [14–16]. Nevertheless, tendon healing may be enhanced not only by direct binding of cytokines and growth factors to cell surface receptors, but also by indirect effects through stimulation of endogenous production of growth factors [13, 16, 52]. Therefore, the effect of ACS is potentially enhanced by several consecutive injections [22], as reported anecdotally to be common in equine practice and as recommended for the treatment of joint pathology [13]. A single ACS injection was chosen to determine the effect of a low dose as a basis for research because, to date, neither dose-dependent in vivo studies nor a consensus on the treatment protocol are available for blood products such as ACS. Another aim was to keep the number of factors influencing outcome, such as repeated needle puncture of tendons, as low as possible. The increase in collagen type I expression after ACS injection in an experimental rat study did not coincide with an improved maximum load to failure, despite leading to an improvement in tendon stiffness during biomechanical testing [22]. Although biomechanical testing is regarded as the method of choice, it could not be accomplished in the present study due to the inclusion of client-owned horses. In general, the degree of lameness and, to a limited extent, the echo pattern of the injured tendon reflect biomechanical properties. These parameters, however, did not significantly differ between groups in the present study at the end of the observation period. 
Due to the reduced group size, differences in long-term recurrence rate after return of the horses to full exercise could not be calculated statistically. Conclusions: This clinical trial in horses with acute tendinopathies of the SDFT shows that a single intralesional ACS injection contributes to significant reduction of lameness within 10 days and to improvement of ultrasonographic parameters of repair tissue between 11 and 23 weeks after treatment. Intralesional ACS treatment potentially decreases proliferation of tenocytes 5 weeks after treatment and increases their differentiation, as demonstrated by an elevated collagen type I expression in the remodelling phase. Repeated ACS injections should be considered to enhance positive effects. Future controlled long-term investigations should be performed in a larger number of horses to determine the effect on recurrence rate.
Background: Autologous blood-derived biologicals, including autologous conditioned serum (ACS), are frequently used to treat tendinopathies in horses despite limited evidence for their efficacy. The purpose of this study was to describe the effect of a single intralesional injection of ACS in naturally occurring tendinopathies of the equine superficial digital flexor tendon (SDFT) on clinical, ultrasonographic, and histological parameters. Methods: Fifteen horses with 17 naturally occurring tendinopathies of forelimb SDFTs were examined clinically and ultrasonographically (day 0). Injured tendons were randomly assigned to the ACS-treated group (n = 10) receiving a single intralesional ACS injection or included as controls (n = 7) which were either untreated or injected with saline on day 1. All horses participated in a gradually increasing exercise programme and were re-examined nine times at regular intervals until day 190. Needle biopsies were taken from the SDFTs on days 0, 36 and 190 and examined histologically and for the expression of collagen types I and III by immunohistochemistry. Results: In ACS-treated limbs lameness decreased significantly until day 10 after treatment. Swelling (scores) of the SDFT region decreased within the ACS group between 50 and 78 days after treatment. Ultrasonographically, the percentage of the lesion in the tendon was significantly lower and the echogenicity of the lesion (total echo score) was significantly higher 78 and 106 days after intralesional ACS injection compared to controls. Histology revealed that, compared to controls, tenocyte nuclei were more spindle-shaped 36 days after ACS injection. Immunohistochemistry showed that collagen type I expression significantly increased between days 36 and 190 after ACS injection. 
Conclusions: Single intralesional ACS injection of equine SDFTs with clinical signs of acute tendinopathy contributes to an early significant reduction of lameness and leads to temporary improvement of ultrasonographic parameters of repair tissue. Intralesional ACS treatment might decrease proliferation of tenocytes 5 weeks after treatment and increase their differentiation as demonstrated by elevated collagen type I expression in the remodelling phase. Potential enhancement of these effects by repeated injections should be tested in future controlled clinical investigations.
Introduction: Tendinopathy of the superficial digital flexor tendon (SDFT) is a common injury in Thoroughbred racehorses and other horse breeds and is regarded as a career-limiting disease with a high recurrence rate [1]. Numerous treatment modalities have shown limited success in improving tendon repair [2]. Regenerative therapy aims to restore structure and function after application of biocompatible materials, cells, and bioactive molecules [3, 4]. There is growing knowledge about the clinical effects of potentially regenerative substrates, e.g. mesenchymal stem cells (MSCs) [5, 6] and autologous blood products such as platelet rich plasma [7, 8] on equine tendinopathies. To date, however, ideal treatment strategies for naturally occurring tendinopathies have not been established [1, 2]. Autologous conditioned serum (ACS; synonyms irap®, Orthokine®, Orthogen, Düsseldorf, Germany) is used for intralesional treatment of tendinopathy in horses but, to the best of our knowledge, its clinical effect is only documented anecdotally [8–10]. ACS is prepared by exposing whole blood samples to glass beads, which has been shown to stimulate the secretion of anti-inflammatory cytokines, including interleukin (IL)-4 and IL-10 and IL-1 receptor antagonist (IL-1Ra) in humans [11]. A recent investigation has shown that ACS from equine blood also contains high levels of IL-1Ra and IL-10 [12]. Equine studies have focused on the IL-1Ra-mediated anti-inflammatory effects of ACS [13]; however, in tendon healing, the high concentrations of growth factors such as insulin-like growth factor-1 (IGF-1) and transforming growth factor-beta (TGF-β) may be equally or more important [14–16]. Blood samples from different horses and the use of different kits for the preparation of ACS may lead to differences in the cytokine and growth factor concentration in vitro [12, 17]. However, the relevance of these differences for the clinical effect is unknown. 
ACS was originally described to improve muscle regeneration in a murine muscle contusion model [18] and to exhibit anti-inflammatory effects in an experimental model of carpal osteoarthritis in horses [13] and in a placebo-controlled clinical trial in humans with knee osteoarthritis [19]. The rationale for the use of ACS to treat equine tendinopathies is based on several findings: 1) It was shown in an experimental study that the expression of IL-1β (and matrix metalloproteinase-13) is upregulated following overstrain injury of rat tendons, demonstrating that these molecules are important mediators in the pathogenesis of tendinopathy [15, 20]. 2) IL-1Ra protein and heterologous conditioned serum prepared with the irap® kit reduced the production of prostaglandin E2 by stimulated cells derived from macroscopically normal SDFTs in vitro [21]. 3) Growth factors concentrated in ACS, e.g. IGF-1 and TGF-β, have the potential to attract resident precursor cells, e.g. MSCs and tenoblasts, and to increase cell proliferation during tendon healing [14, 15, 17]. Rat Achilles tendons exposed to ACS in an experimental study showed an enhanced expression of the Col1A1 gene, which led to an increased secretion of type I collagen and accelerated recovery of tendon stiffness and improved histologic maturity of the repair tissue [22]. It was shown in another rodent Achilles tendon transection model that ACS generally increases the expression of basic fibroblast growth factor (bFGF), bone morphogenetic protein-12 and TGF-β1 [16], representing growth factors important for the process of tendon regeneration [15]. The process of tendon healing is mainly divided into three phases which merge into each other. The acute inflammatory phase (<10–14 days) is characterized by phagocytosis and demarcation of injured tendon tissue. 
A fibroproliferative callus is formed during the proliferative phase (4–45 days), while collagen fibrils are organised into tendon bundles during the remodelling or maturation phase (45–120 days; <3 months) [1, 23]. The aim of the present study was to support the hypothesis that a single intralesional ACS injection into SDFT lesions 1) has a clinically detectable anti-inflammatory effect, 2) leads to improved B-mode ultrasonographic parameters and 3) improves the organization of repair tissue. Conclusions: This clinical trial in horses with acute tendinopathies of the SDFT shows that a single intralesional ACS injection contributes to significant reduction of lameness within 10 days and to improvement of ultrasonographic parameters of repair tissue between 11 and 23 weeks after treatment. Intralesional ACS treatment potentially decreases proliferation of tenocytes 5 weeks after treatment and increases their differentiation, as demonstrated by an elevated collagen type I expression in the remodelling phase. Repeated ACS injections should be considered to enhance positive effects. Future controlled long-term investigations should be performed in a larger number of horses to determine the effect on recurrence rate.
19,204
392
16
[ "acs", "day", "control", "tendon", "group", "treated", "acs group", "tendons", "sdfts", "days" ]
[ "test", "test" ]
null
[CONTENT] Horse | Tendon | Ultrasonography | Biopsy | Histology | Collagen | Autologous conditioned serum | ACS | Irap [SUMMARY]
null
[CONTENT] Horse | Tendon | Ultrasonography | Biopsy | Histology | Collagen | Autologous conditioned serum | ACS | Irap [SUMMARY]
[CONTENT] Horse | Tendon | Ultrasonography | Biopsy | Histology | Collagen | Autologous conditioned serum | ACS | Irap [SUMMARY]
[CONTENT] Horse | Tendon | Ultrasonography | Biopsy | Histology | Collagen | Autologous conditioned serum | ACS | Irap [SUMMARY]
[CONTENT] Horse | Tendon | Ultrasonography | Biopsy | Histology | Collagen | Autologous conditioned serum | ACS | Irap [SUMMARY]
[CONTENT] Animals | Blood Transfusion, Autologous | Collagen Type I | Collagen Type III | Forelimb | Horse Diseases | Horses | Immunohistochemistry | Male | Tendinopathy | Tendons | Ultrasonography | Wound Healing [SUMMARY]
null
[CONTENT] Animals | Blood Transfusion, Autologous | Collagen Type I | Collagen Type III | Forelimb | Horse Diseases | Horses | Immunohistochemistry | Male | Tendinopathy | Tendons | Ultrasonography | Wound Healing [SUMMARY]
[CONTENT] Animals | Blood Transfusion, Autologous | Collagen Type I | Collagen Type III | Forelimb | Horse Diseases | Horses | Immunohistochemistry | Male | Tendinopathy | Tendons | Ultrasonography | Wound Healing [SUMMARY]
[CONTENT] Animals | Blood Transfusion, Autologous | Collagen Type I | Collagen Type III | Forelimb | Horse Diseases | Horses | Immunohistochemistry | Male | Tendinopathy | Tendons | Ultrasonography | Wound Healing [SUMMARY]
[CONTENT] Animals | Blood Transfusion, Autologous | Collagen Type I | Collagen Type III | Forelimb | Horse Diseases | Horses | Immunohistochemistry | Male | Tendinopathy | Tendons | Ultrasonography | Wound Healing [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] acs | day | control | tendon | group | treated | acs group | tendons | sdfts | days [SUMMARY]
null
[CONTENT] acs | day | control | tendon | group | treated | acs group | tendons | sdfts | days [SUMMARY]
[CONTENT] acs | day | control | tendon | group | treated | acs group | tendons | sdfts | days [SUMMARY]
[CONTENT] acs | day | control | tendon | group | treated | acs group | tendons | sdfts | days [SUMMARY]
[CONTENT] acs | day | control | tendon | group | treated | acs group | tendons | sdfts | days [SUMMARY]
[CONTENT] il | growth | acs | inflammatory | shown | cells | tendon | anti inflammatory | il 1ra | growth factor [SUMMARY]
null
[CONTENT] acs | day | control | treated | group | sdfts | fig | acs group | limbs | 190 [SUMMARY]
[CONTENT] weeks treatment | intralesional acs | treatment | acs | weeks | long term investigations performed | long term investigations | expression remodelling phase repeated | ultrasonographic parameters repair tissue | ultrasonographic parameters repair [SUMMARY]
[CONTENT] acs | day | control | group | treated | tendon | acs group | horses | lesion | sdfts [SUMMARY]
[CONTENT] acs | day | control | group | treated | tendon | acs group | horses | lesion | sdfts [SUMMARY]
[CONTENT] ||| ACS | SDFT [SUMMARY]
null
[CONTENT] day 10 ||| SDFT | ACS | between 50 and 78 days ||| 78 and 106 days ||| 36 days ||| between days 36 | 190 [SUMMARY]
[CONTENT] ||| 5 weeks ||| [SUMMARY]
[CONTENT] ||| ACS | SDFT ||| Fifteen | 17 ||| 10 | 7 | day 1 ||| nine | day 190 ||| days 0 | 36 | 190 ||| ||| day 10 ||| SDFT | ACS | between 50 and 78 days ||| 78 and 106 days ||| 36 days ||| between days 36 | 190 ||| ||| 5 weeks ||| [SUMMARY]
[CONTENT] ||| ACS | SDFT ||| Fifteen | 17 ||| 10 | 7 | day 1 ||| nine | day 190 ||| days 0 | 36 | 190 ||| ||| day 10 ||| SDFT | ACS | between 50 and 78 days ||| 78 and 106 days ||| 36 days ||| between days 36 | 190 ||| ||| 5 weeks ||| [SUMMARY]
Comparison of methods for the analysis of airway macrophage particulate load from induced sputum, a potential biomarker of air pollution exposure.
26542371
Air pollution is associated with a high burden of morbidity and mortality, but exposure cannot be quantified rapidly or cheaply. The particulate burden of macrophages from induced sputum may provide a biomarker. We compare the feasibility of two methods for digital quantification of airway macrophage particulate load.
BACKGROUND
Induced sputum samples were processed and analysed using ImageJ and Image SXM software packages. We compare each package by resources and time required.
METHODS
13 adequate samples were obtained from 21 patients. Median particulate load was 0.38 μm² (ImageJ) and 4.0 % of the total cellular area of macrophages (Image SXM), with no correlation between results obtained using the two methods (correlation coefficient = -0.42, p = 0.256). Image SXM took longer than ImageJ (median 54 vs 26 mins per participant, p = 0.008) and was less accurate based on visual assessment of the output images. ImageJ's method is subjective and requires well-trained staff.
RESULTS
Induced sputum has limited application as a screening tool due to the resources required. Limitations of both methods compared here were found: the heterogeneity of induced sputum appearances makes automated image analysis challenging. Further work should refine methodologies and assess inter- and intra-observer reliability, if these methods are to be developed for investigating the relationship of particulate and inflammatory response in the macrophage.
CONCLUSION
[ "Adult", "Aged", "Air Pollution", "Asthma", "Biomarkers", "Bronchiectasis", "Environmental Exposure", "Feasibility Studies", "Female", "Forced Expiratory Volume", "Humans", "Image Processing, Computer-Assisted", "Inhalation Exposure", "Macrophages, Alveolar", "Male", "Middle Aged", "Particulate Matter", "Reproducibility of Results", "Software", "Sputum", "Vital Capacity" ]
4635991
Background
Indoor and outdoor air pollution are the 4th and 9th leading risk factors, respectively, for disability-adjusted life years worldwide [1], and exposure is associated with increased risk of pneumonia in children, respiratory cancers, and development of Chronic Obstructive Pulmonary Disease [2–5]. Airborne particulate matter [6] with an aerodynamic diameter of <2.5 μm (PM2.5) is considered particularly harmful as the small size allows inhalation deep into the lungs [7]. Global initiatives, such as the Global Alliance for Clean Cookstoves (www.cleancookstoves.org), are tackling the major health burden caused by airborne PM. Major randomised trials of the health effects of clean burning cookstoves are in progress (e.g. www.capstudy.org and http://www.kintampo-hrc.org/projects/graphs.asp#.VMtKusaI0Rk). All share the challenge that quantifying an individual’s exposure to pollution is complex and expensive, and there is no gold standard method [8]. Development of a biomarker that acts as a surrogate marker of exposure could obviate the need for costly and intensive exposure monitoring. Ideally a biomarker should be: closely associated with exposure, adequately sensitive and specific, consistent across heterogeneous populations, cost efficient, acceptable to the user population, and feasible for use in the field (including low-resource settings) [9]. The phagocytic action of airway macrophages (AM) may provide the basis for a biomarker of PM exposure. The particulate load within AM is: increased in individuals who report exposure to household air pollution compared to those who do not [10]; statistically different between individuals who use different types of domestic fuel [11]; and associated with exposure to outdoor PM in commuters who cycle in London [12]. Correlation between AM particulate load (AMPL) and worsening lung function supports a possible pathophysiological role [13]. 
A recent systematic review of studies calculating AMPL concluded that this biomarker is suitable for assessing personal exposure to PM, but that technical improvements are needed before this method is suitable for widespread use [14]. Once cell monolayers (Cytospins™) have been obtained from induced sputum (IS) or bronchoalveolar lavage (BAL) samples, several different digital image analysis software programmes can be used to calculate AMPL. ImageJ software (http://rsbweb.nih.gov/ij/, superseding a similar software, Scion Image) and Image SXM software [15] (http://www.ImageSXM.org.uk) have both been used for this purpose [12, 16, 17]. There is no previously reported objective comparison of their feasibility and it is unknown whether these two methods provide comparable results. Unlike ImageJ, Image SXM has only been used with samples obtained via BAL, a technique that is not suitable for widespread use in the field due to the expertise, risks and financial costs involved. This study therefore aimed to provide an objective assessment of the relative feasibilities – with regard to resources, expertise and time required - of ImageJ and Image SXM for use with IS samples, and their comparative accuracy.
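Both software packages ultimately reduce each image to the same quantity: the area of dark particulate inside a macrophage relative to the area of the cell itself. As a rough illustration of that calculation, the sketch below applies two fixed intensity thresholds to a plain 2-D grayscale pixel array; the function name and both threshold values are assumptions for illustration, not the calibrated ImageJ or Image SXM procedures.

```python
def particulate_load_percent(image, cell_thresh=200, particle_thresh=60):
    """Percent of total cellular area occupied by particulate matter.

    image: 2-D list of 8-bit grayscale values (0 = black, 255 = white).
    Pixels darker than cell_thresh are counted as cell area; pixels
    darker than particle_thresh are counted as carbonaceous particulate.
    Both thresholds are illustrative, not published calibrations.
    """
    cell_pixels = particle_pixels = 0
    for row in image:
        for px in row:
            if px < cell_thresh:          # stained cytoplasm or darker
                cell_pixels += 1
                if px < particle_thresh:  # near-black particulate
                    particle_pixels += 1
    if cell_pixels == 0:
        return 0.0                        # no cell found in this image
    return 100.0 * particle_pixels / cell_pixels


# Toy 3x3 "image": bright background (255), mid-grey cytoplasm (150),
# and one near-black particulate pixel (30).
toy = [[255, 150, 150],
       [150,  30, 150],
       [255, 150, 255]]
print(particulate_load_percent(toy))  # 1 particulate pixel out of 6 cell pixels
```

In the published workflows the segmentation step is interactive (ImageJ) or semi-automated with manual image editing (Image SXM); the fixed global thresholds here merely stand in for that step.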
null
null
Results
Sputum induction 21 participants were recruited and attended for sputum induction and 1 participant was excluded due to baseline hypoxia (28 other recruited participants failed to attend). Of 20 participants undergoing sputum induction, samples were successfully obtained from 19 (Fig. 3). No adverse events occurred. Cytospins from six (32 %) participants were inadequate due to their leukocyte/squamous epithelial cell ratio. The characteristics of the 13 participants who provided an adequate sample are shown in Table 2. There was no significant difference in characteristics between those who provided an adequate sample and those who did not (data not shown). The differential cell counts are shown in Table 3.Fig. 3Participants and samples. The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysisTable 2Characteristics of 13 participantsParticipant characteristicGenderMale, n (%)9 (69)Female, n (%)4 (31)AgeYears, median (IQR)57 (39–67)Respiratory diagnosisAsthma, n (%)8 (62)Bronchiectasis, n (%)2 (15)Both, n (%)3 (23)Smoking statusNever smoked (%)8 (62)Ex-smoker (%)5 (38)SpirometryFEV1, median (IQR), litres1.80 (1.47-2.26)FEV1 % Predicted, median (IQR)73.5 (60.1 - 77.6)FVC, median (IQR), litres2.8 (2.47 – 3.82)FVC % predicted, median (IQR)91.2 (87.6 – 109.0) IQR Interquartile RangeTable 3Differential cell countsCell typeCell count % (Median (IQR) of 13 participants)Neutrophil72.5 (51.1-90.1)Macrophage10.0 (4.1-25.8)Eosinophil1.6 (1.0-8.5)Lymphocyte2.3 (1.2-3.5)Metachromatic0.0 (0.0-0.0)Bronchial epithelial2.8 (1.1-12.6)Squamous epithelial2.3 (0.8-6.5) Participants and samples. 
The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysis Characteristics of 13 participants IQR Interquartile Range Differential cell counts 21 participants were recruited and attended for sputum induction and 1 participant was excluded due to baseline hypoxia (28 other recruited participants failed to attend). Of 20 participants undergoing sputum induction, samples were successfully obtained from 19 (Fig. 3). No adverse events occurred. Cytospins from six (32 %) participants were inadequate due to their leukocyte/squamous epithelial cell ratio. The characteristics of the 13 participants who provided an adequate sample are shown in Table 2. There was no significant difference in characteristics between those who provided an adequate sample and those who did not (data not shown). The differential cell counts are shown in Table 3.Fig. 3Participants and samples. The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysisTable 2Characteristics of 13 participantsParticipant characteristicGenderMale, n (%)9 (69)Female, n (%)4 (31)AgeYears, median (IQR)57 (39–67)Respiratory diagnosisAsthma, n (%)8 (62)Bronchiectasis, n (%)2 (15)Both, n (%)3 (23)Smoking statusNever smoked (%)8 (62)Ex-smoker (%)5 (38)SpirometryFEV1, median (IQR), litres1.80 (1.47-2.26)FEV1 % Predicted, median (IQR)73.5 (60.1 - 77.6)FVC, median (IQR), litres2.8 (2.47 – 3.82)FVC % predicted, median (IQR)91.2 (87.6 – 109.0) IQR Interquartile RangeTable 3Differential cell countsCell typeCell count % (Median (IQR) of 13 participants)Neutrophil72.5 (51.1-90.1)Macrophage10.0 (4.1-25.8)Eosinophil1.6 (1.0-8.5)Lymphocyte2.3 (1.2-3.5)Metachromatic0.0 (0.0-0.0)Bronchial epithelial2.8 (1.1-12.6)Squamous epithelial2.3 (0.8-6.5) Participants and samples. 
The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysis Characteristics of 13 participants IQR Interquartile Range Differential cell counts Feasibility of methodology Median time for analysis of each participant was significantly lower for Image J (26 mins, interquartile range (IQR): 21–30) than for Image SXM (54 mins, IQR: 43–68), p = 0.008. Including the time taken for image acquisition, the median time was not significantly different between ImageJ (51 mins,IQR: 46–65 mins) and Image SXM (66 mins, IQR: 59 – 84), p = 0.424. For the Image SXM method, 58 % of the ‘analysis time’ was spent editing the images prior to analysis. A comparison of the resources required for each method is shown in Table 4.Table 4Comparison of resource requirements for methodsResourceImage SXMImageJEquipment required for sputum induction and sample processingIdentical specialist equipment and facilities required regardless of analysis methodImage acquisition equipmentMicroscope with x40 objective and digital image capturing capabilitiesMicroscope with x100a objective and digital image capturing capabilitiesAnalysis software availabilityIn the public domain – available free of chargeIn the public domain – available free of chargeAdditional image editing softwarePurchase requiredNot requiredOperating system for analysis softwareCompatible with Mac operating systemsCompatible with Mac and Windows operating systemsFile type availabilityTIFFJPEG, TIFF, GIF, BMP, DICOM, FITS and ‘raw’Time required for sputum induction and processingApproximately 90–120 min per participantTime required for image acquisition (median)15 min27 minTime required for image analysis (including image editing if required) (median)54 min26 min aAlthough a x100 objective is recommended for ImageJ methodology, a x60 objective was used in this study due to resource limitations Comparison of resource requirements for methods aAlthough a x100 
objective is recommended for ImageJ methodology, a x60 objective was used in this study due to resource limitations A mean of 49 macrophages per participant were included in the ImageJ analysis (total 632 macrophages). A mean of 43 images of per participant were captured for Image SXM analysis (total 558 images). During the Image SXM process, 72 % of images were removed following the initial analysis as they were deemed to be inaccurate (either over- or under-estimating AMPL) using the blink comparison function (Fig. 4), resulting in a further four participants being excluded from the study. The analysis was repeated with only the remaining 143 images (median 14 images (IQR 11.5-20) per participant). If only these nine participants are included, median time taken increased to 67 mins (IQR 47–72) for Image SXM analysis and 83 mins (IQR 64–87) including image acquisition time.Fig. 4An example of inaccurte Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area [27] of the airway macrophage on the left has been overestimated, and the partcilate matter (red) of the airway macrophage on the right has been overestimated An example of inaccurte Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area [27] of the airway macrophage on the left has been overestimated, and the partcilate matter (red) of the airway macrophage on the right has been overestimated Median time for analysis of each participant was significantly lower for Image J (26 mins, interquartile range (IQR): 21–30) than for Image SXM (54 mins, IQR: 43–68), p = 0.008. Including the time taken for image acquisition, the median time was not significantly different between ImageJ (51 mins,IQR: 46–65 mins) and Image SXM (66 mins, IQR: 59 – 84), p = 0.424. For the Image SXM method, 58 % of the ‘analysis time’ was spent editing the images prior to analysis. 
A comparison of the resources required for each method is shown in Table 4.Table 4Comparison of resource requirements for methodsResourceImage SXMImageJEquipment required for sputum induction and sample processingIdentical specialist equipment and facilities required regardless of analysis methodImage acquisition equipmentMicroscope with x40 objective and digital image capturing capabilitiesMicroscope with x100a objective and digital image capturing capabilitiesAnalysis software availabilityIn the public domain – available free of chargeIn the public domain – available free of chargeAdditional image editing softwarePurchase requiredNot requiredOperating system for analysis softwareCompatible with Mac operating systemsCompatible with Mac and Windows operating systemsFile type availabilityTIFFJPEG, TIFF, GIF, BMP, DICOM, FITS and ‘raw’Time required for sputum induction and processingApproximately 90–120 min per participantTime required for image acquisition (median)15 min27 minTime required for image analysis (including image editing if required) (median)54 min26 min aAlthough a x100 objective is recommended for ImageJ methodology, a x60 objective was used in this study due to resource limitations Comparison of resource requirements for methods aAlthough a x100 objective is recommended for ImageJ methodology, a x60 objective was used in this study due to resource limitations A mean of 49 macrophages per participant were included in the ImageJ analysis (total 632 macrophages). A mean of 43 images of per participant were captured for Image SXM analysis (total 558 images). During the Image SXM process, 72 % of images were removed following the initial analysis as they were deemed to be inaccurate (either over- or under-estimating AMPL) using the blink comparison function (Fig. 4), resulting in a further four participants being excluded from the study. The analysis was repeated with only the remaining 143 images (median 14 images (IQR 11.5-20) per participant). 
If only these nine participants are included, median time taken increased to 67 mins (IQR 47–72) for Image SXM analysis and 83 mins (IQR 64–87) including image acquisition time.Fig. 4An example of inaccurte Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area [27] of the airway macrophage on the left has been overestimated, and the partcilate matter (red) of the airway macrophage on the right has been overestimated An example of inaccurte Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area [27] of the airway macrophage on the left has been overestimated, and the partcilate matter (red) of the airway macrophage on the right has been overestimated Airway macrophage particulate load Considerable morphological heterogeneity was seen between AM, both within samples and between participants, with wide variations in AMPL (Fig. 5). The cytoplasms of the AM in this study were noted to be granular and heterogeneous (Fig. 5), unlike the homogenous appearance of cytoplasm seen in our previous experience of macrophages obtained by BAL [11].Fig. 5Airway macrophage heterogeneity. The morphology of the airway macrophages (shown with red arrows) was varied within the same sample (a) and between different participant samples (a & b). The particulate load also varied between macrophages in the same sample (a) Airway macrophage heterogeneity. The morphology of the airway macrophages (shown with red arrows) was varied within the same sample (a) and between different participant samples (a & b). The particulate load also varied between macrophages in the same sample (a) ImageJ analysis of 13 cytospins revealed a median AMPL of 0.38 μm2 (IQR 0.17-0.72 μm2). Image SXM analysis of 9 cytospins calculated a median total cellular area occupied by PM of 4.0 % (IQR 2.3-6.0 %). 
There was no statistically significant correlation between results obtained using the two methods (correlation coefficient = −0.42, p = 0.256).
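The between-method comparison above (a Spearman rank correlation of the AMPL values produced by the two programs) can be sketched in plain Python. The helper below and all input values are illustrative, not the study data or the SPSS output:

```python
# Minimal sketch (no external libraries) of the Spearman rank correlation
# used to compare AMPL from the two methods. Input values are hypothetical.

def ranks(xs):
    """Average ranks (1-based), with ties given their mid-rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mid-rank, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

imagej_ampl = [0.17, 0.38, 0.72, 0.25, 0.55]  # μm² per macrophage (hypothetical)
sxm_pct = [4.0, 2.3, 6.0, 3.1, 2.8]           # % cellular area (hypothetical)
print(round(spearman_rho(imagej_ampl, sxm_pct), 2))  # -> 0.1
```

The mid-rank averaging for ties matches the usual Spearman treatment; a significance test (the p value reported in the Results) would additionally be needed on top of the coefficient.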
Conclusion
Direct measurement of air pollution exposure is costly, logistically complicated and intrusive to the individual. Studies investigating the health impacts of air pollution exposure and the benefits of interventions are limited by the challenges associated with accurately quantifying exposure [9]. A biomarker of air pollution exposure will be a useful tool to facilitate research addressing the high burden of disease associated with air pollution. This small study has not established whether AMPL is an accurate biomarker of pollution exposure, but has compared the feasibility of two previously used methods. The heterogeneity of IS samples complicates digital image analysis methods, and the resource requirements for assessing AMPL from IS are considerable, making it unlikely that this biomarker of exposure will be appropriate for widespread use as a tool for large-scale intervention studies. Priority should be given to developing a point-of-care biomarker of exposure, without the need for specialist training and equipment, to facilitate the large public health intervention trials that are urgently needed. Potential biomarkers requiring further exploration include direct measures of combustion products, such as exhaled carbon monoxide, exhaled carboxyhaemoglobin, exhaled volatile organic compounds or levoglucosan and methoxyphenols in urine [8, 9, 21–23]. Indirect measures of exposure in sputum, blood and urine, including markers of oxidative stress and endothelial or epithelial damage (such as 8-isoprostane, malondialdehyde, nitric oxide, or surfactant-associated protein D), may also be promising biomarkers [9, 21, 24–26].
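The feasibility assessment above rested on paired per-participant timings, which the Results compared with a Wilcoxon signed-rank test. A minimal sketch of the W statistic, assuming hypothetical times and no ties in the absolute differences (a full test would also convert W to a p-value):

```python
# Sketch of the Wilcoxon signed-rank statistic W for paired per-participant
# analysis times (minutes). Times are hypothetical, not the study data.

def signed_rank_w(x, y):
    """W = smaller of the rank sums of positive and negative differences
    (zero differences dropped; assumes no ties in |difference|)."""
    diffs = sorted((a - b for a, b in zip(x, y) if a != b), key=abs)
    w_pos = sum(r for r, d in enumerate(diffs, start=1) if d > 0)
    w_neg = sum(r for r, d in enumerate(diffs, start=1) if d < 0)
    return min(w_pos, w_neg)

sxm_min = [54, 43, 68, 50, 60, 47, 55]     # hypothetical Image SXM times
imagej_min = [26, 21, 30, 24, 28, 22, 27]  # hypothetical ImageJ times
print(signed_rank_w(sxm_min, imagej_min))  # every difference positive -> W = 0
```

A W far below n(n+1)/4 indicates a systematic difference between the paired times, which is the pattern behind the significantly longer Image SXM analysis times reported above.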
[ "Participant involvement", "Sputum induction", "Sputum processing", "Digital image acquisition", "Image SXM analysis", "ImageJ analysis", "Feasibility comparison of methods", "Statistical analysis", "Ethical approval", "Sputum induction", "Feasibility of methodology", "Airway macrophage particulate load" ]
[ "Respiratory patients were recruited via outpatient respiratory clinics at Aintree University Hospital, Liverpool, UK. All consenting adults over 18 years old with asthma or bronchiectasis, who did not meet safety exclusion criteria (see Table 1), were recruited.Table 1The exclusion criteria used for safety reasons prior to performing sputum inductionSafety checklist – exclusion criteria for sputum induction• FEV1 < 60 %/< 1.0 L (post – Salbutamol 200 micrograms)• SaO2 < 90 % on room air• Unable to take salbutamol• Extreme shortness of breath• Acute Respiratory Distress Syndrome• Known haemoptysis• Known arrhythmias/angina• Known thoracic, abdominal or cerebral aneurysms• Recent pneumothorax• Pulmonary emboli• Fractured ribs/recent chest trauma• Recent eye surgery• Known pleural effusions• Pulmonary oedemaThrombocytopenia (Platelets < 25)", "Participants underwent sputum induction on one occasion each in August-October 2013. Pre-procedure Salbutamol (200 micrograms) was given to prevent bronchoconstriction. Baseline spirometry was performed to European Respiratory Society and American Thoracic Society standards [18] using a MicroMedical MicroLab Mk8 Spirometer (Cardinal Health UK). Three × 5mls of hypertonic saline (3 %, 4 %, 5 % saline given in stepwise fashion, lasting up to 5 min per nebulisation) were nebulised via Omron NE-U17 Ultrasonic Nebuliser (Omron Healthcare Europe). Lung function was assessed at intervals to detect bronchoconstriction, according to pre-specified safety criteria.", "Sputum samples were kept on ice and sputum plugs were manually extracted and treated with 0.1 % Sputolysin (Merck Chemical Ltd, UK) for fifteen minutes to remove mucus. Phosphate Buffered Solution (Sigma-Aldrich, UK) was added and cells were filtered and centrifuged at 2200 rpm for ten minutes at 4 °C (Heraeus Megafuge 1.0R, ThermoFisher Scientific, USA). 
The pellet was re-suspended at 0.5×10^6 cells per ml and two × 100 μl of suspension was cytocentrifuged (Shandon Cytospin 4, ThermoFisher Scientific) onto microscope slides at 450 rpm for 6 min to produce three cytospins per participant. Slides were fixed in methanol and stored until staining. One slide per participant was stained using Hemacolor Staining kit (Merck-Millipore, Germany) for ImageJ analysis. One slide was stained using Hemacolor Solution 2 (eosin) only (dipped for 9 s), so that only the cytoplasm was stained (a method previously developed for optimising Image SXM analysis [16]). One slide per participant was stained with Diff-Quik (Dade Behring, Deerfield, IL, USA) for differential cell counts: 400 cells were counted per participant, using a Leica DM IL light microscope at ×40 magnification. Cytospins with a leukocyte/squamous epithelial cell ratio of ≤5 were deemed inadequate and therefore excluded from the analysis [19].", "Cytospin slides for ImageJ analysis were photographed at ×60 magnification using Nikon Eclipse 80i digital microscope (Nikon Instruments Europe BV) with Nikon NIS-Elements BR software; 50 macrophages were captured per participant where possible (in cases where less than 50 macrophages were present on the cytospin a reduced number was used). Slides for Image SXM analysis were imaged at ×40 magnification using a Leica DM IL light microscope (Leica Microsystems UK Ltd) with a Nikon E990 digital camera (Nikon Inc, USA); where possible 50 microscope fields (with at least one macrophage per field) were captured per participant - all the macrophages captured in a field were analysed. In cases where less than 50 images from the whole cytospin contained a macrophage this reduced number of macrophage-containing images, and all macrophages within those images, were included in the analysis. Images for both methods were taken systematically using a predefined method to prevent duplication or biased image selection, as shown in Fig. 1.Fig. 
1Systematic digital image acquisition. The pathway used to acquire digital images of cytospin ‘spots’ is shown", "Images were edited using Adobe Photoshop Elements v6.0 to show only macrophages, to prevent incorrect calculations of cellular and PM areas (Fig. 2a and b). Image SXM (version 1.92, April 2011) variable settings were optimised for cytoplasm (upper and lower size limits and density threshold) and PM (density threshold) detection by adjusting settings for a range of images from different participants. Values which consistently maximised identification of PM without increasing false positive identification were used. These settings were then applied to the analysis of all images from all participants. 50 images per participant were analysed to generate output images (Fig. 2c) and the arithmetic mean percentage of total cellular area occupied by PM was calculated by Image SXM. The blink comparison function, which provides an overlay of images, was used to compare original and output images; subjective discordance between total cellular or PM area led to removal of that image from the analysis. Participants with fewer than ten images remaining were excluded from the analysis.Fig. 2Image SXM and ImageJ methodology. Image SXM (a, b & c): digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macrophage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (d) turned red (e). The particulate matter within the cytoplasm was then selected by freehand (f)", "A stage micrometer (Agar Scientific, UK) was used to calibrate image size. Colour images were converted to 32-bit black and white images using ImageJ (version 1.46r). The “threshold” settings were adjusted to obtain the best fit of red over black areas [6] (Fig. 2d and e). The freehand select function was used to select PM (Fig. 2f) that was within the cell, and to exclude red areas other than PM, such as the nucleus. ImageJ calculated the area of PM within the selection. Thresholds were adjusted to obtain the best fit for different particle aggregates in each macrophage. The median area from 50 macrophages was calculated. This methodology is a refinement of previously used techniques [12], adapted from earlier Scion Image methodology [10].", "The time taken for image capture and analysis of the final 11 samples was recorded, along with an inventory of the required equipment and expertise for each method.", "Data was analysed using SPSS v21. AMPL given by each method were compared using a Spearman Rank Order Correlation test. Participant characteristics were compared using Chi-square and Mann–Whitney U tests. Time taken to conduct the analyses was compared by Wilcoxon Signed Rank test. A p value of <0.05 was considered statistically significant.", "The East Midlands – Derby 1 Research Ethics Committee approved this work (REC reference: 11/EM/0269). 
Written informed consent was obtained from all participants.", "21 participants were recruited and attended for sputum induction and 1 participant was excluded due to baseline hypoxia (28 other recruited participants failed to attend). Of 20 participants undergoing sputum induction, samples were successfully obtained from 19 (Fig. 3). No adverse events occurred. Cytospins from six (32 %) participants were inadequate due to their leukocyte/squamous epithelial cell ratio. The characteristics of the 13 participants who provided an adequate sample are shown in Table 2. There was no significant difference in characteristics between those who provided an adequate sample and those who did not (data not shown). The differential cell counts are shown in Table 3.Fig. 3Participants and samples. The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysisTable 2Characteristics of 13 participantsParticipant characteristicGenderMale, n (%)9 (69)Female, n (%)4 (31)AgeYears, median (IQR)57 (39–67)Respiratory diagnosisAsthma, n (%)8 (62)Bronchiectasis, n (%)2 (15)Both, n (%)3 (23)Smoking statusNever smoked (%)8 (62)Ex-smoker (%)5 (38)SpirometryFEV1, median (IQR), litres1.80 (1.47-2.26)FEV1 % Predicted, median (IQR)73.5 (60.1 - 77.6)FVC, median (IQR), litres2.8 (2.47 – 3.82)FVC % predicted, median (IQR)91.2 (87.6 – 109.0)\nIQR Interquartile RangeTable 3Differential cell countsCell typeCell count % (Median (IQR) of 13 participants)Neutrophil72.5 (51.1-90.1)Macrophage10.0 (4.1-25.8)Eosinophil1.6 (1.0-8.5)Lymphocyte2.3 (1.2-3.5)Metachromatic0.0 (0.0-0.0)Bronchial epithelial2.8 (1.1-12.6)Squamous epithelial2.3 (0.8-6.5)", "Median time for analysis of each participant was significantly lower for ImageJ (26 mins, interquartile range (IQR): 21–30) than for Image SXM (54 mins, IQR: 43–68), p = 0.008. Including the time taken for image acquisition, the median time was not significantly different between ImageJ (51 mins, IQR: 46–65 mins) and Image SXM (66 mins, IQR: 59–84), p = 0.424. For the Image SXM method, 58 % of the ‘analysis time’ was spent editing the images prior to analysis. A comparison of the resources required for each method is shown in Table 4.Table 4Comparison of resource requirements for methodsResourceImage SXMImageJEquipment required for sputum induction and sample processingIdentical specialist equipment and facilities required regardless of analysis methodImage acquisition equipmentMicroscope with x40 objective and digital image capturing capabilitiesMicroscope with x100a objective and digital image capturing capabilitiesAnalysis software availabilityIn the public domain – available free of chargeIn the public domain – available free of chargeAdditional image editing softwarePurchase requiredNot requiredOperating system for analysis softwareCompatible with Mac operating systemsCompatible with Mac and Windows operating systemsFile type availabilityTIFFJPEG, TIFF, GIF, BMP, DICOM, FITS and ‘raw’Time required for sputum induction and processingApproximately 90–120 min per participantTime required for image acquisition (median)15 min27 minTime required for image analysis (including image editing if required) (median)54 min26 min\naAlthough a x100 objective is recommended for ImageJ methodology, a x60 objective was used in this study due to resource limitations\nA mean of 49 macrophages per participant were included in the ImageJ analysis (total 632 macrophages). A mean of 43 images per participant were captured for Image SXM analysis (total 558 images). During the Image SXM process, 72 % of images were removed following the initial analysis as they were deemed to be inaccurate (either over- or under-estimating AMPL) using the blink comparison function (Fig. 4), resulting in a further four participants being excluded from the study. The analysis was repeated with only the remaining 143 images (median 14 images (IQR 11.5-20) per participant). If only these nine participants are included, median time taken increased to 67 mins (IQR 47–72) for Image SXM analysis and 83 mins (IQR 64–87) including image acquisition time.Fig. 4An example of inaccurate Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area [27] of the airway macrophage on the left has been overestimated, and the particulate matter (red) of the airway macrophage on the right has been overestimated", "Considerable morphological heterogeneity was seen between AM, both within samples and between participants, with wide variations in AMPL (Fig. 5). The cytoplasms of the AM in this study were noted to be granular and heterogeneous (Fig. 5), unlike the homogenous appearance of cytoplasm seen in our previous experience of macrophages obtained by BAL [11].Fig. 5Airway macrophage heterogeneity. The morphology of the airway macrophages (shown with red arrows) varied within the same sample (a) and between different participant samples (a & b). The particulate load also varied between macrophages in the same sample (a)\nImageJ analysis of 13 cytospins revealed a median AMPL of 0.38 μm² (IQR 0.17-0.72 μm²). Image SXM analysis of 9 cytospins calculated a median total cellular area occupied by PM of 4.0 % (IQR 2.3-6.0 %). There was no statistically significant correlation between results obtained using the two methods (correlation coefficient = −0.42, p = 0.256)." ]
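Both programs described above ultimately quantify AMPL by intensity thresholding: pixels darker than one cut-off count as cell, pixels darker than a stricter cut-off count as particulate matter, reported either as a percentage of cellular area (Image SXM) or converted to μm² via the stage-micrometer calibration (ImageJ). A toy sketch in Python; the image, thresholds and calibration below are illustrative, not the tools' actual settings:

```python
# Toy sketch of threshold-based AMPL measurement on an 8-bit grayscale crop
# of a single macrophage. Thresholds and pixel values are illustrative only,
# not the actual Image SXM / ImageJ settings.

def pm_fraction(image, cell_threshold=200, pm_threshold=60):
    """Percentage of cellular area occupied by particulate matter."""
    cell_px = sum(1 for row in image for v in row if v < cell_threshold)
    pm_px = sum(1 for row in image for v in row if v < pm_threshold)
    return 100.0 * pm_px / cell_px if cell_px else 0.0

def pm_area_um2(image, um_per_px, pm_threshold=60):
    """PM area in μm², given the stage-micrometer pixel calibration."""
    pm_px = sum(1 for row in image for v in row if v < pm_threshold)
    return pm_px * um_per_px ** 2

# 4x4 toy image: 255 = background, 120 = cytoplasm, 30 = carbon particle
toy = [
    [255, 255, 255, 255],
    [255, 120, 120, 255],
    [255, 120,  30, 255],
    [255, 255, 255, 255],
]
print(pm_fraction(toy))                # 1 PM pixel / 4 cell pixels -> 25.0
print(round(pm_area_um2(toy, 0.1), 4)) # 1 px * (0.1 μm)^2 -> 0.01
```

In practice both tools also require the manual steps described above (editing out non-macrophage material, per-image threshold adjustment, and exclusion of discordant output images), which is where most of the analysis time was spent.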
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Participant involvement", "Sputum induction", "Sputum processing", "Digital image acquisition", "Image SXM analysis", "ImageJ analysis", "Feasibility comparison of methods", "Statistical analysis", "Ethical approval", "Results", "Sputum induction", "Feasibility of methodology", "Airway macrophage particulate load", "Discussion", "Conclusion" ]
[ "Indoor and outdoor air pollution are the 4th and 9th leading risk factors, respectively, for disability-adjusted life years worldwide [1], and exposure is associated with increased risk of pneumonia in children, respiratory cancers, and development of Chronic Obstructive Pulmonary Disease [2–5]. Airborne particulate matter [6] with an aerodynamic diameter of <2.5 μm (PM2.5) is considered particularly harmful as the small size allows inhalation deep into the lungs [7].\nGlobal initiatives, such as the Global Alliance for Clean Cookstoves (www.cleancookstoves.org), are tackling the major health burden caused by airborne PM. Major randomised trials of the health effects of clean burning cookstoves are in progress (e.g. www.capstudy.org and http://www.kintampo-hrc.org/projects/graphs.asp#.VMtKusaI0Rk). All share the challenge that quantifying an individual’s exposure to pollution is complex and expensive, and there is no gold standard method [8].\nDevelopment of a biomarker that acts as a surrogate marker of exposure could obviate the need for costly and intensive exposure monitoring. Ideally a biomarker should be: closely associated with exposure, adequately sensitive and specific, consistent across heterogenous populations, cost efficient, acceptable to the user population, and feasible for use in the field (including low-resource settings) [9].\nThe phagocytic action of airway macrophages (AM) may provide the basis for a biomarker of PM exposure. The particulate load within AM is: increased in individuals who report exposure to household air pollution compared to those who do not [10]; statistically different between individuals who use different types of domestic fuel [11]; and associated with exposure to outdoor PM in commuters who cycle in London [12]. Correlation between AM particulate load (AMPL) and worsening lung function supports a possible pathophysiological role [13]. 
A recent systematic review of studies calculating AMPL concluded that this biomarker is suitable for assessing personal exposure to PM, but that technical improvements are needed before this method is suitable for widespread use [14].\nOnce cell monolayers (Cytospins™) have been obtained from induced sputum (IS) or bronchoalveolar lavage (BAL) samples, several different digital image analysis software programmes can be used to calculate AMPL. ImageJ software (http://rsbweb.nih.gov/ij/, superseding a similar software, Scion Image) and Image SXM software [15] (http://www.ImageSXM.org.uk) have both been used for this purpose [12, 16, 17].\nThere is no previously reported objective comparison of their feasibility and it is unknown whether these two methods provide comparable results. Unlike ImageJ, Image SXM has only been used with samples obtained via BAL, a technique that is not suitable for widespread use in the field due to the expertise, risks and financial costs involved. This study therefore aimed to provide an objective assessment of the relative feasibilities – with regard to resources, expertise and time required - of ImageJ and Image SXM for use with IS samples, and their comparative accuracy.", " Participant involvement Respiratory patients were recruited via outpatient respiratory clinics at Aintree University Hospital, Liverpool, UK. 
All consenting adults over 18 years old with asthma or bronchiectasis, who did not meet safety exclusion criteria (see Table 1), were recruited.Table 1The exclusion criteria used for safety reasons prior to performing sputum inductionSafety checklist – exclusion criteria for sputum induction• FEV1 < 60 %/< 1.0 L (post – Salbutamol 200 micrograms)• SaO2 < 90 % on room air• Unable to take salbutamol• Extreme shortness of breath• Acute Respiratory Distress Syndrome• Known haemoptysis• Known arrhythmias/angina• Known thoracic, abdominal or cerebral aneurysms• Recent pneumothorax• Pulmonary emboli• Fractured ribs/recent chest trauma• Recent eye surgery• Known pleural effusions• Pulmonary oedemaThrombocytopenia (Platelets < 25)\n Sputum induction Participants underwent sputum induction on one occasion each in August-October 2013. 
Pre-procedure Salbutamol (200 micrograms) was given to prevent bronchoconstriction. Baseline spirometry was performed to European Respiratory Society and American Thoracic Society standards [18] using a MicroMedical MicroLab Mk8 Spirometer (Cardinal Health UK). Three × 5mls of hypertonic saline (3 %, 4 %, 5 % saline given in stepwise fashion, lasting up to 5 min per nebulisation) were nebulised via Omron NE-U17 Ultrasonic Nebuliser (Omron Healthcare Europe). Lung function was assessed at intervals to detect bronchoconstriction, according to pre-specified safety criteria.\n Sputum processing Sputum samples were kept on ice and sputum plugs were manually extracted and treated with 0.1 % Sputolysin (Merck Chemical Ltd, UK) for fifteen minutes to remove mucus. Phosphate Buffered Solution (Sigma-Aldrich, UK) was added and cells were filtered and centrifuged at 2200 rpm for ten minutes at 4 °C (Heraeus Megafuge 1.0R, ThermoFisher Scientific, USA). The pellet was re-suspended at 0.5×10^6 cells per ml and two × 100 μl of suspension was cytocentrifuged (Shandon Cytospin 4, ThermoFisher Scientific) onto microscope slides at 450 rpm for 6 min to produce three cytospins per participant. Slides were fixed in methanol and stored until staining. 
One slide per participant was stained using Hemacolor Staining kit (Merck-Millipore, Germany) for ImageJ analysis. One slide was stained using Hemacolor Solution 2 (eosin) only (dipped for 9 s), so that only the cytoplasm was stained (a method previously developed for optimising Image SXM analysis [16]). One slide per participant was stained with Diff-Quik (Dade Behring, Deerfield, IL, USA) for differential cell counts: 400 cells were counted per participant, using a Leica DM IL light microscope at ×40 magnification. Cytospins with a leukocyte/squamous epithelial cell ratio of ≤5 were deemed inadequate and therefore excluded from the analysis [19].\n Digital image acquisition Cytospin slides for ImageJ analysis were photographed at ×60 magnification using Nikon Eclipse 80i digital microscope (Nikon Instruments Europe BV) with Nikon NIS-Elements BR software; 50 macrophages were captured per participant where possible (in cases where less than 50 macrophages were present on the cytospin a reduced number was used). Slides for Image SXM analysis were imaged at ×40 magnification using a Leica DM IL light microscope (Leica Microsystems UK Ltd) with a Nikon E990 digital camera (Nikon Inc, USA); where possible 50 microscope fields (with at least one macrophage per field) were captured per participant - all the macrophages captured in a field were analysed. In cases where less than 50 images from the whole cytospin contained a macrophage this reduced number of macrophage-containing images, and all macrophages within those images, were included in the analysis. Images for both methods were taken systematically using a predefined method to prevent duplication or biased image selection, as shown in Fig. 1.Fig. 1Systematic digital image acquisition. The pathway used to acquire digital images of cytospin ‘spots’ is shown\n Image SXM analysis Images were edited using Adobe Photoshop Elements v6.0 to show only macrophages, to prevent incorrect calculations of cellular and PM areas (Fig. 2a and b). Image SXM (version 1.92, April 2011) variable settings were optimised for cytoplasm (upper and lower size limits and density threshold) and PM (density threshold) detection by adjusting settings for a range of images from different participants. Values which consistently maximised identification of PM without increasing false positive identification were used. These settings were then applied to the analysis of all images from all participants. 50 images per participant were analysed to generate output images (Fig. 2c) and the arithmetic mean percentage of total cellular area occupied by PM was calculated by Image SXM. The blink comparison function, which provides an overlay of images, was used to compare original and output images; subjective discordance between total cellular or PM area led to removal of that image from the analysis. 
Participants with fewer than ten images remaining were excluded from the analysis.Fig. 2Image SXM and ImageJ methodology. Image SXM (a, b & c); digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macropghage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (a) turned red (b). The particulate matter within the cytoplasm was then selected by freehand (c)\nImage SXM and ImageJ methodology. Image SXM (a, b & c); digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macropghage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (a) turned red (b). The particulate matter within the cytoplasm was then selected by freehand (c)\nImages were edited using Adobe Photoshop Elements v6.0 to show only macrophages, to prevent incorrect calculations of cellular and PM areas (Fig. 2a and b). Image SXM (version 1.92, April 2011) variable settings were optimised for cytoplasm (upper and lower size limits and density threshold) and PM (density threshold) detection by adjusting settings for a range of images from different participants. Values which consistently maximised identification of PM without increasing false positive identification were used. These settings were then applied to the analysis of all images from all participants. 50 images per participant were analysed to generate output images (Fig. 2c) and the arithmetic mean percentage of total cellular area occupied by PM was calculated by Image SXM. 
The blink comparison function, which provides an overlay of images, was used to compare original and output images; subjective discordance between total cellular or PM area led to removal of that image from the analysis. Participants with fewer than ten images remaining were excluded from the analysis.Fig. 2Image SXM and ImageJ methodology. Image SXM (a, b & c); digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macropghage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (a) turned red (b). The particulate matter within the cytoplasm was then selected by freehand (c)\nImage SXM and ImageJ methodology. Image SXM (a, b & c); digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macropghage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (a) turned red (b). The particulate matter within the cytoplasm was then selected by freehand (c)\n ImageJ analysis A stage micrometer (Agar Scientific, UK) was used to calibrate image size. Colour images were converted to 32-bit black and white images using ImageJ (version 1.46r). The “threshold” settings were adjusted to obtain the best fit of red over black areas [6] (Fig. 1d and e). The freehand select function was used to select PM (Fig. 2f) that was within the cell, and to exclude red areas other than PM, such as nucleus. ImageJ calculated the area of PM within the selection. Thresholds were adjusted to obtain the best fit for different particle aggregates in each macrophage. 
The median area from 50 macrophages was calculated. This methodology is a refinement of previously used techniques [12], adapted from earlier Scion Image methodology [10].\n Feasibility comparison of methods The time taken for image capture and analysis of the final 11 samples was recorded, along with an inventory of the equipment and expertise required for each method.\n Statistical analysis Data were analysed using SPSS v21. AMPL values given by each method were compared using a Spearman rank-order correlation test. Participant characteristics were compared using Chi-square and Mann–Whitney U tests. Time taken to conduct the analyses was compared by Wilcoxon signed-rank test. A p value of <0.05 was considered statistically significant.\n Ethical approval The East Midlands – Derby 1 Research Ethics Committee approved this work (REC reference: 11/EM/0269). Written informed consent was obtained from all participants.", "Respiratory patients were recruited via outpatient respiratory clinics at Aintree University Hospital, Liverpool, UK. All consenting adults over 18 years old with asthma or bronchiectasis who did not meet safety exclusion criteria (see Table 1) were recruited.\nTable 1 The exclusion criteria used for safety reasons prior to performing sputum induction\nSafety checklist – exclusion criteria for sputum induction:\n• FEV1 < 60 % / < 1.0 L (post-Salbutamol 200 micrograms)\n• SaO2 < 90 % on room air\n• Unable to take salbutamol\n• Extreme shortness of breath\n• Acute Respiratory Distress Syndrome\n• Known haemoptysis\n• Known arrhythmias/angina\n• Known thoracic, abdominal or cerebral aneurysms\n• Recent pneumothorax\n• Pulmonary emboli\n• Fractured ribs/recent chest trauma\n• Recent eye surgery\n• Known pleural effusions\n• Pulmonary oedema\n• Thrombocytopenia (Platelets < 25)", "Participants underwent sputum induction on one occasion each in August-October 2013. Pre-procedure Salbutamol (200 micrograms) was given to prevent bronchoconstriction. Baseline spirometry was performed to European Respiratory Society and American Thoracic Society standards [18] using a MicroMedical MicroLab Mk8 Spirometer (Cardinal Health UK). Three × 5 ml of hypertonic saline (3 %, 4 % and 5 % saline given in stepwise fashion, lasting up to 5 min per nebulisation) were nebulised via an Omron NE-U17 Ultrasonic Nebuliser (Omron Healthcare Europe). 
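The Spearman rank-order correlation used in the statistical analysis above can be sketched in a few lines of plain Python. This is an illustrative sketch only: the function names and all data values below are invented for demonstration and are not the study data or the SPSS implementation.

```python
# Illustrative sketch: Spearman rank-order correlation, as used to compare
# AMPL values given by the two methods. All data values are hypothetical.

def ranks(values):
    # Assign ranks (1 = smallest); assumes no tied values for simplicity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, idx in enumerate(order, start=1):
        r[idx] = position
    return r

def spearman_rho(x, y):
    # Classic formula rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), valid without ties.
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical paired AMPL estimates (ImageJ in um^2, Image SXM in % area)
imagej_ampl = [0.38, 0.17, 0.72, 0.25, 0.51, 0.44, 0.12, 0.66, 0.30]
sxm_ampl = [4.0, 2.3, 6.0, 3.1, 5.2, 2.8, 1.9, 4.7, 3.5]
print(round(spearman_rho(imagej_ampl, sxm_ampl), 2))
```

Because Spearman's test works on ranks, it compares the two methods' orderings of participants even though they report AMPL in different units (μm² vs % of cellular area).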
Lung function was assessed at intervals to detect bronchoconstriction, according to pre-specified safety criteria.", "Sputum samples were kept on ice and sputum plugs were manually extracted and treated with 0.1 % Sputolysin (Merck Chemical Ltd, UK) for fifteen minutes to remove mucus. Phosphate Buffered Solution (Sigma-Aldrich, UK) was added and cells were filtered and centrifuged at 2200 rpm for ten minutes at 4 °C (Heraeus Megafuge 1.0R, ThermoFisher Scientific, USA). The pellet was re-suspended at 0.5 × 10⁶ cells per ml and two × 100 μl aliquots of suspension were cytocentrifuged (Shandon Cytospin 4, ThermoFisher Scientific) onto microscope slides at 450 rpm for 6 min to produce three cytospins per participant. Slides were fixed in methanol and stored until staining. One slide per participant was stained using a Hemacolor Staining kit (Merck-Millipore, Germany) for ImageJ analysis. One slide was stained using Hemacolor Solution 2 (eosin) only (dipped for 9 s), so that only the cytoplasm was stained (a method previously developed for optimising Image SXM analysis [16]). One slide per participant was stained with Diff-Quik (Dade Behring, Deerfield, IL, USA) for differential cell counts: 400 cells were counted per participant, using a Leica DM IL light microscope at ×40 magnification. Cytospins with a leukocyte/squamous epithelial cell ratio of ≤5 were deemed inadequate and therefore excluded from the analysis [19].", "Cytospin slides for ImageJ analysis were photographed at ×60 magnification using a Nikon Eclipse 80i digital microscope (Nikon Instruments Europe BV) with Nikon NIS-Elements BR software; 50 macrophages were captured per participant where possible (where fewer than 50 macrophages were present on the cytospin, a reduced number was used). 
Slides for Image SXM analysis were imaged at ×40 magnification using a Leica DM IL light microscope (Leica Microsystems UK Ltd) with a Nikon E990 digital camera (Nikon Inc, USA); where possible, 50 microscope fields (each containing at least one macrophage) were captured per participant, and all macrophages captured in a field were analysed. Where fewer than 50 images from the whole cytospin contained a macrophage, this reduced number of macrophage-containing images, and all macrophages within those images, were included in the analysis. Images for both methods were taken systematically using a predefined method to prevent duplication or biased image selection, as shown in Fig. 1.\nFig. 1 Systematic digital image acquisition. The pathway used to acquire digital images of cytospin ‘spots’ is shown", "Images were edited using Adobe Photoshop Elements v6.0 to show only macrophages, to prevent incorrect calculation of cellular and PM areas (Fig. 2a and b). Image SXM (version 1.92, April 2011) variable settings were optimised for cytoplasm (upper and lower size limits and density threshold) and PM (density threshold) detection by adjusting settings for a range of images from different participants. Values which consistently maximised identification of PM without increasing false-positive identification were used. These settings were then applied to the analysis of all images from all participants. Fifty images per participant were analysed to generate output images (Fig. 2c), and the arithmetic mean percentage of total cellular area occupied by PM was calculated by Image SXM. The blink comparison function, which provides an overlay of images, was used to compare original and output images; subjective discordance in total cellular or PM area led to removal of that image from the analysis. 
Participants with fewer than ten images remaining were excluded from the analysis.\nFig. 2 Image SXM and ImageJ methodology. Image SXM (a, b & c): digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macrophage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (d) turned red (e). The particulate matter within the cytoplasm was then selected by freehand (f)", "A stage micrometer (Agar Scientific, UK) was used to calibrate image size. Colour images were converted to 32-bit black-and-white images using ImageJ (version 1.46r). The “threshold” settings were adjusted to obtain the best fit of red over black areas [6] (Fig. 2d and e). The freehand select function was used to select PM within the cell (Fig. 2f) and to exclude red areas other than PM, such as the nucleus. ImageJ calculated the area of PM within the selection. Thresholds were adjusted to obtain the best fit for different particle aggregates in each macrophage. The median area from 50 macrophages was calculated. 
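At its core, the threshold-and-measure step described above amounts to counting pixels at or below an intensity cut-off inside a selected region, then converting the pixel count to an area using a calibration factor (here, one that would come from a stage micrometer). The following is a rough, self-contained sketch of that idea, not the actual ImageJ implementation; the calibration constant, image, region and threshold are all hypothetical.

```python
# Illustrative sketch (not the ImageJ implementation): threshold-based
# measurement of particulate-matter area within a freehand-style selection.

UM2_PER_PIXEL = 0.01  # hypothetical calibration from a stage micrometer

def pm_area_um2(gray, region, threshold):
    """Count pixels at or below `threshold` inside `region` (a set of
    (row, col) coordinates) and convert the count to an area in um^2."""
    dark = sum(1 for (r, c) in region if gray[r][c] <= threshold)
    return dark * UM2_PER_PIXEL

# 4x4 toy grayscale image (low values = dark particulate, 255 = background)
gray = [
    [255, 255, 255, 255],
    [255,  10,  20, 255],
    [255,  15, 255, 255],
    [255, 255, 255, 255],
]
# Selection covering the dark aggregate; 3 of these 4 pixels are below threshold
region = {(1, 1), (1, 2), (2, 1), (2, 2)}
print(pm_area_um2(gray, region, threshold=50))
```

Restricting the count to the selected region mirrors the freehand-select step, which excludes dark areas that are not particulate matter, such as the nucleus.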
This methodology is a refinement of previously used techniques [12], adapted from earlier Scion Image methodology [10].", "The time taken for image capture and analysis of the final 11 samples was recorded, along with an inventory of the equipment and expertise required for each method.", "Data were analysed using SPSS v21. AMPL values given by each method were compared using a Spearman rank-order correlation test. Participant characteristics were compared using Chi-square and Mann–Whitney U tests. Time taken to conduct the analyses was compared by Wilcoxon signed-rank test. A p value of <0.05 was considered statistically significant.", "The East Midlands – Derby 1 Research Ethics Committee approved this work (REC reference: 11/EM/0269). Written informed consent was obtained from all participants.", " Sputum induction Twenty-one participants were recruited and attended for sputum induction; one participant was excluded due to baseline hypoxia (28 other recruited participants failed to attend). Of 20 participants undergoing sputum induction, samples were successfully obtained from 19 (Fig. 3). No adverse events occurred. Cytospins from six (32 %) participants were inadequate due to their leukocyte/squamous epithelial cell ratio. The characteristics of the 13 participants who provided an adequate sample are shown in Table 2. There was no significant difference in characteristics between those who provided an adequate sample and those who did not (data not shown). The differential cell counts are shown in Table 3.\nFig. 3 Participants and samples. 
The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysis\nTable 2 Characteristics of 13 participants\nGender: Male, n (%): 9 (69); Female, n (%): 4 (31)\nAge: years, median (IQR): 57 (39–67)\nRespiratory diagnosis: Asthma, n (%): 8 (62); Bronchiectasis, n (%): 2 (15); Both, n (%): 3 (23)\nSmoking status: Never smoked (%): 8 (62); Ex-smoker (%): 5 (38)\nSpirometry: FEV1, median (IQR), litres: 1.80 (1.47–2.26); FEV1 % predicted, median (IQR): 73.5 (60.1–77.6); FVC, median (IQR), litres: 2.8 (2.47–3.82); FVC % predicted, median (IQR): 91.2 (87.6–109.0)\nIQR: Interquartile Range\nTable 3 Differential cell counts (cell count %, median (IQR) of 13 participants)\nNeutrophil: 72.5 (51.1–90.1)\nMacrophage: 10.0 (4.1–25.8)\nEosinophil: 1.6 (1.0–8.5)\nLymphocyte: 2.3 (1.2–3.5)\nMetachromatic: 0.0 (0.0–0.0)\nBronchial epithelial: 2.8 (1.1–12.6)\nSquamous epithelial: 2.3 (0.8–6.5)\n Feasibility of methodology Median time for analysis of each participant was significantly lower for ImageJ (26 min, interquartile range (IQR): 21–30) than for Image SXM (54 min, IQR: 43–68), p = 0.008. Including the time taken for image acquisition, the median time was not significantly different between ImageJ (51 min, IQR: 46–65) and Image SXM (66 min, IQR: 59–84), p = 0.424. For the Image SXM method, 58 % of the ‘analysis time’ was spent editing the images prior to analysis. 
A comparison of the resources required for each method is shown in Table 4.\nTable 4 Comparison of resource requirements for methods (Image SXM vs ImageJ)\n• Equipment required for sputum induction and sample processing: identical specialist equipment and facilities required regardless of analysis method\n• Image acquisition equipment: Image SXM, microscope with ×40 objective and digital image capturing capabilities; ImageJ, microscope with ×100a objective and digital image capturing capabilities\n• Analysis software availability: both in the public domain, available free of charge\n• Additional image editing software: Image SXM, purchase required; ImageJ, not required\n• Operating system for analysis software: Image SXM, compatible with Mac operating systems; ImageJ, compatible with Mac and Windows operating systems\n• File type availability: Image SXM, TIFF; ImageJ, JPEG, TIFF, GIF, BMP, DICOM, FITS and ‘raw’\n• Time required for sputum induction and processing: approximately 90–120 min per participant\n• Time required for image acquisition (median): Image SXM, 15 min; ImageJ, 27 min\n• Time required for image analysis (including image editing if required) (median): Image SXM, 54 min; ImageJ, 26 min\naAlthough a ×100 objective is recommended for ImageJ methodology, a ×60 objective was used in this study due to resource limitations\nA mean of 49 macrophages per participant were included in the ImageJ analysis (total 632 macrophages). A mean of 43 images per participant were captured for Image SXM analysis (total 558 images). During the Image SXM process, 72 % of images were removed following the initial analysis as they were deemed inaccurate (either over- or under-estimating AMPL) using the blink comparison function (Fig. 4), resulting in a further four participants being excluded from the study. The analysis was repeated with only the remaining 143 images (median 14 images (IQR 11.5–20) per participant). 
If only these nine participants are included, the median time taken increased to 67 min (IQR 47–72) for Image SXM analysis and 83 min (IQR 64–87) including image acquisition time.\nFig. 4 An example of inaccurate Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area [27] of the airway macrophage on the left has been overestimated, and the particulate matter (red) of the airway macrophage on the right has been overestimated\n Airway macrophage particulate load Considerable morphological heterogeneity was seen between AM, both within samples and between participants, with wide variations in AMPL (Fig. 5). The cytoplasms of the AM in this study were noted to be granular and heterogeneous (Fig. 5), unlike the homogeneous appearance of cytoplasm seen in our previous experience of macrophages obtained by BAL [11].\nFig. 5 Airway macrophage heterogeneity. The morphology of the airway macrophages (shown with red arrows) varied within the same sample (a) and between different participant samples (a & b). The particulate load also varied between macrophages in the same sample (a)\nImageJ analysis of 13 cytospins revealed a median AMPL of 0.38 μm² (IQR 0.17–0.72 μm²). Image SXM analysis of 9 cytospins calculated a median total cellular area occupied by PM of 4.0 % (IQR 2.3–6.0 %). 
There was no statistically significant correlation between results obtained using the two methods (correlation coefficient = −0.42, p = 0.256).", "Twenty-one participants were recruited and attended for sputum induction; one participant was excluded due to baseline hypoxia (28 other recruited participants failed to attend). Of 20 participants undergoing sputum induction, samples were successfully obtained from 19 (Fig. 3). No adverse events occurred. Cytospins from six (32 %) participants were inadequate due to their leukocyte/squamous epithelial cell ratio. The characteristics of the 13 participants who provided an adequate sample are shown in Table 2. 
There was no significant difference in characteristics between those who provided an adequate sample and those who did not (data not shown). The differential cell counts are shown in Table 3.\nFig. 3 Participants and samples. The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysis\nTable 2 Characteristics of 13 participants\nGender: Male, n (%): 9 (69); Female, n (%): 4 (31)\nAge: years, median (IQR): 57 (39–67)\nRespiratory diagnosis: Asthma, n (%): 8 (62); Bronchiectasis, n (%): 2 (15); Both, n (%): 3 (23)\nSmoking status: Never smoked (%): 8 (62); Ex-smoker (%): 5 (38)\nSpirometry: FEV1, median (IQR), litres: 1.80 (1.47–2.26); FEV1 % predicted, median (IQR): 73.5 (60.1–77.6); FVC, median (IQR), litres: 2.8 (2.47–3.82); FVC % predicted, median (IQR): 91.2 (87.6–109.0)\nIQR: Interquartile Range\nTable 3 Differential cell counts (cell count %, median (IQR) of 13 participants)\nNeutrophil: 72.5 (51.1–90.1)\nMacrophage: 10.0 (4.1–25.8)\nEosinophil: 1.6 (1.0–8.5)\nLymphocyte: 2.3 (1.2–3.5)\nMetachromatic: 0.0 (0.0–0.0)\nBronchial epithelial: 2.8 (1.1–12.6)\nSquamous epithelial: 2.3 (0.8–6.5)", "Median time for analysis of each participant was significantly lower for ImageJ (26 min, interquartile range (IQR): 21–30) than for Image SXM (54 min, IQR: 43–68), p = 0.008. Including the time taken for image acquisition, the median time was not significantly different between ImageJ (51 min, IQR: 46–65) and Image SXM (66 min, IQR: 59–84), p = 0.424. For the Image SXM method, 58 % of the ‘analysis time’ was spent editing the images prior to analysis. 
A comparison of the resources required for each method is shown in Table 4.\nTable 4 Comparison of resource requirements for methods (Image SXM vs ImageJ)\n• Equipment required for sputum induction and sample processing: identical specialist equipment and facilities required regardless of analysis method\n• Image acquisition equipment: Image SXM, microscope with ×40 objective and digital image capturing capabilities; ImageJ, microscope with ×100a objective and digital image capturing capabilities\n• Analysis software availability: both in the public domain, available free of charge\n• Additional image editing software: Image SXM, purchase required; ImageJ, not required\n• Operating system for analysis software: Image SXM, compatible with Mac operating systems; ImageJ, compatible with Mac and Windows operating systems\n• File type availability: Image SXM, TIFF; ImageJ, JPEG, TIFF, GIF, BMP, DICOM, FITS and ‘raw’\n• Time required for sputum induction and processing: approximately 90–120 min per participant\n• Time required for image acquisition (median): Image SXM, 15 min; ImageJ, 27 min\n• Time required for image analysis (including image editing if required) (median): Image SXM, 54 min; ImageJ, 26 min\naAlthough a ×100 objective is recommended for ImageJ methodology, a ×60 objective was used in this study due to resource limitations\nA mean of 49 macrophages per participant were included in the ImageJ analysis (total 632 macrophages). A mean of 43 images per participant were captured for Image SXM analysis (total 558 images). During the Image SXM process, 72 % of images were removed following the initial analysis as they were deemed inaccurate (either over- or under-estimating AMPL) using the blink comparison function (Fig. 4), resulting in a further four participants being excluded from the study. The analysis was repeated with only the remaining 143 images (median 14 images (IQR 11.5–20) per participant). 
If only these nine participants are included, the median time taken increased to 67 min (IQR 47–72) for Image SXM analysis and 83 min (IQR 64–87) including image acquisition time.\nFig. 4 An example of inaccurate Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area [27] of the airway macrophage on the left has been overestimated, and the particulate matter (red) of the airway macrophage on the right has been overestimated", "Considerable morphological heterogeneity was seen between AM, both within samples and between participants, with wide variations in AMPL (Fig. 5). The cytoplasms of the AM in this study were noted to be granular and heterogeneous (Fig. 5), unlike the homogeneous appearance of cytoplasm seen in our previous experience of macrophages obtained by BAL [11].\nFig. 5 Airway macrophage heterogeneity. The morphology of the airway macrophages (shown with red arrows) varied within the same sample (a) and between different participant samples (a & b). The particulate load also varied between macrophages in the same sample (a)\nImageJ analysis of 13 cytospins revealed a median AMPL of 0.38 μm² (IQR 0.17–0.72 μm²). Image SXM analysis of 9 cytospins calculated a median total cellular area occupied by PM of 4.0 % (IQR 2.3–6.0 %). 
There was no statistically significant correlation between results obtained using the two methods (correlation coefficient = −0.42, p = 0.256).", "A biomarker which can be used in the field to assess an individual’s air pollution exposure will be a valuable tool for research into the health effects of air pollution and the benefits of interventions. In our pilot work for the Cooking and Pneumonia Study (www.capstudy.org) we identified the need for a biomarker representative of household air pollution exposure [8]. This study set out to explore the feasibility of using IS samples for assessment of AMPL as a potential biomarker.\nAlthough the procedure was well tolerated by all participants who underwent IS, there was a low appointment attendance rate despite multiple appointments being offered at participants’ convenience. This may be due to participants’ availability, but may also reflect an unwillingness to undergo the procedure, suggesting that IS may not be acceptable to the wider community. A third of participants were unable to produce adequate samples. These factors resulted in a small sample size, a major limitation of this study, but also reflect a potential limitation in the feasibility of using IS as a biomarker.\nThe time taken for the Image SXM method was substantially lengthened by the need to manually edit images prior to analysis to improve accuracy. This editing is not required when using this software with BAL samples, which tend to have few other cells or debris.\nImageJ was the quicker method for image acquisition and analysis (median 51 min). The image capturing software used in this study for the ImageJ method delayed this process by approximately 15 min, but was not used for the Image SXM method, a limitation of this study due to the lack of available equipment. 
However, when combined with the time taken for sputum induction and processing (usually >90 min), this process is unlikely to be feasible for widespread use in large studies given the total time required (>2 h per participant).\nBoth methods require considerable expenditure for clinical and laboratory equipment. Previously published studies using the ImageJ method report using a microscope with a ×100 objective, while the Image SXM method requires a ×40 objective, both with digital image acquisition capabilities. In this study a ×60 objective was used for the ImageJ method, as greater magnification was not available with digital image capturing capabilities. Although this may have theoretically reduced the accuracy of the ImageJ methodology in our study, we experienced no difficulties visualising particulate matter within the macrophages and still found ImageJ to be the more reliable of the two methods for detecting PM. As we do not comment on the accuracy of the ImageJ method in comparison to a gold standard assessment of exposure, this limitation of our study does not have a major impact on our findings. However, it does emphasise the need for specialised equipment, which has implications for feasibility.\nBoth software packages are available free of charge, but ImageJ is more widely compatible. Image editing software must also be purchased if using Image SXM with IS. The facilities and equipment required for inducing and processing sputum are likely to preclude the use of this technique in rural or resource-poor settings.\nA further limitation of this study is that image capture of macrophages – which can be difficult to differentiate from other cell types (particularly on cytospins stained only with eosin for Image SXM analysis) - was only performed by one reader, with support from a senior cell biologist, without a priori criteria for inclusion. This may have resulted in incorrect identification of some cells. 
Independent image capture and slide analysis by two individuals with a high level of expertise may improve accuracy of macrophage identification, although this represents an additional challenge for implementing these methods in resource limited settings.\nThe ImageJ method requires higher levels of operator training for image analysis than Image SXM, due to the subjective nature of the analysis process. Further work to assess intra- and inter-observer reliability using the ImageJ method is required before this is widely used – this was not evaluated as part of this study, in which only one unblinded reader performed the analysis.\nAlthough previously successfully used with BAL samples, Image SXM appears not to perform as well with IS macrophages. This is possibly due to the heterogeneous and granular nature of these macrophages making it difficult for the software to distinguish between cytoplasm and PM, as has been observed in previous studies [14]. We postulate that the difference in appearance compared to BAL macrophages is either due to these being a different population of macrophages, taken from a more proximal part of the airways, or due to cell stress or apoptosis resulting from the IS process, although we did not measure cell viability in this study. Steps were taken to ensure threshold settings were optimised for this batch of images, but due to the heterogeneity seen these settings were not always optimal for each individual image. Image SXM does include an option to adjust the threshold settings manually for different images. This might improve accuracy but would make the process more time-consuming, and would not account for heterogeneity of macrophages within the same image (Fig. 5). Optimising the threshold settings for each image might reduce the number of images discarded from Image SXM following visual checking for accuracy (Fig. 4). 
This might increase the sample size and therefore the precision of estimates.\nThe lack of correlation observed in AMPL results between the two methods is unsurprising given some of the difficulties outlined above. To determine the accuracy of either method, comparison with an external comparator is required, such as an individual’s PM exposure data. This, and assessment of intra- and inter-observer reliability, were beyond the scope of this study. An association between AMPL calculated and the number of peak exposures to PM has been demonstrated in London cyclists [20], but further exploration of this relationship in other settings is required. The results obtained by the ImageJ method in this study are comparable to that of healthy British children (0.41 μm2 PM per macrophage) [13]. Other studies using ImageJ methodology have suggested that AMPL does correlate with exposure [10, 13].\nGiven the fundamental role of alveolar macrophages in the defence against inhaled pollutants, further exploration of the relationship between AMPL and pathophysiology is an intuitive way to improve understanding of the health impacts of air pollution. Optimising digital analysis software or using alternative methods for quantifying AMPL, such as spectrophotometry, may assist with this, but is unlikely to provide a useful field biomarker of exposure.", "Direct measurement of air pollution exposure is costly, logistically complicated and intrusive to the individual. Studies investigating the health impacts of air pollution exposure and the benefits of interventions are limited by the challenges associated with accurately quantifying exposure [9]. A biomarker of air pollution exposure will be a useful tool to facilitate research addressing the high burden of disease associated with air pollution. This small study has not established whether AMPL is an accurate biomarker of pollution exposure, but has compared the feasibility of two previously used methods. 
The heterogeneity of IS samples complicates digital image analysis methods, and the resource requirements for assessing AMPL from IS are considerable, making it unlikely that this biomarker of exposure will be appropriate for widespread use as a tool for large-scale intervention studies. Priority should be given to developing a point-of-care biomarker of exposure, without the need for specialist training and equipment, to facilitate the large public health intervention trials that are urgently needed. Potential biomarkers requiring further exploration include direct measures of combustion products, such as exhaled carbon monoxide, exhaled carboxyhaemoglobin, exhaled volatile organic compounds or levoglucosan and methoxyphenols in urine [8, 9, 21–23]. Indirect measures of exposure in sputum, blood and urine, including markers of oxidative stress and endothelial or epithelial damage (such as 8-isoprostane, malondialdehyde, nitric oxide, or surfactant-associated protein D), may also be promising biomarkers [9, 21, 24–26]." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusion" ]
[ "Air pollution", "Particulate matter", "Biomarker", "Induced sputum", "Airway macrophages" ]
Background: Indoor and outdoor air pollution are the 4th and 9th leading risk factors, respectively, for disability-adjusted life years worldwide [1], and exposure is associated with increased risk of pneumonia in children, respiratory cancers, and development of Chronic Obstructive Pulmonary Disease [2–5]. Airborne particulate matter [6] with an aerodynamic diameter of <2.5 μm (PM2.5) is considered particularly harmful as the small size allows inhalation deep into the lungs [7]. Global initiatives, such as the Global Alliance for Clean Cookstoves (www.cleancookstoves.org), are tackling the major health burden caused by airborne PM. Major randomised trials of the health effects of clean burning cookstoves are in progress (e.g. www.capstudy.org and http://www.kintampo-hrc.org/projects/graphs.asp#.VMtKusaI0Rk). All share the challenge that quantifying an individual’s exposure to pollution is complex and expensive, and there is no gold standard method [8]. Development of a biomarker that acts as a surrogate marker of exposure could obviate the need for costly and intensive exposure monitoring. Ideally a biomarker should be: closely associated with exposure, adequately sensitive and specific, consistent across heterogeneous populations, cost efficient, acceptable to the user population, and feasible for use in the field (including low-resource settings) [9]. The phagocytic action of airway macrophages (AM) may provide the basis for a biomarker of PM exposure. The particulate load within AM is: increased in individuals who report exposure to household air pollution compared to those who do not [10]; statistically different between individuals who use different types of domestic fuel [11]; and associated with exposure to outdoor PM in commuters who cycle in London [12]. Correlation between AM particulate load (AMPL) and worsening lung function supports a possible pathophysiological role [13]. 
A recent systematic review of studies calculating AMPL concluded that this biomarker is suitable for assessing personal exposure to PM, but that technical improvements are needed before this method is suitable for widespread use [14]. Once cell monolayers (Cytospins™) have been obtained from induced sputum (IS) or bronchoalveolar lavage (BAL) samples, several different digital image analysis software programmes can be used to calculate AMPL. ImageJ software (http://rsbweb.nih.gov/ij/, superseding a similar software, Scion Image) and Image SXM software [15] (http://www.ImageSXM.org.uk) have both been used for this purpose [12, 16, 17]. There is no previously reported objective comparison of their feasibility and it is unknown whether these two methods provide comparable results. Unlike ImageJ, Image SXM has only been used with samples obtained via BAL, a technique that is not suitable for widespread use in the field due to the expertise, risks and financial costs involved. This study therefore aimed to provide an objective assessment of the relative feasibilities – with regard to resources, expertise and time required - of ImageJ and Image SXM for use with IS samples, and their comparative accuracy. Methods: Participant involvement Respiratory patients were recruited via outpatient respiratory clinics at Aintree University Hospital, Liverpool, UK. 
All consenting adults over 18 years old with asthma or bronchiectasis, who did not meet safety exclusion criteria (see Table 1), were recruited.Table 1The exclusion criteria used for safety reasons prior to performing sputum inductionSafety checklist – exclusion criteria for sputum induction• FEV1 < 60 %/< 1.0 L (post – Salbutamol 200 micrograms)• SaO2 < 90 % on room air• Unable to take salbutamol• Extreme shortness of breath• Acute Respiratory Distress Syndrome• Known haemoptysis• Known arrhythmias/angina• Known thoracic, abdominal or cerebral aneurysms• Recent pneumothorax• Pulmonary emboli• Fractured ribs/recent chest trauma• Recent eye surgery• Known pleural effusions• Pulmonary oedema• Thrombocytopenia (Platelets < 25) Sputum induction Participants underwent sputum induction on one occasion each in August-October 2013. Pre-procedure Salbutamol (200 micrograms) was given to prevent bronchoconstriction. Baseline spirometry was performed to European Respiratory Society and American Thoracic Society standards [18] using a MicroMedical MicroLab Mk8 Spirometer (Cardinal Health UK). Three × 5mls of hypertonic saline (3 %, 4 %, 5 % saline given in stepwise fashion, lasting up to 5 min per nebulisation) were nebulised via Omron NE-U17 Ultrasonic Nebuliser (Omron Healthcare Europe). Lung function was assessed at intervals to detect bronchoconstriction, according to pre-specified safety criteria. Sputum processing Sputum samples were kept on ice and sputum plugs were manually extracted and treated with 0.1 % Sputolysin (Merck Chemical Ltd, UK) for fifteen minutes to remove mucus. Phosphate Buffered Solution (Sigma-Aldrich, UK) was added and cells were filtered and centrifuged at 2200 rpm for ten minutes at 4 °C (Heraeus Megafuge 1.0R, ThermoFisher Scientific, USA). The pellet was re-suspended at 0.5×106 cells per ml and two × 100 μl of suspension was cytocentrifuged (Shandon Cytopsin 4, ThermoFisher Scientific) onto microscope slides at 450 rpm for 6 min to produce three cytospins per participant. Slides were fixed in methanol and stored until staining. One slide per participant was stained using Hemacolor Staining kit (Merck-Millipore, Germany) for ImageJ analysis. One slide was stained using Hemacolor Solution 2 (eosin) only (dipped for 9 s), so that only the cytoplasm was stained (a method previously developed for optimising Image SXM analysis [16]). One slide per participant was stained with Diff-Quik (Dade Behring, Deerfield, IL, USA) for differential cell counts: 400 cells were counted per participant, using a Leica DM IL light microscope at ×40 magnification. Cytospins with a leukocyte/squamous epithelial cell ratio of ≤5 were deemed inadequate and therefore excluded from the analysis [19]. 
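The cytospin adequacy rule described above (a leukocyte/squamous epithelial cell ratio of ≤5 was deemed inadequate) is simple enough to state in code. The helper below is a hypothetical illustration of that rule, not part of the study's workflow; the function name and the zero-squamous convention are assumptions for the example.

```python
def cytospin_adequate(leukocyte_count, squamous_count):
    """Apply the study's quality rule: a cytospin with a
    leukocyte/squamous epithelial cell ratio of <= 5 was deemed
    inadequate, so adequacy requires a ratio strictly above 5."""
    if squamous_count == 0:
        # No squamous contamination at all: trivially adequate (assumed convention).
        return True
    return leukocyte_count / squamous_count > 5

print(cytospin_adequate(380, 20))   # ratio 19 -> True (adequate)
print(cytospin_adequate(300, 100))  # ratio 3  -> False (excluded)
```

Applied to a 400-cell differential count, this reproduces the exclusion of the six cytospins with excessive squamous contamination reported in the Results.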
Digital image acquisition Cytospin slides for ImageJ analysis were photographed at ×60 magnification using Nikon Eclipse 80i digital microscope (Nikon Instruments Europe BV) with Nikon NIS-Elements BR software; 50 macrophages were captured per participant where possible (in cases where less than 50 macrophages were present on the cytospin a reduced number was used). Slides for Image SXM analysis were imaged at ×40 magnification using a Leica DM IL light microscope (Leica Microsystems UK Ltd) with a Nikon E990 digital camera (Nikon Inc, USA); where possible 50 microscope fields (with at least one macrophage per field) were captured per participant - all the macrophages captured in a field were analysed. In cases where less than 50 images from the whole cytospin contained a macrophage this reduced number of macrophage-containing images, and all macrophages within those images, were included in the analysis. Images for both methods were taken systematically using a predefined method to prevent duplication or biased image selection, as shown in Fig. 1.Fig. 1Systematic digital image acquisition. The pathway used to acquire digital images of cytospin ‘spots’ is shown Image SXM analysis Images were edited using Adobe Photoshop Elements v6.0 to show only macrophages, to prevent incorrect calculations of cellular and PM areas (Fig. 2a and b). Image SXM (version 1.92, April 2011) variable settings were optimised for cytoplasm (upper and lower size limits and density threshold) and PM (density threshold) detection by adjusting settings for a range of images from different participants. Values which consistently maximised identification of PM without increasing false positive identification were used. These settings were then applied to the analysis of all images from all participants. 50 images per participant were analysed to generate output images (Fig. 2c) and the arithmetic mean percentage of total cellular area occupied by PM was calculated by Image SXM. The blink comparison function, which provides an overlay of images, was used to compare original and output images; subjective discordance between total cellular or PM area led to removal of that image from the analysis. Participants with fewer than ten images remaining were excluded from the analysis.Fig. 2Image SXM and ImageJ methodology. Image SXM (a, b & c): digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macrophage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (d) turned red (e). The particulate matter within the cytoplasm was then selected by freehand (f) ImageJ analysis A stage micrometer (Agar Scientific, UK) was used to calibrate image size. Colour images were converted to 32-bit black and white images using ImageJ (version 1.46r). The “threshold” settings were adjusted to obtain the best fit of red over black areas [6] (Fig. 2d and e). The freehand select function was used to select PM (Fig. 2f) that was within the cell, and to exclude red areas other than PM, such as nucleus. ImageJ calculated the area of PM within the selection. Thresholds were adjusted to obtain the best fit for different particle aggregates in each macrophage. The median area from 50 macrophages was calculated. This methodology is a refinement of previously used techniques [12], adapted from earlier Scion Image methodology [10]. Feasibility comparison of methods The time taken for image capture and analysis of the final 11 samples was recorded, along with an inventory of the required equipment and expertise for each method. Statistical analysis Data was analysed using SPSS v21. AMPL values given by each method were compared using a Spearman Rank Order Correlation test. Participant characteristics were compared using Chi-square and Mann–Whitney U tests. Time taken to conduct the analyses was compared by Wilcoxon Signed Rank test. A p value of <0.05 was considered statistically significant. Ethical approval The East Midlands – Derby 1 Research Ethics Committee approved this work (REC reference: 11/EM/0269). Written informed consent was obtained from all participants. 
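The core of the ImageJ measurement described in the Methods (threshold a greyscale image so dark particulate pixels are selected, then convert the selected pixel count to an area using the stage-micrometer calibration) can be sketched programmatically. This is an illustrative approximation using numpy, not the study's actual workflow: the synthetic image, the threshold value, and the micrometre-per-pixel calibration are all invented for the example.

```python
import numpy as np

def pm_area_um2(gray_image, threshold, um_per_pixel):
    """Estimate particulate-matter area in a macrophage image.

    Pixels darker than `threshold` (0-255 greyscale) are treated as
    carbonaceous particulate matter, mirroring the manual thresholding
    step. The selected pixel count is converted to square micrometres
    using the stage-micrometer calibration (um per pixel).
    """
    pm_pixels = np.count_nonzero(gray_image < threshold)
    return pm_pixels * um_per_pixel ** 2

# Synthetic 10x10 "cell": light-grey background (200) with a 2x2 dark particle (30).
img = np.full((10, 10), 200, dtype=np.uint8)
img[4:6, 4:6] = 30

# Hypothetical calibration of 0.1 um per pixel at high magnification.
area = pm_area_um2(img, threshold=100, um_per_pixel=0.1)
print(area)  # 4 dark pixels x 0.01 um^2 per pixel, i.e. ~0.04 um^2
```

In the study the threshold was re-tuned by eye for each particle aggregate and PM was isolated by freehand selection; a single fixed global threshold, as here, is a deliberate simplification.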
Participant involvement: Respiratory patients were recruited via outpatient respiratory clinics at Aintree University Hospital, Liverpool, UK. All consenting adults over 18 years old with asthma or bronchiectasis, who did not meet safety exclusion criteria (see Table 1), were recruited.Table 1The exclusion criteria used for safety reasons prior to performing sputum inductionSafety checklist – exclusion criteria for sputum induction• FEV1 < 60 %/< 1.0 L (post – Salbutamol 200 micrograms)• SaO2 < 90 % on room air• Unable to take salbutamol• Extreme shortness of breath• Acute Respiratory Distress Syndrome• Known haemoptysis• Known arrhythmias/angina• Known thoracic, abdominal or cerebral aneurysms• Recent pneumothorax• Pulmonary emboli• Fractured ribs/recent chest trauma• Recent eye surgery• Known pleural effusions• Pulmonary oedemaThrombocytopenia (Platelets < 25) The exclusion criteria used for safety reasons prior to performing sputum induction Thrombocytopenia (Platelets < 25) Sputum induction: Participants underwent sputum induction on one occasion each in August-October 2013. Pre-procedure Salbutamol (200 micrograms) was given to prevent bronchoconstriction. Baseline spirometry was performed to European Respiratory Society and American Thoracic Society standards [18] using a MicroMedical MicroLab Mk8 Spirometer (Cardinal Health UK). Three × 5mls of hypertonic saline (3 %, 4 %, 5 % saline given in stepwise fashion, lasting up to 5 min per nebulisation) were nebulised via Omron NE-U17 Ultrasonic Nebuliser (Omron Healthcare Europe). Lung function was assessed at intervals to detect bronchoconstriction, according to pre-specified safety criteria. Sputum processing: Sputum samples were kept on ice and sputum plugs were manually extracted and treated with 0.1 % Sputolysin (Merck Chemical Ltd, UK) for fifteen minutes to remove mucus. 
Phosphate Buffered Solution (Sigma-Aldrich, UK) was added and cells were filtered and centrifuged at 2200 rpm for ten minutes at 4 °C (Heraeus Megafuge 1.0R, ThermoFisher Scientific, USA). The pellet was re-suspended at 0.5×106 cells per ml and two × 100 μl of suspension was cytocentrifuged (Shandon Cytopsin 4, ThermoFisher Scientific) onto microscope slides at 450 rpm for 6 min to produce three cytospins per participant. Slides were fixed in methanol and stored until staining. One slide per participant was stained using Hemacolor Staining kit (Merck-Millipore, Germany) for ImageJ analysis. One slide was stained using Hemacolor Solution 2 (eosin) only (dipped for 9 s), so that only the cytoplasm was stained (a method previously developed for optimising Image SXM analysis [16]). One slide per participant was stained with Diff-Quik (Dade Behring, Deerfield, IL, USA) for differential cell counts : 400 cells were counted per participant, using a Leica DM IL light microscope at ×40 magnification. Cytospins with a leukocyte/squamous epithelial cell ratio of ≤5 were deemed inadequate and therefore excluded from the analysis [19]. Digital image acquisition: Cytospin slides for ImageJ analysis were photographed at ×60 magnification using Nikon Eclipse 80i digital microscope (Nikon Instruments Europe BV) with Nikon NIS-Elements BR software; 50 macrophages were captured per participant where possible (in cases where less than 50 macrophages were present on the cytospin a reduced number was used). Slides for Image SXM analysis were imaged at ×40 magnification using a Leica DM IL light microscope (Leica Microsystems UK Ltd) with a Nikon E990 digital camera (Nikon Inc, USA); where possible 50 microscope fields (with at least one macrophage per field) were captured per participant - all the macrophages captured in a field were analysed. 
In cases where less than 50 images from the whole cytospin contained a macrophage this reduced number of macrophages-containing images, and all macrophages within those images, were included in the analysis. Images for both methods were taken systematically using a predefined method to prevent duplication or biased image selection, as shown in Fig. 1.Fig. 1Systematic digital image acquisition. The pathway used to acquire digital images of cytospin ‘spots’ is shown Systematic digital image acquisition. The pathway used to acquire digital images of cytospin ‘spots’ is shown Image SXM analysis: Images were edited using Adobe Photoshop Elements v6.0 to show only macrophages, to prevent incorrect calculations of cellular and PM areas (Fig. 2a and b). Image SXM (version 1.92, April 2011) variable settings were optimised for cytoplasm (upper and lower size limits and density threshold) and PM (density threshold) detection by adjusting settings for a range of images from different participants. Values which consistently maximised identification of PM without increasing false positive identification were used. These settings were then applied to the analysis of all images from all participants. 50 images per participant were analysed to generate output images (Fig. 2c) and the arithmetic mean percentage of total cellular area occupied by PM was calculated by Image SXM. The blink comparison function, which provides an overlay of images, was used to compare original and output images; subjective discordance between total cellular or PM area led to removal of that image from the analysis. Participants with fewer than ten images remaining were excluded from the analysis.Fig. 2Image SXM and ImageJ methodology. Image SXM (a, b & c); digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). 
ImageJ (d, e & f): for each macropghage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (a) turned red (b). The particulate matter within the cytoplasm was then selected by freehand (c) Image SXM and ImageJ methodology. Image SXM (a, b & c); digital images of the cytospins (a) were manually edited to remove all non-macrophage cells and debris (b). Image SXM then calculated the area of cytoplasm [27] and particulate matter (red), mapped out in the output image (c). ImageJ (d, e & f): for each macropghage, the threshold level was adjusted manually until the black areas of particulate matter seen in the original image (a) turned red (b). The particulate matter within the cytoplasm was then selected by freehand (c) ImageJ analysis: A stage micrometer (Agar Scientific, UK) was used to calibrate image size. Colour images were converted to 32-bit black and white images using ImageJ (version 1.46r). The “threshold” settings were adjusted to obtain the best fit of red over black areas [6] (Fig. 1d and e). The freehand select function was used to select PM (Fig. 2f) that was within the cell, and to exclude red areas other than PM, such as nucleus. ImageJ calculated the area of PM within the selection. Thresholds were adjusted to obtain the best fit for different particle aggregates in each macrophage. The median area from 50 macrophages was calculated. This methodology is a refinement of previously used techniques [12], adapted from earlier Scion Image methodology [10]. Feasibility comparison of methods: The time taken for image capture and analysis of the final 11 samples was recorded, along with an inventory of the required equipment and expertise for each method. Statistical analysis: Data was analysed using SPSS v21. AMPL given by each method were compared using a Spearman Rank Order Correlation test. Participant characteristics were compared using Chi-square and Mann–Whitney U tests. 
Time taken to conduct the analyses was compared by Wilcoxon signed rank test. A p value of <0.05 was considered statistically significant.

Ethical approval: The East Midlands – Derby 1 Research Ethics Committee approved this work (REC reference: 11/EM/0269). Written informed consent was obtained from all participants.

Results:

Sputum induction: 21 participants were recruited and attended for sputum induction; 1 participant was excluded due to baseline hypoxia (28 other recruited participants failed to attend). Of the 20 participants undergoing sputum induction, samples were successfully obtained from 19 (Fig. 3). No adverse events occurred. Cytospins from six (32 %) participants were inadequate due to their leukocyte/squamous epithelial cell ratio. The characteristics of the 13 participants who provided an adequate sample are shown in Table 2; there was no significant difference in characteristics between those who provided an adequate sample and those who did not (data not shown). The differential cell counts are shown in Table 3.

Fig. 3 Participants and samples. The flow chart shows the number of consented and recruited patients, and how many samples were obtained and included in the final analysis.

Table 2 Characteristics of 13 participants (IQR: interquartile range)
Gender: male 9 (69 %); female 4 (31 %)
Age, years, median (IQR): 57 (39–67)
Respiratory diagnosis: asthma 8 (62 %); bronchiectasis 2 (15 %); both 3 (23 %)
Smoking status: never smoked 8 (62 %); ex-smoker 5 (38 %)
Spirometry, median (IQR): FEV1 1.80 L (1.47–2.26); FEV1 % predicted 73.5 (60.1–77.6); FVC 2.8 L (2.47–3.82); FVC % predicted 91.2 (87.6–109.0)

Table 3 Differential cell counts, % of total, median (IQR) of 13 participants
Neutrophil: 72.5 (51.1–90.1)
Macrophage: 10.0 (4.1–25.8)
Eosinophil: 1.6 (1.0–8.5)
Lymphocyte: 2.3 (1.2–3.5)
Metachromatic: 0.0 (0.0–0.0)
Bronchial epithelial: 2.8 (1.1–12.6)
Squamous epithelial: 2.3 (0.8–6.5)
Feasibility of methodology: Median time for analysis per participant was significantly lower for ImageJ (26 min, interquartile range (IQR): 21–30) than for Image SXM (54 min, IQR: 43–68), p = 0.008. Including the time taken for image acquisition, the median time was not significantly different between ImageJ (51 min, IQR: 46–65) and Image SXM (66 min, IQR: 59–84), p = 0.424. For the Image SXM method, 58 % of the analysis time was spent editing the images prior to analysis. A comparison of the resources required for each method is shown in Table 4.

Table 4 Comparison of resource requirements for methods
Equipment for sputum induction and sample processing: identical specialist equipment and facilities required regardless of analysis method
Image acquisition equipment: Image SXM, microscope with x40 objective and digital image capture; ImageJ, microscope with x100 objective (a) and digital image capture
Analysis software availability: both in the public domain, available free of charge
Additional image editing software: Image SXM, purchase required; ImageJ, not required
Operating system for analysis software: Image SXM, Mac only; ImageJ, Mac and Windows
File types supported: Image SXM, TIFF; ImageJ, JPEG, TIFF, GIF, BMP, DICOM, FITS and ‘raw’
Time required for sputum induction and processing: approximately 90–120 min per participant (both methods)
Time required for image acquisition (median): Image SXM, 15 min; ImageJ, 27 min
Time required for image analysis, including image editing if required (median): Image SXM, 54 min; ImageJ, 26 min
(a) Although a x100 objective is recommended for the ImageJ methodology, a x60 objective was used in this study due to resource limitations.

A mean of 49 macrophages per participant was included in the ImageJ analysis (total 632 macrophages). A mean of 43 images per participant was captured for Image SXM analysis (total 558 images). During the Image SXM process, 72 % of images were removed following the initial analysis because they were deemed inaccurate (either over- or under-estimating AMPL) on blink comparison (Fig. 4), resulting in a further four participants being excluded from the study. The analysis was repeated with only the remaining 143 images (median 14 images (IQR 11.5–20) per participant).
If only these nine participants are included, the median analysis time increased to 67 min (IQR 47–72) for Image SXM, and to 83 min (IQR 64–87) including image acquisition time.

Fig. 4 An example of inaccurate Image SXM analysis. Comparing the original image (a) to the output image (b), the total cellular area of the airway macrophage on the left has been overestimated, and the particulate matter (red) of the airway macrophage on the right has been overestimated.
Airway macrophage particulate load: Considerable morphological heterogeneity was seen between AM, both within samples and between participants, with wide variation in AMPL (Fig. 5). The cytoplasm of the AM in this study was noted to be granular and heterogeneous (Fig. 5), unlike the homogeneous appearance of cytoplasm seen in our previous experience of macrophages obtained by BAL [11].

Fig. 5 Airway macrophage heterogeneity. The morphology of the airway macrophages (shown with red arrows) varied within the same sample (a) and between different participant samples (a & b). The particulate load also varied between macrophages in the same sample (a).

ImageJ analysis of 13 cytospins revealed a median AMPL of 0.38 μm² (IQR 0.17–0.72 μm²). Image SXM analysis of 9 cytospins calculated a median total cellular area occupied by PM of 4.0 % (IQR 2.3–6.0 %).
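The per-participant summary statistics quoted here (median and interquartile range) can be reproduced with NumPy. The per-macrophage PM areas below are toy values, not the study data:

```python
import numpy as np

# Toy per-macrophage PM areas (um^2) for one participant.
areas = np.array([0.1, 0.2, 0.3, 0.4, 0.5])

median = np.median(areas)
q1, q3 = np.percentile(areas, [25, 75])  # linear interpolation by default
print(median, q1, q3)  # → 0.3 0.2 0.4
```

Note that different quartile conventions (NumPy's interpolation modes, SPSS's defaults) can give slightly different IQRs on small samples such as these.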
There was no statistically significant correlation between the results obtained using the two methods (correlation coefficient = −0.42, p = 0.256).
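For reference, a Spearman rank order correlation of this kind can be computed with SciPy. The paired values below are illustrative only, not the study's per-participant results:

```python
from scipy.stats import spearmanr

# Illustrative paired AMPL estimates for nine participants:
# ImageJ (median PM area, um^2) vs Image SXM (% cellular area as PM).
imagej = [0.17, 0.38, 0.72, 0.25, 0.55, 0.40, 0.12, 0.60, 0.33]
sxm = [2.3, 4.0, 6.0, 5.1, 2.8, 3.5, 6.4, 2.1, 4.4]

rho, p = spearmanr(imagej, sxm)
print(f"rho = {rho:.2f}, p = {p:.3f}")  # rho ≈ -0.28 for these toy values
```

Because the test uses ranks rather than raw values, it is suited to comparing the two methods' differently scaled outputs (µm² vs % of cellular area).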
Discussion: A biomarker which can be used in the field to assess an individual’s air pollution exposure would be a valuable tool for research into health effects and the benefits of interventions. In our pilot work for the Cooking and Pneumonia Study (www.capstudy.org) we identified the need for a biomarker representative of household air pollution exposure [8]. This study set out to explore the feasibility of using IS samples for assessment of AMPL as a potential biomarker. Although the procedure was well tolerated by all participants who underwent IS, the appointment attendance rate was low despite multiple appointments being offered at participants’ convenience. This may be due to participants’ availability, but may also reflect an unwillingness to undergo the procedure, suggesting that IS may not be acceptable to the wider community. A third of participants were unable to produce adequate samples. These factors resulted in a small sample size, a major limitation of this study, but also reflect a potential limitation in the feasibility of using IS as a biomarker. The time taken for the Image SXM method was substantially lengthened by the need to manually edit images prior to analysis to improve accuracy. This editing is not required when using this software with BAL samples, which tend to contain few other cells or debris. ImageJ was the quicker method for image acquisition and analysis (median 51 min). The image capturing software used in this study for the ImageJ method delayed this process by approximately 15 min, but was not used for the Image SXM method, a limitation of this study due to the lack of available equipment.
However, when combined with the time taken for sputum induction and processing (usually >90 min), this process is unlikely to be feasible for widespread use in large studies given the total time required (>2 h per participant). Both methods require considerable expenditure on clinical and laboratory equipment. Previously published studies using the ImageJ method report using a microscope with a x100 objective, while the Image SXM method requires a x40 objective, both with digital image acquisition capabilities. In this study a x60 objective was used for the ImageJ method, as greater magnification was not available with digital image capturing capabilities. Although this may theoretically have reduced the accuracy of the ImageJ methodology in our study, we experienced no difficulties visualising particulate matter within the macrophages and still found ImageJ to be the more reliable of the two methods for detecting PM. As we do not comment on the accuracy of the ImageJ method in comparison to a gold standard assessment of exposure, this limitation does not have a major impact on our findings. However, it does emphasise the need for specialised equipment, which has implications for feasibility. Both software packages are available free of charge, but ImageJ is more widely compatible. Image editing software must also be purchased if using Image SXM with IS. The facilities and equipment required for inducing and processing sputum are likely to preclude the use of this technique in rural or resource-poor settings. A further limitation of this study is that image capture of macrophages, which can be difficult to differentiate from other cell types (particularly on cytospins stained only with eosin for Image SXM analysis), was performed by only one reader, with support from a senior cell biologist, without a priori criteria for inclusion. This may have resulted in incorrect identification of some cells.
Independent image capture and slide analysis by two individuals with a high level of expertise may improve the accuracy of macrophage identification, although this represents an additional challenge for implementing these methods in resource-limited settings. The ImageJ method requires a higher level of operator training for image analysis than Image SXM, due to the subjective nature of the analysis process. Further work to assess intra- and inter-observer reliability of the ImageJ method is required before it is widely used; this was not evaluated as part of this study, in which only one unblinded reader performed the analysis. Although previously used successfully with BAL samples, Image SXM appears not to perform as well with IS macrophages. This is possibly because the heterogeneous and granular nature of these macrophages makes it difficult for the software to distinguish between cytoplasm and PM, as has been observed in previous studies [14]. We postulate that the difference in appearance compared to BAL macrophages arises either because these are a different population of macrophages, taken from a more proximal part of the airways, or from cell stress or apoptosis resulting from the IS process, although we did not measure cell viability in this study. Steps were taken to ensure threshold settings were optimised for this batch of images, but given the heterogeneity seen, these settings were not always optimal for each individual image. Image SXM does include an option to adjust the threshold settings manually for different images. This might improve accuracy but would make the process more time-consuming, and would not account for heterogeneity of macrophages within the same image (Fig. 5). Optimising the threshold settings for each image might reduce the number of images discarded from Image SXM following visual checking for accuracy (Fig. 4). This might increase the sample size and therefore the precision of estimates.
The lack of correlation observed in AMPL results between the two methods is unsurprising given some of the difficulties outlined above. To determine the accuracy of either method, comparison with an external comparator, such as an individual’s measured PM exposure, is required. This, and assessment of intra- and inter-observer reliability, were beyond the scope of this study. An association between calculated AMPL and the number of peak exposures to PM has been demonstrated in London cyclists [20], but further exploration of this relationship in other settings is required. The results obtained by the ImageJ method in this study are comparable to those for healthy British children (0.41 μm² PM per macrophage) [13]. Other studies using the ImageJ methodology have suggested that AMPL does correlate with exposure [10, 13]. Given the fundamental role of alveolar macrophages in the defence against inhaled pollutants, further exploration of the relationship between AMPL and pathophysiology is an intuitive way to improve understanding of the health impacts of air pollution. Optimising digital analysis software or using alternative methods for quantifying AMPL, such as spectrophotometry, may assist with this, but is unlikely to provide a useful field biomarker of exposure.

Conclusion: Direct measurement of air pollution exposure is costly, logistically complicated and intrusive to the individual. Studies investigating the health impacts of air pollution exposure and the benefits of interventions are limited by the challenges associated with accurately quantifying exposure [9]. A biomarker of air pollution exposure would be a useful tool to facilitate research addressing the high burden of disease associated with air pollution. This small study has not established whether AMPL is an accurate biomarker of pollution exposure, but it has compared the feasibility of two previously used methods.
The heterogeneity of IS samples complicates digital image analysis methods, and the resource requirements for assessing AMPL from IS are considerable, making it unlikely that this biomarker of exposure will be appropriate for widespread use as a tool for large-scale intervention studies. Priority should be given to developing a point-of-care biomarker of exposure, without the need for specialist training and equipment, to facilitate the large public health intervention trials that are urgently needed. Potential biomarkers requiring further exploration include direct measures of combustion products, such as exhaled carbon monoxide, exhaled carboxyhaemoglobin, exhaled volatile organic compounds or levoglucosan and methoxyphenols in urine [8, 9, 21–23]. Indirect measures of exposure in sputum, blood and urine, including markers of oxidative stress and endothelial or epithelial damage (such as 8-isoprostane, malondialdehyde, nitric oxide, or surfactant-associated protein D), may also be promising biomarkers [9, 21, 24–26].
Background: Air pollution is associated with a high burden of morbidity and mortality, but exposure cannot be quantified rapidly or cheaply. The particulate burden of macrophages from induced sputum may provide a biomarker. We compare the feasibility of two methods for digital quantification of airway macrophage particulate load. Methods: Induced sputum samples were processed and analysed using the ImageJ and Image SXM software packages. We compare each package by the resources and time required. Results: 13 adequate samples were obtained from 21 patients. Median particulate load was 0.38 μm² (ImageJ) and 4.0 % of the total cellular area of macrophages (Image SXM), with no correlation between results obtained using the two methods (correlation coefficient = -0.42, p = 0.256). Image SXM took longer than ImageJ (median 54 vs 26 mins per participant, p = 0.008) and was less accurate based on visual assessment of the output images. The ImageJ method is subjective and requires well-trained staff. Conclusions: Induced sputum has limited application as a screening tool due to the resources required. Limitations were found in both methods compared here: the heterogeneity of induced sputum appearances makes automated image analysis challenging. Further work should refine methodologies and assess inter- and intra-observer reliability if these methods are to be developed for investigating the relationship of particulate load and inflammatory response in the macrophage.
Background: Indoor and outdoor air pollution are the 4th and 9th leading risk factors, respectively, for disability-adjusted life years worldwide [1], and exposure is associated with increased risk of pneumonia in children, respiratory cancers, and the development of Chronic Obstructive Pulmonary Disease [2–5]. Airborne particulate matter [6] with an aerodynamic diameter of <2.5 μm (PM2.5) is considered particularly harmful, as its small size allows inhalation deep into the lungs [7]. Global initiatives, such as the Global Alliance for Clean Cookstoves (www.cleancookstoves.org), are tackling the major health burden caused by airborne PM. Major randomised trials of the health effects of clean-burning cookstoves are in progress (e.g. www.capstudy.org and http://www.kintampo-hrc.org/projects/graphs.asp#.VMtKusaI0Rk). All share the challenge that quantifying an individual’s exposure to pollution is complex and expensive, and there is no gold standard method [8]. Development of a biomarker that acts as a surrogate marker of exposure could obviate the need for costly and intensive exposure monitoring. Ideally, a biomarker should be: closely associated with exposure, adequately sensitive and specific, consistent across heterogeneous populations, cost efficient, acceptable to the user population, and feasible for use in the field (including low-resource settings) [9]. The phagocytic action of airway macrophages (AM) may provide the basis for a biomarker of PM exposure. The particulate load within AM is: increased in individuals who report exposure to household air pollution compared to those who do not [10]; statistically different between individuals who use different types of domestic fuel [11]; and associated with exposure to outdoor PM in commuters who cycle in London [12]. Correlation between AM particulate load (AMPL) and worsening lung function supports a possible pathophysiological role [13].
A recent systematic review of studies calculating AMPL concluded that this biomarker is suitable for assessing personal exposure to PM, but that technical improvements are needed before the method is suitable for widespread use [14]. Once cell monolayers (Cytospins™) have been obtained from induced sputum (IS) or bronchoalveolar lavage (BAL) samples, several different digital image analysis software programmes can be used to calculate AMPL. ImageJ software (http://rsbweb.nih.gov/ij/, superseding a similar software, Scion Image) and Image SXM software [15] (http://www.ImageSXM.org.uk) have both been used for this purpose [12, 16, 17]. There is no previously reported objective comparison of their feasibility, and it is unknown whether the two methods provide comparable results. Unlike ImageJ, Image SXM has only been used with samples obtained via BAL, a technique that is not suitable for widespread use in the field due to the expertise, risks and financial costs involved. This study therefore aimed to provide an objective assessment of the relative feasibility, with regard to resources, expertise and time required, of ImageJ and Image SXM for use with IS samples, and of their comparative accuracy.
The heterogeneity of IS samples complicates digital image analysis methods, and the resource requirements for assessing AMPL from IS are considerable, making it unlikely that this biomarker of exposure will be appropriate for widespread use as a tool for large-scale intervention studies. Priority should be given to developing a point-of-care biomarker of exposure, without the need for specialist training and equipment, to facilitate the large public health intervention trials that are urgently needed. Potential biomarkers requiring further exploration include direct measures of combustion products, such as exhaled carbon monoxide, exhaled carboxyhaemoglobin, exhaled volatile organic compounds or levoglucosan and methoxyphenols in urine [8, 9, 21–23]. Indirect measures of exposure in sputum, blood and urine, including markers of oxidative stress and endothelial or epithelial damage (such as 8-isoprostane, malondialdehyde, nitric oxide, or surfactant-associated protein D), may also be promising biomarkers [9, 21, 24–26].
Background: Air pollution is associated with a high burden of morbidity and mortality, but exposure cannot be quantified rapidly or cheaply. The particulate burden of macrophages from induced sputum may provide a biomarker. We compare the feasibility of two methods for digital quantification of airway macrophage particulate load. Methods: Induced sputum samples were processed and analysed using the ImageJ and Image SXM software packages. We compare each package by the resources and time required. Results: 13 adequate samples were obtained from 21 patients. Median particulate load was 0.38 μm² (ImageJ) and 4.0 % of the total cellular area of macrophages (Image SXM), with no correlation between results obtained using the two methods (correlation coefficient = -0.42, p = 0.256). Image SXM took longer than ImageJ (median 54 vs 26 mins per participant, p = 0.008) and was less accurate based on visual assessment of the output images. ImageJ's method is subjective and requires well-trained staff. Conclusions: Induced sputum has limited application as a screening tool due to the resources required. Limitations of both methods compared here were found: the heterogeneity of induced sputum appearances makes automated image analysis challenging. Further work should refine methodologies and assess inter- and intra-observer reliability, if these methods are to be developed for investigating the relationship of particulate and inflammatory response in the macrophage.
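Agreement between the two packages rests on a correlation over the 13 paired measurements. The abstract does not state which coefficient was used; for small, skewed samples such as these, a rank correlation like Spearman's rho is a common choice. A stdlib-only sketch (ties not handled, for brevity):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for two equal-length samples without ties."""
    def ranks(v):
        # rank 1 for the smallest value, n for the largest
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2                     # mean of ranks 1..n
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # same for rx and ry (no ties)
    return cov / var                        # Pearson correlation of the ranks
```

A rho near zero over 13 pairs, as reported here (-0.42, p = 0.256), would be consistent with the two packages not measuring the same quantity on the same scale.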
10,231
261
17
[ "image", "analysis", "images", "sxm", "image sxm", "imagej", "iqr", "macrophages", "participants", "participant" ]
[ "test", "test" ]
[CONTENT] Air pollution | Particulate matter | Biomarker | Induced sputum | Airway macrophages [SUMMARY]
[CONTENT] Adult | Aged | Air Pollution | Asthma | Biomarkers | Bronchiectasis | Environmental Exposure | Feasibility Studies | Female | Forced Expiratory Volume | Humans | Image Processing, Computer-Assisted | Inhalation Exposure | Macrophages, Alveolar | Male | Middle Aged | Particulate Matter | Reproducibility of Results | Software | Sputum | Vital Capacity [SUMMARY]
Youth's narratives about family members smoking: parenting the parent- it's not fair!
23140551
Successful cancer prevention policies and programming for youth must be based on a solid understanding of youth's conceptualization of cancer and cancer prevention. Accordingly, a qualitative study examining youth's perspectives of cancer and its prevention was undertaken. Not surprisingly, smoking (i.e., tobacco cigarette smoking) was one of the dominant lines of discourse in the youth's narratives. This paper reports findings of how youth conceptualize smoking with attention to their perspectives on parental and family-related smoking issues and experiences.
BACKGROUND
Seventy-five Canadian youth ranging in age from 11-19 years participated in the study. Six of the 75 youth had a history of smoking and 29 had parents with a history of smoking. Youth were involved in traditional ethnographic methods of interviewing and photovoice. Data analysis involved multiple levels of analysis congruent with ethnography.
METHODS
Youth's perspectives of parents' and other family members' cigarette smoking around them were salient, as represented by the theme: It's not fair. Youth struggled to make sense of why parents would smoke around their children and perceived their smoking as an unjust act. The theme was supported by four subthemes: 1) parenting the parent about the dangers of smoking; 2) the good/bad parent; 3) distancing family relationships; and 4) the prisoner. Instead of being talked to about smoking, it was more common for youth to share stories of talking to their parents about the dangers of smoking. Parents who did not smoke were seen by youth as the good parent, as opposed to the bad parent who smoked. Smoking was an agent that altered relationships with parents and other family members. Youth who lived in homes where they were exposed to cigarette smoke felt like trapped prisoners.
RESULTS
Further research is needed to investigate youth's perceptions about parental cigarette smoking as well as possible linkages between youth's exposure to second-hand smoke in their home environment and emotional and lifestyle-related health difficulties. Results emphasize the relational impact of smoking when developing anti-tobacco and cancer prevention campaigns. Recognizing the potential toll that second-hand smoke can have on youth's emotional well-being, health care professionals are encouraged to give youth positive messages in coping with their parents' smoking behaviour.
CONCLUSIONS
[ "Adolescent", "Air Pollution, Indoor", "Attitude to Health", "Canada", "Child", "Family", "Family Relations", "Female", "Humans", "Male", "Narration", "Parenting", "Parents", "Qualitative Research", "Smoking", "Tobacco Smoke Pollution", "Young Adult" ]
3503740
Background
Lung cancer is considered one of the most preventable types of cancer. While smoking (i.e., tobacco cigarette smoking) increases the risk of many forms of cancer, it is the predominant risk factor for lung cancer, accounting for about 80% of lung cancer cases in men and 50% in women worldwide [1]. Despite recent evidence that lung cancer is a high health risk concern to youth [2], adolescent smoking remains a public health problem. In 2010, 12% of Canadian youth aged 15 to 19 years smoked [3]. Although the number of youth smoking was the lowest level recorded for that age group, the decline in youth smoking rates has slowed [3] and youth smoking rates in some countries are rising [4]. In the United States, the surgeon general reported that more than 600,000 middle school students and 3 million high school students smoke cigarettes [5]. Also troubling is the fact that while smoking rates for youth are down, “of every three young smokers, only one will quit, and one of those remaining smokers will die from tobacco-related causes” ( [5], p. 9). Adult smokers frequently report having started smoking as youth. Among smokers aged 15–17, almost 80% said they had tried smoking by age 14 [6], with females (aged 15–17) having had their first cigarette at 12.9 years and males at age 13.3 years [7]. Global trends also reveal that smoking is increasing in developing countries due to the adoption of Western lifestyle habits. As well, lung cancer rates are increasing in some countries (e.g., China, Korea, and some African countries) and are expected to continue to rise for the next few decades [1]. Smoking has been shown to be a relational and learned behaviour, especially influenced by the family [8]. Regarding youth health behaviours, it has been suggested that the family, especially parents, is one of the dominant arenas in which youth are influenced.
It has been established that youth are most likely to smoke if they have been exposed to, or come from a family in which their parents smoked [9]. Findings from a Canadian Community health survey revealed that in 2011 youth (aged 15–17) residing in households where someone smoked regularly were three times more likely to smoke (22.4% versus 7.0%) [10]. One qualitative study reported that aboriginal adolescent girls often smoke because smoking is normalized and reinforced by families: they see family members smoking in the home, they are not discouraged from smoking, and in some cases, parents facilitate adolescents’ access to cigarettes [11]. Youth smoking behaviour has also been linked to a range of other parental influences. For example, using a nationally United States representative sample, Powell and Chaloupka [12] studied specific parenting behaviours and the degree to which high school students felt their parents’ opinions about smoking influenced their decision to smoke. The authors identified that certain parenting practices (i.e., parental smoking, setting limits on youth’s free time, in-home smoking rules, quality and frequency of parent–child communication) as well as how much youth value their parents’ opinions about smoking, strongly influenced youth deciding to smoke. The evidence indicates that while parents exert a strong influence on youth smoking, they can also exist as a protective factor against youth smoking [8,12-14], especially when non-smoking rules are in place [15] including eliminating smoking in the home [12]. Clark et al. [15] revealed that if parents themselves smoked, banning smoking in the home and speaking against smoking reduced the likelihood that youth will smoke. Similarly, other research on household smoking rules found fewer adolescent smoking behaviours in homes with strict anti-smoking rules [16]. 
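The "three times more likely" figure quoted above is the prevalence ratio of the two reported smoking rates, which a quick check confirms (variable names are mine, for illustration):

```python
# Youth smoking prevalence by household smoking status, from the 2011
# Canadian Community Health Survey figures quoted in the text.
rate_smoking_household = 22.4      # % of youth who smoke, someone smokes at home
rate_nonsmoking_household = 7.0    # % of youth who smoke, no one smokes at home

# Prevalence ratio: how many times more common smoking is among exposed youth.
prevalence_ratio = rate_smoking_household / rate_nonsmoking_household
```

The ratio comes out at roughly 3.2, i.e. "three times more likely" in round terms.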
While there is empirical evidence of how parents are greatly influential regarding their children’s smoking behaviours, we know little about the dynamics involved in the parent-adolescent relationship regarding smoking in the home, or how youth perceive their parents’ approval or disapproval of smoking behaviours. Few studies have addressed how children or youth feel about adult smoking. In addition to findings that parental smoking may be related to the initiation of smoking in youth, there is increasing concern about the health risks of second-hand smoke. Along with anti-smoking legislation in public spaces, attention has been aimed at protecting children from second-hand smoke and recognizing the risks involved in exposure to second-hand smoke in non-public places. Second-hand smoking rates and non-smoking rules, for example, have been examined in family homes and cars [17-20]. In 2006, 22.1% of Canadian youth in grades 5 through 12 were exposed to smoking in their home on a daily or almost daily basis and 28.1% were exposed to smoking while riding in a car on a nearly weekly basis [19]. In the 2008 Canadian survey, the rate of exposure for 12–19 year olds (16.8%) was almost twice as high as the Canadian average [21]. New Zealand national surveys indicate that while exposure to second-hand smoke has decreased since 2000, youth’s perceptions revealed that exposure still remained at 35% (in-home exposure) and 32% (in-vehicle exposure) [22]. The effects of parental smoking and of maintaining a smoke-free environment have also received attention in areas such as prenatal and newborn care [23] and, later, poor respiratory symptoms and outcomes [24]. Studies have also begun emphasizing home smoking bans and perceived dangers of the less visible but harmful exposure of third-hand smoke to children [23,25].
Although the current literature offers insights about the physical effects of second-hand smoke, how second-hand smoke impacts family relationships is unclear, and research on what youth think about adult family members' smoking remains in its infancy. This paper draws on data from a larger qualitative study that sought to extend our limited understanding of youth’s perspectives of cancer and cancer prevention. It aims to explore how youth conceptualize smoking within the context of their own life-situations, with attention to their perspectives on parental and family-related smoking issues and experiences.
Method
Design
The qualitative research design of ethnography was utilized. Ethnography is the study of a specific cultural group of people that provides explanations of people’s thoughts, beliefs, and behaviours in a specific context with the aim of describing aspects of a phenomenon of the group [26-28]. For this study, youth were the group and the phenomenon of interest was youth’s perspectives of cancer and cancer prevention. Key assumptions that were integral to the successful undertaking of the study included viewing youth as self-reflective beings expert on their own experiences and as flexible agents existing within and being touched by multiple social and cultural contexts.

Participants
Youth were recruited in a Western Canadian province from six schools in both a rural and urban setting. Schools mailed invitation letters about the study to families of potential participants who, if interested, could contact the researcher for further information. Purposive sampling techniques were used with the goal to achieve variation among participants based on demographic information (e.g., age, SES, gender, and urban or rural residency) and experiences in relation to cancer (i.e., some youth had family members who had experienced cancer).
Recruitment ended once redundancy or theoretical saturation was achieved, that is, no new themes were apparent. In total, 75 youth ranging in age from 11 to 19 years (M = 14.5, SD = 2.1) participated in the study. The demographic and background characteristics of the participants are presented in Table 1 (Demographic Profile of Youth Participants). Of the 75 youth participating in the study, six (8%) had tried smoking but no longer smoked and four (5%) reported that they currently smoked. Twenty-two youth (29%) had parents who currently smoked, while seven (9%) had parents who had quit smoking. Of the 10 youth (13%) who had a history of smoking, eight (11%) had parents who also had a history of smoking.
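The rounded percentages quoted alongside the participant counts all follow from the sample size of 75; a quick check (the labels are mine, not the paper's):

```python
# Reproducing the rounded percentages reported for the 75 participants.
n = 75
counts = {
    "tried but quit": 6,
    "current smokers": 4,
    "parents currently smoke": 22,
    "parents quit": 7,
    "ever smoked": 10,
    "ever-smokers with smoking parents": 8,
}
percent = {label: round(100 * k / n) for label, k in counts.items()}
# e.g. percent["current smokers"] -> 5, matching "four (5%)" in the text
```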
Data collection
Data collection occurred between December 2007 and October 2010. The longer time period was due to school breaks and use of multiple data collection methods. The aim was to have each youth participate in two in-depth open-ended interviews. For the first interview an interview guide was used which included questions to elicit youth’s thoughts, beliefs, and feelings about cancer and cancer prevention (e.g., When you hear the word “cancer,” what does it make you think of? If developing cancer messages for youth, what would you tell them?). The interview guide had no direct questions about smoking. The open-ended nature of the interview guide provided an opportunity for youth to discuss areas they considered significant and/or areas previously not anticipated by the researchers [29]. After completing the first interview, all youth were asked to take part in the photovoice method. Photovoice is a participatory research method where individuals can address important issues through taking photographs and discussion [30-34]. Photographs, which are often used in ethnography, provided youth with a unique and creative means to reflect on cancer and cancer prevention. Youth were given a disposable camera to take pictures of people (with permission), objects, places or events that made them think of cancer and cancer prevention. Youth had four weeks to take photographs. During the second interview, participants were asked to talk about what the photos meant to them in terms of cancer and cancer prevention. In addition, youth were asked follow-up questions based on their initial interview and to comment on emerging themes. In total, 53 youth (71%) participated in the photovoice method and second interview. The remaining 22 youth (29%) were unable to complete the photovoice method and second interview due to scheduling difficulties. Four focus group interviews with youth who were previously interviewed were conducted in the schools near the end of the study.
The purpose was to identify ideas about cancer and cancer prevention that might emerge from a group context and provide quality controls on data collection [35-37]. Between three and four youth participated in each focus group. In total, fourteen youth attended the focus group discussions. All individual and focus group interviews lasted from 60 to 90 minutes and were digitally recorded and transcribed verbatim. Field notes were recorded to describe the context (e.g., participant’s non-verbal behaviours, communication processes) and the interviewer’s perceptions of the interview. The interviews were conducted by trained research assistants under the supervision of the first author and took place at the participant’s school or home. Finally, ethnographic field research was conducted. Research assistants were trained in fundamental skills and process in doing ethnographic field research including participant observation, field notes, and flexibility and openness [38-40]. On the days that research assistants were at the schools conducting interviews, they observed and recorded interactions and dialogue during informal daily activities (e.g., recess), special events (e.g., cancer prevention fund raising activities), and during the interviews themselves. Ongoing team meetings with research assistants allowed for debriefing and helped increase the sensitivity and richness of fieldnotes by critically highlighting features previously unconsidered by the observers alone. Raw field notes were written up and compared to interview data.
Ethics
Before commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent from all youth participants was also provided. Youth were informed that they could withdraw from the study at any stage. Strategies to secure the participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered). Youth received an honorarium gift card for their participation.
Data analysis
Consistent with qualitative research design, data analysis occurred simultaneously with data collection. A data management system, NVivo version 9.0 [41], helped facilitate organization of substantial transcripts. Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs). Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention, including their meanings attributed to parental and family-related smoking issues and experiences. Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography [28,29,39,40]. First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains. Through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes.
Various strategies were used to enhance the rigor of the research process including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process [42]. The researchers independently identified theme areas then jointly refined and linked analytic themes and categories. Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes.
Results
Smoking was one of the dominant lines of discourse across the sample of youth’s narratives of cancer and cancer prevention. Age, gender, smoking status (i.e., smoker or non-smoker), and place of residency did not influence the story line. Youth were considerably more knowledgeable about the association between smoking and cancer and anti-tobacco messages than about any other cancer-related topic. Several youth photographed their own hand-drawn facsimiles or public signage depicting familiar anti-smoking slogans (Figure 1. Anti-Smoking Sign: represents youth’s desire that adults should stop smoking). "When I walk around there’s like no smoking area signs. I think “Oh this is safe,” like it is good to know that there won’t be like smoke around for me to breathe in. [14-year-old male]" Youth in this study were well informed of how smoking could impact one’s health (e.g., increased the potential for cancer and other chronic illnesses). Of special importance were youth’s perspectives and experiences of parents and other family members smoking around them, as represented by the primary theme, It’s not fair, and four subthemes: parenting the parent about the dangers of smoking; the good/bad parent; distancing family relationships; and the prisoner. Each of these themes is discussed below.

It’s not fair
Overall, youth viewed their parents and other family members smoking around children as something unjust.
The phrase “It’s not fair” was frequently expressed by youth in this study, as illustrated in the following comment: "Because the kids around parents who smoke have to breathe in, they have to breathe in all of it …and like, if parents want to smoke then they should like go outside because it’s not fair to the kids…Probably because they always have to be around it if their mother always smokes every time they’re taking a bath or every time they’re like colouring a picture like every time they do anything, they always have to breathe in the bad- like bad air that’s filled with smoke and stuff like that. And it’s not fair to them. [12-year-old female]" Youth could not make sense of why parents would smoke around their children. They were also unsure how to deal with what they saw as an act of injustice to children. They struggled with how the smoking made them feel, recognizing that their roles as children limited their ability to influence their parents’ behaviour. Their attempts to reconcile their feelings and deal with the unfairness through specific behaviours are apparent in the following four subthemes.
Parenting the parent about the dangers of smoking
Although youth did share stories of parents talking to them about the importance of not smoking, this was not the major family narrative. Instead of being talked to about smoking, it was more common for youth to share stories of themselves talking to, educating, and even preaching to their parents about the dangers of smoking. In short, youth took it upon themselves to parent their parents. "But I’m getting my mom and my step-dad to quit…by talking to them, telling them how it makes all of us kids feel…Yeah, reading like everything the packages say or what the internet says or like what I learn from it and then they’re all just thinking and then they’re saying “well I won’t do that much then. I’ll try to quit.” Now they’re trying to quit but it’s not working for my mom but it is working for my step-dad. [14-year-old female]" In addition to talking to their parents about how they felt about them smoking, some youth would also take action to reduce their parents’ ability to smoke. "Like I have just tried, because I just tell my parents straight up to stop and…I always try to, like, hide their stuff on them but it doesn’t work. They get mad. [13-year-old male]" Talking to their parents about the dangers of smoking arose out of youth’s worries for their parents’ health. The concern that youth had for parents who smoked was in fact one of the reasons youth decided to participate in the study. Youth were looking for answers that could possibly help them get their parents to stop smoking.
Concerns about their parents’ smoking were also strongly depicted in the photographs. One 13-year-old female took a picture of an ashtray full of cigarettes and said, "I see them (ashtrays with cigarette butts) all the time. It would be different if it was like you know once in a while kind of thing I probably wouldn’t mind that much, but my parents smoke in the house and in the car and everywhere so it’s kind of I don’t know I wish they would stop (Figure 2)." Ashtray. Represented youth’s concern for parents. Youth who feared that their family members might die, or who had family members who had died because of smoking-related illnesses (e.g., lung cancer), especially shared their concerns and would try to make their parents feel guilty. "I always tell them things to make them guilty I’m always like, “Do you want to meet my children, do you want to bring me down the aisle?” It’s working actually. I think my dad said my mom was talking about it so…Ever since my mom started again I really felt it hit me cause, like I want my mom to meet my children and she has to see me get married and have kids and I want my mom to be there. [16-year-old female]" Youth also provided many stories of their parents’ attempts to quit smoking. Their stories demonstrated how smoking was embedded within their family’s history and identity, as well as how parents’ smoking played a role in their child’s life. "Well I was really happy when my father quit because he had been smoking since like, I don’t know, before he was even a teenager. He was really young and he said he’d quit sort of when I was born like he’d smoke outside and he’d reduce it, but then when I got old enough he’d continued smoking, and my mom was the same way.
She was also a smoker, but she quit like maybe I was five… [15-year-old female]" "My grandpa passed away a couple of years ago and he died of emphysema and a little bit after he got sick I should say he quit smoking and whenever my aunties and mom smoked, but they don’t anymore, then he would always tell them “Well you should quit because look what’s happening to me.” And that really pushed my mom to quit. [17-year-old female]" While youth were persistent in trying to convince their parents to stop smoking, most youth felt that their parents would not quit despite their efforts. A sense of helplessness was apparent. "Well, my parents like they are smoking and if I tell them not to they’re not going to listen because they’re like “We are your parents, you’re not our parents!” [18-year-old male]" "Like my mom and my dad both smoke and I’d like to tell them to stop and to show them…they don’t really care…I don’t know, like influencing somebody not to smoke is a lot different than I guess them already smoking and quitting. Because like, I mean obviously everything I’ve seen I’m never going to smoke, but I mean it doesn’t really influence my parents. [13-year-old male]"
The good/bad parent
A second sub-theme carried a moral tone in youth’s conversations about how they viewed their parents and other family members who did or did not smoke.
On one hand, youth perceived parents who did not smoke as doing the right thing and as part of their parents’ overall plan of keeping themselves and their children healthy. "Like my parents don’t smoke, they don’t like do drugs or anything like that and they do like everything possible to stay to like be healthy and stuff and to keep me healthy and stuff like that. [12-year-old female]" In contrast, youth were especially disapproving of parents who did smoke. "Like if a mother wanted to have a baby so badly in the first place then she should have known that she’s not supposed to drink or she’s not supposed to smoke or she’s not supposed to do any type of drugs or anything…and they don’t know how bad it actually is for the kids who have to breathe it in. [14-year-old female]" Parents’ second-hand smoking was seen by youth as parents “doing” to their children. “Doing” was viewed in a negative sense, where children were put in a dangerous situation in which they had little choice or control, as depicted by the following quote and photo. "I guess for people with families already, like what it’s doing to their family or that second-hand smoking can be almost as bad as actually smoking like what are you doing to the people around you if you’re choosing to smoke. [17-year-old female] (Figure 3)." Essentially, youth felt that second-hand smoke was more dangerous than first-hand smoke, as one 15-year-old male noted: “This stuff (second-hand smoke) does not get filtered through the back of the butt, it just comes out clear not filtered.” Youth expressed concern for how second-hand smoking impacted them and their siblings. "Both my parents smoke so I don’t like it too much because the smell is kind of it bugs me and you know I don’t know because so many people talk about smoking is related to cancer and that kind of thing so I’m kind of scared. I’m scared for myself and for my parents. But I’m more scared for like my brother than I am for me because I can leave more often than my brother’s allowed to… so. Either like my brother developing the habit or something or like him getting cancer because he’s around it too much. [13-year-old female]" Adult smoking beside a young child. Represented parents “doing” to children; children have no choice. Many youth whose parents did not currently smoke, or had never smoked, were concerned about the effects of second-hand smoke on their friends (whose parents smoked). These youth spoke about their friends’ situations vicariously; their comments in the interviews arose from their extended empathy towards their peers and their peers’ siblings. "Um, I’m pet sitting for a friend while she’s in Florida and her parents are usually always smoking or getting ready to light another cigarette and so I went there it just smells so bad in their house and I feel sorry for her cause she’s got a sister in kindergarten and it’s her in grade nine and her parents are smoking and their dogs in there and cat and it just sticks to the furniture and it just smells smoke. [14-year-old female]" Youth felt that parents who smoked were poor role models and that their behaviour could influence their child’s desire to smoke. "Cause when children see, children do, right? Yeah, so lots of kids when your parents smoke when you’re like in grade two or something and kids get the idea that it’s cool or like whatever and then they want to be like their parents cause they think their parents are awesome. So then they start doing everything their parents do and then they start smoking… [14-year-old female]" "Or sometimes you can get addicted to smoking if both your parents smoke a lot and then like my cousin who smokes uses that excuse cause I’m like “Why do you smoke, that’s disgusting!” And he says “Well both my parents smoke so I started.” I don’t think his parents are a very good influence since they both smoke.
And I’m not sure if they ever told him not to smoke, but maybe they just accepted his smoking not saying it is bad or anything. Like if my kid started smoking I would get mad. [13-year-old male]" Parents who smoked were also considered by youth to be less reliable and credible when talking to their children about the dangers of smoking. "When my parents found out I tried it (smoking) once, they knew that they couldn’t do anything cause since they were smoking too! [13-year-old female]" In general, parents and other adult family members who smoked were viewed by youth as weak in character. "So like one year he (family relative) came out from Ontario with his wife and my grandma and it was pouring rain and he decided to go outside for a smoke, so he really ran across our yard and hid in the shed and smoked. I’m like, it’s pathetic. [14-year-old female]" It was evident from the youth’s narratives that they considered parents’ smoking behaviours unacceptable and felt they should never be tolerated. "And I mean people really need to kind of jump on it and say don’t do this around your kids because it will affect them. Don’t do this around any young child because young children are really open to being affected by something like that and so I think definitely being careful about where you’re smoking or something like that is definitely a really big factor. [16-year-old female]"
Distancing family relationships
A third sub-theme was one of youth separating themselves, both physically and emotionally, from family members who smoked. In addition, youth associated smoking with emotional stress or strain on the family. "Youth: And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone. Interviewer: Okay. So like what problems? Youth: Like family issues. Interviewer: So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things? Youth: Cause I already have enough family problems. I guess I don’t want anymore. [14-year-old female]" The discussions with youth revealed that smoking had, in varying degrees, disrupted family relationships.
Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from family members who smoked. "I live with my grandparents. They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]" At times, being around parents who smoked resulted in feelings of worry and frustration for youth. "Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]" "I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]" Youth were also sensitive to how their negative reactions and behaviours towards family members who smoked could result in hurt feelings. "Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]" Many youth described family tension and conflict because a family member was smoking.
"It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]" "And like my cousin, her parents smoke. They quit. She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. [12-year-old female]" Feelings of anger were also associated with family members who smoked. One youth, who had an extended family member die from lung cancer, was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death. "I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]" Notably, of all the feelings youth expressed, a deep sense of sadness was most apparent in their narratives about families with a history of smoking. The sadness related to the past or anticipated loss of family members. For some youth, the sadness was physically evident as they cried or held back tears during interviews. Some held the view that smoking was a defining feature of their families that would ultimately lead to their destruction (Figure 4). Funeral Sign. Represented smoking as a sign of cancer and death. "Lots of my family smokes and I’m worried about them getting cancer and then not surviving it.
[16-year-old female]" "Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]" A third sub-theme that emerged was one where youth were separating themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family. "Youth:And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone.Interviewer:Okay. So like what problems?Youth:Like family issues.Interviewer:So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things?P:Cause I already have enough family problems. I guess I don’t want anymore. [14-year-old female]" The discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from smoker family members. "I live with my grandparents. They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]" At times, being around parents who smoked resulted in feelings of worry and frustration for youth. 
"Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]" "I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]" Youth were also sensitive how their negative reactions and behaviours towards family members who smoked could result in hurt feelings. "Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]" Many youth described family tension and conflict because a family member was smoking. "It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]" "And like my cousin, her parents smoke. They quit. She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. 
[12-year-old female]" Feelings of anger were also associated with family members who smoked. One youth who had an extended family member who died from lung cancer was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death. "I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]" Notably, of all the feelings expressed by youth in the study, it was a deep sense of sadness that was most apparent in their narratives with respect to families who had a history of smoking. The sadness was in relation to a past or future loss of family members. For some youth, the sadness was physically evident through youth crying and holding back tears while being interviewed. Some held the view that smoking was the defining feature of their families that ultimately would lead to its destruction (Figure  4). Funeral Sign. Represented smoking as a sign of cancer and death. "Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]" "Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]" The prisoner The final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. 
Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke. "Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. [16-year-old female]" "Whenever he smokes I’m like in a car with him or in the house with him. He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]" Within their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours or over their own right to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes. They experienced their own, and witnessed their siblings’, exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. They perceived it as unfair and just had to put up with it.
Conclusion
This study revealed that while youth often felt trapped by others smoking in their home and powerless to stop this behaviour, they took on the role of educating, trying to influence, and ultimately protecting their parents with respect to the harmful effects of smoking and second-hand smoke. The findings reinforce that more needs to be done to strengthen the environments in which youth grow and flourish. Upholding the right of youth to live in clean, healthy, and unpolluted environments is fair public health policy. As one youth from our study assertively stated, parents and all adults should “just stop smoking cause it could affect your kid’s life and yours!”
[ "Background", "Design", "Participants", "Data collection", "Ethics", "Data analysis", "It’s not fair", "Parenting the parent about the dangers of smoking", "The good/bad parent", "Distancing family relationships", "The prisoner", "Strengths and limitations of the study", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Lung cancer is considered one of the most preventable types of cancer. While smoking (i.e., tobacco cigarette smoking) increases the risk of many forms of cancer, it is the predominant risk factor for lung cancer, accounting for about 80% of lung cancer cases in men and 50% in women worldwide\n[1]. Despite recent evidence that lung cancer is a high health risk concern to youth\n[2], adolescent smoking remains a public health problem. In 2010, 12% of Canadian youth aged 15 to 19 years smoked\n[3]. Although the number of youth smoking was the lowest level recorded for that age group, the decline in youth smoking rates has slowed\n[3] and youth smoking rates in some countries are rising\n[4]. In the United States, the surgeon general reported that more than 600,000 middle school students and 3 million high school students smoke cigarettes\n[5]. Also troubling is the fact that while smoking rates for youth are down, “of every three young smokers, only one will quit, and one of those remaining smokers will die from tobacco-related causes” (\n[5], p. 9). Adult smokers frequently report having started smoking as youth. Among smokers aged 15–17, almost 80% said they had tried smoking by age 14\n[6], with females (aged 15–17) having had their first cigarette at 12.9 years and males at age 13.3 years\n[7]. Global trends also reveal that smoking is increasing in developing countries due to adapting Western lifestyle habits. As well, lung cancer rates are increasing in some countries (e.g., China, Korea, and some African countries) and are expected to continue to rise for the next few decades\n[1].\nSmoking has been shown to be a relational and learned behaviour, especially influenced by the family\n[8]. Regarding youth health behaviours, it has been suggested that the family, especially parents, is one of the dominant arenas in which youth are influenced. 
It has been established that youth are most likely to smoke if they have been exposed to, or come from a family in which their parents smoked\n[9]. Findings from a Canadian Community health survey revealed that in 2011 youth (aged 15–17) residing in households where someone smoked regularly were three times more likely to smoke (22.4% versus 7.0%)\n[10]. One qualitative study reported that aboriginal adolescent girls often smoke because smoking is normalized and reinforced by families: they see family members smoking in the home, they are not discouraged from smoking, and in some cases, parents facilitate adolescents’ access to cigarettes\n[11]. Youth smoking behaviour has also been linked to a range of other parental influences. For example, using a nationally United States representative sample, Powell and Chaloupka\n[12] studied specific parenting behaviours and the degree to which high school students felt their parents’ opinions about smoking influenced their decision to smoke. The authors identified that certain parenting practices (i.e., parental smoking, setting limits on youth’s free time, in-home smoking rules, quality and frequency of parent–child communication) as well as how much youth value their parents’ opinions about smoking, strongly influenced youth deciding to smoke.\nThe evidence indicates that while parents exert a strong influence on youth smoking, they can also exist as a protective factor against youth smoking\n[8,12-14], especially when non-smoking rules are in place\n[15] including eliminating smoking in the home\n[12]. Clark et al.\n[15] revealed that if parents themselves smoked, banning smoking in the home and speaking against smoking reduced the likelihood that youth will smoke. Similarly, other research on household smoking rules found fewer adolescent smoking behaviours in homes with strict anti-smoking rules\n[16]. 
While there is empirical evidence of how parents are greatly influential regarding their children’s smoking behaviours, we know little, however, about the dynamics involved in the parent-adolescent relationship regarding smoking in the home, or how youth perceive their parents’ approval or disapproval of smoking behaviours. Few studies have addressed how children or youth feel about adult smoking.\nIn addition to findings that parental smoking may be related to the initiation of smoking in youth, there is increasing concern for the health risks of second-hand smoke. Along with anti-smoking legislation in public spaces, attention has been aimed at protecting children from second-hand smoke and recognizing the risks involved in exposure to second-hand smoke in non-public places. Second-hand smoking rates and non-smoking rules, for example, have been examined in family homes and cars\n[17-20]. In 2006, 22.1% of Canadian youth in grades 5 through 12 were exposed to smoking in their home on a daily or almost daily basis and 28.1% were exposed to smoking while riding in a car on a nearly weekly basis\n[19]. In the 2008 Canadian survey, the rate of exposure for 12–19 year olds (16.8%) was almost twice as high as the Canadian average\n[21]. New Zealand national surveys indicate that while exposure to second-hand smoke has decreased since 2000, youth’s perceptions revealed that exposure still remained at 35% (in-home exposure) and 32% (in-vehicle exposure)\n[22]. The effects of parental smoking and maintaining a smoke-free environment have also received attention in areas such as prenatal and newborn care\n[23] and later, poor respiratory symptoms and outcomes\n[24]. 
Studies have also begun emphasizing home smoking bans and perceived dangers of the less visible but harmful exposure of third-hand smoke to children\n[23,25].\nAlthough the current literature offers insights about the physical effects of second-hand smoke, how second-hand smoke impacts family relationships is unclear and what youth think about adult family members smoking remains in its infancy. This paper draws on data from a larger qualitative study that sought to extend our limited understanding of youth’s perspectives of cancer and cancer prevention. It aims to explore how youth conceptualize smoking within the context of their own life-situations with attention to their perspectives on parental and family-related smoking issues and experiences.", "The qualitative research design of ethnography was utilized. Ethnography is the study of a specific cultural group of people that provides explanations of people’s thoughts, beliefs, and behaviours in a specific context with the aim of describing aspects of a phenomenon of the group\n[26-28]. For this study, youth were the group and the phenomenon of interest was youth’s perspectives of cancer and cancer prevention. Key assumptions that were integral to the successful undertaking of the study included viewing youth as self-reflective beings expert on their own experiences and as flexible agents existing within and being touched by multiple social and cultural contexts.", "Youth were recruited in a Western Canadian province from six schools in both a rural and urban setting. Schools mailed invitation letters about the study to families of potential participants who, if interested, could contact the researcher for further information. Purposive sampling techniques were used with the goal to achieve variation among participants based on demographic information (e.g., age, SES, gender, and urban or rural residency) and experiences in relation to cancer (i.e., some youth had family members who had experienced cancer). 
Recruitment ended once redundancy or theoretical saturation was achieved, that is, no new themes were apparent. In total, 75 youth ranging in age from 11 to 19 years (M = 14.5, SD = 2.1) participated in the study. The demographic and background characteristics of the participants are presented in Table \n1.\nDemographic Profile of Youth Participants\nOf the 75 youth participating in the study, six (8%) had tried smoking but no longer smoked and four (5%) reported that they currently smoked. Twenty-two youth (29%) had parents who currently smoked, while seven (9%) had parents who had quit smoking. For the 10 youth (13%) who had a history of smoking, eight (11%) had parents who also had a history of smoking.", "Data collection occurred between December 2007 and October 2010. The longer time period was due to school breaks and use of multiple data collection methods. The aim was to have each youth participate in two in-depth open-ended interviews. For the first interview an interview guide was used which included questions to elicit youth’s thoughts, beliefs, and feelings about cancer and cancer prevention (e.g., When you hear the word “cancer,” what does it make you think of? If developing cancer messages for youth, what would you tell them?). The interview guide had no direct questions about smoking. The open-ended nature of the interview guide provided an opportunity for youth to discuss areas they considered significant and/or areas previously not anticipated by the researchers\n[29].\nAfter completing the first interview, all youth were asked to take part in the photovoice method. Photovoice is a participatory research method where individuals can address important issues through taking photographs and discussion\n[30-34]. Photographs, which are often used in ethnography, provided youth with a unique and creative means to reflect on cancer and cancer prevention. 
Youth were given a disposable camera to take pictures of people (with permission), objects, places or events that made them think of cancer and cancer prevention. Youth had four weeks to take photographs. During the second interview, participants were asked to talk about what the photos meant to them in terms of cancer and cancer prevention. In addition, youth were asked follow-up questions based on their initial interview and to comment on emerging themes. In total, 53 youth (71%) participated in the photovoice method and second interview. The remaining 22 youth (29%) were unable to complete the photovoice method and second interview due to scheduling difficulties.\nFour focus group interviews with youth who were previously interviewed were conducted in the schools near the end of the study. The purpose was to identify ideas about cancer and cancer prevention that might emerge from a group context and provide quality controls on data collection\n[35-37]. Between three and four youth participated in each focus group. In total, fourteen youth attended the focus group discussions.\nAll individual and focus group interviews lasted from 60 to 90 minutes and were digitally recorded and transcribed verbatim. Field notes were recorded to describe the context (e.g., participant’s non-verbal behaviours, communication processes) and the interviewer’s perceptions of the interview. The interviews were conducted by trained research assistants under the supervision of the first author and took place at the participant’s school or home.\nFinally, ethnographic field research was conducted. Research assistants were trained in fundamental skills and process in doing ethnographic field research including participant observation, field notes, and flexibility and openness\n[38-40]. 
On the days that research assistants were at the schools conducting interviews, they observed and recorded interactions and dialogue during informal daily activities (e.g., recess), special events (e.g., cancer prevention fund raising activities), and during the interviews themselves. Ongoing team meetings with research assistants allowed for debriefing and helped increase the sensitivity and richness of fieldnotes by critically highlighting features previously unconsidered by the observers alone. Raw field notes were written up and compared to interview data.", "Before commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent from all youth participants was also provided. Youth were informed that they could withdraw from the study at any stage. Strategies to secure the participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered). Youth received an honorarium gift card for their participation.", "Consistent with qualitative research design, data analysis occurred simultaneously with data collection. A data management system, NVivo version 9.0\n[41] helped facilitate organization of substantial transcripts. Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs). Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention including their meanings attributed to parental and family-related smoking issues and experiences. 
Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography\n[28,29,39,40]. First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains. Through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes. Various strategies were used to enhance the rigor of the research process including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process\n[42]. The researchers independently identified theme areas then jointly refined and linked analytic themes and categories. Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes.", "Overall, youth viewed their parents and other family members smoking around children as something unjust. The phrase “It’s not fair” was frequently expressed by youth in this study as illustrated in the following comment,\n\"Because the kids around parents who smoke have to breathe in, they have to breathe in all of it …and like, if parents want to smoke then they should like go outside because it’s not fair to the kids…Probably because they always have to be around it if their mother always smokes every time they’re taking a bath or every time they’re like colouring a picture like every time they do anything, they always have to breathe in the bad- like bad air that’s filled with smoke and stuff like that. And it’s not fair to them. [12-year-old female]\"\nYouth could not make sense of why parents would smoke around their children. 
They were also unsure how to deal with what they saw as an act of injustice to children. They struggled with how the smoking made them feel, recognizing that their roles as children limited their ability to influence their parents’ behaviour. Their attempts to reconcile their feelings and deal with the unfairness through specific behaviours are further apparent in the following four subthemes.", "Although youth did share stories of parents talking to them about the importance of not smoking, this was not the major family narrative. Instead of being talked to about smoking, it was more common for youth to share stories of themselves talking to, educating, and even preaching to their parents about the dangers of smoking. In short, youth took it upon themselves to parent their parents.\n\"But I’m getting my mom and my step-dad to quit…by talking to them, telling them how it makes all of us kids feel…Yeah, reading like everything the packages say or what the internet says or like what I learn from it and then they’re all just thinking and then they’re saying “well I won’t do that much then. I’ll try to quit.” Now they’re trying to quit but it’s not working for my mom but it is working for my step-dad. [14-year-old female]\"\nIn addition to talking to their parents about how they felt about them smoking, some youth also would take action to reduce their parents’ ability to smoke.\n\"Like I have just tried, because I just tell my parents straight up to stop and…I always try to, like, hide their stuff on them but it doesn’t work. They get mad. [13-year-old male]\"\nTalking to their parents about the dangers of smoking arose out of youth’s worries for their parents’ health. The concern that youth had for parents who smoked was in fact one of the reasons youth decided to participate in the study. Youth were looking for answers that could possibly help them to get their parents to stop smoking. 
Concerns about their parents smoking was also strongly depicted in the photographs. One 13-year-old female took a picture of an ashtray full of cigarettes and said,\n\"I see them (ashtrays with cigarette butts) all the time. It would be different if it was like you know once in a while kind of thing I probably wouldn’t mind that much, but my parents smoke in the house and in the car and everywhere so it’s kind of I don’t know I wish they would stop (Figure \n2).\"\nAshtray. Represented youth’s concern for parents.\nYouth who feared that their family members might die, or who had family members who died because of smoking-related illnesses (e.g., lung cancer), especially shared their concerns and would try to make their parents feel guilty.\n\"I always tell them things to make them guilty I’m always like, “Do you want to meet my children, do you want to bring me down the aisle?” It’s working actually. I think my dad said my mom was talking about it so…Ever since my mom started again I really felt it hit me cause, like I want my mom to meet my children and she has to see me get married and have kids and I want my mom to be there. [16-year-old female]\"\nYouth also provided many stories of their parents’ attempts to quit smoking. Their stories demonstrated how smoking was embedded within their family’s history and identity, as well as how parents’ smoking played a role in their child’s life.\n\"Well I was really happy when my father quit because he had been smoking since like, I don’t know, before he was even a teenager. He was really young and he said he’d quit sort of when I was born like he’d smoke outside and he’d reduce it, but then when I got old enough he’d continued smoking, and my mom was the same way. 
She was also a smoker, but she quit like maybe I was five… [15-year-old female]\"\n\"My grandpa passed away a couple of years ago and he died of emphysema and a little bit after he got sick I should say he quit smoking and whenever my aunties and mom smoked, but they don’t anymore, then he would always tell them “Well you should quit because look what’s happening to me.” And that really pushed my mom to quit. [17-year-old female]\"\nWhile youth were persistent in trying to convince their parents to stop smoking, most youth felt that their parents would not quit despite their efforts. A sense of helplessness was apparent.\n\"Well, my parents like they are smoking and if I tell them not to they’re not going to listen because they’re like “We are your parents, you’re not our parents!” [18-year-old male]\"\n\"Like my mom and my dad both smoke and I’d like to tell them to stop and to show them…they don’t really care…I don’t know, like influencing somebody not to smoke is a lot different than I guess them already smoking and quitting. Because like, I mean obviously everything I’ve seen I’m never going to smoke, but I mean it doesn’t really influence my parents. [13-year-old male]\"", "A second sub-theme involved a moral tone in youth’s conversations with respect to how they viewed their parents and other family members who did or did not smoke. On one hand, youth perceived parents who did not smoke as doing the right thing and as part of their parents’ overall plan of keeping themselves and their children healthy.\n\"Like my parents don’t smoke, they don’t like do drugs or anything like that and they do like everything possible to stay to like be healthy and stuff and to keep me healthy and stuff like that. 
[12-year-old female]\"\nIn contrast, youth were especially disapproving of parents who did smoke.\n\"Like if a mother wanted to have a baby so badly in the first place then she should have known that she’s not supposed to drink or she’s not supposed to smoke or she’s not supposed to do any type of drugs or anything…and they don’t know how bad it actually is for the kids who have to breathe it in. [14-year-old female]\"\nParents’ second-hand smoking was seen by youth as parents “doing” to their children. “Doing” was viewed in a negative sense where children were put in a dangerous situation in which they had little choice or control as depicted by the following quote and photo.\n\"I guess for people with families already, like what it’s doing to their family or that second-hand smoking can be almost as bad as actually smoking like what are you doing to the people around you if you’re choosing to smoke. [17-year-old female] (Figure \n3).\"\nEssentially, youth felt that second-hand smoke was more dangerous than first-hand smoke as one 15-year-old male noted, “This stuff (second-hand smoke) does not get filtered through the back of the butt, it just comes out clear not filtered.”\nYouth expressed concern for how second-hand smoking impacted them and their siblings.\n\"Both my parents smoke so I don’t like it too much because the smell is kind of it bugs me and you know I don’t know because so many people talk about smoking is related to cancer and that kind of thing so I’m kind of scared. I’m scared for myself and for my parents. But I’m more scared for like my brother than I am for me because I can leave more often than my brother’s allowed to… so. Either like my brother developing the habit or something or like him getting cancer because he’s around it too much. [13-year-old female]\"\nAdult smoking beside a young child. 
Represented parents “doing” to children; children have no choice.\nMany youth, whose parents did not smoke currently or had never smoked, were concerned about the effects of second-hand smoke on their friends (whose parents smoked). These youth spoke about their friends’ situations vicariously. Their comments in the interviews arose from their extended empathy towards their peers and their peers’ siblings.\n\"Um, I’m pet sitting for a friend while she’s in Florida and her parents are usually always smoking or getting ready to light another cigarette and so I went there it just smells so bad in their house and I feel sorry for her cause she’s got a sister in kindergarten and it’s her in grade nine and her parents are smoking and their dogs in there and cat and it just sticks to the furniture and it just smells smoke. [14-year-old female]\"\nYouth felt that parents who smoked were poor role models and that their behaviour could influence their child’s desire to smoke.\n\"Cause when children see, children do, right? Yeah, so lots of kids when your parents smoke when you’re like in grade two or something and kids get the idea that it’s cool or like whatever and then they want to be like their parents cause they think their parents are awesome. So then they start doing everything their parents do and then they start smoking… [14-year-old female]\"\n\"Or sometimes you can get addicted to smoking if both your parents smoke a lot and then like my cousin who smokes uses that excuse cause I’m like”Why do you smoke, that’s disgusting!” And he says “Well both my parents smoke so I started.” I don’t think his parents are a very good influence since they both smoke. And I’m not sure if they ever told him not to smoke, but maybe they just accepted his smoking not saying it is bad or anything. Like if my kid started smoking I would get mad. 
[13-year-old male]\"\nParents who smoked were also considered by youth to be less reliable and credible when talking to their children about the dangers of smoking.\n\"When my parents found out I tried it (smoking) once, they knew that they couldn’t do anything cause since they were smoking too! [13-year-old female]\"\nIn general, parents and other adult family members who smoked were viewed by youth to be weak in character.\n\"So like one year he (family relative) came out from Ontario with his wife and my grandma and it was pouring rain and he decided to go outside for a smoke, so he really ran across our yard and hid in the shed and smoked. I’m like, it’s pathetic. [14-year-old female]\"\nIt was evident from the youth’s narratives that parents’ smoking behaviours were unacceptable and should never be tolerated.\n\"And I mean people really need to kind of jump on it and say don’t do this around your kids because it will affect them. Don’t do this around any young child because young children are really open to being affected by something like that and so I think definitely being careful about where you’re smoking or something like that is definitely a really big factor. [16-year-old female]\"", "A third sub-theme that emerged was one where youth were separating themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family.\n\"Youth: And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone. Interviewer: Okay. So like what problems? Youth: Like family issues. Interviewer: So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things? Youth: Cause I already have enough family problems. I guess I don’t want anymore. 
[14-year-old female]\"\nThe discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from smoker family members.\n\"I live with my grandparents. They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]\"\nAt times, being around parents who smoked resulted in feelings of worry and frustration for youth.\n\"Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]\"\n\"I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]\"\nYouth were also sensitive how their negative reactions and behaviours towards family members who smoked could result in hurt feelings.\n\"Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. 
My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]\"\nMany youth described family tension and conflict because a family member was smoking.\n\"It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]\"\n\"And like my cousin, her parents smoke. They quit. She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. [12-year-old female]\"\nFeelings of anger were also associated with family members who smoked. One youth who had an extended family member who died from lung cancer was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death.\n\"I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]\"\nNotably, of all the feelings expressed by youth in the study, it was a deep sense of sadness that was most apparent in their narratives with respect to families who had a history of smoking. The sadness was in relation to a past or future loss of family members. For some youth, the sadness was physically evident through youth crying and holding back tears while being interviewed. Some held the view that smoking was the defining feature of their families that ultimately would lead to its destruction (Figure \n4).\nFuneral Sign. 
Represented smoking as a sign of cancer and death.\n\"Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]\"\n\"Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]\"", "The final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke.\n\"Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. [16-year-old female]\"\n\"Whenever he smokes I’m like in a car with him or in the house with him. 
He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]\"\nWithin their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours and rights to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes. They experienced their own, and witnessed their siblings’ exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. They perceived it as unfair and just had to put up with it.", "Using a qualitative research approach afforded the opportunity to understand youth from their frames of reference and experiences of reality. The findings reported here add to the existing literature by providing a richer description on youth’s experiences and beliefs about smoking. Limitations of our study included a sample that was primarily females (72%) and in the younger and middle age range of youth with only 17% of participants being 17 years and older. Fewer youth who are male and older could explain why we did not detect differences based on age or gender. Despite striving for a diverse sample, we were unable to obtain diversity in ethnic backgrounds and socioeconomic status. As well, most youth in this study did not smoke. Future work that accounts for limitations in the study’s sample might result in additional perspectives on the relational aspects of smoking that warrant tailoring smoking cessation programs and policies to address the differences. 
As well, longitudinal work is recommended, as the cross-sectional nature of our study did not afford us an understanding how perspectives of youth change over time.", "The authors declare that they have no competing interests.", "Study design RLW; supervision of data collection RLW; analysis and interpretation of the data RLW, CK; manuscript preparation RLW, CK. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/12/965/prepub\n" ]
Background

Lung cancer is considered one of the most preventable types of cancer. While smoking (i.e., tobacco cigarette smoking) increases the risk of many forms of cancer, it is the predominant risk factor for lung cancer, accounting for about 80% of lung cancer cases in men and 50% in women worldwide [1]. Despite recent evidence that lung cancer is a high health risk concern to youth [2], adolescent smoking remains a public health problem. In 2010, 12% of Canadian youth aged 15 to 19 years smoked [3]. Although this was the lowest level recorded for that age group, the decline in youth smoking rates has slowed [3] and youth smoking rates in some countries are rising [4]. In the United States, the Surgeon General reported that more than 600,000 middle school students and 3 million high school students smoke cigarettes [5]. Also troubling is the fact that while smoking rates for youth are down, “of every three young smokers, only one will quit, and one of those remaining smokers will die from tobacco-related causes” ([5], p. 9). Adult smokers frequently report having started smoking as youth. Among smokers aged 15–17, almost 80% said they had tried smoking by age 14 [6], with females (aged 15–17) having had their first cigarette at 12.9 years and males at 13.3 years [7]. Global trends also reveal that smoking is increasing in developing countries as Western lifestyle habits are adopted. As well, lung cancer rates are increasing in some countries (e.g., China, Korea, and some African countries) and are expected to continue to rise for the next few decades [1].

Smoking has been shown to be a relational and learned behaviour, influenced especially by the family [8]. Regarding youth health behaviours, it has been suggested that the family, especially parents, is one of the dominant arenas in which youth are influenced. It has been established that youth are most likely to smoke if they have been exposed to, or come from, a family in which their parents smoked [9]. Findings from a Canadian Community Health Survey revealed that in 2011 youth (aged 15–17) residing in households where someone smoked regularly were three times more likely to smoke (22.4% versus 7.0%) [10]. One qualitative study reported that Aboriginal adolescent girls often smoke because smoking is normalized and reinforced by families: they see family members smoking in the home, they are not discouraged from smoking, and in some cases, parents facilitate adolescents’ access to cigarettes [11]. Youth smoking behaviour has also been linked to a range of other parental influences. For example, using a nationally representative United States sample, Powell and Chaloupka [12] studied specific parenting behaviours and the degree to which high school students felt their parents’ opinions about smoking influenced their decision to smoke. The authors identified that certain parenting practices (i.e., parental smoking, setting limits on youth’s free time, in-home smoking rules, and the quality and frequency of parent–child communication), as well as how much youth value their parents’ opinions about smoking, strongly influenced youth’s decisions to smoke.

The evidence indicates that while parents exert a strong influence on youth smoking, they can also act as a protective factor against youth smoking [8,12-14], especially when non-smoking rules are in place [15], including eliminating smoking in the home [12]. Clark et al. [15] revealed that even if parents themselves smoked, banning smoking in the home and speaking against smoking reduced the likelihood that youth would smoke. Similarly, other research on household smoking rules found fewer adolescent smoking behaviours in homes with strict anti-smoking rules [16]. While there is empirical evidence that parents greatly influence their children’s smoking behaviours, we know little, however, about the dynamics involved in the parent-adolescent relationship regarding smoking in the home, or how youth perceive their parents’ approval or disapproval of smoking behaviours. Few studies have addressed how children or youth feel about adult smoking.

In addition to findings that parental smoking may be related to the initiation of smoking in youth, there is increasing concern about the health risks of second-hand smoke. Along with anti-smoking legislation in public spaces, attention has been aimed at protecting children from second-hand smoke and recognizing the risks of exposure to second-hand smoke in non-public places. Second-hand smoking rates and non-smoking rules, for example, have been examined in family homes and cars [17-20]. In 2006, 22.1% of Canadian youth in grades 5 through 12 were exposed to smoking in their home on a daily or almost daily basis, and 28.1% were exposed to smoking while riding in a car on a nearly weekly basis [19]. In the 2008 Canadian survey, the rate of exposure for 12–19 year olds (16.8%) was almost twice as high as the Canadian average [21]. New Zealand national surveys indicate that while exposure to second-hand smoke has decreased since 2000, youth’s perceptions revealed that exposure still remained at 35% (in-home) and 32% (in-vehicle) [22]. The effects of parental smoking and of maintaining a smoke-free environment have also received attention in areas such as prenatal and newborn care [23] and, later, poor respiratory symptoms and outcomes [24].
Studies have also begun emphasizing home smoking bans and perceived dangers of the less visible but harmful exposure of third-hand smoke to children\n[23,25].\nAlthough the current literature offers insights about the physical effects of second-hand smoke, how second-hand smoke impacts family relationships is unclear and what youth think about adult family members smoking remains in its infancy. This paper draws on data from a larger qualitative study that sought to extend our limited understanding of youth’s perspectives of cancer and cancer prevention. It aims to explore how youth conceptualize smoking within the context of their own life-situations with attention to their perspectives on parental and family-related smoking issues and experiences.", " Design The qualitative research design of ethnography was utilized. Ethnography is the study of a specific cultural group of people that provides explanations of people’s thoughts, beliefs, and behaviours in a specific context with the aim of describing aspects of a phenomenon of the group\n[26-28]. For this study, youth were the group and the phenomenon of interest was youth’s perspectives of cancer and cancer prevention. Key assumptions that were integral to the successful undertaking of the study included viewing youth as self-reflective beings expert on their own experiences and as flexible agents existing within and being touched by multiple social and cultural contexts.\nThe qualitative research design of ethnography was utilized. Ethnography is the study of a specific cultural group of people that provides explanations of people’s thoughts, beliefs, and behaviours in a specific context with the aim of describing aspects of a phenomenon of the group\n[26-28]. For this study, youth were the group and the phenomenon of interest was youth’s perspectives of cancer and cancer prevention. 
Key assumptions that were integral to the successful undertaking of the study included viewing youth as self-reflective beings expert on their own experiences and as flexible agents existing within and being touched by multiple social and cultural contexts.\n Participants Youth were recruited in a Western Canadian province from six schools in both a rural and urban setting. Schools mailed invitation letters about the study to families of potential participants who, if interested, could contact the researcher for further information. Purposive sampling techniques were used with the goal to achieve variation among participants based on demographic information (e.g., age, SES, gender, and urban or rural residency) and experiences in relation to cancer (i.e., some youth had family members who had experienced cancer). Recruitment ended once redundancy or theoretical saturation was achieved, that is, no new themes were apparent. In total, 75 youth ranging in age from 11 to 19 years (M = 14.5, SD = 2.1) participated in the study. The demographic and background characteristics of the participants are presented in Table \n1.\nDemographic Profile of Youth Participants\nOf the 75 youth participating in the study, six (8%) had tried smoking but no longer smoked and four (5%) reported that they currently smoked. Twenty-two youth (29%) had parents who currently smoked, while seven (9%) had parents who had quit smoking. For the 10 youth (13%) who had a history of smoking, eight (11%) had parents who also had a history of smoking.\nYouth were recruited in a Western Canadian province from six schools in both a rural and urban setting. Schools mailed invitation letters about the study to families of potential participants who, if interested, could contact the researcher for further information. 
Purposive sampling techniques were used with the goal to achieve variation among participants based on demographic information (e.g., age, SES, gender, and urban or rural residency) and experiences in relation to cancer (i.e., some youth had family members who had experienced cancer). Recruitment ended once redundancy or theoretical saturation was achieved, that is, no new themes were apparent. In total, 75 youth ranging in age from 11 to 19 years (M = 14.5, SD = 2.1) participated in the study. The demographic and background characteristics of the participants are presented in Table \n1.\nDemographic Profile of Youth Participants\nOf the 75 youth participating in the study, six (8%) had tried smoking but no longer smoked and four (5%) reported that they currently smoked. Twenty-two youth (29%) had parents who currently smoked, while seven (9%) had parents who had quit smoking. For the 10 youth (13%) who had a history of smoking, eight (11%) had parents who also had a history of smoking.\n Data collection Data collection occurred between December 2007 and October 2010. The longer time period was due to school breaks and use of multiple data collection methods. The aim was to have each youth participate in two in-depth open-ended interviews. For the first interview an interview guide was used which included questions to elicit youth’s thoughts, beliefs, and feelings about cancer and cancer prevention (e.g., When you hear the word “cancer,” what does it make you think of? If developing cancer messages for youth, what would you tell them?). The interview guide had no direct questions about smoking. The open-ended nature of the interview guide provided an opportunity for youth to discuss areas they considered significant and/or areas previously not anticipated by the researchers\n[29].\nAfter completing the first interview, all youth were asked to take part in the photovoice method. 
Photovoice is a participatory research method where individuals can address important issues through taking photographs and discussion\n[30-34]. Photographs, which are often used in ethnography, provided youth with a unique and creative means to reflect on cancer and cancer prevention. Youth were given a disposable camera to take pictures of people (with permission), objects, places or events that made them think of cancer and cancer prevention. Youth had four weeks to take photographs. During the second interview, participants were asked to talk about what the photos meant to them in terms of cancer and cancer prevention. In addition, youth were asked follow-up questions based on their initial interview and to comment on emerging themes. In total, 53 youth (71%) participated in the photovoice method and second interview. The remaining 22 youth (29%) were unable to complete the photovoice method and second interview due to scheduling difficulties.\nFour focus group interviews with youth who were previously interviewed were conducted in the schools near the end of the study. The purpose was to identify ideas about cancer and cancer prevention that might emerge from a group context and provide quality controls on data collection\n[35-37]. Between three and four youth participated in each focus group. In total, fourteen youth attended the focus group discussions.\nAll individual and focus group interviews lasted from 60 to 90 minutes and were digitally recorded and transcribed verbatim. Field notes were recorded to describe the context (e.g., participant’s non-verbal behaviours, communication processes) and the interviewer’s perceptions of the interview. The interviews were conducted by trained research assistants under the supervision of the first author and took place at the participant’s school or home.\nFinally, ethnographic field research was conducted. 
Research assistants were trained in fundamental skills and process in doing ethnographic field research including participant observation, field notes, and flexibility and openness\n[38-40]. On the days that research assistants were at the schools conducting interviews, they observed and recorded interactions and dialogue during informal daily activities (e.g., recess), special events (e.g., cancer prevention fund raising activities), and during the interviews themselves. Ongoing team meetings with research assistants allowed for debriefing and helped increase the sensitivity and richness of fieldnotes by critically highlighting features previously unconsidered by the observers alone. Raw field notes were written up and compared to interview data.\nData collection occurred between December 2007 and October 2010. The longer time period was due to school breaks and use of multiple data collection methods. The aim was to have each youth participate in two in-depth open-ended interviews. For the first interview an interview guide was used which included questions to elicit youth’s thoughts, beliefs, and feelings about cancer and cancer prevention (e.g., When you hear the word “cancer,” what does it make you think of? If developing cancer messages for youth, what would you tell them?). The interview guide had no direct questions about smoking. The open-ended nature of the interview guide provided an opportunity for youth to discuss areas they considered significant and/or areas previously not anticipated by the researchers\n[29].\nAfter completing the first interview, all youth were asked to take part in the photovoice method. Photovoice is a participatory research method where individuals can address important issues through taking photographs and discussion\n[30-34]. Photographs, which are often used in ethnography, provided youth with a unique and creative means to reflect on cancer and cancer prevention. 
Youth were given a disposable camera to take pictures of people (with permission), objects, places or events that made them think of cancer and cancer prevention. Youth had four weeks to take photographs. During the second interview, participants were asked to talk about what the photos meant to them in terms of cancer and cancer prevention. In addition, youth were asked follow-up questions based on their initial interview and to comment on emerging themes. In total, 53 youth (71%) participated in the photovoice method and second interview. The remaining 22 youth (29%) were unable to complete the photovoice method and second interview due to scheduling difficulties.\nFour focus group interviews with youth who were previously interviewed were conducted in the schools near the end of the study. The purpose was to identify ideas about cancer and cancer prevention that might emerge from a group context and provide quality controls on data collection\n[35-37]. Between three and four youth participated in each focus group. In total, fourteen youth attended the focus group discussions.\nAll individual and focus group interviews lasted from 60 to 90 minutes and were digitally recorded and transcribed verbatim. Field notes were recorded to describe the context (e.g., participant’s non-verbal behaviours, communication processes) and the interviewer’s perceptions of the interview. The interviews were conducted by trained research assistants under the supervision of the first author and took place at the participant’s school or home.\nFinally, ethnographic field research was conducted. Research assistants were trained in fundamental skills and process in doing ethnographic field research including participant observation, field notes, and flexibility and openness\n[38-40]. 
On the days that research assistants were at the schools conducting interviews, they observed and recorded interactions and dialogue during informal daily activities (e.g., recess), special events (e.g., cancer prevention fund raising activities), and during the interviews themselves. Ongoing team meetings with research assistants allowed for debriefing and helped increase the sensitivity and richness of fieldnotes by critically highlighting features previously unconsidered by the observers alone. Raw field notes were written up and compared to interview data.\n Ethics Before commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent from all youth participants was also provided. Youth were informed that they could withdraw from the study at any stage. Strategies to secure the participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered). Youth received an honorarium gift card for their participation.\nBefore commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent from all youth participants was also provided. Youth were informed that they could withdraw from the study at any stage. Strategies to secure the participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered). Youth received an honorarium gift card for their participation.\n Data analysis Consistent with qualitative research design, data analysis occurred simultaneously with data collection. 
A data management system, NVivo version 9.0\n[41] helped facilitate organization of substantial transcripts. Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs). Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention including their meanings attributed to parental and family-related smoking issues and experiences. Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography\n[28,29,39,40]. First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains. Through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes. Various strategies were used to enhance the rigor of the research process including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process\n[42]. The researchers independently identified theme areas then jointly refined and linked analytic themes and categories. Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes.\nConsistent with qualitative research design, data analysis occurred simultaneously with data collection. A data management system, NVivo version 9.0\n[41] helped facilitate organization of substantial transcripts. 
Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs). Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention including their meanings attributed to parental and family-related smoking issues and experiences. Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography\n[28,29,39,40]. First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains. Through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes. Various strategies were used to enhance the rigor of the research process including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process\n[42]. The researchers independently identified theme areas then jointly refined and linked analytic themes and categories. Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes.", "The qualitative research design of ethnography was utilized. Ethnography is the study of a specific cultural group of people that provides explanations of people’s thoughts, beliefs, and behaviours in a specific context with the aim of describing aspects of a phenomenon of the group\n[26-28]. 
For this study, youth were the group and the phenomenon of interest was youth’s perspectives of cancer and cancer prevention. Key assumptions that were integral to the successful undertaking of the study included viewing youth as self-reflective beings expert on their own experiences and as flexible agents existing within and being touched by multiple social and cultural contexts.", "Youth were recruited in a Western Canadian province from six schools in both a rural and urban setting. Schools mailed invitation letters about the study to families of potential participants who, if interested, could contact the researcher for further information. Purposive sampling techniques were used with the goal to achieve variation among participants based on demographic information (e.g., age, SES, gender, and urban or rural residency) and experiences in relation to cancer (i.e., some youth had family members who had experienced cancer). Recruitment ended once redundancy or theoretical saturation was achieved, that is, no new themes were apparent. In total, 75 youth ranging in age from 11 to 19 years (M = 14.5, SD = 2.1) participated in the study. The demographic and background characteristics of the participants are presented in Table \n1.\nDemographic Profile of Youth Participants\nOf the 75 youth participating in the study, six (8%) had tried smoking but no longer smoked and four (5%) reported that they currently smoked. Twenty-two youth (29%) had parents who currently smoked, while seven (9%) had parents who had quit smoking. For the 10 youth (13%) who had a history of smoking, eight (11%) had parents who also had a history of smoking.", "Data collection occurred between December 2007 and October 2010. The longer time period was due to school breaks and use of multiple data collection methods. The aim was to have each youth participate in two in-depth open-ended interviews. 
For the first interview an interview guide was used which included questions to elicit youth’s thoughts, beliefs, and feelings about cancer and cancer prevention (e.g., When you hear the word “cancer,” what does it make you think of? If developing cancer messages for youth, what would you tell them?). The interview guide had no direct questions about smoking. The open-ended nature of the interview guide provided an opportunity for youth to discuss areas they considered significant and/or areas previously not anticipated by the researchers\n[29].\nAfter completing the first interview, all youth were asked to take part in the photovoice method. Photovoice is a participatory research method where individuals can address important issues through taking photographs and discussion\n[30-34]. Photographs, which are often used in ethnography, provided youth with a unique and creative means to reflect on cancer and cancer prevention. Youth were given a disposable camera to take pictures of people (with permission), objects, places or events that made them think of cancer and cancer prevention. Youth had four weeks to take photographs. During the second interview, participants were asked to talk about what the photos meant to them in terms of cancer and cancer prevention. In addition, youth were asked follow-up questions based on their initial interview and to comment on emerging themes. In total, 53 youth (71%) participated in the photovoice method and second interview. The remaining 22 youth (29%) were unable to complete the photovoice method and second interview due to scheduling difficulties.\nFour focus group interviews with youth who were previously interviewed were conducted in the schools near the end of the study. The purpose was to identify ideas about cancer and cancer prevention that might emerge from a group context and provide quality controls on data collection\n[35-37]. Between three and four youth participated in each focus group. 
In total, fourteen youth attended the focus group discussions.\nAll individual and focus group interviews lasted from 60 to 90 minutes and were digitally recorded and transcribed verbatim. Field notes were recorded to describe the context (e.g., participants’ non-verbal behaviours, communication processes) and the interviewer’s perceptions of the interview. The interviews were conducted by trained research assistants under the supervision of the first author and took place at the participant’s school or home.\nFinally, ethnographic field research was conducted. Research assistants were trained in fundamental skills and processes of ethnographic field research, including participant observation, field notes, and flexibility and openness\n[38-40]. On the days that research assistants were at the schools conducting interviews, they observed and recorded interactions and dialogue during informal daily activities (e.g., recess), special events (e.g., cancer prevention fund-raising activities), and the interviews themselves. Ongoing team meetings with research assistants allowed for debriefing and helped increase the sensitivity and richness of field notes by critically highlighting features previously unconsidered by the observers alone. Raw field notes were written up and compared to interview data.\nBefore commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent were also obtained from all youth participants. Youth were informed that they could withdraw from the study at any stage. Strategies to protect participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered).
Youth received an honorarium gift card for their participation.\nConsistent with qualitative research design, data analysis occurred simultaneously with data collection. A data management system, NVivo version 9.0\n[41], facilitated organization of the substantial volume of transcripts. Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs). Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention, including the meanings they attributed to parental and family-related smoking issues and experiences. Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography\n[28,29,39,40]. First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains. Through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes. Various strategies were used to enhance the rigor of the research process, including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process\n[42]. The researchers independently identified theme areas and then jointly refined and linked analytic themes and categories.
Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes.\nSmoking was one of the dominant lines of discourse across the sample of youth’s narratives of cancer and cancer prevention. Age, gender, smoking status (i.e., smoker or non-smoker), and place of residency did not influence the story line. Youth were considerably more knowledgeable about the association between smoking and cancer and about anti-tobacco messages than about any other cancer-related topic. Several youth photographed their own hand-drawn facsimiles or public signage depicting familiar anti-smoking slogans (Figure 1).\nAnti-Smoking Sign. Represented youth’s desire that adults should stop smoking.\n\"When I walk around there’s like no smoking area signs. I think “Oh this is safe,” like it is good to know that there won’t be like smoke around for me to breathe in. [14-year-old male]\"\nYouth in this study were well informed about how smoking could impact one’s health (e.g., increased the potential for cancer and other chronic illnesses). Of special importance were youth’s perspectives and experiences of parents and other family members smoking around them, as represented by the primary theme, It’s not fair, and four subthemes: parenting the parent about the dangers of smoking; the good/bad parent; distancing family relationships; and the prisoner. Each of these themes is discussed further below.\n It’s not fair Overall, youth viewed their parents and other family members smoking around children as something unjust.
The phrase “It’s not fair” was frequently expressed by youth in this study, as illustrated in the following comment:\n\"Because the kids around parents who smoke have to breathe in, they have to breathe in all of it …and like, if parents want to smoke then they should like go outside because it’s not fair to the kids…Probably because they always have to be around it if their mother always smokes every time they’re taking a bath or every time they’re like colouring a picture like every time they do anything, they always have to breathe in the bad- like bad air that’s filled with smoke and stuff like that. And it’s not fair to them. [12-year-old female]\"\nYouth could not make sense of why parents would smoke around their children. They were also unsure how to deal with what they saw as an act of injustice to children. They struggled with how the smoking made them feel, recognizing that their roles as children limited their ability to influence their parents’ behaviour. Their attempts to reconcile their feelings and deal with the unfairness through specific behaviours are apparent in the following four subthemes.\n Parenting the parent about the dangers of smoking Although youth did share stories of parents talking to them about the importance of not smoking, this was not the major family narrative. Instead of being talked to about smoking, it was more common for youth to share stories of themselves talking to, educating, and even preaching to their parents about the dangers of smoking. In short, youth took it upon themselves to parent their parents.\n\"But I’m getting my mom and my step-dad to quit…by talking to them, telling them how it makes all of us kids feel…Yeah, reading like everything the packages say or what the internet says or like what I learn from it and then they’re all just thinking and then they’re saying “well I won’t do that much then. I’ll try to quit.” Now they’re trying to quit but it’s not working for my mom but it is working for my step-dad. [14-year-old female]\"\nIn addition to talking to their parents about how they felt about their smoking, some youth would also take action to reduce their parents’ ability to smoke.\n\"Like I have just tried, because I just tell my parents straight up to stop and…I always try to, like, hide their stuff on them but it doesn’t work. They get mad. [13-year-old male]\"\nTalking to their parents about the dangers of smoking arose out of youth’s worries for their parents’ health. The concern that youth had for parents who smoked was in fact one of the reasons youth decided to participate in the study. Youth were looking for answers that could possibly help them to get their parents to stop smoking.
Concern about their parents’ smoking was also strongly depicted in the photographs. One 13-year-old female took a picture of an ashtray full of cigarettes and said,\n\"I see them (ashtrays with cigarette butts) all the time. It would be different if it was like you know once in a while kind of thing I probably wouldn’t mind that much, but my parents smoke in the house and in the car and everywhere so it’s kind of I don’t know I wish they would stop (Figure 2).\"\nAshtray. Represented youth’s concern for parents.\nYouth who feared that their family members might die, or who had family members who had died because of smoking-related illnesses (e.g., lung cancer), especially shared their concerns and would try to make their parents feel guilty.\n\"I always tell them things to make them guilty I’m always like, “Do you want to meet my children, do you want to bring me down the aisle?” It’s working actually. I think my dad said my mom was talking about it so…Ever since my mom started again I really felt it hit me cause, like I want my mom to meet my children and she has to see me get married and have kids and I want my mom to be there. [16-year-old female]\"\nYouth also provided many stories of their parents’ attempts to quit smoking. Their stories demonstrated how smoking was embedded within their family’s history and identity, as well as how parents’ smoking played a role in their child’s life.\n\"Well I was really happy when my father quit because he had been smoking since like, I don’t know, before he was even a teenager. He was really young and he said he’d quit sort of when I was born like he’d smoke outside and he’d reduce it, but then when I got old enough he’d continued smoking, and my mom was the same way.
She was also a smoker, but she quit like maybe I was five… [15-year-old female]\"\n\"My grandpa passed away a couple of years ago and he died of emphysema and a little bit after he got sick I should say he quit smoking and whenever my aunties and mom smoked, but they don’t anymore, then he would always tell them “Well you should quit because look what’s happening to me.” And that really pushed my mom to quit. [17-year-old female]\"\nWhile youth were persistent in trying to convince their parents to stop smoking, most youth felt that their parents would not quit despite their efforts. A sense of helplessness was apparent.\n\"Well, my parents like they are smoking and if I tell them not to they’re not going to listen because they’re like “We are your parents, you’re not our parents!” [18-year-old male]\"\n\"Like my mom and my dad both smoke and I’d like to tell them to stop and to show them…they don’t really care…I don’t know, like influencing somebody not to smoke is a lot different than I guess them already smoking and quitting. Because like, I mean obviously everything I’ve seen I’m never going to smoke, but I mean it doesn’t really influence my parents. [13-year-old male]\"\n The good/bad parent A second sub-theme involved a moral tone in youth’s conversations with respect to how they viewed their parents and other family members who did or did not smoke.
On one hand, youth perceived parents who did not smoke as doing the right thing and as part of their parents’ overall plan of keeping themselves and their children healthy.\n\"Like my parents don’t smoke, they don’t like do drugs or anything like that and they do like everything possible to stay to like be healthy and stuff and to keep me healthy and stuff like that. [12-year-old female]\"\nIn contrast, youth were especially disapproving of parents who did smoke.\n\"Like if a mother wanted to have a baby so badly in the first place then she should have known that she’s not supposed to drink or she’s not supposed to smoke or she’s not supposed to do any type of drugs or anything…and they don’t know how bad it actually is for the kids who have to breathe it in. [14-year-old female]\"\nParents’ second-hand smoking was seen by youth as parents “doing” to their children. “Doing” was viewed in a negative sense where children were put in a dangerous situation in which they had little choice or control, as depicted by the following quote and photo.\n\"I guess for people with families already, like what it’s doing to their family or that second-hand smoking can be almost as bad as actually smoking like what are you doing to the people around you if you’re choosing to smoke. [17-year-old female] (Figure 3).\"\nEssentially, youth felt that second-hand smoke was more dangerous than first-hand smoke, as one 15-year-old male noted, “This stuff (second-hand smoke) does not get filtered through the back of the butt, it just comes out clear not filtered.”\nYouth expressed concern for how second-hand smoking impacted them and their siblings.\n\"Both my parents smoke so I don’t like it too much because the smell is kind of it bugs me and you know I don’t know because so many people talk about smoking is related to cancer and that kind of thing so I’m kind of scared. I’m scared for myself and for my parents.
But I’m more scared for like my brother than I am for me because I can leave more often than my brother’s allowed to… so. Either like my brother developing the habit or something or like him getting cancer because he’s around it too much. [13-year-old female]\"\nAdult smoking beside a young child. Represented parents “doing” to children; children have no choice.\nMany youth, whose parents did not smoke currently or had never smoked, were concerned about the effects of second-hand smoke on their friends (whose parents smoked). These youth spoke about their friends’ situations vicariously. Their comments in the interviews arose from their extended empathy towards their peers and their peers’ siblings.\n\"Um, I’m pet sitting for a friend while she’s in Florida and her parents are usually always smoking or getting ready to light another cigarette and so I went there it just smells so bad in their house and I feel sorry for her cause she’s got a sister in kindergarten and it’s her in grade nine and her parents are smoking and their dogs in there and cat and it just sticks to the furniture and it just smells smoke. [14-year-old female]\"\nYouth felt that parents who smoked were poor role models and that their behaviour could influence their child’s desire to smoke.\n\"Cause when children see, children do, right? Yeah, so lots of kids when your parents smoke when you’re like in grade two or something and kids get the idea that it’s cool or like whatever and then they want to be like their parents cause they think their parents are awesome. So then they start doing everything their parents do and then they start smoking… [14-year-old female]\"\n\"Or sometimes you can get addicted to smoking if both your parents smoke a lot and then like my cousin who smokes uses that excuse cause I’m like”Why do you smoke, that’s disgusting!” And he says “Well both my parents smoke so I started.” I don’t think his parents are a very good influence since they both smoke. 
And I’m not sure if they ever told him not to smoke, but maybe they just accepted his smoking not saying it is bad or anything. Like if my kid started smoking I would get mad. [13-year-old male]\"\nParents who smoked were also considered by youth to be less reliable and credible when talking to their children about the dangers of smoking.\n\"When my parents found out I tried it (smoking) once, they knew that they couldn’t do anything cause since they were smoking too! [13-year-old female]\"\nIn general, parents and other adult family members who smoked were viewed by youth to be weak in character.\n\"So like one year he (family relative) came out from Ontario with his wife and my grandma and it was pouring rain and he decided to go outside for a smoke, so he really ran across our yard and hid in the shed and smoked. I’m like, it’s pathetic. [14-year-old female]\"\nIt was evident from the youth’s narratives that parents’ smoking behaviours were unacceptable and should never be tolerated.\n\"And I mean people really need to kind of jump on it and say don’t do this around your kids because it will affect them. Don’t do this around any young child because young children are really open to being affected by something like that and so I think definitely being careful about where you’re smoking or something like that is definitely a really big factor. [16-year-old female]\"\n Distancing family relationships A third sub-theme that emerged was one where youth separated themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family.\n\"Youth: And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone. Interviewer: Okay. So like what problems? Youth: Like family issues. Interviewer: So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things? Youth: Cause I already have enough family problems. I guess I don’t want anymore.
[14-year-old female]\"\nThe discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from family members who smoked.\n\"I live with my grandparents. They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]\"\nAt times, being around parents who smoked resulted in feelings of worry and frustration for youth.\n\"Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]\"\n\"I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]\"\nYouth were also sensitive to how their negative reactions and behaviours towards family members who smoked could result in hurt feelings.\n\"Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away.
My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]\"\nMany youth described family tension and conflict because a family member was smoking.\n\"It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]\"\n\"And like my cousin, her parents smoke. They quit. She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. [12-year-old female]\"\nFeelings of anger were also associated with family members who smoked. One youth who had an extended family member who died from lung cancer was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death.\n\"I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]\"\nNotably, of all the feelings expressed by youth in the study, it was a deep sense of sadness that was most apparent in their narratives with respect to families who had a history of smoking. The sadness related to a past or future loss of family members. For some youth, the sadness was physically evident as they cried or held back tears while being interviewed. Some held the view that smoking was the defining feature of their families that would ultimately lead to its destruction (Figure 4).\nFuneral Sign.
Represented smoking as a sign of cancer and death.\n\"Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]\"\n\"Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]\"\nA third sub-theme that emerged was one where youth were separating themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family.\n\"Youth:And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone.Interviewer:Okay. So like what problems?Youth:Like family issues.Interviewer:So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things?P:Cause I already have enough family problems. I guess I don’t want anymore. [14-year-old female]\"\nThe discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from smoker family members.\n\"I live with my grandparents. They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. 
I just think it’s bad. [17-year-old female]\"\nAt times, being around parents who smoked resulted in feelings of worry and frustration for youth.\n\"Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]\"\n\"I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]\"\nYouth were also sensitive how their negative reactions and behaviours towards family members who smoked could result in hurt feelings.\n\"Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]\"\nMany youth described family tension and conflict because a family member was smoking.\n\"It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]\"\n\"And like my cousin, her parents smoke. They quit. She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. 
It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. [12-year-old female]\"\nFeelings of anger were also associated with family members who smoked. One youth who had an extended family member who died from lung cancer was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death.\n\"I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]\"\nNotably, of all the feelings expressed by youth in the study, it was a deep sense of sadness that was most apparent in their narratives with respect to families who had a history of smoking. The sadness was in relation to a past or future loss of family members. For some youth, the sadness was physically evident through youth crying and holding back tears while being interviewed. Some held the view that smoking was the defining feature of their families that ultimately would lead to its destruction (Figure \n4).\nFuneral Sign. Represented smoking as a sign of cancer and death.\n\"Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]\"\n\"Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. 
[13-year-old male]\"\n The prisoner The final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke.\n\"Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. [16-year-old female]\"\n\"Whenever he smokes I’m like in a car with him or in the house with him. He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]\"\nWithin their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours and rights to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes. They experienced their own, and witnessed their siblings’ exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. 
They perceived it as unfair and just had to put up with it.\nThe final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke.\n\"Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. [16-year-old female]\"\n\"Whenever he smokes I’m like in a car with him or in the house with him. He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]\"\nWithin their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours and rights to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes. They experienced their own, and witnessed their siblings’ exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. 
Overall, youth viewed their parents and other family members smoking around children as something unjust. The phrase “It’s not fair” was frequently expressed by youth in this study, as illustrated in the following comment:

"Because the kids around parents who smoke have to breathe in, they have to breathe in all of it …and like, if parents want to smoke then they should like go outside because it’s not fair to the kids…Probably because they always have to be around it if their mother always smokes every time they’re taking a bath or every time they’re like colouring a picture like every time they do anything, they always have to breathe in the bad- like bad air that’s filled with smoke and stuff like that. And it’s not fair to them. [12-year-old female]"

Youth could not make sense of why parents would smoke around their children. They were also unsure how to deal with what they saw as an act of injustice to children. They struggled with how the smoking made them feel, recognizing that their roles as children limited their ability to influence their parents’ behaviour. Their attempts to reconcile their feelings and deal with the unfairness through specific behaviours are further apparent in the following four sub-themes.

Although youth did share stories of parents talking to them about the importance of not smoking, this was not the major family narrative. Instead of being talked to about smoking, it was more common for youth to share stories of themselves talking to, educating, and even preaching to their parents about the dangers of smoking.
In short, youth took it upon themselves to parent their parents.

"But I’m getting my mom and my step-dad to quit…by talking to them, telling them how it makes all of us kids feel…Yeah, reading like everything the packages say or what the internet says or like what I learn from it and then they’re all just thinking and then they’re saying “well I won’t do that much then. I’ll try to quit.” Now they’re trying to quit but it’s not working for my mom but it is working for my step-dad. [14-year-old female]"

In addition to talking to their parents about how they felt about them smoking, some youth would also take action to reduce their parents’ ability to smoke.

"Like I have just tried, because I just tell my parents straight up to stop and…I always try to, like, hide their stuff on them but it doesn’t work. They get mad. [13-year-old male]"

Talking to their parents about the dangers of smoking arose out of youth’s worries for their parents’ health. The concern that youth had for parents who smoked was in fact one of the reasons youth decided to participate in the study. Youth were looking for answers that could possibly help them to get their parents to stop smoking. Concern about their parents’ smoking was also strongly depicted in the photographs. One 13-year-old female took a picture of an ashtray full of cigarettes and said,

"I see them (ashtrays with cigarette butts) all the time. It would be different if it was like you know once in a while kind of thing I probably wouldn’t mind that much, but my parents smoke in the house and in the car and everywhere so it’s kind of I don’t know I wish they would stop (Figure 2)."

Ashtray.
Represented youth’s concern for parents.

Youth who feared that their family members might die, or who had family members who died because of smoking-related illnesses (e.g., lung cancer), especially shared their concerns and would try to make their parents feel guilty.

"I always tell them things to make them guilty I’m always like, “Do you want to meet my children, do you want to bring me down the aisle?” It’s working actually. I think my dad said my mom was talking about it so…Ever since my mom started again I really felt it hit me cause, like I want my mom to meet my children and she has to see me get married and have kids and I want my mom to be there. [16-year-old female]"

Youth also provided many stories of their parents’ attempts to quit smoking. Their stories demonstrated how smoking was embedded within their family’s history and identity, as well as how parents’ smoking played a role in their child’s life.

"Well I was really happy when my father quit because he had been smoking since like, I don’t know, before he was even a teenager. He was really young and he said he’d quit sort of when I was born like he’d smoke outside and he’d reduce it, but then when I got old enough he’d continued smoking, and my mom was the same way. She was also a smoker, but she quit like maybe I was five… [15-year-old female]"

"My grandpa passed away a couple of years ago and he died of emphysema and a little bit after he got sick I should say he quit smoking and whenever my aunties and mom smoked, but they don’t anymore, then he would always tell them “Well you should quit because look what’s happening to me.” And that really pushed my mom to quit. [17-year-old female]"

While youth were persistent in trying to convince their parents to stop smoking, most youth felt that their parents would not quit despite their efforts.
A sense of helplessness was apparent.

"Well, my parents like they are smoking and if I tell them not to they’re not going to listen because they’re like “We are your parents, you’re not our parents!” [18-year-old male]"

"Like my mom and my dad both smoke and I’d like to tell them to stop and to show them…they don’t really care…I don’t know, like influencing somebody not to smoke is a lot different than I guess them already smoking and quitting. Because like, I mean obviously everything I’ve seen I’m never going to smoke, but I mean it doesn’t really influence my parents. [13-year-old male]"

A second sub-theme involved a moral tone in youth’s conversations with respect to how they viewed their parents and other family members who did or did not smoke. On one hand, youth perceived parents who did not smoke as doing the right thing and as part of their parents’ overall plan of keeping themselves and their children healthy.

"Like my parents don’t smoke, they don’t like do drugs or anything like that and they do like everything possible to stay to like be healthy and stuff and to keep me healthy and stuff like that. [12-year-old female]"

In contrast, youth were especially disapproving of parents who did smoke.

"Like if a mother wanted to have a baby so badly in the first place then she should have known that she’s not supposed to drink or she’s not supposed to smoke or she’s not supposed to do any type of drugs or anything…and they don’t know how bad it actually is for the kids who have to breathe it in. [14-year-old female]"

Parents’ second-hand smoking was seen by youth as parents “doing” to their children.
“Doing” was viewed in a negative sense where children were put in a dangerous situation in which they had little choice or control, as depicted by the following quote and photo.

"I guess for people with families already, like what it’s doing to their family or that second-hand smoking can be almost as bad as actually smoking like what are you doing to the people around you if you’re choosing to smoke. [17-year-old female] (Figure 3)"

Essentially, youth felt that second-hand smoke was more dangerous than first-hand smoke, as one 15-year-old male noted, “This stuff (second-hand smoke) does not get filtered through the back of the butt, it just comes out clear not filtered.”

Youth expressed concern for how second-hand smoking impacted them and their siblings.

"Both my parents smoke so I don’t like it too much because the smell is kind of it bugs me and you know I don’t know because so many people talk about smoking is related to cancer and that kind of thing so I’m kind of scared. I’m scared for myself and for my parents. But I’m more scared for like my brother than I am for me because I can leave more often than my brother’s allowed to… so. Either like my brother developing the habit or something or like him getting cancer because he’s around it too much. [13-year-old female]"

Adult smoking beside a young child. Represented parents “doing” to children; children have no choice.

Many youth, whose parents did not smoke currently or had never smoked, were concerned about the effects of second-hand smoke on their friends (whose parents smoked). These youth spoke about their friends’ situations vicariously.
Their comments in the interviews arose from their extended empathy towards their peers and their peers’ siblings.

"Um, I’m pet sitting for a friend while she’s in Florida and her parents are usually always smoking or getting ready to light another cigarette and so I went there it just smells so bad in their house and I feel sorry for her cause she’s got a sister in kindergarten and it’s her in grade nine and her parents are smoking and their dogs in there and cat and it just sticks to the furniture and it just smells smoke. [14-year-old female]"

Youth felt that parents who smoked were poor role models and that their behaviour could influence their child’s desire to smoke.

"Cause when children see, children do, right? Yeah, so lots of kids when your parents smoke when you’re like in grade two or something and kids get the idea that it’s cool or like whatever and then they want to be like their parents cause they think their parents are awesome. So then they start doing everything their parents do and then they start smoking… [14-year-old female]"

"Or sometimes you can get addicted to smoking if both your parents smoke a lot and then like my cousin who smokes uses that excuse cause I’m like “Why do you smoke, that’s disgusting!” And he says “Well both my parents smoke so I started.” I don’t think his parents are a very good influence since they both smoke. And I’m not sure if they ever told him not to smoke, but maybe they just accepted his smoking not saying it is bad or anything. Like if my kid started smoking I would get mad. [13-year-old male]"

Parents who smoked were also considered by youth to be less reliable and credible when talking to their children about the dangers of smoking.

"When my parents found out I tried it (smoking) once, they knew that they couldn’t do anything cause since they were smoking too!
[13-year-old female]"

In general, parents and other adult family members who smoked were viewed by youth to be weak in character.

"So like one year he (family relative) came out from Ontario with his wife and my grandma and it was pouring rain and he decided to go outside for a smoke, so he really ran across our yard and hid in the shed and smoked. I’m like, it’s pathetic. [14-year-old female]"

It was evident from the youth’s narratives that parents’ smoking behaviours were unacceptable and should never be tolerated.

"And I mean people really need to kind of jump on it and say don’t do this around your kids because it will affect them. Don’t do this around any young child because young children are really open to being affected by something like that and so I think definitely being careful about where you’re smoking or something like that is definitely a really big factor. [16-year-old female]"

A third sub-theme that emerged was one where youth were separating themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family.

"Youth: And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone.
Interviewer: Okay. So like what problems?
Youth: Like family issues.
Interviewer: So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things?
Youth: Cause I already have enough family problems. I guess I don’t want anymore. [14-year-old female]"

The discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from family members who smoked.

"I live with my grandparents.
They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]"

At times, being around parents who smoked resulted in feelings of worry and frustration for youth.

"Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]"

"I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]"

Youth were also sensitive to how their negative reactions and behaviours towards family members who smoked could result in hurt feelings.

"Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]"

Many youth described family tension and conflict because a family member was smoking.

"It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]"

"And like my cousin, her parents smoke. They quit.
She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. [12-year-old female]"

Feelings of anger were also associated with family members who smoked. One youth, whose extended family member had died from lung cancer, was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death.

"I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]"

Notably, of all the feelings expressed by youth in the study, a deep sense of sadness was most apparent in their narratives with respect to families who had a history of smoking. The sadness was in relation to a past or future loss of family members. For some youth, the sadness was physically evident as they cried or held back tears while being interviewed. Some held the view that smoking was the defining feature of their families and would ultimately lead to their destruction (Figure 4).

Funeral Sign. Represented smoking as a sign of cancer and death.

"Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]"

"Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes.
My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]"

The final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke.

"Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. [16-year-old female]"

"Whenever he smokes I’m like in a car with him or in the house with him. He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]"

Within their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours or ability to assert their right to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes.
They experienced their own, and witnessed their siblings’, exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. They perceived it as unfair and simply had to put up with it.

Across the sample, smoking was one of the dominant lines of discourse in the youth’s narratives of cancer and cancer prevention. This is not surprising and is consistent with a recent study examining perceived smoking-related adverse effects, in which youth consistently rated lung cancer as being most concerning [2]. It appears that youth are connecting smoking with cancer risk. Youth’s discourse (as reinforced through their photographs) reflected the broad public health messages conveyed by anti-tobacco and cancer prevention campaigns, suggesting that youth are not passive and ill-informed with respect to tobacco use and health messages. Instead of being preached to by parents about the dangers of smoking, it was youth themselves who were speaking out against smoking. To date, very few studies have reported such behaviour.

Ours is one of few studies that detailed youth perceptions of parental smoking and second-hand smoke and their association with health concerns and family relationships. Although there is research indicating that youth believe parents have an obligation to do all that they can to support their children to not start smoking [13], this study revealed that youth are taking responsibility by parenting the parent about the dangers of smoking. Youth in this study demonstrated a high awareness of the dangers of smoking. They expressed fear and concern about the health-related effects of smoking, especially regarding second-hand smoke in their families. Regardless of whether there was a history of smoking in their families, all youth were worried about the dangers of second-hand smoking for themselves and other family members.
Moreover, their awareness and concern extended beyond their own family unit.

Our study supports previous research reinforcing that youth continue to be frequently exposed to second-hand smoke in their homes and in cars even though most youth do not approve of it [19]. Overall, youth viewed the smoking behaviours of significant adults in their lives as an unjust act that all adults should be aware of. Although the concern for protecting children from second-hand smoke should be about recognizing children’s rights over adult smokers’ rights, research has shown that adults’ responses to smoking bans in homes and cars have not always been compliant and accepting [17,18]. In a study that investigated New Zealand policymakers’ views on the regulation of smoking in private and public places, findings revealed that policymakers were more apt to defer to a smoker’s right to smoke than to the protection of children from second-hand smoke [43]. In that same study, some participants suggested that the successful regulation of smoking around children in private places will require reconstructing the culture around smoking so that any smoking around children is considered unacceptable.

Our current study adds to the literature as it identifies the various strategies youth use to deal with second-hand smoke in their home environments. In addressing second-hand smoking by their parents and other family members, youth opposed their parents’ smoking behaviours verbally or through their body language. Youth tried to cope with the situation in a variety of ways, such as distancing themselves from family members, eating meals in their rooms, smoking with family members, and physically covering their faces to avoid exposure to second-hand smoke. However, despite their attempts, youth were forced to accept the fact that smoking was a defining feature of their families.
Results suggest a great toll on youth’s emotional state and their ability to cope with their parents’ health-compromising behaviours.

Although research discusses how parents’ smoking-related behaviours influence the behaviour of youth, there is minimal discussion of how youth view the impact of smoking on the family unit. By using qualitative methods, this study showed how smoking was an agent that altered youth’s relationships with family members, usually in a negative way. This is in contrast to a qualitative study by Passey and colleagues that explored the social context of smoking among aboriginal women and girls [8]. The researchers found that for young girls, smoking was an important way of maintaining relationships within the extended family and community. Sharing a cigarette was seen as a social activity within the family that built a sense of belonging. However, in our study, smoking behaviours of parents and other family members tended to produce feelings of concern, hurt, resentment, and a detachment from the family. The positive or negative impact that cigarette smoking can have on relationships suggests there is a strong relational component to the act of smoking.

While the focus on the dangers of second-hand smoke has mainly been on the physical health of children, our study reinforces how the harmful effects of parental smoking also extend to youth’s emotional well-being. Overall, youth reported experiencing stress, worry, helplessness, anxiety, and fear for their health and the health of friends whose parents smoked. These youth experienced a heavy emotional burden to parent the parent and carry concerns about siblings. Youth living in families where adult members smoke are least likely to have control over whether they are exposed to second-hand smoke; this places them at risk not only for physical health problems but for emotional distress as well.
This study calls attention to the need for future research exploring how the emotional well-being of youth living in homes with adults who smoke may go unrecognized or be dismissed.

Although research has examined some of the health-related effects of having substance-abusing parents, it has for the most part overlooked the detrimental effects on children and family functioning when parents use more socially "acceptable" addictive substances such as tobacco and nicotine. The literature is robust in its findings about the negative effects of living with a parent who has an addiction to alcohol. For example, adolescents of alcoholic parents reported mental health difficulties (including emotional symptoms) and other problems (e.g., poorer academic performance and conduct problems) compared to a control group [44]. Children growing up in substance-abusing families have been shown to have a disrupted family life with increased family conflict, and may be at greater risk for developing alcohol, drug-related, and behavioural problems [45].

Tobacco addiction is a major public health concern. The findings emerging from this study reinforce the need for public health action in three areas. First, more public health research is warranted that examines youth's perceptions of their life circumstances growing up in families where their parents smoke. Further research is needed to investigate possible linkages between youth's exposure to second-hand smoke in their home environment and emotional and lifestyle-related health difficulties. Research is also needed to investigate how youth's perceptions of being exposed to parental smoking behaviours and second-hand smoke affect family relations and youth development.
A report by the Children's Mental Health Policy Research Program at a Canadian university reinforces that research evidence on children's mental health needs may be best informed and strengthened by the participation and experiences of children and their families [46]. Creating a scientific base on youth's perspectives of their health and well-being in the context of living with parents who smoke is a critical step toward improving and supporting youth's physical, mental, and emotional health.

Second, the findings also raise the issue of attending to the emotional well-being as well as the physical needs of children who reside in smoking households. We need to assist youth who live with second-hand smoke in their homes and who worry about the health effects on their parents and other family members. Public health programs and policies are needed that help to empower youth who live in families in which parents and other family members smoke. Song et al. recommend that encouraging youth to express their objections to second-hand smoke, as well as encouraging smoke-free homes, may be powerful tobacco control strategies against youth smoking [47]. In addition to controlling smoking within households, the findings from this study may be used to advance tobacco control programs and policies designed to prevent parents and other adults from smoking around youth in locations outside the home where parents and youth interact. A comprehensive tobacco control program should support the need for more smoke-free public places, including patios, playgrounds, sports fields, beaches, provincial parks, public events, and building perimeters [48].

Upon recognizing the potential toll that parental smoking can have on youth's emotional well-being, community-based programs are needed to help youth experiencing stress due to their concerns about the dangers of second-hand smoke.
Few supports are provided for youth of tobacco-addicted parents, especially for non-smoking youth who are experiencing distress because of their parents' self-harming behaviours. Health care professionals can also be encouraged to give youth positive messages for dealing and coping with their parents' smoking behaviour. Addressing youth's concerns and distress related to second-hand smoke is essential for youth to thrive both physically and mentally.

A final area for public health action is that anti-tobacco messaging to adult smokers needs to emphasize the relational component of smoking, the vulnerable hidden population of children in the smoking household, and how parental smoking can lead to family stress and negative health consequences for their children. Messages should include scenarios where youth feel distressed and trapped within their own homes. As well, messages that show concern for youth's health and the health of their siblings and parents may prove valuable in getting the attention of parents and other adults who smoke. In a study by Nilsson and Emmelin, Swedish youth who smoked felt strongly that parents had a duty to care and an obligation to do all they could to support their children not to start smoking: "It's a parental duty" [14]. Messages such as those voiced by youth in Nilsson and Emmelin's study, as well as those conveyed in the current study, could be powerful tools in smoking prevention and cessation programs.

This study's findings are relevant to issues of childhood agency discussed by others [49]. Panel discussions with experts in tobacco control and community development revealed themes of children's levels of agency and their power in reducing their exposure to second-hand smoke in the home [49]. In fact, children were seen as potential agents of change, and it was suggested that the voices of children towards their caregivers are potentially central in creating smoke-free homes.
Youth are often afforded little opportunity to have their voices heard, and it is noteworthy that some youth in the study did not feel they could speak their minds to their parents about their parents' smoking habits. Allowing and encouraging these youth voices to be heard is a matter of social justice.

Strengths and limitations of the study

Using a qualitative research approach afforded the opportunity to understand youth from their own frames of reference and experiences of reality. The findings reported here add to the existing literature by providing a richer description of youth's experiences and beliefs about smoking. Limitations of our study included a sample that was primarily female (72%) and in the younger and middle age range of youth, with only 17% of participants being 17 years or older. The smaller numbers of male and older youth could explain why we did not detect differences based on age or gender. Despite striving for a diverse sample, we were unable to obtain diversity in ethnic backgrounds and socioeconomic status. As well, most youth in this study did not smoke. Future work that accounts for these sampling limitations might reveal additional perspectives on the relational aspects of smoking and support tailoring smoking cessation programs and policies to address such differences. Longitudinal work is also recommended, as the cross-sectional nature of our study did not afford us an understanding of how the perspectives of youth change over time.

Conclusions

This study revealed that while youth often feel trapped by others smoking in their home and powerless to stop this behaviour, they took the position of educating, trying to influence, and ultimately protecting their parents regarding the harmful effects of smoking and second-hand smoke. The findings reinforce that more needs to be done to strengthen environments where youth can grow and flourish. Upholding the rights of youth to live in clean, healthy, and unpolluted environments is a right and fair public health policy. As one youth from our study assertively stated, parents and all adults should "just stop smoking cause it could affect your kid's life and yours!"

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Study design RLW; supervision of data collection RLW; analysis and interpretation of the data RLW, CK; manuscript preparation RLW, CK. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/12/965/prepub
Keywords: Youth, Cigarette smoking, Second-hand smoke, Parents, Cancer prevention, Qualitative research
Background

Lung cancer is considered one of the most preventable types of cancer. While smoking (i.e., tobacco cigarette smoking) increases the risk of many forms of cancer, it is the predominant risk factor for lung cancer, accounting for about 80% of lung cancer cases in men and 50% in women worldwide [1]. Despite recent evidence that lung cancer is a high health risk concern to youth [2], adolescent smoking remains a public health problem. In 2010, 12% of Canadian youth aged 15 to 19 years smoked [3]. Although this was the lowest level recorded for that age group, the decline in youth smoking rates has slowed [3] and youth smoking rates in some countries are rising [4]. In the United States, the surgeon general reported that more than 600,000 middle school students and 3 million high school students smoke cigarettes [5]. Also troubling is the fact that while smoking rates for youth are down, "of every three young smokers, only one will quit, and one of those remaining smokers will die from tobacco-related causes" ([5], p. 9). Adult smokers frequently report having started smoking as youth. Among smokers aged 15–17, almost 80% said they had tried smoking by age 14 [6], with females (aged 15–17) having had their first cigarette at 12.9 years and males at 13.3 years [7]. Global trends also reveal that smoking is increasing in developing countries as Western lifestyle habits are adopted. As well, lung cancer rates are increasing in some countries (e.g., China, Korea, and some African countries) and are expected to continue to rise for the next few decades [1].

Smoking has been shown to be a relational and learned behaviour, especially influenced by the family [8]. Regarding youth health behaviours, it has been suggested that the family, especially parents, is one of the dominant arenas in which youth are influenced.
It has been established that youth are most likely to smoke if they have been exposed to, or come from, a family in which their parents smoked [9]. Findings from the Canadian Community Health Survey revealed that in 2011, youth (aged 15–17) residing in households where someone smoked regularly were three times more likely to smoke (22.4% versus 7.0%) [10]. One qualitative study reported that aboriginal adolescent girls often smoke because smoking is normalized and reinforced by families: they see family members smoking in the home, they are not discouraged from smoking, and in some cases, parents facilitate adolescents' access to cigarettes [11]. Youth smoking behaviour has also been linked to a range of other parental influences. For example, using a nationally representative United States sample, Powell and Chaloupka [12] studied specific parenting behaviours and the degree to which high school students felt their parents' opinions about smoking influenced their decision to smoke. The authors identified that certain parenting practices (i.e., parental smoking, setting limits on youth's free time, in-home smoking rules, quality and frequency of parent–child communication), as well as how much youth value their parents' opinions about smoking, strongly influenced youth's decisions to smoke. The evidence indicates that while parents exert a strong influence on youth smoking, they can also act as a protective factor against it [8,12-14], especially when non-smoking rules are in place [15], including eliminating smoking in the home [12]. Clark et al. [15] revealed that even if parents themselves smoked, banning smoking in the home and speaking against smoking reduced the likelihood that youth would smoke. Similarly, other research on household smoking rules found fewer adolescent smoking behaviours in homes with strict anti-smoking rules [16].
While there is empirical evidence of how parents greatly influence their children's smoking behaviours, we know little, however, about the dynamics involved in the parent-adolescent relationship regarding smoking in the home, or how youth perceive their parents' approval or disapproval of smoking behaviours. Few studies have addressed how children or youth feel about adult smoking. In addition to findings that parental smoking may be related to the initiation of smoking in youth, there is increasing concern about the health risks of second-hand smoke. Along with anti-smoking legislation in public spaces, attention has been aimed at protecting children from second-hand smoke and recognizing the risks of exposure to second-hand smoke in non-public places. Second-hand smoking rates and non-smoking rules, for example, have been examined in family homes and cars [17-20]. In 2006, 22.1% of Canadian youth in grades 5 through 12 were exposed to smoking in their home on a daily or almost daily basis, and 28.1% were exposed to smoking while riding in a car on a nearly weekly basis [19]. In the 2008 Canadian survey, the rate of exposure for 12–19 year olds (16.8%) was almost twice as high as the Canadian average [21]. New Zealand national surveys indicate that while exposure to second-hand smoke has decreased since 2000, youth's perceptions revealed that exposure still remained at 35% (in-home) and 32% (in-vehicle) [22]. The effects of parental smoking and of maintaining a smoke-free environment have also received attention in areas such as prenatal and newborn care [23] and, later, poor respiratory symptoms and outcomes [24]. Studies have also begun emphasizing home smoking bans and the perceived dangers of the less visible but harmful exposure of children to third-hand smoke [23,25].
Although the current literature offers insights about the physical effects of second-hand smoke, how second-hand smoke affects family relationships is unclear, and research on what youth think about adult family members' smoking remains in its infancy. This paper draws on data from a larger qualitative study that sought to extend our limited understanding of youth's perspectives of cancer and cancer prevention. It aims to explore how youth conceptualize smoking within the context of their own life situations, with attention to their perspectives on parental and family-related smoking issues and experiences.

Method

Design

The qualitative research design of ethnography was utilized. Ethnography is the study of a specific cultural group of people that provides explanations of people's thoughts, beliefs, and behaviours in a specific context, with the aim of describing aspects of a phenomenon of the group [26-28]. For this study, youth were the group, and the phenomenon of interest was youth's perspectives of cancer and cancer prevention. Key assumptions integral to the successful undertaking of the study included viewing youth as self-reflective beings who are expert on their own experiences and as flexible agents existing within, and touched by, multiple social and cultural contexts.
Participants

Youth were recruited in a Western Canadian province from six schools in both rural and urban settings. Schools mailed invitation letters about the study to families of potential participants who, if interested, could contact the researcher for further information. Purposive sampling techniques were used with the goal of achieving variation among participants based on demographic information (e.g., age, SES, gender, and urban or rural residency) and experiences in relation to cancer (i.e., some youth had family members who had experienced cancer). Recruitment ended once redundancy or theoretical saturation was achieved, that is, when no new themes were apparent. In total, 75 youth ranging in age from 11 to 19 years (M = 14.5, SD = 2.1) participated in the study. The demographic and background characteristics of the participants are presented in Table 1 (Demographic Profile of Youth Participants). Of the 75 youth participating in the study, six (8%) had tried smoking but no longer smoked, and four (5%) reported that they currently smoked. Twenty-two youth (29%) had parents who currently smoked, while seven (9%) had parents who had quit smoking. Of the 10 youth (13%) who had a history of smoking, eight (11%) had parents who also had a history of smoking.

Data collection

Data collection occurred between December 2007 and October 2010; the longer time period was due to school breaks and the use of multiple data collection methods. The aim was to have each youth participate in two in-depth open-ended interviews. For the first interview, an interview guide was used that included questions to elicit youth's thoughts, beliefs, and feelings about cancer and cancer prevention (e.g., "When you hear the word 'cancer,' what does it make you think of?" "If developing cancer messages for youth, what would you tell them?"). The interview guide had no direct questions about smoking. The open-ended nature of the interview guide provided an opportunity for youth to discuss areas they considered significant and/or areas not previously anticipated by the researchers [29].

After completing the first interview, all youth were asked to take part in the photovoice method. Photovoice is a participatory research method in which individuals can address important issues through taking photographs and discussion [30-34]. Photographs, which are often used in ethnography, provided youth with a unique and creative means to reflect on cancer and cancer prevention. Youth were given a disposable camera to take pictures of people (with permission), objects, places, or events that made them think of cancer and cancer prevention. Youth had four weeks to take photographs. During the second interview, participants were asked to talk about what the photos meant to them in terms of cancer and cancer prevention. In addition, youth were asked follow-up questions based on their initial interview and were asked to comment on emerging themes. In total, 53 youth (71%) participated in the photovoice method and second interview; the remaining 22 youth (29%) were unable to complete them due to scheduling difficulties.

Four focus group interviews with youth who had previously been interviewed were conducted in the schools near the end of the study. The purpose was to identify ideas about cancer and cancer prevention that might emerge from a group context and to provide quality controls on data collection [35-37]. Between three and four youth participated in each focus group; in total, fourteen youth attended the focus group discussions. All individual and focus group interviews lasted from 60 to 90 minutes and were digitally recorded and transcribed verbatim. Field notes were recorded to describe the context (e.g., participants' non-verbal behaviours, communication processes) and the interviewer's perceptions of the interview. The interviews were conducted by trained research assistants under the supervision of the first author and took place at the participant's school or home.

Finally, ethnographic field research was conducted. Research assistants were trained in fundamental skills and processes of ethnographic field research, including participant observation, field notes, and flexibility and openness [38-40]. On the days that research assistants were at the schools conducting interviews, they observed and recorded interactions and dialogue during informal daily activities (e.g., recess), special events (e.g., cancer prevention fundraising activities), and the interviews themselves. Ongoing team meetings with research assistants allowed for debriefing and helped increase the sensitivity and richness of field notes by critically highlighting features previously unconsidered by the observers alone. Raw field notes were written up and compared to interview data.
Ethics Before commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent from all youth participants was also provided. Youth were informed that they could withdraw from the study at any stage. Strategies to secure the participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered). Youth received an honorarium gift card for their participation. Before commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent from all youth participants was also provided. Youth were informed that they could withdraw from the study at any stage. Strategies to secure the participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered). Youth received an honorarium gift card for their participation. Data analysis Consistent with qualitative research design, data analysis occurred simultaneously with data collection. A data management system, NVivo version 9.0 [41] helped facilitate organization of substantial transcripts. Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs). Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention including their meanings attributed to parental and family-related smoking issues and experiences. 
Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography [28,29,39,40]. First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains. Through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes. Various strategies were used to enhance the rigor of the research process including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process [42]. The researchers independently identified theme areas then jointly refined and linked analytic themes and categories. Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes. Consistent with qualitative research design, data analysis occurred simultaneously with data collection. A data management system, NVivo version 9.0 [41] helped facilitate organization of substantial transcripts. Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs). Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention including their meanings attributed to parental and family-related smoking issues and experiences. Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography [28,29,39,40]. 
First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains. Through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes. Various strategies were used to enhance the rigor of the research process including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process [42]. The researchers independently identified theme areas then jointly refined and linked analytic themes and categories. Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes. Design: The qualitative research design of ethnography was utilized. Ethnography is the study of a specific cultural group of people that provides explanations of people’s thoughts, beliefs, and behaviours in a specific context with the aim of describing aspects of a phenomenon of the group [26-28]. For this study, youth were the group and the phenomenon of interest was youth’s perspectives of cancer and cancer prevention. Key assumptions that were integral to the successful undertaking of the study included viewing youth as self-reflective beings expert on their own experiences and as flexible agents existing within and being touched by multiple social and cultural contexts. Participants: Youth were recruited in a Western Canadian province from six schools in both a rural and urban setting. Schools mailed invitation letters about the study to families of potential participants who, if interested, could contact the researcher for further information. 
Purposive sampling techniques were used with the goal of achieving variation among participants based on demographic information (e.g., age, SES, gender, and urban or rural residency) and experiences in relation to cancer (i.e., some youth had family members who had experienced cancer). Recruitment ended once redundancy or theoretical saturation was achieved, that is, when no new themes were apparent. In total, 75 youth ranging in age from 11 to 19 years (M = 14.5, SD = 2.1) participated in the study. The demographic and background characteristics of the participants are presented in Table 1 (Demographic Profile of Youth Participants). Of the 75 youth participating in the study, six (8%) had tried smoking but no longer smoked and four (5%) reported that they currently smoked. Twenty-two youth (29%) had parents who currently smoked, while seven (9%) had parents who had quit smoking. Of the 10 youth (13%) who had a history of smoking, eight (11% of the full sample) had parents who also had a history of smoking.

Data collection: Data collection occurred between December 2007 and October 2010. The longer time period was due to school breaks and the use of multiple data collection methods. The aim was to have each youth participate in two in-depth, open-ended interviews. For the first interview, an interview guide was used that included questions to elicit youth’s thoughts, beliefs, and feelings about cancer and cancer prevention (e.g., When you hear the word “cancer,” what does it make you think of? If developing cancer messages for youth, what would you tell them?). The interview guide had no direct questions about smoking. The open-ended nature of the interview guide provided an opportunity for youth to discuss areas they considered significant and/or areas not previously anticipated by the researchers [29]. After completing the first interview, all youth were asked to take part in the photovoice method.
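The smoking-status percentages reported above follow directly from the counts out of 75 participants, rounded to whole numbers. A quick arithmetic sketch reproduces them (the labels below are paraphrased descriptions, not headings taken from Table 1):

```python
# Verify each reported percentage against its count out of the 75
# participants; the article appears to round to whole percentages.
TOTAL = 75

# (description, count, reported %) — descriptions paraphrased from the text
reported = [
    ("tried smoking but no longer smoke", 6, 8),
    ("currently smoke", 4, 5),
    ("parents currently smoke", 22, 29),
    ("parents quit smoking", 7, 9),
    ("youth with a history of smoking", 10, 13),
    ("youth and parent both with smoking history", 8, 11),
]

for label, count, pct in reported:
    computed = round(100 * count / TOTAL)
    print(f"{label}: {count}/{TOTAL} -> {computed}% (reported {pct}%)")
    assert computed == pct
```

All six reported percentages match their counts under whole-number rounding, including the 11% figure, which is computed against the full sample of 75 rather than the 10 youth with a smoking history.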
Photovoice is a participatory research method where individuals can address important issues through taking photographs and discussion [30-34]. Photographs, which are often used in ethnography, provided youth with a unique and creative means to reflect on cancer and cancer prevention. Youth were given a disposable camera to take pictures of people (with permission), objects, places, or events that made them think of cancer and cancer prevention. Youth had four weeks to take photographs. During the second interview, participants were asked to talk about what the photos meant to them in terms of cancer and cancer prevention. In addition, youth were asked follow-up questions based on their initial interview and were asked to comment on emerging themes. In total, 53 youth (71%) participated in the photovoice method and second interview. The remaining 22 youth (29%) were unable to complete the photovoice method and second interview due to scheduling difficulties.

Four focus group interviews with youth who had previously been interviewed were conducted in the schools near the end of the study. The purpose was to identify ideas about cancer and cancer prevention that might emerge from a group context and to provide quality controls on data collection [35-37]. Between three and four youth participated in each focus group; in total, 14 youth attended the focus group discussions. All individual and focus group interviews lasted from 60 to 90 minutes and were digitally recorded and transcribed verbatim. Field notes were recorded to describe the context (e.g., participants’ non-verbal behaviours, communication processes) and the interviewer’s perceptions of the interview. The interviews were conducted by trained research assistants under the supervision of the first author and took place at the participant’s school or home. Finally, ethnographic field research was conducted.
Research assistants were trained in fundamental skills and processes of ethnographic field research, including participant observation, field notes, and flexibility and openness [38-40]. On the days that research assistants were at the schools conducting interviews, they observed and recorded interactions and dialogue during informal daily activities (e.g., recess), special events (e.g., cancer prevention fundraising activities), and during the interviews themselves. Ongoing team meetings with research assistants allowed for debriefing and helped increase the sensitivity and richness of field notes by critically highlighting features previously unconsidered by the observers alone. Raw field notes were written up and compared to interview data.

Ethics: Before commencing the study, permission was obtained from the University of Manitoba Research Ethics Committee and from the recruitment sites. Parental consent and assent from all youth participants were also obtained. Youth were informed that they could withdraw from the study at any stage. Strategies to secure the participants’ confidentiality were applied. Participants gave permission to use their photographs for the purposes of publishing and were reassured that any identifiable information in the photos would be removed (digitally altered). Youth received an honorarium gift card for their participation.

Data analysis: Consistent with qualitative research design, data analysis occurred simultaneously with data collection. A data management system, NVivo version 9.0 [41], helped facilitate organization of the substantial transcripts. Inductive coding began with RW reading all the field notes and interview transcripts. Analytical categories emerged from rigorous and systematic analysis of all forms of data (interview transcripts, ethnographic field notes, and photographs).
Analysis of the data followed ethnographic principles of interpreting the meanings youth attributed to cancer and cancer prevention, including the meanings they attributed to parental and family-related smoking issues and experiences. Data analysis followed multi-level analytic coding procedures congruent with interpretive qualitative analysis and ethnography [28,29,39,40]. First-level analysis involved isolating concepts or patterns referred to as domains. Second-level analysis involved organizing domains: through processes of comparing, contrasting, and integrating, items were organized, associated with other items, and linked into higher-order patterns. The third level of analysis involved identifying attributes in each domain, and the last level involved discovering relationships among the domains to create themes. Various strategies were used to enhance the rigor of the research process, including prolonged engagement with participants and data, careful line-by-line transcript analysis, and detailed memo writing throughout the research process [42]. The researchers independently identified theme areas, then jointly refined and linked analytic themes and categories. Discussion of initial interpretations with the youth themselves occurred during the second interviews, which also helped reveal new data and support emerging themes.

Results: Smoking was one of the dominant lines of discourse across the sample of youth’s narratives of cancer and cancer prevention. Age, gender, smoking status (i.e., smoker or non-smoker), and place of residency did not influence the story line. Youth were considerably more knowledgeable about the association between smoking and cancer and about anti-tobacco messages than about any other cancer-related topic. Several youth photographed their own hand-drawn facsimiles or public signage depicting familiar anti-smoking slogans (Figure 1).

Figure 1. Anti-Smoking Sign: represented youth’s desire that adults should stop smoking.
"When I walk around there’s like no smoking area signs. I think “Oh this is safe,” like it is good to know that there won’t be like smoke around for me to breathe in. [14-year-old male]"

Youth in this study were well informed of how smoking could impact one’s health (e.g., increasing the potential for cancer and other chronic illnesses). Of special importance were youth’s perspectives and experiences of parents and other family members smoking around them, as represented by the primary theme, It’s not fair, and four subthemes: parenting the parent about the dangers of smoking; the good/bad parent; distancing family relationships; and the prisoner. Each of these themes is further discussed.

It’s not fair

Overall, youth viewed their parents and other family members smoking around children as something unjust. The phrase “It’s not fair” was frequently expressed by youth in this study, as illustrated in the following comment:

"Because the kids around parents who smoke have to breathe in, they have to breathe in all of it …and like, if parents want to smoke then they should like go outside because it’s not fair to the kids…Probably because they always have to be around it if their mother always smokes every time they’re taking a bath or every time they’re like colouring a picture like every time they do anything, they always have to breathe in the bad- like bad air that’s filled with smoke and stuff like that. And it’s not fair to them. [12-year-old female]"

Youth could not make sense of why parents would smoke around their children. They were also unsure how to deal with what they saw as an act of injustice to children. They struggled with how the smoking made them feel, recognizing that their roles as children limited their ability to influence their parents’ behaviour. Their attempts to reconcile their feelings and deal with the unfairness through specific behaviours are further apparent in the following four subthemes.

Parenting the parent about the dangers of smoking

Although youth did share stories of parents talking to them about the importance of not smoking, this was not the major family narrative. Instead of being talked to about smoking, it was more common for youth to share stories of themselves talking to, educating, and even preaching to their parents about the dangers of smoking. In short, youth took it upon themselves to parent their parents.

"But I’m getting my mom and my step-dad to quit…by talking to them, telling them how it makes all of us kids feel…Yeah, reading like everything the packages say or what the internet says or like what I learn from it and then they’re all just thinking and then they’re saying “well I won’t do that much then.
I’ll try to quit.” Now they’re trying to quit but it’s not working for my mom but it is working for my step-dad. [14-year-old female]"

In addition to talking to their parents about how they felt about them smoking, some youth would also take action to reduce their parents’ ability to smoke.

"Like I have just tried, because I just tell my parents straight up to stop and…I always try to, like, hide their stuff on them but it doesn’t work. They get mad. [13-year-old male]"

Talking to their parents about the dangers of smoking arose out of youth’s worries for their parents’ health. The concern that youth had for parents who smoked was in fact one of the reasons youth decided to participate in the study. Youth were looking for answers that could possibly help them get their parents to stop smoking. Concern about their parents’ smoking was also strongly depicted in the photographs. One 13-year-old female took a picture of an ashtray full of cigarettes and said,

"I see them (ashtrays with cigarette butts) all the time. It would be different if it was like you know once in a while kind of thing I probably wouldn’t mind that much, but my parents smoke in the house and in the car and everywhere so it’s kind of I don’t know I wish they would stop (Figure 2)."

Figure 2. Ashtray: represented youth’s concern for parents.

Youth who feared that their family members might die, or who had family members who died because of smoking-related illnesses (e.g., lung cancer), especially shared their concerns and would try to make their parents feel guilty.

"I always tell them things to make them guilty I’m always like, “Do you want to meet my children, do you want to bring me down the aisle?” It’s working actually. I think my dad said my mom was talking about it so…Ever since my mom started again I really felt it hit me cause, like I want my mom to meet my children and she has to see me get married and have kids and I want my mom to be there.
[16-year-old female]"

Youth also provided many stories of their parents’ attempts to quit smoking. Their stories demonstrated how smoking was embedded within their family’s history and identity, as well as how parents’ smoking played a role in their child’s life.

"Well I was really happy when my father quit because he had been smoking since like, I don’t know, before he was even a teenager. He was really young and he said he’d quit sort of when I was born like he’d smoke outside and he’d reduce it, but then when I got old enough he’d continued smoking, and my mom was the same way. She was also a smoker, but she quit like maybe I was five… [15-year-old female]"

"My grandpa passed away a couple of years ago and he died of emphysema and a little bit after he got sick I should say he quit smoking and whenever my aunties and mom smoked, but they don’t anymore, then he would always tell them “Well you should quit because look what’s happening to me.” And that really pushed my mom to quit. [17-year-old female]"

While youth were persistent in trying to convince their parents to stop smoking, most felt that their parents would not quit despite their efforts. A sense of helplessness was apparent.

"Well, my parents like they are smoking and if I tell them not to they’re not going to listen because they’re like “We are your parents, you’re not our parents!” [18-year-old male]"

"Like my mom and my dad both smoke and I’d like to tell them to stop and to show them…they don’t really care…I don’t know, like influencing somebody not to smoke is a lot different than I guess them already smoking and quitting. Because like, I mean obviously everything I’ve seen I’m never going to smoke, but I mean it doesn’t really influence my parents. [13-year-old male]"

The good/bad parent

A second sub-theme involved a moral tone in youth’s conversations with respect to how they viewed their parents and other family members who did or did not smoke. On one hand, youth perceived parents who did not smoke as doing the right thing and as part of their parents’ overall plan of keeping themselves and their children healthy.

"Like my parents don’t smoke, they don’t like do drugs or anything like that and they do like everything possible to stay to like be healthy and stuff and to keep me healthy and stuff like that. [12-year-old female]"

In contrast, youth were especially disapproving of parents who did smoke.

"Like if a mother wanted to have a baby so badly in the first place then she should have known that she’s not supposed to drink or she’s not supposed to smoke or she’s not supposed to do any type of drugs or anything…and they don’t know how bad it actually is for the kids who have to breathe it in. [14-year-old female]"

Parents’ second-hand smoking was seen by youth as parents “doing” to their children. “Doing” was viewed in a negative sense, where children were put in a dangerous situation in which they had little choice or control, as depicted by the following quote and photo.
"I guess for people with families already, like what it’s doing to their family or that second-hand smoking can be almost as bad as actually smoking like what are you doing to the people around you if you’re choosing to smoke. [17-year-old female] (Figure 3)"

Figure 3. Adult smoking beside a young child: represented parents “doing” to children; children have no choice.

Essentially, youth felt that second-hand smoke was more dangerous than first-hand smoke, as one 15-year-old male noted: “This stuff (second-hand smoke) does not get filtered through the back of the butt, it just comes out clear not filtered.” Youth expressed concern for how second-hand smoking impacted them and their siblings.

"Both my parents smoke so I don’t like it too much because the smell is kind of it bugs me and you know I don’t know because so many people talk about smoking is related to cancer and that kind of thing so I’m kind of scared. I’m scared for myself and for my parents. But I’m more scared for like my brother than I am for me because I can leave more often than my brother’s allowed to… so. Either like my brother developing the habit or something or like him getting cancer because he’s around it too much. [13-year-old female]"

Many youth whose parents did not smoke currently or had never smoked were concerned about the effects of second-hand smoke on their friends (whose parents smoked). These youth spoke about their friends’ situations vicariously. Their comments in the interviews arose from their extended empathy towards their peers and their peers’ siblings.

"Um, I’m pet sitting for a friend while she’s in Florida and her parents are usually always smoking or getting ready to light another cigarette and so I went there it just smells so bad in their house and I feel sorry for her cause she’s got a sister in kindergarten and it’s her in grade nine and her parents are smoking and their dogs in there and cat and it just sticks to the furniture and it just smells smoke.
[14-year-old female]"

Youth felt that parents who smoked were poor role models and that their behaviour could influence their child’s desire to smoke.

"Cause when children see, children do, right? Yeah, so lots of kids when your parents smoke when you’re like in grade two or something and kids get the idea that it’s cool or like whatever and then they want to be like their parents cause they think their parents are awesome. So then they start doing everything their parents do and then they start smoking… [14-year-old female]"

"Or sometimes you can get addicted to smoking if both your parents smoke a lot and then like my cousin who smokes uses that excuse cause I’m like “Why do you smoke, that’s disgusting!” And he says “Well both my parents smoke so I started.” I don’t think his parents are a very good influence since they both smoke. And I’m not sure if they ever told him not to smoke, but maybe they just accepted his smoking not saying it is bad or anything. Like if my kid started smoking I would get mad. [13-year-old male]"

Parents who smoked were also considered by youth to be less reliable and credible when talking to their children about the dangers of smoking.

"When my parents found out I tried it (smoking) once, they knew that they couldn’t do anything cause since they were smoking too! [13-year-old female]"

In general, parents and other adult family members who smoked were viewed by youth to be weak in character.

"So like one year he (family relative) came out from Ontario with his wife and my grandma and it was pouring rain and he decided to go outside for a smoke, so he really ran across our yard and hid in the shed and smoked. I’m like, it’s pathetic. [14-year-old female]"

It was evident from the youth’s narratives that parents’ smoking behaviours were unacceptable and should never be tolerated.

"And I mean people really need to kind of jump on it and say don’t do this around your kids because it will affect them.
Don’t do this around any young child because young children are really open to being affected by something like that and so I think definitely being careful about where you’re smoking or something like that is definitely a really big factor. [16-year-old female]"

Distancing family relationships

A third sub-theme that emerged was one where youth were separating themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family.

"Youth: And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone.
Interviewer: Okay. So like what problems?
Youth: Like family issues.
Interviewer: So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things?
Youth: Cause I already have enough family problems. I guess I don’t want anymore. [14-year-old female]"

The discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from family members who smoked.

"I live with my grandparents. They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]"

At times, being around parents who smoked resulted in feelings of worry and frustration for youth.
"Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]" "I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]" Youth were also sensitive how their negative reactions and behaviours towards family members who smoked could result in hurt feelings. "Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]" Many youth described family tension and conflict because a family member was smoking. "It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]" "And like my cousin, her parents smoke. They quit. She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. 
[12-year-old female]" Feelings of anger were also associated with family members who smoked. One youth who had an extended family member who died from lung cancer was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death. "I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]" Notably, of all the feelings expressed by youth in the study, it was a deep sense of sadness that was most apparent in their narratives with respect to families who had a history of smoking. The sadness was in relation to a past or future loss of family members. For some youth, the sadness was physically evident through youth crying and holding back tears while being interviewed. Some held the view that smoking was the defining feature of their families that ultimately would lead to its destruction (Figure  4). Funeral Sign. Represented smoking as a sign of cancer and death. "Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]" "Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]" A third sub-theme that emerged was one where youth were separating themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family. 
"Youth:And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone.Interviewer:Okay. So like what problems?Youth:Like family issues.Interviewer:So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things?P:Cause I already have enough family problems. I guess I don’t want anymore. [14-year-old female]" The discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from smoker family members. "I live with my grandparents. They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]" At times, being around parents who smoked resulted in feelings of worry and frustration for youth. "Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]" "I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. 
I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]" Youth were also sensitive how their negative reactions and behaviours towards family members who smoked could result in hurt feelings. "Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]" Many youth described family tension and conflict because a family member was smoking. "It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]" "And like my cousin, her parents smoke. They quit. She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. [12-year-old female]" Feelings of anger were also associated with family members who smoked. One youth who had an extended family member who died from lung cancer was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death. "I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? 
[15-year-old female]" Notably, of all the feelings expressed by youth in the study, it was a deep sense of sadness that was most apparent in their narratives with respect to families who had a history of smoking. The sadness was in relation to a past or future loss of family members. For some youth, the sadness was physically evident through youth crying and holding back tears while being interviewed. Some held the view that smoking was the defining feature of their families that ultimately would lead to its destruction (Figure  4). Funeral Sign. Represented smoking as a sign of cancer and death. "Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]" "Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]" The prisoner The final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke. "Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. 
[16-year-old female]" "Whenever he smokes I’m like in a car with him or in the house with him. He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]" Within their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours and rights to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes. They experienced their own, and witnessed their siblings’ exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. They perceived it as unfair and just had to put up with it. The final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke. "Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. [16-year-old female]" "Whenever he smokes I’m like in a car with him or in the house with him. 
He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]" Within their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours and rights to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes. They experienced their own, and witnessed their siblings’ exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. They perceived it as unfair and just had to put up with it. It’s not fair: Overall, youth viewed their parents and other family members smoking around children as something unjust. The phrase “It’s not fair” was frequently expressed by youth in this study as illustrated in the following comment, "Because the kids around parents who smoke have to breathe in, they have to breathe in all of it …and like, if parents want to smoke then they should like go outside becauseit’s not fair to the kids…Probably because they always have to be around it if their mother always smokes every time they’re taking a bath or every time they’re like colouring a picture like every time they do anything, they always have to breathe in the bad- like bad air that’s filled with smoke and stuff like that. Andit’s not fairto them. [12-year-old female]" Youth could not make sense of why parents would smoke around their children. They also were unsure with how to deal with what they saw was an act of injustice to children. 
They struggled with how the smoking made them feel, recognizing that their roles as children limited their ability to influence their parents’ behaviour. Their attempts to reconcile their feelings and deal with the unfairness through specific behaviours are further apparent in the following four subthemes. Parenting the parent about the dangers of smoking: Although youth did share stories of parents talking to them about the importance of not smoking, this was not the major family narrative. Instead of being talked to about smoking, it was more common for youth to share stories in which they themselves talked to, educated, and even preached to their parents about the dangers of smoking. In short, youth took it upon themselves to parent their parents. "But I’m getting my mom and my step-dad to quit…by talking to them, telling them how it makes all of us kids feel…Yeah, reading like everything the packages say or what the internet says or like what I learn from it and then they’re all just thinking and then they’re saying “well I won’t do that much then. I’ll try to quit.” Now they’re trying to quit but it’s not working for my mom but it is working for my step-dad. [14-year-old female]" In addition to talking to their parents about how they felt about them smoking, some youth also would take action to reduce their parents’ ability to smoke. "Like I have just tried, because I just tell my parents straight up to stop and…I always try to, like, hide their stuff on them but it doesn’t work. They get mad. [13-year-old male]" Talking to their parents about the dangers of smoking arose out of youth’s worries for their parents’ health. The concern that youth had for parents who smoked was in fact one of the reasons youth decided to participate in the study. Youth were looking for answers that could possibly help them to get their parents to stop smoking. Concerns about their parents’ smoking were also strongly depicted in the photographs.
One 13-year-old female took a picture of an ashtray full of cigarettes and said, "I see them (ashtrays with cigarette butts) all the time. It would be different if it was like you know once in a while kind of thing I probably wouldn’t mind that much, but my parents smoke in the house and in the car and everywhere so it’s kind of I don’t know I wish they would stop (Figure  2)." Ashtray. Represented youth’s concern for parents. Youth who feared that their family members might die, or who had family members who died because of smoking-related illnesses (e.g., lung cancer), especially shared their concerns and would try to make their parents feel guilty. "I always tell them things to make them guilty I’m always like, “Do you want to meet my children, do you want to bring me down the aisle?” It’s working actually. I think my dad said my mom was talking about it so…Ever since my mom started again I really felt it hit me cause, like I want my mom to meet my children and she has to see me get married and have kids and I want my mom to be there. [16-year-old female]" Youth also provided many stories of their parents’ attempts to quit smoking. Their stories demonstrated how smoking was embedded within their family’s history and identity, as well as how parents’ smoking played a role in their child’s life. "Well I was really happy when my father quit because he had been smoking since like, I don’t know, before he was even a teenager. He was really young and he said he’d quit sort of when I was born like he’d smoke outside and he’d reduce it, but then when I got old enough he’d continued smoking, and my mom was the same way. 
She was also a smoker, but she quit like maybe I was five… [15-year-old female]" "My grandpa passed away a couple of years ago and he died of emphysema and a little bit after he got sick I should say he quit smoking and whenever my aunties and mom smoked, but they don’t anymore, then he would always tell them “Well you should quit because look what’s happening to me.” And that really pushed my mom to quit. [17-year-old female]" While youth were persistent in trying to convince their parents to stop smoking, most youth felt that their parents would not quit despite their efforts. A sense of helplessness was apparent. "Well, my parents like they are smoking and if I tell them not to they’re not going to listen because they’re like “We are your parents, you’re not our parents!” [18-year-old male]" "Like my mom and my dad both smoke and I’d like to tell them to stop and to show them…they don’t really care…I don’t know, like influencing somebody not to smoke is a lot different than I guess them already smoking and quitting. Because like, I mean obviously everything I’ve seen I’m never going to smoke, but I mean it doesn’t really influence my parents. [13-year-old male]" The good/bad parent: A second sub-theme involved a moral tone in youth’s conversations with respect to how they viewed their parents and other family members who did or did not smoke. On one hand, youth perceived parents who did not smoke as doing the right thing and as part of their parents’ overall plan of keeping themselves and their children healthy. "Like my parents don’t smoke, they don’t like do drugs or anything like that and they do like everything possible to stay to like be healthy and stuff and to keep me healthy and stuff like that. [12-year-old female]" In contrast, youth were especially disapproving of parents who did smoke. 
"Like if a mother wanted to have a baby so badly in the first place then she should have known that she’s not supposed to drink or she’s not supposed to smoke or she’s not supposed to do any type of drugs or anything…and they don’t know how bad it actually is for the kids who have to breathe it in. [14-year-old female]" Parents’ second-hand smoking was seen by youth as parents “doing” to their children. “Doing” was viewed in a negative sense where children were put in a dangerous situation in which they had little choice or control as depicted by the following quote and photo. "I guess for people with families already, like what it’s doing to their family or that second-hand smoking can be almost as bad as actually smoking like what are you doing to the people around you if you’re choosing to smoke. [17-year-old female] (Figure  3)." Essentially, youth felt that second-hand smoke was more dangerous than first-hand smoke as one 15-year-old male noted, “This stuff (second-hand smoke) does not get filtered through the back of the butt, it just comes out clear not filtered.” Youth expressed concern for how second-hand smoking impacted them and their siblings. "Both my parents smoke so I don’t like it too much because the smell is kind of it bugs me and you know I don’t know because so many people talk about smoking is related to cancer and that kind of thing so I’m kind of scared. I’m scared for myself and for my parents. But I’m more scared for like my brother than I am for me because I can leave more often than my brother’s allowed to… so. Either like my brother developing the habit or something or like him getting cancer because he’s around it too much. [13-year-old female]" Adult smoking beside a young child. Represented parents “doing” to children; children have no choice. Many youth, whose parents did not smoke currently or had never smoked, were concerned about the effects of second-hand smoke on their friends (whose parents smoked). 
These youth spoke about their friends’ situations vicariously. Their comments in the interviews arose from their extended empathy towards their peers and their peers’ siblings. "Um, I’m pet sitting for a friend while she’s in Florida and her parents are usually always smoking or getting ready to light another cigarette and so I went there it just smells so bad in their house and I feel sorry for her cause she’s got a sister in kindergarten and it’s her in grade nine and her parents are smoking and their dogs in there and cat and it just sticks to the furniture and it just smells smoke. [14-year-old female]" Youth felt that parents who smoked were poor role models and that their behaviour could influence their child’s desire to smoke. "Cause when children see, children do, right? Yeah, so lots of kids when your parents smoke when you’re like in grade two or something and kids get the idea that it’s cool or like whatever and then they want to be like their parents cause they think their parents are awesome. So then they start doing everything their parents do and then they start smoking… [14-year-old female]" "Or sometimes you can get addicted to smoking if both your parents smoke a lot and then like my cousin who smokes uses that excuse cause I’m like”Why do you smoke, that’s disgusting!” And he says “Well both my parents smoke so I started.” I don’t think his parents are a very good influence since they both smoke. And I’m not sure if they ever told him not to smoke, but maybe they just accepted his smoking not saying it is bad or anything. Like if my kid started smoking I would get mad. [13-year-old male]" Parents who smoked were also considered by youth to be less reliable and credible when talking to their children about the dangers of smoking. "When my parents found out I tried it (smoking) once, they knew that they couldn’t do anything cause since they were smoking too! 
[13-year-old female]" In general, parents and other adult family members who smoked were viewed by youth to be weak in character. "So like one year he (family relative) came out from Ontario with his wife and my grandma and it was pouring rain and he decided to go outside for a smoke, so he really ran across our yard and hid in the shed and smoked. I’m like, it’s pathetic. [14-year-old female]" It was evident from the youth’s narratives that parents’ smoking behaviours were unacceptable and should never be tolerated. "And I mean people really need to kind of jump on it and say don’t do this around your kids because it will affect them. Don’t do this around any young child because young children are really open to being affected by something like that and so I think definitely being careful about where you’re smoking or something like that is definitely a really big factor. [16-year-old female]" Distancing family relationships: A third sub-theme that emerged was one where youth were separating themselves, both physically and emotionally, from their family members. In addition, youth associated smoking with causing emotional stress or strain on the family. "Youth:And they always get problems because of it so I kind of don’t want to have to deal with all those problems. And all that stress and everything so I’m just going to like leave it alone.Interviewer:Okay. So like what problems?Youth:Like family issues.Interviewer:So like you said that they have family issues and stuff, why is it important to you not to have that in your life like those things?P:Cause I already have enough family problems. I guess I don’t want anymore. [14-year-old female]" The discussion with youth revealed that smoking had, in varying degrees, disrupted family relationships. Just the presence of family members smoking around them resulted in youth altering their behaviour and wanting to physically distance themselves from smoker family members. "I live with my grandparents. 
They make me supper and then I have to usually eat in my room because they smoke, and I don’t like the smell of smoke when I’m eating. I hate the smell and it’s just I grew up my whole life with it and I just think I just see my grandma a lot of my family members have like my great-grandma had passed away with lung cancer and stuff. I just think it’s bad. [17-year-old female]" At times, being around parents who smoked resulted in feelings of worry and frustration for youth. "Well my step-dad smokes and he’s always saying, “No, it won’t happen to me, it won’t happen to me!” And he actually has a really high chance of catching it cause he started smoking when he was really young and he continued smoking and, uh, he still thinks it won’t happen and doesn’t believe any of the commercials or the ads on that stuff. He just keeps going so… [17-year-old male]" "I was riding with my dad and he was smoking. I was like, “Do you have to do that when we’re in the car?” Like I get so bugged by it when people do it. It’s like. “Look at the cigarette box!” I get so mad. I was like…like when we were getting out of the car and I said, “Oh can you just not do that?” I walked ahead of him and he said, “I’m sorry.” It’s like, “Okay!” [15-year-old female]" Youth were also sensitive to how their negative reactions and behaviours towards family members who smoked could result in hurt feelings. "Whenever they smoke around me I just like take a shirt or something and just like cover my mouth and nose and my brothers and sisters are doing that too. Yeah, so just to try and keep it away. My parents don’t mind, but I’m pretty sure it hurts their feelings or something. [13-year-old female]" Many youth described family tension and conflict because a family member was smoking. "It can really bring your family down, if smoking hurts someone in your family. Um, it could really cause a lot of tension there… [16-year-old male]" "And like my cousin, her parents smoke. They quit.
She helped them quit but then they started again and then she started crying and crying and crying and crying, and then she’s scared that her parents are going to die from lung disease or a lung cancer and she always cries when they do that and then one time they said “It’s our choice if we do this. It’s not your fault if we die or not and they said that they only smoke once a day, not too much.” Now she’s still gets mad but she doesn’t have temper tantrums anymore. [12-year-old female]" Feelings of anger were also associated with family members who smoked. One youth who had an extended family member who died from lung cancer was upset with a son-in-law who continued to smoke in spite of his father-in-law’s death. "I saw him smoking and I was like “Why? Like you saw what your father-in-law went through. Why are you still doing that?!” And it, it really angers me. Like I don’t know, just even talking about it gets me so mad like you’re seeing all these things like you know it happens. Why are you going to ruin that? [15-year-old female]" Notably, of all the feelings expressed by youth in the study, it was a deep sense of sadness that was most apparent in their narratives with respect to families who had a history of smoking. The sadness was in relation to a past or future loss of family members. For some youth, the sadness was physically evident through youth crying and holding back tears while being interviewed. Some held the view that smoking was the defining feature of their families that ultimately would lead to its destruction (Figure  4). Funeral Sign. Represented smoking as a sign of cancer and death. "Lots of my family smokes and I’m worried about them getting cancer and then not surviving it. [16-year-old female]" "Okay, I took two pictures of smokes cause the first reinforces that smoking could lead to lung cancer. And the second is cause it relates to me and my life because my mom smokes. 
My grandpa smokes, my grandma smokes, my aunty smokes because a lot of people smoke in my family. Well I feel sad that she probably could die soon like she maybe diagnosed with cancer like any time because she smokes a lot. [13-year-old male]" The prisoner: The final sub-theme that emerged in the study was a sense of resigned acceptance, powerlessness, and being held as a prisoner. Ultimately, the unjust nature of parents smoking in the family home was truly felt by those youth who described having little choice but to feel trapped inside the smoke. "Um, yeah. Well I had to stop volleyball and taekwondo for a little while because my knees were really bad and I have been experiencing like a hard time breathing. But I think that’s particularly because after my dad died my mom let these people move in and the guy smoked a lot and I wasn’t used to that amount of smoke in my personal area. Like downstairs was all mine before, but then I was close to my bedroom and his smoke would come in my room. So I was I was stuck with that all the time. [16-year-old female]" "Whenever he smokes I’m like in a car with him or in the house with him. He’s always supposed to go outside of the house to smoke, but when I’m in the car with him I roll down the windows so I don’t have to breath in the smoke and I just go on with him and like, “Okay, you can do whatever you want, then I’m just going to do what I want to do.” [17-year-old male]" Within their home (and while travelling in vehicles with their parents), youth had few ways of escaping the second-hand smoke and little, if any, influence over their parents’ smoking behaviours or over their own right to live in a pollution-free environment. Some even described how they had to cover their nose and mouth when walking through their house. These youth were like prisoners within their homes.
They experienced their own, and witnessed their siblings’, exposure to second-hand smoke, but felt they were unable to help and protect themselves, let alone their siblings. Youth expressed feeling caught in an unpleasant situation which was difficult to escape. They perceived it as unfair and just had to put up with it. Discussion: Across the sample, smoking was one of the dominant lines of discourse in the youth’s narratives of cancer and cancer prevention. This is not surprising and is consistent with a recent study examining perceived smoking-related adverse effects, where youth consistently rated lung cancer as being most concerning [2]. It appears that youth are connecting smoking with cancer risk. Youth’s discourse (as reinforced through their photographs) reflected the broad public health messages conveyed by anti-tobacco and cancer prevention campaigns, suggesting that youth are not passive and ill-informed with respect to tobacco use and health messages. Instead of being preached to by parents about the dangers of smoking, it was youth themselves who were speaking out against smoking. To date, very few studies have reported such behaviour. Ours is one of the few studies to detail youth perceptions of parental smoking and second-hand smoke and their association with health concerns and family relationships. Although there is research indicating that youth believe parents have an obligation to do all that they can to support their children to not start smoking [13], this study revealed that youth are taking responsibility by parenting the parent about the dangers of smoking. Youth in this study demonstrated a high awareness of the dangers of smoking. They expressed fear and concern about the health-related effects of smoking, especially regarding second-hand smoke in their families. Regardless of whether there was a history of smoking in their families, all youth were worried about the dangers of second-hand smoking for themselves and other family members.
Moreover, their awareness and concern extended beyond their own family unit. Our study supports previous research reinforcing that youth continue to be frequently exposed to second-hand smoke in their homes and in cars, even though most youth do not approve of it [19]. Overall, youth viewed the smoking behaviours of significant adults in their lives as an unjust act that all adults should be aware of. Although the concern for protecting children from second-hand smoke should be about recognizing children's rights over adult smokers' rights, research has shown that the responses of adults to smoking bans in homes and cars have not always been met with compliance and acceptance [17,18]. In a study that investigated New Zealand policymakers’ views on the regulation of smoking in private and public places, findings revealed that policymakers were more apt to defer to a smoker’s right to smoke than to the protection of children from second-hand smoke [43]. In that same study, some participants suggested that the successful regulation of smoking around children in private places will require reconstructing the culture around smoking so that any smoking around children would be considered unacceptable. Our current study adds to the literature as it identifies the various strategies youth use to deal with second-hand smoke in their home environments. In addressing second-hand smoke from their parents and other family members, youth opposed their parents’ smoking behaviours verbally or through their body language. Youth tried to cope with the situation in a variety of ways, such as distancing themselves from family members, eating meals in their rooms, smoking with family members, and physically covering their faces to avoid exposure to second-hand smoke. However, despite their attempts, youth were forced to accept the fact that smoking was a defining feature of their families. 
Results suggest a great toll on youth’s emotional state and their ability to cope with their parents’ health-compromising behaviours. Although research discusses how parents’ smoking-related behaviours influence the behaviour of youth, there is minimal discussion of how youth view the impact of smoking on the family unit. Using qualitative methods, this study showed how smoking was an agent that altered youth’s relationships with family members, usually in a negative way. This is in contrast to a qualitative study by Passey and colleagues that explored the social context of smoking among Aboriginal women and girls [8]. The researchers found that for young girls, smoking was an important way of maintaining relationships within the extended family and community. Sharing a cigarette was seen as a social activity within the family that built a sense of belonging. However, in our study, the smoking behaviours of parents and other family members tended to produce feelings of concern, hurt, resentment, and a detachment from the family. The positive or negative impact that cigarette smoking can have on relationships suggests there is a strong relational component to the act of smoking. While the focus on the dangers of second-hand smoke has mainly been on the physical health of children, our study reinforces how the harmful effects of parental smoking also extend to youth’s emotional well-being. Overall, youth reported experiencing stress, worry, helplessness, anxiety, and fear for their health and the health of friends whose parents smoked. These youth experienced a heavy emotional burden to parent the parent and carry concerns about siblings. Youth living in families where adult members smoke are least likely to have control over whether they are exposed to second-hand smoke; this places them at risk not only for physical health problems but for emotional distress as well. 
This study calls attention to the need for future research exploring how the emotional well-being of youth living in homes with adults who smoke may be dismissed or go unrecognized. Although research has examined some of the health-related effects of having substance-abusing parents, it has for the most part overlooked the detrimental effects on children and family functioning where parents use more socially “acceptable” addictive substances such as tobacco and nicotine. The literature is robust in its findings about the negative effects of living with a parent who has an addiction to alcohol. For example, adolescents of alcoholic parents reported more mental health difficulties (including emotional symptoms) and other behavioural problems (e.g., poor academic performance and conduct problems) compared to a control group [44]. Children growing up in substance-abusing families have been shown to have a disrupted family life with increased family conflict, and may be at greater risk for developing alcohol-related, drug-related, and behavioural problems [45]. Tobacco addiction is a major public health concern. The findings emerging from this study reinforce the need for public health action in three areas. First, more public health-related research is warranted that examines youth’s perceptions about their life circumstances growing up in families where their parents smoke. Further research is needed to investigate possible linkages between youth exposed to second-hand smoke in their home environment and emotional and lifestyle-related health difficulties. Research is also needed to investigate how youth’s perceptions about being exposed to parental smoking behaviours and second-hand smoke affect family relations and youth development. 
A report by the Children’s Mental Health Policy Research Program at a Canadian university reinforces that research evidence on children's mental health needs may be best informed and strengthened by the participation and experiences of children and their families [46]. Creating a scientific base on youth’s perspectives of their health and well-being, in the context of youth living with parents who smoke, is a critical step to improving and supporting youth's physical, mental, and emotional health. Second, the findings also raise the issue of attending to the emotional as well as the physical needs of children who reside in smoking households. We need to assist youth who live with second-hand smoke in their homes and who worry about the health effects on their parents and other family members. Public health programs and policies are needed that help to empower youth who live in families in which parents and other family members smoke. Song et al. recommend that encouraging youth to express their objections to second-hand smoke, as well as encouraging smoke-free homes, may be powerful tobacco control strategies against youth smoking [47]. In addition to controlling smoking within households, the findings from this study may be used to move forward tobacco control programs and policies designed to prevent parents and other adults from smoking around youth in locations outside the home where parents and youth interact. A comprehensive tobacco control program should support the need for more smoke-free public places including patios, playgrounds, sports fields, beaches, provincial parks, public events, and building perimeters [48]. Recognizing the potential toll that parental smoking can have on youth’s emotional well-being, community-based programs are needed to help youth experiencing stress due to their concerns about the dangers of second-hand smoke. 
Few supports are provided for youth of tobacco-addicted parents, especially for non-smoking youth who are experiencing distress because of their parents’ self-harming behaviours. Health care professionals can also be encouraged to give youth positive messages for coping with their parents’ smoking behaviour. Addressing youth’s concerns and distress related to second-hand smoke is essential for youth to thrive both physically and mentally. A final area for public health action is that anti-tobacco messaging to adult smokers needs to emphasize the relational component of smoking, the vulnerable hidden population of children in the smoking household, and how parental smoking can lead to family stress and negative health consequences for their children. Messages should include scenarios where youth feel distressed and trapped within their own homes. As well, messages that show concern for youth's health as well as the health of their siblings and parents may prove to be of value in getting the attention of parents and other adults who smoke. In a study by Nilsson and Emmelin, Swedish youth who smoked felt strongly that parents had a duty to care and an obligation to do all they could to support their children not to start smoking: "It's a parental duty" [14]. Messages such as those voiced by youth in Nilsson and Emmelin’s study, as well as those conveyed in the current study, could potentially be powerful tools in smoking prevention and cessation programs. This study’s findings are relevant to issues of childhood agency discussed by others [49]. Panel discussions with experts in tobacco control and community development revealed themes of children's levels of agency and their power in reducing their exposure to second-hand smoke in the home [49]. In fact, children were seen as potential agents of change, and it was suggested that the voices of children towards their caregivers are potentially central in creating smoke-free homes. 
Youth are often afforded little opportunity to have their voices heard, and it is noteworthy that some youth in the study did not feel they could speak their mind to their parents about their parents' smoking habits. Allowing and encouraging these youth voices to be heard is a matter of social justice. Strengths and limitations of the study: Using a qualitative research approach afforded the opportunity to understand youth from their own frames of reference and experiences of reality. The findings reported here add to the existing literature by providing a richer description of youth’s experiences and beliefs about smoking. Limitations of our study included a sample that was primarily female (72%) and in the younger and middle age range of youth, with only 17% of participants being 17 years and older. The smaller numbers of male and older youth could explain why we did not detect differences based on age or gender. Despite striving for a diverse sample, we were unable to obtain diversity in ethnic backgrounds and socioeconomic status. As well, most youth in this study did not smoke. Future work that accounts for the limitations in the study’s sample might yield additional perspectives on the relational aspects of smoking that warrant tailoring smoking cessation programs and policies to address the differences. As well, longitudinal work is recommended, as the cross-sectional nature of our study did not afford us an understanding of how the perspectives of youth change over time. 
Conclusion: This study revealed that while youth often feel trapped by others smoking in their home and powerless to stop this behaviour, they took the position of educating, trying to influence, and ultimately protecting their parents regarding the harmful effects of smoking and second-hand smoke. The findings reinforce that more needs to be done in strengthening environments where youth can grow and flourish. Upholding the right of youth to live in clean, healthy, and unpolluted environments is fair public health policy. As one youth from our study assertively stated, parents and all adults should “just stop smoking cause it could affect your kid’s life and yours!” Competing interests: The authors declare that they have no competing interests. Authors’ contributions: Study design RLW; supervision of data collection RLW; analysis and interpretation of the data RLW, CK; manuscript preparation RLW, CK. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/965/prepub
Background: Successful cancer prevention policies and programming for youth must be based on a solid understanding of youth's conceptualization of cancer and cancer prevention. Accordingly, a qualitative study examining youth's perspectives of cancer and its prevention was undertaken. Not surprisingly, smoking (i.e., tobacco cigarette smoking) was one of the dominant lines of discourse in the youth's narratives. This paper reports findings of how youth conceptualize smoking, with attention to their perspectives on parental and family-related smoking issues and experiences. Methods: Seventy-five Canadian youth ranging in age from 11-19 years participated in the study. Six of the 75 youth had a history of smoking and 29 had parents with a history of smoking. Youth were involved in traditional ethnographic methods of interviewing and photovoice. Data analysis involved multiple levels of analysis congruent with ethnography. Results: Youth's perspectives of parents' and other family members' cigarette smoking around them were salient, as represented by the theme: It's not fair. Youth struggled to make sense of why parents would smoke around their children and perceived their smoking as an unjust act. The theme was supported by four subthemes: 1) parenting the parent about the dangers of smoking; 2) the good/bad parent; 3) distancing family relationships; and 4) the prisoner. Instead of being talked to about smoking, it was more common for youth to share stories of talking to their parents about the dangers of smoking. Parents who did not smoke were seen by youth as the good parent, as opposed to the bad parent who smoked. Smoking was an agent that altered relationships with parents and other family members. Youth who lived in homes where they were exposed to cigarette smoke felt like trapped prisoners. 
Conclusions: Further research is needed to investigate youth's perceptions about parental cigarette smoking, as well as possible linkages between youth exposed to second-hand smoke in their home environment and emotional and lifestyle-related health difficulties. Results emphasize the need to consider the relational impact of smoking when developing anti-tobacco and cancer prevention campaigns. Recognizing the potential toll that second-hand smoke can have on youth's emotional well-being, health care professionals are encouraged to give youth positive messages for coping with their parents' smoking behaviour.
Background: Lung cancer is considered one of the most preventable types of cancer. While smoking (i.e., tobacco cigarette smoking) increases the risk of many forms of cancer, it is the predominant risk factor for lung cancer, accounting for about 80% of lung cancer cases in men and 50% in women worldwide [1]. Despite recent evidence that lung cancer is a high health risk concern to youth [2], adolescent smoking remains a public health problem. In 2010, 12% of Canadian youth aged 15 to 19 years smoked [3]. Although this was the lowest level of youth smoking recorded for that age group, the decline in youth smoking rates has slowed [3] and youth smoking rates in some countries are rising [4]. In the United States, the surgeon general reported that more than 600,000 middle school students and 3 million high school students smoke cigarettes [5]. Also troubling is the fact that while smoking rates for youth are down, “of every three young smokers, only one will quit, and one of those remaining smokers will die from tobacco-related causes” ([5], p. 9). Adult smokers frequently report having started smoking as youth. Among smokers aged 15–17, almost 80% said they had tried smoking by age 14 [6], with females (aged 15–17) having had their first cigarette at age 12.9 years and males at age 13.3 years [7]. Global trends also reveal that smoking is increasing in developing countries due to the adoption of Western lifestyle habits. As well, lung cancer rates are increasing in some countries (e.g., China, Korea, and some African countries) and are expected to continue to rise for the next few decades [1]. Smoking has been shown to be a relational and learned behaviour, especially influenced by the family [8]. Regarding youth health behaviours, it has been suggested that the family, especially parents, is one of the dominant arenas in which youth are influenced. 
It has been established that youth are most likely to smoke if they have been exposed to smoking, or come from a family in which their parents smoked [9]. Findings from a Canadian Community Health Survey revealed that in 2011, youth (aged 15–17) residing in households where someone smoked regularly were three times more likely to smoke (22.4% versus 7.0%) [10]. One qualitative study reported that Aboriginal adolescent girls often smoke because smoking is normalized and reinforced by families: they see family members smoking in the home, they are not discouraged from smoking, and in some cases, parents facilitate adolescents’ access to cigarettes [11]. Youth smoking behaviour has also been linked to a range of other parental influences. For example, using a nationally representative United States sample, Powell and Chaloupka [12] studied specific parenting behaviours and the degree to which high school students felt their parents’ opinions about smoking influenced their decision to smoke. The authors identified that certain parenting practices (i.e., parental smoking, setting limits on youth’s free time, in-home smoking rules, quality and frequency of parent–child communication), as well as how much youth value their parents’ opinions about smoking, strongly influenced youth’s decisions to smoke. The evidence indicates that while parents exert a strong influence on youth smoking, they can also act as a protective factor against youth smoking [8,12-14], especially when non-smoking rules are in place [15], including eliminating smoking in the home [12]. Clark et al. [15] revealed that even if parents themselves smoked, banning smoking in the home and speaking against smoking reduced the likelihood that youth would smoke. Similarly, other research on household smoking rules found fewer adolescent smoking behaviours in homes with strict anti-smoking rules [16]. 
While there is empirical evidence that parents greatly influence their children’s smoking behaviours, we know little about the dynamics involved in the parent-adolescent relationship regarding smoking in the home, or about how youth perceive their parents’ approval or disapproval of smoking behaviours. Few studies have addressed how children or youth feel about adult smoking. In addition to findings that parental smoking may be related to the initiation of smoking in youth, there is increasing concern about the health risks of second-hand smoke. Along with anti-smoking legislation in public spaces, attention has been aimed at protecting children from second-hand smoke and recognizing the risks involved in exposure to second-hand smoke in non-public places. Second-hand smoking rates and non-smoking rules, for example, have been examined in family homes and cars [17-20]. In 2006, 22.1% of Canadian youth in grades 5 through 12 were exposed to smoking in their home on a daily or almost daily basis and 28.1% were exposed to smoking while riding in a car on a nearly weekly basis [19]. In the 2008 Canadian survey, the rate of exposure for 12–19 year olds (16.8%) was almost twice as high as the Canadian average [21]. New Zealand national surveys indicate that while exposure to second-hand smoke has decreased since 2000, youth’s perceptions revealed that exposure still remained at 35% (in-home exposure) and 32% (in-vehicle exposure) [22]. The effects of parental smoking and of maintaining a smoke-free environment have also received attention in areas such as prenatal and newborn care [23] and, later, poor respiratory symptoms and outcomes [24]. Studies have also begun emphasizing home smoking bans and the perceived dangers of the less visible but harmful exposure of children to third-hand smoke [23,25]. 
Although the current literature offers insights about the physical effects of second-hand smoke, how second-hand smoke affects family relationships is unclear, and research on what youth think about adult family members' smoking remains in its infancy. This paper draws on data from a larger qualitative study that sought to extend our limited understanding of youth’s perspectives of cancer and cancer prevention. It aims to explore how youth conceptualize smoking within the context of their own life-situations, with attention to their perspectives on parental and family-related smoking issues and experiences.
Keywords: Youth | Cigarette smoking | Second-hand smoke | Parents | Cancer prevention | Qualitative research
MeSH terms: Adolescent | Air Pollution, Indoor | Attitude to Health | Canada | Child | Family | Family Relations | Female | Humans | Male | Narration | Parenting | Parents | Qualitative Research | Smoking | Tobacco Smoke Pollution | Young Adult
Bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters: implications for influenza pandemic plans.
PMID: 25347949
Background: There is concern about avian influenza virus (AIV) infections in humans. Subsistence hunters may be a potential risk group for AIV infections, as they frequently come into close contact with wild birds and the aquatic habitats of birds while harvesting. This study aimed to examine whether knowledge and risk perception of avian influenza influenced the use of protective measures and attitudes about hunting influenza-infected birds among subsistence hunters.
Methods: Using a community-based participatory research approach, a cross-sectional survey was conducted with current subsistence hunters (n = 106) residing in a remote and isolated First Nations community in northern Ontario, Canada from November 10-25, 2013. Simple descriptive statistics, cross-tabulations, and analysis of variance (ANOVA) were used to examine the distributions and relationships between variables. Written responses were deductively analyzed.
Results: ANOVA showed that males hunted significantly more birds per year than did females (F(1,96) = 12.1; p = 0.001) and that those who hunted significantly more days per year did not perceive a risk of AIV infection (F(1,94) = 4.4; p = 0.040). Hunters engaged in bird harvesting practices that could expose them to AIVs, namely by cleaning, plucking, and gutting birds and having direct contact with water. It was reported that 18 (17.0%) hunters wore gloves and 2 (1.9%) hunters wore goggles while processing birds. The majority of hunters washed their hands (n = 105; 99.1%) and sanitized their equipment (n = 69; 65.1%) after processing birds. More than half of the participants reported being aware of avian influenza, while almost one third perceived a risk of AIV infection while harvesting birds. Participants aware of avian influenza were more likely to perceive a risk of AIV infection while harvesting birds. Our results suggest that knowledge positively influenced the use of a recommended protective measure. Regarding attitudes, the frequency of participants who would cease harvesting birds was highest if avian influenza were detected in regional birds (n = 55; 51.9%).
Conclusions: Our study indicated a need for more education about avian influenza and precautionary behaviours that are culturally appropriate. First Nations subsistence hunters should be considered an avian influenza risk group and should have associated special considerations included in future influenza pandemic plans.
[ "Adolescent", "Adult", "Aged", "Animals", "Birds", "Community-Based Participatory Research", "Cross-Sectional Studies", "Female", "Health Knowledge, Attitudes, Practice", "Humans", "Influenza A virus", "Influenza in Birds", "Inuit", "Male", "Middle Aged", "Ontario", "Pandemics", "Zoonoses" ]
4223741
Background
Influenza A viruses may cause pandemics at unpredictable, irregular intervals resulting in devastating social and economic effects worldwide [1]. Wild aquatic birds in the orders Anseriformes and Charadriiformes are the natural hosts for influenza A viruses; these viruses have generally remained in evolutionary stasis and are usually non-pathogenic in wild birds [2, 3]. Most avian influenza viruses (AIVs) primarily replicate in the intestinal tract of wild birds and are spread amongst birds via an indirect fecal-oral route involving contaminated aquatic habitats [4]. Humans who are directly exposed to the tissues, secretions, and excretions of infected birds or water contaminated with bird feces can become infected themselves [2, 4, 5]. The transmission of an AIV from a bird to a human has significant pandemic potential as it may result in the direct introduction of a novel virus strain or allow for the creation of a novel virus strain via reassortment [3, 5]. The transmission of AIVs from birds to humans depends on many factors, such as the susceptibility of humans to the virus and the frequency and type of contact [2, 5]. Most AIVs are generally inefficient in infecting humans; however, there have been documented cases of AIVs transmitting directly from infected birds to humans [6, 7]. During the 1997 Hong Kong “bird flu” incident, there was demonstrated transmission of highly pathogenic avian influenza (HPAI) A virus (H5N1) from infected domesticated chickens to humans [3]. More recently, some Asian countries have reported human infections of avian influenza A virus (H7N9) with most patients having a history of exposure to live poultry in wet markets [8]. 
As such, most pandemic plans include special considerations (e.g., enhanced surveillance, prioritization for vaccination, and antiviral prophylaxis) for avian influenza risk groups that include humans who come in close, frequent contact with domestic birds, such as farmers, poultry farm workers, veterinarians, and livestock workers [9, 10]. Longitudinally migrating wild birds appear to play a primary role in influenza transmission and there is increased concern about the introduction of HPAI virus strains in North America from Eurasia, as migratory flyways around the world intersect [3, 4]. Thus, bird hunters may also be at risk as hunting and processing practices directly expose them to the bodily fluids of wild birds and water potentially contaminated with bird feces [5, 11]. Although the risk of AIV infection while hunting and processing wild birds is assumed to be very low [5], transmission has been previously reported. One study reported serologic evidence of past AIV infection in a recreational duck hunter and two wildlife professionals, inferring direct transmission of AIVs from wild birds to humans [12]. Another study reported that recreational waterfowl hunters were eight times more likely to be exposed to avian influenza-infected wildlife compared to occupationally-exposed people and the general public [13]. A study conducted in rural Iowa, USA, reported that participants who hunted wild birds had increased antibody titers against avian H7 influenza virus [14]. Further, in the Republic of Azerbaijan, HPAI H5N1 infection in humans is suspected to be linked to defeathering infected wild swans (Cygnus) [15]. Since handling wild birds and having contact with the aquatic habitats of wild birds are potential transmission pathways for AIV infections in hunters, it is important to better understand hunters’ knowledge and risk perceptions of avian influenza and include special considerations in pandemic plans. 
This is particularly important for some Canadian Aboriginal (First Nations, Inuit, and Métis) populations whose hunting of wild birds represents subsistence harvesting as opposed to a recreational activity [16]. Herein, subsistence harvesting will refer collectively to activities associated with hunting, fishing, trapping, and gathering of animals and other food for personal, family, and community consumption [17, 18]. The practice of subsistence harvesting for some Canadian Aboriginal populations, such as the Cree First Nations of the Mushkegowuk region, is culturally and economically important with the majority of hunters harvesting wild birds [17, 19]. Traditional land-based harvesting activities are economically valuable for the region and can reduce external economic dependence [17]. Moreover, as there are many physical, nutritional, and social benefits of this practice, it is a vital, well-established component of health and well-being in Canadian Aboriginal communities [20]. For instance, as Canadian Aboriginal populations, particularly those residing in geographically remote and isolated communities, experience a high prevalence of household food insecurity [21, 22], subsistence harvesting can provide an important source of healthy traditional foods and lessen the reliance on costly market foods. The potential of AIV infection while hunting and harvesting wild birds varies with geographical areas, seasons, and specific activities [5, 11, 12]. Moreover, previous studies have shown that knowledge and risk perception of avian influenza can positively influence compliance with recommended protective health behaviours [23, 24]. We conducted a cross-sectional survey of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters. 
The purpose of this study was to examine if knowledge and risk perception of avian influenza influenced the use of personal protection measures and attitudes about hunting influenza-infected birds. The implications for addressing the special considerations of Canadian First Nations subsistence hunters in pandemic plans will be discussed.
Methods
Community-based participatory research approach

The present study employed a community-based participatory research (CBPR) approach since the hallmark principles of CBPR can foster the engagement of Aboriginal populations and participatory methods have previously been a successful approach to partnering with Aboriginal communities [25–27]. As such, the research topic was locally relevant as it stemmed from previous research conducted in the region that explored culturally-appropriate measures to mitigate the effects of an influenza pandemic in the setting of a remote and isolated Canadian First Nations community [28]. Residents of the study community expressed questions and concerns about the transmission potential of AIVs from influenza-infected wild birds to subsistence hunters. Thus, the present study was specifically developed and conducted to address the identified questions and concerns. Following a CBPR approach, collaboration occurred throughout the research process between the researchers and a community-based advisory group (CBAG) comprised of two community representatives from the study community [29–31]. The two members of the CBAG were of First Nations heritage and were particularly interested in the topic at hand and desired to be involved. The CBAG helped design the study and was part of the iterative process of developing the survey questions and layout. The CBAG also provided input during the data analysis process, on the interpretation of results, and aided with disseminating the results to the community. CBPR endeavors aim to use the knowledge generated to achieve action-oriented outcomes for the involved community [29, 32]. At the request of the CBAG, the results of this study were disseminated via an oral presentation to community members during a lunch-and-learn activity in June 2014. An information sheet explaining avian influenza and recommended precautionary behaviours created by Health Canada was distributed to attendees [33].
Information about emerging avian influenzas that currently are of pandemic concern and the information sheet were also incorporated into the community’s influenza pandemic plan as a newly created appendix section. Approval to conduct this research was granted by the Office of Research Ethics at the University of Waterloo (ORE #16534), and was supported by the Band Council (locally elected First Nations government body) of the involved community.
Study area, population, and data collection

The study community (name omitted for anonymity purposes) is considered remote (i.e., nearest service center with year-round road access is located over 350 kilometers away) and isolated (i.e., accessible only by airplanes year-round) [10]. The Cree First Nations community belongs to the Mushkegowuk region which is located in northern Ontario, Canada along the western shores of James Bay and the southern portion of Hudson Bay [17, 19]. The region is a productive wildlife area and the majority of hunters partake in the spring and fall bird harvests [34]. The cross-sectional survey was conducted in English (as suggested by the CBAG) from November 10–25, 2013. The time period was chosen to maximize participation, as most hunters would have returned from fall hunting activities. The survey was based on previous literature [11] and was developed in collaboration with the CBAG to ensure that it adequately addressed the objectives of the study and was culturally-appropriate.
The survey employed closed-ended questions to gain a better understanding of First Nations hunters’ general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Open-ended questions were also included to allow for participants to describe their risk perceptions of AIV infection while harvesting birds as well as any additional concerns. Basic demographic questions to record the age and sex of participants were also included. Community First Nations subsistence hunters were invited to participate by the lead author (NAC) and a local community research assistant during individual meetings. The research assistant was of First Nations descent and a prominent Elder in the community. Being fluent in the Cree language, the assistant acted as a Cree translator upon request by the survey respondents. A current community housing list (updated in November 2013) which recorded all known community members living in First Nations (Band) households was used by the research assistant to identify eligible participants. Contemporary harvesting practices in the region typically involve multiple short trips versus traditional long trips [34]. To include as many hunters as possible from the study community, eligible participants were defined as current hunters, a group which included “intensive”, “active”, and “occasional” hunters (for definitions, see [17]). In addition to being a current hunter, participants were required to be First Nations (Band member), an adult (18 years old and over), and available to complete the survey in person during the study period to be eligible. Both male and female hunters were approached as it is widely recognized in Cree First Nations that both sexes play an important role while subsistence harvesting [35]. When approached, the participants were provided with an information/recruitment letter and the study was explained in English or Cree as required. 
Informed verbal consent was obtained, being culturally appropriate for the region [31, 36]. Incentives were not offered for participation. As participants preferred to complete the survey alone on their own time, a convenient time and location was arranged to collect the completed survey. Up to five follow-up visits and new survey copies were provided if the survey was not completed at the specified time and if the person was still interested in participating.
Data management and analyses

Collected surveys were coded by an identification number to maintain confidentiality of the participants. The CBAG was consulted to determine how to code inexact responses. Of note, it was decided that if a participant responded with a range of numbers, the median value was recorded. If a participant selected all of the possible response options or only provided a written response, the result was recorded as missing data. In instances where a pattern was observed amongst participants’ written responses, the responses were coded according to newly created response options approved by the CBAG to maintain the integrity of the data. Sample size for individual statistical analyses varied from 88 to 106, as not all participants answered each survey question; thus, presented percentages may not always equal 100% owing to missing data. Simple descriptive statistics were used to examine the distributions of variables pertaining to general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Cross-tabulations, as 2 × 2 contingency analyses, were used to examine the relationships between each of the main effects of sex, awareness of avian influenza, and risk perception of AIV infection by precautionary behaviours and attitudes about hunting influenza-infected birds. In instances where the expected cell count was less than five, the Fisher’s Exact Test was used in preference to the Pearson chi-square test. Absolute values greater than 1.96 of the adjusted standard residual (ASR) indicated a significant departure from the expected count and were therefore considered to be a major contributor to the observed chi-square result. The influence of outlier values for continuous dependent variables (age, years of hunting, days of hunting per year, birds hunted per year) was examined using boxplots of raw and log transformed data.
Owing to the presence of outlier values, we log-transformed values for days of hunting and number of birds hunted per year to satisfy the homogeneity of variance assumption of analysis of variance (ANOVA). It was decided that one individual’s improbable response for number of birds hunted per year should be removed as it continued to distort the results. Also, one individual’s response for years of hunting was recorded as missing data since the response did not reflect the age of the participant. Differences in mean values of these dependent variables between groups for sex, awareness of avian influenza, and risk perception of AIV infection were examined using ANOVA. Statistical results were considered to be significant at p < 0.05. Data analyses were carried out using SPSS version 22 (SPSS Inc., Chicago, Illinois, U.S.A). Written responses to the two open-ended questions and any additional comments were manually transcribed verbatim into electronic format to facilitate organization and coding. Qualitative coding of the transcribed data was conducted using QSR NVivo® version 9.2 (QSR International Pty Ltd., Doncaster, Victoria, Australia). Responses were deductively analyzed following a template organizing approach using the survey questions as a coding template [37, 38]. Analyzing the data was an iterative process conducted multiple times by the lead author (NAC) and findings were presented to the CBAG as a way of member checking to verify the results [37].
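As an illustrative sketch (not the study’s actual SPSS workflow), the log-transform-then-ANOVA step described above can be reproduced with SciPy. The data below are synthetic stand-ins, since the survey data are not public.

```python
# Illustrative one-way ANOVA on log-transformed counts, mirroring the
# analysis described above. All data are synthetic (assumed distributions),
# not the study's survey responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical "birds hunted per year" for two groups (right-skewed counts).
males = rng.lognormal(mean=3.8, sigma=0.8, size=80)
females = rng.lognormal(mean=3.2, sigma=0.8, size=26)

# Log-transform to better satisfy the homogeneity-of-variance assumption.
log_m, log_f = np.log(males), np.log(females)

f_stat, p_value = stats.f_oneway(log_m, log_f)
print(f"F(1,{len(males) + len(females) - 2}) = {f_stat:.1f}, p = {p_value:.3f}")
```

With only two groups, the one-way ANOVA is equivalent to a pooled-variance two-sample t-test (F = t²), which gives a quick sanity check on the result.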
Results
A total of 173 participants in the censused community were deemed eligible to participate given the inclusion criteria and of these, 126 received surveys, for a 73% contact rate. Of the 126 distributed surveys, 106 completed surveys were returned, representing an 84% cooperation rate. Overall, a response rate of 61% was achieved. Of the 106 community members that participated in the survey, 80 (75.5%) were male and 26 (24.5%) were female. The untransformed demographic and harvesting characteristics of the participants are presented in Table 1.

Table 1. Demographic and harvesting characteristics of Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                      n    Minimum  Maximum  Mean  Std. deviation
  Demographic information
    Age                               92   18       76       43.3  12.9
  Harvesting characteristics
    Years of hunting                  99   1        65       27.2  14.0
    Days of hunting per year          105  1        200      26.2  30.5
    Number of birds hunted per year   100  0        200      42.6  40.6

All who responded participated in the spring/summer hunting activities (n = 105; 99.1%), with fewer hunters participating during the fall (n = 57; 53.8%) and winter (n = 16; 15.1%) seasons. During these hunts, 98.1% of participants hunted Canada geese (Branta canadensis), 88.7% hunted various species of ducks (Anatinae), 69.8% hunted lesser snow geese (Anser c. caerulescens, also referred to as wavies), and 43.4% hunted species of shorebirds (Charadriiformes). While hunting, the majority of participants reported having direct contact with water (n = 89; 84.0%). Bird harvesting practices were generally similar whether camping in the bush or at home; thus, only results pertaining to camping in the bush are presented. In the bush, most hunters processed the birds themselves (n = 72; 67.9%) or a family member was involved (n = 67; 63.2%).
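The recruitment figures above follow from simple proportions on the reported counts; a brief arithmetic check (using only the counts given in this section):

```python
# Arithmetic check of the recruitment figures reported above.
eligible, contacted, completed = 173, 126, 106

contact_rate = contacted / eligible        # surveys distributed / eligible hunters
cooperation_rate = completed / contacted   # completed / distributed
response_rate = completed / eligible       # completed / eligible

print(f"contact {contact_rate:.0%}, cooperation {cooperation_rate:.0%}, "
      f"response {response_rate:.0%}")     # contact 73%, cooperation 84%, response 61%
```

Note that the response rate is the product of the contact and cooperation rates (0.728 × 0.841 ≈ 0.613).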
Most hunters partook in all of the bird processing activities in the bush; the percentages of participants who reported cleaning, plucking, and gutting the birds were 74.5%, 94.3%, and 77.4%, respectively. Regarding the use of precautionary measures while processing birds in the bush, 18 (17.0%) hunters wore gloves and 2 (1.9%) wore goggles. In the bush, the majority of hunters washed their hands (n = 105; 99.1%) and sanitized their equipment (n = 69; 65.1%) after processing birds. Moreover, about half of the participants (n = 50; 47.2%) reported receiving the annual vaccination against seasonal human influenza viruses (Figure 1).

Figure 1. Compliance with recommended protective health measures among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013.

The total frequency and percentage of participants’ knowledge of avian influenza, risk perception of AIV infection, and attitudes about hunting influenza-infected birds are presented in Table 2.
Approximately half of the participants (n = 56; 52.8%) reported being generally aware of avian influenza, but few were aware of the signs and symptoms of avian influenza in birds (n = 16; 15.1%) or humans (n = 9; 8.5%).

Table 2. Frequency and percentage(a) of knowledge of avian influenza, risk perception of avian influenza virus infection, and attitudes about hunting influenza-infected birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                                        All hunters          Males                Females
                                                        No (%)    Yes (%)    No (%)    Yes (%)    No (%)    Yes (%)
  Knowledge
    Aware of avian influenza                            49 (46.2) 56 (52.8)  37 (46.3) 42 (52.5)  12 (46.2) 14 (53.8)
    Aware of signs and symptoms in birds                89 (84.0) 16 (15.1)  67 (83.8) 12 (15.0)  22 (84.6) 4 (15.4)
    Aware of signs and symptoms in humans               95 (89.6) 9 (8.5)    74 (92.5) 4 (5.0)    21 (80.8) 5 (19.2)
  Risk perception
    Perceived risk of AIV infection                     68 (64.2) 29 (27.4)  52 (65.0) 23 (28.8)  16 (61.5) 6 (23.1)
  Attitudes
    Cease hunting if detected in North American birds   60 (56.6) 43 (40.6)  49 (61.3) 29 (36.3)  11 (42.3) 14 (53.8)
    Cease hunting if detected in Ontario birds          54 (50.9) 45 (42.5)  45 (56.3) 30 (37.5)  9 (34.6)  15 (57.7)
    Cease hunting if detected in regional birds         46 (43.4) 55 (51.9)  39 (48.8) 37 (46.3)  7 (26.9)  18 (69.2)

(a) Percentages may not always equal 100% owing to missing data.

Some participants (n = 29; 27.4%) perceived a risk of contracting avian influenza while harvesting birds.
“Just wondering every time we go out hunting geese in the spring, if any of the geese that come in [the] spring are carrying the flu” (Participant #41).

“Yes there is a risk [be]cause the birds [are] from the South … who knows what they’ll catch out there” (Participant #103).

“It will concern me if the bird flu is here on our Land and I wouldn’t be sure about hunting birds” (Participant #42).

On the other hand, many participants did not perceive a risk of AIV infection while harvesting birds, since local regional birds were not perceived to be infected with avian influenza.

“I thought there was only bird flu in Asia …” (Participant #24).

“If birds were sick, I don’t think they would make it this far [North]” (Participant #70).

“No reports that bird flu has arrived in this area and people are not getting sick” (Participant #36).

Detection of avian influenza in wild birds in nearby geographic areas would reportedly influence the participants’ harvesting behaviour. The frequency of participants who would cease harvesting birds was highest if avian influenza was detected in local regional birds (n = 55; 51.9%).
It was reported that 45 (42.5%) respondents would stop hunting if avian influenza was found in birds from within the Province of Ontario, and 43 (40.6%) respondents would stop hunting if the virus was found in North American birds. For all of the aforementioned scenarios, some participants added written responses indicating that they were not sure if they would stop hunting and requested relevant information. The majority of respondents also were interested in receiving information about avian influenza transmission (n = 83; 78.3%), flyways of migrating birds (n = 79; 74.5%), and precautions to minimize exposure (n = 82; 77.4%). ANOVA showed that males hunted significantly more birds per year than did females (F1,96 = 12.1; p = 0.001; Figure 2). No significant difference in mean values of age, years of hunting, and days of hunting per year was observed between males and females. ANOVA did not identify any significant differences in mean values of age, years of hunting, days of hunting per year, and number of birds hunted per year between those who were or were not aware of avian influenza. However, ANOVA did show that hunters who did not perceive a risk of AIV infection while harvesting birds hunted significantly more days per year (F1,94 = 4.4; p = 0.040; Figure 2). No significant difference in mean values of age, years of hunting, and number of birds hunted per year was observed between those who did or did not perceive a risk of AIV infection.

Figure 2. Analysis of variance for number of birds hunted per year by males and females (a) and number of days hunted per year by perceived risk of avian influenza virus infection while harvesting birds (b) among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013.
For all participants, in 2 × 2 contingency analysis, a significant dependence was observed between awareness of avian influenza and risk perception of AIV infection (Pearson χ2 = 4.456; p = 0.035) (Table 3). An ASR of +2.1 indicated that participants aware of avian influenza were significantly more likely to perceive a risk of AIV infection while harvesting birds. No significant dependence was seen between sex and awareness of avian influenza or sex and perceived risk of AIV infection.

Table 3. Cross-tabulation for awareness of avian influenza by risk perception of avian influenza infection while harvesting birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                       Perceived risk of AIV infection
Aware of avian influenza                 No        Yes       Total
No           Count                       37         9         46
             Adjusted residual          +2.1      -2.1
Yes          Count                       31        20         51
             Adjusted residual          -2.1      +2.1

A significant dependence was observed between sex and the attitude of ceasing hunting if influenza was detected in regional birds (Pearson χ2 = 4.123; p = 0.042) (Table 4). An ASR of -2.0 indicated that males were significantly less likely to stop hunting if influenza was detected in the local regional birds.
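The chi-square statistic and adjusted standardized residuals for Table 3 can be reproduced directly from the published counts. A minimal NumPy sketch (the ASR formula is the standard one: the raw residual divided by its estimated standard error under independence):

```python
import numpy as np

# Observed counts from Table 3: rows = aware of avian influenza (No/Yes),
# columns = perceived risk of AIV infection while harvesting birds (No/Yes)
obs = np.array([[37.0, 9.0],
                [31.0, 20.0]])

n = obs.sum()
row = obs.sum(axis=1, keepdims=True)      # row totals, shape (2, 1)
col = obs.sum(axis=0, keepdims=True)      # column totals, shape (1, 2)
exp = row @ col / n                       # expected counts under independence

# Pearson chi-square without continuity correction
chi2 = ((obs - exp) ** 2 / exp).sum()

# Adjusted standardized residuals:
# (O - E) / sqrt(E * (1 - row/n) * (1 - col/n))
asr = (obs - exp) / np.sqrt(exp * (1 - row / n) * (1 - col / n))
```

This recovers χ2 ≈ 4.456 and |ASR| ≈ 2.1 in every cell, matching the values reported above; an |ASR| greater than 1.96 flags a cell as a major contributor to the chi-square result.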
No significant dependence was observed between the two main effects of awareness of avian influenza and perceived risk of AIV infection by attitudes about hunting influenza-infected birds.

Table 4. Cross-tabulation for sex by cease hunting if influenza detected in regional birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                        Cease hunting if influenza detected in regional birds
Sex                                      No        Yes       Total
Male         Count                       39        37         76
             Adjusted residual          +2.0      -2.0
Female       Count                        7        18         25
             Adjusted residual          -2.0      +2.0

A significant dependence also was observed between awareness of avian influenza and the precautionary behaviour of sanitizing equipment after processing birds while camping in the bush (Pearson χ2 = 4.070; p = 0.044) (Table 5). An ASR of +2.0 indicated that a significantly greater frequency of aware participants were among those who cleaned their bird processing equipment. No significant dependence was observed between awareness of avian influenza by any of the other recommended precautions to be used while harvesting birds.
Moreover, no significant dependence was observed between the two main effects of sex and perceived risk of AIV infection by any of the precautionary behaviours.

Table 5. Cross-tabulation for awareness of avian influenza by sanitizing bird processing equipment in the bush among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                               Sanitize bird processing equipment in the bush
Aware of avian influenza                 No        Yes       Total
No           Count                       21        27         48
             Adjusted residual          +2.0      -2.0
Yes          Count                       14        42         56
             Adjusted residual          -2.0      +2.0
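The study's Methods state that Fisher's Exact Test was used in preference to the Pearson chi-square whenever an expected cell count fell below five. For a 2 × 2 table, the two-sided test can be sketched from the hypergeometric distribution; this is an illustrative implementation, not the SPSS routine the authors used:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    r1, c1 = a + b, a + c                 # first-row and first-column totals

    def p(x):                             # P(cell (1,1) = x) given the margins
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)

    p_obs = p(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    probs = [p(x) for x in range(lo, hi + 1)]
    return sum(q for q in probs if q <= p_obs + 1e-12)
```

For example, an extreme table such as [[3, 0], [0, 3]] gives p = 0.10, while a perfectly balanced [[1, 1], [1, 1]] gives p = 1.0.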
Conclusions
Our study aimed to gain an understanding of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters and provide recommendations for pandemic plans. The findings herein indicated that First Nations subsistence hunters partook in some practices while harvesting wild birds that could potentially expose them to avian influenza, although appropriate levels of compliance with some protective measures were reported. More than half of the respondents were generally aware of avian influenza and almost one third perceived a risk of AIV infection while harvesting birds. Participants aware of avian influenza were more likely to perceive a risk of AIV infection while harvesting birds. Our results suggest that knowledge positively influenced the use of a recommended protective measure. Regarding attitudes about hunting influenza-infected birds, our results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as AIV was detected in more nearby geographic areas. Given that the potential exposure to AIVs while hunting is assumed to be low but the cultural importance of subsistence hunting high, our study indicated a need for more education about avian influenza and culturally appropriate precautions that First Nations hunters can take to reduce the possibility of AIV exposure while harvesting wild birds. We posit that First Nations hunters should be considered an avian influenza risk group and have associated special considerations included in pandemic plans.
[ "Background", "Community-based participatory research approach", "Study area, population, and data collection", "Data management and analyses", "Harvesting activities", "Awareness, risk perception, and attitudes", "Recommendations for influenza pandemic plans", "Study strengths and limitations" ]
[ "Influenza A viruses may cause pandemics at unpredictable, irregular intervals resulting in devastating social and economic effects worldwide [1]. Wild aquatic birds in the orders Anseriformes and Charadriiformes are the natural hosts for influenza A viruses; these viruses have generally remained in evolutionary stasis and are usually non-pathogenic in wild birds [2, 3]. Most avian influenza viruses (AIVs) primarily replicate in the intestinal tract of wild birds and are spread amongst birds via an indirect fecal-oral route involving contaminated aquatic habitats [4]. Humans who are directly exposed to the tissues, secretions, and excretions of infected birds or water contaminated with bird feces can become infected themselves [2, 4, 5]. The transmission of an AIV from a bird to a human has significant pandemic potential as it may result in the direct introduction of a novel virus strain or allow for the creation of a novel virus strain via reassortment [3, 5].\nThe transmission of AIVs from birds to humans depends on many factors, such as the susceptibility of humans to the virus and the frequency and type of contact [2, 5]. Most AIVs are generally inefficient in infecting humans; however, there have been documented cases of AIVs transmitting directly from infected birds to humans [6, 7]. During the 1997 Hong Kong “bird flu” incident, there was demonstrated transmission of highly pathogenic avian influenza (HPAI) A virus (H5N1) from infected domesticated chickens to humans [3]. More recently, some Asian countries have reported human infections of avian influenza A virus (H7N9) with most patients having a history of exposure to live poultry in wet markets [8]. 
As such, most pandemic plans include special considerations (e.g., enhanced surveillance, prioritization for vaccination, and antiviral prophylaxis) for avian influenza risk groups that include humans who come in close, frequent contact with domestic birds, such as farmers, poultry farm workers, veterinarians, and livestock workers [9, 10].\nLongitudinally migrating wild birds appear to play a primary role in influenza transmission and there is increased concern about the introduction of HPAI virus strains in North America from Eurasia, as migratory flyways around the world intersect [3, 4]. Thus, bird hunters may also be at risk as hunting and processing practices directly expose them to the bodily fluids of wild birds and water potentially contaminated with bird feces [5, 11]. Although the risk of AIV infection while hunting and processing wild birds is assumed to be very low [5], transmission has been previously reported. One study reported serologic evidence of past AIV infection in a recreational duck hunter and two wildlife professionals, inferring direct transmission of AIVs from wild birds to humans [12]. Another study reported that recreational waterfowl hunters were eight times more likely to be exposed to avian influenza-infected wildlife compared to occupationally-exposed people and the general public [13]. A study conducted in rural Iowa, USA, reported that participants who hunted wild birds had increased antibody titers against avian H7 influenza virus [14]. Further, in the Republic of Azerbaijan, HPAI H5N1 infection in humans is suspected to be linked to defeathering infected wild swans (Cygnus) [15].\nSince handling wild birds and having contact with the aquatic habitats of wild birds are potential transmission pathways for AIV infections in hunters, it is important to better understand hunters’ knowledge and risk perceptions of avian influenza and include special considerations in pandemic plans. 
This is particularly important for some Canadian Aboriginal (First Nations, Inuit, and Métis) populations whose hunting of wild birds represents subsistence harvesting as opposed to a recreational activity [16]. Herein, subsistence harvesting will refer collectively to activities associated with hunting, fishing, trapping, and gathering of animals and other food for personal, family, and community consumption [17, 18]. The practice of subsistence harvesting for some Canadian Aboriginal populations, such as the Cree First Nations of the Mushkegowuk region, is culturally and economically important with the majority of hunters harvesting wild birds [17, 19]. Traditional land-based harvesting activities are economically valuable for the region and can reduce external economic dependence [17]. Moreover, as there are many physical, nutritional, and social benefits of this practice, it is a vital, well-established component of health and well-being in Canadian Aboriginal communities [20]. For instance, as Canadian Aboriginal populations, particularly those residing in geographically remote and isolated communities, experience a high prevalence of household food insecurity [21, 22], subsistence harvesting can provide an important source of healthy traditional foods and lessen the reliance on costly market foods.\nThe potential of AIV infection while hunting and harvesting wild birds varies with geographical areas, seasons, and specific activities [5, 11, 12]. Moreover, previous studies have shown that knowledge and risk perception of avian influenza can positively influence compliance with recommended protective health behaviours [23, 24]. We conducted a cross-sectional survey of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters. 
The purpose of this study was to examine if knowledge and risk perception of avian influenza influenced the use of personal protection measures and attitudes about hunting influenza-infected birds. The implications for addressing the special considerations of Canadian First Nations subsistence hunters in pandemic plans will be discussed.", "The present study employed a community-based participatory research (CBPR) approach since the hallmark principles of CBPR can foster the engagement of Aboriginal populations and participatory methods have previously been a successful approach to partnering with Aboriginal communities [25–27]. As such, the research topic was locally relevant as it stemmed from previous research conducted in the region that explored culturally-appropriate measures to mitigate the effects of an influenza pandemic in the setting of a remote and isolated Canadian First Nations community [28]. Residents of the study community expressed questions and concerns about the transmission potential of AIVs from influenza-infected wild birds to subsistence hunters. Thus, the present study was specifically developed and conducted to address the identified questions and concerns.\nFollowing a CBPR approach, collaboration occurred throughout the research process between the researchers and a community-based advisory group (CBAG) comprised of two community representatives from the study community [29–31]. The two members of the CBAG were of First Nations heritage and were particularly interested in the topic at hand and desired to be involved. The CBAG helped design the study and was part of the iterative process of developing the survey questions and layout. The CBAG also provided input during the data analysis process, on the interpretation of results, and aided with disseminating the results to the community. CBPR endeavors aim to use the knowledge generated to achieve action-oriented outcomes for the involved community [29, 32]. 
At the request of the CBAG, the results of this study were disseminated via an oral presentation to community members during a lunch-and-learn activity in June 2014. An information sheet explaining avian influenza and recommended precautionary behaviours created by Health Canada was distributed to attendees [33]. Information about emerging avian influenzas that currently are of pandemic concern and the information sheet were also incorporated into the community’s influenza pandemic plan as a newly created appendix section.\nApproval to conduct this research was granted by the Office of Research Ethics at the University of Waterloo (ORE #16534), and was supported by the Band Council (locally elected First Nations government body) of the involved community.", "The study community (name omitted for anonymity purposes) is considered remote (i.e., nearest service center with year-round road access is located over 350 kilometers away) and isolated (i.e., accessible only by airplanes year-round) [10]. The Cree First Nations community belongs to the Mushkegowuk region which is located in northern Ontario, Canada along the western shores of James Bay and the southern portion of Hudson Bay [17, 19]. The region is a productive wildlife area and the majority of hunters partake in the spring and fall bird harvests [34].\nThe cross-sectional survey was conducted in English (as suggested by the CBAG) from November 10–25, 2013. The time period was chosen to maximize participation, as most hunters would have returned from fall hunting activities. The survey was based on previous literature [11] and was developed in collaboration with the CBAG to ensure that it adequately addressed the objectives of the study and was culturally-appropriate. The survey employed closed-ended questions to gain a better understanding of First Nations hunters’ general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. 
Open-ended questions were also included to allow for participants to describe their risk perceptions of AIV infection while harvesting birds as well as any additional concerns. Basic demographic questions to record the age and sex of participants were also included.\nCommunity First Nations subsistence hunters were invited to participate by the lead author (NAC) and a local community research assistant during individual meetings. The research assistant was of First Nations descent and a prominent Elder in the community. Being fluent in the Cree language, the assistant acted as a Cree translator upon request by the survey respondents. A current community housing list (updated in November 2013) which recorded all known community members living in First Nations (Band) households was used by the research assistant to identify eligible participants. Contemporary harvesting practices in the region typically involve multiple short trips versus traditional long trips [34]. To include as many hunters as possible from the study community, eligible participants were defined as current hunters, a group which included “intensive”, “active”, and “occasional” hunters (for definitions, see [17]). In addition to being a current hunter, participants were required to be First Nations (Band member), an adult (18 years old and over), and available to complete the survey in person during the study period to be eligible. Both male and female hunters were approached as it is widely recognized in Cree First Nations that both sexes play an important role while subsistence harvesting [35].\nWhen approached, the participants were provided with an information/recruitment letter and the study was explained in English or Cree as required. Informed verbal consent was obtained, being culturally appropriate for the region [31, 36]. Incentives were not offered for participation. 
As participants preferred to complete the survey alone on their own time, a convenient time and location was arranged to collect the completed survey. Up to five follow-up visits and new survey copies were provided if the survey was not completed at the specified time and if the person was still interested in participating.", "Collected surveys were coded by an identification number to maintain confidentiality of the participants. The CBAG was consulted to determine how to code inexact responses. Of note, it was decided that if a participant responded with a range of numbers, the median value was recorded. If a participant selected all of the possible response options or only provided a written response, the result was recorded as missing data. In instances where a pattern was observed amongst participants’ written responses, the responses were coded according to newly created response options approved by the CBAG to maintain the integrity of the data.\nSample size for individual statistical analyses varied from 88 to 106, as not all participants answered each survey question; thus, presented percentages may not always equal 100% owing to missing data. Simple descriptive statistics were used to examine the distributions of variables pertaining to general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Cross-tabulations, as 2 × 2 contingency analyses, were used to examine the relationships between each of the main effects of sex, awareness of avian influenza, and risk perception of AIV infection by precautionary behaviours and attitudes about hunting influenza-infected birds. In instances where the expected cell count was less than five, the Fisher’s Exact Test was used in preference to the Pearson chi-square test. 
Absolute values greater than 1.96 of the adjusted standard residual (ASR) indicated a significant departure from the expected count and therefore considered to be a major contributor to the observed chi-square result.\nThe influence of outlier values for continuous dependent variables (age, years of hunting, days of hunting per year, birds hunted per year) was examined using boxplots of raw and log transformed data. Owing to the presence of outlier values, we log-transformed values for days of hunting and number of birds hunted per year to satisfy the homogeneity of variance assumption of analysis of variance (ANOVA). It was decided that one individual’s improbable response for number of birds hunted per year should be removed as it continued to distort the results. Also, one individual’s response for years of hunting was recorded as missing data since the response did not reflect the age of the participant. Differences in mean values of these dependent variables between groups for sex, awareness of avian influenza, and risk perception of AIV infection were examined using ANOVA. Statistical results were considered to be significant at p < 0.05. Data analyses were carried out using SPSS version 22 (SPSS Inc., Chicago, Illinois, U.S.A).\nWritten responses to the two open-ended questions and any additional comments were manually transcribed verbatim into electronic format to facilitate organization and coding. Qualitative coding of the transcribed data was conducted using QSR NVivo® version 9.2 (QSR International Pty Ltd., Doncaster, Victoria, Australia). Responses were deductively analyzed following a template organizing approach using the survey questions as a coding template [37, 38]. 
Analyzing the data was an iterative process conducted multiple times by the lead author (NAC) and findings were presented to the CBAG as a way of member checking to verify the results [37].", "As mentioned, the potential of AIV infection while hunting and processing wild birds varies with specific practices, seasons, and geographical areas [5, 11, 12]. The hunters reported being in frequent contact with wild birds, as some participants hunted for more than 100 days per year and harvested up to 200 birds per year. Our findings indicated that First Nations subsistence hunters were involved in bird harvesting practices, such as processing the birds and having direct contact with water in the bush, that pose an increased hazard to AIV infections among this subpopulation. The main proposed pathway of transmission of AIV to humans is close contact between the tissues, secretions, and excretions of an infected bird and the respiratory tract, gastrointestinal tract, or conjunctiva of a human [2, 7, 39]. Infected birds shed copious amounts of virus particles in their feces which can also contaminate the environment and bodies of water [40, 41]. Our findings revealed that the majority of hunters had direct contact with water and cleaned, plucked, and gutted the wild birds themselves. If processing an influenza-infected wild bird in this manner, hunters may be exposed to virus-laden tissues, secretions, and excretions [2, 5]. The use of personal protective equipment was not routine practice as most hunters did not wear gloves and goggles to protect themselves while processing birds. However, most hunters reported using other measures of personal protection, such as washing their hands and cleaning their equipment, which can limit post-harvest AIV exposure.\nThe timing of the hunters’ bird harvesting activities in relation to when the prevalence peaks for AIVs and human influenza viruses is of particular interest. 
Similar to previous reports, our study revealed that the majority of hunters were involved in the spring and fall bird harvests [16, 19, 34]. The timing of these harvests is in relation to freeze-up and break-up events in the region which varies every year, but generally runs from April to October [42]. During these harvests, participants reported hunting migratory wild birds that are potential carriers of AIVs as all known influenza A virus subtypes have been identified in these birds [3, 43]. For instance, in North American wild ducks, AIV prevalence peaks around late summer/early fall prior to south bound migration, with highest virus isolation rates reported in juvenile ducks [44, 45]. On the other hand, previous studies have reported relatively low prevalence of AIVs in Canada geese regardless of the season [45, 46]. Moreover, in Canada, the peak season of influenza A infection in humans typically runs from November to April [33]. Similar to another study, our results suggest that the possibility of co-infection with AIVs and human influenza viruses resulting in a reassortment event is unlikely as the timing of the hunters’ potential exposure to AIVs is different from that of seasonal human influenza viruses [5].\nBased on previous studies, the surveyed participants generally hunt for wild birds around the southwestern coast of Hudson Bay and the western coast of James Bay which is along the Mississippi migratory flyway [3, 34, 47, 48]. Migratory flyways around the world intersect, particularly between eastern Eurasia and Alaska and between Europe and eastern North America, raising concerns about the exchange of AIVs between the Eurasian and American virus superfamilies [3, 43]. Intercontinental exchange of entire AIV genomes has not yet been reported and Eurasian HPAI virus subtypes have not been previously detected in North American migratory birds [43, 49]. 
However, reassortment events between the two lineages has been reported, notably in Alaska and along the northeastern coast of Canada [43, 49–51]. These observations suggest that the introduction of a novel AIV is more likely to occur along the Pacific and Atlantic coasts of North America, but once introduced, it has been suggested that migration to major congregation sites may disperse the novel AIV across flyways [49, 51, 52].", "Approximately half of our study participants were generally aware of avian influenza (52.8%), which is lower than previous studies conducted with bird hunters in the USA (86%) and poultry workers in Nigeria (67.1%) and Italy (63.8%) [11, 23, 53]. Similar to a previous study, our findings indicated that a general awareness of avian influenza was more common among the surveyed bird hunters compared to knowledge of the signs and symptoms [11]. Previous studies conducted with high-risk populations in Thailand and Laos also reported limited knowledge of the key signs and symptoms of avian influenza [54, 55]. Almost one third of surveyed participants perceived a risk of contracting avian influenza while hunting and processing birds which is similar to the values found in other studies [24, 56].\nOur results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as AIV was detected in more nearby geographic areas. This observation aligns with findings from a previous study; however, the percentage of hunters who would stop was relatively higher in our study as only 3% and 19% of active duck hunters in Georgia, USA reported that they would stop hunting if HPAI were found in duck populations in USA and the state of Georgia, respectively [11]. 
This result is interesting as harvesting activities are integral to First Nations’ culture and an important source of healthy food, especially in communities experiencing food insecurity [17, 20, 22].\nOur findings suggested that being aware of avian influenza or perceiving a risk of AIV infection did not influence the hunters’ decision to cease harvesting influenza-infected birds. However, those who were knowledgeable were more likely to clean their equipment after processing birds in the bush. This finding suggests that First Nations hunters are not only willing to use precautionary measures while harvesting birds, but that improving their knowledge level may lead to an increased use of recommended precautionary measures. Previous studies also found that knowledge and perception of risk was a significant determinant of greater compliance with recommended protective measures [23, 24]. However, in our study, being knowledgeable or perceiving risk did not always result in greater use of protective measures. Moreover, in general, the limited use of gloves and goggles while processing harvested birds was noted. These observations may be explained by the protection motivation theory which states that complying with a recommended protective health behavior is influenced by risk perception as well as efficacy variables, including response efficacy (i.e., whether the recommended measure is effective) and self-efficacy (i.e., whether the person is capable of performing the recommended measure) [57–59]. According to this theory, risk perception will generate a willingness to act, but efficacy variables will determine whether the resulting action is adaptive or maladaptive [57, 58]. 
In our study, those who perceived a risk may have doubted the effectiveness of recommended measures and/or had low self-efficacy owing to limited access to resources and ability to afford supplies required to implement the measures [60].", "These data support previous findings which suggest that bird hunting and processing activities may potentially expose individuals to avian influenza [5, 11–14]. Acknowledging the various benefits and cultural importance of subsistence harvesting [17, 20], while taking into account the increased hazard of potential AIV exposure in First Nations hunters, their inclusion as an avian influenza risk group with associated special considerations in pandemic plans seems warranted. The potential for a novel AIV to be introduced into an Aboriginal Canadian population is of great concern as they face many health disparities and are particularly susceptible to influenza and related complications [61]. Moreover, previous influenza pandemics have disproportionately impacted Aboriginal Canadians, especially those populations living in geographically remote communities, and reflected inadequacies in preparedness with regards to addressing their pre-existing inequalities and special needs during a pandemic [62–65].\nEfforts should be directed towards improving education for First Nations hunters regarding avian influenza and the hazard posed by AIVs while harvesting wild birds. More specifically, our results indicated that educational endeavours should include information regarding the signs and symptoms of avian influenza, transmission dynamics, flyways of migrating birds, and recommended precautionary measures (Table 6). Accordingly, access to supplies required to comply with recommended protective measures, such as cleaning solutions and gloves, should be improved for First Nations subsistence hunters. 
Moreover, our findings suggested that detection of avian influenza in wild birds in nearby geographic areas would influence the participants’ harvesting behaviour. Given this, we recommend that a culturally-appropriate communication system be implemented to promptly inform subsistence hunters and other community members of the findings and any associated recommendations.Table 6\nRecommended precautions for Canadian First Nations subsistence hunters to reduce exposure to avian influenza viruses while harvesting wild birds (adapted from [\n[33]])\n-Do not touch or eat sick birds or birds that have died for unknown reasons-Avoid touching the blood, secretions, or dropping of wild game birds-Do not rub your eyes, touch your face, eat, drink or smoke when processing wild game birds-Keep young children away when processing wild game birds and discourage them from playing in areas that could be contaminated with wild bird droppings-When preparing game, wash knives, tools, work surfaces, and other equipment with soap and warm water followed by a household bleach solution (0.5% sodium hypochlorite)-Wear water-proof household gloves or disposable latex/plastic gloves when processing wild game birds-Wash gloves and hands (for at least 20 seconds) with soap and warm water immediately after you have finished processing game or cleaning equipment. 
If there is no water available, remove any dirt using a moist towelette, apply an alcohol-based hand gel (between 60-90% alcohol) and wash your hands with soap and water as soon as it is possible-Change clothes after handling wild game birds and keep soiled clothing and shoes in a sealed plastic bag until they can be washed-When cooking birds, the inside temperature should reach 85°C for whole birds or 74°C for bird parts (no visible pink meat and juice runs clear)-Never keep wild birds in your home or as pets-Receive the annual influenza vaccine-If you become sick while handling birds or shortly afterwards, see your doctor and inform your doctor that you have been in close contact with wild birds.\n\nRecommended precautions for Canadian First Nations subsistence hunters to reduce exposure to avian influenza viruses while harvesting wild birds (adapted from [\n[33]])\n", "To our knowledge, this is the first study to examine the knowledge and risk perceptions of avian influenza among Canadian First Nations subsistence hunters. The censused approach taken to select participants and the high contact and cooperation rates strengthen the assertion that our findings are representative of the study community. Also, in accordance with a CBPR approach, the CBAG was involved throughout the entire research process, thereby ensuring that the study was conducted in a culturally-appropriate manner and that the knowledge generated was used to directly benefit the involved community.\nDespite the novelty and significance of our findings, some limitations of our study must be highlighted when interpreting our results. First, the analysis was based on a cross-sectional survey of self-reported data which may limit drawing definitive conclusions about the observed relationships. The biases in recalling and reporting cannot be entirely ruled out; however, to help alleviate the potential for biased responses, participants were assured that their responses would remain anonymous. 
Also, it is not possible to discern whether those who did not return the survey or refused to participate were different in any way from those who did participate. However, there is no obvious reason to suspect that non-respondents and people who chose not to participate were any different from the respondents.\nFuture research should examine the prevalence of AIVs, particularly those strains that are currently of concern to humans (e.g., H5, H7), in birds from within the Mushkegowuk Territory that are typically harvested. Also, analyzing the sera for antibodies against AIV subtypes would be helpful to evaluate if previous AIV infections occurred in First Nation subsistence hunters. Moreover, conducting a quantitative exposure assessment would provide information to help characterize the study population’s exposure potential to AIVs. Lastly, previous research has noted that various barriers impede the effectiveness of implementing recommended pandemic mitigation measures [60]. Thus, future research should aim to understand if any barriers exist with regards to complying with recommended precautions to reduce exposure to AIVs while harvesting birds and if measures need to be adapted to be more context-specific and culturally-appropriate, while still maintaining the effectiveness of the measure." ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Community-based participatory research approach", "Study area, population, and data collection", "Data management and analyses", "Results", "Discussion", "Harvesting activities", "Awareness, risk perception, and attitudes", "Recommendations for influenza pandemic plans", "Study strengths and limitations", "Conclusions" ]
[ "Influenza A viruses may cause pandemics at unpredictable, irregular intervals resulting in devastating social and economic effects worldwide [1]. Wild aquatic birds in the orders Anseriformes and Charadriiformes are the natural hosts for influenza A viruses; these viruses have generally remained in evolutionary stasis and are usually non-pathogenic in wild birds [2, 3]. Most avian influenza viruses (AIVs) primarily replicate in the intestinal tract of wild birds and are spread amongst birds via an indirect fecal-oral route involving contaminated aquatic habitats [4]. Humans who are directly exposed to the tissues, secretions, and excretions of infected birds or water contaminated with bird feces can become infected themselves [2, 4, 5]. The transmission of an AIV from a bird to a human has significant pandemic potential as it may result in the direct introduction of a novel virus strain or allow for the creation of a novel virus strain via reassortment [3, 5].\nThe transmission of AIVs from birds to humans depends on many factors, such as the susceptibility of humans to the virus and the frequency and type of contact [2, 5]. Most AIVs are generally inefficient in infecting humans; however, there have been documented cases of AIVs transmitting directly from infected birds to humans [6, 7]. During the 1997 Hong Kong “bird flu” incident, there was demonstrated transmission of highly pathogenic avian influenza (HPAI) A virus (H5N1) from infected domesticated chickens to humans [3]. More recently, some Asian countries have reported human infections of avian influenza A virus (H7N9) with most patients having a history of exposure to live poultry in wet markets [8]. 
As such, most pandemic plans include special considerations (e.g., enhanced surveillance, prioritization for vaccination, and antiviral prophylaxis) for avian influenza risk groups that include humans who come in close, frequent contact with domestic birds, such as farmers, poultry farm workers, veterinarians, and livestock workers [9, 10].\nLongitudinally migrating wild birds appear to play a primary role in influenza transmission and there is increased concern about the introduction of HPAI virus strains in North America from Eurasia, as migratory flyways around the world intersect [3, 4]. Thus, bird hunters may also be at risk as hunting and processing practices directly expose them to the bodily fluids of wild birds and water potentially contaminated with bird feces [5, 11]. Although the risk of AIV infection while hunting and processing wild birds is assumed to be very low [5], transmission has been previously reported. One study reported serologic evidence of past AIV infection in a recreational duck hunter and two wildlife professionals, inferring direct transmission of AIVs from wild birds to humans [12]. Another study reported that recreational waterfowl hunters were eight times more likely to be exposed to avian influenza-infected wildlife compared to occupationally-exposed people and the general public [13]. A study conducted in rural Iowa, USA, reported that participants who hunted wild birds had increased antibody titers against avian H7 influenza virus [14]. Further, in the Republic of Azerbaijan, HPAI H5N1 infection in humans is suspected to be linked to defeathering infected wild swans (Cygnus) [15].\nSince handling wild birds and having contact with the aquatic habitats of wild birds are potential transmission pathways for AIV infections in hunters, it is important to better understand hunters’ knowledge and risk perceptions of avian influenza and include special considerations in pandemic plans. 
This is particularly important for some Canadian Aboriginal (First Nations, Inuit, and Métis) populations whose hunting of wild birds represents subsistence harvesting as opposed to a recreational activity [16]. Herein, subsistence harvesting will refer collectively to activities associated with hunting, fishing, trapping, and gathering of animals and other food for personal, family, and community consumption [17, 18]. The practice of subsistence harvesting for some Canadian Aboriginal populations, such as the Cree First Nations of the Mushkegowuk region, is culturally and economically important with the majority of hunters harvesting wild birds [17, 19]. Traditional land-based harvesting activities are economically valuable for the region and can reduce external economic dependence [17]. Moreover, as there are many physical, nutritional, and social benefits of this practice, it is a vital, well-established component of health and well-being in Canadian Aboriginal communities [20]. For instance, as Canadian Aboriginal populations, particularly those residing in geographically remote and isolated communities, experience a high prevalence of household food insecurity [21, 22], subsistence harvesting can provide an important source of healthy traditional foods and lessen the reliance on costly market foods.\nThe potential of AIV infection while hunting and harvesting wild birds varies with geographical areas, seasons, and specific activities [5, 11, 12]. Moreover, previous studies have shown that knowledge and risk perception of avian influenza can positively influence compliance with recommended protective health behaviours [23, 24]. We conducted a cross-sectional survey of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters. 
The purpose of this study was to examine if knowledge and risk perception of avian influenza influenced the use of personal protection measures and attitudes about hunting influenza-infected birds. The implications for addressing the special considerations of Canadian First Nations subsistence hunters in pandemic plans will be discussed.", " Community-based participatory research approach The present study employed a community-based participatory research (CBPR) approach since the hallmark principles of CBPR can foster the engagement of Aboriginal populations and participatory methods have previously been a successful approach to partnering with Aboriginal communities [25–27]. As such, the research topic was locally relevant as it stemmed from previous research conducted in the region that explored culturally-appropriate measures to mitigate the effects of an influenza pandemic in the setting of a remote and isolated Canadian First Nations community [28]. Residents of the study community expressed questions and concerns about the transmission potential of AIVs from influenza-infected wild birds to subsistence hunters. Thus, the present study was specifically developed and conducted to address the identified questions and concerns.\nFollowing a CBPR approach, collaboration occurred throughout the research process between the researchers and a community-based advisory group (CBAG) comprised of two community representatives from the study community [29–31]. The two members of the CBAG were of First Nations heritage and were particularly interested in the topic at hand and desired to be involved. The CBAG helped design the study and was part of the iterative process of developing the survey questions and layout. The CBAG also provided input during the data analysis process, on the interpretation of results, and aided with disseminating the results to the community. 
CBPR endeavors aim to use the knowledge generated to achieve action-oriented outcomes for the involved community [29, 32]. At the request of the CBAG, the results of this study were disseminated via an oral presentation to community members during a lunch-and-learn activity in June 2014. An information sheet explaining avian influenza and recommended precautionary behaviours created by Health Canada was distributed to attendees [33]. Information about emerging avian influenzas that currently are of pandemic concern and the information sheet were also incorporated into the community’s influenza pandemic plan as a newly created appendix section.\nApproval to conduct this research was granted by the Office of Research Ethics at the University of Waterloo (ORE #16534), and was supported by the Band Council (locally elected First Nations government body) of the involved community.\n Study area, population, and data collection The study community (name omitted for anonymity purposes) is considered remote (i.e., nearest service center with year-round road access is located over 350 kilometers away) and isolated (i.e., accessible only by airplanes year-round) [10]. The Cree First Nations community belongs to the Mushkegowuk region which is located in northern Ontario, Canada along the western shores of James Bay and the southern portion of Hudson Bay [17, 19]. 
The region is a productive wildlife area and the majority of hunters partake in the spring and fall bird harvests [34].\nThe cross-sectional survey was conducted in English (as suggested by the CBAG) from November 10–25, 2013. The time period was chosen to maximize participation, as most hunters would have returned from fall hunting activities. The survey was based on previous literature [11] and was developed in collaboration with the CBAG to ensure that it adequately addressed the objectives of the study and was culturally-appropriate. The survey employed closed-ended questions to gain a better understanding of First Nations hunters’ general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Open-ended questions were also included to allow for participants to describe their risk perceptions of AIV infection while harvesting birds as well as any additional concerns. Basic demographic questions to record the age and sex of participants were also included.\nCommunity First Nations subsistence hunters were invited to participate by the lead author (NAC) and a local community research assistant during individual meetings. The research assistant was of First Nations descent and a prominent Elder in the community. Being fluent in the Cree language, the assistant acted as a Cree translator upon request by the survey respondents. A current community housing list (updated in November 2013) which recorded all known community members living in First Nations (Band) households was used by the research assistant to identify eligible participants. Contemporary harvesting practices in the region typically involve multiple short trips versus traditional long trips [34]. To include as many hunters as possible from the study community, eligible participants were defined as current hunters, a group which included “intensive”, “active”, and “occasional” hunters (for definitions, see [17]). 
In addition to being a current hunter, participants were required to be First Nations (Band member), an adult (18 years old and over), and available to complete the survey in person during the study period to be eligible. Both male and female hunters were approached as it is widely recognized in Cree First Nations that both sexes play an important role while subsistence harvesting [35].\nWhen approached, the participants were provided with an information/recruitment letter and the study was explained in English or Cree as required. Informed verbal consent was obtained, being culturally appropriate for the region [31, 36]. Incentives were not offered for participation. As participants preferred to complete the survey alone on their own time, a convenient time and location was arranged to collect the completed survey. Up to five follow-up visits and new survey copies were provided if the survey was not completed at the specified time and if the person was still interested in participating.\n Data management and analyses Collected surveys were coded by an identification number to maintain confidentiality of the participants. The CBAG was consulted to determine how to code inexact responses. Of note, it was decided that if a participant responded with a range of numbers, the median value was recorded. If a participant selected all of the possible response options or only provided a written response, the result was recorded as missing data. In instances where a pattern was observed amongst participants’ written responses, the responses were coded according to newly created response options approved by the CBAG to maintain the integrity of the data.\nSample size for individual statistical analyses varied from 88 to 106, as not all participants answered each survey question; thus, presented percentages may not always equal 100% owing to missing data. Simple descriptive statistics were used to examine the distributions of variables pertaining to general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Cross-tabulations, as 2 × 2 contingency analyses, were used to examine the relationships between each of the main effects of sex, awareness of avian influenza, and risk perception of AIV infection by precautionary behaviours and attitudes about hunting influenza-infected birds. In instances where the expected cell count was less than five, the Fisher’s Exact Test was used in preference to the Pearson chi-square test. 
Absolute values of the adjusted standardized residual (ASR) greater than 1.96 indicated a significant departure from the expected count and were therefore considered major contributors to the observed chi-square result.\nThe influence of outlier values for continuous dependent variables (age, years of hunting, days of hunting per year, birds hunted per year) was examined using boxplots of raw and log-transformed data. Owing to the presence of outlier values, we log-transformed values for days of hunting and number of birds hunted per year to satisfy the homogeneity of variance assumption of analysis of variance (ANOVA). It was decided that one individual’s improbable response for number of birds hunted per year should be removed as it continued to distort the results. Also, one individual’s response for years of hunting was recorded as missing data since the response did not reflect the age of the participant. Differences in mean values of these dependent variables between groups for sex, awareness of avian influenza, and risk perception of AIV infection were examined using ANOVA. Statistical results were considered to be significant at p < 0.05. Data analyses were carried out using SPSS version 22 (SPSS Inc., Chicago, Illinois, U.S.A.).\nWritten responses to the two open-ended questions and any additional comments were manually transcribed verbatim into electronic format to facilitate organization and coding. Qualitative coding of the transcribed data was conducted using QSR NVivo® version 9.2 (QSR International Pty Ltd., Doncaster, Victoria, Australia). Responses were deductively analyzed following a template organizing approach using the survey questions as a coding template [37, 38]. 
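The 2 × 2 contingency analysis described above can be sketched in pure Python. The counts below are hypothetical (not study data), and in practice a statistics package such as SPSS or SciPy's `chi2_contingency`/`fisher_exact` would be used; this sketch just shows how the chi-square statistic and adjusted standardized residuals relate:

```python
import math

def chi2_and_asr(table):
    """Pearson chi-square and adjusted standardized residuals (ASR) for a
    2 x 2 contingency table given as [[a, b], [c, d]] of observed counts.

    |ASR| > 1.96 flags a cell whose observed count departs significantly
    from its expected count under independence.
    """
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    expected = [[row_tot[i] * col_tot[j] / n for j in range(2)] for i in range(2)]
    chi2 = sum((table[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))
    asr = [[(table[i][j] - expected[i][j])
            / math.sqrt(expected[i][j] * (1 - row_tot[i] / n) * (1 - col_tot[j] / n))
            for j in range(2)] for i in range(2)]
    return chi2, expected, asr

# Hypothetical (non-study) counts: awareness of avian influenza (rows)
# cross-tabulated with use of a precautionary behaviour (columns).
chi2, expected, asr = chi2_and_asr([[30, 10], [15, 25]])
print(round(chi2, 3), round(asr[0][0], 3))  # -> 11.429 3.381
# If any expected count were below 5, Fisher's Exact Test would be
# preferred, as in the study.
```

For a 2 × 2 table each cell's ASR has the same absolute value and satisfies ASR² = χ², so a significant chi-square result necessarily comes with |ASR| > 1.96 in every cell.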
Analyzing the data was an iterative process conducted multiple times by the lead author (NAC) and findings were presented to the CBAG as a way of member checking to verify the results [37].", "The present study employed a community-based participatory research (CBPR) approach since the hallmark principles of CBPR can foster the engagement of Aboriginal populations and participatory methods have previously been a successful approach to partnering with Aboriginal communities [25–27]. As such, the research topic was locally relevant as it stemmed from previous research conducted in the region that explored culturally-appropriate measures to mitigate the effects of an influenza pandemic in the setting of a remote and isolated Canadian First Nations community [28]. Residents of the study community expressed questions and concerns about the transmission potential of AIVs from influenza-infected wild birds to subsistence hunters. Thus, the present study was specifically developed and conducted to address the identified questions and concerns.\nFollowing a CBPR approach, collaboration occurred throughout the research process between the researchers and a community-based advisory group (CBAG) comprised of two community representatives from the study community [29–31]. The two members of the CBAG were of First Nations heritage and were particularly interested in the topic at hand and desired to be involved. The CBAG helped design the study and was part of the iterative process of developing the survey questions and layout. The CBAG also provided input during the data analysis process, on the interpretation of results, and aided with disseminating the results to the community. CBPR endeavors aim to use the knowledge generated to achieve action-oriented outcomes for the involved community [29, 32]. At the request of the CBAG, the results of this study were disseminated via an oral presentation to community members during a lunch-and-learn activity in June 2014. 
An information sheet explaining avian influenza and recommended precautionary behaviours created by Health Canada was distributed to attendees [33]. Information about emerging avian influenzas that currently are of pandemic concern and the information sheet were also incorporated into the community’s influenza pandemic plan as a newly created appendix section.\nApproval to conduct this research was granted by the Office of Research Ethics at the University of Waterloo (ORE #16534), and was supported by the Band Council (locally elected First Nations government body) of the involved community.", "The study community (name omitted for anonymity purposes) is considered remote (i.e., nearest service center with year-round road access is located over 350 kilometers away) and isolated (i.e., accessible only by airplanes year-round) [10]. The Cree First Nations community belongs to the Mushkegowuk region which is located in northern Ontario, Canada along the western shores of James Bay and the southern portion of Hudson Bay [17, 19]. The region is a productive wildlife area and the majority of hunters partake in the spring and fall bird harvests [34].\nThe cross-sectional survey was conducted in English (as suggested by the CBAG) from November 10–25, 2013. The time period was chosen to maximize participation, as most hunters would have returned from fall hunting activities. The survey was based on previous literature [11] and was developed in collaboration with the CBAG to ensure that it adequately addressed the objectives of the study and was culturally-appropriate. The survey employed closed-ended questions to gain a better understanding of First Nations hunters’ general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Open-ended questions were also included to allow for participants to describe their risk perceptions of AIV infection while harvesting birds as well as any additional concerns. 
Basic demographic questions to record the age and sex of participants were also included.\nCommunity First Nations subsistence hunters were invited to participate by the lead author (NAC) and a local community research assistant during individual meetings. The research assistant was of First Nations descent and a prominent Elder in the community. Being fluent in the Cree language, the assistant acted as a Cree translator upon request by the survey respondents. A current community housing list (updated in November 2013) which recorded all known community members living in First Nations (Band) households was used by the research assistant to identify eligible participants. Contemporary harvesting practices in the region typically involve multiple short trips versus traditional long trips [34]. To include as many hunters as possible from the study community, eligible participants were defined as current hunters, a group which included “intensive”, “active”, and “occasional” hunters (for definitions, see [17]). In addition to being a current hunter, participants were required to be First Nations (Band member), an adult (18 years old and over), and available to complete the survey in person during the study period to be eligible. Both male and female hunters were approached as it is widely recognized in Cree First Nations that both sexes play an important role while subsistence harvesting [35].\nWhen approached, the participants were provided with an information/recruitment letter and the study was explained in English or Cree as required. Informed verbal consent was obtained, being culturally appropriate for the region [31, 36]. Incentives were not offered for participation. As participants preferred to complete the survey alone on their own time, a convenient time and location was arranged to collect the completed survey. 
Up to five follow-up visits and new survey copies were provided if the survey was not completed at the specified time and if the person was still interested in participating.", "Collected surveys were coded by an identification number to maintain confidentiality of the participants. The CBAG was consulted to determine how to code inexact responses. Of note, it was decided that if a participant responded with a range of numbers, the median value was recorded. If a participant selected all of the possible response options or only provided a written response, the result was recorded as missing data. In instances where a pattern was observed amongst participants’ written responses, the responses were coded according to newly created response options approved by the CBAG to maintain the integrity of the data.\nSample size for individual statistical analyses varied from 88 to 106, as not all participants answered each survey question; thus, presented percentages may not always equal 100% owing to missing data. Simple descriptive statistics were used to examine the distributions of variables pertaining to general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Cross-tabulations, as 2 × 2 contingency analyses, were used to examine the relationships between each of the main effects of sex, awareness of avian influenza, and risk perception of AIV infection by precautionary behaviours and attitudes about hunting influenza-infected birds. In instances where the expected cell count was less than five, the Fisher’s Exact Test was used in preference to the Pearson chi-square test. 
Absolute values of the adjusted standard residual (ASR) greater than 1.96 indicated a significant departure from the expected count, and the corresponding cell was therefore considered to be a major contributor to the observed chi-square result.

The influence of outlier values for continuous dependent variables (age, years of hunting, days of hunting per year, birds hunted per year) was examined using boxplots of raw and log-transformed data. Owing to the presence of outlier values, we log-transformed values for days of hunting and number of birds hunted per year to satisfy the homogeneity of variance assumption of analysis of variance (ANOVA). One individual’s improbable response for number of birds hunted per year was removed as it continued to distort the results. Also, one individual’s response for years of hunting was recorded as missing data since the response did not reflect the age of the participant. Differences in mean values of these dependent variables between groups for sex, awareness of avian influenza, and risk perception of AIV infection were examined using ANOVA. Statistical results were considered to be significant at p < 0.05. Data analyses were carried out using SPSS version 22 (SPSS Inc., Chicago, Illinois, U.S.A.).

Written responses to the two open-ended questions and any additional comments were manually transcribed verbatim into electronic format to facilitate organization and coding. Qualitative coding of the transcribed data was conducted using QSR NVivo® version 9.2 (QSR International Pty Ltd., Doncaster, Victoria, Australia). Responses were deductively analyzed following a template organizing approach using the survey questions as a coding template [37, 38].
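The 2 × 2 contingency procedure described above (Pearson chi-square with adjusted standardized residuals, flagging cells with |ASR| > 1.96) can be sketched in a few lines. This is an illustrative reimplementation, not the SPSS routine used in the study:

```python
import math

def chi2_and_asr(table):
    """Pearson chi-square and adjusted standardized residuals (ASR) for a 2x2 table.

    table = [[a, b], [c, d]] of observed counts.
    """
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = row[0] + row[1]
    chi2 = 0.0
    asr = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            exp = row[i] * col[j] / n  # expected count under independence
            chi2 += (table[i][j] - exp) ** 2 / exp
            # ASR: residual scaled so that |ASR| > 1.96 flags a cell departing
            # significantly from its expected count
            asr[i][j] = (table[i][j] - exp) / math.sqrt(
                exp * (1 - row[i] / n) * (1 - col[j] / n))
    return chi2, asr

# Observed counts from Table 3 (awareness of avian influenza x perceived risk):
chi2, asr = chi2_and_asr([[37, 9], [31, 20]])
# chi2 ≈ 4.456 and cell residuals ≈ ±2.1, matching the values reported below
```

For a 2 × 2 table all four residuals have the same magnitude, which is why the reported tables show symmetric +2.1/-2.1 (or +2.0/-2.0) patterns.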
Analyzing the data was an iterative process conducted multiple times by the lead author (NAC) and findings were presented to the CBAG as a way of member checking to verify the results [37].

A total of 173 participants in the censused community were deemed eligible to participate given the inclusion criteria and, of these, 126 received surveys, for a 73% contact rate. Of the 126 distributed surveys, 106 completed surveys were returned, representing an 84% cooperation rate. Overall, a response rate of 61% was achieved. Of the 106 community members who participated in the survey, 80 (75.5%) were male and 26 (24.5%) were female. The untransformed demographic and harvesting characteristics of the participants are presented in Table 1.

Table 1. Demographic and harvesting characteristics of Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                      n     Minimum   Maximum   Mean   Std. deviation
  Demographic information
    Age                               92    18        76        43.3   12.9
  Harvesting characteristics
    Years of hunting                  99    1         65        27.2   14.0
    Days of hunting per year          105   1         200       26.2   30.5
    Number of birds hunted per year   100   0         200       42.6   40.6

All who responded participated in the spring/summer hunting activities (n = 105; 99.1%), with fewer hunters participating during the fall (n = 57; 53.8%) and winter (n = 16; 15.1%) seasons. During these hunts, 98.1% of participants hunted Canada geese (Branta canadensis), 88.7% hunted various species of ducks (Anatinae), 69.8% hunted lesser snow geese (Anser c. caerulescens, also referred to as wavies), and 43.4% hunted species of shorebirds (Charadriiformes).

While hunting, the majority of participants reported having direct contact with water (n = 89; 84.0%).
Bird harvesting practices were generally similar whether camping in the bush or at home; thus, only results pertaining to camping in the bush are presented. In the bush, most hunters processed the birds themselves (n = 72; 67.9%) or a family member was involved (n = 67; 63.2%). Most hunters partook in all of the bird processing activities in the bush; the percentages of participants who reported cleaning, plucking, and gutting the birds were 74.5%, 94.3%, and 77.4%, respectively. Regarding the use of precautionary measures while processing birds in the bush, 18 (17.0%) hunters reported wearing gloves and 2 (1.9%) hunters reported wearing goggles. In the bush, the majority of hunters washed their hands (n = 105; 99.1%) and sanitized their equipment (n = 69; 65.1%) after processing birds. Moreover, about half of the participants (n = 50; 47.2%) reported receiving the annual vaccination against seasonal human influenza viruses (Figure 1).

Figure 1. Compliance with recommended protective health measures among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013.

The total frequency and percentage of participants’ knowledge of avian influenza, risk perception of AIV infection, and attitudes about hunting influenza-infected birds are presented in Table 2.
Approximately half of the participants (n = 56; 52.8%) reported being generally aware of avian influenza, but few were aware of the signs and symptoms of avian influenza in birds (n = 16; 15.1%) or humans (n = 9; 8.5%).

Table 2. Frequency and percentage(a) of knowledge of avian influenza, risk perception of avian influenza virus infection, and attitudes about hunting influenza-infected birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                                          All hunters            Males                  Females
                                                          No (%)      Yes (%)    No (%)      Yes (%)    No (%)      Yes (%)
  Knowledge
    Aware of avian influenza                              49 (46.2)   56 (52.8)  37 (46.3)   42 (52.5)  12 (46.2)   14 (53.8)
    Aware of signs and symptoms in birds                  89 (84.0)   16 (15.1)  67 (83.8)   12 (15.0)  22 (84.6)   4 (15.4)
    Aware of signs and symptoms in humans                 95 (89.6)   9 (8.5)    74 (92.5)   4 (5.0)    21 (80.8)   5 (19.2)
  Risk perception
    Perceived risk of avian influenza virus infection     68 (64.2)   29 (27.4)  52 (65.0)   23 (28.8)  16 (61.5)   6 (23.1)
  Attitudes
    Cease hunting if avian influenza detected in
      North American birds                                60 (56.6)   43 (40.6)  49 (61.3)   29 (36.3)  11 (42.3)   14 (53.8)
      Province of Ontario birds                           54 (50.9)   45 (42.5)  45 (56.3)   30 (37.5)  9 (34.6)    15 (57.7)
      Regional birds                                      46 (43.4)   55 (51.9)  39 (48.8)   37 (46.3)  7 (26.9)    18 (69.2)

  (a) Percentages may not always equal 100% owing to missing data.

Some participants (n = 29; 27.4%) perceived a risk of contracting avian influenza while harvesting birds.

“Just wondering every time we go out hunting geese in the spring, if any of the geese that come in [the] spring are carrying the flu” (Participant #41).

“Yes there is a risk [be]cause the birds [are] from the South … who knows what they’ll catch out there” (Participant #103).

“It will concern me if the bird flu is here on our Land and I wouldn’t be sure about hunting birds” (Participant #42).

On the other hand, many participants did not perceive a risk of AIV infection while harvesting birds, since local regional birds were not perceived to be infected with avian influenza.

“I thought there was only bird flu in Asia …” (Participant #24).

“If birds were sick, I don’t think they would make it this far [North]” (Participant #70).

“No reports that bird flu has arrived in this area and people are not getting sick” (Participant #36).

Detection of avian influenza in wild birds in nearby geographic areas would reportedly influence the participants’ harvesting behaviour. The frequency of participants who would cease harvesting birds was highest if avian influenza was detected in local regional birds (n = 55; 51.9%). It was reported that 45 (42.5%) respondents would stop hunting if avian influenza was found in birds from within the Province of Ontario, and 43 (40.6%) respondents would stop hunting if the virus was found in North American birds.
For all of the aforementioned scenarios, some participants added written responses indicating that they were not sure if they would stop hunting and requested relevant information. The majority of respondents were also interested in receiving information about avian influenza transmission (n = 83; 78.3%), flyways of migrating birds (n = 79; 74.5%), and precautions to minimize exposure (n = 82; 77.4%).

ANOVA showed that males hunted significantly more birds per year than did females (F1,96 = 12.1; p = 0.001; Figure 2). No significant difference in mean values of age, years of hunting, and days of hunting per year was observed between males and females. ANOVA did not identify any significant differences in mean values of age, years of hunting, days of hunting per year, and number of birds hunted per year between those who were or were not aware of avian influenza. However, ANOVA did show that those who did not perceive a risk of AIV infection while harvesting birds hunted significantly more days per year (F1,94 = 4.4; p = 0.040; Figure 2).
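For two groups, the one-way ANOVA used in these comparisons reduces to a single F ratio (between-group mean square over within-group mean square). A minimal sketch, using hypothetical harvest counts (not the study data) and the log transformation described in the methods:

```python
import math

def one_way_anova_f(a, b):
    """One-way ANOVA F statistic for two groups (df = 1 and len(a) + len(b) - 2)."""
    n_a, n_b = len(a), len(b)
    mean_a, mean_b = sum(a) / n_a, sum(b) / n_b
    grand = (sum(a) + sum(b)) / (n_a + n_b)
    # Between-group sum of squares; with two groups it has 1 degree of freedom,
    # so it equals the between-group mean square
    ms_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    # Within-group sum of squares, divided by its degrees of freedom below
    ss_within = (sum((x - mean_a) ** 2 for x in a)
                 + sum((x - mean_b) ** 2 for x in b))
    return ms_between / (ss_within / (n_a + n_b - 2))

# Hypothetical (illustrative only) birds-per-year counts, log-transformed
# to stabilize variance as in the methods:
males = [math.log(x) for x in (60, 45, 80, 30, 100)]
females = [math.log(x) for x in (20, 15, 35, 10)]
f_stat = one_way_anova_f(males, females)
```

With two groups this F statistic is the square of the pooled two-sample t statistic, so the ANOVA and t-test comparisons are equivalent here.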
No significant difference in mean values of age, years of hunting, and number of birds hunted per year was observed between those who did or did not perceive a risk of AIV infection.

Figure 2. Analysis of variance for number of birds hunted per year by males and females (a) and number of days hunted per year by perceived risk of avian influenza virus infection while harvesting birds (b) among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013.

For all participants, in 2 × 2 contingency analysis, a significant dependence was observed between awareness of avian influenza and risk perception of AIV infection (Pearson χ2 = 4.456; p = 0.035) (Table 3). An ASR of +2.1 indicated that participants aware of avian influenza were significantly more likely to perceive a risk of AIV infection while harvesting birds.
No significant dependence was seen between sex and awareness of avian influenza or sex and perceived risk of AIV infection.

Table 3. Cross-tabulation for awareness of avian influenza by risk perception of avian influenza infection while harvesting birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                        Perceived risk of AIV infection while harvesting birds
  Aware of avian influenza              No       Yes      Total
    No     Count                        37       9        46
           Adjusted residual            +2.1     -2.1
    Yes    Count                        31       20       51
           Adjusted residual            -2.1     +2.1

A significant dependence was observed between sex and the attitude of ceasing hunting if influenza was detected in regional birds (Pearson χ2 = 4.123; p = 0.042) (Table 4). An ASR of -2.0 indicated that males were significantly less likely to stop hunting if influenza was detected in the local regional birds.
No significant dependence was observed between the two main effects of awareness of avian influenza and perceived risk of AIV infection by attitudes about hunting influenza-infected birds.

Table 4. Cross-tabulation for sex by cease hunting if influenza detected in Regional birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                 Cease hunting if influenza detected in Regional birds
  Sex                            No       Yes      Total
    Male     Count               39       37       76
             Adjusted residual   +2.0     -2.0
    Female   Count               7        18       25
             Adjusted residual   -2.0     +2.0

A significant dependence also was observed between awareness of avian influenza and the precautionary behaviour of sanitizing equipment after processing birds while camping in the bush (Pearson χ2 = 4.070; p = 0.044) (Table 5). An ASR of +2.0 indicated that a significantly greater frequency of aware participants were among those who cleaned their bird processing equipment. No significant dependence was observed between awareness of avian influenza and any of the other recommended precautions to be used while harvesting birds.
Moreover, no significant dependence was observed between the two main effects of sex and perceived risk of AIV infection by any of the precautionary behaviours.

Table 5. Cross-tabulation for awareness of avian influenza by sanitizing bird processing equipment in the bush among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                 Sanitize bird processing equipment in the bush
  Aware of avian influenza       No       Yes      Total
    No     Count                 21       27       48
           Adjusted residual     +2.0     -2.0
    Yes    Count                 14       42       56
           Adjusted residual     -2.0     +2.0

Harvesting activities

As mentioned, the potential of AIV infection while hunting and processing wild birds varies with specific practices, seasons, and geographical areas [5, 11, 12]. The hunters reported being in frequent contact with wild birds, as some participants hunted for more than 100 days per year and harvested up to 200 birds per year. Our findings indicated that First Nations subsistence hunters were involved in bird harvesting practices, such as processing the birds and having direct contact with water in the bush, that pose an increased hazard of AIV infection among this subpopulation. The main proposed pathway of transmission of AIV to humans is close contact between the tissues, secretions, and excretions of an infected bird and the respiratory tract, gastrointestinal tract, or conjunctiva of a human [2, 7, 39]. Infected birds shed copious amounts of virus particles in their feces, which can also contaminate the environment and bodies of water [40, 41]. Our findings revealed that the majority of hunters had direct contact with water and cleaned, plucked, and gutted the wild birds themselves.
If processing an influenza-infected wild bird in this manner, hunters may be exposed to virus-laden tissues, secretions, and excretions [2, 5]. The use of personal protective equipment was not routine practice, as most hunters did not wear gloves and goggles to protect themselves while processing birds. However, most hunters reported using other measures of personal protection, such as washing their hands and cleaning their equipment, which can limit post-harvest AIV exposure.

The timing of the hunters’ bird harvesting activities in relation to when prevalence peaks for AIVs and human influenza viruses is of particular interest. Similar to previous reports, our study revealed that the majority of hunters were involved in the spring and fall bird harvests [16, 19, 34]. The timing of these harvests corresponds to freeze-up and break-up events in the region, which vary every year but generally run from April to October [42]. During these harvests, participants reported hunting migratory wild birds that are potential carriers of AIVs, as all known influenza A virus subtypes have been identified in these birds [3, 43]. For instance, in North American wild ducks, AIV prevalence peaks around late summer/early fall prior to southbound migration, with the highest virus isolation rates reported in juvenile ducks [44, 45]. On the other hand, previous studies have reported relatively low prevalence of AIVs in Canada geese regardless of the season [45, 46]. Moreover, in Canada, the peak season of influenza A infection in humans typically runs from November to April [33].
Similar to another study, our results suggest that the possibility of co-infection with AIVs and human influenza viruses resulting in a reassortment event is unlikely, as the timing of the hunters’ potential exposure to AIVs differs from that of seasonal human influenza viruses [5].

Based on previous studies, the surveyed participants generally hunt for wild birds around the southwestern coast of Hudson Bay and the western coast of James Bay, which lies along the Mississippi migratory flyway [3, 34, 47, 48]. Migratory flyways around the world intersect, particularly between eastern Eurasia and Alaska and between Europe and eastern North America, raising concerns about the exchange of AIVs between the Eurasian and American virus superfamilies [3, 43]. Intercontinental exchange of entire AIV genomes has not yet been reported and Eurasian HPAI virus subtypes have not been previously detected in North American migratory birds [43, 49]. However, reassortment events between the two lineages have been reported, notably in Alaska and along the northeastern coast of Canada [43, 49–51]. These observations suggest that the introduction of a novel AIV is more likely to occur along the Pacific and Atlantic coasts of North America, but once introduced, it has been suggested that migration to major congregation sites may disperse the novel AIV across flyways [49, 51, 52].
Awareness, risk perception, and attitudes

Approximately half of our study participants were generally aware of avian influenza (52.8%), which is lower than in previous studies conducted with bird hunters in the USA (86%) and poultry workers in Nigeria (67.1%) and Italy (63.8%) [11, 23, 53]. Similar to a previous study, our findings indicated that a general awareness of avian influenza was more common among the surveyed bird hunters than knowledge of the signs and symptoms [11].
Previous studies conducted with high-risk populations in Thailand and Laos also reported limited knowledge of the key signs and symptoms of avian influenza [54, 55]. Almost one third of surveyed participants perceived a risk of contracting avian influenza while hunting and processing birds, which is similar to the values found in other studies [24, 56].

Our results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as AIV was detected in more nearby geographic areas. This observation aligns with findings from a previous study; however, the percentage of hunters who would stop was relatively higher in our study, as only 3% and 19% of active duck hunters in Georgia, USA reported that they would stop hunting if HPAI were found in duck populations in the USA and the state of Georgia, respectively [11]. This result is interesting as harvesting activities are integral to First Nations’ culture and an important source of healthy food, especially in communities experiencing food insecurity [17, 20, 22].

Our findings suggested that being aware of avian influenza or perceiving a risk of AIV infection did not influence the hunters’ decision to cease harvesting influenza-infected birds. However, those who were knowledgeable were more likely to clean their equipment after processing birds in the bush. This finding suggests that First Nations hunters are not only willing to use precautionary measures while harvesting birds, but that improving their knowledge level may lead to increased use of recommended precautionary measures. Previous studies also found that knowledge and perception of risk were significant determinants of greater compliance with recommended protective measures [23, 24]. However, in our study, being knowledgeable or perceiving risk did not always result in greater use of protective measures. Moreover, in general, the limited use of gloves and goggles while processing harvested birds was noted.
These observations may be explained by protection motivation theory, which states that complying with a recommended protective health behaviour is influenced by risk perception as well as efficacy variables, including response efficacy (i.e., whether the recommended measure is effective) and self-efficacy (i.e., whether the person is capable of performing the recommended measure) [57–59]. According to this theory, risk perception will generate a willingness to act, but efficacy variables will determine whether the resulting action is adaptive or maladaptive [57, 58]. In our study, those who perceived a risk may have doubted the effectiveness of recommended measures and/or had low self-efficacy owing to limited access to resources and limited ability to afford the supplies required to implement the measures [60].
Recommendations for influenza pandemic plans

These data support previous findings which suggest that bird hunting and processing activities may potentially expose individuals to avian influenza [5, 11–14]. Acknowledging the various benefits and cultural importance of subsistence harvesting [17, 20], while taking into account the increased hazard of potential AIV exposure in First Nations hunters, their inclusion as an avian influenza risk group with associated special considerations in pandemic plans seems warranted. The potential for a novel AIV to be introduced into an Aboriginal Canadian population is of great concern, as they face many health disparities and are particularly susceptible to influenza and related complications [61]. Moreover, previous influenza pandemics have disproportionately impacted Aboriginal Canadians, especially those populations living in geographically remote communities, and revealed inadequacies in preparedness with regard to addressing their pre-existing inequalities and special needs during a pandemic [62–65].

Efforts should be directed towards improving education for First Nations hunters regarding avian influenza and the hazard posed by AIVs while harvesting wild birds. More specifically, our results indicated that educational endeavours should include information regarding the signs and symptoms of avian influenza, transmission dynamics, flyways of migrating birds, and recommended precautionary measures (Table 6).
Accordingly, access to supplies required to comply with recommended protective measures, such as cleaning solutions and gloves, should be improved for First Nations subsistence hunters. Moreover, our findings suggested that detection of avian influenza in wild birds in nearby geographic areas would influence the participants’ harvesting behaviour. Given this, we recommend that a culturally-appropriate communication system be implemented to promptly inform subsistence hunters and other community members of the findings and any associated recommendations.Table 6\nRecommended precautions for Canadian First Nations subsistence hunters to reduce exposure to avian influenza viruses while harvesting wild birds (adapted from [\n[33]])\n-Do not touch or eat sick birds or birds that have died for unknown reasons-Avoid touching the blood, secretions, or dropping of wild game birds-Do not rub your eyes, touch your face, eat, drink or smoke when processing wild game birds-Keep young children away when processing wild game birds and discourage them from playing in areas that could be contaminated with wild bird droppings-When preparing game, wash knives, tools, work surfaces, and other equipment with soap and warm water followed by a household bleach solution (0.5% sodium hypochlorite)-Wear water-proof household gloves or disposable latex/plastic gloves when processing wild game birds-Wash gloves and hands (for at least 20 seconds) with soap and warm water immediately after you have finished processing game or cleaning equipment. 
If there is no water available, remove any dirt using a moist towlette, apply an alcohol based hand gel (between 60-90% alcohol) and wash your hands with soap and water as soon as it is possible-Change clothes after handling wild game birds and keep soiled clothing and shoes in a sealed plastic bag until they can be washed-When cooking birds, the inside temperature should reach 85°C for whole birds or 74°C for bird parts (no visible pink meat and juice runs clear)-Never keep wild birds in your home or as pets-Receive the annual influenza vaccine-If you become sick while handling birds or shortly afterwards, see your doctor and inform your doctor that you have been in close contact with wild birds.\n\nRecommended precautions for Canadian First Nations subsistence hunters to reduce exposure to avian influenza viruses while harvesting wild birds (adapted from [\n[33]])\n\nThese data support previous findings which suggest that bird hunting and processing activities may potentially expose individuals to avian influenza [5, 11–14]. Acknowledging the various benefits and cultural importance of subsistence harvesting [17, 20], while taking into account the increased hazard of potential AIV exposure in First Nations hunters, their inclusion as an avian influenza risk group with associated special considerations in pandemic plans seems warranted. The potential for a novel AIV to be introduced into an Aboriginal Canadian population is of great concern as they face many health disparities and are particularly susceptible to influenza and related complications [61]. 
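The 0.5% sodium hypochlorite solution recommended for cleaning equipment can be prepared by diluting ordinary household bleach. As a minimal sketch (assuming a typical stock strength of roughly 5% sodium hypochlorite — the actual concentration should be checked on the product label), the required mix follows from the dilution relation C1·V1 = C2·V2; the function name below is illustrative, not from any cited guidance:

```python
def water_parts_per_part_bleach(stock_pct: float, target_pct: float) -> float:
    """Parts of water to mix with 1 part stock bleach to reach target_pct.

    Derived from C1 * V1 = C2 * (V1 + V_water), solved for V_water / V1.
    """
    if not 0 < target_pct <= stock_pct:
        raise ValueError("target must be positive and no stronger than stock")
    return stock_pct / target_pct - 1

# Assuming ~5% stock bleach, a 0.5% solution needs 9 parts water
# per 1 part bleach (a 1:9 dilution).
print(water_parts_per_part_bleach(5.0, 0.5))
```

For a stronger 6% stock, the same calculation gives an 11:1 water-to-bleach mix, which is why checking the label matters.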
Study strengths and limitations

To our knowledge, this is the first study to examine the knowledge and risk perceptions of avian influenza among Canadian First Nations subsistence hunters. The censused approach taken to select participants and the high contact and cooperation rates strengthen the assertion that our findings are representative of the study community. Also, in accordance with a CBPR approach, the CBAG was involved throughout the entire research process, thereby ensuring that the study was conducted in a culturally-appropriate manner and that the knowledge generated was used to directly benefit the involved community.

Despite the novelty and significance of our findings, some limitations of our study must be highlighted when interpreting our results. First, the analysis was based on a cross-sectional survey of self-reported data which may limit drawing definitive conclusions about the observed relationships.
The biases in recalling and reporting cannot be entirely ruled out; however, to help alleviate the potential for biased responses, participants were assured that their responses would remain anonymous. Also, it is not possible to discern whether those who did not return the survey or refused to participate differed in any way from those who did. However, there is no obvious reason to suspect that non-respondents and those who declined to participate differed from the respondents.

Future research should examine the prevalence of AIVs, particularly those strains that are currently of concern to humans (e.g., H5, H7), in birds from within the Mushkegowuk Territory that are typically harvested. Also, analyzing sera for antibodies against AIV subtypes would help evaluate whether previous AIV infections have occurred in First Nations subsistence hunters. Moreover, conducting a quantitative exposure assessment would provide information to help characterize the study population’s exposure potential to AIVs. Lastly, previous research has noted that various barriers impede the effectiveness of implementing recommended pandemic mitigation measures [60].
Thus, future research should aim to understand if any barriers exist with regards to complying with recommended precautions to reduce exposure to AIVs while harvesting birds and if measures need to be adapted to be more context-specific and culturally-appropriate, while still maintaining the effectiveness of the measure.

As mentioned, the potential of AIV infection while hunting and processing wild birds varies with specific practices, seasons, and geographical areas [5, 11, 12]. The hunters reported being in frequent contact with wild birds, as some participants hunted for more than 100 days per year and harvested up to 200 birds per year. Our findings indicated that First Nations subsistence hunters were involved in bird harvesting practices, such as processing the birds and having direct contact with water in the bush, that pose an increased hazard of AIV infection for this subpopulation. The main proposed pathway of transmission of AIV to humans is close contact between the tissues, secretions, and excretions of an infected bird and the respiratory tract, gastrointestinal tract, or conjunctiva of a human [2, 7, 39]. Infected birds shed copious amounts of virus particles in their feces which can also contaminate the environment and bodies of water [40, 41]. Our findings revealed that the majority of hunters had direct contact with water and cleaned, plucked, and gutted the wild birds themselves. If processing an influenza-infected wild bird in this manner, hunters may be exposed to virus-laden tissues, secretions, and excretions [2, 5]. The use of personal protective equipment was not routine practice as most hunters did not wear gloves and goggles to protect themselves while processing birds.
However, most hunters reported using other measures of personal protection, such as washing their hands and cleaning their equipment, which can limit post-harvest AIV exposure.

The timing of the hunters’ bird harvesting activities in relation to when the prevalence peaks for AIVs and human influenza viruses is of particular interest. Similar to previous reports, our study revealed that the majority of hunters were involved in the spring and fall bird harvests [16, 19, 34]. The timing of these harvests relates to freeze-up and break-up events in the region, which vary every year but generally run from April to October [42]. During these harvests, participants reported hunting migratory wild birds that are potential carriers of AIVs, as all known influenza A virus subtypes have been identified in these birds [3, 43]. For instance, in North American wild ducks, AIV prevalence peaks around late summer/early fall prior to southbound migration, with the highest virus isolation rates reported in juvenile ducks [44, 45]. On the other hand, previous studies have reported relatively low prevalence of AIVs in Canada geese regardless of the season [45, 46]. Moreover, in Canada, the peak season of influenza A infection in humans typically runs from November to April [33]. Similar to another study, our results suggest that the possibility of co-infection with AIVs and human influenza viruses resulting in a reassortment event is unlikely, as the timing of the hunters’ potential exposure to AIVs differs from that of seasonal human influenza viruses [5].

Based on previous studies, the surveyed participants generally hunt for wild birds around the southwestern coast of Hudson Bay and the western coast of James Bay, which is along the Mississippi migratory flyway [3, 34, 47, 48].
Migratory flyways around the world intersect, particularly between eastern Eurasia and Alaska and between Europe and eastern North America, raising concerns about the exchange of AIVs between the Eurasian and American virus superfamilies [3, 43]. Intercontinental exchange of entire AIV genomes has not yet been reported and Eurasian HPAI virus subtypes have not been previously detected in North American migratory birds [43, 49]. However, reassortment events between the two lineages have been reported, notably in Alaska and along the northeastern coast of Canada [43, 49–51]. These observations suggest that the introduction of a novel AIV is more likely to occur along the Pacific and Atlantic coasts of North America, but once introduced, it has been suggested that migration to major congregation sites may disperse the novel AIV across flyways [49, 51, 52].

Approximately half of our study participants (52.8%) were generally aware of avian influenza, which is lower than in previous studies conducted with bird hunters in the USA (86%) and poultry workers in Nigeria (67.1%) and Italy (63.8%) [11, 23, 53]. Similar to a previous study, our findings indicated that a general awareness of avian influenza was more common among the surveyed bird hunters than knowledge of the signs and symptoms [11]. Previous studies conducted with high-risk populations in Thailand and Laos also reported limited knowledge of the key signs and symptoms of avian influenza [54, 55]. Almost one third of surveyed participants perceived a risk of contracting avian influenza while hunting and processing birds, which is similar to the values found in other studies [24, 56].

Our results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as AIV was detected in more nearby geographic areas.
This observation aligns with findings from a previous study; however, the percentage of hunters who would stop was relatively higher in our study, as only 3% and 19% of active duck hunters in Georgia, USA reported that they would stop hunting if HPAI were found in duck populations in the USA and the state of Georgia, respectively [11]. This result is interesting as harvesting activities are integral to First Nations’ culture and an important source of healthy food, especially in communities experiencing food insecurity [17, 20, 22].

Our findings suggested that being aware of avian influenza or perceiving a risk of AIV infection did not influence the hunters’ decision to cease harvesting influenza-infected birds. However, those who were knowledgeable were more likely to clean their equipment after processing birds in the bush. This finding suggests that First Nations hunters are not only willing to use precautionary measures while harvesting birds, but that improving their knowledge level may lead to increased use of recommended precautionary measures. Previous studies also found that knowledge and perception of risk were significant determinants of greater compliance with recommended protective measures [23, 24]. However, in our study, being knowledgeable or perceiving risk did not always result in greater use of protective measures. Moreover, in general, the limited use of gloves and goggles while processing harvested birds was noted. These observations may be explained by protection motivation theory, which states that complying with a recommended protective health behaviour is influenced by risk perception as well as efficacy variables, including response efficacy (i.e., whether the recommended measure is effective) and self-efficacy (i.e., whether the person is capable of performing the recommended measure) [57–59].
Conclusions

Our study aimed to gain an understanding of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters and provide recommendations for pandemic plans. The findings herein indicated that First Nations subsistence hunters partook in some practices while harvesting wild birds that could potentially expose them to avian influenza, although appropriate levels of compliance with some protective measures were reported. More than half of the respondents were generally aware of avian influenza and almost one third perceived a risk of AIV infection while harvesting birds.
Participants aware of avian influenza were more likely to perceive a risk of AIV infection while harvesting birds. Our results suggest that knowledge positively influenced the use of a recommended protective measure. Regarding attitudes about hunting influenza-infected birds, our results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as AIV was detected in more nearby geographic areas.

Given that the potential exposure to AIVs while hunting is assumed to be low but the cultural importance of subsistence hunting is high, our study indicated a need for more education about avian influenza and about culturally-appropriate precautions First Nations hunters can take to reduce the possibility of AIV exposure while harvesting wild birds. We posit that First Nations hunters should be considered an avian influenza risk group and have associated special considerations included in pandemic plans.
Keywords: Avian influenza; Birds; Wild game; First Nations; Canada; Subsistence hunting; Harvesting; Pandemic plans; Risk perception
Background

Influenza A viruses may cause pandemics at unpredictable, irregular intervals, resulting in devastating social and economic effects worldwide [1]. Wild aquatic birds in the orders Anseriformes and Charadriiformes are the natural hosts for influenza A viruses; these viruses have generally remained in evolutionary stasis and are usually non-pathogenic in wild birds [2, 3]. Most avian influenza viruses (AIVs) primarily replicate in the intestinal tract of wild birds and are spread amongst birds via an indirect fecal-oral route involving contaminated aquatic habitats [4]. Humans who are directly exposed to the tissues, secretions, and excretions of infected birds or water contaminated with bird feces can become infected themselves [2, 4, 5]. The transmission of an AIV from a bird to a human has significant pandemic potential as it may result in the direct introduction of a novel virus strain or allow for the creation of a novel virus strain via reassortment [3, 5].

The transmission of AIVs from birds to humans depends on many factors, such as the susceptibility of humans to the virus and the frequency and type of contact [2, 5]. Most AIVs are generally inefficient in infecting humans; however, there have been documented cases of AIVs transmitting directly from infected birds to humans [6, 7]. During the 1997 Hong Kong “bird flu” incident, there was demonstrated transmission of highly pathogenic avian influenza (HPAI) A virus (H5N1) from infected domesticated chickens to humans [3]. More recently, some Asian countries have reported human infections with avian influenza A virus (H7N9), with most patients having a history of exposure to live poultry in wet markets [8].
As such, most pandemic plans include special considerations (e.g., enhanced surveillance, prioritization for vaccination, and antiviral prophylaxis) for avian influenza risk groups that include humans who come in close, frequent contact with domestic birds, such as farmers, poultry farm workers, veterinarians, and livestock workers [9, 10]. Longitudinally migrating wild birds appear to play a primary role in influenza transmission and there is increased concern about the introduction of HPAI virus strains into North America from Eurasia, as migratory flyways around the world intersect [3, 4]. Thus, bird hunters may also be at risk as hunting and processing practices directly expose them to the bodily fluids of wild birds and to water potentially contaminated with bird feces [5, 11].

Although the risk of AIV infection while hunting and processing wild birds is assumed to be very low [5], transmission has been previously reported. One study reported serologic evidence of past AIV infection in a recreational duck hunter and two wildlife professionals, inferring direct transmission of AIVs from wild birds to humans [12]. Another study reported that recreational waterfowl hunters were eight times more likely to be exposed to avian influenza-infected wildlife compared to occupationally-exposed people and the general public [13]. A study conducted in rural Iowa, USA, reported that participants who hunted wild birds had increased antibody titers against avian H7 influenza virus [14]. Further, in the Republic of Azerbaijan, HPAI H5N1 infection in humans is suspected to be linked to defeathering infected wild swans (Cygnus) [15]. Since handling wild birds and having contact with the aquatic habitats of wild birds are potential transmission pathways for AIV infections in hunters, it is important to better understand hunters’ knowledge and risk perceptions of avian influenza and include special considerations in pandemic plans.
This is particularly important for some Canadian Aboriginal (First Nations, Inuit, and Métis) populations whose hunting of wild birds represents subsistence harvesting as opposed to a recreational activity [16]. Herein, subsistence harvesting will refer collectively to activities associated with hunting, fishing, trapping, and gathering of animals and other food for personal, family, and community consumption [17, 18]. The practice of subsistence harvesting for some Canadian Aboriginal populations, such as the Cree First Nations of the Mushkegowuk region, is culturally and economically important, with the majority of hunters harvesting wild birds [17, 19]. Traditional land-based harvesting activities are economically valuable for the region and can reduce external economic dependence [17]. Moreover, as there are many physical, nutritional, and social benefits of this practice, it is a vital, well-established component of health and well-being in Canadian Aboriginal communities [20]. For instance, as Canadian Aboriginal populations, particularly those residing in geographically remote and isolated communities, experience a high prevalence of household food insecurity [21, 22], subsistence harvesting can provide an important source of healthy traditional foods and lessen the reliance on costly market foods.

The potential of AIV infection while hunting and harvesting wild birds varies with geographical areas, seasons, and specific activities [5, 11, 12]. Moreover, previous studies have shown that knowledge and risk perception of avian influenza can positively influence compliance with recommended protective health behaviours [23, 24]. We conducted a cross-sectional survey of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters.
The purpose of this study was to examine if knowledge and risk perception of avian influenza influenced the use of personal protection measures and attitudes about hunting influenza-infected birds. The implications for addressing the special considerations of Canadian First Nations subsistence hunters in pandemic plans will be discussed.

Methods

Community-based participatory research approach

The present study employed a community-based participatory research (CBPR) approach since the hallmark principles of CBPR can foster the engagement of Aboriginal populations and participatory methods have previously been a successful approach to partnering with Aboriginal communities [25–27]. As such, the research topic was locally relevant as it stemmed from previous research conducted in the region that explored culturally-appropriate measures to mitigate the effects of an influenza pandemic in the setting of a remote and isolated Canadian First Nations community [28]. Residents of the study community expressed questions and concerns about the transmission potential of AIVs from influenza-infected wild birds to subsistence hunters. Thus, the present study was specifically developed and conducted to address the identified questions and concerns. Following a CBPR approach, collaboration occurred throughout the research process between the researchers and a community-based advisory group (CBAG) comprised of two community representatives from the study community [29–31]. The two members of the CBAG were of First Nations heritage and were particularly interested in the topic at hand and desired to be involved. The CBAG helped design the study and was part of the iterative process of developing the survey questions and layout. The CBAG also provided input during the data analysis process, on the interpretation of results, and aided with disseminating the results to the community.
CBPR endeavors aim to use the knowledge generated to achieve action-oriented outcomes for the involved community [29, 32]. At the request of the CBAG, the results of this study were disseminated via an oral presentation to community members during a lunch-and-learn activity in June 2014. An information sheet explaining avian influenza and recommended precautionary behaviours created by Health Canada was distributed to attendees [33]. Information about emerging avian influenzas that currently are of pandemic concern and the information sheet were also incorporated into the community’s influenza pandemic plan as a newly created appendix section. Approval to conduct this research was granted by the Office of Research Ethics at the University of Waterloo (ORE #16534), and was supported by the Band Council (locally elected First Nations government body) of the involved community.
Study area, population, and data collection

The study community (name omitted for anonymity purposes) is considered remote (i.e., nearest service center with year-round road access is located over 350 kilometers away) and isolated (i.e., accessible only by airplanes year-round) [10]. The Cree First Nations community belongs to the Mushkegowuk region which is located in northern Ontario, Canada along the western shores of James Bay and the southern portion of Hudson Bay [17, 19]. The region is a productive wildlife area and the majority of hunters partake in the spring and fall bird harvests [34].
The cross-sectional survey was conducted in English (as suggested by the CBAG) from November 10–25, 2013. The time period was chosen to maximize participation, as most hunters would have returned from fall hunting activities. The survey was based on previous literature [11] and was developed in collaboration with the CBAG to ensure that it adequately addressed the objectives of the study and was culturally-appropriate. The survey employed closed-ended questions to gain a better understanding of First Nations hunters’ general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Open-ended questions were also included to allow for participants to describe their risk perceptions of AIV infection while harvesting birds as well as any additional concerns. Basic demographic questions to record the age and sex of participants were also included. Community First Nations subsistence hunters were invited to participate by the lead author (NAC) and a local community research assistant during individual meetings. The research assistant was of First Nations descent and a prominent Elder in the community. Being fluent in the Cree language, the assistant acted as a Cree translator upon request by the survey respondents. A current community housing list (updated in November 2013) which recorded all known community members living in First Nations (Band) households was used by the research assistant to identify eligible participants. Contemporary harvesting practices in the region typically involve multiple short trips versus traditional long trips [34]. To include as many hunters as possible from the study community, eligible participants were defined as current hunters, a group which included “intensive”, “active”, and “occasional” hunters (for definitions, see [17]). 
In addition to being a current hunter, participants were required to be First Nations (Band member), an adult (18 years old and over), and available to complete the survey in person during the study period to be eligible. Both male and female hunters were approached as it is widely recognized in Cree First Nations that both sexes play an important role while subsistence harvesting [35]. When approached, the participants were provided with an information/recruitment letter and the study was explained in English or Cree as required. Informed verbal consent was obtained, being culturally appropriate for the region [31, 36]. Incentives were not offered for participation. As participants preferred to complete the survey alone on their own time, a convenient time and location was arranged to collect the completed survey. Up to five follow-up visits and new survey copies were provided if the survey was not completed at the specified time and if the person was still interested in participating.
Data management and analyses

Collected surveys were coded by an identification number to maintain confidentiality of the participants. The CBAG was consulted to determine how to code inexact responses. Of note, it was decided that if a participant responded with a range of numbers, the median value was recorded. If a participant selected all of the possible response options or only provided a written response, the result was recorded as missing data. In instances where a pattern was observed amongst participants’ written responses, the responses were coded according to newly created response options approved by the CBAG to maintain the integrity of the data. Sample size for individual statistical analyses varied from 88 to 106, as not all participants answered each survey question; thus, presented percentages may not always equal 100% owing to missing data. Simple descriptive statistics were used to examine the distributions of variables pertaining to general harvesting practices, knowledge and risk perception of avian influenza, and attitudes about hunting influenza-infected birds. Cross-tabulations, as 2 × 2 contingency analyses, were used to examine the relationships between each of the main effects of sex, awareness of avian influenza, and risk perception of AIV infection by precautionary behaviours and attitudes about hunting influenza-infected birds. In instances where the expected cell count was less than five, the Fisher’s Exact Test was used in preference to the Pearson chi-square test.
Absolute values of the adjusted standard residual (ASR) greater than 1.96 indicated a significant departure from the expected count; the corresponding cells were therefore considered major contributors to the observed chi-square result. The influence of outlier values for continuous dependent variables (age, years of hunting, days of hunting per year, birds hunted per year) was examined using boxplots of raw and log-transformed data. Owing to the presence of outlier values, we log-transformed values for days of hunting and number of birds hunted per year to satisfy the homogeneity of variance assumption of analysis of variance (ANOVA). One individual’s improbable response for number of birds hunted per year was removed as it continued to distort the results. Also, one individual’s response for years of hunting was recorded as missing data since the response did not reflect the age of the participant. Differences in mean values of these dependent variables between groups for sex, awareness of avian influenza, and risk perception of AIV infection were examined using ANOVA. Statistical results were considered significant at p < 0.05. Data analyses were carried out using SPSS version 22 (SPSS Inc., Chicago, Illinois, U.S.A.). Written responses to the two open-ended questions and any additional comments were manually transcribed verbatim into electronic format to facilitate organization and coding. Qualitative coding of the transcribed data was conducted using QSR NVivo® version 9.2 (QSR International Pty Ltd., Doncaster, Victoria, Australia). Responses were deductively analyzed following a template organizing approach using the survey questions as a coding template [37, 38]. Analysis was an iterative process conducted multiple times by the lead author (NAC), and findings were presented to the CBAG as a form of member checking to verify the results [37].
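The statistical workflow described above (Pearson chi-square with a Fisher's exact fallback when any expected cell count is below five, adjusted standardized residuals flagged at |ASR| > 1.96, and log-transformed counts entering a one-way ANOVA) can be sketched with SciPy. The cell counts and group values below are hypothetical illustrations, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: awareness of avian influenza (rows)
# by perceived risk of AIV infection (columns) -- illustrative only.
table = np.array([[20, 25],
                  [35, 10]])

chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)

# Rule used in the study: prefer Fisher's exact test
# when any expected cell count is below five.
if (expected < 5).any():
    odds_ratio, p = stats.fisher_exact(table)

# Adjusted standardized residuals: cells with |ASR| > 1.96 depart
# significantly from their expected counts and drive the chi-square result.
n = table.sum()
row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
asr = (table - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))

# Log-transform a skewed count variable before one-way ANOVA
# (log1p tolerates zero counts such as "0 birds hunted per year").
group_a = np.log1p([5, 12, 30, 45, 200])   # e.g. one group of hunters
group_b = np.log1p([2, 8, 10, 15, 60])     # e.g. the comparison group
f_stat, p_anova = stats.f_oneway(group_a, group_b)
```

For a 2 × 2 table the largest |ASR| equals the square root of the chi-square statistic, so the residuals simply localize which cells produce a significant overall test.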
Results:

A total of 173 participants in the censused community were deemed eligible to participate given the inclusion criteria and of these, 126 received surveys, for a 73% contact rate. Of the 126 distributed surveys, 106 completed surveys were returned, representing an 84% cooperation rate. Overall, a response rate of 61% was achieved. Of the 106 community members that participated in the survey, 80 (75.5%) were male and 26 (24.5%) were female.
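As a quick arithmetic check, all three reported rates follow directly from the counts in the text:

```python
eligible = 173   # hunters meeting the inclusion criteria
contacted = 126  # surveys distributed
returned = 106   # completed surveys returned

contact_rate = 100 * contacted / eligible      # ~72.8 -> reported as 73%
cooperation_rate = 100 * returned / contacted  # ~84.1 -> reported as 84%
response_rate = 100 * returned / eligible      # ~61.3 -> reported as 61%
```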
The untransformed demographic and harvesting characteristics of the participants are presented in Table 1.

Table 1. Demographic and harvesting characteristics of Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

                                       n     Minimum   Maximum   Mean   Std. deviation
  Demographic information
    Age                                92    18        76        43.3   12.9
  Harvesting characteristics
    Years of hunting                   99    1         65        27.2   14.0
    Days of hunting per year           105   1         200       26.2   30.5
    Number of birds hunted per year    100   0         200       42.6   40.6

All who responded participated in the spring/summer hunting activities (n = 105; 99.1%) with fewer hunters participating during the fall (n = 57; 53.8%) and winter (n = 16; 15.1%) seasons. During these hunts, 98.1% of participants hunted Canada geese (Branta canadensis), 88.7% hunted various species of ducks (Anatinae), 69.8% hunted lesser snow geese (Anser c. caerulescens, also referred to as wavies), and 43.4% hunted species of shorebirds (Charadriiformes). While hunting, the majority of participants reported having direct contact with water (n = 89; 84.0%). Bird harvesting practices were generally similar whether camping in the bush or at home; thus, only results pertaining to camping in the bush are presented. In the bush, most hunters processed the birds themselves (n = 72; 67.9%) or a family member was involved (n = 67; 63.2%). Most hunters partook in all of the bird processing activities in the bush; the percentages of participants who reported cleaning, plucking, and gutting the birds were 74.5%, 94.3%, and 77.4%, respectively. Regarding the use of precautionary measures while processing birds in the bush, it was reported that 18 (17.0%) hunters wore gloves and 2 (1.9%) hunters wore goggles.
In the bush, the majority of hunters washed their hands (n = 105; 99.1%) and sanitized their equipment (n = 69; 65.1%) after processing birds. Moreover, about half of the participants (n = 50; 47.2%) reported receiving the annual vaccination against seasonal human influenza viruses (Figure 1).Figure 1 Compliance with recommended protective health measures among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013. Compliance with recommended protective health measures among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013. The total frequency and percentage of participants’ knowledge of avian influenza, risk perception of AIV infection, and attitudes about hunting influenza-infected birds are presented in Table 2. Approximately half of the participants (n = 56; 52.8%) reported being generally aware of avian influenza, but few were aware of the signs and symptoms of avian influenza in birds (n = 16; 15.1%) or humans (n = 9; 8.5%).Table 2 Frequency and percentage a of knowledge of avian influenza, risk perception of avian influenza virus infection, and attitudes about hunting influenza-infected birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013 All huntersMalesFemalesNo (%)Yes (%)No (%)Yes (%)No (%)Yes (%) Knowledge Aware of avian influenza49 (46.2)56 (52.8)37 (46.3)42 (52.5)12 (46.2)14 (53.8)Aware of signs and symptoms of avian influenza in birds89 (84.0)16 (15.1)67 (83.8)12 (15.0)22 (84.6)4 (15.4)Aware of signs and symptoms of avian influenza in humans95 (89.6)9 (8.5)74 (92.5)4 (5.0)21 (80.8)5 (19.2) Risk perception Perceived risk of avian influenza virus infection68 (64.2)29 (27.4)52 (65.0)23 (28.8)16 (61.5)6 (23.1) Attitudes Cease hunting if avian influenza detected in North American birds60 (56.6)43 (40.6)49 (61.3)29 (36.3)11 (42.3)14 (53.8)Cease hunting if avian influenza detected in 
Province of Ontario birds54 (50.9)45 (42.5)45 (56.3)30 (37.5)9 (34.6)15 (57.7)Cease hunting if avian influenza detected in Regional birds46 (43.4)55 (51.9)39 (48.8)37 (46.3)7 (26.9)18 (69.2) aPercentages may not always equal 100% owing to missing data. Frequency and percentage a of knowledge of avian influenza, risk perception of avian influenza virus infection, and attitudes about hunting influenza-infected birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013 aPercentages may not always equal 100% owing to missing data. Some participants (n = 29; 27.4%) perceived a risk of contracting avian influenza while harvesting birds. “Just wondering every time we go out hunting geese in the spring, if any of the geese that come in [the] spring are carrying the flu” (Participant #41).“Yes there is a risk [be]cause the birds [are] from the South … who knows what they’ll catch out there” (Participant #103).“It will concern me if the bird flu is here on our Land and I wouldn’t be sure about hunting birds” (Participant #42). “Just wondering every time we go out hunting geese in the spring, if any of the geese that come in [the] spring are carrying the flu” (Participant #41). “Yes there is a risk [be]cause the birds [are] from the South … who knows what they’ll catch out there” (Participant #103). “It will concern me if the bird flu is here on our Land and I wouldn’t be sure about hunting birds” (Participant #42). On the other hand, many participants did not perceive a risk of AIV infection while harvesting birds, since local regional birds were not perceived to be infected with avian influenza. “I thought there was only bird flu in Asia …” (Participant #24).“If birds were sick, I don’t think they would make it this far [North]” (Participant #70).“No reports that bird flu has arrived in this area and people are not getting sick” (Participant #36). “I thought there was only bird flu in Asia …” (Participant #24). 
Detection of avian influenza in wild birds in nearby geographic areas would reportedly influence the participants’ harvesting behaviour. The frequency of participants who would cease harvesting birds was highest if avian influenza was detected in local regional birds (n = 55; 51.9%). Forty-five (42.5%) respondents reported that they would stop hunting if avian influenza was found in birds from within the Province of Ontario, and 43 (40.6%) would stop hunting if the virus was found in North American birds. For all of the aforementioned scenarios, some participants added written responses indicating that they were not sure whether they would stop hunting and requested relevant information. The majority of respondents also were interested in receiving information about avian influenza transmission (n = 83; 78.3%), flyways of migrating birds (n = 79; 74.5%), and precautions to minimize exposure (n = 82; 77.4%). ANOVA showed that males hunted significantly more birds per year than did females (F1,96 = 12.1; p = 0.001; Figure 2). No significant difference in mean values of age, years of hunting, and days of hunting per year was observed between males and females. ANOVA did not identify any significant differences in mean age, years of hunting, days of hunting per year, or number of birds hunted per year between those who were or were not aware of avian influenza. However, ANOVA did show that those who hunted significantly more days per year did not perceive a risk of AIV infection while harvesting birds (F1,94 = 4.4; p = 0.040; Figure 2).
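The group comparisons above were made with one-way ANOVA, where the F statistic is the ratio of between-group to within-group mean squares. A minimal sketch in plain Python; the bird counts below are hypothetical illustration values, not the study data:

```python
# One-way ANOVA: F = MS_between / MS_within.
# The data below are made up for illustration only.

def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical birds-per-year counts for two groups of hunters
males = [120, 80, 150, 90, 200, 110]
females = [40, 60, 30, 50]
f_stat, df_b, df_w = one_way_anova(males, females)
print(f"F({df_b},{df_w}) = {f_stat:.2f}")
```

With only two groups, this F equals the square of the pooled two-sample t statistic, so the two tests are equivalent here.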
No significant difference in mean values of age, years of hunting, and number of birds hunted per year was observed between those who did or did not perceive a risk of AIV infection.

Figure 2. Analysis of variance for number of birds hunted per year by males and females (a) and number of days hunted per year by perceived risk of avian influenza virus infection while harvesting birds (b) among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013.

For all participants, in 2 × 2 contingency analysis, a significant dependence was observed between awareness of avian influenza and risk perception of AIV infection (Pearson χ2 = 4.456; p = 0.035) (Table 3). An ASR of +2.1 indicated that participants aware of avian influenza were significantly more likely to perceive a risk of AIV infection while harvesting birds.
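The Pearson chi-square statistics and adjusted standardized residuals (ASRs) reported in this section can be reproduced directly from the cross-tabulation counts. A minimal sketch in plain Python, using the counts from Table 3 (the two-sided p-value uses the fact that a chi-square variate with 1 df is a squared standard normal):

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square (no continuity correction), two-sided p-value
    (df = 1), and adjusted standardized residuals for a 2x2 table."""
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row)
    chi2 = 0.0
    asr = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
            # Adjusted standardized residual (Haberman)
            se = math.sqrt(expected * (1 - row[i] / n) * (1 - col[j] / n))
            asr[i][j] = (table[i][j] - expected) / se
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square, df = 1
    return chi2, p, asr

# Table 3 counts: awareness of avian influenza (rows: no/yes)
# by perceived risk of AIV infection (columns: no/yes)
chi2, p, asr = chi_square_2x2([[37, 9], [31, 20]])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, ASR(aware, perceived risk) = {asr[1][1]:+.1f}")
```

Running the same function on the counts in Tables 4 and 5 reproduces the reported χ2 values of 4.123 and 4.070 as well.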
No significant dependence was seen between sex and awareness of avian influenza or sex and perceived risk of AIV infection.

Table 3. Cross-tabulation for awareness of avian influenza by risk perception of avian influenza infection while harvesting birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

| Aware of avian influenza | | Perceived risk: No | Perceived risk: Yes | Total |
|---|---|---|---|---|
| No | Count | 37 | 9 | 46 |
| | Adjusted residual | +2.1 | -2.1 | |
| Yes | Count | 31 | 20 | 51 |
| | Adjusted residual | -2.1 | +2.1 | |

A significant dependence was observed between sex and the attitude of ceasing hunting if influenza was detected in regional birds (Pearson χ2 = 4.123; p = 0.042) (Table 4). An ASR of -2.0 indicated that males were significantly less likely to stop hunting if influenza was detected in the local regional birds.
No significant dependence was observed between the two main effects of awareness of avian influenza and perceived risk of AIV infection by attitudes about hunting influenza-infected birds.

Table 4. Cross-tabulation for sex by cease hunting if influenza detected in Regional birds among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

| Sex | | Cease hunting: No | Cease hunting: Yes | Total |
|---|---|---|---|---|
| Male | Count | 39 | 37 | 76 |
| | Adjusted residual | +2.0 | -2.0 | |
| Female | Count | 7 | 18 | 25 |
| | Adjusted residual | -2.0 | +2.0 | |

A significant dependence also was observed between awareness of avian influenza and the precautionary behaviour of sanitizing equipment after processing birds while camping in the bush (Pearson χ2 = 4.070; p = 0.044) (Table 5). An ASR of +2.0 indicated that a significantly greater frequency of aware participants were among those who cleaned their bird processing equipment. No significant dependence was observed between awareness of avian influenza and any of the other recommended precautions to be used while harvesting birds.
Moreover, no significant dependence was observed between the two main effects of sex and perceived risk of AIV infection by any of the precautionary behaviours.

Table 5. Cross-tabulation for awareness of avian influenza by sanitizing bird processing equipment in the bush among Canadian First Nations subsistence hunters residing in the study community (n = 106), November 10–25, 2013

| Aware of avian influenza | | Sanitize equipment: No | Sanitize equipment: Yes | Total |
|---|---|---|---|---|
| No | Count | 21 | 27 | 48 |
| | Adjusted residual | +2.0 | -2.0 | |
| Yes | Count | 14 | 42 | 56 |
| | Adjusted residual | -2.0 | +2.0 | |

Discussion

Harvesting activities

As mentioned, the potential of AIV infection while hunting and processing wild birds varies with specific practices, seasons, and geographical areas [5, 11, 12]. The hunters reported being in frequent contact with wild birds, as some participants hunted for more than 100 days per year and harvested up to 200 birds per year. Our findings indicated that First Nations subsistence hunters were involved in bird harvesting practices, such as processing the birds and having direct contact with water in the bush, that pose an increased hazard of AIV infection among this subpopulation. The main proposed pathway of transmission of AIV to humans is close contact between the tissues, secretions, and excretions of an infected bird and the respiratory tract, gastrointestinal tract, or conjunctiva of a human [2, 7, 39]. Infected birds shed copious amounts of virus particles in their feces, which can also contaminate the environment and bodies of water [40, 41]. Our findings revealed that the majority of hunters had direct contact with water and cleaned, plucked, and gutted the wild birds themselves.
If processing an influenza-infected wild bird in this manner, hunters may be exposed to virus-laden tissues, secretions, and excretions [2, 5]. The use of personal protective equipment was not routine practice, as most hunters did not wear gloves and goggles to protect themselves while processing birds. However, most hunters reported using other measures of personal protection, such as washing their hands and cleaning their equipment, which can limit post-harvest AIV exposure. The timing of the hunters’ bird harvesting activities in relation to when prevalence peaks for AIVs and human influenza viruses is of particular interest. Similar to previous reports, our study revealed that the majority of hunters were involved in the spring and fall bird harvests [16, 19, 34]. The timing of these harvests corresponds to freeze-up and break-up events in the region, which vary every year but generally run from April to October [42]. During these harvests, participants reported hunting migratory wild birds that are potential carriers of AIVs, as all known influenza A virus subtypes have been identified in these birds [3, 43]. For instance, in North American wild ducks, AIV prevalence peaks around late summer/early fall prior to southbound migration, with the highest virus isolation rates reported in juvenile ducks [44, 45]. On the other hand, previous studies have reported relatively low prevalence of AIVs in Canada geese regardless of the season [45, 46]. Moreover, in Canada, the peak season of influenza A infection in humans typically runs from November to April [33]. Similar to another study, our results suggest that the possibility of co-infection with AIVs and human influenza viruses resulting in a reassortment event is unlikely, as the timing of the hunters’ potential exposure to AIVs differs from that of seasonal human influenza viruses [5].
Based on previous studies, the surveyed participants generally hunt for wild birds around the southwestern coast of Hudson Bay and the western coast of James Bay, which lies along the Mississippi migratory flyway [3, 34, 47, 48]. Migratory flyways around the world intersect, particularly between eastern Eurasia and Alaska and between Europe and eastern North America, raising concerns about the exchange of AIVs between the Eurasian and American virus superfamilies [3, 43]. Intercontinental exchange of entire AIV genomes has not yet been reported and Eurasian HPAI virus subtypes have not been previously detected in North American migratory birds [43, 49]. However, reassortment events between the two lineages have been reported, notably in Alaska and along the northeastern coast of Canada [43, 49–51]. These observations suggest that the introduction of a novel AIV is more likely to occur along the Pacific and Atlantic coasts of North America, but once introduced, it has been suggested that migration to major congregation sites may disperse the novel AIV across flyways [49, 51, 52].

Awareness, risk perception, and attitudes

Approximately half of our study participants were generally aware of avian influenza (52.8%), which is lower than in previous studies conducted with bird hunters in the USA (86%) and poultry workers in Nigeria (67.1%) and Italy (63.8%) [11, 23, 53]. Similar to a previous study, our findings indicated that a general awareness of avian influenza was more common among the surveyed bird hunters than knowledge of the signs and symptoms [11].
Previous studies conducted with high-risk populations in Thailand and Laos also reported limited knowledge of the key signs and symptoms of avian influenza [54, 55]. Almost one third of surveyed participants perceived a risk of contracting avian influenza while hunting and processing birds, which is similar to the values found in other studies [24, 56]. Our results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as the hypothetical detection of AIV moved to more nearby geographic areas. This observation aligns with findings from a previous study; however, the percentage of hunters who would stop was relatively higher in our study, as only 3% and 19% of active duck hunters in Georgia, USA reported that they would stop hunting if HPAI were found in duck populations in the USA and the state of Georgia, respectively [11]. This result is interesting as harvesting activities are integral to First Nations’ culture and an important source of healthy food, especially in communities experiencing food insecurity [17, 20, 22]. Our findings suggested that being aware of avian influenza or perceiving a risk of AIV infection did not influence the hunters’ decision to cease harvesting influenza-infected birds. However, those who were knowledgeable were more likely to clean their equipment after processing birds in the bush. This finding suggests that First Nations hunters are not only willing to use precautionary measures while harvesting birds, but that improving their knowledge level may lead to an increased use of recommended precautionary measures. Previous studies also found that knowledge and perception of risk were significant determinants of greater compliance with recommended protective measures [23, 24]. However, in our study, being knowledgeable or perceiving risk did not always result in greater use of protective measures. Moreover, in general, the limited use of gloves and goggles while processing harvested birds was noted.
These observations may be explained by protection motivation theory, which states that complying with a recommended protective health behaviour is influenced by risk perception as well as efficacy variables, including response efficacy (i.e., whether the recommended measure is effective) and self-efficacy (i.e., whether the person is capable of performing the recommended measure) [57–59]. According to this theory, risk perception will generate a willingness to act, but efficacy variables will determine whether the resulting action is adaptive or maladaptive [57, 58]. In our study, those who perceived a risk may have doubted the effectiveness of recommended measures and/or had low self-efficacy owing to limited access to resources and ability to afford supplies required to implement the measures [60].

Recommendations for influenza pandemic plans

These data support previous findings which suggest that bird hunting and processing activities may potentially expose individuals to avian influenza [5, 11–14]. Acknowledging the various benefits and cultural importance of subsistence harvesting [17, 20], while taking into account the increased hazard of potential AIV exposure in First Nations hunters, their inclusion as an avian influenza risk group with associated special considerations in pandemic plans seems warranted. The potential for a novel AIV to be introduced into an Aboriginal Canadian population is of great concern as they face many health disparities and are particularly susceptible to influenza and related complications [61]. Moreover, previous influenza pandemics have disproportionately impacted Aboriginal Canadians, especially populations living in geographically remote communities, and revealed inadequacies in preparedness with regards to addressing their pre-existing inequalities and special needs during a pandemic [62–65]. Efforts should be directed towards improving education for First Nations hunters regarding avian influenza and the hazard posed by AIVs while harvesting wild birds. More specifically, our results indicated that educational endeavours should include information regarding the signs and symptoms of avian influenza, transmission dynamics, flyways of migrating birds, and recommended precautionary measures (Table 6).
Accordingly, access to supplies required to comply with recommended protective measures, such as cleaning solutions and gloves, should be improved for First Nations subsistence hunters. Moreover, our findings suggested that detection of avian influenza in wild birds in nearby geographic areas would influence the participants’ harvesting behaviour. Given this, we recommend that a culturally-appropriate communication system be implemented to promptly inform subsistence hunters and other community members of the findings and any associated recommendations.

Table 6. Recommended precautions for Canadian First Nations subsistence hunters to reduce exposure to avian influenza viruses while harvesting wild birds (adapted from [33])

- Do not touch or eat sick birds or birds that have died for unknown reasons.
- Avoid touching the blood, secretions, or droppings of wild game birds.
- Do not rub your eyes, touch your face, eat, drink, or smoke when processing wild game birds.
- Keep young children away when processing wild game birds and discourage them from playing in areas that could be contaminated with wild bird droppings.
- When preparing game, wash knives, tools, work surfaces, and other equipment with soap and warm water followed by a household bleach solution (0.5% sodium hypochlorite).
- Wear water-proof household gloves or disposable latex/plastic gloves when processing wild game birds.
- Wash gloves and hands (for at least 20 seconds) with soap and warm water immediately after you have finished processing game or cleaning equipment. If there is no water available, remove any dirt using a moist towelette, apply an alcohol-based hand gel (between 60–90% alcohol), and wash your hands with soap and water as soon as possible.
- Change clothes after handling wild game birds and keep soiled clothing and shoes in a sealed plastic bag until they can be washed.
- When cooking birds, the inside temperature should reach 85°C for whole birds or 74°C for bird parts (no visible pink meat and juice runs clear).
- Never keep wild birds in your home or as pets.
- Receive the annual influenza vaccine.
- If you become sick while handling birds or shortly afterwards, see your doctor and inform your doctor that you have been in close contact with wild birds.
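The 0.5% sodium hypochlorite disinfecting solution recommended for equipment can be prepared by diluting household bleach. A small sketch of the dilution arithmetic (C1·V1 = C2·V2); the 5% stock concentration is an assumption for illustration, since household bleach strength varies by product:

```python
# Dilution arithmetic: C1 * V1 = C2 * V2.
# Assumes household bleach at ~5% sodium hypochlorite (varies by product;
# check the label of the bleach actually used).

def bleach_volume_ml(stock_pct, target_pct, total_ml):
    """Millilitres of stock bleach needed to make total_ml of solution
    at target_pct; the remainder is water."""
    bleach_ml = total_ml * target_pct / stock_pct
    return bleach_ml, total_ml - bleach_ml

bleach, water = bleach_volume_ml(stock_pct=5.0, target_pct=0.5, total_ml=1000)
print(f"{bleach:.0f} mL bleach + {water:.0f} mL water")  # 100 mL bleach + 900 mL water
```

With a 5% stock, this works out to roughly one part bleach to nine parts water.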
Study strengths and limitations

To our knowledge, this is the first study to examine the knowledge and risk perceptions of avian influenza among Canadian First Nations subsistence hunters. The censused approach taken to select participants and the high contact and cooperation rates strengthen the assertion that our findings are representative of the study community. Also, in accordance with a CBPR approach, the CBAG was involved throughout the entire research process, thereby ensuring that the study was conducted in a culturally-appropriate manner and that the knowledge generated was used to directly benefit the involved community. Despite the novelty and significance of our findings, some limitations of our study must be highlighted when interpreting our results. First, the analysis was based on a cross-sectional survey of self-reported data, which may limit drawing definitive conclusions about the observed relationships.
The biases in recalling and reporting cannot be entirely ruled out; however, to help alleviate the potential for biased responses, participants were assured that their responses would remain anonymous. Also, it is not possible to discern whether those who did not return the survey or refused to participate were different in any way from those who did participate. However, there is no obvious reason to suspect that non-respondents and people who chose not to participate were any different from the respondents. Future research should examine the prevalence of AIVs, particularly those strains that are currently of concern to humans (e.g., H5, H7), in birds from within the Mushkegowuk Territory that are typically harvested. Also, analyzing sera for antibodies against AIV subtypes would help evaluate whether previous AIV infections have occurred in First Nations subsistence hunters. Moreover, conducting a quantitative exposure assessment would provide information to help characterize the study population’s exposure potential to AIVs. Lastly, previous research has noted that various barriers impede the effectiveness of implementing recommended pandemic mitigation measures [60]. Thus, future research should aim to understand whether any barriers exist with regard to complying with recommended precautions to reduce exposure to AIVs while harvesting birds, and whether measures need to be adapted to be more context-specific and culturally-appropriate while still maintaining their effectiveness.
Intercontinental exchange of entire AIV genomes has not yet been reported and Eurasian HPAI virus subtypes have not been previously detected in North American migratory birds [43, 49]. However, reassortment events between the two lineages have been reported, notably in Alaska and along the northeastern coast of Canada [43, 49–51]. These observations suggest that the introduction of a novel AIV is more likely to occur along the Pacific and Atlantic coasts of North America, but once introduced, it has been suggested that migration to major congregation sites may disperse the novel AIV across flyways [49, 51, 52]. Awareness, risk perception, and attitudes: Approximately half of our study participants were generally aware of avian influenza (52.8%), which is lower than rates reported in previous studies of bird hunters in the USA (86%) and poultry workers in Nigeria (67.1%) and Italy (63.8%) [11, 23, 53]. Similar to a previous study, our findings indicated that a general awareness of avian influenza was more common among the surveyed bird hunters compared to knowledge of the signs and symptoms [11]. Previous studies conducted with high-risk populations in Thailand and Laos also reported limited knowledge of the key signs and symptoms of avian influenza [54, 55]. Almost one third of surveyed participants perceived a risk of contracting avian influenza while hunting and processing birds, which is similar to the values found in other studies [24, 56]. Our results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as AIV was detected in more nearby geographic areas. This observation aligns with findings from a previous study; however, the percentage of hunters who would stop was relatively higher in our study, as only 3% and 19% of active duck hunters in Georgia, USA reported that they would stop hunting if HPAI were found in duck populations in the USA and the state of Georgia, respectively [11]. 
This result is interesting as harvesting activities are integral to First Nations’ culture and an important source of healthy food, especially in communities experiencing food insecurity [17, 20, 22]. Our findings suggested that being aware of avian influenza or perceiving a risk of AIV infection did not influence the hunters’ decision to cease harvesting influenza-infected birds. However, those who were knowledgeable were more likely to clean their equipment after processing birds in the bush. This finding suggests that First Nations hunters are not only willing to use precautionary measures while harvesting birds, but that improving their knowledge level may lead to an increased use of recommended precautionary measures. Previous studies also found that knowledge and risk perception were significant determinants of greater compliance with recommended protective measures [23, 24]. However, in our study, being knowledgeable or perceiving risk did not always result in greater use of protective measures. Moreover, in general, the limited use of gloves and goggles while processing harvested birds was noted. These observations may be explained by the protection motivation theory, which states that complying with a recommended protective health behavior is influenced by risk perception as well as efficacy variables, including response efficacy (i.e., whether the recommended measure is effective) and self-efficacy (i.e., whether the person is capable of performing the recommended measure) [57–59]. According to this theory, risk perception will generate a willingness to act, but efficacy variables will determine whether the resulting action is adaptive or maladaptive [57, 58]. In our study, those who perceived a risk may have doubted the effectiveness of recommended measures and/or had low self-efficacy owing to limited access to resources and limited ability to afford the supplies required to implement the measures [60]. 
Recommendations for influenza pandemic plans: These data support previous findings which suggest that bird hunting and processing activities may potentially expose individuals to avian influenza [5, 11–14]. Acknowledging the various benefits and cultural importance of subsistence harvesting [17, 20], while taking into account the increased hazard of potential AIV exposure in First Nations hunters, their inclusion as an avian influenza risk group with associated special considerations in pandemic plans seems warranted. The potential for a novel AIV to be introduced into an Aboriginal Canadian population is of great concern as they face many health disparities and are particularly susceptible to influenza and related complications [61]. Moreover, previous influenza pandemics have disproportionately impacted Aboriginal Canadians, especially those populations living in geographically remote communities, and reflected inadequacies in preparedness with regards to addressing their pre-existing inequalities and special needs during a pandemic [62–65]. Efforts should be directed towards improving education for First Nations hunters regarding avian influenza and the hazard posed by AIVs while harvesting wild birds. More specifically, our results indicated that educational endeavours should include information regarding the signs and symptoms of avian influenza, transmission dynamics, flyways of migrating birds, and recommended precautionary measures (Table 6). Accordingly, access to supplies required to comply with recommended protective measures, such as cleaning solutions and gloves, should be improved for First Nations subsistence hunters. Moreover, our findings suggested that detection of avian influenza in wild birds in nearby geographic areas would influence the participants’ harvesting behaviour. 
Given this, we recommend that a culturally-appropriate communication system be implemented to promptly inform subsistence hunters and other community members of the findings and any associated recommendations.
Table 6 Recommended precautions for Canadian First Nations subsistence hunters to reduce exposure to avian influenza viruses while harvesting wild birds (adapted from [33]):
- Do not touch or eat sick birds or birds that have died for unknown reasons
- Avoid touching the blood, secretions, or droppings of wild game birds
- Do not rub your eyes, touch your face, eat, drink, or smoke when processing wild game birds
- Keep young children away when processing wild game birds and discourage them from playing in areas that could be contaminated with wild bird droppings
- When preparing game, wash knives, tools, work surfaces, and other equipment with soap and warm water followed by a household bleach solution (0.5% sodium hypochlorite)
- Wear water-proof household gloves or disposable latex/plastic gloves when processing wild game birds
- Wash gloves and hands (for at least 20 seconds) with soap and warm water immediately after you have finished processing game or cleaning equipment. If no water is available, remove any dirt using a moist towelette, apply an alcohol-based hand gel (60-90% alcohol), and wash your hands with soap and water as soon as possible
- Change clothes after handling wild game birds and keep soiled clothing and shoes in a sealed plastic bag until they can be washed
- When cooking birds, the inside temperature should reach 85°C for whole birds or 74°C for bird parts (no visible pink meat and juices run clear)
- Never keep wild birds in your home or as pets
- Receive the annual influenza vaccine
- If you become sick while handling birds or shortly afterwards, see your doctor and inform your doctor that you have been in close contact with wild birds. 
Study strengths and limitations: To our knowledge, this is the first study to examine the knowledge and risk perceptions of avian influenza among Canadian First Nations subsistence hunters. The censused approach taken to select participants and the high contact and cooperation rates strengthen the assertion that our findings are representative of the study community. Also, in accordance with a CBPR approach, the CBAG was involved throughout the entire research process, thereby ensuring that the study was conducted in a culturally-appropriate manner and that the knowledge generated was used to directly benefit the involved community. Despite the novelty and significance of our findings, some limitations of our study must be highlighted when interpreting our results. First, the analysis was based on a cross-sectional survey of self-reported data which may limit drawing definitive conclusions about the observed relationships. The biases in recalling and reporting cannot be entirely ruled out; however, to help alleviate the potential for biased responses, participants were assured that their responses would remain anonymous. Also, it is not possible to discern whether those who did not return the survey or refused to participate were different in any way from those who did participate. However, there is no obvious reason to suspect that non-respondents and people who chose not to participate were any different from the respondents. Future research should examine the prevalence of AIVs, particularly those strains that are currently of concern to humans (e.g., H5, H7), in birds from within the Mushkegowuk Territory that are typically harvested. Also, analyzing the sera for antibodies against AIV subtypes would be helpful to evaluate if previous AIV infections occurred in First Nation subsistence hunters. 
Moreover, conducting a quantitative exposure assessment would provide information to help characterize the study population’s exposure potential to AIVs. Lastly, previous research has noted that various barriers impede the effectiveness of implementing recommended pandemic mitigation measures [60]. Thus, future research should aim to understand if any barriers exist with regard to complying with recommended precautions to reduce exposure to AIVs while harvesting birds and whether measures need to be adapted to be more context-specific and culturally-appropriate, while still maintaining the effectiveness of the measure. Conclusions: Our study aimed to gain an understanding of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters and to provide recommendations for pandemic plans. The findings herein indicated that First Nations subsistence hunters partook in some practices while harvesting wild birds that could potentially expose them to avian influenza, although appropriate levels of compliance with some protective measures were reported. More than half of the respondents were generally aware of avian influenza and almost one third perceived a risk of AIV infection while harvesting birds. Participants aware of avian influenza were more likely to perceive a risk of AIV infection while harvesting birds. Our results suggest that knowledge positively influenced the use of a recommended protective measure. Regarding attitudes about hunting influenza-infected birds, our results revealed that the frequency of First Nations hunters who would cease harvesting birds increased as AIV was detected in more nearby geographic areas. 
Given that the potential exposure to AIVs while hunting is assumed to be low but the cultural importance of subsistence hunting is high, our study indicated a need for more culturally-appropriate education about avian influenza and about precautions First Nations hunters can take to reduce possible AIV exposure while harvesting wild birds. We posit that First Nations hunters should be considered an avian influenza risk group, with associated special considerations included in pandemic plans.
Background: There is concern about avian influenza virus (AIV) infections in humans. Subsistence hunters may be a potential risk group for AIV infections as they frequently come into close contact with wild birds and the aquatic habitats of birds while harvesting. This study aimed to examine if knowledge and risk perception of avian influenza influenced the use of protective measures and attitudes about hunting influenza-infected birds among subsistence hunters. Methods: Using a community-based participatory research approach, a cross-sectional survey was conducted with current subsistence hunters (n = 106) residing in a remote and isolated First Nations community in northern Ontario, Canada from November 10-25, 2013. Simple descriptive statistics, cross-tabulations, and analysis of variance (ANOVA) were used to examine the distributions and relationships between variables. Written responses were deductively analyzed. Results: ANOVA showed that males hunted significantly more birds per year than did females (F1,96 = 12.1; p = 0.001) and that hunters who did not perceive a risk of AIV infection hunted significantly more days per year (F1,94 = 4.4; p = 0.040). Hunters engaged in bird harvesting practices that could expose them to AIVs, namely by cleaning, plucking, and gutting birds and having direct contact with water. It was reported that 18 (17.0%) hunters wore gloves and 2 (1.9%) hunters wore goggles while processing birds. The majority of hunters washed their hands (n = 105; 99.1%) and sanitized their equipment (n = 69; 65.1%) after processing birds. More than half of the participants reported being aware of avian influenza, while almost one third perceived a risk of AIV infection while harvesting birds. Participants aware of avian influenza were more likely to perceive a risk of AIV infection while harvesting birds. Our results suggest that knowledge positively influenced the use of a recommended protective measure. 
Regarding attitudes, the frequency of participants who would cease harvesting birds was highest if avian influenza was detected in regional birds (n = 55; 51.9%). Conclusions: Our study indicated a need for more education about avian influenza and precautionary behaviours that are culturally-appropriate. First Nations subsistence hunters should be considered an avian influenza risk group and have associated special considerations included in future influenza pandemic plans.
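The one-way ANOVA comparisons reported above (e.g., F1,96 = 12.1; p = 0.001 for birds hunted per year by sex) can be sketched as a small computation. This is an illustrative example only, using synthetic numbers rather than the study's data; the group values and sample sizes below are hypothetical.

```python
# Illustrative one-way ANOVA F statistic (synthetic data, not the study's).
def one_way_anova(*groups):
    """Return (F statistic, df_between, df_within) for a one-way ANOVA."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    # Between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

males = [120, 150, 90, 200, 110]   # hypothetical birds harvested per year
females = [40, 60, 30, 55, 45]     # hypothetical birds harvested per year
f_stat, df_b, df_w = one_way_anova(males, females)
print(f"F({df_b},{df_w}) = {f_stat:.2f}")
```

The resulting F statistic would then be compared against the F distribution with the stated degrees of freedom to obtain the p-value, as the study's analysis did.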
Background: Influenza A viruses may cause pandemics at unpredictable, irregular intervals resulting in devastating social and economic effects worldwide [1]. Wild aquatic birds in the orders Anseriformes and Charadriiformes are the natural hosts for influenza A viruses; these viruses have generally remained in evolutionary stasis and are usually non-pathogenic in wild birds [2, 3]. Most avian influenza viruses (AIVs) primarily replicate in the intestinal tract of wild birds and are spread amongst birds via an indirect fecal-oral route involving contaminated aquatic habitats [4]. Humans who are directly exposed to the tissues, secretions, and excretions of infected birds or water contaminated with bird feces can become infected themselves [2, 4, 5]. The transmission of an AIV from a bird to a human has significant pandemic potential as it may result in the direct introduction of a novel virus strain or allow for the creation of a novel virus strain via reassortment [3, 5]. The transmission of AIVs from birds to humans depends on many factors, such as the susceptibility of humans to the virus and the frequency and type of contact [2, 5]. Most AIVs are generally inefficient in infecting humans; however, there have been documented cases of AIVs transmitting directly from infected birds to humans [6, 7]. During the 1997 Hong Kong “bird flu” incident, there was demonstrated transmission of highly pathogenic avian influenza (HPAI) A virus (H5N1) from infected domesticated chickens to humans [3]. More recently, some Asian countries have reported human infections of avian influenza A virus (H7N9) with most patients having a history of exposure to live poultry in wet markets [8]. 
As such, most pandemic plans include special considerations (e.g., enhanced surveillance, prioritization for vaccination, and antiviral prophylaxis) for avian influenza risk groups that include humans who come in close, frequent contact with domestic birds, such as farmers, poultry farm workers, veterinarians, and livestock workers [9, 10]. Longitudinally migrating wild birds appear to play a primary role in influenza transmission and there is increased concern about the introduction of HPAI virus strains in North America from Eurasia, as migratory flyways around the world intersect [3, 4]. Thus, bird hunters may also be at risk as hunting and processing practices directly expose them to the bodily fluids of wild birds and water potentially contaminated with bird feces [5, 11]. Although the risk of AIV infection while hunting and processing wild birds is assumed to be very low [5], transmission has been previously reported. One study reported serologic evidence of past AIV infection in a recreational duck hunter and two wildlife professionals, inferring direct transmission of AIVs from wild birds to humans [12]. Another study reported that recreational waterfowl hunters were eight times more likely to be exposed to avian influenza-infected wildlife compared to occupationally-exposed people and the general public [13]. A study conducted in rural Iowa, USA, reported that participants who hunted wild birds had increased antibody titers against avian H7 influenza virus [14]. Further, in the Republic of Azerbaijan, HPAI H5N1 infection in humans is suspected to be linked to defeathering infected wild swans (Cygnus) [15]. Since handling wild birds and having contact with the aquatic habitats of wild birds are potential transmission pathways for AIV infections in hunters, it is important to better understand hunters’ knowledge and risk perceptions of avian influenza and include special considerations in pandemic plans. 
This is particularly important for some Canadian Aboriginal (First Nations, Inuit, and Métis) populations whose hunting of wild birds represents subsistence harvesting as opposed to a recreational activity [16]. Herein, subsistence harvesting will refer collectively to activities associated with hunting, fishing, trapping, and gathering of animals and other food for personal, family, and community consumption [17, 18]. The practice of subsistence harvesting for some Canadian Aboriginal populations, such as the Cree First Nations of the Mushkegowuk region, is culturally and economically important with the majority of hunters harvesting wild birds [17, 19]. Traditional land-based harvesting activities are economically valuable for the region and can reduce external economic dependence [17]. Moreover, as there are many physical, nutritional, and social benefits of this practice, it is a vital, well-established component of health and well-being in Canadian Aboriginal communities [20]. For instance, as Canadian Aboriginal populations, particularly those residing in geographically remote and isolated communities, experience a high prevalence of household food insecurity [21, 22], subsistence harvesting can provide an important source of healthy traditional foods and lessen the reliance on costly market foods. The potential of AIV infection while hunting and harvesting wild birds varies with geographical areas, seasons, and specific activities [5, 11, 12]. Moreover, previous studies have shown that knowledge and risk perception of avian influenza can positively influence compliance with recommended protective health behaviours [23, 24]. We conducted a cross-sectional survey of the bird harvesting practices and knowledge, risk perceptions, and attitudes regarding avian influenza among Canadian First Nations subsistence hunters. 
The purpose of this study was to examine if knowledge and risk perception of avian influenza influenced the use of personal protection measures and attitudes about hunting influenza-infected birds. The implications for addressing the special considerations of Canadian First Nations subsistence hunters in pandemic plans will be discussed. 
16,060
440
12
[ "birds", "influenza", "hunters", "avian", "avian influenza", "study", "community", "harvesting", "nations", "risk" ]
[ "test", "test" ]
[CONTENT] Avian influenza | Birds | Wild game | First Nations | Canada | Subsistence hunting | Harvesting | Pandemic plans | Risk perception [SUMMARY]
[CONTENT] Avian influenza | Birds | Wild game | First Nations | Canada | Subsistence hunting | Harvesting | Pandemic plans | Risk perception [SUMMARY]
[CONTENT] Avian influenza | Birds | Wild game | First Nations | Canada | Subsistence hunting | Harvesting | Pandemic plans | Risk perception [SUMMARY]
[CONTENT] Avian influenza | Birds | Wild game | First Nations | Canada | Subsistence hunting | Harvesting | Pandemic plans | Risk perception [SUMMARY]
[CONTENT] Avian influenza | Birds | Wild game | First Nations | Canada | Subsistence hunting | Harvesting | Pandemic plans | Risk perception [SUMMARY]
[CONTENT] Avian influenza | Birds | Wild game | First Nations | Canada | Subsistence hunting | Harvesting | Pandemic plans | Risk perception [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Animals | Birds | Community-Based Participatory Research | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Humans | Influenza A virus | Influenza in Birds | Inuit | Male | Middle Aged | Ontario | Pandemics | Zoonoses [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Animals | Birds | Community-Based Participatory Research | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Humans | Influenza A virus | Influenza in Birds | Inuit | Male | Middle Aged | Ontario | Pandemics | Zoonoses [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Animals | Birds | Community-Based Participatory Research | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Humans | Influenza A virus | Influenza in Birds | Inuit | Male | Middle Aged | Ontario | Pandemics | Zoonoses [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Animals | Birds | Community-Based Participatory Research | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Humans | Influenza A virus | Influenza in Birds | Inuit | Male | Middle Aged | Ontario | Pandemics | Zoonoses [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Animals | Birds | Community-Based Participatory Research | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Humans | Influenza A virus | Influenza in Birds | Inuit | Male | Middle Aged | Ontario | Pandemics | Zoonoses [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Animals | Birds | Community-Based Participatory Research | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Humans | Influenza A virus | Influenza in Birds | Inuit | Male | Middle Aged | Ontario | Pandemics | Zoonoses [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] birds | influenza | hunters | avian | avian influenza | study | community | harvesting | nations | risk [SUMMARY]
[CONTENT] birds | influenza | hunters | avian | avian influenza | study | community | harvesting | nations | risk [SUMMARY]
[CONTENT] birds | influenza | hunters | avian | avian influenza | study | community | harvesting | nations | risk [SUMMARY]
[CONTENT] birds | influenza | hunters | avian | avian influenza | study | community | harvesting | nations | risk [SUMMARY]
[CONTENT] birds | influenza | hunters | avian | avian influenza | study | community | harvesting | nations | risk [SUMMARY]
[CONTENT] birds | influenza | hunters | avian | avian influenza | study | community | harvesting | nations | risk [SUMMARY]
[CONTENT] wild | birds | wild birds | humans | influenza | virus | canadian aboriginal | avian | transmission | avian influenza [SUMMARY]
[CONTENT] community | research | survey | cbag | questions | data | study | influenza | cree | nations [SUMMARY]
[CONTENT] influenza | birds | avian | study community 106 november | study community 106 | nations subsistence hunters residing | hunters residing | residing study community 106 | residing study community | residing study [SUMMARY]
[CONTENT] influenza | avian influenza | avian | harvesting | birds | birds results | hunters | nations | risk aiv infection harvesting | nations hunters [SUMMARY]
[CONTENT] birds | influenza | hunters | avian | community | study | avian influenza | wild | nations | harvesting [SUMMARY]
[CONTENT] avian | AIV ||| Subsistence | AIV ||| avian [SUMMARY]
[CONTENT] 106 | First Nations | Ontario | Canada | November 10-25, 2013 ||| ||| [SUMMARY]
[CONTENT] ANOVA | 12.1 | 0.001 | AIV | 4.4 | 0.040 ||| ||| 18 | 17.0% | 2 | 1.9% ||| 105 | 99.1% | 69 | 65.1% ||| More than half | avian influenza | almost one third | AIV ||| avian | AIV ||| ||| avian | 55 | 51.9% [SUMMARY]
[CONTENT] avian ||| First Nations | avian influenza risk group [SUMMARY]
[CONTENT] avian | AIV ||| Subsistence | AIV ||| avian ||| 106 | First Nations | Ontario | Canada | November 10-25, 2013 ||| ||| ||| ANOVA | 12.1 | 0.001 | AIV | 4.4 | 0.040 ||| ||| 18 | 17.0% | 2 | 1.9% ||| 105 | 99.1% | 69 | 65.1% ||| More than half | avian influenza | almost one third | AIV ||| avian | AIV ||| ||| avian | 55 | 51.9% ||| avian ||| First Nations | avian influenza risk group [SUMMARY]
Factors associated with health insurance enrolment among Ghanaian children under five years: an analysis of secondary data from a national survey.
35227256
Health insurance enrolment provides financial access to health care and reduces the risk of catastrophic healthcare expenditure. Therefore, the objective of this study was to assess the prevalence and correlates of health insurance enrolment among Ghanaian children under five years.
BACKGROUND
We analysed secondary data from the 2017/18 Ghana Multiple Indicator Cluster Survey. The survey was a nationally representative weighted sample comprising 8,874 children under five years and employed Computer Assisted Personal Interviewing to collect data from the participants. Chi-square and logistic regression analyses were conducted to determine factors associated with health insurance enrolment.
METHODS
The results showed that a majority (58.4%) of the participants were insured. Health insurance enrolment was associated with child age, maternal educational status, wealth index, place of residence and geographical region (p < 0.05). Children born to mothers with higher educational status (AOR = 2.14; 95% CI: 1.39-3.30) and mothers in the richest wealth quintile (AOR = 2.82; 95% CI: 2.00-3.98) had a higher likelihood of being insured compared with their counterparts. Also, children residing in rural areas (AOR = 0.75; 95% CI: 0.61-0.91) were less likely to be insured than children in urban areas.
RESULTS
This study revealed that more than half of the participants were insured. Health insurance enrolment was influenced by the child's age, mother's educational status, wealth index, residence, ethnicity and geographical region. Therefore, interventions aimed at increasing health insurance coverage among children should focus on children from low socio-economic backgrounds. Stakeholders can leverage these findings to help improve health insurance coverage among Ghanaian children under five years.
CONCLUSION
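The adjusted odds ratios and 95% confidence intervals reported above follow directly from the fitted logistic-regression coefficients: AOR = exp(beta), and the 95% CI is exp(beta ± 1.96·SE). A minimal sketch of that conversion — the coefficient and standard error below are back-calculated for illustration from the reported AOR of 2.14 (95% CI: 1.39-3.30) for higher maternal education, not taken from the paper:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative values back-calculated from the reported
# AOR = 2.14 (95% CI: 1.39-3.30) for higher maternal education.
beta, se = 0.7608, 0.2206
aor, lo, hi = odds_ratio_ci(beta, se)
print(round(aor, 2), round(lo, 2), round(hi, 2))  # → 2.14 1.39 3.3
```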
[ "Child", "Child, Preschool", "Educational Status", "Female", "Ghana", "Humans", "Insurance, Health", "National Health Programs", "Socioeconomic Factors" ]
8886748
Background
Globally, protecting and improving the well-being of children under five years remains a public health priority. For instance, Target 3.2 of the Sustainable Development Goals seeks to end preventable deaths of newborns and children under five years of age, with all countries aiming to reduce neonatal and under-five mortality by 2030 [1]. Evidence shows that under-five mortality has declined over the last three decades worldwide: between 1990 and 2019, the under-five mortality rate fell by 59%, from 93 deaths per 1,000 live births to 38 [2]. Notwithstanding, the burden of under-five mortality remains high. In 2019 alone, about 14,000 children died every day before their fifth birthday worldwide. More than half (53%) of under-five deaths in 2019 occurred in sub-Saharan Africa, with Nigeria and Somalia recording the highest under-five mortality rates (117 deaths per 1,000 live births) [2]. Most under-five deaths are preventable with timely access to quality healthcare services and child health interventions [3]. Therefore, an integral recommendation, among many others, to accelerate progress towards reducing mortality among children under five years is to ensure health equity through universal health coverage, so that all children can access essential health services without undue financial hardship [3]. Evidence shows that health insurance enrolment is linked to increased access to and utilisation of healthcare services [4] and better health outcomes, particularly for maternal and child health [5–10]. In Ghana, the under-five mortality rate was estimated at 46 deaths per 1,000 live births in 2019 [11], higher than the global rate of 38 deaths per 1,000 live births. Financial constraints pose a significant barrier to accessing healthcare services, including child health services [12]. 
Therefore, under the National Health Insurance Act 650, the Government of Ghana established the National Health Insurance Scheme (NHIS) in 2003 to eliminate financial barriers to accessing health care [13]. Upon establishment, the scheme operated semi-autonomous Mutual Health Insurance Schemes in districts across the country. In 2012, a new law, Act 852, replaced Act 650. Under Act 852, all District Mutual Health Insurance Schemes were consolidated under the National Health Insurance Authority (NHIA) to ensure effective management and efficient service delivery [14]. The NHIS is financed primarily through a National Health Insurance Levy on selected goods and services, a 2.5% contribution from the National Social Security Scheme by formal sector workers, individual premiums mainly from informal sector workers, and miscellaneous funds from investment returns, grants, donations and gifts from international donor partners and agencies [14]. Since its inception, Ghana’s NHIS has been considered one of Africa’s model health insurance systems. The NHIS benefit package covers the cost of treatment for more than 95% of the disease conditions in Ghana. The services covered include, but are not limited to, outpatient care, diagnostic services, in-patient care, pre-approved medications, maternal care, ear, nose and throat services, dental services and all emergency services. Excluded from the NHIS benefit package are procedures such as dialysis for chronic renal failure, treatment for cancers (other than cervical and breast cancers), organ transplants and cosmetic surgery. Child immunization services, family planning and treatment of conditions such as HIV/AIDS and tuberculosis are also not covered; however, these services are provided under alternative government programs. Apart from pregnant women and children under five years, new members serve a waiting period of three months after registration before they can access health care under the scheme. 
Further, members of the scheme can access healthcare from providers accredited by the NHIA. These include public, quasi-government, faith-based and some (but not all) private health facilities, as well as licensed pharmacies [15]. Despite the increased access to healthcare offered by NHIS membership and mandatory enrolment for all residents in Ghana, universal population coverage has proved challenging. As of 2021, the NHIS had an active membership of over 15 million people, equating to about 53% of Ghana’s estimated population [16]. However, a recent study examining NHIS enrolment over the last decade showed that, on average, only about 40% of all Ghanaians had ever registered with the scheme [17]. Notwithstanding, utilization trends for in-patient and outpatient care at NHIS-accredited health facilities continue to increase across the country [18, 19]. As part of efforts to increase coverage, a premium exemption policy for vulnerable populations, such as children under 18, was implemented: persons below 18 years are exempted from paying annual premiums [20] but must pay administrative charges, including the NHIS card processing fee [21]. Furthermore, in 2010, the National Health Insurance Authority decoupled children under five years from their parents’ membership, so children under five years can be active members of the NHIS even if their parents are not [22]. In addition, private health insurance schemes are emerging rapidly in Ghana, with premiums based on the calculated risk of subscribers. The Ghana NHIS has been extensively investigated. Prior studies among the adult population showed that health insurance enrolment was associated with educational status, wealth, age, marital status, gender, type of occupation and place of residence [23–25]. 
In addition, an analysis of the 2011 Ghana Multiple Indicator Cluster Survey (MICS) revealed that a majority (73%) of children under five years were non-insured [26]. However, there is a paucity of literature on the determinants of health insurance enrolment among children under five years. Therefore, this study aimed to determine factors associated with NHIS enrolment among children under five in Ghana using nationally representative survey data. Generating empirical evidence about the factors influencing enrolment is essential to inform policy and help Ghana achieve Universal Health Coverage and the Sustainable Development Goals.
Results
Descriptive statistics

The results showed that a majority of the participants (58.4%) were insured, while 41.6% were non-insured. About half (50.8%) of the participants were female, and more than half (58%) were below 36 months of age. About three in ten (27.2%) mothers had no formal/pre-primary education, while 22.2% of participants were in the poorest wealth quintile. Children from rural areas constituted 56.9% of the sample. In terms of ethnicity, 46.1% of the participants were Akan, 16.9% were Mole-Dagbani, and 37% were of other ethnic groups, such as Ewe and Gruma. Details are provided in Table 1.

Table 1. Socio-demographic characteristics of children, mothers and health insurance status in Ghana, 2017/18

Characteristic                  n      %
Sex of child
  Male                          4369   49.2
  Female                        4505   50.8
Age of child (months)
  0–11                          1700   19.2
  12–23                         1694   19.1
  24–35                         1750   19.7
  36–47                         1928   21.7
  48–59                         1802   20.3
Mother's education
  Pre-primary/none              2428   27.3
  Primary                       1790   20.2
  Junior High School            3259   36.7
  Senior High School             954   10.8
  Higher                         443    5.0
Wealth quintile
  Poorest                       1966   22.2
  Second                        1834   20.7
  Middle                        1769   19.9
  Fourth                        1676   18.8
  Richest                       1630   18.4
Residence
  Urban                         3821   43.1
  Rural                         5053   56.9
Region
  Western                        931   10.5
  Central                        926   10.4
  Greater Accra                  862    9.7
  Volta                          710    8.0
  Eastern                        953   10.7
  Ashanti                       2111   23.8
  Brong-Ahafo                    833    9.4
  Northern                      1055   11.9
  Upper East                     282    3.2
  Upper West                     211    2.4
Ethnicity
  Akan                          4091   46.1
  Mole-Dagbani                  1503   16.9
  Others (Ewe, Gruma, etc.)     3280   37.0
Health insurance status
  Insured                       5186   58.4
  Non-insured                   3689   41.6

Bivariate analysis

At the bivariate level, child age was significantly associated with health insurance enrolment (p < 0.05), as were mother's education, wealth quintile, residence and geographic region (p < 0.05); ethnicity was not (p = 0.3330). A majority (56.1%) of children aged 0–11 months were not insured with the National Health Insurance Scheme. Less than half (47.0%) of children in the Central Region were insured, while nearly eight in ten (77.4%) children in the Brong-Ahafo Region were insured. Details are provided in Table 2.

Table 2. Association between participant characteristics and health insurance status

Characteristic                  n      Non-insured (%)  Insured (%)  Chi-square  p-value
Sex of child                                                         0.2961      0.7311
  Male                          4369   41.3             58.7
  Female                        4505   41.8             58.2
Age of child (months)                                                29.3826     < 0.0001
  0–11                          1700   56.1             43.9
  12–23                         1694   43.6             56.4
  24–35                         1750   34.6             65.4
  36–47                         1928   37.7             62.3
  48–59                         1802   37.0             63.0
Mother's education                                                   149.8059    < 0.0001
  Pre-primary/none              2428   42.6             57.4
  Primary                       1790   49.4             50.6
  Junior High School            3259   41.5             58.5
  Senior High School             954   34.5             65.4
  Higher                         443   20.1             79.9
Wealth quintile                                                      152.59      < 0.0001
  Poorest                       1966   48.7             51.3
  Second                        1834   44.4             55.6
  Middle                        1769   42.6             57.4
  Fourth                        1676   41.1             58.9
  Richest                       1630   29.1             70.9
Residence                                                            92.0315     < 0.0001
  Urban                         3821   35.8             64.2
  Rural                         5053   45.9             54.1
Region                                                               250.9397    < 0.0001
  Western                        931   46.7             53.3
  Central                        926   53.0             47.0
  Greater Accra                  863   49.7             50.3
  Volta                          710   42.5             57.5
  Eastern                        953   39.3             60.7
  Ashanti                       2111   41.9             58.1
  Brong-Ahafo                    833   22.6             77.4
  Northern                      1055   42.1             57.9
  Upper East                     282   24.1             75.9
  Upper West                     211   34.1             65.9
Ethnicity                                                            7.3800      0.3330
  Akan                          4091   42.2             57.8
  Mole-Dagbani                  1503   38.5             61.5
  Others (Ewe, Gruma, etc.)     3280   42.3             57.7

Multivariable analysis

In the crude analysis, health insurance enrolment was significantly predicted by wealth quintile, child's age, mother's education, place of residence, geographical region and ethnicity (p < 0.05). In the adjusted analysis, children in the second (AOR = 1.47; 95% CI: 1.15–1.89), middle (AOR = 1.59; 95% CI: 1.16–2.17), fourth (AOR = 1.73; 95% CI: 1.28–2.35) and richest (AOR = 2.82; 95% CI: 2.00–3.98) wealth quintiles had higher odds of being insured than children in the poorest quintile. Children aged 12–23 months were more likely to be insured than children aged 0–11 months (AOR = 1.72; 95% CI: 1.42–2.10). Children whose mothers had higher education were about twice as likely to be insured as children whose mothers had no formal education (AOR = 2.14; 95% CI: 1.39–3.30). Children in rural areas had lower odds of being insured than children in urban areas (AOR = 0.75; 95% CI: 0.61–0.91). Also, children in the Northern (AOR = 3.23; 95% CI: 2.18–4.80), Upper West (AOR = 4.82; 95% CI: 3.04–7.66) and Upper East (AOR = 8.74; 95% CI: 5.35–13.40) Regions had higher odds of being insured than children in the Greater Accra Region. Details are provided in Table 3.

Table 3. Logistic regression on predictors of health insurance status among children under five years in Ghana, 2017/18

Covariate/exposure              Crude OR (95% CI)   Wald p-value   Adjusted OR (95% CI)   Adjusted Wald p-value
Sex of child                                        0.7311                                0.8045
  Male                          1 (ref)                            1 (ref)
  Female                        0.98 (0.85–1.12)                   0.98 (0.85–1.13)
Age of child (months)                               < 0.0001                              < 0.0001
  0–11                          1 (ref)                            1 (ref)
  12–23                         1.66 (1.37–2.00)                   1.72 (1.42–2.10)
  24–35                         2.42 (2.02–2.89)                   2.70 (2.22–3.29)
  36–47                         2.12 (1.76–2.55)                   2.30 (1.88–2.81)
  48–59                         2.17 (1.79–2.63)                   2.44 (1.99–2.99)
Mother's education                                  < 0.0001                              0.0002
  Pre-primary/none              1 (ref)                            1 (ref)
  Primary                       0.76 (0.62–0.93)                   0.83 (0.67–1.02)
  Junior High School            1.05 (0.87–1.26)                   1.16 (0.95–1.42)
  Senior High School            1.40 (1.09–1.81)                   1.37 (1.02–1.82)
  Higher                        2.96 (2.07–4.25)                   2.14 (1.39–3.30)
Wealth quintile                                     < 0.0001                              < 0.0001
  Poorest                       1 (ref)                            1 (ref)
  Second                        1.19 (0.95–1.49)                   1.47 (1.15–1.89)
  Middle                        1.28 (0.98–1.66)                   1.59 (1.16–2.17)
  Fourth                        1.36 (1.05–1.76)                   1.73 (1.28–2.35)
  Richest                       2.31 (1.77–3.02)                   2.82 (2.00–3.98)
Residence                                           < 0.0001                              0.0002
  Urban                         1 (ref)                            1 (ref)
  Rural                         0.66 (0.55–0.79)                   0.75 (0.61–0.91)
Region                                              < 0.0001                              < 0.0001
  Greater Accra                 1 (ref)                            1 (ref)
  Western                       1.13 (0.77–1.63)                   1.93 (1.32–2.80)
  Central                       0.87 (0.60–1.26)                   1.38 (0.92–2.08)
  Volta                         1.33 (0.83–2.13)                   2.81 (1.65–4.80)
  Eastern                       1.52 (1.06–2.18)                   2.68 (1.86–3.86)
  Ashanti                       1.37 (0.99–1.89)                   2.13 (1.50–3.02)
  Brong-Ahafo                   3.38 (2.27–5.04)                   6.61 (4.24–10.30)
  Northern                      1.36 (0.97–1.90)                   3.23 (2.18–4.80)
  Upper East                    3.14 (2.21–4.44)                   8.74 (5.35–13.40)
  Upper West                    1.92 (1.33–2.76)                   4.82 (3.04–7.66)
Ethnicity                                           < 0.0001                              < 0.0001
  Akan                          1 (ref)                            1 (ref)
  Mole-Dagbani                  1.17 (0.94–1.45)                   1.07 (0.80–1.43)
  Others (Ewe, Gruma, etc.)     1.00 (0.83–1.19)                   1.09 (0.90–1.31)
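The chi-square statistics in Table 2 can be approximately reproduced from the cell counts implied by each row total and the reported percentages. A sketch for the residence comparison; the counts are reconstructed from n and the rounded percentages, so the statistic only approximates the published 92.0315:

```python
def chi2_stat(table):
    """Pearson chi-square statistic of independence for a
    contingency table given as a list of rows of counts."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Residence vs insurance status, cells reconstructed from Table 2:
# urban n=3821 (35.8% non-insured), rural n=5053 (45.9% non-insured).
urban = [1368, 2453]   # [non-insured, insured]
rural = [2319, 2734]
stat = chi2_stat([urban, rural])
print(round(stat, 1))  # close to the published 92.0315
```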
Conclusions
This study demonstrated that more than half of the children were covered by health insurance. Health insurance enrolment was associated with wealth index, mother's educational status, child's age, type of residence, geographical region and ethnicity. Policymakers can leverage these findings to help improve health insurance coverage for children in Ghana. Interventions that seek to improve health insurance coverage must prioritize children from poor socio-economic backgrounds. Future studies should employ qualitative designs to explore caregivers' nuanced views on health insurance enrolment among children under five years. Further studies may also explore factors associated with non-enrolment among children under five years.
[ "Background", "Methods", "Descriptive statistics", "Bivariate analysis", "Multivariable analysis", "Strengths and limitations of the study" ]
[ "Globally, protecting and improving the well-being of children under five years remains a public health priority. For instance, Target 3.2 of the Sustainable Development Goals seeks to end preventable deaths of newborns and children under five years of age, with all countries aiming to reduce neonatal and under-five mortality by 2030 [1]. Evidence shows that under-five mortality has declined over the last three decades worldwide: between 1990 and 2019, the under-five mortality rate fell by 59%, from 93 deaths per 1,000 live births to 38 [2]. Notwithstanding, the burden of under-five mortality remains high. In 2019 alone, about 14,000 children died every day before their fifth birthday worldwide. More than half (53%) of under-five deaths in 2019 occurred in sub-Saharan Africa, with Nigeria and Somalia recording the highest under-five mortality rates (117 deaths per 1,000 live births) [2]. Most under-five deaths are preventable with timely access to quality healthcare services and child health interventions [3]. Therefore, an integral recommendation, among many others, to accelerate progress towards reducing mortality among children under five years is to ensure health equity through universal health coverage, so that all children can access essential health services without undue financial hardship [3]. Evidence shows that health insurance enrolment is linked to increased access to and utilisation of healthcare services [4] and better health outcomes, particularly for maternal and child health [5–10].\nIn Ghana, the under-five mortality rate was estimated at 46 deaths per 1,000 live births in 2019 [11], higher than the global rate of 38 deaths per 1,000 live births. Financial constraints pose a significant barrier to accessing healthcare services, including child health services [12]. 
Therefore, under the National Health Insurance Act 650, the Government of Ghana established the National Health Insurance Scheme (NHIS) in 2003 to eliminate financial barriers to accessing health care [13]. Upon establishment, the scheme operated semi-autonomous Mutual Health Insurance Schemes in districts across the country. In 2012, a new law, Act 852, replaced Act 650. Under Act 852, all District Mutual Health Insurance Schemes were consolidated under the National Health Insurance Authority (NHIA) to ensure effective management and efficient service delivery [14]. The NHIS is financed primarily through a National Health Insurance Levy on selected goods and services, a 2.5% contribution from the National Social Security Scheme by formal sector workers, individual premiums mainly from informal sector workers, and miscellaneous funds from investment returns, grants, donations and gifts from international donor partners and agencies [14].\nSince its inception, Ghana’s NHIS has been considered one of Africa’s model health insurance systems. The NHIS benefit package covers the cost of treatment for more than 95% of the disease conditions in Ghana. The services covered include, but are not limited to, outpatient care, diagnostic services, in-patient care, pre-approved medications, maternal care, ear, nose and throat services, dental services and all emergency services. Excluded from the NHIS benefit package are procedures such as dialysis for chronic renal failure, treatment for cancers (other than cervical and breast cancers), organ transplants and cosmetic surgery. Child immunization services, family planning and treatment of conditions such as HIV/AIDS and tuberculosis are also not covered; however, these services are provided under alternative government programs. Apart from pregnant women and children under five years, new members serve a waiting period of three months after registration before they can access health care under the scheme. 
Further, members of the scheme can access healthcare from health service providers accredited by the NHIA. These include public, quasi-government and faith-based health facilities, some (but not all) private health facilities, and licensed pharmacies [15]. Despite the benefits of increased access to healthcare offered by NHIS membership and mandatory enrolment for all residents in Ghana, universal population coverage under the scheme has proved challenging. As of 2021, the NHIS had an active membership of over 15 million people, equating to about 53% of Ghana’s estimated population [16]. However, a recent study examining NHIS enrolment within the last decade showed that, on average, only about 40% of all Ghanaians had ever registered with the Scheme [17]. That notwithstanding, utilization trends for in-patient and outpatient care at NHIS-accredited health facilities continue to increase across the country [18, 19].\nAs part of efforts to increase coverage, a premium exemption policy for vulnerable populations, such as children under 18, was implemented. Thus, persons below 18 years are exempted from paying annual premiums [20] but must pay administrative charges, including the NHIS card processing fee [21]. Furthermore, in 2010, the National Health Insurance Authority decoupled children under five years from their parents’ membership. Therefore, children under five years can be active members of the NHIS even if their parents are not active members [22]. In addition, private health insurance schemes are emerging rapidly in Ghana, with premiums based on the calculated risk of subscribers.\nThe Ghana NHIS has been extensively investigated. Prior studies among the adult population showed that health insurance enrolment was associated with educational status, wealth, age, marital status, gender, type of occupation and place of residence [23–25]. 
In addition, an analysis of the 2011 Ghana Multiple Indicator Cluster Survey (MICS) revealed that a majority (73%) of children under five years were non-insured [26]. However, there is a paucity of literature on determinants of health insurance enrolment among children under five years. Therefore, this study aimed to determine factors associated with NHIS enrolment among children under five in Ghana using nationally representative survey data. Generating empirical evidence about factors influencing enrolment is essential to inform policy to help Ghana achieve Universal Health Coverage and the Sustainable Development Goals.", "In this study, we analysed the 2017/2018 Ghana MICS [27]. The 2017/18 MICS collected demographic and health data across Ghana, including rural and urban settings. The sampling of participants was done in two phases. The first phase involved selecting 660 enumeration areas from 20 strata, proportional to size. The second involved the selection of 13,202 households within the selected enumeration areas. The weighted sample size of children under five years was 8,874. Ghana had ten administrative regions divided into 20 strata, of which ten are rural and ten are urban. Participants were selected across all the regions and strata. The inclusion criteria were children under five years who lived in the selected households or who spent the night preceding the survey in those households. Data were collected using Computer Assisted Personal Interviewing (CAPI). The under five questionnaire was administered to caregivers of children below five years. Trained field officers and supervisors collected the data between October 2017 and September 2018. Details about the 2017/18 MICS are provided elsewhere [28].\nThe dependent variable in this study was health insurance status (i.e. is [name] covered by any health insurance?) coded as 1 = Yes and 0 = No. The independent variables identified in the literature included child and maternal characteristics. 
These include the child’s age, maternal educational status, wealth index, ethnicity, geographic region and place of residence. Details about the coding are provided in Table 1. The complex nature of the survey was accounted for by employing the ‘svy’ STATA command. STATA/SE version 16 (StataCorp, College Station, Texas, USA) was used to analyse the data. Descriptive statistics were computed for participants’ characteristics and summarized in a table. The Chi-square test was employed to examine the association between participant characteristics and health insurance status at the bivariate level. Binary Logistic Regression was employed to identify significant predictors of health insurance enrolment among under five children. The results were reported at a 95% confidence level.", "The results showed that a majority of the participants (58.4%) were insured, while 41.6% were non-insured. About half (50.8%) of the participants were female, and more than half (58%) were below 36 months. About three in ten (27.2%) mothers had no formal/pre-primary education, while 22.2% were in the poorest wealth quintile. Children from rural areas constituted 56.9%. In terms of ethnicity, 46.1% of the participants were Akan, 16.9% were Mole-Dagbani, and 37% were of other ethnic groups, such as Ewe and Gruma. 
Details are provided in Table 1.\n\nTable 1 Socio-demographic characteristics of children, mothers and health insurance status in Ghana, 2017/18 (n; %)\nSex of child\n  Male: 4369; 49.2\n  Female: 4505; 50.8\nAge of child (months)\n  0–11: 1700; 19.2\n  12–23: 1694; 19.1\n  24–35: 1750; 19.7\n  36–47: 1928; 21.7\n  48–59: 1802; 20.3\nMother's education\n  Pre-primary/none: 2428; 27.3\n  Primary: 1790; 20.2\n  Junior High School: 3259; 36.7\n  Senior High School: 954; 10.8\n  Higher: 443; 5.0\nWealth quintile\n  Poorest: 1966; 22.2\n  Second: 1834; 20.7\n  Middle: 1769; 19.9\n  Fourth: 1676; 18.8\n  Richest: 1630; 18.4\nResidence\n  Urban: 3821; 43.1\n  Rural: 5053; 56.9\nRegion\n  Western: 931; 10.5\n  Central: 926; 10.4\n  Greater Accra: 862; 9.7\n  Volta: 710; 8.0\n  Eastern: 953; 10.7\n  Ashanti: 2111; 23.8\n  Brong-Ahafo: 833; 9.4\n  Northern: 1055; 11.9\n  Upper East: 282; 3.2\n  Upper West: 211; 2.4\nEthnicity\n  Akan: 4091; 46.1\n  Mole Dagbani: 1503; 16.9\n  Others (Ewe, Gruma etc.): 3280; 37.0\nHealth insurance status\n  Insured: 5186; 58.4\n  Non-insured: 3689; 41.6", "At the bivariate level, child age was significantly associated with health insurance enrolment (p < 0.05). 
Also, mothers’ education, wealth quintile, residence, geographic region, and ethnicity were significantly associated with health insurance enrolment (p < 0.05) among children under five years. A majority (56.1%) of children aged 0–11 months were not insured with the National Health Insurance Scheme. Less than half of children in the Central Region were insured, while about eight in ten children were insured in the Brong-Ahafo Region. Details are provided in Table 2.\n\nTable 2 Association between participant characteristics and health insurance status (n; non-insured %; insured %)\nSex of child (Chi-square = 0.2961, p = 0.7311)\n  Male: 4369; 41.3; 58.7\n  Female: 4505; 41.8; 58.2\nAge of child (months) (Chi-square = 29.3826, p < 0.0001)\n  0–11: 1700; 56.1; 43.9\n  12–23: 1694; 43.6; 56.4\n  24–35: 1750; 34.6; 65.4\n  36–47: 1928; 37.7; 62.3\n  48–59: 1802; 37.0; 63.0\nMother's education (Chi-square = 149.8059, p < 0.0001)\n  Pre-primary/none: 2428; 42.6; 57.4\n  Primary: 1790; 49.4; 50.6\n  Junior High School: 3259; 41.5; 58.5\n  Senior High School: 954; 34.5; 65.4\n  Higher: 443; 20.1; 79.9\nWealth quintile (Chi-square = 152.59, p < 0.0001)\n  Poorest: 1966; 48.7; 51.3\n  Second: 1834; 44.4; 55.6\n  Middle: 1769; 42.6; 57.4\n  Fourth: 1676; 41.1; 58.9\n  Richest: 1630; 29.1; 70.9\nResidence (Chi-square = 92.0315, p < 0.0001)\n  Urban: 3821; 35.8; 64.2\n  Rural: 5053; 45.9; 54.1\nRegion (Chi-square = 250.9397, p < 0.0001)\n  Western: 931; 46.7; 53.3\n  Central: 926; 53.0; 47.0\n  Greater Accra: 863; 49.7; 50.3\n  Volta: 710; 42.5; 57.5\n  Eastern: 953; 39.3; 60.7\n  Ashanti: 2111; 41.9; 58.1\n  Brong-Ahafo: 833; 22.6; 77.4\n  Northern: 1055; 42.1; 57.9\n  Upper East: 282; 24.1; 75.9\n  Upper West: 211; 34.1; 65.9\nEthnicity (Chi-square = 7.3800, p = 0.3330)\n  Akan: 4091; 42.2; 57.8\n  Mole Dagbani: 1503; 38.5; 61.5\n  Other (Ewe, Gruma etc.): 3280; 42.3; 57.7", "At the crude analysis level, health insurance enrolment was significantly predicted by wealth quintile, child’s age, mother’s education, place of residence, geographical region and ethnicity (p < 0.05). In the adjusted analysis, children in the poorest wealth quintile were less likely to be insured than children in the second (AOR = 1.47; 95% CI: 1.15–1.89), middle (AOR = 1.59; 95% CI: 1.16–2.17), fourth (AOR = 1.73; 95% CI: 1.28–2.35) and richest (AOR = 2.82; 95% CI: 2.00–3.98) wealth quintiles. Children aged 0–11 months were less likely to be insured than children aged 12–23 months (AOR = 1.72; 95% CI: 1.42–2.10). Children whose mothers had higher education were about twice as likely (AOR = 2.14; 95% CI: 1.39–3.30) to be insured as children whose mothers had no formal education. In addition, children in rural areas (AOR = 0.75; 95% CI: 0.61–0.91) had lower odds of being insured than children in urban areas. Also, children in the Northern Region (AOR = 3.23; 95% CI: 2.18–4.80), Upper West Region (AOR = 4.82; 95% CI: 3.04–7.66) and Upper East Region (AOR = 8.74; 95% CI: 5.35–13.40) had a higher probability of being insured than children in the Greater Accra Region. 
Details are provided in Table 3.\n\nTable 3 Logistic Regression on predictors of health insurance status among children under five years in Ghana, 2017/18 (crude OR (95% CI) | adjusted OR (95% CI))\nSex of child (crude Wald p = 0.7311; adjusted Wald p = 0.8045)\n  Male: 1 (ref) | 1 (ref)\n  Female: 0.98 (0.85–1.12) | 0.98 (0.85–1.13)\nAge of child (months) (crude p < 0.0001; adjusted p < 0.0001)\n  0–11: 1 (ref) | 1 (ref)\n  12–23: 1.66 (1.37–2.00) | 1.72 (1.42–2.10)\n  24–35: 2.42 (2.02–2.89) | 2.70 (2.22–3.29)\n  36–47: 2.12 (1.76–2.55) | 2.30 (1.88–2.81)\n  48–59: 2.17 (1.79–2.63) | 2.44 (1.99–2.99)\nMother’s education (crude p < 0.0001; adjusted p = 0.0002)\n  Pre-primary/none: 1 (ref) | 1 (ref)\n  Primary: 0.76 (0.62–0.93) | 0.83 (0.67–1.02)\n  Junior High School: 1.05 (0.87–1.26) | 1.16 (0.95–1.42)\n  Senior High School: 1.40 (1.09–1.81) | 1.37 (1.02–1.82)\n  Higher: 2.96 (2.07–4.25) | 2.14 (1.39–3.30)\nWealth quintile (crude p < 0.0001; adjusted p < 0.0001)\n  Poorest: 1 (ref) | 1 (ref)\n  Second: 1.19 (0.95–1.49) | 1.47 (1.15–1.89)\n  Middle: 1.28 (0.98–1.66) | 1.59 (1.16–2.17)\n  Fourth: 1.36 (1.05–1.76) | 1.73 (1.28–2.35)\n  Richest: 2.31 (1.77–3.02) | 2.82 (2.00–3.98)\nResidence (crude p < 0.0001; adjusted p = 0.0002)\n  Urban: 1 (ref) | 1 (ref)\n  Rural: 0.66 (0.55–0.79) | 0.75 (0.61–0.91)\nRegion (crude p < 0.0001; adjusted p < 0.0001)\n  Greater Accra: 1 (ref) | 1 (ref)\n  Western: 1.13 (0.77–1.63) | 1.93 (1.32–2.80)\n  Central: 0.87 (0.60–1.26) | 1.38 (0.92–2.08)\n  Volta: 1.33 (0.83–2.13) | 2.81 (1.65–4.80)\n  Eastern: 1.52 (1.06–2.18) | 2.68 (1.86–3.86)\n  Ashanti: 1.37 (0.99–1.89) | 2.13 (1.50–3.02)\n  Brong-Ahafo: 3.38 (2.27–5.04) | 6.61 (4.24–10.30)\n  Northern: 1.36 (0.97–1.90) | 3.23 (2.18–4.80)\n  Upper East: 3.14 (2.21–4.44) | 8.74 (5.35–13.40)\n  Upper West: 1.92 (1.33–2.76) | 4.82 (3.04–7.66)\nEthnicity (crude p < 0.0001; adjusted p < 0.0001)\n  Akan: 1 (ref) | 1 (ref)\n  Mole Dagbani: 1.17 (0.94–1.45) | 1.07 (0.80–1.43)\n  Others (Ewe, Gruma etc.): 1.00 (0.83–1.19) | 1.09 (0.90–1.31)", "One major strength of this study is that we analysed nationally representative data, so the findings can be generalized to the population. This study is one of the few studies in Ghana investigating socio-demographic determinants of child health insurance. However, this study is not devoid of limitations. Cross-sectional studies cannot establish causal relationships, so the findings should be interpreted with caution. In addition, health insurance status was self-reported by caregivers/parents of the children. Therefore, it may be subject to social desirability or recall biases." ]
[ null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Descriptive statistics", "Bivariate analysis", "Multivariable analysis", "Discussion", "Strengths and limitations of the study", "Conclusions" ]
[ "Globally, protecting and improving the well-being of children under five years remains a public health priority. For instance, Target 3.2 of the Sustainable Development Goals seek to end preventable deaths of newborns and children under five years of age, with all countries aiming to reduce neonatal mortality and under five mortality by 2030 [1]. Evidence shows that under five mortality has declined over the last three decades worldwide. Between 1990 and 2019, the under five mortality rate has reduced by 59%, thus from 93 deaths per 1000 live births to 38 respectively [2]. Notwithstanding, the burden of under five mortality remains high. For example, in 2019 alone, about 14,000 children died every day before their fifth birthday worldwide. More than half (53%) of under five deaths in 2019 occurred in sub-Saharan Africa, with Nigeria and Somalia recording the highest under five mortality rates (117 deaths per 1000 live births) [2]. Most under five deaths are preventable with timely access to quality healthcare services and child health interventions [3]. Therefore, an integral recommendation—among many others—to accelerate progress towards reducing the mortality rate among children under five years is to ensure health equity through universal health coverage. So that all children can access essential health services without undue financial hardship [3]. Evidence shows that health insurance enrolment is linked to increased access and utilisation of healthcare services [4] and better health outcomes—particularly for maternal and child health [5–10].\nIn Ghana, the under five mortality rate was estimated to be 46 deaths per 1000 live births in 2019 [11], higher than the global rate of 38 deaths per 1000 live births. Financial constraints pose a significant barrier to accessing healthcare services, including child health services [12]. 
Therefore, under the National Health Insurance Act 650, the Government of Ghana established the National Health Insurance Scheme (NHIS) in 2003 to eliminate financial barriers to accessing health care [13]. Upon establishment, the scheme operated semi-autonomous Mutual Health Insurance Schemes in districts across the country. In 2012, a new law, Act 852, replaced Act 650. Under Act 852, all District Mutual Health Insurance Schemes were consolidated under a National Health Insurance Authority (NHIA) to ensure effective management and efficient service delivery [14]. The primary sources of financing for the NHIS comprise a National Health Insurance Levy on selected goods and services, a 2.5% contribution from the National Social Security Scheme by formal sector workers, individual premiums mainly from informal sector workers, and miscellaneous funds from investment returns, grants, donations and gifts from international donor partners and agencies [14].\nSince its inception, Ghana’s NHIS has been considered one of Africa’s model health insurance systems. The benefit package of the NHIS covers the cost of treatment for more than 95% of the disease conditions in Ghana. The range of services covered includes, but is not limited to, outpatient care, diagnostic services, in-patient care, pre-approved medications, maternal care, ear, nose and throat services, dental services and all emergency services. Excluded from the NHIS benefit package are procedures such as dialysis for chronic renal failure, treatments for cancer (other than cervical and breast cancers), organ transplants and cosmetic surgery. Child immunization services, family planning and treatment of conditions such as HIV/AIDS and tuberculosis are also not covered. However, these services are provided under alternative government programs. Apart from pregnant women and children under five years, new members serve a waiting period of three months after registration before accessing health care under the scheme. 
Further, members of the scheme can access healthcare from health service providers accredited by the NHIA. These include public, quasi-government and faith-based health facilities, some (but not all) private health facilities, and licensed pharmacies [15]. Despite the benefits of increased access to healthcare offered by NHIS membership and mandatory enrolment for all residents in Ghana, universal population coverage under the scheme has proved challenging. As of 2021, the NHIS had an active membership of over 15 million people, equating to about 53% of Ghana’s estimated population [16]. However, a recent study examining NHIS enrolment within the last decade showed that, on average, only about 40% of all Ghanaians had ever registered with the Scheme [17]. That notwithstanding, utilization trends for in-patient and outpatient care at NHIS-accredited health facilities continue to increase across the country [18, 19].\nAs part of efforts to increase coverage, a premium exemption policy for vulnerable populations, such as children under 18, was implemented. Thus, persons below 18 years are exempted from paying annual premiums [20] but must pay administrative charges, including the NHIS card processing fee [21]. Furthermore, in 2010, the National Health Insurance Authority decoupled children under five years from their parents’ membership. Therefore, children under five years can be active members of the NHIS even if their parents are not active members [22]. In addition, private health insurance schemes are emerging rapidly in Ghana, with premiums based on the calculated risk of subscribers.\nThe Ghana NHIS has been extensively investigated. Prior studies among the adult population showed that health insurance enrolment was associated with educational status, wealth, age, marital status, gender, type of occupation and place of residence [23–25]. 
In addition, an analysis of the 2011 Ghana Multiple Indicator Cluster Survey (MICS) revealed that a majority (73%) of children under five years were non-insured [26]. However, there is a paucity of literature on determinants of health insurance enrolment among children under five years. Therefore, this study aimed to determine factors associated with NHIS enrolment among children under five in Ghana using nationally representative survey data. Generating empirical evidence about factors influencing enrolment is essential to inform policy to help Ghana achieve Universal Health Coverage and the Sustainable Development Goals.", "In this study, we analysed the 2017/2018 Ghana MICS [27]. The 2017/18 MICS collected demographic and health data across Ghana, including rural and urban settings. The sampling of participants was done in two phases. The first phase involved selecting 660 enumeration areas from 20 strata, proportional to size. The second involved the selection of 13,202 households within the selected enumeration areas. The weighted sample size of children under five years was 8,874. Ghana had ten administrative regions divided into 20 strata, of which ten are rural and ten are urban. Participants were selected across all the regions and strata. The inclusion criteria were children under five years who lived in the selected households or who spent the night preceding the survey in those households. Data were collected using Computer Assisted Personal Interviewing (CAPI). The under five questionnaire was administered to caregivers of children below five years. Trained field officers and supervisors collected the data between October 2017 and September 2018. Details about the 2017/18 MICS are provided elsewhere [28].\nThe dependent variable in this study was health insurance status (i.e. is [name] covered by any health insurance?) coded as 1 = Yes and 0 = No. The independent variables identified in the literature included child and maternal characteristics. 
These include the child’s age, maternal educational status, wealth index, ethnicity, geographic region and place of residence. Details about the coding are provided in Table 1. The complex nature of the survey was accounted for by employing the ‘svy’ STATA command. STATA/SE version 16 (StataCorp, College Station, Texas, USA) was used to analyse the data. Descriptive statistics were computed for participants’ characteristics and summarized in a table. The Chi-square test was employed to examine the association between participant characteristics and health insurance status at the bivariate level. Binary Logistic Regression was employed to identify significant predictors of health insurance enrolment among under five children. The results were reported at a 95% confidence level.", " Descriptive statistics The results showed that a majority of the participants (58.4%) were insured, while 41.6% were non-insured. About half (50.8%) of the participants were female, and more than half (58%) were below 36 months. About three in ten (27.2%) mothers had no formal/pre-primary education, while 22.2% were in the poorest wealth quintile. Children from rural areas constituted 56.9%. In terms of ethnicity, 46.1% of the participants were Akan, 16.9% were Mole-Dagbani, and 37% were of other ethnic groups, such as Ewe and Gruma. 
Details are provided in Table 1.\n\nTable 1 Socio-demographic characteristics of children, mothers and health insurance status in Ghana, 2017/18 (n; %)\nSex of child\n  Male: 4369; 49.2\n  Female: 4505; 50.8\nAge of child (months)\n  0–11: 1700; 19.2\n  12–23: 1694; 19.1\n  24–35: 1750; 19.7\n  36–47: 1928; 21.7\n  48–59: 1802; 20.3\nMother's education\n  Pre-primary/none: 2428; 27.3\n  Primary: 1790; 20.2\n  Junior High School: 3259; 36.7\n  Senior High School: 954; 10.8\n  Higher: 443; 5.0\nWealth quintile\n  Poorest: 1966; 22.2\n  Second: 1834; 20.7\n  Middle: 1769; 19.9\n  Fourth: 1676; 18.8\n  Richest: 1630; 18.4\nResidence\n  Urban: 3821; 43.1\n  Rural: 5053; 56.9\nRegion\n  Western: 931; 10.5\n  Central: 926; 10.4\n  Greater Accra: 862; 9.7\n  Volta: 710; 8.0\n  Eastern: 953; 10.7\n  Ashanti: 2111; 23.8\n  Brong-Ahafo: 833; 9.4\n  Northern: 1055; 11.9\n  Upper East: 282; 3.2\n  Upper West: 211; 2.4\nEthnicity\n  Akan: 4091; 46.1\n  Mole Dagbani: 1503; 16.9\n  Others (Ewe, Gruma etc.): 3280; 37.0\nHealth insurance status\n  Insured: 5186; 58.4\n  Non-insured: 3689; 41.6\n Bivariate analysis At the bivariate level, child age was significantly associated with health insurance enrolment (p < 0.05). Also, mothers’ education, wealth quintile, residence, geographic region, and ethnicity were significantly associated with health insurance enrolment (p < 0.05) among children under five years. A majority (56.1%) of children aged 0–11 months were not insured with the National Health Insurance Scheme. Less than half of children in the Central Region were insured, while about eight in ten children were insured in the Brong-Ahafo Region. Details are provided in Table 2.\n\nTable 2 Association between participant characteristics and health insurance status (n; non-insured %; insured %)\nSex of child (Chi-square = 0.2961, p = 0.7311)\n  Male: 4369; 41.3; 58.7\n  Female: 4505; 41.8; 58.2\nAge of child (months) (Chi-square = 29.3826, p < 0.0001)\n  0–11: 1700; 56.1; 43.9\n  12–23: 1694; 43.6; 56.4\n  24–35: 1750; 34.6; 65.4\n  36–47: 1928; 37.7; 62.3\n  48–59: 1802; 37.0; 63.0\nMother's education (Chi-square = 149.8059, p < 0.0001)\n  Pre-primary/none: 2428; 42.6; 57.4\n  Primary: 1790; 49.4; 50.6\n  Junior High School: 3259; 41.5; 58.5\n  Senior High School: 954; 34.5; 65.4\n  Higher: 443; 20.1; 79.9\nWealth quintile (Chi-square = 152.59, p < 0.0001)\n  Poorest: 1966; 48.7; 51.3\n  Second: 1834; 44.4; 55.6\n  Middle: 1769; 42.6; 57.4\n  Fourth: 1676; 41.1; 58.9\n  Richest: 1630; 29.1; 70.9\nResidence (Chi-square = 92.0315, p < 0.0001)\n  Urban: 3821; 35.8; 64.2\n  Rural: 5053; 45.9; 54.1\nRegion (Chi-square = 250.9397, p < 0.0001)\n  Western: 931; 46.7; 53.3\n  Central: 926; 53.0; 47.0\n  Greater Accra: 863; 49.7; 50.3\n  Volta: 710; 42.5; 57.5\n  Eastern: 953; 39.3; 60.7\n  Ashanti: 2111; 41.9; 58.1\n  Brong-Ahafo: 833; 22.6; 77.4\n  Northern: 1055; 42.1; 57.9\n  Upper East: 282; 24.1; 75.9\n  Upper West: 211; 34.1; 65.9\nEthnicity (Chi-square = 7.3800, p = 0.3330)\n  Akan: 4091; 42.2; 57.8\n  Mole Dagbani: 1503; 38.5; 61.5\n  Other (Ewe, Gruma etc.): 3280; 42.3; 57.7\n Multivariable analysis At the crude analysis level, health insurance enrolment was significantly predicted by wealth quintile, child’s age, mother’s education, place of residence, geographical region and ethnicity (p < 0.05). In the adjusted analysis, children in the poorest wealth quintile were less likely to be insured than children in the second (AOR = 1.47; 95% CI: 1.15–1.89), middle (AOR = 1.59; 95% CI: 1.16–2.17), fourth (AOR = 1.73; 95% CI: 1.28–2.35) and richest (AOR = 2.82; 95% CI: 2.00–3.98) wealth quintiles. Children aged 0–11 months were less likely to be insured than children aged 12–23 months (AOR = 1.72; 95% CI: 1.42–2.10). Children whose mothers had higher education were about twice as likely (AOR = 2.14; 95% CI: 1.39–3.30) to be insured as children whose mothers had no formal education. In addition, children in rural areas (AOR = 0.75; 95% CI: 0.61–0.91) had lower odds of being insured than children in urban areas. Also, children in the Northern Region (AOR = 3.23; 95% CI: 2.18–4.80), Upper West Region (AOR = 4.82; 95% CI: 3.04–7.66) and Upper East Region (AOR = 8.74; 95% CI: 5.35–13.40) had a higher probability of being insured than children in the Greater Accra Region. 
Details are provided in Table 3.\n\nTable 3 Logistic regression on predictors of health insurance status among children under five years in Ghana, 2017/18\nCovariate/exposure | Crude OR (95% CI) | Adjusted OR (95% CI)\nSex of child (Wald p = 0.7311; adjusted Wald p = 0.8045)\n  Male | 1 (ref) | 1 (ref)\n  Female | 0.98 (0.85–1.12) | 0.98 (0.85–1.13)\nAge of child in months (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  0–11 | 1 (ref) | 1 (ref)\n  12–23 | 1.66 (1.37–2.00) | 1.72 (1.42–2.10)\n  24–35 | 2.42 (2.02–2.89) | 2.70 (2.22–3.29)\n  36–47 | 2.12 (1.76–2.55) | 2.30 (1.88–2.81)\n  48–59 | 2.17 (1.79–2.63) | 2.44 (1.99–2.99)\nMother’s education (Wald p < 0.0001; adjusted Wald p = 0.0002)\n  Pre-primary/none | 1 (ref) | 1 (ref)\n  Primary | 0.76 (0.62–0.93) | 0.83 (0.67–1.02)\n  Junior High School | 1.05 (0.87–1.26) | 1.16 (0.95–1.42)\n  Senior High School | 1.40 (1.09–1.81) | 1.37 (1.02–1.82)\n  Higher | 2.96 (2.07–4.25) | 2.14 (1.39–3.30)\nWealth quintile (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  Poorest | 1 (ref) | 1 (ref)\n  Second | 1.19 (0.95–1.49) | 1.47 (1.15–1.89)\n  Middle | 1.28 (0.98–1.66) | 1.59 (1.16–2.17)\n  Fourth | 1.36 (1.05–1.76) | 1.73 (1.28–2.35)\n  Richest | 2.31 (1.77–3.02) | 2.82 (2.00–3.98)\nResidence (Wald p < 0.0001; adjusted Wald p = 0.0002)\n  Urban | 1 (ref) | 1 (ref)\n  Rural | 0.66 (0.55–0.79) | 0.75 (0.61–0.91)\nRegion (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  Greater Accra | 1 (ref) | 1 (ref)\n  Western | 1.13 (0.77–1.63) | 1.93 (1.32–2.80)\n  Central | 0.87 (0.60–1.26) | 1.38 (0.92–2.08)\n  Volta | 1.33 (0.83–2.13) | 2.81 (1.65–4.80)\n  Eastern | 1.52 (1.06–2.18) | 2.68 (1.86–3.86)\n  Ashanti | 1.37 (0.99–1.89) | 2.13 (1.50–3.02)\n  Brong-Ahafo | 3.38 (2.27–5.04) | 6.61 (4.24–10.30)\n  Northern | 1.36 (0.97–1.90) | 3.23 (2.18–4.80)\n  Upper East | 3.14 (2.21–4.44) | 8.74 (5.35–13.40)\n  Upper West | 1.92 (1.33–2.76) | 4.82 (3.04–7.66)\nEthnicity (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  Akan | 1 (ref) | 1 (ref)\n  Mole-Dagbani | 1.17 (0.94–1.45) | 1.07 (0.80–1.43)\n  Others (Ewe, Gruma etc.) | 1.00 (0.83–1.19) | 1.09 (0.90–1.31)", "The results showed that a majority of the participants (58.4%) were insured, while a substantial minority (41.6%) were non-insured. 
About half of the participants (50.8%) were female, and more than half (58%) were below 36 months. About three in ten (27.2%) mothers had no formal/pre-primary education, while 22.2% were in the poorest wealth quintile. Children from rural areas constituted 56.9%. In terms of ethnicity, 46.1% of the participants were Akan, 16.9% were Mole-Dagbani, and 37% were of other ethnic groups, such as Ewe and Gruma. Details are provided in Table 1.\n\nTable 1 Socio-demographic characteristics of children, mothers and health insurance status in Ghana, 2017/18\nCharacteristic | n | %\nSex of child\n  Male | 4369 | 49.2\n  Female | 4505 | 50.8\nAge of child in months\n  0–11 | 1700 | 19.2\n  12–23 | 1694 | 19.1\n  24–35 | 1750 | 19.7\n  36–47 | 1928 | 21.7\n  48–59 | 1802 | 20.3\nMother's education\n  Pre-primary/none | 2428 | 27.3\n  Primary | 1790 | 20.2\n  Junior High School | 3259 | 36.7\n  Senior High School | 954 | 10.8\n  Higher | 443 | 5.0\nWealth quintile\n  Poorest | 1966 | 22.2\n  Second | 1834 | 20.7\n  Middle | 1769 | 19.9\n  Fourth | 1676 | 18.8\n  Richest | 1630 | 18.4\nResidence\n  Urban | 3821 | 43.1\n  Rural | 5053 | 56.9\nRegion\n  Western | 931 | 10.5\n  Central | 926 | 10.4\n  Greater Accra | 862 | 9.7\n  Volta | 710 | 8.0\n  Eastern | 953 | 10.7\n  Ashanti | 2111 | 23.8\n  Brong-Ahafo | 833 | 9.4\n  Northern | 1055 | 11.9\n  Upper East | 282 | 3.2\n  Upper West | 211 | 2.4\nEthnicity\n  Akan | 4091 | 46.1\n  Mole-Dagbani | 1503 | 16.9\n  Others (Ewe, Gruma etc.) | 3280 | 37.0\nHealth insurance status\n  Insured | 5186 | 58.4\n  Non-insured | 3689 | 41.6", "At the bivariate level, child age was significantly associated with health insurance enrolment (p < 0.05). Also, mothers’ education, wealth quintile, residence, geographic region, and ethnicity were significantly associated with health insurance enrolment (p < 0.05) among children under five years. A majority (56.1%) of children aged 0–11 months were not insured with the National Health Insurance Scheme. Less than half of children in the Central Region were insured, while eight in ten children were insured in the Brong-Ahafo Region. Details are provided in Table 2.\n\nTable 2 Association between participant characteristics and health insurance status\nCharacteristic | n | Non-insured (%) | Insured (%)\nSex of child (Chi-square = 0.2961; p = 0.7311)\n  Male | 4369 | 41.3 | 58.7\n  Female | 4505 | 41.8 | 58.2\nAge of child in months (Chi-square = 29.3826; p < 0.0001)\n  0–11 | 1700 | 56.1 | 43.9\n  12–23 | 1694 | 43.6 | 56.4\n  24–35 | 1750 | 34.6 | 65.4\n  36–47 | 1928 | 37.7 | 62.3\n  48–59 | 1802 | 37.0 | 63.0\nMother's education (Chi-square = 149.8059; p < 0.0001)\n  Pre-primary/none | 2428 | 42.6 | 57.4\n  Primary | 1790 | 49.4 | 50.6\n  Junior High School | 3259 | 41.5 | 58.5\n  Senior High School | 954 | 34.5 | 65.4\n  Higher | 443 | 20.1 | 79.9\nWealth quintile (Chi-square = 152.59; p < 0.0001)\n  Poorest | 1966 | 48.7 | 51.3\n  Second | 1834 | 44.4 | 55.6\n  Middle | 1769 | 42.6 | 57.4\n  Fourth | 1676 | 41.1 | 58.9\n  Richest | 1630 | 29.1 | 70.9\nResidence (Chi-square = 92.0315; p < 0.0001)\n  Urban | 3821 | 35.8 | 64.2\n  Rural | 5053 | 45.9 | 54.1\nRegion (Chi-square = 250.9397; p < 0.0001)\n  Western | 931 | 46.7 | 53.3\n  Central | 926 | 53.0 | 47.0\n  Greater Accra | 863 | 49.7 | 50.3\n  Volta | 710 | 42.5 | 57.5\n  Eastern | 953 | 39.3 | 60.7\n  Ashanti | 2111 | 41.9 | 58.1\n  Brong-Ahafo | 833 | 22.6 | 77.4\n  Northern | 1055 | 42.1 | 57.9\n  Upper East | 282 | 24.1 | 75.9\n  Upper West | 211 | 34.1 | 65.9\nEthnicity (Chi-square = 7.3800; p = 0.3330)\n  Akan | 4091 | 42.2 | 57.8\n  Mole-Dagbani | 1503 | 38.5 | 61.5\n  Other (Ewe, Gruma etc.) | 3280 | 42.3 | 57.7", "At the crude analysis level, health insurance enrolment was significantly predicted by wealth quintile, child’s age, mother’s education, place of residence, geographical region and ethnicity (p < 0.05). At the adjusted analysis level, children in the poorest wealth quintile were less likely to be insured than children in the second (AOR = 1.47; 95% CI: 1.15–1.89), middle (AOR = 1.59; 95% CI: 1.16–2.17), fourth (AOR = 1.73; 95% CI: 1.28–2.35) and richest (AOR = 2.82; 95% CI: 2.00–3.98) wealth quintiles. Children aged 0–11 months were less likely to be insured than children aged 12–23 months (AOR = 1.72; 95% CI: 1.42–2.10). Children whose mothers had higher education were about twice as likely (AOR = 2.14; 95% CI: 1.39–3.30) to be insured as children whose mothers had no formal education. In addition, children in rural areas had lower odds of being insured than children in urban areas (AOR = 0.75; 95% CI: 0.61–0.91). 
Also, children in the Northern Region (AOR = 3.23; 95% CI: 2.18–4.80), Upper West Region (AOR = 4.82; 95% CI: 3.04–7.66) and Upper East Region (AOR = 8.74; 95% CI: 5.35–13.40) had a higher probability of being insured compared with children in the Greater Accra Region. Details are provided in Table 3.\n\nTable 3 Logistic regression on predictors of health insurance status among children under five years in Ghana, 2017/18\nCovariate/exposure | Crude OR (95% CI) | Adjusted OR (95% CI)\nSex of child (Wald p = 0.7311; adjusted Wald p = 0.8045)\n  Male | 1 (ref) | 1 (ref)\n  Female | 0.98 (0.85–1.12) | 0.98 (0.85–1.13)\nAge of child in months (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  0–11 | 1 (ref) | 1 (ref)\n  12–23 | 1.66 (1.37–2.00) | 1.72 (1.42–2.10)\n  24–35 | 2.42 (2.02–2.89) | 2.70 (2.22–3.29)\n  36–47 | 2.12 (1.76–2.55) | 2.30 (1.88–2.81)\n  48–59 | 2.17 (1.79–2.63) | 2.44 (1.99–2.99)\nMother’s education (Wald p < 0.0001; adjusted Wald p = 0.0002)\n  Pre-primary/none | 1 (ref) | 1 (ref)\n  Primary | 0.76 (0.62–0.93) | 0.83 (0.67–1.02)\n  Junior High School | 1.05 (0.87–1.26) | 1.16 (0.95–1.42)\n  Senior High School | 1.40 (1.09–1.81) | 1.37 (1.02–1.82)\n  Higher | 2.96 (2.07–4.25) | 2.14 (1.39–3.30)\nWealth quintile (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  Poorest | 1 (ref) | 1 (ref)\n  Second | 1.19 (0.95–1.49) | 1.47 (1.15–1.89)\n  Middle | 1.28 (0.98–1.66) | 1.59 (1.16–2.17)\n  Fourth | 1.36 (1.05–1.76) | 1.73 (1.28–2.35)\n  Richest | 2.31 (1.77–3.02) | 2.82 (2.00–3.98)\nResidence (Wald p < 0.0001; adjusted Wald p = 0.0002)\n  Urban | 1 (ref) | 1 (ref)\n  Rural | 0.66 (0.55–0.79) | 0.75 (0.61–0.91)\nRegion (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  Greater Accra | 1 (ref) | 1 (ref)\n  Western | 1.13 (0.77–1.63) | 1.93 (1.32–2.80)\n  Central | 0.87 (0.60–1.26) | 1.38 (0.92–2.08)\n  Volta | 1.33 (0.83–2.13) | 2.81 (1.65–4.80)\n  Eastern | 1.52 (1.06–2.18) | 2.68 (1.86–3.86)\n  Ashanti | 1.37 (0.99–1.89) | 2.13 (1.50–3.02)\n  Brong-Ahafo | 3.38 (2.27–5.04) | 6.61 (4.24–10.30)\n  Northern | 1.36 (0.97–1.90) | 3.23 (2.18–4.80)\n  Upper East | 3.14 (2.21–4.44) | 8.74 (5.35–13.40)\n  Upper West | 1.92 (1.33–2.76) | 4.82 (3.04–7.66)\nEthnicity (Wald p < 0.0001; adjusted Wald p < 0.0001)\n  Akan | 1 (ref) | 1 (ref)\n  Mole-Dagbani | 1.17 (0.94–1.45) | 1.07 (0.80–1.43)\n  Others (Ewe, Gruma etc.) | 1.00 (0.83–1.19) | 1.09 (0.90–1.31)", "The findings showed that more than half (58.4%) of the participants were covered by health insurance. Thus, caregivers/parents of insured children were protected against out-of-pocket payment, which is a risk factor for catastrophic health care expenditure and poverty [29]. A similar study revealed that 57% of Ghanaian children below 18 years were covered by health insurance [21]. 
Another household survey across three districts in Ghana reported that 55.9% of the participants were insured. A similar nationally representative survey demonstrated that 66% of women and 52.6% of men aged 15–49 years were insured [30].\nFurther, our finding is similar to a study in Shanghai, China, where 56.5% of children under eight years were covered by health insurance [31]. However, health insurance coverage in this study was higher than coverage in other African countries. For instance, an analysis of data from four African countries revealed that Ghana had the highest health insurance coverage, at 62.4% for females and 49.1% for males, followed by Kenya (18.2% for females and 21.9% for males), Tanzania (9.1% for females and 9.5% for males) and Nigeria (1.1% for females and 3.1% for males) [32]. The difference in findings may be attributed to contextual factors and health insurance policies. For instance, Ghana’s National Health Insurance Scheme (NHIS) covers more than 95% of the disease conditions in the country, including medications, medical investigations, outpatient and in-patient services. Also, women who register with the NHIS have access to free maternal health services, such as antenatal, delivery and postnatal services. Children under 18 years, indigents, the elderly, persons with disability or mental disorders, Social Security and National Insurance Trust (SSNIT) contributors and pensioners are exempted from paying premiums but must renew their membership every year [33].\nIn addition, we found that four in ten children were not covered by health insurance. Therefore, parents/caregivers of uninsured children would have to pay out-of-pocket when accessing child health care services. Out-of-pocket payment has the potential to put caregivers at risk of catastrophic healthcare expenditure and poverty. 
Also, parents of non-insured children are more likely to postpone or delay seeking health care, putting the child at risk of poor health outcomes [34]. This finding is similar to findings from previous studies in Ghana and elsewhere. For instance, a study revealed that 43.2% of Ghanaian children under eighteen years were uninsured [21]. Another study among children under seven years in Shanghai, China, reported that 43.5% of the participants were uninsured [31]. This finding may be explained by individual, financial, country-specific and health system-related factors [22]. In Ghana, children under five years are exempted from paying NHIS premiums. However, they must pay membership card processing fees and renewal fees every year [33], which may pose a barrier for their caregivers.\nFurthermore, recent evidence shows that persons insured with Ghana’s NHIS still pay out-of-pocket for services in accredited health facilities [35]. Reasons for non-registration or non-renewal of membership with the NHIS include financial constraints, lack of confidence in the scheme, dissatisfaction with services, shortage of insured medications, long waiting time, payment of illegal charges and non-use of health services [36]. Going forward, the Ministry of Health, National Health Insurance Authority, Ministry of Gender, Children and Social Protection and health providers would have to collaborate to improve health insurance coverage for Ghanaian children under five years. However, there is a need for empirical evidence on the correlates and reasons for non-enrolment among children under five years. Hence, we recommend that future studies explore these grey areas.\nIn addition, the findings revealed that health insurance enrolment was influenced by child’s age, mother’s educational status, wealth index, region, ethnicity and place of residence. Children whose mothers were less educated had a lower likelihood of being insured. 
A similar study found that well-educated mothers were more likely to enrol their children in health insurance [37]. Another study in Shanghai, China, showed that children of women with low education were less likely to be covered by health insurance [31, 32]. A probable explanation is that less educated mothers may lack adequate understanding of the health insurance process and the benefits package due to their inability to access information [5]. Evidence shows that Ghanaian women who had access to information were more likely to be insured [38]. Women with higher educational status are also more empowered to make health-seeking decisions. In addition, children from wealthy families were more likely to be covered by health insurance. Previous studies have supported this finding [39]. Another study in Shanghai, China, revealed that children from the lowest income households had lower odds of being insured [31]. A possible explanation is that wealthy parents/caregivers have larger disposable incomes; hence, they can afford health insurance premiums, NHIS card processing charges, and annual renewal fees. This implies that the purpose of the NHIS as a pro-poor social intervention has not been well achieved.\nAlso, children in rural areas had lower chances of being insured. A study in Ghana reported that women living in remote settings had lower odds of insurance coverage than those in urban areas [37]. A conceivable explanation for this finding is that parents of children residing in urban areas may have easy access to health insurance offices. In Ghana, few NHIS offices are sited in rural areas, leading to delays in registering and printing insurance cards. There are also fewer NHIS personnel and logistics in rural areas compared to urban areas [36]. These factors may explain the disparities in insurance coverage by place of residence. 
We also found that children from the nine other administrative regions were more likely to be insured than those from the Greater Accra Region, which hosts Ghana’s capital city. A similar study reported that children in the Greater Accra Region were more likely to be non-insured compared with children in the other regions of Ghana [21]. The Greater Accra Region has the lowest health insurance coverage in Ghana [40].\nMoreover, we found that children from regions with a high incidence of poverty were more likely to be insured. This finding was expected because the poor perceive health insurance as a form of social security that protects them against catastrophic health care expenditure during health emergencies [41]. This may explain why the poorest region in Ghana (Upper East Region) has the highest NHIS coverage [40]. Additionally, health insurance enrolment was associated with child’s age: children aged twelve months or older were more likely to be insured. A similar study in Shanghai, China, reported that older children were less likely to be uninsured [31]. The Free Maternal Health Policy may explain this finding. In Ghana, pregnant women who register with the NHIS have free maternal health care services up to three months postpartum [42]. Our findings imply that vulnerable children did not have health insurance. Consequently, their caregivers/parents may be predisposed to catastrophic health care expenditure. Furthermore, evidence shows that uninsured children are predisposed to poor health outcomes [43]. Therefore, in the quest to increase health insurance coverage, future interventions should prioritize children from low socio-economic backgrounds.\nStrengths and limitations of the study\nOne major strength of this study is that we analysed nationally representative data, so the findings can be generalized to the population. This study is one of the few studies in Ghana investigating socio-demographic determinants of child health insurance. 
However, this study is not devoid of limitations. Cross-sectional studies cannot establish causal relationships, so the findings should be interpreted with caution. In addition, health insurance status was self-reported by caregivers/parents of the children. Therefore, it may be subject to social desirability or recall bias.", "One major strength of this study is that we analysed nationally representative data, so the findings can be generalized to the population. This study is one of the few studies in Ghana investigating socio-demographic determinants of child health insurance. However, this study is not devoid of limitations. Cross-sectional studies cannot establish causal relationships, so the findings should be interpreted with caution. In addition, health insurance status was self-reported by caregivers/parents of the children. Therefore, it may be subject to social desirability or recall bias.", "This study demonstrated that more than half of the children were covered by health insurance. Health insurance enrolment was associated with wealth index, mother’s educational status, child’s age, type of residence, geographical region and ethnicity. Policymakers can leverage these findings to help improve health insurance coverage for children in Ghana. 
Interventions that seek to improve health insurance coverage must prioritize children from poor socio-economic backgrounds. Future studies should employ qualitative designs to explore caregivers’ nuanced views on health insurance enrolment among children under five years. Also, further studies may explore factors associated with non-enrolment among children under five years." ]
[ null, null, "results", null, null, null, "discussion", null, "conclusion" ]
[ "Health insurance", "Children under five years", "Ghana", "Enrolment", "Factors" ]
Background: Globally, protecting and improving the well-being of children under five years remains a public health priority. For instance, Target 3.2 of the Sustainable Development Goals seeks to end preventable deaths of newborns and children under five years of age, with all countries aiming to reduce neonatal and under five mortality by 2030 [1]. Evidence shows that under five mortality has declined over the last three decades worldwide. Between 1990 and 2019, the under five mortality rate fell by 59%, from 93 deaths per 1000 live births to 38 [2]. Notwithstanding, the burden of under five mortality remains high. For example, in 2019 alone, about 14,000 children died every day before their fifth birthday worldwide. More than half (53%) of under five deaths in 2019 occurred in sub-Saharan Africa, with Nigeria and Somalia recording the highest under five mortality rates (117 deaths per 1000 live births) [2]. Most under five deaths are preventable with timely access to quality healthcare services and child health interventions [3]. Therefore, one integral recommendation, among many others, to accelerate progress towards reducing the mortality rate among children under five years is to ensure health equity through universal health coverage, so that all children can access essential health services without undue financial hardship [3]. Evidence shows that health insurance enrolment is linked to increased access to and utilisation of healthcare services [4] and better health outcomes, particularly for maternal and child health [5–10]. In Ghana, the under five mortality rate was estimated at 46 deaths per 1000 live births in 2019 [11], higher than the global rate of 38 deaths per 1000 live births. Financial constraints pose a significant barrier to accessing healthcare services, including child health services [12]. 
Therefore, under the National Health Insurance Act 650, the Government of Ghana established the National Health Insurance Scheme (NHIS) in 2003 to eliminate financial barriers to accessing health care [13]. Upon establishment, the scheme operated semi-autonomous Mutual Health Insurance Schemes in districts across the country. In 2012, a new law, Act 852, replaced Act 650. Under Act 852, all District Mutual Health Insurance Schemes were consolidated under a National Health Insurance Authority (NHIA) to ensure effective management and efficient service delivery [14]. The primary sources of financing for the NHIS comprise a National Health Insurance Levy on selected goods and services, a 2.5% contribution from the National Social Security Scheme by formal sector workers, individual premiums mainly from informal sector workers, and miscellaneous funds from investment returns, grants, donations and gifts from international donor partners and agencies [14]. Since its inception, Ghana’s NHIS has been considered one of Africa’s model health insurance systems. The benefit package of the NHIS covers the cost of treatment for more than 95% of the disease conditions in Ghana. The range of services covered includes, but is not limited to, outpatient care, diagnostic services, in-patient care, pre-approved medications, maternal care, ear, nose and throat services, dental services and all emergency services. Excluded from the NHIS benefit package are procedures such as dialysis for chronic renal failure, treatments for cancers (other than cervical and breast cancers), organ transplants and cosmetic surgery. Child immunization services, family planning and treatment of conditions such as HIV/AIDS and tuberculosis are also not covered. However, these services are provided under alternative government programs. Apart from pregnant women and children under five years, new members serve a waiting period of three months after registration before accessing health care under the scheme. 
Further, members of the scheme can access healthcare from health service providers accredited by the NHIA, which include public, quasi-government and faith-based facilities, some (but not all) private health facilities, and licensed pharmacies [15]. Despite the benefits of increased access to healthcare offered by NHIS membership and mandatory enrolment for all residents in Ghana, universal population coverage has proved challenging. As of 2021, the NHIS had an active membership of over 15 million people, equating to about 53% of Ghana's estimated population [16]. However, a recent study examining NHIS enrolment over the last decade showed that, on average, only about 40% of all Ghanaians had ever registered with the scheme [17]. Nevertheless, utilisation of in-patient and outpatient care at NHIS-accredited health facilities continues to increase across the country [18, 19]. As part of efforts to increase coverage, a premium exemption policy was implemented for vulnerable populations, such as children under 18: persons below 18 years are exempted from paying annual premiums [20] but must pay administrative charges, including the NHIS card processing fee [21]. Furthermore, in 2010, the National Health Insurance Authority decoupled children under five years from their parents' membership, so children under five can be active members of the NHIS even if their parents are not [22]. In addition, private health insurance schemes are emerging rapidly in Ghana, with premiums based on the calculated risk of subscribers. The Ghana NHIS has been extensively investigated: prior studies among the adult population showed that health insurance enrolment was associated with educational status, wealth, age, marital status, gender, type of occupation and place of residence [23–25].
In addition, an analysis of the 2011 Ghana Multiple Indicator Cluster Survey (MICS) revealed that a majority (73%) of children under five years were uninsured [26]. However, there is a paucity of literature on the determinants of health insurance enrolment among children under five years. Therefore, this study aimed to determine the factors associated with NHIS enrolment among children under five in Ghana using nationally representative survey data. Generating empirical evidence about the factors influencing enrolment is essential to inform policies that will help Ghana achieve Universal Health Coverage and the Sustainable Development Goals.

Methods
In this study, we analysed data from the 2017/18 Ghana MICS [27], which collected demographic and health data across rural and urban settings in Ghana. Participants were sampled in two stages. First, 660 enumeration areas were selected from 20 strata with probability proportional to size. Second, 13,202 households were selected within the chosen enumeration areas. The weighted sample size of children under five years was 8,874. At the time of the survey, Ghana had ten administrative regions, divided into 20 strata, of which ten were rural and ten urban; participants were selected across all regions and strata. Children under five years who lived in the selected households, or who spent the night before the survey there, were included. Data were collected using Computer Assisted Personal Interviewing (CAPI), with the under-five questionnaire administered to caregivers of children below five years. Trained field officers and supervisors collected the data between October 2017 and September 2018. Details about the 2017/18 MICS are provided elsewhere [28]. The dependent variable in this study was health insurance status (i.e. "is [name] covered by any health insurance?"), coded as 1 = Yes and 0 = No. The independent variables identified in the literature included child and maternal characteristics.
These include the child's age, maternal educational status, wealth index, ethnicity, geographic region and place of residence. Details about the coding are provided in Table 1. The complex survey design was accounted for using Stata's 'svy' commands, and Stata/SE version 16 (StataCorp, College Station, Texas, USA) was used to analyse the data. Descriptive statistics were computed for participants' characteristics and summarised in a table. The chi-square test was used to examine the association between participant characteristics and health insurance status at the bivariate level, and binary logistic regression was used to identify significant predictors of health insurance enrolment among under-five children. Results were reported at the 95% confidence level.

Results
Descriptive statistics
A majority of the participants (58.4%) were insured, while a substantial minority (41.6%) were uninsured. About half of the participants (50.8%) were female, and more than half (58%) were below 36 months of age. About three in ten mothers (27.2%) had no formal/pre-primary education, and 22.2% of children were in the poorest wealth quintile. Children from rural areas constituted 56.9% of the sample. In terms of ethnicity, 46.1% of the participants were Akan, 16.9% were Mole-Dagbani, and 37% belonged to other ethnic groups, such as Ewe and Gruma. Details are provided in Table 1.
Table 1. Socio-demographic characteristics of children, mothers and health insurance status in Ghana, 2017/18

Characteristic                      n       %
Sex of child
  Male                            4369    49.2
  Female                          4505    50.8
Age of child (months)
  0–11                            1700    19.2
  12–23                           1694    19.1
  24–35                           1750    19.7
  36–47                           1928    21.7
  48–59                           1802    20.3
Mother's education
  Pre-primary/none                2428    27.3
  Primary                         1790    20.2
  Junior High School              3259    36.7
  Senior High School               954    10.8
  Higher                           443     5.0
Wealth quintile
  Poorest                         1966    22.2
  Second                          1834    20.7
  Middle                          1769    19.9
  Fourth                          1676    18.8
  Richest                         1630    18.4
Residence
  Urban                           3821    43.1
  Rural                           5053    56.9
Region
  Western                          931    10.5
  Central                          926    10.4
  Greater Accra                    862     9.7
  Volta                            710     8.0
  Eastern                          953    10.7
  Ashanti                         2111    23.8
  Brong-Ahafo                      833     9.4
  Northern                        1055    11.9
  Upper East                       282     3.2
  Upper West                       211     2.4
Ethnicity
  Akan                            4091    46.1
  Mole-Dagbani                    1503    16.9
  Others (Ewe, Gruma etc.)        3280    37.0
Health insurance status
  Insured                         5186    58.4
  Non-insured                     3689    41.6
Bivariate analysis
At the bivariate level, child age, mother's education, wealth quintile, residence and geographic region were significantly associated with health insurance enrolment (p < 0.05) among children under five years; the associations with sex of the child and ethnicity were not statistically significant (Table 2). A majority (56.1%) of children aged 0–11 months were not insured with the National Health Insurance Scheme. Less than half of the children in the Central Region were insured, while about eight in ten children in the Brong-Ahafo Region were insured. Details are provided in Table 2.

Table 2. Association between participant characteristics and health insurance status

Characteristic                n    Non-insured (%)   Insured (%)   Chi-square   p-value
Sex of child                                                          0.2961    0.7311
  Male                      4369       41.3              58.7
  Female                    4505       41.8              58.2
Age of child (months)                                                29.3826   <0.0001
  0–11                      1700       56.1              43.9
  12–23                     1694       43.6              56.4
  24–35                     1750       34.6              65.4
  36–47                     1928       37.7              62.3
  48–59                     1802       37.0              63.0
Mother's education                                                  149.8059   <0.0001
  Pre-primary/none          2428       42.6              57.4
  Primary                   1790       49.4              50.6
  Junior High School        3259       41.5              58.5
  Senior High School         954       34.5              65.4
  Higher                     443       20.1              79.9
Wealth quintile                                                     152.59     <0.0001
  Poorest                   1966       48.7              51.3
  Second                    1834       44.4              55.6
  Middle                    1769       42.6              57.4
  Fourth                    1676       41.1              58.9
  Richest                   1630       29.1              70.9
Residence                                                            92.0315   <0.0001
  Urban                     3821       35.8              64.2
  Rural                     5053       45.9              54.1
Region                                                              250.9397   <0.0001
  Western                    931       46.7              53.3
  Central                    926       53.0              47.0
  Greater Accra              863       49.7              50.3
  Volta                      710       42.5              57.5
  Eastern                    953       39.3              60.7
  Ashanti                   2111       41.9              58.1
  Brong-Ahafo                833       22.6              77.4
  Northern                  1055       42.1              57.9
  Upper East                 282       24.1              75.9
  Upper West                 211       34.1              65.9
Ethnicity                                                             7.3800    0.3330
  Akan                      4091       42.2              57.8
  Mole-Dagbani              1503       38.5              61.5
  Other (Ewe, Gruma etc.)   3280       42.3              57.7
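As an illustration of the bivariate test, the sketch below computes an unweighted Pearson chi-square statistic for the residence-by-insurance cross-tabulation. The cell counts are reconstructed approximately from the row totals and percentages in Table 2 (an assumption, since the microdata are not shown here), and the published statistic (92.03) is design-adjusted via Stata's 'svy' commands, so an unweighted calculation gives a close but not identical value.

```python
# Unweighted Pearson chi-square for a contingency table.
# Cell counts are reconstructed (approximately) from the residence rows
# of Table 2; the published, design-adjusted statistic is 92.03, so a
# close but not identical value is expected.

def pearson_chi2(table):
    """Pearson chi-square statistic for a table given as a list of count rows."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# rows: urban, rural; columns: non-insured, insured
residence = [[1368, 2453], [2319, 2734]]
print(round(pearson_chi2(residence), 1))  # close to the reported 92.03
```

Comparing the statistic against a chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom yields the p-value; for a 2 × 2 table a value this large corresponds to p < 0.0001, consistent with Table 2.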
Multivariable analysis
In the crude analysis, health insurance enrolment was significantly predicted by wealth quintile, the child's age, the mother's education, place of residence, geographic region and ethnicity (p < 0.05). In the adjusted analysis, children in the second (AOR = 1.47; 95% CI: 1.15–1.89), middle (AOR = 1.59; 95% CI: 1.16–2.17), fourth (AOR = 1.73; 95% CI: 1.28–2.35) and richest (AOR = 2.82; 95% CI: 2.00–3.98) wealth quintiles had higher odds of being insured than children in the poorest quintile. Children aged 12–23 months had higher odds of being insured than children aged 0–11 months (AOR = 1.72; 95% CI: 1.42–2.10). Children whose mothers had higher education were about twice as likely to be insured as children whose mothers had no formal education (AOR = 2.14; 95% CI: 1.39–3.30). In addition, children in rural areas had lower odds of being insured than children in urban areas (AOR = 0.75; 95% CI: 0.61–0.91). Children in the Northern (AOR = 3.23; 95% CI: 2.18–4.80), Upper West (AOR = 4.82; 95% CI: 3.04–7.66) and Upper East (AOR = 8.74; 95% CI: 5.35–13.40) Regions had a higher probability of being insured than children in the Greater Accra Region. Details are provided in Table 3.

Table 3. Logistic regression on predictors of health insurance status among children under five years in Ghana, 2017/18

Covariate/exposure          Crude OR (95% CI)    Wald p-value   Adjusted OR (95% CI)   Adjusted Wald p-value
Sex of child                                       0.7311                                0.8045
  Male                      1 (ref)                              1 (ref)
  Female                    0.98 (0.85–1.12)                     0.98 (0.85–1.13)
Age of child (months)                             <0.0001                               <0.0001
  0–11                      1 (ref)                              1 (ref)
  12–23                     1.66 (1.37–2.00)                     1.72 (1.42–2.10)
  24–35                     2.42 (2.02–2.89)                     2.70 (2.22–3.29)
  36–47                     2.12 (1.76–2.55)                     2.30 (1.88–2.81)
  48–59                     2.17 (1.79–2.63)                     2.44 (1.99–2.99)
Mother's education                                <0.0001                                0.0002
  Pre-primary/none          1 (ref)                              1 (ref)
  Primary                   0.76 (0.62–0.93)                     0.83 (0.67–1.02)
  Junior High School        1.05 (0.87–1.26)                     1.16 (0.95–1.42)
  Senior High School        1.40 (1.09–1.81)                     1.37 (1.02–1.82)
  Higher                    2.96 (2.07–4.25)                     2.14 (1.39–3.30)
Wealth quintile                                   <0.0001                               <0.0001
  Poorest                   1 (ref)                              1 (ref)
  Second                    1.19 (0.95–1.49)                     1.47 (1.15–1.89)
  Middle                    1.28 (0.98–1.66)                     1.59 (1.16–2.17)
  Fourth                    1.36 (1.05–1.76)                     1.73 (1.28–2.35)
  Richest                   2.31 (1.77–3.02)                     2.82 (2.00–3.98)
Residence                                         <0.0001                                0.0002
  Urban                     1 (ref)                              1 (ref)
  Rural                     0.66 (0.55–0.79)                     0.75 (0.61–0.91)
Region                                            <0.0001                               <0.0001
  Greater Accra             1 (ref)                              1 (ref)
  Western                   1.13 (0.77–1.63)                     1.93 (1.32–2.80)
  Central                   0.87 (0.60–1.26)                     1.38 (0.92–2.08)
  Volta                     1.33 (0.83–2.13)                     2.81 (1.65–4.80)
  Eastern                   1.52 (1.06–2.18)                     2.68 (1.86–3.86)
  Ashanti                   1.37 (0.99–1.89)                     2.13 (1.50–3.02)
  Brong-Ahafo               3.38 (2.27–5.04)                     6.61 (4.24–10.30)
  Northern                  1.36 (0.97–1.90)                     3.23 (2.18–4.80)
  Upper East                3.14 (2.21–4.44)                     8.74 (5.35–13.40)
  Upper West                1.92 (1.33–2.76)                     4.82 (3.04–7.66)
Ethnicity                                         <0.0001                               <0.0001
  Akan                      1 (ref)                              1 (ref)
  Mole-Dagbani              1.17 (0.94–1.45)                     1.07 (0.80–1.43)
  Others (Ewe, Gruma etc.)  1.00 (0.83–1.19)                     1.09 (0.90–1.31)
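The odds ratios in Table 3 come from logistic regression, but for a single binary exposure a crude odds ratio and its Wald 95% confidence interval can be computed directly from a 2×2 table. The sketch below does this for rural versus urban residence, with cell counts reconstructed approximately from Table 2 (an assumption); it reproduces the reported crude OR of 0.66, although the unweighted Wald interval is narrower than the published design-adjusted interval (0.55–0.79).

```python
import math

# Crude odds ratio with a Wald 95% CI computed on the log-odds scale.
# Counts reconstructed (approximately) from Table 2:
# rural: 2734 insured, 2319 non-insured; urban: 2453 insured, 1368 non-insured.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed with/without the outcome; c/d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(2734, 2319, 2453, 1368)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 0.66
```

Because the survey design (stratification and clustering) inflates standard errors, the design-adjusted intervals reported in Table 3 are wider than this unweighted sketch suggests; the adjusted ORs additionally control for the other covariates in the model.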
Discussion
The findings showed that more than half (58.4%) of the participants were covered by health insurance. Caregivers and parents of insured children were thus protected against out-of-pocket payments, a risk factor for catastrophic healthcare expenditure and poverty [29]. A similar study revealed that 57% of Ghanaian children below 18 years were covered by health insurance [21], and a household survey across three districts in Ghana reported that 55.9% of participants were insured. A nationally representative survey likewise found that 66% of women and 52.6% of men aged 15–49 years were insured [30]. Further, our finding is similar to a study in Shanghai, China, where 56.5% of children under eight years were covered by health insurance [31].
However, health insurance coverage in this study was higher than coverage in other African countries. For instance, an analysis of data from four African countries revealed that Ghana had the highest health insurance coverage, at 62.4% for females and 49.1% for males, followed by Kenya (18.2% for females and 21.9% for males), Tanzania (9.1% for females and 9.5% for males) and Nigeria (1.1% for females and 3.1% for males) [32]. The difference in findings may be attributed to contextual factors and health insurance policies. For instance, Ghana's National Health Insurance Scheme (NHIS) covers more than 95% of the disease conditions in the country, including medications, medical investigations, outpatient and in-patient services. Also, women who register with the NHIS have access to free maternal health services, such as antenatal, delivery and postnatal services. Children under 18 years, indigents, the elderly, persons with disability or mental disorders, Social Security and National Insurance Trust (SSNIT) contributors and pensioners are exempted from paying premiums but must renew their membership every year [33]. In addition, we found that four in ten children were not covered by health insurance. Therefore, parents/caregivers of uninsured children would have to pay out-of-pocket when accessing child health care services. Out-of-pocket payment has the potential of putting caregivers at risk of catastrophic healthcare expenditure and poverty. Also, parents of non-insured children are more likely to postpone or delay seeking health care, hence putting the child's life at risk of poor health outcomes [34]. This finding is similar to findings from previous studies in Ghana and elsewhere. For instance, a study revealed that 43.2% of Ghanaian children under eighteen years were uninsured [21]. Another study among children under seven years in Shanghai, China, reported that 43.5% of the participants were uninsured [31].
This finding may be explained by individual, financial, country-specific and health system-related factors [22]. In Ghana, children under five years are exempted from paying NHIS premiums. However, they must pay membership card processing fees and renewal fees every year [33]; hence, these fees may pose a barrier to their caregivers. Furthermore, recent evidence shows that persons insured with Ghana's NHIS still pay out-of-pocket for services in accredited health facilities [35]. Reasons for non-registration or non-renewal of membership with the NHIS include financial constraints, lack of confidence in the scheme, dissatisfaction with services, shortage of insured medications, long waiting time, payment of illegal charges and non-use of health services [36]. Going forward, the Ministry of Health, National Health Insurance Authority, Ministry of Gender, Children and Social Protection and health providers would have to collaborate to improve health insurance coverage for Ghanaian children under five years. However, there is a need for empirical evidence on the correlates and reasons for non-enrolment among children under five years. Hence, we recommend that future studies should explore these grey areas. In addition, the findings revealed that health insurance enrolment was influenced by child's age, mother's educational status, wealth index, region, ethnicity and place of residence. Children whose mothers were less educated had a lower likelihood of being insured. A similar study found that well-educated mothers were more likely to enroll their children in health insurance [37]. Another study in Shanghai, China, showed that children of women with low education were less likely to be covered by health insurance [31, 32]. A probable explanation is that less educated mothers may lack adequate understanding of the health insurance process and the benefits package due to their inability to access information [5].
Evidence shows that Ghanaian women who had access to information were more likely to be insured [38]. Women with higher educational status are more empowered to make health-seeking decisions. Also, children from wealthy families were more likely to be covered by health insurance. Previous studies have supported this finding [39]. Another study in Shanghai, China, revealed that children from the lowest income households had lower odds of being insured [31]. A possible explanation is that wealthy parents/caregivers have larger disposable incomes. Hence, they can afford health insurance premiums, NHIS card processing charges, and annual renewal fees. It implies that the purpose of the NHIS as a pro-poor social intervention has not been well achieved. Also, children in rural areas had lower chances of being insured. A study in Ghana reported that women living in remote settings had lower odds of insurance coverage than those staying in urban areas [37]. A conceivable explanation for this finding is that parents of children residing in urban areas may have easy access to health insurance offices. In Ghana, few NHIS offices are sited in rural areas, leading to delays in registering and printing insurance cards. There are also fewer NHIS personnel and logistics in rural areas than in urban areas [36]. These factors may explain the disparities in insurance coverage across the place of residence. It was revealed that children from the nine other administrative regions were more likely to be insured than those from the Greater Accra Region, which hosts Ghana's capital city. A similar study reported that children in the Greater Accra region were more likely to be non-insured compared with the other regions in Ghana [21]. The Greater Accra region has the lowest health insurance coverage in Ghana [40]. Moreover, we found that children from regions with a high incidence of poverty were more likely to be insured.
This finding was expected because the poor perceive health insurance as a form of social security that protects them against catastrophic health care expenditure during health emergencies [41]. This finding may explain why the poorest region in Ghana (Upper East region) has the highest NHIS coverage [40]. Additionally, health insurance enrolment was associated with the child's age: children aged twelve months or older were more likely to be insured. A similar study in Shanghai, China, reported that older children were less likely to be uninsured [31]. The Free Maternal Health Policy may explain this finding. In Ghana, pregnant women who register with the NHIS have free maternal health care services up to three months postpartum [42]. Our findings imply that vulnerable children did not have health insurance. Consequently, their caregivers/parents may be predisposed to catastrophic health care expenditure. Besides, evidence shows that uninsured children are predisposed to poor health outcomes [43]. Therefore, in the quest to increase health insurance coverage, future interventions should prioritize children from low socio-economic backgrounds. Strengths and limitations of the study: One major strength of this study is that we analysed nationally representative data, so the findings can be generalized to the population. This study is one of the few studies in Ghana investigating socio-demographic determinants of child health insurance. However, this study is not devoid of limitations. Cross-sectional studies cannot establish causal relationships, so the findings should be interpreted with caution. In addition, health insurance status was self-reported by caregivers/parents of the children and may therefore be subject to social desirability or recall bias. Conclusions: This study demonstrated that more than half of the children were covered by health insurance. Health insurance enrolment was associated with wealth index, mother's educational status, child's age, type of residence, geographical region and ethnicity. Policymakers can leverage these findings to help improve health insurance coverage for children in Ghana. Interventions that seek to improve health insurance coverage must prioritize children from poor socio-economic backgrounds. Future studies should employ qualitative designs to expose the many intricate views of caregivers regarding health insurance enrolment among children under five years. Also, further studies may explore factors associated with non-enrolment among children under five years.
Background: Health insurance enrolment provides financial access to health care and reduces the risk of catastrophic healthcare expenditure. Therefore, the objective of this study was to assess the prevalence and correlates of health insurance enrolment among Ghanaian children under five years. Methods: We analysed secondary data from the 2017/18 Ghana Multiple Indicator Cluster Survey. The survey drew a nationally representative weighted sample comprising 8,874 children under five years and employed Computer-Assisted Personal Interviewing to collect data from the participants. In addition, Chi-square and logistic regression analyses were conducted to determine factors associated with health insurance enrolment. Results: The results showed that a majority (58.4%) of the participants were insured. Health insurance enrolment was associated with child age, maternal educational status, wealth index, place of residence and geographical region (p < 0.05). Children born to mothers with higher educational status (AOR = 2.14; 95% CI: 1.39-3.30) and mothers in the richest wealth quintile (AOR = 2.82; 95% CI: 2.00-3.98) had a higher likelihood of being insured compared with their counterparts. Also, children residing in rural areas (AOR = 0.75; 95% CI: 0.61-0.91) were less likely to be insured than children in urban areas. Conclusions: This study revealed that more than half of the participants were insured. Health insurance enrolment was influenced by the child's age, mother's educational status, wealth index, residence, ethnicity and geographical region. Therefore, interventions aimed at increasing health insurance coverage among children should focus on children from low socio-economic backgrounds. Stakeholders can leverage these findings to help improve health insurance coverage among Ghanaian children under five years.
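The crude odds ratios and 95% confidence intervals reported for this analysis can be illustrated with standard 2x2-table arithmetic. The sketch below is a hedged example: the counts are hypothetical (they are not the MICS data), and the study's adjusted ORs additionally require a survey-weighted multivariable logistic regression.

```python
import math

def crude_or_with_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table.

    a: exposed & insured    b: exposed & uninsured
    c: unexposed & insured  d: unexposed & uninsured
    """
    odds_ratio = (a * d) / (b * c)
    # Standard error of ln(OR) for the Wald interval
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts: rural (exposed) vs. urban (unexposed) residence
odds_ratio, lower, upper = crude_or_with_ci(900, 700, 1500, 800)
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

An OR below 1 with a confidence interval excluding 1, as reported for rural residence (0.75; 0.61-0.91), indicates lower odds of being insured relative to the reference group.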
Background: Globally, protecting and improving the well-being of children under five years remains a public health priority. For instance, Target 3.2 of the Sustainable Development Goals seeks to end preventable deaths of newborns and children under five years of age, with all countries aiming to reduce neonatal mortality and under five mortality by 2030 [1]. Evidence shows that under five mortality has declined over the last three decades worldwide. Between 1990 and 2019, the under five mortality rate fell by 59%, from 93 deaths per 1000 live births to 38 [2]. Notwithstanding, the burden of under five mortality remains high. For example, in 2019 alone, about 14,000 children died every day before their fifth birthday worldwide. More than half (53%) of under five deaths in 2019 occurred in sub-Saharan Africa, with Nigeria and Somalia recording the highest under five mortality rates (117 deaths per 1000 live births) [2]. Most under five deaths are preventable with timely access to quality healthcare services and child health interventions [3]. Therefore, an integral recommendation, among many others, to accelerate progress towards reducing the mortality rate among children under five years is to ensure health equity through universal health coverage, so that all children can access essential health services without undue financial hardship [3]. Evidence shows that health insurance enrolment is linked to increased access and utilisation of healthcare services [4] and better health outcomes, particularly for maternal and child health [5-10]. In Ghana, the under five mortality rate was estimated to be 46 deaths per 1000 live births in 2019 [11], higher than the global rate of 38 deaths per 1000 live births. Financial constraints pose a significant barrier to accessing healthcare services, including child health services [12].
Therefore, under the National Health Insurance Act 650, the Government of Ghana established the National Health Insurance Scheme (NHIS) in 2003 to eliminate financial barriers to accessing health care [13]. Upon establishment, the scheme operated semi-autonomous Mutual Health Insurance Schemes in districts across the country. In 2012, a new law, Act 852, replaced Act 650. Under Act 852, all District Mutual Health Insurance Schemes were consolidated under a National Health Insurance Authority (NHIA) to ensure effective management and efficient service delivery [14]. The primary sources of financing the NHIS comprise a National Health Insurance Levy on selected goods and services, a 2.5% contribution from the National Social Security Scheme by formal sector workers, individual premiums mainly from informal sector workers, and miscellaneous funds from investment returns, grants, donations and gifts from international donor partners and agencies [14]. Since its inception, Ghana's NHIS has been considered one of Africa's model health insurance systems. The benefit package of the NHIS covers the cost of treatment for more than 95% of the disease conditions in Ghana. The services covered include, but are not limited to, outpatient care, diagnostic services, in-patient care, pre-approved medications, maternal care, ear, nose and throat services, dental services and all emergency services. Excluded from the NHIS benefit package are procedures such as dialysis for chronic renal failure, treatments for cancer (other than cervical and breast cancers), organ transplants and cosmetic surgery. Child immunization services, family planning and treatment of conditions such as HIV/AIDS and tuberculosis are also not covered. However, these services are provided under alternative government programs. Apart from pregnant women and children under five years, new members serve a waiting period of three months after registration before accessing health care under the scheme.
Further, members of the scheme can access healthcare from health service providers accredited by the NHIA. These include public, quasi-government and faith-based facilities, some (but not all) private health facilities, and licensed pharmacies [15]. Despite the benefits of increased access to healthcare offered by NHIS membership and mandatory enrolment for all residents in Ghana, universal population coverage on the scheme has proved challenging. As of 2021, the NHIS had an active membership coverage of over 15 million people, equating to about 53% of Ghana's estimated population [16]. However, a recent study examining NHIS enrolment within the last decade showed that, on average, only about 40% of all Ghanaians had ever registered with the Scheme [17]. That notwithstanding, utilization trends for in-patient and outpatient care at NHIS-accredited health facilities continue to increase across the country [18, 19]. As part of efforts to increase coverage, a premium exemption policy for vulnerable populations, such as children under 18, was implemented. Thus, persons below 18 years are exempted from paying annual premiums [20] but must pay administrative charges, including the NHIS card processing fee [21]. Furthermore, in 2010, the National Health Insurance Authority decoupled children under five years from their parents' membership. Therefore, children under five years can be active members of the NHIS even if their parents are not active members [22]. In addition, private health insurance schemes are emerging rapidly in Ghana, with premiums based on the calculated risk of subscribers. The Ghana NHIS has been extensively investigated. Prior studies among the adult population showed that health insurance enrolment was associated with educational status, wealth, age, marital status, gender, type of occupation and place of residence [23-25].
In addition, an analysis of the 2011 Ghana Multiple Indicator Cluster Survey (MICS) revealed that a majority (73%) of children under five years were non-insured [26]. However, there is a paucity of literature on determinants of health insurance enrolment among children under five years. Therefore, this study aimed to determine factors associated with NHIS enrolment among children under five in Ghana using nationally representative survey data. Generating empirical evidence about factors influencing enrolment is essential to inform policy to help Ghana achieve Universal Health Coverage and the Sustainable Development Goals. Conclusions: This study demonstrated that more than half of the children were covered by health insurance. Health insurance enrolment was associated with wealth index, mother's educational status, child's age, type of residence, geographical region and ethnicity. Policymakers can leverage these findings to help improve health insurance coverage for children in Ghana. Interventions that seek to improve health insurance coverage must prioritize children from poor socio-economic backgrounds. Future studies should employ qualitative designs to expose the many intricate views of caregivers regarding health insurance enrolment among children under five years. Also, further studies may explore factors associated with non-enrolment among children under five years.
Background: Health insurance enrolment provides financial access to health care and reduces the risk of catastrophic healthcare expenditure. Therefore, the objective of this study was to assess the prevalence and correlates of health insurance enrolment among Ghanaian children under five years. Methods: We analysed secondary data from the 2017/18 Ghana Multiple Indicator Cluster Survey. The survey drew a nationally representative weighted sample comprising 8,874 children under five years and employed Computer-Assisted Personal Interviewing to collect data from the participants. In addition, Chi-square and logistic regression analyses were conducted to determine factors associated with health insurance enrolment. Results: The results showed that a majority (58.4%) of the participants were insured. Health insurance enrolment was associated with child age, maternal educational status, wealth index, place of residence and geographical region (p < 0.05). Children born to mothers with higher educational status (AOR = 2.14; 95% CI: 1.39-3.30) and mothers in the richest wealth quintile (AOR = 2.82; 95% CI: 2.00-3.98) had a higher likelihood of being insured compared with their counterparts. Also, children residing in rural areas (AOR = 0.75; 95% CI: 0.61-0.91) were less likely to be insured than children in urban areas. Conclusions: This study revealed that more than half of the participants were insured. Health insurance enrolment was influenced by the child's age, mother's educational status, wealth index, residence, ethnicity and geographical region. Therefore, interventions aimed at increasing health insurance coverage among children should focus on children from low socio-economic backgrounds. Stakeholders can leverage these findings to help improve health insurance coverage among Ghanaian children under five years.
9,765
335
9
[ "health", "children", "insurance", "ref", "health insurance", "insured", "95", "upper", "ghana", "18" ]
[ "test", "test" ]
null
[CONTENT] Health insurance | Children under five years | Ghana | Enrolment | Factors [SUMMARY]
null
[CONTENT] Health insurance | Children under five years | Ghana | Enrolment | Factors [SUMMARY]
[CONTENT] Health insurance | Children under five years | Ghana | Enrolment | Factors [SUMMARY]
[CONTENT] Health insurance | Children under five years | Ghana | Enrolment | Factors [SUMMARY]
[CONTENT] Health insurance | Children under five years | Ghana | Enrolment | Factors [SUMMARY]
[CONTENT] Child | Child, Preschool | Educational Status | Female | Ghana | Humans | Insurance, Health | National Health Programs | Socioeconomic Factors [SUMMARY]
null
[CONTENT] Child | Child, Preschool | Educational Status | Female | Ghana | Humans | Insurance, Health | National Health Programs | Socioeconomic Factors [SUMMARY]
[CONTENT] Child | Child, Preschool | Educational Status | Female | Ghana | Humans | Insurance, Health | National Health Programs | Socioeconomic Factors [SUMMARY]
[CONTENT] Child | Child, Preschool | Educational Status | Female | Ghana | Humans | Insurance, Health | National Health Programs | Socioeconomic Factors [SUMMARY]
[CONTENT] Child | Child, Preschool | Educational Status | Female | Ghana | Humans | Insurance, Health | National Health Programs | Socioeconomic Factors [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] health | children | insurance | ref | health insurance | insured | 95 | upper | ghana | 18 [SUMMARY]
null
[CONTENT] health | children | insurance | ref | health insurance | insured | 95 | upper | ghana | 18 [SUMMARY]
[CONTENT] health | children | insurance | ref | health insurance | insured | 95 | upper | ghana | 18 [SUMMARY]
[CONTENT] health | children | insurance | ref | health insurance | insured | 95 | upper | ghana | 18 [SUMMARY]
[CONTENT] health | children | insurance | ref | health insurance | insured | 95 | upper | ghana | 18 [SUMMARY]
[CONTENT] services | nhis | health | mortality | deaths | care | ghana | insurance | health insurance | children [SUMMARY]
null
[CONTENT] ref | 95 ci | ci | 95 | 02 | aor | 42 | upper | insured | high school [SUMMARY]
[CONTENT] insurance | children | health | health insurance | insurance coverage | improve health | improve health insurance coverage | health insurance coverage | improve | improve health insurance [SUMMARY]
[CONTENT] health | children | insurance | health insurance | ref | study | insured | ghana | nhis | services [SUMMARY]
[CONTENT] health | children | insurance | health insurance | ref | study | insured | ghana | nhis | services [SUMMARY]
[CONTENT] ||| Ghanaian | under five years [SUMMARY]
null
[CONTENT] 58.4% ||| 0.05 ||| 2.14 | 95% | CI | 1.39-3.30 | 2.82 | 95% | CI | 2.00-3.98 ||| 0.75 | 95% | CI | 0.61-0.91 [SUMMARY]
[CONTENT] more than half ||| ||| ||| Ghanaian | under five years [SUMMARY]
[CONTENT] ||| Ghanaian | under five years ||| 2017/18 | Ghana | Cluster Survey ||| 8,874 | under five years | Computer Assisted Personal Interviewing ||| Chi-square | Logistic Regression ||| ||| 58.4% ||| 0.05 ||| 2.14 | 95% | CI | 1.39-3.30 | 2.82 | 95% | CI | 2.00-3.98 ||| 0.75 | 95% | CI | 0.61-0.91 ||| more than half ||| ||| ||| Ghanaian | under five years [SUMMARY]
[CONTENT] ||| Ghanaian | under five years ||| 2017/18 | Ghana | Cluster Survey ||| 8,874 | under five years | Computer Assisted Personal Interviewing ||| Chi-square | Logistic Regression ||| ||| 58.4% ||| 0.05 ||| 2.14 | 95% | CI | 1.39-3.30 | 2.82 | 95% | CI | 2.00-3.98 ||| 0.75 | 95% | CI | 0.61-0.91 ||| more than half ||| ||| ||| Ghanaian | under five years [SUMMARY]
A comprehensive evaluation of skin aging-related circular RNA expression profiles.
33534927
Circular RNAs (circRNAs) have been shown to play important regulatory roles in a range of both pathological and physiological contexts, but their functions in the context of skin aging remain to be clarified. In the present study, we therefore profiled circRNA expression in four pairs of aged and non-aged skin samples to identify differentially expressed circRNAs that may offer clinical value as biomarkers of the skin aging process.
BACKGROUND
We utilized RNA-seq to profile the levels of circRNAs in eyelid tissue samples, with qRT-PCR being used to confirm these RNA-seq results, and with bioinformatics approaches being used to predict downstream target miRNAs for differentially expressed circRNAs.
METHODS
In total, we identified 571 differentially expressed circRNAs in aged skin samples compared to young skin samples, of which 348 were upregulated and 223 were downregulated. The top 10 upregulated circRNAs in aged skin samples were hsa_circ_0123543, hsa_circ_0057742, hsa_circ_0088179, hsa_circ_0132428, hsa_circ_0094423, hsa_circ_0008166, hsa_circ_0138184, hsa_circ_0135743, hsa_circ_0114119, and hsa_circ_0131421. The top 10 downregulated circRNAs were hsa_circ_0101479, hsa_circ_0003650, hsa_circ_0004249, hsa_circ_0030345, hsa_circ_0047367, hsa_circ_0055629, hsa_circ_0062955, hsa_circ_0005305, hsa_circ_0001627, and hsa_circ_0008531. Functional enrichment analyses revealed the potential functionality of these differentially expressed circRNAs. The top 3 enriched gene ontology (GO) terms of the host genes of differentially expressed circRNAs were regulation of GTPase activity, positive regulation of GTPase activity and autophagy. The top 3 enriched KEGG pathways were lysine degradation, fatty acid degradation and inositol phosphate metabolism. The top 3 enriched Reactome pathways were RAB GEFs exchange GTP for GDP on RABs, Regulation of TP53 Degradation, and Regulation of TP53 Expression and Degradation. Six circRNAs were selected for qRT-PCR verification, of which 5 gave results consistent with the sequencing results. Moreover, targeted miRNAs, such as hsa-miR-588, hsa-miR-612, hsa-miR-4487, hsa-miR-149-5p and hsa-miR-494-5p, were predicted for circRNA-miRNA interaction networks.
RESULTS
Overall, these results offer new insights into circRNA expression profiles, potentially highlighting future avenues for research regarding the roles of these circRNAs in the context of skin aging.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Down-Regulation", "Female", "Gene Expression Profiling", "Gene Expression Regulation", "Gene Ontology", "Gene Regulatory Networks", "Humans", "MicroRNAs", "Middle Aged", "RNA, Circular", "Reproducibility of Results", "Skin Aging", "Up-Regulation", "Young Adult" ]
8059755
INTRODUCTION
Aging is a complex process wherein cells and organisms undergo progressive changes at the molecular, tissue, and organ levels as a result of either normal physiology or pathological conditions [1]. The skin is the largest organ in the body, serving as a barrier against external threats [2]. Skin aging results in continuous alterations in the functionality and appearance of the skin [3] and can occur as either a result of internal factors or due to extrinsic photoaging [4, 5]. Photoaging is believed to account for approximately 80% of skin aging [6]. Even so, the mechanisms that govern the skin aging process remain poorly clarified, and few aging-related biomarkers have been identified to date. Transcriptomic analyses offer a powerful approach to identifying molecular biomarkers of skin aging [7, 8]. Recent breakthroughs in the development of high-throughput transcriptome sequencing technologies have led to the discovery of a diverse array of biomarkers of different human diseases, with non-coding RNAs being commonly studied in this context [9, 10]. Research conducted in the 1970s identified novel non-coding circular RNAs (circRNAs) in the context of plant viral infections [11], although at the time these RNAs were believed to lack functional relevance and were instead thought to be a result of unusual splicing reactions [12]. More recent work, however, suggests that circRNAs are key regulators of gene expression at the transcriptional and post-transcriptional levels [13, 14]. Indeed, some circRNAs are believed to function as competing endogenous RNAs (ceRNAs) [15] or as de facto molecular sponges capable of sequestering and altering the functionality or expression of specific miRNAs or proteins [16, 17]. While they generally lack coding potential, there is also some evidence that certain circRNAs may be translated in some contexts, thereby further modulating biological functionality [18-24]. Indeed, circRNA dysregulation is a hallmark of conditions such as cancer, neurological disease [25], and cardiovascular diseases [26, 27]. The functional importance of circRNAs in the context of skin aging, however, remains to be clarified. Herein, we evaluated circRNA expression profiles in samples of aged and non-aged skin tissue via high-throughput sequencing to identify differentially expressed circRNAs (DECs). Appropriate bioinformatics analyses were then used to further predict the functional roles of these DECs in the aging process.
null
null
RESULTS
Detection of circRNA dysregulation in the context of skin aging: In order to identify circRNAs associated with the skin aging process, we began by employing an RNA-seq analysis approach to analyze four eyelid skin samples from young adults and four eyelid skin samples from aged adults, with young samples being treated as controls. In total, we identified 14915 circRNAs, of which 571 were found to be differentially expressed (fold change >1.5 and p < 0.05) between these groups (Figure 1A). Of these DECs, 348 and 223 were found to be up and downregulated, respectively (Figure 1B). These DECs were distributed among all chromosomes, shown as circos plots (Figure 1C). The details regarding the top 10 upregulated and 10 downregulated circRNAs are presented in Table 2.

Figure 1. Profiling of skin aging-related circRNAs. A, Those circRNAs that were up or downregulated (red and green, respectively) in aged skin tissue samples relative to young tissue samples were arranged in a heat map, with rows corresponding to individual circRNAs and columns corresponding to individual samples. B, Differentially regulated circRNAs were arranged in a volcano plot, with blue dots corresponding to a lack of statistical significance, whereas red and green correspond to up and downregulated circRNAs, respectively (fold change >1.5 and p < 0.05). C, Locations of DECs on human chromosomes are represented by circos plots, with chromosomes being represented by the outermost circle, whereas circRNAs are indicated in the middle circle, and DECs are shown in the innermost circle. Up and downregulated circRNAs are represented by red and blue lines, respectively.

Table 2. Top 10 upregulated and 10 downregulated circRNAs in aging skin samples, ranked by fold change.

Functional enrichment analyses: We next conducted GO, KEGG, and Reactome analyses to explore the potential functional roles of identified DECs in the skin aging process. GO analyses suggested that these circRNAs are most closely associated with specific cellular components, molecular functions, and biological processes (Figure 2). Top enriched KEGG pathways associated with these DECs included lysine degradation, inositol phosphate metabolism, and purine metabolism (Figure 3A). Top Reactome pathways for these DECs included SUMO E3 ligase SUMOylation of target proteins, Rab regulation of trafficking, and the Rho GTPase cycle (Figure 3B).

Figure 2. GO analysis of DEC host genes associated with aging.

Figure 3. KEGG and Reactome pathway analyses in aged and young skin sample groups. A, The top 15 enriched KEGG pathways. B, The top 15 enriched Reactome pathways.

Confirmation of differential circRNA expression: In order to confirm the validity of our RNA-seq results, three random upregulated circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0000205) and three random downregulated circRNAs (hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) were selected for qRT-PCR analysis. For these analyses, we assessed circRNA expression levels in 40 total eyelid skin tissue samples (20 aged, 20 young) using appropriate divergent primers. For full details regarding the six circRNAs selected for further PCR validation, see Table 3. We found that all six DECs exhibited identical trends upon qRT-PCR analysis to those observed in our RNA-seq results (Figure 4). Of these, five circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) exhibited significant differential expression between the control and aging skin groups (p < 0.05). We additionally utilized Sanger sequencing to verify the qRT-PCR products, confirming them to be consistent with the sequences in the circBase (http://www.circbase.org/) database (Supplemental Figure S1).

Figure 4. qRT-PCR validation of 6 differentially expressed circRNAs in 20 young and 20 aged skin samples.
Of these, five circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) exhibited significant differential expression between control and aging skin group (p < 0.05). We additionally utilized Sanger sequencing to verify the qRT‐PCR products, confirming them to be consistent with the sequences in the circbase (http://www.circbase.org/) database (Supplemental Figure S1). qRT‐PCR validation of 6 differentially expressed circRNAs in 20 young and 20 aged skin samples circRNA‐miRNA interaction network generation To further evaluate the functional roles of these identified circRNAs, we assessed the potential miRNA binding of six circRNAs using miRnada (v 3.3a), with the resultant interaction network being visualized using the Cytoscape (V3.6.0) tool (Figure 5). In total, this network incorporated 6 circRNAs and 151 target miRNAs. Among these 151 miRNAs, we found that 6 miRNAs, hsa‐miR‐3064‐5p, hsa‐miR‐6762‐3p, hsa‐miR‐3194‐3p, hsa‐miR‐6731‐5p, hsa‐miR‐30c‐1‐3p and hsa‐miR‐6760‐5p not only bind to 1 circRNA, but they may be also regulated by multiple circRNAs at the same time. Predicted circRNA‐miRNA interaction networks. Yellow and red rectangles correspond to circRNAs and target miRNAs, respectively To further evaluate the functional roles of these identified circRNAs, we assessed the potential miRNA binding of six circRNAs using miRnada (v 3.3a), with the resultant interaction network being visualized using the Cytoscape (V3.6.0) tool (Figure 5). In total, this network incorporated 6 circRNAs and 151 target miRNAs. Among these 151 miRNAs, we found that 6 miRNAs, hsa‐miR‐3064‐5p, hsa‐miR‐6762‐3p, hsa‐miR‐3194‐3p, hsa‐miR‐6731‐5p, hsa‐miR‐30c‐1‐3p and hsa‐miR‐6760‐5p not only bind to 1 circRNA, but they may be also regulated by multiple circRNAs at the same time. Predicted circRNA‐miRNA interaction networks. Yellow and red rectangles correspond to circRNAs and target miRNAs, respectively
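The differential-expression screen described above reduces to a threshold filter on per-circRNA statistics (fold change >1.5, p < 0.05). The sketch below is illustrative only: the study itself used edgeR on RNA‐seq counts, and the records and IDs here are hypothetical.

```python
# Hedged sketch: classify circRNAs into up-/downregulated DEC sets using the
# thresholds reported in the study (fold change > 1.5, p < 0.05).
# Example records are hypothetical; the study used edgeR on real RNA-seq counts.

def classify_decs(records, fc_cutoff=1.5, p_cutoff=0.05):
    """records: iterable of (circ_id, fold_change_aged_vs_young, p_value).

    fold_change is expressed as aged/young; values < 1 indicate downregulation.
    Returns (upregulated_ids, downregulated_ids).
    """
    up, down = [], []
    for circ_id, fc, p in records:
        if p >= p_cutoff:
            continue  # not statistically significant
        if fc > fc_cutoff:
            up.append(circ_id)
        elif fc < 1 / fc_cutoff:  # symmetric cutoff on the ratio scale
            down.append(circ_id)
    return up, down


example = [
    ("hsa_circ_A", 2.3, 0.01),   # significant, upregulated
    ("hsa_circ_B", 0.4, 0.02),   # significant, downregulated
    ("hsa_circ_C", 1.2, 0.001),  # fold change too small
    ("hsa_circ_D", 3.0, 0.20),   # not significant
]
up, down = classify_decs(example)
print(up, down)  # ['hsa_circ_A'] ['hsa_circ_B']
```

Applied to the study's 14915 detected circRNAs, such a filter would partition the 571 significant hits into the 348 up- and 223 downregulated sets reported above.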
null
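The qRT‐PCR validation relied on relative quantification against GAPDH via the 2−ΔΔCT method described in the Methods. A minimal worked sketch, using hypothetical Ct values:

```python
# Hedged sketch of relative quantification by the 2^(-DDCt) method, as used for
# the qRT-PCR validation with GAPDH as the reference gene.
# The Ct values in the example are hypothetical.

def fold_change_ddct(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Return relative expression of the target in the test group vs control."""
    dct_test = ct_target_test - ct_ref_test  # delta-Ct, test group (aged skin)
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl  # delta-Ct, control group (young skin)
    ddct = dct_test - dct_ctrl               # delta-delta-Ct
    return 2 ** (-ddct)


# Target crosses threshold 2 cycles earlier (relative to GAPDH) in aged samples:
fc = fold_change_ddct(24.0, 18.0, 26.0, 18.0)
print(fc)  # 4.0 -> ~4-fold higher expression in the aged group
```

Each one-cycle decrease in ΔΔCt corresponds to a doubling of relative expression, which is why the two-cycle shift here yields a four-fold change.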
null
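The network-level observation above, that some miRNAs are shared by more than one circRNA, can be recovered from any predicted circRNA→miRNA edge list such as the one produced by a miRanda-style scan. A hedged sketch with hypothetical edges and IDs:

```python
# Hedged sketch: from a circRNA -> miRNA edge list, find miRNAs with two or
# more circRNA partners -- the property highlighted for six miRNAs in the
# study's interaction network. The edges below are hypothetical.
from collections import defaultdict


def shared_mirnas(edges):
    """edges: iterable of (circRNA, miRNA) pairs.

    Returns {miRNA: sorted list of circRNA partners} restricted to miRNAs
    predicted to bind at least two distinct circRNAs.
    """
    partners = defaultdict(set)
    for circ, mir in edges:
        partners[mir].add(circ)
    return {mir: sorted(circs) for mir, circs in partners.items() if len(circs) >= 2}


edges = [
    ("hsa_circ_0137613", "hsa-miR-3064-5p"),
    ("hsa_circ_0077605", "hsa-miR-3064-5p"),
    ("hsa_circ_0077605", "hsa-miR-0000-3p"),  # hypothetical miRNA ID
]
print(shared_mirnas(edges))
# {'hsa-miR-3064-5p': ['hsa_circ_0077605', 'hsa_circ_0137613']}
```

In Cytoscape terms this is simply a node-degree query on the miRNA side of the bipartite network.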
[ "INTRODUCTION", "Sample collection", "Bioinformatics analysis", "circRNA‐miRNA interaction network construction", "Quantitative real‐time PCR", "Statistical analysis", "Detection of circRNA dysregulation in the context of skin aging", "Functional enrichment analyses", "Confirmation of differential circRNA expression", "circRNA‐miRNA interaction network generation" ]
[ "Aging is a complex process wherein cells and organisms undergo progressive changes at the molecular, tissue, and organ levels as a result of either normal physiology or pathological conditions.\n1\n The skin is the largest organ in the body, serving as a barrier against external threats.\n2\n Skin aging results in continuous alterations in the functionality and appearance of the skin \n3\n and can occur as either a result of internal factors or due to extrinsic photoaging.\n4\n, \n5\n Photoaging is believed to account for approximately 80% of skin aging.\n6\n Even so, the mechanisms that govern the skin aging process remain poorly clarified, and few aging‐related biomarkers have been identified to date. Transcriptomic analyses offer a powerful approach to identifying molecular biomarkers of skin aging.\n7\n, \n8\n Recent breakthroughs in the development of high‐throughput transcriptome sequencing technologies have led to the discovery of a diverse array of biomarkers of different human diseases, with non‐coding RNAs being commonly studied in this context.\n9\n, \n10\n\n\nResearch conducted in the 1970s identified novel non‐coding circular RNAs (circRNAs) in the context of plant viral infections,\n11\n although at the time these RNAs were believed to lack functional relevance and were instead thought to be a result of unusual splicing reactions.\n12\n More recent work, however, suggests that circRNAs are key regulators of gene expression at the transcriptional and post‐transcriptional levels.\n13\n, \n14\n Indeed, some circRNAs are believed to function as competing endogenous RNAs (ceRNAs)\n15\n or as de facto molecular sponges capable of sequestering and altering the functionality or expression of specific miRNAs or proteins.\n16\n, \n17\n While they generally lack coding potential, there is also some evidence that certain circRNAs may be translated in some contexts, thereby further modulating biological functionality.\n18\n, \n19\n, \n20\n, \n21\n, \n22\n, 
\n23\n, \n24\n Indeed, circRNA dysregulation is a hallmark of conditions such as cancer, neurological disease,\n25\n and cardiovascular diseases.\n26\n, \n27\n The functional importance of circRNAs in the context of skin aging, however, remains to be clarified.\nHerein, we evaluated circRNA expression profiles in samples of aged and non‐aged skin tissue via high‐throughput sequencing to identify differentially expressed circRNAs (DECs). Appropriate bioinformatics analyses were then used to further predict the functional roles of these DECs in the aging process.", "The Institutional Review Board of The First Hospital of China Medical University approved the present study, with all participants having provided written informed consent. In total, we collected eyelid tissue samples from 28 females undergoing double eyelid surgery at The First Hospital of China Medical University between 2018 and 2019. Fourteen young patient samples were collected from individuals aged 17–23 years, whereas 14 aged patient samples were from those 55–70 years old. We ultimately used eight samples for RNA‐seq analyses (4 young, 4 aged), while 40 samples were used for downstream qRT‐PCR verification of our results.\nHigh‐throughput sequencing.\nA Hipure Total RNA Mini Kit (Magen) was used to extract RNA from these samples according to the manufacturer's protocol. A Qubit 3.0 Fluorometer (Invitrogen) and an Agilent 2100 Bioanalyzer (Applied Biosystems) were then used to assess RNA concentrations and integrity. Only samples yielding a RIN value of ≥7.0 were used for downstream RNA‐sequencing.\nA total of 1 μg of RNA per sample was used together with a KAPA RNA HyperPrep Kit with RiboErase (HMR) for Illumina® (Kapa Biosystems, Inc.) to eliminate rRNA prior to library preparation. Samples were then treated for 30 minutes at 37℃ with 10U RNase R (Geneseed).\nWe next fragmented the remaining RNA, after which first‐ and second‐strand synthesis reactions were conducted. 
Tails and adapters were then ligated to purified cDNA samples, and amplification of the adapter‐ligated purified DNA was performed. A DNA 1000 chip was then used to evaluate library quality with an Agilent 2100 Bioanalyzer. A qRT‐PCR‐based KAPA Biosystems Library Quantification kit (Kapa Biosystems, Inc.) was used to accurately quantify prepared samples, after which libraries were diluted to a 10 nM concentration and pooled in equimolar amounts. We then conducted 150 bp paired‐end (PE150) sequencing of all samples.", "Initially, reads were mapped to the latest UCSC transcript set with Bowtie2 v2.1.0,\n28\n after which RSEM v1.2.15 was used to estimate gene expression levels.\n29\n Gene expression was normalized via a TMM (trimmed mean of M‐values) approach, with edgeR being used to identify differentially expressed genes. Genes were considered to be differentially expressed if they met the following criteria: p < 0.05 and >1.5 fold change.\nTo evaluate circRNA expression, STAR\n30\n was used to map reads to the genome, after which DCC\n31\n was employed to evaluate circRNA expression levels. DECs were identified using edgeR,\n32\n with resultant figures being generated using appropriate R packages.\nIn order to explore the potential functional relevance of identified DECs, we analyzed DEC target genes via gene ontology (GO),\n33\n Kyoto Encyclopedia of Genes and Genomes (KEGG),\n34\n and Reactome\n35\n functional enrichment analyses.", "To predict interactions between DECs and target miRNAs, we obtained miRNA sequences from the miRBase database with corresponding annotations, after which Miranda 3.3a was used to calculate binding interactions between miRNAs and circRNAs. 
We then generated a visualized version of the resultant DEC‐miRNA interaction network using Cytoscape (v3.7.2; Institute of Systems Biology).", "A Hipure Total RNA Mini Kit (Magen) was used to isolate RNA samples as above, after which a NanoDrop™ One (Thermo Fisher Scientific) instrument was used to assess RNA quality and quantity. In addition, 1.5% agarose gel electrophoresis was used to evaluate RNA integrity and to assess for the presence of any contaminating gDNA, while spectrophotometry at 260–280 nm was employed to assess the purity of these RNA samples. Next, 1 µg of total RNA per sample was used with a PrimeScript RT reagent kit (Takara Bio) to prepare cDNA samples. Primer3 (v. 0.4.0) was used to design all primers in the present study (Table 1), with GAPDH being used for normalization purposes. A 2−ΔΔCT approach was used to confirm differential circRNA expression in analyzed samples, which were assessed in triplicate.\nThe primers used for RT‐qPCR\nF: AGAAGGCTGGGGCTCATTTG\nR: GCAGGAGGCATTGCTGATGAT\nF: GCTGATGTCATTCTCCACAAGG\nR: AGCAGCAGCTGACACAGGAT\nF: CTGGAGCACATGAGCCTGCA\nR: TGAAAGGTCGCTCCCCTGTGT\nF: GTTGCTGACACTAGTCTTATTG\nR: CGAGCTGTTAGTTCTTCGTA\nF: CCTGGGCGCACAGAAAATCC\nR: CACCTCTCGGAGTTTCCTCTG\nF: GCTCATCAAAGACATTTATATGATA\nR: GAAGAAATTGTAGGCTGTTC\nF: ACATCAGTGGAGAACCTCAGT\nR: GACAGTGTTGGTCTTCCATTCA", "Data are means ± standard deviation (SD). GraphPad Prism v7.0 was used for all statistical testing, and data were compared via Student's t‐tests with p < 0.05 as the significance threshold.", "In order to identify circRNAs associated with the skin aging process, we began by employing an RNA‐seq analysis approach to analyze four eyelid skin samples from young adults and four eyelid skin samples from aged adults, with young samples being treated as controls. In total, we identified 14915 circRNAs, of which 571 were found to be differentially expressed (Fold change >1.5 and p < 0.05) between these groups (Figure 1A). 
Of these DECs, 348 and 223 were found to be up and downregulated, respectively (Figure 1B). These DECs were distributed among all chromosomes, shown as a circos plot (Figure 1C). The details regarding the top 10 upregulated and 10 downregulated circRNAs are presented in Table 2.\n\nProfiling of skin aging‐related circRNAs. A, Those circRNAs that were up or downregulated (red and green, respectively) in aged skin tissue samples relative to young tissue samples were arranged in a heat map, with rows corresponding to individual circRNAs and columns corresponding to individual samples. B, Differentially regulated circRNAs were arranged in a Volcano plot, with blue dots corresponding to a lack of statistical significance, whereas red and green correspond to up and downregulated circRNAs, respectively (Fold change >1.5 and p < 0.05). C, Locations of DECs on human chromosomes are represented by circos plots, with chromosomes being represented by the outermost circle, whereas circRNAs are indicated in the middle circle, and DECs are shown in the innermost circle. Up and downregulated circRNAs are represented by red and blue lines, respectively\nTop 10 upregulated and 10 downregulated circRNAs in aging skin samples ranked by fold change", "We next conducted GO, KEGG, and Reactome analyses to explore the potential functional roles of identified DECs in the skin aging process. GO analyses suggested that these circRNAs are most closely associated with specific cellular components, molecular functions, and biological processes (Figure 2). Top enriched KEGG pathways associated with these DECs included lysine degradation, inositol phosphate metabolism, and purine metabolism (Figure 3A). Top Reactome pathways for these DECs included SUMO E3 ligase SUMOylation of target proteins, Rab regulation of trafficking, and the Rho GTPase cycle (Figure 3B).\nGO analysis of DEC host genes associated with aging\nKEGG and Reactome pathway analyses in aged and young skin sample groups. 
A, The top 15 enriched KEGG pathways. B, The top 15 enriched Reactome pathways", "In order to confirm the validity of our RNA‐seq results, three randomly selected upregulated circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0000205) and three randomly selected downregulated circRNAs (hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) were assessed by qRT‐PCR analysis. For these analyses, we assessed circRNA expression levels in 40 total eyelid skin tissue samples (20 aged, 20 young) using appropriate divergent primers. For full details regarding selected circRNAs, see Table 3.\nSix circRNAs were selected to perform further PCR validation\nWe found that all six DECs exhibited the same trends upon qRT‐PCR analysis as observed in our RNA‐seq results (Figure 4). Of these, five circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) exhibited significant differential expression between the control and aging skin groups (p < 0.05). We additionally utilized Sanger sequencing to verify the qRT‐PCR products, confirming them to be consistent with the sequences in the circBase (http://www.circbase.org/) database (Supplemental Figure S1).\nqRT‐PCR validation of 6 differentially expressed circRNAs in 20 young and 20 aged skin samples", "To further evaluate the functional roles of these identified circRNAs, we assessed the potential miRNA binding of six circRNAs using miRanda (v3.3a), with the resultant interaction network being visualized using the Cytoscape (V3.6.0) tool (Figure 5). In total, this network incorporated 6 circRNAs and 151 target miRNAs. Among these 151 miRNAs, we found that six miRNAs (hsa‐miR‐3064‐5p, hsa‐miR‐6762‐3p, hsa‐miR‐3194‐3p, hsa‐miR‐6731‐5p, hsa‐miR‐30c‐1‐3p, and hsa‐miR‐6760‐5p) were each predicted to bind more than one circRNA and may thus be regulated by multiple circRNAs simultaneously.\nPredicted circRNA‐miRNA interaction networks. Yellow and red rectangles correspond to circRNAs and target miRNAs, respectively" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Sample collection", "Bioinformatics analysis", "circRNA‐miRNA interaction network construction", "Quantitative real‐time PCR", "Statistical analysis", "RESULTS", "Detection of circRNA dysregulation in the context of skin aging", "Functional enrichment analyses", "Confirmation of differential circRNA expression", "circRNA‐miRNA interaction network generation", "DISCUSSION", "CONFLICT OF INTEREST", "Supporting information" ]
[ "Aging is a complex process wherein cells and organisms undergo progressive changes at the molecular, tissue, and organ levels as a result of either normal physiology or pathological conditions.\n1\n The skin is the largest organ in the body, serving as a barrier against external threats.\n2\n Skin aging results in continuous alterations in the functionality and appearance of the skin \n3\n and can occur as either a result of internal factors or due to extrinsic photoaging.\n4\n, \n5\n Photoaging is believed to account for approximately 80% of skin aging.\n6\n Even so, the mechanisms that govern the skin aging process remain poorly clarified, and few aging‐related biomarkers have been identified to date. Transcriptomic analyses offer a powerful approach to identifying molecular biomarkers of skin aging.\n7\n, \n8\n Recent breakthroughs in the development of high‐throughput transcriptome sequencing technologies have led to the discovery of a diverse array of biomarkers of different human diseases, with non‐coding RNAs being commonly studied in this context.\n9\n, \n10\n\n\nResearch conducted in the 1970s identified novel non‐coding circular RNAs (circRNAs) in the context of plant viral infections,\n11\n although at the time these RNAs were believed to lack functional relevance and were instead thought to be a result of unusual splicing reactions.\n12\n More recent work, however, suggests that circRNAs are key regulators of gene expression at the transcriptional and post‐transcriptional levels.\n13\n, \n14\n Indeed, some circRNAs are believed to function as competing endogenous RNAs (ceRNAs)\n15\n or as de facto molecular sponges capable of sequestering and altering the functionality or expression of specific miRNAs or proteins.\n16\n, \n17\n While they generally lack coding potential, there is also some evidence that certain circRNAs may be translated in some contexts, thereby further modulating biological functionality.\n18\n, \n19\n, \n20\n, \n21\n, \n22\n, 
\n23\n, \n24\n Indeed, circRNA dysregulation is a hallmark of conditions such as cancer, neurological disease,\n25\n and cardiovascular diseases.\n26\n, \n27\n The functional importance of circRNAs in the context of skin aging, however, remains to be clarified.\nHerein, we evaluated circRNA expression profiles in samples of aged and non‐aged skin tissue via high‐throughput sequencing to identify differentially expressed circRNAs (DECs). Appropriate bioinformatics analyses were then used to further predict the functional roles of these DECs in the aging process.", " Sample collection The Institutional Review Board of The First Hospital of China Medical University approved the present study, with all participants having provided written informed consent. In total, we collected eyelid tissue samples from 28 females undergoing double eyelid surgery at The First Hospital of China Medical University between 2018 and 2019. Fourteen young patient samples were collected from individuals aged 17–23 years, whereas 14 aged patient samples were from those 55–70 years old. We ultimately used eight samples for RNA‐seq analyses (4 young, 4 aged), while 40 samples were used for downstream qRT‐PCR verification of our results.\nHigh‐throughput sequencing.\nA Hipure Total RNA Mini Kit (Magen) was used to extract RNA from these samples according to the manufacturer's protocol. A Qubit 3.0 Fluorometer (Invitrogen) and an Agilent 2100 Bioanalyzer (Applied Biosystems) were then used to assess RNA concentrations and integrity. Only samples yielding a RIN value of ≥7.0 were used for downstream RNA‐sequencing.\nA total of 1 μg of RNA per sample was used together with a KAPA RNA HyperPrep Kit with RiboErase (HMR) for Illumina® (Kapa Biosystems, Inc.) to eliminate rRNA prior to library preparation. Samples were then treated for 30 minutes at 37℃ with 10U RNase R (Geneseed).\nWe next fragmented the remaining RNA, after which first‐ and second‐strand synthesis reactions were conducted. Tails and adapters were then ligated to purified cDNA samples, and amplification of the adapter‐ligated purified DNA was performed. A DNA 1000 chip was then used to evaluate library quality with an Agilent 2100 Bioanalyzer. A qRT‐PCR‐based KAPA Biosystems Library Quantification kit (Kapa Biosystems, Inc.) was used to accurately quantify prepared samples, after which libraries were diluted to a 10 nM concentration and pooled in equimolar amounts. We then conducted 150 bp paired‐end (PE150) sequencing of all samples.\n Bioinformatics analysis Initially, reads were mapped to the latest UCSC transcript set with Bowtie2 v2.1.0,\n28\n after which RSEM v1.2.15 was used to estimate gene expression levels.\n29\n Gene expression was normalized via a TMM (trimmed mean of M‐values) approach, with edgeR being used to identify differentially expressed genes. Genes were considered to be differentially expressed if they met the following criteria: p < 0.05 and >1.5 fold change.\nTo evaluate circRNA expression, STAR\n30\n was used to map reads to the genome, after which DCC\n31\n was employed to evaluate circRNA expression levels. DECs were identified using edgeR,\n32\n with resultant figures being generated using appropriate R packages.\nIn order to explore the potential functional relevance of identified DECs, we analyzed DEC target genes via gene ontology (GO),\n33\n Kyoto Encyclopedia of Genes and Genomes (KEGG),\n34\n and Reactome\n35\n functional enrichment analyses.\n circRNA‐miRNA interaction network construction To predict interactions between DECs and target miRNAs, we obtained miRNA sequences from the miRBase database with corresponding annotations, after which Miranda 3.3a was used to calculate binding interactions between miRNAs and circRNAs. We then generated a visualized version of the resultant DEC‐miRNA interaction network using Cytoscape (v3.7.2; Institute of Systems Biology).\n Quantitative real‐time PCR A Hipure Total RNA Mini Kit (Magen) was used to isolate RNA samples as above, after which a NanoDrop™ One (Thermo Fisher Scientific) instrument was used to assess RNA quality and quantity. In addition, 1.5% agarose gel electrophoresis was used to evaluate RNA integrity and to assess for the presence of any contaminating gDNA, while spectrophotometry at 260–280 nm was employed to assess the purity of these RNA samples. Next, 1 µg of total RNA per sample was used with a PrimeScript RT reagent kit (Takara Bio) to prepare cDNA samples. Primer3 (v. 0.4.0) was used to design all primers in the present study (Table 1), with GAPDH being used for normalization purposes. A 2−ΔΔCT approach was used to confirm differential circRNA expression in analyzed samples, which were assessed in triplicate.\nThe primers used for RT‐qPCR\nF: AGAAGGCTGGGGCTCATTTG\nR: GCAGGAGGCATTGCTGATGAT\nF: GCTGATGTCATTCTCCACAAGG\nR: AGCAGCAGCTGACACAGGAT\nF: CTGGAGCACATGAGCCTGCA\nR: TGAAAGGTCGCTCCCCTGTGT\nF: GTTGCTGACACTAGTCTTATTG\nR: CGAGCTGTTAGTTCTTCGTA\nF: CCTGGGCGCACAGAAAATCC\nR: CACCTCTCGGAGTTTCCTCTG\nF: GCTCATCAAAGACATTTATATGATA\nR: GAAGAAATTGTAGGCTGTTC\nF: ACATCAGTGGAGAACCTCAGT\nR: GACAGTGTTGGTCTTCCATTCA\n Statistical analysis Data are means ± standard deviation (SD). GraphPad Prism v7.0 was used for all statistical testing, and data were compared via Student's t‐tests with p < 0.05 as the significance threshold.", "The Institutional Review Board of The First Hospital of China Medical University approved the present study, with all participants having provided written informed consent. In total, we collected eyelid tissue samples from 28 females undergoing double eyelid surgery at The First Hospital of China Medical University between 2018 and 2019. Fourteen young patient samples were collected from individuals aged 17–23 years, whereas 14 aged patient samples were from those 55–70 years old. We ultimately used eight samples for RNA‐seq analyses (4 young, 4 aged), while 40 samples were used for downstream qRT‐PCR verification of our results.\nHigh‐throughput sequencing.\nA Hipure Total RNA Mini Kit (Magen) was used to extract RNA from these samples according to the manufacturer's protocol. A Qubit 3.0 Fluorometer (Invitrogen) and an Agilent 2100 Bioanalyzer (Applied Biosystems) were then used to assess RNA concentrations and integrity. 
Only samples yielding a RIN value of ≥7.0 were used for downstream RNA‐sequencing.\nA total of 1 μg of RNA per sample was used together with a KAPA RNA HyperPrep Kit with RiboErase (HMR) for Illumina® (Kapa Biosystems, Inc.) to eliminate rRNA prior to library preparation. Samples were then treated for 30 minutes at 37℃ with 10U RNase R (Geneseed).\nWe next fragmented the remaining RNA, after which first‐ and second‐strand synthesis reactions were conducted. Tails and adapters were then ligated to purified cDNA samples, and amplification of the adapter‐ligated purified DNA was performed. A DNA 1000 chip was then used to evaluate library quality with an Agilent 2100 Bioanalyzer. A qRT‐PCR‐based KAPA Biosystems Library Quantification kit (Kapa Biosystems, Inc.) was used to accurately quantify prepared samples, after which libraries were diluted to a 10 nM concentration and pooled in equimolar amounts. We then conducted 150 bp paired‐end (PE150) sequencing of all samples.", "Initially, reads were mapped to the latest UCSC transcript set with Bowtie2 v2.1.0,\n28\n after which RSEM v1.2.15 was used to estimate gene expression levels.\n29\n Gene expression was normalized via a TMM (trimmed mean of M‐values) approach, with edgeR being used to identify differentially expressed genes. Genes were considered to be differentially expressed if they met the following criteria: p < 0.05 and >1.5 fold change.\nTo evaluate circRNA expression, STAR\n30\n was used to map reads to the genome after which DCC\n31\n was employed to evaluate circRNA expression levels. DECs were identified using edgeR,\n32\n with resultant figures being generated using appropriate R packages.\nIn order to explore the potential functional relevance of identified DECs, we analyzed DEC target genes via gene ontology (GO),\n33\n Kyoto Encyclopedia of Genes and Genomes (KEGG),\n34\n and Reactome\n35\n functional enrichment analyses.", "To predict interactions between DECs and target miRNAs. 
We obtained miRNA sequences from the miRBase database with corresponding annotations, after which Miranda 3.3a was used to calculate binding interactions between miRNAs and circRNAs. We then generated a visualized version of the resultant DEC‐miRNA interaction network using Cytoscape (v3.7.2; Institute of Systems Biology).

Quantitative real‐time PCR
A Hipure Total RNA Mini Kit (Magen) was used to isolate RNA samples as above, after which a NanoDrop™ One (Thermo Fisher Scientific) instrument was used to assess RNA quality and quantity. In addition, 1.5% agarose gel electrophoresis was used to evaluate RNA integrity and to assess for the presence of any contaminating gDNA, while spectrophotometry at 260–280 nm was employed to assess the purity of these RNA samples. Next, 1 µg of total RNA per sample was used with a PrimeScript RT reagent kit (Takara Bio) to prepare cDNA samples. Primer3 (v. 0.4.0) was used to design all primers in the present study (Table 1), with GAPDH being used for normalization purposes. A 2^−ΔΔCT approach was used to confirm differential circRNA expression in analyzed samples, which were assessed in triplicate.

Statistical analysis
Data are means ± standard deviation (SD).
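The 2^−ΔΔCT relative-quantification step described above can be sketched as follows (a minimal illustration of the Livak method; the Ct values below are invented, with GAPDH as the reference gene as in the study):

```python
# Minimal sketch of 2^(-ddCt) relative quantification (Livak method).
# Ct values below are invented for illustration only.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Return 2^-ddCt for one sample relative to the control group."""
    d_ct_sample = ct_target_sample - ct_ref_sample  # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2 ** (-dd_ct)

# Aged sample: circRNA Ct 24.0, GAPDH Ct 18.0; young control: 26.0 / 18.0
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0 -> upregulated in aged skin
```

A lower Ct means more template, so a two-cycle drop in ΔCt corresponds to a four-fold increase in relative expression.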
GraphPad Prism v7.0 was used for all statistical testing, and data were compared via Student's t‐tests with p < 0.05 as the significance threshold.

RESULTS

Detection of circRNA dysregulation in the context of skin aging
In order to identify circRNAs associated with the skin aging process, we began by employing an RNA‐seq approach to analyze four eyelid skin samples from young adults and four from aged adults, with the young samples treated as controls. In total, we identified 14,915 circRNAs, of which 571 were differentially expressed (fold change >1.5 and p < 0.05) between these groups (Figure 1A). Of these DECs, 348 and 223 were up- and downregulated, respectively (Figure 1B). These DECs were distributed among all chromosomes, shown as a circos plot (Figure 1C). Details regarding the top 10 upregulated and 10 downregulated circRNAs are presented in Table 2.

Profiling of skin aging‐related circRNAs. A, circRNAs that were up- or downregulated (red and green, respectively) in aged skin tissue samples relative to young tissue samples were arranged in a heat map, with rows corresponding to individual circRNAs and columns corresponding to individual samples. B, Differentially regulated circRNAs were arranged in a volcano plot, with blue dots corresponding to a lack of statistical significance, whereas red and green correspond to up- and downregulated circRNAs, respectively (fold change >1.5 and p < 0.05). C, Locations of DECs on human chromosomes are represented by circos plots, with chromosomes represented by the outermost circle, circRNAs indicated in the middle circle, and DECs shown in the innermost circle.
Up and downregulated circRNAs are represented by red and blue lines, respectively.

Top 10 upregulated and 10 downregulated circRNAs in aging skin samples, ranked by fold change.
Functional enrichment analyses
We next conducted GO, KEGG, and Reactome analyses to explore the potential functional roles of the identified DECs in the skin aging process. GO analyses suggested that these circRNAs are most closely associated with specific cellular components, molecular functions, and biological processes (Figure 2). Top enriched KEGG pathways associated with these DECs included lysine degradation, inositol phosphate metabolism, and purine metabolism (Figure 3A). Top Reactome pathways included SUMO E3 ligase SUMOylation of target proteins, Rab regulation of trafficking, and the Rho GTPase cycle (Figure 3B).

GO analysis of DEC host genes associated with aging.

KEGG and Reactome pathway analyses in aged and young skin sample groups. A, The top 15 enriched KEGG pathways. B, The top 15 enriched Reactome pathways.
Confirmation of differential circRNA expression
In order to confirm the validity of our RNA‐seq results, three randomly selected upregulated circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0000205) and three randomly selected downregulated circRNAs (hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) were chosen for qRT‐PCR analysis. For these analyses, we assessed circRNA expression levels in 40 total eyelid skin tissue samples (20 aged, 20 young) using appropriate divergent primers. For full details regarding the selected circRNAs, see Table 3.

Six circRNAs were selected for further PCR validation.

We found that all six DECs exhibited expression trends upon qRT‐PCR analysis identical to those observed in our RNA‐seq results (Figure 4). Of these, five circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) exhibited significant differential expression between the control and aging skin groups (p < 0.05). We additionally utilized Sanger sequencing to verify the qRT‐PCR products, confirming them to be consistent with the sequences in the circBase (http://www.circbase.org/) database (Supplemental Figure S1).

qRT‐PCR validation of 6 differentially expressed circRNAs in 20 young and 20 aged skin samples.
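The young-versus-aged group comparisons reported here used Student's t-tests; the core computation can be sketched in plain Python (the expression values below are invented relative-expression levels for one circRNA, not the study's data):

```python
import statistics

def t_statistic(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)    # pooled variance
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Invented relative-expression values (e.g. 2^-ddCt) for one circRNA
young = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]
aged = [2.0, 2.4, 1.8, 2.1, 2.6, 1.9]

t = t_statistic(aged, young)
# Compare |t| with the two-tailed critical value for df = 10 at alpha = 0.05
# (2.228, from a standard t-table); exceeding it means p < 0.05.
print(abs(t) > 2.228)  # True for this invented data
```

In practice a statistics package (here, GraphPad Prism) reports the exact p-value rather than a table lookup; this sketch only shows what the test measures.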
circRNA‐miRNA interaction network generation
To further evaluate the functional roles of these identified circRNAs, we assessed the potential miRNA binding of the six validated circRNAs using miRanda (v3.3a), with the resultant interaction network visualized using the Cytoscape (v3.6.0) tool (Figure 5). In total, this network incorporated 6 circRNAs and 151 target miRNAs. Among these 151 miRNAs, six (hsa‐miR‐3064‐5p, hsa‐miR‐6762‐3p, hsa‐miR‐3194‐3p, hsa‐miR‐6731‐5p, hsa‐miR‐30c‐1‐3p, and hsa‐miR‐6760‐5p) were each predicted to bind more than one circRNA, suggesting that they may be regulated by multiple circRNAs simultaneously.

Predicted circRNA‐miRNA interaction networks. Yellow and red rectangles correspond to circRNAs and target miRNAs, respectively.
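The shared-target observation above (six miRNAs each bound by more than one circRNA) amounts to counting node degree on the miRNA side of the bipartite network. A minimal sketch with invented edges (in the study, the edges came from miRanda predictions):

```python
from collections import defaultdict

# Hypothetical circRNA -> predicted target miRNA edges (invented names;
# the study's edges were miRanda predictions for six validated circRNAs).
edges = {
    "circ_1": ["miR-a", "miR-b", "miR-c"],
    "circ_2": ["miR-b", "miR-d"],
    "circ_3": ["miR-b", "miR-c", "miR-e"],
}

# Count how many circRNAs target each miRNA (miRNA-side node degree)
degree = defaultdict(int)
for circ, mirs in edges.items():
    for mir in mirs:
        degree[mir] += 1

# miRNAs potentially regulated by more than one circRNA at once
shared = sorted(m for m, n in degree.items() if n > 1)
print(shared)  # ['miR-b', 'miR-c']
```

A tool such as Cytoscape makes the same structure visible graphically: shared miRNAs appear as nodes with multiple incoming edges.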
DISCUSSION

Skin aging is an ongoing process influenced by both intrinsic and extrinsic factors that ultimately leads to impaired tissue integrity and functionality. As skin ages, it loses its elasticity and undergoes thickening, drying, and wrinkling [6]. Increased ultraviolet radiation exposure resulting from ozone layer degradation has further expedited the skin aging process in some areas of the world. Effective approaches to preventing skin aging, however, remain to be established and are of great interest to the pharmaceutical and cosmetic industries.
At present, the molecular mechanisms governing skin aging remain poorly understood, and few relevant biomarkers of this process have been identified.

Owing to their closed‐loop structures, circRNAs are more resistant to exonuclease‐mediated degradation than linear RNAs [36], making them ideal biomarker candidates for certain diseases [37, 38, 39, 40]. However, relatively few studies have assessed the relationship between circRNA expression patterns and the skin aging process. Amaresh et al. evaluated circRNA expression profiles in early- and late-passage WI‐38 cells via RNA‐seq and identified circPVT1 as a suppressor of cellular senescence that sequesters let-7 [41]. Peng et al. similarly identified 29 circRNAs that were differentially expressed in human dermal fibroblasts following UVA irradiation and determined that circCOL3A1 serves as a regulator of type I collagen expression during the photoaging process [42]. Si et al. also generated a UVB‐induced model of fibroblast senescence in which they identified 472 DECs via microarray; of these, circRNA_100797 acts as a molecular sponge capable of sequestering miR‐23a‐5p and inhibiting the UVB‐induced photoaging of these cells [43].

Unlike these prior studies of cell models, in the present study we assessed differential circRNA expression in primary eyelid tissue samples from young and aged human donors. We ultimately identified 571 DECs, and five circRNAs were confirmed via qPCR to be significantly differentially expressed between the two sample groups. hsa_circ_0137613 and hsa_circ_0077605 were significantly upregulated in the aging skin group, suggesting that they may promote skin cell apoptosis, reduce cell proliferation, or arrest the cell cycle in G0/G1 phase.
In contrast, the expression of hsa_circ_0003803, hsa_circ_0113488, and hsa_circ_0112861 was significantly reduced in aging skin tissues, suggesting that these circRNAs may accelerate skin cell proliferation and inhibit skin cell apoptosis. We will use gain‐of‐function and loss‐of‐function experiments to verify the functions of these molecules in skin cell models. Moreover, we conducted GO, KEGG, and Reactome functional enrichment analyses to gain insight into the molecular mechanisms whereby these circRNAs function in this context. The interactions between circRNAs and miRNAs in the skin aging process are also poorly understood, and as such we utilized the miRanda program to construct a putative circRNA‐miRNA interaction network.

There are multiple limitations to the present study. For one, our sample size was relatively limited and should be expanded in future analyses. Furthermore, both in vivo and in vitro validation of the functional relevance of these DECs is still required. The identified circRNAs have the potential to function through a range of mechanisms, including as ceRNAs or as translated peptides/proteins that modulate the aging process, and as such our analyses are too cursory to offer meaningful insights into their roles in this context. Despite these limitations, our results represent a novel dataset pertaining to the identification of circRNAs associated with the skin aging process, with hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0003803, hsa_circ_0113488, and hsa_circ_0112861 representing potential skin aging‐related biomarkers.

CONFLICT OF INTEREST
The authors declare they have no competing interests.

Supporting information: Fig S1.
Keywords: skin aging, circRNA, NGS, biomarker
INTRODUCTION: Aging is a complex process wherein cells and organisms undergo progressive changes at the molecular, tissue, and organ levels as a result of either normal physiology or pathological conditions [1]. The skin is the largest organ in the body, serving as a barrier against external threats [2]. Skin aging results in continuous alterations in the functionality and appearance of the skin [3] and can occur as either a result of internal factors or due to extrinsic photoaging [4, 5]. Photoaging is believed to account for approximately 80% of skin aging [6]. Even so, the mechanisms that govern the skin aging process remain poorly clarified, and few aging‐related biomarkers have been identified to date. Transcriptomic analyses offer a powerful approach to identifying molecular biomarkers of skin aging [7, 8]. Recent breakthroughs in the development of high‐throughput transcriptome sequencing technologies have led to the discovery of a diverse array of biomarkers of different human diseases, with non‐coding RNAs being commonly studied in this context [9, 10]. Research conducted in the 1970s identified novel non‐coding circular RNAs (circRNAs) in the context of plant viral infections [11], although at the time these RNAs were believed to lack functional relevance and were instead thought to be a result of unusual splicing reactions [12]. More recent work, however, suggests that circRNAs are key regulators of gene expression at the transcriptional and post‐transcriptional levels [13, 14]. Indeed, some circRNAs are believed to function as competing endogenous RNAs (ceRNAs) [15] or as de facto molecular sponges capable of sequestering and altering the functionality or expression of specific miRNAs or proteins [16, 17]. While they generally lack coding potential, there is also some evidence that certain circRNAs may be translated in some contexts, thereby further modulating biological functionality [18, 19, 20, 21, 22, 23, 24].
18 , 19 , 20 , 21 , 22 , 23 , 24 Indeed, circRNA dysregulation is a hallmark of conditions such as cancer, neurological disease, 25 and cardiovascular diseases. 26 , 27 The functional importance of circRNAs in the context of skin aging, however, remains to be clarified. Herein, we evaluated circRNA expression profiles in samples of aged and non‐aged skin tissue via high‐throughput sequencing to identify differentially expressed circRNAs (DECs). Appropriate bioinformatics analyses were then used to further predict the functional roles of these DECs in the aging process. MATERIALS AND METHODS: Sample collection The Institutional Review Board of The First Hospital of China Medical University approved the present study, with all participants having provided written informed consent. In total, we collected eyelid tissue samples from 28 females undergoing double eyelid surgery at The First Hospital of China Medical University between 2018 and 2019. Fourteen young patient samples were collected from individuals aged 17–23 years, whereas 14 aged patient samples were from those 55–70 years old. We ultimately used eight samples for RNA‐seq analyses (4 young, 4 aged), while 40 samples were used for downstream qRT‐PCR verification of our results. High‐throughput sequencing. A Hipure Total RNA Mini Kit (Magen) was used to extract RNA from these samples to the protocol. A Qubit 3.0 Fluorometer (Invitrogen), and Agilent 2100 Bioanalyzer (Applied Biosystems) were then used to assess RNA concentrations and integrity. Only samples yielding a RIN value of ≥7.0 were used for downstream RNA‐sequencing. A total of 1 μg of RNA per sample was used together with a KAPA RNA HyperPrep Kit with RiboErase (HMR) for Illumina® (Kapa Biosystems, Inc.) to eliminate rRNA prior to library preparation. Samples were then treated for 30 minutes at 37℃ with 10U RNase R (Geneseed). We next fragmented the remaining RNA, after which first‐ and second‐strand synthesis reactions were conducted. 
Tails and adapters were then ligated to purified cDNA samples, and amplification of the adapter‐ligated purified DNA was performed. A DNA 1000 chip was then used to evaluate library quality with an Agilent 2100 Bioanalyzer. A qRT‐PCR‐based KAPA Biosystems Library Quantification kit (Kapa Biosystems, Inc.) was used to accurately quantify prepared samples, after which libraries were diluted to a 10 nM concentration and pooled in equimolar amounts. We then conducted 150 bp paired‐end (PE150) sequencing of all samples. The Institutional Review Board of The First Hospital of China Medical University approved the present study, with all participants having provided written informed consent. In total, we collected eyelid tissue samples from 28 females undergoing double eyelid surgery at The First Hospital of China Medical University between 2018 and 2019. Fourteen young patient samples were collected from individuals aged 17–23 years, whereas 14 aged patient samples were from those 55–70 years old. We ultimately used eight samples for RNA‐seq analyses (4 young, 4 aged), while 40 samples were used for downstream qRT‐PCR verification of our results. High‐throughput sequencing. A Hipure Total RNA Mini Kit (Magen) was used to extract RNA from these samples to the protocol. A Qubit 3.0 Fluorometer (Invitrogen), and Agilent 2100 Bioanalyzer (Applied Biosystems) were then used to assess RNA concentrations and integrity. Only samples yielding a RIN value of ≥7.0 were used for downstream RNA‐sequencing. A total of 1 μg of RNA per sample was used together with a KAPA RNA HyperPrep Kit with RiboErase (HMR) for Illumina® (Kapa Biosystems, Inc.) to eliminate rRNA prior to library preparation. Samples were then treated for 30 minutes at 37℃ with 10U RNase R (Geneseed). We next fragmented the remaining RNA, after which first‐ and second‐strand synthesis reactions were conducted. 
Tails and adapters were then ligated to purified cDNA samples, and amplification of the adapter‐ligated purified DNA was performed. A DNA 1000 chip was then used to evaluate library quality with an Agilent 2100 Bioanalyzer. A qRT‐PCR‐based KAPA Biosystems Library Quantification kit (Kapa Biosystems, Inc.) was used to accurately quantify prepared samples, after which libraries were diluted to a 10 nM concentration and pooled in equimolar amounts. We then conducted 150 bp paired‐end (PE150) sequencing of all samples. Bioinformatics analysis Initially, reads were mapped to the latest UCSC transcript set with Bowtie2 v2.1.0, 28 after which RSEM v1.2.15 was used to estimate gene expression levels. 29 Gene expression was normalized via a TMM (trimmed mean of M‐values) approach, with edgeR being used to identify differentially expressed genes. Genes were considered to be differentially expressed if they met the following criteria: p < 0.05 and >1.5 fold change. To evaluate circRNA expression, STAR 30 was used to map reads to the genome after which DCC 31 was employed to evaluate circRNA expression levels. DECs were identified using edgeR, 32 with resultant figures being generated using appropriate R packages. In order to explore the potential functional relevance of identified DECs, we analyzed DEC target genes via gene ontology (GO), 33 Kyoto Encyclopedia of Genes and Genomes (KEGG), 34 and Reactome 35 functional enrichment analyses. Initially, reads were mapped to the latest UCSC transcript set with Bowtie2 v2.1.0, 28 after which RSEM v1.2.15 was used to estimate gene expression levels. 29 Gene expression was normalized via a TMM (trimmed mean of M‐values) approach, with edgeR being used to identify differentially expressed genes. Genes were considered to be differentially expressed if they met the following criteria: p < 0.05 and >1.5 fold change. 
To evaluate circRNA expression, STAR 30 was used to map reads to the genome after which DCC 31 was employed to evaluate circRNA expression levels. DECs were identified using edgeR, 32 with resultant figures being generated using appropriate R packages. In order to explore the potential functional relevance of identified DECs, we analyzed DEC target genes via gene ontology (GO), 33 Kyoto Encyclopedia of Genes and Genomes (KEGG), 34 and Reactome 35 functional enrichment analyses. circRNA‐miRNA interaction network construction To predict interactions between DECs and target miRNAs. We obtained miRNA sequences from the miRBase database with corresponding annotations, after which Miranda 3.3a was used to calculate binding interactions between miRNAs and circRNAs. We then generated a visualized version of the resultant DEC‐miRNA interaction network using Cytoscape (v3.7.2; Institute of Systems Biology). To predict interactions between DECs and target miRNAs. We obtained miRNA sequences from the miRBase database with corresponding annotations, after which Miranda 3.3a was used to calculate binding interactions between miRNAs and circRNAs. We then generated a visualized version of the resultant DEC‐miRNA interaction network using Cytoscape (v3.7.2; Institute of Systems Biology). Quantitative real‐time PCR A Hipure Total RNA Mini Kit (Magen) was used to isolate RNA samples as above, after which a NanoDrop™ One (Thermo Fisher Scientific) instrument was used to assess RNA quality and quantity. In addition, 1.5% agarose gel electrophoresis was used to evaluate RNA integrity and to assess for the presence of any contaminating gDNA, while spectrophotometry at 260–280 nm was employed to assess the purity of these RNA samples. Next, 1 µg of total RNA per sample was used with a PrimeScript RT reagent kit (Takara Bio) to prepare cDNA samples. Primer3 (v. 0.4.0) was used to design all primers in the present study (Table 1), with GAPDH being used for normalization purposes. 
A 2−△△CT approach was used to confirm differential circRNA expression in analyzed samples, which were assessed in triplicate. The primers used for RT‐qPCR F: AGAAGGCTGGGGCTCATTTG R: GCAGGAGGCATTGCTGATGAT F: GCTGATGTCATTCTCCACAAGG R: AGCAGCAGCTGACACAGGAT F: CTGGAGCACATGAGCCTGCA R: TGAAAGGTCGCTCCCCTGTGT F: GTTGCTGACACTAGTCTTATTG R: CGAGCTGTTAGTTCTTCGTA F: CCTGGGCGCACAGAAAATCC R: CACCTCTCGGAGTTTCCTCTG F: GCTCATCAAAGACATTTATATGATA R: GAAGAAATTGTAGGCTGTTC F: ACATCAGTGGAGAACCTCAGT R: GACAGTGTTGGTCTTCCATTCA A Hipure Total RNA Mini Kit (Magen) was used to isolate RNA samples as above, after which a NanoDrop™ One (Thermo Fisher Scientific) instrument was used to assess RNA quality and quantity. In addition, 1.5% agarose gel electrophoresis was used to evaluate RNA integrity and to assess for the presence of any contaminating gDNA, while spectrophotometry at 260–280 nm was employed to assess the purity of these RNA samples. Next, 1 µg of total RNA per sample was used with a PrimeScript RT reagent kit (Takara Bio) to prepare cDNA samples. Primer3 (v. 0.4.0) was used to design all primers in the present study (Table 1), with GAPDH being used for normalization purposes. A 2−△△CT approach was used to confirm differential circRNA expression in analyzed samples, which were assessed in triplicate. The primers used for RT‐qPCR F: AGAAGGCTGGGGCTCATTTG R: GCAGGAGGCATTGCTGATGAT F: GCTGATGTCATTCTCCACAAGG R: AGCAGCAGCTGACACAGGAT F: CTGGAGCACATGAGCCTGCA R: TGAAAGGTCGCTCCCCTGTGT F: GTTGCTGACACTAGTCTTATTG R: CGAGCTGTTAGTTCTTCGTA F: CCTGGGCGCACAGAAAATCC R: CACCTCTCGGAGTTTCCTCTG F: GCTCATCAAAGACATTTATATGATA R: GAAGAAATTGTAGGCTGTTC F: ACATCAGTGGAGAACCTCAGT R: GACAGTGTTGGTCTTCCATTCA Statistical analysis Data are means ± standard deviation (SD). GraphPad Prism v7.0 was used for all statistical testing, and data were compared via Student's t‐tests with p < 0.05 as the significance threshold. Data are means ± standard deviation (SD). 
GraphPad Prism v7.0 was used for all statistical testing, and data were compared via Student's t‐tests with p < 0.05 as the significance threshold. Sample collection: The Institutional Review Board of The First Hospital of China Medical University approved the present study, with all participants having provided written informed consent. In total, we collected eyelid tissue samples from 28 females undergoing double eyelid surgery at The First Hospital of China Medical University between 2018 and 2019. Fourteen young patient samples were collected from individuals aged 17–23 years, whereas 14 aged patient samples were from those 55–70 years old. We ultimately used eight samples for RNA‐seq analyses (4 young, 4 aged), while 40 samples were used for downstream qRT‐PCR verification of our results. High‐throughput sequencing. A Hipure Total RNA Mini Kit (Magen) was used to extract RNA from these samples to the protocol. A Qubit 3.0 Fluorometer (Invitrogen), and Agilent 2100 Bioanalyzer (Applied Biosystems) were then used to assess RNA concentrations and integrity. Only samples yielding a RIN value of ≥7.0 were used for downstream RNA‐sequencing. A total of 1 μg of RNA per sample was used together with a KAPA RNA HyperPrep Kit with RiboErase (HMR) for Illumina® (Kapa Biosystems, Inc.) to eliminate rRNA prior to library preparation. Samples were then treated for 30 minutes at 37℃ with 10U RNase R (Geneseed). We next fragmented the remaining RNA, after which first‐ and second‐strand synthesis reactions were conducted. Tails and adapters were then ligated to purified cDNA samples, and amplification of the adapter‐ligated purified DNA was performed. A DNA 1000 chip was then used to evaluate library quality with an Agilent 2100 Bioanalyzer. A qRT‐PCR‐based KAPA Biosystems Library Quantification kit (Kapa Biosystems, Inc.) was used to accurately quantify prepared samples, after which libraries were diluted to a 10 nM concentration and pooled in equimolar amounts. 
We then conducted 150 bp paired‐end (PE150) sequencing of all samples. Bioinformatics analysis: Initially, reads were mapped to the latest UCSC transcript set with Bowtie2 v2.1.0, 28 after which RSEM v1.2.15 was used to estimate gene expression levels. 29 Gene expression was normalized via a TMM (trimmed mean of M‐values) approach, with edgeR being used to identify differentially expressed genes. Genes were considered differentially expressed if they met the following criteria: p < 0.05 and >1.5-fold change. To evaluate circRNA expression, STAR 30 was used to map reads to the genome, after which DCC 31 was employed to quantify circRNA expression levels. DECs were identified using edgeR, 32 with the resultant figures generated using appropriate R packages. To explore the potential functional relevance of the identified DECs, we analyzed DEC target genes via gene ontology (GO), 33 Kyoto Encyclopedia of Genes and Genomes (KEGG), 34 and Reactome 35 functional enrichment analyses. circRNA‐miRNA interaction network construction: To predict interactions between DECs and target miRNAs, we obtained miRNA sequences and corresponding annotations from the miRBase database, after which Miranda 3.3a was used to calculate binding interactions between miRNAs and circRNAs. We then visualized the resultant DEC‐miRNA interaction network using Cytoscape (v3.7.2; Institute of Systems Biology).
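The differential-expression cutoff described above (p < 0.05 and >1.5-fold change) can be expressed as a small filter. This is an illustrative sketch with made-up values, not the edgeR pipeline itself; note that a 1.5-fold cutoff is applied in both directions, so downregulation corresponds to a fold change below 1/1.5:

```python
def is_differentially_expressed(fold_change, p_value,
                                fc_cutoff=1.5, p_cutoff=0.05):
    """Apply the fold-change / p-value filter in both directions."""
    if p_value >= p_cutoff:
        return False
    return fold_change > fc_cutoff or fold_change < 1.0 / fc_cutoff

# Hypothetical (fold change, p-value) pairs for three circRNAs:
examples = [(2.1, 0.01), (1.2, 0.001), (0.4, 0.03)]
flags = [is_differentially_expressed(fc, p) for fc, p in examples]
print(flags)  # [True, False, True]
```

The second entry is filtered out despite a tiny p-value because its fold change falls inside the 1/1.5 to 1.5 band.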
RESULTS: Detection of circRNA dysregulation in the context of skin aging: To identify circRNAs associated with the skin aging process, we began by employing an RNA‐seq approach to analyze four eyelid skin samples from young adults and four eyelid skin samples from aged adults, with the young samples treated as controls. In total, we identified 14915 circRNAs, of which 571 were differentially expressed (fold change >1.5 and p < 0.05) between these groups (Figure 1A). Of these DECs, 348 and 223 were up- and downregulated, respectively (Figure 1B). These DECs were distributed among all chromosomes, as shown in a circos plot (Figure 1C). Details regarding the top 10 upregulated and top 10 downregulated circRNAs are presented in Table 2. Profiling of skin aging‐related circRNAs. A, circRNAs that were up- or downregulated (red and green, respectively) in aged skin tissue samples relative to young tissue samples were arranged in a heat map, with rows corresponding to individual circRNAs and columns corresponding to individual samples.
B, Differentially regulated circRNAs were arranged in a volcano plot, with blue dots corresponding to a lack of statistical significance, whereas red and green correspond to up- and downregulated circRNAs, respectively (fold change >1.5 and p < 0.05). C, Locations of DECs on human chromosomes are represented by circos plots, with chromosomes represented by the outermost circle, circRNAs indicated in the middle circle, and DECs shown in the innermost circle. Up- and downregulated circRNAs are represented by red and blue lines, respectively. Top 10 upregulated and 10 downregulated circRNAs in aging skin samples ranked by fold change. Functional enrichment analyses: We next conducted GO, KEGG, and Reactome analyses to explore the potential functional roles of the identified DECs in the skin aging process. GO analyses suggested that these circRNAs are most closely associated with specific cellular components, molecular functions, and biological processes (Figure 2). Top enriched KEGG pathways associated with these DECs included lysine degradation, inositol phosphate metabolism, and purine metabolism (Figure 3A). Top Reactome pathways for these DECs included SUMO E3 ligase SUMOylation of target proteins, Rab regulation of trafficking, and the Rho GTPase cycle (Figure 3B). GO analysis of DEC host genes associated with aging. KEGG and Reactome pathway analyses in aged and young skin sample groups.
A, The top 15 enriched KEGG pathways. B, The top 15 enriched Reactome pathways. Confirmation of differential circRNA expression: To confirm the validity of our RNA‐seq results, three randomly chosen upregulated circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0000205) and three randomly chosen downregulated circRNAs (hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) were selected for qRT‐PCR analysis. For these analyses, we assessed circRNA expression levels in 40 total eyelid skin tissue samples (20 aged, 20 young) using appropriate divergent primers. For full details regarding the selected circRNAs, see Table 3. Six circRNAs were selected to perform further PCR validation. We found that all six DECs exhibited the same trends upon qRT‐PCR analysis as observed in our RNA‐seq results (Figure 4). Of these, five circRNAs (hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0003803, hsa_circ_0113488, hsa_circ_0112861) exhibited significant differential expression between the control and aging skin groups (p < 0.05). We additionally utilized Sanger sequencing to verify the qRT‐PCR products, confirming them to be consistent with the sequences in the circBase (http://www.circbase.org/) database (Supplemental Figure S1). qRT‐PCR validation of 6 differentially expressed circRNAs in 20 young and 20 aged skin samples. circRNA‐miRNA interaction network generation: To further evaluate the functional roles of these identified circRNAs, we assessed the potential miRNA binding of the six circRNAs using miRanda (v3.3a), with the resultant interaction network visualized using the Cytoscape (v3.6.0) tool (Figure 5). In total, this network incorporated 6 circRNAs and 151 target miRNAs.
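Such predicted pairings form a bipartite network, and tallying which miRNAs are predicted targets of more than one circRNA can be sketched as follows (with hypothetical edge data, not the study's actual predictions):

```python
from collections import defaultdict

# Hypothetical circRNA -> predicted target miRNAs (illustrative only).
edges = {
    "circ_A": ["miR-1", "miR-2", "miR-3"],
    "circ_B": ["miR-2", "miR-4"],
    "circ_C": ["miR-3", "miR-2"],
}

# Invert the mapping to count how many circRNAs hit each miRNA.
hits = defaultdict(set)
for circ, mirnas in edges.items():
    for mir in mirnas:
        hits[mir].add(circ)

# miRNAs bound by more than one circRNA are candidate network "hubs".
hubs = sorted(m for m, circs in hits.items() if len(circs) > 1)
print(hubs)  # ['miR-2', 'miR-3']
```

An edge list of this form can also be exported directly for visualization in Cytoscape.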
Among these 151 miRNAs, we found that 6 miRNAs (hsa‐miR‐3064‐5p, hsa‐miR‐6762‐3p, hsa‐miR‐3194‐3p, hsa‐miR‐6731‐5p, hsa‐miR‐30c‐1‐3p, and hsa‐miR‐6760‐5p) are each predicted to bind more than one circRNA, and thus may be regulated by multiple circRNAs simultaneously. Predicted circRNA‐miRNA interaction networks. Yellow and red rectangles correspond to circRNAs and target miRNAs, respectively. DISCUSSION: Skin aging is an ongoing process influenced by both intrinsic and extrinsic factors that ultimately leads to impaired tissue integrity and functionality. As skin ages, it loses its elasticity and undergoes thickening, drying, and wrinkling. 6 Increased ultraviolet radiation exposure as a result of ozone layer degradation has further expedited the skin aging process in some areas of the world. Effective approaches to preventing skin aging, however, remain to be established and are of great interest to the pharmaceutical and cosmetic industries. At present, the molecular mechanisms governing skin aging remain poorly understood, and few relevant biomarkers of this process have been identified. Owing to their closed‐loop structures, circRNAs are more resistant to exonuclease‐mediated degradation than linear RNAs, 36 making them ideal as biomarkers of certain diseases.
37, 38, 39, 40 However, relatively few studies have assessed the relationship between circRNA expression patterns and the skin aging process. Amaresh et al. evaluated circRNA expression profiles in early- and late-passage WI‐38 cells via RNA‐seq and identified circPVT1 as a suppressor of cellular senescence that sequesters let-7. 41 Peng et al. similarly identified 29 circRNAs that were differentially expressed in human dermal fibroblasts following UVA irradiation and determined that circCOL3A1 serves as a regulator of type I collagen expression during the photoaging process. 42 Si et al. also generated a UVB‐induced model of fibroblast senescence in which they identified 472 DECs via microarray. Of these, circRNA_100797 was identified as a molecular sponge capable of sequestering miR‐23a‐5p and inhibiting the UVB‐induced photoaging of these cells. 43 Unlike these prior studies of cell models, in the present study we assessed differential circRNA expression in primary eyelid tissue samples from young and aged human donors. We ultimately identified 571 DECs, and the expression levels of five circRNAs were confirmed by qPCR to differ significantly between the two sample groups. Hsa_circ_0137613 and hsa_circ_0077605 were significantly upregulated in the aging skin group, suggesting that they may promote skin cell apoptosis, reduce cell proliferation, or arrest the cell cycle in G0/G1 phase. Conversely, the expression of hsa_circ_0003803, hsa_circ_0113488 and hsa_circ_0112861 in aging skin tissues was significantly reduced, suggesting that they may accelerate skin cell proliferation and inhibit skin cell apoptosis. We will use gain‐of‐function and loss‐of‐function experiments to verify the functions of these molecules in skin cell models.
Moreover, we conducted GO, KEGG, and Reactome functional enrichment analyses to gain insight into the molecular mechanisms whereby these circRNAs function in this context. The interactions between circRNAs and miRNAs in the skin aging process are also poorly understood, and as such we utilized the miRanda program to construct a putative circRNA‐miRNA interaction network. There are multiple limitations to the present study. For one, our sample size was relatively limited and should be expanded in future analyses. Furthermore, both in vivo and in vitro validation of the functional relevance of these DECs is still required. The identified circRNAs have the potential to function through a range of mechanisms, including as ceRNAs or as translated peptides/proteins that modulate the aging process, and as such our analyses are too cursory to offer meaningful insights into their roles in this context. Despite these limitations, our results represent a novel dataset pertaining to the identification of circRNAs associated with the skin aging process, with hsa_circ_0137613, hsa_circ_0077605, hsa_circ_0003803, hsa_circ_0113488 and hsa_circ_0112861 being potential skin aging‐related biomarkers. CONFLICT OF INTEREST: The authors declare they have no competing interests. Supporting information: Fig S1
Background: Circular RNAs (circRNAs) have been shown to play important regulatory roles in a range of both pathological and physiological contexts, but their functions in the context of skin aging remain to be clarified. In the present study, we therefore profiled circRNA expression in four pairs of aged and non-aged skin samples to identify differentially expressed circRNAs that may offer clinical value as biomarkers of the skin aging process. Methods: We utilized RNA-seq to profile the levels of circRNAs in eyelid tissue samples, with qRT-PCR used to confirm the RNA-seq results and bioinformatics approaches used to predict downstream target miRNAs for differentially expressed circRNAs. Results: In total, we identified 571 differentially expressed circRNAs (348 upregulated and 223 downregulated) in aged skin samples compared to young skin samples. The top 10 upregulated circRNAs in aged skin samples were hsa_circ_0123543, hsa_circ_0057742, hsa_circ_0088179, hsa_circ_0132428, hsa_circ_0094423, hsa_circ_0008166, hsa_circ_0138184, hsa_circ_0135743, hsa_circ_0114119, and hsa_circ_0131421. The top 10 downregulated circRNAs were hsa_circ_0101479, hsa_circ_0003650, hsa_circ_0004249, hsa_circ_0030345, hsa_circ_0047367, hsa_circ_0055629, hsa_circ_0062955, hsa_circ_0005305, hsa_circ_0001627, and hsa_circ_0008531. Functional enrichment analyses revealed the potential functionality of these differentially expressed circRNAs. The top 3 enriched gene ontology (GO) terms of the host genes of differentially expressed circRNAs are regulation of GTPase activity, positive regulation of GTPase activity, and autophagy. The top 3 enriched KEGG pathways are lysine degradation, fatty acid degradation, and inositol phosphate metabolism. The top 3 enriched Reactome pathways are RAB GEFs exchange GTP for GDP on RABs, Regulation of TP53 Degradation, and Regulation of TP53 Expression and Degradation.
Six circRNAs were selected for qRT-PCR verification, of which five gave results consistent with the sequencing data. Moreover, target miRNAs such as hsa-miR-588, hsa-miR-612, hsa-miR-4487, hsa-miR-149-5p, and hsa-miR-494-5p were predicted in the circRNA-miRNA interaction networks. Conclusions: Overall, these results offer new insights into circRNA expression profiles, potentially highlighting future avenues for research regarding the roles of these circRNAs in the context of skin aging.
[CONTENT] skin aging | circRNA | NGS | biomarker [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Down-Regulation | Female | Gene Expression Profiling | Gene Expression Regulation | Gene Ontology | Gene Regulatory Networks | Humans | MicroRNAs | Middle Aged | RNA, Circular | Reproducibility of Results | Skin Aging | Up-Regulation | Young Adult [SUMMARY]
Chabertia erschowi (Nematoda) is a distinct species based on nuclear ribosomal DNA sequences and mitochondrial DNA sequences.
24450932
Gastrointestinal nematodes of livestock have major socio-economic importance worldwide. In small ruminants, Chabertia spp. are responsible for economic losses to livestock industries globally. Although much attention over the years has yielded insights into the epidemiology, diagnosis, treatment and control of this parasite, only one species (C. ovina) has been accepted to infect small ruminants, and it is not clear whether C. erschowi is valid as a separate species.
BACKGROUND
The first and second internal transcribed spacers (ITS-1 and ITS-2) regions of nuclear ribosomal DNA (rDNA) and the complete mitochondrial (mt) genomes of C. ovina and C. erschowi were amplified and then sequenced. Phylogenetic re-construction of 15 Strongylida species (including C. erschowi) was carried out using Bayesian inference (BI) based on concatenated amino acid sequence datasets.
METHODS
The ITS rDNA sequences of the C. ovina China isolates and the C. erschowi samples were 852–854 bp and 862–866 bp in length, respectively. The mt genome sequence of C. erschowi was 13,705 bp in length, which is 12 bp shorter than that of the C. ovina China isolate. The sequence difference between the entire mt genome of the C. ovina China isolate and that of C. erschowi was 15.33%. In addition, sequence comparison of the most conserved mt small subunit ribosomal (rrnS) gene and the least conserved nad2 gene among multiple individual nematodes revealed substantial nucleotide differences between these two species but limited sequence variation within each species.
RESULTS
The mtDNA and rDNA datasets provide robust genetic evidence that C. erschowi is a valid strongylid nematode species. The mtDNA and rDNA datasets presented in the present study provide useful novel markers for further studies of the taxonomy and systematics of the Chabertia species from different hosts and geographical regions.
CONCLUSIONS
[ "Animals", "Base Sequence", "China", "DNA, Helminth", "DNA, Mitochondrial", "DNA, Ribosomal", "DNA, Ribosomal Spacer", "Genetic Markers", "Genetic Variation", "Genome, Helminth", "Genome, Mitochondrial", "Molecular Sequence Data", "Phylogeny", "Ruminants", "Sequence Analysis, DNA", "Species Specificity", "Strongylida Infections", "Strongyloidea" ]
3937141
Background
The phylum Nematoda includes many parasites that threaten the health of plants, animals and humans on a global scale. The soil-transmitted helminths (including roundworms, whipworms and hookworms) are estimated to infect almost one sixth of all humans, and more than a billion people are infected with at least one species [1]. Chabertia spp. are common gastrointestinal nematodes, causing significant economic losses to the livestock industries worldwide due to poor productivity, failure to thrive and control costs [2-6]. In spite of the high prevalence of Chabertia reported in small ruminants [7], it is not clear whether small ruminants harbour one or more than one species. Based on morphological features (e.g., cervical groove and cephalic vesicle) of adult worms, various Chabertia species have been described in sheep and goats in China, including C. ovina, C. rishati, C. bovis, C. erschowi, C. gaohanensis sp. nov. and C. shaanxiensis sp. nov. [8-10]. However, to date, only Chabertia ovina is well recognized as taxonomically valid [11,12]. Obviously, the identification and distinction of Chabertia species using morphological criteria alone is not reliable. Therefore, there is an urgent need for suitable molecular approaches to accurately identify and distinguish closely-related Chabertia species from different hosts and regions. Molecular tools, using genetic markers in mitochondrial (mt) genomes and the internal transcribed spacer (ITS) regions of nuclear ribosomal DNA (rDNA), have been used effectively to identify and differentiate parasites of different groups [13-16]. For nematodes, recent studies showed that mt genomes are useful genetic markers for the identification and differentiation of closely-related species [17,18]. In addition, employing ITS rDNA sequences, recent studies also demonstrated that Haemonchus placei and H. contortus are distinct species [19], and that Trichuris suis and T. trichiura are different nematode species [20,21].
Using a long-range PCR-coupled sequencing approach [22], the objectives of the present study were (i) to characterize the ITS rDNA and mt genomes of C. ovina and C. erschowi from goat and yak in China, (ii) to compare these ITS sequences and mt genome sequences, and (iii) to test the hypothesis that C. erschowi is a valid species in phylogenetic analyses of these sequence data.
Methods
Parasites and isolation of total genomic DNA: Adult specimens of C. ovina (n = 6, coded CHO1-CHO6) and C. erschowi (n = 9, coded CHE1-CHE9) were collected, post-mortem, from the large intestine of a goat and a yak in Shaanxi and Qinghai Provinces, China, respectively. Worms were washed in physiological saline, identified morphologically [8,10], fixed in 70% (v/v) ethanol and stored at -20°C until use. Total genomic DNA was isolated separately from the 15 individual worms using an established method [23]. Long-range PCR-based sequencing of mt genome: To obtain some mt sequence data for primer design, we PCR-amplified the cox1 gene of C. erschowi using a (relatively) conserved primer pair, JB3-JB4.5 [24]. The rrnL gene was amplified using the designed primers rrnLF (forward; 5′-GAGCCTGTATTGGGTTCCAGTATGA-3′) and rrnLR (reverse; 5′-AACTTTTTTTGATTTTCCTTTCGTA-3′), the nad1 gene using nad1F (forward; 5′-GAGCGTCATTTGTTGGGAAG-3′) and nad1R (reverse; 5′-CCCCTTCAGCAAAATCAAAC-3′), and the cytb gene using cytbF (forward; 5′-GGTACCTTTTTGGCTTTTTATTATA-3′) and cytbR (reverse; 5′-ATATGAACAGGGCTTATTATAGGAT-3′), based on sequences conserved between Oesophagostomum dentatum and the C. ovina Australia isolate. The amplicons were sequenced in both directions using BigDye terminator v.3.1 on an ABI PRISM 3730. We then designed primers (Table 1) to regions within cox1, rrnL, nad1 and cytb and amplified from C.
ovina (coded CHO1) in four overlapping fragments: cox1-rrnL, rrnL-nad1, nad1-cytb and cytb-cox1. Then we designed primers (Table 1) to regions within cox1, rrnL, nad5, nad1, nad2 and cytb and amplified from C. erschowi (coded CHE1) in six overlapping fragments: cox1- rrnL, rrnL-nad5, nad5-nad1, nad1-nad2, nad2-cytb and cytb-cox1. The cycling conditions used were 92°C for 2 min (initial denaturation), then 92°C/10 s (denaturation), 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s (annealing), and 60°C/10 min (extension) for 10 cycles, followed by 92°C for 2 min, then 92°C/10 s, 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s, and 60°C/10 min for 20 cycles, with a cycle elongation of 10 s for each cycle and a final extension at 60°C/10 min. Each amplicon, which represented a single band in a 1.0% (w/v) agarose gel, following electrophoresis and ethidium-bromide staining, was column-purified and then sequenced using a primer walking strategy [22]. Sequences of primers used to amplify mitochondrial DNA regions from Chabertia erschowi and Chabertia ovina from China To obtain some mt sequence data for primer design, we PCR-amplified regions of C. erschowi of cox1 gene by using a (relatively) conserved primer pair JB3-JB4.5 [24], rrnL gene was amplified using the designed primers rrnLF (forward; 5′-GAGCCTGTATTGGGTTCCAGTATGA-3′) and rrnLR (reverse; 5′-AACTTTTTTTGATTTTCCTTTCGTA-3′), nad1 gene was amplified using the designed primers nad1F (forward; 5′-GAGCGTCATTTGTTGGGAAG-3′) and nad1R (reverse; 5′-CCCCTTCAGCAAAATCAAAC-3′), cytb gene was amplified using the designed primers cytbF (forward; 5′-GGTACCTTTTTGGCTTTTTATTATA-3′) and cytbR (reverse; 5′-ATATGAACAGGGCTTATTATAGGAT-3′) based on sequences conserved between Oesophagostomum dentatum and C. ovina Australia isolate. The amplicons were sequenced in both directions using BigDye terminator v.3.1, ABI PRISM 3730. We then designed primers (Table 1) to regions within cox1, rrnL, nad1 and cytb and amplified from C. 
ovina (coded CHO1) in four overlapping fragments: cox1-rrnL, rrnL-nad1, nad1-cytb and cytb-cox1. Then we designed primers (Table 1) to regions within cox1, rrnL, nad5, nad1, nad2 and cytb and amplified from C. erschowi (coded CHE1) in six overlapping fragments: cox1- rrnL, rrnL-nad5, nad5-nad1, nad1-nad2, nad2-cytb and cytb-cox1. The cycling conditions used were 92°C for 2 min (initial denaturation), then 92°C/10 s (denaturation), 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s (annealing), and 60°C/10 min (extension) for 10 cycles, followed by 92°C for 2 min, then 92°C/10 s, 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s, and 60°C/10 min for 20 cycles, with a cycle elongation of 10 s for each cycle and a final extension at 60°C/10 min. Each amplicon, which represented a single band in a 1.0% (w/v) agarose gel, following electrophoresis and ethidium-bromide staining, was column-purified and then sequenced using a primer walking strategy [22]. Sequences of primers used to amplify mitochondrial DNA regions from Chabertia erschowi and Chabertia ovina from China Sequencing of ITS rDNA and mt rrnS and nad2 The full ITS rDNA region including primer flanking 18S and 28S rDNA sequences was PCR-amplified from individual DNA samples using universal primers NC5 (forward; 5′-GTAGGTGAACCTGCGGAAGGATCATT-3′) and NC2 (reverse; 5′-TTAGTTTCTTTTCCTCCGCT-3′) described previously [25]. The primers rrnSF and rrnSR (Table 1) designed to conserved mt genome sequences within the rrnS gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 700 bp) from multiple individuals of Chabertia spp. The primers nad2F and nad2R (Table 1) designed to conserved mt genome sequences within the nad2 gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 900 bp) from multiple individuals of Chabertia spp.. 
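As a quick illustration of routine primer sanity checks for pairs such as those listed above, the sketch below computes the reverse complement, GC fraction and a rough Wallace-rule melting temperature for the nad1F primer from the text. This is illustrative only and was not part of the original protocol; the Wallace rule (2°C per A/T, 4°C per G/C) is a crude approximation.

```python
# Simple primer property checks (illustrative; not from the original study).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA primer sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def gc_fraction(seq: str) -> float:
    """Fraction of G/C bases, a quick primer quality indicator."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    """Approximate melting temperature by the Wallace rule."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

nad1F = "GAGCGTCATTTGTTGGGAAG"  # forward nad1 primer from the text
print(reverse_complement(nad1F))     # CTTCCCAACAAATGACGCTC
print(round(gc_fraction(nad1F), 2))  # 0.5
print(wallace_tm(nad1F))             # 60
```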
Sequence analyses
Sequences were assembled manually and aligned against the complete mt genome sequence of the C. ovina Australia isolate [26] using Clustal X 1.83 [27] to infer gene boundaries. Translation initiation and termination codons were identified by comparison with those of the C. ovina Australia isolate [26]. The secondary structures of the 22 tRNA genes were predicted using tRNAscan-SE [28] and/or manual adjustment [29], and the rRNA genes were identified by comparison with those of the C. ovina Australia isolate [26].
Phylogenetic analyses
Amino acid sequences inferred from the 12 protein-coding genes of each of the two Chabertia spp. were concatenated into a single alignment and then aligned with those of 14 other Strongylida nematodes (Angiostrongylus cantonensis, GenBank accession number NC_013065 [30]; Angiostrongylus costaricensis, NC_013067 [30]; Angiostrongylus vasorum, JX268542 [31]; Aelurostrongylus abstrusus, NC_019571 [32]; Chabertia ovina Australia isolate, NC_013831 [26]; Cylicocyclus insignis, NC_013808 [26]; Metastrongylus pudendotectus, NC_013813 [26]; Metastrongylus salmi, NC_013815 [26]; Oesophagostomum dentatum, FM161882 [17]; Oesophagostomum quadrispinulatum, NC_014181 [17]; Oesophagostomum asperum, KC715826 [33]; Oesophagostomum columbianum, KC715827 [33]; Strongylus vulgaris, NC_013818 [26]; Syngamus trachea, NC_013821 [26]), with the ancylostomatoid nematode Necator americanus (NC_003416) as the outgroup [29]. Regions of ambiguous alignment were excluded using Gblocks (http://molevol.cmima.csic.es/castresana/Gblocks_server.html) [34] with the default parameters (Gblocks removed 1.6% of the amino acid alignment), and the remaining data were subjected to phylogenetic analysis by Bayesian inference (BI) as described previously [35,36]. Phylograms were drawn using Tree View v.1.65 [37].
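The concatenation step described above, joining per-gene amino acid alignments into one supermatrix per taxon before tree inference, can be sketched as follows. The gene names and sequences here are toy stand-ins, not the real alignments, and this is not the authors' actual pipeline.

```python
# Minimal sketch of supermatrix concatenation (toy data, not real alignments).
from collections import OrderedDict

def concatenate_alignments(per_gene):
    """Join per-gene alignments {gene: {taxon: aligned_seq}} into one
    {taxon: concatenated_seq}, padding taxa missing a gene with gaps."""
    taxa = sorted({t for aln in per_gene.values() for t in aln})
    parts = OrderedDict((t, []) for t in taxa)
    for gene, aln in per_gene.items():
        length = len(next(iter(aln.values())))  # alignment width of this gene
        for t in taxa:
            parts[t].append(aln.get(t, "-" * length))
    return {t: "".join(p) for t, p in parts.items()}

toy = {
    "cox1": {"C_ovina": "MLF", "C_erschowi": "MLY"},
    "nad2": {"C_ovina": "IVSK", "C_erschowi": "IVTK"},
}
print(concatenate_alignments(toy)["C_ovina"])  # MLFIVSK
```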
Results
Nuclear ribosomal DNA regions of the two Chabertia species
The rDNA region spanning ITS-1, 5.8S rDNA and ITS-2 was amplified and sequenced from the C. ovina China isolates; these sequences were 852-854 bp in length (GenBank accession nos. KF913466-KF913471), comprising 367-369 bp (ITS-1), 153 bp (5.8S rDNA) and 231-239 bp (ITS-2). The corresponding sequences from the C. erschowi samples were 862-866 bp in length (GenBank accession nos. KF913448-KF913456), comprising 375-378 bp (ITS-1), 153 bp (5.8S rDNA) and 239-245 bp (ITS-2).
Features of the mt genomes of the two Chabertia species
The complete mt genome sequences of the C. ovina China isolate and C. erschowi were 13,717 bp and 13,705 bp in length, respectively (GenBank accession nos. KF660604 and KF660603). Each mt genome contains 12 protein-coding genes (cox1-3, nad1-6, nad4L, cytb, atp6), 22 transfer RNA genes and two ribosomal RNA genes (rrnS and rrnL) (Table 2); the atp8 gene is missing (Figure 1). All protein-coding genes are transcribed in the same direction, as reported for Oesophagostomum spp. [17,33]. The 22 predicted tRNA genes varied from 55 to 63 bp in size. Of the two ribosomal RNA genes, rrnL is located between tRNA-His and nad3, and rrnS between tRNA-Glu and tRNA-Ser (UCN). Three AT-rich non-coding regions (NCRs) were inferred in each mt genome (Table 2). The longest NCR (designated NCR-2; 250 bp in the C. ovina China isolate and 240 bp in C. erschowi) is located between tRNA-Ala and tRNA-Pro (Figure 1) and has an A + T content of 83.75% and 84%, respectively.
Table 2. Mitochondrial genome organization of Chabertia erschowi (CE) and Chabertia ovina China isolate (COC) and Australia isolate (COA).
Figure 1. Structure of the mitochondrial genomes for Chabertia. Genes are designated according to standard nomenclature, except for the 22 tRNA genes, which are designated using one-letter amino acid codes, with numerals differentiating each of the two leucine- and serine-specifying tRNAs (L1 and L2 for codon families CUN and UUR, respectively; S1 and S2 for codon families AGN and UCN, respectively). "NCR-1, NCR-2 and NCR-3" refer to the three non-coding regions.
Comparative analyses between C. ovina and C. erschowi
The mt genome of C. erschowi (13,705 bp) is 12 bp shorter than that of the C. ovina China isolate and 23 bp longer than that of the C. ovina Australia isolate. The arrangement of the mt genes (12 protein-coding genes, two rRNA genes and 22 tRNA genes) and NCRs is the same in all three genomes. A comparison of the nucleotide sequences of each mt gene, and of the amino acid sequences conceptually translated from the individual protein-coding genes, is given in Table 3. The greatest nucleotide variation between C. erschowi and the C. ovina China and Australia isolates was in the nad2 gene (19.4% and 17.92%, respectively), whereas the smallest difference (7.33%) was detected in the rrnS gene (Table 3). Across the entire mt genome, the nucleotide sequence difference was 15.33% between the C. ovina China isolate and C. erschowi, 15.48% between the C. ovina Australia isolate and C. erschowi, and 4.28% between the C. ovina China and Australia isolates.
Table 3. Nucleotide and/or predicted amino acid (aa) sequence differences for mt protein-coding and ribosomal RNA genes among Chabertia erschowi (CE) and Chabertia ovina China isolate (COC) and Australia isolate (COA).
The difference in the concatenated amino acid sequences of the 12 protein-coding genes was 9.36% between the C. ovina China isolate and C. erschowi, 10% between the C. ovina Australia isolate and C. erschowi, and 2.37% between the C. ovina China and Australia isolates. The amino acid sequence differences between each of the 12 protein-coding genes of the C. ovina Australia isolate and the corresponding homologues of C. erschowi ranged from 0.57% to 17.92%, with COX1 the most conserved and NAD2 the least conserved protein (Table 3). Phylogenetic analysis of the concatenated amino acid sequence data, using N. americanus as the outgroup, grouped Chabertia and Oesophagostomum together with absolute support (posterior probability (pp) = 1.00) (Figure 2).
Figure 2. Inferred phylogenetic position of Chabertia within the Strongylida. Analysis of the concatenated amino acid sequence data representing 12 protein-coding genes by Bayesian inference (BI), using Necator americanus (NC_003416) as the outgroup.
Sequence variation in the complete nad2 gene was assessed among the 15 Chabertia individuals from goats and yaks. The sequences of the six C. ovina China isolate individuals were identical in length (840 bp) (GenBank accession nos. KF913472-KF913477), with nucleotide variation at 18 sites (18/840; 2.1%). The sequences of the nine C. erschowi individuals were also 840 bp in length (GenBank accession nos. KF913484-KF913492), with nucleotide variation at 23 sites (23/840; 2.7%). An alignment of all 15 nad2 sequences revealed differences among individuals at 182 nucleotide positions (182/840; 21.7%). Phylogenetic analysis of the nad2 sequence data revealed strong support for the separation of the C. ovina and C. erschowi individuals into two distinct clades (Figure 3A).
Figure 3. Inferred genetic relationships of 15 individual Chabertia specimens. The analyses were carried out by Bayesian inference (BI) based on mitochondrial rrnS (A) and nad2 (B) sequence data, using Necator americanus as the outgroup.
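The variable-site counts above (e.g. 18 of 840 nad2 positions within C. ovina) come from tallying alignment columns that contain more than one base. A minimal sketch of that computation, using toy sequences rather than the real alignment:

```python
# Count variable columns in an alignment of equal-length sequences
# (toy data; the real nad2 alignment is 840 bp across 15 individuals).
def variable_sites(seqs):
    """Return the number of alignment columns with >1 distinct base."""
    length = len(seqs[0])
    assert all(len(s) == length for s in seqs), "sequences must be aligned"
    return sum(1 for i in range(length) if len({s[i] for s in seqs}) > 1)

aln = ["ATGCATGC", "ATGCATGC", "ATGTATGA"]  # toy nad2-like alignment
n = variable_sites(aln)
print(n, f"{100 * n / len(aln[0]):.1f}%")  # 2 25.0%
```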
Sequence variation in the complete rrnS gene was likewise assessed among the 15 Chabertia individuals from goats and yaks. The rrnS sequences of the six C. ovina China isolate individuals were identical in length (696 bp) (GenBank accession nos. KF913478-KF913483), with nucleotide variation at seven sites (7/696; 1.0%). The rrnS sequences of the nine C. erschowi individuals were also 696 bp in length (GenBank accession nos. KF913457-KF913465), with nucleotide variation at six sites (6/696; 0.9%). An alignment of all 15 rrnS sequences revealed differences among individuals at 56 nucleotide positions (56/696; 8.05%). Phylogenetic analysis of the rrnS sequence data revealed strong support for the separation of the C. ovina and C. erschowi individuals into two distinct clades (Figure 3B).
The ITS-1 and ITS-2 sequences from 10 individual adults of the C. ovina China isolate were compared with those from six individual adults of C. erschowi. Within each species, sequence variation was 0-2.9% (ITS-1) and 0-2.7% (ITS-2). Between the C. ovina China isolate and C. erschowi, however, the sequence differences were 6.3-8.2% (ITS-1) and 10.4-13.6% (ITS-2).
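The intraspecific versus interspecific percentages above rest on pairwise comparisons of aligned sequences. A sketch of the underlying uncorrected p-distance, skipping gap positions, with toy ITS-like fragments standing in for the real data:

```python
# Uncorrected pairwise p-distance between two aligned sequences
# (toy fragments; not the actual ITS alignments from the study).
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences,
    ignoring positions where either sequence has a gap ('-')."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    diffs = sum(1 for x, y in pairs if x != y)
    return diffs / len(pairs)

ovina_its = "ATGGC-TACGT"   # hypothetical aligned fragment
ersch_its = "ATGACGTACCT"   # hypothetical aligned fragment
print(f"{100 * p_distance(ovina_its, ersch_its):.1f}%")  # 20.0%
```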
Conclusion
The findings of this study provide robust genetic evidence that C. erschowi is a valid species, distinct from C. ovina. The mtDNA and rDNA datasets reported here provide useful novel markers for further studies of the taxonomy and systematics of Chabertia spp. from different hosts and geographical regions.
[ "Background", "Parasites and isolation of total genomic DNA", "Long-range PCR-based sequencing of mt genome", "Sequencing of ITS rDNA and mt rrnS and nad2", "Sequence analyses", "Phylogenetic analyses", "Nuclear ribosomal DNA regions of the two Chabertia species", "Features of the mt genomes of the two Chabertia species", "Comparative analyses between C. ovina and C. erschowi", "Competing interests", "Authors’ contributions" ]
[ "The phylum Nematoda includes many parasites that threaten the health of plants, animals and humans on a global scale. The soil-transmitted helminthes (including roundworms, whipworms and hookworms) are estimated to infect almost one sixth of all humans, and more than a billion people are infected with at least one species\n[1]. Chabertia spp. are common gastrointestinal nematodes, causing significant economic losses to the livestock industries worldwide, due to poor productivity, failure to thrive and control costs\n[2-6]. In spite of the high prevalence of Chabertia reported in small ruminants\n[7], it is not clear whether the small ruminants harbour one or more than one species. Based on morphological features (e.g., cervical groove and cephalic vesicle) of adult worms, various Chabertia species have been described in sheep and goats in China, including C. ovina, C. rishati, C. bovis, C. erschowi, C. gaohanensis sp. nov and C. shaanxiensis sp. nov\n[8-10]. However, to date, only Chabertia ovina is well recognized as taxonomically valid\n[11,12]. Obviously, the identification and distinction of Chabertia to species using morphological criteria alone is not reliable. Therefore, there is an urgent need for suitable molecular approaches to accurately identify and distinguish closely-related Chabertia species from different hosts and regions.\nMolecular tools, using genetic markers in mitochondrial (mt) genomes and the internal transcribed spacer (ITS) regions of nuclear ribosomal DNA (rDNA), have been used effectively to identify and differentiate parasites of different groups\n[13-16]. For nematodes, recent studies showed that mt genomes are useful genetic markers for the identification and differentiation of closely-related species\n[17,18]. In addition, employing ITS rDNA sequences, recent studies also demonstrated that Haemonchus placei and H. contortus are distinct species\n[19]; Trichuris suis and T. 
trichiura are different nematode species\n[20,21].\nUsing a long-range PCR-coupled sequencing approach\n[22], the objectives of the present study were (i) to characterize the ITS rDNA and mt genomes of C. ovina and C. erschowi from goat and yak in China, (ii) to compare these ITS sequences and mt genome sequences, and (iii) to test the hypothesis that C. erschowi is a valid species in phylogenetic analyses of these sequence data.", "Adult specimens of C. ovina (n = 6, coded CHO1-CHO6) and C. erschowi (n = 9, coded CHE1-CHE9) were collected, post-mortem, from the large intestine of a goat and a yak in Shaanxi and Qinghai Provinces, China, respectively, and were washed in physiological saline, identified morphologically\n[8,10], fixed in 70% (v/v) ethanol and stored at -20°C until use. Total genomic DNA was isolated separately from 15 individual worms using an established method\n[23].", "To obtain some mt sequence data for primer design, we PCR-amplified regions of C. erschowi of cox1 gene by using a (relatively) conserved primer pair JB3-JB4.5\n[24], rrnL gene was amplified using the designed primers rrnLF (forward; 5′-GAGCCTGTATTGGGTTCCAGTATGA-3′) and rrnLR (reverse; 5′-AACTTTTTTTGATTTTCCTTTCGTA-3′), nad1 gene was amplified using the designed primers nad1F (forward; 5′-GAGCGTCATTTGTTGGGAAG-3′) and nad1R (reverse; 5′-CCCCTTCAGCAAAATCAAAC-3′), cytb gene was amplified using the designed primers cytbF (forward; 5′-GGTACCTTTTTGGCTTTTTATTATA-3′) and cytbR (reverse; 5′-ATATGAACAGGGCTTATTATAGGAT-3′) based on sequences conserved between Oesophagostomum dentatum and C. ovina Australia isolate. The amplicons were sequenced in both directions using BigDye terminator v.3.1, ABI PRISM 3730. We then designed primers (Table\n1) to regions within cox1, rrnL, nad1 and cytb and amplified from C. ovina (coded CHO1) in four overlapping fragments: cox1-rrnL, rrnL-nad1, nad1-cytb and cytb-cox1. 
Then we designed primers (Table\n1) to regions within cox1, rrnL, nad5, nad1, nad2 and cytb and amplified from C. erschowi (coded CHE1) in six overlapping fragments: cox1- rrnL, rrnL-nad5, nad5-nad1, nad1-nad2, nad2-cytb and cytb-cox1. The cycling conditions used were 92°C for 2 min (initial denaturation), then 92°C/10 s (denaturation), 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s (annealing), and 60°C/10 min (extension) for 10 cycles, followed by 92°C for 2 min, then 92°C/10 s, 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s, and 60°C/10 min for 20 cycles, with a cycle elongation of 10 s for each cycle and a final extension at 60°C/10 min. Each amplicon, which represented a single band in a 1.0% (w/v) agarose gel, following electrophoresis and ethidium-bromide staining, was column-purified and then sequenced using a primer walking strategy\n[22].\n\nSequences of primers used to amplify mitochondrial DNA regions from \n\nChabertia erschowi \n\nand \n\nChabertia ovina \n\nfrom China\n", "The full ITS rDNA region including primer flanking 18S and 28S rDNA sequences was PCR-amplified from individual DNA samples using universal primers NC5 (forward; 5′-GTAGGTGAACCTGCGGAAGGATCATT-3′) and NC2 (reverse; 5′-TTAGTTTCTTTTCCTCCGCT-3′) described previously\n[25]. The primers rrnSF and rrnSR (Table\n1) designed to conserved mt genome sequences within the rrnS gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 700 bp) from multiple individuals of Chabertia spp. The primers nad2F and nad2R (Table\n1) designed to conserved mt genome sequences within the nad2 gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 900 bp) from multiple individuals of Chabertia spp..", "Sequences were assembled manually and aligned against the complete mt genome sequences of C. ovina Australia isolate\n[26] using the computer program Clustal X 1.83\n[27] to infer gene boundaries. 
Translation initiation and termination codons were identified based on comparison with that of C. ovina Australia isolate\n[26]. The secondary structures of 22 tRNA genes were predicted using tRNAscan-SE\n[28] and/or manual adjustment\n[29], and rRNA genes were identified by comparison with that of C. ovina Australia isolate\n[26].", "Amino acid sequences inferred from the 12 protein-coding genes of the two Chabertia spp. worms were concatenated into a single alignment, and then aligned with those of 14 other Strongylida nematodes (Angiostrongylus cantonensis, GenBank accession number NC_013065\n[30]; Angiostrongylus costaricensis, NC_013067\n[30]; Angiostrongylus vasorum, JX268542\n[31]; Aelurostrongylus abstrusus, NC_019571\n[32]; Chabertia ovina Australia isolate, NC_013831\n[26]; Cylicocyclus insignis, NC_013808\n[26]; Metastrongylus pudendotectus, NC_013813\n[26]; Metastrongylus salmi, NC_013815\n[26]; Oesophagostomum dentatum, FM161882\n[17]; Oesophagostomum quadrispinulatum, NC_014181\n[17]; Oesophagostomum asperum, KC715826\n[33]; Oesophagostomum columbianum, KC715827\n[33]; Strongylus vulgaris, NC_013818\n[26]; Syngamus trachea, NC_013821\n[26], using the Ancylostomatoidea nematode, Necator americanus, NC_003416 as the outgroup\n[29]. Any regions of ambiguous alignment were excluded using Gblocks (http://molevol.cmima.csic.es/castresana/Gblocks_server.html)\n[34] with the default parameters (Gblocks removed 1.6% of the amino acid alignments) and then subjected to phylogenetic analysis using Bayesian Inference (BI) as described previously\n[35,36]. Phylograms were drawn using the program Tree View v.1.65\n[37].", "The rDNA region including ITS-1, 5.8S rDNA and ITS-2 were amplified and sequenced from C. ovina China isolates, and they were 852-854 bp (GenBank accession nos. KF913466-KF913471) in length, which contained 367-369 bp (ITS-1), 153 bp (5.8S rDNA) and 231-239 bp (ITS-2). These sequences were 862-866 bp in length for C. 
erschowi samples (GenBank accession nos. KF913448-KF913456), containing 375-378 bp (ITS-1), 153 bp (5.8S rDNA) and 239-245 bp (ITS-2).", "The complete mt genome sequence of C. ovina China isolate and C. erschowi were 13,717 bp and 13,705 bp in length, respectively (GenBank accession nos. KF660604 and KF660603, respectively). The two mt genomes contain 12 protein-coding genes (cox1-3, nad1-6, nad4L, cytb, atp6), 22 transfer RNA genes and two ribosomal RNA genes (rrnS and rrnL) (Table\n2), but the atp8 gene is missing (Figure\n1). The protein-coding genes are transcribed in the same directions, as reported for Oesophagostomum spp.\n[17,33]. Twenty-two tRNA genes were predicted from the mt genomes, which varied from 55 to 63 bp in size. The two ribosomal RNA genes (rrnL and rrnS) were inferred; rrnL is located between tRNA-His and nad3, and rrnS is located between tRNA-Glu and tRNA-Ser (UCN). Three AT-rich non-coding regions (NCRs) were inferred in the mt genomes (Table\n2). For these genomes, the longest NCR (designated NC2; 250 bp for C. ovina China isolate and 240 bp for C. erschowi in length) is located between the tRNA-Ala and tRNA-Pro (Figure\n1), have an A + T content of 83.75% and 84%, respectively.\n\nMitochondrial genome organization of \n\nChabertia erschowi \n\n(CE) and \n\nChabertia ovina \n\nChina isolate (COC) and Australia isolate (COA)\n\nStructure of the mitochondrial genomes for Chabertia. Genes are designated according to standard nomenclature, except for the 22 tRNA genes, which are designated using one-letter amino acid codes, with numerals differentiating each of the two leucine- and serine-specifying tRNAs (L1 and L2 for codon families CUN and UUR, respectively; S1 and S2 for codon families AGN and UCN, respectively). “NCR-1, NCR-2 and NCR-3” refer to three non-coding regions.", "The mt genome sequence of C. erschowi was 13,705 bp in length, 12 bp shorter than that of C. ovina China isolate, and 23 bp longer than that of C. 
ovina Australia isolate. The arrangement of the mt genes (i.e., 12 protein-coding genes, 2 rrn genes and 22 tRNA genes) and NCRs was the same. A comparison of the nucleotide sequences of each mt gene as well as the amino acid sequences conceptually translated from individual protein-coding genes of the two Chabertia species is given in Table\n3. The greatest nucleotide variation between the C. ovina China isolate and C. erschowi was in the nad2 gene (19.4% and 17.92%), whereas the least difference (7.33%) was detected in the rrnS gene (Table\n3). The nucleotide sequence difference between the entire mt genome of C. ovina China isolate and that of C. erschowi was 15.33%. Sequence difference between the entire mt genome of C. ovina Australia isolate and that of C. erschowi was 15.48%. Sequence difference between the entire mt genome of C. ovina China isolate and that of C. ovina Australia isolate was 4.28%.\n\nNucleotide and/or predicted amino acid (aa) sequence differences for mt protein-coding and ribosomal RNA genes among \n\nChabertia erschowi \n\n(CE) and \n\nChabertia ovina \n\nChina isolate (COC) and Australia isolate (COA)\n\nThe difference in the concatenated amino acid sequences of the 12 protein-coding genes of the C. ovina China isolate and those of C. erschowi was 9.36%, 10% between those of the C. ovina Australia isolate and those of C. erschowi, and 2.37% between those of the C. ovina China isolate and those of C. ovina Australia isolate. The amino acid sequence differences between each of the 12 protein-coding genes of the C. ovina Australia isolate and the corresponding homologues of C. erschowi ranged from 0.57-17.92%, with COX1 being the most conserved and NAD2 the least conserved protein (Table\n3). Phylogenetic analyses of concatenated amino acid sequence data sets, using N. 
americanus as the outgroup, revealed that Chabertia and Oesophagostomum clustered together, with absolute support (posterior probability (pp) = 1.00) (Figure\n2).\nInferred phylogenetic position of Chabertia within Strongylida nematodes. Analysis of the concatenated amino acid sequence data representing 12 protein-coding genes by Bayesian inference (BI), using Necator americanus (NC_003416) as the outgroup.\nSequence variation in the complete nad2 gene was assessed among 15 individuals of Chabertia from goats and yaks. Sequences of the six C. ovina China isolate individuals were the same in length (840 bp) (GenBank accession nos. KF913472-KF913477). Nucleotide variation among the six C. ovina China isolate individuals was detected at 18 sites (18/840; 2.1%). Sequences of the nine C. erschowi individuals were the same in length (840 bp) (GenBank accession nos. KF913484-KF913492). Nucleotide variation also occurred at 23 sites (23/840; 2.7%). Alignment of all 15 nad2 sequences revealed that all individuals of Chabertia differed at 182 nucleotide positions (182/840; 21.7%). Phylogenetic analysis of the nad2 sequence data revealed strong support for the separation of C. ovina and C. erschowi individuals into two distinct clades (Figure\n3A).\nInferred genetic relationships of 15 individual Chabertia specimens. The analyses were carried out by Bayesian inference (BI) based on mitochondrial rrnS (A) and nad2 (B) sequence data, using Necator americanus as the outgroup.\nSequence variation in the complete rrnS gene was assessed among 15 individuals of Chabertia from goat and yak. Sequences of the rrnS gene from the six C. ovina China isolate individuals were the same in length (696 bp) (GenBank accession nos. KF913478-KF913483). Nucleotide variation among the six C. ovina China isolate individuals was detected at seven sites (7/696; 1.0%). Sequences of the rrnS gene from the nine C. erschowi individuals were the same in length (696 bp) (GenBank accession nos. 
KF913457-KF913465). Nucleotide variation also occurred at 6 sites (6/696; 0.9%). Alignment of all 15 rrnS sequences revealed that all individuals of Chabertia differed at 56 nucleotide positions (56/696; 8.05%). Phylogenetic analysis of the rrnS sequence data revealed strong support for the separation of C. ovina and C. erschowi individuals into two distinct clades (Figure\n3B).\nThe ITS-1 and ITS-2 sequences from 10 individual adults of C. ovina China isolate were compared with those of 6 individual adults of C. erschowi. Sequence variations were 0–2.9% (ITS-1) and 0–2.7% (ITS-2) within the two Chabertia species, respectively. However, the sequence differences were 6.3-8.2% (ITS-1) and 10.4-13.6% (ITS-2) between the C. ovina China isolate and C. erschowi.", "The authors declare that they have no competing interests.", "XQZ and GHL conceived and designed the study, and critically revised the manuscript. GHL, LZ and HQS performed the experiments, analyzed the data and drafted the manuscript. GHZ, JZC and QZ helped in study design, study implementation and manuscript revision. All authors read and approved the final manuscript." ]
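The pairwise percentages reported in these sections (e.g., 15.33% between the complete mt genomes of the C. ovina China isolate and C. erschowi, or the 6.3-8.2% ITS-1 differences) are site-by-site difference counts over aligned sequences. A minimal sketch of that computation (the function name is illustrative, not part of the study's pipeline; gap handling is an assumption):

```python
def pairwise_difference(seq_a: str, seq_b: str) -> float:
    """Percent nucleotide difference between two aligned, equal-length
    sequences; columns where either sequence has a gap ('-') are skipped."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = 0
    diffs = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue  # ignore alignment gaps
        compared += 1
        if a != b:
            diffs += 1
    return 100.0 * diffs / compared

# one mismatch over four compared sites
print(pairwise_difference("ACGT", "ACGA"))  # 25.0
```

Applied to a whole-mt-genome alignment, the same ratio (differences / compared sites × 100) yields the divergence values quoted above.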
[ null, null, null, null, null, null, null, null, null, null, null ]
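The A + T contents quoted for the non-coding regions (83.75% and 84% for the longest NCR) are plain base-composition ratios. A sketch, assuming an ungapped uppercase/lowercase DNA string as input (helper name is hypothetical):

```python
def at_content(seq: str) -> float:
    """A + T content of a DNA sequence, as a percentage of its length."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    return 100.0 * at / len(seq)

print(at_content("AATTGC"))  # 4 of 6 bases are A or T
```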
[ "Background", "Methods", "Parasites and isolation of total genomic DNA", "Long-range PCR-based sequencing of mt genome", "Sequencing of ITS rDNA and mt rrnS and nad2", "Sequence analyses", "Phylogenetic analyses", "Results", "Nuclear ribosomal DNA regions of the two Chabertia species", "Features of the mt genomes of the two Chabertia species", "Comparative analyses between C. ovina and C. erschowi", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions" ]
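The intra- and inter-species variation figures in the Results (e.g., 18 variable sites out of 840 nad2 positions, 2.1%) count alignment columns at which the compared individuals are not all identical. A sketch of that column count (hypothetical helper, not from the paper; assumes pre-aligned equal-length sequences):

```python
def variable_sites(aligned: list[str]) -> int:
    """Number of alignment columns that are polymorphic, i.e. where the
    aligned sequences do not all carry the same character."""
    length = len(aligned[0])
    if any(len(s) != length for s in aligned):
        raise ValueError("sequences must be aligned to equal length")
    return sum(len({s[i] for s in aligned}) > 1 for i in range(length))

seqs = ["ACGTA", "ACGTA", "ACCTA"]
n = variable_sites(seqs)
print(n, f"{100.0 * n / len(seqs[0]):.1f}%")  # 1 variable site of 5
```

Dividing the count by the alignment length gives the percentages reported for the nad2 and rrnS data sets.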
[ "The phylum Nematoda includes many parasites that threaten the health of plants, animals and humans on a global scale. The soil-transmitted helminths (including roundworms, whipworms and hookworms) are estimated to infect almost one sixth of all humans, and more than a billion people are infected with at least one species\n[1]. Chabertia spp. are common gastrointestinal nematodes, causing significant economic losses to the livestock industries worldwide, due to poor productivity, failure to thrive and control costs\n[2-6]. In spite of the high prevalence of Chabertia reported in small ruminants\n[7], it is not clear whether small ruminants harbour one species or more than one. Based on morphological features (e.g., cervical groove and cephalic vesicle) of adult worms, various Chabertia species have been described in sheep and goats in China, including C. ovina, C. rishati, C. bovis, C. erschowi, C. gaohanensis sp. nov. and C. shaanxiensis sp. nov.\n[8-10]. However, to date, only Chabertia ovina is well recognized as taxonomically valid\n[11,12]. Clearly, the identification and distinction of Chabertia species using morphological criteria alone is not reliable. Therefore, there is an urgent need for suitable molecular approaches to accurately identify and distinguish closely-related Chabertia species from different hosts and regions.\nMolecular tools, using genetic markers in mitochondrial (mt) genomes and the internal transcribed spacer (ITS) regions of nuclear ribosomal DNA (rDNA), have been used effectively to identify and differentiate parasites of different groups\n[13-16]. For nematodes, recent studies showed that mt genomes are useful genetic markers for the identification and differentiation of closely-related species\n[17,18]. In addition, employing ITS rDNA sequences, recent studies also demonstrated that Haemonchus placei and H. contortus are distinct species\n[19]; Trichuris suis and T. 
trichiura are different nematode species\n[20,21].\nUsing a long-range PCR-coupled sequencing approach\n[22], the objectives of the present study were (i) to characterize the ITS rDNA and mt genomes of C. ovina and C. erschowi from goat and yak in China, (ii) to compare these ITS sequences and mt genome sequences, and (iii) to test the hypothesis that C. erschowi is a valid species in phylogenetic analyses of these sequence data.", " Parasites and isolation of total genomic DNA Adult specimens of C. ovina (n = 6, coded CHO1-CHO6) and C. erschowi (n = 9, coded CHE1-CHE9) were collected, post-mortem, from the large intestine of a goat and a yak in Shaanxi and Qinghai Provinces, China, respectively, and were washed in physiological saline, identified morphologically\n[8,10], fixed in 70% (v/v) ethanol and stored at -20°C until use. Total genomic DNA was isolated separately from 15 individual worms using an established method\n[23].\n Long-range PCR-based sequencing of mt genome To obtain some mt sequence data for primer design, we PCR-amplified regions of C. 
erschowi of cox1 gene by using a (relatively) conserved primer pair JB3-JB4.5\n[24], rrnL gene was amplified using the designed primers rrnLF (forward; 5′-GAGCCTGTATTGGGTTCCAGTATGA-3′) and rrnLR (reverse; 5′-AACTTTTTTTGATTTTCCTTTCGTA-3′), nad1 gene was amplified using the designed primers nad1F (forward; 5′-GAGCGTCATTTGTTGGGAAG-3′) and nad1R (reverse; 5′-CCCCTTCAGCAAAATCAAAC-3′), cytb gene was amplified using the designed primers cytbF (forward; 5′-GGTACCTTTTTGGCTTTTTATTATA-3′) and cytbR (reverse; 5′-ATATGAACAGGGCTTATTATAGGAT-3′) based on sequences conserved between Oesophagostomum dentatum and C. ovina Australia isolate. The amplicons were sequenced in both directions using BigDye terminator v.3.1, ABI PRISM 3730. We then designed primers (Table\n1) to regions within cox1, rrnL, nad1 and cytb and amplified from C. ovina (coded CHO1) in four overlapping fragments: cox1-rrnL, rrnL-nad1, nad1-cytb and cytb-cox1. Then we designed primers (Table\n1) to regions within cox1, rrnL, nad5, nad1, nad2 and cytb and amplified from C. erschowi (coded CHE1) in six overlapping fragments: cox1- rrnL, rrnL-nad5, nad5-nad1, nad1-nad2, nad2-cytb and cytb-cox1. The cycling conditions used were 92°C for 2 min (initial denaturation), then 92°C/10 s (denaturation), 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s (annealing), and 60°C/10 min (extension) for 10 cycles, followed by 92°C for 2 min, then 92°C/10 s, 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s, and 60°C/10 min for 20 cycles, with a cycle elongation of 10 s for each cycle and a final extension at 60°C/10 min. 
Each amplicon, which represented a single band in a 1.0% (w/v) agarose gel, following electrophoresis and ethidium-bromide staining, was column-purified and then sequenced using a primer walking strategy\n[22].\n\nSequences of primers used to amplify mitochondrial DNA regions from \n\nChabertia erschowi \n\nand \n\nChabertia ovina \n\nfrom China\n\n Sequencing of ITS rDNA and mt rrnS and nad2 The full ITS rDNA region including primer flanking 18S and 28S rDNA sequences was PCR-amplified from individual DNA samples using universal primers NC5 (forward; 5′-GTAGGTGAACCTGCGGAAGGATCATT-3′) and NC2 (reverse; 5′-TTAGTTTCTTTTCCTCCGCT-3′) described previously\n[25]. The primers rrnSF and rrnSR (Table\n1) designed to conserved mt genome sequences within the rrnS gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 700 bp) from multiple individuals of Chabertia spp. The primers nad2F and nad2R (Table\n1) designed to conserved mt genome sequences within the nad2 gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 900 bp) from multiple individuals of Chabertia spp.\n Sequence analyses Sequences were assembled manually and aligned against the complete mt genome sequences of C. ovina Australia isolate\n[26] using the computer program Clustal X 1.83\n[27] to infer gene boundaries. Translation initiation and termination codons were identified based on comparison with that of C. ovina Australia isolate\n[26]. The secondary structures of 22 tRNA genes were predicted using tRNAscan-SE\n[28] and/or manual adjustment\n[29], and rRNA genes were identified by comparison with that of C. ovina Australia isolate\n[26].\n Phylogenetic analyses Amino acid sequences inferred from the 12 protein-coding genes of the two Chabertia spp. worms were concatenated into a single alignment, and then aligned with those of 14 other Strongylida nematodes (Angiostrongylus cantonensis, GenBank accession number NC_013065\n[30]; Angiostrongylus costaricensis, NC_013067\n[30]; Angiostrongylus vasorum, JX268542\n[31]; Aelurostrongylus abstrusus, NC_019571\n[32]; Chabertia ovina Australia isolate, NC_013831\n[26]; Cylicocyclus insignis, NC_013808\n[26]; Metastrongylus pudendotectus, NC_013813\n[26]; Metastrongylus salmi, NC_013815\n[26]; Oesophagostomum dentatum, FM161882\n[17]; Oesophagostomum quadrispinulatum, NC_014181\n[17]; Oesophagostomum asperum, KC715826\n[33]; Oesophagostomum columbianum, KC715827\n[33]; Strongylus vulgaris, NC_013818\n[26]; Syngamus trachea, NC_013821\n[26]), using the Ancylostomatoidea nematode, Necator americanus, NC_003416 as the outgroup\n[29]. Any regions of ambiguous alignment were excluded using Gblocks (http://molevol.cmima.csic.es/castresana/Gblocks_server.html)\n[34] with the default parameters (Gblocks removed 1.6% of the amino acid alignments) and then subjected to phylogenetic analysis using Bayesian Inference (BI) as described previously\n[35,36]. Phylograms were drawn using the program Tree View v.1.65\n[37].", "Adult specimens of C. ovina (n = 6, coded CHO1-CHO6) and C. erschowi (n = 9, coded CHE1-CHE9) were collected, post-mortem, from the large intestine of a goat and a yak in Shaanxi and Qinghai Provinces, China, respectively, and were washed in physiological saline, identified morphologically\n[8,10], fixed in 70% (v/v) ethanol and stored at -20°C until use. Total genomic DNA was isolated separately from 15 individual worms using an established method\n[23].", "To obtain some mt sequence data for primer design, we PCR-amplified regions of C. 
erschowi of cox1 gene by using a (relatively) conserved primer pair JB3-JB4.5\n[24], rrnL gene was amplified using the designed primers rrnLF (forward; 5′-GAGCCTGTATTGGGTTCCAGTATGA-3′) and rrnLR (reverse; 5′-AACTTTTTTTGATTTTCCTTTCGTA-3′), nad1 gene was amplified using the designed primers nad1F (forward; 5′-GAGCGTCATTTGTTGGGAAG-3′) and nad1R (reverse; 5′-CCCCTTCAGCAAAATCAAAC-3′), cytb gene was amplified using the designed primers cytbF (forward; 5′-GGTACCTTTTTGGCTTTTTATTATA-3′) and cytbR (reverse; 5′-ATATGAACAGGGCTTATTATAGGAT-3′) based on sequences conserved between Oesophagostomum dentatum and C. ovina Australia isolate. The amplicons were sequenced in both directions using BigDye terminator v.3.1, ABI PRISM 3730. We then designed primers (Table\n1) to regions within cox1, rrnL, nad1 and cytb and amplified from C. ovina (coded CHO1) in four overlapping fragments: cox1-rrnL, rrnL-nad1, nad1-cytb and cytb-cox1. Then we designed primers (Table\n1) to regions within cox1, rrnL, nad5, nad1, nad2 and cytb and amplified from C. erschowi (coded CHE1) in six overlapping fragments: cox1- rrnL, rrnL-nad5, nad5-nad1, nad1-nad2, nad2-cytb and cytb-cox1. The cycling conditions used were 92°C for 2 min (initial denaturation), then 92°C/10 s (denaturation), 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s (annealing), and 60°C/10 min (extension) for 10 cycles, followed by 92°C for 2 min, then 92°C/10 s, 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s, and 60°C/10 min for 20 cycles, with a cycle elongation of 10 s for each cycle and a final extension at 60°C/10 min. 
Each amplicon, which represented a single band in a 1.0% (w/v) agarose gel, following electrophoresis and ethidium-bromide staining, was column-purified and then sequenced using a primer walking strategy\n[22].\n\nSequences of primers used to amplify mitochondrial DNA regions from \n\nChabertia erschowi \n\nand \n\nChabertia ovina \n\nfrom China\n", "The full ITS rDNA region including primer flanking 18S and 28S rDNA sequences was PCR-amplified from individual DNA samples using universal primers NC5 (forward; 5′-GTAGGTGAACCTGCGGAAGGATCATT-3′) and NC2 (reverse; 5′-TTAGTTTCTTTTCCTCCGCT-3′) described previously\n[25]. The primers rrnSF and rrnSR (Table\n1) designed to conserved mt genome sequences within the rrnS gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 700 bp) from multiple individuals of Chabertia spp. The primers nad2F and nad2R (Table\n1) designed to conserved mt genome sequences within the nad2 gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 900 bp) from multiple individuals of Chabertia spp..", "Sequences were assembled manually and aligned against the complete mt genome sequences of C. ovina Australia isolate\n[26] using the computer program Clustal X 1.83\n[27] to infer gene boundaries. Translation initiation and termination codons were identified based on comparison with that of C. ovina Australia isolate\n[26]. The secondary structures of 22 tRNA genes were predicted using tRNAscan-SE\n[28] and/or manual adjustment\n[29], and rRNA genes were identified by comparison with that of C. ovina Australia isolate\n[26].", "Amino acid sequences inferred from the 12 protein-coding genes of the two Chabertia spp. 
worms were concatenated into a single alignment, and then aligned with those of 14 other Strongylida nematodes (Angiostrongylus cantonensis, GenBank accession number NC_013065\n[30]; Angiostrongylus costaricensis, NC_013067\n[30]; Angiostrongylus vasorum, JX268542\n[31]; Aelurostrongylus abstrusus, NC_019571\n[32]; Chabertia ovina Australia isolate, NC_013831\n[26]; Cylicocyclus insignis, NC_013808\n[26]; Metastrongylus pudendotectus, NC_013813\n[26]; Metastrongylus salmi, NC_013815\n[26]; Oesophagostomum dentatum, FM161882\n[17]; Oesophagostomum quadrispinulatum, NC_014181\n[17]; Oesophagostomum asperum, KC715826\n[33]; Oesophagostomum columbianum, KC715827\n[33]; Strongylus vulgaris, NC_013818\n[26]; Syngamus trachea, NC_013821\n[26]), using the Ancylostomatoidea nematode, Necator americanus, NC_003416 as the outgroup\n[29]. Any regions of ambiguous alignment were excluded using Gblocks (http://molevol.cmima.csic.es/castresana/Gblocks_server.html)\n[34] with the default parameters (Gblocks removed 1.6% of the amino acid alignments) and then subjected to phylogenetic analysis using Bayesian Inference (BI) as described previously\n[35,36]. Phylograms were drawn using the program Tree View v.1.65\n[37].", " Nuclear ribosomal DNA regions of the two Chabertia species The rDNA regions including ITS-1, 5.8S rDNA and ITS-2 were amplified and sequenced from C. ovina China isolates, and they were 852-854 bp (GenBank accession nos. KF913466-KF913471) in length, which contained 367-369 bp (ITS-1), 153 bp (5.8S rDNA) and 231-239 bp (ITS-2). These sequences were 862-866 bp in length for C. erschowi samples (GenBank accession nos. KF913448-KF913456), containing 375-378 bp (ITS-1), 153 bp (5.8S rDNA) and 239-245 bp (ITS-2).\n Features of the mt genomes of the two Chabertia species The complete mt genome sequences of C. ovina China isolate and C. erschowi were 13,717 bp and 13,705 bp in length, respectively (GenBank accession nos. KF660604 and KF660603, respectively). The two mt genomes contain 12 protein-coding genes (cox1-3, nad1-6, nad4L, cytb, atp6), 22 transfer RNA genes and two ribosomal RNA genes (rrnS and rrnL) (Table\n2), but the atp8 gene is missing (Figure\n1). The protein-coding genes are transcribed in the same direction, as reported for Oesophagostomum spp.\n[17,33]. Twenty-two tRNA genes were predicted from the mt genomes, which varied from 55 to 63 bp in size. The two ribosomal RNA genes (rrnL and rrnS) were inferred; rrnL is located between tRNA-His and nad3, and rrnS is located between tRNA-Glu and tRNA-Ser (UCN). Three AT-rich non-coding regions (NCRs) were inferred in the mt genomes (Table\n2). For these genomes, the longest NCR (designated NC2; 250 bp for C. ovina China isolate and 240 bp for C. erschowi in length) is located between the tRNA-Ala and tRNA-Pro (Figure\n1), with A + T contents of 83.75% and 84%, respectively.\n\nMitochondrial genome organization of \n\nChabertia erschowi \n\n(CE) and \n\nChabertia ovina \n\nChina isolate (COC) and Australia isolate (COA)\n\nStructure of the mitochondrial genomes for Chabertia. Genes are designated according to standard nomenclature, except for the 22 tRNA genes, which are designated using one-letter amino acid codes, with numerals differentiating each of the two leucine- and serine-specifying tRNAs (L1 and L2 for codon families CUN and UUR, respectively; S1 and S2 for codon families AGN and UCN, respectively). “NCR-1, NCR-2 and NCR-3” refer to three non-coding regions.\n Comparative analyses between C. ovina and C. erschowi The mt genome sequence of C. erschowi was 13,705 bp in length, 12 bp shorter than that of C. ovina China isolate, and 23 bp longer than that of C. ovina Australia isolate. 
The arrangement of the mt genes (i.e., 12 protein-coding genes, 2 rrn genes and 22 tRNA genes) and NCRs was the same. A comparison of the nucleotide sequences of each mt gene as well as the amino acid sequences conceptually translated from individual protein-coding genes of the two Chabertia species is given in Table\n3. The greatest nucleotide variation between the C. ovina China isolate and C. erschowi was in the nad2 gene (19.4% and 17.92%), whereas the least difference (7.33%) was detected in the rrnS gene (Table\n3). The nucleotide sequence difference between the entire mt genome of C. ovina China isolate and that of C. erschowi was 15.33%. Sequence difference between the entire mt genome of C. ovina Australia isolate and that of C. erschowi was 15.48%. Sequence difference between the entire mt genome of C. ovina China isolate and that of C. ovina Australia isolate was 4.28%.\n\nNucleotide and/or predicted amino acid (aa) sequence differences for mt protein-coding and ribosomal RNA genes among \n\nChabertia erschowi \n\n(CE) and \n\nChabertia ovina \n\nChina isolate (COC) and Australia isolate (COA)\n\nThe difference in the concatenated amino acid sequences of the 12 protein-coding genes of the C. ovina China isolate and those of C. erschowi was 9.36%, 10% between those of the C. ovina Australia isolate and those of C. erschowi, and 2.37% between those of the C. ovina China isolate and those of C. ovina Australia isolate. The amino acid sequence differences between each of the 12 protein-coding genes of the C. ovina Australia isolate and the corresponding homologues of C. erschowi ranged from 0.57-17.92%, with COX1 being the most conserved and NAD2 the least conserved protein (Table\n3). Phylogenetic analyses of concatenated amino acid sequence data sets, using N. 
americanus as the outgroup, revealed that Chabertia and Oesophagostomum clustered together, with absolute support (posterior probability (pp) = 1.00) (Figure\n2).\nInferred phylogenetic position of Chabertia within Strongylida nematodes. Analysis of the concatenated amino acid sequence data representing 12 protein-coding genes by Bayesian inference (BI), using Necator americanus (NC_003416) as the outgroup.\nSequence variation in the complete nad2 gene was assessed among 15 individuals of Chabertia from goats and yaks. Sequences of the six C. ovina China isolate individuals were the same in length (840 bp) (GenBank accession nos. KF913472-KF913477). Nucleotide variation among the six C. ovina China isolate individuals was detected at 18 sites (18/840; 2.1%). Sequences of the nine C. erschowi individuals were the same in length (840 bp) (GenBank accession nos. KF913484-KF913492). Nucleotide variation also occurred at 23 sites (23/840; 2.7%). Alignment of all 15 nad2 sequences revealed that all individuals of Chabertia differed at 182 nucleotide positions (182/840; 21.7%). Phylogenetic analysis of the nad2 sequence data revealed strong support for the separation of C. ovina and C. erschowi individuals into two distinct clades (Figure\n3A).\nInferred genetic relationships of 15 individual Chabertia specimens. The analyses were carried out by Bayesian inference (BI) based on mitochondrial rrnS (A) and nad2 (B) sequence data, using Necator americanus as the outgroup.\nSequence variation in the complete rrnS gene was assessed among 15 individuals of Chabertia from goat and yak. Sequences of the rrnS gene from the six C. ovina China isolate individuals were the same in length (696 bp) (GenBank accession nos. KF913478-KF913483). Nucleotide variation among the six C. ovina China isolate individuals was detected at seven sites (7/696; 1.0%). Sequences of the rrnS gene from the nine C. erschowi individuals were the same in length (696 bp) (GenBank accession nos. 
KF913457-KF913465). Nucleotide variation also occurred at 6 sites (6/696; 0.9%). All 15 alignments of the rrnS sequences revealed that all individuals of Chabertia differed at 56 nucleotide positions (56/696; 8.05%). Phylogenetic analysis of the rrnS sequence data revealed strong support for the separation of C. ovina and C. erschowi individuals into two distinct clades (Figure\n3B).\nThe ITS-1 and ITS-2 sequences from 10 individual adults of C. ovina China isolate were compared with that of 6 individual adults of C. erschowi. Sequence variations were 0–2.9% (ITS-1) and 0–2.7% (ITS-2) within the two Chabertia species, respectively. However, the sequence differences were 6.3-8.2% (ITS-1) and 10.4-13.6% (ITS-2) between the C. ovina China isolate and C. erschowi.\nThe mt genome sequence of C. erschowi was 13,705 bp in length, 12 bp shorter than that of C. ovina China isolate, and 23 bp longer than that of C. ovina Australia isolate. The arrangement of the mt genes (i.e., 13 protein genes, 2 rrn genes and 22 tRNA genes) and NCRs were the same. A comparison of the nucleotide sequences of each mt gene as well as the amino acid sequences conceptually translated from individual protein-coding genes of the two Chabertia are given in Table\n3. The greatest nucleotide variation between the C. ovina China isolate and C. erschowi was in the nad2 gene (19.4% and 17.92%), whereas least differences (7.33%) were detected in the rrnS gene, respectively (Table\n3). The nucleotide sequence difference between the entire mt genome of C. ovina China isolate and that of C. erschowi was 15.33%. Sequence difference between the entire mt genome of C. ovina Australia isolate and that of C. erschowi was 15.48%. Sequence difference between the entire mt genome of C. ovina China isolate and that of C. 
ovina Australia isolate was 4.28%.

Table 3. Nucleotide and/or predicted amino acid (aa) sequence differences in mt protein-coding and ribosomal RNA genes among Chabertia erschowi (CE), the Chabertia ovina China isolate (COC) and the Australia isolate (COA).

The difference in the concatenated amino acid sequences of the 12 protein-coding genes between the C. ovina China isolate and C. erschowi was 9.36%; it was 10% between the C. ovina Australia isolate and C. erschowi, and 2.37% between the C. ovina China isolate and the C. ovina Australia isolate. The amino acid sequence differences between each of the 12 protein-coding genes of the C. ovina Australia isolate and the corresponding homologues of C. erschowi ranged from 0.57-17.92%, with COX1 being the most conserved and NAD2 the least conserved protein (Table 3).

The rDNA region including ITS-1, 5.8S rDNA and ITS-2 was amplified and sequenced from C. ovina China isolates, and the sequences were 852-854 bp (GenBank accession nos.
KF913466-KF913471) in length, comprising 367-369 bp (ITS-1), 153 bp (5.8S rDNA) and 231-239 bp (ITS-2). The corresponding sequences were 862-866 bp in length for the C. erschowi samples (GenBank accession nos. KF913448-KF913456), comprising 375-378 bp (ITS-1), 153 bp (5.8S rDNA) and 239-245 bp (ITS-2).

The complete mt genome sequences of the C. ovina China isolate and C. erschowi were 13,717 bp and 13,705 bp in length, respectively (GenBank accession nos. KF660604 and KF660603). The two mt genomes contain 12 protein-coding genes (cox1-3, nad1-6, nad4L, cytb, atp6), 22 transfer RNA genes and two ribosomal RNA genes (rrnS and rrnL) (Table 2); the atp8 gene is missing (Figure 1). The protein-coding genes are transcribed in the same direction, as reported for Oesophagostomum spp. [17,33]. Twenty-two tRNA genes, varying from 55 to 63 bp in size, were predicted from the mt genomes. The two ribosomal RNA genes (rrnL and rrnS) were also inferred; rrnL is located between tRNA-His and nad3, and rrnS between tRNA-Glu and tRNA-Ser (UCN). Three AT-rich non-coding regions (NCRs) were inferred in the mt genomes (Table 2). The longest NCR (designated NC2; 250 bp in the C. ovina China isolate and 240 bp in C. erschowi) is located between tRNA-Ala and tRNA-Pro (Figure 1) and has an A + T content of 83.75% and 84%, respectively.

Table 2. Mitochondrial genome organization of Chabertia erschowi (CE), the Chabertia ovina China isolate (COC) and the Australia isolate (COA).

Figure 1. Structure of the mitochondrial genomes of Chabertia. Genes are designated according to standard nomenclature, except for the 22 tRNA genes, which are designated using one-letter amino acid codes, with numerals differentiating each of the two leucine- and serine-specifying tRNAs (L1 and L2 for codon families CUN and UUR, respectively; S1 and S2 for codon families AGN and UCN, respectively).
“NCR-1, NCR-2 and NCR-3” refer to the three non-coding regions.

Chabertia spp. are responsible for economic losses to livestock industries globally. Although several Chabertia species have been described from various hosts based on microscopic features of the adult worms (e.g. cervical groove and cephalic vesicle), it has not been clear whether C. erschowi is valid as a separate species, because such morphological criteria are unreliable. For this reason, we employed a molecular approach, so that comparative genetic analyses could be conducted.

In the present study, substantial levels of nucleotide difference were detected in the complete mt genome: 15.33% between the C. ovina China isolate and C. erschowi, and 15.48% between the C. ovina Australia isolate and C. erschowi. These mtDNA data provide strong support that C. erschowi represents a distinct species, because a previous comparative study clearly indicated that variation in mtDNA sequences between closely-related species is typically 10%-20% [13].

The difference in amino acid sequences of the concatenated 12 proteins encoded by the complete mt genome between the C. ovina China isolate and C. erschowi is 9.36%, and 10% between the C.
ovina Australia isolate and C. erschowi. This level of amino acid variation is higher than that reported for other nematodes; previous studies of congeneric nematodes have detected lower levels of difference across the 12 protein sequences. For example, the amino acid sequence difference is 4.1% between A. duodenale and A. caninum [29,38], 5.6% between Toxocara malaysiensis and Toxocara cati [39], and 3.22% between O. dentatum and O. quadrispinulatum [17]. In addition, substantial levels of nucleotide difference (6.3-8.2% in ITS-1 and 10.4-13.6% in ITS-2) were also detected between the C. ovina China isolate and C. erschowi. These results also indicate that C. erschowi is a species distinct from C. ovina. This proposal was further supported by phylogenetic analysis based on mtDNA sequences (Figure 3), although, to date, only small numbers of adult worms have been studied molecularly. Clearly, larger population genetic and molecular epidemiological studies should be conducted, using the mt and nuclear markers defined in this study, to further test this hypothesis.

Conclusions
The findings of this study provide robust genetic evidence that C. erschowi is a separate and valid species from C. ovina. The mtDNA and rDNA datasets reported in the present study should provide useful novel markers for further studies of the taxonomy and systematics of Chabertia spp. from different hosts and geographical regions.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
XQZ and GHL conceived and designed the study, and critically revised the manuscript. GHL, LZ and HQS performed the experiments, analyzed the data and drafted the manuscript. GHZ, JZC and QZ helped in study design, study implementation and manuscript revision. All authors read and approved the final manuscript.
Keywords: Chabertia spp, Nuclear ribosomal DNA, Internal transcribed spacer (ITS), Mitochondrial DNA, Phylogenetic analysis
Background: The phylum Nematoda includes many parasites that threaten the health of plants, animals and humans on a global scale. The soil-transmitted helminths (including roundworms, whipworms and hookworms) are estimated to infect almost one sixth of all humans, and more than a billion people are infected with at least one species [1]. Chabertia spp. are common gastrointestinal nematodes, causing significant economic losses to the livestock industries worldwide due to poor productivity, failure to thrive and control costs [2-6]. In spite of the high prevalence of Chabertia reported in small ruminants [7], it is not clear whether small ruminants harbour one species or more than one. Based on morphological features (e.g., cervical groove and cephalic vesicle) of adult worms, various Chabertia species have been described in sheep and goats in China, including C. ovina, C. rishati, C. bovis, C. erschowi, C. gaohanensis sp. nov. and C. shaanxiensis sp. nov. [8-10]. However, to date, only Chabertia ovina is well recognized as taxonomically valid [11,12]. Obviously, the identification and distinction of Chabertia species using morphological criteria alone is not reliable. Therefore, there is an urgent need for suitable molecular approaches to accurately identify and distinguish closely-related Chabertia species from different hosts and regions. Molecular tools, using genetic markers in mitochondrial (mt) genomes and the internal transcribed spacer (ITS) regions of nuclear ribosomal DNA (rDNA), have been used effectively to identify and differentiate parasites of different groups [13-16]. For nematodes, recent studies showed that mt genomes are useful genetic markers for the identification and differentiation of closely-related species [17,18]. In addition, employing ITS rDNA sequences, recent studies also demonstrated that Haemonchus placei and H. contortus are distinct species [19], and that Trichuris suis and T. trichiura are different nematode species [20,21].
Using a long-range PCR-coupled sequencing approach [22], the objectives of the present study were (i) to characterize the ITS rDNA and mt genomes of C. ovina and C. erschowi from goat and yak in China, (ii) to compare these ITS sequences and mt genome sequences, and (iii) to test the hypothesis that C. erschowi is a valid species in phylogenetic analyses of these sequence data.

Methods: Parasites and isolation of total genomic DNA. Adult specimens of C. ovina (n = 6, coded CHO1-CHO6) and C. erschowi (n = 9, coded CHE1-CHE9) were collected, post-mortem, from the large intestine of a goat and a yak in Shaanxi and Qinghai Provinces, China, respectively. The worms were washed in physiological saline, identified morphologically [8,10], fixed in 70% (v/v) ethanol and stored at -20°C until use. Total genomic DNA was isolated separately from the 15 individual worms using an established method [23].

Long-range PCR-based sequencing of the mt genome. To obtain mt sequence data for primer design, we PCR-amplified a region of the C. erschowi cox1 gene using the (relatively) conserved primer pair JB3-JB4.5 [24]. The rrnL gene was amplified using the designed primers rrnLF (forward; 5′-GAGCCTGTATTGGGTTCCAGTATGA-3′) and rrnLR (reverse; 5′-AACTTTTTTTGATTTTCCTTTCGTA-3′), the nad1 gene using nad1F (forward; 5′-GAGCGTCATTTGTTGGGAAG-3′) and nad1R (reverse; 5′-CCCCTTCAGCAAAATCAAAC-3′), and the cytb gene using cytbF (forward; 5′-GGTACCTTTTTGGCTTTTTATTATA-3′) and cytbR (reverse; 5′-ATATGAACAGGGCTTATTATAGGAT-3′), all designed to sequences conserved between Oesophagostomum dentatum and the C. ovina Australia isolate. The amplicons were sequenced in both directions using BigDye terminator v.3.1 on an ABI PRISM 3730. We then designed primers (Table 1) to regions within cox1, rrnL, nad1 and cytb and amplified the mt genome of C. ovina (coded CHO1) in four overlapping fragments: cox1-rrnL, rrnL-nad1, nad1-cytb and cytb-cox1. Similarly, we designed primers (Table 1) to regions within cox1, rrnL, nad5, nad1, nad2 and cytb and amplified the mt genome of C. erschowi (coded CHE1) in six overlapping fragments: cox1-rrnL, rrnL-nad5, nad5-nad1, nad1-nad2, nad2-cytb and cytb-cox1. The cycling conditions were: 92°C for 2 min (initial denaturation); then 10 cycles of 92°C/10 s (denaturation), 50-58°C (C. erschowi) or 56-65°C (C. ovina)/30 s (annealing) and 60°C/10 min (extension); followed by 92°C for 2 min and 20 cycles of 92°C/10 s, 50-58°C (C. erschowi) or 56-65°C (C. ovina)/30 s and 60°C/10 min, with a 10 s elongation added per cycle and a final extension at 60°C for 10 min. Each amplicon, which represented a single band in a 1.0% (w/v) agarose gel following electrophoresis and ethidium-bromide staining, was column-purified and then sequenced using a primer walking strategy [22].
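The logic of joining the overlapping amplicons (cox1-rrnL, rrnL-nad5, and so on) into one contiguous genome sequence can be sketched as a simple suffix-prefix merge. This is an illustrative sketch only, not the authors' assembly pipeline (which was manual); the function names and the exact-match overlap criterion are assumptions for the example, and circularity of the mt genome (the final cytb-cox1 fragment wrapping back to the start) would need separate handling.

```python
def merge_pair(a, b, min_overlap=20):
    """Join amplicons a and b where the 3' end of a exactly matches
    the 5' end of b; return None if no overlap of sufficient length."""
    max_k = min(len(a), len(b))
    for k in range(max_k, min_overlap - 1, -1):  # prefer the longest overlap
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None

def assemble(fragments, min_overlap=20):
    """Assemble an ordered list of overlapping fragments
    (e.g. cox1-rrnL, rrnL-nad1, nad1-cytb) into one contig."""
    contig = fragments[0]
    for frag in fragments[1:]:
        merged = merge_pair(contig, frag, min_overlap)
        if merged is None:
            raise ValueError("adjacent fragments do not overlap")
        contig = merged
    return contig
```

In practice, sequencing errors mean real assemblers score approximate overlaps rather than requiring exact matches, but the fragment-ordering idea is the same.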
Table 1. Sequences of primers used to amplify mitochondrial DNA regions from Chabertia erschowi and Chabertia ovina from China.

Sequencing of ITS rDNA and mt rrnS and nad2. The full ITS rDNA region, including primer-flanking 18S and 28S rDNA sequences, was PCR-amplified from individual DNA samples using the universal primers NC5 (forward; 5′-GTAGGTGAACCTGCGGAAGGATCATT-3′) and NC2 (reverse; 5′-TTAGTTTCTTTTCCTCCGCT-3′) described previously [25]. The primers rrnSF and rrnSR (Table 1), designed to conserved mt genome sequences within the rrnS gene, were employed for PCR amplification and subsequent sequencing of this complete gene (~700 bp) from multiple individuals of Chabertia spp. Likewise, the primers nad2F and nad2R (Table 1), designed to conserved mt genome sequences within the nad2 gene, were employed for PCR amplification and subsequent sequencing of this complete gene (~900 bp) from multiple individuals of Chabertia spp.

Sequence analyses. Sequences were assembled manually and aligned against the complete mt genome sequence of the C. ovina Australia isolate [26] using the computer program Clustal X 1.83 [27] to infer gene boundaries. Translation initiation and termination codons were identified by comparison with those of the C. ovina Australia isolate [26]. The secondary structures of the 22 tRNA genes were predicted using tRNAscan-SE [28] and/or manual adjustment [29], and the rRNA genes were identified by comparison with those of the C. ovina Australia isolate [26].

Phylogenetic analyses. Amino acid sequences inferred from the 12 protein-coding genes of the two Chabertia spp. were concatenated into a single alignment, and then aligned with those of 14 other Strongylida nematodes (Angiostrongylus cantonensis, GenBank accession number NC_013065 [30]; Angiostrongylus costaricensis, NC_013067 [30]; Angiostrongylus vasorum, JX268542 [31]; Aelurostrongylus abstrusus, NC_019571 [32]; Chabertia ovina Australia isolate, NC_013831 [26]; Cylicocyclus insignis, NC_013808 [26]; Metastrongylus pudendotectus, NC_013813 [26]; Metastrongylus salmi, NC_013815 [26]; Oesophagostomum dentatum, FM161882 [17]; Oesophagostomum quadrispinulatum, NC_014181 [17]; Oesophagostomum asperum, KC715826 [33]; Oesophagostomum columbianum, KC715827 [33]; Strongylus vulgaris, NC_013818 [26]; Syngamus trachea, NC_013821 [26]), using the Ancylostomatoidea nematode Necator americanus (NC_003416) as the outgroup [29].
Any regions of ambiguous alignment were excluded using Gblocks (http://molevol.cmima.csic.es/castresana/Gblocks_server.html) [34] with the default parameters (Gblocks removed 1.6% of the amino acid alignments) and then subjected to phylogenetic analysis using Bayesian Inference (BI) as described previously [35,36]. Phylograms were drawn using the program Tree View v.1.65 [37]. Amino acid sequences inferred from the 12 protein-coding genes of the two Chabertia spp. worms were concatenated into a single alignment, and then aligned with those of 14 other Strongylida nematodes (Angiostrongylus cantonensis, GenBank accession number NC_013065 [30]; Angiostrongylus costaricensis, NC_013067 [30]; Angiostrongylus vasorum, JX268542 [31]; Aelurostrongylus abstrusus, NC_019571 [32]; Chabertia ovina Australia isolate, NC_013831 [26]; Cylicocyclus insignis, NC_013808 [26]; Metastrongylus pudendotectus, NC_013813 [26]; Metastrongylus salmi, NC_013815 [26]; Oesophagostomum dentatum, FM161882 [17]; Oesophagostomum quadrispinulatum, NC_014181 [17]; Oesophagostomum asperum, KC715826 [33]; Oesophagostomum columbianum, KC715827 [33]; Strongylus vulgaris, NC_013818 [26]; Syngamus trachea, NC_013821 [26], using the Ancylostomatoidea nematode, Necator americanus, NC_003416 as the outgroup [29]. Any regions of ambiguous alignment were excluded using Gblocks (http://molevol.cmima.csic.es/castresana/Gblocks_server.html) [34] with the default parameters (Gblocks removed 1.6% of the amino acid alignments) and then subjected to phylogenetic analysis using Bayesian Inference (BI) as described previously [35,36]. Phylograms were drawn using the program Tree View v.1.65 [37]. Parasites and isolation of total genomic DNA: Adult specimens of C. ovina (n = 6, coded CHO1-CHO6) and C. 
erschowi (n = 9, coded CHE1-CHE9) were collected, post-mortem, from the large intestine of a goat and a yak in Shaanxi and Qinghai Provinces, China, respectively, and were washed in physiological saline, identified morphologically [8,10], fixed in 70% (v/v) ethanol and stored at -20°C until use. Total genomic DNA was isolated separately from 15 individual worms using an established method [23]. Long-range PCR-based sequencing of mt genome: To obtain some mt sequence data for primer design, we PCR-amplified regions of C. erschowi of cox1 gene by using a (relatively) conserved primer pair JB3-JB4.5 [24], rrnL gene was amplified using the designed primers rrnLF (forward; 5′-GAGCCTGTATTGGGTTCCAGTATGA-3′) and rrnLR (reverse; 5′-AACTTTTTTTGATTTTCCTTTCGTA-3′), nad1 gene was amplified using the designed primers nad1F (forward; 5′-GAGCGTCATTTGTTGGGAAG-3′) and nad1R (reverse; 5′-CCCCTTCAGCAAAATCAAAC-3′), cytb gene was amplified using the designed primers cytbF (forward; 5′-GGTACCTTTTTGGCTTTTTATTATA-3′) and cytbR (reverse; 5′-ATATGAACAGGGCTTATTATAGGAT-3′) based on sequences conserved between Oesophagostomum dentatum and C. ovina Australia isolate. The amplicons were sequenced in both directions using BigDye terminator v.3.1, ABI PRISM 3730. We then designed primers (Table 1) to regions within cox1, rrnL, nad1 and cytb and amplified from C. ovina (coded CHO1) in four overlapping fragments: cox1-rrnL, rrnL-nad1, nad1-cytb and cytb-cox1. Then we designed primers (Table 1) to regions within cox1, rrnL, nad5, nad1, nad2 and cytb and amplified from C. erschowi (coded CHE1) in six overlapping fragments: cox1- rrnL, rrnL-nad5, nad5-nad1, nad1-nad2, nad2-cytb and cytb-cox1. The cycling conditions used were 92°C for 2 min (initial denaturation), then 92°C/10 s (denaturation), 50 -58°C (C. erschowi) or 56 -65°C (C. ovina)/30 s (annealing), and 60°C/10 min (extension) for 10 cycles, followed by 92°C for 2 min, then 92°C/10 s, 50 -58°C (C. erschowi) or 56 -65°C (C. 
ovina)/30 s, and 60°C/10 min for 20 cycles, with a cycle elongation of 10 s for each cycle and a final extension at 60°C/10 min. Each amplicon, which represented a single band in a 1.0% (w/v) agarose gel, following electrophoresis and ethidium-bromide staining, was column-purified and then sequenced using a primer walking strategy [22]. Sequences of primers used to amplify mitochondrial DNA regions from Chabertia erschowi and Chabertia ovina from China Sequencing of ITS rDNA and mt rrnS and nad2: The full ITS rDNA region including primer flanking 18S and 28S rDNA sequences was PCR-amplified from individual DNA samples using universal primers NC5 (forward; 5′-GTAGGTGAACCTGCGGAAGGATCATT-3′) and NC2 (reverse; 5′-TTAGTTTCTTTTCCTCCGCT-3′) described previously [25]. The primers rrnSF and rrnSR (Table 1) designed to conserved mt genome sequences within the rrnS gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 700 bp) from multiple individuals of Chabertia spp. The primers nad2F and nad2R (Table 1) designed to conserved mt genome sequences within the nad2 gene were employed for PCR amplification and subsequent sequencing of this complete gene (~ 900 bp) from multiple individuals of Chabertia spp.. Sequence analyses: Sequences were assembled manually and aligned against the complete mt genome sequences of C. ovina Australia isolate [26] using the computer program Clustal X 1.83 [27] to infer gene boundaries. Translation initiation and termination codons were identified based on comparison with that of C. ovina Australia isolate [26]. The secondary structures of 22 tRNA genes were predicted using tRNAscan-SE [28] and/or manual adjustment [29], and rRNA genes were identified by comparison with that of C. ovina Australia isolate [26]. Phylogenetic analyses: Amino acid sequences inferred from the 12 protein-coding genes of the two Chabertia spp. 
worms were concatenated into a single alignment and then aligned with those of 14 other Strongylida nematodes (Angiostrongylus cantonensis, GenBank accession number NC_013065 [30]; Angiostrongylus costaricensis, NC_013067 [30]; Angiostrongylus vasorum, JX268542 [31]; Aelurostrongylus abstrusus, NC_019571 [32]; Chabertia ovina Australia isolate, NC_013831 [26]; Cylicocyclus insignis, NC_013808 [26]; Metastrongylus pudendotectus, NC_013813 [26]; Metastrongylus salmi, NC_013815 [26]; Oesophagostomum dentatum, FM161882 [17]; Oesophagostomum quadrispinulatum, NC_014181 [17]; Oesophagostomum asperum, KC715826 [33]; Oesophagostomum columbianum, KC715827 [33]; Strongylus vulgaris, NC_013818 [26]; Syngamus trachea, NC_013821 [26]), using the Ancylostomatoidea nematode Necator americanus, NC_003416, as the outgroup [29]. Regions of ambiguous alignment were excluded using Gblocks (http://molevol.cmima.csic.es/castresana/Gblocks_server.html) [34] with the default parameters (Gblocks removed 1.6% of the amino acid alignment), and the remainder was subjected to phylogenetic analysis using Bayesian inference (BI) as described previously [35,36]. Phylograms were drawn using the program TreeView v.1.65 [37]. Results: Nuclear ribosomal DNA regions of the two Chabertia species: The rDNA region comprising ITS-1, 5.8S rDNA and ITS-2 was amplified and sequenced from C. ovina China isolates; these sequences were 852-854 bp in length (GenBank accession nos. KF913466-KF913471), containing 367-369 bp (ITS-1), 153 bp (5.8S rDNA) and 231-239 bp (ITS-2). The corresponding sequences from C. erschowi samples were 862-866 bp in length (GenBank accession nos. KF913448-KF913456), containing 375-378 bp (ITS-1), 153 bp (5.8S rDNA) and 239-245 bp (ITS-2). Features of the mt genomes of the two Chabertia species: The complete mt genome sequences of the C. ovina China isolate and C. erschowi were 13,717 bp and 13,705 bp in length, respectively (GenBank accession nos. KF660604 and KF660603). Both mt genomes contain 12 protein-coding genes (cox1-3, nad1-6, nad4L, cytb, atp6), 22 transfer RNA genes and two ribosomal RNA genes (rrnS and rrnL) (Table 2); the atp8 gene is missing (Figure 1). The protein-coding genes are transcribed in the same direction, as reported for Oesophagostomum spp. [17,33]. The 22 predicted tRNA genes varied from 55 to 63 bp in size. The two ribosomal RNA genes were also inferred: rrnL is located between tRNA-His and nad3, and rrnS between tRNA-Glu and tRNA-Ser (UCN). Three AT-rich non-coding regions (NCRs) were identified in the mt genomes (Table 2). The longest NCR (designated NC2; 250 bp in the C. ovina China isolate and 240 bp in C. erschowi) is located between tRNA-Ala and tRNA-Pro (Figure 1), with A + T contents of 83.75% and 84%, respectively. Mitochondrial genome organization of Chabertia erschowi (CE) and Chabertia ovina China isolate (COC) and Australia isolate (COA) Structure of the mitochondrial genomes for Chabertia. Genes are designated according to standard nomenclature, except for the 22 tRNA genes, which are designated using one-letter amino acid codes, with numerals differentiating each of the two leucine- and serine-specifying tRNAs (L1 and L2 for codon families CUN and UUR, respectively; S1 and S2 for codon families AGN and UCN, respectively). “NCR-1, NCR-2 and NCR-3” refer to three non-coding regions. Comparative analyses between C. ovina and C. erschowi: The mt genome sequence of C. erschowi was 13,705 bp in length, 12 bp shorter than that of the C. ovina China isolate and 23 bp longer than that of the C. ovina Australia isolate. The arrangement of the mt genes (i.e., 12 protein-coding genes, 2 rrn genes and 22 tRNA genes) and NCRs was the same.
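Pairwise divergence values such as the 15.33% reported between the two complete mt genomes are, in essence, the proportion of differing positions in an alignment. The sketch below is illustrative only: the toy sequences and the function name percent_difference are invented for this example (the study itself used Clustal X alignments), but the arithmetic is the same.

```python
# Toy sketch (not the authors' pipeline): percent nucleotide difference
# between two pre-aligned sequences, computed as the fraction of aligned
# positions (alignment gaps excluded) at which the bases differ.

def percent_difference(seq1: str, seq2: str) -> float:
    """Hamming-style difference over gap-free aligned columns."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    compared = differing = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":  # skip alignment gaps
            continue
        compared += 1
        if a != b:
            differing += 1
    return 100.0 * differing / compared

# Hypothetical 20-bp aligned fragment: 3 mismatches / 20 sites = 15%,
# the same order of divergence reported between the two Chabertia mt genomes.
a = "ATGCGATTAGCCTTAGGCAT"
b = "ATGCGATTGGCCTAAGGCAC"
print(percent_difference(a, b))  # 15.0
```

The same column-wise comparison, applied gene by gene, yields the per-gene differences summarized in Table 3.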
A comparison of the nucleotide sequences of each mt gene, as well as the amino acid sequences conceptually translated from the individual protein-coding genes, of the two Chabertia species is given in Table 3. The greatest nucleotide variation between C. ovina and C. erschowi was in the nad2 gene (19.4% and 17.92% for the China and Australia isolates, respectively), whereas the smallest difference (7.33%) was detected in the rrnS gene (Table 3). The nucleotide sequence difference between the entire mt genome of the C. ovina China isolate and that of C. erschowi was 15.33%; the difference between the C. ovina Australia isolate and C. erschowi was 15.48%, and that between the two C. ovina isolates was 4.28%. Nucleotide and/or predicted amino acid (aa) sequence differences for mt protein-coding and ribosomal RNA genes among Chabertia erschowi (CE) and Chabertia ovina China isolate (COC) and Australia isolate (COA) The difference between the concatenated amino acid sequences of the 12 protein-coding genes of the C. ovina China isolate and those of C. erschowi was 9.36%; the corresponding differences were 10% between the C. ovina Australia isolate and C. erschowi, and 2.37% between the two C. ovina isolates. The amino acid sequence differences between each of the 12 protein-coding genes of the C. ovina Australia isolate and the corresponding homologues of C. erschowi ranged from 0.57% to 17.92%, with COX1 being the most conserved protein and NAD2 the least conserved (Table 3). Phylogenetic analysis of the concatenated amino acid sequence data, using N. americanus as the outgroup, placed Chabertia and Oesophagostomum in a single cluster with absolute support (posterior probability (pp) = 1.00) (Figure 2). Inferred phylogenetic position of Chabertia within Strongylida nematodes.
Analysis of the concatenated amino acid sequence data representing 12 protein-coding genes by Bayesian inference (BI), using Necator americanus (NC_003416) as the outgroup. Sequence variation in the complete nad2 gene was assessed among 15 individuals of Chabertia from goats and yaks. The sequences of the six C. ovina China isolate individuals were identical in length (840 bp) (GenBank accession nos. KF913472-KF913477), with nucleotide variation at 18 sites (18/840; 2.1%). The sequences of the nine C. erschowi individuals were also 840 bp in length (GenBank accession nos. KF913484-KF913492), with nucleotide variation at 23 sites (23/840; 2.7%). Alignment of the nad2 sequences from all 15 individuals revealed differences at 182 nucleotide positions (182/840; 21.7%). Phylogenetic analysis of the nad2 sequence data provided strong support for the separation of the C. ovina and C. erschowi individuals into two distinct clades (Figure 3A). Inferred genetic relationships of 15 individual Chabertia specimens. The analyses were carried out by Bayesian inference (BI) based on mitochondrial rrnS (A) and nad2 (B) sequence data, using Necator americanus as the outgroup. Sequence variation in the complete rrnS gene was assessed among the same 15 individuals of Chabertia. The rrnS sequences from the six C. ovina China isolate individuals were identical in length (696 bp) (GenBank accession nos. KF913478-KF913483), with nucleotide variation at seven sites (7/696; 1.0%). The rrnS sequences from the nine C. erschowi individuals were also 696 bp in length (GenBank accession nos. KF913457-KF913465), with nucleotide variation at six sites (6/696; 0.9%). Alignment of the rrnS sequences from all 15 individuals revealed differences at 56 nucleotide positions (56/696; 8.05%).
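The per-gene variability figures above (e.g., 18 variable sites out of 840 bp = 2.1%) come down to counting alignment columns that contain more than one base. A minimal illustration with made-up sequences (the helper variable_sites is hypothetical, not from the study):

```python
# Toy sketch (assumed, not the study's code): find variable (polymorphic)
# sites across equal-length sequences from several individuals, as done for
# the nad2 and rrnS comparisons.

def variable_sites(seqs):
    """Return indices of alignment columns containing more than one base."""
    length = len(seqs[0])
    assert all(len(s) == length for s in seqs), "sequences must be aligned"
    return [i for i in range(length) if len({s[i] for s in seqs}) > 1]

# Hypothetical 10-bp alignment of three individuals: columns 2 and 7 vary.
individuals = [
    "ATGCGATTAG",
    "ATACGATTAG",
    "ATACGATCAG",
]
sites = variable_sites(individuals)
print(sites)                            # [2, 7]
print(round(100 * len(sites) / 10, 1))  # 20.0 (% variable positions)
```

Applied within each species the count gives intraspecific variation (2.1% and 2.7% for nad2), and applied across all 15 individuals it gives the much larger interspecific figure (21.7%).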
Phylogenetic analysis of the rrnS sequence data likewise provided strong support for the separation of the C. ovina and C. erschowi individuals into two distinct clades (Figure 3B). The ITS-1 and ITS-2 sequences from 10 individual adults of the C. ovina China isolate were also compared with those of six individual adults of C. erschowi. Sequence variation within each species was 0-2.9% (ITS-1) and 0-2.7% (ITS-2), whereas the differences between the C. ovina China isolate and C. erschowi were 6.3-8.2% (ITS-1) and 10.4-13.6% (ITS-2). Discussion: Chabertia spp. are responsible for economic losses to livestock industries globally. Although several Chabertia species have been described from various hosts based on microscopic features of the adult worms (e.g., the cervical groove and cephalic vesicle), it has not been clear whether C. erschowi is valid as a separate species, because these morphological criteria are unreliable. For this reason, we employed a molecular approach that allowed comparative genetic analyses. In the present study, substantial nucleotide differences were detected across the complete mt genome: 15.33% between the C. ovina China isolate and C. erschowi, and 15.48% between the C. ovina Australia isolate and C. erschowi. These mtDNA data provide strong support for C. erschowi representing a distinct species, because a previous comparative study clearly indicated that variation in mtDNA sequences between closely-related species is typically 10%-20% [13]. The difference in the amino acid sequences of the concatenated 12 proteins encoded by the complete mt genome is 9.36% between the C. ovina China isolate and C. erschowi, and 10% between the C. ovina Australia isolate and C. erschowi. This level of amino acid variation is higher than that typically observed among other nematodes.
Previous studies of congeneric nematodes have detected lower levels of difference across the 12 protein sequences: for example, the difference in amino acid sequences between A. duodenale and A. caninum is 4.1% [29,38], between Toxocara malaysiensis and Toxocara cati 5.6% [39], and between O. dentatum and O. quadrispinulatum 3.22% [17]. In addition, substantial nucleotide differences (6.3-8.2% in ITS-1 and 10.4-13.6% in ITS-2) were detected between the C. ovina China isolate and C. erschowi. These results also indicate that C. erschowi is a species separate from C. ovina. This proposal was further supported by phylogenetic analyses based on mtDNA sequences (Figure 3), although, to date, only small numbers of adult worms have been studied molecularly. Clearly, larger population genetic and molecular epidemiological studies should be conducted using the mt and nuclear markers defined in this study to further test this hypothesis. Conclusion: The findings of this study provide robust genetic evidence that C. erschowi is a separate and valid species from C. ovina. The mtDNA and rDNA datasets reported here should provide useful novel markers for further studies of the taxonomy and systematics of Chabertia spp. from different hosts and geographical regions. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: XQZ and GHL conceived and designed the study, and critically revised the manuscript. GHL, LZ and HQS performed the experiments, analyzed the data and drafted the manuscript. GHZ, JZC and QZ helped in study design, study implementation and manuscript revision. All authors read and approved the final manuscript.
Background: Gastrointestinal nematodes of livestock have major socio-economic importance worldwide. In small ruminants, Chabertia spp. are responsible for economic losses to livestock industries globally. Although much attention has been paid to the epidemiology, diagnosis, treatment and control of this parasite over the years, only one species (C. ovina) has been accepted to infect small ruminants, and it has not been clear whether C. erschowi is valid as a separate species. Methods: The first and second internal transcribed spacer (ITS-1 and ITS-2) regions of nuclear ribosomal DNA (rDNA) and the complete mitochondrial (mt) genomes of C. ovina and C. erschowi were amplified and sequenced. Phylogenetic reconstruction of 15 Strongylida species (including C. erschowi) was carried out using Bayesian inference (BI) based on concatenated amino acid sequence datasets. Results: The ITS rDNA sequences of the C. ovina China isolates and the C. erschowi samples were 852-854 bp and 862-866 bp in length, respectively. The mt genome sequence of C. erschowi was 13,705 bp in length, 12 bp shorter than that of the C. ovina China isolate. The sequence difference between the entire mt genome of the C. ovina China isolate and that of C. erschowi was 15.33%. In addition, sequence comparison of the most conserved mt small subunit ribosomal (rrnS) gene and the least conserved nad2 gene among multiple individual nematodes revealed substantial nucleotide differences between the two species but limited sequence variation within each species. Conclusions: The mtDNA and rDNA datasets provide robust genetic evidence that C. erschowi is a valid strongylid nematode species. The datasets presented here provide useful novel markers for further studies of the taxonomy and systematics of Chabertia species from different hosts and geographical regions.
Background: The phylum Nematoda includes many parasites that threaten the health of plants, animals and humans on a global scale. The soil-transmitted helminths (including roundworms, whipworms and hookworms) are estimated to infect almost one sixth of all humans, and more than a billion people are infected with at least one species [1]. Chabertia spp. are common gastrointestinal nematodes, causing significant economic losses to livestock industries worldwide due to poor productivity, failure to thrive and control costs [2-6]. In spite of the high prevalence of Chabertia reported in small ruminants [7], it is not clear whether small ruminants harbour one species or more than one. Based on the morphological features (e.g., cervical groove and cephalic vesicle) of adult worms, various Chabertia species have been described from sheep and goats in China, including C. ovina, C. rishati, C. bovis, C. erschowi, C. gaohanensis sp. nov. and C. shaanxiensis sp. nov. [8-10]. However, to date, only Chabertia ovina is well recognized as taxonomically valid [11,12]. Obviously, the identification and distinction of Chabertia species using morphological criteria alone is not reliable. Therefore, there is an urgent need for suitable molecular approaches to accurately identify and distinguish closely-related Chabertia species from different hosts and regions. Molecular tools, using genetic markers in mitochondrial (mt) genomes and the internal transcribed spacer (ITS) regions of nuclear ribosomal DNA (rDNA), have been used effectively to identify and differentiate parasites of many groups [13-16]. For nematodes, recent studies have shown that mt genomes are useful genetic markers for the identification and differentiation of closely-related species [17,18]. In addition, employing ITS rDNA sequences, recent studies have demonstrated that Haemonchus placei and H. contortus are distinct species [19], and that Trichuris suis and T. trichiura are different nematode species [20,21].
Using a long-range PCR-coupled sequencing approach [22], the objectives of the present study were (i) to characterize the ITS rDNA and mt genomes of C. ovina and C. erschowi from goat and yak in China, (ii) to compare these ITS and mt genome sequences, and (iii) to test the hypothesis that C. erschowi is a valid species in phylogenetic analyses of these sequence data.
Keywords: Chabertia spp | Nuclear ribosomal DNA | Internal transcribed spacer (ITS) | Mitochondrial DNA | Phylogenetic analysis
MeSH terms: Animals | Base Sequence | China | DNA, Helminth | DNA, Mitochondrial | DNA, Ribosomal | DNA, Ribosomal Spacer | Genetic Markers | Genetic Variation | Genome, Helminth | Genome, Mitochondrial | Molecular Sequence Data | Phylogeny | Ruminants | Sequence Analysis, DNA | Species Specificity | Strongylida Infections | Strongyloidea
[CONTENT] ovina | isolate | erschowi | chabertia | bp | sequences | china | genes | mt | ovina china [SUMMARY]
[CONTENT] species | chabertia | different | genomes | mt genomes | chabertia species | recent studies | humans | identification | nov [SUMMARY]
[CONTENT] primers | 26 | gene | designed | cytb | nad1 | rrnl | amplified | cox1 | 10 [SUMMARY]
[CONTENT] isolate | bp | ovina china isolate | china isolate | genes | ovina | sequence | ovina china | individuals | erschowi [SUMMARY]
[CONTENT] study provide | provide | study | provide robust genetic evidence | valid species ovina mtdna | rdna datasets reported present | ovina mtdna rdna datasets | ovina mtdna rdna | ovina mtdna | genetic evidence erschowi separate [SUMMARY]
[CONTENT] ovina | bp | isolate | erschowi | genes | chabertia | sequences | 26 | gene | china isolate [SUMMARY]
[CONTENT] ||| Chabertia ||| ||| the years | only one | C. | C. erschowi [SUMMARY]
[CONTENT] first | second | C. | C. ||| 15 | Strongylida | C. erschowi | Bayesian | BI [SUMMARY]
[CONTENT] C. | China | C. | 852-854 | 862 -866 bp ||| C. | 13,705 | 12 | C. | China ||| C. | China | C. | 15.33% ||| two [SUMMARY]
[CONTENT] C. erschowi ||| Chabertia [SUMMARY]
[CONTENT] ||| Chabertia ||| ||| the years | only one | C. | C. erschowi ||| first | second | C. | C. ||| 15 | Strongylida | C. erschowi | Bayesian | BI ||| ||| C. | China | C. | 852-854 | 862 -866 bp ||| C. | 13,705 | 12 | C. | China ||| C. | China | C. | 15.33% ||| two ||| C. erschowi ||| Chabertia [SUMMARY]
A systematic review of the status of children's school access in low- and middle-income countries between 1998 and 2013: using the INDEPTH Network platform to fill the research gaps.
26562137
The framework for expanding children's school access in low- and middle-income countries (LMICs) has been directed by universal education policies as part of Education for All since 1990. In measuring progress to universal education, a narrow conceptualisation of access which dichotomises children's participation as being in or out of school has often been assumed. Yet, the actual promise of universal education goes beyond this simple definition to include retention, progression, completion, and learning.
BACKGROUND
Using Web of Science, we conducted a literature search of studies published in international peer-reviewed journals between 1998 and 2013 in LMICs. The phrases we searched included six school outcomes: school enrolment, school attendance, grade progression, school dropout, primary to secondary school transition, and school completion. From our search, we recorded studies according to: 1) school outcomes; 2) whether longitudinal data were used; and 3) whether data from more than one country were analysed.
DESIGN
The most frequently published area of school access was enrolment, followed by attendance and dropout. Primary to secondary school transition and grade progression had the fewest publications. Of 132 publications which we found to be relevant to school access, 33 made use of longitudinal data and 17 performed cross-country analyses.
RESULTS
The majority of studies published in international peer-reviewed journals on children's school access between 1998 and 2013 were focused on three outcomes: enrolment, attendance, and dropout. Few of these studies used data collected over time or data collected from more than one country for comparative analyses. The contribution of the INDEPTH Network in helping to address these gaps in the literature lies in the longitudinal design of HDSS surveys and in the diversity of countries within the network.
CONCLUSIONS
[ "Child", "Developing Countries", "Education", "Global Health", "Humans", "Poverty", "Research", "Schools", "Students" ]
4643180
null
null
Methods
We conducted this research in three stages. In the first stage, we performed a systematic literature review of studies using Web of Science, a reference database holding citations for every discipline and world region. We searched for six phrases: ‘school enrolment’, ‘school attendance’, ‘grade progression’, ‘school dropout’, ‘primary to secondary school transition’, and ‘school completion’. Each search was limited to journal publications on LMICs in Africa, Asia, and Oceania because these are the countries which form the INDEPTH Network. We also restricted our search to studies published between 1998 and 2013, as the INDEPTH Network was established in 1998 and the EFA goals were incorporated into the MDGs in 2000. Our literature search returned 1,481 references: grade progression (418 references), primary to secondary school transition (329 references), school attendance (274 references), school enrolment (234 references), school completion (143 references), and school dropout (83 references). Of the 1,481 references returned, only 132 were relevant to our focus on school access. In the second stage, we reviewed the 132 references and summarised them according to our key phrases or school outcomes. Finally, we made note of all studies that used longitudinal data sources and studies that used data from more than one country.
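As a quick sanity check, the per-phrase reference counts reported above can be tallied programmatically (a minimal sketch; the dictionary and variable names are ours, with counts taken from the text):

```python
# Reference counts returned by the Web of Science search (1998-2013),
# keyed by search phrase, as reported in the Methods above.
search_results = {
    "school enrolment": 234,
    "school attendance": 274,
    "grade progression": 418,
    "school dropout": 83,
    "primary to secondary school transition": 329,
    "school completion": 143,
}

total_returned = sum(search_results.values())  # 1,481 references in total
relevant = 132                                 # screened as relevant to school access
share_relevant = relevant / total_returned     # just under 9% of returned references

print(total_returned, round(100 * share_relevant, 1))
```

The counts confirm the reported total of 1,481 returned references, of which the 132 relevant publications make up roughly 9%.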
Results
This section presents our results from the literature review. We first present the publications which we found to be relevant to our search; we summarise the findings according to publications which focused mainly on one of the six school outcomes that we searched and those which explored more than one of the school outcomes (Table 2). Subsequently, the findings from our analysis of studies using longitudinal data and cross-country data are presented, respectively.
Table 2. All references obtained from the search in Web of Science, by school outcome.
From our search, a total of 132 references were found to be relevant to children's schooling as framed within CREATE's zones of exclusion. Our review of these references showed that ‘school enrolment’ was the most analysed school outcome (71 publications). More than half of the studies which analysed school enrolment as an outcome focused mainly on children's enrolment (49 out of 71 publications). ‘School attendance’ (24 publications) and ‘school dropout’ (24 publications) were the next most analysed outcomes. As with school enrolment, the majority of studies published on school attendance and school dropout explored these outcomes singularly: 19 of the 24 publications for school attendance and 19 of the 24 publications for dropout focused mainly on analysing children's attendance and dropout, respectively. The least studied school outcomes were ‘grade progression’ (3 publications) and ‘primary to secondary school transition’ (3 publications). Only a few studies were conducted on ‘school completion’ (7 publications). All the publications that we reviewed on primary to secondary transition analysed only this outcome. In contrast, none of the publications we reviewed on grade progression focused solely on children's progression between grades; they also analysed other outcomes such as dropout, completion, and school entry.
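The per-outcome publication counts above can be cross-checked against the 132 relevant references in the same way (an illustrative sketch; the names are ours, with counts taken from the text):

```python
# Publications per school outcome (Table 2), as reported in the text.
publications_by_outcome = {
    "school enrolment": 71,
    "school attendance": 24,
    "school dropout": 24,
    "school completion": 7,
    "grade progression": 3,
    "primary to secondary school transition": 3,
}

# The counts sum exactly to the 132 relevant references, which suggests
# each publication was classified under a single (main) outcome in Table 2.
total = sum(publications_by_outcome.values())
print(total)  # 132
```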
Between 1998 and 2013, journal publications on longitudinal studies which explored children's school outcomes in LMICs were scarce (Tables 3 and 4). Table 3 shows publications that used longitudinal data and analysed one of the school outcomes that we searched. Table 4 shows publications which used longitudinal data but where more than one outcome was analysed. Of the 132 publications that we reviewed, 33 made use of longitudinal data. In Table 3, we see that the use of a longitudinal data source has been most frequent among studies where school enrolment is the main outcome variable (10 publications). Five of the 19 studies on school dropout made use of longitudinal surveys, compared to 3 of the 19 studies on school attendance. The publications on school completion and on transition from primary to secondary school had one study each where longitudinal data were used. In Table 4, 13 studies (of the 38 studies that analysed more than one school outcome) were found to have made use of longitudinal data.
Table 3. Publications using longitudinal data which explored mainly one school outcome, arranged by data source, country of data collection, and reference:
Data source | Country | Reference
Longitudinal study (2000–2003) | Thailand | Jampaklay (6)
Kanchanaburi Demographic Surveillance System (2000–2004) | Thailand | Mahaarcha and Kittisuksathit (7)
African Centre for Health and Population Studies | South Africa | Case et al. (8)
Panel data (2004–2007) | Kenya | Nishimura and Yamano (9)
Panel data from KwaZulu-Natal Income Dynamics Study (1993–1998) | South Africa | Handa and Peterman (10)
APHRC household data (2000–2005) | Kenya | Ngware et al. (11)
APHRC household data (2005–2009) | Kenya | Oketch et al. (12)
APHRC 2005 schooling history data | Kenya | Oketch et al. (13)
APHRC household data (2005–2009) | Kenya | Oketch et al. (14)
Ethiopian Environmental Household Study (2000–2007) | Ethiopia | Lindskog (15)
Panel household survey (1991–1994) | Tanzania | Ainsworth et al. (16)
PASADA community faith-based agency | Tanzania | Ng’ondi (17)
Young Lives household survey | India | Woodhead et al. (18)
Kanchanaburi Demographic Surveillance System (2001–2004) | Thailand | Korinek and Punpuing (19)
Community and School Studies data (2007–2009) | Bangladesh | Sabates et al. (20)
2009–2011 panel data set | China | Yi et al. (21)
Longitudinal school-based dropout study (1999–2001) | Kenya | Nyambedha and Aagaard-Hansen (22)
Individual-level data (2008–2009) | Cambodia | No et al. (23)
Household survey, Uttar Pradesh | India | Siddhu (24)
Nang Rong Social (1984, 1994, 2004) | Thailand | Piotrowski and Paat (25)
Note: APHRC (African Population and Health Research Centre) collects data in an urban demographic surveillance system in Nairobi, Kenya: Viwandani and Korogocho (slums); Jericho and Harambee (non-slum).
Table 4. Publications which used longitudinal data and analysed multiple school outcomes, arranged by data source, country of data collection, and reference.
There was some variation in the data sources of the longitudinal surveys and the countries in which the surveys were conducted. The surveys were more likely to have been conducted in countries in sub-Saharan Africa (21 publications) and Asia (11 publications). The most frequently studied countries were South Africa (7 publications) and Kenya (7 publications). The data sources from the studies on these countries were similar. Four of the seven studies on South Africa used data from the Demographic Surveillance Area in KwaZulu-Natal (8, 10, 32, 33); two used data from the Birth-to-Twenty cohort panel study (27, 37); and the remaining study used data from the Education Management Information System (38). For the studies on Kenya, data from Nairobi's Demographic Surveillance System sites collected under the African Population and Health Research Centre's Education Program were the most frequent source (11–14).
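The longitudinal-study counts reported across Tables 3 and 4 can be reconciled the same way (a sketch; variable names are ours, with numbers taken from the text):

```python
# Studies using longitudinal data, split as in the text: those analysing
# mainly one school outcome (Table 3) and those analysing several (Table 4).
single_outcome = {
    "school enrolment": 10,
    "school dropout": 5,
    "school attendance": 3,
    "school completion": 1,
    "primary to secondary school transition": 1,
}
multiple_outcomes = 13  # of the 38 studies that analysed more than one outcome

total_longitudinal = sum(single_outcome.values()) + multiple_outcomes
print(total_longitudinal)  # 33 of the 132 relevant publications
```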
Three of the studies, however, used data from elsewhere: 1) Evans and Miguel (29) used panel data collected from a Pupil Questionnaire and Tracking survey between 1998 and 2002 in Busia district; 2) Nyambedha and Aagaard-Hansen (22) analysed data from a school-based dropout study in Western Kenya; and 3) Nishimura and Yamano (9) made use of panel data collected from a household community survey in rural Kenya. Other countries such as Thailand (four publications), India (three publications), China (two publications), Ethiopia (two publications), and Tanzania (two publications) were also studied using longitudinal surveys. Three of the four studies which performed longitudinal analyses for Thailand used the same data from the Demographic Surveillance System site in Kanchanaburi Province (6, 7, 25). The studies on India all used different data sources: one study used a household survey from Uttar Pradesh (24); another used data from the Young Lives household survey (18); and the last used school panel data (30). The studies on China (21, 35), Ethiopia (15, 26), and Tanzania (16, 17) also used data from different sources. Very few of the studies that we reviewed performed cross-country analyses (Table 5). Of the 132 publications that we reviewed, 17 performed cross-country analyses. Studies which explored school enrolment (n=6) as the main outcome had the largest number of cross-country publications, followed by those on dropout (n=3) and attendance (n=2). Countries in sub-Saharan Africa were the most likely to be included in comparative studies: 13 of the 17 studies focused only on countries in the sub-Saharan context. Among the remaining studies, four were focused on LMICs more broadly (39–42), with one of these studies analysing data from low-income countries only (42).
Table 5. Publications that used data from more than one country, arranged by data source, countries of data collection, and reference:
Data source | Countries | Reference
World Bank Unit record household data sets | 15 African countries | Kakwani et al. (43)
Demographic and Health Survey (DHS) | 30 countries in Africa | Longwe and Smits (44)
Cross-sectional surveys | Malawi and Kenya | Schafer (45)
DHS and Integrated Household Survey (IHS) | 34 sub-Saharan African countries | Smith-Greenaway and Heckert (46)
Case study | Ghana, Nigeria and Togo | Tuwor and Sossou (47)
DHS | 21 poor countries | Filmer (42)
Case study: Ministry of Education, United Nations, interviews, survey | Guinea and Ethiopia | Colclough et al. (48)
Case studies: interviews and observations of schools | Jamaica, Kenya, Tanzania, Ghana, Indonesia, Pakistan | Heyneman and Stern (41)
National Survey of Adolescents | Burkina Faso, Uganda, Ghana, Malawi | Biddlecom et al. (49)
DHS | Burkina Faso, Cameroon, Ivory Coast, Guinea, Togo | Lloyd and Mensch (50)
DHS | 20 countries in sub-Saharan Africa | Melhado (51)
Multiple Indicator Cluster Survey; DHS | Africa | Lloyd and Hewett (52)
DHS | Kenya, Malawi, Nigeria, Tanzania, Uganda, and Zambia | Lewin and Sabates (53)
Armed conflict data set of the International Peace Research Institute | 43 countries in Africa | Poirier (54)
DHS | Developing countries | Grant and Behrman (40)
DHS | Global | Filmer and Pritchett (39)
Education Management Information Systems | Sub-Saharan Africa | Lewin (4)
The majority of the data used in these studies originated from cross-sectional household surveys. The Demographic and Health Survey (DHS) was the most frequently used data source: 8 of the 17 studies used the DHS for their analyses. The Integrated Household Survey and Multiple Indicator Cluster Surveys were also used (46, 52). These surveys, like the DHS, are large-scale surveys designed to be nationally representative and are used to collect demographic, health, poverty, and education indicators in LMICs. Biddlecom et al. (49) also used a large-scale survey (the National Survey of Adolescents), although this survey is administered in only four countries in sub-Saharan Africa: Ghana, Burkina Faso, Uganda, and Malawi. Some studies used a case study approach, triangulating different sources of data (41, 47, 48). Among the three remaining studies, one used data from Education Management Information Systems (4); another used the World Bank Unit record household data sets (43); and the last made use of the armed conflict data set of the International Peace Research Institute (54).
Conclusions
The gaps which we have identified through our literature review suggest a significant role for longitudinal data in LMICs to explore educational outcomes beyond school enrolment and attendance. As we move towards a post-2015 development agenda, a broader conceptualisation of school access is likely to become more relevant, demanding a shift away from a dichotomous understanding of school access to one where it is understood as a continuum: a process in which children enter, remain, progress, complete primary school, and transition to higher levels of education. Adopting this alternative approach to understanding school access implies a significant role for studies conducted over time in future research. Longitudinal studies can be useful for observing children's school access as a continuum. Here, data collected repeatedly within HDSS sites can contribute to a better understanding of those school outcomes which have been little explored in educational studies in LMICs. Furthermore, the HDSS sites operate in populations which have been found to be the most marginalised in school access, namely those in rural and poor urban areas. The data collected from these sites can be used as evidence to design more targeted policy initiatives for improving participation and retention rates among children in deprived populations.
[]
[]
[]
[ "Methods", "Results", "Discussion", "Conclusions" ]
[ "We conducted this research in three stages. In the first stage, we performed a systematic literature review of studies using Web of Science, a reference database holding citations for every discipline and world region. We searched for six phrases: ‘school enrolment’, ‘school attendance’, ‘grade progression’, ‘school dropout’, ‘primary to secondary school transition’, and ‘school completion’. Each search was limited to journal publications on LMICs in Africa, Asia, and Oceania because these are the countries which form the INDEPTH Network. We also restricted our search to studies published between 1998 and 2013, as the INDEPTH Network was established in 1998 and the EFA goals were incorporated into the MDGs in 2000. Our literature search returned 1,481 references: grade progression (418 references), primary to secondary school transition (329 references), school attendance (274 references), school enrolment (234 references), school completion (143 references), and school dropout (83 references). Of the 1,481 references returned, only 132 were relevant to our focus on school access. In the second stage, we reviewed the 132 references and summarised them according to our key phrases or school outcomes. Finally, we made note of all studies that used longitudinal data sources and studies that used data from more than one country.", "This section presents our results from the literature review. We first present the publications which we found to be relevant to our search; we summarise findings according to publications which focused mainly on one of the six school outcomes that we searched and those which explored more than one of the school outcomes (Table 2). 
Subsequently, the findings from our analysis of studies using longitudinal data and cross-country data are presented, respectively.\nAll references obtained from search in Web of Science by school outcome\nFrom our search, a total of 132 references were found to be relevant to children's schooling as framed within CREATE's zones of exclusion. Having reviewed these references, ‘school enrolment’ was the most analysed school outcome (71 publications). More than half of the studies which analysed school enrolment as an outcome focused mainly on children's enrolment (49 out of 71 publications). ‘School attendance’ (24 publications) and ‘school dropout’ (24 publications) were the second most analysed outcomes. As with school enrolment, the majority of studies published on school attendance and school dropout were focused singularly on exploring these outcomes – 19 out of the 24 publications for school attendance and 19 out of the 24 publications for dropout were focused mainly on analysing children's attendance and dropout, respectively. The least studied school outcomes were ‘grade progression’ (3 publications) and ‘primary to secondary school transition’ (3 publications). Few studies have also been conducted on ‘school completion’ (7 publications). All the publications that we reviewed on ‘primary to secondary transition’ analysed only this outcome in the study. In contrast, all the publications we reviewed for grade progression did not solely focus on exploring children's progression between grades; they analysed other outcomes such as dropout, completion, and school entry.\nBetween 1998 and 2013, journal publications on longitudinal studies which explored children's school outcomes in LMICs were scarce (Tables 3 and 4). Table 3 shows publications that used longitudinal data and analysed one of the school outcomes that we searched. Table 4 also shows publications which used longitudinal data but where more than one outcome was analysed. 
Of the 132 that we reviewed, 33 made use of longitudinal data. In Table 3, we see that the use of a longitudinal data source has been most frequent among studies where school enrolment is the main outcome variable (10 publications). Five of the 19 studies on school dropout made use of longitudinal surveys compared to 3 of the 19 studies on school attendance. The publications on school completion and transition from primary to secondary school had one study each where longitudinal data were used. In Table 4, 13 studies (of the 38 studies that analysed more than one school outcome) were found to have made use of longitudinal data.\nPublications using longitudinal data which explored mainly one school outcome arranged by the source of data that was used, country in which data were collected and reference for the publication\nLongitudinal study (2000–2003)\nKanchanaburi Demographic Surveillance System (2000–2004)\nAfrican Centre for Health and Population Studies\nPanel data (2004–2007)\nPanel data from KwaZulu-Natal Income Dynamics Study (1993–1998)\nAPHRC household data (2000–2005)\nAPHRC household data (2005–2009)\nAPHRC 2005 schooling history data\nAPHRC household data (2005–2009)\nEthiopian Environmental Household Study (2000–2007)\nThailand\nThailand\nSouth Africa\nKenya\nSouth Africa\nKenya\nKenya\nKenya\nKenya\nEthiopia\nJampaklay (6)\nMahaarcha and Kittisuksathit (7)\nCase et al. (8)\nNishimura and Yamano (9)\nHanda and Peterman (10)\nNgware et al. (11)\nOketch et al. (12)\nOketch et al. (13)\nOketch et al. (14)\nLindskog (15)\nPanel household survey (1991–1994)\nPASADA community faith-based agency\nYoung Lives household survey\nTanzania\nTanzania\nIndia\nAinsworth et al. (16)\nNg’ondi (17)\nWoodhead et al. 
(18)\nKanchanaburi Demographic Surveillance System (2001–2004)\nCommunity and School Studies data (2007–2009)\n2009–2011 panel data set\nLongitudinal school-based dropout study (1999–2001)\nIndividual-level data (2008–2009)\nThailand\nBangladesh\nChina\nKenya\nCambodia\nKorinek and Punpuing (19)\nSabates et al. (20)\nYi et al. (21)\nNyambedha and Aagaard-Hansen (22)\nNo et al. (23)\nHousehold survey, Uttar Pradesh\nIndia\nSiddhu (24)\nNang Rong Social (1984, 1994, 2004)\nThailand\nPiotrowski and Paat (25)\nAPHRC (African Population Health Research Centre) collects data in an urban demographic surveillance system in Nairobi, Kenya: Viwandani and Korogocho (slums); Jericho and Harambee (non-slum).\nPublications which used longitudinal data and analysed multiple school outcomes arranged by the source of data that was used, country in which data were collected and reference for the publication\nThere was some variation in the data source of the longitudinal surveys and the countries in which the surveys were conducted. The surveys were more likely to have been conducted in countries in sub-Saharan Africa (21 publications) and Asia (11 publications). The most frequently studied countries were South Africa (7 publications) and Kenya (7 publications). The data sources from the studies on these countries were similar. Four of the seven studies on South Africa used data from the Demographic Surveillance Area in KwaZulu-Natal (8, 10, 32, 33); two used data from the Birth-to-Twenty cohort panel study (27, 37); and the remaining study used data from the Education Management Information System (38). For the studies on Kenya, data from Nairobi's Demographic Surveillance System sites collected under the African and Population Health Research Centre's Education Program were the most frequent source (11–14). 
Three of the studies however used data from elsewhere: 1) Evans and Miguel (29) used panel data collected from a Pupil Questionnaire and Tracking survey between 1998 and 2002 in Busia district; 2) Nyambedha and Aagaard-Hansen (22) analysed data from a school-based dropout study in Western Kenya; and 3) Nishimura and Yamano (9) made use of panel data collected from household community survey in rural Kenya.\nOther countries such as Thailand (four publications), India (three publications), China (two publications), Ethiopia (two publications), and Tanzania (two publications) were also studied using longitudinal surveys. Three of the four studies which performed longitudinal analyses for Thailand used the same data from the Demographic Surveillance System site in Kanchanaburi Province (6, 7, 25). The studies on India all used different data sources: one study used a household survey from Uttar Pradesh (24); another study used data from the Young Lives household survey (18); and the last study used school panel data (30). The studies on China (21, 35), Ethiopia (15, 26), and Tanzania (16, 17) also used data from different sources.\nVery few of the studies that we reviewed performed cross-country analyses (Table 5). Of the 132 publications that we reviewed, 17 performed cross-country analyses. Studies which explored school enrolment (n =6) as the main outcome had the most number of cross-country publications followed by those on dropout (n=3) and attendance (n =2). Countries in sub-Saharan Africa were the most likely to be included in comparative studies: 13 of the 17 studies were focused only on countries in the sub-Saharan context. 
Among the remaining studies, four were focused on LMICs more broadly (39–41) with one of these studies analysing data from low-income countries only (42).\nPublications that used data from more than one country arranged by the source of data that was used, country in which data were collected and reference for the publication\nWorld Bank Unit record household data sets\nDemographic and Health Survey (DHS)\n15 African countries\n30 countries in Africa\nKakwani et al. (43)\nLongwe and Smits (44)\nCross-sectional surveys\nDHS and Integrated Household Survey (IHS)\nCase study\nDHS\nCase study: Ministry of Education, United Nations, interviews, survey\nCase studies: interviews and observations of schools\nMalawi and Kenya\n34 sub-Saharan African countries\nGhana, Nigeria and Togo\n21 poor countries\nGuinea and Ethiopia\nJamaica, Kenya, Tanzania, Ghana, Indonesia, Pakistan\nSchafer (45)\nSmith-Greenaway and Heckert (46)\nTuwor and Sossou (47)\nFilmer (42)\nColclough et al. (48)\nHeyneman and Stern (41)\nNational Survey of Adolescents\nDHS\nDHS\nBurkina Faso, Uganda, Ghana, Malawi\nBurkina Faso, Cameroon, Ivory Coast, Guinea, Togo\n20 countries in sub-Saharan Africa\nBiddlecom et al. (49)\nLloyd and Mensch (50)\nMelhado (51)\nMultiple Indicator Cluster Survey; DHS\nAfrica\nLloyd and Hewett (52)\nDHS\nArmed conflict data set of the international peace research institute\nDHS\nDHS\nEducation Management Information Systems\nKenya, Malawi, Nigeria, Tanzania, Uganda, and Zambia\n43 countries in Africa\nDeveloping countries\nGlobal\nSub-Saharan Africa\nLewin and Sabates (53)\nPoirier (54)\nGrant and Behrman (40)\nFilmer and Pritchett (39)\nLewin (4)\nThe majority of the data used in these studies originated from cross-sectional household surveys. The Demographic and Health Survey (DHS) was the most frequently used data source: 8 of the 17 studies used the DHS for analyses. The Integrated Household Survey and Multiple Indicator Cluster Surveys were also used (46, 52). 
These surveys, like the DHS, are large-scale surveys designed to be nationally representative and are used to collect demographic, health, poverty, and education indicators in LMICs. Biddlecom et al. (49) also used a large-scale survey (i.e. National Survey of Adolescents) although this survey is administered only in four countries in sub-Saharan Africa: Ghana, Burkina Faso, Uganda, and Malawi. Some studies used a case study approach, triangulating different sources of data, for their research (41, 48, 47). Among the three remaining studies, one used data from Education Management Information Systems (4); another used the World Bank Unit record household data sets (43); and the last made use of data from the armed conflict data set of the international peace research institute (54).", "The first objective of this paper has been to identify gaps in studies on children's school access in LMICs. The main gaps which we have identified can be summarised as follows:\nGrade progression, primary to secondary school transition, and completion were the least studied school outcomes.\nAround a quarter of studies (33 out of 132 publications) in our review used data collected over time.\nStudies which used longitudinal data were more likely to have been conducted in South Africa, Kenya, and Thailand. The data from these studies were collected mainly from Demographic Surveillance System sites.\nJust over one-tenth of studies (17 out of 132 publications) in our review performed cross-country analyses. More than two-thirds of the cross-country analyses (13 out of 17 publications) were focused only on countries in sub-Saharan Africa. The most frequently studied countries were Ghana, Malawi, and Uganda.\nLarge-scale cross-sectional surveys were most frequently used to perform cross-country analyses; the DHS was the main data source.\nData from HDSS sites operating within the INDEPTH Network can contribute to narrowing the gaps which have been highlighted in this review. The INDEPTH Network oversees and coordinates multisite research activities in 52 HDSS sites in 20 LMICs in Africa, Asia, and Oceania (Table 6). Data on children's school attendance, including the grade and level of education being attended, are routinely collected among the population under surveillance within the HDSS sites. Children's school data are often enumerated at the beginning of the academic school year. These data can therefore be compared across years to observe whether a child returns to school and which grade a child attends from year to year. 
Where data are collected more than once a year, as in Ifakara (Tanzania) and Ouagadougou (Burkina Faso) for instance, we can observe disruptions in children's schooling during the academic year, helping us to understand access beyond simple enrolment. That is, the data can be used to answer process-driven questions such as: what happens to children when they enter school; how do children move from one grade to the next; and how do they transition from one level of education to the next. Exploring these questions can contribute to narrowing the deficit in studies on grade progression, primary school completion, and primary to secondary school transition.\nHealth and Demographic Surveillance System sites within the INDEPTH Network arranged by continents\nBurkina Faso: Ouagadougou; Nouna; Sapone; Kaya; Nanoro\nCote D'Ivoire: Taabo\nEthiopia: Gilgel Gibe; Kersa; Butajira; Dabat; Kilite Awlaelo\nThe Gambia: Farafenni; West Kiang\nGhana: Navrongo; Dodowa; Kintampo\nGuinea Bissau: Bandim\nKenya: Kisumu; Kombewa; Mbita; Kilifi; Nairobi\nMalawi: Karonga\nMozambique: Chokwe; Manhica\nNigeria: Nahuche; Cross River\nSenegal: Bandafassi; Niakhar; Mlomp\nSouth Africa: ACDIS; Agincourt; Dikgale\nTanzania: Ifakara; Rufiji; Magu\nUganda: Rakai; Iganga/Mayuge; Kyamulibwa\nBangladesh: Matlab; Chakaria; Bandarban\nIndia: Ballabgarh; Birbhum; Vadu\nIndonesia: Purworejo\nThailand: Kanchanaburi\nVietnam: Chililab; Dodalab; Filabavi\nPapua New Guinea: Wosera; PIH\nThe longitudinal design of the HDSS offers significant potential for studying children's schooling outcomes. The operation of the HDSS allows children to be continuously observed and tracked from the year they enter school. This provides rich data that can be used to perform detailed analyses of household schooling decisions over time. Information is also collected at the household and community levels. At the household level, questions are administered on socio-economic and demographic characteristics of the household. 
At the community level, information is available on school supply as well as type of school, and access to infrastructure, services, and amenities. Data collected at the household level make it possible to observe how changes within the home can affect decisions to send a child to school. Similar analyses can be applied to understand how changes within communities can affect schooling outcomes.\nThe longitudinal setup of the HDSS also enables us to observe how educational programmes and policies can affect children's schooling. Since 2000, governments in LMICs have introduced a series of measures to expand access, such as school feeding policies, girl-friendly policies, and capitation grants (55, 56). Often, however, these policies are assessed at a national level using large-scale, cross-sectional surveys to estimate enrolment ratios and levels of attainment (40, 53). Using the HDSS sites, it is possible to observe to what extent universal primary education (UPE) policies affected children's schooling behaviour and analyse how children progressed in the school system once they entered. It is also possible to compare within countries (for countries with multiple HDSS sites) how responses to education policies and programmes varied between localities. The longitudinal structure of the HDSS data can therefore make a significant contribution to educational studies in LMICs by enabling us to observe change over time and explore the temporal sequence of events.\nThe diversity of countries in the INDEPTH Network presents another way in which data from HDSS sites can make a contribution to educational studies in LMICs. As noted above, there are 20 countries within the INDEPTH Network in which there are 52 HDSS sites. The majority of the HDSS sites are in sub-Saharan Africa (39 out of 52 sites); there are 11 HDSS sites in Asia and 2 HDSS sites in Oceania. 
In sub-Saharan Africa, the HDSS sites are located in 14 countries; in Asia they are in 5 countries; and in Oceania the 2 HDSSs are located in the same country. The countries in sub-Saharan Africa include Burkina Faso, Cote D'Ivoire, Ethiopia, The Gambia, Ghana, Guinea Bissau, Kenya, Malawi, Mozambique, Nigeria, Senegal, South Africa, Tanzania, and Uganda. In Asia the countries are Bangladesh, India, Indonesia, Thailand, and Vietnam; in Oceania there is Papua New Guinea. The majority of comparative studies which have so far been conducted have focused on countries in sub-Saharan Africa, namely Ghana, Uganda, and Malawi. The countries within the INDEPTH Network are diverse and can be used to form comparisons between African and Asian countries as well as with Papua New Guinea. Even within the same continent, there are many countries which have so far been little explored. In the sub-Saharan context for instance, so-called Francophone countries have been less represented in the literature. Children's school access can be compared between these countries and the others in the sub-region as well as with those in Asia.\nAmong the cross-country studies that we reviewed, cross-sectional surveys designed to be nationally representative were mainly used, with the DHS being the most frequently used survey. One of the constraints of using the DHS for studying education outcomes is that the survey does not collect information on school supply variables. Therefore, apart from Filmer's (42) study which used a special round of the DHS that had collected information on distance to school, none of the studies could account for school supply variables. Another limitation of the DHS is that analyses cannot be performed to understand how patterns and trends in access to school change over time. 
The most common theme among the cross-country studies that we reviewed was to demonstrate levels of school enrolment through univariate and bivariate analyses (controlling for sex of the child, household poverty, and area of residence). Data from the HDSS sites can contribute to narrowing these gaps by developing more complex and robust models which account for both supply and demand variables. These models can be applied across multiple HDSS sites between countries to assess variations in the factors which affect children's schooling. Additionally, longitudinal models can be developed to evaluate how the determinants of children's schooling outcomes have changed over time. Assessing change in the determinants of schooling outcomes is justified by the need to target resources more efficiently to areas which have the strongest impact on access.\nSurveys conducted at a national level were more frequently used in the cross-country studies. The HDSS sites, in contrast, often focus on smaller geographic and administrative regions and uniquely follow marginalised populations such as those in remote rural areas or poor urban informal settlements. Children living in marginalised populations such as urban informal settlements or rural communities have the least access to school (13, 14, 48). These localities are often resource deprived, lacking access to school infrastructure, particularly schools of good quality (57, 58). In these populations, children from poor households and girls are confronted with severe barriers to enter, progress through, and complete primary school, and to transition to secondary school (53, 59, 60). There are few studies which utilise survey data over time to enquire how access among marginalised populations has changed over time and how changes within these contexts affect changes in children's schooling behaviour. Data collected at INDEPTH HDSS sites can contribute to narrowing this gap in the literature. 
In addition to forming comparisons between countries, analyses can be performed on multiple HDSS sites within countries, as has been done by studies which have used the Nairobi HDSS (12–14). The emphasis is to uncover how variations both between and within countries can influence a household's decision-making process to invest in a child's education over time. The location and size of the population under surveillance within HDSS sites therefore offer yet another advantage for conducting more nuanced and detailed comparative analyses.", "The gaps which we have identified through our literature review suggest a significant role for longitudinal data in LMICs to explore educational outcomes beyond school enrolment and attendance. As we move towards a post-2015 development agenda, a broader conceptualisation of school access is likely to become more relevant, demanding a shift away from a dichotomous understanding of school access to one where it is understood as a continuum, a process in which children enter, remain, progress, complete primary school, and transition to higher levels of education. Adopting this alternative approach to understanding school access implies a significant role for studies conducted over time in future research. Longitudinal studies can be useful for observing children's school access as a continuum. Here, data collected repeatedly within HDSS sites can contribute to a better understanding of those school outcomes which have been little explored in educational studies in LMICs. Furthermore, the HDSS sites operate in populations which have been found to be the most marginalised in school access, namely in rural and poor urban areas. The data collected from these sites can be used as evidence to design more targeted policy initiatives for improving participation and retention rates among children in deprived populations." ]
[ "methods", "results", "discussion", "conclusions" ]
[ "school access", "enrolment", "attendance", "grade progression", "dropout", "primary to secondary school transition", "completion" ]
Methods: We conducted this research in three stages. In the first stage, we performed a systematic literature review of studies using Web of Science, a reference database holding citations for every discipline and world region. We searched for six phrases including: ‘school enrolment’, ‘school attendance’, ‘grade progression’, ‘school dropout’, ‘primary to secondary school transition’, and ‘school completion’. Each search was defined by journal publications in LMICs in Africa, Asia, and Oceania because these are the countries which form the INDEPTH Network. Also, we restricted our search to studies conducted between 1998 and 2013 as the INDEPTH Network was established in 1998 and the EFA was included in the MDGs in the year 2000. From our literature search, 1,481 references were returned: grade progression (418 references), primary to secondary school transition (329 references), school attendance (274 references), school enrolment (234 references), school completion (143 references), and school dropout (83 references). Of the 1,481 references returned, only 132 were relevant to our focus on school access. In the second stage, we reviewed the 132 references and summarised them according to our key phrases or school outcomes. Finally, we made note of all studies that used longitudinal data sources and studies that used data from more than one country. Results: This section presents our results from the literature review. We first present the publications which we found to be relevant to our search; we summarise findings according to publications which focused mainly on one of the six school outcomes that we searched and those which explored more than one of the school outcomes (Table 2). Subsequently, the findings from our analysis of studies using longitudinal data and cross-country data are presented, respectively. 
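As a quick arithmetic check, the per-phrase reference counts reported above sum to the stated total of 1,481 returned references; a minimal sketch (the dictionary below simply restates the counts from the text):

```python
# Reference counts per search phrase, as reported from the Web of Science search.
search_counts = {
    "grade progression": 418,
    "primary to secondary school transition": 329,
    "school attendance": 274,
    "school enrolment": 234,
    "school completion": 143,
    "school dropout": 83,
}

total_references = sum(search_counts.values())  # 1,481 references returned
relevant = 132                                  # references relevant to school access

print(total_references)                         # 1481
print(round(relevant / total_references * 100)) # roughly 9% of returned references were relevant
```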
All references obtained from search in Web of Science by school outcome From our search, a total of 132 references were found to be relevant to children's schooling as framed within CREATE's zones of exclusion. Having reviewed these references, ‘school enrolment’ was the most analysed school outcome (71 publications). More than half of the studies which analysed school enrolment as an outcome focused mainly on children's enrolment (49 out of 71 publications). ‘School attendance’ (24 publications) and ‘school dropout’ (24 publications) were the second most analysed outcomes. As with school enrolment, the majority of studies published on school attendance and school dropout were focused singularly on exploring these outcomes – 19 out of the 24 publications for school attendance and 19 out of the 24 publications for dropout were focused mainly on analysing children's attendance and dropout, respectively. The least studied school outcomes were ‘grade progression’ (3 publications) and ‘primary to secondary school transition’ (3 publications). Few studies have also been conducted on ‘school completion’ (7 publications). All the publications that we reviewed on ‘primary to secondary transition’ analysed only this outcome in the study. In contrast, all the publications we reviewed for grade progression did not solely focus on exploring children's progression between grades; they analysed other outcomes such as dropout, completion, and school entry. Between 1998 and 2013, journal publications on longitudinal studies which explored children's school outcomes in LMICs were scarce (Tables 3 and 4). Table 3 shows publications that used longitudinal data and analysed one of the school outcomes that we searched. Table 4 also shows publications which used longitudinal data but where more than one outcome was analysed. Of the 132 that we reviewed, 33 made use of longitudinal data. 
In Table 3, we see that the use of a longitudinal data source has been most frequent among studies where school enrolment is the main outcome variable (10 publications). Five of the 19 studies on school dropout made use of longitudinal surveys compared to 3 of the 19 studies on school attendance. The publications on school completion and transition from primary to secondary school had one study each where longitudinal data were used. In Table 4, 13 studies (of the 38 studies that analysed more than one school outcome) were found to have made use of longitudinal data. Publications using longitudinal data which explored mainly one school outcome arranged by the source of data that was used, country in which data were collected and reference for the publication Longitudinal study (2000–2003) Kanchanaburi Demographic Surveillance System (2000–2004) African Centre for Health and Population Studies Panel data (2004–2007) Panel data from KwaZulu-Natal Income Dynamics Study (1993–1998) APHRC household data (2000–2005) APHRC household data (2005–2009) APHRC 2005 schooling history data APHRC household data (2005–2009) Ethiopian Environmental Household Study (2000–2007) Thailand Thailand South Africa Kenya South Africa Kenya Kenya Kenya Kenya Ethiopia Jampaklay (6) Mahaarcha and Kittisuksathit (7) Case et al. (8) Nishimura and Yamano (9) Handa and Peterman (10) Ngware et al. (11) Oketch et al. (12) Oketch et al. (13) Oketch et al. (14) Lindskog (15) Panel household survey (1991–1994) PASADA community faith-based agency Young Lives household survey Tanzania Tanzania India Ainsworth et al. (16) Ng’ondi (17) Woodhead et al. (18) Kanchanaburi Demographic Surveillance System (2001–2004) Community and School Studies data (2007–2009) 2009–2011 panel data set Longitudinal school-based dropout study (1999–2001) Individual-level data (2008–2009) Thailand Bangladesh China Kenya Cambodia Korinek and Punpuing (19) Sabates et al. (20) Yi et al. 
(21) Nyambedha and Aagaard-Hansen (22) No et al. (23) Household survey, Uttar Pradesh India Siddhu (24) Nang Rong Social (1984, 1994, 2004) Thailand Piotrowski and Paat (25) APHRC (African Population Health Research Centre) collects data in an urban demographic surveillance system in Nairobi, Kenya: Viwandani and Korogocho (slums); Jericho and Harambee (non-slum). Publications which used longitudinal data and analysed multiple school outcomes arranged by the source of data that was used, country in which data were collected and reference for the publication There was some variation in the data source of the longitudinal surveys and the countries in which the surveys were conducted. The surveys were more likely to have been conducted in countries in sub-Saharan Africa (21 publications) and Asia (11 publications). The most frequently studied countries were South Africa (7 publications) and Kenya (7 publications). The data sources from the studies on these countries were similar. Four of the seven studies on South Africa used data from the Demographic Surveillance Area in KwaZulu-Natal (8, 10, 32, 33); two used data from the Birth-to-Twenty cohort panel study (27, 37); and the remaining study used data from the Education Management Information System (38). For the studies on Kenya, data from Nairobi's Demographic Surveillance System sites collected under the African and Population Health Research Centre's Education Program were the most frequent source (11–14). Three of the studies however used data from elsewhere: 1) Evans and Miguel (29) used panel data collected from a Pupil Questionnaire and Tracking survey between 1998 and 2002 in Busia district; 2) Nyambedha and Aagaard-Hansen (22) analysed data from a school-based dropout study in Western Kenya; and 3) Nishimura and Yamano (9) made use of panel data collected from household community survey in rural Kenya. 
Other countries such as Thailand (four publications), India (three publications), China (two publications), Ethiopia (two publications), and Tanzania (two publications) were also studied using longitudinal surveys. Three of the four studies which performed longitudinal analyses for Thailand used the same data from the Demographic Surveillance System site in Kanchanaburi Province (6, 7, 25). The studies on India all used different data sources: one study used a household survey from Uttar Pradesh (24); another study used data from the Young Lives household survey (18); and the last study used school panel data (30). The studies on China (21, 35), Ethiopia (15, 26), and Tanzania (16, 17) also used data from different sources. Very few of the studies that we reviewed performed cross-country analyses (Table 5). Of the 132 publications that we reviewed, 17 performed cross-country analyses. Studies which explored school enrolment (n = 6) as the main outcome had the largest number of cross-country publications, followed by those on dropout (n = 3) and attendance (n = 2). Countries in sub-Saharan Africa were the most likely to be included in comparative studies: 13 of the 17 studies were focused only on countries in the sub-Saharan context. Among the remaining studies, four were focused on LMICs more broadly (39–41), with one of these studies analysing data from low-income countries only (42). Publications that used data from more than one country arranged by the source of data that was used, country in which data were collected and reference for the publication World Bank Unit record household data sets Demographic and Health Survey (DHS) 15 African countries 30 countries in Africa Kakwani et al. 
(43) Longwe and Smits (44) Cross-sectional surveys DHS and Integrated Household Survey (IHS) Case study DHS Case study: Ministry of Education, United Nations, interviews, survey Case studies: interviews and observations of schools Malawi and Kenya 34 sub-Saharan African countries Ghana, Nigeria and Togo 21 poor countries Guinea and Ethiopia Jamaica, Kenya, Tanzania, Ghana, Indonesia, Pakistan Schafer (45) Smith-Greenaway and Heckert (46) Tuwor and Sossou (47) Filmer (42) Colclough et al. (48) Heyneman and Stern (41) National Survey of Adolescents DHS DHS Burkina Faso, Uganda, Ghana, Malawi Burkina Faso, Cameroon, Ivory Coast, Guinea, Togo 20 countries in sub-Saharan Africa Biddlecom et al. (49) Lloyd and Mensch (50) Melhado (51) Multiple Indicator Cluster Survey; DHS Africa Lloyd and Hewett (52) DHS Armed conflict data set of the international peace research institute DHS DHS Education Management Information Systems Kenya, Malawi, Nigeria, Tanzania, Uganda, and Zambia 43 countries in Africa Developing countries Global Sub-Saharan Africa Lewin and Sabates (53) Poirier (54) Grant and Behrman (40) Filmer and Pritchett (39) Lewin (4) The majority of the data used in these studies originated from cross-sectional household surveys. The Demographic and Health Survey (DHS) was the most frequently used data source: 8 of the 17 studies used the DHS for analyses. The Integrated Household Survey and Multiple Indicator Cluster Surveys were also used (46, 52). These surveys, like the DHS, are large-scale surveys designed to be nationally representative, used to collect demographic, health, poverty, and education indicators in LMICs. Biddlecom et al. (49) also used a large-scale survey (i.e. the National Survey of Adolescents), although this survey is administered in only four countries in sub-Saharan Africa: Ghana, Burkina Faso, Uganda, and Malawi. Some studies used a case study approach for their research, triangulating different sources of data (41, 48, 47). 
Among the three remaining studies, one used data from Education Management Information Systems (4); another used the World Bank Unit record household data sets (43); and the last made use of data from the armed conflict data set of the international peace research institute (54). Discussion: The first objective of this paper has been to identify gaps in studies on children's school access in LMICs. The main gaps which we have identified can be summarised as follows: Grade progression, primary to secondary school transition, and completion were the least studied school outcomes. Around a quarter of studies (33 out of 132 publications) in our review used data collected over time. Studies which used longitudinal data were more likely to have been conducted in South Africa, Kenya, and Thailand. The data from these studies were collected mainly from Demographic Surveillance System sites. Just over one-tenth of studies (17 out of 132 publications) in our review performed cross-country analyses. More than two-thirds of the cross-country analyses (13 out of 17 publications) were focused only on countries in sub-Saharan Africa. The most frequently studied countries were Ghana, Malawi, and Uganda. Large-scale cross-sectional surveys were most frequently used to perform cross-country analyses; the DHS was the main data source. Data from HDSS sites operating within the INDEPTH Network can contribute to narrowing the gaps which have been highlighted in this review. The INDEPTH Network oversees and coordinates multisite research activities in 52 HDSS sites in 20 LMICs in Africa, Asia, and Oceania (Table 6). Data on children's school attendance, including the grade and level of education being attended, are routinely collected among the population under surveillance within the HDSS sites. Children's school data are often enumerated at the beginning of the academic school year. These data can therefore be compared across years to observe whether a child returns to school and which grade a child attends from year to year. Where data are collected more than once a year, as in Ifakara (Tanzania) and Ouagadougou (Burkina Faso) for instance, we can observe disruptions in children's schooling during the academic year, helping us to understand access beyond simple enrolment. That is, the data can be used to answer process-driven questions such as: what happens to children when they enter school; how do children move from one grade to the next; and how do they transition from one level of education to the next. Exploring these questions can contribute to narrowing the deficit in studies on grade progression, primary school completion, and primary to secondary school transition. 
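The kind of year-to-year comparison described above can be illustrated with a small sketch that classifies a child's schooling transitions between consecutive annual rounds; the record layout (year, enrolled, grade) is a hypothetical simplification, not the actual INDEPTH data model:

```python
# Classify year-to-year schooling transitions from annual HDSS-style rounds.
# Each round records whether the child is enrolled and, if so, in which grade.

def classify_transition(prev, curr):
    """Compare a child's schooling status across two consecutive rounds."""
    if prev["enrolled"] and not curr["enrolled"]:
        return "dropout"
    if not prev["enrolled"] and curr["enrolled"]:
        return "(re)entry"
    if prev["enrolled"] and curr["enrolled"]:
        if curr["grade"] > prev["grade"]:
            return "progression"
        return "repetition"
    return "out of school"

rounds = [
    {"year": 2010, "enrolled": True, "grade": 3},
    {"year": 2011, "enrolled": True, "grade": 4},     # moved up a grade
    {"year": 2012, "enrolled": True, "grade": 4},     # repeated grade 4
    {"year": 2013, "enrolled": False, "grade": None}, # left school
]

history = [classify_transition(a, b) for a, b in zip(rounds, rounds[1:])]
print(history)  # ['progression', 'repetition', 'dropout']
```

Aggregating such per-child histories across a surveillance site is one way longitudinal rounds can support the grade progression, completion, and transition analyses that the review found scarce.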
Health and Demographic Surveillance System sites within the INDEPTH Network arranged by continents Burkina Faso: Ouagadougou; Nouna; Sapone; Kaya; Nanoro Cote D'Ivoire: Taabo Ethiopia: Gilgel Gibe; Kersa; Butajira; Dabat; Kilite Awlaelo The Gambia: Farafenni; West Kiang Ghana: Navrongo; Dodowa; Kintampo Guinea Bissau: Bandim Kenya: Kisumu; Kombewa; Mbita; Kilifi; Nairobi Malawi: Karonga Mozambique: Chokwe; Manhica Nigeria: Nahuche; Cross River Senegal: Bandafassi; Niakhar; Mlomp South Africa: ACDIS; Agincourt; Dikgale Tanzania: Ifakara; Rufiji; Magu Uganda: Rakai; Iganga/Mayuge; Kyamulibwa Bangladesh: Matlab; Chakaria; Bandarban India: Ballabgarh; Birbhum; Vadu Indonesia: Purworejo Thailand: Kanchanaburi Vietnam: Chililab; Dodalab; Filabavi Papua New Guinea: Wosera; PIH The longitudinal design of the HDSS offers significant potential for studying children's schooling outcomes. The operation of the HDSS allows children to be continuously observed and tracked from the year they enter school. This provides rich data that can be used to perform detailed analyses of household schooling decisions over time. Information is also collected at the household and community levels. At the household level, questions are administered on socio-economic and demographic characteristics of the household. At the community level, information is available on school supply as well as type of school, and access to infrastructure, services, and amenities. Data collected at the household level make it possible to observe how changes within the home can affect decisions to send a child to school. Similar analyses can be applied to understand how changes within communities can affect schooling outcomes. The longitudinal setup of the HDSS also enables us to observe how educational programmes and policies can affect children's schooling. 
Since 2000, governments in LMICs have introduced a series of measures to expand access, such as school feeding policies, girl-friendly policies, and capitation grants (55, 56). Often, however, these policies are assessed at a national level using large-scale, cross-sectional surveys to estimate enrolment ratios and levels of attainment (40, 53). Using the HDSS sites, it is possible to observe to what extent universal primary education (UPE) policies affected children's schooling behaviour and analyse how children progressed in the school system once they entered. It is also possible to compare within countries (for countries with multiple HDSS sites) how responses to education policies and programmes varied between localities. The longitudinal structure of the HDSS data can therefore make a significant contribution to educational studies in LMICs by enabling us to observe change over time and explore the temporal sequence of events. The diversity of countries in the INDEPTH Network presents another way in which data from HDSS sites can make a contribution to educational studies in LMICs. As noted above, there are 20 countries within the INDEPTH Network in which there are 52 HDSS sites. The majority of the HDSS sites are in sub-Saharan Africa (39 out of 52 sites); there are 11 HDSS sites in Asia and 2 HDSS sites in Oceania. In sub-Saharan Africa, the HDSS sites are located in 14 countries; in Asia they are in 5 countries; and in Oceania the 2 HDSSs are located in the same country. The countries in sub-Saharan Africa include Burkina Faso, Cote D'Ivoire, Ethiopia, The Gambia, Ghana, Guinea Bissau, Kenya, Malawi, Mozambique, Nigeria, Senegal, South Africa, Tanzania, and Uganda. In Asia the countries are Bangladesh, India, Indonesia, Thailand, and Vietnam; in Oceania there is Papua New Guinea. The majority of comparative studies which have so far been conducted have focused on countries in sub-Saharan Africa, namely Ghana, Uganda, and Malawi. 
The countries within the INDEPTH Network are diverse and can be used to form comparisons between African and Asian countries as well as with Papua New Guinea. Even within the same continent, there are many countries which have so far been little explored. In the sub-Saharan context for instance, so-called Francophone countries have been less represented in the literature. Children's school access can be compared between these countries and the others in the sub-region as well as with those in Asia. Among the cross-country studies that we reviewed, cross-sectional surveys designed to be nationally representative were mainly used, with the DHS being the most frequently used survey. One of the constraints of using the DHS for studying education outcomes is that the survey does not collect information on school supply variables. Therefore, apart from Filmer's (42) study which used a special round of the DHS that had collected information on distance to school, none of the studies could account for school supply variables. Another limitation of the DHS is that analyses cannot be performed to understand how patterns and trends in access to school change over time. The most common theme among the cross-country studies that we reviewed was to demonstrate levels of school enrolment through univariate and bivariate analyses (controlling for sex of the child, household poverty, and area of residence). Data from the HDSS sites can contribute to narrowing these gaps by developing more complex and robust models which account for both supply and demand variables. These models can be applied across multiple HDSS sites between countries to assess variations in the factors which affect children's schooling. Additionally, longitudinal models can be developed to evaluate how the determinants of children's schooling outcomes have changed over time. 
Assessing change in the determinants of schooling outcomes is justified by the need to target resources more efficiently to areas which have the strongest impact on access. Surveys conducted at a national level were more frequently used in the cross-country studies. The HDSS sites, in contrast, often focus on smaller geographic and administrative regions and uniquely follow marginalised populations such as those in remote rural areas or poor urban informal settlements. Children living in marginalised populations such as urban informal settlements or rural communities have the least access to school (13, 14, 48). These localities are often resource deprived, lacking access to school infrastructure, particularly schools of good quality (57, 58). In these populations, children from poor households and girls are confronted with severe barriers to enter, progress through, and complete primary school, and to transition to secondary school (53, 59, 60). There are few studies which utilise survey data over time to enquire how access among marginalised populations has changed over time and how changes within these contexts affect changes in children's schooling behaviour. Data collected at INDEPTH HDSS sites can contribute to narrowing this gap in the literature. In addition to forming comparisons between countries, analyses can be performed on multiple HDSS sites within countries, as has been done by studies which have used the Nairobi HDSS (12–14). The emphasis is to uncover how variations both between and within countries can influence a household's decision-making process to invest in a child's education over time. The location and size of the population under surveillance within HDSS sites therefore offer yet another advantage for conducting more nuanced and detailed comparative analyses. 
Conclusions: The gaps which we have identified through our literature review suggest a significant role for longitudinal data in LMICs to explore educational outcomes beyond school enrolment and attendance. As we move towards a post-2015 development agenda, a broader conceptualisation of school access is likely to become more relevant, demanding a shift away from a dichotomous understanding of school access to one where it is understood as a continuum: a process in which children enter, remain, progress, complete primary school, and transition to higher levels of education. Adopting this alternative approach to understanding school access implies a significant role for studies conducted over time in future research. Longitudinal studies are well suited to observing children's school access as a continuum. Here, data collected repeatedly within HDSS sites can contribute to a better understanding of those school outcomes which have been little explored in educational studies in LMICs. Furthermore, the HDSS sites operate in populations which have been found to be the most marginalised in school access, namely in rural and poor urban areas. The data collected from these sites can be used as evidence to design more targeted policy initiatives for improving participation and retention rates among children in deprived populations.
Background: The framework for expanding children's school access in low- and middle-income countries (LMICs) has been directed by universal education policies as part of Education for All since 1990. In measuring progress to universal education, a narrow conceptualisation of access which dichotomises children's participation as being in or out of school has often been assumed. Yet, the actual promise of universal education goes beyond this simple definition to include retention, progression, completion, and learning. Methods: Using Web of Science, we conducted a literature search of studies published in international peer-reviewed journals between 1998 and 2013 in LMICs. The phrases we searched included six school outcomes: school enrolment, school attendance, grade progression, school dropout, primary to secondary school transition, and school completion. From our search, we recorded studies according to: 1) school outcomes; 2) whether longitudinal data were used; and 3) whether data from more than one country were analysed. Results: The area of school access most published is enrolment followed by attendance and dropout. Primary to secondary school transition and grade progression had the least number of publications. Of 132 publications which we found to be relevant to school access, 33 made use of longitudinal data and 17 performed cross-country analyses. Conclusions: The majority of studies published in international peer-reviewed journals on children's school access between 1998 and 2013 were focused on three outcomes: enrolment, attendance, and dropout. Few of these studies used data collected over time or data collected from more than one country for comparative analyses. The contribution of the INDEPTH Network in helping to address these gaps in the literature lies in the longitudinal design of HDSS surveys and in the diversity of countries within the network.
null
null
4,585
339
4
[ "school", "data", "studies", "countries", "publications", "sites", "children", "longitudinal", "hdss", "africa" ]
[ "test", "test" ]
null
null
null
[CONTENT] school access | enrolment | attendance | grade progression | dropout | primary to secondary school transition | completion [SUMMARY]
[CONTENT] school access | enrolment | attendance | grade progression | dropout | primary to secondary school transition | completion [SUMMARY]
[CONTENT] school access | enrolment | attendance | grade progression | dropout | primary to secondary school transition | completion [SUMMARY]
[CONTENT] school access | enrolment | attendance | grade progression | dropout | primary to secondary school transition | completion [SUMMARY]
null
null
[CONTENT] Child | Developing Countries | Education | Global Health | Humans | Poverty | Research | Schools | Students [SUMMARY]
[CONTENT] Child | Developing Countries | Education | Global Health | Humans | Poverty | Research | Schools | Students [SUMMARY]
[CONTENT] Child | Developing Countries | Education | Global Health | Humans | Poverty | Research | Schools | Students [SUMMARY]
[CONTENT] Child | Developing Countries | Education | Global Health | Humans | Poverty | Research | Schools | Students [SUMMARY]
null
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
null
[CONTENT] school | data | studies | countries | publications | sites | children | longitudinal | hdss | africa [SUMMARY]
[CONTENT] school | data | studies | countries | publications | sites | children | longitudinal | hdss | africa [SUMMARY]
[CONTENT] school | data | studies | countries | publications | sites | children | longitudinal | hdss | africa [SUMMARY]
[CONTENT] school | data | studies | countries | publications | sites | children | longitudinal | hdss | africa [SUMMARY]
null
null
[CONTENT] references | school | references school | search | studies | 481 references | 481 references returned | references returned | returned | 481 [SUMMARY]
[CONTENT] data | publications | school | studies | survey | household | study | kenya | countries | analysed [SUMMARY]
[CONTENT] school | school access | access | sites | understanding | understanding school access | continuum | significant role | role | understanding school [SUMMARY]
[CONTENT] school | data | studies | references | countries | publications | sites | hdss | children | access [SUMMARY]
null
null
[CONTENT] between 1998 and 2013 ||| six ||| 1 | 2 | 3 | more than one [SUMMARY]
[CONTENT] ||| ||| 132 | 33 | 17 [SUMMARY]
[CONTENT] between 1998 and 2013 | three ||| more than one ||| HDSS [SUMMARY]
[CONTENT] 1990 ||| ||| ||| between 1998 and 2013 ||| six ||| 1 | 2 | 3 | more than one ||| ||| ||| ||| 132 | 33 | 17 ||| between 1998 and 2013 | three ||| more than one ||| HDSS [SUMMARY]
null
Parental satisfaction and its associated factors towards neonatal intensive care unit service: a cross-sectional study.
36261864
Parental satisfaction is a well-established outcome indicator and tool for assessing a healthcare system's quality, as well as input for developing strategies for providing acceptable patient care. This study aimed to assess parental satisfaction with neonatal intensive care unit service and its associated factors.
BACKGROUND
A cross-sectional study was conducted on parents whose neonates were admitted to the neonatal intensive care unit at Debre Tabor Comprehensive Specialized Hospital in North Central Ethiopia. Data were collected using an adopted EMPATHIC-N instrument on the day of neonatal discharge, after translating the English version of the instrument into the local language (Amharic). Both bivariable and multivariable logistic analyses were done to identify factors associated with parental satisfaction with neonatal intensive care unit service. P < 0.05 with 95% CI was considered statistically significant.
METHOD
The data analysis was done on 385 parents with a response rate of 95.06%. The overall average satisfaction of parents with neonatal intensive care unit service was 47.8% [95% CI= (43.1-52.5)]. The average parental satisfaction of neonatal intensive care unit service in the information dimension was 50.40%; in the care and treatment dimension was 36.9%, in the parental participation dimension was 50.1%, in the organization dimension was 59.0% and the professional attitude dimension was 48.6%. Gender of parents, residency, parental hospital stay, birth weight, and gestational age were factors associated with parental satisfaction.
RESULTS
There was a low level of parental satisfaction with neonatal intensive care unit service. Among the dimensions of EMPATHIC-N, the lowest parental satisfaction score was in the care and treatment dimension, while the highest parental satisfaction score was in the organization dimension.
CONCLUSION
[ "Infant, Newborn", "Humans", "Intensive Care Units, Neonatal", "Personal Satisfaction", "Cross-Sectional Studies", "Parents", "Quality of Health Care" ]
9583552
Introduction
Neonatal intensive care units (NICUs) are areas that require careful risk management with a wide range of neonatal care services [1–3]. They necessitate high-cost and efficient critical care delivery with a multidisciplinary team approach that focuses on preventive strategies for improved outcomes [4–6]. Parental tension and emotion run high when a child is admitted to a neonatal intensive care unit (NICU) due to serious illness [7, 8]. Satisfaction is a belief and attitude about a specific service provided by an institution. Parental and patient satisfaction has become a well-established outcome indicator and tool for assessing a healthcare system’s quality, as well as input for developing strategies and providing accessible, sustainable, economical, and acceptable patient care [7, 9, 10]. Parental satisfaction reflects the balance between parents' expectations of ideal care and their perception of the real and available care [3, 11, 12]. It is also one of the objectives and missions of every health care center that provides NICU services [10, 11]. Parent and patient satisfaction has become an important aspect of hospital management initiatives for quality assurance and accreditation. Parental involvement also has an important role in the delivery of high-quality care, ranging from assisting with activities of daily living to being directly involved in important health care decisions [13, 14]. Parental participation may not always be possible, but effective communication can reduce the effects of crises and improve parental satisfaction [11, 15]. Ethiopia has been working to enhance its healthcare delivery systems by focusing on quality healthcare service, giving special attention to mothers and children. The Ethiopian Health Service Alliance for Quality has agreed that the initial priority area would be self-motivated and transparent partnerships that stimulate innovation in health care quality management and learning across all hospitals [16]. 
Given the scarcity of studies on parental satisfaction with NICU care in Ethiopia, as well as clinical observations of parents complaining about NICU care, it is critical to determine the level of parental satisfaction. Therefore, this study aimed to identify the level of parental satisfaction with NICU care service, and its associated factors, at Debre Tabor Comprehensive Specialized Hospital (DTCSH). This study also helps administrators understand the deficiencies of the hospital’s NICU services and propose corresponding improvement strategies.
null
null
Results
This study was conducted on a total of 385 parents, with a 95.06% response rate. The majority of parents were mothers (224, 58.2%), and 322 (83.6%) were married. Full-term newborns (325, 84.4%) and those with respiratory problems (115, 29.9%) accounted for the largest shares of NICU admissions (Table 1).

Table 1. Socio-demographic characteristics of study participants (n = 385)
Parental gender: Mother 224 (58.2%); Father 161 (41.8%)
Age: 18–24 years 132 (34.3%); 25–39 years 166 (43.1%); 40 and above 87 (22.6%)
Marital status: Married 322 (83.6%); Not married 63 (16.4%)
Residency: Urban 187 (48.6%); Rural 198 (51.4%)
Educational level: No formal education 119 (30.9%); Elementary school 92 (23.9%); Secondary school 83 (21.6%); College and above 91 (23.6%)
Profession: Housewife 70 (18.2%); Farmer 106 (27.5%); Student 59 (15.3%); Government employee 72 (18.7%); Private employee 78 (20.3%)
Parental hospital stay: ≤ 15 days 244 (63.4%); > 15 days 141 (36.6%)
Neonatal gender: Male 188 (48.8%); Female 197 (51.2%)
Birth weight: High 83 (21.5%); Normal 162 (42.1%); Low 140 (36.4%)
Gestational age: Premature 60 (15.6%); Full term 325 (84.4%)
Neonatal hospital stay: ≤ 15 days 235 (61.0%); > 15 days 150 (39.0%)
Admission diagnosis: Infection 65 (16.9%); Respiratory problems 115 (29.9%); Prematurity 59 (15.3%); Gastrointestinal problems 38 (9.9%); Jaundice 47 (12.2%); Neurological problems 21 (5.5%); Cardiological problems 26 (6.8%); Others 14 (3.5%)

Explanatory factor analysis for subscales of parental satisfaction with NICU-care service
Before assessing parental satisfaction levels, confirmatory factor analysis was used to ensure that the measured variables accurately represented the constructs. The factor correlation matrix was used in the analysis. The KMO measure and Bartlett's test of sphericity were used to check the correlation of the measurement variables, yielding a KMO value of 0.875 and a significant Bartlett's test (p = 0.00). The correlation matrix was checked for communality extraction, and all item values were larger than 0.3. 
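The sampling-adequacy check mentioned above (KMO = 0.875) can be illustrated with a minimal pure-Python sketch. The item scores below are invented toy data, not the study's responses; a real analysis would use a dedicated implementation such as factor_analyzer's calculate_kmo.

```python
# Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, toy example.
# KMO = sum(r_ij^2) / (sum(r_ij^2) + sum(q_ij^2)) over i != j, where r_ij are
# pairwise correlations and q_ij are partial correlations derived from the
# inverse of the correlation matrix.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def inv3(m):
    # Closed-form inverse of a 3x3 matrix (adjugate over determinant).
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def kmo(items):
    p = len(items)
    R = [[pearson(items[i], items[j]) for j in range(p)] for i in range(p)]
    S = inv3(R)  # partial correlations come from the inverse of R
    off = [(i, j) for i in range(p) for j in range(p) if i != j]
    r2 = sum(R[i][j] ** 2 for i, j in off)
    q2 = sum((S[i][j] / (S[i][i] * S[j][j]) ** 0.5) ** 2 for i, j in off)
    return r2 / (r2 + q2)

# Hypothetical scores on 3 items from 6 respondents.
items = [[1, 2, 3, 4, 5, 6],
         [2, 2, 4, 4, 6, 5],
         [1, 3, 2, 5, 4, 6]]
adequacy = kmo(items)  # a value in (0, 1]; >= 0.5 is conventionally acceptable
```

The closed-form 3x3 inverse keeps the sketch dependency-free; with more items one would invert the correlation matrix numerically instead.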
The parallel analysis determined the number of item components (factors): two components for parental participation, three for information and for organization, and four for care & treatment and for professional attitude met the eigenvalue criteria, and the varimax-rotated component model was appropriate (Table 2).

Table 2. Factor loadings and identified components of the EMPATHIC-N tool (loadings on components 1–4; items with two values cross-loaded on two components)

Information
- What level of doctor's information do you have regarding the child's expected health outcomes? 0.752
- How satisfied are you with the physicians' and nurses' information similarity? 0.704
- How are you satisfied with doctors' and nurses' honesty in providing information? 0.59
- How satisfied are you with daily discussions with doctors and nurses about your child's care and treatment? 0.58
- How understandable was the information provided by the doctors and nurses? 0.763
- How satisfied were you with the correct information when the child's physical condition deteriorated? 0.428, 0.657
- How satisfied were you with the clear answers to your questions? 0.653
- How clear is the doctor's information about the consequences of the child's treatment? 0.533
- To what extent do you receive clear information about the examinations and tests? 0.713
- To what extent was the information brochure received complete and clear? 0.641
- How much clear information is given regarding a child's illness? 0.626
- Level of received understandable information about the effects of the drugs? 0.606

Care & Treatment
- Level of child's comfort taken into account by the doctors and nurses? 0.826
- Extent of satisfaction with the availability of nurses to support during acute situations? 0.822
- Level of team alertness to the prevention and treatment of pain of the neonate? 0.797
- Level of care taken by the nurses while in the incubator/bed? 0.634, 0.501
- Level of correct medication always administered on time? 0.631, 0.402
- Level of emotional support that has been provided? 0.84
- The doctors and nurses responded well to our own needs. 0.809
- Transferals of care from the neonatal intensive care unit staff to colleagues in the high-care unit or pediatric ward had gone well? 0.782
- Every day we knew who of the doctors and nurses was responsible for our child. 0.695
- How closely did doctors and nurses collaborate during work? 0.578, 0.466
- Level of a common goal: to provide the finest care and treatment for our child and ourselves. 0.774
- Level of physicians and nurses paying close attention to our child's development? 0.756
- The team as a whole was concerned for our child and you. 0.698
- Our child's requirements were met promptly. 0.455
- The extent of doctors' and nurses' professional knowledge of what they are doing? 0.847
- How satisfied are you with the doctors' and nurses' understanding of the child's medical history at the time of admission? 0.765
- How satisfied are you with the rapid actions taken by doctors and nurses when a child's condition deteriorated? 0.723

Parental Participation
- How involved are you in making decisions about our child's care and treatment? 0.803
- The nurses had trained us on the specific aspects of newborn care. 0.796
- We were encouraged to stay close to our children. 0.714
- Before discharge, the care for our child was once more discussed with us. 0.617
- Even during intensive procedures, we could always stay close to our child. 0.841
- The nurses stimulated us to help in the care of our child. 0.837
- The nurses helped us in the bonding with our child. 0.532, 0.575
- We had confidence in the team. 0.561

Organization
- The Neonatology unit made us feel safe. 0.885
- There was a warm atmosphere in the Neonatology unit without hostility. 0.805
- The Neonatology unit was clean. 0.733, 0.404
- The unit could easily be reached by telephone. 0.809
- Our child's incubator or bed was clean. 0.8
- The team worked efficiently. 0.661
- There was enough space around our child's incubator/bed. 0.9
- Noise in the unit was muffled as well as possible. 0.788

Professional Attitude
- Our child's health always came first for the doctors and nurses. 0.799
- The team worked hygienically. 0.747
- Our cultural background was taken into account. 0.728
- The doctors and nurses always took time to listen to us. 0.876
- We felt welcome by the team. 0.804
- Despite the workload, sufficient attention was paid to our child and us by the team. 0.584, 0.508
- Nurses and doctors always introduced themselves by name and function. 0.794
- We received sympathy from the doctors and nurses. 0.786
- At our bedside, the discussion between the doctors and nurses was only about our child. 0.457, 0.582
- The team respected the privacy of our children and us. 0.874
- There was a pleasant atmosphere among the staff. 0.743
- The team showed respect for our child and us. 0.519, 0.609
Reliability of items of EMPATHIC-N tool
The total EMPATHIC-N values and the five domains showed a good level of internal consistency. The five domains' Cronbach's α values range from 0.639 to 0.791, whereas the total EMPATHIC-N Cronbach's α was 0.904. The inter-item correlations (IIC) of the five domains showed significant internal consistency (Table 3).

Table 3. Reliability of items of parental satisfaction with NICU-care service
Dimension | Number of items | Cronbach's α | Mean dimension score (SD) | Maximum possible score | Inter-item correlation (IIC) | Item-discriminant validity (IDV)
Information | 12 | 0.639 | 39.55 (8.42) | 72 | 0.22–0.48* | 0.20–0.57
Care & Treatment | 17 | 0.791 | 81.70 (20.49) | 170 | 0.14–0.64* | 0.25–0.38
Parental Participation | 8 | 0.755 | 24.86 (7.79) | 48 | 0.01–0.71* | 0.37–0.54
Organization | 8 | 0.729 | 25.22 (7.39) | 48 | 0.11–0.64* | 0.34–0.69
Professional Attitude | 12 | 0.694 | 38.106 (9.07) | 72 | 0.13–0.56* | 0.22–0.45
EMPATHIC-N tool | 57 | 0.904 | 209.44 (42.82) | 410 | c | c
*Significant value (p < 0.001); c = not computable
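The Cronbach's α figures above (0.639–0.791 per domain; 0.904 for the full tool) can be reproduced in form with a short pure-Python sketch. The score matrix below is invented toy data for illustration, not the study's responses.

```python
# Cronbach's alpha from a respondents-by-items score matrix:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total scores)
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of k item scores."""
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 parents rating 3 items on an ordinal scale (hypothetical).
toy = [[5, 4, 5], [2, 3, 2], [4, 4, 5], [3, 2, 2]]
alpha = cronbach_alpha(toy)  # ≈ 0.912 for this toy matrix
```

Population variances are used throughout; the (n − 1) correction cancels in the ratio, so sample variances would give the same α.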
Parental satisfaction level with NICU-care service
The overall mean satisfaction level of parents with NICU-care service was 47.8% [95% CI = (43.1–52.5)]. Comparing the domains of parental satisfaction with each other, the lowest score was in the care and treatment domain (36.9%), while the organization domain showed the highest level (59.0%). The information, parental participation, and professional attitude domains showed comparable parental satisfaction scores (Fig. 1).

Fig. 1. Parents' overall and dimensional satisfaction with NICU-care services

Factors associated with the overall parental satisfaction of NICU care service
In this study, bivariable logistic regression showed that parental gender, residency, parental hospital stay, neonatal hospital stay, birth weight, and gestational age had p-values of less than 0.2 and were fitted in a multivariable logistic regression model. However, neonatal hospital stay was not associated in the multivariable model (p > 0.05). The multivariable analysis showed that mothers were 2.16 (AOR = 2.16; 95% CI: 1.28–3.63) times more satisfied than fathers. Parents from rural areas were 2.94 (AOR = 2.94; 95% CI: 1.42–6.06) times more satisfied than urban parents. Parents who stayed 15 days or less were 2.18 (AOR = 2.18; 95% CI: 1.13–4.20) times more satisfied than parents who stayed more than 15 days in the hospital. Also, parents of neonates with normal birth weight (AOR = 2.14; 95% CI: 1.16–3.94) and of full-term neonates (AOR = 2.53; 95% CI: 1.29–4.97) were more satisfied than their counterparts (Table 4). 
Table 4. Factors associated with satisfaction of parents with NICU-care service (n = 385)
Variable | Satisfied | Not satisfied | Crude OR (95% CI) | Adjusted OR (95% CI) | p-value
Gender: Mother | 117 (52.2%) | 107 (47.8%) | 1.53 (1.02, 2.31) | 2.16 (1.28, 3.63) | 0.004*
Gender: Father | 67 (41.6%) | 94 (58.4%) | 1 (reference) | |
Residency: Urban | 68 (36.4%) | 119 (63.6%) | 1 (reference) | |
Residency: Rural | 116 (58.6%) | 82 (41.4%) | 2.48 (1.64, 3.73) | 2.94 (1.42, 6.06) | 0.004*
Parental hospital stay: ≤ 15 days | 140 (57.4%) | 104 (42.6%) | 2.97 (1.92, 4.59) | 2.18 (1.13, 4.20) | 0.019*
Parental hospital stay: > 15 days | 44 (31.2%) | 97 (68.8%) | 1 (reference) | |
Birth weight: High | 34 (41.0%) | 49 (59.0%) | 1 (reference) | |
Birth weight: Normal | 96 (59.3%) | 66 (40.7%) | 2.09 (1.22, 3.59) | 2.14 (1.16, 3.94) | 0.015*
Birth weight: Low | 54 (38.6%) | 86 (61.4%) | 0.91 (0.52, 1.58) | 0.76 (0.41, 1.41) | 0.38
Gestational age: Premature | 18 (30.0%) | 42 (70.0%) | 1 (reference) | |
Gestational age: Full term | 166 (51.1%) | 159 (48.9%) | 2.44 (1.35, 4.41) | 2.53 (1.29, 4.97) | 0.007*
* = p-value < 0.05; 1 = reference category. Note: p-values were extracted from the multivariable logistic regression model.
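The crude odds ratios in Table 4 can be recomputed directly from the satisfied/not-satisfied counts. The sketch below reproduces the gender row (mothers 117/107, fathers 67/94) using the standard Wald confidence interval; this is a generic calculation, not code from the study.

```python
# Crude odds ratio and 95% Wald CI from a 2x2 table.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed satisfied/not satisfied; c/d: reference satisfied/not."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Gender row of Table 4: mothers vs fathers.
or_, lo, hi = odds_ratio_ci(117, 107, 67, 94)
# round(or_, 2), round(lo, 2), round(hi, 2) -> 1.53, 1.02, 2.31 (matches Table 4)
```

The adjusted odds ratios in the table come from the multivariable logistic regression and cannot be reproduced from the marginal counts alone.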
Conclusion
There was a low level of parental satisfaction with neonatal intensive care unit service. Among the dimensions of EMPATHIC-N, the lowest parental satisfaction score was in the care and treatment dimension, while the highest parental satisfaction score was in the organization dimension. As a result, health professionals and hospital administrators should collaborate to improve NICU services to provide high-quality service and satisfy parents.
[ "Methods and materials", "Study area and period", "Inclusion and exclusion criteria", "Sample size and sampling technique", "Data collection instrument and procedures", "Data quality assurance", "Ethical consideration", "Data entry and analysis", "Operational definitions", "Explanatory factor analysis for subscales of parental satisfaction with NICU-care service", "Reliability of items of EMPATHIC-N tool", "Parental satisfaction level with NICU-care service", "Factors associated with the overall parental satisfaction of NICU care service", "Limitations of the study" ]
[ " Study area and period A cross-sectional study was conducted in DTCSH which is a public hospital established in 1934 and located in the South Gondar Zone of Amara Regional State of Ethiopia. It is 97 km to the southwest of Bahir Dar, the capital city of Amara Regional State. According to the 2007 census, the total population of Debre tabor town was 155,596. It has a latitude and longitude of 11051N3801’E11.8500 N 38.0170E with an elevation of 2,706 m (8878ft) above sea level. The hospital provides neonatal intensive care unit service with five separate NICU rooms. According to the hospital’s 2017 report, 1159 neonates were admitted to the NICU, but according to evidence from a chart review in 2020, 1489 neonates were admitted [17]. This study was conducted on parents of neonates who were admitted to NICU at DTCSH from November 05, 2021, to April 30, 2022.\nA cross-sectional study was conducted in DTCSH which is a public hospital established in 1934 and located in the South Gondar Zone of Amara Regional State of Ethiopia. It is 97 km to the southwest of Bahir Dar, the capital city of Amara Regional State. According to the 2007 census, the total population of Debre tabor town was 155,596. It has a latitude and longitude of 11051N3801’E11.8500 N 38.0170E with an elevation of 2,706 m (8878ft) above sea level. The hospital provides neonatal intensive care unit service with five separate NICU rooms. According to the hospital’s 2017 report, 1159 neonates were admitted to the NICU, but according to evidence from a chart review in 2020, 1489 neonates were admitted [17]. This study was conducted on parents of neonates who were admitted to NICU at DTCSH from November 05, 2021, to April 30, 2022.\n Inclusion and exclusion criteria Parents whose neonate was discharged from NICU or transferred to high dependency neonatal ward, parents who can read and write or understand the Amharic language, and neonatal stay in NICU of less than three months were included in the study. 
Whereas parents with neonatal death in the NICU and parents whose neonate was admitted for less than 24 h were excluded from the study.\n Sample size and sampling technique The sample size was determined using the single population proportion formula, taking a proportion (P) of 50%, a 95% confidence interval, and a 5% margin of error (d). 
The sample size was determined using the following formula:\n$$n={\\left({Z}_{\\frac{a}{2}}\\right)}^{2}P(1-P)/{d}^{2}$$\nn = [(1.96)² × 0.5 × (1 − 0.5)] / (0.05)² ≈ 385.\nBy considering a 5% non-response rate, the final sample size was 405.\nThe data were collected from all consecutive parents who met the inclusion criteria until the intended sample size was achieved.\n Data collection instrument and procedures An anonymous Amharic-version questionnaire was used for data collection after the English version of the EMPATHIC-N tool was translated by three language experts and then back-translated into English by three other experts to ensure that the translation was correct. The tool’s content validity was also examined and confirmed by members of the Anesthesia department’s research committee.\nThe tool has been widely used to assess parental satisfaction with NICU care services, and it has strong reliability and validity, with domain reliability (Cronbach’s alpha) values ranging from 0.82 to 0.95 [18–20]. 
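As a quick check, the sample-size arithmetic described above (single population proportion formula with P = 0.5, Z = 1.96, d = 0.05, plus a 5% non-response allowance) can be reproduced in a few lines of Python. This is an illustrative sketch, not part of the original analysis:

```python
import math

# n = Z^2 * P * (1 - P) / d^2, with values stated in the text
def sample_size(p=0.5, z=1.96, d=0.05, non_response=0.05):
    n = (z ** 2) * p * (1 - p) / (d ** 2)      # 384.16
    n = math.ceil(n)                            # round up to 385
    final = math.ceil(n * (1 + non_response))   # add 5% non-response allowance
    return n, final

n, final = sample_size()
print(n, final)  # 385 405
```

The computed values match the 385 and 405 reported in the text.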
There are five domains in the EMPATHIC-N tool: Information (12 questions with a six-point Likert scale); Care & Treatment (17 questions with a ten-point Likert scale); Parental Participation (8 questions with a six-point Likert scale); Organization (8 questions with a six-point Likert scale); and Professional Attitude (12 questions with a six-point Likert scale) [18, 21, 22].\nThe data were collected by three anesthetists on the day of neonatal discharge using the adopted Dutch instrument Empowerment of Parents in The Intensive Care-Neonatology (EMPATHIC-N).\n Data quality assurance To ensure the quality of data, the data collection tool (the questionnaire) was pre-tested on 5% of study parents from Felege Hiwot Comprehensive Specialized Hospital who were not included in the main study. 
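The domain structure just described (item counts and Likert points per domain) directly determines the maximum possible dimension scores that appear later in Table 3. A small sketch, using only the counts stated in the text:

```python
# EMPATHIC-N domains: (number of items, points on the Likert scale)
domains = {
    "Information":            (12, 6),
    "Care & Treatment":       (17, 10),
    "Parental Participation": (8, 6),
    "Organization":           (8, 6),
    "Professional Attitude":  (12, 6),
}

# Maximum possible score per domain = items * Likert points
max_scores = {name: items * points for name, (items, points) in domains.items()}
total_items = sum(items for items, _ in domains.values())
total_max = sum(max_scores.values())

print(max_scores)              # Information: 72, Care & Treatment: 170, ...
print(total_items, total_max)  # 57 410
```

The totals (57 items, maximum score 410) agree with the figures reported for the full tool in Table 3.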
Training was given to data collectors; data were collected and properly filled in the prepared format. Supervision was maintained throughout the data collection period to ensure the accuracy, clarity, and consistency of the collected data.\n Ethical consideration Debre Tabor University provided ethical clearance, and each parent gave written informed consent after being briefed about the purpose of the study.\n Data entry and analysis Data were cleaned, coded, and entered into Epidata version 4.2 and exported to SPSS version 23 for statistical analysis. The adopted EMPATHIC-N instrument was validated (validity, reliability, standard factor loadings, and factor analysis). Cronbach’s alpha was used to determine the reliability of the tool. Exploratory factor analysis was done to test how well the measured variables represent the number of constructs and to identify relationships between the measured items. The inter-item correlation was used to assess the relationship between items on the same scale, whereas item-discriminant validity was used to assess the relationship between scales. After categorizing the overall mean parental satisfaction score, independent variables were analyzed using binary logistic regression against parental satisfaction with NICU care service. 
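Cronbach’s alpha, used above to assess the reliability of the instrument, is k/(k−1) · (1 − Σ var(itemᵢ)/var(total)). A minimal standard-library sketch (the response data below are hypothetical, for illustration only):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per questionnaire item (same respondents)."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(vals) for vals in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical responses from 4 parents to 3 perfectly consistent items -> alpha = 1.0
responses = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(cronbach_alpha(responses))  # 1.0
```

In practice SPSS computes this directly; the sketch only shows what the reported coefficients measure.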
Variables with a p-value of < 0.2 in the bivariable logistic regression were fitted to a multivariable logistic regression, and certain variables were included in the model for their clinical importance. Both the crude odds ratio (COR) in bivariable logistic regression and the adjusted odds ratio (AOR) in multivariable logistic regression, with the corresponding 95% confidence intervals, were calculated to show the strength of association. In the multivariable logistic regression analysis, variables with a p-value of < 0.05 were considered statistically significant. The Mann–Whitney test was used to determine the influencing factors of the parental satisfaction domains, while the Hosmer–Lemeshow goodness-of-fit test was performed to ensure that the analysis model was appropriate.\n Operational definitions Satisfied: parents who scored greater than or equal to the overall mean EMPATHIC-N value were considered satisfied.\nDissatisfied: parents who scored less than the overall mean EMPATHIC-N value were considered dissatisfied.\nHigh birth weight: neonates with a birth weight of more than 4,000 g [23].\nNormal birth weight: neonates with a birth weight range from 2,500 to 4,000 g [24].\nLow birth weight: neonates with a birth weight of lower than 2,500 g [25].", "A cross-sectional study was conducted in DTCSH, a public hospital established in 1934 and located in the South Gondar Zone of the Amhara Regional State of Ethiopia, 97 km southwest of Bahir Dar, the capital city of the Amhara Regional State. According to the 2007 census, the total population of Debre Tabor town was 155,596. The town lies at 11°51′N 38°01′E (11.850°N 38.017°E), at an elevation of 2,706 m (8,878 ft) above sea level. 
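The Satisfied/Dissatisfied cut-off given in the operational definitions (total score at or above the overall mean EMPATHIC-N value) can be sketched as follows; the four total scores are hypothetical, for illustration only:

```python
from statistics import mean

def classify(scores):
    """Label each total EMPATHIC-N score against the overall-mean cut-off."""
    cutoff = mean(scores)
    return ["Satisfied" if s >= cutoff else "Dissatisfied" for s in scores]

# Hypothetical total scores for four parents (overall mean = 210)
labels = classify([200, 220, 180, 240])
print(labels)  # ['Dissatisfied', 'Satisfied', 'Dissatisfied', 'Satisfied']
```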
The hospital provides neonatal intensive care unit service in five separate NICU rooms. According to the hospital’s 2017 report, 1,159 neonates were admitted to the NICU, but according to evidence from a chart review in 2020, 1,489 neonates were admitted [17]. This study was conducted on parents of neonates who were admitted to the NICU at DTCSH from November 05, 2021, to April 30, 2022.", "Parents whose neonate was discharged from the NICU or transferred to the high-dependency neonatal ward, parents who could read and write or understand the Amharic language, and parents whose neonate stayed in the NICU for less than three months were included in the study. Whereas parents with neonatal death in the NICU and parents whose neonate was admitted for less than 24 h were excluded from the study.", "The sample size was determined using the single population proportion formula, taking a proportion (P) of 50%, a 95% confidence interval, and a 5% margin of error (d). The sample size was determined using the following formula:\n$$n={\\left({Z}_{\\frac{a}{2}}\\right)}^{2}P(1-P)/{d}^{2}$$\nn = [(1.96)² × 0.5 × (1 − 0.5)] / (0.05)² ≈ 385.\nBy considering a 5% non-response rate, the final sample size was 405.\nThe data were collected from all consecutive parents who met the inclusion criteria until the intended sample size was achieved.", "An anonymous Amharic-version questionnaire was used for data collection after the English version of the EMPATHIC-N tool was translated by three language experts and then back-translated into English by three other experts to ensure that the translation was correct. 
The tool’s content validity was also examined and confirmed by members of the Anesthesia department’s research committee.\nThe tool has been widely used to assess parental satisfaction with NICU care services, and it has strong reliability and validity, with domain reliability (Cronbach’s alpha) values ranging from 0.82 to 0.95 [18–20]. There are five domains in the EMPATHIC-N tool: Information (12 questions with a six-point Likert scale); Care & Treatment (17 questions with a ten-point Likert scale); Parental Participation (8 questions with a six-point Likert scale); Organization (8 questions with a six-point Likert scale); and Professional Attitude (12 questions with a six-point Likert scale) [18, 21, 22].\nThe data were collected by three anesthetists on the day of neonatal discharge using the adopted Dutch instrument Empowerment of Parents in The Intensive Care-Neonatology (EMPATHIC-N).", "To ensure the quality of data, the data collection tool (the questionnaire) was pre-tested on 5% of study parents from Felege Hiwot Comprehensive Specialized Hospital who were not included in the main study. Training was given to data collectors; data were collected and properly filled in the prepared format. Supervision was maintained throughout the data collection period to ensure the accuracy, clarity, and consistency of the collected data.", "Debre Tabor University provided ethical clearance, and each parent gave written informed consent after being briefed about the purpose of the study.", "Data were cleaned, coded, and entered into Epidata version 4.2 and exported to SPSS version 23 for statistical analysis. The adopted EMPATHIC-N instrument was validated (validity, reliability, standard factor loadings, and factor analysis). Cronbach’s alpha was used to determine the reliability of the tool. 
Exploratory factor analysis was done to test how well the measured variables represent the number of constructs and to identify relationships between the measured items. The inter-item correlation was used to assess the relationship between items on the same scale, whereas item-discriminant validity was used to assess the relationship between scales. After categorizing the overall mean parental satisfaction score, independent variables were analyzed using binary logistic regression against parental satisfaction with NICU care service. Variables with a p-value of < 0.2 in the bivariable logistic regression were fitted to a multivariable logistic regression, and certain variables were included in the model for their clinical importance. Both the crude odds ratio (COR) in bivariable logistic regression and the adjusted odds ratio (AOR) in multivariable logistic regression, with the corresponding 95% confidence intervals, were calculated to show the strength of association. In the multivariable logistic regression analysis, variables with a p-value of < 0.05 were considered statistically significant. The Mann–Whitney test was used to determine the influencing factors of the parental satisfaction domains, while the Hosmer–Lemeshow goodness-of-fit test was performed to ensure that the analysis model was appropriate.", "Satisfied: parents who scored greater than or equal to the overall mean EMPATHIC-N value were considered satisfied.\nDissatisfied: parents who scored less than the overall mean EMPATHIC-N value were considered dissatisfied.\nHigh birth weight: neonates with a birth weight of more than 4,000 g [23].\nNormal birth weight: neonates with a birth weight range from 2,500 to 4,000 g [24].\nLow birth weight: neonates with a birth weight of lower than 2,500 g [25].", "Before assessing parental satisfaction levels, exploratory factor analysis was used to ensure that the measured variables accurately represented the constructs. The factor correlation matrix was used in the analysis. 
Sampling adequacy and the correlation of the measurement variables were checked with the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity, giving a KMO value of 0.875 and a significant Bartlett’s test (p = 0.00). The correlation matrix was checked for communality extraction, and all item values were larger than 0.3. Parallel analysis determined the number of item components (factors): two new components for Parental Participation, three new components each for Information and Organization, and four new components each for Care & Treatment and Professional Attitude met the eigenvalue criteria, and the Varimax component correlation matrix was an appropriate model (Table 2).\n\nTable 2 Factor loadings and identified components of the EMPATHIC-N tool (item; loadings on factors 1–4)\nInformation\n What level of doctor’s information do you have regarding the child’s expected health outcomes?0.752 How satisfied are you with the physicians’ and nurses’ information similarity?0.704 How are you satisfied with doctors’ and nurses’ honesty in providing information?0.59 How satisfied are you with daily discussions with doctors and nurses about your child’s care and treatment?0.58 How understandable was the information provided by the doctors and nurses?0.763 How satisfied were you with the correct information when the child’s physical condition deteriorated?0.4280.657 How satisfied were you with the clear answers to your questions?0.653 How clear is the doctor’s information about the consequences of the child’s treatment?0.533 To what extent do you receive clear information about the examinations and tests?0.713 To what extent the information brochure received was complete and clear?0.641 How much clear information is given regarding a child’s illness?0.626 Level of received understandable information about the effects of the drugs?0.606\nCare & Treatment\n Level of child’s comfort taken into account by the doctors and nurses?0.826 The extent of satisfaction during acute situations on availability of nurses 
to support?0.822 Level of team alertness to the prevention and treatment of pain of neonate?0.797 Level of care taken to nurses while in the incubator/bed?0.6340.501 Level of correct medication always administered on time?0.6310.402 level of emotional support that has been provided?0.84 The doctors and nurses responded well to our own needs0.809 Transferals of care from the neonatal intensive care unit staff to colleagues in the high-care unit or pediatric ward had gone well?0.782 Every day we knew who of the doctors and nurses was responsible for our child.0.695 How closely did doctors and nurses collaborate during work?0.5780.466 Level of a common goal: to provide the finest care and treatment for our child and ourselves.0.774 Level of physicians and nurses paid close attention to our child’s development?0.756 The team as a whole was concerned for our child and you.0.698 Our child’s requirements were met promptly0.455 The extent of doctors’ and nurses’ professional knowledge of what they are doing?0.847 How satisfied are you with the doctors’ and nurses’ understanding of the child’s medical history at the time of admission?0.765 How satisfied are you with the rapid actions taken by doctors and nurses when a child’s condition deteriorated?0.723\nParental Participation\n How involved are you in making decisions about our child’s care and treatment?0.803 The nurses had trained us on the specific aspects of newborn care.0.796 We were encouraged to stay close to our children.0.714 Before discharge, the care for our child was once more discussed with us.0.617 Even during intensive procedures, we could always stay close to our child.0.841 The nurses stimulated us to help in the care of our child0.837 The nurses helped us in the bonding with our child0.5320.575 We had confidence in the team0.561\nOrganization\n The Neonatology unit made us feel safe0.885 There was a warm atmosphere in the Neonatology unit without hostility0.805 The Neonatology unit was clean0.7330.404 
The unit could easily be reached by telephone0.809 Our child’s incubator or bed was clean0.8 The team worked efficiently0.661 There was enough space around our child’s incubator/bed0.9 Noise in the unit was muffled as good as possible0.788\nProfessional Attitude\n Our child’s health always came first for the doctors and nurses0.799 The team worked hygienically0.747 Our cultural background was taken into account0.728 The doctors and nurses always took time to listen to us0.876 We felt welcome by the team0.804 Despite the workload, sufficient attention was paid to our child and us by the team0.5840.508 Nurses and doctors always introduced themselves by name and function0.794 We received sympathy from the doctors and nurses0.786 At our bedside, the discussion between the doctors and nurses was only about our child.0.4570.582 The team respected the privacy of our children and us.0.874 There was a pleasant atmosphere among the staff0.743 The team showed respect for our child and us0.5190.609\n\nFactor loadings and identified components’ of EMPATHIC-N tool", "The total EMPATHIC-N values and the five domains showed a good level of internal consistency. The five domains’ Cronbach’s values range from 0.639 to 0.791, whereas the total EMPATHIC-N Cronbach’s value was 0.904. 
The inter-item correlations (IIC) of the five domains showed significant internal consistency (Table 3).\n\nTable 3 Reliability of items of parental satisfaction with NICU-care service\nDimension | Number of items | Cronbach’s α | Mean dimension score (SD) | Maximum possible dimension score | Inter-item correlation (IIC) | Item-discriminant validity (IDV)\nInformation | 12 | 0.639 | 39.55 (8.42) | 72 | 0.22–0.48* | 0.20–0.57\nCare & Treatment | 17 | 0.791 | 81.70 (20.49) | 170 | 0.14–0.64* | 0.25–0.38\nParental Participation | 8 | 0.755 | 24.86 (7.79) | 48 | 0.01–0.71* | 0.37–0.54\nOrganization | 8 | 0.729 | 25.22 (7.39) | 48 | 0.11–0.64* | 0.34–0.69\nProfessional Attitude | 12 | 0.694 | 38.106 (9.07) | 72 | 0.13–0.56* | 0.22–0.45\nEMPATHIC-N tool | 57 | 0.904 | 209.44 (42.82) | 410 | c | c\n*Significant value (p < 0.001), c = not computable", "The overall mean satisfaction level of parents with the NICU-care service was 47.8% [95% CI: 43.1–52.5]. Comparing the domains of parental satisfaction, the lowest satisfaction score was for the Care & Treatment domain (36.9%), while the Organization domain showed the highest satisfaction level (59.0%). The Information, Parental Participation, and Professional Attitude domains showed comparable parental satisfaction scores (Fig. 1).\n\nFig. 1 Parents’ overall and dimensional satisfaction with NICU-care services", "In this study, the bivariable logistic regression showed that parental gender, residency, parental hospital stay, neonatal hospital stay, birth weight, and gestational age had p-values of less than 0.2 and were fitted to the multivariable logistic regression model. However, neonatal hospital stay was not significantly associated in the multivariable logistic regression (p > 0.05). 
The multivariable logistic analyses showed that mothers were 2.16 (AOR = 2.16; 95% CI: 1.28–3.63) times more likely to be satisfied than fathers. Parents from rural areas were 2.94 (AOR = 2.94; 95% CI: 1.42–6.06) times more likely to be satisfied than those from urban areas. Parents who stayed in the hospital for 15 days or less were 2.18 (AOR = 2.18; 95% CI: 1.13–4.20) times more likely to be satisfied than parents who stayed more than 15 days. Likewise, parents of a neonate with a normal birth weight (AOR = 2.14; 95% CI: 1.16–3.94) and parents of a full-term neonate (AOR = 2.53; 95% CI: 1.29–4.97) were more likely to be satisfied than their counterparts (Table 4).\n\nTable 4 Factors associated with satisfaction of parents in NICU-care service (n = 385)\nVariable | Satisfied | Not satisfied | Crude odds ratio (95% CI) | Adjusted odds ratio (95% CI) | p-value\nGender: Mother | 117 (52.2%) | 107 (47.8%) | 1.53 (1.02, 2.31) | 2.16 (1.28, 3.63) | 0.004*\nGender: Father | 67 (41.6%) | 94 (58.4%) | 1 (reference) | |\nResidency: Urban | 68 (36.4%) | 119 (63.6%) | 1 (reference) | |\nResidency: Rural | 116 (58.6%) | 82 (41.4%) | 2.48 (1.64, 3.73) | 2.94 (1.42, 6.06) | 0.004*\nParental hospital stay: ≤ 15 days | 140 (57.4%) | 104 (42.6%) | 2.97 (1.92, 4.59) | 2.18 (1.13, 4.20) | 0.019*\nParental hospital stay: > 15 days | 44 (31.2%) | 97 (68.8%) | 1 (reference) | |\nBirth weight: High | 34 (41.0%) | 49 (59.0%) | 1 (reference) | |\nBirth weight: Normal | 96 (59.3%) | 66 (40.7%) | 2.09 (1.22, 3.59) | 2.14 (1.16, 3.94) | 0.015*\nBirth weight: Low | 54 (38.6%) | 86 (61.4%) | 0.91 (0.52, 1.58) | 0.76 (0.41, 1.41) | 0.38\nGestational age: Premature | 18 (30.0%) | 42 (70.0%) | 1 (reference) | |\nGestational age: Full term | 166 (51.1%) | 159 (48.9%) | 2.44 (1.35, 4.41) | 2.53 (1.29, 4.97) | 0.007*\n* = p-value < 0.05; 1 = reference; BWt = birth weight\nNote: p-values were extracted from the multivariable logistic regression model", "The study’s limitations include its single-center nature and the lack of analysis of alternatives to logistic regression that might have been more applicable for this investigation." ]
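The crude odds ratios in Table 4 can be reproduced directly from the 2×2 counts using OR = (a·d)/(b·c). A short check against the gender and residency rows:

```python
def crude_or(sat_exposed, unsat_exposed, sat_ref, unsat_ref):
    """Crude odds ratio of being satisfied, exposure group vs reference group."""
    return (sat_exposed * unsat_ref) / (unsat_exposed * sat_ref)

# Counts from Table 4: mothers 117 satisfied / 107 not; fathers 67 / 94 (reference)
print(round(crude_or(117, 107, 67, 94), 2))   # 1.53

# Rural 116 / 82 vs urban 68 / 119 (reference)
print(round(crude_or(116, 82, 68, 119), 2))   # 2.48
```

Both values agree with the CORs reported in the table (1.53 and 2.48); the adjusted odds ratios, by contrast, come from the fitted multivariable model and cannot be recomputed from the counts alone.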
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods and materials", "Study area and period", "Inclusion and exclusion criteria", "Sample size and sampling technique", "Data collection instrument and procedures", "Data quality assurance", "Ethical consideration", "Data entry and analysis", "Operational definitions", "Results", "Exploratory factor analysis for subscales of parental satisfaction with NICU-care service", "Reliability of items of EMPATHIC-N tool", "Parental satisfaction level with NICU-care service", "Factors associated with the overall parental satisfaction of NICU care service", "Discussion", "Limitations of the study", "Conclusion" ]
[ "Neonatal intensive care units (NICUs) are areas that require careful risk management and a wide range of neonatal care services [1–3]. They necessitate high-cost, efficient critical care delivery with a multidisciplinary team approach that focuses on preventive strategies for improved outcomes [4–6]. Parental tension and emotions run high when a child is admitted to a neonatal intensive care unit (NICU) because of serious illness [7, 8].\nSatisfaction is a belief and attitude about a specific service provided by an institution. Parental and patient satisfaction has become a well-established outcome indicator and tool for assessing a healthcare system’s quality, as well as input for developing strategies and providing accessible, sustainable, economical, and acceptable patient care [7, 9, 10]. Parental satisfaction reflects the balance between parents’ expectations of ideal care and their perception of the real, available care [3, 11, 12]. It is also one of the objectives and missions of every health care center that provides NICU care service [10, 11].\nParent and patient satisfaction has become an important aspect of hospital management initiatives for quality assurance and accreditation. Parental involvement also has an important role in the delivery of high-quality care, ranging from assisting with activities of daily living to being directly involved in important health care decisions [13, 14]. Parental participation may not always be possible, but effective communication will reduce the effects of crises and improve parental satisfaction [11, 15].\nEthiopia has been working to enhance its healthcare delivery systems by focusing on quality healthcare services, giving special attention to mothers and children. 
The Ethiopian Health Service Alliance for Quality has agreed that the initial priority area would be self-motivated and transparent partnerships that stimulate innovation in health care quality management and learning across all hospitals [16].\nGiven the scarcity of studies on parental satisfaction with NICU care in Ethiopia, as well as clinical observations of parents complaining about NICU care, it is critical to determine the level of parental satisfaction. Therefore, this study aimed to identify the level of parental satisfaction with NICU care service and its associated factors in Debre Tabor Comprehensive Specialized Hospital (DTCSH). This study also helps administrators understand the deficiencies of the hospital’s NICU services and then propose corresponding improvement strategies.", " Study area and period A cross-sectional study was conducted in DTCSH, a public hospital established in 1934 and located in the South Gondar Zone of the Amhara Regional State of Ethiopia, 97 km southwest of Bahir Dar, the capital city of the Amhara Regional State. 
According to the 2007 census, the total population of Debre Tabor town was 155,596. The town lies at 11°51′N 38°01′E (11.850°N 38.017°E), at an elevation of 2,706 m (8,878 ft) above sea level. The hospital provides neonatal intensive care unit service in five separate NICU rooms. According to the hospital’s 2017 report, 1,159 neonates were admitted to the NICU, but according to evidence from a chart review in 2020, 1,489 neonates were admitted [17]. This study was conducted on parents of neonates who were admitted to the NICU at DTCSH from November 05, 2021, to April 30, 2022.\n Inclusion and exclusion criteria Parents whose neonate was discharged from the NICU or transferred to the high-dependency neonatal ward, parents who could read and write or understand the Amharic language, and parents whose neonate stayed in the NICU for less than three months were included in the study. Whereas parents with neonatal death in the NICU and parents whose neonate was admitted for less than 24 h were excluded from the study.\n Sample size and sampling technique The sample size was determined using the single population proportion formula, taking a proportion (P) of 50%, a 95% confidence interval, and a 5% margin of error (d). 
The sample size was determined using the following formula:\n$$n={\\left({Z}_{\\frac{a}{2}}\\right)}^{2}P(1-P)/{d}^{2}$$\nn = [(1.96)² × 0.5 × (1 − 0.5)] / (0.05)² ≈ 385.\nBy considering a 5% non-response rate, the final sample size was 405.\nThe data were collected from all consecutive parents who met the inclusion criteria until the intended sample size was achieved.\n Data collection instrument and procedures An anonymous Amharic-version questionnaire was used for data collection after the English version of the EMPATHIC-N tool was translated by three language experts and then back-translated into English by three other experts to ensure that the translation was correct. 
The tool’s content validity was also examined and approved by members of the Anesthesia department’s research committee.
The tool has been widely used to assess parental satisfaction with NICU care services and has strong reliability and validity, with Cronbach’s α values of the domains ranging from 0.82 to 0.95 [18–20]. There are five domains in the EMPATHIC-N tool: Information (12 questions on a six-point Likert scale); Care & Treatment (17 questions on a ten-point Likert scale); Parental Participation (8 questions on a six-point Likert scale); Organization (8 questions on a six-point Likert scale); and Professional Attitude (12 questions on a six-point Likert scale) [18, 21, 22].
The data were collected by three anesthetists on the day of neonatal discharge using the adopted Dutch-origin Empowerment of Parents in The Intensive Care-Neonatology (EMPATHIC-N) instrument.
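The domain structure described above fixes the maximum possible domain scores (items × scale points); a small sketch, with the layout taken directly from the text:

```python
# EMPATHIC-N domains as (number of items, Likert-scale points), per the text.
domains = {
    "Information": (12, 6),
    "Care & Treatment": (17, 10),
    "Parental Participation": (8, 6),
    "Organization": (8, 6),
    "Professional Attitude": (12, 6),
}

max_scores = {name: n_items * points for name, (n_items, points) in domains.items()}
total_items = sum(n for n, _ in domains.values())

print(max_scores)                              # Information: 72, Care & Treatment: 170, ...
print(total_items, sum(max_scores.values()))   # 57 items, 410 maximum total score
```

These maxima (72, 170, 48, 48, 72; 57 items and a 410 total) match the "maximum possible dimension score" column reported later in Table 3.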
 Data quality assurance
To ensure data quality, the data collection tool (the questionnaire) was pre-tested on 5% of parents from Felege Hiwot Comprehensive Specialized Hospital who were not included in the main study. Training was given to the data collectors, and data were collected and properly filled in the prepared format. Supervision was carried out throughout the data collection period to ensure the accuracy, clarity, and consistency of the collected data.
 Ethical consideration
Debre Tabor University provided ethical clearance, and each parent gave written informed consent after being briefed about the purpose of the study.
 Data entry and analysis
Data were cleaned, coded, and entered into EpiData version 4.2 and exported to SPSS version 23 for statistical analysis.
The adopted EMPATHIC-N instrument was validated through analysis of validity, reliability, standardized factor loadings, and factor analysis. Cronbach’s alpha was used to assess the reliability (internal consistency) of the tool. Exploratory factor analysis was done to test how well the measured variables represent the underlying constructs and to identify relationships between the measured items. Inter-item correlation was used to assess the relationship between items on the same scale, whereas item-discriminant validity was used to assess the relationship between scales. After categorizing the overall mean parental satisfaction score, independent variables were analyzed with binary logistic regression against parental satisfaction with the NICU care service. Variables with a p-value of < 0.2 in the bivariable logistic regression were fitted to a multivariable logistic regression, and certain variables were included in the model for their clinical importance. Both the crude odds ratio (COR) from bivariable logistic regression and the adjusted odds ratio (AOR) from multivariable logistic regression, with corresponding 95% confidence intervals, were calculated to show the strength of association. In the multivariable logistic regression analysis, variables with a p-value of < 0.05 were considered statistically significant. The Mann–Whitney test was used to determine the influencing factors of the parental satisfaction domains, while the Hosmer–Lemeshow goodness-of-fit test was performed to ensure that the analysis model was appropriate.
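For illustration, Cronbach’s α for one domain can be computed from item scores as follows (a minimal sketch with invented numbers, not the study data; the function name is ours):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/variance(totals)).

    items: list of k lists, each holding one item's scores across the same
    n respondents (hypothetical data, not the study's).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance, used consistently for items and totals
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Three items answered by five respondents (invented scores):
scores = [[4, 5, 3, 4, 5], [4, 4, 3, 5, 5], [5, 5, 2, 4, 4]]
print(round(cronbach_alpha(scores), 3))  # 0.81
```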
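As an illustration of the crude odds ratio described above, the COR and its Wald 95% confidence interval can be computed from a 2×2 table; the counts here are the parental-gender figures reported later in Table 4 (the Wald interval is a standard textbook construction, not necessarily the exact SPSS computation):

```python
import math

def crude_or_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a/b = satisfied/not satisfied in the exposed group,
    c/d = satisfied/not satisfied in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Mothers: 117 satisfied / 107 not; fathers (reference): 67 satisfied / 94 not.
or_, lo, hi = crude_or_ci(117, 107, 67, 94)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.53 1.02 2.31
```

The result reproduces the reported COR of 1.53 (95% CI: 1.02, 2.31) for parental gender.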
 Operational definitions
Satisfied: parents who scored greater than or equal to the overall mean EMPATHIC-N value were considered satisfied.
Dissatisfied: parents who scored less than the overall mean EMPATHIC-N value were considered dissatisfied.
High birth weight: neonates with a birth weight of more than 4,000 g [23].
Normal birth weight: neonates with a birth weight of 2,500 to 4,000 g [24].
Low birth weight: neonates with a birth weight of less than 2,500 g [25].
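The mean-split rule in the operational definitions can be sketched as follows (function name and total scores are illustrative, not the study data):

```python
def classify(scores):
    """Mean-split classification: parents scoring at or above the overall
    mean total EMPATHIC-N score are labelled 'satisfied'."""
    mean = sum(scores) / len(scores)
    return ["satisfied" if s >= mean else "dissatisfied" for s in scores]

# Four hypothetical total scores; their mean is 207.5.
print(classify([210, 180, 250, 190]))
# ['satisfied', 'dissatisfied', 'satisfied', 'dissatisfied']
```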
Results
This study was conducted on a total of 385 parents, for a 95.06% response rate. The majority of the parents were mothers (224; 58.2%), and 322 (83.6%) were married.
Full-term newborns (325; 84.4%) and neonates with respiratory problems (115; 29.9%) accounted for the highest numbers of NICU admissions (Table 1).

Table 1 Socio-demographic characteristics of study participants (n = 385)
Parental gender: Mother 224 (58.2%); Father 161 (41.8%)
Age: 18–24 years 132 (34.3%); 25–39 years 166 (43.1%); 40 years and above 87 (22.6%)
Marital status: Married 322 (83.6%); Not married 63 (16.4%)
Residency: Urban 187 (48.6%); Rural 198 (51.4%)
Educational level: No formal education 119 (30.9%); Elementary school 92 (23.9%); Secondary school 83 (21.6%); College and above 91 (23.6%)
Profession: Housewife 70 (18.2%); Farmer 106 (27.5%); Student 59 (15.3%); Government employee 72 (18.7%); Private employee 78 (20.3%)
Parental hospital stay: ≤ 15 days 244 (63.4%); > 15 days 141 (36.6%)
Neonatal gender: Male 188 (48.8%); Female 197 (51.2%)
Birth weight: High 83 (21.5%); Normal 162 (42.1%); Low 140 (36.4%)
Gestational age: Premature 60 (15.6%); Full term 325 (84.4%)
Neonatal hospital stay: ≤ 15 days 235 (61.0%); > 15 days 150 (39.0%)
Admission diagnosis: Infection 65 (16.9%); Respiratory problems 115 (29.9%); Prematurity 59 (15.3%); Gastrointestinal problems 38 (9.9%); Jaundice 47 (12.2%); Neurological problems 21 (5.5%); Cardiological problems 26 (6.8%); Others 14 (3.5%)

 Exploratory factor analysis for subscales of parental satisfaction with NICU-care service
Before assessing parental satisfaction levels, exploratory factor analysis was conducted to verify that the measured variables accurately represented the constructs; the factor correlation matrix was used in the analysis. The Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity were used to check the suitability of the measurement variables, yielding a KMO value of 0.875 and a significant Bartlett’s test (p < 0.001). The correlation matrix was checked for communality extraction, and all item values were larger than 0.3.
Parallel analysis determined the number of components (factors): two components for Parental Participation, three for Information and for Organization, and four for Care & Treatment and for Professional Attitude met the eigenvalue criteria, and the varimax-rotated component matrix was an appropriate model (Table 2).

Table 2 Factor loadings and identified components of the EMPATHIC-N tool (items with two values loaded on two components)
Information
- What level of doctor’s information do you have regarding the child’s expected health outcomes? 0.752
- How satisfied are you with the physicians’ and nurses’ information similarity? 0.704
- How are you satisfied with doctors’ and nurses’ honesty in providing information? 0.59
- How satisfied are you with daily discussions with doctors and nurses about your child’s care and treatment? 0.58
- How understandable was the information provided by the doctors and nurses? 0.763
- How satisfied were you with the correct information when the child’s physical condition deteriorated? 0.428, 0.657
- How satisfied were you with the clear answers to your questions? 0.653
- How clear is the doctor’s information about the consequences of the child’s treatment? 0.533
- To what extent do you receive clear information about the examinations and tests? 0.713
- To what extent was the information brochure received complete and clear? 0.641
- How much clear information is given regarding a child’s illness? 0.626
- Level of understandable information received about the effects of the drugs? 0.606
Care & Treatment
- Level of child’s comfort taken into account by the doctors and nurses? 0.826
- Extent of satisfaction with the availability of nurses to support during acute situations? 0.822
- Level of team alertness to the prevention and treatment of pain of the neonate? 0.797
- Level of care taken by nurses while in the incubator/bed? 0.634, 0.501
- Level of correct medication always administered on time? 0.631, 0.402
- Level of emotional support that has been provided? 0.84
- The doctors and nurses responded well to our own needs. 0.809
- Transfers of care from the neonatal intensive care unit staff to colleagues in the high-care unit or pediatric ward had gone well? 0.782
- Every day we knew who of the doctors and nurses was responsible for our child. 0.695
- How closely did doctors and nurses collaborate during work? 0.578, 0.466
- Level of a common goal: to provide the finest care and treatment for our child and ourselves. 0.774
- Level of physicians’ and nurses’ close attention to our child’s development? 0.756
- The team as a whole was concerned for our child and you. 0.698
- Our child’s requirements were met promptly. 0.455
- Extent of doctors’ and nurses’ professional knowledge of what they are doing? 0.847
- How satisfied are you with the doctors’ and nurses’ understanding of the child’s medical history at the time of admission? 0.765
- How satisfied are you with the rapid actions taken by doctors and nurses when a child’s condition deteriorated? 0.723
Parental Participation
- How involved are you in making decisions about our child’s care and treatment? 0.803
- The nurses had trained us on the specific aspects of newborn care. 0.796
- We were encouraged to stay close to our children. 0.714
- Before discharge, the care for our child was once more discussed with us. 0.617
- Even during intensive procedures, we could always stay close to our child. 0.841
- The nurses stimulated us to help in the care of our child. 0.837
- The nurses helped us in the bonding with our child. 0.532, 0.575
- We had confidence in the team. 0.561
Organization
- The Neonatology unit made us feel safe. 0.885
- There was a warm atmosphere in the Neonatology unit without hostility. 0.805
- The Neonatology unit was clean. 0.733, 0.404
- The unit could easily be reached by telephone. 0.809
- Our child’s incubator or bed was clean. 0.8
- The team worked efficiently. 0.661
- There was enough space around our child’s incubator/bed. 0.9
- Noise in the unit was muffled as well as possible. 0.788
Professional Attitude
- Our child’s health always came first for the doctors and nurses. 0.799
- The team worked hygienically. 0.747
- Our cultural background was taken into account. 0.728
- The doctors and nurses always took time to listen to us. 0.876
- We felt welcome by the team. 0.804
- Despite the workload, sufficient attention was paid to our child and us by the team. 0.584, 0.508
- Nurses and doctors always introduced themselves by name and function. 0.794
- We received sympathy from the doctors and nurses. 0.786
- At our bedside, the discussion between the doctors and nurses was only about our child. 0.457, 0.582
- The team respected the privacy of our children and us. 0.874
- There was a pleasant atmosphere among the staff. 0.743
- The team showed respect for our child and us. 0.519, 0.609
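A minimal sketch of the eigenvalue-based component-retention idea behind such analyses, using the Kaiser criterion on illustrative random data (the study used parallel analysis, which additionally compares these eigenvalues against those obtained from many random datasets; all names and numbers here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))            # two hypothetical constructs
loadings = rng.normal(size=(2, 8))            # eight hypothetical items
items = latent @ loadings + rng.normal(scale=0.8, size=(200, 8))

corr = np.corrcoef(items, rowvar=False)       # 8 x 8 item correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, descending
n_retained = int((eigvals > 1).sum())         # Kaiser: keep eigenvalues > 1
print(n_retained)
```

Note that the eigenvalues of an 8-item correlation matrix always sum to 8 (the trace), so retained components with eigenvalues above 1 account for more variance than a single item.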
 Reliability of items of the EMPATHIC-N tool
The total EMPATHIC-N score and the five domains showed a good level of internal consistency. The five domains’ Cronbach’s α values ranged from 0.639 to 0.791, whereas the total EMPATHIC-N Cronbach’s α was 0.904. The inter-item correlations (IIC) of the five domains showed significant internal consistency (Table 3).

Table 3 Reliability of items of parental satisfaction with NICU-care service
Dimension | Items | Cronbach’s α | Mean score (SD) | Maximum possible score | Inter-item correlation (IIC) | Item-discriminant validity (IDV)
Information | 12 | 0.639 | 39.55 (8.42) | 72 | 0.22–0.48* | 0.20–0.57
Care & Treatment | 17 | 0.791 | 81.70 (20.49) | 170 | 0.14–0.64* | 0.25–0.38
Parental Participation | 8 | 0.755 | 24.86 (7.79) | 48 | 0.01–0.71* | 0.37–0.54
Organization | 8 | 0.729 | 25.22 (7.39) | 48 | 0.11–0.64* | 0.34–0.69
Professional Attitude | 12 | 0.694 | 38.11 (9.07) | 72 | 0.13–0.56* | 0.22–0.45
EMPATHIC-N tool | 57 | 0.904 | 209.44 (42.82) | 410 | c | c
*Significant value (p < 0.001); c = not computable
 Parental satisfaction level with NICU-care service
The overall mean satisfaction level of parents with the NICU-care service was 47.8% [95% CI: 43.1–52.5]. When the domain satisfaction scores were compared, the lowest score was in the Care & Treatment domain (36.9%), while the Organization domain showed the highest satisfaction level (59.0%). The Information, Parental Participation, and Professional Attitude domains showed comparable satisfaction scores (Fig. 1).

Fig. 1 Parents’ overall and dimensional satisfaction with NICU-care services
 Factors associated with the overall parental satisfaction with NICU-care service
In the bivariable logistic regression, parental gender, residency, parental hospital stay, neonatal hospital stay, birth weight, and gestational age had p-values of less than 0.2 and were fitted to the multivariable logistic regression model. However, neonatal hospital stay was not associated in the multivariable model (p > 0.05). The multivariable analyses showed that mothers were 2.16 times more satisfied than fathers (AOR = 2.16; 95% CI: 1.28–3.63). Parents from rural areas were 2.94 times more satisfied than those from urban areas (AOR = 2.94; 95% CI: 1.42–6.06). Parents who stayed 15 days or less in the hospital were 2.18 times more satisfied than those who stayed more than 15 days (AOR = 2.18; 95% CI: 1.13–4.20).
Parents of a neonate with a normal birth weight (AOR = 2.14; 95% CI: 1.16–3.94) and of a full-term neonate (AOR = 2.53; 95% CI: 1.29–4.97) were also more satisfied than their counterparts (Table 4).

Table 4 Factors associated with satisfaction of parents with NICU-care service (n = 385)
Variable | Satisfied | Not satisfied | COR (95% CI) | AOR (95% CI) | p-value
Gender: Mother | 117 (52.2%) | 107 (47.8%) | 1.53 (1.02, 2.31) | 2.16 (1.28, 3.63) | 0.004*
Gender: Father | 67 (41.6%) | 94 (58.4%) | 1 | 1 |
Residency: Urban | 68 (36.4%) | 119 (63.6%) | 1 | 1 |
Residency: Rural | 116 (58.6%) | 82 (41.4%) | 2.48 (1.64, 3.73) | 2.94 (1.42, 6.06) | 0.004*
Parental hospital stay ≤ 15 days | 140 (57.4%) | 104 (42.6%) | 2.97 (1.92, 4.59) | 2.18 (1.13, 4.20) | 0.019*
Parental hospital stay > 15 days | 44 (31.2%) | 97 (68.8%) | 1 | 1 |
Birth weight: High | 34 (41.0%) | 49 (59.0%) | 1 | 1 |
Birth weight: Normal | 96 (59.3%) | 66 (40.7%) | 2.09 (1.22, 3.59) | 2.14 (1.16, 3.94) | 0.015*
Birth weight: Low | 54 (38.6%) | 86 (61.4%) | 0.91 (0.52, 1.58) | 0.76 (0.41, 1.41) | 0.38
Gestational age: Premature | 18 (30.0%) | 42 (70.0%) | 1 | 1 |
Gestational age: Full term | 166 (51.1%) | 159 (48.9%) | 2.44 (1.35, 4.41) | 2.53 (1.29, 4.97) | 0.007*
* p-value < 0.05; 1 = reference. Note: p-values were extracted from the multivariable logistic regression model.
The KMO measure and Bartlett’s test of sphericity were used to check the correlation of the measurement variables, yielding a KMO value of 0.875 and a significant Bartlett’s test of sphericity (p < 0.001). The correlation matrix was checked for communality extraction, and all item values were larger than 0.3. Parallel analysis determined the number of item components (factors): two components for parental participation, three for information and for organization, and four for Care & Treatment and for professional attitude had eigenvalues that met the criteria, and the varimax-rotated component correlation matrix was an appropriate model (Table 2).\n\nTable 2 Factor loadings and identified components of the EMPATHIC-N tool\nItem | Factor loading(s) (items with two values cross-loaded on two components)\nInformation\n What level of doctor’s information do you have regarding the child’s expected health outcomes? | 0.752\n How satisfied are you with the physicians’ and nurses’ information similarity? | 0.704\n How are you satisfied with doctors’ and nurses’ honesty in providing information? | 0.59\n How satisfied are you with daily discussions with doctors and nurses about your child’s care and treatment? | 0.58\n How understandable was the information provided by the doctors and nurses? | 0.763\n How satisfied were you with the correct information when the child’s physical condition deteriorated? | 0.428, 0.657\n How satisfied were you with the clear answers to your questions? | 0.653\n How clear is the doctor’s information about the consequences of the child’s treatment? | 0.533\n To what extent do you receive clear information about the examinations and tests? | 0.713\n To what extent the information brochure received was complete and clear? | 0.641\n How much clear information is given regarding a child’s illness? | 0.626\n Level of received understandable information about the effects of the drugs? | 0.606\nCare & Treatment\n Level of child’s comfort taken into account by the doctors and nurses? | 0.826\n The extent of satisfaction during acute situations on availability of nurses to support? | 0.822\n Level of team alertness to the prevention and treatment of pain of neonate? | 0.797\n Level of care taken by nurses while in the incubator/bed? | 0.634, 0.501\n Level of correct medication always administered on time? | 0.631, 0.402\n Level of emotional support that has been provided? | 0.84\n The doctors and nurses responded well to our own needs | 0.809\n Transferals of care from the neonatal intensive care unit staff to colleagues in the high-care unit or pediatric ward had gone well? | 0.782\n Every day we knew who of the doctors and nurses was responsible for our child. | 0.695\n How closely did doctors and nurses collaborate during work? | 0.578, 0.466\n Level of a common goal: to provide the finest care and treatment for our child and ourselves. | 0.774\n Level of physicians and nurses paid close attention to our child’s development? | 0.756\n The team as a whole was concerned for our child and you. | 0.698\n Our child’s requirements were met promptly | 0.455\n The extent of doctors’ and nurses’ professional knowledge of what they are doing? | 0.847\n How satisfied are you with the doctors’ and nurses’ understanding of the child’s medical history at the time of admission? | 0.765\n How satisfied are you with the rapid actions taken by doctors and nurses when a child’s condition deteriorated? | 0.723\nParental Participation\n How involved are you in making decisions about our child’s care and treatment? | 0.803\n The nurses had trained us on the specific aspects of newborn care. | 0.796\n We were encouraged to stay close to our children. | 0.714\n Before discharge, the care for our child was once more discussed with us. | 0.617\n Even during intensive procedures, we could always stay close to our child. | 0.841\n The nurses stimulated us to help in the care of our child | 0.837\n The nurses helped us in the bonding with our child | 0.532, 0.575\n We had confidence in the team | 0.561\nOrganization\n The Neonatology unit made us feel safe | 0.885\n There was a warm atmosphere in the Neonatology unit without hostility | 0.805\n The Neonatology unit was clean | 0.733, 0.404\n The unit could easily be reached by telephone | 0.809\n Our child’s incubator or bed was clean | 0.8\n The team worked efficiently | 0.661\n There was enough space around our child’s incubator/bed | 0.9\n Noise in the unit was muffled as much as possible | 0.788\nProfessional Attitude\n Our child’s health always came first for the doctors and nurses | 0.799\n The team worked hygienically | 0.747\n Our cultural background was taken into account | 0.728\n The doctors and nurses always took time to listen to us | 0.876\n We felt welcome by the team | 0.804\n Despite the workload, sufficient attention was paid to our child and us by the team | 0.584, 0.508\n Nurses and doctors always introduced themselves by name and function | 0.794\n We received sympathy from the doctors and nurses | 0.786\n At our bedside, the discussion between the doctors and nurses was only about our child. | 0.457, 0.582\n The team respected the privacy of our children and us. | 0.874\n There was a pleasant atmosphere among the staff | 0.743\n The team showed respect for our child and us | 0.519, 0.609", "The total EMPATHIC-N values and the five domains showed a good level of internal consistency. The five domains’ Cronbach’s α values ranged from 0.639 to 0.791, whereas the total EMPATHIC-N Cronbach’s α value was 0.904.
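Cronbach’s α values like those reported here can be reproduced from raw item scores with the standard formula α = k/(k−1) · (1 − Σσᵢ²/σₜ²). A minimal pure-Python sketch (the item scores below are illustrative, not study data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns.

    items: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of the individual item variances.
    item_vars = sum(sample_var(col) for col in items)
    # Variance of each respondent's total score.
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = sample_var(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Perfectly consistent items give alpha = 1.0
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(round(cronbach_alpha(items), 3))  # 1.0
```

Feeding each domain’s item columns into such a function is how per-domain values (e.g. 0.639 for Information) and the whole-scale value (0.904) are obtained.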
The inter-item correlations (IIC) within the five domains were statistically significant (Table 3).\n\nTable 3 Reliability of items of parental satisfaction with NICU-care service\nDimension | Number of items | Cronbach α | Mean dimension score (SD) | Maximum possible dimension score | Inter-item correlation (IIC) | Item-discriminant validity (IDV)\nInformation | 12 | 0.639 | 39.55(8.42) | 72 | 0.22–0.48* | 0.20–0.57\nCare & Treatment | 17 | 0.791 | 81.70(20.49) | 170 | 0.14–0.64* | 0.25–0.38\nParental Participation | 8 | 0.755 | 24.86(7.79) | 48 | 0.01–0.71* | 0.37–0.54\nOrganization | 8 | 0.729 | 25.22(7.39) | 48 | 0.11–0.64* | 0.34–0.69\nProfessional Attitude | 12 | 0.694 | 38.106(9.07) | 72 | 0.13–0.56* | 0.22–0.45\nEMPATHIC-N tool | 57 | 0.904 | 209.44(42.82) | 410 | c | c\n*Significant value (p < 0.001), c = not computable", "Overall mean satisfaction level of parents in NICU-care service was 47.8% [95% CI = (43.1–52.5)]. The domain-level parental satisfaction scores were compared with each other; the lowest score was for the care and treatment domain (36.9%), while the organization domain showed the highest level (59.0%). The domains of information, parental participation, and professional attitude showed comparable parental satisfaction scores (Fig. 1).\n\nFig. 1 Parents’ overall and dimensional satisfaction with NICU-care services", "In this study, the bivariable logistic regression showed that parental gender, residency, parental hospital stay, neonatal hospital stay, birth weight, and gestational age had p-values of less than 0.2 and were fitted into a multivariable logistic regression model. However, neonatal hospital stay was not significantly associated in the multivariable logistic regression (p > 0.05).
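The crude odds ratios in Table 4 follow directly from the satisfied/not-satisfied cell counts, and their 95% confidence intervals can be reproduced with Woolf’s logit method (an assumption on my part; the paper does not state which interval method was applied). A minimal sketch:

```python
import math

def crude_or_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf 95% CI for a 2x2 table:
    exposed group:   a satisfied, b not satisfied
    reference group: c satisfied, d not satisfied
    """
    or_ = (a * d) / (b * c)
    # Woolf's method: SE of log(OR) from the cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Mothers (117 satisfied, 107 not) vs fathers (67 satisfied, 94 not)
or_, lo, hi = crude_or_ci(117, 107, 67, 94)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.53 1.02 2.31
```

The same call with the residency counts, `crude_or_ci(116, 82, 68, 119)`, reproduces the reported rural crude odds ratio of 2.48 (1.64, 3.73).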
The multivariable logistic analyses showed that mothers were 2.16 (AOR = 2.16; 95%CI: 1.28–3.63) times more satisfied than fathers. Also, parents from rural areas were 2.94 (AOR = 2.94; 95%CI: 1.42–6.06) times more satisfied than those from urban areas. Parents who stayed less than 15 days were 2.18 (AOR = 2.18; 95%CI: 1.13–4.20) times more satisfied than parents who stayed 15 or more days in the hospital. Also, parents of neonates with a normal birth weight (AOR = 2.14; 95%CI: 1.16–3.94) and parents of full-term neonates (AOR = 2.53; 95%CI: 1.29–4.97) were 2.14 and 2.53 times more satisfied, respectively, than their counterparts (Table 4).\n\nTable 4 Factors associated with satisfaction of parents in NICU-care service (n = 385)\nVariables | Satisfied | Not satisfied | Crude odds ratio (95% CI) | Adjusted odds ratio (95% CI) | p-value\nGender\n Mother | 117(52.2%) | 107(47.8%) | 1.53(1.02,2.31) | 2.16(1.28,3.63) | 0.004*\n Father | 67(41.6%) | 94(58.4%) | 1 | |\nResidency\n Urban | 68(36.4%) | 119(63.6%) | 1 | |\n Rural | 116(58.6%) | 82(41.4%) | 2.48(1.64,3.73) | 2.94(1.42,6.06) | 0.004*\nParental hospital stay\n ≤ 15 days | 140(57.4%) | 104(42.6%) | 2.97(1.92,4.59) | 2.18(1.13,4.20) | 0.019*\n > 15 days | 44(31.2%) | 97(68.8%) | 1 | |\nBirth weight\n High birth weight | 34(41.0%) | 49(59.0%) | 1 | |\n Normal birth weight | 96(59.3%) | 66(40.7%) | 2.09(1.22,3.59) | 2.14(1.16,3.94) | 0.015*\n Low birth weight | 54(38.6%) | 86(61.4%) | 0.91(0.52,1.58) | 0.76(0.41,1.41) | 0.38\nGestational age\n Premature | 18(30.0%) | 42(70.0%) | 1 | |\n Full term | 166(51.1%) | 159(48.9%) | 2.44(1.35,4.41) | 2.53(1.29,4.97) | 0.007*\n* = p-value < 0.05; 1 = reference; BWt = birth weight\nNote: p-values were extracted from the multivariable logistic regression model", "In this study, the total average parental satisfaction score with NICU service was 47.8%. This finding is similar to that of a study done in Ethiopia (50%) [26].
Contrary to this finding, studies done in Norway (76%) [7] and Greece (99%) [27] had a higher parental satisfaction score with NICU service.\nThe study found that parental satisfaction with NICU service was 50.4% in the information subscale, 36.9% in the care and treatment subscale, 50.1% in the parental participation subscale, 59.0% in the organization subscale, and 48.6% in the professional attitude subscale. Parental satisfaction was lowest in the care and treatment subscale. Studies in Italy [20] and South Africa [28], on the other hand, found the lowest parental satisfaction scores in the professional attitude and parental participation subscales, respectively. This disparity could be attributed to the NICU’s lack of professional and medical resources to treat and care for neonates in this setting.\nAccording to this study, mothers, parents from rural regions, parents of neonates who stayed less than 15 days in the hospital, parents of neonates with normal birth weight, and parents of full-term neonates were all more satisfied with NICU services than their counterparts.\nThis study showed that mothers were more satisfied with NICU service than fathers. Similar studies conducted in Italy [20], Israel [11], and Greece [29] also showed that mothers were more satisfied than fathers with NICU service. The possible explanation might be that mothers are allowed to spend more time in the NICU, participate in the care of their newborns, and cultivate more relationships with medical caregivers than fathers.\nAlso, in this study, parents from rural areas were more satisfied than parents from urban areas. This finding is similar to a study done in Greece [30], which found that parents from rural areas were more satisfied.
The probable reason might be that parents from rural areas have lower awareness of the hospital and lower expectations of and demand for NICU service compared with actual practice.\nParents of normal-birth-weight and full-term neonates were more satisfied with NICU service than parents of low-birth-weight and preterm neonates in this study. In addition, a Norwegian study [7] found that parents of newborns with normal birth weights were more satisfied than those with low birth weights. Plausibly, parents of full-term and normal-weight neonates are less likely to face unexpected complications, and positive outcomes may lead to greater satisfaction.\nIn this study, parents of neonates who stayed less than 15 days in the hospital were more satisfied with NICU service than parents who stayed 15 or more days. Similarly, studies done in Ethiopia [26] and Iran [10] found that parents who stayed for a short period in the NICU were more satisfied. The possible reason might be that parents with short stays are less likely to witness serious neonatal conditions that cause emotional distress and perceived mismanagement of care.\n Limitations of the study The study’s limitations include its single-center design and the lack of analysis of alternatives to logistic regression that might be more applicable for this investigation.", "The study’s limitations include its single-center design and the lack of analysis of alternatives to logistic regression that might be more applicable for this investigation.", "There was a low level of parental satisfaction with neonatal intensive care unit service. Among the dimensions of EMPATHIC-N, the lowest parental satisfaction score was in the care and treatment dimension, while the highest was in the organization dimension.
As a result, health professionals and hospital administrators should collaborate to improve NICU services to provide high-quality service and satisfy parents." ]
[ "introduction", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", null, "conclusion" ]
[ "NICU", "Parents", "Satisfaction", "Service" ]
Introduction: Neonatal intensive care units (NICUs) are areas that require careful risk management with a wide range of neonatal care services [1–3]. It necessitates high-cost and efficient critical care delivery with a multidisciplinary team approach that focuses on preventive strategies for improved outcomes [4–6]. Parental tension and emotions are high when their child is admitted to a neonatal intensive care unit (NICU) due to serious illnesses [7, 8]. Satisfaction is a belief and attitude about a specific service provision of an institution. Parental and patient satisfaction has become a well-established outcome indicator and tool for assessing a healthcare system’s quality, as well as input for developing strategies and providing accessible, sustainable, economical, as well as acceptable patient care [7, 9, 10]. Parental satisfaction reflects the balance between their expectations of ideal care and their perception of real and available care [3, 11, 12]. It is also one of the objectives and missions of every health care center that gives NICU care service [10, 11]. Parent and patient satisfaction has become an important aspect of hospital management initiatives for quality assurance and accreditation. Also, parental involvement has an important role in the delivery of high-quality care, ranging from assisting with activities of daily living to being directly involved in important health care decisions [13, 14]. However, parental participation may not always be possible, but, effective communication will reduce the effects of crises and improve parental satisfaction [11, 15]. Ethiopia has been working to enhance its healthcare delivery systems by focusing on quality healthcare service giving special attention to mothers and children. 
The Ethiopian Health Service Alliance for Quality has agreed that the initial priority area would be self-motivated and transparent partnerships that stimulate innovation in health care quality management and learning across all hospitals [16]. Given the scarcity of studies on parental satisfaction with NICU care in Ethiopia, as well as clinical observations of parents complaining about NICU care, it is critical to determine the level of parental satisfaction. So, this study aimed to identify the level of parental satisfaction and associated factors with NICU care service in Debre Tabor Comprehensive Specialized Hospital (DTCSH). Also, this study helps the administrators to understand the deficiencies of the hospital’s NICU services and then propose corresponding improvement strategies. Methods and materials: Study area and period A cross-sectional study was conducted in DTCSH, a public hospital established in 1934 and located in the South Gondar Zone of Amhara Regional State of Ethiopia. It is 97 km to the southwest of Bahir Dar, the capital city of Amhara Regional State. According to the 2007 census, the total population of Debre Tabor town was 155,596. The town lies at 11°51′N 38°01′E (11.850°N, 38.017°E) at an elevation of 2,706 m (8,878 ft) above sea level. The hospital provides neonatal intensive care unit service with five separate NICU rooms. According to the hospital’s 2017 report, 1,159 neonates were admitted to the NICU, but according to evidence from a chart review in 2020, 1,489 neonates were admitted [17]. This study was conducted on parents of neonates who were admitted to the NICU at DTCSH from November 05, 2021, to April 30, 2022. Inclusion and exclusion criteria Parents whose neonate was discharged from the NICU or transferred to the high-dependency neonatal ward, who could read and write or understand the Amharic language, and whose neonate stayed in the NICU for less than three months were included in the study. Parents whose neonate died in the NICU and parents whose neonate was admitted for less than 24 h were excluded from the study. Sample size and sampling technique The sample size was determined using the single population proportion formula, taking a proportion (P) of 50%, a 95% confidence interval, and a 5% margin of error (d):\nn = (Z_{α/2})² × P(1 − P) / d² = (1.96)² × 0.5 × (1 − 0.5) / (0.05)² = 384.16, rounded up to N = 385.
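The sample-size arithmetic can be reproduced directly; a minimal sketch that also applies the 5% non-response allowance used in the study:

```python
import math

def single_proportion_n(p=0.5, z=1.96, d=0.05, nonresponse=0.05):
    """Single population proportion sample size:
    n = z^2 * p * (1 - p) / d^2, then inflated for non-response.
    Returns (base n, final n after inflation), both rounded up.
    """
    n = math.ceil(z ** 2 * p * (1 - p) / d ** 2)
    return n, math.ceil(n * (1 + nonresponse))

print(single_proportion_n())  # (385, 405)
```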
By considering a 5% non-response rate, the final sample size was 405. The data were collected from all consecutive parents who met the inclusion criteria until the intended sample size was achieved. Data collection instrument and procedures The Amharic-version anonymous questionnaire was used for data collection after the English version of the EMPATHIC-N tool was translated by three language experts and then back-translated to English by three other experts to ensure that the translation was correct. The tool’s content validity was also examined and guaranteed by members of the Anesthesia department’s research committee. The tool has been widely used to assess parental satisfaction with NICU care services, and it has strong reliability and validity, with reliability (Cronbach’s α) values of the domains ranging from 0.82 to 0.95 [18–20]. There are five domains in the EMPATHIC-N tool: Information (12 questions with a six-point Likert scale); Care & Treatment (17 questions with a ten-point Likert scale); Parental Participation (8 questions with a six-point Likert scale); Organization (8 questions with a six-point Likert scale); and Professional Attitude (12 questions with a six-point Likert scale) [18, 21, 22].
The data were collected by three anesthetists on the day of neonatal discharge using an adopted Dutch instrument, the Empowerment of Parents in The Intensive Care-Neonatology (EMPATHIC-N). Data quality assurance To ensure the quality of the data, pre-testing of the data collection tool (the questionnaire) was done on 5% of study parents from Felege Hiwot Comprehensive Specialized Hospital who were not included in the main study. Training was given to the data collectors; data were collected and properly filled in the prepared format. Supervision was carried out throughout the data collection period to ensure the accuracy, clarity, and consistency of the collected data. Ethical consideration Debre Tabor University provided ethical clearance, and each parent gave written informed consent after being briefed about the purpose of the study. Data entry and analysis Data were cleaned, coded, and entered into Epidata version 4.2 and exported to SPSS version 23 for statistical analysis. The adopted EMPATHIC-N instrument was validated through analysis of validity, reliability, standard factor loadings, and factor analysis. Cronbach’s alpha was used to determine the reliability and validity of the tool. Exploratory factor analysis was done to test how well the measured variables represent the number of constructs and to identify relationships between the measured items. The inter-item correlation was used to assess the relationship between items on the same scale, whereas item-discriminant validity was used to assess the relationship between scales. After categorizing the overall mean parental satisfaction score, independent variables were analyzed using binary logistic regression against parental satisfaction with NICU care service. Variables from the bivariable logistic regression with a p-value of < 0.2 were fitted to a multivariable logistic regression, and certain variables were included in the model for their clinical importance.
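The mean-split categorization described above (satisfied = total EMPATHIC-N score at or above the overall mean) can be sketched as follows; the scores are illustrative, not study data:

```python
def classify_satisfaction(scores):
    """Label each parent's total EMPATHIC-N score against the overall mean."""
    mean = sum(scores) / len(scores)
    return ["satisfied" if s >= mean else "dissatisfied" for s in scores]

# Four illustrative total scores; the overall mean here is 200.
print(classify_satisfaction([150, 210, 250, 190]))
# ['dissatisfied', 'satisfied', 'satisfied', 'dissatisfied']
```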
Both the crude odds ratio (COR) in bivariable logistic regression and the adjusted odds ratio (AOR) in multivariable logistic regression, with the corresponding 95% confidence intervals, were calculated to show the strength of association. In the multivariable logistic regression analysis, variables with a p-value of < 0.05 were considered statistically significant. The Mann–Whitney test was used to determine the influencing factors of the parental satisfaction domains, while the Hosmer–Lemeshow goodness-of-fit test was performed to ensure that the analysis model was appropriate. Operational definitions Satisfied: parents who scored greater than or equal to the overall mean EMPATHIC-N values were considered satisfied. Dissatisfied: parents who scored less than the overall mean EMPATHIC-N values were considered dissatisfied. High birth weight: neonates with a birth weight of more than 4,000 g [23]. Normal birth weight: neonates with a birth weight of 2,500 to 4,000 g [24]. Low birth weight: neonates with a birth weight of lower than 2,500 g [25]. Study area and period: A cross-sectional study was conducted in DTCSH, a public hospital established in 1934 and located in the South Gondar Zone of Amhara Regional State of Ethiopia. It is 97 km to the southwest of Bahir Dar, the capital city of Amhara Regional State. According to the 2007 census, the total population of Debre Tabor town was 155,596. The town lies at 11°51′N 38°01′E (11.850°N, 38.017°E) at an elevation of 2,706 m (8,878 ft) above sea level. The hospital provides neonatal intensive care unit service with five separate NICU rooms.
According to the hospital’s 2017 report, 1,159 neonates were admitted to the NICU, but according to evidence from a chart review in 2020, 1,489 neonates were admitted [17]. This study was conducted on parents of neonates who were admitted to the NICU at DTCSH from November 05, 2021, to April 30, 2022. Inclusion and exclusion criteria: Parents whose neonate was discharged from the NICU or transferred to the high-dependency neonatal ward, who could read and write or understand the Amharic language, and whose neonate stayed in the NICU for less than three months were included in the study. Parents whose neonate died in the NICU and parents whose neonate was admitted for less than 24 h were excluded from the study. Sample size and sampling technique: The sample size was determined using the single population proportion formula, taking a proportion (P) of 50%, a 95% confidence interval, and a 5% margin of error (d): n = (Z_{α/2})² × P(1 − P) / d² = (1.96)² × 0.5 × (1 − 0.5) / (0.05)² = 384.16, rounded up to N = 385. By considering a 5% non-response rate, the final sample size was 405. The data were collected from all consecutive parents who met the inclusion criteria until the intended sample size was achieved. Data collection instrument and procedures: The Amharic-version anonymous questionnaire was used for data collection after the English version of the EMPATHIC-N tool was translated by three language experts and then back-translated to English by three other experts to ensure that the translation was correct. The tool’s content validity was also examined and guaranteed by members of the Anesthesia department’s research committee.
The tool has been widely used to assess parental satisfaction with NICU care services, and it has strong reliability and validity, with reliability (Cronbach’s α) values of the domains ranging from 0.82 to 0.95 [18–20]. There are five domains in the EMPATHIC-N tool: Information (12 questions with a six-point Likert scale); Care & Treatment (17 questions with a ten-point Likert scale); Parental Participation (8 questions with a six-point Likert scale); Organization (8 questions with a six-point Likert scale); and Professional Attitude (12 questions with a six-point Likert scale) [18, 21, 22]. The data were collected by three anesthetists on the day of neonatal discharge using an adopted Dutch instrument, the Empowerment of Parents in The Intensive Care-Neonatology (EMPATHIC-N). Data quality assurance: To ensure the quality of the data, pre-testing of the data collection tool (the questionnaire) was done on 5% of study parents from Felege Hiwot Comprehensive Specialized Hospital who were not included in the main study. Training was given to the data collectors; data were collected and properly filled in the prepared format. Supervision was carried out throughout the data collection period to ensure the accuracy, clarity, and consistency of the collected data. Ethical consideration: Debre Tabor University provided ethical clearance, and each parent gave written informed consent after being briefed about the purpose of the study. Data entry and analysis: Data were cleaned, coded, and entered into Epidata version 4.2 and exported to SPSS version 23 for statistical analysis. The adopted EMPATHIC-N instrument was validated through analysis of validity, reliability, standard factor loadings, and factor analysis. Cronbach’s alpha was used to determine the reliability and validity of the tool. Exploratory factor analysis was done to test how well the measured variables represent the number of constructs and to identify relationships between the measured items.
The inter-item correlation was used to assess the relationship between items on the same scale, whereas item-discriminant validity was used to assess the relationship between scales. After the overall mean parental satisfaction score was dichotomized, independent variables were analyzed against parental satisfaction with NICU-care service using binary logistic regression. Variables from the bivariable logistic regression with a p-value of less than 0.2 were fitted to the multivariable logistic regression, and certain variables were included in the model for their clinical importance. Both the crude odds ratio (COR) from the bivariable logistic regression and the adjusted odds ratio (AOR) from the multivariable logistic regression were calculated with their corresponding 95% confidence intervals to show the strength of association. In the multivariable logistic regression analysis, variables with a p-value of < 0.05 were considered statistically significant. The Mann–Whitney test was used to determine the influencing factors of the parental satisfaction domains, while the Hosmer–Lemeshow goodness-of-fit test was performed to ensure that the analysis model was appropriate. Operational definitions: Satisfied: parents who scored greater than or equal to the overall mean EMPATHIC-N value. Dissatisfied: parents who scored less than the overall mean EMPATHIC-N value. High birth weight: neonates with a birth weight of more than 4000 g [23]. Normal birth weight: neonates with a birth weight of 2500 to 4000 g [24]. Low birth weight: neonates with a birth weight of less than 2500 g [25]. Results: This study was conducted on a total of 385 parents, with a 95.06% response rate. The majority of the parents were mothers (224; 58.2%), and 322 (83.6%) were married. Full-term newborns (325; 84.4%) and neonates with respiratory problems (115; 29.9%) accounted for the largest shares of NICU admissions (Table 1).
Table 1. Socio-demographic characteristics of study participants (n = 385)

Variable | Category | Frequency (n) | Percentage (%)
Parental gender | Mother | 224 | 58.2
Parental gender | Father | 161 | 41.8
Age | 18–24 | 132 | 34.3
Age | 25–39 | 166 | 43.1
Age | 40 and above | 87 | 22.6
Marital status | Married | 322 | 83.6
Marital status | Not married | 63 | 16.4
Residency | Urban | 187 | 48.6
Residency | Rural | 198 | 51.4
Educational level | No formal education | 119 | 30.9
Educational level | Elementary school | 92 | 23.9
Educational level | Secondary school | 83 | 21.6
Educational level | College and above | 91 | 23.6
Profession | Housewife | 70 | 18.2
Profession | Farmer | 106 | 27.5
Profession | Student | 59 | 15.3
Profession | Government employee | 72 | 18.7
Profession | Private employee | 78 | 20.3
Parental hospital stay | ≤ 15 days | 244 | 63.4
Parental hospital stay | > 15 days | 141 | 36.6
Neonatal gender | Male | 188 | 48.8
Neonatal gender | Female | 197 | 51.2
Birth weight | High birth weight | 83 | 21.5
Birth weight | Normal birth weight | 162 | 42.1
Birth weight | Low birth weight | 140 | 36.4
Gestational age | Premature | 60 | 15.6
Gestational age | Full term | 325 | 84.4
Neonatal hospital stay | ≤ 15 days | 235 | 61.0
Neonatal hospital stay | > 15 days | 150 | 39.0
Admission diagnosis | Infection | 65 | 16.9
Admission diagnosis | Respiratory problems | 115 | 29.9
Admission diagnosis | Prematurity | 59 | 15.3
Admission diagnosis | Gastrointestinal problems | 38 | 9.9
Admission diagnosis | Jaundice | 47 | 12.2
Admission diagnosis | Neurological problems | 21 | 5.5
Admission diagnosis | Cardiological problems | 26 | 6.8
Admission diagnosis | Others | 14 | 3.5

Exploratory factor analysis for subscales of parental satisfaction with NICU-care service

Before assessing parental satisfaction levels, exploratory factor analysis was used to ensure that the measured variables accurately represented the constructs; the factor correlation matrix was used in the analysis. The KMO measure and Bartlett’s test of sphericity were used to check the correlation of the measurement variables (KMO = 0.875; Bartlett’s test of sphericity, p = 0.00). The correlation matrix was checked for communality extraction, and all item values were larger than 0.3.
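The parallel-analysis step used to decide how many components to retain can be illustrated on synthetic data; this is a minimal numpy sketch of Horn’s method (not the authors’ SPSS procedure, and the function name is my own):

```python
import numpy as np

def horn_parallel_analysis(data, n_iter=100, seed=0):
    """Retain components whose observed correlation-matrix eigenvalues
    exceed the mean eigenvalues obtained from random normal data of the
    same shape (Horn's parallel analysis)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_mean = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_mean += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand_mean /= n_iter
    return int(np.sum(obs > rand_mean))

# Synthetic demo: 385 respondents, 6 items driven by one common factor.
rng = np.random.default_rng(1)
factor = rng.standard_normal((385, 1))
items = factor + 0.5 * rng.standard_normal((385, 6))
print(horn_parallel_analysis(items))  # -> 1 component retained
```

With a single strong common factor, only the first observed eigenvalue exceeds its random-data counterpart, so one component is retained.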
Parallel analysis determined the number of components (factors): two components for Parental Participation, three for Information and for Organization, and four for Care & Treatment and for Professional Attitude met the eigenvalue criteria, and the Varimax component correlation matrix was an appropriate model (Table 2).

Table 2. Factor loadings and identified components of the EMPATHIC-N tool (loadings in parentheses; items with two values loaded on two of the four factors)

Information
- What level of doctor’s information do you have regarding the child’s expected health outcomes? (0.752)
- How satisfied are you with the physicians’ and nurses’ information similarity? (0.704)
- How are you satisfied with doctors’ and nurses’ honesty in providing information? (0.590)
- How satisfied are you with daily discussions with doctors and nurses about your child’s care and treatment? (0.580)
- How understandable was the information provided by the doctors and nurses? (0.763)
- How satisfied were you with the correct information when the child’s physical condition deteriorated? (0.428, 0.657)
- How satisfied were you with the clear answers to your questions? (0.653)
- How clear is the doctor’s information about the consequences of the child’s treatment? (0.533)
- To what extent do you receive clear information about the examinations and tests? (0.713)
- To what extent was the information brochure received complete and clear? (0.641)
- How much clear information is given regarding a child’s illness? (0.626)
- Level of received understandable information about the effects of the drugs? (0.606)

Care & Treatment
- Level of child’s comfort taken into account by the doctors and nurses? (0.826)
- Extent of satisfaction with the availability of nurses to support during acute situations? (0.822)
- Level of team alertness to the prevention and treatment of pain of the neonate? (0.797)
- Level of care taken by nurses while in the incubator/bed? (0.634, 0.501)
- Level of correct medication always administered on time? (0.631, 0.402)
- Level of emotional support that has been provided? (0.840)
- The doctors and nurses responded well to our own needs (0.809)
- Transferals of care from the neonatal intensive care unit staff to colleagues in the high-care unit or pediatric ward had gone well (0.782)
- Every day we knew who of the doctors and nurses was responsible for our child (0.695)
- How closely did doctors and nurses collaborate during work? (0.578, 0.466)
- Level of a common goal: to provide the finest care and treatment for our child and ourselves (0.774)
- Level of physicians and nurses paying close attention to our child’s development? (0.756)
- The team as a whole was concerned for our child and you (0.698)
- Our child’s requirements were met promptly (0.455)
- The extent of doctors’ and nurses’ professional knowledge of what they are doing? (0.847)
- How satisfied are you with the doctors’ and nurses’ understanding of the child’s medical history at the time of admission? (0.765)
- How satisfied are you with the rapid actions taken by doctors and nurses when a child’s condition deteriorated? (0.723)

Parental Participation
- How involved are you in making decisions about our child’s care and treatment? (0.803)
- The nurses had trained us on the specific aspects of newborn care (0.796)
- We were encouraged to stay close to our children (0.714)
- Before discharge, the care for our child was once more discussed with us (0.617)
- Even during intensive procedures, we could always stay close to our child (0.841)
- The nurses stimulated us to help in the care of our child (0.837)
- The nurses helped us in the bonding with our child (0.532, 0.575)
- We had confidence in the team (0.561)

Organization
- The Neonatology unit made us feel safe (0.885)
- There was a warm atmosphere in the Neonatology unit without hostility (0.805)
- The Neonatology unit was clean (0.733, 0.404)
- The unit could easily be reached by telephone (0.809)
- Our child’s incubator or bed was clean (0.800)
- The team worked efficiently (0.661)
- There was enough space around our child’s incubator/bed (0.900)
- Noise in the unit was muffled as well as possible (0.788)

Professional Attitude
- Our child’s health always came first for the doctors and nurses (0.799)
- The team worked hygienically (0.747)
- Our cultural background was taken into account (0.728)
- The doctors and nurses always took time to listen to us (0.876)
- We felt welcome by the team (0.804)
- Despite the workload, sufficient attention was paid to our child and us by the team (0.584, 0.508)
- Nurses and doctors always introduced themselves by name and function (0.794)
- We received sympathy from the doctors and nurses (0.786)
- At our bedside, the discussion between the doctors and nurses was only about our child (0.457, 0.582)
- The team respected the privacy of our children and us (0.874)
- There was a pleasant atmosphere among the staff (0.743)
- The team showed respect for our child and us (0.519, 0.609)

Reliability of items of the EMPATHIC-N tool

The total EMPATHIC-N values and the five domains showed a good level of internal consistency. The five domains’ Cronbach’s α values range from 0.639 to 0.791, whereas the total EMPATHIC-N Cronbach’s α was 0.904. The inter-item correlations (IIC) of the five domains showed significant internal consistency (Table 3).

Table 3. Reliability of items of parental satisfaction with NICU-care service

Dimension | Items | Cronbach’s α | Mean dimension score (SD) | Maximum possible score | Inter-item correlation (IIC) | Item-discriminant validity (IDV)
Information | 12 | 0.639 | 39.55 (8.42) | 72 | 0.22–0.48* | 0.20–0.57
Care & Treatment | 17 | 0.791 | 81.70 (20.49) | 170 | 0.14–0.64* | 0.25–0.38
Parental Participation | 8 | 0.755 | 24.86 (7.79) | 48 | 0.01–0.71* | 0.37–0.54
Organization | 8 | 0.729 | 25.22 (7.39) | 48 | 0.11–0.64* | 0.34–0.69
Professional Attitude | 12 | 0.694 | 38.106 (9.07) | 72 | 0.13–0.56* | 0.22–0.45
EMPATHIC-N tool | 57 | 0.904 | 209.44 (42.82) | 410 | c | c

*Significant value (p < 0.001); c = not computable
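The Cronbach’s α values in Table 3 come from the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores); a minimal pure-Python sketch (not the authors’ code, and the toy data are my own):

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, respondents aligned.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three toy items answered by five respondents, moderately correlated.
demo = [[4, 5, 3, 5, 4], [4, 4, 3, 5, 3], [5, 5, 2, 5, 4]]
print(round(cronbach_alpha(demo), 3))  # -> 0.896
```

Higher inter-item correlation inflates the variance of the totals relative to the item variances, which is what pushes α toward 1.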
Parental satisfaction level with NICU-care service

The overall mean satisfaction level of parents with NICU-care service was 47.8% [95% CI: 43.1–52.5]. Comparing the domains, the lowest parental satisfaction score was in the care and treatment domain (36.9%), while the organization domain showed the highest satisfaction level (59.0%). The information, parental participation, and professional attitude domains showed comparable satisfaction scores (Fig. 1).

Fig. 1. Parents’ overall and dimensional satisfaction with NICU-care services

Factors associated with overall parental satisfaction with NICU-care service

In this study, bivariable logistic regression showed that parental gender, residency, parental hospital stay, neonatal hospital stay, birth weight, and gestational age had p-values of less than 0.2 and were fitted to the multivariable logistic regression model. However, neonatal hospital stay was not associated in the multivariable model (p > 0.05). The multivariable analyses showed that mothers were 2.16 times more satisfied than fathers (AOR = 2.16; 95% CI: 1.28–3.63), and parents from rural areas were 2.94 times more satisfied than those from urban areas (AOR = 2.94; 95% CI: 1.42–6.06). Parents who stayed less than 15 days in the hospital were 2.18 times more satisfied than parents who stayed 15 or more days (AOR = 2.18; 95% CI: 1.13–4.20). Likewise, parents of neonates with normal birth weight (AOR = 2.14; 95% CI: 1.16–3.94) and of full-term neonates (AOR = 2.53; 95% CI: 1.29–4.97) were more satisfied than their counterparts (Table 4).
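The crude odds ratios in Table 4 can be reproduced from the 2×2 counts with the standard Woolf (log-method) confidence interval; a minimal sketch (the function name is my own), checked here against the mother-versus-father row:

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """2x2 table: a/b = satisfied/dissatisfied in the exposed group,
    c/d = satisfied/dissatisfied in the reference group.
    Returns the OR with a Woolf (log-method) 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Mothers (117 satisfied, 107 not) vs. fathers (67 satisfied, 94 not):
or_, lo, hi = crude_odds_ratio(117, 107, 67, 94)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.53 1.02 2.31, matching Table 4
```

The adjusted odds ratios, by contrast, come from the fitted multivariable logistic model and cannot be recovered from single 2×2 tables.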
Table 4. Factors associated with satisfaction of parents with NICU-care service (n = 385)

Variable | Satisfied | Not satisfied | Crude OR (95% CI) | Adjusted OR (95% CI) | p-value
Gender: Mother | 117 (52.2%) | 107 (47.8%) | 1.53 (1.02, 2.31) | 2.16 (1.28, 3.63) | 0.004*
Gender: Father | 67 (41.6%) | 94 (58.4%) | 1 | 1 |
Residency: Urban | 68 (36.4%) | 119 (63.6%) | 1 | 1 |
Residency: Rural | 116 (58.6%) | 82 (41.4%) | 2.48 (1.64, 3.73) | 2.94 (1.42, 6.06) | 0.004*
Parental hospital stay: ≤ 15 days | 140 (57.4%) | 104 (42.6%) | 2.97 (1.92, 4.59) | 2.18 (1.13, 4.20) | 0.019*
Parental hospital stay: > 15 days | 44 (31.2%) | 97 (68.8%) | 1 | 1 |
Birth weight: High | 34 (41.0%) | 49 (59.0%) | 1 | 1 |
Birth weight: Normal | 96 (59.3%) | 66 (40.7%) | 2.09 (1.22, 3.59) | 2.14 (1.16, 3.94) | 0.015*
Birth weight: Low | 54 (38.6%) | 86 (61.4%) | 0.91 (0.52, 1.58) | 0.76 (0.41, 1.41) | 0.38
Gestational age: Premature | 18 (30.0%) | 42 (70.0%) | 1 | 1 |
Gestational age: Full term | 166 (51.1%) | 159 (48.9%) | 2.44 (1.35, 4.41) | 2.53 (1.29, 4.97) | 0.007*

* p-value < 0.05; 1 = reference; p-values were extracted from the multivariable logistic regression model
Discussion: In this study, the total average parental satisfaction score with NICU service was 47.8%. This finding is similar to that of a study done in Ethiopia (50%) [26]. In contrast, studies done in Norway (76%) [7] and Greece (99%) [27] reported higher parental satisfaction scores with NICU service. The study found that parental satisfaction with NICU service was 50.4% in the information subscale, 36.9% in the care and treatment subscale, 50.1% in the parental participation subscale, 59.0% in the organization subscale, and 48.6% in the professional attitude subscale. Parental satisfaction was lowest in the care and treatment subscale.
In contrast, studies in Italy [20] and South Africa [28] found the lowest parental satisfaction scores in the professional attitude and parental participation subscales, respectively. This disparity could be attributed to the NICU’s lack of professional and medical resources to treat and care for neonates in this setting. According to this study, mothers, parents from rural areas, parents of neonates who stayed less than 15 days in the hospital, parents of neonates with normal birth weight, and parents of full-term neonates were all more satisfied with NICU services than their counterparts. This study showed that mothers were more satisfied with NICU service than fathers. Similar studies conducted in Italy [20], Israel [11], and Greece [29] also showed that mothers were more satisfied than fathers with NICU service. A possible explanation is that mothers are allowed to spend more time in the NICU, participate in the care of their newborns, and build closer relationships with medical caregivers than fathers do. In this study, parents from rural areas were also more satisfied than parents from urban areas. This finding is consistent with a study done in Greece [30], which found that parents from rural areas were more satisfied. A probable reason is that parents from rural areas may have lower awareness of the hospital and lower expectations of and demands on NICU service compared with actual practice. Parents of normal-birth-weight and full-term neonates were more satisfied with NICU service than parents of low-birth-weight and preterm neonates in this study. Likewise, a Norwegian study [7] found that parents of newborns with normal birth weights were more satisfied than those with low birth weights. A plausible reason is that parents of full-term and normal-weight neonates are less likely to face unexpected complications, and positive outcomes may lead to greater satisfaction.
In this study, parents of neonates who stayed less than 15 days in the hospital were more satisfied with NICU service than parents whose neonates stayed 15 or more days. Similarly, studies done in Ethiopia [26] and Iran [10] found that parents who stayed for a short period in the NICU were more satisfied. A possible reason is that parents with a short stay are less likely to witness serious neonatal conditions, which can cause emotional distress and a perception of care mismanagement. Limitations of the study: The study's limitations include its single-center design and the lack of analysis of alternatives to logistic regression that might be more applicable for this investigation. Conclusion: There was a low level of parental satisfaction with neonatal intensive care unit service. Among the dimensions of EMPATHIC-N, the lowest parental satisfaction score was in the care and treatment dimension, while the highest was in the organization dimension. As a result, health professionals and hospital administrators should collaborate to improve NICU services to provide high-quality service and satisfy parents.
Background: Parental satisfaction is a well-established outcome indicator and a tool for assessing a healthcare system's quality, as well as input for developing strategies for providing acceptable patient care. This study aimed to assess parental satisfaction with neonatal intensive care unit service and its associated factors. Methods: A cross-sectional study was conducted on parents whose neonates were admitted to the neonatal intensive care unit at Debre Tabor Comprehensive Specialized Hospital, in North Central Ethiopia. Data were collected using the EMPATHIC-N instrument on the day of neonatal discharge, after translating the English version of the instrument into the local language (Amharic). Both bivariable and multivariable logistic analyses were done to identify factors associated with parental satisfaction with neonatal intensive care unit service. P < 0.05 with 95% CI was considered statistically significant. Results: The data analysis was done on 385 parents, with a response rate of 95.06%. The overall average satisfaction of parents with neonatal intensive care unit service was 47.8% [95% CI = (43.1-52.5)]. The average parental satisfaction with neonatal intensive care unit service was 50.4% in the information dimension, 36.9% in the care and treatment dimension, 50.1% in the parental participation dimension, 59.0% in the organization dimension, and 48.6% in the professional attitude dimension. Gender of parents, residency, parental hospital stay, birth weight, and gestational age were factors associated with parental satisfaction. Conclusions: There was a low level of parental satisfaction with neonatal intensive care unit service. Among the dimensions of EMPATHIC-N, the lowest parental satisfaction score was in the care and treatment dimension, while the highest was in the organization dimension.
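The abstract's headline numbers can be cross-checked with a few lines of arithmetic: 385 respondents at a 95.06% response rate implies roughly 405 parents were approached, and a simple binomial Wald interval around the 47.8% figure lands close to the reported 95% CI of (43.1-52.5). The authors presumably derived their interval from the satisfaction-score distribution, so the Wald interval below is only an approximation under that stated assumption:

```python
import math

n = 385                       # respondents analyzed
response_rate = 95.06         # percent, as reported
approached = round(n / (response_rate / 100))   # implied number approached
print(approached)             # → 405

p = 0.478                     # overall average satisfaction (47.8%)
se = math.sqrt(p * (1 - p) / n)                 # binomial standard error
lo, hi = (p - 1.96 * se) * 100, (p + 1.96 * se) * 100
print(round(lo, 1), round(hi, 1))               # → 42.8 52.8, near the reported (43.1, 52.5)
```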
Introduction: Neonatal intensive care units (NICUs) are areas that require careful risk management across a wide range of neonatal care services [1-3]. They necessitate high-cost, efficient critical care delivery with a multidisciplinary team approach that focuses on preventive strategies for improved outcomes [4-6]. Parental tension and emotion run high when a child is admitted to a neonatal intensive care unit (NICU) due to serious illness [7, 8]. Satisfaction is a belief and attitude about a specific service provided by an institution. Parental and patient satisfaction has become a well-established outcome indicator and tool for assessing a healthcare system's quality, as well as input for developing strategies and providing accessible, sustainable, economical, and acceptable patient care [7, 9, 10]. Parental satisfaction reflects the balance between parents' expectations of ideal care and their perception of the real and available care [3, 11, 12]. It is also one of the objectives and missions of every health care center that provides NICU care service [10, 11]. Parent and patient satisfaction has become an important aspect of hospital management initiatives for quality assurance and accreditation. Parental involvement also plays an important role in the delivery of high-quality care, ranging from assisting with activities of daily living to being directly involved in important health care decisions [13, 14]. Parental participation may not always be possible, but effective communication can reduce the effects of crises and improve parental satisfaction [11, 15]. Ethiopia has been working to enhance its healthcare delivery systems by focusing on quality healthcare services, giving special attention to mothers and children.
The Ethiopian Health Service Alliance for Quality has agreed that the initial priority area would be self-motivated and transparent partnerships that stimulate innovation in health care quality management and learning across all hospitals [16]. Given the scarcity of studies on parental satisfaction with NICU care in Ethiopia, as well as clinical observations of parents complaining about NICU care, it is critical to determine the level of parental satisfaction. Thus, this study aimed to identify the level of parental satisfaction with NICU care service and its associated factors in Debre Tabor Comprehensive Specialized Hospital (DTCSH). This study also helps administrators understand the deficiencies of the hospital's NICU services and propose corresponding improvement strategies.
Keywords: NICU | Parents | Satisfaction | Service
MeSH terms: Infant, Newborn | Humans | Intensive Care Units, Neonatal | Personal Satisfaction | Cross-Sectional Studies | Parents | Quality of Health Care
The
34502367
The neoplastic B cells of Helicobacter pylori-related low-grade gastric mucosa-associated lymphoid tissue (MALT) lymphoma proliferate in response to H. pylori; however, the nature of the H. pylori antigen responsible for this proliferation is still unknown. The purpose of this study was to dissect whether CagY might be the H. pylori antigen able to drive B cell proliferation.
BACKGROUND
The B cells and the clonal progeny of T cells from the gastric mucosa of five patients with MALT lymphoma were compared with those of T cell clones obtained from five H. pylori-infected patients with chronic gastritis. The T cell clones were assessed for their specificity to H. pylori CagY, their cytokine profile, and their helper function for B cell proliferation.
METHODS
Twenty-two of 158 CD4+ gastric clones (13.9%) from MALT lymphoma and three of 179 CD4+ clones (1.7%) from chronic gastritis recognized CagY. CagY predominantly drives Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17) secretion by gastric CD4+ T cells from H. pylori-infected patients with low-grade gastric MALT lymphoma. All MALT lymphoma-derived clones dose-dependently increased their B cell help, whereas clones from chronic gastritis lost helper activity at T-to-B-cell ratios greater than 1.
RESULTS
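The difference in CagY reactivity reported above (22/158 CD4+ clones in MALT lymphoma vs. 3/179 in chronic gastritis) can be checked with a standard Pearson chi-square statistic for a 2×2 table, computed by hand. This is an illustrative back-of-the-envelope test on the published counts, not the analysis the authors performed:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# CagY-reactive vs non-reactive CD4+ clones: MALT (22 vs 136), CG (3 vs 176)
chi2 = chi2_2x2(22, 136, 3, 176)
print(round(chi2, 1))  # → 18.3, far above the 3.84 critical value (df = 1, p < 0.05)
```

With expected counts this small in one cell, a Fisher exact test would be the more rigorous choice; the chi-square here simply shows the reported difference is far from chance.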
The results obtained indicate that CagY drives both B cell proliferation and T cell activation in gastric MALT lymphomas.
CONCLUSION
[ "Aged", "B-Lymphocytes", "Bacterial Proteins", "Cell Proliferation", "Female", "Gastric Mucosa", "Gastritis", "Helicobacter Infections", "Helicobacter pylori", "Humans", "Inflammation", "Interferon-gamma", "Lymphocyte Activation", "Lymphocytes", "Lymphoma, B-Cell, Marginal Zone", "Male", "Middle Aged", "Stomach", "Th1 Cells", "Th17 Cells" ]
8431018
1. Introduction
Helicobacter pylori is a spiral-shaped Gram-negative bacterium that chronically infects the stomach of more than 50% of the human population, and is the leading cause of gastric cancer, gastric lymphoma, gastric autoimmunity and peptic ulcer diseases [1,2,3,4,5]. A strong association between Helicobacter pylori infection and the development of gastric mucosa-associated lymphoid tissue (MALT) lymphoma has been demonstrated [6,7,8]. A prerequisite for lymphomagenesis is the development of secondary inflammatory MALT, which is induced by chronic H. pylori infection [7,8]. In the early stages, this tumor is sensitive to the withdrawal of H. pylori-induced T cell help, providing an explanation for both the tendency of the tumor to remain localized at the primary site and its regression after eradication of H. pylori with antibiotics. The tumor cells of low-grade gastric MALT lymphoma are memory B lymphocytes that still respond to differentiation signals, such as CD40 costimulation and cytokines produced by antigen-stimulated T helper (Th) cells [9,10] and their growth depends on antigen-stimulation by H. pylori-specific T cells [11,12]. An important unanswered question remains the chemical nature of the H. pylori factors responsible for the induction of gastric Th cells which can promote the proliferation of B cells. Bacterial products are known to possess immunomodulatory properties and induce B cell responses as well as different types of innate and adaptive responses [13]. Among the bacterial components, some factors associated with malignancy have been identified, although the high degree of genomic variability of H. pylori strains has prevented the complete identification of the factors involved. The major virulence factor of H. pylori is the cag pathogenicity island (cagPAI), an approximately 40 kb genetic locus, containing 31 genes [14,15] and encoding for the so-called type IV secretion system (T4SS). 
This forms a syringe-like structure that injects bacterial components (mainly peptidoglycan and the oncoprotein CagA) into the host target cell [16]. H. pylori strains harboring the cagPAI pathogenicity locus show a significantly increased ability to induce severe pathological outcomes in infected individuals, such as gastric cancer and gastric lymphoma, compared to cagPAI-negative strains [17,18,19,20]. Recently, it was reported that among H. pylori-infected patients, those with gastric low-grade MALT lymphoma are preferentially seropositive for the H. pylori CagY protein [21]. CagY, a VirB10-homologous protein, also known as Cag7 or HP0527, is able to activate innate cells in a flagellin-independent manner. CagY is a TLR5 agonist, and five interaction sites have been identified in the CagY repeat domains [16,22,23]. HP0527 encodes a large protein of 1927 amino acids that is expressed on the surface and has been described as one of the main components of the H. pylori cag T4SS-associated pilus; it may act as a molecular switch that modifies the proinflammatory host responses by modulating T4SS function and tuning CagA injection [24,25]. The aims of this study were (1) to investigate the presence of H. pylori CagY-specific Th cells in the context of low-grade gastric MALT lymphomas, (2) to define the cytokine patterns of these cells, and (3) to assess whether gastric CagY-specific T cells from MALT lymphomas are able to provide help for B cell proliferation.
2. Results
2.1. H. pylori CagY-Specific CD4+ T Cells Predominate in Gastric Low-Grade MALT Lymphoma

To characterize at the clonal level the in vivo activated T cells present in the gastric inflammatory infiltrates of H. pylori-infected patients, two cohorts were collected: five untreated patients with gastric low-grade MALT lymphoma (MALT) and five patients with H. pylori-induced uncomplicated chronic gastritis (CG). All patients were infected with CagA+, VacA+ H. pylori type I strains and were ELISA-positive for anti-CagA serum IgG antibodies. The T cells from all enrolled patients were obtained by multiple biopsies and expanded by culturing in Interleukin-2 (IL-2)-conditioned medium for 10 days. Then, T cell blasts were recovered and cloned by limiting dilution. Comparable numbers of clones were obtained from both cohorts: a total of 158 CD4+ and 17 CD8+ clones from the gastric biopsy specimens of MALT lymphoma, and 179 CD4+ and 22 CD8+ T cell clones from chronic gastritis. All clones were tested for their ability to respond to either H. pylori lysate or purified CagY protein. The data obtained are summarized in Table 1 and show that none of the CD8+ clones from MALT and CG responded to H. pylori CagY antigen or to H. pylori lysate. Analyzing the proliferative response of CD4+ clones to H. pylori lysate, 23% and 22% of the clones showed positivity for MALT and CG, respectively. A marked difference was observed when CagY was used: while 22 CD4+ clones (13.9%) from MALT lymphoma showed antigen-induced proliferation, only three CD4+ clones (1.7%) from chronic gastritis were CagY-specific (Table 1). The percentage of MALT lymphoma-derived T clones reactive for CagY was therefore higher than in CG. A significant difference (p = 0.012) was also found between the mitogenic index of CagY-specific MALT lymphoma-derived T clones (mean 44.48 ± 18.46) and CG-derived ones (mean 14.63 ± 3.89). A significantly higher proliferation (p < 0.001) to CagY than to H. pylori lysate was found in MALT lymphoma-derived T clones (mean CagY mitogenic index 44.48 ± 18.46; mean H. pylori lysate mitogenic index 29.14 ± 11.67) (Figure 1).

2.2. H. pylori CagY Predominantly Drives IFN-γ and IL-17 Secretion by Gastric CD4+ T Cells from H. pylori-Infected Patients with Gastric Low-Grade MALT Lymphoma

To evaluate cytokine production by gastric-derived H. pylori CagY-specific Th clones, each clone was co-cultured in duplicate with autologous antigen-presenting cells (APCs) and H. pylori CagY for 48 h. After antigen stimulation, 32% of clones from MALT lymphoma gastritis produced both Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17), but not Interleukin-4 (IL-4) (Th1/Th17 profile); 27% produced IFN-γ, but not IL-17 or IL-4 (Th1 profile); 23% secreted both Tumor necrosis factor-alpha (TNF-α) and IL-4, but not IL-17 (Th0 profile); and 18% produced IL-4, but not TNF-α or IL-17 (Th2 profile) (Figure 2). Among the three gastric T cell clones obtained from chronic gastritis, two were Th1 and one Th1/Th17.

2.3. Antigen-Dependent B Cell Help by H. pylori CagY-Specific Th Clones

To assess the ability of H. pylori CagY-specific T cell clones to provide antigen-triggered B cell help, irradiated T cell blasts of each clone were co-cultured with autologous peripheral blood B cells at ratios of 0.2, 1, and 5 to 1. At a T-to-B cell ratio of 0.2 to 1, all CagY-specific clones from MALT lymphoma patients, but none from patients with chronic gastritis, provided significant help (p < 0.05) to B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 14 and 3.5; range, 1-26 and 1-6, respectively; Figure 3A). At a T-to-B cell ratio of 1 to 1, all CagY-specific clones from both MALT lymphoma and chronic gastritis patients provided significant help for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 28 and 25; range, 11-46 and 22.8-27.2, respectively; Figure 3A). Finally, at a T-to-B cell ratio of 5 to 1, all 22 Th clones from MALT lymphoma further increased B cell proliferation, with a mean mitogenic index of 37 (range 24-51; Figure 3A), whereas a significant decrease (p < 0.001) in B cell proliferation was observed in the presence of all three Th clones from chronic gastritis, with a mean mitogenic index of 3 (range 1-5; Figure 3A). B cells alone cultured with or without CagY did not proliferate unless autologous MALT lymphoma-derived T cells were added (Figure 3B). B cells cultured with autologous MALT lymphoma-derived T cells without CagY did not proliferate at any T-to-B cell ratio (Figure 3C). Gastric T cell clones specific for H. pylori lysate but not for CagY (15/37 from MALT lymphoma and 36/39 from patients with chronic gastritis) showed no helper activity on B cell proliferation when co-cultured with CagY antigen and autologous B cells (mean mitogenic index 1, range 0.9-1.3, for both MALT and CG) (data not shown).
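The mitogenic (stimulation) index used throughout these results is conventionally the ratio of proliferation (e.g., thymidine-uptake counts per minute) in antigen-stimulated cultures to that in unstimulated controls. A minimal sketch of that computation, using purely hypothetical cpm readings for illustration:

```python
def mitogenic_index(stimulated_cpm, unstimulated_cpm):
    """Ratio of mean proliferation in antigen-stimulated cultures
    to mean proliferation in unstimulated controls."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(stimulated_cpm) / mean(unstimulated_cpm)

# Hypothetical triplicate cpm readings for one T cell clone
stim = [31000, 29500, 30200]   # cultured with CagY + autologous APCs
unstim = [1000, 1100, 950]     # medium only
print(round(mitogenic_index(stim, unstim), 1))  # → 29.7
```

An index near 1 means no antigen-driven proliferation, which is why the non-CagY-specific clones above (mean index 1, range 0.9-1.3) are read as providing no helper activity.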
5. Conclusions
The results obtained so far suggest that H. pylori CagY is an important factor in generating Th1 and Th17 responses in H. pylori-infected patients with gastric MALT lymphoma. We show that H. pylori CagY-specific Th cells derived from the gastric mucosa of H. pylori-infected patients with gastric low-grade MALT lymphoma can provide B cell help in a dose-dependent manner. Taken together, these results suggest that the T cell-dependent B cell proliferation induced by H. pylori CagY may represent an important link between bacterial infection and gastric lymphoma. The CagY, Th1, and Th17 pathways might be useful for the design of novel diagnostics, vaccines, and therapeutics for gastric MALT lymphomas related to H. pylori infection.
[ "2.1. H. pylori CagY-Specific CD4+ T Cells Predominate in Gastric Low-Grade MALT Lymphoma", "2.2. H. pylori CagY Predominantly Drives IFN-γ and IL-17 Secretion by Gastric CD4+ T Cells from H. pylori-Infected Patients with Gastric Low-Grade MALT Lymphoma", "2.3. Antigen-Dependent B Cell Help by H. pylori CagY-Specific Th Clones", "4. Materials and Methods", "4.2. Reagents", "4.3. Generation of H. pylori-Specific T Cell Clones", "4.4. Cytokine Profile of H. pylori CagY–Specific Gastric T Cell Clones", "4.5. Helper Activity of T Cell Lones for B Cell Proliferation", "4.6. Statistical Analysis" ]
[ "To characterize at the clonal level the in vivo activated T cells present in the gastric inflammatory infiltrates of H. pylori-infected patients, two cohorts were collected: five untreated patients with gastric low-grade MALT (MALT) and five patients with H. pylori-induced uncomplicated chronic gastritis (CG). All patients were infected with CagA1, VacA1 H. pylori type I strains and were ELISA positive for anti-CagA serum IgG antibodies. The T cells from all enrolled patients were obtained by multiple biopsies and expanded by culturing in Interleukin-2 (IL-2)-conditioned medium for 10 days. Then, T cell blasts were recovered and cloned by limiting dilution.\nComparable numbers of clones were obtained from both cohorts: a total of 158 CD4+ and 17 CD8+ clones were obtained from the gastric biopsy specimens of MALT lymphoma. 179 CD4+ and 22 CD8+ T cell clones from chronic gastritis. All clones were tested for their ability to respond to either H. pylori lysate or purified CagY protein. The data obtained are summarized in Table 1 and show that none of the CD8+ clones from MALT and CG responded to H. pylori CagY antigen or to H. pylori lysate. Analyzing the proliferative response of CD4+ clones to H. pylori lysate, 23 and 22% of the clones showed positivity for MALT and CG, respectively. A marked difference was observed when CagY was used. While 22 CD4+ (corresponding to 13.9%) of clones from MALT lymphoma showed antigen-induced proliferation, only three CD4+ clones (1.7% of the clones) from chronic gastritis were CagY-specific (Table 1).\nThe percentage of MALT lymphoma-derived T clones reactive for CagY was higher than CG. A significant difference (p = 0.012) was also found between the mitogenic index of CagY-specific MALT lymphoma-derived T clones (mean mitogenic index 44.48 ± 18.46) and CG-derived ones (mean mitogenic index 14.63 ± 3.89). A significantly higher proliferation (p < 0.001) to CagY than H. 
pylori lysate was found in MALT lymphoma-derived T clones (mean CagY mitogenic index 44.48 ± 18.46; mean H. pylori lysate mitogenic index 29.14 ± 11.67) (Figure 1).", "To evaluate cytokine production by gastric-derived H. pylori CagY–specific Th clones, each clone was co-cultured in duplicate with autologous antigen presenting cells (APCs) and H. pylori CagY for 48 h. After antigen stimulation, 32% of clones from MALT lymphoma gastritis produced both Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17), but not Interleukin-4 (IL-4) (Th1/Th17 profile), 27% of clones produced IFN-γ, but not IL-17, nor IL-4 (Th1 profile), 23% secreted both Tumor necrosis factor- alpha (TNF-α) and IL-4, but not IL-17 (Th0 profile), and 18% produced IL-4, but not TNF-α, nor IL-17 (Th2 profile) (Figure 2). Among the three gastric T cell clones obtained from chronic gastritis, two were Th1, and one Th1/Th17.", "To assess the ability of H. pylori CagY-specific T cell clones to provide antigen-triggered B cell help, irradiated T cell blasts of each clone were co-cultured with autologous peripheral blood B cells at a ratio of 0.2, 1, and 5 to 1. At a T-to-B cell ratio of 0.2 to 1, all CagY-specific clones from MALT lymphoma patients but none from patients with chronic gastritis, provided significant help (p < 0.05) to B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 14 and 3.5; range, 1–26 and 1–6, respectively; Figure 3A). At a T-to-B cell ratio of 1 to 1, all CagY-specific clones from both MALT lymphoma and chronic gastritis patients provided significant help for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 28 and 25; range, 11–46 and 22.8–27.2, respectively; Figure 3A). Finally, at a T-to-B cell ratio of 5 to 1, all 22 Th clones from MALT lymphoma further increased B cell proliferation with a mean mitogenic index of 37 (range 24–51; Figure 3A). 
A significant decrease (p < 0.001) in B cell proliferation was observed in the presence of all three Th clones from chronic gastritis, with a mean mitogenic index of 3 (range 1–5; Figure 3A). B cells alone, cultured with or without CagY, did not proliferate unless autologous MALT lymphoma-derived T cells were added (Figure 3B). B cells cultured with autologous MALT lymphoma-derived T cells but without CagY did not proliferate at any T-to-B cell ratio (Figure 3C).\nGastric T cell clones specific for H. pylori lysate but not for CagY (15/37 from MALT lymphoma and 36/39 from patients with chronic gastritis) showed no helper activity on B cell proliferation when co-cultured with CagY antigen and autologous B cells (mean mitogenic index, 1; range, 0.9–1.3, both for MALT and for CG) (data not shown).", " 4.1. Patients Five untreated patients (three men and two women; mean age, 69 years; range, 63–75 years) with low-grade B cell lymphoma of gastric MALT (MALToma) and five patients (three men and two women; mean age, 59 years; range, 55–68 years) with uncomplicated chronic gastritis provided informed consent for this study, which was performed after approval by the local ethical committee (protocol number 14936_bio, approved on 8 October 2019). Multiple biopsy specimens were obtained from the gastric antrum of patients with chronic gastritis. In patients with low-grade MALT lymphoma, biopsy specimens were obtained from perilesional regions. Biopsy specimens were used for diagnosis (positive urease test, typing of H. pylori strain, and histology) and culture of tumor-infiltrating T lymphocytes. All patients with chronic gastritis or MALT lymphoma were infected with CagA+, VacA+ H. 
pylori type I strains and were positive for anti-CagA serum immunoglobulin (Ig) G antibodies, as assessed by specific ELISA (MyBioSource, San Diego, CA, USA).\n 4.2. Reagents Helicobacter pylori CagY was produced as described [21]. We ruled out the presence of contaminants by a limulus test; the H. pylori CagY preparation tested limulus-negative throughout the whole study.\n 4.3. Generation of H. pylori-Specific T Cell Clones Biopsy specimens were cultured for 10 days in RPMI 1640 medium (Biochrom AG, Berlin, Germany) supplemented with human IL-2 (PeproTech, London, UK) (50 U/mL) to expand in vivo activated T cells [10]. 
Mucosal specimens were disrupted and single T-cell blasts were cloned under limiting dilution (0.3 cells/well) as reported previously [40]. Each clone was screened (in triplicate cultures for each condition) for responsiveness to H. pylori by measuring [3H] thymidine (Perkin Elmer, Waltham, MA, US) uptake after 60 h of stimulation with medium alone, H. pylori lysate (aqueous extract of the NCTC11637 strain, 10 μg/mL being optimal), or recombinant CagY protein (1 μg/mL), in the presence of irradiated autologous mononuclear cells as APCs [40]. A mitogenic index greater than 10 was considered a positive result.\n 4.4. Cytokine Profile of H. pylori CagY–Specific Gastric T Cell Clones To assess the cytokine production of CagY-specific Th clones, 10^6 T cell blasts of each clone were co-cultured in duplicate cultures for 48 h in 1 mL of medium with 5 × 10^5 irradiated autologous peripheral blood mononuclear cells as APCs and CagY (1 μg/mL). To induce cytokine production by gastric T cell clones, T cell blasts were stimulated for 36 h with phorbol 12-myristate 13-acetate (PMA, 10 ng/mL) (BioLegend, San Diego, CA, USA) plus anti-CD3 monoclonal antibody (200 ng/mL) (BioLegend, San Diego, CA, USA) [10]. 
Duplicate samples of each supernatant were assayed by ELISA for IFN-γ, IL-4, TNF-α, and IL-17 (R&D Systems, Minneapolis, MN, USA).\n 4.5. Helper Activity of T Cell Clones for B Cell Proliferation B cells were prepared from each patient as described [13]. Briefly, PBMCs isolated by the Ficoll-Hypaque gradient method (Lymphoprep, Alere Technologies, Oslo, Norway) from each enrolled patient were processed to obtain a purified population of B lymphocytes by positive-selection magnetic labelling with anti-CD19 microbeads (MACS, Miltenyi Biotec, Bergisch Gladbach, Germany). The target population was incubated with specific microbeads and the cell suspension was loaded onto MACS columns placed in a magnetic field; B cells were retained within the column while the other PBMCs were eluted away. Purified B lymphocytes were then harvested, washed, and used for the subsequent experiments.\nThe ability of gastric Th clones to induce B cell proliferation under CagY stimulation was assessed by measuring [3H] thymidine uptake by peripheral blood B cells (3 × 10^4) alone or co-cultured for four days with different numbers of irradiated (2000 rad) autologous clonal T cell blasts (T-to-B cell ratios of 0.2, 1, and 5) with or without H. 
pylori CagY antigen (1 μg/mL), as described previously [10].\n 4.6. Statistical Analysis Descriptive statistics were used for the calculation of absolute frequencies and percentages of qualitative data, as well as for the mean and standard deviation of quantitative data. After evaluating the homogeneity of variance with Hartley’s test, the t-test (95% CI) was performed, and p < 0.05 was considered statistically significant. Statistical analysis was computed using IBM SPSS Statistics software, version 27.", "Helicobacter pylori CagY was produced as described [21]. We ruled out the presence of contaminants by a limulus test; the H. pylori CagY preparation tested limulus-negative throughout the whole study.", "Biopsy specimens were cultured for 10 days in RPMI 1640 medium (Biochrom AG, Berlin, Germany) supplemented with human IL-2 (PeproTech, London, UK) (50 U/mL) to expand in vivo activated T cells [10]. Mucosal specimens were disrupted and single T-cell blasts were cloned under limiting dilution (0.3 cells/well) as reported previously [40]. Each clone was screened (in triplicate cultures for each condition) for responsiveness to H. pylori by measuring [3H] thymidine (Perkin Elmer, Waltham, MA, US) uptake after 60 h of stimulation with medium alone, H. pylori lysate (aqueous extract of the NCTC11637 strain, 10 μg/mL being optimal), or recombinant CagY protein (1 μg/mL), in the presence of irradiated autologous mononuclear cells as APCs [40]. A mitogenic index greater than 10 was considered a positive result.", "To assess the cytokine production of CagY-specific Th clones, 10^6 T cell blasts of each clone were co-cultured in duplicate cultures for 48 h in 1 mL of medium with 5 × 10^5 irradiated autologous peripheral blood mononuclear cells as APCs and CagY (1 μg/mL). To induce cytokine production by gastric T cell clones, T cell blasts were stimulated for 36 h with phorbol 12-myristate 13-acetate (PMA, 10 ng/mL) (BioLegend, San Diego, CA, USA) plus anti-CD3 monoclonal antibody (200 ng/mL) (BioLegend, San Diego, CA, USA) [10]. Duplicate samples of each supernatant were assayed by ELISA for IFN-γ, IL-4, TNF-α, and IL-17 (R&D Systems, Minneapolis, MN, USA).", "B cells were prepared from each patient as described [13]. 
Briefly, PBMCs isolated by the Ficoll-Hypaque gradient method (Lymphoprep, Alere Technologies, Oslo, Norway) from each enrolled patient were processed to obtain a purified population of B lymphocytes by positive-selection magnetic labelling with anti-CD19 microbeads (MACS, Miltenyi Biotec, Bergisch Gladbach, Germany). The target population was incubated with specific microbeads and the cell suspension was loaded onto MACS columns placed in a magnetic field; B cells were retained within the column while the other PBMCs were eluted away. Purified B lymphocytes were then harvested, washed, and used for the subsequent experiments.\nThe ability of gastric Th clones to induce B cell proliferation under CagY stimulation was assessed by measuring [3H] thymidine uptake by peripheral blood B cells (3 × 10^4) alone or co-cultured for four days with different numbers of irradiated (2000 rad) autologous clonal T cell blasts (T-to-B cell ratios of 0.2, 1, and 5) with or without H. pylori CagY antigen (1 μg/mL), as described previously [10].", "Descriptive statistics were used for the calculation of absolute frequencies and percentages of qualitative data, as well as for the mean and standard deviation of quantitative data. After evaluating the homogeneity of variance with Hartley’s test, the t-test (95% CI) was performed, and p < 0.05 was considered statistically significant. Statistical analysis was computed using IBM SPSS Statistics software, version 27." ]
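The screening and statistics described in the methods (a mitogenic index, i.e., the ratio of antigen-stimulated to background [3H] thymidine uptake, with an index > 10 scored positive; Hartley's homogeneity-of-variance test followed by a two-sample t-test) can be sketched in Python. This is an illustrative sketch only, not the authors' SPSS workflow; the function names and example cpm values are hypothetical.

```python
import statistics
from math import sqrt

def mitogenic_index(stimulated_cpm, background_cpm):
    # Ratio of antigen-stimulated to unstimulated [3H] thymidine uptake (cpm);
    # per the screening criterion above, an index > 10 counts as a positive clone.
    return stimulated_cpm / background_cpm

def hartley_fmax(*groups):
    # Hartley's F-max statistic: largest sample variance over smallest,
    # used as the homogeneity-of-variance check before the t-test.
    variances = [statistics.variance(g) for g in groups]
    return max(variances) / min(variances)

def pooled_t(a, b):
    # Two-sample Student's t statistic with pooled variance (equal-variance case).
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical cpm values for one clone: index 4448/100 = 44.48, scored positive.
assert mitogenic_index(4448, 100) > 10
```

In practice the p-value for the t statistic would come from a t distribution (e.g., SciPy's `scipy.stats.ttest_ind`); the sketch only reproduces the test statistic and the variance check.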
[ null, null, null, null, null, null, null, null, null ]
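The limiting-dilution cloning described in the methods (0.3 cells/well) relies on Poisson seeding statistics: at a low mean cell number per well, most wells that grow at all were seeded by a single cell and therefore yield clonal progeny. A minimal sketch of that standard calculation follows; it is not from the paper, and the function name is ours.

```python
from math import exp

def fraction_monoclonal(mean_cells_per_well):
    # Poisson seeding: P(k cells in a well) = exp(-lam) * lam**k / k!.
    # Among wells that grow at all (k >= 1), the fraction seeded with exactly
    # one cell is P(1) / P(k >= 1) = lam * exp(-lam) / (1 - exp(-lam)).
    lam = mean_cells_per_well
    return lam * exp(-lam) / (1.0 - exp(-lam))

# At 0.3 cells/well, roughly 86% of growing wells are expected to be monoclonal.
```

Lowering the seeding density raises the monoclonal fraction but leaves more empty wells, which is the usual trade-off in limiting-dilution cloning.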
[ "1. Introduction", "2. Results", "2.1. H. pylori CagY-Specific CD4+ T Cells Predominate in Gastric Low-Grade MALT Lymphoma", "2.2. H. pylori CagY Predominantly Drives IFN-γ and IL-17 Secretion by Gastric CD4+ T Cells from H. pylori-Infected Patients with Gastric Low-Grade MALT Lymphoma", "2.3. Antigen-Dependent B Cell Help by H. pylori CagY-Specific Th Clones", "3. Discussion", "4. Materials and Methods", "4.1. Patients", "4.2. Reagents", "4.3. Generation of H. pylori-Specific T Cell Clones", "4.4. Cytokine Profile of H. pylori CagY–Specific Gastric T Cell Clones", "4.5. Helper Activity of T Cell Lones for B Cell Proliferation", "4.6. Statistical Analysis", "5. Conclusions" ]
[ "Helicobacter pylori is a spiral-shaped Gram-negative bacterium that chronically infects the stomach of more than 50% of the human population, and is the leading cause of gastric cancer, gastric lymphoma, gastric autoimmunity and peptic ulcer diseases [1,2,3,4,5]. A strong association between Helicobacter pylori infection and the development of gastric mucosa-associated lymphoid tissue (MALT) lymphoma has been demonstrated [6,7,8]. A prerequisite for lymphomagenesis is the development of secondary inflammatory MALT, which is induced by chronic H. pylori infection [7,8]. In the early stages, this tumor is sensitive to the withdrawal of H. pylori-induced T cell help, providing an explanation for both the tendency of the tumor to remain localized at the primary site and its regression after eradication of H. pylori with antibiotics. The tumor cells of low-grade gastric MALT lymphoma are memory B lymphocytes that still respond to differentiation signals, such as CD40 costimulation and cytokines produced by antigen-stimulated T helper (Th) cells [9,10] and their growth depends on antigen-stimulation by H. pylori-specific T cells [11,12]. An important unanswered question remains the chemical nature of the H. pylori factors responsible for the induction of gastric Th cells which can promote the proliferation of B cells. Bacterial products are known to possess immunomodulatory properties and induce B cell responses as well as different types of innate and adaptive responses [13].\nAmong the bacterial components, some factors associated with malignancy have been identified, although the high degree of genomic variability of H. pylori strains has prevented the complete identification of the factors involved. The major virulence factor of H. pylori is the cag pathogenicity island (cagPAI), an approximately 40 kb genetic locus, containing 31 genes [14,15] and encoding for the so-called type IV secretion system (T4SS). 
This forms a syringe-like structure that injects bacterial components (mainly peptidoglycan and the oncoprotein CagA) into the host target cell [16]. H. pylori strains harboring the cagPAI pathogenicity locus show a significantly increased ability to induce severe pathological outcomes in infected individuals, such as gastric cancer and gastric lymphoma, compared to cagPAI-negative strains [17,18,19,20]. Recently, it was reported that among H. pylori-infected patients, those with gastric low-grade MALT lymphoma are preferentially seropositive for the H. pylori CagY protein [21]. CagY, a VirB10-homologous protein, also known as Cag7 or HP0527, is able to activate innate cells in a flagellin-independent manner. CagY is a TLR5 agonist, and five interaction sites have been identified in the CagY repeat domains [16,22,23]. HP0527 encodes a large protein of 1927 amino acids that is expressed on the surface and has been described as one of the main components of the H. pylori cag T4SS-associated pilus; it may act as a molecular switch that modifies the proinflammatory host responses by modulating T4SS function and tuning CagA injection [24,25].\nThe aims of this study were (1) to investigate the presence of H. pylori CagY-specific Th cells in the context of low-grade gastric MALT lymphomas, (2) to define the cytokine patterns of these cells, and (3) to assess whether gastric CagY-specific T cells from MALT lymphomas are able to provide help for B cell proliferation.", " 2.1. H. pylori CagY-Specific CD4+ T Cells Predominate in Gastric Low-Grade MALT Lymphoma To characterize at the clonal level the in vivo activated T cells present in the gastric inflammatory infiltrates of H. pylori-infected patients, two cohorts were collected: five untreated patients with gastric low-grade MALT lymphoma (MALT) and five patients with H. pylori-induced uncomplicated chronic gastritis (CG). All patients were infected with CagA+, VacA+ H. 
pylori type I strains and were ELISA positive for anti-CagA serum IgG antibodies. The T cells from all enrolled patients were obtained by multiple biopsies and expanded by culturing in Interleukin-2 (IL-2)-conditioned medium for 10 days. Then, T cell blasts were recovered and cloned by limiting dilution.\nComparable numbers of clones were obtained from both cohorts: a total of 158 CD4+ and 17 CD8+ clones were obtained from the gastric biopsy specimens of MALT lymphoma, and 179 CD4+ and 22 CD8+ T cell clones from chronic gastritis. All clones were tested for their ability to respond to either H. pylori lysate or purified CagY protein. The data, summarized in Table 1, show that none of the CD8+ clones from MALT and CG responded to H. pylori CagY antigen or to H. pylori lysate. Analyzing the proliferative response of CD4+ clones to H. pylori lysate, 23% and 22% of the clones were positive for MALT and CG, respectively. A marked difference was observed when CagY was used: while 22 CD4+ clones (13.9%) from MALT lymphoma showed antigen-induced proliferation, only three CD4+ clones (1.7%) from chronic gastritis were CagY-specific (Table 1).\nThe percentage of MALT lymphoma-derived T cell clones reactive to CagY was thus higher than that of CG-derived clones. A significant difference (p = 0.012) was also found between the mitogenic index of CagY-specific MALT lymphoma-derived T cell clones (mean mitogenic index 44.48 ± 18.46) and CG-derived ones (mean mitogenic index 14.63 ± 3.89). A significantly higher proliferation (p < 0.001) to CagY than to H. pylori lysate was found in MALT lymphoma-derived T cell clones (mean CagY mitogenic index 44.48 ± 18.46; mean H. pylori lysate mitogenic index 29.14 ± 11.67) (Figure 1).\n 2.2. H. pylori CagY Predominantly Drives IFN-γ and IL-17 Secretion by Gastric CD4+ T Cells from H. 
pylori-Infected Patients with Gastric Low-Grade MALT Lymphoma To evaluate cytokine production by gastric-derived H. pylori CagY–specific Th clones, each clone was co-cultured in duplicate with autologous antigen presenting cells (APCs) and H. pylori CagY for 48 h. After antigen stimulation, 32% of clones from MALT lymphoma produced both Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17), but not Interleukin-4 (IL-4) (Th1/Th17 profile); 27% produced IFN-γ, but neither IL-17 nor IL-4 (Th1 profile); 23% secreted both Tumor necrosis factor-alpha (TNF-α) and IL-4, but not IL-17 (Th0 profile); and 18% produced IL-4, but neither TNF-α nor IL-17 (Th2 profile) (Figure 2). Among the three gastric T cell clones obtained from chronic gastritis, two were Th1 and one Th1/Th17.\n 2.3. Antigen-Dependent B Cell Help by H. pylori CagY-Specific Th Clones To assess the ability of H. pylori CagY-specific T cell clones to provide antigen-triggered B cell help, irradiated T cell blasts of each clone were co-cultured with autologous peripheral blood B cells at T-to-B cell ratios of 0.2, 1, and 5 to 1. 
At a T-to-B cell ratio of 0.2 to 1, all CagY-specific clones from MALT lymphoma patients, but none from patients with chronic gastritis, provided significant help (p < 0.05) to B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 14 and 3.5; range, 1–26 and 1–6, respectively; Figure 3A). At a T-to-B cell ratio of 1 to 1, all CagY-specific clones from both MALT lymphoma and chronic gastritis patients provided significant help for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 28 and 25; range, 11–46 and 22.8–27.2, respectively; Figure 3A). Finally, at a T-to-B cell ratio of 5 to 1, all 22 Th clones from MALT lymphoma further increased B cell proliferation, with a mean mitogenic index of 37 (range 24–51; Figure 3A). A significant decrease (p < 0.001) in B cell proliferation was observed in the presence of all three Th clones from chronic gastritis, with a mean mitogenic index of 3 (range 1–5; Figure 3A). B cells alone, cultured with or without CagY, did not proliferate unless autologous MALT lymphoma-derived T cells were added (Figure 3B). B cells cultured with autologous MALT lymphoma-derived T cells but without CagY did not proliferate at any T-to-B cell ratio (Figure 3C).\nGastric T cell clones specific for H. pylori lysate but not for CagY (15/37 from MALT lymphoma and 36/39 from patients with chronic gastritis) showed no helper activity on B cell proliferation when co-cultured with CagY antigen and autologous B cells (mean mitogenic index, 1; range, 0.9–1.3, both for MALT and for CG) (data not shown).", "To characterize at the clonal level the in vivo activated T cells present in the gastric inflammatory infiltrates of H. pylori-infected patients, two cohorts were collected: five untreated patients with gastric low-grade MALT lymphoma (MALT) and five patients with H. pylori-induced uncomplicated chronic gastritis (CG). All patients were infected with CagA+, VacA+ H. 
pylori type I strains and were ELISA positive for anti-CagA serum IgG antibodies. The T cells from all enrolled patients were obtained by multiple biopsies and expanded by culturing in Interleukin-2 (IL-2)-conditioned medium for 10 days. Then, T cell blasts were recovered and cloned by limiting dilution.\nComparable numbers of clones were obtained from both cohorts: a total of 158 CD4+ and 17 CD8+ clones were obtained from the gastric biopsy specimens of MALT lymphoma, and 179 CD4+ and 22 CD8+ T cell clones from chronic gastritis. All clones were tested for their ability to respond to either H. pylori lysate or purified CagY protein. The data, summarized in Table 1, show that none of the CD8+ clones from MALT and CG responded to H. pylori CagY antigen or to H. pylori lysate. Analyzing the proliferative response of CD4+ clones to H. pylori lysate, 23% and 22% of the clones were positive for MALT and CG, respectively. A marked difference was observed when CagY was used: while 22 CD4+ clones (13.9%) from MALT lymphoma showed antigen-induced proliferation, only three CD4+ clones (1.7%) from chronic gastritis were CagY-specific (Table 1).\nThe percentage of MALT lymphoma-derived T cell clones reactive to CagY was thus higher than that of CG-derived clones. A significant difference (p = 0.012) was also found between the mitogenic index of CagY-specific MALT lymphoma-derived T cell clones (mean mitogenic index 44.48 ± 18.46) and CG-derived ones (mean mitogenic index 14.63 ± 3.89). A significantly higher proliferation (p < 0.001) to CagY than to H. pylori lysate was found in MALT lymphoma-derived T cell clones (mean CagY mitogenic index 44.48 ± 18.46; mean H. pylori lysate mitogenic index 29.14 ± 11.67) (Figure 1).", "To evaluate cytokine production by gastric-derived H. pylori CagY–specific Th clones, each clone was co-cultured in duplicate with autologous antigen presenting cells (APCs) and H. pylori CagY for 48 h. 
After antigen stimulation, 32% of clones from MALT lymphoma produced both Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17), but not Interleukin-4 (IL-4) (Th1/Th17 profile); 27% produced IFN-γ, but neither IL-17 nor IL-4 (Th1 profile); 23% secreted both Tumor necrosis factor-alpha (TNF-α) and IL-4, but not IL-17 (Th0 profile); and 18% produced IL-4, but neither TNF-α nor IL-17 (Th2 profile) (Figure 2). Among the three gastric T cell clones obtained from chronic gastritis, two were Th1 and one Th1/Th17.", "To assess the ability of H. pylori CagY-specific T cell clones to provide antigen-triggered B cell help, irradiated T cell blasts of each clone were co-cultured with autologous peripheral blood B cells at T-to-B cell ratios of 0.2, 1, and 5 to 1. At a T-to-B cell ratio of 0.2 to 1, all CagY-specific clones from MALT lymphoma patients, but none from patients with chronic gastritis, provided significant help (p < 0.05) to B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 14 and 3.5; range, 1–26 and 1–6, respectively; Figure 3A). At a T-to-B cell ratio of 1 to 1, all CagY-specific clones from both MALT lymphoma and chronic gastritis patients provided significant help for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 28 and 25; range, 11–46 and 22.8–27.2, respectively; Figure 3A). Finally, at a T-to-B cell ratio of 5 to 1, all 22 Th clones from MALT lymphoma further increased B cell proliferation, with a mean mitogenic index of 37 (range 24–51; Figure 3A). A significant decrease (p < 0.001) in B cell proliferation was observed in the presence of all three Th clones from chronic gastritis, with a mean mitogenic index of 3 (range 1–5; Figure 3A). B cells alone, cultured with or without CagY, did not proliferate unless autologous MALT lymphoma-derived T cells were added (Figure 3B). 
B cells cultured with autologous MALT lymphoma-derived T cells but without CagY did not proliferate at any T-to-B cell ratio (Figure 3C).\nGastric T cell clones specific for H. pylori lysate but not for CagY (15/37 from MALT lymphoma and 36/39 from patients with chronic gastritis) showed no helper activity on B cell proliferation when co-cultured with CagY antigen and autologous B cells (mean mitogenic index, 1; range, 0.9–1.3, both for MALT and for CG) (data not shown).", "H. pylori induces a strong inflammatory response that is directed at clearing the infection; if not controlled, however, the response can be harmful to the host and eventually lead to the development of gastric MALT lymphoma [26,27,28]. It has been shown that CagA activates the mTOR Complex 1 (mTORC1), which, in turn, promotes the expression and release of proinflammatory cytokines, chemokines, and an antimicrobial peptide from gastric epithelial cells [29]. H. pylori stimulates macrophages, both in vitro and in vivo, to produce a proliferation-inducing ligand (APRIL), a crucial cytokine able to promote lymphomagenesis and B cell proliferation that is abundantly expressed in gastric MALT lymphoma [30]. Lymphoma-infiltrating macrophages are a major gastric source of APRIL. By using a model of lymphomagenesis based on Helicobacter sp. infection of transgenic C57BL6 mice expressing the human form of the APRIL cytokine (Tg-hAPRIL), Blosse et al. [31] characterized the gastric mucosal inflammatory response associated with gastric MALT lymphoma and highlighted that all T cell subtypes infiltrate gastric MALT lymphoma, including regulatory T cells, both in the animal model and in human gastric MALT lymphoma patients. The regulatory T cell response might contribute to the persistence of the pathogen in the gastric mucosa by delaying the inflammatory response to allow the chronic antigen stimulation necessary for lymphoid proliferation. 
In line with a previous report [30], the authors found APRIL significantly dysregulated in human gastric MALT lymphoma and revealed that the cytokine is mainly expressed by eosinophils, suggesting a pro-tumorigenic potential of these cells. Using an antibody that recognizes the secreted and internalized form of APRIL, they also confirmed that the target cells of the cytokine are B cells [32]. Besides APRIL, the cytokine BAFF also contributes to B cell lymphomagenesis during chronic H. pylori infection [31]. Chonwerawong et al. revealed that H. pylori upregulates NLRC5 expression in the macrophages and gastric tissues of mice and humans and that this expression correlates with gastritis severity. However, by taking advantage of NLRC5-deficient macrophages and of knockout mice with nonfunctional Nlrc5 within the myeloid cell lineage, the authors found that NLRC5 negatively modulates the production of proinflammatory cytokines, including BAFF, and protects against the formation of mucosal B cell lymphoid tissue in response to chronic Helicobacter infection in mice [33].\nIn vivo activated T cells in the lesional gastric mucosa of five patients with H. pylori–associated low-grade MALT lymphoma were expanded in our study and efficiently cloned to assess their specificity for the H. pylori CagY protein and their functional profile. This procedure has proved useful and accurate for in vitro studies of tissue-infiltrating T cells in many diseases [13,34,35]. In the progeny of gastric T cells from MALT lymphoma, but not chronic gastritis, a high proportion of T cell clones were reactive to the H. pylori CagY protein. Although it remains controversial whether infection with more aggressive CagA-positive H. pylori strains is associated with MALT lymphoma [36,37], the present data suggest that CagY is one of the immunodominant targets of gastric T cells in gastric low-grade MALT lymphoma, but not in uncomplicated chronic gastritis. 
On the other hand, we cannot exclude that other, still undefined antigens of H. pylori may be involved in driving gastric T cell and B cell responses in patients with low-grade gastric MALT lymphoma.\nThe main limitation of this study is the small sample size: five gastric MALT lymphoma patients and five chronic gastritis subjects. This reflects both the low frequency of MALT lymphoma in the population and the restricted sampling imposed by the COVID-19 pandemic. Nevertheless, the high number of T cell clones obtained from T lymphocytes infiltrating the gastric mucosa of the tested subjects gives this study the power to provide a relevant characterization of the cellular immune response to the H. pylori protein CagY. H. pylori CagY-activated T cell clones from MALT lymphoma showed higher helper activity for B cell proliferation than clones generated from chronic gastritis. This supports the concept that H. pylori CagY-specific T cells are responsible for the abnormal B cell growth that probably precedes and favors the development of low-grade B cell lymphoma at the gastric level in some H. pylori-infected patients. A possible mechanism for enhanced B cell proliferation might be the abnormal production of Th-derived cytokines active on B cell growth. MALT lymphoma-like lesions of the gastric mucosa were found after long-term Helicobacter felis infection in aged BALB/c mice, a strain genetically prone to high production of Th2 cytokines and B cell responses [38]. Th2-skewed cytokine production in the local T cell response might account for enhanced B cell proliferation in MALT lymphoma. The majority of CagY-specific T cells in gastric MALT lymphoma produced IFN-γ and TNF-α. A significant proportion of T cells from MALT lymphoma produced IL-17 together with IFN-γ. Some CagY-specific T cells were able to produce IL-4. 
We can speculate that almost all CagY-specific T cells were able to produce several cytokines, such as TNF-α and IL-4, able to promote B cell proliferation. Moreover, it has recently been shown that IL-17 can also promote the growth of human germinal center-derived non-Hodgkin B cell lymphoma [39].\nBased on the results obtained so far, we can conclude that in patients with H. pylori infection and gastric low-grade MALT lymphoma, the H. pylori CagY protein was able to promote gastric Th1 and Th17 inflammation through the production of various cytokines that can promote B cell proliferation. H. pylori CagY-specific Th cells derived from the gastric mucosa of H. pylori-infected patients with gastric low-grade MALT lymphoma were able to provide significantly higher B cell help compared to T cells obtained from patients with uncomplicated chronic gastritis.", " 4.1. Patients Five untreated patients (three men and two women; mean age, 69 years; range, 63–75 years) with low-grade B cell lymphoma of gastric MALT (MALToma) and five patients (three men and two women; mean age, 59 years; range, 55–68 years) with uncomplicated chronic gastritis provided informed consent for this study, which was performed after approval by the local ethical committee (protocol number 14936_bio, approved on 8 October 2019). Multiple biopsy specimens were obtained from the gastric antrum of patients with chronic gastritis. In patients with low-grade MALT lymphoma, biopsy specimens were obtained from perilesional regions. Biopsy specimens were used for diagnosis (positive urease test, typing of H. pylori strain, and histology) and culture of tumor-infiltrating T lymphocytes. All patients with chronic gastritis or MALT lymphoma were infected with CagA+ VacA+ H. 
pylori type I strains and were positive for anti-CagA serum immunoglobulin (Ig) G antibodies, as assessed by specific ELISA (MyBioSource, San Diego, CA, USA).\n 4.2. Reagents Helicobacter pylori CagY was produced as described [21]. We ruled out the presence of contaminants by a limulus test; the H. pylori CagY preparation tested negative in the limulus test throughout the whole study.\n 4.3. Generation of H. pylori-Specific T Cell Clones Biopsy specimens were cultured for 10 days in RPMI 1640 medium (Biochrom AG, Berlin, Germany) supplemented with human IL-2 (PeproTech, London, UK) (50 U/mL) to expand in vivo activated T cells [10]. 
Mucosal specimens were disrupted and single T cell blasts were cloned under limiting dilution (0.3 cells/well) as reported previously [40]. Each clone was screened (in triplicate cultures for each condition) for responsiveness to H. pylori by measuring [3H] thymidine (Perkin Elmer, Waltham, MA, USA) uptake after 60 h of stimulation with medium, H. pylori lysate (aqueous extract of the NCTC11637 strain, 10 μg/mL being optimal), or recombinant CagY protein (1 μg/mL), in the presence of irradiated autologous mononuclear cells as APCs [40]. A mitogenic index greater than 10 was considered a positive result.\n 4.4. Cytokine Profile of H. pylori CagY–Specific Gastric T Cell Clones To assess the cytokine production of CagY-specific Th clones, 10^6 T cell blasts of each clone were co-cultured in duplicate cultures for 48 h in 1 mL of medium with 5 × 10^5 irradiated autologous peripheral blood mononuclear cells as APCs and CagY (1 μg/mL). To induce cytokine production by gastric T cell clones, T cell blasts were stimulated for 36 h with phorbol-12-myristate 13-acetate (PMA, 10 ng/mL) (BioLegend, San Diego, CA, USA) plus anti-CD3 monoclonal antibody (200 ng/mL) (BioLegend, San Diego, CA, USA) [10]. 
Duplicate samples of each supernatant were assayed by ELISA for IFN-γ, IL-4, TNF-α, and IL-17 (R&D Systems, Minneapolis, MN, USA).\n 4.5. Helper Activity of T Cell Clones for B Cell Proliferation B cells were prepared from each patient as described [13]. Briefly, PBMCs isolated by the Ficoll Hypaque gradient method (Lymphoprep, Alere Technologies, Oslo, Norway) from each enrolled patient were processed to obtain a purified population of B lymphocytes by positive selection with anti-CD19 magnetic microbeads (MACS, Miltenyi Biotec, Bergisch Gladbach, Germany). The target population was incubated with the specific microbeads and the cell suspension was loaded onto MACS columns placed in a magnetic field; B cells were retained within the column while the other PBMCs were eluted. Purified B lymphocytes were then harvested, washed, and used for the subsequent experiments.\nThe ability of gastric Th clones to induce B cell proliferation under CagY stimulation was assessed by measuring [3H] thymidine uptake by peripheral blood B cells (3 × 10^4) alone or co-cultured for four days with different concentrations of irradiated (2000 rad) autologous clonal T cell blasts (0.2, 1, and 5 T-to-B cell ratios) with or without H. 
pylori CagY antigen (1 μg/mL), as described previously [10].\n 4.6. Statistical Analysis Descriptive statistics were used for the calculation of absolute frequencies and percentages of qualitative data, as well as for the mean and standard deviation of quantitative data. After evaluating the homogeneity of variance with Hartley’s test, the t-test (95% CI) was performed, and p < 0.05 was considered statistically significant. 
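The two-step analysis (Hartley's F-max check for homogeneity of variance, then a Student's t-test at α = 0.05) can be sketched in pure Python; the published analysis was run in IBM SPSS, and the numbers below are synthetic placeholders, not study data.

```python
# Illustrative re-implementation of the reported statistical workflow.
# Synthetic mitogenic indices; not the study's measurements.
from statistics import mean, variance

def hartley_fmax(*groups):
    """Hartley's F-max statistic: largest / smallest sample variance.
    Compare against the tabulated critical value for the group sizes."""
    variances = [variance(g) for g in groups]
    return max(variances) / min(variances)

def students_t(a, b):
    """Two-sample Student's t statistic with pooled (equal) variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

malt = [44.0, 50.2, 38.7, 61.5, 47.9]  # synthetic MALT-derived clone indices
cg = [14.1, 12.9, 16.8]                # synthetic CG-derived clone indices

print(f"F-max = {hartley_fmax(malt, cg):.2f}")
print(f"t = {students_t(malt, cg):.2f}")
```

The p-value lookup against the t distribution is omitted here; a statistics package (e.g. SciPy) would normally supply it.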
Statistical analysis was computed using the IBM SPSS Statistics software, version 27.", "Five untreated patients (three men and two women; mean age, 69 years; range, 63–75 years) with low-grade B cell lymphoma of gastric MALT (MALToma) and five patients (three men and two women; mean age, 59 years; range, 55–68 years) with uncomplicated chronic gastritis provided informed consent for this study, which was performed after approval by the local ethical committee (protocol number 14936_bio, approved on 8 October 2019). Multiple biopsy specimens were obtained from the gastric antrum of patients with chronic gastritis. In patients with low-grade MALT lymphoma, biopsy specimens were obtained from perilesional regions. Biopsy specimens were used for diagnosis (positive urease test, typing of H. pylori strain, and histology) and culture of tumor-infiltrating T lymphocytes. All patients with chronic gastritis or MALT lymphoma were infected with CagA+ VacA+ H. pylori type I strains and were positive for anti-CagA serum immunoglobulin (Ig) G antibodies, as assessed by specific ELISA (MyBioSource, San Diego, CA, USA).", "Helicobacter pylori CagY was produced as described [21]. We ruled out the presence of contaminants by a limulus test; the H. pylori CagY preparation tested negative in the limulus test throughout the whole study.", "Biopsy specimens were cultured for 10 days in RPMI 1640 medium (Biochrom AG, Berlin, Germany) supplemented with human IL-2 (PeproTech, London, UK) (50 U/mL) to expand in vivo activated T cells [10]. Mucosal specimens were disrupted and single T cell blasts were cloned under limiting dilution (0.3 cells/well) as reported previously [40]. Each clone was screened (in triplicate cultures for each condition) for responsiveness to H. pylori by measuring [3H] thymidine (Perkin Elmer, Waltham, MA, USA) uptake after 60 h of stimulation with medium, H. 
pylori lysate (aqueous extract of the NCTC11637 strain, 10 μg/mL being optimal), or recombinant CagY protein (1 μg/mL), in the presence of irradiated autologous mononuclear cells as APCs [40]. A mitogenic index greater than 10 was considered a positive result.", "To assess the cytokine production of CagY-specific Th clones, 10^6 T cell blasts of each clone were co-cultured in duplicate cultures for 48 h in 1 mL of medium with 5 × 10^5 irradiated autologous peripheral blood mononuclear cells as APCs and CagY (1 μg/mL). To induce cytokine production by gastric T cell clones, T cell blasts were stimulated for 36 h with phorbol-12-myristate 13-acetate (PMA, 10 ng/mL) (BioLegend, San Diego, CA, USA) plus anti-CD3 monoclonal antibody (200 ng/mL) (BioLegend, San Diego, CA, USA) [10]. Duplicate samples of each supernatant were assayed by ELISA for IFN-γ, IL-4, TNF-α, and IL-17 (R&D Systems, Minneapolis, MN, USA).", "B cells were prepared from each patient as described [13]. Briefly, PBMCs isolated by the Ficoll Hypaque gradient method (Lymphoprep, Alere Technologies, Oslo, Norway) from each enrolled patient were processed to obtain a purified population of B lymphocytes by positive selection with anti-CD19 magnetic microbeads (MACS, Miltenyi Biotec, Bergisch Gladbach, Germany). The target population was incubated with the specific microbeads and the cell suspension was loaded onto MACS columns placed in a magnetic field; B cells were retained within the column while the other PBMCs were eluted. Purified B lymphocytes were then harvested, washed, and used for the subsequent experiments.\nThe ability of gastric Th clones to induce B cell proliferation under CagY stimulation was assessed by measuring [3H] thymidine uptake by peripheral blood B cells (3 × 10^4) alone or co-cultured for four days with different concentrations of irradiated (2000 rad) autologous clonal T cell blasts (0.2, 1, and 5 T-to-B cell ratios) with or without H. 
pylori CagY antigen (1 μg/mL), as described previously [10].", "Descriptive statistics were used for the calculation of absolute frequencies and percentages of qualitative data, as well as for the mean and standard deviation of quantitative data. After evaluating the homogeneity of variance with Hartley’s test, the t-test (95% CI) was performed, and p < 0.05 was considered statistically significant. Statistical analysis was computed using the IBM SPSS Statistics software, version 27.", "The results obtained so far suggest that H. pylori CagY is an important factor involved in the genesis of Th1 and Th17 responses in H. pylori-infected patients with gastric MALT lymphoma. We show that H. pylori CagY-specific Th cells derived from the gastric mucosa of H. pylori-infected patients with gastric low-grade MALT lymphoma can provide B cell help in a dose-dependent manner. Taken together, these results suggest that the T cell-dependent B cell proliferation induced by H. pylori CagY may represent an important link between bacterial infection and gastric lymphoma. Targeting CagY and the Th1 and Th17 pathways might be useful for the design of novel diagnostics, vaccines, and therapeutics for gastric MALT lymphomas related to H. pylori infection." ]
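The clone-screening readout described in Section 4.3 reduces to a simple ratio: [3H]thymidine uptake (cpm) under antigen stimulation divided by uptake with medium alone, with an index above 10 scored as positive. A minimal sketch, with illustrative counts and variable names not taken from the paper:

```python
def mitogenic_index(cpm_antigen, cpm_medium):
    """Ratio of antigen-stimulated to background [3H]thymidine uptake (cpm)."""
    return cpm_antigen / cpm_medium

def is_responsive(cpm_antigen, cpm_medium, cutoff=10.0):
    """Score a clone positive when its mitogenic index exceeds the cutoff."""
    return mitogenic_index(cpm_antigen, cpm_medium) > cutoff

# Illustrative triplicate-mean counts for two hypothetical clones:
print(is_responsive(45_000, 1_000))  # index 45 -> positive
print(is_responsive(5_000, 1_000))   # index 5  -> negative
```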
[ "intro", "results", null, null, null, "discussion", null, "subjects", null, null, null, null, null, "conclusions" ]
[ "Helicobacter pylori", "CagY", "B cells", "T cells", "cytokines", "MALT", "gastric lymphoma" ]
1. Introduction: Helicobacter pylori is a spiral-shaped Gram-negative bacterium that chronically infects the stomach of more than 50% of the human population and is the leading cause of gastric cancer, gastric lymphoma, gastric autoimmunity, and peptic ulcer disease [1,2,3,4,5]. A strong association between Helicobacter pylori infection and the development of gastric mucosa-associated lymphoid tissue (MALT) lymphoma has been demonstrated [6,7,8]. A prerequisite for lymphomagenesis is the development of secondary inflammatory MALT, which is induced by chronic H. pylori infection [7,8]. In the early stages, this tumor is sensitive to the withdrawal of H. pylori-induced T cell help, providing an explanation for both the tendency of the tumor to remain localized at the primary site and its regression after eradication of H. pylori with antibiotics. The tumor cells of low-grade gastric MALT lymphoma are memory B lymphocytes that still respond to differentiation signals, such as CD40 costimulation and cytokines produced by antigen-stimulated T helper (Th) cells [9,10], and their growth depends on antigen stimulation by H. pylori-specific T cells [11,12]. The chemical nature of the H. pylori factors responsible for the induction of gastric Th cells that can promote the proliferation of B cells remains an important unanswered question. Bacterial products are known to possess immunomodulatory properties and to induce B cell responses as well as different types of innate and adaptive responses [13]. Among the bacterial components, some factors associated with malignancy have been identified, although the high degree of genomic variability of H. pylori strains has prevented the complete identification of the factors involved. The major virulence factor of H. pylori is the cag pathogenicity island (cagPAI), an approximately 40 kb genetic locus containing 31 genes [14,15] and encoding the so-called type IV secretion system (T4SS). 
This forms a syringe-like structure that injects bacterial components (mainly peptidoglycan and the oncoprotein CagA) into the host target cell [16]. H. pylori strains harboring the cagPAI pathogenicity locus show a significantly increased ability to induce severe pathological outcomes in infected individuals, such as gastric cancer and gastric lymphoma, compared to cagPAI-negative strains [17,18,19,20]. Recently, it was reported that among H. pylori-infected patients, those with gastric low-grade MALT lymphoma are preferentially seropositive for the H. pylori CagY protein [21]. CagY, a VirB10-homologous protein also known as Cag7 or HP0527, is able to activate innate cells in a flagellin-independent manner. CagY is a TLR5 agonist, and five interaction sites have been identified in the CagY repeat domains [16,22,23]. HP0527 encodes a large protein of 1927 amino acids that is expressed on the surface and has been described as one of the main components of the H. pylori cag T4SS-associated pilus; it may act as a molecular switch that modifies proinflammatory host responses by modulating T4SS function and tuning CagA injection [24,25]. The aims of this study were (1) to investigate the presence of H. pylori CagY-specific Th cells in the context of low-grade gastric MALT lymphomas, (2) to define the cytokine patterns of these cells, and (3) to assess whether gastric CagY-specific T cells from MALT lymphomas are able to provide help for B cell proliferation. 2. Results: 2.1. H. pylori CagY-Specific CD4+ T Cells Predominate in Gastric Low-Grade MALT Lymphoma To characterize at the clonal level the in vivo activated T cells present in the gastric inflammatory infiltrates of H. pylori-infected patients, two cohorts were collected: five untreated patients with gastric low-grade MALT lymphoma (MALT) and five patients with H. pylori-induced uncomplicated chronic gastritis (CG). All patients were infected with CagA+, VacA+ H. 
pylori type I strains and were ELISA positive for anti-CagA serum IgG antibodies. The T cells from all enrolled patients were obtained by multiple biopsies and expanded by culturing in Interleukin-2 (IL-2)-conditioned medium for 10 days. Then, T cell blasts were recovered and cloned by limiting dilution. Comparable numbers of clones were obtained from both cohorts: a total of 158 CD4+ and 17 CD8+ clones were obtained from the gastric biopsy specimens of MALT lymphoma, and 179 CD4+ and 22 CD8+ T cell clones from chronic gastritis. All clones were tested for their ability to respond to either H. pylori lysate or purified CagY protein. The data obtained are summarized in Table 1 and show that none of the CD8+ clones from MALT and CG responded to H. pylori CagY antigen or to H. pylori lysate. Analyzing the proliferative response of CD4+ clones to H. pylori lysate, 23 and 22% of the clones showed positivity for MALT and CG, respectively. A marked difference was observed when CagY was used. While 22 CD4+ clones (13.9%) from MALT lymphoma showed antigen-induced proliferation, only three CD4+ clones (1.7%) from chronic gastritis were CagY-specific (Table 1). The percentage of MALT lymphoma-derived T clones reactive to CagY was thus higher than that of CG. A significant difference (p = 0.012) was also found between the mitogenic index of CagY-specific MALT lymphoma-derived T clones (mean mitogenic index 44.48 ± 18.46) and CG-derived ones (mean mitogenic index 14.63 ± 3.89). A significantly higher proliferation (p < 0.001) to CagY than to H. pylori lysate was found in MALT lymphoma-derived T clones (mean CagY mitogenic index 44.48 ± 18.46; mean H. pylori lysate mitogenic index 29.14 ± 11.67) (Figure 1). 2.2. H. pylori CagY Predominantly Drives IFN-γ and IL-17 Secretion by Gastric CD4+ T Cells from H. 
pylori-Infected Patients with Gastric Low-Grade MALT Lymphoma To evaluate cytokine production by gastric-derived H. pylori CagY–specific Th clones, each clone was co-cultured in duplicate with autologous antigen presenting cells (APCs) and H. pylori CagY for 48 h. After antigen stimulation, 32% of clones from MALT lymphoma produced both Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17), but not Interleukin-4 (IL-4) (Th1/Th17 profile); 27% of clones produced IFN-γ, but neither IL-17 nor IL-4 (Th1 profile); 23% secreted both Tumor necrosis factor-alpha (TNF-α) and IL-4, but not IL-17 (Th0 profile); and 18% produced IL-4, but neither TNF-α nor IL-17 (Th2 profile) (Figure 2). Among the three gastric T cell clones obtained from chronic gastritis, two were Th1 and one Th1/Th17. 2.3. Antigen-Dependent B Cell Help by H. pylori CagY-Specific Th Clones To assess the ability of H. pylori CagY-specific T cell clones to provide antigen-triggered B cell help, irradiated T cell blasts of each clone were co-cultured with autologous peripheral blood B cells at T-to-B cell ratios of 0.2, 1, and 5 to 1. 
At a T-to-B cell ratio of 0.2 to 1, all CagY-specific clones from MALT lymphoma patients but none from patients with chronic gastritis, provided significant help (p < 0.05) to B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 14 and 3.5; range, 1–26 and 1–6, respectively; Figure 3A). At a T-to-B cell ratio of 1 to 1, all CagY-specific clones from both MALT lymphoma and chronic gastritis patients provided significant help for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 28 and 25; range, 11–46 and 22.8–27.2, respectively; Figure 3A). Finally, at a T-to-B cell ratio of 5 to 1, all 22 Th clones from MALT lymphoma further increased B cell proliferation with a mean mitogenic index of 37 (range 24–51; Figure 3A). A significant decrease (p < 0.001) in B cell proliferation was observed in the presence of all three Th clones from chronic gastritis with a mean mitogenic index of 3 (range 1–5; Figure 3A). B cells alone cultured with or without CagY did not proliferate unless autologous MALT lymphoma-derived T cells were added (Figure 3B). B cells cultured with autologous MALT lymphoma-derived T cells, without CagY did not proliferate at any T-to-B cell ratio (Figure 3C). Gastric T cell clones specific for H. pylori lysate but not specific for CagY (15/37 from MALT lymphoma and 36/39 from patients with chronic gastritis) showed no helper activity on B cell proliferation when co-cultured with CagY antigen and autologous B cells (mean mitogenic index was 1, range 0.9–1.3 both for MALT and for CG) (data not shown). To assess the ability of H. pylori CagY-specific T cell clones to provide antigen-triggered B cell help, irradiated T cell blasts of each clone were co-cultured with autologous peripheral blood B cells at a ratio of 0.2, 1, and 5 to 1. 
At a T-to-B cell ratio of 0.2 to 1, all CagY-specific clones from MALT lymphoma patients but none from patients with chronic gastritis, provided significant help (p < 0.05) to B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 14 and 3.5; range, 1–26 and 1–6, respectively; Figure 3A). At a T-to-B cell ratio of 1 to 1, all CagY-specific clones from both MALT lymphoma and chronic gastritis patients provided significant help for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 28 and 25; range, 11–46 and 22.8–27.2, respectively; Figure 3A). Finally, at a T-to-B cell ratio of 5 to 1, all 22 Th clones from MALT lymphoma further increased B cell proliferation with a mean mitogenic index of 37 (range 24–51; Figure 3A). A significant decrease (p < 0.001) in B cell proliferation was observed in the presence of all three Th clones from chronic gastritis with a mean mitogenic index of 3 (range 1–5; Figure 3A). B cells alone cultured with or without CagY did not proliferate unless autologous MALT lymphoma-derived T cells were added (Figure 3B). B cells cultured with autologous MALT lymphoma-derived T cells, without CagY did not proliferate at any T-to-B cell ratio (Figure 3C). Gastric T cell clones specific for H. pylori lysate but not specific for CagY (15/37 from MALT lymphoma and 36/39 from patients with chronic gastritis) showed no helper activity on B cell proliferation when co-cultured with CagY antigen and autologous B cells (mean mitogenic index was 1, range 0.9–1.3 both for MALT and for CG) (data not shown). 2.1. H. pylori CagY-Specific CD4+ T Cells Predominate in Gastric Low-Grade MALT Lymphoma: To characterize at the clonal level the in vivo activated T cells present in the gastric inflammatory infiltrates of H. pylori-infected patients, two cohorts were collected: five untreated patients with gastric low-grade MALT (MALT) and five patients with H. pylori-induced uncomplicated chronic gastritis (CG). 
All patients were infected with CagA+, VacA+ H. pylori type I strains and were ELISA-positive for anti-CagA serum IgG antibodies. T cells were obtained from multiple biopsies of each enrolled patient and expanded by culture in Interleukin-2 (IL-2)-conditioned medium for 10 days. T cell blasts were then recovered and cloned by limiting dilution. Comparable numbers of clones were obtained from both cohorts: a total of 158 CD4+ and 17 CD8+ clones from the gastric biopsy specimens of MALT lymphoma patients, and 179 CD4+ and 22 CD8+ T cell clones from patients with chronic gastritis. All clones were tested for their ability to respond to either H. pylori lysate or purified CagY protein. The data, summarized in Table 1, show that none of the CD8+ clones from MALT or CG responded to H. pylori CagY antigen or to H. pylori lysate. When the proliferative response of CD4+ clones to H. pylori lysate was analyzed, 23% and 22% of the clones were positive for MALT and CG, respectively. A marked difference was observed when CagY was used: whereas 22 CD4+ clones (13.9%) from MALT lymphoma showed antigen-induced proliferation, only three CD4+ clones (1.7%) from chronic gastritis were CagY-specific (Table 1). The percentage of MALT lymphoma-derived T cell clones reactive to CagY was thus higher than that from CG. A significant difference (p = 0.012) was also found between the mitogenic index of CagY-specific MALT lymphoma-derived clones (mean 44.48 ± 18.46) and that of CG-derived ones (mean 14.63 ± 3.89). Significantly higher proliferation (p < 0.001) in response to CagY than to H. pylori lysate was found in MALT lymphoma-derived clones (mean CagY mitogenic index 44.48 ± 18.46; mean H. pylori lysate mitogenic index 29.14 ± 11.67) (Figure 1). 2.2. H. pylori CagY Predominantly Drives IFN-γ and IL-17 Secretion by Gastric CD4+ T Cells from H. 
pylori-Infected Patients with Gastric Low-Grade MALT Lymphoma: To evaluate cytokine production by gastric-derived H. pylori CagY-specific Th clones, each clone was co-cultured in duplicate with autologous antigen-presenting cells (APCs) and H. pylori CagY for 48 h. After antigen stimulation, 32% of clones from MALT lymphoma produced both Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17) but not Interleukin-4 (IL-4) (Th1/Th17 profile); 27% produced IFN-γ but neither IL-17 nor IL-4 (Th1 profile); 23% secreted both Tumor necrosis factor-alpha (TNF-α) and IL-4 but not IL-17 (Th0 profile); and 18% produced IL-4 but neither TNF-α nor IL-17 (Th2 profile) (Figure 2). Of the three gastric T cell clones obtained from chronic gastritis, two were Th1 and one Th1/Th17. 2.3. Antigen-Dependent B Cell Help by H. pylori CagY-Specific Th Clones: To assess the ability of H. pylori CagY-specific T cell clones to provide antigen-triggered B cell help, irradiated T cell blasts of each clone were co-cultured with autologous peripheral blood B cells at ratios of 0.2, 1, and 5 to 1. At a T-to-B cell ratio of 0.2 to 1, all CagY-specific clones from MALT lymphoma patients, but none from patients with chronic gastritis, provided significant help (p < 0.05) for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 14 and 3.5; range, 1–26 and 1–6, respectively; Figure 3A). At a T-to-B cell ratio of 1 to 1, all CagY-specific clones from both MALT lymphoma and chronic gastritis patients provided significant help for B cell proliferation under H. pylori CagY stimulation (mean mitogenic index, 28 and 25; range, 11–46 and 22.8–27.2, respectively; Figure 3A). Finally, at a T-to-B cell ratio of 5 to 1, all 22 Th clones from MALT lymphoma further increased B cell proliferation, with a mean mitogenic index of 37 (range 24–51; Figure 3A). 
In contrast, a significant decrease (p < 0.001) in B cell proliferation was observed in the presence of all three Th clones from chronic gastritis, with a mean mitogenic index of 3 (range 1–5; Figure 3A). B cells cultured alone, with or without CagY, did not proliferate unless autologous MALT lymphoma-derived T cells were added (Figure 3B). B cells cultured with autologous MALT lymphoma-derived T cells without CagY did not proliferate at any T-to-B cell ratio (Figure 3C). Gastric T cell clones specific for H. pylori lysate but not for CagY (15/37 from MALT lymphoma and 36/39 from patients with chronic gastritis) showed no helper activity for B cell proliferation when co-cultured with CagY antigen and autologous B cells (mean mitogenic index 1; range 0.9–1.3 for both MALT and CG) (data not shown). 3. Discussion: H. pylori induces a strong inflammatory response directed at clearing the infection; if not controlled, however, this response can be harmful to the host and eventually lead to the development of gastric MALT lymphoma [26,27,28]. It has been shown that CagA activates mTOR Complex 1 (mTORC1), which in turn promotes the expression and release of proinflammatory cytokines, chemokines, and an antimicrobial peptide from gastric epithelial cells [29]. H. pylori stimulates macrophages, both in vitro and in vivo, to produce a proliferation-inducing ligand (APRIL), a crucial cytokine that promotes lymphomagenesis and B cell proliferation and is abundantly expressed in gastric MALT lymphoma [30]. Lymphoma-infiltrating macrophages are a major gastric source of APRIL. Using a model of lymphomagenesis based on Helicobacter sp. infection of transgenic C57BL6 mice expressing the human form of the APRIL cytokine (Tg-hAPRIL), Blosse et al. 
[31] characterized the gastric mucosal inflammatory response associated with gastric MALT lymphoma and showed that all T cell subtypes, including regulatory T cells, infiltrate gastric MALT lymphoma, both in the animal model and in human gastric MALT lymphoma patients. The regulatory T cell response might contribute to the persistence of the pathogen in the gastric mucosa by delaying the inflammatory response, thereby allowing the chronic antigen stimulation necessary for lymphoid proliferation. In agreement with a previous report [30], the authors found APRIL significantly dysregulated in human gastric MALT lymphoma and revealed that the cytokine is mainly expressed by eosinophils, suggesting a pro-tumorigenic potential of these cells. Using an antibody that recognizes the secreted and internalized forms of APRIL, they also confirmed that the target cells of the cytokine are B cells [32]. Besides APRIL, the cytokine BAFF also contributes to B cell lymphomagenesis during chronic H. pylori infection [31]. Chonwerawong et al. revealed that H. pylori upregulates NLRC5 expression in the macrophages and gastric tissues of mice and humans and that this expression correlates with gastritis severity. However, by taking advantage of NLRC5-deficient macrophages and of knockout mice with nonfunctional Nlrc5 in the myeloid cell lineage, the authors found that NLRC5 negatively modulates the production of proinflammatory cytokines, including BAFF, and protects against the formation of mucosal B cell lymphoid tissue in response to chronic Helicobacter infection in mice [33]. In our study, in vivo activated T cells from the lesional gastric mucosa of five patients with H. pylori-associated low-grade MALT lymphoma were expanded and efficiently cloned to assess their specificity for the H. pylori CagY protein and their functional profile. This procedure has proved useful and accurate for in vitro studies of tissue-infiltrating T cells in many diseases [13,34,35]. 
In the progeny of gastric T cells from MALT lymphoma, but not from chronic gastritis, a high proportion of T cell clones were reactive to the H. pylori CagY protein. Although it remains controversial whether infection with more aggressive CagA-positive H. pylori strains is associated with MALT lymphoma [36,37], the present data suggest that CagY is one of the immunodominant targets of gastric T cells in gastric low-grade MALT lymphoma, but not in uncomplicated chronic gastritis. On the other hand, we cannot exclude that other, still undefined antigens of H. pylori may be involved in driving gastric T cell and B cell responses in patients with low-grade gastric MALT lymphoma. The main limitation of this study is the small sample size: five gastric MALT lymphoma patients and five chronic gastritis subjects. This reflects both the low frequency of MALT lymphoma in the population and the restricted sampling imposed by the COVID-19 pandemic. The study nevertheless provides a relevant characterization of the cellular immune response to the H. pylori CagY protein, given the high number of T cell clones obtained from T lymphocytes infiltrating the gastric mucosa of the tested subjects. H. pylori CagY-activated T cell clones from MALT lymphoma showed higher helper activity for B cell proliferation than clones generated from chronic gastritis. This supports the concept that H. pylori CagY-specific T cells are responsible for the abnormal B cell growth that probably precedes and favors the development of low-grade B cell lymphoma at the gastric level in some H. pylori-infected patients. A possible mechanism for the enhanced B cell proliferation is the abnormal production of Th-derived cytokines active on B cell growth. 
MALT lymphoma-like lesions of the gastric mucosa were found after long-term Helicobacter felis infection in aged BALB/c mice, a strain genetically prone to high production of Th2 cytokines and strong B cell responses [38]. Th2-skewed cytokine production in the local T cell response might therefore account for enhanced B cell proliferation in MALT lymphoma. The majority of CagY-specific T cells in gastric MALT lymphoma produced IFN-γ and TNF-α, a significant proportion produced IL-17 together with IFN-γ, and some produced IL-4. We can speculate that almost all CagY-specific T cells produce several cytokines, such as TNF-α and IL-4, that are able to promote B cell proliferation. Moreover, it has recently been shown that IL-17 can also promote the growth of human germinal center-derived non-Hodgkin B cell lymphoma [39]. Based on the results obtained so far, we conclude that in patients with H. pylori infection and gastric low-grade MALT lymphoma the H. pylori CagY protein promotes gastric Th1 and Th17 inflammation through the production of various cytokines that can promote B cell proliferation. H. pylori CagY-specific Th cells derived from the gastric mucosa of H. pylori-infected patients with gastric low-grade MALT lymphoma provided significantly greater B cell help than T cells obtained from patients with uncomplicated chronic gastritis. 4. Materials and Methods: 4.1. Patients: Five untreated patients (three men and two women; mean age, 69 years; range, 63–75 years) with low-grade B cell lymphoma of gastric MALT (MALToma) and five patients (three men and two women; mean age, 59 years; range, 55–68 years) with uncomplicated chronic gastritis provided informed consent for this study, which was performed after approval by the local ethical committee (protocol number 14936_bio, approved on 8 October 2019). 
Multiple biopsy specimens were obtained from the gastric antrum of patients with chronic gastritis; in patients with low-grade MALT lymphoma, biopsy specimens were obtained from perilesional regions. Biopsy specimens were used for diagnosis (positive urease test, typing of the H. pylori strain, and histology) and for culture of tumor-infiltrating T lymphocytes. All patients with chronic gastritis or MALT lymphoma were infected with CagA+, VacA+ H. pylori type I strains and were positive for anti-CagA serum immunoglobulin (Ig) G antibodies, as assessed by specific ELISA (MyBioSource, San Diego, CA, USA). 4.2. Reagents: Helicobacter pylori CagY was produced as described [21]. The presence of contaminants was ruled out by limulus testing; the H. pylori CagY preparation tested limulus-negative throughout the whole study. 4.3. Generation of H. pylori-Specific T Cell Clones: Biopsy specimens were cultured for 10 days in RPMI 1640 medium (Biochrom AG, Berlin, Germany) supplemented with human IL-2 (PeproTech, London, UK; 50 U/mL) to expand in vivo activated T cells [10]. Mucosal specimens were disrupted, and single T cell blasts were cloned by limiting dilution (0.3 cells/well) as reported previously [40]. Each clone was screened (in triplicate cultures for each condition) for responsiveness to H. pylori by measuring [3H]thymidine (Perkin Elmer, Waltham, MA, USA) uptake after 60 h of stimulation with medium, H. pylori lysate (aqueous extract of the NCTC11637 strain; 10 μg/mL was optimal), or recombinant CagY protein (1 μg/mL), in the presence of irradiated autologous mononuclear cells as APCs [40]. A mitogenic index greater than 10 was considered a positive result. 4.4. Cytokine Profile of H. pylori CagY-Specific Gastric T Cell Clones: To assess the cytokine production of CagY-specific Th clones, 10^6 T cell blasts of each clone were co-cultured in duplicate for 48 h in 1 mL of medium with 5 × 10^5 irradiated autologous peripheral blood mononuclear cells as APCs and CagY (1 μg/mL). To induce cytokine production by gastric T cell clones, T cell blasts were stimulated for 36 h with phorbol 12-myristate 13-acetate (PMA, 10 ng/mL; BioLegend, San Diego, CA, USA) plus anti-CD3 monoclonal antibody (200 ng/mL; BioLegend, San Diego, CA, USA) [10]. Duplicate samples of each supernatant were assayed by ELISA for IFN-γ, IL-4, TNF-α, and IL-17 (R&D Systems, Minneapolis, MN, USA). 4.5. Helper Activity of T Cell Clones for B Cell Proliferation: B cells were prepared from each patient as described [13]. Briefly, PBMCs isolated from each enrolled patient by the Ficoll-Hypaque gradient method (Lymphoprep, Alere Technologies, Oslo, Norway) were processed to obtain a purified B lymphocyte population by positive magnetic selection with anti-CD19 microbeads (MACS, Miltenyi Biotec, Bergisch Gladbach, Germany). The target population was incubated with the specific microbeads, and the cell suspension was loaded onto MACS columns placed in a magnetic field; B cells were retained within the column while the other PBMCs were eluted away. Purified B lymphocytes were then harvested, washed, and used for the subsequent experiments. The ability of gastric Th clones to induce B cell proliferation under CagY stimulation was assessed by measuring [3H]thymidine uptake by peripheral blood B cells (3 × 10^4) cultured alone or co-cultured for four days with different numbers of irradiated (2000 rad) autologous clonal T cell blasts (T-to-B cell ratios of 0.2, 1, and 5) with or without H. pylori CagY antigen (1 μg/mL), as described previously [10]. 4.6. Statistical Analysis: Descriptive statistics were used to calculate absolute frequencies and percentages for qualitative data, and means and standard deviations for quantitative data. After evaluating homogeneity of variance with Hartley's test, the t-test (95% CI) was performed; p < 0.05 was considered statistically significant. Statistical analysis was computed with IBM SPSS Statistics, version 27. 5. Conclusions: The results obtained so far suggest that H. pylori CagY is an important factor in generating Th1 and Th17 responses in H. pylori-infected patients with gastric MALT lymphoma. We show that H. pylori CagY-specific Th cells derived from the gastric mucosa of these patients can provide B cell help in a dose-dependent manner, and this T cell-dependent B cell proliferation may represent an important link between bacterial infection and gastric lymphoma. The CagY, Th1, and Th17 pathways might therefore be useful for the design of novel diagnostics, vaccines, and therapeutics for gastric MALT lymphomas related to H. pylori infection.
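For readers unfamiliar with the readout used throughout the Methods: the mitogenic index is simply the ratio of [3H]thymidine uptake (cpm) under antigen stimulation to uptake in medium alone, with values above 10 scored as positive. A minimal Python sketch; the cpm values below are hypothetical and for illustration only, not study data:

```python
from statistics import mean

def mitogenic_index(cpm_antigen, cpm_medium):
    """Ratio of [3H]thymidine uptake with antigen to uptake in medium alone."""
    return cpm_antigen / cpm_medium

# Hypothetical triplicate counts (cpm) for one T cell clone -- not study data.
cpm_medium = mean([310, 290, 300])        # background proliferation
cpm_cagy = mean([13200, 12600, 12900])    # CagY-stimulated proliferation

mi = mitogenic_index(cpm_cagy, cpm_medium)
print(round(mi, 1))   # 43.0
print(mi > 10)        # True: above the cut-off of 10, the clone is scored CagY-reactive
```

Averaging the triplicates before taking the ratio mirrors the screening described in Section 4.3, where a clone is called positive only when the index exceeds 10.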
Background: the neoplastic B cells of Helicobacter pylori-related low-grade gastric mucosa-associated lymphoid tissue (MALT) lymphoma proliferate in response to H. pylori; however, the nature of the H. pylori antigen responsible for this proliferation is still unknown. The purpose of the study was to establish whether CagY might be the H. pylori antigen able to drive B cell proliferation. Methods: the B cells and the clonal progeny of T cells from the gastric mucosa of five patients with MALT lymphoma were compared with those of T cell clones obtained from five H. pylori-infected patients with chronic gastritis. The T cell clones were assessed for their specificity to H. pylori CagY, cytokine profile, and helper function for B cell proliferation. Results: 22 of 158 (13.9%) CD4+ gastric clones from MALT lymphoma and three of 179 (1.7%) CD4+ clones from chronic gastritis recognized CagY. CagY predominantly drives Interferon-gamma (IFN-γ) and Interleukin-17 (IL-17) secretion by gastric CD4+ T cells from H. pylori-infected patients with low-grade gastric MALT lymphoma. All MALT lymphoma-derived clones dose-dependently increased their B cell help, whereas clones from chronic gastritis lost helper activity at T-to-B cell ratios greater than 1. Conclusions: the results obtained indicate that CagY drives both B cell proliferation and T cell activation in gastric MALT lymphomas.
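As an aside, the difference in CagY-reactive clone frequencies summarized above (22 of 158 CD4+ clones from MALT lymphoma vs. 3 of 179 from chronic gastritis) can be sanity-checked with a one-sided hypergeometric tail, the quantity underlying Fisher's exact test. This is an illustrative check only, not the analysis the authors performed (they applied a t-test after Hartley's test for homogeneity of variance):

```python
from math import comb

def hypergeom_tail(k, K, n, N):
    """One-sided P(X >= k) for X ~ Hypergeometric(N, K, n):
    N clones in total, K of them CagY-reactive, n drawn from the MALT cohort."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 22 of 158 MALT lymphoma clones vs. 3 of 179 chronic gastritis clones were CagY-reactive.
p = hypergeom_tail(22, K=22 + 3, n=158, N=158 + 179)
print(p < 0.001)   # True: such an enrichment is highly unlikely to arise by chance
```

Under the null hypothesis that reactive clones are distributed at random between the two cohorts, only about 11–12 of the 25 reactive clones would be expected in the MALT group, so observing 22 gives a vanishingly small tail probability, consistent with the enrichment the study reports.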
1. Introduction: Helicobacter pylori is a spiral-shaped Gram-negative bacterium that chronically infects the stomach of more than 50% of the human population and is the leading cause of gastric cancer, gastric lymphoma, gastric autoimmunity, and peptic ulcer disease [1,2,3,4,5]. A strong association between Helicobacter pylori infection and the development of gastric mucosa-associated lymphoid tissue (MALT) lymphoma has been demonstrated [6,7,8]. A prerequisite for lymphomagenesis is the development of secondary inflammatory MALT, which is induced by chronic H. pylori infection [7,8]. In its early stages, this tumor is sensitive to the withdrawal of H. pylori-induced T cell help, which explains both its tendency to remain localized at the primary site and its regression after eradication of H. pylori with antibiotics. The tumor cells of low-grade gastric MALT lymphoma are memory B lymphocytes that still respond to differentiation signals, such as CD40 costimulation and cytokines produced by antigen-stimulated T helper (Th) cells [9,10], and their growth depends on antigen stimulation by H. pylori-specific T cells [11,12]. An important unanswered question remains the chemical nature of the H. pylori factors responsible for the induction of the gastric Th cells that promote B cell proliferation. Bacterial products are known to possess immunomodulatory properties and to induce B cell responses as well as different types of innate and adaptive responses [13]. Among the bacterial components, some factors associated with malignancy have been identified, although the high degree of genomic variability of H. pylori strains has prevented the complete identification of the factors involved. The major virulence factor of H. pylori is the cag pathogenicity island (cagPAI), an approximately 40 kb genetic locus containing 31 genes [14,15] that encodes the so-called type IV secretion system (T4SS). 
This forms a syringe-like structure that injects bacterial components (mainly peptidoglycan and the oncoprotein CagA) into the host target cell [16]. H. pylori strains harboring the cagPAI pathogenicity locus show a significantly increased ability to induce severe pathological outcomes in infected individuals, such as gastric cancer and gastric lymphoma, compared to cagPAI-negative strains [17,18,19,20]. Recently, it was reported that among H. pylori-infected patients, those with gastric low-grade MALT lymphoma are preferentially seropositive for the H. pylori CagY protein [21]. CagY, a VirB10-homologous protein also known as Cag7 or HP0527, is able to activate innate cells in a flagellin-independent manner. CagY is a TLR5 agonist, and five interaction sites have been identified in the CagY repeat domains [16,22,23]. HP0527 encodes a large protein of 1927 amino acids that is expressed on the bacterial surface and has been described as one of the main components of the H. pylori cag T4SS-associated pilus; it may act as a molecular switch that modifies proinflammatory host responses by modulating T4SS function and tuning CagA injection [24,25]. The aims of this study were (1) to investigate the presence of H. pylori CagY-specific Th cells in the context of low-grade gastric MALT lymphomas, (2) to define the cytokine patterns of these cells, and (3) to assess whether gastric CagY-specific T cells from MALT lymphomas are able to provide help for B cell proliferation.
[CONTENT] Helicobacter pylori | CagY | B cells | T cells | cytokines | MALT | gastric lymphoma [SUMMARY]
null
[CONTENT] Helicobacter pylori | CagY | B cells | T cells | cytokines | MALT | gastric lymphoma [SUMMARY]
[CONTENT] Helicobacter pylori | CagY | B cells | T cells | cytokines | MALT | gastric lymphoma [SUMMARY]
[CONTENT] Helicobacter pylori | CagY | B cells | T cells | cytokines | MALT | gastric lymphoma [SUMMARY]
[CONTENT] Helicobacter pylori | CagY | B cells | T cells | cytokines | MALT | gastric lymphoma [SUMMARY]
[CONTENT] Aged | B-Lymphocytes | Bacterial Proteins | Cell Proliferation | Female | Gastric Mucosa | Gastritis | Helicobacter Infections | Helicobacter pylori | Humans | Inflammation | Interferon-gamma | Lymphocyte Activation | Lymphocytes | Lymphoma, B-Cell, Marginal Zone | Male | Middle Aged | Stomach | Th1 Cells | Th17 Cells [SUMMARY]
null
[CONTENT] Aged | B-Lymphocytes | Bacterial Proteins | Cell Proliferation | Female | Gastric Mucosa | Gastritis | Helicobacter Infections | Helicobacter pylori | Humans | Inflammation | Interferon-gamma | Lymphocyte Activation | Lymphocytes | Lymphoma, B-Cell, Marginal Zone | Male | Middle Aged | Stomach | Th1 Cells | Th17 Cells [SUMMARY]
[CONTENT] Aged | B-Lymphocytes | Bacterial Proteins | Cell Proliferation | Female | Gastric Mucosa | Gastritis | Helicobacter Infections | Helicobacter pylori | Humans | Inflammation | Interferon-gamma | Lymphocyte Activation | Lymphocytes | Lymphoma, B-Cell, Marginal Zone | Male | Middle Aged | Stomach | Th1 Cells | Th17 Cells [SUMMARY]
[CONTENT] Aged | B-Lymphocytes | Bacterial Proteins | Cell Proliferation | Female | Gastric Mucosa | Gastritis | Helicobacter Infections | Helicobacter pylori | Humans | Inflammation | Interferon-gamma | Lymphocyte Activation | Lymphocytes | Lymphoma, B-Cell, Marginal Zone | Male | Middle Aged | Stomach | Th1 Cells | Th17 Cells [SUMMARY]
[CONTENT] Aged | B-Lymphocytes | Bacterial Proteins | Cell Proliferation | Female | Gastric Mucosa | Gastritis | Helicobacter Infections | Helicobacter pylori | Humans | Inflammation | Interferon-gamma | Lymphocyte Activation | Lymphocytes | Lymphoma, B-Cell, Marginal Zone | Male | Middle Aged | Stomach | Th1 Cells | Th17 Cells [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] pylori | cell | cagy | malt | clones | lymphoma | gastric | cells | malt lymphoma | patients [SUMMARY]
null
[CONTENT] pylori | cell | cagy | malt | clones | lymphoma | gastric | cells | malt lymphoma | patients [SUMMARY]
[CONTENT] pylori | cell | cagy | malt | clones | lymphoma | gastric | cells | malt lymphoma | patients [SUMMARY]
[CONTENT] pylori | cell | cagy | malt | clones | lymphoma | gastric | cells | malt lymphoma | patients [SUMMARY]
[CONTENT] pylori | cell | cagy | malt | clones | lymphoma | gastric | cells | malt lymphoma | patients [SUMMARY]
[CONTENT] pylori | gastric | cells | cagpai | components | factors | t4ss | malt | associated | bacterial [SUMMARY]
null
[CONTENT] clones | malt | cagy | pylori | index | mitogenic | mitogenic index | cell | malt lymphoma | lymphoma [SUMMARY]
[CONTENT] pylori | important | gastric | th1 | pylori infected patients gastric | infected patients gastric | th1 th17 | th17 | cagy important factor | suggest pylori [SUMMARY]
[CONTENT] pylori | cagy | cell | malt | clones | lymphoma | gastric | cells | malt lymphoma | patients [SUMMARY]
[CONTENT] pylori | cagy | cell | malt | clones | lymphoma | gastric | cells | malt lymphoma | patients [SUMMARY]
[CONTENT] MALT ||| CagY [SUMMARY]
null
[CONTENT] 22 | 158 CD4+ | 13.9% | MALT | three | 1.7% | CagY. CagY | Interferon | IFN | IL-17 | gastric ||| 1 [SUMMARY]
[CONTENT] CagY | lymphomas [SUMMARY]
[CONTENT] MALT ||| CagY ||| five | five ||| CagY ||| 22 | 158 CD4+ | 13.9% | MALT | three | 1.7% | CagY. CagY | Interferon | IFN | IL-17 | gastric ||| CagY | lymphomas [SUMMARY]
[CONTENT] MALT ||| CagY ||| five | five ||| CagY ||| 22 | 158 CD4+ | 13.9% | MALT | three | 1.7% | CagY. CagY | Interferon | IFN | IL-17 | gastric ||| CagY | lymphomas [SUMMARY]
An innovative OSCE clinical log station: a quantitative study of its influence on Log use by medical students.
23140250
A Clinical Log was introduced as part of a medical student learning portfolio, aiming to develop a habit of critical reflection while learning was taking place, and to provide feedback to students and the institution on learning progress. It was designed as a longitudinal, self-directed, structured record of student learning events, with reflection on these for personal and professional development, and actions planned or taken for learning. As an incentive was needed to encourage student engagement, an innovative Clinical Log station was introduced in the OSCE, an assessment format with established acceptance at the School. This study questions: How does an OSCE Clinical Log station influence Log use by students?
BACKGROUND
The Log station was introduced into the formative, and subsequent summative, OSCEs with careful attention to student and assessor training, marking rubrics and the standard setting procedure. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner.
METHODS
Analysis of the first cohort's Log use over the four-year course (quantified as number of patient visits entered by all students) revealed limited initial use. Usage was stimulated after introduction of the Log station early in third year, with some improvement during the subsequent year-long integrated community-based clerkship. Student reflection, quantified by the mean number of characters in the 'reflection' fields per entry, peaked just prior to the final OSCE (mid-Year 4). Following this, very few students continued to enter and reflect on clinical experience using the Log.
RESULTS
While the current study suggested that we can't assume students will self-reflect unless such an activity is included in an assessment, ongoing work has focused on building learner and faculty confidence in the value of self-reflection as part of being a competent physician.
CONCLUSION
[ "Attitude of Health Personnel", "Awareness", "Clinical Clerkship", "Cohort Studies", "Competency-Based Education", "Curriculum", "Documentation", "Education, Medical, Graduate", "Educational Measurement", "Feedback", "Humans", "Internship and Residency", "New South Wales", "Problem-Based Learning", "Self-Assessment", "Software", "Students, Medical" ]
3511194
Introduction of OSCE station
Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station. The Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1. Clinical Log OSCE station marking sheet. Assessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard setting procedure, and also had online access to the station information and aims, prior to the examinations. The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs.
Methods
Introduction of OSCE station Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use, a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station. The Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1. Clinical Log OSCE station marking sheet. Assessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard-setting procedure, and also had online access to the station information and aims, prior to the examinations. The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs.
Post OSCE feedback While all students were offered one-on-one feedback on OSCE station performance, those who failed the station received an email encouraging them to come for feedback, and all students in this category (and the few who were absent for the formative OSCE) attended for follow-up. Each section of the marking sheet was discussed with these students, for example how each was valued according to the marks, response to written assessors’ comments and identification of strategies to respond to these. The study received ethics approval from the University Human Ethics Research Committee.
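The borderline regression method named in the methods can be sketched as follows: regress each examinee's total station score on the assessor's global rating, then take the predicted score at the "borderline" rating as the pass-fail cut score. A minimal illustration under assumed conventions (the rating scale, scores and function name below are hypothetical, not the study's data):

```python
# Borderline regression standard setting (sketch, hypothetical data).
# Each examinee has a station checklist score and a global rating,
# e.g. 0 = clear fail, 1 = borderline, 2 = clear pass, 3 = excellent.
# Fit score = a + b * rating by ordinary least squares; the cut score
# is the predicted score at the borderline rating (1 here).

def borderline_regression_cut(scores, ratings, borderline=1.0):
    n = len(scores)
    mean_s = sum(scores) / n
    mean_r = sum(ratings) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(ratings, scores))
    var = sum((r - mean_r) ** 2 for r in ratings)
    slope = cov / var
    intercept = mean_s - slope * mean_r
    return intercept + slope * borderline

scores = [12, 14, 15, 18, 20, 22, 25, 27]   # hypothetical station scores
ratings = [0, 1, 1, 2, 2, 2, 3, 3]          # hypothetical global ratings
cut = borderline_regression_cut(scores, ratings)
print(round(cut, 2))  # predicted score at the borderline rating
```

In practice the regression is run per station over the whole cohort, which is why the cut point can shift between the Formative and Summative OSCEs as cohort performance changes.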
Results
Seventy students completed the Formative OSCE (95% of the Phase 2 cohort), with all students completing the Phase 2 (N = 74) and Phase 3 Summative OSCEs (N = 68) respectively. Student absence for the initial Phase 2 Formative OSCE and attrition prior to the Phase 3 OSCE explain the varying student numbers at each of these examinations. Performance on the OSCE Log station Cohort performance on this station improved with each OSCE experience, as evidenced by the increasing station cut point and mean of the cohort performance (Table 1). Comparison of average station scores and cut points for the Clinical Log station in the Formative and Summative OSCEs (2009, 2010). Log usage Examination of cohort Log use over the four years of the course (quantified as the number of patient visits entered by all students) revealed there was limited use for the first two years, with a small flurry of activity when the students commenced their Phase 2 hospital-based speciality rotations in July 2008. Log use was stimulated following introduction of the OSCE station early in the third year, and following the June holiday break, students engaged with the log to a greater extent during Phase 3. However, after the Phase 3 OSCE in June 2010, very few students continued to enter and reflect on clinical experience using the Clinical Log (Figure 2). Cohort Clinical Log entries 2007–2010 (year-month vs number of patient visits, all students). Continuous monitoring of log usage revealed a rush of log entries in the periods prior to the Formative Phase 2 OSCE in late March 2009, the Summative Phase 2 OSCE in mid-June 2009, and the Summative Phase 3 OSCE in early July 2010. While many entries were recorded at the time of the patient visit, close analysis revealed a flurry of data entry immediately preceding each OSCE, with many of these entries delayed relative to the patient visit. Quantity of reflection recorded in the log Student reflection, quantified by the mean number of characters in the ‘reflection’ fields per entry, peaked just prior to the Phase 3 OSCE (Figure 3). Closer analysis of the July entries reveals a large increase in data entry in the reflection fields on July 1st and 2nd, the two days preceding the Phase 3 OSCE (Figure 4). Cohort quantity of reflection over course 2007–2010 (year-month vs mean number of characters in all three ‘reflection’ fields per Log entry). Cohort quantity of reflection prior to the final OSCE on July 3rd 2010 (month-day vs mean number of characters in all three ‘reflection’ fields per Log entry). Following the final OSCE on July 3rd, log use including reflection was limited to a very small number of students.
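The reflection metric used in these results (mean characters across the three ‘reflection’ fields per entry, aggregated by month) could be computed along these lines. The field names and toy entries below are hypothetical assumptions for illustration; the study's actual log schema is not specified here:

```python
# Mean reflection length per Log entry, grouped by year-month (sketch).
# Field names ("reflection_1".."reflection_3", "date") are assumed.
from collections import defaultdict

def mean_reflection_chars(entries):
    totals = defaultdict(lambda: [0, 0])  # month -> [char_sum, entry_count]
    for e in entries:
        month = e["date"][:7]  # "YYYY-MM"
        chars = sum(len(e.get(f, "")) for f in
                    ("reflection_1", "reflection_2", "reflection_3"))
        totals[month][0] += chars
        totals[month][1] += 1
    # mean characters across all three reflection fields, per entry
    return {m: s / n for m, (s, n) in totals.items()}

entries = [
    {"date": "2010-07-01", "reflection_1": "Learned about heart failure.",
     "reflection_2": "Review ECG basics.", "reflection_3": ""},
    {"date": "2010-07-02", "reflection_1": "Discussed case with GP.",
     "reflection_2": "", "reflection_3": "Plan: read guidelines."},
]
print(mean_reflection_chars(entries))
```

Plotting this monthly series over the four-year course would reproduce the shape described above: a pre-OSCE spike and a sharp fall after the final examination.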
Conclusions
While the current study suggested that we can’t assume students will self-reflect unless such an activity is included in an assessment, subsequent efforts to embed the reflective log in the students’ learning environment should facilitate ongoing student engagement. Ongoing work has also focused on building learner and faculty confidence in the value of self-reflection as part of being a competent physician.
[ "Introduction of OSCE station", "Post OSCE feedback", "Performance on the OSCE Log station", "Log usage", "Quantity of reflection recorded in the log", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station.\nThe Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1.\nClinical Log OSCE station marking sheet.\nAssessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard setting procedure, and also had online access to the station information and aims, prior to the examinations. 
The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs.", "While all students were offered one-on-one feedback on OSCE station performance, those who failed the station received an email encouraging them to come for feedback, and all students in this category (and the few who were absent for the formative OSCE), attended for follow-up. Each section of the marking sheet was discussed with these students, for example how each was valued according to the marks, response to written assessors comments and identification of strategies to respond to these.\nThe study received ethics approval from the University Human Ethics Research Committee.", "Cohort performance on this station improved with each OSCE experience, as evidenced by the increasing station cut point and mean of the cohort performance (Table 1).\nComparison of average station scores and cut points for the Clinical Log station in the Formative and Summative OSCEs (2009, 2010)", "Examination of cohort Log use over the four years of the course (quantified as the number of patient visits entered by all students) revealed there was limited use for the first two years, with a small flurry of activity when the students commenced their Phase 2 hospital-based speciality rotations in July, 2008. Log use was stimulated following introduction of the OSCE station early in the third year, and following the June holiday break, students engaged with the log to a greater extent during Phase 3. However, after the Phase 3 OSCE in June 2010, very few students continued to enter and reflect on clinical experience using the Clinical Log (Figure 2).\nCohort Clinical Log entries 2007–2010 (year-month vs number patient visits, all students).\nContinuous monitoring of log usage revealed there was a rush of log entries in the period prior to the Formative Phase 2 OSCE in late March, 2009; the Summative Phase 2 OSCE in mid-June 2009, and the Summative Phase 3 OSCE in early July, 2010. 
While many entries were recorded at the time of the patient visit, close analysis revealed that there was a flurry of data entry immediately preceding each OSCE, and that many of these entries were delayed relative to the patient visit.", "Student reflection, quantified by the mean number of characters in the ‘reflection’ fields per entry, peaked just prior to the Phase 3 OSCE (Figure 3). Closer analysis of the month of July entries reveals a large increase in data-entry in the reflection fields on July 1st and 2nd, in the two days preceding the Phase 3 OSCE (Figure 4).\nCohort quantity of reflection over course 2007–2010 (year-month vs mean number of characters in all three ‘reflection fields’ per Log entry).\nCohort quantity of reflection, prior to final OSCE on July 3rd 2010 (month-day vs mean number of characters in all three ‘reflection’ fields per Log entry).\nFollowing the final OSCE on July 3rd, log use including reflection was limited to a very small number of students.", "The authors declare that they have no competing interests.", "JNH, HR, LC and MO have all made substantial contributions to conception and design of the study. LC and MO acquired the data and completed data analysis, and all authors were involved in data interpretation. JNH drafted the manuscript. All authors have revised it critically for important intellectual content and have given final approval of the version to be published.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6920/12/111/prepub\n" ]
[ null, null, null, null, null, null, null, null ]
[ "Methods", "Introduction of OSCE station", "Post OSCE feedback", "Results", "Performance on the OSCE Log station", "Log usage", "Quantity of reflection recorded in the log", "Discussion", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ " Introduction of OSCE station Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station.\nThe Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1.\nClinical Log OSCE station marking sheet.\nAssessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard setting procedure, and also had online access to the station information and aims, prior to the examinations.\nThe Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs.\n Post OSCE feedback While all students were offered one-on-one feedback on OSCE station performance, those who failed the station received an email encouraging them to come for feedback, and all students in this category (and the few who were absent for the formative OSCE), attended for follow-up. Each section of the marking sheet was discussed with these students, for example how each was valued according to the marks, response to written assessors comments and identification of strategies to respond to these.\nThe study received ethics approval from the University Human Ethics Research Committee.", "Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. 
This was the ideal occasion to ‘trial’ a Clinical Log OSCE station.\nThe Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1.\nClinical Log OSCE station marking sheet.\nAssessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard setting procedure, and also had online access to the station information and aims, prior to the examinations. The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs.", "While all students were offered one-on-one feedback on OSCE station performance, those who failed the station received an email encouraging them to come for feedback, and all students in this category (and the few who were absent for the formative OSCE), attended for follow-up. 
Results

Seventy students completed the Formative OSCE (95% of the Phase 2 cohort), and all students completed the Phase 2 (N = 74) and Phase 3 (N = 68) Summative OSCEs. Student absence from the initial Phase 2 Formative OSCE, and attrition prior to the Phase 3 OSCE, explain the varying student numbers at each of these examinations.

Performance on the OSCE Log station

Cohort performance on this station improved with each OSCE experience, as evidenced by the increasing station cut point and mean of the cohort performance (Table 1).

Table 1. Comparison of average station scores and cut points for the Clinical Log station in the Formative and Summative OSCEs (2009, 2010).

Log usage

Examination of cohort Log use over the four years of the course (quantified as the number of patient visits entered by all students) revealed limited use during the first two years, with a small flurry of activity when students commenced their Phase 2 hospital-based speciality rotations in July 2008. Log use was stimulated by the introduction of the OSCE station early in the third year, and, following the June holiday break, students engaged with the log to a greater extent during Phase 3.
However, after the Phase 3 OSCE in July 2010, very few students continued to enter and reflect on clinical experience using the Clinical Log (Figure 2).

Figure 2. Cohort Clinical Log entries 2007–2010 (year-month vs number of patient visits, all students).

Continuous monitoring of log usage revealed a rush of log entries in the periods prior to the Formative Phase 2 OSCE in late March 2009, the Summative Phase 2 OSCE in mid-June 2009, and the Summative Phase 3 OSCE in early July 2010. While many entries were recorded at the time of the patient visit, close analysis revealed a flurry of data entry immediately preceding each OSCE, with many of these entries delayed relative to the patient visit.

Quantity of reflection recorded in the log

Student reflection, quantified by the mean number of characters in the ‘reflection’ fields per entry, peaked just prior to the Phase 3 OSCE (Figure 3). Closer analysis of the July entries reveals a large increase in data entry in the reflection fields on July 1st and 2nd, the two days preceding the Phase 3 OSCE (Figure 4).

Figure 3. Cohort quantity of reflection over the course, 2007–2010 (year-month vs mean number of characters in all three ‘reflection’ fields per Log entry).

Figure 4. Cohort quantity of reflection prior to the final OSCE on July 3rd 2010 (month-day vs mean number of characters in all three ‘reflection’ fields per Log entry).

Following the final OSCE on July 3rd, log use, including reflection, was limited to a very small number of students.
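The usage and reflection metrics reported here (entries per month, mean characters across the three reflection fields per entry, and the delay between patient visit and log entry) are simple aggregations over the raw log. A sketch under an assumed, hypothetical entry structure (the field layout and sample entries below are illustrative, not the study's actual data model):

```python
from collections import defaultdict
from datetime import date

# Hypothetical log entries: (visit_date, entry_date, three reflection fields).
entries = [
    (date(2009, 3, 20), date(2009, 3, 25), ["Chest pain ddx", "", "Read ECG guide"]),
    (date(2009, 6, 10), date(2009, 6, 14), ["Asthma plan", "Peak flow technique", ""]),
    (date(2010, 7, 1), date(2010, 7, 2), ["Diabetes review", "Foot exam", "Refer podiatry"]),
]

def monthly_usage(entries):
    """Number of log entries per year-month, keyed by when the entry was made."""
    counts = defaultdict(int)
    for _visit, entered, _refl in entries:
        counts[(entered.year, entered.month)] += 1
    return dict(counts)

def mean_reflection_chars(entries):
    """Mean characters across the three reflection fields, per entry."""
    totals = [sum(len(field) for field in refl) for _v, _e, refl in entries]
    return sum(totals) / len(totals)

def delayed_entries(entries, max_days=7):
    """Entries recorded more than max_days after the patient visit."""
    return [e for e in entries if (e[1] - e[0]).days > max_days]
```

Binning `monthly_usage` by entry date rather than visit date is what exposes the pre-OSCE rushes: a cluster of entry dates just before an examination, paired with earlier visit dates, indicates delayed rather than contemporaneous recording.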
Discussion

At the start of a new graduate-entry medical school in Australia, an electronic Clinical Log was implemented, as part of a learning portfolio, to capture students’ learning experiences and foster learning in a range of clinical settings throughout the course. It also aimed to develop a habit of critical reflection while learning was taking place, and to provide feedback to students and the institution on learning progress. As such, the Clinical Log had high face validity and utility for formative assessment, but sound psychometrics were needed for high-stakes summative purposes [20]. To address reliability issues, such as high variability of scoring between examiners, the Log assessment was carefully introduced with clear articulation of criteria to both students and assessors. Experienced, trained assessors, who understood the purpose of the assessment and the expected standard of student performance, were used.
This study, the first in a series reporting on outcomes and challenges associated with the Clinical Log, showed that most students had a low initial level of engagement with the Log until motivated by the inclusion of a Clinical Log station in the end-of-Phase OSCEs. It demonstrated the well-known maxim that ‘assessment drives learning’. However, the initiative encouraged student engagement with the school’s aim of fostering student reflection for professional reasons.

The improvement in cohort performance on the Log station with subsequent OSCEs was attributed to the following: assessment criteria were communicated to students and assessors prior to each exam; a formative assessment experience was offered prior to the summative testing; and post-exam feedback and remediation were offered to all students, especially borderline and failing students.

While the Log OSCE assessment appeared to be the major motivator for recording clinical experiences, it was pleasing to observe greater use by Phase 3 students throughout the year-long community-based integrated placement. This may have been due to greater exposure to ‘undifferentiated patients’ and a growing appreciation of the value of self-reflection.

For this paper only the quantity of student reflection in the Log has been analysed. It seemed that, for most students, inclusion of the reflective criteria in the OSCE station was necessary for engagement in this desired professional activity. The quality of the reflections in the Log needs further evaluation. The fact that many students were only motivated to record and reflect on clinical encounters as the OSCE assessment approached suggests that most reflection occurred on, rather than in, professional action [14]. Reflection on action can occur when the student enters and reflects on each patient interaction in the Log, or shares Log experiences with preceptors, peers and tutors. Delayed Log entry may have limited impact on reflection on action, but it is likely to significantly impede reflection in action. The longitudinal placement is valued because it offers students the benefits of long-term patient follow-up (continuity-of-care experiences) and ongoing reflection on diagnosis and management decisions as the presentation unfolds (reflection in action). While the Log allowed recording of continuity of care for individual patients, use of this facility, and reflection in action, requires Log entry closely tied to the patient interaction, rather than delayed entry as assessment approaches. The planned development of a mobile Clinical Log application may facilitate student engagement with more reflection in, as well as on, action.

The School continues to monitor student and clinician feedback on the Clinical Log, addressing issues that have discouraged use. More guidance is being offered on expectations and the educational use of the Log, presenting it as an activity for learning rather than in competition with it. With the growing use of e-portfolios and logs in undergraduate and postgraduate education, and recognition of the value of self-reflection for professional competency [15], the clinicians who supervise and/or mentor students have been offered more training on providing constructive feedback on learners’ personal and professional development, and on reflection about this. This was deemed important for those who embraced completion of a dossier of evidence but were less comfortable with the reflective component of the Log and its assessment.
Potentially, the investment in faculty professional development will benefit learners across the vertical continuum of medical education, and faculty themselves (as most teachers contribute to both undergraduate and postgraduate medical education, and may use logs or portfolios themselves in continuing medical education).

Further work is underway to review the quality, as well as the quantity, of the reflections and to correlate these with learner academic success. This should help strengthen the evidence base for the use of electronic reflective logs as part of learning portfolios in undergraduate medical education, and build learner confidence in the value of reflection for developing professional artistry. It will be interesting to investigate further at what stage students appreciate the value of self-reflection as part of being a competent physician.

Recent work on different aspects of a portfolio approach to competency-based assessment has reported the value of giving students the responsibility of ‘interpreting, selecting and combining formative assessments received during the year, to document their performance in a learning portfolio for summative decisions’ [28-30]. The authors advise that this has helped students internalise the self-regulation process, potentially more valuable for professional development than an extrinsic driver of portfolio use such as assessment.

Conclusions

While the current study suggested that we cannot assume students will self-reflect unless such an activity is included in an assessment, subsequent efforts to embed the reflective log in the students’ learning environment should facilitate ongoing student engagement. Ongoing work has also focused on building learner and faculty confidence in the value of self-reflection as part of being a competent physician.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JNH, HR, LC and MO all made substantial contributions to the conception and design of the study. LC and MO acquired the data and completed the data analysis, and all authors were involved in data interpretation. JNH drafted the manuscript. All authors revised it critically for important intellectual content and gave final approval of the version to be published.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1472-6920/12/111/prepub
Keywords: Medical education, Electronic reflective clinical log, Assessment, OSCE log station
Methods: Introduction of OSCE station Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station. The Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1. Clinical Log OSCE station marking sheet. Assessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard setting procedure, and also had online access to the station information and aims, prior to the examinations. 
The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs. Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station. The Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1. Clinical Log OSCE station marking sheet. Assessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard setting procedure, and also had online access to the station information and aims, prior to the examinations. 
The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs. Post OSCE feedback While all students were offered one-on-one feedback on OSCE station performance, those who failed the station received an email encouraging them to come for feedback, and all students in this category (and the few who were absent for the formative OSCE), attended for follow-up. Each section of the marking sheet was discussed with these students, for example how each was valued according to the marks, response to written assessors comments and identification of strategies to respond to these. The study received ethics approval from the University Human Ethics Research Committee. While all students were offered one-on-one feedback on OSCE station performance, those who failed the station received an email encouraging them to come for feedback, and all students in this category (and the few who were absent for the formative OSCE), attended for follow-up. Each section of the marking sheet was discussed with these students, for example how each was valued according to the marks, response to written assessors comments and identification of strategies to respond to these. The study received ethics approval from the University Human Ethics Research Committee. Introduction of OSCE station: Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so students and assessors could be ‘trained’ on the nature of this examination: its content, format and standard-setting procedure, and to give student feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station. 
The Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording and reflection on clinical experience and identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1. Clinical Log OSCE station marking sheet. Assessors scored performance using the following three main criteria: quantity and diversity of recorded experiences; presentation of case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard setting procedure, and also had online access to the station information and aims, prior to the examinations. The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs. Post OSCE feedback: While all students were offered one-on-one feedback on OSCE station performance, those who failed the station received an email encouraging them to come for feedback, and all students in this category (and the few who were absent for the formative OSCE), attended for follow-up. Each section of the marking sheet was discussed with these students, for example how each was valued according to the marks, response to written assessors comments and identification of strategies to respond to these. The study received ethics approval from the University Human Ethics Research Committee. 
Results: Seventy students completed the Formative OSCE (95% of the Phase 2 cohort), with all students completing the Phase 2 (N = 74) and Phase 3 Summative OSCEs (N = 68) respectively. Student absence for the initial Phase 2 Formative OSCE and attrition prior to the Phase 3 OSCE in June 2009 explains the varying student numbers at each of these examinations. Performance on the OSCE Log station Cohort performance on this station improved with each OSCE experience, as evidenced by the increasing station cut point and mean of the cohort performance (Table 1). Comparison of average station scores and cut points for the Clinical Log station in the Formative and Summative OSCEs (2009, 2010) Cohort performance on this station improved with each OSCE experience, as evidenced by the increasing station cut point and mean of the cohort performance (Table 1). Comparison of average station scores and cut points for the Clinical Log station in the Formative and Summative OSCEs (2009, 2010) Log usage Examination of cohort Log use over the four years of the course (quantified as the number of patient visits entered by all students) revealed there was limited use for the first two years, with a small flurry of activity when the students commenced their Phase 2 hospital-based speciality rotations in July, 2008. Log use was stimulated following introduction of the OSCE station early in the third year, and following the June holiday break, students engaged with the log to a greater extent during Phase 3. However, after the Phase 3 OSCE in June 2010, very few students continued to enter and reflect on clinical experience using the Clinical Log (Figure 2). Cohort Clinical Log entries 2007–2010 (year-month vs number patient visits, all students). Continuous monitoring of log usage revealed there was a rush of log entries in the period prior to the Formative Phase 2 OSCE in late March, 2009; the Summative Phase 2 OSCE in mid-June 2009, and the Summative Phase 3 OSCE in early July, 2010. 
While many entries were recorded at the time of the patient visit, close analysis revealed that there was a flurry of data entry immediately preceding each OSCE, and that many of these entries were delayed relative to the patient visit. Examination of cohort Log use over the four years of the course (quantified as the number of patient visits entered by all students) revealed there was limited use for the first two years, with a small flurry of activity when the students commenced their Phase 2 hospital-based speciality rotations in July, 2008. Log use was stimulated following introduction of the OSCE station early in the third year, and following the June holiday break, students engaged with the log to a greater extent during Phase 3. However, after the Phase 3 OSCE in June 2010, very few students continued to enter and reflect on clinical experience using the Clinical Log (Figure 2). Cohort Clinical Log entries 2007–2010 (year-month vs number patient visits, all students). Continuous monitoring of log usage revealed there was a rush of log entries in the period prior to the Formative Phase 2 OSCE in late March, 2009; the Summative Phase 2 OSCE in mid-June 2009, and the Summative Phase 3 OSCE in early July, 2010. While many entries were recorded at the time of the patient visit, close analysis revealed that there was a flurry of data entry immediately preceding each OSCE, and that many of these entries were delayed relative to the patient visit. Quantity of reflection recorded in the log Student reflection, quantified by the mean number of characters in the ‘reflection’ fields per entry, peaked just prior to the Phase 3 OSCE (Figure 3). Closer analysis of the month of July entries reveals a large increase in data-entry in the reflection fields on July 1st and 2nd, in the two days preceding the Phase 3 OSCE (Figure 4). Cohort quantity of reflection over course 2007–2010 (year-month vs mean number of characters in all three ‘reflection fields’ per Log entry). 
Cohort quantity of reflection, prior to final OSCE on July 3rd 2010 (month-day vs mean number of characters in all three ‘reflection’ fields per Log entry). Following the final OSCE on July 3rd, log use including reflection was limited to a very small number of students. Student reflection, quantified by the mean number of characters in the ‘reflection’ fields per entry, peaked just prior to the Phase 3 OSCE (Figure 3). Closer analysis of the month of July entries reveals a large increase in data-entry in the reflection fields on July 1st and 2nd, in the two days preceding the Phase 3 OSCE (Figure 4). Cohort quantity of reflection over course 2007–2010 (year-month vs mean number of characters in all three ‘reflection fields’ per Log entry). Cohort quantity of reflection, prior to final OSCE on July 3rd 2010 (month-day vs mean number of characters in all three ‘reflection’ fields per Log entry). Following the final OSCE on July 3rd, log use including reflection was limited to a very small number of students. Performance on the OSCE Log station: Cohort performance on this station improved with each OSCE experience, as evidenced by the increasing station cut point and mean of the cohort performance (Table 1). Comparison of average station scores and cut points for the Clinical Log station in the Formative and Summative OSCEs (2009, 2010) Log usage: Examination of cohort Log use over the four years of the course (quantified as the number of patient visits entered by all students) revealed there was limited use for the first two years, with a small flurry of activity when the students commenced their Phase 2 hospital-based speciality rotations in July, 2008. Log use was stimulated following introduction of the OSCE station early in the third year, and following the June holiday break, students engaged with the log to a greater extent during Phase 3. 
However, after the Phase 3 OSCE in June 2010, very few students continued to enter and reflect on clinical experience using the Clinical Log (Figure 2). Cohort Clinical Log entries 2007–2010 (year-month vs number patient visits, all students). Continuous monitoring of log usage revealed there was a rush of log entries in the period prior to the Formative Phase 2 OSCE in late March, 2009; the Summative Phase 2 OSCE in mid-June 2009, and the Summative Phase 3 OSCE in early July, 2010. While many entries were recorded at the time of the patient visit, close analysis revealed that there was a flurry of data entry immediately preceding each OSCE, and that many of these entries were delayed relative to the patient visit. Quantity of reflection recorded in the log: Student reflection, quantified by the mean number of characters in the ‘reflection’ fields per entry, peaked just prior to the Phase 3 OSCE (Figure 3). Closer analysis of the month of July entries reveals a large increase in data-entry in the reflection fields on July 1st and 2nd, in the two days preceding the Phase 3 OSCE (Figure 4). Cohort quantity of reflection over course 2007–2010 (year-month vs mean number of characters in all three ‘reflection fields’ per Log entry). Cohort quantity of reflection, prior to final OSCE on July 3rd 2010 (month-day vs mean number of characters in all three ‘reflection’ fields per Log entry). Following the final OSCE on July 3rd, log use including reflection was limited to a very small number of students. Discussion: At the start of a new graduate-entry medical school in Australia, an electronic Clinical Log was implemented, as part of a learning portfolio, to capture students’ learning experiences and foster learning in a range of clinical settings throughout the course. It also aimed to develop a habit of critical reflection while learning was taking place, and provide feedback to students and the institution on learning progress. 
As such, the Clinical Log had high face validity with utility for formative assessment but sound psychometrics were needed for high stakes summative purposes [20]. To address reliability issues such as high variability of scoring between examiners, the Log assessment was carefully introduced with clear articulation of criteria to both students and assessors. Experienced trained assessors, who understood the purpose of the assessment and expected student performance, were used. This study, the first in a series reporting on outcomes and challenges associated with the Clinical Log, showed that most students had a low initial level of engagement with the Log until motivated by inclusion of a Clinical Log station in the end-of-Phase OSCEs. It demonstrated the well-known fact that ‘assessment drives learning’. However the initiative encouraged student engagement with the school’s aim to foster student reflection for professional reasons. The improvement in cohort performance on the Log station with subsequent OSCEs was attributed to the following: assessment criteria were communicated to students and assessors prior to each exam; a formative assessment experience was offered prior to the summative testing; and post-exam feedback and remediation were offered to all students, especially borderline and failing students. While it seemed that the Log OSCE assessment was the major motivator for recording of clinical experiences, it was pleasing to observe greater use by Phase 3 students throughout the year-long community-based integrated placement. This may have been due to greater exposure to ‘undifferentiated patients’ and a growing appreciation of the value of self-reflection. For this paper only the quantity of student reflection in the Log has been analysed. It seemed that for most students inclusion of the reflective criteria in the OSCE station was necessary for student engagement in this desired professional activity. 
The quality of the reflections in the Log needs further evaluation. The fact that many students were only motivated to record and reflect on clinical encounters as the OSCE assessment approached suggests that most reflection occurred on, rather than in, professional action [14]. Reflection on action can occur when the student enters and reflects on each patient interaction in the Log, or shares Log experiences with preceptors, peers and tutors. Delayed Log entry may have limited impact on reflection on action, but it is likely to significantly impede reflection in action. The longitudinal placement is valued because it offers students the benefits of long-term patient follow-up (continuity-of-care experiences) and ongoing reflection on diagnosis and management decisions as the presentation unfolds (reflection in action). While the Log allowed recording of continuity of care for individual patients, use of this facility, and reflection in action, requires Log entry closely tied to the patient interaction, rather than delayed entry as assessment approaches. The planned development of a mobile Clinical Log application may facilitate student engagement with more reflection in, as well as on, action. The School continues to monitor student and clinician feedback on the Clinical Log, addressing issues that have discouraged use. More guidance is being offered on expectations and the educational use of the Log, presenting it as an activity for learning rather than in competition with it. With the growing use of e-portfolios and logs in undergraduate and postgraduate education, and recognition of the value of self-reflection for professional competency [15], the clinicians who supervise and/or mentor students have been offered more training on providing constructive feedback on learners’ personal and professional development, and their reflection on it. 
This was deemed important for those who embraced completion of a dossier of evidence but were less comfortable with the reflective component of the Log and its assessment. Potentially, the investment in faculty professional development will benefit learners across the vertical continuum of medical education, and faculty themselves (as most teachers contribute to both undergraduate and postgraduate medical education, and may use logs or portfolios in their own continuing medical education). Further work is underway to review the quality, as well as the quantity, of the reflections and to correlate these with learner academic success. This should help strengthen the evidence base for using electronic reflective logs as part of learning portfolios in undergraduate medical education, and build learner confidence in the value of reflection for developing professional artistry. It will be interesting to further investigate at what stage students appreciate the value of self-reflection as part of being a competent physician. Recent work on different aspects of a portfolio approach to competency-based assessment has reported the value of giving students the responsibility of ‘interpreting, selecting and combining formative assessments received during the year, to document their performance in a learning portfolio for summative decisions’ [28-30]. The authors advise that this has helped students internalise the self-regulation process, potentially more valuable for professional development than an extrinsic driver of portfolio use such as assessment. Conclusions: While the current study suggested that we can’t assume students will self-reflect unless such an activity is included in an assessment, subsequent efforts to embed the reflective log in the students’ learning environment should facilitate ongoing student engagement. Ongoing work has also focused on building learner and faculty confidence in the value of self-reflection as part of being a competent physician. 
Competing interests: The authors declare that they have no competing interests. Authors’ contributions: JNH, HR, LC and MO have all made substantial contributions to conception and design of the study. LC and MO acquired the data and completed data analysis, and all authors were involved in data interpretation. JNH drafted the manuscript. All authors have revised it critically for important intellectual content and have given final approval of the version to be published. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/12/111/prepub
Background: A Clinical Log was introduced as part of a medical student learning portfolio, aiming to develop a habit of critical reflection while learning was taking place, and to provide feedback to students and the institution on learning progress. It was designed as a longitudinal, self-directed, structured record of student learning events, with reflection on these for personal and professional development, and actions planned or taken for learning. As an incentive was needed to encourage student engagement, an innovative Clinical Log station was introduced in the OSCE, an assessment format with established acceptance at the School. This study asks: How does an OSCE Clinical Log station influence Log use by students? Methods: The Log station was introduced into the formative, and subsequent summative, OSCEs with careful attention to student and assessor training, marking rubrics and the standard-setting procedure. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner. Results: Analysis of the first cohort's Log use over the four-year course (quantified as the number of patient visits entered by all students) revealed limited initial use. Usage was stimulated after introduction of the Log station early in third year, with some improvement during the subsequent year-long integrated community-based clerkship. Student reflection, quantified by the mean number of characters in the 'reflection' fields per entry, peaked just prior to the final OSCE (mid-Year 4). Following this, very few students continued to enter and reflect on clinical experience using the Log. Conclusions: While the current study suggested that we can't assume students will self-reflect unless such an activity is included in an assessment, ongoing work has focused on building learner and faculty confidence in the value of self-reflection as part of being a competent physician.
Introduction of OSCE station: Given the dispersed nature of the clinical placements, the Clinical Log played a critical role in correlating students’ clinical experience with the curriculum. To encourage student use, a decision was made to introduce a Clinical Log station in the Phase 2 OSCE. A Formative OSCE had been scheduled in March 2009, about three months before the first Summative Phase 2 OSCE, so that students and assessors could be ‘trained’ on the nature of this examination (its content, format and standard-setting procedure) and students could receive feedback on their progress to date. This was the ideal occasion to ‘trial’ a Clinical Log OSCE station. The Clinical Log station, developed by the Director of Clinical Education (JNH) in a similar format to the other OSCE stations, comprised a marking sheet and station instructions and aims. It aimed to foster longitudinal recording of, and reflection on, clinical experience and the identification of significant learning issues in relation to all aspects of patient and self-care, health promotion, teamwork, and quality and safety. The scoring process sought evidence of educational use of the log, and an ability to present and reflect on key learning issues in a concise and coherent manner, as illustrated in Figure 1. Clinical Log OSCE station marking sheet. Assessors scored performance using three main criteria: quantity and diversity of recorded experiences; presentation of the case; and reflection on development issues in relation to the presentation. A systematic approach for generating the scores was implemented. The pass-fail cut score for the Log station was calculated using the borderline regression method of standard setting [27], as for the other 12 stations. Students and assessors were briefed on the station and standard-setting procedure, and also had online access to the station information and aims prior to the examinations. 
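The borderline regression method referenced above regresses each examinee’s station (checklist) score on the assessor’s global rating, and takes the pass-fail cut score as the fitted checklist score at the ‘borderline’ grade. A minimal sketch with a hand-rolled least-squares fit; the rating scale and scores below are synthetic, not the study’s data:

```python
def borderline_regression_cut(global_ratings, checklist_scores, borderline=2):
    """Least-squares fit of checklist score on global rating;
    the cut score is the fitted value at the borderline grade."""
    n = len(global_ratings)
    mx = sum(global_ratings) / n
    my = sum(checklist_scores) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(global_ratings, checklist_scores))
    sxx = sum((x - mx) ** 2 for x in global_ratings)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept + slope * borderline

# Hypothetical global ratings (1=fail, 2=borderline, 3=pass, 4=good)
# paired with checklist scores out of 20
ratings = [1, 2, 2, 3, 3, 4, 4]
scores  = [8, 11, 13, 15, 16, 18, 19]
cut = borderline_regression_cut(ratings, scores, borderline=2)
print(round(cut, 2))
```

In practice the regression is computed over the whole cohort for each station, so the cut score reflects how assessors as a group scored examinees they rated ‘borderline’.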
The Clinical Log station was then included in the subsequent Summative Phase 2 and Phase 3 OSCEs.
4,088
358
12
[ "log", "osce", "students", "station", "clinical", "reflection", "phase", "clinical log", "use", "phase osce" ]
[ "test", "test" ]
[CONTENT] Medical education | Electronic reflective clinical log | Assessment | OSCE log station [SUMMARY]
[CONTENT] Attitude of Health Personnel | Awareness | Clinical Clerkship | Cohort Studies | Competency-Based Education | Curriculum | Documentation | Education, Medical, Graduate | Educational Measurement | Feedback | Humans | Internship and Residency | New South Wales | Problem-Based Learning | Self-Assessment | Software | Students, Medical [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] log | osce | students | station | clinical | reflection | phase | clinical log | use | phase osce [SUMMARY]
[CONTENT] clinical | station | log | clinical log | osce | setting | standard setting | standard | log station | issues [SUMMARY]
[CONTENT] station | clinical | osce | log | clinical log | students | assessors | standard setting | standard | setting [SUMMARY]
[CONTENT] osce | log | phase | reflection | july | number | 2010 | phase osce | cohort | entries [SUMMARY]
[CONTENT] ongoing | self | reflective log students learning | facilitate ongoing student engagement | facilitate ongoing student | log students | log students learning | log students learning environment | subsequent efforts | facilitate ongoing [SUMMARY]
[CONTENT] log | station | osce | students | clinical | reflection | phase | clinical log | phase osce | cohort [SUMMARY]
[CONTENT] ||| ||| Clinical Log | OSCE | School ||| Log [SUMMARY]
[CONTENT] Log ||| [SUMMARY]
[CONTENT] first | four-year ||| third year | year-long ||| OSCE | mid-Year 4 ||| [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| Clinical Log | OSCE | School ||| Log ||| ||| ||| ||| first | four-year ||| third year | year-long ||| OSCE | mid-Year 4 ||| ||| [SUMMARY]
A combination of silver nanoparticles and visible blue light enhances the antibacterial efficacy of ineffective antibiotics against methicillin-resistant Staphylococcus aureus (MRSA).
27530257
Silver nanoparticles (AgNPs) are potential antimicrobial agents that can be considered an alternative to antibiotics for the treatment of infections caused by multidrug-resistant bacteria. The antimicrobial effects of double and triple combinations of AgNPs, visible blue light, and the conventional antibiotics amoxicillin, azithromycin, clarithromycin, linezolid, and vancomycin against ten clinical isolates of methicillin-resistant Staphylococcus aureus (MRSA) were investigated.
BACKGROUND
The antimicrobial activity of AgNPs in combination with blue light against selected isolates of MRSA was investigated at 1/2-1/128 of the minimal inhibitory concentration (MIC) in 24-well plates. The wells were exposed to a blue light source at 460 nm and 250 mW for 1 h using a photon-emitting diode. Samples were taken at different time intervals, and viable bacterial counts were determined. The double combinations of AgNPs and each of the antibiotics were assessed by the checkerboard method. The killing assay was used to test possible synergistic effects when blue light was further combined with AgNPs and each antibiotic in turn against selected isolates of MRSA.
METHODS
The bactericidal activity of AgNPs, at sub-MIC, and blue light was significantly (p < 0.001) enhanced when both agents were applied in combination compared to each agent alone. Similarly, synergistic interactions were observed when AgNPs were combined with amoxicillin, azithromycin, clarithromycin or linezolid in 30-40 % of the double combinations, with no antagonistic interaction observed against the tested isolates. Combination of the AgNPs with vancomycin did not result in enhanced killing against any of the isolates tested. The antimicrobial activity against MRSA isolates was significantly enhanced in triple combinations of AgNPs, blue light and an antibiotic, compared to treatments involving one or two agents. The bactericidal activities were highest when azithromycin or clarithromycin was included in the triple therapy, compared to the other antibiotics tested.
RESULTS
A new strategy combining AgNPs, blue light, and antibiotics can be used to combat serious infections caused by MRSA. This triple therapy may include antibiotics that have been proven ineffective against MRSA on their own. The suggested approach would be useful in facing fast-growing drug resistance amid the slow development of new antimicrobial agents, and in preserving last-resort antibiotics such as vancomycin.
CONCLUSIONS
[ "Amoxicillin", "Anti-Bacterial Agents", "Azithromycin", "Clarithromycin", "Combined Modality Therapy", "Drug Combinations", "Drug Synergism", "Light", "Linezolid", "Metal Nanoparticles", "Methicillin-Resistant Staphylococcus aureus", "Microbial Sensitivity Tests", "Phototherapy", "Silver", "Vancomycin" ]
4988001
Background
Treatment of infections caused by Staphylococcus aureus has become more difficult because of the emergence of multidrug-resistant isolates [1, 2]. Methicillin-resistant S. aureus (MRSA) presents problems for patients and healthcare-facility staff whose immune systems are compromised, or who have open access to their bodies via wounds, catheters or drips. The infection spectrum ranges from superficial skin infections to more serious diseases such as bronchopneumonia [3]. The failure of antibiotics to manage infections caused by multidrug-resistant (MDR) pathogens, especially MRSA, has triggered much research effort toward alternative antimicrobial approaches with higher efficiency and less resistance developed by the microorganisms. Silver has long been known to exhibit antimicrobial activity against a wide range of microorganisms and has demonstrated considerable effectiveness in bactericidal applications [4], and silver nanoparticles (AgNPs) have been reconsidered as a potential alternative to conventional antimicrobial agents [5]. It has been estimated that 320 tons of nanosilver are used annually [6], with 30 % of all currently registered nano-products containing nanosilver [7]. The use of AgNPs alone or in combination with other antimicrobial agents has been suggested as a potential alternative to traditional treatment of infections caused by MDR pathogens [5]. AgNPs have been found to exhibit antibacterial activity against MRSA in vitro when tested alone or in combination with other antimicrobial agents [8–10]. Metal nanostructures attract a great deal of attention due to their unique properties. AgNPs are a potential biocide reported to be less toxic than silver ions [11]. AgNPs can be incorporated into antimicrobial applications such as bandages, surface coatings, medical equipment, food packaging, functional clothes and cosmetics [12]. 
Blue light has recently attracted increasing attention as a novel phototherapy-based antimicrobial agent with significant activity against a broad range of bacterial and fungal pathogens and less chance of resistance development compared to antibiotics [13, 14]. Further, blue light has been shown to be highly effective against MRSA and other common nosocomial bacterial pathogens [15, 16]. The present investigation aims to evaluate the effectiveness of the triple combination of AgNPs, blue light and the conventional antibiotics vancomycin, linezolid, amoxicillin, azithromycin, and clarithromycin against clinical isolates of MRSA. To the best of our knowledge, this is the first study to utilize this triple combination against pathogenic bacteria.
Results
Susceptibility of the isolates to AgNPs and the antibiotics: The MIC of AgNPs was found to be 4 µg/mL, with an MBC range of 8–16 µg/mL and an MBC90 of 8 µg/mL (Table 2). Vancomycin was the only antibiotic that showed activity against the tested isolates, with MIC90 and MBC90 values of 2 and 8 µg/mL, respectively (Table 2).

Table 2. Susceptibility of the tested isolates to AgNPs and the antibiotics

  Antimicrobial agent    MIC90 (µg/mL)    MBC90 (µg/mL)
  AgNPs                  4                8
  Amoxicillin            >64              >64
  Azithromycin           >64              >64
  Clarithromycin         >64              >64
  Linezolid              32               >64
  Vancomycin             2                8

MIC90: the minimum inhibitory concentration required to inhibit the growth of 90 % of the isolates. MBC90: the minimum bactericidal concentration required to kill 99.9 % of bacteria in 90 % of the isolates.
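The MIC90 reported in Table 2 is the lowest concentration that inhibits at least 90 % of the isolates, i.e. the 90th percentile of the per-isolate MIC distribution (conventions vary slightly between laboratories). A minimal sketch; the per-isolate MIC values below are hypothetical, not the study’s raw data:

```python
import math

def mic90(mics):
    """Lowest MIC covering at least 90% of isolates:
    the ceil(0.9 * n)-th smallest value."""
    ordered = sorted(mics)
    idx = math.ceil(0.9 * len(ordered)) - 1
    return ordered[idx]

# Hypothetical per-isolate MICs (µg/mL) for ten isolates
vancomycin_mics = [0.5, 1, 1, 1, 2, 2, 2, 2, 2, 4]
print(mic90(vancomycin_mics))  # 2
```

The same function applied to MBC values gives the MBC90.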
The isolates were resistant to linezolid with an MIC90 of 32 µg/mL, and to amoxicillin, azithromycin and clarithromycin with MIC90 values >64 µg/mL (Table 2). Combination of AgNPs with blue light against MRSA: The antimicrobial activity of AgNPs in combination with blue light against one of the MRSA isolates was investigated. The AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of the MIC in 24-well plates. The antimicrobial activity of these combinations against the tested isolate was significantly higher (p < 0.001) than that of each agent alone, and all bacteria were killed after 8 h of exposure to the combined therapy at all tested concentrations. Figure 1 shows the results for the combinations tested at 1/2, 1/4, 1/8 and 1/16 of the MIC of AgNPs (data for lower concentrations are not shown). Fig. 1: Antimicrobial activity of the AgNPs at different concentrations in combination with blue light against MRSA isolates. Cell suspensions were exposed to either the silver compound alone at sub-MICs (a 1/2, b 1/4, c 1/8, and d 1/16 MIC), blue light alone at 460 nm and 250 mW for 1 h, or the combination of both agents. Viable colony counts were recorded as mean ± SD of three independent experiments. AgNPs silver nanoparticles, CFU colony forming unit, MIC minimum inhibitory concentration, SD standard deviation.
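Killing in assays like these is conventionally summarised as the log10 CFU/mL reduction relative to an untreated control. A minimal sketch; the counts are hypothetical, and the clamping at a detection limit is an assumption of this example, not a procedure stated in the paper:

```python
import math

def log10_reduction(control_cfu_per_ml, treated_cfu_per_ml, detection_limit=1):
    """log10 drop in viable count; counts below the detection
    limit are clamped so the reduction stays finite."""
    treated = max(treated_cfu_per_ml, detection_limit)
    return math.log10(control_cfu_per_ml / treated)

# e.g. 10^8 CFU/mL in the drug-free control vs 10^5 CFU/mL after treatment
print(log10_reduction(1e8, 1e5))  # 3.0
```

A reduction of 3 log10 or more (99.9 % kill) is the usual threshold for calling an agent bactericidal.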
Combination of AgNPs with the antibiotics against MRSA: The efficiency of the double combination of the AgNPs and each of amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against the ten clinical MRSA isolates was assessed using the checkerboard method. The combination of AgNPs with amoxicillin resulted in synergistic activity against four isolates, whereas an indifferent response was observed in six isolates. Similar results were observed when the AgNPs were combined with azithromycin, clarithromycin or linezolid, where synergism was observed against 4, 3 and 3 isolates, respectively, with indifferent interactions prevailing for the remaining isolates. On the other hand, the combination of AgNPs with vancomycin was indifferent for all tested isolates (Fig. 2). Fig. 2: Double combination of AgNPs with amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combination was assessed by the checkerboard method and the response was evaluated by calculating the fractional inhibitory concentration (FIC) index: synergistic if the FIC index is 0.5 or less, indifferent if more than 0.5 and less than four, and antagonistic if more than four. AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin.
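The checkerboard readout reduces to the FIC index: the sum, over the two agents, of each agent’s MIC in combination divided by its MIC alone, interpreted with the thresholds given in the Figure 2 legend. A sketch with hypothetical MIC values (the boundary at exactly 4 is treated as indifferent here, an assumption of this example):

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FIC index = FIC_A + FIC_B, each the ratio of the MIC in
    combination to the MIC of that agent alone."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic):
    # Thresholds as stated in the Fig. 2 legend
    if fic <= 0.5:
        return "synergistic"
    if fic <= 4:
        return "indifferent"
    return "antagonistic"

# Hypothetical example: AgNPs MIC 4 µg/mL alone, 0.25 in combination;
# amoxicillin MIC 64 µg/mL alone, 8 in combination
fic = fic_index(4, 64, 0.25, 8)
print(fic, interpret(fic))  # 0.1875 synergistic
```

Each well of the checkerboard plate corresponds to one (mic_a_combo, mic_b_combo) pair; the lowest FIC index over the inhibitory wells is usually reported.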
The combination was assessed by the checkerboard method and the response was evaluated by calculation of the fraction inhibitory index (FIC) as follow: synergistic if the FIC index is 0.5 or less, indifference if the FIC index more than 0.5 and less than four, and antagonistic if the FIC index more than four. AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin Triple combination of AgNPs, blue light, and the antibiotics against MRSA isolates The effectiveness of the AgNPs in combination with blue light and amoxicillin, linezolid, azithromycin, or clarithromycin, was tested against selected isolates of MRSA. Two isolates from each combination of AgNPs and antibiotic were selected based on the synergistic results of the checkerboard assay. Vancomycin was excluded because its combination with the AgNPs was indifferent against all isolates. The AgNPs and the antibiotics were tested at the concentrations, which gave the best results in checkerboard assay. Isolates N8 and C41 were used to assess the triple combination of AgNPs at 1/16 MIC, the blue light, and azithromycin at 0.25 and 2 µg/mL, respectively. The triple combination resulted in significantly higher (p < 0.001) killing effect of isolate C41 with log10 CFU/mL reductions of 8.4, 3.2, compared to the drug-free samples and to the double combinations of the antibiotic with the AgNPs (Fig. 3a). The triple combinations against isolate N8 resulted in killing of all bacteria compared to all other treatments, which showed lower activity with log10 CFU/mL reduction range of 1.0–2.0 (Fig. 3b).Fig. 3Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. 
Fig. 3 Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of a synergistic response in the checkerboard assay. Based on the best results of the combination in the checkerboard assay, the concentrations of the two agents were used as follows: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation
Triple combinations that included clarithromycin were tested against isolates C51 and C41 at 1/8 and 1/512 of the MIC of the AgNPs, respectively, and 0.25 µg/mL of the antibiotic. The bactericidal activity of the three-agent combination was significantly higher (p < 0.001) than that attained with the other treatment combinations, with log10 CFU/mL reductions of 13.02 and 5.84 compared to the controls of the two isolates, respectively (Fig. 4a, b).
Fig. 4 Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of a synergistic response in the checkerboard assay. Based on the best results of the combination in the checkerboard assay, the concentrations of the two agents were used as follows: a Isolate C51: AgNPs at 1/8 of the MIC, and clarithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and clarithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation
The antimicrobial efficacy of linezolid at 0.25 and 8 µg/mL was evaluated against isolates C19 and N5, respectively, when combined with the silver compound at 1/2 of its MIC and blue light. A synergistic interaction was observed when the AgNPs were combined with the antibiotic, the blue light, or both, where the bacteria were completely killed following treatment with all combinations (Fig. 5a, b). The same effect was observed when amoxicillin at 1 and 0.25 µg/mL was combined with blue light and AgNPs at 1/32 and 1/256 of its MIC against isolates C12 and N8, respectively (data not shown).
Fig. 5 Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with the blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of a synergistic response in the checkerboard assay. Based on the best results of the combination in the checkerboard assay, the concentrations of the two agents were used as follows: a Isolate C19: AgNPs at 1/2 of the MIC, and linezolid at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and linezolid at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation
TEM examination of MRSA isolate (N8) after treatment with the AgNPs, blue light, and azithromycin alone or in triple combination
The antimicrobial efficiency of the AgNPs at 1/16 MIC, blue light for 1 h, and azithromycin at 0.25 µg/mL against isolate N8 was visualized by TEM when each agent was used alone or in triple combination (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of the silver particles inside the cells, concomitant with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with either blue light or azithromycin alone (Fig. 6c, d). On the other hand, bacterial cell lysis was more pronounced following treatment with the three agents in combination, where the cells were severely affected (Fig. 6e). Fig. 6 Visualization of the effect of the combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscopy (TEM).
The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light, and azithromycin at 0.25 µg/mL, alone and in triple combination, against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light, and azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with the combination of the three agents than in cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of the damage.
Conclusions
This study suggests a new strategy to combat serious infections caused by MDR bacteria. The triple combination of AgNPs, blue light, and antibiotics is a promising therapy for infections caused by MRSA. The triple therapy may even include antibiotics that have proven ineffective against MRSA on their own. This approach would be useful for facing fast-growing drug resistance amid the slow development of new antimicrobial agents, and for preserving last-resort antibiotics such as vancomycin. The study can be taken further by exploring the application of the triple therapy in patients infected with MRSA and other MDR bacteria, taking into consideration the best conditions for optimizing its synergistic effects and decreasing harmful side effects.
[ "Background", "Chemicals", "Antibiotics", "Microorganisms", "Oxacillin susceptibility", "Preparation of the AgNPs", "Susceptibility of the isolates to AgNPs and the antibiotics", "Double combination of AgNPs with blue light against MRSA", "Double combination of AgNPs with the antibiotics against MRSA", "Triple combination of AgNPs, blue light, and the antibiotics against MRSA", "Effects of triple combination of AgNPs, blue light, and azithromycin on MRSA isolate using transmission electron microscopy (TEM)", "Statistical analysis", "Susceptibility of the isolates to AgNPs and the antibiotics", "Combination of AgNPs with blue light against MRSA", "Combination of AgNPs with the antibiotics against MRSA", "Triple combination of AgNPs, blue light, and the antibiotics against MRSA isolates", "TEM examination of MRSA isolate (N8) after treatment with the AgNPs, blue light, and azithromycin alone or in triple combination" ]
[ "Treatment of infections caused by Staphylococcus aureus has become more difficult because of the emergence of multidrug-resistant isolates [1, 2]. Methicillin-resistant S. aureus (MRSA) presents problems for patients and healthcare facility staff whose immune system is compromised, or who have open access to their bodies via wounds, catheters or drips. The infection spectrum ranges from superficial skin infections to more serious diseases such as bronchopneumonia [3].\nFailure of antibiotics to manage infections caused by multidrug-resistant (MDR) pathogens, especially MRSA, has triggered much research effort to find alternative antimicrobial approaches with higher efficiency and less resistance developed by the microorganisms. Silver has long been known to exhibit antimicrobial activity against a wide range of microorganisms and has demonstrated considerable effectiveness in bactericidal applications [4], and silver nanoparticles (AgNPs) have been reconsidered as a potential alternative to conventional antimicrobial agents [5].\nIt has been estimated that 320 tons of nanosilver are used annually [6], and 30 % of all currently registered nano-products contain nanosilver [7]. The use of AgNPs alone or in combination with other antimicrobial agents has been suggested as a potential alternative to traditional treatment of infections caused by MDR pathogens [5]. AgNPs have been found to exhibit antibacterial activity against MRSA in vitro when tested alone or in combination with other antimicrobial agents [8–10].\nMetal nanostructures attract considerable attention due to their unique properties. AgNPs are a potential biocide that has been reported to be less toxic than silver ions [11].
AgNPs can be incorporated into antimicrobial applications such as bandages, surface coatings, medical equipment, food packaging, functional clothes and cosmetics [12].\nBlue light has recently been attracting increasing attention as a novel phototherapy-based antimicrobial agent with significant activity against a broad range of bacterial and fungal pathogens and a lower chance of resistance development compared to antibiotics [13, 14]. Further, blue light has been shown to be highly effective against MRSA and other common nosocomial bacterial pathogens [15, 16].\nThe present investigation aims to evaluate the effectiveness of the triple combination of AgNPs, blue light and the conventional antibiotics vancomycin, linezolid, amoxicillin, azithromycin, and clarithromycin against clinical isolates of MRSA. To the best of our knowledge, this is the first study to utilize this triple combination against pathogenic bacteria.", "Unless otherwise indicated, all chemicals were purchased from Sigma-Aldrich, USA.", "Amoxicillin (AMX), oxacillin (OXA) and vancomycin (VAN) were purchased from Sigma Chemical Co., St. Louis, Missouri, USA. Linezolid (LNZ) was provided by Pharmacia & Upjohn, Kalamazoo, MI, USA. Azithromycin (AZM) was provided by Pfizer, USA. Clarithromycin (CLR) was provided by Abbott Laboratories, USA.", "Ten clinical MRSA isolates were collected from The National Cancer Institute and from Abbasseya Hospital in Cairo, Egypt. The collected isolates were identified using conventional microbiological techniques.\nAccording to genotyping results, the isolates were sub-classified into 14 different pulsed-field patterns, 11 spa-types and 8 multiple-locus sequence typing (MLST) sequence types. The pulsed-field type A was the predominant pulsed-field type, which corresponded to spa-type t-037 and MLST sequence type ST-239, and belonged to clonal complex 8 (CC8) according to eBURST analysis (Table 1) (Unpublished data, Master Thesis, Moussa et al.
2010).
Table 1 Characteristics of the MRSA clinical isolates used in this study
Isolate designation | Spa-repeats | MLST   | Clonal complex (eBURST) | SCCmec
C51                 | t-186       | ST-88  | CC88                    | IIIa
C6                  | t-5711      | ST-22  | CC22                    | IVa
C43                 | t-037       | ST-239 | CC8                     | III
N11                 | t-363       | ST-239 | CC8                     | III
N5                  | t-037       | ST-239 | CC8                     | III
N8                  | t-037       | ST-239 | CC8                     | III
C34                 | t-037       | ST-239 | CC8                     | III
C19                 | t-037       | ST-239 | CC8                     | III
C12                 | t-037       | ST-239 | CC8                     | III
C41                 | t-1234      | ST-97  | CC97                    | III", "The isolates were inoculated onto Mueller–Hinton agar (Lab M, UK) plates supplemented with 4 % NaCl and 6 µg/mL oxacillin, followed by incubation at 37 °C for 24 h. Isolates that showed more than one colony were considered MRSA [17].", "The AgNPs used in this research are silver magnetite nanoparticles. To prepare the AgNPs, 0.127 g of silver nitrate was dissolved in 75 mL of distilled water, then 10 mL of an aqueous solution containing 0.08 g trisodium citrate and 0.2 g polyvinylpyrrolidone (PVP) were added. Ten milliliters of 0.1 M sodium borohydride were then added to the mixture. The solution turned dark brown, indicating the conversion of silver nitrate to silver nanoparticles. The nanoparticles were characterized spectrophotometrically, where a surface plasmon resonance peak appeared between 390 and 410 nm [18]. The particle size was also characterized by a Malvern Zetasizer Nano ZS (United Kingdom) and by a Tecnai G20, Super twin, double tilt (FEI) ultra-high resolution transmission electron microscope, which showed a uniform distribution of the nanoparticles, with an average size of 15–20 nm.", "The MIC of the AgNPs was determined by the broth microdilution method using cation-adjusted Mueller–Hinton broth (MHB) based on the guidelines of the Clinical and Laboratory Standards Institute (CLSI) [19]. The minimum bactericidal concentration (MBC) was determined by streaking 10 µL samples from bacterial cultures supplemented with AgNPs or the antibiotics at their MICs and higher concentrations onto the surfaces of Mueller–Hinton agar plates.
After a 24 h incubation period, the number of colony forming units per mL (CFU/mL) was determined and the MBC, defined as the concentration that kills 99.9 % of bacteria, was recorded.", "The AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of their MIC in 24-well plates. Briefly, bacterial suspensions were pipetted into the wells, which contained the AgNPs at the tested concentrations in MHB, to give an initial inoculum size of 1 × 10^5 CFU/mL and a final volume of 2 mL/well. The wells were exposed to a visible blue light source at 460 nm and 250 mW for 1 h using a Photon Emitting Diode (Photon Scientific, Egypt). Samples were taken after 0, 2, 4, 6, 8 and 10 h of inoculation, and viable bacterial counts were determined. Briefly, 10 µL aliquots were withdrawn and spread onto nutrient agar plates before being incubated at 37 °C for 24 h. The same procedure was repeated with nanoparticle-free and light-free wells. The experiment was performed in triplicate and the results were compared to drug-free samples.", "The efficiency of the double combination of AgNPs with amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical isolates of MRSA was assessed by the checkerboard method.
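The sub-MIC series used in these combination experiments (1/2 down to 1/128 of the MIC) is a plain two-fold dilution ladder; a minimal sketch, assuming the AgNPs MIC of 4 µg/mL reported for these isolates:

```python
def sub_mic_series(mic, fractions=(2, 4, 8, 16, 32, 64, 128)):
    """Concentrations at 1/2 ... 1/128 of the MIC (two-fold dilution ladder)."""
    return [mic / f for f in fractions]

# AgNPs MIC of 4 µg/mL:
print(sub_mic_series(4))  # [2.0, 1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```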
The combination response was evaluated by calculating the fractional inhibitory concentration index (∑FIC) as follows:

$$\sum \mathrm{FIC} = \frac{\text{MIC of drug A in combination}}{\text{MIC of drug A tested alone}} + \frac{\text{MIC of drug B in combination}}{\text{MIC of drug B tested alone}}$$

The interaction is defined as synergistic if the FIC index is 0.5 or less; indifferent if the FIC index is >0.5 and <4; and antagonistic if the FIC index is >4 [20].", "The purpose of this experiment was to test the effectiveness of AgNPs in combination with blue light and each of the following antibiotics, one at a time: amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin, against selected isolates of MRSA. Two isolates for the combination of AgNPs and each of the tested antibiotics were chosen on the basis of the synergistic response in the checkerboard assay.\nThe experiments were carried out in 24-well plates in which eight wells were designated as: drug- and light-free, blue light exposure, AgNPs alone, the antibiotic alone, blue light and AgNPs, blue light and the antibiotic, AgNPs and the antibiotic, and finally the triple combination of blue light, AgNPs and the antibiotic. The 24-well plates were used because the diameter of their wells fits the tip of the Photon Emitting Diode; the diode was placed at a distance of 5 mm above the surface of the bacterial culture in the well to ensure optimal exposure to the light and reduce light scattering.
Only the wells in the four corners of one plate were used in parallel treatments to avoid scattered light from adjacent wells, if any; all other wells were left empty.\nThe AgNPs and the antibiotics were tested at the concentrations that gave the best combination results in the checkerboard assay against the selected isolates. Bacterial suspensions were pipetted into the wells, which contained the AgNPs alone or in combination with the antibiotics at the test concentrations in MHB, to give an initial inoculum size of 1 × 10^5 CFU/mL and a final volume of 2 mL/well. The wells designated for light treatment were exposed to the light source emitting blue light at a wavelength of 460 nm for 1 h. The plates were then incubated at 37 °C for 24 h, after which viable cell counts were determined. The experiment was performed in triplicate, and the results obtained were compared to the drug- and blue light-free wells.", "Ten milliliters of MHB medium were inoculated with 1 × 10^5 CFU/mL of MRSA isolate N8 in 15 mL conical centrifuge tubes (Falcon, USA). The suspensions were then incubated at 37 °C for 4 h until the bacteria reached the logarithmic phase. The suspensions were then centrifuged at 2800×g for 10 min and the cell pellets were re-suspended in 10 mL of fresh MHB that was drug-free, or contained 0.25 µg/mL (1/16 MIC) of AgNPs, 0.25 µg/mL of azithromycin, or both agents. Two-milliliter aliquots of the suspension were transferred to 24-well plates. The plates were incubated at room temperature, during which the blue light wells were exposed to the light at 460 nm for 1 h. One-milliliter samples were then taken and prepared for TEM as previously described [21]. Briefly, the samples were centrifuged, and the bacterial pellets were fixed in 1 mL of 3 % glutaraldehyde for 2 h and then centrifuged and washed with 7.2 % phosphate buffer. A secondary fixative, osmium tetroxide, was then added to the pellets and incubated for 1 h before being washed with phosphate-buffered saline.
The samples were then subjected to a series of dehydration steps using increasing concentrations of ethanol (50–95 %). During each step, the samples were left for 10 min, and then put in absolute ethanol for 20 min. The samples were then embedded in resin blocks that were subsequently cut into semi-thin and then ultra-thin sections, and finally stained with uranyl acetate and lead citrate before being examined by TEM (JEOL JEM-1400). The results were compared to drug- and light-free control experiments.", "The statistical analysis of the data was done using GraphPad Prism (version 5.0) software. One-way and two-way analyses of variance (ANOVA) were used to test the significance among the different treatment groups, and a 5 % error level was accepted in the statistics. Error bars in the graphical presentation of data express the standard deviation of the means between samples.", "The MIC of the AgNPs was found to be 4 µg/mL with an MBC range of 8–16 µg/mL, and the MBC90 (the minimum bactericidal concentration required to kill 99.9 % of bacteria in 90 % of the isolates) was 8 µg/mL (Table 2). Vancomycin was the only antibiotic that showed activity against the tested isolates, with MIC90 (the minimum inhibitory concentration required to inhibit the growth of 90 % of the isolates) and MBC90 values of 2 and 8 µg/mL, respectively (Table 2). The isolates were resistant to linezolid with an MIC90 of 32 µg/mL, and to amoxicillin, azithromycin and clarithromycin with MIC90 >64 µg/mL.
Table 2 Susceptibility of the tested isolates to AgNPs and the antibiotics (concentrations in µg/mL)a
Antimicrobial agent | MIC90 | MBC90
AgNPs               | 4     | 8
Amoxicillin         | >64   | >64
Azithromycin        | >64   | >64
Clarithromycin      | >64   | >64
Linezolid           | 32    | >64
Vancomycin          | 2     | 8
aMIC90: The minimum inhibitory concentration of the antibiotic required to inhibit the growth of 90 % of the isolates.
MBC90: The minimum bactericidal concentration of the antibiotic required to kill 99.9 % of bacteria in 90 % of the isolates.", "The antimicrobial activity of AgNPs in combination with blue light against one of the MRSA isolates was investigated. The AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of their MIC in 24-well plates. The antimicrobial activity of these combinations against the tested isolate was significantly higher (p < 0.001) than that of each agent alone. All bacteria were killed after 8 h of exposure to the combined therapy at all tested concentrations. Figure 1 shows the results for the combinations tested at 1/2, 1/4, 1/8, and 1/16 of the MIC of AgNPs (data for lower concentrations are not shown). Fig. 1 Antimicrobial activity of the AgNPs at different concentrations in combination with blue light against MRSA isolates. Cell suspensions were exposed to either the silver compound alone at sub-MICs (a 1/2, b 1/4, c 1/8, and d 1/16 MIC), or blue light alone at 460 nm and 250 mW for 1 h, or the combination of both agents. Viable colony counts were recorded as mean ± SD of three independent experiments. AgNPs silver nanoparticles, CFU colony forming unit, MIC minimum inhibitory concentration, SD standard deviation
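The MIC90 values reported above are read from the distribution of per-isolate MICs; a minimal sketch of the usual "lowest concentration covering at least 90 % of isolates" reading, with hypothetical MIC values rather than this study's raw data:

```python
import math

def mic90(mics):
    """Lowest concentration inhibiting the growth of at least 90 % of isolates."""
    ordered = sorted(mics)
    idx = math.ceil(0.9 * len(ordered)) - 1  # 0-based position of the 90th percentile isolate
    return ordered[idx]

# Hypothetical MICs (µg/mL) for ten isolates:
mics = [1, 1, 2, 2, 2, 2, 2, 2, 2, 8]
print(mic90(mics))  # 2
```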
AgNPs silver nanoparticles, CFU colony forming unit, MIC minimum inhibitory concentration, SD standard deviation", "The efficiency of the double combination of the AgNPs and each of amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin, against the ten clinical MRSA isolates was assessed using the checkerboard method. The combination of AgNPs with amoxicillin resulted in synergistic activity against four isolates whereas indifference response was observed in six isolates. Similar results were observed when the AgNPs were combined with azithromycin, clarithromycin or linezolid, where synergism was observed against 4, 3 and 3 isolates, respectively, whereas indifferent interaction prevailed for the remaining isolates. On the other hand, combination of AgNPs with vancomycin was indifferent for all tested isolates (Fig. 2).Fig. 2Double combination of AgNPs with the amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combination was assessed by the checkerboard method and the response was evaluated by calculation of the fraction inhibitory index (FIC) as follow: synergistic if the FIC index is 0.5 or less, indifference if the FIC index more than 0.5 and less than four, and antagonistic if the FIC index more than four. AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin\nDouble combination of AgNPs with the amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combination was assessed by the checkerboard method and the response was evaluated by calculation of the fraction inhibitory index (FIC) as follow: synergistic if the FIC index is 0.5 or less, indifference if the FIC index more than 0.5 and less than four, and antagonistic if the FIC index more than four. 
AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin", "The effectiveness of the AgNPs in combination with blue light and amoxicillin, linezolid, azithromycin, or clarithromycin, was tested against selected isolates of MRSA. Two isolates from each combination of AgNPs and antibiotic were selected based on the synergistic results of the checkerboard assay. Vancomycin was excluded because its combination with the AgNPs was indifferent against all isolates.\nThe AgNPs and the antibiotics were tested at the concentrations, which gave the best results in checkerboard assay. Isolates N8 and C41 were used to assess the triple combination of AgNPs at 1/16 MIC, the blue light, and azithromycin at 0.25 and 2 µg/mL, respectively. The triple combination resulted in significantly higher (p < 0.001) killing effect of isolate C41 with log10 CFU/mL reductions of 8.4, 3.2, compared to the drug-free samples and to the double combinations of the antibiotic with the AgNPs (Fig. 3a). The triple combinations against isolate N8 resulted in killing of all bacteria compared to all other treatments, which showed lower activity with log10 CFU/mL reduction range of 1.0–2.0 (Fig. 3b).Fig. 3Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation\nCombination of AgNPs, blue light, and azithromycin against two isolates of MRSA. 
The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation\nTriple combinations that included clarithromycin were tested against isolates C51 and C41, at 1/8 and 1/512 of the MIC, respectively, of the AgNPs and 0.25 µg/mL of the antibiotic. The bactericidal activity of the three-agent combination was significantly higher (p < 0.001) than that attained with other treatment combinations with log10 CFU/mL reduction of 13.02 and 5.84 compared to the control of the two isolates, respectively (Fig. 4a, b).Fig. 4Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C51: AgNPs at 1/8 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation\nCombination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. 
Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C51: AgNPs at 1/8 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation\nThe antimicrobial efficacy of linezolid at 0.25 and 8 µg/mL was evaluated against isolate C19 and N5, respectively, when combined with the silver compound at its 1/2 MIC and blue light. Synergistic interaction was observed when the AgNPs were combined with either the antibiotic or the blue light or with both of them, where the bacteria were completely killed following treatment with all combinations (Fig. 5a, b). The same effect was observed when amoxicillin at 1 and 0.25 µg/mL was combined with blue light and AgNPs at 1/32 and 1/256 of its MIC against isolates C12 and N8, respectively (data not shown).Fig. 5Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with the blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C19: AgNPs at 1/2 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and azithromycin at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation\nCombination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with the blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. 
Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C19: AgNPs at 1/2 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and azithromycin at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation", "The antimicrobial efficiency of the AgNPs at 1/16 MIC, blue light for 1 h and azithromycin at 0.25 µg/mL against isolate N8 was visualized by TEM when each of them was used alone or in triple combination (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of the silver particles inside the cells concomitant with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with either blue light or azithromycin alone (Fig. 6c, d). On the other hand, bacterial cell lysis was more pronounced following treatment with the three agents in combination, where the cells were severely affected (Fig. 6e).Fig. 6Visualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. 
Small arrows indicate the location of the AgNPs and the sites of the damage\nVisualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of the damage" ]
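The FIC-index rule quoted in the figure legends (synergy at 0.5 or less, indifference between 0.5 and 4, antagonism above 4) can be sketched as a small helper. This is a minimal illustration of the published cut-offs, not the authors' analysis code, and the MIC values in the usage line are illustrative rather than the study's raw data:

```python
def fic_index(mic_a_comb, mic_a_alone, mic_b_comb, mic_b_alone):
    """Sum of fractional inhibitory concentrations (sum-FIC) for drugs A and B."""
    return mic_a_comb / mic_a_alone + mic_b_comb / mic_b_alone

def interpret(fic):
    """Classify a checkerboard result by the FIC-index cut-offs used in the study."""
    if fic <= 0.5:
        return "synergistic"
    if fic > 4:
        return "antagonistic"
    return "indifferent"

# Illustrative example: each drug active at a quarter of its stand-alone MIC
verdict = interpret(fic_index(1, 4, 1, 4))  # sum-FIC = 0.25 + 0.25 = 0.5
```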
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Chemicals", "Antibiotics", "Microorganisms", "Oxacillin susceptibility", "Preparation of the AgNPs", "Susceptibility of the isolates to AgNPs and the antibiotics", "Double combination of AgNPs with blue light against MRSA", "Double combination of AgNPs with the antibiotics against MRSA", "Triple combination of AgNPs, blue light, and the antibiotics against MRSA", "Effects of triple combination of AgNPs, blue light, and azithromycin on MRSA isolate using transmission electron microscopy (TEM)", "Statistical analysis", "Results", "Susceptibility of the isolates to AgNPs and the antibiotics", "Combination of AgNPs with blue light against MRSA", "Combination of AgNPs with the antibiotics against MRSA", "Triple combination of AgNPs, blue light, and the antibiotics against MRSA isolates", "TEM examination of MRSA isolate (N8) after treatment with the AgNPs, blue light, and azithromycin alone or in triple combination", "Discussion", "Conclusions" ]
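The Results above report antibacterial effects as log10 CFU/mL reductions relative to the drug- and light-free control, with viable counts summarized as mean ± SD of triplicate experiments. A minimal sketch of that bookkeeping, using made-up counts rather than the study's raw data:

```python
import math
from statistics import mean, stdev

def log_reduction(control_cfu, treated_cfu):
    """log10 CFU/mL reduction of a treated sample relative to the untreated control."""
    return math.log10(control_cfu) - math.log10(treated_cfu)

# Triplicate viable counts (CFU/mL) -- illustrative numbers only
control = [1.2e8, 9.5e7, 1.1e8]
treated = [3.0e4, 2.4e4, 2.7e4]

reductions = [log_reduction(c, t) for c, t in zip(control, treated)]
summary = (mean(reductions), stdev(reductions))  # mean and SD across replicates
```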
[ "Treatment of infections caused by Staphylococcus aureus has become more difficult because of the emergence of multidrug-resistant isolates [1, 2]. Methicillin-resistant S. aureus (MRSA) presents problems for patients and healthcare-facility staff whose immune systems are compromised or who have open access to their bodies via wounds, catheters or drips. The infection spectrum ranges from superficial skin infections to more serious diseases such as bronchopneumonia [3].
The failure of antibiotics to manage infections caused by multidrug-resistant (MDR) pathogens, especially MRSA, has triggered extensive research into alternative antimicrobial approaches with higher efficiency and less resistance development by the microorganisms. Silver has long been known to exhibit antimicrobial activity against a wide range of microorganisms and has demonstrated considerable effectiveness in bactericidal applications [4], and silver nanoparticles (AgNPs) have been reconsidered as a potential alternative to conventional antimicrobial agents [5].
It has been estimated that 320 tons of nanosilver are used annually [6], with 30 % of all currently registered nano-products containing nanosilver [7]. The use of AgNPs alone or in combination with other antimicrobial agents has been suggested as a potential alternative to traditional treatment of infections caused by MDR pathogens [5]. AgNPs were found to exhibit antibacterial activity against MRSA in vitro when tested alone or in combination with other antimicrobial agents [8–10].
Metal nanostructures attract a lot of attention due to their unique properties. AgNPs are a potential biocide that has been reported to be less toxic than silver ions [11].
AgNPs can be incorporated into antimicrobial applications such as bandages, surface coatings, medical equipment, food packaging, functional clothes and cosmetics [12].
Blue light has recently been attracting increasing attention as a novel phototherapy-based antimicrobial agent with significant activity against a broad range of bacterial and fungal pathogens and less chance of resistance development compared to antibiotics [13, 14]. Further, blue light has been shown to be highly effective against MRSA and other common nosocomial bacterial pathogens [15, 16].
The present investigation aims to evaluate the effectiveness of the triple combination of AgNPs, blue light and the conventional antibiotics vancomycin, linezolid, amoxicillin, azithromycin, and clarithromycin against clinical isolates of MRSA. To the best of our knowledge, this is the first study to utilize this triple combination against pathogenic bacteria.", " Chemicals Unless otherwise indicated, all chemicals were purchased from Sigma-Aldrich, USA.
 Antibiotics Amoxicillin (AMX), oxacillin (OXA) and vancomycin (VAN) were purchased from Sigma Chemical Co., St. Louis, Missouri, USA. Linezolid (LNZ) was provided by Pharmacia & Upjohn, Kalamazoo, MI, USA. Azithromycin (AZM) was provided by Pfizer, USA. Clarithromycin (CLR) was provided by Abbott Laboratories, USA.
 Microorganisms Ten clinical MRSA isolates were collected from The National Cancer Institute and from Abbasseya Hospital in Cairo, Egypt. The collected isolates were identified using conventional microbiological techniques.
According to genotyping results, the isolates were sub-classified into 14 different pulsed-field patterns, 11 spa-types and 8 multiple locus sequence typing (MLST) sequence types. Pulsed-field type A was the predominant type; it corresponded to spa-type t-037 and MLST sequence type ST-239, and belonged to clonal complex 8 (CC8) according to eBURST analysis (Table 1) (unpublished data, Master Thesis, Moussa et al. 2010).
Table 1 Characteristics of the MRSA clinical isolates used in this study
Isolate designation  Spa-repeats  MLST    Clonal complex (eBURST)  SCCmec
C51                  t-186        ST-88   CC88                     IIIa
C6                   t-5711       ST-22   CC22                     IVa
C43                  t-037        ST-239  CC8                      III
N11                  t-363        ST-239  CC8                      III
N5                   t-037        ST-239  CC8                      III
N8                   t-037        ST-239  CC8                      III
C34                  t-037        ST-239  CC8                      III
C19                  t-037        ST-239  CC8                      III
C12                  t-037        ST-239  CC8                      III
C41                  t-1234       ST-97   CC97                     III
 Oxacillin susceptibility The isolates were inoculated onto Mueller–Hinton agar (Lab M, UK) plates supplemented with 4 % NaCl and 6 µg/mL oxacillin, followed by incubation at 37 °C for 24 h. Isolates that showed more than one colony were considered MRSA [17].
 Preparation of the AgNPs The AgNPs used in this research are silver magnetite nanoparticles. To prepare the AgNPs, 0.127 g of silver nitrate was dissolved in 75 mL of distilled water, then 10 mL of an aqueous solution containing 0.08 g trisodium citrate and 0.2 g polyvinylpyrrolidone (PVP) were added. Ten milliliters of 0.1 M sodium borohydride were then added to the mixture. The solution turned dark brown, indicating the conversion of silver nitrate to silver nanoparticles. The nanoparticles were characterized spectrophotometrically, where a surface plasmon resonance peak appeared between 390 and 410 nm [18]. The particle size was also characterized by a Malvern Zetasizer Nano ZS (United Kingdom) and by a Tecnai G20 Super twin, double tilt (FEI) ultra-high resolution transmission electron microscope, which showed a uniform distribution of the nanoparticles with an average size of 15–20 nm.
 Susceptibility of the isolates to AgNPs and the antibiotics The MIC of the AgNPs was determined by the broth microdilution method using cation-adjusted Mueller–Hinton broth (MHB) based on the guidelines of the Clinical and Laboratory Standards Institute (CLSI) [19]. The minimum bactericidal concentration (MBC) was determined by streaking 10 µL samples from bacterial cultures supplemented with AgNPs or the antibiotics, at their MICs and higher concentrations, onto the surfaces of Mueller–Hinton agar plates. After a 24 h incubation period, the number of colony forming units per mL (CFU/mL) was determined and the MBC, defined as the concentration that kills 99.9 % of bacteria, was recorded.
 Double combination of AgNPs with blue light against MRSA AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of the MIC in 24-well plates. Briefly, bacterial suspensions were pipetted into the wells, which contained the AgNPs at the tested concentrations in MHB, to give an initial inoculum size of 1 × 10^5 CFU/mL and a final volume of 2 mL/well. The wells were exposed to a visible blue light source at 460 nm and 250 mW for 1 h using a Photon Emitting Diode (Photon Scientific, Egypt). Samples were taken after 0, 2, 4, 6, 8 and 10 h of inoculation, and viable bacterial counts were determined. Briefly, 10 µL aliquots were withdrawn and spread onto nutrient agar plates before being incubated at 37 °C for 24 h. The same procedure was repeated with nanoparticle-free and light-free wells. The experiment was performed in triplicate and the results were compared to drug-free samples.
 Double combination of AgNPs with the antibiotics against MRSA The efficiency of the double combination of AgNPs and amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical isolates of MRSA was assessed by the checkerboard method. The combination response was evaluated by calculating the fractional inhibitory concentration index (∑FIC) as follows:
∑FIC = (MIC of drug A in combination)/(MIC of drug A tested alone) + (MIC of drug B in combination)/(MIC of drug B tested alone)
The interaction is defined as synergistic if the FIC index is 0.5 or less; indifferent, if the FIC index is >0.5 and <4; and antagonistic if the FIC index is >4 [20].
 Triple combination of AgNPs, blue light, and the antibiotics against MRSA The purpose of this experiment was to test the effectiveness of AgNPs in combination with blue light and each of the following antibiotics, one at a time: amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin, against selected isolates of MRSA. Two isolates per combination of AgNPs and antibiotic were chosen on the basis of the synergistic response in the checkerboard assay.
The experiments were carried out in 24 multi-well plates, where eight wells were designated as: drug- and light-free, blue light exposure, AgNPs alone, the antibiotic alone, blue light and AgNPs, blue light and the antibiotic, AgNPs and the antibiotic, and finally the triple combination of blue light, AgNPs and the antibiotic. The 24 multi-well plates were used because the diameter of their wells fits the tip of the Photon Emitting Diode; the diode was placed at a distance of 5 mm above the surface of the bacterial culture in the well to ensure optimal exposure to the light and reduce light scattering. Only the wells in the four corners of one plate were used in parallel treatments to avoid any scattered light from adjacent wells; all other wells were left empty.
The AgNPs and the antibiotics were tested at the concentrations that resulted in the best combination in the checkerboard assay against the selected isolates. Bacterial suspensions were pipetted into the wells, which contained the AgNPs alone or in combination with the antibiotics at the test concentrations in MHB, to give an initial inoculum size of 1 × 10^5 CFU/mL and a final volume of 2 mL/well. The wells designated for light treatment were exposed to the light source emitting blue light at a wavelength of 460 nm for 1 h. The plates were then incubated at 37 °C for 24 h, after which viable cell counts were determined. The experiment was performed in triplicate, and the results obtained were compared to the drug- and blue light-free wells.
 Effects of triple combination of AgNPs, blue light, and azithromycin on MRSA isolate using transmission electron microscopy (TEM) Ten milliliters of MHB medium were inoculated with 1 × 10^5 CFU/mL of MRSA isolate N8 in 15 mL conical centrifuge tubes (Falcon, USA). The suspensions were incubated at 37 °C for 4 h until the bacteria reached the logarithmic phase. The suspensions were then centrifuged at 2800×g for 10 min and the cell pellets were re-suspended in 10 mL of fresh MHB that was drug-free, or contained 0.25 µg/mL (1/16 MIC) of AgNPs, 0.25 µg/mL of azithromycin, or both agents. Two-milliliter aliquots of the suspensions were transferred to 24 multi-well plates. The plates were incubated at room temperature, during which the blue light wells were exposed to the light at 460 nm for 1 h. One-milliliter samples were then taken and prepared for TEM as previously described [21]. Briefly, the samples were centrifuged, and the bacterial pellets were fixed in 1 mL of 3 % glutaraldehyde for 2 h and then centrifuged and washed with 7.2 % phosphate buffer. A secondary fixative, osmium tetroxide, was then added to the pellets and incubated for 1 h before being washed with phosphate-buffered saline. The samples were then dehydrated through a graded ethanol series (50–95 %), remaining in each concentration for 10 min, and then placed in absolute ethanol for 20 min. The samples were then embedded in resin blocks, which were cut into semi-thin and then ultra-thin sections and finally stained with uranyl acetate and lead citrate before being examined by TEM (JEOL JEM-1400). The results were compared to drug- and light-free control experiments.
 Statistical analysis The statistical analysis of the data was done using GraphPad Prism (version 5.0) software. One-way and two-way analysis of variance (ANOVA) were used to test the significance of differences among the treatment groups, with significance accepted at the 5 % level (p < 0.05). Error bars in the graphical presentation of the data express the standard deviation of the means between samples.", "Unless otherwise indicated, all chemicals were purchased from Sigma-Aldrich, USA.", "Amoxicillin (AMX), oxacillin (OXA) and vancomycin (VAN) were purchased from Sigma Chemical Co., St. Louis, Missouri, USA. Linezolid (LNZ) was provided by Pharmacia & Upjohn, Kalamazoo, MI, USA. Azithromycin (AZM) was provided by Pfizer, USA. Clarithromycin (CLR) was provided by Abbott Laboratories, USA.", "Ten clinical MRSA isolates were collected from The National Cancer Institute and from Abbasseya Hospital in Cairo, Egypt.
The collected isolates were identified using conventional microbiological techniques.

According to the genotyping results, the isolates were sub-classified into 14 different pulsed-field patterns, 11 spa-types and 8 multilocus sequence typing (MLST) sequence types. Pulsed-field type A was the predominant pulsed-field type; it corresponded to spa-type t-037 and MLST sequence type ST-239, and belonged to clonal complex 8 (CC8) according to eBURST analysis (Table 1) (unpublished data, Master's thesis, Moussa et al. 2010).

Table 1 Characteristics of the MRSA clinical isolates used in this study

Isolate designation   Spa-repeats   MLST     Clonal complex (eBURST)   SCCmec
C51                   t-186         ST-88    CC88                      IIIa
C6                    t-5711        ST-22    CC22                      IVa
C43                   t-037         ST-239   CC8                       III
N11                   t-363         ST-239   CC8                       III
N5                    t-037         ST-239   CC8                       III
N8                    t-037         ST-239   CC8                       III
C34                   t-037         ST-239   CC8                       III
C19                   t-037         ST-239   CC8                       III
C12                   t-037         ST-239   CC8                       III
C41                   t-1234        ST-97    CC97                      III

The isolates were inoculated onto Mueller–Hinton agar (Lab M, UK) plates supplemented with 4 % NaCl and 6 µg/mL oxacillin, followed by incubation at 37 °C for 24 h. Isolates that showed more than one colony were considered MRSA [17].

The AgNPs used for the purpose of this research are silver magnetite nanoparticles. To prepare the AgNPs, 0.127 g of silver nitrate was dissolved in 75 mL of distilled water, then 10 mL of an aqueous solution containing 0.08 g trisodium citrate and 0.2 g polyvinylpyrrolidone (PVP) were added. Ten milliliters of 0.1 M sodium borohydride were then added to the mixture. The solution turned dark brown, indicating the conversion of silver nitrate to silver nanoparticles. The nanoparticles were characterized spectrophotometrically, where a surface plasmon resonance peak appeared between 390 and 410 nm [18].
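For illustration only (not part of the study's workflow), locating such a surface plasmon resonance peak in a recorded UV–Vis spectrum amounts to finding the absorbance maximum within the expected window; the spectrum below is synthetic, with an invented Gaussian band centred at 400 nm:

```python
import math

# Invented UV-Vis scan: (wavelength nm, absorbance) pairs with a synthetic
# plasmon band centred at 400 nm; real spectra come from the spectrophotometer.
spectrum = [(wl, 0.9 * math.exp(-((wl - 400) / 35.0) ** 2) + 0.05)
            for wl in range(300, 601, 5)]

def spr_peak(spectrum, lo=350, hi=500):
    """Wavelength of maximum absorbance within the search window [lo, hi] nm."""
    window = [(wl, ab) for wl, ab in spectrum if lo <= wl <= hi]
    return max(window, key=lambda p: p[1])[0]

print(spr_peak(spectrum))  # 400
```

A peak falling in the 390–410 nm window is consistent with the characterization reported above.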
The particle size was also characterized by a Malvern Zetasizer Nano ZS (United Kingdom) and by a Tecnai G20 Super Twin double-tilt (FEI) ultra-high-resolution transmission electron microscope, which showed a uniform distribution of the nanoparticles with an average size of 15–20 nm.

The MIC of the AgNPs was determined by the broth microdilution method using cation-adjusted Mueller–Hinton broth (MHB), based on the guidelines of the Clinical and Laboratory Standards Institute (CLSI) [19]. The minimum bactericidal concentration (MBC) was determined by streaking 10 µL samples from bacterial cultures supplemented with AgNPs or the antibiotics at their MICs and higher concentrations onto the surfaces of Mueller–Hinton agar plates. After a 24 h incubation period, the number of colony-forming units per mL (CFU/mL) was determined, and the MBC, defined as the concentration that kills 99.9 % of the bacteria, was recorded.

AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of their MIC in 24-well plates. Briefly, bacterial suspensions were pipetted into the wells, which contained the AgNPs at the tested concentrations in MHB, to give an initial inoculum size of 1 × 10^5 CFU/mL and a final volume of 2 mL/well. The wells were exposed to a visible blue light source at 460 nm and 250 mW for 1 h using a Photon Emitting Diode (Photon Scientific, Egypt). Samples were taken after 0, 2, 4, 6, 8 and 10 h of inoculation, and viable bacterial counts were determined: 10 µL aliquots were withdrawn and spread onto nutrient agar plates before being incubated at 37 °C for 24 h. The same procedure was repeated with nanoparticle-free and light-free wells. The experiment was performed in triplicate, and the results were compared to drug-free samples.

The efficiency of the double combination of AgNPs and amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical isolates of MRSA was assessed by the checkerboard method.
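As a schematic aside, the broth-microdilution MIC and MBC readouts described earlier in this section reduce to two simple rules: the MIC is the lowest concentration preventing visible growth, and the MBC is the lowest concentration killing at least 99.9 % of the inoculum. The growth flags and survivor counts below are invented:

```python
def mic(dilutions):
    """MIC: lowest tested concentration with no visible growth.
    dilutions: {concentration in ug/mL: grew (bool)}."""
    inhibitory = [conc for conc, grew in dilutions.items() if not grew]
    return min(inhibitory) if inhibitory else None

def mbc(initial_cfu_per_ml, survivors):
    """MBC: lowest tested concentration killing >= 99.9 % of the inoculum.
    survivors: {concentration in ug/mL: CFU/mL recovered on subculture}."""
    killed = [conc for conc, cfu in survivors.items()
              if cfu <= initial_cfu_per_ml * 0.001]
    return min(killed) if killed else None

# Invented two-fold dilution series for one isolate (inoculum 1e5 CFU/mL):
growth = {0.5: True, 1: True, 2: True, 4: False, 8: False, 16: False}
counts = {4: 5e3, 8: 50.0, 16: 10.0}
print(mic(growth))       # 4
print(mbc(1e5, counts))  # 8
```

With these invented inputs the sketch reproduces endpoints of the same form as those reported below (MIC 4 µg/mL, MBC 8 µg/mL).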
The combination response was evaluated by calculating the fractional inhibitory concentration (FIC) index as follows:

∑FIC = (MIC of drug A in combination / MIC of drug A tested alone) + (MIC of drug B in combination / MIC of drug B tested alone)

The interaction is defined as synergistic if the FIC index is 0.5 or less; indifferent if the FIC index is >0.5 and <4; and antagonistic if the FIC index is >4 [20].

The purpose of this experiment was to test the effectiveness of AgNPs in combination with blue light and each of the following antibiotics at a time: amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin, against selected isolates of MRSA. Two isolates for the combination of AgNPs and each of the tested antibiotics were chosen on the basis of the synergistic response in the checkerboard assay.

The experiments were carried out in 24-well plates, where eight wells were designated as: drug- and light-free; blue light alone; AgNPs alone; the antibiotic alone; blue light and AgNPs; blue light and the antibiotic; AgNPs and the antibiotic; and, finally, the triple combination of blue light, AgNPs and the antibiotic. The 24-well plates were used because the diameter of their wells fits the tip of the Photon Emitting Diode; the diode was placed at a distance of 5 mm above the surface of the bacterial culture in the well to ensure optimal exposure to the light and to reduce light scattering. Only the wells in the four corners of each plate were used in parallel treatments, to avoid any scattered light from adjacent wells; all other wells were left empty.

The AgNPs and the antibiotics were tested at the concentrations that gave the best combination responses in the checkerboard assay against the selected isolates. Bacterial suspensions were pipetted into the wells, which contained the AgNPs alone or in combination with the antibiotics at the test concentrations in MHB, to give an initial inoculum size of 1 × 10^5 CFU/mL and a final volume of 2 mL/well. The wells designated for light treatment were exposed to the light source emitting blue light at a wavelength of 460 nm for 1 h. The plates were then incubated at 37 °C for 24 h, after which viable cell counts were determined. The experiment was performed in triplicate, and the results obtained were compared to the drug- and blue light-free wells.

Susceptibility of the isolates to AgNPs and the antibiotics
The MIC of AgNPs was found to be 4 µg/mL, with an MBC range of 8–16 µg/mL and an MBC90 of 8 µg/mL (Table 2). Vancomycin was the only antibiotic that showed activity against the tested isolates, with MIC90 and MBC90 values of 2 and 8 µg/mL, respectively (Table 2). The isolates were resistant to linezolid, with an MIC90 of 32 µg/mL, and to amoxicillin, azithromycin and clarithromycin, with MIC90 values >64 µg/mL.

Table 2 Susceptibility of the tested isolates to AgNPs and the antibiotics

Antimicrobial agent   Concentration (µg/mL)a
                      MIC90   MBC90
AgNPs                 4       8
Amoxicillin           >64     >64
Azithromycin          >64     >64
Clarithromycin        >64     >64
Linezolid             32      >64
Vancomycin            2       8

a MIC90: the minimum inhibitory concentration of the antibiotic required to inhibit the growth of 90 % of the isolates. MBC90: the minimum bactericidal concentration of the antibiotic required to kill 99.9 % of bacteria in 90 % of the isolates.
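By definition, an MIC90 such as those in Table 2 is the lowest tested concentration that inhibits at least 90 % of the isolates; the MBC90 is computed analogously from the bactericidal endpoints. A minimal sketch with invented per-isolate MICs:

```python
import math

def mic90(mic_values):
    """Smallest tested concentration inhibiting at least 90 % of isolates.
    mic_values: one MIC per isolate (ug/mL)."""
    ranked = sorted(mic_values)
    # Index of the value covering 90 % of isolates (1-based ceil, 0-based list).
    idx = math.ceil(0.9 * len(ranked)) - 1
    return ranked[idx]

# Invented MICs (ug/mL) for ten isolates against a hypothetical agent:
mics = [1, 2, 2, 2, 2, 2, 2, 2, 2, 4]
print(mic90(mics))  # 2
```

The same function applied to a list of per-isolate MBCs yields the MBC90.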
Combination of AgNPs with blue light against MRSA
The antimicrobial activity of AgNPs in combination with blue light against one of the MRSA isolates was investigated. The AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of their MIC in 24-well plates. The antimicrobial activity of these combinations against the tested isolate was significantly higher (p < 0.001) than that of each agent alone. All bacteria were killed after 8 h of exposure to the combined therapy at all tested concentrations. Figure 1 shows the results for the combinations tested at 1/2, 1/4, 1/8, and 1/16 of the MIC of AgNPs (data for lower concentrations are not shown).

Fig. 1 Antimicrobial activity of the AgNPs at different concentrations in combination with blue light against MRSA isolates. Cell suspensions were exposed to the silver compound alone at sub-MICs (a 1/2, b 1/4, c 1/8, and d 1/16 MIC), to blue light alone at 460 nm and 250 mW for 1 h, or to the combination of both agents. Viable colony counts are recorded as mean ± SD of three independent experiments. AgNPs silver nanoparticles, CFU colony forming unit, MIC minimum inhibitory concentration, SD standard deviation

Combination of AgNPs with the antibiotics against MRSA
The efficiency of the double combination of the AgNPs and each of amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical MRSA isolates was assessed using the checkerboard method. The combination of AgNPs with amoxicillin resulted in synergistic activity against four isolates, whereas an indifferent response was observed in six isolates.
Similar results were observed when the AgNPs were combined with azithromycin, clarithromycin or linezolid, where synergism was observed against 4, 3 and 3 isolates, respectively, whereas an indifferent interaction prevailed for the remaining isolates. In contrast, the combination of AgNPs with vancomycin was indifferent for all tested isolates (Fig. 2).

Fig. 2 Double combination of AgNPs with amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combinations were assessed by the checkerboard method, and the response was evaluated by calculation of the fractional inhibitory concentration (FIC) index, as follows: synergistic if the FIC index is 0.5 or less; indifferent if the FIC index is more than 0.5 and less than 4; and antagonistic if the FIC index is more than 4. AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin

Triple combination of AgNPs, blue light, and the antibiotics against MRSA isolates
The effectiveness of the AgNPs in combination with blue light and amoxicillin, linezolid, azithromycin, or clarithromycin was tested against selected isolates of MRSA. Two isolates for each combination of AgNPs and antibiotic were selected based on the synergistic results of the checkerboard assay.
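The synergy scoring used to select these isolates can be sketched as follows; this is an illustrative implementation of the ∑FIC criteria given in the Methods, and the example MICs are invented:

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Sum of fractional inhibitory concentrations for a two-drug checkerboard."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic):
    """Cut-offs used in this study [20]."""
    if fic <= 0.5:
        return "synergistic"
    if fic > 4:
        return "antagonistic"
    return "indifferent"

# Invented example: in combination, the AgNPs MIC drops from 4 to 0.5 ug/mL
# and the antibiotic MIC drops from 64 to 8 ug/mL.
fic = fic_index(0.5, 4, 8, 64)
print(fic, interpret(fic))  # 0.25 synergistic
```

Pairs scoring 0.5 or less, as in this invented example, are the ones carried forward into the triple-combination experiments.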
Vancomycin was excluded because its combination with the AgNPs was indifferent against all isolates.

The AgNPs and the antibiotics were tested at the concentrations that gave the best results in the checkerboard assay. Isolates N8 and C41 were used to assess the triple combination of AgNPs at 1/16 MIC, the blue light, and azithromycin at 0.25 and 2 µg/mL, respectively. The triple combination resulted in a significantly higher (p < 0.001) killing effect against isolate C41, with log10 CFU/mL reductions of 8.4 and 3.2 compared to the drug-free samples and to the double combination of the antibiotic with the AgNPs, respectively (Fig. 3a). The triple combination against isolate N8 resulted in killing of all bacteria, whereas all other treatments showed lower activity, with a log10 CFU/mL reduction range of 1.0–2.0 (Fig. 3b).

Fig. 3 Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with blue light and azithromycin was assessed against two isolates of MRSA, selected on the basis of their synergistic response in the checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were as follows: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation

Triple combinations that included clarithromycin were tested against isolates C51 and C41, with the AgNPs at 1/8 and 1/512 of their MIC, respectively, and 0.25 µg/mL of the antibiotic. The bactericidal activity of the three-agent combination was significantly higher (p < 0.001) than that attained with the other treatment combinations, with log10 CFU/mL reductions of 13.02 and 5.84 compared to the controls of the two isolates, respectively (Fig. 4a, b).

Fig. 4 Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with blue light and clarithromycin was assessed against two isolates of MRSA, selected on the basis of their synergistic response in the checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were as follows: a Isolate C51: AgNPs at 1/8 of the MIC, and clarithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and clarithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation

The antimicrobial efficacy of linezolid at 0.25 and 8 µg/mL was evaluated against isolates C19 and N5, respectively, when combined with the silver compound at 1/2 of its MIC and blue light. Synergistic interaction was observed when the AgNPs were combined with the antibiotic, the blue light, or both, where the bacteria were completely killed following treatment with all combinations (Fig. 5a, b). The same effect was observed when amoxicillin at 1 and 0.25 µg/mL was combined with blue light and AgNPs at 1/32 and 1/256 of their MIC against isolates C12 and N8, respectively (data not shown).

Fig. 5 Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with blue light and linezolid was assessed against two isolates of MRSA, selected on the basis of their synergistic response in the checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were as follows: a Isolate C19: AgNPs at 1/2 of the MIC, and linezolid at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and linezolid at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation

TEM examination of MRSA isolate N8 after treatment with the AgNPs, blue light, and azithromycin alone or in triple combination
The antimicrobial efficiency of the AgNPs at 1/16 MIC, blue light for 1 h, and azithromycin at 0.25 µg/mL against isolate N8 was visualized by TEM, with each agent used alone or in triple combination (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of the silver particles inside the cells, concomitant with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with either blue light or azithromycin alone (Fig. 6c, d). In contrast, bacterial cell lysis was more pronounced following treatment with the three agents in combination, where the cells were severely affected (Fig. 6e).

Fig. 6 Visualization of the effect of the combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscopy (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light, and azithromycin at 0.25 µg/mL, alone and in triple combination, against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of the MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with the combination of the three agents than in cells treated with each agent alone.
Small arrows indicate the location of the AgNPs and the sites of the damage\nVisualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of the damage\nThe antimicrobial efficiency of the AgNPs at 1/16 MIC, blue light for 1 h and azithromycin at 0.25 µg/mL against isolate N8 was visualized by TEM when each of them was used alone or in triple combination (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of the silver particles inside the cells concomitant with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with either blue light or azithromycin alone (Fig. 6c, d). On the other hand, bacterial cell lysis was more pronounced following treatment with the three agents in combination, where the cells were severely affected (Fig. 6e).Fig. 6Visualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. 
The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of the damage\nVisualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of the damage", "The MIC of AgNPs was found to be 4 µg/mL with MBC range of 8–16 µg/mL, and MBC90 (The minimum bactericidal concentration of the antibiotic required to kill 99.9 % of bacteria in 90 % of the isolates) was 8 µg/mL (Table 2). Vancomycin is the only antibiotic, which showed activity against the tested isolates with MIC90 (the minimum inhibitory concentration of the antibiotic required to inhibit the growth of 90 % of the isolates) and MBC90 values of 2 and 8 µg/mL, respectively (Table 2). 
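The MIC90 and MBC90 summary statistics just defined are simply the concentration covering 90 % of the per-isolate readings. A minimal sketch of that computation, using illustrative numbers rather than the study's raw per-isolate data:

```python
import math

def mic90(mic_values):
    """Lowest concentration inhibiting >= 90 % of isolates: sort the
    per-isolate MICs and take the value at the 90th-percentile rank."""
    ordered = sorted(mic_values)
    index = math.ceil(0.9 * len(ordered)) - 1  # 0-based rank
    return ordered[index]

# Hypothetical per-isolate MICs (µg/mL) for ten isolates; illustrative only.
vancomycin_mics = [0.5, 1, 1, 1, 2, 2, 2, 2, 2, 4]
print(mic90(vancomycin_mics))  # 2
```

The same function applies to MBC90 when fed per-isolate MBC readings instead of MICs.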
The isolates were resistant to linezolid with an MIC90 of 32 µg/mL, and to amoxicillin, azithromycin and clarithromycin with MIC90 values of >64 µg/mL.

Table 2 Susceptibility of the tested isolates to AgNPs and the antibiotics

Antimicrobial agent    MIC90 (µg/mL)a    MBC90 (µg/mL)a
AgNPs                  4                 8
Amoxicillin            >64               >64
Azithromycin           >64               >64
Clarithromycin         >64               >64
Linezolid              32                >64
Vancomycin             2                 8

a MIC90: the minimum inhibitory concentration of the antibiotic required to inhibit the growth of 90 % of the isolates. MBC90: the minimum bactericidal concentration of the antibiotic required to kill 99.9 % of bacteria in 90 % of the isolates

The antimicrobial activity of AgNPs in combination with blue light against one of the MRSA isolates was investigated. The AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of their MIC in 24-well plates. The antimicrobial activity of these combinations against the tested isolate was significantly higher (p < 0.001) than that of each agent alone. All bacteria were killed after 8 h of exposure to the combined therapy at all tested concentrations. Figure 1 shows the results for the combinations tested at 1/2, 1/4, 1/8, and 1/16 of the MIC of the AgNPs (data for lower concentrations are not shown).
Fig. 1 Antimicrobial activity of the AgNPs at different concentrations in combination with blue light against MRSA isolates. Cell suspensions were exposed to the silver compound alone at sub-MICs (a 1/2, b 1/4, c 1/8, and d 1/16 MIC), blue light alone at 460 nm and 250 mW for 1 h, or a combination of both agents. Viable colony counts are recorded as mean ± SD of three independent experiments.
AgNPs silver nanoparticles, CFU colony forming unit, MIC minimum inhibitory concentration, SD standard deviation

The efficiency of the double combination of the AgNPs with each of amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical MRSA isolates was assessed using the checkerboard method. The combination of AgNPs with amoxicillin was synergistic against four isolates, whereas an indifferent response was observed for the other six. Similar results were obtained when the AgNPs were combined with azithromycin, clarithromycin or linezolid, with synergism against 4, 3 and 3 isolates, respectively, and indifferent interactions for the remaining isolates. In contrast, the combination of AgNPs with vancomycin was indifferent against all tested isolates (Fig. 2).
Fig. 2 Double combination of AgNPs with amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combinations were assessed by the checkerboard method and the response was evaluated by calculating the fractional inhibitory concentration (FIC) index as follows: synergistic if the FIC index was 0.5 or less, indifferent if it was greater than 0.5 and less than four, and antagonistic if it was greater than four.
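The FIC-index interpretation just described can be sketched as follows; the numeric example uses illustrative values only (AgNPs MIC of 4 µg/mL from Table 2, an azithromycin MIC capped at 64 µg/mL since the measured value was >64), not a specific well from the study:

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration (FIC) index for one
    two-drug checkerboard combination."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic):
    """Cut-offs as used in this study: <= 0.5 synergy, > 4 antagonism,
    anything in between indifference."""
    if fic <= 0.5:
        return "synergistic"
    if fic > 4:
        return "antagonistic"
    return "indifferent"

# Illustrative: AgNPs effective at 1/16 of a 4 µg/mL MIC (0.25 µg/mL)
# together with azithromycin at 0.25 µg/mL (alone taken as 64 µg/mL).
fic = fic_index(4, 64, 0.25, 0.25)
print(round(fic, 3), interpret(fic))  # 0.066 synergistic
```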
AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin

The effectiveness of the AgNPs in combination with blue light and amoxicillin, linezolid, azithromycin, or clarithromycin was tested against selected MRSA isolates. Two isolates were chosen for each AgNPs–antibiotic combination based on the synergistic results of the checkerboard assay. Vancomycin was excluded because its combination with the AgNPs was indifferent against all isolates.
The AgNPs and the antibiotics were tested at the concentrations that gave the best results in the checkerboard assay. Isolates N8 and C41 were used to assess the triple combination of AgNPs at 1/16 MIC, blue light, and azithromycin at 0.25 and 2 µg/mL, respectively. Against isolate C41, the triple combination produced a significantly greater (p < 0.001) killing effect, with log10 CFU/mL reductions of 8.4 and 3.2 compared to the drug-free samples and to the double combination of the antibiotic with the AgNPs, respectively (Fig. 3a). Against isolate N8, the triple combination killed all bacteria, whereas all other treatments showed lower activity, with log10 CFU/mL reductions of 1.0–2.0 (Fig. 3b).
Fig. 3 Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with blue light and azithromycin against two isolates of MRSA was assessed.
The isolates were selected on the basis of their synergistic response in the checkerboard assay. Based on the best results of the checkerboard assay, the concentrations of the two agents were as follows: a Isolate C41: AgNPs at 1/16 of the MIC and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation

Triple combinations that included clarithromycin were tested against isolates C51 and C41, with the AgNPs at 1/8 and 1/512 of the MIC, respectively, and the antibiotic at 0.25 µg/mL. The bactericidal activity of the three-agent combination was significantly higher (p < 0.001) than that attained with the other treatment combinations, with log10 CFU/mL reductions of 13.02 and 5.84 compared to the controls of the two isolates, respectively (Fig. 4a, b).
Fig. 4 Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of their synergistic response in the checkerboard assay.
Based on the best results of the checkerboard assay, the concentrations of the two agents were as follows: a Isolate C51: AgNPs at 1/8 of the MIC and clarithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC and clarithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation

The antimicrobial efficacy of linezolid at 0.25 and 8 µg/mL was evaluated against isolates C19 and N5, respectively, when combined with the silver compound at 1/2 of its MIC and with blue light. Synergistic interaction was observed when the AgNPs were combined with the antibiotic, the blue light, or both, and the bacteria were completely killed by all combinations (Fig. 5a, b). The same effect was observed when amoxicillin at 1 and 0.25 µg/mL was combined with blue light and AgNPs at 1/32 and 1/256 of the MIC against isolates C12 and N8, respectively (data not shown).
Fig. 5 Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of their synergistic response in the checkerboard assay.
Based on the best results of the checkerboard assay, the concentrations of the two agents were as follows: a Isolate C19: AgNPs at 1/2 of the MIC and linezolid at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC and linezolid at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation

TEM examination of MRSA isolate N8 after treatment with the AgNPs, blue light, and azithromycin alone or in triple combination
The antimicrobial efficiency of the AgNPs at 1/16 MIC, blue light for 1 h, and azithromycin at 0.25 µg/mL against isolate N8, each used alone or in triple combination, was visualized by TEM (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of the silver particles inside the cells, concomitant with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with either blue light or azithromycin alone (Fig. 6c, d). Bacterial cell lysis was most pronounced following treatment with the three agents in combination, where the cells were severely affected (Fig. 6e).
Fig. 6 Visualization of the effect of the combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscopy (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light, and azithromycin at 0.25 µg/mL, alone and in triple combination, against isolate N8 was visualized by TEM at ×80,000.
The photos show the response of the bacteria to the following treatments: a drug-free and light-free control. b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light, and azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with the combination of the three agents than in cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of damage

Antibiotic resistance in isolates of S. aureus has become an alarming global problem that limits the availability of effective antimicrobial agents [22]. Antibiotic misuse, the failure of some patients to comply with their treatment regimens, and the high capability of bacteria to mutate are among the major factors contributing to the emergence of bacterial resistance. Antibiotic resistance leads to treatment failure in life-threatening bacterial infections and increases costs due to longer stays in healthcare settings [23, 24].
The use of non-conventional therapies, to which bacteria are unlikely to develop resistance, would be the best alternative. AgNPs are potential antimicrobial agents that can be considered an alternative to antibiotics for the treatment of infections caused by MDR bacteria [5]. AgNPs have been shown to possess strong, broad-spectrum antimicrobial activity due to a combined effect of their physical properties and the released free silver ions [25].
Methicillin-resistant S. aureus was used as a model in our study to assess the efficiency of the combination of AgNPs, blue light, and anti-staphylococcal antibiotics. The isolates had been collected from different hospital units to ensure the broadest possible representation of the Egyptian genotype population of S. aureus. We have previously found that members of CC8 are the prevailing MRSA clone in Egypt (Unpublished data, Master Thesis, Moussa et al. 2010). Infections caused by S. aureus are among the most frequent causes of both healthcare-associated and community-onset infections [26]. MRSA and coagulase-negative staphylococci are among the leading causes of nosocomial bloodstream infections in the USA [27]. Staphylococci cause biofilm-associated infections by forming biofilms on damaged tissues and indwelling vascular catheters [28–32].
Five antibiotics were selected from different conventional classes, including a beta-lactam (amoxicillin), macrolides (azithromycin and clarithromycin), an oxazolidinone (linezolid), and a glycopeptide (vancomycin). Based on the European Committee on Antimicrobial Susceptibility Testing (EUCAST) MIC breakpoint guidelines [33], all isolates were resistant to amoxicillin, azithromycin, clarithromycin, and linezolid, while they were susceptible to vancomycin (Table 2). With few exceptions, MRSA isolates are resistant to all beta-lactam antibiotics and commonly resistant to macrolides, with very rare resistance to glycopeptide antibiotics [33, 34].
Emergence of linezolid resistance was previously reported in 0.05 % of S. aureus infections [35]. Antibiotics known to be ineffective against MRSA were used in this study to assess the possibility of enhancing their antimicrobial activity with AgNPs and blue light. This approach would be useful for "recycling" antibiotics that have become useless against infections caused by MRSA, to counter fast-growing drug resistance amid the slow development of new antimicrobial agents, and to preserve last-resort antibiotics such as vancomycin.
Blue light has attracted increasing attention because of its intrinsic antimicrobial effect, which does not require exogenous photosensitizers as in photodynamic therapy (PDT), and because it is less damaging to mammalian cells than ultraviolet irradiation [13]. The biomedical applications of blue light at specific wavelengths and intensities against different pathogens have been reported earlier [36]. At 470 nm, blue light was found to be effective against MRSA strains associated with hospital-acquired and community-onset infections [37].
Double combinations of the AgNPs, at 1/2–1/128 of their MIC, with blue light were tested against selected MRSA isolates. The bactericidal activity of both agents was significantly enhanced (p < 0.001) when bacteria were treated with AgNPs with concurrent exposure to blue light for 1 h, compared to each agent alone (Fig. 1a–d). The combined therapy killed all bacteria after 8 h in all tested combinations, while the silver compound and the blue light were each less efficient in killing the organisms over the tested time. The mechanism of the antimicrobial effect of either AgNPs or blue light is still not fully understood. Several hypotheses have been suggested to explain these mechanisms. For example, it has been reported that AgNPs can damage bacterial cell membranes, leading to structural changes that render bacteria more permeable [38, 39].
AgNPs have unique optical, electrical, and thermal properties, with a high surface-area-to-volume ratio that maximizes interaction with bacterial surfaces and leads to higher antimicrobial activity [40]. The formation of free radicals by the silver nanoparticles is probably another mechanism that can lead to cell death [41, 42]. It has also been proposed that cationic silver is released from the nanoparticles when they are dissolved in water or when they penetrate the cells [5]. Silver ions bind to cellular membranes, proteins, and nucleic acids, causing structural changes and deformation of the bacterial cell [5]. They also deactivate many vital enzymes by interacting with thiol groups [31] and are involved in the generation of reactive oxygen species [32].
For blue light, the commonly accepted hypothesis is the production of highly cytotoxic reactive oxygen species (ROS), in a manner similar to PDT [43].
We have previously reported a similar synergistic interaction for the combination of AgNPs and blue light when both agents were tested against clinical isolates of Pseudomonas aeruginosa [44]. A possible mechanism of the observed synergy could be the transduction of captured blue-light energy by blue-light sensory proteins to the AgNPs, resulting in thermal destruction of the bacterial cells [45, 46].
Double combinations of the AgNPs with five conventional antibiotics against the ten MRSA clinical isolates were investigated using the checkerboard assay. Synergistic interactions were observed when the AgNPs were used with amoxicillin, azithromycin, clarithromycin or linezolid in 30–40 % of the combinations (Fig. 2). The remaining combinations were indifferent, with no antagonistic interaction observed. The combination of the AgNPs with vancomycin, on the other hand, was indifferent against all isolates. Synergistic interactions between AgNPs and conventional antibiotics against different pathogens have been reported previously. Ruden et al.
[47] found synergistic interaction when AgNPs were combined with polymyxin B against gram-negative bacteria. Combination of AgNPs with ampicillin, chloramphenicol, and kanamycin against gram-positive and gram-negative pathogenic bacteria was also found to be synergistic [48]. Similarly, Smekalova et al. [49] observed a synergistic effect when AgNPs were combined with penicillin G, gentamicin, and colistin against MDR bacteria.
Reversal of MRSA resistance to ineffective antibiotics through combination with AgNPs could be a novel strategy to combat infections caused by MDR pathogens. A biocompatible gold nanoparticle–amoxicillin complex was found to overcome the resistance of MRSA to the antibiotic [50]. The synergistic response of combinations of AgNPs and otherwise ineffective antibiotics is probably due to an increased concentration of antibiotic at the site of bacterium–antibiotic interaction and to facilitated binding of the antibiotic to the bacteria [51].
AgNPs combined with amoxicillin, azithromycin, clarithromycin or linezolid were tested against selected MRSA isolates with concurrent exposure to blue light for 1 h. The isolates were selected based on synergistic response, being the most affected by the corresponding double combinations in the checkerboard assay. The antimicrobial activity of the three agents was significantly (p < 0.001) enhanced in the triple combinations compared to single and double treatments with one or two of them. The bactericidal activity was more pronounced when azithromycin or clarithromycin was included in the triple therapy (Figs. 3, 4). The antimicrobial efficiency was also enhanced when linezolid (Fig.
5) or amoxicillin (data not shown) was included, although with the same bactericidal effect in the double and triple combinations.
The synergy observed in the triple therapy might be explained by the combined mechanisms of action of each agent alone and the enhanced outcomes of their double combinations.
TEM images of isolate N8 exposed to the triple therapy (AgNPs, blue light, and azithromycin) support this suggestion: bacterial cell damage was more pronounced with the triple combination than in cells treated with each agent alone (Fig. 6a–e).

This study suggests a new strategy to combat serious infections caused by MDR bacteria. The triple combination of AgNPs, blue light, and antibiotics is a promising therapy for infections caused by MRSA. The triple therapy may include antibiotics that have proven ineffective against MRSA. This approach would be useful for countering fast-growing drug resistance amid the slow development of new antimicrobial agents, and for preserving last-resort antibiotics such as vancomycin. The study can be taken further by exploring the application of the triple therapy in patients infected with MRSA and other MDR bacteria, taking into consideration the best conditions for optimizing the synergistic effects and minimizing harmful side effects.
Keywords: Nonconventional antimicrobials; Double and triple combinations; Multidrug-resistance; Checkerboard assay; Linezolid; Vancomycin; Azithromycin; Clarithromycin
Background: Treatment of infections caused by Staphylococcus aureus has become more difficult because of the emergence of multidrug-resistant isolates [1, 2]. Methicillin-resistant S. aureus (MRSA) presents problems for patients and healthcare-facility staff whose immune systems are compromised, or who have open access to their bodies via wounds, catheters or drips. The infection spectrum ranges from superficial skin infections to more serious diseases such as bronchopneumonia [3]. The failure of antibiotics to manage infections caused by multidrug-resistant (MDR) pathogens, especially MRSA, has triggered much research effort toward alternative antimicrobial approaches with higher efficiency and less resistance development by the microorganisms. Silver has long been known to exhibit antimicrobial activity against a wide range of microorganisms and has demonstrated considerable effectiveness in bactericidal applications [4], and silver nanoparticles (AgNPs) have been reconsidered as a potential alternative to conventional antimicrobial agents [5]. It has been estimated that 320 tons of nanosilver are used annually [6], with 30 % of all currently registered nano-products containing nanosilver [7]. The use of AgNPs alone or in combination with other antimicrobial agents has been suggested as a potential alternative to traditional treatment of infections caused by MDR pathogens [5]. AgNPs were found to exhibit antibacterial activity against MRSA in vitro when tested alone or in combination with other antimicrobial agents [8–10]. Metal nanostructures attract much attention due to their unique properties. AgNPs are potential biocides that have been reported to be less toxic than silver ions [11]. AgNPs can be incorporated into antimicrobial applications such as bandages, surface coatings, medical equipment, food packaging, functional clothing and cosmetics [12].
Blue light has recently attracted increasing attention as a novel phototherapy-based antimicrobial agent with significant activity against a broad range of bacterial and fungal pathogens and a lower chance of resistance development than antibiotics [13, 14]. Further, blue light has been shown to be highly effective against MRSA and other common nosocomial bacterial pathogens [15, 16]. The present investigation aims to evaluate the effectiveness of the triple combination of AgNPs, blue light, and the conventional antibiotics vancomycin, linezolid, amoxicillin, azithromycin, and clarithromycin against clinical isolates of MRSA. To the best of our knowledge, this is the first study to utilize this triple combination against pathogenic bacteria.

Methods:
Chemicals
Unless otherwise indicated, all chemicals were purchased from Sigma-Aldrich, USA.
Antibiotics
Amoxicillin (AMX), oxacillin (OXA), and vancomycin (VAN) were purchased from Sigma Chemical Co., St. Louis, Missouri, USA. Linezolid (LNZ) was provided by Pharmacia & Upjohn, Kalamazoo, MI, USA. Azithromycin (AZM) was provided by Pfizer, USA. Clarithromycin (CLR) was provided by Abbott Laboratories, USA.
Microorganisms
Ten clinical MRSA isolates were collected from The National Cancer Institute and from Abbasseya Hospital in Cairo, Egypt. The collected isolates were identified using conventional microbiological techniques. According to the genotyping results, the isolates were sub-classified into 14 different pulsed-field patterns, 11 spa-types and 8 multiple-locus sequence typing (MLST) sequence types.
The pulsed-field type A was the predominant pulsed-field type; it corresponded to spa-type t-037 and MLST sequence type ST-239, and belonged to clonal complex 8 (CC8) according to eBURST analysis (Table 1) (Unpublished data, Master Thesis, Moussa et al. 2010).

Table 1 Characteristics of the MRSA clinical isolates used in this study

Isolate designation    Spa-repeats    MLST      Clonal complex (eBURST)    SCCmec
C51                    t-186          ST-88     CC88                       IIIa
C6                     t-5711         ST-22     CC22                       IVa
C43                    t-037          ST-239    CC8                        III
N11                    t-363          ST-239    CC8                        III
N5                     t-037          ST-239    CC8                        III
N8                     t-037          ST-239    CC8                        III
C34                    t-037          ST-239    CC8                        III
C19                    t-037          ST-239    CC8                        III
C12                    t-037          ST-239    CC8                        III
C41                    t-1234         ST-97     CC97                       III

Oxacillin susceptibility
The isolates were inoculated onto Mueller–Hinton agar (Lab M, UK) plates supplemented with 4 % NaCl and 6 µg/mL oxacillin, followed by incubation at 37 °C for 24 h. Isolates that showed more than one colony were considered MRSA [17].
Preparation of the AgNPs: The AgNPs used for the purpose of this research are silver magnetite nanoparticles. To prepare the AgNPs, 0.127 g of silver nitrate was dissolved in 75 mL of distilled water, then 10 mL of an aqueous solution containing 0.08 g trisodium citrate and 0.2 g polyvinylpyrrolidone (PVP) were added. Ten milliliters of 0.1 M sodium borohydride were then added to the mixture. The solution turned dark brown, indicating the conversion of silver nitrate to silver nanoparticles. The nanoparticles were characterized spectrophotometrically, where a surface plasmon resonance peak appeared between 390 and 410 nm [18]. The particle size was also characterized by a Malvern Zetasizer Nano ZS (United Kingdom) and by a Tecnai G20, Super twin, double tilt (FEI) ultra-high resolution transmission electron microscope, which showed a uniform distribution of the nanoparticles with an average size of 15–20 nm.

Susceptibility of the isolates to AgNPs and the antibiotics: The MIC of the AgNPs was determined by the broth microdilution method using cation-adjusted Mueller–Hinton broth (MHB) based on the guidelines of the Clinical and Laboratory Standards Institute (CLSI) [19]. The minimum bactericidal concentration (MBC) was determined by streaking 10 µL samples from bacterial cultures supplemented with AgNPs or the antibiotics, at their MICs and higher concentrations, onto the surfaces of Mueller–Hinton agar plates. After a 24 h incubation period, the number of colony forming units per mL (CFU/mL) was determined and the MBC, defined as the concentration that kills 99.9 % of bacteria, was recorded.

Double combination of AgNPs with blue light against MRSA: AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of their MIC in 24-well plates. Briefly, bacterial suspensions were pipetted into the wells, which contained the AgNPs at the tested concentrations in MHB, to give an initial inoculum size of 1 × 10⁵ CFU/mL and a final volume of 2 mL/well.
The wells were exposed to a visible blue light source at 460 nm and 250 mW for 1 h using a Photon Emitting Diode (Photon Scientific, Egypt). Samples were taken after 0, 2, 4, 6, 8 and 10 h of inoculation, and viable bacterial counts were determined. Briefly, 10 µL aliquots were withdrawn and spread onto nutrient agar plates before being incubated at 37 °C for 24 h. The same procedure was repeated with nanoparticle-free and light-free wells. The experiment was performed in triplicate and the results were compared to drug-free samples.

Double combination of AgNPs with the antibiotics against MRSA: The efficiency of the double combination of AgNPs and amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical isolates of MRSA was assessed by the checkerboard method.
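The checkerboard method pairs two-fold dilution series of the two agents in a concentration grid. A minimal sketch of such a layout follows; the starting concentrations and the 8 × 8 grid size are hypothetical illustrations, not values taken from the text:

```python
# Two-fold dilution series, the basis of both broth microdilution and the
# checkerboard layout. Starting concentrations and grid size are hypothetical.
def twofold_series(start, n):
    """Return n two-fold dilutions starting at `start` (µg/mL), highest first."""
    return [start / 2 ** i for i in range(n)]

agnps = twofold_series(4, 8)    # µg/mL: 4, 2, 1, 0.5, ..., 0.03125
azm = twofold_series(64, 8)     # µg/mL
# Each checkerboard well pairs one AgNPs concentration with one antibiotic concentration.
wells = [(a, b) for a in agnps for b in azm]
print(len(wells))  # 64
```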
The combination response was evaluated by calculating the fractional inhibitory concentration index (∑FIC) as follows:

∑FIC = (MIC of drug A in combination / MIC of drug A tested alone) + (MIC of drug B in combination / MIC of drug B tested alone)

The interaction is defined as synergistic if the FIC index is 0.5 or less; indifferent if the FIC index is >0.5 and <4; and antagonistic if the FIC index is >4 [20].
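The ∑FIC formula and the interpretation cut-offs above can be expressed directly in code; the example MIC values below are hypothetical:

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Sum of fractional inhibitory concentrations (∑FIC) for drugs A and B."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def classify(fic):
    # Cut-offs as given in the text: <= 0.5 synergy, > 4 antagonism,
    # anything in between is indifference.
    if fic <= 0.5:
        return "synergistic"
    if fic > 4:
        return "antagonistic"
    return "indifferent"

# Hypothetical checkerboard well: the AgNPs MIC drops from 4 to 0.25 µg/mL
# and the antibiotic MIC from 64 to 8 µg/mL when the two are combined.
fic = fic_index(0.25, 4, 8, 64)   # 0.0625 + 0.125 = 0.1875
print(classify(fic))              # synergistic
```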
Triple combination of AgNPs, blue light, and the antibiotics against MRSA: The purpose of this experiment was to test the effectiveness of AgNPs in combination with blue light and one of the following antibiotics at a time: amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin, against selected isolates of MRSA. Two isolates for the combination of AgNPs with each of the tested antibiotics were chosen on the basis of the synergistic response in the checkerboard assay. The experiments were carried out in 24-well plates, where eight wells were designated as: drug- and light-free; blue light alone; AgNPs alone; the antibiotic alone; blue light and AgNPs; blue light and the antibiotic; AgNPs and the antibiotic; and, finally, the triple combination of blue light, AgNPs and the antibiotic. The 24-well plates were used because the diameter of their wells fits the tip of the Photon Emitting Diode; the diode was placed at a distance of 5 mm above the surface of the bacterial culture in the well to ensure optimal exposure to the light and reduce light scattering. Only the wells in the four corners of one plate were used in parallel treatments, to avoid any scattered light from adjacent wells; all other wells were left empty. The AgNPs and the antibiotics were tested at the concentrations that gave the best combination response in the checkerboard assay against the selected isolates. Bacterial suspensions were pipetted into the wells, which contained the AgNPs alone or in combination with the antibiotics at the test concentrations in MHB, to give an initial inoculum size of 1 × 10⁵ CFU/mL and a final volume of 2 mL/well. The wells designated for light treatment were exposed to the light source emitting blue light at a wavelength of 460 nm for 1 h. The plates were then incubated at 37 °C for 24 h, after which viable cell counts were determined.
The experiment was performed in triplicate, and the results obtained were compared to the drug- and blue light-free wells.

Effects of triple combination of AgNPs, blue light, and azithromycin on an MRSA isolate using transmission electron microscopy (TEM): Ten milliliters of MHB medium were inoculated with 1 × 10⁵ CFU/mL of MRSA isolate N8 in 15 mL conical centrifuge tubes (Falcon, USA). The suspensions were incubated at 37 °C for 4 h until the bacteria reached the logarithmic phase, then centrifuged at 2800×g for 10 min, and the cell pellets were re-suspended in 10 mL of fresh MHB that was drug-free or contained 0.25 µg/mL (1/16 MIC) of AgNPs, 0.25 µg/mL of azithromycin, or both agents. Two milliliter aliquots of the suspensions were transferred to 24-well plates. The plates were incubated at room temperature, during which the blue light wells were exposed to the light at 460 nm for 1 h. One milliliter samples were then taken and prepared for TEM as previously described [21]. Briefly, the samples were centrifuged, and the bacterial pellets were fixed in 1 mL of 3 % glutaraldehyde for 2 h, then centrifuged and washed with 7.2 % phosphate buffer. A secondary fixative, osmium tetroxide, was added to the pellets and incubated for 1 h before being washed with phosphate-buffered saline. The samples were then subjected to a series of dehydration steps using increasing concentrations of ethanol, from 50 to 95 %; at each step the samples were left for 10 min, and they were then placed in absolute ethanol for 20 min. The samples were then embedded in resin blocks that were subsequently cut into semi-thin and then ultra-thin sections and finally stained with uranyl acetate and lead citrate before being examined by TEM (JEOL JEM-1400). The results were compared to drug- and light-free control experiments.

Statistical analysis: The statistical analysis of the data was done using GraphPad Prism (version 5.0) software. One-way and two-way analysis of variance (ANOVA) were used to test the significance among the different treatment groups, with a 5 % error level accepted. Error bars in the graphical presentation of data express the standard deviation of the means between samples.
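The authors ran the ANOVA in GraphPad Prism; purely as an illustration, the one-way F statistic underlying such a comparison can be computed in plain Python on hypothetical log10 CFU/mL data:

```python
# One-way ANOVA F statistic in pure Python, illustrating the kind of
# comparison performed in GraphPad Prism. The values below are hypothetical.
def one_way_anova_f(*groups):
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control = [8.1, 8.3, 8.2]   # log10 CFU/mL, drug- and light-free
treated = [5.0, 5.2, 5.1]   # log10 CFU/mL, e.g. AgNPs + blue light
print(one_way_anova_f(control, treated))
```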
Results:

Susceptibility of the isolates to AgNPs and the antibiotics
The MIC of the AgNPs was found to be 4 µg/mL, with an MBC range of 8–16 µg/mL and an MBC90 of 8 µg/mL (Table 2). Vancomycin was the only antibiotic that showed activity against the tested isolates, with MIC90 and MBC90 values of 2 and 8 µg/mL, respectively (Table 2). The isolates were resistant to linezolid, with an MIC90 of 32 µg/mL, and to amoxicillin, azithromycin and clarithromycin, with MIC90 values >64 µg/mL.

Table 2 Susceptibility of the tested isolates to AgNPs and the antibiotics
Antimicrobial agent  MIC90 (µg/mL)a  MBC90 (µg/mL)a
AgNPs                4               8
Amoxicillin          >64             >64
Azithromycin         >64             >64
Clarithromycin       >64             >64
Linezolid            32              >64
Vancomycin           2               8
aMIC90: the minimum inhibitory concentration required to inhibit the growth of 90 % of the isolates; MBC90: the minimum bactericidal concentration required to kill 99.9 % of bacteria in 90 % of the isolates.
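MIC90, as defined in the Table 2 footnote, is the lowest concentration inhibiting the growth of at least 90 % of the isolates; a minimal sketch with hypothetical per-isolate MICs:

```python
import math

# MIC90: the lowest concentration that inhibits the growth of 90 % of the
# isolates. The per-isolate MIC values below are hypothetical.
def mic90(mics):
    ranked = sorted(mics)
    idx = math.ceil(0.9 * len(ranked)) - 1   # isolate covering the 90th percentile
    return ranked[idx]

agnps_mics = [2, 2, 4, 4, 4, 4, 4, 4, 4, 4]  # µg/mL for ten isolates
print(mic90(agnps_mics))  # 4
```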
Combination of AgNPs with blue light against MRSA
The antimicrobial activity of AgNPs in combination with blue light against one of the MRSA isolates was investigated. The AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128 of their MIC in 24-well plates. The antimicrobial activity of these combinations against the tested isolate was significantly higher (p < 0.001) than that of each agent alone. All bacteria were killed after 8 h of exposure to the combined therapy at all tested concentrations. Figure 1 shows the results for the combinations tested at 1/2, 1/4, 1/8, and 1/16 of the MIC of the AgNPs (data for lower concentrations are not shown).

Fig. 1 Antimicrobial activity of the AgNPs at different concentrations in combination with blue light against MRSA isolates.
Cell suspensions were exposed to either the silver compound alone at sub-MICs (a 1/2, b 1/4, c 1/8, and d 1/16 MIC), blue light alone at 460 nm and 250 mW for 1 h, or a combination of both agents. Viable colony counts were recorded as mean ± SD of three independent experiments.
AgNPs silver nanoparticles, CFU colony forming unit, MIC minimum inhibitory concentration, SD standard deviation

Combination of AgNPs with the antibiotics against MRSA
The efficiency of the double combination of the AgNPs and each of amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical MRSA isolates was assessed using the checkerboard method. The combination of AgNPs with amoxicillin resulted in synergistic activity against four isolates, whereas an indifferent response was observed in six isolates. Similar results were observed when the AgNPs were combined with azithromycin, clarithromycin or linezolid, where synergism was observed against 4, 3 and 3 isolates, respectively, whereas indifferent interactions prevailed for the remaining isolates. In contrast, the combination of AgNPs with vancomycin was indifferent for all tested isolates (Fig. 2).

Fig. 2 Double combination of AgNPs with amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combination was assessed by the checkerboard method and the response was evaluated by calculation of the fractional inhibitory concentration (FIC) index as follows: synergistic if the FIC index is 0.5 or less, indifferent if the FIC index is more than 0.5 and less than 4, and antagonistic if the FIC index is more than 4.
AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin Double combination of AgNPs with the amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combination was assessed by the checkerboard method and the response was evaluated by calculation of the fraction inhibitory index (FIC) as follow: synergistic if the FIC index is 0.5 or less, indifference if the FIC index more than 0.5 and less than four, and antagonistic if the FIC index more than four. AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin The efficiency of the double combination of the AgNPs and each of amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin, against the ten clinical MRSA isolates was assessed using the checkerboard method. The combination of AgNPs with amoxicillin resulted in synergistic activity against four isolates whereas indifference response was observed in six isolates. Similar results were observed when the AgNPs were combined with azithromycin, clarithromycin or linezolid, where synergism was observed against 4, 3 and 3 isolates, respectively, whereas indifferent interaction prevailed for the remaining isolates. On the other hand, combination of AgNPs with vancomycin was indifferent for all tested isolates (Fig. 2).Fig. 2Double combination of AgNPs with the amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combination was assessed by the checkerboard method and the response was evaluated by calculation of the fraction inhibitory index (FIC) as follow: synergistic if the FIC index is 0.5 or less, indifference if the FIC index more than 0.5 and less than four, and antagonistic if the FIC index more than four. 
AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin Double combination of AgNPs with the amoxicillin, vancomycin, linezolid, azithromycin or clarithromycin against ten MRSA isolates. The combination was assessed by the checkerboard method and the response was evaluated by calculation of the fraction inhibitory index (FIC) as follow: synergistic if the FIC index is 0.5 or less, indifference if the FIC index more than 0.5 and less than four, and antagonistic if the FIC index more than four. AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin Triple combination of AgNPs, blue light, and the antibiotics against MRSA isolates The effectiveness of the AgNPs in combination with blue light and amoxicillin, linezolid, azithromycin, or clarithromycin, was tested against selected isolates of MRSA. Two isolates from each combination of AgNPs and antibiotic were selected based on the synergistic results of the checkerboard assay. Vancomycin was excluded because its combination with the AgNPs was indifferent against all isolates. The AgNPs and the antibiotics were tested at the concentrations, which gave the best results in checkerboard assay. Isolates N8 and C41 were used to assess the triple combination of AgNPs at 1/16 MIC, the blue light, and azithromycin at 0.25 and 2 µg/mL, respectively. The triple combination resulted in significantly higher (p < 0.001) killing effect of isolate C41 with log10 CFU/mL reductions of 8.4, 3.2, compared to the drug-free samples and to the double combinations of the antibiotic with the AgNPs (Fig. 3a). The triple combinations against isolate N8 resulted in killing of all bacteria compared to all other treatments, which showed lower activity with log10 CFU/mL reduction range of 1.0–2.0 (Fig. 3b).Fig. 3Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. 
The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation Triple combinations that included clarithromycin were tested against isolates C51 and C41, at 1/8 and 1/512 of the MIC, respectively, of the AgNPs and 0.25 µg/mL of the antibiotic. The bactericidal activity of the three-agent combination was significantly higher (p < 0.001) than that attained with other treatment combinations with log10 CFU/mL reduction of 13.02 and 5.84 compared to the control of the two isolates, respectively (Fig. 4a, b).Fig. 4Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. 
Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C51: AgNPs at 1/8 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C51: AgNPs at 1/8 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation The antimicrobial efficacy of linezolid at 0.25 and 8 µg/mL was evaluated against isolate C19 and N5, respectively, when combined with the silver compound at its 1/2 MIC and blue light. Synergistic interaction was observed when the AgNPs were combined with either the antibiotic or the blue light or with both of them, where the bacteria were completely killed following treatment with all combinations (Fig. 5a, b). The same effect was observed when amoxicillin at 1 and 0.25 µg/mL was combined with blue light and AgNPs at 1/32 and 1/256 of its MIC against isolates C12 and N8, respectively (data not shown).Fig. 5Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with the blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. 
Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C19: AgNPs at 1/2 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and azithromycin at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with the blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C19: AgNPs at 1/2 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and azithromycin at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation The effectiveness of the AgNPs in combination with blue light and amoxicillin, linezolid, azithromycin, or clarithromycin, was tested against selected isolates of MRSA. Two isolates from each combination of AgNPs and antibiotic were selected based on the synergistic results of the checkerboard assay. Vancomycin was excluded because its combination with the AgNPs was indifferent against all isolates. The AgNPs and the antibiotics were tested at the concentrations, which gave the best results in checkerboard assay. Isolates N8 and C41 were used to assess the triple combination of AgNPs at 1/16 MIC, the blue light, and azithromycin at 0.25 and 2 µg/mL, respectively. The triple combination resulted in significantly higher (p < 0.001) killing effect of isolate C41 with log10 CFU/mL reductions of 8.4, 3.2, compared to the drug-free samples and to the double combinations of the antibiotic with the AgNPs (Fig. 3a). 
The triple combinations against isolate N8 resulted in killing of all bacteria compared to all other treatments, which showed lower activity with log10 CFU/mL reduction range of 1.0–2.0 (Fig. 3b).Fig. 3Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and azithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C41: AgNPs at 1/16 of the MIC, and azithromycin at 2 µg/mL. b Isolate N8: AgNPs at 1/16 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation Triple combinations that included clarithromycin were tested against isolates C51 and C41, at 1/8 and 1/512 of the MIC, respectively, of the AgNPs and 0.25 µg/mL of the antibiotic. The bactericidal activity of the three-agent combination was significantly higher (p < 0.001) than that attained with other treatment combinations with log10 CFU/mL reduction of 13.02 and 5.84 compared to the control of the two isolates, respectively (Fig. 4a, b).Fig. 
4Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C51: AgNPs at 1/8 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The triple combination of AgNPs with the blue light and clarithromycin against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C51: AgNPs at 1/8 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate C41: AgNPs at 1/512 of the MIC, and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation The antimicrobial efficacy of linezolid at 0.25 and 8 µg/mL was evaluated against isolate C19 and N5, respectively, when combined with the silver compound at its 1/2 MIC and blue light. Synergistic interaction was observed when the AgNPs were combined with either the antibiotic or the blue light or with both of them, where the bacteria were completely killed following treatment with all combinations (Fig. 5a, b). The same effect was observed when amoxicillin at 1 and 0.25 µg/mL was combined with blue light and AgNPs at 1/32 and 1/256 of its MIC against isolates C12 and N8, respectively (data not shown).Fig. 
5Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with the blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C19: AgNPs at 1/2 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and azithromycin at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The triple combination of AgNPs with the blue light and linezolid against two isolates of MRSA was assessed. The isolates were selected on the basis of synergistic response in checkerboard assay. Based on the best result of the combination in the checkerboard assay, the concentrations of the two agents were used as follow: a Isolate C19: AgNPs at 1/2 of the MIC, and azithromycin at 0.25 µg/mL. b Isolate N5: AgNPs at 1/2 of the MIC, and azithromycin at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation TEM examination of MRSA isolate (N8) after treatment with the AgNPs, blue light, and azithromycin alone or in triple combination The antimicrobial efficiency of the AgNPs at 1/16 MIC, blue light for 1 h and azithromycin at 0.25 µg/mL against isolate N8 was visualized by TEM when each of them was used alone or in triple combination (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of the silver particles inside the cells concomitant with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with either blue light or azithromycin alone (Fig. 6c, d). 
On the other hand, bacterial cell lysis was more pronounced following treatment with the three agents in combination, where the cells were severely affected (Fig. 6e).Fig. 6Visualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of the damage Visualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. 
Small arrows indicate the location of the AgNPs and the sites of the damage The antimicrobial efficiency of the AgNPs at 1/16 MIC, blue light for 1 h and azithromycin at 0.25 µg/mL against isolate N8 was visualized by TEM when each of them was used alone or in triple combination (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of the silver particles inside the cells concomitant with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with either blue light or azithromycin alone (Fig. 6c, d). On the other hand, bacterial cell lysis was more pronounced following treatment with the three agents in combination, where the cells were severely affected (Fig. 6e).Fig. 6Visualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. The photos show the response of the bacteria to the following treatments: a Drug-free and light-free (control). b AgNPs alone at 1/16 of its MIC. c Blue light exposure at 460 nm and 250 mW for 1 h. d Azithromycin alone at 0.25 µg/mL. e Triple combination of AgNPs, blue light and the azithromycin. Signs of membrane damage and cell lysis were more pronounced in cells treated with a combination of three agents compared to cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of the damage Visualization of the effect of combination of AgNPs, blue light, and azithromycin on MRSA isolate N8 using transmission electron microscope (TEM). The antimicrobial efficacy of the AgNPs at 1/16 MIC, blue light and azithromycin at 0.25 µg/mL alone and in triple combination against isolate N8 was visualized by TEM at ×80,000. 
Susceptibility of the isolates to AgNPs and the antibiotics: The MIC of AgNPs was 4 µg/mL, with an MBC range of 8–16 µg/mL and an MBC90 of 8 µg/mL (Table 2). Vancomycin was the only antibiotic that showed activity against the tested isolates, with MIC90 and MBC90 values of 2 and 8 µg/mL, respectively (Table 2). The isolates were resistant to linezolid, with an MIC90 of 32 µg/mL, and to amoxicillin, azithromycin, and clarithromycin, with MIC90 values >64 µg/mL.

Table 2 Susceptibility of the tested isolates to AgNPs and the antibiotics (µg/mL)

Antimicrobial agent    MIC90    MBC90
AgNPs                  4        8
Amoxicillin            >64      >64
Azithromycin           >64      >64
Clarithromycin         >64      >64
Linezolid              32       >64
Vancomycin             2        8

MIC90: the minimum inhibitory concentration of the antibiotic required to inhibit the growth of 90 % of the isolates. MBC90: the minimum bactericidal concentration of the antibiotic required to kill 99.9 % of bacteria in 90 % of the isolates.
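As a minimal illustration of how an MIC90 summarizes a panel of isolates (the lowest tested concentration that inhibits at least 90 % of them), the sketch below uses hypothetical per-isolate MIC values, not the study's raw data:

```python
import math

def mic90(mics):
    """Return the MIC covering at least 90 % of the isolates.

    Sort the per-isolate MICs and take the value at the position where
    >= 90 % of isolates are inhibited at or below that concentration.
    """
    ordered = sorted(mics)
    i = math.ceil(0.9 * len(ordered)) - 1  # 0-based index of the 90th percentile isolate
    return ordered[i]

# Hypothetical MICs (ug/mL) for a panel of 10 isolates
mics = [1, 1, 2, 2, 2, 2, 2, 2, 2, 8]
print(mic90(mics))  # 2
```

The same percentile logic applies to MBC90, substituting bactericidal endpoints for inhibitory ones.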
Combination of AgNPs with blue light against MRSA: The antimicrobial activity of AgNPs in combination with blue light was investigated against one of the MRSA isolates. The AgNPs were tested at 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the MIC in 24-well plates. The antimicrobial activity of these combinations against the tested isolate was significantly higher (p < 0.001) than that of each agent alone, and all bacteria were killed after 8 h of exposure to the combined therapy at all tested concentrations. Figure 1 shows the results for the combinations at 1/2, 1/4, 1/8, and 1/16 of the MIC of AgNPs (data for lower concentrations not shown).

Fig. 1 Antimicrobial activity of AgNPs at different concentrations in combination with blue light against an MRSA isolate. Cell suspensions were exposed to the silver compound alone at sub-MICs (a 1/2, b 1/4, c 1/8, and d 1/16 MIC), to blue light alone at 460 nm and 250 mW for 1 h, or to the combination of both agents. Viable colony counts were recorded as mean ± SD of three independent experiments. AgNPs silver nanoparticles, CFU colony forming unit, MIC minimum inhibitory concentration, SD standard deviation

Combination of AgNPs with the antibiotics against MRSA: The efficiency of the double combination of AgNPs with each of amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against the ten clinical MRSA isolates was assessed using the checkerboard method. The combination of AgNPs with amoxicillin was synergistic against four isolates and indifferent against the remaining six. Similar results were observed when AgNPs were combined with azithromycin, clarithromycin, or linezolid, with synergism against 4, 3, and 3 isolates, respectively, and indifferent interactions for the remaining isolates. In contrast, the combination of AgNPs with vancomycin was indifferent for all tested isolates (Fig. 2).

Fig. 2 Double combination of AgNPs with amoxicillin, vancomycin, linezolid, azithromycin, or clarithromycin against ten MRSA isolates. Combinations were assessed by the checkerboard method, and the response was evaluated from the fractional inhibitory concentration (FIC) index as follows: synergistic if the FIC index is 0.5 or less, indifferent if it is more than 0.5 and less than 4, and antagonistic if it is more than 4. AgNPs silver nanoparticles, AMX amoxicillin, AZM azithromycin, CLR clarithromycin, LNZ linezolid, VAN vancomycin
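The FIC-index scoring used for the checkerboard assay can be sketched as follows; the thresholds match those stated above, while the example MIC values are hypothetical, not measurements from the study:

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FIC index = FIC_A + FIC_B, where FIC_X is the MIC of agent X in
    combination divided by its MIC when tested alone."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret_fic(index):
    """Classify the interaction using the thresholds from the text."""
    if index <= 0.5:
        return "synergistic"
    if index < 4:
        return "indifferent"  # more than 0.5 and less than 4
    return "antagonistic"

# Hypothetical example: AgNPs effective at 1/16 of its MIC of 4 ug/mL in
# combination, and the antibiotic MIC drops from 64 to 0.25 ug/mL.
idx = fic_index(4, 64, 0.25, 0.25)
print(round(idx, 4), interpret_fic(idx))  # 0.0664 synergistic
```

An index at exactly 4 is treated here as antagonistic; the article's wording leaves that boundary case unspecified.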
Triple combination of AgNPs, blue light, and the antibiotics against MRSA isolates: The effectiveness of AgNPs in combination with blue light and amoxicillin, linezolid, azithromycin, or clarithromycin was tested against selected MRSA isolates. For each AgNPs–antibiotic combination, two isolates were selected on the basis of the synergistic results of the checkerboard assay; vancomycin was excluded because its combination with AgNPs was indifferent against all isolates. The AgNPs and the antibiotics were tested at the concentrations that gave the best results in the checkerboard assay. Isolates N8 and C41 were used to assess the triple combination of AgNPs at 1/16 MIC, blue light, and azithromycin at 0.25 and 2 µg/mL, respectively. Against isolate C41, the triple combination produced a significantly greater (p < 0.001) killing effect, with log10 CFU/mL reductions of 8.4 and 3.2 compared with the drug-free samples and with the double combination of the antibiotic and AgNPs, respectively (Fig. 3a). Against isolate N8, the triple combination killed all bacteria, whereas all other treatments showed lower activity, with log10 CFU/mL reductions of 1.0–2.0 (Fig. 3b).

Fig. 3 Combination of AgNPs, blue light, and azithromycin against two isolates of MRSA. The isolates were selected on the basis of a synergistic response in the checkerboard assay, and the concentrations of the two agents were those that gave the best checkerboard result: a isolate C41, AgNPs at 1/16 of the MIC and azithromycin at 2 µg/mL; b isolate N8, AgNPs at 1/16 of the MIC and azithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, AZM azithromycin, SD standard deviation

Triple combinations that included clarithromycin were tested against isolates C51 and C41 at 1/8 and 1/512 of the MIC of the AgNPs, respectively, with 0.25 µg/mL of the antibiotic. The bactericidal activity of the three-agent combination was significantly higher (p < 0.001) than that attained with the other treatments, with log10 CFU/mL reductions of 13.02 and 5.84 relative to the controls of the two isolates, respectively (Fig. 4a, b).

Fig. 4 Combination of AgNPs, blue light, and clarithromycin against two isolates of MRSA. The isolates were selected on the basis of a synergistic response in the checkerboard assay, and the concentrations of the two agents were those that gave the best checkerboard result: a isolate C51, AgNPs at 1/8 of the MIC and clarithromycin at 0.25 µg/mL; b isolate C41, AgNPs at 1/512 of the MIC and clarithromycin at 0.25 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, CLR clarithromycin, SD standard deviation

The antimicrobial efficacy of linezolid at 0.25 and 8 µg/mL was evaluated against isolates C19 and N5, respectively, in combination with the silver compound at 1/2 of its MIC and blue light. Synergistic interaction was observed when the AgNPs were combined with the antibiotic, with blue light, or with both, and the bacteria were completely killed by all combinations (Fig. 5a, b). The same effect was observed when amoxicillin at 1 and 0.25 µg/mL was combined with blue light and AgNPs at 1/32 and 1/256 of the MIC against isolates C12 and N8, respectively (data not shown).

Fig. 5 Combination of AgNPs, blue light, and linezolid against two isolates of MRSA. The isolates were selected on the basis of a synergistic response in the checkerboard assay, and the concentrations of the two agents were those that gave the best checkerboard result: a isolate C19, AgNPs at 1/2 of the MIC and linezolid at 0.25 µg/mL; b isolate N5, AgNPs at 1/2 of the MIC and linezolid at 8 µg/mL. AgNPs silver nanoparticles, CFU colony forming unit, LNZ linezolid, SD standard deviation

TEM examination of MRSA isolate (N8) after treatment with the AgNPs, blue light, and azithromycin alone or in triple combination: The antimicrobial effects of AgNPs at 1/16 MIC, blue light for 1 h, and azithromycin at 0.25 µg/mL against isolate N8, alone and in triple combination, were visualized by TEM (Fig. 6a–e). Bacteria treated with AgNPs alone showed accumulation of silver particles inside the cells, with signs of membrane damage and lysis (Fig. 6b). Cell lysis was also observed when the bacteria were treated with blue light or azithromycin alone (Fig. 6c, d). Cell lysis was most pronounced after treatment with the three agents in combination, where the cells were severely affected (Fig. 6e).
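The log10 CFU/mL reductions reported for these kill assays are the difference between the log-transformed viable counts of the control (or comparator treatment) and the treated sample. A minimal sketch, using hypothetical counts rather than the article's figure data:

```python
import math

def log10_reduction(reference_cfu, treated_cfu):
    """Log10 reduction in viable count relative to a reference sample.

    reference_cfu and treated_cfu are viable counts in CFU/mL; a result
    of 3.0 means a 1000-fold drop in surviving bacteria.
    """
    return math.log10(reference_cfu) - math.log10(treated_cfu)

# Hypothetical example: control grows to 1e9 CFU/mL, treated sample
# retains 1e6 CFU/mL -> a 3-log reduction.
print(round(log10_reduction(1e9, 1e6), 2))  # 3.0
```

Complete killing (no detectable colonies) yields an undefined log10 of zero, which is why such results are reported above as "all bacteria were killed" rather than as a numeric reduction.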
Signs of membrane damage and cell lysis were more pronounced in cells treated with the combination of the three agents than in cells treated with each agent alone. Small arrows indicate the location of the AgNPs and the sites of damage. Discussion: Antibiotic resistance in isolates of S. aureus has become an alarming global problem that limits the availability of effective antimicrobial agents [22]. Antibiotic misuse, the failure of some patients to comply with their treatment regimens, and the high capability of bacteria to mutate are among the major factors contributing to the emergence of bacterial resistance. Antibiotic resistance leads to treatment failure in life-threatening bacterial infections and increases costs due to longer stays in healthcare settings [23, 24]. The use of non-conventional therapies to which bacteria are unlikely to develop resistance would be the best alternative. AgNPs are potential antimicrobial agents that can be considered an alternative to antibiotics for the treatment of infections caused by MDR bacteria [5]. 
AgNPs have been shown to possess strong, broad-spectrum antimicrobial activity due to a combined effect of their physical properties and the released free silver ions [25]. Methicillin-resistant S. aureus was used as a model in our study to assess the efficacy of the combination of AgNPs, blue light, and anti-staphylococcal antibiotics. The isolates had been collected from different hospital units to ensure the broadest possible representation of the Egyptian genotype population of S. aureus. We have previously found that members of CC8 are the prevailing MRSA clone in Egypt (unpublished data, Master's thesis, Moussa et al. 2010). Infections caused by S. aureus are among the most frequent causes of both healthcare-associated and community-onset infections [26]. MRSA and coagulase-negative staphylococci are among the leading causes of nosocomial bloodstream infections in the USA [27]. Staphylococci cause biofilm-associated infections by forming biofilms on damaged tissues and indwelling vascular catheters [28–32]. Five antibiotics were selected from different conventional classes, including a beta-lactam (amoxicillin), macrolides (azithromycin and clarithromycin), an oxazolidinone (linezolid), and a glycopeptide (vancomycin). Based on the European Committee on Antimicrobial Susceptibility Testing (EUCAST) MIC breakpoint guideline [33], all isolates were found to be resistant to amoxicillin, azithromycin, clarithromycin, and linezolid, while they were susceptible to vancomycin (Table 2). With few exceptions, MRSA isolates are resistant to all beta-lactam antibiotics and commonly resistant to macrolides, with very rare resistance to glycopeptide antibiotics [33, 34]. Emergence of linezolid resistance was previously reported in 0.05 % of S. aureus infections [35]. Antibiotics that are known to be ineffective against MRSA were used in this study to assess the possibility of enhancing their antimicrobial activity with AgNPs and blue light. 
This approach would be useful in “recycling” antibiotics that have become useless against infections caused by MRSA, helping to face fast-growing drug resistance amid the slow development of new antimicrobial agents and to preserve last-resort antibiotics such as vancomycin. Blue light has attracted increasing attention because of its intrinsic antimicrobial effect, which does not involve the use of exogenous photosensitizers as in photodynamic therapy (PDT), and because it is less damaging to mammalian cells than ultraviolet irradiation [13]. The biomedical applications of blue light at specific wavelengths and intensities against different pathogens have been reported earlier [36]. Blue light at 470 nm was found to be effective against MRSA strains associated with hospital-acquired and community-onset infections [37]. Double combinations of the AgNPs, at 1/2–1/128 of its MIC, with blue light were tested against selected MRSA isolates. The bactericidal activity of both agents was significantly enhanced (p < 0.001) when bacteria were treated with AgNPs with concurrent exposure to blue light for 1 h, compared to each agent alone (Fig. 1a–e). The combined therapy killed all bacteria after 8 h in all tested combinations, while the silver compound and the blue light were each less efficient in killing the organisms during the tested time. The mechanism of the antimicrobial effect of either AgNPs or blue light is still not fully understood. Several hypotheses have been suggested to explain these mechanisms. For example, it has been reported that AgNPs can damage bacterial cell membranes, leading to structural changes that render bacteria more permeable [38, 39]. AgNPs have unique optical, electrical, and thermal properties, with a high surface-area-to-volume ratio that allows optimal interaction with bacterial surfaces and hence higher antimicrobial activity [40]. 
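Kill-kinetics data of the kind described here (viable counts sampled over time, with complete killing reported when counts fall below the detection limit) are conventionally summarized as a log10 reduction in CFU/mL. The following is a minimal sketch of that calculation; the counts used are hypothetical, not measurements from this study:

```python
import math

def log10_reduction(cfu_initial, cfu_final, limit_of_detection=1.0):
    """Log10 reduction in viable count (CFU/mL).

    Counts below the detection limit are floored at that limit, so a
    complete kill is reported as '>= the computed value'.
    """
    cfu_final = max(cfu_final, limit_of_detection)
    return math.log10(cfu_initial / cfu_final)

# Hypothetical counts: untreated culture at 1e8 CFU/mL; combination
# treatment below the detection limit after 8 h (not actual study data).
print(log10_reduction(1e8, 0))    # 8.0 -> reported as >= 8-log kill
print(log10_reduction(1e6, 1e3))  # 3.0 -> a 99.9 % reduction
```

A 3-log reduction corresponds to killing 99.9 % of the inoculum; bactericidal activity is often defined as a reduction of at least 3 log10 relative to the starting inoculum.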
The formation of free radicals by the silver nanoparticles is probably another mechanism that can lead to cell death [41, 42]. It has also been proposed that cationic silver is released from the nanoparticles when they are dissolved in water or when they penetrate into the cells [5]. Silver ions bind to the cellular membranes, proteins, and nucleic acids, causing structural changes and deformations of the bacterial cell [5]. They also deactivate many vital enzymes by interaction with thiol groups [31] and are involved in the generation of reactive oxygen species [32]. For blue light, the commonly accepted hypothesis is the production of highly cytotoxic reactive oxygen species (ROS) in a manner similar to PDT [43]. We have previously reported a similar synergistic interaction for the combination of AgNPs and blue light when both agents were tested against clinical isolates of Pseudomonas aeruginosa [44]. A possible mechanism of the observed synergy could be the transduction of captured blue-light energy by blue-light sensory proteins to the AgNPs, resulting in thermal destruction of the bacterial cells [45, 46]. Double combinations of the AgNPs with five conventional antibiotics against the ten MRSA clinical isolates were investigated using the checkerboard assay. Synergistic interactions were observed when the AgNPs were used with amoxicillin, azithromycin, clarithromycin, or linezolid in 30–40 % of the combinations (Fig. 2). The remaining combinations were indifferent, with no antagonistic interaction observed. Combination of the AgNPs with vancomycin, on the other hand, was indifferent against all isolates. Synergistic interactions between AgNPs and conventional antibiotics against different pathogens have been reported previously. Ruden et al. [47] found synergistic interaction when AgNPs were combined with polymyxin B against gram-negative bacteria. 
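Checkerboard interactions of the kind reported here are conventionally classified with the fractional inhibitory concentration index (FICI): the sum of each agent's MIC in combination divided by its MIC alone, with FICI ≤ 0.5 read as synergy, values up to 4 as indifference, and values above 4 as antagonism. A minimal sketch of that scoring follows; the MIC values are hypothetical, not taken from this study:

```python
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """Fractional inhibitory concentration index for a two-agent checkerboard."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    """Common FICI cut-offs (some authors use slightly different thresholds)."""
    if fici <= 0.5:
        return "synergy"
    if fici <= 4.0:
        return "indifference"
    return "antagonism"

# Hypothetical example: agent A effective at 1/8 of its MIC and agent B
# at 1/4 of its MIC when combined (not actual study data).
fici = fic_index(mic_a_alone=8.0, mic_a_combo=1.0,
                 mic_b_alone=4.0, mic_b_combo=1.0)
print(fici, interpret(fici))  # 0.375 synergy
```

Because each agent's contribution is normalized by its own MIC, the index is unit-free and comparable across antibiotic classes.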
Combination of AgNPs with ampicillin, chloramphenicol, and kanamycin against gram-positive and gram-negative pathogenic bacteria was also found to be synergistic [48]. Similarly, Smekalova et al. [49] observed a synergistic effect when AgNPs were combined with penicillin G, gentamicin, and colistin against MDR bacteria. Reversing MRSA resistance to ineffective antibiotics by combination with AgNPs could be a novel strategy to combat infections caused by MDR pathogens. A biocompatible gold nanoparticle–amoxicillin complex was found to overcome the resistance of MRSA to the antibiotic [50]. The synergistic response of combinations of AgNPs and ineffective antibiotics is probably due to an increased concentration of antibiotic at the site of bacterium–antibiotic interaction and to facilitated binding of antibiotics to bacteria [51]. AgNPs combined with amoxicillin, azithromycin, clarithromycin, or linezolid were tested against selected MRSA isolates with concurrent exposure to blue light for 1 h. The isolates were selected based on synergistic response, being those most affected by the previous double combinations in the checkerboard assay. The antimicrobial activity of the three agents was significantly (p < 0.001) enhanced in the triple combinations compared to single and double treatments. The bactericidal activity was more pronounced when azithromycin or clarithromycin was included in the triple therapy (Figs. 3, 4). The antimicrobial efficiency was also enhanced when linezolid (Fig. 5) or amoxicillin (data not shown) was included, but with the same bactericidal effect in double and triple combinations. The synergy observed in triple therapy might be explained by the combined mechanisms of action of each agent and the enhanced outcomes of their double combinations. 
TEM images of isolate N8 exposed to the triple therapy (AgNPs, blue light, and azithromycin) support the aforementioned suggestion: bacterial cell damage from the triple combination was more pronounced compared to cells treated with each agent alone (Fig. 6a–e). Conclusions: This study suggests a new strategy to combat serious infections caused by MDR bacteria. The triple combination of AgNPs, blue light, and antibiotics is a promising therapy for infections caused by MRSA. The triple therapy may include antibiotics that have proven ineffective against MRSA. This approach would be useful to face fast-growing drug resistance amid the slow development of new antimicrobial agents, and to preserve last-resort antibiotics such as vancomycin. The study can be taken further by exploring the application of the triple therapy in patients infected with MRSA and other MDR bacteria, taking into consideration the best conditions for optimizing synergistic effects and decreasing harmful side effects.
Background: Silver nanoparticles (AgNPs) are potential antimicrobial agents that can be considered an alternative to antibiotics for the treatment of infections caused by multidrug-resistant bacteria. The antimicrobial effects of double and triple combinations of AgNPs, visible blue light, and the conventional antibiotics amoxicillin, azithromycin, clarithromycin, linezolid, and vancomycin against ten clinical isolates of methicillin-resistant Staphylococcus aureus (MRSA) were investigated. Methods: The antimicrobial activity of AgNPs, applied in combination with blue light, against selected isolates of MRSA was investigated at 1/2-1/128 of its minimal inhibitory concentration (MIC) in 24-well plates. The wells were exposed to a blue light source at 460 nm and 250 mW for 1 h using a photon-emitting diode. Samples were taken at different time intervals, and viable bacterial counts were determined. The double combinations of AgNPs and each of the antibiotics were assessed by the checkerboard method. The killing assay was used to test possible synergistic effects when blue light was further combined with AgNPs and one antibiotic at a time against selected isolates of MRSA. Results: The bactericidal activity of AgNPs, at sub-MIC, and blue light was significantly (p < 0.001) enhanced when both agents were applied in combination compared to each agent alone. Similarly, synergistic interactions were observed when AgNPs were combined with amoxicillin, azithromycin, clarithromycin, or linezolid in 30-40 % of the double combinations, with no antagonistic interaction observed against the tested isolates. Combination of the AgNPs with vancomycin did not result in enhanced killing against any of the isolates tested. The antimicrobial activity against MRSA isolates was significantly enhanced in triple combinations of AgNPs, blue light, and antibiotic, compared to treatments involving one or two agents. 
The bactericidal activities were highest when azithromycin or clarithromycin was included in the triple therapy, compared to the other antibiotics tested. Conclusions: A new strategy combining AgNPs, blue light, and antibiotics can be used to combat serious infections caused by MRSA. This triple therapy may include antibiotics that have been proven ineffective against MRSA. The suggested approach would be useful to face fast-growing drug resistance amid the slow development of new antimicrobial agents, and to preserve last-resort antibiotics such as vancomycin.
Background: Treatment of infections caused by Staphylococcus aureus has become more difficult because of the emergence of multidrug-resistant isolates [1, 2]. Methicillin-resistant S. aureus (MRSA) presents problems for patients and healthcare-facility staff whose immune systems are compromised, or who have open access to their bodies via wounds, catheters, or drips. The infection spectrum ranges from superficial skin infections to more serious diseases such as bronchopneumonia [3]. Failure of antibiotics to manage infections caused by multidrug-resistant (MDR) pathogens, especially MRSA, has triggered much research effort toward finding alternative antimicrobial approaches with higher efficiency and less resistance development by the microorganisms. Silver has long been known to exhibit antimicrobial activity against a wide range of microorganisms and has demonstrated considerable effectiveness in bactericidal applications [4], and silver nanoparticles (AgNPs) have been reconsidered as a potential alternative to conventional antimicrobial agents [5]. It has been estimated that 320 tons of nanosilver are used annually [6], with 30 % of all currently registered nano-products containing nanosilver [7]. The use of AgNPs alone or in combination with other antimicrobial agents has been suggested as a potential alternative to traditional treatment of infections caused by MDR pathogens [5]. AgNPs were found to exhibit antibacterial activity against MRSA in vitro when tested alone or in combination with other antimicrobial agents [8–10]. Metal nanostructures attract much attention due to their unique properties. AgNPs are a potential biocide that has been reported to be less toxic than silver ions [11]. AgNPs can be incorporated into antimicrobial applications such as bandages, surface coatings, medical equipment, food packaging, functional clothes, and cosmetics [12]. 
Blue light has recently attracted increasing attention as a novel phototherapy-based antimicrobial agent with significant activity against a broad range of bacterial and fungal pathogens and a lower chance of resistance development than antibiotics [13, 14]. Further, blue light has been shown to be highly effective against MRSA and other common nosocomial bacterial pathogens [15, 16]. The present investigation aims to evaluate the effectiveness of the triple combination of AgNPs, blue light, and the conventional antibiotics vancomycin, linezolid, amoxicillin, azithromycin, and clarithromycin against clinical isolates of MRSA. To the best of our knowledge, this is the first study to utilize this triple combination against pathogenic bacteria. Conclusions: This study suggests a new strategy to combat serious infections caused by MDR bacteria. The triple combination of AgNPs, blue light, and antibiotics is a promising therapy for infections caused by MRSA. The triple therapy may include antibiotics that have proven ineffective against MRSA. This approach would be useful to face fast-growing drug resistance amid the slow development of new antimicrobial agents, and to preserve last-resort antibiotics such as vancomycin. The study can be taken further by exploring the application of the triple therapy in patients infected with MRSA and other MDR bacteria, taking into consideration the best conditions for optimizing synergistic effects and decreasing harmful side effects.
Background: Silver nanoparticles (AgNPs) are potential antimicrobial agents that can be considered an alternative to antibiotics for the treatment of infections caused by multidrug-resistant bacteria. The antimicrobial effects of double and triple combinations of AgNPs, visible blue light, and the conventional antibiotics amoxicillin, azithromycin, clarithromycin, linezolid, and vancomycin against ten clinical isolates of methicillin-resistant Staphylococcus aureus (MRSA) were investigated. Methods: The antimicrobial activity of AgNPs, applied in combination with blue light, against selected isolates of MRSA was investigated at 1/2-1/128 of its minimal inhibitory concentration (MIC) in 24-well plates. The wells were exposed to a blue light source at 460 nm and 250 mW for 1 h using a photon-emitting diode. Samples were taken at different time intervals, and viable bacterial counts were determined. The double combinations of AgNPs and each of the antibiotics were assessed by the checkerboard method. The killing assay was used to test possible synergistic effects when blue light was further combined with AgNPs and one antibiotic at a time against selected isolates of MRSA. Results: The bactericidal activity of AgNPs, at sub-MIC, and blue light was significantly (p < 0.001) enhanced when both agents were applied in combination compared to each agent alone. Similarly, synergistic interactions were observed when AgNPs were combined with amoxicillin, azithromycin, clarithromycin, or linezolid in 30-40 % of the double combinations, with no antagonistic interaction observed against the tested isolates. Combination of the AgNPs with vancomycin did not result in enhanced killing against any of the isolates tested. The antimicrobial activity against MRSA isolates was significantly enhanced in triple combinations of AgNPs, blue light, and antibiotic, compared to treatments involving one or two agents. 
The bactericidal activities were highest when azithromycin or clarithromycin was included in the triple therapy, compared to the other antibiotics tested. Conclusions: A new strategy combining AgNPs, blue light, and antibiotics can be used to combat serious infections caused by MRSA. This triple therapy may include antibiotics that have been proven ineffective against MRSA. The suggested approach would be useful to face fast-growing drug resistance amid the slow development of new antimicrobial agents, and to preserve last-resort antibiotics such as vancomycin.
15,556
436
21
[ "agnps", "isolates", "light", "combination", "blue light", "blue", "ml", "azithromycin", "mrsa", "mic" ]
[ "test", "test" ]
null
[CONTENT] Nonconventional antimicrobials | Double and triple combinations | Multidrug-resistance | Checkerboard assay | Linezolid | Vancomycin | Azithromycin | Clarithromycin [SUMMARY]
null
[CONTENT] Amoxicillin | Anti-Bacterial Agents | Azithromycin | Clarithromycin | Combined Modality Therapy | Drug Combinations | Drug Synergism | Light | Linezolid | Metal Nanoparticles | Methicillin-Resistant Staphylococcus aureus | Microbial Sensitivity Tests | Phototherapy | Silver | Vancomycin [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] agnps | isolates | light | combination | blue light | blue | ml | azithromycin | mrsa | mic [SUMMARY]
null
[CONTENT] antimicrobial | pathogens | infections | potential | alternative | caused | antimicrobial agents | infections caused | agnps | resistant [SUMMARY]
null
[CONTENT] agnps | combination | isolates | azithromycin | light | µg ml | µg | blue light | blue | ml [SUMMARY]
[CONTENT] therapy | triple therapy | mdr bacteria | new | triple | infections caused | caused | infections | mdr | antibiotics [SUMMARY]
[CONTENT] agnps | light | isolates | combination | ml | blue | blue light | mrsa | azithromycin | mic [SUMMARY]
[CONTENT] AgNPs ||| AgNPs | ten | methicillin | Staphylococcus aureus [SUMMARY]
null
[CONTENT] AgNPs ||| AgNPs | 30-40 ||| AgNPs ||| AgNPs | one | two ||| [SUMMARY]
[CONTENT] AgNPs ||| MRSA ||| [SUMMARY]
[CONTENT] AgNPs ||| AgNPs | ten | methicillin | Staphylococcus aureus ||| AgNPs | 1/2 | MIC | 24 ||| 460 | 250 | 1 ||| ||| AgNPs ||| AgNPs | MRSA ||| ||| AgNPs ||| AgNPs | 30-40 ||| AgNPs ||| AgNPs | one | two ||| ||| AgNPs ||| MRSA ||| [SUMMARY]
Corticosteroid use and risk of orofacial clefts.
24777675
Maternal use of corticosteroids during early pregnancy has been inconsistently associated with orofacial clefts in the offspring. A previous report from the National Birth Defect Prevention Study (NBDPS), using data from 1997 to 2002, found an association with cleft lip and palate (odds ratio, 1.7; 95% confidence interval [CI], 1.1-2.6), but not cleft palate only (odds ratio, 0.5, 95%CI, 0.2-1.3). From 2003 to 2009, the study population more than doubled in size, and our objective was to assess this association in the more recent data.
BACKGROUND
The NBDPS is an ongoing multi-state population-based case-control study of birth defects, with ascertainment of cases and controls born since 1997. We assessed the association of corticosteroids and orofacial clefts using data from 2372 cleft cases and 5922 controls born from 2003 to 2009. Maternal corticosteroid exposure was based on telephone interviews.
METHODS
The overall odds ratio for corticosteroid use and cleft lip and palate in the new data was 1.0 (95% CI, 0.7-1.4). There was little evidence of associations between specific corticosteroid components or timing and clefts.
RESULTS
In contrast to the 1997 to 2002 data from the NBDPS, the 2003 to 2009 data show no association between maternal corticosteroid use and cleft lip and palate in the offspring.
CONCLUSION
[ "Adrenal Cortex Hormones", "Black People", "Case-Control Studies", "Cleft Lip", "Cleft Palate", "Female", "Hispanic or Latino", "Humans", "Infant, Newborn", "Maternal Exposure", "Odds Ratio", "Pregnancy", "Risk Factors", "Surveys and Questionnaires", "United States", "White People" ]
4283705
Introduction
Orofacial clefts are one of the most common birth defects in humans, with a world birth prevalence of 1.7 per 1000 live births (Mossey et al., 2009). Orofacial clefts occur when the fusion of the lip and/or palate, which takes place during the first-trimester of pregnancy, is disrupted (Dixon et al., 2011). Corticosteroids are well-established as an experimental teratogen in animal models, causing cleft palate in mice (Fraser and Fainstat, 1951; Walker and Fraser, 1957). Several epidemiological studies have reported an association between corticosteroid use in early pregnancy in humans and delivering an infant with an orofacial cleft (Czeizel and Rockenbauer, 1997; Rodríguez-Pinilla and Luisa Martínez-Frías, 1998; Carmichael and Shaw, 1999; Edwards et al., 2003; Pradat et al., 2003; Carmichael et al., 2007), although others have not (Kallen et al., 1999; Källén, 2003; Hviid and Mølgaard-Nielsen, 2011). The anti-inflammatory and immune modulating functions of corticosteroids are effective in the treatment of conditions such as asthma, allergic reactions, eczema, psoriasis, rheumatoid arthritis, and inflammatory bowel disease. These conditions are common and often affect women of reproductive age; however, the safety of corticosteroid medication during pregnancy is uncertain. We previously reported that maternal corticosteroid use was associated with increased risk of cleft lip with or without palate (CLP) (odds ratio [OR], 1.7; 95% confidence interval [CI], 1.1–2.6) but not cleft palate only (CPO) (OR, 0.5; 95%CI, 0.2–1.3), using data from the National Birth Defects Prevention Study (NBDPS) investigating deliveries from October 1997 through December 2002, including mothers of 1141 infants with CLP, 628 infants with CPO and 4143 controls (Carmichael et al., 2007). Since then, the study population has more than doubled in size, allowing the largest study of corticosteroids and clefts to date. 
Given continued uncertainty about the association between orofacial clefts and corticosteroid medications and the tentative findings from our earlier analyses, our objective here was to assess the association using larger and more recent NBDPS data.
null
null
Results
From 2003 to 2009, the NBDPS enrolled mothers of 1577 children with CLP, 795 children with CPO, and 5922 control children. Demographic characteristics are outlined in Table 1. A total of 89% of the CLP cases (n = 1402) and 79% of CPO cases (n = 631) were isolated. Any use of corticosteroids from four weeks before through 12 weeks after conception was reported by mothers of 35 (2.3%) infants with CLP (OR, 1.0; 95% CI, 0.7–1.4), mothers of 13 (1.7%) infants with CPO (OR, 0.7; 95% CI, 0.4–1.2), and mothers of 137 (2.4%) control infants (Table 2). There was no association by route of administration (systemic, nasal/inhaled, topical, or other use) or by specific corticosteroid component (prednisone, beclomethasone, budesonide, fluticasone, triamcinolone). Furthermore, we did not find associations at more specific time windows of exposure (Table 3). Characteristics of mothers of 1577 infants with cleft lip with or without cleft palate (CLP), 795 infants with cleft palate only (CPO), and 5922 nonmalformed control infants, NBDPS deliveries 2003–2009. Percentages may not add to 100% because of rounding. During the month before and first 3 months of pregnancy. Association of risk of cleft lip and palate (CLP) and cleft palate only (CPO) among offspring born to women who used maternal corticosteroid medications from 4 weeks before through 12 weeks after conception, by route of administration and component corticosteroid, NBDPS deliveries 2003 to 2009. Reference groups for the comparisons for the two time intervals included: 2003–09: 1542 CLP cases, 782 CPO cases, and 5785 controls with no exposure from 4 weeks before through 12 weeks after conception; 1997–2009: 2662 CLP cases, 1410 CPO cases, and 9849 controls with no exposure from 4 weeks before through 12 weeks after conception. Odds ratios were estimated only if there were at least two exposed cases and two exposed controls. 
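The reported odds ratios can be approximated from the exposed/unexposed counts given above. The sketch below computes the crude odds ratio and a Wald 95% confidence interval for the CLP comparison (35 exposed vs. 1542 unexposed cases, 137 exposed vs. 5785 unexposed controls); note the published estimates are adjusted for covariates, so this unadjusted calculation only approximates them:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table.

    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# CLP counts from the 2003-2009 comparison described above.
or_, lo, hi = odds_ratio_ci(35, 1542, 137, 5785)
print(f"OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR=0.96 (95% CI 0.66-1.40)
```

The crude result (about 1.0, CI 0.7–1.4 after rounding) agrees with the adjusted estimate reported in the text, which is expected when the adjustment covariates are only weakly associated with exposure.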
Association of risk of cleft lip and palate (CLP) among offspring born to women who used corticosteroids, by timing of exposure, NBDPS deliveries 2003–2009. Reference groups for the comparisons for the two time intervals included: 2003–09: 1542 CLP cases, 782 CPO cases, and 5785 controls with no exposure from 4 weeks before through 12 weeks after conception; 1997–2009: 2662 CLP cases, 1410 CPO cases, and 9849 controls with no exposure from 4 weeks before through 12 weeks after conception. Odds ratios were estimated only if there were at least two exposed cases and two exposed controls. By combining the earlier data with the more recent data, the total cohort included mothers of 2731 infants with CLP, 1429 infants with CPO, and 10063 controls, delivered from October 1997 through December 2009. Mothers of 69 (2.6%) infants with CLP (OR, 1.2; 95% CI, 0.9–1.6), 19 (1.3%) infants with CPO (OR, 0.6; 95% CI, 0.4–1.0), and 214 (2.1%) controls reported using any corticosteroids from 4 weeks before through 12 weeks after conception (Table 2). We did not find an association by route of administration or corticosteroid component in the combined data, with the possible exception of prednisone (OR, 1.9; 95% CI, 1.0–3.7) (Table 2). Results by time window of exposure were inconsistent (Table 3). For CLP, odds ratios ranged from 2.8 (95% CI, 1.3–5.9) for exposures only during weeks 1 to 4 and 5 to 8 after conception to 0.5 (95% CI, 0.1–1.6) for exposures during weeks 9 to 12. 
For analyses of any corticosteroid use we adjusted for maternal race-ethnicity (non-Hispanic white, non-Hispanic black, Hispanic, other, and unknown), education (<high school graduation, high school graduation, 1–3 years of college, ≥4 years of college, and unknown), intake of folic acid (any, none, and unknown), smoking (any [active], none, unknown), and study center (Arkansas, California, Georgia, Iowa, Massachusetts, New Jersey, New York, North Carolina, Texas, and Utah), and we excluded nonisolated cases in the pooled data. We conducted additional analyses restricted to states that participated in the study for the whole time period (Arkansas, California, Georgia, Iowa, Massachusetts, New York, and Texas). These modifications did not appreciably change our estimates. Length of time to interview was slightly shorter for mothers reporting corticosteroid use. This was true for both cases (mean time 9.2 months for mothers reporting corticosteroid use and 10.5 for no use) and controls (mean time 8.2 months for mothers reporting corticosteroid use and 9.1 for no use). The main results from the two time periods (1997–2002 vs. 2003–2009) are illustrated in Figure 1. Association of risk of cleft lip and palate (CLP) among offspring born to women who used maternal corticosteroid medications from 4 weeks before through 12 weeks after conception, comparing NBDPS deliveries 1997 to 2002 versus 2003 to 2009. Results are presented on a logarithmic scale.
Conclusion
Maternal use of corticosteroids is not associated with delivering an infant with an orofacial cleft in the NBDPS. This analysis is consistent with recent results from large population-based studies (Källén, 2003; Hviid and Mølgaard-Nielsen, 2011; Skuladottir et al., 2013). These data help to inform the clinical risk-benefit decision for use of corticosteroids during the first trimester of pregnancy.
[ "Introduction", "Conclusion" ]
[ "Orofacial clefts are one of the most common birth defects in humans, with a world birth prevalence of 1.7 per 1000 live births (Mossey et al., 2009). Orofacial clefts occur when the fusion of the lip and/or palate, which takes place during the first-trimester of pregnancy, is disrupted (Dixon et al., 2011). Corticosteroids are well-established as an experimental teratogen in animal models, causing cleft palate in mice (Fraser and Fainstat, 1951; Walker and Fraser, 1957). Several epidemiological studies have reported an association between corticosteroid use in early pregnancy in humans and delivering an infant with an orofacial cleft (Czeizel and Rockenbauer, 1997; Rodríguez-Pinilla and Luisa Martínez-Frías, 1998; Carmichael and Shaw, 1999; Edwards et al., 2003; Pradat et al., 2003; Carmichael et al., 2007), although others have not (Kallen et al., 1999; Källén, 2003; Hviid and Mølgaard-Nielsen, 2011).\nThe anti-inflammatory and immune modulating functions of corticosteroids are effective in the treatment of conditions such as asthma, allergic reactions, eczema, psoriasis, rheumatoid arthritis, and inflammatory bowel disease. These conditions are common and often affect women of reproductive age; however, the safety of corticosteroid medication during pregnancy is uncertain.\nWe previously reported that maternal corticosteroid use was associated with increased risk of cleft lip with or without palate (CLP) (odds ratio [OR], 1.7; 95% confidence interval [CI], 1.1–2.6) but not cleft palate only (CPO) (OR, 0.5; 95%CI, 0.2–1.3), using data from the National Birth Defects Prevention Study (NBDPS) investigating deliveries from October 1997 through December 2002, including mothers of 1141 infants with CLP, 628 infants with CPO and 4143 controls (Carmichael et al., 2007). Since then, the study population has more than doubled in size, allowing the largest study of corticosteroids and clefts to date. 
Given continued uncertainty about the association between orofacial clefts and corticosteroid medications and the tentative findings from our earlier analyses, our objective here was to assess the association using larger and more recent NBDPS data.", "Maternal use of corticosteroids is not associated with delivering an infant with an orofacial cleft in the NBDPS. This analysis is consistent with recent results from large population-based studies (Källén, 2003; Hviid and Mølgaard-Nielsen, 2011; Skuladottir et al., 2013). These data help to inform the clinical risk-benefit decision for use of corticosteroids during the first trimester of pregnancy." ]
[ null, null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Conclusion" ]
[ "Orofacial clefts are one of the most common birth defects in humans, with a world birth prevalence of 1.7 per 1000 live births (Mossey et al., 2009). Orofacial clefts occur when the fusion of the lip and/or palate, which takes place during the first-trimester of pregnancy, is disrupted (Dixon et al., 2011). Corticosteroids are well-established as an experimental teratogen in animal models, causing cleft palate in mice (Fraser and Fainstat, 1951; Walker and Fraser, 1957). Several epidemiological studies have reported an association between corticosteroid use in early pregnancy in humans and delivering an infant with an orofacial cleft (Czeizel and Rockenbauer, 1997; Rodríguez-Pinilla and Luisa Martínez-Frías, 1998; Carmichael and Shaw, 1999; Edwards et al., 2003; Pradat et al., 2003; Carmichael et al., 2007), although others have not (Kallen et al., 1999; Källén, 2003; Hviid and Mølgaard-Nielsen, 2011).\nThe anti-inflammatory and immune modulating functions of corticosteroids are effective in the treatment of conditions such as asthma, allergic reactions, eczema, psoriasis, rheumatoid arthritis, and inflammatory bowel disease. These conditions are common and often affect women of reproductive age; however, the safety of corticosteroid medication during pregnancy is uncertain.\nWe previously reported that maternal corticosteroid use was associated with increased risk of cleft lip with or without palate (CLP) (odds ratio [OR], 1.7; 95% confidence interval [CI], 1.1–2.6) but not cleft palate only (CPO) (OR, 0.5; 95%CI, 0.2–1.3), using data from the National Birth Defects Prevention Study (NBDPS) investigating deliveries from October 1997 through December 2002, including mothers of 1141 infants with CLP, 628 infants with CPO and 4143 controls (Carmichael et al., 2007). Since then, the study population has more than doubled in size, allowing the largest study of corticosteroids and clefts to date. 
Given continued uncertainty about the association between orofacial clefts and corticosteroid medications and the tentative findings from our earlier analyses, our objective here was to assess the association using larger and more recent NBDPS data.", "We used data from the NBDPS, a population-based, multi-center case–control study of birth defects. Information on deliveries taking place from October 1997 through December 2009 was collected from the 10 NBDPS study centers (Arkansas, California, Georgia, Iowa, Massachusetts, New Jersey, New York, North Carolina, Texas, and Utah), although not all study sites contributed for all the study years. The study was approved by institutional review boards of the participating centers and the Centers for Disease Control and Prevention. More details on study methods and its surveillance systems can be found elsewhere (Yoon et al., 2001; Rasmussen et al., 2003).\nInfants or fetuses with CLP or CPO were considered cases and analyzed separately. Case status was ascertained either through clinical or surgical records or autopsy reports. Medical records for all cases were assessed by a clinical geneticist who ensured that they fulfilled the eligibility criteria. Cases were ineligible if their clefts were believed to result from another defect (e.g., holoprosencephaly) or had a recognized or strongly suspected single-gene disorder or chromosomal abnormality. Cases were considered isolated if there were no accompanying major unrelated birth defects or as nonisolated if more than one additional major unrelated defect was present. 
Controls (live born infants, without birth defects) were randomly selected from hospital birth records or birth certificates at each study center.\nMothers were interviewed 6 weeks to 24 months after estimated date of delivery, using computer-assisted telephone interviews in English or Spanish; median time between estimated date of delivery and interview was 9.0 months for case mothers (interquartile range 8.0 months) and 8.0 months for controls (interquartile range 7.0 months). Overall participation from 1997 to 2009 was 72% for eligible mothers of infants with clefts and 65% for control mothers (participation in the two time periods declined from 76% to 67% for eligible mothers of infants with clefts and 68% to 61% for control mothers, with an overall decline from 70% to 63%).\nIn the questionnaires the mothers were asked whether they had specific medical conditions before or during pregnancy and then what medications they used to treat them. Mothers were also asked to list any other medication they had used that was not captured in response to the specific questions; indication was not reported for responses to this question. Mothers were asked for duration and frequency of use for each medication used from 12 weeks before conception to delivery. Medications were coded according to the Slone Epidemiology Center Drug Dictionary. We focused on periconceptional corticosteroid use by any administration route and component (systemic, nasal/inhaled and topical), defined as use that occurred between 4 weeks before through 12 weeks after conception.\nWe investigated the association between any corticosteroid use during the periconceptional period compared with no use. We also explored whether there was an association with specific timing of exposure, mode of administration, or corticosteroid component. Logistic regression models in SAS software were used to estimate ORs and their corresponding 95% CIs. 
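As a sketch of the exposure-window logic defined above (the function names and implementation are ours, not the study's), a reported day of medication use can be classified relative to the conception date into the periconceptional window and the timing bins used in the analysis:

```python
from datetime import date, timedelta

def exposure_window(conception: date):
    """Periconceptional window used in the analysis:
    4 weeks before through 12 weeks after conception."""
    return conception - timedelta(weeks=4), conception + timedelta(weeks=12)

def timing_bin(conception: date, use_day: date):
    """Classify a day of medication use into timing bins
    (weeks counted relative to conception)."""
    days = (use_day - conception).days
    if days < -28 or days >= 84:
        return "outside window"
    if days < 0:
        return "4 weeks before conception"
    week = days // 7 + 1  # weeks 1..12 after conception
    if week <= 4:
        return "weeks 1-4"
    if week <= 8:
        return "weeks 5-8"
    return "weeks 9-12"

print(timing_bin(date(2005, 1, 1), date(2005, 2, 15)))  # weeks 5-8
```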
ORs were only calculated if there were two or more exposed cases and two or more exposed controls. We also examined associations after adjustment for several covariates (maternal race-ethnicity, education, intake of folic acid-containing supplements, smoking, and study center) and after exclusion of nonisolated cases. We present results for deliveries from January 2003 through December 2009, and for pooled data for deliveries from October 1997 through December 2009.", "From 2003 to 2009, the NBDPS enrolled mothers of 1577 children with CLP, 795 children with CPO, and 5922 control children. Demographic characteristics are outlined in Table 1. A total of 89% of the CLP cases (n = 1402) and 79% of CPO cases (n = 631) were isolated. Any use of corticosteroids 4 weeks before through 12 weeks after conception was reported by mothers of 35 (2.3%) infants with CLP (OR, 1.0; 95% CI, 0.7–1.4) and mothers of 13 (1.7%) infants with CPO (OR, 0.7; 95% CI, 0.4–1.2), and by mothers of 137 (2.4%) control infants (Table 2). There was no association by route of administration (systemic, nasal/inhaled, topical, or other use) or specific components of corticosteroids (prednisone, beclomethasone, budesonide, fluticasone, triamcinolone). 
Furthermore, we did not find associations at more specific time windows of exposure (Table 3).\nTable 1. Characteristics of mothers of 1577 infants with cleft lip with or without cleft palate (CLP), 795 infants with cleft palate only (CPO), and 5922 nonmalformed control infants\nNBDPS deliveries 2003–2009.\nPercentages may not add to 100% because of rounding.\nDuring the month before and first 3 months of pregnancy.\nTable 2. Association of Risk of Cleft Lip and Palate (CLP) and Cleft Palate Only (CPO) among Offspring Born to Women Who Used Maternal Corticosteroid Medications from 4 Weeks before through 12 Weeks after Conception, by Route of Administration and Component Corticosteroid\nNBDPS deliveries 2003 to 2009*.\nReference groups for the comparisons for the two time intervals included: 2003-09: 1542 CLP cases, 782 CPO cases, and 5785 controls with no exposure from 4 weeks before through 12 weeks after conception; 1997–2009: 2662 CLP cases, 1410 CPO cases, and 9849 controls with no exposure from 4 weeks before through 12 weeks after conception. Odds ratios were estimated only if there were at least two exposed cases and two exposed controls.\nTable 3. Association of Risk of Cleft Lip and Palate (CLP) among Offspring Born to Women Who Used Corticosteroids, by Timing of Exposure\nNBDPS Deliveries 2003–2009.\nReference groups for the comparisons for the two time intervals included: 2003-09: 1542 CLP cases, 782 CPO cases, and 5785 controls with no exposure from 4 weeks before through 12 weeks after conception; 1997–2009: 2662 CLP cases, 1410 CPO cases, and 9849 controls with no exposure from 4 weeks before through 12 weeks after conception. Odds ratios were estimated only if there were at least two exposed cases and two exposed controls.\nBy combining the earlier data with more recent data, the total cohort included mothers of 2731 infants with CLP, 1429 infants with CPO, and 10063 controls, delivered from October 1997 through December 2009. 
Mothers of 69 (2.6%) infants with CLP (OR, 1.2; 95% CI, 0.9–1.6), 19 (1.3%) infants with CPO (OR, 0.6; 95% CI, 0.4–1.0), and 214 (2.1%) controls reported using any corticosteroids from 4 weeks before through 12 weeks after conception (Table 2). We did not find an association by route of administration or component of corticosteroid in the combined data, with the possible exception of prednisone (OR, 1.9; 95% CI, 1.0–3.7) (Table 2). Results by time window of exposure were inconsistent (Table 3). For CLP, odds ratios ranged from 2.8 (95% CI, 1.3–5.9) for exposures only during weeks 1 to 4 and 5 to 8 after conception to 0.5 (95% CI, 0.1–1.6) for exposures during weeks 9 to 12.\nFor analyses of any corticosteroid use we adjusted for maternal race-ethnicity (Non-Hispanic white, Non-Hispanic black, Hispanic, Other, and unknown), education (<High school graduation, High school graduation, 1–3 years of college, ≥4 years of college, and unknown), intake of folic acid (any, none, and unknown), smoking (any [active], none, unknown), and study center (Arkansas, California, Georgia, Iowa, Massachusetts, New Jersey, New York, North Carolina, Texas, and Utah), and we excluded nonisolated cases in the pooled data. We conducted additional analyses restricting to states that participated in the study for the whole time period (Arkansas, California, Georgia, Iowa, Massachusetts, New York, and Texas). These modifications did not appreciably change our estimates. Length of time to interview was slightly shorter for mothers reporting corticosteroid use. This was true for both cases (mean time 9.2 months for mothers reporting corticosteroid use and 10.5 months for no use) and controls (mean time 8.2 months for mothers reporting corticosteroid use and 9.1 months for no use).\nThe main results from the two time periods (1997–2002 vs. 
2003–2009) are illustrated in Figure 1.\nAssociation of risk of cleft lip and palate (CLP) among offspring born to women who used maternal corticosteroid medications from 4 weeks before through 12 weeks after conception, comparing NBDPS deliveries 1997 to 2002 versus 2003 to 2009. Results are presented on a logarithmic scale.", "Recent data from the NBDPS provided no support for an association between maternal corticosteroid use during early pregnancy and delivering an infant with an orofacial cleft. This is in contrast to results from the first 6 years of the NBDPS, for which there was an association with CLP but not CPO (Carmichael et al., 2007). The component of corticosteroid most strongly associated with delivering an infant with CLP in the data from 1997 to 2002 was systemic prednisone (OR, 2.7; 95% CI, 1.1–6.7).\nThis association was much weaker in the data from 2003 to 2009 (OR, 1.4; 95% CI, 0.6–3.6). When comparing corticosteroid use between the early and more recent data (1997–2002 vs. 2003–2009) there was increased use among controls (from 1.7% to 2.4%) and CPO cases (from 1.0% to 1.7%); however, there was a decrease among CLP cases (from 2.9% to 2.3%). This resulted in weaker associations with CLP in the more recent data. We are not aware of any significant changes in the study protocol, case ascertainment, or recruitment of cases or controls that could explain these differences. Given the small numbers, it is not possible to determine whether the frequency of reported use represents a trend or lies within the range of normal variation. We therefore suggest that our best estimates of the association of corticosteroids and orofacial clefts in the NBDPS data are those derived from the pooled data. Participation rates declined over the two time periods, from 70% to 63% overall. 
While this decline is substantial, participation is still in a range usually regarded as acceptable for observational studies.\nThe strongest association by component observed in the pooled data was for CLP and prednisone (OR, 1.9; 95% CI, 1.0–3.7). The number of control mothers reporting use of prednisone was stable at 0.3% during both time periods (1997–2002 and 2003–2009), while the proportion of case mothers (CLP) reporting prednisone went down from 0.7% to 0.4%. Given the substantial difference between the association in the earlier and later data (ORs of 2.7 vs. 1.4), and its marginal statistical significance, we recommend interpreting this result with caution. This also applies to associations between corticosteroid exposures by specific time period, because they were so variable. The strongest finding was for exposures during weeks 1 to 4 and 5 to 8 after conception (OR, 2.8; 95% CI, 1.3–5.9, for the pooled data). The OR in the early data was 7.3 (95% CI, 1.8–29.4) and in the later data 1.9 (95% CI, 0.7–5.0). Although we recommend interpreting these results with caution, an association by timing of exposure cannot be dismissed completely. For comparison, a large US study reported that 0.8% of women received first trimester prescriptions for systemic corticosteroids (Andrade et al., 2004), with actual medication use probably less than 100% (Olesen et al., 2001). Furthermore, the prevalence is similar to what has been found in the National Health and Nutrition Examination Survey (NHANES) during the years 1999 to 2008, where 0.5% of women aged 20 to 29 years and 0.6% of women aged 30 to 39 years reported use of oral corticosteroids (Overman et al., 2013). The NHANES data also indicate a trend toward lower prevalence of oral corticosteroid use from 1999 to 2008.\nThe earliest report of corticosteroids causing clefts was a study in mice (Fraser and Fainstat, 1951). 
Since then, studies have shown that corticosteroids are involved in cellular processes that lead to fusion of the palatal shelves, which can be disrupted by altering physiological corticosteroid levels (Pratt and Salomon, 1980; Piddington et al., 1983; Ziejewski et al., 2012). Teratogenicity can also vary across species (Nau, 1986). Such studies have involved systemic corticosteroids, at doses that are 15 to 150 times human doses; thus, their comparability to the human condition is uncertain.\nPrevious epidemiological studies on corticosteroid use during early pregnancy and the risk of delivering an infant with an orofacial cleft are outlined in Table 4. Systemic corticosteroid use in early pregnancy has been associated with delivering an infant with CLP in some previous epidemiological studies in humans (Czeizel and Rockenbauer, 1997; Rodríguez-Pinilla and Luisa Martínez-Frías, 1998; Carmichael and Shaw, 1999; Pradat et al., 2003; Carmichael et al., 2007), one of which also reported an association with CPO (Carmichael and Shaw, 1999). Studies from Denmark, Norway, and Sweden have found no association between systemic use in early pregnancy and orofacial clefts in the offspring, and a weak association with dermatological corticosteroids (Källén, 2003; Hviid and Mølgaard-Nielsen, 2011; Skuladottir et al., 2013). However, a recent population-based cohort study from the UK did not find an association between dermatological corticosteroids and clefts (Chi et al., 2013). In sum, the current literature is inconsistent regarding the association of first-trimester corticosteroid use and orofacial clefts in humans. 
The previous studies are limited by sample size, with the number of cases (CLP and CPO combined) ranging from 8 to 1232.\nTable 4. Summary of Previous Epidemiological Studies on Corticosteroid Use during Pregnancy and Risk of Orofacial Clefts\nThe NBDPS included 4072 pregnancies resulting in either CLP or CPO, with 23 (0.6%) mothers reporting systemic corticosteroid use, making it the largest study exploring this potential association to date. Other strengths include the population-based design and the detailed assessment of administration route, specific component used, and time window of exposure. We lacked information on dose and indication, which were limitations. Other potential limitations include recall bias (mean time to interview was slightly shorter for the mothers who reported corticosteroid use than the mothers who did not report use) and selection bias (participation was 72% for case mothers and 65% for control mothers). In the NBDPS questionnaire, there is no specific question for dermatological disease or treatment, and dermatological medication is consequently underreported and estimates are therefore inaccurate. Underreported use of other types of corticosteroids is also possible but difficult to determine.", "Maternal use of corticosteroids is not associated with delivering an infant with an orofacial cleft in the NBDPS. This analysis is consistent with recent results from large population-based studies (Källén, 2003; Hviid and Mølgaard-Nielsen, 2011; Skuladottir et al., 2013). These data help to inform the clinical risk-benefit decision for use of corticosteroids during the first trimester of pregnancy." ]
[ null, "materials|methods", "results", "discussion", null ]
[ "orofacial clefts", "cleft lip and palate", "corticosteroids", "birth defects", "pregnancy" ]
Introduction: Orofacial clefts are one of the most common birth defects in humans, with a world birth prevalence of 1.7 per 1000 live births (Mossey et al., 2009). Orofacial clefts occur when the fusion of the lip and/or palate, which takes place during the first trimester of pregnancy, is disrupted (Dixon et al., 2011). Corticosteroids are well established as an experimental teratogen in animal models, causing cleft palate in mice (Fraser and Fainstat, 1951; Walker and Fraser, 1957). Several epidemiological studies have reported an association between corticosteroid use in early pregnancy in humans and delivering an infant with an orofacial cleft (Czeizel and Rockenbauer, 1997; Rodríguez-Pinilla and Luisa Martínez-Frías, 1998; Carmichael and Shaw, 1999; Edwards et al., 2003; Pradat et al., 2003; Carmichael et al., 2007), although others have not (Kallen et al., 1999; Källén, 2003; Hviid and Mølgaard-Nielsen, 2011). The anti-inflammatory and immune modulating functions of corticosteroids are effective in the treatment of conditions such as asthma, allergic reactions, eczema, psoriasis, rheumatoid arthritis, and inflammatory bowel disease. These conditions are common and often affect women of reproductive age; however, the safety of corticosteroid medication during pregnancy is uncertain. We previously reported that maternal corticosteroid use was associated with increased risk of cleft lip with or without palate (CLP) (odds ratio [OR], 1.7; 95% confidence interval [CI], 1.1–2.6) but not cleft palate only (CPO) (OR, 0.5; 95% CI, 0.2–1.3), using data from the National Birth Defects Prevention Study (NBDPS) investigating deliveries from October 1997 through December 2002, including mothers of 1141 infants with CLP, 628 infants with CPO, and 4143 controls (Carmichael et al., 2007). Since then, the study population has more than doubled in size, allowing the largest study of corticosteroids and clefts to date. 
Given continued uncertainty about the association between orofacial clefts and corticosteroid medications and the tentative findings from our earlier analyses, our objective here was to assess the association using larger and more recent NBDPS data. Materials and Methods: We used data from the NBDPS, a population-based, multi-center case–control study of birth defects. Information on deliveries taking place from October 1997 through December 2009 was collected from the 10 NBDPS study centers (Arkansas, California, Georgia, Iowa, Massachusetts, New Jersey, New York, North Carolina, Texas, and Utah), although not all study sites contributed for all the study years. The study was approved by institutional review boards of the participating centers and the Centers for Disease Control and Prevention. More details on study methods and its surveillance systems can be found elsewhere (Yoon et al., 2001; Rasmussen et al., 2003). Infants or fetuses with CLP or CPO were considered cases and analyzed separately. Case status was ascertained either through clinical or surgical records or autopsy reports. Medical records for all cases were assessed by a clinical geneticist who ensured that they fulfilled the eligibility criteria. Cases were ineligible if their clefts were believed to result from another defect (e.g., holoprosencephaly) or had a recognized or strongly suspected single-gene disorder or chromosomal abnormality. Cases were considered isolated if there were no accompanying major unrelated birth defects or as nonisolated if more than one additional major unrelated defect was present. Controls (live born infants, without birth defects) were randomly selected from hospital birth records or birth certificates at each study center. 
Mothers were interviewed 6 weeks to 24 months after estimated date of delivery, using computer-assisted telephone interviews in English or Spanish; median time between estimated date of delivery and interview was 9.0 months for case mothers (interquartile range 8.0 months) and 8.0 months for controls (interquartile range 7.0 months). Overall participation from 1997 to 2009 was 72% for eligible mothers of infants with clefts and 65% for control mothers (participation in the two time periods declined from 76% to 67% for eligible mothers of infants with clefts and 68% to 61% for control mothers, with an overall decline from 70% to 63%). In the questionnaires the mothers were asked whether they had specific medical conditions before or during pregnancy and then what medications they used to treat them. Mothers were also asked to list any other medication they had used that was not captured in response to the specific questions; indication was not reported for responses to this question. Mothers were asked for duration and frequency of use for each medication used from 12 weeks before conception to delivery. Medications were coded according to the Slone Epidemiology Center Drug Dictionary. We focused on periconceptional corticosteroid use by any administration route and component (systemic, nasal/inhaled and topical), defined as use that occurred between 4 weeks before through 12 weeks after conception. We investigated the association between any corticosteroid use during the periconceptional period compared with no use. We also explored whether there was an association with specific timing of exposure, mode of administration, or corticosteroid component. Logistic regression models in SAS software were used to estimate ORs and their corresponding 95% CIs. ORs were only calculated if there were two or more exposed cases and two or more exposed controls. 
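To make the estimation step concrete, here is a hypothetical, self-contained stand-in for the SAS logistic regression described above: a Newton-Raphson fit of logit P(case) = b0 + b1*exposed on grouped 2x2 counts, with no covariates. This is our illustrative sketch, not the study's code; with a single binary predictor, exp(b1) equals the crude odds ratio, so the pooled CLP counts reported in the Results reproduce the published OR of 1.2 (95% CI, 0.9–1.6).

```python
import math

def logistic_or(exp_cases, unexp_cases, exp_controls, unexp_controls, iters=25):
    """Fit logit P(case) = b0 + b1*exposed by Newton-Raphson on grouped
    2x2 counts; return exp(b1) with a Wald 95% CI. With one binary
    predictor this reproduces the crude odds ratio exactly."""
    # (exposure indicator, number of cases, group size)
    groups = [(1, exp_cases, exp_cases + exp_controls),
              (0, unexp_cases, unexp_cases + unexp_controls)]
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y, n in groups:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - n * p           # score for intercept
            g1 += (y - n * p) * x     # score for exposure coefficient
            w = n * p * (1.0 - p)     # Fisher information weight
            h00 += w; h01 += w * x; h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    se_b1 = math.sqrt(h00 / det)      # sqrt of [inverse information]_11
    return math.exp(b1), math.exp(b1 - 1.96 * se_b1), math.exp(b1 + 1.96 * se_b1)

# Pooled CLP counts from the Results: 69/2662 exposed/unexposed cases,
# 214/9849 exposed/unexposed controls
print([round(v, 1) for v in logistic_or(69, 2662, 214, 9849)])  # [1.2, 0.9, 1.6]
```

In the actual analysis the model additionally carried the covariates listed in the text, which is why the published adjusted and crude estimates can differ.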
We also examined associations after adjustment for several covariates (maternal race-ethnicity, education, intake of folic acid-containing supplements, smoking, and study center) and after exclusion of nonisolated cases. We present results for deliveries from January 2003 through December 2009, and for pooled data for deliveries from October 1997 through December 2009. Results: From 2003 to 2009, the NBDPS enrolled mothers of 1577 children with CLP, 795 children with CPO, and 5922 control children. Demographic characteristics are outlined in Table 1. A total of 89% of the CLP cases (n = 1402) and 79% of CPO cases (n = 631) were isolated. Any use of corticosteroids 4 weeks before through 12 weeks after conception was reported by mothers of 35 (2.3%) infants with CLP (OR, 1.0; 95% CI, 0.7–1.4) and mothers of 13 (1.7%) infants with CPO (OR, 0.7; 95% CI, 0.4–1.2), and by mothers of 137 (2.4%) control infants (Table 2). There was no association by route of administration (systemic, nasal/inhaled, topical, or other use) or specific components of corticosteroids (prednisone, beclomethasone, budesonide, fluticasone, triamcinolone). Furthermore, we did not find associations at more specific time windows of exposure (Table 3). Table 1. Characteristics of mothers of 1577 infants with cleft lip with or without cleft palate (CLP), 795 infants with cleft palate only (CPO), and 5922 nonmalformed control infants NBDPS deliveries 2003–2009. Percentages may not add to 100% because of rounding. During the month before and first 3 months of pregnancy. Table 2. Association of Risk of Cleft Lip and Palate (CLP) and Cleft Palate Only (CPO) among Offspring Born to Women Who Used Maternal Corticosteroid Medications from 4 Weeks before through 12 Weeks after Conception, by Route of Administration and Component Corticosteroid NBDPS deliveries 2003 to 2009*. 
Reference groups for the comparisons for the two time intervals included: 2003-09: 1542 CLP cases, 782 CPO cases, and 5785 controls with no exposure from 4 weeks before through 12 weeks after conception; 1997–2009: 2662 CLP cases, 1410 CPO cases, and 9849 controls with no exposure from 4 weeks before through 12 weeks after conception. Odds ratios were estimated only if there were at least two exposed cases and two exposed controls. Table 3. Association of Risk of Cleft Lip and Palate (CLP) among Offspring Born to Women Who Used Corticosteroids, by Timing of Exposure, NBDPS Deliveries 2003–2009. Reference groups for the comparisons for the two time intervals included: 2003-09: 1542 CLP cases, 782 CPO cases, and 5785 controls with no exposure from 4 weeks before through 12 weeks after conception; 1997–2009: 2662 CLP cases, 1410 CPO cases, and 9849 controls with no exposure from 4 weeks before through 12 weeks after conception. Odds ratios were estimated only if there were at least two exposed cases and two exposed controls. By combining the earlier data with more recent data, the total cohort included mothers of 2731 infants with CLP, 1429 infants with CPO, and 10063 controls, delivered from October 1997 through December 2009. Mothers of 69 (2.6%) infants with CLP (OR, 1.2; 95% CI, 0.9–1.6), 19 (1.3%) infants with CPO (OR, 0.6; 95% CI, 0.4–1.0), and 214 (2.1%) controls reported using any corticosteroids from 4 weeks before through 12 weeks after conception (Table 2). We did not find an association by route of administration or component of corticosteroid in the combined data, with the possible exception of prednisone (OR, 1.9; 95% CI, 1.0–3.7) (Table 2). Results by time window of exposure were inconsistent (Table 3). For CLP, odds ratios ranged from 2.8 (95% CI, 1.3–5.9) for exposures only during weeks 1 to 4 and 5 to 8 after conception to 0.5 (95% CI, 0.1–1.6) for exposures during weeks 9 to 12. 
For analyses of any corticosteroid use we adjusted for maternal race-ethnicity (Non-Hispanic white, Non-Hispanic black, Hispanic, Other, and unknown), education (<High school graduation, High school graduation, 1–3 years of college, ≥4 years of college, and unknown), intake of folic acid (any, none, and unknown), smoking (any [active], none, unknown), and study center (Arkansas, California, Georgia, Iowa, Massachusetts, New Jersey, New York, North Carolina, Texas, and Utah), and we excluded nonisolated cases in the pooled data. We conducted additional analyses restricting to states that participated in the study for the whole time period (Arkansas, California, Georgia, Iowa, Massachusetts, New York, and Texas). These modifications did not appreciably change our estimates. Length of time to interview was slightly shorter for mothers reporting corticosteroid use. This was true for both cases (mean time 9.2 months for mothers reporting corticosteroid use and 10.5 months for no use) and controls (mean time 8.2 months for mothers reporting corticosteroid use and 9.1 months for no use). The main results from the two time periods (1997–2002 vs. 2003–2009) are illustrated in Figure 1. Association of risk of cleft lip and palate (CLP) among offspring born to women who used maternal corticosteroid medications from 4 weeks before through 12 weeks after conception, comparing NBDPS deliveries 1997 to 2002 versus 2003 to 2009. Results are presented on a logarithmic scale. Discussion: Recent data from the NBDPS provided no support for an association between maternal corticosteroid use during early pregnancy and delivering an infant with an orofacial cleft. This is in contrast to results from the first 6 years of the NBDPS, for which there was an association with CLP but not CPO (Carmichael et al., 2007). The component of corticosteroid most strongly associated with delivering an infant with CLP in the data from 1997 to 2002 was systemic prednisone (OR, 2.7; 95% CI, 1.1–6.7). 
This association was much weaker in the data from 2003 to 2009; OR 1.4 (95% CI 0.6–3.6). When comparing corticosteroid use between the early and more recent data (1997–2002 vs. 2003–2009), there was increased use among controls (from 1.7% to 2.4%) and CPO cases (from 1.0% to 1.7%); however, there was a decrease among CLP cases (from 2.9% to 2.3%). This resulted in weaker associations with CLP in the more recent data. We are not aware of any significant changes in the study protocol, case ascertainment, or recruitment of cases or controls that could explain these differences. Given the small numbers, it is not possible to determine whether the frequency of reported use represents a trend or lies within the range of normal variation. We therefore suggest that our best estimates of the association of corticosteroids and orofacial clefts in the NBDPS data are those derived from the pooled data. Participation rates declined over the two time periods, from 70% to 63% overall. While this decline is substantial, participation is still in a range usually regarded as acceptable for observational studies. The strongest association by component observed in the pooled data was for CLP and prednisone (OR, 1.9; 95% CI, 1.0–3.7). The number of control mothers reporting use of prednisone was stable at 0.3% during both time periods (1997–2002 and 2003–2009), while the proportion of case mothers (CLP) reporting prednisone went down from 0.7% to 0.4%. Given the substantial difference between the associations in the earlier and later data (ORs of 2.7 vs. 1.4), and their marginal statistical significance, we recommend interpreting this result with caution. This also applies to associations between corticosteroid exposures by specific time period, because they were so variable. The strongest finding was for exposures during weeks 1 to 4 and 5 to 8 after conception (OR, 2.8; 95% CI, 1.3–5.9, for the pooled data). The OR in the early data was 7.3 (95% CI, 1.8–29.4) and in the later data 1.9 (95% CI, 0.7–5.0). 
Although we recommend interpreting these results with caution, an association by timing of exposure cannot be dismissed completely. For comparison, a large US study reported that 0.8% of women received first-trimester prescriptions for systemic corticosteroids (Andrade et al., 2004), with actual medication use probably less than 100% (Olesen et al., 2001). Furthermore, the prevalence is similar to what has been found in the National Health and Nutrition Examination Study (NHANES) during the years 1999 to 2008, where 0.5% of women aged 20 to 29 years and 0.6% of women aged 30 to 39 years reported use of oral corticosteroids (Overman et al., 2013). The NHANES data also indicate a trend toward lower prevalence of oral corticosteroid use from 1999 to 2008. The earliest report of corticosteroids causing clefts was a study in mice (Fraser and Fainstat, 1951). Since then, studies have shown that corticosteroids are involved in cellular processes that lead to fusion of the palatal shelves, which can be disrupted by altering physiological corticosteroid levels (Pratt and Salomon, 1980; Piddington et al., 1983; Ziejewski et al., 2012). Studies have shown that teratogenicity can vary across species (Nau, 1986). Such studies have involved systemic corticosteroids, at doses that are 15 to 150 times human doses; thus, their comparability to the human condition is uncertain. Previous epidemiological studies on corticosteroid use during early pregnancy and the risk of delivering an infant with an orofacial cleft are outlined in Table 3. Systemic corticosteroid use in early pregnancy has been associated with delivering an infant with CLP in some previous epidemiological studies in humans (Czeizel and Rockenbauer, 1997; Rodríguez-Pinilla and Luisa Martínez-Frías, 1998; Carmichael and Shaw, 1999; Pradat et al., 2003; Carmichael et al., 2007), one of which also reported an association with CPO (Carmichael and Shaw, 1999). 
Studies from Denmark, Norway, and Sweden have found no association between systemic use in early pregnancy and orofacial clefts in the offspring, and a weak association with dermatological corticosteroids (Källén, 2003; Hviid and Mølgaard-Nielsen, 2011; Skuladottir et al., 2013). However, a recent population-based cohort study from the UK did not find an association between dermatological corticosteroids and clefts (Chi et al., 2013). In sum, the current literature is inconsistent regarding the association of first-trimester corticosteroid use and orofacial clefts in humans. The previous studies are limited by sample size, with the number of cases (CLP and CPO combined) ranging from 8 to 1232. Summary of Previous Epidemiological Studies on Corticosteroid Use during Pregnancy and Risk of Orofacial Clefts. The NBDPS included 4072 pregnancies resulting in either CLP or CPO, with 23 (0.6%) mothers reporting systemic corticosteroid use, making it the largest study exploring this potential association to date. Other strengths include the population-based design and the detailed assessment of corticosteroid mode, the specific component used, and the time windows of exposure. We lacked information on dose and indication, which was a limitation. Other potential limitations include recall bias (mean time to interview was slightly shorter for the mothers who reported corticosteroid use than for the mothers who did not report use) and selection bias (participation was 72% for case mothers and 65% for control mothers). In the NBDPS questionnaire, there is no specific question about dermatological disease or treatment; dermatological medication is consequently underreported and estimates are therefore inaccurate. Under-reported use of other types of corticosteroids is also possible but difficult to determine. Conclusion: Maternal use of corticosteroids is not associated with delivering an infant with an orofacial cleft in the NBDPS. 
This analysis is consistent with recent results from large population-based studies (Källén, 2003; Hviid and Mølgaard-Nielsen, 2011; Skuladottir et al., 2013). These data help to inform the clinical risk-benefit decision for use of corticosteroids during the first trimester of pregnancy.
Background: Maternal use of corticosteroids during early pregnancy has been inconsistently associated with orofacial clefts in the offspring. A previous report from the National Birth Defect Prevention Study (NBDPS), using data from 1997 to 2002, found an association with cleft lip and palate (odds ratio, 1.7; 95% confidence interval [CI], 1.1-2.6), but not cleft palate only (odds ratio, 0.5, 95%CI, 0.2-1.3). From 2003 to 2009, the study population more than doubled in size, and our objective was to assess this association in the more recent data. Methods: The NBDPS is an ongoing multi-state population-based case-control study of birth defects, with ascertainment of cases and controls born since 1997. We assessed the association of corticosteroids and orofacial clefts using data from 2372 cleft cases and 5922 controls born from 2003 to 2009. Maternal corticosteroid exposure was based on telephone interviews. Results: The overall association of corticosteroids and cleft lip and palate in the new data was 1.0 (95% CI, 0.7-1.4). There was little evidence of associations between specific corticosteroid components or timing and clefts. Conclusions: In contrast to the 1997 to 2002 data from the NBDPS, the 2003 to 2009 data show no association between maternal corticosteroid use and cleft lip and palate in the offspring.
Introduction: Orofacial clefts are one of the most common birth defects in humans, with a world birth prevalence of 1.7 per 1000 live births (Mossey et al., 2009). Orofacial clefts occur when the fusion of the lip and/or palate, which takes place during the first trimester of pregnancy, is disrupted (Dixon et al., 2011). Corticosteroids are well-established as an experimental teratogen in animal models, causing cleft palate in mice (Fraser and Fainstat, 1951; Walker and Fraser, 1957). Several epidemiological studies have reported an association between corticosteroid use in early pregnancy in humans and delivering an infant with an orofacial cleft (Czeizel and Rockenbauer, 1997; Rodríguez-Pinilla and Luisa Martínez-Frías, 1998; Carmichael and Shaw, 1999; Edwards et al., 2003; Pradat et al., 2003; Carmichael et al., 2007), although others have not (Kallen et al., 1999; Källén, 2003; Hviid and Mølgaard-Nielsen, 2011). The anti-inflammatory and immune modulating functions of corticosteroids are effective in the treatment of conditions such as asthma, allergic reactions, eczema, psoriasis, rheumatoid arthritis, and inflammatory bowel disease. These conditions are common and often affect women of reproductive age; however, the safety of corticosteroid medication during pregnancy is uncertain. We previously reported that maternal corticosteroid use was associated with increased risk of cleft lip with or without palate (CLP) (odds ratio [OR], 1.7; 95% confidence interval [CI], 1.1–2.6) but not cleft palate only (CPO) (OR, 0.5; 95% CI, 0.2–1.3), using data from the National Birth Defects Prevention Study (NBDPS) investigating deliveries from October 1997 through December 2002, including mothers of 1141 infants with CLP, 628 infants with CPO and 4143 controls (Carmichael et al., 2007). Since then, the study population has more than doubled in size, allowing the largest study of corticosteroids and clefts to date. 
Given continued uncertainty about the association between orofacial clefts and corticosteroid medications and the tentative findings from our earlier analyses, our objective here was to assess the association using larger and more recent NBDPS data. Conclusion: Maternal use of corticosteroids is not associated with delivering an infant with an orofacial cleft in the NBDPS. This analysis is consistent with recent results from large population-based studies (Källén, 2003; Hviid and Mølgaard-Nielsen, 2011; Skuladottir et al., 2013). These data help to inform the clinical risk-benefit decision for use of corticosteroids during the first trimester of pregnancy.
3,366
263
5
[ "use", "corticosteroid", "mothers", "clp", "cases", "association", "data", "weeks", "2003", "study" ]
[ "test", "test" ]
null
[CONTENT] orofacial clefts | cleft lip and palate | corticosteroids | birth defects | pregnancy [SUMMARY]
null
[CONTENT] Adrenal Cortex Hormones | Black People | Case-Control Studies | Cleft Lip | Cleft Palate | Female | Hispanic or Latino | Humans | Infant, Newborn | Maternal Exposure | Odds Ratio | Pregnancy | Risk Factors | Surveys and Questionnaires | United States | White People [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] use | corticosteroid | mothers | clp | cases | association | data | weeks | 2003 | study [SUMMARY]
null
[CONTENT] palate | clefts | orofacial | birth | orofacial clefts | carmichael | cleft | corticosteroid | common | inflammatory [SUMMARY]
null
[CONTENT] weeks | cases | clp | 12 | infants | 12 weeks | weeks 12 | time | weeks conception | 12 weeks conception [SUMMARY]
[CONTENT] use corticosteroids | corticosteroids | data help inform | data help inform clinical | use corticosteroids associated delivering | use corticosteroids trimester | use corticosteroids trimester pregnancy | infant orofacial cleft nbdps | recent results large population | recent results large [SUMMARY]
[CONTENT] use | corticosteroid | mothers | weeks | cases | corticosteroids | clp | association | study | data [SUMMARY]
[CONTENT] ||| the National Birth Defect Prevention Study | 1997 to 2002 | 1.7 | 95% ||| CI | 1.1-2.6 | 0.5 | 0.2 ||| 2003 to 2009 [SUMMARY]
null
[CONTENT] 1.0 | 95% | CI | 0.7-1.4 ||| [SUMMARY]
[CONTENT] 1997 to 2002 | NBDPS | 2003 to 2009 [SUMMARY]
[CONTENT] ||| the National Birth Defect Prevention Study | 1997 to 2002 | 1.7 | 95% ||| CI | 1.1-2.6 | 0.5 | 0.2 ||| 2003 to 2009 ||| 1997 ||| 2372 | 5922 | 2003 to 2009 ||| ||| ||| 1.0 | 95% | CI | 0.7-1.4 ||| ||| 1997 to 2002 | NBDPS | 2003 to 2009 [SUMMARY]
Use of the hospital anxiety and depression scale (HADS) in a cardiac emergency room: chest pain unit.
19330247
Patients arriving at a Chest Pain Unit may present psychiatric disorders not identified, isolated or co-morbid to the main illness, which may interfere in the patient prognosis.
INTRODUCTION
Patients were assessed with the Hospital Anxiety and Depression Scale as a screening instrument while following a systematized protocol to rule out the diagnosis of acute coronary syndrome and other potentially fatal diseases. Patients with 8 or more points on a subscale were considered a 'probable case' of anxiety or depression.
METHODOLOGY
According to the protocol, 59 (45.4%) of the 130 patients studied presented chest pain of a determined cause, and 71 (54.6%) presented chest pain of an indeterminate cause. In the former group, in which 43 (33.1%) had acute coronary syndrome, 33.9% were probable anxiety cases and 30.5% probable depression cases. In the second group, formed by patients without acute coronary syndrome or any clinical conditions involving greater morbidity and mortality risk, 53.5% were probable anxiety cases and 25.4% probable depression cases.
RESULTS
The high anxiety and depression prevalence observed may indicate the need for an early and specialized approach to these disorders. When coronary artery disease is present, this may decrease complications and shorten hospital stay. When a psychiatric disorder appears in isolation, it is possible to reduce unnecessary repeated visits to the emergency room and improve the patient's quality of life.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Anxiety", "Chest Pain", "Depression", "Emergency Service, Hospital", "Female", "Humans", "Male", "Prevalence", "Psychiatric Status Rating Scales", "Surveys and Questionnaires" ]
2666460
INTRODUCTION
Patients who arrive at the emergency room complaining of chest pain (CP) may present symptoms associated with psychiatric disorders, including anxiety and depression.1 Upon assessing patients who presented atypical CP, Wulsin et al.2 found that the two most common diagnoses included mood disorders and anxiety. A review study3 demonstrated that 30.1% of patients complaining of chest pain who arrived at the emergency room (ER) were diagnosed with a panic disorder (PD), of which 22.4% exhibited PD with no coronary artery disease (CAD). Patients with anxiety and depression disorders who were not diagnosed as such and thus were not appropriately treated had a tendency to complain of chronic symptoms and to regularly seek medical attention. In analyzing ER patients with atypical pain, Demiryoguran et al.4 observed a high probability of anxiety in 31.2% of these patients. In this group, 78.3% of the patients reported having previously received care in the ER for the same symptoms. Additional studies have demonstrated the excessive use of medical resources by certain of these patients5 who were characterized as having high anxiety and depression scores and low quality of life indices.6 The objective of the study described herein is to use a self-reporting measure to estimate the prevalence of anxiety and depression in patients admitted to chest pain units.
METHODS
Among the several instruments designed for psychiatric diagnoses, the Hospital Anxiety and Depression Scale (HADS) distinguishes itself from all other scales due to its ability to assess anxiety and depression without investigating somatic symptoms. HADS is often used to analyze a variety of diseases in the clinical setting.7 This method was developed in 1983 and consists of a series of 14 questions, 7 of which are related to anxiety (HAD-A) while the other 7 questions are related to depression (HAD-D). The creators8 of the scale considered a score of less than 8 to indicate the lack of any mental disorder, a score equal to or greater than 8 to indicate that a disorder was “probably” present, while a score above 10 was considered to indicate that a patient was “highly likely” to have a disorder. Validation of the Portuguese version,9 using the 8 to 9 transition as a cut-off point for each subscale demonstrated a 93.7% sensitivity for HAD-A and an 84.6% sensitivity for HAD-D in addition to a 72.6% specificity for HAD-A and a 90.3% specificity for HAD-D. The HADS has been used to help diagnose patients suffering from chest pain,10,11 atypical chest pain12 and heart disease.13 It has also been validated in non-cardiac patients complaining of chest pain.14 The HADS was administered to chest pain (CP) patients admitted to the Chest Pain Unit (CPU) of a private hospital in Rio de Janeiro from May to August 2006. According to the unit stratification protocol, patients were referred to different acute coronary syndrome (ACS) investigation tracks according to the intensity of chest pain presented on arrival, classified as “definite angina” (Type A), “probable angina” (Type B), “probable non-angina” (Type C) and “definite non-angina” (Type D). Subsequently, the cardiac markers of patients were characterized. 
Upon admission and every 3 h thereafter, the serum creatine kinase-MB (CKMB) mass levels were measured in addition to the levels of serum troponin-I, which were analyzed upon admission and at 9 h post-admission. In addition, an 18-lead electrocardiogram was acquired upon admission and a 12-lead version was reacquired every 3 h thereafter. Moreover, we recorded a two-dimensional echocardiogram for most patients. In cases where no myocardial necrosis or rest ischemia was detected, a stress test using either a treadmill electrocardiogram or a single-photon emission computed tomographic myocardial scintigraphy was subsequently performed. We excluded patients with severe clinical conditions, severe respiratory failure, hemodynamic instability and neurological conditions with cognitive involvement. In addition, patients were excluded if diagnosed with dementia, delirium or any psychiatric disorder that causes changes in awareness or in formal thought processes. The HADS was administered by a nurse or physician who was on call at the CPU after patients had signed an informed consent form, during the waiting time between a blood test and its results or before a diagnostic procedure. If a patient presented a limitation in completing the HADS, such as difficulty reading, the health professional would offer assistance. The health professionals encouraged the patients to choose answers based on symptoms they had experienced during the previous week and asked them to provide spontaneous answers without excessive reflection. The cut-off value used in this study was 8 for “probable” anxiety or depression. Every patient who presented a score greater than or equal to 8 in the “anxiety” or “depression” subscores of the HADS was referred to a psychiatrist at the hospital. When the diagnosis was subsequently confirmed, a specific treatment regimen was initiated in the CPU. The current study was approved by the Ethics Committee consistent with the terms of the Helsinki Declaration. 
Two-tailed tests were used for statistical analysis with a 5% level of statistical significance. SPSS for Windows version 13.0 was used for all analyses.
RESULTS
A total of 167 questionnaires were administered in this study and 37 potential recruits were excluded for a variety of reasons, including mistakes in filling out the questionnaire, refusal to participate, transfer to another institution or inconclusive results. The 130 patients studied included both men (58.5%) and women (41.5%) between the ages of 31 and 87 years with a mean age of 61.2 ± 12.1 years. The most frequent types of reported chest pain were Type B (38.5%) and Type C (49.2%) while Type A and Type D were reported significantly less frequently (10.8% and 1.5% of the cases, respectively). Overall, we observed higher than expected anxiety and depression scores upon administering the HADS method, as indicated by the mean scores of 7.33 ± 4.36 and 4.78 ± 3.92, respectively. A consistent predominance of probable anxiety cases was also observed - 44.6% of all cases were scored greater than or equal to 8, while 27.7% of the cases were determined to be likely depression sufferers with scores greater than or equal to 8. Of the 130 patients studied, 43 patients (33.1%) had acute coronary syndrome (ACS). In this ACS group, a likely diagnosis of anxiety or depression was recorded in 34.8% and 27.9% of all cases, respectively. Interestingly, of all of the patients who reported symptoms of stratified chest pain, 87 (66.9%) were not diagnosed with acute coronary syndrome (NACS). In this group of 87 NACS patients, 16 (18.4%) patients were diagnosed with a non-coronary cause of chest pain, such as that resulting from pleuritis or pneumonia. However, in 71 of the NACS patients (54.6%), no cause for the chest pain was discovered prior to discharge from the CPU after investigating and dismissing clinical conditions that evolve with a greater risk of morbidity and mortality (Table 1). Thus we divided the population of 130 patients studied herein into two groups. 
The first group was composed of 59 patients (45.4%) who were diagnosed as having chest pain of a determined cause (PDC), while the second group of 71 patients (54.6%) was diagnosed as having chest pain of an indeterminate cause (PIC). The PDC group was composed of 43 ACS cases (72.9%) and 16 NACS cases (27.1%) under the given clinical diagnoses. Among the clinically diagnosed NACS cases, 10.2% of the cases were characterized as coronary artery disease (CAD) not acute, 5% were patients who had been diagnosed with anxiety and 3.4% suffered from pleuritic pain. Other diagnoses included pneumonia, costochondritis, lymphatic fistula, exogenous intoxication and tonsillitis, each of which had an incidence of 1.7%. The mean age of this group, which consisted of 64.4% men and 35.6% women, was 60.4 ± 13.6 years. Type A pain, Type B pain, Type C pain and Type D pain were reported in 23.7%, 42.4%, 32.2% and 1.7% of the cases, respectively. The mean anxiety score as assessed by the HADS method for this group was 6.29 ± 3.96 points while the depression score was 4.86 ± 4.07 points. Thus, 33.9% of the patients were characterized as likely suffering from anxiety while 30.5% of the cases were characterized as likely suffering from depression. The aforementioned PIC group, which also consisted of several NACS cases with no clinical diagnosis, was composed of 53.5% men and 46.5% women with a mean age of 61.9 ± 12.8 years. In this group, there were no reports of definite angina-related chest pain (Type A pain). The most common pain classification was Type C, which was reported in 63.4% of the cases, followed by Type B pain (35.2%) and Type D pain (1.4%). In this group, the mean anxiety score was 8.20 ± 4.50 points and the mean depression score was 4.70 ± 3.82 points. Overall, 53.5% of the patients were characterized as likely suffering from anxiety while 25.4% were characterized as likely suffering from depression. 
In comparing the data gleaned from the PDC and PIC groups (Table 2), our statistical analysis indicates similarities between the demographic data but differences regarding the types of pain and the identified HADS scores. There was no statistically significant difference in terms of the mean age of the patients in the two groups (p = 0.768). In regard to gender, while there was a greater proportion of women in the PIC group, this difference did not achieve statistical significance (p = 0.210). Interestingly, the PDC group was composed of a significantly higher percentage of patients reporting Type A and Type B pain as compared with the PIC group, which was shown to have a higher percentage of patients reporting Type C pain (chi-square = 23.656; 3 df; p < 0.001). In addition, the mean anxiety scores were significantly higher in the PIC group (p = 0.011); however, this statistically significant difference was not observed for the mean depression scores (p = 0.819). Using the HADS value of 8 as a cut-off point, we observed a significantly higher incidence of anxiety in the PIC group (chi-square = 5.021, 1 df, p = 0.025) while the prevalence of depression was similar between the two groups (chi-square = 0.428, 1 df, p = 0.513).
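The anxiety comparison above can be checked with a hand-rolled Pearson chi-square on a reconstructed 2x2 table. Rounding the reported percentages gives roughly 38 of 71 PIC patients and 20 of 59 PDC patients with probable anxiety; these counts are inferred from the percentages, not taken directly from the source tables. For 1 degree of freedom the p-value follows from the complementary error function, since a chi-square(1) variable is a squared standard normal:

```python
from math import erfc, sqrt

def chi_square_2x2(table):
    """Pearson chi-square (no continuity correction) for a 2x2 table,
    with the 1-df p-value computed via erfc."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = 0.0
    for i, obs in enumerate((a, b, c, d)):
        expected = rows[i // 2] * cols[i % 2] / n
        chi2 += (obs - expected) ** 2 / expected
    p = erfc(sqrt(chi2 / 2))  # chi2 with 1 df: P(X > x) = erfc(sqrt(x/2))
    return chi2, p

# Probable-anxiety counts reconstructed from the reported percentages:
# PIC: 38 anxious / 33 not; PDC: 20 anxious / 39 not.
chi2, p = chi_square_2x2(((38, 33), (20, 39)))
print(round(chi2, 3), round(p, 3))  # 5.021 0.025, matching the reported values
```

That the uncorrected Pearson statistic reproduces the reported chi-square = 5.021 (p = 0.025) suggests the authors did not apply a Yates continuity correction.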
CONCLUSIONS
A significant proportion of patients complaining of symptoms indicative of chest pain who were admitted to the emergency department at our facility had a diagnosable psychiatric illness. The HADS method is an easily applied screening method that can be incorporated into daily clinical practice at the ER to allow for the early diagnosis of anxiety and depression disorders in chest pain patients.
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSIONS" ]
[ "Patients who arrive at the emergency room complaining of chest pain (CP) may present symptoms associated with psychiatric disorders, including anxiety and depression.1 Upon assessing patients who presented atypical CP, Wulsin et al.2 found that the two most common diagnoses included mood disorders and anxiety. A review study3 demonstrated that 30.1% of patients complaining of chest pain who arrived at the emergency room (ER) were diagnosed with a panic disorder (PD), of which 22.4% exhibited PD with no coronary artery disease (CAD). Patients with anxiety and depression disorders who were not diagnosed as such and thus were not appropriately treated had a tendency to complain of chronic symptoms and to regularly seek medical attention. In analyzing ER patients with atypical pain, Demiryoguran et al.4 observed a high probability of anxiety in 31.2% of these patients. In this group, 78.3% of the patients reported having previously received care in the ER for the same symptoms. Additional studies have demonstrated the excessive use of medical resources by certain of these patients5 who were characterized as having high anxiety and depression scores and low quality of life indices.6 The objective of the study described herein is to use a self-reporting measure to estimate the prevalence of anxiety and depression in patients admitted to chest pain units.", "Among the several instruments designed for psychiatric diagnoses, the Hospital Anxiety and Depression Scale (HADS) distinguishes itself from all other scales due to its ability to assess anxiety and depression without investigating somatic symptoms. HADS is often used to analyze a variety of diseases in the clinical setting.7 This method was developed in 1983 and consists of a series of 14 questions, 7 of which are related to anxiety (HAD-A) while the other 7 questions are related to depression (HAD-D). 
The creators8 of the scale considered a score of less than 8 to indicate the lack of any mental disorder, a score equal to or greater than 8 to indicate that a disorder was “probably” present, while a score above 10 was considered to indicate that a patient was “highly likely” to have a disorder. Validation of the Portuguese version,9 using the 8 to 9 transition as a cut-off point for each subscale demonstrated a 93.7% sensitivity for HAD-A and a 84.6% sensitivity for HAD-D in addition to a 72.6% specificity for HAD-A and a 90.3% specificity for HAD-D. The HADS has been used to help diagnose patients suffering from chest pain,10,11 atypical chest pain12 and heart disease.13 It has also been validated in non-cardiac patients complaining of chest pain.14\nThe HADS was administered to chest pain (CP) patients admitted to the Chest Pain Unit (CPU) of a private hospital in Rio de Janeiro from May to August 2006. According to the unit stratification protocol, patients were referred to different acute coronary syndrome (ACS) investigation tracks according to the intensity of chest pain presented on arrival, classified as “definite angina” (Type A), “probable angina” (Type B), “probable non-angina” (Type C) and “definite non-angina” (Type D). Subsequently, the cardiac markers of patients were characterized. Upon admission and every 3 h thereafter, the serum creatine kinase-MB (CKMB) mass levels were measured in addition to the levels of serum troponin-I, which were analyzed upon admission and at 9 h post-admission. In addition, an 18-lead electrocardiogram was acquired upon admission and a 12-lead version was reacquired every 3 h thereafter. Moreover, we recorded a two-dimensional echocardiogram for most patients. 
In cases where no myocardial necrosis or rest ischemia was detected, a stress test using either a treadmill electrocardiogram or a single-photon emission computed tomographic myocardial scintigraphy was subsequently performed.\nWe excluded patients with severe clinical conditions, severe respiratory failure, hemodynamic instability and neurological conditions with cognitive involvement. In addition, patients were excluded if diagnosed with dementia, delirium or any psychiatric disorder that causes changes in awareness or in formal thought processes.\nThe HADS was administered by a nurse or physician who was on call at the CPU after patients had signed an informed consent form, during the waiting time between a blood test and its results or before a diagnostic procedure. If a patient presented a limitation in completing the HADS, such as difficulty reading, the health professional would offer assistance. The health professionals encouraged the patients to choose answers based on symptoms they had experienced during the previous week and asked them to provide spontaneous answers without excessive reflection.\nThe cut-off value used in this study was 8 for “probable” anxiety or depression. Every patient who presented a score greater than or equal to 8 in the “anxiety” or “depression” subscores of the HADS was referred to a psychiatrist at the hospital. When the diagnosis was subsequently confirmed, a specific treatment regimen was initiated in the CPU.\nThe current study was approved by the Ethics Committee consistent with the terms of the Helsinki Declaration. Two-tailed tests were used for the statistical analysis, with significance set at the 5% level. 
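The scoring bands described above (seven items per subscale, totals of 0 to 21; below 8 no case, 8 to 10 a probable case, above 10 a highly likely case, with 8 used as the referral cut-off in this study) amount to a simple classification rule. The sketch below is illustrative only; the function names and structure are ours, and the HADS item wording is not reproduced here.

```python
def classify_hads_subscale(score: int) -> str:
    """Classify one HADS subscale total (HAD-A or HAD-D).

    Each subscale has 7 items scored 0-3, so totals range from 0 to 21.
    Bands follow the scale's original authors: < 8 no case,
    8-10 probable case, > 10 highly likely case.
    """
    if not 0 <= score <= 21:
        raise ValueError("HADS subscale totals must be between 0 and 21")
    if score < 8:
        return "no case"
    if score <= 10:
        return "probable case"
    return "highly likely case"


def needs_psychiatric_referral(anxiety: int, depression: int) -> bool:
    """Study rule: refer when either subscale reaches the cut-off of 8."""
    return anxiety >= 8 or depression >= 8
```

Under the protocol above, any patient for whom the referral rule fires on either subscale would have been seen by the hospital psychiatrist.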
SPSS for Windows version 13.0 was used for all analyses.", "A total of 167 questionnaires were administered in this study and 37 potential recruits were excluded for a variety of reasons, including mistakes in filling out the questionnaire, refusal to participate, transfer to another institution or inconclusive results. The 130 patients studied included both men (58.5%) and women (41.5%) between the ages of 31 and 87 years with a mean age of 61.2 ± 12.1 years. The most frequent types of reported chest pain were Type B (38.5%) and Type C (49.2%) while Type A and Type D were reported significantly less frequently (10.8% and 1.5% of the cases, respectively). Overall, we observed higher than expected anxiety and depression scores upon administering the HADS method, as indicated by the mean scores of 7.33 ± 4.36 and 4.78 ± 3.92, respectively. A consistent predominance of probable anxiety cases was also observed: 44.6% of all cases were scored greater than or equal to 8, while 27.7% of the cases were determined to be likely depression sufferers with scores greater than or equal to 8. Of the 130 patients studied, 43 patients (33.1%) had acute coronary syndrome (ACS). In the latter group, a likely diagnosis of anxiety or depression was recorded in 34.8% and 27.9% of all cases, respectively. Interestingly, of all of the patients who reported symptoms of stratified chest pain, 87 (66.9%) were not diagnosed with acute coronary syndrome (NACS). In this group of 87 NACS patients, 16 (18.4%) patients were diagnosed with a non-coronary cause of chest pain, such as that resulting from pleuritis or pneumonia. However, in 71 of the NACS patients (54.6%), no cause for the chest pain was discovered prior to discharge from the CPU after investigating and dismissing clinical conditions that evolve with a greater risk of morbidity and mortality (Table 1). Thus, we divided the population of 130 patients studied herein into two groups. 
The first group is composed of 59 patients (45.4%) who were diagnosed as having chest pain of a determined cause (PDC) while the second group of 71 patients (54.6%) were diagnosed as having chest pain of an indetermined cause (PIC).\nThe PDC group was composed of 43 ACS cases (72.9%) and 16 NACS cases (27.1%) under the given clinical diagnoses. Among the clinically diagnosed NACS cases, 10.2% of the cases were characterized as non-acute coronary artery disease (CAD), 5% were patients who had been diagnosed with anxiety and 3.4% suffered from pleuritic pain. Other diagnoses included pneumonia, costochondritis, lymphatic fistula, exogenous intoxication and tonsillitis, each of which had an incidence of 1.7%. The mean age of this group, which consisted of 64.4% men and 35.6% women, was 60.4 ± 13.6 years. Type A pain, Type B pain, Type C pain and Type D pain were reported in 23.7%, 42.4%, 32.2% and 1.7% of the cases, respectively. The mean anxiety score as assessed by the HADS method for this group was 6.29 ± 3.96 points while the depression score was 4.86 ± 4.07 points. Thus, 33.9% of the patients were characterized as likely suffering from anxiety while 30.5% of the cases were characterized as likely suffering from depression.\nThe aforementioned PIC group, which comprised the NACS cases with no clinical diagnosis, was composed of 53.5% men and 46.5% women with a mean age of 61.9 ± 12.8 years. In this group, there were no reports of definite angina-related chest pain (Type A pain). The most common pain classification was Type C, which was reported in 63.4% of the cases, followed by Type B pain (35.2%) and Type D pain (1.4%). In this group, the mean anxiety score was 8.20 ± 4.50 points and the mean depression score was 4.70 ± 3.82 points. 
Overall, 53.5% of the patients were characterized as likely suffering from anxiety while 25.4% were characterized as likely suffering from depression.\nIn comparing the data gleaned from the PDC and PIC groups (Table 2), our statistical analysis indicates similarities between the demographic data but differences regarding the types of pain and the identified HADS scores. There was no statistically significant difference in terms of the mean age of the patients in the two groups (p = 0.768). In regard to gender, while there was a greater proportion of women in the PIC group, this difference did not achieve statistical significance (p = 0.210). Interestingly, the PDC group was composed of a significantly higher percentage of patients reporting Type A and Type B pain as compared with the PIC group, which was shown to have a higher percentage of patients reporting Type C pain (chi-square = 23.656; 3 df; p < 0.001). In addition, the mean anxiety scores were significantly higher in the PIC group (p = 0.011); however, this statistically significant difference was not observed for the mean depression scores (p = 0.819). Using the HADS value of 8 as a cut-off point, we observed a significantly higher incidence of anxiety in the PIC group (chi-square = 5.021, 1 df, p = 0.025) while the prevalence of depression was similar between the two groups (chi-square = 0.428, 1 df, p = 0.513).", "Our key result is the observation that the patients belonging to the PIC group, equivalent to 54.6% (71/130) of the people who came to our emergency room (ER) complaining of chest pain, would have been discharged with a diagnosis of “no acute coronary syndrome or any other high risk disease” if the HADS method had not been administered. Among those cases, 53.5% were diagnosed as patients with a probable anxiety disorder while 25.3% were diagnosed as patients probably suffering from depression. 
With regard to the presence of anxiety, in comparison with the PDC group, the mean anxiety score was higher in the PIC group. A higher incidence of anxiety was also found in this group using a HADS cut-off point of eight.\nDespite the known limitation of this scale as a diagnostic method for research,15 the application of this very simple test allowed for a significant improvement in the quality of care administered in the ER. Early diagnosis of anxiety or depression disorders ultimately decreased the frequency of ER visits, as well as the cost of treating these patients.16 On the other hand, the fact that patients are aware that their chest pain is “not due to a heart disease” has very little impact on the evolution of anxiety disorders. Clinical investigations,17 even after coronary angiography,18 are generally unable to prevent new episodes of non-coronary chest pain.\nAlthough a structured interview was not used to confirm the psychiatric diagnoses, in all at-risk cases the mental health care department was contacted and, when indicated, the necessary treatment was administered in the CPU. In daily clinical practice, this scale was shown to be useful. A study among hospitalized cardiac patients19 found that identifying patients at risk for depression did not require a formal diagnostic tool, but could be achieved using the HADS. Moreover, the sensitivity and specificity of the HADS have already been well established.20 A review of 747 studies using the HADS21 demonstrated a sufficient ability to assess the symptomatic severity and caseness of anxiety disorders and depression in somatic patients, psychiatric patients, primary care patients and in the general population. However, using the HADS provides higher estimates of depression incidence than the estimates obtained using a diagnostic test such as the PRIME-MD.22\nWithout a screening tool, the ability of ER staff to diagnose anxiety and depression is very limited. 
In regard to the NACS patients from the PDC group with a clinical cause, we note that only 5% of the patients were diagnosed with anxiety. This observation is consistent with observations reported by other authors.23–25 In a study involving ER patients who had reported chest pain, Fleet et al.23 noted that PD was identified in only 2% of the 108 cases examined. In a separate study,24 the diagnosis of a psychiatric disorder was made in only 1 out of 30 patients, demonstrating failure in 97% of cases. Using a self-reporting measure such as the HADS, it was possible to classify 33.9% of the patients from the PDC group as likely anxiety sufferers and 30.5% of patients from this group as probable depression cases. In addition, after observing patients during the first week following acute myocardial infarction, Martin et al.25 observed a HADS score greater than 8 for HAD-A questions and for HAD-D questions in 30.4% and 15.2% of the cases studied, respectively. The hazardous impact of anxiety and depression on the development of CAD,26–29 the effects on the prognosis of the psychiatric disease itself,30 and the excessive use of medical resources31 have been well established in the literature. Recently, it was demonstrated that anxiety and depression predict more problematic adverse cardiac events in patients with stable CAD.32 Given the elevated rate of morbidity and mortality associated with depression among cardiac patients,33 the HADS may serve as a useful tool in identifying at-risk patients.19", "A significant proportion of patients complaining of symptoms indicative of chest pain who were admitted to the emergency department at our facility had a diagnosable psychiatric illness. The HADS method is an easily applied screening method that can be incorporated into daily clinical practice at the ER to allow for the early diagnosis of anxiety and depression disorders in chest pain patients." ]
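The anxiety comparison reported in the results (chi-square = 5.021, 1 df, PIC vs. PDC at the cut-off of 8) can be checked from the published percentages with a short sketch. Note that the cell counts are reconstructed assumptions, not reported values: 53.5% of the 71 PIC patients is roughly 38 probable-anxiety cases, and 33.9% of the 59 PDC patients is roughly 20.

```python
def pearson_chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a 2x2
    table given as [[a, b], [c, d]]; rows = groups, columns = case/non-case."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Reconstructed counts (assumption): PIC 38 anxiety / 33 not, PDC 20 / 39.
stat = pearson_chi_square_2x2([[38, 33], [20, 39]])
print(round(stat, 3))  # close to the reported 5.021
```

With these reconstructed counts, the statistic exceeds the 3.841 critical value for 1 df at the 5% level, consistent with the reported p = 0.025.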
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "Emergency Room", "Anxiety", "Depression", "Chest Pain", "Coronary Artery Disease" ]
INTRODUCTION: Patients who arrive at the emergency room complaining of chest pain (CP) may present symptoms associated with psychiatric disorders, including anxiety and depression.1 Upon assessing patients who presented atypical CP, Wulsin et al.2 found that the two most common diagnoses included mood disorders and anxiety. A review study3 demonstrated that 30.1% of patients complaining of chest pain who arrived at the emergency room (ER) were diagnosed with a panic disorder (PD), of which 22.4% exhibited PD with no coronary artery disease (CAD). Patients with anxiety and depression disorders who were not diagnosed as such and thus were not appropriately treated had a tendency to complain of chronic symptoms and to regularly seek medical attention. In analyzing ER patients with atypical pain, Demiryoguran et al.4 observed a high probability of anxiety in 31.2% of these patients. In this group, 78.3% of the patients reported having previously received care in the ER for the same symptoms. Additional studies have demonstrated the excessive use of medical resources by certain of these patients5 who were characterized as having high anxiety and depression scores and low quality of life indices.6 The objective of the study described herein is to use a self-reporting measure to estimate the prevalence of anxiety and depression in patients admitted to chest pain units. METHODS: Among the several instruments designed for psychiatric diagnoses, the Hospital Anxiety and Depression Scale (HADS) distinguishes itself from all other scales due to its ability to assess anxiety and depression without investigating somatic symptoms. HADS is often used to analyze a variety of diseases in the clinical setting.7 This method was developed in 1983 and consists of a series of 14 questions, 7 of which are related to anxiety (HAD-A) while the other 7 questions are related to depression (HAD-D). 
The creators8 of the scale considered a score of less than 8 to indicate the lack of any mental disorder, a score equal to or greater than 8 to indicate that a disorder was “probably” present, while a score above 10 was considered to indicate that a patient was “highly likely” to have a disorder. Validation of the Portuguese version,9 using the 8 to 9 transition as a cut-off point for each subscale, demonstrated a 93.7% sensitivity for HAD-A and an 84.6% sensitivity for HAD-D in addition to a 72.6% specificity for HAD-A and a 90.3% specificity for HAD-D. The HADS has been used to help diagnose patients suffering from chest pain,10,11 atypical chest pain12 and heart disease.13 It has also been validated in non-cardiac patients complaining of chest pain.14 The HADS was administered to chest pain (CP) patients admitted to the Chest Pain Unit (CPU) of a private hospital in Rio de Janeiro from May to August 2006. According to the unit stratification protocol, patients were referred to different acute coronary syndrome (ACS) investigation tracks according to the intensity of chest pain presented on arrival, classified as “definite angina” (Type A), “probable angina” (Type B), “probable non-angina” (Type C) and “definite non-angina” (Type D). Subsequently, the cardiac markers of patients were characterized. Upon admission and every 3 h thereafter, the serum creatine kinase-MB (CKMB) mass levels were measured in addition to the levels of serum troponin-I, which were analyzed upon admission and at 9 h post-admission. In addition, an 18-lead electrocardiogram was acquired upon admission and a 12-lead version was reacquired every 3 h thereafter. Moreover, we recorded a two-dimensional echocardiogram for most patients. In cases where no myocardial necrosis or rest ischemia was detected, a stress test using either a treadmill electrocardiogram or a single-photon emission computed tomographic myocardial scintigraphy was subsequently performed. 
We excluded patients with severe clinical conditions, severe respiratory failure, hemodynamic instability and neurological conditions with cognitive involvement. In addition, patients were excluded if diagnosed with dementia, delirium or any psychiatric disorder that causes changes in awareness or in formal thought processes. The HADS was administered by a nurse or physician who was on call at the CPU after patients had signed an informed consent form, during the waiting time between a blood test and its results or before a diagnostic procedure. If a patient presented a limitation in completing the HADS, such as difficulty reading, the health professional would offer assistance. The health professionals encouraged the patients to choose answers based on symptoms they had experienced during the previous week and asked them to provide spontaneous answers without excessive reflection. The cut-off value used in this study was 8 for “probable” anxiety or depression. Every patient who presented a score greater than or equal to 8 in the “anxiety” or “depression” subscores of the HADS was referred to a psychiatrist at the hospital. When the diagnosis was subsequently confirmed, a specific treatment regimen was initiated in the CPU. The current study was approved by the Ethics Committee consistent with the terms of the Helsinki Declaration. Two-tailed tests were used for the statistical analysis, with significance set at the 5% level. SPSS for Windows version 13.0 was used for all analyses. RESULTS: A total of 167 questionnaires were administered in this study and 37 potential recruits were excluded for a variety of reasons, including mistakes in filling out the questionnaire, refusal to participate, transfer to another institution or inconclusive results. The 130 patients studied included both men (58.5%) and women (41.5%) between the ages of 31 and 87 years with a mean age of 61.2 ± 12.1 years. 
The most frequent types of reported chest pain were Type B (38.5%) and Type C (49.2%) while Type A and Type D were reported significantly less frequently (10.8% and 1.5% of the cases, respectively). Overall, we observed higher than expected anxiety and depression scores upon administering the HADS method, as indicated by the mean scores of 7.33 ± 4.36 and 4.78 ± 3.92, respectively. A consistent predominance of probable anxiety cases was also observed: 44.6% of all cases were scored greater than or equal to 8, while 27.7% of the cases were determined to be likely depression sufferers with scores greater than or equal to 8. Of the 130 patients studied, 43 patients (33.1%) had acute coronary syndrome (ACS). In the latter group, a likely diagnosis of anxiety or depression was recorded in 34.8% and 27.9% of all cases, respectively. Interestingly, of all of the patients who reported symptoms of stratified chest pain, 87 (66.9%) were not diagnosed with acute coronary syndrome (NACS). In this group of 87 NACS patients, 16 (18.4%) patients were diagnosed with a non-coronary cause of chest pain, such as that resulting from pleuritis or pneumonia. However, in 71 of the NACS patients (54.6%), no cause for the chest pain was discovered prior to discharge from the CPU after investigating and dismissing clinical conditions that evolve with a greater risk of morbidity and mortality (Table 1). Thus, we divided the population of 130 patients studied herein into two groups. The first group is composed of 59 patients (45.4%) who were diagnosed as having chest pain of a determined cause (PDC) while the second group of 71 patients (54.6%) were diagnosed as having chest pain of an indetermined cause (PIC). The PDC group was composed of 43 ACS cases (72.9%) and 16 NACS cases (27.1%) under the given clinical diagnoses. 
Among the clinically diagnosed NACS cases, 10.2% of the cases were characterized as non-acute coronary artery disease (CAD), 5% were patients who had been diagnosed with anxiety and 3.4% suffered from pleuritic pain. Other diagnoses included pneumonia, costochondritis, lymphatic fistula, exogenous intoxication and tonsillitis, each of which had an incidence of 1.7%. The mean age of this group, which consisted of 64.4% men and 35.6% women, was 60.4 ± 13.6 years. Type A pain, Type B pain, Type C pain and Type D pain were reported in 23.7%, 42.4%, 32.2% and 1.7% of the cases, respectively. The mean anxiety score as assessed by the HADS method for this group was 6.29 ± 3.96 points while the depression score was 4.86 ± 4.07 points. Thus, 33.9% of the patients were characterized as likely suffering from anxiety while 30.5% of the cases were characterized as likely suffering from depression. The aforementioned PIC group, which comprised the NACS cases with no clinical diagnosis, was composed of 53.5% men and 46.5% women with a mean age of 61.9 ± 12.8 years. In this group, there were no reports of definite angina-related chest pain (Type A pain). The most common pain classification was Type C, which was reported in 63.4% of the cases, followed by Type B pain (35.2%) and Type D pain (1.4%). In this group, the mean anxiety score was 8.20 ± 4.50 points and the mean depression score was 4.70 ± 3.82 points. Overall, 53.5% of the patients were characterized as likely suffering from anxiety while 25.4% were characterized as likely suffering from depression. In comparing the data gleaned from the PDC and PIC groups (Table 2), our statistical analysis indicates similarities between the demographic data but differences regarding the types of pain and the identified HADS scores. There was no statistically significant difference in terms of the mean age of the patients in the two groups (p = 0.768). 
In regard to gender, while there was a greater proportion of women in the PIC group, this difference did not achieve statistical significance (p = 0.210). Interestingly, the PDC group was composed of a significantly higher percentage of patients reporting Type A and Type B pain as compared with the PIC group, which was shown to have a higher percentage of patients reporting Type C pain (chi-square = 23.656; 3 df; p < 0.001). In addition, the mean anxiety scores were significantly higher in the PIC group (p = 0.011); however, this statistically significant difference was not observed for the mean depression scores (p = 0.819). Using the HADS value of 8 as a cut-off point, we observed a significantly higher incidence of anxiety in the PIC group (chi-square = 5.021, 1 df, p = 0.025) while the prevalence of depression was similar between the two groups (chi-square = 0.428, 1 df, p = 0.513). DISCUSSION: Our key result is the observation that the patients belonging to the PIC group, equivalent to 54.6% (71/130) of the people who came to our emergency room (ER) complaining of chest pain, would have been discharged with a diagnosis of “no acute coronary syndrome or any other high risk disease” if the HADS method had not been administered. Among those cases, 53.5% were diagnosed as patients with a probable anxiety disorder while 25.3% were diagnosed as patients probably suffering from depression. With regard to the presence of anxiety, in comparison with the PDC group, the mean anxiety score was higher in the PIC group. A higher incidence of anxiety was also found in this group using a HADS cut-off point of eight. Despite the known limitation of this scale as a diagnostic method for research,15 the application of this very simple test allowed for a significant improvement in the quality of care administered in the ER. 
Early diagnosis of anxiety or depression disorders ultimately decreased the frequency of ER visits, as well as the cost of treating these patients.16 On the other hand, the fact that patients are aware that their chest pain is “not due to a heart disease” has very little impact on the evolution of anxiety disorders. Clinical investigations,17 even after coronary angiography,18 are generally unable to prevent new episodes of non-coronary chest pain. Although a structured interview was not used to confirm the psychiatric diagnoses, in all at-risk cases the mental health care department was contacted and, when indicated, the necessary treatment was administered in the CPU. In daily clinical practice, this scale was shown to be useful. A study among hospitalized cardiac patients19 found that identifying patients at risk for depression did not require a formal diagnostic tool, but could be achieved using the HADS. Moreover, the sensitivity and specificity of the HADS have already been well established.20 A review of 747 studies using the HADS21 demonstrated a sufficient ability to assess the symptomatic severity and caseness of anxiety disorders and depression in somatic patients, psychiatric patients, primary care patients and in the general population. However, using the HADS provides higher estimates of depression incidence than the estimates obtained using a diagnostic test such as the PRIME-MD.22 Without a screening tool, the ability of ER staff to diagnose anxiety and depression is very limited. In regard to the NACS patients from the PDC group with a clinical cause, we note that only 5% of the patients were diagnosed with anxiety. This observation is consistent with observations reported by other authors.23–25 In a study involving ER patients who had reported chest pain, Fleet et al.23 noted that PD was identified in only 2% of the 108 cases examined. 
In a separate study,24 the diagnosis of a psychiatric disorder was made in only 1 out of 30 patients, demonstrating failure in 97% of cases. Using a self-reporting measure such as the HADS, it was possible to classify 33.9% of the patients from the PDC group as likely anxiety sufferers and 30.5% of patients from this group as probable depression cases. In addition, after observing patients during the first week following acute myocardial infarction, Martin et al.25 observed a HADS score greater than 8 for HAD-A questions and for HAD-D questions in 30.4% and 15.2% of the cases studied, respectively. The hazardous impact of anxiety and depression on the development of CAD,26–29 the effects on the prognosis of the psychiatric disease itself,30 and the excessive use of medical resources31 have been well established in the literature. Recently, it was demonstrated that anxiety and depression predict more problematic adverse cardiac events in patients with stable CAD.32 Given the elevated rate of morbidity and mortality associated with depression among cardiac patients,33 the HADS may serve as a useful tool in identifying at-risk patients.19 CONCLUSIONS: A significant proportion of patients complaining of symptoms indicative of chest pain who were admitted to the emergency department at our facility had a diagnosable psychiatric illness. The HADS method is an easily applied screening method that can be incorporated into daily clinical practice at the ER to allow for the early diagnosis of anxiety and depression disorders in chest pain patients.
Background: Patients arriving at a Chest Pain Unit may present unidentified psychiatric disorders, either isolated or co-morbid with the main illness, which may interfere with the patient's prognosis. Methods: Patients were assessed with the 'Hospital Anxiety and Depression Scale' as a screening instrument while following a systematized protocol to rule out the diagnosis of acute coronary syndrome and other potentially fatal diseases. Patients scoring 8 or more points on either subscale were considered a 'probable case' of anxiety or depression. Results: According to the protocol, 59 (45.4%) of the 130 patients studied presented Chest Pain of Determined Cause, and 71 (54.6%) presented Chest Pain of Indefinite Cause. In the former group, in which 43 (33.1%) had acute coronary syndrome, 33.9% were probable anxiety cases and 30.5% probable depression cases. In the second group, formed by patients without acute coronary syndrome or any clinical condition involving greater morbidity and mortality risk, 53.5% were probable anxiety cases and 25.4% probable depression cases. Conclusions: The high prevalence of anxiety and depression observed may indicate the need for an early and specialized approach to these disorders. When coronary artery disease is present, this may decrease complications and shorten the hospital stay. When the psychiatric disorder appears in isolation, it is possible to reduce unnecessary repeat visits to the emergency room and improve patients' quality of life.
INTRODUCTION: Patients who arrive at the emergency room complaining of chest pain (CP) may present symptoms associated with psychiatric disorders, including anxiety and depression.1 Upon assessing patients who presented atypical CP, Wulsin et al.2 found that the two most common diagnoses included mood disorders and anxiety. A review study3 demonstrated that 30.1% of patients complaining of chest pain who arrived at the emergency room (ER) were diagnosed with a panic disorder (PD), of which 22.4% exhibited PD with no coronary artery disease (CAD). Patients with anxiety and depression disorders who were not diagnosed as such and thus were not appropriately treated had a tendency to complain of chronic symptoms and to regularly seek medical attention. In analyzing ER patients with atypical pain, Demiryoguran et al.4 observed a high probability of anxiety in 31.2% of these patients. In this group, 78.3% of the patients reported having previously received care in the ER for the same symptoms. Additional studies have demonstrated the excessive use of medical resources by certain of these patients5 who were characterized as having high anxiety and depression scores and low quality of life indices.6 The objective of the study described herein is to use a self-reporting measure to estimate the prevalence of anxiety and depression in patients admitted to chest pain units. CONCLUSIONS: A significant proportion of patients complaining of symptoms indicative of chest pain who were admitted to the emergency department at our facility had a diagnosable psychiatric illness. The HADS method is an easily applied screening method that can be incorporated into daily clinical practice at the ER to allow for the early diagnosis of anxiety and depression disorders in chest pain patients.
Background: Patients arriving at a Chest Pain Unit may present unidentified psychiatric disorders, either isolated or co-morbid with the main illness, which may interfere with the patient's prognosis. Methods: Patients were assessed with the 'Hospital Anxiety and Depression Scale' as a screening instrument while following a systematized protocol to rule out the diagnosis of acute coronary syndrome and other potentially fatal diseases. Patients scoring 8 or more points on either subscale were considered a 'probable case' of anxiety or depression. Results: According to the protocol, 59 (45.4%) of the 130 patients studied presented Chest Pain of Determined Cause, and 71 (54.6%) presented Chest Pain of Indefinite Cause. In the former group, in which 43 (33.1%) had acute coronary syndrome, 33.9% were probable anxiety cases and 30.5% probable depression cases. In the second group, formed by patients without acute coronary syndrome or any clinical condition involving greater morbidity and mortality risk, 53.5% were probable anxiety cases and 25.4% probable depression cases. Conclusions: The high prevalence of anxiety and depression observed may indicate the need for an early and specialized approach to these disorders. When coronary artery disease is present, this may decrease complications and shorten the hospital stay. When the psychiatric disorder appears in isolation, it is possible to reduce unnecessary repeat visits to the emergency room and improve patients' quality of life.
2,823
260
5
[ "patients", "anxiety", "pain", "depression", "group", "chest", "chest pain", "hads", "cases", "type" ]
[ "test", "test" ]
[CONTENT] Emergency Room | Anxiety | Depression | Chest Pain | Coronary Artery Disease [SUMMARY]
[CONTENT] Emergency Room | Anxiety | Depression | Chest Pain | Coronary Artery Disease [SUMMARY]
[CONTENT] Emergency Room | Anxiety | Depression | Chest Pain | Coronary Artery Disease [SUMMARY]
[CONTENT] Emergency Room | Anxiety | Depression | Chest Pain | Coronary Artery Disease [SUMMARY]
[CONTENT] Emergency Room | Anxiety | Depression | Chest Pain | Coronary Artery Disease [SUMMARY]
[CONTENT] Emergency Room | Anxiety | Depression | Chest Pain | Coronary Artery Disease [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anxiety | Chest Pain | Depression | Emergency Service, Hospital | Female | Humans | Male | Prevalence | Psychiatric Status Rating Scales | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anxiety | Chest Pain | Depression | Emergency Service, Hospital | Female | Humans | Male | Prevalence | Psychiatric Status Rating Scales | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anxiety | Chest Pain | Depression | Emergency Service, Hospital | Female | Humans | Male | Prevalence | Psychiatric Status Rating Scales | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anxiety | Chest Pain | Depression | Emergency Service, Hospital | Female | Humans | Male | Prevalence | Psychiatric Status Rating Scales | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anxiety | Chest Pain | Depression | Emergency Service, Hospital | Female | Humans | Male | Prevalence | Psychiatric Status Rating Scales | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Anxiety | Chest Pain | Depression | Emergency Service, Hospital | Female | Humans | Male | Prevalence | Psychiatric Status Rating Scales | Surveys and Questionnaires [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | group | chest | chest pain | hads | cases | type [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | group | chest | chest pain | hads | cases | type [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | group | chest | chest pain | hads | cases | type [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | group | chest | chest pain | hads | cases | type [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | group | chest | chest pain | hads | cases | type [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | group | chest | chest pain | hads | cases | type [SUMMARY]
[CONTENT] patients | anxiety | disorders | er | anxiety depression | pain | depression | symptoms | atypical | pd [SUMMARY]
[CONTENT] patients | admission | angina type | hads | type | angina | indicate | version | patient | hospital [SUMMARY]
[CONTENT] type | group | pain | type pain | cases | mean | patients | pic | pain type | nacs [SUMMARY]
[CONTENT] method | emergency department | illness hads | illness hads method | patients complaining symptoms indicative | patients complaining symptoms | illness hads method easily | facility | facility diagnosable | facility diagnosable psychiatric [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | chest | group | chest pain | hads | type | cases [SUMMARY]
[CONTENT] patients | anxiety | pain | depression | chest | group | chest pain | hads | type | cases [SUMMARY]
[CONTENT] Chest Pain Unit [SUMMARY]
[CONTENT] the 'Hospital Anxiety and Depression Scale' ||| 8 [SUMMARY]
[CONTENT] 59 | 45.4% | 130 | Chest Pain of Determined Cause | 71 | 54.6% | Chest Pain of Indefinite Cause ||| 43 | 33.1% | 33.9% | 30.5% ||| second | 53.5% | 25.4% [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] Chest Pain Unit ||| the 'Hospital Anxiety and Depression Scale' ||| 8 ||| ||| 59 | 45.4% | 130 | Chest Pain of Determined Cause | 71 | 54.6% | Chest Pain of Indefinite Cause ||| 43 | 33.1% | 33.9% | 30.5% ||| second | 53.5% | 25.4% ||| ||| ||| [SUMMARY]
[CONTENT] Chest Pain Unit ||| the 'Hospital Anxiety and Depression Scale' ||| 8 ||| ||| 59 | 45.4% | 130 | Chest Pain of Determined Cause | 71 | 54.6% | Chest Pain of Indefinite Cause ||| 43 | 33.1% | 33.9% | 30.5% ||| second | 53.5% | 25.4% ||| ||| ||| [SUMMARY]
Prenatal Depressive Symptoms, Self-Rated Health, and Diabetes Self-Efficacy: A Moderated Mediation Analysis.
36294181
Diabetes increases risk for pregnant persons and their fetuses and requires behavioral changes that can be compromised by poor mental health. Poor self-rated health (SRH), a reliable predictor of morbidity and mortality, has been associated with depressive symptoms and lower self-efficacy in patients with diabetes. However, it is unclear whether SRH mediates the association between depressive symptoms and self-efficacy in pregnant patients with diabetes and whether the healthcare site moderates the mediation. Thus, we sought to test these associations in a racially and ethnically diverse sample of pregnant individuals diagnosed with diabetes from two clinical settings.
BACKGROUND
This was an observational, cross-sectional study of 137 pregnant individuals diagnosed with diabetes at two clinical study sites. Participants self-administered a demographic questionnaire and measures designed to assess depressive symptoms, SRH in pregnancy, and diabetes self-efficacy. A moderated mediation model tested whether these indirect effects were moderated by the site.
MATERIALS AND METHODS
The results show that SRH mediated the association between depressive symptoms and diabetes self-efficacy. The results also showed that the site moderated the mediating effect of SRH on depressive symptoms and diabetes self-efficacy.
RESULTS
Understanding the role of clinical care settings can help inform when and how SRH mediates the association between prenatal depressive symptoms and self-efficacy in patients with diabetes.
CONCLUSIONS
[ "Pregnancy", "Female", "Humans", "Depression", "Mediation Analysis", "Surveys and Questionnaires", "Diabetes Mellitus", "Self Efficacy", "Health Status" ]
9602843
1. Introduction
According to the Centers for Disease Control (2018), 1–2% of individuals who identify as women have type 1 or 2 diabetes, and approximately 6–9% of pregnant people will develop gestational diabetes. Between 2000 and 2010, gestational diabetes increased by 56%, and the percentage of women with type 1 and type 2 diabetes before pregnancy increased by 37% [1]. Type 2 diabetes is prevalent among minority ethnic groups, including people of African, Black Caribbean, South Asian, Middle Eastern, Central, and South American family origin [2,3]. Type 2 diabetes is projected to affect 693 million people worldwide by 2045 [4]. Diabetes increases the risks of adverse pregnancy outcomes for both parent and child, including preeclampsia, polyhydramnios, cesarean delivery, premature birth, neonatal hypoglycemia, birth defects, respiratory distress syndrome, and hyperbilirubinemia in the neonate [5,6,7,8]. Gestational diabetes also has been linked to long-term adverse health outcomes for pregnant people and their offspring [5,6,7,8]. Achieving adequate glycemic control is the cornerstone of preventing short- and long-term adverse health outcomes in people with diabetes during pregnancy. Treatment typically consists of lifestyle, behavioral and dietary changes, home glucose monitoring, and medical therapy with oral antihypoglycemics and/or insulin for persistent hyperglycemia. Though screening for diabetes in pregnancy is common, there is wide heterogeneity in reported improvements in pregnancy outcomes with treatment [5,6]. Adequate glycemic control during pregnancy has been demonstrated to reduce complications. Still, barriers to healthcare access, racial and ethnic disparities, including a higher prevalence of diabetes among people of color, and maternal comorbidities, such as mental health issues, may limit a person’s ability to achieve adequate blood glucose control [2,9,10]. 
Despite the known risks for poor outcomes, few studies have investigated factors simultaneously addressing diabetes and perinatal mental health. 1.1. Self-Rated Health Self-rated health (SRH) considers an individual’s perception of their health and wellness. Self-rated health is a widely used measure of health that predicts morbidity, mortality, and health service use [11,12]. Poor SRH is associated with a higher risk of type 2 diabetes [13,14]. Schytt and Hildingsson [15] found that SRH may decrease during pregnancy and postpartum or one year after giving birth. Others found an association between low SRH and perinatal depressive symptoms [16]. In a sample of Latina women, Lara-Cinisomo [17] found an association between diabetes, perinatal depression, and SRH, with worse SRH during pregnancy. The findings suggest that SRH can be a critical factor to explore in prenatal individuals with diabetes. Still, the role of SRH in the association between depressive symptoms in pregnancy and diabetes self-efficacy has not been fully explored. 1.2. 
Diabetes Self-Efficacy and Depression Self-efficacy, defined as the level of self-confidence required to efficiently perform a specific behavior within one’s ability, is one of the most significant factors in behavior change to strengthen the proper management of diabetes [18,19]. Self-efficacy increases adherence to blood glucose monitoring, diet, insulin injections, and exercise [20] and plays a pivotal role in successful diabetes management [21]. However, the presence of mental health disorders among individuals with diabetes may limit a person’s ability to perform diabetes self-care behaviors [22], including being physically active, monitoring glucose, controlling diet, and adhering to medications [7,23]. People with depression experience functional decline, limiting effective lifestyle changes vital for diabetes self-care management [24,25]. Depression during pregnancy is critical given its global prevalence [26,27,28] and the risk it poses to the birthing person and infant [29,30,31]. Therefore, evaluating the associations between psychological well-being and diabetes self-efficacy during pregnancy is critical. 1.3. Research Objectives This study aimed to test the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy in a sample of racially and ethnically diverse pregnant people. Because the healthcare setting might affect the mediating associations, we conducted a moderated mediation analysis. We hypothesized that SRH would mediate the association between depressive symptoms and self-efficacy. We also hypothesized that there would be moderated mediation, where mediation differed by study site (see Figure 1).
null
null
3. Results
The sample demographic characteristics are reported in Table 1. The mean age was 30.47 (SD = 6.24), and the age of diabetes diagnosis was slightly younger, 28.60 (SD = 7.38), which was significantly different [t (107) = 40.283, p < 0.001]. Half the sample self-identified as Hispanic/Latina, a third were single, more than half worked at least part-time, and nearly half of the sample had more than high school education. Given the difference in the two study sites, we examined differences in demographic characteristics by site. There was a significant difference in the mean age of diagnosis by site [t (106) = 2.129, p = 0.036], with individuals at the IL site diagnosed at a younger age, on average. There was also a significant association between language and site (p < 0.001); Spanish data collection was unavailable at the IL site. There was a significant association between race/ethnicity and site [χ2 (4) = 94.031, p < 0.001]. The CA site had a higher proportion of Hispanic/Latina participants. In contrast, IL had a higher proportion of non-Hispanic White participants. Employment status and a history of depression diagnosis were also significantly associated with the site [χ2 (3) = 9.250, p = 0.026 and χ2 (3) = 25.113, p < 0.001], with a higher proportion of employment and a history of depression in IL. A higher proportion of individuals in IL versus CA met the cut-off for at-risk depression (18.2% versus 11%, respectively), a difference that was not statistically significant (p = 0.318). 3.1. Depressive Symptoms, Self-Rated Health, and Diabetes Self-Efficacy As shown in Table 2, mean EPDS scores were low (5.18, SD = 4.37). The mean DSES score for the entire sample was 8.05 (SD = 1.70). However, the average SRH was considered low at 2.92 (SD = 1.03). Unadjusted regression analysis indicated that race/ethnicity, age, and site were significantly associated with the outcome (DSES). 
Individuals who identified as Latina or biracial reported significantly higher DSES compared to non-Hispanic White individuals (B = 1.18, t (130) = 3.74, p < 0.001 and B = 1.74, t (130) = 2.08, p = 0.040). There was also a significantly positive association between age and DSES (B = 0.06, t (133) = 2.54, p = 0.013). Lastly, there was a significant difference by site (B = −1.08, t (133) = −3.79, p < 0.001), with higher mean DSES scores among individuals in CA versus IL. However, chi-square tests showed that race and ethnicity were significantly associated with site; therefore, race and ethnicity were not included in the models, and age was the only covariate. We then determined whether there were differences in the outcome (DSES), the predictor (EPDS), and the mediator (SRH) between the study sites. While individuals at the IL site had slightly higher mean EPDS scores, the difference was not statistically significant [t (129) = −1.935, p = 0.055]. Differences in mean SRH scores were not statistically different by study site [t (134) = 1.117, p = 0.266]. There was a significant difference in DSES by site, with individuals in CA reporting significantly higher mean DSES [t (133) = 3.789, p < 0.001] compared to those in IL. EPDS scores were significantly and negatively correlated with SRH (r = −0.368, p < 0.001) and DSES (r = −0.393, p < 0.001). SRH was positively and significantly correlated with DSES (r = 0.311, p < 0.001). 3.2. Mediation Analysis Findings from the mediation analysis revealed that SRH mediated the association between EPDS and DSES (see Table 3; Estimate = −0.037, SE = 0.016, 95% CI = −0.077, −0.011). The results also show a negative association between depressive symptoms (EPDS) and SRH (Estimate = −0.09, SE = 0.019, 95% CI = −0.129, −0.053). A negative association between EPDS and DSES was also observed (Estimate = −0.075, SE = 0.035, 95% CI = −0.148, −0.009). The findings also revealed a positive association between SRH and DSES (Estimate = 0.407, SE = 0.157, 95% CI = 0.099, 0.715). 3.3. Moderated Mediation Analysis Regression analysis testing the joint effect of site and EPDS on SRH showed that the study site moderated the effect of EPDS on SRH (F = 8.33, p = 0.005). The results from the moderated mediation reported in Table 4 and Figure 3 show that the indirect effect of SRH on the association between EPDS and DSES differed between the two sites (Estimate = 0.103, SE = 0.039, 95% CI = 0.028, 0.182). The mediating effect of SRH was significant in the CA site (Estimate = −0.054, SE = 0.024, 95% CI = −0.11, −0.014), but not the IL site (Estimate = −0.012, SE = 0.013, 95% CI = −0.046, 0.009). The total effect was significant in CA (Estimate = −0.129, SE = 0.038, 95% CI = −0.211, −0.064) and IL (Estimate = −0.086, SE = 0.037, 95% CI = −0.162, −0.017).
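The indirect (mediating) effect reported above is the product of the path from predictor to mediator (a) and the path from mediator to outcome controlling for the predictor (b), with a bootstrap confidence interval. The study's models were fit in SPSS and include a site moderator; the following is only a minimal sketch of the simple-mediation indirect effect with a percentile bootstrap, run on synthetic data (all variable names and effect sizes are illustrative, not the study's).

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap estimate of the indirect effect a*b of x on y
    through mediator m (simple mediation sketch; the study's moderated
    mediation additionally interacts the a-path with study site)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]         # a path: slope of m ~ x
        X = np.column_stack([np.ones(n), mb, xb])
        coef, *_ = np.linalg.lstsq(X, yb, rcond=None)
        b = coef[1]                          # b path: slope on m in y ~ m + x
        estimates.append(a * b)
    estimates = np.array(estimates)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return estimates.mean(), (lo, hi)
```

If the 95% CI excludes zero, the indirect effect is taken as significant, mirroring how the intervals in Table 3 are read.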
5. Conclusions
The research shows that achieving normoglycemia in people with diabetes during pregnancy can reduce adverse pregnancy outcomes and improve the long-term health of both mother and child [48,49,50]. However, achieving and maintaining glycemic control requires significant commitment and behavioral changes on the part of the patient, which the presence of depression can influence. Here, we found that SRH mediated the association between depressive symptoms and diabetes self-efficacy in one clinical setting that consisted mainly of Hispanic/Latinas. While generalizability is limited, these findings suggest that further research is needed to understand the role of contextual (i.e., clinical setting) and individual-level factors (e.g., race and ethnicity) that moderate the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy.
[ "1.1. Self-Rated Health", "1.2. Diabetes Self-Efficacy and Depression", "1.3. Research Objectives", "2. Materials and Methods", "2.1. Measures", "2.1.1. Demographic Questionnaire", "2.1.2. Edinburgh Postnatal Depression Scale (EPDS)", "2.1.3. Diabetes Self-Efficacy Scale (DSES)", "2.1.4. Self-Rated Health (SRH)", "2.2. Statistical Analysis Plan", "3.1. Depressive Symptoms, Self-Rated Health, and Diabetes Self-Efficacy", "3.2. Mediation Analysis", "3.3. Moderated Mediation Analysis", "Limitations" ]
[ "Self-rated health (SRH) considers an individual’s perception of their health and wellness. Self-rated health is a widely used measure of health that predicts morbidity, mortality, and health services [11,12]. Poor SRH is associated with a higher risk of type 2 diabetes [13,14]. Schytt and Hildingsson [15] found that SRH may decrease during pregnancy and postpartum or one year after giving birth. Others found an association between low SRH and perinatal depressive symptoms [16]. In a sample of Latina women, Lara-Cinisomo [17] found an association between diabetes, perinatal depression, and SRH, with worse SRH during pregnancy. The findings suggest that SRH can be a critical factor to explore in prenatal individuals with diabetes. Still, SRH’s association between depressive symptoms in pregnancy and diabetes self-efficacy is not fully explored.", "Self-efficacy, defined as the level of self-confidence required to perform a specific behavior within their ability efficiently, is one of the most significant factors in behavior change to strengthen the proper management of diabetes [18,19]. Self-efficacy increases adherence to blood glucose monitoring, diet, insulin injections, and exercise [20]; and plays a pivotal role in successful diabetes management [21]. However, the presence of mental health disorders among individuals with diabetes may limit a person’s ability to perform diabetes self-care behaviors [22], including being physically active, monitoring glucose, controlling diet, and adhering to medications [7,23]. People with depression experience functional decline, limiting effective lifestyle changes vital for diabetes self-care management [24,25]. Depression during pregnancy is critical given the global prevalence [26,27,28] and risk for the birthing person and infant [29,30,31]. 
Therefore, evaluating the associations between psychological well-being and diabetes self-efficacy during pregnancy is critical.", "This study aimed to test the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy in a sample of racially and ethnically diverse pregnant people. Because the healthcare setting might affect the mediating associations, we conducted a moderated mediation analysis. We hypothesized that SRH would mediate the association between depressive symptoms and self-efficacy. We also hypothesized that there would be moderated mediation, where mediation differed by study site (see Figure 1).", "This was an observational, cross-sectional study of 137 racially and ethnically diverse pregnant individuals diagnosed with Type 1, Type 2, or gestational diabetes mellitus (GDM) at two clinical study sites. Patients between 27 and 40 weeks of gestational age and diagnosed with diabetes were approached about the study by approved healthcare staff. Patients consented to participate, and if written consent was granted, they self-administered a short demographic questionnaire and the survey items described below (see Figure 2). Participants were not compensated. Data collection was conducted between November 2017 and March 2020. Data collection ended at the start of the pandemic.\nThe two study sites were located in Southern California (CA) and Central Illinois (IL). The California site is a teaching, safety-net hospital with a predominantly Latina population. Over 90% of the patients are considered medically indigent or are enrolled in the state Medicaid program (MediCal; unpublished data). This site follows the California Diabetes and Pregnancy Program Sweet Success Guidelines for Care [32]. 
Pregnant people with pre-existing diabetes (Type 1, 2, or gestational) or a new diagnosis of gestational diabetes were referred for education to a certified diabetes educator nurse, who recruited them for this study at an initial or return visit. Obstetricians, in consultation with on-site Maternal-Fetal Medicine (MFM) specialists, managed care of the pregnancy, comorbidities, and delivery. Screening for diabetes or GDM is performed with the International Association of Diabetes and Pregnancy Study Groups criteria with a first-trimester hemoglobin A1c and a 24–28 week 2-h glucose challenge test [33,34]. In contrast, the IL site utilizes the Carpenter-Coustan method, including a risk-based screening approach during the first trimester [35]. Patients are assessed at their first prenatal visit, and an additional 1-h oral glucose tolerance test (OGTT) is ordered if the patient is at risk for gestational diabetes mellitus (GDM), including a history of GDM, a history of a macrosomic infant in a prior pregnancy, a family history of diabetes mellitus, obesity, etc. Alternatively, a 1-h OGTT is administered at 24–28 weeks gestation. A referral to a certified diabetes care and education specialist nurse is given to all patients who fail this screening test. If diabetes is uncontrolled and medical management is required, the patient is referred to MFM.\nTo be eligible to participate, patients had to be 18–45 years of age, have a singleton pregnancy, be at least 27 weeks pregnant, be diagnosed with diabetes (Type 1, Type 2, or GDM), and be able to speak, read, and write in English or Spanish. Individuals with end-stage renal disease, dementia, or blindness were excluded because these conditions could interfere with survey completion.\n 2.1. Measures The following is a description of the self-administered measures. All measures were available in English and Spanish. Individuals completed the survey using their preferred language.\n 2.1.1. 
Demographic Questionnaire This instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories.\n 2.1.2. Edinburgh Postnatal Depression Scale (EPDS) This 10-item, widely used instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84.\n 2.1.3. Diabetes Self-Efficacy Scale (DSES) This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of the items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both versions of the measure are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach’s alpha for the scale was α = 0.88.\n 2.1.4. Self-Rated Health (SRH) SRH was measured through a single question, “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant?” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reverse coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health measures [42]. This measure has also been used with diabetic populations, including pregnant people [17,43].\n 2.2. Statistical Analysis Plan Descriptive statistics were computed for all study variables using SPSS 28.0. Categorical variables were summarized with frequencies and percentages.
Means and standard deviations for continuous variables were computed. Fisher’s exact test, t-tests, and chi-square determined associations between dichotomous and categorical demographic characteristics and the outcome (DSES). Unadjusted linear regression tested associations between continuous demographic variables and DSES. Site-level differences in demographic characteristics, prenatal depression (PND measured using the EPDS scores), and diabetes self-efficacy score (DSES) were also explored using bivariate analyses. Mediation was tested in a model to assess the significant effect of SRH on the association between EPDS and DSES. EPDS was the predictor, SRH was the mediator, and DSES was the outcome variable. A moderated mediation model was used to examine whether the site moderated these indirect effects. The model included covariates significantly associated with the outcome (DSES). Mediation and moderated mediation were tested using Mplus using full information maximum likelihood [44]. All 137 subjects were used in the analyses. Unstandardized coefficients were used to estimate the mediation and moderated mediation. Inferences on indirect effects were tested using a bootstrap approach with 5000 samples [45]. Bias corrected bootstrap standard errors, and 95% confidence intervals of the direct and indirect effects were calculated. A 95% confident interval that does not include zero indicated that parameters were statistically significant.\nDescriptive statistics were computed for all study variables using SPSS 28.0. Categorical variables were summarized with frequencies and percentages. Means and standard deviations for continuous variables were computed. Fisher’s exact test, t-tests, and chi-square determined associations between dichotomous and categorical demographic characteristics and the outcome (DSES). Unadjusted linear regression tested associations between continuous demographic variables and DSES. 
Site-level differences in demographic characteristics, prenatal depression (PND measured using the EPDS scores), and diabetes self-efficacy score (DSES) were also explored using bivariate analyses. Mediation was tested in a model to assess the significant effect of SRH on the association between EPDS and DSES. EPDS was the predictor, SRH was the mediator, and DSES was the outcome variable. A moderated mediation model was used to examine whether the site moderated these indirect effects. The model included covariates significantly associated with the outcome (DSES). Mediation and moderated mediation were tested using Mplus using full information maximum likelihood [44]. All 137 subjects were used in the analyses. Unstandardized coefficients were used to estimate the mediation and moderated mediation. Inferences on indirect effects were tested using a bootstrap approach with 5000 samples [45]. Bias corrected bootstrap standard errors, and 95% confidence intervals of the direct and indirect effects were calculated. A 95% confident interval that does not include zero indicated that parameters were statistically significant.", "The following is a description of the self-administered measures. All measures were available in English and Spanish. Individuals completed the survey using their preferred language.\n 2.1.1. Demographic Questionnaire This instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories.\nThis instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories.\n 2.1.2. Edinburgh Postnatal Depression Scale (EPDS) This 10-item widely used instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. 
Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84.\nThis 10-item widely used instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84.\n 2.1.3. Diabetes Self-Efficacy Scale (DSES) This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. 
Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both measure versions are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach’s alpha for the scale was α = 0.88.\nThis 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both measure versions are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach’s alpha for the scale was α = 0.88.\n 2.1.4. Self-Rated Health (SRH) SRH was measured through a single question, “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant.” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reversed coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health [42]. This measure has also been used with diabetic populations, including pregnant people [17,43]\nSRH was measured through a single question, “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant.” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reversed coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health [42]. 
This measure has also been used with diabetic populations, including pregnant people [17,43]", "This instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories.", "This 10-item widely used instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84.", "This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both measure versions are reliable and valid for assessing self-efficacy in diabetes management [41]. 
The Cronbach’s alpha for the scale was α = 0.88.", "SRH was measured through a single question, “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant.” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reversed coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health [42]. This measure has also been used with diabetic populations, including pregnant people [17,43]", "Descriptive statistics were computed for all study variables using SPSS 28.0. Categorical variables were summarized with frequencies and percentages. Means and standard deviations for continuous variables were computed. Fisher’s exact test, t-tests, and chi-square determined associations between dichotomous and categorical demographic characteristics and the outcome (DSES). Unadjusted linear regression tested associations between continuous demographic variables and DSES. Site-level differences in demographic characteristics, prenatal depression (PND measured using the EPDS scores), and diabetes self-efficacy score (DSES) were also explored using bivariate analyses. Mediation was tested in a model to assess the significant effect of SRH on the association between EPDS and DSES. EPDS was the predictor, SRH was the mediator, and DSES was the outcome variable. A moderated mediation model was used to examine whether the site moderated these indirect effects. The model included covariates significantly associated with the outcome (DSES). Mediation and moderated mediation were tested using Mplus using full information maximum likelihood [44]. All 137 subjects were used in the analyses. Unstandardized coefficients were used to estimate the mediation and moderated mediation. Inferences on indirect effects were tested using a bootstrap approach with 5000 samples [45]. 
Bias corrected bootstrap standard errors, and 95% confidence intervals of the direct and indirect effects were calculated. A 95% confident interval that does not include zero indicated that parameters were statistically significant.", "As shown in Table 2, mean EPDS scores were low (5.18, SD = 4.37). The mean DSES score for the entire sample was 8.05 (SD = 1.70). However, the average SRH was considered low at 2.92 (SD = 1.03). Unadjusted regression analysis indicated that race/ethnicity, age, and site were significantly associated with the outcome (DSES). Individuals who identified as Latina or biracial reported significantly higher DSES compared to non-Hispanic White individuals (B = 1.18, t (130) = 3.74, p < 0.001 and B = 1.74, t (130) = 2.08, p = 0.040). There was also a significantly positive association between age and DSES (B = 0.06, t (133) = 2.54, p = 0.013). Lastly, there was a significant difference by site (B = −1.08, t (133) = −3.79, p < 0.001), with higher mean DSES scores among individuals in CA versus IL. However, chi-square tests showed that race and ethnicity were significantly associated with site. Therefore, race and ethnicity were not included in the models. Thus, age was the only covariate in the models.\nWe determined whether there were differences in the outcome (DSES) and the predictor (EPDS), and mediator variables (SRH) between the study sites. While individuals at the IL site had slightly higher mean EPDS scores, the difference was not statistically significant [t (129) = −1.935, p = 0.055]. Differences in mean SRH scores were not statistically different by study site [t (134) = 1.117, p = 0.266]. There was a significant difference in DSES by site, with individuals in the CA reporting significantly higher mean DSES [t (133) = 3.789, p < 0.001] compared to those in IL. EPDS scores were significantly and negatively correlated with SRH (r = −0.368, p < 0.001) and DSES (r = −0.393, p < 0.001). 
SRH was positively and significantly correlated with DSES (r = 0.311, p < 0.001).", "Findings from the mediation analysis revealed that SRH mediated the association between EPDS and DSES (see Table 3; Estimate = −0.037, SE = 0.016, 95% CI = −0.077, −0.011). The results also show a negative association between depressive symptoms (EPDS) and SRH (Estimate = −0.09, SE = 0.019, 95% CI = −0.129, −0.053). A negative association between EPDS and DSES was also observed (Estimate = −0.075, SE = 0.035, 95% CI = −0.148, −0.009). The findings also revealed a positive association between SRH and DSES (Estimate = 0.407, SE = 0.157, 95% CI = 0.099, 0.715).", "Regression analysis testing the joint effect of site and EPDS on SHR showed that the study site moderated the effect of EPDS on SHR (F = 8.33, p = 0.005). The results from the moderated mediation reported in Table 4 and Figure 3 show that the indirect effect of SRH on the association between EDPS and DSES differed between the two sites (Estimate = 0.103, SE = 0.039, 95%CI = 0.028, 0.182). The mediating effect of SRH was significant in the CA site (Estimate = −0.054, SE = 0.024, 95% CI = −0.11, −0.014), but not the IL site (Estimate = −0.012, SE = 0.013, 95% CI = −0.046, 0.009). The total effect was significant in CA (Estimate = −0.129, SE = 0.038, 95% CI = −0.211, −0.064) and IL (Estimate = −0.086, SE = 0.037, 95% CI = −0.162, −0.017).", "This cross-sectional study presents several strengths but is also not without limitations. First, our study included a cross-sectional convenience sample of pregnant people with diabetes. The final sample was smaller than the target sample of 405 intended to detect significant mediated effects [47]. Nevertheless, this study yielded findings that merit further investigation, such as the potential role of clinical care settings and patient race and ethnicity. 
Therefore, future studies should replicate our design with a larger sample and equal proportions of racial and ethnic individuals from a similar clinical setting to test the role of race and ethnicity. Future studies should also consider examining the role of clinical care settings (e.g., context, treatment approaches) to determine the mechanism that might drive the moderating effect of the study site on the mediating role of SRH between depressive symptoms and diabetes self-efficacy. Second, we did not confirm a clinical diagnosis of depression using diagnostic measures with a clinical assessment. Additionally, depressive symptoms were measured once in late pregnancy. As mentioned, future studies should assess depressive symptoms early in pregnancy and preferably before diabetes testing, and additional assessments of a patient’s mental health can be conducted over time to identify potential escalation in symptoms. Clinicians and researchers can measure self-efficacy when patients are diagnosed with diabetes and throughout gestation to assess changes that can inform patient care. Third, we did not use directed acyclic graphs to identify the covariates for the model. Fourth, as described above, recruitment included patients with different types of diabetes. Related, we did not collect International Classification of Diseases (ICD) codes. Therefore, future studies should capture ICD codes and consider comparing equal proportions of birthing people by type of diabetes and test the associations reported." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "1.1. Self-Rated Health", "1.2. Diabetes Self-Efficacy and Depression", "1.3. Research Objectives", "2. Materials and Methods", "2.1. Measures", "2.1.1. Demographic Questionnaire", "2.1.2. Edinburgh Postnatal Depression Scale (EPDS)", "2.1.3. Diabetes Self-Efficacy Scale (DSES)", "2.1.4. Self-Rated Health (SRH)", "2.2. Statistical Analysis Plan", "3. Results", "3.1. Depressive Symptoms, Self-Rated Health, and Diabetes Self-Efficacy", "3.2. Mediation Analysis", "3.3. Moderated Mediation Analysis", "4. Discussion", "Limitations", "5. Conclusions" ]
[ "According to the Centers for Disease Control (2018), 1–2% of individuals who identify as women have type 1 or 2 diabetes, and approximately 6–9% of pregnant people will develop gestational diabetes. Between 2000 and 2010, gestational diabetes increased by 56%, and the percentage of women with type 1 and type 2 diabetes before pregnancy increased by 37% [1]. Type 2 diabetes is prevalent among minority ethnic groups, including people of African, Black Caribbean, South Asian, Middle Eastern, Central, and South American family origin [2,3]. Type 2 diabetes is projected to affect 693 million people worldwide by 2045 [4]. Diabetes increases the risks of adverse pregnancy outcomes for both parent and child, including preeclampsia, polyhydramnios, cesarean delivery, premature birth, neonatal hypoglycemia, birth defects, respiratory distress syndrome, and hyperbilirubinemia in the neonate [5,6,7,8]. Gestational diabetes also has been linked to long-term adverse health outcomes for pregnant people and their offspring [5,6,7,8]. Achieving adequate glycemic control is the cornerstone of preventing short- and long-term adverse health outcomes in people with diabetes during pregnancy. Treatment typically consists of lifestyle, behavioral and dietary changes, home glucose monitoring, and medical therapy with oral antihypoglycemics and/or insulin for persistent hyperglycemia. Though screening for diabetes in pregnancy is common, there is wide heterogeneity in reported improvements in pregnancy outcomes with treatment [5,6]. Adequate glycemic control during pregnancy has been demonstrated to reduce complications. Still, barriers to healthcare access, racial and ethnic disparities, including a higher prevalence of diabetes among people of color, and maternal comorbidities, such as mental health issues, may limit a person’s ability to achieve adequate blood glucose control [2,9,10]. 
Despite the known risks for poor outcomes, few studies have investigated factors simultaneously addressing diabetes and perinatal mental health.\n 1.1. Self-Rated Health Self-rated health (SRH) considers an individual’s perception of their health and wellness. Self-rated health is a widely used measure of health that predicts morbidity, mortality, and health services [11,12]. Poor SRH is associated with a higher risk of type 2 diabetes [13,14]. Schytt and Hildingsson [15] found that SRH may decrease during pregnancy and postpartum or one year after giving birth. Others found an association between low SRH and perinatal depressive symptoms [16]. In a sample of Latina women, Lara-Cinisomo [17] found an association between diabetes, perinatal depression, and SRH, with worse SRH during pregnancy. The findings suggest that SRH can be a critical factor to explore in prenatal individuals with diabetes. Still, SRH’s association between depressive symptoms in pregnancy and diabetes self-efficacy is not fully explored.\nSelf-rated health (SRH) considers an individual’s perception of their health and wellness. Self-rated health is a widely used measure of health that predicts morbidity, mortality, and health services [11,12]. Poor SRH is associated with a higher risk of type 2 diabetes [13,14]. Schytt and Hildingsson [15] found that SRH may decrease during pregnancy and postpartum or one year after giving birth. Others found an association between low SRH and perinatal depressive symptoms [16]. In a sample of Latina women, Lara-Cinisomo [17] found an association between diabetes, perinatal depression, and SRH, with worse SRH during pregnancy. The findings suggest that SRH can be a critical factor to explore in prenatal individuals with diabetes. Still, SRH’s association between depressive symptoms in pregnancy and diabetes self-efficacy is not fully explored.\n 1.2. 
Diabetes Self-Efficacy and Depression Self-efficacy, defined as the level of self-confidence required to perform a specific behavior within their ability efficiently, is one of the most significant factors in behavior change to strengthen the proper management of diabetes [18,19]. Self-efficacy increases adherence to blood glucose monitoring, diet, insulin injections, and exercise [20]; and plays a pivotal role in successful diabetes management [21]. However, the presence of mental health disorders among individuals with diabetes may limit a person’s ability to perform diabetes self-care behaviors [22], including being physically active, monitoring glucose, controlling diet, and adhering to medications [7,23]. People with depression experience functional decline, limiting effective lifestyle changes vital for diabetes self-care management [24,25]. Depression during pregnancy is critical given the global prevalence [26,27,28] and risk for the birthing person and infant [29,30,31]. Therefore, evaluating the associations between psychological well-being and diabetes self-efficacy during pregnancy is critical.\nSelf-efficacy, defined as the level of self-confidence required to perform a specific behavior within their ability efficiently, is one of the most significant factors in behavior change to strengthen the proper management of diabetes [18,19]. Self-efficacy increases adherence to blood glucose monitoring, diet, insulin injections, and exercise [20]; and plays a pivotal role in successful diabetes management [21]. However, the presence of mental health disorders among individuals with diabetes may limit a person’s ability to perform diabetes self-care behaviors [22], including being physically active, monitoring glucose, controlling diet, and adhering to medications [7,23]. People with depression experience functional decline, limiting effective lifestyle changes vital for diabetes self-care management [24,25]. 
Depression during pregnancy is critical given the global prevalence [26,27,28] and risk for the birthing person and infant [29,30,31]. Therefore, evaluating the associations between psychological well-being and diabetes self-efficacy during pregnancy is critical.\n 1.3. Research Objectives This study aimed to test the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy in a sample of racially and ethnically diverse pregnant people. Because the healthcare setting might affect the mediating associations, we conducted a moderated mediation analysis. We hypothesized that SRH would mediate the association between depressive symptoms and self-efficacy. We also hypothesized that there would be moderated mediation, where mediation differed by study site (see Figure 1).\nThis study aimed to test the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy in a sample of racially and ethnically diverse pregnant people. Because the healthcare setting might affect the mediating associations, we conducted a moderated mediation analysis. We hypothesized that SRH would mediate the association between depressive symptoms and self-efficacy. We also hypothesized that there would be moderated mediation, where mediation differed by study site (see Figure 1).", "Self-rated health (SRH) considers an individual’s perception of their health and wellness. Self-rated health is a widely used measure of health that predicts morbidity, mortality, and health services [11,12]. Poor SRH is associated with a higher risk of type 2 diabetes [13,14]. Schytt and Hildingsson [15] found that SRH may decrease during pregnancy and postpartum or one year after giving birth. Others found an association between low SRH and perinatal depressive symptoms [16]. In a sample of Latina women, Lara-Cinisomo [17] found an association between diabetes, perinatal depression, and SRH, with worse SRH during pregnancy. 
The findings suggest that SRH can be a critical factor to explore in prenatal individuals with diabetes. Still, SRH’s association between depressive symptoms in pregnancy and diabetes self-efficacy is not fully explored.", "Self-efficacy, defined as the level of self-confidence required to perform a specific behavior within their ability efficiently, is one of the most significant factors in behavior change to strengthen the proper management of diabetes [18,19]. Self-efficacy increases adherence to blood glucose monitoring, diet, insulin injections, and exercise [20]; and plays a pivotal role in successful diabetes management [21]. However, the presence of mental health disorders among individuals with diabetes may limit a person’s ability to perform diabetes self-care behaviors [22], including being physically active, monitoring glucose, controlling diet, and adhering to medications [7,23]. People with depression experience functional decline, limiting effective lifestyle changes vital for diabetes self-care management [24,25]. Depression during pregnancy is critical given the global prevalence [26,27,28] and risk for the birthing person and infant [29,30,31]. Therefore, evaluating the associations between psychological well-being and diabetes self-efficacy during pregnancy is critical.", "This study aimed to test the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy in a sample of racially and ethnically diverse pregnant people. Because the healthcare setting might affect the mediating associations, we conducted a moderated mediation analysis. We hypothesized that SRH would mediate the association between depressive symptoms and self-efficacy. 
We also hypothesized that there would be moderated mediation, where mediation differed by study site (see Figure 1).", "This was an observational, cross-sectional study of 137 racially and ethnically diverse pregnant individuals diagnosed with Type 1, Type 2, or gestational diabetes mellitus (GDM) at two clinical study sites. Patients between 27 and 40 weeks of gestational age and diagnosed with diabetes were approached about the study by approved healthcare staff. Patients consented to participate, and if written consent was granted, they self-administered a short demographic questionnaire and the survey items described below (see Figure 2). Participants were not compensated. Data collection was conducted between November 2017 and March 2020. Data collection ended at the start of the pandemic.\nThe two study sites were located in Southern California (CA), and Central Illinois (IL). The California site is a teaching, safety net hospital with a predominantly Latina population. Over 90% of the patients are considered medically indigent or are enrolled in the state Medicaid program (MediCal), unpublished data. This site follows the California Diabetes and Program, Sweet Success Guidelines for Care [32]. Pregnant people with pre-existing diabetes (Type 1, 2, or gestational) or a new diagnosis of gestational diabetes were referred for education to a certified diabetes educator nurse, who recruited them for this study at an initial or return visit. Obstetricians, in consultation with on-site Maternal-Fetal Medicine (MFM) specialists, managed care of the pregnancy, comorbidities, and delivery. Screening for diabetes or GDM is performed with the International Association of Diabetes in Pregnancy Study Groups criteria with a first-trimester hemoglobin A1c and 24–28 week 2-h glucose challenge test [33,34]. In contrast, the IL site utilizes the Carpenter Coustan method, including a risk-based screening approach during the first trimester [35]. 
Patients are assessed at their first prenatal visit, and an additional 1-h oral glucose tolerance test (OGTT) is ordered if the patient is at risk for gestational diabetes mellitus (GDM), including a history of GDM, a history of a macrosomic infant in a prior pregnancy, a family history of diabetes mellitus, or obesity. Otherwise, a 1-h OGTT is administered at 24–28 weeks gestation. A referral to a certified diabetes care and education specialist nurse is given to all patients who fail this screening test. If diabetes is uncontrolled and medical management is required, the patient is referred to MFM.
To be eligible to participate, patients had to be 18–45 years of age, have a singleton pregnancy, be at least 27 weeks pregnant, be diagnosed with diabetes (Type 1, Type 2, or GDM), and be able to speak, read, and write in English or Spanish. Individuals with end-stage renal disease, dementia, or blindness were excluded because these conditions could interfere with survey completion.
 2.1. Measures The following is a description of the self-administered measures. All measures were available in English and Spanish. Individuals completed the survey using their preferred language.
 2.1.1. Demographic Questionnaire This instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories.
 2.1.2. Edinburgh Postnatal Depression Scale (EPDS) This widely used 10-item instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3).
After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84.
 2.1.3. Diabetes Self-Efficacy Scale (DSES) This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both versions are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach’s alpha for the scale was α = 0.88.
 2.1.4. Self-Rated Health (SRH) SRH was measured through a single question: “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant?” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reverse coded for analytic purposes. This SRH measure is a subjective predictor of mortality comparable to objective health measures [42]. This measure has also been used with diabetic populations, including pregnant people [17,43].
 2.2. Statistical Analysis Plan Descriptive statistics were computed for all study variables using SPSS 28.0. Categorical variables were summarized with frequencies and percentages. Means and standard deviations were computed for continuous variables. Fisher’s exact tests, t-tests, and chi-square tests determined associations between dichotomous and categorical demographic characteristics and the outcome (DSES). Unadjusted linear regression tested associations between continuous demographic variables and DSES. Site-level differences in demographic characteristics, prenatal depression (measured using EPDS scores), and diabetes self-efficacy (DSES) scores were also explored using bivariate analyses. Mediation was tested in a model to assess the effect of SRH on the association between EPDS and DSES. EPDS was the predictor, SRH was the mediator, and DSES was the outcome variable. A moderated mediation model was used to examine whether the site moderated these indirect effects.
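In standard path-analytic notation (ours, not the article’s), the mediation model just described corresponds to two linear equations, with the moderated mediation allowing the EPDS → SRH path to vary by site:

```latex
\begin{aligned}
\mathrm{SRH}_i  &= i_1 + a\,\mathrm{EPDS}_i + e_{1i}\\
\mathrm{DSES}_i &= i_2 + c'\,\mathrm{EPDS}_i + b\,\mathrm{SRH}_i + e_{2i}\\
\text{indirect effect} &= a\,b, \qquad \text{total effect} = c' + a\,b\\
\text{moderated $a$-path:}\quad a(\mathrm{Site}_i) &= a_0 + a_1\,\mathrm{Site}_i
\end{aligned}
```

Under this notation, a 95% bootstrap confidence interval for the product $a\,b$ that excludes zero is the significance criterion for mediation.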
The model included covariates significantly associated with the outcome (DSES). Mediation and moderated mediation were tested in Mplus using full information maximum likelihood [44]. All 137 subjects were used in the analyses. Unstandardized coefficients were used to estimate the mediation and moderated mediation. Inferences on indirect effects were tested using a bootstrap approach with 5000 samples [45]. Bias-corrected bootstrap standard errors and 95% confidence intervals of the direct and indirect effects were calculated. A 95% confidence interval that does not include zero indicated that a parameter was statistically significant.
The sample demographic characteristics are reported in Table 1. The mean age was 30.47 (SD = 6.24), and the age of diabetes diagnosis was slightly younger, 28.60 (SD = 7.38), which was significantly different [t (107) = 40.283, p < 0.001]. Half the sample self-identified as Hispanic/Latina, a third were single, more than half worked at least part-time, and nearly half of the sample had more than a high school education. Given the differences between the two study sites, we examined differences in demographic characteristics by site.
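Bivariate site comparisons of this kind (t-tests for continuous variables, chi-square tests for categorical ones) can be sketched with SciPy; the numbers below are hypothetical and chosen only for illustration, not drawn from the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-site values for illustration only (not the study data).
age_dx_ca = rng.normal(30, 7, 82)   # age at diabetes diagnosis, CA site
age_dx_il = rng.normal(27, 7, 55)   # age at diabetes diagnosis, IL site
t, p = stats.ttest_ind(age_dx_ca, age_dx_il)   # two-sample t-test by site

# Chi-square test of association between site and a 2-level categorical
# variable (rows: category, columns: CA/IL), e.g. ethnicity by site.
table = np.array([[60, 10],
                  [22, 45]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
```

For small cell counts, `stats.fisher_exact(table)` would be the analogue of the Fisher’s exact tests mentioned in the analysis plan.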
There was a significant difference in the mean age of diagnosis by site [t (106) = 2.129, p = 0.036], with individuals at the IL site diagnosed at a younger age, on average. There was also a significant association between language and site (p < 0.001); Spanish data collection was unavailable at the IL site. There was a significant association between race/ethnicity and site [χ2 (4) = 94.031, p < 0.001]. The CA site had a higher proportion of Hispanic/Latina participants. In contrast, IL had a higher proportion of non-Hispanic White participants. Employment status and a history of depression diagnosis were also significantly associated with the site [χ2 (3) = 9.250, p = 0.026 and χ2 (3) = 25.113, p < 0.001], with a higher proportion of employment and a history of depression in IL. A higher proportion of individuals in IL versus CA met the cut-off for at-risk depression (18.2% versus 11%, respectively), a difference that was not statistically significant (p = 0.318).\n 3.1. Depressive Symptoms, Self-Rated Health, and Diabetes Self-Efficacy As shown in Table 2, mean EPDS scores were low (5.18, SD = 4.37). The mean DSES score for the entire sample was 8.05 (SD = 1.70). However, the average SRH was considered low at 2.92 (SD = 1.03). Unadjusted regression analysis indicated that race/ethnicity, age, and site were significantly associated with the outcome (DSES). Individuals who identified as Latina or biracial reported significantly higher DSES compared to non-Hispanic White individuals (B = 1.18, t (130) = 3.74, p < 0.001 and B = 1.74, t (130) = 2.08, p = 0.040). There was also a significantly positive association between age and DSES (B = 0.06, t (133) = 2.54, p = 0.013). Lastly, there was a significant difference by site (B = −1.08, t (133) = −3.79, p < 0.001), with higher mean DSES scores among individuals in CA versus IL. However, chi-square tests showed that race and ethnicity were significantly associated with site. 
Therefore, race and ethnicity were not included in the models. Thus, age was the only covariate in the models.\nWe determined whether there were differences in the outcome (DSES) and the predictor (EPDS), and mediator variables (SRH) between the study sites. While individuals at the IL site had slightly higher mean EPDS scores, the difference was not statistically significant [t (129) = −1.935, p = 0.055]. Differences in mean SRH scores were not statistically different by study site [t (134) = 1.117, p = 0.266]. There was a significant difference in DSES by site, with individuals in the CA reporting significantly higher mean DSES [t (133) = 3.789, p < 0.001] compared to those in IL. EPDS scores were significantly and negatively correlated with SRH (r = −0.368, p < 0.001) and DSES (r = −0.393, p < 0.001). SRH was positively and significantly correlated with DSES (r = 0.311, p < 0.001).\nAs shown in Table 2, mean EPDS scores were low (5.18, SD = 4.37). The mean DSES score for the entire sample was 8.05 (SD = 1.70). However, the average SRH was considered low at 2.92 (SD = 1.03). Unadjusted regression analysis indicated that race/ethnicity, age, and site were significantly associated with the outcome (DSES). Individuals who identified as Latina or biracial reported significantly higher DSES compared to non-Hispanic White individuals (B = 1.18, t (130) = 3.74, p < 0.001 and B = 1.74, t (130) = 2.08, p = 0.040). There was also a significantly positive association between age and DSES (B = 0.06, t (133) = 2.54, p = 0.013). Lastly, there was a significant difference by site (B = −1.08, t (133) = −3.79, p < 0.001), with higher mean DSES scores among individuals in CA versus IL. However, chi-square tests showed that race and ethnicity were significantly associated with site. Therefore, race and ethnicity were not included in the models. 
Thus, age was the only covariate in the models.\nWe determined whether there were differences in the outcome (DSES) and the predictor (EPDS), and mediator variables (SRH) between the study sites. While individuals at the IL site had slightly higher mean EPDS scores, the difference was not statistically significant [t (129) = −1.935, p = 0.055]. Differences in mean SRH scores were not statistically different by study site [t (134) = 1.117, p = 0.266]. There was a significant difference in DSES by site, with individuals in the CA reporting significantly higher mean DSES [t (133) = 3.789, p < 0.001] compared to those in IL. EPDS scores were significantly and negatively correlated with SRH (r = −0.368, p < 0.001) and DSES (r = −0.393, p < 0.001). SRH was positively and significantly correlated with DSES (r = 0.311, p < 0.001).\n 3.2. Mediation Analysis Findings from the mediation analysis revealed that SRH mediated the association between EPDS and DSES (see Table 3; Estimate = −0.037, SE = 0.016, 95% CI = −0.077, −0.011). The results also show a negative association between depressive symptoms (EPDS) and SRH (Estimate = −0.09, SE = 0.019, 95% CI = −0.129, −0.053). A negative association between EPDS and DSES was also observed (Estimate = −0.075, SE = 0.035, 95% CI = −0.148, −0.009). The findings also revealed a positive association between SRH and DSES (Estimate = 0.407, SE = 0.157, 95% CI = 0.099, 0.715).\nFindings from the mediation analysis revealed that SRH mediated the association between EPDS and DSES (see Table 3; Estimate = −0.037, SE = 0.016, 95% CI = −0.077, −0.011). The results also show a negative association between depressive symptoms (EPDS) and SRH (Estimate = −0.09, SE = 0.019, 95% CI = −0.129, −0.053). A negative association between EPDS and DSES was also observed (Estimate = −0.075, SE = 0.035, 95% CI = −0.148, −0.009). The findings also revealed a positive association between SRH and DSES (Estimate = 0.407, SE = 0.157, 95% CI = 0.099, 0.715).\n 3.3. 
Moderated Mediation Analysis Regression analysis testing the joint effect of site and EPDS on SHR showed that the study site moderated the effect of EPDS on SHR (F = 8.33, p = 0.005). The results from the moderated mediation reported in Table 4 and Figure 3 show that the indirect effect of SRH on the association between EDPS and DSES differed between the two sites (Estimate = 0.103, SE = 0.039, 95%CI = 0.028, 0.182). The mediating effect of SRH was significant in the CA site (Estimate = −0.054, SE = 0.024, 95% CI = −0.11, −0.014), but not the IL site (Estimate = −0.012, SE = 0.013, 95% CI = −0.046, 0.009). The total effect was significant in CA (Estimate = −0.129, SE = 0.038, 95% CI = −0.211, −0.064) and IL (Estimate = −0.086, SE = 0.037, 95% CI = −0.162, −0.017).\nRegression analysis testing the joint effect of site and EPDS on SHR showed that the study site moderated the effect of EPDS on SHR (F = 8.33, p = 0.005). The results from the moderated mediation reported in Table 4 and Figure 3 show that the indirect effect of SRH on the association between EDPS and DSES differed between the two sites (Estimate = 0.103, SE = 0.039, 95%CI = 0.028, 0.182). The mediating effect of SRH was significant in the CA site (Estimate = −0.054, SE = 0.024, 95% CI = −0.11, −0.014), but not the IL site (Estimate = −0.012, SE = 0.013, 95% CI = −0.046, 0.009). The total effect was significant in CA (Estimate = −0.129, SE = 0.038, 95% CI = −0.211, −0.064) and IL (Estimate = −0.086, SE = 0.037, 95% CI = −0.162, −0.017).", "As shown in Table 2, mean EPDS scores were low (5.18, SD = 4.37). The mean DSES score for the entire sample was 8.05 (SD = 1.70). However, the average SRH was considered low at 2.92 (SD = 1.03). Unadjusted regression analysis indicated that race/ethnicity, age, and site were significantly associated with the outcome (DSES). 
Individuals who identified as Latina or biracial reported significantly higher DSES compared to non-Hispanic White individuals (B = 1.18, t (130) = 3.74, p < 0.001 and B = 1.74, t (130) = 2.08, p = 0.040). There was also a significantly positive association between age and DSES (B = 0.06, t (133) = 2.54, p = 0.013). Lastly, there was a significant difference by site (B = −1.08, t (133) = −3.79, p < 0.001), with higher mean DSES scores among individuals in CA versus IL. However, chi-square tests showed that race and ethnicity were significantly associated with site. Therefore, race and ethnicity were not included in the models. Thus, age was the only covariate in the models.\nWe determined whether there were differences in the outcome (DSES) and the predictor (EPDS), and mediator variables (SRH) between the study sites. While individuals at the IL site had slightly higher mean EPDS scores, the difference was not statistically significant [t (129) = −1.935, p = 0.055]. Differences in mean SRH scores were not statistically different by study site [t (134) = 1.117, p = 0.266]. There was a significant difference in DSES by site, with individuals in the CA reporting significantly higher mean DSES [t (133) = 3.789, p < 0.001] compared to those in IL. EPDS scores were significantly and negatively correlated with SRH (r = −0.368, p < 0.001) and DSES (r = −0.393, p < 0.001). SRH was positively and significantly correlated with DSES (r = 0.311, p < 0.001).", "Findings from the mediation analysis revealed that SRH mediated the association between EPDS and DSES (see Table 3; Estimate = −0.037, SE = 0.016, 95% CI = −0.077, −0.011). The results also show a negative association between depressive symptoms (EPDS) and SRH (Estimate = −0.09, SE = 0.019, 95% CI = −0.129, −0.053). A negative association between EPDS and DSES was also observed (Estimate = −0.075, SE = 0.035, 95% CI = −0.148, −0.009). 
The findings also revealed a positive association between SRH and DSES (Estimate = 0.407, SE = 0.157, 95% CI = 0.099, 0.715).

Regression analysis testing the joint effect of site and EPDS on SRH showed that the study site moderated the effect of EPDS on SRH (F = 8.33, p = 0.005). The results from the moderated mediation reported in Table 4 and Figure 3 show that the indirect effect of SRH on the association between EPDS and DSES differed between the two sites (Estimate = 0.103, SE = 0.039, 95% CI = 0.028, 0.182). The mediating effect of SRH was significant in the CA site (Estimate = −0.054, SE = 0.024, 95% CI = −0.11, −0.014), but not the IL site (Estimate = −0.012, SE = 0.013, 95% CI = −0.046, 0.009). The total effect was significant in both CA (Estimate = −0.129, SE = 0.038, 95% CI = −0.211, −0.064) and IL (Estimate = −0.086, SE = 0.037, 95% CI = −0.162, −0.017).

This study identified the mediating role of SRH in the association between depressive symptoms and diabetes self-efficacy among pregnant individuals with diabetes. While the empirical evidence shows that depressive symptoms are associated with lower SRH and lower diabetes self-efficacy, the role of SRH in these associations had not been established. Thus, this novel study tested our hypothesis that SRH would mediate the association between depressive symptoms and diabetes self-efficacy. In doing so, we also hypothesized that depressive symptoms would be associated with lower SRH and diabetes self-efficacy. Because the sample was drawn from two clinical settings, we tested the moderated mediation of site. The findings supported our assumptions.

The analyses revealed that SRH mediated the association between depressive symptoms and diabetes self-efficacy. However, we also found that site moderated the mediation; in other words, the mediating effect of SRH differs by clinical setting.
It is critical to note that race and ethnicity were correlated with the study setting; most Hispanic/Latinas were located in CA, and most non-Hispanic Whites were in IL. Therefore, we cannot determine whether the sample population, the geographic area, the clinical approach, or the characteristics of the clinical site explain the moderating effect of study site. Still, this cross-sectional study of a diverse sample of pregnant people diagnosed with diabetes indicated that depressive symptoms were significantly and negatively associated with diabetes self-efficacy, even after controlling for age, which was the only demographic variable associated with the outcome variable. While these findings support previous studies that showed similar associations [46], we must acknowledge that the association was observed in only one of the two clinical settings, suggesting that characteristics of the setting or the clinical population matter. As noted previously, patient race and ethnicity were significantly correlated with study site. As Table 1 shows, most Hispanic/Latina patients were located in the CA study site, whereas most non-Hispanic Whites were in IL. Our previous study of Latina perinatal women showed significant negative associations between SRH and both depressive symptoms and a diabetes diagnosis in pregnancy. However, the mechanisms that explain those associations are not clear. It is also unclear why there was no significant association between depressive symptoms and SRH in the IL sample, despite that sample having marginally higher depressive symptoms (p = 0.055) and slightly lower SRH. While some research shows differences in SRH by race and ethnicity, with non-Hispanic Whites exhibiting higher scores, contextual factors (e.g., rurality) may have negatively affected the IL sample in our study.
Therefore, future studies should account for contextual factors in and outside the healthcare setting.

As with standard practice for assessing glucose control in pregnancy, there are many benefits to evaluating the presence of depressive symptoms during this critical period. First, it is one of the few periods in a person's life when there are numerous opportunities to detect elevated depressive symptoms in a relatively short time. Measuring depressive symptoms over time is critical to identify patients with worsening depressive symptoms. Our study offered a snapshot during the third trimester of pregnancy but also highlighted the role depressive symptoms can have in diabetes self-efficacy in the late stages of gestation. The results also show that SRH may vary by clinical population or setting. Our findings suggest that SRH may be an intervention point for some people, such as Latinas. However, this speculation should be tested with a larger, more diverse sample drawn from similar settings to account for potential healthcare characteristics.

Few studies have examined the associations between depressive symptoms, SRH, and diabetes self-efficacy in pregnant persons. One of the few studies found that pregnant persons with depressive symptoms or diabetes had worse SRH than their counterparts [17]. This is a critical population to study because SRH has been shown to decline in gestation [17]. Thus, it is crucial that clinicians assess patients' SRH early in pregnancy, preferably before or in concert with diabetes testing, to increase disease education and help identify potential intervention points.

Limitations
This cross-sectional study presents several strengths but is also not without limitations. First, our study included a cross-sectional convenience sample of pregnant people with diabetes. The final sample was smaller than the target sample of 405 intended to detect significant mediated effects [47].
Nevertheless, this study yielded findings that merit further investigation, such as the potential role of clinical care settings and patient race and ethnicity. Therefore, future studies should replicate our design with a larger sample and equal proportions of racial and ethnic groups drawn from similar clinical settings to test the role of race and ethnicity. Future studies should also examine the role of clinical care settings (e.g., context, treatment approaches) to determine the mechanism that might drive the moderating effect of study site on the mediating role of SRH between depressive symptoms and diabetes self-efficacy. Second, we did not confirm a clinical diagnosis of depression using diagnostic measures with a clinical assessment. Additionally, depressive symptoms were measured once, in late pregnancy. As mentioned, future studies should assess depressive symptoms early in pregnancy, preferably before diabetes testing, with additional assessments of a patient's mental health conducted over time to identify potential escalation in symptoms. Clinicians and researchers can measure self-efficacy when patients are diagnosed with diabetes and throughout gestation to assess changes that can inform patient care. Third, we did not use directed acyclic graphs to identify the covariates for the model. Fourth, as described above, recruitment included patients with different types of diabetes. Relatedly, we did not collect International Classification of Diseases (ICD) codes. Therefore, future studies should capture ICD codes and consider comparing equal proportions of birthing people by type of diabetes to test the associations reported.

The research shows that achieving normoglycemia in people with diabetes during pregnancy can reduce adverse pregnancy outcomes and improve the long-term health of both mother and child [48,49,50].
However, achieving and maintaining glycemic control requires significant commitment and behavioral changes on the part of the patient, which the presence of depression can influence. Here, we found that SRH mediated the association between depressive symptoms and diabetes self-efficacy in one clinical setting that consisted mainly of Hispanic/Latinas. While generalizability is limited, these findings suggest that further research is needed to understand the role of contextual (i.e., clinical setting) and individual-level factors (e.g., race and ethnicity) that moderate the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy." ]
Keywords: diabetes; self-efficacy; depressive symptoms; self-rated health; pregnant persons
1. Introduction
According to the Centers for Disease Control and Prevention (2018), 1–2% of individuals who identify as women have type 1 or 2 diabetes, and approximately 6–9% of pregnant people will develop gestational diabetes. Between 2000 and 2010, gestational diabetes increased by 56%, and the percentage of women with type 1 and type 2 diabetes before pregnancy increased by 37% [1]. Type 2 diabetes is prevalent among minority ethnic groups, including people of African, Black Caribbean, South Asian, Middle Eastern, Central, and South American family origin [2,3]. Type 2 diabetes is projected to affect 693 million people worldwide by 2045 [4]. Diabetes increases the risks of adverse pregnancy outcomes for both parent and child, including preeclampsia, polyhydramnios, cesarean delivery, premature birth, neonatal hypoglycemia, birth defects, respiratory distress syndrome, and hyperbilirubinemia in the neonate [5,6,7,8]. Gestational diabetes also has been linked to long-term adverse health outcomes for pregnant people and their offspring [5,6,7,8]. Achieving adequate glycemic control is the cornerstone of preventing short- and long-term adverse health outcomes in people with diabetes during pregnancy. Treatment typically consists of lifestyle, behavioral, and dietary changes, home glucose monitoring, and medical therapy with oral antihypoglycemics and/or insulin for persistent hyperglycemia. Though screening for diabetes in pregnancy is common, there is wide heterogeneity in reported improvements in pregnancy outcomes with treatment [5,6]. Adequate glycemic control during pregnancy has been demonstrated to reduce complications. Still, barriers to healthcare access, racial and ethnic disparities, including a higher prevalence of diabetes among people of color, and maternal comorbidities, such as mental health issues, may limit a person's ability to achieve adequate blood glucose control [2,9,10].
Despite the known risks for poor outcomes, few studies have investigated factors simultaneously addressing diabetes and perinatal mental health.

1.1. Self-Rated Health
Self-rated health (SRH) considers an individual's perception of their health and wellness. Self-rated health is a widely used measure of health that predicts morbidity, mortality, and health services use [11,12]. Poor SRH is associated with a higher risk of type 2 diabetes [13,14]. Schytt and Hildingsson [15] found that SRH may decrease during pregnancy and postpartum or one year after giving birth. Others found an association between low SRH and perinatal depressive symptoms [16]. In a sample of Latina women, Lara-Cinisomo [17] found an association between diabetes, perinatal depression, and SRH, with worse SRH during pregnancy. The findings suggest that SRH can be a critical factor to explore in prenatal individuals with diabetes. Still, SRH's role in the association between depressive symptoms in pregnancy and diabetes self-efficacy is not fully explored.

1.2.
Diabetes Self-Efficacy and Depression
Self-efficacy, defined as the level of self-confidence required to perform a specific behavior efficiently within one's ability, is one of the most significant factors in behavior change to strengthen the proper management of diabetes [18,19]. Self-efficacy increases adherence to blood glucose monitoring, diet, insulin injections, and exercise [20], and plays a pivotal role in successful diabetes management [21]. However, the presence of mental health disorders among individuals with diabetes may limit a person's ability to perform diabetes self-care behaviors [22], including being physically active, monitoring glucose, controlling diet, and adhering to medications [7,23]. People with depression experience functional decline, limiting effective lifestyle changes vital for diabetes self-care management [24,25]. Depression during pregnancy is critical given its global prevalence [26,27,28] and the risk it poses for the birthing person and infant [29,30,31]. Therefore, evaluating the associations between psychological well-being and diabetes self-efficacy during pregnancy is critical.

1.3. Research Objectives
This study aimed to test the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy in a sample of racially and ethnically diverse pregnant people. Because the healthcare setting might affect the mediating associations, we conducted a moderated mediation analysis. We hypothesized that SRH would mediate the association between depressive symptoms and self-efficacy.
We also hypothesized that there would be moderated mediation, where mediation differed by study site (see Figure 1).

2. Materials and Methods
This was an observational, cross-sectional study of 137 racially and ethnically diverse pregnant individuals diagnosed with Type 1, Type 2, or gestational diabetes mellitus (GDM) at two clinical study sites. Patients between 27 and 40 weeks of gestational age and diagnosed with diabetes were approached about the study by approved healthcare staff. If written consent was granted, patients self-administered a short demographic questionnaire and the survey items described below (see Figure 2). Participants were not compensated. Data collection was conducted between November 2017 and March 2020 and ended at the start of the pandemic. The two study sites were located in Southern California (CA) and Central Illinois (IL). The California site is a teaching, safety-net hospital with a predominantly Latina population. Over 90% of the patients are considered medically indigent or are enrolled in the state Medicaid program (MediCal; unpublished data). This site follows the California Diabetes and Pregnancy Program Sweet Success Guidelines for Care [32]. Pregnant people with pre-existing diabetes (Type 1 or 2) or a new diagnosis of gestational diabetes were referred for education to a certified diabetes educator nurse, who recruited them for this study at an initial or return visit. Obstetricians, in consultation with on-site Maternal-Fetal Medicine (MFM) specialists, managed care of the pregnancy, comorbidities, and delivery. Screening for diabetes or GDM is performed with the International Association of Diabetes and Pregnancy Study Groups criteria, with a first-trimester hemoglobin A1c and a 2-h glucose challenge test at 24–28 weeks [33,34]. In contrast, the IL site utilizes the Carpenter-Coustan method, including a risk-based screening approach during the first trimester [35].
Patients are assessed at their first prenatal visit, and an additional 1-h oral glucose tolerance test (OGTT) is ordered if the patient is at risk for gestational diabetes mellitus (GDM), including a history of GDM, a history of a macrosomic infant in a prior pregnancy, a family history of diabetes mellitus, obesity, etc. Alternatively, a 1-h OGTT is administered at 24–28 weeks gestation. A referral to a certified diabetes care and education specialist nurse is given to all patients who fail this screening test. If diabetes is uncontrolled and medical management is required, the patient is referred to MFM. To be eligible to participate, patients had to be 18–45 years of age, have a singleton pregnancy, be at least 27 weeks pregnant, be diagnosed with diabetes (Type 1, Type 2, or GDM), and be able to speak, read, and write in English or Spanish. Individuals with end-stage renal disease, dementia, or blindness were excluded because these conditions could interfere with survey completion.

2.1. Measures
The following is a description of the self-administered measures. All measures were available in English and Spanish. Individuals completed the survey using their preferred language.

2.1.1. Demographic Questionnaire
This instrument inquired about the patient's age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories.

2.1.2. Edinburgh Postnatal Depression Scale (EPDS)
This widely used 10-item instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than 'never,' their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than 'never' having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach's alpha for the scale was α = 0.84.

2.1.3. Diabetes Self-Efficacy Scale (DSES)
This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 "Not at all confident" to 10 "Totally confident." The score is the mean of the items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both versions are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach's alpha for the scale was α = 0.88.

2.1.4. Self-Rated Health (SRH)
SRH was measured through a single question: "Compared to other people your age, how would you describe the state of your physical health since you've been pregnant?" Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reverse coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health [42]. This measure has also been used with diabetic populations, including pregnant people [17,43].
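As a rough illustration, the EPDS and DSES scoring rules described above can be sketched in code. The item responses below are hypothetical, and the particular set of reverse-scored EPDS items is an assumption based on the standard instrument, not something the study reports:

```python
def score_epds(responses, reverse_items=(3, 5, 6, 7, 8, 9, 10)):
    """Score the 10-item EPDS: each response is 0-3, the listed items are
    reverse scored (3 - response), and the total ranges 0-30. Returns the
    total and whether it meets the >= 10 at-risk cutoff used in the study."""
    assert len(responses) == 10 and all(0 <= r <= 3 for r in responses)
    total = sum(3 - r if i in reverse_items else r
                for i, r in enumerate(responses, start=1))
    return total, total >= 10


def score_dses(items):
    """Score the 8-item DSES as the mean of eight 1-10 confidence ratings."""
    assert len(items) == 8 and all(1 <= x <= 10 for x in items)
    return sum(items) / len(items)
```

For example, `score_epds([1, 1, 2, 0, 2, 1, 1, 2, 1, 1])` returns a total of 13, which is above the at-risk cutoff.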
2.2. Statistical Analysis Plan
Descriptive statistics were computed for all study variables using SPSS 28.0. Categorical variables were summarized with frequencies and percentages. Means and standard deviations for continuous variables were computed. Fisher's exact test, t-tests, and chi-square tests determined associations between dichotomous and categorical demographic characteristics and the outcome (DSES). Unadjusted linear regression tested associations between continuous demographic variables and DSES. Site-level differences in demographic characteristics, prenatal depression (measured using EPDS scores), and diabetes self-efficacy (DSES) were also explored using bivariate analyses. Mediation was tested in a model assessing the effect of SRH on the association between EPDS and DSES: EPDS was the predictor, SRH was the mediator, and DSES was the outcome variable. A moderated mediation model was used to examine whether site moderated these indirect effects. The model included covariates significantly associated with the outcome (DSES). Mediation and moderated mediation were tested in Mplus using full information maximum likelihood [44]. All 137 subjects were used in the analyses. Unstandardized coefficients were used to estimate the mediation and moderated mediation. Inferences on indirect effects were tested using a bootstrap approach with 5000 samples [45]. Bias-corrected bootstrap standard errors and 95% confidence intervals of the direct and indirect effects were calculated. A 95% confidence interval that does not include zero indicated that parameters were statistically significant.
Inferences on indirect effects were tested using a bootstrap approach with 5000 samples [45]. Bias corrected bootstrap standard errors, and 95% confidence intervals of the direct and indirect effects were calculated. A 95% confident interval that does not include zero indicated that parameters were statistically significant. 2.1. Measures: The following is a description of the self-administered measures. All measures were available in English and Spanish. Individuals completed the survey using their preferred language. 2.1.1. Demographic Questionnaire This instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories. This instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories. 2.1.2. Edinburgh Postnatal Depression Scale (EPDS) This 10-item widely used instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84. 
This 10-item widely used instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84. 2.1.3. Diabetes Self-Efficacy Scale (DSES) This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both measure versions are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach’s alpha for the scale was α = 0.88. This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of items, with higher scores indicating higher confidence or self-efficacy. 
The scale is available in Spanish, the original language of the instrument, and English. Both measure versions are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach’s alpha for the scale was α = 0.88. 2.1.4. Self-Rated Health (SRH) SRH was measured through a single question, “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant.” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reversed coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health [42]. This measure has also been used with diabetic populations, including pregnant people [17,43] SRH was measured through a single question, “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant.” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reversed coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health [42]. This measure has also been used with diabetic populations, including pregnant people [17,43] 2.1.1. Demographic Questionnaire: This instrument inquired about the patient’s age, marital status, education, annual family income, race and ethnicity, history of depression, age of diabetes diagnosis, and other health histories. 2.1.2. Edinburgh Postnatal Depression Scale (EPDS): This 10-item widely used instrument screens for post-childbirth depressive symptoms and has also been shown to be valid during the prenatal period [36]. Responses were based on a 4-point scale (0, 1, 2, and 3). After reverse scoring several items, responses were summed to produce a total score, with a maximum score of 30. Subjects with scores ≥ 10 were classified as at risk for depression [37,38]; a question specific to harming oneself is also included in the measure. 
If a patient indicated any choice other than ‘never,’ their provider was notified immediately, and the standard protocol for immediate treatment or referral was followed (see Figure 2). None of the participants reported anything other than ‘never’ having suicidal ideation. The EPDS has been used with diverse populations, including Spanish-speaking women [39]. The Cronbach’s alpha for the scale was α = 0.84. 2.1.3. Diabetes Self-Efficacy Scale (DSES): This 8-item scale includes questions regarding the extent to which respondents feel confident about their nutrition, exercise, glucose control, and disease management [40]. Participants are asked to rate their confidence on a 10-point Likert scale from 1 “Not at all confident” to 10 “Totally confident.” The score is the mean of items, with higher scores indicating higher confidence or self-efficacy. The scale is available in Spanish, the original language of the instrument, and English. Both measure versions are reliable and valid for assessing self-efficacy in diabetes management [41]. The Cronbach’s alpha for the scale was α = 0.88. 2.1.4. Self-Rated Health (SRH): SRH was measured through a single question, “Compared to other people your age, how would you describe the state of your physical health since you’ve been pregnant.” Responses were based on a 5-point Likert scale from Poor (1) to Excellent (5) and reversed coded for analytic purposes. This SRH measure is a subjective predictor of mortality similar to objective health [42]. This measure has also been used with diabetic populations, including pregnant people [17,43] 2.2. Statistical Analysis Plan: Descriptive statistics were computed for all study variables using SPSS 28.0. Categorical variables were summarized with frequencies and percentages. Means and standard deviations for continuous variables were computed. 
Fisher’s exact test, t-tests, and chi-square determined associations between dichotomous and categorical demographic characteristics and the outcome (DSES). Unadjusted linear regression tested associations between continuous demographic variables and DSES. Site-level differences in demographic characteristics, prenatal depression (PND measured using the EPDS scores), and diabetes self-efficacy score (DSES) were also explored using bivariate analyses. Mediation was tested in a model to assess the significant effect of SRH on the association between EPDS and DSES. EPDS was the predictor, SRH was the mediator, and DSES was the outcome variable. A moderated mediation model was used to examine whether the site moderated these indirect effects. The model included covariates significantly associated with the outcome (DSES). Mediation and moderated mediation were tested using Mplus using full information maximum likelihood [44]. All 137 subjects were used in the analyses. Unstandardized coefficients were used to estimate the mediation and moderated mediation. Inferences on indirect effects were tested using a bootstrap approach with 5000 samples [45]. Bias corrected bootstrap standard errors, and 95% confidence intervals of the direct and indirect effects were calculated. A 95% confident interval that does not include zero indicated that parameters were statistically significant. 3. Results: The sample demographic characteristics are reported in Table 1. The mean age was 30.47 (SD = 6.24), and the age of diabetes diagnosis was slightly younger, 28.60 (SD = 7.38), which was significantly different [t (107) = 40.283, p < 0.001]. Half the sample self-identified as Hispanic/Latina, a third were single, more than half worked at least part-time, and nearly half of the sample had more than high school education. Given the difference in the two study sites, we examined differences in demographic characteristics by site. 
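The bootstrap inference on the indirect effect described in the analysis plan (Section 2.2) can be sketched as follows. This is a minimal illustration on simulated data (not the study data), using a percentile bootstrap rather than the bias-corrected variant and ordinary least squares in place of Mplus’s full information maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for illustration only: x = depressive symptoms (EPDS),
# m = self-rated health (SRH), y = diabetes self-efficacy (DSES),
# with a negative a-path and a positive b-path, as in the study model.
n = 137
x = rng.normal(5, 4, n)
m = -0.09 * x + rng.normal(0, 1, n)
y = 0.40 * m - 0.05 * x + rng.normal(0, 1, n)

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate: a-path (x -> m) times b-path (m -> y, adjusting for x)."""
    a = np.polyfit(x, m, 1)[0]                    # slope of m regressed on x
    X = np.column_stack([np.ones_like(x), x, m])  # regress y on x and m
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]   # coefficient of m
    return a * b

# Percentile bootstrap with 5000 resamples, as in the analysis plan.
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

As in the analysis plan, the indirect effect is judged statistically significant when the bootstrap confidence interval excludes zero.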
There was a significant difference in the mean age of diagnosis by site [t (106) = 2.129, p = 0.036], with individuals at the IL site diagnosed at a younger age, on average. There was also a significant association between language and site (p < 0.001); Spanish data collection was unavailable at the IL site. There was a significant association between race/ethnicity and site [χ2 (4) = 94.031, p < 0.001]: the CA site had a higher proportion of Hispanic/Latina participants, whereas IL had a higher proportion of non-Hispanic White participants. Employment status and a history of depression diagnosis were also significantly associated with site [χ2 (3) = 9.250, p = 0.026 and χ2 (3) = 25.113, p < 0.001], with higher proportions of employment and of a history of depression in IL. A higher proportion of individuals in IL versus CA met the cut-off for at-risk depression (18.2% versus 11%, respectively), a difference that was not statistically significant (p = 0.318).

3.1. Depressive Symptoms, Self-Rated Health, and Diabetes Self-Efficacy
As shown in Table 2, mean EPDS scores were low (5.18, SD = 4.37). The mean DSES score for the entire sample was 8.05 (SD = 1.70). However, the average SRH was considered low at 2.92 (SD = 1.03). Unadjusted regression analyses indicated that race/ethnicity, age, and site were significantly associated with the outcome (DSES). Individuals who identified as Latina or biracial reported significantly higher DSES than non-Hispanic White individuals (B = 1.18, t (130) = 3.74, p < 0.001 and B = 1.74, t (130) = 2.08, p = 0.040). There was also a significant positive association between age and DSES (B = 0.06, t (133) = 2.54, p = 0.013). Lastly, there was a significant difference by site (B = −1.08, t (133) = −3.79, p < 0.001), with higher mean DSES scores among individuals in CA versus IL. However, because chi-square tests showed that race and ethnicity were significantly associated with site, race and ethnicity were not included in the models; thus, age was the only covariate in the models.

We then determined whether the outcome (DSES), the predictor (EPDS), and the mediator (SRH) differed between the study sites. While individuals at the IL site had slightly higher mean EPDS scores, the difference was not statistically significant [t (129) = −1.935, p = 0.055]. Mean SRH scores did not differ significantly by study site [t (134) = 1.117, p = 0.266]. There was a significant difference in DSES by site, with individuals in CA reporting significantly higher mean DSES [t (133) = 3.789, p < 0.001] than those in IL. EPDS scores were significantly and negatively correlated with SRH (r = −0.368, p < 0.001) and DSES (r = −0.393, p < 0.001). SRH was positively and significantly correlated with DSES (r = 0.311, p < 0.001).

3.2. Mediation Analysis
Findings from the mediation analysis revealed that SRH mediated the association between EPDS and DSES (see Table 3; Estimate = −0.037, SE = 0.016, 95% CI = −0.077, −0.011). The results also showed a negative association between depressive symptoms (EPDS) and SRH (Estimate = −0.09, SE = 0.019, 95% CI = −0.129, −0.053). A negative association between EPDS and DSES was also observed (Estimate = −0.075, SE = 0.035, 95% CI = −0.148, −0.009). The findings also revealed a positive association between SRH and DSES (Estimate = 0.407, SE = 0.157, 95% CI = 0.099, 0.715).

3.3. Moderated Mediation Analysis
Regression analysis testing the joint effect of site and EPDS on SRH showed that the study site moderated the effect of EPDS on SRH (F = 8.33, p = 0.005). The results from the moderated mediation, reported in Table 4 and Figure 3, show that the indirect effect of SRH on the association between EPDS and DSES differed between the two sites (Estimate = 0.103, SE = 0.039, 95% CI = 0.028, 0.182). The mediating effect of SRH was significant at the CA site (Estimate = −0.054, SE = 0.024, 95% CI = −0.11, −0.014) but not at the IL site (Estimate = −0.012, SE = 0.013, 95% CI = −0.046, 0.009). The total effect was significant in both CA (Estimate = −0.129, SE = 0.038, 95% CI = −0.211, −0.064) and IL (Estimate = −0.086, SE = 0.037, 95% CI = −0.162, −0.017).

4. Discussion
This study identified the mediating role of SRH in the association between depressive symptoms and diabetes self-efficacy among pregnant individuals with diabetes. While the empirical evidence shows that depressive symptoms are associated with lower SRH and lower diabetes self-efficacy, the role of SRH in these associations had not been established. Thus, this novel study tested our hypothesis that SRH would mediate the association between depressive symptoms and diabetes self-efficacy. In doing so, we also hypothesized that depressive symptoms would be associated with lower SRH and diabetes self-efficacy. Because the sample was drawn from two clinical settings, we tested the moderated mediation of site. The findings supported our assumptions. The analyses revealed that SRH mediated the association between depressive symptoms and diabetes self-efficacy. However, we also found that site moderated the mediation; in other words, the mediating effect of SRH differs by clinical setting.
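As a quick arithmetic check, the mediated (indirect) effect reported in Section 3.2 is consistent with the product of the two path estimates (the standard product-of-coefficients calculation):

```python
# Path estimates reported in Section 3.2 (Table 3)
a = -0.090   # a-path: EPDS -> SRH
b = 0.407    # b-path: SRH -> DSES

indirect = a * b  # product-of-coefficients indirect effect
print(round(indirect, 3))  # → -0.037, matching the reported mediated estimate
```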
It is critical to note that race and ethnicity were correlated with the study setting; most Hispanic/Latinas were located in CA, and most non-Hispanic Whites were in IL. Therefore, we cannot determine whether the sample population, the geographic area, the clinical approach, or the characteristics of the clinical site explain the moderating effect of study site. Still, this cross-sectional study of a diverse sample of pregnant people diagnosed with diabetes indicated that depressive symptoms were significantly and negatively associated with diabetes self-efficacy, even after controlling for age, the only demographic variable associated with the outcome variable. While these findings support previous studies that showed similar associations [46], we must acknowledge that the association was observed in only one of the two clinical settings, suggesting that characteristics of the setting or the clinical population matter. As noted previously, patient race and ethnicity were significantly correlated with study site: as Table 1 shows, most Hispanic/Latina patients were located at the CA study site, whereas most non-Hispanic Whites were in IL. Our previous study with Latina perinatal women showed a significant, negative association between SRH and both depressive symptoms and diabetes diagnosis in pregnancy. However, the mechanisms that explain those associations are not clear. It is also unclear why there was no significant association between depressive symptoms and SRH in the IL sample, despite that sample having marginally higher depressive symptoms (p = 0.055) and slightly lower SRH. While some research shows differences in SRH by race and ethnicity, with non-Hispanic Whites exhibiting higher scores, contextual factors (e.g., rurality) may have negatively affected the IL sample in our study. Therefore, future studies should account for contextual factors in and outside the healthcare setting.
As with standard practice for assessing glucose control in pregnancy, there are many benefits to evaluating the presence of depressive symptoms during this critical period. First, it is one of the few periods in a person’s life with numerous opportunities to detect elevated depressive symptoms in a relatively short time. Measuring depressive symptoms over time is critical to identifying patients with worsening symptoms. Our study offered a snapshot during the third trimester of pregnancy but also highlighted the role depressive symptoms can play in diabetes self-efficacy in the late stages of gestation. The results also show that SRH may vary by clinical population or setting. Our findings suggest that SRH may be an intervention point for some people, such as Latinas. However, this speculation should be tested with a larger, more diverse sample drawn from similar settings to account for potential healthcare characteristics. Few studies have examined the associations between depressive symptoms, SRH, and diabetes self-efficacy in pregnant persons. One of the few studies found that pregnant persons with depressive symptoms or diabetes had worse SRH than their counterparts [17]. This is a critical population to study because SRH has been shown to decline during gestation [17]. Thus, it is crucial that clinicians assess patients’ SRH early in pregnancy, preferably before or in concert with diabetes testing, to increase disease education and help identify potential intervention points.

Limitations
This cross-sectional study presents several strengths but is not without limitations. First, our study included a cross-sectional convenience sample of pregnant people with diabetes. The final sample was smaller than the target sample of 405 intended to detect significant mediated effects [47]. Nevertheless, this study yielded findings that merit further investigation, such as the potential role of clinical care settings and patient race and ethnicity. Therefore, future studies should replicate our design with a larger sample and equal proportions of racial and ethnic groups from a similar clinical setting to test the role of race and ethnicity. Future studies should also consider examining the role of clinical care settings (e.g., context, treatment approaches) to determine the mechanism that might drive the moderating effect of study site on the mediating role of SRH between depressive symptoms and diabetes self-efficacy. Second, we did not confirm a clinical diagnosis of depression using diagnostic measures with a clinical assessment. Additionally, depressive symptoms were measured once, in late pregnancy. As mentioned, future studies should assess depressive symptoms early in pregnancy, preferably before diabetes testing, and additional assessments of a patient’s mental health can be conducted over time to identify potential escalation in symptoms. Clinicians and researchers can measure self-efficacy when patients are diagnosed with diabetes and throughout gestation to assess changes that can inform patient care. Third, we did not use directed acyclic graphs to identify the covariates for the model. Fourth, as described above, recruitment included patients with different types of diabetes. Relatedly, we did not collect International Classification of Diseases (ICD) codes. Therefore, future studies should capture ICD codes and consider comparing equal proportions of birthing people by type of diabetes to test the associations reported here.

5. Conclusions
The research shows that achieving normoglycemia in people with diabetes during pregnancy can reduce adverse pregnancy outcomes and improve the long-term health of both mother and child [48,49,50].
However, achieving and maintaining glycemic control requires significant commitment and behavioral changes on the part of the patient, which the presence of depression can influence. Here, we found that SRH mediated the association between depressive symptoms and diabetes self-efficacy in one clinical setting that consisted mainly of Hispanic/Latinas. While generalizability is limited, these findings suggest that further research is needed to understand the role of contextual (i.e., clinical setting) and individual-level factors (e.g., race and ethnicity) that moderate the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy.
Background: Diabetes leads to risk for pregnant persons and their fetuses and requires behavioral changes that can be compromised by poor mental health. Poor self-rated health (SRH), a reliable predictor of morbidity and mortality, has been associated with depressive symptoms and lower self-efficacy in patients with diabetes. However, it is unclear whether SRH mediates the association between depressive symptoms and self-efficacy in pregnant patients with diabetes and whether the healthcare site moderates the mediation. Thus, we sought to test these associations in a racially and ethnically diverse sample of pregnant individuals diagnosed with diabetes from two clinical settings. Methods: This was an observational, cross-sectional study of 137 pregnant individuals diagnosed with diabetes at two clinical study sites. Participants self-administered a demographic questionnaire and measures designed to assess depressive symptoms, SRH in pregnancy, and diabetes self-efficacy. A mediation model tested the indirect effect of depressive symptoms on diabetes self-efficacy through SRH, and a moderated mediation model tested whether this indirect effect was moderated by the site. Results: The results show that SRH mediated the association between depressive symptoms and diabetes self-efficacy. The results also showed the site moderated the mediating effect of SRH on depressive symptoms and diabetes self-efficacy. Conclusions: Understanding the role of clinical care settings can help inform when and how SRH mediates the association between prenatal depressive symptoms and self-efficacy in diabetic patients.
1. Introduction: According to the Centers for Disease Control and Prevention (2018), 1–2% of individuals who identify as women have type 1 or 2 diabetes, and approximately 6–9% of pregnant people will develop gestational diabetes. Between 2000 and 2010, gestational diabetes increased by 56%, and the percentage of women with type 1 and type 2 diabetes before pregnancy increased by 37% [1]. Type 2 diabetes is prevalent among minority ethnic groups, including people of African, Black Caribbean, South Asian, Middle Eastern, Central, and South American family origin [2,3]. Type 2 diabetes is projected to affect 693 million people worldwide by 2045 [4]. Diabetes increases the risks of adverse pregnancy outcomes for both parent and child, including preeclampsia, polyhydramnios, cesarean delivery, premature birth, neonatal hypoglycemia, birth defects, respiratory distress syndrome, and hyperbilirubinemia in the neonate [5,6,7,8]. Gestational diabetes also has been linked to long-term adverse health outcomes for pregnant people and their offspring [5,6,7,8]. Achieving adequate glycemic control is the cornerstone of preventing short- and long-term adverse health outcomes in people with diabetes during pregnancy. Treatment typically consists of lifestyle, behavioral, and dietary changes, home glucose monitoring, and medical therapy with oral antihyperglycemic agents and/or insulin for persistent hyperglycemia. Though screening for diabetes in pregnancy is common, there is wide heterogeneity in reported improvements in pregnancy outcomes with treatment [5,6]. Adequate glycemic control during pregnancy has been demonstrated to reduce complications. Still, barriers to healthcare access, racial and ethnic disparities, including a higher prevalence of diabetes among people of color, and maternal comorbidities, such as mental health issues, may limit a person’s ability to achieve adequate blood glucose control [2,9,10].
Despite the known risks for poor outcomes, few studies have investigated factors simultaneously addressing diabetes and perinatal mental health.
1.1. Self-Rated Health: Self-rated health (SRH) captures an individual’s perception of their health and wellness. It is a widely used measure of health that predicts morbidity, mortality, and health services use [11,12]. Poor SRH is associated with a higher risk of type 2 diabetes [13,14]. Schytt and Hildingsson [15] found that SRH may decrease during pregnancy and postpartum or one year after giving birth. Others found an association between low SRH and perinatal depressive symptoms [16]. In a sample of Latina women, Lara-Cinisomo [17] found an association between diabetes, perinatal depression, and SRH, with worse SRH during pregnancy. These findings suggest that SRH can be a critical factor to explore in prenatal individuals with diabetes. Still, the role of SRH in the association between depressive symptoms in pregnancy and diabetes self-efficacy has not been fully explored.
1.2. Diabetes Self-Efficacy and Depression: Self-efficacy, defined as the confidence in one’s ability to perform a specific behavior effectively, is one of the most significant factors in behavior change to strengthen the proper management of diabetes [18,19]. Self-efficacy increases adherence to blood glucose monitoring, diet, insulin injections, and exercise [20] and plays a pivotal role in successful diabetes management [21]. However, the presence of mental health disorders among individuals with diabetes may limit a person’s ability to perform diabetes self-care behaviors [22], including being physically active, monitoring glucose, controlling diet, and adhering to medications [7,23]. People with depression experience functional decline, limiting the effective lifestyle changes vital for diabetes self-care management [24,25]. Depression during pregnancy is a critical concern given its global prevalence [26,27,28] and the risks it poses for the birthing person and infant [29,30,31]. Therefore, evaluating the associations between psychological well-being and diabetes self-efficacy during pregnancy is critical.
1.3. Research Objectives: This study aimed to test the mediating effect of SRH on the association between depressive symptoms and diabetes self-efficacy in a sample of racially and ethnically diverse pregnant people. Because the healthcare setting might affect the mediating associations, we conducted a moderated mediation analysis. We hypothesized that SRH would mediate the association between depressive symptoms and self-efficacy. We also hypothesized that there would be moderated mediation, where mediation differed by study site (see Figure 1).
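The mediation logic the study describes, an indirect effect of depressive symptoms on diabetes self-efficacy through SRH, can be sketched with a percentile bootstrap, one common approach to mediation inference. This is a hypothetical illustration, not the authors' analysis code: the simulated data, path coefficients, and helper functions below are all invented for the example.

```python
# Hypothetical illustration only: simulated data stand in for the study's variables.
# X = depressive symptoms, M = self-rated health (SRH), Y = diabetes self-efficacy.
# The indirect effect a*b is tested with a percentile bootstrap.
import numpy as np

rng = np.random.default_rng(0)
n = 137  # the study's final sample size
x = rng.normal(size=n)                       # depressive symptoms (standardized)
m = -0.4 * x + rng.normal(size=n)            # SRH worsens as symptoms increase (assumed)
y = 0.5 * m - 0.1 * x + rng.normal(size=n)   # self-efficacy rises with SRH (assumed)

def ols_slope(pred, outcome, covar=None):
    """Slope of `pred` from an OLS regression of `outcome` on the predictors."""
    cols = [np.ones_like(pred), pred] + ([covar] if covar is not None else [])
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), outcome, rcond=None)
    return beta[1]

def indirect_effect(x, m, y):
    a = ols_slope(x, m)           # path a: X -> M
    b = ols_slope(m, y, covar=x)  # path b: M -> Y, controlling for X
    return a * b

# Percentile-bootstrap 95% CI for the indirect effect
idx = rng.integers(0, n, size=(2000, n))
boot = np.array([indirect_effect(x[i], m[i], y[i]) for i in idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

A CI that excludes zero is the usual criterion for a significant indirect effect; a moderated mediation extends this by letting the a or b path differ by a moderator such as study site.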
Keywords: diabetes | self-efficacy | depressive symptoms | self-rated health | pregnant persons
MeSH terms: Pregnancy | Female | Humans | Depression | Mediation Analysis | Surveys and Questionnaires | Diabetes Mellitus | Self Efficacy | Health Status
Features of self-management interventions for people with COPD associated with improved health-related quality of life and reduced emergency department visits: a systematic review and meta-analysis.
PMID: 28652723
BACKGROUND: Self-management interventions (SMIs) are recommended for individuals with COPD to help monitor symptoms and optimize health-related quality of life (HRQOL). However, SMIs vary widely in content, delivery, and intensity, making it unclear which methods and techniques are associated with improved outcomes. This systematic review aimed to summarize the current evidence base surrounding the effectiveness of SMIs for improving HRQOL in people with COPD.
METHODS: Systematic reviews that focused upon SMIs were eligible for inclusion. Intervention descriptions were coded for behavior change techniques (BCTs) that targeted self-management behaviors to address 1) symptoms, 2) physical activity, and 3) mental health. Meta-analyses and meta-regression were used to explore the association between health behaviors targeted by SMIs, the BCTs used, patient illness severity, and modes of delivery, with the impact on HRQOL and emergency department (ED) visits.
RESULTS: Data related to SMI content were extracted from 26 randomized controlled trials identified from 11 systematic reviews. Patients receiving SMIs reported improved HRQOL (standardized mean difference =-0.16; 95% confidence interval [CI] =-0.25, -0.07; P=0.001) and made fewer ED visits (standardized mean difference =-0.13; 95% CI =-0.23, -0.03; P=0.02) compared to patients who received usual care. Patients receiving SMIs targeting mental health alongside symptom management had greater improvement of HRQOL (Q=4.37; P=0.04) and fewer ED visits (Q=5.95; P=0.02) than patients receiving SMIs focused on symptom management alone. Within-group analyses showed that HRQOL was significantly improved in 1) studies with COPD patients with severe symptoms, 2) single-practitioner based SMIs but not SMIs delivered by a multidisciplinary team, 3) SMIs with multiple sessions but not single session SMIs, and 4) both individual- and group-based SMIs.
CONCLUSION: SMIs can be effective at improving HRQOL and reducing ED visits, with those targeting mental health being significantly more effective than those targeting symptom management alone.
[ "Aged", "Aged, 80 and over", "Disease Progression", "Emergency Service, Hospital", "Female", "Health Behavior", "Health Knowledge, Attitudes, Practice", "Health Resources", "Humans", "Lung", "Male", "Mental Health", "Middle Aged", "Pulmonary Disease, Chronic Obstructive", "Quality of Life", "Self Care", "Treatment Outcome" ]
PMCID: 5473493
Introduction
COPD is characterized by airflow limitation and is associated with inflammatory changes that lead to dyspnea, sputum purulence, and persistent coughing. The disease trajectory is one of progressive decline, punctuated by frequent acute exacerbations in symptoms. Patients with COPD have an average of three acute exacerbations per year, and these are the second biggest cause of unplanned hospital admissions in the UK.1–3 As COPD is irreversible, and health-related quality of life (HRQOL) in patients with COPD tends to be low, optimizing HRQOL and reducing hospital admissions have become key priorities in COPD management.4,5 Self-management planning is a recognized quality standard of the National Institute for Health and Care Excellence (NICE) guidelines in the UK,2 and a joint statement by the American Thoracic Society/European Respiratory Society6 emphasized its importance in quality of care. Self-management interventions (SMIs) encourage patients to monitor symptoms when stable and to take appropriate action when symptoms begin to worsen.2 However, there is no consensus on the form and content of effective SMIs and the variation in content may explain previous heterogeneity in effectiveness.2,7,8 A recent Health Technology Assessment (HTA) report on the efficacy of self-management for COPD recommended that future research should 1) “try to identify which are the most effective components of interventions and identify patient-specific factors that may modify this”, and that 2) “behavior change theories and strategies that underpin COPD SMIs need to be better characterized and described”.8 To enable better comparison and replication of intervention components, taxonomies have been developed to classify potential active ingredients of interventions according to preestablished descriptions of behavior change techniques (BCTs).9 BCTs are defined as “an observable, replicable, and irreducible component of an intervention designed to alter or redirect causal processes 
that regulate behavior”.9 While recent reviews have conducted content analysis to help identify effective components of SMIs for patients with COPD through individual patient data analysis,10,11 the coding of intervention content was not performed with established taxonomies, and the link between each BCT and the behavior it targeted (eg, symptom management, physical activity, mental health management) was not clearly specified. This systematic review aims to summarize the current evidence base on the effectiveness of SMIs for improving HRQOL in people with COPD. Conclusions across reviews have been synthesized and evaluated within the context of how self-management was defined. Meta-analyses were performed that explore the relationship between the health behaviors the SMIs target, the BCTs they use to target those behaviors, and subsequent improvement in HRQOL and health care utilization. In addition, we explore the extent to which trial and intervention features influence SMI effects.
Data extraction and analysis
The original studies were sought for further data extraction, to supplement the information reported in the reviews. Data reported at the follow-up time point most closely following the end point of the intervention period were used for meta-analyses. We decided to group patients across time points as patients with COPD have high mortality rates; of those patients who are admitted, 15% will die within 3 months,24 25% will die within 1 year,2 and 50% within 5 years.25 While prespecifying a follow-up time point limits the bias of treatment reactivity, we may be excluding patients with shorter survival, who may not necessarily survive to a later time point, and who are likely to be those most in need of interventions that improve HRQOL. Sensitivity analyses were performed for studies collecting outcome data at less than 6 months and 6-month and 12-month follow-up to explore any potential heterogeneity in the overall analysis. Data from intention-to-treat analyses were used where reported. If two interventions were compared against a control group (eg, action plans vs education vs usual care), data from both intervention arms were included in the main comparison and the number of participants in the control group was halved for each comparison.26 Postintervention outcomes reported as mean and SD were used for analysis. Mean change scores were used if postintervention scores were unavailable. When mean and SD values were unavailable, missing data were imputed using the median instead of the mean and by estimating the SD from the standard error, confidence intervals (CIs), or interquartile range.26,27 Standardized mean differences (SMDs) with 95% CIs were calculated and pooled using a random effects model for all studies. Dichotomous and continuous outcomes were merged using Comprehensive Meta-Analysis (CMA) software (v2.2, Biostat; Englewood, NJ, USA) to produce SMDs for each study, which are equivalent to Cohen’s d.
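The SD imputation rules mentioned here can be made concrete. This is an illustrative sketch under a normality assumption, not the review's actual code; the numeric inputs are invented.

```python
# Illustrative sketch: back-calculating the SD of a mean from a standard error,
# a 95% CI, or an IQR, assuming approximately normal data.
import math

def sd_from_se(se, n):
    # SE = SD / sqrt(n)  =>  SD = SE * sqrt(n)
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    # A 95% CI of a mean spans 2*z standard errors
    return math.sqrt(n) * (upper - lower) / (2 * z)

def sd_from_iqr(iqr):
    # For a normal distribution, IQR ~= 1.349 * SD
    return iqr / 1.349

print(sd_from_se(0.5, 100))                  # 5.0
print(round(sd_from_ci(8.0, 12.0, 100), 2))  # 10.2
print(round(sd_from_iqr(2.7), 2))            # 2.0
```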
SMD values of at least 0.2, 0.5, and 0.8 are indicative of small, medium, and large effect sizes, respectively.28 Heterogeneity across studies was assessed using Cochran Q test and I2 test statistics.26 Random effects subgroup analyses with Q statistic tests were conducted using CMA software. Univariate moderator analyses were conducted to compare effect size between SMIs with moderate/severe COPD, single/multiple sessions, and single/multiple practitioners. Univariate moderator analyses were also used to compare SMIs with/without BCTs targeting mental health self-management and physical activity to examine whether they differed in effect for the number of ED visits in comparison to patients receiving usual care. Random effects univariate meta-regression was conducted using CMA software to examine whether the number of BCTs coded across SMIs was predictive of effectiveness.
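The random-effects pooling and heterogeneity statistics (Cochran's Q, I2) reported throughout the review can be reproduced in miniature. The sketch below implements the standard DerSimonian-Laird estimator; it is not the CMA software's implementation, and the effect sizes and variances are invented for illustration.

```python
# Hedged sketch of a DerSimonian-Laird random-effects meta-analysis producing a
# pooled SMD with 95% CI, Cochran's Q, and I^2. All inputs are made up.
import math

def dersimonian_laird(effects, variances):
    """Pool effect sizes; return pooled estimate, 95% CI, Cochran's Q, and I^2 (%)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance (0 when Q <= df)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2

# Invented SMDs (negative favors the intervention, as with the SGRQ):
smds = [-0.45, -0.05, -0.30, 0.10, -0.20]
variances = [0.02, 0.015, 0.03, 0.025, 0.02]
pooled, ci, q, i2 = dersimonian_laird(smds, variances)
print(f"pooled SMD = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"Q = {q:.2f}, I2 = {i2:.1f}%")
```

When Q exceeds its degrees of freedom, tau2 becomes positive and study weights shrink toward equality, which is why random-effects CIs are wider than fixed-effect ones under heterogeneity.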
Results
Narrative synthesis Eleven reviews were eligible for inclusion,7,10,11,29–36 covering 66 clinical trials (Figure 1). The decision over whether to include three reviews warranted further discussion and the rationale for inclusion/exclusion is detailed in Supplementary material. Reviews varied widely in their definitions of self-management and the number of individual studies that met their inclusion criteria (Tables 1 and 2). Zwerink et al7 reported a significant effect of SMIs on HRQOL but did state that “heterogeneity among interventions, study populations, follow-up time, and outcome measures makes it difficult to formulate clear recommendations regarding the most effective form and content of self-management in COPD”. Jonkman et al10 showed SMIs to improve HRQOL in COPD patients at 6 and 12 months but did not identify any components of the SMIs that were associated with the intervention. The more recent reviews had strict inclusion criteria and were smaller in size as a result.30,33,34,36 Harrison et al33 and Majothi et al34 focused on hospitalized and recently discharged patients. Harrison et al33 did not find any significant differences in total or domain scores for HRQOL. In contrast, Majothi et al et al34 did find a significant effect on total SGRQ, but stressed that this finding should be treated with caution due to variable follow-up assessments. Walters et al36 focused on more restrictive criteria for interventions that were action plans and found no significant effect on HRQOL. Bentsen et al30 were less clear in their definition of an SMI, and this may explain the smaller number of studies. The authors reported that the majority of studies showed a benefit on HRQOL, but no meta-analysis was performed. Eleven reviews were eligible for inclusion,7,10,11,29–36 covering 66 clinical trials (Figure 1). The decision over whether to include three reviews warranted further discussion and the rationale for inclusion/exclusion is detailed in Supplementary material. 
Reviews varied widely in their definitions of self-management and the number of individual studies that met their inclusion criteria (Tables 1 and 2). Zwerink et al7 reported a significant effect of SMIs on HRQOL but did state that “heterogeneity among interventions, study populations, follow-up time, and outcome measures makes it difficult to formulate clear recommendations regarding the most effective form and content of self-management in COPD”. Jonkman et al10 showed SMIs to improve HRQOL in COPD patients at 6 and 12 months but did not identify any components of the SMIs that were associated with the intervention. The more recent reviews had strict inclusion criteria and were smaller in size as a result.30,33,34,36 Harrison et al33 and Majothi et al34 focused on hospitalized and recently discharged patients. Harrison et al33 did not find any significant differences in total or domain scores for HRQOL. In contrast, Majothi et al et al34 did find a significant effect on total SGRQ, but stressed that this finding should be treated with caution due to variable follow-up assessments. Walters et al36 focused on more restrictive criteria for interventions that were action plans and found no significant effect on HRQOL. Bentsen et al30 were less clear in their definition of an SMI, and this may explain the smaller number of studies. The authors reported that the majority of studies showed a benefit on HRQOL, but no meta-analysis was performed. Quantitative synthesis Twenty-six eligible, unique trials provide data on 28 intervention groups (Figure 1 and Table 3). In total, trials reported on 3,518 participants (1,827 intervention, 1,691 control) for this analysis. Mean age of participants was 65.6 (SD =1.6; range =45–89) years. The majority of participants were male (72%). Characteristics of included trials are reported in their respective reviews (Table 2). Table 3 details which specific BCTs were used in each SMI. 
Table 4 displays which intervention features and target behaviors were used in each SMI. SMIs showed a significant but small positive effect in improving HRQOL scores over usual care (SMD =−0.16; 95% CI =−0.25, −0.07; P=0.001). Statistical heterogeneity was moderate but significant (I2=36.6%; P=0.03), suggesting the need for further moderator/subgroup analyses (Table 5). When studies using measures other than the SGRQ (n=6) were excluded, SMIs continued to show a significant effect on improving HRQOL, which was of comparable effect size (SMD =−0.16; 95% CI =−0.26, −0.05; P=0.003). SMIs with 12-month follow-up (n=15) were significantly more effective than usual care (SMD =−0.16; 95% CI =−0.29, −0.03; P=0.02), but significant heterogeneity existed between studies (I2=53.4%; P=0.008). In trials with 6-month follow-up (n=10), there was no significant difference in effect between SMIs and control group on HRQOL (SMD =−0.11; 95% CI =−0.27, 0.04; P=0.14) and even heterogeneity between studies was not significant (I2=26.2%; P=0.20). In trials with a follow-up less than 6-month postintervention (n=7), SMIs were significantly more effective than usual care (SMD =−0.29; 95% CI =−0.48, −0.11; P=0.002) and there was no significant heterogeneity (I2=2.4%; P=0.41). Twenty-six eligible, unique trials provide data on 28 intervention groups (Figure 1 and Table 3). In total, trials reported on 3,518 participants (1,827 intervention, 1,691 control) for this analysis. Mean age of participants was 65.6 (SD =1.6; range =45–89) years. The majority of participants were male (72%). Characteristics of included trials are reported in their respective reviews (Table 2). Table 3 details which specific BCTs were used in each SMI. Table 4 displays which intervention features and target behaviors were used in each SMI. SMIs showed a significant but small positive effect in improving HRQOL scores over usual care (SMD =−0.16; 95% CI =−0.25, −0.07; P=0.001). 
Statistical heterogeneity was moderate but significant (I2=36.6%; P=0.03), suggesting the need for further moderator/subgroup analyses (Table 5). When studies using measures other than the SGRQ (n=6) were excluded, SMIs continued to show a significant effect on improving HRQOL, which was of comparable effect size (SMD =−0.16; 95% CI =−0.26, −0.05; P=0.003). SMIs with 12-month follow-up (n=15) were significantly more effective than usual care (SMD =−0.16; 95% CI =−0.29, −0.03; P=0.02), but significant heterogeneity existed between studies (I2=53.4%; P=0.008). In trials with 6-month follow-up (n=10), there was no significant difference in effect between SMIs and control group on HRQOL (SMD =−0.11; 95% CI =−0.27, 0.04; P=0.14) and even heterogeneity between studies was not significant (I2=26.2%; P=0.20). In trials with a follow-up less than 6-month postintervention (n=7), SMIs were significantly more effective than usual care (SMD =−0.29; 95% CI =−0.48, −0.11; P=0.002) and there was no significant heterogeneity (I2=2.4%; P=0.41). Intervention delivery features In comparison to patients receiving usual care, there were no significant differences in effect size in between-group comparisons of 1) single session vs multiple session SMIs, 2) SMIs delivered by a single practitioner vs multidisciplinary teams, 3) SMIs targeting patients with moderate vs severe symptoms, and 4) individual-based vs group-based SMIs (Table 5). However, within-group moderator analysis showed 1) SMIs to be significantly effective in COPD patients with severe symptoms, whereas no significant effect was observed in studies that recruited patients with moderate symptoms; 2) significant improvement with SMIs delivered by a single practitioner, while no effect with multidisciplinary interventions; 3) no effect with single-session SMIs, but significant improvement with SMIs with multiple sessions; and 4) significant improvement in HRQOL was observed in both individual and group-based SMIs (Table 5). 
SMIs targeting mental health had a significantly greater effect size than SMIs not targeting mental health management (Q=4.37; k=28; P=0.04) (Table 5). Within-group analysis showed that SMIs that did not target mental health had no significant effect on HRQOL. There was no difference in effect size between SMIs targeting and not targeting physical activity, with both groups of SMIs showing significant improvement in HRQOL in comparison to usual care.

Intervention content

All interventions were coded for at least one BCT that targeted symptom management.
Of these 24 interventions, five targeted solely symptom management (20.8%), three targeted management of mental health concerns (12.5%), eleven targeted physical activity (45.8%), and five targeted all three behaviors (20.8%). The number of interventions reporting BCTs that target each of the three behaviors is presented in Tables 3 and 4. Across interventions, a mean of eight BCTs per intervention was coded (SD =3; range =3–13), with a mean of five BCTs (SD =2; range =2–10) for symptom management, one BCT (SD =1; range =0–4) for mental health management, and two BCTs (SD =3; range =0–10) for physical activity. For symptom management, the three most common BCTs reported were “instruction on how to perform a behavior” (n=23/24 trials; 95.8%), “information about health consequences” (n=21/24; 87.5%), and “action planning” (n=16/24; 66.7%). For physical activity, the three most common BCTs reported were “instruction on how to perform a behavior” (n=11/16 trials; 68.8%), “goal setting (behavior)” (n=8/16; 50%), and “demonstration of the behavior” (n=8/16 trials; 50%). For management of mental health concerns, the three most common BCTs reported were strategies to “reduce negative emotions” (n=4/8 trials; 50%), “provide social support (unspecified)” (n=3/8 trials; 37.5%), and “monitoring of emotional consequences” (n=3/8 trials; 37.5%). Sixty-six (70.9%) of the 93 BCTs in the BCTTv1 were not coded in any intervention. There was no significant association between the number of BCTs used and intervention effectiveness for improving HRQOL (β=−0.01; 95% CI =−0.04, 0.01; k=28; Q=1.75; P=0.19).
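The reported β for the association between number of BCTs and effectiveness comes from a random-effects univariate meta-regression run in CMA. As a simplified illustration of the idea, the sketch below fits an inverse-variance weighted least-squares regression of study effect size on BCT count; the data and the function name are invented, and a full random-effects meta-regression would additionally estimate residual between-study variance:

```python
import numpy as np

def weighted_meta_regression(x, effect, var):
    """Inverse-variance weighted least squares: regress study effect
    sizes on a moderator (here, number of BCTs per intervention).
    A fixed-effect simplification of meta-regression."""
    x = np.asarray(x, float)
    y = np.asarray(effect, float)
    w = 1.0 / np.asarray(var, float)
    X = np.column_stack([np.ones_like(x), x])   # intercept + moderator
    W = np.diag(w)
    # Solve the weighted normal equations (X'WX) beta = X'Wy
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    cov = np.linalg.inv(X.T @ W @ X)            # large-sample covariance
    se = np.sqrt(np.diag(cov))
    return beta, se

# Invented data: BCT counts and SMDs for six hypothetical interventions
n_bcts = [3, 5, 8, 10, 12, 13]
smds = [-0.05, -0.20, -0.15, -0.25, -0.10, -0.30]
variances = [0.03, 0.02, 0.04, 0.03, 0.02, 0.05]
(beta0, beta1), (se0, se1) = weighted_meta_regression(n_bcts, smds, variances)
print(f"slope per additional BCT: {beta1:.3f} (SE {se1:.3f})")
```

A slope near zero, as in the review's result, means adding more BCTs did not predict a larger HRQOL benefit.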
Health care use

Overall, patients who received SMIs had significantly fewer ED visits compared to those who received usual care (SMD =−0.13; 95% CI =−0.23, −0.03; n=15; P=0.02). There was no significant heterogeneity in the sample (I2=19.4%; P=0.24). The significant effect of SMIs on the number of ED visits remained when examining only studies with a 12-month follow-up (SMD =−0.17; 95% CI =−0.27, −0.07; n=12; P=0.001), with no significant heterogeneity (I2=10.4%; P=0.34).
Of the three intervention groups that did not have a 12-month follow-up, one used a 3-month follow-up and the other two were from the same study, which used a 6-month follow-up. Thus, meta-analyses were not performed for the 3-month or 6-month follow-up time points. Within-group analyses revealed that patients receiving SMIs targeting mental health made significantly fewer ED visits than patients receiving usual care (SMD =−0.22; 95% CI =−0.32, −0.11; k=5; P<0.001). No difference was observed in the number of ED visits between patients receiving SMIs not targeting mental health and patients receiving usual care (SMD =0.001; 95% CI =−0.14, 0.14; k=10; P=0.99). This led to a significant between-group difference in effect between SMIs targeting mental health and SMIs not targeting mental health (Q=5.95; P=0.02). Patients receiving SMIs targeting physical activity made significantly fewer ED visits compared to those who received usual care (SMD =−0.20; 95% CI =−0.31, −0.08; k=8; P=0.001). No difference was observed between patients receiving SMIs not targeting physical activity and usual care (SMD =−0.03; 95% CI =−0.18, 0.12; k=7; P=0.68). In comparison to usual care, there was no difference in effect between SMIs targeting and not targeting physical activity (Q=3.03; k=15; P=0.08).
Conclusion
SMIs can improve HRQOL and reduce the number of ED visits for patients with COPD, but there is wide variability in effect. To be effective, future interventions should focus on tackling mental health concerns, but need not be multidisciplinary or individually focused.
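The Method section below describes computing per-study SMDs (equivalent to Cohen's d) from group means and SDs, and imputing missing SDs from standard errors or confidence intervals. These follow standard formulas; the sketch below uses invented SGRQ-style numbers, and the function names are my own:

```python
import math

def sd_from_se(se, n):
    """SD recovered from a standard error of the mean: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    """SD recovered from a 95% CI around a mean:
    SD = sqrt(n) * (upper - lower) / (2 * z)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) between treatment and
    control groups, standardized by the pooled SD."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / sd_pooled

# Example: SGRQ total scores (lower = better HRQOL), invented numbers
d = smd(38.0, 12.0, 60, 42.0, 13.0, 58)
print(f"SMD = {d:.2f}")   # prints "SMD = -0.32"; a negative SMD favors the intervention
```

Because lower SGRQ scores indicate better HRQOL, negative SMDs in this review favor self-management over usual care.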
[ "Method", "Search strategy and selection criteria", "Primary outcome", "Classification of intervention content and intervention delivery features", "Narrative synthesis", "Quantitative synthesis", "Intervention delivery features", "Intervention content", "Health care use", "Strengths and limitations", "Conclusion" ]
[ " Search strategy and selection criteria The current review, registered with PROSPERO (CRD42016043311), is available at http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016043311. To focus the search upon high-quality systematic reviews, we searched two databases of systematic reviews: Cochrane Database of Systematic Reviews (Wiley) issue 7 of 12 2016 and Database of Abstracts of Reviews of Effects (Wiley) issue 2 of 4 2015 (latest available). In addition, we searched Ovid MEDLINE® In-Process & Other Non-Indexed Citations and Ovid MEDLINE® 2015 with a systematic review filter available from CADTH.12 All search results were screened for title and abstract by one reviewer (JN), and 20% of the results were screened by a second reviewer (KH-M) to ensure comparability. Two reviewers screened the search results at the full paper review stage. The search strategy (run up to October 1, 2016) combined database-specific thesaurus headings and keywords describing COPD and self-management:\n1. exp Pulmonary Disease, Chronic Obstructive\n2. emphysema$.tw.\n3. (chronic$ adj3 bronchiti$).tw.\n4. (obstruct$ adj3 (pulmonary or lung$ or bronch$ or respirat$)).tw.\n5. (COPD or COAD or COBD or AECB).tw.\n6. or/1–5\n7. exp Self Care/\n8. (self-manag$ or self manag$ or self-car$ or self car$ or self-administ$ or self administ$).tw.\n9. (patient$ adj3 (focus$ or participat$ or centr$ or center$ or empower$ or support$ or collaborat$ or co-operat$ or cooperat$)).tw.\n10. or/7–9\n11. 6 and 10.\nThe review approach provides an overview of existing systematic reviews and is
particularly helpful where multiple systematic reviews have been conducted. A review also provides an opportunity to compare the summaries and findings of previous reviews. In the present review, both a meta-analysis and narrative synthesis were conducted. The narrative synthesis compared the overview of findings as presented by the original authors, whereas meta-analyses were conducted on data from individual studies presented within the reviews. These quantitative analyses are helpful to determine the effectiveness of SM interventions and the factors that influence their effectiveness.\nTo be eligible for the narrative synthesis, reviews had to focus upon interventions that targeted self-management. We sought to explore variations in definitions of self-management used by previous authors, and thus reviews were eligible if they specified they focused on SMIs, irrespective of the definition they applied. Reviews that focused on SMIs in addition to other types of interventions (eg, pulmonary rehabilitation, supervised exercise programs) were only eligible if the SMIs could be clearly separated from the other interventions. Reviews of interventions delivered in primary, secondary, tertiary, outpatient, or community care were eligible.\nTo be eligible for the quantitative analyses, randomized controlled trials delivered in primary, secondary, tertiary, outpatient, or community care were eligible if they 1) targeted patients with COPD (diagnosed by either a clinician/health care practitioner and/or agreed spirometry criteria, ie, forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) <70%),13 2) compared the SMI to a comparison group that received usual care during the study period, and 3) had a measure of HRQOL as an outcome measure. Studies were excluded if they involved mixed disease populations where COPD patients could not be separated for analysis. 
Figure 1 provides a PRISMA diagram of reviews, and the trials within reviews, eligible for inclusion.\n Primary outcome The primary outcome measure was HRQOL measured by the Saint George Respiratory Questionnaire (SGRQ). The SGRQ is a disease-specific instrument designed to measure impact on overall health, daily life, and perceived well-being in patients with obstructive airways disease.14 The measure provides a total score and subdomain scores of symptoms (frequency and severity of symptoms), activities (activities that cause or are limited by breathlessness), and impacts (social functioning and psychological disturbances resulting from airways disease) and is the most frequently used disease-specific measure of HRQOL in this population group.15 Where trials did not use the SGRQ, scores from alternative HRQOL measures were used: the Chronic Respiratory Disease Questionnaire (CRQ), Clinical COPD Questionnaire (CCQ), or Sickness Impact Profile (SIP).16–18 We combined scores across different questionnaires for meta-analyses, as total SGRQ, CRQ, CCQ, and SIP scores have been shown to correlate well and as subdomain constructs share conceptual similarity.19–22 Studies using other HRQOL measures were not eligible.\n Classification of intervention content and intervention delivery features The Behavior Change Techniques Taxonomy version 1 (BCTTv1) was used to code the content of intervention descriptions.9 Intervention descriptions were separately coded for self-management behaviors that targeted 1) symptoms, 2) physical activity, and 3) mental health. For instance, the description “patients were instructed to set themselves a walking goal each day” would be coded as “goal setting (behavior)” only for “physical activity” and not “mental health self-management” or “symptoms self-management.” Consequently, it is important to use an outcome measure where changes in score reflect a change in these behaviors; these three behaviors of symptoms, physical activity, and mental health map directly onto the three subdomains of the SGRQ (ie, Symptoms, Activities, and Impacts, respectively).
Examples of symptom-specific behaviors may include teaching appropriate inhalation techniques or mucus-clearing techniques.2 In contrast, physical activity behaviors may be structured exercise programs, techniques on how to incorporate light activity into daily routine, or energy conservation techniques.2 Finally, mental health-focused behaviors may include teaching patients communication strategies for raising mental health concerns, distraction techniques, relaxation exercises, or stress counseling.2\nTo identify whether any features of the delivery of the intervention itself influenced effectiveness, interventions were coded for intervention provider (multidisciplinary team or single practitioner), intervention format (individual or group-based), and intervention length (single session or multiple session).23 The BCTs identified and intervention features of delivery were coded independently by one reviewer (JN) and checked independently by another (KH-M) (κ=0.89; 95% CI =0.82, 0.96). To determine the length of an intervention, the end point was defined as the final time participants received intervention content from the intervention provider. Intervention contacts solely for data collection or for following up on participants without new content were not classed as intervention sessions. To assess whether intervention effects depend on disease severity, studies were divided into those with patients with mean predicted FEV1 score <50% or ≥50% at baseline.13\nData relating to the number of COPD-related emergency department (ED) visits and/or hospital admissions were extracted from eligible studies, where reported, and used to examine whether patients receiving SMIs reported fewer ED visits than patients receiving usual care.
Subsequently, SMIs were divided between those with and without BCTs targeting mental health and physical activity in order to examine whether they differed in the size of their effect in comparison to patients who received usual care.\n Data extraction and analysis The original studies were sought for further data extraction, to supplement the information reported in the reviews. Data reported at the follow-up time point most closely following the end point of the intervention period were used for meta-analyses. We decided to group patients across time points as patients with COPD have high mortality rates; of those patients who are admitted, 15% will die within 3 months,24 25% will die within 1 year,2 and 50% within 5 years.25 While prespecifying a follow-up time point limits the bias of treatment reactivity, we may be excluding patients with shorter survival, who may not necessarily survive to a later time point and who are likely to be those most in need of interventions that improve HRQOL. Sensitivity analyses were performed for studies collecting outcome data at less than 6 months and at 6-month and 12-month follow-up to explore any potential heterogeneity in the overall analysis. Data from intention-to-treat analyses were used where reported. If two interventions were compared against a control group (eg, action plans vs education vs usual care), data from both intervention arms were included in the main comparison and the number of participants in the control group was halved for each comparison.26\nPostintervention outcomes reported as mean and SD were used for analysis. Mean change scores were used if postintervention scores were unavailable.
When mean and SD values were unavailable, missing data were imputed using the median instead of the mean and by estimating the SD from the standard error, confidence intervals (CIs), or interquartile range.26,27\nStandardized mean differences (SMDs) with 95% CIs were calculated and pooled using a random effects model for all studies. Dichotomous and continuous outcomes were merged using Comprehensive Meta-Analysis (CMA) software (v2.2, Biostat; Englewood, NJ, USA) to produce SMDs for each study, which are equivalent to Cohen’s d. SMD values of at least 0.2, 0.5, and 0.8 are indicative of small, medium, and large effect sizes, respectively.28 Heterogeneity across studies was assessed using Cochran Q test and I2 test statistics.26\nRandom effects subgroup analyses with Q statistic tests were conducted using CMA software. Univariate moderator analyses were conducted to compare effect size between SMIs with moderate/severe COPD, single/multiple sessions, and single/multiple practitioners. Univariate moderator analyses were also used to compare SMIs with/without BCTs targeting mental health self-management and physical activity to examine whether they differed in effect for the number of ED visits in comparison to patients receiving usual care. Random effects univariate meta-regression was conducted using CMA software to examine whether the number of BCTs coded across SMIs was predictive of effectiveness.
", "The current review, registered with PROSPERO (CRD42016043311), is available at http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016043311. To focus the search upon high-quality systematic reviews, we searched two databases of systematic reviews: Cochrane Database of Systematic Reviews (Wiley) issue 7 of 12 2016 and Database of Abstracts of Reviews of Effects (Wiley) issue 2 of 4 2015 (latest available). In addition, we searched Ovid MEDLINE® In-Process & Other Non-Indexed Citations and Ovid MEDLINE® 2015 with a systematic review filter available from CADTH.12 All search results were screened for title and abstract by one reviewer (JN), and 20% of the results were screened by a second reviewer (KH-M) to ensure comparability. Two reviewers screened the search results at the full paper review stage.
The search strategy (run up to October 1, 2016) combined database-specific thesaurus headings and keywords describing COPD and self-management:\n1. exp Pulmonary Disease, Chronic Obstructive\n2. emphysema$.tw.\n3. (chronic$ adj3 bronchiti$).tw.\n4. (obstruct$ adj3 (pulmonary or lung$ or bronch$ or respirat$)).tw.\n5. (COPD or COAD or COBD or AECB).tw.\n6. or/1–5\n7. exp Self Care/\n8. (self-manag$ or self manag$ or self-car$ or self car$ or self-administ$ or self administ$).tw.\n9. (patient$ adj3 (focus$ or participat$ or centr$ or center$ or empower$ or support$ or collaborat$ or co-operat$ or cooperat$)).tw.\n10. or/7–9\n11. 6 and 10.\nThe review approach provides an overview of existing systematic reviews and is particularly helpful where multiple systematic reviews have been conducted. A review also provides an opportunity to compare the summaries and findings of previous reviews. In the present review, both a meta-analysis and narrative synthesis were conducted. The narrative synthesis compared the overview of findings as presented by the original authors, whereas meta-analyses were conducted on data from individual studies presented within the reviews. These quantitative analyses are helpful to determine the effectiveness of SM interventions and the factors that influence their effectiveness.\nTo be eligible for the narrative synthesis, reviews had to focus upon interventions that targeted self-management.
We sought to explore variations in definitions of self-management used by previous authors, and thus reviews were eligible if they specified they focused on SMIs, irrespective of the definition they applied. Reviews that focused on SMIs in addition to other types of interventions (eg, pulmonary rehabilitation, supervised exercise programs) were only eligible if the SMIs could be clearly separated from the other interventions. Reviews of interventions delivered in primary, secondary, tertiary, outpatient, or community care were eligible.\nTo be eligible for the quantitative analyses, randomized controlled trials delivered in primary, secondary, tertiary, outpatient, or community care were eligible if they 1) targeted patients with COPD (diagnosed by either a clinician/health care practitioner and/or agreed spirometry criteria, ie, forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) <70%),13 2) compared the SMI to a comparison group that received usual care during the study period, and 3) had a measure of HRQOL as an outcome measure. Studies were excluded if they involved mixed disease populations where COPD patients could not be separated for analysis. Figure 1 provides a PRISMA diagram of reviews, and the trials within reviews, eligible for inclusion.", "The primary outcome measure was HRQOL measured by the Saint George Respiratory Questionnaire (SGRQ). 
The SGRQ is a disease-specific instrument designed to measure impact on overall health, daily life, and perceived well-being in patients with obstructive airways disease.14 The measure provides a total score and subdomain scores of symptoms (frequency and severity of symptoms), activities (activities that cause or are limited by breathlessness), and impacts (social functioning and psychological disturbances resulting from airways disease) and is the most frequently used disease-specific measure of HRQOL in this population group.15 Where trials did not use the SGRQ, scores from alternative HRQOL measures were used; the Chronic Respiratory Disease Questionnaire (CRQ), Clinical COPD Questionnaire (CCQ), or Sickness Impact Profile (SIP).16–18 We combined scores across different questionnaires for meta-analyses, as total SGRQ, CRQ, CCQ, and SIP scores have been shown to correlate well and as subdomain constructs share conceptual similarity.19–22 Studies using alternative HRQOL measures were not eligible.", "The Behavior Change Techniques Taxonomy version 1 (BCTTv1) was used to code the content of intervention descriptions.9 Intervention descriptions were separately coded for self-management behaviors that targeted 1) symptoms, 2) physical activity, and 3) mental health. For instance, the description “patients were instructed to set themselves a walking goal each day” would be coded as “goal setting (behavior)” only for “physical activity” and not “mental health self-management” or “symptoms self-management.” Consequently, it is important to use an outcome measure where changes in score reflect a change in these behaviors; and these three behaviors of symptoms, physical activity, and mental health directly map on to the three subdomains of the SGRQ (ie, Symptoms, Activities, and Impacts, respectively). 
Examples of symptom-specific behaviors may include teaching appropriate inhalation techniques or mucus-clearing techniques.2 In contrast, physical activity behaviors may be structured exercise programs, techniques on how to incorporate light activity into daily routine, or energy conservation techniques.2 Finally, mental health-focused behaviors may include trying to teach patients communication strategies to help communicate mental health concerns, distraction techniques, relaxation exercises, or stress counseling.2\nTo identify whether any features of the delivery of the intervention itself influenced effectiveness, interventions were coded for intervention provider (multidisciplinary team or single practitioner), intervention format (individual or group-based), and intervention length (single session or multiple session).23 The BCTs identified and intervention features of delivery were coded independently by one reviewer (JN) and checked independently by another (KH-M) (κ=0.89; 95% CI =0.82, 0.96). To determine the length of an intervention, the end point was defined as the final time participants received intervention content from the intervention provider. Intervention contacts solely for data collection or for following up on participants without new content were not classed as intervention sessions. To assess whether intervention effects depend on disease severity, studies were divided into those with patients with mean predicted FEV1 score <50% or ≥50% at baseline.13\nData relating to the number of COPD-related emergency department (ED) visits and/or hospital admissions were extracted from eligible studies, where reported, and used to see whether fewer ED visits were reported in patients receiving SMIs compared to patients receiving usual care. 
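The inter-rater reliability reported for BCT coding (κ=0.89) is Cohen's kappa over the two coders' present/absent judgments. A minimal sketch with illustrative ratings, not the review's actual coding sheet:

```python
# Two coders' present(1)/absent(0) judgments for ten hypothetical BCT items.
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw agreement
# Chance agreement: probability both say "present" plus both say "absent"
p_yes = (sum(coder_a) / n) * (sum(coder_b) / n)
p_no = (1 - sum(coder_a) / n) * (1 - sum(coder_b) / n)
expected = p_yes + p_no
kappa = (observed - expected) / (1 - expected)  # agreement beyond chance
print(round(kappa, 2))  # → 0.58
```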
Subsequently, SMIs were divided between those with and without BCTs targeting mental health and physical activity in order to examine whether they had a difference in the size of their effect in comparison to patients who received usual care.", "Eleven reviews were eligible for inclusion,7,10,11,29–36 covering 66 clinical trials (Figure 1). The decision over whether to include three reviews warranted further discussion and the rationale for inclusion/exclusion is detailed in Supplementary material. Reviews varied widely in their definitions of self-management and the number of individual studies that met their inclusion criteria (Tables 1 and 2). Zwerink et al7 reported a significant effect of SMIs on HRQOL but did state that “heterogeneity among interventions, study populations, follow-up time, and outcome measures makes it difficult to formulate clear recommendations regarding the most effective form and content of self-management in COPD”. Jonkman et al10 showed SMIs to improve HRQOL in COPD patients at 6 and 12 months but did not identify any components of the SMIs that were associated with the intervention. The more recent reviews had strict inclusion criteria and were smaller in size as a result.30,33,34,36 Harrison et al33 and Majothi et al34 focused on hospitalized and recently discharged patients. Harrison et al33 did not find any significant differences in total or domain scores for HRQOL. In contrast, Majothi et al34 did find a significant effect on total SGRQ, but stressed that this finding should be treated with caution due to variable follow-up assessments. Walters et al36 focused on more restrictive criteria for interventions that were action plans and found no significant effect on HRQOL. Bentsen et al30 were less clear in their definition of an SMI, and this may explain the smaller number of studies. 
The authors reported that the majority of studies showed a benefit on HRQOL, but no meta-analysis was performed.", "Twenty-six eligible, unique trials provide data on 28 intervention groups (Figure 1 and Table 3). In total, trials reported on 3,518 participants (1,827 intervention, 1,691 control) for this analysis. Mean age of participants was 65.6 (SD =1.6; range =45–89) years. The majority of participants were male (72%). Characteristics of included trials are reported in their respective reviews (Table 2). Table 3 details which specific BCTs were used in each SMI. Table 4 displays which intervention features and target behaviors were used in each SMI. SMIs showed a significant but small positive effect in improving HRQOL scores over usual care (SMD =−0.16; 95% CI =−0.25, −0.07; P=0.001). Statistical heterogeneity was moderate but significant (I2=36.6%; P=0.03), suggesting the need for further moderator/subgroup analyses (Table 5). When studies using measures other than the SGRQ (n=6) were excluded, SMIs continued to show a significant effect on improving HRQOL, which was of comparable effect size (SMD =−0.16; 95% CI =−0.26, −0.05; P=0.003). SMIs with 12-month follow-up (n=15) were significantly more effective than usual care (SMD =−0.16; 95% CI =−0.29, −0.03; P=0.02), but significant heterogeneity existed between studies (I2=53.4%; P=0.008). In trials with 6-month follow-up (n=10), there was no significant difference in effect between SMIs and control group on HRQOL (SMD =−0.11; 95% CI =−0.27, 0.04; P=0.14) and heterogeneity between studies was not significant (I2=26.2%; P=0.20). 
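The pooled SMDs and I² statistics reported in this synthesis come from random-effects meta-analysis. A minimal DerSimonian-Laird sketch with hypothetical per-study values (the review itself used CMA software):

```python
# Hypothetical per-study SMDs and their within-study variances, standing in
# for the intervention-group comparisons pooled in the review.
smd = [-0.30, -0.05, -0.20, 0.05, -0.25, -0.15]
var = [0.02, 0.03, 0.04, 0.05, 0.03, 0.02]

w = [1.0 / v for v in var]                              # fixed-effect weights
fe = sum(wi * y for wi, y in zip(w, smd)) / sum(w)      # fixed-effect pooled SMD
q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, smd))    # Cochran's Q
df = len(smd) - 1
i2 = max(0.0, (q - df) / q) * 100                       # I^2 heterogeneity (%)

# DerSimonian-Laird between-study variance, then random-effects pooling
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1.0 / (v + tau2) for v in var]
pooled = sum(wi * y for wi, y in zip(w_re, smd)) / sum(w_re)
se = (1.0 / sum(w_re)) ** 0.5
print(round(pooled, 3), round(se, 3), round(i2, 1))  # → -0.172 0.069 0.0
```

With these illustrative values Q falls below its degrees of freedom, so I² is truncated to zero and the random-effects result coincides with the fixed-effect one; the review's actual data showed moderate heterogeneity (I²=36.6%).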
In trials with a follow-up less than 6 months postintervention (n=7), SMIs were significantly more effective than usual care (SMD =−0.29; 95% CI =−0.48, −0.11; P=0.002) and there was no significant heterogeneity (I2=2.4%; P=0.41).", "In comparison to patients receiving usual care, there were no significant differences in effect size in between-group comparisons of 1) single session vs multiple session SMIs, 2) SMIs delivered by a single practitioner vs multidisciplinary teams, 3) SMIs targeting patients with moderate vs severe symptoms, and 4) individual-based vs group-based SMIs (Table 5). However, within-group moderator analysis showed 1) SMIs to be significantly effective in COPD patients with severe symptoms, whereas no significant effect was observed in studies that recruited patients with moderate symptoms; 2) significant improvement with SMIs delivered by a single practitioner, while no effect with multidisciplinary interventions; 3) no effect with single-session SMIs, but significant improvement with SMIs with multiple sessions; and 4) significant improvement in HRQOL was observed in both individual and group-based SMIs (Table 5).\nSMIs targeting mental health had a significantly greater effect size than SMIs not targeting mental health management (Q=4.37; k=28; P=0.04) (Table 5). Within-group analysis showed SMIs that did not target mental health had no significant effect on HRQOL. There was no difference in effect size between SMIs targeting and not targeting physical activity, with both groups of SMIs showing significant improvement in HRQOL in comparison to usual care.", "All interventions were coded for at least one BCT that targeted symptom management. Of these 24 interventions, five targeted solely symptom management (20.8%), three targeted management of mental health concerns (12.5%), eleven targeted physical activity (45.8%), and five targeted all three behaviors (20.8%). 
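The between-group moderator comparisons above (eg, Q=4.37 for SMIs with vs without mental health BCTs) use a Q test on subgroup summary effects. A sketch with hypothetical subgroup values, not the review's estimates:

```python
# Hypothetical subgroup summaries: pooled SMD and standard error for SMIs
# with vs without BCTs targeting mental health (values illustrative only).
pooled = [-0.25, -0.05]   # subgroup pooled SMDs
se = [0.07, 0.08]         # their standard errors

w = [1.0 / s ** 2 for s in se]   # inverse-variance weights per subgroup
grand = sum(wi * p for wi, p in zip(w, pooled)) / sum(w)
# Between-group Q: weighted squared deviations of subgroup effects around
# the weighted grand mean; compared to chi-square with (subgroups - 1) df.
q_between = sum(wi * (p - grand) ** 2 for wi, p in zip(w, pooled))
print(round(q_between, 2))  # → 3.54
```

A Q above the chi-square critical value (3.84 at df=1, α=0.05) would indicate a significant difference between subgroups; these illustrative values fall just short.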
The number of interventions reporting BCTs that target each of the three behaviors is presented in Tables 3 and 4. Across interventions, a mean of eight BCTs per intervention was coded (SD =3; range =3–13), with a mean of five BCTs (SD =2; range =2–10) for symptom management, one BCT (SD =1; range =0–4) for mental health management, and two BCTs (SD =3; range =0–10) for physical activity. For symptom management, the three most common BCTs reported were “instruction on how to perform a behavior” (n=23/24 trials; 95.8%), “information about health consequences” (n=21/24; 87.5%), and “action planning” (n=16/24; 66.7%). For physical activity, the three most common BCTs reported were “instruction on how to perform a behavior” (n=11/16 trials; 68.8%), “goal setting (behavior)” (n=8/16; 50%), and “demonstration of the behavior” (n=8/16 trials; 50%). For management of mental health concerns, the three most common BCTs reported were strategies to “reduce negative emotions” (n=4/8 trials; 50%), “provide social support (unspecified)” (n=3/8 trials; 37.5%), and “monitoring of emotional consequences” (n=3/8 trials; 37.5%). Sixty-six (70.9%) of the 93 BCTs in the BCTTv1 were not coded in any intervention. There was no significant association between the number of BCTs used and intervention effectiveness for improving HRQOL (β=−0.01; 95% CI =−0.04, 0.01; k=28; Q=1.75; P=0.19).", "Overall, patients who received SMIs had significantly fewer ED visits compared to those who received usual care (SMD =−0.13; 95% CI =−0.23, −0.03; n=15; P=0.02). There was no significant heterogeneity in the sample (I2=19.4%; P=0.24). The significant effect of SMIs on the number of ED visits in patients who received SMIs remained when examining only studies with a 12-month follow-up (SMD =−0.17, 95% CI =−0.27, −0.07; n=12; P=0.001) with no significant heterogeneity (I2=10.4%; P=0.34). 
Of the three intervention groups that did not have a 12-month follow-up, only one used a 3-month follow-up and the other two were from the same study, where a 6-month follow-up was used. Thus, meta-analyses were not performed for either 3-month or 6-month follow-up time points.\nWithin-group analyses revealed patients receiving SMIs targeting mental health made significantly fewer ED visits compared to patients receiving usual care (SMD =−0.22; 95% CI =−0.32, −0.11; k=5; P<0.001). No difference was observed in the number of ED visits between patients receiving SMIs not targeting mental health and patients receiving usual care (SMD =0.001; 95% CI =−0.14, 0.14; k=10; P=0.99). This led to a significant between-group difference in effect between SMIs targeting mental health compared to SMIs not targeting mental health (Q=5.95; P=0.02).\nPatients receiving SMIs targeting physical activity made significantly fewer ED visits compared to those who received usual care (SMD =−0.20; 95% CI =−0.31, −0.08; k=8; P=0.001). No difference was observed between patients receiving SMIs not targeting physical activity and usual care (SMD =−0.03; 95% CI =−0.18, 0.12; k=7; P=0.68). In comparison to usual care, there was no difference in effect between SMIs targeting and not targeting physical activity (Q=3.03; k=15; P=0.08).", "Comparing the findings across reviews highlights how the definition of self-management had a direct impact on the number of eligible studies and consequently the conclusions drawn. The difference in conclusions further highlights the need for more detailed content analysis. The current analysis extracted robust empirical data from across reviews and their contributing clinical trials to examine intervention content and structure to isolate what factors may be essential for improving patient outcomes. The use of a standardized taxonomy of definitions allowed comparisons of intervention content across studies and provided a robust basis for synthesis. 
Our approach also highlighted specific BCTs used in a range of contexts, enabling more discernment between intervention features and outcome effectiveness. We used a concise search strategy in order to identify individual trial data and perform novel forms of exploratory analysis that examined the effectiveness of individual intervention components. For this exploratory analysis, small and hard-to-find trials are unlikely to introduce components that do not occur in a range of other trials, and as such it is unnecessary to carry out an exhaustive search to identify all existing trials. However, it is important to stress that while we present a comprehensive summary of SMIs that have been reported in previous reviews, this review does not aim to present the most up-to-date evidence base, as a number of more recent SMI trials will not have been captured in these reviews.\nThere are limitations to the study worth noting. The limited number of studies meant that single rather than multiple variables were entered into the moderator analyses. For example, we could compare single vs multiple session SMIs or individual vs group SMIs but could not examine combinations of the different variables in a multivariate analysis. Thus, while these univariate analyses can helpfully guide intervention development by highlighting potential associations, they should not be interpreted in an additive fashion. Furthermore, meta-regression findings do not imply causality, as factors entered into these analyses were not randomized groups.\nIt was difficult to ascertain the intensity with which some BCTs were administered and the same BCT could be used with varying intensity, eg, the instructors could provide “Feedback on the behavior” on a daily or monthly basis. Ultimately, the utility of this secondary analysis is dependent on the reporting of intervention content by authors. 
It is possible that some BCTs were present in interventions, but not described in sufficient detail to allow coding. While we coded intervention manuals where available, there is a need for more transparency in intervention content in future studies.", "SMIs can improve HRQOL and reduce the number of ED visits for patients with COPD, but there is wide variability in effect. To be effective, future interventions should focus on tackling mental health concerns but need not be multidisciplinary or individually focused." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Method", "Search strategy and selection criteria", "Primary outcome", "Classification of intervention content and intervention delivery features", "Data extraction and analysis", "Results", "Narrative synthesis", "Quantitative synthesis", "Intervention delivery features", "Intervention content", "Health care use", "Discussion", "Strengths and limitations", "Conclusion", "Supplementary material" ]
[ "COPD is characterized by airflow limitation and is associated with inflammatory changes that lead to dyspnea, sputum purulence, and persistent coughing. The disease trajectory is one of progressive decline, punctuated by frequent acute exacerbations in symptoms. Patients with COPD have an average of three acute exacerbations per year, and these are the second biggest cause of unplanned hospital admissions in the UK.1–3 As COPD is irreversible, and health-related quality of life (HRQOL) in patients with COPD tends to be low, optimizing HRQOL and reducing hospital admissions have become key priorities in COPD management.4,5\nSelf-management planning is a recognized quality standard of the National Institute for Health and Care Excellence (NICE) guidelines in the UK,2 and a joint statement by the American Thoracic Society/European Respiratory Society6 emphasized its importance in quality of care. Self-management interventions (SMIs) encourage patients to monitor symptoms when stable and to take appropriate action when symptoms begin to worsen.2 However, there is no consensus on the form and content of effective SMIs and the variation in content may explain previous heterogeneity in effectiveness.2,7,8 A recent Health Technology Assessment (HTA) report on the efficacy of self-management for COPD recommended that future research should 1) “try to identify which are the most effective components of interventions and identify patient-specific factors that may modify this”, and that 2) “behavior change theories and strategies that underpin COPD SMIs need to be better characterized and described”.8 To enable better comparison and replication of intervention components, taxonomies have been developed to classify potential active ingredients of interventions according to preestablished descriptions of behavior change techniques (BCTs).9 BCTs are defined as “an observable, replicable, and irreducible component of an intervention designed to alter or redirect causal 
processes that regulate behavior”.9 While recent reviews have conducted content analysis to help identify effective components of SMIs for patients with COPD through individual patient data analysis,10,11 the coding of intervention content was not performed with established taxonomies and clear understanding between the BCT and the targeted behavior was absent (eg, symptom management, physical activity, mental health management, etc).\nThis systematic review aims to summarize the current evidence base on the effectiveness of SMIs for improving HRQOL in people with COPD. Conclusions across reviews have been synthesized and evaluated within the context of how self-management was defined. Meta-analyses were performed that explore the relationship between health behaviors the SMIs target, the BCT they use to target behaviors, and subsequent improvement in HRQOL and health care utilization. In addition, we explore the extent to which trial and intervention features influence SMI effects.", " Search strategy and selection criteria The current review, registered with PROSPERO (CRD42 016043311), is available at http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016043311. To focus the search upon high-quality systematic reviews, we searched two databases of systematic reviews: Cochrane Database of Systematic Reviews (Wiley) issue 7 of 12 2016, Database of Abstracts of Reviews of Effects (Wiley) issue 2 of 4 2015 (latest available). In addition, we searched Ovid MEDLINE® In-Process & Other Non-Indexed Citations and Ovid MEDLINE® 2015 with a systematic review filter available from CADTH.12 All search results were screened for title and abstract by one reviewer (JN), and 20% of the results were screened by a second reviewer (KH-M) to ensure comparability. Two reviewers screened the search results at the full paper review stage. 
The search strategy (run up to October 1, 2016) combined database-specific thesaurus headings and keywords describing COPD and self-management:\nexp Pulmonary Disease, Chronic Obstructive\nemphysema$.tw.\n(chronic$ adj3 bronchiti$).tw.\n(obstruct$ adj3 (pulmonary or lung$ or bronch$ or respirat$)).tw.\n(COPD or COAD or COBD or AECB).tw.\nor/1–5\nexp Self Care/\n(self-manag$ or self manag$ or self-car$ or self car$ or self-administ$ or self administ$).tw.\n(patient$ adj3 (focus$ or participat$ or centr$ or center$ or empower$ or support$ or collaborat$ or co-operat$ or cooperat$)).tw.\nor/7–9\n6 and 10.\nThe review approach provides an overview of existing systematic reviews and is particularly helpful where multiple systematic reviews have been conducted. A review also provides an opportunity to compare the summaries and findings of previous reviews. In the present review, both a meta-analysis and narrative synthesis were conducted. The narrative synthesis compared the overview of findings as presented by the original authors, whereas meta-analyses were conducted on data from individual studies presented within the reviews. These quantitative analyses are helpful to determine the effectiveness of SM interventions and the factors that influence their effectiveness.\nTo be eligible for the narrative synthesis, reviews had to focus upon interventions that targeted self-management. 
We sought to explore variations in definitions of self-management used by previous authors, and thus reviews were eligible if they specified they focused on SMIs, irrespective of the definition they applied. Reviews that focused on SMIs in addition to other types of interventions (eg, pulmonary rehabilitation, supervised exercise programs) were only eligible if the SMIs could be clearly separated from the other interventions. Reviews of interventions delivered in primary, secondary, tertiary, outpatient, or community care were eligible.\nTo be eligible for the quantitative analyses, randomized controlled trials delivered in primary, secondary, tertiary, outpatient, or community care were eligible if they 1) targeted patients with COPD (diagnosed by either a clinician/health care practitioner and/or agreed spirometry criteria, ie, forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) <70%),13 2) compared the SMI to a comparison group that received usual care during the study period, and 3) had a measure of HRQOL as an outcome measure. Studies were excluded if they involved mixed disease populations where COPD patients could not be separated for analysis. Figure 1 provides a PRISMA diagram of reviews, and the trials within reviews, eligible for inclusion.\n Primary outcome The primary outcome measure was HRQOL measured by the Saint George Respiratory Questionnaire (SGRQ). 
The SGRQ is a disease-specific instrument designed to measure impact on overall health, daily life, and perceived well-being in patients with obstructive airways disease.14 The measure provides a total score and subdomain scores of symptoms (frequency and severity of symptoms), activities (activities that cause or are limited by breathlessness), and impacts (social functioning and psychological disturbances resulting from airways disease) and is the most frequently used disease-specific measure of HRQOL in this population group.15 Where trials did not use the SGRQ, scores from alternative HRQOL measures were used; the Chronic Respiratory Disease Questionnaire (CRQ), Clinical COPD Questionnaire (CCQ), or Sickness Impact Profile (SIP).16–18 We combined scores across different questionnaires for meta-analyses, as total SGRQ, CRQ, CCQ, and SIP scores have been shown to correlate well and as subdomain constructs share conceptual similarity.19–22 Studies using alternative HRQOL measures were not eligible.\n Classification of intervention content and intervention delivery features The Behavior Change Techniques Taxonomy version 1 (BCTTv1) was used to code the content of intervention descriptions.9 Intervention descriptions were separately coded for self-management behaviors that targeted 1) symptoms, 2) physical activity, and 3) mental health. For instance, the description “patients were instructed to set themselves a walking goal each day” would be coded as “goal setting (behavior)” only for “physical activity” and not “mental health self-management” or “symptoms self-management.” Consequently, it is important to use an outcome measure where changes in score reflect a change in these behaviors; and these three behaviors of symptoms, physical activity, and mental health directly map on to the three subdomains of the SGRQ (ie, Symptoms, Activities, and Impacts, respectively). 
Examples of symptom-specific behaviors may include teaching appropriate inhalation techniques or mucus-clearing techniques.2 In contrast, physical activity behaviors may be structured exercise programs, techniques on how to incorporate light activity into daily routine, or energy conservation techniques.2 Finally, mental health-focused behaviors may include trying to teach patients communication strategies to help communicate mental health concerns, distraction techniques, relaxation exercises, or stress counseling.2\nTo identify whether any features of the delivery of the intervention itself influenced effectiveness, interventions were coded for intervention provider (multidisciplinary team or single practitioner), intervention format (individual or group-based), and intervention length (single session or multiple session).23 The BCTs identified and intervention features of delivery were coded independently by one reviewer (JN) and checked independently by another (KH-M) (κ=0.89; 95% CI =0.82, 0.96). To determine the length of an intervention, the end point was defined as the final time participants received intervention content from the intervention provider. Intervention contacts solely for data collection or for following up on participants without new content were not classed as intervention sessions. To assess whether intervention effects depend on disease severity, studies were divided into those with patients with mean predicted FEV1 score <50% or ≥50% at baseline.13\nData relating to the number of COPD-related emergency department (ED) visits and/or hospital admissions were extracted from eligible studies, where reported, and used to see whether fewer ED visits were reported in patients receiving SMIs compared to patients receiving usual care. 
Subsequently, SMIs were divided into those with and without BCTs targeting mental health and physical activity, in order to examine whether their effect sizes differed in comparison to usual care.
Data extraction and analysis

The original studies were sought for further data extraction, to supplement the information reported in the reviews. Data reported at the follow-up time point most closely following the end of the intervention period were used for meta-analyses. We grouped patients across time points because patients with COPD have high mortality rates: of those admitted to hospital, 15% die within 3 months,24 25% within 1 year,2 and 50% within 5 years.25 While prespecifying a follow-up time point limits the bias of treatment reactivity, it may exclude patients with shorter survival, who may not survive to a later time point and who are likely to be those most in need of interventions that improve HRQOL. Sensitivity analyses were performed for studies collecting outcome data at less than 6 months, at 6 months, and at 12 months of follow-up, to explore potential heterogeneity in the overall analysis. Data from intention-to-treat analyses were used where reported. If two interventions were compared against one control group (eg, action plans vs education vs usual care), data from both intervention arms were included in the main comparison and the number of participants in the control group was halved for each comparison.26

Postintervention outcomes reported as mean and SD were used for analysis. Mean change scores were used if postintervention scores were unavailable.
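The halving of a shared control arm described above (the Cochrane Handbook approach for multi-arm trials) can be sketched as a small helper; the three-arm trial and its sample size below are hypothetical:

```python
def split_shared_control(control_n, n_arms):
    """Divide a shared control group's sample size across comparisons,
    so that control participants are not double-counted when each
    intervention arm of a multi-arm trial enters the meta-analysis."""
    base, remainder = divmod(control_n, n_arms)
    # Give any leftover participants to the first comparisons.
    return [base + (1 if i < remainder else 0) for i in range(n_arms)]

# Hypothetical three-arm trial: action plan vs education vs usual care (n=75)
print(split_shared_control(75, 2))  # -> [38, 37]
```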
When mean and SD values were unavailable, missing data were imputed by using the median in place of the mean and by estimating the SD from the standard error, confidence interval (CI), or interquartile range.26,27

Standardized mean differences (SMDs) with 95% CIs were calculated and pooled using a random effects model for all studies. Dichotomous and continuous outcomes were merged using Comprehensive Meta-Analysis (CMA) software (v2.2, Biostat; Englewood, NJ, USA) to produce SMDs for each study, which are equivalent to Cohen’s d. SMD values of at least 0.2, 0.5, and 0.8 indicate small, medium, and large effect sizes, respectively.28 Heterogeneity across studies was assessed using the Cochran Q test and the I2 statistic.26

Random effects subgroup analyses with Q statistic tests were conducted using CMA software. Univariate moderator analyses were conducted to compare effect sizes between SMIs in moderate vs severe COPD, single vs multiple sessions, and single practitioner vs multidisciplinary team delivery. Univariate moderator analyses were also used to compare SMIs with and without BCTs targeting mental health self-management and physical activity, to examine whether they differed in their effect on the number of ED visits in comparison to usual care. Random effects univariate meta-regression was conducted using CMA software to examine whether the number of BCTs coded across SMIs predicted effectiveness.
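The imputation and pooling steps above can be sketched in Python. This is a rough illustration only: the review itself used CMA software, the random-effects weighting shown here assumes the standard DerSimonian–Laird estimator, the SD reconstructions follow the usual Cochrane Handbook approximations, and all trial numbers below are invented:

```python
import math

def sd_from_se(se, n):
    # SD = SE * sqrt(n)
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n):
    # For a 95% CI around a mean: SD = sqrt(n) * (upper - lower) / 3.92
    return math.sqrt(n) * (upper - lower) / 3.92

def sd_from_iqr(iqr):
    # Normal approximation: SD ~ IQR / 1.35
    return iqr / 1.35

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d (an SMD) and its sampling variance for two groups."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with Cochran's Q and I^2."""
    w = [1 / v for v in variances]
    sw = sum(w)
    swd = sum(wi * di for wi, di in zip(w, effects))
    q = sum(wi * di**2 for wi, di in zip(w, effects)) - swd**2 / sw
    df = len(effects) - 1
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical SGRQ change scores (lower = better): intervention vs control
studies = [cohens_d(-6.0, 12.0, 50, -2.0, 12.5, 50),
           cohens_d(-4.5, 11.0, 80, -3.0, 11.0, 82),
           cohens_d(-5.0, 13.0, 40, -1.0, 12.0, 41)]
smd, ci, i2 = pool_random_effects([d for d, _ in studies],
                                  [v for _, v in studies])
```

A negative pooled SMD here, as in the review, favors the intervention because lower SGRQ scores indicate better HRQOL.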
The current review, registered with PROSPERO (CRD42016043311), is available at http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016043311. To focus the search on high-quality systematic reviews, we searched two databases of systematic reviews: the Cochrane Database of Systematic Reviews (Wiley), issue 7 of 12, 2016, and the Database of Abstracts of Reviews of Effects (Wiley), issue 2 of 4, 2015 (latest available). In addition, we searched Ovid MEDLINE® In-Process & Other Non-Indexed Citations and Ovid MEDLINE® 2015 with a systematic review filter available from CADTH.12 All search results were screened by title and abstract by one reviewer (JN), and 20% of the results were screened by a second reviewer (KH-M) to ensure comparability. Two reviewers screened the search results at the full paper review stage.
The search strategy (run up to October 1, 2016) combined database-specific thesaurus headings and keywords describing COPD and self-management:

1. exp Pulmonary Disease, Chronic Obstructive
2. emphysema$.tw.
3. (chronic$ adj3 bronchiti$).tw.
4. (obstruct$ adj3 (pulmonary or lung$ or bronch$ or respirat$)).tw.
5. (COPD or COAD or COBD or AECB).tw.
6. or/1–5
7. exp Self Care/
8. (self-manag$ or self manag$ or self-car$ or self car$ or self-administ$ or self administ$).tw.
9. (patient$ adj3 (focus$ or participat$ or centr$ or center$ or empower$ or support$ or collaborat$ or co-operat$ or cooperat$)).tw.
10. or/7–9
11. 6 and 10

The review-of-reviews approach provides an overview of existing systematic reviews and is particularly helpful where multiple systematic reviews have been conducted. It also provides an opportunity to compare the summaries and findings of previous reviews. In the present review, both a meta-analysis and a narrative synthesis were conducted. The narrative synthesis compared the findings as presented by the original authors, whereas meta-analyses were conducted on data from the individual studies presented within the reviews. These quantitative analyses help to determine the effectiveness of SMIs and the factors that influence their effectiveness.

To be eligible for the narrative synthesis, reviews had to focus upon interventions that targeted self-management.
We sought to explore variations in the definitions of self-management used by previous authors, and thus reviews were eligible if they specified a focus on SMIs, irrespective of the definition applied. Reviews that covered SMIs in addition to other types of interventions (eg, pulmonary rehabilitation, supervised exercise programs) were eligible only if the SMIs could be clearly separated from the other interventions. Reviews of interventions delivered in primary, secondary, tertiary, outpatient, or community care were eligible.

To be eligible for the quantitative analyses, randomized controlled trials delivered in primary, secondary, tertiary, outpatient, or community care were included if they 1) targeted patients with COPD (diagnosed by a clinician/health care practitioner and/or agreed spirometry criteria, ie, forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) <70%),13 2) compared the SMI to a comparison group that received usual care during the study period, and 3) included a measure of HRQOL as an outcome. Studies were excluded if they involved mixed disease populations from which COPD patients could not be separated for analysis. Figure 1 provides a PRISMA diagram of the reviews, and the trials within reviews, eligible for inclusion.

The primary outcome measure was HRQOL measured by the Saint George Respiratory Questionnaire (SGRQ).
Narrative synthesis

Eleven reviews were eligible for inclusion,7,10,11,29–36 covering 66 clinical trials (Figure 1). The decision over whether to include three of the reviews warranted further discussion, and the rationale for their inclusion/exclusion is detailed in the Supplementary material. Reviews varied widely in their definitions of self-management and in the number of individual studies that met their inclusion criteria (Tables 1 and 2). Zwerink et al7 reported a significant effect of SMIs on HRQOL but stated that “heterogeneity among interventions, study populations, follow-up time, and outcome measures makes it difficult to formulate clear recommendations regarding the most effective form and content of self-management in COPD”. Jonkman et al10 showed SMIs to improve HRQOL in COPD patients at 6 and 12 months but did not identify any components of the SMIs that were associated with effectiveness.
The more recent reviews had stricter inclusion criteria and were smaller as a result.30,33,34,36 Harrison et al33 and Majothi et al34 focused on hospitalized and recently discharged patients. Harrison et al33 found no significant differences in total or domain scores for HRQOL. In contrast, Majothi et al34 did find a significant effect on total SGRQ score, but stressed that this finding should be treated with caution because of variable follow-up assessments. Walters et al36 applied more restrictive criteria, covering only interventions that were action plans, and found no significant effect on HRQOL. Bentsen et al30 were less clear in their definition of an SMI, which may explain their smaller number of included studies. The authors reported that the majority of studies showed a benefit on HRQOL, but no meta-analysis was performed.
Quantitative synthesis

Twenty-six eligible, unique trials provided data on 28 intervention groups (Figure 1 and Table 3). In total, the trials reported on 3,518 participants (1,827 intervention, 1,691 control). Mean age of participants was 65.6 (SD =1.6; range =45–89) years, and the majority (72%) were male. Characteristics of the included trials are reported in their respective reviews (Table 2). Table 3 details the specific BCTs used in each SMI, and Table 4 displays the intervention features and target behaviors of each SMI. SMIs showed a small but significant positive effect on HRQOL scores over usual care (SMD =−0.16; 95% CI =−0.25, −0.07; P=0.001). Statistical heterogeneity was moderate but significant (I2=36.6%; P=0.03), suggesting the need for further moderator/subgroup analyses (Table 5). When studies using measures other than the SGRQ (n=6) were excluded, SMIs continued to show a significant effect of comparable size on HRQOL (SMD =−0.16; 95% CI =−0.26, −0.05; P=0.003). SMIs with 12-month follow-up (n=15) were significantly more effective than usual care (SMD =−0.16; 95% CI =−0.29, −0.03; P=0.02), but significant heterogeneity existed between studies (I2=53.4%; P=0.008).
In trials with 6-month follow-up (n=10), there was no significant difference between SMIs and control groups on HRQOL (SMD =−0.11; 95% CI =−0.27, 0.04; P=0.14), and heterogeneity between studies was not significant (I2=26.2%; P=0.20). In trials with follow-up of less than 6 months postintervention (n=7), SMIs were significantly more effective than usual care (SMD =−0.29; 95% CI =−0.48, −0.11; P=0.002), with no significant heterogeneity (I2=2.4%; P=0.41).
Intervention delivery features

In comparison to patients receiving usual care, there were no significant between-group differences in effect size for 1) single-session vs multiple-session SMIs, 2) SMIs delivered by a single practitioner vs multidisciplinary teams, 3) SMIs targeting patients with moderate vs severe symptoms, and 4) individual vs group-based SMIs (Table 5). However, within-group moderator analysis showed that 1) SMIs were significantly effective in COPD patients with severe symptoms, whereas no significant effect was observed in studies recruiting patients with moderate symptoms; 2) SMIs delivered by a single practitioner produced significant improvement, whereas multidisciplinary interventions did not; 3) single-session SMIs had no effect, whereas multiple-session SMIs produced significant improvement; and 4) significant improvement in HRQOL was observed for both individual and group-based SMIs (Table 5).

SMIs targeting mental health had a significantly greater effect size than SMIs not targeting mental health management (Q=4.37; k=28; P=0.04) (Table 5). Within-group analysis showed that SMIs that did not target mental health had no significant effect on HRQOL.
There was no difference in effect size between SMIs targeting and not targeting physical activity, with both groups of SMIs showing significant improvement in HRQOL in comparison to usual care.

Intervention content

All interventions were coded for at least one BCT targeting symptom management. Of these 24 interventions, five targeted solely symptom management (20.8%), three targeted management of mental health concerns (12.5%), eleven targeted physical activity (45.8%), and five targeted all three behaviors (20.8%).
The number of interventions reporting BCTs targeting each of the three behaviors is presented in Tables 3 and 4. Across interventions, a mean of eight BCTs per intervention was coded (SD =3; range =3–13): a mean of five BCTs (SD =2; range =2–10) for symptom management, one BCT (SD =1; range =0–4) for mental health management, and two BCTs (SD =3; range =0–10) for physical activity. For symptom management, the three most common BCTs were “instruction on how to perform a behavior” (n=23/24 trials; 95.8%), “information about health consequences” (n=21/24; 87.5%), and “action planning” (n=16/24; 66.7%). For physical activity, the three most common BCTs were “instruction on how to perform a behavior” (n=11/16 trials; 68.8%), “goal setting (behavior)” (n=8/16; 50%), and “demonstration of the behavior” (n=8/16; 50%). For management of mental health concerns, the three most common BCTs were strategies to “reduce negative emotions” (n=4/8 trials; 50%), “provide social support (unspecified)” (n=3/8; 37.5%), and “monitoring of emotional consequences” (n=3/8; 37.5%). Sixty-six (70.9%) of the 93 BCTs in the BCTTv1 were not coded in any intervention. There was no significant association between the number of BCTs used and intervention effectiveness for improving HRQOL (β=−0.01; 95% CI =−0.04, 0.01; k=28; Q=1.75; P=0.19).
Health care use

Overall, patients who received SMIs had significantly fewer ED visits than those who received usual care (SMD =−0.13; 95% CI =−0.23, −0.03; n=15; P=0.02), with no significant heterogeneity in the sample (I2=19.4%; P=0.24). The significant effect of SMIs on the number of ED visits remained when examining only studies with 12-month follow-up (SMD =−0.17; 95% CI =−0.27, −0.07; n=12; P=0.001), again with no significant heterogeneity (I2=10.4%; P=0.34). Of the three intervention groups without 12-month follow-up, only one used a 3-month follow-up, and the two with 6-month follow-up were from the same study.
Thus, meta-analyses were not performed for either 3-month or 6-month follow-up time points.\nWithin-group analyses revealed patients receiving SMIs targeting mental health made significantly fewer ED visits compared to patients receiving usual care (SMD =−0.22; 95% CI =−0.32, −0.11; k=5; P<0.001). No difference was observed in the number of ED visits between patients receiving SMIs not targeting mental health and patients receiving usual care (SMD =0.001; 95% CI =−0.14, 0.14; k=10; P=0.99). This led to a significant between-group difference in effect between SMIs targeting mental health and SMIs not targeting mental health (Q=5.95; P=0.02).\nPatients receiving SMIs targeting physical activity made significantly fewer ED visits compared to those who received usual care (SMD =−0.20; 95% CI =−0.31, −0.08; k=8; P=0.001). No difference was observed between patients receiving SMIs not targeting physical activity and usual care (SMD =−0.03; 95% CI =−0.18, 0.12; k=7; P=0.68). In comparison to usual care, there was no difference in effect between SMIs targeting and not targeting physical activity (Q=3.03; k=15; P=0.08).", "Eleven reviews were eligible for inclusion,7,10,11,29–36 covering 66 clinical trials (Figure 1). The decision over whether to include three reviews warranted further discussion, and the rationale for inclusion/exclusion is detailed in the Supplementary material. Reviews varied widely in their definitions of self-management and in the number of individual studies that met their inclusion criteria (Tables 1 and 2). Zwerink et al7 reported a significant effect of SMIs on HRQOL but stated that “heterogeneity among interventions, study populations, follow-up time, and outcome measures makes it difficult to formulate clear recommendations regarding the most effective form and content of self-management in COPD”. 
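The between-group contrasts reported above (eg, Q=5.95; P=0.02 for mental health targeting vs not) are standard fixed-effect subgroup tests: with two subgroups, Q is the squared difference of the pooled SMDs divided by the sum of their squared standard errors, referred to a chi-square distribution with 1 df. A minimal stdlib-only sketch; the standard errors are back-calculated from the reported 95% CIs, so the result only approximately reproduces the published Q because of rounding in the reported values:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Back-calculate the standard error of an effect from its 95% CI."""
    return (upper - lower) / (2 * z)

def subgroup_q(effect1, se1, effect2, se2):
    """Cochran's Q for the difference between two independent subgroup effects (df=1)."""
    q = (effect1 - effect2) ** 2 / (se1 ** 2 + se2 ** 2)
    p = math.erfc(math.sqrt(q / 2))  # chi-square (df=1) upper-tail probability
    return q, p

# Reported ED-visit subgroup results (SMD with 95% CI):
se_mh = se_from_ci(-0.32, -0.11)    # SMIs targeting mental health, SMD = -0.22
se_no_mh = se_from_ci(-0.14, 0.14)  # SMIs not targeting mental health, SMD = 0.001
q, p = subgroup_q(-0.22, se_mh, 0.001, se_no_mh)
print(f"Q = {q:.2f}, P = {p:.3f}")  # close to the reported Q=5.95, P=0.02
```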
Jonkman et al10 showed SMIs to improve HRQOL in COPD patients at 6 and 12 months but did not identify any components of the SMIs that were associated with the intervention effect. The more recent reviews had strict inclusion criteria and were smaller in size as a result.30,33,34,36 Harrison et al33 and Majothi et al34 focused on hospitalized and recently discharged patients. Harrison et al33 did not find any significant differences in total or domain scores for HRQOL. In contrast, Majothi et al34 did find a significant effect on total SGRQ, but stressed that this finding should be treated with caution due to variable follow-up assessments. Walters et al36 applied more restrictive criteria, including only interventions that were action plans, and found no significant effect on HRQOL. Bentsen et al30 were less clear in their definition of an SMI, and this may explain the smaller number of included studies. The authors reported that the majority of studies showed a benefit on HRQOL, but no meta-analysis was performed.", "Twenty-six eligible, unique trials provide data on 28 intervention groups (Figure 1 and Table 3). In total, trials reported on 3,518 participants (1,827 intervention, 1,691 control) for this analysis. Mean age of participants was 65.6 (SD =1.6; range =45–89) years. The majority of participants were male (72%). Characteristics of included trials are reported in their respective reviews (Table 2). Table 3 details which specific BCTs were used in each SMI. Table 4 displays which intervention features and target behaviors were used in each SMI. SMIs showed a significant but small positive effect on HRQOL scores over usual care (SMD =−0.16; 95% CI =−0.25, −0.07; P=0.001). Statistical heterogeneity was moderate but significant (I2=36.6%; P=0.03), suggesting the need for further moderator/subgroup analyses (Table 5). 
When studies using measures other than the SGRQ (n=6) were excluded, SMIs continued to show a significant effect on improving HRQOL, which was of comparable effect size (SMD =−0.16; 95% CI =−0.26, −0.05; P=0.003). SMIs with 12-month follow-up (n=15) were significantly more effective than usual care (SMD =−0.16; 95% CI =−0.29, −0.03; P=0.02), but significant heterogeneity existed between studies (I2=53.4%; P=0.008). In trials with 6-month follow-up (n=10), there was no significant difference in effect between SMIs and the control group on HRQOL (SMD =−0.11; 95% CI =−0.27, 0.04; P=0.14), and heterogeneity between studies was not significant (I2=26.2%; P=0.20). In trials with a follow-up of less than 6 months postintervention (n=7), SMIs were significantly more effective than usual care (SMD =−0.29; 95% CI =−0.48, −0.11; P=0.002) and there was no significant heterogeneity (I2=2.4%; P=0.41).", "In comparison to patients receiving usual care, there were no significant differences in effect size in between-group comparisons of 1) single session vs multiple session SMIs, 2) SMIs delivered by a single practitioner vs multidisciplinary teams, 3) SMIs targeting patients with moderate vs severe symptoms, and 4) individual-based vs group-based SMIs (Table 5). However, within-group moderator analysis showed 1) SMIs to be significantly effective in COPD patients with severe symptoms, whereas no significant effect was observed in studies that recruited patients with moderate symptoms; 2) significant improvement with SMIs delivered by a single practitioner, while no effect with multidisciplinary interventions; 3) no effect with single-session SMIs, but significant improvement with SMIs with multiple sessions; and 4) significant improvement in HRQOL was observed in both individual and group-based SMIs (Table 5).\nSMIs targeting mental health had a significantly greater effect size than SMIs not targeting mental health management (Q=4.37; k=28; P=0.04) (Table 5). 
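Two quantities recur throughout these results: the P-value of a pooled SMD under a normal approximation (its standard error can be back-calculated from the reported 95% CI), and Higgins' I² for heterogeneity, I² = max(0, (Q − df)/Q) × 100. A stdlib-only sketch using the overall HRQOL effect reported above; the Q value used for I² is hypothetical (Q itself is not reported), chosen to illustrate how the reported I² arises:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Standard error back-calculated from a 95% confidence interval."""
    return (upper - lower) / (2 * z)

def two_sided_p(effect, se):
    """Two-sided P-value for an effect under a normal approximation."""
    return math.erfc(abs(effect / se) / math.sqrt(2))

def i_squared(q, df):
    """Higgins' I^2: percentage of variability due to heterogeneity rather than chance."""
    return max(0.0, (q - df) / q) * 100

# Overall HRQOL effect reported above: SMD = -0.16, 95% CI (-0.25, -0.07)
se = se_from_ci(-0.25, -0.07)
p = two_sided_p(-0.16, se)
print(f"SE = {se:.3f}, P = {p:.4f}")  # same order as the reported P=0.001 (CI rounding)

# With k=28 intervention groups (df=27), a heterogeneity statistic of Q ~= 42.6
# (hypothetical value, not reported) would yield the reported I^2 of 36.6%:
print(f"I^2 = {i_squared(42.6, 27):.1f}%")
```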
Within-group analysis showed SMIs that did not target mental health had no significant effect on HRQOL. There was no difference in effect size between SMIs targeting and not targeting physical activity, with both groups of SMIs showing significant improvement in HRQOL in comparison to usual care.", "All interventions were coded for at least one BCT that targeted symptom management. Of these 24 interventions, five targeted solely symptom management (20.8%), three targeted management of mental health concerns (12.5%), eleven targeted physical activity (45.8%), and five targeted all three behaviors (20.8%). The number of interventions reporting BCTs that target each of the three behaviors is presented in Tables 3 and 4. Across interventions, a mean of eight BCTs per intervention was coded (SD =3; range =3–13), with a mean of five BCTs (SD =2; range =2–10) for symptom management, one BCT (SD =1; range =0–4) for mental health management, and two BCTs (SD =3; range =0–10) for physical activity. For symptom management, the three most common BCTs reported were “instruction on how to perform a behavior” (n=23/24 trials; 95.8%), “information about health consequences” (n=21/24; 87.5%), and “action planning” (n=16/24; 66.7%). For physical activity, the three most common BCTs reported were “instruction on how to perform a behavior” (n=11/16 trials; 68.8%), “goal setting (behavior)” (n=8/16; 50%), and “demonstration of the behavior” (n=8/16 trials; 50%). For management of mental health concerns, the three most common BCTs reported were strategies to “reduce negative emotions” (n=4/8 trials; 50%), “provide social support (unspecified)” (n=3/8 trials; 37.5%), and “monitoring of emotional consequences” (n=3/8 trials; 37.5%). Sixty-six (70.9%) of the 93 BCTs in the BCTTv1 were not coded in any intervention. 
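Frequency tallies like those above follow directly from a BCT coding matrix of the kind shown in Table 3 (intervention × target behavior × coded BCTs). A sketch of how such counts can be derived; the coding table below is illustrative only, not the actual Table 3 data:

```python
from collections import Counter

# Hypothetical coding of BCTs per intervention and target behavior
# (illustrative only -- not the actual Table 3 data).
coded = {
    "trial_A": {"symptom management": {"instruction on how to perform a behavior",
                                       "information about health consequences"},
                "physical activity": {"goal setting (behavior)"}},
    "trial_B": {"symptom management": {"instruction on how to perform a behavior",
                                       "action planning"}},
    "trial_C": {"symptom management": {"information about health consequences"},
                "mental health": {"reduce negative emotions"}},
}

def bct_frequencies(coded, behavior):
    """Count, for one target behavior, how many interventions used each BCT."""
    counts = Counter()
    for bcts_by_behavior in coded.values():
        counts.update(bcts_by_behavior.get(behavior, set()))
    return counts

top = bct_frequencies(coded, "symptom management").most_common(3)
print(top)
```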
There was no significant association between the number of BCTs used and intervention effectiveness for improving HRQOL (β=−0.01; 95% CI =−0.04, 0.01; k=28; Q=1.75; P=0.19).", "Overall, patients who received SMIs had significantly fewer ED visits compared to those who received usual care (SMD =−0.13; 95% CI =−0.23, −0.03; n=15; P=0.02). There was no significant heterogeneity in the sample (I2=19.4%; P=0.24). The significant effect of SMIs on the number of ED visits remained when examining only studies with a 12-month follow-up (SMD =−0.17; 95% CI =−0.27, −0.07; n=12; P=0.001), with no significant heterogeneity (I2=10.4%; P=0.34). Of the three intervention groups that did not have a 12-month follow-up, one used a 3-month follow-up and the other two were from the same study, which used a 6-month follow-up. Thus, meta-analyses were not performed for either 3-month or 6-month follow-up time points.\nWithin-group analyses revealed patients receiving SMIs targeting mental health made significantly fewer ED visits compared to patients receiving usual care (SMD =−0.22; 95% CI =−0.32, −0.11; k=5; P<0.001). No difference was observed in the number of ED visits between patients receiving SMIs not targeting mental health and patients receiving usual care (SMD =0.001; 95% CI =−0.14, 0.14; k=10; P=0.99). This led to a significant between-group difference in effect between SMIs targeting mental health and SMIs not targeting mental health (Q=5.95; P=0.02).\nPatients receiving SMIs targeting physical activity made significantly fewer ED visits compared to those who received usual care (SMD =−0.20; 95% CI =−0.31, −0.08; k=8; P=0.001). No difference was observed between patients receiving SMIs not targeting physical activity and usual care (SMD =−0.03; 95% CI =−0.18, 0.12; k=7; P=0.68). 
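The meta-regression reported earlier (number of BCTs vs HRQOL effect) regresses each intervention's effect size on a study-level covariate, weighting each study by the inverse of its variance. A minimal fixed-effect weighted-least-squares sketch with hypothetical per-study data; a full meta-regression (eg, in R's metafor) would additionally model between-study variance. The y values are chosen to lie exactly on a line (y = 0.05 − 0.02x) so the recovered slope is easy to verify:

```python
# Hypothetical per-study data (illustrative only): number of BCTs coded (x),
# observed SMD for HRQOL (y), and the variance of each SMD (v).
x = [3, 5, 8, 10, 13]
y = [-0.01, -0.05, -0.11, -0.15, -0.21]  # exactly y = 0.05 - 0.02 * x
v = [0.010, 0.020, 0.015, 0.025, 0.012]

w = [1.0 / vi for vi in v]  # fixed-effect inverse-variance weights

# Weighted least squares slope:
# beta = sum(w*(x - xbar_w)*(y - ybar_w)) / sum(w*(x - xbar_w)^2)
xbar = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
beta = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
        / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
print(f"meta-regression slope per additional BCT: {beta:.3f}")  # -0.020
```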
In comparison to usual care, there was no difference in effect between SMIs targeting and not targeting physical activity (Q=3.03; k=15; P=0.08).", "The meta-analysis showed SMIs were significantly more effective than usual care in improving HRQOL and reducing the number of ED visits in patients with COPD. In addition, moderator analyses provided specific detail of relevance for clinicians regarding the design, content, and implementation of interventions in practice. SMIs that specifically target mental health concerns alongside symptom management were significantly more effective in improving HRQOL and reducing ED visits than SMIs that focus on symptom management alone. Within-group analyses showed that HRQOL was significantly improved in 1) studies with COPD patients with a severe level of symptoms but not in patients with a moderate level of symptoms, 2) single-practitioner based SMIs but not in SMIs delivered by a multidisciplinary team, 3) SMIs with multiple sessions but not in SMIs delivered in a single session, 4) both individual- and group-based interventions, and 5) SMIs that target physical activity. Our analysis also highlighted how different BCTs were utilized for the three different self-management behaviors.\nTargeting of specific behaviors in self-management approaches may explain heterogeneity in effectiveness. Our review found SMIs that tackle mental health concerns are more effective than those aimed directly at respiratory health. 
Management of mental health problems is acknowledged as an important part of COPD care as comorbid mental health problems are common in COPD, with an estimated prevalence of 10%–42% for depression and 10%–60% for anxiety.63–66 However, fewer than 30% of treatment providers adhere to current guidance for management of anxiety and depression in COPD.67,68 The nature and direction of the relationship between mental health and respiratory symptoms in COPD are difficult to disentangle.69 Breathlessness may be a symptom of anxiety or COPD, and in turn, deteriorating respiratory health may trigger anxiety;64 and anxiety is associated with more frequent hospital admissions for exacerbations.69 It follows that SMIs targeting mental health have the potential to improve HRQOL. Overall, few of the identified SMIs contained BCTs that targeted mental health self-management, although the six SMIs with the highest effect sizes utilized BCTs that targeted mental health concerns (Table 4). The most commonly reported BCT to aid management of mental health was input to “reduce negative emotions.” Interventions using this technique may improve patients’ self-efficacy for managing their symptoms, which could reduce the likelihood of attending EDs at the onset of an exacerbation.70 Alternatively, addressing mental health management may have an indirect effect in preventing a deterioration in clinical status by an improvement in mood, leading to greater willingness to engage in other preventative behaviors (eg, increased physical activity, medication adherence, improved nutritional diet).71\nIn both moderator analysis and within-group analyses, SMIs targeting physical activity did not demonstrate a greater improvement in HRQOL compared with SMIs that did not target physical activity. 
It is surprising that the effect was not stronger, as patients engaging in increased physical activity are less likely to experience deterioration in physical condition and acute exacerbation.66,72 In contrast, physical deconditioning and inactivity may lead to faster deterioration in clinical status and increase the likelihood of hospital admission.64,66 Zwerink et al7 also reported no additional benefit in SMIs that targeted physical activity. Furthermore, Table 4 highlights the wide variability in BCTs that were used when targeting physical activity, with interventions ranging from individualized, structured, supervised sessions to education on physical activity. It is important when reporting SMIs that target physical activity that authors are clear about what is being asked of the patient. The American Thoracic Society and European Respiratory Society’s joint summary identifies physical activity outcomes as a priority for future research.6 The summary states that determining the optimal level of instruction is a priority in the design of future physical activity interventions (eg, how many sessions, over what time period, and what specific exercises). However, it is important to consider that an individually tailored approach is needed for patients with COPD where there is wide variability in capability and resources.\nThe most commonly identified BCTs varied for the three separate behaviors. Those coded for symptom management and physical activity were similar in that they used BCTs centered on information provision (eg, “Instruction on how to perform the behavior,” “Information about health consequences,” “Demonstration of the behavior”), whereas those coded for management of mental health concerns encourage more awareness and reflective thought processes (eg, “reduce negative emotions,” “monitoring of emotional consequences”). 
It is possible that SMIs that targeted mental health management routinely displayed larger effect sizes as a consequence of the type of BCTs rather than the behavior targeted. Further research should attempt to disentangle the extent to which it is specific BCTs, or the behavior targeted, that is responsible for the intervention effect.\nOne recommendation from the recent HTA review on SMIs was that “Novel approaches to influence behavior change … should be explored”.8 Our approach identifies that the vast majority of potential BCTs in the taxonomy were not identified across studies, suggesting opportunities for novel intervention content. For instance, it was apparent that while SMIs targeting mental health were more effective in improving HRQOL (eg, “reduce negative emotions”), the BCTs employed in these studies were not those recommended in current guidance as core strategies of self-management for COPD (eg, goal setting, problem solving, action planning).2,6 Similarly, while action planning is seen as a key component of effective self-management,2,6–8 some theoretical models specify that action planning is not always sufficient for behavior change and that problem solving is required to effectively maintain behavior change.71 As COPD is characterized by frequent relapses in the form of acute exacerbations, it was surprising that the BCT “problem solving” was only coded in two studies (Table 4). 
Future SMIs need to incorporate problem solving, as coping with the repeated occurrence of breathlessness and exacerbations (and the associated anxiety as these symptoms arise) is an inevitable predictor of HRQOL.72\nIntervention providers should look at how they can deliver the core strategies of self-management (eg, goal setting, problem solving) in ways that work across multiple behaviors rather than feeling certain BCTs are only applicable to single behaviors (eg, action planning can only be used for symptom management but not when explaining physical activity). For instance, “body changes” referred to breathing/relaxation techniques. This was often used for a specific behavior, but patients may have better outcomes if they understand how breathing/relaxation techniques can be used for managing breathlessness, reducing stress, and improving lung capacity when physically active. Explaining how the same behavior can be applied across situations may also be a more understandable message for patients with poor health literacy, rather than them believing that certain behavioral techniques can only be applied in certain contexts, especially when elevated anxiety may impair cognitive processing of the most suitable course of action. Furthermore, from an implementation perspective, SMIs employing multidisciplinary teams for individual-based interventions did not confer any significant increase in effect size. This is an important finding for clinical practice as single practitioner and group-based interventions are potentially of lower cost because multiple patients can be seen in a single setting.73\nAn interesting finding from the narrative synthesis is that the number of eligible studies in Jonkman et al’s10,11 reviews and Zwerink et al7 was approximately double the number found in the other reviews. The majority of additional studies in these reviews were recent publications. This may suggest that the use of SMIs has increased in less than a decade. 
However, further inspection of the definitions highlights where disparities between previous reviews may exist. Walters et al36 only allowed single-component (action plan) interventions and found no effect, whereas Zwerink et al7 and Jonkman et al10 both found a significant effect but stipulated that SMIs had to have at least two components (eg, action plans, symptom monitoring, physical activity component, etc.). This review highlights how the definition of SMI directly influences whether an effect is found on HRQOL.\nA number of the reviews attempted to summarize components of their contributing SMIs, but these summaries were often limited in description and mixed BCTs (eg, problem solving, action planning) with target behaviors (eg, mental health, physical activity). Zwerink et al7 and Jonkman et al10 were the only reviews to conduct subgroup analyses to quantify what content of SMIs is most effective. Zwerink et al7 attempted to look at SMIs that did and did not utilize action plans, exercise programs, and behavioral components. However, the definitions for each of these three subgroup analyses were ambiguous: 1) action plans had to focus on symptom management, thus excluding action planning techniques when the target behavior is physical or mental health, 2) focusing on only standardized exercise programs neglects the ways physical activity may be encouraged in everyday life (eg, energy conservation techniques), and 3) the authors themselves state that “behavioral components” was “difficult to determine because of lack of detailed information”. Jonkman et al’s11 subgroup analyses were based on the absence/presence of clear BCTs: management of psychological aspects, goal setting skills, self-monitoring logs, and problem-solving skills. However, the authors 1) combined a number of chronic conditions (COPD, chronic heart failure, and Type 2 diabetes) and 2) did not differentiate between the individual BCTs and the target behaviors. 
To build upon these authors’ previous work, we have used a standardized taxonomy with definitions for a wider array of BCTs and been more specific about the behavior the BCT is targeting. This allows better comparison of intervention content across studies and a more robust basis for synthesis. Ultimately, the increasing popularity and awareness of SMIs, coupled with increasing variation in their definition, indicates a need for more structured guidance on what constitutes self-management so that both practitioners and patients are aware of what the content of self-management entails.\nStrengths and limitations\nComparing the findings across reviews highlights how the definition of self-management had a direct impact on the number of eligible studies and consequently the conclusions drawn. The difference in conclusions further highlights the need for more detailed content analysis. The current analysis extracted robust empirical data from across reviews and their contributing clinical trials to examine intervention content and structure to isolate what factors may be essential for improving patient outcomes. The use of a standardized taxonomy of definitions allowed comparisons of intervention content across studies and provided a robust basis for synthesis. Our approach also highlighted specific BCTs used in a range of contexts to enable more discernment between intervention features and outcome effectiveness. We used a concise search strategy in order to identify individual trial data and perform novel forms of exploratory analysis that examined the effectiveness of individual intervention components. For this exploratory analysis, small and hard to find trials are unlikely to introduce components that do not occur in a range of other trials, and as such it is unnecessary to carry out an exhaustive search to identify all existing trials. 
However, it is important to stress that while we present a comprehensive summary of SMIs that have been reported in previous reviews, this review does not aim to present the most up-to-date evidence base as a number of more recent SMI trials will not have been captured in these reviews.\nThere are limitations to the study worth noting. The limited number of studies meant that single rather than multiple variables were entered into the moderator analyses. For example, we could compare single vs multiple session SMIs or individual vs group SMIs but could not examine combinations of the different variables in a multivariate analysis. Thus, while these univariate analyses can helpfully guide intervention development by highlighting potential associations, they should not be interpreted in an additive fashion. Furthermore, meta-regression findings do not imply causality, as factors entered into these analyses were not randomized groups in the analyses.\nIt was difficult to ascertain the intensity with which some BCTs were administered and the same BCT could be used with varying intensity, eg, the instructors could provide “Feedback on the behavior” on a daily or monthly basis. Ultimately, the utility of this secondary analysis is dependent on the reporting of intervention content by authors. It is possible that some BCTs were present in interventions, but not described in sufficient detail to allow coding. While we coded intervention manuals where available, there is a need for more transparency in intervention content in future studies.", "Comparing the findings across reviews highlights how the definition of self-management had a direct impact on the number of eligible studies and consequently the conclusions drawn. The difference in conclusions further highlights the need for more detailed content analysis. The current analysis extracted robust empirical data from across reviews and their contributing clinical trials to examine intervention content and structure to isolate what factors may be essential for improving patient outcomes. The use of a standardized taxonomy of definitions allowed comparisons of intervention content across studies and provided a robust basis for synthesis. Our approach also highlighted specific BCTs used in a range of contexts to enable more discernment between intervention features and outcome effectiveness. We used a concise search strategy in order to identify individual trial data and perform novel forms of exploratory analysis that examined the effectiveness of individual intervention components. For this exploratory analysis, small and hard to find trials are unlikely to introduce components that do not occur in a range of other trials, and as such it is unnecessary to carry out an exhaustive search to identify all existing trials. 
However, it is important to stress that while we present a comprehensive summary of SMIs that have been reported in previous reviews, this review does not aim to present the most up-to-date evidence base as a number of more recent SMI trials will not have been captured in these reviews.\nThere are limitations to the study worth noting. The limited number of studies meant that single rather than multiple variables were entered into the moderator analyses. For example, we could compare single vs multiple session SMIs or individual vs group SMIs but could not examine combinations of the different variables in a multivariate analysis. Thus, while these univariate analyses can helpfully guide intervention development by highlighting potential associations, they should not be interpreted in an additive fashion. Furthermore, meta-regression findings do not imply causality, as factors entered into these analyses were not randomized groups in the analyses.\nIt was difficult to ascertain the intensity with which some BCTs were administered and the same BCT could be used with varying intensity, eg, the instructors could provide “Feedback on the behavior” on a daily or monthly basis. Ultimately, the utility of this secondary analysis is dependent on the reporting of intervention content by authors. It is possible that some BCTs were present in interventions, but not described in sufficient detail to allow coding. While we coded intervention manuals where available, there is a need for more transparency in intervention content in future studies.", "SMIs can improve HRQOL and reduce the number of ED visits for patients with COPD, but there is wide variability in effect. To be effective, future interventions should focus on tackling mental health concerns but need not entail multidisciplinary and individual-focused SMIs.", "Three reviews were discussed for eligibility. 
A review by Walters et al1 focused upon studies where the intervention could be defined as\n[…] use of guidelines detailing self-initiated interventions (eg, changing medication regime […]) which were undertaken in response to alterations in the state of the patients’ COPD (eg, increase in breathlessness) […]. An educational component was permitted if the duration was short, up to 1 hour.1\nAction plans were explicitly described as a central component in the definitions of self-management used by a number of the review authors,2–6 and as Walters et al’s1 definition was comparable to many of the definitions of self-management interventions used by other authors, we considered this review eligible.\nIn contrast, the focus of Jolly et al’s review7 was “self-management” interventions, but the number of interventions included far exceeded the number of studies commonly found in the other eligible reviews and many would not typically be considered self-management (eg, structured pulmonary rehabilitation programs). As it was not possible to identify those that were primarily self-management based, we excluded this review as it summarizes evidence of a wider array of interventions for chronic obstructive pulmonary disease than self-management interventions.\nJonkman et al8 highlighted relevant studies in the search, but the focus of the analysis was on an individual patient data analysis and as such the overall findings are not relevant to the current narrative." ]
[ "self-management", "emergency department visits", "behavior change techniques", "COPD", "mental health", "meta-analysis" ]
Introduction: COPD is characterized by airflow limitation and is associated with inflammatory changes that lead to dyspnea, sputum purulence, and persistent coughing. The disease trajectory is one of progressive decline, punctuated by frequent acute exacerbations in symptoms. Patients with COPD have an average of three acute exacerbations per year, and these are the second biggest cause of unplanned hospital admissions in the UK.1–3 As COPD is irreversible, and health-related quality of life (HRQOL) in patients with COPD tends to be low, optimizing HRQOL and reducing hospital admissions have become key priorities in COPD management.4,5 Self-management planning is a recognized quality standard of the National Institute for Health and Care Excellence (NICE) guidelines in the UK,2 and a joint statement by the American Thoracic Society/European Respiratory Society6 emphasized its importance in quality of care. Self-management interventions (SMIs) encourage patients to monitor symptoms when stable and to take appropriate action when symptoms begin to worsen.2 However, there is no consensus on the form and content of effective SMIs and the variation in content may explain previous heterogeneity in effectiveness.2,7,8 A recent Health Technology Assessment (HTA) report on the efficacy of self-management for COPD recommended that future research should 1) “try to identify which are the most effective components of interventions and identify patient-specific factors that may modify this”, and that 2) “behavior change theories and strategies that underpin COPD SMIs need to be better characterized and described”.8 To enable better comparison and replication of intervention components, taxonomies have been developed to classify potential active ingredients of interventions according to preestablished descriptions of behavior change techniques (BCTs).9 BCTs are defined as “an observable, replicable, and irreducible component of an intervention designed to alter or redirect 
causal processes that regulate behavior”.9 While recent reviews have conducted content analysis to help identify effective components of SMIs for patients with COPD through individual patient data analysis,10,11 the coding of intervention content was not performed with established taxonomies, and a clear understanding of the link between the BCT and the targeted behavior (eg, symptom management, physical activity, mental health management, etc) was absent. This systematic review aims to summarize the current evidence base on the effectiveness of SMIs for improving HRQOL in people with COPD. Conclusions across reviews have been synthesized and evaluated within the context of how self-management was defined. Meta-analyses were performed to explore the relationships between the health behaviors the SMIs target, the BCTs they use to target those behaviors, and subsequent improvement in HRQOL and health care utilization. In addition, we explore the extent to which trial and intervention features influence SMI effects. Method: Search strategy and selection criteria: The current review, registered with PROSPERO (CRD42016043311), is available at http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016043311. To focus the search upon high-quality systematic reviews, we searched two databases of systematic reviews: Cochrane Database of Systematic Reviews (Wiley) issue 7 of 12 2016, and Database of Abstracts of Reviews of Effects (Wiley) issue 2 of 4 2015 (latest available). In addition, we searched Ovid MEDLINE® In-Process & Other Non-Indexed Citations and Ovid MEDLINE® 2015 with a systematic review filter available from CADTH.12 All search results were screened for title and abstract by one reviewer (JN), and 20% of the results were screened by a second reviewer (KH-M) to ensure comparability. Two reviewers screened the search results at the full paper review stage.
The search strategy (run up to October 1, 2016) combined database-specific thesaurus headings and keywords describing COPD and self-management:

1. exp Pulmonary Disease, Chronic Obstructive/
2. emphysema$.tw.
3. (chronic$ adj3 bronchiti$).tw.
4. (obstruct$ adj3 (pulmonary or lung$ or bronch$ or respirat$)).tw.
5. (COPD or COAD or COBD or AECB).tw.
6. or/1–5
7. exp Self Care/
8. (self-manag$ or self manag$ or self-car$ or self car$ or self-administ$ or self administ$).tw.
9. (patient$ adj3 (focus$ or participat$ or centr$ or center$ or empower$ or support$ or collaborat$ or co-operat$ or cooperat$)).tw.
10. or/7–9
11. 6 and 10

The review-of-reviews approach provides an overview of existing systematic reviews and is particularly helpful where multiple systematic reviews have been conducted. Such a review also provides an opportunity to compare the summaries and findings of previous reviews. In the present review, both a meta-analysis and a narrative synthesis were conducted. The narrative synthesis compared the overviews of findings as presented by the original authors, whereas the meta-analyses were conducted on data from individual studies presented within the reviews. These quantitative analyses are helpful to determine the effectiveness of SMIs and the factors that influence their effectiveness. To be eligible for the narrative synthesis, reviews had to focus upon interventions that targeted self-management.
We sought to explore variations in the definitions of self-management used by previous authors, and thus reviews were eligible if they specified that they focused on SMIs, irrespective of the definition they applied. Reviews that focused on SMIs in addition to other types of interventions (eg, pulmonary rehabilitation, supervised exercise programs) were eligible only if the SMIs could be clearly separated from the other interventions. Reviews of interventions delivered in primary, secondary, tertiary, outpatient, or community care were eligible. To be eligible for the quantitative analyses, randomized controlled trials delivered in primary, secondary, tertiary, outpatient, or community care were included if they 1) targeted patients with COPD (diagnosed by a clinician/health care practitioner and/or agreed spirometry criteria, ie, forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) <70%),13 2) compared the SMI to a comparison group that received usual care during the study period, and 3) had a measure of HRQOL as an outcome measure. Studies were excluded if they involved mixed disease populations from which COPD patients could not be separated for analysis. Figure 1 provides a PRISMA diagram of the reviews, and the trials within reviews, eligible for inclusion.
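As a rough illustration only, the keyword portion of the search strategy above can be approximated in Python for screening free text. Ovid truncation ($) is mimicked here with prefix matching and adj3 with a small intervening-word window, while the thesaurus (exp) lines have no free-text equivalent; the pattern and function names are hypothetical, not part of the original search.

```python
import re

# Illustrative approximation of the Ovid keyword block (lines 2-5 and 8-9).
COPD_TERMS = re.compile(
    r"emphysema"
    r"|chronic\W+(?:\w+\W+){0,2}bronchiti"                       # chronic$ adj3 bronchiti$
    r"|obstruct\w*\W+(?:\w+\W+){0,2}(?:pulmonary|lung|bronch|respirat)"
    r"|\b(?:COPD|COAD|COBD|AECB)\b",
    re.IGNORECASE,
)
SELF_MANAGEMENT_TERMS = re.compile(
    r"self[- ]?(?:manag|car|administ)"                           # self-manag$, self-car$, ...
    r"|patient\w*\W+(?:\w+\W+){0,2}(?:focus|participat|centr|center"
    r"|empower|support|collaborat|co-?operat)",
    re.IGNORECASE,
)

def matches_search(title_abstract: str) -> bool:
    """Mimic line 11 of the strategy: COPD block AND self-management block."""
    return bool(COPD_TERMS.search(title_abstract)) and bool(
        SELF_MANAGEMENT_TERMS.search(title_abstract)
    )
```

A record such as "Self-management of COPD exacerbations" would pass both blocks, whereas an asthma-only title would fail the COPD block.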
Primary outcome: The primary outcome measure was HRQOL measured by the Saint George Respiratory Questionnaire (SGRQ).
The SGRQ is a disease-specific instrument designed to measure impact on overall health, daily life, and perceived well-being in patients with obstructive airways disease.14 The measure provides a total score and subdomain scores for symptoms (frequency and severity of symptoms), activities (activities that cause or are limited by breathlessness), and impacts (social functioning and psychological disturbances resulting from airways disease), and it is the most frequently used disease-specific measure of HRQOL in this population group.15 Where trials did not use the SGRQ, scores from alternative HRQOL measures were used: the Chronic Respiratory Disease Questionnaire (CRQ), Clinical COPD Questionnaire (CCQ), or Sickness Impact Profile (SIP).16–18 We combined scores across these different questionnaires for meta-analyses, as total SGRQ, CRQ, CCQ, and SIP scores have been shown to correlate well and as the subdomain constructs share conceptual similarity.19–22 Studies using other HRQOL measures were not eligible.
Classification of intervention content and intervention delivery features: The Behavior Change Techniques Taxonomy version 1 (BCTTv1) was used to code the content of intervention descriptions.9 Intervention descriptions were coded separately for self-management behaviors that targeted 1) symptoms, 2) physical activity, and 3) mental health. For instance, the description “patients were instructed to set themselves a walking goal each day” would be coded as “goal setting (behavior)” only for “physical activity” and not for “mental health self-management” or “symptoms self-management.” Consequently, it is important to use an outcome measure where changes in score reflect a change in these behaviors; the three behaviors of symptoms, physical activity, and mental health map directly onto the three subdomains of the SGRQ (ie, Symptoms, Activities, and Impacts, respectively).
Examples of symptom-specific behaviors include teaching appropriate inhalation techniques or mucus-clearing techniques.2 In contrast, physical activity behaviors may be structured exercise programs, techniques for incorporating light activity into the daily routine, or energy conservation techniques.2 Finally, mental health-focused behaviors may include teaching patients communication strategies for raising mental health concerns, distraction techniques, relaxation exercises, or stress counseling.2 To identify whether any features of the delivery of the intervention itself influenced effectiveness, interventions were coded for intervention provider (multidisciplinary team or single practitioner), intervention format (individual or group-based), and intervention length (single session or multiple sessions).23 The BCTs identified and the intervention delivery features were coded independently by one reviewer (JN) and checked independently by another (KH-M) (κ=0.89; 95% CI = 0.82, 0.96). To determine the length of an intervention, the end point was defined as the final time participants received intervention content from the intervention provider. Intervention contacts solely for data collection or for following up on participants without new content were not classed as intervention sessions. To assess whether intervention effects depend on disease severity, studies were divided into those with patients with a mean predicted FEV1 score <50% or ≥50% at baseline.13 Data relating to the number of COPD-related emergency department (ED) visits and/or hospital admissions were extracted from eligible studies, where reported, and used to examine whether fewer ED visits were reported in patients receiving SMIs compared to patients receiving usual care.
Subsequently, SMIs were divided between those with and without BCTs targeting mental health and physical activity, in order to examine whether the size of their effect differed in comparison to patients who received usual care.
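Inter-rater agreement of the kind reported for the double coding (κ=0.89) is Cohen's kappa, ie, observed agreement corrected for chance agreement. A minimal sketch, with hypothetical coder data (not the study's actual codes):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Proportion of items on which the two coders actually agree.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected if each coder labeled independently at their own rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical presence/absence codes for eight candidate BCTs:
jn_codes = [1, 1, 1, 0, 0, 0, 1, 0]
khm_codes = [1, 1, 1, 0, 0, 0, 0, 0]
kappa = cohens_kappa(jn_codes, khm_codes)  # 0.75
```

With seven of eight items agreed and chance agreement of 0.5, kappa here is 0.75; values above roughly 0.8, as in the review, are conventionally read as near-perfect agreement.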
Data extraction and analysis: The original studies were sought for further data extraction, to supplement the information reported in the reviews. Data reported at the follow-up time point most closely following the end point of the intervention period were used for the meta-analyses. We decided to group patients across time points because patients with COPD have high mortality rates: of those patients who are admitted, 15% will die within 3 months,24 25% within 1 year,2 and 50% within 5 years.25 While prespecifying a follow-up time point limits the bias of treatment reactivity, we may thereby exclude patients with shorter survival, who may not survive to a later time point and who are likely to be those most in need of interventions that improve HRQOL. Sensitivity analyses were performed for studies collecting outcome data at less than 6 months, and at 6-month and 12-month follow-up, to explore any potential heterogeneity in the overall analysis. Data from intention-to-treat analyses were used where reported. If two interventions were compared against a control group (eg, action plans vs education vs usual care), data from both intervention arms were included in the main comparison and the number of participants in the control group was halved for each comparison.26 Postintervention outcomes reported as mean and SD were used for analysis. Mean change scores were used if postintervention scores were unavailable.
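The shared-control rule described above (halving the usual-care group across two intervention comparisons, so control participants are not double-counted) can be sketched as follows; the helper name and trial numbers are hypothetical:

```python
def split_comparisons(intervention_arms, usual_care_n):
    """Split a shared usual-care arm evenly across intervention comparisons.

    Each (name, n) intervention arm is compared against an equal share of
    the usual-care participants, so controls are not counted twice.
    """
    share = usual_care_n // len(intervention_arms)
    return [(name, n, share) for name, n in intervention_arms]

# Hypothetical three-arm trial: action plans vs education vs usual care (n=60).
# Each comparison keeps its full intervention arm but only 30 controls.
comparisons = split_comparisons([("action plan", 55), ("education", 58)], 60)
```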
When mean and SD values were unavailable, missing data were imputed by using the median instead of the mean and by estimating the SD from the standard error, confidence interval (CI), or interquartile range.26,27 Standardized mean differences (SMDs) with 95% CIs were calculated and pooled using a random effects model for all studies. Dichotomous and continuous outcomes were merged using Comprehensive Meta-Analysis (CMA) software (v2.2, Biostat; Englewood, NJ, USA) to produce SMDs for each study, which are equivalent to Cohen’s d. SMD values of at least 0.2, 0.5, and 0.8 are indicative of small, medium, and large effect sizes, respectively.28 Heterogeneity across studies was assessed using the Cochran Q test and the I2 statistic.26 Random effects subgroup analyses with Q statistic tests were conducted using CMA software. Univariate moderator analyses were conducted to compare effect sizes between SMIs delivered to patients with moderate vs severe COPD, in single vs multiple sessions, and by single vs multiple practitioners. Univariate moderator analyses were also used to compare SMIs with and without BCTs targeting mental health self-management and physical activity, to examine whether they differed in their effect on the number of ED visits in comparison to usual care. Random effects univariate meta-regression was conducted using CMA software to examine whether the number of BCTs coded across SMIs was predictive of effectiveness.
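The quantities described here can be sketched in a few lines. This is a minimal illustration under standard formulas (Cochrane Handbook SD recovery, Cohen's d with a pooled SD, DerSimonian-Laird random effects with Q and I²), not the CMA implementation; all function names are hypothetical.

```python
import math

def sd_from_se(se, n):
    """Recover an SD from a reported standard error of the mean."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n):
    """Recover an SD from a 95% CI of the mean (normal approximation)."""
    return math.sqrt(n) * (upper - lower) / 3.92

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q and I2 heterogeneity."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, i2
```

For example, a trial reporting SGRQ means of 40 (intervention) vs 45 (control), both with SD 10 and n=50 per arm, yields an SMD of −0.5 (a medium effect favoring the intervention, since lower SGRQ scores are better).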
SMIs were then divided into those with and without BCTs targeting mental health and physical activity, to examine whether their effects relative to usual care differed in size. Data extraction and analysis: The original studies were sought for further data extraction, to supplement the information reported in the reviews. Data reported at the follow-up time point closest to the end of the intervention period were used for meta-analyses. We decided to group patients across time points because patients with COPD have high mortality rates: of those patients who are admitted, 15% will die within 3 months,24 25% within 1 year,2 and 50% within 5 years.25 While prespecifying a follow-up time point limits the bias of treatment reactivity, it risks excluding patients with shorter survival, who may not survive to a later time point and who are likely to be those most in need of interventions that improve HRQOL. Sensitivity analyses were performed for studies collecting outcome data at less than 6 months, 6 months, and 12 months of follow-up to explore potential heterogeneity in the overall analysis. Data from intention-to-treat analyses were used where reported. If two interventions were compared against a control group (eg, action plans vs education vs usual care), data from both intervention arms were included in the main comparison and the number of participants in the control group was halved for each comparison.26 Postintervention outcomes reported as mean and SD were used for analysis. Mean change scores were used if postintervention scores were unavailable. 
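The handling of three-arm trials described above (halving the shared control group across two comparisons) can be sketched as follows; the participant numbers are hypothetical:

```python
def split_control_arm(n_control: int, n_comparisons: int) -> list[int]:
    """Divide a shared control arm across comparisons so that control
    participants are not double-counted (the approach recommended by
    the Cochrane Handbook for multi-arm trials)."""
    base, remainder = divmod(n_control, n_comparisons)
    # Spread any remainder so the split still sums to n_control.
    return [base + (1 if i < remainder else 0) for i in range(n_comparisons)]

# Hypothetical three-arm trial: action plans vs education vs usual care,
# with 61 usual-care participants shared by two comparisons.
per_comparison = split_control_arm(61, 2)  # -> [31, 30]
```

Splitting (rather than reusing) the control arm keeps the total number of control participants in the meta-analysis equal to the number actually randomized.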
When mean and SD values were unavailable, missing data were imputed by using the median in place of the mean and by estimating the SD from the standard error, confidence interval (CI), or interquartile range.26,27 Standardized mean differences (SMDs) with 95% CIs were calculated and pooled using a random effects model for all studies. Dichotomous and continuous outcomes were merged using Comprehensive Meta-Analysis (CMA) software (v2.2, Biostat; Englewood, NJ, USA) to produce SMDs for each study, which are equivalent to Cohen’s d. SMD values of at least 0.2, 0.5, and 0.8 are indicative of small, medium, and large effect sizes, respectively.28 Heterogeneity across studies was assessed using the Cochran Q test and the I2 statistic.26 Random effects subgroup analyses with Q statistic tests were conducted using CMA software. Univariate moderator analyses were conducted to compare effect sizes between SMIs in moderate vs severe COPD, with single vs multiple sessions, and delivered by single vs multiple practitioners. Univariate moderator analyses were also used to compare SMIs with and without BCTs targeting mental health self-management and physical activity, to examine whether they differed in their effect on the number of ED visits in comparison to usual care. Random effects univariate meta-regression was conducted using CMA software to examine whether the number of BCTs coded across SMIs was predictive of effectiveness. Results: Narrative synthesis: Eleven reviews were eligible for inclusion,7,10,11,29–36 covering 66 clinical trials (Figure 1). The decision over whether to include three reviews warranted further discussion, and the rationale for inclusion/exclusion is detailed in the Supplementary material. Reviews varied widely in their definitions of self-management and in the number of individual studies that met their inclusion criteria (Tables 1 and 2). 
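The effect-size and heterogeneity computations described in the analysis methods (SMDs equivalent to Cohen’s d, pooled under a random effects model, with Cochran’s Q and I2) can be sketched as below. This is an illustrative reimplementation of the standard DerSimonian-Laird formulas, not the CMA software the review actually used, and the numbers in the example are hypothetical:

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d and its sampling variance; on the SGRQ a negative d
    favors the intervention, since lower scores mean better HRQOL."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    var = (n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c))
    return d, var

def pool_random_effects(effects):
    """DerSimonian-Laird random effects pooling; returns the pooled SMD,
    Cochran's Q, and I^2 (%) for a list of (d, variance) pairs."""
    w = [1 / v for _, v in effects]
    d = [e for e, _ in effects]
    fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, d))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1 / (v + tau2) for _, v in effects]
    pooled = sum(wi * di for wi, di in zip(w_star, d)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Two hypothetical trials, each reporting SGRQ means/SDs per arm.
studies = [smd(38.0, 15.0, 50, 43.0, 15.0, 50),
           smd(40.0, 16.0, 80, 44.0, 16.0, 75)]
pooled, q, i2 = pool_random_effects(studies)
```

When Q does not exceed its degrees of freedom, the between-study variance estimate is truncated at zero and the random effects result coincides with the fixed-effect one.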
Zwerink et al7 reported a significant effect of SMIs on HRQOL but stated that “heterogeneity among interventions, study populations, follow-up time, and outcome measures makes it difficult to formulate clear recommendations regarding the most effective form and content of self-management in COPD”. Jonkman et al10 showed SMIs to improve HRQOL in COPD patients at 6 and 12 months but did not identify any components of the SMIs that were associated with their effect. The more recent reviews had strict inclusion criteria and were smaller in size as a result.30,33,34,36 Harrison et al33 and Majothi et al34 focused on hospitalized and recently discharged patients. Harrison et al33 did not find any significant differences in total or domain scores for HRQOL. In contrast, Majothi et al34 did find a significant effect on total SGRQ, but stressed that this finding should be treated with caution because of variable follow-up assessments. Walters et al36 applied more restrictive criteria, covering only interventions that were action plans, and found no significant effect on HRQOL. Bentsen et al30 were less clear in their definition of an SMI, which may explain the smaller number of included studies. The authors reported that the majority of studies showed a benefit on HRQOL, but no meta-analysis was performed. 
Quantitative synthesis: Twenty-six eligible, unique trials provided data on 28 intervention groups (Figure 1 and Table 3). In total, the trials reported on 3,518 participants (1,827 intervention, 1,691 control) for this analysis. Mean age of participants was 65.6 (SD =1.6; range =45–89) years, and the majority of participants were male (72%). Characteristics of included trials are reported in their respective reviews (Table 2). Table 3 details which specific BCTs were used in each SMI, and Table 4 displays the intervention features and target behaviors of each SMI. 
SMIs showed a significant but small positive effect on HRQOL scores over usual care (SMD =−0.16; 95% CI =−0.25, −0.07; P=0.001). Statistical heterogeneity was moderate but significant (I2=36.6%; P=0.03), suggesting the need for further moderator/subgroup analyses (Table 5). When studies using measures other than the SGRQ (n=6) were excluded, SMIs continued to show a significant effect on HRQOL of comparable size (SMD =−0.16; 95% CI =−0.26, −0.05; P=0.003). SMIs with 12-month follow-up (n=15) were significantly more effective than usual care (SMD =−0.16; 95% CI =−0.29, −0.03; P=0.02), but significant heterogeneity existed between studies (I2=53.4%; P=0.008). In trials with 6-month follow-up (n=10), there was no significant difference in effect on HRQOL between SMIs and the control group (SMD =−0.11; 95% CI =−0.27, 0.04; P=0.14), and heterogeneity between studies was not significant (I2=26.2%; P=0.20). In trials with follow-up of less than 6 months postintervention (n=7), SMIs were significantly more effective than usual care (SMD =−0.29; 95% CI =−0.48, −0.11; P=0.002) with no significant heterogeneity (I2=2.4%; P=0.41). 
Intervention delivery features: In comparison to patients receiving usual care, there were no significant differences in effect size in between-group comparisons of 1) single-session vs multiple-session SMIs, 2) SMIs delivered by a single practitioner vs a multidisciplinary team, 3) SMIs targeting patients with moderate vs severe symptoms, and 4) individual-based vs group-based SMIs (Table 5). However, within-group moderator analysis showed that 1) SMIs were significantly effective in COPD patients with severe symptoms, whereas no significant effect was observed in studies that recruited patients with moderate symptoms; 2) SMIs delivered by a single practitioner produced significant improvement, while multidisciplinary interventions showed no effect; 3) single-session SMIs showed no effect, whereas SMIs with multiple sessions produced significant improvement; and 4) significant improvement in HRQOL was observed in both individual- and group-based SMIs (Table 5). 
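The between-group moderator comparisons above rely on a Q statistic contrasting two subgroups’ pooled effects. A minimal sketch for the two-subgroup case (the pooled estimates and variances passed in are hypothetical, not values from the review):

```python
import math

def subgroup_q(pooled_a, var_a, pooled_b, var_b):
    """Between-group heterogeneity Q on 1 df for two subgroup pooled
    effects; equivalent to a two-sided z-test on their difference."""
    diff = pooled_a - pooled_b
    q = diff ** 2 / (var_a + var_b)
    z = abs(diff) / math.sqrt(var_a + var_b)
    p = math.erfc(z / math.sqrt(2))  # two-sided normal p-value
    return q, p

# e.g., SMIs targeting vs not targeting a behavior (hypothetical values)
q, p = subgroup_q(-0.25, 0.004, -0.02, 0.006)
```

A significant Q here means the two subgroup pooled effects differ by more than their sampling error would explain; a nonsignificant Q (as for several comparisons in Table 5) does not.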
SMIs targeting mental health had a significantly greater effect size than SMIs not targeting mental health management (Q=4.37; k=28; P=0.04) (Table 5). Within-group analysis showed that SMIs that did not target mental health had no significant effect on HRQOL. There was no difference in effect size between SMIs targeting and not targeting physical activity, with both groups of SMIs showing significant improvements in HRQOL in comparison to usual care. Intervention content: All interventions were coded for at least one BCT that targeted symptom management. 
Of these 24 interventions, five targeted solely symptom management (20.8%), three targeted management of mental health concerns (12.5%), eleven targeted physical activity (45.8%), and five targeted all three behaviors (20.8%). The number of interventions reporting BCTs that target each of the three behaviors is presented in Tables 3 and 4. Across interventions, a mean of eight BCTs per intervention was coded (SD =3; range =3–13), with a mean of five BCTs (SD =2; range =2–10) for symptom management, one BCT (SD =1; range =0–4) for mental health management, and two BCTs (SD =3; range =0–10) for physical activity. For symptom management, the three most common BCTs were “instruction on how to perform a behavior” (n=23/24 trials; 95.8%), “information about health consequences” (n=21/24; 87.5%), and “action planning” (n=16/24; 66.7%). For physical activity, the three most common BCTs were “instruction on how to perform a behavior” (n=11/16 trials; 68.8%), “goal setting (behavior)” (n=8/16; 50%), and “demonstration of the behavior” (n=8/16; 50%). For management of mental health concerns, the three most common BCTs were strategies to “reduce negative emotions” (n=4/8 trials; 50%), “provide social support (unspecified)” (n=3/8; 37.5%), and “monitoring of emotional consequences” (n=3/8; 37.5%). Sixty-six (70.9%) of the 93 BCTs in the BCTTv1 were not coded in any intervention. There was no significant association between the number of BCTs used and intervention effectiveness for improving HRQOL (β=−0.01; 95% CI =−0.04, 0.01; k=28; Q=1.75; P=0.19). 
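The meta-regression of effect size on the number of coded BCTs can be sketched with inverse-variance weighted least squares. This is a fixed-effect simplification of the random effects meta-regression run in CMA, and the inputs below are hypothetical:

```python
def weighted_meta_regression(x, d, v):
    """Slope and intercept of effect size d on study-level covariate x,
    weighting each study by the inverse of its variance v (fixed-effect
    weighted least squares)."""
    w = [1 / vi for vi in v]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    dbar = sum(wi * di for wi, di in zip(w, d)) / sw
    num = sum(wi * (xi - xbar) * (di - dbar) for wi, xi, di in zip(w, x, d))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    beta = num / den
    return beta, dbar - beta * xbar

# Hypothetical inputs: number of BCTs per SMI vs its SMD, equal variances.
beta, intercept = weighted_meta_regression([4, 8, 12],
                                           [-0.10, -0.20, -0.30],
                                           [1.0, 1.0, 1.0])
```

A slope near zero, as reported in the review (β=−0.01), indicates that simply adding more BCTs to an SMI is not associated with a larger effect on HRQOL.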
Health care use: Overall, patients who received SMIs had significantly fewer ED visits than those who received usual care (SMD =−0.13; 95% CI =−0.23, −0.03; n=15; P=0.02), with no significant heterogeneity in the sample (I2=19.4%; P=0.24). The significant effect of SMIs on the number of ED visits remained when examining only studies with a 12-month follow-up (SMD =−0.17; 95% CI =−0.27, −0.07; n=12; P=0.001), with no significant heterogeneity (I2=10.4%; P=0.34). 
Of the three intervention groups that did not have a 12-month follow-up, one used a 3-month follow-up and the other two were from the same study, which used a 6-month follow-up; meta-analyses were therefore not performed for the 3-month or 6-month follow-up time points. Within-group analyses revealed that patients receiving SMIs targeting mental health made significantly fewer ED visits than patients receiving usual care (SMD =−0.22; 95% CI =−0.32, −0.11; k=5; P<0.001). No difference was observed in the number of ED visits between patients receiving SMIs not targeting mental health and patients receiving usual care (SMD =0.001; 95% CI =−0.14, 0.14; k=10; P=0.99). This led to a significant between-group difference in effect between SMIs targeting and not targeting mental health (Q=5.95; P=0.02). Patients receiving SMIs targeting physical activity made significantly fewer ED visits than those who received usual care (SMD =−0.20; 95% CI =−0.31, −0.08; k=8; P=0.001). No difference was observed between patients receiving SMIs not targeting physical activity and usual care (SMD =−0.03; 95% CI =−0.18, 0.12; k=7; P=0.68). In comparison to usual care, there was no difference in effect between SMIs targeting and not targeting physical activity (Q=3.03; k=15; P=0.08). 
Discussion: The meta-analysis showed that SMIs were significantly more effective than usual care in improving HRQOL and reducing the number of ED visits in patients with COPD. In addition, moderator analyses provided detail of direct relevance to clinicians regarding the design, content, and implementation of interventions in practice. SMIs that specifically target mental health concerns alongside symptom management were significantly more effective in improving HRQOL and reducing ED visits than SMIs that focus on symptom management alone. Within-group analyses showed that HRQOL was significantly improved in 1) studies of COPD patients with severe symptoms but not in patients with moderate symptoms, 2) single practitioner-based SMIs but not SMIs delivered by a multidisciplinary team, 3) SMIs with multiple sessions but not SMIs delivered in a single session, 4) both individual- and group-based interventions, and 5) SMIs that target physical activity. Our analysis also highlighted how different BCTs were utilized for the three different self-management behaviors. Targeting of specific behaviors in self-management approaches may explain heterogeneity in effectiveness. Our review found that SMIs that tackle mental health concerns are more effective than those aimed directly at respiratory health. 
Management of mental health problems is acknowledged as an important part of COPD care, as comorbid mental health problems are common in COPD, with an estimated prevalence of 10%–42% for depression and 10%–60% for anxiety.63–66 However, fewer than 30% of treatment providers adhere to current guidance for the management of anxiety and depression in COPD.67,68 The nature and direction of the relationship between mental health and respiratory symptoms in COPD are difficult to disentangle.69 Breathlessness may be a symptom of anxiety or of COPD; in turn, deteriorating respiratory health may trigger anxiety,64 and anxiety is associated with more frequent hospital admissions for exacerbations.69 It follows that SMIs targeting mental health have the potential to improve HRQOL. Overall, few of the identified SMIs contained BCTs that targeted mental health self-management, although the six SMIs with the highest effect sizes utilized BCTs that targeted mental health concerns (Table 4). The most commonly reported BCT to aid management of mental health was input to “reduce negative emotions.” Interventions using this technique may improve patients’ self-efficacy for managing their symptoms, which could reduce the likelihood of attending EDs at the onset of an exacerbation.70 Alternatively, addressing mental health management may have an indirect effect in preventing deterioration in clinical status: an improvement in mood may lead to greater willingness to engage in other preventative behaviors (eg, increased physical activity, medication adherence, improved nutrition).71 In both moderator and within-group analyses, SMIs targeting physical activity did not demonstrate a greater improvement in HRQOL than SMIs that did not target physical activity. 
It is surprising that the effect was not stronger, as patients engaging in increased physical activity are less likely to experience deterioration in physical condition and acute exacerbation.66,72 In contrast, physical deconditioning and inactivity may lead to faster deterioration in clinical status and increase the likelihood of hospital admission.64,66 Zwerink et al7 also reported no added benefit in SMIs that targeted physical activity. Furthermore, Table 4 highlights the wide variability in BCTs used when targeting physical activity, with interventions ranging from individualized, structured, supervised sessions to education on physical activity. When reporting SMIs that target physical activity, authors should be clear about what is being asked of the patient. The American Thoracic Society and European Respiratory Society’s joint summary identifies physical activity outcomes as a priority for future research.6 The summary states that determining the optimal level of instruction is a priority in the design of future physical activity interventions (eg, how many sessions, over what time period, and what specific exercises). However, it is important to consider that an individually tailored approach is needed for patients with COPD, where there is wide variability in capability and resources. The most commonly identified BCTs varied across the three separate behaviors. Those coded for symptom management and physical activity were similar in that they centered on information provision (eg, “instruction on how to perform the behavior,” “information about health consequences,” “demonstration of the behavior”), whereas those coded for management of mental health concerns encouraged more awareness and reflective thought processes (eg, “reduce negative emotions,” “monitoring of emotional consequences”). 
It is possible that SMIs that targeted mental health management routinely displayed larger effect sizes as a consequence of the type of BCTs rather than the behavior targeted. Further research should attempt to disentangle the extent to which it is specific BCTs, or the behavior targeted, that is responsible for the intervention effect. One recommendation from the recent HTA review on SMIs was that “Novel approaches to influence behavior change … should be explored”.8 Our approach shows that the vast majority of potential BCTs in the taxonomy were not identified in any study, suggesting opportunities for novel intervention content. For instance, it was apparent that while SMIs targeting mental health were more effective in improving HRQOL (eg, “reduce negative emotions”), the BCTs employed in these studies were not those recommended in current guidance as core strategies of self-management for COPD (eg, goal setting, problem solving, action planning).2,6 Similarly, while action planning is seen as a key component of effective self-management,2,6–8 some theoretical models specify that action planning is not always sufficient for behavior change and that problem solving is required to effectively maintain behavior change.71 As COPD is characterized by frequent relapses in the form of acute exacerbations, it was surprising that the BCT “problem solving” was coded in only two studies (Table 4). 
Future SMIs need to incorporate problem solving, as coping with the repeated occurrence of breathlessness and exacerbations (and the anxiety associated with these symptoms arising) is an inevitable part of COPD and a predictor of HRQOL.72 Intervention providers should look at how they can deliver the core strategies of self-management (eg, goal setting, problem solving) in ways that work across multiple behaviors, rather than assuming certain BCTs are applicable only to single behaviors (eg, that action planning can be used only for symptom management and not when explaining physical activity). For instance, “body changes” referred to breathing/relaxation techniques. This was often used for a specific behavior, but patients may have better outcomes if they understand how breathing/relaxation techniques can be used for managing breathlessness, reducing stress, and improving lung capacity when physically active. Explaining how the same behavior can be applied across situations may also be a more understandable message for patients with poor health literacy, rather than their believing that certain behavioral techniques can be applied only in certain contexts, especially when elevated anxiety may impair cognitive processing of the most suitable course of action. Furthermore, from an implementation perspective, SMIs employing multidisciplinary teams for individual-based interventions did not confer any significant increase in effect size. This is an important finding for clinical practice, as single-practitioner and group-based interventions are potentially of lower cost because multiple patients can be seen in a single setting.73 An interesting finding from the narrative synthesis is that the number of eligible studies in Jonkman et al’s10,11 reviews and Zwerink et al7 was approximately double the number found in the other reviews. The majority of additional studies in these reviews were recent publications. This may suggest that the use of SMIs has increased in less than a decade. 
However, further inspection of the definitions highlights where disparities between previous reviews may exist. Walters et al36 only allowed single-component (action plan) interventions and found no effect, whereas Zwerink et al7 and Jonkman et al10 both found a significant effect but stipulated that SMIs had to have at least two components (eg, action plans, symptom monitoring, a physical activity component, etc.). This review highlights how the definition of SMI directly influences whether an effect is found on HRQOL. A number of the reviews attempted to summarize components of their contributing SMIs, but these were often limited in description and were often a mixture of BCTs (eg, problem solving, action planning) and target behaviors (eg, mental health, physical activity). Zwerink et al7 and Jonkman et al10 were the only reviews to conduct subgroup analyses to quantify which SMI content is most effective. Zwerink et al7 attempted to look at SMIs that did and did not utilize action plans, exercise programs, and behavioral components. However, the definitions for each of these three subgroup analyses were ambiguous: 1) action plans had to focus on symptom management, thus excluding action planning techniques when the target behavior is physical or mental health; 2) focusing only on standardized exercise programs neglects the ways physical activity may be encouraged in everyday life (eg, energy conservation techniques); and 3) the authors themselves state that “behavioral components” was “difficult to determine because of lack of detailed information”. Jonkman et al’s11 subgroup analyses were based on the absence/presence of clear BCTs: management of psychological aspects, goal setting skills, self-monitoring logs, and problem-solving skills. However, the authors 1) combined a number of chronic conditions (COPD, chronic heart failure, and Type 2 diabetes) and 2) did not differentiate between the individual BCTs and the target behaviors. 
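The subgroup comparisons discussed here (and the Q statistics reported in this review's own analyses) rest on a standard between-subgroup heterogeneity test: each subgroup's pooled effect is compared against the overall inverse-variance weighted mean with Cochran's Q. A minimal sketch of that calculation, with invented effect sizes and standard errors (the actual analyses used dedicated meta-analysis software, so this is illustrative only):

```python
def subgroup_q(pooled_effects, pooled_ses):
    """Between-subgroup Cochran's Q: weighted squared deviation of each
    subgroup's pooled effect from the overall inverse-variance mean."""
    weights = [1.0 / se ** 2 for se in pooled_ses]
    overall = sum(w * e for w, e in zip(weights, pooled_effects)) / sum(weights)
    return sum(w * (e - overall) ** 2 for w, e in zip(weights, pooled_effects))

# Hypothetical subgroups: SMIs targeting mental health vs symptom management only.
q = subgroup_q([-0.30, -0.10], [0.05, 0.05])
# Compare q against a chi-square distribution with (number of subgroups - 1) df.
```

With equal standard errors of 0.05, the two hypothetical subgroup effects give Q = 8.0, which would be significant against a chi-square with 1 degree of freedom.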
To build upon these authors’ previous work, we have used a standardized taxonomy with definitions for a wider array of BCTs and been more specific about the behavior the BCT is targeting. This allows better comparison of intervention content across studies and a more robust basis for synthesis. Ultimately, the increasing popularity and awareness of SMIs, but an increasing variation in definition, indicates a need for more structured guidance on what constitutes self-management so that both practitioners and patients are aware of what the content of self-management entails. Strengths and limitations Comparing the findings across reviews highlights how the definition of self-management had a direct impact on the number of eligible studies and consequently the conclusions drawn. The difference in conclusions further highlights the need for more detailed content analysis. The current analysis extracted robust empirical data from across reviews and their contributing clinical trials to examine intervention content and structure to isolate what factors may be essential for improving patient outcomes. The use of a standardized taxonomy of definitions allowed comparisons of intervention content across studies and provided a robust basis for synthesis. Our approach also highlighted specific BCTs used in a range of contexts to enable more discernment between intervention features and outcome effectiveness. We used a concise search strategy in order to identify individual trial data and perform novel forms of exploratory analysis that examined the effectiveness of individual intervention components. For this exploratory analysis, small and hard to find trials are unlikely to introduce components that do not occur in a range of other trials, and as such it is unnecessary to carry out an exhaustive search to identify all existing trials. 
However, it is important to stress that while we present a comprehensive summary of SMIs that have been reported in previous reviews, this review does not aim to present the most up-to-date evidence base, as a number of more recent SMI trials will not have been captured in these reviews. There are limitations to the study worth noting. The limited number of studies meant that single rather than multiple variables were entered into the moderator analyses. For example, we could compare single vs multiple session SMIs or individual vs group SMIs but could not examine combinations of the different variables in a multivariate analysis. Thus, while these univariate analyses can helpfully guide intervention development by highlighting potential associations, they should not be interpreted in an additive fashion. Furthermore, meta-regression findings do not imply causality, as factors entered into these analyses were not randomized groups. It was difficult to ascertain the intensity with which some BCTs were administered, and the same BCT could be used with varying intensity; eg, the instructors could provide “Feedback on the behavior” on a daily or monthly basis. Ultimately, the utility of this secondary analysis is dependent on the reporting of intervention content by authors. It is possible that some BCTs were present in interventions but not described in sufficient detail to allow coding. While we coded intervention manuals where available, there is a need for more transparency in intervention content in future studies. 
Conclusion: SMIs can improve HRQOL and reduce the number of ED visits for patients with COPD, but there is wide variability in effect. To be effective, future interventions should focus on tackling mental health concerns but need not entail multidisciplinary and individual-focused SMIs. Supplementary material: Three reviews were discussed for eligibility. 
A review by Walters et al1 focused upon studies where the intervention could be defined as “[…] use of guidelines detailing self-initiated interventions (eg, changing medication regime […]) which were undertaken in response to alterations in the state of the patients’ COPD (eg, increase in breathlessness) […]. An educational component was permitted if the duration was short, up to 1 hour.”1 Action plans were explicitly described as a central component in the definitions of self-management used by a number of the review authors,2–6 and as Walters et al’s1 definition was comparable to many of the definitions of self-management interventions used by other authors, we considered this review eligible. In contrast, the focus of Jolly et al’s review7 was “self-management” interventions, but the number of interventions included far exceeded the number of studies commonly found in the other eligible reviews, and many would not typically be considered self-management (eg, structured pulmonary rehabilitation programs). As it was not possible to identify those that were primarily self-management based, we excluded this review because it summarizes evidence on a wider array of interventions for chronic obstructive pulmonary disease than self-management interventions alone. Jonkman et al8 highlighted relevant studies in the search, but the focus of the analysis was an individual patient data analysis, and as such the overall findings are not relevant to the current narrative.
Background: Self-management interventions (SMIs) are recommended for individuals with COPD to help monitor symptoms and optimize health-related quality of life (HRQOL). However, SMIs vary widely in content, delivery, and intensity, making it unclear which methods and techniques are associated with improved outcomes. This systematic review aimed to summarize the current evidence base surrounding the effectiveness of SMIs for improving HRQOL in people with COPD. Methods: Systematic reviews that focused upon SMIs were eligible for inclusion. Intervention descriptions were coded for behavior change techniques (BCTs) that targeted self-management behaviors to address 1) symptoms, 2) physical activity, and 3) mental health. Meta-analyses and meta-regression were used to explore the association between health behaviors targeted by SMIs, the BCTs used, patient illness severity, and modes of delivery, with the impact on HRQOL and emergency department (ED) visits. Results: Data related to SMI content were extracted from 26 randomized controlled trials identified from 11 systematic reviews. Patients receiving SMIs reported improved HRQOL (standardized mean difference =-0.16; 95% confidence interval [CI] =-0.25, -0.07; P=0.001) and made fewer ED visits (standardized mean difference =-0.13; 95% CI =-0.23, -0.03; P=0.02) compared to patients who received usual care. Patients receiving SMIs targeting mental health alongside symptom management had greater improvement of HRQOL (Q=4.37; P=0.04) and fewer ED visits (Q=5.95; P=0.02) than patients receiving SMIs focused on symptom management alone. Within-group analyses showed that HRQOL was significantly improved in 1) studies with COPD patients with severe symptoms, 2) single-practitioner based SMIs but not SMIs delivered by a multidisciplinary team, 3) SMIs with multiple sessions but not single session SMIs, and 4) both individual- and group-based SMIs. 
Conclusions: SMIs can be effective at improving HRQOL and reducing ED visits, with those targeting mental health being significantly more effective than those targeting symptom management alone.
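The pooled standardized mean differences and confidence intervals reported above come from random-effects meta-analysis. As a hedged illustration of how such a pooled SMD and its 95% CI are computed, here is a sketch of the DerSimonian-Laird estimator, a common default; the per-study effects below are invented, and this is not the review's actual computation:

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented per-study SMDs (negative = improvement under SMI) and their variances:
smd, ci = pool_random_effects([-0.20, -0.10, -0.18], [0.02, 0.03, 0.025])
```

When Q is no larger than its degrees of freedom, tau-squared is truncated to zero and the result coincides with fixed-effect pooling, as happens with the toy numbers above.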
Introduction: COPD is characterized by airflow limitation and is associated with inflammatory changes that lead to dyspnea, sputum purulence, and persistent coughing. The disease trajectory is one of progressive decline, punctuated by frequent acute exacerbations in symptoms. Patients with COPD have an average of three acute exacerbations per year, and these are the second biggest cause of unplanned hospital admissions in the UK.1–3 As COPD is irreversible, and health-related quality of life (HRQOL) in patients with COPD tends to be low, optimizing HRQOL and reducing hospital admissions have become key priorities in COPD management.4,5 Self-management planning is a recognized quality standard of the National Institute for Health and Care Excellence (NICE) guidelines in the UK,2 and a joint statement by the American Thoracic Society/European Respiratory Society6 emphasized its importance in quality of care. Self-management interventions (SMIs) encourage patients to monitor symptoms when stable and to take appropriate action when symptoms begin to worsen.2 However, there is no consensus on the form and content of effective SMIs and the variation in content may explain previous heterogeneity in effectiveness.2,7,8 A recent Health Technology Assessment (HTA) report on the efficacy of self-management for COPD recommended that future research should 1) “try to identify which are the most effective components of interventions and identify patient-specific factors that may modify this”, and that 2) “behavior change theories and strategies that underpin COPD SMIs need to be better characterized and described”.8 To enable better comparison and replication of intervention components, taxonomies have been developed to classify potential active ingredients of interventions according to preestablished descriptions of behavior change techniques (BCTs).9 BCTs are defined as “an observable, replicable, and irreducible component of an intervention designed to alter or redirect 
causal processes that regulate behavior”.9 While recent reviews have conducted content analysis to help identify effective components of SMIs for patients with COPD through individual patient data analysis,10,11 the coding of intervention content was not performed with established taxonomies and clear understanding between the BCT and the targeted behavior was absent (eg, symptom management, physical activity, mental health management, etc). This systematic review aims to summarize the current evidence base on the effectiveness of SMIs for improving HRQOL in people with COPD. Conclusions across reviews have been synthesized and evaluated within the context of how self-management was defined. Meta-analyses were performed that explore the relationship between health behaviors the SMIs target, the BCT they use to target behaviors, and subsequent improvement in HRQOL and health care utilization. In addition, we explore the extent to which trial and intervention features influence SMI effects. Conclusion: SMIs can improve HRQOL and reduce the number of ED visits for patients with COPD, but there is wide variability in effect. To be effective, future interventions should focus on tackling mental health concerns but need not entail multidisciplinary and individual-focused SMIs.
14,991
385
16
[ "smis", "intervention", "patients", "self", "health", "management", "reviews", "effect", "studies", "significant" ]
[ "test", "test" ]
[CONTENT] self-management | emergency department visits | behavior change techniques | COPD | mental health | meta-analysis [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Disease Progression | Emergency Service, Hospital | Female | Health Behavior | Health Knowledge, Attitudes, Practice | Health Resources | Humans | Lung | Male | Mental Health | Middle Aged | Pulmonary Disease, Chronic Obstructive | Quality of Life | Self Care | Treatment Outcome [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] smis | intervention | patients | self | health | management | reviews | effect | studies | significant [SUMMARY]
[CONTENT] copd | management | health | quality | smis | content | behavior | identify effective components | effective components | taxonomies [SUMMARY]
[CONTENT] point | mean | data | analyses | cma software | time point | random | random effects | software | cma [SUMMARY]
[CONTENT] significant | smis | 95 | smis targeting | effect | targeting | 95 ci | ci | smd | usual care smd [SUMMARY]
[CONTENT] effective future interventions | reduce number ed visits | health concerns need entail | need entail multidisciplinary individual | need entail multidisciplinary | need entail | health concerns need | tackling | tackling mental | tackling mental health [SUMMARY]
[CONTENT] smis | intervention | significant | patients | self | management | health | effect | reviews | bcts [SUMMARY]
[CONTENT] ||| ||| HRQOL | COPD [SUMMARY]
[CONTENT] ||| 1 | 2 | 3 ||| HRQOL [SUMMARY]
[CONTENT] SMI | 26 | 11 ||| HRQOL | 95% ||| CI | 95% | CI ||| HRQOL | P=0.04 ||| HRQOL | 1 | 2 | 3 | 4 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| HRQOL ||| ||| 1 | 2 | 3 ||| HRQOL ||| ||| SMI | 26 | 11 ||| HRQOL | 95% ||| CI | 95% | CI ||| HRQOL | P=0.04 ||| HRQOL | 1 | 2 | 3 | 4 ||| [SUMMARY]
Brainstem infarcts predict REM sleep behavior disorder in acute ischemic stroke.
24758223
Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. The prevalence of RBD is well-known in Parkinson's disease, Lewy body dementia, and multiple systems atrophy. However, its prevalence and causes in stroke remain unclear. The aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke.
BACKGROUND
A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. Within 2 days of admission, a research nurse collected demographic and clinical data and assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS). One hundred and nineteen of the 775 patients meeting study entry criteria formed the study sample. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke. At this attendance, a research assistant administered the MMSE and the 13-item RBD questionnaire (RBDQ).
METHODS
Among 119 stroke patients, 10.9% exhibited RBD, defined as an REM sleep behavior disorder questionnaire score of 19 or above. The proportion of patients with acute brainstem infarcts was significantly higher in RBD patients than in those without RBD. Compared with patients without RBD, RBD patients were more likely to have brainstem infarcts and had smaller infarct volumes. In a multivariate analysis, in which stroke location and infarct volume were entered, brainstem infarcts were an independent predictor of RBD (odds ratio = 3.686; P = 0.032).
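The odds ratio of 3.686 reported here comes from a multivariate logistic regression adjusting for stroke location and infarct volume, so it cannot be reproduced from a simple 2x2 table; the crude (unadjusted) odds ratio underlying such analyses, however, follows from standard cross-product arithmetic. A minimal sketch with made-up counts (exposure = brainstem infarct, outcome = RBD; not the study's data):

```python
import math

def crude_odds_ratio(a, b, c, d):
    """Unadjusted OR with a 95% Wald CI from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    log_se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * log_se)
    hi = math.exp(math.log(or_) + 1.96 * log_se)
    return or_, (lo, hi)

# Illustrative counts only: crude OR = (10 * 40) / (20 * 5) = 4.0
or_, ci = crude_odds_ratio(10, 20, 5, 40)
```

A CI that excludes 1.0, as with these toy counts, corresponds to a statistically significant association at the 5% level.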
RESULTS
The results support the notion of a predominant role of brainstem injury in the development of RBD and suggest that patients with brainstem infarcts should be evaluated for RBD by a clinical neurologist.
CONCLUSIONS
[ "Aged", "Brain Stem Infarctions", "Female", "Humans", "Male", "REM Sleep Behavior Disorder", "Stroke" ]
4004510
Background
Sleep disturbances are frequently found in stroke [1-4]. They increase the risk of stroke [5,6] and affect the clinical course and outcome of stroke [1,7]. Functional impairment, longer hospitalization and rehabilitation periods have been reported in stroke patients with sleep disturbance [8]. Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. RBD predominantly affects older adults and has an estimated prevalence of 0.4–0.5% in adults [9]. RBD may be idiopathic or part of a neurodegenerative condition, particularly Parkinson’s disease, Lewy body dementia, and multiple systems atrophy with a prevalence ranging from 13% to 100% [9,10]. In neurodegenerative diseases, RBD is associated with brainstem lesions [10]. The prevalence and pathophysiology of RBD in stroke are largely unknown. Only case reports and small case series have been published and suggest that RBD in stroke is related to brainstem lesions [11-15]. Specifically, pontine strokes were described in single case reports [11,13,15]. Three in a series of six patients with RBD had infarcts in the dorsal pontomesencephalon [12]. The aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke.
Methods
Participants A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. All patients were screened against the inclusion criteria (Figure 1): 1. Chinese ethnicity; 2. Cantonese as the primary language; 3. well-documented first or recurrent acute stroke occurring within 7 days before admission; and 4. the ability and willingness to give consent. The exclusion criteria were: 1. transient ischemic attack, cerebral hemorrhage, subdural hematoma or subarachnoid hemorrhage; 2. history of a CNS disease such as tumor, Parkinson’s disease, dementia, or others; 3. history of depression, alcoholism or other psychiatric disorders; 4. Mini-Mental State Examination (MMSE) [16] score of less than 20; 5. severe aphasia or auditory or visual impairment; 6. physical frailty; 7. recurrence of stroke prior to the 3-month assessment; and 8. no acute infarct, or more than one acute infarct, on MRI. Recruitment profile of the study. CNS central nervous system, MMSE mini-mental state examination, MRI magnetic resonance imaging.
Materials and procedure The study protocol was approved by the Clinical Research Ethics Committee of the Chinese University of Hong Kong. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, where they signed a consent form and received a face-to-face interview conducted by a research assistant. A research nurse collected demographic and clinical data, assessed the severity of each stroke using the National Institutes of Health Stroke Scale (NIHSS) [17] within 2 days of admission, and entered these data in a Stroke Registry. The research assistant administered the MMSE 3 months after the onset of the index stroke. The research assistant, who was blind to the patients’ radiological data, also administered the 13-item RBD questionnaire (RBDQ) [18], which has a score ranging from 0 to 100. The RBDQ has demonstrated robust psychometric properties, with good sensitivity (82.2%), specificity (86.9%), positive (86.3%) and negative (83.0%) predictive values, high internal consistency (90%), and test-retest reliability (89%) [18]. RBD was defined as the presence of clinically significant RBD symptoms, indicated by an RBDQ score of 19 or above [18]. MRI was performed with a 1.5-T system within 7 days after admission. A neurologist (YKC), who was blind to the patients’ RBD status, assessed all of the MRIs. The number and size of acute infarcts affecting different structures, including the frontal, temporal, parietal and occipital lobes, subcortical white matter, thalamus, basal ganglia, brainstem and cerebellum, were evaluated. If an infarct involved more than one location, e.g. basal ganglia and subcortical region, it was counted twice, once for the basal ganglia and once for the subcortical white matter. The total area of acute infarcts was measured by manually outlining all areas of restricted water diffusion identified on diffusion-weighted images (DWI) with a b value of 1000. The total volume was calculated by multiplying the total area by the sum of the slice thickness and inter-slice gap [19]. The details of the MRI assessment have been described elsewhere [19].
Statistical analysis Patients with no acute infarct, or with more than one acute infarct, on MRI were excluded from the analysis. The demographic and clinical characteristics of the RBD patients (RBD group) were compared with those of patients without RBD (non-RBD group). Subsequently, logistic regression models were constructed. In the multivariate regression, risk factors with a P value <0.05 were entered using a forward stepwise selection strategy. Throughout the study, the significance threshold was set at P = 0.05.
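The infarct-volume rule described in the Methods (total outlined DWI lesion area multiplied by the sum of the slice thickness and inter-slice gap) can be sketched as follows. This is a minimal illustration of the stated formula; the function name, units, and example numbers are assumptions for demonstration, not the study's analysis code.

```python
def infarct_volume_ml(slice_areas_mm2, slice_thickness_mm, gap_mm):
    """Estimate total infarct volume from manually outlined DWI lesion areas.

    Implements the rule from the Methods: total volume equals the total
    outlined area multiplied by (slice thickness + inter-slice gap).
    Returns the volume in millilitres (1 mL = 1000 mm^3).
    """
    total_area_mm2 = sum(slice_areas_mm2)
    volume_mm3 = total_area_mm2 * (slice_thickness_mm + gap_mm)
    return volume_mm3 / 1000.0


# Example: three slices with outlined lesion areas of 120, 250 and 80 mm^2,
# acquired as 5 mm slices with a 1 mm gap:
# (450 mm^2) * (6 mm) = 2700 mm^3 = 2.7 mL.
print(infarct_volume_ml([120, 250, 80], slice_thickness_mm=5.0, gap_mm=1.0))
```

Note that this single-multiplier rule assumes a uniform slice thickness and gap across the acquisition, which matches the protocol described above.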
Results
One hundred and nineteen of the 775 patients met the inclusion and exclusion criteria and formed the study sample (Figure 1). There was no significant difference between included and excluded patients in terms of age, sex, and NIHSS score. Thirteen patients (10.9%) had RBD. Among patients with brainstem infarct, the prevalence of RBD was higher (6 of 27; 22.2%). Four of the six (67%) RBD patients with brainstem infarcts were men; five had ventral pontine infarcts and one had a medullary infarct. The demographic and MRI characteristics and stroke-related data of the sample are shown in Table 1. The proportion of patients with acute brainstem and pontine base infarcts was significantly higher among RBD patients than among those without RBD. Compared to patients without RBD, RBD patients had smaller infarct volumes. The correlation between the presence of acute brainstem infarcts and acute pontine base infarcts was 0.901. The presence of acute brainstem infarct and infarct volume were entered into a multivariate logistic regression analysis, in which the presence of acute brainstem infarct was a significant independent predictor of RBD (odds ratio = 3.69; Table 2). Another regression model was constructed by entering the presence and volume of acute pontine infarcts; acute pontine base infarct was not a significant predictor of RBD (Table 3). Table 1: Demographic characteristics, psychosocial risk factors, stroke severity, and radiological characteristics by RBD status (MMSE = Mini-Mental State Examination; NIHSS = National Institutes of Health Stroke Scale; RBD = REM sleep behavior disorder; a: t-test, b: chi-square test, c: Mann-Whitney U-test, d: Fisher’s exact test). Tables 2 and 3: Multivariate logistic models of the clinical determinants of RBD (RBD = REM sleep behavior disorder; a: logistic regression).
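For readers less familiar with logistic-regression output, the reported odds ratio is the exponential of the fitted coefficient for the brainstem-infarct indicator, and a Wald confidence interval is obtained by exponentiating the coefficient plus or minus 1.96 standard errors. A minimal sketch (the standard error below is an illustrative assumption, not a value from the study):

```python
import math


def or_with_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))


# A coefficient of ln(3.69) on a binary brainstem-infarct predictor
# corresponds to an odds ratio of 3.69; se=0.6 is illustrative only.
or_point, ci_low, ci_high = or_with_ci(math.log(3.69), se=0.6)
print(round(or_point, 2), round(ci_low, 2), round(ci_high, 2))
```

Because the predictor is binary (brainstem infarct present vs. absent), the odds ratio compares the odds of RBD between these two groups directly.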
Conclusions
In conclusion, the results of this study indicate that brainstem infarcts are associated with RBD in acute ischemic stroke. Patients with brainstem infarcts should be evaluated for RBD by a clinical neurologist, given that RBD can easily be diagnosed and treated with clonazepam [24].
[ "Background", "Participants", "Materials and procedure", "Statistical analysis", "Strengths and limitations of the study", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Sleep disturbances are frequently found in stroke [1-4]. They increase the risk of stroke [5,6] and affect the clinical course and outcome of stroke [1,7]. Functional impairment, longer hospitalization and rehabilitation periods have been reported in stroke patients with sleep disturbance [8]. Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. RBD predominantly affects older adults and has an estimated prevalence of 0.4–0.5% in adults [9]. RBD may be idiopathic or part of a neurodegenerative condition, particularly Parkinson’s disease, Lewy body dementia, and multiple systems atrophy with a prevalence ranging from 13% to 100% [9,10]. In neurodegenerative diseases, RBD is associated with brainstem lesions [10].\nThe prevalence and pathophysiology of RBD in stroke are largely unknown. Only case reports and small case series have been published and suggest that RBD in stroke is related to brainstem lesions [11-15]. Specifically, pontine strokes were described in single case reports [11,13,15]. Three in a series of six patients with RBD had infarcts in the dorsal pontomesencephalon [12].\nThe aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke.", "A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. All patients were screened for inclusion criteria (Figure 1): 1. Chinese ethnicity; 2. Cantonese as the primary language; 3. well-documented first or recurrent acute stroke occurring within 7 days before admission; and 4. the ability and willingness to give consent. The exclusion criteria were: 1. 
transient ischemic attack, cerebral hemorrhage, subdural hematoma or subarachnoid hemorrhage; 2. history of a CNS disease such as tumor, Parkinson’s disease, dementia, or others; 3. history of depression, alcoholism or other psychiatric disorders; 4. Mini-Mental State Examination (MMSE) [16] score of less than 20; 5. severe aphasia or auditory or visual impairment; 6. physical frailty; and 7. recurrence of stroke prior to the 3-month assessment; and 8. No acute infarct or more than one acute infarct in MRI.\nRecruitment profile of the study.\nCNS central nervous system, MMSE mini-mental state examination, MRI magnetic resonance imaging.", "The study protocol was approved by the Clinical Research Ethics Committee of the Chinese University of Hong Kong. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, where they signed a consent form and received face-to-face interview conducted by a research assistant.\nA research nurse collected demographic and clinical data, assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS) [17] within 2 days of admission, and entered these data in a Stroke Registry. The research assistant administered the MMSE 3 months after the onset of the index stroke. The research assistant, who was blind to the patients’ radiological data, also administered the 13-item RBD questionnaire (RBDQ) [18], which has a score ranging from 0 to 100.\nThe RBDQ demonstrated robust psychometric properties with good sensitivity (82.2%), specificity (86.9%), positive (86.3%) and negative (83.0%) predictive value, high internal consistency (90%), and test-retest reliability (89%) [18]. RBD was defined as the presence of clinically significant RBD symptoms indicated by an RBDQ score of 19 or above [18].\nMRI was performed with a 1.5-T system within 7 days after admission. A neurologist (YKC), who was blind to the patients’ RBD status, assessed all of the MRIs. 
The number and size of acute infarcts affecting different structures, including the frontal, temporal, parietal and occipital lobes, subcortical white matter, thalamus, basal ganglia, brain stem and cerebellum were evaluated. If an infarct involved more than on location, e.g. basal ganglia and subcortical region, then it was counted twice, one for the basal ganglia and one for the subcortical white matter. The total area of acute infarcts on the DWI was measured by manual outlines of all areas with restricted water diffusion identified on the diffusion-weighted images with b values of 1000. The total volume was calculated by multiplying the total area by the sum of the slice thickness and gap [19]. The details of the MRI assessment have been described elsewhere [19].", "Patients without acute infarct or more than one acute infarct on MRI were excluded from the analysis. The demographic and clinical characteristics of the RBD patients (RBD group) were compared to those without RBD (non-RBD group). Subsequently, logistic regression models were constructed. In a multivariate regression, risk factors with a P value <0.05 were inserted using a forward stepwise selection strategy. Throughout the study, the significance threshold was set at P = 0.05.", "The strength of this study is the prospective subject recruitment with MRI scans. Its main limitation is the small sample size, which was the consequence of stringent inclusion and exclusion criteria that allowed to obtain a well-defined sample. Thus, despite inclusion of only 119 patients, multivariate logistic regression could be performed, in which two factors, stroke topography and volume were included [23]. RBD was diagnosed using a questionnaire, and the questionnaire did not differentiate the onset of RBD before or after stroke. Ideally, polysomnography (PSG) should have been performed, but for logistical reasons PSG could not be included in this study.", "The authors report no conflict of interest. 
The funding agencies had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.", "WKT designed the study. YKC, HJL, XXL, WCWC, ATA, JA, VCTM, KSW conducted data collection. YKC, HJL conducted statistical analysis. WKT, HJL, XXL interpreted the data. WKT wrote the first draft of the manuscript. The critical revision of the manuscript was made by DMH, GSU. All authors reviewed the first draft of the paper. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2377/14/88/prepub\n" ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Participants", "Materials and procedure", "Statistical analysis", "Results", "Discussion", "Strengths and limitations of the study", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Sleep disturbances are frequently found in stroke [1-4]. They increase the risk of stroke [5,6] and affect the clinical course and outcome of stroke [1,7]. Functional impairment, longer hospitalization and rehabilitation periods have been reported in stroke patients with sleep disturbance [8]. Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. RBD predominantly affects older adults and has an estimated prevalence of 0.4–0.5% in adults [9]. RBD may be idiopathic or part of a neurodegenerative condition, particularly Parkinson’s disease, Lewy body dementia, and multiple systems atrophy with a prevalence ranging from 13% to 100% [9,10]. In neurodegenerative diseases, RBD is associated with brainstem lesions [10].\nThe prevalence and pathophysiology of RBD in stroke are largely unknown. Only case reports and small case series have been published and suggest that RBD in stroke is related to brainstem lesions [11-15]. Specifically, pontine strokes were described in single case reports [11,13,15]. Three in a series of six patients with RBD had infarcts in the dorsal pontomesencephalon [12].\nThe aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke.", " Participants A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. All patients were screened for inclusion criteria (Figure 1): 1. Chinese ethnicity; 2. Cantonese as the primary language; 3. well-documented first or recurrent acute stroke occurring within 7 days before admission; and 4. the ability and willingness to give consent. 
The exclusion criteria were: 1. transient ischemic attack, cerebral hemorrhage, subdural hematoma or subarachnoid hemorrhage; 2. history of a CNS disease such as tumor, Parkinson’s disease, dementia, or others; 3. history of depression, alcoholism or other psychiatric disorders; 4. Mini-Mental State Examination (MMSE) [16] score of less than 20; 5. severe aphasia or auditory or visual impairment; 6. physical frailty; and 7. recurrence of stroke prior to the 3-month assessment; and 8. No acute infarct or more than one acute infarct in MRI.\nRecruitment profile of the study.\nCNS central nervous system, MMSE mini-mental state examination, MRI magnetic resonance imaging.\nA total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. All patients were screened for inclusion criteria (Figure 1): 1. Chinese ethnicity; 2. Cantonese as the primary language; 3. well-documented first or recurrent acute stroke occurring within 7 days before admission; and 4. the ability and willingness to give consent. The exclusion criteria were: 1. transient ischemic attack, cerebral hemorrhage, subdural hematoma or subarachnoid hemorrhage; 2. history of a CNS disease such as tumor, Parkinson’s disease, dementia, or others; 3. history of depression, alcoholism or other psychiatric disorders; 4. Mini-Mental State Examination (MMSE) [16] score of less than 20; 5. severe aphasia or auditory or visual impairment; 6. physical frailty; and 7. recurrence of stroke prior to the 3-month assessment; and 8. No acute infarct or more than one acute infarct in MRI.\nRecruitment profile of the study.\nCNS central nervous system, MMSE mini-mental state examination, MRI magnetic resonance imaging.\n Materials and procedure The study protocol was approved by the Clinical Research Ethics Committee of the Chinese University of Hong Kong. 
All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, where they signed a consent form and received face-to-face interview conducted by a research assistant.\nA research nurse collected demographic and clinical data, assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS) [17] within 2 days of admission, and entered these data in a Stroke Registry. The research assistant administered the MMSE 3 months after the onset of the index stroke. The research assistant, who was blind to the patients’ radiological data, also administered the 13-item RBD questionnaire (RBDQ) [18], which has a score ranging from 0 to 100.\nThe RBDQ demonstrated robust psychometric properties with good sensitivity (82.2%), specificity (86.9%), positive (86.3%) and negative (83.0%) predictive value, high internal consistency (90%), and test-retest reliability (89%) [18]. RBD was defined as the presence of clinically significant RBD symptoms indicated by an RBDQ score of 19 or above [18].\nMRI was performed with a 1.5-T system within 7 days after admission. A neurologist (YKC), who was blind to the patients’ RBD status, assessed all of the MRIs. The number and size of acute infarcts affecting different structures, including the frontal, temporal, parietal and occipital lobes, subcortical white matter, thalamus, basal ganglia, brain stem and cerebellum were evaluated. If an infarct involved more than on location, e.g. basal ganglia and subcortical region, then it was counted twice, one for the basal ganglia and one for the subcortical white matter. The total area of acute infarcts on the DWI was measured by manual outlines of all areas with restricted water diffusion identified on the diffusion-weighted images with b values of 1000. The total volume was calculated by multiplying the total area by the sum of the slice thickness and gap [19]. 
The details of the MRI assessment have been described elsewhere [19].\nThe study protocol was approved by the Clinical Research Ethics Committee of the Chinese University of Hong Kong. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, where they signed a consent form and received face-to-face interview conducted by a research assistant.\nA research nurse collected demographic and clinical data, assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS) [17] within 2 days of admission, and entered these data in a Stroke Registry. The research assistant administered the MMSE 3 months after the onset of the index stroke. The research assistant, who was blind to the patients’ radiological data, also administered the 13-item RBD questionnaire (RBDQ) [18], which has a score ranging from 0 to 100.\nThe RBDQ demonstrated robust psychometric properties with good sensitivity (82.2%), specificity (86.9%), positive (86.3%) and negative (83.0%) predictive value, high internal consistency (90%), and test-retest reliability (89%) [18]. RBD was defined as the presence of clinically significant RBD symptoms indicated by an RBDQ score of 19 or above [18].\nMRI was performed with a 1.5-T system within 7 days after admission. A neurologist (YKC), who was blind to the patients’ RBD status, assessed all of the MRIs. The number and size of acute infarcts affecting different structures, including the frontal, temporal, parietal and occipital lobes, subcortical white matter, thalamus, basal ganglia, brain stem and cerebellum were evaluated. If an infarct involved more than on location, e.g. basal ganglia and subcortical region, then it was counted twice, one for the basal ganglia and one for the subcortical white matter. 
The total area of acute infarcts on the DWI was measured by manual outlines of all areas with restricted water diffusion identified on the diffusion-weighted images with b values of 1000. The total volume was calculated by multiplying the total area by the sum of the slice thickness and gap [19]. The details of the MRI assessment have been described elsewhere [19].\n Statistical analysis Patients without acute infarct or more than one acute infarct on MRI were excluded from the analysis. The demographic and clinical characteristics of the RBD patients (RBD group) were compared to those without RBD (non-RBD group). Subsequently, logistic regression models were constructed. In a multivariate regression, risk factors with a P value <0.05 were inserted using a forward stepwise selection strategy. Throughout the study, the significance threshold was set at P = 0.05.\nPatients without acute infarct or more than one acute infarct on MRI were excluded from the analysis. The demographic and clinical characteristics of the RBD patients (RBD group) were compared to those without RBD (non-RBD group). Subsequently, logistic regression models were constructed. In a multivariate regression, risk factors with a P value <0.05 were inserted using a forward stepwise selection strategy. Throughout the study, the significance threshold was set at P = 0.05.", "A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. All patients were screened for inclusion criteria (Figure 1): 1. Chinese ethnicity; 2. Cantonese as the primary language; 3. well-documented first or recurrent acute stroke occurring within 7 days before admission; and 4. the ability and willingness to give consent. The exclusion criteria were: 1. transient ischemic attack, cerebral hemorrhage, subdural hematoma or subarachnoid hemorrhage; 2. 
history of a CNS disease such as tumor, Parkinson’s disease, dementia, or others; 3. history of depression, alcoholism or other psychiatric disorders; 4. Mini-Mental State Examination (MMSE) [16] score of less than 20; 5. severe aphasia or auditory or visual impairment; 6. physical frailty; and 7. recurrence of stroke prior to the 3-month assessment; and 8. No acute infarct or more than one acute infarct in MRI.\nRecruitment profile of the study.\nCNS central nervous system, MMSE mini-mental state examination, MRI magnetic resonance imaging.", "The study protocol was approved by the Clinical Research Ethics Committee of the Chinese University of Hong Kong. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, where they signed a consent form and received face-to-face interview conducted by a research assistant.\nA research nurse collected demographic and clinical data, assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS) [17] within 2 days of admission, and entered these data in a Stroke Registry. The research assistant administered the MMSE 3 months after the onset of the index stroke. The research assistant, who was blind to the patients’ radiological data, also administered the 13-item RBD questionnaire (RBDQ) [18], which has a score ranging from 0 to 100.\nThe RBDQ demonstrated robust psychometric properties with good sensitivity (82.2%), specificity (86.9%), positive (86.3%) and negative (83.0%) predictive value, high internal consistency (90%), and test-retest reliability (89%) [18]. RBD was defined as the presence of clinically significant RBD symptoms indicated by an RBDQ score of 19 or above [18].\nMRI was performed with a 1.5-T system within 7 days after admission. A neurologist (YKC), who was blind to the patients’ RBD status, assessed all of the MRIs. 
The number and size of acute infarcts affecting different structures, including the frontal, temporal, parietal and occipital lobes, subcortical white matter, thalamus, basal ganglia, brain stem and cerebellum were evaluated. If an infarct involved more than on location, e.g. basal ganglia and subcortical region, then it was counted twice, one for the basal ganglia and one for the subcortical white matter. The total area of acute infarcts on the DWI was measured by manual outlines of all areas with restricted water diffusion identified on the diffusion-weighted images with b values of 1000. The total volume was calculated by multiplying the total area by the sum of the slice thickness and gap [19]. The details of the MRI assessment have been described elsewhere [19].", "Patients without acute infarct or more than one acute infarct on MRI were excluded from the analysis. The demographic and clinical characteristics of the RBD patients (RBD group) were compared to those without RBD (non-RBD group). Subsequently, logistic regression models were constructed. In a multivariate regression, risk factors with a P value <0.05 were inserted using a forward stepwise selection strategy. Throughout the study, the significance threshold was set at P = 0.05.", "One-hundred-nineteen of the 775 patients meeting the inclusion and exclusion criteria formed the study sample (Figure 1). There was no significant difference between included and excluded patients in terms of age, sex, and NIHSS score. Thirteen patients (10.9%) had RBD. In patients with brainstem infarct the prevalence of RBD was higher (6 out of 27; 22.2%). Four of the six (67%) of RBD patients with brainstem infarcts were men; five had ventral pontine infarct and one had medullar infarct.\nThe demographic and MRI characteristics and stroke-related data of the sample are shown in Table 1. 
The proportion of patients with acute brainstem and pontine base infarct was significantly higher in RBD patients than those without RBD. Compared to patients without RBD, RBD patients had smaller infarct volumes. The correlation between the presence of acute brainstem infarcts and acute pontine base infarcts was 0.901. The presence of acute brainstem infarct and infarct volume were entered into a multivariate logistic regression analysis where the presence of acute brainstem infarct was a significant independent predictor of RBD (odds ratio = 3.69; Table 2). Another regression model was constructed by entering the presence and volume of acute pontine infarcts. Acute pontine base infarct was not a significant predictor of RBD (Table 3).\nDemographic characteristics, psychosocial risk factors, stroke severity, and radiological characteristics by RBD status\nMMSE = Mini-Mental State Examination; NIHSS = National Institutes of Health Stroke Scale, RBD = REM sleep behavior disorder.\nat-test, bchi-square test, cMann-Whitney U-test, dFisher’s exact test.\nMultivariate logistic model of the clinical determinants of RBD\nRBD = REM sleep behavior disorder.\naLogistic regression.\nMultivariate logistic model of the clinical determinants of RBD\nRBD = REM sleep behavior disorder.\naLogistic regression.", "This was the first systematic prospective examination of factors influencing the development of RBD in acute ischemic stroke. The main finding of the study is that brainstem infarcts are associated with RBD in acute ischemic stroke.\nThe frequency of RBD in this study was 10.9%, which is lower than found in other neurodegenerative diseases [9]. In patients with acute brainstem infarct, the frequency of RBD was 22% indicating that RBD may be common in this stroke subgroup. It is unknown whether RBD affects stroke outcome. 
Other parasomnias, such as restless legs syndrome, have been associated with increased mortality in the general population [20].\nStroke-related RBD is a secondary RBD [10]. There is evidence supporting the role of the brainstem in the pathogenesis of RBD in other neurological conditions [10,21]. Data from animal models suggest that RBD results from brainstem dysfunction leading to a lack of muscle atonia during REM sleep [9]. Within the brainstem, degeneration of the pontine glutamatergic and medullary GABAergic neurons has been implicated in the pathophysiology of RBD [22].\n Strengths and limitations of the study The strength of this study is the prospective subject recruitment with MRI scans. Its main limitation is the small sample size, which was the consequence of stringent inclusion and exclusion criteria that allowed us to obtain a well-defined sample. Thus, despite the inclusion of only 119 patients, multivariate logistic regression could be performed, in which two factors, stroke topography and volume, were included [23]. RBD was diagnosed using a questionnaire, and the questionnaire did not differentiate the onset of RBD before or after stroke. Ideally, polysomnography (PSG) should have been performed, but for logistical reasons PSG could not be included in this study.", "The strength of this study is the prospective subject recruitment with MRI scans. Its main limitation is the small sample size, which was the consequence of stringent inclusion and exclusion criteria that allowed us to obtain a well-defined sample. Thus, despite the inclusion of only 119 patients, multivariate logistic regression could be performed, in which two factors, stroke topography and volume, were included [23]. RBD was diagnosed using a questionnaire, and the questionnaire did not differentiate the onset of RBD before or after stroke. Ideally, polysomnography (PSG) should have been performed, but for logistical reasons PSG could not be included in this study.", "In conclusion, the results of this study indicate that brainstem infarcts are associated with RBD in acute ischemic stroke. RBD should be evaluated by a clinical neurologist in patients with brainstem infarcts, given that RBD can easily be diagnosed and treated with clonazepam [24].", "The authors report no conflict of interest. The funding agencies had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.", "WKT designed the study. YKC, HJL, XXL, WCWC, ATA, JA, VCTM, KSW conducted data collection. YKC, HJL conducted statistical analysis. WKT, HJL, XXL interpreted the data. WKT wrote the first draft of the manuscript. The critical revision of the manuscript was made by DMH, GSU. All authors reviewed the first draft of the paper. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2377/14/88/prepub\n" ]
[ null, "methods", null, null, null, "results", "discussion", null, "conclusions", null, null, null ]
[ "Sleep", "Acute ischemic stroke", "Ischemia", "Brainstem", "Infarcts" ]
Background: Sleep disturbances are frequently found in stroke [1-4]. They increase the risk of stroke [5,6] and affect the clinical course and outcome of stroke [1,7]. Functional impairment, longer hospitalization and rehabilitation periods have been reported in stroke patients with sleep disturbance [8]. Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. RBD predominantly affects older adults and has an estimated prevalence of 0.4–0.5% in adults [9]. RBD may be idiopathic or part of a neurodegenerative condition, particularly Parkinson’s disease, Lewy body dementia, and multiple system atrophy, with a prevalence ranging from 13% to 100% [9,10]. In neurodegenerative diseases, RBD is associated with brainstem lesions [10]. The prevalence and pathophysiology of RBD in stroke are largely unknown. Only case reports and small case series have been published and suggest that RBD in stroke is related to brainstem lesions [11-15]. Specifically, pontine strokes were described in single case reports [11,13,15]. Three patients in a series of six with RBD had infarcts in the dorsal pontomesencephalon [12]. The aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke. Methods: Participants A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. All patients were screened for inclusion criteria (Figure 1): 1. Chinese ethnicity; 2. Cantonese as the primary language; 3. well-documented first or recurrent acute stroke occurring within 7 days before admission; and 4. the ability and willingness to give consent. 
The exclusion criteria were: 1. transient ischemic attack, cerebral hemorrhage, subdural hematoma or subarachnoid hemorrhage; 2. history of a CNS disease such as tumor, Parkinson’s disease, dementia, or others; 3. history of depression, alcoholism or other psychiatric disorders; 4. Mini-Mental State Examination (MMSE) [16] score of less than 20; 5. severe aphasia or auditory or visual impairment; 6. physical frailty; 7. recurrence of stroke prior to the 3-month assessment; and 8. no acute infarct, or more than one acute infarct, on MRI. Recruitment profile of the study. CNS central nervous system, MMSE mini-mental state examination, MRI magnetic resonance imaging. Materials and procedure The study protocol was approved by the Clinical Research Ethics Committee of the Chinese University of Hong Kong. 
All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, where they signed a consent form and received a face-to-face interview conducted by a research assistant. A research nurse collected demographic and clinical data, assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS) [17] within 2 days of admission, and entered these data in a Stroke Registry. The research assistant administered the MMSE 3 months after the onset of the index stroke. The research assistant, who was blind to the patients’ radiological data, also administered the 13-item RBD questionnaire (RBDQ) [18], which has a score ranging from 0 to 100. The RBDQ demonstrated robust psychometric properties with good sensitivity (82.2%), specificity (86.9%), positive (86.3%) and negative (83.0%) predictive value, high internal consistency (90%), and test-retest reliability (89%) [18]. RBD was defined as the presence of clinically significant RBD symptoms indicated by an RBDQ score of 19 or above [18]. MRI was performed with a 1.5-T system within 7 days after admission. A neurologist (YKC), who was blind to the patients’ RBD status, assessed all of the MRIs. The number and size of acute infarcts affecting different structures, including the frontal, temporal, parietal and occipital lobes, subcortical white matter, thalamus, basal ganglia, brain stem and cerebellum, were evaluated. If an infarct involved more than one location, e.g. basal ganglia and subcortical region, then it was counted twice: once for the basal ganglia and once for the subcortical white matter. The total area of acute infarcts on the DWI was measured by manually outlining all areas with restricted water diffusion identified on the diffusion-weighted images with a b value of 1000. The total volume was calculated by multiplying the total area by the sum of the slice thickness and gap [19]. 
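The volume rule just described, total outlined lesion area multiplied by the sum of slice thickness and inter-slice gap, is simple arithmetic and can be sketched as follows. The slice geometry and area values below are hypothetical illustrations; the actual acquisition parameters are not reported in this excerpt.

```python
# Minimal sketch of the infarct volume rule described above:
# total volume = (sum of outlined lesion areas across slices) * (slice thickness + gap).
# All numeric values here are assumed, for illustration only.

def infarct_volume_mm3(slice_areas_mm2, slice_thickness_mm, gap_mm):
    """Total lesion volume from per-slice manually outlined DWI areas."""
    total_area = sum(slice_areas_mm2)
    return total_area * (slice_thickness_mm + gap_mm)

# Example: three DWI slices with outlined areas of 120, 150 and 90 mm^2,
# 5 mm slices with a 1 mm gap (assumed values):
volume = infarct_volume_mm3([120.0, 150.0, 90.0], slice_thickness_mm=5.0, gap_mm=1.0)
# 360 mm^2 * 6 mm = 2160 mm^3
```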
The details of the MRI assessment have been described elsewhere [19]. Statistical analysis Patients without an acute infarct, or with more than one acute infarct, on MRI were excluded from the analysis. The demographic and clinical characteristics of the RBD patients (RBD group) were compared to those without RBD (non-RBD group). Subsequently, logistic regression models were constructed. In the multivariate regression, risk factors with a P value <0.05 were entered using a forward stepwise selection strategy. Throughout the study, the significance threshold was set at P = 0.05. Participants: A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. All patients were screened for inclusion criteria (Figure 1): 1. Chinese ethnicity; 2. Cantonese as the primary language; 3. well-documented first or recurrent acute stroke occurring within 7 days before admission; and 4. the ability and willingness to give consent. The exclusion criteria were: 1. transient ischemic attack, cerebral hemorrhage, subdural hematoma or subarachnoid hemorrhage; 2.
history of a CNS disease such as tumor, Parkinson’s disease, dementia, or others; 3. history of depression, alcoholism or other psychiatric disorders; 4. Mini-Mental State Examination (MMSE) [16] score of less than 20; 5. severe aphasia or auditory or visual impairment; 6. physical frailty; 7. recurrence of stroke prior to the 3-month assessment; and 8. no acute infarct, or more than one acute infarct, on MRI. Recruitment profile of the study. CNS central nervous system, MMSE mini-mental state examination, MRI magnetic resonance imaging. Materials and procedure: The study protocol was approved by the Clinical Research Ethics Committee of the Chinese University of Hong Kong. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, where they signed a consent form and received a face-to-face interview conducted by a research assistant. A research nurse collected demographic and clinical data, assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS) [17] within 2 days of admission, and entered these data in a Stroke Registry. The research assistant administered the MMSE 3 months after the onset of the index stroke. The research assistant, who was blind to the patients’ radiological data, also administered the 13-item RBD questionnaire (RBDQ) [18], which has a score ranging from 0 to 100. The RBDQ demonstrated robust psychometric properties with good sensitivity (82.2%), specificity (86.9%), positive (86.3%) and negative (83.0%) predictive value, high internal consistency (90%), and test-retest reliability (89%) [18]. RBD was defined as the presence of clinically significant RBD symptoms indicated by an RBDQ score of 19 or above [18]. MRI was performed with a 1.5-T system within 7 days after admission. A neurologist (YKC), who was blind to the patients’ RBD status, assessed all of the MRIs. 
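The RBDQ operating characteristics quoted above (sensitivity 82.2%, specificity 86.9%) come from a separate validation study [18]; predictive values, by contrast, depend on the prevalence in the sample at hand. A hedged sketch of the standard Bayes calculation, applied at the 10.9% RBD prevalence observed in this study (our illustration, not an author-reported figure):

```python
# Standard relations between sensitivity/specificity and predictive values.
# Sensitivity and specificity are the published RBDQ figures [18]; applying them
# at this study's 10.9% prevalence is our own illustration, not a reported result.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test with the given operating characteristics."""
    tp = sensitivity * prevalence                 # true-positive fraction of subjects
    fp = (1 - specificity) * (1 - prevalence)     # false-positive fraction
    fn = (1 - sensitivity) * prevalence           # false-negative fraction
    tn = specificity * (1 - prevalence)           # true-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.822, specificity=0.869, prevalence=0.109)
# At ~11% prevalence the PPV falls to roughly 0.43, well below the 86.3%
# reported in the original (presumably higher-prevalence) validation sample.
```

This illustrates why questionnaire-based case definitions can over-call RBD in low-prevalence settings, one of the limitations the authors acknowledge in the absence of polysomnography.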
The number and size of acute infarcts affecting different structures, including the frontal, temporal, parietal and occipital lobes, subcortical white matter, thalamus, basal ganglia, brain stem and cerebellum, were evaluated. If an infarct involved more than one location, e.g. basal ganglia and subcortical region, then it was counted twice: once for the basal ganglia and once for the subcortical white matter. The total area of acute infarcts on the DWI was measured by manually outlining all areas with restricted water diffusion identified on the diffusion-weighted images with a b value of 1000. The total volume was calculated by multiplying the total area by the sum of the slice thickness and gap [19]. The details of the MRI assessment have been described elsewhere [19]. Statistical analysis: Patients without an acute infarct, or with more than one acute infarct, on MRI were excluded from the analysis. The demographic and clinical characteristics of the RBD patients (RBD group) were compared to those without RBD (non-RBD group). Subsequently, logistic regression models were constructed. In the multivariate regression, risk factors with a P value <0.05 were entered using a forward stepwise selection strategy. Throughout the study, the significance threshold was set at P = 0.05. Results: One hundred nineteen of the 775 patients meeting the inclusion and exclusion criteria formed the study sample (Figure 1). There was no significant difference between included and excluded patients in terms of age, sex, and NIHSS score. Thirteen patients (10.9%) had RBD. In patients with brainstem infarct, the prevalence of RBD was higher (6 out of 27; 22.2%). Four of the six (67%) RBD patients with brainstem infarcts were men; five had a ventral pontine infarct and one had a medullary infarct. The demographic and MRI characteristics and stroke-related data of the sample are shown in Table 1. 
The proportion of patients with acute brainstem and pontine base infarcts was significantly higher in RBD patients than in those without RBD. Compared to patients without RBD, RBD patients had smaller infarct volumes. The correlation between the presence of acute brainstem infarcts and acute pontine base infarcts was 0.901. The presence of acute brainstem infarct and infarct volume were entered into a multivariate logistic regression analysis, in which the presence of acute brainstem infarct was a significant independent predictor of RBD (odds ratio = 3.69; Table 2). Another regression model was constructed by entering the presence and volume of acute pontine infarcts. Acute pontine base infarct was not a significant predictor of RBD (Table 3). Demographic characteristics, psychosocial risk factors, stroke severity, and radiological characteristics by RBD status MMSE = Mini-Mental State Examination; NIHSS = National Institutes of Health Stroke Scale; RBD = REM sleep behavior disorder. a t-test; b chi-square test; c Mann-Whitney U-test; d Fisher’s exact test. Multivariate logistic model of the clinical determinants of RBD RBD = REM sleep behavior disorder. a Logistic regression. Multivariate logistic model of the clinical determinants of RBD RBD = REM sleep behavior disorder. a Logistic regression. Discussion: This was the first systematic prospective examination of factors influencing the development of RBD in acute ischemic stroke. The main finding of the study is that brainstem infarcts are associated with RBD in acute ischemic stroke. The frequency of RBD in this study was 10.9%, which is lower than that found in other neurodegenerative diseases [9]. In patients with acute brainstem infarct, the frequency of RBD was 22%, indicating that RBD may be common in this stroke subgroup. It is unknown whether RBD affects stroke outcome. Other parasomnias, such as restless legs syndrome, have been associated with increased mortality in the general population [20]. 
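As an arithmetic cross-check on the counts reported above (13 of 119 patients with RBD overall; 6 of 27 among those with brainstem infarct), the crude, unadjusted odds ratio for RBD given a brainstem infarct can be computed directly from the implied 2x2 table. This back-of-the-envelope figure is our own calculation, not a number from the paper's adjusted model (which reported OR = 3.69):

```python
# Crude odds ratio from the 2x2 table implied by the reported counts:
# 27 brainstem-infarct patients, 6 with RBD; the remaining 119 - 27 = 92 patients
# include 13 - 6 = 7 with RBD. Counts are from the text; the crude OR is our arithmetic.

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Crude (unadjusted) odds ratio for a 2x2 table."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

crude_or = odds_ratio(exposed_cases=6, exposed_noncases=27 - 6,
                      unexposed_cases=13 - 6, unexposed_noncases=92 - 7)
# (6 * 85) / (21 * 7) = 510 / 147 ~= 3.47, reassuringly close to the adjusted OR of 3.69.
```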
Stroke-related RBD is a secondary RBD [10]. There is evidence supporting the role of the brainstem in the pathogenesis of RBD in other neurological conditions [10,21]. Data from animal models suggest that RBD results from brainstem dysfunction leading to a lack of muscle atonia during REM sleep [9]. Within the brainstem, degeneration of the pontine glutamatergic and medullary GABAergic neurons has been implicated in the pathophysiology of RBD [22]. Strengths and limitations of the study The strength of this study is the prospective subject recruitment with MRI scans. Its main limitation is the small sample size, which was the consequence of stringent inclusion and exclusion criteria that allowed us to obtain a well-defined sample. Thus, despite the inclusion of only 119 patients, multivariate logistic regression could be performed, in which two factors, stroke topography and volume, were included [23]. RBD was diagnosed using a questionnaire, and the questionnaire did not differentiate the onset of RBD before or after stroke. Ideally, polysomnography (PSG) should have been performed, but for logistical reasons PSG could not be included in this study. Strengths and limitations of the study: The strength of this study is the prospective subject recruitment with MRI scans. Its main limitation is the small sample size, which was the consequence of stringent inclusion and exclusion criteria that allowed us to obtain a well-defined sample. Thus, despite the inclusion of only 119 patients, multivariate logistic regression could be performed, in which two factors, stroke topography and volume, were included [23]. RBD was diagnosed using a questionnaire, and the questionnaire did not differentiate the onset of RBD before or after stroke. Ideally, polysomnography (PSG) should have been performed, but for logistical reasons PSG could not be included in this study. Conclusions: In conclusion, the results of this study indicate that brainstem infarcts are associated with RBD in acute ischemic stroke. RBD should be evaluated by a clinical neurologist in patients with brainstem infarcts, given that RBD can easily be diagnosed and treated with clonazepam [24]. Competing interests: The authors report no conflict of interest. The funding agencies had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Authors’ contributions: WKT designed the study. YKC, HJL, XXL, WCWC, ATA, JA, VCTM, KSW conducted data collection. YKC, HJL conducted statistical analysis. WKT, HJL, XXL interpreted the data. WKT wrote the first draft of the manuscript. The critical revision of the manuscript was made by DMH, GSU. All authors reviewed the first draft of the paper. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2377/14/88/prepub
Background: Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. The prevalence of RBD is well known in Parkinson's disease, Lewy body dementia, and multiple system atrophy. However, its prevalence and causes in stroke remain unclear. The aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke. Methods: A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. Within 2 days of admission, a research nurse collected demographic and clinical data and assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS). One hundred and nineteen of the 775 patients meeting study entry criteria formed the study sample. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, at which a research assistant administered the MMSE and the 13-item RBD questionnaire (RBDQ). Results: Among the 119 stroke patients, 10.9% exhibited RBD, defined as an REM sleep behavior disorder questionnaire score of 19 or above. The proportion of patients with acute brainstem infarct was significantly higher in RBD patients than in those without RBD. Compared with patients without RBD, RBD patients were more likely to have brainstem infarcts and had smaller infarct volumes. In a multivariate analysis, in which stroke location and infarct volume were entered, brainstem infarcts were an independent predictor of RBD (odds ratio = 3.686; P = 0.032). 
Conclusions: The results support the notion of a predominant role of brainstem injury in the development of RBD and suggest that patients with brainstem infarcts should be evaluated for RBD by a clinical neurologist.
Background: Sleep disturbances are frequently found in stroke [1-4]. They increase the risk of stroke [5,6] and affect the clinical course and outcome of stroke [1,7]. Functional impairment, longer hospitalization and rehabilitation periods have been reported in stroke patients with sleep disturbance [8]. Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. RBD predominantly affects older adults and has an estimated prevalence of 0.4–0.5% in adults [9]. RBD may be idiopathic or part of a neurodegenerative condition, particularly Parkinson’s disease, Lewy body dementia, and multiple system atrophy, with a prevalence ranging from 13% to 100% [9,10]. In neurodegenerative diseases, RBD is associated with brainstem lesions [10]. The prevalence and pathophysiology of RBD in stroke are largely unknown. Only case reports and small case series have been published and suggest that RBD in stroke is related to brainstem lesions [11-15]. Specifically, pontine strokes were described in single case reports [11,13,15]. Three patients in a series of six with RBD had infarcts in the dorsal pontomesencephalon [12]. The aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke. Conclusions: In conclusion, the results of this study indicate that brainstem infarcts are associated with RBD in acute ischemic stroke. RBD should be evaluated by a clinical neurologist in patients with brainstem infarcts, given that RBD can easily be diagnosed and treated with clonazepam [24].
Background: Rapid eye movement (REM) sleep behavior disorder (RBD) is a sleep disturbance in which patients enact their dreams while in REM sleep. The behavior is typically violent in association with violent dream content, so serious harm can be done to the patient or the bed partner. The prevalence of RBD is well known in Parkinson's disease, Lewy body dementia, and multiple system atrophy. However, its prevalence and causes in stroke remain unclear. The aim of this study was to determine factors influencing the appearance of RBD in a prospective cohort of patients with acute ischemic stroke. Methods: A total of 2,024 patients with first-ever or recurrent acute ischemic stroke were admitted to the Acute Stroke Unit at the Prince of Wales Hospital between January 2010 and November 2011; 775 of them received an MRI scan. Within 2 days of admission, a research nurse collected demographic and clinical data and assessed the severity of each stroke using the National Institute of Health Stroke Scale (NIHSS). One hundred and nineteen of the 775 patients meeting study entry criteria formed the study sample. All eligible participants were invited to attend a research clinic 3 months after the onset of the index stroke, at which a research assistant administered the MMSE and the 13-item RBD questionnaire (RBDQ). Results: Among the 119 stroke patients, 10.9% exhibited RBD, defined as an REM sleep behavior disorder questionnaire score of 19 or above. The proportion of patients with acute brainstem infarct was significantly higher in RBD patients than in those without RBD. Compared with patients without RBD, RBD patients were more likely to have brainstem infarcts and had smaller infarct volumes. In a multivariate analysis, in which stroke location and infarct volume were entered, brainstem infarcts were an independent predictor of RBD (odds ratio = 3.686; P = 0.032). 
Conclusions: The results support the notion of a predominant role of brainstem injury in the development of RBD and suggest that patients with brainstem infarcts should be evaluated for RBD by a clinical neurologist.
3,731
395
12
[ "rbd", "stroke", "acute", "patients", "infarct", "study", "mri", "research", "brainstem", "data" ]
[ "test", "test" ]
[CONTENT] Sleep | Acute ischemic stroke | Ischemia | Brainstem | Infarcts [SUMMARY]
[CONTENT] Aged | Brain Stem Infarctions | Female | Humans | Male | REM Sleep Behavior Disorder | Stroke [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] rbd | stroke | acute | patients | infarct | study | mri | research | brainstem | data [SUMMARY]
[CONTENT] rbd | sleep | stroke | case | prevalence | violent | brainstem lesions | 11 | adults | 15 [SUMMARY]
[CONTENT] research | stroke | acute | rbd | mri | acute infarct | total | infarct | patients | research assistant [SUMMARY]
[CONTENT] rbd | infarct | brainstem | patients | pontine | acute brainstem | acute | presence | rbd patients | test [SUMMARY]
[CONTENT] brainstem infarcts | rbd | brainstem | infarcts | infarcts given | infarcts given rbd | results study | treated | treated clonazepam | treated clonazepam 24 [SUMMARY]
[CONTENT] rbd | stroke | acute | patients | infarct | brainstem | study | mri | research | regression [SUMMARY]
[CONTENT] REM | RBD | REM ||| ||| RBD ||| ||| RBD [SUMMARY]
[CONTENT] 2,024 | first | the Acute Stroke Unit | the Prince of Wales Hospital | between January 2010 and November 2011 | 775 ||| 2 days | the National Institute of Health Stroke Scale | NIHSS ||| One hundred and nineteen | 775 ||| 3 months ||| 13 | RBD [SUMMARY]
[CONTENT] 119 | 10.9% | RBD | REM | 19 ||| RBD | RBD ||| RBD | RBD ||| RBD | 3.686 | P =  | 0.032 [SUMMARY]
[CONTENT] RBD | RBD [SUMMARY]
[CONTENT] REM | RBD | REM ||| ||| RBD ||| ||| RBD ||| 2,024 | first | the Acute Stroke Unit | the Prince of Wales Hospital | between January 2010 and November 2011 | 775 ||| 2 days | the National Institute of Health Stroke Scale | NIHSS ||| One hundred and nineteen | 775 ||| 3 months ||| 13 | RBD ||| ||| 119 | 10.9% | RBD | REM | 19 ||| RBD | RBD ||| RBD | RBD ||| RBD | 3.686 | P =  | 0.032 ||| RBD | RBD [SUMMARY]
[CONTENT] REM | RBD | REM ||| ||| RBD ||| ||| RBD ||| 2,024 | first | the Acute Stroke Unit | the Prince of Wales Hospital | between January 2010 and November 2011 | 775 ||| 2 days | the National Institute of Health Stroke Scale | NIHSS ||| One hundred and nineteen | 775 ||| 3 months ||| 13 | RBD ||| ||| 119 | 10.9% | RBD | REM | 19 ||| RBD | RBD ||| RBD | RBD ||| RBD | 3.686 | P =  | 0.032 ||| RBD | RBD [SUMMARY]
Prevalence and characteristics of smokers interested in internet-based smoking cessation interventions: cross-sectional findings from a national household survey.
23506944
An accurate and up-to-date estimate of the potential reach of Internet-based smoking cessation interventions (ISCIs) would improve calculations of impact while an understanding of the characteristics of potential users would facilitate the design of interventions.
BACKGROUND
Data were collected using cross-sectional household surveys of representative samples of adults in England. Interest in trying an Internet site or "app" that was proven to help with stopping smoking was assessed in 1128 adult smokers in addition to sociodemographic characteristics, dependence, motivation to quit, previous attempts to quit smoking, Internet and handheld computer access, and recent types of information searched online.
METHODS
Of a representative sample of current smokers, 46.6% (95% CI 43.5%-49.6%) were interested in using an Internet-based smoking cessation intervention. In contrast, only 0.3% (95% CI 0%-0.7%) of smokers reported having used such an intervention to support their most recent quit attempt within the past year. After adjusting for all other background characteristics, interested smokers were younger (OR=0.98, 95% CI 0.97-0.99), reported stronger urges (OR=1.29, 95% CI 1.10-1.51), were more motivated to quit within 3 months (OR=2.16, 95% CI 1.54-3.02), and were more likely to have made a quit attempt in the past year (OR=1.76, 95% CI 1.30-2.37), access the Internet at least weekly (OR=2.17, 95% CI 1.40-3.36), have handheld computer access (OR=1.65, 95% CI 1.22-2.24), and have used the Internet to search for online smoking cessation information or support in past 3 months (OR=2.82, 95% CI 1.20-6.62). There was no association with social grade.
RESULTS
Almost half of all smokers in England are interested in using online smoking cessation interventions, yet fewer than 1% have used them to support a quit attempt in the past year. Interest is not associated with social grade but is associated with being younger, more highly motivated, more cigarette dependent, having attempted to quit recently, having regular Internet and handheld computer access, and having recently searched for online smoking cessation information and support.
CONCLUSIONS
[ "Adult", "Cross-Sectional Studies", "Data Collection", "Female", "Humans", "Internet", "Male", "Middle Aged", "Prevalence", "Smoking", "Smoking Cessation" ]
3636298
Introduction
The World Health Organization recently attributed 12% of all global deaths among adults aged 30 years and over to tobacco [1]. Almost all these deaths could be avoided if smokers quit before their mid-30s [2]. Yet, in many countries such as the United Kingdom, less than a quarter of smokers quit by this age despite the majority wanting and trying to stop [3,4]. The most effective interventions involve face-to-face behavioral support combined with medication such as nicotine replacement therapy or varenicline [5-7]. However, even in England, where there is a universally available behavioral support program, the vast majority of smokers do not use face-to-face support and almost half attempt to stop unaided [8]. The Internet could be an ideal medium for helping those people who do not wish, or are unable, to engage in face-to-face behavioral support [9,10]. Behavioral support delivered via the Internet has the advantage that it is extremely cost-effective, and some patients prefer the increased convenience and confidentiality and reduced stigma [11,12], while others who are less able to access face-to-face support because of either mobility or geographical barriers may also find it useful. The benefits of Internet support over other low-cost and convenient alternatives to face-to-face support, such as written materials, include the capacity for interactivity and tailoring. Additionally, researchers and practitioners should be attracted by the capability to disseminate evidence-based support faithfully and flexibly update content to reflect new information as it emerges [13]. There is extensive evidence that the Internet can be an effective delivery mode for the behavioral support of a variety of health issues [11,14], and the United Kingdom has issued guidance to use particular programs in routine clinical care (eg, Beating the Blues for mild and moderate depression and FearFighter for phobia, panic, and anxiety) [15]. 
More importantly, there is also specific evidence from three separate systematic reviews that Internet-based smoking cessation interventions (ISCIs) can help smokers to quit compared with brief written materials or no intervention [16-18]. Current evidence is somewhat limited by the heterogeneity of effect across different interventions, insufficient reporting of content [19-21], and the paucity of data relating to long-term abstinence with biochemical verification of smoking status, yet research work is underway that may be able to address these limitations (eg, StopAdvisor [22,23]). In the context of this modest evidence of efficacy, together with the unique advantages of ISCIs, such as low cost, it is important to identify the prevalence of smokers who would be interested in using such support. An accurate estimate of the likely reach of ISCIs is necessary for calculations of impact [24]. Previous estimates of potential reach have often been based on either national figures for Internet access or reported interest among nonrepresentative samples [25,26]. One study that did assess a representative sample of smokers estimated that 40% were interested in using an ISCI [27]. However, the study was conducted between 2006-07, and Internet access and usage patterns are relatively fast-moving phenomena [28]. For example, in Britain the percentage of households that have at least one method of using the Internet while at home increased from 58% in 2003 to 70% in 2009 and again to 77% in 2011 [28,29], while the use of wireless Internet hotspots doubled in just 12 months to 4.9 million users in 2011 [29]. Understanding the characteristics of smokers interested in using ISCIs may help the development of new interventions, or modification of existing ones, in several regards including tailoring dimensions, choice of content and features, navigational architecture, and language style and complexity. 
Similarly, designers would be interested in these associated characteristics for the purpose of dissemination, particularly online advertising, which can often be targeted to reach, or at least focus on, only certain demographic groups. Previous studies have characterized individuals who search for cessation information [30] and who use Internet interventions [31-35], smokers on their use of the Internet [36], and smokers who were either invited to, eligible for, or enrolled in cessation programs according to their subsequent use of the interventions [37-39]. While it is clearly essential to understand these profiles, particularly what determines use among those who are already interested, in order to improve the appeal of these cessation interventions it is also important to establish how interested smokers compare with those who are not in nationally representative samples. To our knowledge, only one other study has characterized a representative sample of smokers on the basis of their interest in ISCIs [27]. In that study, younger and more cigarette dependent smokers who had better Internet access were more likely to express an interest. However, there was no assessment of other important smoking characteristics such as current motivation to stop and past quit attempts, nor was there an assessment of recent online searching behavior. This study addressed the following research questions: How many smokers in a nationally representative sample are interested in using ISCIs? What smoking, Internet use, and sociodemographic characteristics are associated with interest in the use of these interventions?
Methods
Study Design The data were taken from the Smoking Toolkit Study [40], which is an ongoing series of cross-sectional household surveys in England designed to provide information about smoking prevalence and behavior. Each month a new sample of approximately 1800 adults aged 16 and over completes a face-to-face computer-assisted survey with a trained interviewer. By conducting a face-to-face rather than online survey, Internet access should not confound the results. Taylor Nelson Sofres-British Market Research Bureau collects the data as part of their monthly omnibus surveys on behalf of researchers at the Cancer Research UK’s Health Behaviour Research Centre, University College London, who conceived of the study and continue to manage it. The surveys use a form of random location sampling. England is split into 165,665 Output Areas, each comprising approximately 300 households. These Output Areas are stratified by A Classification Of Residential Neighbourhoods (ACORN) characteristics (an established geo-demographic analysis of the population provided by CACI International) and then randomly selected to be included in the lists of the interviewers. Interviewers travel to the selected areas and perform interviews with one participant per household until quotas based upon factors influencing the probability of being at home (working status, age, and gender) are fulfilled. Morning interviews are avoided to maximize participant availability. These survey methods have been previously described and have been shown to result in a baseline sample that is nationally representative in its sociodemographic composition and proportion of smokers [40]. Ethical approval was granted by the University College London ethics committee. The data were taken from the Smoking Toolkit Study [40], which is an ongoing series of cross-sectional household surveys in England designed to provide information about smoking prevalence and behavior. 
Each month a new sample of approximately 1800 adults aged 16 and over completes a face-to-face computer-assisted survey with a trained interviewer. By conducting a face-to-face rather than online survey, Internet access should not confound the results. Taylor Nelson Sofres-British Market Research Bureau collects the data as part of their monthly omnibus surveys on behalf of researchers at the Cancer Research UK’s Health Behaviour Research Centre, University College London, who conceived of the study and continue to manage it. The surveys use a form of random location sampling. England is split into 165,665 Output Areas, each comprising approximately 300 households. These Output Areas are stratified by A Classification Of Residential Neighbourhoods (ACORN) characteristics (an established geo-demographic analysis of the population provided by CACI International) and then randomly selected to be included in the lists of the interviewers. Interviewers travel to the selected areas and perform interviews with one participant per household until quotas based upon factors influencing the probability of being at home (working status, age, and gender) are fulfilled. Morning interviews are avoided to maximize participant availability. These survey methods have been previously described and have been shown to result in a baseline sample that is nationally representative in its sociodemographic composition and proportion of smokers [40]. Ethical approval was granted by the University College London ethics committee. Participants We used data from respondents to the survey between February 2012 and April 2012 who reported smoking cigarettes (including hand-rolled) daily or occasionally at the time of the survey. A total of 5405 adults were surveyed; 1190 reported currently smoking cigarettes regularly of whom 1128 had complete data on all relevant variables. 
We used data from respondents to the survey between February 2012 and April 2012 who reported smoking cigarettes (including hand-rolled) daily or occasionally at the time of the survey. A total of 5405 adults were surveyed; 1190 reported currently smoking cigarettes regularly of whom 1128 had complete data on all relevant variables. Measures Current smokers were asked: “If there were an Internet site that was proven to help with stopping smoking, how likely is it that you would try it?” and also “If there were an application (“app”) for your handheld computer (like a “smartphone” [eg, an iPhone, Blackberry, or Android phone], palmtop, PDA, or tablet) that was proven to help with stopping smoking, how likely is it that you would try it?”. For the purposes of analysis, smokers’ responses on 4-point scales were dichotomized as either being “interested” in using an Internet-based smoking cessation intervention (ie, those responding “very likely” or “quite likely” to either question) or “not interested” (ie, those responding “very unlikely” or “quite unlikely” to both questions). Additionally, current smokers were asked questions that assessed gender, age, and social grade (AB = higher and intermediate professional/managerial, C1 = supervisory, clerical, junior managerial/administrative/professional, C2 = skilled manual workers, D=semi-skilled and unskilled manual workers, E=on state benefit, unemployed, lowest grade workers), dependence (Heaviness of Smoking Index, HSI [41] and Strength of Urges [42]), motivation to quit (Motivation to Stop Scale [43]), previous attempts to quit smoking, access to the Internet and handheld computers, and recent types of information searched online, that is, “For which of the following activities did you use the Internet in the last 3 months for private use? 
Please indicate all that apply: (a) using services related to travel and accommodation, (b) reading or downloading online newsnewspapersnews magazines, (c) looking for a job or sending a job application, (d) seeking health related information or support other than stopping smoking (eg, injury, disease, nutrition, improving health, etc), (e) seeking stop-smoking related information or support, (f) looking for information about education, training or courses, (g) doing an online course (in any subject), (h) consulting the Internet with the purpose of learning, (i) finding information about goods or services” [44]. Responses to Items (a) to (c) and (f) to (i) were aggregated to calculate a variable identifying use of the Internet for information other than health related (Item d) or smoking cessation (Item e). Current smokers were asked: “If there were an Internet site that was proven to help with stopping smoking, how likely is it that you would try it?” and also “If there were an application (“app”) for your handheld computer (like a “smartphone” [eg, an iPhone, Blackberry, or Android phone], palmtop, PDA, or tablet) that was proven to help with stopping smoking, how likely is it that you would try it?”. For the purposes of analysis, smokers’ responses on 4-point scales were dichotomized as either being “interested” in using an Internet-based smoking cessation intervention (ie, those responding “very likely” or “quite likely” to either question) or “not interested” (ie, those responding “very unlikely” or “quite unlikely” to both questions). 
Additionally, current smokers were asked questions that assessed gender, age, and social grade (AB = higher and intermediate professional/managerial, C1 = supervisory, clerical, junior managerial/administrative/professional, C2 = skilled manual workers, D=semi-skilled and unskilled manual workers, E=on state benefit, unemployed, lowest grade workers), dependence (Heaviness of Smoking Index, HSI [41] and Strength of Urges [42]), motivation to quit (Motivation to Stop Scale [43]), previous attempts to quit smoking, access to the Internet and handheld computers, and recent types of information searched online, that is, “For which of the following activities did you use the Internet in the last 3 months for private use? Please indicate all that apply: (a) using services related to travel and accommodation, (b) reading or downloading online newsnewspapersnews magazines, (c) looking for a job or sending a job application, (d) seeking health related information or support other than stopping smoking (eg, injury, disease, nutrition, improving health, etc), (e) seeking stop-smoking related information or support, (f) looking for information about education, training or courses, (g) doing an online course (in any subject), (h) consulting the Internet with the purpose of learning, (i) finding information about goods or services” [44]. Responses to Items (a) to (c) and (f) to (i) were aggregated to calculate a variable identifying use of the Internet for information other than health related (Item d) or smoking cessation (Item e). Analysis Data were analyzed using PASW 18.0.0. We used weighted data only to estimate the prevalence of interest in ISCIs among all smokers regardless of their Internet access. Data were weighted using the rim (marginal) weighting technique to match English census data on age, sex, and socioeconomic group. 
To assess smoking, Internet use, and sociodemographic characteristics associated with interest in the use of ISCIs, we conducted a series of simple and multiple logistic regressions. Alpha was set at P<.05. Data were analyzed using PASW 18.0.0. We used weighted data only to estimate the prevalence of interest in ISCIs among all smokers regardless of their Internet access. Data were weighted using the rim (marginal) weighting technique to match English census data on age, sex, and socioeconomic group. To assess smoking, Internet use, and sociodemographic characteristics associated with interest in the use of ISCIs, we conducted a series of simple and multiple logistic regressions. Alpha was set at P<.05.
Results
Approximately 70% of current smokers had accessed the Internet in the past week while a significant majority also had access to a handheld computer (see Table 1). A minority of users had searched for either smoking or health information support, and more than half had searched for at least one of a variety of “other” types of online information. The sociodemographic and smoking characteristics were typical of a representative sample of smokers [4,40], and by way of comparison, the characteristics of the 4451 current smokers included in the Smoking Toolkit Study for the 12 months before the current study (ie, January 2011 to January 2012) are presented in Table 1. A total of 42.6% (95% CI 39.6%-45.7%) of current smokers were interested in using Internet sites for smoking cessation, 23.9% (95% CI 21.3%-26.5%) were interested in apps, and 46.6% (95% CI 43.5%-49.6%) were interested in ISCIs (either sites or apps). In contrast, only 0.3% (95% CI 0%-0.7%) of smokers reported having used such an intervention to support their most recent quit attempt within the past year. Table 2 shows the smoking, Internet use, and sociodemographic characteristics of smokers by their interest in the use of ISCIs. There was evidence that interested smokers were younger, more cigarette dependent (measured by both HSI and Strength of Urges), more motivated to quit within 3 months, more likely to have made a quit attempt in the past year, accessed the Internet at least weekly, had handheld computer access, had used the Internet to search for online smoking cessation information or support in past 3 months, and had used it to search for a variety of “other” online information. After adjusting for all other background characteristics, associations remained between interest and age, cigarette dependence (measured by Strength of Urges), motivation to quit, past year quit attempt, weekly Internet access, handheld computer access, and recent searching for online smoking cessation information. 
Last, this pattern of results was unchanged during sensitivity analyses in which the associations between interest and the various characteristics were re-assessed separately when smokers were classified according to whether or not they had expressed interest in (1) an Internet site, or (2) an app (data not shown). Characteristics of current smokers. Factors associated with interest in Internet-based smoking cessation interventions. a P<.05.
null
null
[ "Introduction", "Study Design", "Participants", "Measures", "Analysis" ]
[ "The World Health Organization recently attributed 12% of all global deaths among adults aged 30 years and over to tobacco [1]. Almost all these deaths could be avoided if smokers quit before their mid-30s [2]. Yet, in many countries such as the United Kingdom, less than a quarter of smokers quit by this age despite the majority wanting and trying to stop [3,4]. The most effective interventions involve face-to-face behavioral support combined with medication such as nicotine replacement therapy or varenicline [5-7]. However, even in England, where there is a universally available behavioral support program, the vast majority of smokers do not use face-to-face support and almost half attempt to stop unaided [8]. The Internet could be an ideal medium for helping those people who do not wish, or are unable, to engage in face-to-face behavioral support [9,10].\nBehavioral support delivered via the Internet has the advantage that it is extremely cost-effective, and some patients prefer the increased convenience and confidentiality and reduced stigma [11,12], while others who are less able to access face-to-face support because of either mobility or geographical barriers may also find it useful. The benefits of Internet support over other low-cost and convenient alternatives to face-to-face support, such as written materials, include the capacity for interactivity and tailoring. Additionally, researchers and practitioners should be attracted by the capability to disseminate evidence-based support faithfully and flexibly update content to reflect new information as it emerges [13].\nThere is extensive evidence that the Internet can be an effective delivery mode for the behavioral support of a variety of health issues [11,14], and the United Kingdom has issued guidance to use particular programs in routine clinical care (eg, Beating the Blues for mild and moderate depression and FearFighter for phobia, panic, and anxiety) [15]. 
More importantly, there is also specific evidence from three separate systematic reviews that Internet-based smoking cessation interventions (ISCIs) can help smokers to quit compared with brief written materials or no intervention [16-18]. Current evidence is somewhat limited by the heterogeneity of effect across different interventions, insufficient reporting of content [19-21], and the paucity of data relating to long-term abstinence with biochemical verification of smoking status, yet research work is underway that may be able to address these limitations (eg, StopAdvisor [22,23]). In the context of this modest evidence of efficacy, together with the unique advantages of ISCIs, such as low cost, it is important to identify the prevalence of smokers who would be interested in using such support.\nAn accurate estimate of the likely reach of ISCIs is necessary for calculations of impact [24]. Previous estimates of potential reach have often been based on either national figures for Internet access or reported interest among nonrepresentative samples [25,26]. One study that did assess a representative sample of smokers estimated that 40% were interested in using an ISCI [27]. However, the study was conducted between 2006-07, and Internet access and usage patterns are relatively fast-moving phenomena [28]. For example, in Britain the percentage of households that have at least one method of using the Internet while at home increased from 58% in 2003 to 70% in 2009 and again to 77% in 2011 [28,29], while the use of wireless Internet hotspots doubled in just 12 months to 4.9 million users in 2011 [29].\nUnderstanding the characteristics of smokers interested in using ISCIs may help the development of new interventions, or modification of existing ones, in several regards including tailoring dimensions, choice of content and features, navigational architecture, and language style and complexity. 
Similarly, designers would be interested in these associated characteristics for the purpose of dissemination, particularly online advertising, which can often be targeted to reach, or at least focus on, only certain demographic groups. Previous studies have characterized individuals who search for cessation information [30] and who use Internet interventions [31-35], smokers on their use of the Internet [36], and smokers who were either invited to, eligible for, or enrolled in cessation programs according to their subsequent use of the interventions [37-39]. While it is clearly essential to understand these profiles, particularly what determines use among those who are already interested, in order to improve the appeal of these cessation interventions it is also important to establish how interested smokers compare with those who are not in nationally representative samples. To our knowledge, only one other study has characterized a representative sample of smokers on the basis of their interest in ISCIs [27]. In that study, younger and more cigarette dependent smokers who had better Internet access were more likely to express an interest. However, there was no assessment of other important smoking characteristics such as current motivation to stop and past quit attempts, nor was there an assessment of recent online searching behavior.\nThis study addressed the following research questions:\nHow many smokers in a nationally representative sample are interested in using ISCIs?\nWhat smoking, Internet use, and sociodemographic characteristics are associated with interest in the use of these interventions?", "The data were taken from the Smoking Toolkit Study [40], which is an ongoing series of cross-sectional household surveys in England designed to provide information about smoking prevalence and behavior. Each month a new sample of approximately 1800 adults aged 16 and over completes a face-to-face computer-assisted survey with a trained interviewer. 
By conducting a face-to-face rather than online survey, Internet access should not confound the results. Taylor Nelson Sofres-British Market Research Bureau collects the data as part of their monthly omnibus surveys on behalf of researchers at the Cancer Research UK’s Health Behaviour Research Centre, University College London, who conceived of the study and continue to manage it. The surveys use a form of random location sampling. England is split into 165,665 Output Areas, each comprising approximately 300 households. These Output Areas are stratified by A Classification Of Residential Neighbourhoods (ACORN) characteristics (an established geo-demographic analysis of the population provided by CACI International) and then randomly selected to be included in the lists of the interviewers. Interviewers travel to the selected areas and perform interviews with one participant per household until quotas based upon factors influencing the probability of being at home (working status, age, and gender) are fulfilled. Morning interviews are avoided to maximize participant availability. These survey methods have been previously described and have been shown to result in a baseline sample that is nationally representative in its sociodemographic composition and proportion of smokers [40]. Ethical approval was granted by the University College London ethics committee.", "We used data from respondents to the survey between February 2012 and April 2012 who reported smoking cigarettes (including hand-rolled) daily or occasionally at the time of the survey. 
A total of 5405 adults were surveyed; 1190 reported currently smoking cigarettes regularly of whom 1128 had complete data on all relevant variables.", "Current smokers were asked: “If there were an Internet site that was proven to help with stopping smoking, how likely is it that you would try it?” and also “If there were an application (“app”) for your handheld computer (like a “smartphone” [eg, an iPhone, Blackberry, or Android phone], palmtop, PDA, or tablet) that was proven to help with stopping smoking, how likely is it that you would try it?”. For the purposes of analysis, smokers’ responses on 4-point scales were dichotomized as either being “interested” in using an Internet-based smoking cessation intervention (ie, those responding “very likely” or “quite likely” to either question) or “not interested” (ie, those responding “very unlikely” or “quite unlikely” to both questions).\nAdditionally, current smokers were asked questions that assessed gender, age, and social grade (AB = higher and intermediate professional/managerial, C1 = supervisory, clerical, junior managerial/administrative/professional, C2 = skilled manual workers, D=semi-skilled and unskilled manual workers, E=on state benefit, unemployed, lowest grade workers), dependence (Heaviness of Smoking Index, HSI [41] and Strength of Urges [42]), motivation to quit (Motivation to Stop Scale [43]), previous attempts to quit smoking, access to the Internet and handheld computers, and recent types of information searched online, that is, “For which of the following activities did you use the Internet in the last 3 months for private use? 
Please indicate all that apply: (a) using services related to travel and accommodation, (b) reading or downloading online newsnewspapersnews magazines, (c) looking for a job or sending a job application, (d) seeking health related information or support other than stopping smoking (eg, injury, disease, nutrition, improving health, etc), (e) seeking stop-smoking related information or support, (f) looking for information about education, training or courses, (g) doing an online course (in any subject), (h) consulting the Internet with the purpose of learning, (i) finding information about goods or services” [44]. Responses to Items (a) to (c) and (f) to (i) were aggregated to calculate a variable identifying use of the Internet for information other than health related (Item d) or smoking cessation (Item e).", "Data were analyzed using PASW 18.0.0. We used weighted data only to estimate the prevalence of interest in ISCIs among all smokers regardless of their Internet access. Data were weighted using the rim (marginal) weighting technique to match English census data on age, sex, and socioeconomic group. To assess smoking, Internet use, and sociodemographic characteristics associated with interest in the use of ISCIs, we conducted a series of simple and multiple logistic regressions. Alpha was set at P<.05." ]
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Study Design", "Participants", "Measures", "Analysis", "Results", "Discussion" ]
[ "The World Health Organization recently attributed 12% of all global deaths among adults aged 30 years and over to tobacco [1]. Almost all these deaths could be avoided if smokers quit before their mid-30s [2]. Yet, in many countries such as the United Kingdom, less than a quarter of smokers quit by this age despite the majority wanting and trying to stop [3,4]. The most effective interventions involve face-to-face behavioral support combined with medication such as nicotine replacement therapy or varenicline [5-7]. However, even in England, where there is a universally available behavioral support program, the vast majority of smokers do not use face-to-face support and almost half attempt to stop unaided [8]. The Internet could be an ideal medium for helping those people who do not wish, or are unable, to engage in face-to-face behavioral support [9,10].\nBehavioral support delivered via the Internet has the advantage that it is extremely cost-effective, and some patients prefer the increased convenience and confidentiality and reduced stigma [11,12], while others who are less able to access face-to-face support because of either mobility or geographical barriers may also find it useful. The benefits of Internet support over other low-cost and convenient alternatives to face-to-face support, such as written materials, include the capacity for interactivity and tailoring. Additionally, researchers and practitioners should be attracted by the capability to disseminate evidence-based support faithfully and flexibly update content to reflect new information as it emerges [13].\nThere is extensive evidence that the Internet can be an effective delivery mode for the behavioral support of a variety of health issues [11,14], and the United Kingdom has issued guidance to use particular programs in routine clinical care (eg, Beating the Blues for mild and moderate depression and FearFighter for phobia, panic, and anxiety) [15]. 
More importantly, there is also specific evidence from three separate systematic reviews that Internet-based smoking cessation interventions (ISCIs) can help smokers to quit compared with brief written materials or no intervention [16-18]. Current evidence is somewhat limited by the heterogeneity of effect across different interventions, insufficient reporting of content [19-21], and the paucity of data relating to long-term abstinence with biochemical verification of smoking status, yet research work is underway that may be able to address these limitations (eg, StopAdvisor [22,23]). In the context of this modest evidence of efficacy, together with the unique advantages of ISCIs, such as low cost, it is important to identify the prevalence of smokers who would be interested in using such support.\nAn accurate estimate of the likely reach of ISCIs is necessary for calculations of impact [24]. Previous estimates of potential reach have often been based on either national figures for Internet access or reported interest among nonrepresentative samples [25,26]. One study that did assess a representative sample of smokers estimated that 40% were interested in using an ISCI [27]. However, the study was conducted between 2006-07, and Internet access and usage patterns are relatively fast-moving phenomena [28]. For example, in Britain the percentage of households that have at least one method of using the Internet while at home increased from 58% in 2003 to 70% in 2009 and again to 77% in 2011 [28,29], while the use of wireless Internet hotspots doubled in just 12 months to 4.9 million users in 2011 [29].\nUnderstanding the characteristics of smokers interested in using ISCIs may help the development of new interventions, or modification of existing ones, in several regards including tailoring dimensions, choice of content and features, navigational architecture, and language style and complexity. 
Similarly, designers would be interested in these associated characteristics for the purpose of dissemination, particularly online advertising, which can often be targeted to reach, or at least focus on, only certain demographic groups. Previous studies have characterized individuals who search for cessation information [30] and who use Internet interventions [31-35], smokers on their use of the Internet [36], and smokers who were either invited to, eligible for, or enrolled in cessation programs according to their subsequent use of the interventions [37-39]. While it is clearly essential to understand these profiles, particularly what determines use among those who are already interested, in order to improve the appeal of these cessation interventions it is also important to establish how interested smokers compare with those who are not in nationally representative samples. To our knowledge, only one other study has characterized a representative sample of smokers on the basis of their interest in ISCIs [27]. In that study, younger and more cigarette dependent smokers who had better Internet access were more likely to express an interest. However, there was no assessment of other important smoking characteristics such as current motivation to stop and past quit attempts, nor was there an assessment of recent online searching behavior.\nThis study addressed the following research questions:\nHow many smokers in a nationally representative sample are interested in using ISCIs?\nWhat smoking, Internet use, and sociodemographic characteristics are associated with interest in the use of these interventions?", " Study Design The data were taken from the Smoking Toolkit Study [40], which is an ongoing series of cross-sectional household surveys in England designed to provide information about smoking prevalence and behavior. Each month a new sample of approximately 1800 adults aged 16 and over completes a face-to-face computer-assisted survey with a trained interviewer. 
By conducting a face-to-face rather than online survey, Internet access should not confound the results. Taylor Nelson Sofres-British Market Research Bureau collects the data as part of their monthly omnibus surveys on behalf of researchers at the Cancer Research UK’s Health Behaviour Research Centre, University College London, who conceived of the study and continue to manage it. The surveys use a form of random location sampling. England is split into 165,665 Output Areas, each comprising approximately 300 households. These Output Areas are stratified by A Classification Of Residential Neighbourhoods (ACORN) characteristics (an established geo-demographic analysis of the population provided by CACI International) and then randomly selected to be included in the lists of the interviewers. Interviewers travel to the selected areas and perform interviews with one participant per household until quotas based upon factors influencing the probability of being at home (working status, age, and gender) are fulfilled. Morning interviews are avoided to maximize participant availability. These survey methods have been previously described and have been shown to result in a baseline sample that is nationally representative in its sociodemographic composition and proportion of smokers [40]. Ethical approval was granted by the University College London ethics committee.\n Participants We used data from respondents to the survey between February 2012 and April 2012 who reported smoking cigarettes (including hand-rolled) daily or occasionally at the time of the survey. A total of 5405 adults were surveyed; 1190 reported currently smoking cigarettes regularly of whom 1128 had complete data on all relevant variables.\n Measures Current smokers were asked: “If there were an Internet site that was proven to help with stopping smoking, how likely is it that you would try it?” and also “If there were an application (“app”) for your handheld computer (like a “smartphone” [eg, an iPhone, Blackberry, or Android phone], palmtop, PDA, or tablet) that was proven to help with stopping smoking, how likely is it that you would try it?”. For the purposes of analysis, smokers’ responses on 4-point scales were dichotomized as either being “interested” in using an Internet-based smoking cessation intervention (ie, those responding “very likely” or “quite likely” to either question) or “not interested” (ie, those responding “very unlikely” or “quite unlikely” to both questions).\nAdditionally, current smokers were asked questions that assessed gender, age, and social grade (AB = higher and intermediate professional/managerial, C1 = supervisory, clerical, junior managerial/administrative/professional, C2 = skilled manual workers, D=semi-skilled and unskilled manual workers, E=on state benefit, unemployed, lowest grade workers), dependence (Heaviness of Smoking Index, HSI [41] and Strength of Urges [42]), motivation to quit (Motivation to Stop Scale [43]), previous attempts to quit smoking, access to the Internet and handheld computers, and recent types of information searched online, that is, “For which of the following activities did you use the Internet in the last 3 months for private use?
Please indicate all that apply: (a) using services related to travel and accommodation, (b) reading or downloading online news/newspapers/news magazines, (c) looking for a job or sending a job application, (d) seeking health related information or support other than stopping smoking (eg, injury, disease, nutrition, improving health, etc), (e) seeking stop-smoking related information or support, (f) looking for information about education, training or courses, (g) doing an online course (in any subject), (h) consulting the Internet with the purpose of learning, (i) finding information about goods or services” [44]. Responses to Items (a) to (c) and (f) to (i) were aggregated to calculate a variable identifying use of the Internet for information other than health related (Item d) or smoking cessation (Item e).\n Analysis Data were analyzed using PASW 18.0.0. We used weighted data only to estimate the prevalence of interest in ISCIs among all smokers regardless of their Internet access. Data were weighted using the rim (marginal) weighting technique to match English census data on age, sex, and socioeconomic group. To assess smoking, Internet use, and sociodemographic characteristics associated with interest in the use of ISCIs, we conducted a series of simple and multiple logistic regressions. Alpha was set at P<.05.", "The data were taken from the Smoking Toolkit Study [40], which is an ongoing series of cross-sectional household surveys in England designed to provide information about smoking prevalence and behavior. Each month a new sample of approximately 1800 adults aged 16 and over completes a face-to-face computer-assisted survey with a trained interviewer. By conducting a face-to-face rather than online survey, Internet access should not confound the results. Taylor Nelson Sofres-British Market Research Bureau collects the data as part of their monthly omnibus surveys on behalf of researchers at the Cancer Research UK’s Health Behaviour Research Centre, University College London, who conceived of the study and continue to manage it. The surveys use a form of random location sampling. England is split into 165,665 Output Areas, each comprising approximately 300 households.
These Output Areas are stratified by A Classification Of Residential Neighbourhoods (ACORN) characteristics (an established geo-demographic analysis of the population provided by CACI International) and then randomly selected to be included in the lists of the interviewers. Interviewers travel to the selected areas and perform interviews with one participant per household until quotas based upon factors influencing the probability of being at home (working status, age, and gender) are fulfilled. Morning interviews are avoided to maximize participant availability. These survey methods have been previously described and have been shown to result in a baseline sample that is nationally representative in its sociodemographic composition and proportion of smokers [40]. Ethical approval was granted by the University College London ethics committee.", "We used data from respondents to the survey between February 2012 and April 2012 who reported smoking cigarettes (including hand-rolled) daily or occasionally at the time of the survey. A total of 5405 adults were surveyed; 1190 reported currently smoking cigarettes regularly of whom 1128 had complete data on all relevant variables.", "Current smokers were asked: “If there were an Internet site that was proven to help with stopping smoking, how likely is it that you would try it?” and also “If there were an application (“app”) for your handheld computer (like a “smartphone” [eg, an iPhone, Blackberry, or Android phone], palmtop, PDA, or tablet) that was proven to help with stopping smoking, how likely is it that you would try it?”. 
For the purposes of analysis, smokers’ responses on 4-point scales were dichotomized as either being “interested” in using an Internet-based smoking cessation intervention (ie, those responding “very likely” or “quite likely” to either question) or “not interested” (ie, those responding “very unlikely” or “quite unlikely” to both questions).\nAdditionally, current smokers were asked questions that assessed gender, age, and social grade (AB = higher and intermediate professional/managerial, C1 = supervisory, clerical, junior managerial/administrative/professional, C2 = skilled manual workers, D=semi-skilled and unskilled manual workers, E=on state benefit, unemployed, lowest grade workers), dependence (Heaviness of Smoking Index, HSI [41] and Strength of Urges [42]), motivation to quit (Motivation to Stop Scale [43]), previous attempts to quit smoking, access to the Internet and handheld computers, and recent types of information searched online, that is, “For which of the following activities did you use the Internet in the last 3 months for private use? Please indicate all that apply: (a) using services related to travel and accommodation, (b) reading or downloading online news/newspapers/news magazines, (c) looking for a job or sending a job application, (d) seeking health related information or support other than stopping smoking (eg, injury, disease, nutrition, improving health, etc), (e) seeking stop-smoking related information or support, (f) looking for information about education, training or courses, (g) doing an online course (in any subject), (h) consulting the Internet with the purpose of learning, (i) finding information about goods or services” [44]. Responses to Items (a) to (c) and (f) to (i) were aggregated to calculate a variable identifying use of the Internet for information other than health related (Item d) or smoking cessation (Item e).", "Data were analyzed using PASW 18.0.0. 
We used weighted data only to estimate the prevalence of interest in ISCIs among all smokers regardless of their Internet access. Data were weighted using the rim (marginal) weighting technique to match English census data on age, sex, and socioeconomic group. To assess smoking, Internet use, and sociodemographic characteristics associated with interest in the use of ISCIs, we conducted a series of simple and multiple logistic regressions. Alpha was set at P<.05.", "Approximately 70% of current smokers had accessed the Internet in the past week while a significant majority also had access to a handheld computer (see Table 1). A minority of users had searched for either smoking or health information support, and more than half had searched for at least one of a variety of “other” types of online information. The sociodemographic and smoking characteristics were typical of a representative sample of smokers [4,40], and by way of comparison, the characteristics of the 4451 current smokers included in the Smoking Toolkit Study for the 12 months before the current study (ie, January 2011 to January 2012) are presented in Table 1.\nA total of 42.6% (95% CI 39.6%-45.7%) of current smokers were interested in using Internet sites for smoking cessation, 23.9% (95% CI 21.3%-26.5%) were interested in apps, and 46.6% (95% CI 43.5%-49.6%) were interested in ISCIs (either sites or apps). In contrast, only 0.3% (95% CI 0%-0.7%) of smokers reported having used such an intervention to support their most recent quit attempt within the past year.\n\nTable 2 shows the smoking, Internet use, and sociodemographic characteristics of smokers by their interest in the use of ISCIs. 
There was evidence that interested smokers were younger, more cigarette dependent (measured by both HSI and Strength of Urges), more motivated to quit within 3 months, more likely to have made a quit attempt in the past year, accessed the Internet at least weekly, had handheld computer access, had used the Internet to search for online smoking cessation information or support in the past 3 months, and had used it to search for a variety of “other” online information. After adjusting for all other background characteristics, associations remained between interest and age, cigarette dependence (measured by Strength of Urges), motivation to quit, past year quit attempt, weekly Internet access, handheld computer access, and recent searching for online smoking cessation information. Last, this pattern of results was unchanged during sensitivity analyses in which the associations between interest and the various characteristics were re-assessed separately when smokers were classified according to whether or not they had expressed interest in (1) an Internet site, or (2) an app (data not shown).\nTable 1. Characteristics of current smokers.\nTable 2. Factors associated with interest in Internet-based smoking cessation interventions.\na P<.05.", "Almost half of all current smokers were interested in using an Internet-based smoking cessation intervention; however, less than 1% had used one to support their most recent quit attempt in the past year. After adjustment for all background characteristics, smokers who were younger, more dependent, highly motivated to quit, had attempted to quit recently, accessed the Internet regularly, had handheld computer access, and had recently searched for online smoking cessation information or support were all more likely to be interested in using online stop smoking support.\nThe diffusion of the Internet since its inception over 40 years ago has been phenomenal—recently, the Internet reached a billion users worldwide [28]. 
In Britain, Internet access has continued to increase with 77% of households connected in 2011 as compared to 58% in 2003 [28,29]. Although smokers tend to have less access than nonsmokers [27], there remains a majority of smokers that would be possible to reach via the Internet—in this study over 70% had used the Internet in the past week—and this number is only likely to increase [28]. As a consequence, ISCIs are often cited as offering a valuable opportunity to deliver low-cost behavioral support to large numbers of smokers [16,21,25,45]. Importantly, this study now adds an up-to-date estimate of the proportion of smokers interested in using these interventions, which provides some indication of their maximum potential reach and should allow more accurate calculations of the likely impact of particular interventions [24]. For example, from the RE-AIM perspective, a public health impact score can be represented as a multiplicative combination of reach, efficacy, adoption, implementation, and maintenance. The accuracy of this calculation for particular ISCIs may be improved by the provision in the current paper of an up-to-date estimate of the denominator necessary to calculate the reach, which is defined as the proportion of the possible target population that participate in a particular intervention.\nThe current estimate of 47% of smokers who are interested in ISCIs is higher than the 40% previously estimated from a representative sample of smokers [27]. However, that study was conducted in 2006-07, and it is reasonable to assume that interest may have increased as a consequence of improving Internet access [28]. Additionally, interest in ISCIs was operationalized as an expression of interest in either an Internet site or app—the number reporting interest only in Internet sites was 43%. 
Last, the first study was conducted in Canada, and it is likely there are cultural differences in interest as compared to England.\nThe finding that interested smokers were likely to be younger and use the Internet more regularly is consistent with previous research [27]. This association with age is particularly important as treatment-seeking smokers tend to be older than those who do not seek treatment [46], and therefore online interventions may be particularly suitable for targeting younger smokers who may otherwise attempt to quit unaided. In contrast, the association between interest and dependence, motivation, and past year quit attempts is characteristic of smokers who are more likely to seek treatment [46-48]. It is important that any future assessments of the real-world effectiveness of ISCIs take these associations into account [48].\nThe association between interest and recent searching for online smoking cessation information or support is intuitive. However, it is also indicative of the potential demand for these interventions in that smokers do not appear to be deterred nor satisfied by what they are currently finding. The latter point is also suggested by the contrast between the 3% of smokers who recently searched for online smoking cessation information or support as compared with the 0.3% who used online support during a quit attempt in the past year.\nThe “digital divide” that characterizes Internet use in the wider population is also true of smokers: smokers who use the Internet tend to be more affluent and educated than those who do not [36]. 
Therefore, the interest of smokers regardless of social grade in the current study is an unexpected finding and suggests the Internet may yet offer a means of equitable treatment delivery, which is particularly important as smokers from more deprived socioeconomic groups typically want, and try, to stop as much as other smokers but find it more difficult [49].\nA potential concern with ISCIs is that they might prevent smokers from using face-to-face support, which is currently the most effective delivery mode for behavioral support [5-7]. While relatively few smokers had recently used face-to-face support in the current study, the lack of an association between use of this support and interest tentatively suggests the concern is unwarranted. Instead, it is likely that Internet support would appeal to a different subset of smokers who place more value on convenience and confidentiality, or find other types of support difficult to access [11]. Additionally, in other health areas where Internet support has become routine, the two have emerged as complementary; for example, in several areas of mental health, online cognitive behavioral therapy is frequently recommended while a patient waits for an appointment or to help with the more automated aspects of the support [11].\nTaken together, the pattern of associations provided in the current study presents a comprehensive characterization of smokers interested in using ISCIs. This understanding is important as access cannot primarily account for the difference between interest in and use of these interventions; for example, in the current study the majority of smokers had used the Internet in the past week. 
It is hoped that these associations will inform the development or modification of interventions with regards to targeted dissemination, tailoring dimensions, choice of content and features, navigational architecture, and language style and complexity, and in turn help to actualize the potential of the Internet for delivering smoking cessation interventions [50,51].\nAn indirect point of interest is the finding that in simple logistic regressions, interest was associated with both measures of dependence: Strength of Urges and HSI. However, the association with only Strength of Urges in multiple logistic regression is consistent with previous research showing that a single-rating measure of urges may be a more useful measure of dependence than those based on consumption [42].\nOne possible limitation is that an expression of interest during a survey is clearly quite different from actually visiting a program to support a quit attempt and subsequently using the program regularly. However, the purpose of the current study was primarily to provide an estimate of the maximum possible reach if the support was more widely available and promoted. One of the important findings is the huge potential in terms of the difference between the proportion who are interested and the percentage currently using. Additionally, establishing the characteristics associated with interest is arguably critical to realizing this potential and improving uptake by facilitating effective targeting, both in terms of advertising and intervention content, and complements research into understanding the determinants of use among the small proportion currently using, eg, [37-39]. Future research should aim to derive the specific relationship between interest and subsequent uptake of ISCIs following targeted design and promotion. 
Another limitation is that the questions used to derive interest in ISCIs were framed positively (eg, “If there were an Internet site or ‘app’ that was proven to help with stopping smoking, how likely is it that you would try it?”). However, even if interest is overestimated, it is improbable that it would account for the extent of the discrepancy between interest in and use of ISCIs, which is highlighted by the current study. Another potential limitation may have been to operationalize interest in ISCIs as an expression of interest in either an Internet site or an app. While both require the Internet and a computer for delivery, there are also clear differences, including that only sites require regular Internet access and only apps require ongoing access to a handheld computer. Indeed, fewer smokers were interested in apps as compared to Internet sites; however, similar proportions of smokers who had regular Internet access and those with ongoing access to handheld computers were interested in sites and apps respectively. More importantly, in sensitivity analyses in which smokers were characterized separately according to interest in either sites or apps, the pattern of results was unchanged. Together these results suggest that the current operationalization of ISCIs for the purposes of estimating and characterizing interest is suitable; however, experimental research is needed to establish whether there are important differences in efficacy according to delivery mode of either Internet site or app. A final limitation is potential error or bias in the measurement of certain smoking characteristics. For example, smokers often forget failed quit attempts, particularly if they only lasted a short time or occurred long ago [52]. 
However, it is unlikely that forgetting would differ as a function of interest in ISCIs, and quit attempts—as with all other variables of interest in the present study—were assessed with a validated measure.\nIn conclusion, almost half the smokers in England are interested in using online smoking cessation interventions, and yet only a small proportion of smokers currently use these interventions to support quit attempts. Clearly, ISCIs represent an excellent opportunity to deliver low-cost behavioral support to a large number of smokers, which is currently not being realized. Moreover, as interest is expressed regardless of social grade, the Internet may also offer a means of delivering this support equitably. Last, designers of Internet-based interventions should be aware that potential users are likely to be younger, more cigarette dependent, highly motivated, have attempted to quit recently, have regular Internet and handheld computer access, and have recently searched for smoking cessation information and support." ]
[ null, "methods", null, null, null, null, "results", "discussion" ]
[ "smoking cessation intervention", "Internet-based", "website", "prevalence", "characteristic" ]
Introduction: The World Health Organization recently attributed 12% of all global deaths among adults aged 30 years and over to tobacco [1]. Almost all these deaths could be avoided if smokers quit before their mid-30s [2]. Yet, in many countries such as the United Kingdom, less than a quarter of smokers quit by this age despite the majority wanting and trying to stop [3,4]. The most effective interventions involve face-to-face behavioral support combined with medication such as nicotine replacement therapy or varenicline [5-7]. However, even in England, where there is a universally available behavioral support program, the vast majority of smokers do not use face-to-face support and almost half attempt to stop unaided [8]. The Internet could be an ideal medium for helping those people who do not wish, or are unable, to engage in face-to-face behavioral support [9,10]. Behavioral support delivered via the Internet has the advantage that it is extremely cost-effective, and some patients prefer the increased convenience and confidentiality and reduced stigma [11,12], while others who are less able to access face-to-face support because of either mobility or geographical barriers may also find it useful. The benefits of Internet support over other low-cost and convenient alternatives to face-to-face support, such as written materials, include the capacity for interactivity and tailoring. Additionally, researchers and practitioners should be attracted by the capability to disseminate evidence-based support faithfully and flexibly update content to reflect new information as it emerges [13]. There is extensive evidence that the Internet can be an effective delivery mode for the behavioral support of a variety of health issues [11,14], and the United Kingdom has issued guidance to use particular programs in routine clinical care (eg, Beating the Blues for mild and moderate depression and FearFighter for phobia, panic, and anxiety) [15]. 
More importantly, there is also specific evidence from three separate systematic reviews that Internet-based smoking cessation interventions (ISCIs) can help smokers to quit compared with brief written materials or no intervention [16-18]. Current evidence is somewhat limited by the heterogeneity of effect across different interventions, insufficient reporting of content [19-21], and the paucity of data relating to long-term abstinence with biochemical verification of smoking status, yet research work is underway that may be able to address these limitations (eg, StopAdvisor [22,23]). In the context of this modest evidence of efficacy, together with the unique advantages of ISCIs, such as low cost, it is important to identify the prevalence of smokers who would be interested in using such support. An accurate estimate of the likely reach of ISCIs is necessary for calculations of impact [24]. Previous estimates of potential reach have often been based on either national figures for Internet access or reported interest among nonrepresentative samples [25,26]. One study that did assess a representative sample of smokers estimated that 40% were interested in using an ISCI [27]. However, the study was conducted between 2006-07, and Internet access and usage patterns are relatively fast-moving phenomena [28]. For example, in Britain the percentage of households that have at least one method of using the Internet while at home increased from 58% in 2003 to 70% in 2009 and again to 77% in 2011 [28,29], while the use of wireless Internet hotspots doubled in just 12 months to 4.9 million users in 2011 [29]. Understanding the characteristics of smokers interested in using ISCIs may help the development of new interventions, or modification of existing ones, in several regards including tailoring dimensions, choice of content and features, navigational architecture, and language style and complexity. 
Similarly, designers would be interested in these associated characteristics for the purpose of dissemination, particularly online advertising, which can often be targeted to reach, or at least focus on, only certain demographic groups. Previous studies have characterized individuals who search for cessation information [30] and who use Internet interventions [31-35], smokers on their use of the Internet [36], and smokers who were either invited to, eligible for, or enrolled in cessation programs according to their subsequent use of the interventions [37-39]. While it is clearly essential to understand these profiles, particularly what determines use among those who are already interested, in order to improve the appeal of these cessation interventions it is also important to establish how interested smokers compare with those who are not in nationally representative samples. To our knowledge, only one other study has characterized a representative sample of smokers on the basis of their interest in ISCIs [27]. In that study, younger and more cigarette dependent smokers who had better Internet access were more likely to express an interest. However, there was no assessment of other important smoking characteristics such as current motivation to stop and past quit attempts, nor was there an assessment of recent online searching behavior. This study addressed the following research questions: How many smokers in a nationally representative sample are interested in using ISCIs? What smoking, Internet use, and sociodemographic characteristics are associated with interest in the use of these interventions? Methods: Study Design The data were taken from the Smoking Toolkit Study [40], which is an ongoing series of cross-sectional household surveys in England designed to provide information about smoking prevalence and behavior. 
Each month a new sample of approximately 1800 adults aged 16 and over completes a face-to-face computer-assisted survey with a trained interviewer. By conducting a face-to-face rather than online survey, Internet access should not confound the results. Taylor Nelson Sofres-British Market Research Bureau collects the data as part of their monthly omnibus surveys on behalf of researchers at the Cancer Research UK’s Health Behaviour Research Centre, University College London, who conceived of the study and continue to manage it. The surveys use a form of random location sampling. England is split into 165,665 Output Areas, each comprising approximately 300 households. These Output Areas are stratified by A Classification Of Residential Neighbourhoods (ACORN) characteristics (an established geo-demographic analysis of the population provided by CACI International) and then randomly selected to be included in the lists of the interviewers. Interviewers travel to the selected areas and perform interviews with one participant per household until quotas based upon factors influencing the probability of being at home (working status, age, and gender) are fulfilled. Morning interviews are avoided to maximize participant availability. These survey methods have been previously described and have been shown to result in a baseline sample that is nationally representative in its sociodemographic composition and proportion of smokers [40]. Ethical approval was granted by the University College London ethics committee. 
Participants: We used data from respondents to the survey between February 2012 and April 2012 who reported smoking cigarettes (including hand-rolled) daily or occasionally at the time of the survey. A total of 5405 adults were surveyed; 1190 reported currently smoking cigarettes regularly, of whom 1128 had complete data on all relevant variables. 
Measures Current smokers were asked: “If there were an Internet site that was proven to help with stopping smoking, how likely is it that you would try it?” and also “If there were an application (“app”) for your handheld computer (like a “smartphone” [eg, an iPhone, Blackberry, or Android phone], palmtop, PDA, or tablet) that was proven to help with stopping smoking, how likely is it that you would try it?”. For the purposes of analysis, smokers’ responses on 4-point scales were dichotomized as either being “interested” in using an Internet-based smoking cessation intervention (ie, those responding “very likely” or “quite likely” to either question) or “not interested” (ie, those responding “very unlikely” or “quite unlikely” to both questions). Additionally, current smokers were asked questions that assessed gender, age, and social grade (AB = higher and intermediate professional/managerial, C1 = supervisory, clerical, junior managerial/administrative/professional, C2 = skilled manual workers, D=semi-skilled and unskilled manual workers, E=on state benefit, unemployed, lowest grade workers), dependence (Heaviness of Smoking Index, HSI [41] and Strength of Urges [42]), motivation to quit (Motivation to Stop Scale [43]), previous attempts to quit smoking, access to the Internet and handheld computers, and recent types of information searched online, that is, “For which of the following activities did you use the Internet in the last 3 months for private use? 
Please indicate all that apply: (a) using services related to travel and accommodation, (b) reading or downloading online news/newspapers/news magazines, (c) looking for a job or sending a job application, (d) seeking health related information or support other than stopping smoking (eg, injury, disease, nutrition, improving health, etc), (e) seeking stop-smoking related information or support, (f) looking for information about education, training or courses, (g) doing an online course (in any subject), (h) consulting the Internet with the purpose of learning, (i) finding information about goods or services” [44]. Responses to Items (a) to (c) and (f) to (i) were aggregated to calculate a variable identifying use of the Internet for information other than health related (Item d) or smoking cessation (Item e). 
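The dichotomization and item aggregation described above can be sketched as follows; the function names and response strings are hypothetical, as the survey's internal variable coding is not reported:

```python
# Hypothetical sketch of the coding rules described in the text.

LIKELY = {"very likely", "quite likely"}

def interested_in_isci(site_response, app_response):
    """'Interested' = responding 'very likely' or 'quite likely' to EITHER
    the Internet-site question or the app question."""
    return site_response in LIKELY or app_response in LIKELY

def other_online_info(items_endorsed):
    """Aggregate items (a)-(c) and (f)-(i): Internet use for information
    other than health (item d) or smoking cessation (item e)."""
    return bool(items_endorsed & {"a", "b", "c", "f", "g", "h", "i"})

print(interested_in_isci("quite likely", "very unlikely"))  # True
print(other_online_info({"d", "e"}))                        # False
```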
Analysis: Data were analyzed using PASW 18.0.0. We used weighted data only to estimate the prevalence of interest in ISCIs among all smokers regardless of their Internet access. Data were weighted using the rim (marginal) weighting technique to match English census data on age, sex, and socioeconomic group. 
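Rim (marginal) weighting, also known as raking or iterative proportional fitting, scales each respondent's weight in turn on each margin until the weighted sample matches the target margins. A minimal sketch follows; the study used the survey firm's implementation, and the target shares below are made up rather than English census figures:

```python
# Minimal raking (rim weighting) sketch: iteratively scale weights so
# the weighted margins match target (census) margins.

def rake(records, margins, targets, iters=50):
    """records: list of dicts; margins: keys to balance on;
    targets: {key: {category: target_share}}. Returns one weight per record."""
    w = [1.0] * len(records)
    for _ in range(iters):
        for key in margins:
            # current weighted total of each category on this margin
            totals = {}
            for wi, r in zip(w, records):
                totals[r[key]] = totals.get(r[key], 0.0) + wi
            grand = sum(totals.values())
            for i, r in enumerate(records):
                current_share = totals[r[key]] / grand
                w[i] *= targets[key][r[key]] / current_share
    return w

# Toy example: two men and one woman, reweighted to a 50/50 split.
people = [{"sex": "m"}, {"sex": "m"}, {"sex": "f"}]
weights = rake(people, ["sex"], {"sex": {"m": 0.5, "f": 0.5}})
```

With a single margin this converges in one pass; with several margins (age, sex, and socioeconomic group, as in the study) the loop cycles until the margins jointly stabilize.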
To assess smoking, Internet use, and sociodemographic characteristics associated with interest in the use of ISCIs, we conducted a series of simple and multiple logistic regressions. Alpha was set at P<.05. 
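In a simple logistic regression with a single binary predictor, the exponentiated coefficient equals the familiar 2×2 odds ratio, which is a convenient way to sanity-check unadjusted analyses like those described in the Analysis. A minimal sketch with illustrative counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 odds ratio with a Wald 95% CI on the log scale.
    a, b: interested / not interested among exposed;
    c, d: interested / not interested among unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only.
or_, lo, hi = odds_ratio_ci(120, 80, 90, 110)
```

The multiple (adjusted) regressions reported in the abstract require a full model fit (eg, statsmodels' Logit in Python or SPSS/PASW as used here), but the unadjusted odds ratios reduce to this closed form.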
Results: Approximately 70% of current smokers had accessed the Internet in the past week while a significant majority also had access to a handheld computer (see Table 1). A minority of users had searched for either smoking or health information support, and more than half had searched for at least one of a variety of “other” types of online information. The sociodemographic and smoking characteristics were typical of a representative sample of smokers [4,40], and by way of comparison, the characteristics of the 4451 current smokers included in the Smoking Toolkit Study for the 12 months before the current study (ie, January 2011 to January 2012) are presented in Table 1. A total of 42.6% (95% CI 39.6%-45.7%) of current smokers were interested in using Internet sites for smoking cessation, 23.9% (95% CI 21.3%-26.5%) were interested in apps, and 46.6% (95% CI 43.5%-49.6%) were interested in ISCIs (either sites or apps). In contrast, only 0.3% (95% CI 0%-0.7%) of smokers reported having used such an intervention to support their most recent quit attempt within the past year. Table 2 shows the smoking, Internet use, and sociodemographic characteristics of smokers by their interest in the use of ISCIs. There was evidence that interested smokers were younger, more cigarette dependent (measured by both HSI and Strength of Urges), more motivated to quit within 3 months, more likely to have made a quit attempt in the past year, accessed the Internet at least weekly, had handheld computer access, had used the Internet to search for online smoking cessation information or support in past 3 months, and had used it to search for a variety of “other” online information. 
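The confidence intervals quoted for the prevalence estimates are consistent with a standard normal-approximation interval for a proportion. A sketch using the unweighted analytic sample size; the published figures reflect survey weighting, so this comes close but will not match exactly:

```python
import math

def prop_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# 46.6% interested among the 1128 smokers in the analytic sample.
lo, hi = prop_ci(0.466, 1128)
print(f"{lo:.3f}-{hi:.3f}")
```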
After adjusting for all other background characteristics, associations remained between interest and age, cigarette dependence (measured by Strength of Urges), motivation to quit, past year quit attempt, weekly Internet access, handheld computer access, and recent searching for online smoking cessation information. Last, this pattern of results was unchanged in sensitivity analyses in which the associations between interest and the various characteristics were re-assessed separately when smokers were classified according to whether or not they had expressed interest in (1) an Internet site, or (2) an app (data not shown). Table 1: Characteristics of current smokers. Table 2: Factors associated with interest in Internet-based smoking cessation interventions (a: P<.05). Discussion: Almost half of all current smokers were interested in using an Internet-based smoking cessation intervention; however, less than 1% had used one to support their most recent quit attempt in the past year. After adjustment for all background characteristics, smokers who were younger, more dependent, highly motivated to quit, had attempted to quit recently, accessed the Internet regularly, had handheld computer access, and had recently searched for online smoking cessation information or support were all more likely to be interested in using online stop-smoking support. The diffusion of the Internet since its inception over 40 years ago has been phenomenal—recently, the Internet reached a billion users worldwide [28]. In Britain, Internet access has continued to increase, with 77% of households connected in 2011 as compared to 58% in 2003 [28,29]. Although smokers tend to have less access than nonsmokers [27], a majority of smokers remain reachable via the Internet—in this study over 70% had used the Internet in the past week—and this number is only likely to increase [28]. 
As a consequence, ISCIs are often cited as offering a valuable opportunity to deliver low-cost behavioral support to large numbers of smokers [16,21,25,45]. Importantly, this study now adds an up-to-date estimate of the proportion of smokers interested in using these interventions, which provides some indication of their maximum potential reach and should allow more accurate calculations of the likely impact of particular interventions [24]. For example, from the RE-AIM perspective, a public health impact score can be represented as a multiplicative combination of reach, efficacy, adoption, implementation, and maintenance. The accuracy of this calculation for particular ISCIs may be improved by the provision in the current paper of an up-to-date estimate of the denominator necessary to calculate the reach, which is defined as the proportion of the possible target population that participate in a particular intervention. The current estimate of 47% of smokers who are interested in ISCIs is higher than the 40% previously estimated from a representative sample of smokers [27]. However, that study was conducted in 2006-07, and it is reasonable to assume that interest may have increased as a consequence of improving Internet access [28]. Additionally, interest in ISCIs was operationalized as an expression of interest in either an Internet site or app—the number reporting interest only in Internet sites was 43%. Last, the first study was conducted in Canada, and it is likely there are cultural differences in interest as compared to England. The finding that interested smokers were likely to be younger and use the Internet more regularly is consistent with previous research [27]. This association with age is particularly important as treatment-seeking smokers tend to be older than those who do not seek treatment [46], and therefore online interventions may be particularly suitable for targeting younger smokers who may otherwise attempt to quit unaided. 
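The RE-AIM impact score described above is, as the text notes, a multiplicative combination of the five components. A sketch with purely illustrative component values (none are estimates from the study):

```python
# Sketch of the multiplicative RE-AIM impact score: all components are
# on 0-1 scales and the inputs below are illustrative only.

def reaim_impact(reach, efficacy, adoption, implementation, maintenance):
    """Public health impact as the product of the five RE-AIM components."""
    return reach * efficacy * adoption * implementation * maintenance

# Eg, a maximum potential reach of 47% combined with modest values for
# the remaining components yields a small overall impact score:
impact = reaim_impact(0.47, 0.10, 0.50, 0.80, 0.60)
```

Because the combination is multiplicative, a more accurate reach denominator (such as the 47% interest estimate reported here) directly scales the whole impact calculation.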
In contrast, the association between interest and dependence, motivation, and past year quit attempts is characteristic of smokers who are more likely to seek treatment [46-48]. It is important that any future assessments of the real-world effectiveness of ISCIs take these associations into account [48]. The association between interest and recent searching for online smoking cessation information or support is intuitive. However, it is also indicative of the potential demand for these interventions, in that smokers appear to be neither deterred nor satisfied by what they currently find. The latter point is also suggested by the contrast between the 3% of smokers who recently searched for online smoking cessation information or support and the 0.3% who used online support during a quit attempt in the past year. The “digital divide” that characterizes Internet use in the wider population is also true of smokers: smokers who use the Internet tend to be more affluent and educated than those who do not [36]. Therefore, the interest of smokers regardless of social grade in the current study is an unexpected finding and suggests the Internet may yet offer a means of equitable treatment delivery. This is particularly important as smokers from more deprived socioeconomic groups typically want, and try, to stop as much as other smokers but find it more difficult [49]. A potential concern with ISCIs is that they might prevent smokers from using face-to-face support, which is currently the most effective delivery mode for behavioral support [5-7]. While relatively few smokers had recently used face-to-face support in the current study, the lack of an association between use of this support and interest tentatively suggests the concern is unwarranted. Instead, it is likely that Internet support would appeal to a different subset of smokers who place more value on convenience and confidentiality, or find other types of support difficult to access [11]. 
Additionally, in other health areas where Internet support has become routine, the two have emerged as complementary; for example, in several areas of mental health, online cognitive behavioral therapy is frequently recommended while a patient waits for an appointment, or to help with the more automated aspects of the support [11]. Taken together, the pattern of associations provided in the current study presents a comprehensive characterization of smokers interested in using ISCIs. This understanding is important as access cannot primarily account for the difference between interest in and use of these interventions; for example, in the current study the majority of smokers had used the Internet in the past week. It is hoped that these associations will inform the development or modification of interventions with regards to targeted dissemination, tailoring dimensions, choice of content and features, navigational architecture, and language style and complexity, and in turn help to actualize the potential of the Internet for delivering smoking cessation interventions [50,51]. An indirect point of interest is the finding that in simple logistic regressions, interest was associated with both measures of dependence: Strength of Urges and HSI. However, the association with only Strength of Urges in multiple logistic regression is consistent with previous research showing that a single-rating measure of urges may be a more useful measure of dependence than those based on consumption [42]. One possible limitation is that an expression of interest during a survey is clearly quite different from actually visiting a program to support a quit attempt and subsequently using the program regularly. However, the purpose of the current study was primarily to provide an estimate of the maximum possible reach if the support was more widely available and promoted. 
One important finding is the huge potential implied by the gap between the proportion of smokers who are interested in these interventions and the percentage currently using them. Additionally, establishing the characteristics associated with interest is arguably critical to realizing this potential and improving uptake by facilitating effective targeting, both in terms of advertising and intervention content, and complements research into understanding the determinants of use among the small proportion currently using, eg, [37-39]. Future research should aim to derive the specific relationship between interest and subsequent uptake of ISCIs following targeted design and promotion. Another limitation is that the questions used to derive interest in ISCIs were framed positively (eg, “If there were an Internet site or ‘app’ that was proven to help with stopping smoking, how likely is it that you would try it?”). However, even if interest is overestimated, it is improbable that this would account for the extent of the discrepancy between interest in and use of ISCIs, which is highlighted by the current study. Another potential limitation may have been to operationalize interest in ISCIs as an expression of interest in either an Internet site or an app. While both require the Internet and a computer for delivery, there are also clear differences, including that only sites require regular Internet access and only apps require ongoing access to a handheld computer. Indeed, fewer smokers were interested in apps as compared to Internet sites; however, similar proportions of smokers who had regular Internet access and those with ongoing access to handheld computers were interested in sites and apps respectively. More importantly, in sensitivity analyses in which smokers were characterized separately according to interest in either sites or apps, the pattern of results was unchanged. 
Together these results suggest that the current operationalization of ISCIs for the purposes of estimating and characterizing interest is suitable; however, experimental research is needed to establish whether there are important differences in efficacy according to delivery mode of either Internet site or app. A final limitation is potential error or bias in the measurement of certain smoking characteristics. For example, smokers often forget failed quit attempts, particularly if they only lasted a short time or occurred long ago [52]. However, it is unlikely that forgetting would differ as a function of interest in ISCIs, and quit attempts—as with all other variables of interest in the present study—were assessed with a validated measure. In conclusion, almost half the smokers in England are interested in using online smoking cessation interventions, and yet only a small proportion of smokers currently use these interventions to support quit attempts. Clearly, ISCIs represent an excellent opportunity to deliver low-cost behavioral support to a large number of smokers, which is currently not being realized. Moreover, as interest is expressed regardless of social grade, the Internet may also offer a means of delivering this support equitably. Last, designers of Internet-based interventions should be aware that potential users are likely to be younger, more cigarette dependent, highly motivated, have attempted to quit recently, have regular Internet and handheld computer access, and have recently searched for smoking cessation information and support.
Background: An accurate and up-to-date estimate of the potential reach of Internet-based smoking cessation interventions (ISCIs) would improve calculations of impact while an understanding of the characteristics of potential users would facilitate the design of interventions. Methods: Data were collected using cross-sectional household surveys of representative samples of adults in England. Interest in trying an Internet site or "app" that was proven to help with stopping smoking was assessed in 1128 adult smokers in addition to sociodemographic characteristics, dependence, motivation to quit, previous attempts to quit smoking, Internet and handheld computer access, and recent types of information searched online. Results: Of a representative sample of current smokers, 46.6% (95% CI 43.5%-49.6%) were interested in using an Internet-based smoking cessation intervention. In contrast, only 0.3% (95% CI 0%-0.7%) of smokers reported having used such an intervention to support their most recent quit attempt within the past year. After adjusting for all other background characteristics, interested smokers were younger (OR=0.98, 95% CI 0.97-0.99), reported stronger urges (OR=1.29, 95% CI 1.10-1.51), were more motivated to quit within 3 months (OR=2.16, 95% CI 1.54-3.02), and were more likely to have made a quit attempt in the past year (OR=1.76, 95% CI 1.30-2.37), access the Internet at least weekly (OR=2.17, 95% CI 1.40-3.36), have handheld computer access (OR=1.65, 95% CI 1.22-2.24), and have used the Internet to search for online smoking cessation information or support in past 3 months (OR=2.82, 95% CI 1.20-6.62). There was no association with social grade. Conclusions: Almost half of all smokers in England are interested in using online smoking cessation interventions, yet fewer than 1% have used them to support a quit attempt in the past year. 
Interest is not associated with social grade but is associated with being younger, more highly motivated, more cigarette dependent, having attempted to quit recently, having regular Internet and handheld computer access, and having recently searched for online smoking cessation information and support.
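The adjusted odds ratios above come from a multivariable logistic regression, where each OR is the exponential of a model coefficient and its 95% CI is the exponential of the coefficient plus or minus 1.96 standard errors. A purely illustrative sketch of that relationship follows; the beta and standard error are back-solved from the published motivation-to-quit estimate (OR=2.16, 95% CI 1.54-3.02), not taken from the paper's model output:

```python
import math

# Assumed values, reverse-engineered from the reported OR and CI for
# motivation to quit within 3 months; illustrative only.
beta = math.log(2.16)                                   # log-odds coefficient
se = (math.log(3.02) - math.log(1.54)) / (2 * 1.96)     # SE back-solved from the CI

# An OR and its Wald 95% CI are computed on the log scale, then exponentiated.
or_ = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"OR = {or_:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

Note that the CI is symmetric around the coefficient on the log scale, not around the OR itself, which is why reported CIs such as 1.54-3.02 are skewed relative to 2.16.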
null
null
6,200
426
8
[ "internet", "smokers", "smoking", "support", "interest", "use", "information", "access", "iscis", "interested" ]
null
null
[CONTENT] smoking cessation intervention | Internet-based | website | prevalence | characteristic [SUMMARY]
[CONTENT] smoking cessation intervention | Internet-based | website | prevalence | characteristic [SUMMARY]
[CONTENT] smoking cessation intervention | Internet-based | website | prevalence | characteristic [SUMMARY]
null
[CONTENT] smoking cessation intervention | Internet-based | website | prevalence | characteristic [SUMMARY]
null
[CONTENT] Adult | Cross-Sectional Studies | Data Collection | Female | Humans | Internet | Male | Middle Aged | Prevalence | Smoking | Smoking Cessation [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Data Collection | Female | Humans | Internet | Male | Middle Aged | Prevalence | Smoking | Smoking Cessation [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Data Collection | Female | Humans | Internet | Male | Middle Aged | Prevalence | Smoking | Smoking Cessation [SUMMARY]
null
[CONTENT] Adult | Cross-Sectional Studies | Data Collection | Female | Humans | Internet | Male | Middle Aged | Prevalence | Smoking | Smoking Cessation [SUMMARY]
null
[CONTENT] internet | smokers | smoking | support | interest | use | information | access | iscis | interested [SUMMARY]
[CONTENT] internet | smokers | smoking | support | interest | use | information | access | iscis | interested [SUMMARY]
[CONTENT] internet | smokers | smoking | support | interest | use | information | access | iscis | interested [SUMMARY]
null
[CONTENT] internet | smokers | smoking | support | interest | use | information | access | iscis | interested [SUMMARY]
null
[CONTENT] face | support | internet | smokers | interventions | behavioral | evidence | behavioral support | use | interested [SUMMARY]
[CONTENT] smoking | internet | data | information | related | survey | use | face | surveys | workers [SUMMARY]
[CONTENT] smokers | ci | 95 ci | 95 | past | internet | smoking | table | characteristics | interest [SUMMARY]
null
[CONTENT] internet | smokers | smoking | interest | data | support | use | information | face | iscis [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] England ||| 1128 [SUMMARY]
[CONTENT] 46.6% | 95% | 43.5%-49.6% ||| only 0.3% | 95% | CI | 0%-0.7% | the past year ||| 95% | CI | 0.97-0.99 | OR=1.29 | 95% | CI | 1.10-1.51 | 3 months | 95% | CI | 1.54-3.02 | the past year | OR=1.76 | 95% | CI | 1.30 | 95% | 1.40-3.36 | OR=1.65 | 95% | CI | 1.22-2.24 | past 3 months | OR=2.82 | 95% | CI ||| [SUMMARY]
null
[CONTENT] ||| England ||| 1128 ||| ||| 46.6% | 95% | 43.5%-49.6% ||| only 0.3% | 95% | CI | 0%-0.7% | the past year ||| 95% | CI | 0.97-0.99 | OR=1.29 | 95% | CI | 1.10-1.51 | 3 months | 95% | CI | 1.54-3.02 | the past year | OR=1.76 | 95% | CI | 1.30 | 95% | 1.40-3.36 | OR=1.65 | 95% | CI | 1.22-2.24 | past 3 months | OR=2.82 | 95% | CI ||| ||| Almost half | England | fewer than 1% | the past year ||| [SUMMARY]
null
Clot accumulation at the tip of hemodialysis catheters in a large animal model.
33356813
The issue of side holes in the tips of the tunneled cuffed central venous catheters is complex and has been subject to longstanding debate. This study sought to compare the clotting potential of the side-hole-free Pristine hemodialysis catheter with that of a symmetric catheter with side holes.
BACKGROUND
Both jugular veins of five goats were catheterized with the two different catheters. The catheters were left in place for 4 weeks and were flushed and locked with heparin thrice weekly. The aspirated intraluminal clot length was assessed visually prior to each flushing. In addition, the size and weight of the clot were recorded upon catheter extraction at the end of the 4-week follow-up.
METHODS
The intraluminal clot length observed during the entire study follow-up averaged 0.66 cm in the GlidePath (95% CI, 0.14-1.18) and 0.19 cm in the Pristine hemodialysis catheter (95% CI, -0.33 to 0.71); the difference was statistically significant (p = 0.026). On average, 0.01 g and 0.07 g of intraluminal clot were retrieved from the Pristine and GlidePath catheters, respectively (p = 0.052).
RESULTS
The Pristine hemodialysis catheter was markedly superior to a standard side-hole catheter in impeding clot formation and, unlike the side-hole catheter, allowed for complete aspiration of the intraluminal clot.
CONCLUSION
[ "Animals", "Catheterization, Central Venous", "Catheters, Indwelling", "Central Venous Catheters", "Models, Animal", "Renal Dialysis" ]
8899813
Introduction
Use of central venous catheters (CVC) in the United States shows little, if any, change in pattern from 2005, with slightly over 80% of patients using a CVC at hemodialysis initiation and 68.5% still using catheters 90 days later, according to the recent data from the United States Renal Data System (USRDS).1,2 Substantial use of CVCs has also been reported in the European Economic Area (EEA), with studies showing an overall increase in dependency on CVCs for hemodialysis over time.3–5 Of note, the native arteriovenous fistula (AVF) remains the recommended first choice for vascular access, in both territories,6–8 due to a more frequent association of synthetic means of vascular access, especially CVCs, with infectious and thrombotic complications. This conception has been repeatedly challenged in the past decade,9,10 leading the National Kidney Foundation’s (NKF) Kidney Disease Outcomes Quality Initiative (KDOQI) to recognize the potential selection biases, both those in favor of the AVF and those against the CVC, as a point of major statistical concern casting doubt on the validity of the previous evidence.11 Consequently, the KDOQI guidelines restated that the disadvantages of CVC may contribute to poor patient outcomes, but advised that the true magnitude of this effect is not certain in view of the aforementioned selection bias and confounding effects. Regardless of the type of vascular access, adequate blood flow rate (BFR) is the sanctum sanctorum of hemodialysis, as low BFR extends treatment times and may result in underdialysis.6 The most likely cause for a low BFR achieved with CVCs is thrombosis of the catheter, accounting for access loss in 30% to 40% of patients.6 Somewhat paradoxically, distal side holes, frequently introduced in CVCs with the goal of supporting inflow in case of thrombotic obstruction of the end hole,12 have been themselves implicated in promotion of thrombosis by serving as anchors for irretrievable blood clots.
13 The Pristine hemodialysis catheter has a split, symmetrical, side-holes–free tip. It was designed with the anatomy of the right atrium in mind. The placement of the Pristine is such that the tip should be placed in the upper right atrium and is oriented in the anterior posterior position (Figure 1). This catheter design would appear to have a theoretical advantage in diminishing the risk of thrombosis. We devised this study to translate the aforementioned theory into practice by assessing the by-design potential advantages of the Pristine hemodialysis catheter in vivo. Pristine hemodialysis catheter tip orientation in the right atrium: (a) anterior-posterior view and (b) lateral view.
Methods
Animals and experimental setup. Five domestic goats (female; 55–78 kg) were used in this study. Animals were allowed free access to food until 24 h before the procedure, at which time access to food was denied. Water was provided ad libitum until 24 h before the procedure. Before the procedure, animals were sedated with intramuscular ketamine 10 mg/kg + xylazine 0.1 mg/kg, and intravenous midazolam 5–10 mg; intubated and connected to a mechanical ventilator. Anesthesia consisted of 1–2% isoflurane. Tunneled cuffed double-lumen CVCs were inserted in both jugular veins of the animals according to the instructions for use (IFU) provided by the catheters’ manufacturers. Pristine hemodialysis catheters (15.5 F polyurethane; Pristine Access Technologies Ltd., Tel Aviv, Israel; hereinafter called Pristine) and GlidePath Long-Term Dialysis Catheters (14.5 F polyurethane; Bard Access Systems, Inc., UT, United States; hereinafter called GlidePath) were evaluated in this study. Catheter right-atrium location post-insertion was validated by fluoroscopy (Figure 2). Catheter patency was assessed by the operating physician using a 10 mL syringe with normal saline. At the end of the procedure, animals received a subcutaneous injection of 1 mL/kg of procaine penicillin G 200 mg/mL + dihydrostreptomycin sulphate 250 mg/mL. In addition, cefazolin (2–2.5 g) and dipyrone (1 g) were administered intravenously. Fluoroscopy imaging of Pristine and Glidepath catheters implanted in the right atrium: (a) Pristine tip and (b) Glidepath tip.
Dialysis treatment imitation. For each imitated treatment, 5 mL were aspirated from each catheter lumen using a syringe with Luer port. Where aspiration was not possible, 20 mL of normal saline were injected to restore lumen patency. Following aspiration, the syringes were checked for clots, findings were documented, and a digital image was taken. The lumens were washed with 20 mL of normal saline and locked with 5000 U/mL of heparin solution diluted according to the priming volume indicated on the catheter.
Follow-up termination and catheter removal. Following completion of the study follow-up, catheter location was assessed using fluoroscopy. Catheters were then aspirated and removed according to the respective IFU. Where a clot residue was still present in the lumen, it was manually retrieved with tweezers by the same operator who aspirated and removed the catheters. The aspirated and the manually removed clot substances were weighed on analytical scales.
Euthanasia. Animals were euthanized with an injected barbiturate (pentobarbital) overdose, in agreement with the American Veterinary Medical Association (AVMA) acceptable method for euthanasia of fully anesthetized small ruminants.14 Death was confirmed by the animal facility veterinarian after assessing heartbeat, respiration, and pupillary response to light.
Statistical assessment. Both the follow-up and end-of-study results were analyzed using a repeated measures ANOVA model in order to compare the catheters with respect to clot length and weight (total intraluminal clots, aspirated clots, and clot residues). This was done in order to take into consideration the within-animal correlation between measurements using the two catheters, as well as lumen size (arterial and venous).
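The key feature of the statistical design above is that both catheters sit in the same animal, so the analysis must respect within-animal correlation. A minimal sketch of that idea follows, using synthetic placeholder data (centered on the reported means of 0.19 cm and 0.66 cm, not the study's measurements) and a paired t-statistic on per-animal differences as a simplified stand-in for the full repeated-measures ANOVA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-animal mean clot lengths (cm); placeholder values only.
pristine = np.clip(rng.normal(0.19, 0.15, size=5), 0, None)
glidepath = np.clip(rng.normal(0.66, 0.15, size=5), 0, None)

# Analyzing per-animal differences removes the shared animal effect,
# which is the simplest way to handle the paired (within-animal) design.
diff = glidepath - pristine
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
print(f"mean difference = {diff.mean():.2f} cm, t({len(diff) - 1}) = {t:.2f}")
```

The study's actual model additionally included lumen size (arterial vs venous) as a factor, which a paired t-test cannot do; a mixed-effects model with a random intercept per animal would be the closest full equivalent.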
Results
Ten tunneled cuffed double-lumen central venous catheters (five Pristine and five GlidePath) were inserted in the jugular veins of five animals. Insertions were uncomplicated and uneventful. The two catheters were inserted in each animal, one on each side. During the 28-day follow-up, imitated treatments were administered three times a week, on Sundays, Tuesdays, and Thursdays. Aspirations from both catheter types were uneventful at all locations. The intraluminal clot length aspirated before each session averaged 0.66 cm in the GlidePath (95% CI, 0.14–1.18) and 0.19 cm in the Pristine hemodialysis catheter (95% CI, −0.33 to 0.71) over the entire study follow-up, the difference being statistically significant (p = 0.026; Table 1). Average intraluminal clot length. None of the animals showed clinical signs of infection during the entire duration of the study. All catheters were removed at the end of the follow-up. The total intraluminal clot weight for a single lumen was calculated by combining the weight of the clot aspirated from the lumen before catheter removal with that of the residual clot retrieved manually immediately thereafter (Figures 3 and 4). This reached a mean of 0.0054 g and 0.0372 g for the lumens of the Pristine and GlidePath catheters, respectively, accounting for a mean intraluminal clot weight of roughly 0.01 g per Pristine and 0.07 g per GlidePath catheter (Figure 5). Aside from the overall clot load, one finding deserves special consideration: residual clot was not detected at all in the Pristine catheters after aspiration, while a mean of 0.002 g of clot was retrieved mechanically from the GlidePath catheters following their removal from the blood vessels. Catheter tips and aspirated clots after 28 days: (a) Pristine and (b) Glidepath. Intraluminal clots removed from Pristine (top) and Glidepath (bottom) tips after 28 days.
Comparison of total intraluminal clot weight (aspirated + manually retrieved following catheter removal). Data are presented as mean ± standard deviation for a single catheter. The almost seven-fold difference in total intraluminal clot weight between the catheter types showed a trend toward statistical significance (p = 0.052). The differences between the mean weights of the aspirated and manually retrieved residual clot showed the same trend when analyzed separately (p < 0.09 in each of the separate analyses). Finally, the mean aspirated, residual, and total intraluminal clot weights were statistically different from 0 for the GlidePath (p = 0.03), but not for the Pristine catheters (p > 0.64). Although this work featured only a limited number of animals observed over a relatively short period of time, the number of clot length observations obtained was considerable. The weight evaluation at the end of the study followed suit: the Pristine hemodialysis catheter was not inferior to the GlidePath in any test and was superior in some. Overall, this reduces the likelihood that the study results were significantly affected by random error.
null
null
[ "Animals and experimental setup", "Dialysis treatment imitation", "Follow-up termination and catheter removal", "Euthanasia", "Statistical assessment" ]
[ "Five domestic goats (female; 55–78 kg) were used in this study. Animals were\nallowed free access to food until 24 h before the procedure, at which time\naccess to food was denied. Water was provided ad libitum until 24 h before the\nprocedure. Before the procedure, animals were sedated with intramuscular\nketamine 10 mg/kg + xylazine 0.1 mg/kg, and intravenous midazolam 5–10 mg;\nintubated and connected to a mechanical ventilator. Anesthesia consisted of 1–2%\nisoflurane. Tunneled cuffed double-lumen CVCs were inserted in both jugular\nveins of the animals according to the instructions for use (IFU) provided by the\ncatheters’ manufacturers. Pristine hemodialysis catheters (15.5 F polyurethane;\nPristine Access Technologies Ltd., Tel Aviv, Israel; hereinafter called\nPristine) and GlidePath Long-Term Dialysis Catheters (14.5 F polyurethane; Bard\nAccess Systems, Inc., UT, United States; hereinafter called GlidePath) were\nevaluated in this study. Catheter right-atrium location post-insertion was\nvalidated by fluoroscopy (Figure 2). Catheter patency was assessed by the operating physician\nusing a 10 mL syringe with normal saline. At the end of the procedure, animals\nreceived a subcutaneous injection of 1 mL/kg of procaine penicillin G\n200 mg/mL + dihydrostreptomycin sulphate 250 mg/mL. In addition, cefazolin\n(2–2.5 g) and dipyrone (1 g) were administered intravenously.\nFluoroscopy imaging of Pristine and Glidepath catheters implanted in the\nright atrium: (a) Pristine tip and (b) Glidepath tip.", "For each imitated treatment, 5 mL were aspirated from each catheter lumen using a\nsyringe with Luer port. Where aspiration was not possible, 20 mL of normal\nsaline were injected to restore lumen patency. Following aspiration, the\nsyringes were checked for clots, findings were documented, and a digital image\nwas taken. 
The lumens were washed with 20 mL of normal saline and locked with\n5000 U/mL of heparin solution diluted according to the priming volume indicated\non the catheter.", "Following completion of the study follow-up, catheter location was assessed using\nfluoroscopy. Catheters were then aspirated and removed according to the\nrespective IFU. Where a clot residue was still present in the lumen, it was\nmanually retrieved with tweezers by the same operator who aspirated and removed\nthe catheters. The aspirated and the manually removed clot substances were\nweighed on analytical scales.", "Animals were euthanized with an injected barbiturate (pentobarbital) overdose, in\nagreement with the American Veterinary Medical Association (AVMA) acceptable\nmethod for euthanasia of fully anesthetized small ruminants.\n14\n Death was confirmed by the animal facility veterinarian after assessing\nheartbeat, respiration, and pupillary response to light.", "Both the follow up and end of study results were analyzed using a repeated\nmeasures ANOVA model in order to compare the catheters with respect to clot\nlength and weight (total intraluminal clots, aspirated clots, and clot\nresidues). This was done in order to take into consideration the within animal\ncorrelation between measurements using the two catheters as well as lumen size\n(arterial and venous)." ]
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Animals and experimental setup", "Dialysis treatment imitation", "Follow-up termination and catheter removal", "Euthanasia", "Statistical assessment", "Results", "Discussion" ]
[ "Use of central venous catheters (CVC) in the United States shows little, if any,\nchange in pattern from 2005, with slightly over 80% of patients using a CVC at\nhemodialysis initiation and 68.5% still using catheters 90 days later, according to\nthe recent data from the United States Renal Data System (USRDS).1,2 Substantial use of CVCs has also\nbeen reported in the European Economic Area (EEA), with studies showing an overall\nincrease in dependency on CVCs for hemodialysis over time.3–5 Of note, the native\narteriovenous fistula (AVF) remains the recommended first choice for vascular\naccess, in both territories,6–8 due to amore\nfrequent association of synthetic means of vascular access, especially CVCs, with\ninfectious and thrombotic complications. This conception has been repeatedly\nchallenged in the past decade,9,10 leading the National Kidney\nFoundation’s (NKF) Kidney Disease Outcomes Quality Initiative (KDOQI) to recognize\nthe potential selection biases, both those in favor of the AVF and those against the\nCVC, as a point of major statistical concern casting doubt on the validity of the\nprevious evidence.\n11\n Consequently, the KDOQI guidelines restated that the disadvantages of CVC may\ncontribute to poor patient outcomes, but advised that the true magnitude of this\neffect is not certain in view of the aforementioned selection bias and confounding\neffects.\nRegardless of the type of vascular access, adequate blood flow rate (BFR) is the\nsanctum sanctorum of hemodialysis, as low BFR extends treatment times and may result\nin underdialysis.\n6\n The most likely cause for a low BFR achieved with CVCs is thrombosis of the\ncatheter, accounting for access loss in 30% to 40% of patients.\n6\n Somewhat paradoxically, distal side holes, frequently introduced in CVCs with\nthe goal of supporting inflow in case of thrombotic obstruction of the end hole,\n12\n have been themselves implicated in promotion of thrombosis by serving as\nanchors for 
irretrievable blood clots.\n13\n\nThe Pristine hemodialysis catheter has a split, symmetrical, side-holes–free tip. It\nwas designed with the anatomy of the right atrium in mind. The placement of the\nPristine is such that the tip should be placed in the upper right atrium and is\noriented in the anterior posterior position (Figure 1). This catheter design would appear\nto have a theoretical advantage in diminishing the risk of thrombosis. We devised\nthis study to translate the aforementioned theory into practice by assessing the\nby-design potential advantages of the Pristine hemodialysis catheter in vivo.\nPristine hemodialysis catheter tip orientation in the right atrium: (a)\nanterior-posterior view and (b) lateral view.", " Animals and experimental setup Five domestic goats (female; 55–78 kg) were used in this study. Animals were\nallowed free access to food until 24 h before the procedure, at which time\naccess to food was denied. Water was provided ad libitum until 24 h before the\nprocedure. Before the procedure, animals were sedated with intramuscular\nketamine 10 mg/kg + xylazine 0.1 mg/kg, and intravenous midazolam 5–10 mg;\nintubated and connected to a mechanical ventilator. Anesthesia consisted of 1–2%\nisoflurane. Tunneled cuffed double-lumen CVCs were inserted in both jugular\nveins of the animals according to the instructions for use (IFU) provided by the\ncatheters’ manufacturers. Pristine hemodialysis catheters (15.5 F polyurethane;\nPristine Access Technologies Ltd., Tel Aviv, Israel; hereinafter called\nPristine) and GlidePath Long-Term Dialysis Catheters (14.5 F polyurethane; Bard\nAccess Systems, Inc., UT, United States; hereinafter called GlidePath) were\nevaluated in this study. Catheter right-atrium location post-insertion was\nvalidated by fluoroscopy (Figure 2). Catheter patency was assessed by the operating physician\nusing a 10 mL syringe with normal saline. 
At the end of the procedure, animals\nreceived a subcutaneous injection of 1 mL/kg of procaine penicillin G\n200 mg/mL + dihydrostreptomycin sulphate 250 mg/mL. In addition, cefazolin\n(2–2.5 g) and dipyrone (1 g) were administered intravenously.\nFluoroscopy imaging of Pristine and Glidepath catheters implanted in the\nright atrium: (a) Pristine tip and (b) Glidepath tip.\nFive domestic goats (female; 55–78 kg) were used in this study. Animals were\nallowed free access to food until 24 h before the procedure, at which time\naccess to food was denied. Water was provided ad libitum until 24 h before the\nprocedure. Before the procedure, animals were sedated with intramuscular\nketamine 10 mg/kg + xylazine 0.1 mg/kg, and intravenous midazolam 5–10 mg;\nintubated and connected to a mechanical ventilator. Anesthesia consisted of 1–2%\nisoflurane. Tunneled cuffed double-lumen CVCs were inserted in both jugular\nveins of the animals according to the instructions for use (IFU) provided by the\ncatheters’ manufacturers. Pristine hemodialysis catheters (15.5 F polyurethane;\nPristine Access Technologies Ltd., Tel Aviv, Israel; hereinafter called\nPristine) and GlidePath Long-Term Dialysis Catheters (14.5 F polyurethane; Bard\nAccess Systems, Inc., UT, United States; hereinafter called GlidePath) were\nevaluated in this study. Catheter right-atrium location post-insertion was\nvalidated by fluoroscopy (Figure 2). Catheter patency was assessed by the operating physician\nusing a 10 mL syringe with normal saline. At the end of the procedure, animals\nreceived a subcutaneous injection of 1 mL/kg of procaine penicillin G\n200 mg/mL + dihydrostreptomycin sulphate 250 mg/mL. 
In addition, cefazolin\n(2–2.5 g) and dipyrone (1 g) were administered intravenously.\nFluoroscopy imaging of Pristine and Glidepath catheters implanted in the\nright atrium: (a) Pristine tip and (b) Glidepath tip.\n Dialysis treatment imitation For each imitated treatment, 5 mL were aspirated from each catheter lumen using a\nsyringe with Luer port. Where aspiration was not possible, 20 mL of normal\nsaline were injected to restore lumen patency. Following aspiration, the\nsyringes were checked for clots, findings were documented, and a digital image\nwas taken. The lumens were washed with 20 mL of normal saline and locked with\n5000 U/mL of heparin solution diluted according to the priming volume indicated\non the catheter.\nFor each imitated treatment, 5 mL were aspirated from each catheter lumen using a\nsyringe with Luer port. Where aspiration was not possible, 20 mL of normal\nsaline were injected to restore lumen patency. Following aspiration, the\nsyringes were checked for clots, findings were documented, and a digital image\nwas taken. The lumens were washed with 20 mL of normal saline and locked with\n5000 U/mL of heparin solution diluted according to the priming volume indicated\non the catheter.\n Follow-up termination and catheter removal Following completion of the study follow-up, catheter location was assessed using\nfluoroscopy. Catheters were then aspirated and removed according to the\nrespective IFU. Where a clot residue was still present in the lumen, it was\nmanually retrieved with tweezers by the same operator who aspirated and removed\nthe catheters. The aspirated and the manually removed clot substances were\nweighed on analytical scales.\nFollowing completion of the study follow-up, catheter location was assessed using\nfluoroscopy. Catheters were then aspirated and removed according to the\nrespective IFU. 
Where a clot residue was still present in the lumen, it was\nmanually retrieved with tweezers by the same operator who aspirated and removed\nthe catheters. The aspirated and the manually removed clot substances were\nweighed on analytical scales.\n Euthanasia Animals were euthanized with an injected barbiturate (pentobarbital) overdose, in\nagreement with the American Veterinary Medical Association (AVMA) acceptable\nmethod for euthanasia of fully anesthetized small ruminants.\n14\n Death was confirmed by the animal facility veterinarian after assessing\nheartbeat, respiration, and pupillary response to light.\nAnimals were euthanized with an injected barbiturate (pentobarbital) overdose, in\nagreement with the American Veterinary Medical Association (AVMA) acceptable\nmethod for euthanasia of fully anesthetized small ruminants.\n14\n Death was confirmed by the animal facility veterinarian after assessing\nheartbeat, respiration, and pupillary response to light.\n Statistical assessment Both the follow up and end of study results were analyzed using a repeated\nmeasures ANOVA model in order to compare the catheters with respect to clot\nlength and weight (total intraluminal clots, aspirated clots, and clot\nresidues). This was done in order to take into consideration the within animal\ncorrelation between measurements using the two catheters as well as lumen size\n(arterial and venous).\nBoth the follow up and end of study results were analyzed using a repeated\nmeasures ANOVA model in order to compare the catheters with respect to clot\nlength and weight (total intraluminal clots, aspirated clots, and clot\nresidues). This was done in order to take into consideration the within animal\ncorrelation between measurements using the two catheters as well as lumen size\n(arterial and venous).", "Five domestic goats (female; 55–78 kg) were used in this study. 
Animals were\nallowed free access to food until 24 h before the procedure, at which time\naccess to food was denied. Water was provided ad libitum until 24 h before the\nprocedure. Before the procedure, animals were sedated with intramuscular\nketamine 10 mg/kg + xylazine 0.1 mg/kg, and intravenous midazolam 5–10 mg;\nintubated and connected to a mechanical ventilator. Anesthesia consisted of 1–2%\nisoflurane. Tunneled cuffed double-lumen CVCs were inserted in both jugular\nveins of the animals according to the instructions for use (IFU) provided by the\ncatheters’ manufacturers. Pristine hemodialysis catheters (15.5 F polyurethane;\nPristine Access Technologies Ltd., Tel Aviv, Israel; hereinafter called\nPristine) and GlidePath Long-Term Dialysis Catheters (14.5 F polyurethane; Bard\nAccess Systems, Inc., UT, United States; hereinafter called GlidePath) were\nevaluated in this study. Catheter right-atrium location post-insertion was\nvalidated by fluoroscopy (Figure 2). Catheter patency was assessed by the operating physician\nusing a 10 mL syringe with normal saline. At the end of the procedure, animals\nreceived a subcutaneous injection of 1 mL/kg of procaine penicillin G\n200 mg/mL + dihydrostreptomycin sulphate 250 mg/mL. In addition, cefazolin\n(2–2.5 g) and dipyrone (1 g) were administered intravenously.\nFluoroscopy imaging of Pristine and Glidepath catheters implanted in the\nright atrium: (a) Pristine tip and (b) Glidepath tip.", "For each imitated treatment, 5 mL were aspirated from each catheter lumen using a\nsyringe with Luer port. Where aspiration was not possible, 20 mL of normal\nsaline were injected to restore lumen patency. Following aspiration, the\nsyringes were checked for clots, findings were documented, and a digital image\nwas taken. 
The lumens were washed with 20 mL of normal saline and locked with\n5000 U/mL of heparin solution diluted according to the priming volume indicated\non the catheter.", "Following completion of the study follow-up, catheter location was assessed using\nfluoroscopy. Catheters were then aspirated and removed according to the\nrespective IFU. Where a clot residue was still present in the lumen, it was\nmanually retrieved with tweezers by the same operator who aspirated and removed\nthe catheters. The aspirated and the manually removed clot substances were\nweighed on analytical scales.", "Animals were euthanized with an injected barbiturate (pentobarbital) overdose, in\nagreement with the American Veterinary Medical Association (AVMA) acceptable\nmethod for euthanasia of fully anesthetized small ruminants.\n14\n Death was confirmed by the animal facility veterinarian after assessing\nheartbeat, respiration, and pupillary response to light.", "Both the follow up and end of study results were analyzed using a repeated\nmeasures ANOVA model in order to compare the catheters with respect to clot\nlength and weight (total intraluminal clots, aspirated clots, and clot\nresidues). This was done in order to take into consideration the within animal\ncorrelation between measurements using the two catheters as well as lumen size\n(arterial and venous).", "Ten tunneled cuffed double-lumen central venous catheters (five—Pristine and\nfive—GlidePath) were inserted in the jugular veins of five animals. Insertions were\nuncomplicated and uneventful. The two catheters were inserted in each animal, one on\neach side.\nDuring the 28-day follow-up, imitated treatments were administered three times a\nweek, on Sundays, Tuesdays, and Thursdays. Aspirations from both catheter types were\nuneventful at all locations. 
The mean intraluminal clot length aspirated before each\nsession during the entire study follow-up averaged 0.66 cm in the\nGlidePath (95% CI, 0.14–1.18) and 0.19 cm in the Pristine hemodialysis catheter (95%\nCI, −0.33 to 0.71), the difference being statistically significant\n(p = 0.026; Table 1).\nAverage intraluminal clot length.\nNone of the animals showed clinical signs of infection during the entire duration of\nthe study. All catheters were removed at the end of the follow-up. The total\nintraluminal clot weight for a single lumen was calculated by combining the weight\nof the clot aspirated from the lumen before catheter removal with that of the\nresidual clot retrieved manually immediately thereafter (Figures 3 and 4). This reached a mean of 0.0054 g and\n0.0372 g for the lumens of the Pristine and GlidePath catheters, respectively,\naccounting for a mean intraluminal clot weight of roughly 0.01 g per Pristine\nand 0.07 g per GlidePath catheter (Figure 5). Aside from the obvious significance\nof the overall clot load, one determinant deserves special consideration.\nSpecifically, residual clot was not detected at all in the Pristine catheters after\naspiration, while a mean of 0.002 g of clot was retrieved from the GlidePath\ncatheters mechanically, following removal of the catheters from the blood\nvessels.\nCatheter tips and aspirated clots after 28 days: (a) Pristine and (b)\nGlidepath.\nIntraluminal clots removed from Pristine (top) and Glidepath (bottom) tips\nafter 28 days.\nComparison of total intraluminal clot weight (aspirated + manually retrieved\nfollowing catheter removal). 
Data are presented as a mean ± standard\ndeviation for a single catheter.\nThe almost seven-fold total intraluminal clot weight difference between the catheter\ntypes showed a trend toward statistical significance (p = 0.052).\nThe differences between the mean weights of the aspirated and manually retrieved\nresidual clot showed the same trend when analyzed separately\n(p < 0.09 in each of the separate analyses). Finally, the mean\naspirated, residual, and total intraluminal clot weights were statistically\ndifferent from 0 for the GlidePath (p = 0.03), but not for the\nPristine catheters (p > 0.64).\nAlthough this work featured only a limited number of animals observed over a\nrelatively short period of time, the number of clot length observations obtained was\nconsiderable. The results of weight evaluation at the end of the study followed\nsuit, showing that the Pristine hemodialysis catheter was not inferior to the\nGlidepath in any test, while showing superiority to the latter in some. Overall,\nthis reduces the likelihood of study results being significantly affected by a\nrandom error.", "From the data available in USRDS, at 90 days after the initiation, 68.5% of patients\nare still using catheters.\n1\n Among prevalent hemodialysis patients in 2018, catheter use was much higher\n(52%) for hemodialysis patients ⩽21 years old (versus 19–21% in other age groups),\nunderscoring the need for a durable vascular access.\n2\n\nCatheter failure due to low BFR or occlusion will likely occur at some time during\ncatheter use, with timing of such occurrence governed, to a certain extent, by the\ndefinition of failure. 
Catheter performance parameters proposed by the NKF Vascular\nAccess Work Group in 2006 mostly relied on the dialyzer BFR achieved, factored by\nthe prepump arterial limb pressure.\n6\n These guidelines were comparable to those issued by the Society of\nInterventional Radiology,\n15\n American College of Radiology,\n16\n and a joint committee of several surgical societies.\n17\n Dialyzer-delivered blood flow rates greater than 300 mL/min, factored by\npre-pump pressure, were an absolute requirement. Notably, in Europe, blood flow\nrates less than 300 mL/min were used, conditioned on longer dialysis treatment durations.\n18\n Still, 300 mL/min was a conservative value in the adult practice, and the\nexisting European guidelines referred, and still do, to blood flow of 300 mL/min\navailable for hemodialysis as the parameter of adequacy in hemodialysis efficiency\nassessment, despite providing a definition for catheter dysfunction different from\ntheir American counterparts.7,8\nThe 2019 update of NKF KDOQI guidelines reassessed this definition in view of the\naccumulated evidence, removing the 300 mL/min requirement.\n11\n Still, failure to maintain the prescribed extracorporeal blood flow required\nfor adequate hemodialysis without lengthening the prescribed HD treatment is at the\ncore of expectations from CVCs, as it is with any other type of vascular access.\n11\n Consequently, susceptibility of CVCs to thrombosis remains the principal\nconcern associated with their use. 
Further, aside from the obvious impact on BFR,\nthrombosis has been acknowledged as a major predisposing factor in the development\nof CVC-related infections due to its role in promotion of adherence of bacterial and\nfungal organisms to catheters.19–21 This eventually results in\ncatheter-related septicemia\n22\n and frequently leads to catheter removal, a point where safety and efficacy\noutcomes converge.\nCatheter patency considerations are at the heart of the CVC-related research and\ndevelopment. While the shaft design issues of the long-term catheters have been\nsatisfactorily resolved with introduction of CVCs featuring a D-shaped lumen in the\nmid-body,23–27 medical device manufacturers\nstill struggle with optimization of the tip of the catheter. Shape-wise, symmetry of\nthe tip showed benefits in preventing recirculation compared to other\nconfigurations, such as staggered tips, especially when arterial and venous blood\ntubing are reversed.\n28\n Much controversy surrounds the need for side holes which, despite their\npurpose to support catheter patency,\n12\n may predispose to thrombus formation by facilitating quick removal of\nanticoagulant lock solutions by blood flow.24,29,30 In a computational fluid\ndynamics analysis, distal side holes present as a low-flow zone with increased\nclotting risk at the catheter tip.\n31\n Finally, creation of side holes is riddled with imperfections of the cut\nsurfaces, to which thrombi have been shown to attach firmly and\nirretrievably.12,13\nThe GlidePath catheter is a double D catheter introduced into practice a decade ago.\nAdmixture of the arterial and venous blood in this catheter is reduced through\nintroduction of curved distal apertures on opposing sides of the catheter. The\nGlidePath’s tip symmetry is incomplete due to a guidewire aperture at the distal\ntip, as part of the venous lumen, and the offset side holes. 
Still, in computational\nanalysis, percentages of blood moving out of the catheter from the distal lumen and\nflow rates through the side holes of Glidepath were similar to those of the original\nsymmetrical catheter, the Palindrome (Medtronic, MN, USA).\n32\n In that computational analysis, the most prominent flow stagnation regions\nwere detected around side holes and terminal apertures, where laminar flow from the\ncatheter tip is interrupted by inflow from the side holes.\n32\n\nThe Pristine hemodialysis catheter is a dual-lumen CVC with a double-D–shaped\ncross-section of the mid-shaft. Unlike GlidePath, it has a pre-formed short\nsymmetric split-tip and is devoid of side holes. In our study, despite having a\ndiameter slightly larger than that of the comparator, the Pristine hemodialysis\ncatheter was largely superior to GlidePath in impeding clot formation, as evident\nfrom a significant reduction in the clot length and from the trend toward reduction\nin the clot weight. In addition, contrary to GlidePath, the Pristine hemodialysis\ncatheter allowed for complete aspiration of the intraluminal clot. The ability to\nretrieve the clot without removing the catheter is an important prerequisite of\ndurable patency of CVCs, and, as noted earlier, a significant contribution to the\nsafety of its use, through reduction of catheter-related septicemia. Aside from the\nobvious benefit to the patient stemming from the durable vascular access patency,\nthis quality is likely to contribute to reduction in the frequency and duration of\nuse of antibiotics, which, in turn, will contribute to the effort of reduction of\nspread of antibiotic resistance." ]
[ "intro", "methods", null, null, null, null, null, "results", "discussion" ]
[ "Catheters", "dialysis access", "new devices", "Techniques and procedures", "dialysis", "biomaterials" ]
Introduction: Use of central venous catheters (CVC) in the United States shows little, if any, change in pattern from 2005, with slightly over 80% of patients using a CVC at hemodialysis initiation and 68.5% still using catheters 90 days later, according to the recent data from the United States Renal Data System (USRDS).1,2 Substantial use of CVCs has also been reported in the European Economic Area (EEA), with studies showing an overall increase in dependency on CVCs for hemodialysis over time.3–5 Of note, the native arteriovenous fistula (AVF) remains the recommended first choice for vascular access, in both territories,6–8 due to a more frequent association of synthetic means of vascular access, especially CVCs, with infectious and thrombotic complications. This conception has been repeatedly challenged in the past decade,9,10 leading the National Kidney Foundation’s (NKF) Kidney Disease Outcomes Quality Initiative (KDOQI) to recognize the potential selection biases, both those in favor of the AVF and those against the CVC, as a point of major statistical concern casting doubt on the validity of the previous evidence. 11 Consequently, the KDOQI guidelines restated that the disadvantages of CVC may contribute to poor patient outcomes, but advised that the true magnitude of this effect is not certain in view of the aforementioned selection bias and confounding effects. Regardless of the type of vascular access, adequate blood flow rate (BFR) is the sanctum sanctorum of hemodialysis, as low BFR extends treatment times and may result in underdialysis. 6 The most likely cause for a low BFR achieved with CVCs is thrombosis of the catheter, accounting for access loss in 30% to 40% of patients. 6 Somewhat paradoxically, distal side holes, frequently introduced in CVCs with the goal of supporting inflow in case of thrombotic obstruction of the end hole, 12 have been themselves implicated in promotion of thrombosis by serving as anchors for irretrievable blood clots. 
13 The Pristine hemodialysis catheter has a split, symmetrical, side-holes–free tip. It was designed with the anatomy of the right atrium in mind. The placement of the Pristine is such that the tip should be placed in the upper right atrium and is oriented in the anterior posterior position (Figure 1). This catheter design would appear to have a theoretical advantage in diminishing the risk of thrombosis. We devised this study to translate the aforementioned theory into practice by assessing the by-design potential advantages of the Pristine hemodialysis catheter in vivo. Pristine hemodialysis catheter tip orientation in the right atrium: (a) anterior-posterior view and (b) lateral view. Methods: Animals and experimental setup Five domestic goats (female; 55–78 kg) were used in this study. Animals were allowed free access to food until 24 h before the procedure, at which time access to food was denied. Water was provided ad libitum until 24 h before the procedure. Before the procedure, animals were sedated with intramuscular ketamine 10 mg/kg + xylazine 0.1 mg/kg, and intravenous midazolam 5–10 mg; intubated and connected to a mechanical ventilator. Anesthesia consisted of 1–2% isoflurane. Tunneled cuffed double-lumen CVCs were inserted in both jugular veins of the animals according to the instructions for use (IFU) provided by the catheters’ manufacturers. Pristine hemodialysis catheters (15.5 F polyurethane; Pristine Access Technologies Ltd., Tel Aviv, Israel; hereinafter called Pristine) and GlidePath Long-Term Dialysis Catheters (14.5 F polyurethane; Bard Access Systems, Inc., UT, United States; hereinafter called GlidePath) were evaluated in this study. Catheter right-atrium location post-insertion was validated by fluoroscopy (Figure 2). Catheter patency was assessed by the operating physician using a 10 mL syringe with normal saline. 
At the end of the procedure, animals received a subcutaneous injection of 1 mL/kg of procaine penicillin G 200 mg/mL + dihydrostreptomycin sulphate 250 mg/mL. In addition, cefazolin (2–2.5 g) and dipyrone (1 g) were administered intravenously. Fluoroscopy imaging of Pristine and Glidepath catheters implanted in the right atrium: (a) Pristine tip and (b) Glidepath tip. 
Dialysis treatment imitation For each imitated treatment, 5 mL were aspirated from each catheter lumen using a syringe with Luer port. Where aspiration was not possible, 20 mL of normal saline were injected to restore lumen patency. Following aspiration, the syringes were checked for clots, findings were documented, and a digital image was taken. The lumens were washed with 20 mL of normal saline and locked with 5000 U/mL of heparin solution diluted according to the priming volume indicated on the catheter. Follow-up termination and catheter removal Following completion of the study follow-up, catheter location was assessed using fluoroscopy. Catheters were then aspirated and removed according to the respective IFU. Where a clot residue was still present in the lumen, it was manually retrieved with tweezers by the same operator who aspirated and removed the catheters. The aspirated and the manually removed clot substances were weighed on analytical scales. 
Euthanasia Animals were euthanized with an injected barbiturate (pentobarbital) overdose, in agreement with the American Veterinary Medical Association (AVMA) acceptable method for euthanasia of fully anesthetized small ruminants. 14 Death was confirmed by the animal facility veterinarian after assessing heartbeat, respiration, and pupillary response to light. Statistical assessment Both the follow up and end of study results were analyzed using a repeated measures ANOVA model in order to compare the catheters with respect to clot length and weight (total intraluminal clots, aspirated clots, and clot residues). This was done in order to take into consideration the within animal correlation between measurements using the two catheters as well as lumen size (arterial and venous). Animals and experimental setup: Five domestic goats (female; 55–78 kg) were used in this study. Animals were allowed free access to food until 24 h before the procedure, at which time access to food was denied. Water was provided ad libitum until 24 h before the procedure. 
Before the procedure, animals were sedated with intramuscular ketamine 10 mg/kg + xylazine 0.1 mg/kg, and intravenous midazolam 5–10 mg; intubated and connected to a mechanical ventilator. Anesthesia consisted of 1–2% isoflurane. Tunneled cuffed double-lumen CVCs were inserted in both jugular veins of the animals according to the instructions for use (IFU) provided by the catheters’ manufacturers. Pristine hemodialysis catheters (15.5 F polyurethane; Pristine Access Technologies Ltd., Tel Aviv, Israel; hereinafter called Pristine) and GlidePath Long-Term Dialysis Catheters (14.5 F polyurethane; Bard Access Systems, Inc., UT, United States; hereinafter called GlidePath) were evaluated in this study. Catheter right-atrium location post-insertion was validated by fluoroscopy (Figure 2). Catheter patency was assessed by the operating physician using a 10 mL syringe with normal saline. At the end of the procedure, animals received a subcutaneous injection of 1 mL/kg of procaine penicillin G 200 mg/mL + dihydrostreptomycin sulphate 250 mg/mL. In addition, cefazolin (2–2.5 g) and dipyrone (1 g) were administered intravenously. Fluoroscopy imaging of Pristine and Glidepath catheters implanted in the right atrium: (a) Pristine tip and (b) Glidepath tip. Dialysis treatment imitation: For each imitated treatment, 5 mL were aspirated from each catheter lumen using a syringe with Luer port. Where aspiration was not possible, 20 mL of normal saline were injected to restore lumen patency. Following aspiration, the syringes were checked for clots, findings were documented, and a digital image was taken. The lumens were washed with 20 mL of normal saline and locked with 5000 U/mL of heparin solution diluted according to the priming volume indicated on the catheter. Follow-up termination and catheter removal: Following completion of the study follow-up, catheter location was assessed using fluoroscopy. Catheters were then aspirated and removed according to the respective IFU. 
Where a clot residue was still present in the lumen, it was manually retrieved with tweezers by the same operator who aspirated and removed the catheters. The aspirated and the manually removed clot substances were weighed on analytical scales. Euthanasia: Animals were euthanized with an injected barbiturate (pentobarbital) overdose, in agreement with the American Veterinary Medical Association (AVMA) acceptable method for euthanasia of fully anesthetized small ruminants. 14 Death was confirmed by the animal facility veterinarian after assessing heartbeat, respiration, and pupillary response to light. Statistical assessment: Both the follow up and end of study results were analyzed using a repeated measures ANOVA model in order to compare the catheters with respect to clot length and weight (total intraluminal clots, aspirated clots, and clot residues). This was done in order to take into consideration the within animal correlation between measurements using the two catheters as well as lumen size (arterial and venous). Results: Ten tunneled cuffed double-lumen central venous catheters (five—Pristine and five—GlidePath) were inserted in the jugular veins of five animals. Insertions were uncomplicated and uneventful. The two catheters were inserted in each animal, one on each side. During the 28-day follow-up, imitated treatments were administered three times a week, on Sundays, Tuesdays, and Thursdays. Aspirations from both catheter types were uneventful at all locations. The mean intraluminal clot length aspirated before each session during the entire study follow-up averaged 0.66 cm in the GlidePath (95% CI, 0.14–1.18) and 0.19 cm in the Pristine hemodialysis catheter (95% CI, −0.33 to 0.71), the difference being statistically significant (p = 0.026; Table 1). Average intraluminal clot length. None of the animals showed clinical signs of infection during the entire duration of the study. All catheters were removed at the end of the follow-up. 
The total intraluminal clot weight for a single lumen was calculated by combining the weight of the clot aspirated from the lumen before catheter removal with that of the residual clot retrieved manually immediately thereafter (Figures 3 and 4). This reached a mean of 0.0054 g and 0.0372 g for the lumens of the Pristine and GlidePath catheters, respectively, accounting for a mean intraluminal clot weight of roughly 0.01 g per Pristine and 0.07 g per GlidePath catheter (Figure 5). Aside from the obvious significance of the overall clot load, one determinant deserves special consideration. Specifically, residual clot was not detected at all in the Pristine catheters after aspiration, while a mean of 0.002 g of clot was retrieved from the GlidePath catheters mechanically, following removal of the catheters from the blood vessels. Catheter tips and aspirated clots after 28 days: (a) Pristine and (b) Glidepath. Intraluminal clots removed from Pristine (top) and Glidepath (bottom) tips after 28 days. Comparison of total intraluminal clot weight (aspirated + manually retrieved following catheter removal). Data are presented as a mean ± standard deviation for a single catheter. The almost seven-fold total intraluminal clot weight difference between the catheter types showed a trend toward statistical significance (p = 0.052). The differences between the mean weights of the aspirated and manually retrieved residual clot showed the same trend when analyzed separately (p < 0.09 in each of the separate analyses). Finally, the mean aspirated, residual, and total intraluminal clot weights were statistically different from 0 for the GlidePath (p = 0.03), but not for the Pristine catheters (p > 0.64). Although this work featured only a limited number of animals observed over a relatively short period of time, the number of clot length observations obtained was considerable. 
The results of weight evaluation at the end of the study followed suit, showing that the Pristine hemodialysis catheter was not inferior to the Glidepath in any test, while showing superiority to the latter in some. Overall, this reduces the likelihood of study results being significantly affected by a random error. Discussion: From the data available in USRDS, at 90 days after the initiation, 68.5% of patients are still using catheters. 1 Among prevalent hemodialysis patients in 2018, catheter use was much higher (52%) for hemodialysis patients ⩽21 years old (versus 19–21% in other age groups), underscoring the need for a durable vascular access. 2 Catheter failure due to low BFR or occlusion will likely occur at some time during catheter use, with timing of such occurrence governed, to a certain extent, by the definition of failure. Catheter performance parameters proposed by the NKF Vascular Access Work Group in 2006 mostly relied on the dialyzer BFR achieved, factored by the prepump arterial limb pressure. 6 These guidelines were comparable to those issued by the Society of Interventional Radiology, 15 American College of Radiology, 16 and a joint committee of several surgical societies. 17 Dialyzer-delivered blood flow rates greater than 300 mL/min, factored by pre-pump pressure, were an absolute requirement. Notably, in Europe, blood flow rates less than 300 mL/min were used, conditioned on longer dialysis treatment durations. 18 Still, 300 mL/min was a conservative value in the adult practice, and the existing European guidelines referred, and still do, to blood flow of 300 mL/min available for hemodialysis as the parameter of adequacy in hemodialysis efficiency assessment, despite providing a definition for catheter dysfunction different from their American counterparts.7,8 The 2019 update of NKF KDOQI guidelines reassessed this definition in view of the accumulated evidence, removing the 300 mL/min requirement. 
11 Still, failure to maintain the prescribed extracorporeal blood flow required for adequate hemodialysis without lengthening the prescribed HD treatment is at the core of expectations from CVCs, as it is with any other type of vascular access. 11 Consequently, susceptibility of CVCs to thrombosis remains the principal concern associated with their use. Further, aside from the obvious impact on BFR, thrombosis has been acknowledged as a major predisposing factor in the development of CVC-related infections due to its role in promotion of adherence of bacterial and fungal organisms to catheters.19–21 This eventually results in catheter-related septicemia 22 and frequently leads to catheter removal, a point where safety and efficacy outcomes converge. Catheter patency considerations are at the heart of the CVC-related research and development. While the shaft design issues of the long-term catheters have been satisfactorily resolved with introduction of CVCs featuring a D-shaped lumen in the mid-body,23–27 medical device manufacturers still struggle with optimization of the tip of the catheter. Shape-wise, symmetry of the tip showed benefits in preventing recirculation compared to other configurations, such as staggered tips, especially when arterial and venous blood tubing are reversed. 28 Much controversy surrounds the need for side holes which, despite their purpose to support catheter patency, 12 may predispose to thrombus formation by facilitating quick removal of anticoagulant lock solutions by blood flow.24,29,30 In a computational fluid dynamics analysis, distal side holes present as a low-flow zone with increased clotting risk at the catheter tip. 31 Finally, creation of side holes is riddled with imperfections of the cut surfaces, to which thrombi have been shown to attach firmly and irretrievably.12,13 The GlidePath catheter is a double D catheter introduced into practice a decade ago. 
Admixture of the arterial and venous blood in this catheter is reduced through introduction of curved distal apertures on opposing sides of the catheter. The GlidePath’s tip symmetry is incomplete due to a guidewire aperture at the distal tip, as part of the venous lumen, and the offset side holes. Still, in computational analysis, percentages of blood moving out of the catheter from the distal lumen and flow rates through the side holes of Glidepath were similar to those of the original symmetrical catheter, the Palindrome (Medtronic, MN, USA). 32 In that computational analysis, the most prominent flow stagnation regions were detected around side holes and terminal apertures, where laminar flow from the catheter tip is interrupted by inflow from the side holes. 32 The Pristine hemodialysis catheter is a dual-lumen CVC with a double-D–shaped cross-section of the mid-shaft. Unlike GlidePath, it has a pre-formed short symmetric split-tip and is devoid of side holes. In our study, despite having a diameter slightly larger than that of the comparator, the Pristine hemodialysis catheter was largely superior to GlidePath in impeding clot formation, as evident from a significant reduction in the clot length and from the trend toward reduction in the clot weight. In addition, contrary to GlidePath, the Pristine hemodialysis catheter allowed for complete aspiration of the intraluminal clot. The ability to retrieve the clot without removing the catheter is an important prerequisite of durable patency of CVCs, and, as noted earlier, a significant contribution to the safety of its use, through reduction of catheter-related septicemia. Aside from the obvious benefit to the patient stemming from the durable vascular access patency, this quality is likely to contribute to reduction in the frequency and duration of use of antibiotics, which, in turn, will contribute to the effort of reduction of spread of antibiotic resistance.
Background: The issue of side holes in the tips of the tunneled cuffed central venous catheters is complex and has been subject to longstanding debate. This study sought to compare the clotting potential of the side-hole-free Pristine hemodialysis catheter with that of a symmetric catheter with side holes. Methods: Both jugular veins of five goats were catheterized with the two different catheters. The catheters were left in place for 4 weeks and were flushed and locked with heparin thrice weekly. The aspirated intraluminal clot length was assessed visually prior to each flushing. In addition, the size and weight of the clot were recorded upon catheter extraction at the end of the 4-week follow-up. Results: The mean intraluminal clot length observed during the entire study follow-up measured up to a mean of 0.66 cm in the GlidePath (95% CI, 0.14-1.18) and 0.19 cm in the Pristine hemodialysis catheter (95% CI, -0.33 to 0.71), the difference being statistically significant (p = 0.026). On average, 0.01 g and 0.07 g of intraluminal clot were retrieved from the Pristine and GlidePath catheters, respectively (p = 0.052). Conclusions: The Pristine hemodialysis catheter was largely superior to a standard side hole catheter in impeding clot formation, and, contrary to the side hole catheter, allowed for complete aspiration of the intraluminal clot.
null
null
4,237
275
9
[ "catheter", "catheters", "clot", "pristine", "ml", "glidepath", "lumen", "aspirated", "animals", "access" ]
[ "test", "test" ]
null
null
[CONTENT] Catheters | dialysis access | new devices | Techniques and procedures | dialysis | biomaterials [SUMMARY]
[CONTENT] Catheters | dialysis access | new devices | Techniques and procedures | dialysis | biomaterials [SUMMARY]
[CONTENT] Catheters | dialysis access | new devices | Techniques and procedures | dialysis | biomaterials [SUMMARY]
null
[CONTENT] Catheters | dialysis access | new devices | Techniques and procedures | dialysis | biomaterials [SUMMARY]
null
[CONTENT] Animals | Catheterization, Central Venous | Catheters, Indwelling | Central Venous Catheters | Models, Animal | Renal Dialysis [SUMMARY]
[CONTENT] Animals | Catheterization, Central Venous | Catheters, Indwelling | Central Venous Catheters | Models, Animal | Renal Dialysis [SUMMARY]
[CONTENT] Animals | Catheterization, Central Venous | Catheters, Indwelling | Central Venous Catheters | Models, Animal | Renal Dialysis [SUMMARY]
null
[CONTENT] Animals | Catheterization, Central Venous | Catheters, Indwelling | Central Venous Catheters | Models, Animal | Renal Dialysis [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] catheter | catheters | clot | pristine | ml | glidepath | lumen | aspirated | animals | access [SUMMARY]
[CONTENT] catheter | catheters | clot | pristine | ml | glidepath | lumen | aspirated | animals | access [SUMMARY]
[CONTENT] catheter | catheters | clot | pristine | ml | glidepath | lumen | aspirated | animals | access [SUMMARY]
null
[CONTENT] catheter | catheters | clot | pristine | ml | glidepath | lumen | aspirated | animals | access [SUMMARY]
null
[CONTENT] hemodialysis | cvc | cvcs | access | thrombosis | vascular access | bfr | vascular | view | catheter [SUMMARY]
[CONTENT] ml | mg | catheters | animals | kg | procedure | pristine | aspirated | glidepath | access [SUMMARY]
[CONTENT] clot | mean | intraluminal clot | glidepath | pristine | intraluminal | catheter | residual | intraluminal clot weight | total intraluminal clot [SUMMARY]
null
[CONTENT] catheter | clot | ml | catheters | aspirated | pristine | glidepath | animals | lumen | removed [SUMMARY]
null
[CONTENT] ||| Pristine [SUMMARY]
[CONTENT] five | two ||| 4 weeks | weekly ||| ||| the end of the | 4-week [SUMMARY]
[CONTENT] 0.66 cm | 95% | CI | 0.14-1.18 | 0.19 cm | Pristine | 95% | CI | 0.71 | 0.026 ||| 0.01 | 0.07 | Pristine | GlidePath | 0.052 [SUMMARY]
null
[CONTENT] ||| Pristine ||| five | two ||| 4 weeks | weekly ||| ||| the end of the | 4-week ||| 0.66 cm | 95% | CI | 0.14-1.18 | 0.19 cm | Pristine | 95% | CI | 0.71 | 0.026 ||| 0.01 | 0.07 | Pristine | GlidePath | 0.052 ||| Pristine [SUMMARY]
null
Effects of coffee on driving performance during prolonged simulated highway driving.
22315048
Coffee is often consumed to counteract driver sleepiness. There is limited information on the effects of a single low dose of coffee on prolonged highway driving in non-sleep deprived individuals.
RATIONALE
Non-sleep deprived healthy volunteers (n = 24) participated in a double-blind, placebo-controlled, crossover study. After 2 h of monotonous highway driving, subjects received caffeinated or decaffeinated coffee during a 15-min break before continuing driving for another 2 h. The primary outcome measure was the standard deviation of lateral position (SDLP), reflecting the weaving of the car. Secondary outcome measures were speed variability, subjective sleepiness, and subjective driving performance.
METHODS
The results showed that caffeinated coffee significantly reduced SDLP as compared to decaffeinated coffee, both in the first (p = 0.024) and second hour (p = 0.019) after the break. Similarly, the standard deviation of speed (p = 0.024; p = 0.001), mental effort (p = 0.003; p = 0.023), and subjective sleepiness (p = 0.001; p = 0.002) were reduced in both the first and second hour after consuming caffeinated coffee. Subjective driving quality was significantly improved in the first hour after consuming caffeinated coffee (p = 0.004).
RESULTS
These findings demonstrate a positive effect of one cup of caffeinated coffee on driving performance and subjective sleepiness during monotonous simulated highway driving.
CONCLUSIONS
[ "Automobile Driving", "Caffeine", "Central Nervous System Stimulants", "Coffee", "Cross-Over Studies", "Double-Blind Method", "Female", "Humans", "Male", "Time Factors", "Wakefulness", "Young Adult" ]
3382640
Introduction
Drowsy driving is an important cause of traffic accidents (Connor et al. 2002; Horne and Reyner 1995; Maycock 1996), and therefore, the development of effective countermeasures is essential. Consuming a cup of coffee is one of the most commonly used ways to combat driver sleepiness. An estimated 80% of the population consumes caffeine-containing beverages, often on a daily basis (Fredholm et al. 1999; Heckman et al. 2010). Caffeine (1,3,7-trimethylxanthine) is rapidly and completely absorbed in the body within approximately 45 min (Blanchard and Sawers 1983). It reaches its peak plasma concentration within 15 to 120 min after intake (Arnaud 1987), averaging around 30 min (O’Connell and Zurzola 1984; Blanchard and Sawers 1983). Its elimination half-life is 1.5 to 9.5 h (Arnaud 1987; Bonati et al. 1982). Although additional mechanisms of action are involved, it is now believed that caffeine’s stimulant effects are exerted by antagonizing adenosine, primarily by blocking the adenosine A1 and A2A receptors. Adenosine is considered to be a mediator of sleep (Dunwiddie and Masino 2001; Fredholm et al. 1999). A great number of studies have demonstrated effects of caffeine on mood and performance (Childs and De Wit 2006; Christopher et al. 2005; Haskell et al. 2005; Lieberman et al. 1987; Olson et al. 2010). However, the effects are complex and depend on the specific tasks examined, dosages, subjects, and test conditions (Lorist and Tops 2003). Overall, caffeine was found to be specifically effective in restoring performance to baseline levels when individuals are in a state of low arousal, such as during the dip in the circadian rhythm, after sleep restriction, and in fatigued subjects (Nehlig 2010; Smith 2002). Indeed, many people consume coffee with the purpose to refresh or stay awake, for example, when driving a car (Anund et al. 2008; Vanlaar et al. 2008).
Several driving studies showed that caffeine improves performance and decreases subjective sleepiness both in driving simulators (Biggs et al. 2007; Brice and Smith 2001; De Valck and Cluydts 2001; Horne and Reyner 1996; Regina et al. 1974; Reyner and Horne 1997, 2000) and on the road (Philip et al. 2006; Sagaspe et al. 2007). Most of these studies tested sleep-restricted subjects. In addition, relatively high dosages of caffeine (100–300 mg) were examined. These studies showed that relatively high dosages of caffeine had a positive effect on driving performance and reduced driver sleepiness. In real life however, it is more likely that a driver consumes only one cup of coffee (80 mg of caffeine) during a break, before continuing driving. Up to now, the effects on driving performance of lower dosages of caffeine, e.g., a regular cup of coffee, have not been examined. Therefore, the objective of this study was to examine the effects of one cup of coffee (80 mg caffeine) on prolonged simulated highway driving in non-sleep deprived individuals. Various traffic safety organizations advise drivers to take a 15-min break after 2 h of driving. The protocol used in the current study (2 h driving, a 15-min break with or without consuming caffeinated coffee, followed by 2 h of driving) was based on this advice.
null
null
Results
A total of 24 subjects (12 males and 12 females) completed the study. Their mean (SD) age was 23.2 (1.6) years; on average, they consumed 2.5 (0.7) caffeinated drinks per day, had a mean (SD) body mass index of 23.9 (2.7), had held a valid driver’s license for 58.8 (17.9) months, and drove 12,979 (SD, 10,785) km per year. All subjects reported normal sleep quality and duration on the nights before the test days, with no differences observed between the two test conditions. Results from the study are summarized in Table 1. There were no significant order effects (caffeinated-decaffeinated coffee versus decaffeinated-caffeinated coffee) or time-of-testing effects (a.m. versus p.m.).

Table 1 Effects of caffeinated coffee in comparison to decaf on simulated driving performance and subjective sleepiness. Mean (SD) is shown for each hour of driving, decaffeinated coffee vs caffeinated coffee; *p < 0.05 compared to decaf.
Standard deviation of lateral position (cm): hour 1, 21.43 (4.37) vs 22.11 (3.67); hour 2, 23.65 (5.90) vs 24.13 (4.76); hour 3, 22.92 (4.61) vs 21.08 (3.74)*; hour 4, 23.69 (4.72) vs 22.41 (4.37)*
Standard deviation of speed (km/h): hour 1, 0.85 (0.44) vs 0.88 (0.35); hour 2, 0.98 (0.51) vs 1.1 (0.61); hour 3, 1.03 (0.72) vs 0.78 (0.34)*; hour 4, 1.15 (0.77) vs 0.87 (0.56)*
Mean lateral position (cm): hour 1, -18.04 (12.71) vs -18.03 (10.47); hour 2, -19.24 (12.60) vs -18.98 (9.98); hour 3, -18.63 (12.31) vs -20.16 (11.05); hour 4, -18.17 (11.54) vs -18.93 (10.80)
Mean speed (km/h): hour 1, 95.40 (0.19) vs 95.42 (0.21); hour 2, 95.46 (0.16) vs 95.40 (0.26); hour 3, 95.44 (0.31) vs 95.54 (0.18); hour 4, 95.53 (0.15) vs 95.54 (0.25)
Driving quality: hour 1, 9.75 (3.66) vs 9.08 (4.10); hour 2, 9.01 (2.81) vs 8.48 (3.46); hour 3, 9.70 (3.89) vs 11.84 (2.82)*; hour 4, 9.23 (3.02) vs 10.60 (3.41)
Mental effort: hour 1, 5.33 (2.30) vs 5.70 (2.47); hour 2, 5.84 (2.76) vs 6.39 (2.50); hour 3, 5.89 (2.82) vs 4.50 (2.36)*; hour 4, 5.72 (2.38) vs 4.90 (2.93)*
Karolinska sleepiness scale: baseline, 3.25 (0.94) vs 3.33 (0.87); hour 1, 6.08 (1.67) vs 5.83 (2.16); hour 2, 6.17 (1.95) vs 6.29 (1.97); hour 3, 6.13 (2.11) vs 4.21 (1.47)*; hour 4, 5.79 (1.59) vs 4.54 (1.86)*
Driving quality ranges from 0 (“I drove exceptionally poorly”) to 20 (“I drove exceptionally well”). For mental effort, higher scores indicate higher effort; higher KSS scores indicate increased subjective sleepiness.

Driving test. Figure 1 shows the effect of caffeinated coffee consumption on driving performance. No significant differences in SDLP were observed before the break. However, both in the first (F(1,23) = 5.8; p = 0.024) and in the second hour (F(1,23) = 6.4; p = 0.019) after the break, caffeinated coffee significantly reduced SDLP (Fig. 1, standard deviation of lateral position (SDLP); asterisks indicate significant difference compared to placebo, p < 0.05). In line, caffeinated coffee significantly reduced SD speed in the third (F(1,23) = 5.8; p = 0.024) and fourth hour (F(1,23) = 13.0; p = 0.001) of driving (Fig. 2, standard deviation of speed (SDS); asterisks indicate significant difference compared to placebo, p < 0.05). No effects were found on mean speed or mean lateral position, confirming that subjects performed the test according to the instructions.

Subjective driving assessments. Compared to decaffeinated coffee, caffeinated coffee improved subjective driving quality in the third hour of driving (F(1,23) = 10.5; p = 0.004), but not in the fourth (F(1,23) = 2.6; p = n.s.). Subjects indicated that the mental effort needed to perform the test after caffeinated coffee was significantly reduced in the third (F(1,23) = 11.4; p = 0.003) and fourth hour of driving (F(1,23) = 5.9; p = 0.023). In addition, drivers rated their driving quality as significantly more considerate, responsible, and safer in the caffeinated coffee condition (see Table 1).

Subjective sleepiness. After the break with caffeinated coffee, drivers reported significantly lower sleepiness scores as compared to the break with decaffeinated coffee. This effect was significant both in the third (F(1,23) = 18.5; p < 0.001) and the fourth hour of driving (F(1,23) = 11.9; p = 0.002) (Fig. 3, Karolinska sleepiness scale; asterisks indicate significant difference compared to placebo, p < 0.05).
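The primary outcome above, SDLP, is simply the sample standard deviation of the car's lateral-position trace over a driving segment, and mean lateral position is the corresponding control variable. A minimal sketch; the `trace` values are hypothetical illustration data, not measurements from this study:

```python
from statistics import mean, stdev

def sdlp(lateral_positions_cm):
    """Standard deviation of lateral position (SDLP, cm): the 'weaving'
    measure used as the primary driving outcome."""
    return stdev(lateral_positions_cm)

# Hypothetical lateral-position samples (cm, relative to lane centre) -- for
# illustration only; the real traces are sampled by the driving simulator.
trace = [-18.0, -20.5, -15.2, -22.8, -17.1, -19.6, -14.9, -21.3]

print(round(sdlp(trace), 2))   # SDLP of the sketch trace
print(round(mean(trace), 2))   # mean lateral position (control variable)
```

In the study, overtaking maneuvers were edited out of the data before this statistic was computed; the sketch assumes an already cleaned trace.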
null
null
[ "Subjects", "Study design", "Treatments", "STISIM highway driving test", "Subjective assessments", "Statistical analysis", "Driving test", "Subjective driving assessments", "Subjective sleepiness" ]
[ "Twenty-four adult healthy volunteers (12 males and 12 females) were recruited by means of public advertisements at and around Utrecht University campus. Subjects were included if they were healthy volunteers, moderate caffeine drinkers (two to four cups a day), non-smokers, had a body mass index between 21 and 30, possessed a valid driver’s license for at least 3 years, and drove more than 5,000 km per year.\nSleep disturbances were assessed with the SLEEP-50 questionnaire (Spoormaker et al. 2005), and excessive daytime sleepiness was examined using the Epworth Sleepiness Scale (ESS; Johns 1991), filled out by participants on the screening day.\nBefore the start of each test day, urine samples were collected to test for drugs of abuse including amphetamines (including MDMA), barbiturates, cannabinoids, benzodiazepines, cocaine, and opiates (Instant-View, Alfa Scientific Designs Inc.) and pregnancy in female subjects (β-HCG test). To test for the presence of alcohol, the Dräger Alcotest 7410 Breath Analyzer was used. From 24 h before the start of the test day until completion of the test day, alcohol consumption was not permitted. Caffeinated beverages were not allowed from awakening on test days until the end of the tests.", "Participants were screened and familiarized with the test procedures during a training day. When meeting all inclusion and passing all exclusion criteria, subjects performed a practice session in the STISIM driving simulator and completed the Simulator Sickness Questionnaire (Kennedy et al. 1993) to identify possible simulator sickness. Included subjects were randomly assigned to a treatment order comprising decaffeinated coffee and caffeinated coffee (80 mg) administered during a break.\nUpon arrival, possible use of drugs or alcohol, pregnancy, illness, and medication were checked. In addition, quality and duration of sleep was assessed using the 14-item Groningen Sleep Quality Scale (Mulder-Hajonides van der Meulen et al. 1980). 
When all criteria were met, a 120-min drive in the STISIM driving simulator was conducted. Thereafter, a 15-min break was scheduled in which subjects received the double-blind treatment. After the break, another 120-min driving session was performed. Every hour, subjective assessments of driving quality, driving style, mental effort to perform the test, and sleepiness were conducted. Test sessions were scheduled at the same time for each subject, either in the morning (0800–1300 hours) or in the afternoon (1300–1700 hours).", "This study aimed to mimic the effect of a cup of coffee drivers consume when having a break along the highway. Treatments were 2.68 g of Nescafé Gold® instant coffee containing 80 mg caffeine or 2.68 g of Nescafé Gold® decaffeinated coffee dissolved in 180 ml boiled water. To confirm that each cup of coffee contained 80 mg of caffeine, the amount of caffeine in the instant coffee was determined with high-performance liquid chromatography (HPLC; Shimadzu LC-10AT VP equipped with UV–Vis detector). The column was a reversed-phase Select B column Lichrocart HPLC C18, 5 μm, length, 0.125 m, Ø = 4.6 mm. All of the procedures were carried out isocratic. The separation was done at room temperature. Caffeine and the spiked matrices were separated with a mobile phase of 20% MeOH and 10 mM HClO4, at a flow rate of 0.5 mL/min. The injection volume was 5 μL, and the detection was carried out at 273 nm. The mean (SD) amount of caffeine per gram Nescafé Goud instant coffee samples (n = 10) was 29.79 (0.656) mg/g. The mean amount of caffeine in decaffeinated coffee was 0.79 mg/g. The accuracy of determinations was 98.1% (SD, 0.56). Because both the precision and the accuracy met up to the requirement demands, all of the results of this HPLC determination can be concluded with certainty.\nTreatments were administered double-blind, and a nose clip was worn to enhance treatment blinding. 
Drinks were consumed within 5 min, starting from 5 min after onset of the break.", "Driving tests were performed in a fixed-base driving simulator employing STISIM Drive™ (version M300, Systems Technology Inc., Hawthorne, CA, USA). This is an interactive system in which the roadway scenery is projected on a screen (2.10 × 1.58 m), 1.90 m in front of the center of the steering wheel of the car unit (Mets et al. 2011a). The 100-km highway driving test scenarios were developed (EyeOctopus BV) in accordance with Dutch traffic situations, including a two-lane highway in each direction and a monotonous environment with trees, occasional hills and bridges, and other traffic. The duration of each 100-km scenario is approximately 60 min. Two scenarios (200 km) were conducted before a 15-min break, and two other scenarios (200 km) thereafter (Mets et al. 2011a).\nSubjects were instructed to drive with a steady lateral position within the right, slower, traffic lane with a constant speed of 95 km/h. Overtaking slower-moving vehicles was allowed. During blinded editing, these maneuvers were removed from the data, before statistical analysis of the “clean” data. The primary outcome variable was the standard deviation of lateral position (SDLP, centimeters), expressing the weaving of the car (Verster and Roth 2011). The standard deviation of speed (SDS, kilometers per hour) was the secondary outcome measure. Mean speed (MS, kilometers per hour) and mean lateral position (MLP, cm) were control variables.", "After each hour of driving, questionnaires were administered on subjective sleepiness and driving performance. Subjective sleepiness was measured by means of the Karolinska Sleepiness Scale (KSS), ranging from 1 (very alert) to 9 (very sleepy, fighting sleep) (Åkerstedt and Gillberg 1990).\nDriving task-related questionnaires comprised mental effort to perform the driving test (Meijman et al. 
1986; Zijlstra and Van Doorn 1985), subjective driving quality, and driving style (McCormick et al. 1987). Completing the questionnaires took approximately 2 min, after which, the driving task was immediately resumed.", "Statistical analyses were performed with SPSS, version 19. For each variable, mean (SD) was computed for each subsequent hour. Data of the first 2 h were compared, to confirm that no significant differences between the treatment days were present before the break and treatment administration. To determine whether caffeinated coffee has an effect on driving performance, data from the third and fourth hour were compared using a general linear model for repeated measures (two-tailed, p ≤ 0.05).", "Figure 1 shows the effect of caffeinated coffee consumption on driving performance. No significant differences in SDLP were observed before the break. However, both in the first (F\n(1,23) = 5.8; p = 0.024) and in the second hour (F\n(1,23) = 6.4; p = 0.019) after the break, caffeinated coffee significantly reduced SDLP.Fig. 1Standard deviation of lateral position (SDLP). Asterisks indicate significant difference compared to placebo (p < 0.05)\n\nStandard deviation of lateral position (SDLP). Asterisks indicate significant difference compared to placebo (p < 0.05)\nIn line, caffeinated coffee significantly reduced SD speed in the third (F\n(1,23) = 5.8; p = 0.024) and fourth hour (F\n(1,23) = 13.0; p = 0.001) of driving (see Fig. 2).Fig. 2Standard deviation of speed (SDS). Asterisks indicate significant difference compared to placebo (p < 0.05)\n\nStandard deviation of speed (SDS). 
Asterisks indicate significant difference compared to placebo (p < 0.05)\nNo effects were found on mean speed or mean lateral position, confirming that subjects performed the test according to the instructions.", "Compared to decaffeinated coffee, caffeinated coffee improved subjective driving quality in the third hour of driving (F\n(1,23) = 10.5; p = 0.004), but not in the fourth (F\n(1,23) = 2.6; p = n.s.). Subjects indicated that the mental effort needed to perform the test after caffeinated coffee was significantly reduced in the third (F\n(1,23) = 11.4; p = 0.003) and fourth hour of driving (F\n(1,23) = 5.9; p = 0.023). In addition, drivers rated their driving quality as significantly more considerate, responsible, and safer in the caffeinated coffee condition (see Table 1).", "After the break with caffeinated coffee, drivers reported significantly lower sleepiness scores as compared to the break with decaffeinated coffee. This effect was significant both in the third (F\n(1,23) = 18.5; p < 0.001) and the fourth hour of driving (F\n(1,23) = 11.9; p = 0.002) (see Fig. 3).Fig. 3Karolinska sleepiness scale. Asterisks indicate significant difference compared to placebo (p < 0.05)\n\nKarolinska sleepiness scale. Asterisks indicate significant difference compared to placebo (p < 0.05)" ]
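As a quick arithmetic check on the treatments described above: a 2.68 g cup at the assayed 29.79 mg caffeine per gram works out to the nominal 80 mg dose, and the decaffeinated cup to roughly 2 mg of residual caffeine. A small sketch using only the values reported in the methods:

```python
# Dose check from the reported HPLC assay (values from the methods section).
COFFEE_G = 2.68          # grams of instant coffee per cup
CAFF_MG_PER_G = 29.79    # measured caffeine content, caffeinated coffee
DECAF_MG_PER_G = 0.79    # measured caffeine content, decaffeinated coffee

caffeinated_dose_mg = COFFEE_G * CAFF_MG_PER_G  # ~79.8 mg, i.e. nominal 80 mg
decaf_dose_mg = COFFEE_G * DECAF_MG_PER_G       # ~2.1 mg residual caffeine
print(round(caffeinated_dose_mg, 1), round(decaf_dose_mg, 1))
```

The residual caffeine in the decaf arm (~2 mg per cup) is pharmacologically negligible next to the 80 mg active dose, which is what makes decaffeinated coffee a credible placebo here.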
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Subjects", "Study design", "Treatments", "STISIM highway driving test", "Subjective assessments", "Statistical analysis", "Results", "Driving test", "Subjective driving assessments", "Subjective sleepiness", "Discussion" ]
[ "Drowsy driving is an important cause of traffic accidents (Connor et al. 2002; Horne and Reyner 1995; Maycock 1996), and therefore, the development of effective countermeasures is essential. Consuming a cup of coffee is one of the most commonly used ways to combat driver sleepiness. An estimated 80% of the population consumes caffeine-containing beverages, often on a daily basis (Fredholm et al. 1999; Heckman et al. 2010). Caffeine (1,3,7-trimethylxanthine) is rapidly and completely absorbed in the body within approximately 45 min (Blanchard and Sawers 1983). It reaches its peak plasma concentration within 15 to 120 min after intake (Arnaud 1987), averaging around 30 min (O’Connell and Zurzola 1984; Blanchard and Sawers 1983). Its elimination half life is 1.5 to 9.5 h (Arnaud 1987; Bonati et al. 1982). Although additional mechanisms of action are involved, it is now believed that caffeine’s stimulant effects are exerted by antagonizing adenosine, primarily by blocking the adenosine A1 and A2A receptors. Adenosine is considered to be a mediator of sleep (Dunwiddie and Mansino 2001; Fredholm et al. 1999).\nA great number of studies have demonstrated effects of caffeine on mood and performance (Childs and De Wit 2006; Christopher et al. 2005; Haskell et al. 2005; Lieberman et al. 1987; Olson et al. 2010). However, the effects are complex and depend on the specific tasks examined, dosages, subjects, and test conditions (Lorist and Tops 2003). Overall, caffeine was found to be specifically effective in restoring performance to baseline levels when individuals are in a state of low arousal, such as seen during the dip in the circadian rhythm, after sleep restriction, and in fatigued subjects (Nehlig 2010; Smith 2002). Indeed, many people consume coffee with the purpose to refresh or stay awake, for example, when driving a car (Anund et al. 2008; Vanlaar et al. 
2008).\nSeveral driving studies showed that caffeine improves performance and decreases subjective sleepiness both in driving simulators (Biggs et al. 2007; Brice and Smith 2001; De Valck and Cluydts 2001; Horne and Reyner 1996; Regina et al. 1974; Reyner and Horne 1997, 2000) and on the road (Philip et al. 2006; Sagaspe et al. 2007). Most of these studies tested sleep-restricted subjects. In addition, relatively high dosages of caffeine (100–300 mg) were examined. These studies showed that relatively high dosages of caffeine had a positive effect on driving performance and reduced driver sleepiness. In real life however, it is more likely that a driver consumes only one cup of coffee (80 mg of caffeine) during a break, before continuing driving. Up to now, the effects on driving performance of lower dosages of caffeine, e.g., a regular cup of coffee, have not been examined.\nTherefore, the objective of this study was to examine the effects of one cup of coffee (80 mg caffeine) on prolonged simulated highway driving in non-sleep deprived individuals. Various traffic safety organizations advise drivers to take a 15-min break after 2 h of driving. The protocol used in the current study (2 h driving, a 15-min break with or without consuming caffeinated coffee, followed by 2 h of driving) was based on this advice.", "This study was a double-blind, randomized, placebo-controlled, cross-over study. The study was conducted according to the ICH Guidelines for “Good Clinical Practice,” and the Declaration of Helsinki and its latest amendments. Written informed consent was obtained from the participants before taking part in the study. The study was approved by the Institutional Review Board; no medical ethical approval was required to conduct the study.\n Subjects Twenty-four adult healthy volunteers (12 males and 12 females) were recruited by means of public advertisements at and around Utrecht University campus. 
Subjects were included if they were healthy volunteers, moderate caffeine drinkers (two to four cups a day), non-smokers, had a body mass index between 21 and 30, possessed a valid driver’s license for at least 3 years, and drove more than 5,000 km per year.\nSleep disturbances were assessed with the SLEEP-50 questionnaire (Spoormaker et al. 2005), and excessive daytime sleepiness was examined using the Epworth Sleepiness Scale (ESS; Johns 1991), filled out by participants on the screening day.\nBefore the start of each test day, urine samples were collected to test for drugs of abuse including amphetamines (including MDMA), barbiturates, cannabinoids, benzodiazepines, cocaine, and opiates (Instant-View, Alfa Scientific Designs Inc.) and pregnancy in female subjects (β-HCG test). To test for the presence of alcohol, the Dräger Alcotest 7410 Breath Analyzer was used. From 24 h before the start of the test day until completion of the test day, alcohol consumption was not permitted. Caffeinated beverages were not allowed from awakening on test days until the end of the tests.\nTwenty-four adult healthy volunteers (12 males and 12 females) were recruited by means of public advertisements at and around Utrecht University campus. Subjects were included if they were healthy volunteers, moderate caffeine drinkers (two to four cups a day), non-smokers, had a body mass index between 21 and 30, possessed a valid driver’s license for at least 3 years, and drove more than 5,000 km per year.\nSleep disturbances were assessed with the SLEEP-50 questionnaire (Spoormaker et al. 2005), and excessive daytime sleepiness was examined using the Epworth Sleepiness Scale (ESS; Johns 1991), filled out by participants on the screening day.\nBefore the start of each test day, urine samples were collected to test for drugs of abuse including amphetamines (including MDMA), barbiturates, cannabinoids, benzodiazepines, cocaine, and opiates (Instant-View, Alfa Scientific Designs Inc.) 
and pregnancy in female subjects (β-HCG test). To test for the presence of alcohol, the Dräger Alcotest 7410 Breath Analyzer was used. From 24 h before the start of the test day until completion of the test day, alcohol consumption was not permitted. Caffeinated beverages were not allowed from awakening on test days until the end of the tests.\n Study design Participants were screened and familiarized with the test procedures during a training day. When meeting all inclusion and passing all exclusion criteria, subjects performed a practice session in the STISIM driving simulator and completed the Simulator Sickness Questionnaire (Kennedy et al. 1993) to identify possible simulator sickness. Included subjects were randomly assigned to a treatment order comprising decaffeinated coffee and caffeinated coffee (80 mg) administered during a break.\nUpon arrival, possible use of drugs or alcohol, pregnancy, illness, and medication were checked. In addition, quality and duration of sleep was assessed using the 14-item Groningen Sleep Quality Scale (Mulder-Hajonides van der Meulen et al. 1980). When all criteria were met, a 120-min drive in the STISIM driving simulator was conducted. Thereafter, a 15-min break was scheduled in which subjects received the double-blind treatment. After the break, another 120-min driving session was performed. Every hour, subjective assessments of driving quality, driving style, mental effort to perform the test, and sleepiness were conducted. Test sessions were scheduled at the same time for each subject, either in the morning (0800–1300 hours) or in the afternoon (1300–1700 hours).\nParticipants were screened and familiarized with the test procedures during a training day. When meeting all inclusion and passing all exclusion criteria, subjects performed a practice session in the STISIM driving simulator and completed the Simulator Sickness Questionnaire (Kennedy et al. 1993) to identify possible simulator sickness. 
Included subjects were randomly assigned to a treatment order comprising decaffeinated coffee and caffeinated coffee (80 mg) administered during a break.\nUpon arrival, possible use of drugs or alcohol, pregnancy, illness, and medication were checked. In addition, quality and duration of sleep was assessed using the 14-item Groningen Sleep Quality Scale (Mulder-Hajonides van der Meulen et al. 1980). When all criteria were met, a 120-min drive in the STISIM driving simulator was conducted. Thereafter, a 15-min break was scheduled in which subjects received the double-blind treatment. After the break, another 120-min driving session was performed. Every hour, subjective assessments of driving quality, driving style, mental effort to perform the test, and sleepiness were conducted. Test sessions were scheduled at the same time for each subject, either in the morning (0800–1300 hours) or in the afternoon (1300–1700 hours).\n Treatments This study aimed to mimic the effect of a cup of coffee drivers consume when having a break along the highway. Treatments were 2.68 g of Nescafé Gold® instant coffee containing 80 mg caffeine or 2.68 g of Nescafé Gold® decaffeinated coffee dissolved in 180 ml boiled water. To confirm that each cup of coffee contained 80 mg of caffeine, the amount of caffeine in the instant coffee was determined with high-performance liquid chromatography (HPLC; Shimadzu LC-10AT VP equipped with UV–Vis detector). The column was a reversed-phase Select B column Lichrocart HPLC C18, 5 μm, length, 0.125 m, Ø = 4.6 mm. All of the procedures were carried out isocratic. The separation was done at room temperature. Caffeine and the spiked matrices were separated with a mobile phase of 20% MeOH and 10 mM HClO4, at a flow rate of 0.5 mL/min. The injection volume was 5 μL, and the detection was carried out at 273 nm. The mean (SD) amount of caffeine per gram Nescafé Goud instant coffee samples (n = 10) was 29.79 (0.656) mg/g. 
The mean amount of caffeine in the decaffeinated coffee was 0.79 mg/g. The accuracy of the determinations was 98.1% (SD, 0.56). Because both the precision and the accuracy met the predefined requirements, the results of the HPLC determination can be considered reliable.

Treatments were administered double-blind, and a nose clip was worn to enhance treatment blinding. Drinks were consumed within 5 min, starting from 5 min after onset of the break.

STISIM highway driving test

Driving tests were performed in a fixed-base driving simulator employing STISIM Drive™ (version M300, Systems Technology Inc., Hawthorne, CA, USA). This is an interactive system in which the roadway scenery is projected on a screen (2.10 × 1.58 m), 1.90 m in front of the center of the steering wheel of the car unit (Mets et al. 2011a). The 100-km highway driving test scenarios were developed (EyeOctopus BV) in accordance with Dutch traffic situations, including a two-lane highway in each direction and a monotonous environment with trees, occasional hills and bridges, and other traffic. The duration of each 100-km scenario is approximately 60 min. Two scenarios (200 km) were conducted before a 15-min break, and two other scenarios (200 km) thereafter (Mets et al. 2011a).

Subjects were instructed to drive with a steady lateral position within the right (slower) traffic lane at a constant speed of 95 km/h. Overtaking slower-moving vehicles was allowed. During blinded editing, these maneuvers were removed from the data before statistical analysis of the “clean” data. The primary outcome variable was the standard deviation of lateral position (SDLP, cm), expressing the weaving of the car (Verster and Roth 2011). The standard deviation of speed (SDS, km/h) was the secondary outcome measure. Mean speed (MS, km/h) and mean lateral position (MLP, cm) were control variables.
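The outcome measures are plain standard deviations over the sampled driving signals. A minimal sketch of the computation (illustrative code, not the STISIM analysis pipeline; the sample values are invented):

```python
from statistics import mean, stdev

def sdlp_cm(lateral_positions_cm):
    """Standard deviation of lateral position (SDLP): higher = more weaving."""
    return stdev(lateral_positions_cm)

def sds_kmh(speeds_kmh):
    """Standard deviation of speed (SDS): higher = poorer speed maintenance."""
    return stdev(speeds_kmh)

# Invented example samples of lane position (cm from lane centre) and speed (km/h)
lateral = [-20.0, -15.0, -25.0, -18.0, -22.0]
speed = [95.2, 95.6, 94.9, 95.3, 95.0]

print(f"MLP = {mean(lateral):.2f} cm, SDLP = {sdlp_cm(lateral):.2f} cm")
print(f"MS = {mean(speed):.2f} km/h, SDS = {sds_kmh(speed):.2f} km/h")
```

The control variables (MS, MLP) are the corresponding means of the same signals, which is why unchanged means alongside reduced SDs indicate steadier, not merely slower, driving.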
Subjective assessments

After each hour of driving, questionnaires were administered on subjective sleepiness and driving performance. Subjective sleepiness was measured by means of the Karolinska Sleepiness Scale (KSS), ranging from 1 (very alert) to 9 (very sleepy, fighting sleep) (Åkerstedt and Gillberg 1990). Driving task-related questionnaires comprised mental effort to perform the driving test (Meijman et al. 1986; Zijlstra and Van Doorn 1985), subjective driving quality, and driving style (McCormick et al. 1987). Completing the questionnaires took approximately 2 min, after which the driving task was immediately resumed.

Statistical analysis

Statistical analyses were performed with SPSS, version 19. For each variable, the mean (SD) was computed for each subsequent hour. Data of the first 2 h were compared to confirm that no significant differences between the treatment days were present before the break and treatment administration. To determine whether caffeinated coffee has an effect on driving performance, data from the third and fourth hour were compared using a general linear model for repeated measures (two-tailed, p ≤ 0.05).

Subjects

Twenty-four adult healthy volunteers (12 males and 12 females) were recruited by means of public advertisements at and around Utrecht University campus.
Subjects were included if they were healthy volunteers, moderate caffeine drinkers (two to four cups a day), non-smokers, had a body mass index between 21 and 30, possessed a valid driver’s license for at least 3 years, and drove more than 5,000 km per year. Sleep disturbances were assessed with the SLEEP-50 questionnaire (Spoormaker et al. 2005), and excessive daytime sleepiness was examined using the Epworth Sleepiness Scale (ESS; Johns 1991), filled out by participants on the screening day.

Results

A total of 24 subjects (12 males and 12 females) completed the study. Their mean (SD) age was 23.2 (1.6) years; on average, they consumed 2.5 (0.7) caffeinated drinks per day, had a mean (SD) body mass index of 23.9 (2.7), had held a valid driver’s license for 58.8 (17.9) months, and drove 12,979 (SD, 10,785) km per year. All subjects reported normal sleep quality and duration on the nights before the test days, with no differences observed between the two test conditions. Results from the study are summarized in Table 1. There were no significant order effects (caffeinated–decaffeinated coffee versus decaffeinated–caffeinated coffee) or time-of-testing effects (a.m.
versus p.m.).

Table 1. Effects of caffeinated coffee in comparison to decaffeinated coffee on simulated driving performance and subjective sleepiness

Measure / hour                                Decaffeinated coffee   Caffeinated coffee
Driving test results
Standard deviation of lateral position (cm)
  1                                           21.43 (4.37)           22.11 (3.67)
  2                                           23.65 (5.90)           24.13 (4.76)
  3                                           22.92 (4.61)           21.08 (3.74)*
  4                                           23.69 (4.72)           22.41 (4.37)*
Standard deviation of speed (km/h)
  1                                           0.85 (0.44)            0.88 (0.35)
  2                                           0.98 (0.51)            1.1 (0.61)
  3                                           1.03 (0.72)            0.78 (0.34)*
  4                                           1.15 (0.77)            0.87 (0.56)*
Mean lateral position (cm)
  1                                           −18.04 (12.71)         −18.03 (10.47)
  2                                           −19.24 (12.60)         −18.98 (9.98)
  3                                           −18.63 (12.31)         −20.16 (11.05)
  4                                           −18.17 (11.54)         −18.93 (10.80)
Mean speed (km/h)
  1                                           95.40 (0.19)           95.42 (0.21)
  2                                           95.46 (0.16)           95.40 (0.26)
  3                                           95.44 (0.31)           95.54 (0.18)
  4                                           95.53 (0.15)           95.54 (0.25)
Subjective driving assessments
Driving quality
  1                                           9.75 (3.66)            9.08 (4.10)
  2                                           9.01 (2.81)            8.48 (3.46)
  3                                           9.70 (3.89)            11.84 (2.82)*
  4                                           9.23 (3.02)            10.60 (3.41)
Mental effort
  1                                           5.33 (2.30)            5.70 (2.47)
  2                                           5.84 (2.76)            6.39 (2.50)
  3                                           5.89 (2.82)            4.50 (2.36)*
  4                                           5.72 (2.38)            4.90 (2.93)*
Subjective sleepiness scores
Karolinska sleepiness scale
  Baseline                                    3.25 (0.94)            3.33 (0.87)
  1                                           6.08 (1.67)            5.83 (2.16)
  2                                           6.17 (1.95)            6.29 (1.97)
  3                                           6.13 (2.11)            4.21 (1.47)*
  4                                           5.79 (1.59)            4.54 (1.86)*

Mean (SD) is shown for each parameter. Driving quality ranges from 0 (“I drove exceptionally poorly”) to 20 (“I drove exceptionally well”). For mental effort, higher scores indicate higher effort; higher KSS scores indicate increased subjective sleepiness.
*p < 0.05 compared to decaffeinated coffee

Driving test

Figure 1 shows the effect of caffeinated coffee consumption on driving performance. No significant differences in SDLP were observed before the break.
However, both in the first (F(1,23) = 5.8; p = 0.024) and in the second hour (F(1,23) = 6.4; p = 0.019) after the break, caffeinated coffee significantly reduced SDLP.

Fig. 1 Standard deviation of lateral position (SDLP). Asterisks indicate significant difference compared to placebo (p < 0.05)

In line with this, caffeinated coffee significantly reduced SDS in the third (F(1,23) = 5.8; p = 0.024) and fourth hour (F(1,23) = 13.0; p = 0.001) of driving (see Fig. 2).

Fig. 2 Standard deviation of speed (SDS). Asterisks indicate significant difference compared to placebo (p < 0.05)

No effects were found on mean speed or mean lateral position, confirming that subjects performed the test according to the instructions.

Subjective driving assessments

Compared to decaffeinated coffee, caffeinated coffee improved subjective driving quality in the third hour of driving (F(1,23) = 10.5; p = 0.004), but not in the fourth (F(1,23) = 2.6; p = n.s.). Subjects indicated that the mental effort needed to perform the test after caffeinated coffee was significantly reduced in the third (F(1,23) = 11.4; p = 0.003) and fourth hour of driving (F(1,23) = 5.9; p = 0.023). In addition, drivers rated their driving as significantly more considerate, responsible, and safer in the caffeinated coffee condition (see Table 1).

Subjective sleepiness

After the break with caffeinated coffee, drivers reported significantly lower sleepiness scores than after the break with decaffeinated coffee. This effect was significant both in the third (F(1,23) = 18.5; p < 0.001) and the fourth hour of driving (F(1,23) = 11.9; p = 0.002) (see Fig. 3).

Fig. 3 Karolinska Sleepiness Scale. Asterisks indicate significant difference compared to placebo (p < 0.05)
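With two within-subject conditions, a repeated-measures F statistic with one numerator degree of freedom equals a squared paired t statistic (F = t²). As an illustrative check (a sketch added here, not the original SPSS analysis), the two-tailed p-value for a reported F(1, 23) can be recovered by numerically integrating the t(23) tail:

```python
from math import gamma, pi, sqrt

def t_pdf(x, df):
    """Student t probability density with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def p_from_F(F, df_error, steps=100000, upper=200.0):
    """Two-tailed p for F(1, df_error), via the equivalence F = t**2.

    Integrates the t density over [sqrt(F), upper] with the trapezoidal rule;
    the mass beyond `upper` is negligible for these degrees of freedom.
    """
    t = sqrt(F)
    h = (upper - t) / steps
    tail = sum(t_pdf(t + i * h, df_error) for i in range(steps + 1)) * h
    tail -= 0.5 * h * (t_pdf(t, df_error) + t_pdf(upper, df_error))
    return 2 * tail

# F(1,23) = 5.8 was reported with p = 0.024 for SDLP after the break
print(round(p_from_F(5.8, 23), 3))
```

This reproduces p-values of the order reported above for n = 24 participants (error df = 23).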
Discussion

This study demonstrates that one cup of caffeinated coffee (80 mg caffeine) significantly improves driving performance and reduces driver sleepiness.

Both lane keeping (SDLP) and speed maintenance were improved for up to 2 h after caffeine consumption. The effect of caffeinated coffee on SDLP, compared to placebo, is comparable in magnitude to the change observed at a blood alcohol concentration of 0.05% (Mets et al. 2011b), i.e., the legal limit for driving in many countries, but in the opposite direction. Hence, the improvement by caffeinated coffee can be regarded as clinically relevant.

The magnitude of driving improvement observed after coffee consumption was comparable to the improvement seen in a driving study with Red Bull® Energy Drink using the same design and driving test (Mets et al. 2011a), and to on-road driving studies showing improvement after administration of methylphenidate to patients with attention-deficit hyperactivity disorder (Verster et al.
2008).

The improvement in objective performance was accompanied by improvement in subjective assessments of sleepiness and driving performance. An average decrease of almost 2 points (out of 7) on the KSS scale was observed after the intake of caffeinated coffee as compared to decaffeinated coffee. The average KSS score was 6 (“some signs of sleepiness”) in the decaffeinated coffee condition compared to 4 (“rather alert”) in the caffeinated coffee condition. These findings are in agreement with the pharmacokinetic profile of caffeine (Tmax ≈ 30 min; T1/2 > 2 h), as well as with the known actions of caffeine as a sleepiness countermeasure with the ability to restore performance to baseline.

Up to now, higher dosages of caffeine (150–250 mg, comparable to two to three cups of regular coffee) have been shown to be effective in counteracting sleep restriction (<5 h spent in bed) when driving in the early morning (Reyner and Horne 2000) and in the early afternoon (Horne and Reyner 1996; Reyner and Horne 1997). A moderate caffeine dosage (100 mg) decreased drifting out of lane and reduced subjective sleepiness in drivers who had slept for no more than 4 h (Biggs et al. 2007). Furthermore, caffeine (3 mg/kg, approximately 225 mg in a 75-kg adult) improved steering accuracy in non-fatigued volunteers (Brice and Smith 2001). Slow-release caffeine capsules (300 mg) had similar effects (De Valck and Cluydts 2001). Interestingly, caffeine decreased lane drifting both in individuals who had spent 4.5 h in bed and in those who had spent 7.5 h in bed, while effects on speed maintenance, fatigue, and sleepiness were only observed after 4.5 h spent in bed (De Valck and Cluydts 2001). Two on-the-road driving studies on a public highway in France confirmed these findings and showed that relatively high dosages of caffeine (200 mg) improved nighttime driving both in young and in middle-aged drivers (Philip et al. 2006; Sagaspe et al.
2007).

The current results are in agreement with these studies, but further show that the lower caffeine content of one regular cup of coffee also significantly improves driving performance and reduces driver sleepiness. In addition, whereas previous studies found effects on lane keeping, the current study shows that speed maintenance is also affected, indicating more pronounced effects on vehicle control. The importance of this finding is evident, since it can be assumed that, in order to refresh, most drivers consume only one cup of coffee during a break, instead of three or four. Driving simulator research has several limitations, which are discussed elsewhere (e.g., Mets et al. 2011b; Verster and Roth 2011, 2012). Therefore, it is important to replicate and confirm our findings in an on-the-road driving study. Furthermore, subjects who were tested in the afternoon may have experienced an additional effect of the afternoon dip in the circadian rhythm. For this reason, each subject had his or her test days at the same time of day. However, no significant differences were found between subjects tested in the morning and those tested in the afternoon. Further studies could examine whether a low dose of caffeine has similar effects on (professional) drivers who are sleep-restricted or have shifted their day–night rhythm, since the existing studies have only been performed with higher dosages of caffeine.

In conclusion, the present study demonstrates that one cup of caffeinated coffee (80 mg caffeine) has a positive effect on continuous highway driving in non-sleep restricted, healthy volunteers.
Keywords: Caffeine; Automobile driving; Fatigue; Sleepiness
Introduction

Drowsy driving is an important cause of traffic accidents (Connor et al. 2002; Horne and Reyner 1995; Maycock 1996), and therefore the development of effective countermeasures is essential. Consuming a cup of coffee is one of the most commonly used ways to combat driver sleepiness. An estimated 80% of the population consumes caffeine-containing beverages, often on a daily basis (Fredholm et al. 1999; Heckman et al. 2010). Caffeine (1,3,7-trimethylxanthine) is rapidly and completely absorbed in the body within approximately 45 min (Blanchard and Sawers 1983). It reaches its peak plasma concentration within 15 to 120 min after intake (Arnaud 1987), averaging around 30 min (O’Connell and Zurzola 1984; Blanchard and Sawers 1983). Its elimination half-life is 1.5 to 9.5 h (Arnaud 1987; Bonati et al. 1982). Although additional mechanisms of action are involved, it is now believed that caffeine’s stimulant effects are exerted by antagonizing adenosine, primarily by blocking the adenosine A1 and A2A receptors. Adenosine is considered to be a mediator of sleep (Dunwiddie and Mansino 2001; Fredholm et al. 1999). A great number of studies have demonstrated effects of caffeine on mood and performance (Childs and De Wit 2006; Christopher et al. 2005; Haskell et al. 2005; Lieberman et al. 1987; Olson et al. 2010). However, the effects are complex and depend on the specific tasks examined, dosages, subjects, and test conditions (Lorist and Tops 2003). Overall, caffeine was found to be specifically effective in restoring performance to baseline levels when individuals are in a state of low arousal, such as during the dip in the circadian rhythm, after sleep restriction, and in fatigued subjects (Nehlig 2010; Smith 2002). Indeed, many people consume coffee with the purpose of refreshing or staying awake, for example when driving a car (Anund et al. 2008; Vanlaar et al. 2008). Several driving studies showed that caffeine improves performance and decreases subjective sleepiness both in driving simulators (Biggs et al. 2007; Brice and Smith 2001; De Valck and Cluydts 2001; Horne and Reyner 1996; Regina et al. 1974; Reyner and Horne 1997, 2000) and on the road (Philip et al. 2006; Sagaspe et al. 2007). Most of these studies tested sleep-restricted subjects, and relatively high dosages of caffeine (100–300 mg) were examined. These studies showed that relatively high dosages of caffeine had a positive effect on driving performance and reduced driver sleepiness. In real life, however, it is more likely that a driver consumes only one cup of coffee (80 mg of caffeine) during a break before continuing driving. Up to now, the effects on driving performance of lower dosages of caffeine, e.g., a regular cup of coffee, have not been examined. Therefore, the objective of this study was to examine the effects of one cup of coffee (80 mg caffeine) on prolonged simulated highway driving in non-sleep deprived individuals. Various traffic safety organizations advise drivers to take a 15-min break after 2 h of driving. The protocol used in the current study (2 h of driving, a 15-min break with or without consuming caffeinated coffee, followed by 2 h of driving) was based on this advice.

Materials and methods

This study was a double-blind, randomized, placebo-controlled, cross-over study, conducted according to the ICH Guidelines for “Good Clinical Practice” and the Declaration of Helsinki and its latest amendments. Written informed consent was obtained from the participants before taking part in the study. The study was approved by the Institutional Review Board; no medical ethical approval was required to conduct the study.
The mean amount of caffeine in the decaffeinated coffee was 0.79 mg/g. The accuracy of the determinations was 98.1% (SD, 0.56). As both the precision and the accuracy met the predefined requirements, the HPLC determinations can be considered reliable. Treatments were administered double-blind, and a nose clip was worn to enhance treatment blinding. Drinks were consumed within 5 min, starting from 5 min after the onset of the break.
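As a consistency check (not part of the original article), the per-cup dose follows directly from the HPLC assay and the amount of instant coffee dissolved per cup:

```python
# Per-cup caffeine dose implied by the HPLC assay (values from the text).
grams_per_cup = 2.68          # g instant coffee dissolved per 180 ml cup
caffeinated_mg_per_g = 29.79  # mean caffeine content, caffeinated coffee
decaf_mg_per_g = 0.79         # mean caffeine content, decaffeinated coffee

caffeinated_dose = grams_per_cup * caffeinated_mg_per_g  # close to the nominal 80 mg
decaf_dose = grams_per_cup * decaf_mg_per_g              # about 2 mg per cup

print(f"caffeinated: {caffeinated_dose:.1f} mg, decaf: {decaf_dose:.1f} mg")
```

This reproduces the nominal 80 mg dose (79.8 mg per cup) and shows that the decaffeinated control still delivered roughly 2 mg of caffeine, a pharmacologically negligible amount.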
STISIM highway driving test
Driving tests were performed in a fixed-base driving simulator employing STISIM Drive™ (version M300, Systems Technology Inc., Hawthorne, CA, USA). This is an interactive system in which the roadway scenery is projected on a screen (2.10 × 1.58 m) positioned 1.90 m in front of the center of the steering wheel of the car unit (Mets et al. 2011a). The 100-km highway driving test scenarios were developed (EyeOctopus BV) in accordance with Dutch traffic situations, comprising a two-lane highway in each direction and a monotonous environment with trees, occasional hills and bridges, and other traffic. Each 100-km scenario takes approximately 60 min. Two scenarios (200 km) were driven before the 15-min break and two other scenarios (200 km) thereafter (Mets et al. 2011a). Subjects were instructed to drive with a steady lateral position within the right, slower, traffic lane at a constant speed of 95 km/h. Overtaking slower-moving vehicles was allowed; during blinded editing, these maneuvers were removed from the data before statistical analysis of the "clean" data. The primary outcome variable was the standard deviation of lateral position (SDLP, cm), expressing the weaving of the car (Verster and Roth 2011). The standard deviation of speed (SDS, km/h) was the secondary outcome measure. Mean speed (MS, km/h) and mean lateral position (MLP, cm) were control variables.
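For readers unfamiliar with these outcome measures, SDLP and SDS are plain sample standard deviations of the recorded lateral-position and speed signals. A minimal sketch (the function names and sample values below are hypothetical, not taken from the study software):

```python
import statistics

def sdlp_cm(lateral_positions_cm):
    """Standard deviation of lateral position: the 'weaving' index."""
    return statistics.stdev(lateral_positions_cm)

def sds_kmh(speeds_kmh):
    """Standard deviation of speed: the speed-maintenance index."""
    return statistics.stdev(speeds_kmh)

# Hypothetical samples from a short stretch of driving; negative lateral
# position here means left of the lane center, as in Table 1.
lateral = [-20.1, -17.8, -22.4, -15.9, -19.6, -18.3]
speed = [95.2, 95.6, 94.9, 95.4, 95.1, 95.5]

print(f"SDLP = {sdlp_cm(lateral):.2f} cm, SDS = {sds_kmh(speed):.2f} km/h")
```

In the actual study, the overtaking maneuvers were edited out of the signals before these statistics were computed, so the indices reflect steady lane-keeping only.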
Subjective assessments
After each hour of driving, questionnaires were administered on subjective sleepiness and driving performance. Subjective sleepiness was measured with the Karolinska Sleepiness Scale (KSS), ranging from 1 (very alert) to 9 (very sleepy, fighting sleep) (Åkerstedt and Gillberg 1990). Driving task-related questionnaires comprised mental effort to perform the driving test (Meijman et al. 1986; Zijlstra and Van Doorn 1985), subjective driving quality, and driving style (McCormick et al. 1987). Completing the questionnaires took approximately 2 min, after which the driving task was immediately resumed.
Statistical analysis
Statistical analyses were performed with SPSS, version 19. For each variable, the mean (SD) was computed for each subsequent hour. Data from the first 2 h were compared to confirm that no significant differences between the treatment days were present before the break and treatment administration. To determine whether caffeinated coffee has an effect on driving performance, data from the third and fourth hour were compared using a general linear model for repeated measures (two-tailed, p ≤ 0.05).
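The analysis was run in SPSS; with a single within-subject factor and only two conditions, the per-hour repeated-measures contrast is equivalent to a paired t test (F = t²). A minimal sketch of that contrast, with made-up data rather than the study's:

```python
import math
import statistics

def paired_t(condition_a, condition_b):
    """Paired t statistic for two within-subject conditions; two-tailed
    testing compares |t| against the critical value for n - 1 df."""
    diffs = [a - b for a, b in zip(condition_a, condition_b)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    se_d = statistics.stdev(diffs) / math.sqrt(n)
    return mean_d / se_d

# Hypothetical third-hour SDLP values (cm) for six subjects.
decaf = [23.1, 22.4, 24.0, 21.8, 23.5, 22.9]
caffeinated = [21.2, 21.0, 22.1, 20.4, 21.9, 21.5]

t = paired_t(decaf, caffeinated)
print(f"t = {t:.2f}; F = t^2 = {t * t:.2f} in the equivalent repeated-measures GLM")
```

The crossover design makes each subject their own control, which is why a within-subject contrast like this, rather than an independent-groups comparison, is the appropriate test.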
Results
A total of 24 subjects (12 males and 12 females) completed the study. Their mean (SD) age was 23.2 (1.6) years; on average, they consumed 2.5 (0.7) caffeinated drinks per day, had a body mass index of 23.9 (2.7), had possessed a valid driver's license for 58.8 (17.9) months, and drove 12,979 (10,785) km per year. All subjects reported normal sleep quality and duration on the nights before the test days, with no differences between the two test conditions. The results are summarized in Table 1. There were no significant order effects (caffeinated–decaffeinated coffee versus decaffeinated–caffeinated coffee) or time-of-testing effects (a.m.
versus p.m.).

Table 1 Effects of caffeinated coffee in comparison to decaf on simulated driving performance and subjective sleepiness

Measure                                       Hour      Decaffeinated coffee   Caffeinated coffee
Standard deviation of lateral position (cm)   1         21.43 (4.37)           22.11 (3.67)
                                              2         23.65 (5.90)           24.13 (4.76)
                                              3         22.92 (4.61)           21.08 (3.74)*
                                              4         23.69 (4.72)           22.41 (4.37)*
Standard deviation of speed (km/h)            1         0.85 (0.44)            0.88 (0.35)
                                              2         0.98 (0.51)            1.10 (0.61)
                                              3         1.03 (0.72)            0.78 (0.34)*
                                              4         1.15 (0.77)            0.87 (0.56)*
Mean lateral position (cm)                    1         −18.04 (12.71)         −18.03 (10.47)
                                              2         −19.24 (12.60)         −18.98 (9.98)
                                              3         −18.63 (12.31)         −20.16 (11.05)
                                              4         −18.17 (11.54)         −18.93 (10.80)
Mean speed (km/h)                             1         95.40 (0.19)           95.42 (0.21)
                                              2         95.46 (0.16)           95.40 (0.26)
                                              3         95.44 (0.31)           95.54 (0.18)
                                              4         95.53 (0.15)           95.54 (0.25)
Driving quality                               1         9.75 (3.66)            9.08 (4.10)
                                              2         9.01 (2.81)            8.48 (3.46)
                                              3         9.70 (3.89)            11.84 (2.82)*
                                              4         9.23 (3.02)            10.60 (3.41)
Mental effort                                 1         5.33 (2.30)            5.70 (2.47)
                                              2         5.84 (2.76)            6.39 (2.50)
                                              3         5.89 (2.82)            4.50 (2.36)*
                                              4         5.72 (2.38)            4.90 (2.93)*
Karolinska sleepiness scale                   Baseline  3.25 (0.94)            3.33 (0.87)
                                              1         6.08 (1.67)            5.83 (2.16)
                                              2         6.17 (1.95)            6.29 (1.97)
                                              3         6.13 (2.11)            4.21 (1.47)*
                                              4         5.79 (1.59)            4.54 (1.86)*

Mean (SD) is shown for each parameter. Driving quality ranges from 0 ("I drove exceptionally poorly") to 20 ("I drove exceptionally well"). For mental effort, higher scores indicate higher effort; higher KSS scores indicate increased subjective sleepiness. *p < 0.05 compared to decaf.

Driving test
Figure 1 shows the effect of caffeinated coffee consumption on driving performance. No significant differences in SDLP were observed before the break.
However, both in the first (F(1,23) = 5.8; p = 0.024) and in the second hour (F(1,23) = 6.4; p = 0.019) after the break, caffeinated coffee significantly reduced SDLP.

Fig. 1 Standard deviation of lateral position (SDLP). Asterisks indicate significant difference compared to placebo (p < 0.05)

In line with this, caffeinated coffee significantly reduced the standard deviation of speed in the third (F(1,23) = 5.8; p = 0.024) and fourth hour (F(1,23) = 13.0; p = 0.001) of driving (see Fig. 2).

Fig. 2 Standard deviation of speed (SDS). Asterisks indicate significant difference compared to placebo (p < 0.05)

No effects were found on mean speed or mean lateral position, confirming that subjects performed the test according to the instructions.
Subjective driving assessments
Compared to decaffeinated coffee, caffeinated coffee improved subjective driving quality in the third hour of driving (F(1,23) = 10.5; p = 0.004), but not in the fourth (F(1,23) = 2.6; n.s.). Subjects indicated that the mental effort needed to perform the test after caffeinated coffee was significantly reduced in the third (F(1,23) = 11.4; p = 0.003) and fourth hour of driving (F(1,23) = 5.9; p = 0.023). In addition, drivers rated their driving as significantly more considerate, responsible, and safer in the caffeinated coffee condition (see Table 1).

Subjective sleepiness
After the break with caffeinated coffee, drivers reported significantly lower sleepiness scores than after the break with decaffeinated coffee. This effect was significant both in the third (F(1,23) = 18.5; p < 0.001) and the fourth hour of driving (F(1,23) = 11.9; p = 0.002) (see Fig. 3).

Fig. 3 Karolinska sleepiness scale. Asterisks indicate significant difference compared to placebo (p < 0.05)
Discussion
This study demonstrates that one cup of caffeinated coffee (80 mg caffeine) significantly improves driving performance and reduces driver sleepiness. Both lane keeping (SDLP) and speed maintenance were improved for up to 2 h after caffeine consumption. The effect of caffeinated coffee on SDLP, compared to placebo, is comparable in magnitude to the change observed at a blood alcohol concentration of 0.05% (Mets et al. 2011b), the legal limit for driving in many countries, but in the opposite direction. Hence, the improvement produced by caffeinated coffee can be regarded as clinically relevant. The magnitude of driving improvement observed after coffee consumption was comparable to that seen in a driving study of Red Bull® Energy Drink using the same design and driving test (Mets et al. 2011a), and to on-road driving studies showing improvement after administration of methylphenidate to patients with attention-deficit hyperactivity disorder (Verster et al. 2008).
The improvement in objective performance was accompanied by improvement in subjective assessments of sleepiness and driving performance. An average decrease of almost 2 points (out of 7) on the KSS scale was observed after the intake of caffeinated coffee as compared to decaffeinated coffee. The average KSS score was 6 (“some signs of sleepiness”) in the decaffeinated coffee condition compared to 4 (“rather alert”) in the caffeinated coffee condition. These findings are in agreement with the pharmacokinetic profile of caffeine (Tmax ≈ 30 min; T1/2 > 2 h), as well as with the known actions of caffeine as a sleepiness countermeasure with the ability to restore performance to baseline. Up to now, higher dosages of caffeine (150–250 mg, comparable to two to three cups of regular coffee) have been shown to be effective in counteracting sleep restriction (<5 h spent in bed) when driving in the early morning (Reyner and Horne 2000) and in the early afternoon (Horne and Reyner 1996; Reyner and Horne 1997). A moderate caffeine dosage (100 mg) decreased drifting out of lane and reduced subjective sleepiness in drivers who had slept for no more than 4 h (Biggs et al. 2007). Furthermore, caffeine (3 mg/kg, approximately 225 mg in a 75-kg adult) improved steering accuracy in non-fatigued volunteers (Brice and Smith 2001). Slow release caffeine capsules (300 mg) had similar effects (De Valck and Cluydts 2001). Interestingly, caffeine decreased lane drifting both in individuals who had spent 4.5 h in bed and in those who had spent 7.5 h in bed, while effects on speed maintenance, fatigue, and sleepiness were only observed after 4.5 h spent in bed (De Valck and Cluydts 2001). Two on-the-road driving studies on a public highway in France confirmed these findings and showed that relatively high dosages of caffeine (200 mg) improved nighttime driving both in young and in middle-aged drivers (Philip et al. 2006; Sagaspe et al. 2007). 
The current results are in agreement with these studies, but further show that the lower caffeine content of one regular cup of coffee also significantly improves driving performance and reduces driver sleepiness. In addition, whereas previous studies found effects on lane keeping, the current study shows that speed maintenance is also affected, indicating more pronounced effects on vehicle control. The importance of this finding is evident, since it can be assumed that, in order to refresh, most drivers consume only one cup of coffee during a break rather than three or four. Driving simulator research has several limitations, which are discussed elsewhere (e.g., Mets et al. 2011b; Verster and Roth 2011, 2012); it is therefore important to replicate and confirm our findings in an on-the-road driving study. Furthermore, subjects tested in the afternoon may have experienced an additional effect of the afternoon dip in the circadian rhythm. For this reason, each subject had both test days at the same time of day; however, no significant differences were found between subjects tested in the morning and those tested in the afternoon. Further studies could examine whether a low dose of caffeine has similar effects on (professional) drivers who are sleep-restricted or have shifted their day–night rhythm, since existing studies of such drivers have only used higher dosages of caffeine. In conclusion, the present study demonstrates that one cup of caffeinated coffee (80 mg caffeine) has a positive effect on continuous highway driving in non-sleep restricted, healthy volunteers.
Background: Coffee is often consumed to counteract driver sleepiness. There is limited information on the effects of a single low dose of coffee on prolonged highway driving in non-sleep deprived individuals. Methods: Non-sleep deprived healthy volunteers (n = 24) participated in a double-blind, placebo-controlled, crossover study. After 2 h of monotonous highway driving, subjects received caffeinated or decaffeinated coffee during a 15-min break before continuing driving for another 2 h. The primary outcome measure was the standard deviation of lateral position (SDLP), reflecting the weaving of the car. Secondary outcome measures were speed variability, subjective sleepiness, and subjective driving performance. Results: Caffeinated coffee significantly reduced SDLP as compared to decaffeinated coffee, both in the first (p = 0.024) and second hour (p = 0.019) after the break. Similarly, the standard deviation of speed (p = 0.024; p = 0.001), mental effort (p = 0.003; p = 0.023), and subjective sleepiness (p = 0.001; p = 0.002) were reduced in both the first and second hour after consuming caffeinated coffee. Subjective driving quality was significantly improved in the first hour after consuming caffeinated coffee (p = 0.004). Conclusions: These findings demonstrate a positive effect of one cup of caffeinated coffee on driving performance and subjective sleepiness during monotonous simulated highway driving.
null
null
7,684
254
13
[ "driving", "coffee", "test", "caffeine", "sleepiness", "caffeinated", "break", "caffeinated coffee", "compared", "subjects" ]
[ "test", "test" ]
null
null
null
[CONTENT] Caffeine | Automobile driving | Fatigue | Sleepiness [SUMMARY]
null
[CONTENT] Caffeine | Automobile driving | Fatigue | Sleepiness [SUMMARY]
null
[CONTENT] Caffeine | Automobile driving | Fatigue | Sleepiness [SUMMARY]
null
[CONTENT] Automobile Driving | Caffeine | Central Nervous System Stimulants | Coffee | Cross-Over Studies | Double-Blind Method | Female | Humans | Male | Time Factors | Wakefulness | Young Adult [SUMMARY]
null
[CONTENT] Automobile Driving | Caffeine | Central Nervous System Stimulants | Coffee | Cross-Over Studies | Double-Blind Method | Female | Humans | Male | Time Factors | Wakefulness | Young Adult [SUMMARY]
null
[CONTENT] Automobile Driving | Caffeine | Central Nervous System Stimulants | Coffee | Cross-Over Studies | Double-Blind Method | Female | Humans | Male | Time Factors | Wakefulness | Young Adult [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] driving | coffee | test | caffeine | sleepiness | caffeinated | break | caffeinated coffee | compared | subjects [SUMMARY]
null
[CONTENT] driving | coffee | test | caffeine | sleepiness | caffeinated | break | caffeinated coffee | compared | subjects [SUMMARY]
null
[CONTENT] driving | coffee | test | caffeine | sleepiness | caffeinated | break | caffeinated coffee | compared | subjects [SUMMARY]
null
[CONTENT] caffeine | driving | dosages | studies | effects | adenosine | 2010 | cup coffee | cup | min [SUMMARY]
null
[CONTENT] 23 | indicate | significant | compared | coffee | difference compared placebo | compared placebo 05 | asterisks indicate significant difference | difference | asterisks [SUMMARY]
null
[CONTENT] driving | coffee | caffeine | 23 | sleepiness | compared | test | caffeinated | significant | caffeinated coffee [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
null
[CONTENT] second hour ||| 0.024 | p00.002 | first | second hour ||| the first hour [SUMMARY]
null
[CONTENT] ||| ||| ||| 2 | 15 | 2 ||| ||| ||| ||| second hour ||| 0.024 | p00.002 | first | second hour ||| the first hour ||| one cup [SUMMARY]
null
Adjunctive Role of Bifrontal Transcranial Direct Current Stimulation in Distressed Patients with Severe Tinnitus.
30662385
This study assessed the therapeutic effect of adjunctive bifrontal transcranial direct current stimulation (tDCS) in patients with tinnitus.
BACKGROUND
Forty-four patients who visited our university hospital with a complaint of non-pulsatile subjective tinnitus in January through December 2016 were enrolled. All patients received directive counseling and sound therapy, such as a sound generator or hearing aids, and/or oral clonazepam. Patients who agreed to undergo additional bifrontal tDCS were classified as the study group (n = 26). For tDCS, 1.5 mA of direct current was applied to the prefrontal cortex with a 10-20 EEG system for 20 minutes per session.
METHODS
The Tinnitus Handicap Inventory (THI), Beck Depression Inventory, and Visual Analog Scale (VAS) scores decreased significantly after treatment (P < 0.001). Patients who had a moderate or catastrophic handicap were significantly more likely to respond favorably to bifrontal tDCS (P = 0.026). There was no correlation of number of tDCS sessions with change in the THI or VAS score (P > 0.05). Logistic regression analysis revealed that the initial THI score was independently associated with improvement in the THI. However, tDCS was not a significant determinant of recovery.
RESULTS
tDCS can be used as an adjunctive treatment in patients with severe tinnitus. Although tDCS did not decrease the loudness of tinnitus, it could alleviate the distress associated with the condition in some patients with a moderate or catastrophic handicap.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Electrodes", "Female", "Humans", "Logistic Models", "Male", "Middle Aged", "Prognosis", "Severity of Illness Index", "Tinnitus", "Transcranial Direct Current Stimulation", "Treatment Outcome", "Young Adult" ]
6335126
INTRODUCTION
Tinnitus is a troublesome symptom involving perception of sounds without a source. Individuals with tinnitus have frequent symptoms that negatively affect quality of life. The epidemiologic characteristics of tinnitus differ widely from population to population and from country to country. In Korea, the prevalence of tinnitus was reported to be 19.7% in individuals aged 12 years or older, with a women predominance.1 In the UK, the prevalence was reported to be 16.9% in the 40–69-year age group.2 A study based on the 1999–2004 National Health and Nutrition Examination Survey in the US reported that the risk of tinnitus increased with advancing age and that the majority of individuals with the disorder were in their sixties.3 Auditory deafferentation is caused by various conditions, including ageing, exposure to noise, and diseases affecting the middle ear, that may lead to reorganization of the auditory cortex.4 Altered tinnitus-related functional connectivity between different regions of the brain, involvement of non-auditory brain networks, such as perception, salience, and distress networks, as well as the memory area also play a role in the maintenance of tinnitus.5 Consequently, tinnitus is related to pathologic changes in the brain and is not confined to peripheral hearing loss. Various neuromodulation techniques, such as transcranial direct current stimulation (tDCS) and repetitive transcranial magnetic stimulation (rTMS), have been introduced for treatment of diverse neurologic disorders, including chronic pain, Parkinson's disease, stroke, aphasia, multiple sclerosis, epilepsy, Alzheimer's disease, depression, and tinnitus.67 The stimulation targets most frequently used for tinnitus include not only the auditory cortex but also the prefrontal cortex and the anterior cingulate cortex. It is thought that reversal of pathologic neural activity is modulated directly or indirectly in the functionally connected brain networks. 
Bifrontal tDCS uses a weak current of 0.2–2.0 mA and alters the excitability of the cortex; anodal tDCS increases cortical excitability and cathodal tDCS decreases excitability by hyperpolarization.6 In a previous study, we demonstrated that about 80% of patients showed a 50% or greater decrease in the Tinnitus Handicap Inventory (THI) score when treated with tDCS combined with tailor-made notched music training (TMNMT).6 However, not all patients like TMNMT, which relies on ready availability of preferred music and involves a very long treatment period. In addition, the small sample size in our previous study limited our ability to interpret its results. The aims of the present study were to evaluate the therapeutic effect of bifrontal tDCS for tinnitus in a large sample size and to determine whether this treatment modality has a role adjunctive to that of conventional treatment.
METHODS
Patients Patients who visited the tinnitus clinic at our university hospital between January and December 2016 complaining of non-pulsatile subjective tinnitus that had persisted for at least 3 months and followed up for longer than one month were screened. The exclusion criteria were as follows: THI, Beck Depression Inventory (BDI), and Visual Analog Scale (VAS) scores not obtained or obtained only once; previous tDCS; a history of treatment with other types of neuromodulation, such as vagus nerve stimulation, transcranial random noise stimulation (tRNS), or rTMS; and presence of patient factors precluding evaluation of tinnitus, such as mental retardation, schizophrenia, or low compliance. Treatment protocols The patients were divided into a tDCS group and a conventional treatment group. The conventional treatment group received 2–8 weeks of directive counselling based on the Jastreboff neurophysiologic model, sound therapy that included a sound generator or hearing aids, and clonazepam for patients who wished to take medication. The tDCS group underwent tDCS in addition to conventional treatment. tDCS was delivered using a DC-Stimulator Plus (neuroConn GmbH, Ilmenau, Germany).
Saline-soaked cathodal and anodal electrodes (each with an area of 35 cm2) were placed over the F3 and F4 areas, respectively, in accordance with the 10–20 system, in order to stimulate the dorsolateral prefrontal cortex (DLPFC) bilaterally. The intensity and duration of stimulation were set to 1.5 mA and 20 minutes (a 10-seconds fade-in/fade-out time), respectively. Follow-up and evaluation of response to treatment All patients were instructed to visit the hospital at monthly intervals. Responses on the THI, BDI, and VAS were checked at each visit until the end of treatment. The initial severity of tinnitus-related distress was classified as slight (THI score 0–16), mild (18–36), moderate (38–56), severe (58–76), or catastrophic (78–100). Patients with a final improvement in THI score of ≥ 20 were defined as treatment responders and those with a final improvement of < 20 were considered non-responders. Statistical analysis The χ2 and Fisher's exact tests were used to compare the data between groups and to analyze trends. The nominal data are presented as the mean, standard deviation, and range unless otherwise stated. The Shapiro-Wilk test was performed to test the data for normality. The paired t-test was used to compare the pretreatment and post-treatment THI, BDI, and VAS scores. Pearson correlation analysis was used to assess the relationship between number of tDCS sessions and the change in THI score. All statistical analyses were performed using SPSS version 25.0 software (IBM Corp., Armonk, NY, USA). A P value < 0.05 was considered statistically significant. Ethics statement This study was performed in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board (IRB) of Eulji University (EMC IRB 2018-06-010). The IRB granted a waiver of written informed consent for this study.
RESULTS
Patient characteristics Seventy patients (37 men, 33 women; mean age, 47.81 ± 14.1 [range, 18–82] years) were recruited (Table 1). Forty-four patients had unilateral tinnitus and 26 had bilateral tinnitus. The mean interval between symptom onset and treatment was 23.94 ± 53.93 (range, 3–360) months; the mean hearing level on the right was 21.97 ± 22.00 dB and that on the left was 20.57 ± 20.03 dB. The initial THI and BDI scores were 47.29 ± 24.474 (range, 12–100) and 12.03 ± 9.92 (range, 0–44), respectively. When the mean interval was classified according to THI score, it was longest in patients with severe handicap (Table 2). The mean initial VAS tinnitus loudness score was 4.87 ± 2.47. The mean follow-up duration was 8.29 ± 9.26 months. Data are presented as mean ± standard deviation for numerical variables and number (%) for nominal variables. tDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory, BDI = Beck Depression Inventory, VAS = Visual Analog Scale (0–10). Data are presented as mean ± standard deviation for numerical variables. THI = Tinnitus Handicap Inventory, tDCS = transcranial direct current stimulation. Twenty-six (37.1%) of the 70 patients received 1–6 sessions of tDCS (mean, 1.04 ± 1.89) and the remaining 44 received conventional treatment. There was no significant difference in age, gender, side affected by tinnitus, history of trauma, onset, or hearing level on either side between the two study groups. The mean follow-up duration was 10.36 ± 11.77 months in the tDCS group and 6.77 ± 7.23 months in the conventional treatment group. The mean initial THI score was higher in the tDCS group than in the conventional treatment group (57.92 ± 22.16 vs. 41 ± 23.81; P = 0.004). There was no significant between-group difference in the BDI score (P > 0.05). The VAS score for tinnitus loudness was significantly higher in the tDCS group than in the conventional treatment group (5.73 ± 2.52 vs. 4.36 ± 2.314; P = 0.024). 
Outcomes according to treatment method Initially, we analyzed all of the enrolled patients' data together, irrespective of study group. The post-treatment THI and VAS scores indicated a significant reduction in patient distress and tinnitus loudness (P < 0.001). In total, the final THI score was 28.06 ± 19.24 and the final VAS score was 3.70 ± 2.46 (Table 1). The BDI score decreased to 9.59 ± 8.85 after treatment (P = 0.006). The χ2 test for trend demonstrated that patients with an initial THI score indicating more severe tinnitus-related distress tended to be treatment responders (P < 0.001). The percentages of patients with mild to catastrophic handicap who responded to treatment were 0% (0/7) for slight handicap, 26.1% (6/23) for mild handicap, 50% (7/14) for moderate handicap, 78.6% (11/14) for severe handicap, and 75% (9/12) for catastrophic handicap. The final THI scores were then analyzed separately according to treatment group. The change in the mean THI score was greater in the tDCS group than in the conventional treatment group (28.69 ± 24.81 vs. 13.63 ± 21.59; P = 0.010) (Fig. 1). However, there was no significant between-group difference in the mean BDI score or VAS score after treatment (P > 0.05). tDCS = transcranial direct current stimulation, BDI = Beck Depression Inventory, THI = Tinnitus Handicap Inventory. aP < 0.05. In patients whose final improvement in THI score was ≥ 20, differences were observed according to the initial degree of tinnitus severity and type of treatment received (P = 0.011; χ2). Treatment responders with initially moderate or catastrophic handicap were more likely to have received tDCS (Fig. 2); however, treatment responders with initially severe handicap were more likely to have received conventional treatment. None of the patients in the tDCS group with initially mild symptoms showed an improvement in THI of ≥ 20. 
However, there was no significant difference in the initial severity of tinnitus or type of treatment received in patients whose final THI score improvement was < 20 (P > 0.05). tDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory. Pearson correlation analysis revealed no correlation between change in THI score and number of tDCS sessions (r = 0.068; P = 0.692). Similarly, there was no correlation between the number of tDCS sessions and the BDI or VAS score for tinnitus loudness (P > 0.05). Prognostic factors Conditional logistic regression analysis revealed a positive association of the initial THI score with an improvement in THI of ≥ 20 (odds ratio [OR], 1.060; 95% confidence interval [CI], 1.031–1.091; P < 0.001) and a negative association with a final THI score < 18 (OR, 0.969; CI, 0.945–0.993; P = 0.013). Thus, patients with more severe tinnitus-related distress initially were classified as responders after treatment but those with lower initial levels of distress showed better final recovery. Unexpectedly, the regression model did not identify application of tDCS as an independent prognostic factor for improvement in the THI score.
null
null
[ "Patients", "Treatment protocols", "Follow-up and evaluation of response to treatment", "Statistical analysis", "Ethics statement", "Patient characteristics", "Outcomes according to treatment method", "Prognostic factors" ]
[ "Patients who visited the tinnitus clinic at our university hospital between January and December 2016 complaining of non-pulsatile subjective tinnitus that had persisted for at least 3 months and followed up for longer than one month were screened. The exclusion criteria were as follows: THI, Beck Depression Inventory (BDI), and Visual Analog Scale (VAS) scores not obtained or obtained only once; previous tDCS; a history of treatment with other types of neuromodulation, such as vagus nerve stimulation, transcranial random noise stimulation (tRNS), or rTMS; and presence of patient factors precluding evaluation of tinnitus, such as mental retardation, schizophrenia, or low compliance.", "The patients were divided into a tDCS group and a conventional treatment group. The conventional treatment group received 2–8 weeks of directive counselling based on the Jastreboff neurophysiologic model, sound therapy that included a sound generator or hearing aids, and clonazepam for patients who wished to take medication.\nThe tDCS group underwent tDCS in addition to conventional treatment. tDCS was delivered using a DC-Stimulator Plus (neuroConn GmbH, Ilmenau, Germany). Saline-soaked cathodal and anodal electrodes (each with an area of 35 cm2) were placed over the F3 and F4 areas, respectively, in accordance with the 10–20 system, in order to stimulate the dorsolateral prefrontal cortex (DLPFC) bilaterally. The intensity and duration of stimulation were set to 1.5 mA and 20 minutes (a 10-seconds fade-in/fade-out time), respectively.", "All patients were instructed to visit the hospital at monthly intervals. Responses on the THI, BDI, and VAS were checked at each visit until the end of treatment. The initial severity of tinnitus-related distress was classified as slight (THI score 0–16), mild (18–36), moderate (38–56), severe (58–76), or catastrophic (78–100). 
Patients with a final improvement in THI score of ≥ 20 were defined as treatment responders and those with a final improvement of < 20 were considered non-responders.", "The χ2 and Fisher's exact tests were used to compare the data between groups and to analyze trends. The nominal data are presented as the mean, standard deviation, and range unless otherwise stated. The Shapiro-Wilk test was performed to test the data for normality. The paired t-test was used to compare the pretreatment and post-treatment THI, BDI, and VAS scores. Pearson correlation analysis was used to assess the relationship between number of tDCS sessions and the change in THI Score. All statistical analyses were performed using SPSS version 25.0 software (IBM Corp., Armonk, NY, USA). A P value < 0.05 was considered statistically significant.", "This study was performed in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board (IRB) of Eulji University (EMC IRB 2018-06-010). IRB granted a waiver of written informed consent for this study.", "Seventy patients (37 men, 33 women; mean age, 47.81 ± 14.1 [range, 18–82] years) were recruited (Table 1). Forty-four patients had unilateral tinnitus and 26 had bilateral tinnitus. The mean interval between symptom onset and treatment was 23.94 ± 53.93 (range, 3–360) months; the mean hearing level on the right was 21.97 ± 22.00 dB and that on the left was 20.57 ± 20.03 dB. The initial THI and BDI scores were 47.29 ± 24.474 (range, 12–100) and 12.03 ± 9.92 (range, 0–44), respectively. When the mean interval was classified according to THI score, it was longest in patients with severe handicap (Table 2). The mean initial VAS tinnitus loudness score was 4.87 ± 2.47. 
The mean follow-up duration was 8.29 ± 9.26 months.\nData are presented as mean ± standard deviation for numerical variables and number (%) for nominal variables.\ntDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory, BDI = Beck Depression Inventory, VAS = Visual Analog Scale (0–10).\nData are presented as mean ± standard deviation for numerical variables.\nTHI = Tinnitus Handicap Inventory, tDCS = transcranial direct current stimulation.\nTwenty-six (37.1%) of the 70 patients received 1–6 sessions of tDCS (mean, 1.04 ± 1.89) and the remaining 44 received conventional treatment. There was no significant difference in age, gender, side affected by tinnitus, history of trauma, onset, or hearing level on either side between the two study groups. The mean follow-up duration was 10.36 ± 11.77 months in the tDCS group and 6.77 ± 7.23 months in the conventional treatment group. The mean initial THI score was higher in the tDCS group than in the conventional treatment group (57.92 ± 22.16 vs. 41 ± 23.81; P = 0.004). There was no significant between-group difference in the BDI score (P > 0.05). The VAS score for tinnitus loudness was significantly higher in the tDCS group than in the conventional treatment group (5.73 ± 2.52 vs. 4.36 ± 2.314; P = 0.024).", "Initially, we analyzed all of the enrolled patients' data together, irrespective of study group. The post-treatment THI and VAS scores indicated a significant reduction in patient distress and tinnitus loudness (P < 0.001). In total, the final THI score was 28.06 ± 19.24 and the final VAS score was 3.70 ± 2.46 (Table 1). The BDI score decreased to 9.59 ± 8.85 after treatment (P = 0.006). The χ2 test for trend demonstrated that patients with an initial THI score indicating more severe tinnitus-related distress tended to be treatment responders (P < 0.001). 
The percentages of patients with mild to catastrophic handicap who responded to treatment were 0% (0/7) for slight handicap, 26.1% (6/23) for mild handicap, 50% (7/14) for moderate handicap, 78.6% (11/14) for severe handicap, and 75% (9/12) for catastrophic handicap.\nThe final THI scores were then analyzed separately according to treatment group. The change in the mean THI score was greater in the tDCS group than in the conventional treatment group (28.69 ± 24.81 vs. 13.63 ± 21.59; P = 0.010) (Fig. 1). However, there was no significant between-group difference in the mean BDI score or VAS score after treatment (P > 0.05).\ntDCS = transcranial direct current stimulation, BDI = Beck Depression Inventory, THI = Tinnitus Handicap Inventory.\naP < 0.05.\nIn patients whose final improvement in THI score was ≥ 20, differences were observed according to the initial degree of tinnitus severity and type of treatment received (P = 0.011; χ2). Treatment responders with initially moderate or catastrophic handicap were more likely to have received tDCS (Fig. 2); however, treatment responders with initially severe handicap were more likely to have received conventional treatment. None of the patients in the tDCS group with initially mild symptoms showed an improvement in THI of ≥ 20. However, there was no significant difference in the initial severity of tinnitus or type of treatment received in patients whose final THI score improvement was < 20 (P > 0.05).\ntDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory.\nPearson correlation analysis revealed no correlation between change in THI score and number of tDCS sessions (r = 0.068; P = 0.692). 
Similarly, there was no correlation between the number of tDCS sessions and the BDI or VAS score for tinnitus loudness (P > 0.05).", "Conditional logistic regression analysis revealed a positive association of the initial THI score with an improvement in THI of ≥ 20 (odds ratio [OR], 1.060; 95% confidence interval [CI], 1.031–1.091; P < 0.001) and a negative association with a final THI score < 18 (OR, 0.969; CI, 0.945–0.993; P = 0.013). Thus, patients with more severe tinnitus-related distress initially were classified as responders after treatment but those with lower initial levels of distress showed better final recovery. Unexpectedly, the regression model did not identify application of tDCS as an independent prognostic factor for improvement in the THI score." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Patients", "Treatment protocols", "Follow-up and evaluation of response to treatment", "Statistical analysis", "Ethics statement", "RESULTS", "Patient characteristics", "Outcomes according to treatment method", "Prognostic factors", "DISCUSSION" ]
[ "Tinnitus is a troublesome symptom involving perception of sounds without a source. Individuals with tinnitus have frequent symptoms that negatively affect quality of life. The epidemiologic characteristics of tinnitus differ widely from population to population and from country to country. In Korea, the prevalence of tinnitus was reported to be 19.7% in individuals aged 12 years or older, with a women predominance.1 In the UK, the prevalence was reported to be 16.9% in the 40–69-year age group.2 A study based on the 1999–2004 National Health and Nutrition Examination Survey in the US reported that the risk of tinnitus increased with advancing age and that the majority of individuals with the disorder were in their sixties.3\nAuditory deafferentation is caused by various conditions, including ageing, exposure to noise, and diseases affecting the middle ear, that may lead to reorganization of the auditory cortex.4 Altered tinnitus-related functional connectivity between different regions of the brain, involvement of non-auditory brain networks, such as perception, salience, and distress networks, as well as the memory area also play a role in the maintenance of tinnitus.5 Consequently, tinnitus is related to pathologic changes in the brain and is not confined to peripheral hearing loss.\nVarious neuromodulation techniques, such as transcranial direct current stimulation (tDCS) and repetitive transcranial magnetic stimulation (rTMS), have been introduced for treatment of diverse neurologic disorders, including chronic pain, Parkinson's disease, stroke, aphasia, multiple sclerosis, epilepsy, Alzheimer's disease, depression, and tinnitus.67 The stimulation targets most frequently used for tinnitus include not only the auditory cortex but also the prefrontal cortex and the anterior cingulate cortex. 
It is thought that reversal of pathologic neural activity is modulated directly or indirectly in the functionally connected brain networks.\nBifrontal tDCS uses a weak current of 0.2–2.0 mA and alters the excitability of the cortex; anodal tDCS increases cortical excitability and cathodal tDCS decreases excitability by hyperpolarization.6 In a previous study, we demonstrated that about 80% of patients showed a 50% or greater decrease in the Tinnitus Handicap Inventory (THI) score when treated with tDCS combined with tailor-made notched music training (TMNMT).6 However, not all patients like TMNMT, which relies on ready availability of preferred music and involves a very long treatment period. In addition, the small sample size in our previous study limited our ability to interpret its results.\nThe aims of the present study were to evaluate the therapeutic effect of bifrontal tDCS for tinnitus in a large sample size and to determine whether this treatment modality has a role adjunctive to that of conventional treatment.", " Patients Patients who visited the tinnitus clinic at our university hospital between January and December 2016 complaining of non-pulsatile subjective tinnitus that had persisted for at least 3 months and followed up for longer than one month were screened. The exclusion criteria were as follows: THI, Beck Depression Inventory (BDI), and Visual Analog Scale (VAS) scores not obtained or obtained only once; previous tDCS; a history of treatment with other types of neuromodulation, such as vagus nerve stimulation, transcranial random noise stimulation (tRNS), or rTMS; and presence of patient factors precluding evaluation of tinnitus, such as mental retardation, schizophrenia, or low compliance.
\n Treatment protocols The patients were divided into a tDCS group and a conventional treatment group. The conventional treatment group received 2–8 weeks of directive counselling based on the Jastreboff neurophysiologic model, sound therapy that included a sound generator or hearing aids, and clonazepam for patients who wished to take medication.\nThe tDCS group underwent tDCS in addition to conventional treatment. tDCS was delivered using a DC-Stimulator Plus (neuroConn GmbH, Ilmenau, Germany). Saline-soaked cathodal and anodal electrodes (each with an area of 35 cm2) were placed over the F3 and F4 areas, respectively, in accordance with the 10–20 system, in order to stimulate the dorsolateral prefrontal cortex (DLPFC) bilaterally. The intensity and duration of stimulation were set to 1.5 mA and 20 minutes (a 10-seconds fade-in/fade-out time), respectively.
\n Follow-up and evaluation of response to treatment All patients were instructed to visit the hospital at monthly intervals. Responses on the THI, BDI, and VAS were checked at each visit until the end of treatment. The initial severity of tinnitus-related distress was classified as slight (THI score 0–16), mild (18–36), moderate (38–56), severe (58–76), or catastrophic (78–100). Patients with a final improvement in THI score of ≥ 20 were defined as treatment responders and those with a final improvement of < 20 were considered non-responders.
\n Statistical analysis The χ2 and Fisher's exact tests were used to compare the data between groups and to analyze trends. The nominal data are presented as the mean, standard deviation, and range unless otherwise stated. The Shapiro-Wilk test was performed to test the data for normality. The paired t-test was used to compare the pretreatment and post-treatment THI, BDI, and VAS scores. Pearson correlation analysis was used to assess the relationship between number of tDCS sessions and the change in THI Score. All statistical analyses were performed using SPSS version 25.0 software (IBM Corp., Armonk, NY, USA). A P value < 0.05 was considered statistically significant.
\n Ethics statement This study was performed in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board (IRB) of Eulji University (EMC IRB 2018-06-010). IRB granted a waiver of written informed consent for this study.", "Patients who visited the tinnitus clinic at our university hospital between January and December 2016 complaining of non-pulsatile subjective tinnitus that had persisted for at least 3 months and followed up for longer than one month were screened. 
The exclusion criteria were as follows: THI, Beck Depression Inventory (BDI), and Visual Analog Scale (VAS) scores not obtained or obtained only once; previous tDCS; a history of treatment with other types of neuromodulation, such as vagus nerve stimulation, transcranial random noise stimulation (tRNS), or rTMS; and presence of patient factors precluding evaluation of tinnitus, such as mental retardation, schizophrenia, or low compliance.", "The patients were divided into a tDCS group and a conventional treatment group. The conventional treatment group received 2–8 weeks of directive counselling based on the Jastreboff neurophysiologic model, sound therapy that included a sound generator or hearing aids, and clonazepam for patients who wished to take medication.\nThe tDCS group underwent tDCS in addition to conventional treatment. tDCS was delivered using a DC-Stimulator Plus (neuroConn GmbH, Ilmenau, Germany). Saline-soaked cathodal and anodal electrodes (each with an area of 35 cm2) were placed over the F3 and F4 areas, respectively, in accordance with the 10–20 system, in order to stimulate the dorsolateral prefrontal cortex (DLPFC) bilaterally. The intensity and duration of stimulation were set to 1.5 mA and 20 minutes (a 10-seconds fade-in/fade-out time), respectively.", "All patients were instructed to visit the hospital at monthly intervals. Responses on the THI, BDI, and VAS were checked at each visit until the end of treatment. The initial severity of tinnitus-related distress was classified as slight (THI score 0–16), mild (18–36), moderate (38–56), severe (58–76), or catastrophic (78–100). Patients with a final improvement in THI score of ≥ 20 were defined as treatment responders and those with a final improvement of < 20 were considered non-responders.", "The χ2 and Fisher's exact tests were used to compare the data between groups and to analyze trends. The nominal data are presented as the mean, standard deviation, and range unless otherwise stated. 
The Shapiro-Wilk test was performed to test the data for normality. The paired t-test was used to compare the pretreatment and post-treatment THI, BDI, and VAS scores. Pearson correlation analysis was used to assess the relationship between number of tDCS sessions and the change in THI Score. All statistical analyses were performed using SPSS version 25.0 software (IBM Corp., Armonk, NY, USA). A P value < 0.05 was considered statistically significant.", "This study was performed in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board (IRB) of Eulji University (EMC IRB 2018-06-010). IRB granted a waiver of written informed consent for this study.", " Patient characteristics Seventy patients (37 men, 33 women; mean age, 47.81 ± 14.1 [range, 18–82] years) were recruited (Table 1). Forty-four patients had unilateral tinnitus and 26 had bilateral tinnitus. The mean interval between symptom onset and treatment was 23.94 ± 53.93 (range, 3–360) months; the mean hearing level on the right was 21.97 ± 22.00 dB and that on the left was 20.57 ± 20.03 dB. The initial THI and BDI scores were 47.29 ± 24.474 (range, 12–100) and 12.03 ± 9.92 (range, 0–44), respectively. When the mean interval was classified according to THI score, it was longest in patients with severe handicap (Table 2). The mean initial VAS tinnitus loudness score was 4.87 ± 2.47. The mean follow-up duration was 8.29 ± 9.26 months.\nData are presented as mean ± standard deviation for numerical variables and number (%) for nominal variables.\ntDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory, BDI = Beck Depression Inventory, VAS = Visual Analog Scale (0–10).\nData are presented as mean ± standard deviation for numerical variables.\nTHI = Tinnitus Handicap Inventory, tDCS = transcranial direct current stimulation.\nTwenty-six (37.1%) of the 70 patients received 1–6 sessions of tDCS (mean, 1.04 ± 1.89) and the remaining 44 received conventional treatment. There was no significant difference in age, gender, side affected by tinnitus, history of trauma, onset, or hearing level on either side between the two study groups. The mean follow-up duration was 10.36 ± 11.77 months in the tDCS group and 6.77 ± 7.23 months in the conventional treatment group. The mean initial THI score was higher in the tDCS group than in the conventional treatment group (57.92 ± 22.16 vs. 41 ± 23.81; P = 0.004). There was no significant between-group difference in the BDI score (P > 0.05). The VAS score for tinnitus loudness was significantly higher in the tDCS group than in the conventional treatment group (5.73 ± 2.52 vs. 4.36 ± 2.314; P = 0.024).
\n Outcomes according to treatment method Initially, we analyzed all of the enrolled patients' data together, irrespective of study group. The post-treatment THI and VAS scores indicated a significant reduction in patient distress and tinnitus loudness (P < 0.001). In total, the final THI score was 28.06 ± 19.24 and the final VAS score was 3.70 ± 2.46 (Table 1). The BDI score decreased to 9.59 ± 8.85 after treatment (P = 0.006). The χ2 test for trend demonstrated that patients with an initial THI score indicating more severe tinnitus-related distress tended to be treatment responders (P < 0.001). The percentages of patients with mild to catastrophic handicap who responded to treatment were 0% (0/7) for slight handicap, 26.1% (6/23) for mild handicap, 50% (7/14) for moderate handicap, 78.6% (11/14) for severe handicap, and 75% (9/12) for catastrophic handicap.\nThe final THI scores were then analyzed separately according to treatment group. The change in the mean THI score was greater in the tDCS group than in the conventional treatment group (28.69 ± 24.81 vs. 13.63 ± 21.59; P = 0.010) (Fig. 1). However, there was no significant between-group difference in the mean BDI score or VAS score after treatment (P > 0.05).\ntDCS = transcranial direct current stimulation, BDI = Beck Depression Inventory, THI = Tinnitus Handicap Inventory.\naP < 0.05.\nIn patients whose final improvement in THI score was ≥ 20, differences were observed according to the initial degree of tinnitus severity and type of treatment received (P = 0.011; χ2). Treatment responders with initially moderate or catastrophic handicap were more likely to have received tDCS (Fig. 2); however, treatment responders with initially severe handicap were more likely to have received conventional treatment. None of the patients in the tDCS group with initially mild symptoms showed an improvement in THI of ≥ 20. However, there was no significant difference in the initial severity of tinnitus or type of treatment received in patients whose final THI score improvement was < 20 (P > 0.05).\ntDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory.\nPearson correlation analysis revealed no correlation between change in THI score and number of tDCS sessions (r = 0.068; P = 0.692). Similarly, there was no correlation between the number of tDCS sessions and the BDI or VAS score for tinnitus loudness (P > 0.05).
\n Prognostic factors Conditional logistic regression analysis revealed a positive association of the initial THI score with an improvement in THI of ≥ 20 (odds ratio [OR], 1.060; 95% confidence interval [CI], 1.031–1.091; P < 0.001) and a negative association with a final THI score < 18 (OR, 0.969; CI, 0.945–0.993; P = 0.013). Thus, patients with more severe tinnitus-related distress initially were classified as responders after treatment but those with lower initial levels of distress showed better final recovery. Unexpectedly, the regression model did not identify application of tDCS as an independent prognostic factor for improvement in the THI score.", "Seventy patients (37 men, 33 women; mean age, 47.81 ± 14.1 [range, 18–82] years) were recruited (Table 1). 
Forty-four patients had unilateral tinnitus and 26 had bilateral tinnitus. The mean interval between symptom onset and treatment was 23.94 ± 53.93 (range, 3–360) months; the mean hearing level on the right was 21.97 ± 22.00 dB and that on the left was 20.57 ± 20.03 dB. The initial THI and BDI scores were 47.29 ± 24.474 (range, 12–100) and 12.03 ± 9.92 (range, 0–44), respectively. When the mean interval was classified according to THI score, it was longest in patients with severe handicap (Table 2). The mean initial VAS tinnitus loudness score was 4.87 ± 2.47. The mean follow-up duration was 8.29 ± 9.26 months.\nData are presented as mean ± standard deviation for numerical variables and number (%) for nominal variables.\ntDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory, BDI = Beck Depression Inventory, VAS = Visual Analog Scale (0–10).\nData are presented as mean ± standard deviation for numerical variables.\nTHI = Tinnitus Handicap Inventory, tDCS = transcranial direct current stimulation.\nTwenty-six (37.1%) of the 70 patients received 1–6 sessions of tDCS (mean, 1.04 ± 1.89) and the remaining 44 received conventional treatment. There was no significant difference in age, gender, side affected by tinnitus, history of trauma, onset, or hearing level on either side between the two study groups. The mean follow-up duration was 10.36 ± 11.77 months in the tDCS group and 6.77 ± 7.23 months in the conventional treatment group. The mean initial THI score was higher in the tDCS group than in the conventional treatment group (57.92 ± 22.16 vs. 41 ± 23.81; P = 0.004). There was no significant between-group difference in the BDI score (P > 0.05). The VAS score for tinnitus loudness was significantly higher in the tDCS group than in the conventional treatment group (5.73 ± 2.52 vs. 4.36 ± 2.314; P = 0.024).", "Initially, we analyzed all of the enrolled patients' data together, irrespective of study group. 
The post-treatment THI and VAS scores indicated a significant reduction in patient distress and tinnitus loudness (P < 0.001). In total, the final THI score was 28.06 ± 19.24 and the final VAS score was 3.70 ± 2.46 (Table 1). The BDI score decreased to 9.59 ± 8.85 after treatment (P = 0.006). The χ2 test for trend demonstrated that patients with an initial THI score indicating more severe tinnitus-related distress tended to be treatment responders (P < 0.001). The percentages of patients with mild to catastrophic handicap who responded to treatment were 0% (0/7) for slight handicap, 26.1% (6/23) for mild handicap, 50% (7/14) for moderate handicap, 78.6% (11/14) for severe handicap, and 75% (9/12) for catastrophic handicap.\nThe final THI scores were then analyzed separately according to treatment group. The change in the mean THI score was greater in the tDCS group than in the conventional treatment group (28.69 ± 24.81 vs. 13.63 ± 21.59; P = 0.010) (Fig. 1). However, there was no significant between-group difference in the mean BDI score or VAS score after treatment (P > 0.05).\ntDCS = transcranial direct current stimulation, BDI = Beck Depression Inventory, THI = Tinnitus Handicap Inventory.\naP < 0.05.\nIn patients whose final improvement in THI score was ≥ 20, differences were observed according to the initial degree of tinnitus severity and type of treatment received (P = 0.011; χ2). Treatment responders with initially moderate or catastrophic handicap were more likely to have received tDCS (Fig. 2); however, treatment responders with initially severe handicap were more likely to have received conventional treatment. None of the patients in the tDCS group with initially mild symptoms showed an improvement in THI of ≥ 20. 
However, there was no significant difference in the initial severity of tinnitus or type of treatment received in patients whose final THI score improvement was < 20 (P > 0.05).\ntDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory.\nPearson correlation analysis revealed no correlation between change in THI score and number of tDCS sessions (r = 0.068; P = 0.692). Similarly, there was no correlation between the number of tDCS sessions and the BDI or VAS score for tinnitus loudness (P > 0.05).", "Conditional logistic regression analysis revealed a positive association of the initial THI score with an improvement in THI of ≥ 20 (odds ratio [OR], 1.060; 95% confidence interval [CI], 1.031–1.091; P < 0.001) and a negative association with a final THI score < 18 (OR, 0.969; CI, 0.945–0.993; P = 0.013). Thus, patients with more severe tinnitus-related distress initially were classified as responders after treatment but those with lower initial levels of distress showed better final recovery. Unexpectedly, the regression model did not identify application of tDCS as an independent prognostic factor for improvement in the THI score.", "In this study, we found that tinnitus-related distress was alleviated in some patients who were treated with additional bifrontal tDCS targeting the DLPFC but that application of tDCS was not in itself an independent prognostic factor. These findings suggest that the therapeutic effect of bifrontal tDCS may be weak and that the severity of tinnitus-related distress may vary according to involvement of the DLPFC. 
It has been reported that the DLPFC plays a role in tinnitus by facilitating auditory memory and modulating input to the primary auditory cortex, leading to top-down inhibitory modulation of auditory processing.89 The number of tDCS sessions was not correlated with the THI, BDI, or VAS scores, suggesting the dose-dependent relationship may be limited in bifrontal tDCS.\nPrevious studies have shown that the results of application of rTMS to both the prefrontal cortex and the auditory cortex are better than stimulation of the prefrontal cortex or auditory cortex alone.1011 Similarly, applying tRNS to the auditory cortex after bifrontal tDCS resulted in a significant decrease in both tinnitus loudness and tinnitus-related distress.12 A single session of tRNS applied to the auditory cortex was reported to achieve a significant decrease in tinnitus loudness and tinnitus-related distress, and multiple tRNS sessions seemed to be more effective than a single session or use of an alternative method of current stimulation.13 In view of the evidence thus far, a further study is needed to investigate the effects of multi-site stimulation in conjunction with combined tDCS-tRNS beyond bifrontal tDCS.\nIn contrast, there was no statistically significant treatment-related difference in the changes in and final scores for the BDI and VAS. This suggests that bifrontal tDCS may be less able to control depressive symptoms or reduce the loudness of tinnitus than expected. One research group has compared the efficacy of tDCS with that of escitalopram, which is one of the most commonly used serotonin reuptake inhibitors, in patients with major depressive disorder.14 Although results of 22 sessions of tDCS were superior to those of placebo, some patients developed new-onset mania as well as itching and skin redness; further, escitalopram was more effective than tDCS in that study. 
Therefore, medical treatment should be considered before tDCS in patients with tinnitus and severe depression even though a recent guideline cautions against routine prescription of antidepressants, anticonvulsants, or anxiolytics.15\nInterestingly, all treatment responders who had initially mild tinnitus received conventional treatment alone. This suggests that bifrontal tDCS has no additional effect in patients with mild tinnitus-related distress. Moreover, patients who responded to bifrontal tDCS were more likely to have moderate or catastrophic handicap than severe handicap. We assume that this finding, although not statistically significant, might reflect the longer duration of severe handicap than that in the other subgroups. A factor analysis study of rTMS reported that tinnitus was suppressed in patients with a shorter duration of tinnitus, normal hearing, and no sleep disturbance, and suggested that the central auditory network becomes less plastic over time,16 which supports our assumption. Patients with severe tinnitus-related distress of short duration may be ideal candidates for neuromodulation.\nThe location of the surface electrodes is another issue. In order to stimulate the bifrontal cortex, the electrodes have been traditionally placed in one of two ways, i.e., “left-anode-right-cathode montage” or “right-anode-left-cathode montage.”7 The former montage has been reported to achieve greater improvement of the depressive symptoms of tinnitus while the latter montage alleviates anxiety.617 However, we could not find evidence to support this concept. 
High-definition tDCS using a 4 × 1 ring electrode targeting the right DLPFC has been developed for more precise current delivery.18 A recent study also suggested the possibility of stimulating the anterior cingulate cortex with high-definition tDCS; this structure is located deeply and its function is related to emotional experience and tinnitus-related distress.1920 A previous study of rTMS demonstrated that preceding mediofrontal stimulation of the anterior cingulate cortex with a double cone-coil before rTMS to the auditory cortex achieved better results than conventional rTMS targeting the left DLPFC and auditory cortex.20 Therefore, future tDCS studies should concentrate on using high-definition tDCS to stimulate various brain regions more precisely.\nThis study has several limitations. First, our analysis was based on data derived from a retrospective medical chart review. Second, not all patients complied with conventional treatment; for example, some patients refused to accept the sound generator or hearing aids because of the additional cost involved or because they felt self-conscious when wearing these devices. Third, more objective tools for assessment of tinnitus, such as radiologic evaluation or electroencephalography, were not used in this study. Nevertheless, we were able to assess the clinical importance of bifrontal tDCS by observing the natural course of tinnitus in patients in this retrospective cohort study. Finally, the number of sessions performed was relatively small. Nevertheless, some patients who had a higher tinnitus handicap showed a favorable treatment outcome. Given that tinnitus tends to wax and wane, adjunctive application of bifrontal tDCS at times when tinnitus is severe would help to alleviate the tinnitus-related distress.\nIn summary, patients with tinnitus who received conventional treatment plus bifrontal tDCS showed a significant reduction in THI score but not in VAS tinnitus loudness or BDI scores. 
In particular, patients treated with tDCS who initially had moderate or catastrophic handicap were more likely to show improvement in their THI score. However, application of additional bifrontal tDCS was not identified as a significant prognostic factor for improvement of the THI score. Bifrontal tDCS can be used as an adjunctive treatment for severe tinnitus. Although this treatment modality does not reduce tinnitus loudness, addition of bifrontal tDCS to conventional treatment alleviates the distress associated with tinnitus in some patients with moderate or catastrophic handicap." ]
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Adjunctive Bifrontal Transcranial Direct Current Stimulation", "Distress", "Tinnitus" ]
INTRODUCTION: Tinnitus is a troublesome symptom involving perception of sounds without a source. Individuals with tinnitus have frequent symptoms that negatively affect quality of life. The epidemiologic characteristics of tinnitus differ widely from population to population and from country to country. In Korea, the prevalence of tinnitus was reported to be 19.7% in individuals aged 12 years or older, with a female predominance.1 In the UK, the prevalence was reported to be 16.9% in the 40–69-year age group.2 A study based on the 1999–2004 National Health and Nutrition Examination Survey in the US reported that the risk of tinnitus increased with advancing age and that the majority of individuals with the disorder were in their sixties.3 Auditory deafferentation is caused by various conditions, including ageing, exposure to noise, and diseases affecting the middle ear, that may lead to reorganization of the auditory cortex.4 Altered tinnitus-related functional connectivity between different regions of the brain, involvement of non-auditory brain networks, such as perception, salience, and distress networks, as well as the memory area also play a role in the maintenance of tinnitus.5 Consequently, tinnitus is related to pathologic changes in the brain and is not confined to peripheral hearing loss. Various neuromodulation techniques, such as transcranial direct current stimulation (tDCS) and repetitive transcranial magnetic stimulation (rTMS), have been introduced for treatment of diverse neurologic disorders, including chronic pain, Parkinson's disease, stroke, aphasia, multiple sclerosis, epilepsy, Alzheimer's disease, depression, and tinnitus.67 The stimulation targets most frequently used for tinnitus include not only the auditory cortex but also the prefrontal cortex and the anterior cingulate cortex. It is thought that reversal of pathologic neural activity is modulated directly or indirectly in the functionally connected brain networks. 
Bifrontal tDCS uses a weak current of 0.2–2.0 mA to alter cortical excitability; anodal tDCS increases cortical excitability, whereas cathodal tDCS decreases it by hyperpolarization.6 In a previous study, we demonstrated that about 80% of patients showed a 50% or greater decrease in the Tinnitus Handicap Inventory (THI) score when treated with tDCS combined with tailor-made notched music training (TMNMT).6 However, not all patients like TMNMT, which relies on ready availability of preferred music and involves a very long treatment period. In addition, the small sample size of our previous study limited our ability to interpret its results. The aims of the present study were to evaluate the therapeutic effect of bifrontal tDCS for tinnitus in a larger sample and to determine whether this treatment modality has a role adjunctive to that of conventional treatment. METHODS: Patients: Patients who visited the tinnitus clinic at our university hospital between January and December 2016 complaining of non-pulsatile subjective tinnitus that had persisted for at least 3 months and who were followed up for longer than one month were screened. The exclusion criteria were as follows: THI, Beck Depression Inventory (BDI), and Visual Analog Scale (VAS) scores not obtained or obtained only once; previous tDCS; a history of treatment with other types of neuromodulation, such as vagus nerve stimulation, transcranial random noise stimulation (tRNS), or rTMS; and presence of patient factors precluding evaluation of tinnitus, such as mental retardation, schizophrenia, or low compliance.
Treatment protocols: The patients were divided into a tDCS group and a conventional treatment group. The conventional treatment group received 2–8 weeks of directive counselling based on the Jastreboff neurophysiologic model, sound therapy that included a sound generator or hearing aids, and clonazepam for patients who wished to take medication. The tDCS group underwent tDCS in addition to conventional treatment. tDCS was delivered using a DC-Stimulator Plus (neuroConn GmbH, Ilmenau, Germany). Saline-soaked cathodal and anodal electrodes (each with an area of 35 cm2) were placed over the F3 and F4 areas, respectively, in accordance with the 10–20 system, in order to stimulate the dorsolateral prefrontal cortex (DLPFC) bilaterally. The intensity and duration of stimulation were set to 1.5 mA and 20 minutes (with a 10-second fade-in/fade-out), respectively.
Follow-up and evaluation of response to treatment: All patients were instructed to visit the hospital at monthly intervals. Responses on the THI, BDI, and VAS were checked at each visit until the end of treatment. The initial severity of tinnitus-related distress was classified as slight (THI score 0–16), mild (18–36), moderate (38–56), severe (58–76), or catastrophic (78–100). Patients with a final improvement in THI score of ≥ 20 were defined as treatment responders, and those with a final improvement of < 20 were considered non-responders. Statistical analysis: The χ2 and Fisher's exact tests were used to compare the data between groups and to analyze trends. Numerical data are presented as the mean, standard deviation, and range unless otherwise stated. The Shapiro-Wilk test was performed to test the data for normality. The paired t-test was used to compare the pretreatment and post-treatment THI, BDI, and VAS scores. Pearson correlation analysis was used to assess the relationship between the number of tDCS sessions and the change in THI score.
All statistical analyses were performed using SPSS version 25.0 software (IBM Corp., Armonk, NY, USA). A P value < 0.05 was considered statistically significant. Ethics statement: This study was performed in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board (IRB) of Eulji University (EMC IRB 2018-06-010). The IRB granted a waiver of written informed consent for this study.
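The severity grading and responder definition described above are simple threshold rules. As an illustration, a minimal sketch (the helper names are ours, not from the study; the cut-points follow the paper):

```python
def thi_severity(score: int) -> str:
    """Map a THI score (0-100) to the severity grade used in the study:
    slight 0-16, mild 18-36, moderate 38-56, severe 58-76, catastrophic 78-100."""
    if not 0 <= score <= 100:
        raise ValueError("THI score must be between 0 and 100")
    if score <= 16:
        return "slight"
    if score <= 36:
        return "mild"
    if score <= 56:
        return "moderate"
    if score <= 76:
        return "severe"
    return "catastrophic"


def is_responder(initial_thi: int, final_thi: int) -> bool:
    """A patient is a treatment responder if the THI improved by >= 20 points."""
    return (initial_thi - final_thi) >= 20
```

The THI's 25 items are each scored 0, 2, or 4, so total scores are even and the apparent gaps between grade boundaries (e.g., 16 vs. 18) are never hit in practice.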
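The paired pre-/post-treatment comparison of the THI, BDI, and VAS scores was performed in SPSS; as a dependency-free illustration of the underlying computation (toy numbers below, not the study's data):

```python
import math
from statistics import mean, stdev


def paired_t_statistic(pre, post):
    """Paired t statistic and degrees of freedom for pre- vs. post-treatment
    scores. The p-value would then be read from the t distribution (e.g. via
    scipy.stats); that step is omitted to keep the sketch dependency-free."""
    if len(pre) != len(post) or len(pre) < 2:
        raise ValueError("need paired samples of equal length >= 2")
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    standard_error = stdev(diffs) / math.sqrt(n)
    return mean(diffs) / standard_error, n - 1


# Toy THI scores (NOT the study's data); positive t means scores fell.
pre = [58, 42, 76, 38, 64]
post = [30, 40, 50, 36, 44]
t, df = paired_t_statistic(pre, post)
```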
RESULTS: Patient characteristics: Seventy patients (37 men, 33 women; mean age, 47.81 ± 14.1 [range, 18–82] years) were recruited (Table 1). Forty-four patients had unilateral tinnitus and 26 had bilateral tinnitus. The mean interval between symptom onset and treatment was 23.94 ± 53.93 (range, 3–360) months; the mean hearing level was 21.97 ± 22.00 dB on the right and 20.57 ± 20.03 dB on the left. The initial THI and BDI scores were 47.29 ± 24.474 (range, 12–100) and 12.03 ± 9.92 (range, 0–44), respectively. When the mean interval was classified according to THI score, it was longest in patients with severe handicap (Table 2). The mean initial VAS tinnitus loudness score was 4.87 ± 2.47. The mean follow-up duration was 8.29 ± 9.26 months. (Tables 1 and 2: data are presented as mean ± standard deviation for numerical variables and number [%] for nominal variables. tDCS = transcranial direct current stimulation, THI = Tinnitus Handicap Inventory, BDI = Beck Depression Inventory, VAS = Visual Analog Scale [0–10].)
Twenty-six (37.1%) of the 70 patients received 1–6 sessions of tDCS (mean, 1.04 ± 1.89) and the remaining 44 received conventional treatment. There was no significant difference in age, gender, side affected by tinnitus, history of trauma, onset, or hearing level on either side between the two study groups. The mean follow-up duration was 10.36 ± 11.77 months in the tDCS group and 6.77 ± 7.23 months in the conventional treatment group. The mean initial THI score was higher in the tDCS group than in the conventional treatment group (57.92 ± 22.16 vs. 41 ± 23.81; P = 0.004). There was no significant between-group difference in the BDI score (P > 0.05). The VAS score for tinnitus loudness was significantly higher in the tDCS group than in the conventional treatment group (5.73 ± 2.52 vs. 4.36 ± 2.314; P = 0.024).
Outcomes according to treatment method: Initially, we analyzed all of the enrolled patients' data together, irrespective of study group. The post-treatment THI and VAS scores indicated a significant reduction in patient distress and tinnitus loudness (P < 0.001). Overall, the final THI score was 28.06 ± 19.24 and the final VAS score was 3.70 ± 2.46 (Table 1). The BDI score decreased to 9.59 ± 8.85 after treatment (P = 0.006). The χ2 test for trend demonstrated that patients with an initial THI score indicating more severe tinnitus-related distress tended to be treatment responders (P < 0.001). The percentages of patients who responded to treatment were 0% (0/7) for slight handicap, 26.1% (6/23) for mild handicap, 50% (7/14) for moderate handicap, 78.6% (11/14) for severe handicap, and 75% (9/12) for catastrophic handicap. The final THI scores were then analyzed separately according to treatment group. The change in the mean THI score was greater in the tDCS group than in the conventional treatment group (28.69 ± 24.81 vs. 13.63 ± 21.59; P = 0.010) (Fig. 1).
However, there was no significant between-group difference in the mean BDI score or VAS score after treatment (P > 0.05). (Fig. 1 legend: tDCS = transcranial direct current stimulation, BDI = Beck Depression Inventory, THI = Tinnitus Handicap Inventory; aP < 0.05.) In patients whose final improvement in THI score was ≥ 20, differences were observed according to the initial degree of tinnitus severity and type of treatment received (P = 0.011; χ2). Treatment responders with initially moderate or catastrophic handicap were more likely to have received tDCS (Fig. 2); however, treatment responders with initially severe handicap were more likely to have received conventional treatment. None of the patients in the tDCS group with initially mild symptoms showed an improvement in THI of ≥ 20. However, there was no significant difference in the initial severity of tinnitus or type of treatment received among patients whose final THI score improvement was < 20 (P > 0.05). Pearson correlation analysis revealed no correlation between change in THI score and number of tDCS sessions (r = 0.068; P = 0.692). Similarly, there was no correlation between the number of tDCS sessions and the BDI or VAS score for tinnitus loudness (P > 0.05).
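The correlation between session count and THI change reported here is a plain Pearson r; a dependency-free sketch (toy numbers, not the study's data):

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences:
    covariance of the deviations divided by the product of their norms."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Toy data: number of tDCS sessions vs. change in THI score (NOT study data).
sessions = [1, 2, 3, 4, 6]
thi_change = [10, 8, 24, 6, 18]
r = pearson_r(sessions, thi_change)
```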
Prognostic factors: Conditional logistic regression analysis revealed a positive association of the initial THI score with an improvement in THI of ≥ 20 (odds ratio [OR], 1.060; 95% confidence interval [CI], 1.031–1.091; P < 0.001) and a negative association with a final THI score < 18 (OR, 0.969; CI, 0.945–0.993; P = 0.013). Thus, patients with more severe tinnitus-related distress initially were classified as responders after treatment, but those with lower initial levels of distress showed better final recovery. Unexpectedly, the regression model did not identify application of tDCS as an independent prognostic factor for improvement in the THI score.
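As a worked illustration of the reported OR of 1.060 per point of initial THI: under the usual multiplicative interpretation of a logistic-regression odds ratio, a per-unit OR compounds over a score difference, so a 20-point higher initial THI multiplies the odds of responding by about 1.060^20 ≈ 3.2 (the helper below is ours, not from the study):

```python
def cumulative_odds_ratio(or_per_unit: float, delta: float) -> float:
    """Odds ratio implied over a delta-unit difference, given a per-unit OR
    from logistic regression (odds multiply, so the per-unit OR compounds)."""
    return or_per_unit ** delta


# Per the study, OR = 1.060 per point of initial THI for THI improvement >= 20:
print(cumulative_odds_ratio(1.060, 20))  # roughly 3.2
```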
DISCUSSION: In this study, we found that tinnitus-related distress was alleviated in some patients treated with additional bifrontal tDCS targeting the DLPFC, but that application of tDCS was not in itself an independent prognostic factor. These findings suggest that the therapeutic effect of bifrontal tDCS may be weak and that the severity of tinnitus-related distress may vary according to involvement of the DLPFC. It has been reported that the DLPFC plays a role in tinnitus by facilitating auditory memory and modulating input to the primary auditory cortex, leading to top-down inhibitory modulation of auditory processing.89 The number of tDCS sessions was not correlated with the THI, BDI, or VAS scores, suggesting that any dose-dependent effect of bifrontal tDCS may be limited.
Previous studies have shown that applying rTMS to both the prefrontal cortex and the auditory cortex yields better results than stimulating either region alone.1011 Similarly, applying tRNS to the auditory cortex after bifrontal tDCS resulted in a significant decrease in both tinnitus loudness and tinnitus-related distress.12 A single session of tRNS applied to the auditory cortex was reported to achieve a significant decrease in tinnitus loudness and tinnitus-related distress, and multiple tRNS sessions seemed to be more effective than a single session or an alternative method of current stimulation.13 In view of the evidence thus far, further study is needed to investigate the effects of multi-site stimulation and of combined tDCS-tRNS beyond bifrontal tDCS. In contrast to the THI results, there was no statistically significant treatment-related difference in either the change in or the final BDI and VAS scores. This suggests that bifrontal tDCS may be less able to control depressive symptoms or reduce the loudness of tinnitus than expected. One research group compared the efficacy of tDCS with that of escitalopram, one of the most commonly used serotonin reuptake inhibitors, in patients with major depressive disorder.14 Although 22 sessions of tDCS were superior to placebo, some patients developed new-onset mania as well as itching and skin redness; furthermore, escitalopram was more effective than tDCS in that study. Therefore, medical treatment should be considered before tDCS in patients with tinnitus and severe depression, even though a recent guideline cautions against routine prescription of antidepressants, anticonvulsants, or anxiolytics.15 Interestingly, all treatment responders who initially had mild tinnitus received conventional treatment alone. This suggests that bifrontal tDCS confers no additional benefit in patients with mild tinnitus-related distress.
Moreover, patients who responded to bifrontal tDCS were more likely to have moderate or catastrophic handicap than severe handicap. We assume that this finding, although not statistically significant, might reflect the longer duration of tinnitus in the severe-handicap subgroup than in the other subgroups. A factor analysis study of rTMS reported that tinnitus was suppressed in patients with a shorter duration of tinnitus, normal hearing, and no sleep disturbance, and suggested that the central auditory network becomes less plastic over time,16 which supports our assumption. Patients with severe tinnitus-related distress of short duration may therefore be ideal candidates for neuromodulation. The location of the surface electrodes is another issue. To stimulate the bifrontal cortex, the electrodes have traditionally been placed in one of two ways, i.e., a "left-anode-right-cathode montage" or a "right-anode-left-cathode montage."7 The former has been reported to achieve greater improvement of the depressive symptoms accompanying tinnitus, while the latter alleviates anxiety.617 However, we could not find evidence in our data to support this concept. High-definition tDCS using a 4 × 1 ring electrode targeting the right DLPFC has been developed for more precise current delivery.18 A recent study also suggested the possibility of stimulating the anterior cingulate cortex with high-definition tDCS; this structure is located deep in the brain, and its function is related to emotional experience and tinnitus-related distress.1920 A previous study of rTMS demonstrated that mediofrontal stimulation of the anterior cingulate cortex with a double-cone coil before rTMS to the auditory cortex achieved better results than conventional rTMS targeting the left DLPFC and auditory cortex.20 Therefore, future tDCS studies should concentrate on using high-definition tDCS to stimulate various brain regions more precisely. This study has several limitations.
First, our analysis was based on a retrospective medical chart review. Second, not all patients complied with conventional treatment; for example, some patients refused the sound generator or hearing aids because of the additional cost involved or because they felt self-conscious wearing these devices. Third, more objective tools for assessing tinnitus, such as radiologic evaluation or electroencephalography, were not used. Nevertheless, we were able to assess the clinical importance of bifrontal tDCS by observing the natural course of tinnitus in this retrospective cohort. Finally, the number of tDCS sessions performed was relatively small; even so, some patients with a higher tinnitus handicap showed a favorable treatment outcome. Given that tinnitus tends to wax and wane, adjunctive application of bifrontal tDCS at times when tinnitus is severe may help to alleviate tinnitus-related distress. In summary, patients with tinnitus who received conventional treatment plus bifrontal tDCS showed a significant reduction in THI score but not in VAS tinnitus loudness or BDI scores. In particular, patients treated with tDCS who initially had moderate or catastrophic handicap were more likely to show improvement in their THI score. However, application of additional bifrontal tDCS was not identified as a significant prognostic factor for improvement of the THI score. Bifrontal tDCS can be used as an adjunctive treatment for severe tinnitus. Although this treatment modality does not reduce tinnitus loudness, adding bifrontal tDCS to conventional treatment alleviates the distress associated with tinnitus in some patients with moderate or catastrophic handicap.
Background: This study assessed the therapeutic effect of adjunctive bifrontal transcranial direct current stimulation (tDCS) in patients with tinnitus. Methods: Seventy patients who visited our university hospital with a complaint of non-pulsatile subjective tinnitus between January and December 2016 were enrolled. All patients received directive counseling and sound therapy, such as a sound generator or hearing aids, and/or oral clonazepam. Patients who agreed to undergo additional bifrontal tDCS were classified as the study group (n = 26). For tDCS, 1.5 mA of direct current was applied to the prefrontal cortex according to the 10-20 EEG system for 20 minutes per session. Results: The Tinnitus Handicap Inventory (THI), Beck Depression Inventory, and Visual Analog Scale (VAS) scores decreased significantly after treatment (P < 0.001). Patients who had a moderate or catastrophic handicap were significantly more likely to respond favorably to bifrontal tDCS (P = 0.026). There was no correlation of number of tDCS sessions with change in the THI or VAS score (P > 0.05). Logistic regression analysis revealed that the initial THI score was independently associated with improvement in the THI. However, tDCS was not a significant determinant of recovery. Conclusions: tDCS can be used as an adjunctive treatment in patients with severe tinnitus. Although tDCS did not decrease the loudness of tinnitus, it could alleviate the distress associated with the condition in some patients with a moderate or catastrophic handicap.
null
null
6,366
280
12
[ "tinnitus", "treatment", "tdcs", "thi", "score", "patients", "group", "thi score", "handicap", "mean" ]
[ "test", "test" ]
null
null
[CONTENT] Adjunctive Bifrontal Transcranial Direct Current Stimulation | Distress | Tinnitus [SUMMARY]
[CONTENT] Adjunctive Bifrontal Transcranial Direct Current Stimulation | Distress | Tinnitus [SUMMARY]
[CONTENT] Adjunctive Bifrontal Transcranial Direct Current Stimulation | Distress | Tinnitus [SUMMARY]
null
[CONTENT] Adjunctive Bifrontal Transcranial Direct Current Stimulation | Distress | Tinnitus [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Electrodes | Female | Humans | Logistic Models | Male | Middle Aged | Prognosis | Severity of Illness Index | Tinnitus | Transcranial Direct Current Stimulation | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Electrodes | Female | Humans | Logistic Models | Male | Middle Aged | Prognosis | Severity of Illness Index | Tinnitus | Transcranial Direct Current Stimulation | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Electrodes | Female | Humans | Logistic Models | Male | Middle Aged | Prognosis | Severity of Illness Index | Tinnitus | Transcranial Direct Current Stimulation | Treatment Outcome | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Electrodes | Female | Humans | Logistic Models | Male | Middle Aged | Prognosis | Severity of Illness Index | Tinnitus | Transcranial Direct Current Stimulation | Treatment Outcome | Young Adult [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] tinnitus | treatment | tdcs | thi | score | patients | group | thi score | handicap | mean [SUMMARY]
[CONTENT] tinnitus | treatment | tdcs | thi | score | patients | group | thi score | handicap | mean [SUMMARY]
[CONTENT] tinnitus | treatment | tdcs | thi | score | patients | group | thi score | handicap | mean [SUMMARY]
null
[CONTENT] tinnitus | treatment | tdcs | thi | score | patients | group | thi score | handicap | mean [SUMMARY]
null
[CONTENT] tinnitus | brain | auditory | cortex | excitability | networks | individuals | reported | tdcs | study [SUMMARY]
[CONTENT] treatment | irb | thi | tdcs | patients | group | test | performed | 20 | tinnitus [SUMMARY]
[CONTENT] score | thi | handicap | mean | treatment | group | tinnitus | tdcs | thi score | initial [SUMMARY]
null
[CONTENT] tinnitus | tdcs | thi | treatment | score | patients | group | handicap | mean | thi score [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] Forty-four | January through December 2016 ||| ||| 26 ||| 1.5 | 10 | EEG | 20 minutes [SUMMARY]
[CONTENT] The Tinnitus Handicap Inventory | Beck Depression Inventory | Visual ||| ||| 0.026 ||| THI | 0.05 ||| THI | THI ||| [SUMMARY]
null
[CONTENT] ||| ||| Forty-four | January through December 2016 ||| ||| 26 ||| 1.5 | 10 | EEG | 20 minutes ||| The Tinnitus Handicap Inventory | Beck Depression Inventory | Visual ||| ||| 0.026 ||| THI | 0.05 ||| THI | THI ||| ||| ||| [SUMMARY]
null
Preventing the recurrence of acute anorectal abscesses utilizing a loose seton: a pilot study.
32341739
This pilot study aimed to document our results of treating anorectal abscesses with drainage plus loose seton for possible coexisting high fistulas or drainage plus fistulotomy for low tracts at the same operation.
INTRODUCTION
Drainage plus fistulotomy were performed only in cases with subcutaneous mucosa, intersphincteric, or apparently low transsphincteric fistula tracts. For all other cases with high transsphincteric fistula or those with questionable sphincter involvement, a loose seton was placed through the tract. Drainage only was carried out in 17 patients.
METHODS
Twenty-three patients underwent drainage plus loose seton. Drainage plus fistulotomy were performed in four cases. None of the patients developed recurrent abscess during a follow-up of 12 months. Not surprisingly, the incontinence scores were similar pre and post-operatively (p=0.564). Only minor complications occurred in 4 cases (14.8 percent). Secondary interventions following loose seton were carried out in 13 patients (48.1 percent). At 12 months, drainage only was followed by 10 recurrences (58.8 percent; p<0.0001, compared with concomitant surgery).
RESULTS
Concomitant loose seton treatment of high fistula tracts associated with anorectal abscess prevents abscess recurrence without significant complications or disturbance of continence. Concomitant fistulotomy for associated low fistulas also aids in the same clinical outcome. Concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases. Even in patients with high fistula tracts, the loose seton makes fistula surgery simpler with a mature tract. Abscess recurrence is high after drainage only.
CONCLUSION
[ "Abscess", "Adult", "Anus Diseases", "Digestive System Surgical Procedures", "Drainage", "Female", "Humans", "Male", "Middle Aged", "Pilot Projects", "Postoperative Complications", "Rectal Diseases", "Rectal Fistula", "Recurrence", "Secondary Prevention", "Treatment Outcome" ]
7170736
Introduction
Concurrent definitive treatment of underlying fistulas at the time when the anorectal abscesses are drained is a controversial issue. Fistula-in-ano can increase the likelihood of abscess recurrence which then requires repeat drainage. Even with no recurrent abscess, further interventions may be needed for fistula-related symptoms such as discharge or perianal soreness. Regarding the pathogenetic association of the crypto-glandular abscess and fistula, attempting to eliminate the whole disease process concomitantly with the drainage of the abscess might be a reasonable approach mainly because i) the development of recurrent abscesses and repeated surgical drainages may be prevented and ii) eventual fistula surgery with additional anesthesia may be omitted. However, drawbacks exist based on the suggestions that i) some abscesses will not recur or evolve to become a fistula and ii) a combined procedure has a greater risk of anal incontinence. Nevertheless, the persistent entity of anorectal abscess and fistula is a common problem contributing significantly to the daily surgical workload, as well as expenses, days off work, and quality of life [1]. In this respect, there is a growing inclination and effort to treat any coexisting fistula tract concomitantly with the drainage of the abscess. Although about one-third of perianal abscesses were thought to be associated with fistula-in-ano [1-3], more recent studies aiming to detect any coexisting fistula tract have reported much higher rates (80-90 percent) of finding a fistula with a perianal abscess [4-6]. Two meta-analyses showed significant reductions in recurrence, persistent abscess/fistula or repeat surgery in favor of fistula surgery at the time of abscess incision and drainage (I&D) [7,8]. Even in children surgically treated for first-time perianal abscess, recurrence rates appear to be lowered by locating and treating coexisting fistulas [9].
Stratifying this group of patients, Oliver and coworkers recommended concomitant fistulotomy for subcutaneous, intersphincteric or low transsphincteric tracts, but not for high fistulas [6]. Likewise, the clinical practice guidelines of the American Society of Colon and Rectal Surgeons (ASCRS) have suggested that fistulotomy may be performed when a simple fistula is encountered during I&D [10]. This study aimed to document our results of treating anorectal abscesses with drainage plus loose seton for possible coexisting high or suspect fistula tracts, or drainage plus fistulotomy for low fistulas, at the same operation. Our rationale was to identify the surgical treatment of choice for anorectal abscesses that would offer the lowest recurrence rate without affecting continence.
Methods
Patients: during a 12-month period starting in April 2016, 44 adult patients with primary or recurrent anorectal abscess, who fulfilled the inclusion criteria, were analyzed and documented on previously designed, standard forms. Twenty-nine of them presented to the Proctology Clinic and were treated by or under the supervision of the dedicated proctologists, aiming to treat any coexisting fistula tract as well. During the same time period, 15 patients who presented to the emergency unit on weekends or vacations were treated by the surgical registrars with abscess drainage only. Whenever possible, an urgent pelvic MRI was also performed and evaluated together with the dedicated radiologist, especially for clues to a coexisting fistulous tract. Patients with supralevator or horseshoe abscesses, diabetes or other endocrine disorders, allergy to latex, and those previously diagnosed as having Crohn’s disease, ulcerative colitis, or colorectal cancer were excluded. Informed consent was obtained from all patients. Definitions and surgery: the primary endpoint was abscess recurrence. The secondary endpoints were incontinence scores and any surgical complications. Any abscess developing twice in the same quadrant was considered recurrent. A fistula was considered high transsphincteric if the tract involved more than 30-40% of the thickness of the external anal sphincter. Almost all patients underwent surgery under spinal/epidural anesthesia and in the prone jackknife position. Antibiotics were used selectively for patients with anorectal abscess complicated by cellulitis, induration, or overt signs of systemic sepsis, as well as those described by the American Heart Association [11]. Surgical treatment of the abscess consisted of incision with monopolar diathermy at the point of maximum fluctuation followed by circular excision of the skin, abscess debridement, and a search for possible compartments.
Before incision and drainage, an attempt was made to locate the internal opening, which could occasionally be identified when, under slight pressure above the abscess, purulent material was found oozing from a corresponding crypt (Figure 1). However, it may be difficult to determine the internal opening due to the presence of edema and obliteration by inflammatory debris. Any coexisting fistula tract was further searched for with the help of the MRI findings, following Goodsall’s rule (fistulous abscesses in the posterior half generally follow a curved course towards the posterior midline, whereas those with an external opening in the anterior half follow a radial course) and/or by gentle probing, avoiding forceful maneuvers (Figure 1). Drainage plus fistulotomy were performed only in cases with subcutaneous-mucosa, intersphincteric or apparently low transsphincteric fistula tracts. For all other cases with high transsphincteric fistulae or those with questionable sphincter involvement, a single-strand, relatively thin, soft and elastic loose seton was placed through the tract (Figure 1). This seton was created by cutting a thin (2-3mm) circular strip from a surgical glove (latex surgical glove, Beybi™, Beybi Plastic Factory, Istanbul, Turkey), including its thicker sleeve. (Figure 1 caption: left anterior abscess; (A) note purulent drainage from the internal opening; (B) drainage plus identification of the fistula; (C) conversion to hybrid seton at 12 months.) Follow-up: postoperatively, the patients were re-examined at 7 and 28 days and at 3, 6 and 12 months. They were encouraged to contact us whenever they suspected a problem. The analysis was based on one year of follow-up. At the 6- and/or 12-month follow-up, the patients with loose setons were informed about the possibility of eliminating the seton by various methods depending on the final topographic features of the tract.
The technique and possible complications of removing the seton, fistulotomy, conversion to a hybrid (elastic cutting) seton, or ligation of the intersphincteric fistula tract (LIFT) were discussed in detail. Of the patients who underwent loose seton treatment, only those who kept their seton for 12 months were analyzed. After definitive fistula surgery or removal of the loose seton at 12 months, the Wexner incontinence scores and further complications were recorded again at 6 months [12]. Statistical analysis: SPSS software v23.0 (IBM Inc., Armonk, New York, USA) was used. Data were expressed as the mean ± standard deviation. The chi-square test was used for the comparison of the independent variables and the Wilcoxon test for the dependent groups. A p-value <0.05 was considered significant.
Results
A total of 44 patients treated for anorectal abscess (33 males, mean age 39.1 ± 10.3) were followed for 12 months. Twenty-seven (20 males, mean age 39.3 ± 9.3) underwent concomitant surgery (CS) for a co-existing fistula tract simultaneously with abscess drainage. In only two patients (6.9 percent) assigned to drainage with fistula treatment, the internal opening could not be identified, and these patients were regarded as I&D only. The detailed distribution of the patients is outlined in Figure 2. Of the 27 CS patients, 17 had had previous abscess drainages and/or fistula. Four (14.8 percent) had low fistula tracts. Two of them had developed following lateral internal sphincterotomy, and all four were treated with fistulotomy. Twenty-three patients underwent drainage plus loose seton. Five patients (18.5 percent) kept their loose seton for longer than 12 months: one was diagnosed with Crohn’s disease postoperatively, two were anterior transsphincteric fistulas in women, and two patients simply chose to proceed with their seton. The seton was removed in another four patients (14.8 percent) at the 12-month follow-up. It dropped out accidentally in a single patient at 5 months. Definitive fistula surgery was carried out in 13 patients (48.1 percent) by the 12-month follow-up. (Figure 2 caption: the detailed distribution of the patients treated for anorectal abscess.) Ten underwent fistulotomy or conversion to a hybrid seton, as described previously, because the tract had matured to a safe and superficial topography [13] (Figure 3). Three patients (11.1 percent) underwent LIFT because of high fistula tracts. None of the patients in the CS group developed recurrent abscess during a follow-up of 12 months. Not surprisingly, the CCIS (Wexner) scores were similar in 21 patients at 12 months (0.29 ± 0.86 vs. 0.33 ± 0.83; p=0.564). However, 6 months following definitive treatment, a statistically significant difference occurred (0.29 ± 0.86 vs. 0.63 ± 0.96; p<0.0001).
Only minor complications occurred (4 cases, 14.8 percent: urinary retention and one arrhythmia). One patient, who eventually underwent hybrid seton treatment at 12 months, and the patient who accidentally lost the seton developed fistulas (resulting in a final healing rate of 92.6 percent). A single patient who underwent secondary fistulotomy required hemostasis with electrocautery for bleeding. Incision and drainage were carried out in a total of 17 patients. Ten patients had had previous abscess drainages and/or fistula. The demographics of the two groups were similar. In a year, 10 recurrences (58.8 percent) were noted. Three patients (17.6 percent) had fistulas without any intercurrent abscess, and only 4 healed completely (23.5 percent). It is noteworthy that all these healed cases had experienced an abscess for the first time. This recurrence rate was significantly higher than that of the CS group (p<0.0001; Figure 4). Although most of these recurrences were treated by concomitant fistula surgery, they were not re-entered into the study. (Figure 3 caption: (A) drainage of the anorectal abscess with identification of the internal opening; (B) mature fistula tract became superficial at 12 months; (C) lay-open of the resultant simple fistula. Figure 4 caption: the recurrence rates of the concomitant surgery and incision & drainage groups.)
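The headline comparison (10/17 recurrences after drainage only vs. 0/27 with concomitant surgery) can be checked against the chi-square test named in the methods. A minimal, stdlib-only Python sketch (the function name `chi2_2x2` is ours, not from the paper; note that with a zero cell, Fisher's exact test would be the more conservative choice):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (1 df, no continuity
    correction) for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of the chi-square distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2)), since X is the square of a standard normal.
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# Recurrence at 12 months: 10 of 17 after drainage only vs. 0 of 27
# after concomitant fistula surgery (values from the article).
chi2, p = chi2_2x2(10, 7, 0, 27)
print(f"chi2 = {chi2:.2f}, p = {p:.1e}")  # p < 0.0001, consistent with the report
```

This reproduces the reported significance level; the exact statistic is not given in the paper, so only the p < 0.0001 claim is verified here.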
Conclusion
In conclusion, the results of this study should inspire further controlled trials, suggesting that: i) associated fistula tracts can be identified in the majority of cases with acute anorectal abscess; ii) loose seton treatment of high fistula tracts associated with anorectal abscess prevents abscess recurrence without complications or disturbance of continence, and concomitant fistulotomy for associated low fistulas also aids in a similar clinical outcome; iii) concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases; iv) after the inflammation resolves, the seton provides a mature and usually more superficial tract, so an urgent, septic condition is converted to an elective one for tertiary centers; and v) abscess recurrence is high after I&D only. What is known about this topic: perianal abscess recurrence is possible due to untreated fistula if only simple drainage is performed; combined surgery for the treatment of abscess and fistula at the same operation can cause fecal incontinence. What this study adds: utilizing a loose seton can prevent recurrence of abscess and avoids or facilitates fistula surgery during follow-up without major complications.
[ "What is known about this topic", "What this study adds" ]
[ "Perianal abscess recurrence is possible due to untreated fistula if only simple drainage is performed;\nCombined surgery for the treatment of abscess and fistula at the same operation can cause fecal incontinence.", "Utilizing a loose seton can prevent recurrence of abscess and avoids or facilitates fistula surgery during follow-up without major complications." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds" ]
[ "Concurrent definitive treatment of underlying fistulas at the time when the anorectal abscesses are drained is a controversial issue. Fistula-in-ano can increase the likelihood of abscess recurrence which then requires repeat drainage. Even with no recurrent abscess, further interventions may be needed for fistula-related symptoms such as discharge or perianal soreness. Regarding the pathogenetic association of the crypto-glandular abscess and fistula, attempting to eliminate the whole disease process concomitantly with the drainage of the abscess might be a reasonable approach mainly because i) the development of recurrent abscesses and repeated surgical drainages may be prevented and ii) eventual fistula surgery with additional anesthesia may be omitted. However, drawbacks exist based on the suggestions that i) some abscesses will not recur or evolve to become a fistula and ii) a combined procedure has a greater risk of anal incontinence. Nevertheless, the persistent entity of anorectal abscess and fistula is a common problem contributing significantly to the daily surgical workload, as well as expenses, day off work, and quality of life [1].\nIn this respect, there are growing inclination and efforts to treat any coexisting fistula tract concomitantly with the drainage of the abscess. Although about one-thirds of perianal abscesses were thought to be associated with fistula-in-ano [1-3], more recent studies aiming to detect any coexisting fistula tract have reported much higher rates (80-90 percent) of finding a fistula with a perianal abscess [4-6]. Two meta-analyses showed significant reductions in recurrence, persistent abscess/fistula or repeat surgery in favor of fistula surgery at the time of abscess incision and drainage (I&D) [7,8]. Even in children surgically treated for first-time perianal abscess, recurrence rates appear to be lowered by locating and treating coexisting fistulas [9]. 
Stratifying this group of patients, Oliver and coworkers recommended concomitant fistulotomy for subcutaneous, intersphincteric or low transsphincteric tracts, but not for high fistulas [6]. Likewise, the clinical practice guidelines of the American Society of Colon and Rectal Surgeons (ASCRS) have suggested that fistulotomy may be performed when a simple fistula is encountered during I&D [10]. This study aimed to document our results of treating anorectal abscesses with drainage plus loose seton for possible coexisting high or suspect fistula tracts or drainage plus fistulotomy for low fistulas at the same operation. Our rationale was to approach the surgical treatment of choice in anorectal abscesses which would offer the lowest recurrence without affecting continence.", "Patients: during a 12-month period starting in April 2016, 44 adult patients with primary or recurrent anorectal abscess, who fulfilled the inclusion criteria, were analyzed and documented on previously designed, standard forms. Twenty-nine of them applied to the Proctology Clinic and they were treated by or under the supervision of the dedicated proctologists, aiming to treat any coexisting fistula tract, as well. During the same time period, 15 patients who applied to the emergency unit on weekends or vacations were treated by the surgical registrars only by abscess drainage. Whenever possible, an urgent pelvic MRI was also performed and evaluated together with the dedicated radiologist, especially for clues for a coexisting fistulous tract. Patients with supralevator or horseshoe abscesses, diabetes or other endocrine disorders, allergy to latex, and those previously diagnosed as having Crohn’s disease, ulcerative colitis, or colorectal cancer were excluded. Informed consent was obtained from all patients.\nDefinitions and surgery: the primary endpoint was abscess recurrence. The secondary endpoints were incontinence scores and any surgical complications. 
Any abscess developing twice in the same quadrant was considered recurrent. A fistula was considered high transsphincteric if the track involved more than 30-40% of the thickness of the external anal sphincter. Almost all patients underwent surgery under spinal/epidural anesthesia and in prone jackknife position. Antibiotics were used selectively for patients with anorectal abscess complicated by cellulitis, induration, or overt signs of systemic sepsis, as well as those described by the American Heart Association [11]. Surgical treatment of the abscess consisted of incision with monopolar diathermy at the point of maximum fluctuation followed by circular excision of the skin, abscess debridement, and search for possible compartments. Before incision and drainage, an attempt was made to locate the internal opening, which could occasionally be identified when, under slight pressure above the abscess, purulent material was found oozing from a corresponding crypt (Figure 1). However, it may be difficult to determine the internal opening due to the presence of edema and obliteration by inflammatory debris. Any coexisting fistula tract was further searched with the help of the MRI findings, following Goodsall’s rule (fistulous abscesses in the posterior half generally follow a curved course towards the posterior midline, whereas those with an external opening in the anterior half follow a radial course) and/or by gentle probing, avoiding forceful maneuvers (Figure 1). Drainage plus fistulotomy were performed only in cases with subcutaneous-mucosa, intersphincteric or apparently low transsphincteric fistula tracks. For all other cases with high transsphincteric fistulae or those with questionable sphincter involvement, a single-strand, relatively thin, soft and elastic loose seton was placed through the tract (Figure 1). 
This seton was created by cutting a thin (2-3mm) circular strip from a surgical glove (latex surgical glove, Beybi™, Beybi Plastic Factory, Istanbul, Turkey), including its thicker sleeve.\nLeft anterior abscess; (A) note purulent drainage from the internal opening; (B) drainage plus identification of the fistula; (C) conversion to hybrid seton at 12 months\nFollow-up: postoperatively, the patients were re-examined at 7 and 28 days, 3, 6 and 12 months. They were encouraged to contact whenever they suspected a problem. The analysis aimed at one-year’s follow-up. At 6-month´s and/or 12-month’s follow-up, the patients with loose setons were informed about the possibility of eliminating the seton by various methods depending on the final topographic features of the tract. The technique and possible complications of removing the seton, fistulotomy, conversion to hybrid (elastic cutting) seton, or ligation of intersphincteric fistula tract (LIFT) were discussed in detail. Of the patients who underwent loose seton treatment, only those who kept their seton for 12 months were analyzed. After definitive fistula surgery or removal of the loose seton at 12 months, the Wexner incontinence scores and further complications were recorded again at 6 months [12].\nStatistical analysis: SPSS software v23.0 (IBM Inc., Armonk, New York, USA) was used. Data were defined as the mean ± standard deviation. Chi-square test was used for the comparison of the independent variables and Wilcoxon test for the dependent groups. A p-value <0.05 was considered significant.", "A total of 44 patients treated for anorectal abscess (33 males, mean age 39.1 ± 10.3) were followed for 12 months. Twenty-seven (20 males, mean age 39.3 ± 9.3) underwent concomitant surgery (CS) for co-existing fistula tract simultaneously with abscess drainage. 
In only two patients (6.9 percent) assigned to drainage with fistula treatment, the internal opening couldn´t be identified, and these patients were regarded as I&D only. The detailed distribution of the patients is outlined in Figure 2. Of the 27 CS patients, 17 had had previous abscess drainages and/or fistula. Four (14.8 percent) had low fistula tracts. Two of them had developed following lateral internal sphincterotomy and all four were treated with fistulotomy. Twenty-three patients underwent drainage plus loose seton. Five patients (18.5 percent) kept their loose seton for longer than 12 months. One was diagnosed to have Crohn’s postoperatively, two were anterior transsphincteric fistula in women and two patients simply chose to proceed with their seton. The seton was removed in another four patients (14.8 percent) at 12-month’s follow-up. It dropped accidentally in a single patient at 5 months. Definitive fistula surgery was carried out in 13 patients (48.1 percent) 12-month’s follow-up.\nThe detailed distribution of the patients treated for anorectal abscess\nTen underwent fistulotomy or conversion to a hybrid seton, as described previously, because the tract had matured to a safe and superficial topography [13] (Figure 3). Three patients (11.1 percent) underwent LIFT because of high fistula tracts. None of the patients in the CS group developed recurrent abscess during a follow-up of 12 months. Not surprisingly, the CCIS scores were identical in 21 patients at 12 months (0.29 ± 0.86 vs. 0.33 ± 0.83; p=0.564). However, 6 months following definitive treatment, a statistically significant difference occurred (0.29 ± 0.86 vs. 0.63 ± 0.96; p<0.0001). Only minor complications occurred (4 cases-14.8 percent of urinary retention and one arrhythmia). One patient, who eventually underwent hybrid seton treatment at 12 months, and the patient who accidentally lost the seton developed fistulas (resulting in a final healing rate of 92.6 percent). 
A single patient who underwent secondary fistulotomy required hemostasis for bleeding with electrocautery. Incision and drainage were carried out in a total of 17 patients. Ten patients had had previous abscess drainages and/or fistula. The demographics of the two groups were similar. In a year, 10 recurrences (58.8 percent) were noted. Three patients (17.6 percent) had fistulas without any intercurrent abscess, and only 4 healed completely (23.5 percent). It´s noteworthy that all these healed cases had experienced an abscess for the first time. This recurrence rate was significantly higher than that of the CS group p<0.0001 (Figure 4). Although most of these recurrences were treated by concomitant fistula surgery, they were not reinstated further in the study.\n(A) drainage of the anorectal abscess with identification of the internal opening; (B) mature fistula tract became superficial at 12 months; (C) lay-open of the resultant simple fistula\nThe recurrence rates of the concomitant surgery and incision & drainage groups", "The results of our study have suggested that identification and loose seton treatment of high fistula tracts associated with anorectal abscess prevents abscess recurrence without significant complications or disturbance of continence. Concomitant fistulotomy for associated low fistulas also aids in the same clinical outcome. A fistula tract was identified in 93.1 percent of the cases. Although a draining seton has been suggested to be a safe and acceptable treatment as an alternative to ‘primary’ fistulotomy, our study is probably the first to present data in this setting [10]. The loose seton prevented abscess recurrences in all patient within the time limits. As expected, no specific complications or anal incontinence occurred. The literature and guidelines have been rather reluctant to suggest definitive fistula surgery together with abscess drainage [10,14]. 
The fact that concomitant fistula treatment has largely been limited to fistulotomy or cutting seton has contributed negatively to the development of this reasonable strategy. Although Read mentioned performing a primary fistulotomy for all their perianal abscesses if a fistula was found concomitantly, Hebjorn did the first controlled trial [2,4]. Their technique was not exactly synchronous drainage plus fistulotomy and the 40 percent rate of minor continence problems reported after CS put shade on a more liberal approach for many years. Oliver and coworkers randomized a large group of patients [6]; however, more than two-thirds of their patients surprisingly had subcutaneous or intersphincteric abscesses. This patient distribution is different to our findings and knowledge that anorectal abscesses are more common in the perianal and ischiorectal spaces, and less common in the intersphincteric, supralevator, and submucosal locations [2,15].\nA major problem with CS is the fact that anorectal abscesses are defined by the anatomic space in which they develop while the topographic features of a fistula are defined considering sphincter involvement [2,15]. Our strategy of treating any associated suspect fistula tract with the loose seton appears to solve this possible misinterpretation problem considering sphincter involvement, as well as the related and feared possibility of disturbance of continence. Even if the patient eventually turns out to harbor Crohn’s disease, the strategy is valid. In addition to preventing abscess recurrence, concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases. Removal of the seton in four patients (15 percent) at 12 months due to their refusal for further surgery resulted in healing within the time limits. Although this strategy was supported by the recent study of Oluwatomilayo and coworkers, we are yet far from suggesting it as a standard procedure [16]. 
More important was the observation that after the inflammation resolved, the seton provided a mature and eventually more superficial tract in 37 percent of the patients. Simple lay-open or conversion to a hybrid seton was then possible on an outpatient basis. Even in patients with high fistula tracts, the operation was simpler with a mature tract. The loose seton converts a septic condition that may recur to an elective one that can be treated in tertiary centers. The surgical approach to fistula is a vast topic beyond the aim of this trial, and it may naturally affect continence. A control group was, by definition, not intended. However, 17 patients who underwent I&D only accumulated during the study. The recurrence rates as reported in the literature are almost exclusively based on data obtained from retrospective studies. Therefore, the exact number of recurrent abscesses and persistent fistulas after I&D is still unknown. We noted a 58.8 percent rate of abscess recurrence in a year and this rate will probably increase in long term.", "In conclusion, further controlled trials are inspired by the results of this study suggesting that: i) associated fistula tracts can be identified in the majority of cases with acute anorectal abscess; ii) loose seton treatment of high fistula tracts associated with anorectal abscess prevents abscess recurrence without complications or disturbance of continence. Concomitant fistulotomy for associated low fistulas also aids in similar clinical outcome; iii) concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases; iv) after the inflammation resolves, the seton provides a mature and usually a more superficial tract. 
Therefore, an urgent, septic condition is converted to an elective one for tertiary centers; and v) abscess recurrence is high after I&D only.\n What is known about this topic Perianal abscess recurrence is possible due to an untreated fistula if only simple drainage is performed;\nCombined surgery for the treatment of abscess and fistula at the same operation can cause fecal incontinence.\n What this study adds Utilising a loose seton can prevent recurrence of abscess and avoids or facilitates fistula surgery during follow-up without major complications.", "Perianal abscess recurrence is possible due to an untreated fistula if only simple drainage is performed;\nCombined surgery for the treatment of abscess and fistula at the same operation can cause fecal incontinence.", "Utilising a loose seton can prevent recurrence of abscess and avoids or facilitates fistula surgery during follow-up without major complications." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null ]
[ "Anorectal abscess", "fistula-in-ano", "seton" ]
Introduction: Concurrent definitive treatment of underlying fistulas at the time when anorectal abscesses are drained is a controversial issue. Fistula-in-ano can increase the likelihood of abscess recurrence, which then requires repeat drainage. Even with no recurrent abscess, further interventions may be needed for fistula-related symptoms such as discharge or perianal soreness. Given the pathogenetic association of the crypto-glandular abscess and fistula, attempting to eliminate the whole disease process concomitantly with the drainage of the abscess might be a reasonable approach, mainly because i) the development of recurrent abscesses and repeated surgical drainages may be prevented and ii) eventual fistula surgery with additional anesthesia may be omitted. However, drawbacks exist based on the suggestions that i) some abscesses will not recur or evolve to become a fistula and ii) a combined procedure carries a greater risk of anal incontinence. Nevertheless, the persistent entity of anorectal abscess and fistula is a common problem contributing significantly to the daily surgical workload, as well as to expenses, days off work, and quality of life [1]. In this respect, there is a growing inclination to treat any coexisting fistula tract concomitantly with the drainage of the abscess. Although about one-third of perianal abscesses were thought to be associated with fistula-in-ano [1-3], more recent studies aiming to detect any coexisting fistula tract have reported much higher rates (80-90 percent) of finding a fistula with a perianal abscess [4-6]. Two meta-analyses showed significant reductions in recurrence, persistent abscess/fistula or repeat surgery in favor of fistula surgery at the time of abscess incision and drainage (I&D) [7,8]. Even in children surgically treated for first-time perianal abscess, recurrence rates appear to be lowered by locating and treating coexisting fistulas [9]. 
Stratifying this group of patients, Oliver and coworkers recommended concomitant fistulotomy for subcutaneous, intersphincteric or low transsphincteric tracts, but not for high fistulas [6]. Likewise, the clinical practice guidelines of the American Society of Colon and Rectal Surgeons (ASCRS) have suggested that fistulotomy may be performed when a simple fistula is encountered during I&D [10]. This study aimed to document our results of treating anorectal abscesses with drainage plus loose seton for possible coexisting high or suspect fistula tracts, or drainage plus fistulotomy for low fistulas, at the same operation. Our rationale was to approach a surgical treatment of choice for anorectal abscesses that would offer the lowest recurrence without affecting continence. Methods: Patients: during a 12-month period starting in April 2016, 44 adult patients with primary or recurrent anorectal abscess who fulfilled the inclusion criteria were analyzed and documented on previously designed, standard forms. Twenty-nine of them presented to the Proctology Clinic and were treated by, or under the supervision of, the dedicated proctologists, with the aim of also treating any coexisting fistula tract. During the same period, 15 patients who presented to the emergency unit on weekends or vacations were treated by the surgical registrars with abscess drainage only. Whenever possible, an urgent pelvic MRI was also performed and evaluated together with the dedicated radiologist, especially for clues to a coexisting fistulous tract. Patients with supralevator or horseshoe abscesses, diabetes or other endocrine disorders, allergy to latex, and those previously diagnosed with Crohn’s disease, ulcerative colitis, or colorectal cancer were excluded. Informed consent was obtained from all patients. Definitions and surgery: the primary endpoint was abscess recurrence. The secondary endpoints were incontinence scores and any surgical complications. 
Any abscess developing twice in the same quadrant was considered recurrent. A fistula was considered high transsphincteric if the tract involved more than 30-40% of the thickness of the external anal sphincter. Almost all patients underwent surgery under spinal/epidural anesthesia in the prone jackknife position. Antibiotics were used selectively for patients with anorectal abscess complicated by cellulitis, induration, or overt signs of systemic sepsis, as well as for those described by the American Heart Association [11]. Surgical treatment of the abscess consisted of incision with monopolar diathermy at the point of maximum fluctuation, followed by circular excision of the skin, abscess debridement, and a search for possible compartments. Before incision and drainage, an attempt was made to locate the internal opening, which could occasionally be identified when, under slight pressure above the abscess, purulent material was found oozing from a corresponding crypt (Figure 1). However, it may be difficult to determine the internal opening due to the presence of edema and obliteration by inflammatory debris. Any coexisting fistula tract was further searched for with the help of the MRI findings, following Goodsall’s rule (fistulous abscesses in the posterior half generally follow a curved course towards the posterior midline, whereas those with an external opening in the anterior half follow a radial course) and/or by gentle probing, avoiding forceful maneuvers (Figure 1). Drainage plus fistulotomy were performed only in cases with subcutaneous-mucosal, intersphincteric or apparently low transsphincteric fistula tracts. For all other cases with high transsphincteric fistulae or those with questionable sphincter involvement, a single-strand, relatively thin, soft and elastic loose seton was placed through the tract (Figure 1). 
This seton was created by cutting a thin (2-3 mm) circular strip from a surgical glove (latex surgical glove, Beybi™, Beybi Plastic Factory, Istanbul, Turkey), including its thicker sleeve. Figure 1. Left anterior abscess; (A) note purulent drainage from the internal opening; (B) drainage plus identification of the fistula; (C) conversion to hybrid seton at 12 months. Follow-up: postoperatively, the patients were re-examined at 7 and 28 days and at 3, 6 and 12 months. They were encouraged to make contact whenever they suspected a problem. The analysis was based on one year of follow-up. At the 6- and/or 12-month follow-up, the patients with loose setons were informed about the possibility of eliminating the seton by various methods depending on the final topographic features of the tract. The technique and possible complications of removing the seton, fistulotomy, conversion to a hybrid (elastic cutting) seton, or ligation of the intersphincteric fistula tract (LIFT) were discussed in detail. Of the patients who underwent loose seton treatment, only those who kept their seton for 12 months were analyzed. After definitive fistula surgery or removal of the loose seton at 12 months, the Wexner incontinence scores and further complications were recorded again at 6 months [12]. Statistical analysis: SPSS software v23.0 (IBM Inc., Armonk, New York, USA) was used. Data are presented as mean ± standard deviation. The chi-square test was used for the comparison of independent variables and the Wilcoxon test for dependent groups. A p-value <0.05 was considered significant. Results: A total of 44 patients treated for anorectal abscess (33 males, mean age 39.1 ± 10.3) were followed for 12 months. Twenty-seven (20 males, mean age 39.3 ± 9.3) underwent concomitant surgery (CS) for a co-existing fistula tract simultaneously with abscess drainage. 
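The statistical plan described in this section (chi-square for independent variables, the Wilcoxon test for dependent groups; the article used SPSS v23.0) can be sketched with SciPy. This is an illustrative re-implementation, not the study analysis: the 2×2 counts and the paired scores below are hypothetical placeholders, not the study data.

```python
# Sketch of the article's statistical plan using SciPy instead of SPSS v23.0.
# All numbers below are hypothetical placeholders, not the study data.
from scipy.stats import chi2_contingency, wilcoxon

# Chi-square test for independent variables: recurrence vs. no recurrence
# in two treatment arms (rows = arms, cols = [recurred, did not recur]).
table = [[2, 25], [10, 7]]  # hypothetical 2x2 counts
chi2, p_indep, dof, expected = chi2_contingency(table)

# Wilcoxon test for dependent (paired) groups, e.g. the same patients'
# Wexner incontinence scores before and after definitive surgery.
pre = [0, 1, 0, 2, 0, 1, 0, 0, 3, 1]   # hypothetical paired scores
post = [0, 2, 1, 2, 0, 2, 1, 0, 3, 2]
stat, p_paired = wilcoxon(pre, post)

print(f"chi-square: chi2={chi2:.2f}, dof={dof}, p={p_indep:.4f}")
print(f"wilcoxon:   stat={stat:.1f}, p={p_paired:.4f}")
```

A p-value below the article's significance threshold of 0.05 would then be reported as significant; with real data the input arrays would be the per-patient scores rather than these placeholders.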
In only two patients (6.9 percent) assigned to drainage with fistula treatment, the internal opening could not be identified, and these patients were regarded as I&D only. The detailed distribution of the patients is outlined in Figure 2. Of the 27 CS patients, 17 had had previous abscess drainages and/or fistula. Four (14.8 percent) had low fistula tracts. Two of them had developed following lateral internal sphincterotomy, and all four were treated with fistulotomy. Twenty-three patients underwent drainage plus loose seton. Five patients (18.5 percent) kept their loose seton for longer than 12 months: one was diagnosed with Crohn’s disease postoperatively, two were anterior transsphincteric fistulas in women, and two patients simply chose to proceed with their seton. The seton was removed in another four patients (14.8 percent) at the 12-month follow-up. It dropped out accidentally in a single patient at 5 months. Definitive fistula surgery was carried out in 13 patients (48.1 percent) at the 12-month follow-up. Figure 2. The detailed distribution of the patients treated for anorectal abscess. Ten underwent fistulotomy or conversion to a hybrid seton, as described previously, because the tract had matured to a safe and superficial topography [13] (Figure 3). Three patients (11.1 percent) underwent LIFT because of high fistula tracts. None of the patients in the CS group developed recurrent abscess during a follow-up of 12 months. Not surprisingly, the CCIS scores were identical in 21 patients at 12 months (0.29 ± 0.86 vs. 0.33 ± 0.83; p=0.564). However, 6 months following definitive treatment, a statistically significant difference occurred (0.29 ± 0.86 vs. 0.63 ± 0.96; p<0.0001). Only minor complications occurred (4 cases, 14.8 percent, of urinary retention and one arrhythmia). One patient, who eventually underwent hybrid seton treatment at 12 months, and the patient who accidentally lost the seton developed fistulas (resulting in a final healing rate of 92.6 percent). 
A single patient who underwent secondary fistulotomy required hemostasis with electrocautery for bleeding. Incision and drainage were carried out in a total of 17 patients. Ten patients had had previous abscess drainages and/or fistula. The demographics of the two groups were similar. Within a year, 10 recurrences (58.8 percent) were noted. Three patients (17.6 percent) had fistulas without any intercurrent abscess, and only 4 healed completely (23.5 percent). It is noteworthy that all of these healed cases had experienced an abscess for the first time. This recurrence rate was significantly higher than that of the CS group (p<0.0001; Figure 4). Although most of these recurrences were treated by concomitant fistula surgery, these patients were not re-entered into the study. Figure 3. (A) drainage of the anorectal abscess with identification of the internal opening; (B) the mature fistula tract became superficial at 12 months; (C) lay-open of the resultant simple fistula. Figure 4. The recurrence rates of the concomitant surgery and incision & drainage groups
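As an illustrative check of the between-group comparison reported in the Results (0 of 27 recurrences after concomitant surgery vs. 10 of 17 after incision and drainage only), the 2×2 table can be tested with SciPy. Fisher's exact test is used in this sketch because one cell count is zero; the article itself reports a chi-square result computed in SPSS.

```python
# Recurrence counts reported in the Results: 0/27 (concomitant surgery)
# vs. 10/17 (incision & drainage only). Fisher's exact test is chosen
# here because one cell is zero; the article reports a chi-square test.
from scipy.stats import fisher_exact

table = [
    [0, 27],   # CS group:  [recurred, did not recur]
    [10, 7],   # I&D group: [recurred, did not recur]
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.2e}")  # well below the reported threshold of 0.0001
```

The exact test agrees with the article's conclusion that the difference is significant at p<0.0001.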
The fact that concomitant fistula treatment has largely been limited to fistulotomy or cutting seton has contributed negatively to the development of this reasonable strategy. Although Read mentioned performing a primary fistulotomy for all their perianal abscesses if a fistula was found concomitantly, Hebjorn conducted the first controlled trial [2,4]. Their technique was not exactly synchronous drainage plus fistulotomy, and the 40 percent rate of minor continence problems reported after CS cast a shadow over a more liberal approach for many years. Oliver and coworkers randomized a large group of patients [6]; however, more than two-thirds of their patients surprisingly had subcutaneous or intersphincteric abscesses. This patient distribution differs from our findings and from the knowledge that anorectal abscesses are more common in the perianal and ischiorectal spaces, and less common in the intersphincteric, supralevator, and submucosal locations [2,15]. A major problem with CS is the fact that anorectal abscesses are defined by the anatomic space in which they develop, while the topographic features of a fistula are defined by sphincter involvement [2,15]. Our strategy of treating any associated suspect fistula tract with the loose seton appears to solve this possible misinterpretation problem regarding sphincter involvement, as well as the related and feared possibility of disturbance of continence. Even if the patient eventually turns out to harbor Crohn’s disease, the strategy is valid. In addition to preventing abscess recurrence, concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases. Removal of the seton in four patients (15 percent) at 12 months due to their refusal of further surgery resulted in healing within the time limits. Although this strategy was supported by the recent study of Oluwatomilayo and coworkers, we are still far from suggesting it as a standard procedure [16]. 
More important was the observation that, after the inflammation resolved, the seton provided a mature and eventually more superficial tract in 37 percent of the patients. Simple lay-open or conversion to a hybrid seton was then possible on an outpatient basis. Even in patients with high fistula tracts, the operation was simpler with a mature tract. The loose seton converts a septic condition that may recur into an elective one that can be treated in tertiary centers. The surgical approach to fistula is a vast topic beyond the aim of this trial, and it may naturally affect continence. A control group was, by definition, not intended. However, 17 patients who underwent I&D only accumulated during the study. The recurrence rates reported in the literature are almost exclusively based on data obtained from retrospective studies. Therefore, the exact number of recurrent abscesses and persistent fistulas after I&D is still unknown. We noted a 58.8 percent rate of abscess recurrence within a year, and this rate will probably increase in the long term.
What is known about this topic: Perianal abscess recurrence is possible due to an untreated fistula if only simple drainage is performed; combined surgery for the treatment of abscess and fistula at the same operation can cause fecal incontinence. What this study adds: Utilising a loose seton can prevent recurrence of abscess and avoids or facilitates fistula surgery during follow-up without major complications.
Background: This pilot study aimed to document our results of treating anorectal abscesses with drainage plus loose seton for possible coexisting high fistulas or drainage plus fistulotomy for low tracts at the same operation. Methods: Drainage plus fistulotomy were performed only in cases with subcutaneous mucosa, intersphincteric, or apparently low transsphincteric fistula tracts. For all other cases with high transsphincteric fistula or those with questionable sphincter involvement, a loose seton was placed through the tract. Drainage only was carried out in 17 patients. Results: Twenty-three patients underwent drainage plus loose seton. Drainage plus fistulotomy were performed in four cases. None of the patients developed recurrent abscess during a follow-up of 12 months. Not surprisingly, the incontinence scores were similar pre and post-operatively (p=0.564). Only minor complications occurred in 4 cases (14.8 percent). Secondary interventions following loose seton were carried out in 13 patients (48.1 percent). At 12 months, drainage only was followed by 10 recurrences (58.8 percent; p<0.0001, compared with concomitant surgery). Conclusions: Concomitant loose seton treatment of high fistula tracts associated with anorectal abscess prevents abscess recurrence without significant complications or disturbance of continence. Concomitant fistulotomy for associated low fistulas also aids in the same clinical outcome. Concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases. Even in patients with high fistula tracts, the loose seton makes fistula surgery simpler with a mature tract. Abscess recurrence is high after drainage only.
Introduction: Concurrent definitive treatment of underlying fistulas at the time when anorectal abscesses are drained is a controversial issue. Fistula-in-ano can increase the likelihood of abscess recurrence, which then requires repeat drainage. Even with no recurrent abscess, further interventions may be needed for fistula-related symptoms such as discharge or perianal soreness. Given the pathogenetic association of the crypto-glandular abscess and fistula, attempting to eliminate the whole disease process concomitantly with the drainage of the abscess might be a reasonable approach, mainly because i) the development of recurrent abscesses and repeated surgical drainages may be prevented and ii) eventual fistula surgery with additional anesthesia may be omitted. However, drawbacks exist based on the suggestions that i) some abscesses will not recur or evolve to become a fistula and ii) a combined procedure carries a greater risk of anal incontinence. Nevertheless, the persistent entity of anorectal abscess and fistula is a common problem contributing significantly to the daily surgical workload, as well as to expenses, days off work, and quality of life [1]. In this respect, there is a growing inclination to treat any coexisting fistula tract concomitantly with the drainage of the abscess. Although about one-third of perianal abscesses were thought to be associated with fistula-in-ano [1-3], more recent studies aiming to detect any coexisting fistula tract have reported much higher rates (80-90 percent) of finding a fistula with a perianal abscess [4-6]. Two meta-analyses showed significant reductions in recurrence, persistent abscess/fistula or repeat surgery in favor of fistula surgery at the time of abscess incision and drainage (I&D) [7,8]. Even in children surgically treated for first-time perianal abscess, recurrence rates appear to be lowered by locating and treating coexisting fistulas [9]. 
Stratifying this group of patients, Oliver and coworkers recommended concomitant fistulotomy for subcutaneous, intersphincteric or low transsphincteric tracts, but not for high fistulas [6]. Likewise, the clinical practice guidelines of the American Society of Colon and Rectal Surgeons (ASCRS) have suggested that fistulotomy may be performed when a simple fistula is encountered during I&D [10]. This study aimed to document our results of treating anorectal abscesses with drainage plus loose seton for possible coexisting high or suspect fistula tracts, or drainage plus fistulotomy for low fistulas, at the same operation. Our rationale was to approach a surgical treatment of choice for anorectal abscesses that would offer the lowest recurrence without affecting continence. Conclusion: In conclusion, the results of this study invite further controlled trials and suggest that: i) associated fistula tracts can be identified in the majority of cases with acute anorectal abscess; ii) loose seton treatment of high fistula tracts associated with anorectal abscess prevents abscess recurrence without complications or disturbance of continence. Concomitant fistulotomy for associated low fistulas also aids in a similar clinical outcome; iii) concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases; iv) after the inflammation resolves, the seton provides a mature and usually more superficial tract. Therefore, an urgent, septic condition is converted to an elective one for tertiary centers; and v) abscess recurrence is high after I&D only. What is known about this topic Perianal abscess recurrence is possible due to an untreated fistula if only simple drainage is performed; Combined surgery for the treatment of abscess and fistula at the same operation can cause fecal incontinence. 
Perianal abscess recurrence is possible due to an untreated fistula if only simple drainage is performed; combined surgery for the treatment of abscess and fistula at the same operation can cause fecal incontinence. What this study adds: Utilising a loose seton can prevent recurrence of abscess and avoids or facilitates fistula surgery during follow-up without major complications.
Background: This pilot study aimed to document our results of treating anorectal abscesses with drainage plus loose seton for possible coexisting high fistulas or drainage plus fistulotomy for low tracts at the same operation. Methods: Drainage plus fistulotomy were performed only in cases with subcutaneous mucosa, intersphincteric, or apparently low transsphincteric fistula tracts. For all other cases with high transsphincteric fistula or those with questionable sphincter involvement, a loose seton was placed through the tract. Drainage only was carried out in 17 patients. Results: Twenty-three patients underwent drainage plus loose seton. Drainage plus fistulotomy were performed in four cases. None of the patients developed recurrent abscess during a follow-up of 12 months. Not surprisingly, the incontinence scores were similar pre and post-operatively (p=0.564). Only minor complications occurred in 4 cases (14.8 percent). Secondary interventions following loose seton were carried out in 13 patients (48.1 percent). At 12 months, drainage only was followed by 10 recurrences (58.8 percent; p<0.0001, compared with concomitant surgery). Conclusions: Concomitant loose seton treatment of high fistula tracts associated with anorectal abscess prevents abscess recurrence without significant complications or disturbance of continence. Concomitant fistulotomy for associated low fistulas also aids in the same clinical outcome. Concomitant fistula treatment with the loose seton may suffice in treating the whole disease process in selected cases. Even in patients with high fistula tracts, the loose seton makes fistula surgery simpler with a mature tract. Abscess recurrence is high after drainage only.
2,943
290
7
[ "fistula", "abscess", "patients", "seton", "drainage", "recurrence", "surgery", "percent", "loose", "loose seton" ]
[ "test", "test" ]
[CONTENT] Anorectal abscess | fistula-in-ano | seton [SUMMARY]
[CONTENT] Anorectal abscess | fistula-in-ano | seton [SUMMARY]
[CONTENT] Anorectal abscess | fistula-in-ano | seton [SUMMARY]
[CONTENT] Anorectal abscess | fistula-in-ano | seton [SUMMARY]
[CONTENT] Anorectal abscess | fistula-in-ano | seton [SUMMARY]
[CONTENT] Anorectal abscess | fistula-in-ano | seton [SUMMARY]
[CONTENT] Abscess | Adult | Anus Diseases | Digestive System Surgical Procedures | Drainage | Female | Humans | Male | Middle Aged | Pilot Projects | Postoperative Complications | Rectal Diseases | Rectal Fistula | Recurrence | Secondary Prevention | Treatment Outcome [SUMMARY]
[CONTENT] Abscess | Adult | Anus Diseases | Digestive System Surgical Procedures | Drainage | Female | Humans | Male | Middle Aged | Pilot Projects | Postoperative Complications | Rectal Diseases | Rectal Fistula | Recurrence | Secondary Prevention | Treatment Outcome [SUMMARY]
[CONTENT] Abscess | Adult | Anus Diseases | Digestive System Surgical Procedures | Drainage | Female | Humans | Male | Middle Aged | Pilot Projects | Postoperative Complications | Rectal Diseases | Rectal Fistula | Recurrence | Secondary Prevention | Treatment Outcome [SUMMARY]
[CONTENT] Abscess | Adult | Anus Diseases | Digestive System Surgical Procedures | Drainage | Female | Humans | Male | Middle Aged | Pilot Projects | Postoperative Complications | Rectal Diseases | Rectal Fistula | Recurrence | Secondary Prevention | Treatment Outcome [SUMMARY]
[CONTENT] Abscess | Adult | Anus Diseases | Digestive System Surgical Procedures | Drainage | Female | Humans | Male | Middle Aged | Pilot Projects | Postoperative Complications | Rectal Diseases | Rectal Fistula | Recurrence | Secondary Prevention | Treatment Outcome [SUMMARY]
[CONTENT] Abscess | Adult | Anus Diseases | Digestive System Surgical Procedures | Drainage | Female | Humans | Male | Middle Aged | Pilot Projects | Postoperative Complications | Rectal Diseases | Rectal Fistula | Recurrence | Secondary Prevention | Treatment Outcome [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] fistula | abscess | patients | seton | drainage | recurrence | surgery | percent | loose | loose seton [SUMMARY]
[CONTENT] fistula | abscess | patients | seton | drainage | recurrence | surgery | percent | loose | loose seton [SUMMARY]
[CONTENT] fistula | abscess | patients | seton | drainage | recurrence | surgery | percent | loose | loose seton [SUMMARY]
[CONTENT] fistula | abscess | patients | seton | drainage | recurrence | surgery | percent | loose | loose seton [SUMMARY]
[CONTENT] fistula | abscess | patients | seton | drainage | recurrence | surgery | percent | loose | loose seton [SUMMARY]
[CONTENT] fistula | abscess | patients | seton | drainage | recurrence | surgery | percent | loose | loose seton [SUMMARY]
[CONTENT] fistula | abscess | abscesses | coexisting | abscess fistula | drainage | anorectal abscesses | perianal | fistulas | anorectal [SUMMARY]
[CONTENT] patients | 12 | abscess | seton | surgical | months | fistula | opening | tract | follow [SUMMARY]
[CONTENT] patients | percent | 12 | months | fistula | abscess | underwent | 12 months | seton | patient [SUMMARY]
[CONTENT] abcess | fistula | recurrence | seton | abscess | associated | loose seton | treatment | loose | surgery [SUMMARY]
[CONTENT] fistula | abscess | patients | abcess | seton | recurrence | drainage | surgery | loose | percent [SUMMARY]
[CONTENT] fistula | abscess | patients | abcess | seton | recurrence | drainage | surgery | loose | percent [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| 17 [SUMMARY]
[CONTENT] Twenty-three ||| four ||| 12 months ||| ||| 4 | 14.8 percent ||| Secondary | 13 | 48.1 percent ||| 12 months | 10 | 58.8 percent [SUMMARY]
[CONTENT] Concomitant ||| Concomitant ||| Concomitant ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| 17 ||| Twenty-three ||| four ||| 12 months ||| ||| 4 | 14.8 percent ||| Secondary | 13 | 48.1 percent ||| 12 months | 10 | 58.8 percent ||| Concomitant ||| Concomitant ||| Concomitant ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| 17 ||| Twenty-three ||| four ||| 12 months ||| ||| 4 | 14.8 percent ||| Secondary | 13 | 48.1 percent ||| 12 months | 10 | 58.8 percent ||| Concomitant ||| Concomitant ||| Concomitant ||| ||| [SUMMARY]
Neurapraxia in patients with trigeminal neuralgia but no identifiable neurovascular conflict during microvascular decompression: a retrospective analysis of 26 cases.
35016641
Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that could not be sufficiently controlled by pharmacological treatment. However, neurovascular conflict (NVC) could not be identified during MVD in all patients. To describe the efficacy and safety of treatment with aneurysm clips in these situations.
BACKGROUND
A total of 205 patients underwent MVD for classic TGN at our center from January 1, 2015 to December 31, 2019. In patients without identifiable NVC upon dissection of the entire trigeminal nerve root, neurapraxia was performed using a Yasargil temporary titanium aneurysm clip (force: 90 g) for 40 s (or a total of 60 s if the process must be suspended temporarily due to bradycardia or hypertension).
METHODS
A total of 26 patients (median age: 64 years; 15 women) underwent neurapraxia. Five out of the 26 patients received prior MVD but relapsed. Immediate complete pain relief was achieved in all 26 cases. Within a median follow-up of 3 years (range: 1.0-6.0), recurrence was noted in 3 cases (11.5%). Postoperative complications included hemifacial numbness, herpes labialis, masseter weakness; most were transient and dissipated within 3-6 months.
RESULTS
Neurapraxia using aneurysm clip is safe and effective in patients with classic TGN but no identifiable NVC during MVD. Whether this method could be developed into a standardizable method needs further investigation.
CONCLUSIONS
[ "Female", "Humans", "Hypesthesia", "Microvascular Decompression Surgery", "Middle Aged", "Postoperative Complications", "Retrospective Studies", "Treatment Outcome", "Trigeminal Neuralgia" ]
8750803
Background
Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that could not be well controlled by pharmacological treatments [1, 2]. However, in 3.1–28.8% of the cases, offending vessels (OVs) could not be identified during MVD [3, 4]. In such cases, a second surgical operation that ablates the extracranial segment of the trigeminal nerve may be needed. Revuelta-Gutierrez et al. reported promising results of neurapraxia with bipolar tips during MVD in such patients [5]. In 2014, we attempted to develop a standardizable method of neurapraxia by using a Yasargil temporary titanium aneurysm clip (force: 90 g). In the initial series of 3 cases, the sensory root or the main trunk of the trigeminal nerve was clipped for 2.5 min, but we observed severe hemifacial numbness and masseter weakness in all 3 cases. Starting from the beginning of 2015, we decreased the clipping duration to 40 s. In this retrospective analysis, we analyzed the data of all cases with at least 1 year of follow-up.
null
null
Results
A total of 26 subjects (median age: 64 years; 15 women) were included in the analysis. Demographic and clinical characteristics are shown in Table 1. Among the 26 patients, 5 had received previous MVD for TGN. The median disease duration was 3.25 years (range: 0.5–14.0). The most common territory was V2 + V3. The Barrow Neurological Institute (BNI) pain scores were IV in 9 cases and V in the remaining 17 cases [6].

Table 1. Demographic and baseline characteristics of the patients (n = 26)
  Female sex, no. (%): 15 (57.7%)
  Age (y), median (range): 64 (33–81)
  Disease duration (y), median (range): 3.25 (0.5–14.0)
  Affected side, no. (%): right only, 14 (53.8%); left only, 12 (46.2%)
  Territory involved, no.: V1, 1; V2, 3; V3, 5; V1 + V2, 2; V2 + V3, 12; V1 + V2 + V3, 3

In 16 of the 26 cases, neurapraxia was completed in one session. In the remaining 10 cases, the process had to be temporarily suspended and repeated due to bradycardia (n = 7) or hypertension (n = 3). No severe peri-operative complications (e.g., intracranial hemorrhage, intracranial infection, or cerebrospinal fluid leakage) occurred. Post-operative complications included transient hemifacial numbness (n = 26; 100%), herpes labialis (n = 9; 34.6%), masseter weakness (n = 8; 30.7%), hemifacial formication (n = 2; 7.7%), and blunted corneal reflex (n = 2; 7.7%) (Table 2). The majority of the complications lasted 3–6 months and eventually dissipated, but facial formication persisted until the last follow-up in both affected cases (2 and 3 years after surgery, respectively).

Table 2. Post-operative complications (n = 26)
  Hemifacial numbness: 26 (100%)
  Herpes labialis: 9 (34.6%)
  Masseter weakness: 8 (30.7%)
  Hemifacial formication: 2 (7.7%)
  Blunted corneal reflex: 2 (7.7%)

Immediate and complete pain relief (BNI pain score of I without any medication) was achieved in all 26 cases. Within a median follow-up of 3.0 years (range: 1–6 years), recurrence occurred in 3 patients (11.5%). The time from surgery to relapse was 1, 1, and 1.5 years, respectively. The BNI pain score was II, II, and III with carbamazepine treatment in the 3 patients with relapse. The remaining 23 patients were medication-free.
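The percentages reported above follow directly from the raw counts in a cohort of 26. As a minimal illustrative check (the counts are taken from the text; the snippet itself is not part of the original study analysis):

```python
# Illustrative arithmetic check of the rates reported in the Results.
# Counts come from the text above; this is not part of the study's analysis.

COHORT = 26  # patients who underwent neurapraxia

counts = {
    "female sex": 15,            # reported as 57.7%
    "herpes labialis": 9,        # reported as 34.6%
    "hemifacial formication": 2, # reported as 7.7%
    "recurrence": 3,             # reported as 11.5%
}

def pct(n: int, total: int = COHORT) -> float:
    """Percentage of the cohort, rounded to one decimal place."""
    return round(100.0 * n / total, 1)

for event, n in counts.items():
    print(f"{event}: {n}/{COHORT} = {pct(n)}%")
```

Running this reproduces the one-decimal percentages quoted in the tables.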
Conclusions
Neurapraxia using a Yasargil temporary titanium aneurysm clip is safe and effective in patients with classic TGN but no identifiable NVC during MVD. The advantages of this method include: (1) potentially wider use, since the degree of damage is standardizable; and (2) no need to schedule a second surgery.
[ "Background", "Methods" ]
[ "Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that could not be well controlled by pharmacological treatments [1, 2]. However, in 3.1–28.8% of the cases, offending vessels (OVs) could not be identified during MVD [3, 4]. In such cases, a second surgical operation that ablates the extracranial segment of the trigeminal nerve may be needed. Revuelta-Gutierrez et al. reported promising results of neurapraxia with bipolar tips during MVD in such patients [5]. In 2014, we attempted to develop a standardizable method of neurapraxia by using a Yasargil temporary titanium aneurysm clip (force: 90 g). In the initial series of 3 cases, the sensory root or the main trunk of the trigeminal nerve was clipped for 2.5 min, but we observed severe hemifacial numbness and masseter weakness in all 3 cases. Starting from the beginning of 2015, we decreased the clipping duration to 40 s. In this retrospective analysis, we analyzed the data of all cases with at least 1-year follow-up.", "\nThe current study was conducted in compliance with the principles outlined in the Declaration of Helsinki, and was approved by the Ethics Committee of Shanghai Tenth People’s Hospital (approval #: CPPRB1). Informed consent was waived due to the nature of the study. All patient data were anonymized in the paper. We retrospectively screened all cases of MVD for TGN at our center during the period from January 1, 2015 to December 31, 2019. The diagnosis of TGN was established based on the criteria of the International Classification of Headache Disorders 3 (ICHD-3; 13.1.1.1). All subjects received MR-angiography prior to the surgery. MVD was performed using a standard suboccipital retrosigmoid approach. 
After releasing cerebrospinal fluid (CSF) under the microscope, the cerebellar hemisphere was retracted, and the arachnoid membrane between the petrosal vein and the facial-auditory nerve was opened to expose the cisternal segment of the trigeminal nerve. The entire length of the trigeminal nerve root (from the pons to the entrance of Meckel’s cave) was dissected to identify NVC. Teflon fragments were applied between the trigeminal nerve and offending vessels.\nIn cases without identifiable NVC, neurapraxia was conducted using a standard straight Yasargil temporary titanium aneurysm clip (Aesculap, B. Braun, Germany; 90-g force) for 40 s, preferably to the sensory root (Fig. 1), or the main trunk of the trigeminal nerve if the sensory root was not separate from the motor root (Fig. 2). The procedure could be readily conducted in patients with recurrent pain after previous MVD (Fig. 3). In subjects with clinically significant bradycardia or hypertension during neurapraxia, the aneurysm clip was released temporarily and the process was repeated to achieve a cumulative neurapraxia time of 60 s. Sutures were removed 7 days later. The last follow-up (via either office visit or telephone) was conducted in December 2020.\n\nFig. 1 A representative case with no OVs and separate sensory vs. motor trigeminal nerve root. SR: sensory root of trigeminal nerve, MR: motor root of trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the sensory root of trigeminal nerve\n A representative case with no OVs and separate sensory vs. motor trigeminal nerve root. SR: sensory root of trigeminal nerve, MR: motor root of trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the sensory root of trigeminal nerve\n\nFig. 2 A representative case with no OVs and united sensory and motor roots. TN: trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the whole trigeminal nerve\n A representative case with no OVs and united sensory and motor roots. 
TN: trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the whole trigeminal nerve\n\nFig. 3 A representative case of recurrent TGN, with no OVs and united sensory and motor root. TN: trigeminal nerve, AC: aneurysm clip, Teflon: Teflon pad, VC: vestige of clamp at the whole trigeminal nerve\n A representative case of recurrent TGN, with no OVs and united sensory and motor root. TN: trigeminal nerve, AC: aneurysm clip, Teflon: Teflon pad, VC: vestige of clamp at the whole trigeminal nerve" ]
[ null, null ]
[ "Background", "Methods", "Results", "Discussion", "Conclusions" ]
[ "Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that could not be well controlled by pharmacological treatments [1, 2]. However, in 3.1–28.8% of the cases, offending vessels (OVs) could not be identified during MVD [3, 4]. In such cases, a second surgical operation that ablates the extracranial segment of the trigeminal nerve may be needed. Revuelta-Gutierrez et al. reported promising results of neurapraxia with bipolar tips during MVD in such patients [5]. In 2014, we attempted to develop a standardizable method of neurapraxia by using a Yasargil temporary titanium aneurysm clip (force: 90 g). In the initial series of 3 cases, the sensory root or the main trunk of the trigeminal nerve was clipped for 2.5 min, but we observed severe hemifacial numbness and masseter weakness in all 3 cases. Starting from the beginning of 2015, we decreased the clipping duration to 40 s. In this retrospective analysis, we analyzed the data of all cases with at least 1-year follow-up.", "\nThe current study was conducted in compliance with the principles outlined in the Declaration of Helsinki, and was approved by the Ethics Committee of Shanghai Tenth People’s Hospital (approval #: CPPRB1). Informed consent was waived due to the nature of the study. All patient data were anonymized in the paper. We retrospectively screened all cases of MVD for TGN at our center during the period from January 1, 2015 to December 31, 2019. The diagnosis of TGN was established based on the criteria of the International Classification of Headache Disorders 3 (ICHD-3; 13.1.1.1). All subjects received MR-angiography prior to the surgery. MVD was performed using a standard suboccipital retrosigmoid approach. 
After releasing cerebrospinal fluid (CSF) under the microscope, the cerebellar hemisphere was retracted, and the arachnoid membrane between the petrosal vein and the facial-auditory nerve was opened to expose the cisternal segment of the trigeminal nerve. The entire length of the trigeminal nerve root (from the pons to the entrance of Meckel’s cave) was dissected to identify NVC. Teflon fragments were applied between the trigeminal nerve and offending vessels.\nIn cases without identifiable NVC, neurapraxia was conducted using a standard straight Yasargil temporary titanium aneurysm clip (Aesculap, B. Braun, Germany; 90-g force) for 40 s, preferably to the sensory root (Fig. 1), or the main trunk of the trigeminal nerve if the sensory root was not separate from the motor root (Fig. 2). The procedure could be readily conducted in patients with recurrent pain after previous MVD (Fig. 3). In subjects with clinically significant bradycardia or hypertension during neurapraxia, the aneurysm clip was released temporarily and the process was repeated to achieve a cumulative neurapraxia time of 60 s. Sutures were removed 7 days later. The last follow-up (via either office visit or telephone) was conducted in December 2020.\n\nFig. 1 A representative case with no OVs and separate sensory vs. motor trigeminal nerve root. SR: sensory root of trigeminal nerve, MR: motor root of trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the sensory root of trigeminal nerve\n A representative case with no OVs and separate sensory vs. motor trigeminal nerve root. SR: sensory root of trigeminal nerve, MR: motor root of trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the sensory root of trigeminal nerve\n\nFig. 2 A representative case with no OVs and united sensory and motor roots. TN: trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the whole trigeminal nerve\n A representative case with no OVs and united sensory and motor roots. 
TN: trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the whole trigeminal nerve\n\nFig. 3 A representative case of recurrent TGN, with no OVs and united sensory and motor root. TN: trigeminal nerve, AC: aneurysm clip, Teflon: Teflon pad, VC: vestige of clamp at the whole trigeminal nerve\n A representative case of recurrent TGN, with no OVs and united sensory and motor root. TN: trigeminal nerve, AC: aneurysm clip, Teflon: Teflon pad, VC: vestige of clamp at the whole trigeminal nerve", "A total of 26 subjects (median age: 64 years; 15 women) were included in the analysis. Demographic and clinical characteristics are shown in Table 1. Among the 26 patients, 5 had received previous MVD for TGN. The median disease duration was 3.25 years (range: 0.5–14.0). The most common territory was V2 + V3. The Barrow Neurological Institute (BNI) pain scores were IV in 9 cases, and V in the remaining 17 cases [6].\n\nTable 1Demographic and baseline characteristics of the patientsCohort size (n = 26)Female sex, no. (%)15 (57.7%)Age (y), median (range)64 (33–81)Disease duration (y), median (range)3.25 (0.5–14.0)Affected side, no. (%) Right only14 (53.8%) Left only12 (46.2%)Territory involved V11 V23 V35 V1 + V22 V2 + V312 V1 + V2 + V33\nDemographic and baseline characteristics of the patients\nIn 16 out of the 26 cases, neurapraxia was completed in one session. In the remaining 10 cases, the process had to be temporarily suspended and repeated due to bradycardia (n = 7) or hypertension (n = 3). No severe peri-operative complications (e.g., intracranial hemorrhage, intracranial infection and cerebrospinal fluid leakage) occurred. Post-operative complications included transient hemifacial numbness (n = 26; 100%), herpes labialis (n = 9; 34.6%), masseter weakness (n = 8; 30.7%), hemifacial formication (n = 2; 7.7%) and blunted corneal reflex (n = 2; 7.7%) (Table 2). 
The majority of the complications lasted for 3–6 months and eventually dissipated, but facial formication persisted until the last follow-up in both cases (2 and 3 years from the surgery, respectively).\n\nTable 2Post-operative complicationsComplicationsCohort size (n = 26) (%)Hemifacial numbness26 (100)Herpes labialis9 (34.6)Masseter weakness8 (30.7)Hemifacial formication2 (7.7)Blunted corneal reflex2 (7.7)\nPost-operative complications\nImmediate and complete pain relief (BNI pain score of I without any medication) was achieved in all 26 cases. Within a median follow-up of 3.0 years (range: 1–6 years), recurrence occurred in 3 (11.5%) patients. The time from surgery to relapse was 1, 1, and 1.5 years, respectively. The BNI pain score was II, II and III with carbamazepine treatment in the 3 patients with relapse. The remaining 23 patients were medication-free.", "Vascular compression of the trigeminal nerve root is the leading cause of classic TGN. Other causes include focal arachnoid thickening, adhesion, cerebello-pontine angle tumors, inflammation, multiple sclerosis, brainstem infarction, and arteriovenous malformations [7, 8]. In patients who did not respond to or could not tolerate pharmacological treatments, MVD is the first choice regardless of the presence or absence of NVC as determined by MRI, due to the low sensitivity of MRI in detecting NVC [9]. In a small percentage of the patients, NVC could not be identified despite complete dissection of the entire length of the trigeminal nerve root. In addition, complete dissection of the trigeminal nerve in recurrent cases after previous MVD is often difficult, if not impossible, due to adhesion [10, 11].\nPartial sensory rhizotomy (PSR) is one option in patients with no identifiable NVC. In a study of 83 cases, Young et al. reported a 15% rate of severe complications (complete loss of sensory function and corneal ulcer) [12]. 
A literature review of 10,493 patients undergoing a variety of surgical treatments for trigeminal neuralgia also supported the high rate of severe complications with PSR [13]. As a result, PSR has been practically abandoned in clinical practice. Nerve combing is another option in TGN patients with no identifiable NVC, but is associated with a relatively high rate of 5-year recurrence (approximately 40%) [14–16].\nPercutaneous balloon compression (PBC) at the site of the trigeminal Gasserian ganglion is currently recommended as the first-choice extracranial treatment of TGN [17]. The physiological basis of PBC is the higher sensitivity of the larger pain fibers in the trigeminal nerve to physical damage compared with the smaller fibers that transmit other sensory input and the afferent fibers [18]. Despite the advantages of PBC, MVD is the treatment of choice in patients with TGN regardless of the presence or absence of NVC as determined by pre-operative MRI. In other words, PBC is typically used in patients who failed MVD treatment, and must be conducted separately [3, 4].\nCheng et al. used a bipolar electrocoagulation tip to produce neurapraxia in 28 patients without OVs. Twenty patients (71.4%) achieved immediate complete pain relief. With a median follow-up of 46 months (range: 8–60 months), the recurrence rate was 38.4%, and only 13 patients (46.4%) remained pain-free without medication during the follow-up. Four patients (14.3%) developed permanent facial numbness [19]. Revuelta-Gutierrez et al. used bipolar electrocoagulation tips to produce neurapraxia to the trigeminal nerve root in 21 patients, and achieved immediate complete pain relief in all 21 patients. The recurrence rate was 14.8% at 12–36 months and 43.2% at 48 months. Permanent hypoesthesia was present in 6 patients (28.6%), whereas transient loss of corneal reflex was observed in 1 patient (4.8%). 
Motor function of the trigeminal nerve was intact in all patients [6].\nIn the current study, we achieved 100% immediate complete pain relief with acceptable complications in 26 patients with no identifiable NVC during MVD. Within a median of 3-year follow-up, the recurrence rate was 11.5%. In our opinion, these encouraging results reflect a consistent degree of damage to the sensory fibers of the trigeminal nerve due to the use of a consistent force (90 g) and a 40-s clipping duration. Whether this method could be developed as a standardizable approach requires further study in different settings.\nFrom a surgical viewpoint, the trigeminal nerve root must be completely exposed to reveal possible OVs. If possible, only the sensory root should be clipped. Also, the duration of the clipping is essential. In a pilot series that consisted of 3 cases, we clipped the trigeminal nerve root for 2.5 min, and unfortunately, all 3 patients developed severe hemifacial numbness and masseter weakness. The protocol that we have been using since 2015 is clipping for 40 s if the procedure can be completed in a single attempt, and for a total of 60 s if the procedure must be suspended temporarily and repeated due to bradycardia or hypertension.\nIn addition to the retrospective nature, the current study is limited by the relatively small sample size and the relatively short follow-up (median at 3 years). Also, 3 patients who were lost to follow-up were not included in the analysis. It is possible that these 3 patients experienced relapse or complications but chose not to come back to us. This could introduce some bias into our results. The follow-up was conducted via telephone in some patients, and not based on office visits, adding another layer of limitation to the current study. 
Having said that, we believe that the key results are solid, since recurrence and the majority of the complications are sensory abnormalities that patients can report without standard objective examinations.", "Neurapraxia using a Yasargil temporary titanium aneurysm clip is safe and effective in patients with classic TGN but no identifiable NVC during MVD. The advantages of this method include: (1) potentially wider use, since the degree of damage is standardizable; and (2) no need to schedule a second surgery." ]
[ null, null, "results", "discussion", "conclusion" ]
[ "Trigeminal neuralgia", "Microvascular decompression", "Offending vessel", "neurapraxia" ]
Background: Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that could not be well controlled by pharmacological treatments [1, 2]. However, in 3.1–28.8% of the cases, offending vessels (OVs) could not be identified during MVD [3, 4]. In such cases, a second surgical operation that ablates the extracranial segment of the trigeminal nerve may be needed. Revuelta-Gutierrez et al. reported promising results of neurapraxia with bipolar tips during MVD in such patients [5]. In 2014, we attempted to develop a standardizable method of neurapraxia by using a Yasargil temporary titanium aneurysm clip (force: 90 g). In the initial series of 3 cases, the sensory root or the main trunk of the trigeminal nerve was clipped for 2.5 min, but we observed severe hemifacial numbness and masseter weakness in all 3 cases. Starting from the beginning of 2015, we decreased the clipping duration to 40 s. In this retrospective analysis, we analyzed the data of all cases with at least 1-year follow-up. Methods: The current study was conducted in compliance with the principles outlined in the Declaration of Helsinki, and was approved by the Ethics Committee of Shanghai Tenth People’s Hospital (approval #: CPPRB1). Informed consent was waived due to the nature of the study. All patient data were anonymized in the paper. We retrospectively screened all cases of MVD for TGN at our center during the period from January 1, 2015 to December 31, 2019. The diagnosis of TGN was established based on the criteria of the International Classification of Headache Disorders 3 (ICHD-3; 13.1.1.1). All subjects received MR-angiography prior to the surgery. MVD was performed using a standard suboccipital retrosigmoid approach. 
After releasing cerebrospinal fluid (CSF) under the microscope, the cerebellar hemisphere was retracted, and the arachnoid membrane between the petrosal vein and the facial-auditory nerve was opened to expose the cisternal segment of the trigeminal nerve. The entire length of the trigeminal nerve root (from the pons to the entrance of Meckel’s cave) was dissected to identify NVC. Teflon fragments were applied between the trigeminal nerve and offending vessels. In cases without identifiable NVC, neurapraxia was conducted using a standard straight Yasargil temporary titanium aneurysm clip (Aesculap, B. Braun, Germany; 90-g force) for 40 s, preferably to the sensory root (Fig. 1), or the main trunk of the trigeminal nerve if the sensory root was not separate from the motor root (Fig. 2). The procedure could be readily conducted in patients with recurrent pain after previous MVD (Fig. 3). In subjects with clinically significant bradycardia or hypertension during neurapraxia, the aneurysm clip was released temporarily and the process was repeated to achieve a cumulative neurapraxia time of 60 s. Sutures were removed 7 days later. The last follow-up (via either office visit or telephone) was conducted in December 2020. Fig. 1 A representative case with no OVs and separate sensory vs. motor trigeminal nerve root. SR: sensory root of trigeminal nerve, MR: motor root of trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the sensory root of trigeminal nerve  A representative case with no OVs and separate sensory vs. motor trigeminal nerve root. SR: sensory root of trigeminal nerve, MR: motor root of trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the sensory root of trigeminal nerve Fig. 2 A representative case with no OVs and united sensory and motor roots. TN: trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the whole trigeminal nerve  A representative case with no OVs and united sensory and motor roots. 
TN: trigeminal nerve, AC: aneurysm clip, VC: vestige of clamp at the whole trigeminal nerve Fig. 3 A representative case of recurrent TGN, with no OVs and united sensory and motor root. TN: trigeminal nerve, AC: aneurysm clip, Teflon: Teflon pad, VC: vestige of clamp at the whole trigeminal nerve  A representative case of recurrent TGN, with no OVs and united sensory and motor root. TN: trigeminal nerve, AC: aneurysm clip, Teflon: Teflon pad, VC: vestige of clamp at the whole trigeminal nerve Results: A total of 26 subjects (median age: 64 years; 15 women) were included in the analysis. Demographic and clinical characteristics are shown in Table 1. Among the 26 patients, 5 had received previous MVD for TGN. The median disease duration was 3.25 years (range: 0.5–14.0). The most common territory was V2 + V3. The Barrow Neurological Institute (BNI) pain scores were IV in 9 cases, and V in the remaining 17 cases [6]. Table 1Demographic and baseline characteristics of the patientsCohort size (n = 26)Female sex, no. (%)15 (57.7%)Age (y), median (range)64 (33–81)Disease duration (y), median (range)3.25 (0.5–14.0)Affected side, no. (%) Right only14 (53.8%) Left only12 (46.2%)Territory involved V11 V23 V35 V1 + V22 V2 + V312 V1 + V2 + V33 Demographic and baseline characteristics of the patients In 16 out of the 26 cases, neurapraxia was completed in one session. In the remaining 10 cases, the process had to be temporarily suspended and repeated due to bradycardia (n = 7) or hypertension (n = 3). No severe peri-operative complications (e.g., intracranial hemorrhage, intracranial infection and cerebrospinal fluid leakage) occurred. Post-operative complications included transient hemifacial numbness (n = 26; 100%), herpes labialis (n = 9; 34.6%), masseter weakness (n = 8; 30.7%), hemifacial formication (n = 2; 7.7%) and blunted corneal reflex (n = 2; 7.7%) (Table 2). 
The majority of the complications lasted for 3–6 months and eventually dissipated, but facial formication persisted until the last follow-up in both cases (2 and 3 years from the surgery, respectively). Table 2Post-operative complicationsComplicationsCohort size (n = 26) (%)Hemifacial numbness26 (100)Herpes labialis9 (34.6)Masseter weakness8 (30.7)Hemifacial formication2 (7.7)Blunted corneal reflex2 (7.7) Post-operative complications Immediate and complete pain relief (BNI pain score of I without any medication) was achieved in all 26 cases. Within a median follow-up of 3.0 years (range: 1–6 years), recurrence occurred in 3 (11.5%) patients. The time from surgery to relapse was 1, 1, and 1.5 years, respectively. The BNI pain score was II, II and III with carbamazepine treatment in the 3 patients with relapse. The remaining 23 patients were medication-free. Discussion: Vascular compression of the trigeminal nerve root is the leading cause of classic TGN. Other causes include focal arachnoid thickening, adhesion, cerebello-pontine angle tumors, inflammation, multiple sclerosis, brainstem infarction, and arteriovenous malformations [7, 8]. In patients who did not respond to or could not tolerate pharmacological treatments, MVD is the first choice regardless of the presence or absence of NVC as determined by MRI, due to the low sensitivity of MRI in detecting NVC [9]. In a small percentage of the patients, NVC could not be identified despite complete dissection of the entire length of the trigeminal nerve root. In addition, complete dissection of the trigeminal nerve in recurrent cases after previous MVD is often difficult, if not impossible, due to adhesion [10, 11]. Partial sensory rhizotomy (PSR) is one option in patients with no identifiable NVC. In a study of 83 cases, Young et al. reported a 15% rate of severe complications (complete loss of sensory function and corneal ulcer) [12]. 
A literature review of 10,493 patients undergoing a variety of surgical treatments for trigeminal neuralgia also supported the high rate of severe complications with PSR [13]. As a result, PSR has been practically abandoned in clinical practice. Nerve combing is another option in TGN patients with no identifiable NVC, but is associated with a relatively high rate of 5-year recurrence (approximately 40%) [14–16]. Percutaneous balloon compression (PBC) at the site of the trigeminal Gasserian ganglion is currently recommended as the first-choice extracranial treatment of TGN [17]. The physiological basis of PBC is the higher sensitivity of the larger pain fibers in the trigeminal nerve to physical damage compared with the smaller fibers that transmit other sensory input and the afferent fibers [18]. Despite the advantages of PBC, MVD is the treatment of choice in patients with TGN regardless of the presence or absence of NVC as determined by pre-operative MRI. In other words, PBC is typically used in patients who failed MVD treatment, and must be conducted separately [3, 4]. Cheng et al. used a bipolar electrocoagulation tip to produce neurapraxia in 28 patients without OVs. Twenty patients (71.4%) achieved immediate complete pain relief. With a median follow-up of 46 months (range: 8–60 months), the recurrence rate was 38.4%, and only 13 patients (46.4%) remained pain-free without medication during the follow-up. Four patients (14.3%) developed permanent facial numbness [19]. Revuelta-Gutierrez et al. used bipolar electrocoagulation tips to produce neurapraxia to the trigeminal nerve root in 21 patients, and achieved immediate complete pain relief in all 21 patients. The recurrence rate was 14.8% at 12–36 months and 43.2% at 48 months. Permanent hypoesthesia was present in 6 patients (28.6%), whereas transient loss of corneal reflex was observed in 1 patient (4.8%). Motor function of the trigeminal nerve was intact in all patients [6]. 
In the current study, we achieved 100% immediate complete pain relief with acceptable complications in 26 patients with no identifiable NVC during MVD. Within a median of 3-year follow-up, the recurrence rate was 11.5%. In our opinion, these encouraging results reflect a consistent degree of damage to the sensory fibers of the trigeminal nerve due to the use of a consistent force (90 g) and a 40-s clipping duration. Whether this method could be developed as a standardizable approach requires further study in different settings. From a surgical viewpoint, the trigeminal nerve root must be completely exposed to reveal possible OVs. If possible, only the sensory root should be clipped. Also, the duration of the clipping is essential. In a pilot series that consisted of 3 cases, we clipped the trigeminal nerve root for 2.5 min, and unfortunately, all 3 patients developed severe hemifacial numbness and masseter weakness. The protocol that we have been using since 2015 is clipping for 40 s if the procedure can be completed in a single attempt, and for a total of 60 s if the procedure must be suspended temporarily and repeated due to bradycardia or hypertension. In addition to the retrospective nature, the current study is limited by the relatively small sample size and the relatively short follow-up (median at 3 years). Also, 3 patients who were lost to follow-up were not included in the analysis. It is possible that these 3 patients experienced relapse or complications but chose not to come back to us. This could introduce some bias into our results. The follow-up was conducted via telephone in some patients, and not based on office visits, adding another layer of limitation to the current study. Having said that, we believe that the key results are solid, since recurrence and the majority of the complications are sensory abnormalities that patients can report without standard objective examinations. 
Conclusions: Neurapraxia using a Yasargil temporary titanium aneurysm clip is safe and effective in patients with classic TGN but no identifiable NVC during MVD. The advantages of this method include: (1) potentially wider use, since the degree of damage is standardizable; and (2) no need to schedule a second surgery.
Background: Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that cannot be sufficiently controlled by pharmacological treatment. However, neurovascular conflict (NVC) cannot be identified during MVD in all patients. This study describes the efficacy and safety of treatment with aneurysm clips in these situations. Methods: A total of 205 patients underwent MVD for classic TGN at our center from January 1, 2015 to December 31, 2019. In patients without identifiable NVC upon dissection of the entire trigeminal nerve root, neurapraxia was performed using a Yasargil temporary titanium aneurysm clip (force: 90 g) for 40 s (or a total of 60 s if the process had to be suspended temporarily due to bradycardia or hypertension). Results: A total of 26 patients (median age: 64 years; 15 women) underwent neurapraxia. Five out of the 26 patients had received prior MVD but relapsed. Immediate complete pain relief was achieved in all 26 cases. Within a median follow-up of 3 years (range: 1.0-6.0), recurrence was noted in 3 cases (11.5%). Postoperative complications included hemifacial numbness, herpes labialis, and masseter weakness; most were transient and dissipated within 3-6 months. Conclusions: Neurapraxia using an aneurysm clip is safe and effective in patients with classic TGN but no identifiable NVC during MVD. Whether this method could be developed into a standardized approach needs further investigation.
Background: Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that cannot be well controlled by pharmacological treatments [1, 2]. However, in 3.1–28.8% of cases, no offending vessels (OVs) can be identified during MVD [3, 4]. In such cases, a second surgical operation that ablates the extracranial segment of the trigeminal nerve may be needed. Revuelta-Gutierrez et al. reported promising results of neurapraxia with bipolar tips during MVD in such patients [5]. In 2014, we attempted to develop a standardizable method of neurapraxia using a Yasargil temporary titanium aneurysm clip (force: 90 g). In the initial series of 3 cases, the sensory root or the main trunk of the trigeminal nerve was clipped for 2.5 min, but we observed severe hemifacial numbness and masseter weakness in all 3 cases. From the beginning of 2015, we decreased the clipping duration to 40 s. In this retrospective analysis, we analyzed the data of all cases with at least 1 year of follow-up. Conclusions: Neurapraxia using a Yasargil temporary titanium aneurysm clip is safe and effective in patients with classic TGN but no identifiable NVC during MVD. The advantages of this method include: (1) potentially wider use, since the damage is standardizable; and (2) no need to schedule a second surgery.
Background: Microvascular decompression (MVD) is the first choice in patients with classic trigeminal neuralgia (TGN) that cannot be sufficiently controlled by pharmacological treatment. However, neurovascular conflict (NVC) cannot be identified during MVD in all patients. We aimed to describe the efficacy and safety of treatment with aneurysm clips in these situations. Methods: A total of 205 patients underwent MVD for classic TGN at our center from January 1, 2015 to December 31, 2019. In patients without identifiable NVC upon dissection of the entire trigeminal nerve root, neurapraxia was performed using a Yasargil temporary titanium aneurysm clip (force: 90 g) for 40 s (or a total of 60 s if the process had to be suspended temporarily due to bradycardia or hypertension). Results: A total of 26 patients (median age: 64 years; 15 women) underwent neurapraxia. Five of the 26 patients had received prior MVD but relapsed. Immediate complete pain relief was achieved in all 26 cases. Within a median follow-up of 3 years (range: 1.0-6.0), recurrence was noted in 3 cases (11.5%). Postoperative complications included hemifacial numbness, herpes labialis, and masseter weakness; most were transient and dissipated within 3-6 months. Conclusions: Neurapraxia using an aneurysm clip is safe and effective in patients with classic TGN but no identifiable NVC during MVD. Whether this method could be developed into a standardizable approach needs further investigation.
2,365
282
5
[ "trigeminal", "nerve", "trigeminal nerve", "patients", "root", "sensory", "cases", "mvd", "tgn", "motor" ]
[ "test", "test" ]
null
[CONTENT] Trigeminal neuralgia | Microvascular decompression | Offending vessel | neurapraxia [SUMMARY]
null
[CONTENT] Trigeminal neuralgia | Microvascular decompression | Offending vessel | neurapraxia [SUMMARY]
[CONTENT] Trigeminal neuralgia | Microvascular decompression | Offending vessel | neurapraxia [SUMMARY]
[CONTENT] Trigeminal neuralgia | Microvascular decompression | Offending vessel | neurapraxia [SUMMARY]
[CONTENT] Trigeminal neuralgia | Microvascular decompression | Offending vessel | neurapraxia [SUMMARY]
[CONTENT] Female | Humans | Hypesthesia | Microvascular Decompression Surgery | Middle Aged | Postoperative Complications | Retrospective Studies | Treatment Outcome | Trigeminal Neuralgia [SUMMARY]
null
[CONTENT] Female | Humans | Hypesthesia | Microvascular Decompression Surgery | Middle Aged | Postoperative Complications | Retrospective Studies | Treatment Outcome | Trigeminal Neuralgia [SUMMARY]
[CONTENT] Female | Humans | Hypesthesia | Microvascular Decompression Surgery | Middle Aged | Postoperative Complications | Retrospective Studies | Treatment Outcome | Trigeminal Neuralgia [SUMMARY]
[CONTENT] Female | Humans | Hypesthesia | Microvascular Decompression Surgery | Middle Aged | Postoperative Complications | Retrospective Studies | Treatment Outcome | Trigeminal Neuralgia [SUMMARY]
[CONTENT] Female | Humans | Hypesthesia | Microvascular Decompression Surgery | Middle Aged | Postoperative Complications | Retrospective Studies | Treatment Outcome | Trigeminal Neuralgia [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] trigeminal | nerve | trigeminal nerve | patients | root | sensory | cases | mvd | tgn | motor [SUMMARY]
null
[CONTENT] trigeminal | nerve | trigeminal nerve | patients | root | sensory | cases | mvd | tgn | motor [SUMMARY]
[CONTENT] trigeminal | nerve | trigeminal nerve | patients | root | sensory | cases | mvd | tgn | motor [SUMMARY]
[CONTENT] trigeminal | nerve | trigeminal nerve | patients | root | sensory | cases | mvd | tgn | motor [SUMMARY]
[CONTENT] trigeminal | nerve | trigeminal nerve | patients | root | sensory | cases | mvd | tgn | motor [SUMMARY]
[CONTENT] cases | trigeminal | mvd | trigeminal nerve | nerve | decompression mvd choice | 28 cases offending | gutierrez reported promising results | gutierrez reported promising | 28 cases offending vessels [SUMMARY]
null
[CONTENT] 26 | years | median | table | cases | operative | complications | range | operative complications | remaining [SUMMARY]
[CONTENT] second surgery | wider use damage | clip safe effective patients | clip safe effective | clip safe | need | wider | wider use | wider use damage standardizable | need schedule second surgery [SUMMARY]
[CONTENT] trigeminal | nerve | trigeminal nerve | patients | cases | root | sensory | mvd | aneurysm | clip [SUMMARY]
[CONTENT] trigeminal | nerve | trigeminal nerve | patients | cases | root | sensory | mvd | aneurysm | clip [SUMMARY]
[CONTENT] MVD | first | neuralgia | TGN ||| MVD ||| [SUMMARY]
null
[CONTENT] 26 | 64 years | 15 ||| Five | 26 | MVD ||| 26 ||| 3 years | 1.0-6.0 | 3 | 11.5% ||| 3-6 months [SUMMARY]
[CONTENT] TGN | NVC | MVD ||| [SUMMARY]
[CONTENT] MVD | first | neuralgia | TGN ||| MVD ||| ||| 205 | MVD | TGN | January 1, 2015 to December 31, 2019 ||| NVC | Yasargil | 90 | 40 s | 60 ||| ||| 26 | 64 years | 15 ||| Five | 26 | MVD ||| 26 ||| 3 years | 1.0-6.0 | 3 | 11.5% ||| 3-6 months ||| TGN | NVC | MVD ||| [SUMMARY]
[CONTENT] MVD | first | neuralgia | TGN ||| MVD ||| ||| 205 | MVD | TGN | January 1, 2015 to December 31, 2019 ||| NVC | Yasargil | 90 | 40 s | 60 ||| ||| 26 | 64 years | 15 ||| Five | 26 | MVD ||| 26 ||| 3 years | 1.0-6.0 | 3 | 11.5% ||| 3-6 months ||| TGN | NVC | MVD ||| [SUMMARY]
Interferon-Gamma Release Assay is Not Appropriate for the Diagnosis of Active Tuberculosis in High-Burden Tuberculosis Settings: A Retrospective Multicenter Investigation.
29363640
Interferon-gamma release assay (IGRA) has been used in the diagnosis of latent tuberculosis (TB) infection and active TB, but results from different high TB-endemic countries have been inconsistent. The aim of this study was to investigate the value of IGRA in the diagnosis of active pulmonary TB (PTB) in China.
BACKGROUND
We conducted a large-scale retrospective multicenter investigation to further evaluate the role of IGRA in the diagnosis of active PTB in high TB-epidemic populations and the factors affecting the performance of the assay. All patients who underwent valid T-SPOT.TB assays from December 2012 to November 2015 in six large-scale specialized TB hospitals in China and met the study criteria were retrospectively evaluated. Patients were divided into three groups: Group 1, sputum culture-positive PTB patients, confirmed by positive Mycobacterium tuberculosis sputum culture; Group 2, sputum culture-negative PTB patients; and Group 3, non-TB respiratory diseases. The medical records of all patients were collected. Chi-square tests and Fisher's exact test were used to compare categorical data. Multivariable logistic analyses were performed to evaluate the relationship between the results of T-SPOT in TB patients and other factors.
METHODS
A total of 3082 patients for whom complete information was available were included in the investigation, including 905 sputum culture-positive PTB cases, 914 sputum culture-negative PTB cases, and 1263 non-TB respiratory disease cases. The positive rate of T-SPOT.TB was 93.3% in the culture-positive PTB group and 86.1% in the culture-negative PTB group. In the non-PTB group, the positive rate of T-SPOT.TB was 43.6%. The positive rate of T-SPOT.TB in the culture-positive PTB group was significantly higher than that in the culture-negative PTB group (χ2 = 25.118, P < 0.01), which in turn was significantly higher than that in the non-TB group (χ2 = 566.116, P < 0.01). The overall results were as follows: sensitivity, 89.7%; specificity, 56.37%; positive predictive value, 74.75%; negative predictive value, 79.11%; and accuracy, 76.02%.
RESULTS
The high false-positive rate of the T-SPOT.TB assay in the non-TB group limits its usefulness as a single test to diagnose active TB in China. We highly recommend that IGRAs not be used for the diagnosis of active TB in high-burden TB settings.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Child", "Female", "Humans", "Interferon-gamma", "Interferon-gamma Release Tests", "Male", "Middle Aged", "Mycobacterium tuberculosis", "Retrospective Studies", "Sensitivity and Specificity", "Sputum", "Tuberculosis, Pulmonary", "Young Adult" ]
5798046
INTRODUCTION
Tuberculosis (TB) is a major health problem throughout the world, especially in developing countries. According to the World Health Organization (WHO), an estimated 10.4 million people worldwide fell ill with TB in 2015, and TB killed 1.8 million people (1.4 million human immunodeficiency virus [HIV]-negative and 0.4 million HIV-positive people).[1] TB is also a serious problem in China: the WHO estimated an incidence of 66/100,000 TB cases in China in 2015.[1] Rapid detection of mycobacteria and successful treatment of infectious patients are important for controlling and preventing TB. Chest X-rays are often used in pulmonary TB (PTB) screening and have the advantages of being simple and inexpensive. However, in smear-negative PTB, many cases show atypical or nonspecific patterns and are difficult to differentiate from other pulmonary diseases.[2] Smear microscopy has low sensitivity, and culture of mycobacteria likewise has low sensitivity and requires several weeks to yield results. A recent breakthrough in the diagnosis of TB and latent TB infection is the introduction of interferon-gamma release assays (IGRAs), in which the production of interferon-gamma (IFN-γ) in response to Mycobacterium tuberculosis (MTB)-specific antigens is measured. Two types of commercial IGRAs are currently available: the QuantiFERON-TB Gold In-Tube test and the T-SPOT.TB blood test. The sensitivities of the IGRAs were reported to be high in detecting active TB in low TB-endemic countries in previous studies.[3,4] However, a recent study conducted in Poland did not support a combination of IGRA and the tuberculin skin test (TST) as a step forward in the diagnosis of culture-negative TB cases.[5] Results from high TB-endemic countries have also been inconsistent. 
Some studies indicated that the T-SPOT.TB assay is a promising diagnostic test for active PTB,[6–8] but other studies showed that IGRA was insufficient for the diagnosis of PTB.[9,10] In this study, we conducted a large retrospective multicenter investigation in China to further evaluate the use of IGRA in the diagnosis of active PTB in high TB-epidemic populations and the factors affecting the performance of the assay.
METHODS
Ethics approval: This was an observational retrospective study, and the three diagnostic tests were already used in clinical practice. Given that the medical information of patients was recorded anonymously from case histories, which posed no risk to the participants, the Ethics Committee of Beijing Chest Hospital, Capital Medical University approved this retrospective study with a waiver of informed consent from the patients.
Study subjects: China is a high TB-burden country. In 2012, the incidence of TB in the provinces where the study hospitals are situated was between 40/100,000 and 181/100,000.[11] All patients who underwent valid T-SPOT.TB assays from December 2012 to November 2015 in six large-scale specialized TB hospitals in China and met the study criteria were retrospectively evaluated. The six hospitals, with a combined capacity of 3000 beds, are situated in the south, north, east, and center of China. At each study hospital, trained health workers extracted data from the computer database of inpatient medical records. Records were collected on age, gender, contact with TB, vaccination with Bacillus Calmette–Guérin (BCG), albumin, body mass index (BMI), smoking and alcohol intake, presenting complaints, sputum smear and culture, range of TB disease, lung cavity and its range, course of TB, history of TB treatment, comorbidity, etc. Only patients with complete information were included in the investigation. The prevalence of HIV infection is very low in China, and all cases had negative results on serological tests for HIV. No patient had an immune disease, had previously received immunosuppressants, or had extrapulmonary TB. Patients were divided into the following three groups: Group 1, sputum culture-positive PTB patients, confirmed by positive MTB sputum culture; Group 2, sputum culture-negative PTB patients, diagnosed on the basis of typical clinical symptoms, typical features on radiographs, and proper responses to anti-TB treatment (sputum smear-positive and smear-negative cases were included); and Group 3, non-TB respiratory diseases, including pneumonia, lung cancer, pulmonary interstitial fibrosis, chronic obstructive pulmonary disease, and bronchiectasis. In China, PTB has generally been diagnosed by traditional methods that rely on clinical symptoms together with the results of bacteriological methods (including sputum smear microscopy and bacterial culture) and X-ray examination (diagnostic criteria for TB WS-288-2008).
T-SPOT.TB assay: The T-SPOT.TB test is an in vitro diagnostic test that detects effector T-cells in human whole blood by capturing IFN-γ in the vicinity of T-cells responding to stimulation with MTB-specific antigens such as the 6 kDa early secretory antigenic target (ESAT-6) and the 10 kDa culture filtrate protein (CFP-10). The T-SPOT.TB test (Oxford Immunotec Ltd., UK) was performed using peripheral blood mononuclear cells (PBMCs) separated from heparinized blood samples according to the manufacturer's instructions. Briefly, PBMCs were isolated and incubated with the two antigens (ESAT-6 and CFP-10). The procedure was performed in plates precoated with anti-IFN-γ antibodies at 37°C for 16–20 h. After application of an alkaline phosphatase-conjugated secondary antibody and chromogenic substrate, the number of spot-forming cells (per million PBMCs) in each well was automatically counted with a CTL ELISPOT system. The results were interpreted as recommended by the test kit manufacturer.[12]
Statistical analysis: Frequencies and percentages were used to describe categorical data, and continuous variables are presented as the mean or median. Chi-square tests and Fisher's exact test were used to compare categorical data. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and analytic accuracy (Acc) were calculated for each TB patient group. The analysis was performed on data from patients with culture-positive or culture-negative active TB and cases with nonmycobacterial lung diseases. Confidence intervals (95% CIs) were estimated according to the binomial distribution. Multivariable logistic analyses were performed to evaluate the relationship between T-SPOT.TB results in TB patients and other factors. Odds ratios (ORs) and 95% CIs were calculated. P < 0.05 was considered statistically significant. Logistic regression models were used with a minimum of 10 events per predictor variable, as recommended by a Monte Carlo study.[13] All statistical analyses were performed with SPSS software (version 13.0, SPSS Inc., Chicago, USA).
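The binomial 95% CIs described above can be reproduced with a simple normal-approximation (Wald) interval. A minimal sketch; note that the count 844/905 is a back-calculation from the reported 93.3% positive rate in the culture-positive group, not a figure taken from the paper's raw data:

```python
import math

def binomial_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Positive rate in the culture-positive PTB group: 844/905 ~ 93.3%
lo, hi = binomial_ci(844, 905)  # roughly (0.916, 0.949), matching the reported 91.7-94.9%
```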
RESULTS
Demographic characteristics of the three groups: A total of 3082 patients for whom complete information was available were included in the investigation. The sample population included 905 sputum culture-positive PTB cases, 914 sputum culture-negative PTB cases, and 1263 non-TB respiratory disease cases. Demographic characteristics of the three groups are presented in Table 1. (*The BMI is the weight in kilograms divided by the square of the height in meters. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.)
Clinical characteristics of the tuberculosis patients: Clinical characteristics of the TB patients are summarized in Table 2. The rates of cough, productive cough, fever, sweats, and weight loss in the culture-positive PTB group were higher than those in the culture-negative PTB group (P < 0.05). The ranges of TB disease and cavitation were more extensive in the culture-positive PTB group than in the culture-negative PTB group (P < 0.01). The frequency of diabetes in the culture-positive PTB group was higher than that in the culture-negative PTB group (P < 0.01). (*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. †Respiratory failure was defined as PaO2 lower than 60 by arterial blood gas analysis. TB: Tuberculosis; AFB: Acid-fast bacilli; COPD: Chronic obstructive pulmonary disease. –: Not applicable.)
Diagnostic performance of T-SPOT.TB: The positive rates of T-SPOT.TB were 93.3% (95% CI: 91.7–94.9%), 86.1% (95% CI: 83.7–88.5%), and 43.6% (95% CI: 40.9–46.3%) in the culture-positive PTB, culture-negative PTB, and non-TB groups, respectively. The positive rate of T-SPOT.TB in the culture-positive PTB group was higher than that in the culture-negative PTB group or the non-TB group (P < 0.01), and the positive rate in the culture-negative PTB group was higher than that in the non-TB group (P < 0.01). The overall sensitivity, specificity, PPV, NPV, and Acc were 89.7% (95% CI: 88.2–91.0%), 56.4% (95% CI: 53.6–59.1%), 74.8% (95% CI: 72.9–76.6%), 79.1% (95% CI: 76.3–81.7%), and 76.0% (95% CI: 74.5–77.5%), respectively.
Risk factors associated with positive T-SPOT.TB results: Risk factors associated with positive T-SPOT.TB results are presented in Table 3 (n = 1819). Sensitivity was lower in older patients than in younger patients (P < 0.01). Sensitivity was higher in patients with a BMI <18.5 kg/m2 than in those with a BMI ≥18.5 kg/m2 (P = 0.02), higher in smear-positive than in smear-negative patients (P < 0.01), higher in patients with comorbidity than in those without (P = 0.017), higher in patients with a record of contact with TB than in those without (P = 0.049), and higher in patients with a record of BCG vaccination than in those without (P = 0.040). (BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.)
Association between T-SPOT.TB result and other factors in tuberculosis patients: In multivariate logistic regression analysis, the factors affecting T-SPOT.TB results in TB patients included gender (OR, 0.714; 95% CI, 0.515–0.989; P = 0.043), age (OR, 0.691; 95% CI, 0.556–0.859; P < 0.01), BMI (OR, 0.942; 95% CI, 0.900–0.987; P = 0.012), sputum culture (OR, 1.929; 95% CI, 1.271–2.927; P = 0.002), and contact with TB (OR, 2.635; 95% CI, 1.037–6.695; P = 0.042) [Table 4]. Female gender, age >65 years, BMI ≥18.5 kg/m2, negative sputum culture, and no contact with TB were demonstrated to be independent factors associated with negative test results. (*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. Factors with P < 0.2 in the univariate analysis were entered into the multivariate logistic regression. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin; COPD: Chronic obstructive pulmonary disease.)
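The overall performance figures reported in the Results can be cross-checked from the implied 2×2 table. A minimal sketch; the four cell counts below are back-calculated from the reported rates (1819 PTB patients, 1263 non-TB controls), not taken from the paper's raw data:

```python
# Standard confusion-matrix metrics for a binary diagnostic test.
def diagnostic_metrics(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / total,
    }

# Back-calculated cells: 1819 PTB patients (905 culture-positive + 914
# culture-negative) and 1263 non-TB controls.
tp, fn = 1631, 188  # T-SPOT.TB positive / negative among PTB patients
fp, tn = 551, 712   # T-SPOT.TB positive / negative among non-TB patients
m = diagnostic_metrics(tp, fn, fp, tn)
```

These counts reproduce the reported sensitivity (89.7%), specificity (56.37%), PPV (74.75%), NPV (79.11%), and accuracy (76.02%).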
null
null
[ "Ethics approval", "Study subjects", "T-SPOT.TB assay", "Statistical analysis", "Demographic characteristics of the three groups", "Clinical characteristics of the tuberculosis patients", "Diagnostic performance of T-SPOT.TB", "Risk factors associated with positive T-SPOT.TB results", "Association between T-SPOT.TB result and other factors in tuberculosis patients", "Financial support and sponsorship" ]
[ "This was an observational retrospective study and the three diagnostic tests were already used in clinical practice. Given that the medical information of patients was recorded anonymously by case history, which would not bring any risk to the participants, the Ethics Committee of Beijing Chest Hospital, Capital Medical University approved this retrospective study, with a waiver of informed consent from the patients.", "China is a high TB-burden country. In 2012, the incidence of TB in the study hospital situated provinces were between 40/100,000 and 181/100,000.[11] All patients who underwent valid T-SPOT.TB assays from December 2012 to November 2015 in six large-scale specialized TB hospitals in China and met the study criteria were retrospectively evaluated. The six hospitals with a capacity of 3000 beds are situated in the south, north, east, and center of China. At each study hospital, trained health workers extracted data from the computer database of medical records of inpatients. Records were collected in terms of age, gender, contact with TB, vaccination with Bacillus Calmette–Guérin (BCG), albumin, body mass index (BMI), smoking and alcohol intake, presenting complaints, sputum smear and culture, range of TB disease, lung cavity and its range, course of TB, history of TB treatment, comorbidity, etc. Only patients with complete information were included in the investigation. The prevalence of HIV infection is very low in China, and all cases had negative results on serological tests for HIV. All patients had not had immune diseases or received immunosuppressant before. All patients had not extrapulmonary TB. 
Patients were divided into the following three groups: Group 1, sputum culture-positive PTB patients, confirmed by positive MTB sputum culture; Group 2, sputum culture-negative PTB patients diagnosed on the basis of typical clinical symptoms, typical features on radiographs, and proper responses to anti-TB treatment (sputum smear-positive and smear-negative cases were included); and Group 3, non-TB respiratory diseases, including pneumonia, lung cancer, pulmonary interstitial fibrosis, chronic obstructive pulmonary disease, and bronchiectasis. In China, PTB has generally been diagnosed by traditional methods that rely on clinical symptoms together with the results of bacteriology methods (including sputum smear microscopy and bacterial culture) and X-ray examination (diagnostic criteria for TB WS-288-2008).", "The T-SPOT.TB test is an in vitro diagnostic test that detects the effector T-cells in human whole blood by capturing IFN-γ in the vicinity of the T-cells that are responding to stimulation with MTB-specific antigens such as 6 kDa early secretory antigenic target (ESAT-6) and 10 kDa culture filtrate protein (CFP-10). The T-SPOT.TB test (Oxford Immunotec Ltd., UK) was performed using peripheral blood mononuclear cells (PBMCs) separated from heparinized blood samples according to the manufacturer's instructions. Briefly, PBMCs were isolated and incubated with two antigens (ESAT-6 and CFP-10). The procedure was performed in plates precoated with anti-IFN-γ antibodies at 37°C for 16–20 h. After application of alkaline phosphatase-conjugated secondary antibody and chromogenic substrate, the number of spot-forming cells (million PBMCs) in each well was automatically counted with a CTL ELISPOT system. The results were interpreted as recommended by the test kit manufacturer.[12]", "Frequencies and percentages were used to describe the categorical data, and continuous variables were presented as the mean or median. 
Chi-square tests and Fisher's exact test were used to compare categorical data. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and analytic accuracy (Acc) were calculated for each TB patient group. The analysis was performed on data from patients with culture-positive or culture-negative active TB and cases with nonmycobacterial lung diseases. Confidence intervals (95% CIs) were estimated according to the binomial distribution. Multivariable logistic analyses were performed to evaluate the relationship between the results of T-SPOT in TB patients and other factors. Odds ratios (ORs) and 95% CIs for risk were calculated. P < 0.05 was considered statistically significant. In line with a Monte Carlo simulation study, logistic regression models were fitted with a minimum of 10 events per predictor variable.[13] All statistical analyses were performed with SPSS software (version 13.0, SPSS Inc., Chicago, USA).", "A total of 3082 patients for whom there was complete information were included in the investigation. The sample population included 905 sputum culture-positive PTB cases, 914 sputum culture-negative PTB cases, and 1263 non-TB respiratory disease cases. Demographic characteristics of the three groups are presented in Table 1.\nDemographic characteristics of the three groups\n*The BMI is the weight in kilograms divided by the square of the height in meters. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.", "Clinical characteristics of the TB patients are summarized in Table 2. The rates of cough, productive cough, fever, sweat, and weight-loss in the culture-positive PTB group were higher than those in the culture-negative PTB group (P < 0.05). The ranges of TB disease and cavity were more extensive in the culture-positive PTB group than in the culture-negative PTB group (P < 0.01). 
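The diagnostic indices described in the statistical-analysis passage can be illustrated with a short Python sketch. This is not the authors' SPSS analysis; the 2×2 counts are a hypothetical reconstruction from the reported rates (1631 positive of 1819 TB patients; 551 positive of 1263 non-TB patients), and the Wald interval shown is a simpler stand-in for the exact binomial CIs the paper used.

```python
import math

def prop_ci(k, n, z=1.96):
    """Proportion with a normal-approximation (Wald) 95% CI.
    The paper used exact binomial CIs, so bounds may differ slightly."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic indices from a 2x2 confusion matrix."""
    return {
        "sensitivity": prop_ci(tp, tp + fn),
        "specificity": prop_ci(tn, tn + fp),
        "ppv": prop_ci(tp, tp + fp),
        "npv": prop_ci(tn, tn + fn),
        "accuracy": prop_ci(tp + tn, tp + fp + fn + tn),
    }

# Counts reconstructed from the reported positive rates (illustrative only):
# 1631 of 1819 TB patients tested positive; 551 of 1263 non-TB patients tested positive.
m = diagnostic_metrics(tp=1631, fp=551, fn=188, tn=712)
print(round(m["sensitivity"][0], 3), round(m["specificity"][0], 3))  # 0.897 0.564
```

With these reconstructed counts the sketch reproduces the reported overall sensitivity (89.7%), specificity (56.4%), NPV (79.1%), and accuracy (76.0%).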
The frequency of diabetes in the culture-positive PTB group was higher than that in the culture-negative PTB group (P < 0.01).\nClinical characteristics of the TB patients\n*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. †Respiratory failure was defined as PaO2 lower than 60 mmHg on arterial blood gas analysis. TB: Tuberculosis; AFB: Acid-fast bacilli; COPD: Chronic obstructive pulmonary disease. –: Not applicable.", "The positive rates of T-SPOT.TB were 93.3% (95% CI: 91.7–94.9%), 86.1% (95% CI: 83.7–88.5%), and 43.6% (95% CI: 40.9–46.3%) in the culture-positive PTB group, culture-negative PTB group, and non-TB group, respectively. The positive rate of T-SPOT.TB in the culture-positive PTB group was higher than that in the culture-negative PTB group or in the non-TB group (P < 0.01). The positive rate of T-SPOT.TB in the culture-negative PTB group was higher than that in the non-TB group (P < 0.01). The overall sensitivity, specificity, PPV, NPV, and Acc were 89.7% (95% CI: 88.2–91.0%), 56.4% (95% CI: 53.6–59.1%), 74.8% (95% CI: 72.9–76.6%), 79.1% (95% CI: 76.3–81.7%), and 76.0% (95% CI: 74.5–77.5%), respectively.", "Risk factors associated with positive T-SPOT.TB results are presented in Table 3. The sensitivity in older patients (≥65 years) was lower than that in younger patients (P < 0.01). The sensitivity in patients with a BMI <18.5 kg/m2 was higher than that in patients with a BMI ≥18.5 kg/m2 (P = 0.02). The sensitivity in smear-positive patients was higher than that in smear-negative patients (P < 0.01). The sensitivity in patients with comorbidity was higher than that in patients without comorbidity (P = 0.017). The sensitivity in patients with a record of contact with TB was higher than that in patients without TB contact (P = 0.049). 
The sensitivity of T-SPOT.TB in patients with a record of BCG vaccination was higher than that in patients without such a record (P = 0.040).\nRisk factors associated with positive T-SPOT.TB results, n = 1819\nBMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.", "In multivariate logistic regression analysis, the factors affecting T-SPOT.TB results in TB patients included gender (OR, 0.714; 95% CI, 0.515–0.989; P = 0.043), age (OR, 0.691; 95% CI, 0.556–0.859, P < 0.01), BMI (OR, 0.942; 95% CI, 0.900–0.987, P = 0.012), sputum culture (OR, 1.929; 95% CI, 1.271–2.927, P = 0.002), and contact with TB (OR, 2.635; 95% CI, 1.037–6.695, P = 0.042) [Table 4]. Female gender, age >65 years, BMI ≥18.5 kg/m2, negative sputum culture, and no TB contact were independent factors associated with negative test results.\nMultivariable logistic regression analysis of the association of T-SPOT.TB and clinical characteristics\n*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. Factors with P < 0.2 in the univariate analysis were entered into the multivariate logistic regression. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin; COPD: Chronic obstructive pulmonary disease.", "Nil." ]
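The between-group comparisons of positive rates reported above (e.g., 93.3% vs. 86.1%, P < 0.01) rest on Pearson's chi-square test for 2×2 tables. A minimal pure-Python sketch follows; the positive/negative counts (844/61 in the culture-positive group, 787/127 in the culture-negative group) are a hypothetical reconstruction from the reported rates, not the authors' code.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# T-SPOT.TB positive/negative counts reconstructed from the reported rates:
# culture-positive PTB: 844 positive / 61 negative; culture-negative PTB: 787 / 127.
stat = chi2_2x2(844, 61, 787, 127)
print(stat > 6.635)  # True: exceeds the df=1 critical value for P < 0.01
```

The statistic comfortably exceeds the 1-degree-of-freedom critical value for P < 0.01, consistent with the significance level reported in the Results.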
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Ethics approval", "Study subjects", "T-SPOT.TB assay", "Statistical analysis", "RESULTS", "Demographic characteristics of the three groups", "Clinical characteristics of the tuberculosis patients", "Diagnostic performance of T-SPOT.TB", "Risk factors associated with positive T-SPOT.TB results", "Association between T-SPOT.TB result and other factors in tuberculosis patients", "DISCUSSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Tuberculosis (TB) is a major health problem throughout the world, especially in developing countries. According to the World Health Organization (WHO), worldwide, 10.4 million people are estimated to have fallen ill with TB in 2015, and TB killed 1.8 million people (1.4 million human immunodeficiency virus [HIV]-negative and 0.4 million HIV-positive people).[1] TB is also a serious problem in China. The WHO has estimated an incidence of 66/100,000 TB cases in China in 2015.[1]\nThe rapid detection of mycobacteria and successful treatment of infectious patients is important for controlling and preventing TB. Chest X-rays are often used in pulmonary TB (PTB) screening and have the advantages of being simple and inexpensive. However, in smear-negative PTB, many cases show atypical or nonspecific patterns and are difficult to differentiate from other pulmonary diseases.[2] Smear microscopy has low sensitivity. Culture of mycobacteria requires several weeks to obtain the results and has low sensitivity.\nA recent breakthrough in the diagnosis of TB and latent TB infection is the introduction of interferon-gamma release assays (IGRAs), in which the production of interferon-gamma (IFN-γ) in response to Mycobacterium tuberculosis (MTB)-specific antigens is measured. There are currently two types of commercial IGRAs available: the QuantiFERON-TB Gold In-Tube test and the T-SPOT.TB blood test. The sensitivities of the IGRAs were reported to be high in detecting active TB patients in low TB-endemic countries in previous studies.[3,4] However, a recent study conducted in Poland did not support the combination of IGRA and the tuberculin skin test (TST) as a step forward in the diagnosis of culture-negative TB cases.[5] Results from high TB-endemic countries have also been inconsistent. 
Some studies indicated that the T-SPOT.TB assay is a promising diagnostic test for active PTB,[6–8] but other studies showed that IGRA was insufficient for the diagnosis of PTB.[9,10] In this study, we conducted a large retrospective multicenter investigation in China to further evaluate the use of IGRA in the diagnosis of active PTB in high TB-epidemic populations and the factors affecting the performance of the assay.", " Ethics approval This was an observational retrospective study and the three diagnostic tests were already used in clinical practice. Given that the medical information of patients was recorded anonymously by case history, which would not bring any risk to the participants, the Ethics Committee of Beijing Chest Hospital, Capital Medical University approved this retrospective study, with a waiver of informed consent from the patients.\n Study subjects China is a high TB-burden country. In 2012, the incidence of TB in the provinces where the study hospitals are situated ranged from 40/100,000 to 181/100,000.[11] All patients who underwent valid T-SPOT.TB assays from December 2012 to November 2015 in six large-scale specialized TB hospitals in China and met the study criteria were retrospectively evaluated. The six hospitals, with a combined capacity of 3000 beds, are situated in the south, north, east, and center of China. At each study hospital, trained health workers extracted data from the computer database of medical records of inpatients. 
Recorded variables included age, gender, contact with TB, vaccination with Bacillus Calmette–Guérin (BCG), albumin, body mass index (BMI), smoking and alcohol intake, presenting complaints, sputum smear and culture, range of TB disease, lung cavity and its range, course of TB, history of TB treatment, and comorbidity. Only patients with complete information were included in the investigation. The prevalence of HIV infection is very low in China, and all cases had negative results on serological tests for HIV. No patient had a history of immune disease or prior immunosuppressant use, and none had extrapulmonary TB. Patients were divided into the following three groups: Group 1, sputum culture-positive PTB patients, confirmed by positive MTB sputum culture; Group 2, sputum culture-negative PTB patients diagnosed on the basis of typical clinical symptoms, typical features on radiographs, and an appropriate response to anti-TB treatment (sputum smear-positive and smear-negative cases were included); and Group 3, non-TB respiratory diseases, including pneumonia, lung cancer, pulmonary interstitial fibrosis, chronic obstructive pulmonary disease, and bronchiectasis. In China, PTB has generally been diagnosed by traditional methods that rely on clinical symptoms together with the results of bacteriology methods (including sputum smear microscopy and bacterial culture) and X-ray examination (diagnostic criteria for TB WS-288-2008).\n T-SPOT.TB assay The T-SPOT.TB test is an in vitro diagnostic test that detects the effector T-cells in human whole blood by capturing IFN-γ in the vicinity of the T-cells that are responding to stimulation with MTB-specific antigens such as 6 kDa early secretory antigenic target (ESAT-6) and 10 kDa culture filtrate protein (CFP-10). 
The T-SPOT.TB test (Oxford Immunotec Ltd., UK) was performed using peripheral blood mononuclear cells (PBMCs) separated from heparinized blood samples according to the manufacturer's instructions. Briefly, PBMCs were isolated and incubated with two antigens (ESAT-6 and CFP-10). The procedure was performed in plates precoated with anti-IFN-γ antibodies at 37°C for 16–20 h. After application of alkaline phosphatase-conjugated secondary antibody and chromogenic substrate, the number of spot-forming cells (SFCs) in each well was automatically counted with a CTL ELISPOT system. The results were interpreted as recommended by the test kit manufacturer.[12]\n Statistical analysis Frequencies and percentages were used to describe the categorical data, and continuous variables were presented as the mean or median. Chi-square tests and Fisher's exact test were used to compare categorical data. 
Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and analytic accuracy (Acc) were calculated for each TB patient group. The analysis was performed on data from patients with culture-positive or culture-negative active TB and cases with nonmycobacterial lung diseases. Confidence intervals (95% CIs) were estimated according to the binomial distribution. Multivariable logistic analyses were performed to evaluate the relationship between the results of T-SPOT in TB patients and other factors. Odds ratios (ORs) and 95% CIs for risk were calculated. P < 0.05 was considered statistically significant. In line with a Monte Carlo simulation study, logistic regression models were fitted with a minimum of 10 events per predictor variable.[13] All statistical analyses were performed with SPSS software (version 13.0, SPSS Inc., Chicago, USA).", "This was an observational retrospective study and the three diagnostic tests were already used in clinical practice. Given that the medical information of patients was recorded anonymously by case history, which would not bring any risk to the participants, the Ethics Committee of Beijing Chest Hospital, Capital Medical University approved this retrospective study, with a waiver of informed consent from the patients.", "China is a high TB-burden country. In 2012, the incidence of TB in the provinces where the study hospitals are situated ranged from 40/100,000 to 181/100,000.[11] All patients who underwent valid T-SPOT.TB assays from December 2012 to November 2015 in six large-scale specialized TB hospitals in China and met the study criteria were retrospectively evaluated. The six hospitals, with a combined capacity of 3000 beds, are situated in the south, north, east, and center of China. At each study hospital, trained health workers extracted data from the computer database of medical records of inpatients. Recorded variables included age, gender, contact with TB, vaccination with Bacillus Calmette–Guérin (BCG), albumin, body mass index (BMI), smoking and alcohol intake, presenting complaints, sputum smear and culture, range of TB disease, lung cavity and its range, course of TB, history of TB treatment, and comorbidity. Only patients with complete information were included in the investigation. The prevalence of HIV infection is very low in China, and all cases had negative results on serological tests for HIV. No patient had a history of immune disease or prior immunosuppressant use, and none had extrapulmonary TB. 
Patients were divided into the following three groups: Group 1, sputum culture-positive PTB patients, confirmed by positive MTB sputum culture; Group 2, sputum culture-negative PTB patients diagnosed on the basis of typical clinical symptoms, typical features on radiographs, and an appropriate response to anti-TB treatment (sputum smear-positive and smear-negative cases were included); and Group 3, non-TB respiratory diseases, including pneumonia, lung cancer, pulmonary interstitial fibrosis, chronic obstructive pulmonary disease, and bronchiectasis. In China, PTB has generally been diagnosed by traditional methods that rely on clinical symptoms together with the results of bacteriology methods (including sputum smear microscopy and bacterial culture) and X-ray examination (diagnostic criteria for TB WS-288-2008).", "The T-SPOT.TB test is an in vitro diagnostic test that detects the effector T-cells in human whole blood by capturing IFN-γ in the vicinity of the T-cells that are responding to stimulation with MTB-specific antigens such as 6 kDa early secretory antigenic target (ESAT-6) and 10 kDa culture filtrate protein (CFP-10). The T-SPOT.TB test (Oxford Immunotec Ltd., UK) was performed using peripheral blood mononuclear cells (PBMCs) separated from heparinized blood samples according to the manufacturer's instructions. Briefly, PBMCs were isolated and incubated with two antigens (ESAT-6 and CFP-10). The procedure was performed in plates precoated with anti-IFN-γ antibodies at 37°C for 16–20 h. After application of alkaline phosphatase-conjugated secondary antibody and chromogenic substrate, the number of spot-forming cells (SFCs) in each well was automatically counted with a CTL ELISPOT system. The results were interpreted as recommended by the test kit manufacturer.[12]", "Frequencies and percentages were used to describe the categorical data, and continuous variables were presented as the mean or median. 
Chi-square tests and Fisher's exact test were used to compare categorical data. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and analytic accuracy (Acc) were calculated for each TB patient group. The analysis was performed on data from patients with culture-positive or culture-negative active TB and cases with nonmycobacterial lung diseases. Confidence intervals (95% CIs) were estimated according to the binomial distribution. Multivariable logistic analyses were performed to evaluate the relationship between the results of T-SPOT in TB patients and other factors. Odds ratios (ORs) and 95% CIs for risk were calculated. P < 0.05 was considered statistically significant. In line with a Monte Carlo simulation study, logistic regression models were fitted with a minimum of 10 events per predictor variable.[13] All statistical analyses were performed with SPSS software (version 13.0, SPSS Inc., Chicago, USA).", " Demographic characteristics of the three groups A total of 3082 patients for whom there was complete information were included in the investigation. The sample population included 905 sputum culture-positive PTB cases, 914 sputum culture-negative PTB cases, and 1263 non-TB respiratory disease cases. Demographic characteristics of the three groups are presented in Table 1.\nDemographic characteristics of the three groups\n*The BMI is the weight in kilograms divided by the square of the height in meters. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.\n Clinical characteristics of the tuberculosis patients Clinical characteristics of the TB patients are summarized in Table 2. The rates of cough, productive cough, fever, sweat, and weight-loss in the culture-positive PTB group were higher than those in the culture-negative PTB group (P < 0.05). The ranges of TB disease and cavity were more extensive in the culture-positive PTB group than in the culture-negative PTB group (P < 0.01). The frequency of diabetes in the culture-positive PTB group was higher than that in the culture-negative PTB group (P < 0.01).\nClinical characteristics of the TB patients\n*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. †Respiratory failure was defined as PaO2 lower than 60 mmHg on arterial blood gas analysis. TB: Tuberculosis; AFB: Acid-fast bacilli; COPD: Chronic obstructive pulmonary disease. –: Not applicable.\n Diagnostic performance of T-SPOT.TB The positive rates of T-SPOT.TB were 93.3% (95% CI: 91.7–94.9%), 86.1% (95% CI: 83.7–88.5%), and 43.6% (95% CI: 40.9–46.3%) in the culture-positive PTB group, culture-negative PTB group, and non-TB group, respectively. The positive rate of T-SPOT.TB in the culture-positive PTB group was higher than that in the culture-negative PTB group or in the non-TB group (P < 0.01). The positive rate of T-SPOT.TB in the culture-negative PTB group was higher than that in the non-TB group (P < 0.01). The overall sensitivity, specificity, PPV, NPV, and Acc were 89.7% (95% CI: 88.2–91.0%), 56.4% (95% CI: 53.6–59.1%), 74.8% (95% CI: 72.9–76.6%), 79.1% (95% CI: 76.3–81.7%), and 76.0% (95% CI: 74.5–77.5%), respectively.\n Risk factors associated with positive T-SPOT.TB results Risk factors associated with positive T-SPOT.TB results are presented in Table 3. The sensitivity in older patients (≥65 years) was lower than that in younger patients (P < 0.01). The sensitivity in patients with a BMI <18.5 kg/m2 was higher than that in patients with a BMI ≥18.5 kg/m2 (P = 0.02). The sensitivity in smear-positive patients was higher than that in smear-negative patients (P < 0.01). The sensitivity in patients with comorbidity was higher than that in patients without comorbidity (P = 0.017). The sensitivity in patients with a record of contact with TB was higher than that in patients without TB contact (P = 0.049). The sensitivity of T-SPOT.TB in patients with a record of BCG vaccination was higher than that in patients without such a record (P = 0.040).\nRisk factors associated with positive T-SPOT.TB results, n = 1819\nBMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.\n Association between T-SPOT.TB result and other factors in tuberculosis patients In multivariate logistic regression analysis, the factors affecting T-SPOT.TB results in TB patients included gender (OR, 0.714; 95% CI, 0.515–0.989; P = 0.043), age (OR, 0.691; 95% CI, 0.556–0.859, P < 0.01), BMI (OR, 0.942; 95% CI, 0.900–0.987, P = 0.012), sputum culture (OR, 1.929; 95% CI, 1.271–2.927, P = 0.002), and contact with TB (OR, 2.635; 95% CI, 1.037–6.695, P = 0.042) [Table 4]. Female gender, age >65 years, BMI ≥18.5 kg/m2, negative sputum culture, and no TB contact were independent factors associated with negative test results.\nMultivariable logistic regression analysis of the association of T-SPOT.TB and clinical characteristics\n*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. Factors with P < 0.2 in the univariate analysis were entered into the multivariate logistic regression. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin; COPD: Chronic obstructive pulmonary disease.", "A total of 3082 patients for whom there was complete information were included in the investigation. The sample population included 905 sputum culture-positive PTB cases, 914 sputum culture-negative PTB cases, and 1263 non-TB respiratory disease cases. 
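The multivariable model follows the events-per-variable guidance cited in the Methods (reference [13]: at least 10 events per predictor). A small illustrative check, using the 188 negative T-SPOT.TB results among the 1819 TB patients (the rarer outcome, reconstructed from the reported sensitivity rather than taken from the authors' data):

```python
def max_predictors(events, epv=10):
    """Upper bound on predictors under the events-per-variable rule of thumb."""
    return events // epv

# Rarer outcome: negative T-SPOT.TB results among TB patients, 1819 - 1631 = 188
print(max_predictors(188))  # -> 18
```

Eighteen predictors would be supportable under this rule, comfortably above the five factors reported in the multivariable model.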
Demographic characteristics of the three groups are presented in Table 1.\nDemographic characteristics of the three groups\n*The BMI is the weight in kilograms divided by the square of the height in meters. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.", "Clinical characteristics of the TB patients are summarized in Table 2. The rates of cough, productive cough, fever, sweat, and weight-loss in the culture-positive PTB group were higher than those in the culture-negative PTB group (P < 0.05). The ranges of TB disease and cavity were more extensive in the culture-positive PTB group than in the culture-negative PTB group (P < 0.01). The frequency of diabetes in the culture-positive PTB group was higher than that in the culture-negative PTB group (P < 0.01).\nClinical characteristics of the TB patients\n*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. †Respiratory failure was defined as PaO2 lower than 60 mmHg on arterial blood gas analysis. TB: Tuberculosis; AFB: Acid-fast bacilli; COPD: Chronic obstructive pulmonary disease. –: Not applicable.", "The positive rates of T-SPOT.TB were 93.3% (95% CI: 91.7–94.9%), 86.1% (95% CI: 83.7–88.5%), and 43.6% (95% CI: 40.9–46.3%) in the culture-positive PTB group, culture-negative PTB group, and non-TB group, respectively. The positive rate of T-SPOT.TB in the culture-positive PTB group was higher than that in the culture-negative PTB group or in the non-TB group (P < 0.01). The positive rate of T-SPOT.TB in the culture-negative PTB group was higher than that in the non-TB group (P < 0.01). The overall sensitivity, specificity, PPV, NPV, and Acc were 89.7% (95% CI: 88.2–91.0%), 56.4% (95% CI: 53.6–59.1%), 74.8% (95% CI: 72.9–76.6%), 79.1% (95% CI: 76.3–81.7%), and 76.0% (95% CI: 74.5–77.5%), respectively.", "Risk factors associated with positive T-SPOT.TB results are presented in Table 3. The sensitivity in older patients (≥65 years) was lower than that in younger patients (P < 0.01). 
The sensitivity in patients with a BMI <18.5 kg/m2 was higher than that in patients with a BMI ≥18.5 kg/m2 (P = 0.02). The sensitivity in smear-positive patients was higher than that in smear-negative patients (P < 0.01). The sensitivity in patients with comorbidity was higher than that in patients without comorbidity (P = 0.017). The sensitivity in patients with a record of contact with TB was higher than that in patients without TB contact (P = 0.049). The sensitivity of T-SPOT.TB in patients with a record of BCG vaccination was higher than that in patients without such a record (P = 0.040).\nRisk factors associated with positive T-SPOT.TB results, n = 1819\nBMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.", "In multivariate logistic regression analysis, the factors affecting T-SPOT.TB results in TB patients included gender (OR, 0.714; 95% CI, 0.515–0.989; P = 0.043), age (OR, 0.691; 95% CI, 0.556–0.859, P < 0.01), BMI (OR, 0.942; 95% CI, 0.900–0.987, P = 0.012), sputum culture (OR, 1.929; 95% CI, 1.271–2.927, P = 0.002), and contact with TB (OR, 2.635; 95% CI, 1.037–6.695, P = 0.042) [Table 4]. Female gender, age >65 years, BMI ≥18.5 kg/m2, negative sputum culture, and no TB contact were independent factors associated with negative test results.\nMultivariable logistic regression analysis of the association of T-SPOT.TB and clinical characteristics\n*Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. Factors with P < 0.2 in the univariate analysis were entered into the multivariate logistic regression. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin; COPD: Chronic obstructive pulmonary disease.", "IGRAs were developed for the indirect or immunologic diagnosis of MTB infection. 
With their relatively high sensitivity and specificity, IGRAs have been widely used to diagnose the infection under national guidelines in many developed countries, such as the USA, UK, and Japan.[14] In most developing countries, including China, the clinical utilization of IGRAs for diagnosing active TB is not recommended, due to insufficient evidence of their performance in high TB-burden settings. Nevertheless, many private health-care providers in high-burden countries are using IGRAs for the diagnosis of active TB,[14] and many investigators continue to recommend their use for active TB.[15,16,17] Thus, there is a growing concern about the inappropriate use of IGRAs for the diagnosis of active TB in high-burden settings, particularly when used to “rule-in” disease.[18] The aim of this study, performed in a country with a high prevalence of TB, was to determine the performance of the T-SPOT.TB test for diagnosing TB in routine clinical practice.

To our knowledge, this multicenter study, which included 1819 TB patients and 1263 non-TB patients, is the largest to date evaluating the performance of the T-SPOT.TB test for diagnosing TB in high-burden settings. The diagnosis of TB is problematic for the clinician, as only 50% of patients with active disease have microbiologically confirmed TB disease. A negative IGRA may be a convenient “rule-out” test for TB if the diagnostic sensitivity of the assay is sufficiently high, for example, nearly 95%.[19] In our study, the sensitivity (89.66%) and NPV (79.11%) of the T-SPOT.TB test suggest that T-SPOT.TB does not have good rule-out value for active TB in high TB-burden settings such as China. Furthermore, the low specificity (56.37%) and PPV (74.75%) limit its usefulness to rule in active TB in these settings, where the prevalence of latent tuberculosis infection (LTBI) is considerable. In China, IGRA is currently being used for diagnosis of active TB and for differentiating between TB and other diseases.
However, the positive rate of T-SPOT.TB found here was 43.6% in the non-TB group. Such a high false-positive rate makes it necessary to reconsider the value and scope of T-SPOT.TB in clinical practice.

The overall sensitivity of the T-SPOT.TB test in our study was 89.66% in all the PTB patients, which increased to 93.26% in culture-positive TB patients. Similar sensitivities for the diagnosis of active TB were demonstrated in studies performed in China by Zhang et al. (94.7%),[20] Feng et al. (94.7%),[7] and Liu et al. (93.2%),[21] and in one study from India (90.6%).[9] From the ten reported studies evaluating T-SPOT.TB in China, the combined sensitivity was 88%.[22] However, the sensitivity in our study was higher than the range reported in earlier studies from other highly endemic countries, for example, 76% in South Africa,[23] 78.7% in Gambia,[24] and 74% in Zambia.[25] Previous investigations identified several factors that may affect the sensitivity of IGRA. HIV is one such factor,[26] and most of the studies conducted in other high-burden countries included HIV-positive patients.[24,25] Exclusion of HIV patients in our study may be one of the reasons for obtaining higher sensitivity. Furthermore, the sensitivity of IGRA reportedly decreased significantly with the age of patients[5] and gradually decreased with the treatment duration.[6] In our study, only 15.02% (245/1631) of patients were older TB patients (≥65 years) and 20.72% (338/1631) of patients were retreated ones, which may be another reason for obtaining higher sensitivity.

As T-SPOT.TB tests are unable to distinguish between LTBI and active TB, the specificity of the T-SPOT.TB test depends on the prevalence of LTBI.
The specificity of the T-SPOT.TB test in diagnosing active TB is known to be high (≥93%) in low TB incidence settings,[27] and as low as 61% in low- and middle-income countries.[28] The poor specificity (56.44%) of the T-SPOT.TB test obtained in this study was expected, and several factors should be considered. First, the high prevalence of LTBI, as high as 44.5% in China,[29] inevitably decreases the specificity of the T-SPOT.TB test. Second, this study was designed to evaluate the diagnostic validity of the T-SPOT.TB test in routine clinical practice and thus focused on unselected patients with suspected active TB. In this setting, the diagnostic validity tends to be lower than in studies in which healthy people are enrolled as negative controls.

IGRAs are designed to detect MTB infections, whether latent or active. However, in our study, 6.74% (61/905) of the persons with culture-confirmed TB had a negative T-SPOT.TB result, comparable to the 8.7% (46/528) described by Pan et al.[30] and the 14.4% (182/1264) described by Kwon et al.[31] There are a multitude of factors that may modulate the sensitivity of IGRA, including HIV coinfection, immune suppression, young or advanced age, advanced disease, malnutrition, extrapulmonary TB, disseminated TB, concomitant TB treatment, bacterial strain differences, and smoking.[30,32] In our multivariate logistic regression analysis, gender, age, BMI, sputum culture, and contact with TB were factors affecting the false-negative results of the T-SPOT.TB test. In addition, this study showed that the sensitivity of T-SPOT.TB for the diagnosis of TB in the culture/smear-positive group was significantly higher than that in the culture/smear-negative group, which implied that the T-SPOT.TB result may be affected by the bacterial load. There were several limitations to our study. First, T-SPOT.TB results were analyzed in retrospect and quantitative test results were not acquired.
Second, this study was performed in an area where the prevalence of LTBI is considerable. Third, this study did not include extrapulmonary TB or immunodeficient patients. Fourth, external validity is a concern, because this was a hospital-based study and may have overestimated the sensitivity of IGRA. Fifth, the retrospective study design limited us to describing the association between IGRA and affecting factors. Sixth, sputum culture-negative PTB patients were included according to clinical diagnosis, which may influence the diagnostic Acc. Additional longitudinal studies are needed to prove any causality in the associations found.

Despite these limitations, to our knowledge, this might be the largest multicenter investigation to evaluate the role of T-SPOT.TB in the diagnosis of PTB in China. Our study demonstrated that gender, age, BMI, sputum culture, and contact with TB are factors affecting the false-negative results of the T-SPOT.TB test. Inadequate sensitivity and high false-positive rates of this test limit its usefulness as a single test to rule in or rule out active TB in China. We highly recommend that IGRAs not be used for the diagnosis of active TB in high-burden TB settings.

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.
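As a numerical cross-check of the Results above, the overall diagnostic indices follow directly from the group sizes and positive rates reported in the paper. The 2×2 counts below are reconstructed from those rates (844/905 culture-positive and 787/914 culture-negative TB patients tested positive, i.e. 1631/1819 overall; 551/1263 non-TB patients tested positive); the Wilson score interval is used here as an approximation, whereas the paper estimated exact binomial CIs.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (an approximation;
    the paper used exact binomial intervals)."""
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Counts reconstructed from the reported group sizes and positive rates:
tp, fn = 1631, 188  # TB patients: T-SPOT.TB positive / negative
fp, tn = 551, 712   # non-TB patients: T-SPOT.TB positive / negative

sensitivity = tp / (tp + fn)                 # ~0.897
specificity = tn / (tn + fp)                 # ~0.564
ppv = tp / (tp + fp)                         # ~0.747
npv = tn / (tn + fn)                         # ~0.791
accuracy = (tp + tn) / (tp + fn + fp + tn)   # ~0.760

lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sensitivity:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

With these counts, the Wilson interval for sensitivity reproduces the reported 88.2–91.0% to within rounding, and the point estimates match the published sensitivity, specificity, PPV, NPV, and Acc.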
Keywords: Active Tuberculosis; Diagnosis; Interferon-Gamma Release Assay
INTRODUCTION: Tuberculosis (TB) is a major health problem throughout the world, especially in developing countries. According to the World Health Organization (WHO), worldwide, 10.4 million people are estimated to have fallen ill with TB in 2015, and TB killed 1.8 million people (1.4 million human immunodeficiency virus [HIV]-negative and 0.4 million HIV-positive people).[1] TB is also a serious problem in China. The WHO has estimated an incidence of 66/100,000 TB cases in China in 2015.[1] The rapid detection of mycobacteria and successful treatment of infectious patients are important for controlling and preventing TB. Chest X-rays are often used in pulmonary TB (PTB) screening and have the advantages of being simple and inexpensive. However, in smear-negative PTB, many cases show atypical or nonspecific patterns and are difficult to differentiate from other pulmonary diseases.[2] Smear microscopy has low sensitivity. Culture of mycobacteria requires several weeks to obtain results and has low sensitivity. A recent breakthrough in the diagnosis of TB and latent TB infection is the introduction of interferon-gamma release assays (IGRAs), in which the production of interferon-gamma (IFN-γ) in response to Mycobacterium tuberculosis (MTB)-specific antigens is measured. There are currently two types of commercial IGRAs available: the QuantiFERON-TB Gold In-Tube test and the T-SPOT.TB blood test. The sensitivities of the IGRAs were reported to be high in detecting active TB patients in low TB-endemic countries in previous studies.[3,4] However, a recent study conducted in Poland did not show that a combination of IGRA and TST might be a step forward in the diagnosis of culture-negative TB cases.[5] The results from high TB-endemic countries were also different.
Some studies indicated that the T-SPOT.TB assay is a promising diagnostic test for active PTB,[6,7,8] but other studies showed that IGRA was insufficient for the diagnosis of PTB.[9,10] In this study, we conducted a large retrospective multicenter investigation in China to further evaluate the use of IGRA in the diagnosis of active PTB in high TB-epidemic populations and the factors affecting the performance of the assay.

METHODS:

Ethics approval: This was an observational retrospective study and the three diagnostic tests were already used in clinical practice. Given that the medical information of patients was recorded anonymously by case history, which would not bring any risk to the participants, the Ethics Committee of Beijing Chest Hospital, Capital Medical University approved this retrospective study, with a waiver of informed consent from the patients.

Study subjects: China is a high TB-burden country. In 2012, the incidence of TB in the provinces where the study hospitals are situated was between 40/100,000 and 181/100,000.[11] All patients who underwent valid T-SPOT.TB assays from December 2012 to November 2015 in six large-scale specialized TB hospitals in China and met the study criteria were retrospectively evaluated. The six hospitals, with a combined capacity of 3000 beds, are situated in the south, north, east, and center of China. At each study hospital, trained health workers extracted data from the computer database of medical records of inpatients. Records were collected in terms of age, gender, contact with TB, vaccination with Bacillus Calmette–Guérin (BCG), albumin, body mass index (BMI), smoking and alcohol intake, presenting complaints, sputum smear and culture, range of TB disease, lung cavity and its range, course of TB, history of TB treatment, comorbidity, etc. Only patients with complete information were included in the investigation. The prevalence of HIV infection is very low in China, and all cases had negative results on serological tests for HIV. No patient had a history of immune disease or prior immunosuppressant use, and none had extrapulmonary TB. Patients were divided into the following three groups: Group 1, sputum culture-positive PTB patients, confirmed by positive MTB sputum culture; Group 2, sputum culture-negative PTB patients, diagnosed on the basis of typical clinical symptoms, typical features on radiographs, and proper responses to anti-TB treatment (sputum smear-positive and smear-negative cases were included); and Group 3, non-TB respiratory diseases, including pneumonia, lung cancer, pulmonary interstitial fibrosis, chronic obstructive pulmonary disease, and bronchiectasis. In China, PTB has generally been diagnosed by traditional methods that rely on clinical symptoms together with the results of bacteriology methods (including sputum smear microscopy and bacterial culture) and X-ray examination (diagnostic criteria for TB WS-288-2008).

T-SPOT.TB assay: The T-SPOT.TB test is an in vitro diagnostic test that detects effector T-cells in human whole blood by capturing IFN-γ in the vicinity of T-cells responding to stimulation with MTB-specific antigens such as the 6 kDa early secretory antigenic target (ESAT-6) and the 10 kDa culture filtrate protein (CFP-10). The T-SPOT.TB test (Oxford Immunotec Ltd., UK) was performed using peripheral blood mononuclear cells (PBMCs) separated from heparinized blood samples according to the manufacturer's instructions. Briefly, PBMCs were isolated and incubated with the two antigens (ESAT-6 and CFP-10). The procedure was performed in plates precoated with anti-IFN-γ antibodies at 37°C for 16–20 h. After application of an alkaline phosphatase-conjugated secondary antibody and chromogenic substrate, the number of spot-forming cells (per million PBMCs) in each well was automatically counted with a CTL ELISPOT system. The results were interpreted as recommended by the test kit manufacturer.[12]

Statistical analysis: Frequencies and percentages were used to describe the categorical data, and continuous variables were presented as the mean or median.
Chi-square tests and Fisher's exact test were used to compare categorical data. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and analytic accuracy (Acc) were calculated for each TB patient group. The analysis was performed on data from patients with culture-positive or culture-negative active TB and cases with nonmycobacterial lung diseases. Confidence intervals (95% CIs) were estimated according to the binomial distribution. Multivariable logistic analyses were performed to evaluate the relationship between the results of T-SPOT in TB patients and other factors. Odds ratios (ORs) and 95% CIs for risk were calculated. P < 0.05 was considered statistically significant. Logistic regression models should be used with a minimum of 10 events per predictor variable, per a Monte Carlo study.[13] All statistical analyses were performed with SPSS software (version 13.0, SPSS Inc., Chicago, USA).

RESULTS:

Demographic characteristics of the three groups: A total of 3082 patients for whom there was complete information were included in the investigation. The sample population included 905 sputum culture-positive PTB cases, 914 sputum culture-negative PTB cases, and 1263 non-TB respiratory disease cases. Demographic characteristics of the three groups are presented in Table 1. *The BMI is the weight in kilograms divided by the square of the height in meters.
BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.

Clinical characteristics of the tuberculosis patients: Clinical characteristics of the TB patients are summarized in Table 2. The rates of cough, productive cough, fever, sweat, and weight loss in the culture-positive PTB group were higher than those in the culture-negative PTB group (P < 0.05). The ranges of TB disease and cavity were more extensive in the culture-positive PTB group than in the culture-negative PTB group (P < 0.01). The frequency of diabetes in the culture-positive PTB group was higher than that in the culture-negative PTB group (P < 0.01). *Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. †Respiratory failure was defined as PaO2 lower than 60 mmHg by arterial blood gas analysis. TB: Tuberculosis; AFB: Acid-fast bacilli; COPD: Chronic obstructive pulmonary disease. –: Not applicable.

Diagnostic performance of T-SPOT.TB: The positive rates of T-SPOT.TB were 93.3% (95% CI: 91.7–94.9%), 86.1% (95% CI: 83.7–88.5%), and 43.6% (95% CI: 40.9–46.3%) in the culture-positive PTB group, culture-negative PTB group, and non-TB group, respectively. The positive rate of T-SPOT.TB in the culture-positive PTB group was higher than that in the culture-negative PTB group or in the non-TB group (P < 0.01). The positive rate of T-SPOT.TB in the culture-negative PTB group was higher than that in the non-TB group (P < 0.01). The overall sensitivity, specificity, PPV, NPV, and Acc were 89.7% (95% CI: 88.2–91.0%), 56.4% (95% CI: 53.6–59.1%), 74.8% (95% CI: 72.9–76.6%), 79.1% (95% CI: 76.3–81.7%), and 76.0% (95% CI: 74.5–77.5%), respectively.

Risk factors associated with positive T-SPOT.TB results: Risk factors associated with positive T-SPOT.TB results are presented in Table 3. The sensitivity in the older patients was lower than that in nonolder patients (P < 0.01). The sensitivity in patients with a BMI <18.5 kg/m2 was higher than that in patients with a BMI ≥18.5 kg/m2 (P = 0.02). The sensitivity in the smear-positive patients was higher than that in the smear-negative patients (P < 0.01). The sensitivity in the patients with comorbidity was higher than that in patients without comorbidity (P = 0.017).
The sensitivity in the patients with a record of contact with TB was higher than that in patients without TB contact (P = 0.049). The sensitivity of T-SPOT.TB in the patients with a record of vaccination with BCG was higher than that in patients without BCG vaccination (P = 0.040). Risk factors associated with positive T-SPOT.TB results, n = 1819. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin.

Association between T-SPOT.TB result and other factors in tuberculosis patients: In multivariate logistic regression analysis, the factors affecting T-SPOT.TB results in TB patients included gender (OR, 0.714; 95% CI, 0.515–0.989; P = 0.043), age (OR, 0.691; 95% CI, 0.556–0.859; P < 0.01), BMI (OR, 0.942; 95% CI, 0.900–0.987; P = 0.012), sputum culture (OR, 1.929; 95% CI, 1.271–2.927; P = 0.002), and contact with TB (OR, 2.635; 95% CI, 1.037–6.695; P = 0.042) [Table 4]. Female gender, age >65 years, a BMI ≥18.5 kg/m2, negative sputum culture, and no contact with TB were demonstrated to be independent factors associated with negative test results. Multivariable logistic regression analysis of the association of T-SPOT.TB and clinical characteristics. *Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. Factors with P < 0.2 in the univariate analysis were entered into the multivariate logistic regression. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin; COPD: Chronic obstructive pulmonary disease.
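The multivariable model above was fitted in SPSS and only odds ratios are reported. As an illustration of the procedure itself (fit a logistic model, exponentiate coefficients to obtain ORs, and form Wald 95% CIs), here is a minimal sketch on synthetic data. The covariate names and effect sizes are hypothetical; only their directions are borrowed from Table 4 (culture positivity raises, older age lowers, the odds of a positive T-SPOT.TB result).

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson logistic regression; returns coefficients and standard errors."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))        # predicted probabilities
        W = p * (1 - p)                        # IRLS weights
        H = X.T * W @ X                        # Fisher information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))    # Wald standard errors
    return beta, se

rng = np.random.default_rng(0)
n = 2000
age_over_65 = rng.binomial(1, 0.15, n)   # hypothetical binary covariates
culture_pos = rng.binomial(1, 0.50, n)
# Synthetic outcome with effect directions taken from Table 4:
logit = 1.0 + 0.66 * culture_pos - 0.37 * age_over_65
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

beta, se = fit_logistic(np.column_stack([culture_pos, age_over_65]), y)
for name, b, s in zip(["culture_pos", "age_over_65"], beta[1:], se[1:]):
    lo, hi = np.exp(b - 1.96 * s), np.exp(b + 1.96 * s)
    print(f"{name}: OR={np.exp(b):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR above 1 (culture positivity) marks a factor favoring a positive test; an OR below 1 (older age) marks an independent risk factor for a false-negative result, which is how Table 4 is read in the text.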
BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin. Clinical characteristics of the tuberculosis patients: Clinical characteristics of the TB patients are summarized in Table 2. The rates of cough, productive cough, fever, sweat, and weight-loss in the culture-positive PTB group were higher than those in the culture-negative PTB group (P < 0.05). The ranges of TB disease and cavity were more extensive in the culture-positive PTB group than that in the culture-negative PTB group (P < 0.01). The frequency of diabetes in the culture-positive PTB group was higher than that in the culture-negative PTB group (P < 0.01). Clinical characteristics of the TB patients *Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. †Respiratory failure was defined as PaO2 lower than 60 by arterial blood gas analysis. TB: Tuberculosis; AFB: Acid-fast bacilli; COPD: Chronic obstructive pulmonary disease. –: Not applicable. Diagnostic performance of T-SPOT.TB: The positive rates of TSPOT.TB were 93.3% (95% CI: 91.7–94.9%), 86.1% (95% CI: 83.7–88.5%), 43.6% (95% CI: 40.9–46.3%) in the culture-positive PTB group, culture-negative PTB group, and non-TB group, respectively. The positive rate of T-SPOT.TB in the culture-positive PTB group was higher than that in the culture-negative PTB group or in the non-TB group (P < 0.01). The positive rate of T-SPOT.TB in the culture-negative PTB group was higher than that in the non-TB group (P < 0.01). The overall sensitivity, specificity, PPV, NPV, and Acc were 89.7% (95% CI: 88.2–91.0%), 56.4% (53.6–59.1%), 74.8% (95% CI: 72.9–76.6%), 79.1% (95% CI: 76.3–81.7%), and 76.0% (74.5–77.5%), respectively. Risk factors associated with positive T-SPOT.TB results: Risk factors associated with positive T-SPOT.TB results are presented in Table 3. The sensitivity in the older patients was lower than that in nonolder patients (P < 0.01). 
The sensitivity in patients with a BMI <18.5 kg/m2 was higher than that in patients with a BMI ≥18.5 kg/m2 (P = 0.02). The sensitivity in smear-positive patients was higher than that in smear-negative patients (P < 0.01). The sensitivity in patients with comorbidity was higher than that in patients without comorbidity (P = 0.017). The sensitivity in patients with a record of contact with TB was higher than that in patients without TB contact (P = 0.049). The sensitivity of T-SPOT.TB in patients with a record of vaccination with BCG was higher than that in patients without BCG vaccination (P = 0.040). Risk factors associated with positive T-SPOT.TB results, n = 1819 BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin. Association between T-SPOT.TB result and other factors in tuberculosis patients: In multivariate logistic regression analysis, the factors affecting T-SPOT.TB results in TB patients included gender (OR, 0.714; 95% CI, 0.515–0.989; P = 0.043), age (OR, 0.691; 95% CI, 0.556–0.859; P < 0.01), BMI (OR, 0.942; 95% CI, 0.900–0.987; P = 0.012), sputum culture (OR, 1.929; 95% CI, 1.271–2.927; P = 0.002), and contact with TB (OR, 2.635; 95% CI, 1.037–6.695; P = 0.042) [Table 4]. Female sex, age >65 years, BMI ≥18.5 kg/m2, negative sputum culture, and no TB contact were independent factors associated with negative test results. Multivariable logistic regression analysis of the association of T-SPOT.TB and clinical characteristics *Determined by Fisher's exact test. Not indicated: Determined by Pearson's Chi-square test. If P < 0.2 in the univariate analysis, the factor was entered into the multivariate logistic regression. BMI: Body mass index; TB: Tuberculosis; BCG: Bacillus Calmette–Guérin; COPD: Chronic obstructive pulmonary disease. DISCUSSION: IGRAs were developed for the indirect or immunologic diagnosis of MTB infection. 
With their relatively high sensitivity and specificity, IGRAs have been widely used to diagnose the infection under national guidelines in many developed countries, such as the USA, UK, and Japan.[14] In most developing countries, including China, the clinical utilization of IGRAs for diagnosing active TB is not recommended, due to insufficient evidence of their performance in high TB-burden settings. Nevertheless, many private health-care providers in high-burden countries are using IGRAs for the diagnosis of active TB,[14] and many investigators continue to recommend their use for active TB.[15–17] Thus, there is a growing concern about the inappropriate use of IGRAs for the diagnosis of active TB in high-burden settings, particularly when used to “rule-in” disease.[18] The aim of this study, performed in a country with a high prevalence of TB, was to determine the performance of the T-SPOT.TB test for diagnosing TB in routine clinical practice. To our knowledge, this multicenter study that included 1819 TB patients and 1263 non-TB patients is the largest to date evaluating the performance of the T-SPOT.TB test for diagnosing TB in high-burden settings. The diagnosis of TB is problematic for the clinician as only 50% of patients with active disease have microbiologically confirmed TB disease. A negative IGRA may be a convenient “rule-out” test for TB if the diagnostic sensitivity of the assay is sufficiently high, for example, nearly 95%.[19] In our study, the sensitivity (89.66%) and NPV (79.11%) of the T-SPOT.TB test suggest that T-SPOT.TB does not have good rule-out value for active TB in high TB-burden settings, like China. Furthermore, the low specificity (56.37%) and PPV (74.75%) limit its usefulness to rule-in active TB in these settings, where the prevalence of latent tuberculosis infection (LTBI) is considerable. In China, IGRA is currently being used for diagnosis of active TB and for differentiating between TB and other diseases. 
However, the positive rate of T-SPOT.TB observed here was 43.6% in the non-TB group. Such a high false-positive rate makes it necessary to reconsider the value and scope of T-SPOT.TB in clinical practice. The overall sensitivity of the T-SPOT.TB test in our study was 89.66% in all the PTB patients, which increased to 93.26% in culture-positive TB patients. Similar sensitivities for the diagnosis of active TB were demonstrated in the studies performed in China by Zhang et al. (94.7%),[20] Feng et al. (94.7%),[7] and Liu et al. (93.2%),[21] and in one study from India (90.6%).[9] From the ten reported studies evaluating T-SPOT.TB in China, the combined sensitivity was 88%.[22] However, the sensitivity in our study was higher than the range reported in earlier studies from other highly endemic countries, for example, 76% in South Africa,[23] 78.7% in Gambia,[24] and 74% in Zambia.[25] Previous investigations identified several factors that may affect the sensitivity of IGRA. HIV is one of these factors,[26] and in most of the studies conducted in other high-burden countries, HIV-positive patients were included.[24, 25] Exclusion of HIV patients in our study may be one of the reasons for obtaining higher sensitivity. Furthermore, the sensitivity of IGRA reportedly decreased significantly with the age of patients[5] and gradually decreased with the treatment duration.[6] In our study, only 15.02% (245/1631) of patients were older TB patients (≥65 years) and 20.72% (338/1631) were retreated patients, which may be another reason for obtaining higher sensitivity. As T-SPOT.TB tests are unable to distinguish between LTBI and active TB, the specificity of the T-SPOT.TB test depends on the prevalence of LTBI. 
The specificity of the T-SPOT.TB test in diagnosing active TB is known to be high (≥93%) in low TB incidence settings,[27] and as low as 61% in low- and middle-income countries.[28] The poor specificity (56.37%) of the T-SPOT.TB test obtained in this study was expected, and several factors should be considered. First, the high prevalence of LTBI, as high as 44.5% in China,[29] inevitably decreases the specificity of the T-SPOT.TB test. Second, this study was designed to evaluate the diagnostic validity of the T-SPOT.TB test in routine clinical practice and thus focused on unselected patients with suspected active TB. In this setting, the diagnostic validity tends to be lower than in studies in which healthy people are enrolled as negative controls. IGRAs are designed to detect MTB infections, whether latent or active. However, in our study, 6.74% (61/905) of the persons with culture-confirmed TB had a negative T-SPOT.TB result, comparable to 8.7% (46/528) described by Pan et al.[30] and 14.4% (182/1264) described by Kwon et al.[31] There are a multitude of factors that may modulate the sensitivity of IGRA, including HIV coinfection, immune suppression, young or advanced age, advanced disease, malnutrition, extrapulmonary TB, disseminated TB, concomitant TB treatment, bacterial strain differences, and smoking.[30, 32] In our multivariate logistic regression analysis, gender, age, BMI, sputum culture, and contact with TB were factors affecting the false-negative results of the T-SPOT.TB test. In addition, this study showed that the sensitivity of T-SPOT.TB for the diagnosis of TB in the culture/smear-positive group was significantly higher than that in the culture/smear-negative group, which implies that the T-SPOT.TB result may be affected by the bacterial load. There were several limitations to our study. First, T-SPOT.TB results were analyzed retrospectively and quantitative test results were not acquired. 
Second, this study was performed in an area where the prevalence of LTBI is considerable. Third, this study did not include extrapulmonary TB or immunodeficient patients. Fourth, external validity is a concern, because this was a hospital-based study and may have overestimated the sensitivity of IGRA. Fifth, the retrospective study design limited us to describing the association between IGRA and affecting factors. Sixth, sputum culture-negative PTB patients were included according to clinical diagnosis, which may influence the diagnostic Acc. Additional longitudinal studies are needed to prove any causality in the associations found. Despite these limitations, to our knowledge, this is the largest multicenter investigation to evaluate the role of T-SPOT.TB in the diagnosis of PTB in China. Our study demonstrated that gender, age, BMI, sputum culture, and contact with TB are factors affecting the false-negative results of the T-SPOT.TB test. Inadequate sensitivity and high false-positive rates of this test limit its usefulness as a single test to rule-in or rule-out active TB in China. We highly recommend that IGRAs not be used for the diagnosis of active TB in high-burden TB settings. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
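As a side note on the multivariate regression output reported above: an odds ratio and its 95% confidence interval should be symmetric on the log scale, which allows a quick internal-consistency check of reported estimates. The sketch below is illustrative only (it is not part of the original analysis) and uses the reported "contact with TB" estimate (OR 2.635; 95% CI 1.037–6.695).

```python
import math

# An OR point estimate should sit at the geometric midpoint of its 95% CI,
# and the log-scale standard error can be recovered as
# (ln(upper) - ln(lower)) / (2 * 1.96).
def log_scale_check(or_est, lo, hi, z=1.96):
    beta = math.log(or_est)                     # log-odds coefficient
    midpoint = (math.log(lo) + math.log(hi)) / 2  # geometric midpoint of CI
    se = (math.log(hi) - math.log(lo)) / (2 * z)  # recovered standard error
    return beta, midpoint, se

beta, mid, se = log_scale_check(2.635, 1.037, 6.695)
print(round(beta, 3), round(mid, 3), round(se, 3))  # 0.969 0.969 0.476
```

Here the point estimate and the geometric midpoint of the CI agree to three decimals, as expected for a Wald-type interval.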
Background: Interferon-gamma release assay (IGRA) has been used in latent tuberculosis (TB) infection and TB diagnosis, but results differ among high TB-endemic countries. The aim of this study was to investigate the value of IGRA in the diagnosis of active pulmonary TB (PTB) in China. Methods: We conducted a large-scale retrospective multicenter investigation to further evaluate the role of IGRA in the diagnosis of active PTB in high TB-epidemic populations and the factors affecting the performance of the assay. All patients who underwent valid T-SPOT.TB assays from December 2012 to November 2015 in six large-scale specialized TB hospitals in China and met the study criteria were retrospectively evaluated. Patients were divided into three groups: Group 1, sputum culture-positive PTB patients, confirmed by positive Mycobacterium tuberculosis sputum culture; Group 2, sputum culture-negative PTB patients; and Group 3, non-TB respiratory diseases. The medical records of all patients were collected. Chi-square tests and Fisher's exact test were used to compare categorical data. Multivariable logistic analyses were performed to evaluate the relationship between T-SPOT.TB results in TB patients and other factors. Results: A total of 3082 patients for whom complete information was available were included in the investigation, including 905 sputum culture-positive PTB cases, 914 sputum culture-negative PTB cases, and 1263 non-TB respiratory disease cases. The positive rate of T-SPOT.TB was 93.3% in the culture-positive PTB group and 86.1% in the culture-negative PTB group. In the non-TB group, the positive rate of T-SPOT.TB was 43.6%. The positive rate of T-SPOT.TB in the culture-positive PTB group was significantly higher than that in the culture-negative PTB group (χ2 = 25.118, P < 0.01), which in turn was significantly higher than that in the non-TB group (χ2 = 566.116, P < 0.01). 
The overall results were as follows: sensitivity, 89.7%; specificity, 56.4%; positive predictive value, 74.8%; negative predictive value, 79.1%; and accuracy, 76.0%. Conclusions: High false-positive rates of the T-SPOT.TB assay in the non-TB group limit its usefulness as a single test to diagnose active TB in China. We highly recommend that IGRAs not be used for the diagnosis of active TB in high-burden TB settings.
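The summary statistics above follow from a single 2×2 table. As a sketch, the counts below are back-calculated from the reported rates (a hypothetical reconstruction: 1631 of 1819 PTB patients and 551 of 1263 non-TB patients testing T-SPOT.TB positive); they reproduce every reported metric.

```python
# Diagnostic test metrics from 2x2 counts: tp/fn among disease-positive
# patients, fp/tn among disease-negative patients.
def diagnostic_metrics(tp, fn, fp, tn):
    """Return sensitivity, specificity, PPV, NPV, and accuracy (in %)."""
    sens = tp / (tp + fn)            # positives detected among TB patients
    spec = tn / (tn + fp)            # negatives among non-TB patients
    ppv = tp / (tp + fp)             # TB among test-positives
    npv = tn / (tn + fn)             # non-TB among test-negatives
    acc = (tp + tn) / (tp + fn + fp + tn)
    return {k: round(v * 100, 2) for k, v in
            {"sens": sens, "spec": spec, "ppv": ppv,
             "npv": npv, "acc": acc}.items()}

# Counts back-calculated from the reported rates (hypothetical reconstruction)
m = diagnostic_metrics(tp=1631, fn=188, fp=551, tn=712)
print(m)  # {'sens': 89.66, 'spec': 56.37, 'ppv': 74.75, 'npv': 79.11, 'acc': 76.02}
```

The two-decimal values match those quoted in the discussion (89.66%, 56.37%, 74.75%, 79.11%, 76.02%).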
Keywords: Active Tuberculosis; Diagnosis; Interferon-Gamma Release Assay
MeSH terms: Adolescent; Adult; Aged; Aged, 80 and over; Child; Female; Humans; Interferon-gamma; Interferon-gamma Release Tests; Male; Middle Aged; Mycobacterium tuberculosis; Retrospective Studies; Sensitivity and Specificity; Sputum; Tuberculosis, Pulmonary; Young Adult
Coenzyme Q10 prevents hepatic fibrosis, inflammation, and oxidative stress in a male rat model of poor maternal nutrition and accelerated postnatal growth.
PMID: 26718412
Background: It is well established that low birth weight and accelerated postnatal growth increase the risk of liver dysfunction in later life. However, molecular mechanisms underlying such developmental programming are not well characterized, and potential intervention strategies are poorly defined.
Design: A rat model of maternal protein restriction was used to generate low-birth-weight offspring that underwent accelerated postnatal growth (termed "recuperated"). These were compared with control rats. Offspring were weaned onto standard feed pellets with or without dietary CoQ10 (1 mg/kg body weight per day) supplementation. At 12 mo, hepatic fibrosis, indexes of inflammation, oxidative stress, and insulin signaling were measured by histology, Western blot, ELISA, and reverse transcriptase-polymerase chain reaction.
Results: Hepatic collagen deposition (diameter of deposit) was greater in recuperated offspring (mean ± SEM: 12 ± 2 μm) than in controls (5 ± 0.5 μm) (P < 0.001). This was associated with greater inflammation (interleukin 6: 38% ± 24% increase; P < 0.05; tumor necrosis factor α: 64% ± 24% increase; P < 0.05), lipid peroxidation (4-hydroxynonenal, measured by ELISA: 0.30 ± 0.02 compared with 0.19 ± 0.05 μg/mL per μg protein; P < 0.05), and hyperinsulinemia (P < 0.05). CoQ10 supplementation increased (P < 0.01) hepatic CoQ10 concentrations and ameliorated liver fibrosis (P < 0.001), inflammation (P < 0.001), some measures of oxidative stress (P < 0.001), and hyperinsulinemia (P < 0.01).
Conclusions: Suboptimal in utero nutrition combined with accelerated postnatal catch-up growth caused more hepatic fibrosis in adulthood, which was associated with higher indexes of oxidative stress and inflammation and hyperinsulinemia. CoQ10 supplementation prevented liver fibrosis accompanied by downregulation of oxidative stress, inflammation, and hyperinsulinemia.
MeSH terms: Animals; Anti-Inflammatory Agents, Non-Steroidal; Cytokines; Diet, Protein-Restricted; Dietary Supplements; Female; Fetal Development; Fetal Growth Retardation; Hepatitis; Hyperinsulinism; Liver; Liver Cirrhosis; Male; Malnutrition; Maternal Nutritional Physiological Phenomena; Oxidative Stress; Pregnancy; Pregnancy Complications; Rats, Wistar; Specific Pathogen-Free Organisms; Ubiquinone; Weaning
PMCID: 4733260
INTRODUCTION
In 1992, Hales and Barker (1) proposed the “thrifty phenotype hypothesis,” which postulated that in response to suboptimal in utero nutrition, the fetus alters its organ structure and adapts its metabolism to ensure immediate survival. This occurs through the sparing of vital organs (e.g., the brain) at the expense of others, such as the liver, thus increasing the risk of metabolic disease such as liver dysfunction in later life (2–4). This risk is exacerbated if a suboptimal uterine environment is followed by rapid postnatal growth (5, 6). Nonalcoholic fatty liver disease (NAFLD)4 is the hepatic manifestation of the metabolic syndrome. Aspects of the metabolic syndrome, including NAFLD, have been linked to exposure to suboptimal early-life environments (7, 8). Although the incidence of NAFLD is high (9), the associated morbidity is low if there is no progression to hepatic fibrosis. Progression to fibrosis is indicative of the clinically important subtype of patients with NAFLD who have a high chance (20%) of developing frank liver cirrhosis and subsequent liver failure (10). At present, it is unknown why this progression occurs only in a subset of individuals. The development of an intervention that prevents these changes from accumulating could improve the prognosis of patients who develop NAFLD later in life. Increased oxidative stress is a common consequence of developmental programming (11). Increased reactive oxygen species (ROS) have been strongly implicated in the etiology of hepatic fibrosis. Several animal studies have focused on antioxidant therapies to prevent the deleterious phenotypes of developmental programming (12–14); however, the doses used are not recommended for use in humans. In practice, a suboptimal intrauterine environment is often recognized retrospectively (i.e., after delivery). Thus, it is important to address potential beneficial effects of targeted postnatal interventions. 
Coenzyme Q (CoQ10) is a benzoquinone ring linked to an isoprenoid side-chain. The isoform containing 9 isoprenoid units (CoQ9) is most abundant in rodents, whereas CoQ10 (10 isoprenoid units) is the most common in humans. When oxidized, CoQ10 shuttles electrons between mitochondrial complexes I and III and complexes II and III. Reduced CoQ10 is the most abundant endogenous cellular antioxidant (15) and is a safe and effective therapeutic antioxidant (16, 17). We have also shown that postnatal CoQ10 supplementation prevents developmentally programmed accelerated aging in rat aortas (18) and hearts (19). Liver is one of a few tissues to take up dietary CoQ10 (20), and CoQ10 supplementation has been previously investigated as a potential therapy to prevent the progression of NAFLD in humans (21). In this study, we aimed to 1) investigate the effects of poor maternal nutrition and rapid postnatal catch-up growth on hepatic CoQ9 concentrations and molecular pathways leading to proinflammatory changes and development of fibrosis and 2) determine whether a clinically relevant dose of dietary CoQ10 could correct any observed hepatic fibrosis.
METHODS
Animal experimentation: All procedures involving animals were conducted under the British Animals (Scientific Procedures) Act (1986) and underwent ethical review by the University of Cambridge Animal Welfare and Ethical Review Board. Stock animals were purchased from Charles River, and dams were produced from in-house breeding from stock animals. Pregnant Wistar rats were maintained at room temperature in specific-pathogen-free housing with the use of individually ventilated cages with environmental enrichment. The dams were maintained on a 20% protein diet (control) or an isocaloric low-protein (8%) diet, as previously described (22). Access to diets and water was provided ad libitum. All rats used in this study were specific-pathogen-free and housed individually at 22°C on a controlled 12:12-h light-dark cycle. Diets were purchased from Arie Blok. The day of birth was recorded as day 1 of postnatal life. Pups born to low-protein-diet–fed dams were cross-fostered to control-fed mothers on postnatal day 3 to create a recuperated litter. Each recuperated litter was standardized to 4 male pups at random to maximize their plane of nutrition. The control group consisted of the offspring of dams fed the 20%-protein diet and suckled by 20% protein–fed dams. Each control litter was culled to 8 pups as a standard. To minimize stress to the pups when cross-fostered, they were transferred with some of their own bedding. Body weights were recorded at postnatal days 3 and 21 and at 12 mo. At 21 d, 2 males per litter were weaned onto standard laboratory feed pellets (Special Diet Services) and the other 2 were weaned onto the same diet supplemented with CoQ10 to give a dose of 1 mg/kg body weight per day. Diets were given in the home cage. Rat pups were fed these diets until 12 mo of age, at which time all rats were killed by carbon dioxide asphyxiation at ∼1000. 
Post mortem, liver tissue was removed, weighed, and snap-frozen in liquid nitrogen and then stored at −80°C until analysis. A further portion of liver tissue (the same area for each sample) was removed and fixed in formalin for histologic assessment. For all measurements, 1 pup per litter was used; thus, “n” represents number of litters. Ten litters per group were used in this study based on a power calculation with the use of previous data from our studies of RNA expression in postnatal tissues from programmed animals. Rat numbers were calculated to give 80% power to detect a 20% difference between groups at the P < 0.05 level. Only male rats were used in this study. 
CoQ10 diet preparation: A dose of 1 mg CoQ10/kg body weight per day was used in this study, which was administered via the diet (18). This was achieved by appropriate CoQ10 supplementation of laboratory feed pellets, as we described previously (18, 19). Diet was prepared twice a week throughout the study. 
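The stated power calculation (80% power to detect a 20% between-group difference at P < 0.05) can be sanity-checked with the standard normal-approximation sample-size formula for a two-sided, two-sample comparison. The effect-size conversion below is a hypothetical assumption (SD equal to 20% of the mean, i.e., a standardized effect size d = 1.0), since the variance estimate underlying the authors' calculation is not reported; this is a sketch, not a reconstruction of their method.

```python
import math

# n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2, where d is the
# standardized effect size (difference in means / SD).
Z_975 = 1.959964  # z for alpha = 0.05, two-sided
Z_80 = 0.841621   # z for 80% power

def n_per_group(d, z_alpha=Z_975, z_beta=Z_80):
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(1.0))  # 16 per group under the hypothetical SD assumption
```

The normal approximation slightly underestimates the exact t-test sample size; for small groups a t-based correction adds roughly one animal per group.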
CoQ10, lipid profile, glucose, and insulin analysis Total liver ubiquinone (CoQ9 and CoQ10) was quantified by reverse-phase HPLC with UV detection at 275 nm, as described previously (18, 19). Serum was obtained as detailed previously (18), and blood from the tail vein collected into EDTA tubes and centrifuged for 3 min at 3000 rpm at 4° Celsius to isolate plasma. Fasted blood glucose measurements were obtained by using a glucose analyzer (Hemocue). The serum lipid profile and fasted plasma insulin analyses were performed by using an auto-analyzer (the Wellcome Trust–supported Cambridge Mouse Laboratory). Liver triglyceride concentrations were determined by using the Folch assay (23). Briefly, liver samples were homogenized in a 2:1 ratio of chloroform:methanol. The distinct lipid phase was removed after centrifugation, and lipid weight was quantified after the solvent was removed by evaporation. Total liver ubiquinone (CoQ9 and CoQ10) was quantified by reverse-phase HPLC with UV detection at 275 nm, as described previously (18, 19). Serum was obtained as detailed previously (18), and blood from the tail vein collected into EDTA tubes and centrifuged for 3 min at 3000 rpm at 4° Celsius to isolate plasma. Fasted blood glucose measurements were obtained by using a glucose analyzer (Hemocue). The serum lipid profile and fasted plasma insulin analyses were performed by using an auto-analyzer (the Wellcome Trust–supported Cambridge Mouse Laboratory). Liver triglyceride concentrations were determined by using the Folch assay (23). Briefly, liver samples were homogenized in a 2:1 ratio of chloroform:methanol. The distinct lipid phase was removed after centrifugation, and lipid weight was quantified after the solvent was removed by evaporation. Histologic assessment Liver samples were fixed in formalin, paraffin-embedded, and sectioned to a 6-μm thickness by using a microtome. Picro Sirius Red staining was used to stain for fibrosis. 
Cell-D software (Olympus Soft Imaging Solutions) was used to quantify the thickness of fibrosis around all visible hepatic vessels (including all arteries and veins) from 1 section per sample. This sample was taken at the same point (20 sections for each sample) by using a nonbiased grid sampling method. All analyses were performed at 10× magnification by using an Olympus microscope (Olympus Soft Imaging Solutions). All analyses were performed blinded. Liver samples were fixed in formalin, paraffin-embedded, and sectioned to a 6-μm thickness by using a microtome. Picro Sirius Red staining was used to stain for fibrosis. Cell-D software (Olympus Soft Imaging Solutions) was used to quantify the thickness of fibrosis around all visible hepatic vessels (including all arteries and veins) from 1 section per sample. This sample was taken at the same point (20 sections for each sample) by using a nonbiased grid sampling method. All analyses were performed at 10× magnification by using an Olympus microscope (Olympus Soft Imaging Solutions). All analyses were performed blinded. Mitochondrial electron transport chain complex activities Activities of complex I [NAD(H): ubiquinone reductase; Enzyme Commission (EC) 1.6.5.3], complexes II–III (succinate: cytochrome c reductase; EC 1.3.5.1 + EC 1.10.2.2), and complex IV (cytochrome oxidase; EC 1.9.3.1) as well as citrate synthase (EC 1.1.1.27) were assayed as described previously (18, 19). Activities of complex I [NAD(H): ubiquinone reductase; Enzyme Commission (EC) 1.6.5.3], complexes II–III (succinate: cytochrome c reductase; EC 1.3.5.1 + EC 1.10.2.2), and complex IV (cytochrome oxidase; EC 1.9.3.1) as well as citrate synthase (EC 1.1.1.27) were assayed as described previously (18, 19). Protein analysis Protein was extracted and assayed as described previously (18). Protein (20 μg) was loaded onto 10%, 12%, or 15% polyacrylamide gels, dependent on the molecular weight of the protein to be measured. 
The samples were electrophoresed and transferred to polyvinylidene fluoride membranes (18) and detected with the use of the following dilutions of primary antibody: insulin receptor substrate 1 (IRS-1; 1:1000; Millipore); phosphoinositide-3-kinase, p110-β (p110-β); insulin receptor β (IR-β); protein kinase-ζ (1:200; Santa-Cruz); Akt-1 and Akt-2 (1:1000; New England Biolabs); phosphoinositide-3-kinase, p85-α (p85-α 1:5000; Upstate); cytochrome P450-2E1 (CYP2E1) and Il-6 (1:1000; Abcam); Tnf-α (1:1000; Cell Signaling Technology); and anti-rabbit IgG secondary antibodies (1:20,000; Jackson Immunoresearch Laboratories). Equal protein loading was confirmed by staining electrophoresed gels with Coomassie Blue (Bio-Rad) to visualize total protein. Protein was extracted and assayed as described previously (18). Protein (20 μg) was loaded onto 10%, 12%, or 15% polyacrylamide gels, dependent on the molecular weight of the protein to be measured. The samples were electrophoresed and transferred to polyvinylidene fluoride membranes (18) and detected with the use of the following dilutions of primary antibody: insulin receptor substrate 1 (IRS-1; 1:1000; Millipore); phosphoinositide-3-kinase, p110-β (p110-β); insulin receptor β (IR-β); protein kinase-ζ (1:200; Santa-Cruz); Akt-1 and Akt-2 (1:1000; New England Biolabs); phosphoinositide-3-kinase, p85-α (p85-α 1:5000; Upstate); cytochrome P450-2E1 (CYP2E1) and Il-6 (1:1000; Abcam); Tnf-α (1:1000; Cell Signaling Technology); and anti-rabbit IgG secondary antibodies (1:20,000; Jackson Immunoresearch Laboratories). Equal protein loading was confirmed by staining electrophoresed gels with Coomassie Blue (Bio-Rad) to visualize total protein. Gene expression RNA was extracted by using a miReasy mini kit (Qiagen) following the manufacturer’s instructions, as detailed previously (19). A DNase digestion step was performed to ensure no genetic DNA contamination. 
RNA (1 μg) was used to synthesize complementary DNA by using oligo-dT-adaptor primers and Moloney murine leukemia virus reverse transcriptase (Promega). Gene expression was determined by using custom-designed primers (Sigma) and SYBR Green reagents (Applied Biosystems). Primer sequences are presented in Table 1. Quantification of gene expression was performed with the use of a StepOnePlus reverse transcriptase–polymerase chain reaction machine (Applied Biosystems). Equal efficiency of the reverse transcription of RNA from all groups was confirmed through quantification of expression of the housekeeping gene β-actin (Actb). The expression of Actb did not differ between groups (effect of maternal diet, P = 0.9; effect of CoQ10 supplementation, P = 0.8; average copy numbers: control, 153 ± 32; recuperated, 144 ± 24; control CoQ10, 143 ± 12; recuperated CoQ10, 157 ± 21).

TABLE 1 Primers. Acta2, α-smooth muscle actin 2; Actb, β-actin; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; Des, desmin; F, forward; Gfap, glial fibrillary acidic protein; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; Nqo1, NAD(P)H dehydrogenase, quinone 1; Nrf2, nuclear factor, erythroid 2–like 2; R, reverse; Tgfb, transforming growth factor β; Timp, tissue inhibitor of matrix metalloproteinases.

Mitochondrial DNA copy number
Total DNA was extracted by using a phenol/chloroform extraction protocol (24). Mitochondrial DNA copy number analysis was performed as described previously (25).

4-Hydroxynonenal and 3-nitrotyrosine analysis
Protein nitrotyrosination was assayed by using a Nitrotyrosine ELISA kit (MitoSciences), according to the manufacturer’s instructions. Lipid peroxidation was analyzed by using an OxiSelect HNE Adduct ELISA kit (Cambridge Biosciences), according to the manufacturer’s instructions.
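ELISA kits such as the two above quantify an unknown by reading its absorbance against a standard curve. The sketch below shows the simplest version of that step, linear interpolation between bracketing standards; the standard concentrations and absorbances are invented for illustration, and real kits often fit a 4-parameter logistic curve instead.

```python
def concentration_from_absorbance(absorbance, standards):
    """Interpolate an unknown's concentration from an ELISA standard curve.
    standards: list of (concentration, absorbance) pairs that bracket the
    unknown's absorbance. Values outside the curve cannot be interpolated."""
    pts = sorted(standards, key=lambda p: p[1])  # order by absorbance
    for (c0, a0), (c1, a1) in zip(pts, pts[1:]):
        if a0 <= absorbance <= a1:
            frac = (absorbance - a0) / (a1 - a0)
            return c0 + frac * (c1 - c0)
    raise ValueError("absorbance outside the standard curve; re-assay at another dilution")
```

Readings above the highest standard are rejected rather than extrapolated, mirroring the usual practice of re-running such samples at a higher dilution.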
Statistical analysis
Data were analyzed by using a 2-factor ANOVA with maternal diet and CoQ10 supplementation as the independent variables. Post hoc testing was carried out when appropriate and is indicated in the text accordingly. Data are presented as means ± SEMs. All statistical analyses were performed with the use of Statistica 7 software (StatSoft), and for all tests, P values <0.05 were considered significant. Data were checked for normal distribution. In all cases, “n” refers to the number of litters (with 1 rat used from each litter).
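For a balanced design like the one described above (2 diets × 2 supplementation levels, equal litters per cell), the 2-factor ANOVA reduces to simple sums-of-squares arithmetic. The sketch below shows that arithmetic with invented values, not study data; the actual analysis was run in Statistica 7.

```python
from itertools import product

def two_way_anova(cells):
    """Balanced 2-factor ANOVA.
    cells maps (level_of_factor_A, level_of_factor_B) -> list of values,
    with the same number of replicates in every cell.
    Returns F statistics for both main effects and the interaction."""
    def mean(xs):
        return sum(xs) / len(xs)

    levels_a = sorted({a for a, _ in cells})
    levels_b = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))  # replicates per cell
    all_vals = [v for vals in cells.values() for v in vals]
    grand = mean(all_vals)

    cell_means = {k: mean(v) for k, v in cells.items()}
    a_means = {a: mean([v for (x, _), vals in cells.items() if x == a for v in vals])
               for a in levels_a}
    b_means = {b: mean([v for (_, y), vals in cells.items() if y == b for v in vals])
               for b in levels_b}

    # Sums of squares for a balanced design
    ss_a = n * len(levels_b) * sum((a_means[a] - grand) ** 2 for a in levels_a)
    ss_b = n * len(levels_a) * sum((b_means[b] - grand) ** 2 for b in levels_b)
    ss_ab = n * sum((cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                    for a, b in product(levels_a, levels_b))
    ss_within = sum((v - cell_means[k]) ** 2 for k, vals in cells.items() for v in vals)

    df_a, df_b = len(levels_a) - 1, len(levels_b) - 1
    df_within = len(all_vals) - len(levels_a) * len(levels_b)
    ms_within = ss_within / df_within
    return {
        "F_A": (ss_a / df_a) / ms_within,
        "F_B": (ss_b / df_b) / ms_within,
        "F_AB": (ss_ab / (df_a * df_b)) / ms_within,
    }
```

Each F statistic would then be compared against the F distribution with (df_effect, df_within) degrees of freedom to obtain the P values reported in the results.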
RESULTS
Anthropometric measurements
Recuperated offspring were born smaller than control offspring (6.3 ± 0.2 compared with 7.4 ± 0.2 g; P < 0.001) and underwent rapid postnatal catch-up growth, so that their weights were similar to those of control offspring by postnatal day 21 (52.2 ± 0.9 compared with 50.7 ± 1.2 g). At 12 mo of age, there was no effect of maternal diet or CoQ10 supplementation on body weight or absolute liver weight (Table 2).

TABLE 2 Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on rat pup body and liver weights. Values are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. No significant differences between groups were detected. CoQ10, coenzyme Q10.

Dietary CoQ10 supplementation leads to greater hepatic CoQ9 and CoQ10 concentrations
Hepatic CoQ9 and CoQ10 concentrations in recuperated offspring were unaltered compared with those in control offspring (Table 3). However, CoQ9 and CoQ10 concentrations were greater (P < 0.01) in CoQ10-supplemented animals (Table 3).
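As a quick check on the magnitudes above, the reported means imply that recuperated pups were roughly 15% lighter than controls at birth, with the deficit fully closed (in fact slightly reversed) by day 21. The arithmetic, using the reported group means:

```python
def percent_below(value, reference):
    """Percentage by which `value` falls below `reference`
    (negative when `value` exceeds `reference`)."""
    return (reference - value) / reference * 100.0

# Reported group means from the text (g): recuperated vs. control
birth_deficit = percent_below(6.3, 7.4)    # ~14.9% lighter at birth
day21_deficit = percent_below(52.2, 50.7)  # negative: slightly heavier by day 21
```

This is descriptive arithmetic only; the significance tests quoted in the text come from the ANOVA described in the statistical-analysis section.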
TABLE 3 Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum and plasma measurements. Values are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. The overall effect of maternal diet and the interaction between maternal diet and CoQ10 supplementation were not significant for any of the variables reported in the table. *,**Effect of CoQ10: *P < 0.05, **P < 0.01. CoQ10, coenzyme Q10.

CoQ10 supplementation ameliorates hepatic fibrosis and inflammation induced by poor maternal nutrition
Recuperated offspring showed greater (P < 0.001) collagen deposition (Figure 1A) than did control offspring (Figure 1B, C). CoQ10 supplementation prevented this effect of maternal diet (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1A, D, E). Collagen type 1, α1 (Col1a1) mRNA expression was also greater (P < 0.05) in recuperated offspring (Figure 1F) and was reduced (P < 0.05) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1F).
FIGURE 1 Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on hepatic fibrosis, quantified by measurement of collagen (A), in 12-mo-old male rat livers (B–E). (F) mRNA expression of Col1a1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C compared with R and R compared with RQ: *P < 0.05, ***P < 0.001. Interaction between maternal diet and CoQ10 supplementation (statistical interaction): P = 0.001 (A and F). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q10; CQ, control CoQ10; R, recuperated; RQ, recuperated CoQ10.

Recuperated and control offspring had similar expression of the profibrotic cytokine transforming growth factor β1 (Tgfb1), and a trend toward greater expression of monocyte chemoattractant protein 1 (Mcp1; effect of maternal diet, P = 0.06) was observed in recuperated offspring (Figure 2A). CoQ10 supplementation reduced the concentrations of Tgfb1 (effect of CoQ10 supplementation, P < 0.001) and tended to reduce Mcp1 (effect of CoQ10 supplementation, P = 0.07) (Figure 2A). The cytokine Tnf-α was greater in recuperated offspring than in controls (effect of maternal diet, P < 0.05), and concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.05). Concentrations of Il-6 were greater (P < 0.05) in recuperated offspring than in controls, and this effect was prevented by CoQ10 supplementation (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P < 0.001) (Figure 2B). The gene expression of markers of hepatic stellate cell (HSC) and Kupffer cell (KC) activation [α-smooth muscle actin 2 (Acta2), desmin (Des), and C-type lectin-domain family 4 (Clec4f)] was greater in recuperated offspring than in controls (effects of maternal diet, P < 0.05 for all listed variables) (Figure 2C). CoQ10 supplementation reduced Des (P < 0.001), Clec4f (P < 0.05), and cluster of differentiation 68 (Cd68) (P < 0.01; all effects of CoQ10 supplementation) (Figure 2C). Acta2 was unchanged by CoQ10 supplementation (Figure 2C). Glial fibrillary acidic protein (Gfap) mRNA expression was not significantly different between groups (Figure 2C). Gene expression of matrix metalloproteinase 9 (Mmp9) was greater (P < 0.01) in recuperated offspring than in controls, and CoQ10 supplementation reduced (P < 0.001) recuperated Mmp9 mRNA to control levels (interaction between maternal diet and CoQ10 supplementation, P < 0.05) (Figure 2D). The gene expression of Mmp2 was unaltered between groups (Figure 2D). Tissue inhibitors of Mmps (Timps) were also not significantly different between groups (Figure 2E).

FIGURE 2 Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on inflammatory markers: Tgfb1 and Mcp1 mRNA (A), Tnf-α and Il-6 protein (B), mRNA expression of markers of HSC activation (Acta2, Des, Gfap) and KC activation (Clec4f, Cd68) (C), and Mmp (D) and Timp (E) mRNA in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. **,***C and R compared with CQ and RQ: **P < 0.01, ***P < 0.001. *,**C compared with R: *P < 0.05, **P < 0.01. ***R compared with RQ, P < 0.001. Statistical interactions: Tgfb1, P = 0.1; Mcp1, P = 0.3; Tnf-α, P = 0.5; Il-6, P = 0.002; Acta2, P = 0.5; Des, P = 0.3; Gfap, P = 0.5; Clec4f, P = 0.1; Cd68, P = 0.9; Mmp2, P = 0.9; Mmp9, P = 0.04; Timp1, P = 0.6; Timp2, P = 0.8. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. Acta2, α-smooth muscle actin 2; C, control; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; CoQ10, coenzyme Q10; CQ, control CoQ10; Des, desmin; Gfap, glial fibrillary acidic protein; HSC, hepatic stellate cell; KC, Kupffer cell; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; R, recuperated; RQ, recuperated CoQ10; Tgfb1, transforming growth factor β1; Timp, tissue inhibitor of matrix metalloproteinases.
CoQ10 supplementation attenuates ROS induced by poor maternal nutrition
Components of the NAD(P)H oxidase 2 (NOX-2) protein complex—Gp91phox (P < 0.05), P22phox (P < 0.05), and P47phox (P = 0.05)—were greater in recuperated offspring than in controls. P67phox was also greater (P < 0.01) in recuperated offspring than in controls, and this effect was reduced (P < 0.001) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.01) (Figure 3A). Levels of Gp91phox (P = 0.08), P22phox (P < 0.001), and P47phox (P = 0.05) were reduced by CoQ10 supplementation (Figure 3A).
CYP2E1 was not significantly different between control and recuperated offspring; however, CoQ10 supplementation reduced its concentration by 50% (effect of CoQ10 supplementation, P < 0.01) (Figure 3B). Complex I and complex IV electron transport chain activities were greater in recuperated offspring (effect of maternal diet, P = 0.05); however, complex II–III activity was unaffected. CoQ10 supplementation increased complex IV activity (effect of CoQ10 supplementation, P < 0.05) (Figure 3C). Mitochondrial DNA copy number was not significantly different between groups (control: 36 ± 4 copy numbers; recuperated: 36 ± 5 copy numbers) and was unaltered by CoQ10 supplementation (control CoQ10: 41 ± 4 copy numbers; recuperated CoQ10: 35 ± 3 copy numbers). Concentrations of 4-hydroxynonenal (4-HNE) adducts were greater (P < 0.05) in recuperated offspring. There was a significant interaction between maternal diet and CoQ10 supplementation on 4-HNE concentrations (P < 0.05), reflecting the fact that CoQ10 supplementation reduced 4-HNE concentrations in recuperated offspring but had no effect in control offspring (Figure 3D). Concentrations of 3-nitrotyrosine were not significantly different between groups (Figure 3E).

FIGURE 3 Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on indexes of oxidative stress: components of the NOX-2 complex (A), CYP2E1 (B), ETC activities (C), 4-HNE (D), and 3-NT adducts (E) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. D: *Comparison of C with R and comparison of R with RQ, P < 0.05. A and B: **Comparison of C with R, P < 0.01; ***Comparison of R with RQ and comparison of C and R with CQ and RQ, P < 0.001. Statistical interactions: NOX-2 (Gp91phox, P = 0.5; P22phox, P = 0.3; P47phox, P = 0.4; P67phox, P = 0.01; P40phox, P = 0.3), CYP2E1 (P = 0.5), ETC activities (complex I, P = 0.9; complexes II–III, P = 0.6; complex IV, P = 0.9), 4-HNE (P = 0.04), and 3-NT (P = 0.8).
Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q10; CQ, control CoQ10; CS, citrate synthase; CYP2E1, cytochrome P450-2E1; ETC, electron transport chain; NOX-2, NAD(P)H oxidase 2; R, recuperated; RQ, recuperated CoQ10; 3-NT, 3-nitrotyrosine; 4-HNE, 4-hydroxynonenal.
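Figure 3C reports electron transport chain activities together with citrate synthase (CS). A common convention in such assays, assumed here rather than stated explicitly in the text, is to express each complex activity as a ratio to CS activity from the same homogenate, so that differences in mitochondrial content do not masquerade as differences in complex function. A minimal sketch:

```python
def cs_normalized(activities, cs_activity):
    """Express enzyme activities from one tissue homogenate as ratios to
    citrate synthase (CS) activity, a common proxy for mitochondrial content.
    activities: dict mapping complex name -> activity (same units as CS)."""
    if cs_activity <= 0:
        raise ValueError("citrate synthase activity must be positive")
    return {name: activity / cs_activity for name, activity in activities.items()}
```

The example values below are invented; the study's actual activities and normalization scheme are given in reference (18, 19).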
CoQ10 supplementation alters hepatic antioxidant defense capacity
Nuclear factor erythroid 2–like 2 (Nrf2), heme oxygenase 1 (Hmox1), and glutathione synthetase (Gst) expression were not significantly different between control and recuperated offspring (Figure 4A, B). Glutathione peroxidase 1 (Gpx1) was reduced in recuperated offspring compared with controls, and this effect was prevented by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.05). NAD(P)H dehydrogenase, quinone 1 (Nqo1) was reduced in recuperated offspring (effect of maternal diet, P < 0.05) (Figure 4A, B).
CoQ10 supplementation increased Nrf2 (P < 0.001), Hmox1 (P < 0.05), and Gst (P < 0.001) (Figure 4A, B); however, Nqo1 expression was reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.001) (Figure 4A).

FIGURE 4 Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on mRNA expression of molecules involved in the NRF antioxidant defense pathway in 12-mo-old male rat livers: (A) Nrf2, Hmox1, and Nqo1; (B) Gst and Gpx1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C and R compared with CQ and RQ: *P < 0.05, ***P < 0.001. *C compared with R and C compared with CQ, P < 0.05. ***R compared with RQ, P < 0.001. Statistical interactions: Nrf2, P = 0.9; Hmox1, P = 0.6; Nqo1, P = 0.4; Gst, P = 0.1; Gpx1, P = 0.01. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q10; CQ, control CoQ10; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Nqo1, NAD(P)H dehydrogenase, quinone 1; NRF, nuclear erythroid 2–related factor; Nrf2, nuclear factor, erythroid 2–like 2; R, recuperated; RQ, recuperated CoQ10.
CoQ10 supplementation alters expression of molecules involved in insulin and lipid metabolism
Greater serum insulin concentrations were observed in recuperated offspring than in controls (overall effect of maternal diet, P < 0.05), and concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.01) (Figure 5A). Protein expression of IR-β (P < 0.001), IRS-1 (P < 0.001), and Akt-1 (P < 0.05) was reduced in recuperated offspring compared with controls (all effects of maternal diet). Phosphoinositide-3-kinase p110-β (p110-β), phosphoinositide-3-kinase p85-α (p85-α), and Akt-2 were not influenced by maternal diet (Figure 5B). CoQ10 supplementation increased p110-β (P < 0.05) and Akt-2 (P < 0.01) protein expression (Figure 5B). Fasting plasma glucose concentrations were not significantly different between groups (Table 3). Serum and hepatic triglyceride concentrations and serum cholesterol concentrations were not significantly different between control and recuperated offspring (Table 3).
CoQ10 supplementation increased serum triglyceride and cholesterol concentrations (effects of CoQ10 supplementation, P < 0.05); however, hepatic triglyceride concentrations were unchanged (Table 3).

FIGURE 5 Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum insulin (A) and insulin signaling protein expression (B) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. Statistical interactions: insulin, P = 0.3; insulin signaling proteins (IR-β, P = 0.12; IRS-1, P = 0.5; p110-β, P = 0.4; p85-α, P = 0.3; Akt-1, P = 0.05; Akt-2, P = 0.5). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q10; CQ, control CoQ10; IR-β, insulin receptor β; IRS-1, insulin receptor substrate 1; p85-α, phosphoinositide-3-kinase p85-α; p110-β, phosphoinositide-3-kinase p110-β; R, recuperated; RQ, recuperated CoQ10.
[ "Animal experimentation", "CoQ10 diet preparation", "CoQ10, lipid profile, glucose, and insulin analysis", "Histologic assessment", "Mitochondrial electron transport chain complex activities", "Protein analysis", "Gene expression", "Mitochondrial DNA copy number", "4-Hydroxynonenal and 3-nitrotyrosine analysis", "Statistical analysis", "Anthropometric measurements", "Dietary CoQ10 supplementation leads to greater hepatic CoQ9 and CoQ10 concentrations", "CoQ10 supplementation ameliorates hepatic fibrosis and inflammation induced by poor maternal nutrition", "CoQ10 supplementation attenuates ROS induced by poor maternal nutrition", "CoQ10 supplementation alters hepatic antioxidant defense capacity", "CoQ10 supplementation alters expression of molecules involved in insulin and lipid metabolism" ]
[ "All procedures involving animals were conducted under the British Animals (Scientific Procedures) Act (1986) and underwent ethical review by the University of Cambridge Animal Welfare and Ethical Review Board. Stock animals were purchased from Charles River, and dams were produced from in-house breeding from stock animals. \nPregnant Wistar rats were maintained at room temperature in specific-pathogen-free housing with the use of individually ventilated cages with environmental enrichment. The dams were maintained on a 20% protein diet (control) or an isocaloric low-protein (8%) diet, as previously described (22). Access to diets and water was provided ad libitum. All rats used in this study were specific-pathogen-free housed individually at 22°C on a controlled 12:12-h light-dark cycle. Diets were purchased from Arie Blok. \nThe day of birth was recorded as day 1 of postnatal life. Pups born to low-protein-diet–fed dams were cross-fostered to control-fed mothers on postnatal day 3 to create a recuperated litter. Each recuperated litter was standardized to 4 male pups at random to maximize their plane of nutrition. The control group consisted of the offspring of dams fed the 20%-protein diet and suckled by 20% protein–fed dams. Each control litter was culled to 8 pups as a standard. To minimize stress to the pups when cross-fostered, they were transferred with some of their own bedding. Body weights were recorded at postnatal days 3 and 21 and at 12 mo. At 21 d, 2 males per litter were weaned onto standard laboratory feed pellets (Special Diet Services) and the other 2 were weaned onto the same diet supplemented with CoQ10 to give a dose of 1 mg/kg body weight per day. Diets were given in the home cage. Rat pups were fed these diets until 12 mo of age, at which time all rats were killed by carbon dioxide asphyxiation at ∼1000. Post mortem, liver tissue was removed, weighed, and snap-frozen in liquid nitrogen and then stored at −80°C until analysis. 
A further portion of liver tissue (the same area for each sample) was removed and fixed in formalin for histologic assessment. \nFor all measurements, 1 pup per litter was used; thus, “n” represents the number of litters. Ten litters per group were used in this study based on a power calculation with the use of previous data from our studies of RNA expression in postnatal tissues from programmed animals. Rat numbers were calculated to give 80% power to detect a 20% difference between groups at the P < 0.05 level. Only male rats were used in this study.", "A dose of 1 mg CoQ10/kg body weight per day was used in this study, which was administered via the diet (18). This was achieved by appropriate CoQ10 supplementation of laboratory feed pellets, as we described previously (18, 19). Diet was prepared twice a week throughout the study.", "Total liver ubiquinone (CoQ9 and CoQ10) was quantified by reverse-phase HPLC with UV detection at 275 nm, as described previously (18, 19). Serum was obtained as detailed previously (18), and blood from the tail vein was collected into EDTA tubes and centrifuged for 3 min at 3000 rpm at 4°C to isolate plasma. Fasted blood glucose measurements were obtained by using a glucose analyzer (Hemocue). The serum lipid profile and fasted plasma insulin analyses were performed by using an auto-analyzer (the Wellcome Trust–supported Cambridge Mouse Laboratory). Liver triglyceride concentrations were determined by using the Folch assay (23). Briefly, liver samples were homogenized in a 2:1 ratio of chloroform:methanol. The distinct lipid phase was removed after centrifugation, and lipid weight was quantified after the solvent was removed by evaporation.", "Liver samples were fixed in formalin, paraffin-embedded, and sectioned to a 6-μm thickness by using a microtome. Picro Sirius Red staining was used to stain for fibrosis. 
Cell-D software (Olympus Soft Imaging Solutions) was used to quantify the thickness of fibrosis around all visible hepatic vessels (including all arteries and veins) from 1 section per sample. This sample was taken at the same point (20 sections for each sample) by using a nonbiased grid sampling method. All analyses were performed at 10× magnification by using an Olympus microscope (Olympus Soft Imaging Solutions). All analyses were performed blinded.", "Activities of complex I [NAD(H): ubiquinone reductase; Enzyme Commission (EC) 1.6.5.3], complexes II–III (succinate: cytochrome c reductase; EC 1.3.5.1 + EC 1.10.2.2), and complex IV (cytochrome oxidase; EC 1.9.3.1) as well as citrate synthase (EC 2.3.3.1) were assayed as described previously (18, 19).", "Protein was extracted and assayed as described previously (18). Protein (20 μg) was loaded onto 10%, 12%, or 15% polyacrylamide gels, dependent on the molecular weight of the protein to be measured. The samples were electrophoresed and transferred to polyvinylidene fluoride membranes (18) and detected with the use of the following dilutions of primary antibody: insulin receptor substrate 1 (IRS-1; 1:1000; Millipore); phosphoinositide-3-kinase, p110-β (p110-β); insulin receptor β (IR-β); protein kinase-ζ (1:200; Santa-Cruz); Akt-1 and Akt-2 (1:1000; New England Biolabs); phosphoinositide-3-kinase, p85-α (p85-α; 1:5000; Upstate); cytochrome P450-2E1 (CYP2E1) and Il-6 (1:1000; Abcam); Tnf-α (1:1000; Cell Signaling Technology); and anti-rabbit IgG secondary antibodies (1:20,000; Jackson Immunoresearch Laboratories). Equal protein loading was confirmed by staining electrophoresed gels with Coomassie Blue (Bio-Rad) to visualize total protein.", "RNA was extracted by using a miReasy mini kit (Qiagen) following the manufacturer’s instructions, as detailed previously (19). A DNase digestion step was performed to ensure no genomic DNA contamination. 
RNA (1 μg) was used to synthesize complementary DNA by using oligo-dT-adaptor primers and Moloney murine leukemia virus reverse transcriptase (Promega). Gene expression was determined by using custom-designed primers (Sigma) and SYBR Green reagents (Applied Biosystems). Primer sequences are presented in Table 1. Quantification of gene expression was performed with the use of a Step One Plus reverse transcriptase–polymerase chain reaction machine (Applied Biosystems). Equal efficiency of the reverse transcription of RNA from all groups was confirmed through quantification of expression of the housekeeping gene β-actin (Actb). The expression of Actb did not differ between groups (effect of maternal diet, P = 0.9; effect of CoQ10 supplementation, P = 0.8; control: 153 ± 32; recuperated: 144 ± 24; control CoQ10: 143 ± 12; and recuperated CoQ10: 157 ± 21 average copy numbers).\nPrimers1\nActa2, α-smooth muscle actin 2; Actb, β-actin; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; Des, desmin; F, forward; Gfap, glial fibrillary acidic protein; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; Nqo1, NAD(P)H dehydrogenase, quinone 1; Nrf2, nuclear factor, erythroid 2–like 2; R, reverse; Tgfb, transforming growth factor β; Timp, tissue inhibitor of matrix metalloproteinases. ", "Total DNA was extracted using a phenol/chloroform extraction protocol (24). Mitochondrial DNA copy number analysis was performed as described previously (25).", "Protein nitrotyrosination was assayed by using a Nitrotyrosine ELISA kit (MitoSciences), according to the manufacturer’s instructions. 
Lipid peroxidation was analyzed by using an OxiSelect HNE Adduct ELISA kit (Cambridge Biosciences), according to the manufacturer’s instructions.", "Data were analyzed by using a 2-factor ANOVA with maternal diet and CoQ10 supplementation as the independent variables. Post hoc testing was carried out when appropriate and is indicated in the text accordingly. Data are represented as means ± SEMs. All statistical analyses were performed with the use of Statistica 7 software (Statsoft), and for all tests, P values <0.05 were considered significant. Data were checked for normal distribution. In all cases, “n” refers to the number of litters (with 1 rat used from each litter).", "Recuperated offspring were born smaller than control rats (6.3 ± 0.2 compared with 7.4 ± 0.2 g; P < 0.001) and underwent rapid postnatal catch-up growth so that their weights were similar to those of the control offspring at postnatal day 21 (52.2 ± 0.9 compared with 50.7 ± 1.2 g). At 12 mo of age, there was no effect of maternal diet or CoQ10 supplementation on body weights or absolute liver weights (Table 2).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on rat pup body and liver weights1\nValues are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. No significant differences between groups were reported. CoQ10, coenzyme Q.", "Recuperated hepatic CoQ9 and CoQ10 concentrations were unaltered compared with those in control rats (Table 3). However, CoQ9 and CoQ10 concentrations were greater (P < 0.01) when supplemented with CoQ10 (Table 3).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum and plasma measurements1\nValues are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. 
The overall effects of maternal diet and interaction between maternal diet and CoQ10 supplementation were not significant for any of the variables reported in the table. *,**Effect of CoQ10: *P < 0.05, **P < 0.01. CoQ10, coenzyme Q.", "Recuperated offspring showed greater (P < 0.001) collagen deposition (Figure 1A) than did control offspring (Figure 1B, C). CoQ10 supplementation prevented this effect of maternal diet (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1A, D, E). Collagen type 1, α1 (Col1a1), mRNA expression was also greater (P < 0.05) in recuperated offspring (Figure 1F) and was reduced (P < 0.05) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1F). \nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on hepatic fibrosis, quantified by measurement of collagen (A), in 12-mo-old male rat livers (B–E). (F) mRNA expression of Col1a1. Values are as means ± SEMs; n = 10/group; 10/10 rats used. *,***C compared with R and R compared with RQ: *P < 0.05, ***P < 0.001. Interaction between in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation (statistical interaction): P = 0.001 (A and F). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; Col1a1 = collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; R, recuperated; RQ, recuperated CoQ10. \nRecuperated and control offspring had similar expression of the profibrotic cytokine transforming growth factor β1 (Tgfb1), and a trend for greater expression of monocyte chemoattractant protein 1 (Mcp1; effect of maternal diet, P = 0.06) was observed in recuperated offspring (Figure 2A). CoQ10 supplementation reduced the concentrations of Tgfb1 (effect of CoQ10 supplementation, P < 0.001) and Mcp1 (effect of CoQ10 supplementation, P = 0.07) (Figure 2A). 
The cytokine Tnf-α was greater in recuperated offspring than in controls (effect of maternal diet, P < 0.05) and concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.05). Concentrations of Il-6 were greater (P < 0.05) in recuperated offspring than in controls, and this effect was prevented by CoQ10 supplementation (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P < 0.001) (Figure 2B). The gene expression of markers of hepatic stellate cell (HSC) and Kupffer cell (KC) activation [α-smooth muscle actin 2 (Acta2), desmin (Des), and C-type lectin-domain family 4 (Clec4f)] was greater in recuperated offspring than in controls (effects of maternal diet, P < 0.05 for all listed variables) (Figure 2C). CoQ10 supplementation reduced Des (P < 0.001), Clec4f (P < 0.05), and cluster of differentiation 68 (Cd68) (P < 0.01; all were effects of CoQ10 supplementation) (Figure 2C). Acta2 was unchanged by CoQ10 supplementation (Figure 2C). Glial fibrillary acidic protein (Gfap) mRNA expression was not significantly different between groups (Figure 2C). Gene expression of matrix metalloproteinase (Mmp) 9 (Mmp9) was greater (P < 0.01) in recuperated offspring than in controls, and CoQ10 supplementation reduced (P < 0.001) recuperated Mmp9 mRNA to control levels (interaction between maternal diet and CoQ10 supplementation, P < 0.05) (Figure 2D). The gene expression of Mmp2 remained unaltered between groups (Figure 2D). Tissue inhibitors of Mmps (Timps) were also not significantly different between groups (Figure 2E).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on inflammatory markers Tgfb1 and Mcp1 mRNA (A), Tnf-α and Il-6 protein (B), mRNA expression of markers of HSC activation (Acta2, Des, Gfap) and KC activation (Clec4f, Cd68) (C), and Mmp (D) and Timp (E) mRNA in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. 
**,***C and R compared with CQ and RQ: **P < 0.01, ***P < 0.001. *,**C compared with R: *P < 0.05, **P < 0.01. ***R compared with RQ, P < 0.001. Statistical interactions: Tgfb1, P = 0.1; Mcp1, P = 0.3; Tnf-α, P = 0.5; Il-6, P = 0.002; Acta2, P = 0.5; Des, P = 0.3; Gfap, P = 0.5; Clec4f, P = 0.1; Cd68, P = 0.9; Mmp2, P = 0.9; Mmp9, P = 0.04; Timp1, P = 0.6; Timp2, P = 0.8. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. Acta2, α-smooth muscle actin 2; C, control; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; Des, desmin; Gfap, glial fibrillary acidic protein; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; R, recuperated; RQ, recuperated CoQ10; Tgfb1, transforming growth factor β1; Timp, tissue inhibitor of matrix metalloproteinase.", "Components of the NAD(P)H oxidase 2 (NOX-2) protein complex—Gp91phox (P < 0.05), P22phox (P < 0.05), and P47phox (P = 0.05)—were greater in recuperated offspring than in controls. P67phox was greater (P < 0.01) in recuperated offspring than in controls, and this effect was reduced (P < 0.001) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.01) (Figure 3A). Levels of Gp91phox (P = 0.08), P22phox (P < 0.001), and P47phox (P = 0.05) were reduced by CoQ10 supplementation (Figure 3A). CYP2E1 was not significantly different between control and recuperated offspring; however, CoQ10 supplementation reduced this concentration by 50% (effect of CoQ10 supplementation, P < 0.01) (Figure 3B). Complex I and complex IV electron transport chain activities were greater in recuperated offspring (effect of maternal diet, P = 0.05); however, complex II–III activity was unaffected. CoQ10 supplementation caused an increase in complex IV activity (effect of CoQ10 supplementation, P < 0.05) (Figure 3C). 
Mitochondrial DNA copy number was not significantly different between groups (control: 36 ± 4 copy numbers; recuperated: 36 ± 5 copy numbers) or by CoQ10 supplementation (control CoQ10: 41 ± 4 copy numbers; recuperated CoQ10: 35 ± 3 copy numbers). Concentrations of 4-hydroxynonenal (4-HNE) adducts were greater (P < 0.05) in recuperated offspring. There was a significant interaction between maternal diet and CoQ10 supplementation on 4-HNE concentrations (P < 0.05), reflecting the fact that CoQ10 supplementation reduced 4-HNE concentrations in recuperated offspring but had no effect in control offspring (Figure 3D). 3-nitrotyrosine concentrations were not significantly different between groups (Figure 3E).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on indexes of oxidative stress: components of the NOX-2 complex (A), CYP2E1 (B), ETC activities (C), 4-HNE (D), and 3-NT adducts (E) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. D: *Comparison of C with R and comparison of R with RQ, P < 0.05. A and B: **Comparison of C with R, P < 0.01; ***Comparison of R with RQ and comparison of C and R with CQ and RQ, P < 0.001. Statistical interactions: NOX-2 (Gp91phox, P = 0.5; P22phox, P = 0.3; P47phox, P = 0.4; P67phox, P = 0.01; P40phox, P = 0.3), CYP2E1 (P = 0.5), ETC activities (complex I, P = 0.9; complexes II–III, P = 0.6; complex IV, P = 0.9), 4-HNE (P = 0.04), and 3-NT (P = 0.8). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. 
C, control; CoQ10, coenzyme Q; CQ, control CoQ10; CS, citrate synthase; CYP2E1, cytochrome P450-2E1; ETC, electron transport chain; NOX-2, NAD(P)H oxidase 2; R, recuperated; RQ, recuperated CoQ10; 3-NT, 3-nitrotyrosine; 4-HNE, 4-hydroxynonenal.", "Nuclear factor erythroid 2–like 2 (Nrf2), heme oxygenase 1 (Hmox1), and glutathione synthetase (Gst) expression were not significantly different between control and recuperated offspring (Figure 4A, B). Glutathione peroxidase 1 (Gpx1) was reduced in recuperated offspring compared with controls, and this effect was prevented by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.05). NAD(P)H dehydrogenase, quinone 1 (Nqo1), was reduced in recuperated offspring (effect of maternal diet, P < 0.05) (Figure 4A, B). CoQ10 supplementation increased Nrf2 (P < 0.001), Hmox1 (P < 0.05), and Gst (P < 0.001) (Figure 4A, B); however Nqo1 expression was reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.001) (Figure 4A).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on mRNA expression of molecules involved in the NRF antioxidant defense pathway in 12-mo-old male rat livers. (A) Nrf2, Hmox1, and Nqo1 and (B) Gst and Gpx1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C and R compared with CQ and RQ: *P < 0.05, ***P < 0.001. *C compared with R and C compared with CQ, P < 0.05. ***R compared with RQ, P < 0.001. Statistical interactions: Nrf2, P = 0.9; Hmox1, P = 0.6; Nqo1, P = 0.4; Gst, P = 0.1; Gpx1, P = 0.01. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. 
C, control; CoQ10, coenzyme Q; CQ, control CoQ10; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Nqo1, NAD(P)H dehydrogenase, quinone 1; NRF, nuclear erythroid 2-related factor; Nrf2, nuclear factor, erythroid 2–like 2; R, recuperated; RQ, recuperated CoQ10.", "Greater serum insulin concentrations were observed in recuperated offspring than in controls (overall effect of maternal diet, P < 0.05) (Figure 5A). Concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.01) (Figure 5A). Protein expression of IR-β (P < 0.001), IRS-1 (P < 0.001), and Akt-1 (P < 0.05) was reduced in recuperated offspring compared with controls (all effects of maternal diet). Phosphoinositide-3-kinase-p110-β (p110-β), phosphoinositide-3-kinase-p85α (p85-α), and Akt-2 were not influenced by maternal diet (Figure 5B). CoQ10 supplementation increased p110-β (P < 0.05) and Akt-2 (P < 0.01) protein expression (Figure 5B). Fasting plasma glucose concentrations were not significantly different between groups (Table 3). Serum and hepatic triglyceride concentrations and serum cholesterol concentrations were not significantly different between control and recuperated offspring (Table 3). CoQ10 supplementation increased serum triglyceride and cholesterol concentrations (effects of CoQ10 supplementation, P < 0.05); however, hepatic triglyceride concentrations were unchanged (Table 3).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum insulin (A) and insulin signaling protein expression (B) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. Statistical interactions: insulin, P = 0.3; insulin signaling proteins (IR-β, P = 0.12; IRS-1, P = 0.5; p110β, P = 0.4; p85α, P = 0.3; Akt-1, P = 0.05; Akt-2, P = 0.5). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. 
C, control; CoQ10, coenzyme Q; CQ, control CoQ10; IR-β, insulin receptor β; IRS-1, insulin receptor substrate 1; p85α, phosphoinositide-3-kinase, p85-α; p110-β, phosphoinositide-3-kinase, p110-β; R, recuperated; RQ, recuperated CoQ10." ]
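The analyses above repeatedly use a 2-factor ANOVA with maternal diet and CoQ10 supplementation as independent variables. The following is an illustrative NumPy/SciPy sketch of that test for a balanced 2 × 2 design; it is not the Statistica procedure the authors used, and the function and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def two_way_anova_2x2(cells):
    """Balanced 2x2 fixed-effects two-factor ANOVA.

    cells: dict keyed by (a, b) with a, b in {0, 1} (e.g. a = maternal
    diet, b = CoQ10 supplementation), each value a sequence of n
    observations (one rat per litter). Returns {effect: (F, p)} for the
    two main effects 'A' and 'B' and the interaction 'AxB'.
    """
    n = len(cells[(0, 0)])
    grand = np.mean([x for v in cells.values() for x in v])
    means = {k: np.mean(v) for k, v in cells.items()}
    a_means = {a: np.mean([means[(a, 0)], means[(a, 1)]]) for a in (0, 1)}
    b_means = {b: np.mean([means[(0, b)], means[(1, b)]]) for b in (0, 1)}
    # Sums of squares for main effects, interaction, and within-cell error
    ss_a = 2 * n * sum((a_means[a] - grand) ** 2 for a in (0, 1))
    ss_b = 2 * n * sum((b_means[b] - grand) ** 2 for b in (0, 1))
    ss_ab = n * sum((means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                    for a in (0, 1) for b in (0, 1))
    ss_err = sum(((np.asarray(v) - means[k]) ** 2).sum()
                 for k, v in cells.items())
    df_err = 4 * (n - 1)
    out = {}
    for name, ss in (("A", ss_a), ("B", ss_b), ("AxB", ss_ab)):
        F = (ss / 1) / (ss_err / df_err)          # each effect has df = 1
        out[name] = (F, stats.f.sf(F, 1, df_err))
    return out
```

A significant 'AxB' p-value corresponds to the "interaction between maternal diet and CoQ10 supplementation" reported throughout the results, which is what licenses the post hoc comparisons.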
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
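The sample-size statement in the methods (10 litters/group, 80% power to detect a 20% difference at P < 0.05) implicitly assumes a level of between-litter variability. A normal-approximation sketch of that kind of calculation is below; the coefficient of variation is an assumed illustrative input, not a value taken from the authors' pilot data.

```python
import math
from scipy.stats import norm

def n_per_group(pct_diff, cv, alpha=0.05, power=0.80):
    """Two-sample sample size per group (normal approximation).

    pct_diff: detectable difference as a fraction of the mean (0.20 = 20%)
    cv: assumed coefficient of variation (SD/mean) of the outcome
    """
    d = pct_diff / cv                              # standardized effect size
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # critical + power quantiles
    return math.ceil(2 * (z / d) ** 2)
```

With cv = 0.20 (standardized effect d = 1.0) this yields the textbook n = 16/group; the reported n = 10/group corresponds to assuming a somewhat larger standardized effect (d ≈ 1.25).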
[ "INTRODUCTION", "METHODS", "Animal experimentation", "CoQ10 diet preparation", "CoQ10, lipid profile, glucose, and insulin analysis", "Histologic assessment", "Mitochondrial electron transport chain complex activities", "Protein analysis", "Gene expression", "Mitochondrial DNA copy number", "4-Hydroxynonenal and 3-nitrotyrosine analysis", "Statistical analysis", "RESULTS", "Anthropometric measurements", "Dietary CoQ10 supplementation leads to greater hepatic CoQ9 and CoQ10 concentrations", "CoQ10 supplementation ameliorates hepatic fibrosis and inflammation induced by poor maternal nutrition", "CoQ10 supplementation attenuates ROS induced by poor maternal nutrition", "CoQ10 supplementation alters hepatic antioxidant defense capacity", "CoQ10 supplementation alters expression of molecules involved in insulin and lipid metabolism", "DISCUSSION" ]
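The plasma-isolation spin in the methods is quoted in rpm (3000 rpm, 3 min, 4°C); the corresponding g-force depends on the rotor radius, which the text does not state. A small sketch of the standard rpm-to-RCF conversion follows; the 8-cm radius in the example is an assumed value, not the rotor actually used.

```python
def rcf_from_rpm(rpm, radius_cm):
    """Relative centrifugal force (x g) from rotor speed.

    Standard conversion: RCF = 1.118e-5 * r(cm) * rpm^2,
    i.e. (2*pi*rpm/60)^2 * r / g collapsed into one constant.
    """
    return 1.118e-5 * radius_cm * rpm ** 2

# Example: a hypothetical 8-cm rotor at 3000 rpm gives roughly 805 x g
```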
[ "In 1992, Hales and Barker (1) proposed the “thrifty phenotype hypothesis,” which postulated that in response to suboptimal in utero nutrition, the fetus alters its organ structure and adapts its metabolism to ensure immediate survival. This occurs through the sparing of vital organs (e.g., the brain) at the expense of others, such as the liver, thus increasing the risk of metabolic disease such as liver dysfunction in later life (2–4). This risk is exacerbated if a suboptimal uterine environment is followed by rapid postnatal growth (5, 6).\nNonalcoholic fatty liver disease (NAFLD)4 is the hepatic manifestation of the metabolic syndrome. Aspects of the metabolic syndrome, including NAFLD, have been linked to exposure to suboptimal early-life environments (7, 8). Although the incidence of NAFLD is high (9), the associated morbidity is low if there is no progression to hepatic fibrosis. Progression to fibrosis is indicative of the clinically important subtype of patients with NAFLD who have a high chance (20%) of developing frank liver cirrhosis and subsequent liver failure (10). At present, it is unknown why this progression occurs only in a subset of individuals. The development of an intervention that prevents these changes from accumulating could improve the prognosis of patients who develop NAFLD later in life.\nIncreased oxidative stress is a common consequence of developmental programming (11). Increased reactive oxygen species (ROS) have been strongly implicated in the etiology of hepatic fibrosis. Several animal studies have focused on antioxidant therapies to prevent the deleterious phenotypes of developmental programming (12–14); however, the doses used are not recommended for use in humans. In practice, a suboptimal intrauterine environment is often recognized retrospectively (i.e., after delivery). 
Thus, it is important to address potential beneficial effects of targeted postnatal interventions.\nCoenzyme Q (CoQ10) is a benzoquinone ring linked to an isoprenoid side-chain. The isoform containing 9 isoprenoid units (CoQ9) is most abundant in rodents, whereas CoQ10 (10 isoprenoid units) is the most common in humans. When oxidized, CoQ10 shuttles electrons between mitochondrial complexes I and III and complexes II and III. Reduced CoQ10 is the most abundant endogenous cellular antioxidant (15) and is a safe and effective therapeutic antioxidant (16, 17). We have also shown that postnatal CoQ10 supplementation prevents developmentally programmed accelerated aging in rat aortas (18) and hearts (19).\nLiver is one of a few tissues to take up dietary CoQ10 (20), and CoQ10 supplementation has been previously investigated as a potential therapy to prevent the progression of NAFLD in humans (21). In this study, we aimed to 1) investigate the effects of poor maternal nutrition and rapid postnatal catch-up growth on hepatic CoQ9 concentrations and molecular pathways leading to proinflammatory changes and development of fibrosis and 2) determine whether a clinically relevant dose of dietary CoQ10 could correct any observed hepatic fibrosis.", " Animal experimentation All procedures involving animals were conducted under the British Animals (Scientific Procedures) Act (1986) and underwent ethical review by the University of Cambridge Animal Welfare and Ethical Review Board. Stock animals were purchased from Charles River, and dams were produced from in-house breeding from stock animals. \nPregnant Wistar rats were maintained at room temperature in specific-pathogen-free housing with the use of individually ventilated cages with environmental enrichment. The dams were maintained on a 20% protein diet (control) or an isocaloric low-protein (8%) diet, as previously described (22). Access to diets and water was provided ad libitum. 
All rats used in this study were specific-pathogen-free housed individually at 22°C on a controlled 12:12-h light-dark cycle. Diets were purchased from Arie Blok. \nThe day of birth was recorded as day 1 of postnatal life. Pups born to low-protein-diet–fed dams were cross-fostered to control-fed mothers on postnatal day 3 to create a recuperated litter. Each recuperated litter was standardized to 4 male pups at random to maximize their plane of nutrition. The control group consisted of the offspring of dams fed the 20%-protein diet and suckled by 20% protein–fed dams. Each control litter was culled to 8 pups as a standard. To minimize stress to the pups when cross-fostered, they were transferred with some of their own bedding. Body weights were recorded at postnatal days 3 and 21 and at 12 mo. At 21 d, 2 males per litter were weaned onto standard laboratory feed pellets (Special Diet Services) and the other 2 were weaned onto the same diet supplemented with CoQ10 to give a dose of 1 mg/kg body weight per day. Diets were given in the home cage. Rat pups were fed these diets until 12 mo of age, at which time all rats were killed by carbon dioxide asphyxiation at ∼1000. Post mortem, liver tissue was removed, weighed, and snap-frozen in liquid nitrogen and then stored at −80°C until analysis. A further portion of liver tissue (the same area for each sample) was removed and fixed in formalin for histologic assessment. \nFor all measurements, 1 pup per litter was used, thus “n” represents number of litters. Ten litters per group were used in this study based on power calculation with the use of previous data from our studies of RNA expression in postnatal tissues from programmed animals. Rat numbers were calculated to give 80% power to detect a 20% difference between groups at the P < 0.05 level. 
Only male rats were used in this study.\n CoQ10 diet preparation A dose of 1 mg CoQ10/kg body weight per day was used in this study, which was administered via the diet (18). This was achieved by appropriate CoQ10 supplementation of laboratory feed pellets, as we described previously (18, 19). Diet was prepared twice a week throughout the study.\n CoQ10, lipid profile, glucose, and insulin analysis Total liver ubiquinone (CoQ9 and CoQ10) was quantified by reverse-phase HPLC with UV detection at 275 nm, as described previously (18, 19). Serum was obtained as detailed previously (18), and blood from the tail vein was collected into EDTA tubes and centrifuged for 3 min at 3000 rpm at 4°C to isolate plasma. Fasted blood glucose measurements were obtained by using a glucose analyzer (Hemocue). The serum lipid profile and fasted plasma insulin analyses were performed by using an auto-analyzer (the Wellcome Trust–supported Cambridge Mouse Laboratory). Liver triglyceride concentrations were determined by using the Folch assay (23). 
Briefly, liver samples were homogenized in a 2:1 ratio of chloroform:methanol. The distinct lipid phase was removed after centrifugation, and lipid weight was quantified after the solvent was removed by evaporation.\n Histologic assessment Liver samples were fixed in formalin, paraffin-embedded, and sectioned to a 6-μm thickness by using a microtome. Picro Sirius Red staining was used to stain for fibrosis. Cell-D software (Olympus Soft Imaging Solutions) was used to quantify the thickness of fibrosis around all visible hepatic vessels (including all arteries and veins) from 1 section per sample. This sample was taken at the same point (20 sections for each sample) by using a nonbiased grid sampling method. All analyses were performed at 10× magnification by using an Olympus microscope (Olympus Soft Imaging Solutions). All analyses were performed blinded.\n Mitochondrial electron transport chain complex activities Activities of complex I [NAD(H): ubiquinone reductase; Enzyme Commission (EC) 1.6.5.3], complexes II–III (succinate: cytochrome c reductase; EC 1.3.5.1 + EC 1.10.2.2), and complex IV (cytochrome oxidase; EC 1.9.3.1) as well as citrate synthase (EC 2.3.3.1) were assayed as described previously (18, 19).\n Protein analysis Protein was extracted and assayed as described previously (18). Protein (20 μg) was loaded onto 10%, 12%, or 15% polyacrylamide gels, dependent on the molecular weight of the protein to be measured. The samples were electrophoresed and transferred to polyvinylidene fluoride membranes (18) and detected with the use of the following dilutions of primary antibody: insulin receptor substrate 1 (IRS-1; 1:1000; Millipore); phosphoinositide-3-kinase, p110-β (p110-β); insulin receptor β (IR-β); protein kinase-ζ (1:200; Santa-Cruz); Akt-1 and Akt-2 (1:1000; New England Biolabs); phosphoinositide-3-kinase, p85-α (p85-α; 1:5000; Upstate); cytochrome P450-2E1 (CYP2E1) and Il-6 (1:1000; Abcam); Tnf-α (1:1000; Cell Signaling Technology); and anti-rabbit IgG secondary antibodies (1:20,000; Jackson Immunoresearch Laboratories). 
Equal protein loading was confirmed by staining electrophoresed gels with Coomassie Blue (Bio-Rad) to visualize total protein.\nProtein was extracted and assayed as described previously (18). Protein (20 μg) was loaded onto 10%, 12%, or 15% polyacrylamide gels, dependent on the molecular weight of the protein to be measured. The samples were electrophoresed and transferred to polyvinylidene fluoride membranes (18) and detected with the use of the following dilutions of primary antibody: insulin receptor substrate 1 (IRS-1; 1:1000; Millipore); phosphoinositide-3-kinase, p110-β (p110-β); insulin receptor β (IR-β); protein kinase-ζ (1:200; Santa-Cruz); Akt-1 and Akt-2 (1:1000; New England Biolabs); phosphoinositide-3-kinase, p85-α (p85-α 1:5000; Upstate); cytochrome P450-2E1 (CYP2E1) and Il-6 (1:1000; Abcam); Tnf-α (1:1000; Cell Signaling Technology); and anti-rabbit IgG secondary antibodies (1:20,000; Jackson Immunoresearch Laboratories). Equal protein loading was confirmed by staining electrophoresed gels with Coomassie Blue (Bio-Rad) to visualize total protein.\n Gene expression RNA was extracted by using a miReasy mini kit (Qiagen) following the manufacturer’s instructions, as detailed previously (19). A DNase digestion step was performed to ensure no genetic DNA contamination. RNA (1 μg) was used to synthesize complementary DNA by using oligo-dT-adaptor primers and Moloney murine leukemia virus reverse transcriptase (Promega). Gene expression was determined by using custom-designed primers (Sigma) and SYBR Green reagents (Applied Biosystems). Primer sequences are presented in Table 1. Quantification of gene expression was performed with the use of a Step One Plus reverse transcriptase–polymerase chain reaction machine (Applied Biosystems). Equal efficiency of the reverse transcription of RNA from all groups was confirmed through quantification of expression of the housekeeping gene β-actin (Actb). 
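Absolute copy numbers in SYBR Green assays such as this one are typically read off a log-linear standard curve. The sketch below is illustrative only: the slope and intercept are hypothetical values, not parameters from this study.

```python
def copies_from_ct(ct, slope, intercept):
    """Convert a qPCR Ct value to an absolute copy number via a standard
    curve of the usual form Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical curve: a slope of -3.32 corresponds to ~100% amplification
# efficiency (template doubles each cycle); the intercept is assay-specific.
slope, intercept = -3.32, 40.0
print(copies_from_ct(33.36, slope, intercept))  # ~100 copies
```

A steeper or shallower slope than about −3.32 would indicate sub- or super-optimal amplification efficiency, which is why standard curves are run per assay.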
The expression of Actb did not differ between groups (effect of maternal diet, P = 0.9; effect of CoQ10 supplementation, P = 0.8; control: 153 ± 32; recuperated: 144 ± 24; control CoQ10: 143 ± 12; and recuperated CoQ10: 157 ± 21 average copy numbers).
Primers1
Acta2, α-smooth muscle actin 2; Actb, β-actin; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; Des, desmin; F, forward; Gfap, glial fibrillary acidic protein; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; Nqo1, NAD(P)H dehydrogenase, quinone 1; Nrf2, nuclear factor, erythroid 2–like 2; R, reverse; Tgfb, transforming growth factor β; Timp, tissue inhibitor of matrix metalloproteinases.
Mitochondrial DNA copy number
Total DNA was extracted using a phenol/chloroform extraction protocol (24). Mitochondrial DNA copy number analysis was performed as described previously (25).
4-Hydroxynonenal and 3-nitrotyrosine analysis
Protein nitrotyrosination was assayed by using a Nitrotyrosine ELISA kit (MitoSciences), according to the manufacturer’s instructions. Lipid peroxidation was analyzed by using an OxiSelect HNE Adduct ELISA kit (Cambridge Biosciences), according to the manufacturer’s instructions.
Statistical analysis
Data were analyzed by using a 2-factor ANOVA with maternal diet and CoQ10 supplementation as the independent variables.
Post hoc testing was carried out when appropriate and is indicated in the text accordingly. Data are represented as means ± SEMs. All statistical analyses were performed with the use of Statistica 7 software (Statsoft), and for all tests, P values <0.05 were considered significant. Data were checked for normal distribution. In all cases, “n” refers to the number of litters (with 1 rat used from each litter).
Experimental animals
All procedures involving animals were conducted under the British Animals (Scientific Procedures) Act (1986) and underwent ethical review by the University of Cambridge Animal Welfare and Ethical Review Board. Stock animals were purchased from Charles River, and dams were produced from in-house breeding from stock animals.
Pregnant Wistar rats were maintained at room temperature in specific-pathogen-free housing with the use of individually ventilated cages with environmental enrichment. The dams were maintained on a 20% protein diet (control) or an isocaloric low-protein (8%) diet, as previously described (22). Access to diets and water was provided ad libitum. All rats used in this study were housed individually under specific-pathogen-free conditions at 22°C on a controlled 12:12-h light-dark cycle. Diets were purchased from Arie Blok.
The day of birth was recorded as day 1 of postnatal life. Pups born to low-protein-diet–fed dams were cross-fostered to control-fed mothers on postnatal day 3 to create a recuperated litter.
Each recuperated litter was standardized to 4 male pups at random to maximize their plane of nutrition. The control group consisted of the offspring of dams fed the 20%-protein diet and suckled by 20% protein–fed dams. Each control litter was culled to 8 pups as a standard. To minimize stress to the pups when cross-fostered, they were transferred with some of their own bedding. Body weights were recorded at postnatal days 3 and 21 and at 12 mo. At 21 d, 2 males per litter were weaned onto standard laboratory feed pellets (Special Diet Services) and the other 2 were weaned onto the same diet supplemented with CoQ10 to give a dose of 1 mg/kg body weight per day. Diets were given in the home cage. Rat pups were fed these diets until 12 mo of age, at which time all rats were killed by carbon dioxide asphyxiation at ∼1000. Post mortem, liver tissue was removed, weighed, and snap-frozen in liquid nitrogen and then stored at −80°C until analysis. A further portion of liver tissue (the same area for each sample) was removed and fixed in formalin for histologic assessment.
For all measurements, 1 pup per litter was used; thus, “n” represents the number of litters. Ten litters per group were used in this study based on a power calculation with the use of previous data from our studies of RNA expression in postnatal tissues from programmed animals. Rat numbers were calculated to give 80% power to detect a 20% difference between groups at the P < 0.05 level. Only male rats were used in this study.
CoQ10 supplementation
A dose of 1 mg CoQ10/kg body weight per day was used in this study, which was administered via the diet (18). This was achieved by appropriate CoQ10 supplementation of laboratory feed pellets, as we described previously (18, 19). Diet was prepared twice a week throughout the study.
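The in-feed concentration needed to deliver a body-weight-based dose follows from simple arithmetic on body weight and daily feed intake. A minimal sketch; the 400-g body weight and 20-g/d intake below are illustrative assumptions, not values reported in the study.

```python
def feed_concentration(dose_mg_per_kg_bw, body_weight_g, feed_intake_g_per_day):
    """mg of CoQ10 required per kg of feed so that a rat of the given body
    weight, eating the given daily amount of feed, receives the target dose
    (mg per kg body weight per day)."""
    # (dose * bw_g / 1000) mg/d, delivered in (intake_g / 1000) kg of feed/d,
    # which simplifies to dose * bw_g / intake_g.
    return dose_mg_per_kg_bw * body_weight_g / feed_intake_g_per_day

# Illustrative: a 400-g rat eating 20 g feed/d on the 1-mg/kg dose
# needs feed containing 20 mg CoQ10 per kg.
print(feed_concentration(1.0, 400, 20))  # 20.0 mg per kg feed
```

Because intake scales roughly with body weight, dosing via the diet keeps the per-kilogram dose approximately constant as the animals grow.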
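The balanced 2-factor ANOVA described above (maternal diet × CoQ10 supplementation) partitions variance into two main effects and their interaction. A minimal sketch with made-up numbers, not data from the study; a statistics package such as the authors' Statistica would additionally convert the F ratios to P values.

```python
def two_way_anova(cells):
    """Balanced two-factor ANOVA. `cells` maps (factor_A_level, factor_B_level)
    to an equal-sized list of replicate values; returns F ratios for the two
    main effects and their interaction."""
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    reps = len(next(iter(cells.values())))
    mean = lambda xs: sum(xs) / len(xs)
    grand = mean([v for vs in cells.values() for v in vs])

    def level_mean(axis, level):
        return mean([v for key, vs in cells.items() if key[axis] == level for v in vs])

    # Sums of squares: main effects, cells, interaction (by subtraction), error.
    ss_a = reps * len(b_levels) * sum((level_mean(0, a) - grand) ** 2 for a in a_levels)
    ss_b = reps * len(a_levels) * sum((level_mean(1, b) - grand) ** 2 for b in b_levels)
    ss_cells = reps * sum((mean(vs) - grand) ** 2 for vs in cells.values())
    ss_ab = ss_cells - ss_a - ss_b
    ss_err = sum((v - mean(vs)) ** 2 for vs in cells.values() for v in vs)

    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    ms_err = ss_err / (len(cells) * (reps - 1))
    return {"F_diet": (ss_a / df_a) / ms_err,
            "F_supp": (ss_b / df_b) / ms_err,
            "F_interaction": (ss_ab / (df_a * df_b)) / ms_err}

# Made-up 2x2 example with 2 litters per cell: a diet effect and a
# diet-by-supplement interaction, but no overall supplement effect.
data = {("control", "chow"): [10, 12], ("control", "coq10"): [12, 14],
        ("recuperated", "chow"): [16, 18], ("recuperated", "coq10"): [12, 14]}
print(two_way_anova(data))  # F_diet=9.0, F_supp=1.0, F_interaction=9.0
```

A significant interaction term, as reported here for several outcomes, is what licenses the post hoc comparisons of individual groups.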
Anthropometric measurements
Recuperated offspring were born smaller than control rats (6.3 ± 0.2 compared with 7.4 ± 0.2 g; P < 0.001) and underwent rapid postnatal catch-up growth so that their weights were similar to those of the control offspring at postnatal day 21 (52.2 ± 0.9 compared with 50.7 ± 1.2 g). At 12 mo of age, there was no effect of maternal diet or CoQ10 supplementation on body weights or absolute liver weights (Table 2).
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on rat pup body and liver weights1
Values are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. No significant differences between groups were reported. CoQ10, coenzyme Q.
Dietary CoQ10 supplementation leads to greater hepatic CoQ9 and CoQ10 concentrations
Recuperated hepatic CoQ9 and CoQ10 concentrations were unaltered compared with those in control rats (Table 3).
However, CoQ9 and CoQ10 concentrations were greater (P < 0.01) when supplemented with CoQ10 (Table 3).
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum and plasma measurements1
Values are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. The overall effects of maternal diet and interaction between maternal diet and CoQ10 supplementation were not significant for any of the variables reported in the table. *,**Effect of CoQ10: *P < 0.05, **P < 0.01. CoQ10, coenzyme Q.
CoQ10 supplementation ameliorates hepatic fibrosis and inflammation induced by poor maternal nutrition
Recuperated offspring showed greater (P < 0.001) collagen deposition (Figure 1A) than did control offspring (Figure 1B, C). CoQ10 supplementation prevented this effect of maternal diet (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1A, D, E). Collagen type 1, α1 (Col1a1), mRNA expression was also greater (P < 0.05) in recuperated offspring (Figure 1F) and was reduced (P < 0.05) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1F).
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on hepatic fibrosis, quantified by measurement of collagen (A), in 12-mo-old male rat livers (B–E). (F) mRNA expression of Col1a1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C compared with R and R compared with RQ: *P < 0.05, ***P < 0.001. Interaction between in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation (statistical interaction): P = 0.001 (A and F). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; R, recuperated; RQ, recuperated CoQ10.
Recuperated and control offspring had similar expression of the profibrotic cytokine transforming growth factor β1 (Tgfb1), and a trend for greater expression of monocyte chemoattractant protein 1 (Mcp1; effect of maternal diet, P = 0.06) was observed in recuperated offspring (Figure 2A).
CoQ10 supplementation reduced Des (P < 0.001), Clec4f (P < 0.05), and cluster of differentiation 68 (Cd68) (P < 0.01; all were effects of CoQ10 supplementation) (Figure 2C). Acta2 was unchanged by CoQ10 supplementation (Figure 2C). Glial fibrillary acidic protein (Gfap) mRNA expression was not significantly different between groups (Figure 2C). Gene expression of matrix metalloproteinase (Mmp) 9 (Mmp9) was greater (P < 0.01) in recuperated offspring than in controls, and CoQ10 supplementation reduced (P < 0.001) recuperated Mmp9 mRNA to control levels (interaction between maternal diet and CoQ10 supplementation, P < 0.05) (Figure 2D). The gene expression of Mmp2 remained unaltered between groups (Figure 2D). Tissue inhibitors of Mmps (Timps) were also not significantly different between groups (Figure 2E).
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on inflammatory markers Tgfb1 and Mcp1 mRNA (A), Tnf-α and Il-6 protein (B), mRNA expression of markers of HSC activation (Acta2, Des, Gfap) and KC activation (Clec4f, Cd68) (C), and Mmp (D) and Timp (E) mRNA in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. **,***C and R compared with CQ and RQ: **P < 0.01, ***P < 0.001. *,**C compared with R: *P < 0.05, **P < 0.01. ***R compared with RQ, P < 0.001. Statistical interactions: Tgfb1, P = 0.1; Mcp1, P = 0.3; Tnf-α, P = 0.5; Il-6, P = 0.002; Acta2, P = 0.5; Des, P = 0.3; Gfap, P = 0.5; Clec4f, P = 0.1; Cd68, P = 0.9; Mmp2, P = 0.9; Mmp9, P = 0.04; Timp1, P = 0.6; Timp2, P = 0.8. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate.
Acta2, α-smooth muscle actin 2; C, control; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; Des, desmin; Gfap, glial fibrillary acidic protein; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; R, recuperated; RQ, recuperated CoQ10; Tgfb1, transforming growth factor β1; Timp, tissue inhibitor of matrix metalloproteinase.
CoQ10 supplementation attenuates ROS induced by poor maternal nutrition
Components of the NAD(P)H oxidase 2 (NOX-2) protein complex—Gp91phox (P < 0.05), P22phox (P < 0.05), and P47phox (P = 0.05)—were greater in recuperated offspring than in controls. P67phox was greater (P < 0.01) in recuperated offspring than in controls, and this effect was reduced (P < 0.001) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.01) (Figure 3A).
Levels of Gp91phox (P = 0.08), P22phox (P < 0.001), and P47phox (P = 0.05) were reduced by CoQ10 supplementation (Figure 3A). CYP2E1 was not significantly different between control and recuperated offspring; however, CoQ10 supplementation reduced this concentration by 50% (effect of CoQ10 supplementation, P < 0.01) (Figure 3B). Complex I and complex IV electron transport chain activities were greater in recuperated offspring (effect of maternal diet, P = 0.05); however, complex II–III activity was unaffected. CoQ10 supplementation caused an increase in complex IV activity (effect of CoQ10 supplementation, P < 0.05) (Figure 3C). Mitochondrial DNA copy number was not significantly different between groups (control: 36 ± 4 copy numbers; recuperated: 36 ± 5 copy numbers) or by CoQ10 supplementation (control CoQ10: 41 ± 4 copy numbers; recuperated CoQ10: 35 ± 3 copy numbers). Concentrations of 4-hydroxynonenal (4-HNE) adducts were greater (P < 0.05) in recuperated offspring. There was a significant interaction between maternal diet and CoQ10 supplementation on 4-HNE concentrations (P < 0.05), reflecting the fact that CoQ10 supplementation reduced 4-HNE concentrations in recuperated offspring but had no effect in control offspring (Figure 3D). 3-nitrotyrosine concentrations were not significantly different between groups (Figure 3E).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on indexes of oxidative stress: components of the NOX-2 complex (A), CYP2E1 (B), ETC activities (C), 4-HNE (D), and 3-NT adducts (E) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. D: *Comparison of C with R and comparison of R with RQ, P < 0.05. A and B: **Comparison of C with R, P < 0.01; ***Comparison of R with RQ and comparison of C and R with CQ and RQ, P < 0.001. 
Statistical interactions: NOX-2 (Gp91phox, P = 0.5; P22phox, P = 0.3; P47phox, P = 0.4; P67phox, P = 0.01; P40phox, P = 0.3), CYP2E1 (P = 0.5), ETC activities (complex I, P = 0.9; complexes II–III, P = 0.6; complex IV, P = 0.9), 4-HNE (P = 0.04), and 3-NT (P = 0.8). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; CS, citrate synthase; CYP2E1, cytochrome P450-2E1; ETC, electron transport chain; NOX-2, NAD(P)H oxidase 2; R, recuperated; RQ, recuperated CoQ10; 3-NT, 3-nitrotyrosine; 4-HNE, 4-hydroxynonenal.
CoQ10 supplementation alters hepatic antioxidant defense capacity
Nuclear factor erythroid 2–like 2 (Nrf2), heme oxygenase 1 (Hmox1), and glutathione synthetase (Gst) expression were not significantly different between control and recuperated offspring (Figure 4A, B). Glutathione peroxidase 1 (Gpx1) was reduced in recuperated offspring compared with controls, and this effect was prevented by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.05).
NAD(P)H dehydrogenase, quinone 1 (Nqo1), was reduced in recuperated offspring (effect of maternal diet, P < 0.05) (Figure 4A, B). CoQ10 supplementation increased Nrf2 (P < 0.001), Hmox1 (P < 0.05), and Gst (P < 0.001) (Figure 4A, B); however, Nqo1 expression was reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.001) (Figure 4A).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on mRNA expression of molecules involved in the NRF antioxidant defense pathway in 12-mo-old male rat livers. (A) Nrf2, Hmox1, and Nqo1 and (B) Gst and Gpx1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C and R compared with CQ and RQ: *P < 0.05, ***P < 0.001. *C compared with R and C compared with CQ, P < 0.05. ***R compared with RQ, P < 0.001. Statistical interactions: Nrf2, P = 0.9; Hmox1, P = 0.6; Nqo1, P = 0.4; Gst, P = 0.1; Gpx1, P = 0.01. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Nqo1, NAD(P)H dehydrogenase, quinone 1; NRF, nuclear erythroid 2-related factor; Nrf2, nuclear factor, erythroid 2–like 2; R, recuperated; RQ, recuperated CoQ10.\n CoQ10 supplementation alters expression of molecules involved in insulin and lipid metabolism Greater serum insulin concentrations were observed in recuperated offspring than in controls (overall effect of maternal diet, P < 0.05) (Figure 5A). Concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.01) (Figure 5A). Protein expression of IR-β (P < 0.001), IRS-1 (P < 0.001), and Akt-1 (P < 0.05) was reduced in recuperated offspring compared with controls (all effects of maternal diet). Phosphoinositide-3-kinase-p110-β (p110-β), phosphoinositide-3-kinase-p85α (p85-α), and Akt-2 were not influenced by maternal diet (Figure 5B). CoQ10 supplementation increased p110-β (P < 0.05) and Akt-2 (P < 0.01) protein expression (Figure 5B).
Fasting plasma glucose concentrations were not significantly different between groups (Table 3). Serum and hepatic triglyceride concentrations and serum cholesterol concentrations were not significantly different between control and recuperated offspring (Table 3). CoQ10 supplementation increased serum triglyceride and cholesterol concentrations (effects of CoQ10 supplementation, P < 0.05); however, hepatic triglyceride concentrations were unchanged (Table 3).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum insulin (A) and insulin signaling protein expression (B) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. Statistical interactions: insulin, P = 0.3; insulin signaling proteins (IR-β, P = 0.12; IRS-1, P = 0.5; p110β, P = 0.4; p85α, P = 0.3; Akt-1, P = 0.05; Akt-2, P = 0.5). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; IR-β, insulin receptor β; IRS-1, insulin receptor substrate 1; p85-α, phosphoinositide-3-kinase p85-α; p110-β, phosphoinositide-3-kinase p110-β; R, recuperated; RQ, recuperated CoQ10.", "Recuperated offspring were born smaller than control rats (6.3 ± 0.2 compared with 7.4 ± 0.2 g; P < 0.001) and underwent rapid postnatal catch-up growth so that their weights were similar to those of the control offspring at postnatal day 21 (52.2 ± 0.9 compared with 50.7 ± 1.2 g). At 12 mo of age, there was no effect of maternal diet or CoQ10 supplementation on body weights or absolute liver weights (Table 2).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on rat pup body and liver weights1\nValues are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. No significant differences between groups were reported. CoQ10, coenzyme Q.", "Recuperated hepatic CoQ9 and CoQ10 concentrations were unaltered compared with those in control rats (Table 3).
However, CoQ9 and CoQ10 concentrations were greater (P < 0.01) in CoQ10-supplemented offspring (Table 3).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum and plasma measurements1\nValues are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. The overall effects of maternal diet and interaction between maternal diet and CoQ10 supplementation were not significant for any of the variables reported in the table. *,**Effect of CoQ10: *P < 0.05, **P < 0.01. CoQ10, coenzyme Q.", "Recuperated offspring showed greater (P < 0.001) collagen deposition (Figure 1A) than did control offspring (Figure 1B, C). CoQ10 supplementation prevented this effect of maternal diet (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1A, D, E). Collagen type 1, α1 (Col1a1), mRNA expression was also greater (P < 0.05) in recuperated offspring (Figure 1F) and was reduced (P < 0.05) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1F). \nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on hepatic fibrosis, quantified by measurement of collagen (A), in 12-mo-old male rat livers (B–E). (F) mRNA expression of Col1a1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C compared with R and R compared with RQ: *P < 0.05, ***P < 0.001. Interaction between in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation (statistical interaction): P = 0.001 (A and F). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; R, recuperated; RQ, recuperated CoQ10.
\nRecuperated and control offspring had similar expression of the profibrotic cytokine transforming growth factor β1 (Tgfb1), and a trend for greater expression of monocyte chemoattractant protein 1 (Mcp1; effect of maternal diet, P = 0.06) was observed in recuperated offspring (Figure 2A). CoQ10 supplementation reduced the concentrations of Tgfb1 (effect of CoQ10 supplementation, P < 0.001) and Mcp1 (effect of CoQ10 supplementation, P = 0.07) (Figure 2A). The cytokine Tnf-α was greater in recuperated offspring than in controls (effect of maternal diet, P < 0.05), and concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.05). Concentrations of Il-6 were greater (P < 0.05) in recuperated offspring than in controls, and this effect was prevented by CoQ10 supplementation (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P < 0.001) (Figure 2B). The gene expression of markers of hepatic stellate cell (HSC) and Kupffer cell (KC) activation [α-smooth muscle actin 2 (Acta2), desmin (Des), and C-type lectin-domain family 4 (Clec4f)] was greater in recuperated offspring than in controls (effects of maternal diet, P < 0.05 for all listed variables) (Figure 2C). CoQ10 supplementation reduced Des (P < 0.001), Clec4f (P < 0.05), and cluster of differentiation 68 (Cd68) (P < 0.01; all were effects of CoQ10 supplementation) (Figure 2C). Acta2 was unchanged by CoQ10 supplementation (Figure 2C). Glial fibrillary acidic protein (Gfap) mRNA expression was not significantly different between groups (Figure 2C). Gene expression of matrix metalloproteinase (Mmp) 9 (Mmp9) was greater (P < 0.01) in recuperated offspring than in controls, and CoQ10 supplementation reduced (P < 0.001) recuperated Mmp9 mRNA to control levels (interaction between maternal diet and CoQ10 supplementation, P < 0.05) (Figure 2D). The gene expression of Mmp2 remained unaltered between groups (Figure 2D).
Tissue inhibitors of Mmps (Timps) were also not significantly different between groups (Figure 2E).\nEffect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on inflammatory markers Tgfb1 and Mcp1 mRNA (A), Tnf-α and Il-6 protein (B), mRNA expression of markers of HSC activation (Acta2, Des, Gfap) and KC activation (Clec4f, Cd68) (C), and Mmp (D) and Timp (E) mRNA in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. **,***C and R compared with CQ and RQ: **P < 0.01, ***P < 0.001. *,**C compared with R: *P < 0.05, **P < 0.01. ***R compared with RQ, P < 0.001. Statistical interactions: Tgfb1, P = 0.1; Mcp1, P = 0.3; Tnf-α, P = 0.5; Il-6, P = 0.002; Acta2, P = 0.5; Des, P = 0.3; Gfap, P = 0.5; Clec4f, P = 0.1; Cd68, P = 0.9; Mmp2, P = 0.9; Mmp9, P = 0.04; Timp1, P = 0.6; Timp2, P = 0.8. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. Acta2, α-smooth muscle actin 2; C, control; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; Des, desmin; Gfap, glial fibrillary acidic protein; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; R, recuperated; RQ, recuperated CoQ10; Tgfb1, transforming growth factor β1; Timp, tissue inhibitor of matrix metalloproteinase.", "Our findings that a maternal low-protein diet in utero followed by accelerated postnatal growth (recuperated) confers a higher risk of oxidative damage, proinflammatory changes, and liver fibrosis suggest that the early environment is an important determinant of an individual’s risk of developing complications of fatty liver disease. The potential for progression from NAFLD to nonalcoholic steatohepatitis and finally to hepatitis, fibrosis, and cirrhosis is well described. However, it has not previously been possible to identify patients who are at high risk of these changes. Our finding that recuperated rats developed more hepatic fibrosis than did controls indicates that the early environment plays a central role in the risk of liver disease in later life. Identifying the early environment as influential in the propensity to hepatic inflammation and fibrosis provides valuable new insight into predetermining individuals at increased risk from hepatic manifestations of the metabolic syndrome.\nHSCs are the main source of extracellular matrix formation during hepatic fibrosis (26). The key step in inducing fibrosis during liver injury is the transformation of quiescent HSCs to activated HSCs, which differentiate into myofibroblasts (26). We found increased expression of Acta2 and Des in recuperated offspring. Acta2 is expressed in myofibroblasts of damaged liver (27) and hence is a good marker of HSC activation. In rats, HSC activation and proliferation correlate with a high expression of Des and are found in HSCs of acutely injured liver (27). KC infiltration and activation play a prominent role in HSC activation. Increased KC infiltration coincides with the activation of HSC markers such as Acta2 (28).
Clec4f (a unique KC receptor for glycoproteins and therefore a good marker of KC activation) was increased in livers of recuperated offspring.\nIncreased concentrations of proinflammatory cytokines are also crucial in initiating HSC activation. Hepatic protein expression of Tnf-α and Il-6 was greater in recuperated offspring, suggesting that inflammation plays a role in the HSC activation and consequent hepatic fibrosis observed in our model. Tgfb1 mRNA levels were not changed in recuperated offspring; however, we cannot discount the possibility that Tgfb1 is upregulated at the protein level. Mcp1 (a chemokine that acts as a chemoattractant for HSCs) was also upregulated in recuperated livers. Mmps are also associated with hepatic fibrosis (29). Mmp9 expression was upregulated in recuperated livers. Mmp9 is prominent in scar areas of active fibrosis, and treatment with a profibrotic agent can increase its expression, with peak expression coinciding with induction of inflammatory cytokines (29). Mmp2 expression was unaltered between groups. Because Mmp expression is an early event in wound healing, the time window for Mmp2 elevation may have been missed, and any difference may be apparent only in younger animals.\nA further driving factor in HSC activation and fibrosis is increased ROS, which can be generated by Tnf-α, Il-6 (30), KCs (28), and the mitochondrial electron transport chain. We found increased ROS in the context of increased lipid peroxidation [which is known to increase in liver disease (31)] and greater expression of the NOX-2 components (Gp91phox, P22phox, P47phox, and P67phox), a major source of hepatic ROS production, which have been observed in hepatic fibrosis (32, 33). Complex I activity, a predominant generator of ROS (34), was greater in recuperated livers. Decreased antioxidant defense capacity was evidenced by a reduction in Gpx1, a peroxidase responsible for the conversion of H2O2 into H2O and O2.
Increased concentrations of cellular H2O2, due to Gpx1 depletion, could cause accumulation of the hydroxyl radical, a free radical that can directly increase lipid peroxidation.\nAccumulation of hepatic triglycerides also plays a role in hepatic fibrosis (35); however, neither liver nor plasma triglycerides were altered in recuperated rats. This may be explained by the fact that recuperated offspring are fed a feed pellet diet and do not display an obesogenic phenotype. This in itself is interesting, because it shows that the observed deleterious liver phenotypes develop in a physiologic environment that had been influenced only by developmental programming per se, and not by obesity.\nInsulin resistance and hyperinsulinemia are also major contributors to liver fibrosis (36) and are inherently linked to increased oxidative stress. Recuperated offspring had whole-body insulin resistance as indicated by hyperinsulinemia. The hyperinsulinemia was associated with hepatic insulin signaling protein dysregulation, as shown by the downregulation of IR-β, IRS-1, and Akt-1.\nImportantly, we identified an effective means of arresting the pathologic progression of NAFLD: postnatal supplementation with CoQ10. In recuperated offspring, CoQ10 supplementation reduced markers of HSC and KC activation, the accumulation of ROS, and the deposition of collagen around the hepatic vessels. This agrees with a study in which 1% dietary CoQ10 supplementation reduced hepatic NOX expression in mice fed a high-fat diet (37). CoQ10 supplementation also increased activity of complex IV, in keeping with in vitro studies (38). CoQ10 supplementation decreased Tnf-α, Il-6, Tgfb1, and Mcp1, suggesting that CoQ10 also can reduce inflammatory changes in liver, which is consistent with studies in mouse liver (37) and human blood (39).
Our data therefore recapitulate CoQ10’s function, both as a potent antioxidant (15) and as an anti-inflammatory agent.\nCoQ10 supplementation also prevented hyperinsulinemia in recuperated rats; however, hepatic insulin signaling protein dysregulation was not normalized by CoQ10 supplementation. Whole-body insulin sensitivity may therefore have improved through other mechanisms, such as improvements in muscle and/or adipose tissue insulin sensitivity. CoQ10 may exert antifibrotic effects through the activation of the Nrf2/antioxidant response element (Nrf2/ARE) pathway. Nrf2 is a transcription factor that responds to oxidative status and regulates transcription of genes involved in antioxidant defense. CoQ10 treatment in a model of hepatic fibrosis ameliorates liver damage via suppression of Tgfb1 and upregulation of Nrf-ARE-associated genes (40). Although Nrf2 expression was not affected by maternal diet, CoQ10 supplementation increased Nrf2 by 4-fold. The antioxidant genes involved in the Nrf2/ARE pathway—Hmox1, Gst, and Gpx1—were increased by CoQ10 supplementation. This suggests that CoQ10 supplementation upregulates the Nrf2/ARE pathway via suppression of Tgfb1 (40). These observations support a role for CoQ10 supplementation in increasing antioxidant defenses to a protective level in animals that have experienced detrimental catch-up growth (18, 19). Because Nqo1 activity is known to prevent one-electron reduction of quinones, it is plausible that because hepatic CoQ10 concentrations are elevated by CoQ10 supplementation, Nqo1 expression is not required and is thus suppressed.\nIn conclusion, a suboptimal early-life environment combined with a mismatched postnatal milieu predisposes offspring to increased hepatic ROS, inflammation, and hyperinsulinemia, leading to hepatic fibrosis. This recapitulates changes seen in patients in whom benign NAFLD progresses to cirrhosis and ultimately liver failure.
We suggest that the early-life environment is crucial in identifying the subgroup of patients at highest risk of such progression; however, this should be tested in humans. A clinically relevant dose of CoQ10 reversed liver fibrosis via downregulation of ROS, inflammation, and hyperinsulinemia and upregulation of the Nrf2/ARE antioxidant pathway. Because fibrosis contributes to up to 45% of deaths in the industrialized world (41), CoQ10 supplementation may be a cost-effective and safe way of reducing this global burden in at-risk individuals, before the development of an NAFLD phenotype." ]
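Every endpoint in the figure legends above is described as analyzed by 2-factor ANOVA (maternal diet × CoQ10 supplementation). As a rough illustration of what that decomposition computes, the sketch below partitions the sums of squares for a balanced two-factor design; the function name, group labels, and data are hypothetical, this is not the authors' code, and Duncan's post hoc test is omitted.

```python
import numpy as np

def two_factor_anova(cells):
    """Sums-of-squares decomposition for a balanced two-factor ANOVA.

    cells: dict mapping (factor_a_level, factor_b_level) -> equal-length
    sequences of observations (e.g. one value per litter).
    Returns the sums of squares for each main effect, the interaction,
    the residual error, the total, and the interaction F statistic.
    """
    a_levels = sorted({k[0] for k in cells})
    b_levels = sorted({k[1] for k in cells})
    n = len(next(iter(cells.values())))  # observations per cell (balanced)
    all_vals = np.concatenate([np.asarray(cells[(a, b)], float)
                               for a in a_levels for b in b_levels])
    grand = all_vals.mean()

    # Cell means and marginal (row/column) means.
    cell = {k: float(np.mean(cells[k])) for k in cells}
    a_mean = {a: np.mean([cell[(a, b)] for b in b_levels]) for a in a_levels}
    b_mean = {b: np.mean([cell[(a, b)] for a in a_levels]) for b in b_levels}

    ss_a = n * len(b_levels) * sum((a_mean[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_mean[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_err = sum(float(np.sum((np.asarray(cells[(a, b)], float) - cell[(a, b)]) ** 2))
                 for a in a_levels for b in b_levels)
    ss_tot = float(np.sum((all_vals - grand) ** 2))

    # Interaction F = MS_interaction / MS_error.
    df_ab = (len(a_levels) - 1) * (len(b_levels) - 1)
    df_err = len(a_levels) * len(b_levels) * (n - 1)
    f_ab = (ss_ab / df_ab) / (ss_err / df_err)
    return {"a": ss_a, "b": ss_b, "interaction": ss_ab,
            "error": ss_err, "total": ss_tot, "f_interaction": f_ab}
```

For a balanced design the four components add up exactly to the total sum of squares, which is a quick self-check on any implementation; a design like the study's (diet raises the outcome only in unsupplemented animals) yields a large interaction F.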
[ "developmental programming", "liver disease", "coenzyme Q", "low birth weight", "accelerated postnatal growth" ]
INTRODUCTION: In 1992, Hales and Barker (1) proposed the “thrifty phenotype hypothesis,” which postulated that in response to suboptimal in utero nutrition, the fetus alters its organ structure and adapts its metabolism to ensure immediate survival. This occurs through the sparing of vital organs (e.g., the brain) at the expense of others, such as the liver, thus increasing the risk of metabolic disease such as liver dysfunction in later life (2–4). This risk is exacerbated if a suboptimal uterine environment is followed by rapid postnatal growth (5, 6). Nonalcoholic fatty liver disease (NAFLD)4 is the hepatic manifestation of the metabolic syndrome. Aspects of the metabolic syndrome, including NAFLD, have been linked to exposure to suboptimal early-life environments (7, 8). Although the incidence of NAFLD is high (9), the associated morbidity is low if there is no progression to hepatic fibrosis. Progression to fibrosis is indicative of the clinically important subtype of patients with NAFLD who have a high chance (20%) of developing frank liver cirrhosis and subsequent liver failure (10). At present, it is unknown why this progression occurs only in a subset of individuals. The development of an intervention that prevents these changes from accumulating could improve the prognosis of patients who develop NAFLD later in life. Increased oxidative stress is a common consequence of developmental programming (11). Increased reactive oxygen species (ROS) have been strongly implicated in the etiology of hepatic fibrosis. Several animal studies have focused on antioxidant therapies to prevent the deleterious phenotypes of developmental programming (12–14); however, the doses used are not recommended for use in humans. In practice, a suboptimal intrauterine environment is often recognized retrospectively (i.e., after delivery). Thus, it is important to address potential beneficial effects of targeted postnatal interventions. 
Coenzyme Q (CoQ10) is a benzoquinone ring linked to an isoprenoid side-chain. The isoform containing 9 isoprenoid units (CoQ9) is most abundant in rodents, whereas CoQ10 (10 isoprenoid units) is the most common in humans. When oxidized, CoQ10 shuttles electrons between mitochondrial complexes I and III and complexes II and III. Reduced CoQ10 is the most abundant endogenous cellular antioxidant (15) and is a safe and effective therapeutic antioxidant (16, 17). We have also shown that postnatal CoQ10 supplementation prevents developmentally programmed accelerated aging in rat aortas (18) and hearts (19). Liver is one of a few tissues to take up dietary CoQ10 (20), and CoQ10 supplementation has been previously investigated as a potential therapy to prevent the progression of NAFLD in humans (21). In this study, we aimed to 1) investigate the effects of poor maternal nutrition and rapid postnatal catch-up growth on hepatic CoQ9 concentrations and molecular pathways leading to proinflammatory changes and development of fibrosis and 2) determine whether a clinically relevant dose of dietary CoQ10 could correct any observed hepatic fibrosis. METHODS: Animal experimentation All procedures involving animals were conducted under the British Animals (Scientific Procedures) Act (1986) and underwent ethical review by the University of Cambridge Animal Welfare and Ethical Review Board. Stock animals were purchased from Charles River, and dams were produced from in-house breeding from stock animals. Pregnant Wistar rats were maintained at room temperature in specific-pathogen-free housing with the use of individually ventilated cages with environmental enrichment. The dams were maintained on a 20% protein diet (control) or an isocaloric low-protein (8%) diet, as previously described (22). Access to diets and water was provided ad libitum. All rats used in this study were specific-pathogen-free housed individually at 22°C on a controlled 12:12-h light-dark cycle. 
Diets were purchased from Arie Blok. The day of birth was recorded as day 1 of postnatal life. Pups born to low-protein-diet–fed dams were cross-fostered to control-fed mothers on postnatal day 3 to create a recuperated litter. Each recuperated litter was standardized to 4 male pups at random to maximize their plane of nutrition. The control group consisted of the offspring of dams fed the 20%-protein diet and suckled by 20% protein–fed dams. Each control litter was culled to 8 pups as a standard. To minimize stress to the pups when cross-fostered, they were transferred with some of their own bedding. Body weights were recorded at postnatal days 3 and 21 and at 12 mo. At 21 d, 2 males per litter were weaned onto standard laboratory feed pellets (Special Diet Services) and the other 2 were weaned onto the same diet supplemented with CoQ10 to give a dose of 1 mg/kg body weight per day. Diets were given in the home cage. Rat pups were fed these diets until 12 mo of age, at which time all rats were killed by carbon dioxide asphyxiation at ∼1000. Post mortem, liver tissue was removed, weighed, and snap-frozen in liquid nitrogen and then stored at −80°C until analysis. A further portion of liver tissue (the same area for each sample) was removed and fixed in formalin for histologic assessment. For all measurements, 1 pup per litter was used, thus “n” represents number of litters. Ten litters per group were used in this study based on power calculation with the use of previous data from our studies of RNA expression in postnatal tissues from programmed animals. Rat numbers were calculated to give 80% power to detect a 20% difference between groups at the P < 0.05 level. Only male rats were used in this study. All procedures involving animals were conducted under the British Animals (Scientific Procedures) Act (1986) and underwent ethical review by the University of Cambridge Animal Welfare and Ethical Review Board. 
CoQ10 diet preparation A dose of 1 mg CoQ10/kg body weight per day was used in this study, which was administered via the diet (18). This was achieved by appropriate CoQ10 supplementation of laboratory feed pellets, as we described previously (18, 19). Diet was prepared twice a week throughout the study. CoQ10, lipid profile, glucose, and insulin analysis Total liver ubiquinone (CoQ9 and CoQ10) was quantified by reverse-phase HPLC with UV detection at 275 nm, as described previously (18, 19). Serum was obtained as detailed previously (18), and blood from the tail vein was collected into EDTA tubes and centrifuged for 3 min at 3000 rpm at 4°C to isolate plasma. Fasted blood glucose measurements were obtained by using a glucose analyzer (Hemocue). The serum lipid profile and fasted plasma insulin analyses were performed by using an auto-analyzer (the Wellcome Trust–supported Cambridge Mouse Laboratory). Liver triglyceride concentrations were determined by using the Folch assay (23). Briefly, liver samples were homogenized in a 2:1 ratio of chloroform:methanol. The distinct lipid phase was removed after centrifugation, and lipid weight was quantified after the solvent was removed by evaporation.
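Translating the 1 mg/kg body weight per day target into a feed concentration is simple arithmetic. In the sketch below, only the dose comes from the study; the body weight and daily feed intake are hypothetical values for illustration.

```python
def coq10_mg_per_kg_feed(dose_mg_per_kg_bw, body_weight_g, feed_intake_g_per_day):
    """mg CoQ10 per kg of feed needed to deliver the target daily dose.

    Only the 1 mg/kg per day dose comes from the study; body weight and
    feed intake here are hypothetical inputs for illustration.
    """
    daily_dose_mg = dose_mg_per_kg_bw * (body_weight_g / 1000.0)
    return daily_dose_mg / (feed_intake_g_per_day / 1000.0)

# A hypothetical 400-g rat eating 20 g of feed/day needs ~20 mg CoQ10/kg feed.
print(coq10_mg_per_kg_feed(1.0, 400.0, 20.0))
```

In practice the feed concentration would be adjusted as body weight and intake change with age, which is consistent with the diet being freshly prepared twice a week.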
Histologic assessment Liver samples were fixed in formalin, paraffin-embedded, and sectioned to a 6-μm thickness by using a microtome. Picro Sirius Red staining was used to stain for fibrosis. Cell-D software (Olympus Soft Imaging Solutions) was used to quantify the thickness of fibrosis around all visible hepatic vessels (including all arteries and veins) from 1 section per sample. This section was taken at the same point for each sample (20 sections per sample) by using a nonbiased grid sampling method. All analyses were performed at 10× magnification by using an Olympus microscope (Olympus Soft Imaging Solutions). All analyses were performed blinded.
Mitochondrial electron transport chain complex activities Activities of complex I [NAD(H): ubiquinone reductase; Enzyme Commission (EC) 1.6.5.3], complexes II–III (succinate: cytochrome c reductase; EC 1.3.5.1 + EC 1.10.2.2), and complex IV (cytochrome oxidase; EC 1.9.3.1) as well as citrate synthase (EC 2.3.3.1) were assayed as described previously (18, 19). Protein analysis Protein was extracted and assayed as described previously (18). Protein (20 μg) was loaded onto 10%, 12%, or 15% polyacrylamide gels, dependent on the molecular weight of the protein to be measured.
The samples were electrophoresed and transferred to polyvinylidene fluoride membranes (18) and detected with the use of the following dilutions of primary antibody: insulin receptor substrate 1 (IRS-1; 1:1000; Millipore); phosphoinositide-3-kinase, p110-β (p110-β); insulin receptor β (IR-β); protein kinase-ζ (1:200; Santa Cruz); Akt-1 and Akt-2 (1:1000; New England Biolabs); phosphoinositide-3-kinase, p85-α (p85-α; 1:5000; Upstate); cytochrome P450-2E1 (CYP2E1) and Il-6 (1:1000; Abcam); Tnf-α (1:1000; Cell Signaling Technology); and anti-rabbit IgG secondary antibodies (1:20,000; Jackson Immunoresearch Laboratories). Equal protein loading was confirmed by staining electrophoresed gels with Coomassie Blue (Bio-Rad) to visualize total protein. Gene expression RNA was extracted by using a miRNeasy mini kit (Qiagen) following the manufacturer’s instructions, as detailed previously (19). A DNase digestion step was performed to ensure no genomic DNA contamination. RNA (1 μg) was used to synthesize complementary DNA by using oligo-dT-adaptor primers and Moloney murine leukemia virus reverse transcriptase (Promega). Gene expression was determined by using custom-designed primers (Sigma) and SYBR Green reagents (Applied Biosystems). Primer sequences are presented in Table 1. Quantification of gene expression was performed with the use of a StepOnePlus real-time polymerase chain reaction system (Applied Biosystems). Equal efficiency of the reverse transcription of RNA from all groups was confirmed through quantification of expression of the housekeeping gene β-actin (Actb). The expression of Actb did not differ between groups (effect of maternal diet, P = 0.9; effect of CoQ10 supplementation, P = 0.8; control: 153 ± 32; recuperated: 144 ± 24; control CoQ10: 143 ± 12; and recuperated CoQ10: 157 ± 21 average copy numbers).
Primers1 Acta2, α-smooth muscle actin 2; Actb, β-actin; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; Des, desmin; F, forward; Gfap, glial fibrillary acidic protein; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; Nqo1, NAD(P)H dehydrogenase, quinone 1; Nrf2, nuclear factor, erythroid 2–like 2; R, reverse; Tgfb, transforming growth factor β; Timp, tissue inhibitor of matrix metalloproteinases.
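Absolute copy numbers such as the Actb values reported above are typically read off a standard curve relating threshold cycle (Ct) to log10(copy number). A minimal sketch with hypothetical curve parameters (a slope of −3.32 corresponds to 100% amplification efficiency; neither value is from the study):

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Invert a qPCR standard curve: Ct = slope * log10(copies) + intercept.

    Both curve parameters are hypothetical illustrations; a slope of -3.32
    means each 10-fold dilution shifts Ct by ~3.32 cycles.
    """
    return 10 ** ((ct - intercept) / slope)

# Each ~3.32-cycle drop in Ct corresponds to 10x more starting template.
print(copies_from_ct(38.0))             # 1 copy at the curve intercept
print(copies_from_ct(38.0 - 3.32 * 3))  # ~1000 copies three decades lower
```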
Mitochondrial DNA copy number Total DNA was extracted using a phenol/chloroform extraction protocol (24). Mitochondrial DNA copy number analysis was performed as described previously (25). 4-Hydroxynonenal and 3-nitrotyrosine analysis Protein nitrotyrosination was assayed by using a Nitrotyrosine ELISA kit (MitoSciences), according to the manufacturer’s instructions. Lipid peroxidation was analyzed by using an OxiSelect HNE Adduct ELISA kit (Cambridge Biosciences), according to the manufacturer’s instructions. Statistical analysis Data were analyzed by using a 2-factor ANOVA with maternal diet and CoQ10 supplementation as the independent variables. Post hoc testing was carried out when appropriate and is indicated in the text accordingly. Data are represented as means ± SEMs. All statistical analyses were performed with the use of Statistica 7 software (Statsoft), and for all tests, P values <0.05 were considered significant. Data were checked for normal distribution.
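The 2-factor ANOVA (run here in Statistica 7) decomposes the total variance into the two main effects and their interaction. A stdlib-only sketch for the balanced 2 × 2 design used in this study, with made-up replicate values (not study data):

```python
def two_way_anova_2x2(cells):
    """Balanced two-factor ANOVA for a 2x2 design.

    cells maps (diet, coq10) -> equal-sized list of replicate values
    (one value per litter). Returns F ratios for the diet effect, the
    CoQ10 effect, and their interaction (each with 1 df in a 2x2 design).
    Assumes a balanced design: every cell has the same n.
    """
    n = len(next(iter(cells.values())))  # replicates per cell
    mean = lambda xs: sum(xs) / len(xs)
    cell_means = {k: mean(v) for k, v in cells.items()}
    grand = mean([v for vals in cells.values() for v in vals])
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    a_means = {a: mean([cell_means[(a, b)] for b in b_levels]) for a in a_levels}
    b_means = {b: mean([cell_means[(a, b)] for a in a_levels]) for b in b_levels}

    ss_a = n * len(b_levels) * sum((a_means[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_means[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_err = sum((v - cell_means[k]) ** 2 for k, vals in cells.items() for v in vals)
    ms_err = ss_err / (len(cells) * (n - 1))  # error df = N - number of cells
    return {"diet": ss_a / ms_err, "coq10": ss_b / ms_err,
            "interaction": ss_ab / ms_err}

# Hypothetical data: a diet effect that CoQ10 partially reverses.
cells = {
    ("control", "chow"): [10.0, 11.0, 9.0],
    ("control", "coq10"): [10.0, 12.0, 11.0],
    ("recuperated", "chow"): [15.0, 14.0, 16.0],
    ("recuperated", "coq10"): [11.0, 10.0, 12.0],
}
f = two_way_anova_2x2(cells)
print(f)  # {'diet': 18.75, 'coq10': 6.75, 'interaction': 18.75}
```

The F ratios would then be compared against the F distribution with (1, error df) degrees of freedom, followed by post hoc testing when an effect is significant.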
In all cases, “n” refers to the number of litters (with 1 rat used from each litter).
RESULTS: Anthropometric measurements Recuperated offspring were born smaller than control rats (6.3 ± 0.2 compared with 7.4 ± 0.2 g; P < 0.001) and underwent rapid postnatal catch-up growth so that their weights were similar to those of the control offspring at postnatal day 21 (52.2 ± 0.9 compared with 50.7 ± 1.2 g).
At 12 mo of age, there was no effect of maternal diet or CoQ10 supplementation on body weights or absolute liver weights (Table 2). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on rat pup body and liver weights1 Values are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. No significant differences between groups were reported. CoQ10, coenzyme Q. Dietary CoQ10 supplementation leads to greater hepatic CoQ9 and CoQ10 concentrations Recuperated hepatic CoQ9 and CoQ10 concentrations were unaltered compared with those in control rats (Table 3). However, CoQ9 and CoQ10 concentrations were greater (P < 0.01) in rats supplemented with CoQ10 (Table 3). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum and plasma measurements1 Values are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. The overall effects of maternal diet and interaction between maternal diet and CoQ10 supplementation were not significant for any of the variables reported in the table.
*,**Effect of CoQ10: *P < 0.05, **P < 0.01. CoQ10, coenzyme Q. CoQ10 supplementation ameliorates hepatic fibrosis and inflammation induced by poor maternal nutrition Recuperated offspring showed greater (P < 0.001) collagen deposition (Figure 1A) than did control offspring (Figure 1B, C). CoQ10 supplementation prevented this effect of maternal diet (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1A, D, E). Collagen type 1, α1 (Col1a1), mRNA expression was also greater (P < 0.05) in recuperated offspring (Figure 1F) and was reduced (P < 0.05) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1F). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on hepatic fibrosis, quantified by measurement of collagen (A), in 12-mo-old male rat livers (B–E). (F) mRNA expression of Col1a1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C compared with R and R compared with RQ: *P < 0.05, ***P < 0.001. Interaction between in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation (statistical interaction): P = 0.001 (A and F).
Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; R, recuperated; RQ, recuperated CoQ10. Recuperated and control offspring had similar expression of the profibrotic cytokine transforming growth factor β1 (Tgfb1), and a trend for greater expression of monocyte chemoattractant protein 1 (Mcp1; effect of maternal diet, P = 0.06) was observed in recuperated offspring (Figure 2A). CoQ10 supplementation reduced the concentrations of Tgfb1 (effect of CoQ10 supplementation, P < 0.001) and Mcp1 (effect of CoQ10 supplementation, P = 0.07) (Figure 2A). The cytokine Tnf-α was greater in recuperated offspring than in controls (effect of maternal diet, P < 0.05), and concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.05). Concentrations of Il-6 were greater (P < 0.05) in recuperated offspring than in controls, and this effect was prevented by CoQ10 supplementation (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P < 0.001) (Figure 2B). The gene expression of markers of hepatic stellate cell (HSC) and Kupffer cell (KC) activation [α-smooth muscle actin 2 (Acta2), desmin (Des), and C-type lectin-domain family 4 (Clec4f)] was greater in recuperated offspring than in controls (effects of maternal diet, P < 0.05 for all listed variables) (Figure 2C). CoQ10 supplementation reduced Des (P < 0.001), Clec4f (P < 0.05), and cluster of differentiation 68 (Cd68) (P < 0.01; all were effects of CoQ10 supplementation) (Figure 2C). Acta2 was unchanged by CoQ10 supplementation (Figure 2C). Glial fibrillary acidic protein (Gfap) mRNA expression was not significantly different between groups (Figure 2C).
Gene expression of matrix metalloproteinase (Mmp) 9 (Mmp9) was greater (P < 0.01) in recuperated offspring than in controls, and CoQ10 supplementation reduced (P < 0.001) recuperated Mmp9 mRNA to control levels (interaction between maternal diet and CoQ10 supplementation, P < 0.05) (Figure 2D). The gene expression of Mmp2 remained unaltered between groups (Figure 2D). Tissue inhibitors of Mmps (Timps) were also not significantly different between groups (Figure 2E). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on inflammatory markers Tgfb1 and Mcp1 mRNA (A), Tnf-α and Il-6 protein (B), mRNA expression of markers of HSC activation (Acta2, Des, Gfap) and KC activation (Clec4f, Cd68) (C), and Mmp (D) and Timp (E) mRNA in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. **,***C and R compared with CQ and RQ: **P < 0.01, ***P < 0.001. *,**C compared with R: *P < 0.05, **P < 0.01. ***R compared with RQ, P < 0.001. Statistical interactions: Tgfb1, P = 0.1; Mcp1, P = 0.3; Tnf-α, P = 0.5; Il-6, P = 0.002; Acta2, P = 0.5; Des, P = 0.3; Gfap, P = 0.5; Clec4f, P = 0.1; Cd68, P = 0.9; Mmp2, P = 0.9; Mmp9, P = 0.04; Timp1, P = 0.6; Timp2, P = 0.8. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. Acta2, α-smooth muscle actin 2; C, control; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; Des, desmin; Gfap, glial fibrillary acidic protein; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; R, recuperated; RQ, recuperated CoQ10; Tgfb1, transforming growth factor β1; Timp, tissue inhibitor of matrix metalloproteinase.
Concentrations of Il-6 were greater (P < 0.05) in recuperated offspring than in controls and this effect was prevented by CoQ10 supplementation (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P < 0.001) (Figure 2B). The gene expression of markers of hepatic stellate cell (HSC) and Kupffer cell (KC) activation [α-smooth muscle actin 2 (Acta2), desmin (Des), and C-type lectin-domain family 4 (Clec4f)] was greater in recuperated offspring than in controls (effects of maternal diet, P < 0.05 for all listed variables) (Figure 2C). CoQ10 supplementation reduced Des (P < 0.001), Clec4f (P < 0.05), and cluster of differientation 68 (Cd68) (P < 0.01; all were effects of CoQ10 supplementation) (Figure 2C). Acta2 was unchanged by CoQ10 supplementation (Figure 2C). Glial fibrillary acidic protein (Gfap) mRNA expression was not significantly different between groups (Figure 2C). Gene expression of matrix metalloproteinase (Mmp) 9 (Mmp9) was greater (P < 0.01) in recuperated offspring than in controls, and CoQ10 supplementation reduced (P < 0.001) recuperated Mmp9 mRNA to control levels (interaction between maternal diet and CoQ10 supplementation, P < 0.05) (Figure 2D). The gene expression of Mmp2 remained unaltered between groups (Figure 2D). Tissue inhibitors of Mmps (Timps) were also not significantly different between groups (Figure 2E). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on inflammatory markers Tgfb1 and Mcp1 mRNA (A), Tnf-α and Il-6 protein (B), mRNA expression of markers of HSC activation (Acta2, Des, Gfap) and KC activation (Clec4f, Cd68) (C), and Mmp (D) and Timp (E) mRNA in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. **,***C and R compared with CQ and RQ: **P < 0.01, ***P < 0.001. *,**C compared with R: *P < 0.05, **P < 0.01. ***R compared with RQ, P < 0.001. 
Statistical interactions: Tgfb1, P = 0.1; Mcp1, P = 0.3; Tnf-α, P = 0.5; Il-6, P = 0.002; Acta2, P = 0.5; Des, P = 0.3; Gfap, P = 0.5; Clec4f, P = 0.1; Cd68, P = 0.9; Mmp2, P = 0.9; Mmp9, P = 0.04; Timp1, P = 0.6; Timp2, P = 0.8. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. Acta2, α-smooth muscle actin 2; C, control; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; Des, desmin; Gfap, glial fibrillary acidic protein; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; R, recuperated; RQ, recuperated CoQ10; Tgfb1, transforming growth factor β1; Timp, tissue inhibitor of matrix metalloproteinase. CoQ10 supplementation attenuates ROS induced by poor maternal nutrition Components of the NAD(P)H oxidase 2 (NOX-2) protein complex—Gp91phox (P < 0.05), P22phox (P < 0.05), and P47phox (P = 0.05)—were greater in recuperated offspring than in controls. P67phox was greater (P < 0.01) in recuperated offspring than in controls, and this effect was reduced (P < 0.001) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.01) (Figure 3A). Levels of Gp91phox (P = 0.08), P22phox (P < 0.001), and P47phox (P = 0.05) were reduced by CoQ10 supplementation (Figure 3A). CYP2E1 was not significantly different between control and recuperated offspring; however, CoQ10 supplementation reduced this concentration by 50% (effect of CoQ10 supplementation, P < 0.01) (Figure 3B). Complex I and complex IV electron transport chain activities were greater in recuperated offspring (effect of maternal diet, P = 0.05); however, complex II–III activity was unaffected. CoQ10 supplementation caused an increase in complex IV activity (effect of CoQ10 supplementation, P < 0.05) (Figure 3C). 
Mitochondrial DNA copy number was not significantly different between groups (control: 36 ± 4 copy numbers; recuperated: 36 ± 5 copy numbers) or by CoQ10 supplementation (control CoQ10: 41 ± 4 copy numbers; recuperated CoQ10: 35 ± 3 copy numbers). Concentrations of 4-hydroxynonenal (4-HNE) adducts were greater (P < 0.05) in recuperated offspring. There was a significant interaction between maternal diet and CoQ10 supplementation on 4-HNE concentrations (P < 0.05), reflecting the fact that CoQ10 supplementation reduced 4-HNE concentrations in recuperated offspring but had no effect in control offspring (Figure 3D). 3-nitrotyrosine concentrations were not significantly different between groups (Figure 3E). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on indexes of oxidative stress: components of the NOX-2 complex (A), CYP2E1 (B), ETC activities (C), 4-HNE (D), and 3-NT adducts (E) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. D: *Comparison of C with R and comparison of R with RQ, P < 0.05. A and B: **Comparison of C with R, P < 0.01; ***Comparison of R with RQ and comparison of C and R with CQ and RQ, P < 0.001. Statistical interactions: NOX-2 (Gp91phox, P = 0.5; P22phox, P = 0.3; P47phox, P = 0.4; P67phox, P = 0.01; P40phox, P = 0.3), CYP2E1 (P = 0.5), ETC activities (complex I, P = 0.9; complexes II–III, P = 0.6; complex IV, P = 0.9), 4-HNE (P = 0.04), and 3-NT (P = 0.8). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; CS, citrate synthase; CYP2E1, cytochrome P450-2E1; ETC, electron transport chain; NOX-2, NAD(P)H oxidase 2; R, recuperated; RQ, recuperated CoQ10; 3-NT, 3-nitrotyrosine; 4-HNE, 4-hydroxynonenal. 
Components of the NAD(P)H oxidase 2 (NOX-2) protein complex—Gp91phox (P < 0.05), P22phox (P < 0.05), and P47phox (P = 0.05)—were greater in recuperated offspring than in controls. P67phox was greater (P < 0.01) in recuperated offspring than in controls, and this effect was reduced (P < 0.001) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.01) (Figure 3A). Levels of Gp91phox (P = 0.08), P22phox (P < 0.001), and P47phox (P = 0.05) were reduced by CoQ10 supplementation (Figure 3A). CYP2E1 was not significantly different between control and recuperated offspring; however, CoQ10 supplementation reduced this concentration by 50% (effect of CoQ10 supplementation, P < 0.01) (Figure 3B). Complex I and complex IV electron transport chain activities were greater in recuperated offspring (effect of maternal diet, P = 0.05); however, complex II–III activity was unaffected. CoQ10 supplementation caused an increase in complex IV activity (effect of CoQ10 supplementation, P < 0.05) (Figure 3C). Mitochondrial DNA copy number was not significantly different between groups (control: 36 ± 4 copy numbers; recuperated: 36 ± 5 copy numbers) or by CoQ10 supplementation (control CoQ10: 41 ± 4 copy numbers; recuperated CoQ10: 35 ± 3 copy numbers). Concentrations of 4-hydroxynonenal (4-HNE) adducts were greater (P < 0.05) in recuperated offspring. There was a significant interaction between maternal diet and CoQ10 supplementation on 4-HNE concentrations (P < 0.05), reflecting the fact that CoQ10 supplementation reduced 4-HNE concentrations in recuperated offspring but had no effect in control offspring (Figure 3D). 3-nitrotyrosine concentrations were not significantly different between groups (Figure 3E). 
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on indexes of oxidative stress: components of the NOX-2 complex (A), CYP2E1 (B), ETC activities (C), 4-HNE (D), and 3-NT adducts (E) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. D: *Comparison of C with R and comparison of R with RQ, P < 0.05. A and B: **Comparison of C with R, P < 0.01; ***Comparison of R with RQ and comparison of C and R with CQ and RQ, P < 0.001. Statistical interactions: NOX-2 (Gp91phox, P = 0.5; P22phox, P = 0.3; P47phox, P = 0.4; P67phox, P = 0.01; P40phox, P = 0.3), CYP2E1 (P = 0.5), ETC activities (complex I, P = 0.9; complexes II–III, P = 0.6; complex IV, P = 0.9), 4-HNE (P = 0.04), and 3-NT (P = 0.8). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; CS, citrate synthase; CYP2E1, cytochrome P450-2E1; ETC, electron transport chain; NOX-2, NAD(P)H oxidase 2; R, recuperated; RQ, recuperated CoQ10; 3-NT, 3-nitrotyrosine; 4-HNE, 4-hydroxynonenal. CoQ10 supplementation alters hepatic antioxidant defense capacity Nuclear factor erythroid 2–like 2 (Nrf2), heme oxygenase 1 (Hmox1), and glutathione synthetase (Gst) expression were not significantly different between control and recuperated offspring (Figure 4A, B). Glutathione peroxidase 1 (Gpx1) was reduced in recuperated offspring compared with controls, and this effect was prevented by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.05). NAD(P)H dehydrogenase, quinone 1 (Nqo1), was reduced in recuperated offspring (effect of maternal diet, P < 0.05) (Figure 4A, B). CoQ10 supplementation increased Nrf2 (P < 0.001), Hmox1 (P < 0.05), and Gst (P < 0.001) (Figure 4A, B); however Nqo1 expression was reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.001) (Figure 4A). 
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on mRNA expression of molecules involved in the NRF antioxidant defense pathway in 12-mo-old male rat livers. (A) Nrf2, Hmox1, and Nqo1 and (B) Gst and Gpx1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C and R compared with CQ and RQ: *P < 0.05, ***P < 0.001. *C compared with R and C compared with CQ, P < 0.05. ***R compared with RQ, P < 0.001. Statistical interactions: Nrf2, P = 0.9; Hmox1, P = 0.6; Nqo1, P = 0.4; Gst, P = 0.1; Gpx1, P = 0.01. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Nqo1, NAD(P)H dehydrogenase, quinone 1; NRF, nuclear erythroid 2-related factor; Nrf2, nuclear factor, erythroid 2–like 2; R, recuperated; RQ, recuperated CoQ10. Nuclear factor erythroid 2–like 2 (Nrf2), heme oxygenase 1 (Hmox1), and glutathione synthetase (Gst) expression were not significantly different between control and recuperated offspring (Figure 4A, B). Glutathione peroxidase 1 (Gpx1) was reduced in recuperated offspring compared with controls, and this effect was prevented by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.05). NAD(P)H dehydrogenase, quinone 1 (Nqo1), was reduced in recuperated offspring (effect of maternal diet, P < 0.05) (Figure 4A, B). CoQ10 supplementation increased Nrf2 (P < 0.001), Hmox1 (P < 0.05), and Gst (P < 0.001) (Figure 4A, B); however Nqo1 expression was reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.001) (Figure 4A). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on mRNA expression of molecules involved in the NRF antioxidant defense pathway in 12-mo-old male rat livers. (A) Nrf2, Hmox1, and Nqo1 and (B) Gst and Gpx1. 
Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C and R compared with CQ and RQ: *P < 0.05, ***P < 0.001. *C compared with R and C compared with CQ, P < 0.05. ***R compared with RQ, P < 0.001. Statistical interactions: Nrf2, P = 0.9; Hmox1, P = 0.6; Nqo1, P = 0.4; Gst, P = 0.1; Gpx1, P = 0.01. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Nqo1, NAD(P)H dehydrogenase, quinone 1; NRF, nuclear erythroid 2-related factor; Nrf2, nuclear factor, erythroid 2–like 2; R, recuperated; RQ, recuperated CoQ10. CoQ10 supplementation alters expression of molecules involved in insulin and lipid metabolism Greater serum insulin concentrations were observed in recuperated offspring than in controls (overall effect of maternal diet, P < 0.05) (Figure 5A). Concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.01) (Figure 5A). Protein expression of IR-β (P < 0.001), IRS-1 (P < 0.001), and Akt-1 (P < 0.05) was reduced in recuperated offspring compared with controls (all effects of maternal diet). Phosphoinositide-3-kinase-p110-β (p110-β), phosphoinositide-3-kinase-p85α (p85-α), and Akt-2 were not influenced by maternal diet (Figure 5B). CoQ10 supplementation increased p110-β (P < 0.05) and Akt-2 (P < 0.01) protein expression (Figure 5B). Fasting plasma glucose concentrations were not significantly different between groups (Table 3). Serum and hepatic triglyceride concentrations and serum cholesterol concentrations were not significantly different between control and recuperated offspring (Table 3). CoQ10 supplementation increased serum triglyceride and cholesterol concentrations (effects of CoQ10 supplementation, P < 0.05); however, hepatic triglyceride concentrations were unchanged (Table 3). 
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum insulin (A) and insulin signaling protein expression (B) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. Statistical interactions: insulin, P = 0.3; insulin signaling proteins (IR-β, P = 0.12; IRS-1, P = 0.5; p110β, P = 0.4; p85α, P = 0.3; Akt-1, P = 0.05; Akt-2, P = 0.5). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; IR-β, insulin receptor β IRS-1, insulin receptor substrate 1; p85α, phosphoinositide-3-kinase, p85-α p110-β, phosphoinositide-3-kinase, p110-β R, recuperated; RQ, recuperated CoQ10. Greater serum insulin concentrations were observed in recuperated offspring than in controls (overall effect of maternal diet, P < 0.05) (Figure 5A). Concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.01) (Figure 5A). Protein expression of IR-β (P < 0.001), IRS-1 (P < 0.001), and Akt-1 (P < 0.05) was reduced in recuperated offspring compared with controls (all effects of maternal diet). Phosphoinositide-3-kinase-p110-β (p110-β), phosphoinositide-3-kinase-p85α (p85-α), and Akt-2 were not influenced by maternal diet (Figure 5B). CoQ10 supplementation increased p110-β (P < 0.05) and Akt-2 (P < 0.01) protein expression (Figure 5B). Fasting plasma glucose concentrations were not significantly different between groups (Table 3). Serum and hepatic triglyceride concentrations and serum cholesterol concentrations were not significantly different between control and recuperated offspring (Table 3). CoQ10 supplementation increased serum triglyceride and cholesterol concentrations (effects of CoQ10 supplementation, P < 0.05); however, hepatic triglyceride concentrations were unchanged (Table 3). 
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum insulin (A) and insulin signaling protein expression (B) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. Statistical interactions: insulin, P = 0.3; insulin signaling proteins (IR-β, P = 0.12; IRS-1, P = 0.5; p110β, P = 0.4; p85α, P = 0.3; Akt-1, P = 0.05; Akt-2, P = 0.5). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; IR-β, insulin receptor β IRS-1, insulin receptor substrate 1; p85α, phosphoinositide-3-kinase, p85-α p110-β, phosphoinositide-3-kinase, p110-β R, recuperated; RQ, recuperated CoQ10. Anthropometric measurements: Recuperated offspring were born smaller than control rats (6.3 ± 0.2 compared with 7.4 ± 0.2 g; P < 0.001) and underwent rapid postnatal catch-up growth so that their weights were similar to those of the control offspring at postnatal day 21 (52.2 ± 0.9 compared with 50.7 ± 1.2 g). At 12 mo of age, there was no effect of maternal diet or CoQ10 supplementation on body weights or absolute liver weights (Table 2). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on rat pup body and liver weights1 Values are means ± SEMs; n = 10/group. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. No significant differences between groups were reported. CoQ10, coenzyme Q. Dietary CoQ10 supplementation leads to greater hepatic CoQ9 and CoQ10 concentrations: Recuperated hepatic CoQ9 and CoQ10 concentrations were unaltered compared with those in control rats (Table 3). However, CoQ9 and CoQ10 concentrations were greater (P < 0.01) when supplemented with CoQ10 (Table 3). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum and plasma measurements1 Values are means ± SEMs; n = 10/group. 
Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. The overall effects of maternal diet and interaction between maternal diet and CoQ10 supplementation were not significant for any of the variables reported in the table. *,**Effect of CoQ10: *P < 0.05, **P < 0.01. CoQ10, coenzyme Q. CoQ10 supplementation ameliorates hepatic fibrosis and inflammation induced by poor maternal nutrition: Recuperated offspring showed greater (P < 0.001) collagen deposition (Figure 1A) than did control offspring (Figure 1B, C). CoQ10 supplementation prevented this effect of maternal diet (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1A, D, E). Collagen type 1, α1 (Col1a1), mRNA expression was also greater (P < 0.05) in recuperated offspring (Figure 1F) and was reduced (P < 0.05) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P = 0.001) (Figure 1F). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on hepatic fibrosis, quantified by measurement of collagen (A), in 12-mo-old male rat livers (B–E). (F) mRNA expression of Col1a1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C compared with R and R compared with RQ: *P < 0.05, ***P < 0.001. Interaction between in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation (statistical interaction): P = 0.001 (A and F). Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; R, recuperated; RQ, recuperated CoQ10. Recuperated and control offspring had similar expression of the profibrotic cytokine transforming growth factor β1 (Tgfb1), and a trend for greater expression of monocyte chemoattractant protein 1 (Mcp1; effect of maternal diet, P = 0.06) was observed in recuperated offspring (Figure 2A).
CoQ10 supplementation reduced the concentrations of Tgfb1 (effect of CoQ10 supplementation, P < 0.001) and Mcp1 (effect of CoQ10 supplementation, P = 0.07) (Figure 2A). The cytokine Tnf-α was greater in recuperated offspring than in controls (effect of maternal diet, P < 0.05), and concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.05). Concentrations of Il-6 were greater (P < 0.05) in recuperated offspring than in controls, and this effect was prevented by CoQ10 supplementation (P < 0.001; interaction between maternal diet and CoQ10 supplementation, P < 0.001) (Figure 2B). The gene expression of markers of hepatic stellate cell (HSC) and Kupffer cell (KC) activation [α-smooth muscle actin 2 (Acta2), desmin (Des), and C-type lectin-domain family 4 (Clec4f)] was greater in recuperated offspring than in controls (effects of maternal diet, P < 0.05 for all listed variables) (Figure 2C). CoQ10 supplementation reduced Des (P < 0.001), Clec4f (P < 0.05), and cluster of differentiation 68 (Cd68) (P < 0.01; all were effects of CoQ10 supplementation) (Figure 2C). Acta2 was unchanged by CoQ10 supplementation (Figure 2C). Glial fibrillary acidic protein (Gfap) mRNA expression was not significantly different between groups (Figure 2C). Gene expression of matrix metalloproteinase (Mmp) 9 (Mmp9) was greater (P < 0.01) in recuperated offspring than in controls, and CoQ10 supplementation reduced (P < 0.001) recuperated Mmp9 mRNA to control levels (interaction between maternal diet and CoQ10 supplementation, P < 0.05) (Figure 2D). The gene expression of Mmp2 remained unaltered between groups (Figure 2D). Tissue inhibitors of Mmps (Timps) were also not significantly different between groups (Figure 2E).
Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on inflammatory markers Tgfb1 and Mcp1 mRNA (A), Tnf-α and Il-6 protein (B), mRNA expression of markers of HSC activation (Acta2, Des, Gfap) and KC activation (Clec4f, Cd68) (C), and Mmp (D) and Timp (E) mRNA in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. **,***C and R compared with CQ and RQ: **P < 0.01, ***P < 0.001. *,**C compared with R: *P < 0.05, **P < 0.01. ***R compared with RQ, P < 0.001. Statistical interactions: Tgfb1, P = 0.1; Mcp1, P = 0.3; Tnf-α, P = 0.5; Il-6, P = 0.002; Acta2, P = 0.5; Des, P = 0.3; Gfap, P = 0.5; Clec4f, P = 0.1; Cd68, P = 0.9; Mmp2, P = 0.9; Mmp9, P = 0.04; Timp1, P = 0.6; Timp2, P = 0.8. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. Acta2, α-smooth muscle actin 2; C, control; Cd68, cluster of differentiation 68; Clec4f, C-type lectin-domain family 4; Col1a1, collagen type 1, α1; CoQ10, coenzyme Q; CQ, control CoQ10; Des, desmin; Gfap, glial fibrillary acidic protein; Mcp1, monocyte chemoattractant protein 1; Mmp, matrix metalloproteinase; R, recuperated; RQ, recuperated CoQ10; Tgfb1, transforming growth factor β1; Timp, tissue inhibitor of matrix metalloproteinase. CoQ10 supplementation attenuates ROS induced by poor maternal nutrition: Components of the NAD(P)H oxidase 2 (NOX-2) protein complex—Gp91phox (P < 0.05), P22phox (P < 0.05), and P47phox (P = 0.05)—were greater in recuperated offspring than in controls. P67phox was greater (P < 0.01) in recuperated offspring than in controls, and this effect was reduced (P < 0.001) by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.01) (Figure 3A). Levels of Gp91phox (P = 0.08), P22phox (P < 0.001), and P47phox (P = 0.05) were reduced by CoQ10 supplementation (Figure 3A). 
CYP2E1 was not significantly different between control and recuperated offspring; however, CoQ10 supplementation reduced this concentration by 50% (effect of CoQ10 supplementation, P < 0.01) (Figure 3B). Complex I and complex IV electron transport chain activities were greater in recuperated offspring (effect of maternal diet, P = 0.05); however, complex II–III activity was unaffected. CoQ10 supplementation caused an increase in complex IV activity (effect of CoQ10 supplementation, P < 0.05) (Figure 3C). Mitochondrial DNA copy number was not significantly different between groups (control: 36 ± 4 copy numbers; recuperated: 36 ± 5 copy numbers) or by CoQ10 supplementation (control CoQ10: 41 ± 4 copy numbers; recuperated CoQ10: 35 ± 3 copy numbers). Concentrations of 4-hydroxynonenal (4-HNE) adducts were greater (P < 0.05) in recuperated offspring. There was a significant interaction between maternal diet and CoQ10 supplementation on 4-HNE concentrations (P < 0.05), reflecting the fact that CoQ10 supplementation reduced 4-HNE concentrations in recuperated offspring but had no effect in control offspring (Figure 3D). 3-nitrotyrosine concentrations were not significantly different between groups (Figure 3E). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on indexes of oxidative stress: components of the NOX-2 complex (A), CYP2E1 (B), ETC activities (C), 4-HNE (D), and 3-NT adducts (E) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. D: *Comparison of C with R and comparison of R with RQ, P < 0.05. A and B: **Comparison of C with R, P < 0.01; ***Comparison of R with RQ and comparison of C and R with CQ and RQ, P < 0.001. Statistical interactions: NOX-2 (Gp91phox, P = 0.5; P22phox, P = 0.3; P47phox, P = 0.4; P67phox, P = 0.01; P40phox, P = 0.3), CYP2E1 (P = 0.5), ETC activities (complex I, P = 0.9; complexes II–III, P = 0.6; complex IV, P = 0.9), 4-HNE (P = 0.04), and 3-NT (P = 0.8). 
Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; CS, citrate synthase; CYP2E1, cytochrome P450-2E1; ETC, electron transport chain; NOX-2, NAD(P)H oxidase 2; R, recuperated; RQ, recuperated CoQ10; 3-NT, 3-nitrotyrosine; 4-HNE, 4-hydroxynonenal. CoQ10 supplementation alters hepatic antioxidant defense capacity: Nuclear factor erythroid 2–like 2 (Nrf2), heme oxygenase 1 (Hmox1), and glutathione synthetase (Gst) expression were not significantly different between control and recuperated offspring (Figure 4A, B). Glutathione peroxidase 1 (Gpx1) was reduced in recuperated offspring compared with controls, and this effect was prevented by CoQ10 supplementation (interaction between maternal diet and CoQ10 supplementation, P < 0.05). NAD(P)H dehydrogenase, quinone 1 (Nqo1), was reduced in recuperated offspring (effect of maternal diet, P < 0.05) (Figure 4A, B). CoQ10 supplementation increased Nrf2 (P < 0.001), Hmox1 (P < 0.05), and Gst (P < 0.001) (Figure 4A, B); however, Nqo1 expression was reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.001) (Figure 4A). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on mRNA expression of molecules involved in the NRF antioxidant defense pathway in 12-mo-old male rat livers. (A) Nrf2, Hmox1, and Nqo1 and (B) Gst and Gpx1. Values are means ± SEMs; n = 10/group; 10/10 rats used. *,***C and R compared with CQ and RQ: *P < 0.05, ***P < 0.001. *C compared with R and C compared with CQ, P < 0.05. ***R compared with RQ, P < 0.001. Statistical interactions: Nrf2, P = 0.9; Hmox1, P = 0.6; Nqo1, P = 0.4; Gst, P = 0.1; Gpx1, P = 0.01. Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate.
C, control; CoQ10, coenzyme Q; CQ, control CoQ10; Gpx1, glutathione peroxidase 1; Gst, glutathione synthetase; Hmox1, heme oxygenase 1; Nqo1, NAD(P)H dehydrogenase, quinone 1; NRF, nuclear erythroid 2-related factor; Nrf2, nuclear factor, erythroid 2–like 2; R, recuperated; RQ, recuperated CoQ10. CoQ10 supplementation alters expression of molecules involved in insulin and lipid metabolism: Greater serum insulin concentrations were observed in recuperated offspring than in controls (overall effect of maternal diet, P < 0.05) (Figure 5A). Concentrations were reduced by CoQ10 supplementation (effect of CoQ10 supplementation, P < 0.01) (Figure 5A). Protein expression of IR-β (P < 0.001), IRS-1 (P < 0.001), and Akt-1 (P < 0.05) was reduced in recuperated offspring compared with controls (all effects of maternal diet). Phosphoinositide-3-kinase-p110-β (p110-β), phosphoinositide-3-kinase-p85α (p85-α), and Akt-2 were not influenced by maternal diet (Figure 5B). CoQ10 supplementation increased p110-β (P < 0.05) and Akt-2 (P < 0.01) protein expression (Figure 5B). Fasting plasma glucose concentrations were not significantly different between groups (Table 3). Serum and hepatic triglyceride concentrations and serum cholesterol concentrations were not significantly different between control and recuperated offspring (Table 3). CoQ10 supplementation increased serum triglyceride and cholesterol concentrations (effects of CoQ10 supplementation, P < 0.05); however, hepatic triglyceride concentrations were unchanged (Table 3). Effect of in utero protein restriction, accelerated postnatal growth, and CoQ10 supplementation on serum insulin (A) and insulin signaling protein expression (B) in 12-mo-old male rat livers. Values are means ± SEMs; n = 10/group; 10/10 rats used. Statistical interactions: insulin, P = 0.3; insulin signaling proteins (IR-β, P = 0.12; IRS-1, P = 0.5; p110β, P = 0.4; p85α, P = 0.3; Akt-1, P = 0.05; Akt-2, P = 0.5). 
Data were analyzed by using 2-factor ANOVA and Duncan’s post hoc testing, where appropriate. C, control; CoQ10, coenzyme Q; CQ, control CoQ10; IR-β, insulin receptor β; IRS-1, insulin receptor substrate 1; p85α, phosphoinositide-3-kinase p85-α; p110-β, phosphoinositide-3-kinase p110-β; R, recuperated; RQ, recuperated CoQ10. DISCUSSION: Our findings that a maternal low-protein diet in utero followed by accelerated postnatal growth (recuperated) confers a higher risk of oxidative damage, proinflammatory changes, and liver fibrosis suggest that the early environment is an important determinant of an individual’s risk of developing complications of fatty liver disease. The potential for progression from NAFLD to nonalcoholic steatohepatitis and finally to hepatitis, fibrosis, and cirrhosis is well described. However, it has not previously been possible to identify patients who are at high risk of these changes. Our finding that recuperated rats developed more hepatic fibrosis than did controls indicates that the early environment plays a central role in the risk of liver disease in later life. Recognizing the early environment as influential in the propensity to hepatic inflammation and fibrosis provides valuable new insight for identifying, early in life, individuals at increased risk of hepatic manifestations of the metabolic syndrome. HSCs are the main source of extracellular matrix formation during hepatic fibrosis (26). The key step in inducing fibrosis during liver injury is the transformation of quiescent HSCs into activated HSCs, which differentiate into myofibroblasts (26). We found increased expression of Acta2 and Des in recuperated offspring. Acta2 is expressed in myofibroblasts of damaged liver (27) and hence is a good marker of HSC activation. In rats, HSC activation and proliferation correlate with high expression of Des and are found in HSCs of acutely injured liver (27). KC infiltration and activation play a prominent role in HSC activation.
Increased KC infiltration coincides with the activation of HSC markers such as Acta2 (28). Clec4f (a unique KC receptor for glycoproteins and therefore a good marker of KC activation) was increased in livers of recuperated offspring. Increased concentrations of proinflammatory cytokines are also crucial in initiating HSC activation. Hepatic protein expression of Tnf-α and Il-6 was greater in recuperated offspring, suggesting that inflammation plays a role in the HSC activation and consequent hepatic fibrosis observed in our model. Tgfb1 mRNA levels were not changed in recuperated offspring; however, we cannot discount the possibility that Tgfb1 is upregulated at the protein level. Mcp1 (a chemokine that acts as a chemoattractant for HSCs) was also upregulated in recuperated livers. Mmps are also associated with hepatic fibrosis (29). Mmp9 expression was upregulated in recuperated livers. Mmp9 is prominent in scar areas of active fibrosis, and treatment with a profibrotic agent can increase its expression, with peak expression coinciding with induction of inflammatory cytokines (29). Mmp2 expression was unaltered between groups. Because Mmp expression is an early event in wound healing, the time window for Mmp2 elevation may have been missed, and a difference might be observed only in younger animals. A further driving factor in HSC activation and fibrosis is increased ROS, which can be generated by Tnf-α, Il-6 (30), KCs (28), and the mitochondrial electron transport chain. We found increased ROS in the context of increased lipid peroxidation [which is known to increase in liver disease (31)] and greater expression of the NOX-2 components (Gp91phox, P22phox, P47phox, and P67phox), a major source of hepatic ROS production, which have been observed in hepatic fibrosis (32, 33). Complex I activity, a predominant generator of ROS (34), was greater in recuperated livers.
Decreased antioxidant defense capacity was evidenced by a reduction in Gpx1, a peroxidase responsible for the conversion of H2O2 into H2O and O2. Increased concentrations of cellular H2O2, due to Gpx1 depletion, could cause accumulation of the hydroxyl radical, a free radical that can directly increase lipid peroxidation. Accumulation of hepatic triglycerides also plays a role in hepatic fibrosis (35); however, neither liver nor plasma triglycerides were altered in recuperated rats. This may be explained by the fact that recuperated offspring are fed a standard feed-pellet diet and do not display an obesogenic phenotype. This in itself is interesting, because it shows that the observed deleterious liver phenotypes develop in a physiologic environment that had been influenced only by developmental programming per se, and not by obesity. Insulin resistance and hyperinsulinemia are also major contributors to liver fibrosis (36) and are inherently linked to increased oxidative stress. Recuperated offspring had whole-body insulin resistance, as indicated by hyperinsulinemia. The hyperinsulinemia was associated with hepatic insulin signaling protein dysregulation, as shown by the downregulation of IR-β, IRS-1, and Akt-1. Importantly, we identified an effective means of arresting the pathologic progression of NAFLD: postnatal supplementation with CoQ10. In recuperated offspring, CoQ10 supplementation reduced markers of HSC and KC activation, the accumulation of ROS, and the deposition of collagen around the hepatic vessels. This agrees with a study in which 1% CoQ10 supplementation of high-fat-diet-fed mice reduced hepatic NOX expression (37). CoQ10 supplementation also increased the activity of complex IV, in keeping with in vitro studies (38). CoQ10 supplementation decreased Tnf-α, Il-6, Tgfb1, and Mcp1, suggesting that CoQ10 can also reduce inflammatory changes in the liver, which is consistent with studies in mouse liver (37) and human blood (39). 
Our data therefore recapitulate CoQ10’s function both as a potent antioxidant (15) and as an anti-inflammatory agent. CoQ10 supplementation also prevented hyperinsulinemia in recuperated rats; however, hepatic insulin signaling protein dysregulation was not normalized by CoQ10 supplementation. Whole-body insulin sensitivity may therefore be improved through other mechanisms, such as improvements in muscle and/or adipose tissue insulin sensitivity. CoQ10 may exert antifibrotic effects through activation of the Nrf2/antioxidant response element (Nrf2/ARE) pathway. Nrf2 is a transcription factor that responds to oxidative status and regulates the transcription of genes involved in antioxidant defense. CoQ10 treatment in a model of hepatic fibrosis ameliorates liver damage via suppression of Tgfb1 and upregulation of Nrf2/ARE-associated genes (40). Although Nrf2 expression was not affected by maternal diet, CoQ10 supplementation increased Nrf2 by 4-fold. The antioxidant genes involved in the Nrf2/ARE pathway (Hmox1, Gst, and Gpx1) were increased by CoQ10 supplementation. This suggests that CoQ10 supplementation upregulates the Nrf2/ARE pathway via suppression of Tgfb1 (40). These observations support a role for CoQ10 supplementation in increasing antioxidant defenses to a protective level in animals that have experienced detrimental catch-up growth (18, 19). Because Nqo1 activity is known to prevent the one-electron reduction of quinones, it is plausible that, with hepatic CoQ10 concentrations elevated by CoQ10 supplementation, Nqo1 expression is not required and is thus suppressed. In conclusion, a suboptimal early-life environment combined with a mismatched postnatal milieu predisposes offspring to increased hepatic ROS, inflammation, and hyperinsulinemia, leading to hepatic fibrosis. This recapitulates the changes seen in patients in whom benign NAFLD progresses to cirrhosis and ultimately liver failure. 
We suggest that the early-life environment is crucial in identifying the subgroup of patients at highest risk of such progression; however, this should be tested in humans. A clinically relevant dose of CoQ10 reversed liver fibrosis via downregulation of ROS, inflammation, and hyperinsulinemia and upregulation of the Nrf2/ARE antioxidant pathway. Because fibrosis contributes to up to 45% of deaths in the industrialized world (41), CoQ10 supplementation may be a cost-effective and safe way of reducing this global burden in at-risk individuals, before the development of an NAFLD phenotype.
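The figure legend above notes that data were analyzed by 2-factor ANOVA (here, maternal diet × CoQ10 supplementation) followed by post hoc testing. As an illustration only (not the authors' code, and with made-up numbers), a balanced two-factor ANOVA can be computed from sums of squares as follows:

```python
import numpy as np

def two_factor_anova(data):
    """Balanced two-factor ANOVA.

    data: array of shape (levels_A, levels_B, replicates), e.g. maternal
    diet (control/recuperated) x supplementation (none/CoQ10).
    Returns F statistics for each main effect and the interaction.
    """
    a, b, n = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_a = b * n * ((data.mean(axis=(1, 2)) - grand) ** 2).sum()
    ss_b = a * n * ((data.mean(axis=(0, 2)) - grand) ** 2).sum()
    ss_cells = n * ((data.mean(axis=2) - grand) ** 2).sum()
    ss_ab = ss_cells - ss_a - ss_b          # interaction sum of squares
    ss_error = ss_total - ss_cells          # within-cell variation
    df_a, df_b = a - 1, b - 1
    df_ab, df_error = df_a * df_b, a * b * (n - 1)
    ms_error = ss_error / df_error
    return {
        "F_A": (ss_a / df_a) / ms_error,
        "F_B": (ss_b / df_b) / ms_error,
        "F_AB": (ss_ab / df_ab) / ms_error,
    }

# Hypothetical measurements: 2 diets x 2 supplements x 2 animals per cell.
example = np.array([[[10, 12], [11, 13]],
                    [[14, 16], [15, 17]]], dtype=float)
result = two_factor_anova(example)
```

With these made-up numbers the diet main effect dominates (F_A = 16.0) and the interaction is absent (F_AB = 0.0); p-values would then come from the F distribution with the corresponding degrees of freedom, with a post hoc test such as Duncan's applied where an effect is significant.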
Background: It is well established that low birth weight and accelerated postnatal growth increase the risk of liver dysfunction in later life. However, molecular mechanisms underlying such developmental programming are not well characterized, and potential intervention strategies are poorly defined. Methods: A rat model of maternal protein restriction was used to generate low-birth-weight offspring that underwent accelerated postnatal growth (termed "recuperated"). These were compared with control rats. Offspring were weaned onto standard feed pellets with or without dietary CoQ10 (1 mg/kg body weight per day) supplementation. At 12 mo, hepatic fibrosis, indexes of inflammation, oxidative stress, and insulin signaling were measured by histology, Western blot, ELISA, and reverse transcriptase-polymerase chain reaction. Results: Hepatic collagen deposition (diameter of deposit) was greater in recuperated offspring (mean ± SEM: 12 ± 2 μm) than in controls (5 ± 0.5 μm) (P < 0.001). This was associated with greater inflammation (interleukin 6: 38% ± 24% increase; P < 0.05; tumor necrosis factor α: 64% ± 24% increase; P < 0.05), lipid peroxidation (4-hydroxynonenal, measured by ELISA: 0.30 ± 0.02 compared with 0.19 ± 0.05 μg/mL per μg protein; P < 0.05), and hyperinsulinemia (P < 0.05). CoQ10 supplementation increased (P < 0.01) hepatic CoQ10 concentrations and ameliorated liver fibrosis (P < 0.001), inflammation (P < 0.001), some measures of oxidative stress (P < 0.001), and hyperinsulinemia (P < 0.01). Conclusions: Suboptimal in utero nutrition combined with accelerated postnatal catch-up growth caused more hepatic fibrosis in adulthood, which was associated with higher indexes of oxidative stress and inflammation and hyperinsulinemia. CoQ10 supplementation prevented liver fibrosis accompanied by downregulation of oxidative stress, inflammation, and hyperinsulinemia.
15,652
365
20
[ "coq10", "supplementation", "coq10 supplementation", "recuperated", "protein", "05", "diet", "figure", "control", "effect" ]
[CONTENT] developmental programming | liver disease | coenzyme Q | low birth weight | accelerated postnatal growth [SUMMARY]
[CONTENT] Animals | Anti-Inflammatory Agents, Non-Steroidal | Cytokines | Diet, Protein-Restricted | Dietary Supplements | Female | Fetal Development | Fetal Growth Retardation | Hepatitis | Hyperinsulinism | Liver | Liver Cirrhosis | Male | Malnutrition | Maternal Nutritional Physiological Phenomena | Oxidative Stress | Pregnancy | Pregnancy Complications | Rats, Wistar | Specific Pathogen-Free Organisms | Ubiquinone | Weaning [SUMMARY]
[CONTENT] nafld | suboptimal | progression | liver | fibrosis | coq10 | isoprenoid | hepatic | humans | metabolic [SUMMARY]
[CONTENT] protein | litter | 20 | diet | dams | ec | pups | performed | 18 | reverse [SUMMARY]
[CONTENT] coq10 | supplementation | coq10 supplementation | figure | recuperated | 05 | 001 | effect | offspring | recuperated offspring [SUMMARY]
Views of patients with obesity on person-centred care: A Q-methodology study.
36177904
To better accommodate patients with obesity, the adoption of a person-centred approach to healthcare seems to be imperative. Eight dimensions are important for person-centred care (PCC): respect for patients' preferences, physical comfort, the coordination of care, emotional support, access to care, the continuity of care, the provision of information and education, and the involvement of family and friends. The aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC.
INTRODUCTION
Q methodology was used to study the viewpoints of 21 patients with obesity on PCC. Respondents were asked to rank 31 statements about the eight dimensions of PCC by level of personal significance. Using by-person factor analysis, distinct viewpoints were identified. Respondents' comments made while ranking were used to verify and refine the interpretation of the viewpoints.
METHODS
Five distinct viewpoints were identified: (1) 'someone who listens in an unbiased manner', (2) 'everything should run smoothly', (3) 'interpersonal communication is key', (4) 'I want my independence', and (5) 'support for myself and my loved ones'. Viewpoint 1 was supported by the largest number of respondents and explained the most variance in the data, followed by viewpoint 3 and the other viewpoints, respectively.
RESULTS
Our findings highlight the need for tailored care in obesity treatment and shed light on aspects of care and support that are most important for patients with obesity.
CONCLUSION
[ "Humans", "Patient-Centered Care", "Patient Preference", "Factor Analysis, Statistical", "Self Care", "Obesity" ]
9700190
INTRODUCTION
Over the past four decades, the global prevalence of obesity has nearly tripled. 1 , 2 The World Health Organization defines obesity as an excessive accumulation of body fat that poses a threat to health. 3 Living with obesity seriously impairs physical and psychosocial functioning, resulting in a reduced quality of life. 4 Obesity also increases the risk of developing other serious health conditions, such as type 2 diabetes, cardiovascular diseases, several types of cancer, and many other diseases. 5 Consequently, obesity, and especially severe obesity is associated with increases in healthcare utilization and expenditures, as well as substantial societal costs due to productivity losses. 6 , 7 Although many health institutions have recognized it as a chronic disease, 8 healthcare systems seem poorly prepared to meet the needs of patients living with obesity. Clinical guidelines for the treatment of these patients are often too simplistic, focusing merely on weight loss instead of the improvement of overall health and well‐being. 9 As a result, individual circumstances, including contributing factors and underlying diseases, are often overlooked. 10 Furthermore, patients with obesity often experience weight‐related stigma and discrimination in healthcare, which can affect the quality of their care and their treatment outcomes. 11 , 12 For instance, some healthcare professionals view patients with obesity more negatively than other patients and spend less time treating them. 13 Healthcare professionals may also be insufficiently equipped or educated to perform standard medical procedures on patients with obesity. 14 To better accommodate patients with obesity, the adoption of a person‐centred approach in which care is tailored to the individual and individuals' preferences, needs, and values are respected seems to be imperative. 
15 Person‐centred care (PCC) can be seen as a paradigm shift in healthcare that has been gaining broad support with the increasing interest in the quality of care. 16 , 17 The Picker Institute distinguishes eight dimensions that are important for PCC: respect for patients' preferences, physical comfort, coordination of care, emotional support, access to care, continuity of care, the provision of information and education, and the involvement of family and friends. 18 , 19 An overview of these dimensions can be found in Table 1 (‘The eight dimensions of PCC’; abbreviation: PCC, person‐centred care). PCC has been associated with improved patient outcomes in various healthcare settings, 26 including the provision of care to patients with obesity. 27 However, the relative importance of the different aspects of PCC seems to vary among patient groups. 28 , 29 Although aspects of care that may be important specifically for patients with obesity have been identified, the significance of the eight dimensions of PCC for patients with obesity has not been assessed. Gaining insight into the aspects of PCC that are most important to this patient group is a vital step toward improved care provision, and consequently improved quality of care and patient outcomes. Thus, the aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC.
METHODS
Q methodology To examine the views of patients with obesity on what is important for PCC, the mixed‐method Q methodology was used. Q methodology may be best described as an inverted factor analytic technique for the systematic study of subjective viewpoints. 30 Q‐methodology research aims to identify and discern views on a specific topic, rather than determine the prevalence of these viewpoints. In a Q‐methodology study, respondents are asked to rank a set of statements about the study subject. Using by‐person factor analysis, in which the respondents are treated as variates, distinct viewpoints are identified. Q methodology has been used to examine the views of patients and professionals, such as patients with multimorbidity, 28 those with end‐stage renal disease, 29 and professionals and volunteers providing palliative care, 31 on what is important for PCC. Respondents As our goal was to obtain a wide breadth of views on what is important for PCC for patients with obesity, we recruited respondents varying in terms of gender, age, educational background, marital status, and health literacy. Eligible patients were over the age of 18 years and had body mass indices (BMIs) of at least 40 kg/m2, which defines severe obesity. This obesity threshold was chosen because it is associated with the most healthcare utilization and greatest health risks. 5 , 6 Practitioners working in the internal medicine departments of four hospitals in the area of Rotterdam, the Netherlands, informed patients about the study. In the Netherlands, access to nonurgent hospitals or specialty care requires a referral from a general practitioner (GP). 32 Recruitment through hospitals thus ensured that respondents were familiar with both specialty and primary care (e.g., GP visitation), characteristic of care provision for patients with severe obesity. 6 , 24 Data collection took place between April and October 2021. Twenty‐six eligible patients gave consent to be contacted to receive detailed study information and schedule an appointment. Of the 26 patients that were contacted, 3 were unable to schedule appointments and 2 could not be reached by the researcher. This led to the inclusion of 21 patients in the study, which is a typical sample size for a Q‐methodology study. 30 Q‐methodological research requires only a small number of purposively selected respondents to represent the breadth of views in a population. 30 Consultation of the literature and the expert opinion of a professor of obesity and stress research who is involved in the treatment of patients with obesity revealed no evidence of missing viewpoints. Statements To capture the full range of possible views on a specific topic, the statements in a Q‐methodology study should have good coverage of the subject of interest. 30 The eight dimensions of PCC provided by the Picker Institute were used as a conceptual framework for this study. 18 , 19 First, statements from previous studies in which the same framework was used to investigate the views of patients or professionals on what is important for PCC were collected. 
28 , 29 , 31 Further statement selection was informed by various sources covering the care and support needs of patients with obesity, such as scientific articles 23 , 33 and clinical guidelines, 34 as well as the autobiographies and social media posts of individuals living with obesity. In an iterative process, all members of the research team, including an internist‐endocrinologist who is a professor in the field of obesity and stress research and involved in clinical care provided to patients with obesity, generated, reviewed, and revised statements. A final set of 31 statements was constructed and pilot tested with three respondents fulfilling our inclusion criteria. Based on the pilot testing results, a few adjustments to the phrasing of some statements were made (see Supporting Information: Appendix A). No substantive change was required, and no missing statement was revealed. The final statement set is provided in Table 2. Because no substantial change was made to the statement set, the pilot data were included in the analyses conducted for this study. Statements and factor arrays Viewpoints: 1, ‘someone who listens in an unbiased manner’; 2, ‘everything should run smoothly’; 3, ‘interpersonal communication is key’; 4, ‘I want my independence’; 5, ‘support for myself and my loved ones’. Data collection Data collection took place in an online environment using video conferencing software; the process lasted approximately 60 min per respondent. One researcher guided the respondents' ranking of statements. All sessions were audio recorded with respondents' informed consent. First, the respondents answered basic demographic questions and filled in the Set of Brief Screening Questions (SBSQ) as an assessment of health literacy. 35 Low health literacy was defined as an average SBSQ score of 2 or lower. Next, the respondents were asked to carefully read the statements about aspects of PCC, displayed on the screen one by one in random order using the HtmlQ software, 36 and to sort them into ‘important’, ‘neutral’, and ‘unimportant’ piles. The researcher then asked the respondents to rank the statements in each pile according to their personal significance using a forced sorting grid with a scale ranging from +4 (most important) to –4 (most unimportant; Figure 1). While ranking, the respondents were encouraged to speak out loud about their views; after completing the ranking, they were asked to elaborate on their placement of the statements. All comments made by the respondents during and after the ranking process were transcribed verbatim. Sorting grid. 
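Two quantitative details of the data-collection procedure can be sketched in code: the SBSQ cut-off (an average item score of 2 or lower flags low health literacy) and the forced sorting grid (31 statements over a −4..+4 scale). This is an illustrative sketch, not the study's software; in particular, the per-column counts of the grid are not reported in the text, so the quasi-normal distribution below is an assumption that merely sums to 31.

```python
from collections import Counter

def low_health_literacy(sbsq_scores, cutoff=2.0):
    """Study's rule: an average SBSQ score of 2 or lower means low health literacy."""
    return sum(sbsq_scores) / len(sbsq_scores) <= cutoff

# Assumed column heights (statements per score) for the forced grid; only the
# range (-4..+4) and the total (31 statements) are given in the text.
GRID_COUNTS = {-4: 2, -3: 3, -2: 4, -1: 4, 0: 5, 1: 4, 2: 4, 3: 3, 4: 2}

def is_valid_q_sort(ranking):
    """Check that a {statement_id: score} ranking fills the forced grid exactly."""
    return dict(Counter(ranking.values())) == GRID_COUNTS
```

A complete ranking must place exactly the prescribed number of statements in each column before it can enter the factor analysis.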
Statistical analysis To identify distinct viewpoints on what is important for PCC for patients with obesity, the rankings of the 21 respondents were intercorrelated and subjected to by‐person factor analysis using the PQMethod software. 37 Clusters in the data were identified using centroid factor extraction and varimax rotation. Potential factor solutions were evaluated by considering the total of associated respondents at a significance level of 0.05 (i.e., a factor loading of ±0.42), upholding a minimum of two associated respondents per factor, and the percentage of explained variance. Fulfilment of the Kaiser–Guttman criterion, which suggests that only factors with eigenvalues of 1.0 or more be retained, was examined. 38 , 39 To finalize our decision on the number of factors to retain, qualitative data (i.e., comments made by the respondents during and after ranking) were considered. For each factor or viewpoint, the rankings of associated respondents were merged by calculating weighted averages, thereby forming a ‘factor array’ that depicted how a typical respondent holding that viewpoint would rank the statements. As our aim was to gain a broad understanding of respondents' diverse viewpoints, our interpretation was based on these factor arrays. For each viewpoint, statements ranked as most important (+3 and +4) and most unimportant (–3 and –4) and distinguishing statements (ranked significantly higher or lower than in other viewpoints) were inspected. The qualitative data were used to verify and refine our interpretation of the viewpoints.
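The analysis pipeline described above (intercorrelate the Q-sorts, extract factors, varimax-rotate, and flag respondents whose loadings exceed ±0.42) can be approximated in a few lines. This is a hedged sketch rather than a reimplementation of PQMethod: PQMethod uses centroid extraction, whereas principal components are used here as a stand-in, and all function names are ours.

```python
import numpy as np

def by_person_loadings(sorts, n_factors):
    """Load respondents (columns of `sorts`) onto principal components.

    sorts: (n_statements, n_respondents) matrix of Q-sort scores.
    """
    corr = np.corrcoef(sorts, rowvar=False)        # respondent x respondent
    eigvals, eigvecs = np.linalg.eigh(corr)
    top = np.argsort(eigvals)[::-1][:n_factors]    # largest eigenvalues first
    return eigvecs[:, top] * np.sqrt(eigvals[top])

def varimax(loadings, max_iter=100, tol=1e-8):
    """Kaiser's varimax: orthogonal rotation maximizing squared-loading variance."""
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() <= total * (1 + tol):
            break
        total = s.sum()
    return loadings @ rotation

def significant_respondents(loadings, threshold=0.42):
    """Indices of respondents with |loading| >= 0.42 on at least one factor."""
    return np.where((np.abs(loadings) >= threshold).any(axis=1))[0]
```

Because varimax is an orthogonal rotation, each respondent's communality (row sum of squared loadings) is unchanged by it; factor arrays would then be formed as loading-weighted averages of the sorts associated with each factor.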
RESULTS
Twenty‐one respondents completed the ranking (Table 3). The analysis revealed five factors, or distinct viewpoints, that together explained 48% of the variance in the data. Data from 17 (81%) of the 21 respondents were associated significantly with one of the five viewpoints (p ≤ .05). Data from two respondents were associated with two viewpoints each, and those from two respondents were not associated significantly with any factor. All viewpoints were supported by at least two respondents; viewpoints 1 and 3 were supported by 7 and 4 respondents, respectively. Viewpoint 5 had an eigenvalue of 0.95, just below the Kaiser–Guttman cut‐off of 1.0, but the qualitative data indicated that it was meaningful and distinguishable from the other viewpoints. The degree of correlation between viewpoints was low to moderate (r = –.15 to .37). The factor arrays for the five viewpoints are provided in Table 2.

Demographic characteristics of respondents (n = 21)

Viewpoint 1: ‘someone who listens in an unbiased manner’

Viewpoint 1 accounted for the most explained variance (17%) in this study. The PCC dimensions most characterizing this viewpoint are ‘respect for patients’ preferences’ and ‘emotional support’. Central to this viewpoint was respondents’ desire to be seen and heard like any other patient without obesity. These patients wish to be treated with dignity and respect (statement 1, +4). Respondent 8 stated ‘You just want to be taken seriously. We are all human, that includes people who are overweight’. They often feel misunderstood because healthcare professionals blame all of their health issues on their weight [‘You fight against a judgment that you cannot get out of. They do not even examine me. Right off the bat they go: “I can refer you for a stomach reduction”’ (Respondent 18)]. To get the care and support that suits their needs, these patients believe that unbiased healthcare professionals (statement 2, +3) who genuinely listen (statement 14, +4) are crucial. Respondent 13 stated ‘That they look further than your weight, that is the most important thing to me. That it is not like everything that is wrong with you is because of your weight’. They want healthcare professionals to provide emotional support and acknowledge the impact of their health problems on their life [statement 16, +3; ‘I have three small children and it is really hard for me to do things with them just because I am overweight’ (Respondent 6)]. They seek recognition for the complexity of their condition. Respondent 8 stated ‘Recognition that obesity is a disease and it should be treated that way is very important’. To remain in charge of their care, these patients want to be involved in decisions (statement 4, +3), while leaving friends and family members out of the decision‐making process [statement 29, –4; ‘No, I do not think that is important. I decide what I want’ (Respondent 6)]. Respondents holding this viewpoint ranked all statements covering the ‘involvement of friends and family’ dimension as least important.

Viewpoint 2: ‘everything should run smoothly’

Viewpoint 2 accounted for 8% of the explained variance. Patients holding this viewpoint seek well‐coordinated care and advice (statement 12, +4) and the proper transfer of information in case of referral (statement 23, +4). Respondent 3 stated ‘The doctors have to agree on what is the best option for me’. Furthermore, they desire easily accessible care with short wait times [statement 17, +3; ‘That it will not be a lengthy process before I can be helped’ (Respondent 16)]. These patients would also like healthcare professionals to consider their physical comfort by attending to problems with physical activity [statement 8, +3; ‘Stairs are very much a no go for me and it is important that they know that’ (Respondent 16)]. However, they consider other aspects of physical comfort, such as waiting areas and treatment rooms that are comfortable (statement 9, –3) or provide enough privacy (statement 10, –3), to be less important. Respondent 16 stated ‘When I weighed 127 kilos at my heaviest, the seats were a bit uncomfortable, but I do not have that problem now’. In contrast to those holding viewpoint 1, patients holding viewpoint 2 do not mind if care does not align with their own preferences [statement 5, –4; ‘I do not think that your preferences should be taken into account in a hospital or with a doctor because as human beings we can have a lot of preferences that do not really apply’ (Respondent 16)]. They emphasize their own responsibility for getting the care they need [‘Right now in the Netherlands, you get the right care. As a patient, you also need to be somewhat well‐informed yourself’ (Respondent 16)]. They believe that being well prepared avoids the need for lengthy appointments (statement 18, –4). Respondent 3 stated ‘If I have a question, I just ask it. And if I did not understand something or if I forgot something […] I can just call and ask’.

Viewpoint 3: ‘interpersonal communication is key’

Viewpoint 3 accounted for 10% of the explained variance. It focuses on the exchange of information among all involved parties. Patients holding this viewpoint want to know what to expect, and thus value information about all aspects of their care (statement 25, +3), including information about referrals (statement 22, +3), very highly [‘Because I want to know where I stand, what will happen and what is needed’ (Respondent 10)]. They believe that a good explanation is needed to properly understand information (statement 27, +3). Respondent 7 stated ‘I often feel a bit overwhelmed during consultations. That things are being said for which I was not fully prepared. I sometimes think afterwards, “have I understood everything that has been said?”’. These patients believe that having sufficient time during appointments is a prerequisite for the proper exchange of information (statement 18, +4). They often leave consultations feeling dissatisfied because of unanswered questions [‘You just notice that they are under time pressure, that it should all happen quickly. You hardly have time for questions, so you do not leave with a good feeling’ (Respondent 10)]. Similarly to those holding viewpoint 2, these patients value the coordination of care and advice among practitioners highly (statement 12, +4). They specifically dislike treatment plans that conflict with each other [‘It is important that one practitioner also knows what the other practitioner is doing and that it fits together’ (Respondent 7)]. In contrast to those holding viewpoint 1, these patients prefer care and support of an informative nature, rather than attention to the emotions that they might be experiencing (statement 15, –4). Respondent 1 stated ‘Things like quality of care are much more important to me than people sitting down to listen to emotions or something like that. To me, emotions and scientific correctness often clash’. Similarly to those holding viewpoint 2, they do not mind if care does not align well with their preferences [statement 5, –3; ‘For me it is really about that the care is good and that it is the best, even if I do not prefer it’ (Respondent 1)].

Viewpoint 4: ‘I want my independence’

Viewpoint 4 accounted for 7% of the explained variance. The aim of remaining independent is central to this viewpoint. In contrast to those holding viewpoints 1–3, patients holding viewpoint 4 want to focus on what they can do on their own (statement 6, +2), as they believe that this will preserve their quality of life [statement 3, +3; ‘I think it is important that I can and may continue to do a lot independently’ (Respondent 17)]. In line with this focus, these patients want healthcare professionals to attend to their problems with physical activity (statement 8, +4). Respondent 17 stated ‘I think it is very important to work on this [problems with physical activity] as much as possible and to expand what is possible to do myself’. Although these respondents seek independence, they highly value knowing where to go for care and support after treatment (statement 24, +4). They are willing to take the lead, provided that they know where they can go for support. Respondent 4 stated ‘That you have a telephone number and that you can call them with questions or if anything is unclear. I find accessibility very important’. To facilitate independence, they also prefer to be well informed about all aspects of their care (statement 25, +3) and appreciate easy access to their own medical data (statement 26, +2). However, these patients do not require a good explanation of all information provided to them (statement 27, −2), as they have no difficulty understanding their medical data [‘I have been walking in and out of hospitals for so long, most of it is self‐evident’ (Respondent 17)]. In contrast to those holding viewpoints 1–3, patients holding viewpoint 4 find other aspects of continuity of care, such as being well informed during referrals (statement 22, –3) and the proper transfer of information upon referral (statement 23, –2), to be less important. They do not mind asking questions or re‐sharing information with professionals [‘I can also tell it myself and I can ask for everything I need and I always do that’ (Respondent 4)].

Viewpoint 5: ‘support for myself and my loved ones’

Viewpoint 5 accounted for 5% of the explained variance. This viewpoint is distinguished by an emphasis on the supporting roles of family members and friends. Patients holding this viewpoint seek support from their loved ones and help from healthcare professionals in obtaining it [statement 31, +3; ‘I am married and I want help from my husband because he really knows a lot about me’ (Respondent 20)]. They also value their autonomy highly; they want to be informed about all aspects of their care (statement 25, +4) and involved in decisions (statement 4, +3). Respondent 20 stated ‘I do not like them talking about me behind my back’. Similarly to those with viewpoint 1, patients with viewpoint 5 consider being treated with dignity and respect (statement 1, +4) to be one of the most important aspects of PCC [‘Everyone has the right to be treated with respect and receive proper care’ (Respondent 5)]. They value comfortable waiting areas and treatment rooms (statement 9, +1) more than patients with other viewpoints, as they appreciate their personal space. Respondent 20 stated ‘I do not think it is necessary that they sit right on top of me in treatment rooms’. Compared with patients with other viewpoints, those with viewpoint 5 consider some aspects of PCC to be out of reach, and thus rank them as less important. For example, they accept that money may sometimes be a problem [statement 20, –4; ‘Money comes, money goes. It just makes some things a little easier, but if you do not have it, you do not have it’ (Respondent 5)], and they believe that receiving treatment only from unbiased healthcare professionals is not realistic [statement 2, –3; ‘It is not realistic because that [stigmatisation from healthcare professionals] happens, whether you like it or not’ (Respondent 5)].
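The factor-analytic quantities reported above — respondent loadings, the p ≤ .05 significance cut-off, and factor arrays formed as loading-weighted averages of associated respondents' rankings — follow standard Q-methodology formulas. The sketch below is a minimal illustration of those two calculations, not the PQMethod pipeline the study actually used; the example rankings, loadings, and grid values are invented, and the conventional p < .05 threshold (1.96/√n) differs slightly from the ±0.42 criterion the study reports, which implies a stricter z value.

```python
import numpy as np

def loading_threshold(n_statements, z=1.96):
    # Conventional significance cut-off for a factor loading:
    # z / sqrt(number of statements). With z = 1.96 this is the p < .05 level.
    return z / np.sqrt(n_statements)

def factor_array(rankings, loadings, grid_values):
    # Build a factor array: merge the rankings of respondents associated
    # with a factor into one idealized ranking, weighting each respondent
    # by their (assumed positive) factor loading, then force the result
    # back onto the sorting-grid values.
    w = loadings / loadings.sum()
    avg = w @ rankings                        # weighted mean score per statement
    order = np.argsort(np.argsort(-avg))      # 0 = highest weighted average
    sorted_grid = np.sort(grid_values)[::-1]  # grid slots, best first
    return sorted_grid[order]

# Tiny invented example: 2 respondents ranking 4 statements on a -1..+1 grid.
rankings = np.array([[1, 0, -1, 0],
                     [1, -1, 0, 0]])
loadings = np.array([0.8, 0.6])
array = factor_array(rankings, loadings, np.array([-1, 0, 0, 1]))
```

With 31 statements, `loading_threshold(31)` is ≈ 0.35, which is why the study's ±0.42 must reflect a more conservative choice than the textbook p < .05 formula.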
CONCLUSION
Five distinct views on what is important for PCC for patients with obesity were identified. The viewpoint ‘someone who listens in an unbiased manner’ was supported by the largest number of respondents. With these findings, we have begun to shed light on the commonalities in the views of patients living with obesity on PCC. Our data show that views on what care and support should look like for patients with obesity vary, stressing the need for tailored care. Future research may build on and expand our study's findings by considering additional factors that might influence how patients with obesity view PCC. For instance, experiences of weight stigmatization in healthcare may lead to different perspectives on what is most important in care and support. Furthermore, we explored the views of patients living with severe obesity. Future studies might examine the views of patients living with less severe obesity and explore to what extent their views on PCC differ. The views described in this paper provide valuable insight into the perspective of patients living with obesity on what is most important in care and support. Importantly, this knowledge helps us to understand what PCC provision for patients with obesity might entail and may help organizations arrange care accordingly. For example, some patients may benefit greatly from a high level of emotional support, while others will respond better to care and support that is centred around patient education or self‐management.
INTRODUCTION

Over the past four decades, the global prevalence of obesity has nearly tripled.1, 2 The World Health Organization defines obesity as an excessive accumulation of body fat that poses a threat to health.3 Living with obesity seriously impairs physical and psychosocial functioning, resulting in a reduced quality of life.4 Obesity also increases the risk of developing other serious health conditions, such as type 2 diabetes, cardiovascular diseases, several types of cancer, and many other diseases.5 Consequently, obesity, and especially severe obesity, is associated with increases in healthcare utilization and expenditures, as well as substantial societal costs due to productivity losses.6, 7 Although many health institutions have recognized it as a chronic disease,8 healthcare systems seem poorly prepared to meet the needs of patients living with obesity. Clinical guidelines for the treatment of these patients are often too simplistic, focusing merely on weight loss instead of the improvement of overall health and well‐being.9 As a result, individual circumstances, including contributing factors and underlying diseases, are often overlooked.10 Furthermore, patients with obesity often experience weight‐related stigma and discrimination in healthcare, which can affect the quality of their care and their treatment outcomes.11, 12 For instance, some healthcare professionals view patients with obesity more negatively than other patients and spend less time treating them.13 Healthcare professionals may also be insufficiently equipped or educated to perform standard medical procedures on patients with obesity.14

To better accommodate patients with obesity, the adoption of a person‐centred approach in which care is tailored to the individual and individuals' preferences, needs, and values are respected seems imperative.15 Person‐centred care (PCC) can be seen as a paradigm shift in healthcare that has been gaining broad support with the increasing interest in the quality of care.16, 17 The Picker Institute distinguishes eight dimensions that are important for PCC: respect for patients' preferences, physical comfort, coordination of care, emotional support, access to care, continuity of care, the provision of information and education, and the involvement of family and friends.18, 19 An overview of these dimensions can be found in Table 1.

The eight dimensions of PCC
Abbreviation: PCC, person‐centred care.

PCC has been associated with improved patient outcomes in various healthcare settings,26 including the provision of care to patients with obesity.27 However, the relative importance of the different aspects of PCC seems to vary among patient groups.28, 29 Although aspects of care that may be important specifically for patients with obesity have been identified, the significance of the eight dimensions of PCC for patients with obesity has not been assessed. Gaining insight into the aspects of PCC that are most important to this patient group is a vital step toward improved care provision, and consequently improved quality of care and patient outcomes. Thus, the aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC.

Q methodology

To examine the views of patients with obesity on what is important for PCC, the mixed‐method Q methodology was used. Q methodology may be best described as an inverted factor analytic technique for the systematic study of subjective viewpoints.30 Q‐methodology research aims to identify and discern views on a specific topic, rather than determine the prevalence of these viewpoints. In a Q‐methodology study, respondents are asked to rank a set of statements about the study subject. Using by‐person factor analysis, in which the respondents are treated as variates, distinct viewpoints are identified. Q methodology has been used to examine the views of patients and professionals — such as patients with multimorbidity,28 those with end‐stage renal disease,29 and professionals and volunteers providing palliative care31 — on what is important for PCC.

Respondents

As our goal was to obtain a wide breadth of views on what is important for PCC for patients with obesity, we recruited respondents varying in terms of gender, age, educational background, marital status, and health literacy. Eligible patients were over the age of 18 years and had body mass indices (BMIs) of at least 40 kg/m2, which defines severe obesity. This obesity threshold was chosen because it is associated with the most healthcare utilization and greatest health risks.5, 6 Practitioners working in the internal medicine departments of four hospitals in the area of Rotterdam, the Netherlands, informed patients about the study. In the Netherlands, access to nonurgent hospital or specialty care requires a referral from a general practitioner (GP).32 Recruitment through hospitals thus ensured that respondents were familiar with both specialty and primary care (e.g., GP visitation), characteristic of care provision for patients with severe obesity.6, 24 Data collection took place between April and October 2021. Twenty‐six eligible patients gave consent to be contacted to receive detailed study information and schedule an appointment. Of the 26 patients that were contacted, 3 were unable to schedule appointments and 2 could not be reached by the researcher. This led to the inclusion of 21 patients in the study, which is a typical sample size for a Q‐methodology study.30 Q‐methodological research requires only a small number of purposively selected respondents to represent the breadth of views in a population.30 Consultation of the literature and the expert opinion of a professor of obesity and stress research who is involved in the treatment of patients with obesity revealed no evidence of missing viewpoints.

Statements

To capture the full range of possible views on a specific topic, the statements in a Q‐methodology study should have good coverage of the subject of interest.30 The eight dimensions of PCC provided by the Picker Institute were used as a conceptual framework for this study.18, 19 First, statements from previous studies in which the same framework was used to investigate the views of patients or professionals on what is important for PCC were collected.28, 29, 31 Further statement selection was informed by various sources covering the care and support needs of patients with obesity, such as scientific articles23, 33 and clinical guidelines,34 as well as the autobiographies and social media posts of individuals living with obesity. In an iterative process, all members of the research team, including an internist‐endocrinologist who is a professor in the field of obesity and stress research and involved in clinical care provided to patients with obesity, generated, reviewed, and revised statements. A final set of 31 statements was constructed and pilot tested with three respondents fulfilling our inclusion criteria. Based on the pilot testing results, a few adjustments to the phrasing of some statements were made (see Supporting Information: Appendix A). No substantive change was required, and no missing statement was revealed. The final statement set is provided in Table 2. Because no substantial change was made to the statement set, the pilot data were included in the analyses conducted for this study.

Statements and factor arrays
Viewpoints: 1, ‘someone who listens in an unbiased manner’; 2, ‘everything should run smoothly’; 3, ‘interpersonal communication is key’; 4, ‘I want my independence’; 5, ‘support for myself and my loved ones’.

Data collection

Data collection took place in an online environment using video conferencing software; the process lasted approximately 60 min per respondent. One researcher guided the respondents' ranking of statements. All sessions were audio recorded with respondents' informed consent. First, the respondents answered basic demographic questions and filled in the Set of Brief Screening Questions (SBSQ) as an assessment of health literacy.35 Low health literacy was defined as an average SBSQ score of 2 or lower. Next, the respondents were asked to carefully read the statements about aspects of PCC, displayed on the screen one by one in random order using the HtmlQ software,36 and to sort them into ‘important’, ‘neutral’, and ‘unimportant’ piles. The researcher then asked the respondents to rank the statements in each pile according to their personal significance using a forced sorting grid with a scale ranging from +4 (most important) to –4 (most unimportant; Figure 1). While ranking, the respondents were encouraged to speak out loud about their views; after completing the ranking, they were asked to elaborate on their placement of the statements. All comments made by the respondents during and after the ranking process were transcribed verbatim.

Sorting grid.

Statistical analysis

To identify distinct viewpoints on what is important for PCC for patients with obesity, the rankings of the 21 respondents were intercorrelated and subjected to by‐person factor analysis using the PQMethod software.37 Clusters in the data were identified using centroid factor extraction and varimax rotation. Potential factor solutions were evaluated by considering the total number of associated respondents at a significance level of 0.05 (i.e., a factor loading of ±0.42), upholding a minimum of two associated respondents per factor, and the percentage of explained variance. Fulfilment of the Kaiser–Guttman criterion, which suggests that only factors with eigenvalues of 1.0 or more be retained, was examined.38, 39 To finalize our decision on the number of factors to retain, qualitative data (i.e., comments made by the respondents during and after ranking) were considered. For each factor or viewpoint, the rankings of associated respondents were merged by calculating weighted averages, thereby forming a ‘factor array’ that depicted how a typical respondent holding that viewpoint would rank the statements. As our aim was to gain a broad understanding of respondents' diverse viewpoints, our interpretation was based on these factor arrays. For each viewpoint, statements ranked as most important (+3 and +4) and most unimportant (–3 and –4) and distinguishing statements (ranked significantly higher or lower than in other viewpoints) were inspected. The qualitative data were used to verify and refine our interpretation of the viewpoints.
Patients holding this viewpoint want to know what to expect, and thus value information about all aspects of their care (statement 25, +3), including information about referrals (statement 22, +3), very highly [‘Because I want to know where I stand, what will happen and what is needed’ (Respondent 10)]. They believe that a good explanation is needed to properly understand information (statement 27, +3). Respondent 7 stated ‘I often feel a bit overwhelmed during consultations. That things are being said for which I was not fully prepared. I sometimes think afterwards, “have I understood everything that has been said?”’. These patients believe that having sufficient time during appointments is a prerequisite for the proper exchange of information (statement 18, +4). They often leave consultations feeling poorly because of unanswered questions. [‘You just notice that they are under time pressure, that it should all happen quickly. You hardly have time for questions, so you do not leave with a good feeling’ (Respondent 10)].\nSimilarly to those holding viewpoint 2, these patients value the coordination of care and advice among practitioners highly (statement 12, +4). They specifically dislike treatment plans that conflict with each other [‘It is important that one practitioner also knows what the other practitioner is doing and that it fits together’ (Respondent 7)].\nIn contrast to those holding viewpoint 1, these patients prefer that care and support are of an informative nature, rather than attending to emotions that they might be experiencing (statement 15, –4). Respondent 1 stated ‘Things like quality of care are much more important to me than people sitting down to listen to emotions or something like that. To me, emotions and scientific correctness often clash’. 
Similarly to those holding viewpoint 2, they do not mind if care does not align well with their preferences [statement 5, –3; ‘For me it is really about that the care is good and that it is the best, even if I do not prefer it’ (Respondent 1)].", "Viewpoint 4 accounted for 7% of the explained variance. The aim of remaining independent is central to this viewpoint. In contrast to those holding viewpoints 1–3, patients holding viewpoint 4 want to focus on what they can do on their own (statement 6, +2), as they believe that this will preserve their quality of life [statement 3, +3; ‘I think it is important that I can and may continue to do a lot independently’ (Respondent 17)]. In line with this focus, these patients want healthcare professionals to attend to their problems with physical activity (statement 8, +4). Respondent 17 stated ‘I think it is very important to work on this [problems with physical activity] as much as possible and to expand what is possible to do myself’.\nAlthough these respondents seek independence, they value knowing where to go for care and support after treatment highly (statement 24, +4). They are willing to take the lead, provided that they know where they can go for support. Respondent 4 stated ‘That you have a telephone number and that you can call them with questions or if anything is unclear. I find accessibility very important’. To facilitate independence, they also prefer to be well informed about all aspects of their care (statement 25, +3) and appreciate easy access to their own medical data (statement 26, +2). 
However, these patients do not require a good explanation of all information provided to them (statement 27, −2) as they have no difficulty understanding their medical data [‘I have been walking in and out of hospitals for so long, most of it is self‐evident’ (Respondent 17)].\nIn contrast to those holding viewpoints 1–3, patients holding viewpoint 4 find other aspects of the ‘continuity of care’, such as being well informed during referrals (statement 22, –3) and the proper transfer of information upon referral (statement 23, –2) to be less important. They do not mind asking questions or re‐sharing information with professionals [‘I can also tell it myself and I can ask for everything I need and I always do that’ (Respondent 4)].", "Viewpoint 5 accounted for 5% of the explained variance. This viewpoint is distinguished by an emphasis on the supporting roles of family members and friends. Patients holding this viewpoint seek support from their loved ones and help from healthcare professionals in obtaining it [statement 31, +3; ‘I am married and I want help from my husband because he really knows a lot about me’ (Respondent 20)]. They also value their autonomy highly; they want to be informed about all aspects of their care (statement 25, +4) and involved in decisions (statement 4, +3). Respondent 20 stated ‘I do not like them talking about me behind my back’. Similarly to those with viewpoint 1, patients with viewpoint 5 consider being treated with dignity and respect (statement 1, +4) to be one of the most important aspects of PCC [‘Everyone has the right to be treated with respect and receive proper care’ (Respondent 5)]. They value comfortable waiting areas and treatment rooms (statement 9, +1) more than patients with other viewpoints, as they appreciate their personal space. 
Respondent 20 stated ‘I do not think it is necessary that they sit right on top of me in treatment rooms’.\nCompared with patients with other viewpoints, those with viewpoint 5 consider some aspects of PCC to be out of reach, and thus rank them as less important. For example, they accept that money may be a problem sometimes [statement 20, –4; ‘Money comes, money goes. It just makes some things a little easier, but if you do not have it, you do not have it’ (Respondent 5)] and they believe that receiving treatment only from unbiased healthcare professionals is not realistic [statement 2, –3; ‘It is not realistic because that [stigmatisation from healthcare professionals] happens, whether you like it or not’ (Respondent 5)].", "Several potential limitations of this study should be considered. First, the sample of patients recruited for this study may seem to be small. However, it meets the requirements of Q methodology\n30\n and is similar to those of other studies.\n28\n, \n47\n Furthermore, consultation of the literature revealed no evidence of a missing viewpoint. Additionally, the viewpoints identified in this study were recognized by a professor of obesity and stress research who is involved in the treatment of patients with obesity and indicated that no viewpoint was missing, based on many years of clinical experience. Furthermore, the representation of the male perspective in this sample might be limited due to the male‐to‐female ratio. However, a similar ratio is seen in patients seeking obesity care.\n48\n Second, at the start of the data collection period, respondents could only participate online due to COVID‐19 pandemic precautions. 
Although we later offered the opportunity for face‐to‐face participation, this approach may have led to the underrepresentation of individuals with low health literacy, for whom digitalization can be a barrier to engagement.\n49\n However, the views of individuals with low health literacy are represented in this study, as four respondents met this criterion. Finally, our study was conducted in the Netherlands, and the identified viewpoints may not represent the views of patients living in countries with different health systems. For example, because health insurance is mandatory in the Netherlands, every resident has basic access to care. Aspects of the ‘access to care’ dimension may thus be viewed differently in countries without universal healthcare. However, Dutch health insurance does not cover all obesity treatments. For instance, most weight‐reducing medications are not covered." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Q methodology", "Respondents", "Statements", "Data collection", "Statistical analysis", "RESULTS", "Viewpoint 1: ‘someone who listens in an unbiased manner’", "Viewpoint 2: ‘everything should run smoothly’", "Viewpoint 3: ‘interpersonal communication is key’", "Viewpoint 4: ‘I want my independence’", "Viewpoint 5: ‘support for myself and my loved ones’", "DISCUSSION", "LIMITATIONS", "CONCLUSION", "CONFLICT OF INTEREST", "Supporting information" ]
[ "Over the past four decades, the global prevalence of obesity has nearly tripled.\n1\n, \n2\n The World Health Organization defines obesity as an excessive accumulation of body fat that poses a threat to health.\n3\n Living with obesity seriously impairs physical and psychosocial functioning, resulting in a reduced quality of life.\n4\n Obesity also increases the risk of developing other serious health conditions, such as type 2 diabetes, cardiovascular diseases, several types of cancer, and many other diseases.\n5\n Consequently, obesity, and especially severe obesity is associated with increases in healthcare utilization and expenditures, as well as substantial societal costs due to productivity losses.\n6\n, \n7\n Although many health institutions have recognized it as a chronic disease,\n8\n healthcare systems seem poorly prepared to meet the needs of patients living with obesity. Clinical guidelines for the treatment of these patients are often too simplistic, focusing merely on weight loss instead of the improvement of overall health and well‐being.\n9\n As a result, individual circumstances, including contributing factors and underlying diseases, are often overlooked.\n10\n Furthermore, patients with obesity often experience weight‐related stigma and discrimination in healthcare, which can affect the quality of their care and their treatment outcomes.\n11\n, \n12\n For instance, some healthcare professionals view patients with obesity more negatively than other patients and spend less time treating them.\n13\n Healthcare professionals may also be insufficiently equipped or educated to perform standard medical procedures on patients with obesity.\n14\n\n\nTo better accommodate patients with obesity, the adoption of a person‐centred approach in which care is tailored to the individual and individuals' preferences, needs, and values are respected seems to be imperative.\n15\n Person‐centred care (PCC) can be seen as a paradigm shift in healthcare that has 
been gaining broad support with the increasing interest in the quality of care.\n16\n, \n17\n The Picker Institute distinguishes eight dimensions that are important for PCC: respect for patients' preferences, physical comfort, coordination of care, emotional support, access to care, continuity of care, the provision of information and education, and the involvement of family and friends.\n18\n, \n19\n An overview of these dimensions can be found in Table 1.\nThe eight dimensions of PCC\nAbbreviation: PCC, person‐centred care.\nPCC has been associated with improved patient outcomes in various healthcare settings,\n26\n including the provision of care to patients with obesity.\n27\n However, the relative importance of the different aspects of PCC seems to vary among patient groups.\n28\n, \n29\n Although aspects of care that may be important specifically for patients with obesity have been identified, the significance of the eight dimensions of PCC for patients with obesity has not been assessed. Gaining insight into the aspects of PCC that are most important to this patient group is a vital step toward improved care provision, and consequently improved quality of care and patient outcomes. Thus, the aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC.", " Q methodology To examine the views of patients with obesity on what is important for PCC, the mixed‐method Q methodology was used. Q methodology may be best described as an inverted factor analytic technique for the systematic study of subjective viewpoints.\n30\n Q‐methodology research aims to identify and discern views on a specific topic, rather than determine the prevalence of these viewpoints. In a Q‐methodology study, respondents are asked to rank a set of statements about the study subject. Using by‐person factor analysis, in which the respondents are treated as variates, distinct viewpoints are identified. 
Q methodology has been used to examine the views of patients and professionals, such as patients with multimorbidity,\n28\n those with end‐stage renal disease,\n29\n and professionals and volunteers providing palliative care,\n31\n on what is important for PCC.\n Respondents As our goal was to obtain a wide breadth of views on what is important for PCC for patients with obesity, we recruited respondents varying in terms of gender, age, educational background, marital status, and health literacy. Eligible patients were over the age of 18 years and had body mass indices (BMIs) of at least 40 kg/m2, which defines severe obesity. This obesity threshold was chosen because it is associated with the most healthcare utilization and greatest health risks.\n5\n, \n6\n Practitioners working in the internal medicine departments of four hospitals in the area of Rotterdam, the Netherlands, informed patients about the study. 
In the Netherlands, access to nonurgent hospitals or specialty care requires a referral from a general practitioner (GP).\n32\n Recruitment through hospitals thus ensured that respondents were familiar with both specialty and primary care (e.g., GP visitation), characteristic of care provision for patients with severe obesity.\n6\n, \n24\n Data collection took place between April and October 2021. Twenty‐six eligible patients gave consent to be contacted to receive detailed study information and schedule an appointment. Of the 26 patients that were contacted, 3 were unable to schedule appointments and 2 could not be reached by the researcher. This led to the inclusion of 21 patients in the study, which is a typical sample size for a Q‐methodology study.\n30\n Q‐methodological research requires only a small number of purposively selected respondents to represent the breadth of views in a population.\n30\n Consultation of the literature and the expert opinion of a professor of obesity and stress research who is involved in the treatment of patients with obesity revealed no evidence of missing viewpoints.\n Statements To capture the full range of possible views on a specific topic, the statements in a Q‐methodology study should have good coverage of the subject of interest.\n30\n The eight dimensions of PCC provided by the Picker Institute were used as a conceptual framework for this study.\n18\n, \n19\n First, statements from previous studies in which the same framework was used to investigate the views of patients or professionals on what is important for PCC were collected.\n28\n, \n29\n, \n31\n Further statement selection was informed by various sources covering the care and support needs of patients with obesity, such as scientific articles\n23\n, \n33\n and clinical guidelines,\n34\n as well as the autobiographies and social media posts of individuals living with obesity. 
In an iterative process, all members of the research team, including an internist‐endocrinologist who is a professor in the field of obesity and stress research and involved in clinical care provided to patients with obesity, generated, reviewed, and revised statements. A final set of 31 statements was constructed and pilot tested with three respondents fulfilling our inclusion criteria. Based on the pilot testing results, a few adjustments to the phrasing of some statements were made (see Supporting Information: Appendix A). No substantive change was required, and no missing statement was revealed. The final statement set is provided in Table 2. Because no substantial change was made to the statement set, the pilot data were included in the analyses conducted for this study.\nStatements and factor arrays\nViewpoints: 1, ‘someone who listens in an unbiased manner’; 2, ‘everything should run smoothly’; 3, ‘interpersonal communication is key’; 4, ‘I want my independence’; 5, ‘support for myself and my loved ones’.\n Data collection Data collection took place in an online environment using video conferencing software; the process lasted approximately 60 min per respondent. One researcher guided the respondents' ranking of statements. All sessions were audio recorded with respondents' informed consent. First, the respondents answered basic demographic questions and filled in the Set of Brief Screening Questions (SBSQ) as an assessment of health literacy.\n35\n Low health literacy was defined as an average SBSQ score of 2 or lower. Next, the respondents were asked to carefully read the statements about aspects of PCC, displayed on the screen one by one in random order using the HtmlQ software,\n36\n and to sort them into ‘important’, ‘neutral’, and ‘unimportant’ piles. 
The researcher then asked the respondents to rank the statements in each pile according to their personal significance using a forced sorting grid with a scale ranging from +4 (most important) to –4 (most unimportant; Figure 1). While ranking, the respondents were encouraged to speak out loud about their views; after completing the ranking, they were asked to elaborate on their placement of the statements. All comments made by the respondents during and after the ranking process were transcribed verbatim.\nSorting grid.\n Statistical analysis To identify distinct viewpoints on what is important for PCC for patients with obesity, the rankings of the 21 respondents were intercorrelated and subjected to by‐person factor analysis using the PQMethod software.\n37\n Clusters in the data were identified using centroid factor extraction and varimax rotation. Potential factor solutions were evaluated by considering the total number of associated respondents at a significance level of 0.05 (i.e., a factor loading of ±0.42), upholding a minimum of two associated respondents per factor, and the percentage of explained variance. Fulfilment of the Kaiser–Guttman criterion, which suggests that only factors with eigenvalues of 1.0 or more be retained, was examined.\n38\n, \n39\n To finalize our decision on the number of factors to retain, qualitative data (i.e., comments made by the respondents during and after ranking) were considered. For each factor or viewpoint, the rankings of associated respondents were merged by calculating weighted averages, thereby forming a ‘factor array’ that depicted how a typical respondent holding that viewpoint would rank the statements. As our aim was to gain a broad understanding of respondents' diverse viewpoints, our interpretation was based on these factor arrays. For each viewpoint, statements ranked as most important (+3 and +4) and most unimportant (–3 and –4) and distinguishing statements (ranked significantly higher or lower than in other viewpoints) were inspected. 
The qualitative data were used to verify and refine our interpretation of the viewpoints.", "To examine the views of patients with obesity on what is important for PCC, the mixed‐method Q methodology was used. 
Q methodology may be best described as an inverted factor analytic technique for the systematic study of subjective viewpoints.\n30\n Q‐methodology research aims to identify and discern views on a specific topic, rather than determine the prevalence of these viewpoints. In a Q‐methodology study, respondents are asked to rank a set of statements about the study subject. Using by‐person factor analysis, in which the respondents are treated as variates, distinct viewpoints are identified. Q methodology has been used to examine the views of patients and professionals, such as patients with multimorbidity,\n28\n those with end‐stage renal disease,\n29\n and professionals and volunteers providing palliative care,\n31\n on what is important for PCC.", "As our goal was to obtain a wide breadth of views on what is important for PCC for patients with obesity, we recruited respondents varying in terms of gender, age, educational background, marital status, and health literacy. Eligible patients were over the age of 18 years and had body mass indices (BMIs) of at least 40 kg/m2, which defines severe obesity. This obesity threshold was chosen because it is associated with the most healthcare utilization and greatest health risks.\n5\n, \n6\n Practitioners working in the internal medicine departments of four hospitals in the area of Rotterdam, the Netherlands, informed patients about the study. In the Netherlands, access to nonurgent hospitals or specialty care requires a referral from a general practitioner (GP).\n32\n Recruitment through hospitals thus ensured that respondents were familiar with both specialty and primary care (e.g., GP visitation), characteristic of care provision for patients with severe obesity.\n6\n, \n24\n Data collection took place between April and October 2021. Twenty‐six eligible patients gave consent to be contacted to receive detailed study information and schedule an appointment. 
Of the 26 patients that were contacted, 3 were unable to schedule appointments and 2 could not be reached by the researcher. This led to the inclusion of 21 patients in the study, which is a typical sample size for a Q‐methodology study.\n30\n Q‐methodological research requires only a small number of purposively selected respondents to represent the breadth of views in a population.\n30\n Consultation of the literature and the expert opinion of a professor of obesity and stress research who is involved in the treatment of patients with obesity revealed no evidence of missing viewpoints.", "To capture the full range of possible views on a specific topic, the statements in a Q‐methodology study should have good coverage of the subject of interest.\n30\n The eight dimensions of PCC provided by the Picker Institute were used as a conceptual framework for this study.\n18\n, \n19\n First, statements from previous studies in which the same framework was used to investigate the views of patients or professionals on what is important for PCC were collected.\n28\n, \n29\n, \n31\n Further statement selection was informed by various sources covering the care and support needs of patients with obesity, such as scientific articles\n23\n, \n33\n and clinical guidelines,\n34\n as well as the autobiographies and social media posts of individuals living with obesity. In an iterative process, all members of the research team, including an internist‐endocrinologist who is a professor in the field of obesity and stress research and involved in clinical care provided to patients with obesity, generated, reviewed, and revised statements. A final set of 31 statements was constructed and pilot tested with three respondents fulfilling our inclusion criteria. Based on the pilot testing results, a few adjustments to the phrasing of some statements were made (see Supporting Information: Appendix A). No substantive change was required, and no missing statement was revealed. 
The final statement set is provided in Table 2. Because no substantive change was made to the statement set, the pilot data were included in the analyses conducted for this study.

Table 2: Statements and factor arrays. Viewpoints: 1, ‘someone who listens in an unbiased manner’; 2, ‘everything should run smoothly’; 3, ‘interpersonal communication is key’; 4, ‘I want my independence’; 5, ‘support for myself and my loved ones’.

Data collection took place in an online environment using video conferencing software; the process lasted approximately 60 min per respondent. One researcher guided the respondents' ranking of statements. All sessions were audio recorded with the respondents' informed consent. First, the respondents answered basic demographic questions and completed the Set of Brief Screening Questions (SBSQ) as an assessment of health literacy.[35] Low health literacy was defined as an average SBSQ score of 2 or lower. Next, the respondents were asked to carefully read the statements about aspects of PCC, displayed on the screen one by one in random order using the HtmlQ software,[36] and to sort them into ‘important’, ‘neutral’, and ‘unimportant’ piles. The researcher then asked the respondents to rank the statements in each pile according to their personal significance using a forced sorting grid with a scale ranging from +4 (most important) to –4 (most unimportant; Figure 1). While ranking, the respondents were encouraged to speak out loud about their views; after completing the ranking, they were asked to elaborate on their placement of the statements.
All comments made by the respondents during and after the ranking process were transcribed verbatim.

Figure 1: Sorting grid.

To identify distinct viewpoints on what is important for PCC for patients with obesity, the rankings of the 21 respondents were intercorrelated and subjected to by-person factor analysis using the PQMethod software.[37] Clusters in the data were identified using centroid factor extraction and varimax rotation. Potential factor solutions were evaluated by considering the number of associated respondents at a significance level of 0.05 (i.e., a factor loading of ±0.42), upholding a minimum of two associated respondents per factor, and the percentage of explained variance. Fulfilment of the Kaiser–Guttman criterion, which suggests that only factors with eigenvalues of 1.0 or more be retained, was also examined.[38, 39] To finalize our decision on the number of factors to retain, qualitative data (i.e., comments made by the respondents during and after ranking) were considered. For each factor, or viewpoint, the rankings of associated respondents were merged by calculating weighted averages, thereby forming a ‘factor array’ that depicts how a typical respondent holding that viewpoint would rank the statements. As our aim was to gain a broad understanding of respondents' diverse viewpoints, our interpretation was based on these factor arrays. For each viewpoint, the statements ranked as most important (+3 and +4) and most unimportant (–3 and –4), as well as distinguishing statements (ranked significantly higher or lower than in other viewpoints), were inspected. The qualitative data were used to verify and refine our interpretation of the viewpoints.

Twenty-one respondents completed the ranking (Table 3). The analysis revealed five factors, or distinct viewpoints, that together explained 48% of the variance in the data. Data from 17 (81%) of the 21 respondents were associated significantly with one of the five viewpoints (p ≤ .05).
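As a conceptual illustration of the by-person factor analysis described above, the pipeline — correlate respondents rather than statements, extract and varimax-rotate factors, and build a factor array as a loading-weighted average of rankings — can be sketched in Python. This is not the authors' PQMethod procedure: centroid extraction is approximated here by an eigendecomposition, the rankings are simulated, and, unlike PQMethod, the weights are not restricted to significantly loading ("defining") sorts.

```python
import numpy as np

def varimax(loadings, max_iter=50, tol=1e-6):
    """Orthogonally rotate a loading matrix to maximize the variance of squared loadings."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ rotation

rng = np.random.default_rng(42)
n_statements, n_respondents, n_factors = 31, 21, 5

# Simulated stand-in for the forced sorts: columns are respondents, values are
# grid positions from -4 (most unimportant) to +4 (most important).
rankings = rng.integers(-4, 5, size=(n_statements, n_respondents)).astype(float)

# By-person analysis: respondents (not statements) are the variates, so the
# correlation matrix is respondent-by-respondent.
corr = np.corrcoef(rankings.T)                        # shape (21, 21)

# Factor extraction via eigendecomposition (a stand-in for centroid extraction).
eigvals, eigvecs = np.linalg.eigh(corr)
top = np.argsort(eigvals)[::-1][:n_factors]
unrotated = eigvecs[:, top] * np.sqrt(eigvals[top])   # respondent loadings
loadings = varimax(unrotated)

# Factor array for the first factor: loading-weighted average of the rankings,
# yielding one composite score per statement.
weights = loadings[:, 0]
factor_array = rankings @ weights / np.abs(weights).sum()
```

In a full Q analysis the composite scores in `factor_array` would then be re-sorted back into the forced distribution (+4 to –4) to show how an idealized respondent holding that viewpoint would have ranked the 31 statements.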
Data from two respondents were associated with two viewpoints each, and those from two respondents were not associated significantly with any factor. All viewpoints were supported by at least two respondents; viewpoints 1 and 3 were supported by 7 and 4 respondents, respectively. Viewpoint 5 had an eigenvalue of 0.95, just below the Kaiser–Guttman cut-off of 1.0, but the qualitative data indicated that it was meaningful and distinguishable from the other viewpoints. The degree of correlation between viewpoints was low to moderate (r = –.15 to .37). The factor arrays for the five viewpoints are provided in Table 2.

Table 3: Demographic characteristics of respondents (n = 21)

Viewpoint 1: ‘someone who listens in an unbiased manner’

Viewpoint 1 accounted for the most explained variance (17%) in this study. The PCC dimensions most characterizing this viewpoint are ‘respect for patients’ preferences’ and ‘emotional support’. Central to this viewpoint was respondents’ desire to be seen and heard like any other patient without obesity. These patients wish to be treated with dignity and respect (statement 1, +4). Respondent 8 stated ‘You just want to be taken seriously. We are all human, that includes people who are overweight’. They often feel misunderstood because healthcare professionals blame all of their health issues on their weight [‘You fight against a judgment that you cannot get out of. They do not even examine me. Right off the bat they go: “I can refer you for a stomach reduction”’ (Respondent 18)]. To get the care and support that suits their needs, these patients believe that unbiased healthcare professionals (statement 2, +3) who genuinely listen (statement 14, +4) are crucial. Respondent 13 stated ‘That they look further than your weight, that is the most important thing to me. That it is not like everything that is wrong with you is because of your weight’.
They want healthcare professionals to provide emotional support and acknowledge the impact of their health problems on their life [statement 16, +3; ‘I have three small children and it is really hard for me to do things with them just because I am overweight’ (Respondent 6)]. They seek recognition for the complexity of their condition. Respondent 8 stated ‘Recognition that obesity is a disease and it should be treated that way is very important’.

To remain in charge of their care, these patients want to be involved in decisions (statement 4, +3), while leaving friends and family members out of the decision-making process [statement 29, –4; ‘No, I do not think that is important. I decide what I want’ (Respondent 6)]. Respondents holding this viewpoint ranked all statements covering the ‘involvement of friends and family’ dimension as least important.

Viewpoint 2: ‘everything should run smoothly’

Viewpoint 2 accounted for 8% of the explained variance. Patients holding this viewpoint seek well-coordinated care and advice (statement 12, +4) and the proper transfer of information in case of referral (statement 23, +4). Respondent 3 stated ‘The doctors have to agree on what is the best option for me’. Furthermore, they desire easily accessible care with short wait times [statement 17, +3; ‘That it will not be a lengthy process before I can be helped’ (Respondent 16)].

These patients would also like healthcare professionals to consider their physical comfort by attending to problems with physical activity [statement 8, +3; ‘Stairs are very much a no go for me and it is important that they know that’ (Respondent 16)]. However, they consider other aspects of physical comfort, such as waiting areas and treatment rooms that are comfortable (statement 9, –3) or provide enough privacy (statement 10, –3), to be less important.
Respondent 16 stated ‘When I weighed 127 kilos at my heaviest, the seats were a bit uncomfortable, but I do not have that problem now’.

In contrast to those holding viewpoint 1, patients holding viewpoint 2 do not mind if care does not align with their own preferences [statement 5, –4; ‘I do not think that your preferences should be taken into account in a hospital or with a doctor because as human beings we can have a lot of preferences that do not really apply’ (Respondent 16)]. They emphasize their own responsibility for getting the care they need [‘Right now in the Netherlands, you get the right care. As a patient, you also need to be somewhat well-informed yourself’ (Respondent 16)]. They believe that being well prepared avoids the need for lengthy appointments (statement 18, –4). Respondent 3 stated ‘If I have a question, I just ask it. And if I did not understand something or if I forgot something […] I can just call and ask’.

Viewpoint 3: ‘interpersonal communication is key’

Viewpoint 3 accounted for 10% of the explained variance. It focuses on the exchange of information among all involved parties. Patients holding this viewpoint want to know what to expect, and thus value information about all aspects of their care (statement 25, +3), including information about referrals (statement 22, +3), very highly [‘Because I want to know where I stand, what will happen and what is needed’ (Respondent 10)]. They believe that a good explanation is needed to properly understand information (statement 27, +3). Respondent 7 stated ‘I often feel a bit overwhelmed during consultations. That things are being said for which I was not fully prepared. I sometimes think afterwards, “have I understood everything that has been said?”’. These patients believe that having sufficient time during appointments is a prerequisite for the proper exchange of information (statement 18, +4). They often leave consultations feeling poorly because of unanswered questions.
[‘You just notice that they are under time pressure, that it should all happen quickly. You hardly have time for questions, so you do not leave with a good feeling’ (Respondent 10)].

Similarly to those holding viewpoint 2, these patients highly value the coordination of care and advice among practitioners (statement 12, +4). They specifically dislike treatment plans that conflict with each other [‘It is important that one practitioner also knows what the other practitioner is doing and that it fits together’ (Respondent 7)].

In contrast to those holding viewpoint 1, these patients prefer care and support of an informative nature, rather than attention to the emotions that they might be experiencing (statement 15, –4). Respondent 1 stated ‘Things like quality of care are much more important to me than people sitting down to listen to emotions or something like that. To me, emotions and scientific correctness often clash’. Similarly to those holding viewpoint 2, they do not mind if care does not align well with their preferences [statement 5, –3; ‘For me it is really about that the care is good and that it is the best, even if I do not prefer it’ (Respondent 1)].

Viewpoint 4: ‘I want my independence’

Viewpoint 4 accounted for 7% of the explained variance. The aim of remaining independent is central to this viewpoint. In contrast to those holding viewpoints 1–3, patients holding viewpoint 4 want to focus on what they can do on their own (statement 6, +2), as they believe that this will preserve their quality of life [statement 3, +3; ‘I think it is important that I can and may continue to do a lot independently’ (Respondent 17)].
In line with this focus, these patients want healthcare professionals to attend to their problems with physical activity (statement 8, +4). Respondent 17 stated ‘I think it is very important to work on this [problems with physical activity] as much as possible and to expand what is possible to do myself’.

Although these respondents seek independence, they highly value knowing where to go for care and support after treatment (statement 24, +4). They are willing to take the lead, provided that they know where they can go for support. Respondent 4 stated ‘That you have a telephone number and that you can call them with questions or if anything is unclear. I find accessibility very important’. To facilitate independence, they also prefer to be well informed about all aspects of their care (statement 25, +3) and appreciate easy access to their own medical data (statement 26, +2). However, these patients do not require a good explanation of all information provided to them (statement 27, −2), as they have no difficulty understanding their medical data [‘I have been walking in and out of hospitals for so long, most of it is self-evident’ (Respondent 17)].

In contrast to those holding viewpoints 1–3, patients holding viewpoint 4 find other aspects of the ‘continuity of care’ dimension, such as being well informed during referrals (statement 22, –3) and the proper transfer of information upon referral (statement 23, –2), to be less important. They do not mind asking questions or re-sharing information with professionals [‘I can also tell it myself and I can ask for everything I need and I always do that’ (Respondent 4)].

Viewpoint 5: ‘support for myself and my loved ones’

Viewpoint 5 accounted for 5% of the explained variance. This viewpoint is distinguished by an emphasis on the supporting roles of family members and friends. Patients holding this viewpoint seek support from their loved ones and help from healthcare professionals in obtaining it [statement 31, +3; ‘I am married and I want help from my husband because he really knows a lot about me’ (Respondent 20)]. They also value their autonomy highly; they want to be informed about all aspects of their care (statement 25, +4) and involved in decisions (statement 4, +3). Respondent 20 stated ‘I do not like them talking about me behind my back’. Similarly to those with viewpoint 1, patients with viewpoint 5 consider being treated with dignity and respect (statement 1, +4) to be one of the most important aspects of PCC [‘Everyone has the right to be treated with respect and receive proper care’ (Respondent 5)]. They value comfortable waiting areas and treatment rooms (statement 9, +1) more than patients with other viewpoints, as they appreciate their personal space. Respondent 20 stated ‘I do not think it is necessary that they sit right on top of me in treatment rooms’.

Compared with patients with other viewpoints, those with viewpoint 5 consider some aspects of PCC to be out of reach, and thus rank them as less important. For example, they accept that money may be a problem sometimes [statement 20, –4; ‘Money comes, money goes.
It just makes some things a little easier, but if you do not have it, you do not have it’ (Respondent 5)], and they believe that receiving treatment only from unbiased healthcare professionals is not realistic [statement 2, –3; ‘It is not realistic because that [stigmatisation from healthcare professionals] happens, whether you like it or not’ (Respondent 5)].

In this study, five distinct views on what is important for PCC for patients with obesity were identified. Patients holding viewpoint 1, ‘someone who listens in an unbiased manner’, want healthcare professionals to look beyond a patient's weight. This viewpoint explained the most variance in the data and was supported by the largest number of respondents. Patients holding viewpoint 2, ‘everything should run smoothly’, seek care that is well coordinated and accessible. Patients holding viewpoint 3, ‘interpersonal communication is key’, prefer care of an informative nature. Patients holding viewpoint 4, ‘I want my independence’, are driven by the desire to remain independent. Finally, patients holding viewpoint 5, ‘support for myself and my loved ones’, seek help to involve their loved ones in their care. Our findings thus show that patients with obesity hold various views on what is most important in care and support. This diversity may be explained by the multifactorial nature of obesity,[10] which results in different care needs.
Our results suggest that we cannot apply a single standard of care to patients with obesity, and reflect the importance of care that is tailored to each individual.\nAlthough views on PCC varied among patients, ‘being treated with dignity and respect' was deemed to be relatively important across viewpoints. This result is not surprising, as obesity is a highly stigmatized condition, and many individuals living with it report having stigmatizing healthcare experiences, such as disrespectful treatment.\n40\n Research suggests that higher patient BMIs are associated with lesser physician respect.\n41\n Although many respondents in our study reported stigmatizing healthcare experiences, ‘unbiased healthcare professionals' was not unequivocally ranked as important across viewpoints. Patients holding viewpoint 5 even ranked it as one of the least important aspects of PCC, but they explained this judgment as reflecting their belief that weight‐related stigmatization in healthcare is an unsolvable problem. Furthermore, some respondents with other viewpoints related ‘unbiased healthcare professionals’ strongly to ‘treatment with dignity and respect’, and for practical purposes chose to rank the former statement lower. This perspective has also been identified in research on patients’ views on weight stigmatization in healthcare; patients with obesity agreed that a lack of physician respect results from such stigmatization.\n42\n\n\nOur results further show notable differences in views on the importance of emotional support. Patients with viewpoint 1 value such support highly, viewing it as fundamental for obesity treatment. In contrast, patients with viewpoint 3 do not want practitioners to attend to their emotions, although they acknowledge the emotional impact of their condition. 
Many individuals with obesity struggle with psychosocial issues, including psychiatric illness, low self‐esteem, reduced quality of life, and the internalization of weight stigmatization.\n22\n, \n43\n Thus, multidisciplinary obesity treatment often includes a focus on emotional well‐being, which is suggested to have beneficial effects on health.\n44\n, \n45\n However, patients with some viewpoints prefer a pragmatic approach. These opposing views may pose a dilemma for healthcare professionals aiming to provide high‐quality and holistic care to patients with obesity. Future research may clarify the emotional support needs of patients with obesity and the relationship of emotional support to treatment outcomes.\nThe involvement of family and friends was considered to be relatively unimportant across viewpoints in this study, except among patients with viewpoint 5, who seem to depend more on social support. Patients with viewpoint 1 strongly oppose the involvement of loved ones and prefer to make decisions individually. This perspective might be explained by the complexity of living with obesity, which only the patient can understand fully. These findings bring to light new questions about the extent to which, and the manner in which, family members and friends should be involved in obesity treatment. Social support has been shown to be beneficial in chronic illness management,\n46\n but literature on the involvement of family and friends in adult obesity treatment is inconclusive.", "Several potential limitations of this study should be considered. First, the sample of patients recruited for this study may seem to be small. However, it meets the requirements of Q methodology\n30\n and is similar to those of other studies.\n28\n, \n47\n Furthermore, consultation of the literature revealed no evidence of a missing viewpoint. 
Additionally, the viewpoints identified in this study were recognized by a professor of obesity and stress research who is involved in the treatment of patients with obesity and who indicated, based on many years of clinical experience, that no viewpoint was missing. Furthermore, the representation of the male perspective in this sample might be limited due to the male‐to‐female ratio. However, a similar ratio is seen in patients seeking obesity care.\n48\n Second, at the start of the data collection period, respondents could only participate online due to COVID‐19 pandemic precautions. Although we later offered the opportunity for face‐to‐face participation, this approach may have led to the underrepresentation of individuals with low health literacy, for whom digitalization can be a barrier to engagement.\n49\n However, the views of individuals with low health literacy are represented in this study, as four respondents met this criterion. Finally, our study was conducted in the Netherlands, and the identified viewpoints may not represent the views of patients living in countries with different health systems. For example, because health insurance is mandatory in the Netherlands, every resident has basic access to care. Aspects of the ‘access to care’ dimension may thus be viewed differently in countries without universal healthcare. However, Dutch health insurance does not cover all obesity treatments. For instance, most weight‐reducing medications are not covered.", "Five distinct views on what is important for PCC for patients with obesity were identified. The viewpoint ‘someone who listens in an unbiased manner’ was supported by the largest number of respondents. With these findings, we have begun to shed light on the commonalities in the views of patients living with obesity on PCC. Our data show that views on what care and support should look like vary among patients with obesity, stressing the need for tailored care. 
Future research may build on and expand our study's findings by considering additional factors that might influence how patients with obesity view PCC. For instance, experiences of weight stigmatization in healthcare may lead to different perspectives on what is most important in care and support. Furthermore, we explored the views of patients living with severe obesity. Future studies might examine the views of patients living with less severe obesity and explore to what extent their views on PCC differ.\nThe views that are described in this paper provide valuable insight into the perspective of patients living with obesity on what is most important in care and support. Importantly, this knowledge helps us to understand what PCC provision for patients with obesity might entail and may help organizations arrange care accordingly. For example, some patients may benefit greatly from a high level of emotional support, while others will respond better to care and support that is centred around patient education or self‐management.", "The authors declare no conflict of interest.", "Supporting information.\nClick here for additional data file." ]
[ null, "methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusions", "COI-statement", "supplementary-material" ]
[ "care provision", "obesity", "patient views", "person‐centred care", "Q methodology" ]
INTRODUCTION: Over the past four decades, the global prevalence of obesity has nearly tripled. 1 , 2 The World Health Organization defines obesity as an excessive accumulation of body fat that poses a threat to health. 3 Living with obesity seriously impairs physical and psychosocial functioning, resulting in a reduced quality of life. 4 Obesity also increases the risk of developing other serious health conditions, such as type 2 diabetes, cardiovascular diseases, several types of cancer, and many other diseases. 5 Consequently, obesity, and especially severe obesity, is associated with increases in healthcare utilization and expenditures, as well as substantial societal costs due to productivity losses. 6 , 7 Although many health institutions have recognized it as a chronic disease, 8 healthcare systems seem poorly prepared to meet the needs of patients living with obesity. Clinical guidelines for the treatment of these patients are often too simplistic, focusing merely on weight loss instead of the improvement of overall health and well‐being. 9 As a result, individual circumstances, including contributing factors and underlying diseases, are often overlooked. 10 Furthermore, patients with obesity often experience weight‐related stigma and discrimination in healthcare, which can affect the quality of their care and their treatment outcomes. 11 , 12 For instance, some healthcare professionals view patients with obesity more negatively than other patients and spend less time treating them. 13 Healthcare professionals may also be insufficiently equipped or educated to perform standard medical procedures on patients with obesity. 14 To better accommodate patients with obesity, the adoption of a person‐centred approach, in which care is tailored to the individual and individuals' preferences, needs, and values are respected, seems imperative. 
15 Person‐centred care (PCC) can be seen as a paradigm shift in healthcare that has been gaining broad support with the increasing interest in the quality of care. 16 , 17 The Picker Institute distinguishes eight dimensions that are important for PCC: respect for patients' preferences, physical comfort, coordination of care, emotional support, access to care, continuity of care, the provision of information and education, and the involvement of family and friends. 18 , 19 An overview of these dimensions can be found in Table 1 (‘The eight dimensions of PCC’; abbreviation: PCC, person‐centred care). PCC has been associated with improved patient outcomes in various healthcare settings, 26 including the provision of care to patients with obesity. 27 However, the relative importance of the different aspects of PCC seems to vary among patient groups. 28 , 29 Although aspects of care that may be important specifically for patients with obesity have been identified, the significance of the eight dimensions of PCC for patients with obesity has not been assessed. Gaining insight into the aspects of PCC that are most important to this patient group is a vital step toward improved care provision, and consequently improved quality of care and patient outcomes. Thus, the aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC. METHODS: Q methodology To examine the views of patients with obesity on what is important for PCC, the mixed‐method Q methodology was used. Q methodology may be best described as an inverted factor analytic technique for the systematic study of subjective viewpoints. 30 Q‐methodology research aims to identify and discern views on a specific topic, rather than determine the prevalence of these viewpoints. In a Q‐methodology study, respondents are asked to rank a set of statements about the study subject. 
Using by‐person factor analysis, in which the respondents are treated as variates, distinct viewpoints are identified. Q methodology has been used to examine the views of patients and professionals, such as patients with multimorbidity, 28 those with end‐stage renal disease, 29  and professionals and volunteers providing palliative care, 31 on what is important for PCC. Respondents As our goal was to obtain a wide breadth of views on what is important for PCC for patients with obesity, we recruited respondents varying in terms of gender, age, educational background, marital status, and health literacy. Eligible patients were over the age of 18 years and had body mass indices (BMIs) of at least 40 kg/m2, which defines severe obesity. This obesity threshold was chosen because it is associated with the most healthcare utilization and greatest health risks. 5 , 6 Practitioners working in the internal medicine departments of four hospitals in the area of Rotterdam, the Netherlands, informed patients about the study. 
In the Netherlands, access to nonurgent hospitals or specialty care requires a referral from a general practitioner (GP). 32 Recruitment through hospitals thus ensured that respondents were familiar with both specialty and primary care (e.g., GP visitation), characteristic of care provision for patients with severe obesity. 6 , 24 Data collection took place between April and October 2021. Twenty‐six eligible patients gave consent to be contacted to receive detailed study information and schedule an appointment. Of the 26 patients that were contacted, 3 were unable to schedule appointments and 2 could not be reached by the researcher. This led to the inclusion of 21 patients in the study, which is a typical sample size for a Q‐methodology study. 30 Q‐methodological research requires only a small number of purposively selected respondents to represent the breadth of views in a population. 30 Consultation of the literature and the expert opinion of a professor of obesity and stress research who is involved in the treatment of patients with obesity revealed no evidence of missing viewpoints. Statements To capture the full range of possible views on a specific topic, the statements in a Q‐methodology study should have good coverage of the subject of interest. 30 The eight dimensions of PCC provided by the Picker Institute were used as a conceptual framework for this study. 18 , 19 First, statements from previous studies in which the same framework was used to investigate the views of patients or professionals on what is important for PCC were collected. 28 , 29 , 31 Further statement selection was informed by various sources covering the care and support needs of patients with obesity, such as scientific articles 23 , 33 and clinical guidelines, 34 as well as the autobiographies and social media posts of individuals living with obesity. 
In an iterative process, all members of the research team, including an internist‐endocrinologist who is a professor in the field of obesity and stress research and involved in clinical care provided to patients with obesity, generated, reviewed, and revised statements. A final set of 31 statements was constructed and pilot tested with three respondents fulfilling our inclusion criteria. Based on the pilot testing results, a few adjustments to the phrasing of some statements were made (see Supporting Information: Appendix A). No substantive change was required, and no missing statement was revealed. The final statement set is provided in Table 2. Because no substantial change was made to the statement set, the pilot data were included in the analyses conducted for this study. Statements and factor arrays Viewpoints: 1, ‘someone who listens in an unbiased manner’; 2, ‘everything should run smoothly’; 3, ‘interpersonal communication is key’; 4, ‘I want my independence’; 5, ‘support for myself and my loved ones’. Data collection Data collection took place in an online environment using video conferencing software; the process lasted approximately 60 min per respondent. One researcher guided the respondents' ranking of statements. All sessions were audio recorded with respondents' informed consent. First, the respondents answered basic demographic questions and filled in the Set of Brief Screening Questions (SBSQ) as an assessment of health literacy. 35 Low health literacy was defined as an average SBSQ score of 2 or lower. Next, the respondents were asked to carefully read the statements about aspects of PCC, displayed on the screen one by one in random order using the HtmlQ software, 36 and to sort them into ‘important’, ‘neutral’, and ‘unimportant’ piles. 
The researcher then asked the respondents to rank the statements in each pile according to their personal significance using a forced sorting grid with a scale ranging from +4 (most important) to –4 (most unimportant; Figure 1). While ranking, the respondents were encouraged to speak out loud about their views; after completing the ranking, they were asked to elaborate on their placement of the statements. All comments made by the respondents during and after the ranking process were transcribed verbatim. Sorting grid. 
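The forced sorting grid and the SBSQ screening step described above can be illustrated with a short sketch. The per-column counts below are an assumption (the paper states only the −4..+4 range, 31 statements, and the average-score-of-2 cut-off; its exact grid shape is in its Figure 1), and the helper names are hypothetical:

```python
from collections import Counter

# Assumed column counts for a 31-statement forced grid spanning -4..+4;
# the paper specifies the range and statement count, not these counts.
GRID = {-4: 2, -3: 3, -2: 4, -1: 4, 0: 5, 1: 4, 2: 4, 3: 3, 4: 2}

def valid_q_sort(placements):
    """True if a {statement_id: column} mapping fills the forced grid exactly."""
    return dict(Counter(placements.values())) == GRID

def low_health_literacy(sbsq_item_scores):
    """Flag low health literacy: average SBSQ item score of 2 or lower."""
    return sum(sbsq_item_scores) / len(sbsq_item_scores) <= 2

# A trivially valid sort: statements 0..30 assigned column by column.
placements = {i: col for i, col in enumerate(
    c for col, n in GRID.items() for c in [col] * n)}
```

The forced distribution is what makes Q-sorts comparable across respondents: every respondent spends the same "budget" of extreme rankings, so correlations between sorts reflect which statements they prioritize, not how generously they rate.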
Statistical analysis To identify distinct viewpoints on what is important for PCC for patients with obesity, the rankings of the 21 respondents were intercorrelated and subjected to by‐person factor analysis using the PQMethod software. 37 Clusters in the data were identified using centroid factor extraction and varimax rotation. Potential factor solutions were evaluated by considering the total of associated respondents at a significance level of 0.05 (i.e., a factor loading of ±0.42), upholding a minimum of two associated respondents per factor, and the percentage of explained variance. Fulfilment of the Kaiser–Guttman criterion, which suggests that only factors with eigenvalues of 1.0 or more be retained, was examined. 38 , 39 To finalize our decision on the number of factors to retain, qualitative data (i.e., comments made by the respondents during and after ranking) were considered. For each factor or viewpoint, the rankings of associated respondents were merged by calculating weighted averages, thereby forming a ‘factor array’ that depicted how a typical respondent holding that viewpoint would rank the statements. As our aim was to gain a broad understanding of respondents' diverse viewpoints, our interpretation was based on these factor arrays. For each viewpoint, statements ranked as most important (+3 and +4) and most unimportant (–3 and –4) and distinguishing statements (ranked significantly higher or lower than in other viewpoints) were inspected. The qualitative data were used to verify and refine our interpretation of the viewpoints. To identify distinct viewpoints on what is important for PCC for patients with obesity, the rankings of the 21 respondents were intercorrelated and subjected to by‐person factor analysis using the PQMethod software. 37 Clusters in the data were identified using centroid factor extraction and varimax rotation. 
Potential factor solutions were evaluated by considering the total of associated respondents at a significance level of 0.05 (i.e., a factor loading of ±0.42), upholding a minimum of two associated respondents per factor, and the percentage of explained variance. Fulfilment of the Kaiser–Guttman criterion, which suggests that only factors with eigenvalues of 1.0 or more be retained, was examined. 38 , 39 To finalize our decision on the number of factors to retain, qualitative data (i.e., comments made by the respondents during and after ranking) were considered. For each factor or viewpoint, the rankings of associated respondents were merged by calculating weighted averages, thereby forming a ‘factor array’ that depicted how a typical respondent holding that viewpoint would rank the statements. As our aim was to gain a broad understanding of respondents' diverse viewpoints, our interpretation was based on these factor arrays. For each viewpoint, statements ranked as most important (+3 and +4) and most unimportant (–3 and –4) and distinguishing statements (ranked significantly higher or lower than in other viewpoints) were inspected. The qualitative data were used to verify and refine our interpretation of the viewpoints. Q methodology: To examine the views of patients with obesity on what is important for PCC, the mixed‐method Q methodology was used. Q methodology may be best described as an inverted factor analytic technique for the systematic study of subjective viewpoints. 30 Q‐methodology research aims to identify and discern views on a specific topic, rather than determine the prevalence of these viewpoints. In a Q‐methodology study, respondents are asked to rank a set of statements about the study subject. Using by‐person factor analysis, in which the respondents are treated as variates, distinct viewpoints are identified. 
Q methodology has been used to examine the views of patients and professionals, such as patients with multimorbidity, 28 those with end‐stage renal disease, 29  and professionals and volunteers providing palliative care, 31 on what is important for PCC. Respondents: As our goal was to obtain a wide breadth of views on what is important for PCC for patients with obesity, we recruited respondents varying in terms of gender, age, educational background, marital status, and health literacy. Eligible patients were over the age of 18 years and had body mass indices (BMIs) of at least 40 kg/m2, which defines severe obesity. This obesity threshold was chosen because it is associated with the most healthcare utilization and greatest health risks. 5 , 6 Practitioners working in the internal medicine departments of four hospitals in the area of Rotterdam, the Netherlands, informed patients about the study. In the Netherlands, access to nonurgent hospitals or specialty care requires a referral from a general practitioner (GP). 32 Recruitment through hospitals thus ensured that respondents were familiar with both specialty and primary care (e.g., GP visitation), characteristic of care provision for patients with severe obesity. 6 , 24 Data collection took place between April and October 2021. Twenty‐six eligible patients gave consent to be contacted to receive detailed study information and schedule an appointment. Of the 26 patients that were contacted, 3 were unable to schedule appointments and 2 could not be reached by the researcher. This led to the inclusion of 21 patients in the study, which is a typical sample size for a Q‐methodology study. 30 Q‐methodological research requires only a small number of purposively selected respondents to represent the breadth of views in a population. 
30 Consultation of the literature and the expert opinion of a professor of obesity and stress research who is involved in the treatment of patients with obesity revealed no evidence of missing viewpoints. Statements: To capture the full range of possible views on a specific topic, the statements in a Q‐methodology study should have good coverage of the subject of interest. 30 The eight dimensions of PCC provided by the Picker Institute were used as a conceptual framework for this study. 18 , 19 First, statements from previous studies in which the same framework was used to investigate the views of patients or professionals on what is important for PCC were collected. 28 , 29 , 31 Further statement selection was informed by various sources covering the care and support needs of patients with obesity, such as scientific articles 23 , 33 and clinical guidelines, 34 as well as the autobiographies and social media posts of individuals living with obesity. In an iterative process, all members of the research team, including an internist‐endocrinologist who is a professor in the field of obesity and stress research and involved in clinical care provided to patients with obesity, generated, reviewed, and revised statements. A final set of 31 statements was constructed and pilot tested with three respondents fulfilling our inclusion criteria. Based on the pilot testing results, a few adjustments to the phrasing of some statements were made (see Supporting Information: Appendix A). No substantive change was required, and no missing statement was revealed. The final statement set is provided in Table 2. Because no substantial change was made to the statement set, the pilot data were included in the analyses conducted for this study. Statements and factor arrays Viewpoints: 1, ‘someone who listens in an unbiased manner’; 2, ‘everything should run smoothly’; 3, ‘interpersonal communication is key’; 4, ‘I want my independence’; 5, ‘support for myself and my loved ones’. 
Data collection: Data collection took place in an online environment using video conferencing software; the process lasted approximately 60 min per respondent. One researcher guided the respondents' ranking of statements. All sessions were audio recorded with respondents' informed consent. First, the respondents answered basic demographic questions and filled in the Set of Brief Screening Questions (SBSQ) as an assessment of health literacy. 35 Low health literacy was defined as an average SBSQ score of 2 or lower. Next, the respondents were asked to carefully read the statements about aspects of PCC, displayed on the screen one by one in random order using the HtmlQ software, 36 and to sort them into ‘important’, ‘neutral’, and ‘unimportant’ piles. The researcher then asked the respondents to rank the statements in each pile according to their personal significance using a forced sorting grid with a scale ranging from +4 (most important) to –4 (most unimportant; Figure 1). While ranking, the respondents were encouraged to speak out loud about their views; after completing the ranking, they were asked to elaborate on their placement of the statements. All comments made by the respondents during and after the ranking process were transcribed verbatim. Sorting grid. Statistical analysis: To identify distinct viewpoints on what is important for PCC for patients with obesity, the rankings of the 21 respondents were intercorrelated and subjected to by‐person factor analysis using the PQMethod software. 37 Clusters in the data were identified using centroid factor extraction and varimax rotation. Potential factor solutions were evaluated by considering the total of associated respondents at a significance level of 0.05 (i.e., a factor loading of ±0.42), upholding a minimum of two associated respondents per factor, and the percentage of explained variance. 
Fulfilment of the Kaiser–Guttman criterion, which suggests that only factors with eigenvalues of 1.0 or more be retained, was examined. 38 , 39 To finalize our decision on the number of factors to retain, qualitative data (i.e., comments made by the respondents during and after ranking) were considered. For each factor or viewpoint, the rankings of associated respondents were merged by calculating weighted averages, thereby forming a ‘factor array’ that depicted how a typical respondent holding that viewpoint would rank the statements. As our aim was to gain a broad understanding of respondents' diverse viewpoints, our interpretation was based on these factor arrays. For each viewpoint, statements ranked as most important (+3 and +4) and most unimportant (–3 and –4), as well as distinguishing statements (ranked significantly higher or lower than in other viewpoints), were inspected. The qualitative data were used to verify and refine our interpretation of the viewpoints.

RESULTS: Twenty‐one respondents completed the ranking (Table 3). The analysis revealed five factors, or distinct viewpoints, that together explained 48% of the variance in the data. Data from 17 (81%) of the 21 respondents were associated significantly with one of the five viewpoints (p ≤ .05). Data from two respondents were associated with two viewpoints each, and those from two respondents were not associated significantly with any factor. All viewpoints were supported by at least two respondents; viewpoints 1 and 3 were supported by 7 and 4 respondents, respectively. Viewpoint 5 had an eigenvalue of 0.95, just below the Kaiser–Guttman cut‐off of 1.0, but the qualitative data indicated that it was meaningful and distinguishable from the other viewpoints. The degree of correlation between viewpoints was low to moderate (r = –.15 to .37). The factor arrays for the five viewpoints are provided in Table 2.
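The merging of associated respondents' sorts into a factor array, as described above, can be sketched as follows. The sorts and loadings are invented for illustration (5 statements, 3 defining respondents), and the weight formula w = f / (1 − f²) is the conventional Q-methodology factor weight — the text itself only says 'weighted averages'.

```python
import numpy as np

# Invented example: ranks of 5 statements by 3 respondents defining one factor.
defining_sorts = np.array([
    [ 4,  3,  0, -2, -4],   # respondent A
    [ 3,  4, -1, -3, -2],   # respondent B
    [ 4,  2,  1, -4, -3],   # respondent C
])
loadings = np.array([0.72, 0.55, 0.61])  # their loadings on this factor

# Weight each defining sort by its loading, w = f / (1 - f^2), so the
# respondents who exemplify the factor best count most in the average.
w = loadings / (1 - loadings**2)
weighted_avg = (w[:, None] * defining_sorts).sum(axis=0) / w.sum()

# Map the weighted averages back onto the forced grid (here -2..+2, one
# statement per slot) to get the idealized sort of a 'typical' respondent.
grid = np.array([-2, -1, 0, 1, 2])
factor_array = np.empty(5, dtype=int)
factor_array[np.argsort(weighted_avg)] = grid
print(factor_array.tolist())  # [2, 1, 0, -1, -2]
```

In real Q studies the grid has multiple statements per column, so ties in the weighted averages are resolved against the distribution's column counts — a detail skipped in this sketch.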
Table 3: Demographic characteristics of respondents (n = 21)

Viewpoint 1: ‘someone who listens in an unbiased manner’
Viewpoint 1 accounted for the most explained variance (17%) in this study. The PCC dimensions most characterizing this viewpoint are ‘respect for patients’ preferences’ and ‘emotional support’. Central to this viewpoint was respondents’ desire to be seen and heard like any other patient without obesity. These patients wish to be treated with dignity and respect (statement 1, +4). Respondent 8 stated ‘You just want to be taken seriously. We are all human, that includes people who are overweight’. They often feel misunderstood because healthcare professionals blame all of their health issues on their weight. [‘You fight against a judgment that you cannot get out of. They do not even examine me. Right off the bat they go: “I can refer you for a stomach reduction”’ (Respondent 18)]. To get the care and support that suits their needs, these patients believe that unbiased healthcare professionals (statement 2, +3) who genuinely listen (statement 14, +4) are crucial. Respondent 13 stated ‘That they look further than your weight, that is the most important thing to me. That it is not like everything that is wrong with you is because of your weight’. They want healthcare professionals to provide emotional support and acknowledge the impact of their health problems on their life [statement 16, +3; ‘I have three small children and it is really hard for me to do things with them just because I am overweight’ (Respondent 6)]. They seek recognition for the complexity of their condition. Respondent 8 stated ‘Recognition that obesity is a disease and it should be treated that way is very important’. To remain in charge of their care, these patients want to be involved in decisions (statement 4, +3), while leaving friends and family members out of the decision‐making process [statement 29, –4; ‘No, I do not think that is important. I decide what I want’ (Respondent 6)].
Respondents holding this viewpoint ranked all statements covering the ‘involvement of friends and family’ dimension as least important.

Viewpoint 2: ‘everything should run smoothly’
Viewpoint 2 accounted for 8% of the explained variance. Patients holding this viewpoint seek well‐coordinated care and advice (statement 12, +4) and the proper transfer of information in case of referral (statement 23, +4). Respondent 3 stated ‘The doctors have to agree on what is the best option for me’. Furthermore, they desire easily accessible care with short wait times [statement 17, +3; ‘That it will not be a lengthy process before I can be helped’ (Respondent 16)]. These patients would also like healthcare professionals to consider their physical comfort by attending to problems with physical activity [statement 8, +3; ‘Stairs are very much a no go for me and it is important that they know that’ (Respondent 16)]. However, they consider other aspects of physical comfort, such as waiting areas and treatment rooms that are comfortable (statement 9, –3) or provide enough privacy (statement 10, –3), to be less important. Respondent 16 stated ‘When I weighed 127 kilos at my heaviest, the seats were a bit uncomfortable, but I do not have that problem now’. In contrast to those holding viewpoint 1, patients holding viewpoint 2 do not mind if care does not align with their own preferences [statement 5, –4; ‘I do not think that your preferences should be taken into account in a hospital or with a doctor because as human beings we can have a lot of preferences that do not really apply’ (Respondent 16)]. They emphasize their own responsibility for getting the care they need [‘Right now in the Netherlands, you get the right care. As a patient, you also need to be somewhat well‐informed yourself’ (Respondent 16)]. They believe that being well prepared avoids the need for lengthy appointments (statement 18, –4).
Respondent 3 stated ‘If I have a question, I just ask it. And if I did not understand something or if I forgot something […] I can just call and ask’.

Viewpoint 3: ‘interpersonal communication is key’
Viewpoint 3 accounted for 10% of the explained variance. It focuses on the exchange of information among all involved parties. Patients holding this viewpoint want to know what to expect, and thus value information about all aspects of their care (statement 25, +3), including information about referrals (statement 22, +3), very highly [‘Because I want to know where I stand, what will happen and what is needed’ (Respondent 10)]. They believe that a good explanation is needed to properly understand information (statement 27, +3). Respondent 7 stated ‘I often feel a bit overwhelmed during consultations. That things are being said for which I was not fully prepared. I sometimes think afterwards, “have I understood everything that has been said?”’. These patients believe that having sufficient time during appointments is a prerequisite for the proper exchange of information (statement 18, +4). They often leave consultations feeling poorly because of unanswered questions. [‘You just notice that they are under time pressure, that it should all happen quickly. You hardly have time for questions, so you do not leave with a good feeling’ (Respondent 10)]. Similarly to those holding viewpoint 2, these patients value the coordination of care and advice among practitioners highly (statement 12, +4). They specifically dislike the conflicting of treatment plans with each other [‘It is important that one practitioner also knows what the other practitioner is doing and that it fits together’ (Respondent 7)]. In contrast to those holding viewpoint 1, these patients prefer that care and support are of an informative nature, rather than attending to emotions that they might be experiencing (statement 15, –4).
Respondent 1 stated ‘Things like quality of care are much more important to me than people sitting down to listen to emotions or something like that. To me, emotions and scientific correctness often clash’. Similarly to those holding viewpoint 2, they do not mind if care does not align well with their preferences [statement 5, –3; ‘For me it is really about that the care is good and that it is the best, even if I do not prefer it’ (Respondent 1)].

Viewpoint 4: ‘I want my independence’
Viewpoint 4 accounted for 7% of the explained variance. The aim of remaining independent is central to this viewpoint. In contrast to those holding viewpoints 1–3, patients holding viewpoint 4 want to focus on what they can do on their own (statement 6, +2), as they believe that this will preserve their quality of life [statement 3, +3; ‘I think it is important that I can and may continue to do a lot independently’ (Respondent 17)]. In line with this focus, these patients want healthcare professionals to attend to their problems with physical activity (statement 8, +4). Respondent 17 stated ‘I think it is very important to work on this [problems with physical activity] as much as possible and to expand what is possible to do myself’. Although these respondents seek independence, they value knowing where to go for care and support after treatment highly (statement 24, +4). They are willing to take the lead, provided that they know where they can go for support. Respondent 4 stated ‘That you have a telephone number and that you can call them with questions or if anything is unclear. I find accessibility very important’.
To facilitate independence, they also prefer to be well informed about all aspects of their care (statement 25, +3) and appreciate easy access to their own medical data (statement 26, +2). However, these patients do not require a good explanation of all information provided to them (statement 27, −2) as they have no difficulty understanding their medical data [‘I have been walking in and out of hospitals for so long, most of it is self‐evident’ (Respondent 17)]. In contrast to those holding viewpoints 1–3, patients holding viewpoint 4 find other aspects of the ‘continuity of care’, such as being well informed during referrals (statement 22, –3) and the proper transfer of information upon referral (statement 23, –2) to be less important. They do not mind asking questions or re‐sharing information with professionals [‘I can also tell it myself and I can ask for everything I need and I always do that’ (Respondent 4)].

Viewpoint 5: ‘support for myself and my loved ones’
Viewpoint 5 accounted for 5% of the explained variance. This viewpoint is distinguished by an emphasis on the supporting roles of family members and friends. Patients holding this viewpoint seek support from their loved ones and help from healthcare professionals in obtaining it [statement 31, +3; ‘I am married and I want help from my husband because he really knows a lot about me’ (Respondent 20)]. They also value their autonomy highly; they want to be informed about all aspects of their care (statement 25, +4) and involved in decisions (statement 4, +3). Respondent 20 stated ‘I do not like them talking about me behind my back’.
Similarly to those with viewpoint 1, patients with viewpoint 5 consider being treated with dignity and respect (statement 1, +4) to be one of the most important aspects of PCC [‘Everyone has the right to be treated with respect and receive proper care’ (Respondent 5)]. They value comfortable waiting areas and treatment rooms (statement 9, +1) more than patients with other viewpoints, as they appreciate their personal space. Respondent 20 stated ‘I do not think it is necessary that they sit right on top of me in treatment rooms’. Compared with patients with other viewpoints, those with viewpoint 5 consider some aspects of PCC to be out of reach, and thus rank them as less important. For example, they accept that money may be a problem sometimes [statement 20, –4; ‘Money comes, money goes. It just makes some things a little easier, but if you do not have it, you do not have it’ (Respondent 5)] and they believe that receiving treatment only from unbiased healthcare professionals is not realistic [statement 2, –3; ‘It is not realistic because that [stigmatisation from healthcare professionals] happens, whether you like it or not’ (Respondent 5)].

DISCUSSION: In this study, five distinct views on what is important for PCC for patients with obesity were identified.
Patients holding viewpoint 1, ‘someone who listens in an unbiased manner’, want healthcare professionals to look beyond a patient's weight. This viewpoint explained the most variance in the data and was supported by the largest number of respondents. Patients holding viewpoint 2, ‘everything should run smoothly’, seek care that is well coordinated and accessible. Patients holding viewpoint 3, ‘interpersonal communication is key’, prefer care of an informative nature. Patients holding viewpoint 4, ‘I want my independence’, are driven by the desire to remain independent. Finally, patients holding viewpoint 5, ‘support for myself and my loved ones’, seek help to involve their loved ones in their care. Our findings thus show that patients with obesity hold various views on what is most important in care and support. This diversity may be explained by the multifactorial nature of obesity, 10 which results in different care needs. Our results suggest that we cannot apply a single standard of care to patients with obesity, and reflect the importance of care that is tailored to each individual. Although views on PCC varied among patients, ‘being treated with dignity and respect' was deemed to be relatively important across viewpoints. This result is not surprising, as obesity is a highly stigmatized condition, and many individuals living with it report having stigmatizing healthcare experiences, such as disrespectful treatment. 40 Research suggests that higher patient BMIs are associated with lesser physician respect. 41 Although many respondents in our study reported stigmatizing healthcare experiences, ‘unbiased healthcare professionals' was not unequivocally ranked as important across viewpoints. Patients holding viewpoint 5 even ranked it as one of the least important aspects of PCC, but they explained this judgment as reflecting their belief that weight‐related stigmatization in healthcare is an unsolvable problem. 
Furthermore, some respondents with other viewpoints related ‘unbiased healthcare professionals’ strongly to ‘treatment with dignity and respect’, and for practical purposes chose to rank the former statement lower. This perspective has also been identified in research on patients’ views on weight stigmatization in healthcare; patients with obesity agreed that a lack of physician respect results from such stigmatization. 42 Our results further show notable differences in views on the importance of emotional support. Patients with viewpoint 1 value such support highly, viewing it as fundamental for obesity treatment. In contrast, patients with viewpoint 3 do not want practitioners to attend to their emotions, although they acknowledge the emotional impact of their condition. Many individuals with obesity struggle with psychosocial issues, including psychiatric illness, low self‐esteem, reduced quality of life, and the internalization of weight stigmatization. 22 , 43 Thus, multidisciplinary obesity treatment often includes a focus on emotional well‐being, which is suggested to have beneficial effects on health. 44 , 45 However, patients with some viewpoints prefer a pragmatic approach. These opposing views may pose a dilemma for healthcare professionals aiming to provide high‐quality and holistic care to patients with obesity. Future research may clarify the emotional support needs of patients with obesity and the relationship of emotional support to treatment outcomes. The involvement of family and friends was considered to be relatively unimportant across viewpoints in this study, except among patients with viewpoint 5, who seem to depend more on social support. Patients with viewpoint 1 strongly oppose the involvement of loved ones and prefer to make decisions individually. This perspective might be explained by the complexity of living with obesity, which only the patient can understand fully. 
These findings bring to light new questions about the extent to which, and the manner in which, family members and friends should be involved in obesity treatment. Social support has been shown to be beneficial in chronic illness management, 46 but literature on the involvement of family and friends in adult obesity treatment is inconclusive. LIMITATIONS: Several potential limitations of this study should be considered. First, the sample of patients recruited for this study may seem to be small. However, it meets the requirements of Q methodology 30 and is similar to those of other studies. 28 , 47 Furthermore, consultation of the literature revealed no evidence of a missing viewpoint. Additionally, the viewpoints identified in this study were recognized by a professor of obesity and stress research who is involved in the treatment of patients with obesity and indicated that no viewpoint was missing, based on many years of clinical experience. Furthermore, the representation of the male perspective in this sample might be limited due to the male‐to‐female ratio. However, a similar ratio is seen in patients seeking obesity care. 48 Second, at the start of the data collection period, respondents could only participate online due to COVID‐19 pandemic precautions. Although we later offered the opportunity for face‐to‐face participation, this approach may have led to the underrepresentation of individuals with low health literacy, for whom digitalization can be a barrier to engagement. 49 However, the views of individuals with low health literacy are represented in this study, as four respondents met this criterion. Finally, our study was conducted in the Netherlands, and the identified viewpoints may not represent the views of patients living in countries with different health systems. For example, because health insurance is mandatory in the Netherlands, every resident has basic access to care. 
Aspects of the ‘access to care’ dimension may thus be viewed differently in countries without universal healthcare. However, Dutch health insurance does not cover all obesity treatments. For instance, most weight‐reducing medications are not covered. CONCLUSION: Five distinct views on what is important for PCC for patients with obesity were identified. The viewpoint ‘someone who listens in an unbiased manner’ was supported by the largest number of respondents. With these findings, we have begun to shed light on the commonalities in the views of patients living with obesity on PCC. Our data show that views on what care and support should look like for patients with obesity vary, stressing the need for tailored care. Future research may build on and expand our study's findings by considering additional factors that might influence how patients with obesity view PCC. For instance, experiences of weight stigmatization in healthcare may lead to different perspectives on what is most important in care and support. Furthermore, we explored the views of patients living with severe obesity. Future studies might examine the views of patients living with less severe obesity and explore to what extent their views on PCC differ. The views that are described in this paper provide valuable insight into the perspective of patients living with obesity on what is most important in care and support. Importantly, this knowledge helps us to understand what PCC provision for patients with obesity might entail and may help organizations arrange care accordingly. For example, some patients may benefit greatly from a high level of emotional support, while others will respond better to care and support that is centred around patient education or self‐management. CONFLICT OF INTEREST: The authors declare no conflict of interest. Supporting information: Additional supporting information is available.
Background: To better accommodate patients with obesity, the adoption of a person-centred approach to healthcare seems to be imperative. Eight dimensions are important for person-centred care (PCC): respect for patients' preferences, physical comfort, the coordination of care, emotional support, access to care, the continuity of care, the provision of information and education, and the involvement of family and friends. The aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC. Methods: Q methodology was used to study the viewpoints of 21 patients with obesity on PCC. Respondents were asked to rank 31 statements about the eight dimensions of PCC by level of personal significance. Using by-person factor analysis, distinct viewpoints were identified. Respondents' comments made while ranking were used to verify and refine the interpretation of the viewpoints. Results: Five distinct viewpoints were identified: (1) 'someone who listens in an unbiased manner', (2) 'everything should run smoothly', (3) 'interpersonal communication is key', (4) 'I want my independence', and (5) 'support for myself and my loved ones'. Viewpoint 1 was supported by the largest number of respondents and explained the most variance in the data, followed by viewpoint 3 and the other viewpoints, respectively. Conclusions: Our findings highlight the need for tailored care in obesity treatment and shed light on aspects of care and support that are most important for patients with obesity.
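The by-person factor analysis used in Q methodology starts from correlations between respondents rather than between variables: respondents whose statement rankings correlate highly load on the same factor, i.e. share a viewpoint. A minimal sketch of that first step is shown below; the rankings are hypothetical toy data, not the study's Q-sorts, and a real Q analysis would go on to factor-analyse and rotate the resulting correlation matrix.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two respondents' statement rankings."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical Q-sorts: each respondent ranks the same statements,
# e.g. from -4 ('least important') to +4 ('most important').
sorts = {
    "r1": [4, 3, -2, 0, -4],
    "r2": [3, 4, -1, -1, -4],   # view similar to r1
    "r3": [-4, -3, 2, 1, 4],    # roughly the opposite view
}

# Respondents who correlate highly would load on the same factor.
print(pearson(sorts["r1"], sorts["r2"]))  # strongly positive
print(pearson(sorts["r1"], sorts["r3"]))  # strongly negative
```

Dedicated software (e.g. the `qmethod` package in R) implements the full factor extraction and rotation; the sketch only illustrates why similar sorts end up on the same factor.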
INTRODUCTION: Over the past four decades, the global prevalence of obesity has nearly tripled. 1 , 2 The World Health Organization defines obesity as an excessive accumulation of body fat that poses a threat to health. 3 Living with obesity seriously impairs physical and psychosocial functioning, resulting in a reduced quality of life. 4 Obesity also increases the risk of developing other serious health conditions, such as type 2 diabetes, cardiovascular diseases, several types of cancer, and many other diseases. 5 Consequently, obesity, and especially severe obesity, is associated with increases in healthcare utilization and expenditures, as well as substantial societal costs due to productivity losses. 6 , 7 Although many health institutions have recognized it as a chronic disease, 8 healthcare systems seem poorly prepared to meet the needs of patients living with obesity. Clinical guidelines for the treatment of these patients are often too simplistic, focusing merely on weight loss instead of the improvement of overall health and well‐being. 9 As a result, individual circumstances, including contributing factors and underlying diseases, are often overlooked. 10 Furthermore, patients with obesity often experience weight‐related stigma and discrimination in healthcare, which can affect the quality of their care and their treatment outcomes. 11 , 12 For instance, some healthcare professionals view patients with obesity more negatively than other patients and spend less time treating them. 13 Healthcare professionals may also be insufficiently equipped or educated to perform standard medical procedures on patients with obesity. 14 To better accommodate patients with obesity, the adoption of a person‐centred approach in which care is tailored to the individual and individuals' preferences, needs, and values are respected seems to be imperative. 
15 Person‐centred care (PCC) can be seen as a paradigm shift in healthcare that has been gaining broad support with the increasing interest in the quality of care. 16 , 17 The Picker Institute distinguishes eight dimensions that are important for PCC: respect for patients' preferences, physical comfort, coordination of care, emotional support, access to care, continuity of care, the provision of information and education, and the involvement of family and friends. 18 , 19 An overview of these dimensions can be found in Table 1 (‘The eight dimensions of PCC’; abbreviation: PCC, person‐centred care). PCC has been associated with improved patient outcomes in various healthcare settings, 26 including the provision of care to patients with obesity. 27 However, the relative importance of the different aspects of PCC seems to vary among patient groups. 28 , 29 Although aspects of care that may be important specifically for patients with obesity have been identified, the significance of the eight dimensions of PCC for patients with obesity has not been assessed. Gaining insight into the aspects of PCC that are most important to this patient group is a vital step toward improved care provision, and consequently improved quality of care and patient outcomes. Thus, the aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC. CONCLUSION: Five distinct views on what is important for PCC for patients with obesity were identified. The viewpoint ‘someone who listens in an unbiased manner’ was supported by the largest number of respondents. With these findings, we have begun to shed light on the commonalities in the views of patients living with obesity on PCC. Our data show that the views on what care and support should look like for patients with obesity vary, stressing the need for tailored care for patients with obesity. 
Future research may build on and expand our study's findings by considering additional factors that might influence how patients with obesity view PCC. For instance, experiences of weight stigmatization in healthcare may lead to different perspectives on what is most important in care and support. Furthermore, we explored the views of patients living with severe obesity. Future studies might examine the views of patients living with less severe obesity and explore to what extent their views on PCC differ. The views that are described in this paper provide valuable insight into the perspective of patients living with obesity on what is most important in care and support. Importantly, this knowledge helps us to understand what PCC provision for patients with obesity might entail and may help organizations arrange care accordingly. For example, some patients may benefit greatly from a high level of emotional support, while others will respond better to care and support that is centred around patient education or self‐management.
Background: To better accommodate patients with obesity, the adoption of a person-centred approach to healthcare seems to be imperative. Eight dimensions are important for person-centred care (PCC): respect for patients' preferences, physical comfort, the coordination of care, emotional support, access to care, the continuity of care, the provision of information and education, and the involvement of family and friends. The aim of this study was to explore the views of patients with obesity on the relative importance of the dimensions of PCC. Methods: Q methodology was used to study the viewpoints of 21 patients with obesity on PCC. Respondents were asked to rank 31 statements about the eight dimensions of PCC by level of personal significance. Using by-person factor analysis, distinct viewpoints were identified. Respondents' comments made while ranking were used to verify and refine the interpretation of the viewpoints. Results: Five distinct viewpoints were identified: (1) 'someone who listens in an unbiased manner', (2) 'everything should run smoothly', (3) 'interpersonal communication is key', (4) 'I want my independence', and (5) 'support for myself and my loved ones'. Viewpoint 1 was supported by the largest number of respondents and explained the most variance in the data, followed by viewpoint 3 and the other viewpoints, respectively. Conclusions: Our findings highlight the need for tailored care in obesity treatment and shed light on aspects of care and support that are most important for patients with obesity.
12,540
301
18
[ "patients", "statement", "viewpoint", "care", "respondent", "obesity", "respondents", "important", "viewpoints", "holding" ]
[ "test", "test" ]
[CONTENT] care provision | obesity | patient views | person‐centred care | Q methodology [SUMMARY]
[CONTENT] Humans | Patient-Centered Care | Patient Preference | Factor Analysis, Statistical | Self Care | Obesity [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] patients | statement | viewpoint | care | respondent | obesity | respondents | important | viewpoints | holding [SUMMARY]
[CONTENT] obesity | care | patients | patients obesity | pcc | dimensions | healthcare | diseases | improved | person centred [SUMMARY]
[CONTENT] statements | respondents | factor | patients | obesity | methodology | study | viewpoints | views | set [SUMMARY]
[CONTENT] statement | respondent | viewpoint | stated | patients | care | holding | want | holding viewpoint | important [SUMMARY]
[CONTENT] obesity | patients | views | patients living | views patients living | patients obesity | living | support | care | care support [SUMMARY]
[CONTENT] patients | statement | obesity | respondent | viewpoint | care | respondents | important | statements | views [SUMMARY]
[CONTENT] ||| Eight ||| [SUMMARY]
[CONTENT] 21 | PCC ||| 31 | eight ||| ||| [SUMMARY]
[CONTENT] Five | 1 | 2 | 3 | 4 | 5 ||| Viewpoint 1 | 3 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| Eight ||| ||| 21 | PCC ||| 31 | eight ||| ||| ||| Five | 1 | 2 | 3 | 4 | 5 ||| Viewpoint 1 | 3 ||| [SUMMARY]
Serum estradiol levels associated with specific gene expression patterns in normal breast tissue and in breast carcinomas.
21812955
High serum levels of estradiol are associated with increased risk of postmenopausal breast cancer. Little is known about the gene expression in normal breast tissue in relation to levels of circulating serum estradiol.
BACKGROUND
We compared whole genome expression data of breast tissue samples with serum hormone levels using data from 79 healthy women and 64 breast cancer patients. Significance analysis of microarrays (SAM) was used to identify differentially expressed genes and multivariate linear regression was used to identify independent associations.
METHODS
Six genes (SCGB3A1, RSPO1, TLN2, SLITRK4, DCLK1, PTGS1) were found differentially expressed according to serum estradiol levels (FDR = 0). Three of these independently predicted estradiol levels in a multivariate model, as SCGB3A1 (HIN1) and TLN2 were up-regulated and PTGS1 (COX1) was down-regulated in breast samples from women with high serum estradiol. Serum estradiol, but none of the differentially expressed genes were significantly associated with mammographic density, another strong breast cancer risk factor. In breast carcinomas, expression of GREB1 and AREG was associated with serum estradiol in all cancers and in the subgroup of estrogen receptor positive cases.
RESULTS
We have identified genes associated with serum estradiol levels in normal breast tissue and in breast carcinomas. SCGB3A1 is a suggested tumor suppressor gene that inhibits cell growth and invasion and is methylated and down-regulated in many epithelial cancers. Our findings indicate this gene as an important inhibitor of breast cell proliferation in healthy women with high estradiol levels. In the breast, this gene is expressed in luminal cells only and is methylated in non-BRCA-related breast cancers. The possibility of a carcinogenic contribution of silencing of this gene for luminal, but not basal-like cancers should be further explored. PTGS1 induces prostaglandin E2 (PGE2) production which in turn stimulates aromatase expression and hence increases the local production of estradiol. This is the first report studying such associations in normal breast tissue in humans.
CONCLUSIONS
[ "Adult", "Aged", "Breast", "Breast Neoplasms", "Carcinoma, Ductal, Breast", "Carcinoma, Intraductal, Noninfiltrating", "Cyclooxygenase 1", "Cytokines", "Doublecortin-Like Kinases", "Estradiol", "Female", "Gene Expression Profiling", "Gene Expression Regulation, Neoplastic", "Humans", "Intracellular Signaling Peptides and Proteins", "Linear Models", "Membrane Proteins", "Middle Aged", "Multivariate Analysis", "Oligonucleotide Array Sequence Analysis", "Protein Serine-Threonine Kinases", "Talin", "Thrombospondins", "Transcriptome", "Tumor Suppressor Proteins" ]
3163631
Background
The influence of estradiol on breast development [1], the menopausal transition [2] and breast epithelial cells [3] is widely studied. However, little is known about the effect of serum estradiol on gene expression in normal breast tissue. For postmenopausal women, high serum estradiol levels are associated with increased risk of breast cancer [4-6]. The results are less conclusive for premenopausal women, but epidemiologic evidence indicates an increased risk from higher exposure to female hormones [7]. In estrogen receptor (ER) positive breast carcinomas, the proliferating tumor cells express ER, while in normal breast tissue the proliferating epithelial cells are ER negative (ER-) [8,9]. Both normal and malignant breast epithelial cells are influenced by estradiol, but through different mechanisms. In the absence of ER, normal breast epithelial cells receive proliferating paracrine signals from ER+ fibroblasts [3]. The importance of estrogen stimuli in the proliferation of ER+ breast cancer cells is evident from the effect of anti-estrogen treatment. Previously, several studies have identified genes whose expression is regulated by estradiol in breast cancer cell lines. Recently, a study reported an association between serum levels of estradiol and gene expression of trefoil factor 1 (TFF1), growth regulation by estrogen in breast cancer 1 (GREB1), PDZ domain containing 1 (PDZK1) and progesterone receptor (PGR) in ER+ breast carcinomas [10]. Functional studies on breast cancer cell lines have described that estradiol induces expression of c-fos [11] and that exposure to physiologic doses of estradiol is necessary for malignant transformation [12]. Intratumoral levels of estrogens have also been measured and were found to correlate with tumor gene expression of estradiol-metabolizing enzymes and the estrogen receptor gene (ESR1) [13] and of proliferation markers [14]. 
A recent study did, however, conclude that the intratumoral estradiol levels were mainly determined by its binding to ER (associated with ESR1-expression). The intratumoral estradiol levels were not found to be associated with local estradiol production [15]. Serum estradiol levels were found to be associated with local estradiol levels in normal breast tissue of breast cancer patients in a recent study [16]. This strengthens the hypothesis that serum estradiol levels influence the gene expression in breast tissue. Wilson and colleagues studied the effect of estradiol on normal human breast tissue transplanted into athymic nude mice. They identified a list of genes associated with estradiol treatment, including TFF1, AREG, SCGB2A2, GREB1 and GATA3. The normal tissues used in the xenografts were from breasts with benign breast disease and from mammoplasty reductions [17]. Studies describing associations between serum estradiol levels and gene expression of normal human breast tissue in its natural milieu are lacking. Knowledge about gene expression changes associated with high serum estradiol may reveal biological mechanisms underlying the increased risk for both elevated mammographic density and for developing breast cancer as seen in women with high estradiol levels. We have identified genes differentially expressed between normal breast tissue samples according to serum estradiol levels. Several genes identified in previous studies using normal breast tissue or breast carcinomas are confirmed, but additional genes were identified making important contributions to our previous knowledge.
Methods
Subjects: Two cohorts of women were recruited to the study from different breast diagnostic centers in Norway in the period 2002-2007 as described previously [18]. Exclusion criteria were pregnancy and use of anticoagulant therapy. The first cohort consisted of 120 women referred to the breast diagnostic centers who were cancer-free after further evaluation. These will be referred to as healthy women. Breast biopsies were taken from an area with some mammographic density in the breast contralateral to any suspect lesion. The second cohort consisted of 66 women who were diagnosed with breast cancer. For this cohort, study biopsies were taken from the breast carcinoma after the diagnostic biopsies were obtained. Fourteen gauge needles were used for the biopsies and sampling was guided by ultrasound. The biopsies were either soaked in RNAlater (Ambion, Austin, TX) and sent to the Oslo University Hospital, Radiumhospitalet, before storage at -20°C or directly snap-frozen in liquid nitrogen and stored at -80°C. Based on serum hormone analyses (see below), 57 of the 120 healthy women included were postmenopausal, 43 were premenopausal, 10 were perimenopausal and serum samples were lacking for 10 women. Of the 66 breast cancer patients, 50 were estimated to be postmenopausal, 13 to be premenopausal and 3 to be perimenopausal. All women provided information about height, weight, parity, hormone therapy use and family history of breast cancer and provided a signed informed consent. The study was approved by the regional ethical committee (IRB approval no S-02036). Three additional datasets were used to explore the regulation of identified genes in breast cancer. One unpublished dataset from the Akershus University Hospital (AHUS), Norway, included normal breast tissue from 42 reduction mammoplasties and both tumor and normal adjacent tissue from 48 breast cancer patients (referred to as the AHUS dataset). 
Another unpublished dataset from University of North Carolina (UNC), USA, included breast cancer and adjacent normal breast tissue from 55 breast cancer patients (referred to as the UNC dataset). The third dataset is previously published and consists of biopsies from 31 pure ductal carcinoma in situ (DCIS), 36 pure invasive breast cancers and 42 tumours with mixed histology, both DCIS and invasive [19]. 
Serum hormone analysis: Serum hormone levels (LH, FSH, prolactin, estradiol, progesterone, SHBG and testosterone) were measured with electrochemiluminescence immunoassays (ECLIA) on a Roche Modular E instrument (Roche, Basel, Switzerland) by the Department of Medical Biochemistry, Oslo University Hospital, Rikshospitalet. The menopausal status was determined based on serum levels of hormones, age and hormone use. The criteria used can be found in Additional file 1. Biochemically perimenopausal women or women with uncertain menopausal status were excluded from analyses stratified on menopause. These hormone assays are tested through an external quality assessment scheme, Labquality, and the laboratory is accredited according to ISO-ES 17025. Serum estradiol values are given as picograms per milliliter (pg/ml) (pg/ml × 3.67 = pmol/l). The functional sensitivity of the estradiol assay was 10.9 pg/ml (40 pmol/l) with a total analytical sensitivity of < 5%. 
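The estradiol unit conversion quoted in the serum hormone paragraph (pg/ml × 3.67 = pmol/l) is easy to apply in the wrong direction, so a small helper makes the two scales explicit. This is a sketch; only the 3.67 factor and the 10.9 pg/ml functional sensitivity come from the text, and the function names are illustrative.

```python
# Estradiol unit conversion as quoted in the text: 1 pg/ml = 3.67 pmol/l.
PG_ML_TO_PMOL_L = 3.67

def estradiol_pg_ml_to_pmol_l(value_pg_ml: float) -> float:
    """Convert a serum estradiol concentration from pg/ml to pmol/l."""
    return value_pg_ml * PG_ML_TO_PMOL_L

def estradiol_pmol_l_to_pg_ml(value_pmol_l: float) -> float:
    """Convert back from pmol/l to pg/ml."""
    return value_pmol_l / PG_ML_TO_PMOL_L

def below_functional_sensitivity(value_pg_ml: float) -> bool:
    """Flag values under the assay's functional sensitivity of 10.9 pg/ml."""
    return value_pg_ml < 10.9
```

The functional sensitivity of 10.9 pg/ml then corresponds to roughly 40 pmol/l, matching the value given in the text.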
Gene expression analysis: RNA extraction and hybridization were performed as previously described [18]. Briefly, RNeasy Mini Protocol (Qiagen, Valencia, CA) was used for RNA extraction. Forty samples (38 from healthy women) were excluded from further analysis due to low RNA amount (< 10 ng) or poor RNA quality assessed by the curves given by Agilent Bioanalyzer (Agilent Technologies, Palo Alto, CA). The analyses were performed before RNA integrity value (RIN) was included as a measure of degradation and samples with poor quality were excluded. Agilent Low RNA input Fluorescent Linear Amplification Kit Protocol was used for amplification and labelling with Cy5 (Amersham Biosciences, Little Chalfont, England) for sample RNA and Cy3 (Amersham Biosciences, Little Chalfont, England) for the reference (Universal Human total RNA (Stratagene, La Jolla, CA)). Labelled RNA was hybridized onto Agilent Human Whole Genome Oligo Microarrays (G4110A) (Agilent Technologies, Santa Clara, CA). 
Three arrays were excluded due to poor quality, leaving data from 79 healthy women and 64 breast cancer patients. The scanned data were processed in Feature Extraction 9.1.3.1 (Agilent Technologies, Santa Clara, CA). Locally weighted scatterplot smoothing (lowess) was used to normalize the data. The normalized and log2-transformed data were stored in the Stanford Microarray Database (SMD) [20] and retrieved for further analysis. Gene filtering excluded probes with ≥ 20% missing values and probes with fewer than three arrays at least 1.6 standard deviations away from the mean. This reduced the dataset from 40791 probes to 9767 for the healthy women and 10153 for the breast cancer patients. Missing values were imputed in R using the impute.knn method from the impute library [21]. All expression data are available in Gene Expression Omnibus (GEO) (GSE18672).
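The two-part probe filter described above (discard probes with ≥ 20% missing values, and probes for which fewer than three arrays lie at least 1.6 standard deviations from the probe mean) can be sketched as follows. This is an illustrative Python re-implementation, not the original R code, and the toy data are invented.

```python
from statistics import mean, pstdev

def keep_probe(values, max_missing_frac=0.20, min_arrays=3, sd_cutoff=1.6):
    """Apply the two filtering rules from the text to one probe.

    values: per-array measurements for the probe, with None marking missing spots.
    Rule 1: discard if the fraction of missing values is >= 20%.
    Rule 2: discard if fewer than 3 arrays lie >= 1.6 SD from the probe mean.
    """
    n_missing = sum(v is None for v in values)
    if n_missing / len(values) >= max_missing_frac:
        return False
    present = [v for v in values if v is not None]
    m, sd = mean(present), pstdev(present)
    if sd == 0:
        return False  # a flat probe has no array far from its mean
    n_variable = sum(abs(v - m) >= sd_cutoff * sd for v in present)
    return n_variable >= min_arrays

# Hypothetical toy probes: a flat probe is dropped, a clearly bimodal one is kept.
flat = [1.0] * 10
variable = [0.0] * 12 + [10.0] * 3
print(keep_probe(flat), keep_probe(variable))  # False True
```

The intent of the filter is to keep only probes that vary enough across samples to be informative in downstream analyses.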
Mammographic density

Mammographic density was estimated from digitized craniocaudal mammograms as previously described [18], using the University of Southern California Madena assessment method [22]. First, the total breast area was outlined using a computerized tool, and the area was represented as a number of pixels. One of the co-authors, GU, identified a region of interest that incorporated all areas of density, excluding those representing the pectoralis muscle and scanning artifacts. All densities above a certain threshold were tinted yellow, and the tinted pixels were converted to cm2, representing the absolute density, which was available for 108 of 120 healthy women. Percent mammographic density is calculated as the absolute density divided by the total breast area and was available for 114 of 120 healthy women. Test-retest reliability was 0.99 for absolute density.
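The percent-density definition above (absolute density divided by total breast area) amounts to a one-line calculation; a minimal Python sketch, with hypothetical example values:

```python
def percent_density(absolute_density_cm2: float, total_breast_area_cm2: float) -> float:
    """Percent mammographic density: dense area as a share of total breast area."""
    if total_breast_area_cm2 <= 0:
        raise ValueError("total breast area must be positive")
    return 100.0 * absolute_density_cm2 / total_breast_area_cm2

# Hypothetical example: 30 cm2 of dense tissue within a 120 cm2 breast area.
print(percent_density(30.0, 120.0))  # 25.0
```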
Statistical Analysis

Quantitative significance analysis of microarrays (SAM) [23,24], via the samr library in R 2.12.0, was used to identify differentially expressed genes, with serum estradiol (nmol/L) as the dependent variable. Because the distribution of serum levels is skewed, the non-parametric Wilcoxon test statistic was used. Probes with an FDR < 50% were included in the gene ontology analyses. DAVID Bioinformatics Resources 2008 from the National Institute of Allergy and Infectious Diseases, NIH [25], was used for gene ontology analysis. Functional annotation clustering was applied with the following annotation categories: biological processes, molecular function, cellular compartment and KEGG pathways. We included annotation terms with an FDR-corrected p-value < 0.01 containing between 5 and 500 genes. For multivariate analysis, linear regression models were fitted in R 2.12.0 to identify independent associations. Stepwise selection was performed to determine which variables contributed independently to the response variable. In the first step, all variables were included in the model. At each step, the variable with the highest p-value was removed and the model refitted. This was repeated until all variables remaining in the model had a p-value smaller than 0.05.
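The backward stepwise selection described above can be sketched as follows. This is an illustrative Python re-implementation, not the original R code: `fit_pvalues` is a hypothetical stand-in for refitting the regression and returning per-covariate p-values (mocked here with fixed, invented numbers; in the real analysis the p-values change at every refit), and forced covariates are never eligible for removal.

```python
def backward_stepwise(covariates, forced, fit_pvalues, alpha=0.05):
    """Drop the least significant removable covariate until all p-values are < alpha.

    covariates: candidate covariate names.
    forced: covariates kept in the model regardless of p-value.
    fit_pvalues: callable(model_vars) -> {var: p-value} from a refitted model.
    """
    model = list(forced) + [c for c in covariates if c not in forced]
    while True:
        pvals = fit_pvalues(model)
        # Only non-forced covariates may be removed.
        removable = {v: p for v, p in pvals.items() if v not in forced}
        if not removable:
            break
        worst, worst_p = max(removable.items(), key=lambda vp: vp[1])
        if worst_p < alpha:
            break  # every remaining covariate is significant
        model.remove(worst)  # reject the worst variable, then refit
    return model

# Mock p-values standing in for a real regression refit (hypothetical numbers).
def mock_fit(model_vars):
    table = {"age": 0.30, "menopause": 0.60, "hormone_use": 0.40,
             "SCGB3A1": 0.01, "TLN2": 0.02, "PTGS1": 0.03, "geneX": 0.70}
    return {v: table[v] for v in model_vars}

forced = ["age", "menopause", "hormone_use"]
genes = ["SCGB3A1", "TLN2", "PTGS1", "geneX"]
print(backward_stepwise(genes, forced, mock_fit))
```

With these mock numbers, `geneX` (p = 0.70) is removed and the three forced covariates plus `SCGB3A1`, `TLN2` and `PTGS1` remain.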
Linear regression was used to determine the independent association between serum estradiol and the differentially expressed genes in healthy women. Age, menopause and current hormone use were included in the model and forced to stay throughout the stepwise selection, to correct for confounding by these factors. Linear regression was also fitted in two analyses with mammographic density in healthy women as the dependent variable: in one, serum hormone levels were included as the independent covariates; in the other, variables representing gene expression associated with serum estradiol were included as covariates. Epidemiologic covariates, such as age, BMI, parity and use of hormone therapy, were included in the mammographic density analyses and forced to stay throughout the stepwise selection, to control for potential confounding by these factors. Tumor subtypes were assigned using the intrinsic subtypes published by Sørlie et al in 2001 [26]. The total gene set was filtered for the intrinsic genes, and the correlation between each sample's expression profile over the intrinsic genes and each subtype was calculated. Each sample was assigned to the subtype with which it had the highest correlation; samples with all correlations < 0.1 were not assigned to any subtype. Two-sided t-tests were used to test for differences in expression of single genes between two categories of a variable (e.g. pre- and postmenopausal).
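The correlation-based subtype assignment described in the statistics section can be sketched as follows. This is an illustrative Python snippet, not the original code; the centroid vectors and gene count are invented for the example.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def assign_subtype(sample, centroids, min_corr=0.1):
    """Assign a sample to the subtype profile it correlates best with.

    Returns None when all correlations are below 0.1, as in the text.
    """
    corrs = {subtype: pearson(sample, profile) for subtype, profile in centroids.items()}
    best = max(corrs, key=corrs.get)
    return best if corrs[best] >= min_corr else None

# Hypothetical subtype profiles over three intrinsic genes.
centroids = {"luminal_A": [2.0, 1.0, -1.0], "basal_like": [-1.5, -0.5, 2.0]}
print(assign_subtype([1.8, 0.9, -1.2], centroids))  # luminal_A
```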
Conclusions
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/11/332/prepub
[ "Influence of estradiol on breast development [1], the menopausal transition [2] and on the breast epithelial cells [3] is widely studied. However, little is known about the effect of serum estradiol on gene expression in the normal breast tissue. For postmenopausal women, high serum estradiol levels are associated with increased risk of breast cancer [4-6]. The results are less conclusive for premenopausal women, but epidemiologic evidence indicates an increased risk from higher exposure to female hormones [7].\nIn estrogen receptor (ER) positive breast carcinomas, the proliferating tumor cells express ER while in normal breast tissue the proliferating epithelial cells are ER negative (ER-) [8,9]. Both normal and malignant breast epithelial cells are influenced by estradiol but through different mechanisms. In the lack of ER, normal breast epithelial cells receive proliferating paracrine signals from ER+ fibroblasts [3]. The importance of estrogen stimuli in the proliferation of ER+ breast cancer cells is evident from the effect of anti-estrogen treatment. Previously, several studies have identified genes whose expression is regulated by estradiol in breast cancer cell lines. Recently, a study reported an association between serum levels of estradiol and gene expression of trefoil factor 1 (TFF1), growth regulation by estrogen in breast cancer 1 (GREB1), PDZ domain containing 1 (PDZK1) and progesterone receptor (PGR) in ER+ breast carcinomas [10]. Functional studies on breast cancer cell lines have described that estradiol induces expression of c-fos [11] and that exposure to physiologic doses of estradiol is necessary for malignant transformation [12]. Intratumoral levels of estrogens have also been measured and were found correlated with tumor gene expression of estradiol-metabolizing enzymes and the estrogen receptor gene (ESR1) [13] and of proliferation markers [14]. 
A recent study did, however, conclude that the intratumoral estradiol levels were mainly determined by its binding to ER (associated with ESR1-expression). The intratumoral estradiol levels were not found to be associated with local estradiol production [15]. Serum estradiol levels were found to be associated with local estradiol levels in normal breast tissue of breast cancer patients in a recent study [16]. This strengthens the hypothesis that serum estradiol levels influence the gene expression in breast tissue.\nWilson and colleagues studied the effect of estradiol on normal human breast tissue transplanted into athymic nude mice. They identified a list of genes associated with estradiol treatment, including TFF1, AREG, SCGB2A2, GREB1 and GATA3. The normal tissues used in the xenografts were from breasts with benign breast disease and from mammoplasty reductions [17].\nStudies describing associations between serum estradiol levels and gene expression of normal human breast tissue in its natural milieu are lacking. Knowledge about gene expression changes associated with high serum estradiol may reveal biological mechanisms underlying the increased risk for both elevated mammographic density and for developing breast cancer as seen in women with high estradiol levels. We have identified genes differentially expressed between normal breast tissue samples according to serum estradiol levels. Several genes identified in previous studies using normal breast tissue or breast carcinomas are confirmed, but additional genes were identified making important contributions to our previous knowledge.", "Two cohorts of women were recruited to the study from different breast diagnostic centers in Norway in the period 2002-2007 as described previously [18]. Exclusion criteria were pregnancy and use of anticoagulant therapy. The first cohort consisted of 120 women referred to the breast diagnostic centers who were cancer-free after further evaluation. 
These will be referred to as healthy women. Breast biopsies were taken from an area with some mammographic density in the breast contralateral to any suspect lesion. The second cohort consisted of 66 women who were diagnosed with breast cancer. For this cohort, study biopsies were taken from the breast carcinoma after the diagnostic biopsies were obtained. Fourteen gauge needles were used for the biopsies and sampling was guided by ultrasound. The biopsies were either soaked in RNAlater (Ambion, Austin, TX) and sent to the Oslo University Hospital, Radiumhospitalet, before storage at -20°C or directly snap-frozen in liquid nitrogen and stored at -80°C. Based on serum hormone analyses (see below), 57 of the 120 healthy women included were postmenopausal, 43 were premenopausal, 10 were perimenopausal and serum samples were lacking for 10 women. Of the 66 breast cancer patients, 50 were estimated to be postmenopausal, 13 to be premenopausal and 3 to be perimenopausal. All women provided information about height, weight, parity, hormone therapy use and family history of breast cancer and provided a signed informed consent. The study was approved by the regional ethical committee (IRB approval no S-02036).\nThree additional datasets were used to explore the regulation of identified genes in breast cancer. One unpublished dataset from the Akershus University Hospital (AHUS), Norway, included normal breast tissue from 42 reduction mammoplasties and both tumor and normal adjacent tissue from 48 breast cancer patients (referred to as the AHUS dataset). Another unpublished dataset from University of North Carolina (UNC), USA, included breast cancer and adjacent normal breast tissue from 55 breast cancer patients (referred to as the UNC dataset). 
The third dataset is previously published and consists of biopsies from 31 pure ductal carcinoma in situ (DCIS), 36 pure invasive breast cancers and 42 tumours with mixed histology, both DCIS and invasive [19].", "Serum hormone levels (LH, FSH, prolactin, estradiol, progesterone, SHBG and testosterone) were measured with electrochemiluminescence immunoassays (ECLIA) on a Roche Modular E instrument (Roche, Basel, Switzerland) by Department of Medical Biochemistry, Oslo University Hospital, Rikshospitalet. The menopausal status was determined based on serum levels of hormones, age and hormone use. The criteria used can be found in Additional file 1. Biochemically perimenopausal women or women with uncertain menopausal status were excluded from analyses stratified on menopause. These hormone assays are tested through an external quality assessment scheme, Labquality, and the laboratory is accredited according to ISO-ES 17025. Serum estradiol values are given as picograms per milliliter (pg/ml) (pg/ml × 3.67 = pmol/). The functional sensitivity of the estradiol assay was 10.9 pg/ml (40 pmol/l) with a total analytical sensitivity of < 5%.", "RNA extraction and hybridization were performed as previously described [18]. Briefly, RNeasy Mini Protocol (Qiagen, Valencia, CA) was used for RNA extraction. Forty samples (38 from healthy women) were excluded from further analysis due to low RNA amount (< 10 ng) or poor RNA quality assessed by the curves given by Agilent Bioanalyzer (Agilent Technologies, Palo Alto, CA). The analyses were performed before RNA integrity value (RIN) was included as a measure of degradation and samples with poor quality were excluded. Agilent Low RNA input Fluorescent Linear Amplification Kit Protocol was used for amplification and labelling with Cy5 (Amersham Biosciences, Little Chalfont, England) for sample RNA and Cy3 (Amersham Biosciences, Little Chalfont, England) for the reference (Universal Human total RNA (Stratagene, La Jolla, CA)). 
Labelled RNA was hybridized onto Agilent Human Whole Genome Oligo Microarrays (G4110A) (Agilent Technologies, Santa Clara, CA). Three arrays were excluded due to poor quality leaving data from 79 healthy women and 64 breast cancer patients.\nThe scanned data was processed in Feature Extraction 9.1.3.1 (Agilent Technologies, Santa Clara, CA). Locally weighted scatterplot smoothing (lowess) was used to normalize the data. The normalized and log2-transformed data was stored in the Stanford Microarray Database (SMD)[20] and retrieved for further analysis. Gene filtering excluded probes with ≥ 20% missing values and probes with less than three arrays being at least 1.6 standard deviation away from the mean. This reduced the dataset from 40791 probes to 9767 for the healthy women and to 10153 for the breast cancer patients. Missing values were imputed in R using the method impute. knn in the library impute [21]. All expression data are available in Gene Expression Omnibus (GEO)(GSE18672).", "Mammographic density was estimated from digitized craniocaudal mammograms as previously described [18] using the University of Southern California Madena assessment method [22]. First, the total breast area was outlined using a computerized tool and the area was represented as number of pixels. One of the co-authors, GU, identified a region of interest that incorporated all areas of density excluding those representing the pectoralis muscle and scanning artifacts. All densities above a certain threshold were tinted yellow, and the tinted pixles converted to cm2 representing the absolute density and was available for 108 of 120 healthy women. Percent mammographic density is calculated as the absolute density divided by the total breast area and was available for 114 of 120 healthy women. 
Test-retest reliability was 0.99 for absolute density.", "Quantitative significance analysis of microarrays (SAM) [23,24] was used for analysis of differentially expressed genes, by the library samr in R 2.12.0. Serum estradiol (nmol/L) was used as dependent variable. The distribution of serum levels is skewed and therefore the non-parametric Wilcoxon test-statistic was used. Probes with an FDR < 50% were included for gene ontology analyses.\nDAVID Bioinformatics Resources 2008 from the National Institute of Allergy and Infectious Diseases, NIH [25] was used for gene ontology analysis. Functional annotation clustering was applied and the following annotation categories were selected: biological processes, molecular function, cellular compartment and KEGG pathways. We included annotation terms with a p-value (FDR-corrected) of < 0.01 containing between 5 and 500 genes.\nFor multivariate analysis, linear regression was fitted in R 2.12.0 to identify independent associations. Stepwise selection was performed to determine which variables had an independent contribution to the response variable. In the first step, all variables were included in the model. The variable with the highest p-value was rejected from the model in each step, before the model was refitted. This was repeated until all variables in the model had a p-value smaller than 0.05.\nLinear regression was used to determine the independent association between serum estradiol and the differentially expressed genes in healthy women. Age, menopause and current hormone use were included in the model and forced to stay throughout the stepwise selection to correct for confounding by these factors. Linear regression was also fitted in two analyses with mammographic density in healthy women as a dependent variable. 
In one set of analyses serum hormone levels were included as the independent covariates, and in the other analysis, variables representing gene expression associated with serum estradiol were included as covariates. Epidemiologic covariates, such as age, BMI, parity and use of hormone therapy were included in the mammographic density analyses and forced to stay throughout the stepwise selection to control for potential confounding by these factors.\nTumor subtypes were calculated using the intrinsic subtypes published by Sørlie et al in 2001 [26]. The total gene set was filtered for the intrinsic genes. The correlation between gene expression profiles for the intrinsic genes for each sample with each subtype was calculated. Each sample was assigned to the subtype with which it had the highest correlation. Samples with all correlations < 0.1 were not assigned to any subtype. Two-sided t-tests were used to check for difference in expression for single genes between two categories of variables (eg: pre- and postmenopausal).", " Gene expression in normal breast tissue according to serum estradiol levels Genes differentially expressed in normal breast tissue from healthy women according to serum estradiol levels with FDR = 0 are listed in Table 1. The gene ontology terms extracellular region and skeletal system development were significantly enriched in the top 80 up-regulated genes (FDR < 50%). There were no significant gene ontology terms enriched in the down-regulated genes with FDR < 50 (n = 8), although response to steroid hormone stimulus was the most enriched term with three observed genes (prostaglandin-endoperoxide synthase 1 (PTGS1), ESR1 and GATA3)(Additional file 2).\nGenes significantly differentially expressed in normal breast tissue of healthy women according to serum estradiol.\nA) Q-values and regulation of gene expression from quantitative SAM analysis of gene expression according to serum estradiol. 
B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts.\n1) Q-value from SAM of gene expression in normal breast tissue according to serum estradiol\n2) s-est = serum estradiol\n3) BC = breast cancer\n4) P-value from two-sided t-test\n5) ER+ BC = estrogen recepor positive breast cancer (n = 53)\n6) ER- BC = estrogen recepor negative breast cancer (n = 8)\nThe genes differentially expressed in normal breast tissue according to serum estradiol with an FDR = 0 (from Table 1) were tested for differential expression between breast cancer tissue and normal breast tissue from healthy women. All six genes were differentially expressed between carcinomas and normal tissue. Interestingly, the expression in breast carcinomas was similar to that in normal tissue from women with lower levels of circulating estradiol and opposite to that found in normal samples from women with higher levels of serum estradiol (Table 1). Comparing the expression of these genes in normal breast tissue with the expression in ER+ and ER- carcinomas respectively revealed similar results (Table 1).\nIn tumors, SCGB3A1 tended to be expressed at a lower level in basal-like tumors compared with all other tumors or compared with luminal A tumors, but this did not reach statistical significance (both p-values = 0.2). However in two other datasets (AHUS and UNC), SCGB3A1 was expressed at significantly lower levels in basal-like tumors compared with all other subtypes (p = 0.04 and 0.003 respectively). There was no consistent significant difference in SCGB3A1 expression in ER+ and ER- tumors.\nOf the six genes differentially expressed according to serum estradiol in normal breast tissue, three were differentially expressed between DCIS and early invasive breast carcinomas based on a previously published dataset [19](Table 1). 
SCGB3A1 was down-regulated in invasive compared with DCIS, whereas talin 2 (TLN2) and PTGS1 were up-regulated in invasive compared with DCIS.\nA linear regression was fitted with all differentially expressed genes as covariates and controlling for age, menopause and current hormone therapy use. After leave-one-out elimination of insignificant covariates, SCGB3A1, TLN2 and PTGS1 were still significant (Table 2).\nGenes independently associated with serum estradiol in a linear regression model.\nAll genes differentially expressed according to serum estradiol (Table 1) were included. Values shown are corrected for age, menopause and current hormone therapy. After leave-one-out stepwise selection the following covariates remained:\n1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.\n2) Values for the non-significant genes are from the last model before they were excluded.\nGenes differentially expressed in normal breast tissue from healthy women according to serum estradiol levels with FDR = 0 are listed in Table 1. The gene ontology terms extracellular region and skeletal system development were significantly enriched in the top 80 up-regulated genes (FDR < 50%). There were no significant gene ontology terms enriched in the down-regulated genes with FDR < 50 (n = 8), although response to steroid hormone stimulus was the most enriched term with three observed genes (prostaglandin-endoperoxide synthase 1 (PTGS1), ESR1 and GATA3)(Additional file 2).\nGenes significantly differentially expressed in normal breast tissue of healthy women according to serum estradiol.\nA) Q-values and regulation of gene expression from quantitative SAM analysis of gene expression according to serum estradiol. 
B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts.\n1) Q-value from SAM of gene expression in normal breast tissue according to serum estradiol\n2) s-est = serum estradiol\n3) BC = breast cancer\n4) P-value from two-sided t-test\n5) ER+ BC = estrogen recepor positive breast cancer (n = 53)\n6) ER- BC = estrogen recepor negative breast cancer (n = 8)\nThe genes differentially expressed in normal breast tissue according to serum estradiol with an FDR = 0 (from Table 1) were tested for differential expression between breast cancer tissue and normal breast tissue from healthy women. All six genes were differentially expressed between carcinomas and normal tissue. Interestingly, the expression in breast carcinomas was similar to that in normal tissue from women with lower levels of circulating estradiol and opposite to that found in normal samples from women with higher levels of serum estradiol (Table 1). Comparing the expression of these genes in normal breast tissue with the expression in ER+ and ER- carcinomas respectively revealed similar results (Table 1).\nIn tumors, SCGB3A1 tended to be expressed at a lower level in basal-like tumors compared with all other tumors or compared with luminal A tumors, but this did not reach statistical significance (both p-values = 0.2). However in two other datasets (AHUS and UNC), SCGB3A1 was expressed at significantly lower levels in basal-like tumors compared with all other subtypes (p = 0.04 and 0.003 respectively). There was no consistent significant difference in SCGB3A1 expression in ER+ and ER- tumors.\nOf the six genes differentially expressed according to serum estradiol in normal breast tissue, three were differentially expressed between DCIS and early invasive breast carcinomas based on a previously published dataset [19](Table 1). 
SCGB3A1 was down-regulated in invasive compared with DCIS, whereas talin 2 (TLN2) and PTGS1 were up-regulated in invasive compared with DCIS.\nA linear regression was fitted with all differentially expressed genes as covariates and controlling for age, menopause and current hormone therapy use. After leave-one-out elimination of insignificant covariates, SCGB3A1, TLN2 and PTGS1 were still significant (Table 2).\nGenes independently associated with serum estradiol in a linear regression model.\nAll genes differentially expressed according to serum estradiol (Table 1) were included. Values shown are corrected for age, menopause and current hormone therapy. After leave-one-out stepwise selection the following covariates remained:\n1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.\n2) Values for the non-significant genes are from the last model before they were excluded.\n Serum estradiol related to mammographic density in healthy women Regression analysis in postmenopausal women showed that serum estradiol was independently associated with both absolute and percent mammographic density when controlling for age, BMI and current use of hormone therapy (Table 3). None of the genes differentially expressed in normal breast tissue according to serum estradiol levels were independently associated with mammographic density (data not shown).\nSerum hormones independently associated with mammographic density in linear regression models.\nValues shown are corrected for age, HT and BMI. 
Through leave-one-out stepwise elimination of covariates, prolactin, SHBG and testosterone were excluded and the following variables remained.\n1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.\nRegression analysis in postmenopausal women showed that serum estradiol was independently associated with both absolute and percent mammographic density when controlling for age, BMI and current use of hormone therapy (Table 3). None of the genes differentially expressed in normal breast tissue according to serum estradiol levels were independently associated with mammographic density (data not shown).\nSerum hormones independently associated with mammographic density in linear regression models.\nValues shown are corrected for age, HT and BMI. Through leave-one-out stepwise elimination of covariates, prolactin, SHBG and testosterone were excluded and the following variables remained.\n1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.\n Gene expression in breast carcinomas according to serum estradiol levels In breast carcinomas, quantitative SAM revealed two genes, AREG and GREB1, as differentially expressed according to serum estradiol levels with FDR = 0 (Table 4). Both genes were up-regulated in samples from women with high serum estradiol (estradiol was used as a continuous response variable in the analysis). Of 16 probes up-regulated in samples from women with high serum estradiol, there were three probes for TFF3 and one for TFF1, although these did not reach statistical significance (Table 4). No genes were significantly down-regulated according to serum estradiol. In ER+ samples (n = 53), we also found AREG and GREB1 up-regulated in samples from women with high serum estradiol (FDR = 0), but the TFF-genes were not up-regulated. 
Among the ER- samples (n = 8) there was very little variation in serum estradiol levels, so a search for genes differentially expressed according to serum estradiol was not feasible.
Genes significantly differentially expressed according to serum estradiol levels in breast carcinomas.
A) Quantitative SAM analysis for differential expression according to serum estradiol with q-values and direction of regulation indicated. B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts.
1) Q-value from SAM of gene expression according to serum estradiol
2) Gene expression in samples from patients with high compared with low serum estradiol
3) ER+ BC = estrogen receptor positive breast cancer (n = 53)
4) BC = breast cancer
5) P-value from two-sided t-test
6) ER- BC = estrogen receptor negative breast cancer (n = 8)
7) Two different probes for TFF3 are used
Looking at the expression of these genes in normal breast tissue from healthy women according to serum estradiol, both AREG and GREB1 were up-regulated in samples from women with high estradiol levels without reaching significance. Comparing the expression of these genes in breast carcinomas and normal breast tissue, neither AREG nor GREB1 was differentially expressed between normal breast tissue and breast carcinomas. All the probes for the TFF-genes were, however, significantly down-regulated in normal breast tissue compared with breast carcinomas. All these genes (AREG, GREB1, TFF1 and TFF3) were up-regulated in ER+ carcinomas compared with ER- carcinomas (AREG was only borderline significant) (Additional file 3).
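The q-values reported by SAM come from a permutation-based FDR estimate. The underlying idea of converting per-gene p-values into FDR-style q-values can be illustrated with the simpler Benjamini-Hochberg computation; this sketch shows the general concept, not the SAM algorithm itself.

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg q-values: for each gene, the smallest FDR level
    at which that gene would still be called significant."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)           # p_(i) * m / i
    q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    q = np.empty(m)
    q[order] = np.minimum(q_sorted, 1.0)
    return q

print(bh_qvalues([0.001, 0.01, 0.03, 0.5]))
```

A gene list reported at "FDR = 0" corresponds to genes whose q-value is 0 under the chosen procedure.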
Genes differentially expressed in normal breast tissue from healthy women according to serum estradiol levels with FDR = 0 are listed in Table 1. The gene ontology terms extracellular region and skeletal system development were significantly enriched in the top 80 up-regulated genes (FDR < 50%). There were no significant gene ontology terms enriched in the down-regulated genes with FDR < 50% (n = 8), although response to steroid hormone stimulus was the most enriched term, with three observed genes (prostaglandin-endoperoxide synthase 1 (PTGS1), ESR1 and GATA3) (Additional file 2).
Genes significantly differentially expressed in normal breast tissue of healthy women according to serum estradiol.
A) Q-values and regulation of gene expression from quantitative SAM analysis of gene expression according to serum estradiol. B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts.
1) Q-value from SAM of gene expression in normal breast tissue according to serum estradiol
2) s-est = serum estradiol
3) BC = breast cancer
4) P-value from two-sided t-test
5) ER+ BC = estrogen receptor positive breast cancer (n = 53)
6) ER- BC = estrogen receptor negative breast cancer (n = 8)
The genes differentially expressed in normal breast tissue according to serum estradiol with FDR = 0 (from Table 1) were tested for differential expression between breast cancer tissue and normal breast tissue from healthy women. All six genes were differentially expressed between carcinomas and normal tissue. Interestingly, the expression in breast carcinomas was similar to that in normal tissue from women with lower levels of circulating estradiol and opposite to that found in normal samples from women with higher levels of serum estradiol (Table 1). Comparing the expression of these genes in normal breast tissue with the expression in ER+ and ER- carcinomas, respectively, revealed similar results (Table 1).
In tumors, SCGB3A1 tended to be expressed at a lower level in basal-like tumors compared with all other tumors or with luminal A tumors, but this did not reach statistical significance (both p-values = 0.2). However, in two other datasets (AHUS and UNC), SCGB3A1 was expressed at significantly lower levels in basal-like tumors compared with all other subtypes (p = 0.04 and 0.003, respectively). There was no consistent significant difference in SCGB3A1 expression between ER+ and ER- tumors.
Of the six genes differentially expressed according to serum estradiol in normal breast tissue, three were differentially expressed between DCIS and early invasive breast carcinomas based on a previously published dataset [19] (Table 1).
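Gene ontology over-representation of the kind reported above (e.g. three of the eight down-regulated genes annotated with response to steroid hormone stimulus) is commonly scored with a one-sided hypergeometric test. A minimal sketch follows; the array size and annotation totals are illustrative placeholders, not the study's numbers.

```python
from math import comb

def go_enrichment_p(N, K, n, k):
    """P(at least k of the n selected genes carry the term), when K of the
    N genes on the array are annotated with it (hypergeometric upper tail)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Illustrative only: 3 hits among 8 selected genes, for a term annotating
# 50 of 10 000 array genes (totals are made up for the example).
print(go_enrichment_p(N=10_000, K=50, n=8, k=3))
```

Enrichment tools then correct such p-values across the many GO terms tested, which is where the FDR thresholds quoted above come in.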
Gene expression in normal breast tissue according to serum estradiol levels
We have identified genes differentially expressed according to serum estradiol in normal breast tissue of healthy women.
The genes up-regulated in normal breast tissue under the influence of high serum estradiol are enriched for the gene ontology terms extracellular matrix and skeletal system development. Both ER isoforms, α and β, are expressed in the stromal cells [27]. The proliferating epithelial cells have not been found to be ERα+ [8] and are most often negative for both ER isoforms [9]. In normal breast tissue, estrogen-induced epithelial proliferation is, at least partly, caused by paracrine signals from ER+ fibroblasts [3]. The enrichment of gene ontology terms related to extracellular matrix may be linked to the effect of estradiol on the ER+ stromal cells.
Three genes were independently associated with serum estradiol levels in normal breast tissue in a linear regression model after controlling for age, menopause and current hormone therapy. SCGB3A1 and TLN2 were positively associated with serum estradiol, and PTGS1 (COX1) negatively.
SCGB3A1, also called high in normal 1 (HIN1), is a secretoglobin transcribed in luminal, but not in myoepithelial, breast cells and is secreted from the cell [28]. The protein is a tumor suppressor that inhibits cell growth, migration and invasion acting through the AKT pathway. SCGB3A1 inhibits Akt phosphorylation, which reduces the function of Akt in promoting cell cycle progression (transition from the G1 to the S phase) and preventing apoptosis (through inhibition of the TGFβ pathway) [29] (Figure 1).
Simplified illustration of the cellular mechanisms of action of SCGB3A1.
SCGB3A1 inhibits the phosphorylation of Akt, leading to reduced cell division and increased apoptosis. Molecules in red are increased/stimulated as a result of SCGB3A1 action, whereas molecules in blue are decreased/inhibited.
The SCGB3A1 promoter has been found to be hypermethylated, with down-regulated expression of the gene, in breast carcinomas compared with normal breast tissue, which is why the gene is referred to as "high in normal 1" (HIN1) [30-32]. Interestingly, the gene is not methylated in BRCA-mutated and BRCA-like breast cancer [32]. Methylation of the gene is suggested to be an early event in non-BRCA-associated breast cancer [33].
We found SCGB3A1 down-regulated in basal-like cancers compared with other subtypes. At first glance, this may seem contradictory to the observation that the gene is not methylated in BRCA-like breast cancers. However, Krop and colleagues found that the gene is expressed in luminal epithelial cell lines, but not in myoepithelial cell lines. The reduced expression seen in basal-like cancer could be due to a myoepithelial phenotype arising from a myoepithelial cell of origin or from phenotypic changes acquired during carcinogenesis. This could also be linked to the lack of methylation in BRCA-associated breast cancers, which are often basal-like: an a priori low gene expression would make methylation unnecessary. The increased Akt activity seen in basal-like cancers [34] is consistent with the low levels of SCGB3A1 expression observed in the basal-like cancers in this study, leading to increased Akt phosphorylation and thereby Akt activity.
The up-regulation of SCGB3A1 in the breasts of women with high serum estradiol protects the breast epithelial cells against uncontrolled proliferation. Women with methylation of the SCGB3A1 promoter may be at risk of developing luminal, but not basal-like, breast cancer, and a reduction in serum estradiol levels may be protective for these women. Hormone therapy after menopause is associated with receptor positive, but not receptor negative, breast cancer [35]. Our results indicate that the same may be true for circulating estradiol levels in the absence of functional SCGB3A1, but this has not yet been shown empirically.
PTGS1 (prostaglandin-endoperoxide synthase 1) is synonymous with cyclooxygenase 1 (COX1) and codes for an enzyme important in prostaglandin production. Studies of normal human adipocytes have shown that the enzyme induces production of prostaglandin E2 (PGE2), which in turn increases the expression of aromatase (CYP19A1) [36]. Aromatase is the enzyme responsible for the last step in the conversion of androgens to estrogens in adipose tissue. Hence, the expression of PTGS1 may increase the local production of estradiol (Figure 2). In normal breast tissue, we observed that the expression of PTGS1 was lower in samples from women with higher levels of serum estradiol. This may be due to negative feedback: high systemic levels of estradiol make local production unnecessary, and PTGS1-induced aromatase production is abolished.
Schematic illustration of the mechanism of action of PTGS1. PTGS1 induces PGE2 production. PGE2 increases the expression of aromatase (CYP19A1), which in turn converts androgens to estrogens in adipose tissue. 17βHSD1 = 17β-hydroxysteroid dehydrogenase.
The up-regulation of PTGS1 in breast carcinomas compared with normal tissue is expected from current knowledge. Several studies have suggested that PTGS1 has a carcinogenic role in different epithelial cancers [37-41]. The gene has also previously been found over-expressed in tumors compared with tumor-adjacent normal tissue [42]. There has been a large amount of research on PTGS2 in relation to cancer, indicating a role in carcinogenesis. The probes for PTGS2 were filtered out due to too many missing values and were not included in the analysis. Hence, this study lacks information about the role of PTGS2 expression in relation to serum estradiol levels.
TLN2 is less known and less studied than talin 1 (TLN1). Both talins are believed to connect integrins to the actin cytoskeleton and are involved in integrin-associated cell adhesion [43,44]. TLN2 is located on chromosome 15q15-21, close to CYP19A1, which codes for aromatase. A study of aromatase-excess syndrome found that certain minor chromosomal rearrangements may cause cryptic transcription of the CYP19A1 gene through the TLN2 promoter [45]. We found that TLN2 was up-regulated in the breasts of healthy women with high levels of serum estradiol. This could indicate an activation of cell adhesion. TLN2 was the only gene significantly up-regulated according to serum estradiol in normal breast tissue of premenopausal women. The down-regulation observed in breast cancers compared with normal breast tissue indicates a loss of cell adhesion. The expression of the gene is lower in DCIS than in invasive carcinomas, which is contrary to what was expected, but the dataset is small.
A previous study reported on gene expression in normal human breast tissue transplanted into two groups of athymic mice treated with different levels of estradiol [17]. Neither SCGB3A1, TLN2 nor PTGS1 was significantly differentially expressed in that study. They did, however, identify many of the genes found to be significantly differentially expressed according to serum estradiol in breast carcinomas in the current study, such as AREG, GREB1, TFF1 and TFF3. Going back to our normal samples, we see that several of their genes (including AREG, GREB1, TFF1, TFF3, GATA3 and two SERPIN genes) are differentially expressed in our normal breast tissue, but did not reach statistical significance (Additional file 4).
The differences observed between our study and that of Wilson and colleagues may be due to chance and to the presence of different residual confounding in the two studies.
Wilson and colleagues studied the effects of estradiol treatment, which may act differently upon the breast tissue than endogenous estradiol. Normal human breast tissue transplanted into mice may also react differently to varying levels of estradiol than it does in its natural milieu in humans. The genes that were significant in the Wilson study and differentially expressed but not significant in our study (e.g. AREG, GREB1, TFF1, TFF3 and GATA3) may be associated with serum estradiol levels in normal tissues as well as in tumor tissues, where we and others have observed significant associations. Our study is the first to identify the expression of SCGB3A1, TLN2 and PTGS1 in normal breast tissue as significantly associated with serum estradiol levels. These findings are biologically reasonable and may have been missed in previous studies due to a lack of representative study material.

Serum estradiol associated with mammographic density in healthy women
Serum estradiol levels were independently associated with mammographic density when controlling for age, BMI and current use of hormone therapy, and the magnitude of the association was substantial (Table 3).
The high beta-value in the regression equation implies a large magnitude of impact, which supports the hypothesis that high serum estradiol increases mammographic density with both statistical and biological significance.

Gene expression in breast carcinomas according to serum estradiol levels
The expression of genes found to be differentially expressed in normal breast tissue according to serum estradiol levels was examined in breast carcinomas. We found that the expression was the opposite of that in normal breast tissue from women with high serum estradiol (Table 1). This may be due to a lack of negative feedback on growth regulation in breast tumors. In breast cancer cell lines, estrogen induced up-regulation of positive proliferation regulators and down-regulation of anti-proliferative and pro-apoptotic genes, resulting in a net positive proliferative drive [46]. This is in line with our findings. In normal breast tissue from women with high serum estradiol, SCGB3A1, which regulates proliferation negatively, and TLN2, which prevents invasion, are up-regulated. PTGS1, which induces local production of estradiol and thereby estradiol-stimulated proliferation, is down-regulated. All three genes are expressed to maintain control and regulation of the epithelial cells. In breast cancers the expression of these genes favors growth, migration and proliferation. This supports the hypothesis that high serum estradiol increases the proliferative pressure in normal breasts, which leads to an activation of mechanisms counter-acting this proliferative pressure.
In carcinomas, growth regulation is lost, and these hormone-related growth-promoting mechanisms are turned on.
Interestingly, both AREG and GREB1 were up-regulated in ER+ breast carcinomas of younger (< 45 years) compared with older (> 70 years) women in a previous publication [47]. The increased expression of these genes was proposed as a mechanism responsible for the observed increase in proliferation seen in the tumors of younger compared with older women [47].
The genes differentially expressed according to serum estradiol levels in tumors confirmed many of the findings from the Dunbier study of ER+ tumors [10]. The previously published list of genes positively correlated with serum estradiol included the TFF-genes and GREB1. These genes were also found significant in the analysis of all tumors in this study, although TFF1 and TFF3 did not reach statistical significance (Table 4). In addition to the previously published genes, we identified AREG, an EGFR ligand essential for breast development, as up-regulated in tumors from patients with high serum estradiol.
GREB1 has previously been found to be an important estrogen-induced stimulator of growth in ER+ breast cancer cell lines [48]. AREG binds to and stimulates EGFR and hence epithelial cell growth. The up-regulation of these two genes in breast carcinomas of women with high estradiol levels may indicate a loss of regulation of growth associated with cancer development. This corresponds well with the interpretation of our findings in normal breast tissue referred to above and confirms the results indicated by the cell line studies by Frasor and colleagues [46]. These two genes are not differentially expressed between normal breast tissue and breast cancers. Both are, however, more highly expressed in ER+ than in ER- breast carcinomas.
Overall strengths and limitations of the study
The currently used method for detection of serum estradiol has limited sensitivity at the lower serum levels often seen in postmenopausal women. Despite the limited sample size, we found several biologically plausible associations. However, due to limited power, there may be other associations that we could not reveal. We included women with and without hormone therapy in the study. There may be differences in action between endogenous and exogenous estradiol that will not be revealed in this study.
One important strength of this study is the unique material, with normal human tissue in its natural milieu, not influenced by an adjacent tumor [49-51] or by the adipose-dominated biology that may bias studies of reduction mammoplasties.
However, due to limited power, there may be other associations that we could not reveal. We have included women with and without hormone therapy in the study. There may be differences in action between endogenous and exogenous estradiol that will not be revealed in this study.\nOne important strength of this study is the unique material with normal human tissue in its natural mileu, not influenced by an adjacent tumor [49-51] or by an adipose-dominated biology that may bias the study of reduction mammoplasties.", "We have identified genes differentially expressed according to serum estradiol in normal breast tissue of healthy women.\nThe genes up-regulated in normal breast tissue under influence of high serum estradiol are enriched for the gene ontology terms extracellular matrix and skeletal system development. Both ER isoforms α and β are expressed in the stromal cells [27]. The proliferating epithelial cells are not found to be ER α + [8] and most often negative to both ER isoforms [9]. In normal breast tissue, the estrogen-induced epithelial proliferation is, at least partly, caused by paracrine signals from ER+ fibroblasts [3]. The enrichment of gene ontology terms related to extracellular matrix may be linked to the effect of estradiol on the ER+ stromal cells.\nThree genes were independently associated with serum estradiol levels in normal breast tissue in a linear regression model after controlling for age, menopause and current hormone therapy. The two genes SCBG3A1 and TLN2 were positively associated with serum estradiol and PTGS1 (COX1) negatively.\nSCBG3A1 is also called high in normal 1 (HIN1) and is a secretoglobin transcribed in luminal, but not in myoepithelial breast cells and is secreted from the cell [28]. The protein is a tumor suppressor and inhibits cell growth, migration and invasion acting through the AKT-pathway. 
SCGB3A1 inhibits Akt phosphorylation, which reduces the function of Akt in promoting cell cycle progression (the transition from the G1 to the S phase) and in preventing apoptosis (through inhibition of the TGFβ pathway) [29] (Figure 1).

Simplified illustration of the cellular mechanisms of action of SCGB3A1. SCGB3A1 inhibits the phosphorylation of Akt, leading to reduced cell division and increased apoptosis. Molecules in red are increased/stimulated as a result of SCGB3A1 action, whereas molecules in blue are decreased/inhibited.

The SCGB3A1 promoter was found to be hypermethylated, with down-regulated expression of the gene, in breast carcinomas compared with normal breast tissue, where it is referred to as "high in normal 1" (HIN1) [30-32]. Interestingly, the gene is not methylated in BRCA-mutated and BRCA-like breast cancer [32]. Methylation of the gene is suggested to be an early event in non-BRCA-associated breast cancer [33].

We found SCGB3A1 down-regulated in basal-like cancers compared with other subtypes. At first glance, this may seem contradictory to the observation that the gene is not methylated in BRCA-like breast cancers. However, Krop and colleagues found that the gene is expressed in luminal epithelial cell lines, but not in myoepithelial cell lines. The reduced expression seen in basal-like cancer could be due to a myoepithelial phenotype arising from a myoepithelial cell of origin or from phenotypic changes acquired during carcinogenesis. This could also be linked to the lack of methylation in BRCA-associated breast cancers, which are often basal-like. An a priori low gene expression would make methylation unnecessary.
The increased Akt activity seen in basal-like cancers [34] is consistent with the low levels of SCGB3A1 expression observed in the basal-like cancers in this study, as low SCGB3A1 permits increased Akt phosphorylation and thereby increased Akt activity.

The up-regulation of SCGB3A1 in the breasts of women with high serum estradiol protects the breast epithelial cells against uncontrolled proliferation. Women with methylation of the SCGB3A1 promoter may be at risk of developing luminal, but not basal-like, breast cancer, and a reduction in serum estradiol levels may be protective for these women. Hormone therapy after menopause is associated with receptor-positive, but not receptor-negative, breast cancer [35]. Our results indicate that the same may be true for circulating estradiol levels in the absence of functional SCGB3A1, but this has not yet been shown empirically.

PTGS1 (prostaglandin-endoperoxide synthase 1) is synonymous with cyclooxygenase 1 (COX1) and codes for an enzyme important in prostaglandin production. Studies of normal human adipocytes have shown that the enzyme induces production of prostaglandin E2 (PGE2), which in turn increases the expression of aromatase (CYP19A1) [36]. Aromatase is the enzyme responsible for the last step in the conversion of androgens to estrogens in adipose tissue. Hence, the expression of PTGS1 may increase the local production of estradiol (Figure 2). In normal breast tissue, we observed that the expression of PTGS1 was lower in samples from women with higher levels of serum estradiol. This may be due to negative feedback: high systemic levels of estradiol make local production unnecessary, and PTGS1-induced aromatase production is abolished.

Schematic illustration of the mechanism of action of PTGS1. PTGS1 induces PGE2 production. PGE2 increases the expression of aromatase (CYP19A1), which in turn converts androgens to estrogens in adipose tissue.
17βHSD1 = 17β-hydroxysteroid dehydrogenase.

The up-regulation of PTGS1 in breast carcinomas compared with normal tissue is expected from current knowledge. Several studies have suggested that PTGS1 has a carcinogenic role in different epithelial cancers [37-41]. The gene has also previously been found over-expressed in tumors compared with tumor-adjacent normal tissue [42]. There has been a large amount of research on PTGS2 in relation to cancer, indicating a role in carcinogenesis. The probes for PTGS2 were filtered out due to too many missing values and were not included in the analysis. Hence, this study lacks information about the role of PTGS2 expression in relation to serum estradiol levels.

TLN2 is less known and less studied than talin 1 (TLN1). Both talins are believed to connect integrins to the actin cytoskeleton and are involved in integrin-associated cell adhesion [43,44]. TLN2 is located on chromosome 15q15-21, close to CYP19A1, which codes for aromatase. A study on aromatase-excess syndrome found that certain minor chromosomal rearrangements may cause cryptic transcription of the CYP19A1 gene through the TLN2 promoter [45]. We found that TLN2 was up-regulated in the breasts of healthy women with high levels of serum estradiol. This could indicate an activation of cell adhesion. TLN2 was the only gene significantly up-regulated according to serum estradiol in normal breast tissue of premenopausal women. The down-regulation observed in breast cancers compared with normal breast tissue indicates a loss of cell adhesion. The expression of the gene is lower in DCIS than in invasive carcinomas, which is contrary to expectation, but the data set is small.

A previous study reported on gene expression in normal human breast tissue transplanted into two groups of athymic mice treated with different levels of estradiol [17]. Neither SCGB3A1, TLN2 nor PTGS1 was significantly differentially expressed in their study.
They did, however, identify many of the genes found to be significantly differentially expressed according to serum estradiol in breast carcinomas in the current study, such as AREG, GREB1, TFF1 and TFF3. Going back to our normal samples, we see that several of their genes (including AREG, GREB1, TFF1, TFF3, GATA3 and two SERPIN genes) are differentially expressed in our normal breast tissue, but did not reach statistical significance (Additional file 4).

The differences observed between our study and that of Wilson and colleagues may be due to chance and to the presence of different residual confounding in the two studies. Wilson and colleagues studied the effects of estradiol treatment, which may act differently upon the breast tissue than endogenous estradiol. Normal human breast tissue transplanted into mice may react differently to varying levels of estradiol than it does in its natural milieu in humans. The genes that were significant in the Wilson study and differentially expressed but not significant in our study (e.g., AREG, GREB1, TFF1, TFF3 and GATA3) may be associated with serum estradiol levels in normal tissues as well as in tumor tissues, where we and others have observed significant associations. Our study is the first to identify the expression of SCGB3A1, TLN2 and PTGS1 in normal breast tissue as significantly associated with serum estradiol levels. These findings are biologically reasonable and may have been missed in previous studies due to lack of representative study material.

Serum estradiol associated with mammographic density in healthy women

Serum estradiol levels were independently associated with mammographic density after controlling for age, BMI and current use of hormone therapy, and the magnitude of the association was substantial (Table 3).
The high beta value in the regression equation implies a large magnitude of impact, which supports the hypothesis that high serum estradiol levels increase mammographic density with both statistical and biological significance.

Conclusions

In conclusion, we report a list of genes whose expression is associated with serum estradiol levels. The list includes genes with known relations to estradiol signaling, mammary proliferation and breast carcinogenesis. All these genes were expressed differently in tumor and normal breast tissue. The gene expression in tumors resembled that in normal breast tissue from women with low serum estradiol. Associations between serum estradiol and expression in breast carcinomas confirmed previous findings and revealed new associations. The comparison of results between normal breast tissue from healthy women and breast carcinomas indicates the difference in the biological impact of estradiol in normal and cancerous breast tissue.
[ "Background", "Methods", "Subjects", "Serum hormone analysis", "Gene expression analysis", "Mammographic density", "Statistical Analysis", "Results", "Gene expression in normal breast tissue according to serum estradiol levels", "Serum estradiol related to mammographic density in healthy women", "Gene expression in breast carcinomas according to serum estradiol levels", "Discussion", "Gene expression in normal breast tissue according to serum estradiol levels", "Serum estradiol associated with mammographic density in healthy women", "Gene expression in breast carcinomas according to serum estradiol levels", "Overall strengths and limitations of the study", "Conclusions", "Supplementary Material" ]
[ "Influence of estradiol on breast development [1], the menopausal transition [2] and on the breast epithelial cells [3] is widely studied. However, little is known about the effect of serum estradiol on gene expression in the normal breast tissue. For postmenopausal women, high serum estradiol levels are associated with increased risk of breast cancer [4-6]. The results are less conclusive for premenopausal women, but epidemiologic evidence indicates an increased risk from higher exposure to female hormones [7].\nIn estrogen receptor (ER) positive breast carcinomas, the proliferating tumor cells express ER while in normal breast tissue the proliferating epithelial cells are ER negative (ER-) [8,9]. Both normal and malignant breast epithelial cells are influenced by estradiol but through different mechanisms. In the lack of ER, normal breast epithelial cells receive proliferating paracrine signals from ER+ fibroblasts [3]. The importance of estrogen stimuli in the proliferation of ER+ breast cancer cells is evident from the effect of anti-estrogen treatment. Previously, several studies have identified genes whose expression is regulated by estradiol in breast cancer cell lines. Recently, a study reported an association between serum levels of estradiol and gene expression of trefoil factor 1 (TFF1), growth regulation by estrogen in breast cancer 1 (GREB1), PDZ domain containing 1 (PDZK1) and progesterone receptor (PGR) in ER+ breast carcinomas [10]. Functional studies on breast cancer cell lines have described that estradiol induces expression of c-fos [11] and that exposure to physiologic doses of estradiol is necessary for malignant transformation [12]. Intratumoral levels of estrogens have also been measured and were found correlated with tumor gene expression of estradiol-metabolizing enzymes and the estrogen receptor gene (ESR1) [13] and of proliferation markers [14]. 
A recent study did, however, conclude that intratumoral estradiol levels are mainly determined by the binding of estradiol to ER (associated with ESR1 expression); intratumoral estradiol levels were not found to be associated with local estradiol production [15]. Serum estradiol levels were found to be associated with local estradiol levels in normal breast tissue of breast cancer patients in a recent study [16]. This strengthens the hypothesis that serum estradiol levels influence gene expression in breast tissue.

Wilson and colleagues studied the effect of estradiol on normal human breast tissue transplanted into athymic nude mice. They identified a list of genes associated with estradiol treatment, including TFF1, AREG, SCGB2A2, GREB1 and GATA3. The normal tissues used in the xenografts were from breasts with benign breast disease and from mammoplasty reductions [17].

Studies describing associations between serum estradiol levels and gene expression in normal human breast tissue in its natural milieu are lacking. Knowledge about gene expression changes associated with high serum estradiol may reveal biological mechanisms underlying the increased risk of both elevated mammographic density and breast cancer seen in women with high estradiol levels. We have identified genes differentially expressed between normal breast tissue samples according to serum estradiol levels. Several genes identified in previous studies using normal breast tissue or breast carcinomas are confirmed, and additional genes were identified, making important contributions to previous knowledge.

Methods

Subjects

Two cohorts of women were recruited to the study from different breast diagnostic centers in Norway in the period 2002-2007, as described previously [18]. Exclusion criteria were pregnancy and use of anticoagulant therapy. The first cohort consisted of 120 women referred to the breast diagnostic centers who were cancer-free after further evaluation.
These will be referred to as healthy women. Breast biopsies were taken from an area with some mammographic density in the breast contralateral to any suspect lesion. The second cohort consisted of 66 women who were diagnosed with breast cancer. For this cohort, study biopsies were taken from the breast carcinoma after the diagnostic biopsies were obtained. Fourteen gauge needles were used for the biopsies and sampling was guided by ultrasound. The biopsies were either soaked in RNAlater (Ambion, Austin, TX) and sent to the Oslo University Hospital, Radiumhospitalet, before storage at -20°C or directly snap-frozen in liquid nitrogen and stored at -80°C. Based on serum hormone analyses (see below), 57 of the 120 healthy women included were postmenopausal, 43 were premenopausal, 10 were perimenopausal and serum samples were lacking for 10 women. Of the 66 breast cancer patients, 50 were estimated to be postmenopausal, 13 to be premenopausal and 3 to be perimenopausal. All women provided information about height, weight, parity, hormone therapy use and family history of breast cancer and provided a signed informed consent. The study was approved by the regional ethical committee (IRB approval no S-02036).\nThree additional datasets were used to explore the regulation of identified genes in breast cancer. One unpublished dataset from the Akershus University Hospital (AHUS), Norway, included normal breast tissue from 42 reduction mammoplasties and both tumor and normal adjacent tissue from 48 breast cancer patients (referred to as the AHUS dataset). Another unpublished dataset from University of North Carolina (UNC), USA, included breast cancer and adjacent normal breast tissue from 55 breast cancer patients (referred to as the UNC dataset). 
The third dataset is previously published and consists of biopsies from 31 pure ductal carcinomas in situ (DCIS), 36 pure invasive breast cancers and 42 tumours with mixed histology, both DCIS and invasive [19].

Serum hormone analysis

Serum hormone levels (LH, FSH, prolactin, estradiol, progesterone, SHBG and testosterone) were measured with electrochemiluminescence immunoassays (ECLIA) on a Roche Modular E instrument (Roche, Basel, Switzerland) by the Department of Medical Biochemistry, Oslo University Hospital, Rikshospitalet. Menopausal status was determined based on serum levels of hormones, age and hormone use. The criteria used can be found in Additional file 1. Biochemically perimenopausal women and women with uncertain menopausal status were excluded from analyses stratified on menopause. These hormone assays are tested through an external quality assessment scheme, Labquality, and the laboratory is accredited according to ISO-ES 17025. Serum estradiol values are given as picograms per milliliter (pg/ml) (pg/ml × 3.67 = pmol/l). The functional sensitivity of the estradiol assay was 10.9 pg/ml (40 pmol/l) with a total analytical sensitivity of < 5%.
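The unit conversion quoted above is a single multiplication; a minimal helper makes it explicit (Python is used purely for illustration here; the factor 3.67 and the 10.9 pg/ml functional sensitivity are taken directly from the text):

```python
# Serum estradiol unit conversion, as stated in the text: pg/ml x 3.67 = pmol/l.
PG_ML_TO_PMOL_L = 3.67

def estradiol_pg_ml_to_pmol_l(value_pg_ml):
    """Convert a serum estradiol concentration from pg/ml to pmol/l."""
    return value_pg_ml * PG_ML_TO_PMOL_L

# The assay's functional sensitivity of 10.9 pg/ml corresponds to ~40 pmol/l.
print(round(estradiol_pg_ml_to_pmol_l(10.9)))  # -> 40
```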
Gene expression analysis

RNA extraction and hybridization were performed as previously described [18]. Briefly, the RNeasy Mini Protocol (Qiagen, Valencia, CA) was used for RNA extraction. Forty samples (38 from healthy women) were excluded from further analysis due to low RNA amount (< 10 ng) or poor RNA quality as assessed by the curves given by the Agilent Bioanalyzer (Agilent Technologies, Palo Alto, CA). The analyses were performed before the RNA integrity number (RIN) was introduced as a measure of degradation, and samples with poor quality were excluded. The Agilent Low RNA Input Fluorescent Linear Amplification Kit protocol was used for amplification and labelling, with Cy5 (Amersham Biosciences, Little Chalfont, England) for sample RNA and Cy3 (Amersham Biosciences, Little Chalfont, England) for the reference (Universal Human total RNA; Stratagene, La Jolla, CA). Labelled RNA was hybridized onto Agilent Human Whole Genome Oligo Microarrays (G4110A) (Agilent Technologies, Santa Clara, CA). Three arrays were excluded due to poor quality, leaving data from 79 healthy women and 64 breast cancer patients.

The scanned data were processed in Feature Extraction 9.1.3.1 (Agilent Technologies, Santa Clara, CA). Locally weighted scatterplot smoothing (lowess) was used to normalize the data.
The normalized and log2-transformed data were stored in the Stanford Microarray Database (SMD) [20] and retrieved for further analysis. Gene filtering excluded probes with ≥ 20% missing values and probes with fewer than three arrays at least 1.6 standard deviations away from the mean. This reduced the dataset from 40791 probes to 9767 for the healthy women and to 10153 for the breast cancer patients. Missing values were imputed in R using the impute.knn method in the impute library [21]. All expression data are available in Gene Expression Omnibus (GEO) (GSE18672).
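The two filtering rules above can be sketched in a few lines. The authors worked in R; this dependency-free Python sketch is one plausible reading of the rule, taking the standard deviation per probe over its non-missing values (the function name and example matrix are illustrative, not from the paper):

```python
from math import isnan, sqrt

def filter_probes(rows, max_missing_frac=0.20, sd_cutoff=1.6, min_arrays=3):
    """For each probe (one row of expression values across arrays), return
    True if it survives both filters described in the text: probes with
    >= 20% missing values are excluded, as are probes with fewer than
    three arrays at least 1.6 standard deviations away from the probe mean."""
    keep = []
    for values in rows:
        present = [v for v in values if not isnan(v)]
        missing_frac = 1 - len(present) / len(values)
        if missing_frac >= max_missing_frac:
            keep.append(False)
            continue
        mean = sum(present) / len(present)
        sd = sqrt(sum((v - mean) ** 2 for v in present) / len(present))
        extreme = sum(1 for v in present if sd > 0 and abs(v - mean) >= sd_cutoff * sd)
        keep.append(extreme >= min_arrays)
    return keep
```

For example, a probe missing 4 of 12 values fails the missing-value filter, a nearly flat probe fails the variation filter, and a probe with three clearly deviating arrays passes both.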
Mammographic density

Mammographic density was estimated from digitized craniocaudal mammograms as previously described [18], using the University of Southern California Madena assessment method [22]. First, the total breast area was outlined using a computerized tool, and the area was represented as a number of pixels. One of the co-authors, GU, identified a region of interest that incorporated all areas of density, excluding those representing the pectoralis muscle and scanning artifacts. All densities above a certain threshold were tinted yellow, and the tinted pixels were converted to cm2, representing the absolute density; this measure was available for 108 of the 120 healthy women. Percent mammographic density is calculated as the absolute density divided by the total breast area and was available for 114 of the 120 healthy women. Test-retest reliability was 0.99 for absolute density.
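The percent-density definition above is simply a ratio of areas; a minimal sketch (Python for illustration only; the example values are hypothetical, not from the study):

```python
def percent_density(absolute_density_cm2, total_breast_area_cm2):
    """Percent mammographic density: the absolute (dense) area divided by
    the total breast area, expressed as a percentage, per the definition
    in the text."""
    return 100.0 * absolute_density_cm2 / total_breast_area_cm2

# Hypothetical example: 30 cm2 of dense tissue within a 120 cm2 breast area.
print(percent_density(30.0, 120.0))  # -> 25.0
```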
Statistical Analysis

Quantitative significance analysis of microarrays (SAM) [23,24] was used for the analysis of differentially expressed genes, using the samr library in R 2.12.0. Serum estradiol (nmol/L) was used as the dependent variable. Because the distribution of serum levels is skewed, the non-parametric Wilcoxon test statistic was used. Probes with an FDR < 50% were included for gene ontology analyses.

DAVID Bioinformatics Resources 2008 from the National Institute of Allergy and Infectious Diseases, NIH [25], was used for gene ontology analysis. Functional annotation clustering was applied, and the following annotation categories were selected: biological processes, molecular function, cellular compartment and KEGG pathways. We included annotation terms with an FDR-corrected p-value < 0.01 containing between 5 and 500 genes.

For multivariate analysis, linear regression models were fitted in R 2.12.0 to identify independent associations. Stepwise selection was performed to determine which variables had an independent contribution to the response variable. In the first step, all variables were included in the model. In each step, the variable with the highest p-value was rejected from the model before the model was refitted. This was repeated until all variables in the model had a p-value smaller than 0.05.

Linear regression was used to determine the independent association between serum estradiol and the differentially expressed genes in healthy women. Age, menopause and current hormone use were included in the model and forced to stay throughout the stepwise selection to correct for confounding by these factors. Linear regression was also fitted in two analyses with mammographic density in healthy women as the dependent variable.
In one set of analyses, serum hormone levels were included as the independent covariates; in the other, variables representing gene expression associated with serum estradiol were included as covariates. Epidemiologic covariates such as age, BMI, parity and use of hormone therapy were included in the mammographic density analyses and forced to stay throughout the stepwise selection to control for potential confounding by these factors.

Tumor subtypes were assigned using the intrinsic subtypes published by Sørlie et al. in 2001 [26]. The total gene set was filtered for the intrinsic genes. For each sample, the correlation between its expression profile over the intrinsic genes and each subtype's profile was calculated, and the sample was assigned to the subtype with which it had the highest correlation. Samples with all correlations < 0.1 were not assigned to any subtype. Two-sided t-tests were used to test for differences in the expression of single genes between two categories of a variable (e.g. pre- vs postmenopausal).
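The correlation-based subtype assignment described above can be sketched as follows. This is an illustrative Python version; the centroid vectors are made-up toy data, whereas the original analysis used the published intrinsic-gene profiles:

```python
# Nearest-centroid subtype assignment by correlation: each sample is
# assigned to the subtype profile with which its expression vector over
# the intrinsic genes correlates best; if all correlations are < 0.1 the
# sample is left unassigned. The centroids below are made-up toy data.
import numpy as np

def assign_subtype(sample, centroids, min_corr=0.1):
    """sample: 1-D expression vector; centroids: dict of name -> vector."""
    best_name, best_corr = None, -np.inf
    for name, centroid in centroids.items():
        corr = np.corrcoef(sample, centroid)[0, 1]
        if corr > best_corr:
            best_name, best_corr = name, corr
    return best_name if best_corr >= min_corr else None

centroids = {
    "lumA": np.array([1.0, 0.8, -0.5, -1.0]),
    "basal": np.array([-1.0, -0.9, 0.7, 1.0]),
}
print(assign_subtype(np.array([0.9, 0.7, -0.4, -0.9]), centroids))  # → lumA
print(assign_subtype(np.array([1.0, -1.0, -1.0, 1.0]), centroids))  # → None
```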
Two cohorts of women were recruited to the study from different breast diagnostic centers in Norway in the period 2002-2007, as described previously [18]. Exclusion criteria were pregnancy and use of anticoagulant therapy. The first cohort consisted of 120 women referred to the breast diagnostic centers who were cancer-free after further evaluation; these will be referred to as healthy women. Breast biopsies were taken from an area with some mammographic density in the breast contralateral to any suspect lesion. The second cohort consisted of 66 women who were diagnosed with breast cancer. For this cohort, study biopsies were taken from the breast carcinoma after the diagnostic biopsies were obtained. Fourteen-gauge needles were used for the biopsies, and sampling was guided by ultrasound. The biopsies were either soaked in RNAlater (Ambion, Austin, TX) and sent to Oslo University Hospital, Radiumhospitalet, before storage at -20°C, or directly snap-frozen in liquid nitrogen and stored at -80°C. Based on serum hormone analyses (see below), 57 of the 120 healthy women were postmenopausal, 43 were premenopausal, 10 were perimenopausal, and serum samples were lacking for 10 women. Of the 66 breast cancer patients, 50 were estimated to be postmenopausal, 13 premenopausal and 3 perimenopausal. All women provided information about height, weight, parity, hormone therapy use and family history of breast cancer, and provided signed informed consent. The study was approved by the regional ethical committee (IRB approval no. S-02036).

Three additional datasets were used to explore the regulation of identified genes in breast cancer.
One unpublished dataset from Akershus University Hospital (AHUS), Norway, included normal breast tissue from 42 reduction mammoplasties and both tumor and adjacent normal tissue from 48 breast cancer patients (referred to as the AHUS dataset). Another unpublished dataset from the University of North Carolina (UNC), USA, included breast cancer and adjacent normal breast tissue from 55 breast cancer patients (referred to as the UNC dataset). The third dataset is previously published and consists of biopsies from 31 pure ductal carcinomas in situ (DCIS), 36 pure invasive breast cancers and 42 tumours of mixed histology, both DCIS and invasive [19].

Serum hormone levels (LH, FSH, prolactin, estradiol, progesterone, SHBG and testosterone) were measured with electrochemiluminescence immunoassays (ECLIA) on a Roche Modular E instrument (Roche, Basel, Switzerland) by the Department of Medical Biochemistry, Oslo University Hospital, Rikshospitalet. Menopausal status was determined based on serum hormone levels, age and hormone use; the criteria used can be found in Additional file 1. Biochemically perimenopausal women and women with uncertain menopausal status were excluded from analyses stratified on menopause. These hormone assays are tested through an external quality assessment scheme, Labquality, and the laboratory is accredited according to ISO-ES 17025. Serum estradiol values are given as picograms per milliliter (pg/ml) (pg/ml × 3.67 = pmol/l). The functional sensitivity of the estradiol assay was 10.9 pg/ml (40 pmol/l), with a total analytical sensitivity of < 5%.

RNA extraction and hybridization were performed as previously described [18]. Briefly, the RNeasy Mini Protocol (Qiagen, Valencia, CA) was used for RNA extraction. Forty samples (38 from healthy women) were excluded from further analysis due to low RNA amount (< 10 ng) or poor RNA quality assessed by the curves given by the Agilent Bioanalyzer (Agilent Technologies, Palo Alto, CA).
The analyses were performed before the RNA integrity number (RIN) was introduced as a measure of degradation, and samples with poor quality were excluded. The Agilent Low RNA Input Fluorescent Linear Amplification Kit protocol was used for amplification and labelling, with Cy5 (Amersham Biosciences, Little Chalfont, England) for sample RNA and Cy3 (Amersham Biosciences, Little Chalfont, England) for the reference (Universal Human total RNA; Stratagene, La Jolla, CA). Labelled RNA was hybridized onto Agilent Human Whole Genome Oligo Microarrays (G4110A) (Agilent Technologies, Santa Clara, CA). Three arrays were excluded due to poor quality, leaving data from 79 healthy women and 64 breast cancer patients.

The scanned data were processed in Feature Extraction 9.1.3.1 (Agilent Technologies, Santa Clara, CA). Locally weighted scatterplot smoothing (lowess) was used to normalize the data. The normalized and log2-transformed data were stored in the Stanford Microarray Database (SMD) [20] and retrieved for further analysis.
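The probe filtering and missing-value handling applied to these data (probes with ≥ 20% missing values excluded, remaining gaps imputed from nearby probes) can be sketched as follows. The original analysis used impute.knn from the R impute package; this mean-of-k-nearest-rows version is a simplified illustration, not the actual algorithm:

```python
# A sketch of the missing-value handling described in the methods: probes
# with >= 20% missing values are dropped, and the remaining gaps are
# filled from the nearest fully observed probes. Simplified stand-in for
# R's impute.knn, for illustration only.
import numpy as np

def drop_sparse_probes(X, max_missing=0.2):
    """Keep probes (rows) with < max_missing fraction of NaN entries."""
    return X[np.isnan(X).mean(axis=1) < max_missing]

def knn_impute(X, k=3):
    """Fill NaNs in each probe from its k nearest fully observed probes."""
    X = X.copy()
    complete = [j for j in range(len(X)) if not np.isnan(X[j]).any()]
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        # nearest complete probes by Euclidean distance on observed columns
        nn = sorted((j for j in complete if j != i),
                    key=lambda j: np.sum((X[j, obs] - row[obs]) ** 2))[:k]
        row[miss] = X[nn][:, miss].mean(axis=0)
    return X

# Toy example: the probe with a missing value is imputed from its two
# closest complete neighbours.
X = np.array([[1.0, 2.0, np.nan],
              [1.0, 2.0, 3.0],
              [1.1, 2.1, 3.1],
              [5.0, 6.0, 7.0]])
print(knn_impute(X, k=2)[0, 2])  # ≈ 3.05
```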
Gene expression in normal breast tissue according to serum estradiol levels

Genes differentially expressed in normal breast tissue from healthy women according to serum estradiol levels with FDR = 0 are listed in Table 1. The gene ontology terms extracellular region and skeletal system development were significantly enriched among the top 80 up-regulated genes (FDR < 50%).
There were no significant gene ontology terms enriched among the down-regulated genes with FDR < 50% (n = 8), although response to steroid hormone stimulus was the most enriched term, with three observed genes (prostaglandin-endoperoxide synthase 1 (PTGS1), ESR1 and GATA3) (Additional file 2).

Table 1: Genes significantly differentially expressed in normal breast tissue of healthy women according to serum estradiol. A) Q-values and direction of regulation from quantitative SAM analysis of gene expression according to serum estradiol. B) Significance testing of differences in expression of the genes identified in A) in different sample cohorts.
1) Q-value from SAM of gene expression in normal breast tissue according to serum estradiol
2) s-est = serum estradiol
3) BC = breast cancer
4) P-value from two-sided t-test
5) ER+ BC = estrogen receptor positive breast cancer (n = 53)
6) ER- BC = estrogen receptor negative breast cancer (n = 8)

The genes differentially expressed in normal breast tissue according to serum estradiol with an FDR = 0 (from Table 1) were tested for differential expression between breast cancer tissue and normal breast tissue from healthy women. All six genes were differentially expressed between carcinomas and normal tissue. Interestingly, the expression in breast carcinomas was similar to that in normal tissue from women with lower levels of circulating estradiol, and opposite to that found in normal samples from women with higher levels of serum estradiol (Table 1). Comparing the expression of these genes in normal breast tissue with the expression in ER+ and ER- carcinomas, respectively, revealed similar results (Table 1).

In tumors, SCGB3A1 tended to be expressed at a lower level in basal-like tumors compared with all other tumors or with luminal A tumors, but this did not reach statistical significance (both p-values = 0.2).
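The single-gene comparisons reported here use two-sided t-tests (see Statistical Analysis). A minimal sketch, using Welch's unequal-variance statistic and a normal approximation to the t distribution for the p-value:

```python
# Two-sided Welch t-test, a minimal sketch of the single-gene comparisons
# described in the text (e.g. expression in one tumor subtype vs the rest).
# The p-value uses a normal approximation to the t distribution.
import math
import numpy as np

def welch_t_test(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / math.sqrt(va + vb)
    p = math.erfc(abs(t) / math.sqrt(2.0))  # two-sided, normal approximation
    return t, p

# Toy example: two clearly separated groups give a very small p-value.
t, p = welch_t_test([5.1, 4.9, 5.3, 5.0, 5.2], [3.0, 3.2, 2.9, 3.1, 3.0])
print(t, p)
```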
However, in two other datasets (AHUS and UNC), SCGB3A1 was expressed at significantly lower levels in basal-like tumors compared with all other subtypes (p = 0.04 and p = 0.003, respectively). There was no consistent significant difference in SCGB3A1 expression between ER+ and ER- tumors.

Of the six genes differentially expressed according to serum estradiol in normal breast tissue, three were differentially expressed between DCIS and early invasive breast carcinomas based on a previously published dataset [19] (Table 1). SCGB3A1 was down-regulated in invasive carcinomas compared with DCIS, whereas talin 2 (TLN2) and PTGS1 were up-regulated in invasive carcinomas compared with DCIS.

A linear regression was fitted with all differentially expressed genes as covariates, controlling for age, menopause and current hormone therapy use. After leave-one-out elimination of insignificant covariates, SCGB3A1, TLN2 and PTGS1 remained significant (Table 2).

Table 2: Genes independently associated with serum estradiol in a linear regression model. All genes differentially expressed according to serum estradiol (Table 1) were included. Values shown are corrected for age, menopause and current hormone therapy. After leave-one-out stepwise selection the following covariates remained:
1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.
2) Values for the non-significant genes are from the last model before they were excluded.

Serum estradiol related to mammographic density in healthy women

Regression analysis in postmenopausal women showed that serum estradiol was independently associated with both absolute and percent mammographic density when controlling for age, BMI and current use of hormone therapy (Table 3).
None of the genes differentially expressed in normal breast tissue according to serum estradiol levels were independently associated with mammographic density (data not shown).

Table 3: Serum hormones independently associated with mammographic density in linear regression models. Values shown are corrected for age, HT and BMI. Through leave-one-out stepwise elimination of covariates, prolactin, SHBG and testosterone were excluded and the following variables remained.
1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.

Gene expression in breast carcinomas according to serum estradiol levels

In breast carcinomas, quantitative SAM revealed two genes, AREG and GREB1, as differentially expressed according to serum estradiol levels with FDR = 0 (Table 4). Both genes were up-regulated in samples from women with high serum estradiol (estradiol was used as a continuous response variable in the analysis). Among the 16 probes up-regulated in samples from women with high serum estradiol were three probes for TFF3 and one for TFF1, although these did not reach statistical significance (Table 4).
No genes were significantly down-regulated according to serum estradiol. In the ER+ samples (n = 53), we also found AREG and GREB1 up-regulated in samples from women with high serum estradiol (FDR = 0), but the TFF genes were not up-regulated. Among the ER- samples (n = 8) there was very little variation in serum estradiol levels, so a search for genes differentially expressed according to serum estradiol was not feasible.

Table 4: Genes significantly differentially expressed according to serum estradiol levels in breast carcinomas. A) Quantitative SAM analysis for differential expression according to serum estradiol, with q-values and direction of regulation indicated. B) Significance testing of differences in expression of the genes identified in A) in different sample cohorts.
1) Q-value from SAM of gene expression according to serum estradiol
2) Gene expression in samples from patients with high compared with low serum estradiol
3) ER+ BC = estrogen receptor positive breast cancer (n = 53)
4) BC = breast cancer
5) P-value from two-sided t-test
6) ER- BC = estrogen receptor negative breast cancer (n = 8)
7) Two different probes for TFF3 are used

Looking at the expression of these genes in normal breast tissue from healthy women according to serum estradiol, both AREG and GREB1 were up-regulated in samples from women with high estradiol levels, without reaching significance. Comparing expression in breast carcinomas and normal breast tissue, neither AREG nor GREB1 was differentially expressed between the two. All probes for the TFF genes were, however, significantly down-regulated in normal breast tissue compared with breast carcinomas.
All these genes (AREG, GREB1, TFF1 and TFF3) were up-regulated in ER+ carcinomas compared to ER- carcinomas (AREG was only borderline significant) (Additional file 3).\nIn breast carcinomas, quantitative SAM revealed two genes, AREG and GREB1, as differentially expressed according to serum estradiol levels with FDR = 0 (Table 4). Both genes were up-regulated in samples from women with high serum estradiol (estradiol was used as a continuous response variable in the analysis). Of 16 probes up-regulated in samples from women with high serum estradiol, there were three probes for TFF3 and one for TFF1, although these did not reach statistical significance (Table 4). No genes were significantly down-regulated according to serum estradiol. In ER+ samples (n = 53), we also found AREG and GREB1 up-regulated in samples from women with high serum estradiol (FDR = 0), but the TFF-genes were not up-regulated. Among the ER- samples (n = 8) there was very little variation in serum estradiol levels and a search for genes differentially expressed according to serum estradiol is not feasible.\nGenes significantly differentially expressed according to serum estradiol levels in breast carcinomas.\nA) Quantitativ SAM analysis for differential expression according to serum estradiol with q-values and direction of regulation indicated. 
B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts.\n1) Q-value from SAM of gene expression according to serum estradiol\n2) Gene expression in samples from patients with high compared with low serum estradiol\n3) ER+ BC = Estrogen receptor positive breast cancer (n = 53)\n4) BC = breast cancer\n5) P-value from two-sided t-test\n6) ER- BC = Estrogen receptor negative breast cancer (n = 8)\n7) Two different probes for TFF3 are used\nLooking at the expression of these genes in normal breast tissue from healthy women according to serum estradiol, both AREG and GREB1 are up-regulated in samples from women with high estradiol levels without reaching significance. Comparing the expression of these genes in breast carcinomas and normal breast tissue, neither AREG nor GREB1 are differentially expressed between normal breast tissue and breast carcinomas. All the probes for TFF-genes are, however, significantly down-regulated in normal breast tissue compared with breast carcinomas. All these genes (AREG, GREB1, TFF1 and TFF3) were up-regulated in ER+ carcinomas compared to ER- carcinomas (AREG was only borderline significant) (Additional file 3).", "Genes differentially expressed in normal breast tissue from healthy women according to serum estradiol levels with FDR = 0 are listed in Table 1. The gene ontology terms extracellular region and skeletal system development were significantly enriched in the top 80 up-regulated genes (FDR < 50%). 
There were no significant gene ontology terms enriched in the down-regulated genes with FDR < 50 (n = 8), although response to steroid hormone stimulus was the most enriched term with three observed genes (prostaglandin-endoperoxide synthase 1 (PTGS1), ESR1 and GATA3)(Additional file 2).\nGenes significantly differentially expressed in normal breast tissue of healthy women according to serum estradiol.\nA) Q-values and regulation of gene expression from quantitative SAM analysis of gene expression according to serum estradiol. B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts.\n1) Q-value from SAM of gene expression in normal breast tissue according to serum estradiol\n2) s-est = serum estradiol\n3) BC = breast cancer\n4) P-value from two-sided t-test\n5) ER+ BC = estrogen recepor positive breast cancer (n = 53)\n6) ER- BC = estrogen recepor negative breast cancer (n = 8)\nThe genes differentially expressed in normal breast tissue according to serum estradiol with an FDR = 0 (from Table 1) were tested for differential expression between breast cancer tissue and normal breast tissue from healthy women. All six genes were differentially expressed between carcinomas and normal tissue. Interestingly, the expression in breast carcinomas was similar to that in normal tissue from women with lower levels of circulating estradiol and opposite to that found in normal samples from women with higher levels of serum estradiol (Table 1). Comparing the expression of these genes in normal breast tissue with the expression in ER+ and ER- carcinomas respectively revealed similar results (Table 1).\nIn tumors, SCGB3A1 tended to be expressed at a lower level in basal-like tumors compared with all other tumors or compared with luminal A tumors, but this did not reach statistical significance (both p-values = 0.2). 
However in two other datasets (AHUS and UNC), SCGB3A1 was expressed at significantly lower levels in basal-like tumors compared with all other subtypes (p = 0.04 and 0.003 respectively). There was no consistent significant difference in SCGB3A1 expression in ER+ and ER- tumors.\nOf the six genes differentially expressed according to serum estradiol in normal breast tissue, three were differentially expressed between DCIS and early invasive breast carcinomas based on a previously published dataset [19](Table 1). SCGB3A1 was down-regulated in invasive compared with DCIS, whereas talin 2 (TLN2) and PTGS1 were up-regulated in invasive compared with DCIS.\nA linear regression was fitted with all differentially expressed genes as covariates and controlling for age, menopause and current hormone therapy use. After leave-one-out elimination of insignificant covariates, SCGB3A1, TLN2 and PTGS1 were still significant (Table 2).\nGenes independently associated with serum estradiol in a linear regression model.\nAll genes differentially expressed according to serum estradiol (Table 1) were included. Values shown are corrected for age, menopause and current hormone therapy. After leave-one-out stepwise selection the following covariates remained:\n1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.\n2) Values for the non-significant genes are from the last model before they were excluded.", "Regression analysis in postmenopausal women showed that serum estradiol was independently associated with both absolute and percent mammographic density when controlling for age, BMI and current use of hormone therapy (Table 3). None of the genes differentially expressed in normal breast tissue according to serum estradiol levels were independently associated with mammographic density (data not shown).\nSerum hormones independently associated with mammographic density in linear regression models.\nValues shown are corrected for age, HT and BMI. 
Through leave-one-out stepwise elimination of covariates, prolactin, SHBG and testosterone were excluded and the following variables remained.\n1) Estimate denotes the beta-value corresponding to each covariate in the regression equation.", "In breast carcinomas, quantitative SAM revealed two genes, AREG and GREB1, as differentially expressed according to serum estradiol levels with FDR = 0 (Table 4). Both genes were up-regulated in samples from women with high serum estradiol (estradiol was used as a continuous response variable in the analysis). Of 16 probes up-regulated in samples from women with high serum estradiol, there were three probes for TFF3 and one for TFF1, although these did not reach statistical significance (Table 4). No genes were significantly down-regulated according to serum estradiol. In ER+ samples (n = 53), we also found AREG and GREB1 up-regulated in samples from women with high serum estradiol (FDR = 0), but the TFF-genes were not up-regulated. Among the ER- samples (n = 8) there was very little variation in serum estradiol levels and a search for genes differentially expressed according to serum estradiol is not feasible.\nGenes significantly differentially expressed according to serum estradiol levels in breast carcinomas.\nA) Quantitativ SAM analysis for differential expression according to serum estradiol with q-values and direction of regulation indicated. 
B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts.
1) Q-value from SAM of gene expression according to serum estradiol
2) Gene expression in samples from patients with high compared with low serum estradiol
3) ER+ BC = Estrogen receptor positive breast cancer (n = 53)
4) BC = breast cancer
5) P-value from two-sided t-test
6) ER- BC = Estrogen receptor negative breast cancer (n = 8)
7) Two different probes for TFF3 are used

Looking at the expression of these genes in normal breast tissue from healthy women according to serum estradiol, both AREG and GREB1 were up-regulated in samples from women with high estradiol levels, without reaching significance. Comparing the expression of these genes in breast carcinomas and normal breast tissue, neither AREG nor GREB1 was differentially expressed between the two. All probes for the TFF genes were, however, significantly down-regulated in normal breast tissue compared with breast carcinomas. All these genes (AREG, GREB1, TFF1 and TFF3) were up-regulated in ER+ compared with ER- carcinomas (AREG only borderline significantly) (Additional file 3).

Gene expression in normal breast tissue according to serum estradiol levels

We have identified genes differentially expressed according to serum estradiol in normal breast tissue of healthy women.

The genes up-regulated in normal breast tissue under the influence of high serum estradiol are enriched for the gene ontology terms extracellular matrix and skeletal system development. Both ER isoforms, α and β, are expressed in stromal cells [27]. The proliferating epithelial cells have not been found to be ERα+ [8] and are most often negative for both ER isoforms [9]. In normal breast tissue, estrogen-induced epithelial proliferation is, at least partly, caused by paracrine signals from ER+ fibroblasts [3].
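The covariate-adjusted models reported above (Tables 2 and 3) rest on ordinary least squares with backward ("leave-one-out") elimination of insignificant covariates. A minimal sketch of that selection loop on synthetic data follows; the variable names, effect sizes and significance threshold are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def backward_eliminate(X, y, names, t_thresh=1.96):
    """Fit y ~ intercept + X by OLS, then repeatedly drop the covariate
    with the smallest |t| until every remaining covariate has
    |t| >= t_thresh (roughly p < 0.05 under a normal approximation)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    keep = list(range(X.shape[1]))
    while keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        dof = len(y) - Xk.shape[1]
        cov = (resid @ resid / dof) * np.linalg.inv(Xk.T @ Xk)
        t = beta[1:] / np.sqrt(np.diag(cov)[1:])   # t-statistics, intercept skipped
        weakest = int(np.argmin(np.abs(t)))
        if abs(t[weakest]) >= t_thresh:            # everything significant: stop
            break
        keep.pop(weakest)                          # drop the weakest covariate
    return [names[i] for i in keep]

# Illustrative synthetic data: "estradiol" depends on gene1 and age, not gene2.
rng = np.random.default_rng(0)
n = 80
gene1, gene2, age = rng.normal(size=(3, n))
estradiol = 2.0 * gene1 + 0.5 * age + rng.normal(scale=0.5, size=n)
retained = backward_eliminate(np.column_stack([gene1, gene2, age]),
                              estradiol, ["gene1", "gene2", "age"])
# gene1 and age survive the elimination; gene2 is usually dropped
```

In the study itself, age, menopause and hormone-therapy covariates were kept in the model as adjustments while the gene covariates competed for retention.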
The enrichment of gene ontology terms related to extracellular matrix may be linked to the effect of estradiol on the ER+ stromal cells.

Three genes were independently associated with serum estradiol levels in normal breast tissue in a linear regression model after controlling for age, menopause and current hormone therapy. SCGB3A1 and TLN2 were positively associated with serum estradiol, and PTGS1 (COX1) negatively.

SCGB3A1, also called high in normal 1 (HIN1), is a secretoglobin transcribed in luminal, but not in myoepithelial, breast cells and is secreted from the cell [28]. The protein is a tumor suppressor and inhibits cell growth, migration and invasion through the AKT pathway. SCGB3A1 inhibits Akt phosphorylation, which reduces the function of Akt in promoting cell cycle progression (the G1-to-S transition) and in preventing apoptosis (through inhibition of the TGFβ pathway) [29] (Figure 1).

Simplified illustration of the cellular mechanisms of action of SCGB3A1. SCGB3A1 inhibits the phosphorylation of Akt, leading to reduced cell division and increased apoptosis. Molecules in red are increased/stimulated as a result of SCGB3A1 action, whereas molecules in blue are decreased/inhibited.

The SCGB3A1 promoter has been found to be hypermethylated, with down-regulated expression of the gene, in breast carcinomas compared with normal breast tissue, hence the name "high in normal 1" (HIN1) [30-32]. Interestingly, the gene is not methylated in BRCA-mutated and BRCA-like breast cancer [32]. Methylation of the gene is suggested to be an early event in non-BRCA-associated breast cancer [33].

We found SCGB3A1 down-regulated in basal-like cancers compared with other subtypes. At first glance, this may seem contradictory to the observation that the gene is not methylated in BRCA-like breast cancers.
However, Krop and colleagues found that the gene is expressed in luminal epithelial cell lines, but not in myoepithelial cell lines. The reduced expression seen in basal-like cancer could be due to a myoepithelial phenotype, arising either from a myoepithelial cell of origin or from phenotypic changes acquired during carcinogenesis. This could also be linked to the lack of methylation in BRCA-associated breast cancers, which are often basal-like: an a priori low gene expression would make methylation unnecessary. The increased Akt activity seen in basal-like cancers [34] is consistent with the low levels of SCGB3A1 expression observed in the basal-like cancers in this study, which would lead to increased Akt phosphorylation and thereby Akt activity.

The up-regulation of SCGB3A1 in the breasts of women with high serum estradiol protects the breast epithelial cells against uncontrolled proliferation. Women with methylation of the SCGB3A1 promoter may be at risk of developing luminal, but not basal-like, breast cancer, and a reduction in serum estradiol levels may be protective for these women. Hormone therapy after menopause is associated with receptor-positive, but not receptor-negative, breast cancer [35]. Our results indicate that the same may be true for circulating estradiol levels in the absence of functional SCGB3A1, but this has not yet been shown empirically.

PTGS1 (prostaglandin-endoperoxide synthase 1) is synonymous with cyclooxygenase 1 (COX1) and codes for an enzyme important in prostaglandin production. Studies of normal human adipocytes have shown that the enzyme induces production of prostaglandin E2 (PGE2), which in turn increases the expression of aromatase (CYP19A1) [36]. Aromatase is the enzyme responsible for the last step in the conversion of androgens to estrogens in adipose tissue. Hence, the expression of PTGS1 may increase the local production of estradiol (Figure 2).
In normal breast tissue, we observed that the expression of PTGS1 was lower in samples from women with higher levels of serum estradiol. This may be due to negative feedback: high systemic levels of estradiol make local production unnecessary, and PTGS1-induced aromatase production is abolished.

Schematic illustration of the mechanism of action of PTGS1. PTGS1 induces PGE2 production. PGE2 increases the expression of aromatase (CYP19A1), which in turn converts androgens to estrogens in adipose tissue. 17βHSD1 = 17β-hydroxysteroid dehydrogenase.

The up-regulation of PTGS1 in breast carcinomas compared with normal tissue is expected from current knowledge. Several studies have suggested that PTGS1 has a carcinogenic role in different epithelial cancers [37-41]. The gene has also previously been found over-expressed in tumors compared with tumor-adjacent normal tissue [42]. There has been a large amount of research on PTGS2 in relation to cancer, indicating a role in carcinogenesis, but the probes for PTGS2 were filtered out due to too many missing values and were not included in the analysis. Hence, this study lacks information about PTGS2 expression in relation to serum estradiol levels.

TLN2 is less known and less studied than talin 1 (TLN1). Both talins are believed to connect integrins to the actin cytoskeleton and are involved in integrin-associated cell adhesion [43,44]. TLN2 is located on chromosome 15q15-21, close to CYP19A1, which codes for aromatase. A study on aromatase-excess syndrome found that certain minor chromosomal rearrangements may cause cryptic transcription of the CYP19A1 gene through the TLN2 promoter [45]. We found that TLN2 was up-regulated in the breasts of healthy women with high levels of serum estradiol.
This could indicate an activation of cell adhesion. TLN2 was the only gene significantly up-regulated according to serum estradiol in normal breast tissue of premenopausal women. The down-regulation observed in breast cancers compared with normal breast tissue indicates a loss of cell adhesion. The expression of the gene is lower in DCIS than in invasive carcinomas, which is contrary to expectation, but the dataset is small.

A previous study reported on gene expression in normal human breast tissue transplanted into two groups of athymic mice treated with different levels of estradiol [17]. Neither SCGB3A1, TLN2 nor PTGS1 was significantly differentially expressed in that study. It did, however, identify many of the genes found to be significantly differentially expressed according to serum estradiol in breast carcinomas in the current study, such as AREG, GREB1, TFF1 and TFF3. Going back to our normal samples, we see that several of their genes (including AREG, GREB1, TFF1, TFF3, GATA3 and two SERPIN genes) are differentially expressed in our normal breast tissue, but did not reach statistical significance (Additional file 4).

The differences observed between our study and that of Wilson and colleagues may be due to chance and to different residual confounding in the two studies. Wilson and colleagues studied the effects of estradiol treatment, which may act differently upon the breast tissue than endogenous estradiol. Normal human breast tissue transplanted into mice may react differently to varying levels of estradiol than it does in its natural milieu in humans. The genes that were significant in the Wilson study and differentially expressed but not significant in our study (e.g. AREG, GREB1, TFF1, TFF3 and GATA3) may be associated with serum estradiol levels in normal tissues as well as in tumor tissues, where we and others have observed significant associations. Our study is the first to identify the expression of SCGB3A1, TLN2 and PTGS1 in normal breast tissue as significantly associated with serum estradiol levels.
These findings are biologically reasonable and may have been missed in previous studies due to lack of representative study material.

Serum estradiol associated with mammographic density in healthy women

Serum estradiol levels were independently associated with mammographic density after controlling for age, BMI and current use of hormone therapy, and the magnitude of the association was substantial (Table 3). The high beta-value in the regression equation implies a large magnitude of impact, which supports the hypothesis that high serum estradiol levels increase mammographic density with both statistical and biological significance.

Gene expression in breast carcinomas according to serum estradiol levels

The expression of genes found to be differentially expressed in normal breast tissue according to serum estradiol levels was examined in breast carcinomas.
We found that the direction of expression was opposite to that in normal breast tissue from women with high serum estradiol (Table 1). This may be due to a lack of negative feedback on growth regulation in breast tumors. In breast cancer cell lines, estrogen induced up-regulation of positive proliferation regulators and down-regulation of anti-proliferative and pro-apoptotic genes, resulting in a net positive proliferative drive [46]. This is in line with our findings. In normal breast tissue from women with high serum estradiol, SCGB3A1, which regulates proliferation negatively, and TLN2, which prevents invasion, are up-regulated, while PTGS1, which induces local estradiol production and thereby estradiol-stimulated proliferation, is down-regulated. All three genes are expressed to maintain control and regulation of the epithelial cells. In breast cancers, the expression of these genes favors growth, migration and proliferation. This supports the hypothesis that high serum estradiol increases the proliferative pressure in normal breasts, leading to an activation of mechanisms counteracting this pressure. In carcinomas, growth regulation is lost, and these hormone-related growth-promoting mechanisms are turned on.

Interestingly, both AREG and GREB1 were up-regulated in ER+ breast carcinomas of younger (< 45 years) compared with older (> 70 years) women in a previous publication [47]. The increased expression of these genes was proposed as a mechanism responsible for the observed increase in proliferation in the tumors of younger women [47].

The genes differentially expressed according to serum estradiol levels in tumors confirmed many of the findings from the Dunbier study of ER+ tumors [10]. The previously published list of genes positively correlated with serum estradiol included the TFF genes and GREB1. These genes were also found significant in the analysis of all tumors in this study, although TFF1 and TFF3 did not reach statistical significance (Table 4).
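The quantitative SAM screen behind these tumor results treats serum estradiol as a continuous response and controls the false discovery rate by permutation. As a rough stand-in (not SAM itself), each gene can be correlated with estradiol and the resulting p-values passed through a Benjamini-Hochberg step-up; the sketch below uses toy data, and all names, sizes and thresholds are illustrative assumptions:

```python
import numpy as np
from math import erfc, sqrt

def genewise_fdr(expr, estradiol, alpha=0.05):
    """Correlate each gene (row of expr) with a continuous response,
    approximate two-sided p-values, and flag genes passing a
    Benjamini-Hochberg FDR threshold. A simplified stand-in for
    quantitative SAM, which estimates the FDR by permutation."""
    expr = np.asarray(expr, float)
    x = np.asarray(estradiol, float)
    n = len(x)
    xc = x - x.mean()
    ec = expr - expr.mean(axis=1, keepdims=True)
    r = ec @ xc / (np.linalg.norm(ec, axis=1) * np.linalg.norm(xc))
    t = r * np.sqrt((n - 2) / (1.0 - r ** 2))            # correlation t-statistic
    p = np.array([erfc(abs(ti) / sqrt(2)) for ti in t])  # normal approximation
    # Benjamini-Hochberg step-up: largest k with p_(k) <= alpha * k / m
    order = np.argsort(p)
    m = len(p)
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p[idx] <= alpha * rank / m:
            k_max = rank
    passed = np.zeros(m, bool)
    passed[order[:k_max]] = True
    return passed

# Toy data: gene 0 tracks estradiol; genes 1 and 2 are pure noise.
rng = np.random.default_rng(1)
estradiol = rng.normal(size=60)
expr = rng.normal(size=(3, 60))
expr[0] += 1.5 * estradiol
hits = genewise_fdr(expr, estradiol)   # hits[0] is True
```

SAM's permutation-based FDR is more robust with small samples and correlated genes than this parametric shortcut, which is why it was the method of choice in the study.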
In addition to the previously published genes, we identified the gene AREG, an EGFR-ligand essential for breast development, as up-regulated in tumors from patients with high serum estradiol.\nGREB1 is previously found to be an important estrogen-induced stimulator of growth in ER+ breast cancer cell lines [48]. AREG binds to and stimulates EGFR and hence epithelial cell growth. The up-regulation of these two genes in breast carcinomas of women with high estradiol levels may indicate a loss of regulation of growth associated with cancer development. This corresponds well with the interpretation of our findings in normal breast tissue referred above and confirms the results indicated by the cell line studies by Frasor and colleagues [46]. These two genes are not differentially expressed between normal breast tissue and breast cancers. Both are, however, higher expressed in ER+ than ER- breast carcinomas.\nThe expression of genes found to be differentially expressed in normal breast tissue according to serum estradiol levels was examined in breast carcinomas. We found that the expression was all opposite of that in normal breast tissue from women with high serum estrogen (Table 1). This may be due to lack of negative feedback of growth regulation in breast tumors. In breast cancer cell lines, estrogen induced up-regulation of positive proliferation regulators and down-regulation of anti-proliferative and pro-apoptotic genes, resulting in a net positive proliferative drive [46]. This is in line with our findings. In normal breast tissue from women with high serum estradiol, SCGB3A1, which regulate proliferation negatively, and TLN2, which prevents invasion, are up-regulated. PTGS1, which induce local production of estradiol-stimulated proliferation, is down-regulated. All three genes are expressed to maintain control and regulation of the epithelial cells. In breast cancers the expression of these genes favors growth, migration and proliferation. 
This supports the hypothesis that high serum estradiol increases the proliferative pressure in normal breasts, which leads to an activation of mechanisms counter-acting this proliferative pressure. In carcinomas, growth regulation is lost, and these hormone-related growth-promoting mechanisms are turned on.\nInterestingly, both AREG and GREB1 were up-regulated in ER+ breast carcinomas of younger (< 45 years) compared with older (> 70 years) women in a previous publication [47]. The increased expression of these genes was proposed as a mechanism responsible for the observed increase in proliferation seen in the tumors of younger compared with older women [47].\nThe genes differentially expressed according to serum estradiol levels in tumors confirmed many of the findings from the Dunbier-study of ER+ tumors [10]. The previously published list of genes positively correlated with serum estradiol included TFF-genes and GREB1. These genes were also found significant in the analysis of all tumors in this study, although TFF1 and TFF3 did not reach statistical significance (Table 4). In addition to the previously published genes, we identified the gene AREG, an EGFR-ligand essential for breast development, as up-regulated in tumors from patients with high serum estradiol.\nGREB1 is previously found to be an important estrogen-induced stimulator of growth in ER+ breast cancer cell lines [48]. AREG binds to and stimulates EGFR and hence epithelial cell growth. The up-regulation of these two genes in breast carcinomas of women with high estradiol levels may indicate a loss of regulation of growth associated with cancer development. This corresponds well with the interpretation of our findings in normal breast tissue referred above and confirms the results indicated by the cell line studies by Frasor and colleagues [46]. These two genes are not differentially expressed between normal breast tissue and breast cancers. 
Both are, however, higher expressed in ER+ than ER- breast carcinomas.\n Overall strengths and limitations of the study The currently used method for detection of serum estradiol has a limited sensitivity in the lower serum levels often seen in postmenopausal women. Despite the limited sample size we found several biologically plausible associations. However, due to limited power, there may be other associations that we could not reveal. We have included women with and without hormone therapy in the study. There may be differences in action between endogenous and exogenous estradiol that will not be revealed in this study.\nOne important strength of this study is the unique material with normal human tissue in its natural mileu, not influenced by an adjacent tumor [49-51] or by an adipose-dominated biology that may bias the study of reduction mammoplasties.\nThe currently used method for detection of serum estradiol has a limited sensitivity in the lower serum levels often seen in postmenopausal women. Despite the limited sample size we found several biologically plausible associations. However, due to limited power, there may be other associations that we could not reveal. We have included women with and without hormone therapy in the study. There may be differences in action between endogenous and exogenous estradiol that will not be revealed in this study.\nOne important strength of this study is the unique material with normal human tissue in its natural mileu, not influenced by an adjacent tumor [49-51] or by an adipose-dominated biology that may bias the study of reduction mammoplasties.", "We have identified genes differentially expressed according to serum estradiol in normal breast tissue of healthy women.\nThe genes up-regulated in normal breast tissue under influence of high serum estradiol are enriched for the gene ontology terms extracellular matrix and skeletal system development. Both ER isoforms α and β are expressed in the stromal cells [27]. 
The proliferating epithelial cells are not found to be ER α + [8] and most often negative to both ER isoforms [9]. In normal breast tissue, the estrogen-induced epithelial proliferation is, at least partly, caused by paracrine signals from ER+ fibroblasts [3]. The enrichment of gene ontology terms related to extracellular matrix may be linked to the effect of estradiol on the ER+ stromal cells.\nThree genes were independently associated with serum estradiol levels in normal breast tissue in a linear regression model after controlling for age, menopause and current hormone therapy. The two genes SCBG3A1 and TLN2 were positively associated with serum estradiol and PTGS1 (COX1) negatively.\nSCBG3A1 is also called high in normal 1 (HIN1) and is a secretoglobin transcribed in luminal, but not in myoepithelial breast cells and is secreted from the cell [28]. The protein is a tumor suppressor and inhibits cell growth, migration and invasion acting through the AKT-pathway. SCBG3A1 inhibits Akt-phosphorylation, which reduces the Akt-function in promoting cell cycle progression (transition from the G1 to the S-phase) and preventing apoptosis (through inhibition of the TGFβ-pathway) [29] (Figure 1).\nSimplified illustration of the cellular mechanisms of action of SCGB3A1. SCGB3A1 inhibits the phosphorylation of Akt leading to reduced cell cycle division and increased apoptosis. Molecules in red are increased/stimulated as result of SCGB3A1-action, whereas molecules in blue are decreased/inhibited.\nThe SCBG3A1 promoter was found to be hypermethylated with down-regulated expression of the gene in breast carcinomas compared with normal breast tissue, where it is referred to as \"high in normal 1\" (HIN1)[30-32]. Interestingly, the gene is not methylated in BRCA-mutated and BRCA-like breast cancer [32]. 
Methylation of the gene is suggested to be an early event in non-BRCA-associated breast cancer [33].\nWe found SCBG3A1 down-regulated in basal-like cancers compared to other subtypes. At first glance, this may seem contradictory to the observation that the gene is not methylated in BRCA-like breast cancers. However, Krop and colleagues found that the gene is expressed in luminal epithelial cell lines, but not in myoepithelial cell lines. The reduced expression seen in basal-like cancer could be due to a myoepithelial phenotype arising from a myoepithelial cell of origin or from phenotypic changes acquired during carcinogensis. This could also be linked to the lack of methylation in BRCA-associated breast cancers, which are often basal-like. An a priori low gene expression would make methylation unnecessary. The increased Akt-activity seen in basal-like cancers [34] is consistent with the low levels of SCBG3A1 expression observed in the basal-like cancers in this study leading to increased Akt-phosphorylation and thereby Akt-activity.\nThe up-regulation of SCGB3A1 in the breasts of women with high serum estradiol protects the breast epithelial cells against uncontrolled proliferation. Women with methylation of the SCGB3A1-promoter may be at risk of developing luminal, but not basal-like, breast cancer and a reduction in serum estradiol levels may be protective for these women. Hormone therapy after menopause is associated with receptor positive, but not receptor negative, breast cancer [35]. Our results indicate that the same may be true for circulating estradiol levels in absence of functional SCGB3A1, but this is not yet shown empirically.\nPTGS1 (prostaglandin-endoperoxide synthase 1) is synonymous with cyclooxygenase 1 (COX1) and codes for an enzyme important in prostaglandin production. Studies of normal human adiopocytes have shown that the enzyme induces production of prostaglandin E2 (PGE2) which in turn increases the expression of aromatase (CYP19A1) [36]. 
Aromatase is the enzyme responsible for the last step in the conversion of androgens to estrogens in adipose tissue. Hence, the expression of PTGS1 may increase the local production of estradiol (Figure 2). In normal breast tissue, we observed that the expression of PTGS1 was lower in samples from women with higher levels of serum estradiol. This may be due to negative feedback. High systemic levels of estradiol make local production unnecessary and PTGS1-induced aromatase production is abolished.\nSchematic illustration of mechanism of action of PTGS1. PTGS1 induces PGE2-production. PGE2 increases the expression of aromatase (CYP19A1) which in turn converts androgens to estrogens in adipose tissue. 17βHSD1 = 17β-hydroxysteroid dehydrogenase.\nThe up-regulation of PTGS1 in breast carcinomas compared to normal tissue is expected from current knowledge. Several studies have suggested that PTGS1 has a carcinogenic role in different epithelial cancers [37-41]. The gene has also previously been found over-expressed in tumors compared with tumor adjacent normal tissue [42]. There has been large amount of research on PTGS2 in relation to cancer, indicating a role in carcinogenesis. The probes for PTGS2 were filtered out due to too many missing values and were not included in the analysis. Hence, this study lacks information about the role of PTGS2-expression in relation to serum estradiol levels.\nTLN2 is less known and less studied than Talin 1 (TLN1). Both talins are believed to connect integrins to the actin cytoskeleton and are involved in integrin-associated cell adhesion [43,44]. TLN2 is located on chromosome 15q15-21, close to CYP19A1 coding for aromatase. A study on aromatase-excess syndrome found that certain minor chromosomal rearrangements may cause cryptic transcription of the CYP19A1 gene through the TLN2-promoter [45]. We found that TLN2 was up-regulated in breasts of healthy women with high levels of serum estradiol. 
This could indicate an activation of cell adhesion. This gene was the only gene significantly up-regulated according to serum estradiol in normal breast tissue of premenopausal women. The down-regulation observed in breast cancers compared with normal breast tissue indicates a loss of cell adhesion. The expression of the gene is lower in DCIS than in invasive carcinomas, which is contrary to expectation, but the data set is small.\nA previous study reported on the gene expression in normal human breast tissue transplanted into two groups of athymic mice treated with different levels of estradiol [17]. Neither SCGB3A1, TLN2 nor PTGS1 was significantly differentially expressed in their study. They did, however, identify many of the genes found to be significantly differentially expressed according to serum estradiol in breast carcinomas in the current study, such as AREG, GREB1, TFF1 and TFF3. Going back to our normal samples, we see that several of their genes (including AREG, GREB1, TFF1, TFF3, GATA3 and two SERPIN genes) are differentially expressed in our normal breast tissue, but did not reach statistical significance (Additional file 4).\nThe differences observed between our study and that of Wilson and colleagues may be due to chance and to the presence of different residual confounding in the two studies. Wilson and colleagues studied the effects of estradiol treatment, which may act differently upon the breast tissue than endogenous estradiol. Normal human breast tissue transplanted into mice may react differently to varying levels of estradiol than it does in its natural milieu in humans. The genes that were significant in the Wilson study and differentially expressed but not significant in our study (e.g. AREG, GREB1, TFF1, TFF3 and GATA3) may be associated with serum estradiol levels in normal tissues as well as in tumor tissues, where we and others have observed significant associations.
Our study is the first to identify the expression of SCGB3A1, TLN2 and PTGS1 in normal breast tissue as significantly associated with serum estradiol levels. These findings are biologically reasonable and may have been missed in previous studies due to lack of representative study material.", "Serum estradiol levels were independently associated with mammographic density controlling for age, BMI and current use of hormone therapy, and the magnitude of the association was substantial (Table 3). The high beta value in the regression equation implies a large magnitude of impact, which supports the hypothesis that high serum estradiol levels increase mammographic density with both statistical and biological significance.", "The expression of genes found to be differentially expressed in normal breast tissue according to serum estradiol levels was examined in breast carcinomas. We found that the direction of expression was in each case opposite to that in normal breast tissue from women with high serum estradiol (Table 1). This may be due to lack of negative feedback of growth regulation in breast tumors. In breast cancer cell lines, estrogen induced up-regulation of positive proliferation regulators and down-regulation of anti-proliferative and pro-apoptotic genes, resulting in a net positive proliferative drive [46]. This is in line with our findings. In normal breast tissue from women with high serum estradiol, SCGB3A1, which regulates proliferation negatively, and TLN2, which prevents invasion, are up-regulated. PTGS1, which induces local production of estradiol and thereby stimulates proliferation, is down-regulated. All three genes are expressed to maintain control and regulation of the epithelial cells. In breast cancers the expression of these genes favors growth, migration and proliferation. This supports the hypothesis that high serum estradiol increases the proliferative pressure in normal breasts, which leads to an activation of mechanisms counter-acting this proliferative pressure.
In carcinomas, growth regulation is lost, and these hormone-related growth-promoting mechanisms are turned on.\nInterestingly, both AREG and GREB1 were up-regulated in ER+ breast carcinomas of younger (< 45 years) compared with older (> 70 years) women in a previous publication [47]. The increased expression of these genes was proposed as a mechanism responsible for the observed increase in proliferation seen in the tumors of younger compared with older women [47].\nThe genes differentially expressed according to serum estradiol levels in tumors confirmed many of the findings from the Dunbier study of ER+ tumors [10]. The previously published list of genes positively correlated with serum estradiol included TFF genes and GREB1. These genes were also found significant in the analysis of all tumors in this study, although TFF1 and TFF3 did not reach statistical significance (Table 4). In addition to the previously published genes, we identified the gene AREG, an EGFR ligand essential for breast development, as up-regulated in tumors from patients with high serum estradiol.\nGREB1 has previously been found to be an important estrogen-induced stimulator of growth in ER+ breast cancer cell lines [48]. AREG binds to and stimulates EGFR and hence epithelial cell growth. The up-regulation of these two genes in breast carcinomas of women with high estradiol levels may indicate a loss of regulation of growth associated with cancer development. This corresponds well with the interpretation of our findings in normal breast tissue referred to above and confirms the results indicated by the cell line studies by Frasor and colleagues [46]. These two genes are not differentially expressed between normal breast tissue and breast cancers. Both are, however, more highly expressed in ER+ than in ER- breast carcinomas.", "The currently used method for detection of serum estradiol has limited sensitivity at the lower serum levels often seen in postmenopausal women.
Despite the limited sample size we found several biologically plausible associations. However, due to limited power, there may be other associations that we could not reveal. We have included women with and without hormone therapy in the study. There may be differences in action between endogenous and exogenous estradiol that will not be revealed in this study.\nOne important strength of this study is the unique material with normal human tissue in its natural milieu, not influenced by an adjacent tumor [49-51] or by an adipose-dominated biology that may bias the study of reduction mammoplasties.", "In conclusion we report a list of genes whose expression is associated with serum estradiol levels. This list includes genes with known relations to estradiol signaling, mammary proliferation and breast carcinogenesis. All these genes were expressed differently in tumor and normal breast tissue. The gene expression in tumors resembled that in normal breast tissue from women with low serum estradiol. Associations between serum estradiol and the expression in breast carcinomas confirmed previous findings and revealed new associations. The comparison of results between normal breast tissue from healthy women and breast carcinomas indicates the difference in biological impact of estradiol in normal and cancerous breast tissue.", "Table S1: Criteria for estimation of menopausal status. A description of the different criteria used to determine menopausal status.\nTable S2: Gene ontology terms for genes differentially expressed in healthy women according to serum estradiol levels. A listing of gene ontology terms for genes differentially expressed in healthy women dependent on serum estradiol levels, with FDR reported.\nTable S3: Genes differentially expressed according to serum estradiol in breast carcinomas and their expression in normal breast tissue. TFF3 represented with two different probes.
Four genes differentially expressed according to serum estradiol levels in both normal breasts and in breast carcinomas.\nTable S4: Genes differentially expressed according to estradiol treatment in Wilson et al and according to serum estradiol in the current study. A comparison between a previously published study and this study." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[ "Serum estradiol", "SCGB3A1", "HIN1", "TLN2", "PTGS1", "COX1", "AREG", "GREB1", "TFF", "normal breast tissue", "gene expression" ]
Background: The influence of estradiol on breast development [1], the menopausal transition [2] and the breast epithelial cells [3] is widely studied. However, little is known about the effect of serum estradiol on gene expression in normal breast tissue. For postmenopausal women, high serum estradiol levels are associated with increased risk of breast cancer [4-6]. The results are less conclusive for premenopausal women, but epidemiologic evidence indicates an increased risk from higher exposure to female hormones [7]. In estrogen receptor (ER) positive breast carcinomas, the proliferating tumor cells express ER, while in normal breast tissue the proliferating epithelial cells are ER negative (ER-) [8,9]. Both normal and malignant breast epithelial cells are influenced by estradiol, but through different mechanisms. In the absence of ER, normal breast epithelial cells receive proliferative paracrine signals from ER+ fibroblasts [3]. The importance of estrogen stimuli in the proliferation of ER+ breast cancer cells is evident from the effect of anti-estrogen treatment. Previously, several studies have identified genes whose expression is regulated by estradiol in breast cancer cell lines. Recently, a study reported an association between serum levels of estradiol and gene expression of trefoil factor 1 (TFF1), growth regulation by estrogen in breast cancer 1 (GREB1), PDZ domain containing 1 (PDZK1) and progesterone receptor (PGR) in ER+ breast carcinomas [10]. Functional studies on breast cancer cell lines have described that estradiol induces expression of c-fos [11] and that exposure to physiologic doses of estradiol is necessary for malignant transformation [12]. Intratumoral levels of estrogens have also been measured and were found to be correlated with tumor gene expression of estradiol-metabolizing enzymes and the estrogen receptor gene (ESR1) [13] and of proliferation markers [14].
A recent study did, however, conclude that intratumoral estradiol levels were mainly determined by estradiol binding to ER (associated with ESR1 expression). The intratumoral estradiol levels were not found to be associated with local estradiol production [15]. Serum estradiol levels were found to be associated with local estradiol levels in normal breast tissue of breast cancer patients in a recent study [16]. This strengthens the hypothesis that serum estradiol levels influence gene expression in breast tissue. Wilson and colleagues studied the effect of estradiol on normal human breast tissue transplanted into athymic nude mice. They identified a list of genes associated with estradiol treatment, including TFF1, AREG, SCGB2A2, GREB1 and GATA3. The normal tissues used in the xenografts were from breasts with benign breast disease and from mammoplasty reductions [17]. Studies describing associations between serum estradiol levels and gene expression of normal human breast tissue in its natural milieu are lacking. Knowledge about gene expression changes associated with high serum estradiol may reveal biological mechanisms underlying the increased risk of elevated mammographic density and of developing breast cancer seen in women with high estradiol levels. We have identified genes differentially expressed between normal breast tissue samples according to serum estradiol levels. Several genes identified in previous studies using normal breast tissue or breast carcinomas are confirmed, but additional genes were also identified, making important contributions to our previous knowledge. Methods: Subjects: Two cohorts of women were recruited to the study from different breast diagnostic centers in Norway in the period 2002-2007 as described previously [18]. Exclusion criteria were pregnancy and use of anticoagulant therapy. The first cohort consisted of 120 women referred to the breast diagnostic centers who were cancer-free after further evaluation.
These will be referred to as healthy women. Breast biopsies were taken from an area with some mammographic density in the breast contralateral to any suspect lesion. The second cohort consisted of 66 women who were diagnosed with breast cancer. For this cohort, study biopsies were taken from the breast carcinoma after the diagnostic biopsies were obtained. Fourteen gauge needles were used for the biopsies and sampling was guided by ultrasound. The biopsies were either soaked in RNAlater (Ambion, Austin, TX) and sent to the Oslo University Hospital, Radiumhospitalet, before storage at -20°C or directly snap-frozen in liquid nitrogen and stored at -80°C. Based on serum hormone analyses (see below), 57 of the 120 healthy women included were postmenopausal, 43 were premenopausal, 10 were perimenopausal and serum samples were lacking for 10 women. Of the 66 breast cancer patients, 50 were estimated to be postmenopausal, 13 to be premenopausal and 3 to be perimenopausal. All women provided information about height, weight, parity, hormone therapy use and family history of breast cancer and provided a signed informed consent. The study was approved by the regional ethical committee (IRB approval no S-02036). Three additional datasets were used to explore the regulation of identified genes in breast cancer. One unpublished dataset from the Akershus University Hospital (AHUS), Norway, included normal breast tissue from 42 reduction mammoplasties and both tumor and normal adjacent tissue from 48 breast cancer patients (referred to as the AHUS dataset). Another unpublished dataset from University of North Carolina (UNC), USA, included breast cancer and adjacent normal breast tissue from 55 breast cancer patients (referred to as the UNC dataset). The third dataset is previously published and consists of biopsies from 31 pure ductal carcinoma in situ (DCIS), 36 pure invasive breast cancers and 42 tumours with mixed histology, both DCIS and invasive [19]. 
Serum hormone analysis: Serum hormone levels (LH, FSH, prolactin, estradiol, progesterone, SHBG and testosterone) were measured with electrochemiluminescence immunoassays (ECLIA) on a Roche Modular E instrument (Roche, Basel, Switzerland) by the Department of Medical Biochemistry, Oslo University Hospital, Rikshospitalet. The menopausal status was determined based on serum levels of hormones, age and hormone use. The criteria used can be found in Additional file 1. Biochemically perimenopausal women or women with uncertain menopausal status were excluded from analyses stratified on menopause. These hormone assays are tested through an external quality assessment scheme, Labquality, and the laboratory is accredited according to ISO-ES 17025. Serum estradiol values are given as picograms per milliliter (pg/ml) (pg/ml × 3.67 = pmol/l). The functional sensitivity of the estradiol assay was 10.9 pg/ml (40 pmol/l) with a total analytical sensitivity of < 5%.
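The unit conversion and functional-sensitivity cutoff described above can be sketched as follows (a minimal illustration; the function and constant names are ours, not from the study):

```python
# Serum estradiol conversion: pg/ml x 3.67 = pmol/l, as stated in the text.
# Values below the assay's functional sensitivity of 10.9 pg/ml (40 pmol/l)
# cannot be quantified reliably and are flagged.
PG_PER_ML_TO_PMOL_PER_L = 3.67
FUNCTIONAL_SENSITIVITY_PG_ML = 10.9

def estradiol_pg_to_pmol(value_pg_ml: float) -> float:
    """Convert a serum estradiol value from pg/ml to pmol/l."""
    return value_pg_ml * PG_PER_ML_TO_PMOL_PER_L

def below_functional_sensitivity(value_pg_ml: float) -> bool:
    """True if the measurement is below the assay's functional sensitivity."""
    return value_pg_ml < FUNCTIONAL_SENSITIVITY_PG_ML
```

For example, the stated sensitivity limit of 10.9 pg/ml converts to approximately 40 pmol/l.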
Gene expression analysis: RNA extraction and hybridization were performed as previously described [18]. Briefly, RNeasy Mini Protocol (Qiagen, Valencia, CA) was used for RNA extraction. Forty samples (38 from healthy women) were excluded from further analysis due to low RNA amount (< 10 ng) or poor RNA quality assessed by the curves given by Agilent Bioanalyzer (Agilent Technologies, Palo Alto, CA). The analyses were performed before RNA integrity value (RIN) was included as a measure of degradation, and samples with poor quality were excluded. Agilent Low RNA input Fluorescent Linear Amplification Kit Protocol was used for amplification and labelling with Cy5 (Amersham Biosciences, Little Chalfont, England) for sample RNA and Cy3 (Amersham Biosciences, Little Chalfont, England) for the reference (Universal Human total RNA (Stratagene, La Jolla, CA)). Labelled RNA was hybridized onto Agilent Human Whole Genome Oligo Microarrays (G4110A) (Agilent Technologies, Santa Clara, CA). Three arrays were excluded due to poor quality, leaving data from 79 healthy women and 64 breast cancer patients. The scanned data was processed in Feature Extraction 9.1.3.1 (Agilent Technologies, Santa Clara, CA). Locally weighted scatterplot smoothing (lowess) was used to normalize the data. The normalized and log2-transformed data was stored in the Stanford Microarray Database (SMD) [20] and retrieved for further analysis. Gene filtering excluded probes with ≥ 20% missing values and probes with fewer than three arrays at least 1.6 standard deviations away from the mean.
This reduced the dataset from 40791 probes to 9767 for the healthy women and to 10153 for the breast cancer patients. Missing values were imputed in R using the method impute.knn in the library impute [21]. All expression data are available in Gene Expression Omnibus (GEO) (GSE18672).
Mammographic density: Mammographic density was estimated from digitized craniocaudal mammograms as previously described [18] using the University of Southern California Madena assessment method [22]. First, the total breast area was outlined using a computerized tool and the area was represented as number of pixels. One of the co-authors, GU, identified a region of interest that incorporated all areas of density excluding those representing the pectoralis muscle and scanning artifacts. All densities above a certain threshold were tinted yellow, and the tinted pixels were converted to cm2 representing the absolute density, which was available for 108 of 120 healthy women. Percent mammographic density is calculated as the absolute density divided by the total breast area and was available for 114 of 120 healthy women. Test-retest reliability was 0.99 for absolute density.
Statistical Analysis: Quantitative significance analysis of microarrays (SAM) [23,24] was used for analysis of differentially expressed genes, using the library samr in R 2.12.0. Serum estradiol (nmol/L) was used as the dependent variable. The distribution of serum levels is skewed and therefore the non-parametric Wilcoxon test statistic was used. Probes with an FDR < 50% were included for gene ontology analyses. DAVID Bioinformatics Resources 2008 from the National Institute of Allergy and Infectious Diseases, NIH [25] was used for gene ontology analysis. Functional annotation clustering was applied and the following annotation categories were selected: biological processes, molecular function, cellular compartment and KEGG pathways. We included annotation terms with a p-value (FDR-corrected) of < 0.01 containing between 5 and 500 genes. For multivariate analysis, linear regression was fitted in R 2.12.0 to identify independent associations. Stepwise selection was performed to determine which variables had an independent contribution to the response variable. In the first step, all variables were included in the model. The variable with the highest p-value was rejected from the model in each step, before the model was refitted. This was repeated until all variables in the model had a p-value smaller than 0.05. Linear regression was used to determine the independent association between serum estradiol and the differentially expressed genes in healthy women. Age, menopause and current hormone use were included in the model and forced to stay throughout the stepwise selection to correct for confounding by these factors. Linear regression was also fitted in two analyses with mammographic density in healthy women as the dependent variable. In one set of analyses serum hormone levels were included as the independent covariates, and in the other analysis, variables representing gene expression associated with serum estradiol were included as covariates.
Epidemiologic covariates, such as age, BMI, parity and use of hormone therapy were included in the mammographic density analyses and forced to stay throughout the stepwise selection to control for potential confounding by these factors. Tumor subtypes were calculated using the intrinsic subtypes published by Sørlie et al in 2001 [26]. The total gene set was filtered for the intrinsic genes. The correlation between each sample's expression profile over the intrinsic genes and each subtype was calculated. Each sample was assigned to the subtype with which it had the highest correlation. Samples with all correlations < 0.1 were not assigned to any subtype. Two-sided t-tests were used to check for differences in expression of single genes between two categories of variables (e.g. pre- and postmenopausal).
The variable with the highest p-value was rejected from the model in each step, before the model was refitted. This was repeated until all variables in the model had a p-value smaller than 0.05. Linear regression was used to determine the independent association between serum estradiol and the differentially expressed genes in healthy women. Age, menopause and current hormone use were included in the model and forced to stay throughout the stepwise selection to correct for confounding by these factors. Linear regression was also fitted in two analyses with mammographic density in healthy women as a dependent variable. In one set of analyses serum hormone levels were included as the independent covariates, and in the other analysis, variables representing gene expression associated with serum estradiol were included as covariates. Epidemiologic covariates, such as age, BMI, parity and use of hormone therapy were included in the mammographic density analyses and forced to stay throughout the stepwise selection to control for potential confounding by these factors. Tumor subtypes were calculated using the intrinsic subtypes published by Sørlie et al in 2001 [26]. The total gene set was filtered for the intrinsic genes. The correlation between gene expression profiles for the intrinsic genes for each sample with each subtype was calculated. Each sample was assigned to the subtype with which it had the highest correlation. Samples with all correlations < 0.1 were not assigned to any subtype. Two-sided t-tests were used to check for difference in expression for single genes between two categories of variables (eg: pre- and postmenopausal). Subjects: Two cohorts of women were recruited to the study from different breast diagnostic centers in Norway in the period 2002-2007 as described previously [18]. Exclusion criteria were pregnancy and use of anticoagulant therapy. 
The first cohort consisted of 120 women referred to the breast diagnostic centers who were cancer-free after further evaluation. These will be referred to as healthy women. Breast biopsies were taken from an area with some mammographic density in the breast contralateral to any suspect lesion. The second cohort consisted of 66 women who were diagnosed with breast cancer. For this cohort, study biopsies were taken from the breast carcinoma after the diagnostic biopsies were obtained. Fourteen gauge needles were used for the biopsies and sampling was guided by ultrasound. The biopsies were either soaked in RNAlater (Ambion, Austin, TX) and sent to the Oslo University Hospital, Radiumhospitalet, before storage at -20°C or directly snap-frozen in liquid nitrogen and stored at -80°C. Based on serum hormone analyses (see below), 57 of the 120 healthy women included were postmenopausal, 43 were premenopausal, 10 were perimenopausal and serum samples were lacking for 10 women. Of the 66 breast cancer patients, 50 were estimated to be postmenopausal, 13 to be premenopausal and 3 to be perimenopausal. All women provided information about height, weight, parity, hormone therapy use and family history of breast cancer and provided a signed informed consent. The study was approved by the regional ethical committee (IRB approval no S-02036). Three additional datasets were used to explore the regulation of identified genes in breast cancer. One unpublished dataset from the Akershus University Hospital (AHUS), Norway, included normal breast tissue from 42 reduction mammoplasties and both tumor and normal adjacent tissue from 48 breast cancer patients (referred to as the AHUS dataset). Another unpublished dataset from University of North Carolina (UNC), USA, included breast cancer and adjacent normal breast tissue from 55 breast cancer patients (referred to as the UNC dataset). 
The third dataset is previously published and consists of biopsies from 31 pure ductal carcinoma in situ (DCIS), 36 pure invasive breast cancers and 42 tumours with mixed histology, both DCIS and invasive [19]. Serum hormone analysis: Serum hormone levels (LH, FSH, prolactin, estradiol, progesterone, SHBG and testosterone) were measured with electrochemiluminescence immunoassays (ECLIA) on a Roche Modular E instrument (Roche, Basel, Switzerland) by Department of Medical Biochemistry, Oslo University Hospital, Rikshospitalet. The menopausal status was determined based on serum levels of hormones, age and hormone use. The criteria used can be found in Additional file 1. Biochemically perimenopausal women or women with uncertain menopausal status were excluded from analyses stratified on menopause. These hormone assays are tested through an external quality assessment scheme, Labquality, and the laboratory is accredited according to ISO-ES 17025. Serum estradiol values are given as picograms per milliliter (pg/ml) (pg/ml × 3.67 = pmol/). The functional sensitivity of the estradiol assay was 10.9 pg/ml (40 pmol/l) with a total analytical sensitivity of < 5%. Gene expression analysis: RNA extraction and hybridization were performed as previously described [18]. Briefly, RNeasy Mini Protocol (Qiagen, Valencia, CA) was used for RNA extraction. Forty samples (38 from healthy women) were excluded from further analysis due to low RNA amount (< 10 ng) or poor RNA quality assessed by the curves given by Agilent Bioanalyzer (Agilent Technologies, Palo Alto, CA). The analyses were performed before RNA integrity value (RIN) was included as a measure of degradation and samples with poor quality were excluded. 
Agilent Low RNA input Fluorescent Linear Amplification Kit Protocol was used for amplification and labelling with Cy5 (Amersham Biosciences, Little Chalfont, England) for sample RNA and Cy3 (Amersham Biosciences, Little Chalfont, England) for the reference (Universal Human total RNA (Stratagene, La Jolla, CA)). Labelled RNA was hybridized onto Agilent Human Whole Genome Oligo Microarrays (G4110A) (Agilent Technologies, Santa Clara, CA). Three arrays were excluded due to poor quality, leaving data from 79 healthy women and 64 breast cancer patients. The scanned data was processed in Feature Extraction 9.1.3.1 (Agilent Technologies, Santa Clara, CA). Locally weighted scatterplot smoothing (lowess) was used to normalize the data. The normalized and log2-transformed data was stored in the Stanford Microarray Database (SMD) [20] and retrieved for further analysis. Gene filtering excluded probes with ≥ 20% missing values and probes with fewer than three arrays being at least 1.6 standard deviations away from the mean. This reduced the dataset from 40791 probes to 9767 for the healthy women and to 10153 for the breast cancer patients. Missing values were imputed in R using the method impute.knn in the library impute [21]. All expression data are available in Gene Expression Omnibus (GEO) under accession GSE18672. Mammographic density: Mammographic density was estimated from digitized craniocaudal mammograms as previously described [18] using the University of Southern California Madena assessment method [22]. First, the total breast area was outlined using a computerized tool and the area was represented as number of pixels. One of the co-authors, GU, identified a region of interest that incorporated all areas of density excluding those representing the pectoralis muscle and scanning artifacts. All densities above a certain threshold were tinted yellow, and the tinted pixels were converted to cm², which represents the absolute density; this measure was available for 108 of 120 healthy women. 
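The probe filter described above (drop probes with ≥ 20% missing values, and probes with fewer than three arrays at least 1.6 standard deviations from the mean) can be sketched roughly as follows. This is one plausible reading of the criteria, applying both thresholds per probe with NaN marking missing values; it is not the study's actual code.

```python
import numpy as np

def filter_probes(expr, max_missing_frac=0.20, sd_cutoff=1.6, min_far=3):
    """Return a boolean keep-mask over the rows (probes) of a
    probes-x-arrays expression matrix, with NaN marking missing values.

    A probe is kept only if
      * less than `max_missing_frac` of its values are missing, and
      * at least `min_far` arrays lie `sd_cutoff` or more standard
        deviations away from the probe's mean (a variation filter).
    """
    expr = np.asarray(expr, dtype=float)
    missing_frac = np.mean(np.isnan(expr), axis=1)
    mean = np.nanmean(expr, axis=1, keepdims=True)
    sd = np.nanstd(expr, axis=1, keepdims=True)
    sd = np.where(sd == 0, np.inf, sd)  # constant probes have no "far" arrays
    # Missing values get zero deviation so they never count as "far".
    dev = np.abs(np.nan_to_num(expr - mean, nan=0.0))
    n_far = np.sum(dev >= sd_cutoff * sd, axis=1)
    return (missing_frac < max_missing_frac) & (n_far >= min_far)
```

The surviving probes would then be passed to k-nearest-neighbour imputation (the impute.knn step mentioned in the text) before downstream analysis.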
Percent mammographic density was calculated as the absolute density divided by the total breast area and was available for 114 of 120 healthy women. Test-retest reliability was 0.99 for absolute density. Statistical Analysis: Quantitative significance analysis of microarrays (SAM) [23,24] was used for analysis of differentially expressed genes, using the samr library in R 2.12.0. Serum estradiol (nmol/L) was used as the dependent variable. The distribution of serum levels was skewed, and therefore the non-parametric Wilcoxon test-statistic was used. Probes with an FDR < 50% were included for gene ontology analyses. DAVID Bioinformatics Resources 2008 from the National Institute of Allergy and Infectious Diseases, NIH [25] was used for gene ontology analysis. Functional annotation clustering was applied and the following annotation categories were selected: biological process, molecular function, cellular component and KEGG pathways. We included annotation terms with a p-value (FDR-corrected) of < 0.01 containing between 5 and 500 genes. For multivariate analysis, linear regression was fitted in R 2.12.0 to identify independent associations. Stepwise selection was performed to determine which variables had an independent contribution to the response variable. In the first step, all variables were included in the model. In each step, the variable with the highest p-value was removed from the model before the model was refitted. This was repeated until all variables in the model had a p-value smaller than 0.05. Linear regression was used to determine the independent association between serum estradiol and the differentially expressed genes in healthy women. Age, menopause and current hormone use were included in the model and forced to stay throughout the stepwise selection to correct for confounding by these factors. Linear regression was also fitted in two analyses with mammographic density in healthy women as a dependent variable. 
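The backward stepwise selection just described (drop the worst remaining covariate, refit, repeat until all removable covariates have p < 0.05, with confounders forced to stay) can be sketched as a small generic loop. The model-fitting details are abstracted into a caller-supplied `pvalues_for` function; the mock p-values in the usage example are purely illustrative, not values from the study.

```python
def backward_eliminate(candidates, forced, pvalues_for, alpha=0.05):
    """Backward stepwise selection as described in the text.

    `pvalues_for(model_vars)` must refit the model on `model_vars` and
    return {var: p_value}. Variables in `forced` (the confounders) are
    never dropped. At each step the removable variable with the highest
    p-value is dropped and the model refitted, until every removable
    variable has p < alpha.
    """
    current = list(forced) + list(candidates)
    while True:
        pvals = pvalues_for(current)
        removable = {v: p for v, p in pvals.items() if v not in forced}
        if not removable:
            return current
        worst, worst_p = max(removable.items(), key=lambda kv: kv[1])
        if worst_p < alpha:
            return current
        current.remove(worst)
```

With a mock p-value provider, a run mirrors the text: nuisance covariates fall out one by one while the forced confounders and the significant genes remain in the final model.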
In one set of analyses serum hormone levels were included as the independent covariates, and in the other analysis, variables representing gene expression associated with serum estradiol were included as covariates. Epidemiologic covariates, such as age, BMI, parity and use of hormone therapy, were included in the mammographic density analyses and forced to stay throughout the stepwise selection to control for potential confounding by these factors. Tumor subtypes were calculated using the intrinsic subtypes published by Sørlie et al. in 2001 [26]. The total gene set was filtered for the intrinsic genes. For each sample, the correlation between its expression profile over the intrinsic genes and each subtype was calculated. Each sample was assigned to the subtype with which it had the highest correlation. Samples with all correlations < 0.1 were not assigned to any subtype. Two-sided t-tests were used to check for difference in expression for single genes between two categories of variables (e.g. pre- and postmenopausal). Results: Gene expression in normal breast tissue according to serum estradiol levels: Genes differentially expressed in normal breast tissue from healthy women according to serum estradiol levels with FDR = 0 are listed in Table 1. The gene ontology terms extracellular region and skeletal system development were significantly enriched in the top 80 up-regulated genes (FDR < 50%). There were no significant gene ontology terms enriched in the down-regulated genes with FDR < 50% (n = 8), although response to steroid hormone stimulus was the most enriched term with three observed genes (prostaglandin-endoperoxide synthase 1 (PTGS1), ESR1 and GATA3) (Additional file 2). Genes significantly differentially expressed in normal breast tissue of healthy women according to serum estradiol. A) Q-values and regulation of gene expression from quantitative SAM analysis of gene expression according to serum estradiol. 
B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts. 1) Q-value from SAM of gene expression in normal breast tissue according to serum estradiol 2) s-est = serum estradiol 3) BC = breast cancer 4) P-value from two-sided t-test 5) ER+ BC = estrogen receptor positive breast cancer (n = 53) 6) ER- BC = estrogen receptor negative breast cancer (n = 8) The genes differentially expressed in normal breast tissue according to serum estradiol with an FDR = 0 (from Table 1) were tested for differential expression between breast cancer tissue and normal breast tissue from healthy women. All six genes were differentially expressed between carcinomas and normal tissue. Interestingly, the expression in breast carcinomas was similar to that in normal tissue from women with lower levels of circulating estradiol and opposite to that found in normal samples from women with higher levels of serum estradiol (Table 1). Comparing the expression of these genes in normal breast tissue with the expression in ER+ and ER- carcinomas respectively revealed similar results (Table 1). In tumors, SCGB3A1 tended to be expressed at a lower level in basal-like tumors compared with all other tumors or compared with luminal A tumors, but this did not reach statistical significance (both p-values = 0.2). However, in two other datasets (AHUS and UNC), SCGB3A1 was expressed at significantly lower levels in basal-like tumors compared with all other subtypes (p = 0.04 and 0.003 respectively). There was no consistent significant difference in SCGB3A1 expression in ER+ and ER- tumors. Of the six genes differentially expressed according to serum estradiol in normal breast tissue, three were differentially expressed between DCIS and early invasive breast carcinomas based on a previously published dataset [19] (Table 1). 
SCGB3A1 was down-regulated in invasive compared with DCIS, whereas talin 2 (TLN2) and PTGS1 were up-regulated in invasive compared with DCIS. A linear regression was fitted with all differentially expressed genes as covariates and controlling for age, menopause and current hormone therapy use. After leave-one-out elimination of insignificant covariates, SCGB3A1, TLN2 and PTGS1 were still significant (Table 2). Genes independently associated with serum estradiol in a linear regression model. All genes differentially expressed according to serum estradiol (Table 1) were included. Values shown are corrected for age, menopause and current hormone therapy. After leave-one-out stepwise selection the following covariates remained: 1) Estimate denotes the beta-value corresponding to each covariate in the regression equation. 2) Values for the non-significant genes are from the last model before they were excluded. 
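The intrinsic-subtype assignment used for the basal-like versus luminal A comparisons above (Methods: correlate each sample's intrinsic-gene profile with each subtype, assign the best match, leave unassigned if all correlations are below 0.1) can be sketched as a nearest-centroid rule. The centroid vectors in the example are invented for illustration; they are not the published Sørlie centroids.

```python
import numpy as np

def assign_subtype(sample, centroids, min_corr=0.1):
    """Assign a sample to the subtype whose profile it correlates with
    best; return None if all correlations are below `min_corr`.

    `sample` is an expression vector over the intrinsic genes;
    `centroids` maps subtype name -> profile over the same genes.
    """
    best, best_r = None, min_corr
    for name, centroid in centroids.items():
        r = np.corrcoef(sample, centroid)[0, 1]  # Pearson correlation
        if r >= best_r:
            best, best_r = name, r
    return best
```

A sample correlating strongly with one profile is assigned to it, while a sample with near-zero correlation to every profile is left unassigned, mirroring the < 0.1 rule in the Methods.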
Serum estradiol related to mammographic density in healthy women: Regression analysis in postmenopausal women showed that serum estradiol was independently associated with both absolute and percent mammographic density when controlling for age, BMI and current use of hormone therapy (Table 3). None of the genes differentially expressed in normal breast tissue according to serum estradiol levels were independently associated with mammographic density (data not shown). Serum hormones independently associated with mammographic density in linear regression models. Values shown are corrected for age, HT and BMI. Through leave-one-out stepwise elimination of covariates, prolactin, SHBG and testosterone were excluded and the following variables remained. 1) Estimate denotes the beta-value corresponding to each covariate in the regression equation. 
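Percent mammographic density, as defined in the Methods, is simply the dense (tinted) area as a fraction of the total breast area. A minimal sketch, with our own illustrative function and parameter names:

```python
def percent_density(absolute_density_cm2, total_breast_area_cm2):
    """Percent mammographic density: absolute (dense) area expressed
    as a percentage of the total breast area."""
    if total_breast_area_cm2 <= 0:
        raise ValueError("total breast area must be positive")
    return 100.0 * absolute_density_cm2 / total_breast_area_cm2
```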
Gene expression in breast carcinomas according to serum estradiol levels: In breast carcinomas, quantitative SAM revealed two genes, AREG and GREB1, as differentially expressed according to serum estradiol levels with FDR = 0 (Table 4). Both genes were up-regulated in samples from women with high serum estradiol (estradiol was used as a continuous response variable in the analysis). Of 16 probes up-regulated in samples from women with high serum estradiol, there were three probes for TFF3 and one for TFF1, although these did not reach statistical significance (Table 4). No genes were significantly down-regulated according to serum estradiol. In ER+ samples (n = 53), we also found AREG and GREB1 up-regulated in samples from women with high serum estradiol (FDR = 0), but the TFF-genes were not up-regulated. Among the ER- samples (n = 8) there was very little variation in serum estradiol levels, and a search for genes differentially expressed according to serum estradiol was not feasible. Genes significantly differentially expressed according to serum estradiol levels in breast carcinomas. A) Quantitative SAM analysis for differential expression according to serum estradiol with q-values and direction of regulation indicated. B) Significance testing of difference in gene expression of the genes identified in A) in different sample cohorts. 
1) Q-value from SAM of gene expression according to serum estradiol 2) Gene expression in samples from patients with high compared with low serum estradiol 3) ER+ BC = Estrogen receptor positive breast cancer (n = 53) 4) BC = breast cancer 5) P-value from two-sided t-test 6) ER- BC = Estrogen receptor negative breast cancer (n = 8) 7) Two different probes for TFF3 are used. Looking at the expression of these genes in normal breast tissue from healthy women according to serum estradiol, both AREG and GREB1 are up-regulated in samples from women with high estradiol levels, although without reaching significance. Comparing the expression of these genes in breast carcinomas and normal breast tissue, neither AREG nor GREB1 is differentially expressed between normal breast tissue and breast carcinomas. All the probes for TFF-genes are, however, significantly down-regulated in normal breast tissue compared with breast carcinomas. All these genes (AREG, GREB1, TFF1 and TFF3) were up-regulated in ER+ carcinomas compared to ER- carcinomas (AREG was only borderline significant) (Additional file 3). 
Discussion: Gene expression in normal breast tissue according to serum estradiol levels: We have identified genes differentially expressed according to serum estradiol in normal breast tissue of healthy women. The genes up-regulated in normal breast tissue under influence of high serum estradiol are enriched for the gene ontology terms extracellular matrix and skeletal system development. Both ER isoforms α and β are expressed in the stromal cells [27]. The proliferating epithelial cells are not found to be ERα-positive [8] and are most often negative for both ER isoforms [9]. In normal breast tissue, the estrogen-induced epithelial proliferation is, at least partly, caused by paracrine signals from ER+ fibroblasts [3]. The enrichment of gene ontology terms related to extracellular matrix may be linked to the effect of estradiol on the ER+ stromal cells. 
Three genes were independently associated with serum estradiol levels in normal breast tissue in a linear regression model after controlling for age, menopause and current hormone therapy. The two genes SCGB3A1 and TLN2 were positively associated with serum estradiol, and PTGS1 (COX1) negatively. SCGB3A1 is also called high in normal 1 (HIN1) and is a secretoglobin transcribed in luminal, but not in myoepithelial, breast cells and is secreted from the cell [28]. The protein is a tumor suppressor and inhibits cell growth, migration and invasion acting through the AKT-pathway. SCGB3A1 inhibits Akt-phosphorylation, which reduces the Akt-function in promoting cell cycle progression (transition from the G1 to the S-phase) and preventing apoptosis (through inhibition of the TGFβ-pathway) [29] (Figure 1). Simplified illustration of the cellular mechanisms of action of SCGB3A1. SCGB3A1 inhibits the phosphorylation of Akt, leading to reduced cell division and increased apoptosis. Molecules in red are increased/stimulated as a result of SCGB3A1-action, whereas molecules in blue are decreased/inhibited. The SCGB3A1 promoter was found to be hypermethylated, with down-regulated expression of the gene, in breast carcinomas compared with normal breast tissue, hence the alias "high in normal 1" (HIN1) [30-32]. Interestingly, the gene is not methylated in BRCA-mutated and BRCA-like breast cancer [32]. Methylation of the gene is suggested to be an early event in non-BRCA-associated breast cancer [33]. We found SCGB3A1 down-regulated in basal-like cancers compared to other subtypes. At first glance, this may seem contradictory to the observation that the gene is not methylated in BRCA-like breast cancers. However, Krop and colleagues found that the gene is expressed in luminal epithelial cell lines, but not in myoepithelial cell lines. 
The reduced expression seen in basal-like cancer could be due to a myoepithelial phenotype arising from a myoepithelial cell of origin or from phenotypic changes acquired during carcinogenesis. This could also be linked to the lack of methylation in BRCA-associated breast cancers, which are often basal-like. An a priori low gene expression would make methylation unnecessary. The increased Akt-activity seen in basal-like cancers [34] is consistent with the low levels of SCGB3A1 expression observed in the basal-like cancers in this study, leading to increased Akt-phosphorylation and thereby Akt-activity. The up-regulation of SCGB3A1 in the breasts of women with high serum estradiol may protect the breast epithelial cells against uncontrolled proliferation. Women with methylation of the SCGB3A1-promoter may be at risk of developing luminal, but not basal-like, breast cancer, and a reduction in serum estradiol levels may be protective for these women. Hormone therapy after menopause is associated with receptor-positive, but not receptor-negative, breast cancer [35]. Our results indicate that the same may be true for circulating estradiol levels in the absence of functional SCGB3A1, but this is not yet shown empirically. PTGS1 (prostaglandin-endoperoxide synthase 1) is synonymous with cyclooxygenase 1 (COX1) and codes for an enzyme important in prostaglandin production. Studies of normal human adipocytes have shown that the enzyme induces production of prostaglandin E2 (PGE2), which in turn increases the expression of aromatase (CYP19A1) [36]. Aromatase is the enzyme responsible for the last step in the conversion of androgens to estrogens in adipose tissue. Hence, the expression of PTGS1 may increase the local production of estradiol (Figure 2). In normal breast tissue, we observed that the expression of PTGS1 was lower in samples from women with higher levels of serum estradiol. This may be due to negative feedback. 
High systemic levels of estradiol make local production unnecessary, and PTGS1-induced aromatase production is abolished. Schematic illustration of mechanism of action of PTGS1. PTGS1 induces PGE2-production. PGE2 increases the expression of aromatase (CYP19A1), which in turn converts androgens to estrogens in adipose tissue. 17βHSD1 = 17β-hydroxysteroid dehydrogenase. The up-regulation of PTGS1 in breast carcinomas compared to normal tissue is expected from current knowledge. Several studies have suggested that PTGS1 has a carcinogenic role in different epithelial cancers [37-41]. The gene has also previously been found over-expressed in tumors compared with tumor-adjacent normal tissue [42]. There has been a large amount of research on PTGS2 in relation to cancer, indicating a role in carcinogenesis. The probes for PTGS2 were filtered out due to too many missing values and were not included in the analysis. Hence, this study lacks information about the role of PTGS2-expression in relation to serum estradiol levels. TLN2 is less known and less studied than talin 1 (TLN1). Both talins are believed to connect integrins to the actin cytoskeleton and are involved in integrin-associated cell adhesion [43,44]. TLN2 is located on chromosome 15q15-21, close to CYP19A1 coding for aromatase. A study on aromatase-excess syndrome found that certain minor chromosomal rearrangements may cause cryptic transcription of the CYP19A1 gene through the TLN2-promoter [45]. We found that TLN2 was up-regulated in breasts of healthy women with high levels of serum estradiol. This could indicate an activation of cell adhesion. This gene was the only gene significantly up-regulated according to serum estradiol in normal breast tissue of premenopausal women. The down-regulation observed in breast cancers compared with normal breast tissue indicates a loss of cell adhesion. 
The expression of the gene is lower in DCIS than in invasive carcinomas, which is contrary to expected, but the data set is small. A previous study report on the gene expression in normal human breast tissue transplanted into two groups of athymic mice treated with different levels of estradiol [17]. Neither SCGB3A1, TLN2 nor PTGS1 was significantly differentially expressed in their study. They did, however, identify many of the genes found to be significantly differentially expressed according to serum estradiol in breast carcinomas in the current study, such as AREG, GREB1, TFF1 and TFF3. Going back to our normal samples, we see that several of their genes (including AREG, GREB1, TFF1 and TFF3, GATA3 and two SERPIN-genes) are differentially expressed in our normal breast tissue, but did not reach statistical significance (Additional file 4). The differences observed between our study and that of Wilson and colleagues may be due to chance and due to the presence of different residual confounding in the two studies. Wilson and colleagues studied the effects of estradiol treatment, which may act differently upon the breast tissue than endogenous estradiol. Normal human breast tissue transplanted into mice may react differently to varying levels of estradiol than it does in its natural milieu in humans. The genes that were significant in the Wilson-study and differentially expressed but not significant in our study (eg: AREG, GREB1, TFF1, TFF3 and GATA3) may be associated with serum estradiol levels in normal tissues as well as in tumor tissues where we and others have observed significant associations. Our study is the first study to identify the expression of SCGB3A1, TLN2 and PTGS1 in normal breast tissue to be significantly associated with serum estradiol levels. These findings are biologically reasonable and may have been missed in previous studies due to lack of representative study material. 
We have identified genes differentially expressed according to serum estradiol in normal breast tissue of healthy women. The genes up-regulated in normal breast tissue under influence of high serum estradiol are enriched for the gene ontology terms extracellular matrix and skeletal system development. Both ER isoforms α and β are expressed in the stromal cells [27]. The proliferating epithelial cells are not found to be ER α + [8] and most often negative to both ER isoforms [9]. In normal breast tissue, the estrogen-induced epithelial proliferation is, at least partly, caused by paracrine signals from ER+ fibroblasts [3]. The enrichment of gene ontology terms related to extracellular matrix may be linked to the effect of estradiol on the ER+ stromal cells. Three genes were independently associated with serum estradiol levels in normal breast tissue in a linear regression model after controlling for age, menopause and current hormone therapy. The two genes SCBG3A1 and TLN2 were positively associated with serum estradiol and PTGS1 (COX1) negatively. SCBG3A1 is also called high in normal 1 (HIN1) and is a secretoglobin transcribed in luminal, but not in myoepithelial breast cells and is secreted from the cell [28]. The protein is a tumor suppressor and inhibits cell growth, migration and invasion acting through the AKT-pathway. SCBG3A1 inhibits Akt-phosphorylation, which reduces the Akt-function in promoting cell cycle progression (transition from the G1 to the S-phase) and preventing apoptosis (through inhibition of the TGFβ-pathway) [29] (Figure 1). Simplified illustration of the cellular mechanisms of action of SCGB3A1. SCGB3A1 inhibits the phosphorylation of Akt leading to reduced cell cycle division and increased apoptosis. Molecules in red are increased/stimulated as result of SCGB3A1-action, whereas molecules in blue are decreased/inhibited. 
The SCBG3A1 promoter was found to be hypermethylated with down-regulated expression of the gene in breast carcinomas compared with normal breast tissue, where it is referred to as "high in normal 1" (HIN1)[30-32]. Interestingly, the gene is not methylated in BRCA-mutated and BRCA-like breast cancer [32]. Methylation of the gene is suggested to be an early event in non-BRCA-associated breast cancer [33]. We found SCBG3A1 down-regulated in basal-like cancers compared to other subtypes. At first glance, this may seem contradictory to the observation that the gene is not methylated in BRCA-like breast cancers. However, Krop and colleagues found that the gene is expressed in luminal epithelial cell lines, but not in myoepithelial cell lines. The reduced expression seen in basal-like cancer could be due to a myoepithelial phenotype arising from a myoepithelial cell of origin or from phenotypic changes acquired during carcinogensis. This could also be linked to the lack of methylation in BRCA-associated breast cancers, which are often basal-like. An a priori low gene expression would make methylation unnecessary. The increased Akt-activity seen in basal-like cancers [34] is consistent with the low levels of SCBG3A1 expression observed in the basal-like cancers in this study leading to increased Akt-phosphorylation and thereby Akt-activity. The up-regulation of SCGB3A1 in the breasts of women with high serum estradiol protects the breast epithelial cells against uncontrolled proliferation. Women with methylation of the SCGB3A1-promoter may be at risk of developing luminal, but not basal-like, breast cancer and a reduction in serum estradiol levels may be protective for these women. Hormone therapy after menopause is associated with receptor positive, but not receptor negative, breast cancer [35]. Our results indicate that the same may be true for circulating estradiol levels in absence of functional SCGB3A1, but this is not yet shown empirically. 
PTGS1 (prostaglandin-endoperoxide synthase 1) is synonymous with cyclooxygenase 1 (COX1) and codes for an enzyme important in prostaglandin production. Studies of normal human adiopocytes have shown that the enzyme induces production of prostaglandin E2 (PGE2) which in turn increases the expression of aromatase (CYP19A1) [36]. Aromatase is the enzyme responsible for the last step in the conversion of androgens to estrogens in adipose tissue. Hence, the expression of PTGS1 may increase the local production of estradiol (Figure 2). In normal breast tissue, we observed that the expression of PTGS1 was lower in samples from women with higher levels of serum estradiol. This may be due to negative feedback. High systemic levels of estradiol make local production unnecessary and PTGS1-induced aromatase production is abolished. Schematic illustration of mechanism of action of PTGS1. PTGS1 induces PGE2-production. PGE2 increases the expression of aromatase (CYP19A1) which in turn converts androgens to estrogens in adipose tissue. 17βHSD1 = 17β-hydroxysteroid dehydrogenase. The up-regulation of PTGS1 in breast carcinomas compared to normal tissue is expected from current knowledge. Several studies have suggested that PTGS1 has a carcinogenic role in different epithelial cancers [37-41]. The gene has also previously been found over-expressed in tumors compared with tumor adjacent normal tissue [42]. There has been large amount of research on PTGS2 in relation to cancer, indicating a role in carcinogenesis. The probes for PTGS2 were filtered out due to too many missing values and were not included in the analysis. Hence, this study lacks information about the role of PTGS2-expression in relation to serum estradiol levels. TLN2 is less known and less studied than Talin 1 (TLN1). Both talins are believed to connect integrins to the actin cytoskeleton and are involved in integrin-associated cell adhesion [43,44]. 
TLN2 is located on chromosome 15q15-21, close to CYP19A1 coding for aromatase. A study on aromatase-excess syndrome found that certain minor chromosomal rearrangements may cause cryptic transcription of the CYP19A1 gene through the TLN2-promoter [45]. We found that TLN2 was up-regulated in breasts of healthy women with high levels of serum estradiol. This could indicate an activation of cell adhesion. This gene was the only gene significantly up-regulated according to serum estradiol in normal breast tissue of premenopausal women. The down-regulation observed in breast cancers compared with normal breast tissue indicates a loss of cell adhesion. The expression of the gene is lower in DCIS than in invasive carcinomas, which is contrary to expected, but the data set is small. A previous study report on the gene expression in normal human breast tissue transplanted into two groups of athymic mice treated with different levels of estradiol [17]. Neither SCGB3A1, TLN2 nor PTGS1 was significantly differentially expressed in their study. They did, however, identify many of the genes found to be significantly differentially expressed according to serum estradiol in breast carcinomas in the current study, such as AREG, GREB1, TFF1 and TFF3. Going back to our normal samples, we see that several of their genes (including AREG, GREB1, TFF1 and TFF3, GATA3 and two SERPIN-genes) are differentially expressed in our normal breast tissue, but did not reach statistical significance (Additional file 4). The differences observed between our study and that of Wilson and colleagues may be due to chance and due to the presence of different residual confounding in the two studies. Wilson and colleagues studied the effects of estradiol treatment, which may act differently upon the breast tissue than endogenous estradiol. Normal human breast tissue transplanted into mice may react differently to varying levels of estradiol than it does in its natural milieu in humans. 
The genes that were significant in the Wilson-study and differentially expressed but not significant in our study (eg: AREG, GREB1, TFF1, TFF3 and GATA3) may be associated with serum estradiol levels in normal tissues as well as in tumor tissues where we and others have observed significant associations. Our study is the first study to identify the expression of SCGB3A1, TLN2 and PTGS1 in normal breast tissue to be significantly associated with serum estradiol levels. These findings are biologically reasonable and may have been missed in previous studies due to lack of representative study material. Serum estradiol associated with mammographic density in healthy women Serum estradiol levels were independently associated with mammographic density controlling for age, BMI and current use of hormone therapy, and the magnitude of the association was substantial (Table 3). The high beta-value in the regression equation implies a large magnitude of impact which supports the hypothesis that high serum estradiol levels increases mammographic density with both statistical and biological significance. Serum estradiol levels were independently associated with mammographic density controlling for age, BMI and current use of hormone therapy, and the magnitude of the association was substantial (Table 3). The high beta-value in the regression equation implies a large magnitude of impact which supports the hypothesis that high serum estradiol levels increases mammographic density with both statistical and biological significance. Gene expression in breast carcinomas according to serum estradiol levels The expression of genes found to be differentially expressed in normal breast tissue according to serum estradiol levels was examined in breast carcinomas. We found that the expression was all opposite of that in normal breast tissue from women with high serum estrogen (Table 1). This may be due to lack of negative feedback of growth regulation in breast tumors. 
In breast cancer cell lines, estrogen induced up-regulation of positive proliferation regulators and down-regulation of anti-proliferative and pro-apoptotic genes, resulting in a net positive proliferative drive [46]. This is in line with our findings. In normal breast tissue from women with high serum estradiol, SCGB3A1, which regulate proliferation negatively, and TLN2, which prevents invasion, are up-regulated. PTGS1, which induce local production of estradiol-stimulated proliferation, is down-regulated. All three genes are expressed to maintain control and regulation of the epithelial cells. In breast cancers the expression of these genes favors growth, migration and proliferation. This supports the hypothesis that high serum estradiol increases the proliferative pressure in normal breasts, which leads to an activation of mechanisms counter-acting this proliferative pressure. In carcinomas, growth regulation is lost, and these hormone-related growth-promoting mechanisms are turned on. Interestingly, both AREG and GREB1 were up-regulated in ER+ breast carcinomas of younger (< 45 years) compared with older (> 70 years) women in a previous publication [47]. The increased expression of these genes was proposed as a mechanism responsible for the observed increase in proliferation seen in the tumors of younger compared with older women [47]. The genes differentially expressed according to serum estradiol levels in tumors confirmed many of the findings from the Dunbier-study of ER+ tumors [10]. The previously published list of genes positively correlated with serum estradiol included TFF-genes and GREB1. These genes were also found significant in the analysis of all tumors in this study, although TFF1 and TFF3 did not reach statistical significance (Table 4). In addition to the previously published genes, we identified the gene AREG, an EGFR-ligand essential for breast development, as up-regulated in tumors from patients with high serum estradiol. 
GREB1 is previously found to be an important estrogen-induced stimulator of growth in ER+ breast cancer cell lines [48]. AREG binds to and stimulates EGFR and hence epithelial cell growth. The up-regulation of these two genes in breast carcinomas of women with high estradiol levels may indicate a loss of regulation of growth associated with cancer development. This corresponds well with the interpretation of our findings in normal breast tissue referred above and confirms the results indicated by the cell line studies by Frasor and colleagues [46]. These two genes are not differentially expressed between normal breast tissue and breast cancers. Both are, however, higher expressed in ER+ than ER- breast carcinomas. The expression of genes found to be differentially expressed in normal breast tissue according to serum estradiol levels was examined in breast carcinomas. We found that the expression was all opposite of that in normal breast tissue from women with high serum estrogen (Table 1). This may be due to lack of negative feedback of growth regulation in breast tumors. In breast cancer cell lines, estrogen induced up-regulation of positive proliferation regulators and down-regulation of anti-proliferative and pro-apoptotic genes, resulting in a net positive proliferative drive [46]. This is in line with our findings. In normal breast tissue from women with high serum estradiol, SCGB3A1, which regulate proliferation negatively, and TLN2, which prevents invasion, are up-regulated. PTGS1, which induce local production of estradiol-stimulated proliferation, is down-regulated. All three genes are expressed to maintain control and regulation of the epithelial cells. In breast cancers the expression of these genes favors growth, migration and proliferation. This supports the hypothesis that high serum estradiol increases the proliferative pressure in normal breasts, which leads to an activation of mechanisms counter-acting this proliferative pressure. 
In carcinomas, growth regulation is lost, and these hormone-related growth-promoting mechanisms are turned on. Interestingly, both AREG and GREB1 were up-regulated in ER+ breast carcinomas of younger (< 45 years) compared with older (> 70 years) women in a previous publication [47]. The increased expression of these genes was proposed as a mechanism responsible for the observed increase in proliferation seen in the tumors of younger compared with older women [47]. The genes differentially expressed according to serum estradiol levels in tumors confirmed many of the findings from the Dunbier-study of ER+ tumors [10]. The previously published list of genes positively correlated with serum estradiol included TFF-genes and GREB1. These genes were also found significant in the analysis of all tumors in this study, although TFF1 and TFF3 did not reach statistical significance (Table 4). In addition to the previously published genes, we identified the gene AREG, an EGFR-ligand essential for breast development, as up-regulated in tumors from patients with high serum estradiol. GREB1 is previously found to be an important estrogen-induced stimulator of growth in ER+ breast cancer cell lines [48]. AREG binds to and stimulates EGFR and hence epithelial cell growth. The up-regulation of these two genes in breast carcinomas of women with high estradiol levels may indicate a loss of regulation of growth associated with cancer development. This corresponds well with the interpretation of our findings in normal breast tissue referred above and confirms the results indicated by the cell line studies by Frasor and colleagues [46]. These two genes are not differentially expressed between normal breast tissue and breast cancers. Both are, however, higher expressed in ER+ than ER- breast carcinomas. 
Gene expression in normal breast tissue according to serum estradiol levels:
We have identified genes differentially expressed according to serum estradiol in normal breast tissue of healthy women. The genes up-regulated in normal breast tissue under the influence of high serum estradiol are enriched for the gene ontology terms extracellular matrix and skeletal system development. Both ER isoforms α and β are expressed in the stromal cells [27]. 
The proliferating epithelial cells are not found to be ERα+ [8] and are most often negative for both ER isoforms [9]. In normal breast tissue, estrogen-induced epithelial proliferation is, at least partly, caused by paracrine signals from ER+ fibroblasts [3]. The enrichment of gene ontology terms related to extracellular matrix may be linked to the effect of estradiol on the ER+ stromal cells. Three genes were independently associated with serum estradiol levels in normal breast tissue in a linear regression model after controlling for age, menopause and current hormone therapy. SCGB3A1 and TLN2 were positively associated with serum estradiol, and PTGS1 (COX1) negatively. SCGB3A1, also called high in normal 1 (HIN1), is a secretoglobin transcribed in luminal, but not in myoepithelial, breast cells and is secreted from the cell [28]. The protein is a tumor suppressor that inhibits cell growth, migration and invasion through the AKT pathway. SCGB3A1 inhibits Akt phosphorylation, which reduces Akt function in promoting cell cycle progression (transition from the G1 to the S phase) and in preventing apoptosis (through inhibition of the TGFβ pathway) [29] (Figure 1). Simplified illustration of the cellular mechanisms of action of SCGB3A1: SCGB3A1 inhibits the phosphorylation of Akt, leading to reduced cell cycle progression and increased apoptosis. Molecules in red are increased/stimulated as a result of SCGB3A1 action, whereas molecules in blue are decreased/inhibited. The SCGB3A1 promoter was found to be hypermethylated, with down-regulated expression of the gene, in breast carcinomas compared with normal breast tissue, where it is referred to as "high in normal 1" (HIN1) [30-32]. Interestingly, the gene is not methylated in BRCA-mutated and BRCA-like breast cancer [32]. Methylation of the gene is suggested to be an early event in non-BRCA-associated breast cancer [33]. We found SCGB3A1 down-regulated in basal-like cancers compared to other subtypes. 
At first glance, this may seem contradictory to the observation that the gene is not methylated in BRCA-like breast cancers. However, Krop and colleagues found that the gene is expressed in luminal epithelial cell lines, but not in myoepithelial cell lines. The reduced expression seen in basal-like cancer could therefore be due to a myoepithelial phenotype, arising either from a myoepithelial cell of origin or from phenotypic changes acquired during carcinogenesis. This could also be linked to the lack of methylation in BRCA-associated breast cancers, which are often basal-like: an a priori low gene expression would make methylation unnecessary. The increased Akt activity seen in basal-like cancers [34] is consistent with the low levels of SCGB3A1 expression observed in the basal-like cancers in this study, which would permit increased Akt phosphorylation and thereby Akt activity. The up-regulation of SCGB3A1 in the breasts of women with high serum estradiol may protect the breast epithelial cells against uncontrolled proliferation. Women with methylation of the SCGB3A1 promoter may be at risk of developing luminal, but not basal-like, breast cancer, and a reduction in serum estradiol levels may be protective for these women. Hormone therapy after menopause is associated with receptor-positive, but not receptor-negative, breast cancer [35]. Our results indicate that the same may be true for circulating estradiol levels in the absence of functional SCGB3A1, but this has not yet been shown empirically. PTGS1 (prostaglandin-endoperoxide synthase 1) is synonymous with cyclooxygenase 1 (COX1) and codes for an enzyme important in prostaglandin production. Studies of normal human adipocytes have shown that the enzyme induces production of prostaglandin E2 (PGE2), which in turn increases the expression of aromatase (CYP19A1) [36]. Aromatase is the enzyme responsible for the last step in the conversion of androgens to estrogens in adipose tissue. 
Hence, the expression of PTGS1 may increase the local production of estradiol (Figure 2). In normal breast tissue, we observed that the expression of PTGS1 was lower in samples from women with higher levels of serum estradiol. This may be due to negative feedback: high systemic levels of estradiol make local production unnecessary, and PTGS1-induced aromatase production is abolished. Schematic illustration of the mechanism of action of PTGS1: PTGS1 induces PGE2 production; PGE2 increases the expression of aromatase (CYP19A1), which in turn converts androgens to estrogens in adipose tissue. 17βHSD1 = 17β-hydroxysteroid dehydrogenase. The up-regulation of PTGS1 in breast carcinomas compared to normal tissue is expected from current knowledge. Several studies have suggested that PTGS1 has a carcinogenic role in different epithelial cancers [37-41]. The gene has also previously been found over-expressed in tumors compared with tumor-adjacent normal tissue [42]. There has been a large amount of research on PTGS2 in relation to cancer, indicating a role in carcinogenesis. However, the probes for PTGS2 were filtered out due to too many missing values and were not included in the analysis; hence, this study lacks information about the role of PTGS2 expression in relation to serum estradiol levels. TLN2 is less well known and less studied than talin 1 (TLN1). Both talins are believed to connect integrins to the actin cytoskeleton and are involved in integrin-associated cell adhesion [43,44]. TLN2 is located on chromosome 15q15-21, close to CYP19A1, which codes for aromatase. A study on aromatase-excess syndrome found that certain minor chromosomal rearrangements may cause cryptic transcription of the CYP19A1 gene through the TLN2 promoter [45]. We found that TLN2 was up-regulated in breasts of healthy women with high levels of serum estradiol. This could indicate an activation of cell adhesion. 
This gene was the only gene significantly up-regulated according to serum estradiol in normal breast tissue of premenopausal women. The down-regulation observed in breast cancers compared with normal breast tissue indicates a loss of cell adhesion. The expression of the gene is lower in DCIS than in invasive carcinomas, which is contrary to expectations, but the data set is small. A previous study reported on gene expression in normal human breast tissue transplanted into two groups of athymic mice treated with different levels of estradiol [17]. Neither SCGB3A1, TLN2 nor PTGS1 was significantly differentially expressed in that study. It did, however, identify many of the genes found to be significantly differentially expressed according to serum estradiol in breast carcinomas in the current study, such as AREG, GREB1, TFF1 and TFF3. Going back to our normal samples, we see that several of their genes (including AREG, GREB1, TFF1, TFF3, GATA3 and two SERPIN genes) are differentially expressed in our normal breast tissue, but did not reach statistical significance (Additional file 4). The differences observed between our study and that of Wilson and colleagues may be due to chance or to different residual confounding in the two studies. Wilson and colleagues studied the effects of estradiol treatment, which may act differently upon the breast tissue than endogenous estradiol. Normal human breast tissue transplanted into mice may also react differently to varying levels of estradiol than it does in its natural milieu in humans. The genes that were significant in the Wilson study and differentially expressed but not significant in our study (e.g. AREG, GREB1, TFF1, TFF3 and GATA3) may be associated with serum estradiol levels in normal tissues as well as in tumor tissues, where we and others have observed significant associations. 
Our study is the first to identify the expression of SCGB3A1, TLN2 and PTGS1 in normal breast tissue as significantly associated with serum estradiol levels. These findings are biologically reasonable and may have been missed in previous studies due to a lack of representative study material.
Serum estradiol associated with mammographic density in healthy women:
Serum estradiol levels were independently associated with mammographic density after controlling for age, BMI and current use of hormone therapy, and the magnitude of the association was substantial (Table 3). The high beta-value in the regression equation implies a large magnitude of impact, which supports the hypothesis that high serum estradiol levels increase mammographic density with both statistical and biological significance.
Gene expression in breast carcinomas according to serum estradiol levels:
The expression of genes found to be differentially expressed in normal breast tissue according to serum estradiol levels was examined in breast carcinomas. We found that the expression was in each case opposite to that in normal breast tissue from women with high serum estrogen (Table 1). This may be due to a lack of negative feedback on growth regulation in breast tumors. In breast cancer cell lines, estrogen induced up-regulation of positive proliferation regulators and down-regulation of anti-proliferative and pro-apoptotic genes, resulting in a net positive proliferative drive [46]. This is in line with our findings. In normal breast tissue from women with high serum estradiol, SCGB3A1, which regulates proliferation negatively, and TLN2, which prevents invasion, are up-regulated, while PTGS1, which induces local production of estradiol and thereby estradiol-stimulated proliferation, is down-regulated. All three genes are expressed so as to maintain control and regulation of the epithelial cells. In breast cancers, the expression of these genes favors growth, migration and proliferation. 
This supports the hypothesis that high serum estradiol increases the proliferative pressure in normal breasts, which leads to an activation of mechanisms counteracting this proliferative pressure. In carcinomas, growth regulation is lost, and these hormone-related growth-promoting mechanisms are turned on. Interestingly, both AREG and GREB1 were up-regulated in ER+ breast carcinomas of younger (< 45 years) compared with older (> 70 years) women in a previous publication [47]. The increased expression of these genes was proposed as a mechanism responsible for the observed increase in proliferation seen in the tumors of younger compared with older women [47]. The genes differentially expressed according to serum estradiol levels in tumors confirmed many of the findings from the Dunbier study of ER+ tumors [10]. The previously published list of genes positively correlated with serum estradiol included TFF genes and GREB1. These genes were also found significant in the analysis of all tumors in this study, although TFF1 and TFF3 did not reach statistical significance (Table 4). In addition to the previously published genes, we identified AREG, an EGFR ligand essential for breast development, as up-regulated in tumors from patients with high serum estradiol. GREB1 has previously been found to be an important estrogen-induced stimulator of growth in ER+ breast cancer cell lines [48]. AREG binds to and stimulates EGFR, and hence epithelial cell growth. The up-regulation of these two genes in breast carcinomas of women with high estradiol levels may indicate a loss of growth regulation associated with cancer development. This corresponds well with the interpretation of our findings in normal breast tissue referred to above and confirms the results indicated by the cell line studies of Frasor and colleagues [46]. These two genes are not differentially expressed between normal breast tissue and breast cancers. 
Both are, however, more highly expressed in ER+ than in ER- breast carcinomas.
Overall strengths and limitations of the study:
The method currently used for detection of serum estradiol has limited sensitivity at the lower serum levels often seen in postmenopausal women. Despite the limited sample size, we found several biologically plausible associations; however, due to limited power, there may be other associations that we could not reveal. We have included women with and without hormone therapy in the study. There may be differences in action between endogenous and exogenous estradiol that will not be revealed in this study. One important strength of this study is the unique material with normal human tissue in its natural milieu, not influenced by an adjacent tumor [49-51] or by the adipose-dominated biology that may bias studies of reduction mammoplasties.
Conclusions:
In conclusion, we report a list of genes whose expression is associated with serum estradiol levels. This list includes genes with known relations to estradiol signaling, mammary proliferation and breast carcinogenesis. All these genes were expressed differently in tumor and normal breast tissue. The gene expression in tumors resembled that in normal breast tissue from women with low serum estradiol. Associations between serum estradiol and expression in breast carcinomas confirmed previous findings and revealed new associations. The comparison of results between normal breast tissue from healthy women and breast carcinomas indicates a difference in the biological impact of estradiol in normal and cancerous breast tissue.
Supplementary Material:
Table S1: Criteria for estimation of menopausal status. A description of the different criteria used to determine menopausal status.
Table S2: Gene ontology terms for genes differentially expressed in healthy women according to serum estradiol levels. 
A listing of different gene ontology terms for genes differentially expressed in healthy women according to serum estradiol levels, with FDR reported. Table S3: Genes differentially expressed according to serum estradiol in breast carcinomas and their expression in normal breast tissue. TFF3 is represented with two different probes. Four genes differentially expressed according to serum estradiol levels, with their expression in both normal breasts and breast carcinomas. Table S4: Genes differentially expressed according to estradiol treatment in Wilson et al. and according to serum estradiol in the current study. A comparison between a previously published study and this study.
Background: High serum levels of estradiol are associated with increased risk of postmenopausal breast cancer. Little is known about the gene expression in normal breast tissue in relation to levels of circulating serum estradiol. Methods: We compared whole genome expression data of breast tissue samples with serum hormone levels using data from 79 healthy women and 64 breast cancer patients. Significance analysis of microarrays (SAM) was used to identify differentially expressed genes and multivariate linear regression was used to identify independent associations. Results: Six genes (SCGB3A1, RSPO1, TLN2, SLITRK4, DCLK1, PTGS1) were found differentially expressed according to serum estradiol levels (FDR = 0). Three of these independently predicted estradiol levels in a multivariate model: SCGB3A1 (HIN1) and TLN2 were up-regulated and PTGS1 (COX1) was down-regulated in breast samples from women with high serum estradiol. Serum estradiol, but none of the differentially expressed genes, was significantly associated with mammographic density, another strong breast cancer risk factor. In breast carcinomas, expression of GREB1 and AREG was associated with serum estradiol in all cancers and in the subgroup of estrogen receptor positive cases. Conclusions: We have identified genes associated with serum estradiol levels in normal breast tissue and in breast carcinomas. SCGB3A1 is a suggested tumor suppressor gene that inhibits cell growth and invasion and is methylated and down-regulated in many epithelial cancers. Our findings indicate this gene as an important inhibitor of breast cell proliferation in healthy women with high estradiol levels. In the breast, this gene is expressed in luminal cells only and is methylated in non-BRCA-related breast cancers. The possibility of a carcinogenic contribution of silencing of this gene for luminal, but not basal-like, cancers should be further explored. 
PTGS1 induces prostaglandin E2 (PGE2) production which in turn stimulates aromatase expression and hence increases the local production of estradiol. This is the first report studying such associations in normal breast tissue in humans.
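The multivariate step described in the abstract, testing whether several genes independently predict serum estradiol, can be sketched as an ordinary least squares fit. Everything below is simulated for illustration: the data and effect sizes are assumptions, and only the gene roles and reported directions of effect (SCGB3A1 and TLN2 up, PTGS1 down with high serum estradiol) come from the abstract.

```python
import numpy as np

# Hypothetical sketch of the multivariate linear regression step.
# All values are simulated; they are NOT study data.
rng = np.random.default_rng(0)
n = 79  # number of healthy women in the study

# Simulated (standardized) log2 expression for SCGB3A1, TLN2, PTGS1.
X = rng.normal(size=(n, 3))
assumed_beta = np.array([0.8, 0.5, -0.6])  # assumed effect sizes, not study values
estradiol = X @ assumed_beta + rng.normal(scale=0.3, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, estradiol, rcond=None)

# The fitted signs should reproduce the reported directions.
print(beta[1:])
```

With a sample of this size the fitted coefficients recover the assumed signs, which is the sense in which the three genes "independently predicted" estradiol in the multivariate model.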
Background: The influence of estradiol on breast development [1], the menopausal transition [2] and breast epithelial cells [3] is widely studied. However, little is known about the effect of serum estradiol on gene expression in normal breast tissue. For postmenopausal women, high serum estradiol levels are associated with increased risk of breast cancer [4-6]. The results are less conclusive for premenopausal women, but epidemiologic evidence indicates an increased risk from higher exposure to female hormones [7]. In estrogen receptor (ER) positive breast carcinomas, the proliferating tumor cells express ER, while in normal breast tissue the proliferating epithelial cells are ER negative (ER-) [8,9]. Both normal and malignant breast epithelial cells are influenced by estradiol, but through different mechanisms. Lacking ER, normal breast epithelial cells receive proliferative paracrine signals from ER+ fibroblasts [3]. The importance of estrogen stimuli in the proliferation of ER+ breast cancer cells is evident from the effect of anti-estrogen treatment. Previously, several studies have identified genes whose expression is regulated by estradiol in breast cancer cell lines. Recently, a study reported an association between serum levels of estradiol and gene expression of trefoil factor 1 (TFF1), growth regulation by estrogen in breast cancer 1 (GREB1), PDZ domain containing 1 (PDZK1) and progesterone receptor (PGR) in ER+ breast carcinomas [10]. Functional studies on breast cancer cell lines have described that estradiol induces expression of c-fos [11] and that exposure to physiologic doses of estradiol is necessary for malignant transformation [12]. Intratumoral levels of estrogens have also been measured and were found to correlate with tumor gene expression of estradiol-metabolizing enzymes, the estrogen receptor gene (ESR1) [13] and proliferation markers [14]. 
A recent study did, however, conclude that intratumoral estradiol levels were mainly determined by binding of estradiol to ER (associated with ESR1 expression). The intratumoral estradiol levels were not found to be associated with local estradiol production [15]. Serum estradiol levels were found to be associated with local estradiol levels in normal breast tissue of breast cancer patients in a recent study [16]. This strengthens the hypothesis that serum estradiol levels influence gene expression in breast tissue. Wilson and colleagues studied the effect of estradiol on normal human breast tissue transplanted into athymic nude mice. They identified a list of genes associated with estradiol treatment, including TFF1, AREG, SCGB2A2, GREB1 and GATA3. The normal tissues used in the xenografts were from breasts with benign breast disease and from mammoplasty reductions [17]. Studies describing associations between serum estradiol levels and gene expression of normal human breast tissue in its natural milieu are lacking. Knowledge about gene expression changes associated with high serum estradiol may reveal biological mechanisms underlying the increased risk of elevated mammographic density and of developing breast cancer seen in women with high estradiol levels. We have identified genes differentially expressed between normal breast tissue samples according to serum estradiol levels. Several genes identified in previous studies using normal breast tissue or breast carcinomas were confirmed, and additional genes were identified, making important contributions to our previous knowledge. Conclusions: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/11/332/prepub
Background: High serum levels of estradiol are associated with increased risk of postmenopausal breast cancer. Little is known about the gene expression in normal breast tissue in relation to levels of circulating serum estradiol. Methods: We compared whole genome expression data of breast tissue samples with serum hormone levels using data from 79 healthy women and 64 breast cancer patients. Significance analysis of microarrays (SAM) was used to identify differentially expressed genes and multivariate linear regression was used to identify independent associations. Results: Six genes (SCGB3A1, RSPO1, TLN2, SLITRK4, DCLK1, PTGS1) were found differentially expressed according to serum estradiol levels (FDR = 0). Three of these independently predicted estradiol levels in a multivariate model: SCGB3A1 (HIN1) and TLN2 were up-regulated and PTGS1 (COX1) was down-regulated in breast samples from women with high serum estradiol. Serum estradiol, but none of the differentially expressed genes, was significantly associated with mammographic density, another strong breast cancer risk factor. In breast carcinomas, expression of GREB1 and AREG was associated with serum estradiol in all cancers and in the subgroup of estrogen receptor positive cases. Conclusions: We have identified genes associated with serum estradiol levels in normal breast tissue and in breast carcinomas. SCGB3A1 is a suggested tumor suppressor gene that inhibits cell growth and invasion and is methylated and down-regulated in many epithelial cancers. Our findings indicate this gene as an important inhibitor of breast cell proliferation in healthy women with high estradiol levels. In the breast, this gene is expressed in luminal cells only and is methylated in non-BRCA-related breast cancers. The possibility of a carcinogenic contribution of silencing of this gene for luminal, but not basal-like, cancers should be further explored. 
PTGS1 induces prostaglandin E2 (PGE2) production which in turn stimulates aromatase expression and hence increases the local production of estradiol. This is the first report studying such associations in normal breast tissue in humans.
16,572
375
18
[ "breast", "estradiol", "serum", "serum estradiol", "genes", "normal", "tissue", "women", "expression", "levels" ]
[ "test", "test" ]
null
[CONTENT] Serum estradiol | SCGB3A1 | HIN1 | TLN2 | PTGS1 | COX1 | AREG | GREB1 | TFF | normal breast tissue | gene expression [SUMMARY]
[CONTENT] Serum estradiol | SCGB3A1 | HIN1 | TLN2 | PTGS1 | COX1 | AREG | GREB1 | TFF | normal breast tissue | gene expression [SUMMARY]
null
[CONTENT] Serum estradiol | SCGB3A1 | HIN1 | TLN2 | PTGS1 | COX1 | AREG | GREB1 | TFF | normal breast tissue | gene expression [SUMMARY]
[CONTENT] Serum estradiol | SCGB3A1 | HIN1 | TLN2 | PTGS1 | COX1 | AREG | GREB1 | TFF | normal breast tissue | gene expression [SUMMARY]
[CONTENT] Serum estradiol | SCGB3A1 | HIN1 | TLN2 | PTGS1 | COX1 | AREG | GREB1 | TFF | normal breast tissue | gene expression [SUMMARY]
[CONTENT] Adult | Aged | Breast | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Intraductal, Noninfiltrating | Cyclooxygenase 1 | Cytokines | Doublecortin-Like Kinases | Estradiol | Female | Gene Expression Profiling | Gene Expression Regulation, Neoplastic | Humans | Intracellular Signaling Peptides and Proteins | Linear Models | Membrane Proteins | Middle Aged | Multivariate Analysis | Oligonucleotide Array Sequence Analysis | Protein Serine-Threonine Kinases | Talin | Thrombospondins | Transcriptome | Tumor Suppressor Proteins [SUMMARY]
[CONTENT] Adult | Aged | Breast | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Intraductal, Noninfiltrating | Cyclooxygenase 1 | Cytokines | Doublecortin-Like Kinases | Estradiol | Female | Gene Expression Profiling | Gene Expression Regulation, Neoplastic | Humans | Intracellular Signaling Peptides and Proteins | Linear Models | Membrane Proteins | Middle Aged | Multivariate Analysis | Oligonucleotide Array Sequence Analysis | Protein Serine-Threonine Kinases | Talin | Thrombospondins | Transcriptome | Tumor Suppressor Proteins [SUMMARY]
null
[CONTENT] Adult | Aged | Breast | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Intraductal, Noninfiltrating | Cyclooxygenase 1 | Cytokines | Doublecortin-Like Kinases | Estradiol | Female | Gene Expression Profiling | Gene Expression Regulation, Neoplastic | Humans | Intracellular Signaling Peptides and Proteins | Linear Models | Membrane Proteins | Middle Aged | Multivariate Analysis | Oligonucleotide Array Sequence Analysis | Protein Serine-Threonine Kinases | Talin | Thrombospondins | Transcriptome | Tumor Suppressor Proteins [SUMMARY]
[CONTENT] Adult | Aged | Breast | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Intraductal, Noninfiltrating | Cyclooxygenase 1 | Cytokines | Doublecortin-Like Kinases | Estradiol | Female | Gene Expression Profiling | Gene Expression Regulation, Neoplastic | Humans | Intracellular Signaling Peptides and Proteins | Linear Models | Membrane Proteins | Middle Aged | Multivariate Analysis | Oligonucleotide Array Sequence Analysis | Protein Serine-Threonine Kinases | Talin | Thrombospondins | Transcriptome | Tumor Suppressor Proteins [SUMMARY]
[CONTENT] Adult | Aged | Breast | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Intraductal, Noninfiltrating | Cyclooxygenase 1 | Cytokines | Doublecortin-Like Kinases | Estradiol | Female | Gene Expression Profiling | Gene Expression Regulation, Neoplastic | Humans | Intracellular Signaling Peptides and Proteins | Linear Models | Membrane Proteins | Middle Aged | Multivariate Analysis | Oligonucleotide Array Sequence Analysis | Protein Serine-Threonine Kinases | Talin | Thrombospondins | Transcriptome | Tumor Suppressor Proteins [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] breast | estradiol | serum | serum estradiol | genes | normal | tissue | women | expression | levels [SUMMARY]
[CONTENT] breast | estradiol | serum | serum estradiol | genes | normal | tissue | women | expression | levels [SUMMARY]
null
[CONTENT] breast | estradiol | serum | serum estradiol | genes | normal | tissue | women | expression | levels [SUMMARY]
[CONTENT] breast | estradiol | serum | serum estradiol | genes | normal | tissue | women | expression | levels [SUMMARY]
[CONTENT] breast | estradiol | serum | serum estradiol | genes | normal | tissue | women | expression | levels [SUMMARY]
[CONTENT] breast | estradiol | er | cells | estradiol levels | normal | levels | expression | breast tissue | cancer [SUMMARY]
[CONTENT] rna | breast | agilent | biopsies | included | women | density | cancer | analyses | breast cancer [SUMMARY]
null
[CONTENT] breast | estradiol | breast tissue | normal | tissue | list | normal breast tissue | normal breast | expression | genes [SUMMARY]
[CONTENT] breast | estradiol | serum | genes | serum estradiol | normal | tissue | expression | women | breast tissue [SUMMARY]
[CONTENT] breast | estradiol | serum | genes | serum estradiol | normal | tissue | expression | women | breast tissue [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 79 | 64 ||| SAM [SUMMARY]
null
[CONTENT] ||| ||| ||| non-BRCA ||| ||| E2 (PGE2 ||| first [SUMMARY]
[CONTENT] ||| ||| 79 | 64 ||| SAM ||| Six | RSPO1 | SLITRK4 | DCLK1 | 0 ||| Three | TLN2 | PTGS1 ||| ||| GREB1 | AREG ||| ||| ||| ||| non-BRCA ||| ||| E2 (PGE2 ||| first [SUMMARY]
[CONTENT] ||| ||| 79 | 64 ||| SAM ||| Six | RSPO1 | SLITRK4 | DCLK1 | 0 ||| Three | TLN2 | PTGS1 ||| ||| GREB1 | AREG ||| ||| ||| ||| non-BRCA ||| ||| E2 (PGE2 ||| first [SUMMARY]
Pharmacokinetic interaction study between ligustrazine and valsartan in rats and its potential mechanism.
33355495
Ligustrazine and valsartan are commonly used drugs in the treatment of cardiac and cardiovascular disease.
CONTEXT
The pharmacokinetics of valsartan (10 mg/kg) was investigated in Sprague-Dawley rats divided into three groups of six rats each: two groups pretreated with 4 or 10 mg/kg/day ligustrazine for 10 days and one group without ligustrazine pretreatment as control. In vitro experiments in rat liver microsomes were performed to explore the effect of ligustrazine on the metabolic stability of valsartan.
MATERIALS AND METHODS
Ligustrazine changed the pharmacokinetic profile of valsartan. In the presence of 4 mg/kg ligustrazine, the AUC(0–t) (385.37 ± 93.05 versus 851.64 ± 104.26 μg/L*h), t1/2 (5.46 ± 0.93 versus 6.34 ± 1.25 h), and Cmax (62.64 ± 9.09 versus 83.87 ± 6.15 μg/L) of valsartan were significantly decreased, and the clearance rate was increased from 10.92 ± 1.521 to 25.76 ± 6.24 L/h/kg; similar changes were observed in the group given 10 mg/kg ligustrazine (p < 0.05). The metabolic stability of valsartan was also decreased by ligustrazine, as the half-life of valsartan in rat liver microsomes decreased from 37.12 ± 4.06 to 33.48 ± 3.56 min and the intrinsic clearance rate increased from 37.34 ± 3.84 to 41.40 ± 4.32 μL/min/mg protein (p < 0.05).
RESULTS
Ligustrazine promoted the metabolism of valsartan by activating CYP3A4. This interaction should be taken into account when ligustrazine and valsartan are co-administered.
DISCUSSION AND CONCLUSIONS
[ "Animals", "Cytochrome P-450 CYP3A Inducers", "Drug Interactions", "Male", "Pyrazines", "Rats", "Rats, Sprague-Dawley", "Valsartan" ]
7759250
Introduction
Ligustrazine is a bioactive ingredient extracted from Rhizoma Ligustici wallichii [Ligustici wallichii (Apiaceae)]. Ligustrazine has been widely used in the clinic for the treatment of cardiovascular disease because of its function in activating blood circulation (Wu et al. 2012; Qian et al. 2014). Moreover, ligustrazine has been demonstrated to possess antioxidant properties in animal models of ischemia-reperfusion, atherosclerosis, and cerebral vasospasm (Jiang et al. 2011; Lv et al. 2012). Commonly, patients with cardiovascular or cerebral diseases may have received ligustrazine treatment before using other drugs, or it may be used together with other drugs for the treatment of complex chronic disorders (Hockenberry 1991; Phougat et al. 2014). Valsartan is a commonly used drug in cardiac disease with relatively low absolute bioavailability (Jung et al. 2015). Valsartan is usually applied in the treatment of hypertension due to its blood pressure-lowering properties, and it is also co-administered with other drugs to make the treatment more efficient in the clinic (Cheng et al. 2001; Liu et al. 2019a). Previously, the co-administration of valsartan and quercetin was shown to exert greater cardiac protection, and this combination affected the pharmacokinetics of valsartan (Challa et al. 2013). The combination of valsartan and amlodipine has been demonstrated to control blood pressure efficiently, and the pharmacokinetics and transport of valsartan were influenced by the administration of amlodipine (Cai et al. 2011). The combination of ligustrazine and valsartan has been reported to have a protective effect against hippocampal neuronal loss in vascular dementia (Qin et al. 2011). However, the drug–drug interaction between ligustrazine and valsartan remains unclear. 
This study focused on the interaction between ligustrazine and valsartan to evaluate the effect of ligustrazine on the pharmacokinetics of valsartan, which can provide more information for the clinical co-administration of ligustrazine and valsartan.
null
null
Results
Effect of ligustrazine on the pharmacokinetics of valsartan The plasma concentration of valsartan was analyzed by LC-MS/MS (Supplementary Figure 1) and the plasma concentration–time curves of valsartan in rats with or without treatment of ligustrazine are shown in Figure 1. The pharmacokinetic parameters, including area under the curve (AUC(0–t)), half-life (t1/2), time to reach maximum concentration (Tmax), maximum concentration (Cmax), and clearance (Clz/F), are summarized in Table 1. Plasma concentration–time curves of valsartan in the presence or the absence of 4 or 10 mg/kg ligustrazine. Pharmacokinetic parameters of valsartan (10 mg/kg) with or without ligustrazine (4 or 10 mg/kg). p < 0.05. The administration of ligustrazine significantly changed the pharmacokinetic profile of valsartan. In the presence of 4 mg/kg ligustrazine, the AUC(0–t) of valsartan decreased from 851.64 ± 104.26 to 385.37 ± 93.05 μg/L/h and the Cmax was reduced from 83.87 ± 6.15 to 62.64 ± 9.09 μg/L (p < 0.05). Similarly, the oral administration of 10 mg/kg ligustrazine also dramatically decreased the AUC(0–t) (249.85 ± 41.27 versus 851.64 ± 104.26 μg/L/h) and Cmax (51.28 ± 2.67 versus 83.87 ± 6.15 μg/L). The t1/2 of valsartan was reduced from 6.34 ± 1.25 to 5.46 ± 0.93 h after co-administration of 4 mg/kg ligustrazine and to 5.15 ± 0.88 h in the presence of 10 mg/kg ligustrazine, and the difference was significant (p < 0.05). Additionally, the clearance rate of valsartan was significantly enhanced by both doses of ligustrazine (from 10.92 ± 1.521 to 25.76 ± 6.24 L/h/kg for 4 mg/kg ligustrazine, and to 39.44 ± 6.53 L/h/kg for 10 mg/kg ligustrazine, p < 0.05, Table 1). 
Effect of ligustrazine on the metabolic stability of valsartan In rat liver microsomes, the half-life of valsartan was 37.12 ± 4.06 min, which was significantly shortened to 33.48 ± 3.56 min by the administration of ligustrazine (p < 0.05). Moreover, the intrinsic clearance rate of valsartan significantly increased from 37.34 ± 3.84 to 41.40 ± 4.32 μL/min/mg protein, indicating a significant decrease in the metabolic stability of valsartan after co-administration with ligustrazine (p < 0.05).
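The noncompartmental parameters reported above (AUC(0–t), Cmax, Tmax, CL/F) can be sketched from a concentration–time profile with the linear trapezoidal rule. The concentration values below are invented for illustration and are not the study data; only the sampling times mirror the schedule in the Methods, and CL/F is computed as Dose/AUC(0–t) under that assumption.

```python
import numpy as np

# Sketch of a noncompartmental analysis: AUC(0-t) by the linear trapezoidal
# rule, Cmax, Tmax, and apparent clearance CL/F = Dose / AUC(0-t).
# Concentrations are hypothetical, NOT the study's measurements.
t = np.array([0.083, 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 24])  # h (study schedule)
c = np.array([20, 45, 70, 84, 75, 60, 48, 30, 18, 8, 1.5])  # ug/L (invented)

auc_0t = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))  # ug/L*h, trapezoidal rule
cmax = c.max()                                      # ug/L
tmax = t[c.argmax()]                                # h
dose = 10 * 1000                                    # 10 mg/kg expressed as ug/kg
cl_f = dose / auc_0t                                # L/h/kg

print(f"AUC(0-t)={auc_0t:.1f} ug/L*h, Cmax={cmax} ug/L, "
      f"Tmax={tmax} h, CL/F={cl_f:.2f} L/h/kg")
```

The reciprocal relationship between AUC and CL/F is why the roughly halved AUC(0–t) in the 4 mg/kg ligustrazine group corresponds to a roughly doubled clearance in Table 1.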
Conclusions
The co-administration of valsartan and ligustrazine induced a drug–drug interaction, which decreased the systemic exposure of valsartan by activating CYP3A4. Therefore, the doses of ligustrazine and valsartan should be carefully considered in the clinic when they are co-administered.
[ "Drugs and chemicals", "Animals", "Preparation of plasma samples", "LC-MS/MS condition", "Pharmacokinetic study", "The metabolic stability of valsartan in rat liver microsomes", "Statistical analysis", "Effect of ligustrazine on the pharmacokinetics of valsartan", "Effect of ligustrazine on the metabolic stability of valsartan" ]
[ "Standards of valsartan (purity >98%), and ligustrazine (purity >98%) were purchased from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). β-NADPH was obtained from Sigma Chemical Co. (St. Louis, MO). Pooled RLM were purchased from BD Biosciences Discovery Labware (Woburn, MA). Acetonitrile and methanol were purchased from Fisher Scientific (Fair Lawn, NJ). Formic acid was purchased from Anaqua Chemicals Supply Inc. Limited (Houston, TX). Ultrapure water was prepared with a Milli-Q water purification system (Millipore, Billerica, MA). All other chemicals were of analytical grade or better.", "Experiments were performed with male Sprague–Dawley rats weighing 230–250 g supplied by Sino-British Sippr/BK Lab Animal Ltd (Shanghai, China). The rats were housed in an air-conditioned animal quarter at 22 ± 2 °C and 50 ± 10% relative humidity. The animals were acclimatized to the facilities for 5 days, during this period, animals had free access to food and water. Before each experiment, the rats fasted with free access to water for 12 h. The experiment was approved by the Animal Care and Use Committee of The First Affiliated Hospital of Qiqihar Medical University.", "Plasma sample (100 μL) was mixed with 20 μL methanol and 180 μL internal standard methanol solution and vortexed for 60 s. After centrifugation at 12,000 rpm for 10 min, the supernatant was removed into an injection vial and 5 μL aliquot was injected into LC/MS-MS system for analysis.", "Chromatographic analysis was performed by using an Agilent 1290 series liquid chromatography system (Agilent Technologies, Palo Alto, CA) as previously reported (Deng et al. 2017; Liu et al. 2019b). The sample was separated on Waters Xbridge C18 column (100 × 3.0 mm, i.d.; 3.0 μm, Waters Corporation, Milford, MA) and eluted with an isocratic mobile phase: solvent A (Water containing 0.1% formic acid) – solvent B (acetonitrile) (65:35, v/v). 
The column temperature was set at 25 °C, the flow rate at 0.4 mL/min, and the injection volume at 5 μL. Mass spectrometric detection was carried out on an Agilent 6460 triple-quadrupole mass spectrometer (Agilent Technologies, Palo Alto, CA) with Turbo Ion spray, which is connected to the liquid chromatography system. The mass scan mode was positive MRM. The precursor ion and product ion were m/z 434.2→179.0 for valsartan and m/z 357.9→321.8 for methyclothiazide as IS according to the previous study (Wei et al. 2019). The collision energy for valsartan and methyclothiazide was 30 and 20 eV, respectively. The MS/MS conditions were optimized as follows: fragmentor, 110 V; capillary voltage, 3.5 kV; nozzle voltage, 500 V; nebulizer gas pressure (N2), 40 psig; drying gas flow (N2), 10 L/min; gas temperature, 350 °C; sheath gas temperature, 400 °C; sheath gas flow, 11 L/min. Agilent MassHunter B.07 software was used for the control of the equipment and data acquisition. Agilent Quantitative analysis software was used for data analysis.", "The rats were randomly divided into three groups of six animals each, including a 10 mg/kg valsartan only group (A), a 10 mg/kg valsartan + 4 mg/kg ligustrazine group (B), and a 10 mg/kg valsartan + 10 mg/kg ligustrazine group (C). Ligustrazine was administered for 10 days prior to valsartan. Blood samples (0.25 mL) were collected into a heparinized tube via the oculi choroidal vein before drug administration and at 0.083, 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, and 24 h after drug administration. After centrifuging at 3500 rpm for 10 min, the supernatant was obtained and frozen at −80 °C until analysis.", "According to previous studies, the reaction mixture was pre-incubated at 37 °C for 5 min, after which 1 μM valsartan was added (Qi et al. 2014; Li et al. 2016). 
For the effect of ligustrazine on the metabolic stability of valsartan, ligustrazine was added into rat liver microsomes and pre-incubated for 30 min at 37 °C, and then valsartan was added. After incubating for 0, 1, 3, 5, 15, 30, and 60 min, 30 μL samples were collected from the incubation mixture and the reaction was terminated by ice-cold acetonitrile. Then samples were prepared according to the above methods and analyzed by LC-MS/MS.\nThe half-life (t1/2) in vitro was obtained using the equation t1/2 = 0.693/k. V (μL/mg) = volume of incubation (μL)/protein in the incubation (mg); Intrinsic clearance (Clint) (μL/min/mg protein) = V × 0.693/t1/2.", "All data were represented as mean ± SD obtained from triplicate experiments and analyzed with one-way analysis of variance (ANOVA) with SPSS 20.0 (SPSS, Inc., Chicago, IL). The pharmacokinetic parameters of valsartan were obtained with DAS 3.0 pharmacokinetic software (Chinese Pharmacological Association, Anhui, China). Differences were considered to be statistically significant when p < 0.05.", "The plasma concentration of valsartan was analyzed by LC-MS/MS (Supplementary Figure 1) and the plasma concentration–time curves of valsartan in rats with or without treatment of ligustrazine are shown in Figure 1. The pharmacokinetic parameters, including area under the curve (AUC(0–t)), half-life (t1/2), time to reach maximum concentration (Tmax), maximum concentration (Cmax), and clearance (Clz/F), are summarized in Table 1.\nPlasma concentration–time curves of valsartan in the presence or the absence of 4 or 10 mg/kg ligustrazine.\nPharmacokinetic parameters of valsartan (10 mg/kg) with or without ligustrazine (4 or 10 mg/kg).\np < 0.05.\nThe administration of ligustrazine significantly changed the pharmacokinetic profile of valsartan. In the presence of 4 mg/kg ligustrazine, the AUC(0–t) of valsartan decreased from 851.64 ± 104.26 to 385.37 ± 93.05 μg/L/h and the Cmax was reduced from 83.87 ± 6.15 to 62.64 ± 9.09 μg/L (p < 0.05). 
Similarly, the oral administration of 10 mg/kg ligustrazine also dramatically decreased the AUC(0–t) (249.85 ± 41.27 versus 851.64 ± 104.26 μg/L/h) and Cmax (51.28 ± 2.67 versus 83.87 ± 6.15 μg/L). The t1/2 of valsartan was reduced from 6.34 ± 1.25 to 5.46 ± 0.93 h after co-administration of 4 mg/kg ligustrazine and to 5.15 ± 0.88 h in the presence of 10 mg/kg ligustrazine, and the difference was significant (p < 0.05). Additionally, the clearance rate of valsartan was significantly enhanced by both doses of ligustrazine (from 10.92 ± 1.521 to 25.76 ± 6.24 L/h/kg for 4 mg/kg ligustrazine, and to 39.44 ± 6.53 L/h/kg for 10 mg/kg ligustrazine, p < 0.05, Table 1).", "In rat liver microsomes, the obtained half-life of valsartan was 37.12 ± 4.06 min, which was significantly shortened to 33.48 ± 3.56 min after the administration of ligustrazine (p < 0.05). Moreover, the intrinsic clearance rate of valsartan significantly increased from 37.34 ± 3.84 to 41.40 ± 4.32 μL/min/mg protein by ligustrazine, indicating a significant decrease in the metabolic stability of valsartan after the co-administration with ligustrazine (p < 0.05)." ]
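The in vitro half-life and intrinsic clearance formulas quoted in the Methods (t1/2 = 0.693/k; Clint = V × 0.693/t1/2, where k is the slope of ln(% remaining) versus time) can be sketched as follows. The remaining-percentage data and the incubation volume/protein amount are assumptions for illustration, not the study's measurements.

```python
import numpy as np

# Sketch of the in vitro metabolic-stability calculation from the Methods.
# Remaining-fraction data and incubation conditions are invented assumptions.
t = np.array([0, 1, 3, 5, 15, 30, 60])       # min (study sampling times)
remaining = 100 * np.exp(-0.0187 * t)        # % parent drug remaining, simulated

# k is the negative slope of ln(% remaining) versus time.
k = -np.polyfit(t, np.log(remaining), 1)[0]  # first-order elimination constant, 1/min
t_half = 0.693 / k                           # min

# Assumed incubation: 0.5 mg microsomal protein in 500 uL.
v = 500 / 0.5                                # uL per mg protein
cl_int = v * 0.693 / t_half                  # uL/min/mg protein

print(f"t1/2={t_half:.1f} min, Clint={cl_int:.1f} uL/min/mg protein")
```

A shorter in vitro half-life directly implies a larger Clint, which is the relationship behind the reported shift from 37.12 to 33.48 min alongside the rise in intrinsic clearance.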
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Drugs and chemicals", "Animals", "Preparation of plasma samples", "LC-MS/MS condition", "Pharmacokinetic study", "The metabolic stability of valsartan in rat liver microsomes", "Statistical analysis", "Results", "Effect of ligustrazine on the pharmacokinetics of valsartan", "Effect of ligustrazine on the metabolic stability of valsartan", "Discussion", "Conclusions", "Supplementary Material" ]
[ "Ligustrazine is a bioactive ingredient extracted from Rhizoma Ligustici wallichii [Ligustici wallichii (Apiaceae)]. Ligustrazine has been widely used in the clinic for the treatment of cardiovascular disease because of its function in activating blood circulation (Wu et al. 2012; Qian et al. 2014). Moreover, ligustrazine has been demonstrated to possess antioxidant properties in animal models of ischemic-reperfusion, atherosclerosis, and cerebral vasospasm (Jiang et al. 2011; Lv et al. 2012). Commonly, patients with cardiovascular or cerebral diseases may have received ligustrazine treatment before using other drugs, or it may be used with other drugs together for the treatment of complex chronic disorders (Hockenberry 1991; Phougat et al. 2014).\nValsartan is a commonly used drug in cardiac disease with relatively low absolute bioavailability (Jung et al. 2015). Valsartan is usually applied in the treatment of hypertension due to its properties of lowing blood pressure, and it also co-administrated with other drugs to make the treatment more efficient in the clinic (Cheng et al. 2001; Liu et al. 2019a). Previously, the co-administration of valsartan and quercetin could exert greater cardiac protection and this kind of combination affected the pharmacokinetics of valsartan (Challa et al. 2013). The combination of valsartan and amlodipine has been demonstrated to control blood pressure efficiently and the pharmacokinetic and transport of valsartan were influenced by the administration of amlodipine (Cai et al. 2011). The combination of ligustrazine and valsartan has been reported to have a protective effect on hippocampal neuronal loss with vascular dementia (Qin et al. 2011). 
However, the drug–drug interaction between ligustrazine and valsartan remained unclear.\nThis study focussed on the interaction between ligustrazine and valsartan to evaluate the effect of ligustrazine on the pharmacokinetics of valsartan, which can provide more information for the clinical co-administration of ligustrazine and valsartan.", " Drugs and chemicals Standards of valsartan (purity >98%) and ligustrazine (purity >98%) were purchased from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). β-NADPH was obtained from Sigma Chemical Co. (St. Louis, MO). Pooled RLM were purchased from BD Biosciences Discovery Labware (Woburn, MA). Acetonitrile and methanol were purchased from Fisher Scientific (Fair Lawn, NJ). Formic acid was purchased from Anaqua Chemicals Supply Inc. Limited (Houston, TX). Ultrapure water was prepared with a Milli-Q water purification system (Millipore, Billerica, MA). All other chemicals were of analytical grade or better.\n Animals Experiments were performed with male Sprague–Dawley rats weighing 230–250 g supplied by Sino-British Sippr/BK Lab Animal Ltd (Shanghai, China). The rats were housed in an air-conditioned animal quarter at 22 ± 2 °C and 50 ± 10% relative humidity. 
The animals were acclimatized to the facilities for 5 days; during this period, animals had free access to food and water. Before each experiment, the rats fasted with free access to water for 12 h. The experiment was approved by the Animal Care and Use Committee of The First Affiliated Hospital of Qiqihar Medical University.\n Preparation of plasma samples Plasma sample (100 μL) was mixed with 20 μL methanol and 180 μL internal standard methanol solution and vortexed for 60 s. After centrifugation at 12,000 rpm for 10 min, the supernatant was removed into an injection vial and a 5 μL aliquot was injected into the LC-MS/MS system for analysis.\n LC-MS/MS condition Chromatographic analysis was performed by using an Agilent 1290 series liquid chromatography system (Agilent Technologies, Palo Alto, CA) as previously reported (Deng et al. 2017; Liu et al. 2019b). The sample was separated on a Waters Xbridge C18 column (100 × 3.0 mm, i.d.; 3.0 μm, Waters Corporation, Milford, MA) and eluted with an isocratic mobile phase: solvent A (water containing 0.1% formic acid) – solvent B (acetonitrile) (65:35, v/v). 
The column temperature was set at 25 °C, the flow rate at 0.4 mL/min, and the injection volume at 5 μL. Mass spectrometric detection was carried out on an Agilent 6460 triple-quadrupole mass spectrometer (Agilent Technologies, Palo Alto, CA) with Turbo Ion spray, connected to the liquid chromatography system. The mass scan mode was MRM positive. The precursor ion and product ion were m/z 434.2→179.0 for valsartan and m/z 357.9→321.8 for methyclothiazide as IS according to the previous study (Wei et al. 2019). The collision energy for valsartan and methyclothiazide was 30 and 20 eV, respectively. The MS/MS conditions were optimized as follows: fragmentor, 110 V; capillary voltage, 3.5 kV; nozzle voltage, 500 V; nebulizer gas pressure (N2), 40 psig; drying gas flow (N2), 10 L/min; gas temperature, 350 °C; sheath gas temperature, 400 °C; sheath gas flow, 11 L/min. Agilent MassHunter B.07 software was used for the control of the equipment and data acquisition. Agilent Quantitative analysis software was used for data analysis.\n Pharmacokinetic study The rats were randomly divided into three groups of six animals each, including a 10 mg/kg valsartan only group (A), a 10 mg/kg valsartan + 4 mg/kg ligustrazine group (B), and a 10 mg/kg valsartan + 10 mg/kg ligustrazine group (C). Ligustrazine was administered for 10 days before valsartan administration. Blood samples (0.25 mL) were collected into a heparinized tube via the oculi choroidal vein before drug administration and at 0.083, 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, and 24 h after drug administration. After centrifuging at 3500 rpm for 10 min, the supernatant was obtained and frozen at −80 °C until analysis.\n The metabolic stability of valsartan in rat liver microsomes According to previous studies, the reaction mixture was pre-incubated at 37 °C for 5 min, after which 1 μM valsartan was added (Qi et al. 2014; Li et al. 2016). For the effect of ligustrazine on the metabolic stability of valsartan, ligustrazine was added into rat liver microsomes and pre-incubated for 30 min at 37 °C, and then valsartan was added. After incubating for 0, 1, 3, 5, 15, 30, and 60 min, 30 μL samples were collected from the incubation mixture and the reaction was terminated with ice-cold acetonitrile. Then samples were prepared according to the above methods and analyzed by LC-MS/MS.\nThe half-life (t1/2) in vitro was obtained using the equation: t1/2 = 0.693/k. V (μL/mg) = volume of incubation (μL)/protein in the incubation (mg); intrinsic clearance (Clint) (μL/min/mg protein) = V × 0.693/t1/2.\n Statistical analysis All data were represented as mean ± SD obtained from triplicate experiments and analyzed with one-way analysis of variance (ANOVA) with SPSS 20.0 (SPSS, Inc., Chicago, IL). The pharmacokinetic parameters of valsartan were obtained with DAS 3.0 pharmacokinetic software (Chinese Pharmacological Association, Anhui, China). Differences were considered to be statistically significant when p < 0.05.", "Standards of valsartan (purity >98%) and ligustrazine (purity >98%) were purchased from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). β-NADPH was obtained from Sigma Chemical Co. (St. Louis, MO). Pooled RLM were purchased from BD Biosciences Discovery Labware (Woburn, MA). Acetonitrile and methanol were purchased from Fisher Scientific (Fair Lawn, NJ). Formic acid was purchased from Anaqua Chemicals Supply Inc. Limited (Houston, TX). Ultrapure water was prepared with a Milli-Q water purification system (Millipore, Billerica, MA). All other chemicals were of analytical grade or better.", "Experiments were performed with male Sprague–Dawley rats weighing 230–250 g supplied by Sino-British Sippr/BK Lab Animal Ltd (Shanghai, China). The rats were housed in an air-conditioned animal quarter at 22 ± 2 °C and 50 ± 10% relative humidity. The animals were acclimatized to the facilities for 5 days; during this period, animals had free access to food and water. Before each experiment, the rats fasted with free access to water for 12 h. The experiment was approved by the Animal Care and Use Committee of The First Affiliated Hospital of Qiqihar Medical University.", "Plasma sample (100 μL) was mixed with 20 μL methanol and 180 μL internal standard methanol solution and vortexed for 60 s. 
After centrifugation at 12,000 rpm for 10 min, the supernatant was removed into an injection vial and a 5 μL aliquot was injected into the LC-MS/MS system for analysis.", "Chromatographic analysis was performed by using an Agilent 1290 series liquid chromatography system (Agilent Technologies, Palo Alto, CA) as previously reported (Deng et al. 2017; Liu et al. 2019b). The sample was separated on a Waters Xbridge C18 column (100 × 3.0 mm, i.d.; 3.0 μm, Waters Corporation, Milford, MA) and eluted with an isocratic mobile phase: solvent A (water containing 0.1% formic acid) – solvent B (acetonitrile) (65:35, v/v). The column temperature was set at 25 °C, the flow rate at 0.4 mL/min, and the injection volume at 5 μL. Mass spectrometric detection was carried out on an Agilent 6460 triple-quadrupole mass spectrometer (Agilent Technologies, Palo Alto, CA) with Turbo Ion spray, connected to the liquid chromatography system. The mass scan mode was MRM positive. The precursor ion and product ion were m/z 434.2→179.0 for valsartan and m/z 357.9→321.8 for methyclothiazide as IS according to the previous study (Wei et al. 2019). The collision energy for valsartan and methyclothiazide was 30 and 20 eV, respectively. The MS/MS conditions were optimized as follows: fragmentor, 110 V; capillary voltage, 3.5 kV; nozzle voltage, 500 V; nebulizer gas pressure (N2), 40 psig; drying gas flow (N2), 10 L/min; gas temperature, 350 °C; sheath gas temperature, 400 °C; sheath gas flow, 11 L/min. Agilent MassHunter B.07 software was used for the control of the equipment and data acquisition. Agilent Quantitative analysis software was used for data analysis.", "The rats were randomly divided into three groups of six animals each, including a 10 mg/kg valsartan only group (A), a 10 mg/kg valsartan + 4 mg/kg ligustrazine group (B), and a 10 mg/kg valsartan + 10 mg/kg ligustrazine group (C). Ligustrazine was administered for 10 days before valsartan administration. 
Blood samples (0.25 mL) were collected into a heparinized tube via the oculi choroidal vein before drug administration and at 0.083, 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, and 24 h after drug administration. After centrifuging at 3500 rpm for 10 min, the supernatant was obtained and frozen at −80 °C until analysis.", "According to previous studies, the reaction mixture was pre-incubated at 37 °C for 5 min, after which 1 μM valsartan was added (Qi et al. 2014; Li et al. 2016). For the effect of ligustrazine on the metabolic stability of valsartan, ligustrazine was added into rat liver microsomes and pre-incubated for 30 min at 37 °C, and then valsartan was added. After incubating for 0, 1, 3, 5, 15, 30, and 60 min, 30 μL samples were collected from the incubation mixture and the reaction was terminated with ice-cold acetonitrile. Then samples were prepared according to the above methods and analyzed by LC-MS/MS.\nThe half-life (t1/2) in vitro was obtained using the equation: t1/2 = 0.693/k. V (μL/mg) = volume of incubation (μL)/protein in the incubation (mg); intrinsic clearance (Clint) (μL/min/mg protein) = V × 0.693/t1/2.", "All data were represented as mean ± SD obtained from triplicate experiments and analyzed with one-way analysis of variance (ANOVA) with SPSS 20.0 (SPSS, Inc., Chicago, IL). The pharmacokinetic parameters of valsartan were obtained with DAS 3.0 pharmacokinetic software (Chinese Pharmacological Association, Anhui, China). Differences were considered to be statistically significant when p < 0.05.", " Effect of ligustrazine on the pharmacokinetics of valsartan The plasma concentration of valsartan was analyzed by LC-MS/MS (Supplementary Figure 1) and the plasma concentration–time curves of valsartan in rats with or without treatment of ligustrazine are shown in Figure 1. 
The pharmacokinetic parameters, including area under curves (AUC(0–t)), half-life (t1/2), time to reach maximum concentration (Tmax), maximum concentration (Cmax), and clearance (Clz/F), are summarized in Table 1.\nPlasma concentration–time curves of valsartan in the presence or the absence of 4 or 10 mg/kg ligustrazine.\nPharmacokinetic parameters of valsartan (10 mg/kg) with or without ligustrazine (4 or 10 mg/kg).\np < 0.05.\nThe administration of ligustrazine significantly changed the pharmacokinetic profile of valsartan. In the presence of 4 mg/kg ligustrazine, the AUC(0–t) of valsartan decreased from 851.64 ± 104.26 to 385.37 ± 93.05 μg/L/h and the Cmax was reduced from 83.87 ± 6.15 to 62.64 ± 9.09 μg/L (p < 0.05). Similarly, the oral administration of 10 mg/kg ligustrazine also dramatically decreased the AUC(0–t) (249.85 ± 41.27 versus 851.64 ± 104.26 μg/L/h) and Cmax (51.28 ± 2.67 versus 83.87 ± 6.15 μg/L). The t1/2 of valsartan was reduced from 6.34 ± 1.25 to 5.46 ± 0.93 h after co-administration of 4 mg/kg ligustrazine and to 5.15 ± 0.88 h in the presence of 10 mg/kg ligustrazine, and the difference was significant (p < 0.05). Additionally, the clearance rate of valsartan was significantly enhanced by both doses of ligustrazine (from 10.92 ± 1.521 to 25.76 ± 6.24 L/h/kg for 4 mg/kg ligustrazine, and to 39.44 ± 6.53 L/h/kg for 10 mg/kg ligustrazine, p < 0.05, Table 1).\n Effect of ligustrazine on the metabolic stability of valsartan In rat liver microsomes, the half-life of valsartan was 37.12 ± 4.06 min, which was significantly shortened to 33.48 ± 3.56 min after the administration of ligustrazine (p < 0.05). Moreover, the intrinsic clearance rate of valsartan was significantly increased from 37.34 ± 3.84 to 41.40 ± 4.32 μL/min/mg protein by ligustrazine, indicating a significant decrease in the metabolic stability of valsartan after co-administration with ligustrazine (p < 0.05).", "The plasma concentration of valsartan was analyzed by LC-MS/MS (Supplementary Figure 1) and the plasma concentration–time curves of valsartan in rats with or without treatment of ligustrazine are shown in Figure 1. The pharmacokinetic parameters, including area under curves (AUC(0–t)), half-life (t1/2), time to reach maximum concentration (Tmax), maximum concentration (Cmax), and clearance (Clz/F), are summarized in Table 1.\nPlasma concentration–time curves of valsartan in the presence or the absence of 4 or 10 mg/kg ligustrazine.\nPharmacokinetic parameters of valsartan (10 mg/kg) with or without ligustrazine (4 or 10 mg/kg).\np < 0.05.\nThe administration of ligustrazine significantly changed the pharmacokinetic profile of valsartan. In the presence of 4 mg/kg ligustrazine, the AUC(0–t) of valsartan decreased from 851.64 ± 104.26 to 385.37 ± 93.05 μg/L/h and the Cmax was reduced from 83.87 ± 6.15 to 62.64 ± 9.09 μg/L (p < 0.05). Similarly, the oral administration of 10 mg/kg ligustrazine also dramatically decreased the AUC(0–t) (249.85 ± 41.27 versus 851.64 ± 104.26 μg/L/h) and Cmax (51.28 ± 2.67 versus 83.87 ± 6.15 μg/L). 
The t1/2 of valsartan was reduced from 6.34 ± 1.25 to 5.46 ± 0.93 h after co-administration of 4 mg/kg ligustrazine and to 5.15 ± 0.88 h in the presence of 10 mg/kg ligustrazine, and the difference was significant (p < 0.05). Additionally, the clearance rate of valsartan was significantly enhanced by both doses of ligustrazine (from 10.92 ± 1.521 to 25.76 ± 6.24 L/h/kg for 4 mg/kg ligustrazine, and to 39.44 ± 6.53 L/h/kg for 10 mg/kg ligustrazine, p < 0.05, Table 1).", "In rat liver microsomes, the half-life of valsartan was 37.12 ± 4.06 min, which was significantly shortened to 33.48 ± 3.56 min after the administration of ligustrazine (p < 0.05). Moreover, the intrinsic clearance rate of valsartan was significantly increased from 37.34 ± 3.84 to 41.40 ± 4.32 μL/min/mg protein by ligustrazine, indicating a significant decrease in the metabolic stability of valsartan after co-administration with ligustrazine (p < 0.05).", "Both ligustrazine and valsartan are widely used in the therapy of cardiac and cardiovascular disease. A combination of ligustrazine and valsartan can exert protective effects against hippocampal neuronal loss in vascular dementia (Qin et al. 2011). Previous studies have reported that the co-administration of different drugs can cause adverse interactions through the induction or inhibition of drug metabolism. For example, puerarin could inhibit the metabolism of triptolide and increase the plasma concentration of triptolide, which would increase the toxicity of triptolide (Wang et al. 2019). Therefore, it is necessary to investigate the interaction between different drugs.\nThe co-administration of ligustrazine and valsartan investigated in this study indicated that ligustrazine promotes the metabolism of valsartan. Co-administration of 4 or 10 mg/kg ligustrazine and 10 mg/kg valsartan significantly reduced the AUC(0–t), t1/2, and Cmax of valsartan in rats and increased the clearance rate of valsartan. 
In the in vitro experiments, the metabolic stability of valsartan in rat liver microsomes was decreased by ligustrazine, with an increased intrinsic clearance rate and a decreased half-life. According to previous studies, the metabolism of valsartan is mainly mediated by CYP3A4, an important enzyme that participates in the phase-I metabolism of various drugs in the liver (Nakashima et al. 2005; Manikandan and Nagini 2018). Ligustrazine has also been demonstrated to be an inducer of CYP3A4 (Zhang et al. 2014). The activity of CYP3A4 plays a vital role in the drug–drug interaction between valsartan and many other co-administered drugs, such as quercetin (Challa et al. 2013). CYP3A4 also mediates interactions between other drug pairs. For example, the co-administration of glycyrrhizin and nobiletin led to the enhanced metabolism of nobiletin via the induction of CYP3A4 activity (Wang et al. 2020). Therefore, it is speculated that the effect of ligustrazine on the pharmacokinetics of valsartan was a result of the enhanced activity of CYP3A4. However, the concentration of drugs during a drug–drug interaction is an important factor. The inducing effect of ligustrazine on CYP3A4 activity was reported at a ligustrazine concentration of 20 μM (Zhang et al. 2014), which is much higher than the dosage used in this study. Therefore, it is necessary to validate the effect of ligustrazine on the activity of CYPs at different concentrations.\nThis study provides direct in vivo and in vitro evidence of the promoting effect of ligustrazine on the metabolism of valsartan. The interaction between these two drugs in humans needs to be explored with experimental models closer to human metabolism. Moreover, the effect of ligustrazine on the pharmacokinetics of valsartan likely resulted from the induction of CYP3A4. 
Hence, valsartan should be used carefully with drugs that affect the activity of CYP3A4.", "The co-administration of valsartan and ligustrazine induced a drug–drug interaction that decreased the systemic exposure of valsartan by activating CYP3A4. Therefore, the doses of ligustrazine and valsartan should be carefully considered in the clinic when they are co-administered.", "Click here for additional data file." ]
[ "intro", "materials", null, null, null, null, null, null, null, "results", null, null, "discussion", "conclusions", "supplementary-material" ]
[ "Metabolic stability", "CYP3A4", "metabolism", "drug–drug interaction" ]
Introduction: Ligustrazine is a bioactive ingredient extracted from Rhizoma Ligustici wallichii [Ligustici wallichii (Apiaceae)]. Ligustrazine has been widely used in the clinic for the treatment of cardiovascular disease because of its function in activating blood circulation (Wu et al. 2012; Qian et al. 2014). Moreover, ligustrazine has been demonstrated to possess antioxidant properties in animal models of ischemic-reperfusion, atherosclerosis, and cerebral vasospasm (Jiang et al. 2011; Lv et al. 2012). Commonly, patients with cardiovascular or cerebral diseases may have received ligustrazine treatment before using other drugs, or it may be used together with other drugs for the treatment of complex chronic disorders (Hockenberry 1991; Phougat et al. 2014). Valsartan is a commonly used drug in cardiac disease with relatively low absolute bioavailability (Jung et al. 2015). Valsartan is usually applied in the treatment of hypertension due to its blood pressure-lowering properties, and it is also co-administered with other drugs to make treatment more efficient in the clinic (Cheng et al. 2001; Liu et al. 2019a). Previously, co-administration of valsartan and quercetin was shown to exert greater cardiac protection, and this combination affected the pharmacokinetics of valsartan (Challa et al. 2013). The combination of valsartan and amlodipine has been demonstrated to control blood pressure efficiently, and the pharmacokinetics and transport of valsartan were influenced by the administration of amlodipine (Cai et al. 2011). The combination of ligustrazine and valsartan has been reported to have a protective effect against hippocampal neuronal loss in vascular dementia (Qin et al. 2011). However, the drug–drug interaction between ligustrazine and valsartan remained unclear. 
This study focussed on the interaction between ligustrazine and valsartan to evaluate the effect of ligustrazine on the pharmacokinetics of valsartan, which can provide more information for the clinical co-administration of ligustrazine and valsartan. Materials and methods: Drugs and chemicals Standards of valsartan (purity >98%) and ligustrazine (purity >98%) were purchased from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). β-NADPH was obtained from Sigma Chemical Co. (St. Louis, MO). Pooled RLM were purchased from BD Biosciences Discovery Labware (Woburn, MA). Acetonitrile and methanol were purchased from Fisher Scientific (Fair Lawn, NJ). Formic acid was purchased from Anaqua Chemicals Supply Inc. Limited (Houston, TX). Ultrapure water was prepared with a Milli-Q water purification system (Millipore, Billerica, MA). All other chemicals were of analytical grade or better. Animals Experiments were performed with male Sprague–Dawley rats weighing 230–250 g supplied by Sino-British Sippr/BK Lab Animal Ltd (Shanghai, China). The rats were housed in an air-conditioned animal quarter at 22 ± 2 °C and 50 ± 10% relative humidity. The animals were acclimatized to the facilities for 5 days; during this period, animals had free access to food and water. 
Before each experiment, the rats fasted with free access to water for 12 h. The experiment was approved by the Animal Care and Use Committee of The First Affiliated Hospital of Qiqihar Medical University. Preparation of plasma samples Plasma sample (100 μL) was mixed with 20 μL methanol and 180 μL internal standard methanol solution and vortexed for 60 s. After centrifugation at 12,000 rpm for 10 min, the supernatant was removed into an injection vial and a 5 μL aliquot was injected into the LC-MS/MS system for analysis. LC-MS/MS condition Chromatographic analysis was performed by using an Agilent 1290 series liquid chromatography system (Agilent Technologies, Palo Alto, CA) as previously reported (Deng et al. 2017; Liu et al. 2019b). The sample was separated on a Waters Xbridge C18 column (100 × 3.0 mm, i.d.; 3.0 μm, Waters Corporation, Milford, MA) and eluted with an isocratic mobile phase: solvent A (water containing 0.1% formic acid) – solvent B (acetonitrile) (65:35, v/v). The column temperature was set at 25 °C, the flow rate at 0.4 mL/min, and the injection volume at 5 μL. 
Mass spectrometric detection was carried out on an Agilent 6460 triple-quadrupole mass spectrometer (Agilent Technologies, Palo Alto, CA) with Turbo Ion spray, connected to the liquid chromatography system. The mass scan mode was MRM positive. The precursor ion and product ion were m/z 434.2→179.0 for valsartan and m/z 357.9→321.8 for methyclothiazide as IS according to the previous study (Wei et al. 2019). The collision energy for valsartan and methyclothiazide was 30 and 20 eV, respectively. The MS/MS conditions were optimized as follows: fragmentor, 110 V; capillary voltage, 3.5 kV; nozzle voltage, 500 V; nebulizer gas pressure (N2), 40 psig; drying gas flow (N2), 10 L/min; gas temperature, 350 °C; sheath gas temperature, 400 °C; sheath gas flow, 11 L/min. Agilent MassHunter B.07 software was used for the control of the equipment and data acquisition. Agilent Quantitative analysis software was used for data analysis. Pharmacokinetic study The rats were randomly divided into three groups of six animals each, including a 10 mg/kg valsartan only group (A), a 10 mg/kg valsartan + 4 mg/kg ligustrazine group (B), and a 10 mg/kg valsartan + 10 mg/kg ligustrazine group (C). Ligustrazine was administered for 10 days before valsartan administration. Blood samples (0.25 mL) were collected into a heparinized tube via the oculi choroidal vein before drug administration and at 0.083, 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, and 24 h after drug administration. After centrifuging at 3500 rpm for 10 min, the supernatant was obtained and frozen at −80 °C until analysis. The metabolic stability of valsartan in rat liver microsomes According to previous studies, the reaction mixture was pre-incubated at 37 °C for 5 min, after which 1 μM valsartan was added (Qi et al. 2014; Li et al. 2016). 
For the effect of ligustrazine on the metabolic stability of valsartan, ligustrazine was added into rat liver microsomes and pre-incubated for 30 min at 37 °C, and then valsartan was added. After incubating for 0, 1, 3, 5, 15, 30, and 60 min, 30 μL samples were collected from volumes and the reaction was terminated by ice-cold acetonitrile. Then samples were prepared according to the above methods and analyzed by LC-MS/MS. The half-life (t1/2) in vitro was obtained using equation: t1/2 = 0.693/k. V (μL/mg) = volume of incubation (μL)/protein in the incubation (mg); Intrinsic clearance (Clint) (μL/min/mg protein) = V × 0.693/t1/2. According to previous studies, the reaction mixture was pre-incubated at 37 °C for 5 min, after that 1 μM valsartan was added (Qi et al. 2014; Li et al. 2016). For the effect of ligustrazine on the metabolic stability of valsartan, ligustrazine was added into rat liver microsomes and pre-incubated for 30 min at 37 °C, and then valsartan was added. After incubating for 0, 1, 3, 5, 15, 30, and 60 min, 30 μL samples were collected from volumes and the reaction was terminated by ice-cold acetonitrile. Then samples were prepared according to the above methods and analyzed by LC-MS/MS. The half-life (t1/2) in vitro was obtained using equation: t1/2 = 0.693/k. V (μL/mg) = volume of incubation (μL)/protein in the incubation (mg); Intrinsic clearance (Clint) (μL/min/mg protein) = V × 0.693/t1/2. Statistical analysis All data were represented as mean ± SD obtained from triplicate experiments and analyzed with one-way analysis of variance (ANOVA) with SPSS 20.0 (SPSS, Inc., Chicago, IL). The pharmacokinetic parameters of valsartan were obtained with DAS 3.0 pharmacokinetic software (Chinese Pharmacological Association, Anhui, China). Differences were considered to be statistically significant when p < 0.05. 
All data were represented as mean ± SD obtained from triplicate experiments and analyzed with one-way analysis of variance (ANOVA) with SPSS 20.0 (SPSS, Inc., Chicago, IL). The pharmacokinetic parameters of valsartan were obtained with DAS 3.0 pharmacokinetic software (Chinese Pharmacological Association, Anhui, China). Differences were considered to be statistically significant when p < 0.05. Drugs and chemicals: Standards of valsartan (purity >98%), and ligustrazine (purity >98%) were purchased from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). β-NADPH was obtained from Sigma Chemical Co. (St. Louis, MO). Pooled RLM were purchased from BD Biosciences Discovery Labware (Woburn, MA). Acetonitrile and methanol were purchased from Fisher Scientific (Fair Lawn, NJ). Formic acid was purchased from Anaqua Chemicals Supply Inc. Limited (Houston, TX). Ultrapure water was prepared with a Milli-Q water purification system (Millipore, Billerica, MA). All other chemicals were of analytical grade or better. Animals: Experiments were performed with male Sprague–Dawley rats weighing 230–250 g supplied by Sino-British Sippr/BK Lab Animal Ltd (Shanghai, China). The rats were housed in an air-conditioned animal quarter at 22 ± 2 °C and 50 ± 10% relative humidity. The animals were acclimatized to the facilities for 5 days, during this period, animals had free access to food and water. Before each experiment, the rats fasted with free access to water for 12 h. The experiment was approved by the Animal Care and Use Committee of The First Affiliated Hospital of Qiqihar Medical University. Preparation of plasma samples: Plasma sample (100 μL) was mixed with 20 μL methanol and 180 μL internal standard methanol solution and vortexed for 60 s. After centrifugation at 12,000 rpm for 10 min, the supernatant was removed into an injection vial and 5 μL aliquot was injected into LC/MS-MS system for analysis. 
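As a rough numerical sketch of the t1/2 and Clint equations above, the first-order rate constant k can be estimated from a log-linear fit of the fraction of valsartan remaining over the incubation. The remaining-drug percentages, incubation volume, and protein amount below are invented illustrative values, not the study's measurements.

```python
import numpy as np

# Illustrative (invented) percent-of-initial valsartan remaining at the
# study's sampling times; real values would come from the LC-MS/MS assay.
t = np.array([0, 1, 3, 5, 15, 30, 60], dtype=float)               # min
remaining = np.array([100, 98, 95, 91, 76, 57, 33], dtype=float)  # %

# First-order decline: ln(C) = ln(C0) - k*t, so k is the negative slope
# of a log-linear least-squares fit.
k = -np.polyfit(t, np.log(remaining), 1)[0]
t_half = 0.693 / k  # in vitro half-life (min)

# V = incubation volume (uL) / microsomal protein (mg); the 0.5 mL / 0.25 mg
# incubation below is an assumed setup, not taken from the paper.
V = 0.5 * 1000 / 0.25            # uL/mg
cl_int = V * 0.693 / t_half      # intrinsic clearance (uL/min/mg protein)

print(round(t_half, 1), "min;", round(cl_int, 1), "uL/min/mg")
```

With real incubation data the same two lines of arithmetic reproduce the half-life and intrinsic clearance values reported in the Results.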
Results: Effect of ligustrazine on the pharmacokinetics of valsartan: The plasma concentration of valsartan was analyzed by LC-MS/MS (Supplementary Figure 1), and the plasma concentration–time curves of valsartan in rats with or without ligustrazine treatment are shown in Figure 1.
The pharmacokinetic parameters, including area under the curve (AUC(0–t)), half-life (t1/2), time to reach maximum concentration (Tmax), maximum concentration (Cmax), and clearance (Clz/F), are summarized in Table 1.
Figure 1: Plasma concentration–time curves of valsartan in the presence or absence of 4 or 10 mg/kg ligustrazine.
Table 1: Pharmacokinetic parameters of valsartan (10 mg/kg) with or without ligustrazine (4 or 10 mg/kg); p < 0.05.
The administration of ligustrazine significantly changed the pharmacokinetic profile of valsartan. In the presence of 4 mg/kg ligustrazine, the AUC(0–t) of valsartan decreased from 851.64 ± 104.26 to 385.37 ± 93.05 μg/L/h and the Cmax was reduced from 83.87 ± 6.15 to 62.64 ± 9.09 μg/L (p < 0.05). Similarly, oral administration of 10 mg/kg ligustrazine also dramatically decreased the AUC(0–t) (249.85 ± 41.27 versus 851.64 ± 104.26 μg/L/h) and Cmax (51.28 ± 2.67 versus 83.87 ± 6.15 μg/L). The t1/2 of valsartan was significantly reduced from 6.34 ± 1.25 to 5.46 ± 0.93 h after co-administration of 4 mg/kg ligustrazine and to 5.15 ± 0.88 h in the presence of 10 mg/kg ligustrazine (p < 0.05). Additionally, the clearance rate of valsartan was significantly enhanced by both doses of ligustrazine (from 10.92 ± 1.521 to 25.76 ± 6.24 L/h/kg for 4 mg/kg ligustrazine, and to 39.44 ± 6.53 L/h/kg for 10 mg/kg ligustrazine; p < 0.05, Table 1).
Effect of ligustrazine on the metabolic stability of valsartan: In rat liver microsomes, the half-life of valsartan was 37.12 ± 4.06 min, which was significantly shortened to 33.48 ± 3.56 min after the addition of ligustrazine (p < 0.05). Moreover, the intrinsic clearance rate of valsartan was significantly increased by ligustrazine, from 37.34 ± 3.84 to 41.40 ± 4.32 μL/min/mg protein, indicating a significant decrease in the metabolic stability of valsartan after co-administration with ligustrazine (p < 0.05).
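The noncompartmental parameters in Table 1 were obtained with DAS 3.0. As an illustration of how Cmax, Tmax, and AUC(0–t) fall out of a concentration–time profile, a minimal linear trapezoidal sketch is shown below; the profile is hypothetical and only mimics the study's sampling schedule, not the measured data.

```python
import numpy as np

# Hypothetical plasma concentration-time profile (ug/L) at the study's
# sampling schedule; illustrative values only, not the measured data.
t = np.array([0, 0.083, 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 24])  # h
c = np.array([0, 20, 45, 70, 84, 76, 60, 48, 30, 18, 8, 1.5])  # ug/L

cmax = c.max()                 # maximum observed concentration
tmax = float(t[c.argmax()])    # time of maximum concentration
# Linear trapezoidal rule for AUC(0-t), ug/L*h.
auc_0_t = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))

print(f"Cmax={cmax} ug/L at Tmax={tmax} h, AUC(0-t)={auc_0_t:.1f} ug/L*h")
```

A lower AUC(0–t) and Cmax computed this way for the ligustrazine groups is what underlies the increased apparent clearance (Clz/F = dose/AUC) reported above.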
Discussion: Both ligustrazine and valsartan are widely used in the therapy of cardiac and cardiovascular disease. A combination of ligustrazine and valsartan can exert protective effects against hippocampal neuronal loss in vascular dementia (Qin et al. 2011). Previous studies have reported that the co-administration of different drugs can cause adverse interactions resulting in the induction or inhibition of drug metabolism and altered pharmacokinetics. For example, puerarin can inhibit the metabolism of triptolide and increase the plasma concentration of triptolide, which would increase its toxicity (Wang et al. 2019). Therefore, it is necessary to investigate interactions between different drugs. The co-administration of ligustrazine and valsartan investigated in this study indicated that ligustrazine promotes the metabolism of valsartan. Co-administration of 4 or 10 mg/kg ligustrazine with 10 mg/kg valsartan significantly reduced the AUC(0–t), t1/2, and Cmax of valsartan in rats and increased the clearance rate of valsartan. In the in vitro experiments, the metabolic stability of valsartan in rat liver microsomes was decreased by ligustrazine, with an increased intrinsic clearance rate and a decreased half-life. According to previous studies, the metabolism of valsartan is mainly mediated by CYP3A4, an important enzyme that participates in the phase-I metabolism of various drugs in the liver (Nakashima et al.
2005; Manikandan and Nagini 2018). Ligustrazine has also been demonstrated to be an inducer of CYP3A4 (Zhang et al. 2014). The activity of CYP3A4 plays a vital role in the drug–drug interaction between valsartan and many other co-administered drugs, such as quercetin (Challa et al. 2013). CYP3A4 induction also underlies interactions between other drug pairs; for example, the co-administration of glycyrrhizin and nobiletin led to enhanced metabolism of nobiletin via induction of CYP3A4 activity (Wang et al. 2020). Therefore, it is speculated that the effect of ligustrazine on the pharmacokinetics of valsartan resulted from enhanced CYP3A4 activity. However, drug concentration is an important factor in drug–drug interactions. The inducing effect of ligustrazine on CYP3A4 activity was reported at a ligustrazine concentration of 20 μM (Zhang et al. 2014), which is much higher than the dosage used in this study. Therefore, it is necessary to validate the effect of ligustrazine on the activity of CYPs at different concentrations. This study provides direct in vivo and in vitro evidence of the promoting effect of ligustrazine on the metabolism of valsartan. The interaction between these two drugs in humans remains to be explored and should be studied with experimental models closer to human metabolism. Moreover, because the effect of ligustrazine on the pharmacokinetics of valsartan likely results from the induction of CYP3A4, valsartan should be used carefully with drugs that affect CYP3A4 activity. Conclusions: The co-administration of valsartan and ligustrazine induced a drug–drug interaction that decreased the systemic exposure of valsartan by activating CYP3A4. Therefore, the doses of ligustrazine and valsartan should be carefully considered in the clinic when they are co-administered. Supplementary Material: An additional data file is available.
Background: Ligustrazine and valsartan are commonly used drugs in the treatment of cardiac and cardiovascular disease. Methods: The pharmacokinetics of valsartan (10 mg/kg) was investigated in Sprague-Dawley rats divided into three groups of six rats each (pretreated with 4 or 10 mg/kg/day ligustrazine for 10 days, or without ligustrazine pretreatment as control). In vitro experiments in rat liver microsomes were performed to explore the effect of ligustrazine on the metabolic stability of valsartan. Results: Ligustrazine changed the pharmacokinetic profile of valsartan. In the presence of 4 mg/kg ligustrazine, the AUC(0–t) (385.37 ± 93.05 versus 851.64 ± 104.26 μg/L/h), t1/2 (5.46 ± 0.93 versus 6.34 ± 1.25 h), and Cmax (62.64 ± 9.09 versus 83.87 ± 6.15 μg/L) of valsartan were significantly decreased, the clearance rate was increased from 10.92 ± 1.521 to 25.76 ± 6.24 L/h/kg, and similar changes were observed in the group with 10 mg/kg ligustrazine (p < 0.05). The metabolic stability of valsartan was also decreased by ligustrazine: the half-life of valsartan in rat liver microsomes decreased from 37.12 ± 4.06 to 33.48 ± 3.56 min and the intrinsic clearance rate increased from 37.34 ± 3.84 to 41.40 ± 4.32 μL/min/mg protein (p < 0.05). Conclusions: Ligustrazine promoted the metabolism of valsartan via activation of CYP3A4. The co-administration of ligustrazine and valsartan should be taken into account.
Introduction: Ligustrazine is a bioactive ingredient extracted from Rhizoma Ligustici wallichii [Ligustici wallichii (Apiaceae)]. Ligustrazine has been widely used in the clinic for the treatment of cardiovascular disease because of its blood circulation-activating function (Wu et al. 2012; Qian et al. 2014). Moreover, ligustrazine has been demonstrated to possess antioxidant properties in animal models of ischemia-reperfusion, atherosclerosis, and cerebral vasospasm (Jiang et al. 2011; Lv et al. 2012). Commonly, patients with cardiovascular or cerebral diseases may have received ligustrazine treatment before using other drugs, or it may be used together with other drugs for the treatment of complex chronic disorders (Hockenberry 1991; Phougat et al. 2014). Valsartan is a commonly used drug in cardiac disease with relatively low absolute bioavailability (Jung et al. 2015). Valsartan is usually applied in the treatment of hypertension because of its blood pressure-lowering properties, and it is also co-administered with other drugs to make treatment more efficient in the clinic (Cheng et al. 2001; Liu et al. 2019a). Co-administration of valsartan and quercetin was previously shown to exert greater cardiac protection, and this combination affected the pharmacokinetics of valsartan (Challa et al. 2013). The combination of valsartan and amlodipine has been demonstrated to control blood pressure efficiently, and the pharmacokinetics and transport of valsartan were influenced by the administration of amlodipine (Cai et al. 2011). The combination of ligustrazine and valsartan has been reported to have a protective effect against hippocampal neuronal loss in vascular dementia (Qin et al. 2011). However, the drug–drug interaction between ligustrazine and valsartan remains unclear. This study focused on the interaction between ligustrazine and valsartan to evaluate the effect of ligustrazine on the pharmacokinetics of valsartan, which can provide more information for the clinical co-administration of ligustrazine and valsartan.
Keywords: Metabolic stability; CYP3A4; metabolism; drug–drug interaction
MeSH terms: Animals; Cytochrome P-450 CYP3A Inducers; Drug Interactions; Male; Pyrazines; Rats; Rats, Sprague-Dawley; Valsartan
HIV, Hepatitis B and C viruses' coinfection among patients in a Nigerian tertiary hospital.
PMID: 23133700; PMCID: PMC3489383
Introduction: Hepatitis co-infection with HIV is associated with increased morbidity and mortality.
Methods: This cross-sectional study was carried out among HIV-positive patients and HIV-negative blood donors; the HIV-infected patients were recruited from the antiretroviral therapy clinics of the Lagos State University Teaching Hospital, Nigeria. The diagnosis of HIV infection among patients and pre-donation screening of control blood donors were carried out using Determine 1/2 rapid screening kits (Inverness Medical, Japan). Reactive patients' sera were confirmed with Enzyme-Linked Immunosorbent Assay (ELISA)-based Immunocomb I & II CombFirm kits (Orgenics, Israel). Hepatitis B surface antigen (HBsAg) and antibodies to hepatitis C virus (anti-HCV) were assayed in patient and control sera using 4th-generation Dialab ELISA kits.
Results: Dual presence of HBsAg and anti-HCV was observed in 4 (3.9%) of the HIV-infected patients, while 29 (28.4%) and 15 (14.7%) were repeatedly reactive for HBsAg and anti-HCV, respectively. HIV-negative blood donor controls had HBsAg and anti-HCV prevalences of 22 (6.0%) and 3 (0.8%), respectively. The prevalence of hepatitis co-infection was higher among the male study patients, 16 (50%), than the females, 32 (45.7%) (p > 0.001). Data analysis was done with the Statistical Package for the Social Sciences (SPSS 9) and chi-square tests.
Conclusion: This study reveals a higher risk and prevalence of HBV and HCV co-infections among HIV-infected patients compared to HIV-negative blood donors (p < 0.001).
MeSH terms: Adult; Aged; Cross-Sectional Studies; Enzyme-Linked Immunosorbent Assay; Female; HIV Infections; Hepatitis B; Hepatitis B Surface Antigens; Hepatitis C; Hepatitis C Antibodies; Hospitals, University; Humans; Male; Middle Aged; Nigeria; Risk Factors; Sex Factors; Young Adult
Introduction
Hepatitis B virus (HBV) and hepatitis C virus (HCV) infections cause chronic hepatitis, cirrhosis and hepatocellular carcinoma, all of which are of serious public health concern [1]. There is a heavy burden of HIV-HBV and HIV- HCV co infections in many regions of the developing world [2], Nigeria inclusive [3, 4]. Available data suggest 15-60% of the normal population in many African countries may be positive for one or more of the serological markers of hepatitis B virus [5]. The high prevalence of HBV infection in this region is thought to be due to horizontal transmission during childhood [5]. Individuals co infected with HIV and HBV are more likely to develop chronic hepatitis B and are at increased risk for liver related mortality [6]. Hepatitis C virus is the major cause of nonA, nonB hepatitis worldwide [7]. Hepatitis C co infection has been found to be more common in HIV+ve individuals and is associated with an increased mortality and renal morbidity [8]. In persons with HIV, HCV prevalence is estimated to be approximately 50% in the USA [9]. Recently, co infection between hepatitis C virus and HIV have been associated with rapid decline in the CD4 count, rapid progression of HIV infection and with increased morbidity and mortality [10]. Hepatitis co-infection with HIV accelerates disease progression in both HCV and HBV and also increases the risk of antiretroviral drug associated hepatotoxicity [11]. With an increase use and accessibility of highly active antiretroviral therapy among HIV positive patients in sub Saharan Africa, co-infection with these viruses could contribute significantly to continuing morbidity and mortality among this group of patients over the coming years. The significant advancing in HIV management and survival have led to the recognition of chronic hepatitis as the pre eminent co morbid illness which now accounts for the majority of non AIDS related deaths in this population. 
To define the magnitude of this burden, we examined the prevalence and risk of co-infection with HBV and HCV among Nigerians with HIV infection. The results of this study would provide a baseline for future, larger studies.
Methods
After obtaining ethical approval from the Lagos State University Teaching Hospital (LASUTH) health research ethics committee, a cross-sectional study was conducted from March to August 2006. The study population was adult Nigerian HIV-infected patients attending the antiretroviral therapy clinics. Informed consent was obtained from study subjects before structured questionnaires were administered to capture demographic data and risk factors predisposing to acquisition of HBV and HCV co-infections. About 5 millilitres of blood was collected from each patient and control subject by venepuncture. Sera were separated from the blood samples and stored at −20°C until tested. Samples were brought to room temperature prior to testing and analyzed according to the manufacturer's recommendations at the Blood Screening Centre of LASUTH. Each serum sample was analyzed for the presence of HBsAg and anti-HCV using commercially available 4th-generation enzyme-linked immunosorbent assay (ELISA) kits (Dialab, Austria) with 99.87% specificity and 100% sensitivity. The diagnosis of HIV was made using WHO-approved Determine 1/2 rapid kits with 100% sensitivity and 99.6% specificity. Positive serostatus was confirmed with ELISA-based Immunocomb I & II Combfirm kits (Orgenics, Israel). The data were analyzed using the Statistical Package for the Social Sciences (SPSS, version 9). The chi-square test was used to assess the significance of differences among groups. A p-value of less than or equal to 0.001 was considered significant in all statistical comparisons.
Results
One hundred and two HIV-infected patients, comprising 32 (31.4%) males and 70 (68.6%) females (M:F ratio 1:2.2), were enrolled in this study. Table 1 shows the dual presence of HBsAg and anti-HCV in 4 (3.9%) study patients, while 29 (28.4%) tested positive for HBsAg and 15 (14.7%) for anti-HCV. The control subjects had a much lower prevalence of 22 (6.0%) for HBsAg and 3 (0.8%) for anti-HCV, while the dual presence of HBsAg and anti-HCV had a prevalence of 0%. The difference in prevalence of hepatitis between controls and study subjects is statistically significant (p < 0.001). In Figure 1, there are more female patients in the younger age groups, with a peak at 30-34 years, while the males are older, with a peak in the 55-59 years age group. The age range of the patients is 20-79 years, with a mean of 38 years. More male patients, 16 (50%), had hepatitis co-infection than female patients, 32 (45.7%); the difference is, however, not statistically significant (p > 0.001). HIV/HBV co-infected patients were more likely to be male, 12 (75%), while HIV/HCV co-infected patients were more likely to be female, 12 (37.5%). In Table 2, triple infection with HIV/HBV/HCV was higher among the female patients, 3 (9.4%), than the male patients, 1 (6.3%); the difference is not statistically significant (p > 0.001). Figure 1: Age and sex distribution of patients. Table 1: Prevalence and risk of HBsAg and anti-HCV among HIV patients and controls. Table 2: Gender prevalence of HBsAg and anti-HCV among study subjects.
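The gender comparison above can be checked with a Pearson chi-square test on the 2×2 table of co-infection status by sex. The following is a minimal pure-Python sketch, not part of the original analysis; the helper name is illustrative, the counts are taken from the results (males 16/32 co-infected, females 32/70), and 10.828 is the df = 1 chi-square critical value corresponding to the study's 0.001 significance threshold.

```python
# Minimal Pearson chi-square statistic for a 2x2 contingency table.
# Rows are groups (male, female); columns are co-infected / not co-infected.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# males: 16 co-infected, 16 not; females: 32 co-infected, 38 not
stat = chi_square_2x2(16, 16, 32, 38)
# df = 1; critical value at the study's alpha of 0.001 is 10.828
print(round(stat, 3), stat > 10.828)  # 0.162 False -> not significant
```

The tiny statistic (about 0.162) is far below the critical value, consistent with the authors' conclusion that the gender difference in co-infection is not statistically significant.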
Conclusion
We recommend that HIV-positive patients be routinely screened for HBV and HCV markers before initiation of highly active antiretroviral therapy, as this practice would guide the correct choice of drug combination. This would in turn reduce morbidity and mortality from antiretroviral drug-associated hepatotoxicity among these patients.
[ "Introduction" ]
[ null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion" ]
[ "Hepatitis B virus (HBV) and hepatitis C virus (HCV) infections cause chronic hepatitis, cirrhosis and hepatocellular carcinoma, all of which are of serious public health concern [1]. There is a heavy burden of HIV-HBV and HIV- HCV co infections in many regions of the developing world [2], Nigeria inclusive [3, 4]. Available data suggest 15-60% of the normal population in many African countries may be positive for one or more of the serological markers of hepatitis B virus [5]. The high prevalence of HBV infection in this region is thought to be due to horizontal transmission during childhood [5]. Individuals co infected with HIV and HBV are more likely to develop chronic hepatitis B and are at increased risk for liver related mortality [6]. Hepatitis C virus is the major cause of nonA, nonB hepatitis worldwide [7]. Hepatitis C co infection has been found to be more common in HIV+ve individuals and is associated with an increased mortality and renal morbidity [8]. In persons with HIV, HCV prevalence is estimated to be approximately 50% in the USA [9].\nRecently, co infection between hepatitis C virus and HIV have been associated with rapid decline in the CD4 count, rapid progression of HIV infection and with increased morbidity and mortality [10]. Hepatitis co-infection with HIV accelerates disease progression in both HCV and HBV and also increases the risk of antiretroviral drug associated hepatotoxicity [11]. With an increase use and accessibility of highly active antiretroviral therapy among HIV positive patients in sub Saharan Africa, co-infection with these viruses could contribute significantly to continuing morbidity and mortality among this group of patients over the coming years.\nThe significant advancing in HIV management and survival have led to the recognition of chronic hepatitis as the pre eminent co morbid illness which now accounts for the majority of non AIDS related deaths in this population. 
To define the magnitude of this burden, we have examined the prevalence and risk of co infection with HBVand HCV among Nigerians with HIV infection. The result of this study would provide the baseline for future larger studies.", "After obtaining ethical approval from the Lagos State University Teaching Hospital (LASUTH) health research ethics committee, a cross-sectional study was conducted in March - August 2006. The study population was adult Nigerian HIV infected patients attending the antiretroviral therapy clinics. Informed consent was obtained from study subjects before specific structured questionnaires were administered to capture demographic data and risk factors which predispose to acquisition of both HBV and HCV co infections.\nAbout 5millilitres of blood was collected from each patient and control subjects by venepuncture. The sera from the blood samples were separated and stored at −20°C until tested. Samples were brought to room temperature prior to testing and analyzed according to manufacturer's recommendations at the Blood Screening Centre of LASUTH.\nEach serum was analyzed for the presence of HBsAg and anti-HCV using commercially available 4th generation Enzyme Limited Immuno Assay (ELISA) kits (Dialab, Austria) with 99.87% specificity and100%sensitivity. The diagnosis of HIV was made in patients using WHO approved Determine1/2 very rapid kits with100% sensitivity and 99.6% specificity. Positive serostatus was confirmed with ELISA based Immunocomb I &II comb firm kits (Orgenics, Israel) The data were analyzed using a statistical package for social sciences (Version 9, SPSS). Chi-square test was used to assess the significance of differences among groups. A value of less than or equal to 0.001 was considered significant in all statistical comparisons.", "One hundred and two HIV infected patients comprising 32 (31.4%) males and 70 (68.6%) females with M: F ratio of 1:2.2 was enrolled for this study. 
Table 1 shows the dual presence of HBsAg and anti-HCV in 4(3.9%)study patients while 29 (28.4%) tested positive for HBsAg and 15 (14.7%) for anti HCV. The control subjects have a much lower prevalence of 22(6.0%) and 3(0.8%) for HBsAg and anti-HCV respectively while dual presence of HBsAg and ant-HCV was 0% prevalence. The difference in prevalence of hepatitis between the control and subjects is statistically significant p< 0.001 In Figure 1, there are more female patients in the younger age group with a peak at 30-34years while the males are older with a peak in the 55- 59 years age group. The age range of the patients is 20-79 with a mean of 38years. More male patients 16(50%) have hepatitis co infection than the female32 (45.7%). The difference is however not statistically significant p > 0.001 HIV/HBV co infected patients were more likely to be male12 (75%) while HIV/HCV co infected patients were more likely to be female12 (37.5%). In Table 2, triple infection with HIV/HBV/HCV was higher among the female 3 (9.4%) than the male patients 1 (6.3%).The difference is not statistically significant p > 0.001.\n\nAge and sex distribution of patients\nPrevalence and Risk of HBsAg and Anti-HCV among HIV Patients and Controls\nGender Prevalence of HBsAg and Anti-HCV among study subjects", "Our study examined the prevalence of HBsAg and anti-HCV among HIV infected patients and healthy HIV negative blood donors. We observed HBsAg and anti-HCV positivity of 29 (28.4%) and 15(14.7%) respectively among HIV infected patients. Among healthy HIV negative blood donor controls, we observed HBsAg and anti-HCV prevalence of 22(6.0%) and 3(0.8%) respectively. The high prevalence of HBsAg 29(28.4%) and anti-HCV 15 (14.7%) among HIV infected patients in this study is comparable with reports by Forbi et al in North Central Nigeria [4], South Africa cohort [8], Senegal [12] and France [13]. 
The prevalence of HBsAg (6.0%) among the control blood donors in this study is comparable with figures reported from previous studies in Lagos, 6.9% [14], and Ile-Ife, 7.3% [15], Nigeria. The observed HIV/HBV co-infection prevalence (28.4%) in this study is comparable with high figures previously reported in different parts of Nigeria (Keffi 20.6% [4], Jos 28.7% [16], Ilorin 30.4% [17], Kano 70.5% [18]) and India 33.8% [19]. Some previous reports of HIV/HBV co-infection from South Africa 6% [20], Nigeria (Maiduguri 15% [21], Lagos 9.2% [22], Niger Delta 9.7% [23]) and Thailand 8.7% [24] are, however, lower than the figures observed in this study. Varying sample sizes and differences in test kit sensitivity and specificity may be responsible for the differences in prevalence figures in this group of patients.\nThe prevalence of HBV/HIV co-infection was found to be higher among male study subjects, 12 (37.5%), than females, 17 (24.3%), in this study (p = 0.001); the difference is not statistically significant. This finding is compatible with previous reports from Jos, North Central Nigeria [16] and India [19]. This observation may be accounted for by the fact that men are more likely to have multiple sex partners and to practise unprotected sex in our polygamous setting. We also observed a significantly higher prevalence of HCV antibody among HIV-infected patients compared to HIV-negative blood donors (14.7% vs 0.8% respectively; p < 0.001); the difference is statistically significant. This difference may be due to shared modes of transmission of both viruses in the study patients. The observed high prevalence (14.7%) of HCV/HIV co-infection among study patients is comparable with reports of previous studies in Nigeria (Keffi 11.1% [4], Abuja 8.2% [24]), South Africa 13.4% [8], France 17% [13], and San Francisco 11.7% [26]. We also found that the prevalence of HIV/HCV co-infection was higher among the female patients, 12 (37.5%), than the male, 3 (18.8%) (p > 0.001). 
The difference is, however, not statistically significant. This observation is in line with a previous study by Lesi et al [22], but at variance with other reports [4, 25], which found older males to be more commonly co-infected in Nigeria. The higher rate of HIV/HCV co-infection among females may be accounted for by the fact that women of all ages are more likely than men to become infected with HIV and HCV during unprotected vaginal intercourse. We observed that 4 (3.9%) of the study patients had triple co-infection with HIV/HBV/HCV. Previous prevalence reports of triple co-infection in this group of study subjects vary: Keffi 7.2% [4] and Ibadan 1% [27] in Nigeria, Kenya 0.3% [28] and France 1.6% [13]. The results of this study show that the prevalence of HBV and HCV in the population of persons living with HIV in Nigeria is higher than in the HIV-negative population. In conclusion, there is a higher risk of HBV and HCV co-infection among HIV-infected patients compared to HIV-negative blood donors in this study (p < 0.001). The high co-infection rates among the study subjects demonstrate a correlation between these viral infections which could influence the evolution of HIV, HBV and HCV disease.", "We recommend that HIV-positive patients be routinely screened for HBV and HCV markers before initiation of highly active antiretroviral therapy, as this practice would guide the correct choice of drug combination. This would in turn reduce morbidity and mortality from antiretroviral drug-associated hepatotoxicity among these patients." ]
[ null, "methods", "results", "discussion", "conclusion" ]
[ "HIV", "Hepatitis B", "Hepatitis C", "coinfections", "Nigeria" ]
Background: Hepatitis co-infection with HIV is associated with increased morbidity and mortality. Methods: This cross-sectional study was carried out among HIV-positive patients and HIV-negative blood donors; HIV-infected patients were recruited from the antiretroviral therapy clinics of the Lagos State University Teaching Hospital in Nigeria. The diagnosis of HIV infection among patients and pre-donation screening of control blood donors was carried out using Determine 1/2 rapid screening kits (Inverness Medical, Japan). Reactive patients' sera were confirmed with enzyme-linked immunosorbent assay (ELISA)-based Immunocomb I & II Combfirm kits (Orgenics, Israel). Hepatitis B surface antigen (HBsAg) and antibodies to hepatitis C virus (anti-HCV) were assayed using 4th-generation Dialab ELISA kits for patient and control sera. Results: Dual presence of HBsAg and anti-HCV was observed in 4 (3.9%) of HIV-infected patients, while 29 (28.4%) and 15 (14.7%) were repeatedly reactive for HBsAg and anti-HCV respectively. HIV-negative blood donor controls had HBsAg and anti-HCV prevalences of 22 (6.0%) and 3 (0.8%) respectively. The prevalence of hepatitis co-infection was higher among the male study patients, 16 (50%), than the female, 32 (45.7%) (p > 0.001). Data analysis was done with the Statistical Package for the Social Sciences (SPSS, 9) and chi-square tests. Conclusions: This study reveals a higher risk and prevalence of HBV and HCV co-infection among HIV-infected patients compared to HIV-negative blood donors (p < 0.001).
1,860
290
5
[ "hiv", "hcv", "patients", "study", "co", "prevalence", "infection", "hbv", "co infection", "hepatitis" ]
[ "test", "test" ]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] hiv | hcv | patients | study | co | prevalence | infection | hbv | co infection | hepatitis [SUMMARY]
[CONTENT] hiv | hcv | patients | study | co | prevalence | infection | hbv | co infection | hepatitis [SUMMARY]
[CONTENT] hiv | hcv | patients | study | co | prevalence | infection | hbv | co infection | hepatitis [SUMMARY]
[CONTENT] hiv | hcv | patients | study | co | prevalence | infection | hbv | co infection | hepatitis [SUMMARY]
[CONTENT] hiv | hcv | patients | study | co | prevalence | infection | hbv | co infection | hepatitis [SUMMARY]
[CONTENT] hiv | hcv | patients | study | co | prevalence | infection | hbv | co infection | hepatitis [SUMMARY]
[CONTENT] hepatitis | infection | hiv | virus | hepatitis virus | co | co infection | mortality | chronic | chronic hepatitis [SUMMARY]
[CONTENT] kits | analyzed | blood | elisa | samples | lasuth | 99 | statistical | study | data [SUMMARY]
[CONTENT] patients | hbsag | age | hcv | anti hcv | anti | prevalence | statistically significant 001 | significant 001 | hbsag anti hcv [SUMMARY]
[CONTENT] drug | antiretroviral | morbidity mortality antiretroviral | screened hbv hcv markers | associated hepatotoxicity patients | therapy practice | therapy practice guide | therapy practice guide correct | hepatotoxicity patients | turn [SUMMARY]
[CONTENT] hiv | patients | hcv | prevalence | co | hepatitis | study | infection | hbsag | hbv [SUMMARY]
[CONTENT] hiv | patients | hcv | prevalence | co | hepatitis | study | infection | hbsag | hbv [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] the Lagos State University Teaching Hospital | Nigeria ||| Determine 1/2 ||| Japan ||| Enzyme Linked Immunosorbant Assay ( | Elisa | 1&11 | Israel ||| 4(th | Elisa [SUMMARY]
[CONTENT] HBsAg | 4(3.9% | 29(28.4% | 15(14.7% | HBsAg ||| HBsAg | 22 | 6.0% | 3 | 0.8% ||| 16(50% | 32 | 45.7%).p | Package | Chi square [SUMMARY]
[CONTENT] HBV [SUMMARY]
[CONTENT] ||| the Lagos State University Teaching Hospital | Nigeria ||| Determine 1/2 ||| Japan ||| Enzyme Linked Immunosorbant Assay ( | Elisa | 1&11 | Israel ||| 4(th | Elisa ||| HBsAg | 4(3.9% | 29(28.4% | 15(14.7% | HBsAg ||| HBsAg | 22 | 6.0% | 3 | 0.8% ||| 16(50% | 32 | 45.7%).p | Package | Chi square ||| HBV [SUMMARY]
[CONTENT] ||| the Lagos State University Teaching Hospital | Nigeria ||| Determine 1/2 ||| Japan ||| Enzyme Linked Immunosorbant Assay ( | Elisa | 1&11 | Israel ||| 4(th | Elisa ||| HBsAg | 4(3.9% | 29(28.4% | 15(14.7% | HBsAg ||| HBsAg | 22 | 6.0% | 3 | 0.8% ||| 16(50% | 32 | 45.7%).p | Package | Chi square ||| HBV [SUMMARY]
Neighborhood food environment and walkability predict obesity in New York City.
19337520
Differences in the neighborhood food environment may contribute to disparities in obesity.
BACKGROUND
This study employed a cross-sectional, multilevel analysis of BMI and obesity among 13,102 adult residents of New York City. We constructed measures of the food environment and walkability for the neighborhood, defined as a half-mile buffer around the study subject's home address.
METHODS
Density of BMI-healthy food outlets (supermarkets, fruit and vegetable markets, and natural food stores) was inversely associated with BMI. Mean adjusted BMI was similar in the first two quintiles of healthy food density (0 and 1.13 stores/km2, respectively), but declined across the three higher quintiles and was 0.80 units lower [95% confidence interval (CI), 0.27-1.32] in the fifth quintile (10.98 stores/km2) than in the first. The prevalence ratio for obesity comparing the fifth quintile of healthy food density with the lowest two quintiles combined was 0.87 (95% CI, 0.78-0.97). These associations remained after control for two neighborhood walkability measures, population density and land-use mix. The prevalence ratio for obesity for the fourth versus first quartile of population density was 0.84 (95% CI, 0.73-0.96) and for land-use mix was 0.91 (95% CI, 0.86-0.97). Increasing density of food outlets categorized as BMI-unhealthy was not significantly associated with BMI or obesity.
RESULTS
Access to BMI-healthy food stores is associated with lower BMI and lower prevalence of obesity.
CONCLUSIONS
[ "Body Mass Index", "Cross-Sectional Studies", "Environment", "Food, Organic", "Geography", "Humans", "New York City", "Obesity", "Prevalence", "Urban Population", "Walking" ]
2661915
null
null
null
null
Results
The data set initially received from D&B included 32,949 retail food businesses for New York City. After correction of geocoded addresses and removal of duplicate records, businesses likely to be defunct, and records likely to represent back offices and corporate offices, the data set included 29,976 businesses, of which 29,858 fell within the bounds of study subjects’ neighborhoods. Table 2 displays descriptive statistics for the BMI-healthy, BMI-unhealthy, and BMI-intermediate categories as well as for specific food outlet types. Density of intermediate and unhealthy food outlets was much higher than density of healthy food outlets. Almost all study subjects lived within a half-mile of an unhealthy food outlet, with an average density of 31 such outlets per square kilometer. By contrast, only 82% lived within a half-mile of a healthy food outlet, with an average density of four outlets per square kilometer. Density measures for food outlet types were significantly correlated across neighborhoods, with correlation coefficients ranging from 0.38 (convenience stores and supermarkets) to 0.85 (non-fast-food restaurants and pizza restaurants). Figure 1 maps the density of BMI-healthy food outlets, expressed in outlets per square kilometer, across the city. Outlet density was highest in high-walkability areas of the city, such as Manhattan, and lowest in low-walkability areas, such as Staten Island. Outlet density also varied by neighborhood income and race/ethnic composition, with higher densities in affluent and predominantly white neighborhoods in the southern half of Manhattan and lower densities in the poor and predominantly black or Latino neighborhoods in the northern half of Manhattan and in the South Bronx. 
To reduce the risk of confounding, the multivariate analyses controlled statistically for individual-level race/ethnicity and education and neighborhood-level poverty rate and race/ethnic composition, as well as indices of neighborhood walkability, including population density and land-use mix. Multilevel analyses of the association between BMI and the food environment measures showed significant associations only with access to BMI-healthy food. We also assessed possible confounding effects of built environment variables. Population density, which has previously been inversely associated with BMI in analyses of the same data set, had an appreciable confounding effect, but further control for land-use mix, percent commercial area, and access to and neighborhood use of public transit did not alter the results. Table 3 shows adjusted mean BMI for each quintile of the three food categories and the median density of food outlets for each category; Figure 2 displays the association between healthy food outlet density and BMI based on this analysis. The adjusted mean BMI in the fifth quintile of healthy food was 0.80 units [95% confidence interval (CI), 0.27–1.32, p < 0.01] lower than in the first quintile of healthy food. Population density and land-use mix remained significantly inversely associated with BMI after controlling for measures of the food environment. Increasing density of the BMI-unhealthy and BMI-intermediate food categories was not associated with BMI, and analyses of selected subcategories of BMI-unhealthy food (fast food, pizzerias, and convenience stores) found no significant associations. Because there was little difference in the adjusted mean BMI of individuals living in the first and second quintile of BMI-healthy food density, we collapsed these two categories into a single reference category to increase statistical power for analyses of the prevalence of overweight and obesity. 
The reference category had a median density of 0.76 healthy food outlets per square kilometer. Table 4 shows the prevalence ratios for overweight and obesity by increasing density of healthy food outlets, increasing population density, and land-use mix. Controlling for population density and land-use mix, the prevalence of overweight and obesity were both lower among individuals with the highest density of healthy food outlets. Controlling for other features of the built environment did not alter the prevalence ratio for healthy food density. Our previous work showed that increasing land-use mix and population density were inversely associated with BMI; this association remained after control for the density of BMI-healthy, BMI-unhealthy, and BMI-intermediate food outlets (Rundle et al. 2007). The prevalence ratio for obesity comparing the fourth and first quartiles of land-use mix was 0.91 (95% CI, 0.86–0.97) and comparing the fourth and first quartiles of population density was 0.84 (95% CI, 0.73–0.96).
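A prevalence ratio like those reported above (e.g., 0.87; 95% CI, 0.78–0.97) compares the prevalence of obesity in one exposure group with a reference group. The study itself estimated adjusted ratios via Poisson regression with robust variance; as a simpler illustration, the sketch below computes a crude prevalence ratio and a Wald confidence interval on the log scale from hypothetical counts (the counts are invented, not from the study).

```python
import math

def prevalence_ratio(a, n1, b, n0, z=1.96):
    """Crude prevalence ratio of exposed (a/n1) vs. reference (b/n0),
    with a Wald 95% CI computed on the log scale."""
    pr = (a / n1) / (b / n0)
    # Standard error of log(PR) for binomial proportions
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts: 300 obese of 1,500 subjects in the top quintile
# of healthy-food density vs. 500 obese of 2,000 in the reference group.
pr, lo, hi = prevalence_ratio(300, 1500, 500, 2000)
print(f"PR = {pr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

A crude ratio like this ignores the individual- and neighborhood-level adjustment the paper performed; it only shows the arithmetic behind the reported effect measure.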
null
null
[ "Neighborhood measures", "Food environment measures", "Statistical analysis" ]
[ "We defined a study subject’s neighborhood as a half-mile (805 m) “network buffer” around his or her residential address, comprising locations reachable within a half-mile walk along the street network. Most urban planners assume that a half-mile is a walkable distance (Agrawal et al. 2008; Calthorpe 1993; Cervero 2006). We constructed sociodemographic and built environment measures, including food environment variables, for each individual’s neighborhood. To control for the effects of neighborhood composition on BMI, our models adjusted for the proportion of residents below the federal poverty line, proportion black, and proportion Hispanic using data from the 2000 U.S. Census summary file 3 (U.S. Census Bureau 2000).\nWe assessed the possible confounding effects of the following measures of neighborhood walkability: population density, density of bus and subway stops, percentage of commuters using public transit, land-use mix, and proportion of land zoned to permit commercial development (Rundle et al. 2007). We calculated population density, expressed as persons per square kilometer of land area, and the percentage of commuters using public transit from 2000 U.S. Census data (U.S. Census Bureau 2000). We based the numbers of bus and subway stops per square kilometer on data from the Department of City Planning (DCP). We constructed the proportion of the buffer zoned to permit commercial development and a measure of residential/commercial land-use mix using the Primary Land Use Tax Lot Output data, a parcel-level data set also available from DCP. Land-use mix is an index of the extent to which a neighborhood supports both commercial and residential land uses, with the index tending toward 1 as the mix of residential and commercial floor area approaches a 1:1 ratio.", "We derived food environment measures from 2001 data purchased from Dun & Bradstreet (D&B; unpublished data). 
The data include business name, geocoded location, and detailed Standard Industrial Classification (SIC) industry codes (http://www.osha.gov/pls/imis/sic_manual.html) for food establishments. A priori, we grouped food outlets into categories hypothesized to provide BMI-healthy or BMI-unhealthy food, with one intermediate category for food outlets whose classification was uncertain. We classified supermarkets and fruit and vegetable markets as BMI-healthy based on evidence associating proximity to supermarkets with better dietary patterns and lower BMI (Laraia et al. 2004; Morland et al. 2002, 2006; Zenk et al. 2005), lower fruit and vegetable prices with slower growth in BMI (Powell et al. 2007; Sturm and Datar 2005), and daily vegetable consumption with lower rates of obesity (Lahti-Koski et al. 2002). Although supermarkets sell a range of food, including both healthy and unhealthy options, we consider them healthy food outlets because they offer local residents the opportunity to purchase healthy food. No evidence is available linking natural food stores to diet or BMI, but food products typically available at natural food stores tend to be healthier; thus, we also categorized natural food stores as BMI-healthy food outlets.\nThe category of BMI-unhealthy food outlets included fast-food restaurants, a choice based on extensive evidence linking fast-food consumption with high energy intake, fat intake, BMI, and weight gain (Befort et al. 2006; Bowman and Vinyard 2004; Duerksen et al. 2007; Duffey et al. 2007; French et al. 2001; Jeffery et al. 2006; Jeffery and French 1998; Thompson et al. 2004). The BMI-unhealthy food index also included convenience stores (Morland et al. 2006) and meat markets (Gillis and Bar-Or 2003; Lahti-Koski et al. 2002). We classified pizzerias, bakeries, and candy and nut stores as BMI-unhealthy based on the energy density of the types of foods sold there. 
Because “bodegas” or very small grocery stores tend to sell energy-dense foods and few fruits and vegetables, they were classed as BMI-unhealthy (Kaufman and Karpati 2007).\nThe BMI-intermediate category comprised food outlets for which evidence was insufficient for placement in the other two categories. This category included non-fast-food restaurants—that is, restaurants excluding fast food and pizzerias. Although eating food prepared away from home has sometimes been associated with poor diet and higher weight (Gillis and Bar-Or 2003; Guthrie et al. 2002; Yao et al. 2003), research on consumption of food from non-fast-food restaurants has found no effect on weight or weight gain (Duffey et al. 2007; Jeffery et al. 2006; Thompson et al. 2004), and one study found higher vegetable consumption among adolescents who ate more frequently at non-fast-food restaurants (Befort et al. 2006). The intermediate category also includes medium-sized grocery stores and specialty stores, as well as fish markets. Although some evidence associates fish intake with weight loss (Thorsdottir et al. 2007), fish markets in New York City often sell fried fish and fried seafood for immediate consumption; thus, the implication of this food outlet type for weight is unclear.\nWe identified most food outlet types by SIC code number alone: fruit and vegetable markets (#5431), natural or health food stores (#549901), fish markets (#542101), specialty food stores (#5451 and #5499, excluding #549901), convenience stores (#541102), bakeries (#5461), candy and nut stores (#5441), and meat markets (#542102). We distinguished three categories of grocery stores, excluding convenience stores. We identified “supermarkets” as grocery stores (#5411) with at least $2 million in annual sales or, for establishments with missing data on annual sales, at least 18 employees. (Among establishments with annual sales data, 18 employees was the threshold at which at least half had annual sales of ≥ $2 million.) 
“Medium-sized grocery stores” were nonsupermarket groceries with at least five employees. “Bodegas” were grocery stores with fewer than five employees. We identified national-chain fast-food restaurants through text searches in the D&B “company name” and “tradestyle” fields for names appearing in Technomic Inc.’s list of the top 100 limited-service chain brands (Technomic Inc. 2006). We identified as local fast food those restaurants that were not already identified as a national-chain fast-food restaurant and that had an SIC code indicating fast food (#58120300, #58120307, or #58120308), as well as the restaurants with names matching those on this list of local fast-food restaurants. We identified non-fast-food restaurants with “pizza” or “pizzeria” in their name, or with SIC codes of #58120600, #58120601, or #58120602, as pizzerias. We categorized all other establishments with an SIC code of 5812 as non-fast-food restaurants.\nThe density per square kilometer of establishments falling within each of these three categories was calculated for each subject’s unique network buffer. Subjects were then categorized into increasing quintiles for each of the three food outlet categories.", "We calculated adjusted mean BMI for each quintile of retail density for the three food categories using cross-sectional, multilevel modeling (Diez Roux 2000) with the Proc Mixed procedure (Singer 1998) in SAS (version 9; SAS Institute Inc., Cary, NC). Because each of the neighborhood-level measures was generated for each individual’s address, we treated the neighborhood variables as level 1 variables. We expected intercorrelations among individuals, reflecting similarity among those living in proximity to each other, to exist across a geographic scale larger than the half-mile buffers. To account for this, we estimated our multilevel models with community district as a level 2 clustering factor. 
New York City’s 59 community districts correspond to named areas such as the Upper West Side and Chinatown. Although we measured no predictive variables at level 2, the use of this nested data structure allowed for valid estimation of standard errors. We adjusted analyses for individual and neighborhood sociodemographic characteristics and then for the five neighborhood walkability measures. We evaluated the five walkability measures as possible confounders individually and in combination. We mutually adjusted all analyses for quintiles of each of the three food categories.\nWe calculated separate prevalence ratios for overweight and obesity compared with normal weight for increasing quintiles of retail food density categories using Poisson regression with robust variance estimates (Spiegelman and Hertzmark 2005). We used community district as a clustering variable to correct the standard errors for intercorrelations among individuals across larger areas of the city and to generate robust SE estimates." ]
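The methods above describe computing each food category's outlet density per square kilometer within a subject's network buffer and then categorizing subjects into quintiles of that density. A minimal sketch of the quintile step, using a simple 20/40/60/80th-percentile cutpoint rule and hypothetical densities (the values, and the tie-handling rule, are assumptions for illustration, not the study's exact procedure):

```python
def assign_quintiles(densities):
    """Assign each subject a quintile (1-5) of outlet density.

    Cutpoints are the 20/40/60/80th percentiles of the sorted values,
    using a simple index rule; ties at a cutpoint go to the higher bin.
    """
    s = sorted(densities)
    n = len(s)
    cuts = [s[int(q * n)] for q in (0.2, 0.4, 0.6, 0.8)]
    # A subject's quintile is 1 plus the number of cutpoints at or below it
    return [1 + sum(d >= c for c in cuts) for d in densities]

# Hypothetical healthy-food outlet densities (stores/km^2) for ten subjects
dens = [0.0, 0.5, 1.1, 2.0, 3.2, 4.4, 6.0, 7.5, 9.9, 11.0]
q = assign_quintiles(dens)
print(q)  # two subjects per quintile for this even spread
```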
[ null, null, null ]
[ "Materials and Methods", "Neighborhood measures", "Food environment measures", "Statistical analysis", "Results", "Discussion" ]
[ "The analyses presented here employed data collected during the baseline enrollment of subjects for the New York Cancer Project, a study of residents of New York City and the surrounding suburbs that has been described extensively elsewhere (Mitchell et al. 2004; Rundle et al. 2007). Of the total sample, 14,147 individuals had geocoded addresses falling within New York City boundaries, and 13,102 had a BMI < 70 and complete data for objectively measured height and weight and questionnaire measures of age, race and ethnicity, sex, income, and educational attainment. Table 1 shows descriptive statistics for individual characteristics. The demographic profile and spatial distribution of the sample are similar to those derived from the 2000 U.S. Census and from the 2002 New York City Community Health Survey (Rundle et al. 2007). Analyses of BMI, individual demographic variables, and appended neighborhood characteristics were approved by the Columbia University Medical Center Institutional Review Board.\n Neighborhood measures We defined a study subject’s neighborhood as a half-mile (805 m) “network buffer” around his or her residential address, comprising locations reachable within a half-mile walk along the street network. Most urban planners assume that a half-mile is a walkable distance (Agrawal et al. 2008; Calthorpe 1993; Cervero 2006). We constructed sociodemographic and built environment measures, including food environment variables, for each individual’s neighborhood. To control for the effects of neighborhood composition on BMI, our models adjusted for the proportion of residents below the federal poverty line, proportion black, and proportion Hispanic using data from the 2000 U.S. Census summary file 3 (U.S. 
Census Bureau 2000).\nWe assessed the possible confounding effects of the following measures of neighborhood walkability: population density, density of bus and subway stops, percentage of commuters using public transit, land-use mix, and proportion of land zoned to permit commercial development (Rundle et al. 2007). We calculated population density, expressed as persons per square kilometer of land area, and the percentage of commuters using public transit from 2000 U.S. Census data (U.S. Census Bureau 2000). We based the numbers of bus and subway stops per square kilometer on data from the Department of City Planning (DCP). We constructed the proportion of the buffer zoned to permit commercial development and a measure of residential/commercial land-use mix using the Primary Land Use Tax Lot Output data, a parcel-level data set also available from DCP. Land-use mix is an index of the extent to which a neighborhood supports both commercial and residential land uses, with the index tending toward 1 as the mix of residential and commercial floor area approaches a 1:1 ratio.\nWe defined a study subject’s neighborhood as a half-mile (805 m) “network buffer” around his or her residential address, comprising locations reachable within a half-mile walk along the street network. Most urban planners assume that a half-mile is a walkable distance (Agrawal et al. 2008; Calthorpe 1993; Cervero 2006). We constructed sociodemographic and built environment measures, including food environment variables, for each individual’s neighborhood. To control for the effects of neighborhood composition on BMI, our models adjusted for the proportion of residents below the federal poverty line, proportion black, and proportion Hispanic using data from the 2000 U.S. Census summary file 3 (U.S. 
Census Bureau 2000).\nWe assessed the possible confounding effects of the following measures of neighborhood walkability: population density, density of bus and subway stops, percentage of commuters using public transit, land-use mix, and proportion of land zoned to permit commercial development (Rundle et al. 2007). We calculated population density, expressed as persons per square kilometer of land area, and the percentage of commuters using public transit from 2000 U.S. Census data (U.S. Census Bureau 2000). We based the numbers of bus and subway stops per square kilometer on data from the Department of City Planning (DCP). We constructed the proportion of the buffer zoned to permit commercial development and a measure of residential/commercial land-use mix using the Primary Land Use Tax Lot Output data, a parcel-level data set also available from DCP. Land-use mix is an index of the extent to which a neighborhood supports both commercial and residential land uses, with the index tending toward 1 as the mix of residential and commercial floor area approaches a 1:1 ratio.\n Food environment measures We derived food environment measures from 2001 data purchased from Dun & Bradstreet (D&B; unpublished data). The data include business name, geocoded location, and detailed Standard Industrial Classification (SIC) industry codes (http://www.osha.gov/pls/imis/sic_manual.html) for food establishments. A priori, we grouped food outlets into categories hypothesized to provide BMI-healthy or BMI-unhealthy food, with one intermediate category for food outlets whose classification was uncertain. We classified supermarkets and fruit and vegetable markets as BMI-healthy based on evidence associating proximity to supermarkets with better dietary patterns and lower BMI (Laraia et al. 2004; Morland et al. 2002, 2006; Zenk et al. 2005), lower fruit and vegetable prices with slower growth in BMI (Powell et al. 
2007; Sturm and Datar 2005), and daily vegetable consumption with lower rates of obesity (Lahti-Koski et al. 2002). Although supermarkets sell a range of food, including both healthy and unhealthy options, we consider them healthy food outlets because they offer local residents the opportunity to purchase healthy food. No evidence is available linking natural food stores to diet or BMI, but food products typically available at natural food stores tend to be healthier; thus, we also categorized natural food stores as BMI-healthy food outlets.\nThe category of BMI-unhealthy food outlets included fast-food restaurants, a choice based on extensive evidence linking fast-food consumption with high energy intake, fat intake, BMI, and weight gain (Befort et al. 2006; Bowman and Vinyard 2004; Duerksen et al. 2007; Duffey et al. 2007; French et al. 2001; Jeffery et al. 2006; Jeffery and French 1998; Thompson et al. 2004). The BMI-unhealthy food index also included convenience stores (Morland et al. 2006) and meat markets (Gillis and Bar-Or 2003; Lahti-Koski et al. 2002). We classified pizzerias, bakeries, and candy and nut stores as BMI-unhealthy based on the energy density of the types of foods sold there. Because “bodegas” or very small grocery stores tend to sell energy-dense foods and few fruits and vegetables, they were classed as BMI-unhealthy (Kaufman and Karpati 2007).\nThe BMI-intermediate category comprised food outlets for which evidence was insufficient for placement in the other two categories. This category included non-fast-food restaurants—that is, restaurants excluding fast food and pizzerias. Although eating food prepared away from home has sometimes been associated with poor diet and higher weight (Gillis and Bar-Or 2003; Guthrie et al. 2002; Yao et al. 2003), research on consumption of food from non-fast-food restaurants has found no effect on weight or weight gain (Duffey et al. 2007; Jeffery et al. 2006; Thompson et al. 
2004), and one study found higher vegetable consumption among adolescents who ate more frequently at non-fast-food restaurants (Befort et al. 2006). The intermediate category also includes medium-sized grocery stores and specialty stores, as well as fish markets. Although some evidence associates fish intake with weight loss (Thorsdottir et al. 2007), fish markets in New York City often sell fried fish and fried seafood for immediate consumption; thus, the implication of this food outlet type for weight is unclear.\nWe identified most food outlet types by SIC code number alone: fruit and vegetable markets (#5431), natural or health food stores (#549901), fish markets (#542101), specialty food stores (#5451 and #5499, excluding #549901), convenience stores (#541102), bakeries (#5461), candy and nut stores (#5441), and meat markets (#542102). We distinguished three categories of grocery stores, excluding convenience stores. We identified “supermarkets” as grocery stores (#5411) with at least $2 million in annual sales or, for establishments with missing data on annual sales, at least 18 employees. (Among establishments with annual sales data, 18 employees was the threshold at which at least half had annual sales of ≥ $2 million.) “Medium-sized grocery stores” were nonsupermarket groceries with at least five employees. “Bodegas” were grocery stores with fewer than five employees. We identified national-chain fast-food restaurants through text searches in the D&B “company name” and “tradestyle” fields for names appearing in Technomic Inc.’s list of the top 100 limited-service chain brands (Technomic Inc. 2006). We identified as local fast food those restaurants that were not already identified as a national-chain fast-food restaurant and that had an SIC code indicating fast food (#58120300, #58120307, or #58120308), as well as the restaurants with names matching those on this list of local fast-food restaurants. 
We identified non-fast-food restaurants with “pizza” or “pizzeria” in their name, or with SIC codes of #58120600, #58120601, or #58120602, as pizzerias. We categorized all other establishments with an SIC code of 5812 as non-fast-food restaurants.\nThe density per square kilometer of establishments falling within each of these three categories was calculated for each subject’s unique network buffer. Subjects were then categorized into increasing quintiles for each of the three food outlet categories.\nWe derived food environment measures from 2001 data purchased from Dun & Bradstreet (D&B; unpublished data). The data include business name, geocoded location, and detailed Standard Industrial Classification (SIC) industry codes (http://www.osha.gov/pls/imis/sic_manual.html) for food establishments. A priori, we grouped food outlets into categories hypothesized to provide BMI-healthy or BMI-unhealthy food, with one intermediate category for food outlets whose classification was uncertain. We classified supermarkets and fruit and vegetable markets as BMI-healthy based on evidence associating proximity to supermarkets with better dietary patterns and lower BMI (Laraia et al. 2004; Morland et al. 2002, 2006; Zenk et al. 2005), lower fruit and vegetable prices with slower growth in BMI (Powell et al. 2007; Sturm and Datar 2005), and daily vegetable consumption with lower rates of obesity (Lahti-Koski et al. 2002). Although supermarkets sell a range of food, including both healthy and unhealthy options, we consider them healthy food outlets because they offer local residents the opportunity to purchase healthy food. 
No evidence is available linking natural food stores to diet or BMI, but food products typically available at natural food stores tend to be healthier; thus, we also categorized natural food stores as BMI-healthy food outlets.\nThe category of BMI-unhealthy food outlets included fast-food restaurants, a choice based on extensive evidence linking fast-food consumption with high energy intake, fat intake, BMI, and weight gain (Befort et al. 2006; Bowman and Vinyard 2004; Duerksen et al. 2007; Duffey et al. 2007; French et al. 2001; Jeffery et al. 2006; Jeffery and French 1998; Thompson et al. 2004). The BMI-unhealthy food index also included convenience stores (Morland et al. 2006) and meat markets (Gillis and Bar-Or 2003; Lahti-Koski et al. 2002). We classified pizzerias, bakeries, and candy and nut stores as BMI-unhealthy based on the energy density of the types of foods sold there. Because “bodegas” or very small grocery stores tend to sell energy-dense foods and few fruits and vegetables, they were classed as BMI-unhealthy (Kaufman and Karpati 2007).\nThe BMI-intermediate category comprised food outlets for which evidence was insufficient for placement in the other two categories. This category included non-fast-food restaurants—that is, restaurants excluding fast food and pizzerias. Although eating food prepared away from home has sometimes been associated with poor diet and higher weight (Gillis and Bar-Or 2003; Guthrie et al. 2002; Yao et al. 2003), research on consumption of food from non-fast-food restaurants has found no effect on weight or weight gain (Duffey et al. 2007; Jeffery et al. 2006; Thompson et al. 2004), and one study found higher vegetable consumption among adolescents who ate more frequently at non-fast-food restaurants (Befort et al. 2006). The intermediate category also includes medium-sized grocery stores and specialty stores, as well as fish markets. Although some evidence associates fish intake with weight loss (Thorsdottir et al. 
2007), fish markets in New York City often sell fried fish and fried seafood for immediate consumption; thus, the implication of this food outlet type for weight is unclear.\nWe identified most food outlet types by SIC code number alone: fruit and vegetable markets (#5431), natural or health food stores (#549901), fish markets (#542101), specialty food stores (#5451 and #5499, excluding #549901), convenience stores (#541102), bakeries (#5461), candy and nut stores (#5441), and meat markets (#542102). We distinguished three categories of grocery stores, excluding convenience stores. We identified “supermarkets” as grocery stores (#5411) with at least $2 million in annual sales or, for establishments with missing data on annual sales, at least 18 employees. (Among establishments with annual sales data, 18 employees was the threshold at which at least half had annual sales of ≥ $2 million.) “Medium-sized grocery stores” were nonsupermarket groceries with at least five employees. “Bodegas” were grocery stores with fewer than five employees. We identified national-chain fast-food restaurants through text searches in the D&B “company name” and “tradestyle” fields for names appearing in Technomic Inc.’s list of the top 100 limited-service chain brands (Technomic Inc. 2006). We identified as local fast food those restaurants that were not already identified as a national-chain fast-food restaurant and that had an SIC code indicating fast food (#58120300, #58120307, or #58120308), as well as the restaurants with names matching those on this list of local fast-food restaurants. We identified non-fast-food restaurants with “pizza” or “pizzeria” in their name, or with SIC codes of #58120600, #58120601, or #58120602, as pizzerias. We categorized all other establishments with an SIC code of 5812 as non-fast-food restaurants.\nThe density per square kilometer of establishments falling within each of these three categories was calculated for each subject’s unique network buffer. 
Subjects were then categorized into increasing quintiles for each of the three food outlet categories.\nStatistical analysis: We calculated adjusted mean BMI for each quintile of retail density for the three food categories using cross-sectional, multilevel modeling (Diez Roux 2000) with the Proc Mixed procedure (Singer 1998) in SAS (version 9; SAS Institute Inc., Cary, NC). Because each of the neighborhood-level measures was generated for each individual’s address, we treated the neighborhood variables as level 1 variables. We expected intercorrelations among individuals, reflecting similarity among those living in proximity to each other, to exist across a geographic scale larger than the half-mile buffers. To account for this, we estimated our multilevel models with community district as a level 2 clustering factor. New York City’s 59 community districts correspond to named areas such as the Upper West Side and Chinatown. Although we measured no predictive variables at level 2, the use of this nested data structure allowed for valid estimation of standard errors. We adjusted analyses for individual and neighborhood sociodemographic characteristics and then for the five neighborhood walkability measures. We evaluated the five walkability measures as possible confounders individually and in combination. We mutually adjusted all analyses for quintiles of each of the three food categories.\nWe calculated separate prevalence ratios for overweight and obesity compared with normal weight for increasing quintiles of retail food density categories using Poisson regression with robust variance estimates (Spiegelman and Hertzmark 2005). 
We used community district as a clustering variable to correct the standard errors for intercorrelations among individuals across larger areas of the city and to generate robust SE estimates.", "We defined a study subject’s neighborhood as a half-mile (805 m) “network buffer” around his or her residential address, comprising locations reachable within a half-mile walk along the street network. Most urban planners assume that a half-mile is a walkable distance (Agrawal et al. 2008; Calthorpe 1993; Cervero 2006). We constructed sociodemographic and built environment measures, including food environment variables, for each individual’s neighborhood. To control for the effects of neighborhood composition on BMI, our models adjusted for the proportion of residents below the federal poverty line, proportion black, and proportion Hispanic using data from the 2000 U.S. Census summary file 3 (U.S. Census Bureau 2000).\nWe assessed the possible confounding effects of the following measures of neighborhood walkability: population density, density of bus and subway stops, percentage of commuters using public transit, land-use mix, and proportion of land zoned to permit commercial development (Rundle et al. 2007). We calculated population density, expressed as persons per square kilometer of land area, and the percentage of commuters using public transit from 2000 U.S. Census data (U.S. Census Bureau 2000). We based the numbers of bus and subway stops per square kilometer on data from the Department of City Planning (DCP). We constructed the proportion of the buffer zoned to permit commercial development and a measure of residential/commercial land-use mix using the Primary Land Use Tax Lot Output data, a parcel-level data set also available from DCP. 
Land-use mix is an index of the extent to which a neighborhood supports both commercial and residential land uses, with the index tending toward 1 as the mix of residential and commercial floor area approaches a 1:1 ratio.", "We derived food environment measures from 2001 data purchased from Dun & Bradstreet (D&B; unpublished data). The data include business name, geocoded location, and detailed Standard Industrial Classification (SIC) industry codes (http://www.osha.gov/pls/imis/sic_manual.html) for food establishments. A priori, we grouped food outlets into categories hypothesized to provide BMI-healthy or BMI-unhealthy food, with one intermediate category for food outlets whose classification was uncertain. We classified supermarkets and fruit and vegetable markets as BMI-healthy based on evidence associating proximity to supermarkets with better dietary patterns and lower BMI (Laraia et al. 2004; Morland et al. 2002, 2006; Zenk et al. 2005), lower fruit and vegetable prices with slower growth in BMI (Powell et al. 2007; Sturm and Datar 2005), and daily vegetable consumption with lower rates of obesity (Lahti-Koski et al. 2002). Although supermarkets sell a range of food, including both healthy and unhealthy options, we consider them healthy food outlets because they offer local residents the opportunity to purchase healthy food. No evidence is available linking natural food stores to diet or BMI, but food products typically available at natural food stores tend to be healthier; thus, we also categorized natural food stores as BMI-healthy food outlets.\nThe category of BMI-unhealthy food outlets included fast-food restaurants, a choice based on extensive evidence linking fast-food consumption with high energy intake, fat intake, BMI, and weight gain (Befort et al. 2006; Bowman and Vinyard 2004; Duerksen et al. 2007; Duffey et al. 2007; French et al. 2001; Jeffery et al. 2006; Jeffery and French 1998; Thompson et al. 2004). 
The BMI-unhealthy food index also included convenience stores (Morland et al. 2006) and meat markets (Gillis and Bar-Or 2003; Lahti-Koski et al. 2002). We classified pizzerias, bakeries, and candy and nut stores as BMI-unhealthy based on the energy density of the types of foods sold there. Because “bodegas” or very small grocery stores tend to sell energy-dense foods and few fruits and vegetables, they were classed as BMI-unhealthy (Kaufman and Karpati 2007).\nThe BMI-intermediate category comprised food outlets for which evidence was insufficient for placement in the other two categories. This category included non-fast-food restaurants—that is, restaurants excluding fast food and pizzerias. Although eating food prepared away from home has sometimes been associated with poor diet and higher weight (Gillis and Bar-Or 2003; Guthrie et al. 2002; Yao et al. 2003), research on consumption of food from non-fast-food restaurants has found no effect on weight or weight gain (Duffey et al. 2007; Jeffery et al. 2006; Thompson et al. 2004), and one study found higher vegetable consumption among adolescents who ate more frequently at non-fast-food restaurants (Befort et al. 2006). The intermediate category also includes medium-sized grocery stores and specialty stores, as well as fish markets. Although some evidence associates fish intake with weight loss (Thorsdottir et al. 2007), fish markets in New York City often sell fried fish and fried seafood for immediate consumption; thus, the implication of this food outlet type for weight is unclear.\nWe identified most food outlet types by SIC code number alone: fruit and vegetable markets (#5431), natural or health food stores (#549901), fish markets (#542101), specialty food stores (#5451 and #5499, excluding #549901), convenience stores (#541102), bakeries (#5461), candy and nut stores (#5441), and meat markets (#542102). We distinguished three categories of grocery stores, excluding convenience stores. 
We identified “supermarkets” as grocery stores (#5411) with at least $2 million in annual sales or, for establishments with missing data on annual sales, at least 18 employees. (Among establishments with annual sales data, 18 employees was the threshold at which at least half had annual sales of ≥ $2 million.) “Medium-sized grocery stores” were nonsupermarket groceries with at least five employees. “Bodegas” were grocery stores with fewer than five employees. We identified national-chain fast-food restaurants through text searches in the D&B “company name” and “tradestyle” fields for names appearing in Technomic Inc.’s list of the top 100 limited-service chain brands (Technomic Inc. 2006). We identified as local fast food those restaurants that were not already classified as national-chain fast food and that had an SIC code indicating fast food (#58120300, #58120307, or #58120308); restaurants whose names matched those identified in this way were also classified as local fast food. We identified restaurants with “pizza” or “pizzeria” in their name, or with SIC codes of #58120600, #58120601, or #58120602, as pizzerias. We categorized all other establishments with an SIC code of 5812 as non-fast-food restaurants.\nThe density per square kilometer of establishments falling within each of these three categories was calculated for each subject’s unique network buffer. Subjects were then categorized into increasing quintiles for each of the three food outlet categories.", "We calculated adjusted mean BMI for each quintile of retail density for the three food categories using cross-sectional, multilevel modeling (Diez Roux 2000) with the Proc Mixed procedure (Singer 1998) in SAS (version 9; SAS Institute Inc., Cary, NC). Because each of the neighborhood-level measures was generated for each individual’s address, we treated the neighborhood variables as level 1 variables. 
We expected intercorrelations among individuals, reflecting similarity among those living in proximity to each other, to exist across a geographic scale larger than the half-mile buffers. To account for this, we estimated our multilevel models with community district as a level 2 clustering factor. New York City’s 59 community districts correspond to named areas such as the Upper West Side and Chinatown. Although we measured no predictive variables at level 2, the use of this nested data structure allowed for valid estimation of standard errors. We adjusted analyses for individual and neighborhood sociodemographic characteristics and then for the five neighborhood walkability measures. We evaluated the five walkability measures as possible confounders individually and in combination. We mutually adjusted all analyses for quintiles of each of the three food categories.\nWe calculated separate prevalence ratios for overweight and obesity compared with normal weight for increasing quintiles of retail food density categories using Poisson regression with robust variance estimates (Spiegelman and Hertzmark 2005). We used community district as a clustering variable to correct the standard errors for intercorrelations among individuals across larger areas of the city and to generate robust SE estimates.", "The data set initially received from D&B included 32,949 retail food businesses for New York City. After correction of geocoded addresses and removal of duplicate records, businesses likely to be defunct, and records likely to represent back offices and corporate offices, the data set included 29,976 businesses, of which 29,858 fell within the bounds of study subjects’ neighborhoods. Table 2 displays descriptive statistics for the BMI-healthy, BMI-unhealthy, and BMI-intermediate categories as well as for specific food outlet types. Density of intermediate and unhealthy food outlets was much higher than density of healthy food outlets. 
Almost all study subjects lived within a half-mile of an unhealthy food outlet, with an average density of 31 such outlets per square kilometer. By contrast, only 82% lived within a half-mile of a healthy food outlet, with an average density of four outlets per square kilometer. Density measures for food outlet types were significantly correlated across neighborhoods, with correlation coefficients ranging from 0.38 (convenience stores and supermarkets) to 0.85 (non-fast-food restaurants and pizza restaurants).\nFigure 1 maps the density of BMI-healthy food outlets, expressed in outlets per square kilometer, across the city. Outlet density was highest in high-walkability areas of the city, such as Manhattan, and lowest in low-walkability areas, such as Staten Island. Outlet density also varied by neighborhood income and race/ethnic composition, with higher densities in affluent and predominantly white neighborhoods in the southern half of Manhattan and lower densities in the poor and predominantly black or Latino neighborhoods in the northern half of Manhattan and in the South Bronx. To reduce the risk of confounding, the multivariate analyses controlled statistically for individual-level race/ethnicity and education and neighborhood-level poverty rate and race/ethnic composition, as well as indices of neighborhood walkability, including population density and land-use mix.\nMultilevel analyses of the association between BMI and the food environment measures showed significant associations only with access to BMI-healthy food. We also assessed possible confounding effects of built environment variables. Population density, which has previously been inversely associated with BMI in analyses of the same data set, had an appreciable confounding effect, but further control for land-use mix, percent commercial area, and access to and neighborhood use of public transit did not alter the results. 
Table 3 shows adjusted mean BMI for each quintile of the three food categories and the median density of food outlets for each category; Figure 2 displays the association between healthy food outlet density and BMI based on this analysis. The adjusted mean BMI in the fifth quintile of healthy food was 0.80 units [95% confidence interval (CI), 0.27–1.32, p < 0.01] lower than in the first quintile of healthy food. Population density and land-use mix remained significantly inversely associated with BMI after controlling for measures of the food environment. Increasing density of the BMI-unhealthy and BMI-intermediate food categories was not associated with BMI, and analyses of selected subcategories of BMI-unhealthy food (fast food, pizzerias, and convenience stores) found no significant associations.\nBecause there was little difference in the adjusted mean BMI of individuals living in the first and second quintiles of BMI-healthy food density, we collapsed these two categories into a single reference category to increase statistical power for analyses of the prevalence of overweight and obesity. The reference category had a median density of 0.76 healthy food outlets per square kilometer. Table 4 shows the prevalence ratios for overweight and obesity by increasing density of healthy food outlets, increasing population density, and land-use mix. Controlling for population density and land-use mix, the prevalences of overweight and obesity were both lower among individuals with the highest density of healthy food outlets. Controlling for other features of the built environment did not alter the prevalence ratio for healthy food density.\nOur previous work showed that increasing land-use mix and population density were inversely associated with BMI; this association remained after control for the density of BMI-healthy, BMI-unhealthy, and BMI-intermediate food outlets (Rundle et al. 2007). 
The prevalence ratio for obesity comparing the fourth and first quartiles of land-use mix was 0.91 (95% CI, 0.86–0.97) and comparing the fourth and first quartiles of population density was 0.84 (95% CI, 0.73–0.96).", "The results presented here indicate that the food environment is significantly associated with body size net of individual and neighborhood characteristics and neighborhood walkability features. A higher local density of BMI-healthy food outlets was associated with a lower mean BMI, a lower prevalence of overweight, and a lower prevalence of obesity. BMI-unhealthy food stores and restaurants were far more abundant than healthy ones, but the density of these unhealthy food outlets was not significantly associated with BMI or with body size categories. Of studies relating the food environment to body size, this work is among the first to measure the food environment comprehensively and to account for the effects of other built environment factors associated with obesity. The apparent effect of the food environment, while modest, is net of the significant associations between indices of neighborhood walkability and BMI. Our prior work showed that built environment features related to walkability were associated with approximately a 10% difference in the prevalence of obesity (Rundle et al. 2007). Even after control for measures of the food environment, the estimated effects of these built environment variables remained and were of a similar magnitude. Considered together, food environment and neighborhood walkability may have a substantial effect on body size.\nAlthough the observed associations between BMI and the density of BMI-healthy food establishments were consistent with expectations, we had also hypothesized that increasing density of BMI-unhealthy food options would be positively associated with BMI. 
Because the density of unhealthy food outlets is correlated with commercial activity in general as well as other features of the urban landscape that promote pedestrian activity, we expected that associations between unhealthy food density and BMI had been masked in prior research and would be observed after control for such built environment features. Consistent with other studies in this area, however, we found no association between density of unhealthy food and BMI or obesity. This lack of association may reflect the ubiquity of unhealthy food in an urban environment; as Table 2 shows, virtually all New York City neighborhoods provide many opportunities to eat poorly. In addition, unhealthy convenience foods may be consumed near the workplace or during travel about the city, making the density of unhealthy foods in the residential neighborhood less relevant. Alternatively, the null findings may reflect undercounting of unhealthy food outlets in the most disadvantaged urban neighborhoods. As the case of New York City shows, the penetration of national-chain fast food is low in some of the poorest neighborhoods; this niche in the food environment is filled by inexpensive ethnic restaurants selling high-calorie take-out food (Graham et al. 2006). Better measures of the food environment may show an association of unhealthy food outlets with body size.\nOne limitation of this study and, indeed, of most studies on this topic is that our data are observational and cross-sectional. Observed associations may be attributable to self-selection of individuals into neighborhoods that support their preferred lifestyle; for instance, individuals who prefer to consume healthy foods may move to neighborhoods with more healthy food outlets. Conversely, retailers selling healthy foods may choose to locate in neighborhoods where they believe the population will be most receptive to their products. 
In addition, questions might be raised about two potential sources of error in the food environment measures. The first is incomplete coverage of the D&B data. Because the D&B data are used primarily for marketing purposes, coverage may be less complete in areas less attractive to marketers, such as low-income neighborhoods. For error in the D&B database to bias our results, it would have to be correlated with the spatial distribution of BMI. Our analyses control for neighborhood sociodemographic composition, which may be an important correlate of measurement error in the D&B data. Second, measurement error may be caused by misclassification of food outlets into the BMI-healthy, BMI-unhealthy, and BMI-intermediate categories. Some food outlets, such as fruit and vegetable markets, are internally relatively homogeneous, whereas others, such as grocery stores or non-fast-food restaurants, may have significant internal heterogeneity. Within-category heterogeneity in food selection may bias food environment coefficients toward zero or create interactions between neighborhood composition and food environment characteristics. Although the analyses controlled for neighborhood sociodemographic composition and for land-use mix and commercial space, variables that might be expected to influence the extent of within-category measurement error, measurement error remains a concern. A further limitation is the mismatch between the time period of the survey (2000–2002) and the time period of food environment measures (2001), population census measures (2000), and land-use and zoning data (2003); because neighborhood demographic and built environment characteristics typically change slowly, these discrepancies should not affect the results significantly. 
Limitations also include the lack of an audit to verify types of food sold in different types of stores.\nA distinctive feature of this study is its use of broad categories to characterize the food environment based on the existing literature. Although this analytic strategy sacrifices the opportunity to identify associations between specific food outlet types and BMI, it has several advantages. First, although some in the public health and medical communities, as well as the popular media, have focused on the contribution of fast food to the obesity epidemic, other types of food outlets also sell high-energy-density food; comprehensive measures of the food environment provide a more accurate account of the food choices available to urban residents (Stender et al. 2007; Wallis 2004). Second, density measures for the 14 individual food outlet categories are significantly correlated; reflecting this multi-collinearity, models including all 14 measures are quite unstable. Third, reducing the number of food outlet measures made it less likely that one would be significant simply by chance. Specific choices about how to group food outlet types can certainly be debated and can be tested in replication.\nThe research reported here adds to our knowledge about the relationship between the food environment and obesity with evidence that access to BMI-healthy food outlets such as supermarkets, fruit and vegetable markets, and natural food stores is inversely associated with obesity. This protective association is net of urban design features that promote pedestrian activity and lower BMI, as well as the density of other types of food outlets. Although not identifying a specific culprit within the retail food environment for the obesity epidemic, these analyses indicate that retail outlets providing opportunities for healthier food purchases are associated with lower BMI. 
If the results of our observational research are confirmed by future studies that permit causal inference, this evidence would suggest that increasing access to healthy food outlets is likely to do more to address the obesity epidemic than limiting unhealthy food outlets. Given the recent proliferation of initiatives to promote access to supermarkets, farmers markets, and fruit and vegetable stands and to limit fast-food outlets (Abdollah 2007; Lee 2007; Marter 2007), study of the causal relationship between the food environment and diet or body size should be a priority for future research." ]
[ "materials|methods", null, null, null, "results", "discussion" ]
[ "neighborhood studies", "obesity", "retail food environment", "walkability" ]
Materials and Methods: The analyses presented here employed data collected during the baseline enrollment of subjects for the New York Cancer Project, a study of residents of New York City and the surrounding suburbs that has been described extensively elsewhere (Mitchell et al. 2004; Rundle et al. 2007). Of the total sample, 14,147 individuals had geocoded addresses falling within New York City boundaries, and 13,102 had a BMI < 70 and complete data for objectively measured height and weight and questionnaire measures of age, race and ethnicity, sex, income, and educational attainment. Table 1 shows descriptive statistics for individual characteristics. The demographic profile and spatial distribution of the sample are similar to those derived from the 2000 U.S. Census and from the 2002 New York City Community Health Survey (Rundle et al. 2007). Analyses of BMI, individual demographic variables, and appended neighborhood characteristics were approved by the Columbia University Medical Center Institutional Review Board. Neighborhood measures: We defined a study subject’s neighborhood as a half-mile (805 m) “network buffer” around his or her residential address, comprising locations reachable within a half-mile walk along the street network. Most urban planners assume that a half-mile is a walkable distance (Agrawal et al. 2008; Calthorpe 1993; Cervero 2006). We constructed sociodemographic and built environment measures, including food environment variables, for each individual’s neighborhood. To control for the effects of neighborhood composition on BMI, our models adjusted for the proportion of residents below the federal poverty line, proportion black, and proportion Hispanic using data from the 2000 U.S. Census summary file 3 (U.S. Census Bureau 2000). 
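The half-mile network buffer described above amounts to a shortest-path search over the street network with an 805 m cutoff. The following is a minimal illustrative sketch, not the authors' actual GIS procedure; the toy street graph, node names, and segment lengths are invented for the example:

```python
import heapq

def network_buffer(adj, home, max_dist=805.0):
    """Return {node: distance} for all street-network nodes reachable
    within max_dist meters of `home` (Dijkstra with a distance cutoff)."""
    dist = {home: 0.0}
    heap = [(0.0, home)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry
        for nbr, length in adj.get(node, []):
            nd = d + length
            if nd <= max_dist and nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Hypothetical street network: adjacency list with segment lengths in meters.
streets = {
    "home": [("a", 400.0)],
    "a": [("home", 400.0), ("b", 300.0)],
    "b": [("a", 300.0), ("c", 300.0)],
    "c": [("b", 300.0)],
}
buffer_nodes = network_buffer(streets, "home")
# "c" lies 1,000 m away along the network, outside the half-mile buffer.
```

In practice the buffer polygon would be built in GIS from the reachable street segments; outlet densities are then counted within that polygon.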
We assessed the possible confounding effects of the following measures of neighborhood walkability: population density, density of bus and subway stops, percentage of commuters using public transit, land-use mix, and proportion of land zoned to permit commercial development (Rundle et al. 2007). We calculated population density, expressed as persons per square kilometer of land area, and the percentage of commuters using public transit from 2000 U.S. Census data (U.S. Census Bureau 2000). We based the numbers of bus and subway stops per square kilometer on data from the Department of City Planning (DCP). We constructed the proportion of the buffer zoned to permit commercial development and a measure of residential/commercial land-use mix using the Primary Land Use Tax Lot Output data, a parcel-level data set also available from DCP. Land-use mix is an index of the extent to which a neighborhood supports both commercial and residential land uses, with the index tending toward 1 as the mix of residential and commercial floor area approaches a 1:1 ratio. Food environment measures: We derived food environment measures from 2001 data purchased from Dun & Bradstreet (D&B; unpublished data). The data include business name, geocoded location, and detailed Standard Industrial Classification (SIC) industry codes (http://www.osha.gov/pls/imis/sic_manual.html) for food establishments. A priori, we grouped food outlets into categories hypothesized to provide BMI-healthy or BMI-unhealthy food, with one intermediate category for food outlets whose classification was uncertain. We classified supermarkets and fruit and vegetable markets as BMI-healthy based on evidence associating proximity to supermarkets with better dietary patterns and lower BMI (Laraia et al. 2004; Morland et al. 2002, 2006; Zenk et al. 2005), lower fruit and vegetable prices with slower growth in BMI (Powell et al. 
2007; Sturm and Datar 2005), and daily vegetable consumption with lower rates of obesity (Lahti-Koski et al. 2002). Although supermarkets sell a range of food, including both healthy and unhealthy options, we consider them healthy food outlets because they offer local residents the opportunity to purchase healthy food. No evidence is available linking natural food stores to diet or BMI, but food products typically available at natural food stores tend to be healthier; thus, we also categorized natural food stores as BMI-healthy food outlets. The category of BMI-unhealthy food outlets included fast-food restaurants, a choice based on extensive evidence linking fast-food consumption with high energy intake, fat intake, BMI, and weight gain (Befort et al. 2006; Bowman and Vinyard 2004; Duerksen et al. 2007; Duffey et al. 2007; French et al. 2001; Jeffery et al. 2006; Jeffery and French 1998; Thompson et al. 2004). The BMI-unhealthy food index also included convenience stores (Morland et al. 2006) and meat markets (Gillis and Bar-Or 2003; Lahti-Koski et al. 2002). We classified pizzerias, bakeries, and candy and nut stores as BMI-unhealthy based on the energy density of the types of foods sold there. Because “bodegas” or very small grocery stores tend to sell energy-dense foods and few fruits and vegetables, they were classed as BMI-unhealthy (Kaufman and Karpati 2007). The BMI-intermediate category comprised food outlets for which evidence was insufficient for placement in the other two categories. This category included non-fast-food restaurants—that is, restaurants excluding fast food and pizzerias. Although eating food prepared away from home has sometimes been associated with poor diet and higher weight (Gillis and Bar-Or 2003; Guthrie et al. 2002; Yao et al. 2003), research on consumption of food from non-fast-food restaurants has found no effect on weight or weight gain (Duffey et al. 2007; Jeffery et al. 2006; Thompson et al. 
2004), and one study found higher vegetable consumption among adolescents who ate more frequently at non-fast-food restaurants (Befort et al. 2006). The intermediate category also includes medium-sized grocery stores and specialty stores, as well as fish markets. Although some evidence associates fish intake with weight loss (Thorsdottir et al. 2007), fish markets in New York City often sell fried fish and fried seafood for immediate consumption; thus, the implication of this food outlet type for weight is unclear. We identified most food outlet types by SIC code number alone: fruit and vegetable markets (#5431), natural or health food stores (#549901), fish markets (#542101), specialty food stores (#5451 and #5499, excluding #549901), convenience stores (#541102), bakeries (#5461), candy and nut stores (#5441), and meat markets (#542102). We distinguished three categories of grocery stores, excluding convenience stores. We identified “supermarkets” as grocery stores (#5411) with at least $2 million in annual sales or, for establishments with missing data on annual sales, at least 18 employees. (Among establishments with annual sales data, 18 employees was the threshold at which at least half had annual sales of ≥ $2 million.) “Mediumsized grocery stores” were nonsupermarket groceries with at least five employees. “Bodegas” were grocery stores with fewer than five employees. We identified national-chain fast-food restaurants through text searches in the D&B “company name” and “tradestyle” fields for names appearing in Technomic Inc.’s list of the top 100 limited-service chain brands (Technomic Inc. 2006). We identified as local fast food those restaurants that were not already identified as a national-chain fast-food restaurant and that had an SIC code indicating fast food (#58120300, #58120307, or #58120308), as well as the restaurants with names matching those on this list of local fast-food restaurants. 
We identified non-fast-food restaurants with “pizza” or “pizzeria” in their name, or with SIC codes of #58120600, #58120601, or #58120602, as pizzerias. We categorized all other establishments with an SIC code of 5812 as non-fast-food restaurants. The density per square kilometer of establishments falling within each of these three categories was calculated for each subject’s unique network buffer. Subjects were then categorized into increasing quintiles for each of the three food outlet categories. We derived food environment measures from 2001 data purchased from Dun & Bradstreet (D&B; unpublished data). The data include business name, geocoded location, and detailed Standard Industrial Classification (SIC) industry codes (http://www.osha.gov/pls/imis/sic_manual.html) for food establishments. A priori, we grouped food outlets into categories hypothesized to provide BMI-healthy or BMI-unhealthy food, with one intermediate category for food outlets whose classification was uncertain. We classified supermarkets and fruit and vegetable markets as BMI-healthy based on evidence associating proximity to supermarkets with better dietary patterns and lower BMI (Laraia et al. 2004; Morland et al. 2002, 2006; Zenk et al. 2005), lower fruit and vegetable prices with slower growth in BMI (Powell et al. 2007; Sturm and Datar 2005), and daily vegetable consumption with lower rates of obesity (Lahti-Koski et al. 2002). Although supermarkets sell a range of food, including both healthy and unhealthy options, we consider them healthy food outlets because they offer local residents the opportunity to purchase healthy food. No evidence is available linking natural food stores to diet or BMI, but food products typically available at natural food stores tend to be healthier; thus, we also categorized natural food stores as BMI-healthy food outlets. 
The category of BMI-unhealthy food outlets included fast-food restaurants, a choice based on extensive evidence linking fast-food consumption with high energy intake, fat intake, BMI, and weight gain (Befort et al. 2006; Bowman and Vinyard 2004; Duerksen et al. 2007; Duffey et al. 2007; French et al. 2001; Jeffery et al. 2006; Jeffery and French 1998; Thompson et al. 2004). The BMI-unhealthy food index also included convenience stores (Morland et al. 2006) and meat markets (Gillis and Bar-Or 2003; Lahti-Koski et al. 2002). We classified pizzerias, bakeries, and candy and nut stores as BMI-unhealthy based on the energy density of the types of foods sold there. Because “bodegas” or very small grocery stores tend to sell energy-dense foods and few fruits and vegetables, they were classed as BMI-unhealthy (Kaufman and Karpati 2007). The BMI-intermediate category comprised food outlets for which evidence was insufficient for placement in the other two categories. This category included non-fast-food restaurants—that is, restaurants excluding fast food and pizzerias. Although eating food prepared away from home has sometimes been associated with poor diet and higher weight (Gillis and Bar-Or 2003; Guthrie et al. 2002; Yao et al. 2003), research on consumption of food from non-fast-food restaurants has found no effect on weight or weight gain (Duffey et al. 2007; Jeffery et al. 2006; Thompson et al. 2004), and one study found higher vegetable consumption among adolescents who ate more frequently at non-fast-food restaurants (Befort et al. 2006). The intermediate category also includes medium-sized grocery stores and specialty stores, as well as fish markets. Although some evidence associates fish intake with weight loss (Thorsdottir et al. 2007), fish markets in New York City often sell fried fish and fried seafood for immediate consumption; thus, the implication of this food outlet type for weight is unclear. 
We identified most food outlet types by SIC code number alone: fruit and vegetable markets (#5431), natural or health food stores (#549901), fish markets (#542101), specialty food stores (#5451 and #5499, excluding #549901), convenience stores (#541102), bakeries (#5461), candy and nut stores (#5441), and meat markets (#542102). We distinguished three categories of grocery stores, excluding convenience stores. We identified “supermarkets” as grocery stores (#5411) with at least $2 million in annual sales or, for establishments with missing data on annual sales, at least 18 employees. (Among establishments with annual sales data, 18 employees was the threshold at which at least half had annual sales of ≥ $2 million.) “Mediumsized grocery stores” were nonsupermarket groceries with at least five employees. “Bodegas” were grocery stores with fewer than five employees. We identified national-chain fast-food restaurants through text searches in the D&B “company name” and “tradestyle” fields for names appearing in Technomic Inc.’s list of the top 100 limited-service chain brands (Technomic Inc. 2006). We identified as local fast food those restaurants that were not already identified as a national-chain fast-food restaurant and that had an SIC code indicating fast food (#58120300, #58120307, or #58120308), as well as the restaurants with names matching those on this list of local fast-food restaurants. We identified non-fast-food restaurants with “pizza” or “pizzeria” in their name, or with SIC codes of #58120600, #58120601, or #58120602, as pizzerias. We categorized all other establishments with an SIC code of 5812 as non-fast-food restaurants. The density per square kilometer of establishments falling within each of these three categories was calculated for each subject’s unique network buffer. Subjects were then categorized into increasing quintiles for each of the three food outlet categories. 
Statistical analysis We calculated adjusted mean BMI for each quintile of retail density for the three food categories using cross-sectional, multilevel modeling (Diez Roux 2000) with the Proc Mixed procedure (Singer 1998) in SAS (version 9; SAS Institute Inc., Cary, NC). Because each of the neighborhood-level measures was generated for each individual’s address, we treated the neighborhood variables as level 1 variables. We expected intercor-relations among individuals, reflecting similarity among those living in proximity to each other, to exist across a geographic scale larger than the half-mile buffers. To account for this, we estimated our multilevel models with community district as a level 2 clustering factor. New York City’s 59 community districts correspond to named areas such as the Upper West Side and Chinatown. Although we measured no predictive variables at level 2, the use of this nested data structure allowed for valid estimation of standard errors. We adjusted analyses for individual and neighborhood sociodemographic characteristics and then for the five neighborhood walkability measures. We evaluated the five walkability measures as possible confounders individually and in combination. We mutually adjusted all analyses for quintiles of each of the three food categories. We calculated separate prevalence ratios for overweight and obesity compared with normal weight for increasing quintiles of retail food density categories using Poisson regression with robust variance estimates (Spiegelman and Hertzmark 2005). We used community district as a clustering variable to correct the standard errors for intercorrelations among individuals across larger areas of the city and to generate robust SE estimates. We calculated adjusted mean BMI for each quintile of retail density for the three food categories using cross-sectional, multilevel modeling (Diez Roux 2000) with the Proc Mixed procedure (Singer 1998) in SAS (version 9; SAS Institute Inc., Cary, NC). 
Because each of the neighborhood-level measures was generated for each individual’s address, we treated the neighborhood variables as level 1 variables. We expected intercor-relations among individuals, reflecting similarity among those living in proximity to each other, to exist across a geographic scale larger than the half-mile buffers. To account for this, we estimated our multilevel models with community district as a level 2 clustering factor. New York City’s 59 community districts correspond to named areas such as the Upper West Side and Chinatown. Although we measured no predictive variables at level 2, the use of this nested data structure allowed for valid estimation of standard errors. We adjusted analyses for individual and neighborhood sociodemographic characteristics and then for the five neighborhood walkability measures. We evaluated the five walkability measures as possible confounders individually and in combination. We mutually adjusted all analyses for quintiles of each of the three food categories. We calculated separate prevalence ratios for overweight and obesity compared with normal weight for increasing quintiles of retail food density categories using Poisson regression with robust variance estimates (Spiegelman and Hertzmark 2005). We used community district as a clustering variable to correct the standard errors for intercorrelations among individuals across larger areas of the city and to generate robust SE estimates. Neighborhood measures: We defined a study subject’s neighborhood as a half-mile (805 m) “network buffer” around his or her residential address, comprising locations reachable within a half-mile walk along the street network. Most urban planners assume that a half-mile is a walkable distance (Agrawal et al. 2008; Calthorpe 1993; Cervero 2006). We constructed sociodemographic and built environment measures, including food environment variables, for each individual’s neighborhood. 
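The network-buffer idea amounts to a shortest-path search over the street graph, keeping every location reachable within 805 m. A minimal stdlib-only sketch of that idea (the toy street graph and its distances are hypothetical, not the study's GIS procedure):

```python
import heapq

def network_buffer(graph, origin, max_dist=805):
    """Return nodes reachable from `origin` within `max_dist` meters
    along the street network (Dijkstra over edge lengths)."""
    dist = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale queue entry
        for neighbor, length in graph.get(node, []):
            nd = d + length
            if nd <= max_dist and nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return set(dist)

# Toy street graph: intersection -> [(neighbor, segment length in meters)]
streets = {
    "home": [("a", 400), ("b", 700)],
    "a": [("home", 400), ("c", 300)],
    "b": [("home", 700), ("c", 200)],
    "c": [("a", 300), ("b", 200)],
}
print(sorted(network_buffer(streets, "home")))  # ['a', 'b', 'c', 'home']
```

Note that a network buffer is smaller than a straight-line (Euclidean) half-mile circle, since travel is constrained to the street grid.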
To control for the effects of neighborhood composition on BMI, our models adjusted for the proportion of residents below the federal poverty line, proportion black, and proportion Hispanic using data from the 2000 U.S. Census summary file 3 (U.S. Census Bureau 2000). We assessed the possible confounding effects of the following measures of neighborhood walkability: population density, density of bus and subway stops, percentage of commuters using public transit, land-use mix, and proportion of land zoned to permit commercial development (Rundle et al. 2007). We calculated population density, expressed as persons per square kilometer of land area, and the percentage of commuters using public transit from 2000 U.S. Census data (U.S. Census Bureau 2000). We based the numbers of bus and subway stops per square kilometer on data from the Department of City Planning (DCP). We constructed the proportion of the buffer zoned to permit commercial development and a measure of residential/commercial land-use mix using the Primary Land Use Tax Lot Output data, a parcel-level data set also available from DCP. Land-use mix is an index of the extent to which a neighborhood supports both commercial and residential land uses, with the index tending toward 1 as the mix of residential and commercial floor area approaches a 1:1 ratio. Food environment measures: We derived food environment measures from 2001 data purchased from Dun & Bradstreet (D&B; unpublished data). The data include business name, geocoded location, and detailed Standard Industrial Classification (SIC) industry codes (http://www.osha.gov/pls/imis/sic_manual.html) for food establishments. A priori, we grouped food outlets into categories hypothesized to provide BMI-healthy or BMI-unhealthy food, with one intermediate category for food outlets whose classification was uncertain.
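The text describes the land-use mix index only behaviorally (tending toward 1 as residential and commercial floor area approach a 1:1 ratio) without giving the exact formula. One simple index with that stated behavior can be sketched as follows; the study's actual formulation may differ:

```python
def land_use_mix(residential_area, commercial_area):
    """Mix index in [0, 1]: equals 1 when residential and commercial
    floor area are balanced 1:1, and approaches 0 as one use dominates.
    (A simple formulation consistent with the text's description, not
    necessarily the study's exact index.)"""
    total = residential_area + commercial_area
    if total == 0:
        return 0.0  # no floor area of either type
    return 1.0 - abs(residential_area - commercial_area) / total

print(land_use_mix(5000, 5000))  # balanced 1:1 -> 1.0
print(land_use_mix(9000, 1000))  # residential-dominated -> 0.2
```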
We classified supermarkets and fruit and vegetable markets as BMI-healthy based on evidence associating proximity to supermarkets with better dietary patterns and lower BMI (Laraia et al. 2004; Morland et al. 2002, 2006; Zenk et al. 2005), lower fruit and vegetable prices with slower growth in BMI (Powell et al. 2007; Sturm and Datar 2005), and daily vegetable consumption with lower rates of obesity (Lahti-Koski et al. 2002). Although supermarkets sell a range of food, including both healthy and unhealthy options, we consider them healthy food outlets because they offer local residents the opportunity to purchase healthy food. No evidence is available linking natural food stores to diet or BMI, but food products typically available at natural food stores tend to be healthier; thus, we also categorized natural food stores as BMI-healthy food outlets. The category of BMI-unhealthy food outlets included fast-food restaurants, a choice based on extensive evidence linking fast-food consumption with high energy intake, fat intake, BMI, and weight gain (Befort et al. 2006; Bowman and Vinyard 2004; Duerksen et al. 2007; Duffey et al. 2007; French et al. 2001; Jeffery et al. 2006; Jeffery and French 1998; Thompson et al. 2004). The BMI-unhealthy food index also included convenience stores (Morland et al. 2006) and meat markets (Gillis and Bar-Or 2003; Lahti-Koski et al. 2002). We classified pizzerias, bakeries, and candy and nut stores as BMI-unhealthy based on the energy density of the types of foods sold there. Because “bodegas” or very small grocery stores tend to sell energy-dense foods and few fruits and vegetables, they were classed as BMI-unhealthy (Kaufman and Karpati 2007). The BMI-intermediate category comprised food outlets for which evidence was insufficient for placement in the other two categories. This category included non-fast-food restaurants—that is, restaurants excluding fast food and pizzerias. 
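Collecting the assignments described here and in the following paragraphs, the a-priori grouping can be encoded as a lookup table over the 14 outlet types (the key names are illustrative; membership is taken from the text):

```python
# A-priori grouping of the 14 food outlet types, per the text.
OUTLET_CATEGORY = {
    # BMI-healthy
    "supermarket": "healthy",
    "fruit_vegetable_market": "healthy",
    "natural_food_store": "healthy",
    # BMI-unhealthy
    "fast_food": "unhealthy",
    "convenience_store": "unhealthy",
    "meat_market": "unhealthy",
    "pizzeria": "unhealthy",
    "bakery": "unhealthy",
    "candy_nut_store": "unhealthy",
    "bodega": "unhealthy",
    # BMI-intermediate
    "non_fast_food_restaurant": "intermediate",
    "medium_grocery": "intermediate",
    "specialty_food_store": "intermediate",
    "fish_market": "intermediate",
}

def category_counts(outlets):
    """Count the outlets in each BMI category for one neighborhood buffer."""
    counts = {"healthy": 0, "unhealthy": 0, "intermediate": 0}
    for outlet_type in outlets:
        counts[OUTLET_CATEGORY[outlet_type]] += 1
    return counts

print(category_counts(["supermarket", "bodega", "pizzeria", "fish_market"]))
# {'healthy': 1, 'unhealthy': 2, 'intermediate': 1}
```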
Although eating food prepared away from home has sometimes been associated with poor diet and higher weight (Gillis and Bar-Or 2003; Guthrie et al. 2002; Yao et al. 2003), research on consumption of food from non-fast-food restaurants has found no effect on weight or weight gain (Duffey et al. 2007; Jeffery et al. 2006; Thompson et al. 2004), and one study found higher vegetable consumption among adolescents who ate more frequently at non-fast-food restaurants (Befort et al. 2006). The intermediate category also included medium-sized grocery stores and specialty stores, as well as fish markets. Although some evidence associates fish intake with weight loss (Thorsdottir et al. 2007), fish markets in New York City often sell fried fish and fried seafood for immediate consumption; thus, the implication of this food outlet type for weight is unclear. We identified most food outlet types by SIC code number alone: fruit and vegetable markets (#5431), natural or health food stores (#549901), fish markets (#542101), specialty food stores (#5451 and #5499, excluding #549901), convenience stores (#541102), bakeries (#5461), candy and nut stores (#5441), and meat markets (#542102). We distinguished three categories of grocery stores, excluding convenience stores. We identified “supermarkets” as grocery stores (#5411) with at least $2 million in annual sales or, for establishments with missing data on annual sales, at least 18 employees. (Among establishments with annual sales data, 18 employees was the threshold at which at least half had annual sales of ≥ $2 million.) “Medium-sized grocery stores” were nonsupermarket groceries with at least five employees. “Bodegas” were grocery stores with fewer than five employees. We identified national-chain fast-food restaurants through text searches in the D&B “company name” and “tradestyle” fields for names appearing in Technomic Inc.’s list of the top 100 limited-service chain brands (Technomic Inc. 2006).
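The grocery-store tiers above reduce to a small decision procedure: annual sales decide supermarket status when available, an 18-employee proxy substitutes when sales are missing, and employee count separates medium groceries from bodegas. A sketch of those rules (the fallback to "bodega" when both fields are missing is an assumption; the text does not specify that case):

```python
def classify_grocery(annual_sales=None, employees=None):
    """Tier a grocery store (SIC 5411, convenience stores excluded)
    per the rules in the text: 'supermarket' at >= $2M annual sales,
    or >= 18 employees when sales data are missing; 'medium grocery'
    at >= 5 employees; otherwise 'bodega'."""
    if annual_sales is not None:
        if annual_sales >= 2_000_000:
            return "supermarket"
    elif employees is not None and employees >= 18:
        return "supermarket"  # employee-count proxy for missing sales
    if employees is not None and employees >= 5:
        return "medium grocery"
    return "bodega"  # assumption: default tier when data are missing

print(classify_grocery(annual_sales=3_500_000, employees=12))  # supermarket
print(classify_grocery(annual_sales=None, employees=20))       # supermarket
print(classify_grocery(annual_sales=800_000, employees=7))     # medium grocery
print(classify_grocery(annual_sales=300_000, employees=3))     # bodega
```

Note that under these rules the 18-employee proxy applies only when sales data are missing: a store reporting $800,000 in sales is not a supermarket even with 20 employees.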
We identified as local fast food those restaurants that were not already identified as a national-chain fast-food restaurant and that had an SIC code indicating fast food (#58120300, #58120307, or #58120308), as well as the restaurants with names matching those on this list of local fast-food restaurants. Among the remaining restaurants, we identified those with “pizza” or “pizzeria” in their name, or with SIC codes of #58120600, #58120601, or #58120602, as pizzerias. We categorized all other establishments with an SIC code of 5812 as non-fast-food restaurants. The density per square kilometer of establishments falling within each of these three categories was calculated for each subject’s unique network buffer. Subjects were then categorized into increasing quintiles for each of the three food outlet categories. Statistical analysis: We calculated adjusted mean BMI for each quintile of retail density for the three food categories using cross-sectional, multilevel modeling (Diez Roux 2000) with the Proc Mixed procedure (Singer 1998) in SAS (version 9; SAS Institute Inc., Cary, NC). Because each of the neighborhood-level measures was generated for each individual’s address, we treated the neighborhood variables as level 1 variables. We expected intercorrelations among individuals, reflecting similarity among those living in proximity to each other, to exist across a geographic scale larger than the half-mile buffers. To account for this, we estimated our multilevel models with community district as a level 2 clustering factor. New York City’s 59 community districts correspond to named areas such as the Upper West Side and Chinatown. Although we measured no predictive variables at level 2, the use of this nested data structure allowed for valid estimation of standard errors. We adjusted analyses for individual and neighborhood sociodemographic characteristics and then for the five neighborhood walkability measures.
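The final exposure-coding step, converting each buffer's outlet counts into densities per square kilometer and then into quintiles, can be sketched as follows (the sample densities are hypothetical; density itself is just count divided by buffer land area in km²):

```python
import statistics

def quintile_cutpoints(values):
    """Four cut points dividing `values` into quintiles
    (statistics.quantiles uses the 'exclusive' method by default)."""
    return statistics.quantiles(values, n=5)

def assign_quintile(value, cuts):
    """1-based quintile for `value`, given ascending cut points."""
    q = 1
    for cut in cuts:
        if value > cut:
            q += 1
    return q

# Hypothetical healthy-food densities (outlets per km^2) for ten buffers
densities = [0.0, 0.5, 1.1, 1.8, 2.4, 3.0, 4.2, 5.5, 8.0, 11.0]
cuts = quintile_cutpoints(densities)
print([assign_quintile(d, cuts) for d in densities])
# [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```

In the study this assignment was done separately for each of the three food categories, so every subject carried three quintile codes.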
We evaluated the five walkability measures as possible confounders individually and in combination. We mutually adjusted all analyses for quintiles of each of the three food categories. We calculated separate prevalence ratios for overweight and obesity compared with normal weight for increasing quintiles of retail food density categories using Poisson regression with robust variance estimates (Spiegelman and Hertzmark 2005). We used community district as a clustering variable to correct the standard errors for intercorrelations among individuals across larger areas of the city and to generate robust SE estimates. Results: The data set initially received from D&B included 32,949 retail food businesses for New York City. After correction of geocoded addresses and removal of duplicate records, businesses likely to be defunct, and records likely to represent back offices and corporate offices, the data set included 29,976 businesses, of which 29,858 fell within the bounds of study subjects’ neighborhoods. Table 2 displays descriptive statistics for the BMI-healthy, BMI-unhealthy, and BMI-intermediate categories as well as for specific food outlet types. Density of intermediate and unhealthy food outlets was much higher than density of healthy food outlets. Almost all study subjects lived within a half-mile of an unhealthy food outlet, with an average density of 31 such outlets per square kilometer. By contrast, only 82% lived within a half-mile of a healthy food outlet, with an average density of four outlets per square kilometer. Density measures for food outlet types were significantly correlated across neighborhoods, with correlation coefficients ranging from 0.38 (convenience stores and supermarkets) to 0.85 (non-fast-food restaurants and pizza restaurants). Figure 1 maps the density of BMI-healthy food outlets, expressed in outlets per square kilometer, across the city. 
Outlet density was highest in high-walkability areas of the city, such as Manhattan, and lowest in low-walkability areas, such as Staten Island. Outlet density also varied by neighborhood income and race/ethnic composition, with higher densities in affluent and predominantly white neighborhoods in the southern half of Manhattan and lower densities in the poor and predominantly black or Latino neighborhoods in the northern half of Manhattan and in the South Bronx. To reduce the risk of confounding, the multivariate analyses controlled statistically for individual-level race/ethnicity and education and neighborhood-level poverty rate and race/ethnic composition, as well as indices of neighborhood walkability, including population density and land-use mix. Multilevel analyses of the association between BMI and the food environment measures showed significant associations only with access to BMI-healthy food. We also assessed possible confounding effects of built environment variables. Population density, which has previously been inversely associated with BMI in analyses of the same data set, had an appreciable confounding effect, but further control for land-use mix, percent commercial area, and access to and neighborhood use of public transit did not alter the results. Table 3 shows adjusted mean BMI for each quintile of the three food categories and the median density of food outlets for each category; Figure 2 displays the association between healthy food outlet density and BMI based on this analysis. The adjusted mean BMI in the fifth quintile of healthy food was 0.80 units [95% confidence interval (CI), 0.27–1.32, p < 0.01] lower than in the first quintile of healthy food. Population density and land-use mix remained significantly inversely associated with BMI after controlling for measures of the food environment. 
Increasing density of the BMI-unhealthy and BMI-intermediate food categories was not associated with BMI, and analyses of selected subcategories of BMI-unhealthy food (fast food, pizzerias, and convenience stores) found no significant associations. Because there was little difference in the adjusted mean BMI of individuals living in the first and second quintiles of BMI-healthy food density, we collapsed these two categories into a single reference category to increase statistical power for analyses of the prevalence of overweight and obesity. The reference category had a median density of 0.76 healthy food outlets per square kilometer. Table 4 shows the prevalence ratios for overweight and obesity by increasing density of healthy food outlets, increasing population density, and land-use mix. Controlling for population density and land-use mix, the prevalences of overweight and obesity were both lower among individuals with the highest density of healthy food outlets. Controlling for other features of the built environment did not alter the prevalence ratio for healthy food density. Our previous work showed that increasing land-use mix and population density were inversely associated with BMI; this association remained after control for the density of BMI-healthy, BMI-unhealthy, and BMI-intermediate food outlets (Rundle et al. 2007). The prevalence ratio for obesity comparing the fourth and first quartiles of land-use mix was 0.91 (95% CI, 0.86–0.97) and comparing the fourth and first quartiles of population density was 0.84 (95% CI, 0.73–0.96). Discussion: The results presented here indicate that the food environment is significantly associated with body size net of individual and neighborhood characteristics and neighborhood walkability features. A higher local density of BMI-healthy food outlets was associated with a lower mean BMI, a lower prevalence of overweight, and a lower prevalence of obesity.
BMI-unhealthy food stores and restaurants were far more abundant than healthy ones, but the density of these unhealthy food outlets was not significantly associated with BMI or with body size categories. Of studies relating the food environment to body size, this work is among the first to measure the food environment comprehensively and to account for the effects of other built environment factors associated with obesity. The apparent effect of the food environment, while modest, is net of the significant associations between indices of neighborhood walkability and BMI. Our prior work showed that built environment features related to walkability were associated with approximately a 10% difference in the prevalence of obesity (Rundle et al. 2007). Even after control for measures of the food environment, the estimated effects of these built environment variables remained and were of a similar magnitude. Considered together, food environment and neighborhood walkability may have a substantial effect on body size. Although the observed associations between BMI and the density of BMI-healthy food establishments were consistent with expectations, we had also hypothesized that increasing density of BMI-unhealthy food options would be positively associated with BMI. Because the density of unhealthy food outlets is correlated with commercial activity in general as well as other features of the urban landscape that promote pedestrian activity, we expected that associations between unhealthy food density and BMI had been masked in prior research and would be observed after control for such built environment features. Consistent with other studies in this area, however, we found no association between density of unhealthy food and BMI or obesity. This lack of association may reflect the ubiquity of unhealthy food in an urban environment; as Table 2 shows, virtually all New York City neighborhoods provide many opportunities to eat poorly. 
In addition, unhealthy convenience foods may be consumed near the workplace or during travel about the city, making the density of unhealthy foods in the residential neighborhood less relevant. Alternatively, the null findings may reflect undercounting of unhealthy food outlets in the most disadvantaged urban neighborhoods. As the case of New York City shows, the penetration of national-chain fast food is low in some of the poorest neighborhoods; this niche in the food environment is filled by inexpensive ethnic restaurants selling high-calorie take-out food (Graham et al. 2006). Better measures of the food environment may show an association of unhealthy food outlets with body size. One limitation of this study and, indeed, of most studies on this topic is that our data are observational and cross-sectional. Observed associations may be attributable to self-selection of individuals into neighborhoods that support their preferred lifestyle; for instance, individuals who prefer to consume healthy foods may move to neighborhoods with more healthy food outlets. Conversely, retailers selling healthy foods may choose to locate in neighborhoods where they believe the population will be most receptive to their products. In addition, questions might be raised about two potential sources of error in the food environment measures. The first is incomplete coverage of the D&B data. Because the D&B data are used primarily for marketing purposes, coverage may be less complete in areas less attractive to marketers, such as low-income neighborhoods. For error in the D&B database to bias our results, it would have to be correlated with the spatial distribution of BMI. Our analyses control for neighborhood sociodemographic composition, which may be an important correlate of measurement error in the D&B data. Second, measurement error may be caused by misclassification of food outlets into the BMI-healthy, BMI-unhealthy, and BMI-intermediate categories. 
Some food outlets, such as fruit and vegetable markets, are internally relatively homogeneous, whereas others, such as grocery stores or non-fast-food restaurants, may have significant internal heterogeneity. Within-category heterogeneity in food selection may bias food environment coefficients toward zero or create interactions between neighborhood composition and food environment characteristics. Although the analyses controlled for neighborhood sociodemographic composition and for land-use mix and commercial space, variables that might be expected to influence the extent of within-category measurement error, measurement error remains a concern. A further limitation is the mismatch between the time period of the survey (2000–2002) and the time period of food environment measures (2001), population census measures (2000), and land-use and zoning data (2003); because neighborhood demographic and built environment characteristics typically change slowly, these discrepancies should not affect the results significantly. Limitations also include the lack of an audit to verify types of food sold in different types of stores. A distinctive feature of this study is its use of broad categories to characterize the food environment based on the existing literature. Although this analytic strategy sacrifices the opportunity to identify associations between specific food outlet types and BMI, it has several advantages. First, although some in the public health and medical communities, as well as the popular media, have focused on the contribution of fast food to the obesity epidemic, other types of food outlets also sell high-energy-density food; comprehensive measures of the food environment provide a more accurate account of the food choices available to urban residents (Stender et al. 2007; Wallis 2004). 
Second, density measures for the 14 individual food outlet categories are significantly correlated; reflecting this multicollinearity, models including all 14 measures are quite unstable. Third, reducing the number of food outlet measures made it less likely that one would be significant simply by chance. Specific choices about how to group food outlet types can certainly be debated and can be tested in replication. The research reported here adds to our knowledge about the relationship between the food environment and obesity with evidence that access to BMI-healthy food outlets such as supermarkets, fruit and vegetable markets, and natural food stores is inversely associated with obesity. This protective association is net of urban design features that promote pedestrian activity and lower BMI, as well as the density of other types of food outlets. Although not identifying a specific culprit within the retail food environment for the obesity epidemic, these analyses indicate that retail outlets providing opportunities for healthier food purchases are associated with lower BMI. If the results of our observational research are confirmed by future studies that permit causal inference, this evidence would suggest that increasing access to healthy food outlets is likely to do more to address the obesity epidemic than limiting unhealthy food outlets. Given the recent proliferation of initiatives to promote access to supermarkets, farmers markets, and fruit and vegetable stands and to limit fast-food outlets (Abdollah 2007; Lee 2007; Marter 2007), study of the causal relationship between the food environment and diet or body size should be a priority for future research.
Background: Differences in the neighborhood food environment may contribute to disparities in obesity. Methods: This study employed a cross-sectional, multilevel analysis of BMI and obesity among 13,102 adult residents of New York City. We constructed measures of the food environment and walkability for the neighborhood, defined as a half-mile buffer around the study subject's home address. Results: Density of BMI-healthy food outlets (supermarkets, fruit and vegetable markets, and natural food stores) was inversely associated with BMI. Mean adjusted BMI was similar in the first two quintiles of healthy food density (0 and 1.13 stores/km2, respectively), but declined across the three higher quintiles and was 0.80 units lower [95% confidence interval (CI), 0.27-1.32] in the fifth quintile (10.98 stores/km2) than in the first. The prevalence ratio for obesity comparing the fifth quintile of healthy food density with the lowest two quintiles combined was 0.87 (95% CI, 0.78-0.97). These associations remained after control for two neighborhood walkability measures, population density and land-use mix. The prevalence ratio for obesity for the fourth versus first quartile of population density was 0.84 (95% CI, 0.73-0.96) and for land-use mix was 0.91 (95% CI, 0.86-0.97). Increasing density of food outlets categorized as BMI-unhealthy was not significantly associated with BMI or obesity. Conclusions: Access to BMI-healthy food stores is associated with lower BMI and lower prevalence of obesity.
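The prevalence ratios quoted in the abstract can be illustrated with a crude 2×2 calculation. This is a minimal sketch, not the paper's method (the study used adjusted multilevel models), and the counts below are hypothetical, chosen only to show how a prevalence ratio and its log-scale Wald 95% CI are derived:

```python
import math

def prevalence_ratio(a, n1, c, n0, z=1.96):
    """Prevalence ratio of exposed (a/n1) vs unexposed (c/n0),
    with a Wald confidence interval computed on the log scale."""
    pr = (a / n1) / (c / n0)
    # standard error of ln(PR) for cross-sectional / cohort counts
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts: 200 obese of 1,000 residents in the top quintile
# of healthy-food density vs 230 obese of 1,000 in the bottom quintiles.
pr, lo, hi = prevalence_ratio(200, 1000, 230, 1000)
print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these invented counts the crude PR comes out near the paper's adjusted estimate of 0.87, but the CI differs because the published interval reflects covariate adjustment and clustering that a 2×2 table cannot capture.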
null
null
7,383
300
6
[ "food", "bmi", "stores", "density", "fast food", "fast", "outlets", "neighborhood", "data", "healthy" ]
[ "test", "test" ]
null
null
null
null
null
null
[CONTENT] neighborhood studies | obesity | retail food environment | walkability [SUMMARY]
null
[CONTENT] neighborhood studies | obesity | retail food environment | walkability [SUMMARY]
null
null
null
[CONTENT] Body Mass Index | Cross-Sectional Studies | Environment | Food, Organic | Geography | Humans | New York City | Obesity | Prevalence | Urban Population | Walking [SUMMARY]
null
[CONTENT] Body Mass Index | Cross-Sectional Studies | Environment | Food, Organic | Geography | Humans | New York City | Obesity | Prevalence | Urban Population | Walking [SUMMARY]
null
null
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
null
null
[CONTENT] food | bmi | stores | density | fast food | fast | outlets | neighborhood | data | healthy [SUMMARY]
null
[CONTENT] food | bmi | stores | density | fast food | fast | outlets | neighborhood | data | healthy [SUMMARY]
null
null
null
[CONTENT] food | density | bmi | healthy | healthy food | outlets | population density | food outlets | population | mix [SUMMARY]
null
[CONTENT] food | bmi | density | stores | neighborhood | outlets | healthy | food outlets | fast food | fast [SUMMARY]
null
null
null
[CONTENT] BMI ||| BMI | first | two | 0 | 1.13 | three | 0.80 ||| 95% | CI | 0.27 | fifth | 10.98 | first ||| fifth | two | 0.87 | 95% | CI | 0.78-0.97 ||| two ||| fourth | first | 0.84 | 95% | CI | 0.73 | 0.91 | 95% | CI | 0.86 ||| BMI [SUMMARY]
null
[CONTENT] ||| BMI | 13,102 | New York City ||| half-mile ||| BMI ||| BMI | first | two | 0 | 1.13 | three | 0.80 ||| 95% | CI | 0.27 | fifth | 10.98 | first ||| fifth | two | 0.87 | 95% | CI | 0.78-0.97 ||| two ||| fourth | first | 0.84 | 95% | CI | 0.73 | 0.91 | 95% | CI | 0.86 ||| BMI ||| BMI [SUMMARY]
null
Magnitude of Cryptococcal Antigenemia among HIV Infected Patients at a Referral Hospital, Northwest Ethiopia.
30607049
Cryptococcosis is one of the common opportunistic fungal infections among HIV infected patients living in Sub-Saharan Africa, including Ethiopia. The magnitude of the disease at Felege Hiwot Referral Hospital (FHRH) in particular and in Ethiopia at large is not well explored.
BACKGROUND
A retrospective document review and analysis was done on records of 137 HIV infected patients who visited the FHRH ART clinic from 1 Sep to 30 Dec 2016 and had registered data on their sex, age, CD4 count and cryptococcal antigen screening result. The cryptococcal antigen (CrAg) detection was done by the IMMY CrAg® LFA (Cryptococcal Antigen Lateral Flow Assay) kit from patient serum as per the manufacturer's instruction. All data were entered, cleaned, and analyzed using SPSS v20. Descriptive data analysis and cross tabulation were done to assess factors associated with cryptococcal antigenemia. Statistical significance was set at p-value less than or equal to 0.05.
METHODS
More than half of the participants, 54.7% (75/137), included in the study were females. The median age of the participants was 32.0 years (range: 8-52 years). The mean CD4 count was 51.8 with SD of 26.3 (range 3-98). All the patients were HIV stage IV. The proportion of positive cryptococcal antigen from serum testing was 11.7% (95% CI: 7.3-18.1%). The IMMY CrAg® LFA result was found to be statistically associated with patient sex (p= 0.045). However, it was not associated with patient age group or CD4 count (P>0.05).
RESULTS
This study provides baseline data on the magnitude of cryptococcal antigenemia among HIV positive patients, a topic not previously explored in the study area. The results show that this opportunistic fungal infection is an important health concern among HIV patients. Further studies with sound design employing an adequate sample size should be considered.
CONCLUSIONS
[ "AIDS-Related Opportunistic Infections", "Adolescent", "Adult", "Ambulatory Care Facilities", "Anti-HIV Agents", "Antigens, Fungal", "CD4 Lymphocyte Count", "Child", "Cryptococcosis", "Cryptococcus", "Ethiopia", "Female", "HIV Infections", "Hospitals", "Humans", "Male", "Middle Aged", "Retrospective Studies", "Young Adult" ]
6308728
Introduction
Increasing access to antiretroviral therapy (ART) has transformed the prognosis of HIV infected patients in resource-limited settings. However, treatment coverage remains relatively low, and HIV diagnosis occurs at a late stage. As a result, many patients continue to die of HIV-related opportunistic infections (OIs) in the weeks prior to, and months following, initiation of ART (1, 2). Cryptococcus neoformans, a causative agent of cryptococcosis, remains a common cause of infectious morbidity and mortality, especially among HIV-positive patients living in Sub-Saharan Africa (3). Cryptococcal disease is one of the most important OIs, and a major contributor to this early mortality, accounting for between 13% and 44% of deaths among HIV infected people living in developing nations. In sub-Saharan Africa alone, there are more than 500,000 deaths each year due to cryptococcal meningitis (CM), which may exceed those attributed to tuberculosis (1). Cryptococcus is a type of fungus that lives in soil, especially soil that is contaminated with large amounts of bird droppings. Some people inhale the spores from the environment and never get sick, but in people with weak immune systems, the fungus can cause an infection. The only way a person can get sick from this fungus is by directly inhaling the agent from the environment; there is no person-to-person transmission (1,4). Cryptococcal infection is associated with a range of illnesses. In some people, it causes a lung infection similar to tuberculosis, or it can cause no symptoms at all. The incubation period is not known, but it is thought that the infection can remain dormant in the body for many years. In immunosuppressed people, particularly HIV-infected ones with CD4 counts less than 100, the infection can reactivate and spread throughout the body. When this happens, the infection usually presents as meningitis, which is a common cause of death among HIV/AIDS patients (5).
The case fatality rate in patients with cryptococcal meningitis, the commonest presentation of HIV-related cryptococcal disease in adults, remains unacceptably high, particularly in sub-Saharan Africa, at between 35% and 65%. This compares with 10%-20% in most developed countries. The main reason for this is a delay in presentation, with diagnosis made only when meningitis is advanced and treatment is less effective, mainly as a result of limited access to lumbar puncture (LP) and rapid diagnostic assays (1). There are three categories of methods that can be used to diagnose cryptococcal meningitis: India Ink microscopy, which can be used on cerebrospinal fluid (CSF); culture, which can be used on CSF or blood; and antigen detection. There are several methods to detect cryptococcal antigen in CSF or serum: latex agglutination (LA), enzyme immunoassay (EIA), and lateral flow assay (LFA) (6–8). A patient who tests positive for cryptococcal antigen can take oral fluconazole to help the body fight the early stage of the infection. This could prevent the infection from developing into meningitis (9). In Ethiopia, where the surveillance system is poor, few studies have been conducted on the level of cryptococcal infection and its associated risk factors (2,10–11). However, such data are missing for our study site, where many HIV patients receive ART service in accordance with the national ART program. Some HIV patients in the study area get tested for cryptococcal infection during follow-up when there are suggestive clinical presentations. Thus, the aim of this study was to assess the prevalence of cryptococcal antigenemia and its associated factors among HIV infected patients at the Felege Hiwot Referral Hospital (FHRH) setting.
null
null
Results
In this study, we analyzed the records of 137 HIV infected patients to assess their cryptococcal antigenemia. More than half of the participants, 75 (54.7%), were females. The median age of the participants was 32.0 years (range: 8–52 years). The mean CD4 count was 51.8 with a standard deviation of 26.3 (range 3–98). The proportion of positive cryptococcal antigenemia from serum testing was 11.7% (95% CI: 7.3–18.1%) (Table 1). Further, based on the WHO classification system, all of the HIV positive participants in this study were stage IV. Demographic and related clinical data of HIV patients screened for cryptococcal antigenemia at FHRH, Bahir Dar, 2016 We found that the IMMY CrAg® LFA result was statistically associated with patient sex (p= 0.045). However, it was not found to be associated with patient age group or CD4 count (P>0.05) (Table 2).
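The reported 11.7% (95% CI: 7.3–18.1%) implies 16 CrAg-positive sera out of 137 (0.117 × 137 ≈ 16; the raw count is an inference, since it is not stated in the text), and that interval matches the Wilson score method for a binomial proportion, a common choice at small sample sizes. A minimal sketch of the calculation (the study itself used SPSS):

```python
import math

def wilson_ci(pos, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = pos / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# 16 CrAg-positive sera out of 137 patients (count inferred from 11.7%)
lo, hi = wilson_ci(16, 137)
print(f"{16/137:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # 11.7% (95% CI 7.3%-18.1%)
```

Reproducing 7.3–18.1% from 16/137 supports the inference that 16 patients tested positive.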
null
null
[]
[]
[]
[ "Introduction", "Materials and Methods", "Results", "Discussion" ]
[ "Increasing access to antiretroviral therapy (ART) has transformed the prognosis of HIV infected patients in resource-limited settings. However, treatment coverage remains relatively low, and HIV diagnosis occurs at a late stage. As a result, many patients continue to die of HIV-related opportunistic infections (OIs) in the weeks prior to, and months following, initiation of ART (1, 2). Cryptococcus neoformans, a causative agent of cryptococcosis, remains a common cause of infectious morbidity and mortality, especially among HIV-positive patients living in Sub-Saharan Africa (3). Cryptococcal disease is one of the most important OIs, and a major contributor to this early mortality, accounting for between 13% and 44% of deaths among HIV infected people living in developing nations. In sub-Saharan Africa alone, there are more than 500,000 deaths each year due to cryptococcal meningitis (CM), which may exceed those attributed to tuberculosis (1).\nCryptococcus is a type of fungus that lives in soil, especially soil that is contaminated with large amounts of bird droppings. Some people inhale the spores from the environment and never get sick, but in people with weak immune systems, the fungus can cause an infection. The only way a person can get sick from this fungus is by directly inhaling the agent from the environment; there is no person-to-person transmission (1,4).\nCryptococcal infection is associated with a range of illnesses. In some people, it causes a lung infection similar to tuberculosis, or it can cause no symptoms at all. The incubation period is not known, but it is thought that the infection can remain dormant in the body for many years. In immunosuppressed people, particularly HIV-infected ones with CD4 counts less than 100, the infection can reactivate and spread throughout the body. 
When this happens, the infection usually presents as meningitis, which is a common cause of death among HIV/AIDS patients (5).\nThe case fatality rate in patients with cryptococcal meningitis, the commonest presentation of HIV-related cryptococcal disease in adults, remains unacceptably high, particularly in sub-Saharan Africa, at between 35% and 65%. This compares with 10%-20% in most developed countries. The main reason for this is a delay in presentation, with diagnosis made only when meningitis is advanced and treatment is less effective, mainly as a result of limited access to lumbar puncture (LP) and rapid diagnostic assays (1).\nThere are three categories of methods that can be used to diagnose cryptococcal meningitis: India Ink microscopy, which can be used on cerebrospinal fluid (CSF); culture, which can be used on CSF or blood; and antigen detection. There are several methods to detect cryptococcal antigen in CSF or serum: latex agglutination (LA), enzyme immunoassay (EIA), and lateral flow assay (LFA) (6–8). A patient who tests positive for cryptococcal antigen can take oral fluconazole to help the body fight the early stage of the infection. This could prevent the infection from developing into meningitis (9).\nIn Ethiopia, where the surveillance system is poor, few studies have been conducted on the level of cryptococcal infection and its associated risk factors (2,10–11). However, such data are missing for our study site, where many HIV patients receive ART service in accordance with the national ART program. Some HIV patients in the study area get tested for cryptococcal infection during follow-up when there are suggestive clinical presentations. 
Thus, the aim of this study was to assess the prevalence of cryptococcal antigenemia and its associated factors among HIV infected patients at the Felege Hiwot Referral Hospital (FHRH) setting.", "Study design, setting and data collection: A retrospective record review was done on records of 137 HIV infected patients who were attending ART monitoring at the FHRH ART clinic. The hospital is located in Bahir Dar town, the capital of Amhara National Regional State, located 565 km away from the capital Addis Ababa. The FHRH is a tertiary health care level hospital serving the population of Bahir Dar and remote areas of Northwest Ethiopia. The total population served by the hospital is about 12 million. In the hospital, the ART clinic operates under the National HIV Program of Ethiopia, under which patients are diagnosed for HIV and get ART treatment and follow-up services (CD4 count, liver and kidney function tests and hematologic evaluation) for free.\nHIV positive patients, regardless of their ART status, whose CD4 count was less than 100 and who had headache with signs suggestive of meningitis were screened for CrAg in the hospital (1). Against this background, those HIV patients who visited the hospital's ART clinic from 1 Sep to 30 Dec 2016 and had registered data on their sex, age, CD4 count and cryptococcal antigen screening result were included for analysis. Patient records missing any of these variables were excluded from the study. Data were retrieved directly from the laboratory registration logbook using a data extraction sheet on 1 to 10 January 2017.\nCryptococcal antigen test: Although culture is the standard method for definitive diagnosis, detection of cryptococcal antigen in serum or cerebrospinal fluid is used for presumptive diagnosis. 
Cryptococcal antigen screening in peripheral blood is also recommended for HIV-infected persons with CD4 cell counts <100/µl to reduce early deaths while receiving ART (6, 12).\nIn the studied area, the cryptococcal antigen (CrAg) detection was done by the IMMY CrAg® LFA (Cryptococcal Antigen Lateral Flow Assay) kit as per the manufacturer's instruction. The IMMY CrAg® LFA is an immunochromatographic dipstick assay for the qualitative and semi-quantitative detection of cryptococcal antigen in serum, plasma, whole blood and cerebrospinal fluid (CSF). The CrAg® LFA is a prescription-use laboratory assay which can aid in the diagnosis of cryptococcosis (6,12). In this study, from those patients who came with evidence of cryptococcal meningitis, 40µl of serum was used to detect the CrAg. The serum and one drop of diluent were added into a test tube, and the lateral flow device, pre-coated with anti-CrAg monoclonal antibodies and gold-conjugated control antibodies, was submerged with its white end. After ten minutes of incubation, the result was read and recorded. The antigen-antibody complex forms a visible test line. The IMMY CrAg® LFA has a sensitivity and specificity of 100% using serum (12–13). Patients found positive for CrAg were managed using fluconazole-based antifungal treatment based on the recommended approach.\nStatistical analysis: All data were entered, cleaned, and analyzed using the SPSS statistical software package, Version 22.0. Descriptive data analysis was used to visualize differences within the data, and cross tabulation was done to assess factors associated with cryptococcal antigen. Differences were considered significant when the p-value was less than or equal to 0.05.\nEthical issues: Permission and ethical clearance were obtained from the Amhara Regional Health Bureau Institutional Review Board (IRB) to utilize the data. 
As the data were collected retrospectively, no patient details were linked to the patient's identity and confidentiality was maintained.", "In this study, we analyzed the records of 137 HIV infected patients to assess their cryptococcal antigenemia. More than half of the participants, 75 (54.7%), were females. The median age of the participants was 32.0 years (range: 8–52 years). The mean CD4 count was 51.8 with a standard deviation of 26.3 (range 3–98). The proportion of positive cryptococcal antigenemia from serum testing was 11.7% (95% CI: 7.3–18.1%) (Table 1). Further, based on the WHO classification system, all of the HIV positive participants in this study were stage IV.\nDemographic and related clinical data of HIV patients screened for cryptococcal antigenemia at FHRH, Bahir Dar, 2016\nWe found that the IMMY CrAg® LFA result was statistically associated with patient sex (p= 0.045). However, it was not found to be associated with patient age group or CD4 count (P>0.05) (Table 2).", "Although the widespread availability of antiretroviral therapy (ART) in developed countries has helped reduce cryptococcal infections in these areas, it is still a major problem in developing countries, like Ethiopia, where access to healthcare is limited (9). Most patients with cryptococcal meningitis have serious disease and a high fatality rate; because the clinical symptoms are not typical, misdiagnosis is common in the early stage (4).\nEach year, Cryptococcus is believed to cause more deaths than tuberculosis in Sub-Saharan Africa (14–15). Therefore, screening individuals with AIDS for serum cryptococcal antigen (CrAg), followed by treatment of CrAg positives with antifungals, may prevent cryptococcal meningitis (2, 6, 15). In this study, most of the participants living with HIV were females (54.7%). 
Similarly, in terms of age category, the majority of HIV patients, 50.4%, were in the 25–36 years group, which implies that HIV still affects those individuals who are sexually active. Moreover, there is a similar report from Kharsany et al.'s study, in which adolescent girls and young women aged 15–24 years had up to eightfold higher rates of HIV infection compared to their male peers. There remains a gap in women-initiated HIV prevention technologies, especially for women who are unable to negotiate the current HIV prevention options of abstinence, behavior change, condoms and medical male circumcision or early treatment initiation in their relationships (16).\nIn this study, we found that the overall prevalence of cryptococcal antigenemia was 11.7% among HIV infected patients with a mean CD4 count of 51.8 cells/µl. According to Shiferaw et al.'s study in Addis Ababa, Ethiopia, the magnitude of cryptococcal antigenemia was 11% among HIV infected patients with CD4 counts less than 100 cells/µl (10). Similarly, according to Bitew et al.'s study in Addis Ababa, the prevalence of cryptococcal antigenemia among HIV positive patients was 8.5% (11). An overall prevalence of 9.9% of C. neoformans infection among HIV patients was also documented by Egbe et al. in Nigeria (17), which makes our finding almost comparable with previous studies done in Ethiopia and elsewhere in the world.\nIt is indicated that among immunosuppressed persons, particularly HIV-infected people with CD4 counts under 100, cryptococcal infection is important (16) and is one of the AIDS-defining infections (3). Cryptococcal antigenemia was not found to be associated with patient age group or CD4 count (P>0.05) (Table 2). In a study done in Nigeria, the authors demonstrated that a CD4 count less than 200/µl was significantly associated with cryptococcal antigenemia, but age group and gender did not show an association (17). 
The difference in the statistical finding on CD4 count might be because of the small sample size employed in the present study.\nCrAg LFA test result of study subjects at FHRH, Bahir Dar, 2016\nCrAg LFA: Cryptococcal antigen lateral flow assay test\nIn this study, we found that the mean CD4 count of the study participants was 51.8 per 1µl of blood (range 3–98). At the same time, all of them were stage IV. In such patients with CD4 counts < 100, screening for CrAg and early ART initiation is the most important and cost-effective preventive strategy to reduce the incidence and high mortality associated with cryptococcal meningitis in HIV-infected patients (1). Since Amphotericin B is not available, the management protocol for CrAg positive HIV patients in FHRH was fluconazole-based antifungal treatment based on the WHO recommended approach (1) and an expert's opinion. The fluconazole-based treatment was given in three phases: 1) induction phase (for two weeks, high dose, 600mg BID), 2) consolidation phase (for up to eight weeks, 400mg BID) and 3) maintenance phase (200mg daily for 3–6 months until the patient attains a CD4 count of >200). Lumbar puncture was also performed every 24 hours for these patients until they got relief from the headache.\nThe results of our study should be applied with caution. Our findings are subject to at least two limitations: selection bias and a quite limited sample size. At the same time, due to the retrospective nature of the study, it was not possible to show the detailed clinical picture of the HIV patients, which might have played a significant role in indicating the overall profile of the study participants. No data were found on the ART status and antifungal treatment outcome of patients with a positive CrAg test. However, our study is the first of its kind in the studied area to provide baseline information about the magnitude of cryptococcosis among HIV positive patients for a further large-scale study. 
The results of this study also showed that this opportunistic fungal infection is an important health problem among HIV patients that needs the attention of the physicians in charge of attending them.\nIn conclusion, although we have employed data from quite a limited number of study participants, the reported prevalence of cryptococcal antigenemia calls on stakeholders to expand CrAg screening services for individuals with HIV/AIDS, especially for those with CD4 counts <100/µl. An additional prospective study with an adequate sample size is needed to determine the exact magnitude of the disease and to explore its determinants." ]
[ "intro", "materials|methods", "results", "discussion" ]
[ "Cryptococcal antigenemia", "IMMY CrAg® LFA", "Bahir Dar", "Ethiopia" ]
Introduction: Increasing access to antiretroviral therapy (ART) has transformed the prognosis of HIV infected patients in resource-limited settings. However, treatment coverage remains relatively low, and HIV diagnosis occurs at a late stage. As a result, many patients continue to die of HIV-related opportunistic infections (OIs) in the weeks prior to, and months following, initiation of ART (1, 2). Cryptococcus neoformans, a causative agent of cryptococcosis, remains a common cause of infectious morbidity and mortality, especially among HIV-positive patients living in Sub-Saharan Africa (3). Cryptococcal disease is one of the most important OIs, and a major contributor to this early mortality, accounting for between 13% and 44% of deaths among HIV infected people living in developing nations. In sub-Saharan Africa alone, there are more than 500,000 deaths each year due to cryptococcal meningitis (CM), which may exceed those attributed to tuberculosis (1). Cryptococcus is a type of fungus that lives in soil, especially soil that is contaminated with large amounts of bird droppings. Some people inhale the spores from the environment and never get sick, but in people with weak immune systems, the fungus can cause an infection. The only way a person can get sick from this fungus is by directly inhaling the agent from the environment; there is no person-to-person transmission (1,4). Cryptococcal infection is associated with a range of illnesses. In some people, it causes a lung infection similar to tuberculosis, or it can cause no symptoms at all. The incubation period is not known, but it is thought that the infection can remain dormant in the body for many years. In immunosuppressed people, particularly HIV-infected ones with CD4 counts less than 100, the infection can reactivate and spread throughout the body. When this happens, the infection usually presents as meningitis, which is a common cause of death among HIV/AIDS patients (5). 
The case fatality rate in patients with cryptococcal meningitis, the commonest presentation of HIV-related cryptococcal disease in adults, remains unacceptably high, particularly in sub-Saharan Africa, at between 35% and 65%. This compares with 10%-20% in most developed countries. The main reason for this is a delay in presentation, with diagnosis made only when meningitis is advanced and treatment is less effective, mainly as a result of limited access to lumbar puncture (LP) and rapid diagnostic assays (1). There are three categories of methods that can be used to diagnose cryptococcal meningitis: India Ink microscopy, which can be used on cerebrospinal fluid (CSF); culture, which can be used on CSF or blood; and antigen detection. There are several methods to detect cryptococcal antigen in CSF or serum: latex agglutination (LA), enzyme immunoassay (EIA), and lateral flow assay (LFA) (6–8). A patient who tests positive for cryptococcal antigen can take oral fluconazole to help the body fight the early stage of the infection. This could prevent the infection from developing into meningitis (9). In Ethiopia, where the surveillance system is poor, few studies have been conducted on the level of cryptococcal infection and its associated risk factors (2,10–11). However, such data are missing for our study site, where many HIV patients receive ART service in accordance with the national ART program. Some HIV patients in the study area get tested for cryptococcal infection during follow-up when there are suggestive clinical presentations. Thus, the aim of this study was to assess the prevalence of cryptococcal antigenemia and its associated factors among HIV infected patients at the Felege Hiwot Referral Hospital (FHRH) setting. Materials and Methods: Study design, setting and data collection: A retrospective record review was done on records of 137 HIV infected patients who were attending ART monitoring at the FHRH ART clinic. 
The hospital is located in Bahir Dar town, the capital of Amhara National Regional State, located 565 km away from the capital Addis Ababa. The FHRH is a tertiary health care level hospital serving the population of Bahir Dar and remote areas of Northwest Ethiopia. The total population served by the hospital is about 12 million. In the hospital, the ART clinic operates under the National HIV Program of Ethiopia, under which patients are diagnosed for HIV and get ART treatment and follow-up services (CD4 count, liver and kidney function tests and hematologic evaluation) for free. HIV positive patients, regardless of their ART status, whose CD4 count was less than 100 and who had headache with signs suggestive of meningitis were screened for CrAg in the hospital (1). Against this background, those HIV patients who visited the hospital's ART clinic from 1 Sep to 30 Dec 2016 and had registered data on their sex, age, CD4 count and cryptococcal antigen screening result were included for analysis. Patient records missing any of these variables were excluded from the study. Data were retrieved directly from the laboratory registration logbook using a data extraction sheet on 1 to 10 January 2017. Cryptococcal antigen test: Although culture is the standard method for definitive diagnosis, detection of cryptococcal antigen in serum or cerebrospinal fluid is used for presumptive diagnosis. Cryptococcal antigen screening in peripheral blood is also recommended for HIV-infected persons with CD4 cell counts <100/µl to reduce early deaths while receiving ART (6, 12). In the studied area, the cryptococcal antigen (CrAg) detection was done by the IMMY CrAg® LFA (Cryptococcal Antigen Lateral Flow Assay) kit as per the manufacturer's instruction. The IMMY CrAg® LFA is an immunochromatographic dipstick assay for the qualitative and semi-quantitative detection of cryptococcal antigen in serum, plasma, whole blood and cerebrospinal fluid (CSF). 
The CrAg® LFA is a prescription-use laboratory assay which can aid in the diagnosis of cryptococcosis (6,12). In this study, from those patients who came with evidence of cryptococcal meningitis, 40µl of serum was used to detect the CrAg. The serum and one drop of diluent were added into a test tube, and the lateral flow device, pre-coated with anti-CrAg monoclonal antibodies and gold-conjugated control antibodies, was submerged with its white end. After ten minutes of incubation, the result was read and recorded. The antigen-antibody complex forms a visible test line. The IMMY CrAg® LFA has a sensitivity and specificity of 100% using serum (12–13). Patients found positive for CrAg were managed using fluconazole-based antifungal treatment based on the recommended approach. Statistical analysis: All data were entered, cleaned, and analyzed using the SPSS statistical software package, Version 22.0. Descriptive data analysis was used to visualize differences within the data, and cross tabulation was done to assess factors associated with cryptococcal antigen. Differences were considered significant when the p-value was less than or equal to 0.05. Ethical issues: Permission and ethical clearance were obtained from the Amhara Regional Health Bureau Institutional Review Board (IRB) to utilize the data. As the data were collected retrospectively, no patient details were linked to the patient's identity and confidentiality was maintained. Results: In this study, we analyzed the records of 137 HIV infected patients to assess their cryptococcal antigenemia. More than half of the participants, 75 (54.7%), were females. The median age of the participants was 32.0 years (range: 8–52 years). The mean CD4 count was 51.8 with a standard deviation of 26.3 (range 3–98). The proportion of positive cryptococcal antigenemia from serum testing was 11.7% (95% CI: 7.3–18.1%) (Table 1). 
Further, based on the WHO classification system, all of the HIV positive participants in this study were stage IV. Demographic and related clinical data of HIV patients screened for cryptococcal antigenemia at FHRH, Bahir Dar, 2016 We found that the IMMY CrAg® LFA result was statistically associated with patient sex (p= 0.045). However, it was not found to be associated with patient age group or CD4 count (P>0.05) (Table 2). Discussion: Although the widespread availability of antiretroviral therapy (ART) in developed countries has helped reduce cryptococcal infections in these areas, it is still a major problem in developing countries, like Ethiopia, where access to healthcare is limited (9). Most patients with cryptococcal meningitis have serious disease and a high fatality rate; because the clinical symptoms are not typical, misdiagnosis is common in the early stage (4). Each year, Cryptococcus is believed to cause more deaths than tuberculosis in Sub-Saharan Africa (14–15). Therefore, screening individuals with AIDS for serum cryptococcal antigen (CrAg), followed by treatment of CrAg positives with antifungals, may prevent cryptococcal meningitis (2, 6, 15). In this study, most of the participants living with HIV were females (54.7%). Similarly, in terms of age category, the majority of HIV patients, 50.4%, were in the 25–36 years group, which implies that HIV still affects those individuals who are sexually active. Moreover, there is a similar report from Kharsany et al.'s study, in which adolescent girls and young women aged 15–24 years had up to eightfold higher rates of HIV infection compared to their male peers. There remains a gap in women-initiated HIV prevention technologies, especially for women who are unable to negotiate the current HIV prevention options of abstinence, behavior change, condoms and medical male circumcision or early treatment initiation in their relationships (16). 
In this study, we found that the overall prevalence of cryptococcal antigenemia was 11.7% among HIV-infected patients, with a mean CD4 count of 51.8 cells/µl. According to Shiferaw et al.'s study in Addis Ababa, Ethiopia, the magnitude of cryptococcal antigenemia was 11% among HIV-infected patients with CD4 counts less than 100 cells/µl (10). Similarly, according to Bitew et al.'s study in Addis Ababa, the prevalence of cryptococcal antigenemia among HIV-positive patients was 8.5% (11). An overall prevalence of 9.9% of C. neoformans infection among HIV patients was also documented by Egbe et al. in Nigeria (17), which makes our finding comparable with previous studies done in Ethiopia and elsewhere in the world. It is indicated that among immunosuppressed persons, particularly HIV-infected people with CD4 counts under 100, cryptococcal infection is important (16) and is one of the AIDS-defining infections (3). Cryptococcal antigenemia was not found to be associated with patient age group or CD4 count (P > 0.05) (Table 2). In a study done in Nigeria, the authors demonstrated that a CD4 count less than 200/µl was significantly associated with cryptococcal antigenemia, but age group and gender did not show an association (17). The difference in the CD4 finding might be due to the small sample size employed in the present study. CrAg LFA test result of study subjects at FHRH, Bahir Dar, 2016 CrAg LFA: Cryptococcal antigen lateral flow assay test In this study, we found that the mean CD4 count of the study participants was 51.8 cells/µl of blood (range 3–98). At the same time, all of them were stage IV. In such patients with CD4 counts < 100, CrAg screening and early ART initiation are the most important and cost-effective preventive strategies to reduce the incidence of, and the high mortality associated with, cryptococcal meningitis in HIV-infected patients (1).
Since Amphotericin B is not available, the management protocol for CrAg-positive HIV patients at FHRH was fluconazole-based antifungal treatment, based on the WHO-recommended approach (1) and expert opinion. The fluconazole-based treatment was given in three phases: 1) induction phase (for two weeks, high dose, 600 mg BID), 2) consolidation phase (for up to eight weeks, 400 mg BID), and 3) maintenance phase (200 mg daily for 3–6 months, until the patient attains a CD4 count of >200). Lumbar puncture was also performed every 24 hours for these patients until their headache was relieved. The results of our study should be applied with caution. Our findings are subject to at least two limitations: selection bias and a quite limited sample size. At the same time, due to the retrospective nature of the study, it was not possible to show a detailed clinical picture of the HIV patients, which might have played a significant role in indicating the overall profile of the study participants. No data were available on the ART status and antifungal treatment outcomes of patients with a positive CrAg test. However, our study is the first of its kind in the studied area and provides baseline information about the magnitude of cryptococcosis among HIV-positive patients for further large-scale study. The results of this study also showed that this opportunistic fungal infection is an important health problem among HIV patients that needs the attention of the physicians in charge of attending them. In conclusion, although the number of study participants was quite limited, the reported prevalence of cryptococcal antigenemia calls on stakeholders to expand CrAg screening services for individuals with HIV/AIDS, especially for those with CD4 counts <100/µl. An additional prospective study with an adequate sample size is needed to determine the exact magnitude of the disease and to explore its determinants.
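The three-phase fluconazole schedule described above can be captured in a small lookup structure. A minimal sketch; the doses and durations are those stated in the text, while the field names and structure are illustrative:

```python
# Fluconazole-based regimen for CrAg-positive patients as described in the text.
# Field names are illustrative; doses and durations are taken from the protocol above.
FLUCONAZOLE_REGIMEN = [
    {"phase": "induction", "dose_mg": 600, "schedule": "BID", "duration": "2 weeks"},
    {"phase": "consolidation", "dose_mg": 400, "schedule": "BID", "duration": "up to 8 weeks"},
    {"phase": "maintenance", "dose_mg": 200, "schedule": "daily",
     "duration": "3-6 months, until CD4 count > 200 cells/ul"},
]

for step in FLUCONAZOLE_REGIMEN:
    print(f"{step['phase']}: {step['dose_mg']} mg {step['schedule']} for {step['duration']}")
```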
Background: Cryptococcosis is one of the common opportunistic fungal infections among HIV-infected patients living in Sub-Saharan Africa, including Ethiopia. The magnitude of the disease at Felege Hiwot Referral Hospital (FHRH) in particular, and in Ethiopia at large, is not well explored. Methods: A retrospective document review and analysis was done on the records of 137 HIV-infected patients who visited the FHRH ART clinic from 1 Sep to 30 Dec 2016 and had registered data on their sex, age, CD4 count, and cryptococcal antigen screening result. Cryptococcal antigen (CrAg) detection was done with the IMMY CrAg® LFA (Cryptococcal Antigen Lateral Flow Assay) kit from patient serum as per the manufacturer's instructions. All data were entered, cleaned, and analyzed using SPSS v20. Descriptive data analysis and cross-tabulation were done to assess factors associated with cryptococcal antigenemia. Statistical significance was set at a p-value less than or equal to 0.05. Results: More than half of the participants, 54.7% (75/137), included in the study were females. The median age of the participants was 32.0 years (range: 8-52 years). The mean CD4 count was 51.8 with an SD of 26.3 (range 3-98). All the patients were HIV stage IV. The proportion of positive cryptococcal antigen from the serum test was 11.7% (95% CI: 7.3-18.1%). The IMMY CrAg® LFA result was statistically associated with patient sex (p = 0.045). However, it was not associated with patient age group or CD4 count (P > 0.05). Conclusions: This study provided baseline data on the magnitude of cryptococcal antigenemia among HIV-positive patients, which had not been explored before in the study area. The results of the study showed that this opportunistic fungal infection is an important health concern among HIV patients. Further studies with a sound design and an adequate sample size should be considered.
null
null
2,534
366
4
[ "hiv", "cryptococcal", "patients", "study", "crag", "cd4", "art", "infection", "antigen", "data" ]
[ "test", "test" ]
null
null
null
[CONTENT] Cryptococcal antigenemia | IMMY CrAg® LFA | Bahir Dar | Ethiopia [SUMMARY]
null
[CONTENT] Cryptococcal antigenemia | IMMY CrAg® LFA | Bahir Dar | Ethiopia [SUMMARY]
null
[CONTENT] Cryptococcal antigenemia | IMMY CrAg® LFA | Bahir Dar | Ethiopia [SUMMARY]
null
[CONTENT] AIDS-Related Opportunistic Infections | Adolescent | Adult | Ambulatory Care Facilities | Anti-HIV Agents | Antigens, Fungal | CD4 Lymphocyte Count | Child | Cryptococcosis | Cryptococcus | Ethiopia | Female | HIV Infections | Hospitals | Humans | Male | Middle Aged | Retrospective Studies | Young Adult [SUMMARY]
null
[CONTENT] AIDS-Related Opportunistic Infections | Adolescent | Adult | Ambulatory Care Facilities | Anti-HIV Agents | Antigens, Fungal | CD4 Lymphocyte Count | Child | Cryptococcosis | Cryptococcus | Ethiopia | Female | HIV Infections | Hospitals | Humans | Male | Middle Aged | Retrospective Studies | Young Adult [SUMMARY]
null
[CONTENT] AIDS-Related Opportunistic Infections | Adolescent | Adult | Ambulatory Care Facilities | Anti-HIV Agents | Antigens, Fungal | CD4 Lymphocyte Count | Child | Cryptococcosis | Cryptococcus | Ethiopia | Female | HIV Infections | Hospitals | Humans | Male | Middle Aged | Retrospective Studies | Young Adult [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] hiv | cryptococcal | patients | study | crag | cd4 | art | infection | antigen | data [SUMMARY]
null
[CONTENT] hiv | cryptococcal | patients | study | crag | cd4 | art | infection | antigen | data [SUMMARY]
null
[CONTENT] hiv | cryptococcal | patients | study | crag | cd4 | art | infection | antigen | data [SUMMARY]
null
[CONTENT] infection | hiv | cryptococcal | patients | people | meningitis | cause | saharan africa | person | saharan [SUMMARY]
null
[CONTENT] participants | antigenemia | associated patient | table | hiv | count | years | cryptococcal antigenemia | age | found [SUMMARY]
null
[CONTENT] hiv | cryptococcal | patients | study | crag | infection | cd4 | antigen | art | cd4 count [SUMMARY]
null
[CONTENT] Sub-Saharan Africa | Ethiopia ||| Felege Hiwot Referral Hospital | FHRH | Ethiopia [SUMMARY]
null
[CONTENT] More than half | 54.7% | 75/137 ||| 32.0 years | 8-52 years ||| 51.8 | SD of | 26.3 | 3-98 ||| IV ||| 11.7% | 95% | CI | 7.3-18.1% ||| LFA | 0.045 ||| P>0.05 [SUMMARY]
null
[CONTENT] Sub-Saharan Africa | Ethiopia ||| Felege Hiwot Referral Hospital | FHRH | Ethiopia ||| 137 | 1 Sep to | 30 ||| CrAg | LFA ||| SPSS ||| ||| 0.05 ||| More than half | 54.7% | 75/137 ||| 32.0 years | 8-52 years ||| 51.8 | SD of | 26.3 | 3-98 ||| IV ||| 11.7% | 95% | CI | 7.3-18.1% ||| LFA | 0.045 ||| P>0.05 ||| ||| ||| [SUMMARY]
null
Diabetes, Frequency of Exercise, and Mortality Over 12 Years: Analysis of the National Health Insurance Service-Health Screening (NHIS-HEALS) Database.
29441753
The goal of this study was to analyze the relationship between exercise frequency and all-cause mortality for individuals diagnosed with and without diabetes mellitus (DM).
BACKGROUND
We analyzed data for 505,677 participants (53.9% men) in the National Health Insurance Service-National Health Screening (NHIS-HEALS) cohort. The study endpoint variable was all-cause mortality.
METHODS
Frequency of exercise and covariates including age, sex, smoking status, household income, blood pressure, fasting glucose, body mass index, total cholesterol, and Charlson comorbidity index were determined at baseline. Cox proportional hazard regression models were developed to assess the effects of exercise frequency (0, 1-2, 3-4, 5-6, and 7 days per week) on mortality, separately in individuals with and without DM. We found a U-shaped association between exercise frequency and mortality in individuals with and without DM. However, the frequency of exercise associated with the lowest risk of all-cause mortality was 3-4 times per week (hazard ratio [HR], 0.69; 95% confidence interval [CI], 0.65-0.73) in individuals without DM, and 5-6 times per week in those with DM (HR, 0.93; 95% CI, 0.78-1.10).
RESULTS
A moderate frequency of exercise may reduce mortality regardless of the presence or absence of DM; however, when compared to those without the condition, people with DM may need to exercise more often.
CONCLUSION
[ "Adult", "Aged", "Cohort Studies", "Databases, Factual", "Diabetes Mellitus, Type 2", "Exercise", "Female", "Humans", "Male", "Middle Aged", "National Health Programs", "Proportional Hazards Models", "Republic of Korea", "Risk Factors" ]
5809750
INTRODUCTION
Diabetes mellitus (DM) is one of the most common chronic diseases worldwide.1 In the UK, 3.1 million people were found to have DM in 2011,1 and in Scotland, 9.6% of people were diagnosed with DM in 2008.2 The prevalence of diagnosed and undiagnosed DM was 8.6% and 0.9%, respectively.3 In the Japanese population in 1991, 1996, 2001, 2006, and 2011, the prevalence of DM increased among men (6.0%, 8.9%, 10.0%, 10.8%, and 12.0%) and women (3.3%, 4.5%, 4.2%, 4.1%, and 5.1%).4 In Korea, between 2002 and 2004, 29,787 of 1,025,340 people (15,625 men and 14,162 women) were diagnosed with DM. In Korea, the prevalence of DM increased from 8.6% in 2001 to 11.0% in 2013.5,6 People with DM showed a significantly higher risk of mortality than those without DM.7,8 An inverse association with total mortality was identified for the overall physical activity level.9,10,11 In a retrospective cohort study of Korean men, both regular physical activity and fitness were reported to be associated with lower mortality, but fitness was not associated with mortality among men who regularly engaged in physical activity.12 However, there are few studies examining the association between physical activity and mortality by comparing groups with and without DM. Therefore, we investigated the association between physical activity and mortality among people with and without DM by analyzing large cohort data from Korea.
METHODS
Study participants In this study, we analyzed data from the National Health Insurance Service-Health Screening (NHIS-HEALS) cohort from 2002 to 2013, which were released by the National Health Insurance Service (NHIS).13 The NHIS-HEALS cohort comprised a nationally representative random sample of 514,795 individuals, which accounted for 10% of the entire population aged between 40–79 years in 2002 and 2003. The NHIS-HEALS data were built by using probabilistic sampling to represent individuals defined by age, sex, eligibility status (employed or self-employed), and household income level among Korean residents in 2002 and 2003. This database has eligibility and demographic information regarding health services as well as data on medical aid beneficiaries, medical bill details, medical treatment, medical history, and prescriptions. The participants were followed up for 12 years, from 2002 to 2013. For this research, we excluded participants with missing data on DM status (yes/no) and body mass index (BMI, kg/m2) and those who died during the first 3 years of the follow-up period, leaving 505,677 participants for analysis.
Measurements Frequency of exercise was determined at study entry with a questionnaire. Participants were requested to estimate their exercise frequency per week at baseline and were classified as exercising on 0, 1–2, 3–4, 5–6, or 7 days per week. Within this study, DM was defined as either receiving diabetes treatment or having a fasting blood glucose greater than or equal to 126 mg/dL. Smoking, drinking, and medical history were assessed using a self-administered questionnaire at baseline. Based on the responses, individuals were categorized as non-smokers, past smokers, or current smokers. Household income levels were divided into five groups: 0–2 (lowest level), 3–4, 5–6, 7–8, and 9–10 (highest level). We calculated BMI as weight in kilograms per square meter of height (kg/m2). Hypercholesterolemia was defined as total cholesterol > 240 mg/dL. Hypertension was identified by systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or taking antihypertensive medication. To control for the effects of concomitant disorders, we used the Charlson comorbidity index (CCI), which was originally developed to predict 1-year mortality using comorbidity data obtained from hospital chart review. We calculated the CCI score using International Classification of Diseases, 10th revision (ICD-10) codes representing 17 conditions for each participant, under which scores of 1, 2, 3, or 6 are assigned according to the severity of the condition.14 Information on death (date and cause of death) from Statistics Korea was individually linked using unique personal identification numbers.
Statistical analysis Cox proportional hazard regression was used to compare the calculated hazard ratio (HR) of the group identified as having DM versus the group without DM. Continuous variables were age, BMI, total cholesterol, fasting glucose, systolic blood pressure, and diastolic blood pressure. Categorical variables were sex, frequency of exercise, household income, smoking, hypercholesterolemia, hypertension, and CCI. We studied three sequentially adjusted models for each result. Model 1 was used to estimate the single effects of DM status and exercise frequency per week at baseline on the risk of mortality. For total mortality, model 2 was additionally adjusted for age and sex. In addition to age and sex, model 3 was also adjusted for smoking, BMI, and CCI at baseline. We performed all statistical analyses using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA), and the results were considered significant if their P value was < 0.05.
Ethics statement The study protocol was approved by the Yonsei University Health System, Severance Hospital, Institutional Review Board (IRB No. 4-2016-0496). Informed consent was waived by the board.
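The CCI computation described in the Measurements section can be sketched as a weighted lookup over ICD-10 code prefixes. The prefix-to-weight map below is a small illustrative subset only, not the full 17-condition mapping from reference 14:

```python
# Illustrative subset of Charlson conditions with their weights (1, 2, 3, or 6);
# the full index covers 17 conditions (see the coding scheme cited in the text).
CHARLSON_WEIGHTS = {
    "I21": 1,  # myocardial infarction
    "I50": 1,  # congestive heart failure
    "E10": 1,  # diabetes (uncomplicated; illustrative placement)
    "C34": 2,  # malignancy (e.g., lung cancer)
    "K74": 3,  # moderate/severe liver disease (illustrative)
    "B20": 6,  # HIV/AIDS
}

def charlson_score(icd10_codes: list[str]) -> int:
    """Sum the weights of distinct Charlson conditions found in a patient's codes."""
    matched = {prefix for code in icd10_codes
               for prefix in CHARLSON_WEIGHTS if code.startswith(prefix)}
    return sum(CHARLSON_WEIGHTS[p] for p in matched)

# A patient coded with diabetes and heart failure scores 1 + 1 = 2
print(charlson_score(["E10.9", "I50.0", "Z00.0"]))  # → 2
```

Deduplicating on condition prefixes (the `matched` set) ensures that multiple codes for the same condition are counted only once, which matches how the index is normally scored.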
RESULTS
Among the 505,677 participants, 10.8% (n = 55,439) had DM at baseline and 89.2% (n = 450,238) did not have DM at baseline (Table 1). The DM group tended to have lower household income, higher blood cholesterol, higher blood pressure, and more comorbidities than the non-DM group. Baseline characteristics are also presented by sex in Supplementary Table 1. Men had higher fasting blood glucose and higher blood pressure, but lower blood cholesterol, compared to women. Household income, exercise frequency, and smoking rate were also higher in men than in women. Data presented as mean ± standard deviation or number (%). BMI = body mass index, FBS = fasting blood sugar, SBP = systolic blood pressure, DBP = diastolic blood pressure, CCI = Charlson comorbidity index. Table 2 describes the baseline characteristics of the participants according to frequency of exercise. Of these, 232,315 (57.50%) did not exercise at all. The numbers of people who exercised 1–2, 3–4, 5–6, and 7 days per week were 116,389 (23.71%), 45,447 (9.26%), 12,881 (2.62%), and 33,927 (6.91%), respectively. People with higher exercise frequency tended to be male, older, and of higher household income. Data presented as mean ± standard deviation or number (%). BMI = body mass index, FBS = fasting blood sugar, SBP = systolic blood pressure, DBP = diastolic blood pressure, CCI = Charlson comorbidity index. During the 12 years of follow-up, 31,264 deaths were observed. People with DM had a significantly higher risk of mortality than those without DM. In addition, the HR for total mortality according to frequency of exercise showed a U-shaped relationship. However, people who exercised at any frequency had significantly lower mortality compared to those who did not exercise at all. The lowest HR for mortality was observed among people who exercised 3–4 times per week (Table 3).
However, when we limited our analysis to people with DM, the lowest HR for mortality was observed at an exercise frequency of 5–6 times per week (Table 4). Model 1 unadjusted; model 2 adjusted for age and sex; model 3 adjusted for age, sex, body mass index, smoking, and Charlson comorbidity index. HR = hazard ratio, CI = confidence interval. Model 1 unadjusted; model 2 adjusted for age and sex; model 3 adjusted for age, sex, body mass index, smoking, and Charlson comorbidity index. HR = hazard ratio, CI = confidence interval. Fig. 1 shows the mortality risk for 10 combined categories based on DM diagnosis and exercise frequency. The highest mortality was observed in people with DM who did not exercise, and the lowest mortality in people without DM who exercised 3–4 times per week. The relationship between exercise frequency and mortality was U-shaped in both diabetic and non-diabetic participants. However, the frequency of exercise with the lowest mortality rate varied depending on whether or not diabetes was present. The lowest relative mortality was observed among people who exercised 5–6 times per week (HR, 0.93) in the DM group, but among those who exercised 3–4 times per week (HR, 0.69) in the non-DM group. Combined effects of diabetes and frequency of exercise on total mortality. Hazard ratios were calculated using Cox proportional hazard regression models adjusted for sex, age, hypertension, total cholesterol, smoking, and Charlson comorbidity index.
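As a reminder of what the reported HRs estimate, the Cox model used in this analysis has the standard proportional-hazards form (the notation here is the textbook one, not taken from the paper):

```latex
% Cox proportional hazards model: baseline hazard h_0(t) scaled by covariates
h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \cdots + \beta_p x_p)
% The hazard ratio for one exercise-frequency category vs. the reference
% (0 days/week) is the exponentiated coefficient for that category:
\mathrm{HR} = \frac{h(t \mid x_{\text{3--4/wk}})}{h(t \mid x_{\text{0/wk}})}
            = e^{\beta_{\text{3--4/wk}}}
```

Under this form, the reported HR of 0.69 for exercising 3–4 times per week (non-DM group) corresponds to a 31% lower hazard of death than the non-exercising reference, holding the adjusted covariates fixed.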
null
null
[ "Study participants", "Measurements", "Statistical analysis", "Ethics statement" ]
[ "In this study, we analyzed data from the National Health Insurance Service-Health Screening (NHIS-HEALS) cohort from 2002 to 2013, which were released by the National Health Insurance Service (NHIS).13 The NHIS-HEALS cohort comprised a nationally representative random sample of 514,795 individuals, which accounted for 10% of the entire population who were aged between 40–79 years in 2002 and 2003. The NHIS-HEALS data were built by using probabilistic sampling to represent an individual defined by age, sex, eligibility status (employed or self-employed), and household income levels from Korean residents in 2002 and 2003. This database has eligibility and demographic information regarding health services as well as data on medical aid beneficiaries, medical bill details, medical treatment, medical history, and prescriptions. The participants were followed up for 12 years from 2002 to 2013. For this research, we excluded participants with missing data on DM status (yes/no) and body mass index (BMI, kg/m2) and those who died during the first 3 years of the follow-up period, leaving 505,677 participants for analysis.", "Frequency of exercise was determined at study entry with a questionnaire. Participants were requested to estimate the exercise frequency per week at baseline, and were classified as exercising on 0, 1–2, 3–4, 5–6, and 7 days per week. Within this study, DM was defined as either receiving diabetes treatment or having a fasting blood glucose greater than or equal to 126 mg/dL. Smoking, drinking, and medical history were assessed using a self-administered questionnaire at baseline. Based on the responses, the individuals were categorized as non-smoker, past smoker, and current smoker. Household income levels were divided into five groups as 0–2 (lowest level), 3–4, 5–6, 7–8, and 9–10 (highest level). We calculated BMI as weight in kilograms per square meter of height (kg/m2). Hypercholesterolemia was defined as total cholesterol > 240 mg/dL. 
Hypertension was identified by systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or those taking antihypertensive medication. To control the effects of concomitant disorders, we used Charlson comorbidity index (CCI) which was originally developed to predict 1-year mortality using comorbidity data obtained from hospital chart review. We calculated CCI score using International Classification of Diseases, 10th revision (ICD-10) codes represent 17 conditions for each participant, under which scores 1, 2, 3, or 6 are determined according to severity of the condition.14 Information on death (date and cause of death) from Statistics Korea was individually linked using unique personal identification numbers.", "Cox proportional hazard regression was used to compare the calculated hazard ratio (HR) of the group identified as having DM versus the group without DM. Continuous variables were age, BMI, total cholesterol, fasting glucose, systolic blood pressure, and diastolic blood pressure. Categorical variables were sex, frequency of exercise, household income, smoking, hypercholesterolemia, hypertension, and CCI. We studied three sequentially adjusted models for each result. Model 1 was used to estimate single effects of DM status and exercise frequency per week at baseline on the risk of mortality. For total mortality, model 2 was additionally adjusted for age and sex. In addition to age and sex, model 3 also adjusted for smoking, BMI, and CCI at baseline. We performed all statistical analyses using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA) and, the results were considered significant if their P value was < 0.05.", "The study protocol was approved by the Yonsei University Health System, Severance Hospital, Institutional Review Board (IRB No. 4-2016-0496). Informed consent was waived by the board." ]
[ null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study participants", "Measurements", "Statistical analysis", "Ethics statement", "RESULTS", "DISCUSSION" ]
[ "Diabetes mellitus (DM) is one of the most common chronic diseases worldwide.1 In the UK, the number of people who had DM was found to be 3.1 million in 20111; and in Scotland, 9.6% of people were diagnosed with DM in 2008.2 The prevalence of diagnosed and undiagnosed DM was 8.6% and 0.9%, respectively.3 In the Japanese population in 1991, 1996, 2001, 2006, and 2011, the prevalence of DM increased among men (6.0%, 8.9%, 10.0%, 10.8%, and 12.0%) and women (3.3%, 4.5%, 4.2%, 4.1%, and 5.1%).4 In Korea, between 2002 and 2004, 29,787 of 1,025,340 people (15,625 men and 14,162 women) were diagnosed with DM. In Korea, the prevalence of DM increased from 8.6% in 2001 to 11.0% in 2013.56 People with DM showed significantly higher risk of mortality than those without DM.78 An inverse association of total mortality was identified for the overall physical activity level.91011 In a retrospective cohort study of Korean men, both regular physical activity and fitness were reported to be associated with lower mortality, but fitness was not associated with mortality among men who regularly engaged in physical activity.12\nHowever, there are a few studies which examine the association between physical activity and mortality by comparing groups with and without DM. Therefore, we investigated the association between physical activity and mortality among people with and without DM by analyzing a large cohort data in Korea.", " Study participants In this study, we analyzed data from the National Health Insurance Service-Health Screening (NHIS-HEALS) cohort from 2002 to 2013, which were released by the National Health Insurance Service (NHIS).13 The NHIS-HEALS cohort comprised a nationally representative random sample of 514,795 individuals, which accounted for 10% of the entire population who were aged between 40–79 years in 2002 and 2003. 
The NHIS-HEALS data were built by using probabilistic sampling to represent an individual defined by age, sex, eligibility status (employed or self-employed), and household income levels from Korean residents in 2002 and 2003. This database has eligibility and demographic information regarding health services as well as data on medical aid beneficiaries, medical bill details, medical treatment, medical history, and prescriptions. The participants were followed up for 12 years from 2002 to 2013. For this research, we excluded participants with missing data on DM status (yes/no) and body mass index (BMI, kg/m2) and those who died during the first 3 years of the follow-up period, leaving 505,677 participants for analysis.\nIn this study, we analyzed data from the National Health Insurance Service-Health Screening (NHIS-HEALS) cohort from 2002 to 2013, which were released by the National Health Insurance Service (NHIS).13 The NHIS-HEALS cohort comprised a nationally representative random sample of 514,795 individuals, which accounted for 10% of the entire population who were aged between 40–79 years in 2002 and 2003. The NHIS-HEALS data were built by using probabilistic sampling to represent an individual defined by age, sex, eligibility status (employed or self-employed), and household income levels from Korean residents in 2002 and 2003. This database has eligibility and demographic information regarding health services as well as data on medical aid beneficiaries, medical bill details, medical treatment, medical history, and prescriptions. The participants were followed up for 12 years from 2002 to 2013. For this research, we excluded participants with missing data on DM status (yes/no) and body mass index (BMI, kg/m2) and those who died during the first 3 years of the follow-up period, leaving 505,677 participants for analysis.\n Measurements Frequency of exercise was determined at study entry with a questionnaire. 
Participants were requested to estimate the exercise frequency per week at baseline, and were classified as exercising on 0, 1–2, 3–4, 5–6, and 7 days per week. Within this study, DM was defined as either receiving diabetes treatment or having a fasting blood glucose greater than or equal to 126 mg/dL. Smoking, drinking, and medical history were assessed using a self-administered questionnaire at baseline. Based on the responses, the individuals were categorized as non-smoker, past smoker, and current smoker. Household income levels were divided into five groups as 0–2 (lowest level), 3–4, 5–6, 7–8, and 9–10 (highest level). We calculated BMI as weight in kilograms per square meter of height (kg/m2). Hypercholesterolemia was defined as total cholesterol > 240 mg/dL. Hypertension was identified by systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or those taking antihypertensive medication. To control the effects of concomitant disorders, we used Charlson comorbidity index (CCI) which was originally developed to predict 1-year mortality using comorbidity data obtained from hospital chart review. We calculated CCI score using International Classification of Diseases, 10th revision (ICD-10) codes represent 17 conditions for each participant, under which scores 1, 2, 3, or 6 are determined according to severity of the condition.14 Information on death (date and cause of death) from Statistics Korea was individually linked using unique personal identification numbers.\nFrequency of exercise was determined at study entry with a questionnaire. Participants were requested to estimate the exercise frequency per week at baseline, and were classified as exercising on 0, 1–2, 3–4, 5–6, and 7 days per week. Within this study, DM was defined as either receiving diabetes treatment or having a fasting blood glucose greater than or equal to 126 mg/dL. Smoking, drinking, and medical history were assessed using a self-administered questionnaire at baseline. 
Statistical analysis: Cox proportional hazards regression was used to estimate the hazard ratio (HR) for the group with DM versus the group without DM. Continuous variables were age, BMI, total cholesterol, fasting glucose, systolic blood pressure, and diastolic blood pressure. Categorical variables were sex, frequency of exercise, household income, smoking, hypercholesterolemia, hypertension, and CCI. We fit three sequentially adjusted models for each outcome. Model 1 estimated the unadjusted effects of DM status and baseline exercise frequency per week on the risk of mortality. For total mortality, model 2 was additionally adjusted for age and sex. In addition to age and sex, model 3 was adjusted for smoking, BMI, and CCI at baseline.
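The analyses here were run in SAS, not shown in the paper. Purely as an illustration of what the unadjusted model 1 estimates, the following is a minimal Cox partial-likelihood fit for a single binary covariate (DM yes/no), using Newton's method with Breslow handling of tied event times on small synthetic data:

```python
import numpy as np

# Toy illustration (not the authors' SAS code): a model-1-style unadjusted
# Cox fit for one binary covariate via Newton's method on the partial
# log-likelihood, with Breslow handling of tied event times.

def cox_hr_binary(time, event, x, iters=25):
    time, event, x = (np.asarray(a, dtype=float) for a in (time, event, x))
    order = np.argsort(time)                      # process subjects by time
    time, event, x = time[order], event[order], x[order]
    beta = 0.0
    for _ in range(iters):
        u, info = 0.0, 0.0                        # score and information
        for t in np.unique(time[event == 1]):
            at_risk = time >= t                   # risk set at event time t
            w = np.exp(beta * x[at_risk])
            xbar = np.sum(w * x[at_risk]) / np.sum(w)       # weighted mean of x
            var = np.sum(w * x[at_risk] ** 2) / np.sum(w) - xbar ** 2
            d = np.sum((time == t) & (event == 1))          # events at t (ties)
            u += np.sum(x[(time == t) & (event == 1)]) - d * xbar
            info += d * var
        beta += u / info                          # Newton step
    return float(np.exp(beta))                    # hazard ratio

# Synthetic data: the exposed group (x=1) tends to die earlier, so HR > 1.
t = [1, 2, 3, 4, 5, 6, 7, 8]
e = [1, 1, 1, 1, 1, 1, 1, 1]
x = [1, 0, 1, 0, 1, 0, 1, 0]
hr = cox_hr_binary(t, e, x)
```

The adjusted models 2 and 3 add covariates (age, sex, smoking, BMI, CCI), which in practice is done with a multivariable fitter such as SAS PROC PHREG or lifelines' `CoxPHFitter` rather than a hand-rolled Newton loop.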
We performed all statistical analyses using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA), and results were considered significant at P < 0.05.
Ethics statement: The study protocol was approved by the Institutional Review Board of Severance Hospital, Yonsei University Health System (IRB No. 4-2016-0496). The requirement for informed consent was waived by the board.
RESULTS: Among the 505,677 participants, 10.8% (n = 55,439) had DM at baseline and 89.2% (n = 450,238) did not (Table 1). The DM group tended to have lower household income, higher blood cholesterol, higher blood pressure, and more comorbidities than the non-DM group. Baseline characteristics are also presented by sex in Supplementary Table 1. Men had higher fasting blood glucose and higher blood pressure, but lower blood cholesterol, than women.
Household income, exercise frequency, and smoking rate were also higher in men than in women.
Data are presented as mean ± standard deviation or number (%). BMI = body mass index, FBS = fasting blood sugar, SBP = systolic blood pressure, DBP = diastolic blood pressure, CCI = Charlson comorbidity index.
Table 2 describes the baseline characteristics of the participants according to frequency of exercise. Of these, 232,315 (57.50%) did not exercise at all. The numbers of people who exercised 1–2, 3–4, 5–6, and 7 days per week were 116,389 (23.71%), 45,447 (9.26%), 12,881 (2.62%), and 33,927 (6.91%), respectively. People with higher exercise frequency tended to be male, older, and of higher household income.
During the 12 years of follow-up, 31,264 deaths were observed. People with DM had a significantly higher risk of mortality than those without DM. In addition, the HRs for total mortality by frequency of exercise showed a U-shaped relationship. However, people who exercised at any frequency had significantly lower mortality than those who did not exercise at all. The lowest HR for mortality was observed among people who exercised 3–4 times per week (Table 3). However, when we limited the analysis to people with DM, the lowest HR was observed at an exercise frequency of 5–6 times per week (Table 4).
Model 1 unadjusted; model 2 adjusted for age and sex; model 3 adjusted for age, sex, body mass index, smoking, and Charlson comorbidity index. HR = hazard ratio, CI = confidence interval.
Fig.
1 shows the mortality risk for 10 combined categories based on DM diagnosis and exercise frequency. The highest mortality was observed in people with DM who did not exercise, and the lowest in people without DM who exercised 3–4 times per week. The relationship between exercise frequency and mortality was U-shaped in both diabetic and non-diabetic participants. However, the exercise frequency with the lowest mortality differed by diabetes status: the lowest relative mortality was observed among people who exercised 5–6 times per week (HR, 0.93) in the DM group, but among those who exercised 3–4 times per week (HR, 0.69) in the non-DM group.
Combined effects of diabetes and frequency of exercise on total mortality. Hazard ratios were calculated using Cox proportional hazards regression models adjusted for sex, age, hypertension, total cholesterol, smoking, and Charlson comorbidity index.
DISCUSSION: In this large, representative cohort study of the Korean population, we observed a joint association of DM status and exercise frequency with all-cause mortality in adults. Compared with participants without DM, those with DM had a higher risk of all-cause mortality.
It has been reported that regular exercise can decrease the risk of mortality in people both with and without DM.815 The unique finding of this study is that the most beneficial frequency of exercise may differ for people with and without DM. In our study, the lowest mortality was observed for people exercising 5–6 days per week in the DM group, but 3–4 days per week in the non-DM group. It is also notable that people with DM who exercised 5–6 times per week showed lower mortality than people without DM who did not exercise.
This implies that even people with DM can enjoy a long lifespan by maintaining a proper lifestyle and receiving medical treatment.
Our study is in line with previous studies that have reported a U-shaped relationship between physical activity and mortality.161718192021 Our study also showed U-shaped curves in both the DM and non-DM groups; the associations of exercise frequency with all-cause mortality, relative to the reference group who did not exercise, suggest that a moderate frequency of exercise is indeed beneficial. Moderate physical activity for at least 150 minutes per week has been associated with a significant decrease in all-cause mortality.22
Physically inactive adults are strongly recommended to increase their activity and begin regular exercise. However, people who are already at high activity levels do not need to reduce their exercise frequency because of the risk of death.23 It is unclear why mortality was higher among those who exercised seven days a week than among those who exercised three to six days a week. Too much exercise can have a negative impact on health, although this is unlikely to be the main reason for the U-shaped effect of exercise frequency. Studies have reported that activation of the Ref-1/Nrf2 antioxidant defense pathway may affect oxidative stress resistance during vigorous physical activity,24 and long-term strenuous exercise may adversely affect cardiovascular health.2526 Unmeasured or uncontrolled confounders are another possible explanation. In our study, 9.35% of people with DM and 6.61% of people without DM reported exercising 7 days a week. Only a small proportion of people exercised every day, and they may have been more worried about their health.
A previous Korean study reported that people with DM exercised more regularly than people without DM.27 In our study population, people who exercised every day were more likely to have poor baseline conditions, such as lower household income and a larger number of comorbidities. We attempted to control for other mortality risk factors, but residual confounding remains possible.
Several limitations of the present study warrant consideration. A major limitation was the relatively crude assessment of our explanatory variable, which was based solely on participants' self-reports. The measure was further limited by relying on participants' responses about physical activity without assessing the type, duration, or intensity of activities; a more precise assessment might have yielded a different estimate of their contribution to the reduction in mortality. Second, the NHIS-HEALS cohort was a random sample of individuals who participated in the general health-screening program; thus, it represents screened individuals, not unscreened ones. Third, when we analyzed the association of exercise frequency with total mortality, we could not control for disease severity because we lacked information on glycemic control. Lastly, participants' exercise frequency could change during the study period, but we did not evaluate the effect of changes in exercise habits on mortality. Despite these limitations, this study has important strengths: it prospectively analyzed a large sample of Korean adults and observed mortality events over a long period.
In conclusion, we observed that a moderate frequency of exercise may reduce mortality, but the optimal frequency may vary depending on the diagnosis of DM. Our results support encouraging more frequent exercise in people with DM than in those without DM.
Keywords: Exercise, Mortality, Diabetes Mellitus, Cohort Study, Korea
INTRODUCTION: Diabetes mellitus (DM) is one of the most common chronic diseases worldwide.1 In the UK, 3.1 million people had DM in 2011,1 and in Scotland, 9.6% of people were diagnosed with DM in 2008.2 The prevalence of diagnosed and undiagnosed DM was 8.6% and 0.9%, respectively.3 In the Japanese population, the prevalence of DM in 1991, 1996, 2001, 2006, and 2011 increased among men (6.0%, 8.9%, 10.0%, 10.8%, and 12.0%) and among women (3.3%, 4.5%, 4.2%, 4.1%, and 5.1%).4 In Korea, between 2002 and 2004, 29,787 of 1,025,340 people (15,625 men and 14,162 women) were diagnosed with DM, and the prevalence of DM increased from 8.6% in 2001 to 11.0% in 2013.56 People with DM show a significantly higher risk of mortality than those without DM.78 An inverse association between overall physical activity level and total mortality has been identified.91011 In a retrospective cohort study of Korean men, both regular physical activity and fitness were associated with lower mortality, but fitness was not associated with mortality among men who regularly engaged in physical activity.12 However, few studies have examined the association between physical activity and mortality by comparing groups with and without DM. Therefore, we investigated this association among people with and without DM by analyzing a large Korean cohort.
The NHIS-HEALS data were built by using probabilistic sampling to represent an individual defined by age, sex, eligibility status (employed or self-employed), and household income levels from Korean residents in 2002 and 2003. This database has eligibility and demographic information regarding health services as well as data on medical aid beneficiaries, medical bill details, medical treatment, medical history, and prescriptions. The participants were followed up for 12 years from 2002 to 2013. For this research, we excluded participants with missing data on DM status (yes/no) and body mass index (BMI, kg/m2) and those who died during the first 3 years of the follow-up period, leaving 505,677 participants for analysis. In this study, we analyzed data from the National Health Insurance Service-Health Screening (NHIS-HEALS) cohort from 2002 to 2013, which were released by the National Health Insurance Service (NHIS).13 The NHIS-HEALS cohort comprised a nationally representative random sample of 514,795 individuals, which accounted for 10% of the entire population who were aged between 40–79 years in 2002 and 2003. The NHIS-HEALS data were built by using probabilistic sampling to represent an individual defined by age, sex, eligibility status (employed or self-employed), and household income levels from Korean residents in 2002 and 2003. This database has eligibility and demographic information regarding health services as well as data on medical aid beneficiaries, medical bill details, medical treatment, medical history, and prescriptions. The participants were followed up for 12 years from 2002 to 2013. For this research, we excluded participants with missing data on DM status (yes/no) and body mass index (BMI, kg/m2) and those who died during the first 3 years of the follow-up period, leaving 505,677 participants for analysis. Measurements Frequency of exercise was determined at study entry with a questionnaire. 
Participants were requested to estimate the exercise frequency per week at baseline, and were classified as exercising on 0, 1–2, 3–4, 5–6, and 7 days per week. Within this study, DM was defined as either receiving diabetes treatment or having a fasting blood glucose greater than or equal to 126 mg/dL. Smoking, drinking, and medical history were assessed using a self-administered questionnaire at baseline. Based on the responses, the individuals were categorized as non-smoker, past smoker, and current smoker. Household income levels were divided into five groups as 0–2 (lowest level), 3–4, 5–6, 7–8, and 9–10 (highest level). We calculated BMI as weight in kilograms per square meter of height (kg/m2). Hypercholesterolemia was defined as total cholesterol > 240 mg/dL. Hypertension was identified by systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or those taking antihypertensive medication. To control the effects of concomitant disorders, we used Charlson comorbidity index (CCI) which was originally developed to predict 1-year mortality using comorbidity data obtained from hospital chart review. We calculated CCI score using International Classification of Diseases, 10th revision (ICD-10) codes represent 17 conditions for each participant, under which scores 1, 2, 3, or 6 are determined according to severity of the condition.14 Information on death (date and cause of death) from Statistics Korea was individually linked using unique personal identification numbers. Frequency of exercise was determined at study entry with a questionnaire. Participants were requested to estimate the exercise frequency per week at baseline, and were classified as exercising on 0, 1–2, 3–4, 5–6, and 7 days per week. Within this study, DM was defined as either receiving diabetes treatment or having a fasting blood glucose greater than or equal to 126 mg/dL. Smoking, drinking, and medical history were assessed using a self-administered questionnaire at baseline. 
Based on the responses, the individuals were categorized as non-smoker, past smoker, and current smoker. Household income levels were divided into five groups as 0–2 (lowest level), 3–4, 5–6, 7–8, and 9–10 (highest level). We calculated BMI as weight in kilograms per square meter of height (kg/m2). Hypercholesterolemia was defined as total cholesterol > 240 mg/dL. Hypertension was identified by systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or those taking antihypertensive medication. To control the effects of concomitant disorders, we used Charlson comorbidity index (CCI) which was originally developed to predict 1-year mortality using comorbidity data obtained from hospital chart review. We calculated CCI score using International Classification of Diseases, 10th revision (ICD-10) codes represent 17 conditions for each participant, under which scores 1, 2, 3, or 6 are determined according to severity of the condition.14 Information on death (date and cause of death) from Statistics Korea was individually linked using unique personal identification numbers. Statistical analysis Cox proportional hazard regression was used to compare the calculated hazard ratio (HR) of the group identified as having DM versus the group without DM. Continuous variables were age, BMI, total cholesterol, fasting glucose, systolic blood pressure, and diastolic blood pressure. Categorical variables were sex, frequency of exercise, household income, smoking, hypercholesterolemia, hypertension, and CCI. We studied three sequentially adjusted models for each result. Model 1 was used to estimate single effects of DM status and exercise frequency per week at baseline on the risk of mortality. For total mortality, model 2 was additionally adjusted for age and sex. In addition to age and sex, model 3 also adjusted for smoking, BMI, and CCI at baseline. 
We performed all statistical analyses using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA) and, the results were considered significant if their P value was < 0.05. Cox proportional hazard regression was used to compare the calculated hazard ratio (HR) of the group identified as having DM versus the group without DM. Continuous variables were age, BMI, total cholesterol, fasting glucose, systolic blood pressure, and diastolic blood pressure. Categorical variables were sex, frequency of exercise, household income, smoking, hypercholesterolemia, hypertension, and CCI. We studied three sequentially adjusted models for each result. Model 1 was used to estimate single effects of DM status and exercise frequency per week at baseline on the risk of mortality. For total mortality, model 2 was additionally adjusted for age and sex. In addition to age and sex, model 3 also adjusted for smoking, BMI, and CCI at baseline. We performed all statistical analyses using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA) and, the results were considered significant if their P value was < 0.05. Ethics statement The study protocol was approved by the Yonsei University Health System, Severance Hospital, Institutional Review Board (IRB No. 4-2016-0496). Informed consent was waived by the board. The study protocol was approved by the Yonsei University Health System, Severance Hospital, Institutional Review Board (IRB No. 4-2016-0496). Informed consent was waived by the board. Study participants: In this study, we analyzed data from the National Health Insurance Service-Health Screening (NHIS-HEALS) cohort from 2002 to 2013, which were released by the National Health Insurance Service (NHIS).13 The NHIS-HEALS cohort comprised a nationally representative random sample of 514,795 individuals, which accounted for 10% of the entire population who were aged between 40–79 years in 2002 and 2003. 
The NHIS-HEALS data were built by using probabilistic sampling to represent an individual defined by age, sex, eligibility status (employed or self-employed), and household income levels from Korean residents in 2002 and 2003. This database has eligibility and demographic information regarding health services as well as data on medical aid beneficiaries, medical bill details, medical treatment, medical history, and prescriptions. The participants were followed up for 12 years from 2002 to 2013. For this research, we excluded participants with missing data on DM status (yes/no) and body mass index (BMI, kg/m2) and those who died during the first 3 years of the follow-up period, leaving 505,677 participants for analysis. Measurements: Frequency of exercise was determined at study entry with a questionnaire. Participants were requested to estimate the exercise frequency per week at baseline, and were classified as exercising on 0, 1–2, 3–4, 5–6, and 7 days per week. Within this study, DM was defined as either receiving diabetes treatment or having a fasting blood glucose greater than or equal to 126 mg/dL. Smoking, drinking, and medical history were assessed using a self-administered questionnaire at baseline. Based on the responses, the individuals were categorized as non-smoker, past smoker, and current smoker. Household income levels were divided into five groups as 0–2 (lowest level), 3–4, 5–6, 7–8, and 9–10 (highest level). We calculated BMI as weight in kilograms per square meter of height (kg/m2). Hypercholesterolemia was defined as total cholesterol > 240 mg/dL. Hypertension was identified by systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or those taking antihypertensive medication. To control the effects of concomitant disorders, we used Charlson comorbidity index (CCI) which was originally developed to predict 1-year mortality using comorbidity data obtained from hospital chart review. 
We calculated CCI score using International Classification of Diseases, 10th revision (ICD-10) codes represent 17 conditions for each participant, under which scores 1, 2, 3, or 6 are determined according to severity of the condition.14 Information on death (date and cause of death) from Statistics Korea was individually linked using unique personal identification numbers. Statistical analysis: Cox proportional hazard regression was used to compare the calculated hazard ratio (HR) of the group identified as having DM versus the group without DM. Continuous variables were age, BMI, total cholesterol, fasting glucose, systolic blood pressure, and diastolic blood pressure. Categorical variables were sex, frequency of exercise, household income, smoking, hypercholesterolemia, hypertension, and CCI. We studied three sequentially adjusted models for each result. Model 1 was used to estimate single effects of DM status and exercise frequency per week at baseline on the risk of mortality. For total mortality, model 2 was additionally adjusted for age and sex. In addition to age and sex, model 3 also adjusted for smoking, BMI, and CCI at baseline. We performed all statistical analyses using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA) and, the results were considered significant if their P value was < 0.05. Ethics statement: The study protocol was approved by the Yonsei University Health System, Severance Hospital, Institutional Review Board (IRB No. 4-2016-0496). Informed consent was waived by the board. RESULTS: Among the 505,677 participants, 10.8% (n = 55,439) had DM at baseline and 89.2% (n = 50,238) did not have DM at baseline (Table 1). DM group tended to have lower household income, higher blood cholesterol, higher blood pressure, and more comorbidities than non-DM group. Baseline characteristics are also presented by sex in Supplementary Table 1. 
Men had higher fasting blood glucose and higher blood pressure, but lower blood cholesterol compared to women. Household income, exercise frequency, and smoking rate were also higher in men than in women. Data presented as mean ± standard deviation or number (%). BMI = body mass index, FBS = fasting blood sugar, SBP = systolic blood pressure, DBP = diastolic blood pressure, CCI = Charlson comorbidity index. Table 2 describes the baseline characteristics of the participants according to frequency of exercise. Of these, 232,315 (57.50%) did not exercise at all. Number of people who did exercise, 1–2, 3–4, 5–6, and 7 days per week was 116,389 (23.71%), 45,447 (9.26%), 12,881 (2.62%), and 33,927 (6.91%), respectively. People with higher exercise frequency tended to be male, be older, and have higher household income. Data presented as mean ± standard deviation or number (%). BMI = body mass index, FBS = fasting blood sugar, SBP = systolic blood pressure, DBP = diastolic blood pressure, CCI = Charlson comorbidity index. During the 12 years of follow-up, 31,264 deaths were observed. People with DM had significantly higher risk of mortality than those without DM. In addition, the HR for total mortality among participants according to frequency of exercise showed a U-shaped relationship. However, people who perform exercise at any frequency had significantly lower mortality, compared to those who did not exercise at all. The lowest HR for mortality was observed among people who exercise 3–4 times per week (Table 3). However, when we limited our analysis to people with DM, the lowest HR for mortality was observed at exercise frequency of 5–6 times per week (Table 4). Model 1 unadjusted; model 2 adjusted for age and sex; model 3 adjusted for age, sex, body mass index, smoking, and Charlson comorbidity index. HR = hazard ratio, CI = confidence interval. 
Model 1 unadjusted; model 2 adjusted for age and sex; model 3 adjusted for age, sex, body mass index, smoking, and Charlson comorbidity index. HR = hazard ratio, CI = confidence interval. Fig. 1 shows mortality risk for 10 combined categories based on DM diagnosis and exercise frequency. The highest mortality was observed in people with DM and no exercise, and the lowest mortality was observed in people without DM and 3–4 exercises per week. The relationship between exercise frequency and mortality was found to be U-shaped in both diabetic and non-diabetic participants. However, the frequency of exercise with the lowest mortality rate varied depending on whether or not diabetes was present. The lowest relative mortality was observed among people who exercise 5–6 times per week (HR, 0.93) in the DM group, but among those who exercise 3–4 times per week (HR, 0.69) in the non-DM group. Combined effects of diabetes and frequency of exercise on total mortality. Hazard ratios were calculated using Cox proportional hazard regression models that adjusted for sex, age, hypertension, total cholesterol, smoking, and Charlson comorbidity index. DISCUSSION: In this large and representative cohort study of the Korean population, we observed a joint association between frequency of exercise for all-cause mortality in adults with and without DM. When compared to participants without DM, those with DM had a higher risk of all-cause mortality. It has been reported that regular exercise can decrease the risk of mortality in people without DM as well as in people with DM.815 The unique finding of this study is that the most beneficial frequency of exercise may differ for people with and without DM. In our study, the lowest mortality was observed for people exercising 5–6 days per week in DM group, but for people exercising 3–4 days per week in non-DM group. 
It is also notable that people with DM who exercised 5–6 times per week showed lower mortality than those without DM who did not exercise. This implies that even people with DM can enjoy a long lifespan through a proper lifestyle and medical treatment. Our study is in line with previous studies that have reported a U-shaped relationship between physical activity and mortality.161718192021 Our study also showed U-shaped curves not only in the DM group but also in the non-DM group; these associations of exercise frequency with all-cause mortality, compared to the reference group who did not exercise, suggest that a moderate frequency of exercise is indeed beneficial. Moderate physical activity for at least 150 minutes per week was associated with a significant decrease in all-cause mortality.22 Physically inactive adults are strongly recommended to increase their activity and to begin regular exercise. However, people who are already at high activity levels do not need to reduce their exercise frequency because of the risk of death.23 It is unclear why mortality is higher for those who exercise seven days a week than for those who exercise three to six days a week. Too much exercise can have a negative impact on health, although this is unlikely to be a major reason for the U-shaped effect of exercise frequency. Studies have reported that activation of the Ref-1/Nrf2 antioxidant defense pathway may affect resistance to oxidative stress during vigorous physical activity,24 and that long-term strenuous exercise may have an adverse impact on cardiovascular health.2526 Unmeasured or uncontrolled confounders are another possible explanation. In our study, 9.35% of people with DM and 6.61% of people without DM reported exercising 7 days a week. Only a small proportion of participants exercised every day, and they may have been more worried about their health. 
A previous Korean study reported that people with DM exercised more regularly than people without.27 In our study population, people who exercised every day were more likely to have poor baseline conditions such as lower household income and a larger number of comorbidities. We attempted to control for other mortality risk factors, but residual confounding remains possible. Several limitations of the present study warrant consideration. A major limitation was the relatively crude assessment of our explanatory variable, which was based solely on self-reporting by the participants. The measure was further limited by relying on subjects' responses about physical activity without assessing the type, duration, and intensity of physical activities. A more precise assessment of these factors might have yielded a different contribution of these factors to the reduction in mortality. Second, the NHIS-HEALS cohort was a random sample of individuals who participated in the general health-screening program; thus, it represents screened individuals, but not unscreened ones. Third, when we analyzed the association of exercise frequency with total mortality, we could not control for disease severity because we did not have information about glycemic control. Lastly, the participants' exercise frequency could have changed during the study period, but we did not evaluate the effect of changes in exercise habits on mortality. Despite these limitations, this study has important strengths because it prospectively analyzed a large sample of Korean adults and observed mortality events over a long period. In conclusion, we observed that a moderate frequency of exercise may reduce mortality, but the optimal frequency may vary depending on the diagnosis of DM. Our results support encouraging more frequent exercise in people with DM than in those without DM.
Background: The goal of this study was to analyze the relationship between exercise frequency and all-cause mortality for individuals diagnosed with and without diabetes mellitus (DM). Methods: We analyzed data for 505,677 participants (53.9% men) in the National Health Insurance Service-National Health Screening (NHIS-HEALS) cohort. The study endpoint was all-cause mortality. Frequency of exercise and covariates including age, sex, smoking status, household income, blood pressure, fasting glucose, body mass index, total cholesterol, and Charlson comorbidity index were determined at baseline. Cox proportional hazards regression models were developed to assess the effects of exercise frequency (0, 1-2, 3-4, 5-6, and 7 days per week) on mortality, separately in individuals with and without DM. Results: We found a U-shaped association between exercise frequency and mortality in individuals with and without DM. However, the frequency of exercise associated with the lowest risk of all-cause mortality was 3-4 times per week (hazard ratio [HR], 0.69; 95% confidence interval [CI], 0.65-0.73) in individuals without DM, and 5-6 times per week in those with DM (HR, 0.93; 95% CI, 0.78-1.10). Conclusions: A moderate frequency of exercise may reduce mortality regardless of the presence or absence of DM; however, when compared to those without the condition, people with DM may need to exercise more often.
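Before fitting adjusted models, a crude first pass on cohort data like this is to compare mortality rates per person-year across the combined DM × exercise categories; under constant hazards the rate ratio approximates the HR, though it ignores covariates and censoring timing. A sketch with made-up counts (illustrative only, not the NHIS-HEALS data):

```python
# Hypothetical deaths and person-years per combined category -- illustrative
# numbers only, not taken from the NHIS-HEALS cohort.
deaths = {("DM", "none"): 120, ("DM", "5-6/wk"): 40,
          ("no DM", "none"): 300, ("no DM", "3-4/wk"): 90}
person_years = {("DM", "none"): 10_000, ("DM", "5-6/wk"): 5_000,
                ("no DM", "none"): 100_000, ("no DM", "3-4/wk"): 50_000}

def mortality_rate(group):
    # deaths per person-year of follow-up in that category
    return deaths[group] / person_years[group]

# Everything is expressed relative to a single reference category,
# as in the study's combined-category analysis.
reference = ("no DM", "none")
rate_ratio = {g: mortality_rate(g) / mortality_rate(reference) for g in deaths}
```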
null
null
3,973
297
8
[ "dm", "exercise", "mortality", "frequency", "people", "study", "blood", "week", "participants", "health" ]
[ "test", "test" ]
null
null
[CONTENT] Exercise | Mortality | Diabetes Mellitus | Cohort Study | Korea [SUMMARY]
[CONTENT] Exercise | Mortality | Diabetes Mellitus | Cohort Study | Korea [SUMMARY]
[CONTENT] Exercise | Mortality | Diabetes Mellitus | Cohort Study | Korea [SUMMARY]
null
[CONTENT] Exercise | Mortality | Diabetes Mellitus | Cohort Study | Korea [SUMMARY]
null
[CONTENT] Adult | Aged | Cohort Studies | Databases, Factual | Diabetes Mellitus, Type 2 | Exercise | Female | Humans | Male | Middle Aged | National Health Programs | Proportional Hazards Models | Republic of Korea | Risk Factors [SUMMARY]
[CONTENT] Adult | Aged | Cohort Studies | Databases, Factual | Diabetes Mellitus, Type 2 | Exercise | Female | Humans | Male | Middle Aged | National Health Programs | Proportional Hazards Models | Republic of Korea | Risk Factors [SUMMARY]
[CONTENT] Adult | Aged | Cohort Studies | Databases, Factual | Diabetes Mellitus, Type 2 | Exercise | Female | Humans | Male | Middle Aged | National Health Programs | Proportional Hazards Models | Republic of Korea | Risk Factors [SUMMARY]
null
[CONTENT] Adult | Aged | Cohort Studies | Databases, Factual | Diabetes Mellitus, Type 2 | Exercise | Female | Humans | Male | Middle Aged | National Health Programs | Proportional Hazards Models | Republic of Korea | Risk Factors [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] dm | exercise | mortality | frequency | people | study | blood | week | participants | health [SUMMARY]
[CONTENT] dm | exercise | mortality | frequency | people | study | blood | week | participants | health [SUMMARY]
[CONTENT] dm | exercise | mortality | frequency | people | study | blood | week | participants | health [SUMMARY]
null
[CONTENT] dm | exercise | mortality | frequency | people | study | blood | week | participants | health [SUMMARY]
null
[CONTENT] dm | physical activity | activity | physical | people | men | prevalence | diagnosed | mortality | association [SUMMARY]
[CONTENT] blood | health | medical | 2002 | nhis | data | cci | blood pressure | age | sex [SUMMARY]
[CONTENT] exercise | blood | people | mortality | higher | index | frequency | dm | hr | observed [SUMMARY]
null
[CONTENT] dm | exercise | mortality | people | frequency | blood | health | study | sex | pressure [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] 505,677 | 53.9% | the National Health Insurance Service-National Health Screening | NHIS ||| [SUMMARY]
[CONTENT] Charlson ||| 1 | 3 | 5 | 7 days per week ||| ||| 3 | 0.69 | 95% ||| CI] | 0.65 | DM | 5-6 | DM (HR | 0.93 | 95% | CI | 0.78-1.10 [SUMMARY]
null
[CONTENT] ||| 505,677 | 53.9% | the National Health Insurance Service-National Health Screening | NHIS ||| ||| Charlson ||| 1 | 3 | 5 | 7 days per week ||| ||| 3 | 0.69 | 95% ||| CI] | 0.65 | DM | 5-6 | DM (HR | 0.93 | 95% | CI | 0.78-1.10 ||| DM | DM [SUMMARY]
null
Effect of enamel protective agents on shear bond strength of orthodontic brackets.
25138692
This paper aimed to study the effect of two enamel protective agents on the shear bond strength (SBS) of orthodontic brackets bonded with conventional and self-etching primer (SEP) adhesive systems.
BACKGROUND
The two protective agents used were resin infiltrate (ICON) and Clinpro; the two adhesive systems used were self-etching primer system (Transbond Plus Self Etching Primer + Transbond XT adhesive) and a conventional adhesive system (37% phosphoric acid etch + Transbond XT primer + Transbond XT adhesive ). Sixty premolars divided into three major groups and six subgroups were included. The shear bond strength was tested 72 h after bracket bonding. Adhesive remnant index scores (ARI) were assessed. Statistical analysis consisted of a one-way ANOVA for the SBS and Kruskal-Wallis test followed by Mann-Whitney test for the ARI scores.
METHODS
In the control group, the mean SBS when using the conventional adhesive was 21.1 ± 7.5 MPa while when using SEP was 20.2 ± 4.0 MPa. When ICON was used with the conventional adhesive system, the SBS was 20.2 ± 5.6 MPa while with SEP was 17.6 ± 4.1 MPa. When Clinpro was used with the conventional adhesive system, the SBS was 24.3 ± 7.6 MPa while with SEP was 11.2 ± 3.5 MPa. Significant differences in the shear bond strength of the different groups (P = .000) was found as well as in the ARI scores distribution (P = .000).
RESULTS
The type of the adhesive system used to bond the orthodontic brackets, either conventional or self-etching primer, influenced the SBS, while the enamel protective material influenced the adhesive remnant on the enamel surface after debonding.
CONCLUSION
[ "Acid Etching, Dental", "Adhesiveness", "Bicuspid", "Composite Resins", "Dental Bonding", "Dental Enamel", "Dental Stress Analysis", "Humans", "Materials Testing", "Orthodontic Brackets", "Phosphoric Acids", "Pit and Fissure Sealants", "Protective Agents", "Resin Cements", "Resins, Synthetic", "Shear Strength", "Stainless Steel", "Stress, Mechanical", "Surface Properties", "Temperature", "Water" ]
4138552
Background
Enamel demineralization and white spot lesions associated with fixed orthodontic appliances are among the greatest challenges faced by clinicians at the end of orthodontic treatment, not only for esthetic reasons but also because this subsurface demineralization represents the first stage of caries formation [1-4]. Different methods have been studied, all aiming to reduce enamel demineralization during orthodontic treatment without compromising the bond strength of the orthodontic brackets. The most common method was the use of fluoride-containing mouth rinses, gels, and toothpastes [5-7]; however, studies found a significant association between patient compliance with the rinsing program advised by the clinician and the reduction in the development of white spot lesions [8]. It was found that with only standardized general prophylactic measures, new white spot lesions developed on the maxillary front teeth during orthodontic treatment in 60.9% of patients [9]. Preventive measures that do not depend on the patient's compliance have been developed and gained popularity as a solution to the problem of demineralization. These include the use of glass ionomer cement [10,11], topical application of preventive agents such as fluoride and casein phosphopeptide-amorphous calcium phosphate [12,13], antibacterial agents incorporated in the adhesive resin [14,15], fluoride-releasing adhesives [16,17], caries infiltration resins [18,19], laser irradiation [20,21], bioactive glass-containing adhesives [22], and enamel deproteinizing agents [23]. The current study focused on two preventive agents, Clinpro and ICON. Clinpro is a fluoridated varnish containing 5% sodium fluoride. Fluoride was found to be effective in reducing the development of white spot lesions associated with fixed orthodontic treatment [16,24]. Also, ICON resin infiltration was found to decrease the dissolution of enamel and so limit the appearance of white spot lesions [25]. 
When to apply these materials so as to best reduce white spot lesions around the orthodontic brackets is a worthwhile question. These preventive agents could be applied after bonding the orthodontic brackets, but this may not always be easy, especially with severely crowded or partially erupted teeth. The other option is to apply these materials before bonding the orthodontic brackets, but they could then affect the shear bond strength and/or the amount of adhesive left on the teeth after debonding of the orthodontic brackets upon treatment completion. The objective of this study was to evaluate the effect of applying the two enamel protective agents before bonding on the shear bond strength of orthodontic brackets bonded with conventional and self-etch adhesive systems.
null
null
Results
The descriptive statistics of the SBS for each group are presented in Table 1. The one-way ANOVA (Table 1) indicated significant differences in the shear bond strength of the different groups (P = .000). When Clinpro was used before bonding with SEP and Transbond XT, the SBS was significantly lower than in all other groups: the two control groups, the conventional adhesive group (P = .000) and the SEP group (P = .001); the two ICON groups, the conventional adhesive group (P = .001) and the SEP group (P = .015); and Clinpro with the conventional adhesive system (P = .000). When Clinpro was used before bonding with the conventional adhesive, the bond strength was similar to that of the other groups but significantly higher than the SBS when ICON was used before bonding with SEP and Transbond XT adhesive.

Table 1. Descriptive statistics of the in vitro shear bond strength (MPa)

Group | Number | Mean ± SD | Minimum | Maximum
Transbond XT + ICON + H3PO4 (a,d) | 10 | 20.2 ± 4.6 | 11.0 | 28.6
Transbond XT + ICON + SEP (b,d) | 10 | 17.6 ± 4.1 | 13.3 | 23.7
Transbond XT + Clinpro + H3PO4 (a) | 10 | 21.3 ± 6.6 | 13.4 | 34.2
Transbond XT + Clinpro + SEP (c) | 10 | 11.2 ± 3.5 | 5.6 | 15.0
Transbond XT + primer + H3PO4 (a,d) | 10 | 21.1 ± 7.5 | 11.2 | 34.7
Transbond XT + SEP (a,d) | 10 | 20.2 ± 4.0 | 14.0 | 25.9

F = 6.07; P = .000. Mean values sharing the same letter are not significantly different at P ≤ .05.

The results of the Kruskal-Wallis test showed that the ARI scores (Table 2) differed significantly (P = .000) between the groups. The Mann-Whitney test showed no difference in ARI scores between the self-etching and conventional etching groups when using ICON (P = .166), Clinpro (P = .802), or when bonding to untreated enamel in the control group (P = .751). 
Results of the one-way ANOVA also showed that the type of preventive agent used on the enamel significantly influenced the ARI score distribution; there was a significant difference depending on whether ICON or Clinpro was used before bonding with SEP (P = .005) or with phosphoric acid etching (P = .000).

Table 2. Frequencies of the ARI scores for the six groups

Group | Number | ARI 0 | ARI 1 | ARI 2 | ARI 3
Transbond XT + ICON + H3PO4 (a) | 10 | 0 | 1 | 1 | 8
Transbond XT + ICON + SEP (a) | 10 | 0 | 3 | 2 | 5
Transbond XT + Clinpro + H3PO4 (b) | 10 | 5 | 3 | 2 | 0
Transbond XT + Clinpro + SEP (b) | 10 | 6 | 2 | 1 | 1
Transbond XT + primer + H3PO4 (c) | 10 | 2 | 4 | 2 | 2
Transbond XT + SEP (c) | 10 | 3 | 4 | 0 | 3

Chi-square = 22.77; P = .000. ARI scores: 0 = no adhesive left on the tooth surface; 1 = less than half the resin left on the tooth surface; 2 = more than half the resin left on the tooth surface; 3 = all resin left on the tooth surface, with a distinct impression of the bracket base. Scores sharing the same letter are not significantly different at P ≤ .05.
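The statistical pipeline reported here (one-way ANOVA on the continuous SBS values; Kruskal-Wallis and Mann-Whitney tests on the ordinal ARI scores) can be sketched with SciPy. The sample values below are illustrative stand-ins, not the study's raw measurements:

```python
from scipy.stats import f_oneway, kruskal, mannwhitneyu

# Illustrative SBS samples (MPa), n = 10 per group, loosely echoing the
# Table 1 means -- not the actual measurements.
sbs_icon_h3po4  = [20.2, 18.5, 24.0, 15.1, 22.3, 19.8, 26.7, 11.0, 28.6, 16.0]
sbs_clinpro_sep = [11.2, 9.8, 14.6, 5.6, 15.0, 12.3, 8.7, 13.1, 10.4, 11.5]
sbs_control     = [21.1, 34.7, 11.2, 25.3, 17.8, 20.0, 23.4, 15.6, 19.2, 22.8]
f_stat, p_anova = f_oneway(sbs_icon_h3po4, sbs_clinpro_sep, sbs_control)

# ARI scores are ordinal (0-3), so nonparametric tests are appropriate.
ari_icon    = [3, 3, 3, 2, 3, 1, 3, 2, 3, 3]   # mostly score 3
ari_clinpro = [0, 0, 1, 0, 2, 0, 1, 0, 0, 3]   # mostly score 0
h_stat, p_kw = kruskal(ari_icon, ari_clinpro)
u_stat, p_mw = mannwhitneyu(ari_icon, ari_clinpro)
```

With the clearly separated groups above, all three tests report significant differences, mirroring the pattern of the study's results.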
Conclusions
Based on the above findings, we conclude the following:
- Overall, the SBS was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding, in all three major groups.
- Significantly lower SBS was recorded when Clinpro was used before bonding with the self-etching adhesive system.
- The ICON group showed higher ARI scores more frequently, while the Clinpro and control groups showed lower ARI scores more frequently. The adhesive remnant did not differ between the self-etching and conventional etching subgroups.
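As a reminder of where the MPa values above come from: the methods divide each failure load (in newtons) by the bracket base bonding area of 10.90 mm², and since 1 MPa = 1 N/mm² this yields the shear bond strength directly. A minimal sketch (the 230 N load is a hypothetical example):

```python
def shear_bond_strength_mpa(failure_load_n, base_area_mm2=10.90):
    # 1 MPa = 1 N/mm^2, so newtons divided by mm^2 gives MPa directly
    return failure_load_n / base_area_mm2

# A hypothetical 230 N failure load on the 10.90 mm^2 bracket base:
print(round(shear_bond_strength_mpa(230), 1))  # 21.1
```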
[]
[]
[]
[ "Background", "Methods", "Results", "Discussion", "Conclusions" ]
[ "Enamel demineralization and white spot lesions associated with orthodontic fixed appliances is one of the greatest challenges faced by clinicians at the end of the orthodontic treatment not only for esthetic reasons but also because this subsurface demineralization represents the first stage of caries formation [1-4].\nDifferent methods have been studied, all aiming to reduce enamel demineralization during orthodontic treatment without compromising the bond strength of the orthodontic brackets. The most common method was the use of fluoride-containing mouth rinses, gels, and tooth pastes [5-7]; however, studies found a significant association between the patient compliance to the rinsing program advised by the clinician and the reduction in the development of white spot lesions [8]. It was found that with only standardized general prophylactic measures, new white spot lesions developing on the maxillary front teeth during orthodontic treatment were seen in 60.9% of the patients [9].\nPreventive measures that do not depend on the patient’s compliance have been developed and gained popularity to solve the problem of demineralization. These included the use of glass ionomer cement [10,11], topical applications of preventive agents as fluoride and casein phosphopeptide-amorphous calcium phosphate [12,13], antibacterial agents incorporated in the adhesive resin [14,15], fluoride releasing adhesives [16,17], caries infiltration resins [18,19], laser irradiation [20,21], bioactive glass-containing adhesives [22], and enamel deproteinizing agents [23].\nThe current study focused on two preventive agents Clinpro and ICON. Clinpro is a fluoridated varnish containing 5% sodium fluoride. Fluoride was found to be effective in reducing the development of white spot lesions associated with fixed orthodontic treatment [16,24]. Also, ICON resin infiltration was found to decrease the dissolution of enamel and so limit the appearance of white spot lesions [25]. 
When is the proper timing to apply these materials to get the best result of decreasing the white spot lesions around the orthodontic brackets is a worthwhile question. These preventive agents could be applied after bonding the orthodontic brackets, but this may not be easy all the times especially where there are severely crowded or partially erupted teeth. The other option is to apply these materials before bonding the orthodontic brackets, but these preventive agents could have an effect on the shear bond strength and/or the amount of adhesive left on the teeth after debonding of the orthodontic brackets upon treatment completion.\nThe objective of this study was to study the effect of using the two enamel protective agents before bonding on the shear bond strength of orthodontic brackets bonded with conventional and self-etch adhesive systems.", "This in vitro testing used 60 extracted human upper premolars stored in an aqueous solution of thymol (0.1% wt/vol). Teeth were extracted as part of orthodontic treatment and collected to be used in research. To calculate the sample size, Epicalc 2000 software version 1.02 (Brixton Books, Brixton, UK) was used. The sample size was found to be ten specimens for each group based on 80% power and 95% confidence interval.\n\nTypically\nmounted specimen for SBS testing.\n\nThe teeth were fixed in self-curing acrylic resin placed in flexible molds with the roots embedded in the acrylic and the crown exposed and oriented perpendicularly to the bottom of the mold.\nTwo types of enamel protective agents were used in the current study: ICON (DMG, Hamburg, Germany) and Clinpro (3M Unitek, Monrovia, CA, USA). The two adhesive systems used in this study were Transbond XT light cure adhesive and Transbond Plus Self Etching Primer (3M Unitek, Monrovia, CA, USA), and Transbond XT light cure adhesive, Transbond XT primer, and 37% phosphoric acid (3M Unitek, Monrovia, CA, USA). 
All materials were used according to the manufacturers’ instructions.\nThe sample was divided into three major groups; group 1 used (ICON) before bonding the orthodontic brackets, group 2 used Clinpro before bonding, and group 3 a control group with no protective enamel agent used. Each group was divided into two subgroups; in the first one, orthodontic brackets were bonded with self-etching adhesive system and in the second one, a conventional adhesive system was used.\nPremolar stainless steel brackets (Equilibrium 2 Roth prescription, 0.022 in. slot size, Dentaurum Orthodontics, Ispringen, Germany) were used. The buccal surface of each tooth was cleaned with non-fluoride oil-free pumice paste using a nylon brush attached to a slow-speed hand piece for 5 s, and then the tooth was rinsed with water for 10 s and dried with an oil-free air spray. Brackets were bonded to the teeth according to the manufacturer’s instructions for the adhesive system and stored in distilled water at 37°C until testing.\nBracket debonding was performed 72 h after bonding in a material testing unit (model no 5500, Instron Corp, Canton, MA, USA) with an occluso-gingival load applied to the bracket base. The shearing rod was adjusted each time so the shearing blade is parallel to the base of the bracket contacting it in a reproducible way each test. The shear force was applied to the bracket by lowering the shearing rod perpendicularly in the gingival direction, producing a shear force at the bracket-enamel interface (Figure 1).\nThe crosshead speed was 2.0 mm/min, and the failure load in Newton was divided by the bracket base bonding area of 10.90 mm2 to calculate the shear bonding strength in MPa.\nThe adhesive remnant index (ARI) and failure site assessment was completed immediately after each shear bond strength debonding under ×20 magnification [26]. 
The ARI evaluation used the 4-point scale of Årtun and Bergland [27] where 0 indicates no adhesive left on the tooth surface, implying that bond fracture occurred at the resin/enamel interface; 1 indicates less than half the resin left on the tooth surface, implying that bond fracture occurred predominantly at the resin/enamel interface; 2 indicates more than half the resin left on the tooth surface, implying that bond fracture occurred predominantly at the bracket/resin interface; and 3 indicates all resin left on the tooth surface, with a distinct impression of the bracket base, implying that bond fracture occurred at the bracket/resin interface.\nDescriptive statistics, including mean, standard deviation, and minimum and maximum values of the shear bond strength, were calculated for each of the adhesive systems tested. Analysis of variance (ANOVA) test followed by a LSD post hoc multimeans comparison test was used to compare the groups. A Kruskal-Wallis test was used in conjunction with a Mann-Whitney test to compare the differences in the ARI scores between the groups. Significance for all statistical tests was at P ≤ .05. Statistics were carried out using Statistical Package for Social Sciences (SPSS Inc, Chicago, IL, USA) program version 10.", "The descriptive statistics of the SBS of each group are presented in Table 1. The one-way ANOVA, Table 1, indicated significant differences in the shear bond strength of the different groups (P = .000). When using Clinpro before bonding with SEP and Transbond XT, the SBS was significantly less than the other groups; the two control groups, the conventional adhesive group (P = .000) and the SEP group (P = .001); the two ICON groups, the conventional adhesive (P = .001) and the SEP group (P = .015); and Clinpro with the conventional adhesive system (P = .000). 
When using Clinpro before bonding with the conventional adhesive, the bond strength was similar to that of the other groups but significantly higher than the SBS when using ICON before bonding with SEP and Transbond XT adhesive.Table 1\nDescriptive statistics of the\nin vitro\nshear bond strength (MPa)\n\nNumber\n\nMean\n\nSD\n\nMinimum\n\nMaximum\nTransbond XT + ICON + H3PO4\na,d\n1020.2±4.611.028.6Transbond XT + ICON + SEPbd\n1017.6±4.113.323.7Transbond XT + Clinpro + H3PO4\na\n1021.3±6.613.434.2Transbond XT + Clinpro + SEPc\n1011.2±3.55.615.0Transbond XT + primer + H3PO4\na,d\n1021.1±7.511.234.7Transbond XT + SEPa,d\n1020.2±4.014.025.9\nF = 6.07; P = .000. Mean values in each row with the same letter are not significantly different at P ≤ .05.\n\nDescriptive statistics of the\nin vitro\nshear bond strength (MPa)\n\n\nF = 6.07; P = .000. Mean values in each row with the same letter are not significantly different at P ≤ .05.\nThe results of the Kruskal-Wallis test showed that the ARI scores, Table 2, were significantly different (P = .000) between the groups. The Mann-Whitney test showed no difference in the ARI scores between self-etching and conventional etching groups when using ICON (P = .166), Clinpro (P = .802), as well as when bonding to untreated enamel in the control group (P = .751). 
Results of one-way ANOVA showed also that the type of preventive agent used on the enamel significantly influenced the ARI scores distribution; there was a significant difference depending on whether it was ICON or Clinpro that was used before bonding with SEP (P = .005) or with phosphoric acid etching (P = .000).Table 2\nFreq\nuencies of the ARI scores for the two groups\n\nNumber\n\nARI scores\n\n0\n\n1\n\n2\n\n3\nTransbond XT + ICON + H3PO4\na\n100118Transbond XT + ICON + SEPa\n100325Transbond XT + Clinpro + H3PO4\nb\n105320Transbond XT + Clinpro + SEPb\n106211Transbond XT + primer + H3PO4\nc\n102422Transbond XT + SEPc\n103403Chi-square =22.77; P = .000; 0 indicates no adhesive left on the tooth surface, 1 indicates less than half the resin left on the tooth surface, 2 indicates more than half the resin left on the tooth surface, and 3 indicates all resin left on the tooth surface, with a distinct impression of the bracket base. Scores in each row with the same letter are not significantly different at P ≤ .05.\n\nFreq\nuencies of the ARI scores for the two groups\n\nChi-square =22.77; P = .000; 0 indicates no adhesive left on the tooth surface, 1 indicates less than half the resin left on the tooth surface, 2 indicates more than half the resin left on the tooth surface, and 3 indicates all resin left on the tooth surface, with a distinct impression of the bracket base. Scores in each row with the same letter are not significantly different at P ≤ .05.", "The lowest SBS was recorded with the samples treated with Clinpro before bonding the orthodontic brackets; the SBS in this group was significantly lower than the SBS in the other five groups. 
This could be attributed to the resistance effect that the outer enamel layer acquires from the fluoride content of the Clinpro which may be of significant effect especially when using self-etching primers in bonding due to their more superficial etching effect compared with the etching of the conventionally used phosphoric acid. Previous studies [28-30] with scanning electron microscope (SEM) indicated that although self-etch priming agents have the potential to etch the enamel surface, the etching pattern is less deep compared to the etching pattern of phosphoric acid. A chemical bonding capacity through the interaction between some functional monomers and the calcium of residual hydroxyapatite may contribute favorably to the bonding effectiveness [31-33], but fluoride affects the enamel surface rendering it more resistant to demineralization. Fluoride in low concentrations favors the formation of fluoro-hydroxyapatite, which is less susceptible to acidic solubility than hydroxyapatite [34,35]. Therefore, it is recommended to use these preventive agents after bonding the brackets when self-etch adhesive systems are used.\nIn the current study, using the caries infiltrant (ICON) before bonding did not significantly change the bond strength compared to the other groups, although the bond strength was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding. This was also observed in the control group; shear bond strength was lower when self-etching primer was used than when phosphoric acid was used, but this difference was statistically insignificant. Previous studies found a significant increase in the shear bond strength of Transbond XT adhesive with phosphoric acid and Transbond XT primer when ICON was used before bonding orthodontic brackets to sound enamel [36] or even to demineralized enamel [37]. 
The shear bond strength was also increased when Transbond Plus Self Etching Primer was used instead of the conventional phosphoric acid etching to sound enamel [36]. The shear bond strengths recorded in this study were sufficient for clinical use in all the six groups presenting different combinations of adhesive systems and enamel protective agents as well as control groups. The average range of bond strength was suggested by Reynolds [38] to be 5.9 to 7.8 MPa for clinical and 4.9 MPa for laboratory performances. In vitro and in vivo studies of SBS are both needed; in vitro measurements of shear bond strength provide useful information about the bonding efficiency of different types of materials, but the actual performance of these materials can only be evaluated in the environment where they were intended to function [39]. Unfortunately, no one variable or combination of variables that can be measured in the laboratory is perfectly predictive of what might occur when the bonding adhesive is used in the demanding environment of the oral cavity [40-42]. Therefore; in vitro studies are mainly important as a preliminary guide to the clinician, while in vivo studies are needed for evidence-based practice.\nThe distribution of the ARI scores was assessed in this study under ×20 magnifications [26]. Although different quantitative and qualitative methods have been used to assess the ARI scores after orthodontic bracket debonding and quantitative methods were found preferable if accurate evaluation of the adhesive remnant is required [43], ARI score evaluation system has proved to be of value in the studies of orthodontic adhesive systems. ARI score system is a quick and simple method that needs no special equipment. Although SEM evaluation might be more accurate than evaluation under ×10 or ×20 magnification, it is harder to be reflected in clinical applications [26]. The distribution of the ARI scores was found different between the three major groups. 
In the ICON group, higher ARI scores tended to be more frequent in both the self-etching and conventional etching subgroups, while in the Clinpro and control groups, less adhesive remnant tended to be left on the enamel surface after debonding in both subgroups. This could be attributed to the chemical bond between the resin infiltrant and the adhesive resin. However, the adhesive remaining on the enamel surface after debonding was not different in the three major groups between the self-etching subgroup and the conventional etching subgroup, indicating a similar effect of the enamel protective material with the two types of adhesive systems. These results differed from those of the Naidu et al. [36] study, which found that using ICON as preconditioning before bonding orthodontic brackets to sound enamel did not affect the ARI score distribution compared to the control groups using Transbond XT primer and Transbond PSEP. The site of bond failure was found not to be a reflection of bond strength; that is, the failure site did not reflect different bond strengths at different interfaces [44,45]. On the other hand, a variety of factors could affect bond strength, including the type of enamel conditioner, acid concentration, length of etching time, composition of the adhesive, bracket base design, bracket material, oral environment, skill of the clinician, and time of light exposure in the case of a light-cure approach [46].\nApplying the results of this study clinically, it would be preferable to apply Clinpro after bonding the orthodontic brackets when self-etch adhesive systems are used, while it could be applied before bonding when conventional adhesive systems are used. 
ICON resin infiltrate, on the other hand, could be used before bonding with either of the two adhesive systems, but removal of a large amount of adhesive remnant would be needed.", "Based on the above findings, we conclude the following: Overall, the SBS was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding in the three major groups. Significantly lower SBS was recorded when Clinpro was used before bonding using the self-etching adhesive system. Higher ARI scores were more frequent in the ICON group, while lower ARI scores were more frequent in the Clinpro and control groups. The adhesive remnant was not different between the self-etching and the conventional etching subgroups.\nOverall, the SBS was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding in the three major groups.\nSignificantly lower SBS was recorded when Clinpro was used before bonding using the self-etching adhesive system.\nHigher ARI scores were more frequent in the ICON group, while lower ARI scores were more frequent in the Clinpro and control groups. The adhesive remnant was not different between the self-etching and the conventional etching subgroups." ]
[ "introduction", "materials|methods", "results", "discussion", "conclusion" ]
[ "Enamel protective agents", "Shear bond strength", "ARI scores" ]
Background: Enamel demineralization and white spot lesions associated with orthodontic fixed appliances are among the greatest challenges faced by clinicians at the end of the orthodontic treatment, not only for esthetic reasons but also because this subsurface demineralization represents the first stage of caries formation [1-4]. Different methods have been studied, all aiming to reduce enamel demineralization during orthodontic treatment without compromising the bond strength of the orthodontic brackets. The most common method was the use of fluoride-containing mouth rinses, gels, and toothpastes [5-7]; however, studies found a significant association between patient compliance with the rinsing program advised by the clinician and the reduction in the development of white spot lesions [8]. It was found that with only standardized general prophylactic measures, new white spot lesions developing on the maxillary front teeth during orthodontic treatment were seen in 60.9% of the patients [9]. Preventive measures that do not depend on the patient’s compliance have been developed and gained popularity to solve the problem of demineralization. These included the use of glass ionomer cement [10,11], topical applications of preventive agents such as fluoride and casein phosphopeptide-amorphous calcium phosphate [12,13], antibacterial agents incorporated in the adhesive resin [14,15], fluoride-releasing adhesives [16,17], caries infiltration resins [18,19], laser irradiation [20,21], bioactive glass-containing adhesives [22], and enamel deproteinizing agents [23]. The current study focused on two preventive agents, Clinpro and ICON. Clinpro is a fluoridated varnish containing 5% sodium fluoride. Fluoride was found to be effective in reducing the development of white spot lesions associated with fixed orthodontic treatment [16,24]. Also, ICON resin infiltration was found to decrease the dissolution of enamel and so limit the appearance of white spot lesions [25]. 
When these materials should be applied to best reduce white spot lesions around the orthodontic brackets is a worthwhile question. These preventive agents could be applied after bonding the orthodontic brackets, but this may not always be easy, especially where there are severely crowded or partially erupted teeth. The other option is to apply these materials before bonding the orthodontic brackets, but in that case the preventive agents could affect the shear bond strength and/or the amount of adhesive left on the teeth after debonding of the orthodontic brackets upon treatment completion. The objective of this study was to evaluate the effect of using the two enamel protective agents before bonding on the shear bond strength of orthodontic brackets bonded with conventional and self-etch adhesive systems. Methods: This in vitro study used 60 extracted human upper premolars stored in an aqueous solution of thymol (0.1% wt/vol). Teeth were extracted as part of orthodontic treatment and collected for research use. To calculate the sample size, Epicalc 2000 software version 1.02 (Brixton Books, Brixton, UK) was used; the sample size was found to be ten specimens per group based on 80% power and a 95% confidence interval. The teeth were fixed in self-curing acrylic resin placed in flexible molds, with the roots embedded in the acrylic and the crown exposed and oriented perpendicularly to the bottom of the mold (Figure 1 shows a typically mounted specimen ready for SBS testing). Two types of enamel protective agents were used in the current study: ICON (DMG, Hamburg, Germany) and Clinpro (3M Unitek, Monrovia, CA, USA). The two adhesive systems used in this study were Transbond XT light cure adhesive and Transbond Plus Self Etching Primer (3M Unitek, Monrovia, CA, USA), and Transbond XT light cure adhesive, Transbond XT primer, and 37% phosphoric acid (3M Unitek, Monrovia, CA, USA). 
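The sample-size calculation described above (ten specimens per group at 80% power and a 95% confidence level, computed with Epicalc 2000) can be approximated with the standard normal-approximation formula for comparing two group means. The standardized effect size used below is a hypothetical illustrative value; the paper does not report the effect size entered into Epicalc.

```python
from statistics import NormalDist


def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample mean comparison."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided alpha = 0.05 -> 1.96
    z_beta = z.inv_cdf(power)           # power = 0.80 -> 0.84
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2


# A hypothetical standardized effect size of about 1.3 lands near 10 per group.
print(round(n_per_group(1.3)))
```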
All materials were used according to the manufacturers’ instructions. The sample was divided into three major groups: group 1 used ICON before bonding the orthodontic brackets, group 2 used Clinpro before bonding, and group 3 was a control group with no enamel protective agent used. Each group was divided into two subgroups; in the first one, orthodontic brackets were bonded with a self-etching adhesive system, and in the second one, a conventional adhesive system was used. Premolar stainless steel brackets (Equilibrium 2 Roth prescription, 0.022 in. slot size, Dentaurum Orthodontics, Ispringen, Germany) were used. The buccal surface of each tooth was cleaned with non-fluoride oil-free pumice paste using a nylon brush attached to a slow-speed handpiece for 5 s, and then the tooth was rinsed with water for 10 s and dried with an oil-free air spray. Brackets were bonded to the teeth according to the manufacturer’s instructions for the adhesive system and stored in distilled water at 37°C until testing. Bracket debonding was performed 72 h after bonding in a material testing unit (model no 5500, Instron Corp, Canton, MA, USA) with an occluso-gingival load applied to the bracket base. The shearing rod was adjusted before each test so that the shearing blade was parallel to the base of the bracket, contacting it in a reproducible way. The shear force was applied to the bracket by lowering the shearing rod perpendicularly in the gingival direction, producing a shear force at the bracket-enamel interface (Figure 1). The crosshead speed was 2.0 mm/min, and the failure load in newtons was divided by the bracket base bonding area of 10.90 mm2 to calculate the shear bond strength in MPa. The adhesive remnant index (ARI) and failure site assessments were completed immediately after each shear bond strength test under ×20 magnification [26]. 
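The conversion from failure load to shear bond strength described above is simple arithmetic (stress = force / area, with MPa = N/mm²). The 230 N load in the example below is a hypothetical value chosen for illustration, not a measurement from the study.

```python
BRACKET_BASE_AREA_MM2 = 10.90  # bracket base bonding area reported in the study


def shear_bond_strength_mpa(failure_load_n):
    """Stress in MPa (N/mm^2) from a debonding failure load in newtons."""
    return failure_load_n / BRACKET_BASE_AREA_MM2


# e.g. a hypothetical 230 N failure load corresponds to about 21.1 MPa
print(round(shear_bond_strength_mpa(230.0), 1))  # → 21.1
```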
The ARI evaluation used the 4-point scale of Årtun and Bergland [27] where 0 indicates no adhesive left on the tooth surface, implying that bond fracture occurred at the resin/enamel interface; 1 indicates less than half the resin left on the tooth surface, implying that bond fracture occurred predominantly at the resin/enamel interface; 2 indicates more than half the resin left on the tooth surface, implying that bond fracture occurred predominantly at the bracket/resin interface; and 3 indicates all resin left on the tooth surface, with a distinct impression of the bracket base, implying that bond fracture occurred at the bracket/resin interface. Descriptive statistics, including mean, standard deviation, and minimum and maximum values of the shear bond strength, were calculated for each of the adhesive systems tested. An analysis of variance (ANOVA) test followed by an LSD post hoc multiple-comparison test was used to compare the groups. A Kruskal-Wallis test was used in conjunction with a Mann-Whitney test to compare the differences in the ARI scores between the groups. Significance for all statistical tests was at P ≤ .05. Statistics were carried out using Statistical Package for Social Sciences (SPSS Inc, Chicago, IL, USA) program version 10. Results: The descriptive statistics of the SBS of each group are presented in Table 1. The one-way ANOVA (Table 1) indicated significant differences in the shear bond strength of the different groups (P = .000). When using Clinpro before bonding with SEP and Transbond XT, the SBS was significantly less than in the other groups: the two control groups, the conventional adhesive group (P = .000) and the SEP group (P = .001); the two ICON groups, the conventional adhesive (P = .001) and the SEP group (P = .015); and Clinpro with the conventional adhesive system (P = .000). 
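The statistical workflow described in the Methods above (one-way ANOVA with post hoc comparisons for SBS; Kruskal-Wallis with Mann-Whitney follow-up for ordinal scores) can be sketched with scipy. The data below are simulated from three of the reported (mean, SD) pairs, since per-specimen values are not published, and scipy's tests stand in for the SPSS procedures actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated SBS values (MPa), n = 10 per group, drawn from three of the
# reported (mean, SD) pairs -- illustrative only, not the study's raw data.
groups = [rng.normal(m, s, 10) for m, s in [(20.2, 4.6), (21.3, 6.6), (11.2, 3.5)]]

f_stat, p_anova = stats.f_oneway(*groups)                 # one-way ANOVA
h_stat, p_kw = stats.kruskal(*groups)                     # Kruskal-Wallis
u_stat, p_mw = stats.mannwhitneyu(groups[0], groups[2])   # pairwise follow-up

print(f"ANOVA p={p_anova:.4f}, Kruskal-Wallis p={p_kw:.4f}, Mann-Whitney p={p_mw:.4f}")
```

In practice a significant omnibus test (ANOVA or Kruskal-Wallis) is followed by the pairwise comparisons, as in the study's LSD and Mann-Whitney follow-ups.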
When using Clinpro before bonding with the conventional adhesive, the bond strength was similar to that of the other groups but significantly higher than the SBS when using ICON before bonding with SEP and Transbond XT adhesive.

Table 1. Descriptive statistics of the in vitro shear bond strength (MPa)

Group | Number | Mean ± SD | Minimum | Maximum
Transbond XT + ICON + H3PO4 (a,d) | 10 | 20.2 ± 4.6 | 11.0 | 28.6
Transbond XT + ICON + SEP (b,d) | 10 | 17.6 ± 4.1 | 13.3 | 23.7
Transbond XT + Clinpro + H3PO4 (a) | 10 | 21.3 ± 6.6 | 13.4 | 34.2
Transbond XT + Clinpro + SEP (c) | 10 | 11.2 ± 3.5 | 5.6 | 15.0
Transbond XT + primer + H3PO4 (a,d) | 10 | 21.1 ± 7.5 | 11.2 | 34.7
Transbond XT + SEP (a,d) | 10 | 20.2 ± 4.0 | 14.0 | 25.9

F = 6.07; P = .000. Mean values sharing a superscript letter are not significantly different at P ≤ .05.

The results of the Kruskal-Wallis test showed that the ARI scores (Table 2) were significantly different (P = .000) between the groups. The Mann-Whitney test showed no difference in the ARI scores between self-etching and conventional etching groups when using ICON (P = .166) or Clinpro (P = .802), as well as when bonding to untreated enamel in the control group (P = .751). 
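As a rough consistency check, the one-way ANOVA F statistic can be reconstructed from the summary values in Table 1 (six groups, n = 10 each). Because the published means and SDs are rounded, this reconstruction only approximates the reported F = 6.07.

```python
# Rounded means and SDs from Table 1, n = 10 specimens per group
means = [20.2, 17.6, 21.3, 11.2, 21.1, 20.2]
sds = [4.6, 4.1, 6.6, 3.5, 7.5, 4.0]
n, k = 10, len(means)

grand_mean = sum(means) / k  # groups are equal-sized
ss_between = n * sum((m - grand_mean) ** 2 for m in means)
ss_within = (n - 1) * sum(s ** 2 for s in sds)
f = (ss_between / (k - 1)) / (ss_within / (k * n - k))
print(round(f, 2))  # → 5.38, in the ballpark of the reported F = 6.07
```

The gap between 5.38 and 6.07 is what one would expect from working with values rounded to one decimal place rather than the raw per-specimen data.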
Results of the one-way ANOVA also showed that the type of preventive agent used on the enamel significantly influenced the distribution of the ARI scores; there was a significant difference depending on whether ICON or Clinpro was used before bonding with SEP (P = .005) or with phosphoric acid etching (P = .000).

Table 2. Frequencies of the ARI scores for the two groups

Group | Number | ARI 0 | ARI 1 | ARI 2 | ARI 3
Transbond XT + ICON + H3PO4 (a) | 10 | 0 | 1 | 1 | 8
Transbond XT + ICON + SEP (a) | 10 | 0 | 3 | 2 | 5
Transbond XT + Clinpro + H3PO4 (b) | 10 | 5 | 3 | 2 | 0
Transbond XT + Clinpro + SEP (b) | 10 | 6 | 2 | 1 | 1
Transbond XT + primer + H3PO4 (c) | 10 | 2 | 4 | 2 | 2
Transbond XT + SEP (c) | 10 | 3 | 4 | 0 | 3

Chi-square = 22.77; P = .000. ARI score 0 indicates no adhesive left on the tooth surface, 1 indicates less than half the resin left on the tooth surface, 2 indicates more than half the resin left on the tooth surface, and 3 indicates all resin left on the tooth surface, with a distinct impression of the bracket base. Scores sharing a superscript letter are not significantly different at P ≤ .05.

Discussion: The lowest SBS was recorded with the samples treated with Clinpro before bonding the orthodontic brackets; the SBS in this group was significantly lower than the SBS in the other five groups. This could be attributed to the acid resistance that the outer enamel layer acquires from the fluoride content of Clinpro, an effect that may be especially significant when self-etching primers are used for bonding because they etch more superficially than the conventionally used phosphoric acid. 
Previous scanning electron microscope (SEM) studies [28-30] indicated that although self-etch priming agents have the potential to etch the enamel surface, the etching pattern is shallower than that produced by phosphoric acid. A chemical bonding capacity through the interaction between some functional monomers and the calcium of residual hydroxyapatite may contribute favorably to the bonding effectiveness [31-33], but fluoride affects the enamel surface, rendering it more resistant to demineralization. Fluoride in low concentrations favors the formation of fluoro-hydroxyapatite, which is less susceptible to acidic solubility than hydroxyapatite [34,35]. Therefore, it is recommended to apply these preventive agents after bonding the brackets when self-etch adhesive systems are used. In the current study, using the caries infiltrant (ICON) before bonding did not significantly change the bond strength compared to the other groups, although the bond strength was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding. This was also observed in the control group; shear bond strength was lower when self-etching primer was used than when phosphoric acid was used, but this difference was statistically insignificant. Previous studies found a significant increase in the shear bond strength of Transbond XT adhesive with phosphoric acid and Transbond XT primer when ICON was used before bonding orthodontic brackets to sound enamel [36] or even to demineralized enamel [37]. The shear bond strength was also increased when Transbond Plus Self Etching Primer was used instead of the conventional phosphoric acid etching to sound enamel [36]. The shear bond strengths recorded in this study were sufficient for clinical use in all six groups, representing different combinations of adhesive systems and enamel protective agents as well as the control groups. 
The average range of bond strength was suggested by Reynolds [38] to be 5.9 to 7.8 MPa for clinical and 4.9 MPa for laboratory performances. In vitro and in vivo studies of SBS are both needed; in vitro measurements of shear bond strength provide useful information about the bonding efficiency of different types of materials, but the actual performance of these materials can only be evaluated in the environment where they were intended to function [39]. Unfortunately, no one variable or combination of variables that can be measured in the laboratory is perfectly predictive of what might occur when the bonding adhesive is used in the demanding environment of the oral cavity [40-42]. Therefore, in vitro studies are mainly important as a preliminary guide to the clinician, while in vivo studies are needed for evidence-based practice. The distribution of the ARI scores was assessed in this study under ×20 magnification [26]. Although different quantitative and qualitative methods have been used to assess the ARI scores after orthodontic bracket debonding, and quantitative methods are preferable when accurate evaluation of the adhesive remnant is required [43], the ARI score evaluation system has proved to be of value in studies of orthodontic adhesive systems. The ARI scoring system is a quick and simple method that needs no special equipment. Although SEM evaluation might be more accurate than evaluation under ×10 or ×20 magnification, it is harder to relate to clinical applications [26]. The distribution of the ARI scores differed among the three major groups. In the ICON group, higher ARI scores tended to be more frequent in both the self-etching and conventional etching subgroups, while in the Clinpro and control groups, less adhesive remnant tended to be left on the enamel surface after debonding in both subgroups. This could be attributed to the chemical bond between the resin infiltrant and the adhesive resin. 
However, the adhesive remaining on the enamel surface after debonding was not different in the three major groups between the self-etching subgroup and the conventional etching subgroup, indicating a similar effect of the enamel protective material with the two types of adhesive systems. These results differed from those of the Naidu et al. [36] study, which found that using ICON as preconditioning before bonding orthodontic brackets to sound enamel did not affect the ARI score distribution compared to the control groups using Transbond XT primer and Transbond PSEP. The site of bond failure was found not to be a reflection of bond strength; that is, the failure site did not reflect different bond strengths at different interfaces [44,45]. On the other hand, a variety of factors could affect bond strength, including the type of enamel conditioner, acid concentration, length of etching time, composition of the adhesive, bracket base design, bracket material, oral environment, skill of the clinician, and time of light exposure in the case of a light-cure approach [46]. Applying the results of this study clinically, it would be preferable to apply Clinpro after bonding the orthodontic brackets when self-etch adhesive systems are used, while it could be applied before bonding when conventional adhesive systems are used. ICON resin infiltrate, on the other hand, could be used before bonding with either of the two adhesive systems, but removal of a large amount of adhesive remnant would be needed. Conclusions: Based on the above findings, we conclude the following: Overall, the SBS was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding in the three major groups. Significantly lower SBS was recorded when Clinpro was used before bonding using the self-etching adhesive system. Higher ARI scores were more frequent in the ICON group, while lower ARI scores were more frequent in the Clinpro and control groups. 
The adhesive remnant was not different between the self-etching and the conventional etching subgroups. Overall, the SBS was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding in the three major groups. Significantly lower SBS was recorded when Clinpro was used before bonding using the self-etching adhesive system. Higher ARI scores were more frequent in the ICON group, while lower ARI scores were more frequent in the Clinpro and control groups. The adhesive remnant was not different between the self-etching and the conventional etching subgroups.
Background: This paper aimed to study the effect of two enamel protective agents on the shear bond strength (SBS) of orthodontic brackets bonded with conventional and self-etching primer (SEP) adhesive systems. Methods: The two protective agents used were resin infiltrate (ICON) and Clinpro; the two adhesive systems used were a self-etching primer system (Transbond Plus Self Etching Primer + Transbond XT adhesive) and a conventional adhesive system (37% phosphoric acid etch + Transbond XT primer + Transbond XT adhesive). Sixty premolars divided into three major groups and six subgroups were included. The shear bond strength was tested 72 h after bracket bonding. Adhesive remnant index (ARI) scores were assessed. Statistical analysis consisted of a one-way ANOVA for the SBS and a Kruskal-Wallis test followed by a Mann-Whitney test for the ARI scores. Results: In the control group, the mean SBS when using the conventional adhesive was 21.1 ± 7.5 MPa, while when using SEP it was 20.2 ± 4.0 MPa. When ICON was used with the conventional adhesive system, the SBS was 20.2 ± 5.6 MPa, while with SEP it was 17.6 ± 4.1 MPa. When Clinpro was used with the conventional adhesive system, the SBS was 24.3 ± 7.6 MPa, while with SEP it was 11.2 ± 3.5 MPa. Significant differences were found in the shear bond strength of the different groups (P = .000) as well as in the distribution of the ARI scores (P = .000). Conclusions: The type of the adhesive system used to bond the orthodontic brackets, either conventional or self-etching primer, influenced the SBS, while the enamel protective material influenced the adhesive remnant on the enamel surface after debonding.
Background: Enamel demineralization and white spot lesions associated with orthodontic fixed appliances are among the greatest challenges faced by clinicians at the end of the orthodontic treatment, not only for esthetic reasons but also because this subsurface demineralization represents the first stage of caries formation [1-4]. Different methods have been studied, all aiming to reduce enamel demineralization during orthodontic treatment without compromising the bond strength of the orthodontic brackets. The most common method was the use of fluoride-containing mouth rinses, gels, and toothpastes [5-7]; however, studies found a significant association between patient compliance with the rinsing program advised by the clinician and the reduction in the development of white spot lesions [8]. It was found that with only standardized general prophylactic measures, new white spot lesions developing on the maxillary front teeth during orthodontic treatment were seen in 60.9% of the patients [9]. Preventive measures that do not depend on the patient’s compliance have been developed and gained popularity to solve the problem of demineralization. These included the use of glass ionomer cement [10,11], topical applications of preventive agents such as fluoride and casein phosphopeptide-amorphous calcium phosphate [12,13], antibacterial agents incorporated in the adhesive resin [14,15], fluoride-releasing adhesives [16,17], caries infiltration resins [18,19], laser irradiation [20,21], bioactive glass-containing adhesives [22], and enamel deproteinizing agents [23]. The current study focused on two preventive agents, Clinpro and ICON. Clinpro is a fluoridated varnish containing 5% sodium fluoride. Fluoride was found to be effective in reducing the development of white spot lesions associated with fixed orthodontic treatment [16,24]. Also, ICON resin infiltration was found to decrease the dissolution of enamel and so limit the appearance of white spot lesions [25]. 
When these materials should be applied to best reduce white spot lesions around the orthodontic brackets is a worthwhile question. These preventive agents could be applied after bonding the orthodontic brackets, but this may not always be easy, especially where there are severely crowded or partially erupted teeth. The other option is to apply these materials before bonding the orthodontic brackets, but in that case the preventive agents could affect the shear bond strength and/or the amount of adhesive left on the teeth after debonding of the orthodontic brackets upon treatment completion. The objective of this study was to evaluate the effect of using the two enamel protective agents before bonding on the shear bond strength of orthodontic brackets bonded with conventional and self-etch adhesive systems. Conclusions: Based on the above findings, we conclude the following: Overall, the SBS was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding in the three major groups. Significantly lower SBS was recorded when Clinpro was used before bonding using the self-etching adhesive system. Higher ARI scores were more frequent in the ICON group, while lower ARI scores were more frequent in the Clinpro and control groups. The adhesive remnant was not different between the self-etching and the conventional etching subgroups. Overall, the SBS was lower when self-etching primer was used than when phosphoric acid was used for enamel preparation before bonding in the three major groups. Significantly lower SBS was recorded when Clinpro was used before bonding using the self-etching adhesive system. Higher ARI scores were more frequent in the ICON group, while lower ARI scores were more frequent in the Clinpro and control groups. The adhesive remnant was not different between the self-etching and the conventional etching subgroups.
Background: This paper aimed to study the effect of two enamel protective agents on the shear bond strength (SBS) of orthodontic brackets bonded with conventional and self-etching primer (SEP) adhesive systems. Methods: The two protective agents used were resin infiltrate (ICON) and Clinpro; the two adhesive systems used were a self-etching primer system (Transbond Plus Self Etching Primer + Transbond XT adhesive) and a conventional adhesive system (37% phosphoric acid etch + Transbond XT primer + Transbond XT adhesive). Sixty premolars divided into three major groups and six subgroups were included. The shear bond strength was tested 72 h after bracket bonding. Adhesive remnant index (ARI) scores were assessed. Statistical analysis consisted of a one-way ANOVA for the SBS and a Kruskal-Wallis test followed by a Mann-Whitney test for the ARI scores. Results: In the control group, the mean SBS when using the conventional adhesive was 21.1 ± 7.5 MPa, while when using SEP it was 20.2 ± 4.0 MPa. When ICON was used with the conventional adhesive system, the SBS was 20.2 ± 5.6 MPa, while with SEP it was 17.6 ± 4.1 MPa. When Clinpro was used with the conventional adhesive system, the SBS was 24.3 ± 7.6 MPa, while with SEP it was 11.2 ± 3.5 MPa. Significant differences were found in the shear bond strength of the different groups (P = .000) as well as in the distribution of the ARI scores (P = .000). Conclusions: The type of the adhesive system used to bond the orthodontic brackets, either conventional or self-etching primer, influenced the SBS, while the enamel protective material influenced the adhesive remnant on the enamel surface after debonding.
3,376
347
5
[ "adhesive", "bonding", "etching", "enamel", "bond", "groups", "self", "clinpro", "ari", "xt" ]
[ "test", "test" ]
null
[CONTENT] Enamel protective agents | Shear bond strength | ARI scores [SUMMARY]
null
[CONTENT] Enamel protective agents | Shear bond strength | ARI scores [SUMMARY]
[CONTENT] Enamel protective agents | Shear bond strength | ARI scores [SUMMARY]
[CONTENT] Enamel protective agents | Shear bond strength | ARI scores [SUMMARY]
[CONTENT] Enamel protective agents | Shear bond strength | ARI scores [SUMMARY]
[CONTENT] Acid Etching, Dental | Adhesiveness | Bicuspid | Composite Resins | Dental Bonding | Dental Enamel | Dental Stress Analysis | Humans | Materials Testing | Orthodontic Brackets | Phosphoric Acids | Pit and Fissure Sealants | Protective Agents | Resin Cements | Resins, Synthetic | Shear Strength | Stainless Steel | Stress, Mechanical | Surface Properties | Temperature | Water [SUMMARY]
null
[CONTENT] Acid Etching, Dental | Adhesiveness | Bicuspid | Composite Resins | Dental Bonding | Dental Enamel | Dental Stress Analysis | Humans | Materials Testing | Orthodontic Brackets | Phosphoric Acids | Pit and Fissure Sealants | Protective Agents | Resin Cements | Resins, Synthetic | Shear Strength | Stainless Steel | Stress, Mechanical | Surface Properties | Temperature | Water [SUMMARY]
[CONTENT] Acid Etching, Dental | Adhesiveness | Bicuspid | Composite Resins | Dental Bonding | Dental Enamel | Dental Stress Analysis | Humans | Materials Testing | Orthodontic Brackets | Phosphoric Acids | Pit and Fissure Sealants | Protective Agents | Resin Cements | Resins, Synthetic | Shear Strength | Stainless Steel | Stress, Mechanical | Surface Properties | Temperature | Water [SUMMARY]
[CONTENT] Acid Etching, Dental | Adhesiveness | Bicuspid | Composite Resins | Dental Bonding | Dental Enamel | Dental Stress Analysis | Humans | Materials Testing | Orthodontic Brackets | Phosphoric Acids | Pit and Fissure Sealants | Protective Agents | Resin Cements | Resins, Synthetic | Shear Strength | Stainless Steel | Stress, Mechanical | Surface Properties | Temperature | Water [SUMMARY]
[CONTENT] Acid Etching, Dental | Adhesiveness | Bicuspid | Composite Resins | Dental Bonding | Dental Enamel | Dental Stress Analysis | Humans | Materials Testing | Orthodontic Brackets | Phosphoric Acids | Pit and Fissure Sealants | Protective Agents | Resin Cements | Resins, Synthetic | Shear Strength | Stainless Steel | Stress, Mechanical | Surface Properties | Temperature | Water [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] adhesive | bonding | etching | enamel | bond | groups | self | clinpro | ari | xt [SUMMARY]
null
[CONTENT] adhesive | bonding | etching | enamel | bond | groups | self | clinpro | ari | xt [SUMMARY]
[CONTENT] adhesive | bonding | etching | enamel | bond | groups | self | clinpro | ari | xt [SUMMARY]
[CONTENT] adhesive | bonding | etching | enamel | bond | groups | self | clinpro | ari | xt [SUMMARY]
[CONTENT] adhesive | bonding | etching | enamel | bond | groups | self | clinpro | ari | xt [SUMMARY]
[CONTENT] orthodontic | lesions | white spot lesions | white spot | white | spot lesions | spot | agents | treatment | orthodontic brackets [SUMMARY]
null
[CONTENT] xt | 000 | left tooth | indicates | left tooth surface | tooth surface | surface indicates | tooth surface indicates | left tooth surface indicates | h3po4 [SUMMARY]
[CONTENT] lower | etching | self etching | showed | self | sbs | ari | groups | ari scores | scores [SUMMARY]
[CONTENT] etching | adhesive | bonding | orthodontic | groups | bond | enamel | xt | ari | self [SUMMARY]
[CONTENT] etching | adhesive | bonding | orthodontic | groups | bond | enamel | xt | ari | self [SUMMARY]
[CONTENT] two | SEP [SUMMARY]
null
[CONTENT] 21.1 ± | 7.5 | SEP | 20.2 ± | 4.0 ||| ICON | 20.2 ± | 5.6 | SEP | 17.6 ± | 4.1 ||| 24.3 ± | 7.6 | SEP | 11.2 | 3.5 ||| ARI | .000 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] two | SEP ||| two | Clinpro | two | Transbond Plus Self Etching Primer | Transbond XT | 37% | Transbond XT | Transbond XT ||| Sixty | three | six ||| 72 ||| ARI ||| one | Kruskal-Wallis | Mann-Whitney | ARI ||| ||| 21.1 ± | 7.5 | SEP | 20.2 ± | 4.0 ||| ICON | 20.2 ± | 5.6 | SEP | 17.6 ± | 4.1 ||| 24.3 ± | 7.6 | SEP | 11.2 | 3.5 ||| ARI | .000 ||| [SUMMARY]
[CONTENT] two | SEP ||| two | Clinpro | two | Transbond Plus Self Etching Primer | Transbond XT | 37% | Transbond XT | Transbond XT ||| Sixty | three | six ||| 72 ||| ARI ||| one | Kruskal-Wallis | Mann-Whitney | ARI ||| ||| 21.1 ± | 7.5 | SEP | 20.2 ± | 4.0 ||| ICON | 20.2 ± | 5.6 | SEP | 17.6 ± | 4.1 ||| 24.3 ± | 7.6 | SEP | 11.2 | 3.5 ||| ARI | .000 ||| [SUMMARY]
Association between serum perfluorooctanoic acid (PFOA) and thyroid disease in the U.S. National Health and Nutrition Examination Survey.
20089479
Perfluorooctanoic acid (PFOA, also known as C8) and perfluorooctane sulfonate (PFOS) are stable compounds with many industrial and consumer uses. Their persistence in the environment plus toxicity in animal models has raised concern over low-level chronic exposure effects on human health.
BACKGROUND
Analyses of PFOA/PFOS versus disease status in the National Health and Nutrition Examination Survey (NHANES) for 1999-2000, 2003-2004, and 2005-2006 included 3,974 adults with measured concentrations for perfluorinated chemicals. Regression models were adjusted for age, sex, race/ethnicity, education, smoking status, body mass index, and alcohol intake.
METHODS
The NHANES-weighted prevalence of reporting any thyroid disease was 16.18% (n = 292) in women and 3.06% (n = 69) in men; prevalence of current thyroid disease with related medication was 9.89% (n = 163) in women and 1.88% (n = 46) in men. In fully adjusted logistic models, women with PFOA ≥ 5.7 ng/mL [fourth (highest) population quartile] were more likely to report current treated thyroid disease [odds ratio (OR) = 2.24; 95% confidence interval (CI), 1.38-3.65; p = 0.002] compared with PFOA ≤ 4.0 ng/mL (quartiles 1 and 2); we found a similar, near-significant trend in men (OR = 2.12; 95% CI, 0.93-4.82; p = 0.073). For PFOS, in men we found a similar association for those with PFOS ≥ 36.8 ng/mL (quartile 4) versus ≤ 25.5 ng/mL (quartiles 1 and 2: OR for treated disease = 2.68; 95% CI, 1.03-6.98; p = 0.043); in women this association was not significant.
RESULTS
Higher concentrations of serum PFOA and PFOS are associated with current thyroid disease in the U.S. general adult population. More work is needed to establish the mechanisms involved and to exclude confounding and pharmacokinetic explanations.
CONCLUSIONS
[ "Adult", "Aged", "Alkanesulfonic Acids", "Animals", "Caprylates", "Cross-Sectional Studies", "Endocrine Disruptors", "Environmental Exposure", "Environmental Pollutants", "Female", "Fluorocarbons", "Health Surveys", "Humans", "Male", "Middle Aged", "Risk Assessment", "Thyroid Diseases", "United States", "Young Adult" ]
2866686
null
null
Statistical analysis
NHANES uses a complex cluster sample design with some demographic groups (including less-privileged socioeconomic groups and Mexican Americans) oversampled to ensure adequate representation. Prevalence estimates and models were therefore survey-weighted using the NHANES primary sampling unit, strata, and population weights, unless otherwise stated. Multivariate logistic regression modeling was used to estimate odds ratios (ORs) of thyroid disease outcomes by quartile of PFOA and PFOS concentrations, and associations of other physician-diagnosed diseases. Because thyroid disease prevalence is markedly higher in women, we used sex-specific models. Because the distribution of PFC concentrations is skewed (with most people having relatively low exposures and with considerably more variance at the higher exposure end), all available data were pooled, and PFOA and PFOS concentrations were divided into population-weighted quartiles. Using the Hsieh method (Hsieh et al. 1998), our estimated power to detect an association of OR ≥ 1.8 with current treated disease comparing the top PFOA quartile (Q4) with bottom quartile (Q1) is 67% in women. Combining the lowest two quartiles (Q1 and Q2) into a larger control group provides 80% power. The corresponding minimum detectable effect size in men is OR > 2.9. Assumptions for the power calculations include a significance level of 5% and a multiple correlation coefficient of 0.2 relating PFOA exposure to potential confounders. 
Models were adjusted for the following potential confounding factors: year of NHANES study; age; sex; race/ethnicity, from self-description and categorized into Mexican American, other Hispanic, non-Hispanic white, non-Hispanic black, and other race (including multiracial); education, categorized into less than high school, high school diploma (including GED), more than high school, and unknown education; smoking (from self-reported status asked for those ≥ 20 years of age), categorized into never smoked, former smoker, smoking some days, smoking every day, and unknown smoking status; body mass index (BMI; weight in kilograms divided by the square of measured height in meters), categorized into underweight (BMI < 18.5), recommended weight (BMI = 18.5–24.9), overweight (BMI = 25.0–29.9), obese (BMI = 30.0–34.9), and unknown BMI; and alcohol consumption (in adults ≥ 20 years of age, based on responses to the question “In the past 12 months, on those days that you drank alcoholic beverages, on the average day, how many drinks did you have?”), categorized into 0, 1, 2, 3, 4, and ≥ 5 drinks per day, and unknown alcohol consumption. Regression analyses were conducted using STATA/SE (version 10.1; StataCorp LP, College Station, TX, USA).
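The population-weighted quartile construction described above can be illustrated with a minimal sketch. This is a simplification on synthetic data: the real NHANES design also involves strata and primary sampling units, which are not modeled here, and the function name and toy values are illustrative only.

```python
import numpy as np

def weighted_quartile_cuts(values, weights):
    """Return the three cut points that split a sample into
    population-weighted quartiles (Q1..Q4)."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w) / np.sum(w)  # weighted empirical CDF
    # first value at which the weighted CDF reaches each quartile boundary
    return [float(v[np.searchsorted(cum, q)]) for q in (0.25, 0.50, 0.75)]

# Toy example: eight exposure values with equal survey weights
vals = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
wts = [1, 1, 1, 1, 1, 1, 1, 1]
print(weighted_quartile_cuts(vals, wts))
```

With unequal weights, the cut points shift toward the values carried by heavily weighted respondents, which is the point of using population weights rather than raw sample quartiles.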
Results
Serum concentrations of PFOA were available for n = 3,974 individuals ≥ 20 years of age from NHANES waves 1999–2000 (n = 1,040), 2003–2004 (n = 1,454), and 2005–2006 (n = 1,480). In analyses adjusted for age, sex, NHANES wave, and ethnicity, mean levels of PFOA were higher in men than in women [by 0.76 ng/mL; 95% confidence interval (CI), 0.73–0.80; p < 0.0001], and we found significant differences between ethnic groups (Table 1). Individuals with more education had higher PFOA levels (highest vs. lowest education: 1.1 ng/mL difference; 95% CI, 1.03–1.19 ng/mL difference; p = 0.008). Increased alcohol consumption levels were also associated with higher PFOA concentrations (e.g., those having five or more drinks per day had mean PFOA levels 1.24 ng/mL higher than nondrinkers; 95% CI, 1.14–1.37 ng/mL difference; p < 0.0001). We found similar differences in PFOS concentrations: mean levels of PFOS were higher in men (p < 0.0001), with significant differences in levels between ethnic groups, and individuals with more education had higher PFOS levels (p = 0.008). Eight individuals did not answer questions about thyroid disease, so the sample size for this analysis was n = 3,966 (1,900 men and 2,066 women; Table 2). In women, overall (unweighted) reported (any) thyroid disease was n = 292, and the NHANES-weighted but unadjusted prevalence was 16.18%; in men, n = 69, and weighted prevalence, 3.06%. The study-weighted prevalence of current thyroid disease taking medication was necessarily lower (women, n = 163, 9.89%; men, n = 46, 1.88%). 
We computed population-weighted quartiles of PFOA and PFOS concentrations in men and women separately (Table 2). The highest quartile (Q4) of PFOA in women ranged from 5.7 to 123.0 ng/mL, and in men from 7.3 ng/mL to 45.9 ng/mL. Study-weighted but unadjusted prevalences of current thyroid disease taking related medication in women varied across the quartiles but with wide CIs: Q1 = 8.14% (95% CI, 5.75–10.53%), Q4 = 16.19% (95% CI, 11.74–20.62%); in men, unadjusted prevalence rates were far lower throughout (prevalence = 2.27% in Q1 and Q4). For PFOS the prevalence of treated thyroid disease ranged from 8.14% (Q1) to 12.55% (Q4) in women and from 1.85% to 3.89% in men. In logistic regression models adjusting for age, ethnicity, and study year (Table 3), we found associations between PFOA quartiles and both definitions of thyroid disease in women. For logistic models additionally adjusted for educational status, BMI, smoking status, and alcohol consumption, these associations remained significant; for example, comparing those with PFOA concentrations ≥ 5.7 ng/mL (Q4) versus ≤ 2.6 ng/mL (Q1), the OR for current thyroid disease on medication was 1.86 (95% CI, 1.12–3.09; p = 0.018). Comparing Q4 with the larger control group of PFOA ≤ 4.0 ng/mL (Q1 and Q2) the estimated OR for treated thyroid disease was 2.24 (95% CI, 1.38–3.65; p = 0.002). In men, we found a similar suggestive trend, but it narrowly missed significance: Comparing PFOA concentrations ≥ 7.3 ng/mL (Q4) versus ≤ 5.2 ng/mL (Q1 and Q2), the OR for treated disease was 2.12 (95% CI, 0.93–4.82; p = 0.073). For PFOS concentrations, in women ORs for disease trended in a similar direction but were far from significant. However, in men we found an association comparing those with PFOS concentrations ≥ 36.8 ng/mL (Q4) versus ≤ 25.5 ng/mL (Q1 and Q2): OR for treated disease was 2.68 (95% CI, 1.03–6.98; p = 0.043). 
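The odds ratios reported above come from adjusted, survey-weighted logistic models. As a simpler unadjusted illustration of what an OR and Wald 95% CI represent, both can be computed directly from a 2×2 table of counts; the cell counts below are hypothetical, not the NHANES data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/500 treated cases in Q4 vs 45/1000 in Q1+Q2
print(odds_ratio_ci(40, 460, 45, 955))
```

If the CI excludes 1.0, the unadjusted association is significant at the 5% level; the adjusted models in the paper play the same role while controlling for the listed confounders.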
Sensitivity analyses
For a sensitivity analysis, we computed a logistic regression model including both men and women, testing an interaction term between sex and PFOA levels for treated thyroid disease risk. The interaction term was not significant (p-value for interaction = 0.152). For a post hoc analysis, we examined associations between chemical concentration quartile and any of the other major disease categories covered in NHANES: arthritis, asthma, COPD, diabetes, heart disease, or liver disease. Combining men and women (to reduce multiple testing and because these diseases are less sex related), we found no significant associations for PFOA (Table 4) except for comparisons between the intermediate quartiles and Q1 for arthritis, but this did not reach significance in the top quartile. For PFOS, we found no “positive” associations between higher serum concentrations and higher prevalence of disease. We found one statistically significant “negative” association suggesting that people reporting having COPD may be less likely to be in the highest PFOA concentration quartile (OR = 0.58; 95% CI, 0.43–0.76; p = 0.0003).
Conclusions
Higher PFOA and PFOS concentrations are associated with thyroid disease (and being on thyroid-related medication) in representative NHANES samples of the U.S. general adult population. More work is needed to establish the mechanisms underlying this association and to exclude confounding and pharmacokinetic explanations.
[ "Methods", "Study population", "Assessment of PFOA/PFOS concentrations", "Disease outcomes", "Sensitivity analyses" ]
[ " Study population Data were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS). The study protocol for NHANES was approved by the NCHS Institutional Review Board.\nData were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS). The study protocol for NHANES was approved by the NCHS Institutional Review Board.\n Assessment of PFOA/PFOS concentrations Solid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007).\nSerum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults.\nSolid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). 
The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007).\nSerum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults.\n Disease outcomes In all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES.\nTo assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease.\nIn all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. 
Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES.\nTo assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease.\n Statistical analysis NHANES uses a complex cluster sample design with some demographic groups (including less-privileged socioeconomic groups and Mexican Americans) oversampled to ensure adequate representation. Prevalence estimates and models were therefore survey-weighted using the NHANES primary sampling unit, strata, and population weights, unless otherwise stated.\nMultivariate logistic regression modeling was used to estimate odds ratios (ORs) of thyroid disease outcomes by quartile of PFOA and PFOS concentrations, and associations of other physician-diagnosed diseases. Because thyroid disease prevalence is markedly higher in women, we used sex-specific models. 
Because the distribution of PFC concentrations is skewed (with most people having relatively low exposures and with considerably more variance at the higher exposure end), all available data were pooled, and PFOA and PFOS concentrations were divided into population-weighted quartiles. Using the Hsieh method (Hsieh et al. 1998), our estimated power to detect an association of OR ≥ 1.8 with current treated disease comparing the top PFOA quartile (Q4) with bottom quartile (Q1) is 67% in women. Combining the lowest two quartiles (Q1 and Q2) into a larger control group provides 80% power. The corresponding minimum detectable effect size in men is OR > 2.9. Assumptions for the power calculations include a significance level of 5% and a multiple correlation coefficient of 0.2 relating PFOA exposure to potential confounders.\nModels were adjusted for the following potential confounding factors: year of NHANES study; age; sex; race/ethnicity, from self-description and categorized into Mexican American, other Hispanic, non-Hispanic white, non-Hispanic black, and other race (including multiracial); education, categorized into less than high school, high school diploma (including GED), more than high school, and unknown education; smoking (from self-reported status asked for those ≥ 20 years of age), categorized into never smoked, former smoker, smoking some days, smoking every day, and unknown smoking status; body mass index (BMI; weight in kilograms divided by the square of measured height in meters), categorized into underweight (BMI < 18.5), recommended weight (BMI = 18.5–24.9), overweight (BMI = 25.0–29.9), obese (BMI = 30.0–34.9), and unknown BMI; and alcohol consumption (in adults ≥ 20 years of age, based on responses to the question “In the past 12 months, on those days that you drank alcoholic beverages, on the average day, how many drinks did you have?”), categorized into 0, 1, 2, 3, 4, and ≥ 5 drinks per day, and unknown alcohol consumption. 
Regression analyses were conducted using STATA/SE (version 10.1; StataCorp LP, College Station, TX, USA).\nNHANES uses a complex cluster sample design with some demographic groups (including less-privileged socioeconomic groups and Mexican Americans) oversampled to ensure adequate representation. Prevalence estimates and models were therefore survey-weighted using the NHANES primary sampling unit, strata, and population weights, unless otherwise stated.\nMultivariate logistic regression modeling was used to estimate odds ratios (ORs) of thyroid disease outcomes by quartile of PFOA and PFOS concentrations, and associations of other physician-diagnosed diseases. Because thyroid disease prevalence is markedly higher in women, we used sex-specific models. Because the distribution of PFC concentrations is skewed (with most people having relatively low exposures and with considerably more variance at the higher exposure end), all available data were pooled, and PFOA and PFOS concentrations were divided into population-weighted quartiles. Using the Hsieh method (Hsieh et al. 1998), our estimated power to detect an association of OR ≥ 1.8 with current treated disease comparing the top PFOA quartile (Q4) with bottom quartile (Q1) is 67% in women. Combining the lowest two quartiles (Q1 and Q2) into a larger control group provides 80% power. The corresponding minimum detectable effect size in men is OR > 2.9. 
Assumptions for the power calculations include a significance level of 5% and a multiple correlation coefficient of 0.2 relating PFOA exposure to potential confounders.\nModels were adjusted for the following potential confounding factors: year of NHANES study; age; sex; race/ethnicity, from self-description and categorized into Mexican American, other Hispanic, non-Hispanic white, non-Hispanic black, and other race (including multiracial); education, categorized into less than high school, high school diploma (including GED), more than high school, and unknown education; smoking (from self-reported status asked for those ≥ 20 years of age), categorized into never smoked, former smoker, smoking some days, smoking every day, and unknown smoking status; body mass index (BMI; weight in kilograms divided by the square of measured height in meters), categorized into underweight (BMI < 18.5), recommended weight (BMI = 18.5–24.9), overweight (BMI = 25.0–29.9), obese (BMI = 30.0–34.9), and unknown BMI; and alcohol consumption (in adults ≥ 20 years of age, based on responses to the question “In the past 12 months, on those days that you drank alcoholic beverages, on the average day, how many drinks did you have?”), categorized into 0, 1, 2, 3, 4, and ≥ 5 drinks per day, and unknown alcohol consumption. Regression analyses were conducted using STATA/SE (version 10.1; StataCorp LP, College Station, TX, USA).", "Data were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS). 
The study protocol for NHANES was approved by the NCHS Institutional Review Board.", "Solid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007).\nSerum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults.", "In all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. 
No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES.\nTo assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease.", "For a sensitivity analysis, we computed a logistic regression model including both men and women, testing an interaction term between sex and PFOA levels for treated thyroid disease risk. The interaction term was not significant (p-value for interaction = 0.152).\nFor a post hoc analysis, we examined associations between chemical concentration quartile and any of the other major disease categories covered in NHANES: arthritis, asthma, COPD, diabetes, heart disease, or liver disease. Combining men and women (to reduce multiple testing and because these diseases are less sex related), we found no significant associations for PFOA (Table 4) except for comparisons between the intermediate quartiles and Q1 for arthritis, but this did not reach significance in the top quartile.\nFor PFOS, we found no “positive” associations between higher serum concentrations and higher prevalence of disease. We found one statistically significant “negative” association suggesting that people reporting having COPD may be less likely to be in the highest PFOA concentration quartile (OR = 0.58; 95% CI, 0.43–0.76; p = 0.0003)." ]
[ "methods", "methods", null, null, null ]
[ "Methods", "Study population", "Assessment of PFOA/PFOS concentrations", "Disease outcomes", "Statistical analysis", "Results", "Sensitivity analyses", "Discussion", "Conclusions" ]
[ " Study population Data were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS). The study protocol for NHANES was approved by the NCHS Institutional Review Board.\nData were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS). The study protocol for NHANES was approved by the NCHS Institutional Review Board.\n Assessment of PFOA/PFOS concentrations Solid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007).\nSerum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults.\nSolid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). 
The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007).\nSerum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults.\n Disease outcomes In all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES.\nTo assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease.\nIn all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. 
Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES.\nTo assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease.\n Statistical analysis NHANES uses a complex cluster sample design with some demographic groups (including less-privileged socioeconomic groups and Mexican Americans) oversampled to ensure adequate representation. Prevalence estimates and models were therefore survey-weighted using the NHANES primary sampling unit, strata, and population weights, unless otherwise stated.\nMultivariate logistic regression modeling was used to estimate odds ratios (ORs) of thyroid disease outcomes by quartile of PFOA and PFOS concentrations, and associations of other physician-diagnosed diseases. Because thyroid disease prevalence is markedly higher in women, we used sex-specific models. 
Because the distribution of PFC concentrations is skewed (with most people having relatively low exposures and with considerably more variance at the higher exposure end), all available data were pooled, and PFOA and PFOS concentrations were divided into population-weighted quartiles. Using the Hsieh method (Hsieh et al. 1998), our estimated power to detect an association of OR ≥ 1.8 with current treated disease comparing the top PFOA quartile (Q4) with bottom quartile (Q1) is 67% in women. Combining the lowest two quartiles (Q1 and Q2) into a larger control group provides 80% power. The corresponding minimum detectable effect size in men is OR > 2.9. Assumptions for the power calculations include a significance level of 5% and a multiple correlation coefficient of 0.2 relating PFOA exposure to potential confounders.\nModels were adjusted for the following potential confounding factors: year of NHANES study; age; sex; race/ethnicity, from self-description and categorized into Mexican American, other Hispanic, non-Hispanic white, non-Hispanic black, and other race (including multiracial); education, categorized into less than high school, high school diploma (including GED), more than high school, and unknown education; smoking (from self-reported status asked for those ≥ 20 years of age), categorized into never smoked, former smoker, smoking some days, smoking every day, and unknown smoking status; body mass index (BMI; weight in kilograms divided by the square of measured height in meters), categorized into underweight (BMI < 18.5), recommended weight (BMI = 18.5–24.9), overweight (BMI = 25.0–29.9), obese (BMI = 30.0–34.9), and unknown BMI; and alcohol consumption (in adults ≥ 20 years of age, based on responses to the question “In the past 12 months, on those days that you drank alcoholic beverages, on the average day, how many drinks did you have?”), categorized into 0, 1, 2, 3, 4, and ≥ 5 drinks per day, and unknown alcohol consumption. 
Regression analyses were conducted using STATA/SE (version 10.1; StataCorp LP, College Station, TX, USA).", "Data were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS).
The study protocol for NHANES was approved by the NCHS Institutional Review Board.", "Solid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007).\nSerum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults.", "In all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. 
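The treated-disease case definition above (current self-reported thyroid disease plus a listed thyroid-related medication) amounts to a simple record filter. A minimal sketch, with invented field names rather than real NHANES variable names:

```python
# Medications named in the text; matching here is on lowercase drug names.
THYROID_MEDS = {
    "levothyroxine", "liothyronine", "thyroid desiccated",
    "thyroid drugs unspecified",        # hypothyroidism treatments
    "propylthiouracil", "methimazole",  # hyperthyroidism treatments
}

def has_treated_thyroid_disease(record):
    # record: dict with illustrative keys 'current_thyroid_disease' (bool)
    # and 'medications' (list of reported drug-name strings).
    meds = {m.lower() for m in record.get("medications", [])}
    return bool(record.get("current_thyroid_disease")) and bool(meds & THYROID_MEDS)
```

Both conditions must hold: reporting a current condition without one of the listed medications does not count as treated disease.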
No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES.\nTo assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease.", "NHANES uses a complex cluster sample design with some demographic groups (including less-privileged socioeconomic groups and Mexican Americans) oversampled to ensure adequate representation. Prevalence estimates and models were therefore survey-weighted using the NHANES primary sampling unit, strata, and population weights, unless otherwise stated.\nMultivariate logistic regression modeling was used to estimate odds ratios (ORs) of thyroid disease outcomes by quartile of PFOA and PFOS concentrations, and associations of other physician-diagnosed diseases. Because thyroid disease prevalence is markedly higher in women, we used sex-specific models. Because the distribution of PFC concentrations is skewed (with most people having relatively low exposures and with considerably more variance at the higher exposure end), all available data were pooled, and PFOA and PFOS concentrations were divided into population-weighted quartiles. Using the Hsieh method (Hsieh et al. 1998), our estimated power to detect an association of OR ≥ 1.8 with current treated disease comparing the top PFOA quartile (Q4) with bottom quartile (Q1) is 67% in women. Combining the lowest two quartiles (Q1 and Q2) into a larger control group provides 80% power. The corresponding minimum detectable effect size in men is OR > 2.9. 
Assumptions for the power calculations include a significance level of 5% and a multiple correlation coefficient of 0.2 relating PFOA exposure to potential confounders.\nModels were adjusted for the following potential confounding factors: year of NHANES study; age; sex; race/ethnicity, from self-description and categorized into Mexican American, other Hispanic, non-Hispanic white, non-Hispanic black, and other race (including multiracial); education, categorized into less than high school, high school diploma (including GED), more than high school, and unknown education; smoking (from self-reported status asked for those ≥ 20 years of age), categorized into never smoked, former smoker, smoking some days, smoking every day, and unknown smoking status; body mass index (BMI; weight in kilograms divided by the square of measured height in meters), categorized into underweight (BMI < 18.5), recommended weight (BMI = 18.5–24.9), overweight (BMI = 25.0–29.9), obese (BMI = 30.0–34.9), and unknown BMI; and alcohol consumption (in adults ≥ 20 years of age, based on responses to the question “In the past 12 months, on those days that you drank alcoholic beverages, on the average day, how many drinks did you have?”), categorized into 0, 1, 2, 3, 4, and ≥ 5 drinks per day, and unknown alcohol consumption. Regression analyses were conducted using STATA/SE (version 10.1; StataCorp LP, College Station, TX, USA).", "Serum concentrations of PFOA were available for n = 3,974 individuals ≥ 20 years of age from NHANES waves 1999–2000 (n = 1,040), 2003–2004 (n = 1,454), and 2005–2006 (n = 1,480). In analyses adjusted for age, sex, NHANES wave, and ethnicity, mean levels of PFOA were higher in men than in women [by 0.76 ng/mL; 95% confidence interval (CI), 0.73–0.80; p < 0.0001], and we found significant differences between ethnic groups (Table 1). Individuals with more education had higher PFOA levels (highest vs. 
lowest education: 1.1 ng/mL difference; 95% CI, 1.03–1.19 ng/mL difference; p = 0.008). Increased alcohol consumption levels were also associated with higher PFOA concentrations (e.g., those having five or more drinks per day had mean PFOA levels 1.24 ng/mL higher than nondrinkers; 95% CI, 1.14–1.37 ng/mL difference; p < 0.0001).\nWe found similar differences in PFOS concentrations. Mean levels of PFOS were higher in men (p < 0.0001), with significant differences in levels between ethnic groups, and individuals with more education had higher PFOS levels (p = 0.008).\nEight individuals did not answer questions about thyroid disease, so the sample size for this analysis was n = 3,966, with 1,900 men and 2,066 women (Table 2). In women, overall (unweighted) reported (any) thyroid disease was n = 292, and the NHANES-weighted but unadjusted prevalence was 16.18%; in men, n = 69, and weighted prevalence, 3.06%. The study-weighted prevalence of current thyroid disease taking medication was necessarily lower (women, n = 163, 9.89%; men, n = 46, 1.18%).\nWe computed population-weighted quartiles of PFOA and PFOS concentrations in men and women separately (Table 2). The highest quartile (Q4) of PFOA in women ranged from 5.7 to 123.0 ng/mL, and in men from 7.3 ng/mL to 45.9 ng/mL.
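The ORs and CIs reported in this section come from survey-weighted multivariate logistic models; as a minimal illustration of the quantities involved, an unadjusted OR with a Wald 95% CI can be computed from a 2 × 2 table (the counts used below are made up, not study data):

```python
import math

def odds_ratio_ci(exp_cases, exp_noncases, unexp_cases, unexp_noncases, z=1.96):
    # Unadjusted odds ratio with a Wald confidence interval on the log scale.
    or_ = (exp_cases * unexp_noncases) / (exp_noncases * unexp_cases)
    se_log = math.sqrt(1 / exp_cases + 1 / exp_noncases
                       + 1 / unexp_cases + 1 / unexp_noncases)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

The adjusted, survey-weighted estimates in Table 3 differ from this unadjusted form, but the log-scale CI construction is the same idea.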
Study-weighted but unadjusted prevalences of current thyroid disease taking related medication in women varied across the quartiles but with wide CIs: Q1 = 8.14% (95% CI, 5.75–10.53%), Q4 = 16.19% (95% CI, 11.74–20.62%); in men, unadjusted prevalence rates were far lower throughout (prevalence = 2.27% in Q1 and Q4). For PFOS the prevalence of treated thyroid disease ranged from 8.14% (Q1) to 12.55% (Q4) in women and from 1.85% to 3.89% in men.\nIn logistic regression models adjusting for age, ethnicity, and study year (Table 3), we found associations between PFOA quartiles and both definitions of thyroid disease in women. For logistic models additionally adjusted for educational status, BMI, smoking status, and alcohol consumption, these associations remained significant; for example, comparing those with PFOA concentrations ≥ 5.7 ng/mL (Q4) versus ≤ 2.6 ng/mL (Q1), the OR for current thyroid disease on medication was 1.86 (95% CI, 1.12–3.09; p = 0.018). Comparing Q4 with the larger control group of PFOA ≤ 4.0 ng/mL (Q1 and Q2) the estimated OR for treated thyroid disease was 2.24 (95% CI, 1.38–3.65; p = 0.002). In men, we found a similar suggestive trend, but it narrowly missed significance: Comparing PFOA concentrations ≥ 7.3 ng/mL (Q4) versus ≤ 5.2 ng/mL (Q1 and Q2), the OR for treated disease was 2.12 (95% CI, 0.93–4.82; p = 0.073).\nFor PFOS concentrations, in women ORs for disease trended in a similar direction but were far from significant. However, in men we found an association comparing those with PFOS concentrations ≥ 36.8 ng/mL (Q4) versus ≤ 25.5 ng/mL (Q1 and Q2): OR for treated disease was 2.68 (95% CI, 1.03–6.98; p = 0.043).\n Sensitivity analyses For a sensitivity analysis, we computed a logistic regression model including both men and women, testing an interaction term between sex and PFOA levels for treated thyroid disease risk. 
The interaction term was not significant (p-value for interaction = 0.152).\nFor a post hoc analysis, we examined associations between chemical concentration quartile and any of the other major disease categories covered in NHANES: arthritis, asthma, COPD, diabetes, heart disease, or liver disease. Combining men and women (to reduce multiple testing and because these diseases are less sex related), we found no significant associations for PFOA (Table 4) except for comparisons between the intermediate quartiles and Q1 for arthritis, but this did not reach significance in the top quartile.\nFor PFOS, we found no “positive” associations between higher serum concentrations and higher prevalence of disease. We found one statistically significant “negative” association suggesting that people reporting having COPD may be less likely to be in the highest PFOS concentration quartile (OR = 0.58; 95% CI, 0.43–0.76; p = 0.0003).", "For a sensitivity analysis, we computed a logistic regression model including both men and women, testing an interaction term between sex and PFOA levels for treated thyroid disease risk. The interaction term was not significant (p-value for interaction = 0.152).\nFor a post hoc analysis, we examined associations between chemical concentration quartile and any of the other major disease categories covered in NHANES: arthritis, asthma, COPD, diabetes, heart disease, or liver disease. Combining men and women (to reduce multiple testing and because these diseases are less sex related), we found no significant associations for PFOA (Table 4) except for comparisons between the intermediate quartiles and Q1 for arthritis, but this did not reach significance in the top quartile.\nFor PFOS, we found no “positive” associations between higher serum concentrations and higher prevalence of disease. We found one statistically significant “negative” association suggesting that people reporting having COPD may be less likely to be in the highest PFOS concentration quartile (OR = 0.58; 95% CI, 0.43–0.76; p = 0.0003).", "In this study we aimed to determine whether increased serum PFOA or PFOS concentrations were associated with thyroid disease in a general adult U.S. population sample. The prevalence of thyroid disease is markedly higher in women than in men, so we estimated sex-specific associations. We found that, across all the available data from NHANES, thyroid disease associations with serum PFOA concentrations are present in women and are strongest for those currently being treated for thyroid disease. In men, we also found a near significant association between PFOA and treated thyroid disease.
An interaction term analysis suggests that the PFOA trends in men and women are not significantly different, despite the relative rarity of thyroid disease in men. In addition, we found a nominally significant association between PFOS concentrations and treated thyroid disease in men but not in women.\nThe presence of associations with both PFOA and PFOS raises the issue of how best to perform risk assessments for combinations of perfluorochemicals. The somewhat divergent risk patterns for the two compounds support their separate risk assessment (Scialli et al. 2007), given that current legislative advice (Minnesota Department of Health 2008) is to consider the combined effects of chemicals only when two or more chemicals in a mixture affect the same tissue, organ, or organ system.\nOur results are important because PFAAs are detectable in virtually everyone in society (Kannan et al. 2004), with ubiquitous presence across global populations (Calafat et al. 2006). Occupational exposure to PFOA reported in 2003 showed mean serum values of 1,780 ng/mL (range, 40–10,060 ng/mL) (Olsen et al. 2003a) and 899 ng/mL (range, 722–1,120 ng/mL) (Olsen et al. 2003c). Production of PFOS was halted in 2002 in the United States by its principal producer, due largely to concerns over bioaccumulation and toxicity. Since then, voluntary industry reductions in production and use of other perfluorinated compounds, such as the U.S. EPA–initiated PFOA Stewardship Program (U.S. EPA 2006), have contributed to a decreasing trend in human exposure for all perfluorinated compounds (with the notable exception of perfluorononanoic acid) (Calafat et al. 2007; Olsen et al. 2007). In May 2009, PFOS was listed under the Stockholm Convention on Persistent Organic Pollutants (2008).\nOur results can be compared with previous studies of human populations and of nonhuman primates. 
A 6-month study of cynomolgus monkeys chronically exposed to PFOA showed no associations between PFOA and thyroid parameters, at mean serum PFOA concentrations higher than those reported in NHANES, although only male monkeys were involved (Butenhoff et al. 2002). The largest human study of PFOA centers on an industrial facility in Washington, West Virginia, from which PFOA spread to the population through air, water, occupational, and domestic exposure in a point-source contamination. The C8 Health Project (Steenland et al. 2009) has measured PFOA concentrations in > 69,000 residents. Markedly high concentrations were found, with an arithmetic mean of 83 ng/mL and a median concentration in serum of 28 ng/mL (C8 Science Panel 2008), far higher than the NHANES concentrations in the general population. Preliminary analyses report associations between PFOA and total cholesterol, low-density lipoproteins, and triglyceride concentrations in multivariate models adjusting for age, BMI, sex, education, smoking, alcohol, and regular exercise. Comprehensive cross-sectional and follow-on analyses of associations with thyroid disease have not yet been reported but are expected to be released in 2010–2011 (C8 Science Panel 2009).\nImportantly, disruption to thyroid hormone balance was not found in other studies of populations exposed to PFOA, despite the considerably higher levels reported in some studies (Emmett et al. 2006; Olsen et al. 2003b). Emmett et al. (2006) studied 371 residents of a community with long-standing environmental exposure to PFOA and found a median serum PFOA concentration of 181–571 ng/mL but no association between serum PFOA and a history of thyroid disease. A study that included thyroid hormone levels reported a positive association between serum PFOA concentration and T3 levels in occupationally exposed workers, although there were no changes in other thyroid hormones (Olsen et al. 2001). 
Modest associations between PFOA and thyroid hormones (negative for free T4 and positive for T3) were reported in 506 PFOA production workers across three production facilities (Olsen and Zobel 2007); there were no associations between TSH or T4 and PFOA, and the free hormone levels were within the normal reference range.\nA linear extrapolation of the findings reported here would be expected to lead to associations being more evident at higher exposure levels, yet this is not supported by the literature. Nonlinearity of response is not uncommon for receptor-mediated systems such as endocrine-signaling pathways that act to amplify the original signal. Large changes in cell function can occur in response to extremely low concentrations, but responses may become saturated, and hence unresponsive, at higher concentrations (vom Saal and Hughes 2005; Welshons et al. 2003).\nThe mechanisms involved in thyroid homeostasis are numerous and complex, and there are multiple potential targets for disruption of thyroid hormone homeostasis (Schmutzler et al. 2007). These include the thyrotropin receptor (Santini et al. 2003), iodine uptake by the sodium iodide transporter (Schröder van der Elst et al. 2003), type 1 5′-deiodinase (Ferreira et al. 2002), transthyretin (Kohrle et al. 1988), the thyroid hormone receptor (Moriyama et al. 2002), and the thyroid hormone–dependent growth of pituitary cells (Ghisari and Bonefeld-Jorgensen 2005). Depression of serum T4 and T3 has been reported by several authors in PFOS-exposed rats (Lau et al. 2003; Luebker et al. 2005; Seacat et al. 2003). One mechanism by which PFAAs may deplete T4 is through induction of the hepatic uridine diphosphoglucuronosyl transferase (UGT) system, which is involved in hepatic metabolism of thyroid hormone and biliary clearance of T4 as T4-glucuronide (Barter and Klaassen 1994). Because PFOA is an agonist for PPARα, it is plausible that induction of hepatic UGT in PFAA-exposed rats (Yu et al.
2009) could represent a PPARα-mediated response. The involvement of another PPARα agonist, WY 14643, in enhancing the hepatic degradation of thyroid hormone has recently been shown (Weineke et al. 2009).\nA growing body of data describes the in vitro binding affinity of PFOA to human serum-binding proteins (Chen and Guo 2009) and to PPARα, -β, and -γ and other nuclear receptors (Vanden Heuvel et al. 2006), but the contribution of these mechanisms to PFOA’s thyroid-mediating effects in humans remains to be established. Many cellular and metabolic processes, including lipid metabolism, energy homeostasis, and cell differentiation, are controlled by PPARα. Early studies of the effects of PFAAs in rodents showed that a single dose lowered heart rate and body temperature and depressed T4 and T3. Replacement of T4 did not reverse the clinical symptoms of hypothermia (Gutshall et al. 1988; Langley and Pilcher 1985). Although circulating thyroid hormone levels were low, liver enzymes responsive to thyroid hormone levels were elevated, suggesting that thyroidal homeostasis was not functionally compromised. Chang et al. (2007) found that exposure to PFOS for up to 3 weeks did not affect functional thyroid status, because free T4, TSH, and various thyroid-responsive liver enzymes were all unaffected. These findings and later results have led to proposals that displacement of circulating thyroid hormones from plasma protein-binding sites and a reduced responsiveness of the HPT axis contribute significantly toward PFOA’s hypothyroid-inducing effects (Lau et al. 2007). Whatever the mechanisms involved, it is clear that more research is merited to clarify the pathways.\nThe feedback mechanism by which the rate of release of TSH and the circulating levels of T3 and T4 are regulated tends to show a low level of individual variation (Feldt-Rasmussen et al. 1980).
Therefore, subtle disruption of the HPT axis may have negative health consequences for the individual even while hormone levels remain within normal reference ranges, highlighting the importance of including both clinical and laboratory end points in such studies. The NHANES data do not allow specification of the precise type of thyroid disease present, because NHANES does not report on individual hormone levels. PFOA concentration was negatively associated with free T4 and positively associated with T3 levels in a cohort of 506 exposed workers, with a near significant association with TSH levels (Olsen et al. 2007), although all effects were regarded as modest.\nThe limitations of these analyses should be noted. First, we based the PFOA and PFOS measures on a single serum sample. Because PFOA has a half-life of 4 years (Olsen et al. 2007), a single sample is likely to represent medium-term internal dose, but samples taken at several time points might be more accurate in classifying exposure. Any misclassification from single measures would tend to decrease power and underestimate the real strength of association. Second, the PFOA concentrations were measured at the same time as disease status, making attribution of causal direction difficult. This raises the possibility of reverse causation. One might hypothesize that after onset of thyroid disease, changes in the nature of exposure or in the pharmacokinetics of PFOA might occur [including patterns of absorption, distribution (including protein binding), or excretion]. Because the associations we report were present in people who were on thyroid hormone replacements, which effectively mimic normal thyroid function, a mechanism for reverse causation through changes in pharmacokinetics is difficult to imagine.
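The single-sample argument above leans on PFOA's roughly 4-year serum half-life; assuming simple first-order elimination (an assumption, not stated in the text), the fraction of a measured burden remaining after t years is 0.5^(t/4):

```python
HALF_LIFE_YEARS = 4.0  # serum PFOA half-life cited in the text (Olsen et al. 2007)

def fraction_remaining(t_years, half_life=HALF_LIFE_YEARS):
    # First-order elimination: N(t) = N0 * 0.5 ** (t / half_life).
    return 0.5 ** (t_years / half_life)
```

Two years after a measurement about 71% of the burden would remain, which is why a single sample can plausibly stand in for medium-term internal dose.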
Confounding by unmeasured factors is also possible, but it is unlikely that confounding could explain similar findings reported from some of the diverse experimental and observational studies discussed above.\nPost hoc association testing with other common diseases (necessarily involving multiple statistical testing) did not identify other robust associations of higher PFC concentration with increased disease prevalence, suggesting specificity of our findings for thyroid disease. An apparent association between higher PFOS concentrations and lower prevalence of COPD requires replication, to exclude a false-positive result from multiple testing.\nIn addition to the limitations of our analyses, the strengths should also be noted: This is the first large-scale analysis in a nationally representative general adult population of directly measured serum concentrations of PFOA and PFOS. In addition, the associations present are strongest for the most specific identification of thyroid disease, based on reported diagnosis with current use of thyroid-specific medication. The NHANES study also supported adjustment of models for a range of potential confounding factors, which in fact made relatively minor differences to the key estimates, suggesting that the associations are robust.\nFurther work is clearly needed to characterize the PFOA and PFOS associations with specific thyroid diagnoses and thyroid hormone levels in the general population and to clarify whether the associations reflect pathology, changes in exposure, or altered pharmacokinetics. Longitudinal analyses are also needed to establish whether high exposures predict future onsets of thyroid disease, although concurrent alteration of thyroid functioning would still be a cause for concern.", "Higher PFOA and PFOS concentrations are associated with thyroid disease (and being on thyroid-related medication) in the NHANES U.S. general adult population representative study samples. 
More work is needed to establish the mechanisms underlying this association and to exclude confounding and pharmacokinetic explanations." ]
[ "methods", "methods", null, null, "methods", "results", null, "discussion", "conclusions" ]
[ "C8", "human population", "PFOA", "PFOS", "thyroid disease" ]
Methods: Study population Data were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS). The study protocol for NHANES was approved by the NCHS Institutional Review Board. Assessment of PFOA/PFOS concentrations Solid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007). Serum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults. Disease outcomes In all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES. To assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease. Statistical analysis NHANES uses a complex cluster sample design with some demographic groups (including less-privileged socioeconomic groups and Mexican Americans) oversampled to ensure adequate representation. Prevalence estimates and models were therefore survey-weighted using the NHANES primary sampling unit, strata, and population weights, unless otherwise stated. Multivariate logistic regression modeling was used to estimate odds ratios (ORs) of thyroid disease outcomes by quartile of PFOA and PFOS concentrations, and associations of other physician-diagnosed diseases. Because thyroid disease prevalence is markedly higher in women, we used sex-specific models. Because the distribution of PFC concentrations is skewed (with most people having relatively low exposures and with considerably more variance at the higher exposure end), all available data were pooled, and PFOA and PFOS concentrations were divided into population-weighted quartiles. Using the Hsieh method (Hsieh et al. 1998), our estimated power to detect an association of OR ≥ 1.8 with current treated disease comparing the top PFOA quartile (Q4) with bottom quartile (Q1) is 67% in women. Combining the lowest two quartiles (Q1 and Q2) into a larger control group provides 80% power. The corresponding minimum detectable effect size in men is OR > 2.9. Assumptions for the power calculations include a significance level of 5% and a multiple correlation coefficient of 0.2 relating PFOA exposure to potential confounders.
Models were adjusted for the following potential confounding factors: year of NHANES study; age; sex; race/ethnicity, from self-description and categorized into Mexican American, other Hispanic, non-Hispanic white, non-Hispanic black, and other race (including multiracial); education, categorized into less than high school, high school diploma (including GED), more than high school, and unknown education; smoking (from self-reported status asked for those ≥ 20 years of age), categorized into never smoked, former smoker, smoking some days, smoking every day, and unknown smoking status; body mass index (BMI; weight in kilograms divided by the square of measured height in meters), categorized into underweight (BMI < 18.5), recommended weight (BMI = 18.5–24.9), overweight (BMI = 25.0–29.9), obese (BMI = 30.0–34.9), and unknown BMI; and alcohol consumption (in adults ≥ 20 years of age, based on responses to the question “In the past 12 months, on those days that you drank alcoholic beverages, on the average day, how many drinks did you have?”), categorized into 0, 1, 2, 3, 4, and ≥ 5 drinks per day, and unknown alcohol consumption. Regression analyses were conducted using STATA/SE (version 10.1; StataCorp LP, College Station, TX, USA). Study population: Data were from three independent cross-sectional waves of NHANES: 1999–2000, 2003–2004, and 2005–2006. NHANES surveys assess the health and diet of the noninstitutionalized civilian population of the United States and are administered by the National Center for Health Statistics (NCHS). The study protocol for NHANES was approved by the NCHS Institutional Review Board. Assessment of PFOA/PFOS concentrations: Solid-phase extraction coupled to high-performance liquid chromatography/turbo ion spray ionization/tandem mass spectrometry with isotope-labeled internal standards was used for the detection of PFOA and PFOS, with a limit of detection of 0.2 ng/mL (Kuklenyik et al. 2005). 
The laboratory methods and comprehensive quality control system were consistent in each NHANES wave, and documentation for each wave is available (NCHS 2009; NHANES 2006, 2007). Serum polyfluorinated chemicals (PFCs) were measured in a one-third representative random subset of persons ≥ 12 years of age in each NHANES wave. Data from individuals < 20 years of age were excluded, because questions relating to disease prevalence were asked only for adults. Disease outcomes: In all NHANES waves, adult respondents were asked about physician-diagnosed diseases. Associations were examined between PFOA and PFOS concentrations and thyroid disease outcomes. Individuals were asked whether they had ever been told by a doctor or health professional that they had a thyroid problem (in the 1999–2000 survey the questions related to goiter and other thyroid conditions) and whether they still had the condition. We further defined thyroid disease by considering those people who said they currently had thyroid disease and were taking any thyroid-related medication, including levothyroxine, liothyronine, “thyroid desiccated,” and “thyroid drugs unspecified” for hypothyroidism and propylthiouracil and methimazole for hyperthyroidism. No details were available on specific thyroid disease diagnosis, and the PFC samples did not overlap with the thyroid hormone measurement subsamples in NHANES. To assess disease specificity, associations were examined between PFOA and the other NHANES disease categories elicited: ischemic heart disease (combining any diagnoses of coronary heart disease, angina, and/or heart attack), diabetes, arthritis, current asthma, chronic obstructive pulmonary disease (COPD; bronchitis or emphysema), and current liver disease. Statistical analysis: NHANES uses a complex cluster sample design with some demographic groups (including less-privileged socioeconomic groups and Mexican Americans) oversampled to ensure adequate representation. 
Prevalence estimates and models were therefore survey-weighted using the NHANES primary sampling unit, strata, and population weights, unless otherwise stated. Multivariate logistic regression modeling was used to estimate odds ratios (ORs) of thyroid disease outcomes by quartile of PFOA and PFOS concentrations, and associations of other physician-diagnosed diseases. Because thyroid disease prevalence is markedly higher in women, we used sex-specific models. Because the distribution of PFC concentrations is skewed (with most people having relatively low exposures and with considerably more variance at the higher exposure end), all available data were pooled, and PFOA and PFOS concentrations were divided into population-weighted quartiles. Using the Hsieh method (Hsieh et al. 1998), our estimated power to detect an association of OR ≥ 1.8 with current treated disease comparing the top PFOA quartile (Q4) with bottom quartile (Q1) is 67% in women. Combining the lowest two quartiles (Q1 and Q2) into a larger control group provides 80% power. The corresponding minimum detectable effect size in men is OR > 2.9. Assumptions for the power calculations include a significance level of 5% and a multiple correlation coefficient of 0.2 relating PFOA exposure to potential confounders. 
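The population-weighted quartile construction and quartile-based odds ratio models described above can be sketched as follows. This is a minimal illustration on synthetic data in plain NumPy; the study itself used Stata's survey commands, so the weighted-CDF quartile scheme, the unadjusted IRLS fit, and all variable names here are our simplifications, and the sketch ignores NHANES's stratified cluster design.

```python
import numpy as np

def weighted_quartile_cuts(values, weights):
    """Cut points splitting a weighted distribution into quartiles of
    ~25% weighted population each (simple interpolated weighted-CDF
    scheme; survey-software conventions may differ slightly)."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    cdf = (np.cumsum(w) - 0.5 * w) / w.sum()
    return np.interp([0.25, 0.50, 0.75], cdf, v)

def weighted_logistic_ors(y, dummies, weights, n_iter=25):
    """Weight-adjusted logistic regression fit by Newton-Raphson (IRLS);
    returns odds ratios for each dummy column vs. the reference group."""
    X = np.column_stack([np.ones(len(y)), dummies])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (weights * (y - p))
        hess = (X * (weights * p * (1.0 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return np.exp(beta[1:])

# Synthetic right-skewed "exposure" with a built-in effect in the top quartile
rng = np.random.default_rng(1)
n = 4000
expo = rng.lognormal(1.3, 0.6, n)              # ng/mL-like values, illustrative
wts = rng.uniform(0.5, 2.0, n)                 # stand-in survey weights
cuts = weighted_quartile_cuts(expo, wts)
q = np.digitize(expo, cuts)                    # 0..3 = Q1..Q4
true_logit = -2.5 + 0.6 * (q == 3)             # built-in Q4-vs-Q1 odds ratio
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
dummies = np.column_stack([(q == k).astype(float) for k in (1, 2, 3)])
ors = weighted_logistic_ors(y, dummies, wts)   # ORs for Q2, Q3, Q4 vs. Q1
```

A real NHANES analysis would additionally enter the confounder categories listed in the methods and compute design-based (Taylor linearization) standard errors rather than the model-based ones this sketch implies.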
Models were adjusted for the following potential confounding factors: year of NHANES study; age; sex; race/ethnicity, from self-description and categorized into Mexican American, other Hispanic, non-Hispanic white, non-Hispanic black, and other race (including multiracial); education, categorized into less than high school, high school diploma (including GED), more than high school, and unknown education; smoking (from self-reported status asked for those ≥ 20 years of age), categorized into never smoked, former smoker, smoking some days, smoking every day, and unknown smoking status; body mass index (BMI; weight in kilograms divided by the square of measured height in meters), categorized into underweight (BMI < 18.5), recommended weight (BMI = 18.5–24.9), overweight (BMI = 25.0–29.9), obese (BMI = 30.0–34.9), and unknown BMI; and alcohol consumption (in adults ≥ 20 years of age, based on responses to the question “In the past 12 months, on those days that you drank alcoholic beverages, on the average day, how many drinks did you have?”), categorized into 0, 1, 2, 3, 4, and ≥ 5 drinks per day, and unknown alcohol consumption. Regression analyses were conducted using STATA/SE (version 10.1; StataCorp LP, College Station, TX, USA). Results: Serum concentrations of PFOA were available for n = 3,974 individuals ≥ 20 years of age from NHANES waves 1999–2000 (n = 1,040), 2003–2004 (n = 1,454), and 2005–2006 (n = 1,480). In analyses adjusted for age, sex, NHANES wave, and ethnicity, mean levels of PFOA were higher in men than in women [by 0.76 ng/mL; 95% confidence interval (CI), 0.73–0.80; p < 0.0001], and we found significant differences between ethnic groups (Table 1). Individuals with more education had higher PFOA levels (highest vs. lowest education: 1.1 ng/mL difference; 95% CI, 1.03–1.19 ng/mL difference; p = 0.008). 
Increased alcohol consumption levels were also associated with higher PFOA concentrations (e.g., those having five or more drinks per day had mean PFOA levels 1.24 ng/mL higher than nondrinkers; 95% CI, 1.14–1.37 ng/mL difference; p < 0.0001). We found similar differences in PFOS concentrations. Mean levels of PFOS were higher in men (p < 0.0001), with significant differences in levels between ethnic groups, and individuals with more education had higher PFOS levels (p = 0.008). Eight individuals did not answer questions about thyroid disease, so the sample size for this analysis was n = 3,966, with 1,900 men and 2,066 women (Table 2). In women, overall (unweighted) reported (any) thyroid disease was n = 292, and the NHANES-weighted but unadjusted prevalence was 16.18%; in men, n = 69, and weighted prevalence, 3.06%. The study-weighted prevalence of current thyroid disease taking medication was necessarily lower (women, n = 163, 9.89%; men, n = 46, 1.18%). We computed population-weighted quartiles of PFOA and PFOS concentrations in men and women separately (Table 2). The highest quartile (Q4) of PFOA in women ranged from 5.7 to 123.0 ng/mL, and in men from 7.3 ng/mL to 45.9 ng/mL. Study-weighted but unadjusted prevalences of current thyroid disease taking related medication in women varied across the quartiles but with wide CIs: Q1 = 8.14% (95% CI, 5.75–10.53%), Q4 = 16.19% (95% CI, 11.74–20.62%); in men, unadjusted prevalence rates were far lower throughout (prevalence = 2.27% in Q1 and Q4). 
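The survey-weighted prevalence estimates with 95% CIs reported above can be approximated in simplified form. The sketch below is a hedged stand-in: it replaces NHANES's design-based (Taylor linearization) variance estimation with a Kish effective-sample-size approximation, and all data and names are illustrative, not from the study.

```python
import numpy as np

def weighted_prevalence_ci(flags, weights, z=1.96):
    """Survey-weighted prevalence with a normal-approximation CI based on
    the Kish effective sample size -- a crude stand-in for the design-based
    variance estimation that NHANES analyses actually use."""
    f = np.asarray(flags, float)
    w = np.asarray(weights, float)
    p = float(np.sum(w * f) / np.sum(w))
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)    # Kish effective n
    se = np.sqrt(p * (1.0 - p) / n_eff)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Illustrative sample with ~10% true prevalence
rng = np.random.default_rng(2)
disease = rng.uniform(size=2000) < 0.10        # hypothetical disease flags
w = rng.uniform(0.5, 2.0, size=2000)           # hypothetical survey weights
p, lo, hi = weighted_prevalence_ci(disease, w)
```

Because clustering within primary sampling units inflates variance, real NHANES confidence intervals are typically wider than this independent-sampling approximation would suggest.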
For PFOS the prevalence of treated thyroid disease ranged from 8.14% (Q1) to 12.55% (Q4) in women and from 1.85% to 3.89% in men. In logistic regression models adjusting for age, ethnicity, and study year (Table 3), we found associations between PFOA quartiles and both definitions of thyroid disease in women. For logistic models additionally adjusted for educational status, BMI, smoking status, and alcohol consumption, these associations remained significant; for example, comparing those with PFOA concentrations ≥ 5.7 ng/mL (Q4) versus ≤ 2.6 ng/mL (Q1), the OR for current thyroid disease on medication was 1.86 (95% CI, 1.12–3.09; p = 0.018). Comparing Q4 with the larger control group of PFOA ≤ 4.0 ng/mL (Q1 and Q2), the estimated OR for treated thyroid disease was 2.24 (95% CI, 1.38–3.65; p = 0.002). In men, we found a similar suggestive trend, but it narrowly missed significance: Comparing PFOA concentrations ≥ 7.3 ng/mL (Q4) versus ≤ 5.2 ng/mL (Q1 and Q2), the OR for treated disease was 2.12 (95% CI, 0.93–4.82; p = 0.073). For PFOS concentrations, in women ORs for disease trended in a similar direction but were far from significant. However, in men we found an association comparing those with PFOS concentrations ≥ 36.8 ng/mL (Q4) versus ≤ 25.5 ng/mL (Q1 and Q2): OR for treated disease was 2.68 (95% CI, 1.03–6.98; p = 0.043). Sensitivity analyses: For a sensitivity analysis, we computed a logistic regression model including both men and women, testing an interaction term between sex and PFOA levels for treated thyroid disease risk. The interaction term was not significant (p-value for interaction = 0.152). For a post hoc analysis, we examined associations between chemical concentration quartile and any of the other major disease categories covered in NHANES: arthritis, asthma, COPD, diabetes, heart disease, or liver disease. 
Combining men and women (to reduce multiple testing and because these diseases are less sex related), we found no significant associations for PFOA (Table 4) except for comparisons between the intermediate quartiles and Q1 for arthritis, but this did not reach significance in the top quartile. For PFOS, we found no “positive” associations between higher serum concentrations and higher prevalence of disease. We found one statistically significant “negative” association suggesting that people reporting having COPD may be less likely to be in the highest PFOS concentration quartile (OR = 0.58; 95% CI, 0.43–0.76; p = 0.0003). Discussion: In this study we aimed to determine whether increased serum PFOA or PFOS concentrations were associated with thyroid disease in a general adult U.S. population sample. The prevalence of thyroid disease is markedly higher in women than in men, so we estimated sex-specific associations. We found that, across all the available data from NHANES, thyroid disease associations with serum PFOA concentrations are present in women and are strongest for those currently being treated for thyroid disease. In men, we also found a near significant association between PFOA and treated thyroid disease. An interaction term analysis suggests that the PFOA trends in men and women are not significantly different, despite the relative rarity of thyroid disease in men. In addition, we found a nominally significant association between PFOS concentrations and treated thyroid disease in men but not in women. The presence of associations with both PFOA and PFOS raises the issue of how best to perform risk assessments for combinations of perfluorochemicals. 
The somewhat divergent risk patterns for the two compounds support their separate risk assessment (Scialli et al. 2007), given that current legislative advice (Minnesota Department of Health 2008) is to consider the combined effects of chemicals only when two or more chemicals in a mixture affect the same tissue, organ, or organ system. Our results are important because PFAAs are detectable in virtually everyone in society (Kannan et al. 2004), with ubiquitous presence across global populations (Calafat et al. 2006). Occupational exposure to PFOA reported in 2003 showed mean serum values of 1,780 ng/mL (range, 40–10,060 ng/mL) (Olsen et al. 2003a) and 899 ng/mL (range, 722–1,120 ng/mL) (Olsen et al. 2003c). Production of PFOS was halted in 2002 in the United States by its principal producer, due largely to concerns over bioaccumulation and toxicity. Since then, voluntary industry reductions in production and use of other perfluorinated compounds, such as the U.S. EPA–initiated PFOA Stewardship Program (U.S. EPA 2006), have contributed to a decreasing trend in human exposure for all perfluorinated compounds (with the notable exception of perfluorononanoic acid) (Calafat et al. 2007; Olsen et al. 2007). In May 2009, PFOS was listed under the Stockholm Convention on Persistent Organic Pollutants (2008). Our results can be compared with previous studies of human populations and of nonhuman primates. A 6-month study of cynomolgus monkeys chronically exposed to PFOA showed no associations between PFOA and thyroid parameters, at mean serum PFOA concentrations higher than those reported in NHANES, although only male monkeys were involved (Butenhoff et al. 2002). The largest human study of PFOA centers on an industrial facility in Washington, West Virginia, from which PFOA spread to the population through air, water, occupational, and domestic exposure in a point-source contamination. The C8 Health Project (Steenland et al. 
2009) has measured PFOA concentrations in > 69,000 residents. Markedly high concentrations were found, with an arithmetic mean of 83 ng/mL and a median concentration in serum of 28 ng/mL (C8 Science Panel 2008), far higher than the NHANES concentrations in the general population. Preliminary analyses report associations between PFOA and total cholesterol, low-density lipoproteins, and triglyceride concentrations in multivariate models adjusting for age, BMI, sex, education, smoking, alcohol, and regular exercise. Comprehensive cross-sectional and follow-on analyses of associations with thyroid disease have not yet been reported but are expected to be released in 2010–2011 (C8 Science Panel 2009). Importantly, disruption to thyroid hormone balance was not found in other studies of populations exposed to PFOA, despite the considerably higher levels reported in some studies (Emmett et al. 2006; Olsen et al. 2003b). Emmett et al. (2006) studied 371 residents of a community with long-standing environmental exposure to PFOA and found a median serum PFOA concentration of 181–571 ng/mL but no association between serum PFOA and a history of thyroid disease. A study that included thyroid hormone levels reported a positive association between serum PFOA concentration and T3 levels in occupationally exposed workers, although there were no changes in other thyroid hormones (Olsen et al. 2001). Modest associations between PFOA and thyroid hormones (negative for free T4 and positive for T3) were reported in 506 PFOA production workers across three production facilities (Olsen and Zobel 2007); there were no associations between TSH or T4 and PFOA, and the free hormone levels were within the normal reference range. A linear extrapolation of the findings reported here would be expected to lead to associations being more evident at higher exposure levels, yet this is not supported by the literature. 
Nonlinearity of response is not uncommon for receptor-mediated systems such as endocrine-signaling pathways that act to amplify the original signal. Large changes in cell function can occur in response to extremely low concentrations, but the response may become saturated and hence unresponsive at higher concentrations (vom Saal and Hughes 2005; Welshons et al. 2003). The mechanisms involved in thyroid homeostasis are numerous and complex, and there are multiple potential targets for disruption of thyroid hormone homeostasis (Schmutzler et al. 2007). These include the thyrotropin receptor (Santini et al. 2003), iodine uptake by the sodium iodide transporter (Schröder van der Elst et al. 2003), type 1 5′-deiodinase (Ferreira et al. 2002), transthyretin (Kohrle et al. 1988), the thyroid hormone receptor (Moriyama et al. 2002), and the thyroid hormone–dependent growth of pituitary cells (Ghisari and Bonefeld-Jorgensen 2005). Depression of serum T4 and T3 has been reported by several authors in PFOS-exposed rats (Lau et al. 2003; Luebker et al. 2005; Seacat et al. 2003). One mechanism by which PFAAs may deplete T4 is through induction of the hepatic uridine diphosphoglucuronosyl transferase (UGT) system, which is involved in hepatic metabolism of thyroid hormone and biliary clearance of T4 as T4-glucuronide (Barter and Klaassen 1994). Because PFOA is an agonist for PPARα, it is plausible that induction of hepatic UGT in PFAA-exposed rats (Yu et al. 2009) could represent a PPARα-mediated response. The involvement of another PPARα agonist, WY 14643, in enhancing the hepatic degradation of thyroid hormone has recently been shown (Weineke et al. 2009). A growing body of data describes the in vitro binding affinity of PFOA to human serum-binding proteins (Chen and Guo 2009) and to PPARα, -β, and -γ and other nuclear receptors (Vanden Heuvel et al. 2006), but the contribution of these mechanisms to PFOA’s thyroid-mediating effects in humans remains to be established. 
Many cellular and metabolic processes, including lipid metabolism, energy homeostasis, and cell differentiation, are controlled by PPARα. Early studies of the effects of PFAAs in rodents showed that a single dose lowered heart rate and body temperature and depressed T4 and T3. Replacement of T4 did not reverse the clinical symptoms of hypothermia (Gutshall et al. 1988; Langley and Pilcher 1985). Although circulating thyroid hormone levels were low, liver enzymes responsive to thyroid hormone levels were elevated, suggesting that thyroidal homeostasis was not functionally compromised. Chang et al. (2007) found that exposure to PFOS for up to 3 weeks did not affect functional thyroid status, because free T4, TSH, and various thyroid-responsive liver enzymes were all unaffected. These findings and later results have led to proposals that displacement of circulating thyroid hormones from plasma protein-binding sites and a reduced responsiveness of the HPT axis contribute significantly toward PFOA’s hypothyroid-inducing effects (Lau et al. 2007). Whatever the mechanisms involved, more research is clearly merited to clarify the pathways. The feedback mechanism by which the rate of release of TSH and the circulating levels of T3 and T4 are regulated tends to show a low level of individual variation (Feldt-Rasmussen et al. 1980). Therefore, subtle disruption of the HPT axis may have negative health consequences for the individual even while hormone levels remain within normal reference values, highlighting the importance of including both clinical and laboratory end points in such studies. The NHANES data do not allow specification of the precise type of thyroid disease present, because NHANES does not report on individual hormone levels. PFOA concentration was positively associated with free T4 and negatively associated with T3 levels in a cohort of 506 exposed workers, with a near significant association with TSH levels (Olsen et al. 
2007), although all effects were regarded as modest. The limitations of these analyses should be noted. First, we based the PFOA and PFOS measures on a single serum sample. PFOA has a half-life of approximately 4 years (Olsen et al. 2007), so a single sample is likely to represent medium-term internal dose, but samples taken at several time points might be more accurate in classifying exposure. Any misclassification from single measures would tend to decrease power and underestimate the real strengths of association. Second, the PFOA concentrations were measured at the same time as disease status, making attribution of causal direction difficult. This raises the possibility of reverse causation. One might hypothesize that after onset of thyroid disease, changes in the nature of exposure or in the pharmacokinetics of PFOA might occur [including patterns of absorption, distribution (including protein binding), or excretion]. Because the associations we report were present in people who were on thyroid hormone replacements, which effectively mimic normal thyroid function, a mechanism for reverse causation through changes in pharmacokinetics is difficult to imagine. Confounding by unmeasured factors is also possible, but it is unlikely that confounding could explain similar findings reported from some of the diverse experimental and observational studies discussed above. Post hoc association testing with other common diseases (necessarily involving multiple statistical testing) did not identify other robust associations of higher PFC concentration with increased disease prevalence, suggesting specificity of our findings for thyroid disease. An apparent association between higher PFOS concentrations and lower prevalence of COPD requires replication, to exclude a false-positive result from multiple testing. 
In addition to the limitations of our analyses, the strengths should also be noted: This is the first large-scale analysis in a nationally representative general adult population of directly measured serum concentrations of PFOA and PFOS. In addition, the associations present are strongest for the most specific identification of thyroid disease, based on reported diagnosis with current use of thyroid-specific medication. The NHANES study also supported adjustment of models for a range of potential confounding factors, which in fact made relatively minor differences to the key estimates, suggesting that the associations are robust. Further work is clearly needed to characterize the PFOA and PFOS associations with specific thyroid diagnoses and thyroid hormone levels in the general population and to clarify whether the associations reflect pathology, changes in exposure, or altered pharmacokinetics. Longitudinal analyses are also needed to establish whether high exposures predict future onsets of thyroid disease, although concurrent alteration of thyroid functioning would still be a cause for concern. Conclusions: Higher PFOA and PFOS concentrations are associated with thyroid disease (and being on thyroid-related medication) in the NHANES U.S. general adult population representative study samples. More work is needed to establish the mechanisms underlying this association and to exclude confounding and pharmacokinetic explanations.
Background: Perfluorooctanoic acid (PFOA, also known as C8) and perfluorooctane sulfonate (PFOS) are stable compounds with many industrial and consumer uses. Their persistence in the environment plus toxicity in animal models has raised concern over low-level chronic exposure effects on human health. Methods: Analyses of PFOA/PFOS versus disease status in the National Health and Nutrition Examination Survey (NHANES) for 1999–2000, 2003–2004, and 2005–2006 included 3,974 adults with measured concentrations for perfluorinated chemicals. Regression models were adjusted for age, sex, race/ethnicity, education, smoking status, body mass index, and alcohol intake. Results: The NHANES-weighted prevalence of reporting any thyroid disease was 16.18% (n = 292) in women and 3.06% (n = 69) in men; prevalence of current thyroid disease with related medication was 9.89% (n = 163) in women and 1.88% (n = 46) in men. In fully adjusted logistic models, women with PFOA ≥ 5.7 ng/mL [fourth (highest) population quartile] were more likely to report current treated thyroid disease [odds ratio (OR) = 2.24; 95% confidence interval (CI), 1.38–3.65; p = 0.002] compared with PFOA ≤ 4.0 ng/mL (quartiles 1 and 2); we found a near significant similar trend in men (OR = 2.12; 95% CI, 0.93–4.82; p = 0.073). For PFOS, in men we found a similar association for those with PFOS ≥ 36.8 ng/mL (quartile 4) versus ≤ 25.5 ng/mL (quartiles 1 and 2: OR for treated disease = 2.68; 95% CI, 1.03–6.98; p = 0.043); in women this association was not significant. Conclusions: Higher concentrations of serum PFOA and PFOS are associated with current thyroid disease in the U.S. general adult population. More work is needed to establish the mechanisms involved and to exclude confounding and pharmacokinetic explanations.
null
null
6,546
395
9
[ "disease", "thyroid", "pfoa", "nhanes", "thyroid disease", "concentrations", "pfos", "associations", "higher", "women" ]
[ "test", "test" ]
null
null
null
[CONTENT] C8 | human population | PFOA | PFOS | thyroid disease [SUMMARY]
null
null
[CONTENT] Adult | Aged | Alkanesulfonic Acids | Animals | Caprylates | Cross-Sectional Studies | Endocrine Disruptors | Environmental Exposure | Environmental Pollutants | Female | Fluorocarbons | Health Surveys | Humans | Male | Middle Aged | Risk Assessment | Thyroid Diseases | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Alkanesulfonic Acids | Animals | Caprylates | Cross-Sectional Studies | Endocrine Disruptors | Environmental Exposure | Environmental Pollutants | Female | Fluorocarbons | Health Surveys | Humans | Male | Middle Aged | Risk Assessment | Thyroid Diseases | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Alkanesulfonic Acids | Animals | Caprylates | Cross-Sectional Studies | Endocrine Disruptors | Environmental Exposure | Environmental Pollutants | Female | Fluorocarbons | Health Surveys | Humans | Male | Middle Aged | Risk Assessment | Thyroid Diseases | United States | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Alkanesulfonic Acids | Animals | Caprylates | Cross-Sectional Studies | Endocrine Disruptors | Environmental Exposure | Environmental Pollutants | Female | Fluorocarbons | Health Surveys | Humans | Male | Middle Aged | Risk Assessment | Thyroid Diseases | United States | Young Adult [SUMMARY]
null
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
null
[CONTENT] disease | thyroid | pfoa | nhanes | thyroid disease | concentrations | pfos | associations | higher | women [SUMMARY]
[CONTENT] disease | thyroid | pfoa | nhanes | thyroid disease | concentrations | pfos | associations | higher | women [SUMMARY]
[CONTENT] disease | thyroid | pfoa | nhanes | thyroid disease | concentrations | pfos | associations | higher | women [SUMMARY]
[CONTENT] disease | thyroid | pfoa | nhanes | thyroid disease | concentrations | pfos | associations | higher | women [SUMMARY]
null
null
[CONTENT] categorized | bmi | unknown | smoking | school | high school | hispanic | day | power | high [SUMMARY]
[CONTENT] men | ml | ng ml | ng | 95 | ci | women | disease | found | levels [SUMMARY]
[CONTENT] underlying association exclude confounding | associated thyroid disease thyroid | association exclude | establish mechanisms | higher pfoa pfos | higher pfoa pfos concentrations | establish mechanisms underlying | establish mechanisms underlying association | representative study samples work | representative study [SUMMARY]
[CONTENT] disease | thyroid | pfoa | nhanes | thyroid disease | associations | concentrations | higher | found | pfos [SUMMARY]
null
null
[CONTENT] PFOA | the National Health and Nutrition Examination Survey | NHANES | 1999-2000 | 2003-2004 | 2005-2006 | 3,974 ||| [SUMMARY]
[CONTENT] NHANES | 16.18% | 292 | 3.06% | 69 | 9.89% | 163 | 1.88% | 46 ||| PFOA | 5.7 ng/mL ||| fourth | 2.24 | 95% | CI | 1.38 | 0.002 | PFOA | 4.0 ng/mL | 1 | 2.12 | 95% | CI | 0.93 | 0.073 ||| PFOS | PFOS | 36.8 ng/mL | 4 | 25.5 ng/mL | 1 | 2 | 2.68 | 95% | CI | 1.03-6.98 | 0.043 [SUMMARY]
[CONTENT] PFOA | PFOS | U.S. ||| [SUMMARY]
[CONTENT] PFOA | C8 | PFOS ||| ||| PFOA | the National Health and Nutrition Examination Survey | NHANES | 1999-2000 | 2003-2004 | 2005-2006 | 3,974 ||| ||| NHANES | 16.18% | 292 | 3.06% | 69 | 9.89% | 163 | 1.88% | 46 ||| PFOA | 5.7 ng/mL ||| fourth | 2.24 | 95% | CI | 1.38 | 0.002 | PFOA | 4.0 ng/mL | 1 | 2.12 | 95% | CI | 0.93 | 0.073 ||| PFOS | PFOS | 36.8 ng/mL | 4 | 25.5 ng/mL | 1 | 2 | 2.68 | 95% | CI | 1.03-6.98 | 0.043 ||| PFOA | PFOS | U.S. ||| [SUMMARY]
null
Time-dependent risk of developing distant metastasis in breast cancer patients according to treatment, age and tumour characteristics.
24434426
Metastatic breast cancer is a severe condition without curative treatment. How relative and absolute risk of distant metastasis varies over time since diagnosis, as a function of treatment, age and tumour characteristics, has not been studied in detail.
BACKGROUND
A total of 9514 women under the age of 75 when diagnosed with breast cancer in Stockholm and Gotland regions during 1990-2006 were followed up for metastasis (mean follow-up=5.7 years). Time-dependent development of distant metastasis was analysed using flexible parametric survival models and presented as hazard ratio (HR) and cumulative risk.
METHODS
A total of 995 (10.4%) patients developed distant metastasis; the most common sites were skeleton (32.5%) and multiple sites (28.3%). Women younger than 50 years at diagnosis, with lymph node-positive, oestrogen receptor (ER)-negative, >20 mm tumours and treated only locally, had the highest risk of distant metastasis (0-5 years' cumulative risk =0.55; 95% confidence interval (CI): 0.47-0.64). Women older than 50 years at diagnosis, with ER-positive, lymph node-negative and ≤20-mm tumours, had the same and lowest cumulative risk of developing metastasis 0-5 and 5-10 years (cumulative risk=0.03; 95% CI: 0.02-0.04). In the period of 5-10 years after diagnosis, women with ER-positive, lymph node-positive and >20-mm tumours were at highest risk of distant recurrence. Women with ER-negative tumours showed a decline in risk during this period.
RESULTS
Our data show no support for discontinuation at 5 years of clinical follow-up in breast cancer patients and suggest further investigation on differential clinical follow-up for different subgroups of patients.
CONCLUSION
[ "Aged", "Antineoplastic Agents", "Breast Neoplasms", "Cohort Studies", "Female", "Humans", "Middle Aged", "Neoplasm Metastasis", "Neoplasm Recurrence, Local", "Risk", "Sweden", "Time Factors" ]
3950882
null
null
null
null
Results
The mean age of the patients was 56.4 years at diagnosis. The tumour characteristics were distributed as follows: 63.5% of patients had lymph node-negative tumours, 69% had tumours of size 20 mm or less and 82.3% had ER-positive tumours (Table 1). In our cohort, 87.9% of women underwent systemic adjuvant treatment. The overall rate of first distant metastasis was 19.4 per 1000 person–years (95% CI: 18.2–20.6). Figure 1 shows rates of first distant metastasis in relation to time since breast cancer diagnosis for different age groups of patients and tumour characteristics. Women younger than 50 years at breast cancer diagnosis, women with ER-negative tumours, positive lymph nodes and tumours larger than 20 mm, all showed a peak in the rate of first distant metastasis at about 2 years, in comparison with fairly stable rates in other subgroups of patients. In particular, women with ER-negative tumours showed a sharp decrease in the rate of first distant metastasis after 2 years since breast cancer diagnosis, whereas women with ER-positive tumours showed rather stable rates of first distant metastasis over time from about 1 year up until 10 years since breast cancer diagnosis. Table 2 shows frequency distribution of sites of first distant metastasis by time-to-first-distant metastasis subdivided into 0–2, 2–5 and 5–10 years since diagnosis. Overall, metastasis to the skeleton (32.5%) and multiple sites of metastasis (28.3%) were the most frequent presentations of distant metastasis within 10 years. No particular combination pattern of multiple sites of first distant metastasis was found in the cohort. The site distribution of first distant metastasis changed significantly (P<0.05) over time for the following sites: skeleton, CNS and liver. The proportion of first distant metastasis to the skeleton increased from 29.9% of all first distant metastasis in the first 2 years to 36.5% in the period of 5–10 years since breast cancer diagnosis. 
The proportion of CNS and liver metastasis instead decreased from 6.8% and 15.4%, respectively, in the first 2 years to 1.8% and 8.0%, respectively, in 5–10 years since breast cancer diagnosis. Between 5 and 10 years since breast cancer diagnosis, 274 (27.5%) of all first distant metastasis were diagnosed. Table 3 shows time-dependent HRs of developing first distant metastasis by age and tumour characteristics at 2, 5 and 10 years from breast cancer diagnosis. For women with positive lymph nodes, the hazard of developing metastasis was still significantly increased at 10 years after diagnosis (HR=2.6; 95% CI: 1.9–3.5) compared with women with negative lymph nodes. Women with ER-negative tumours had an increased hazard at 2 (HR=2.7; 95% CI: 2.2–3.3) and 5 (HR=1.4; 95% CI: 1.1–1.7) years from breast cancer diagnosis compared with women with ER-positive tumours; at 10 years from diagnosis, the same HR was not significantly increased anymore (HR=0.9; 95% CI: 0.6–1.4). Having a tumour larger than 20 mm at breast cancer diagnosis was still causing an increased hazard of developing first distant metastasis at 10 years (HR=1.5; 95% CI: 1.1–2.0). Table 4 shows cumulative risks of developing first distant metastasis within 5 years of diagnosis, and within 10 years surviving 5 years without metastasis, according to all different covariate patterns. Among those with an adjuvant treatment combination including CT and/or HT, the highest risk of first distant metastasis in the period of 0–5 years was found in patients of 50 years of age or less, with tumour size >20 mm, positive lymph nodes and ER-negative tumours at breast cancer diagnosis (cumulative risk =0.45; 95% CI: 0.38–0.53); whereas the lowest risk was among patients aged 51–74 years with negative lymph nodes, tumour size ⩽20 mm and with ER-positive tumours at breast cancer diagnosis (cumulative risk =0.03; 95% CI: 0.02–0.04). 
For the same two groups of patients in the period of 5–10 years following diagnosis (that is, among those who survived metastasis-free until 5 years), the risk dropped (cumulative risk=0.09; 95% CI: 0.05–0.17) for the group originally at the highest risk, whereas it remained the same (cumulative risk=0.03; 95% CI: 0.02–0.04) for the group at the lowest risk. The risk at 5–10 years following diagnosis was similar to the risk for the period of 0–5 years in all women with ER-positive tumours and negative lymph nodes, regardless of treatment, tumour size and age. This was also true for women with ER-positive tumours and positive lymph nodes, if the tumour size at diagnosis was ⩽20 mm. For all patients with ER-negative tumours instead, the risk at 5–10 years following diagnosis was significantly lower compared with the risk for the period of 0–5 years regardless of age, tumour size and nodes. Women treated with systemic adjuvant treatment (that is, CT and/or HT) always had lower risks compared to women treated with surgery only, with or without RT.
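The overall rate quoted in these results (19.4 first distant metastases per 1000 person-years, 95% CI: 18.2-20.6) is simply the number of events divided by total risk-time. A minimal sketch of that computation, assuming a Poisson normal-approximation interval (the paper does not state which interval method was used):

```python
import math

# Hedged sketch: incidence rate per 1000 person-years with an
# assumed normal-approximation 95% CI based on the Poisson SE of
# the event count. Not the authors' exact method.
def rate_per_1000(events, person_years):
    rate = 1000.0 * events / person_years
    se = 1000.0 * math.sqrt(events) / person_years  # SE of the count, rescaled
    return rate, rate - 1.96 * se, rate + 1.96 * se
```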
null
null
[ "Data source", "Study cohort", "Ethics", "Statistical analysis" ]
[ "The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.", "A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. 
Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis).", "This study has an ethical permission from the regional ethics committee at the Karolinska Institutet.", "This cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).\nRates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. 
These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer.\nFrom the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. 
Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).\nFirstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects).\nSecondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years." ]
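The statistical analysis described above post-estimates cumulative risk of metastasis in the presence of the competing event (death from other causes) from a fitted flexible parametric model, using Stata and R. As a language-neutral illustration of the underlying quantity only, here is a hedged pure-Python sketch of the non-parametric Aalen-Johansen cumulative incidence estimator; it is not the parametric approach the authors actually used:

```python
# Hedged sketch: Aalen-Johansen cumulative incidence for an event of
# interest (e.g. first distant metastasis = 1) with a competing event
# (e.g. death from other causes = 2); 0 = censored.
def cumulative_incidence(times, events, event_of_interest=1):
    """Return a list of (time, CIF(time)) at each observed time."""
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0  # all-cause survival just before the current time
    cif = 0.0
    out = []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i
        d_any = d_int = 0
        while i < n and data[i][0] == t:  # handle tied times
            if data[i][1] != 0:
                d_any += 1
                if data[i][1] == event_of_interest:
                    d_int += 1
            i += 1
        cif += surv * d_int / at_risk
        surv *= 1.0 - d_any / at_risk
        out.append((t, cif))
    return out
```

With no censoring before the events of interest, the estimator reduces to the observed proportion, which makes it easy to sanity-check on toy data.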
[ null, null, null, null ]
[ "Materials and Methods", "Data source", "Study cohort", "Ethics", "Statistical analysis", "Results", "Discussion" ]
[ " Data source The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.\nThe Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.\n Study cohort A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. 
Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. 
Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis).\nA population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. 
Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis).\n Ethics This study has an ethical permission from the regional ethics committee at the Karolinska Institutet.\nThis study has an ethical permission from the regional ethics committee at the Karolinska Institutet.\n Statistical analysis This cohort study was analysed using survival methodology with adjustment for competing risks. 
All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).\nRates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer.\nFrom the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. 
Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).\nFirstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects).\nSecondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years.\nThis cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).\nRates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. 
For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer.\nFrom the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).\nFirstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects).\nSecondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). 
We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years.", "The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.", "A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. 
Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. 
Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastases diagnosed within 2 months of the first distant metastasis diagnosis).", "This study has ethical permission from the regional ethics committee at Karolinska Institutet.", "This cohort study was analysed using survival methodology with adjustment for competing risks. All women contributed risk–time from the date of diagnosis until the date of first distant metastasis (event), or diagnosis of a second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).
Rates of first distant metastasis were calculated as the number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs), which use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as the measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale in the analysis was time since diagnosis of breast cancer.
From the FPM it is also possible to post-estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk of first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of the competing event of death due to causes other than breast cancer.
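The cumulative risks described here were post-estimated from fitted FPMs in Stata; as a simplified, model-free illustration of the same quantity, a nonparametric (Aalen–Johansen-type) cumulative incidence in the presence of competing risks can be sketched as follows. The helper name and the toy data are ours for illustration, not the authors' code:

```python
import numpy as np

def cumulative_incidence(time, status, cause=1):
    """Nonparametric cumulative incidence for one event type in the presence
    of competing risks (Aalen-Johansen-type estimator).
    status: 0 = censored, 1 = event of interest (metastasis),
            2 = competing event (death from other causes)."""
    time = np.asarray(time, dtype=float)
    status = np.asarray(status)
    surv = 1.0   # overall event-free survival just before t
    cif = 0.0
    curve = []
    for t in np.unique(time[status > 0]):
        at_risk = np.sum(time >= t)
        d_cause = np.sum((time == t) & (status == cause))
        d_any = np.sum((time == t) & (status > 0))
        cif += surv * d_cause / at_risk   # P(event-free just before t) * cause-specific hazard
        surv *= 1.0 - d_any / at_risk     # update overall event-free survival
        curve.append((float(t), cif))
    return curve

# Toy data: years to first event per woman; the metastasis at year 3 is
# weighted by the probability of still being event-free at that time.
curve = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0])
```

Unlike a naive 1 − Kaplan–Meier, the competing death at year 2 shrinks the population that can still develop metastasis, so the CIF rises by surv × d/n rather than d/n at each event time.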
We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) or systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).
Firstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow the HR to vary over follow-up (that is, time-dependent effects).
Secondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of the competing event of death due to other causes. As cumulative risk is an absolute measure, results are shown for given sets of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years, answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop a first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs for 5–10 years.", "The mean age of the patients was 56.4 years at diagnosis. The tumour characteristics were distributed as follows: 63.5% of patients had lymph node-negative tumours, 69% had tumours of size 20 mm or less and 82.3% had ER-positive tumours (Table 1). In our cohort, 87.9% of women underwent systemic adjuvant treatment.
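A rate of the kind reported below, events per 1000 person–years with a 95% CI, follows directly from the definition of rate as events divided by risk–time. A sketch using the cohort's 1189 events and an assumed, illustrative person-time total (the exact risk–time is not reported in the text):

```python
import math

events = 1189            # metastases in the analysed cohort
person_years = 61300.0   # assumed illustrative total risk-time, not from the paper

rate = 1000 * events / person_years
# 95% CI on the log scale: rate * exp(+/- 1.96 / sqrt(events))
half_width = 1.96 / math.sqrt(events)
lo, hi = rate * math.exp(-half_width), rate * math.exp(half_width)
```

With the assumed denominator this reproduces a rate close to the reported 19.4 per 1000 person–years; the log-scale interval is the usual large-sample approximation for a Poisson count.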
The overall rate of first distant metastasis was 19.4 per 1000 person–years (95% CI: 18.2–20.6).
Figure 1 shows rates of first distant metastasis in relation to time since breast cancer diagnosis for different age groups and tumour characteristics. Women younger than 50 years at breast cancer diagnosis, as well as women with ER-negative tumours, positive lymph nodes or tumours larger than 20 mm, all showed a peak in the rate of first distant metastasis at about 2 years, in contrast with fairly stable rates in other subgroups of patients. In particular, women with ER-negative tumours showed a sharp decrease in the rate of first distant metastasis beyond 2 years after breast cancer diagnosis, whereas women with ER-positive tumours showed rather stable rates over time, from about 1 year up until 10 years after breast cancer diagnosis.
Table 2 shows the frequency distribution of sites of first distant metastasis by time to first distant metastasis, subdivided into 0–2, 2–5 and 5–10 years since diagnosis. Overall, metastasis to the skeleton (32.5%) and multiple sites of metastasis (28.3%) were the most frequent presentations of distant metastasis within 10 years. No particular combination pattern of multiple sites of first distant metastasis was found in the cohort. The site distribution of first distant metastasis changed significantly (P<0.05) over time for the following sites: skeleton, CNS and liver. The proportion of first distant metastases to the skeleton increased from 29.9% of all first distant metastases in the first 2 years to 36.5% in the period of 5–10 years since breast cancer diagnosis. The proportions of CNS and liver metastases instead decreased from 6.8% and 15.4%, respectively, in the first 2 years to 1.8% and 8.0%, respectively, in 5–10 years since breast cancer diagnosis.
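The text does not specify which test produced the P<0.05 for the shift in site distribution over time; a standard contingency-table chi-square comparison of the kind usually used here can be sketched as follows, with invented counts (not the cohort's) for skeleton versus liver+CNS metastases in two follow-up windows:

```python
import numpy as np

# Invented illustrative counts of first distant metastasis by site group and
# follow-up window; rows = 0-2 years, 5-10 years; cols = skeleton, liver+CNS.
observed = np.array([[90.0, 67.0],
                     [100.0, 27.0]])

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()              # counts expected if the distribution did not change
chi2 = ((observed - expected) ** 2 / expected).sum()
# df = (2-1)*(2-1) = 1; the 5% critical value of chi-square(1) is 3.841
changed = chi2 > 3.841
```

The same construction extends to the full 3-period-by-8-site table, with df = (rows−1)(columns−1).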
Between 5 and 10 years after breast cancer diagnosis, 274 (27.5%) of all first distant metastases were diagnosed.
Table 3 shows time-dependent HRs of developing first distant metastasis by age and tumour characteristics at 2, 5 and 10 years from breast cancer diagnosis. For women with positive lymph nodes, the hazard of developing metastasis was still significantly increased at 10 years after diagnosis (HR=2.6; 95% CI: 1.9–3.5) compared with women with negative lymph nodes. Women with ER-negative tumours had an increased hazard at 2 (HR=2.7; 95% CI: 2.2–3.3) and 5 (HR=1.4; 95% CI: 1.1–1.7) years from breast cancer diagnosis compared with women with ER-positive tumours; at 10 years from diagnosis, the same HR was no longer significantly increased (HR=0.9; 95% CI: 0.6–1.4). A tumour larger than 20 mm at breast cancer diagnosis was still associated with an increased hazard of developing first distant metastasis at 10 years (HR=1.5; 95% CI: 1.1–2.0).
Table 4 shows cumulative risks of developing first distant metastasis within 5 years of diagnosis, and during 5–10 years after diagnosis conditional on surviving 5 years without metastasis, for all covariate patterns. Among those with an adjuvant treatment combination including CT and/or HT, the highest risk of first distant metastasis in the period of 0–5 years was found in patients aged 50 years or less, with tumour size >20 mm, positive lymph nodes and ER-negative tumours at breast cancer diagnosis (cumulative risk=0.45; 95% CI: 0.38–0.53); the lowest risk was among patients aged 51–74 years with negative lymph nodes, tumour size ⩽20 mm and ER-positive tumours at breast cancer diagnosis (cumulative risk=0.03; 95% CI: 0.02–0.04).
For the same two groups of patients in the period of 5–10 years following diagnosis (that is, among those who survived metastasis-free to 5 years), the risk dropped (cumulative risk=0.09; 95% CI: 0.05–0.17) for the group originally at the highest risk, whereas it remained the same (cumulative risk=0.03; 95% CI: 0.02–0.04) for the group at the lowest risk. The risk at 5–10 years following diagnosis was similar to the risk for the period of 0–5 years in all women with ER-positive tumours and negative lymph nodes, regardless of treatment, tumour size and age. This was also true for women with ER-positive tumours and positive lymph nodes, if the tumour size at diagnosis was ⩽20 mm. For all patients with ER-negative tumours, in contrast, the risk at 5–10 years following diagnosis was significantly lower than the risk for the period of 0–5 years, regardless of age, tumour size and nodes. Women treated with systemic adjuvant treatment (that is, CT and/or HT) always had lower risks compared with women treated with surgery only, with or without RT.", "We found that the site distribution of first distant metastasis changed significantly over time for metastasis to the skeleton, CNS and liver. The risk of developing distant metastasis within 10 years of breast cancer diagnosis varied significantly across subgroups of patients. The risk remained non-negligible up to 10 years after diagnosis, particularly among women with positive lymph nodes. The risk was especially high among patients with ER-negative tumours within the first 5 years of diagnosis, whereas it decreased significantly after 5 years. The risk of developing distant metastasis instead remained rather stable for most subgroups of patients with ER-positive tumours, independent of age and other tumour characteristics.
In our cohort, one-third of first distant metastases were diagnosed in the skeleton.
This proportion significantly increased over time since diagnosis, whereas the proportion of metastases to the liver and CNS significantly decreased. This seems to reflect the natural history of distant recurrences, as women with ER-positive tumours more often develop metastasis to the skeleton later during follow-up, whereas women with ER-negative tumours more often develop early liver and CNS metastases (Kennecke et al, 2010).
Interestingly, the risk of developing distant metastasis over time since diagnosis mirrors the pattern observed for breast cancer-specific mortality, previously studied in this same cohort: lymph node status and tumour size at diagnosis are significant prognosticators of distant recurrence, as well as of breast cancer death, at 10 years since breast cancer diagnosis, whereas ER status is not. These findings confirm that distant metastases still indicate a very poor prognosis in breast cancer patients (Hilsenbeck et al, 1998; Louwman et al, 2008; Colzani et al, 2011).
Cumulative risk estimates of developing distant metastasis were obtained for specific subgroups of patients while taking into account the competing risk of dying from other causes and allowing time-dependent effects (interaction with time since diagnosis) for age and tumour characteristics. The use of cumulative risk as a function of time is of relevant clinical value, as it allows a quantitative estimate of the actual probability of developing distant metastasis for any given subgroup of breast cancer patients at different time points. This measurement may help clinicians to better estimate the individual risk of developing first distant metastasis during the first 5 years as well as 5–10 years after diagnosis.
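The conditional 5–10-year risks discussed here follow from the unconditional cumulative incidence functions by a simple ratio. A minimal sketch with invented numbers, where F1 is the CIF of metastasis and F2 the CIF of the competing death (the helper and its inputs are illustrative, not the authors' estimates):

```python
def conditional_risk(f1_5, f1_10, f2_5):
    """P(first metastasis in years 5-10 | alive and metastasis-free at 5 years)."""
    event_free_at_5 = 1.0 - f1_5 - f2_5   # neither metastasis nor competing death by year 5
    return (f1_10 - f1_5) / event_free_at_5

# Invented values loosely echoing the highest-risk subgroup in Table 4.
risk = conditional_risk(f1_5=0.45, f1_10=0.50, f2_5=0.05)
```

The denominator is the probability of still being at risk at 5 years, which is why a subgroup with a large 0–5-year risk can show a much smaller conditional 5–10-year risk even when its CIF keeps rising.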
One of the main messages from this study stems from the fact that the risk of developing distant metastasis carries over significantly into the second 5 years of follow-up for most metastasis-free patients, particularly those with lymph node-positive tumours, for whom the risk at 5–10 years after diagnosis is always at least 8%. In addition, for some subgroups of patients with ER-positive tumours, the cumulative risk of developing distant metastasis in the periods of 0–5 and 5–10 years is essentially the same.
Patients with ER-positive and lymph node-negative tumours show no change and very low risk over time. This could be explained by a very good effect of treatment, or alternatively even by overtreatment, since patients undergoing local treatment show a similarly low risk. More clinical attention should, however, be given to other subgroups in which follow-up could be intensified and treatment could be improved. In particular, women with ER-positive, lymph node-positive, >20 mm tumours still have a risk of more than 10% of developing a first distant metastasis in the period of 5–10 years after diagnosis. Of note, all subgroups of patients with ER-negative tumours show a sharp, significant decrease in the risk of developing distant metastasis in the period of 5–10 years following diagnosis compared with the period of 0–5 years, independent of age and other tumour characteristics. The American Society of Clinical Oncology (ASCO) has recently concluded, in a review of the clinical practice guidelines on primary breast cancer follow-up, that there is no present need to update the current guidelines (Khatcheressian et al, 2013).
Although evidence supporting a change in current practice is rather weak (Robertson et al, 2011), following future improvements in the prevention and treatment of metastatic breast cancer, a differential follow-up of patients with ER-positive and ER-negative tumours could be considered, given their remarkably different risks of spreading.
This study has some relevant strengths. We used a large cohort of women followed for up to 10 years with accurate and complete information, enabling us to apply a comprehensive design and methodology. We analysed the risk of developing distant metastasis from many different perspectives, providing a thorough picture of the topic: the proportion of first distant metastasis at different sites, and the relative (HR) and absolute (cumulative) risk of developing metastasis at different follow-up times according to patient and tumour characteristics, taking competing risks into account and allowing the main effects to vary over time.
This study also has some limitations. The date of diagnosis of distant metastasis might be subject to the timing of clinical work-ups and the type of follow-up. In addition, the site of first distant metastasis could be affected by detection bias. As in all studies requiring long follow-up, the estimated cumulative risk of first distant metastasis might not reflect current risk, as it was observed in women diagnosed between 1995 and 1999.
In particular, adjuvant treatment has changed, and we do not know whether the same risk patterns hold in recently diagnosed patients: for instance, aromatase inhibitors have been widely used instead of tamoxifen since the early 2000s, and high-risk patients with ER-positive tumours are today offered extended endocrine therapy for up to 10 years (Burstein et al, 2010).
In conclusion, there is still a clinically relevant risk of developing a first distant metastasis from 5 to 10 years after breast cancer diagnosis in several groups of patients, especially those with positive lymph nodes at diagnosis. Patients with negative lymph nodes and ER-positive tumours, unlike those with ER-negative tumours, have a very similar, low risk of developing a first distant metastasis in the first 5 years and in the second 5 years of follow-up, independent of age, other tumour characteristics and the competing risk of dying from other causes. Upcoming improvements in metastasis prevention and treatment should elicit further research aimed at identifying specific clinical follow-up schemes for different subgroups of breast cancer patients. Five-year metastasis-free survival may no longer be an appropriate outcome measure for breast cancer patients, particularly those with ER-positive tumours." ]
[ "materials|methods", null, null, null, null, "results", "discussion" ]
[ "breast cancer", "distant metastasis", "risk", "survival analysis", "tumour characteristics", "competing risk" ]
Materials and Methods: Data source The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient. The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient. Study cohort A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers). The cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. 
Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis. Information on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis. Information on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis). 
A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers). The cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis. Information on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. 
Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis. Information on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis). Ethics This study has an ethical permission from the regional ethics committee at the Karolinska Institutet. This study has an ethical permission from the regional ethics committee at the Karolinska Institutet. Statistical analysis This cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring). Rates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). 
The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer. From the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)). Firstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects). Secondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). 
We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years. This cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring). Rates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer. From the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. 
We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)). Firstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects). Secondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years. Data source: The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. 
The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient. Study cohort: A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers). The cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis. Information on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. 
Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis. Information on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis). Ethics: This study has an ethical permission from the regional ethics committee at the Karolinska Institutet. Statistical analysis: This cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring). Rates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). 
The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as the measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer. From the fitted FPM it is also possible to estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk of a first distant metastasis within 5 years of diagnosis for various covariate patterns in the presence of the competing event of death from causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional on surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period of breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy (RT) only) and systemic treatment (any combination including either chemotherapy (CT) or hormone therapy (HT)). Firstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow the HR to vary over follow-up (that is, time-dependent effects). Secondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of the competing event of death from other causes. As cumulative risk is an absolute measure, results are shown for given sets of characteristics (covariate patterns).
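The conditional 5–10-year risk described above can be written as the difference in the cumulative incidence of metastasis between 10 and 5 years, divided by the probability of being alive and metastasis-free at 5 years. A minimal sketch with invented cumulative-incidence values (the study estimated these quantities from the fitted FPM, not from this formula applied by hand):

```python
def conditional_risk(cif_met_5, cif_met_10, cif_death_5):
    """Risk of a first distant metastasis in years 5-10, conditional on being
    alive and metastasis-free at 5 years.

    cif_met_5, cif_met_10: cumulative incidence of metastasis at 5 and 10 years
    cif_death_5: cumulative incidence of the competing event (death from other
                 causes) at 5 years
    """
    event_free_at_5 = 1.0 - cif_met_5 - cif_death_5
    return (cif_met_10 - cif_met_5) / event_free_at_5

# Illustrative (made-up) inputs: 20% metastasis and 5% competing death by
# year 5, 28% metastasis by year 10.
print(round(conditional_risk(0.20, 0.28, 0.05), 3))  # 0.107
```

The denominator is what makes the 5–10-year figure a probability for patients who actually reach the 5-year mark metastasis-free, which is the clinically relevant question posed in the text.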
We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years, answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop a first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R version 2.15.1 for calculating CIFs in the 5–10-year period. Results: The mean age of the patients was 56.4 years at diagnosis. The tumour characteristics were distributed as follows: 63.5% of patients had lymph node-negative tumours, 69% had tumours of size 20 mm or less and 82.3% had ER-positive tumours (Table 1). In our cohort, 87.9% of women underwent systemic adjuvant treatment. The overall rate of first distant metastasis was 19.4 per 1000 person–years (95% CI: 18.2–20.6). Figure 1 shows rates of first distant metastasis in relation to time since breast cancer diagnosis for different age groups of patients and tumour characteristics. Women younger than 50 years at breast cancer diagnosis, women with ER-negative tumours, positive lymph nodes and tumours larger than 20 mm all showed a peak in the rate of first distant metastasis at about 2 years, in comparison with fairly stable rates in other subgroups of patients. In particular, women with ER-negative tumours showed a sharp decrease in the rate of first distant metastasis beyond 2 years since breast cancer diagnosis, whereas women with ER-positive tumours showed rather stable rates of first distant metastasis over time from about 1 year up until 10 years since breast cancer diagnosis. Table 2 shows the frequency distribution of sites of first distant metastasis by time-to-first-distant metastasis, subdivided into 0–2, 2–5 and 5–10 years since diagnosis.
Overall, metastasis to the skeleton (32.5%) and multiple sites of metastasis (28.3%) were the most frequent presentations of distant metastasis within 10 years. No particular combination pattern of multiple sites of first distant metastasis was found in the cohort. The site distribution of first distant metastasis changed significantly (P<0.05) over time for the following sites: skeleton, CNS and liver. The proportion of first distant metastases to the skeleton increased from 29.9% of all first distant metastases in the first 2 years to 36.5% in the period of 5–10 years since breast cancer diagnosis. The proportions of CNS and liver metastases instead decreased from 6.8% and 15.4%, respectively, in the first 2 years to 1.8% and 8.0%, respectively, at 5–10 years since breast cancer diagnosis. Between 5 and 10 years since breast cancer diagnosis, 274 (27.5%) of all first distant metastases were diagnosed. Table 3 shows time-dependent HRs of developing a first distant metastasis by age and tumour characteristics at 2, 5 and 10 years from breast cancer diagnosis. For women with positive lymph nodes, the hazard of developing metastasis was still significantly increased at 10 years after diagnosis (HR=2.6; 95% CI: 1.9–3.5) compared with women with negative lymph nodes. Women with ER-negative tumours had an increased hazard at 2 (HR=2.7; 95% CI: 2.2–3.3) and 5 (HR=1.4; 95% CI: 1.1–1.7) years from breast cancer diagnosis compared with women with ER-positive tumours; at 10 years from diagnosis, the same HR was no longer significantly increased (HR=0.9; 95% CI: 0.6–1.4). Having a tumour larger than 20 mm at breast cancer diagnosis was still associated with an increased hazard of developing a first distant metastasis at 10 years (HR=1.5; 95% CI: 1.1–2.0). Table 4 shows cumulative risks of developing a first distant metastasis within 5 years of diagnosis, and within 10 years after surviving 5 years without metastasis, according to all different covariate patterns.
Among those with an adjuvant treatment combination including CT and/or HT, the highest risk of first distant metastasis in the period of 0–5 years was found in patients of 50 years of age or less, with tumour size >20 mm, positive lymph nodes and ER-negative tumours at breast cancer diagnosis (cumulative risk=0.45; 95% CI: 0.38–0.53), whereas the lowest risk was among patients aged 51–74 years with negative lymph nodes, tumour size ⩽20 mm and ER-positive tumours at breast cancer diagnosis (cumulative risk=0.03; 95% CI: 0.02–0.04). For the same two groups of patients in the period of 5–10 years following diagnosis (that is, among those who survived metastasis-free until 5 years), the risk dropped (cumulative risk=0.09; 95% CI: 0.05–0.17) for the group originally at the highest risk, whereas it remained the same (cumulative risk=0.03; 95% CI: 0.02–0.04) for the group at the lowest risk. The risk at 5–10 years following diagnosis was similar to the risk for the period of 0–5 years in all women with ER-positive tumours and negative lymph nodes, regardless of treatment, tumour size and age. This was also true for women with ER-positive tumours and positive lymph nodes, if the tumour size at diagnosis was ⩽20 mm. For all patients with ER-negative tumours, in contrast, the risk at 5–10 years following diagnosis was significantly lower than the risk for the period of 0–5 years, regardless of age, tumour size and nodes. Women treated with systemic adjuvant treatment (that is, CT and/or HT) always had lower risks compared with women treated with surgery only, with or without RT. Discussion: We found that the site distribution of first distant metastasis changed significantly over time for metastasis to the skeleton, CNS and liver. The risk of developing distant metastasis within 10 years of breast cancer diagnosis varied significantly between subgroups of patients.
The risk remained non-negligible up to 10 years after diagnosis, particularly among women with positive lymph nodes. The risk was particularly high among patients with ER-negative tumours within the first 5 years of diagnosis, whereas it decreased significantly beyond 5 years from diagnosis. In contrast, the risk of developing distant metastasis remained rather stable for most subgroups of patients with ER-positive tumours, independent of age and other tumour characteristics. In our cohort, one-third of first distant metastases were diagnosed in the skeleton. This proportion increased significantly over time since diagnosis, whereas the proportion of metastases to the liver and CNS significantly decreased. This seems to reflect the natural history of distant recurrences, as women with ER-positive tumours more often develop metastasis to the skeleton later during follow-up, whereas women with ER-negative tumours more often develop early liver and CNS metastasis (Kennecke et al, 2010). Interestingly, the risk of developing distant metastasis over time since diagnosis mirrors the pattern previously observed for breast cancer-specific mortality in this same cohort: lymph node status and tumour size at diagnosis are significant prognosticators of distant recurrence, as well as of breast cancer death, at 10 years since breast cancer diagnosis, whereas ER status is not. These findings confirm that distant metastases still indicate a very poor prognosis in breast cancer patients (Hilsenbeck et al, 1998; Louwman et al, 2008; Colzani et al, 2011). Cumulative risk estimates of developing distant metastasis were obtained for specific subgroups of patients while taking into account the competing risk of dying from other causes and allowing time-dependent effects (interactions with time since diagnosis) for age and tumour characteristics.
The use of cumulative risk as a function of time is of clinical relevance, as it allows a quantitative estimate of the actual probability of developing distant metastasis for any given subgroup of breast cancer patients at different time points. This measurement may help clinicians to better estimate the individual risk of developing a first distant metastasis during the first 5 years as well as 5–10 years after diagnosis. One of the main messages from this study stems from the fact that the risk of developing distant metastasis carries over significantly into the second 5 years of follow-up for most metastasis-free patients, particularly those with lymph node-positive tumours, where the risk at 5–10 years after diagnosis is always at least 8%. In addition, for some subgroups of patients with ER-positive tumours, the cumulative risk of developing distant metastasis in the periods of 0–5 and 5–10 years is essentially the same. Patients with ER-positive and lymph node-negative tumours show no change and a very low risk over time. This could be explained by a very good effect of treatment or, alternatively, even by overtreatment, since patients undergoing local treatment show a similarly low risk. More clinical attention should, however, be given to other subgroups in which follow-up could be intensified and treatment could be improved. In particular, women with ER-positive, lymph node-positive, >20 mm tumours still have a risk of more than 10% of developing a first distant metastasis in the period of 5–10 years after diagnosis. Of note, all subgroups of patients with ER-negative tumours show a sharp, significant decrease in the risk of developing distant metastasis in the period of 5–10 years following diagnosis compared with the period of 0–5 years, independent of age and other tumour characteristics.
The American Society of Clinical Oncology (ASCO) has recently concluded, in a review of the clinical practice guidelines for primary breast cancer follow-up, that there is no present need to update the current guidelines (Khatcheressian et al, 2013). Although evidence supporting a change of current practice is rather weak (Robertson et al, 2011), following future improvements in the prevention and treatment of metastatic breast cancer, a differential follow-up of patients with ER-positive and ER-negative tumours could be considered, given their remarkably different risks of spreading. This study has some relevant strengths. We used a large cohort of women followed for up to 10 years with accurate and complete information, enabling us to apply a comprehensive design and methodology. We analysed the risk of developing distant metastasis from many different perspectives, providing a thorough picture of the topic: the proportion of first distant metastases at different sites; the relative (HR) and absolute (cumulative) risk of developing metastasis at different follow-up times according to patient and tumour characteristics, taking competing risks into account; and allowing the main effects to vary over time. This paper also has some limitations. The date of diagnosis of distant metastasis might be subject to the timing of clinical work-ups and the type of follow-up. In addition, the site of first distant metastasis could be affected by detection bias. As in all studies requiring a long follow-up, the estimated cumulative risk of first distant metastasis might not reflect current risk as it was observed in women diagnosed between 1995 and 1999.
In particular, adjuvant treatment has changed and we do not know whether the same risk patterns are observed in recently diagnosed patients: for instance, aromatase inhibitors have been widely used instead of tamoxifen since the early 2000s, and high-risk patients with ER-positive tumours are today offered extended endocrine therapy up to 10 years (Burstein et al, 2010). In conclusion, there is still a clinically relevant risk of developing a first distant metastasis from 5 to 10 years after breast cancer diagnosis in several groups of patients, especially those with positive lymph nodes at diagnosis. Patients with negative lymph nodes and ER-positive tumours, unlike those with ER-negative tumours, have a very similar, low risk of developing a first distant metastasis in the first 5 years and in the second 5 years of follow-up, independent of age, other tumour characteristics and the competing risk of dying from other causes. Upcoming improvements in metastasis prevention and treatment should elicit further research aimed at identifying specific clinical follow-up schemes for different subgroups of breast cancer patients. Five-year metastasis-free survival may no longer be an appropriate outcome measure for breast cancer patients, particularly for those with ER-positive tumours.
Background: Metastatic breast cancer is a severe condition without curative treatment. How the relative and absolute risk of distant metastasis varies over time since diagnosis, as a function of treatment, age and tumour characteristics, has not been studied in detail. Methods: A total of 9514 women under the age of 75 when diagnosed with breast cancer in the Stockholm and Gotland regions during 1990-2006 were followed up for metastasis (mean follow-up=5.7 years). Time-dependent development of distant metastasis was analysed using flexible parametric survival models and presented as hazard ratios (HRs) and cumulative risk. Results: A total of 995 (10.4%) patients developed distant metastasis; the most common sites were the skeleton (32.5%) and multiple sites (28.3%). Women younger than 50 years at diagnosis, with lymph node-positive, oestrogen receptor (ER)-negative, >20 mm tumours and treated only locally, had the highest risk of distant metastasis (0-5 years' cumulative risk=0.55; 95% confidence interval (CI): 0.47-0.64). Women older than 50 years at diagnosis, with ER-positive, lymph node-negative and ≤20-mm tumours, had the same, and lowest, cumulative risk of developing metastasis at 0-5 and 5-10 years (cumulative risk=0.03; 95% CI: 0.02-0.04). In the period of 5-10 years after diagnosis, women with ER-positive, lymph node-positive and >20-mm tumours were at the highest risk of distant recurrence. Women with ER-negative tumours showed a decline in risk during this period. Conclusions: Our data show no support for discontinuing clinical follow-up at 5 years in breast cancer patients and suggest further investigation of differential clinical follow-up for different subgroups of patients.
null
null
5,902
349
7
[ "metastasis", "diagnosis", "years", "distant", "cancer", "distant metastasis", "breast cancer", "breast", "risk", "10" ]
[ "test", "test" ]
null
null
null
null
null
null
[CONTENT] breast cancer | distant metastasis | risk | survival analysis | tumour characteristics | competing risk [SUMMARY]
null
null
null
null
[CONTENT] Aged | Antineoplastic Agents | Breast Neoplasms | Cohort Studies | Female | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Recurrence, Local | Risk | Sweden | Time Factors [SUMMARY]
null
null
null
null
[CONTENT] test | test [SUMMARY]
null
null
null
null
[CONTENT] metastasis | diagnosis | years | distant | cancer | distant metastasis | breast cancer | breast | risk | 10 [SUMMARY]
null
null
null
null
[CONTENT] years | metastasis | tumours | diagnosis | 95 ci | ci | distant | distant metastasis | risk | 10 [SUMMARY]
null
[CONTENT] metastasis | diagnosis | years | cancer | distant | distant metastasis | breast | breast cancer | risk | patients [SUMMARY]
null
null
null
[CONTENT] 995 | 10.4% | 32.5% | 28.3% ||| 50 years | 20 mm | 0-5 years' | 0.55 | 95% | CI | 0.47 ||| older than 50 years | ER | 0 | 5-10 years | 95% | CI | 0.02-0.04 ||| 5-10 years | ER | 20-mm ||| ER [SUMMARY]
null
[CONTENT] ||| ||| 9514 | the age of 75 | Stockholm | Gotland | 1990-2006 | years ||| ||| 995 | 10.4% | 32.5% | 28.3% ||| 50 years | 20 mm | 0-5 years' | 0.55 | 95% | CI | 0.47 ||| older than 50 years | ER | 0 | 5-10 years | 95% | CI | 0.02-0.04 ||| 5-10 years | ER | 20-mm ||| ER ||| 5 years [SUMMARY]
null
Fragile X protein in newborn dried blood spots.
25348928
The fragile X syndrome (FXS) results from mutation of the FMR1 gene that prevents expression of its gene product, FMRP. We previously characterized 215 dried blood spots (DBS) representing different FMR1 genotypes and ages with a Luminex-based immunoassay (qFMRP). We found variable FMRP levels in the normal samples and identified affected males by the drastic reduction of FMRP.
BACKGROUND
Here, to establish the variability of expression of FMRP in a larger random population we quantified FMRP in 2,000 anonymous fresh newborn DBS. We also evaluated the effect of long term storage on qFMRP by retrospectively assaying 74 aged newborn DBS that had been stored for 7-84 months that included normal and full mutation individuals. These analyses were performed on 3 mm DBS disks. To identify the alleles associated with the lowest FMRP levels in the fresh DBS, we analyzed the DNA in the samples that were more than two standard deviations below the mean.
METHODS
Analysis of the fresh newborn DBS revealed a broad distribution of FMRP, with a mean approximately 7-fold higher than the one we previously reported for fresh DBS in normal adults, and no samples whose FMRP level indicated FXS. DNA analysis of the lowest-FMRP DBS showed that these represented the low extreme of the normal range and included a female carrying a 165 CGG repeat premutation. In the retrospective study of aged newborn DBS, the FMRP mean of the normal samples was less than 30% of the mean of the fresh DBS. Despite the degraded signal from these aged DBS, qFMRP identified the FXS individuals.
RESULTS
The assay showed that newborn DBS contain high levels of FMRP that will allow identification of males and potentially females, affected by FXS. The assay is also an effective screening tool for aged DBS stored for up to four years.
CONCLUSIONS
[ "Blood Preservation", "Dried Blood Spot Testing", "Female", "Fragile X Mental Retardation Protein", "Fragile X Syndrome", "Humans", "Infant, Newborn", "Male", "Retrospective Studies", "Time Factors" ]
4412103
Background
Fragile X syndrome, the most common inherited cause of intellectual disability, results from mutation of the FMR1 gene on the X chromosome that disrupts expression of the fragile X mental retardation protein (FMRP) [1,2]. Although there are very rare cases in which the fragile X syndrome is due to point mutation or deletion of the FMR1 gene [3–7], the most common fragile X mutation eliminates FMRP expression through expansion of a CGG repeat in the 5’-untranslated region of FMR1 to more than 200 triplets (the full mutation). Although the fragile X syndrome results directly from the absence of functional FMRP, diagnosis of this syndrome is based on allele size and detection of the full mutation expansion in genomic DNA. FMR1 alleles are sorted into four size categories based on their stability upon transmission: normal (up to 44 CGG repeats); intermediate (45-54 repeats); premutation (55-200 repeats); and full mutation (>200 repeats). In the North American population the full-mutation allele (>200 repeats), which is exclusively maternally inherited, has an approximate prevalence of 1 in 4,000, while the premutation is much more common with a prevalence of 1 in 151 females and 1 in 468 males [8]. The premutation can expand to the full mutation when transmitted from mother to offspring and the likelihood of this expansion is dependent on the length and structure of the allele. The majority of premutation alleles are less than 90 triplet repeats in length. FMRP expression is reduced as triplet repeat length increases from the normal range and is eliminated by a process analogous to X inactivation when the repeat exceeds approximately 200 copies [9,10]. Since males carry only a single X chromosome, those with a full mutation allele develop the fragile X syndrome due to the absence of FMRP. Approximately 40% of males with the full mutation have some somatic cells with smaller alleles from triplet repeat contractions during early embryogenesis.
These alleles express FMRP but only rarely does this mosaic expression ameliorate the syndrome. Contractions are also presumably responsible for the premutation size alleles in the sperm of full mutation males [11]. Large premutation alleles (approximately 150-200 repeats) are associated with intellectual impairment due presumably to reduced FMRP expression. Females carry two X chromosomes and two copies of the FMR1 gene, only one of which is expressed in any particular somatic cell after X inactivation during early embryogenesis. When FMRP expression is reduced because one FMR1 allele in a female carries the full mutation, the degree of impairment can range from undetectable to severe. This variability is due to variation in random X inactivation and the resulting mosaic distribution of somatic cells in which FMRP is expressed. Since the full mutation must be maternally inherited, homozygous full mutation females do not occur. As in males, mosaicism for smaller alleles occurs in females but it is more rarely observed. We recently reported the development of a simple, accurate, and inexpensive capture immunoassay that determines the level of FMRP in dried blood spots (DBS) as well as in lymphocytes and other tissues [12]. Our initial study of FMRP in DBS from 215 individuals with normal, premutation, and full-mutation FMR1 alleles was designed to characterize the FMRP expression of different FMR1 genotypes. In samples from normal individuals we found a broad distribution of FMRP. The level of the protein declined with age from infants to preteens. It leveled off in teenage years and remained unchanged through adulthood, with no difference between males and females. We used these DBS samples to evaluate how effectively this assay distinguished affected from unaffected individuals. 
While the assay readily identified affected (full-mutation) males with sensitivity and specificity approaching 100%, we needed to test a larger set of random population samples to establish the FMRP variability detected by the assay. Since residual DBS from state-mandated newborn screening for metabolic and genetic diseases are available for research and represent a uniform age, we decided to use 2,000 randomly selected anonymous fresh newborn DBS to characterize the variability of FMRP expression in the newborn population, which is a potential target for screening with this assay. Considering the estimated prevalence of fragile X, it was relatively unlikely that we would find any affected individuals (i.e. those with virtually no FMRP) among the 1,000 males and 1,000 females sampled. We were primarily interested, however, in the variability of FMRP expression, especially the low extreme of normal expression. To evaluate the effect of long term storage of newborn DBS on the qFMRP assay, we conducted a retrospective study of 74 aged newborn DBS that had been stored for up to 7 years. These DBS were from a different newborn screening program and included samples from 6 affected (full mutation) males.
null
null
Results
NY State newborn DBS samples: We applied the qFMRP immunoassay to 1,000 male and 1,000 female newborn DBS that had been stored for only five weeks (fresh DBS). FMRP concentration (pM) in each sample (3-mm-diameter disk eluate) was calculated by comparison to dilutions of GST-SR7 as previously described [12]. The results showed a variable expression of FMRP ranging from 10.3 to 92 pM, an average FMRP of 44.8 pM, and a SD of 12.4 pM (Figure 1). Comparison of the FMRP levels obtained in this study to those reported previously [12] for normal adults using larger DBS disks (6.9-mm-diameter, 37.4 mm2) revealed that the average FMRP in newborns was approximately seven-fold higher (6.3 pM eluted per mm2) than in adults (0.93 pM eluted per mm2). None of the 1,000 male or 1,000 female random newborn DBS lacked FMRP or had the extremely low level that would indicate the fragile X syndrome [12].

Figure 1: Distribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175.

To examine the FMR1 genotypes of the samples at the low extreme of the FMRP distribution, we extracted genomic DNA from duplicate DBS of the 14 samples whose level of FMRP was more than two standard deviation units below the newborn population mean and determined the size of the CGG repeat in the FMR1 alleles that were present (Table 1). The alleles detected in the 14 DBS samples (0.7%) that met this criterion are shown in Table 1. Thirteen of the samples showed an assortment of normal FMR1 CGG repeat alleles that reflects the allele distribution in the normal population, which has a mode of 30 repeats.
One sample from a female at the extreme low end of the FMRP distribution (Table 1, Sample 2) showed a normal allele of 30 repeats and a large premutation that appeared as two alleles of approximately 161 and 167 repeats (Figure 2). Methylation analysis, informed by two HpaII sites on either side of the CGG repeat [15,16], showed that the normal, 30 repeat, allele was highly resistant to Hpa II digestion, which indicated that it was highly (~90%) methylated (Figure 2). This implied highly skewed X inactivation in which the normal allele resided on the inactive X chromosome in most white blood cells in the newborn blood sample, while the premutation resided on the active chromosome in most of these cells.

Table 1: NY State newborn DBS with lowest FMRP levels

Sample  FMRP (pM)  Z#    Sex  Allele 1  Allele 2  Allele 3
1       10.3       -2.8  f    32        44
2       10.4       -2.8  f    30        161       167
3       11.3       -2.7  m    20
4       13.2       -2.5  m    31
5       16.6       -2.3  m    30
6       16.9       -2.2  m    30
7       17.5       -2.2  m    29
8       17.8       -2.2  f    22        23
9       18.6       -2.1  m    29
10      19.0       -2.1  f    30        33
11      19.2       -2.1  m    29
12      19.2       -2.1  m    30
13      19.6       -2.0  m    30
14      19.7       -2.0  m    30

Mean* 44.8; SD* 12.4. *Mean and standard deviation of all 2,000 samples. #Difference from mean (in standard deviation units). Allele sizes are in CGG repeat number.

Figure 2: PCR analysis of DBS sample 2 in Table 1, a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis. Arrows indicate alleles of 30, 161 and 167 CGG repeats. The 161 and 167 repeat alleles (arrows at right) represent a premutation allele. (Somatic mosaicism is presumably responsible for the bifurcation.) B: Methylation analysis reference (no Hpa II digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of Hpa II-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles and the analysis suggests that the larger, 167 repeat allele had a lower level of methylation than the smaller, 161 repeat allele.
The 2 PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis. NY State newborn DBS with lowest FMRP levels *Mean and standard deviation of all 2,000 samples. #Difference from mean (in standard deviation units). Allele sizes are in CGG repeat number. PCR analysis of DBS sample 2 in Table 1 , a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis. Arrows indicate alleles of 30, 161 and 167 CGG repeats. The 161 and 167 repeat alleles (arrows at right) represent a premutation allele. (Somatic mosaicism is presumably responsible for the bifurcation). B: Methylation analysis reference (no Hpa II digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of Hpa II-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles and the analysis suggests that the larger, 167 repeat had a lower level of methylation than the smaller, 161 repeat allele. The 2 PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis. We applied the qFMRP immunoassay to 1,000 male and 1,000 female newborn DBS that had been stored for only five weeks (fresh DBS). FMRP concentration (pM) in each sample (3-mm-diameter disk eluate) was calculated by comparison to dilutions of GST-SR7 as previously described [12]. The results showed a variable expression of FMRP ranging from 10.3- to 92-pM, an average FMRP of 44.8 pM, and a SD of 12.4 pM (Figure 1). Comparison of the FMRP levels obtained in this study to those reported previously [12] for normal adults using larger DBS disks (6.9-mm-diameter, 37.4 mm2) revealed that, the average FMRP in newborn was approximately seven-fold higher (6.3 pM eluted per mm2) than in adults (0.93 pM eluted per mm2). 
None of the 1,000 male or 1,000 female random newborn DBS lacked FMRP or had the extremely low level that would indicate the fragile X syndrome [12].Figure 1 Distribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175. Distribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175. To examine the FMR1 genotypes of the samples at the low extreme of the FMRP distribution, we extracted genomic DNA from duplicate DBS of the 14 samples whose level of FMRP was more than two standard deviation units below the newborn population mean and determined the size of the CGG repeat in the FMR1 alleles that were present (Table 1). The alleles detected in the 14 DBS samples (0.7%) that met this criterion are shown in Table 1. Thirteen of the samples showed an assortment of normal FMR1 CGG repeat alleles that reflects the allele distribution in the normal population which has a mode of 30 repeats. One sample from a female at the extreme low end of the FMRP distribution (Table 1, Sample 2) showed a normal allele of 30 repeats and a large premutation that appeared as two alleles of approximately 161 and 167 repeats (Figure 2). Methylation analysis, informed by two HpaII sites on either side of the CGG repeat [15,16] showed that the normal, 30 repeat, allele was highly resistant to Hpa II digestion which indicated that it was highly (~90%) methylated (Figure 2). 
This implied highly skewed X inactivation in which the normal allele resided on the inactive X chromosome in most white blood cells in the newborn blood sample, while the premutation resided on the active chromosome in most of these cells.

Table 1: NY State newborn DBS with lowest FMRP levels

Sample   FMRP (pM)   Z#     Sex   Allele 1   Allele 2   Allele 3
1        10.3        −2.8   f     32         44
2        10.4        −2.8   f     30         161        167
3        11.3        −2.7   m     20
4        13.2        −2.5   m     31
5        16.6        −2.3   m     30
6        16.9        −2.2   m     30
7        17.5        −2.2   m     29
8        17.8        −2.2   f     22         23
9        18.6        −2.1   m     29
10       19.0        −2.1   f     30         33
11       19.2        −2.1   m     29
12       19.2        −2.1   m     30
13       19.6        −2.0   m     30
14       19.7        −2.0   m     30

Mean* 44.8; SD* 12.4. *Mean and standard deviation of all 2,000 samples. #Difference from mean (in standard deviation units). Allele sizes are in CGG repeat number.

Figure 2: PCR analysis of DBS sample 2 in Table 1, a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis. Arrows indicate alleles of 30, 161 and 167 CGG repeats. The 161 and 167 repeat alleles (arrows at right) represent a premutation allele; somatic mosaicism is presumably responsible for the bifurcation. B: Methylation analysis reference (no HpaII digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of HpaII-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles, and the analysis suggests that the larger, 167-repeat allele had a lower level of methylation than the smaller, 161-repeat allele. The 2 PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis.

New South Wales newborn DBS samples

We also applied the qFMRP assay in a retrospective study of 74 newborn DBS that had been stored for an extended period and included 6 full mutation males as well as 68 normal individuals. This analysis was performed in a blinded manner to correlate the FMRP levels detected in the aged newborn DBS with the diagnoses of the fragile X syndrome made later by the GOLD Service Hunter Genetics (Newcastle, Australia), after the phenotype appeared. The storage time for the full mutation and normal DBS ranged from 19 to 79 months and from 7 to 84 months, respectively. Table 2 shows the results for the 9 aged newborn DBS with the lowest levels of FMRP. The FMRP level of the 6 full mutation males in this sample set was indistinguishable from background and more than 20-fold lower than the average normal level in this aged sample set. Analysis of the normal controls indicated that the amount of detectable FMRP in the DBS had declined significantly with extended storage (Figure 3), which shifted the FMRP distribution toward zero (Figure 4). The only normal sample that approached (0.77 pM) the level of the fragile X syndrome samples had been stored for seven years.
Despite the loss of detectable FMRP with DBS storage time, the qFMRP assay identified all of the affected (full mutation) males. Because the decline of detectable FMRP in normal control DBS could lead to false positives, the Mann-Whitney analysis and the boxplot shown in Figure 5 excluded samples that had been stored for more than 47 months. Samples in this subset showed a significant separation between normal (male and female) and full mutation males (P ≤ 0.001, U-test).

Table 2: Australian newborn DBS with lowest FMRP levels

Genotype   Sex   FMRP (pM)   zFMRP#   Storage (mo)
full       m     0.11        −1.9     47
full       m     0.14        −1.9     19
full       m     0.35        −1.9     73
full       m     0.53        −1.8     38
full       m     0.58        −1.8     79
full       m     0.64        −1.8     32
nl         f     0.77        −1.8     84
nl         m     1.45        −1.7     48
nl         f     2.32        −1.6     48

Mean* (m + f) 12.5; SD* 6.5. *Mean and standard deviation of control normal samples (n = 59). #Difference from mean (in standard deviation units). nl: normal; full: full mutation.

Figure 3: Decline in detectable FMRP with DBS storage time. Samples from normal individuals are plotted according to duration of storage in months. Best-fit trend line: y = 21.903e^(−0.028x); R² = 0.5509.

Figure 4: Distribution of FMRP levels in 59 newborn DBS from normal controls in the New South Wales archive. The mean FMRP value was 12.5 ± 6.5 pM. Storage time for this sample set was ≤47 months.

Figure 5: Boxplot of FMRP values in 63 newborn DBS from the New South Wales archive. Box-plot data are expressed as 25th to 75th percentile, median, and whiskers to 10th and 90th percentiles, with outliers shown as circles. *P ≤ 0.001, Mann-Whitney U-test. Storage time for this sample set was ≤47 months.
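The storage-time trend and the group comparison can both be reproduced in outline. The control values below are simulated from the reported trend line y = 21.903e^(−0.028x), so the fitted half-life lands near ln 2 / 0.028 ≈ 25 months; the six full-mutation values are taken from Table 2, and everything else is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
months = rng.uniform(7, 84, size=59)
# Simulated normal-control FMRP (pM) following the reported decay trend,
# with multiplicative noise standing in for assay variability.
fmrp = 21.903 * np.exp(-0.028 * months) * rng.normal(1.0, 0.10, size=59)

# Log-linear least squares: ln(FMRP) = ln(a) - k*t
slope, intercept = np.polyfit(months, np.log(fmrp), 1)
k = -slope
half_life = np.log(2) / k  # months of storage that halve detectable FMRP

# Separation check on the <=47-month subset: count (control, full-mutation)
# pairs where the control is higher -- this count is the Mann-Whitney U.
controls = fmrp[months <= 47]
full_mutation = np.array([0.11, 0.14, 0.35, 0.53, 0.58, 0.64])  # Table 2, pM
u = (controls[:, None] > full_mutation[None, :]).sum()
print(round(half_life, 1), u == controls.size * full_mutation.size)
```

When U equals the product of the two group sizes, every control exceeds every full-mutation value (complete separation), which is what the study reports for the ≤47-month subset.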
Conclusion
Our data show for the first time that it is feasible to measure FMRP in the 3-mm-diameter newborn DBS disks used in mandatory screening for metabolic and hereditary diseases. Accurate measurement of FMRP in these small disks is aided by the relatively high levels of the protein in neonates, which could be due in part to the high leukocyte count [17,18]. The level of FMRP in newborns is variable and is, on average, seven times higher than that detected in adults. Even though the levels of FMRP detected in DBS decrease with storage time, the qFMRP assay allowed us to identify all affected males in the set of aged DBS from normal and fragile X individuals. Our data suggest that the qFMRP assay could serve as the initial step in a fragile X newborn screening program. In a second screening step, characterization of the CGG repeat size and/or methylation status of the FMR1 alleles in DBS at the low extreme of the FMRP distribution would indicate the presence of the fragile X syndrome. The correlation between highly reduced or absent FMRP expression and the fragile X syndrome is firmly established for males. This correlation is likely to apply to females as well, but further studies will be necessary to firmly establish this link. Fragile X screening by qFMRP has distinct advantages over techniques that detect a CGG expansion, since it avoids the ethical issues associated with identification of asymptomatic premutation carrier infants and is consistent with current newborn screening technology and cost parameters.
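The two-step program proposed here, qFMRP on every newborn followed by CGG sizing and methylation analysis only for the low-FMRP tail, could be organized as the following decision sketch. The cutoff, field names, and records are illustrative assumptions, not protocol values:

```python
LOW_FMRP_Z = -2.0          # assumed first-tier cutoff, in SD units
FULL_MUTATION_REPEATS = 200  # standard full-mutation boundary

def screen_newborn(sample: dict, mean_pm: float, sd_pm: float) -> str:
    """First tier: FMRP z-score on the DBS eluate; second tier: FMR1 CGG
    sizing on a duplicate DBS, run only for flagged samples."""
    z = (sample["fmrp_pm"] - mean_pm) / sd_pm
    if z >= LOW_FMRP_Z:
        return "screen-negative"
    if sample["cgg_repeats"] > FULL_MUTATION_REPEATS:
        return "refer: probable fragile X (full mutation)"
    return "screen-negative (normal or premutation allele)"

# A hypothetical low-FMRP newborn with an expanded allele, using the
# fresh-DBS population parameters (mean 44.8 pM, SD 12.4 pM):
print(screen_newborn({"fmrp_pm": 0.4, "cgg_repeats": 320}, 44.8, 12.4))
```

Note that because the second tier only runs on the low-FMRP tail, asymptomatic premutation carriers with normal FMRP levels are never genotyped, which is the ethical advantage argued above.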
[ "Newborn DBS from the Wadsworth center in New York State", "Newborn DBS from New South Wales (Australia) newborn screening program", "Elution of FMRP from DBS", "qFMRP assay procedure", "DNA studies", "Statistical analysis", "NY State newborn DBS samples", "New South Wales newborn DBS samples" ]
[ "The DBS disks from 1,000 males and 1,000 females were recent (5 weeks old) and stored with a desiccant in a refrigerator at 2-8°C. They were received (in duplicate) in sealed 96-well plates with no identifying information except gender.", "The seventy-four disks from DBS that had been stored for 7 to 84 months included samples from 6 males diagnosed with the fragile X syndrome and 68 normal individuals. The samples from 6 affected males had been stored from 19 to 79 months at low humidity and 22-28°C for 12 months and then at 20°C before FMRP quantification. Those from 68 normal controls had been stored from 7 to 84 months. Information about phenotype, gender, and age of DBS was not known by the researchers performing qFMRP analysis until completion of the study.", "Each 3-mm-diameter disk was placed into a well of a Low Protein Binding Durapore R Multiscreen 96-well filter plate (Millipore, Billerica, MA) and protein was eluted in 50 uL of extraction reagent: M-PER mammalian protein extraction reagent (Thermo Fisher Scientific, Rockford, IL) containing 150 mmol/L NaCl, 10 ug/mL chymostatin, 10 ug/mL antipain, and 1× protease inhibitor cocktail (Complete mini tablets, EDTA free; Roche Applied Science, Indianapolis, IN) by shaking for 3 hr with agitation at room temperature. Eluates were collected by centrifugation at 4°C into a 96-well catch plate for 5 minutes at 1258 × g [12].", "FMRP assays were performed with the anti-FMRP mouse monoclonal antibody mAb6B8 (MMS-5231, Covance Inc., Dedham, MA) and the anti-FMRP rabbit polyclonal antibody R477 [12]. These antibodies are highly specific and each recognizes a different epitope of the protein [12]. A GST fusion protein, GST-SR7 carrying an abbreviated sequence of FMRP that includes the epitopes of mAb6B8 and R477 was used as standard [12]. The immunoassays were performed as previously described [12]. 
Briefly, 50 uL DBS eluate was incubated for 6 hr with 3,000 xMAP-MicroPlex microspheres (Luminex, Austin, TX) coupled to anti-FMRP mAb6B8. The microspheres were then washed and incubated overnight with rabbit antibody, R477 that was subsequently labeled for 2 hr by phycoerythrin-conjugated goat anti-rabbit IgG. FMRP was quantified with a Luminex 200 system. Dilutions of GST-SR7 were used to generate an FMRP standard concentration curve for each 96-well plate analyzed. The amount of FMRP in the DBS was reported as concentration (pM) in the 50 uL DBS eluate.", "DNA was eluted from approximately ½ of a duplicate 3mm (~3.5 mm2) disk by the following modification of a published procedure [13].The DBS disk was initially washed in 1 mL of SSPE (0.15 M NaCl, 0.01M NaH2PO4 pH 7.0, and 0.001M EDTA) containing 0.1% Tween 80 (Sigma-Aldrich, St Louis, MO 63103 USA) for 10 minutes at room temperature and the half disk was transferred to 100 μL of 5% Chelex (BioRad, Hercules, CA 94547 USA) in H2O, incubated for 30 min at 60°C and then for 30 min at 100°C. The liquid phase was separated from the Chelex and brought to 1 mM EDTA. Two microliters of this eluate served as template for polymerase chain reaction (PCR) amplification of the FMR1 CGG repeat region with the AmplideX® FMR1 PCR (RUO) reagents according to the manufacturer’s directions (Asuragen, Austin, TX 78744 USA). PCR products were analyzed by capillary electrophoresis (ABI 3130 Genetic Analyzer, Applied Biosystems, Foster City, CA) [14]. This eluate was also used for analysis of DNA methylation in the FMR1 CGG repeat region with the Amplidex® FMR1 mPCR kit according to the manufacturer’s directions (Asuragen, Austin, TX 78744 USA).", "Data were analyzed with either IBM SPSS Statistics (IBM, Armonk, NY) or SigmaPlot (Systat Software, San Jose, CA) software.", "We applied the qFMRP immunoassay to 1,000 male and 1,000 female newborn DBS that had been stored for only five weeks (fresh DBS). 
FMRP concentration (pM) in each sample (3-mm-diameter disk eluate) was calculated by comparison to dilutions of GST-SR7 as previously described [12]. The results showed a variable expression of FMRP ranging from 10.3 to 92 pM, an average FMRP of 44.8 pM, and a SD of 12.4 pM (Figure 1). Comparison of the FMRP levels obtained in this study to those reported previously [12] for normal adults using larger DBS disks (6.9-mm-diameter, 37.4 mm2) revealed that the average FMRP in newborns was approximately seven-fold higher (6.3 pM eluted per mm2) than in adults (0.93 pM eluted per mm2). None of the 1,000 male or 1,000 female random newborn DBS lacked FMRP or had the extremely low level that would indicate the fragile X syndrome [12].Figure 1\nDistribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175.\nTo examine the FMR1 genotypes of the samples at the low extreme of the FMRP distribution, we extracted genomic DNA from duplicate DBS of the 14 samples whose level of FMRP was more than two standard deviation units below the newborn population mean and determined the size of the CGG repeat in the FMR1 alleles that were present (Table 1). The alleles detected in the 14 DBS samples (0.7%) that met this criterion are shown in Table 1. Thirteen of the samples showed an assortment of normal FMR1 CGG repeat alleles that reflects the allele distribution in the normal population which has a mode of 30 repeats. One sample from a female at the extreme low end of the FMRP distribution (Table 1, Sample 2) showed a normal allele of 30 repeats and a large premutation that appeared as two alleles of approximately 161 and 167 repeats (Figure 2). 
Methylation analysis, informed by two HpaII sites on either side of the CGG repeat [15,16] showed that the normal, 30 repeat, allele was highly resistant to Hpa II digestion which indicated that it was highly (~90%) methylated (Figure 2). This implied highly skewed X inactivation in which the normal allele resided on the inactive X chromosome in most white blood cells in the newborn blood sample while the premutation resided on the active chromosome in most of these cells.\nTable 1: NY State newborn DBS with lowest FMRP levels\nSample | FMRP (pM) | Z# | Sex | Allele 1 | Allele 2 | Allele 3\n1 | 10.3 | −2.8 | f | 32 | 44\n2 | 10.4 | −2.8 | f | 30 | 161 | 167\n3 | 11.3 | −2.7 | m | 20\n4 | 13.2 | −2.5 | m | 31\n5 | 16.6 | −2.3 | m | 30\n6 | 16.9 | −2.2 | m | 30\n7 | 17.5 | −2.2 | m | 29\n8 | 17.8 | −2.2 | f | 22 | 23\n9 | 18.6 | −2.1 | m | 29\n10 | 19.0 | −2.1 | f | 30 | 33\n11 | 19.2 | −2.1 | m | 29\n12 | 19.2 | −2.1 | m | 30\n13 | 19.6 | −2.0 | m | 30\n14 | 19.7 | −2.0 | m | 30\nMean* 44.8; SD* 12.4. *Mean and standard deviation of all 2,000 samples. #Difference from mean (in standard deviation units). Allele sizes are in CGG repeat number.\nFigure 2: PCR analysis of DBS sample 2 in Table 1, a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis. Arrows indicate alleles of 30, 161 and 167 CGG repeats. The 161 and 167 repeat alleles (arrows at right) represent a premutation allele. (Somatic mosaicism is presumably responsible for the bifurcation). B: Methylation analysis reference (no Hpa II digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of Hpa II-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles and the analysis suggests that the larger, 167 repeat had a lower level of methylation than the smaller, 161 repeat allele. 
The 2 PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis.", "We also applied the qFMRP assay in a retrospective study of 74 newborn DBS that had been stored for an extended period and included 6 full mutation males as well as 68 normal individuals. This analysis was performed in a blinded manner to correlate the FMRP levels detected in the aged newborn DBS with the diagnoses of the fragile X syndrome that had been made later by the GOLD Service Hunter Genetics (Newcastle, Australia), after the phenotype appeared. The storage time for the full mutation and normal DBS ranged from 19 to 79 months and from 7 to 84 months, respectively. Table 2 shows the results for the 9 aged newborn DBS with the lowest levels of FMRP. 
The FMRP level of the 6 full mutation males in this sample set was indistinguishable from background and more than 20 fold lower than the average normal level in this aged sample set. Analysis of the normal controls in this study indicated that the amount of detectable FMRP in the DBS had declined significantly with extended storage (Figure 3) which shifted the FMRP distribution toward zero (Figure 4). The only normal sample that approached (0.77 pM) the level of the fragile X syndrome samples had been stored for seven years. Despite the loss of detectable FMRP with DBS storage time, the qFMRP assay identified all of the affected (full mutation) males. Because of the decline of detectable FMRP in normal control DBS that could lead to false positives, the Mann-Whitney analysis and the boxplot of these results shown in Figure 5 excluded samples that had been stored for more than 47 months. Samples in this subset showed a significant separation between normal (male and female) and full mutation males (P ≤0.001, U-test).\nTable 2: Australian newborn DBS with lowest FMRP levels\nGenotype | Sex | FMRP (pM) | zFMRP# | Storage (mo)\nfull | m | 0.11 | −1.9 | 47\nfull | m | 0.14 | −1.9 | 19\nfull | m | 0.35 | −1.9 | 73\nfull | m | 0.53 | −1.8 | 38\nfull | m | 0.58 | −1.8 | 79\nfull | m | 0.64 | −1.8 | 32\nnl | f | 0.77 | −1.8 | 84\nnl | m | 1.45 | −1.7 | 48\nnl | f | 2.32 | −1.6 | 48\nMean* (m + f) 12.5; SD* 6.5. *Mean and standard deviation of control normal samples (n = 59). #Difference from mean (in standard deviation units). nl: normal, full: full mutation.\nFigure 3: Decline in detectable FMRP with DBS storage time. Samples from normal individuals are plotted according to duration of storage in months. Best-fit trend line: y = 21.903e^(−0.028x); R² = 0.5509.\nFigure 4: Distribution of FMRP levels in 59 newborn DBS from normal controls in the New South Wales archive. The mean FMRP value was 12.5 ± 6.5. Storage time for this sample set was ≤47 months.\nFigure 5: Boxplot of FMRP values in 63 newborn DBS from New South Wales archive. 
Box-plot data are expressed as 25th to 75th percentile, median, and whiskers to 10th and 90th percentiles with outliers shown as circles. *P ≤ 0.001, Mann-Whitney U-test. Storage time for this sample set was ≤47 months." ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Newborn DBS from the Wadsworth center in New York State", "Newborn DBS from New South Wales (Australia) newborn screening program", "Elution of FMRP from DBS", "qFMRP assay procedure", "DNA studies", "Statistical analysis", "Results", "NY State newborn DBS samples", "New South Wales newborn DBS samples", "Discussion", "Conclusion" ]
[ "Fragile X syndrome, the most common inherited cause of intellectual disability, results from mutation of the FMR1 gene on the X chromosome that disrupts expression of the fragile X mental retardation protein (FMRP) [1,2]. Although there are very rare cases in which the fragile X syndrome is due to point mutation or deletion of the FMR1 gene [3–7], the most common fragile X mutation eliminates FMRP expression through expansion of a CGG repeat in the 5’-untranslated region of FMR1 to more than 200 triplets (the full mutation). Although the fragile X syndrome results directly from the absence of functional FMRP, diagnosis of this syndrome is based on allele size and detection of the full mutation expansion in genomic DNA.\nFMR1 alleles are sorted into four size categories based on their stability upon transmission: normal (up to 44 CGG repeats): intermediate (45-54 repeats); premutation (55-200 repeats); and full mutation (>200 repeats). In the North American population the full-mutation allele (>200 repeats) which is exclusively maternally inherited, has an approximate prevalence of 1 in 4,000 while the premutation is much more common with a prevalence of 1 in 151 females and 1 in 468 males [8]. The premutation can expand to the full mutation when transmitted from mother to offspring and the likelihood of this expansion is dependent on the length and structure of the allele. The majority of premutation alleles are less than 90 triplet repeats in length. FMRP expression is reduced as triplet repeat length increases from the normal range and is eliminated by a process analogous to X inactivation when the repeat exceeds approximately 200 copies [9,10].\nSince males carry only a single X chromosome, those with a full mutation allele develop the fragile X syndrome due to the absence of FMRP. Approximately 40% of males with the full mutation have some somatic cells with smaller alleles from triplet repeat contractions during early embryogenesis. 
These alleles express FMRP but only rarely does this mosaic expression ameliorate the syndrome. Contractions are also presumably responsible for the premutation size alleles in the sperm of full mutation males [11]. Large premutation alleles (approximately 150-200 repeats) are associated with intellectual impairment due presumably to reduced FMRP expression.\nFemales carry two X chromosomes and two copies of the FMR1 gene, only one of which is expressed in any particular somatic cell after X inactivation during early embryogenesis. When FMRP expression is reduced because one FMR1 allele in a female carries the full mutation, the degree of impairment can range from undetectable to severe. This variability is due to variation in random X inactivation and the resulting mosaic distribution of somatic cells in which FMRP is expressed. Since the full mutation must be maternally inherited, homozygous full mutation females do not occur. As in males, mosaicism for smaller alleles occurs in females but it is more rarely observed.\nWe recently reported the development of a simple, accurate, and inexpensive capture immunoassay that determines the level of FMRP in dried blood spots (DBS) as well as in lymphocytes and other tissues [12]. Our initial study of FMRP in DBS from 215 individuals with normal, premutation, and full-mutation FMR1 alleles was designed to characterize the FMRP expression of different FMR1 genotypes. In samples from normal individuals we found a broad distribution of FMRP. The level of the protein declined with age from infants to preteens. It leveled off in teenage years and remained unchanged through adulthood, with no difference between males and females. We used these DBS samples to evaluate how effectively this assay distinguished affected from unaffected individuals. 
While the assay readily identified affected (full-mutation) males with sensitivity and specificity approaching 100%, we needed to test a larger set of random population samples to establish the FMRP variability detected by the assay. Since residual DBS from state-mandated newborn screening for metabolic and genetic diseases are available for research and represent a uniform age, we decided to use 2,000 randomly selected anonymous fresh newborn DBS to characterize the variability of FMRP expression in the newborn population which is a potential target for screening with this assay. Considering the estimated prevalence of fragile X, it was relatively unlikely that we would find any affected individuals (i.e. those with virtually no FMRP) among the 1,000 male and 1,000 females sampled. We were primarily interested, however, in the variability of FMRP expression, especially the low extreme of normal expression.\nTo evaluate the effect of long term storage of newborn DBS on the qFMRP assay, we conducted a retrospective study of 74 aged newborn DBS that had been stored for up to 7 years. These DBS were from a different newborn screening program and included samples from 6 affected (full mutation) males.", "Newborn DBS disks (3-mm-diameter; ~7.1 mm2) were obtained from the New York State Department of Health’s Wadsworth Center, Albany, NY, USA and the New South Wales Newborn Screening Program, Sydney, Australia. Studies performed with both sets of disks were reviewed and approved by the Institutional Review Boards of the Institute for Basic Research in Developmental Disabilities (IBR) and the DBS source institutions.\n Newborn DBS from the Wadsworth center in New York State The DBS disks from 1,000 males and 1,000 females were recent (5 weeks old) and stored with a desiccant in a refrigerator at 2-8°C. 
They were received (in duplicate) in sealed 96-well plates with no identifying information except gender.\n Newborn DBS from New South Wales (Australia) newborn screening program The seventy-four disks from DBS that had been stored for 7 to 84 months included samples from 6 males diagnosed with the fragile X syndrome and 68 normal individuals. The samples from 6 affected males had been stored from 19 to 79 months at low humidity and 22-28°C for 12 months and then at 20°C before FMRP quantification. Those from 68 normal controls had been stored from 7 to 84 months. Information about phenotype, gender, and age of DBS was not known by the researchers performing qFMRP analysis until completion of the study.\n Elution of FMRP from DBS Each 3-mm-diameter disk was placed into a well of a Low Protein Binding Durapore R Multiscreen 96-well filter plate (Millipore, Billerica, MA) and protein was eluted in 50 uL of extraction reagent: M-PER mammalian protein extraction reagent (Thermo Fisher Scientific, Rockford, IL) containing 150 mmol/L NaCl, 10 ug/mL chymostatin, 10 ug/mL antipain, and 1× protease inhibitor cocktail (Complete mini tablets, EDTA free; Roche Applied Science, Indianapolis, IN) by shaking for 3 hr with agitation at room temperature. Eluates were collected by centrifugation at 4°C into a 96-well catch plate for 5 minutes at 1258 × g [12].\n qFMRP assay procedure FMRP assays were performed with the anti-FMRP mouse monoclonal antibody mAb6B8 (MMS-5231, Covance Inc., Dedham, MA) and the anti-FMRP rabbit polyclonal antibody R477 [12]. These antibodies are highly specific and each recognizes a different epitope of the protein [12]. A GST fusion protein, GST-SR7 carrying an abbreviated sequence of FMRP that includes the epitopes of mAb6B8 and R477 was used as standard [12]. The immunoassays were performed as previously described [12]. 
Briefly, 50 μL of DBS eluate was incubated for 6 hr with 3,000 xMAP-MicroPlex microspheres (Luminex, Austin, TX) coupled to anti-FMRP mAb6B8. The microspheres were then washed and incubated overnight with the rabbit antibody R477, which was subsequently labeled for 2 hr with phycoerythrin-conjugated goat anti-rabbit IgG. FMRP was quantified with a Luminex 200 system. Dilutions of GST-SR7 were used to generate an FMRP standard concentration curve for each 96-well plate analyzed.
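The standard-curve step just described lends itself to a short sketch. This is illustrative only: the dilution series, MFI values, and log-log interpolation are assumptions for the example, not the study's actual calibration data or fitting model.

```python
import numpy as np

def fmrp_pm_from_mfi(mfi, std_mfi, std_pm):
    """Estimate FMRP concentration (pM) from a median fluorescence
    intensity (MFI) reading by interpolating, on log-log axes, along
    a GST-SR7 standard dilution series run on the same plate.
    np.interp clips readings outside the standard range to the
    curve's endpoints."""
    return 10 ** np.interp(np.log10(mfi),
                           np.log10(std_mfi),
                           np.log10(std_pm))

# Hypothetical standard curve; MFI must be sorted ascending for np.interp.
std_pm = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # GST-SR7 dilutions
std_mfi = np.array([120.0, 240.0, 480.0, 960.0, 1920.0])

print(round(float(fmrp_pm_from_mfi(480.0, std_mfi, std_pm)), 6))  # 20.0
```

In practice a four- or five-parameter logistic fit is more common for Luminex data; linear interpolation on log-log axes keeps the sketch dependency-free beyond NumPy.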
The amount of FMRP in the DBS was reported as its concentration (pM) in the 50 μL DBS eluate.

DNA studies
DNA was eluted from approximately half of a duplicate 3-mm (~3.5 mm²) disk by the following modification of a published procedure [13]. The DBS disk was initially washed in 1 mL of SSPE (0.15 M NaCl, 0.01 M NaH2PO4 pH 7.0, and 0.001 M EDTA) containing 0.1% Tween 80 (Sigma-Aldrich, St. Louis, MO) for 10 minutes at room temperature, and the half disk was then transferred to 100 μL of 5% Chelex (Bio-Rad, Hercules, CA) in H2O and incubated for 30 min at 60°C and then for 30 min at 100°C. The liquid phase was separated from the Chelex and brought to 1 mM EDTA. Two microliters of this eluate served as template for polymerase chain reaction (PCR) amplification of the FMR1 CGG repeat region with the AmplideX FMR1 PCR (RUO) reagents according to the manufacturer's directions (Asuragen, Austin, TX). PCR products were analyzed by capillary electrophoresis (ABI 3130 Genetic Analyzer, Applied Biosystems, Foster City, CA) [14]. This eluate was also used for analysis of DNA methylation in the FMR1 CGG repeat region with the AmplideX FMR1 mPCR kit according to the manufacturer's directions (Asuragen, Austin, TX).
Statistical analysis
Data were analyzed with either IBM SPSS Statistics (IBM, Armonk, NY) or SigmaPlot (Systat Software, San Jose, CA) software.
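The distribution summaries reported in the Results (mean, SD, skewness, kurtosis) can be reproduced with textbook moment formulas. A minimal sketch; whether the reported kurtosis is raw or excess is not stated in the text, so the convention below (raw, 3.0 for a normal distribution) is an assumption:

```python
def moments(xs):
    """Mean, sample SD (n-1 denominator), skewness, and kurtosis.
    Skewness and kurtosis use simple moment estimators; kurtosis is
    the raw fourth standardized moment (3.0 for a normal curve)."""
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5
    skew = sum((x - m) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * sd ** 4)
    return m, sd, skew, kurt

# A symmetric sample has (numerically) zero skewness:
m, sd, skew, kurt = moments([40.0, 44.8, 49.6])
print(round(m, 1), abs(skew) < 1e-9)  # 44.8 True
```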
NY State newborn DBS samples
We applied the qFMRP immunoassay to 1,000 male and 1,000 female newborn DBS that had been stored for only five weeks (fresh DBS). The FMRP concentration (pM) in each sample (3-mm-diameter disk eluate) was calculated by comparison to dilutions of GST-SR7 as previously described [12]. The results showed variable expression of FMRP, ranging from 10.3 to 92 pM, with an average of 44.8 pM and an SD of 12.4 pM (Figure 1).
Comparison of the FMRP levels obtained in this study with those reported previously [12] for normal adults, which used larger DBS disks (6.9-mm diameter, 37.4 mm²), revealed that the average FMRP in newborns was approximately seven-fold higher (6.3 pM eluted per mm²) than in adults (0.93 pM eluted per mm²). None of the 1,000 male or 1,000 female random newborn DBS lacked FMRP or had the extremely low level that would indicate fragile X syndrome [12].

Figure 1. Distribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175.

To examine the FMR1 genotypes of the samples at the low extreme of the FMRP distribution, we extracted genomic DNA from duplicate DBS of the 14 samples whose FMRP level was more than two standard deviation units below the newborn population mean and determined the size of the CGG repeat in the FMR1 alleles present. The alleles detected in the 14 DBS samples (0.7%) that met this criterion are shown in Table 1. Thirteen of the samples showed an assortment of normal FMR1 CGG repeat alleles that reflects the allele distribution in the normal population, which has a mode of 30 repeats. One sample from a female at the extreme low end of the FMRP distribution (Table 1, Sample 2) showed a normal allele of 30 repeats and a large premutation that appeared as two alleles of approximately 161 and 167 repeats (Figure 2). Methylation analysis, informed by two HpaII sites on either side of the CGG repeat [15,16], showed that the normal 30-repeat allele was highly resistant to HpaII digestion, indicating that it was highly (~90%) methylated (Figure 2).
This implied highly skewed X inactivation, in which the normal allele resided on the inactive X chromosome in most white blood cells in the newborn blood sample, while the premutation resided on the active X chromosome in most of these cells.

Table 1. NY State newborn DBS with lowest FMRP levels

Sample   FMRP (pM)   Z#     Sex   Allele 1   Allele 2   Allele 3
1        10.3        −2.8   f     32         44
2        10.4        −2.8   f     30         161        167
3        11.3        −2.7   m     20
4        13.2        −2.5   m     31
5        16.6        −2.3   m     30
6        16.9        −2.2   m     30
7        17.5        −2.2   m     29
8        17.8        −2.2   f     22         23
9        18.6        −2.1   m     29
10       19.0        −2.1   f     30         33
11       19.2        −2.1   m     29
12       19.2        −2.1   m     30
13       19.6        −2.0   m     30
14       19.7        −2.0   m     30
Mean*    44.8
SD*      12.4

*Mean and standard deviation of all 2,000 samples.
#Difference from the mean (in standard deviation units).
Allele sizes are in CGG repeat number.

Figure 2. PCR analysis of DBS sample 2 in Table 1, a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis.
Arrows indicate alleles of 30, 161, and 167 CGG repeats. The 161- and 167-repeat alleles (arrows at right) represent a premutation allele; somatic mosaicism is presumably responsible for the bifurcation. B: Methylation analysis reference (no HpaII digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of HpaII-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles, and the analysis suggests that the larger, 167-repeat allele had a lower level of methylation than the smaller, 161-repeat allele. The two PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis.
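Two of the derived numbers in this section, the z-scores in Table 1 and the per-area comparison with adult DBS, follow from simple arithmetic on the reported statistics. A sketch, with all input values taken from the text:

```python
import math

MEAN_PM, SD_PM = 44.8, 12.4   # reported mean/SD of the 2,000 fresh DBS

def z_score(fmrp_pm):
    """Difference from the population mean in SD units (Table 1's Z#)."""
    return (fmrp_pm - MEAN_PM) / SD_PM

# Spot-check Table 1: sample 1 (10.3 pM) and sample 14 (19.7 pM)
print(round(z_score(10.3), 1), round(z_score(19.7), 1))  # -2.8 -2.0

# Per-area check of the newborn/adult comparison: a 3-mm-diameter disk
# has area pi * 1.5**2 ~= 7.07 mm^2, so 44.8 pM / 7.07 mm^2 ~= 6.3 pM
# per mm^2, roughly seven-fold the adult 0.93 pM per mm^2.
disk_area_mm2 = math.pi * 1.5 ** 2
print(round(MEAN_PM / disk_area_mm2, 1))  # 6.3
```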
New South Wales newborn DBS samples
We also applied the qFMRP assay in a retrospective study of 74 newborn DBS that had been stored for an extended period, comprising 6 full mutation males and 68 normal individuals. This analysis was performed in a blinded manner to correlate the FMRP levels detected in the aged newborn DBS with the diagnoses of fragile X syndrome that had been made later, after the phenotype appeared, by the GOLD Service Hunter Genetics (Newcastle, Australia). The storage time for the full mutation and normal DBS ranged from 19 to 79 months and from 7 to 84 months, respectively. Table 2 shows the results for the 9 aged newborn DBS with the lowest levels of FMRP. The FMRP level of the 6 full mutation males in this sample set was indistinguishable from background and more than 20-fold lower than the average normal level in this aged sample set. Analysis of the normal controls in this study indicated that the amount of detectable FMRP in the DBS declined significantly with extended storage (Figure 3), which shifted the FMRP distribution toward zero (Figure 4). The only normal sample that approached (0.77 pM) the level of the fragile X syndrome samples had been stored for seven years.
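The storage-time decline is summarized by the exponential trend line fitted in Figure 3, y = 21.903·e^(−0.028x) with x in months. A sketch of evaluating that curve uses the published coefficients; the half-life computation is our own illustration, not a figure from the paper:

```python
import math

def predicted_fmrp(storage_months, a=21.903, k=0.028):
    """Detectable FMRP (pM) predicted by the Figure 3 trend line
    y = a * exp(-k * x) for normal-control DBS."""
    return a * math.exp(-k * storage_months)

# Detectable FMRP halves roughly every ln(2)/0.028 ~= 24.8 months,
# so by ~84 months the predicted normal signal is only ~2 pM,
# approaching the full-mutation background.
half_life = math.log(2) / 0.028
print(round(half_life, 1), round(predicted_fmrp(84), 1))
```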
Despite the loss of detectable FMRP with DBS storage time, the qFMRP assay identified all of the affected (full mutation) males. Because the decline of detectable FMRP in normal control DBS could lead to false positives, the Mann-Whitney analysis and the boxplot of these results shown in Figure 5 excluded samples that had been stored for more than 47 months. Samples in this subset showed a significant separation between normal (male and female) and full mutation males (P ≤ 0.001, U-test).

Table 2. Australian newborn DBS with lowest FMRP levels

Genotype   Sex   FMRP (pM)   zFMRP#   Storage (mo)
full       m     0.11        −1.9     47
full       m     0.14        −1.9     19
full       m     0.35        −1.9     73
full       m     0.53        −1.8     38
full       m     0.58        −1.8     79
full       m     0.64        −1.8     32
nl         f     0.77        −1.8     84
nl         m     1.45        −1.7     48
nl         f     2.32        −1.6     48
Mean* (m + f)    12.5
SD*              6.5

*Mean and standard deviation of control normal samples (n = 59).
#Difference from the mean (in standard deviation units).
nl: normal; full: full mutation.

Figure 3. Decline in detectable FMRP with DBS storage time. Samples from normal individuals are plotted according to duration of storage in months.
The formula for the best-fit trend line: y = 21.903e^(−0.028x); R² = 0.5509.

Figure 4. Distribution of FMRP levels in 59 newborn DBS from normal controls in the New South Wales archive. The mean FMRP value was 12.5 ± 6.5 pM. Storage time for this sample set was ≤47 months.

Figure 5. Boxplot of FMRP values in 63 newborn DBS from the New South Wales archive. Box-plot data are expressed as the 25th to 75th percentile, median, and whiskers to the 10th and 90th percentiles, with outliers shown as circles. *P ≤ 0.001, Mann-Whitney U-test. Storage time for this sample set was ≤47 months.
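The Mann-Whitney comparison behind Figure 5 can be sketched without a statistics package. The full-mutation values below are the Table 2 entries stored ≤47 months; the normal values are hypothetical draws consistent with the reported normal mean (12.5 pM), not the study's data:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for xs versus ys: the number of
    (x, y) pairs with x > y, counting ties as 1/2. U = 0 means every
    x lies below every y, i.e. complete separation of the groups."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in xs for y in ys)

# Full-mutation males stored <=47 months (Table 2); hypothetical normals.
full_mutation = [0.11, 0.14, 0.53, 0.64]
normal = [6.2, 8.9, 10.4, 12.5, 13.1, 15.8, 18.0, 21.3]

print(mann_whitney_u(full_mutation, normal))  # 0.0 -- complete separation
```

With complete separation and these (illustrative) sample sizes, the exact two-sided p-value would be 2/C(12,4) ≈ 0.004; the significance reported above (P ≤ 0.001) comes from the larger study groups.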
The formula for the best fit trend line: y =21.903e-0.028×; R2 = 0.5509.\n\nDistribution of FMRP levels in 59 newborn DBS from normal controls in the New South Wales archive. The mean FMRP value was 12.5 ± 6.5. Storage time for this sample set was ≤47 months.\n\nBoxplot of FMRP values in 63 newborn DBS from New South Wales archive. Box-plot data are expressed as 25th to 75th percentile, median, and whiskers to 10th and 90th percentiles with outliers shown as circles. *P = ≤0.001, Mann-Whitney U-test. Storage time for this sample set was ≤47 months.", "The distribution of FMRP levels in the sample of 2,000 newborn DBS from the NY State collection (Figure 1) parallels the profile of 134 non-newborn DBS samples from individuals of normal phenotype and normal size CGG repeat alleles that were analyzed previously [12]. The mean FMRP level was considerably higher than expected--approximately seven-fold higher than in the previous study of 134 normal individuals. While the higher white blood cell level in newborns [17] is almost certainly a contributor to this increase, only about half of the increase in the average FMRP level is explained by the predicted increase in white blood cell count. This suggests that newborns may have increased FMRP expression in white blood cells. Whatever the explanation, the increase in the average FMRP level will enhance the specificity and sensitivity of the qFMRP assay for screening newborn DBS.\nConsidering the prevalence of fragile X, it is not surprising that none of the 1,000 samples from males and 1,000 from females lacked FMRP or had an FMRP level low enough to signal the presence of the fragile X syndrome. Although there were clearly no samples in this set of 2,000 that were from individuals who would develop the fragile X syndrome, we examined the FMR1 CGG region in the DBS samples with FMRP lower than two SDs below the mean to see what alleles were present at the low extreme of the distribution. 
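The selection rule just described (FMRP more than two SDs below the population mean) can be sketched as a z-score filter. The mean and SD are the values reported for the 2,000 fresh NY State samples; the example samples are hypothetical:

```python
# Flag DBS samples whose FMRP lies more than 2 SDs below the population mean.
# Mean/SD are the reported NY State newborn values; sample IDs/values are
# hypothetical illustrations.
MEAN_FMRP = 44.8   # pM
SD_FMRP = 12.4     # pM

def z_score(fmrp_pm):
    """Difference from the population mean in standard deviation units."""
    return (fmrp_pm - MEAN_FMRP) / SD_FMRP

def flag_low(samples, threshold=-2.0):
    """Return (sample_id, FMRP, z) for samples below the z threshold."""
    return [(sid, f, round(z_score(f), 1))
            for sid, f in samples if z_score(f) < threshold]

samples = [("A", 10.4), ("B", 44.1), ("C", 19.6), ("D", 92.0)]
print(flag_low(samples))  # samples A and C fall below mean - 2 SD
```

Flagged samples would then go on to CGG-repeat genotyping, as in Table 1.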
Thirteen of the 14 samples in this group had normal CGG repeat alleles, which confirmed that these FMRP levels were within the normal range and that the reduced FMRP was due to factors other than CGG repeat length, for example the variability in the number of white blood cells in newborns [18].

In contrast, one sample (sample 2 in Table 1) appeared to be at the low extreme of the normal FMRP distribution due to reduced FMRP expression from a 161/167-repeat premutation allele. FMR1 alleles of this size are rare and have reduced levels of FMRP [9,19]. Methylation analysis (Figure 2) indicated skewed inactivation, with the X chromosome carrying the normal 30-repeat allele inactive in approximately 90% of cells. Thus, FMRP expression was primarily from the premutation allele, with its reduced FMRP expression, which probably explains the low-normal amount of FMRP. The assignment of this one sample out of 2,000 to the low-normal range is extremely unlikely to have occurred by chance and demonstrates the potential discriminatory power of this assay.

The aged DBS from the New South Wales newborn screening collection were very different from the fresh 2,000 DBS from the NY State collection. The former had been archived for an extended period and were analyzed for FMRP only after some of the males represented were later found to have the fragile X syndrome. The extended storage time reduced the level of FMRP that could be detected (Figure 3), which shifted the distribution of FMRP levels toward zero (Figure 4). The mean normal concentration was approximately half that of the normal adult population we had previously analyzed [12] and less than a third of the mean of newborn DBS that had been stored for only five weeks (Table 1). The results shown in Table 2 indicate that extended DBS storage, especially for longer than 48 months, could lead to some overlap between normal and full mutation samples.
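The storage-related decline follows the exponential trend reported for Figure 3 (y = 21.903e^(−0.028x), x in months). A small sketch evaluating that fitted model:

```python
import math

def expected_fmrp(months, a=21.903, k=0.028):
    """Detectable FMRP (pM) predicted by the reported Figure 3 trend line
    y = 21.903 * exp(-0.028 * x), where x is DBS storage time in months."""
    return a * math.exp(-k * months)

# Predicted mean detectable FMRP at several storage times
for months in (0, 12, 24, 47, 84):
    print(f"{months:3d} mo: {expected_fmrp(months):5.1f} pM")
```

By this fit, roughly half of the detectable protein is lost after about 25 months of storage (ln 2 / 0.028 ≈ 24.8 months), consistent with the limited utility of the assay for DBS stored four years or more.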
Despite the reduction of detectable FMRP due to prolonged storage, this retrospective analysis was a highly effective screen that identified all specimens from affected (full mutation) males. However, prolonged storage of newborn DBS (more than 47 months) could increase the number of false positive samples. Thus, the qFMRP assay will have limited utility for DBS stored for four years or more. Future retrospective studies with aged newborn DBS should be performed with samples that have been stored less than 47 months, and the levels of FMRP should be compared to those of aged normal DBS with the same prolonged storage time.

It is likely that the qFMRP assay could predict cognitive impairment in full mutation females and distinguish those with an IQ <70 from high-functioning full mutation females and from premutation females. Studies of FMR1 alleles have shown that methylation of specific sites is predictive of intellectual impairment in full mutation females [20,21]. This methylation is presumably an indirect measure of FMRP expression, and thus a direct qFMRP analysis is likely to be at least as predictive as an assay of FMR1 methylation.

Fragile X does not have a distinctive phenotype in infants and young children, and the average age at which the syndrome is diagnosed in males is 36 months in the USA and 54 months in Australia [22–24]. While there is currently no specific treatment for fragile X, early diagnosis of the syndrome would allow early therapeutic intervention for affected children and timely genetic counseling for their parents. Early diagnosis would become even more critical if pharmacological therapies for fragile X that are currently in phase 2 or 3 clinical trials prove effective.

Diagnosis of the fragile X syndrome is currently based on analysis of the CGG repeat in genomic DNA, and DNA molecular tests have been used in pilot newborn screening for fragile X [25,26].
However, DNA molecular tests that determine CGG allele size identify premutation carriers as well as full mutation individuals. Premutation carriers are at risk for an adult-onset disorder, fragile X-associated tremor/ataxia syndrome (FXTAS) [27]. Since there is currently no treatment for FXTAS, the identification of newborns with the premutation complicates the ethics of fragile X screening by DNA analysis. There are currently no fragile X newborn screening programs.", "Our data show for the first time that it is feasible to measure FMRP in the 3-mm-diameter newborn DBS disks that are used in mandatory screening for metabolic and hereditary diseases. The accurate measurement of FMRP in these small-diameter disks is enhanced by the relatively high levels of the protein present in neonates, which could in part be due to the high leukocyte count [17,18].

The level of FMRP in newborns is variable and is, on average, seven times higher than that detected in adults. Even though the levels of FMRP detected in DBS decrease with storage time, the qFMRP assay allowed us to identify all affected males in the set of aged DBS from normal and fragile X individuals. Our data suggest that the qFMRP assay could serve as the initial step in a fragile X newborn screening program. In a second screening step, characterization of the CGG size and/or methylation status of the FMR1 alleles associated with DBS at the low extreme of the FMRP distribution would indicate the presence of the fragile X syndrome. The correlation between highly reduced or absent FMRP expression and the fragile X syndrome is firmly established for males. This correlation is likely to apply to females as well, but further studies will be necessary to firmly establish this link.
Fragile X screening by qFMRP has distinct advantages over techniques that detect a CGG expansion since it avoids ethical issues associated with identification of asymptomatic premutation carrier infants and is consistent with current newborn screening technology and cost parameters." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, "discussion", "conclusion" ]
[ "Fragile X syndrome", "FMR1", "FMRP", "Dried blood spots", "DBS", "Capture immune assay", "CGG repeats", "Luminex", "Newborn screening" ]
Background: Fragile X syndrome, the most common inherited cause of intellectual disability, results from mutation of the FMR1 gene on the X chromosome that disrupts expression of the fragile X mental retardation protein (FMRP) [1,2]. Although there are very rare cases in which the fragile X syndrome is due to point mutation or deletion of the FMR1 gene [3–7], the most common fragile X mutation eliminates FMRP expression through expansion of a CGG repeat in the 5’-untranslated region of FMR1 to more than 200 triplets (the full mutation). Although the fragile X syndrome results directly from the absence of functional FMRP, diagnosis of this syndrome is based on allele size and detection of the full mutation expansion in genomic DNA. FMR1 alleles are sorted into four size categories based on their stability upon transmission: normal (up to 44 CGG repeats); intermediate (45-54 repeats); premutation (55-200 repeats); and full mutation (>200 repeats). In the North American population the full-mutation allele (>200 repeats), which is exclusively maternally inherited, has an approximate prevalence of 1 in 4,000, while the premutation is much more common, with a prevalence of 1 in 151 females and 1 in 468 males [8]. The premutation can expand to the full mutation when transmitted from mother to offspring, and the likelihood of this expansion depends on the length and structure of the allele. The majority of premutation alleles are less than 90 triplet repeats in length. FMRP expression is reduced as triplet repeat length increases beyond the normal range and is eliminated by a process analogous to X inactivation when the repeat exceeds approximately 200 copies [9,10]. Since males carry only a single X chromosome, those with a full mutation allele develop the fragile X syndrome due to the absence of FMRP. Approximately 40% of males with the full mutation have some somatic cells with smaller alleles resulting from triplet repeat contractions during early embryogenesis.
These alleles express FMRP, but only rarely does this mosaic expression ameliorate the syndrome. Contractions are also presumably responsible for the premutation-size alleles in the sperm of full mutation males [11]. Large premutation alleles (approximately 150-200 repeats) are associated with intellectual impairment, presumably due to reduced FMRP expression. Females carry two X chromosomes and two copies of the FMR1 gene, only one of which is expressed in any particular somatic cell after X inactivation during early embryogenesis. When FMRP expression is reduced because one FMR1 allele in a female carries the full mutation, the degree of impairment can range from undetectable to severe. This variability is due to variation in random X inactivation and the resulting mosaic distribution of somatic cells in which FMRP is expressed. Since the full mutation must be maternally inherited, homozygous full mutation females do not occur. As in males, mosaicism for smaller alleles occurs in females, but it is more rarely observed. We recently reported the development of a simple, accurate, and inexpensive capture immunoassay that determines the level of FMRP in dried blood spots (DBS) as well as in lymphocytes and other tissues [12]. Our initial study of FMRP in DBS from 215 individuals with normal, premutation, and full-mutation FMR1 alleles was designed to characterize the FMRP expression of different FMR1 genotypes. In samples from normal individuals we found a broad distribution of FMRP. The level of the protein declined with age from infancy to the preteen years, leveled off in the teenage years, and remained unchanged through adulthood, with no difference between males and females. We used these DBS samples to evaluate how effectively this assay distinguished affected from unaffected individuals.
While the assay readily identified affected (full-mutation) males with sensitivity and specificity approaching 100%, we needed to test a larger set of random population samples to establish the FMRP variability detected by the assay. Since residual DBS from state-mandated newborn screening for metabolic and genetic diseases are available for research and represent a uniform age, we used 2,000 randomly selected anonymous fresh newborn DBS to characterize the variability of FMRP expression in the newborn population, which is a potential target for screening with this assay. Considering the estimated prevalence of fragile X, it was relatively unlikely that we would find any affected individuals (i.e. those with virtually no FMRP) among the 1,000 males and 1,000 females sampled. We were primarily interested, however, in the variability of FMRP expression, especially the low extreme of normal expression. To evaluate the effect of long-term storage of newborn DBS on the qFMRP assay, we conducted a retrospective study of 74 aged newborn DBS that had been stored for up to 7 years. These DBS were from a different newborn screening program and included samples from 6 affected (full mutation) males. Methods: Newborn DBS disks (3-mm-diameter; ~7.1 mm²) were obtained from the New York State Department of Health’s Wadsworth Center, Albany, NY, USA and the New South Wales Newborn Screening Program, Sydney, Australia. Studies performed with both sets of disks were reviewed and approved by the Institutional Review Boards of the Institute for Basic Research in Developmental Disabilities (IBR) and the DBS source institutions. Newborn DBS from the Wadsworth Center in New York State The DBS disks from 1,000 males and 1,000 females were recent (5 weeks old) and stored with a desiccant in a refrigerator at 2-8°C. They were received (in duplicate) in sealed 96-well plates with no identifying information except gender.
Newborn DBS from New South Wales (Australia) newborn screening program The seventy-four disks from DBS that had been stored for 7 to 84 months included samples from 6 males diagnosed with the fragile X syndrome and 68 normal individuals. The samples from the 6 affected males had been stored from 19 to 79 months: at low humidity and 22-28°C for 12 months and then at 20°C before FMRP quantification. Those from the 68 normal controls had been stored from 7 to 84 months. Information about phenotype, gender, and age of the DBS was not known by the researchers performing qFMRP analysis until completion of the study. Elution of FMRP from DBS Each 3-mm-diameter disk was placed into a well of a Low Protein Binding Durapore R Multiscreen 96-well filter plate (Millipore, Billerica, MA) and protein was eluted in 50 uL of extraction reagent: M-PER mammalian protein extraction reagent (Thermo Fisher Scientific, Rockford, IL) containing 150 mmol/L NaCl, 10 ug/mL chymostatin, 10 ug/mL antipain, and 1× protease inhibitor cocktail (Complete mini tablets, EDTA free; Roche Applied Science, Indianapolis, IN) by shaking for 3 hr with agitation at room temperature.
Eluates were collected by centrifugation at 4°C into a 96-well catch plate for 5 minutes at 1258 × g [12]. qFMRP assay procedure FMRP assays were performed with the anti-FMRP mouse monoclonal antibody mAb6B8 (MMS-5231, Covance Inc., Dedham, MA) and the anti-FMRP rabbit polyclonal antibody R477 [12]. These antibodies are highly specific and each recognizes a different epitope of the protein [12]. A GST fusion protein, GST-SR7, carrying an abbreviated sequence of FMRP that includes the epitopes of mAb6B8 and R477, was used as the standard [12]. The immunoassays were performed as previously described [12]. Briefly, 50 uL of DBS eluate was incubated for 6 hr with 3,000 xMAP-MicroPlex microspheres (Luminex, Austin, TX) coupled to anti-FMRP mAb6B8. The microspheres were then washed and incubated overnight with the rabbit antibody R477, which was subsequently labeled for 2 hr with phycoerythrin-conjugated goat anti-rabbit IgG. FMRP was quantified with a Luminex 200 system. Dilutions of GST-SR7 were used to generate an FMRP standard concentration curve for each 96-well plate analyzed. The amount of FMRP in the DBS was reported as concentration (pM) in the 50 uL DBS eluate. DNA studies DNA was eluted from approximately ½ of a duplicate 3-mm (~3.5 mm²) disk by the following modification of a published procedure [13]. The DBS disk was initially washed in 1 mL of SSPE (0.15 M NaCl, 0.01 M NaH2PO4 pH 7.0, and 0.001 M EDTA) containing 0.1% Tween 80 (Sigma-Aldrich, St Louis, MO 63103 USA) for 10 minutes at room temperature, and the half disk was transferred to 100 μL of 5% Chelex (BioRad, Hercules, CA 94547 USA) in H2O, incubated for 30 min at 60°C and then for 30 min at 100°C. The liquid phase was separated from the Chelex and brought to 1 mM EDTA. Two microliters of this eluate served as template for polymerase chain reaction (PCR) amplification of the FMR1 CGG repeat region with the AmplideX® FMR1 PCR (RUO) reagents according to the manufacturer’s directions (Asuragen, Austin, TX 78744 USA). PCR products were analyzed by capillary electrophoresis (ABI 3130 Genetic Analyzer, Applied Biosystems, Foster City, CA) [14]. This eluate was also used for analysis of DNA methylation in the FMR1 CGG repeat region with the Amplidex® FMR1 mPCR kit according to the manufacturer’s directions (Asuragen, Austin, TX 78744 USA). Statistical analysis Data were analyzed with either IBM SPSS Statistics (IBM, Armonk, NY) or SigmaPlot (Systat Software, San Jose, CA) software.
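Quantification against the GST-SR7 dilution series amounts to reading each sample's signal off a per-plate standard curve. A minimal sketch using linear interpolation between standard points; the fluorescence values and curve shape here are hypothetical, and the study's actual curve-fitting procedure is not specified:

```python
from bisect import bisect_left

# Hypothetical standard curve: (FMRP pM, median fluorescence intensity)
# built from GST-SR7 dilutions run on each 96-well plate.
standards = [(0.0, 30.0), (5.0, 210.0), (10.0, 400.0),
             (20.0, 760.0), (40.0, 1450.0), (80.0, 2700.0)]

def mfi_to_pm(mfi):
    """Linearly interpolate a sample's fluorescence back to pM FMRP."""
    conc = [c for c, _ in standards]
    sig = [s for _, s in standards]
    if mfi <= sig[0]:
        return conc[0]
    if mfi >= sig[-1]:
        return conc[-1]  # above the curve: would need dilution and re-assay
    i = bisect_left(sig, mfi)
    frac = (mfi - sig[i - 1]) / (sig[i] - sig[i - 1])
    return conc[i - 1] + frac * (conc[i] - conc[i - 1])

print(round(mfi_to_pm(305.0), 2))  # a signal halfway between two standards
```

In practice a fitted curve (e.g. logistic) is usual for immunoassays; piecewise-linear interpolation is shown only to make the concentration read-out concrete.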
Results: NY State newborn DBS samples We applied the qFMRP immunoassay to 1,000 male and 1,000 female newborn DBS that had been stored for only five weeks (fresh DBS).
FMRP concentration (pM) in each sample (3-mm-diameter disk eluate) was calculated by comparison to dilutions of GST-SR7 as previously described [12]. The results showed variable expression of FMRP ranging from 10.3 to 92 pM, with an average of 44.8 pM and a SD of 12.4 pM (Figure 1). Comparison of the FMRP levels obtained in this study to those reported previously [12] for normal adults using larger DBS disks (6.9-mm-diameter, 37.4 mm²) revealed that the average FMRP in newborns was approximately seven-fold higher (6.3 pM eluted per mm²) than in adults (0.93 pM eluted per mm²). None of the 1,000 male or 1,000 female random newborn DBS lacked FMRP or had the extremely low level that would indicate the fragile X syndrome [12].

Figure 1. Distribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175.

To examine the FMR1 genotypes of the samples at the low extreme of the FMRP distribution, we extracted genomic DNA from duplicate DBS of the 14 samples whose level of FMRP was more than two standard deviation units below the newborn population mean and determined the size of the CGG repeat in the FMR1 alleles that were present. The alleles detected in these 14 DBS samples (0.7%) are shown in Table 1. Thirteen of the samples showed an assortment of normal FMR1 CGG repeat alleles that reflects the allele distribution in the normal population, which has a mode of 30 repeats. One sample from a female at the extreme low end of the FMRP distribution (Table 1, Sample 2) showed a normal allele of 30 repeats and a large premutation that appeared as two alleles of approximately 161 and 167 repeats (Figure 2).
Methylation analysis, informed by two HpaII sites on either side of the CGG repeat [15,16] showed that the normal, 30 repeat, allele was highly resistant to Hpa II digestion which indicated that it was highly (~90%) methylated (Figure 2). This implied highly skewed X inactivation in which the normal allele resided on the inactive X chromosome in most white blood cells in the newborn blood sample while the premutation resided on the active chromosome in most of these cells.Table 1 NY State newborn DBS with lowest FMRP levels Sample FMRP(pM) Z # Sex Allele 1 Allele 2 Allele 3 110.3−2.8f3244210.4−2.8f30161167311.3−2.7m20413.2−2.5m31516.6−2.3m30616.9−2.2m30717.5−2.2m29817.8−2.2f2223918.6−2.1m291019.0−2.1f30331119.2−2.1m291219.2−2.1m301319.6−2.0m301419.7−2.0m30Mean*44.8SD*12.4*Mean and standard deviation of all 2,000 samples.#Difference from mean (in standard deviation units).Allele sizes are in CGG repeat number.Figure 2 PCR analysis of DBS sample 2 in Table 1 , a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis. Arrows indicate alleles of 30, 161 and 167 CGG repeats. The 161 and 167 repeat alleles (arrows at right) represent a premutation allele. (Somatic mosaicism is presumably responsible for the bifurcation). B: Methylation analysis reference (no Hpa II digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of Hpa II-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles and the analysis suggests that the larger, 167 repeat had a lower level of methylation than the smaller, 161 repeat allele. The 2 PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis. NY State newborn DBS with lowest FMRP levels *Mean and standard deviation of all 2,000 samples. #Difference from mean (in standard deviation units). 
Allele sizes are in CGG repeat number. PCR analysis of DBS sample 2 in Table 1 , a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis. Arrows indicate alleles of 30, 161 and 167 CGG repeats. The 161 and 167 repeat alleles (arrows at right) represent a premutation allele. (Somatic mosaicism is presumably responsible for the bifurcation). B: Methylation analysis reference (no Hpa II digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of Hpa II-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles and the analysis suggests that the larger, 167 repeat had a lower level of methylation than the smaller, 161 repeat allele. The 2 PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis. We applied the qFMRP immunoassay to 1,000 male and 1,000 female newborn DBS that had been stored for only five weeks (fresh DBS). FMRP concentration (pM) in each sample (3-mm-diameter disk eluate) was calculated by comparison to dilutions of GST-SR7 as previously described [12]. The results showed a variable expression of FMRP ranging from 10.3- to 92-pM, an average FMRP of 44.8 pM, and a SD of 12.4 pM (Figure 1). Comparison of the FMRP levels obtained in this study to those reported previously [12] for normal adults using larger DBS disks (6.9-mm-diameter, 37.4 mm2) revealed that, the average FMRP in newborn was approximately seven-fold higher (6.3 pM eluted per mm2) than in adults (0.93 pM eluted per mm2). None of the 1,000 male or 1,000 female random newborn DBS lacked FMRP or had the extremely low level that would indicate the fragile X syndrome [12].Figure 1 Distribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175. 
Distribution of FMRP levels in 2,000 newborn DBS from the NY State collection. The mean FMRP value was 44.8 pM; standard deviation 12.4 pM; skewness 1.11; kurtosis 3.175. To examine the FMR1 genotypes of the samples at the low extreme of the FMRP distribution, we extracted genomic DNA from duplicate DBS of the 14 samples whose level of FMRP was more than two standard deviation units below the newborn population mean and determined the size of the CGG repeat in the FMR1 alleles that were present (Table 1). The alleles detected in the 14 DBS samples (0.7%) that met this criterion are shown in Table 1. Thirteen of the samples showed an assortment of normal FMR1 CGG repeat alleles that reflects the allele distribution in the normal population which has a mode of 30 repeats. One sample from a female at the extreme low end of the FMRP distribution (Table 1, Sample 2) showed a normal allele of 30 repeats and a large premutation that appeared as two alleles of approximately 161 and 167 repeats (Figure 2). Methylation analysis, informed by two HpaII sites on either side of the CGG repeat [15,16] showed that the normal, 30 repeat, allele was highly resistant to Hpa II digestion which indicated that it was highly (~90%) methylated (Figure 2). 
This implied highly skewed X inactivation in which the normal allele resided on the inactive X chromosome in most white blood cells in the newborn blood sample, while the premutation resided on the active chromosome in most of these cells.

Table 1: NY State newborn DBS with lowest FMRP levels

Sample  FMRP (pM)  Z#    Sex  Allele 1  Allele 2  Allele 3
1       10.3       -2.8  f    32        44
2       10.4       -2.8  f    30        161       167
3       11.3       -2.7  m    20
4       13.2       -2.5  m    31
5       16.6       -2.3  m    30
6       16.9       -2.2  m    30
7       17.5       -2.2  m    29
8       17.8       -2.2  f    22        23
9       18.6       -2.1  m    29
10      19.0       -2.1  f    30        33
11      19.2       -2.1  m    29
12      19.2       -2.1  m    30
13      19.6       -2.0  m    30
14      19.7       -2.0  m    30
Mean*   44.8
SD*     12.4

*Mean and standard deviation of all 2,000 samples. #Difference from mean (in standard deviation units). Allele sizes are in CGG repeat number.

Figure 2: PCR analysis of DBS sample 2 in Table 1, a female with a large premutation allele and a highly methylated normal allele. A: Capillary electrophoresis profile of PCR analysis. Arrows indicate alleles of 30, 161 and 167 CGG repeats. The 161 and 167 repeat alleles (arrows at right) represent a premutation allele (somatic mosaicism is presumably responsible for the bifurcation). B: Methylation analysis reference (no HpaII digestion) PCR profile of DBS DNA. C: Methylation analysis PCR profile of HpaII-digested DBS DNA. Comparison of B and C illustrates how each allele is protected from HpaII digestion by methylation. The premutation was split into two alleles, and the analysis suggests that the larger, 167-repeat allele had a lower level of methylation than the smaller, 161-repeat allele. The two PCR products at approximately 0 CGG repeats represent internal controls for the methylation analysis.
New South Wales newborn DBS samples: We also applied the qFMRP assay in a retrospective study of 74 newborn DBS that had been stored for an extended period, comprising 6 full mutation males and 68 normal individuals. This analysis was performed in a blinded manner to correlate the FMRP levels detected in the aged newborn DBS with the diagnoses of the fragile X syndrome that had been made later by the GOLD Service Hunter Genetics (Newcastle, Australia), after the phenotype appeared. The storage time for the full mutation and normal DBS ranged from 19 to 79 months and from 7 to 84 months, respectively. Table 2 shows the results for the 9 aged newborn DBS with the lowest levels of FMRP. The FMRP level of the 6 full mutation males in this sample set was indistinguishable from background and more than 20-fold lower than the average normal level in this aged sample set. Analysis of the normal controls in this study indicated that the amount of detectable FMRP in the DBS declined significantly with extended storage (Figure 3), which shifted the FMRP distribution toward zero (Figure 4). The only normal sample that approached the level of the fragile X syndrome samples (0.77 pM) had been stored for seven years.
Despite the loss of detectable FMRP with DBS storage time, the qFMRP assay identified all of the affected (full mutation) males. Because the decline of detectable FMRP in normal control DBS could lead to false positives, the Mann-Whitney analysis and the boxplot of these results shown in Figure 5 excluded samples that had been stored for more than 47 months. Samples in this subset showed a significant separation between normal (male and female) and full mutation males (P ≤ 0.001, U-test).

Table 2: Australian newborn DBS with lowest FMRP levels

Genotype  Sex    FMRP (pM)  zFMRP#  Storage (mo)
full      m      0.11       -1.9    47
full      m      0.14       -1.9    19
full      m      0.35       -1.9    73
full      m      0.53       -1.8    38
full      m      0.58       -1.8    79
full      m      0.64       -1.8    32
nl        f      0.77       -1.8    84
nl        m      1.45       -1.7    48
nl        f      2.32       -1.6    48
Mean*     m + f  12.5
SD*              6.5

*Mean and standard deviation of control normal samples (n = 59). #Difference from mean (in standard deviation units). nl: normal, full: full mutation.

Figure 3: Decline in detectable FMRP with DBS storage time. Samples from normal individuals are plotted according to duration of storage in months. Best-fit trend line: y = 21.903e^(-0.028x); R2 = 0.5509.

Figure 4: Distribution of FMRP levels in 59 newborn DBS from normal controls in the New South Wales archive. The mean FMRP value was 12.5 ± 6.5 pM. Storage time for this sample set was ≤47 months.

Figure 5: Boxplot of FMRP values in 63 newborn DBS from the New South Wales archive. Box-plot data are expressed as 25th to 75th percentile, median, and whiskers to 10th and 90th percentiles, with outliers shown as circles. *P ≤ 0.001, Mann-Whitney U-test. Storage time for this sample set was ≤47 months.
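For orientation, the Figure 3 trend line y = 21.903e^(-0.028x), with x in months of storage, implies that detectable FMRP halves roughly every two years. The sketch below evaluates that fitted curve; the coefficients come from the figure, while the half-life is our own back-of-envelope derivation, not a value reported in the study.

```python
# Sketch evaluating the Figure 3 best-fit decay, y = 21.903 * exp(-0.028 * x),
# where x is DBS storage time in months. Coefficients are from the figure;
# the half-life below is a derived, illustrative quantity.
import math

A, K = 21.903, 0.028  # fitted amplitude (pM) and decay rate (per month)

def predicted_fmrp(months: float) -> float:
    """Expected detectable FMRP (pM) in a normal DBS after a given storage time."""
    return A * math.exp(-K * months)

half_life = math.log(2) / K  # storage time at which detectable FMRP halves
print(f"fresh: {predicted_fmrp(0):.1f} pM")
print(f"at the 47-month cutoff: {predicted_fmrp(47):.1f} pM")
print(f"half-life of detectable FMRP: {half_life:.1f} months")
```

By this curve, a DBS stored for 47 months would retain only about a quarter of its fresh signal, which is consistent with the study's choice of a 47-month cutoff for the statistical comparison.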
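The Mann-Whitney comparison of normal versus full mutation samples can be illustrated with a hand-rolled U statistic. The normal-group values below are invented for illustration; the full mutation values are taken from Table 2. With complete separation between the groups, U reaches its maximum of n1·n2.

```python
# Sketch (not the authors' analysis) of the Mann-Whitney U statistic used to
# compare normal and full mutation FMRP levels. Normal values are illustrative.
import numpy as np

normal = np.array([12.5, 9.8, 15.2, 7.7, 18.1, 11.4])  # pM, illustrative
full_mutation = np.array([0.11, 0.14, 0.35, 0.53])      # pM, from Table 2

def mann_whitney_u(x, y):
    """U for x vs y: count of pairs (x_i, y_j) with x_i > y_j, ties as 1/2."""
    return float(sum((xi > y).sum() + 0.5 * (xi == y).sum() for xi in x))

u = mann_whitney_u(normal, full_mutation)
# Complete separation of the two groups yields the maximal U = len(x) * len(y).
print(f"U = {u} (maximum possible: {len(normal) * len(full_mutation)})")
```

In practice a library routine (e.g. scipy.stats.mannwhitneyu) would also supply the p-value; the point here is only that full mutation FMRP levels rank entirely below the normal range.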
Discussion: The distribution of FMRP levels in the sample of 2,000 newborn DBS from the NY State collection (Figure 1) parallels the profile of 134 non-newborn DBS samples from individuals of normal phenotype and normal-size CGG repeat alleles that were analyzed previously [12]. The mean FMRP level was considerably higher than expected: approximately seven-fold higher than in the previous study of 134 normal individuals. While the higher white blood cell level in newborns [17] is almost certainly a contributor to this increase, only about half of the increase in the average FMRP level is explained by the predicted increase in white blood cell count. This suggests that newborns may have increased FMRP expression in white blood cells.
Whatever the explanation, the increase in the average FMRP level will enhance the specificity and sensitivity of the qFMRP assay for screening newborn DBS. Considering the prevalence of fragile X, it is not surprising that none of the 1,000 samples from males and 1,000 from females lacked FMRP or had an FMRP level low enough to signal the presence of the fragile X syndrome. Although there were clearly no samples in this set of 2,000 that were from individuals who would develop the fragile X syndrome, we examined the FMR1 CGG region in the DBS samples with FMRP lower than two SDs below the mean to see what alleles were present at the low extreme of the distribution. Thirteen of the 14 samples in this group had normal CGG repeat alleles, which confirmed that these FMRP levels were within the normal range and that they contained reduced levels of FMRP due to factors other than CGG repeat length, for example the variability in the number of white blood cells in newborns [18]. In contrast, one sample (sample 2 in Table 1) appeared to be at the low extreme of the normal FMRP distribution due to reduced FMRP expression from a 161/167-repeat premutation allele. FMR1 alleles of this size are rare and have reduced levels of FMRP [9,19]. Methylation analysis (Figure 2) indicated skewed X inactivation, with the X chromosome carrying the normal 30-repeat allele inactive in approximately 90% of cells. Thus, FMRP was expressed primarily from the premutation allele, whose reduced expression probably explains the low-normal amount of FMRP. The assignment of this one sample out of 2,000 to the low normal range is extremely unlikely to have occurred by chance and demonstrates the potential discriminatory power of this assay. The aged DBS from the New South Wales newborn screening collection were very different from the fresh 2,000 DBS from the NY State collection.
The former newborn DBS had been archived for an extended period and were analyzed for FMRP only after some of the males represented were later found to have the fragile X syndrome. The extended storage time reduced the level of FMRP that could be detected (Figure 3), which shifted the distribution of FMRP levels toward zero (Figure 4). The mean normal concentration was approximately half that of the normal adult population we had previously analyzed [12] and less than a third of the mean of newborn DBS that had been stored for only five weeks (Table 1). The results shown in Table 2 indicate that extended DBS storage, especially for longer than 48 months, could lead to some overlap between normal and full mutation samples. Despite the reduction of detectable FMRP due to prolonged storage, this retrospective analysis was a highly effective screen that identified all specimens from affected (full mutation) males. However, prolonged storage of newborn DBS (more than 47 months) could lead to an increase in false positives. Thus, the qFMRP assay will have limited utility for DBS stored for four years or more. Future retrospective studies with aged newborn DBS should be performed with samples that have been stored less than 47 months, and the levels of FMRP should be compared to those of normal DBS having the same prolonged storage time. It is likely that the qFMRP assay could predict cognitive impairment in full mutation females, distinguishing those with an IQ <70 from high-functioning full mutation females and from premutation females. Studies of FMR1 alleles have shown that the methylation of specific sites is predictive of intellectual impairment in full mutation females [20,21]. This methylation is presumably an indirect measure of FMRP expression, and thus a direct qFMRP analysis is likely to be as predictive as an assay of FMR1 methylation.
Fragile X does not have a distinctive phenotype in infants and young children, and the average age at which the syndrome is diagnosed in males is 36 months in the USA and 54 months in Australia [22–24]. While there is currently no specific treatment for fragile X, early diagnosis of the syndrome would allow early therapeutic intervention for affected children and timely genetic counseling for their parents. Early diagnosis would become even more critical if pharmacological therapies for fragile X that are currently in phase 2 or 3 clinical trials prove effective. Diagnosis of the fragile X syndrome is currently based on analysis of the CGG repeat in genomic DNA, and DNA molecular tests have been used in pilot newborn screening for fragile X [25,26]. However, DNA molecular tests that determine CGG allele size identify premutation carriers as well as full mutation individuals. Premutation carriers are at risk for an adult onset disorder, fragile X-associated tremor/ataxia syndrome (FXTAS) [27]. Since there is currently no treatment for FXTAS, the identification of newborns with the premutation complicates the ethics of fragile X screening by DNA analysis. There are currently no fragile X newborn screening programs. Conclusion: Our data show for the first time that it is feasible to measure FMRP in 3-mm-diameter newborn DBS disks, which are used in mandatory screening for metabolic and hereditary diseases. The accurate measurement of FMRP in these small-diameter disks is enhanced by the relatively high levels of the protein present in neonates, which could in part be due to the high leukocyte count [17,18]. The level of FMRP in newborns is variable and is, on average, seven times higher than that detected in adults. Even though the levels of FMRP detected in DBS decrease with storage time, the qFMRP assay allowed us to identify all affected males from the set of aged DBS from normal and fragile X individuals.
Our data suggest that the qFMRP assay could serve as the initial step in a fragile X newborn screening program. In a second screening step, characterization of CGG size and/or methylation status of the FMR1 alleles associated with DBS at the low extreme of the FMRP distribution would indicate the presence of the fragile X syndrome. The correlation between highly reduced or absent FMRP expression and the fragile X syndrome is firmly established for males. This correlation is likely to apply to females as well, but further studies will be necessary to firmly establish this link. Fragile X screening by qFMRP has distinct advantages over techniques that detect a CGG expansion, since it avoids ethical issues associated with identification of asymptomatic premutation carrier infants and is consistent with current newborn screening technology and cost parameters.
Background: The fragile X syndrome (FXS) results from mutation of the FMR1 gene that prevents expression of its gene product, FMRP. We previously characterized 215 dried blood spots (DBS) representing different FMR1 genotypes and ages with a Luminex-based immunoassay (qFMRP). We found variable FMRP levels in the normal samples and identified affected males by the drastic reduction of FMRP. Methods: Here, to establish the variability of expression of FMRP in a larger random population, we quantified FMRP in 2,000 anonymous fresh newborn DBS. We also evaluated the effect of long-term storage on qFMRP by retrospectively assaying 74 aged newborn DBS that had been stored for 7–84 months and that included normal and full mutation individuals. These analyses were performed on 3 mm DBS disks. To identify the alleles associated with the lowest FMRP levels in the fresh DBS, we analyzed the DNA in the samples that were more than two standard deviations below the mean. Results: Analysis of the fresh newborn DBS revealed a broad distribution of FMRP with a mean approximately 7-fold higher than that previously reported for fresh DBS from normal adults, and no samples whose FMRP level indicated FXS. DNA analysis of the lowest-FMRP DBS showed that these represented the low extreme of the normal range and included a female carrying a 165 CGG repeat premutation. In the retrospective study of aged newborn DBS, the FMRP mean of the normal samples was less than 30% of the mean of the fresh DBS. Despite the degraded signal from these aged DBS, qFMRP identified the FXS individuals. Conclusions: The assay showed that newborn DBS contain high levels of FMRP that will allow identification of males, and potentially females, affected by FXS. The assay is also an effective screening tool for aged DBS stored for up to four years.
Background: Fragile X syndrome, the most common inherited cause of intellectual disability, results from mutation of the FMR1 gene on the X chromosome that disrupts expression of the fragile X mental retardation protein (FMRP) [1,2]. Although there are very rare cases in which the fragile X syndrome is due to point mutation or deletion of the FMR1 gene [3–7], the most common fragile X mutation eliminates FMRP expression through expansion of a CGG repeat in the 5’-untranslated region of FMR1 to more than 200 triplets (the full mutation). Although the fragile X syndrome results directly from the absence of functional FMRP, diagnosis of this syndrome is based on allele size and detection of the full mutation expansion in genomic DNA. FMR1 alleles are sorted into four size categories based on their stability upon transmission: normal (up to 44 CGG repeats); intermediate (45-54 repeats); premutation (55-200 repeats); and full mutation (>200 repeats). In the North American population the full-mutation allele (>200 repeats), which is exclusively maternally inherited, has an approximate prevalence of 1 in 4,000, while the premutation is much more common, with a prevalence of 1 in 151 females and 1 in 468 males [8]. The premutation can expand to the full mutation when transmitted from mother to offspring, and the likelihood of this expansion is dependent on the length and structure of the allele. The majority of premutation alleles are less than 90 triplet repeats in length. FMRP expression is reduced as triplet repeat length increases from the normal range and is eliminated by a process analogous to X inactivation when the repeat exceeds approximately 200 copies [9,10]. Since males carry only a single X chromosome, those with a full mutation allele develop the fragile X syndrome due to the absence of FMRP. Approximately 40% of males with the full mutation have some somatic cells with smaller alleles from triplet repeat contractions during early embryogenesis.
These alleles express FMRP, but only rarely does this mosaic expression ameliorate the syndrome. Contractions are also presumably responsible for the premutation-size alleles in the sperm of full mutation males [11]. Large premutation alleles (approximately 150-200 repeats) are associated with intellectual impairment, presumably due to reduced FMRP expression. Females carry two X chromosomes and two copies of the FMR1 gene, only one of which is expressed in any particular somatic cell after X inactivation during early embryogenesis. When FMRP expression is reduced because one FMR1 allele in a female carries the full mutation, the degree of impairment can range from undetectable to severe. This variability is due to variation in random X inactivation and the resulting mosaic distribution of somatic cells in which FMRP is expressed. Since the full mutation must be maternally inherited, homozygous full mutation females do not occur. As in males, mosaicism for smaller alleles occurs in females but is more rarely observed. We recently reported the development of a simple, accurate, and inexpensive capture immunoassay that determines the level of FMRP in dried blood spots (DBS) as well as in lymphocytes and other tissues [12]. Our initial study of FMRP in DBS from 215 individuals with normal, premutation, and full-mutation FMR1 alleles was designed to characterize the FMRP expression of different FMR1 genotypes. In samples from normal individuals we found a broad distribution of FMRP. The level of the protein declined with age from infants to preteens. It leveled off in the teenage years and remained unchanged through adulthood, with no difference between males and females. We used these DBS samples to evaluate how effectively this assay distinguished affected from unaffected individuals.
While the assay readily identified affected (full-mutation) males with sensitivity and specificity approaching 100%, we needed to test a larger set of random population samples to establish the FMRP variability detected by the assay. Since residual DBS from state-mandated newborn screening for metabolic and genetic diseases are available for research and represent a uniform age, we decided to use 2,000 randomly selected anonymous fresh newborn DBS to characterize the variability of FMRP expression in the newborn population, which is a potential target for screening with this assay. Considering the estimated prevalence of fragile X, it was relatively unlikely that we would find any affected individuals (i.e. those with virtually no FMRP) among the 1,000 males and 1,000 females sampled. We were primarily interested, however, in the variability of FMRP expression, especially the low extreme of normal expression. To evaluate the effect of long-term storage of newborn DBS on the qFMRP assay, we conducted a retrospective study of 74 aged newborn DBS that had been stored for up to 7 years. These DBS were from a different newborn screening program and included samples from 6 affected (full mutation) males.
Our data suggest that the qFMRP assay could serve as the initial step in a fragile X newborn screening program. In a second screening step, characterization of the CGG repeat size and/or methylation status of the FMR1 alleles associated with DBS at the low extreme of the FMRP distribution would indicate the presence of the fragile X syndrome. The correlation between highly reduced or absent FMRP expression and the fragile X syndrome is firmly established for males. This correlation is likely to apply to females as well, but further studies will be necessary to firmly establish this link. Fragile X screening by qFMRP has distinct advantages over techniques that detect a CGG expansion, since it avoids the ethical issues associated with identifying asymptomatic premutation carrier infants and is consistent with current newborn screening technology and cost parameters.
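The two-step strategy described above — flag DBS whose qFMRP level falls more than two standard deviations below the population mean, then reflex those samples to CGG-repeat/methylation analysis — can be sketched as follows. This is an illustrative sketch, not the authors' pipeline; the function name, threshold parameter, and example values are ours.

```python
from statistics import mean, stdev

def flag_low_fmrp(fmrp_levels, n_sd=2.0):
    """Return indices of DBS samples whose FMRP level falls more than
    n_sd standard deviations below the population mean. In the two-step
    scheme, these samples would be reflexed to CGG-repeat sizing and/or
    methylation analysis of FMR1 (illustrative threshold, per the text)."""
    mu = mean(fmrp_levels)
    sd = stdev(fmrp_levels)
    cutoff = mu - n_sd * sd
    return [i for i, level in enumerate(fmrp_levels) if level < cutoff]

# Hypothetical qFMRP readings: one sample far below the rest is flagged
# for second-step DNA analysis.
levels = [9.8, 10.2, 11.0, 10.5, 9.9, 10.1, 10.7, 0.4]
print(flag_low_fmrp(levels))  # → [7]
```

In practice the flagging threshold would be calibrated against the observed newborn FMRP distribution rather than fixed at two standard deviations.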
Background: The fragile X syndrome (FXS) results from mutation of the FMR1 gene that prevents expression of its gene product, FMRP. We previously characterized 215 dried blood spots (DBS), representing different FMR1 genotypes and ages, with a Luminex-based immunoassay (qFMRP). We found variable FMRP levels in the normal samples and identified affected males by the drastic reduction of FMRP. Methods: Here, to establish the variability of FMRP expression in a larger random population, we quantified FMRP in 2,000 anonymous fresh newborn DBS. We also evaluated the effect of long-term storage on qFMRP by retrospectively assaying 74 aged newborn DBS that had been stored for 7-84 months, which included normal and full mutation individuals. These analyses were performed on 3 mm DBS disks. To identify the alleles associated with the lowest FMRP levels in the fresh DBS, we analyzed the DNA in the samples that were more than two standard deviations below the mean. Results: Analysis of the fresh newborn DBS revealed a broad distribution of FMRP, with a mean approximately 7-fold higher than the one we previously reported for fresh DBS in normal adults, and no samples whose FMRP level indicated FXS. DNA analysis of the lowest-FMRP DBS showed that these represented the low extreme of the normal range and included a female carrying a 165 CGG repeat premutation. In the retrospective study of aged newborn DBS, the FMRP mean of the normal samples was less than 30% of the mean of the fresh DBS. Despite the degraded signal from these aged DBS, qFMRP identified the FXS individuals. Conclusions: The assay showed that newborn DBS contain high levels of FMRP that will allow identification of males, and potentially females, affected by FXS. The assay is also an effective screening tool for aged DBS stored for up to four years.
9,851
344
13
[ "fmrp", "dbs", "normal", "newborn", "allele", "newborn dbs", "analysis", "samples", "repeat", "12" ]
[ "test", "test" ]
null
[CONTENT] Fragile X syndrome | FMR1 | FMRP | Dried blood spots | DBS | Capture immune assay | CGG repeats | Luminex | Newborn screening [SUMMARY]
null
[CONTENT] Fragile X syndrome | FMR1 | FMRP | Dried blood spots | DBS | Capture immune assay | CGG repeats | Luminex | Newborn screening [SUMMARY]
[CONTENT] Fragile X syndrome | FMR1 | FMRP | Dried blood spots | DBS | Capture immune assay | CGG repeats | Luminex | Newborn screening [SUMMARY]
[CONTENT] Fragile X syndrome | FMR1 | FMRP | Dried blood spots | DBS | Capture immune assay | CGG repeats | Luminex | Newborn screening [SUMMARY]
[CONTENT] Fragile X syndrome | FMR1 | FMRP | Dried blood spots | DBS | Capture immune assay | CGG repeats | Luminex | Newborn screening [SUMMARY]
[CONTENT] Blood Preservation | Dried Blood Spot Testing | Female | Fragile X Mental Retardation Protein | Fragile X Syndrome | Humans | Infant, Newborn | Male | Retrospective Studies | Time Factors [SUMMARY]
null
[CONTENT] Blood Preservation | Dried Blood Spot Testing | Female | Fragile X Mental Retardation Protein | Fragile X Syndrome | Humans | Infant, Newborn | Male | Retrospective Studies | Time Factors [SUMMARY]
[CONTENT] Blood Preservation | Dried Blood Spot Testing | Female | Fragile X Mental Retardation Protein | Fragile X Syndrome | Humans | Infant, Newborn | Male | Retrospective Studies | Time Factors [SUMMARY]
[CONTENT] Blood Preservation | Dried Blood Spot Testing | Female | Fragile X Mental Retardation Protein | Fragile X Syndrome | Humans | Infant, Newborn | Male | Retrospective Studies | Time Factors [SUMMARY]
[CONTENT] Blood Preservation | Dried Blood Spot Testing | Female | Fragile X Mental Retardation Protein | Fragile X Syndrome | Humans | Infant, Newborn | Male | Retrospective Studies | Time Factors [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] fmrp | dbs | normal | newborn | allele | newborn dbs | analysis | samples | repeat | 12 [SUMMARY]
null
[CONTENT] fmrp | dbs | normal | newborn | allele | newborn dbs | analysis | samples | repeat | 12 [SUMMARY]
[CONTENT] fmrp | dbs | normal | newborn | allele | newborn dbs | analysis | samples | repeat | 12 [SUMMARY]
[CONTENT] fmrp | dbs | normal | newborn | allele | newborn dbs | analysis | samples | repeat | 12 [SUMMARY]
[CONTENT] fmrp | dbs | normal | newborn | allele | newborn dbs | analysis | samples | repeat | 12 [SUMMARY]
[CONTENT] mutation | fmrp | expression | repeats | fmrp expression | alleles | 200 | premutation | fmr1 | 200 repeats [SUMMARY]
null
[CONTENT] fmrp | allele | dbs | normal | newborn | sample | mean | analysis | newborn dbs | standard deviation [SUMMARY]
[CONTENT] screening | fmrp | fragile | newborn | firmly | correlation | step | high | dbs | associated [SUMMARY]
[CONTENT] fmrp | dbs | normal | newborn | months | allele | mutation | samples | 000 | males [SUMMARY]
[CONTENT] fmrp | dbs | normal | newborn | months | allele | mutation | samples | 000 | males [SUMMARY]
[CONTENT] FXS | FMR1 | FMRP ||| 215 | Luminex ||| FMRP [SUMMARY]
null
[CONTENT] DBS | FMRP | approximately 7-fold | FXS ||| 165 | CGG ||| DBS | FMRP | less than 30% ||| DBS | FXS [SUMMARY]
[CONTENT] FXS ||| DBS | up to four years [SUMMARY]
[CONTENT] FXS | FMR1 | FMRP ||| 215 | Luminex ||| FMRP ||| FMRP | FMRP | 2,000 | DBS ||| 74 | DBS | 7-84 months ||| 3 mm ||| more than two ||| ||| DBS | FMRP | approximately 7-fold | FXS ||| 165 | CGG ||| DBS | FMRP | less than 30% ||| DBS | FXS ||| FXS ||| DBS | up to four years [SUMMARY]
[CONTENT] FXS | FMR1 | FMRP ||| 215 | Luminex ||| FMRP ||| FMRP | FMRP | 2,000 | DBS ||| 74 | DBS | 7-84 months ||| 3 mm ||| more than two ||| ||| DBS | FMRP | approximately 7-fold | FXS ||| 165 | CGG ||| DBS | FMRP | less than 30% ||| DBS | FXS ||| FXS ||| DBS | up to four years [SUMMARY]
Prevalence of Dyslipidemias in Three Regions in Venezuela: The VEMSOLS Study Results.
29538522
The prevalence of dyslipidemia in multiple regions of Venezuela is unknown. The Venezuelan Metabolic Syndrome, Obesity and Lifestyle Study (VEMSOLS) was undertaken to evaluate cardiometabolic risk factors in Venezuela.
BACKGROUND
During the years 2006 to 2010, 1320 subjects aged 20 years or older were selected by multistage stratified random sampling from all households in five municipalities from 3 regions of Venezuela: Lara State (Western region), Merida State (Andean region), and Capital District (Capital region). Anthropometric measurements and biochemical analysis were obtained from each participant. Dyslipidemia was defined according to the NCEP/ATPIII definitions.
METHODS
Mean age was 44.8 ± 0.39 years and 68.5% were females. The lipid abnormalities related to the metabolic syndrome (low HDL-c [58.6%; 95% CI 54.9 - 62.1] and elevated triglycerides [39.7%; 36.1 - 43.2]) were the most prevalent lipid alterations, followed by atherogenic dyslipidemia (25.9%; 22.7 - 29.1), elevated LDL-c (23.3%; 20.2 - 26.4), hypercholesterolemia (22.2%; 19.2 - 25.2), and mixed dyslipidemia (8.9%; 6.8 - 11.0). Dyslipidemia was more prevalent with increasing body mass index.
RESULTS
Dyslipidemias are prevalent cardiometabolic risk factors in Venezuela. Among these, a higher prevalence of low HDL is a condition also consistently reported in Latin America.
CONCLUSION
[ "Adult", "Cross-Sectional Studies", "Dyslipidemias", "Female", "Humans", "Life Style", "Male", "Prevalence", "Risk Factors", "Spatial Analysis", "Venezuela" ]
5831299
Introduction
In Venezuela, cardiovascular disease (CVD), represented by ischemic heart disease (16.3%) and stroke (7.7%), was the major cause of death in 2012.1 Both are strongly related to modifiable risk factors. According to the INTERHEART2 and INTERSTROKE3 studies, dyslipidemias, assessed as increased apolipoprotein levels (ApoB/ApoA1 ratio), accounted for 49.2% and 25.9% of the attributable risk for acute myocardial infarction and stroke, respectively. Randomized controlled clinical trials have consistently demonstrated that a reduction in low-density lipoprotein cholesterol (LDL-c) with statin therapy reduces the incidence of heart attack and ischemic stroke. For every 38.6 mg/dL reduction in LDL-c, the annual rate of major vascular events decreases by about one-fifth.4 Studies evaluating the prevalence of dyslipidemias in Venezuela have been compiled.5 However, most of them have small samples, and only two are representative of a city or a state. In 1,848 adults from the city of Barquisimeto, in the western region of the country, the Cardiovascular Risk Factor Multiple Evaluation in Latin America (CARMELA) study6 reported the lowest prevalence of hypercholesterolemia (cholesterol ≥ 240 mg/dL) observed in Latin America (5.7%).6 In 3,108 adults from the state of Zulia, Florez et al.7 documented a 24.1% prevalence of atherogenic dyslipidemia (high triglycerides and low levels of high-density lipoprotein cholesterol [HDL-c]). This prevalence was higher in men than in women and increased with age. No study in Venezuela has included more than one region, prompting the design of the Venezuelan Metabolic Syndrome, Obesity and Lifestyle Study (VEMSOLS). This paper presents the results of VEMSOLS, specifically the prevalence of dyslipidemia in five populations of three regions in Venezuela.
Methods
Design and Subjects
An observational, cross-sectional study was designed to determine the prevalence of cardiometabolic risk factors in a sub-national sample of Venezuela. Five municipalities from three regions were evaluated: Palavecino, in Lara State (urban), from the Western region; Ejido (Merida city), in Merida State (urban), and Rangel (Páramo area), in Merida State (rural), both from the Andes region; Catia La Mar, in Vargas State (urban), and Sucre, in the Capital District (urban), both from the Capital region. From 2006 to 2010, a total of 1,320 subjects aged 20 years or more, who had lived in their houses for at least six months, were selected by two-stage random sampling. Three different geographic regions of the country were assessed: the Andes, mountains in the south; the Western region, llanos in the middle; and the Capital District, coast in the north. Each region was stratified by municipalities and one was randomly selected. A map and a census of each location were used to delimit the streets or blocks and to select the households to visit in each municipality. After selecting the sector to be surveyed in each location, visits to households started from number 1 onwards, skipping every two houses. Pregnant women and participants unable to stand up and/or communicate verbally were excluded. All participants signed the informed consent form for participation.
The sample size was calculated to detect a hypercholesterolemia prevalence of 5.7%6 (the lowest prevalent condition reported in Venezuela), with a standard deviation of 1.55%, which allows calculation of the 95% confidence interval (95% CI). The minimum estimated number of subjects to be evaluated was 830. Overall, 1,320 subjects were evaluated (89.4% from urban and 10.6% from rural areas).
Clinical and biochemical data
All subjects were evaluated in their households or in a nearby health center by a trained health team following a standardized protocol. Each home was visited twice. In the first visit, the participants received information about the study and signed the written informed consent form. Demographic and clinical information was obtained using a standardized questionnaire. Weight was measured with as few clothes as possible, without shoes, using a calibrated scale. Height was measured using a metric tape on the wall. Waist circumference was measured with a metric tape at the iliac crest at the end of expiration. Body mass index was calculated (BMI: weight[kg]/height[m]2). In the second visit, blood samples were drawn after 12 hours of overnight fasting. Samples were centrifuged for 15 minutes at 3,000 rpm within 30-40 minutes of collection and transported on dry ice to the central laboratory, where they were stored at -40ºC until analysis. Data from participants who were absent during the first visit were collected. Total cholesterol,8 triglycerides,9 LDL-c, and HDL-c10 were determined by standard enzymatic colorimetric methods.
Categorization of variables
Dyslipidemia was defined according to the National Cholesterol Education Program/Adult Treatment Panel III (NCEP/ATPIII)11 and categorized into 6 types. Of these, four were isolated dyslipidemias: low HDL-c (hypoalphalipoproteinemia), < 40 mg/dL in men and < 50 mg/dL in women; high triglycerides, ≥ 150 mg/dL; hypercholesterolemia, total cholesterol ≥ 240 mg/dL; and high LDL-c, ≥ 160 mg/dL. Two were combined dyslipidemias: atherogenic dyslipidemia (triglycerides ≥ 150 mg/dL + low HDL-c) and mixed dyslipidemia (triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL). Additionally, individuals were classified according to BMI as normal weight (BMI < 25 kg/m2), overweight (BMI ≥ 25 kg/m2 and < 30 kg/m2), or obese (BMI ≥ 30 kg/m2).12 Abdominal obesity was defined as waist circumference ≥ 94 cm in men and ≥ 90 cm in women.13
Statistical analysis
All calculations were performed using the SPSS 20 software (IBM Corp., Armonk, NY, USA). Normal distribution of all variables was verified with the Kolmogorov-Smirnov test. All variables were continuous and data were presented as mean ± standard deviation (SD). Differences between mean values were assessed with the t-test. Proportions of subjects with dyslipidemia were presented as prevalence rates with 95% confidence intervals (CI). The Chi-square test was applied to compare frequencies by gender, nutritional status, and abdominal obesity. A p-value < 0.05 was considered statistically significant.
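As a worked illustration of the NCEP/ATPIII categories above, the six dyslipidemia types can be derived from a single lipid panel. This is a sketch under the cutoffs stated in "Categorization of variables"; the function and label names are ours (the study itself used SPSS, not code like this).

```python
def classify_dyslipidemias(total_chol, trig, ldl, hdl, male):
    """Return the set of NCEP/ATPIII dyslipidemia labels for one subject.
    All lipid values in mg/dL; cutoffs as described in the Methods."""
    low_hdl = hdl < (40 if male else 50)       # sex-specific HDL-c cutoff
    high_trig = trig >= 150
    labels = set()
    if low_hdl:
        labels.add("low HDL-c")
    if high_trig:
        labels.add("high triglycerides")
    if total_chol >= 240:
        labels.add("hypercholesterolemia")
    if ldl >= 160:
        labels.add("elevated LDL-c")
    # Combined dyslipidemias build on the isolated components:
    if high_trig and low_hdl:
        labels.add("atherogenic dyslipidemia")
    if high_trig and total_chol >= 240:
        labels.add("mixed dyslipidemia")
    return labels

# A woman with triglycerides 180 mg/dL and HDL-c 45 mg/dL meets both
# components of atherogenic dyslipidemia:
print(classify_dyslipidemias(total_chol=200, trig=180, ldl=120, hdl=45, male=False))
```

Applying such a classifier subject by subject and tallying the labels is, in effect, how the prevalence rates in the Results are obtained.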
Results
Characteristics of the subjects
Two thirds of the study subjects were female. Men had higher triglycerides and waist circumference and lower HDL-c than women (Table 1). Age, BMI, total cholesterol, and LDL-c were similar.
Table 1. Subject Characteristics. Data are mean ± SD; gender differences according to t-test.
Prevalence of dyslipidemia
Low HDL-c was the most prevalent lipid alteration, present in nearly seven of ten women and about four of ten men (p < 0.01), followed by high triglycerides, present in half of the men and one third of the women (p < 0.01). Their combination, atherogenic dyslipidemia, was observed in 25.9% of subjects, followed in frequency by elevated LDL-c and total cholesterol levels (Table 2). Mixed dyslipidemia was observed in only 8.9% of the subjects and was more frequent among men than women. An increasing prevalence of all types of dyslipidemia was found when individuals were classified according to BMI and the presence of abdominal obesity (Figures 1 and 2). The prevalence of hypercholesterolemia, high LDL-c, and mixed dyslipidemia was similar in overweight and obese subjects, but higher than that found in the normal weight group.
Table 2. Prevalence of Dyslipidemias by Gender. Data are shown as percentage (95% CI); gender differences according to the Chi-square test.
Figure 1. Prevalence of dyslipidemia by nutritional status. *Difference in the prevalence of dyslipidemia according to nutritional status by Chi-square test (p < 0.01). High triglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and < 50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150 mg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240 mg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia: triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.
Figure 2. Prevalence of dyslipidemias by abdominal obesity (waist circumference ≥ 94 cm in men and ≥ 90 cm in women). Significant difference in the prevalence of dyslipidemia between abdominal obesity and normal waist circumference: *(p < 0.001), †(p = 0.002). High triglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and < 50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150 mg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240 mg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia: triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.
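The prevalence rates with 95% confidence intervals reported above can be computed with the standard normal-approximation (Wald) formula, p ± 1.96·√(p(1−p)/n). A minimal sketch follows; the paper does not state which interval method was used, and the case count below is hypothetical, so this illustrates the calculation rather than reproducing Table 2.

```python
from math import sqrt

def prevalence_ci(cases, n, z=1.96):
    """Prevalence (in %) with a Wald 95% CI, from a case count and
    sample size. z = 1.96 corresponds to the 95% confidence level."""
    p = cases / n
    half_width = z * sqrt(p * (1 - p) / n)
    return 100 * p, 100 * (p - half_width), 100 * (p + half_width)

# Hypothetical: 342 of 1,320 subjects with atherogenic dyslipidemia.
prev, lo, hi = prevalence_ci(cases=342, n=1320)
print(f"{prev:.1f}% (95% CI {lo:.1f} - {hi:.1f})")  # → 25.9% (95% CI 23.5 - 28.3)
```

For prevalences near 0% or 100%, or for small subgroups, an exact or Wilson interval would be preferable to the Wald approximation.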
Conclusions
This is the first report presenting the prevalence of dyslipidemia in more than one region of Venezuela. The results are consistent with other Latin American studies, which report low HDL-c as the most frequent lipid alteration in the region. Additionally, a high prevalence of hypercholesterolemia was observed. Both conditions could be related to CVD, which represents a major public health problem in the region. One suggestion resulting from our findings is to include a complete lipid profile in medical check-ups, because in some Latin American countries it is common to check only total cholesterol. The triggers of these changes need to be determined in future studies. The implementation of strategies focused on proper nutrition, more physical activity, and avoidance of weight gain is imperative.
[ "Design and Subjects", "Clinical and biochemical data", "Categorization of variables", "Statistical analysis", "Characteristics of the subjects", "Prevalence of dyslipidemia" ]
[ "An observational, cross-sectional study was designed to determine the prevalence\nof cardiometabolic risk factors in a sub-national sample of Venezuela. Five\nmunicipalities from three regions were evaluated: Palavecino, in Lara State\n(urban), from the Western region; Ejido (Merida city), in Merida State (urban),\nand Rangel (Páramo area), in Merida State (rural), both from the Andes\nregion; Catia La Mar, in Vargas state (urban), and Sucre, in the Capital\nDistrict (urban), both from the Capital region. From 2006 to 2010, a total of\n1,320 subjects aged 20 years or more, who had lived in their houses for at least\nsix months, were selected by a two-stage random sampling. Three different\ngeographic regions of the country - Andes, mountains at the south; Western,\nllanos in the middle; and the Capital District, coast at the north - were\nassessed. Each region was stratified by municipalities and one was randomly\nselected. A map and a census of each location were required to delimit the\nstreets or blocks, and to select the households to visit in each municipality.\nAfter selecting the sector to be surveyed in each location, the visits to\nhouseholds started from number 1 onwards, skipping every two houses. Pregnant\nwomen and participants unable to stand up and/or communicate verbally were\nexcluded. All participants signed the informed consent form for\nparticipation.\nThe sample size was calculated to detect the prevalence of hypercholesterolemia\n(the lowest prevalent condition reported in Venezuela) in 5.7%6, with standard deviation of\n1.55%, which allows to calculate the 95% confidence interval (95% CI). The\nminimal estimated number of subjects to be evaluated was 830. Overall, 1,320\nsubjects were evaluated (89.4% from the urban and 10.6% from the rural\narea).", "All subjects were evaluated in their households or in a nearby health center by a\ntrained health team according to a standardized protocol. Each home was visited\ntwice. 
In the first visit, the participants received information about the study\nand signed the written informed consent form. Demographic and clinical\ninformation was obtained using a standardized questionnaire. Weight was measured\nwith as few clothes as possible, without shoes, using a calibrated scale. Height\nwas measured using a metric tape on the wall. Waist circumference was measured\nwith a metric tape at the iliac crest at the end of the expiration. Body mass\nindex was calculated (BMI: weight[kg]/height[m]2).\nIn the second visit, blood samples were drawn after 12 hours of overnight\nfasting. Then, they were centrifuged for 15 minutes at 3000 rpm, within 30-40\nminutes after collection, and transported with dry ice to the central\nlaboratory, where they were properly stored at -40ºC until analysis. Data from\nparticipants who were absent during the first visit were collected. Total\ncholesterol,8\ntriglycerides,9 LDL-c,\nand HDL-c10 were determined by\nstandard enzymatic colorimetric methods.", "Dyslipidemia was defined according the National Cholesterol Education Program\n/Adult Treatment Panel III (NCEP/ATPIII)11, being categorized in 6 types. Of these, four were\nisolated dyslipidemias: Low HDL-c (hyperalphalipoproteinemia) < 40 mg/dL in\nmen and < 50 mg/dL in women; high triglycerides: ≥ 150 mg/dL;\nhypercholesterolemia (≥ 240 mg/dL of total cholesterol); high LDL-c\n≥ 160 mg/dL; and two were combined dyslipidemias: atherogenic\ndyslipidemia (triglycerides ≥ 150 mg/dL + low HDL-c) and mixed\ndyslipidemia (triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240\nmg/dL). Additionally, individuals were classified according to BMI as normal\nweight (BMI < 25 kg/m2), overweight (BMI ≥ 25\nkg/m2 and < 30 kg/m2), or obese (BMI ≥ 30\nkg/m2).12\nAbdominal obesity was established by waist circumference ≥ 94 cm in men\nand ≥ 90 cm in women.13", "All calculations were performed using the SPSS 20 software (IBM corp. Released\n2011. Armonk, NY: USA). 
It was verified that all variables had normal\ndistribution using a normality test (Kolmogorov-Smirnov). All variables were\ncontinuous and data were presented as mean ± standard deviation (SD).\nDifferences between mean values were assessed with the t-test. Proportions of\nsubjects with dyslipidemia were presented as prevalence rates and 95% confidence\nintervals (CI). A Chi-square test was applied to compare different frequencies\nby gender, nutritional status and abdominal obesity. P-value of < 0.05 was\nconsidered statistically significant.", "Two thirds of the study subjects were female. Men had higher triglycerides, waist\ncircumference and lower HDL-c than women (Table\n1). Age, BMI, total cholesterol and LDL-c were similar.\nSubject Characteristics\nData are mean ± SD. Gender differences according t-test.", "Low HDL-c was the most prevalent lipid change present in nearly seven of ten\nwomen, and in about four of ten men (p < 0.01), followed by high\ntriglycerides that were present in half of the men and in one third of women (p\n< 0.01). Their combination, atherogenic dyslipidemia, was observed in 25.9%\nof subjects, followed in frequency by increasing LDL-c and total cholesterol\nlevels (Table 2). Mixed dyslipidemia was\nobserved in only 8.9% of the subjects, and was higher among men than in women.\nAn increasing prevalence of all types of dyslipidemias was found when\nindividuals were classified according to BMI and at the presence of abdominal\nobesity (Figure 1 and Figure 2). The prevalence of hypercholesterolemia, high\nLDL-c and mixed dyslipidemia were similar in overweight and obese subjects, but\nhigher than those found in the normal weight group.\nPrevalence of Dyslipidemias by Gender\nData are showed in percentage (95% CI). Gender differences according\nto the Chi-square test.\n\nFigure 1Prevalence of dyslipidemia by nutritional status.*Difference in the prevalence of dyslipidemia according to\nnutritional status using Chi-square (p < 0.01). 
High\ntriglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and <\n50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150\nmg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240\nmg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia:\ntriglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nPrevalence of dyslipidemia by nutritional status.\n*Difference in the prevalence of dyslipidemia according to\nnutritional status using Chi-square (p < 0.01). High\ntriglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and <\n50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150\nmg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240\nmg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia:\ntriglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nFigure 2Prevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).Significant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nPrevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).\nSignificant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL." ]
[ null, null, null, null, null, null ]
[ "Introduction", "Methods", "Design and Subjects", "Clinical and biochemical data", "Categorization of variables", "Statistical analysis", "Results", "Characteristics of the subjects", "Prevalence of dyslipidemia", "Discussion", "Conclusions" ]
[ "In Venezuela, cardiovascular disease (CVD), represented by ischemic heart disease\n(16.3%) and stroke (7.7%), was the major cause of death in 2012.1 Both are strongly related to\nmodifiable risk factors. According to the INTERHEART2 and the INTERSTROKE3 studies, dyslipidemias, assessed as increased levels\nof apolipoprotein (ApoB/ApoA1 ratio), accounted for 49.2% and 25.9% of the\nattributable risk for acute myocardial infarction and stroke, respectively.\nRandomized controlled clinical trials have consistently demonstrated that a\nreduction in low-density lipoprotein cholesterol (LDL-C) with statin therapy reduces\nthe incidence of heart attack and ischemic stroke. For every 38.6 mg/dL of LDL-c\nreduction, the annual rate of major vascular events decreases by about\none-fifth.4\nStudies evaluating the prevalence of dyslipidemias in Venezuela have been\ncompiled.5 However, most of\nthem have small samples, and only two are representative of a city or a state. In\n1,848 adults from the city of Barquisimeto, in the western region of the country,\nthe Cardiovascular Risk Factor Multiple Evaluation in Latin America (CARMELA)\nstudy6 reported the lowest\nprevalence of hypercholesterolemia (cholesterol ≥ 240 mg/dL) observed in\nLatin America (5.7%).6 In 3,108\nadults from the state of Zulia, Florez et al.7 documented a prevalence of atherogenic\ndyslipidemia (high triglycerides and low levels of high-density lipoprotein\ncholesterol [HDL-c]) of 24.1%. This number was higher in men than in women, and increased with age. No study in\nVenezuela has included more than one region, prompting the design of the Venezuelan\nMetabolic Syndrome, Obesity and Lifestyle Study (VEMSOLS). 
This paper presents the\nresults of VEMSOLS, specifically the prevalence of dyslipidemia in five populations\nof three regions in Venezuela.", " Design and Subjects An observational, cross-sectional study was designed to determine the prevalence\nof cardiometabolic risk factors in a sub-national sample of Venezuela. Five\nmunicipalities from three regions were evaluated: Palavecino, in Lara State\n(urban), from the Western region; Ejido (Merida city), in Merida State (urban),\nand Rangel (Páramo area), in Merida State (rural), both from the Andes\nregion; Catia La Mar, in Vargas State (urban), and Sucre, in the Capital\nDistrict (urban), both from the Capital region. From 2006 to 2010, a total of\n1,320 subjects aged 20 years or more, who had lived in their houses for at least\nsix months, were selected by two-stage random sampling. Three different\ngeographic regions of the country - Andes, mountains at the south; Western,\nllanos in the middle; and the Capital District, coast at the north - were\nassessed. Each region was stratified by municipalities and one was randomly\nselected. A map and a census of each location were required to delimit the\nstreets or blocks, and to select the households to visit in each municipality.\nAfter selecting the sector to be surveyed in each location, the visits to\nhouseholds started from number 1 onwards, skipping every two houses. Pregnant\nwomen and participants unable to stand up and/or communicate verbally were\nexcluded. All participants signed the informed consent form for\nparticipation.\nThe sample size was calculated to detect the prevalence of hypercholesterolemia\n(the least prevalent condition reported in Venezuela) at 5.7%6, with a standard deviation of\n1.55%, allowing calculation of the 95% confidence interval (95% CI). The\nminimal estimated number of subjects to be evaluated was 830. 
Overall, 1,320\nsubjects were evaluated (89.4% from the urban area and 10.6% from the rural\narea).\nAn observational, cross-sectional study was designed to determine the prevalence\nof cardiometabolic risk factors in a sub-national sample of Venezuela. Five\nmunicipalities from three regions were evaluated: Palavecino, in Lara State\n(urban), from the Western region; Ejido (Merida city), in Merida State (urban),\nand Rangel (Páramo area), in Merida State (rural), both from the Andes\nregion; Catia La Mar, in Vargas State (urban), and Sucre, in the Capital\nDistrict (urban), both from the Capital region. From 2006 to 2010, a total of\n1,320 subjects aged 20 years or more, who had lived in their houses for at least\nsix months, were selected by two-stage random sampling. Three different\ngeographic regions of the country - Andes, mountains at the south; Western,\nllanos in the middle; and the Capital District, coast at the north - were\nassessed. Each region was stratified by municipalities and one was randomly\nselected. A map and a census of each location were required to delimit the\nstreets or blocks, and to select the households to visit in each municipality.\nAfter selecting the sector to be surveyed in each location, the visits to\nhouseholds started from number 1 onwards, skipping every two houses. Pregnant\nwomen and participants unable to stand up and/or communicate verbally were\nexcluded. All participants signed the informed consent form for\nparticipation.\nThe sample size was calculated to detect the prevalence of hypercholesterolemia\n(the least prevalent condition reported in Venezuela) at 5.7%6, with a standard deviation of\n1.55%, allowing calculation of the 95% confidence interval (95% CI). The\nminimal estimated number of subjects to be evaluated was 830. 
Overall, 1,320\nsubjects were evaluated (89.4% from the urban area and 10.6% from the rural\narea).\n Clinical and biochemical data All subjects were evaluated in their households or in a nearby health center by a\ntrained health team according to a standardized protocol. Each home was visited\ntwice. In the first visit, the participants received information about the study\nand signed the written informed consent form. Demographic and clinical\ninformation was obtained using a standardized questionnaire. Weight was measured\nwith as few clothes as possible, without shoes, using a calibrated scale. Height\nwas measured using a metric tape on the wall. Waist circumference was measured\nwith a metric tape at the iliac crest at the end of expiration. Body mass\nindex was calculated (BMI: weight[kg]/height[m]²).\nIn the second visit, blood samples were drawn after 12 hours of overnight\nfasting. Then, they were centrifuged for 15 minutes at 3000 rpm, within 30-40\nminutes after collection, and transported with dry ice to the central\nlaboratory, where they were properly stored at -40ºC until analysis. Data from\nparticipants who had been absent during the first visit were collected at this\nvisit. Total cholesterol,8\ntriglycerides,9 LDL-c,\nand HDL-c10 were determined by\nstandard enzymatic colorimetric methods.\nAll subjects were evaluated in their households or in a nearby health center by a\ntrained health team according to a standardized protocol. Each home was visited\ntwice. In the first visit, the participants received information about the study\nand signed the written informed consent form. Demographic and clinical\ninformation was obtained using a standardized questionnaire. Weight was measured\nwith as few clothes as possible, without shoes, using a calibrated scale. Height\nwas measured using a metric tape on the wall. Waist circumference was measured\nwith a metric tape at the iliac crest at the end of expiration. 
Body mass\nindex was calculated (BMI: weight[kg]/height[m]²).\nIn the second visit, blood samples were drawn after 12 hours of overnight\nfasting. Then, they were centrifuged for 15 minutes at 3000 rpm, within 30-40\nminutes after collection, and transported with dry ice to the central\nlaboratory, where they were properly stored at -40ºC until analysis. Data from\nparticipants who had been absent during the first visit were collected at this\nvisit. Total cholesterol,8\ntriglycerides,9 LDL-c,\nand HDL-c10 were determined by\nstandard enzymatic colorimetric methods.\n Categorization of variables Dyslipidemia was defined according to the National Cholesterol Education Program\n/Adult Treatment Panel III (NCEP/ATPIII)11 and categorized into 6 types. Of these, four were\nisolated dyslipidemias: low HDL-c (hypoalphalipoproteinemia): < 40 mg/dL in\nmen and < 50 mg/dL in women; high triglycerides: ≥ 150 mg/dL;\nhypercholesterolemia (≥ 240 mg/dL of total cholesterol); high LDL-c:\n≥ 160 mg/dL; and two were combined dyslipidemias: atherogenic\ndyslipidemia (triglycerides ≥ 150 mg/dL + low HDL-c) and mixed\ndyslipidemia (triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240\nmg/dL). Additionally, individuals were classified according to BMI as normal\nweight (BMI < 25 kg/m²), overweight (BMI ≥ 25\nkg/m² and < 30 kg/m²), or obese (BMI ≥ 30\nkg/m²).12\nAbdominal obesity was established by waist circumference ≥ 94 cm in men\nand ≥ 90 cm in women.13\nDyslipidemia was defined according to the National Cholesterol Education Program\n/Adult Treatment Panel III (NCEP/ATPIII)11 and categorized into 6 types. 
Of these, four were\nisolated dyslipidemias: low HDL-c (hypoalphalipoproteinemia): < 40 mg/dL in\nmen and < 50 mg/dL in women; high triglycerides: ≥ 150 mg/dL;\nhypercholesterolemia (≥ 240 mg/dL of total cholesterol); high LDL-c:\n≥ 160 mg/dL; and two were combined dyslipidemias: atherogenic\ndyslipidemia (triglycerides ≥ 150 mg/dL + low HDL-c) and mixed\ndyslipidemia (triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240\nmg/dL). Additionally, individuals were classified according to BMI as normal\nweight (BMI < 25 kg/m²), overweight (BMI ≥ 25\nkg/m² and < 30 kg/m²), or obese (BMI ≥ 30\nkg/m²).12\nAbdominal obesity was established by waist circumference ≥ 94 cm in men\nand ≥ 90 cm in women.13\n Statistical analysis All calculations were performed using the SPSS 20 software (IBM Corp. Released\n2011. Armonk, NY: USA). It was verified with the Kolmogorov-Smirnov test that all variables had a normal\ndistribution. All variables were\ncontinuous and data were presented as mean ± standard deviation (SD).\nDifferences between mean values were assessed with the t-test. Proportions of\nsubjects with dyslipidemia were presented as prevalence rates and 95% confidence\nintervals (CI). A Chi-square test was applied to compare frequencies\nby gender, nutritional status and abdominal obesity. A p-value of < 0.05 was\nconsidered statistically significant.\nAll calculations were performed using the SPSS 20 software (IBM Corp. Released\n2011. Armonk, NY: USA). It was verified with the Kolmogorov-Smirnov test that all variables had a normal\ndistribution. All variables were\ncontinuous and data were presented as mean ± standard deviation (SD).\nDifferences between mean values were assessed with the t-test. Proportions of\nsubjects with dyslipidemia were presented as prevalence rates and 95% confidence\nintervals (CI). A Chi-square test was applied to compare frequencies\nby gender, nutritional status and abdominal obesity. 
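As an illustration, the same three tests named in this analysis (Kolmogorov-Smirnov normality check, t-test, Chi-square) can be reproduced outside SPSS. The sketch below uses SciPy on simulated data; every number in it is invented for demonstration and is not a VEMSOLS measurement:

```python
# Simulated analogue of the paper's SPSS workflow; all data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
men = rng.normal(170, 45, 300)      # e.g. triglycerides (mg/dL), men
women = rng.normal(135, 40, 600)    # e.g. triglycerides (mg/dL), women

# Kolmogorov-Smirnov test of normality on standardized values
ks_stat, ks_p = stats.kstest((men - men.mean()) / men.std(ddof=1), "norm")

# t-test for the gender difference in means
t_stat, t_p = stats.ttest_ind(men, women)

# Chi-square test comparing a dyslipidemia frequency by gender
# (2x2 counts invented: with / without high triglycerides)
table = np.array([[148, 152],    # men
                  [198, 402]])   # women
chi2_stat, chi2_p, dof, expected = stats.chi2_contingency(table)

significant = {"t-test": t_p < 0.05, "chi-square": chi2_p < 0.05}
```

With a 35 mg/dL simulated mean difference and a roughly 49% vs. 33% frequency split, both comparisons come out significant, mirroring the kind of gender contrasts reported in the results.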
A p-value of < 0.05 was\nconsidered statistically significant.", "An observational, cross-sectional study was designed to determine the prevalence\nof cardiometabolic risk factors in a sub-national sample of Venezuela. Five\nmunicipalities from three regions were evaluated: Palavecino, in Lara State\n(urban), from the Western region; Ejido (Merida city), in Merida State (urban),\nand Rangel (Páramo area), in Merida State (rural), both from the Andes\nregion; Catia La Mar, in Vargas State (urban), and Sucre, in the Capital\nDistrict (urban), both from the Capital region. From 2006 to 2010, a total of\n1,320 subjects aged 20 years or more, who had lived in their houses for at least\nsix months, were selected by two-stage random sampling. Three different\ngeographic regions of the country - Andes, mountains at the south; Western,\nllanos in the middle; and the Capital District, coast at the north - were\nassessed. Each region was stratified by municipalities and one was randomly\nselected. A map and a census of each location were required to delimit the\nstreets or blocks, and to select the households to visit in each municipality.\nAfter selecting the sector to be surveyed in each location, the visits to\nhouseholds started from number 1 onwards, skipping every two houses. Pregnant\nwomen and participants unable to stand up and/or communicate verbally were\nexcluded. All participants signed the informed consent form for\nparticipation.\nThe sample size was calculated to detect the prevalence of hypercholesterolemia\n(the least prevalent condition reported in Venezuela) at 5.7%6, with a standard deviation of\n1.55%, allowing calculation of the 95% confidence interval (95% CI). The\nminimal estimated number of subjects to be evaluated was 830. 
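The stated minimum of 830 subjects is consistent with the standard normal-approximation formula for estimating a prevalence. The sketch below is a reconstruction under an assumed precision of roughly 1.6 percentage points, not the authors' actual calculation; the Wald interval function mirrors how the 95% CIs around prevalence rates are typically formed:

```python
# Normal-approximation sample size and Wald CI for a prevalence.
# The 1.58-point margin is an assumption chosen to land near n >= 830.
import math

def sample_size(p, margin, z=1.96):
    """Minimum n to estimate prevalence p within +/- margin at 95% confidence:
    n = z^2 * p * (1 - p) / margin^2, rounded up."""
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence and Wald 95% CI for `cases` out of `n` subjects."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half, 0.0), min(p + half, 1.0)

# Hypercholesterolemia at 5.7%, margin ~1.58 percentage points:
n_min = sample_size(0.057, 0.0158)
```

Plugging in p = 5.7% gives a minimum in the high 820s, close to the 830 reported, so the assumed margin is plausible but not confirmed by the paper.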
Overall, 1,320\nsubjects were evaluated (89.4% from the urban area and 10.6% from the rural\narea).", "All subjects were evaluated in their households or in a nearby health center by a\ntrained health team according to a standardized protocol. Each home was visited\ntwice. In the first visit, the participants received information about the study\nand signed the written informed consent form. Demographic and clinical\ninformation was obtained using a standardized questionnaire. Weight was measured\nwith as few clothes as possible, without shoes, using a calibrated scale. Height\nwas measured using a metric tape on the wall. Waist circumference was measured\nwith a metric tape at the iliac crest at the end of expiration. Body mass\nindex was calculated (BMI: weight[kg]/height[m]²).\nIn the second visit, blood samples were drawn after 12 hours of overnight\nfasting. Then, they were centrifuged for 15 minutes at 3000 rpm, within 30-40\nminutes after collection, and transported with dry ice to the central\nlaboratory, where they were properly stored at -40ºC until analysis. Data from\nparticipants who had been absent during the first visit were collected at this\nvisit. Total cholesterol,8\ntriglycerides,9 LDL-c,\nand HDL-c10 were determined by\nstandard enzymatic colorimetric methods.", "Dyslipidemia was defined according to the National Cholesterol Education Program\n/Adult Treatment Panel III (NCEP/ATPIII)11 and categorized into 6 types. Of these, four were\nisolated dyslipidemias: low HDL-c (hypoalphalipoproteinemia): < 40 mg/dL in\nmen and < 50 mg/dL in women; high triglycerides: ≥ 150 mg/dL;\nhypercholesterolemia (≥ 240 mg/dL of total cholesterol); high LDL-c:\n≥ 160 mg/dL; and two were combined dyslipidemias: atherogenic\ndyslipidemia (triglycerides ≥ 150 mg/dL + low HDL-c) and mixed\ndyslipidemia (triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240\nmg/dL). 
Additionally, individuals were classified according to BMI as normal\nweight (BMI < 25 kg/m²), overweight (BMI ≥ 25\nkg/m² and < 30 kg/m²), or obese (BMI ≥ 30\nkg/m²).12\nAbdominal obesity was established by waist circumference ≥ 94 cm in men\nand ≥ 90 cm in women.13", "All calculations were performed using the SPSS 20 software (IBM Corp. Released\n2011. Armonk, NY: USA). It was verified with the Kolmogorov-Smirnov test that all variables had a normal\ndistribution. All variables were\ncontinuous and data were presented as mean ± standard deviation (SD).\nDifferences between mean values were assessed with the t-test. Proportions of\nsubjects with dyslipidemia were presented as prevalence rates and 95% confidence\nintervals (CI). A Chi-square test was applied to compare frequencies\nby gender, nutritional status and abdominal obesity. A p-value of < 0.05 was\nconsidered statistically significant.", " Characteristics of the subjects Two thirds of the study subjects were female. Men had higher triglycerides and waist\ncircumference, and lower HDL-c, than women (Table\n1). Age, BMI, total cholesterol and LDL-c were similar.\nSubject Characteristics\nData are mean ± SD. Gender differences according to the t-test.\nTwo thirds of the study subjects were female. Men had higher triglycerides and waist\ncircumference, and lower HDL-c, than women (Table\n1). Age, BMI, total cholesterol and LDL-c were similar.\nSubject Characteristics\nData are mean ± SD. Gender differences according to the t-test.\n Prevalence of dyslipidemia Low HDL-c was the most prevalent lipid alteration, present in nearly seven of ten\nwomen and in about four of ten men (p < 0.01), followed by high\ntriglycerides, present in half of the men and in one third of the women (p\n< 0.01). Their combination, atherogenic dyslipidemia, was observed in 25.9%\nof subjects, followed in frequency by elevated LDL-c and total cholesterol\nlevels (Table 2). 
Mixed dyslipidemia was\nobserved in only 8.9% of the subjects, and was higher among men than among women.\nAn increasing prevalence of all types of dyslipidemia was found when\nindividuals were classified according to BMI and in the presence of abdominal\nobesity (Figure 1 and Figure 2). The prevalences of hypercholesterolemia, high\nLDL-c and mixed dyslipidemia were similar in overweight and obese subjects, but\nhigher than those found in the normal weight group.\nPrevalence of Dyslipidemias by Gender\nData are shown as percentages (95% CI). Gender differences according\nto the Chi-square test.\n\nFigure 1Prevalence of dyslipidemia by nutritional status.*Difference in the prevalence of dyslipidemia according to\nnutritional status using Chi-square (p < 0.01). High\ntriglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and <\n50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150\nmg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240\nmg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia:\ntriglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nPrevalence of dyslipidemia by nutritional status.\n*Difference in the prevalence of dyslipidemia according to\nnutritional status using Chi-square (p < 0.01). High\ntriglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and <\n50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150\nmg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240\nmg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia:\ntriglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nFigure 2Prevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).Significant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). 
High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nPrevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).\nSignificant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL.\nLow HDL-c was the most prevalent lipid alteration, present in nearly seven of ten\nwomen and in about four of ten men (p < 0.01), followed by high\ntriglycerides, present in half of the men and in one third of the women (p\n< 0.01). Their combination, atherogenic dyslipidemia, was observed in 25.9%\nof subjects, followed in frequency by elevated LDL-c and total cholesterol\nlevels (Table 2). Mixed dyslipidemia was\nobserved in only 8.9% of the subjects, and was higher among men than among women.\nAn increasing prevalence of all types of dyslipidemia was found when\nindividuals were classified according to BMI and in the presence of abdominal\nobesity (Figure 1 and Figure 2). The prevalences of hypercholesterolemia, high\nLDL-c and mixed dyslipidemia were similar in overweight and obese subjects, but\nhigher than those found in the normal weight group.\nPrevalence of Dyslipidemias by Gender\nData are shown as percentages (95% CI). Gender differences according\nto the Chi-square test.\n\nFigure 1Prevalence of dyslipidemia by nutritional status.*Difference in the prevalence of dyslipidemia according to\nnutritional status using Chi-square (p < 0.01). 
High\ntriglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and <\n50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150\nmg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240\nmg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia:\ntriglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nPrevalence of dyslipidemia by nutritional status.\n*Difference in the prevalence of dyslipidemia according to\nnutritional status using Chi-square (p < 0.01). High\ntriglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and <\n50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150\nmg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240\nmg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia:\ntriglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nFigure 2Prevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).Significant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nPrevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).\nSignificant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL.", "Two thirds of the study subjects were female. Men had higher triglycerides and waist\ncircumference, and lower HDL-c, than women (Table\n1). 
Age, BMI, total cholesterol and LDL-c were similar.\nSubject Characteristics\nData are mean ± SD. Gender differences according to the t-test.", "Low HDL-c was the most prevalent lipid alteration, present in nearly seven of ten\nwomen and in about four of ten men (p < 0.01), followed by high\ntriglycerides, present in half of the men and in one third of the women (p\n< 0.01). Their combination, atherogenic dyslipidemia, was observed in 25.9%\nof subjects, followed in frequency by elevated LDL-c and total cholesterol\nlevels (Table 2). Mixed dyslipidemia was\nobserved in only 8.9% of the subjects, and was higher among men than among women.\nAn increasing prevalence of all types of dyslipidemia was found when\nindividuals were classified according to BMI and in the presence of abdominal\nobesity (Figure 1 and Figure 2). The prevalences of hypercholesterolemia, high\nLDL-c and mixed dyslipidemia were similar in overweight and obese subjects, but\nhigher than those found in the normal weight group.\nPrevalence of Dyslipidemias by Gender\nData are shown as percentages (95% CI). Gender differences according\nto the Chi-square test.\n\nFigure 1Prevalence of dyslipidemia by nutritional status.*Difference in the prevalence of dyslipidemia according to\nnutritional status using Chi-square (p < 0.01). 
High\ntriglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and <\n50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150\nmg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240\nmg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia:\ntriglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nFigure 2Prevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).Significant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL.\n\nPrevalence of dyslipidemias by abdominal obesity (waist circumference\n≥ 94 cm in men and ≥ 90 cm in women).\nSignificant difference in the prevalence of dyslipidemia between\nabdominal obesity and normal waist circumference *(p < 0.001)\n†(p = 0.002). High triglycerides ≥ 150 mg/dL; Low HDL-c <\n40 mg/dL in men and < 50 mg/dL in women; Atherogenic dyslipidemia:\ntriglycerides ≥ 150 mg/dL + low HDL-c; Hypercholesterolemia ≥ 240\nmg/dL; Elevated LDL-c ≥ 160 mg/dL; Mixed dyslipidemia: triglycerides ≥\n150 mg/dL + total cholesterol ≥ 240 mg/dL.", "The present study reports that the most prevalent lipid abnormality in our\nsub-national sample of adults in Venezuela is low HDL-c (58.6%), followed by\nhigh triglycerides (38.7%), whereas the prevalences of hypercholesterolemia (22%) and\nits combination with hypertriglyceridemia (8.9%) were lower. 
Similar findings have\nbeen reported in earlier studies, both in Venezuela (Zulia state, low HDL-c 65.3%,\nhigh triglycerides 32.3%)7 and\nMexico (low HDL-c 48.4% and high triglycerides 42.3%).14 Using a cut-off point similar to that in our\nstudy, an extremely high prevalence of hypoalphalipoproteinemia has also been\nobserved in Valencia city (90%)15\nand the Junquito municipality (81.1%),16 both in the central region of Venezuela. Similar to what was\nobserved in men in our study (49.5%), the aforementioned studies in Valencia and\nJunquito also reported a high prevalence of elevated triglycerides (51%).15,16 Most of these results are consistent with previous findings\nin the Latin America region. In a systematic review of metabolic syndrome in Latin\nAmerica, the most frequent change was low HDL-c, in 62.9% of the subjects.17\nAlthough hypercholesterolemia (22.2%) is significantly less common than the\naforementioned alterations, it was higher than in the CARMELA study (5.7%) in\nBarquisimeto,6 and similar\nto that observed in Valencia (19.0%).15 Therefore, hypercholesterolemia remains a cardiovascular\nrisk factor to be considered when implementing public health measures in the\nVenezuelan population. Other findings of ours are consistent with previous studies\nreporting that the prevalence of dyslipidemia increases with adiposity, and subjects\nwith overweight/obesity14,18 and abdominal obesity18 show worse lipid profiles than\nsubjects of normal weight. As in our study, higher figures of elevated triglycerides\nin males,14,18 and no differences between overweight and obese\nsubjects when grouped according to BMI,14 have been reported.\nDyslipidemias can be caused by both genetic and environmental factors (obesity,\nsmoking, low physical activity). In our study, the prevalence of low HDL-c without\nother lipid abnormalities was 29.2% (male 15%, female 35.7%). 
Of these, those with\nlow HDL-c and normal weight (total 10.6%, male 5.3%, female 13.0%) could suggest the\nproportion of cases of hypoalphalipoproteinemia that could be associated with\ngenetic factors. Also, part of the prevalence of low HDL-c in this population can be\nexplained by metabolic factors (i.e., insulin resistance), a condition that produces\nmodifications in more than one lipid sub-fraction. In fact, the prevalence of\natherogenic dyslipidemia (25.9%) in our study was significant and remarkably similar\nto that reported by Florez et al.7\nin the Zulia region (24.1%). Atherogenic dyslipidemia is the pattern most frequently\nobserved in subjects with metabolic syndrome and insulin resistance, and both\nabnormalities are components of the metabolic syndrome definition. Besides genetic\nor metabolic factors, adverse environmental conditions are also important in\nVenezuela. Factors involved in the nutritional transition promoted inappropriate\neating and lifestyle patterns in Venezuela and other Latin American countries,\nclearly contributing to the incidence of non-communicable diseases, especially\nthose related to obesity and diabetes.19 A follow-up survey of food consumption, based on food\npurchases, reported that caloric intake and the selection of foods with lower quality\nhave increased in Venezuela.20 A\nhigh rate of physical inactivity (68%) has also been reported in Venezuela in two\nstudies involving 3,422 adults.5\nSuccessful dietary strategies to reduce dyslipidemias and other metabolic syndrome\ncomponents should include energy restriction and weight loss, manipulation of\ndietary macronutrients, and adherence to dietary and lifestyle patterns, such as the\nMediterranean diet and diet/exercise.21 After the evaluation of the typical food-based eating and\nphysical activity pattern in the Venezuelan population, culturally-sensitive\nadaptations of the Mediterranean diet with local foods and physical activity\nrecommendations have been 
proposed.5,22 Specific\nrecommendations for patients with dyslipidemia have also been included in local\nclinical practice guidelines.23\nThe present study has some limitations. The sample did not represent\nthe entire population of the country; only three of the eight regions of Venezuela\nwere included. Additionally, in the VEMSOLS, eating pattern and physical activity\nwere not investigated. The cut-off points used for low HDL-c and triglycerides were\nestablished for the metabolic syndrome definition, which can limit the comparison\nwith other studies using a level below 3514 or 4018\nmg/dL to define hypoalphalipoproteinemia. However, despite these limitations, this\nstudy is the first report of dyslipidemias in more than one region of Venezuela. A\nnational survey in Venezuela is ongoing (Estudio Venezolano de Salud\nCardiometabólica, EVESCAM study). Data collection will be completed in\n2017.", "This is the first report presenting the prevalence of dyslipidemia in more than one\nregion of Venezuela. The results observed are consistent with other Latin American\nstudies, reporting low HDL-c as the most frequent lipid alteration in the region.\nAdditionally, high levels of hypercholesterolemia were observed. Both conditions could\nbe related to CVD, which represents a major public health problem in the region. A\nsuggestion resulting from our findings is to monitor a complete lipid profile during\nmedical check-ups, because in some Latin-American countries it is common to check\nonly total cholesterol. The triggers of these changes need to be determined in\nfuture studies. The implementation of strategies focused on proper nutrition, more\nphysical activity and avoiding weight gain is imperative." ]
MeSH terms: Dyslipidemias/epidemiology; Cardiovascular Diseases; Risk Factors; Stroke/mortality; Obesity; Metabolic Syndrome
Introduction: In Venezuela, cardiovascular disease (CVD), represented by ischemic heart disease (16.3%) and stroke (7.7%), was the major cause of death in 2012.1 Both are strongly related to modifiable risk factors. According to the INTERHEART2 and INTERSTROKE3 studies, dyslipidemia, assessed as an increased apolipoprotein ratio (ApoB/ApoA1), accounted for 49.2% and 25.9% of the attributable risk for acute myocardial infarction and stroke, respectively. Randomized controlled clinical trials have consistently demonstrated that reducing low-density lipoprotein cholesterol (LDL-c) with statin therapy lowers the incidence of heart attack and ischemic stroke: for every 38.6 mg/dL of LDL-c reduction, the annual rate of major vascular events decreases by about one-fifth.4 Studies evaluating the prevalence of dyslipidemias in Venezuela have been compiled.5 However, most of them have small samples, and only two are representative of a city or a state. In 1,848 adults from the city of Barquisimeto, in the western region of the country, the Cardiovascular Risk Factor Multiple Evaluation in Latin America (CARMELA) study6 reported the lowest prevalence of hypercholesterolemia (cholesterol ≥ 240 mg/dL) observed in Latin America (5.7%). In 3,108 adults from the state of Zulia, Florez et al.7 documented a 24.1% prevalence of atherogenic dyslipidemia (high triglycerides and low levels of high-density lipoprotein cholesterol [HDL-c]), which was higher in men than in women and increased with age. No study in Venezuela has included more than one region, prompting the design of the Venezuelan Metabolic Syndrome, Obesity and Lifestyle Study (VEMSOLS). This paper presents the results of VEMSOLS, specifically the prevalence of dyslipidemia in five populations from three regions of Venezuela.
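The 38.6 mg/dL step cited above corresponds to the 1 mmol/L unit used in the statin meta-analyses. As a quick reference, a minimal conversion sketch (assuming the commonly used factor of about 38.67 mg/dL per mmol/L for cholesterol; the function names are ours, not from the paper):

```python
# Convert cholesterol concentrations between mg/dL and mmol/L.
# 38.67 reflects the molar mass of cholesterol (~386.7 g/mol);
# the paper's 38.6 mg/dL is 1 mmol/L, rounded.
MG_DL_PER_MMOL_L = 38.67

def mgdl_to_mmoll(mgdl: float) -> float:
    """mg/dL -> mmol/L for cholesterol fractions."""
    return mgdl / MG_DL_PER_MMOL_L

def mmoll_to_mgdl(mmoll: float) -> float:
    """mmol/L -> mg/dL for cholesterol fractions."""
    return mmoll * MG_DL_PER_MMOL_L

# The study's "high LDL-c" cut-off of 160 mg/dL is roughly 4.1 mmol/L.
print(round(mgdl_to_mmoll(160), 1))
```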
Methods:

Design and Subjects: An observational, cross-sectional study was designed to determine the prevalence of cardiometabolic risk factors in a sub-national sample of Venezuela. Five municipalities from three regions were evaluated: Palavecino, in Lara State (urban), from the Western region; Ejido (Merida city), in Merida State (urban), and Rangel (Páramo area), in Merida State (rural), both from the Andes region; and Catia La Mar, in Vargas State (urban), and Sucre, in the Capital District (urban), both from the Capital region. From 2006 to 2010, a total of 1,320 subjects aged 20 years or more, who had lived in their houses for at least six months, were selected by two-stage random sampling. Three different geographic regions of the country were assessed: the Andes, mountains in the south; the Western region, llanos in the middle; and the Capital District, coast in the north. Each region was stratified by municipalities and one was randomly selected. A map and a census of each location were used to delimit the streets or blocks and to select the households to visit in each municipality. After selecting the sector to be surveyed in each location, household visits started from number 1 onwards, skipping every two houses. Pregnant women and participants unable to stand up and/or communicate verbally were excluded. All participants signed the informed consent form. The sample size was calculated to detect the prevalence of hypercholesterolemia (the lowest prevalent condition reported in Venezuela) of 5.7%,6 with a standard deviation of 1.55%, which allows calculation of the 95% confidence interval (95% CI). The minimum estimated number of subjects to be evaluated was 830. Overall, 1,320 subjects were evaluated (89.4% from urban and 10.6% from rural areas).

Clinical and biochemical data: All subjects were evaluated in their households or in a nearby health center by a trained health team according to a standardized protocol. Each home was visited twice. In the first visit, the participants received information about the study and signed the written informed consent form. Demographic and clinical information was obtained using a standardized questionnaire. Weight was measured with as few clothes as possible, without shoes, using a calibrated scale. Height was measured using a metric tape on the wall. Waist circumference was measured with a metric tape at the iliac crest at the end of expiration. Body mass index was calculated (BMI = weight[kg]/height[m]²). In the second visit, blood samples were drawn after 12 hours of overnight fasting. Samples were centrifuged for 15 minutes at 3,000 rpm within 30-40 minutes of collection and transported on dry ice to the central laboratory, where they were stored at -40ºC until analysis. Data from participants who were absent during the first visit were collected at this second visit. Total cholesterol,8 triglycerides,9 LDL-c, and HDL-c10 were determined by standard enzymatic colorimetric methods.

Categorization of variables: Dyslipidemia was defined according to the National Cholesterol Education Program / Adult Treatment Panel III (NCEP/ATP III)11 and categorized into six types. Four were isolated dyslipidemias: low HDL-c (hypoalphalipoproteinemia), < 40 mg/dL in men and < 50 mg/dL in women; high triglycerides, ≥ 150 mg/dL; hypercholesterolemia, total cholesterol ≥ 240 mg/dL; and high LDL-c, ≥ 160 mg/dL. Two were combined dyslipidemias: atherogenic dyslipidemia (triglycerides ≥ 150 mg/dL + low HDL-c) and mixed dyslipidemia (triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL). Additionally, individuals were classified according to BMI as normal weight (BMI < 25 kg/m²), overweight (BMI ≥ 25 and < 30 kg/m²), or obese (BMI ≥ 30 kg/m²).12 Abdominal obesity was defined as waist circumference ≥ 94 cm in men and ≥ 90 cm in women.13

Statistical analysis: All calculations were performed using SPSS 20 (IBM Corp., released 2011, Armonk, NY, USA). Normal distribution of all variables was verified with the Kolmogorov-Smirnov test. All variables were continuous, and data are presented as mean ± standard deviation (SD). Differences between mean values were assessed with the t-test. Proportions of subjects with dyslipidemia are presented as prevalence rates with 95% confidence intervals (CI). The chi-square test was applied to compare frequencies by gender, nutritional status, and abdominal obesity. A p-value < 0.05 was considered statistically significant.
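The categorization rules above can be summarized in a short classifier. This is an illustrative sketch of the study's NCEP/ATP III-based categories, not code from the paper; all function and label names are ours:

```python
def classify_dyslipidemia(sex, total_chol, ldl, hdl, tg):
    """Return the set of dyslipidemia labels for one subject.

    All lipid values in mg/dL; sex is "M" or "F" (sex-specific HDL-c cut-off).
    """
    low_hdl_cut = 40 if sex == "M" else 50
    labels = set()
    if hdl < low_hdl_cut:
        labels.add("low HDL-c")                  # hypoalphalipoproteinemia
    if tg >= 150:
        labels.add("high triglycerides")
    if total_chol >= 240:
        labels.add("hypercholesterolemia")
    if ldl >= 160:
        labels.add("high LDL-c")
    # Combined dyslipidemias per the study's definitions:
    if tg >= 150 and hdl < low_hdl_cut:
        labels.add("atherogenic dyslipidemia")
    if tg >= 150 and total_chol >= 240:
        labels.add("mixed dyslipidemia")
    return labels

def classify_bmi(weight_kg, height_m):
    """Nutritional status from BMI = weight / height^2."""
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"

def abdominal_obesity(sex, waist_cm):
    """Waist circumference cut-offs used in the study (>= 94 cm M, >= 90 cm F)."""
    return waist_cm >= (94 if sex == "M" else 90)
```

For example, a woman with triglycerides of 180 mg/dL and HDL-c of 42 mg/dL would be labeled with high triglycerides, low HDL-c, and their combination, atherogenic dyslipidemia.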
Results:

Characteristics of the subjects: Two thirds of the study subjects were female. Men had higher triglycerides and waist circumference, and lower HDL-c, than women (Table 1). Age, BMI, total cholesterol, and LDL-c were similar between genders. Table 1 (Subject Characteristics): data are mean ± SD; gender differences assessed by t-test.

Prevalence of dyslipidemia: Low HDL-c was the most prevalent lipid alteration, present in nearly seven of ten women and about four of ten men (p < 0.01), followed by high triglycerides, present in half of the men and one third of the women (p < 0.01). Their combination, atherogenic dyslipidemia, was observed in 25.9% of subjects, followed in frequency by elevated LDL-c and elevated total cholesterol (Table 2). Mixed dyslipidemia was observed in only 8.9% of the subjects and was more frequent among men than women. The prevalence of all types of dyslipidemia increased when individuals were classified according to BMI and by the presence of abdominal obesity (Figures 1 and 2). The prevalences of hypercholesterolemia, high LDL-c, and mixed dyslipidemia were similar in overweight and obese subjects, but higher than those found in the normal-weight group. Table 2 (Prevalence of Dyslipidemias by Gender): data are percentages (95% CI); gender differences assessed by the chi-square test.

Figure 1: Prevalence of dyslipidemia by nutritional status. *Difference in the prevalence of dyslipidemia according to nutritional status by the chi-square test (p < 0.01). High triglycerides: ≥ 150 mg/dL; low HDL-c: < 40 mg/dL in men and < 50 mg/dL in women; atherogenic dyslipidemia: triglycerides ≥ 150 mg/dL + low HDL-c; hypercholesterolemia: total cholesterol ≥ 240 mg/dL; elevated LDL-c: ≥ 160 mg/dL; mixed dyslipidemia: triglycerides ≥ 150 mg/dL + total cholesterol ≥ 240 mg/dL.

Figure 2: Prevalence of dyslipidemias by abdominal obesity (waist circumference ≥ 94 cm in men and ≥ 90 cm in women). Significant difference in the prevalence of dyslipidemia between abdominal obesity and normal waist circumference: *(p < 0.001), †(p = 0.002). Cut-off points as in Figure 1.
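The prevalence estimates reported with 95% CIs, and the sample-size figure given in Methods, follow from standard normal-approximation formulas. A minimal sketch (our own implementation, not the study's code; the study's reported intervals are slightly wider than the simple Wald interval, likely reflecting the sampling design, and its exact sample-size procedure may differ):

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence (%) with a normal-approximation (Wald) 95% CI."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (round(100 * p, 1),
            round(100 * (p - half), 1),
            round(100 * (p + half), 1))

def sample_size(p, margin, z=1.96):
    """Minimum n to estimate a proportion p within +/- margin."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# e.g., 342 of 1,320 subjects is ~25.9% (the atherogenic dyslipidemia figure)
print(prevalence_ci(342, 1320))
# Planning around the 5.7% hypercholesterolemia prevalence cited in Methods,
# with a 1.55% margin, gives roughly the order of the reported minimum of 830.
print(sample_size(0.057, 0.0155))
```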
Discussion: The present study shows that the most prevalent lipid abnormality in our sub-national sample of Venezuelan adults was low HDL-c (58.6%), followed by high triglycerides (38.7%), whereas the prevalences of hypercholesterolemia (22.2%) and of its combination with hypertriglyceridemia (mixed dyslipidemia, 8.9%) were lower. Similar findings have been reported in earlier studies, both in Venezuela (Zulia State: low HDL-c 65.3%, high triglycerides 32.3%)7 and in Mexico (low HDL-c 48.4%, high triglycerides 42.3%).14 Using a cut-off point similar to ours, an extremely high prevalence of hypoalphalipoproteinemia has also been observed in Valencia city (90%)15 and the Junquito municipality (81.1%),16 both in the central region of Venezuela. Similar to what we observed in men (49.5%), the studies in Valencia and Junquito also reported a high prevalence of elevated triglycerides (51%).15,16 Most of these results are consistent with previous findings in the Latin American region.
In a systematic review of metabolic syndrome in Latin America, the most frequent component was low HDL-c, present in 62.9% of subjects.17 Although hypercholesterolemia (22.2%) was significantly less common than the aforementioned alterations, its prevalence was higher than in the CARMELA study (5.7%) in Barquisimeto6 and similar to that observed in Valencia (19.0%).15 Therefore, hypercholesterolemia remains a cardiovascular risk factor to be considered when implementing public health measures in the Venezuelan population. Other findings of ours are consistent with previous studies reporting that the prevalence of dyslipidemia increases with adiposity: subjects with overweight/obesity14,18 and abdominal obesity18 show worse lipid profiles than normal-weight subjects. As in our study, higher rates of elevated triglycerides in men,14,18 and no differences between overweight and obese subjects grouped according to BMI,14 have been reported. Dyslipidemias can be caused by both genetic and environmental factors (obesity, smoking, low physical activity). In our study, the prevalence of low HDL-c without other lipid abnormalities was 29.2% (men 15%, women 35.7%). Among these, the subjects with low HDL-c and normal weight (total 10.6%, men 5.3%, women 13.0%) may indicate the proportion of hypoalphalipoproteinemia cases associated with genetic factors. Part of the prevalence of low HDL-c in this population can also be explained by metabolic factors (i.e., insulin resistance), a condition that modifies more than one lipid sub-fraction. In fact, the prevalence of atherogenic dyslipidemia in our study (25.9%) was substantial and remarkably similar to that reported by Florez et al.7 in the Zulia region (24.1%). Atherogenic dyslipidemia is the pattern most frequently observed in subjects with metabolic syndrome and insulin resistance, and both abnormalities are components of the metabolic syndrome definition.
Besides genetic or metabolic factors, adverse environmental conditions are also important in Venezuela. The nutritional transition has promoted inappropriate eating and lifestyle patterns in Venezuela and other Latin American countries, clearly contributing to the incidence of non-communicable diseases, especially those related to obesity and diabetes.19 A follow-up survey of food consumption, based on food purchases, reported that caloric intake and the selection of lower-quality foods have increased in Venezuela.20 A high rate of physical inactivity (68%) has also been reported in Venezuela in two studies involving 3,422 adults.5 Successful dietary strategies to reduce dyslipidemias and other metabolic syndrome components should include energy restriction and weight loss, manipulation of dietary macronutrients, and adherence to dietary and lifestyle patterns such as the Mediterranean diet and diet/exercise.21 After evaluation of the typical food-based eating and physical activity patterns of the Venezuelan population, culturally sensitive adaptations of the Mediterranean diet with local foods, together with physical activity recommendations, have been proposed.5,22 Specific recommendations for patients with dyslipidemia have also been included in local clinical practice guidelines.23 Some limitations of the present study should be noted. The sample did not represent the entire population of the country; only three of the eight regions of Venezuela were included. Additionally, eating patterns and physical activity were not investigated in the VEMSOLS. The cut-off points for low HDL-c and triglycerides were those established for the metabolic syndrome definition, which may limit comparison with other studies that use a level below 3514 or 4018 mg/dL to define hypoalphalipoproteinemia. Despite these limitations, this study is the first report of dyslipidemias in more than one region of Venezuela.
A national survey in Venezuela is ongoing (Estudio Venezolano de Salud Cardiometabólica, EVESCAM); data collection will be completed in 2017.

Conclusions: This is the first report presenting the prevalence of dyslipidemia in more than one region of Venezuela. The results are consistent with other Latin American studies reporting low HDL-c as the most frequent lipid alteration in the region. Additionally, a high prevalence of hypercholesterolemia was observed. Both conditions may be related to CVD, which represents a major public health problem in the region. One suggestion arising from our findings is to monitor the complete lipid profile during medical check-ups, because in some Latin American countries it is common to check only total cholesterol. The triggers of these changes need to be determined in future studies. The implementation of strategies focused on proper nutrition, more physical activity, and avoiding weight gain is imperative.
Abstract:
Background: The prevalence of dyslipidemia in multiple regions of Venezuela is unknown. The Venezuelan Metabolic Syndrome, Obesity and Lifestyle Study (VEMSOLS) was undertaken to evaluate cardiometabolic risk factors in Venezuela.
Methods: From 2006 to 2010, 1,320 subjects aged 20 years or older were selected by multistage stratified random sampling from all households in five municipalities from three regions of Venezuela: Lara State (Western region), Merida State (Andean region), and the Capital District (Capital region). Anthropometric measurements and biochemical analyses were obtained from each participant. Dyslipidemia was defined according to the NCEP/ATP III definitions.
Results: Mean age was 44.8 ± 0.39 years and 68.5% of participants were female. Lipid abnormalities related to the metabolic syndrome (low HDL-c [58.6%; 95% CI 54.9-62.1] and elevated triglycerides [39.7%; 36.1-43.2]) were the most prevalent lipid alterations, followed by atherogenic dyslipidemia (25.9%; 22.7-29.1), elevated LDL-c (23.3%; 20.2-26.4), hypercholesterolemia (22.2%; 19.2-25.2), and mixed dyslipidemia (8.9%; 6.8-11.0). Dyslipidemia was more prevalent with increasing body mass index.
Conclusions: Dyslipidemias are prevalent cardiometabolic risk factors in Venezuela. Among these, a high prevalence of low HDL-c is a condition also consistently reported across Latin America.
Introduction: In Venezuela, cardiovascular disease (CVD), represented by ischemic heart disease (16.3%) and stroke (7.7%), was the major cause of death in 2012.1 Both are strongly related to modifiable risk factors. According to the INTERHEART2 and INTERSTROKE3 studies, dyslipidemias, assessed as increased levels of apolipoprotein (ApoB/ApoA1 ratio), accounted for 49.2% and 25.9% of the attributable risk for acute myocardial infarction and stroke, respectively. Randomized controlled clinical trials have consistently demonstrated that a reduction in low-density lipoprotein cholesterol (LDL-c) with statin therapy reduces the incidence of heart attack and ischemic stroke. For every 38.6 mg/dL reduction in LDL-c, the annual rate of major vascular events decreases by about one-fifth.4 Studies evaluating the prevalence of dyslipidemias in Venezuela have been compiled.5 However, most of them have small samples, and only two are representative of a city or a state. In 1,848 adults from the city of Barquisimeto, in the western region of the country, the Cardiovascular Risk Factor Multiple Evaluation in Latin America (CARMELA) study6 reported the lowest prevalence of hypercholesterolemia (cholesterol ≥ 240 mg/dL) observed in Latin America (5.7%).6 In 3,108 adults from the state of Zulia, Florez et al.7 documented a prevalence of atherogenic dyslipidemia (high triglycerides and low levels of high-density lipoprotein cholesterol [HDL-c]) of 24.1%. This number was higher in men than in women, and increased with age. No study in Venezuela has included more than one region, prompting the design of the Venezuelan Metabolic Syndrome, Obesity and Lifestyle Study (VEMSOLS). This paper presents the results of VEMSOLS, specifically the prevalence of dyslipidemia in five populations of three regions in Venezuela. Conclusions: This is the first report presenting the prevalence of dyslipidemia in more than one region of Venezuela. 
The results are consistent with other Latin American studies, which report low HDL-c as the most frequent lipid alteration in the region. Additionally, a high prevalence of hypercholesterolemia was observed. Both conditions could be related to CVD, which represents a major public health problem in the region. A suggestion resulting from our findings is to monitor a complete lipid profile during medical check-ups, because in some Latin American countries it is common to check only total cholesterol. The triggers of these changes need to be determined in future studies. The implementation of strategies focused on proper nutrition, increased physical activity and avoidance of weight gain is imperative.
Background: The prevalence of dyslipidemia in multiple regions of Venezuela is unknown. The Venezuelan Metabolic Syndrome, Obesity and Lifestyle Study (VEMSOLS) was undertaken to evaluate cardiometabolic risk factors in Venezuela. Methods: During the years 2006 to 2010, 1320 subjects aged 20 years or older were selected by multistage stratified random sampling from all households in five municipalities from 3 regions of Venezuela: Lara State (Western region), Merida State (Andean region), and Capital District (Capital region). Anthropometric measurements and biochemical analysis were obtained from each participant. Dyslipidemia was defined according to the NCEP/ATPIII definitions. Results: Mean age was 44.8 ± 0.39 years and 68.5% were females. Lipid abnormalities related to the metabolic syndrome (low HDL-c [58.6%; 95% CI 54.9 - 62.1] and elevated triglycerides [39.7%; 36.1 - 43.2]) were the most prevalent lipid alterations, followed by atherogenic dyslipidemia (25.9%; 22.7 - 29.1), elevated LDL-c (23.3%; 20.2 - 26.4), hypercholesterolemia (22.2%; 19.2 - 25.2), and mixed dyslipidemia (8.9%; 6.8 - 11.0). Dyslipidemia was more prevalent with increasing body mass index. Conclusions: Dyslipidemias are prevalent cardiometabolic risk factors in Venezuela. Among these, a higher prevalence of low HDL-c is a condition also consistently reported across Latin America.
6,564
270
11
[ "mg", "mg dl", "dl", "dyslipidemia", "triglycerides", "prevalence", "hdl", "low", "150", "triglycerides 150" ]
[ "test", "test" ]
[CONTENT] Dyslipidemias / epidemiology | Cardiovascular Diseases | Risk Factors | Stroke / mortality | Obesity | Metabolic Syndrome [SUMMARY]
[CONTENT] Dyslipidemias / epidemiology | Cardiovascular Diseases | Risk Factors | Stroke / mortality | Obesity | Metabolic Syndrome [SUMMARY]
[CONTENT] Dyslipidemias / epidemiology | Cardiovascular Diseases | Risk Factors | Stroke / mortality | Obesity | Metabolic Syndrome [SUMMARY]
[CONTENT] Dyslipidemias / epidemiology | Cardiovascular Diseases | Risk Factors | Stroke / mortality | Obesity | Metabolic Syndrome [SUMMARY]
[CONTENT] Dyslipidemias / epidemiology | Cardiovascular Diseases | Risk Factors | Stroke / mortality | Obesity | Metabolic Syndrome [SUMMARY]
[CONTENT] Dyslipidemias / epidemiology | Cardiovascular Diseases | Risk Factors | Stroke / mortality | Obesity | Metabolic Syndrome [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Life Style | Male | Prevalence | Risk Factors | Spatial Analysis | Venezuela [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Life Style | Male | Prevalence | Risk Factors | Spatial Analysis | Venezuela [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Life Style | Male | Prevalence | Risk Factors | Spatial Analysis | Venezuela [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Life Style | Male | Prevalence | Risk Factors | Spatial Analysis | Venezuela [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Life Style | Male | Prevalence | Risk Factors | Spatial Analysis | Venezuela [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Life Style | Male | Prevalence | Risk Factors | Spatial Analysis | Venezuela [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] mg | mg dl | dl | dyslipidemia | triglycerides | prevalence | hdl | low | 150 | triglycerides 150 [SUMMARY]
[CONTENT] mg | mg dl | dl | dyslipidemia | triglycerides | prevalence | hdl | low | 150 | triglycerides 150 [SUMMARY]
[CONTENT] mg | mg dl | dl | dyslipidemia | triglycerides | prevalence | hdl | low | 150 | triglycerides 150 [SUMMARY]
[CONTENT] mg | mg dl | dl | dyslipidemia | triglycerides | prevalence | hdl | low | 150 | triglycerides 150 [SUMMARY]
[CONTENT] mg | mg dl | dl | dyslipidemia | triglycerides | prevalence | hdl | low | 150 | triglycerides 150 [SUMMARY]
[CONTENT] mg | mg dl | dl | dyslipidemia | triglycerides | prevalence | hdl | low | 150 | triglycerides 150 [SUMMARY]
[CONTENT] stroke | venezuela | risk | reduction | density | density lipoprotein | density lipoprotein cholesterol | disease | lipoprotein cholesterol | represented [SUMMARY]
[CONTENT] mg | dl | mg dl | urban | kg | kg m2 | m2 | participants | evaluated | visit [SUMMARY]
[CONTENT] dl | mg | mg dl | dyslipidemia | triglycerides 150 | 150 | triglycerides | dl low | dl low hdl | mg dl low [SUMMARY]
[CONTENT] check | region | american | latin american | studies | latin | lipid | observed | resulting findings monitor | hdl frequent lipid [SUMMARY]
[CONTENT] dl | mg dl | mg | dyslipidemia | triglycerides | triglycerides 150 | 150 | prevalence | hdl | low [SUMMARY]
[CONTENT] dl | mg dl | mg | dyslipidemia | triglycerides | triglycerides 150 | 150 | prevalence | hdl | low [SUMMARY]
[CONTENT] Venezuela ||| The Venezuelan Metabolic Syndrome | Lifestyle Study | evaluate cardiometabolic risk | Venezuela [SUMMARY]
[CONTENT] the years 2006 to 2010 | 1320 | 20 years | five | 3 | Venezuela ||| Lara State | Merida State | Andean | Capital District ||| ||| NCEP [SUMMARY]
[CONTENT] 0.39 years | 68.5% ||| HDL ||| 58.6% | 95% | CI | 54.9 - 62.1 | 39.7% | 36.1 | 25.9% | 22.7 - 29.1 | 23.3% | 20.2 | 22.2% | 19.2 | 8.9% | 6.8 ||| [SUMMARY]
[CONTENT] Venezuela ||| HDL | Latin America [SUMMARY]
[CONTENT] Venezuela ||| The Venezuelan Metabolic Syndrome | Lifestyle Study | evaluate cardiometabolic risk | Venezuela ||| the years 2006 to 2010 | 1320 | 20 years | five | 3 | Venezuela ||| Lara State | Merida State | Andean | Capital District ||| ||| NCEP ||| ||| 0.39 years | 68.5% ||| HDL ||| 58.6% | 95% | CI | 54.9 - 62.1 | 39.7% | 36.1 | 25.9% | 22.7 - 29.1 | 23.3% | 20.2 | 22.2% | 19.2 | 8.9% | 6.8 ||| ||| Venezuela ||| HDL | Latin America [SUMMARY]
[CONTENT] Venezuela ||| The Venezuelan Metabolic Syndrome | Lifestyle Study | evaluate cardiometabolic risk | Venezuela ||| the years 2006 to 2010 | 1320 | 20 years | five | 3 | Venezuela ||| Lara State | Merida State | Andean | Capital District ||| ||| NCEP ||| ||| 0.39 years | 68.5% ||| HDL ||| 58.6% | 95% | CI | 54.9 - 62.1 | 39.7% | 36.1 | 25.9% | 22.7 - 29.1 | 23.3% | 20.2 | 22.2% | 19.2 | 8.9% | 6.8 ||| ||| Venezuela ||| HDL | Latin America [SUMMARY]
Pharmacokinetics, pharmacodynamics and safety of QGE031 (ligelizumab), a novel high-affinity anti-IgE antibody, in atopic subjects.
25200415
Using a monoclonal antibody with greater affinity for IgE than omalizumab, we examined whether more complete suppression of IgE provided greater pharmacodynamic effects, including suppression of skin prick responses to allergen.
BACKGROUND
Preclinical assessments and two randomized, placebo-controlled, double-blind clinical trials were conducted in atopic subjects. The first trial administered single doses of QGE031 (0.1-10 mg/kg) or placebo intravenously, while the second trial administered two to four doses of QGE031 (0.2-4 mg/kg) or placebo subcutaneously at 2-week intervals. Both trials included an open-label omalizumab arm.
METHODS
Sixty of 73 (82%) and 96 of 110 (87%) subjects completed the intravenous and subcutaneous studies, respectively. Exposure to QGE031 and its half-life depended on the QGE031 dose and serum IgE level. QGE031 had a biexponential pharmacokinetic profile after intravenous administration and a terminal half-life of approximately 20 days. QGE031 demonstrated dose- and time-dependent suppression of free IgE, basophil FcεRI and basophil surface IgE superior in extent (free IgE and surface IgE) and duration to omalizumab. At Day 85, 6 weeks after the last dose, skin prick wheal responses to allergen were suppressed by > 95% and 41% in subjects treated subcutaneously with QGE031 (2 mg/kg) or omalizumab, respectively (P < 0.001). Urticaria was observed in QGE031- and placebo-treated subjects and was accompanied by systemic symptoms in one subject treated with 10 mg/kg intravenous QGE031. There were no serious adverse events.
RESULTS
These first clinical data for QGE031, a high-affinity IgG1κ anti-IgE, demonstrate that increased suppression of free IgE compared with omalizumab translated to superior pharmacodynamic effects in atopic subjects, including those with high IgE levels. QGE031 may therefore benefit patients unable to receive, or suboptimally treated with, omalizumab.
CONCLUSION AND CLINICAL RELEVANCE
[ "Adolescent", "Adult", "Anti-Allergic Agents", "Antibodies, Anti-Idiotypic", "Antibodies, Monoclonal, Humanized", "Antibody Affinity", "Drug Evaluation, Preclinical", "Female", "Humans", "Hypersensitivity, Immediate", "Immunoglobulin E", "Male", "Middle Aged", "Skin Tests", "Treatment Outcome", "Young Adult" ]
4278557
Introduction
IgE acts as an environmental sensor that detects allergens and elicits an immune response via the high-affinity IgE receptor, FcεRI, resulting in the sensitization of mast cells to specific antigens 1,2. On exposure to specific antigens, IgE bound to FcεRI induces secretory granule exocytosis from mast cells and basophils, as well as the generation of newly synthesized lipid mediators and cytokines, resulting in both early- and late-phase allergic responses 1,2. Omalizumab (Xolair®) is a recombinant monoclonal antibody with a dissociation constant (KD) of 6–8 nm for IgE 3. It is approved for the treatment of patients with severe 4 or moderate-to-severe 5 persistent allergic asthma. Omalizumab binds the Cε3 domain of free IgE preventing it from binding to FcεRI 6,7. Omalizumab suppresses serum-free IgE concentrations 8–10, which in turn, through direct feedback, down-regulates FcεRI surface expression on effector cells 9,11,12 further dampening the effector cell response to allergen. The omalizumab dosing table aims to suppress free IgE to < 25 ng/mL (i.e. 10.4 IU/mL) 13 with doses based on body weight and IgE levels 4,5. Correlations between free IgE and asthma symptom control in controlled clinical studies suggest that a more profound suppression of free IgE may translate to better asthma clinical outcomes 14. QGE031 (ligelizumab) is a humanized IgG1 monoclonal antibody that binds with higher affinity to the Cε3 domain of IgE. QGE031 is designed to achieve superior IgE suppression, with an equilibrium dissociation constant (KD) of 139 pm, that may overcome some of the limitations associated with omalizumab dosing and lead to better clinical outcomes. This report describes data from preclinical experiments and two phase I randomized, double-blind, placebo-controlled clinical trials investigating the pharmacokinetics (PK), pharmacodynamics (PD) and safety of QGE031 in atopic, but otherwise healthy, subjects. 
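The practical meaning of the affinity gap between omalizumab (KD of 6–8 nm) and QGE031 (KD of 139 pm) can be illustrated with a minimal equilibrium-binding sketch. This is a hypothetical illustration, not the trials' PK–IgE binding model: the 1 nM free-antibody concentration is an assumed example value, and the simple hyperbolic formula assumes antibody in excess (no IgE depletion).

```python
def fraction_ige_bound(ab_conc_nm: float, kd_nm: float) -> float:
    """Equilibrium fraction of IgE captured by antibody, assuming 1:1 binding
    with antibody in excess (free antibody concentration ~ total)."""
    return ab_conc_nm / (ab_conc_nm + kd_nm)

# Hypothetical 1 nM free antibody; only the KD values come from the text.
print(fraction_ige_bound(1.0, 6.98))   # omalizumab, KD = 6.98 nM → ~13% of IgE bound
print(fraction_ige_bound(1.0, 0.139))  # QGE031, KD = 0.139 nM (139 pM) → ~88% bound
```

At the same antibody concentration, the roughly 50-fold lower KD translates into a several-fold higher captured fraction, consistent with the rationale that QGE031 can suppress free IgE more completely than omalizumab.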
The first trial was a single escalating-dose trial with intravenously administered QGE031, while the second trial was a multiple ascending-dose trial with subcutaneously administered QGE031; both trials included an omalizumab arm, which was dosed according to the US Food and Drug Administration (FDA) Prescribing Information dosing table 5. Preliminary data have been published in abstract form 15.
Methods
Preclinical experiments Details of experiments to characterize the in vitro pharmacology of QGE031 are given in the Supplementary Material and included experiments to determine the equilibrium constant of IgE; inhibition of binding to cell-surface FcεRI and the immobilized α-subunit of FcεRI; impact on mast cell degranulation and activation assays; and the binding activity of QGE031 across several mammalian species. Clinical trials Two clinical trials of QGE031 were conducted. The first-in-human trial administered QGE031 intravenously at a single site in the USA (January to December 2009), while the second trial administered QGE031 subcutaneously at three sites in the USA (May 2010 to September 2011). Both trials were approved by each site's Institutional Review Board, details of which are provided in the Supplementary Material (S1), and all subjects provided written informed consent. Male or female subjects aged 18–55 years with a history of atopy, defined as having one or more positive skin tests to common airborne allergens, a history of food allergy (subcutaneous trial) or serum IgE > 30 IU/mL (intravenous trial) were enrolled. Subjects participating in the subcutaneous trial were restricted to 45–120 kg body weight. In both studies, omalizumab was given open label in accordance with body weight and baseline IgE levels as defined by the FDA dosing table 5. The main exclusion criteria included poorly controlled asthma, prior use of omalizumab (intravenous trial), or use of QGE031 or omalizumab in the previous 6 months (subcutaneous trial). 
Study design A site-specific randomization list was generated to assign the subjects to the lowest available numbers according to the specified assignment ratio. Subjects, site staff, persons performing the assessments and data analysts remained blinded to treatment from randomization until database lock. Treatments were concealed by identical packaging, labelling and schedule of administration. For the subcutaneous trial, interim analyses were conducted, and therefore, access to unblinded data was allowed for some members of the clinical team using a controlled process for protection of randomization data. Project teams were given access to data at the group but not the individual level. Dose selection details for both trials are detailed in the Supplementary Material. 
In the intravenous trial, subjects with IgE 30–1000 IU/mL were randomized to increasing doses of QGE031 [0.1, 0.3, 1, 3 or 10 mg/kg (Fig.1a)] or placebo in a ratio of 3 : 1 for each cohort. Cohort 5 included subjects with IgE > 1000 IU/mL treated with 3 mg/kg QGE031 or placebo (3 : 1). In Cohort 7, subjects received open-label subcutaneous omalizumab, dosed according to the FDA dosing table 5. Following a protocol amendment (see Supplementary Material), additional subjects in Cohort 6 (10 mg/kg QGE031) were exposed to placebo (Cohort 6a) to investigate the allergenic potential of the excipient, polysorbate 80. Subject disposition for (a) the intravenous trial and (b) the subcutaneous trial. *Total approximate numbers are provided. Subjects were excluded because they declined to participate or failed to meet eligibility criteria. †The placebo cohort was introduced following a protocol amendment to serve as an expansion of the placebo group at the highest (10 mg/kg) dose level. ‡Omalizumab was dosed as per the FDA dosing table 5. ¶Placebo pooled from Cohorts 2, 3 and 6. In the subcutaneous trial, subjects were randomized to one of six cohorts (Fig.1b). Three cohorts (Cohorts 1, 2 and 6) of subjects with IgE 30–700 IU/mL were treated sequentially with multiple escalating doses of subcutaneous QGE031 (0.2, 0.6 or 4 mg/kg, respectively) or placebo (2 : 1); Cohort 6 (i.e. 4 mg/kg) was introduced following a protocol amendment (see Supplementary Material). Cohort 3 received 2 mg/kg QGE031 or placebo (4 : 1). Cohort 4 included subjects with IgE > 700 IU/mL treated with 2 mg/kg QGE031 or placebo (1 : 1). All QGE031 cohorts received study drug at 2-week intervals for a total of two (Cohort 1) or four (Cohorts 2, 3, 4 and 6) doses. Subjects in Cohort 5 received subcutaneous open-label omalizumab, dosed as per the FDA dosing table 5. 
Subjects were admitted approximately 24 h prior to dosing and domiciled for 96 h (intravenous trial) or 48 h (subcutaneous trial) after drug administration. The investigator and Novartis performed a blinded review of at least 10 days (intravenous) or 15 days (subcutaneous) follow-up safety data for all subjects in a given cohort before dosing the next. Subjects were followed for up to approximately 16 weeks. PK assessments Total QGE031 serum concentrations were determined by ELISA with a lower limit of quantification (LLOQ) of 200 ng/mL in serum (see Data S1). PD assessments: serum total and free IgE Total IgE was determined by ELISA with LLOQ of 100 ng/mL in human serum. 
Free IgE was determined in human serum by ELISA with LLOQ of 7.8 ng/mL and 1.95 ng/mL in the intravenous and subcutaneous trials, respectively; for both trials, the upper limit of quantification of free IgE was 250 ng/mL (see Data S1). PD assessments: FACS analysis Serum samples from both studies were subjected to fluorescence-activated cell sorting (FACS) using a FACS Canto II cytometer. In each sample, 2000 basophils were acquired and molecules of equivalent soluble fluorochrome values were calculated (see Data S1). PK and PD model of binding to and capture of IgE by QGE031 and omalizumab The PK profile of QGE031 and several PD parameters including total IgE, basophil FcεRI and basophil surface IgE were analyzed using an adaptation of the previously published omalizumab PK–IgE binding model 14,16,17 (see Data S1). PD assessments: extinction skin prick testing (subcutaneous trial) At baseline and Days 29, 57, 85 and 155, extinction skin prick testing was performed in duplicate on the subjects’ skin of the back with serial threefold dilutions of a selected allergen that provided a > 5 mm mean wheal diameter at screening. 
The mean of the longest diameter and corresponding mid-point orthogonal diameter for wheal and flares was recorded (see Data S1). Sites used the Greer Prick System (Greer, Lenoir, NC) or Duotip applicators (Lincoln Diagnostics, Decatur, IL) and allergen extracts from Greer or Hollister-Stier (Spokane, WA). Immunogenicity In the intravenous trial, serum samples were collected at Days 29 and 113 (end of study) and were tested for the presence of anti-QGE031 antibodies (ADAs) using a Biacore-based assay with QGE031 bound to the surface of the Biacore chip using protein G. A homogenous bridging Meso Scale Discovery-based assay with an improved sensitivity was used for the immunogenicity testing in the subcutaneous trial; samples were collected predose and at Days 15, 29, 43, 99 and 155 (end of study). For both studies, to distinguish between IgE-derived binding and ADA-derived binding, the soluble form of human recombinant FcεRI was added to prevent endogenous IgE from binding to QGE031 and interfering with the detection of ADAs. An inhibition step with QGE031 confirmed the presence of ADAs. Objectives and outcome measures The primary objectives for both trials were to establish the safety and tolerability of single intravenous doses or multiple subcutaneous doses of QGE031 in atopic subjects, with PK as a key secondary objective. PD effects of QGE031, including levels of free and total IgE in the serum, FcεRI and surface IgE expression on circulating basophils were evaluated. Suppression of skin prick responses to allergen by QGE031 was an exploratory objective (subcutaneous trial). Statistical methods In each trial, the safety population consisted of all subjects who received at least one dose of study drug. The PK and PD populations consisted of all subjects with available PK or PD data, respectively, and no major protocol deviations that could impact the data. The intravenous trial was a first-in-human study of QGE031. 
The sample size of six subjects on active drug for the QGE031 cohorts was based on published detectable adverse event (AE) rates and changes in laboratory parameters 18. For a more complete assessment of tolerability, two subjects treated with placebo were added to each cohort. For Cohort 6a, a cohort of 18 subjects on placebo provided 90% confidence that the true rate of urticaria or other allergic event was ≤ 20% (rate observed in the 3 and 10 mg/kg cohorts), when not more than one event was observed. In the subcutaneous trial, a Bayesian design determined the size of Cohort 3. A cohort size of 40 subjects on active drug provided 80% confidence that the incidence of hypersensitivity events after subcutaneous administration of 2 mg/kg of QGE031 was 7.5% or less when not more than one subject experienced an event. All placebo subjects from Cohorts 2, 3 and 6 were pooled into one placebo group for data presentation and statistical analysis. Pharmacokinetic parameters of QGE031 were determined using WinNonlin Pro (version 5.2) and descriptive statistics presented. Concentrations of QGE031 below the LLOQ were treated as zero. For subjects who did not complete the study, the end of study sample was excluded from the analysis. Descriptive statistics were generated for free and total IgE with Day 1 (predose) values considered baseline values. Descriptive statistics were also generated for FACS data. Skin prick data were presented for the threshold dilution of allergen that elicited a wheal of ≥ 3 mm (see Statistical Methods in the Supplementary Material). Areas under the allergen dose–response curves for wheal responses to allergens were calculated for each subject at each visit using the linear trapezoidal rule. The allergen areas under the curve (AUC) were analyzed using a covariance model with baseline AUC (Day 1) as a covariate and treatment as a fixed factor (SAS PROC MIXED). 
Differences between each QGE031 treatment group and placebo were calculated along with the 95% confidence intervals (CI) and P-value. Post hoc analysis was conducted for the difference between 2 mg/kg QGE031 and omalizumab. In each trial, the safety population consisted of all subjects who received at least one dose of study drug. The PK and PD populations consisted of all subjects with available PK or PD data, respectively, and no major protocol deviations that could impact on the data. The intravenous trial was a first-in-human study of QGE031. The sample size of six subjects on active drug for the QGE031 cohorts was based on published detectable adverse event (AE) rates and changes in laboratory parameters 18. For a more complete assessment of tolerability, two subjects treated with placebo were added to each cohort. For Cohort 6a, a cohort of 18 subjects on placebo provides 90% confidence that the true rate of urticaria or other allergic event is ≤ 20% (rate observed in the 3 and 10 mg/kg cohorts), when not more than one event is observed. In the subcutaneous trial, a Bayesian design determined the size of Cohort 3. A cohort size of 40 subjects on active drug provided 80% confidence that the incidence of hypersensitivity events after subcutaneous administration of 2 mg/kg of QGE031 is 7.5% or less when not more than one subject experienced an event. All placebo subjects from Cohorts 2, 3 and 6 were pooled into one placebo group for data presentation and statistical analysis. Pharmacokinetics parameters of QGE031 were determined using WinNonlin Pro (version 5.2) and descriptive statistics presented. Concentrations of QGE031 below the LLOQ were treated as zero. For subjects that did not complete the study, the end of study sample was excluded from the analysis. Descriptive statistics were generated for free and total IgE with Day 1 (predose) values considered baseline values. Descriptive statistics were also generated for FACS data. 
Skin prick data were presented for the threshold dilution of allergen that elicited a wheal of ≥ 3 mm (see Statistical Methods in the Supplementary Material). Areas under the allergen dose–response curves for wheal responses to allergens were calculated for each subject at each visit using the linear trapezoidal rule. The allergen areas under the curve (AUC) were analyzed using a covariance model with baseline AUC (Day 1) as a covariate and treatment as a fixed factor (SAS PROC MIXED). Differences between each QGE031 treatment group and placebo were calculated along with the 95% confidence intervals (CI) and P-value. Post hoc analysis was conducted for the difference between 2 mg/kg QGE031 and omalizumab.
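The cohort-size confidence statements above follow from a simple binomial tail probability; the sketch below is an illustrative check only (it does not reproduce the trial's actual Bayesian machinery):

```python
def prob_at_most_one_event(n, p):
    """P(X <= 1) for X ~ Binomial(n, p): the chance of observing no more
    than one event among n subjects if the true event rate is p."""
    return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

# Subcutaneous trial, Cohort 3: 40 subjects, assumed true hypersensitivity rate 7.5%
p_sc = prob_at_most_one_event(40, 0.075)  # ~0.19, i.e. ~80% confidence
# Intravenous trial, Cohort 6a: 18 subjects, assumed true urticaria rate 20%
p_iv = prob_at_most_one_event(18, 0.20)   # ~0.10, i.e. ~90% confidence
```

If the true rate were at the stated bound, observing at most one event would happen only ~19% (or ~10%) of the time, which is where the quoted 80% and 90% confidence figures come from.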
Results
Preclinical results QGE031 demonstrated higher-affinity binding for human IgE than omalizumab, as assessed by surface plasmon resonance, with equilibrium KD values of 139 pM vs. 6.98 nM, respectively. In a cell-based functional assay using human cord blood-derived mast cells, an approximately 1 : 1 molar ratio of QGE031 to IgE was sufficient to achieve 90% inhibition of IgE-dependent mast cell degranulation. By comparison, a ninefold to 27-fold excess of omalizumab was needed to achieve the same response. QGE031 is highly selective for human and non-human primate IgE; it did not bind to IgE purified from rat, cat or dog. The affinity of QGE031 for cynomolgus (non-human primate) IgE was approximately 12-fold lower than for human IgE, with a KD of 1.53 nM. Clinical results
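The affinity gap reported above can be sanity-checked directly from the equilibrium dissociation constants (values taken from the text; a lower KD means tighter binding):

```python
# Equilibrium dissociation constants from surface plasmon resonance (molar)
kd_qge031_human = 139e-12      # 139 pM for human IgE
kd_omalizumab_human = 6.98e-9  # 6.98 nM for human IgE
kd_qge031_cyno = 1.53e-9       # 1.53 nM for cynomolgus IgE

# QGE031 binds human IgE roughly 50-fold more tightly than omalizumab does
fold_vs_omalizumab = kd_omalizumab_human / kd_qge031_human
# and roughly 11-fold more tightly to human than to cynomolgus IgE
# (the text rounds this to "approximately 12-fold")
fold_human_vs_cyno = kd_qge031_cyno / kd_qge031_human
```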
[ "Preclinical experiments", "Clinical trials", "Study design", "PK assessments", "PD assessments: serum total and free IgE", "PD assessments: FACS analysis", "PK and PD model of binding to and capture of IgE by QGE031 and omalizumab", "PD assessments: extinction skin prick testing (subcutaneous trial)", "Immunogenicity", "Objectives and outcome measures", "Statistical methods", "Preclinical results", "Subjects", "PK results", "PD: total and free IgE", "FACS analysis", "Quantification of the in vivo binding to IgE", "Skin prick tests (subcutaneous trial)", "Overview of PD parameters", "Immunogenicity", "Safety" ]
[ "Details of experiments to characterize the in vitro pharmacology of QGE031 are given in the Supplementary Material and included experiments to determine the equilibrium constant of IgE; inhibition of binding to cell-surface FcεRI and the immobilized α-subunit of FCεRI; impact on mast cell degranulation and activation assays; and the binding activity of QGE031 across several mammalian species.", "Two clinical trials of QGE031 were conducted. The first-in-human trial administered QGE031 intravenously at a single site in the USA (January to December 2009), while the second trial administered QGE031 subcutaneously at three sites in the USA (May 2010 to September 2011). Both trials were approved by each site's Institutional Review Board, details of which are provided in the Supplementary Material (S1), and all subjects provided written informed consent.\nMale or female subjects aged 18–55 years with a history of atopy, defined as having one or more positive skin tests to common airborne allergens, a history of food allergy (subcutaneous trial) or serum IgE > 30 IU/mL (intravenous trial) were enrolled. Subjects participating in the subcutaneous trial were restricted to 45–120 kg body weight. In both studies, omalizumab was given open label in accordance with body weight and baseline IgE levels as defined by the FDA dosing table 5. The main exclusion criteria included poorly controlled asthma, prior use of omalizumab (intravenous trial), or use of QGE031 or omalizumab in the previous 6 months (subcutaneous trial).", "A site-specific randomization list was generated to assign the subjects to the lowest available numbers according to the specified assignment ratio. Subjects, site staff, persons performing the assessments and data analysts remained blinded to treatment from randomization until database lock. Treatments were concealed by identical packaging, labelling and schedule of administration. 
For the subcutaneous trial, interim analyses were conducted, and therefore, access to unblinded data was allowed for some members of the clinical team using a controlled process for protection of randomization data. Project teams were given access to data at the group but not the individual level.\nDose selection details for both trials are detailed in the Supplementary Material. In the intravenous trial, subjects with IgE 30–1000 IU/mL were randomized to increasing doses of QGE031 [0.1, 0.3, 1, 3 or 10 mg/kg (Fig.1a)] or placebo in a ratio of 3 : 1 for each cohort. Cohort 5 included subjects with IgE > 1000 IU/mL treated with 3 mg/kg QGE031 or placebo (3 : 1). In Cohort 7, subjects received open-label subcutaneous omalizumab, dosed according to the FDA dosing table 5. Following a protocol amendment (see Supplementary Material), additional subjects in Cohort 6 (10 mg/kg QGE031) were exposed to placebo (Cohort 6a) to investigate the allergenic potential of the excipient, polysorbate 80.\nSubject disposition for (a) the intravenous trial and (b) the subcutaneous trial. *Total approximate numbers are provided. Subjects were excluded because they declined to participate or failed to meet eligibility criteria. †The placebo cohort was introduced following a protocol amendment to serve as an expansion of the placebo group at the highest (10 mg/kg) dose level. ‡Omalizumab was dosed as per the FDA dosing table 5. ¶Placebo pooled from Cohorts 2, 3 and 6.\nIn the subcutaneous trial, subjects were randomized to one of six cohorts (Fig.1b). Three cohorts (Cohorts 1, 2 and 6) of subjects with IgE 30–700 IU/mL were treated sequentially with multiple escalating doses of subcutaneous QGE031 (0.2, 0.6 or 4 mg/kg, respectively) or placebo (2 : 1); Cohort 6 (i.e. 4 mg/kg) was introduced following a protocol amendment (see Supplementary Material). Cohort 3 received 2 mg/kg QGE031 or placebo (4 : 1). 
Cohort 4 included subjects with IgE > 700 IU/mL treated with 2 mg/kg QGE031 or placebo (1 : 1). All QGE031 cohorts received study drug at 2-week intervals for a total of two (Cohort 1) or four (Cohorts 2, 3, 4 and 6) doses. Subjects in Cohort 5 received subcutaneous open-label omalizumab, dosed as per the FDA dosing table 5.\nSubjects were admitted approximately 24 h prior to dosing and domiciled for 96 h (intravenous trial) or 48 h (subcutaneous trial) after drug administration. The investigator and Novartis performed a blinded review of at least 10 days (intravenous) or 15 days (subcutaneous) follow-up safety data for all subjects in a given cohort before dosing the next. Subjects were followed for up to approximately 16 weeks.", "Total QGE031 serum concentrations were determined by ELISA with a lower limit of quantification (LLOQ) of 200 ng/mL in serum (see Data S1).", "Total IgE was determined by ELISA with LLOQ of 100 ng/mL in human serum. Free IgE was determined in human serum by ELISA with LLOQ of 7.8 ng/mL and 1.95 ng/mL in the intravenous and subcutaneous trials, respectively; for both trials, the upper limit of quantification of free IgE was 250 ng/mL (see Data S1).", "Serum samples from both studies were subjected to fluorescence-activated cell sorting (FACS) using a FACS Canto II cytometer. In each sample, 2000 basophils were acquired and molecules of equivalent soluble fluorochrome values were calculated (see Data S1).", "The PK profile of QGE031 and several PD parameters including total IgE, basophil FcεRI and basophil surface IgE were analyzed using an adaptation of the previously published omalizumab PK–IgE binding model 14,16,17 (see Data S1).", "At baseline and Days 29, 57, 85 and 155, extinction skin prick testing was performed in duplicate on the subjects’ skin of the back with serial threefold dilutions of a selected allergen that provided a > 5 mm mean wheal diameter at screening. 
The mean of the longest diameter and corresponding mid-point orthogonal diameter for wheal and flares was recorded (see Data S1). Sites used the Greer Prick System (Greer, Lenoir, NC) or Duotip applicators (Lincoln Diagnostics, Decatur, IL) and allergen extracts from Greer or Hollister-Stier (Spokane, WA).", "In the intravenous trial, serum samples were collected at Days 29 and 113 (end of study) and were tested for the presence of anti-QGE031 antibodies (ADAs) using a Biacore-based assay with QGE031 bound to the surface of the Biacore chip using protein G. A homogeneous bridging Meso Scale Discovery-based assay with improved sensitivity was used for immunogenicity testing in the subcutaneous trial; samples were collected predose and at Days 15, 29, 43, 99 and 155 (end of study). For both studies, to distinguish between IgE-derived binding and ADA-derived binding, the soluble form of human recombinant FcεRI was added to prevent endogenous IgE from binding to QGE031 and interfering with the detection of ADAs. An inhibition step with QGE031 confirmed the presence of ADAs.", "The primary objectives for both trials were to establish the safety and tolerability of single intravenous doses or multiple subcutaneous doses of QGE031 in atopic subjects, with PK as a key secondary objective. PD effects of QGE031, including levels of free and total IgE in the serum and FcεRI and surface IgE expression on circulating basophils, were evaluated. Suppression of skin prick responses to allergen by QGE031 was an exploratory objective (subcutaneous trial).", "In each trial, the safety population consisted of all subjects who received at least one dose of study drug. The PK and PD populations consisted of all subjects with available PK or PD data, respectively, and no major protocol deviations that could impact the data.\nThe intravenous trial was a first-in-human study of QGE031. 
The sample size of six subjects on active drug for the QGE031 cohorts was based on published detectable adverse event (AE) rates and changes in laboratory parameters 18. For a more complete assessment of tolerability, two subjects treated with placebo were added to each cohort. For Cohort 6a, a cohort of 18 subjects on placebo provided 90% confidence that the true rate of urticaria or other allergic events was ≤ 20% (the rate observed in the 3 and 10 mg/kg cohorts) when not more than one event was observed.\nIn the subcutaneous trial, a Bayesian design determined the size of Cohort 3. A cohort size of 40 subjects on active drug provided 80% confidence that the incidence of hypersensitivity events after subcutaneous administration of 2 mg/kg of QGE031 was 7.5% or less when not more than one subject experienced an event. All placebo subjects from Cohorts 2, 3 and 6 were pooled into one placebo group for data presentation and statistical analysis.\nPharmacokinetic parameters of QGE031 were determined using WinNonlin Pro (version 5.2), and descriptive statistics were presented. Concentrations of QGE031 below the LLOQ were treated as zero. For subjects who did not complete the study, the end-of-study sample was excluded from the analysis.\nDescriptive statistics were generated for free and total IgE, with Day 1 (predose) values considered baseline values. Descriptive statistics were also generated for FACS data.\nSkin prick data were presented for the threshold dilution of allergen that elicited a wheal of ≥ 3 mm (see Statistical Methods in the Supplementary Material). Areas under the allergen dose–response curves for wheal responses were calculated for each subject at each visit using the linear trapezoidal rule. The allergen areas under the curve (AUC) were analyzed using a covariance model with baseline AUC (Day 1) as a covariate and treatment as a fixed factor (SAS PROC MIXED). 
Differences between each QGE031 treatment group and placebo were calculated along with the 95% confidence intervals (CI) and P-value. Post hoc analysis was conducted for the difference between 2 mg/kg QGE031 and omalizumab.", "QGE031 demonstrated higher affinity binding for human IgE compared with omalizumab, as assessed by surface plasmon resonance, with equilibrium KD of 139 pM vs. 6.98 nM, respectively.\nIn a cell-based functional assay using human cord blood-derived mast cells, an approximate 1 : 1 molar ratio of QGE031 to IgE was sufficient to achieve a 90% inhibition of IgE-dependent mast cell degranulation. By comparison, between ninefold and 27-fold excess of omalizumab was needed to achieve the same response. QGE031 is highly selective for human and non-human primate IgE. QGE031 did not bind to IgE purified from rat, cat or dog. The affinity of QGE031 for cynomolgus non-human primate IgE was approximately 12-fold lower than for human with a KD of 1.53 nM.", "Seventy-three subjects (55 in the core study and 18 in Cohort 6a) were enrolled in the intravenous study with 60 subjects completing the study (82%) (Fig.1a). Owing to recruitment difficulties, only one subject was randomized in Cohort 5 (IgE > 1000 IU/mL). The main reasons for discontinuation were withdrawal of consent (n = 8) or lost to follow-up (n = 5).\nThe subcutaneous study enrolled 110 subjects with 96 (87%) completing the study (Fig.1b). Fourteen (13%) subjects did not complete the study due to AEs (n = 2), positive drug screen (n = 1), withdrawal of consent (n = 8), lost to follow-up (n = 2) and protocol deviation (n = 1). 
Of the two subjects who withdrew due to AEs, one had an asthma exacerbation on Day 5 and one subject developed a flu-like illness on Day 36; neither AE was considered by the investigator to be related to study drug.\nSubject demographics and other baseline characteristics were similar across the cohorts in both trials (Table1 and Table S1).\nIgE levels for treatment groups for (a) the intravenous trial and (b) the subcutaneous trial\nNC, not calculable.\nFrom screening visit; data highly skewed, so median shown.\nPooled over Cohorts 2, 3 and 6.\n PK results Not all subjects had a fully evaluable PK profile, so the QGE031 PK were characterized in 36 subjects in the intravenous trial and 64 subjects in the subcutaneous trial.\nThe time course of QGE031 in serum when administered intravenously over 2 h was characterized by a biexponential decline, with a rapid initial and slower terminal elimination phase (Fig.2a; Fig. S1). A slower terminal disposition phase became visible at doses of 1 mg/kg and higher. At doses of 3 and 10 mg/kg, the PK profile demonstrated a half-life of 17–23 days (Table2a). Higher levels of IgE accelerated the elimination of QGE031, as shown by PK parameters including a shorter half-life (Table2a) and time courses of response (Fig.2a).\nSummary of PK parameters for (a) the intravenous trial and (b) the subcutaneous trial\nAUC, area under the curve; AUCinf, area under the curve from time zero to infinity; Cmax, peak serum concentration; NC, not calculable; SD, standard deviation; Tmax, time to reach peak serum concentration; T½, half-life in serum.\nTime course for serum concentrations of QGE031 following (a) single 2-h intravenous infusion and (b) 2-weekly subcutaneous administration on two (0.2 mg/kg) to four occasions (all other cohorts). Data presented are geometric means. 
Vertical arrows in panel (b) denote the times of administration of QGE031 (except for the 0.2 mg/kg cohort, where QGE031 was administered on just the first two occasions).\nThe PK of QGE031 in serum following subcutaneous administration is also presented (Fig.2b; Fig. S1). The maximum concentration (Cmax) of QGE031 in serum following subcutaneous dosing occurred 2–4 days after the last administered dose (Table2b). Systemic drug exposures reached stable and clinically relevant serum concentrations during the 2-week interval after dosing, as demonstrated by dose-proportional increases in peak serum concentration (Cmax) and AUC0–14 days at dose levels between 0.2 and 2 mg/kg QGE031 in subjects with IgE below 700 IU/mL (Table2b). There was no dose-proportional increase in systemic exposure (Cmax, AUC0–14 days) between the 2 and 4 mg/kg doses (Table2b). Whether this observation is due to random interindividual variation, given the small numbers of subjects dosed at 4 mg/kg, or to some form of saturation of absorption is not currently known. At the lowest QGE031 dose (0.2 mg/kg) and at the 2 mg/kg dose with high IgE levels, the mean terminal elimination half-life was shorter (13–15 days) than in the other groups (23–26 days; Table2b).\n PD: total and free IgE In the intravenous study, QGE031 induced a dose-dependent accumulation of total IgE and suppressed free IgE compared with placebo (Fig.3a; Fig. S1). Free IgE was suppressed more rapidly and to a greater extent than was seen with omalizumab. For all doses of QGE031, suppression of free IgE was below the LLOQ (7.8 ng/mL; Fig.3a). Omalizumab-induced suppression of free IgE was ‘shallower’, with a gradual return to baseline. By contrast, QGE031 suppressed free IgE more rapidly, to a greater extent and for longer, with a faster return to baseline (Fig.3a).\nIndividual subject serum concentrations of total and free IgE in response to increasing doses of QGE031, placebo or omalizumab following (a) single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. The upper and lower limits of quantification for free IgE were 250 ng/mL and 7.8 ng/mL, respectively, for the intravenous study, and 250 ng/mL and 1.95 ng/mL, respectively, for the subcutaneous study. Subjects with high IgE (i.e. > 1000 IU/mL for the intravenous study and > 700 IU/mL for the subcutaneous study) are plotted in red. The placebo group in the subcutaneous study contains all placebo-treated patients regardless of cohort. iv, intravenous; sc, subcutaneous.\nSubcutaneous delivery of QGE031 resulted in rapid, incremental and sustained increases in total IgE serum concentrations at all doses compared with placebo (Fig.3b; Fig. S1). 
All doses of QGE031 reduced free IgE below the LLOQ (1.95 ng/mL) to a greater extent than omalizumab (Fig.3b), even in subjects with high IgE (> 700 IU/mL).\nFor both intravenous and subcutaneous administration, the duration of free IgE suppression was longer at higher doses of QGE031 and tended to be shorter in subjects with higher baseline IgE (Fig.3a,b).\n FACS analysis In both trials, QGE031 produced a dose- and time-dependent reduction in basophil FcεRI and IgE expression that was superior in extent (surface IgE only) but longer in duration compared with omalizumab (Fig.4; Fig. S1). In the subcutaneous study, basophil FcεRI and IgE expression were suppressed for 2 to > 16 weeks after the last dose, with individuals exhibiting higher IgE levels at screening (i.e. ≥ 700 IU/mL) having a shorter duration of suppression.\nIndividual subject time courses for the expression of basophil surface FcεRI and basophil surface IgE in response to increasing doses of QGE031, placebo or omalizumab following (a) single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. Subjects with high IgE (i.e. > 1000 IU/mL for the intravenous study and > 700 IU/mL for the subcutaneous study) are plotted in red. The placebo group in the subcutaneous study contains all placebo-treated patients regardless of cohort. iv, intravenous; MESF, molecules of equivalent soluble fluorochrome; s, soluble; sc, subcutaneous.\n Quantification of the in vivo binding to IgE The PK–PD model fitted the clinical QGE031 PK/PD data well, including the accumulation of drug and associated responses towards steady state, followed by washout and return towards baseline after treatment cessation. The half-maximum concentration for in vivo binding of QGE031 to IgE (KD = 0.32 nM; 95% CI 0.19–0.45 nM) was ninefold (95% CI 6.1–14-fold) lower than that for omalizumab.\n Skin prick tests (subcutaneous trial) Both the AUC and the threshold dose of allergen eliciting a wheal were suppressed in a dose- and time-dependent manner by treatment with QGE031 (Fig.5; Table S2). 
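The two skin-prick endpoints used in this analysis (the wheal dose–response AUC computed by the linear trapezoidal rule, and the threshold threefold dilution still eliciting a ≥ 3 mm wheal) can be sketched as follows. The wheal values below are hypothetical illustrations; the trial's actual analysis used SAS PROC MIXED:

```python
def trapezoidal_auc(xs, ys):
    """Linear trapezoidal rule over the allergen dose-response curve."""
    return sum((ys[i] + ys[i + 1]) / 2.0 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# x-axis: dilution step k, where step k = 3**k-fold dilution (0 = neat allergen)
steps = [0, 1, 2, 3, 4]
wheal_mm = [8.0, 6.5, 5.0, 3.5, 2.0]  # hypothetical mean wheal diameters (mm)

auc = trapezoidal_auc(steps, wheal_mm)

# Threshold endpoint: the most dilute step still producing a wheal >= 3 mm
threshold_step = max(k for k, w in zip(steps, wheal_mm) if w >= 3.0)

# An 81-fold rise in threshold dilution equals four threefold steps (3**4 == 81)
```

Reporting the threshold as a count of threefold steps is what lets a shift such as the 81-fold increase quoted for Cohort 3 be read directly as four steps on the dilution axis.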
In Cohort 3 (QGE031 2 mg/kg; n = 31), the allergen AUC was maximally suppressed by > 95% for QGE031 compared with 41% for omalizumab (P < 0.001) with an 81-fold increase in threshold dilution of allergen at Day 85, 6 weeks after the last dose of QGE031.\nTime courses of changes in allergen-induced skin prick wheal responses: (a) area under the dose–response curve values and (b) threshold 1 : 3 dilution of allergen eliciting a response after subcutaneous administration of QGE031, placebo or omalizumab. Data are presented as mean + standard deviation. Serial threefold dilutions of allergen were applied in skin prick testing. A value of 1 = threefold dilution, 2 = ninefold dilution, etc. A value of 0 was assigned if the threshold concentration eliciting a response was the neat allergen. A value of −1 was assigned if no response was elicited at any concentration. AUC, area under the curve.\nIn subjects with baseline IgE levels > 700 IU/mL, 2 mg/kg QGE031 significantly reduced the wheal allergen AUC compared with placebo as effectively as in subjects with IgE < 700 IU/mL, but with an earlier peak effect at Day 57 and recovery to baseline by the end of the study (Fig.5).\nThe percentage of subjects with positive responses to increasing concentrations of allergen was numerically reduced in a dose-dependent manner by QGE031 compared with pooled placebo cohorts (0.6; 2 and 4 mg/kg) and omalizumab for wheal responses up to the end of study (Day 155) (Fig. S2).\nBoth the AUC and the threshold dose of allergen eliciting a wheal were suppressed in a dose- and time-dependent manner by treatment with QGE031 (Fig.5; Table S2). 
In Cohort 3 (QGE031 2 mg/kg; n = 31), the allergen AUC was maximally suppressed by > 95% for QGE031 compared with 41% for omalizumab (P < 0.001) with an 81-fold increase in threshold dilution of allergen at Day 85, 6 weeks after the last dose of QGE031.\nTime courses of changes in allergen-induced skin prick wheal responses: (a) area under the dose–response curve values and (b) threshold 1 : 3 dilution of allergen eliciting a response after subcutaneous administration of QGE031, placebo or omalizumab. Data are presented as mean + standard deviation. Serial threefold dilutions of allergen were applied in skin prick testing. A value of 1 = threefold dilution, 2 = ninefold dilution, etc. A value of 0 was assigned if the threshold concentration eliciting a response was the neat allergen. A value of −1 was assigned if no response was elicited at any concentration. AUC, area under the curve.\nIn subjects with baseline IgE levels > 700 IU/mL, 2 mg/kg QGE031 significantly reduced the wheal allergen AUC compared with placebo as effectively as in subjects with IgE < 700 IU/mL, but with an earlier peak effect at Day 57 and recovery to baseline by the end of the study (Fig.5).\nThe percentage of subjects with positive responses to increasing concentrations of allergen was numerically reduced in a dose-dependent manner by QGE031 compared with pooled placebo cohorts (0.6; 2 and 4 mg/kg) and omalizumab for wheal responses up to the end of study (Day 155) (Fig. S2).\n Overview of PD parameters To illustrate the sequential expression and recovery of free IgE, basophil surface expression of FcεRI and IgE, and skin prick responses, as well as the influence of baseline IgE, the kinetics of individual subject PD parameters are presented (Fig. 
S1).\n Immunogenicity In the intravenous trial, QGE031 concentrations > 5 μg/mL may have interfered with the detection of anti-QGE031 antibodies. The QGE031 concentrations were above the tolerable drug levels in only four of 95 analyzed postdose samples; weak immunogenicity signals were detected in 10 subjects, four of whom were treated with placebo (data not shown). Of the six subjects exposed to QGE031 that had immunogenicity responses, there were no differences in PK/PD responses compared with subjects without an immunogenicity signal.\nFor the subcutaneous trial, QGE031 concentrations > 0.76 μg/mL may have interfered with detection of anti-QGE031 antibodies. The drug tolerance levels were exceeded in 44% of immunogenicity samples tested at the end of study; however, PK and PD results did not indicate that a strong anti-QGE031 response was missed in any of the subjects exposed to QGE031.\n Safety AEs were experienced by 59% of subjects in the intravenous study (Table3a) and 66% of subjects in the subcutaneous study (Table3b). The most common AEs across both studies were headache and upper respiratory tract infection with the majority of events mild to moderate in severity and not suspected to be related to study medication, with the exception of injection site events.\nIncidence of AEs by preferred term (safety analysis set) for (a) the intravenous trial and (b) the subcutaneous trial\nAE, adverse event.\nIn the intravenous trial, four subjects experienced urticaria. Two of 10 subjects who received 3 mg/kg QGE031 experienced urticaria concurrent with the end of the 2-h infusion. One of 10 subjects treated with 10 mg/kg QGE031 experienced urticaria and angioedema with chest pressure, diarrhoea and abdominal pain that began close to the end of the infusion period. One subject experienced urticaria and loose stools 22 h after receiving placebo to 10 mg/kg QGE031. In all subjects, symptoms resolved rapidly after treatment with diphenhydramine with the exception of angioedema that persisted for 90 h. There were no observed episodes of urticaria in 18 subjects who subsequently received a single infusion of placebo. The true underlying rate is unknown. The rate observed in the 3 and 10 mg/kg cohorts was ≤ 20%; there is a 90% probability that the true rate on placebo is statistically less than 20%.\nIn the subcutaneous study, there were four episodes of urticaria in three subjects that occurred from 13 h to 1 week after administration of study drug. Of these, three episodes occurred in two subjects treated with placebo, while one episode occurred in a subject dosed with 0.6 mg/kg QGE031, which did not recur after further doses of QGE031. All episodes were mild or moderate and transient, and resolved without treatment. No episodes of urticaria occurred in 40 subjects who received a total of 149 doses of 2 mg/kg QGE031 or in eight subjects treated with 4 mg/kg QGE031. On the basis of prespecified analyses, the lack of urticarial events in 40 subjects treated with 2 mg/kg QGE031 provides 96% probability that the true incidence of urticaria is 7.5% or less.\nThere were no serious AEs in either trial, but two subjects were withdrawn from Cohort 3 (QGE031 2 mg/kg) in the subcutaneous trial due to AEs; one suffered an asthma exacerbation and one developed a severe flu-like illness. Neither event was considered related to study drug.", "Not all subjects had a fully evaluable PK profile, so the QGE031 PK were characterized in 36 subjects in the intravenous trial and 64 subjects in the subcutaneous trial.\nThe time course of QGE031 in serum when administered intravenously over 2 h was characterized by a biexponential decline, with a rapid initial and slower terminal elimination phase (Fig.2a; Fig. S1). A slower terminal disposition phase became visible at doses of 1 mg/kg and higher. 
At doses of 3 and 10 mg/kg, the PK profile demonstrated a half-life of 17–23 days (Table2a). Higher levels of IgE accelerated the elimination of QGE031, as shown by PK parameters including a shorter half-life (Table2a) and the time courses of response (Fig.2a).\nSummary of PK parameters for (a) the intravenous trial and (b) the subcutaneous trial\nAUC, area under the curve; AUCinf, area under the curve from time zero to infinity; Cmax, peak serum concentration; NC, not calculable; SD, standard deviation; Tmax, time to reach peak serum concentration; T½, half-life in serum.\nTime course for serum concentrations of QGE031 following (a) single 2-h intravenous infusion and (b) 2-weekly subcutaneous administration on two (0.2 mg/kg) to four occasions (all other cohorts). Data presented are geometric means. Vertical arrows in panel (b) denote the times of administration of QGE031 (except for the 0.2 mg/kg cohort, where QGE031 was administered on just the first two occasions).\nSerum concentration–time profiles of QGE031 following subcutaneous administration are also presented (Fig.2b; Fig. S1). The maximum concentration (Cmax) of QGE031 in serum following subcutaneous dosing occurred 2–4 days after the last administered dose (Table2b). Systemic drug exposures reached stable and clinically relevant serum concentrations during the 2-week interval after dosing, as demonstrated by dose-proportional increases in peak serum concentration (Cmax) and AUC0–14 days at dose levels between 0.2 and 2 mg/kg QGE031 in subjects with IgE below 700 IU/mL (Table2b). There was no dose-proportional increase in systemic exposure (Cmax, AUC0–14 days) between the 2 and 4 mg/kg doses (Table2b). Whether this observation is due to random interindividual variation, given the small number of subjects dosed at 4 mg/kg, or some form of saturation of absorption is not currently known. 
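The biexponential decline and the way a terminal half-life is read from the late, log-linear portion of the concentration–time curve can be sketched numerically; the parameters below are illustrative only (not the study's fitted values), with the terminal rate chosen to give a 20-day half-life, within the reported 17–23-day range:

```python
import numpy as np

def biexponential(t, A, alpha, B, beta):
    """Serum concentration under a biexponential decline:
    a rapid initial phase plus a slower terminal elimination phase."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Illustrative parameters only (not fitted study values).
A, alpha = 60.0, 0.5                  # ug/mL, 1/day (initial phase)
B, beta = 40.0, np.log(2) / 20.0      # ug/mL, 1/day (terminal phase)

t = np.linspace(0.0, 84.0, 200)       # days
conc = biexponential(t, A, alpha, B, beta)

# Terminal half-life from the log-linear slope of the late time points,
# where the initial phase has essentially fully decayed.
late = t > 30.0
slope = np.polyfit(t[late], np.log(conc[late]), 1)[0]
est_half_life = np.log(2) / -slope
print(round(est_half_life, 1))  # ~20.0 days
```

The same log-linear regression applied to early time points would mix both phases, which is why the slower phase only "becomes visible" once doses are high enough for late concentrations to stay above the assay's quantification limit.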
At the lowest QGE031 dose (0.2 mg/kg) and at the 2 mg/kg dose with high IgE levels, mean terminal elimination half-life was shorter (13–15 days) compared with the other groups (23–26 days; Table2b).", "In the intravenous study, QGE031 induced a dose-dependent accumulation of total IgE and suppressed free IgE compared with placebo (Fig.3a; Fig. S1). Free IgE was suppressed more rapidly and to a greater extent than was seen with omalizumab. For all doses of QGE031, free IgE was suppressed below the LLOQ (7.8 ng/mL; Fig.3a). Omalizumab-induced suppression of free IgE was ‘shallower’, with a gradual return to baseline. By contrast, QGE031 suppressed free IgE more rapidly, to a greater extent and for longer, with a faster return to baseline (Fig.3a).\nIndividual subject serum concentrations of total and free IgE in response to increasing doses of QGE031, placebo or omalizumab following (a) single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. The upper and lower limits of quantification for free IgE were 250 ng/mL and 7.8 ng/mL, respectively, for the intravenous study, and 250 ng/mL and 1.95 ng/mL, respectively, for the subcutaneous study. Subjects with high IgE (i.e. > 1000 IU/mL for the intravenous study and > 700 IU/mL for the subcutaneous study) are plotted in red. The placebo group in the subcutaneous study contains all placebo-treated patients regardless of cohort. iv, intravenous; sc, subcutaneous.\nSubcutaneous delivery of QGE031 resulted in rapid, incremental and sustained increases in total IgE serum concentrations at all doses compared with placebo (Fig.3b; Fig. S1). 
All doses of QGE031 reduced free IgE below the LLOQ (1.95 ng/mL) to a greater extent than omalizumab (Fig.3b), even in subjects with high IgE (> 700 IU/mL).\nFor both intravenous and subcutaneous administration, the duration of free IgE suppression was longer for higher doses of QGE031 and tended to be shorter in subjects with higher baseline IgE (Fig.3a,b).", "In both trials, QGE031 produced a dose- and time-dependent reduction in basophil FcεRI and IgE expression that was superior to omalizumab in extent (surface IgE only) and longer in duration (Fig.4; Fig. S1). In the subcutaneous study, basophil FcεRI and IgE expression were suppressed for 2 to > 16 weeks after the last dose, with individuals exhibiting higher levels of IgE at screening (i.e. ≥ 700 IU/mL) having a shorter duration of suppression.\nIndividual subject time courses for the expression of basophil surface FcεRI and basophil surface IgE in response to increasing doses of QGE031, placebo or omalizumab in (a) single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. Subjects with high IgE (i.e. > 1000 IU/mL for the intravenous study and > 700 IU/mL for the subcutaneous study) are plotted in red. The placebo group in the subcutaneous study contains all placebo-treated patients regardless of cohort. iv, intravenous; MESF, molecules of equivalent soluble fluorochrome; s, soluble; sc, subcutaneous.", "The PK–PD model fitted the clinically observed QGE031 PK/PD data well, including the accumulation of drug and associated responses towards steady state, followed by washout and return towards baseline after treatment cessation. The half-maximum concentration for in vivo binding of QGE031 to IgE (KD = 0.32 nM; 95% CI 0.19–0.45 nM) was ninefold (95% CI 6.1–14-fold) lower than that for omalizumab.", "Both the AUC and the threshold dose of allergen eliciting a wheal were suppressed in a dose- and time-dependent manner by treatment with QGE031 (Fig.5; Table S2). 
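The consequence of the reported ninefold difference in in vivo KD can be illustrated with a simple 1:1 equilibrium-binding calculation (a sketch only: the KD values come from the text, but the antibody and IgE concentrations below are hypothetical):

```python
import math

def free_ligand(total_sites, total_ige, kd):
    """Free IgE at equilibrium for simple 1:1 binding, from the
    quadratic solution of the mass-balance equations (all in nM)."""
    b = kd + total_sites - total_ige
    return (-b + math.sqrt(b * b + 4.0 * kd * total_ige)) / 2.0

KD_QGE031 = 0.32                 # nM, in vivo estimate reported above
KD_OMALIZUMAB = 9 * KD_QGE031    # nM, ninefold weaker binding

total_ige = 50.0                 # nM, hypothetical baseline IgE
antibody_sites = 500.0           # nM, hypothetical molar excess of drug

free_qge = free_ligand(antibody_sites, total_ige, KD_QGE031)
free_oma = free_ligand(antibody_sites, total_ige, KD_OMALIZUMAB)
print(free_qge, free_oma)  # lower KD leaves roughly ninefold less free IgE
```

At a large molar excess of drug, residual free IgE scales approximately linearly with KD, which is consistent with the deeper free IgE suppression seen for QGE031 than for omalizumab at comparable exposures.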
In Cohort 3 (QGE031 2 mg/kg; n = 31), the allergen AUC was maximally suppressed by > 95% for QGE031 compared with 41% for omalizumab (P < 0.001) with an 81-fold increase in threshold dilution of allergen at Day 85, 6 weeks after the last dose of QGE031.\nTime courses of changes in allergen-induced skin prick wheal responses: (a) area under the dose–response curve values and (b) threshold 1 : 3 dilution of allergen eliciting a response after subcutaneous administration of QGE031, placebo or omalizumab. Data are presented as mean + standard deviation. Serial threefold dilutions of allergen were applied in skin prick testing. A value of 1 = threefold dilution, 2 = ninefold dilution, etc. A value of 0 was assigned if the threshold concentration eliciting a response was the neat allergen. A value of −1 was assigned if no response was elicited at any concentration. AUC, area under the curve.\nIn subjects with baseline IgE levels > 700 IU/mL, 2 mg/kg QGE031 significantly reduced the wheal allergen AUC compared with placebo as effectively as in subjects with IgE < 700 IU/mL, but with an earlier peak effect at Day 57 and recovery to baseline by the end of the study (Fig.5).\nThe percentage of subjects with positive responses to increasing concentrations of allergen was numerically reduced in a dose-dependent manner by QGE031 compared with pooled placebo cohorts (0.6; 2 and 4 mg/kg) and omalizumab for wheal responses up to the end of study (Day 155) (Fig. S2).", "To illustrate the sequential expression and recovery of free IgE, basophil surface expression of FcεRI and IgE, and skin prick responses, as well as the influence of baseline IgE, the kinetics of individual subject PD parameters are presented (Fig. S1).", "In the intravenous trial, QGE031 concentrations > 5 μg/mL may have interfered with the detection of anti-QGE031 antibodies. 
The QGE031 concentrations were above the tolerable drug levels in only four of 95 analyzed postdose samples; weak immunogenicity signals were detected in 10 subjects, four of whom were treated with placebo (data not shown). Of the six subjects exposed to QGE031 that had immunogenicity responses, there were no differences in PK/PD responses compared with subjects without an immunogenicity signal.\nFor the subcutaneous trial, QGE031 concentrations > 0.76 μg/mL may have interfered with detection of anti-QGE031 antibodies. The drug tolerance levels were exceeded in 44% of immunogenicity samples tested at the end of study; however, PK and PD results did not indicate that a strong anti-QGE031 response was missed in any of the subjects exposed to QGE031.", "AEs were experienced by 59% of subjects in the intravenous study (Table3a) and 66% of subjects in the subcutaneous study (Table3b). The most common AEs across both studies were headache and upper respiratory tract infection with the majority of events mild to moderate in severity and not suspected to be related to study medication, with the exception of injection site events.\nIncidence of AEs by preferred term (safety analysis set) for (a) the intravenous trial and (b) the subcutaneous trial\nAE, adverse event.\nIn the intravenous trial, four subjects experienced urticaria. Two of 10 subjects who received 3 mg/kg QGE031 experienced urticaria concurrent with the end of the 2-h infusion. One of 10 subjects treated with 10 mg/kg QGE031 experienced urticaria and angioedema with chest pressure, diarrhoea and abdominal pain that began close to the end of the infusion period. One subject experienced urticaria and loose stools 22 h after receiving placebo to 10 mg/kg QGE031. In all subjects, symptoms resolved rapidly after treatment with diphenhydramine with the exception of angioedema that persisted for 90 h. There were no observed episodes of urticaria in 18 subjects who subsequently received a single infusion of placebo. 
The true underlying rate is unknown. The rate observed in the 3 and 10 mg/kg cohorts was ≤ 20%; there is a 90% probability that the true rate on placebo is statistically less than 20%.\nIn the subcutaneous study, there were four episodes of urticaria in three subjects that occurred from 13 h to 1 week after administration of study drug. Of these, three episodes occurred in two subjects treated with placebo, while one episode occurred in a subject dosed with 0.6 mg/kg QGE031, which did not recur after further doses of QGE031. All episodes were mild or moderate and transient, and resolved without treatment. No episodes of urticaria occurred in 40 subjects who received a total of 149 doses of 2 mg/kg QGE031 or in eight subjects treated with 4 mg/kg QGE031. On the basis of prespecified analyses, the lack of urticarial events in 40 subjects treated with 2 mg/kg QGE031 provides 96% probability that the true incidence of urticaria is 7.5% or less.\nThere were no serious AEs in either trial, but two subjects were withdrawn from Cohort 3 (QGE031 2 mg/kg) in the subcutaneous trial due to AEs; one suffered an asthma exacerbation and one developed a severe flu’-like illness. Neither event was considered related to study drug." ]
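The prespecified probability statements on urticaria incidence can be reproduced with a beta-binomial calculation, assuming a uniform Beta(1, 1) prior — an assumption on our part, since the exact prior of the trial's Bayesian design is not stated here — which matches the reported figures:

```python
from math import comb

def beta_cdf(x, a, b):
    """CDF of a Beta(a, b) distribution at x for integer a, b, using
    the identity I_x(a, b) = P[Binomial(a + b - 1, x) >= a]."""
    n = a + b - 1
    return sum(comb(n, k) * x**k * (1.0 - x) ** (n - k) for k in range(a, n + 1))

# 0 urticaria events among 40 subjects on 2 mg/kg: with a uniform
# Beta(1, 1) prior, the posterior for the true incidence is Beta(1, 41).
p_le_7p5pct = beta_cdf(0.075, 1, 41)
print(round(p_le_7p5pct, 2))  # 0.96, matching the reported 96%

# <= 1 event among 18 placebo subjects (1-event case): posterior Beta(2, 18).
p_le_20pct = beta_cdf(0.20, 2, 18)
print(round(p_le_20pct, 2))  # ~0.92, consistent with the ~90% statement
```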
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Preclinical experiments", "Clinical trials", "Study design", "PK assessments", "PD assessments: serum total and free IgE", "PD assessments: FACS analysis", "PK and PD model of binding to and capture of IgE by QGE031 and omalizumab", "PD assessments: extinction skin prick testing (subcutaneous trial)", "Immunogenicity", "Objectives and outcome measures", "Statistical methods", "Results", "Preclinical results", "Clinical results", "Subjects", "PK results", "PD: total and free IgE", "FACS analysis", "Quantification of the in vivo binding to IgE", "Skin prick tests (subcutaneous trial)", "Overview of PD parameters", "Immunogenicity", "Safety", "Discussion" ]
[ "IgE acts as an environmental sensor that detects allergens and elicits an immune response via the high-affinity IgE receptor, FcεRI, resulting in the sensitization of mast cells to specific antigens 1,2. On exposure to specific antigens, IgE bound to FcεRI induces secretory granule exocytosis from mast cells and basophils, as well as the generation of newly synthesized lipid mediators and cytokines, resulting in both early- and late-phase allergic responses 1,2.\nOmalizumab (Xolair®) is a recombinant monoclonal antibody with a dissociation constant (KD) of 6–8 nM for IgE 3. It is approved for the treatment of patients with severe 4 or moderate-to-severe 5 persistent allergic asthma. Omalizumab binds the Cε3 domain of free IgE, preventing it from binding to FcεRI 6,7. Omalizumab suppresses serum free IgE concentrations 8–10, which in turn, through direct feedback, down-regulates FcεRI surface expression on effector cells 9,11,12, further dampening the effector cell response to allergen.\nThe omalizumab dosing table aims to suppress free IgE to < 25 ng/mL (i.e. 10.4 IU/mL) 13 with doses based on body weight and IgE levels 4,5. Correlations between free IgE and asthma symptom control in controlled clinical studies suggest that a more profound suppression of free IgE may translate to better asthma clinical outcomes 14. QGE031 (ligelizumab) is a humanized IgG1 monoclonal antibody that binds with higher affinity to the Cε3 domain of IgE. QGE031 is designed to achieve superior IgE suppression, with an equilibrium dissociation constant (KD) of 139 pM, which may overcome some of the limitations associated with omalizumab dosing and lead to better clinical outcomes.\nThis report describes data from preclinical experiments and two phase I randomized, double-blind, placebo-controlled clinical trials investigating the pharmacokinetics (PK), pharmacodynamics (PD) and safety of QGE031 in atopic, but otherwise healthy, subjects. 
The first trial was a single escalating-dose trial with intravenously administered QGE031, while the second trial was a multiple ascending-dose trial with subcutaneously administered QGE031; both trials included an omalizumab arm, which was dosed according to the US Food and Drug Administration (FDA) Prescribing Information dosing table 5. Preliminary data have been published in abstract form 15.", " Preclinical experiments Details of experiments to characterize the in vitro pharmacology of QGE031 are given in the Supplementary Material and included experiments to determine the equilibrium dissociation constant for IgE; inhibition of binding to cell-surface FcεRI and the immobilized α-subunit of FcεRI; impact on mast cell degranulation and activation assays; and the binding activity of QGE031 across several mammalian species.\n Clinical trials Two clinical trials of QGE031 were conducted. The first-in-human trial administered QGE031 intravenously at a single site in the USA (January to December 2009), while the second trial administered QGE031 subcutaneously at three sites in the USA (May 2010 to September 2011). Both trials were approved by each site's Institutional Review Board, details of which are provided in the Supplementary Material (S1), and all subjects provided written informed consent.\nMale or female subjects aged 18–55 years with a history of atopy, defined as having one or more positive skin tests to common airborne allergens, a history of food allergy (subcutaneous trial) or serum IgE > 30 IU/mL (intravenous trial) were enrolled. 
Subjects participating in the subcutaneous trial were restricted to 45–120 kg body weight. In both studies, omalizumab was given open label in accordance with body weight and baseline IgE levels as defined by the FDA dosing table 5. The main exclusion criteria included poorly controlled asthma, prior use of omalizumab (intravenous trial), or use of QGE031 or omalizumab in the previous 6 months (subcutaneous trial).\n Study design A site-specific randomization list was generated to assign the subjects to the lowest available numbers according to the specified assignment ratio. Subjects, site staff, persons performing the assessments and data analysts remained blinded to treatment from randomization until database lock. Treatments were concealed by identical packaging, labelling and schedule of administration. 
For the subcutaneous trial, interim analyses were conducted, and therefore, access to unblinded data was allowed for some members of the clinical team using a controlled process for protection of randomization data. Project teams were given access to data at the group but not the individual level.\nDose selection details for both trials are detailed in the Supplementary Material. In the intravenous trial, subjects with IgE 30–1000 IU/mL were randomized to increasing doses of QGE031 [0.1, 0.3, 1, 3 or 10 mg/kg (Fig.1a)] or placebo in a ratio of 3 : 1 for each cohort. Cohort 5 included subjects with IgE > 1000 IU/mL treated with 3 mg/kg QGE031 or placebo (3 : 1). In Cohort 7, subjects received open-label subcutaneous omalizumab, dosed according to the FDA dosing table 5. Following a protocol amendment (see Supplementary Material), additional subjects in Cohort 6 (10 mg/kg QGE031) were exposed to placebo (Cohort 6a) to investigate the allergenic potential of the excipient, polysorbate 80.\nSubject disposition for (a) the intravenous trial and (b) the subcutaneous trial. *Total approximate numbers are provided. Subjects were excluded because they declined to participate or failed to meet eligibility criteria. †The placebo cohort was introduced following a protocol amendment to serve as an expansion of the placebo group at the highest (10 mg/kg) dose level. ‡Omalizumab was dosed as per the FDA dosing table 5. ¶Placebo pooled from Cohorts 2, 3 and 6.\nIn the subcutaneous trial, subjects were randomized to one of six cohorts (Fig.1b). Three cohorts (Cohorts 1, 2 and 6) of subjects with IgE 30–700 IU/mL were treated sequentially with multiple escalating doses of subcutaneous QGE031 (0.2, 0.6 or 4 mg/kg, respectively) or placebo (2 : 1); Cohort 6 (i.e. 4 mg/kg) was introduced following a protocol amendment (see Supplementary Material). Cohort 3 received 2 mg/kg QGE031 or placebo (4 : 1). 
Cohort 4 included subjects with IgE > 700 IU/mL treated with 2 mg/kg QGE031 or placebo (1 : 1). All QGE031 cohorts received study drug at 2-week intervals for a total of two (Cohort 1) or four (Cohorts 2, 3, 4 and 6) doses. Subjects in Cohort 5 received subcutaneous open-label omalizumab, dosed as per the FDA dosing table 5.\nSubjects were admitted approximately 24 h prior to dosing and domiciled for 96 h (intravenous trial) or 48 h (subcutaneous trial) after drug administration. The investigator and Novartis performed a blinded review of at least 10 days (intravenous) or 15 days (subcutaneous) follow-up safety data for all subjects in a given cohort before dosing the next. Subjects were followed for up to approximately 16 weeks.\n PK assessments Total QGE031 serum concentrations were determined by ELISA with a lower limit of quantification (LLOQ) of 200 ng/mL in serum (see Data S1).\n PD assessments: serum total and free IgE Total IgE was determined by ELISA with LLOQ of 100 ng/mL in human serum. Free IgE was determined in human serum by ELISA with LLOQ of 7.8 ng/mL and 1.95 ng/mL in the intravenous and subcutaneous trials, respectively; for both trials, the upper limit of quantification of free IgE was 250 ng/mL (see Data S1).\n PD assessments: FACS analysis Serum samples from both studies were subjected to fluorescence-activated cell sorting (FACS) using a FACS Canto II cytometer. In each sample, 2000 basophils were acquired and molecules of equivalent soluble fluorochrome values were calculated (see Data S1).\n PK and PD model of binding to and capture of IgE by QGE031 and omalizumab The PK profile of QGE031 and several PD parameters including total IgE, basophil FcεRI and basophil surface IgE were analyzed using an adaptation of the previously published omalizumab PK–IgE binding model 14,16,17 (see Data S1).\n PD assessments: extinction skin prick testing (subcutaneous trial) At baseline and Days 29, 57, 85 and 155, extinction skin prick testing was performed in duplicate on the subjects’ skin of the back with serial threefold dilutions of a selected allergen that provided a > 5 mm mean wheal diameter at screening. The mean of the longest diameter and corresponding mid-point orthogonal diameter for wheal and flares was recorded (see Data S1). Sites used the Greer Prick System (Greer, Lenoir, NC) or Duotip applicators (Lincoln Diagnostics, Decatur, IL) and allergen extracts from Greer or Hollister-Stier (Spokane, WA).\n Immunogenicity In the intravenous trial, serum samples were collected at Days 29 and 113 (end of study) and were tested for the presence of anti-QGE031 antibodies (ADAs) using a Biacore-based assay with QGE031 bound to the surface of the Biacore chip using protein G. A homogeneous bridging Meso Scale Discovery-based assay with improved sensitivity was used for the immunogenicity testing in the subcutaneous trial; samples were collected predose and at Days 15, 29, 43, 99 and 155 (end of study). For both studies, to distinguish between IgE-derived binding and ADA-derived binding, the soluble form of human recombinant FcεRI was added to prevent endogenous IgE from binding to QGE031 and interfering with the detection of ADAs. An inhibition step with QGE031 confirmed the presence of ADAs.\n Objectives and outcome measures The primary objectives for both trials were to establish the safety and tolerability of single intravenous doses or multiple subcutaneous doses of QGE031 in atopic subjects, with PK as a key secondary objective. PD effects of QGE031, including levels of free and total IgE in the serum, FcεRI and surface IgE expression on circulating basophils, were evaluated. Suppression of skin prick responses to allergen by QGE031 was an exploratory objective (subcutaneous trial).\n Statistical methods In each trial, the safety population consisted of all subjects who received at least one dose of study drug. The PK and PD populations consisted of all subjects with available PK or PD data, respectively, and no major protocol deviations that could impact on the data.\nThe intravenous trial was a first-in-human study of QGE031. The sample size of six subjects on active drug for the QGE031 cohorts was based on published detectable adverse event (AE) rates and changes in laboratory parameters 18. For a more complete assessment of tolerability, two subjects treated with placebo were added to each cohort. For Cohort 6a, a cohort of 18 subjects on placebo provides 90% confidence that the true rate of urticaria or other allergic events is ≤ 20% (the rate observed in the 3 and 10 mg/kg cohorts) when not more than one event is observed.\nIn the subcutaneous trial, a Bayesian design determined the size of Cohort 3. 
A cohort size of 40 subjects on active drug provided 80% confidence that the incidence of hypersensitivity events after subcutaneous administration of 2 mg/kg of QGE031 is 7.5% or less when not more than one subject experienced an event. All placebo subjects from Cohorts 2, 3 and 6 were pooled into one placebo group for data presentation and statistical analysis.\nPharmacokinetics parameters of QGE031 were determined using WinNonlin Pro (version 5.2) and descriptive statistics presented. Concentrations of QGE031 below the LLOQ were treated as zero. For subjects that did not complete the study, the end of study sample was excluded from the analysis.\nDescriptive statistics were generated for free and total IgE with Day 1 (predose) values considered baseline values. Descriptive statistics were also generated for FACS data.\nSkin prick data were presented for the threshold dilution of allergen that elicited a wheal of ≥ 3 mm (see Statistical Methods in the Supplementary Material). Areas under the allergen dose–response curves for wheal responses to allergens were calculated for each subject at each visit using the linear trapezoidal rule. The allergen areas under the curve (AUC) were analyzed using a covariance model with baseline AUC (Day 1) as a covariate and treatment as a fixed factor (SAS PROC MIXED). Differences between each QGE031 treatment group and placebo were calculated along with the 95% confidence intervals (CI) and P-value. Post hoc analysis was conducted for the difference between 2 mg/kg QGE031 and omalizumab.\nIn each trial, the safety population consisted of all subjects who received at least one dose of study drug. The PK and PD populations consisted of all subjects with available PK or PD data, respectively, and no major protocol deviations that could impact on the data.\nThe intravenous trial was a first-in-human study of QGE031. 
The sample size of six subjects on active drug for the QGE031 cohorts was based on published detectable adverse event (AE) rates and changes in laboratory parameters 18. For a more complete assessment of tolerability, two subjects treated with placebo were added to each cohort. For Cohort 6a, a cohort of 18 subjects on placebo provides 90% confidence that the true rate of urticaria or other allergic event is ≤ 20% (rate observed in the 3 and 10 mg/kg cohorts), when not more than one event is observed.\nIn the subcutaneous trial, a Bayesian design determined the size of Cohort 3. A cohort size of 40 subjects on active drug provided 80% confidence that the incidence of hypersensitivity events after subcutaneous administration of 2 mg/kg of QGE031 is 7.5% or less when not more than one subject experienced an event. All placebo subjects from Cohorts 2, 3 and 6 were pooled into one placebo group for data presentation and statistical analysis.\nPharmacokinetics parameters of QGE031 were determined using WinNonlin Pro (version 5.2) and descriptive statistics presented. Concentrations of QGE031 below the LLOQ were treated as zero. For subjects that did not complete the study, the end of study sample was excluded from the analysis.\nDescriptive statistics were generated for free and total IgE with Day 1 (predose) values considered baseline values. Descriptive statistics were also generated for FACS data.\nSkin prick data were presented for the threshold dilution of allergen that elicited a wheal of ≥ 3 mm (see Statistical Methods in the Supplementary Material). Areas under the allergen dose–response curves for wheal responses to allergens were calculated for each subject at each visit using the linear trapezoidal rule. The allergen areas under the curve (AUC) were analyzed using a covariance model with baseline AUC (Day 1) as a covariate and treatment as a fixed factor (SAS PROC MIXED). 
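The two cohort-size confidence statements follow from the binomial distribution (the confidence that the true event rate is at most p, given that no more than one of n subjects had an event), and the wheal AUC is a plain trapezoidal sum. A minimal sketch of both calculations; the dilution steps and wheal diameters in the usage example are invented for illustration and are not trial data:

```python
from math import comb

def confidence_rate_at_most(p: float, n: int, events: int = 1) -> float:
    """Confidence that the true event rate is <= p, given that no more than
    `events` of `n` subjects experienced an event: one minus the binomial
    probability of observing that few events if the true rate were p."""
    prob_at_most = sum(comb(n, k) * p**k * (1 - p) ** (n - k)
                       for k in range(events + 1))
    return 1 - prob_at_most

def trapezoid_auc(x, y):
    """Area under a dose-response curve by the linear trapezoidal rule."""
    return sum((y[i] + y[i + 1]) / 2 * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

# 18 placebo subjects, at most one event: ~90% confidence true rate <= 20%
print(round(confidence_rate_at_most(0.20, 18), 3))
# 40 active subjects, at most one event: ~80% confidence incidence <= 7.5%
print(round(confidence_rate_at_most(0.075, 40), 3))

# Hypothetical wheal diameters (mm) over threefold dilution steps
# (0 = neat allergen, 1 = 1:3 dilution, 2 = 1:9, ...)
dilution_step = [0, 1, 2, 3]
wheal_mm = [8.0, 6.5, 4.0, 0.0]
print(trapezoid_auc(dilution_step, wheal_mm))
```

The first function reproduces both sample-size statements in the text: with 18 subjects and at most one event it returns about 0.90, and with 40 subjects at a 7.5% rate about 0.81.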
Details of experiments to characterize the in vitro pharmacology of QGE031 are given in the Supplementary Material and included experiments to determine the equilibrium binding constant for IgE; inhibition of binding to cell-surface FcεRI and to the immobilized α-subunit of FcεRI; the impact on mast cell degranulation and activation assays; and the binding activity of QGE031 across several mammalian species.

Two clinical trials of QGE031 were conducted. The first-in-human trial administered QGE031 intravenously at a single site in the USA (January to December 2009), while the second trial administered QGE031 subcutaneously at three sites in the USA (May 2010 to September 2011). Both trials were approved by each site's Institutional Review Board, details of which are provided in the Supplementary Material (S1), and all subjects provided written informed consent.
Male or female subjects aged 18–55 years with a history of atopy, defined as having one or more positive skin tests to common airborne allergens, a history of food allergy (subcutaneous trial) or serum IgE > 30 IU/mL (intravenous trial), were enrolled. Subjects participating in the subcutaneous trial were restricted to 45–120 kg body weight. In both studies, omalizumab was given open label in accordance with body weight and baseline IgE levels as defined by the FDA dosing table 5. The main exclusion criteria included poorly controlled asthma, prior use of omalizumab (intravenous trial), or use of QGE031 or omalizumab in the previous 6 months (subcutaneous trial).

A site-specific randomization list was generated to assign the subjects to the lowest available numbers according to the specified assignment ratio.
Subjects, site staff, persons performing the assessments and data analysts remained blinded to treatment from randomization until database lock. Treatments were concealed by identical packaging, labelling and schedule of administration. For the subcutaneous trial, interim analyses were conducted, and therefore access to unblinded data was allowed for some members of the clinical team using a controlled process for protection of randomization data. Project teams were given access to data at the group but not the individual level.
Dose selection for both trials is detailed in the Supplementary Material. In the intravenous trial, subjects with IgE 30–1000 IU/mL were randomized to increasing doses of QGE031 [0.1, 0.3, 1, 3 or 10 mg/kg (Fig.1a)] or placebo in a ratio of 3 : 1 for each cohort. Cohort 5 included subjects with IgE > 1000 IU/mL treated with 3 mg/kg QGE031 or placebo (3 : 1). In Cohort 7, subjects received open-label subcutaneous omalizumab, dosed according to the FDA dosing table 5. Following a protocol amendment (see Supplementary Material), additional subjects in Cohort 6 (10 mg/kg QGE031) were exposed to placebo (Cohort 6a) to investigate the allergenic potential of the excipient, polysorbate 80.
Subject disposition for (a) the intravenous trial and (b) the subcutaneous trial. *Total approximate numbers are provided. Subjects were excluded because they declined to participate or failed to meet eligibility criteria. †The placebo cohort was introduced following a protocol amendment to serve as an expansion of the placebo group at the highest (10 mg/kg) dose level. ‡Omalizumab was dosed as per the FDA dosing table 5. ¶Placebo pooled from Cohorts 2, 3 and 6.
In the subcutaneous trial, subjects were randomized to one of six cohorts (Fig.1b).
Three cohorts (Cohorts 1, 2 and 6) of subjects with IgE 30–700 IU/mL were treated sequentially with multiple escalating doses of subcutaneous QGE031 (0.2, 0.6 or 4 mg/kg, respectively) or placebo (2 : 1); Cohort 6 (i.e. 4 mg/kg) was introduced following a protocol amendment (see Supplementary Material). Cohort 3 received 2 mg/kg QGE031 or placebo (4 : 1). Cohort 4 included subjects with IgE > 700 IU/mL treated with 2 mg/kg QGE031 or placebo (1 : 1). All QGE031 cohorts received study drug at 2-week intervals for a total of two (Cohort 1) or four (Cohorts 2, 3, 4 and 6) doses. Subjects in Cohort 5 received subcutaneous open-label omalizumab, dosed as per the FDA dosing table 5.
Subjects were admitted approximately 24 h prior to dosing and domiciled for 96 h (intravenous trial) or 48 h (subcutaneous trial) after drug administration. The investigator and Novartis performed a blinded review of at least 10 days (intravenous) or 15 days (subcutaneous) of follow-up safety data for all subjects in a given cohort before dosing the next. Subjects were followed for up to approximately 16 weeks.

Total QGE031 serum concentrations were determined by ELISA with a lower limit of quantification (LLOQ) of 200 ng/mL in serum (see Data S1).
Total IgE was determined by ELISA with an LLOQ of 100 ng/mL in human serum. Free IgE was determined in human serum by ELISA with LLOQs of 7.8 ng/mL and 1.95 ng/mL in the intravenous and subcutaneous trials, respectively; for both trials, the upper limit of quantification of free IgE was 250 ng/mL (see Data S1).
Serum samples from both studies were subjected to fluorescence-activated cell sorting (FACS) using a FACS Canto II cytometer.
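Assay readings outside the quantification range need explicit handling before summary statistics are computed; in these trials, QGE031 concentrations below the LLOQ were set to zero (see Statistical methods). A minimal sketch of such censoring, assuming the assay limits quoted above; the helper name and the ULOQ capping rule are illustrative assumptions, not the trials' actual data pipeline:

```python
def censor_reading(value_ng_ml, lloq, uloq=None, below_lloq=0.0):
    """Replace a reading below the LLOQ with a fixed value (zero here, as
    specified for QGE031 PK) and flag/cap readings above the ULOQ."""
    if value_ng_ml < lloq:
        return below_lloq, "below LLOQ"
    if uloq is not None and value_ng_ml > uloq:
        return uloq, "above ULOQ"
    return value_ng_ml, "quantifiable"

# QGE031 ELISA: LLOQ 200 ng/mL
print(censor_reading(150.0, lloq=200.0))             # (0.0, 'below LLOQ')
# Free IgE, subcutaneous trial: LLOQ 1.95 ng/mL, ULOQ 250 ng/mL
print(censor_reading(300.0, lloq=1.95, uloq=250.0))  # (250.0, 'above ULOQ')
```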
In each sample, 2000 basophils were acquired and molecules of equivalent soluble fluorochrome values were calculated (see Data S1).

Preclinical results QGE031 demonstrated higher-affinity binding for human IgE compared with omalizumab, as assessed by surface plasmon resonance, with equilibrium KD values of 139 pM vs. 6.98 nM, respectively.
In a cell-based functional assay using human cord blood-derived mast cells, an approximate 1 : 1 molar ratio of QGE031 to IgE was sufficient to achieve 90% inhibition of IgE-dependent mast cell degranulation.
By comparison, a ninefold to 27-fold excess of omalizumab was needed to achieve the same response. QGE031 is highly selective for human and non-human primate IgE and did not bind to IgE purified from rat, cat or dog. The affinity of QGE031 for cynomolgus non-human primate IgE was approximately 12-fold lower than for human IgE, with a KD of 1.53 nM.

Clinical results Seventy-three subjects (55 in the core study and 18 in Cohort 6a) were enrolled in the intravenous study, with 60 subjects (82%) completing the study (Fig.1a). Owing to recruitment difficulties, only one subject was randomized in Cohort 5 (IgE > 1000 IU/mL). The main reasons for discontinuation were withdrawal of consent (n = 8) or loss to follow-up (n = 5).
The subcutaneous study enrolled 110 subjects, with 96 (87%) completing the study (Fig.1b). Fourteen (13%) subjects did not complete the study owing to AEs (n = 2), a positive drug screen (n = 1), withdrawal of consent (n = 8), loss to follow-up (n = 2) and protocol deviation (n = 1).
Of the two subjects who withdrew due to AEs, one had an asthma exacerbation on Day 5 and one developed a flu-like illness on Day 36; neither AE was considered by the investigator to be related to study drug.
Subject demographics and other baseline characteristics were similar across the cohorts in both trials (Table1 and Table S1).
IgE levels for treatment groups for (a) the intravenous trial and (b) the subcutaneous trial. NC, not calculable. From screening visit; data highly skewed, so medians shown. Pooled over Cohorts 2, 3 and 6.

PK results Not all subjects had a fully evaluable PK profile, so the QGE031 PK were characterized in 36 subjects in the intravenous trial and 64 subjects in the subcutaneous trial.
The time course of QGE031 in serum when administered intravenously over 2 h was characterized by a biexponential decline, with a rapid initial and a slower terminal elimination phase (Fig.2a; Fig. S1). A slower terminal disposition phase became visible at doses of 1 mg/kg and higher. At doses of 3 and 10 mg/kg, the PK profile demonstrated a half-life of 17–23 days (Table2a). Higher levels of IgE accelerated the elimination of QGE031, as shown by PK parameters including a shorter half-life (Table2a) and the time courses of response (Fig.2a).
Summary of PK parameters for (a) the intravenous trial and (b) the subcutaneous trial. AUC, area under the curve; AUCinf, area under the curve from time zero to infinity; Cmax, peak serum concentration; NC, not calculable; SD, standard deviation; Tmax, time to reach peak serum concentration; T½, half-life in serum.
Time course for serum concentrations of QGE031 following (a) single 2-h intravenous infusion and (b) 2-weekly subcutaneous administration on two (0.2 mg/kg) to four occasions (all other cohorts). Data presented are geometric means.
Vertical arrows in panel (b) denote the times of administration of QGE031 (except for the 0.2 mg/kg cohort, where QGE031 was administered on just the first two occasions).
The PK of QGE031 in serum following subcutaneous administration is also presented (Fig.2b; Fig. S1). The maximum concentration (Cmax) of QGE031 in serum following subcutaneous dosing occurred 2–4 days after the last administered dose (Table2b). Systemic drug exposures reached stable and clinically relevant serum concentrations during the 2-week interval after dosing, as demonstrated by dose-proportional increases in Cmax and AUC0–14 days at dose levels between 0.2 and 2 mg/kg QGE031 in subjects with IgE below 700 IU/mL (Table2b). There was no dose-proportional increase in systemic exposure (Cmax, AUC0–14 days) between the 2 and 4 mg/kg doses (Table2b). Whether this observation is due to random interindividual variation, given the small number of subjects dosed at 4 mg/kg, or to some form of saturation of absorption is not currently known. At the lowest QGE031 dose (0.2 mg/kg) and at the 2 mg/kg dose with high IgE levels, the mean terminal elimination half-life was shorter (13–15 days) than in the other groups (23–26 days; Table2b).

PD: total and free IgE In the intravenous study, QGE031 induced a dose-dependent accumulation of total IgE and suppressed free IgE compared with placebo (Fig.3a; Fig. S1). Free IgE was suppressed more rapidly and to a greater extent than with omalizumab; for all doses of QGE031, free IgE was suppressed below the LLOQ (7.8 ng/mL; Fig.3a). Omalizumab-induced suppression of free IgE was 'shallower', with a gradual return to baseline. By contrast, QGE031 suppressed free IgE more rapidly, to a greater extent, for longer and with a faster return to baseline (Fig.3a).
Individual subject serum concentrations of total and free IgE in response to increasing doses of QGE031, placebo or omalizumab following (a) single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. The upper and lower limits of quantification for free IgE were 250 ng/mL and 7.8 ng/mL, respectively, in the intravenous study, and 250 ng/mL and 1.95 ng/mL, respectively, in the subcutaneous study. Subjects with high IgE (> 1000 IU/mL in the intravenous study; > 700 IU/mL in the subcutaneous study) are plotted in red. The placebo group contains all placebo-treated subjects in the subcutaneous study regardless of cohort. iv, intravenous; sc, subcutaneous.
Subcutaneous delivery of QGE031 resulted in rapid, incremental and sustained increases in total IgE serum concentrations at all doses compared with placebo (Fig.3b; Fig. S1).
All doses of QGE031 reduced free IgE below the LLOQ (1.95 ng/mL) to a greater extent than omalizumab (Fig.3b), even in subjects with high IgE (> 700 IU/mL).
For both intravenous and subcutaneous administration, the duration of free IgE suppression was longer at higher doses of QGE031 and tended to be shorter in subjects with higher baseline IgE (Fig.3a,b).

FACS analysis In both trials, QGE031 produced a dose- and time-dependent reduction in basophil FcεRI and surface IgE expression that was greater in extent (surface IgE only) and longer in duration than that produced by omalizumab (Fig.4; Fig. S1). In the subcutaneous study, basophil FcεRI and IgE expression were suppressed for 2 to > 16 weeks after the last dose, with individuals exhibiting higher IgE levels at screening (≥ 700 IU/mL) having a shorter duration of suppression.
Individual subject time courses for the expression of basophil surface FcεRI and basophil surface IgE in response to increasing doses of QGE031, placebo or omalizumab following (a) single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. Subjects with high IgE (> 1000 IU/mL in the intravenous study; > 700 IU/mL in the subcutaneous study) are plotted in red. The placebo group contains all placebo-treated subjects in the subcutaneous study regardless of cohort. iv, intravenous; MESF, molecules of equivalent soluble fluorochrome; s, soluble; sc, subcutaneous.

Quantification of the in vivo binding to IgE The PK–PD model fitted the clinical QGE031 PK/PD data well, including the accumulation of drug and associated responses towards steady state, followed by washout and return towards baseline after treatment cessation. The half-maximum concentration for in vivo binding of QGE031 to IgE (KD = 0.32 nM; 95% CI 0.19–0.45 nM) was ninefold (95% CI 6.1–14-fold) lower than that for omalizumab.

Skin prick tests (subcutaneous trial) Both the AUC and the threshold dose of allergen eliciting a wheal were suppressed in a dose- and time-dependent manner by treatment with QGE031 (Fig.5; Table S2).
In Cohort 3 (QGE031 2 mg/kg; n = 31), the allergen AUC was maximally suppressed by > 95% for QGE031 compared with 41% for omalizumab (P < 0.001) with an 81-fold increase in threshold dilution of allergen at Day 85, 6 weeks after the last dose of QGE031.\nTime courses of changes in allergen-induced skin prick wheal responses: (a) area under the dose–response curve values and (b) threshold 1 : 3 dilution of allergen eliciting a response after subcutaneous administration of QGE031, placebo or omalizumab. Data are presented as mean + standard deviation. Serial threefold dilutions of allergen were applied in skin prick testing. A value of 1 = threefold dilution, 2 = ninefold dilution, etc. A value of 0 was assigned if the threshold concentration eliciting a response was the neat allergen. A value of −1 was assigned if no response was elicited at any concentration. AUC, area under the curve.\nIn subjects with baseline IgE levels > 700 IU/mL, 2 mg/kg QGE031 significantly reduced the wheal allergen AUC compared with placebo as effectively as in subjects with IgE < 700 IU/mL, but with an earlier peak effect at Day 57 and recovery to baseline by the end of the study (Fig.5).\nThe percentage of subjects with positive responses to increasing concentrations of allergen was numerically reduced in a dose-dependent manner by QGE031 compared with pooled placebo cohorts (0.6; 2 and 4 mg/kg) and omalizumab for wheal responses up to the end of study (Day 155) (Fig. S2).\nBoth the AUC and the threshold dose of allergen eliciting a wheal were suppressed in a dose- and time-dependent manner by treatment with QGE031 (Fig.5; Table S2). 
Overview of PD parameters
To illustrate the sequential suppression and recovery of free IgE, basophil surface expression of FcεRI and IgE, and skin prick responses, as well as the influence of baseline IgE, the kinetics of individual subject PD parameters are presented (Fig. S1).
Immunogenicity
In the intravenous trial, QGE031 concentrations > 5 μg/mL may have interfered with the detection of anti-QGE031 antibodies. QGE031 concentrations exceeded this tolerable drug level in only four of 95 analyzed postdose samples; weak immunogenicity signals were detected in 10 subjects, four of whom were treated with placebo (data not shown). In the six QGE031-exposed subjects with immunogenicity responses, PK/PD responses did not differ from those of subjects without an immunogenicity signal.
For the subcutaneous trial, QGE031 concentrations > 0.76 μg/mL may have interfered with detection of anti-QGE031 antibodies.
The drug tolerance levels were exceeded in 44% of immunogenicity samples tested at the end of the study; however, the PK and PD results did not indicate that a strong anti-QGE031 response was missed in any of the subjects exposed to QGE031.
Safety
AEs were experienced by 59% of subjects in the intravenous study (Table 3a) and 66% of subjects in the subcutaneous study (Table 3b). The most common AEs across both studies were headache and upper respiratory tract infection, with the majority of events mild to moderate in severity and not suspected to be related to study medication, with the exception of injection site events.
Incidence of AEs by preferred term (safety analysis set) for (a) the intravenous trial and (b) the subcutaneous trial. AE, adverse event.
In the intravenous trial, four subjects experienced urticaria. Two of 10 subjects who received 3 mg/kg QGE031 experienced urticaria concurrent with the end of the 2-h infusion. One of 10 subjects treated with 10 mg/kg QGE031 experienced urticaria and angioedema with chest pressure, diarrhoea and abdominal pain that began close to the end of the infusion period. One subject experienced urticaria and loose stools 22 h after receiving placebo matched to 10 mg/kg QGE031. In all subjects, symptoms resolved rapidly after treatment with diphenhydramine, with the exception of angioedema that persisted for 90 h. There were no observed episodes of urticaria in 18 subjects who subsequently received a single infusion of placebo. The true underlying rate is unknown; the rate observed in the 3 and 10 mg/kg cohorts was ≤ 20%, and there is a 90% probability that the true rate on placebo is less than 20%.
In the subcutaneous study, there were four episodes of urticaria in three subjects, occurring from 13 h to 1 week after administration of study drug. Of these, three episodes occurred in two subjects treated with placebo, while one episode occurred in a subject dosed with 0.6 mg/kg QGE031 and did not recur after further doses of QGE031. All episodes were mild or moderate and transient, and resolved without treatment. No episodes of urticaria occurred in 40 subjects who received a total of 149 doses of 2 mg/kg QGE031 or in eight subjects treated with 4 mg/kg QGE031. On the basis of prespecified analyses, the absence of urticarial events in 40 subjects treated with 2 mg/kg QGE031 provides 96% probability that the true incidence of urticaria is 7.5% or less.
There were no serious AEs in either trial, but two subjects were withdrawn from Cohort 3 (QGE031 2 mg/kg) in the subcutaneous trial due to AEs: one suffered an asthma exacerbation and one developed a severe flu-like illness. Neither event was considered related to study drug.
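The probability statements above follow from a simple zero-event binomial calculation: if the true per-subject rate were p, the chance of observing no events in n subjects is (1 − p)^n, and one minus that is the confidence that the rate lies below p. The trial's prespecified analysis may have been formulated differently (e.g. as a Bayesian posterior), but this frequentist sketch reproduces the ≈96% figure for zero events in 40 subjects:

```python
def confidence_rate_below(p: float, n_subjects: int) -> float:
    """Confidence that the true per-subject event rate is below `p`,
    given zero observed events in n_subjects (zero-event binomial bound)."""
    return 1.0 - (1.0 - p) ** n_subjects

# Zero urticaria events in 40 subjects dosed with 2 mg/kg QGE031:
confidence = confidence_rate_below(0.075, 40)  # ~0.956, i.e. roughly 96%
```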
Discussion
This report presents the first data demonstrating the efficient suppression of circulating free IgE, basophil FcεRI and surface IgE, and the subsequent inhibition of the allergen-induced skin prick response by QGE031, a novel high-affinity anti-IgE antibody. The data demonstrate that the 50-fold higher in vitro affinity of QGE031 compared with omalizumab translated into sixfold to ninefold greater potency in vivo. Compared with omalizumab, treatment with QGE031 provided greater and longer suppression of free IgE and of IgE on the surface of circulating basophils, and markedly superior suppression of skin prick test responses to allergen.
These effects were apparent even in subjects with high baseline IgE levels who would be ineligible to receive omalizumab treatment 4,5. The data suggest that QGE031 may be more potent than omalizumab in the treatment of allergic disease.
As predicted from modelling and simulation 3,19,20 and from published experience with another high-affinity anti-IgE antibody, HAE1 21, QGE031 was more potent than omalizumab in capturing and thereby suppressing free IgE, which declined to below the LLOQ in almost all subjects dosed with QGE031. The duration of suppression of free IgE was dependent on the dose of QGE031 and on baseline IgE. At higher subcutaneous doses, the suppression of free IgE was maintained until Day 155 (end of study), more than 100 days after the last dose and longer than that observed with omalizumab. Once free IgE levels started to recover, the return to baseline was more rapid after treatment with QGE031 than with omalizumab, as would be expected from drug–target binding model simulations for lower KD values and as previously seen with HAE1 21. Based on PK/PD model fitting, QGE031 demonstrated a ninefold increase in potency for suppression of free IgE compared with omalizumab. This clinical potency is supported by the preclinical studies reported here, which demonstrated an approximately 50-fold higher affinity of QGE031 for human IgE compared with omalizumab (KD 139 pM vs. 6.98 nM, respectively). The KD for omalizumab was consistent with previous in vitro experiments (7.7 nM) 3 and similar to values obtained from clinical experience (KD 1–3 nM) 3,17.
Suppression of free IgE was followed by dose- and time-dependent suppression of FcεRI and surface IgE expression on circulating basophils, as observed in previous studies of omalizumab (for FcεRI expression) 9,11,12.
Despite the increased potency of QGE031 in suppressing free IgE, the reduction in expression of FcεRI was similar for subjects who received QGE031 and omalizumab, suggesting that a low level of basal FcεRI expression is maintained on the surface of basophils independent of the presence of IgE. The combined suppression of free IgE and expression of FcεRI led to a > 100-fold reduction in IgE on the surface of basophils (Fig. 4) that was superior in extent and duration at higher doses of QGE031 compared with omalizumab. The recovery of FcεRI and surface IgE expression after dosing with QGE031 was slower than the return to baseline for free IgE, perhaps indicating an indirect mechanism of action and/or time for blood–tissue equilibration.
Omalizumab suppressed the wheal response to allergen by 41%, as reflected in the allergen AUC, and increased the threshold concentration of allergen that elicited a positive response by approximately threefold, which is consistent with previously published data for subjects dosed with subcutaneous omalizumab 22. In contrast, a comparable dose of subcutaneous QGE031 (2 mg/kg) almost completely ablated the response to allergen and increased the threshold dose 81-fold. The maximal inhibition of skin prick test responses was seen approximately 6 weeks after the last dose of QGE031, likely reflecting time for blood–tissue equilibration and/or turnover of skin mast cells.
Nevertheless, the profound suppression of skin prick test responses to allergen upon QGE031 treatment suggests that if there is any increase in intrinsic sensitivity to IgE-mediated stimulation in cutaneous mast cells upon treatment with QGE031, it is offset by the profound suppression of IgE bound to FcεRI.\nQGE031, as with other IgG antibodies including omalizumab, is eliminated from the systemic circulation not only by clearance processes common to all IgGs, that is intracellular proteolytic degradation 24, but also by binding to its target, IgE. As QGE031–IgE complexes are cleared faster than unbound QGE031, as with omalizumab–IgE complexes 3,17, target-mediated disposition is most pronounced as the molar ratio of the serum concentration of QGE031 to the serum concentration of IgE falls. This was evidenced by greater than dose-proportional exposure in the intravenous study and by a shorter terminal elimination half-life in subjects treated with the lowest subcutaneous dose of QGE031 or in subjects with high IgE. Nevertheless, even in subjects with higher levels of baseline IgE, currently outside the US dosing table, treatment with subcutaneous QGE031 led to suppression of free IgE to below the LLOQ with accompanying suppression of basophil surface IgE and skin test responses that were superior to those seen with omalizumab.\nThe most significant AE observed was the occurrence of mild-to-moderate urticaria that occurred in four subjects treated with QGE031 and five subjects treated with placebo across the two studies. All events resolved spontaneously or with antihistamines, and no epinephrine was required. Nevertheless, urticaria was accompanied by systemic symptoms in one subject given the highest dose of QGE031 intravenously. 
As the excipient for intravenous QGE031 contained polysorbate 80, which can elicit mast cell activation 25,26, the protocol was amended to include a cohort dosed with only the placebo matched to the 10 mg/kg dose of QGE031, as this cohort had the greatest volume of excipient. None of the subjects in the protocol-amended cohort experienced urticaria or a hypersensitivity event, which suggests that polysorbate 80 was not responsible for these events.
The size of the 2 mg/kg cohort in the subcutaneous study was designed to provide information on the incidence of hypersensitivity events following administration of a QGE031 dose that was predicted to provide sustained suppression of free IgE for subjects with a wide range of baseline IgE concentrations and body weights. The lack of urticaria and hypersensitivity events in the 40 subjects treated with 2 mg/kg QGE031 provides 96% probability that the true incidence rate is 7.5% or less. A more accurate estimate of the true incidence of hypersensitivity events following multiple subcutaneous administration of QGE031 will be obtained as more clinical trials are conducted.
In conclusion, QGE031 was superior to omalizumab in suppressing free IgE and basophil surface expression of FcεRI and IgE. These effects translated into almost complete suppression of the skin prick response to allergen, superior in extent and duration to that achieved with omalizumab. Correlations between free IgE and asthma symptom control in controlled clinical studies suggest that a more profound suppression of free IgE may translate to correspondingly better asthma clinical outcomes 14. Thus, QGE031, with its superior suppression of serum IgE and allergen skin test responses, may provide benefit to atopic asthma patients not effectively treated with omalizumab, but the risk/benefit remains to be established.
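The target-mediated disposition discussed above (drug–IgE complexes are cleared faster than free drug, hence a shorter apparent half-life at low doses or with high baseline IgE) can be illustrated with a minimal quasi-equilibrium simulation. All parameter values and the function itself are illustrative assumptions for this sketch, not fitted estimates from these trials:

```python
def remaining_drug(dose_nM: float, ige_production_nM_per_day: float,
                   days: float = 60.0, dt: float = 0.01) -> float:
    """Total drug remaining after `days` under a minimal quasi-equilibrium
    target-mediated disposition model: free drug is cleared slowly, the
    drug-IgE complex much faster, so a higher IgE burden accelerates the
    apparent elimination of the drug. Parameters are illustrative only."""
    kel_drug, kel_complex, kel_ige, kd = 0.03, 0.2, 0.7, 0.32  # 1/day x3, nM
    drug_tot = dose_nM
    ige_tot = ige_production_nM_per_day / kel_ige  # baseline IgE before dosing
    for _ in range(int(days / dt)):
        # Partition totals into free species and complex at 1:1 binding equilibrium.
        b = drug_tot + ige_tot + kd
        cplx = (b - (b * b - 4.0 * drug_tot * ige_tot) ** 0.5) / 2.0
        free_drug = drug_tot - cplx
        free_ige = ige_tot - cplx
        # Euler step: complex is eliminated faster than free drug.
        drug_tot += dt * (-kel_drug * free_drug - kel_complex * cplx)
        ige_tot += dt * (ige_production_nM_per_day
                         - kel_ige * free_ige - kel_complex * cplx)
    return drug_tot

# A higher IgE burden leaves less drug in the system at day 60.
low_ige_burden = remaining_drug(100.0, ige_production_nM_per_day=0.2)
high_ige_burden = remaining_drug(100.0, ige_production_nM_per_day=2.0)
```

The same mechanism produces the greater-than-dose-proportional exposure noted in the intravenous study: once drug is in large molar excess over IgE, the fast complex-clearance pathway is saturated and only the slow IgG-like clearance remains.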
Keywords: allergic, antibody, anti-IgE, atopic, IgE, ligelizumab, monoclonal, QGE031
Introduction
IgE acts as an environmental sensor that detects allergens and elicits an immune response via the high-affinity IgE receptor, FcεRI, resulting in the sensitization of mast cells to specific antigens 1,2. On exposure to specific antigens, IgE bound to FcεRI induces secretory granule exocytosis from mast cells and basophils, as well as the generation of newly synthesized lipid mediators and cytokines, resulting in both early- and late-phase allergic responses 1,2. Omalizumab (Xolair®) is a recombinant monoclonal antibody with a dissociation constant (KD) of 6–8 nM for IgE 3. It is approved for the treatment of patients with severe 4 or moderate-to-severe 5 persistent allergic asthma. Omalizumab binds the Cε3 domain of free IgE, preventing it from binding to FcεRI 6,7. Omalizumab suppresses serum free IgE concentrations 8–10, which in turn, through direct feedback, down-regulates FcεRI surface expression on effector cells 9,11,12, further dampening the effector cell response to allergen. The omalizumab dosing table aims to suppress free IgE to < 25 ng/mL (i.e. 10.4 IU/mL) 13, with doses based on body weight and IgE levels 4,5. Correlations between free IgE and asthma symptom control in controlled clinical studies suggest that a more profound suppression of free IgE may translate to better asthma clinical outcomes 14. QGE031 (ligelizumab) is a humanized IgG1 monoclonal antibody that binds with higher affinity to the Cε3 domain of IgE. QGE031 is designed to achieve superior IgE suppression, with an equilibrium dissociation constant (KD) of 139 pM, which may overcome some of the limitations associated with omalizumab dosing and lead to better clinical outcomes. This report describes data from preclinical experiments and two phase I randomized, double-blind, placebo-controlled clinical trials investigating the pharmacokinetics (PK), pharmacodynamics (PD) and safety of QGE031 in atopic, but otherwise healthy, subjects.
The first trial was a single escalating-dose trial with intravenously administered QGE031, while the second trial was a multiple ascending-dose trial with subcutaneously administered QGE031; both trials included an omalizumab arm, which was dosed according to the US Food and Drug Administration (FDA) Prescribing Information dosing table 5. Preliminary data have been published in abstract form 15. Methods: Preclinical experiments Details of experiments to characterize the in vitro pharmacology of QGE031 are given in the Supplementary Material and included experiments to determine the equilibrium constant of IgE; inhibition of binding to cell-surface FcεRI and the immobilized α-subunit of FCεRI; impact on mast cell degranulation and activation assays; and the binding activity of QGE031 across several mammalian species. Details of experiments to characterize the in vitro pharmacology of QGE031 are given in the Supplementary Material and included experiments to determine the equilibrium constant of IgE; inhibition of binding to cell-surface FcεRI and the immobilized α-subunit of FCεRI; impact on mast cell degranulation and activation assays; and the binding activity of QGE031 across several mammalian species. Clinical trials Two clinical trials of QGE031 were conducted. The first-in-human trial administered QGE031 intravenously at a single site in the USA (January to December 2009), while the second trial administered QGE031 subcutaneously at three sites in the USA (May 2010 to September 2011). Both trials were approved by each site's Institutional Review Board, details of which are provided in the Supplementary Material (S1), and all subjects provided written informed consent. Male or female subjects aged 18–55 years with a history of atopy, defined as having one or more positive skin tests to common airborne allergens, a history of food allergy (subcutaneous trial) or serum IgE > 30 IU/mL (intravenous trial) were enrolled. 
Subjects participating in the subcutaneous trial were restricted to 45–120 kg body weight. In both studies, omalizumab was given open label in accordance with body weight and baseline IgE levels as defined by the FDA dosing table 5. The main exclusion criteria included poorly controlled asthma, prior use of omalizumab (intravenous trial), or use of QGE031 or omalizumab in the previous 6 months (subcutaneous trial). Study design A site-specific randomization list was generated to assign the subjects to the lowest available numbers according to the specified assignment ratio. Subjects, site staff, persons performing the assessments and data analysts remained blinded to treatment from randomization until database lock. Treatments were concealed by identical packaging, labelling and schedule of administration.
For the subcutaneous trial, interim analyses were conducted, and therefore, access to unblinded data was allowed for some members of the clinical team using a controlled process for protection of randomization data. Project teams were given access to data at the group but not the individual level. Dose selection details for both trials are detailed in the Supplementary Material. In the intravenous trial, subjects with IgE 30–1000 IU/mL were randomized to increasing doses of QGE031 [0.1, 0.3, 1, 3 or 10 mg/kg (Fig.1a)] or placebo in a ratio of 3 : 1 for each cohort. Cohort 5 included subjects with IgE > 1000 IU/mL treated with 3 mg/kg QGE031 or placebo (3 : 1). In Cohort 7, subjects received open-label subcutaneous omalizumab, dosed according to the FDA dosing table 5. Following a protocol amendment (see Supplementary Material), additional subjects in Cohort 6 (10 mg/kg QGE031) were exposed to placebo (Cohort 6a) to investigate the allergenic potential of the excipient, polysorbate 80. Subject disposition for (a) the intravenous trial and (b) the subcutaneous trial. *Total approximate numbers are provided. Subjects were excluded because they declined to participate or failed to meet eligibility criteria. †The placebo cohort was introduced following a protocol amendment to serve as an expansion of the placebo group at the highest (10 mg/kg) dose level. ‡Omalizumab was dosed as per the FDA dosing table 5. ¶Placebo pooled from Cohorts 2, 3 and 6. In the subcutaneous trial, subjects were randomized to one of six cohorts (Fig.1b). Three cohorts (Cohorts 1, 2 and 6) of subjects with IgE 30–700 IU/mL were treated sequentially with multiple escalating doses of subcutaneous QGE031 (0.2, 0.6 or 4 mg/kg, respectively) or placebo (2 : 1); Cohort 6 (i.e. 4 mg/kg) was introduced following a protocol amendment (see Supplementary Material). Cohort 3 received 2 mg/kg QGE031 or placebo (4 : 1). 
Cohort 4 included subjects with IgE > 700 IU/mL treated with 2 mg/kg QGE031 or placebo (1 : 1). All QGE031 cohorts received study drug at 2-week intervals for a total of two (Cohort 1) or four (Cohorts 2, 3, 4 and 6) doses. Subjects in Cohort 5 received subcutaneous open-label omalizumab, dosed as per the FDA dosing table 5. Subjects were admitted approximately 24 h prior to dosing and domiciled for 96 h (intravenous trial) or 48 h (subcutaneous trial) after drug administration. The investigator and Novartis performed a blinded review of at least 10 days (intravenous) or 15 days (subcutaneous) follow-up safety data for all subjects in a given cohort before dosing the next. Subjects were followed for up to approximately 16 weeks.
PK assessments Total QGE031 serum concentrations were determined by ELISA with a lower limit of quantification (LLOQ) of 200 ng/mL in serum (see Data S1). PD assessments: serum total and free IgE Total IgE was determined by ELISA with LLOQ of 100 ng/mL in human serum. Free IgE was determined in human serum by ELISA with LLOQ of 7.8 ng/mL and 1.95 ng/mL in the intravenous and subcutaneous trials, respectively; for both trials, the upper limit of quantification of free IgE was 250 ng/mL (see Data S1). PD assessments: FACS analysis Serum samples from both studies were subjected to fluorescence-activated cell sorting (FACS) using a FACS Canto II cytometer. In each sample, 2000 basophils were acquired and molecules of equivalent soluble fluorochrome values were calculated (see Data S1). PK and PD model of binding to and capture of IgE by QGE031 and omalizumab The PK profile of QGE031 and several PD parameters including total IgE, basophil FcεRI and basophil surface IgE were analyzed using an adaptation of the previously published omalizumab PK–IgE binding model 14,16,17 (see Data S1).
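The cited PK–IgE binding model is not reproduced here, but its core equilibrium step — the free IgE remaining after 1:1 capture by an anti-IgE antibody — follows from mass conservation and the definition of KD. A minimal sketch under those simplifying assumptions (illustrative concentrations, 1:1 stoichiometry; not the published model):

```python
import math

def free_ligand(total_drug_nM: float, total_ige_nM: float, kd_nM: float) -> float:
    """Free IgE (nM) at 1:1 binding equilibrium.

    From mass conservation and KD = D_free * L_free / DL, free ligand L
    solves L**2 + (D_t - L_t + KD) * L - KD * L_t = 0; we take the
    positive root of this quadratic.
    """
    b = total_drug_nM - total_ige_nM + kd_nM
    return 0.5 * (-b + math.sqrt(b * b + 4.0 * kd_nM * total_ige_nM))

# Illustrative numbers only: 100 nM drug vs. 1 nM total IgE, comparing a
# QGE031-like KD (139 pM) with an omalizumab-like KD (6.98 nM).
print(free_ligand(100.0, 1.0, 0.139))
print(free_ligand(100.0, 1.0, 6.98))
```

With no drug present the function returns all IgE as free, and at equal drug excess the lower-KD antibody leaves substantially less free IgE, which is the qualitative point of the higher-affinity design.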
PD assessments: extinction skin prick testing (subcutaneous trial) At baseline and Days 29, 57, 85 and 155, extinction skin prick testing was performed in duplicate on the skin of the subjects' back with serial threefold dilutions of a selected allergen that provided a > 5 mm mean wheal diameter at screening. The mean of the longest diameter and corresponding mid-point orthogonal diameter for wheal and flares was recorded (see Data S1). Sites used the Greer Prick System (Greer, Lenoir, NC) or Duotip applicators (Lincoln Diagnostics, Decatur, IL) and allergen extracts from Greer or Hollister-Stier (Spokane, WA). Immunogenicity In the intravenous trial, serum samples were collected at Days 29 and 113 (end of study) and were tested for the presence of anti-QGE031 antibodies (ADAs) using a Biacore-based assay with QGE031 bound to the surface of the Biacore chip using protein G. A homogenous bridging Meso Scale Discovery-based assay with improved sensitivity was used for immunogenicity testing in the subcutaneous trial; samples were collected predose and at Days 15, 29, 43, 99 and 155 (end of study).
For both studies, to distinguish between IgE-derived binding and ADA-derived binding, the soluble form of human recombinant FcεRI was added to prevent endogenous IgE from binding to QGE031 and interfering with the detection of ADAs. An inhibition step with QGE031 confirmed the presence of ADAs. Objectives and outcome measures The primary objectives for both trials were to establish the safety and tolerability of single intravenous doses or multiple subcutaneous doses of QGE031 in atopic subjects, with PK as a key secondary objective. PD effects of QGE031, including levels of free and total IgE in the serum, and FcεRI and surface IgE expression on circulating basophils, were evaluated. Suppression of skin prick responses to allergen by QGE031 was an exploratory objective (subcutaneous trial). Statistical methods In each trial, the safety population consisted of all subjects who received at least one dose of study drug. The PK and PD populations consisted of all subjects with available PK or PD data, respectively, and no major protocol deviations that could impact on the data. The intravenous trial was a first-in-human study of QGE031. The sample size of six subjects on active drug for the QGE031 cohorts was based on published detectable adverse event (AE) rates and changes in laboratory parameters 18. For a more complete assessment of tolerability, two subjects treated with placebo were added to each cohort. For Cohort 6a, a cohort of 18 subjects on placebo provides 90% confidence that the true rate of urticaria or other allergic events is ≤ 20% (the rate observed in the 3 and 10 mg/kg cohorts) when not more than one event is observed. In the subcutaneous trial, a Bayesian design determined the size of Cohort 3. A cohort size of 40 subjects on active drug provided 80% confidence that the incidence of hypersensitivity events after subcutaneous administration of 2 mg/kg QGE031 is 7.5% or less when not more than one subject experienced an event. All placebo subjects from Cohorts 2, 3 and 6 were pooled into one placebo group for data presentation and statistical analysis. Pharmacokinetic parameters of QGE031 were determined using WinNonlin Pro (version 5.2) and descriptive statistics presented. Concentrations of QGE031 below the LLOQ were treated as zero. For subjects who did not complete the study, the end-of-study sample was excluded from the analysis. Descriptive statistics were generated for free and total IgE, with Day 1 (predose) values considered baseline values. Descriptive statistics were also generated for FACS data.
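The cohort-size confidence statements above can be sanity-checked with a simple binomial tail calculation (shown here as a frequentist approximation to the stated design properties, for illustration only):

```python
from math import comb

def confidence_rate_below(p: float, n: int, max_events: int = 1) -> float:
    """If the true event rate were p, the chance of seeing <= max_events
    events in n subjects is the binomial tail; 1 minus that tail is the
    'confidence' that the true rate is below p when no more than
    max_events are actually observed."""
    tail = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(max_events + 1))
    return 1 - tail

# Cohort 6a: 18 placebo subjects, no more than 1 allergic event observed
print(round(confidence_rate_below(0.20, 18), 3))   # ~0.90
# Cohort 3: 40 active subjects, no more than 1 hypersensitivity event
print(round(confidence_rate_below(0.075, 40), 3))  # ~0.81, i.e. >= 80%
```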
Skin prick data were presented for the threshold dilution of allergen that elicited a wheal of ≥ 3 mm (see Statistical Methods in the Supplementary Material). Areas under the allergen dose–response curves for wheal responses to allergens were calculated for each subject at each visit using the linear trapezoidal rule. The allergen areas under the curve (AUC) were analyzed using a covariance model with baseline AUC (Day 1) as a covariate and treatment as a fixed factor (SAS PROC MIXED). Differences between each QGE031 treatment group and placebo were calculated along with the 95% confidence intervals (CI) and P-value. Post hoc analysis was conducted for the difference between 2 mg/kg QGE031 and omalizumab.
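As a concrete illustration of the skin prick AUC endpoint described above, the linear trapezoidal rule over an allergen dilution series can be computed as follows (the wheal values and dose scale are hypothetical, not trial data):

```python
def trapezoid_auc(x, y):
    """Linear trapezoidal rule: sum of trapezoid areas between
    successive (x, y) points, as used for the dose-response AUC."""
    return sum((y[i] + y[i + 1]) / 2.0 * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

# Hypothetical mean wheal diameters (mm) across threefold dilution steps
# (log3 dose scale; higher = more concentrated allergen).
log3_dose = [0, 1, 2, 3, 4]
wheal_mm = [2.0, 3.5, 5.0, 7.5, 9.0]

print(trapezoid_auc(log3_dose, wheal_mm))  # 21.5
```

In the trial these per-subject AUCs were then compared across treatment groups with baseline AUC as a covariate (ANCOVA, SAS PROC MIXED).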
Results: Preclinical results QGE031 demonstrated higher-affinity binding for human IgE compared with omalizumab, as assessed by surface plasmon resonance, with equilibrium KD values of 139 pM vs. 6.98 nM, respectively. In a cell-based functional assay using human cord blood-derived mast cells, an approximate 1 : 1 molar ratio of QGE031 to IgE was sufficient to achieve 90% inhibition of IgE-dependent mast cell degranulation.
By comparison, a ninefold to 27-fold excess of omalizumab was needed to achieve the same response. QGE031 is highly selective for human and non-human primate IgE. QGE031 did not bind to IgE purified from rat, cat or dog. The affinity of QGE031 for cynomolgus non-human primate IgE was approximately 12-fold lower than for human IgE, with a KD of 1.53 nM.
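For orientation, the equilibrium constants reported above imply roughly a 50-fold affinity difference between the two antibodies; a trivial check (values taken from the text):

```python
kd_qge031_pM = 139.0
kd_omalizumab_pM = 6.98 * 1000  # 6.98 nM expressed in pM

# A lower KD means tighter binding, so the KD ratio gives the
# fold-difference in affinity in favour of QGE031.
fold_higher_affinity = kd_omalizumab_pM / kd_qge031_pM
print(round(fold_higher_affinity, 1))  # 50.2
```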
Clinical results: Subjects: Seventy-three subjects (55 in the core study and 18 in Cohort 6a) were enrolled in the intravenous study, with 60 subjects completing the study (82%) (Fig.1a). Owing to recruitment difficulties, only one subject was randomized in Cohort 5 (IgE > 1000 IU/mL). The main reasons for discontinuation were withdrawal of consent (n = 8) or loss to follow-up (n = 5). The subcutaneous study enrolled 110 subjects, with 96 (87%) completing the study (Fig.1b). Fourteen (13%) subjects did not complete the study due to AEs (n = 2), a positive drug screen (n = 1), withdrawal of consent (n = 8), loss to follow-up (n = 2) and protocol deviation (n = 1). Of the two subjects who withdrew due to AEs, one had an asthma exacerbation on Day 5 and one developed a flu-like illness on Day 36; neither AE was considered by the investigator to be related to study drug. Subject demographics and other baseline characteristics were similar across the cohorts in both trials (Table1 and Table S1). IgE levels for treatment groups for (a) the intravenous trial and (b) the subcutaneous trial. NC, not calculable. From screening visit; data highly skewed, so median shown. Pooled over Cohorts 2, 3 and 6. PK results Not all subjects had a fully evaluable PK profile, so the QGE031 PK were characterized in 36 subjects in the intravenous trial and 64 subjects in the subcutaneous trial. The time course of QGE031 in serum when administered intravenously over 2 h was characterized by a biexponential decline, with a rapid initial phase and a slower terminal elimination phase (Fig.2a; Fig. S1). A slower terminal disposition phase became visible at doses of 1 mg/kg and higher. At doses of 3 and 10 mg/kg, the PK profile demonstrated a half-life of 17–23 days (Table2a). Higher levels of IgE accelerated the elimination of QGE031, as shown by PK parameters including a shorter half-life (Table2a) and the time courses of response (Fig.2a).
Summary of PK parameters for (a) the intravenous trial and (b) the subcutaneous trial. AUC, area under the curve; AUCinf, area under the curve from time zero to infinity; Cmax, peak serum concentration; NC, not calculable; SD, standard deviation; Tmax, time to reach peak serum concentration; T½, half-life in serum. Time course for serum concentrations of QGE031 following (a) a single 2-h intravenous infusion and (b) 2-weekly subcutaneous administration on two (0.2 mg/kg) to four occasions (all other cohorts). Data presented are geometric means. Vertical arrows in panel (b) denote the times of administration of QGE031 (except for the 0.2 mg/kg cohort, where QGE031 was administered on just the first two occasions). The PK of QGE031 in serum following subcutaneous administration are also presented (Fig.2b; Fig. S1). The maximum concentration (Cmax) of QGE031 in serum following subcutaneous dosing occurred 2–4 days after the last administered dose (Table2b). Systemic drug exposures reached stable and clinically relevant serum concentrations during the 2-week interval after dosing, as demonstrated by dose-proportional increases in Cmax and AUC0–14 days at dose levels between 0.2 and 2 mg/kg QGE031 in subjects with IgE below 700 IU/mL (Table2b). There was no dose-proportional increase in systemic exposure (Cmax, AUC0–14 days) between the 2 and 4 mg/kg doses (Table2b). Whether this observation is due to random interindividual variation, given the small number of subjects dosed at 4 mg/kg, or to some form of saturation of absorption is not currently known. At the lowest QGE031 dose (0.2 mg/kg) and at the 2 mg/kg dose with high IgE levels, the mean terminal elimination half-life was shorter (13–15 days) than in the other groups (23–26 days; Table2b).
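The biexponential decline described above can be sketched numerically. The following is an illustrative two-phase model with assumed amplitudes and rate constants (not the study's fitted parameters), showing how the reported terminal half-life emerges from the slow phase once the rapid initial phase has decayed:

```python
import math

# Illustrative biexponential PK curve (assumed parameters, NOT the study's
# fitted values): C(t) = A*exp(-alpha*t) + B*exp(-beta*t), with a rapid
# initial phase (alpha) and a slower terminal elimination phase (beta).
A, alpha = 60.0, 0.693   # fast phase, half-life ~1 day (assumption)
B, beta = 40.0, 0.0347   # slow phase, half-life ~20 days (assumption)

def conc(t_days):
    """Serum concentration at time t (arbitrary units)."""
    return A * math.exp(-alpha * t_days) + B * math.exp(-beta * t_days)

# Terminal half-life from the log-linear tail (days 28 to 84), where the
# fast phase has fully decayed -- this is the quantity reported as T1/2.
slope = (math.log(conc(84)) - math.log(conc(28))) / (84 - 28)
t_half_terminal = math.log(2) / -slope
print(round(t_half_terminal, 1))  # ~20 days, of the order of the reported 17-23 days
```

The same log-linear tail fit is why the terminal phase only "became visible" at higher doses: at low doses, concentrations fall below the assay range before the slow phase dominates.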
PD: total and free IgE: In the intravenous study, QGE031 induced a dose-dependent accumulation of total IgE and suppressed free IgE compared with placebo (Fig.3a; Fig. S1). Free IgE was suppressed more rapidly and to a greater extent than with omalizumab. For all doses of QGE031, suppression of free IgE was below the LLOQ (7.8 ng/mL; Fig.3a). Omalizumab-induced suppression of free IgE was 'shallower', with a gradual return to baseline. By contrast, QGE031 suppressed free IgE more rapidly, to a greater extent and for longer, with a faster return to baseline (Fig.3a). Individual subject serum concentrations of total and free IgE in response to increasing doses of QGE031, placebo or omalizumab following (a) a single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. The upper and lower limits of quantification for free IgE were 250 ng/mL and 7.8 ng/mL, respectively, for the intravenous study, and 250 ng/mL and 1.95 ng/mL, respectively, for the subcutaneous study. Subjects with high IgE (i.e.
> 1000 IU/mL for intravenous study and > 700 IU/mL for subcutaneous study) are plotted in red. The placebo group in the subcutaneous study contains all placebo-treated patients regardless of cohort. iv, intravenous; sc, subcutaneous. Subcutaneous delivery of QGE031 resulted in rapid, incremental and sustained increases in total IgE serum concentrations at all doses compared with placebo (Fig.3b; Fig. S1). All doses of QGE031 reduced free IgE below the LLOQ (1.95 ng/mL) to a greater extent than omalizumab (Fig.3b), even in subjects with high IgE (> 700 IU/mL). For both intravenous and subcutaneous administration, the duration of free IgE suppression was longer at higher doses of QGE031 and tended to be shorter in subjects with higher baseline IgE (Fig.3a,b).
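The deeper suppression of free IgE by the higher-affinity antibody follows from simple mass-action binding. Below is a minimal sketch assuming a 1 : 1 quasi-equilibrium between antibody and IgE with illustrative concentrations (the study's actual PK–PD model also tracks drug and IgE turnover); the KD of 0.32 nM is the in vivo value estimated for QGE031 in this study, with a ninefold weaker KD taken for omalizumab:

```python
import math

def free_ligand(l_tot, d_tot, kd):
    """Free ligand concentration at 1:1 quasi-equilibrium, from the
    quadratic solution of the mass-action binding equations (all in nM)."""
    b = l_tot - d_tot - kd
    return 0.5 * (b + math.sqrt(b * b + 4.0 * kd * l_tot))

ige_tot = 50.0       # assumed total IgE, nM (illustrative only)
drug_tot = 500.0     # assumed tenfold molar excess of antibody, nM
kd_qge = 0.32        # in vivo KD for QGE031 from the PK-PD model fit, nM
kd_oma = 9 * kd_qge  # ~ninefold weaker in vivo binding for omalizumab

free_qge = free_ligand(ige_tot, drug_tot, kd_qge)
free_oma = free_ligand(ige_tot, drug_tot, kd_oma)
# With the antibody in excess, residual free IgE scales roughly with KD,
# so the ninefold KD difference gives ~ninefold deeper suppression.
print(round(free_oma / free_qge, 1))
```

This is why, at matched doses, free IgE falls below the LLOQ with QGE031 while omalizumab's suppression is shallower, and why suppression is shorter-lived in subjects with higher baseline IgE (larger l_tot consumes the drug excess sooner).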
FACS analysis: In both trials, QGE031 produced a dose- and time-dependent reduction in basophil FcεRI and IgE expression that was superior in extent (surface IgE only) and longer in duration compared with omalizumab (Fig.4; Fig. S1). In the subcutaneous study, basophil FcεRI and IgE expression were suppressed for 2 to > 16 weeks after the last dose, with individuals exhibiting higher levels of IgE at screening (i.e. ≥ 700 IU/mL) having a shorter duration of suppression. Individual subject time courses for the expression of basophil surface FcεRI and basophil surface IgE in response to increasing doses of QGE031, placebo or omalizumab following (a) a single 2-h intravenous infusion and (b) multiple, 2-weekly subcutaneous administrations. Subjects with high IgE (i.e. > 1000 IU/mL for intravenous study and > 700 IU/mL for subcutaneous study) are plotted in red. The placebo group in the subcutaneous study contains all placebo-treated patients regardless of cohort. iv, intravenous; MESF, molecules of equivalent soluble fluorochrome; s, soluble; sc, subcutaneous.
Quantification of the in vivo binding to IgE: The PK–PD model fitted the clinical QGE031 PK/PD data well, including the accumulation of drug and associated responses towards steady state, followed by washout and return towards baseline after treatment cessation. The half-maximum concentration for in vivo binding of QGE031 to IgE (KD = 0.32 nM; 95% CI 0.19–0.45 nM) was ninefold (95% CI 6.1–14-fold) lower than that for omalizumab. Skin prick tests (subcutaneous trial): Both the AUC and the threshold dose of allergen eliciting a wheal were suppressed in a dose- and time-dependent manner by treatment with QGE031 (Fig.5; Table S2).
In Cohort 3 (QGE031 2 mg/kg; n = 31), the allergen AUC was maximally suppressed by > 95% for QGE031 compared with 41% for omalizumab (P < 0.001), with an 81-fold increase in the threshold dilution of allergen at Day 85, 6 weeks after the last dose of QGE031. Time courses of changes in allergen-induced skin prick wheal responses: (a) area under the dose–response curve values and (b) threshold 1 : 3 dilution of allergen eliciting a response after subcutaneous administration of QGE031, placebo or omalizumab. Data are presented as mean + standard deviation. Serial threefold dilutions of allergen were applied in skin prick testing. A value of 1 = threefold dilution, 2 = ninefold dilution, etc. A value of 0 was assigned if the threshold concentration eliciting a response was the neat allergen, and a value of −1 if no response was elicited at any concentration. AUC, area under the curve. In subjects with baseline IgE levels > 700 IU/mL, 2 mg/kg QGE031 significantly reduced the wheal allergen AUC compared with placebo, as effectively as in subjects with IgE < 700 IU/mL, but with an earlier peak effect at Day 57 and recovery to baseline by the end of the study (Fig.5). The percentage of subjects with positive responses to increasing concentrations of allergen was numerically reduced in a dose-dependent manner by QGE031 (0.6, 2 and 4 mg/kg) compared with pooled placebo cohorts and omalizumab for wheal responses up to the end of study (Day 155) (Fig. S2).
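The dilution-score convention in the figure legend can be written out explicitly. The helper below is hypothetical (not from the study); it maps the most dilute allergen preparation still eliciting a wheal to the reported score:

```python
import math

def threshold_score(threshold_dilution_factor):
    """Score for the skin prick titration described in the legend.

    threshold_dilution_factor: 1 for neat allergen, or 3, 9, 27, ... for
    the most dilute 1:3 preparation still eliciting a wheal; None if no
    concentration elicits a response.
    Returns: -1 (no response), 0 (neat only), 1 (threefold), 2 (ninefold), ...
    """
    if threshold_dilution_factor is None:
        return -1
    # Each 1:3 dilution step adds 1 to the score, i.e. score = log3(factor).
    return round(math.log(threshold_dilution_factor, 3))

print([threshold_score(x) for x in (None, 1, 3, 9, 81)])  # [-1, 0, 1, 2, 4]
```

On this scale, the 81-fold increase in threshold dilution reported at Day 85 corresponds to a shift of four 1 : 3 steps (3**4 = 81).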
Overview of PD parameters: To illustrate the sequential suppression and recovery of free IgE, basophil surface expression of FcεRI and IgE, and skin prick responses, as well as the influence of baseline IgE, the kinetics of individual subject PD parameters are presented (Fig. S1). Immunogenicity: In the intravenous trial, QGE031 concentrations > 5 μg/mL may have interfered with the detection of anti-QGE031 antibodies.
The QGE031 concentrations were above the tolerable drug levels in only four of 95 analyzed postdose samples; weak immunogenicity signals were detected in 10 subjects, four of whom were treated with placebo (data not shown). In the six subjects exposed to QGE031 who had immunogenicity responses, there were no differences in PK/PD responses compared with subjects without an immunogenicity signal. For the subcutaneous trial, QGE031 concentrations > 0.76 μg/mL may have interfered with the detection of anti-QGE031 antibodies. The drug tolerance levels were exceeded in 44% of immunogenicity samples tested at the end of study; however, PK and PD results did not indicate that a strong anti-QGE031 response was missed in any of the subjects exposed to QGE031. Safety: AEs were experienced by 59% of subjects in the intravenous study (Table3a) and 66% of subjects in the subcutaneous study (Table3b).
The most common AEs across both studies were headache and upper respiratory tract infection, with the majority of events mild to moderate in severity and not suspected to be related to study medication, with the exception of injection site events. Incidence of AEs by preferred term (safety analysis set) for (a) the intravenous trial and (b) the subcutaneous trial. AE, adverse event. In the intravenous trial, four subjects experienced urticaria. Two of 10 subjects who received 3 mg/kg QGE031 experienced urticaria concurrent with the end of the 2-h infusion. One of 10 subjects treated with 10 mg/kg QGE031 experienced urticaria and angioedema with chest pressure, diarrhoea and abdominal pain that began close to the end of the infusion period. One subject experienced urticaria and loose stools 22 h after receiving placebo to 10 mg/kg QGE031. In all subjects, symptoms resolved rapidly after treatment with diphenhydramine, with the exception of the angioedema, which persisted for 90 h. There were no observed episodes of urticaria in 18 subjects who subsequently received a single infusion of placebo. The true underlying rate is unknown; the rate observed in the 3 and 10 mg/kg cohorts was ≤ 20%, and there is a 90% probability that the true rate on placebo is less than 20%. In the subcutaneous study, there were four episodes of urticaria in three subjects, occurring from 13 h to 1 week after administration of study drug. Of these, three episodes occurred in two subjects treated with placebo, while one episode occurred in a subject dosed with 0.6 mg/kg QGE031 and did not recur after further doses of QGE031. All episodes were mild or moderate and transient, and resolved without treatment. No episodes of urticaria occurred in 40 subjects who received a total of 149 doses of 2 mg/kg QGE031 or in eight subjects treated with 4 mg/kg QGE031.
On the basis of prespecified analyses, the lack of urticarial events in 40 subjects treated with 2 mg/kg QGE031 provides 96% probability that the true incidence of urticaria is 7.5% or less. There were no serious AEs in either trial, but two subjects were withdrawn from Cohort 3 (QGE031 2 mg/kg) in the subcutaneous trial due to AEs; one suffered an asthma exacerbation and one developed a severe 'flu-like illness. Neither event was considered related to study drug.
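The 96% probability statement is what a simple Bayesian zero-event calculation gives. A sketch assuming a uniform Beta(1, 1) prior (the protocol's actual prespecified analysis is not described here): with 0 events in n subjects, the posterior for the event rate is Beta(1, n + 1), so P(rate ≤ r) = 1 − (1 − r)^(n + 1).

```python
def prob_rate_below(r, n_subjects_no_event):
    """Posterior P(true event rate <= r) after observing 0 events in n
    subjects, under a uniform Beta(1, 1) prior (assumption): the posterior
    is Beta(1, n + 1), whose CDF at r is 1 - (1 - r)**(n + 1)."""
    return 1.0 - (1.0 - r) ** (n_subjects_no_event + 1)

# 0 urticarial events in the 40 subjects treated with 2 mg/kg QGE031:
p = prob_rate_below(0.075, 40)
print(round(p, 2))  # ~0.96, matching the reported 96% probability
```

That the uniform-prior result reproduces the reported figure suggests this (or an equivalent rule-of-three-style bound) is the form of calculation intended, but the prior is an assumption on our part.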
In the subcutaneous study, there were four episodes of urticaria in three subjects, occurring from 13 h to 1 week after administration of study drug. Of these, three episodes occurred in two subjects treated with placebo, while one episode occurred in a subject dosed with 0.6 mg/kg QGE031 and did not recur after further doses of QGE031. All episodes were mild or moderate and transient, and resolved without treatment. No episodes of urticaria occurred in 40 subjects who received a total of 149 doses of 2 mg/kg QGE031 or in eight subjects treated with 4 mg/kg QGE031. On the basis of prespecified analyses, the lack of urticarial events in 40 subjects treated with 2 mg/kg QGE031 provides 96% probability that the true incidence of urticaria is 7.5% or less. There were no serious AEs in either trial, but two subjects were withdrawn from Cohort 3 (QGE031 2 mg/kg) in the subcutaneous trial due to AEs; one suffered an asthma exacerbation and one developed a severe flu-like illness. Neither event was considered related to study drug.

Discussion: This report presents the first data demonstrating the efficient suppression of circulating free IgE, basophil FcεRI and surface IgE, and subsequent inhibition of the allergen-induced skin prick response by QGE031, a novel high-affinity anti-IgE antibody. The data demonstrate that the 50-fold higher in vitro affinity of QGE031 compared with omalizumab translated into sixfold to ninefold greater potency in vivo. Compared with omalizumab, treatment with QGE031 provided greater and longer suppression of free IgE and of IgE on the surface of circulating basophils, and markedly superior suppression of skin prick test responses to allergen. These effects were apparent even in subjects with high baseline IgE levels who would be ineligible to receive omalizumab treatment [4,5]. The data suggest that QGE031 may be more potent than omalizumab in the treatment of allergic disease.
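The reported "96% probability that the true incidence is 7.5% or less" after 0 events in 40 subjects is consistent with a simple binomial calculation under a uniform prior. This is a sketch of that arithmetic; the protocol's exact prespecified analysis is not described in the text, so the uniform Beta(1, 1) prior is an assumption.

```python
# Posterior probability that the true urticaria rate is <= 7.5% after
# observing 0 events in 40 subjects, assuming a uniform Beta(1, 1) prior.
# The posterior is Beta(1, 41), whose CDF has the closed form
# P(p <= x) = 1 - (1 - x)**41.
n_subjects, threshold = 40, 0.075
p_below = 1.0 - (1.0 - threshold) ** (n_subjects + 1)
# p_below is approximately 0.96, matching the reported 96% probability
```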
As predicted from modelling and simulation [3,19,20] and published experience with another high-affinity anti-IgE, HAE1 [21], QGE031 was more potent than omalizumab in capturing and thereby suppressing free IgE, which declined to below the LLOQ in almost all subjects dosed with QGE031. The duration of suppression of free IgE depended on the dose of QGE031 and on baseline IgE. At higher subcutaneous doses, suppression of free IgE was maintained until Day 155 (end of study), more than 100 days after the last dose and longer than that observed with omalizumab. Once free IgE levels started to return to baseline, the return was more rapid after treatment with QGE031 than with omalizumab, as expected from drug–target binding model simulations for lower KD values and as previously seen with HAE1 [21]. Based on PK/PD model fitting, QGE031 demonstrated a ninefold increase in potency for suppression of free IgE compared with omalizumab. This potency achieved in the clinic is supported by the preclinical work conducted as part of the present study, which demonstrated an approximately 50-fold higher affinity of QGE031 for human IgE compared with omalizumab (KD 139 pM vs. 6.98 nM, respectively). The KD for omalizumab was consistent with previous in vitro experiments (7.7 nM) [3] and similar to values obtained from clinical experience (KD 1–3 nM) [3,17]. Suppression of free IgE was followed by dose- and time-dependent suppression of FcεRI and surface IgE expression on circulating basophils, as observed in previous studies using omalizumab (for FcεRI expression) [9,11,12]. Despite the increased potency of QGE031 in suppressing free IgE, the reduction in expression of FcεRI was similar for subjects who received QGE031 and omalizumab, suggesting that a low level of basal FcεRI expression is maintained on the surface of basophils independent of the presence of IgE.
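The link between binding affinity and residual free IgE can be illustrated with a single-site 1:1 equilibrium calculation. This is a deliberately simplified sketch: the antibody and total IgE concentrations below are hypothetical, only the two KD values come from the text, and the authors' actual PK/PD model (anti-IgE stoichiometry, complex clearance, turnover) is more involved.

```python
import math

def free_target(total_drug_nM, total_ige_nM, kd_nM):
    """Free IgE at 1:1 binding equilibrium (single-site approximation).
    Solves C^2 - (A + B + Kd)*C + A*B = 0 for the complex concentration C,
    where A is total antibody and B is total IgE, then returns B - C."""
    a, b, kd = total_drug_nM, total_ige_nM, kd_nM
    s = a + b + kd
    complex_c = (s - math.sqrt(s * s - 4 * a * b)) / 2
    return b - complex_c

# Hypothetical 100 nM antibody against 1 nM total IgE
free_qge031 = free_target(100.0, 1.0, 0.139)     # KD 139 pM (QGE031)
free_omalizumab = free_target(100.0, 1.0, 6.98)  # KD 6.98 nM (omalizumab)
# The ~50-fold lower KD leaves substantially less free IgE at the same dose
```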
The combined suppression of free IgE and of FcεRI expression led to a > 100-fold reduction in IgE on the surface of basophils (Fig. 4) that was superior in extent and duration at higher doses of QGE031 compared with omalizumab. The recovery of FcεRI and surface IgE expression after dosing with QGE031 was slower than the return to baseline of free IgE, perhaps indicating an indirect mechanism of action and/or time needed for blood–tissue equilibration. Omalizumab suppressed the wheal response to allergen by 41%, as reflected in the allergen AUC, and increased the threshold concentration of allergen that elicited a positive response by approximately threefold, consistent with previously published data for subjects dosed with subcutaneous omalizumab [22]. In contrast, a comparable dose of subcutaneous QGE031 (2 mg/kg) almost completely ablated the response to allergen and increased the threshold dose 81-fold. The maximal inhibition of skin prick test responses was seen approximately 6 weeks after the last dose of QGE031, likely reflecting time for blood–tissue equilibration and/or turnover of skin mast cells. MacGlashan et al. [23] reported that treatment with omalizumab led to increased intrinsic sensitivity of basophils to IgE-mediated stimulation, which may partially offset the beneficial effects of omalizumab on reduced binding of free IgE to FcεRI. We did not measure the sensitivity of basophils to IgE-mediated stimulation in this study. Nevertheless, the profound suppression of skin prick test responses to allergen upon QGE031 treatment suggests that any increase in intrinsic sensitivity to IgE-mediated stimulation in cutaneous mast cells upon treatment with QGE031 is offset by the profound suppression of IgE bound to FcεRI.
QGE031, like other IgG antibodies including omalizumab, is eliminated from the systemic circulation not only by clearance processes common to all IgGs, that is, intracellular proteolytic degradation [24], but also by binding to its target, IgE. As QGE031–IgE complexes are cleared faster than unbound QGE031, as with omalizumab–IgE complexes [3,17], target-mediated disposition is most pronounced as the molar ratio of the serum concentration of QGE031 to the serum concentration of IgE falls. This was evidenced by greater than dose-proportional exposure in the intravenous study and by a shorter terminal elimination half-life in subjects treated with the lowest subcutaneous dose of QGE031 or in subjects with high IgE. Nevertheless, even in subjects with higher levels of baseline IgE, currently outside the US dosing table, treatment with subcutaneous QGE031 led to suppression of free IgE to below the LLOQ, with accompanying suppression of basophil surface IgE and skin test responses superior to those seen with omalizumab. The most significant AE observed was mild-to-moderate urticaria, which occurred in four subjects treated with QGE031 and five subjects treated with placebo across the two studies. All events resolved spontaneously or with antihistamines, and no epinephrine was required. Nevertheless, urticaria was accompanied by systemic symptoms in one subject given the highest dose of QGE031 intravenously. As the excipient for intravenous QGE031 contained polysorbate 80, which can elicit mast cell activation [25,26], the protocol was amended to include a cohort dosed only with the placebo to the 10 mg/kg dose of QGE031, as this cohort had the greatest volume of excipient. None of the subjects in the protocol-amended cohort experienced urticaria or a hypersensitivity event, which suggests that polysorbate 80 was not responsible for these events.
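Target-mediated disposition of this kind can be caricatured with a one-compartment model in which elimination has a linear component (IgG catabolism) and a saturable, target-mediated component. The parameters below are made up for illustration and are not fitted to the study data; the sketch only shows the qualitative effect that a higher target load shortens the apparent half-life.

```python
def simulate_half_life(c0, kel, vmax, km, dt=0.01, t_max=500.0):
    """Euler integration of dC/dt = -kel*C - vmax*C/(km + C): a linear IgG
    catabolism route plus a saturable, target-mediated route. Returns the
    first time at which the concentration falls below half its start value."""
    c, t = c0, 0.0
    while c > c0 / 2 and t < t_max:
        c += dt * (-kel * c - vmax * c / (km + c))
        t += dt
    return t

# Made-up parameters: identical linear clearance; the target-mediated term
# (vmax) is larger when circulating IgE, the target, is high
t_half_low_ige = simulate_half_life(c0=10.0, kel=0.03, vmax=0.05, km=1.0)
t_half_high_ige = simulate_half_life(c0=10.0, kel=0.03, vmax=0.5, km=1.0)
# Higher target load -> faster apparent elimination (shorter half-life)
```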
The size of the 2 mg/kg cohort in the subcutaneous study was designed to provide information on the incidence of hypersensitivity events following administration of a QGE031 dose predicted to provide sustained suppression of free IgE across a wide range of baseline IgE concentrations and body weights. The lack of urticaria and hypersensitivity events in the 40 subjects treated with 2 mg/kg QGE031 provides 96% probability that the true incidence rate is 7.5% or less. A more accurate estimate of the true incidence of hypersensitivity events following multiple subcutaneous administrations of QGE031 will be obtained as more clinical trials are conducted. In conclusion, QGE031 was superior to omalizumab in suppressing free IgE and basophil surface expression of FcεRI and IgE. These effects translated into almost complete suppression of the skin prick response to allergen that was superior in extent and duration compared with omalizumab. Correlations between free IgE and asthma symptom control in controlled clinical studies suggest that a more profound suppression of free IgE may translate into correspondingly better asthma clinical outcomes [14]. Thus, QGE031, with its superior suppression of serum IgE and allergen skin test responses, may benefit atopic asthma patients not effectively treated with omalizumab, though the risk/benefit remains to be established.
Background: Using a monoclonal antibody with greater affinity for IgE than omalizumab, we examined whether more complete suppression of IgE provided greater pharmacodynamic effects, including suppression of skin prick responses to allergen. Methods: Preclinical assessments and two randomized, placebo-controlled, double-blind clinical trials were conducted in atopic subjects. The first trial administered single doses of QGE031 (0.1-10 mg/kg) or placebo intravenously, while the second trial administered two to four doses of QGE031 (0.2-4 mg/kg) or placebo subcutaneously at 2-week intervals. Both trials included an open-label omalizumab arm. Results: Sixty of 73 (82%) and 96 of 110 (87%) subjects completed the intravenous and subcutaneous studies, respectively. Exposure to QGE031 and its half-life depended on the QGE031 dose and serum IgE level. QGE031 had a biexponential pharmacokinetic profile after intravenous administration and a terminal half-life of approximately 20 days. QGE031 demonstrated dose- and time-dependent suppression of free IgE, basophil FcεRI and basophil surface IgE superior in extent (free IgE and surface IgE) and duration to omalizumab. At Day 85, 6 weeks after the last dose, skin prick wheal responses to allergen were suppressed by > 95% and 41% in subjects treated subcutaneously with QGE031 (2 mg/kg) or omalizumab, respectively (P < 0.001). Urticaria was observed in QGE031- and placebo-treated subjects and was accompanied by systemic symptoms in one subject treated with 10 mg/kg intravenous QGE031. There were no serious adverse events. Conclusions: These first clinical data for QGE031, a high-affinity IgG1κ anti-IgE, demonstrate that increased suppression of free IgE compared with omalizumab translated to superior pharmacodynamic effects in atopic subjects, including those with high IgE levels. QGE031 may therefore benefit patients unable to receive, or suboptimally treated with, omalizumab.
Keywords: allergic | antibody | anti-IgE | atopic | IgE | ligelizumab | monoclonal | QGE031
MeSH terms: Adolescent | Adult | Anti-Allergic Agents | Antibodies, Anti-Idiotypic | Antibodies, Monoclonal, Humanized | Antibody Affinity | Drug Evaluation, Preclinical | Female | Humans | Hypersensitivity, Immediate | Immunoglobulin E | Male | Middle Aged | Skin Tests | Treatment Outcome | Young Adult
A decision support system for quality of life in head and neck oncology patients.
PMID: 22340746
Background: The assessment of Quality of Life (QoL) is a medical goal; it is used in clinical research, medical practice, health-related economic studies and in planning health management measures and strategies. The objective of this project is to develop an informational platform through which patients self-assess with standardized QoL measuring instruments, using friendly software that is easy for the user to adapt. The platform should aid the study of QoL by promoting the creation of databases, accelerating their statistical treatment, and generating useful results in graphical format for the physician to analyze in an appointment immediately after the answers are collected.
Methods: First, a software platform was designed and developed in an action-research process with patients, physicians and nurses. The computerized patient self-assessment with standardized QoL measuring instruments was compared with the traditional one, to verify that its use did not influence the patients' answers. For this, the Wilcoxon and Student's t tests were applied. Afterwards, we adopted and adapted the Rasch mathematical model to make the use of QoL measures possible in routine appointments.
Results: The results show that the computerized patient self-assessment does not influence the patients' answers and can be used as a suitable tool in the routine appointment, because it indicates problems that are more difficult to identify in a traditional appointment, thus improving the physician's decisions.
Conclusions: The possibility of graphically representing the useful results that the physician needs to analyze in the appointment, immediately after the answers are collected and in a useful time frame, makes this QoL assessment platform a diagnostic instrument ready to be used routinely in clinical practice.
MeSH terms: Decision Support Systems, Clinical | Head and Neck Neoplasms | Humans | Quality of Life | Self-Assessment | Software | Statistics, Nonparametric | Surveys and Questionnaires | User-Computer Interface
PMCID: 3296664
Background
Scope

The concept of "Quality of Life" (QoL) is used in different contexts and situations, reaching practically all sectors of society. The perception that an individual holds about his place in life, which depends upon his culture and values, defines that individual's QoL. When applied in a health context, this is known as Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. Today, HRQoL is a medical goal, being used in epidemiological studies, clinical trials, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3]. Preliminary studies indicate that the implementation of patient HRQoL assessment in Portugal is challenged and questioned by several factors involving health institutions, health professionals and patients [4]. The reasons include: a lack of familiarity with relevant studies in this area; an absence of sensitivity; lack of time; reluctance to accept that the patient's perceptions of their own outcomes are as important as the physician's [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge into explicit knowledge; the lack of user-friendly computer-based applications; and the lack of health care service infrastructures that enable routine HRQoL assessment.

The purpose of this project is to allow the physician to use the patient's QoL measurements as clinical decision support elements. Timely knowledge of the patient's QoL-related elements is a further factor that may, in certain circumstances, contribute to better decision making. On the other hand, systematic collection of patient QoL data allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. In other words, when several therapeutic strategies are available, this can help the physician by giving him clues about the patient's future QoL according to the medical acts applied. In this paper we demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. We analyze this problem and show the results obtained with a platform developed for the self-evaluation questionnaire that measures patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments used in the patients.

Evaluation of HRQoL in oncologic patients

Malignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing and their social impact is being recognized [1].
The global weight of oncologic disease is growing, given the economic and social costs involved in its prevention, treatment and rehabilitation [6]. Research methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating wide domains such as the psychological, social, economic and organizational [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed through safe databases [7]. Assessing the implementation of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2]. The time when therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently informed of their diagnosis only after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1]. In fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease. Furthermore, evidence shows that a global patient QoL optimization can lead to a higher survival rate and to a higher quality of life [1]. Promoting the integration of QoL assessment into clinical practice can result in the optimization of infrastructures and methods capable of improving patients' QoL [8]. A validated, safe and scientifically based measuring instrument must be made available in a simple format, understood both by the patient and the physician, and completed in less than 10 minutes [9]. Although a subjective concept, HRQoL is quantified objectively and does not merely represent the absence of disease [10]. The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables which, as a whole, define welfare [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11]. Table 1 schematically represents the main HRQoL dimensions and items proposed by the WHO [12].

Table 1. Dimensions and items for HRQoL assessment

KMS in routine HRQoL assessment

Preliminary studies on oncologic patients conclude that the use of adequate software for HRQoL assessment, data collection and processing allows us to obtain self-answered questionnaires from patients, automatic scoring of these questionnaires, the creation of a database and statistical analysis of the results, performing a routine clinical HRQoL assessment [13]. Moreover, the graphical representation of results enables a fast patient HRQoL assessment by the physician, and this evaluation becomes a diagnostic instrument to be used in routine clinical practice [13]. HRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis.
Then, the selection of a measuring instrument with good psychometric characteristics, easy to administer and to score, that does not increase the appointment time and has a multidimensional character, is most important. It must be answered and scored before appointments. The results should remain confidential and anonymous, and when graphically represented should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnostic instrument that identifies the patient's problems, highlights certain signs and symptoms that could otherwise go unnoticed, improves physician-patient communication and assists therapeutic decisions; in other words, it makes appointments easier. By analogy, the physician can evaluate the evolution of his patient's state by comparing two or more assessments obtained in different periods [15]. However, routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of the activities necessary for the process, define the knowledge management system which supports the clinical decision aid system based on the HRQoL assessment.

Methodology

A software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses. In order to assess the impact created by the application on the given answers, we randomly selected patients from the otorhinolaryngology service at Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation, yielding a sample of 54 individuals (Table 2).
These patients answered the same questionnaire twice, once on paper (the traditional model) and once on the computer using the software developed for that purpose, with a minimum of 40 minutes between answers. Half of the patients answered first on paper and the other half answered first on the computer platform. In both cases the answer time was measured and the patient's preference between paper and computer was registered. Information regarding the patient's affinity with computer use was also registered.

Table 2. Patient demographics

In order to understand whether the computer-based environment influenced the answers, we analyzed the values obtained for each given answer, at both assessment moments, using a collection of statistical models and tests. Answers obtained on paper and through the computer-based platform were matched. To test whether the computer-based platform influenced the patients' answers, we hypothesized that the distributions for each variable under study were identical. We first tested the entire set of answers and then two subsets, dividing patients who answered first on paper from patients who answered first on the computer. In the validation process two standardized questionnaires were used, both from the EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped into five domains (physical, social, emotional, functional and cognitive). The second is a specific questionnaire for head and neck oncology patients, with thirty-five questions. The two statistical hypotheses for a bilateral test in each situation were H0: F(X0) = F(X1) versus H1: F(X0) ≠ F(X1). We used the Wilcoxon test, the most appropriate when the dependent variable is measured on an ordinal scale [16].
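The paired comparison described above can be sketched as follows, with synthetic Likert-scale answers standing in for the real questionnaire data; `scipy.stats.wilcoxon` implements the signed-rank test used in the study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired item scores (1-4 Likert scale, as in the EORTC QLQ-C30):
# the same patients answering once on paper and once on the computer
paper    = np.array([2, 3, 1, 4, 2, 3, 4, 1, 2, 3, 4, 2])
computer = np.array([2, 3, 2, 4, 1, 3, 4, 1, 2, 4, 4, 2])

# Paired Wilcoxon signed-rank test; zero differences are dropped ("wilcox")
stat, p_value = wilcoxon(paper, computer, zero_method="wilcox")

# A large p-value means H0: F(X0) = F(X1) cannot be rejected, i.e. no
# evidence that the administration mode changed the answers
```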
For both questionnaires (QLQ-C30 and QLQ-H&N35) adopted to evaluate QoL, the test results revealed no significant differences between the distributions, for the two samples and the three situations mentioned. A high significance level was always attained, whether the sample was analyzed globally or split between those who answered first on paper and those who answered first on the computer, so the hypothesis of no significant difference between answers was accepted. We can thus state that using the software does not influence patients' answers. After validating the platform for obtaining a patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the mathematical Rasch model to make the QoL measure usable in routine appointments. The Rasch model estimates each question's difficulty level and each person's ability level through an iterative process. This process is time-consuming and incompatible with a routine appointment, so it was necessary to understand how to make it faster while preserving the accuracy of the estimated parameters. We analyzed the running time while varying the number of iterations and the sample size without losing accuracy, which made it possible to determine the sample size and number of iterations that minimize the execution time. In addition, we determined which calculations could be made in advance, that is, before the appointment. The time required for the QoL assessment was thus reduced to five minutes, making it usable in routine assessment. 
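The "compute in advance" idea can be illustrated on the dichotomous Rasch model: item difficulties are calibrated offline on an earlier sample, so that at appointment time only the patient's ability must be estimated, which takes a few Newton-Raphson steps on a single parameter. This is a sketch under simplifying assumptions (binary items, precalibrated difficulties), not the authors' exact implementation:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of a positive answer to an item of
    difficulty b by a person of ability theta."""
    return 1.0 / (1.0 + math.exp(b - theta))

def estimate_ability(answers, difficulties, iters=20):
    """Estimate one patient's ability with item difficulties held fixed
    (calibrated before the appointment). Newton-Raphson on the
    log-likelihood; assumes an interior raw score (not 0, not all 1s)."""
    theta = 0.0
    raw = sum(answers)                          # observed raw score
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        expected = sum(probs)                   # model-expected raw score
        info = sum(p * (1 - p) for p in probs)  # Fisher information
        theta -= (expected - raw) / info
    return theta

# With difficulties precalibrated offline, only this one-dimensional
# solve runs at appointment time, so scoring stays within a routine visit.
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]     # hypothetical calibrated items
theta = estimate_ability([1, 1, 1, 0, 0], difficulties)
```

Separating the expensive joint calibration (done once, on a stored sample) from the cheap per-patient ability solve is the same trade-off the text describes: the execution time drops because the iterative work over the whole sample happens before the appointment.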
Conclusions
In this paper we defined the concept of QoL in different contexts and situations, a concept now reaching almost every sector of society. The main focus, however, was on the health context. Some studies have suggested that the implementation of patient HRQoL assessment in Portugal is challenged and questioned on several grounds involving health institutions, health professionals and patients [4]. The platform designed and developed in this project gives the physician the opportunity to use the patient's QoL measurement in real time as a clinical decision support element. Knowledge of the patient's QoL is another factor that may, in certain circumstances, contribute to a better decision. Systematic collection of patient QoL data allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. Moreover, comparing therapeutic alternatives can help the physician by providing data from which the patient's future QoL can be inferred. We demonstrated the validity of the developed platform in acquiring the data required for QoL assessment, and in allowing a routine QoL assessment to become part of the appointment. An evolution of the platform for collecting clinical information, in order to typify patients and therapies according to a specific patient's QoL and a class of patients, is under development. The need to develop this platform underlines the importance of knowledge management systems as decision-making aids.
[ "Background", "Scope", "Evaluation of HRQoL in oncologic patients", "KMS in routine HRQoL assessment", "Methodology", "Results and discussion", "Friendly software design", "Computer versus paper support", "Data analysis", "Clinical decision support system", "Competing interests", "Authors' contributions" ]
[ " Scope The concept of \"Quality of Life - QoL\" is used in different contexts and situations, reaching practically all sectors of society. The perception that an individual holds about his place in life, which depends upon his culture and values, defines that individual's Quality of Life (QoL). When applied in a health context it is known as Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. Today, HRQoL is a medical goal, being used in epidemiological studies, clinical trials, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3].\nPreliminary studies indicate that the implementation of patient HRQoL assessment in Portugal is challenged and questioned by several factors involving health institutions, health professionals and patients [4]. The reasons include: a lack of familiarity with relevant studies in this area; a lack of sensitivity; lack of time; reluctance to accept that the patient's perceptions of their own outcomes are as important as the physician's [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge into explicit knowledge; the lack of user-friendly computer-based applications; and the lack of health care service infrastructures that enable routine HRQoL assessment.\nThe purpose of this project is to allow the physician to use the patient's QoL measurements as clinical decision support elements. Timely knowledge of the patient's QoL-related elements is another factor that may, in certain circumstances, contribute to better decision making. On the other hand, systematic collection of patient QoL data allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. 
In other words, in the presence of several therapeutic strategies this can help the physician by giving him clues about the patient's future QoL according to the medical acts applied.\nIn this paper we intend to demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. We analyze this problem and present the results obtained with a platform developed for the self-evaluation questionnaire that measures patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments applied to the patients.\n Evaluation of HRQoL in oncologic patients Malignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing and their social impact is being recognized [1]. 
The global weight of oncologic disease is growing, given the economic and social costs involved in its prevention, treatment and rehabilitation [6].\nResearch methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating domains as wide as the psychological, social, economic and organizational [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed on the basis of reliable databases [7]. Assessing the presence of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2].\nThe time when therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently informed of their diagnosis only after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1].\nIn fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease.\nFurthermore, evidence shows that global optimization of a patient's QoL can lead to a higher survival rate and a higher quality of life [1].\nPromoting the integration of QoL assessment into clinical practice can result in the optimization of infrastructures and methods capable of improving patients' QoL [8]. A validated, safe and scientifically based measuring instrument must be made available in a simple format, understood by both patient and physician, and completable in less than 10 minutes [9].\nAlthough subjective as a concept, HRQoL is quantified objectively and does not merely represent the absence of disease [10]. 
The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables which, as a whole, define welfare [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11].\nTable 1 schematically represents the main HRQoL dimensions and items proposed by the WHO [12]:\nDimensions and items for HRQoL assessment\n KMS in routine HRQoL assessment Preliminary studies on oncologic patients conclude that the use of adequate software for HRQoL assessment, data collection and processing allows us to obtain self-answered questionnaires from patients, an automatic scoring of these questionnaires, the creation of a database and the statistical analysis of the results, enabling a routine HRQoL clinical assessment [13].\nMoreover, the graphical representation of results enables a fast patient HRQoL assessment by the physician, and this evaluation becomes a diagnostic instrument to be used in routine clinical practice [13].\nHRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis. 
The selection of a measuring instrument is therefore critical: it should have good psychometric properties, be easy to administer and score, be multidimensional, and not lengthen the appointment. It must be answered and scored before the appointment. The results should remain confidential and anonymous and, when represented graphically, should allow an easy reading of the patient's self-perception. HRQoL assessment thus becomes a diagnostic instrument that identifies the patient's problems, highlights signs and symptoms that could otherwise go unnoticed, improves physician-patient communication and assists therapeutic decisions; in other words, it makes appointments easier. Likewise, the physician can evaluate the evolution of the patient's state by comparing two or more assessments obtained in different periods [15].\nHowever, a routine assessment implies the design of a new appointment protocol. The analysis and specification of the information-system requirements, together with the specification of the activities the process requires, define the knowledge management system that supports the clinical decision aid system based on HRQoL assessment. 
Then, the selection of a measuring instrument with good psychometric characteristics, easy to administrate and to quantify, that doesn't increase the appointment time and with a multidimensional character is most important. It must be answered and quoted before appointments. The results should remain confidential and anonymous, and when graphically represented should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnosis instrument that identifies patient's problems, highlights certain signs and symptoms that could otherwise go unnoticed, improves the physician-patient communication and assists therapeutic decisions; in other words, it renders the appointments easier. By analogy, the physician can evaluate the evolution of his patient's state comparing two or more assessments obtained in different periods [15].\nHowever, a routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of necessary activities for the process, define the knowledge management system which supports the clinical decision aid system, based on the HRQoL assessment.\n Methodology A software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses.\nIn order to assess the impact created by the application in the given answers we randomly selected patients from the otorhinolaryngology service in Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation and then we obtained a sample of 54 individuals (Table 2). 
These patients answered the same questionnaire twice, one in paper form - the traditional model - and the other on the computer using the software developed for that purpose, with 40 minutes temporal gap. Half of the patients answered first on the paper form and the other half answered first on the computer platform, the minimum time between answers was 40 minutes. In both cases the answer time was measured and the patient's preference between the paper and the computer was registered. Information regarding patient's affinity with computer use was also registered.\nPatients demographics\nIn order to understand if the computer-based environment influenced or not the answers we analyzed the obtained values for each given answer, in both of the assessment moments, using a collection of statistical models and tests. Answers obtained in paper support and through the computer-based platform were matched. To understand if the computer-based platform did not influence the patients answers we hypothesized that distributions, for each variable in study, were identical. We first tested the entire set of answers and then two subsets, which divided patients that answered firstly on paper and patients that answered firstly on the computer.\nIn the validation process two standardized questionnaires were used, both from EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first one is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive). 
The second is a specific questionnaire for Head and Neck oncology patients, with thirty five questions.\nThe two statistical hypotheses for a bilateral test in each situation were written.\n\n\n\n\n\nHypothesis\n\n\n\n\n\nH\n\n\n\n0\n\n\n\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n0\n\n\n\n)\n\n=\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n1\n\n\n\n)\n\n;\n\n\nHypothesis\n\n\n\n\n\nH\n\n\n\n1\n\n\n:\n\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n0\n\n\n\n)\n\n<>\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n1\n\n\n\n)\n\n.\n\n\n\n\nWe used the Wilcoxon test, the most appropriate when the dependent variable is measured in an ordinal scale [16]. In both of the questionnaires (QLQ-C30 and QLQ-H&N35) adopted to evaluate the QoL the test results did not allow to conclude if there were significant differences between distributions, for the two samples and the three mentioned situations.\nA high level of significance was always attained, independently of the global or the partial analysis of the sample, divided between those who firstly answered on paper and those who firstly answered on the computer, so the hypothesis of not existing a significant difference between answers was accepted. We can thus state that the software use does not influence patient's answers.\nAfter the validation of the platform to obtain a patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the mathematic Rash model to make possible the use of QoL measure in the routine appointments.\nThe Rasch model estimates the question difficulty level and the person ability level with an iterative process. This process takes a lot of time and it is not compatible with a routine appointment. Thus, it was necessary to understand how we could make the process faster while maintaining accuracy in the values obtained for the estimated parameters.\nWe analyze the running time by varying the number of iterations and the sample size without losing accuracy. 
So, it was possible to determinate the sample size and the number of iterations to calculate the parameters that minimize the execution time.\nIn addition, we determine what were the calculations that could be made in advance, that is, the calculations that could be made before the appointment. Thus, the time required for QoL assessment was reduced to five minutes and you can use it in a routine assessment.\nA software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses.\nIn order to assess the impact created by the application in the given answers we randomly selected patients from the otorhinolaryngology service in Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation and then we obtained a sample of 54 individuals (Table 2). These patients answered the same questionnaire twice, one in paper form - the traditional model - and the other on the computer using the software developed for that purpose, with 40 minutes temporal gap. Half of the patients answered first on the paper form and the other half answered first on the computer platform, the minimum time between answers was 40 minutes. In both cases the answer time was measured and the patient's preference between the paper and the computer was registered. Information regarding patient's affinity with computer use was also registered.\nPatients demographics\nIn order to understand if the computer-based environment influenced or not the answers we analyzed the obtained values for each given answer, in both of the assessment moments, using a collection of statistical models and tests. Answers obtained in paper support and through the computer-based platform were matched. 
To understand if the computer-based platform did not influence the patients answers we hypothesized that distributions, for each variable in study, were identical. We first tested the entire set of answers and then two subsets, which divided patients that answered firstly on paper and patients that answered firstly on the computer.\nIn the validation process two standardized questionnaires were used, both from EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first one is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive). The second is a specific questionnaire for Head and Neck oncology patients, with thirty five questions.\nThe two statistical hypotheses for a bilateral test in each situation were written.\n\n\n\n\n\nHypothesis\n\n\n\n\n\nH\n\n\n\n0\n\n\n\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n0\n\n\n\n)\n\n=\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n1\n\n\n\n)\n\n;\n\n\nHypothesis\n\n\n\n\n\nH\n\n\n\n1\n\n\n:\n\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n0\n\n\n\n)\n\n<>\n\nF\n\n\n(\n\n\n\n\nX\n\n\n\n1\n\n\n\n)\n\n.\n\n\n\n\nWe used the Wilcoxon test, the most appropriate when the dependent variable is measured in an ordinal scale [16]. In both of the questionnaires (QLQ-C30 and QLQ-H&N35) adopted to evaluate the QoL the test results did not allow to conclude if there were significant differences between distributions, for the two samples and the three mentioned situations.\nA high level of significance was always attained, independently of the global or the partial analysis of the sample, divided between those who firstly answered on paper and those who firstly answered on the computer, so the hypothesis of not existing a significant difference between answers was accepted. 
We can thus state that the software use does not influence patient's answers.\nAfter the validation of the platform to obtain a patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the mathematic Rash model to make possible the use of QoL measure in the routine appointments.\nThe Rasch model estimates the question difficulty level and the person ability level with an iterative process. This process takes a lot of time and it is not compatible with a routine appointment. Thus, it was necessary to understand how we could make the process faster while maintaining accuracy in the values obtained for the estimated parameters.\nWe analyze the running time by varying the number of iterations and the sample size without losing accuracy. So, it was possible to determinate the sample size and the number of iterations to calculate the parameters that minimize the execution time.\nIn addition, we determine what were the calculations that could be made in advance, that is, the calculations that could be made before the appointment. Thus, the time required for QoL assessment was reduced to five minutes and you can use it in a routine assessment.", "The concept of \"Quality of Life - QoL\" is used in different contexts and situations, reaching practically all sectors of society. The perception that an individual holds about his place in life, which depends upon his culture and values, defines this individual's Quality of Life (QoL). When applied in a health context this is known as: Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. 
Today, HRQoL is a medical goal, being used in epidemiological studies, clinical trials, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3].\nPreliminary studies indicate that the implementation of patient HRQoL assessment in Portugal is hindered by several factors involving health institutions, health professionals and patients [4]. The reasons include: a lack of familiarity with relevant studies in this area; a lack of sensitivity; lack of time; reluctance to accept that the patient's perceptions of their own outcomes are as important as the physician's [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge into explicit knowledge; the inexistence of user-friendly computer-based applications; and the inexistence of health care service infrastructures that enable routine HRQoL assessment.\nThe purpose of this project is to allow the physician to use the patient's QoL measurements as clinical decision support elements. Timely knowledge of the patient's QoL-related elements is another factor that may, in certain circumstances, contribute to better decision making. On the other hand, systematic patient QoL data collection allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. In other words, in the presence of several therapeutic strategies this can help the physician by giving him clues about the patient's future QoL according to the applied medical acts.\nIn this paper we intend to demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. 
We analyze this problem and show the results obtained with a platform developed around a self-evaluation questionnaire that measures patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments applied to patients.", "Malignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing and their social impact is being recognized [1]. The global weight of oncologic disease is growing, given the economic and social costs involved in its prevention, treatment and rehabilitation [6].\nResearch methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating wide domains such as the psychological, social, economic and organizational [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed through reliable databases [7]. Assessing the presence of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2].\nThe time when therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently not informed of their diagnosis until after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1].\nIn fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease. 
Furthermore, evidence shows that global patient QoL optimization can lead to a higher survival rate and a higher quality of life [1].\nPromoting the integration of QoL assessment in clinical practice can result in the optimization of infrastructures and methods capable of improving patients' QoL [8]. A validated, safe and scientifically based measuring instrument must be made available in a simple format, understood both by the patient and the physician, and completable in less than 10 minutes [9].\nAlthough a subjective concept, HRQoL is quantified objectively and does not merely represent the absence of disease [10]. The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables which, as a whole, define well-being [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11].\nTable 1 schematically represents the main HRQoL dimensions and items proposed by the WHO [12]:\nDimensions and items for HRQoL assessment", "Preliminary studies on oncologic patients conclude that the use of adequate software for HRQoL assessment, data collection and processing allows us to obtain self-answered questionnaires from patients, automatic scoring of these questionnaires, the creation of a database and the statistical analysis of the results, enabling routine HRQoL clinical assessment [13].\nMoreover, the graphical representation of the results enables a fast patient HRQoL assessment by the physician, and this evaluation becomes a diagnostic instrument to be used in routine clinical practice [13].\nHRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis. 
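The automatic scoring of the questionnaires mentioned above can be sketched with the standard EORTC linear transformation (items answered on a 1-4 scale, so the item range is 3). The item groupings below are illustrative placeholders, not the full QLQ-C30 scoring key.

```python
# Sketch of automatic questionnaire scoring, following the standard EORTC
# linear transformation. The item groupings are illustrative only.
def raw_score(items):
    """Mean of the item answers (each on a 1-4 scale)."""
    return sum(items) / len(items)

def functional_score(items, item_range=3):
    """Functional scales: higher score = better functioning (0-100)."""
    return (1 - (raw_score(items) - 1) / item_range) * 100

def symptom_score(items, item_range=3):
    """Symptom scales: higher score = worse symptoms (0-100)."""
    return (raw_score(items) - 1) / item_range * 100

physical = functional_score([1, 1, 2, 1, 1])  # e.g. five physical-functioning items
fatigue = symptom_score([2, 3, 2])            # e.g. three fatigue items
```

For the example answers this yields a physical-functioning score of about 93.3 and a fatigue score of about 44.4, both on the 0-100 scale the EORTC manuals use.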
Then, the selection of a measuring instrument with good psychometric characteristics, easy to administer and to score, that does not increase the appointment time, and with a multidimensional character is most important. It must be answered and scored before appointments. The results should remain confidential and anonymous and, when graphically represented, should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnostic instrument that identifies the patient's problems, highlights signs and symptoms that could otherwise go unnoticed, improves physician-patient communication and assists therapeutic decisions; in other words, it renders appointments easier. By analogy, the physician can evaluate the evolution of his patient's state by comparing two or more assessments obtained in different periods [15].\nHowever, routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of the activities necessary for the process, define the knowledge management system which supports the clinical decision aid system based on HRQoL assessment.", "A software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses.\nIn order to assess the impact of the application on the given answers we randomly selected patients from the otorhinolaryngology service of Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days in May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation, yielding a sample of 54 individuals (Table 2). 
These patients answered the same questionnaire twice, once on paper - the traditional model - and once on the computer using the software developed for that purpose, with a minimum gap of 40 minutes between answers. Half of the patients answered first on paper and the other half answered first on the computer. In both cases the answering time was measured and the patient's preference between paper and computer was registered. Information regarding the patient's affinity with computer use was also registered.\nPatients demographics\nIn order to understand whether the computer-based environment influenced the answers, we analyzed the values obtained for each answer at both assessment moments, using a collection of statistical models and tests. Answers obtained on paper and through the computer-based platform were matched. To understand whether the computer-based platform influenced the patients' answers we hypothesized that the distributions of each variable under study were identical. We first tested the entire set of answers and then two subsets, separating patients who answered first on paper from patients who answered first on the computer.\nIn the validation process two standardized questionnaires were used, both from the EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive). 
The second is a specific questionnaire for head and neck oncology patients, with thirty-five questions.\nThe two statistical hypotheses for a bilateral test in each situation were written as:\nHypothesis H0: F(X0) = F(X1); Hypothesis H1: F(X0) ≠ F(X1).\nWe used the Wilcoxon test, the most appropriate when the dependent variable is measured on an ordinal scale [16]. In both questionnaires (QLQ-C30 and QLQ-H&N35) adopted to evaluate QoL, the test results did not allow us to conclude that there were significant differences between distributions, for the two samples and the three mentioned situations.\nA high significance level was always attained, independently of the global or partial analysis of the sample (divided between those who answered first on paper and those who answered first on the computer), so the hypothesis of no significant difference between answers was accepted. We can thus state that use of the software does not influence patients' answers.\nAfter validating the platform for obtaining patient self-assessments with standardized QoL measuring instruments, we adopted and adapted the mathematical Rasch model to make the QoL measure usable in routine appointments.\nThe Rasch model estimates the question difficulty level and the person ability level with an iterative process. This process takes a long time and is not compatible with a routine appointment. Thus, it was necessary to understand how we could make the process faster while maintaining accuracy in the estimated parameters.\nWe analyzed the running time while varying the number of iterations and the sample size, without losing accuracy. 
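A minimal sketch of the kind of iterative estimation whose running time we profiled: a plain alternating (JMLE-style) gradient update for the dichotomous Rasch model. The response matrix, learning rate and iteration count are invented for illustration; they are not the platform's implementation.

```python
import math

def rasch_estimate(responses, iters=200, lr=0.25):
    """Alternately update person abilities (theta) and item difficulties
    (beta) by gradient steps on the Rasch likelihood. Illustrative only."""
    n_persons, n_items = len(responses), len(responses[0])
    theta = [0.0] * n_persons
    beta = [0.0] * n_items
    prob = lambda t, b: 1.0 / (1.0 + math.exp(b - t))  # P(correct answer)
    for _ in range(iters):
        for i in range(n_persons):  # ability updates
            theta[i] += lr * sum(responses[i][j] - prob(theta[i], beta[j])
                                 for j in range(n_items))
        for j in range(n_items):    # difficulty updates
            beta[j] += lr * sum(prob(theta[i], beta[j]) - responses[i][j]
                                for i in range(n_persons))
        mean_b = sum(beta) / n_items
        beta = [b - mean_b for b in beta]  # anchor the difficulty scale at 0
    return theta, beta

# Invented 0/1 response matrix: 5 patients x 4 questions.
data = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 1, 0], [0, 0, 1, 1]]
theta, beta = rasch_estimate(data)
```

The nested loops over persons, items and iterations make the cost of each run explicit, which is exactly the trade-off between iteration count, sample size and running time discussed above.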
It was thus possible to determine the sample size and the number of iterations that minimize the execution time of the parameter calculation.\nIn addition, we determined which calculations could be made in advance, that is, before the appointment. Thus, the time required for QoL assessment was reduced to five minutes, making it usable in a routine appointment.", " Friendly software design It cannot be denied that the health domain is extremely sensitive, and every aspect that interferes with traditional processes has a magnified impact. Thus, we started this project by assessing the influence that a technological environment would have on the patient's behavior.\nKnowledge management systems can and should be used to optimize certain procedures, but the type of organization into which they are introduced must be kept in mind. Dimensions and items for a model of knowledge management were presented in Table 1.\nThe purpose of this project involves the development of an informational platform that does not interfere with patients' answers and that can be applied by health professionals. The software should run in a browser on the health unit's intranet, or even on the internet.\nThe main requirement in the creation of this software was to build an interface as close to a traditional paper form as possible. Guided by keywords like usability, accessibility and confidentiality, the intention was to build a simple, intuitive interface, where an answer could be corrected in a clear, objective way, where the patient could clearly understand the confidentiality of his answers, and which is accessible to all types of patients. Blind, illiterate and physically challenged individuals are common among the frequent oncology service patients of the IPO in Oporto. 
Sound and touch screens are presently the two interface solutions in use, but we are still investigating other communication devices.\nNext, we present two figures. Figure 1 shows a view of the QoL screen and Figure 2 shows a view of the patient's screen when answering the questionnaire.\nView of the QoL screen.\nView of the patient when answering the questionnaire.\n Computer versus paper support The following graphics (Figures 3 and 4) show, for each question, the percentage of equal and different answers given by patients answering on the computer compared with the answers given on paper.\nQLQ-C30 Answers Comparison.\nQLQ-HN35 Answers Comparison.\nWe thus concluded that the answers given by patients on paper and on the computer-based platform are generally the same. Answers q1, q2 and q22 of QLQ-C30 and answer h41 of QLQ-H&N35 show the highest number of different answers: a little over 40% in the case of h41 and under 40% in the other cases.\nIt is worth noting that the specific questionnaire (QLQ-H&N35) reveals a higher proportion of equal answers. Ideally, the answers should always be the same, but previous experiments (performed on paper) show that answers given by patients at two separate moments are sometimes different, and the percentage of such differences is close to the one we observed between the answers on paper and the answers on the computer. This confirms the results observed in the mentioned test, leading us to conclude that the differing answers are caused by other factors.\nThe Wilcoxon test results (Table 3 and Table 4) suggest not rejecting the null hypothesis; in fact, for all answers the significance level is greater than 0.05. 
So we cannot conclude that the answers differ between paper support and computer support.\nWilcoxon test results for QLQ-C30\nWilcoxon test results for QLQ-H&N 35\nThe non-rejection of the null hypothesis does not guarantee that the answers are similar, so we analyzed the correlation between both sets of answers and fitted a linear regression line (y = mx + b) to analyze the coefficient of the dependent variable (m) and the independent term (b).\nThe linear regression analysis allowed us to conclude, for all questions, that the coefficient of the dependent variable was near one (1) and the independent term was near zero (0). The worst values were obtained in question 16, where they were 1.038 and 0.017 respectively. The correlation coefficient was 0.853. We then performed a Student's t-test to verify whether the coefficient of the dependent variable could be zero in the population. The test allowed us to reject that null hypothesis. In summary, it can be concluded that the answers are similar on both supports, i.e., the platform does not influence the patient's answers.\nAdditionally, patients were asked about their impression of the platform in comparison with paper. We asked about their level as computer users in four categories (no contact, a little contact, some contact and substantial contact) and which they preferred: paper or computer. The results are shown in Figure 5.\nPatient's preferences by patient utilization level.\nIt is clear that, except for the patients who never use a computer, the preference goes to answering on the computer platform. 
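The agreement analysis (slope near 1, intercept near 0, slope significantly different from zero) can be sketched like this, again with invented paired answers; `scipy.stats.linregress` returns the slope, intercept, correlation coefficient and the p-value of the slope's t-test in one call.

```python
# Sketch of the paper-vs-computer agreement check (illustrative data).
# A slope near 1 and an intercept near 0, with the slope significantly
# different from zero, indicate the two supports give similar answers.
from scipy.stats import linregress

paper = [1, 2, 3, 4, 2, 3, 1, 4, 2, 3]     # answers on the paper form
computer = [1, 2, 3, 4, 3, 3, 1, 4, 2, 2]  # same patients on the platform

fit = linregress(paper, computer)
print(f"m = {fit.slope:.3f}, b = {fit.intercept:.3f}, r = {fit.rvalue:.3f}")
print(f"p-value for H0: m = 0 -> {fit.pvalue:.4f}")
```

A slope close to 1 with a tiny p-value reproduces the pattern reported for the real data (m ≈ 1.038, b ≈ 0.017, r ≈ 0.853 in the worst question).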
The relationship between the level of computer use and the preference for answering in this way should be emphasized.\n Data analysis After the data registration stage, concerning the patient's QoL, it is important to forward this information to the physician in a clear and objective way, to enable improved decision making. 
The following stage was the identification of clinical variables and the development of the output information, obtained from physicians' contributions.\nMeasures verified in clinical analysis differentiate patients from each other, but we understand that the QoL measure should also be considered in patient characterization.\nWe used the Rasch model [17] to analyze the patients' answers. An important feature of the Rasch model is that the raw score is a sufficient statistic, allowing consistent estimation of item parameters without reference to the distribution of the latent variable in the population. This feature allows each answer from each patient to be analyzed individually, without concern for the other answers or the population distribution.\nFigure 6 shows a graphic with a patient's answers: signaled in yellow are the answers below what is expected for this type of patient, and signaled in blue are those above the expected. In the right column the most critical answers are highlighted, to make the physician's reading easier and quicker.\nGraphic with patient's answers.\nThe main advantage of using this platform is the quick analysis by the physician of the patient's clinical problems. In fact, the physician takes note of the patient's problems before observing the patient. This information facilitates and improves the conduct of the appointment. 
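The flagging idea behind the answer graphic can be sketched as follows, for the dichotomous case. The item names, difficulty values, patient ability and threshold are all illustrative assumptions, not the platform's actual parameters.

```python
import math

def expected_prob(theta, beta):
    """Rasch probability of a positive answer for ability theta, difficulty beta."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def flag_answers(theta, items, answers, threshold=0.25):
    """Flag answers far from the Rasch expectation for this patient.
    'below expected' answers are the ones the physician should look at first."""
    flags = []
    for (name, beta), x in zip(items, answers):
        residual = x - expected_prob(theta, beta)
        if residual < -threshold:
            flags.append((name, "below expected"))
        elif residual > threshold:
            flags.append((name, "above expected"))
    return flags

# Hypothetical patient: ability 0.5, four items with assumed difficulties.
items = [("pain", -1.0), ("swallowing", 0.0), ("speech", 0.5), ("social", 1.5)]
flags = flag_answers(0.5, items, [0, 1, 1, 1])
```

Here the "pain" answer falls below the model's expectation for a patient of this ability, so it would be highlighted for the physician before the appointment.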
Without using the platform, it is not possible to identify some signs and symptoms.\n Clinical decision support system Why do we consider this system a clinical decision support system?\nIn addition to the information about QoL, the platform can register the patient's clinical information and socio-demographic characteristics, allowing patients to be classified and grouped according to these characteristics.\nThe collected data can help the physician on two levels:\n• Identify the problems that the patient has at the moment;\n• Assist the physician in decision making by providing a forecast of the patient's future quality of life according to the treatments prescribed.\nIt is important that the patient and the physician know the effect that a treatment will have on his QoL within 3, 4 or 5 years. The decision about choosing a treatment protocol should include the patient's QoL, not necessarily during the treatment but especially during the years following it. The objective measurement of the patient's QoL allows it, in this context, to be considered a clinical data contribution to the characterization of the patient.", "It cannot be denied that the health domain is extremely sensitive, and every aspect that interferes with traditional processes has a magnified impact. Thus, we started this project by assessing the influence that a technological environment would have on the patient's behavior.\nKnowledge management systems can and should be used to optimize certain procedures, but the type of organization into which they are introduced must be kept in mind. Dimensions and items for a model of knowledge management were presented in Table 1.\nThe purpose of this project involves the development of an informational platform that does not interfere with patients' answers and that can be applied by health professionals. The software should run in a browser on the health unit's intranet, or even on the internet.\nThe main requirement in the creation of this software was to build an interface as close to a traditional paper form as possible. Guided by keywords like usability, accessibility and confidentiality, the intention was to build a simple, intuitive interface, where an answer could be corrected in a clear, objective way, where the patient could clearly understand the confidentiality of his answers, and which is accessible to all types of patients. Blind, illiterate and physically challenged individuals are common among the frequent oncology service patients of the IPO in Oporto. Sound and touch screens are presently the two interface solutions in use, but we are still investigating other communication devices.\nNext, we present two figures. 
Figure 1 shows a view of the QoL screen and Figure 2 shows a view of the patient's screen when answering the questionnaire.\nView of the QoL screen.\nView of the patient when answering the questionnaire.", "The following graphics (Figures 3 and 4) show, for each question, the percentage of equal and different answers given by patients answering on the computer compared with the answers given on paper.\nQLQ-C30 Answers Comparison.\nQLQ-HN35 Answers Comparison.\nWe thus concluded that the answers given by patients on paper and on the computer-based platform are generally the same. Answers q1, q2 and q22 of QLQ-C30 and answer h41 of QLQ-H&N35 show the highest number of different answers: a little over 40% in the case of h41 and under 40% in the other cases.\nIt is worth noting that the specific questionnaire (QLQ-H&N35) reveals a higher proportion of equal answers. Ideally, the answers should always be the same, but previous experiments (performed on paper) show that answers given by patients at two separate moments are sometimes different, and the percentage of such differences is close to the one we observed between the answers on paper and the answers on the computer. This confirms the results observed in the mentioned test, leading us to conclude that the differing answers are caused by other factors.\nThe Wilcoxon test results (Table 3 and Table 4) suggest not rejecting the null hypothesis; in fact, for all answers the significance level is greater than 0.05. 
So we cannot conclude that the answers differ between paper support and computer support.\nWilcoxon test results for QLQ-C30\nWilcoxon test results for QLQ-H&N 35\nThe non-rejection of the null hypothesis does not guarantee that the answers are similar, so we analyzed the correlation between both sets of answers and fitted a linear regression line (y = mx + b) to analyze the coefficient of the dependent variable (m) and the independent term (b).\nThe linear regression analysis allowed us to conclude, for all questions, that the coefficient of the dependent variable was near one (1) and the independent term was near zero (0). The worst values were obtained in question 16, where they were 1.038 and 0.017 respectively. The correlation coefficient was 0.853. We then performed a Student's t-test to verify whether the coefficient of the dependent variable could be zero in the population. The test allowed us to reject that null hypothesis. In summary, it can be concluded that the answers are similar on both supports, i.e., the platform does not influence the patient's answers.\nAdditionally, patients were asked about their impression of the platform in comparison with paper. We asked about their level as computer users in four categories (no contact, a little contact, some contact and substantial contact) and which they preferred: paper or computer. The results are shown in Figure 5.\nPatient's preferences by patient utilization level.\nIt is clear that, except for the patients who never use a computer, the preference goes to answering on the computer platform. The relationship between the level of computer use and the preference for answering in this way should be emphasized.", "After the data registration stage, concerning the patient's QoL, it is important to forward this information to the physician in a clear and objective way, to enable improved decision making. 
The following stage was the identification of clinical variables and the development of the output information, obtained from physicians' contributions.\nMeasures verified in clinical analysis differentiate patients from each other, but we understand that the QoL measure should also be considered in patient characterization.\nWe used the Rasch model [17] to analyze the patients' answers. An important feature of the Rasch model is that the raw score is a sufficient statistic, allowing consistent estimation of item parameters without reference to the distribution of the latent variable in the population. This feature allows each answer from each patient to be analyzed individually, without concern for the other answers or the population distribution.\nFigure 6 shows a graphic with a patient's answers: signaled in yellow are the answers below what is expected for this type of patient, and signaled in blue are those above the expected. In the right column the most critical answers are highlighted, to make the physician's reading easier and quicker.\nGraphic with patient's answers.\nThe main advantage of using this platform is the quick analysis by the physician of the patient's clinical problems. In fact, the physician takes note of the patient's problems before observing the patient. This information facilitates and improves the conduct of the appointment. 
Without the platform, some signs and symptoms could not be identified.", "Why do we consider this system a clinical decision support system?\nIn addition to the information about QoL, the platform can register the patient's clinical information and socio-demographic characteristics, making it possible to classify and group patients according to these characteristics.\nThe collected data can help the physician on two levels:\n• Identify the problems that the patient has at the moment;\n• Assist the physician in decision-making by providing a forecast of the patient's future quality of life according to the treatments prescribed.\nIt is important that the patient and the physician know the effect that a treatment will have on his QoL within 3, 4 or 5 years. The decision about choosing a treatment protocol should include the patient's QoL, not necessarily during the treatment but especially during the years following it. The objective measurement of the patient's QoL allows it, in this context, to be considered as clinical data contributing to the characterization of the patient.", "The authors declare that they have no competing interests.", "JG conceived and developed the decision support system, participated in the acquisition, analysis and interpretation of data, performed the statistical analysis, and drafted the manuscript. ÁR participated in the design and coordination of the project, helped to draft the manuscript, and revised it critically for important intellectual content. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Scope", "Evaluation of HRQoL in oncologic patients", "KMS in routine HRQoL assessment", "Methodology", "Results and discussion", "Friendly software design", "Computer versus paper support", "Data analysis", "Clinical decision support system", "Conclusions", "Competing interests", "Authors' contributions" ]
[ " Scope The concept of \"Quality of Life - QoL\" is used in different contexts and situations, reaching practically all sectors of society. The perception that an individual holds about his place in life, which depends upon his culture and values, defines this individual's Quality of Life (QoL). When applied in a health context this is known as: Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. Today, HRQoL is a medical goal, being used in epidemiological studies, clinical essays, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3].\nPreliminary studies indicate that the implementation of a patient HRQoL assessment in Portugal is challenged and questioned for several factors involving health institutions, health professionals and patients [4]. The reasons include: a lack of familiarity with relevant studies in this area; the absence of sensitivity; lack of time; reluctance in accepting that the patient's perceptions regarding their own outcomes are as important as the physicians [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge in explicit knowledge; inexistence of friendly computer-based applications; inexistence of health care service infrastructures that enable a routine HRQoL assessment.\nThe purpose of this project is to allow the physician to use patient's QoL measurements as clinical decision support elements. A timely knowledge of the patient's QoL-related elements constitutes another factor that may, in certain circumstances, contribute to a better decision making. On the other hand, a systematic patient QoL data collection allows the standardization of this information and to infer therapeutic strategies for a specific patient. 
In other words, in the presence of several therapeutic strategies this can help the physician by giving him clues about the patient's future QoL according to the applied medical acts.\nIn this paper we intend to demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. We analyze this problem and show the results obtained with a platform developed for the self-evaluation questionnaire that measures patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments applied to the patients.\nThe concept of \"Quality of Life - QoL\" is used in different contexts and situations, reaching practically all sectors of society. The perception that an individual holds about his place in life, which depends upon his culture and values, defines this individual's Quality of Life (QoL). When applied in a health context this is known as: Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. Today, HRQoL is a medical goal, being used in epidemiological studies, clinical essays, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3].\nPreliminary studies indicate that the implementation of a patient HRQoL assessment in Portugal is challenged and questioned for several factors involving health institutions, health professionals and patients [4]. 
The reasons include: a lack of familiarity with relevant studies in this area; the absence of sensitivity; lack of time; reluctance in accepting that the patient's perceptions regarding their own outcomes are as important as the physician's [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge into explicit knowledge; the inexistence of friendly computer-based applications; and the inexistence of health care service infrastructures that enable a routine HRQoL assessment.\nThe purpose of this project is to allow the physician to use the patient's QoL measurements as clinical decision support elements. A timely knowledge of the patient's QoL-related elements constitutes another factor that may, in certain circumstances, contribute to better decision making. On the other hand, a systematic patient QoL data collection allows the standardization of this information and the inference of therapeutic strategies for a specific patient. In other words, in the presence of several therapeutic strategies this can help the physician by giving him clues about the patient's future QoL according to the applied medical acts.\nIn this paper we intend to demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. We analyze this problem and show the results obtained with a platform developed for the self-evaluation questionnaire that measures patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments applied to the patients.\n Evaluation of HRQoL in oncologic patients Malignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing and their social impact is being recognized [1]. 
The global weight of oncologic disease is growing, given the economic and social costs involved in the prevention, treatment and rehabilitation of it [6].\nResearch methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating wide domains such as psychological, social, economic and organizational domains [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed through safe databases [7]. Assessing the implementation of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2].\nThe time where therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently not informed of their diagnosis after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1].\nIn fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease. Furthermore, evidence shows that a global patient QoL optimization can lead to a higher survival rate and to a higher quality of life [1].\nPromoting the integration of QoL assessment in clinical practice can result in the optimization of infrastructures and methods capable of improving patients QoL [8]. A validated, safe and scientifically-based measuring instrument must be made available in a simple format, understood both by the patient and the physician, for being completed in less than 10 minutes [9].\nAlthough being a subjective concept, HRQoL is quantified objectively and does not merely represent the inexistence of disease [10]. 
The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables, which as a whole, define welfare [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11].\nTable 1 represents schematically the mains HRQoL dimensions and items, proposed by the WHO [12]:\nDimensions and items for HRQoL assessment\nMalignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing and their social impact is being recognized [1]. The global weight of oncologic disease is growing, given the economic and social costs involved in the prevention, treatment and rehabilitation of it [6].\nResearch methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating wide domains such as psychological, social, economic and organizational domains [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed through safe databases [7]. Assessing the implementation of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2].\nThe time where therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently not informed of their diagnosis after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1].\nIn fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease. 
Furthermore, evidence shows that a global patient QoL optimization can lead to a higher survival rate and to a higher quality of life [1].\nPromoting the integration of QoL assessment in clinical practice can result in the optimization of infrastructures and methods capable of improving patients QoL [8]. A validated, safe and scientifically-based measuring instrument must be made available in a simple format, understood both by the patient and the physician, for being completed in less than 10 minutes [9].\nAlthough being a subjective concept, HRQoL is quantified objectively and does not merely represent the inexistence of disease [10]. The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables, which as a whole, define welfare [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11].\nTable 1 represents schematically the mains HRQoL dimensions and items, proposed by the WHO [12]:\nDimensions and items for HRQoL assessment\n KMS in routine HRQoL assessment Preliminary studies on oncologic patients conclude that the use of an adequate software for the HRQoL assessment, data collection and processing, allows us to obtain self-answered questionnaires from patients, an automatic quotation of these questionnaires, the creation of a database and the statistic analysis of the results, performing a routine HRQoL clinical assessment [13].\nMoreover, the graphical representation of results enables a fast patient HRQoL assessment by the physician, and this evaluation becomes a diagnosis instrument to be used in routine clinical practice [13].\nHRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis. 
Then, the selection of a measuring instrument with good psychometric characteristics, easy to administrate and to quantify, that doesn't increase the appointment time and with a multidimensional character is most important. It must be answered and quoted before appointments. The results should remain confidential and anonymous, and when graphically represented should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnosis instrument that identifies patient's problems, highlights certain signs and symptoms that could otherwise go unnoticed, improves the physician-patient communication and assists therapeutic decisions; in other words, it renders the appointments easier. By analogy, the physician can evaluate the evolution of his patient's state comparing two or more assessments obtained in different periods [15].\nHowever, a routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of necessary activities for the process, define the knowledge management system which supports the clinical decision aid system, based on the HRQoL assessment.\nPreliminary studies on oncologic patients conclude that the use of an adequate software for the HRQoL assessment, data collection and processing, allows us to obtain self-answered questionnaires from patients, an automatic quotation of these questionnaires, the creation of a database and the statistic analysis of the results, performing a routine HRQoL clinical assessment [13].\nMoreover, the graphical representation of results enables a fast patient HRQoL assessment by the physician, and this evaluation becomes a diagnosis instrument to be used in routine clinical practice [13].\nHRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis. 
Then, the selection of a measuring instrument with good psychometric characteristics, easy to administrate and to quantify, that doesn't increase the appointment time and with a multidimensional character is most important. It must be answered and quoted before appointments. The results should remain confidential and anonymous, and when graphically represented should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnosis instrument that identifies patient's problems, highlights certain signs and symptoms that could otherwise go unnoticed, improves the physician-patient communication and assists therapeutic decisions; in other words, it renders the appointments easier. By analogy, the physician can evaluate the evolution of his patient's state comparing two or more assessments obtained in different periods [15].\nHowever, a routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of necessary activities for the process, define the knowledge management system which supports the clinical decision aid system, based on the HRQoL assessment.\n Methodology A software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses.\nIn order to assess the impact created by the application in the given answers we randomly selected patients from the otorhinolaryngology service in Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation and then we obtained a sample of 54 individuals (Table 2). 
These patients answered the same questionnaire twice, one in paper form - the traditional model - and the other on the computer using the software developed for that purpose, with 40 minutes temporal gap. Half of the patients answered first on the paper form and the other half answered first on the computer platform, the minimum time between answers was 40 minutes. In both cases the answer time was measured and the patient's preference between the paper and the computer was registered. Information regarding patient's affinity with computer use was also registered.\nPatients demographics\nIn order to understand if the computer-based environment influenced or not the answers we analyzed the obtained values for each given answer, in both of the assessment moments, using a collection of statistical models and tests. Answers obtained in paper support and through the computer-based platform were matched. To understand if the computer-based platform did not influence the patients answers we hypothesized that distributions, for each variable in study, were identical. We first tested the entire set of answers and then two subsets, which divided patients that answered firstly on paper and patients that answered firstly on the computer.\nIn the validation process two standardized questionnaires were used, both from EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first one is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive). 
The second is a specific questionnaire for Head and Neck oncology patients, with thirty-five questions.\nThe two statistical hypotheses for a bilateral test in each situation were written as:\n\nHypothesis H0: F(X0) = F(X1);\nHypothesis H1: F(X0) ≠ F(X1).\n\nWe used the Wilcoxon test, the most appropriate when the dependent variable is measured on an ordinal scale [16]. For both questionnaires (QLQ-C30 and QLQ-H&N35) adopted to evaluate QoL, the test results showed no significant differences between distributions, for the two samples and the three situations mentioned.\nA high significance level was always attained, whether the sample was analyzed globally or partially (divided between those who first answered on paper and those who first answered on the computer), so the hypothesis of no significant difference between answers was accepted. We can thus state that the software does not influence the patients' answers.\nAfter validating the platform for obtaining a patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the Rasch mathematical model to make the use of the QoL measure possible in routine appointments.\nThe Rasch model estimates the question difficulty level and the person ability level with an iterative process. This process takes a long time and is not compatible with a routine appointment. Thus, it was necessary to understand how we could make the process faster while maintaining the accuracy of the estimated parameters.\nWe analyzed the running time by varying the number of iterations and the sample size without losing accuracy. 
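The iterative process mentioned above can be sketched as an alternating estimation loop. This is a generic joint maximum-likelihood style sketch over a hypothetical 0/1 response matrix, not the platform's actual implementation; the fixed iteration budget stands in for the tuned iteration count, and running time grows with both that budget and the sample size.

```python
# Sketch of the iterative estimation the Rasch model requires: person
# abilities and item difficulties are updated in alternation until the
# iteration budget is spent. Hypothetical data and a generic scheme.
import math

responses = [          # rows = persons, columns = items (hypothetical)
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 1],
]
n_persons, n_items = len(responses), len(responses[0])
theta = [0.0] * n_persons    # person ability estimates
beta = [0.0] * n_items       # item difficulty estimates

def prob(t, d):
    """Rasch probability of a positive answer."""
    return 1.0 / (1.0 + math.exp(-(t - d)))

for _ in range(50):          # fixed iteration budget (tuned in the study)
    # Newton-style update of each ability, difficulties held fixed ...
    for i in range(n_persons):
        ps = [prob(theta[i], beta[j]) for j in range(n_items)]
        info = sum(p * (1 - p) for p in ps)
        theta[i] += (sum(responses[i]) - sum(ps)) / max(info, 1e-9)
    # ... then of each difficulty, abilities held fixed.
    for j in range(n_items):
        ps = [prob(theta[i], beta[j]) for i in range(n_persons)]
        info = sum(p * (1 - p) for p in ps)
        col = sum(row[j] for row in responses)
        beta[j] -= (col - sum(ps)) / max(info, 1e-9)
    # Anchor the difficulty scale at mean zero to fix the origin.
    mean_b = sum(beta) / n_items
    beta = [d - mean_b for d in beta]

print("difficulties:", [round(d, 2) for d in beta])
print("abilities:  ", [round(t, 2) for t in theta])
```

Precomputing the item difficulties from historical data, as the study describes, removes the inner item loop from the appointment-time calculation, leaving only the (much cheaper) ability update.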
So, it was possible to determinate the sample size and the number of iterations to calculate the parameters that minimize the execution time.\nIn addition, we determine what were the calculations that could be made in advance, that is, the calculations that could be made before the appointment. Thus, the time required for QoL assessment was reduced to five minutes and you can use it in a routine assessment.\nA software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses.\nIn order to assess the impact created by the application in the given answers we randomly selected patients from the otorhinolaryngology service in Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation and then we obtained a sample of 54 individuals (Table 2). These patients answered the same questionnaire twice, one in paper form - the traditional model - and the other on the computer using the software developed for that purpose, with 40 minutes temporal gap. Half of the patients answered first on the paper form and the other half answered first on the computer platform, the minimum time between answers was 40 minutes. In both cases the answer time was measured and the patient's preference between the paper and the computer was registered. Information regarding patient's affinity with computer use was also registered.\nPatients demographics\nIn order to understand if the computer-based environment influenced or not the answers we analyzed the obtained values for each given answer, in both of the assessment moments, using a collection of statistical models and tests. Answers obtained in paper support and through the computer-based platform were matched. 
To understand whether the computer-based platform influenced the patients' answers, we hypothesized that the distributions, for each variable in study, were identical. We first tested the entire set of answers and then two subsets, dividing patients who answered first on paper from patients who answered first on the computer.\nIn the validation process two standardized questionnaires were used, both from the EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive). The second is a specific questionnaire for Head and Neck oncology patients, with thirty-five questions.\nThe two statistical hypotheses for a bilateral test in each situation were written as:\n\nHypothesis H0: F(X0) = F(X1);\nHypothesis H1: F(X0) ≠ F(X1).\n\nWe used the Wilcoxon test, the most appropriate when the dependent variable is measured on an ordinal scale [16]. For both questionnaires (QLQ-C30 and QLQ-H&N35) adopted to evaluate QoL, the test results showed no significant differences between distributions, for the two samples and the three situations mentioned.\nA high significance level was always attained, whether the sample was analyzed globally or partially (divided between those who first answered on paper and those who first answered on the computer), so the hypothesis of no significant difference between answers was accepted. 
We can thus state that the software use does not influence patient's answers.\nAfter the validation of the platform to obtain a patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the mathematic Rash model to make possible the use of QoL measure in the routine appointments.\nThe Rasch model estimates the question difficulty level and the person ability level with an iterative process. This process takes a lot of time and it is not compatible with a routine appointment. Thus, it was necessary to understand how we could make the process faster while maintaining accuracy in the values obtained for the estimated parameters.\nWe analyze the running time by varying the number of iterations and the sample size without losing accuracy. So, it was possible to determinate the sample size and the number of iterations to calculate the parameters that minimize the execution time.\nIn addition, we determine what were the calculations that could be made in advance, that is, the calculations that could be made before the appointment. Thus, the time required for QoL assessment was reduced to five minutes and you can use it in a routine assessment.", "The concept of \"Quality of Life - QoL\" is used in different contexts and situations, reaching practically all sectors of society. The perception that an individual holds about his place in life, which depends upon his culture and values, defines this individual's Quality of Life (QoL). When applied in a health context this is known as: Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. 
Today, HRQoL is a medical goal, being used in epidemiological studies, clinical essays, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3].\nPreliminary studies indicate that the implementation of a patient HRQoL assessment in Portugal is challenged and questioned for several factors involving health institutions, health professionals and patients [4]. The reasons include: a lack of familiarity with relevant studies in this area; the absence of sensitivity; lack of time; reluctance in accepting that the patient's perceptions regarding their own outcomes are as important as the physician's [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge into explicit knowledge; the inexistence of friendly computer-based applications; and the inexistence of health care service infrastructures that enable a routine HRQoL assessment.\nThe purpose of this project is to allow the physician to use the patient's QoL measurements as clinical decision support elements. A timely knowledge of the patient's QoL-related elements constitutes another factor that may, in certain circumstances, contribute to better decision making. On the other hand, a systematic patient QoL data collection allows the standardization of this information and the inference of therapeutic strategies for a specific patient. In other words, in the presence of several therapeutic strategies this can help the physician by giving him clues about the patient's future QoL according to the applied medical acts.\nIn this paper we intend to demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. 
We analyze this problem and show the results obtained with a platform developed for the self-evaluation questionnaire that measures patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments applied to the patients.", "Malignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing and their social impact is being recognized [1]. The global weight of oncologic disease is growing, given the economic and social costs involved in its prevention, treatment and rehabilitation [6].\nResearch methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating wide domains such as the psychological, social, economic and organizational domains [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed through safe databases [7]. Assessing the implementation of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2].\nThe time when therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently not informed of their diagnosis after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1].\nIn fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease. 
Furthermore, evidence shows that a global patient QoL optimization can lead to a higher survival rate and to a higher quality of life [1].\nPromoting the integration of QoL assessment in clinical practice can result in the optimization of infrastructures and methods capable of improving patients QoL [8]. A validated, safe and scientifically-based measuring instrument must be made available in a simple format, understood both by the patient and the physician, for being completed in less than 10 minutes [9].\nAlthough being a subjective concept, HRQoL is quantified objectively and does not merely represent the inexistence of disease [10]. The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables, which as a whole, define welfare [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11].\nTable 1 represents schematically the mains HRQoL dimensions and items, proposed by the WHO [12]:\nDimensions and items for HRQoL assessment", "Preliminary studies on oncologic patients conclude that the use of an adequate software for the HRQoL assessment, data collection and processing, allows us to obtain self-answered questionnaires from patients, an automatic quotation of these questionnaires, the creation of a database and the statistic analysis of the results, performing a routine HRQoL clinical assessment [13].\nMoreover, the graphical representation of results enables a fast patient HRQoL assessment by the physician, and this evaluation becomes a diagnosis instrument to be used in routine clinical practice [13].\nHRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis. 
Then, the selection of a measuring instrument with good psychometric characteristics, easy to administrate and to quantify, that doesn't increase the appointment time and with a multidimensional character is most important. It must be answered and quoted before appointments. The results should remain confidential and anonymous, and when graphically represented should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnosis instrument that identifies patient's problems, highlights certain signs and symptoms that could otherwise go unnoticed, improves the physician-patient communication and assists therapeutic decisions; in other words, it renders the appointments easier. By analogy, the physician can evaluate the evolution of his patient's state comparing two or more assessments obtained in different periods [15].\nHowever, a routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of necessary activities for the process, define the knowledge management system which supports the clinical decision aid system, based on the HRQoL assessment.", "A software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses.\nIn order to assess the impact created by the application in the given answers we randomly selected patients from the otorhinolaryngology service in Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation and then we obtained a sample of 54 individuals (Table 2). 
These patients answered the same questionnaire twice, once on paper (the traditional model) and once on the computer, using the software developed for that purpose. Half of the patients answered first on paper and the other half first on the computer platform, with a minimum of 40 minutes between the two answers. In both cases the answering time was measured and the patient's preference between paper and computer was recorded, as was information about the patient's familiarity with computers.
Patients demographics
To understand whether the computer-based environment influenced the answers, we analyzed the values obtained for each answer at both assessment moments using a collection of statistical models and tests. Answers obtained on paper and through the computer-based platform were matched. To test whether the platform influenced the patients' answers, we hypothesized that the distributions of each variable under study were identical. We first tested the entire set of answers and then two subsets, separating patients who answered first on paper from those who answered first on the computer.
In the validation process two standardized questionnaires were used, both from the EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first is a global questionnaire developed for all types of oncologic patients; it has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive).
The second is a specific questionnaire for head and neck oncology patients, with thirty-five questions.
The two statistical hypotheses for the bilateral test in each situation were:
H0: F(X0) = F(X1);
H1: F(X0) ≠ F(X1).
We used the Wilcoxon test, the most appropriate when the dependent variable is measured on an ordinal scale [16]. For both questionnaires adopted to evaluate QoL (QLQ-C30 and QLQ-H&N35), the test results did not reveal significant differences between the distributions, for the two samples and the three situations mentioned.
A high significance level was always attained, whether the sample was analyzed globally or split between those who answered first on paper and those who answered first on the computer, so the hypothesis of no significant difference between answers was accepted. We can thus state that the use of the software does not influence the patients' answers.
After validating the platform for patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the Rasch mathematical model to make the QoL measure usable in routine appointments.
The Rasch model estimates the difficulty level of each question and the ability level of each person through an iterative process. This process takes a long time and is not compatible with a routine appointment, so we had to find out how to make it faster while keeping the estimated parameters accurate.
We analyzed the running time while varying the number of iterations and the sample size, without losing accuracy.
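The iterative estimation the Rasch model requires can be sketched as a joint maximum-likelihood loop for the dichotomous case. This is our own minimal illustration: the function names, the step damping and the toy data are assumptions, and the platform's actual implementation (including its precomputation strategy and any polytomous extension) is not shown in the text.

```python
import math

def rasch_jmle(X, n_iter=50):
    """Joint (person, item) maximum-likelihood estimation for dichotomous
    Rasch responses X[person][item] in {0, 1}, via damped Newton steps."""
    n, m = len(X), len(X[0])
    theta = [0.0] * n   # person ability estimates
    beta = [0.0] * m    # item difficulty estimates

    def clamp(d):       # damp each Newton step for numerical stability
        return max(-1.0, min(1.0, d))

    for _ in range(n_iter):
        # P[i][j] = probability that person i endorses item j
        P = [[1.0 / (1.0 + math.exp(-(theta[i] - beta[j]))) for j in range(m)]
             for i in range(n)]
        for i in range(n):          # ability update: raw score vs expected score
            info = sum(p * (1 - p) for p in P[i])
            theta[i] += clamp((sum(X[i]) - sum(P[i])) / info)
        for j in range(m):          # difficulty update: expected vs observed endorsements
            info = sum(P[i][j] * (1 - P[i][j]) for i in range(n))
            observed = sum(X[i][j] for i in range(n))
            expected = sum(P[i][j] for i in range(n))
            beta[j] += clamp((expected - observed) / info)
        mean_b = sum(beta) / m      # identifiability: anchor mean difficulty at 0
        beta = [b - mean_b for b in beta]
    return theta, beta

# Toy response matrix: 4 patients x 3 questions (no perfect scores).
X = [[1, 1, 0],
     [1, 0, 0],
     [0, 1, 1],
     [1, 0, 1]]
theta, beta = rasch_jmle(X)
```

The running-time trade-off the text describes corresponds to `n_iter` and the number of rows of `X`: item difficulties can be estimated once in advance from a calibration sample, leaving only the (fast) ability update for the appointment itself.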
This made it possible to determine the sample size and the number of iterations that minimize the execution time when calculating the parameters.
In addition, we determined which calculations could be made in advance, that is, before the appointment. The time required for QoL assessment was thus reduced to five minutes, making it usable in routine assessment.", "Friendly software design
It cannot be denied that the health domain is extremely sensitive, and every aspect that interferes with traditional processes has an amplified impact. Thus, we started this project by assessing the influence that a technological environment would have on the patient's behavior.
Knowledge management systems can and should be used to optimize certain procedures, but the type of organization into which they are introduced must be kept in mind. Dimensions and items for a model of knowledge management were presented in Table 1.
This project involves the development of an informational platform that does not interfere with patients' answers and that can be applied by health professionals. The software should run in a browser on the health unit's intranet, or even on the internet.
The main requirement in creating this software was to build an interface as close to a traditional paper form as possible. With usability, accessibility and confidentiality as keywords, the intention was to build a simple, intuitive interface in which an answer can be corrected in a clear, objective way, in which the patient clearly understands the confidentiality of his answers, and which is accessible to all types of patients. Blind, illiterate and physically challenged individuals are frequent among the oncology patients of the IPO in Oporto.
Sound and touch screens are the two interface solutions currently in use, but we are still investigating other communication devices.
Figure 1 shows a view of the QoL screen and Figure 2 shows the patient's screen while answering the questionnaire.
View of the QoL screen.
View of the patient when answering the questionnaire.
Computer versus paper support
Figures 3 and 4 show, for each question, the percentage of equal and different answers given by patients on the computer compared with their answers on paper.
QLQ-C30 Answers Comparison.
QLQ-HN35 Answers Comparison.
We concluded that the answers given by patients on paper and on the computer-based platform are generally the same. Questions q1, q2 and q22 of the QLQ-C30 and question h41 of the QLQ-H&N35 show the highest numbers of different answers: a little over 40% for h41 and under 40% for the others.
It is worth noting that the specific questionnaire (QLQ-H&N35) shows a higher proportion of equal answers. Ideally, the answers would always be the same, but previous experiments (performed on paper) show that answers given by patients at two separate moments sometimes differ, and the percentage of differences is close to the one we observed between the answers on paper and the answers on the computer. This confirms the results of the test mentioned above, leading us to conclude that the differing answers are caused by other factors.
The Wilcoxon test results (Tables 3 and 4) suggest that the null hypothesis should not be rejected; for every answer the observed significance exceeds 0.05.
We therefore cannot conclude that the answers differ between paper support and computer support.
Wilcoxon test results for QLQ-C30
Wilcoxon test results for QLQ-H&N 35
Non-rejection of the null hypothesis does not guarantee that the answers are similar, so we also analyzed the correlation between the two sets of answers and fitted a linear regression line (y = mx + b) to examine the slope (m) and the intercept (b).
For all questions, the regression analysis showed a slope close to one and an intercept close to zero. The worst values were obtained for question 16, at 1.038 and 0.017 respectively; the correlation coefficient was 0.853. We then performed a Student's t-test to verify whether the slope could be zero in the population; the test rejected that null hypothesis. In summary, the answers are similar on both supports, i.e., the platform does not influence the patients' answers.
Additionally, patients were asked about their impression of the platform compared with paper. We asked about their level of computer use, in four categories (no contact, a little contact, some contact and substantial contact), and which support they preferred: paper or computer. The results are shown in Figure 5.
Patient's preferences by patient utilization level.
Except for patients who had never used a computer, the preference clearly goes to answering on the computer platform.
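The agreement analysis (slope, intercept and correlation of the paper-versus-computer regression) can be sketched in a few lines of pure Python; the answer vectors below are toy values for illustration, not the study's data.

```python
# Paired-answer agreement check: least-squares line y = m*x + b and the
# Pearson correlation r between paper answers (x) and computer answers (y).
# If the two supports agree, m is near 1, b is near 0 and r is near 1.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx                   # slope
    b = my - m * mx                 # intercept
    r = sxy / (sxx * syy) ** 0.5    # correlation coefficient
    return m, b, r

paper    = [1, 2, 2, 3, 4, 1, 3, 2]
computer = [1, 2, 2, 3, 4, 1, 3, 2]   # identical answers in this toy example
m, b, r = fit_line(paper, computer)
print(round(m, 3), round(b, 3), round(r, 3))  # prints: 1.0 0.0 1.0
```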
The relationship between the level of computer use and the preference for answering on the computer should be emphasized.
Data analysis
After the data registration stage, it is important to forward the patient's QoL information to the physician in a clear and objective way, to enable improved decision making.
The following stage was the identification of clinical variables and the development of the output information, obtained from the physicians' contributions.
The measures verified in clinical analysis differentiate patients from each other, but we believe the QoL measure should also be considered when characterizing patients.
We used the Rasch model [17] to analyze the patients' answers. An important feature of the Rasch model is that the raw score is a sufficient statistic, allowing consistent estimation of the item parameters without reference to the distribution of the latent variable in the population. This makes it possible to analyze each answer of each patient individually, without regard to the other answers or to the population distribution.
Figure 6 shows a graphic with a patient's answers: signaled in yellow are the answers below what is expected for this type of patient, and in blue those above it. The most critical answers are highlighted in the right column to make the physician's reading quicker and easier.
Graphic with patient's answers.
The main advantage of the platform is the quick overview it gives the physician of the patient's clinical problems: the physician is aware of the patient's problems before observing the patient. This information facilitates and improves the conduct of the appointment; without the platform, some signs and symptoms could not be identified.
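A minimal sketch of the flagging logic described for Figure 6, assuming an expected score per question and a tolerance; both are illustrative assumptions here (the platform presumably derives the expected values from the Rasch model, but its code is not shown).

```python
# Compare each answer with the score expected for this type of patient
# and flag deviations, as in the Figure 6 display.

def flag_answers(answers, expected, tol=0.5):
    """Return 'below', 'above' or 'as expected' for each question."""
    flags = {}
    for q, value in answers.items():
        diff = value - expected[q]
        if diff < -tol:
            flags[q] = "below"        # shown in yellow in Figure 6
        elif diff > tol:
            flags[q] = "above"        # shown in blue in Figure 6
        else:
            flags[q] = "as expected"
    return flags

answers  = {"q1": 1, "q2": 3, "q3": 2}          # toy answers
expected = {"q1": 2.1, "q2": 2.0, "q3": 2.2}    # toy expected scores
flags = flag_answers(answers, expected)
```

The "most critical answers" column of Figure 6 would then be the subset of questions flagged as below or above expectation.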
Clinical decision support system
Why do we consider this system a clinical decision support system?
In addition to QoL information, the platform can register the patient's clinical information and socio-demographic characteristics, allowing patients to be classified and grouped according to these characteristics.
The collected data can help the physician on two levels:
• identify the problems the patient has at the moment;
• assist decision making by providing a forecast of the patient's future quality of life according to the prescribed treatments.
It is important that patient and physician know the effect a treatment will have on QoL within 3, 4 or 5 years. The decision on a treatment protocol should take the patient's QoL into account, not necessarily during the treatment but especially during the years that follow it. The objective measurement of the patient's QoL allows it, in this context, to be considered clinical data contributing to the characterization of the patient.
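One possible mechanism for such a forecast, purely as our own illustration (the text does not specify the method): average the follow-up QoL of previously recorded patients in the same group who received the same treatment. All names and the records below are hypothetical.

```python
# Hypothetical forecast: mean follow-up QoL score of similar past patients.

def forecast_qol(history, group, treatment, years):
    """Return the mean QoL score `years` after treatment among recorded
    patients with the same group and treatment, or None if no match."""
    scores = [rec["qol"][years] for rec in history
              if rec["group"] == group
              and rec["treatment"] == treatment
              and years in rec["qol"]]
    return sum(scores) / len(scores) if scores else None

history = [
    {"group": "H&N", "treatment": "radiotherapy", "qol": {3: 70, 5: 65}},
    {"group": "H&N", "treatment": "radiotherapy", "qol": {3: 60, 5: 62}},
    {"group": "H&N", "treatment": "surgery",      "qol": {3: 80}},
]
estimate = forecast_qol(history, "H&N", "radiotherapy", 3)  # mean of 70 and 60
```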
"In this paper we defined the concept of QoL, which is used in different contexts and situations and now reaches almost every sector of society. The main focus, however, was the health context.
Some studies suggest that the implementation of patient HRQoL assessment in Portugal is challenged and questioned by several factors involving health institutions, health professionals and patients [4].
The platform designed and developed in this project gives the physician the opportunity to use the patient's QoL measurement in real time as a clinical decision support element.
Knowledge of the patient's QoL is another factor that may, in certain circumstances, contribute to a better decision. The systematic collection of patient QoL data allows this information to be standardized and therapeutic strategies to be inferred for a specific patient.
Moreover, knowledge of the therapeutic alternatives can help the physician by providing data from which the patient's future QoL can be inferred.
We demonstrated the validity of the developed platform both for acquiring the data required for QoL assessment and for making routine QoL assessment part of the appointment.
An evolution of the platform for collecting clinical information, in order to typify patients and therapies according to the QoL of a specific patient and of a class of patients, is under development. The need to develop this platform underlines the importance of knowledge management systems as decision-making aids.", "The authors declare that they have no competing interests.", "JG conceived and developed the decision support system, participated in the acquisition, analysis and interpretation of data, performed the statistical analysis, and drafted the manuscript. ÁR participated in the design and coordination of the project, helped to draft the manuscript, and revised it critically for important intellectual content. All authors read and approved the final manuscript." ]
[ "Head and Neck Oncologic Patients", "Health-Related Quality of Life", "Knowledge Management Systems", "Decision Support Systems" ]
Background: Scope
The concept of "Quality of Life" (QoL) is used in different contexts and situations, reaching practically every sector of society. The perception an individual holds about his place in life, which depends on his culture and values, defines that individual's QoL; applied to a health context, this is known as Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. Today, HRQoL is a medical goal, used in epidemiological studies, clinical trials, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3]. Preliminary studies indicate that the implementation of patient HRQoL assessment in Portugal is challenged and questioned by several factors involving health institutions, health professionals and patients [4]. The reasons include: a lack of familiarity with relevant studies in this area; an absence of sensitivity; lack of time; reluctance to accept that the patient's perception of his own outcomes is as important as the physician's [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge into explicit knowledge; the lack of friendly computer-based applications; and the lack of health care infrastructures that enable routine HRQoL assessment. The purpose of this project is to allow the physician to use the patient's QoL measurements as clinical decision support elements. Timely knowledge of the patient's QoL-related elements is another factor that may, in certain circumstances, contribute to better decision making.
On the other hand, systematic patient QoL data collection allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. In other words, when several therapeutic strategies are available, it can help the physician by giving him clues about the patient's future QoL according to the medical acts applied. In this paper we intend to demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. We analyze this problem and show the results obtained with a platform developed for the self-evaluation questionnaire that measures patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments applied to the patients.
Evaluation of HRQoL in oncologic patients
Malignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing and their social impact is being recognized [1].
The global weight of oncologic disease is growing, given the economic and social costs involved in its prevention, treatment and rehabilitation [6]. Research methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating wide domains such as the psychological, social, economic and organizational [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed through reliable databases [7]. Assessing the presence of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2]. The time when therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently informed of their diagnosis only after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1]. In fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease. Furthermore, evidence shows that a global patient QoL optimization can lead to a higher survival rate and a higher quality of life [1]. Promoting the integration of QoL assessment in clinical practice can result in the optimization of infrastructures and methods capable of improving patients' QoL [8]. A validated, safe and scientifically based measuring instrument must be made available in a simple format, understood both by the patient and the physician, and completable in less than 10 minutes [9]. Although a subjective concept, HRQoL is quantified objectively and does not merely represent the absence of disease [10].
The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables which, as a whole, define welfare [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11]. Table 1 schematically presents the main HRQoL dimensions and items proposed by the WHO [12]: Dimensions and items for HRQoL assessment.
KMS in routine HRQoL assessment
Preliminary studies on oncologic patients conclude that the use of adequate software for HRQoL assessment, data collection and processing allows us to obtain self-answered questionnaires from patients, automatic scoring of these questionnaires, the creation of a database and the statistical analysis of the results, enabling a routine HRQoL clinical assessment [13]. Moreover, the graphical representation of results enables a fast patient HRQoL assessment by the physician, and this evaluation becomes a diagnostic instrument to be used in routine clinical practice [13]. HRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis.
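The automatic scoring of the questionnaires can be illustrated with the linear transformation that EORTC instruments apply to raw item scores, mapping them onto a 0-100 scale. The sketch below is ours, not the authors' implementation; the item values are illustrative, and it follows the general form of the published EORTC scoring rules (raw score = mean of items, then a linear rescaling whose direction depends on the scale type):

```python
def raw_score(items):
    """Mean of the answered items of a scale (items coded e.g. 1..4)."""
    answered = [i for i in items if i is not None]
    return sum(answered) / len(answered)

def functional_score(items, item_range):
    """Functional scales: higher score = better functioning."""
    rs = raw_score(items)
    return (1 - (rs - 1) / item_range) * 100

def symptom_score(items, item_range):
    """Symptom scales: higher score = more symptom burden."""
    rs = raw_score(items)
    return (rs - 1) / item_range * 100

# Illustrative example: a 2-item symptom scale answered on 1..4 (range = 3).
print(symptom_score([2, 3], 3))           # mid-range symptom burden
# A 5-item functional scale with mostly "not at all" answers scores high.
print(functional_score([1, 1, 2, 1, 1], 3))
```

Missing answers are handled here by averaging the answered items only, which matches the usual half-items rule in spirit but omits the minimum-answered threshold a production implementation would enforce.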
Then, the selection of a measuring instrument with good psychometric characteristics, easy to administer and to score, that does not increase the appointment time, and with a multidimensional character, is most important. It must be answered and scored before appointments. The results should remain confidential and anonymous, and when graphically represented should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnostic instrument that identifies the patient's problems, highlights certain signs and symptoms that could otherwise go unnoticed, improves physician-patient communication and assists therapeutic decisions; in other words, it renders the appointments easier. Likewise, the physician can evaluate the evolution of his patient's state by comparing two or more assessments obtained in different periods [15]. However, a routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of the activities necessary for the process, define the knowledge management system which supports the clinical decision aid system based on the HRQoL assessment.
Methodology
A software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses. In order to assess the impact created by the application on the given answers, we randomly selected patients from the otorhinolaryngology service of Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation, and we thus obtained a sample of 54 individuals (Table 2).
These patients answered the same questionnaire twice, once on paper (the traditional model) and once on the computer, using the software developed for that purpose, with a minimum gap of 40 minutes between the two answers. Half of the patients answered first on the paper form and the other half answered first on the computer platform. In both cases the answer time was measured and the patient's preference between paper and computer was registered. Information regarding the patient's affinity with computer use was also registered. Patient demographics. In order to understand whether the computer-based environment influenced the answers, we analyzed the values obtained for each given answer, at both assessment moments, using a collection of statistical models and tests. Answers obtained on paper and through the computer-based platform were matched. To test whether the computer-based platform influenced the patients' answers, we hypothesized that the distributions, for each variable under study, were identical. We first tested the entire set of answers and then two subsets, dividing patients who answered first on paper from patients who answered first on the computer. In the validation process two standardized questionnaires were used, both from the EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive). The second is a specific questionnaire for Head and Neck oncology patients, with thirty-five questions. The two statistical hypotheses for a bilateral test in each situation were written as: H0: F(X0) = F(X1); H1: F(X0) ≠ F(X1). We used the Wilcoxon test, the most appropriate when the dependent variable is measured on an ordinal scale [16].
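The paired comparison described above can be sketched with the Wilcoxon signed-rank test. The minimal pure-Python version below (ours, with a normal approximation for the p-value and no tie correction; the paired answers are illustrative, not study data) shows the mechanics — in practice a statistics package would be used:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test, normal approximation.

    Returns (W, p): W is the smaller of the positive/negative rank sums,
    p is the two-sided p-value. Zero differences are dropped.
    """
    diffs = [b - a for a, b in zip(x, y) if b - a != 0]
    n = len(diffs)
    if n == 0:
        return 0.0, 1.0  # identical answers: no evidence of any difference
    # Rank absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    # Null distribution of W approximated by a normal.
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w, min(p, 1.0)

# Illustrative paired ordinal answers (paper vs computer, coded 1-4):
paper =    [1, 2, 2, 3, 4, 1, 2, 3, 3, 2]
computer = [1, 2, 3, 3, 4, 1, 2, 2, 3, 2]
w, p = wilcoxon_signed_rank(paper, computer)
print(w, p)  # a high p-value: no evidence the support changes the answers
```

The normal approximation is only reliable for reasonably many non-zero differences; for small counts, exact tables would be used instead.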
In both questionnaires (QLQ-C30 and QLQ-H&N35) adopted to evaluate QoL, the test results did not reveal significant differences between the distributions, for the two samples and the three mentioned situations. High p-values were consistently obtained, independently of whether the sample was analyzed globally or partially (divided between those who first answered on paper and those who first answered on the computer), so the hypothesis that there is no significant difference between answers was accepted. We can thus state that the use of the software does not influence the patients' answers. After validating the platform for obtaining a patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the mathematical Rasch model to make the use of the QoL measure possible in routine appointments. The Rasch model estimates the question difficulty level and the person ability level through an iterative process. This process takes a long time and is not compatible with a routine appointment. Thus, it was necessary to understand how we could make the process faster while maintaining accuracy in the values obtained for the estimated parameters. We analyzed the running time by varying the number of iterations and the sample size without losing accuracy. It was thus possible to determine the sample size and the number of iterations for calculating the parameters that minimize the execution time. In addition, we determined which calculations could be made in advance, that is, before the appointment. Thus, the time required for QoL assessment was reduced to five minutes, and it can be used in a routine assessment.
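The iterative estimation referred to above can be sketched as a joint maximum-likelihood procedure for the dichotomous Rasch model. The code below is a simplified sketch of ours with synthetic data, not the authors' implementation; it also illustrates the precomputation idea: the item-difficulty update loop can be run in advance on historical data, leaving only the (much cheaper) ability update for the new patient at appointment time.

```python
import math

def p(theta, b):
    """Rasch probability of a positive response for ability theta, difficulty b."""
    return 1 / (1 + math.exp(-(theta - b)))

def fit_rasch(data, iters=100, lr=0.5):
    """Damped Newton joint maximum-likelihood estimation, dichotomous Rasch model.

    data[v][i] in {0, 1}: response of person v to item i. Returns
    (abilities, difficulties). Persons/items with extreme scores (all 0
    or all 1) have infinite estimates and would need special handling.
    """
    n_persons, n_items = len(data), len(data[0])
    theta = [0.0] * n_persons
    b = [0.0] * n_items
    for _ in range(iters):
        # Ability update: the only step needed at appointment time
        # if the difficulties b were pre-computed before the appointment.
        for v in range(n_persons):
            probs = [p(theta[v], b[i]) for i in range(n_items)]
            info = sum(q * (1 - q) for q in probs)  # Fisher information
            theta[v] += lr * (sum(data[v]) - sum(probs)) / info
        # Difficulty update (pre-computable from accumulated data).
        for i in range(n_items):
            probs = [p(theta[v], b[i]) for v in range(n_persons)]
            info = sum(q * (1 - q) for q in probs)
            observed = sum(data[v][i] for v in range(n_persons))
            b[i] -= lr * (observed - sum(probs)) / info
        # Centre difficulties to fix the scale origin.
        mean_b = sum(b) / n_items
        b = [x - mean_b for x in b]
    return theta, b

# Synthetic responses: 4 persons x 5 items; item 4 is answered positively
# least often (hardest), item 0 among the most often (easiest).
data = [
    [1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 0, 0, 0],
]
theta, b = fit_rasch(data)
print(theta, b)  # less-endorsed items get higher difficulty estimates
```

Running time here grows with persons x items x iterations, which is why fixing the smallest sample size and iteration count that preserve accuracy, and caching the difficulty estimates, shortens the in-appointment computation.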
Results and discussion
Friendly software design
It cannot be denied that the health domain is extremely sensitive, and any change that interferes with traditional processes has an amplified impact. Thus, we started this project by assessing the influence that a technological environment would have on the patient's behavior. Knowledge management systems can and should be used to optimize certain procedures, but the type of organization into which they are introduced must be kept in mind. Dimensions and items for a model of knowledge management were presented in Table 1. The purpose of this project involves the development of an informational platform that does not interfere with patients' answers when used, and which can be applied by health professionals. This software should run through a browser working on the health unit's intranet, or even on the internet. The main requirement in the creation of this software was to build an interface as close to a traditional paper form as possible.
With usability, accessibility and confidentiality as keywords, the intention was to build a simple interface with an intuitive use, where an answer could be corrected in a clear, objective way, where the patient could clearly understand the confidentiality of his answers, and which would be accessible to all types of patients. Blind, illiterate and physically challenged individuals are frequent among the oncology service patients of the IPO in Oporto. Sound and touch screens are presently the two interface solutions used, but we are still investigating the use of other communication devices. Next, we present two figures: Figure 1 shows a view of the QoL screen and Figure 2 shows a view of the patient's screen when answering the questionnaire. View of the QoL screen. View of the patient when answering the questionnaire.
Computer versus paper support: Figures 3 and 4 show, for each question, the percentage of equal and different answers given by patients answering on the computer compared with the answers given on paper (QLQ-C30 and QLQ-H&N35 answers comparison). We concluded that the answers given on paper and on the computer-based platform are generally the same. Answers q1, q2 and q22 of QLQ-C30 and answer h41 of QLQ-H&N35 show the highest number of different answers: a little over 40% in the case of h41 and under 40% in the other cases. It is worth noting that the specific questionnaire (QLQ-H&N35) reveals a higher proportion of equal answers. Ideally, the answers should always be the same, but previous experiments (performed on paper) show that answers given by patients at two separate moments sometimes differ, and the percentage of such differences is close to the one we observed between the paper and the computer answers. This confirms the results observed in the mentioned test, leading us to conclude that the different results are caused by other factors.
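A per-question agreement percentage of this kind can be computed directly from the two answer matrices. The data below are illustrative assumptions, not the study's answers:

```python
import numpy as np

# Hypothetical answer matrices (patients x questions) on paper and on the
# computer platform; real data would come from the two administrations of
# the QLQ-C30 / QLQ-H&N35 questionnaires.
paper = np.array([[1, 2, 3], [2, 2, 4], [1, 3, 3], [4, 2, 1]])
computer = np.array([[1, 2, 3], [2, 3, 4], [1, 3, 2], [4, 2, 1]])

# Percentage of equal answers per question, as plotted in Figures 3 and 4.
equal_pct = (paper == computer).mean(axis=0) * 100
for q, pct in enumerate(equal_pct, start=1):
    print(f"q{q}: {pct:.0f}% equal, {100 - pct:.0f}% different")
```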
The Wilcoxon test results for QLQ-C30 and QLQ-H&N35 (Table 3 and Table 4) suggest that the null hypothesis should not be rejected: for all answers the significance level is greater than 0.05. We therefore cannot conclude that the answers differ between the paper support and the computer support. Since not rejecting the null hypothesis does not guarantee that the answers are similar, we also analyzed the correlation between the two sets of answers and fitted a linear regression line (y = mx + b) to examine the slope (m) and the intercept (b). For all questions, the regression analysis showed a slope close to one and an intercept close to zero. The worst values were obtained for question 16, where the slope and intercept were 1.038 and 0.017, respectively, with a correlation coefficient of 0.853. We then performed a Student's t-test to verify whether the slope could be zero in the population; the test rejected this null hypothesis. In summary, the answers are similar on both supports, i.e., the platform does not influence the patient's answers. Additionally, patients were asked about their impression of the platform compared with paper. We asked about their level of computer experience in four categories (no contact, a little contact, some contact and substantial contact) and which support they preferred: paper or computer. The results, broken down by patient utilization level, are shown in Figure 5. Except for the patients who had never used a computer, the preference goes to answering on the computer platform; the relationship between the level of computer use and the preference for answering this way should be emphasized.
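The three statistics used here (paired Wilcoxon test, linear regression of computer on paper answers, and the t-test on the slope) can be sketched with SciPy. The paired answers below are invented for illustration; only the workflow mirrors the text:

```python
import numpy as np
from scipy.stats import wilcoxon, linregress

# Hypothetical paired answers (1-4 Likert scale) for one question, given by
# the same 30 patients on paper and on the computer platform.
paper = np.array([1, 2, 3, 4, 2, 3, 1, 4, 2, 3,
                  4, 1, 2, 3, 4, 2, 3, 1, 4, 2,
                  3, 4, 1, 2, 3, 4, 2, 3, 1, 4])
computer = paper.copy()
# A few discrepant answers, off by one category in either direction.
for idx, delta in {1: +1, 5: -1, 8: +1, 13: -1,
                   16: -1, 19: +1, 24: +1, 27: -1}.items():
    computer[idx] += delta

# Paired Wilcoxon signed-rank test; H0: no systematic difference between
# the two supports (pairs with zero difference are discarded).
stat, p = wilcoxon(paper, computer, zero_method="wilcox")

# Regression computer = m * paper + b; a slope near 1 and an intercept near
# 0 indicate agreement, and linregress's p-value is the t-test of H0: m = 0.
reg = linregress(paper, computer)
print(f"Wilcoxon p={p:.3f}  slope={reg.slope:.3f}  "
      f"intercept={reg.intercept:.3f}  r={reg.rvalue:.3f}  "
      f"slope t-test p={reg.pvalue:.2g}")
```

A large Wilcoxon p-value fails to show a systematic shift, while a rejected slope-zero test with m near 1 supports agreement, which is exactly the two-step argument made in the text.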
Data analysis: After the data registration stage concerning the patient's QoL, it is important to forward this information to the physician in a clear and objective way, to enable improved decision making. The following stage was the identification of the clinical variables and the development of the output information, obtained from the physicians' contributions. Measures verified in clinical analysis differentiate patients from each other, but we understand that the QoL measure should also be considered in patient standardization. We used the Rasch model [17] to analyze the patients' answers. An important feature of the Rasch model is that the raw score is a sufficient statistic, allowing consistent estimation of the item parameters without reference to the distribution of the latent variable in the population. This feature allows each answer from each patient to be analyzed individually, without concern for the other answers or for the population distribution.
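As a minimal illustration of this per-answer analysis, the sketch below uses a dichotomous Rasch model (the QLQ items are actually polytomous) and flags answers whose standardized residual from the model's expectation is large. The item difficulties, ability estimate and threshold are invented assumptions, not values from the study:

```python
import math

def rasch_prob(theta, b):
    """Probability of a positive response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical item difficulties (logits) and a patient ability estimate;
# real values would come from the calibration described in the text.
item_difficulty = {"q1": -1.2, "q2": -0.4, "q3": 0.3, "q4": 1.1}
theta = 0.2
answers = {"q1": 0, "q2": 1, "q3": 0, "q4": 1}

# Flag answers whose standardized residual exceeds 2 in absolute value:
# these are the "unexpected" answers highlighted for the physician.
for item, x in answers.items():
    prob = rasch_prob(theta, item_difficulty[item])
    z = (x - prob) / math.sqrt(prob * (1 - prob))
    flag = "unexpected" if abs(z) > 2 else "as expected"
    print(f"{item}: answer={x} expected={prob:.2f} z={z:+.2f} ({flag})")
```

Here q1 is flagged: an easy (very negative difficulty) item answered negatively by a patient of average ability is exactly the kind of deviation the platform highlights in Figure 6.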
Figure 6 shows a graphic with the patient's answers: signaled in yellow are the answers below what is expected for this type of patient, and signaled in blue are those above the expected. The most critical answers are highlighted in the right column, to make the physician's understanding easier and quicker. The main advantage of this platform is the quick analysis of the patient's clinical problems by the physician: the physician takes note of the patient's problems before observing the patient. This information facilitates and improves the conduction of the appointment; without the platform, some signs and symptoms could not be identified.
Clinical decision support system: Why do we consider this system a clinical decision support system? In addition to the information about QoL, the platform can register the patient's clinical information and socio-demographic characteristics, allowing patients to be classified and grouped according to these characteristics. The collected data can help the physician on two levels: • Identify the problems that the patient has at the moment; • Assist the physician in decision-making by providing a forecast of the patient's future quality of life according to the prescribed treatments. It is important that the patient and the physician know the effect that a treatment will have on his QoL within 3, 4 or 5 years. The decision about a treatment protocol should take the patient's QoL into account, not necessarily during the treatment but especially during the years following it. The objective measurement of the patient's QoL allows it, in this context, to be considered a clinical data contribution to the characterization of the patient.
Conclusions: In this paper we defined the concept of QoL in different contexts and situations, a concept that is reaching almost every sector of society; the main focus, however, was on the health context. Some studies have suggested that the implementation of patient HRQoL assessment in Portugal is challenged and questioned for several factors involving health institutions, health professionals and patients [4]. The platform designed and developed in this project gives the physician the opportunity to use the patient's QoL measurement in real time as a clinical decision support element. Knowledge of the patient's QoL constitutes another factor that may, in certain circumstances, contribute to a better decision. The systematic collection of patient QoL data allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. Moreover, therapeutic alternatives can help the physician by giving him important data from which he can infer the patient's future QoL. We proved the validity of the developed platform in acquiring the data required for QoL assessment, and in allowing a routine QoL assessment to become part of the appointment. An evolution of the platform for collecting clinical information, in order to typify patients and therapies according to a specific patient's QoL and a class of patients, is under development. The need to develop this platform underlines the importance of knowledge management systems as decision-making aids. Competing interests: The authors declare that they have no competing interests.
Authors' contributions: JG conceived and developed the decision support system, participated in the acquisition, analysis and interpretation of data, performed the statistical analysis, and drafted the manuscript. ÁR participated in the design and coordination of the project, helped to draft the manuscript, and revised it critically for important intellectual content. All authors read and approved the final manuscript.
Background: The assessment of Quality of Life (QoL) is a medical goal; it is used in clinical research, medical practice, health-related economic studies and in planning health management measures and strategies. The objective of this project is to develop an informational platform for patient self-assessment with standardized QoL measuring instruments, through friendly, easily adaptable software. The platform should aid the study of QoL by promoting the creation of databases, accelerating their statistical treatment, and generating useful results in graphical format that the physician can analyze in an appointment immediately after the answers are collected. Methods: First, a software platform was designed and developed in an action-research process with patients, physicians and nurses. The computerized patient self-assessment with standardized QoL measuring instruments was compared with the traditional one, to verify that its use did not influence the patients' answers; for that, the Wilcoxon and Student's t-tests were applied. Afterwards, we adopted and adapted the mathematical Rasch model to make the use of the QoL measure possible in routine appointments. Results: The results show that the computerized patient self-assessment does not influence the patients' answers and can be used as a suitable tool in the routine appointment, because it indicates problems which are more difficult to identify in a traditional appointment, thus improving the physician's decisions. Conclusions: The possibility of graphically representing the results that the physician needs to analyze in the appointment, immediately after the answers are collected and in useful time, makes this QoL assessment platform a diagnosis instrument ready for routine use in clinical practice.
Background: Scope
The concept of "Quality of Life" (QoL) is used in different contexts and situations, reaching practically all sectors of society. The perception that an individual holds about his place in life, which depends upon his culture and values, defines this individual's Quality of Life. When applied in a health context, this is known as Health-Related Quality of Life (HRQoL) [1]. Nowadays, indicators of HRQoL are used in health management strategies. Managers, economists, political analysts and pharmaceutical companies use QoL measures from the World Health Organization (WHO) in some of their departments [2]. Today, HRQoL is a medical goal, being used in epidemiological studies, clinical trials, medical practice, health-related economic studies, and in planning and comparing measures and strategies [3]. Preliminary studies indicate that the implementation of patient HRQoL assessment in Portugal is challenged and questioned for several factors involving health institutions, health professionals and patients [4]. The reasons include: a lack of familiarity with relevant studies in this area; the absence of sensitivity; lack of time; reluctance to accept that the patient's perceptions regarding his own outcomes are as important as the physician's [5]; difficulty in quantifying subjective parameters; difficulty in converting tacit knowledge into explicit knowledge; the lack of friendly computer-based applications; and the lack of health care service infrastructures that enable routine HRQoL assessment. The purpose of this project is to allow the physician to use the patient's QoL measurements as clinical decision support elements. Timely knowledge of the patient's QoL-related elements constitutes another factor that may, in certain circumstances, contribute to better decision making.
On the other hand, a systematic collection of patient QoL data allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. In other words, in the presence of several therapeutic strategies, this can help the physician by giving him clues about the patient's future QoL according to the applied medical acts. In this paper we intend to demonstrate the importance of HRQoL assessment in oncologic patients, and the relevance of Knowledge Management Systems (KMS) as decision-making aids. We analyze this problem and show the results obtained with a platform developed for the self-evaluation questionnaire that measures the patients' QoL and collects clinical information, in order to infer the patient's future QoL by crossing the QoL measure with the several treatments applied to the patients.
Evaluation of HRQoL in oncologic patients: Malignant tumors are the second leading cause of death in Portugal. Their relevance as a morbidity and mortality factor is growing, and their social impact is being recognized [1].
The global weight of oncologic disease is growing, given the economic and social costs involved in its prevention, treatment and rehabilitation [6]. Research methods used in oncology enable us to analyze the oncologic process in its physiopathologic and clinical aspects, penetrating wide domains such as the psychological, social, economic and organizational ones [1]. Epidemiology and statistics are significant areas of this study, since oncologic care can only be programmed through safe databases [7]. Assessing the burden of these diseases in our community helps to recognize the global impact of tumors and to evaluate the effectiveness of the adopted control measures [2]. The time when therapeutic decisions were not discussed with the patient and the family, and treatment options were not even considered, has long since passed. Oncologic patients were frequently informed of their diagnosis only after their families were. This reality has changed and, today, patients participate, or should participate, in the several stages of their treatment [1]. In fact, patients motivated to participate in their treatment and rehabilitation plan often show a better QoL, and should therefore be involved in the strategies developed to fight their disease. Furthermore, evidence shows that a global optimization of the patient's QoL can lead to a higher survival rate and a higher quality of life [1]. Promoting the integration of QoL assessment in clinical practice can result in the optimization of infrastructures and methods capable of improving patients' QoL [8]. A validated, safe and scientifically based measuring instrument must be made available in a simple format, understood both by the patient and the physician, and completable in less than 10 minutes [9]. Although a subjective concept, HRQoL is quantified objectively and does not merely represent the absence of disease [10].
The multidimensional conception of HRQoL comprises a wide range of physical, psychological, functional, emotional and social variables which, as a whole, define welfare [11]. These domains vary individually according to religion and beliefs, culture, expectations, perceptions, education, knowledge, etc. [11]. Table 1 schematically represents the main HRQoL dimensions and items proposed by the WHO [12]: Dimensions and items for HRQoL assessment.
KMS in routine HRQoL assessment
Preliminary studies on oncologic patients conclude that the use of adequate software for HRQoL assessment, data collection and processing allows us to obtain self-answered questionnaires from patients, to score these questionnaires automatically, to create a database and to analyze the results statistically, thereby performing a routine clinical HRQoL assessment [13]. Moreover, the graphical representation of the results enables a fast assessment of the patient's HRQoL by the physician, and this evaluation becomes a diagnostic instrument to be used in routine clinical practice [13]. HRQoL assessment is dynamic and requires periodic reevaluations [14]. It should be done objectively and quantitatively on a routine basis.
The selection of a multidimensional measuring instrument with good psychometric characteristics, easy to administer and to score, that does not increase the appointment time, is therefore most important. It must be answered and scored before appointments. The results should remain confidential and anonymous and, when graphically represented, should allow an easy reading of the patient's self-perception. Thus, HRQoL assessment becomes a diagnostic instrument that identifies the patient's problems, highlights signs and symptoms that could otherwise go unnoticed, improves physician-patient communication and assists therapeutic decisions; in other words, it renders the appointments easier. Likewise, the physician can evaluate the evolution of his patient's state by comparing two or more assessments obtained in different periods [15]. However, a routine assessment implies the design of a new appointment protocol. The analysis and specification of the information system requirements, as well as the specification of the activities necessary for the process, define the knowledge management system which supports the clinical decision aid system based on the HRQoL assessment.
Methodology
A software platform to study the quality of life of oncology patients was designed and developed in an action-research process with patients, physicians and nurses. To assess the impact of the application on the given answers, we randomly selected patients from the otorhinolaryngology service of Oporto's IPO (Portuguese Institute of Oncology). We selected fifteen days from May, June and July of 2011, and all patients attending consultations on those days were invited to participate in this study. All of them accepted the invitation, yielding a sample of 54 individuals (Table 2).
These patients answered the same questionnaire twice: once in paper form - the traditional model - and once on the computer, using the software developed for that purpose. Half of the patients answered first on paper and the other half first on the computer platform; the minimum time between the two answers was 40 minutes. In both cases the answer time was measured, and the patient's preference between paper and computer was registered, as was information regarding the patient's familiarity with computer use.
Table 2 - Patient demographics
To understand whether the computer-based environment influenced the answers, we analyzed the values obtained for each answer at both assessment moments, using a collection of statistical models and tests. Answers obtained on paper and through the computer-based platform were matched. To test whether the computer-based platform influenced the patients' answers, we hypothesized that the distributions for each variable under study were identical. We first tested the entire set of answers and then two subsets, dividing the patients who answered first on paper from those who answered first on the computer. In the validation process two standardized questionnaires were used, both from the EORTC (European Organisation for Research and Treatment of Cancer): QLQ-C30 and QLQ-H&N35. The first is a global questionnaire developed for all types of oncologic patients. It has thirty questions grouped in five domains (physical, social, emotional, functional and cognitive). The second is a questionnaire specific to head and neck oncology patients, with thirty-five questions. The two statistical hypotheses for a bilateral test in each situation were: Hypothesis H0: F(X0) = F(X1); Hypothesis H1: F(X0) ≠ F(X1). We used the Wilcoxon test, the most appropriate when the dependent variable is measured on an ordinal scale [16].
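As a hedged illustration, the paired comparison just described can be run with SciPy's Wilcoxon signed-rank test. The item scores below are invented for demonstration only and are not the study's data:

```python
from scipy.stats import wilcoxon

# Hypothetical paired answers: the same patients scoring the same QLQ-C30 item
# once on paper and once on the computer platform (invented values).
paper    = [2, 3, 1, 4, 2, 3, 3, 1, 2, 4, 3, 2]
computer = [2, 4, 2, 4, 1, 3, 2, 1, 3, 4, 2, 2]

# H0: F(X0) = F(X1). A high p-value means the medium shows no detectable
# influence on the answers, so H0 is retained.
stat, p_value = wilcoxon(paper, computer)
```

With balanced positive and negative differences, as here, the test yields a high p-value, mirroring the paper's finding that the platform did not influence the answers.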
For both questionnaires adopted to evaluate QoL (QLQ-C30 and QLQ-H&N35), the test results showed no significant differences between the distributions, for the two samples and the three situations mentioned. A high p-value was always obtained, whether the sample was analyzed globally or partially (divided between those who answered first on paper and those who answered first on the computer), so the hypothesis that no significant difference exists between answers was accepted. We can thus state that the use of the software does not influence the patients' answers. After validating the platform for obtaining a patient self-assessment with standardized QoL measuring instruments, we adopted and adapted the mathematical Rasch model to make the use of the QoL measure possible in routine appointments. The Rasch model estimates the question difficulty level and the person ability level with an iterative process. This process takes a long time and is not compatible with a routine appointment. Thus, it was necessary to understand how we could make the process faster while maintaining the accuracy of the estimated parameters. We analyzed the running time while varying the number of iterations and the sample size, without losing accuracy. It was thus possible to determine the sample size and the number of iterations that minimize the execution time. In addition, we determined which calculations could be made in advance, that is, before the appointment. The time required for the QoL assessment was thereby reduced to five minutes, making it usable in a routine assessment.
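The iterative estimation referred to above can be sketched as a joint maximum-likelihood loop for a dichotomous Rasch model. The response matrix, the fixed iteration budget and the update scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Dichotomous Rasch model: P(X=1) = 1 / (1 + exp(-(ability - difficulty))).
# Synthetic responses: 50 persons answering 8 items (all values invented).
rng = np.random.default_rng(0)
true_ability = rng.normal(size=50)
true_difficulty = np.linspace(-1.5, 1.5, 8)
p_true = 1 / (1 + np.exp(-(true_ability[:, None] - true_difficulty[None, :])))
X = (rng.random(p_true.shape) < p_true).astype(float)

theta = np.zeros(50)  # person ability estimates
beta = np.zeros(8)    # item difficulty estimates
for _ in range(100):  # fixed iteration budget, as when tuned for speed
    p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
    # Newton-style updates on the joint log-likelihood
    theta += (X - p).sum(axis=1) / (p * (1 - p)).sum(axis=1)
    beta -= (X - p).sum(axis=0) / (p * (1 - p)).sum(axis=0)
    theta = np.clip(theta, -5, 5)  # guard against perfect/zero-score divergence
    beta -= beta.mean()            # identifiability: mean difficulty fixed at 0
```

Because the item-side quantities depend only on the calibration sample, difficulties like `beta` can be estimated before the appointment, leaving only the individual patient's ability estimate for the consultation itself — consistent with the precomputation strategy described in the text.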
Conclusions: In this paper we defined the concept of QoL in different contexts and situations, a concept that is reaching almost every sector of society. The main focus, however, was on the health context. Some studies have suggested that the implementation of patient HRQoL assessment in Portugal is challenged and questioned by several factors involving health institutions, health professionals and patients [4]. The platform designed and developed in this project gives the physician the opportunity to use the patient's QoL measurement in real time as a clinical decision support element. Knowledge about the patient's QoL constitutes another factor that may, in certain circumstances, contribute to a better decision. The systematic collection of patient QoL data allows this information to be standardized and therapeutic strategies to be inferred for a specific patient. Moreover, therapeutic alternatives can help the physician by giving him important data from which he can infer the patient's future QoL. We proved the validity of the developed platform in acquiring the data required for QoL assessment, and in allowing a routine QoL assessment to become part of the appointment. An evolution of the platform for collecting clinical information, in order to typify patients and therapies according to a specific patient's QoL and a class of patients, is under development. The need to develop this platform underlines the importance of knowledge management systems as decision-making aids.
Background: The assessment of Quality of Life (QoL) is a medical goal; it is used in clinical research, medical practice, health-related economic studies and in planning health management measures and strategies. The objective of this project is to develop an informational platform for patient self-assessment with standardized QoL measuring instruments, through user-friendly software that is easy to adapt. The platform should aid the study of QoL by promoting the creation of databases, accelerating their statistical treatment, and generating useful results in graphical format for the physician to analyze in an appointment immediately after the answers are collected. Methods: First, a software platform was designed and developed in an action-research process with patients, physicians and nurses. The computerized patient self-assessment with standardized QoL measuring instruments was compared with the traditional one, to verify that its use did not influence the patients' answers. For that, the Wilcoxon and Student's t tests were applied. Afterwards, we adopted and adapted the mathematical Rasch model to make the use of the QoL measure possible in routine appointments. Results: The results show that the computerized patient self-assessment does not influence the patients' answers and can be used as a suitable tool in the routine appointment, because it indicates problems which are more difficult to identify in a traditional appointment, thus improving the physician's decisions. Conclusions: The possibility of graphically representing the results that the physician needs to analyze in the appointment, immediately after the answers are collected and in useful time, makes this QoL assessment platform a diagnostic instrument ready to be used routinely in clinical practice.
10,828
312
13
[ "patient", "answers", "patients", "qol", "computer", "assessment", "hrqol", "paper", "platform", "clinical" ]
[ "test", "test" ]
null
null
[CONTENT] Head and Neck Oncologic Patients | Health-Related Quality of Life | Knowledge Management Systems | Decision Support Systems [SUMMARY]
null
null
[CONTENT] Head and Neck Oncologic Patients | Health-Related Quality of Life | Knowledge Management Systems | Decision Support Systems [SUMMARY]
[CONTENT] Head and Neck Oncologic Patients | Health-Related Quality of Life | Knowledge Management Systems | Decision Support Systems [SUMMARY]
[CONTENT] Head and Neck Oncologic Patients | Health-Related Quality of Life | Knowledge Management Systems | Decision Support Systems [SUMMARY]
[CONTENT] Decision Support Systems, Clinical | Head and Neck Neoplasms | Humans | Quality of Life | Self-Assessment | Software | Statistics, Nonparametric | Surveys and Questionnaires | User-Computer Interface [SUMMARY]
null
null
[CONTENT] Decision Support Systems, Clinical | Head and Neck Neoplasms | Humans | Quality of Life | Self-Assessment | Software | Statistics, Nonparametric | Surveys and Questionnaires | User-Computer Interface [SUMMARY]
[CONTENT] Decision Support Systems, Clinical | Head and Neck Neoplasms | Humans | Quality of Life | Self-Assessment | Software | Statistics, Nonparametric | Surveys and Questionnaires | User-Computer Interface [SUMMARY]
[CONTENT] Decision Support Systems, Clinical | Head and Neck Neoplasms | Humans | Quality of Life | Self-Assessment | Software | Statistics, Nonparametric | Surveys and Questionnaires | User-Computer Interface [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] patient | answers | patients | qol | computer | assessment | hrqol | paper | platform | clinical [SUMMARY]
null
null
[CONTENT] patient | answers | patients | qol | computer | assessment | hrqol | paper | platform | clinical [SUMMARY]
[CONTENT] patient | answers | patients | qol | computer | assessment | hrqol | paper | platform | clinical [SUMMARY]
[CONTENT] patient | answers | patients | qol | computer | assessment | hrqol | paper | platform | clinical [SUMMARY]
[CONTENT] hrqol | assessment | patients | qol | patient | answered | computer | hrqol assessment | routine | time [SUMMARY]
null
null
[CONTENT] qol | patient | patient qol | health | platform | assessment | specific patient | infer | decision | qol assessment [SUMMARY]
[CONTENT] patient | answers | qol | patients | hrqol | assessment | computer | clinical | health | physician [SUMMARY]
[CONTENT] patient | answers | qol | patients | hrqol | assessment | computer | clinical | health | physician [SUMMARY]
[CONTENT] Quality of Life ||| QoL | QoL [SUMMARY]
null
null
[CONTENT] QoL [SUMMARY]
[CONTENT] Quality of Life ||| QoL | QoL ||| First ||| QoL ||| Wilcoxon ||| QoL ||| ||| ||| QoL [SUMMARY]
[CONTENT] Quality of Life ||| QoL | QoL ||| First ||| QoL ||| Wilcoxon ||| QoL ||| ||| ||| QoL [SUMMARY]
The impact of the COVID-19 pandemic on alloplastic breast reconstruction: An analysis of national outcomes.
35389527
Immediate alloplastic breast reconstruction shifted to the outpatient setting during the COVID-19 pandemic to conserve inpatient hospital beds while providing timely oncologic care. We examine the National Surgical Quality Improvement Program (NSQIP) database for trends in and safety of outpatient breast reconstruction during the pandemic.
BACKGROUND
NSQIP data were filtered for immediate alloplastic breast reconstructions between April and December of 2019 (before-COVID) and 2020 (during-COVID); the proportion of outpatient procedures was compared. Thirty-day complications were compared for noninferiority between propensity-matched outpatients and inpatients utilizing a 1% risk difference margin.
METHODS
During COVID, immediate alloplastic breast reconstruction cases decreased (4083 vs. 4677) and were more frequently outpatient (31% vs. 10%, p < 0.001). Outpatients had lower rates of smoking (6.8% vs. 8.4%, p = 0.03) and obesity (26% vs. 33%, p < 0.001). Surgical complication rates of outpatient procedures were noninferior to propensity-matched inpatients (5.0% vs. 5.5%, p = 0.03 noninferiority). Reoperation rates were lower in propensity-matched outpatients (5.2% vs. 8.0%, p = 0.003).
RESULTS
Immediate alloplastic breast reconstruction shifted towards outpatient procedures during the COVID-19 pandemic with noninferior complication rates. Therefore, a paradigm shift towards outpatient reconstruction for certain patients may be safe. However, decreased reoperations in outpatients may represent undiagnosed complications and warrant further investigation.
CONCLUSION
[ "COVID-19", "Humans", "Mammaplasty", "Pandemics", "Postoperative Complications", "Reoperation", "Retrospective Studies" ]
9088498
INTRODUCTION
In recent years, an increasing number of women with early‐stage breast cancer have opted for mastectomy over breast‐conserving surgery [1, 2, 3]. While not all patients desire reconstruction, more than 130 000 breast reconstructions are performed annually, with alloplastic procedures representing approximately 75% of these [4]. The ideal timing for breast reconstruction is a multifactorial decision based on desired reconstructive approach, other medical comorbidities, and whether the patient will require adjuvant radiotherapy. Immediate reconstruction at the time of mastectomy offers psychological and economic benefits and therefore makes up the majority of breast reconstructions [4, 5, 6, 7, 8]; however, it has been associated with higher rates of surgical complications [9, 10]. Traditionally, immediate reconstructions have been done on an inpatient basis to monitor drain output, provide pain control, and assess for complications such as hematoma and skin necrosis. In recent years, some centers have begun to offer immediate alloplastic reconstruction as an outpatient procedure, mirroring trends observed for isolated mastectomy and delayed alloplastic reconstruction [11, 12]. Advances in regional blockade and perioperative blocks have decreased postoperative pain, aiding this shift toward outpatient surgery [13]. Some studies have demonstrated higher patient satisfaction, lower costs, and comparable safety with outpatient surgery but were limited by smaller sample sizes, and this model has accounted for a minority of procedures in recent years [14, 15, 16, 17]. One factor which may accelerate the paradigm shift toward outpatient immediate reconstruction is the SARS‐CoV‐2 coronavirus disease 2019 (COVID‐19) pandemic. In early 2020, both the American College of Surgeons and the American Society of Plastic Surgeons (ASPS) issued recommendations that all nonurgent elective surgeries be canceled or postponed.
In addition, they recommended that urgent elective procedures be shifted to the outpatient setting to conserve inpatient resources, specifically intensive care unit beds, and decrease the risk of transmission of the novel coronavirus [18, 19, 20, 21]. Overall, there was a sharp decline in elective surgical volumes across multiple specialties, including oncologic surgery [22, 23, 24]. Given that cancer patients are at higher risk for serious complications and mortality from COVID‐19 [25, 26], these recommendations provided a strong incentive to provide immediate reconstruction in the outpatient setting. To conserve resources while still providing care for breast cancer patients, some centers pivoted to “high‐efficiency” same‐day protocols with encouraging early outcomes [27, 28, 29, 30]. Despite the observed delays and global decreases in elective surgical volumes, relatively little is known about the effects of the COVID‐19 pandemic on outcomes of breast reconstruction on a national scale. Understanding how these necessary changes have affected outcomes will inform decision‐making for breast cancer surgeons and patients, as the availability of inpatient surgery returns. To evaluate these trends in a large, national cohort, we surveyed the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database [31] to identify patients undergoing immediate alloplastic breast reconstruction to describe aggregate changes in surgical volume, patient demographics, comorbidities, procedure setting, early complications, and reoperation rates as a result of the COVID‐19 pandemic.
null
null
RESULTS
During the COVID‐19 pandemic, breast reconstruction case volumes and overall surgical case volume reported in the NSQIP database decreased (Figure 1). The database contains a total of 806 016 procedures from April to December 2019 and 644 061 from these months in 2020. In the before‐COVID period, 4677 direct‐to‐implant and immediate tissue expander procedures were reported, accounting for 580 per 100 000 total cases. In the corresponding months of 2020, 4083 of the same procedures were reported, a decrease of 13%; however, these made up 633 per 100 000 of all the surgeries reported, a significant increase in proportion (p = 0.001, Table 1). Notably, there was an increase in the proportion of alloplastic reconstructions performed as outpatient procedures from 10% before COVID to 31% in 2020 during COVID (p < 0.001), a change that is seen most prominently in Quarter 2 of 2020 but is consistently elevated for the remainder of the year (p < 0.001 each quarter).
Figure 1: Case volume of all immediate alloplastic breast reconstructions recorded in the National Surgical Quality Improvement Program over 2019 and 2020, and outpatient immediate alloplastic reconstructions. The proportion of outpatient reconstructions was significantly higher for each quarter in 2020 than in the corresponding quarter in 2019 (p < 0.001).
Table 1: Demographic data and comorbidities of immediate alloplastic breast reconstruction patients before‐ and during‐COVID. Abbreviations: ASA, American Society of Anesthesiologists; BMI, body mass index; CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; IQR, interquartile range; NSQIP, National Surgical Quality Improvement Program.
Patients underwent reconstruction at a median of 50 years of age (IQR: 42–60). Demographics differed between the before‐COVID and during‐COVID cohorts, with the latter group being more often African American (13% vs. 11%) and less often white (80% vs. 82%, p = 0.003), and more often of Hispanic ethnicity (11% vs.
9.5%, p = 0.01; Table 1). Comorbidities generally did not differ across the two cohorts, with the exception of overall higher ASA class in the during‐COVID group (65% vs. 67% Class II, 31% vs. 28% Class III, p = 0.01). During the pandemic, fewer direct‐to‐implant procedures were performed than before COVID (13% vs. 17%, p < 0.001), and the use of acellular dermal matrix became more common (Table 2).
Table 2: Surgical characteristics of implant and expander‐based breast reconstruction before and during COVID. Abbreviation: IQR, interquartile range.
Rates of postoperative complications, however, did not significantly increase. In the before‐COVID cohort, there were 306 surgical complications (6.5%, 95% confidence interval [CI]: 5.9%–7.3%), while in the during‐COVID cohort, there were 244 (6.0%, 95% CI: 5.3%–6.8%, p = 0.001 for noninferiority; Table 3; Figure 2A). The incidence of medical complications in the during‐COVID cohort was 1.1% (95% CI: 0.8%–1.4%), noninferior to the 1.5% in the before‐COVID cohort (95% CI: 1.2%–1.9%, p < 0.001). The rate of reoperations during COVID was 7.5% (95% CI: 6.7%–8.3%), also noninferior to before COVID (7.4%, p = 0.049).
Table 3: Postoperative outcomes by cohort. Abbreviations: DVTs, deep venous thromboses; SSI, surgical site infection.
Figure 2: Incidence of surgical complications, medical complications, and reoperations in (A) 2019 versus 2020 and (B) inpatient versus outpatient reconstruction. A dashed line indicates a noninferiority margin of a 1 percentage point increase in complications. Horizontal lines indicate 95% confidence intervals.
To adjust for patients' baseline differences, multivariable logistic regression was performed. Bivariate analyses identified age, BMI, operative time, race, ethnicity, diabetes, smoking, hypertension, and COPD as being potentially associated with surgical complication rates (p < 0.2); thus, these were included alongside the year of operation and outpatient procedure as covariates in the regression.
The results confirmed that when accounting for the potential confounders listed above, the year of operation was not associated with surgical complications, with an odds ratio (OR) of 0.93 (95% CI: 0.76–1.14, p = 0.48). Outpatient status was similarly independent of surgical complications (OR: 0.90, 95% CI: 0.68–1.20, p = 0.48; Table 4).
Table 4: Results of multivariate logistic regression to identify predictors of surgical complications. Note: Model adjusted R² = 0.0433. Abbreviations: BMI, body mass index; CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease.
To verify these findings, analyses were performed utilizing outpatient surgery as the exposure of interest. Baseline data of patients undergoing inpatient and outpatient surgery did differ significantly, with outpatients tending to have fewer comorbidities, including lower rates of obesity and smoking, and lower ASA class (p < 0.05 for all; Table S1). Outpatient procedures were more often direct‐to‐implant (23% vs. 13%, p < 0.001) and had shorter operative times (p < 0.001). To minimize the impact of these baseline differences, propensity score‐matched outpatient and inpatient cohorts were generated, matched on age, BMI, race, ethnicity, diabetes, smoking, hypertension, and procedure type. The resulting groups encompassed 1371 patients each, with a good balance of all specified covariates (Tables 5 and S2). In these matched groups, outpatient surgery was noninferior to inpatient surgery in all measured domains. Outpatients had a 5.0% incidence of surgical complications (95% CI: 3.9%–6.3%) as compared to 5.5% in inpatients (p = 0.03 for noninferiority). Similarly, outpatients had noninferior rates of medical complications (0.8%, 95% CI: 0.4%–1.4%; vs. 1.4%) and reoperations (5.2%, 95% CI: 4.1%–6.5%; vs. 8.0%; p < 0.001 for noninferiority for both; Table 6, Figure 2B).
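Propensity score matching of the kind described above can be sketched as a logistic-regression propensity model followed by greedy 1:1 nearest-neighbor matching without replacement. The covariates and coefficients below are toy assumptions on synthetic data, not the study's actual matching routine:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic cohort: age, BMI, smoking status (all values invented).
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.normal(50, 10, n),   # age
    rng.normal(28, 6, n),    # BMI
    rng.random(n) < 0.08,    # smoker
]).astype(float)
# In this toy data, "outpatients" (treated=1) skew younger, leaner, non-smoking.
logit = -0.03 * (X[:, 0] - 50) - 0.05 * (X[:, 1] - 28) - 1.0 * X[:, 2] - 1.0
treated = rng.random(n) < 1 / (1 + np.exp(-logit))

# Propensity score: probability of outpatient status given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbor matching on the propensity score, no replacement.
t_idx = np.flatnonzero(treated)
c_idx = np.flatnonzero(~treated)
used = set()
pairs = []
for i in t_idx:
    j = min((j for j in c_idx if j not in used), key=lambda j: abs(ps[i] - ps[j]))
    used.add(j)
    pairs.append((i, j))
```

After matching, covariate balance would normally be checked with standardized mean differences, as the article's Table 5 does.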
Table 5: Standardized mean differences (SMDs) for covariates used in propensity score matching. Note: Covariates were chosen by bivariate analysis showing association with surgical complications with p < 0.20, with the addition of surgery type. Race, diabetes, and surgery type are shown separately in Table S2. Age and BMI are reported as median (IQR), and the remaining variables are reported as counts (%). Abbreviations: BMI, body mass index; SMD, standardized mean difference.
Table 6: Postoperative outcomes in inpatient and outpatient surgery with propensity score‐matched cohorts. Abbreviations: DVTs, deep venous thromboses; SSI, surgical site infection.
However, the incidence of secondary operations was also statistically significantly lower in the outpatient cohort, largely driven by a decrease in operations for hematoma and seroma drainage to 1.4%, half of the 2.8% seen in the inpatient cohort (p = 0.003; Table 6). This decrease was present primarily during postoperative days 0–1 (0.6% vs. 1.4%, p = 0.03), but trended towards significance during days 2–30 (0.8% vs. 1.5%, p = 0.10). Reoperations for implant removal, infectious complications, and wound repair did not differ significantly (p ≥ 0.44).
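The noninferiority comparisons reported here test whether the outpatient-minus-inpatient risk difference stays below a 1-percentage-point margin. The exact routine used by the authors is not specified; a generic one-sided z-test sketch on the risk-difference scale, with counts chosen to mirror the matched cohorts (1371 each, roughly 5.0% vs. 5.5% complications), would look like:

```python
from math import sqrt
from scipy.stats import norm

def noninferiority_p(x_out, n_out, x_in, n_in, margin=0.01):
    """One-sided z-test of H0: p_out - p_in >= margin (risk-difference scale)."""
    p_out, p_in = x_out / n_out, x_in / n_in
    se = sqrt(p_out * (1 - p_out) / n_out + p_in * (1 - p_in) / n_in)
    z = (p_out - p_in - margin) / se
    return norm.cdf(z)  # small p => reject H0 => conclude noninferiority

# Illustrative counts approximating the matched surgical-complication rates.
p = noninferiority_p(x_out=69, n_out=1371, x_in=75, n_in=1371)
```

With these illustrative counts the one-sided p-value falls below 0.05, in line with the article's conclusion of noninferiority, though the published p = 0.03 presumably comes from the authors' own (unspecified) method.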
CONCLUSION
In the context of the COVID‐19 pandemic, at a time when conservation of hospital resources continues to be of great importance, we find that immediate alloplastic breast reconstruction is a generally safe procedure in the outpatient setting, with no increase in short‐term surgical complications, medical complications, or reoperations. However, further prospective study is necessary to validate the decreased reoperation rates in outpatients and clarify possible underlying causes. Close postoperative follow‐up, particularly in the initial days after surgery, remains essential in patients undergoing breast reconstruction in the inpatient or outpatient setting. The COVID‐19 pandemic has forced us to re‐examine surgical dogma: crisis forces innovation. Healthier patients were given the opportunity to undergo what was previously an inpatient procedure as an outpatient, with encouraging results; thus, what constitutes an acceptable outpatient procedure may now be continuously reevaluated in this ongoing time of need.
[ "INTRODUCTION", "Statistical analysis", "SYNOPSIS" ]
[ "In recent years, an increasing number of women with early‐stage breast cancer have opted for mastectomy over breast‐conserving surgery.\n1\n, \n2\n, \n3\n While not all patients desire reconstruction, more than 130 000 breast reconstructions are performed annually, with alloplastic procedures representing approximately 75% of these.\n4\n The ideal timing for breast reconstruction is a multifactorial decision based on desired reconstructive approach, other medical comorbidities, and whether the patient will require adjuvant radiotherapy. Immediate reconstruction at the time of mastectomy offers psychological and economic benefits and therefore makes up the majority of breast reconstructions\n4\n, \n5\n, \n6\n, \n7\n, \n8\n; however, it has been associated with higher rates of surgical complications.\n9\n, \n10\n Traditionally, immediate reconstructions have been done on an inpatient basis to monitor drain output, provide pain control, and assess for complications such as hematoma and skin necrosis. In recent years, some centers have begun to offer immediate alloplastic reconstruction as an outpatient procedure, mirroring trends observed for isolated mastectomy and delayed alloplastic reconstruction.\n11\n, \n12\n Advances in the regional blockade and perioperative blocks have decreased postoperative pain, aiding this shift toward outpatient surgery.\n13\n Some studies have demonstrated higher patient satisfaction, lower costs, and comparable safety with outpatient surgery but were limited by smaller sample sizes, and this model has accounted for a minority of procedures in recent years.\n14\n, \n15\n, \n16\n, \n17\n\n\nOne factor which may accelerate the paradigm shift toward outpatient immediate reconstruction is the SARS‐CoV‐2 coronavirus disease 2019 (COVID‐19) pandemic. In early 2020, both the American College of Surgeons and the American Society of Plastic Surgeons (ASPS) issued recommendations that all nonurgent elective surgeries be canceled or postponed. 
In addition, they recommended that urgent elective procedures be shifted to the outpatient setting to conserve inpatient resources, specifically intensive care unit beds, and decrease the risk of transmission of the novel coronavirus.\n18\n, \n19\n, \n20\n, \n21\n Overall, there was a sharp decline in elective surgical volumes across multiple specialties, including oncologic surgery.\n22\n, \n23\n, \n24\n Given that cancer patients are at higher risk for serious complications and mortality from COVID‐19,\n25\n, \n26\n these recommendations provided a strong incentive to provide immediate reconstruction in the outpatient setting. To conserve resources while still providing care for breast cancer patients, some centers pivoted to “high‐efficiency” same‐day protocols with encouraging early outcomes.\n27\n, \n28\n, \n29\n, \n30\n\n\nDespite the observed delays and global decreases in elective surgical volumes, relatively little is known about the effects of the COVID‐19 pandemic on outcomes of breast reconstruction on a national scale. Understanding how these necessary changes have affected outcomes will inform decision‐making for breast cancer surgeons and patients, as the availability of inpatient surgery returns. To evaluate these trends in a large, national cohort, we surveyed the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database\n31\n to identify patients undergoing immediate alloplastic breast reconstruction to describe aggregate changes in surgical volume, patient demographics, comorbidities, procedure setting, early complications, and reoperation rates as a result of the COVID‐19 pandemic.", "Kolmogorov–Smirnov tests were performed to assess the normality of continuous variables, and these were reported as mean ± standard deviation or median (interquartile range [IQR]). 
χ² and Fisher's exact tests were performed to compare categorical data as appropriate, and Mann–Whitney U tests were performed to compare numerical data. Post hoc Fisher's exact tests with a Holm–Bonferroni correction for multiple comparisons were performed when indicated for categorical variables with more than two groups. Pairwise deletion was utilized when records were missing data to maximize data retained. p values were obtained from two‐tailed tests, and the significance level was predefined at α = 0.05.\nTo establish the safety of breast reconstruction in the COVID era, noninferiority tests were performed utilizing the Farrington–Manning approach and a risk difference margin of 1%.\n33\n In noninferiority analyses, p values were one‐sided, with a significance level of α = 0.05.\nMultivariable logistic regression was performed to further identify predictors independently associated with surgical complications. Year of operation, outpatient setting, and variables with pairwise associations with surgical complications with p < 0.20 were included as factors in the regression.\nGiven baseline differences in patients who underwent outpatient reconstruction, a propensity score‐matched analysis was performed to minimize differences between inpatient and outpatient cohorts with respect to known potential confounders. The covariates upon which patients were matched were again identified by pairwise associations with surgical complications with p < 0.20, with the exclusion of operative time, which is not defined before the decision to pursue an inpatient or an outpatient procedure, and COPD, which had a low count (n = 4 in the outpatient group), to prevent overfitting.\n34\n Surgery type was also added as a matching covariate. Propensity scores were calculated by a logistic regression model utilizing an optimal fixed ratio algorithm, minimizing the total difference in propensity scores across groups. 
A 1:1 match ratio and a caliper width of 0.25 standard deviations of the logit of the propensity score were used. The distribution of potential confounders before and after propensity score matching was evaluated by standardized mean differences (SMD), where an SMD ≤ 0.10 is considered nonsignificant. For categorical variables with more than 2 levels, the match was evaluated using χ\n2 tests.\nListwise deletion was utilized in logistic regression and propensity score analyses, excluding all records missing any of the predictor variables. All statistical analyses were performed in SAS Studio software version 3.8 (SAS Institute Inc.).", "The COVID‐19 pandemic accelerated the shift of immediate alloplastic breast reconstruction to the outpatient setting. We utilize the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database to examine trends in and safety of outpatient breast reconstruction during the pandemic." ]
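The Methods text above describes 1:1 matching on the logit of the propensity score with a caliper of 0.25 standard deviations of that logit, and balance checking via standardized mean differences (SMD ≤ 0.10 taken as balanced). A self-contained sketch of that recipe on toy scores; note this uses greedy nearest-neighbor matching, whereas the paper uses an optimal fixed-ratio algorithm in SAS, so it is illustrative only:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def greedy_caliper_match(treated, control, caliper):
    """1:1 greedy nearest-neighbor matching on the logit of the
    propensity score, discarding candidate pairs whose logit distance
    exceeds the caliper. Returns (treated_index, control_index) pairs."""
    used = set()
    pairs = []
    for ti, t in enumerate(treated):
        best, best_d = None, caliper
        for ci, c in enumerate(control):
            if ci in used:
                continue
            d = abs(logit(t) - logit(c))
            if d <= best_d:
                best, best_d = ci, d
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs

def smd(x, y):
    """Standardized mean difference; |SMD| <= 0.10 is read as balanced."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / math.sqrt((vx + vy) / 2)

# Toy propensity scores: treated on a 0.01 grid, controls on a finer
# 0.005 grid, so every treated unit has an exact counterpart.
treated = [0.30 + 0.01 * i for i in range(31)]   # 0.30 .. 0.60
control = [0.20 + 0.005 * j for j in range(81)]  # 0.20 .. 0.60
pooled_logits = [logit(p) for p in treated + control]
mu = sum(pooled_logits) / len(pooled_logits)
sd = math.sqrt(sum((v - mu) ** 2 for v in pooled_logits) / (len(pooled_logits) - 1))
pairs = greedy_caliper_match(treated, control, caliper=0.25 * sd)
matched_t = [treated[i] for i, _ in pairs]
matched_c = [control[j] for _, j in pairs]
```

Here every treated score finds its exact grid twin, so the post-match SMD on the score itself collapses to ~0; on real data one would compute SMDs covariate by covariate before and after matching, as in Table 5.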
[ null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION", "CONFLICTS OF INTEREST", "SYNOPSIS", "Supporting information" ]
[ "In recent years, an increasing number of women with early‐stage breast cancer have opted for mastectomy over breast‐conserving surgery.\n1\n, \n2\n, \n3\n While not all patients desire reconstruction, more than 130 000 breast reconstructions are performed annually, with alloplastic procedures representing approximately 75% of these.\n4\n The ideal timing for breast reconstruction is a multifactorial decision based on desired reconstructive approach, other medical comorbidities, and whether the patient will require adjuvant radiotherapy. Immediate reconstruction at the time of mastectomy offers psychological and economic benefits and therefore makes up the majority of breast reconstructions\n4\n, \n5\n, \n6\n, \n7\n, \n8\n; however, it has been associated with higher rates of surgical complications.\n9\n, \n10\n Traditionally, immediate reconstructions have been done on an inpatient basis to monitor drain output, provide pain control, and assess for complications such as hematoma and skin necrosis. In recent years, some centers have begun to offer immediate alloplastic reconstruction as an outpatient procedure, mirroring trends observed for isolated mastectomy and delayed alloplastic reconstruction.\n11\n, \n12\n Advances in the regional blockade and perioperative blocks have decreased postoperative pain, aiding this shift toward outpatient surgery.\n13\n Some studies have demonstrated higher patient satisfaction, lower costs, and comparable safety with outpatient surgery but were limited by smaller sample sizes, and this model has accounted for a minority of procedures in recent years.\n14\n, \n15\n, \n16\n, \n17\n\n\nOne factor which may accelerate the paradigm shift toward outpatient immediate reconstruction is the SARS‐CoV‐2 coronavirus disease 2019 (COVID‐19) pandemic. In early 2020, both the American College of Surgeons and the American Society of Plastic Surgeons (ASPS) issued recommendations that all nonurgent elective surgeries be canceled or postponed. 
In addition, they recommended that urgent elective procedures be shifted to the outpatient setting to conserve inpatient resources, specifically intensive care unit beds, and decrease the risk of transmission of the novel coronavirus.\n18\n, \n19\n, \n20\n, \n21\n Overall, there was a sharp decline in elective surgical volumes across multiple specialties, including oncologic surgery.\n22\n, \n23\n, \n24\n Given that cancer patients are at higher risk for serious complications and mortality from COVID‐19,\n25\n, \n26\n these recommendations provided a strong incentive to provide immediate reconstruction in the outpatient setting. To conserve resources while still providing care for breast cancer patients, some centers pivoted to “high‐efficiency” same‐day protocols with encouraging early outcomes.\n27\n, \n28\n, \n29\n, \n30\n\n\nDespite the observed delays and global decreases in elective surgical volumes, relatively little is known about the effects of the COVID‐19 pandemic on outcomes of breast reconstruction on a national scale. Understanding how these necessary changes have affected outcomes will inform decision‐making for breast cancer surgeons and patients, as the availability of inpatient surgery returns. To evaluate these trends in a large, national cohort, we surveyed the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database\n31\n to identify patients undergoing immediate alloplastic breast reconstruction to describe aggregate changes in surgical volume, patient demographics, comorbidities, procedure setting, early complications, and reoperation rates as a result of the COVID‐19 pandemic.", "This study was designated as nonhuman subjects research due to the deidentified nature of the data and thus exempt from IRB approval. 
To isolate the effects of COVID‐19, which was widespread in the United States by April 2020,\n32\n the NSQIP database was first filtered for procedures that took place in April through December 2019 and April through December 2020. These groups were then searched for implant‐ and expander‐based breast reconstruction using the following current procedural terminology (CPT) codes: 19340, insertion of breast implant on the day of mastectomy (direct‐to‐implant) and 19357, tissue expander placement, with simultaneous mastectomy (CPT codes 19301‐19307). Patients with CPT codes corresponding to more than one of these procedures were presumed to have undergone bilateral reconstruction. Procedures were also filtered by the International Classification of Diseases (ICD‐10) diagnosis code to isolate patients with breast cancer or carcinoma in situ diagnoses (categories C50, D05, or Z85.3) or undergoing prophylactic mastectomy (Z40.01, Z80.3). Patients undergoing concurrent autologous breast reconstruction were excluded (CPT codes 19361, 19364, 19367, 19368, 19369).\nThe procedures fitting these inclusion criteria formed the before‐COVID and during‐COVID groups, containing 4677 and 4083 procedures, respectively. Patient demographic data corresponding to each of these procedures was extracted, including age at operation, sex, race, ethnicity, height, and weight. Body mass index (BMI) was calculated from height and weight measures where available. Relevant comorbidities were also collected: smoking status within 1 year, diabetes, hypertension requiring medication, chronic steroid use, presence of disseminated cancer, severe chronic obstructive pulmonary disease (COPD), congestive heart failure within 30 days of surgery, and American Society of Anesthesiologists (ASA) classification. 
Perioperative data recorded were operative time and CPT codes of all simultaneous procedures.\nThe primary outcome of interest was the incidence of surgical complications within 30 days postoperatively, defined as a composite of wound dehiscence and superficial, deep, and organ/space surgical site infections. Secondary outcomes included unplanned reoperations within 30 days, length of stay postoperatively, and medical complications within 30 days, a composite of occurrences of deep venous thrombosis (DVT), pulmonary embolism, pneumonia, reintubation, acute kidney injury, urinary tract infection, sepsis, stroke, myocardial infarction, and cardiac arrest. Unplanned reoperations were then categorized into procedures for hematoma/seroma drainage, implant removal, drainage/debridement of infected structures, wound repairs, or other utilizing CPT and ICD‐10 diagnosis codes. All complications and reoperations were calculated per patient.\nTo further examine the safety of outpatient reconstruction specifically, procedures from 2019 to 2020 were pooled and subsequently divided into cohorts based on inpatient or outpatient (postoperative length of stay 0 days) procedures. Demographic, perioperative, and outcomes data were compared between groups as above.\n Statistical analysis Kolmogorov–Smirnov tests were performed to assess the normality of continuous variables, and these were reported as mean ± standard deviation or median (interquartile range [IQR]). χ\n2 and Fisher's exact tests were performed to compare categorical data as appropriate, and Mann–Whitney U tests were performed to compare numerical data. Post hoc Fisher's exact tests with a Holm–Bonferroni correction for multiple comparisons were performed when indicated for categorical variables with more than two groups. The pairwise deletion was utilized when records were missing data to maximize data retained. 
p Values were obtained from two‐tailed tests, and the significance level was predefined at α = 0.05.\nTo establish the safety of breast reconstruction in the COVID era, noninferiority tests were performed utilizing the Farrington–Manning approach and a risk difference margin of 1%.\n33\n In noninferiority analyses, p values were one‐sided, with a significance level of α = 0.05.\nMultivariable logistic regression was performed to further identify predictors independently associated with surgical complications. Year of operation, outpatient setting, and variables with pairwise associations with surgical complications with p < 0.20 were included as factors in the regression.\nGiven baseline differences in patients who underwent outpatient reconstruction, a propensity score‐matched analysis was performed to minimize differences between inpatient and outpatient cohorts with respect to known potential confounders. The covariates upon which patients were matched were again identified by pairwise associations with surgical complications with p < 0.20, with the exclusion of operative time, which is not defined before the decision to pursue an inpatient or an outpatient procedure, and COPD, which had a low count (n = 4 in the outpatient group), to prevent overfitting.\n34\n Surgery type was also added as a matching covariate. Propensity scores were calculated by a logistic regression model utilizing an optimal fixed ratio algorithm, minimizing the total difference in propensity scores across groups. A 1:1 match ratio and a caliper width of 0.25 standard deviations of the logit of the propensity score were used. The distribution of potential confounders before and after propensity score matching was evaluated by standardized mean differences (SMD), where an SMD ≤ 0.10 is considered nonsignificant. 
For categorical variables with more than 2 levels, the match was evaluated using χ² tests.\nListwise deletion was utilized in logistic regression and propensity score analyses, excluding all records missing any of the predictor variables. All statistical analyses were performed in SAS Studio software version 3.8 (SAS Institute Inc.).", "Kolmogorov–Smirnov tests were performed to assess the normality of continuous variables, and these were reported as mean ± standard deviation or median (interquartile range [IQR]). χ² and Fisher's exact tests were performed to compare categorical data as appropriate, and Mann–Whitney U tests were performed to compare numerical data. Post hoc Fisher's exact tests with a Holm–Bonferroni correction for multiple comparisons were performed when indicated for categorical variables with more than two groups. Pairwise deletion was utilized when records were missing data to maximize data retained. 
p Values were obtained from two‐tailed tests, and the significance level was predefined at α = 0.05.\nTo establish the safety of breast reconstruction in the COVID era, noninferiority tests were performed utilizing the Farrington–Manning approach and a risk difference margin of 1%.\n33\n In noninferiority analyses, p values were one‐sided, with a significance level of α = 0.05.\nMultivariable logistic regression was performed to further identify predictors independently associated with surgical complications. Year of operation, outpatient setting, and variables with pairwise associations with surgical complications with p < 0.20 were included as factors in the regression.\nGiven baseline differences in patients who underwent outpatient reconstruction, a propensity score‐matched analysis was performed to minimize differences between inpatient and outpatient cohorts with respect to known potential confounders. The covariates upon which patients were matched were again identified by pairwise associations with surgical complications with p < 0.20, with the exclusion of operative time, which is not defined before the decision to pursue an inpatient or an outpatient procedure, and COPD, which had a low count (n = 4 in the outpatient group), to prevent overfitting.\n34\n Surgery type was also added as a matching covariate. Propensity scores were calculated by a logistic regression model utilizing an optimal fixed ratio algorithm, minimizing the total difference in propensity scores across groups. A 1:1 match ratio and a caliper width of 0.25 standard deviations of the logit of the propensity score were used. The distribution of potential confounders before and after propensity score matching was evaluated by standardized mean differences (SMD), where an SMD ≤ 0.10 is considered nonsignificant. 
For categorical variables with more than 2 levels, the match was evaluated using χ\n2 tests.\nListwise deletion was utilized in logistic regression and propensity score analyses, excluding all records missing any of the predictor variables. All statistical analyses were performed in SAS Studio software version 3.8 (SAS Institute Inc.).", "During the COVID‐19 pandemic, breast reconstruction case volumes and overall surgical case volume reported in the NSQIP database decreased (Figure 1). The database contains a total of 806 016 procedures from April to December 2019 and 644 061 from these months in 2020. In the before‐COVID period, 4677 direct‐to‐implant and immediate tissue expander procedures were reported, accounting for 580 per 100 000 total cases. In the corresponding months of 2020, 4083 of the same procedures were reported, a decrease of 13%; however, these made up 633 per 100 000 of all the surgeries reported, a significant increase in proportion (p = 0.001, Table 1). Notably, there was an increase in the proportion of alloplastic reconstructions performed as outpatient procedures from 10% before COVID to 31% in 2020 during COVID (p < 0.001), a change that is seen most prominently in Quarter 2 of 2020 but is consistently elevated for the remainder of the year (p < 0.001 each quarter).\nCase volume of all immediate alloplastic breast reconstructions recorded in the National Surgical Quality Improvement Program over 2019 and 2020, and outpatient immediate alloplastic reconstructions. 
The proportion of outpatient reconstructions was significantly higher for each quarter in 2020 than in the corresponding quarter in 2019 (p < 0.001)\nDemographic data and comorbidities of immediate alloplastic breast reconstruction patients before‐ and during‐COVID\nAbbreviations: ASA, American Society of Anesthesiologists; BMI, body mass index; CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; IQR, interquartile range; NSQIP, National Surgical Quality Improvement Program.\nPatients underwent reconstruction at a median of 50 years of age (IQR: 42–60). Demographics differed between the before‐COVID and during‐COVID cohorts, with the latter group being more often African American (13% vs. 11%) and less often white (80% vs. 82%, p = 0.003), and more often of Hispanic ethnicity (11% vs. 9.5%, p = 0.01; Table 1). Comorbidities generally did not differ across the two cohorts, with the exception of overall higher ASA class in the during‐COVID group (65% vs. 67% Class II, 31% vs. 28% Class III, p = 0.01). During the pandemic, fewer direct‐to‐implant procedures were performed than before COVID (13% vs. 17%, p < 0.001), and the use of acellular dermal matrix became more common (Table 2).\nSurgical characteristics of implant and expander‐based breast reconstruction before and during COVID\nAbbreviation: IQR, interquartile range.\nRates of postoperative complications, however, did not significantly increase. In the before‐COVID cohort, there were 306 surgical complications (6.5%, 95% confidence interval [CI]: 5.9%–7.3%), while in the during‐COVID cohort, there were 244 (6.0%, 95% CI: 5.3%–6.8%, p = 0.001 for noninferiority; Table 3; Figure 2A). The incidence of medical complications in the during‐COVID cohort was 1.1% (95% CI: 0.8%–1.4%), noninferior to the 1.5% in the before‐COVID cohort (95% CI: 1.2%–1.9%, p < 0.001). 
The rate of reoperations during COVID was 7.5% (95% CI: 6.7%–8.3%), also noninferior to before COVID (7.4%, p = 0.049).\nPostoperative outcomes by cohort\nAbbreviations: DVTs, deep venous thromboses; SSI, surgical site infection.\nIncidence of surgical complications, medical complications, and reoperations in (A) 2019 versus 2020 and (B) inpatient versus outpatient reconstruction. A dashed line indicates a noninferiority margin of a 1 percentage point increase in complications. Horizontal lines indicate 95% confidence intervals\nTo adjust for patients' baseline differences, multivariable logistic regression was performed. Bivariate analyses identified age, BMI, operative time, race, ethnicity, diabetes, smoking, hypertension, and COPD as being potentially associated with surgical complication rates (p < 0.2); thus, these were included alongside the year of operation and outpatient procedure as covariates in the regression. The results confirmed that when accounting for the potential confounders listed above, the year of operation was not associated with surgical complications, with an odds ratio (OR) of 0.93 (95% CI: 0.76–1.14, p = 0.48). Outpatient status was similarly independent of surgical complications (OR: 0.90, 95% CI: 0.68–1.20, p = 0.48; Table 4).\nResults of multivariate logistic regression to identify predictors of surgical complications\n\nNote: Model adjusted R\n2 = 0.0433.\nAbbreviations: BMI, body mass index; CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease.\nTo verify these findings, analyses were performed utilizing outpatient surgery as the exposure of interest. Baseline data of patients undergoing inpatient and outpatient surgery did differ significantly, with outpatients tending to have fewer comorbidities, including lower rates of obesity and smoking, and lower ASA class (p < 0.05 for all; Table S1). Outpatient procedures were more often direct‐to‐implant (23% vs. 
13%, p < 0.001) and had shorter operative times (p < 0.001).\nTo minimize the impact of these baseline differences, propensity score‐matched outpatient and inpatient cohorts were generated, matched on age, BMI, race, ethnicity, diabetes, smoking, hypertension, and procedure type. The resulting groups encompassed 1371 patients each, with a good balance of all specified covariates (Tables 5 and S2). In these matched groups, outpatient surgery was noninferior to inpatient surgery in all measured domains. Outpatients had a 5.0% incidence of surgical complications (95% CI: 3.9%–6.3%) as compared to 5.5% in inpatients (p = 0.03 for noninferiority). Similarly, outpatients had noninferior rates of medical complications (0.8%, 95% CI: 0.4%–1.4%; vs. 1.4%) and reoperations (5.2%, 95% CI: 4.1%–6.5%; vs. 8.0%; p < 0.001 for noninferiority for both; Table 6, Figure 2B).\nStandardized mean differences (SMDs) for covariates used in propensity score matching\n\nNote: Covariates were chosen by bivariate analysis showing association with surgical complications with p < 0.20, with the addition of surgery type. Race, diabetes, and surgery type are shown separately in Table S2. Age and BMI are reported as median (IQR), and the remaining variables are reported as counts (%).\nAbbreviations: BMI, body mass index; SMD, standardized mean difference.\nPostoperative outcomes in inpatient and outpatient surgery with propensity score‐matched cohorts\nAbbreviations: DVTs, deep venous thromboses; SSI, surgical site infection.\nHowever, the incidence of secondary operations was also statistically significantly lower in the outpatient cohort, largely driven by a decrease in operations for hematoma and seroma drainage to 1.4%, half of the 2.8% seen in the inpatient cohort (p = 0.003; Table 6). This decrease was present primarily during postoperative days 0–1 (0.6% vs. 1.4%, p = 0.03), but trended towards significance during days 2–30 (0.8% vs. 1.5%, p = 0.10). 
Reoperations for implant removal, infectious complications, and wound repair did not differ significantly (p ≥ 0.44).", "In this study, a 13% decrease in immediate alloplastic breast reconstruction cases was observed during the COVID‐19 pandemic in the United States, with a concurrent increase in the proportion performed as outpatient procedures, from 10% to 31%. Noninferiority of alloplastic reconstruction both during the COVID‐19 pandemic and in the outpatient setting was demonstrated, the latter utilizing a propensity‐score matched analysis. Outpatient reconstruction was associated with surgical complications in 5.0% of patients, medical complications such as DVTs and urinary tract infections in 0.8%, and unplanned reoperations in 5.2% of patients, with no significant increase over inpatient reconstructions.\nDespite the decrease in overall recorded case volumes in NSQIP, immediate alloplastic breast reconstruction made up a significantly larger proportion of the cases represented in the NSQIP database in the included months in 2020 than in the same months in 2019. 
This trend may represent the rather unique characteristics of immediate alloplastic breast reconstruction as particularly amenable to continuation during the pandemic: it is simultaneously semi‐urgent, given its relationship to oncologic care, and can be performed on an outpatient basis with reasonable precedent.\n12\n, \n35\n Thus, while elective, nononcologic surgeries such as benign gynecologic procedures, as well as more complex inpatient procedures such as colorectal cancer resection, experienced 39%–60% volume reductions, the relatively milder decrease in alloplastic breast reconstruction may have led to a relative overrepresentation in the NSQIP database.\n36\n, \n37\n\n\nThese findings are consistent with a recent national survey of members of the ASPS, which reported an increase in breast reconstruction cases from 136 000 in 2019 to almost 138 000 in 2020 among its members.\n4\n At the same time, the ASPS recommended in March 2020 that autologous reconstructions are delayed and immediate alloplastic reconstruction be performed on a case‐by‐case basis, depending on resource availability, the patient's comorbidities, and the likelihood of complications. Several studies performed since have shown the strict institutional restrictions placed on all autologous breast reconstructions, with 47.7% of surgeons reporting performing only alloplastic reconstructions during the pandemic, while 15.9% reported restrictions on all forms of breast reconstruction.\n30\n, \n38\n, \n39\n, \n40\n\n\nSomewhat counterintuitively, there was a decrease in direct‐to‐implant reconstruction during the pandemic, with more patients undergoing tissue expander placement, despite the likely need for more frequent office visits for expansion and a possible second operation for exchange for permanent implants. 
However, given the multitude of patient and surgeon factors that inform the decision between implant and tissue expander placement, such as incision type, preoperative breast size, and intraoperative mastectomy skin flap quality, it is unlikely that the coronavirus pandemic was the sole causative factor for this shift.\n41\n, \n42\n, \n43\n\n\nIn addition, our finding that outpatient alloplastic breast reconstruction during the COVID era is noninferior to the historic standard of care comports with the results of several single‐institution studies. Of note, Faulkner et al. examined immediate reconstruction during the first 3 months of COVID‐19 restrictions and found no difference in operative and nonoperative complications in comparison to the 3 preceding months, with an emphasis on same‐day discharge whenever medically possible.\n44\n Other centers had begun a shift towards outpatient procedures even before the pandemic, with similarly encouraging results.\n14\n, \n17\n In this study, outpatient reconstruction was also noninferior in medical complication rates. While these outcomes were rare and not individually significant, decreases in venous thromboses and thromboembolic events have been shown for other surgical procedures when performed in the outpatient rather than the inpatient setting and may be attributable to early mobilization.\n14\n, \n45\n, \n46\n\n\nImportantly, outpatient reconstruction had a significantly lower rate of unplanned secondary procedures than did inpatient operations, in particular for drainage of hematoma and seroma. There are several potential explanations for this trend, the most concerning of which is the potential undertreatment of outpatients due to a lack of observation or close follow‐up on postoperative day 1, particularly relevant for the diagnosis of hematomas. Dumestre et al. 
previously demonstrated no association between outpatient reconstruction and hematoma formation, though in a relatively small cohort of 69 patients.\n17\n Seth et al. similarly examined a large cohort of immediate breast reconstructions and determined that no patient or surgical factors independently increase the risk of hematoma, although the postoperative length of stay was not analyzed.\n47\n The literature seems to show that the incidence of hematoma development is relatively constant across patients, which suggests the decrease observed here could represent underdiagnosis and undertreatment. Although an uncommon complication, the development of hematoma and seroma after an outpatient reconstruction may warrant further investigation; in particular, as the pandemic continues and outpatient alloplastic reconstruction becomes more common, close postoperative follow‐up and adequate patient education and counseling are crucial to identify true complication rates and improve the patient experience.\nThis study has several limitations inherent to retrospective studies and the nature of the NSQIP database. While it does report a plethora of postoperative complications, outcomes more specific to alloplastic breast reconstruction such as hematoma and seroma can only be inferred from ICD‐10 and CPT codes associated with a secondary procedure or readmission; therefore, our findings may underestimate the true burden of these complications in this cohort and overlook patients who did not undergo drainage of the fluid collection. Similarly, the NSQIP does not provide details regarding the site of wound complications, thus precluding a per‐breast analysis. As a result, we may overestimate the incidence of wound complications on a per‐implant basis; however, given that the rates of bilateral reconstruction were similar across cohorts, this effect is non‐differential and should not alter the comparisons between years. 
Furthermore, specific patient and treatment characteristics such as cancer stage, adjuvant radiotherapy, prepectoral/subpectoral placement of implants, or the use of newer agents such as tranexamic acid to prevent hematoma are not reported in the database, and thus may confound our analysis of postoperative complications. Adjuvant use of tranexamic acid has become prevalent at our institution and others across the country during this time to mitigate hematoma formation. This therapeutic modality needs to be investigated further to examine the opportunity for conversion of inpatient procedures to outpatient procedures as the safety profile improves.\n48\n Similarly, because outcomes beyond 30 days postoperatively are not available, the relationship between outpatient reconstruction and long‐term complications and patient satisfaction could not be elucidated from these data.\nGiven that many of these procedures took place during a global pandemic and the waxing and waning nature of COVID‐19 cases and restrictions, changes in patient behavior were likely heterogeneous. A survey of breast cancer patients in April 2020 showed that 14.4% of patients had a higher threshold for contacting their breast cancer physician due to the pandemic, and this number decreased to 7.5% in November 2020.\n49\n\n", "\nIn the context of the COVID‐19 pandemic, at a time when conservation of hospital resources continues to be of great importance, we find that immediate alloplastic breast reconstruction is a generally safe procedure in the outpatient setting, with no increase in short‐term surgical complications, medical complications, or reoperations. However, further prospective study is necessary to validate the decreased reoperation rates in outpatients and clarify possible underlying causes. 
Close postoperative follow‐up, particularly in the initial days after surgery, remains essential in patients undergoing breast reconstruction in the inpatient or outpatient setting.\nThe COVID‐19 pandemic has forced us to re‐examine surgical dogma: crisis forces innovation. Healthier patients were given the opportunity to undergo what was previously an inpatient procedure as an outpatient, with encouraging results; thus, what constitutes an acceptable outpatient procedure may now be continuously reevaluated in this ongoing time of need.", "Justin M. Sacks is a cofounder and equity holder of LifeSprout, and a consultant for 3M. Other authors declare no conflicts of interest.", "The COVID‐19 pandemic accelerated the shift of immediate alloplastic breast reconstruction to the outpatient setting. We utilize the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database to examine trends in and safety of outpatient breast reconstruction during the pandemic.", "Supporting information.\nClick here for additional data file.\nSupporting information.\nClick here for additional data file." ]
[ null, "materials-and-methods", null, "results", "discussion", "conclusions", "COI-statement", null, "supplementary-material" ]
[ "alloplastic breast reconstruction", "breast reconstruction", "COVID‐19", "immediate breast reconstruction", "outpatient breast reconstruction", "surgical complications" ]
INTRODUCTION: In recent years, an increasing number of women with early‐stage breast cancer have opted for mastectomy over breast‐conserving surgery. 1 , 2 , 3 While not all patients desire reconstruction, more than 130 000 breast reconstructions are performed annually, with alloplastic procedures representing approximately 75% of these. 4 The ideal timing for breast reconstruction is a multifactorial decision based on desired reconstructive approach, other medical comorbidities, and whether the patient will require adjuvant radiotherapy. Immediate reconstruction at the time of mastectomy offers psychological and economic benefits and therefore makes up the majority of breast reconstructions 4 , 5 , 6 , 7 , 8 ; however, it has been associated with higher rates of surgical complications. 9 , 10 Traditionally, immediate reconstructions have been done on an inpatient basis to monitor drain output, provide pain control, and assess for complications such as hematoma and skin necrosis. In recent years, some centers have begun to offer immediate alloplastic reconstruction as an outpatient procedure, mirroring trends observed for isolated mastectomy and delayed alloplastic reconstruction. 11 , 12 Advances in regional blockade and perioperative nerve blocks have decreased postoperative pain, aiding this shift toward outpatient surgery. 13 Some studies have demonstrated higher patient satisfaction, lower costs, and comparable safety with outpatient surgery but were limited by smaller sample sizes, and this model has accounted for a minority of procedures in recent years. 14 , 15 , 16 , 17 One factor that may accelerate the paradigm shift toward outpatient immediate reconstruction is the SARS‐CoV‐2 coronavirus disease 2019 (COVID‐19) pandemic. In early 2020, both the American College of Surgeons and the American Society of Plastic Surgeons (ASPS) issued recommendations that all nonurgent elective surgeries be canceled or postponed. 
In addition, they recommended that urgent elective procedures be shifted to the outpatient setting to conserve inpatient resources, specifically intensive care unit beds, and decrease the risk of transmission of the novel coronavirus. 18 , 19 , 20 , 21 Overall, there was a sharp decline in elective surgical volumes across multiple specialties, including oncologic surgery. 22 , 23 , 24 Given that cancer patients are at higher risk for serious complications and mortality from COVID‐19, 25 , 26 these recommendations provided a strong incentive to perform immediate reconstruction in the outpatient setting. To conserve resources while still providing care for breast cancer patients, some centers pivoted to “high‐efficiency” same‐day protocols with encouraging early outcomes. 27 , 28 , 29 , 30 Despite the observed delays and global decreases in elective surgical volumes, relatively little is known about the effects of the COVID‐19 pandemic on outcomes of breast reconstruction on a national scale. Understanding how these necessary changes have affected outcomes will inform decision‐making for breast cancer surgeons and patients, as the availability of inpatient surgery returns. To evaluate these trends in a large, national cohort, we surveyed the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database 31 to identify patients undergoing immediate alloplastic breast reconstruction and to describe aggregate changes in surgical volume, patient demographics, comorbidities, procedure setting, early complications, and reoperation rates as a result of the COVID‐19 pandemic. MATERIALS AND METHODS: This study was designated as nonhuman subjects research due to the deidentified nature of the data and was thus exempt from IRB approval. 
To isolate the effects of COVID‐19, which was widespread in the United States by April 2020, 32 the NSQIP database was first filtered for procedures that took place in April through December 2019 and April through December 2020. These groups were then searched for implant‐ and expander‐based breast reconstruction using the following current procedural terminology (CPT) codes: 19340, insertion of breast implant on the day of mastectomy (direct‐to‐implant) and 19357, tissue expander placement, with simultaneous mastectomy (CPT codes 19301‐19307). Patients with CPT codes corresponding to more than one of these procedures were presumed to have undergone bilateral reconstruction. Procedures were also filtered by the International Classification of Diseases (ICD‐10) diagnosis code to isolate patients with breast cancer or carcinoma in situ diagnoses (categories C50, D05, or Z85.3) or undergoing prophylactic mastectomy (Z40.01, Z80.3). Patients undergoing concurrent autologous breast reconstruction were excluded (CPT codes 19361, 19364, 19367, 19368, 19369). The procedures fitting these inclusion criteria formed the before‐COVID and during‐COVID groups, containing 4677 and 4083 procedures, respectively. Patient demographic data corresponding to each of these procedures was extracted, including age at operation, sex, race, ethnicity, height, and weight. Body mass index (BMI) was calculated from height and weight measures where available. Relevant comorbidities were also collected: smoking status within 1 year, diabetes, hypertension requiring medication, chronic steroid use, presence of disseminated cancer, severe chronic obstructive pulmonary disease (COPD), congestive heart failure within 30 days of surgery, and American Society of Anesthesiologists (ASA) classification. Perioperative data recorded were operative time and CPT codes of all simultaneous procedures. 
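The CPT/ICD‐10 cohort-selection logic described above can be sketched as follows. This is an illustrative pandas version: the column names (`cpt_codes`, `icd10_dx`) are hypothetical stand-ins, not the actual ACS‐NSQIP PUF schema.

```python
import pandas as pd

RECON_CPT = {"19340", "19357"}                          # direct-to-implant; tissue expander
MASTECTOMY_CPT = {str(c) for c in range(19301, 19308)}  # CPT 19301-19307
AUTOLOGOUS_CPT = {"19361", "19364", "19367", "19368", "19369"}
DX_PREFIXES = ("C50", "D05", "Z85.3", "Z40.01", "Z80.3")

def select_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Keep records with a qualifying reconstruction CPT plus a simultaneous
    mastectomy CPT and a qualifying diagnosis; exclude concurrent autologous
    reconstruction codes."""
    has_recon = df["cpt_codes"].apply(lambda codes: any(c in RECON_CPT for c in codes))
    has_mastectomy = df["cpt_codes"].apply(lambda codes: any(c in MASTECTOMY_CPT for c in codes))
    no_autologous = df["cpt_codes"].apply(lambda codes: not any(c in AUTOLOGOUS_CPT for c in codes))
    dx_ok = df["icd10_dx"].apply(lambda dx: dx.startswith(DX_PREFIXES))
    cohort = df[has_recon & has_mastectomy & no_autologous & dx_ok].copy()
    # More than one qualifying reconstruction code on a record is presumed bilateral
    cohort["bilateral"] = cohort["cpt_codes"].apply(
        lambda codes: sum(c in RECON_CPT for c in codes) > 1)
    return cohort
```

The same filters would be applied separately to the April–December windows of 2019 and 2020 to form the before‐COVID and during‐COVID groups.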
The primary outcome of interest was the incidence of surgical complications within 30 days postoperatively, defined as a composite of wound dehiscence and superficial, deep, and organ/space surgical site infections. Secondary outcomes included unplanned reoperations within 30 days, length of stay postoperatively, and medical complications within 30 days, a composite of occurrences of deep venous thrombosis (DVT), pulmonary embolism, pneumonia, reintubation, acute kidney injury, urinary tract infection, sepsis, stroke, myocardial infarction, and cardiac arrest. Unplanned reoperations were then categorized into procedures for hematoma/seroma drainage, implant removal, drainage/debridement of infected structures, wound repairs, or other utilizing CPT and ICD‐10 diagnosis codes. All complications and reoperations were calculated per patient. To further examine the safety of outpatient reconstruction specifically, procedures from 2019 to 2020 were pooled and subsequently divided into cohorts based on inpatient or outpatient (postoperative length of stay 0 days) procedures. Demographic, perioperative, and outcomes data were compared between groups as above. Statistical analysis: Kolmogorov–Smirnov tests were performed to assess the normality of continuous variables, and these were reported as mean ± standard deviation or median (interquartile range [IQR]). χ² and Fisher's exact tests were performed to compare categorical data as appropriate, and Mann–Whitney U tests were performed to compare numerical data. Post hoc Fisher's exact tests with a Holm–Bonferroni correction for multiple comparisons were performed when indicated for categorical variables with more than two groups. Pairwise deletion was utilized when records were missing data to maximize data retained. p values were obtained from two‐tailed tests, and the significance level was predefined at α = 0.05. 
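The comparison machinery can be sketched in Python (the study used SAS Studio; the SciPy equivalents and function names below are illustrative):

```python
import numpy as np
from scipy import stats

def compare_numerical(a, b):
    """Two-sided Mann-Whitney U test for a numerical variable."""
    return stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

def compare_categorical(table):
    """Chi-squared test on a contingency table (Fisher's exact test would be
    substituted for sparse 2x2 tables, as in the text)."""
    _, p, _, _ = stats.chi2_contingency(table)
    return p

def holm_bonferroni(pvals, alpha=0.05):
    """Holm step-down correction: walk the p-values in ascending order and
    reject while p_(i) <= alpha / (m - i); stop at the first failure."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(pvals)):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject
```

For example, `holm_bonferroni([0.01, 0.04, 0.03])` rejects only the first hypothesis: 0.01 ≤ 0.05/3, but the next-smallest p-value, 0.03, exceeds 0.05/2.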
To establish the safety of breast reconstruction in the COVID era, noninferiority tests were performed utilizing the Farrington–Manning approach and a risk difference margin of 1%. 33 In noninferiority analyses, p values were one‐sided, with a significance level of α = 0.05. Multivariable logistic regression was performed to further identify predictors independently associated with surgical complications. Year of operation, outpatient setting, and variables with pairwise associations with surgical complications with p < 0.20 were included as factors in the regression. Given baseline differences in patients who underwent outpatient reconstruction, a propensity score‐matched analysis was performed to minimize differences between inpatient and outpatient cohorts with respect to known potential confounders. The covariates upon which patients were matched were again identified by pairwise associations with surgical complications with p < 0.20, with the exclusion of operative time, which is not defined before the decision to pursue an inpatient or an outpatient procedure, and COPD, which had a low count (n = 4 in the outpatient group), to prevent overfitting. 34 Surgery type was also added as a matching covariate. Propensity scores were calculated by a logistic regression model, and matching utilized an optimal fixed ratio algorithm, minimizing the total difference in propensity scores across groups. A 1:1 match ratio and a caliper width of 0.25 standard deviations of the logit of the propensity score were used. The distribution of potential confounders before and after propensity score matching was evaluated by standardized mean differences (SMD), where an SMD ≤ 0.10 is considered nonsignificant. For categorical variables with more than 2 levels, the match was evaluated using χ² tests. Listwise deletion was utilized in logistic regression and propensity score analyses, excluding all records missing any of the predictor variables. 
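A minimal sketch of the matching step follows. For brevity it uses greedy 1:1 nearest-neighbor matching on the logit of the propensity score (the study used an optimal fixed-ratio algorithm, which solves the assignment globally); all inputs and names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_1to1(X, treated, caliper_sd=0.25, seed=0):
    """Greedy 1:1 nearest-neighbor match (without replacement) on the logit
    of the propensity score, with a caliper of `caliper_sd` standard
    deviations of the logit."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1.0 - ps))
    caliper = caliper_sd * logit.std()
    rng = np.random.default_rng(seed)
    treated_idx = np.flatnonzero(treated)
    rng.shuffle(treated_idx)                 # greedy matching is order-dependent
    controls = list(np.flatnonzero(~treated))
    pairs = []
    for t in treated_idx:
        if not controls:
            break
        dists = np.abs(logit[controls] - logit[t])
        j = int(np.argmin(dists))
        if dists[j] <= caliper:
            pairs.append((t, controls.pop(j)))
    return pairs

def smd(x_treated, x_control):
    """Standardized mean difference; |SMD| <= 0.10 is taken as balanced."""
    pooled_sd = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2)
    return (np.mean(x_treated) - np.mean(x_control)) / pooled_sd
```

After matching, `smd` would be recomputed for each covariate to confirm balance, as reported in Table 5 of the study.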
All statistical analyses were performed in SAS Studio software version 3.8 (SAS Institute Inc.).
RESULTS: During the COVID‐19 pandemic, breast reconstruction case volumes and overall surgical case volume reported in the NSQIP database decreased (Figure 1). 
The database contains a total of 806 016 procedures from April to December 2019 and 644 061 from these months in 2020. In the before‐COVID period, 4677 direct‐to‐implant and immediate tissue expander procedures were reported, accounting for 580 per 100 000 total cases. In the corresponding months of 2020, 4083 of the same procedures were reported, a decrease of 13%; however, these made up 633 per 100 000 of all the surgeries reported, a significant increase in proportion (p = 0.001, Table 1). Notably, there was an increase in the proportion of alloplastic reconstructions performed as outpatient procedures from 10% before COVID to 31% in 2020 during COVID (p < 0.001), a change that is seen most prominently in Quarter 2 of 2020 but is consistently elevated for the remainder of the year (p < 0.001 each quarter).
Figure 1: Case volume of all immediate alloplastic breast reconstructions recorded in the National Surgical Quality Improvement Program over 2019 and 2020, and outpatient immediate alloplastic reconstructions. The proportion of outpatient reconstructions was significantly higher for each quarter in 2020 than in the corresponding quarter in 2019 (p < 0.001).
Table 1: Demographic data and comorbidities of immediate alloplastic breast reconstruction patients before‐ and during‐COVID. Abbreviations: ASA, American Society of Anesthesiologists; BMI, body mass index; CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease; IQR, interquartile range; NSQIP, National Surgical Quality Improvement Program.
Patients underwent reconstruction at a median of 50 years of age (IQR: 42–60). Demographics differed between the before‐COVID and during‐COVID cohorts, with the latter group being more often African American (13% vs. 11%) and less often white (80% vs. 82%, p = 0.003), and more often of Hispanic ethnicity (11% vs. 9.5%, p = 0.01; Table 1). 
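The case-mix shift can be recomputed directly from the counts reported above; this SciPy check is illustrative (the original analysis was done in SAS):

```python
from scipy.stats import chi2_contingency

# Counts reported above: immediate alloplastic reconstructions vs. all other NSQIP cases
recon_2019, total_2019 = 4677, 806_016
recon_2020, total_2020 = 4083, 644_061

table = [[recon_2019, total_2019 - recon_2019],
         [recon_2020, total_2020 - recon_2020]]
_, p, _, _ = chi2_contingency(table)        # two-proportion chi-squared test

per_100k_2019 = 1e5 * recon_2019 / total_2019   # ~580 per 100 000
per_100k_2020 = 1e5 * recon_2020 / total_2020   # ~634 per 100 000 (reported as 633, within rounding)
```

The absolute case count fell by about 13%, while the proportion of all recorded cases rose significantly, consistent with the values in the text.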
Comorbidities generally did not differ across the two cohorts, with the exception of overall higher ASA class in the during‐COVID group (65% vs. 67% Class II, 31% vs. 28% Class III, p = 0.01). During the pandemic, fewer direct‐to‐implant procedures were performed than before COVID (13% vs. 17%, p < 0.001), and the use of acellular dermal matrix became more common (Table 2).
Table 2: Surgical characteristics of implant and expander‐based breast reconstruction before and during COVID. Abbreviation: IQR, interquartile range.
Rates of postoperative complications, however, did not significantly increase. In the before‐COVID cohort, there were 306 surgical complications (6.5%, 95% confidence interval [CI]: 5.9%–7.3%), while in the during‐COVID cohort, there were 244 (6.0%, 95% CI: 5.3%–6.8%, p = 0.001 for noninferiority; Table 3; Figure 2A). The incidence of medical complications in the during‐COVID cohort was 1.1% (95% CI: 0.8%–1.4%), noninferior to the 1.5% in the before‐COVID cohort (95% CI: 1.2%–1.9%, p < 0.001). The rate of reoperations during COVID was 7.5% (95% CI: 6.7%–8.3%), also noninferior to before COVID (7.4%, p = 0.049).
Table 3: Postoperative outcomes by cohort. Abbreviations: DVTs, deep venous thromboses; SSI, surgical site infection.
Figure 2: Incidence of surgical complications, medical complications, and reoperations in (A) 2019 versus 2020 and (B) inpatient versus outpatient reconstruction. A dashed line indicates a noninferiority margin of a 1 percentage point increase in complications. Horizontal lines indicate 95% confidence intervals.
To adjust for patients' baseline differences, multivariable logistic regression was performed. Bivariate analyses identified age, BMI, operative time, race, ethnicity, diabetes, smoking, hypertension, and COPD as being potentially associated with surgical complication rates (p < 0.2); thus, these were included alongside the year of operation and outpatient procedure as covariates in the regression. 
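The noninferiority comparison for surgical complications can be reproduced from the counts above. A one-sided Wald risk-difference test is shown as a simplification; the study used the Farrington–Manning score test, which differs mainly in using a restricted-MLE variance estimate, and the two agree closely at these sample sizes.

```python
import math
from statistics import NormalDist

def noninferiority_p(x_new, n_new, x_ref, n_ref, margin=0.01):
    """One-sided Wald test of H0: p_new - p_ref >= margin (inferior)
    against H1: p_new - p_ref < margin (noninferior)."""
    p_new, p_ref = x_new / n_new, x_ref / n_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    z = (p_new - p_ref - margin) / se
    return NormalDist().cdf(z)

# Surgical complications: 244/4083 during COVID vs. 306/4677 before COVID
p = noninferiority_p(244, 4083, 306, 4677)   # ~0.001, in line with the reported value
```

With a 1 percentage point margin, the during‐COVID complication rate (6.0% vs. 6.5%) yields a one-sided p of roughly 0.001, matching the noninferiority result in the text.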
The results confirmed that when accounting for the potential confounders listed above, the year of operation was not associated with surgical complications, with an odds ratio (OR) of 0.93 (95% CI: 0.76–1.14, p = 0.48). Outpatient status was similarly independent of surgical complications (OR: 0.90, 95% CI: 0.68–1.20, p = 0.48; Table 4).
Table 4: Results of multivariate logistic regression to identify predictors of surgical complications. Note: Model adjusted R² = 0.0433. Abbreviations: BMI, body mass index; CHF, congestive heart failure; COPD, chronic obstructive pulmonary disease.
To verify these findings, analyses were performed utilizing outpatient surgery as the exposure of interest. Baseline data of patients undergoing inpatient and outpatient surgery did differ significantly, with outpatients tending to have fewer comorbidities, including lower rates of obesity and smoking, and lower ASA class (p < 0.05 for all; Table S1). Outpatient procedures were more often direct‐to‐implant (23% vs. 13%, p < 0.001) and had shorter operative times (p < 0.001). To minimize the impact of these baseline differences, propensity score‐matched outpatient and inpatient cohorts were generated, matched on age, BMI, race, ethnicity, diabetes, smoking, hypertension, and procedure type. The resulting groups encompassed 1371 patients each, with a good balance of all specified covariates (Tables 5 and S2). In these matched groups, outpatient surgery was noninferior to inpatient surgery in all measured domains. Outpatients had a 5.0% incidence of surgical complications (95% CI: 3.9%–6.3%) as compared to 5.5% in inpatients (p = 0.03 for noninferiority). Similarly, outpatients had noninferior rates of medical complications (0.8%, 95% CI: 0.4%–1.4%; vs. 1.4%) and reoperations (5.2%, 95% CI: 4.1%–6.5%; vs. 8.0%; p < 0.001 for noninferiority for both; Table 6, Figure 2B). 
Table 5: Standardized mean differences (SMDs) for covariates used in propensity score matching. Note: Covariates were chosen by bivariate analysis showing association with surgical complications with p < 0.20, with the addition of surgery type. Race, diabetes, and surgery type are shown separately in Table S2. Age and BMI are reported as median (IQR), and the remaining variables are reported as counts (%). Abbreviations: BMI, body mass index; SMD, standardized mean difference.
Table 6: Postoperative outcomes in inpatient and outpatient surgery with propensity score‐matched cohorts. Abbreviations: DVTs, deep venous thromboses; SSI, surgical site infection.
However, the incidence of secondary operations was also statistically significantly lower in the outpatient cohort, largely driven by a decrease in operations for hematoma and seroma drainage to 1.4%, half of the 2.8% seen in the inpatient cohort (p = 0.003; Table 6). This decrease was present primarily during postoperative days 0–1 (0.6% vs. 1.4%, p = 0.03), but trended towards significance during days 2–30 (0.8% vs. 1.5%, p = 0.10). Reoperations for implant removal, infectious complications, and wound repair did not differ significantly (p ≥ 0.44).
DISCUSSION: In this study, a 13% decrease in immediate alloplastic breast reconstruction cases was observed during the COVID‐19 pandemic in the United States, with a concurrent increase in the proportion performed as outpatient procedures, from 10% to 31%. Noninferiority of alloplastic reconstruction both during the COVID‐19 pandemic and in the outpatient setting was demonstrated, the latter utilizing a propensity‐score matched analysis. Outpatient reconstruction was associated with surgical complications in 5.0% of patients, medical complications such as DVTs and urinary tract infections in 0.8%, and unplanned reoperations in 5.2% of patients, with no significant increase over inpatient reconstructions. 
Despite the decrease in overall recorded case volumes in NSQIP, immediate alloplastic breast reconstruction made up a significantly larger proportion of the cases represented in the NSQIP database in the included months in 2020 than in the same months in 2019. This trend may represent the unique characteristics of immediate alloplastic breast reconstruction as particularly amenable to continuation during the pandemic: it is simultaneously semi‐urgent, given its relationship to oncologic care, and can be performed on an outpatient basis with reasonable precedent. 12 , 35 Thus, while elective, nononcologic surgeries such as benign gynecologic procedures, as well as more complex inpatient procedures such as colorectal cancer resection, experienced 39%–60% volume reductions, the relatively milder decrease in alloplastic breast reconstruction may have led to a relative overrepresentation in the NSQIP database. 36 , 37 These findings are consistent with a recent national survey of members of the ASPS, which reported an increase in breast reconstruction cases from 136 000 in 2019 to almost 138 000 in 2020 among its members. 4 At the same time, the ASPS recommended in March 2020 that autologous reconstructions be delayed and immediate alloplastic reconstruction be performed on a case‐by‐case basis, depending on resource availability, the patient's comorbidities, and the likelihood of complications. Several studies performed since have documented strict institutional restrictions placed on autologous breast reconstructions, with 47.7% of surgeons reporting performing only alloplastic reconstructions during the pandemic, while 15.9% reported restrictions on all forms of breast reconstruction. 
30 , 38 , 39 , 40 Somewhat counterintuitively, there was a decrease in direct‐to‐implant reconstruction during the pandemic, with more patients undergoing tissue expander placement, despite the likely need for more frequent office visits for expansion and a possible second operation for exchange for permanent implants. However, given the multitude of patient and surgeon factors that inform the decision between implant and tissue expander placement, such as incision type, preoperative breast size, and intraoperative mastectomy skin flap quality, it is unlikely that the coronavirus pandemic was the sole causative factor for this shift. 41 , 42 , 43 In addition, our finding that outpatient alloplastic breast reconstruction during the COVID era is noninferior to the historic standard of care comports with the results of several single‐institution studies. Of note, Faulkner et al. examined immediate reconstruction during the first 3 months of COVID‐19 restrictions and found no difference in operative and nonoperative complications in comparison to the 3 preceding months, with an emphasis on same‐day discharge whenever medically possible. 44 Other centers had begun a shift towards outpatient procedures even before the pandemic, with similarly encouraging results. 14 , 17 In this study, outpatient reconstruction was also noninferior in medical complication rates. While these outcomes were rare and not individually significant, decreases in venous thromboses and thromboembolic events have been shown for other surgical procedures when performed in the outpatient rather than the inpatient setting and may be attributable to early mobilization. 14 , 45 , 46 Importantly, outpatient reconstruction had a significantly lower rate of unplanned secondary procedures than did inpatient operations, in particular for drainage of hematoma and seroma. 
There are several potential explanations for this trend, the most concerning of which is potential undertreatment of outpatients due to a lack of observation or close follow‐up on postoperative day 1, which is particularly relevant for the diagnosis of hematomas. Dumestre et al. previously demonstrated no association between outpatient reconstruction and hematoma formation, though in a relatively small cohort of 69 patients. 17 Seth et al. similarly examined a large cohort of immediate breast reconstructions and determined that no patient or surgical factors independently increase the risk of hematoma, although postoperative length of stay was not analyzed. 47 The literature suggests that the incidence of hematoma is relatively constant across patients, so the decrease observed here could represent underdiagnosis and undertreatment. Although an uncommon complication, the development of hematoma and seroma after outpatient reconstruction warrants further investigation; as the pandemic continues and outpatient alloplastic reconstruction becomes more common, close postoperative follow‐up and adequate patient education and counseling are crucial to identify true complication rates and improve the patient experience. This study has several limitations inherent to retrospective studies and the nature of the NSQIP database. While NSQIP reports a plethora of postoperative complications, outcomes more specific to alloplastic breast reconstruction, such as hematoma and seroma, can only be inferred from ICD‐10 and CPT codes associated with a secondary procedure or readmission; therefore, our findings may underestimate the true burden of these complications and overlook patients who did not undergo drainage of a fluid collection. Similarly, NSQIP does not provide details regarding the site of wound complications, thus precluding a per‐breast analysis. 
As a result, our results may overestimate the incidence of wound complications on a per‐implant basis; however, given that the rates of bilateral reconstruction were similar across cohorts, this effect is nondifferential and should not alter the comparisons between years. Furthermore, specific patient and treatment characteristics, such as cancer stage, adjuvant radiotherapy, prepectoral versus subpectoral implant placement, or the use of newer agents such as tranexamic acid to prevent hematoma, are not reported in the database and thus may confound our analysis of postoperative complications. Adjuvant use of tranexamic acid to mitigate hematoma formation has become prevalent at our institution and others across the country during this time. This therapeutic modality warrants further investigation as an opportunity to convert inpatient procedures to outpatient procedures as its safety profile becomes better established. 48 Similarly, because outcomes beyond 30 days postoperatively are not available, the relationship between outpatient reconstruction and long‐term complications and patient satisfaction could not be elucidated from these data. Given that many of these procedures took place during a global pandemic, with waxing and waning COVID‐19 case rates and restrictions, changes in patient behavior were likely heterogeneous. A survey of breast cancer patients in April 2020 showed that 14.4% of patients had a higher threshold for contacting their breast cancer physician due to the pandemic; this number decreased to 7.5% in November 2020. 49 CONCLUSION: In the context of the COVID‐19 pandemic, at a time when conservation of hospital resources continues to be of great importance, we find that immediate alloplastic breast reconstruction is a generally safe procedure in the outpatient setting, with no increase in short‐term surgical complications, medical complications, or reoperations. 
However, further prospective study is necessary to validate the decreased reoperation rates in outpatients and clarify possible underlying causes. Close postoperative follow‐up, particularly in the initial days after surgery, remains essential in patients undergoing breast reconstruction in either the inpatient or outpatient setting. The COVID‐19 pandemic has forced us to re‐examine surgical dogma: crisis forces innovation. Healthier patients were given the opportunity to undergo what was previously an inpatient procedure as outpatients, with encouraging results; thus, what constitutes an acceptable outpatient procedure may now be continuously reevaluated in this ongoing time of need. CONFLICTS OF INTEREST: Justin M. Sacks is a cofounder and equity holder of LifeSprout, and a consultant for 3M. The other authors declare no conflicts of interest. SYNOPSIS: The COVID‐19 pandemic accelerated the shift of immediate alloplastic breast reconstruction to the outpatient setting. We utilize the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database to examine trends in and safety of outpatient breast reconstruction during the pandemic. Supporting information is available as additional data files.
Background: Immediate alloplastic breast reconstruction shifted to the outpatient setting during the COVID-19 pandemic to conserve inpatient hospital beds while providing timely oncologic care. We examine the National Surgical Quality Improvement Program (NSQIP) database for trends in and safety of outpatient breast reconstruction during the pandemic. Methods: NSQIP data were filtered for immediate alloplastic breast reconstructions between April and December of 2019 (before-COVID) and 2020 (during-COVID); the proportion of outpatient procedures was compared. Thirty-day complications were compared for noninferiority between propensity-matched outpatients and inpatients utilizing a 1% risk difference margin. Results: During COVID, immediate alloplastic breast reconstruction cases decreased (4083 vs. 4677) and were more frequently outpatient (31% vs. 10%, p < 0.001). Outpatients had lower rates of smoking (6.8% vs. 8.4%, p = 0.03) and obesity (26% vs. 33%, p < 0.001). Surgical complication rates of outpatient procedures were noninferior to propensity-matched inpatients (5.0% vs. 5.5%, p = 0.03 noninferiority). Reoperation rates were lower in propensity-matched outpatients (5.2% vs. 8.0%, p = 0.003). Conclusions: Immediate alloplastic breast reconstruction shifted towards outpatient procedures during the COVID-19 pandemic with noninferior complication rates. Therefore, a paradigm shift towards outpatient reconstruction for certain patients may be safe. However, decreased reoperations in outpatients may represent undiagnosed complications and warrant further investigation.
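The noninferiority comparison described in the Methods (a 1% risk difference margin between propensity-matched outpatients and inpatients) can be sketched as a one-sided Wald test on the difference of two proportions. This is an illustrative sketch, not the study's analysis code: the counts below are hypothetical, chosen only to mirror the reported 5.0% vs. 5.5% complication rates, and the `noninferiority_risk_difference` helper is invented for illustration.

```python
# Hedged sketch of a noninferiority test for a risk difference.
# Counts are hypothetical; the study's cohort sizes and exact method may differ.
from math import sqrt
from statistics import NormalDist

def noninferiority_risk_difference(x_new, n_new, x_ref, n_ref, margin=0.01):
    """One-sided Wald test. H0: p_new - p_ref >= margin (new arm inferior).
    Returns the observed risk difference and the one-sided p-value;
    a small p-value supports noninferiority within the margin."""
    p_new, p_ref = x_new / n_new, x_ref / n_ref
    diff = p_new - p_ref
    se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    z = (diff - margin) / se
    return diff, NormalDist().cdf(z)

# Hypothetical: 100/2000 outpatient vs. 110/2000 inpatient complications
# (5.0% vs. 5.5%, echoing the reported rates), with a 1% margin.
diff, p = noninferiority_risk_difference(100, 2000, 110, 2000, margin=0.01)
```

With these hypothetical counts the observed difference is −0.5% and the one-sided p-value falls below 0.05, i.e., the outpatient arm would be declared noninferior within the 1% margin.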
INTRODUCTION: In recent years, an increasing number of women with early‐stage breast cancer have opted for mastectomy over breast‐conserving surgery. 1 , 2 , 3 While not all patients desire reconstruction, more than 130 000 breast reconstructions are performed annually, with alloplastic procedures representing approximately 75% of these. 4 The ideal timing of breast reconstruction is a multifactorial decision based on the desired reconstructive approach, other medical comorbidities, and whether the patient will require adjuvant radiotherapy. Immediate reconstruction at the time of mastectomy offers psychological and economic benefits and therefore makes up the majority of breast reconstructions 4 , 5 , 6 , 7 , 8 ; however, it has been associated with higher rates of surgical complications. 9 , 10 Traditionally, immediate reconstructions have been performed on an inpatient basis to monitor drain output, provide pain control, and assess for complications such as hematoma and skin necrosis. In recent years, some centers have begun to offer immediate alloplastic reconstruction as an outpatient procedure, mirroring trends observed for isolated mastectomy and delayed alloplastic reconstruction. 11 , 12 Advances in regional blockade and perioperative analgesia have decreased postoperative pain, aiding this shift toward outpatient surgery. 13 Some studies have demonstrated higher patient satisfaction, lower costs, and comparable safety with outpatient surgery but were limited by small sample sizes, and this model has accounted for a minority of procedures in recent years. 14 , 15 , 16 , 17 One factor that may accelerate the paradigm shift toward outpatient immediate reconstruction is the SARS‐CoV‐2 coronavirus disease 2019 (COVID‐19) pandemic. In early 2020, both the American College of Surgeons and the American Society of Plastic Surgeons (ASPS) issued recommendations that all nonurgent elective surgeries be canceled or postponed. 
In addition, they recommended that urgent elective procedures be shifted to the outpatient setting to conserve inpatient resources, specifically intensive care unit beds, and decrease the risk of transmission of the novel coronavirus. 18 , 19 , 20 , 21 Overall, there was a sharp decline in elective surgical volumes across multiple specialties, including oncologic surgery. 22 , 23 , 24 Given that cancer patients are at higher risk for serious complications and mortality from COVID‐19, 25 , 26 these recommendations provided a strong incentive to provide immediate reconstruction in the outpatient setting. To conserve resources while still providing care for breast cancer patients, some centers pivoted to “high‐efficiency” same‐day protocols with encouraging early outcomes. 27 , 28 , 29 , 30 Despite the observed delays and global decreases in elective surgical volumes, relatively little is known about the effects of the COVID‐19 pandemic on outcomes of breast reconstruction on a national scale. Understanding how these necessary changes have affected outcomes will inform decision‐making for breast cancer surgeons and patients as the availability of inpatient surgery returns. To evaluate these trends in a large, national cohort, we surveyed the American College of Surgeons National Surgical Quality Improvement Program (ACS‐NSQIP) database 31 to identify patients undergoing immediate alloplastic breast reconstruction and to describe aggregate changes in surgical volume, patient demographics, comorbidities, procedure setting, early complications, and reoperation rates as a result of the COVID‐19 pandemic.
[CONTENT] alloplastic breast reconstruction | breast reconstruction | COVID‐19 | immediate breast reconstruction | outpatient breast reconstruction | surgical complications [SUMMARY]
[CONTENT] COVID-19 | Humans | Mammaplasty | Pandemics | Postoperative Complications | Reoperation | Retrospective Studies [SUMMARY]
[CONTENT] reconstruction | breast | immediate | early | elective | recent years | cancer | surgeons | surgery | outpatient [SUMMARY]
[CONTENT] 95 | vs | ci | 001 | 95 ci | table | covid | complications | surgical | outpatient [SUMMARY]
[CONTENT] procedure outpatient | outpatient | procedure | covid 19 pandemic | pandemic | 19 pandemic | 19 | inpatient | outpatient setting | time [SUMMARY]
[CONTENT] outpatient | reconstruction | breast | complications | surgical | performed | tests | covid | pandemic | data [SUMMARY]
[CONTENT] COVID-19 ||| the National Surgical Quality Improvement Program | NSQIP [SUMMARY]
[CONTENT] 4083 | 4677 | 31% | 10% | 0.001 ||| 6.8% | 8.4% | 0.03 | 26% | 33% | 0.001 ||| 5.0% | 5.5% | 0.03 ||| 5.2% | 8.0% | 0.003 [SUMMARY]
[CONTENT] COVID-19 ||| ||| [SUMMARY]
[CONTENT] COVID-19 ||| the National Surgical Quality Improvement Program | NSQIP ||| NSQIP | between April and December of 2019 | 2020 ||| ||| Thirty-day | 1% ||| 4083 | 4677 | 31% | 10% | 0.001 ||| 6.8% | 8.4% | 0.03 | 26% | 33% | 0.001 ||| 5.0% | 5.5% | 0.03 ||| 5.2% | 8.0% | 0.003 ||| COVID-19 ||| ||| [SUMMARY]
Mechanical effects of left ventricular midwall fibrosis in non-ischemic cardiomyopathy.
26732096
Left ventricular (LV) mid-wall fibrosis (MWF), which occurs in about a quarter of patients with non-ischemic cardiomyopathy (NICM), is associated with high risk of pump failure. The mid LV wall is the site of circumferential myocardial fibers. We sought to determine the effect of MWF on LV myocardial mechanics.
BACKGROUND
Patients with NICM (n = 116; age: 62.8 ± 13.2 years; 67% male) underwent late gadolinium enhancement cardiovascular magnetic resonance (CMR) and were categorized according to the presence (+) or absence (-) of MWF. Feature tracking (FT) CMR was used to assess myocardial deformation.
METHODS
Despite a similar LVEF (24.3 vs. 27.5%, p = 0.20), patients with MWF (32 [24%]) had lower global circumferential strain (Ɛcc: -6.6% vs. -9.4 %, P = 0.004), but similar longitudinal (Ɛll: -7.6 % vs. -9.4 %, p = 0.053) and radial (Ɛrr: 14.6% vs. 17.8% p = 0.18) strain. Compared with - MWF, + MWF was associated with reduced LV systolic, circumferential strain rate (-0.38 ± 0.1 vs. -0.56 ± 0.3 s(-1), p = 0.005) and peak LV twist (4.65 vs. 6.31°, p = 0.004), as well as rigid LV body rotation (64 % vs. 28 %, P <0.001). In addition, +MWF was associated with reduced LV diastolic strain rates (DSRcc: 0.34 vs. 0.46 s(-1); DSRll: 0.38 vs. 0.50s(-1); DSRrr: -0.55 vs. -0.75 s(-1); all p <0.05).
RESULTS
MWF is associated with reduced LV global circumferential strain, strain rate and torsion. In addition, MWF is associated with rigid LV body rotation and reduced diastolic strain rates. These systolic and diastolic disturbances may be related to the increased risk of pump failure observed in patients with NICM and MWF.
CONCLUSIONS
[ "Aged", "Biomechanical Phenomena", "Cardiomyopathies", "Contrast Media", "Diastole", "England", "Female", "Fibrosis", "Gadolinium DTPA", "Heart Failure", "Heart Ventricles", "Humans", "Magnetic Resonance Imaging, Cine", "Male", "Middle Aged", "Myocardium", "Predictive Value of Tests", "Stress, Mechanical", "Systole", "Torsion, Mechanical", "Ventricular Function, Left" ]
4700639
Background
Non-ischemic cardiomyopathy (NICM) is a common cause of heart failure [1]. The NICM phenotype ranges from patients who remain largely asymptomatic to those who succumb to multiple hospitalizations and premature death. In a study of 603 patients with idiopathic dilated cardiomyopathy followed up over 9 years, Castelli et al. found that 45 % died or underwent cardiac transplantation [2]. Left ventricular mid-wall fibrosis (MWF) was first described as an autopsy finding in 1991 [3]. Clinical studies using late gadolinium enhancement cardiovascular magnetic resonance (LGE-CMR) have subsequently shown that in patients with NICM, MWF is associated with an increased risk of heart failure hospitalizations, ventricular arrhythmias and cardiac death [4–8]. Patients with NICM and MWF are also less responsive to pharmacologic therapy [9] and cardiac resynchronization therapy [10]. Whilst the evidence linking MWF and poor patient outcomes is compelling [4–11], the mechanism remains unexplored. The left ventricle (LV) twists in systole and untwists, or recoils, in diastole. In systole, the LV base rotates clockwise and the apex rotates counter-clockwise. This wringing motion is effected by the helical arrangement of myocardial fibers, which run in a left-handed direction in the subepicardium and in a right-handed direction in the subendocardium. Contraction of subepicardial myocardial fibers causes the base to rotate clockwise and the apex to rotate counter-clockwise [12]. Because the radius of rotation of the subepicardium is greater than that of the subendocardium, the former provides the greater torque. Consequently, the LV gets smaller in systole and LV ejection occurs [12]. Circumferential fibers, which run in the mid-myocardium, are crucial to this process. During ejection, they shorten simultaneously with the oblique fibers of the right- and left-handed helices. In effect, circumferential fibers provide a horizontal counterforce throughout ejection [13]. 
We hypothesized that injury to mid-myocardial, circumferential myocardial fibers [14], as might be expected from MWF, leads to impairment of LV circumferential contraction and relaxation and therefore, to disturbances in LV twist and torsion. In this study, we have used feature-tracking CMR (FT-CMR) [15] to explore the mechanical effects of MWF in patients with NICM.
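Feature tracking derives strain from the frame-to-frame motion of a tracked myocardial contour. As a minimal, hypothetical sketch (not the FT-CMR vendor software used in the study), Lagrangian strain and peak systolic strain rate for one contour can be computed as follows; the contour lengths, frame interval, and helper names are invented for illustration.

```python
# Illustrative sketch of Lagrangian strain and peak systolic strain rate
# from a feature-tracked contour length; all values are hypothetical.
def lagrangian_strain(lengths):
    """Percent strain at each frame relative to end-diastole (frame 0):
    e = 100 * (L - L0) / L0. Shortening gives negative strain."""
    l0 = lengths[0]
    return [100.0 * (l - l0) / l0 for l in lengths]

def peak_systolic_strain_rate(strain_pct, frame_interval_s):
    """Most negative frame-to-frame rate of change of strain, in s^-1."""
    rates = [(b - a) / 100.0 / frame_interval_s
             for a, b in zip(strain_pct, strain_pct[1:])]
    return min(rates)

# Hypothetical mid-LV circumferential contour lengths (cm) over six frames,
# sampled every 40 ms:
circ_lengths = [18.0, 17.6, 17.2, 16.9, 17.1, 17.6]
strain = lagrangian_strain(circ_lengths)        # peak strain about -6.1 %
ssr = peak_systolic_strain_rate(strain, 0.04)   # about -0.56 s^-1
```

The same length-based definition applies in the circumferential, radial, and longitudinal directions, which is how global Ɛcc, Ɛrr, and Ɛll and their strain rates are reported below.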
Results
The characteristics of the study group are shown in Table 1. Amongst the entire cohort, 32/116 patients (28 %) had MWF. Patients were of similar age (63.8 vs. 62.3 years, p = 0.29), but more patients with MWF were men (84 % vs. 61 %, p = 0.02). There were no differences in NYHA class, atrial rhythm, QRS duration, LVEF, co-morbidities, or pharmacological therapy for heart failure.

Table 1. Baseline characteristics

| Characteristic | No MWF (n = 84) | MWF (n = 32) | P |
| --- | --- | --- | --- |
| Age, yrs | 62.3 ± 13.7 | 63.8 ± 11.9 | 0.29 |
| Male, n (%) | 51 (61) | 27 (84) | 0.02 |
| Height, m | 1.68 ± 0.09 | 1.74 ± 0.09 | 0.02 |
| Weight, kg | 83.4 ± 18.6 | 83.3 ± 12.6 | 0.97 |
| NYHA class I / II / III / IV / unknown, n (%) | 4 (5) / 15 (18) / 47 (56) / 9 (11) / 9 (11) | 3 (9) / 8 (25) / 11 (34) / 5 (16) / 5 (16) | 0.20 |
| Diabetes mellitus, n (%) | 13 (16) | 7 (24) | 0.42 |
| Hypertension, n (%) | 18 (22) | 5 (17) | 0.61 |
| Atrial fibrillation, n (%) | 15 (18) | 8 (24) | 0.44 |
| Loop diuretics, n (%) | 62 (81) | 26 (89) | 0.47 |
| ACE-I or ARB, n (%) | 77 (97) | 27 (90) | 0.31 |
| Beta-blockers, n (%) | 51 (65) | 20 (66) | 1.00 |
| Aldosterone antagonists, n (%) | 36 (46) | 10 (35) | 0.29 |
| Systolic blood pressure, mmHg | 124.3 ± 20.5 | 119.6 ± 23.1 | 0.38 |
| Diastolic blood pressure, mmHg | 71.5 ± 11.9 | 71.7 ± 13.8 | 0.96 |
| QRS duration, ms | 144 (28) | 149 (32) | 0.48 |

ACE-I angiotensin-converting enzyme inhibitors, ARB angiotensin receptor blockers.

Systolic deformation: As shown in Table 2, patients with MWF had lower global circumferential strain (Ɛcc: −6.6 % vs. −9.4 %, p = 0.004), but similar longitudinal (Ɛll: −7.6 % vs. −9.4 %, p = 0.053) and radial (Ɛrr: 14.6 % vs. 17.8 %, p = 0.18) strain. Systolic strain rate was reduced in the circumferential direction (SSRcc: −0.38 s−1 vs. −0.56 s−1, p = 0.005), but not in the radial or longitudinal directions. Figure 2 shows typical examples. As shown in Fig. 3, Ɛcc (r = 0.70, p <0.001), Ɛrr (r = 0.57, p <0.001) and Ɛll (r = 0.62, p <0.001) correlated positively with LVEF. In the case of Ɛcc, the slope of the regression line was 0.17 in the +MWF group and 0.31 in the −MWF group, indicating that Ɛcc is lower in the +MWF group than in the −MWF group at a given LVEF.

Table 2. Mechanical variables in patients with or without MWF

| Variable | No MWF | MWF | P |
| --- | --- | --- | --- |
| LVEDV, mL | 222 ± 80 | 277 ± 79 | 0.002 |
| LVESV, mL | 166 ± 79 | 214 ± 83 | 0.007 |
| LV mass, g | 137.6 ± 46.6 | 155.5 ± 71.1 | 0.052 |
| LVEF, % | 27.5 ± 10.8 | 24.3 ± 12.9 | 0.20 |
| Ɛcc, % | −9.4 ± 4.76 | −6.6 ± 2.57 | 0.004 |
| SSRcc, s−1 | −0.56 ± 0.25 | −0.38 ± 0.12 | 0.005 |
| Ɛrr, % | 17.8 ± 11.0 | 14.6 ± 10.1 | 0.18 |
| SSRrr, s−1 | 0.84 ± 0.37 | 0.74 ± 0.40 | 0.31 |
| Ɛll, % | −9.4 ± 4.35 | −7.6 ± 3.34 | 0.053 |
| SSRll, s−1 | −0.56 ± 0.20 | −0.49 ± 0.18 | 0.13 |
| DSRcc, s−1 | 0.46 ± 0.19 | 0.34 ± 0.11 | 0.010 |
| DSRrr, s−1 | −0.75 ± 0.35 | −0.55 ± 0.44 | 0.038 |
| DSRll, s−1 | 0.50 ± 0.20 | 0.38 ± 0.14 | 0.006 |
| Basal systolic rotation, net clockwise, ° | 3.40 ± 3.00 | 3.00 ± 2.23 | 0.513 |
| Basal systolic rotation, magnitude, ° | 4.63 ± 2.64 | 3.67 ± 1.97 | 0.082 |
| Basal rotation rate, ° s−1 | 31.3 ± 14.5 | 22.1 ± 8.2 | 0.002 |
| Apical systolic rotation, net anti-clockwise, ° | −3.50 ± 3.28 | −1.99 ± 1.97 | 0.024 |
| Apical systolic rotation, magnitude, ° | 5.18 ± 3.15 | 3.52 ± 2.45 | 0.013 |
| Apical rotation rate, ° s−1 | −38.9 ± 21.8 | −26.1 ± 15.8 | 0.005 |
| Average basal/apical rotation, ° | 9.81 ± 4.48 | 7.20 ± 3.44 | 0.002 |
| LV twist, ° | 6.31 ± 3.30 | 4.65 ± 2.18 | 0.004 |
| LV twist per unit length, °/cm | 1.34 ± 0.76 | 0.94 ± 0.55 | 0.005 |
| Torsional shear angle | 0.83 ± 0.06 | 0.52 ± 0.07 | 0.008 |
| LV twist rate, ° s−1 | 48.4 ± 23.1 | 36.1 ± 17.1 | 0.01 |
| Torsional pattern: normal / rigid body / reverse, n (%) | 39 (46) / 23 (28) / 22 (26) | 10 (32) / 21 (64) / 1 (4) | <0.001 |
| Diastolic basal rotation rate, ° s−1 | −34.1 ± 14.8 | −28.0 ± 11.8 | 0.053 |
| Diastolic apical rotation rate, ° s−1 | 38.3 ± 20.1 | 24.9 ± 13.1 | 0.001 |
| LV untwist rate, ° s−1 | 44.5 ± 21.0 | 30.5 ± 14.9 | <0.001 |

Variables are expressed as mean ± SD. MWF mid-wall fibrosis, SSR systolic strain rate, DSR diastolic strain rate, Ɛ strain.

Fig. 2. Feature-tracking CMR. Short-axis, late gadolinium enhancement views of patients with idiopathic dilated cardiomyopathy, without and with mid-wall fibrosis (MWF, white arrows). The bottom tiles show plots of global circumferential strain (Ɛcc, purple), global radial strain (Ɛrr, red) and global longitudinal strain (Ɛll, green) over a cardiac cycle. Note the marked reduction in Ɛcc in the patient with MWF.

Fig. 3. Relationship between LVEF and myocardial strain. Scattergrams for each of the Lagrangian strains plotted against LVEF. Cases are classified according to presence (blue circles) or absence (red circles) of mid-wall fibrosis (MWF). The lines correspond to the 95 % confidence intervals for strain. The top scattergram demonstrates that above an LVEF of 25 % (dashed reference line) MWF alters the relationship between Ɛcc and LVEF: patients with MWF have lower Ɛcc than those with similar LVEF but without MWF.

Diastolic deformation: Diastolic strain rates were lower in all three directions in patients with MWF (DSRcc: 0.34 vs. 0.46 s−1, p = 0.01; DSRrr: −0.55 vs. −0.75 s−1, p = 0.04; DSRll: 0.38 vs. 0.50 s−1, p = 0.006).

Torsional mechanics: Whilst net basal rotation was unaffected by MWF (net clockwise: 3.00° vs. 3.40°, p = 0.51; total magnitude: 3.67° vs. 4.63°, p = 0.08), the rate of basal rotation was reduced (22.1° s−1 vs. 31.3° s−1, p = 0.002). In patients with MWF, apical rotation was also reduced in terms of both total magnitude (3.52° vs. 5.18°, p = 0.013) and net anti-clockwise rotation (−1.99° vs. −3.50°, p = 0.024). The rate of apical rotation was lower in patients with MWF (−26.1° s−1 vs. −38.9° s−1, p = 0.005). This reduction in the magnitude of apical rotation was associated with a reduction in LV twist (peak LV twist: 4.65° vs. 6.31°, p = 0.004; LV twist per unit length: 0.94°/cm vs. 1.34°/cm, p = 0.005; torsional shear angle: 0.52 vs. 0.83, p = 0.008). The rates of LV twist (36.1° s−1 vs. 48.4° s−1, p = 0.001) and untwist (30.5° s−1 vs. 44.5° s−1, p <0.001) were also reduced in patients with MWF. A normal torsion pattern, in which there is predominantly anti-clockwise rotation of the apex and clockwise rotation of the base, was observed more frequently in patients without MWF (46 % vs. 32 %). Rigid LV body rotation was more frequently observed in patients with MWF (64 % vs. 28 %, p <0.001).
Conclusions
We have shown that in patients with NICM, MWF is associated with profound disturbances in global circumferential strain and strain rate, as well as in LV twist and torsion, in both systole and diastole. In addition, MWF is associated with rigid LV body rotation. These findings provide a mechanistic link between MWF and poor clinical outcomes in patients with NICM, despite pharmacologic and device therapy.
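The peak strain and strain-rate measures reported above can be illustrated with a minimal numeric sketch. This is not the Tomtec FT-CMR implementation: the function name, the finite-difference strain rate, and the sampling (25 phases at 40 ms, as in the CMR protocol) are illustrative assumptions.

```python
# Hedged sketch: peak strain and peak systolic/diastolic strain rates from a
# sampled global strain curve (e.g. Ecc in %). Not the vendor algorithm.
import numpy as np

def strain_metrics(strain_pct, dt_s=0.040):
    """strain_pct: global strain (%) sampled once per cardiac phase;
    dt_s: phase spacing in seconds (40 ms assumed, per the CMR protocol)."""
    strain = np.asarray(strain_pct, dtype=float)
    rate = np.gradient(strain, dt_s) / 100.0      # strain rate, s^-1
    peak_idx = int(np.argmin(strain))             # peak (most negative) strain
    return {
        "peak_strain_pct": float(strain[peak_idx]),
        "peak_systolic_sr": float(rate[: peak_idx + 1].min()),  # fastest shortening
        "peak_diastolic_sr": float(rate[peak_idx:].max()),      # fastest lengthening
    }
```

On a circumferential curve the systolic strain rate comes out negative (shortening) and the diastolic strain rate positive (lengthening), matching the sign conventions of SSRcc and DSRcc in Table 2.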
[ "Patients", "CMR", "Statistical analysis", "Systolic deformation", "Diastolic deformation", "Torsional mechanics", "Systole", "Diastole", "Limitations" ]
[ "Patients with NICM were recruited through CMR units in two centers (Good Hope Hospital and Queen Elizabeth Hospital, Birmingham, United Kingdom). The initial diagnosis of cardiomyopathy was made on the basis of clinical history, echocardiographic evidence of LV systolic impairment and absence of coronary artery disease on invasive coronary angiography. The diagnosis of NICM was also made on the basis of LGE-CMR [4]. Mid-wall LGE was assessed visually and only deemed to be present if a crescentic or circumferential area of mid-wall signal enhancement (2 SD above the mean intensity of remote myocardium in the same slice [16]), surrounded by non-enhanced epicardial and endocardial myocardium, was evident. Patients with scars in a sub-endocardial or transmural distribution following coronary artery territories were regarded as ischemic in etiology [4] and excluded. Those with epicardial, transmural or patchy fibrosis suggestive of other etiologies were also excluded. It is routine clinical practice at the two recruiting dedicated heart failure units to perform CMR as part of the diagnostic work-up. Accordingly, all patients underwent CMR at the time of diagnosis. All participants gave written informed consent, and the study protocol conformed to the Declaration of Helsinki and was approved by the National Research Ethics Service.", "This was undertaken using 1.5 Tesla Magnetom Avanto (Siemens, Erlangen, Germany) or Signa (GE Healthcare Worldwide, Slough, England) scanners and a phased-array cardiac coil. A horizontal long-axis image and a short-axis LV stack from the atrioventricular ring to the LV apex were acquired using a steady-state free precession (SSFP) sequence (repetition time of 3.2 ms; echo time of 1.7 ms; flip angle of 60°; sequential 7 mm slices with a 3 mm interslice gap). 
There were 25 phases per cardiac cycle, resulting in a mean temporal resolution of 40 ms.\nFor scar imaging, horizontal and vertical long-axis as well as short-axis slices identical to the LV stack were acquired using a segmented inversion-recovery technique 10 min after the intravenous administration of gadolinium-diethylenetriamine pentaacetic acid (0.1 mmol/kg). Inversion times were adjusted to null normal myocardium (260 to 400 ms). To exclude artefact, we required the typical scar pattern to be visible in the short-axis and long-axis acquisitions, in two different phase-encoded directions. Patients were dichotomized according to the presence or absence of MWF, assessed visually by an experienced observer (F.L.), who was blinded to other study data.\nFeature-tracking CMR (Tomtec Imaging Systems, Munich, Germany) was undertaken as previously described. It has been validated against myocardial tagging for the assessment of myocardial mechanics [15, 17]. We have previously shown that both circumferential- and longitudinal-based variables have excellent intra- and inter-observer variability [18]. Global peak systolic circumferential (Ɛcc) and radial (Ɛrr) strain, systolic strain rates (SSRcc and SSRrr) and diastolic strain rates (DSRcc and DSRrr) were assessed using FT-CMR of the mid-cavity LV short-axis cine. Longitudinal strains (Ɛll, SSRll and DSRll) were assessed using the horizontal long-axis cine. Only the SSFP sequences were uploaded onto the FT-CMR software, ensuring that the operator (R.T.) was blinded to MWF status. In addition, MWF status was decided by an investigator (F.L.) who was blinded to the findings of FT-CMR.\nPeak systolic rotation was measured using the basal and apical short-axis cines. In health, peak systolic rotation, as viewed from the apex, is typically clockwise (+) at the base and anti-clockwise (−) at the apex. Peak systolic rotation was calculated in degrees and expressed as both the maximum extent of rotation in the anticipated direction (i.e., if systolic rotation at the apex was solely in a clockwise direction this would equate to 0°) and the total magnitude of rotation (regardless of direction). Torsional parameters are derived from the peak instantaneous net difference in apical and basal rotation. LV twist was defined as (Φ apex − Φ base), twist per unit length as (Φ apex − Φ base)/D, and LV torsion (circumferential-longitudinal shear angle) as (Φ apex − Φ base)(ρ apex + ρ base) / 2D (where Φ = rotation angle, ρ = epicardial radius and D = base-to-apex distance), in accordance with agreed methodologies [19]. Systolic torsion was classified as either: a) normal torsion, in which there is predominantly anticlockwise rotation of the apex and clockwise rotation of the base; b) rigid body rotation, in which both the apex and base rotate in the same direction; and c) reverse torsion, with predominantly clockwise rotation of the apex and anti-clockwise rotation of the base (Fig. 1).\nFig. 1 Rotational mechanics in NICM. Diagrammatic representation of torsional and rotational patterns identified using feature-tracking cardiovascular magnetic resonance. In the bottom tiles, the time in the cardiac cycle, expressed as a percentage of the R-R interval on the ECG, is shown on the x axes. Rotation at the base and apex of the LV, as well as net torsion (the instantaneous difference between apical and basal rotation), is shown on the y axis (in degrees). a shows a preserved torsional pattern from a patient with non-ischemic dilated cardiomyopathy without MWF, with predominantly anticlockwise rotation at the apex and clockwise rotation at the base. b shows reverse torsion, where the direction of both apical and basal rotation is reversed. c shows rigid body rotation in a patient with NICM and MWF. 
The apex and base both twist in the same direction so that the heart rotates as one solid body with minimal net torsion.", "Categorical variables were expressed as a percentage and continuous variables as mean ± standard deviation (SD). Normality was tested using the Shapiro-Wilk test. Comparisons between variables were made with Fisher’s exact test for categorical variables and independent-samples t-tests for continuous variables, after adjustment by the Welch-Satterthwaite method where Levene’s test showed unequal variance between groups. A p value of <0.05 was considered statistically significant for all tests. Statistical analyses were performed using SPSS v21.0 (SPSS Inc., Chicago, Illinois).", "As shown in Table 2, patients with MWF had lower global circumferential strain (Ɛcc: −6.6 % vs. −9.4 %, p = 0.004), but similar longitudinal (Ɛll: −7.6 % vs. −9.4 %, p = 0.053) and radial (Ɛrr: 14.6 % vs. 17.8 %, p = 0.18) strain. Systolic strain rate was reduced in the circumferential direction (SSRcc: −0.38 s−1 vs. −0.56 s−1, p = 0.005), but not in the radial or longitudinal directions. Figure 2 shows typical examples. As shown in Fig. 3, Ɛcc (r = 0.70), Ɛrr (r = 0.57, p < 0.001) and Ɛll (r = 0.62, p < 0.001) correlated positively with LVEF. In the case of Ɛcc, the slope of the regression line was 0.17 in the +MWF group and 0.31 in the −MWF group, indicating that Ɛcc is lower in the +MWF group than in the −MWF group at a given LVEF.
Table 2 Mechanical variables in patients with or without MWF (No MWF vs MWF, p)
LV dimensions
 LVEDV (mL): 222 ± 80 vs 277 ± 79, p = 0.002
 LVESV (mL): 166 ± 79 vs 214 ± 83, p = 0.007
 LV mass (g): 137.6 ± 46.6 vs 155.5 ± 71.1, p = 0.052
Systolic deformation
 LVEF (%): 27.5 ± 10.8 vs 24.3 ± 12.9, p = 0.20
 Ɛcc (%): −9.4 ± 4.76 vs −6.6 ± 2.57, p = 0.004
 SSRcc (s−1): −0.56 ± 0.25 vs −0.38 ± 0.12, p = 0.005
 Ɛrr (%): 17.8 ± 11.0 vs 14.6 ± 10.1, p = 0.18
 SSRrr (s−1): 0.84 ± 0.37 vs 0.74 ± 0.40, p = 0.31
 Ɛll (%): −9.4 ± 4.35 vs −7.6 ± 3.34, p = 0.053
 SSRll (s−1): −0.56 ± 0.20 vs −0.49 ± 0.18, p = 0.13
Diastolic deformation
 DSRcc (s−1): 0.46 ± 0.19 vs 0.34 ± 0.11, p = 0.010
 DSRrr (s−1): −0.75 ± 0.35 vs −0.55 ± 0.44, p = 0.038
 DSRll (s−1): 0.50 ± 0.20 vs 0.38 ± 0.14, p = 0.006
Systolic torsion
 Basal systolic rotation, net clockwise (°): 3.40 ± 3.00 vs 3.00 ± 2.23, p = 0.513
 Basal systolic rotation, magnitude (°): 4.63 ± 2.64 vs 3.67 ± 1.97, p = 0.082
 Basal rotation rate (° s−1): 31.3 ± 14.5 vs 22.1 ± 8.2, p = 0.002
 Apical systolic rotation, net anti-clockwise (°): −3.50 ± 3.28 vs −1.99 ± 1.97, p = 0.024
 Apical systolic rotation, magnitude (°): 5.18 ± 3.15 vs 3.52 ± 2.45, p = 0.013
 Apical rotation rate (° s−1): −38.9 ± 21.8 vs −26.1 ± 15.8, p = 0.005
 Average basal/apical rotation (°): 9.81 ± 4.48 vs 7.20 ± 3.44, p = 0.002
 LV twist (°): 6.31 ± 3.30 vs 4.65 ± 2.18, p = 0.004
 LV twist per unit length (°/cm): 1.34 ± 0.76 vs 0.94 ± 0.55, p = 0.005
 Torsional shear angle: 0.83 ± 0.06 vs 0.52 ± 0.07, p = 0.008
 LV twist rate (° s−1): 48.4 ± 23.1 vs 36.1 ± 17.1, p = 0.01
 Torsional pattern, p < 0.001:
  Normal torsion, n (%): 39 (46) vs 10 (32)
  Rigid body rotation, n (%): 23 (28) vs 21 (64)
  Reverse torsion, n (%): 22 (26) vs 1 (4)
Diastolic torsion
 Basal rotation rate (° s−1): −34.1 ± 14.8 vs −28.0 ± 11.8, p = 0.053
 Apical rotation rate (° s−1): 38.3 ± 20.1 vs 24.9 ± 13.1, p = 0.001
 LV untwist rate (° s−1): 44.5 ± 21.0 vs 30.5 ± 14.9, p < 0.001
Variables are expressed as mean ± SD. MWF mid-wall fibrosis, SSR systolic strain rate, DSR diastolic strain rate, Ɛ strain.
Fig. 2 Feature-tracking CMR. Short-axis, late gadolinium enhancement views of patients with idiopathic dilated cardiomyopathy, without and with mid-wall fibrosis (MWF, white arrows). The bottom tiles show plots of global circumferential strain (Ɛcc, purple), global radial strain (Ɛrr, red) and global longitudinal strain (Ɛll, green) over a cardiac cycle. Note the marked reduction in Ɛcc in the patient with MWF.
Fig. 3 Relationship between LVEF and myocardial strain. Scattergrams for each of the Lagrangian strains plotted against LVEF. Cases are classified according to presence (blue circles) or absence (red circles) of mid-wall fibrosis (MWF). The lines correspond to the 95 % confidence intervals for strain. The top scattergram demonstrates that above an LVEF of 25 % (dashed reference line) MWF alters the relationship between Ɛcc and LVEF: patients with MWF have lower Ɛcc than those with similar LVEF but without MWF.", "Diastolic strain rates were lower in all three directions in patients with MWF (DSRcc: 0.34 vs 0.46 s−1, p = 0.01; DSRrr: −0.55 vs −0.75 s−1, p = 0.04; DSRll: 0.38 vs 0.50 s−1, p = 0.006).", "Whilst basal rotation was unaffected by MWF (net clockwise: 3.00° vs. 3.30°, p = 0.51; total magnitude: 3.67° vs. 4.63°, p = 0.08), the rate of basal rotation was reduced (22.1° s−1 vs. 31.3° s−1, p = 0.002). In patients with MWF, apical rotation was also reduced in terms of both the total magnitude (3.52° vs. 5.18°, p = 0.013) and the net anti-clockwise rotation (−1.99° vs. −3.50°, p = 0.024). The rate of apical rotation was lower in patients with MWF (−26.1° s−1 vs. −38.9° s−1, p = 0.005). This reduction in the magnitude of apical rotation was associated with a reduction in LV twist (peak LV twist: 4.65° vs. 6.31°, p = 0.004; LV twist per unit length: 0.94°/cm vs. 1.34°/cm, p = 0.005; torsional shear angle: 0.52 vs. 0.83, p = 0.008). The rate of LV twist (36.1° s−1 vs. 48.4° s−1, p = 0.001) and untwist (30.5° s−1 vs. 44.5° s−1, p < 0.001) was also reduced in patients with MWF. A normal torsion pattern, in which there is predominantly anti-clockwise rotation of the apex and clockwise rotation of the base, was observed more frequently in patients without MWF (46 % vs. 32 %). Rigid LV body rotation was more frequently observed in patients with MWF (64 % vs. 28 %, p < 0.001).", "During ejection, circumferential fibers shorten simultaneously with the oblique fibers in the right- and left-handed helices to thicken the myocardium and empty the heart. We have found that MWF was associated with a selective reduction in circumferential strain, suggesting that MWF preferentially affects mid-myocardial, circumferential fibers. 
As noted by Buckberg [13], circumferential fibers provide a horizontal counterforce, or 'buttress', to the simultaneously contracting oblique fibers. Impaired circumferential contraction would be expected to lead to impaired rotation, as we have found in patients with MWF. Our finding of more frequent rigid LV body rotation supports the notion that MWF renders the LV less capable of twisting and more liable to move as a rigid body.\nWe have previously shown that patients with NICM and MWF treated with CRT are more likely to suffer pump failure than patients without MWF [10]. On the other hand, Lamia et al. found that CRT improved torsion, stroke volume and stroke work in an animal model [20]. Using 3-dimensional speckle-tracking echocardiography, others found that in patients with NICM, CRT led to an improvement in LV torsion [21]. If torsion is indeed influenced by CRT, we might expect that the higher risk of pump failure observed in patients with MWF undergoing CRT may be due to a permanent inability of the LV to twist and untwist. This hypothesis requires further exploration.", "In diastole, release of energy stored in systole (recoil) causes rapid untwisting and a mitral-to-apical negative gradient [22] that 'sucks' blood from the left atrium to the LV [23]. Untwisting occurs mainly during the isovolumic relaxation period and is followed by diastolic filling. Several studies [24–26] have shown that whilst cavity volume is fixed during isovolumic relaxation, there is a rapid recoil of about 40 % of the torsion effected during systole. We have found that MWF leads both to a multi-directional impairment in diastolic strain rate and to impairment of the apical untwist rate. This is likely to account for the higher LV filling pressures observed using echocardiography in patients with NICM and MWF [27]. 
Conceivably, impaired apical untwisting leads to impaired LV suction and to increased LV filling pressures.", "The LGE-CMR technique described herein only detects replacement fibrosis. The more recent technique of T1 mapping, which detects interstitial fibrosis, was not undertaken. We cannot therefore comment as to whether our findings are also influenced by the latter. In addition, we have not routinely undertaken myocardial biopsy, nor have we quantified myocardial oedema. Therefore, we cannot exclude the possibility that our findings were influenced by active myocarditis, despite the absence of evidence from clinical and laboratory screening. We should also add that different manufacturers have varying methodologies for the calculation of the mechanical variables described and therefore, our findings are not generalizable to other FT-CMR methodologies. Publication of the FT-CMR algorithms used by different manufacturers would be welcome." ]
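The torsional indices defined in the Methods (LV twist, twist per unit length, and the circumferential-longitudinal shear angle per the agreed methodology [19]) can be sketched directly from those formulas. One caveat: the consensus shear-angle definition uses the mean epicardial radius, (ρ apex + ρ base)/2, and that form is assumed here; the function name and units are illustrative.

```python
# Hedged sketch of the Methods' torsional indices:
#   twist          = phi_apex - phi_base                      (degrees)
#   twist / length = twist / D                                (degrees/cm)
#   CL shear angle = twist * (rho_apex + rho_base) / (2 * D)  (mean-radius form)
def torsion_indices(phi_apex_deg, phi_base_deg, rho_apex_cm, rho_base_cm, d_cm):
    """Angles in degrees; epicardial radii and base-to-apex distance D in cm."""
    twist = phi_apex_deg - phi_base_deg
    return {
        "twist_deg": twist,
        "twist_per_cm": twist / d_cm,
        "shear_angle_deg": twist * (rho_apex_cm + rho_base_cm) / (2.0 * d_cm),
    }
```

With the study's sign convention (apex anti-clockwise negative, base clockwise positive), a normally twisting ventricle yields a negative twist by this formula; the magnitudes, not the signs, correspond to the values reported in Table 2.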
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "CMR", "Statistical analysis", "Results", "Systolic deformation", "Diastolic deformation", "Torsional mechanics", "Discussion", "Systole", "Diastole", "Limitations", "Conclusions" ]
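The three torsional patterns named in the section titles above (normal torsion, rigid body rotation, reverse torsion) amount to a sign check on net apical and basal rotation, using the Methods' convention (viewed from the apex: base clockwise positive, apex anti-clockwise negative). This is an illustrative sketch, not the authors' classifier; the function name and the fall-through to "rigid body rotation" are assumptions.

```python
# Hedged sketch: classify a torsional pattern from net rotations (degrees).
# Convention (per the Methods): base clockwise = +, apex anti-clockwise = -.
def torsion_pattern(net_apex_rotation, net_base_rotation):
    if net_apex_rotation < 0 and net_base_rotation > 0:
        return "normal torsion"      # apex anti-clockwise, base clockwise
    if net_apex_rotation > 0 and net_base_rotation < 0:
        return "reverse torsion"     # both directions inverted
    return "rigid body rotation"     # apex and base rotate the same way
```

For example, the mean net rotations of the no-MWF group (apex −3.50°, base +3.40°) classify as normal torsion.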
[ "Non-ischemic cardiomyopathy (NICM) is a common cause of heart failure [1]. The NICM phenotype ranges from patients who remain largely asymptomatic to those who succumb to multiple hospitalizations and premature death. In a study of 603 patients with idiopathic dilated cardiomyopathy followed up over 9 years, Castelli et al. found that 45 % died or underwent cardiac transplantation [2].\nLeft ventricular mid-wall fibrosis (MWF) was first described as an autopsy finding in 1991 [3]. Clinical studies using late gadolinium enhancement cardiovascular magnetic resonance (LGE-CMR) have subsequently shown that in patients with NICM, MWF is associated with an increased risk of heart failure hospitalizations, ventricular arrhythmias and cardiac death [4–8]. Patients with NICM and MWF are also less responsive to pharmacologic therapy [9] and cardiac resynchronization therapy [10]. Whilst the evidence linking MWF and poor patient outcomes is compelling [4–11], the mechanism remains unexplored.\nThe left ventricle (LV) twists in systole and untwists, or recoils, in diastole. In systole, the LV base rotates clockwise and the apex rotates counter-clockwise. This wringing motion is effected by the helical arrangement of myocardial fibers, which run in a left-handed direction in the subepicardium and in a right-handed direction in the subendocardium. Contraction of subepicardial myocardial fibers causes the base to rotate clockwise and the apex to rotate counterclockwise [12]. Because the radius of rotation of the subepicardium is greater than that of the subendocardium, the former provides a greater torque. Consequently, the LV gets smaller in systole and LV ejection occurs [12]. Circumferential fibers, which run in the mid-myocardium, are crucial to this process. During ejection, they shorten simultaneously with oblique fibers in the right- and left-handed helices. 
In effect, circumferential fibers provide a horizontal counterforce throughout ejection [13].\nWe hypothesized that injury to mid-myocardial, circumferential myocardial fibers [14], as might be expected from MWF, leads to impairment of LV circumferential contraction and relaxation and therefore, to disturbances in LV twist and torsion. In this study, we have used feature-tracking CMR (FT-CMR) [15] to explore the mechanical effects of MWF in patients with NICM.", " Patients Patients with NICM were recruited through CMR units in two centers (Good Hope Hospital and Queen Elizabeth Hospital, Birmingham, United Kingdom). The initial diagnosis of cardiomyopathy was made on the basis of clinical history, echocardiographic evidence of LV systolic impairment and absence of coronary artery disease on invasive coronary angiography. The diagnosis of NICM was also made on the basis of LGE-CMR [4]. Mid-wall LGE was assessed visually and only deemed to be present if a crescentic or circumferential area of mid-wall signal enhancement (2 SD above the mean intensity of remote myocardium in the same slice [16]), surrounded by non-enhanced epicardial and endocardial myocardium was evident. Patients with scars in a sub-endocardial or transmural distribution following coronary artery territories were regarded as ischemic in etiology [4] and excluded. Those with epicardial, transmural or patchy fibrosis suggestive of other etiologies were also excluded. It is routine clinical practice at the two recruiting dedicated heart failure units to perform CMR as part of the diagnostic work-up. Accordingly, all patients underwent CMR at the time of the diagnosis. All Participants gave written informed consent, and the study protocol conformed to the Declaration of Helsinki and was approved by the National Research Ethics Service.\nPatients with NICM were recruited through CMR units in two centers (Good Hope Hospital and Queen Elizabeth Hospital, Birmingham, United Kingdom). 
The initial diagnosis of cardiomyopathy was made on the basis of clinical history, echocardiographic evidence of LV systolic impairment and absence of coronary artery disease on invasive coronary angiography. The diagnosis of NICM was also made on the basis of LGE-CMR [4]. Mid-wall LGE was assessed visually and only deemed to be present if a crescentic or circumferential area of mid-wall signal enhancement (2 SD above the mean intensity of remote myocardium in the same slice [16]), surrounded by non-enhanced epicardial and endocardial myocardium was evident. Patients with scars in a sub-endocardial or transmural distribution following coronary artery territories were regarded as ischemic in etiology [4] and excluded. Those with epicardial, transmural or patchy fibrosis suggestive of other etiologies were also excluded. It is routine clinical practice at the two recruiting dedicated heart failure units to perform CMR as part of the diagnostic work-up. Accordingly, all patients underwent CMR at the time of the diagnosis. All Participants gave written informed consent, and the study protocol conformed to the Declaration of Helsinki and was approved by the National Research Ethics Service.\n CMR This was undertaken using 1.5 Tesla Magnetom Avanto (Siemens, Erlangen, Germany) or Signa (GE Healthcare Worldwide, Slough, England) scanners and a phased-array cardiac coil. A horizontal long-axis image and a short-axis LV stack from the atrioventricular ring to the LV apex were acquired using a steady state in free precession (SSFP) sequence (repetition time of 3.2 ms; echo time of 1.7 ms; flip angle of 60°; sequential 7 mm slices with a 3 mm interslice gap). 
There were 25 phases per cardiac cycle resulting in a mean temporal resolution of 40 ms.\nFor scar imaging, horizontal and vertical long-axis as well as short-axis slices identical to the LV stack were acquired using a segmented inversion-recovery technique 10 min after the intravenous administration of gadolinium-diethylenetriamine pentaacetic acid (0.1 mmol/kg). Inversion times were adjusted to null normal myocardium (260 to 400 ms). To exclude artefact, we required the typical scar pattern to be visible in the short-axis and long-axis acquisitions, in two different phase encoded directions. Patients were dichotomized according to the presence or absence of MWF, assessed visually by an experienced observer (F.L.), who was blinded to other study data.\nFeature tracking CMR (Tomtec Imaging Systems, Munich, Germany) was undertaken as previously described. It has been validated against myocardial tagging for the assessment of myocardial mechanics [15, 17]. We have previously shown that both circumferential- and longitudinal-based variables have an excellent intra- and inter-observer variability [18]. Global peak systolic circumferential (Ɛcc) and radial (Ɛrr) strain, strain rates (SSRcc and SSRrr) and diastolic strain rates (DSRcc and DSRrr) were assessed using FT-CMR of the mid-cavity LV short-axis cine. Longitudinal strains (Ɛll, SSRll and DSRll) were assessed using the horizontal long axis cine. Only the SSFP sequences were uploaded onto the FT-CMR software, ensuring that the operator (R.T.) was blinded to MWF status. In addition, MWF status was decided by an investigator (F.L.) who was blinded to the findings of FT-CMR.\nPeak systolic rotation was measured using the basal and apical short axis cines. In health, peak systolic rotation, as viewed from the apex, is typically clockwise (+) at the base, and anti-clockwise (−) at the apex. 
Peak systolic rotation was calculated in degrees and expressed as both the maximum extent of rotation in the anticipated direction (i.e., if systolic rotation at the apex was solely in a clockwise direction this would equate to 0°) and the total magnitude of rotation (regardless of direction). Torsional parameters are derived from the peak instantaneous net difference in apical and basal rotation. LV twist was defined as (Φ apex - Φ base), twist per unit length (Φ apex - Φ base/D), and LV torsion (circumferential-longitudinal shear angle) as (Φ apex - Φ base)(ρ apex - ρ base) / 2D (where Φ = the rotation angle; ρ = epicardial radius, and; D = base-to-apex distance) in accordance with agreed methodologies [19]. Systolic torsion was classified as either: a) normal torsion, in which there is predominantly anticlockwise rotation of the apex and clockwise rotation of the base;b) rigid body rotation: both the apex and base rotating in the same direction; and c) reverse torsion: predominantly clockwise rotation of the apex and anti-clockwise rotation of the base (Fig. 1).Fig. 1Rotational mechanics in NICM. Diagrammatic representation of torsional and rotational patterns identified using feature-tracking cardiovascular magnetic resonance. In the bottom tiles, the time in the cardiac cycle, expressed as a percentage of the R-R interval on the ECG, is shown in the x axes. Rotation at the base and apex of the LV as well as net torsion (the instantaneous difference between apical and basal rotation) is shown on the y axis (in degrees) a shows a preserved torsional pattern from a patient with non-ischemic dilated cardiomyopathy without MWF with predominantly anticlockwise rotation at the apex and clockwise rotation at the base. b shows reverse torsion, where the direction of both apical and basal rotation is reversed. c shows rigid body rotation in a patient with NICM and MWF. 
The apex and base both twist in the same direction so that the heart rotates as one solid body with minimal net torsion\n\nRotational mechanics in NICM. Diagrammatic representation of torsional and rotational patterns identified using feature-tracking cardiovascular magnetic resonance. In the bottom tiles, the time in the cardiac cycle, expressed as a percentage of the R-R interval on the ECG, is shown in the x axes. Rotation at the base and apex of the LV as well as net torsion (the instantaneous difference between apical and basal rotation) is shown on the y axis (in degrees) a shows a preserved torsional pattern from a patient with non-ischemic dilated cardiomyopathy without MWF with predominantly anticlockwise rotation at the apex and clockwise rotation at the base. b shows reverse torsion, where the direction of both apical and basal rotation is reversed. c shows rigid body rotation in a patient with NICM and MWF. The apex and base both twist in the same direction so that the heart rotates as one solid body with minimal net torsion\nThis was undertaken using 1.5 Tesla Magnetom Avanto (Siemens, Erlangen, Germany) or Signa (GE Healthcare Worldwide, Slough, England) scanners and a phased-array cardiac coil. A horizontal long-axis image and a short-axis LV stack from the atrioventricular ring to the LV apex were acquired using a steady state in free precession (SSFP) sequence (repetition time of 3.2 ms; echo time of 1.7 ms; flip angle of 60°; sequential 7 mm slices with a 3 mm interslice gap). There were 25 phases per cardiac cycle resulting in a mean temporal resolution of 40 ms.\nFor scar imaging, horizontal and vertical long-axis as well as short-axis slices identical to the LV stack were acquired using a segmented inversion-recovery technique 10 min after the intravenous administration of gadolinium-diethylenetriamine pentaacetic acid (0.1 mmol/kg). Inversion times were adjusted to null normal myocardium (260 to 400 ms). 
To exclude artefact, we required the typical scar pattern to be visible in the short-axis and long-axis acquisitions, in two different phase encoded directions. Patients were dichotomized according to the presence or absence of MWF, assessed visually by an experienced observer (F.L.), who was blinded to other study data.\nFeature tracking CMR (Tomtec Imaging Systems, Munich, Germany) was undertaken as previously described. It has been validated against myocardial tagging for the assessment of myocardial mechanics [15, 17]. We have previously shown that both circumferential- and longitudinal-based variables have an excellent intra- and inter-observer variability [18]. Global peak systolic circumferential (Ɛcc) and radial (Ɛrr) strain, strain rates (SSRcc and SSRrr) and diastolic strain rates (DSRcc and DSRrr) were assessed using FT-CMR of the mid-cavity LV short-axis cine. Longitudinal strains (Ɛll, SSRll and DSRll) were assessed using the horizontal long axis cine. Only the SSFP sequences were uploaded onto the FT-CMR software, ensuring that the operator (R.T.) was blinded to MWF status. In addition, MWF status was decided by an investigator (F.L.) who was blinded to the findings of FT-CMR.\nPeak systolic rotation was measured using the basal and apical short axis cines. In health, peak systolic rotation, as viewed from the apex, is typically clockwise (+) at the base, and anti-clockwise (−) at the apex. Peak systolic rotation was calculated in degrees and expressed as both the maximum extent of rotation in the anticipated direction (i.e., if systolic rotation at the apex was solely in a clockwise direction this would equate to 0°) and the total magnitude of rotation (regardless of direction). Torsional parameters are derived from the peak instantaneous net difference in apical and basal rotation. 
LV twist was defined as (Φ apex - Φ base), twist per unit length (Φ apex - Φ base/D), and LV torsion (circumferential-longitudinal shear angle) as (Φ apex - Φ base)(ρ apex - ρ base) / 2D (where Φ = the rotation angle; ρ = epicardial radius, and; D = base-to-apex distance) in accordance with agreed methodologies [19]. Systolic torsion was classified as either: a) normal torsion, in which there is predominantly anticlockwise rotation of the apex and clockwise rotation of the base;b) rigid body rotation: both the apex and base rotating in the same direction; and c) reverse torsion: predominantly clockwise rotation of the apex and anti-clockwise rotation of the base (Fig. 1).Fig. 1Rotational mechanics in NICM. Diagrammatic representation of torsional and rotational patterns identified using feature-tracking cardiovascular magnetic resonance. In the bottom tiles, the time in the cardiac cycle, expressed as a percentage of the R-R interval on the ECG, is shown in the x axes. Rotation at the base and apex of the LV as well as net torsion (the instantaneous difference between apical and basal rotation) is shown on the y axis (in degrees) a shows a preserved torsional pattern from a patient with non-ischemic dilated cardiomyopathy without MWF with predominantly anticlockwise rotation at the apex and clockwise rotation at the base. b shows reverse torsion, where the direction of both apical and basal rotation is reversed. c shows rigid body rotation in a patient with NICM and MWF. The apex and base both twist in the same direction so that the heart rotates as one solid body with minimal net torsion\n\nRotational mechanics in NICM. Diagrammatic representation of torsional and rotational patterns identified using feature-tracking cardiovascular magnetic resonance. In the bottom tiles, the time in the cardiac cycle, expressed as a percentage of the R-R interval on the ECG, is shown in the x axes. 
Statistical analysis

Categorical variables were expressed as a percentage and continuous variables as mean ± standard deviation (SD). Normality was tested using the Shapiro-Wilk test. Comparisons between variables were made with Fisher’s exact test for categorical variables and independent-samples t-tests for continuous variables, after adjustment by the Welch-Satterthwaite method where Levene’s test showed unequal variance between groups. A p value of <0.05 was considered statistically significant for all tests. Statistical analyses were performed using SPSS v21.0 (SPSS Inc., Chicago, Illinois).

Patients with NICM were recruited through CMR units in two centers (Good Hope Hospital and Queen Elizabeth Hospital, Birmingham, United Kingdom).
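The unequal-variance adjustment named in the statistical analysis above — Welch's t statistic with Welch-Satterthwaite degrees of freedom — can be illustrated with a small stdlib-only sketch (ours, not the SPSS implementation used in the study):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom for two
    independent samples (the adjustment applied when Levene's test indicates
    unequal group variances)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se2 = va / na + vb / nb
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
# t ≈ -1.90, df ≈ 5.88 for these illustrative samples
```

The p value is then obtained from the t distribution with (non-integer) df degrees of freedom, which is what statistical packages report for the Welch-adjusted test.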
The initial diagnosis of cardiomyopathy was made on the basis of clinical history, echocardiographic evidence of LV systolic impairment and the absence of coronary artery disease on invasive coronary angiography. The diagnosis of NICM was also made on the basis of LGE-CMR [4]. Mid-wall LGE was assessed visually and only deemed present if a crescentic or circumferential area of mid-wall signal enhancement (2 SD above the mean intensity of remote myocardium in the same slice [16]), surrounded by non-enhanced epicardial and endocardial myocardium, was evident. Patients with scars in a sub-endocardial or transmural distribution following coronary artery territories were regarded as ischemic in etiology [4] and excluded. Those with epicardial, transmural or patchy fibrosis suggestive of other etiologies were also excluded. It is routine clinical practice at the two recruiting dedicated heart failure units to perform CMR as part of the diagnostic work-up. Accordingly, all patients underwent CMR at the time of diagnosis. All participants gave written informed consent, and the study protocol conformed to the Declaration of Helsinki and was approved by the National Research Ethics Service.

CMR was undertaken using 1.5 Tesla Magnetom Avanto (Siemens, Erlangen, Germany) or Signa (GE Healthcare Worldwide, Slough, England) scanners and a phased-array cardiac coil. A horizontal long-axis image and a short-axis LV stack from the atrioventricular ring to the LV apex were acquired using a steady-state free precession (SSFP) sequence (repetition time 3.2 ms; echo time 1.7 ms; flip angle 60°; sequential 7 mm slices with a 3 mm interslice gap).
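The enhancement criterion above (signal more than 2 SD above the mean intensity of remote myocardium in the same slice) can be sketched as follows; this is a simplified, hypothetical illustration on a list of voxel intensities, not the clinical post-processing software used in the study:

```python
import statistics

def enhanced_voxels(voxel_intensities, remote_roi_intensities, n_sd=2.0):
    """Return the intensities exceeding mean + n_sd * SD of a remote-myocardium
    reference ROI, mirroring the 2 SD late gadolinium enhancement threshold."""
    mean = statistics.mean(remote_roi_intensities)
    sd = statistics.stdev(remote_roi_intensities)
    threshold = mean + n_sd * sd
    return [v for v in voxel_intensities if v > threshold]

# Remote ROI with mean 100 and SD 2 gives a threshold of 104;
# only the voxels at 105 and 110 count as enhanced.
assert enhanced_voxels([103, 105, 110], [98, 100, 102]) == [105, 110]
```

In practice the comparison is made voxel-wise within a slice, and the visual mid-wall pattern requirement described above is applied on top of the intensity threshold.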
There were 25 phases per cardiac cycle, resulting in a mean temporal resolution of 40 ms.

For scar imaging, horizontal and vertical long-axis as well as short-axis slices identical to the LV stack were acquired using a segmented inversion-recovery technique 10 min after the intravenous administration of gadolinium-diethylenetriamine pentaacetic acid (0.1 mmol/kg). Inversion times were adjusted to null normal myocardium (260 to 400 ms).
The characteristics of the study group are shown in Table 1. Amongst the entire cohort, 32/116 patients (28 %) had MWF. Patients were of similar age (63.8 vs. 62.3 years, p = 0.29), but more patients with MWF were men (84 % vs. 61 %, p = 0.02).
There were no differences in NYHA class, atrial rhythm, QRS duration, LVEF, co-morbidities or pharmacological therapy for heart failure.

Table 1 Baseline characteristics

| | No MWF | MWF | P |
|---|---|---|---|
| N | 84 | 32 | |
| Age, yrs | 62.3 ± 13.7 | 63.8 ± 11.9 | 0.29 |
| Male, n (%) | 51 (61) | 27 (84) | 0.02 |
| Height, m | 1.68 ± 0.09 | 1.74 ± 0.09 | 0.02 |
| Weight, kg | 83.4 ± 18.6 | 83.3 ± 12.6 | 0.97 |
| NYHA class | | | 0.20 |
| — I | 4 (5) | 3 (9) | |
| — II | 15 (18) | 8 (25) | |
| — III | 47 (56) | 11 (34) | |
| — IV | 9 (11) | 5 (16) | |
| — Unknown | 9 (11) | 5 (16) | |
| Diabetes mellitus, n (%) | 13 (16) | 7 (24) | 0.42 |
| Hypertension, n (%) | 18 (22) | 5 (17) | 0.61 |
| Atrial fibrillation, n (%) | 15 (18) | 8 (24) | 0.44 |
| Medication, n (%) | | | |
| — Loop diuretics | 62 (81) | 26 (89) | 0.47 |
| — ACE-I or ARB | 77 (97) | 27 (90) | 0.31 |
| — Beta-blockers | 51 (65) | 20 (66) | 1.00 |
| — Aldosterone antagonists | 36 (46) | 10 (35) | 0.29 |
| Systolic blood pressure, mmHg | 124.3 ± 20.5 | 119.6 ± 23.1 | 0.38 |
| Diastolic blood pressure, mmHg | 71.5 ± 11.9 | 71.7 ± 13.8 | 0.96 |
| QRS duration, ms | 144 (28) | 149 (32) | 0.48 |

ACE-I angiotensin-converting enzyme inhibitors, ARB angiotensin receptor blockers

Systolic deformation

As shown in Table 2, patients with MWF had lower global circumferential strain (Ɛcc: −6.6 % vs. −9.4 %, p = 0.004), but similar longitudinal (Ɛll: −7.6 % vs. −9.4 %, p = 0.053) and radial (Ɛrr: 14.6 % vs. 17.8 %, p = 0.18) strain. Systolic strain rate was reduced in the circumferential direction (SSRcc: −0.38 s−1 vs. −0.56 s−1, p = 0.005), but not in the radial or longitudinal directions. Figure 2 shows typical examples. As shown in Fig. 3, Ɛcc (r = 0.70, p <0.001), Ɛrr (r = 0.57, p <0.001) and Ɛll (r = 0.62, p <0.001) correlated positively with LVEF.
In the case of Ɛcc, the slope of the regression line was 0.17 in the MWF group and 0.31 in the no-MWF group, indicating that Ɛcc is lower in the MWF group than in the no-MWF group at a given LVEF.

Table 2 Mechanical variables in patients with or without MWF

| | No MWF | MWF | P |
|---|---|---|---|
| LV dimensions | | | |
| — LVEDV, mL | 222 ± 80 | 277 ± 79 | 0.002 |
| — LVESV, mL | 166 ± 79 | 214 ± 83 | 0.007 |
| — LV mass, g | 137.6 ± 46.6 | 155.5 ± 71.1 | 0.052 |
| Systolic deformation | | | |
| — LVEF, % | 27.5 ± 10.8 | 24.3 ± 12.9 | 0.20 |
| — Ɛcc (%) | −9.4 ± 4.76 | −6.6 ± 2.57 | 0.004 |
| — SSRcc (s−1) | −0.56 ± 0.25 | −0.38 ± 0.12 | 0.005 |
| — Ɛrr (%) | 17.8 ± 11.0 | 14.6 ± 10.1 | 0.18 |
| — SSRrr (s−1) | 0.84 ± 0.37 | 0.74 ± 0.40 | 0.31 |
| — Ɛll (%) | −9.4 ± 4.35 | −7.6 ± 3.34 | 0.053 |
| — SSRll (s−1) | −0.56 ± 0.20 | −0.49 ± 0.18 | 0.13 |
| Diastolic deformation | | | |
| — DSRcc (s−1) | 0.46 ± 0.19 | 0.34 ± 0.11 | 0.010 |
| — DSRrr (s−1) | −0.75 ± 0.35 | −0.55 ± 0.44 | 0.038 |
| — DSRll (s−1) | 0.50 ± 0.20 | 0.38 ± 0.14 | 0.006 |
| Systolic torsion | | | |
| — Basal systolic rotation, net clockwise (°) | 3.40 ± 3.00 | 3.00 ± 2.23 | 0.513 |
| — Basal systolic rotation, magnitude (°) | 4.63 ± 2.64 | 3.67 ± 1.97 | 0.082 |
| — Basal rotation rate (° s−1) | 31.3 ± 14.5 | 22.1 ± 8.2 | 0.002 |
| — Apical systolic rotation, net anti-clockwise (°) | −3.50 ± 3.28 | −1.99 ± 1.97 | 0.024 |
| — Apical systolic rotation, magnitude (°) | 5.18 ± 3.15 | 3.52 ± 2.45 | 0.013 |
| — Apical rotation rate (° s−1) | −38.9 ± 21.8 | −26.1 ± 15.8 | 0.005 |
| — Average basal/apical rotation (°) | 9.81 ± 4.48 | 7.20 ± 3.44 | 0.002 |
| — LV twist (°) | 6.31 ± 3.30 | 4.65 ± 2.18 | 0.004 |
| — LV twist per unit length (°/cm) | 1.34 ± 0.76 | 0.94 ± 0.55 | 0.005 |
| — Torsional shear angle | 0.83 ± 0.06 | 0.52 ± 0.07 | 0.008 |
| — LV twist rate (° s−1) | 48.4 ± 23.1 | 36.1 ± 17.1 | 0.01 |
| — Torsional pattern | | | <0.001 |
| —— Normal torsion, n (%) | 39 (46) | 10 (32) | |
| —— Rigid body rotation, n (%) | 23 (28) | 21 (64) | |
| —— Reverse torsion, n (%) | 22 (26) | 1 (4) | |
| Diastolic torsion | | | |
| — Basal rotation rate (° s−1) | −34.1 ± 14.8 | −28.0 ± 11.8 | 0.053 |
| — Apical rotation rate (° s−1) | 38.3 ± 20.1 | 24.9 ± 13.1 | 0.001 |
| — LV untwist rate (° s−1) | 44.5 ± 21.0 | 30.5 ± 14.9 | <0.001 |

Variables are expressed as mean ± SD. MWF mid-wall fibrosis, SSR systolic strain rate, DSR diastolic strain rate, Ɛ strain

Fig. 2 Feature-tracking CMR.
Short-axis, late gadolinium enhancement views of patients with idiopathic dilated cardiomyopathy, without and with mid-wall fibrosis (MWF, white arrows). The bottom tiles show plots of global circumferential strain (Ɛcc, purple), global radial strain (Ɛrr, red) and global longitudinal strain (Ɛll, green) over a cardiac cycle. Note the marked reduction in Ɛcc in the patient with MWF.

Fig. 3 Relationship between LVEF and myocardial strain. Scattergrams for each of the Lagrangian strains plotted against LVEF. Cases are classified according to presence (blue circles) or absence (red circles) of mid-wall fibrosis (MWF). The lines correspond to the 95 % confidence intervals for strain. The top scattergram demonstrates that above an LVEF of 25 % (dashed reference line) MWF alters the relationship between Ɛcc and LVEF: patients with MWF have lower Ɛcc than those with similar LVEF but without MWF.
Diastolic deformation

In patients with MWF, diastolic strain rates were lower in all three directions (DSRcc: 0.34 vs. 0.46 s−1, p = 0.01; DSRrr: −0.55 vs. −0.75 s−1, p = 0.04; DSRll: 0.38 vs. 0.50 s−1, p = 0.006).

Torsional mechanics

Whilst basal rotation was unaffected by MWF (net clockwise: 3.00° vs. 3.40°, p = 0.51; total magnitude: 3.67° vs. 4.63°, p = 0.08), the rate of basal rotation was reduced (22.1° s−1 vs. 31.3° s−1, p = 0.002). In patients with MWF, apical rotation was also reduced in terms of both the total magnitude (3.52° vs. 5.18°, p = 0.013) and the net anti-clockwise rotation (−1.99° vs. −3.50°, p = 0.024). The rate of apical rotation was lower in patients with MWF (−26.1° s−1 vs. −38.9° s−1, p = 0.005). This reduction in the magnitude of apical rotation was associated with a reduction in LV twist (peak LV twist: 4.65° vs. 6.31°, p = 0.004; LV twist per unit length: 0.94°/cm vs. 1.34°/cm, p = 0.005; torsional shear angle: 0.52 vs. 0.83, p = 0.008). The rate of LV twist (36.1° s−1 vs.
48.4° s−1, p = 0.001) and untwist (30.5° s−1 vs. 44.5° s−1, p <0.001) was also reduced in patients with MWF. A normal torsion pattern, in which there is predominantly anti-clockwise rotation of the apex and clockwise rotation of the base, was observed more frequently in patients without MWF (46 % vs. 32 %). Rigid LV body rotation was more frequently observed in patients with MWF (64 % vs. 28 %, p <0.001).
In this study, we have shown that in patients with NICM, MWF is associated with a selective impairment of circumferential LV myocardial strain. In addition, MWF is associated with impaired apical rotation and a reduction in rotation rate, from base to apex.
MWF is also associated with impaired diastolic function, reflected in reductions in diastolic strain rate in all directions and in untwist rate. Together, these findings are consistent with the notion that, by affecting predominantly circumferential myocardial fibers, MWF leads to disturbances in myocardial contraction and diastolic function. The result is a 'stiff' LV, which is less able to twist in response to an applied torque and more likely to move as a solid body. These disturbances may be related to the known associations of MWF with reduced pump function, heart failure hospitalizations and a poor response to medical and device therapy [4–11].

Systole

During ejection, circumferential fibers shorten simultaneously with the oblique fibers in the right- and left-handed helices to thicken the myocardium and empty the heart. We have found that MWF was associated with a selective reduction in circumferential strain, suggesting that MWF preferentially affects mid-myocardial, circumferential fibres. As noted by Buckberg [13], circumferential fibers provide a horizontal counterforce, or 'buttress', to the simultaneously contracting oblique fibres. Impaired circumferential contraction would be expected to lead to impaired rotation, as we have found in patients with MWF. Our finding of more frequent rigid LV body rotation supports the notion that MWF renders the LV less capable of twisting and more liable to move as a rigid body.

We have previously shown that patients with NICM and MWF treated with CRT are more likely to suffer pump failure than patients without MWF [10]. On the other hand, Lamia et al. found that CRT improved torsion, stroke volume and stroke work in an animal model [20]. Using 3-dimensional speckle-tracking echocardiography, others found that in patients with NICM, CRT led to an improvement in LV torsion [21].
If torsion is indeed influenced by CRT, we might expect that the higher risk of pump failure observed in patients with MWF undergoing CRT may be due to a permanent inability of the LV to twist and untwist. This hypothesis requires further exploration.

Diastole

In diastole, release of energy stored in systole (recoil) causes rapid untwisting and a mitral-to-apical negative pressure gradient [22] that 'sucks' blood from the left atrium to the LV [23]. Untwisting occurs mainly during the isovolumic relaxation period and is followed by diastolic filling.
Several studies [24–26] have shown that whilst cavity volume is fixed during isovolumic relaxation, there is a rapid recoil of about 40 % of the torsion effected during systole. We have found that MWF leads both to a multi-directional impairment in diastolic strain rate and to impairment of the apical untwist rate. This is likely to account for the higher LV filling pressures observed using echocardiography in patients with NICM and MWF [27]. Conceivably, impaired apical untwisting leads to impaired LV suction and to increased LV filling pressures.

Limitations

The LGE-CMR technique described herein only detects replacement fibrosis. The more recent technique of T1 mapping, which detects interstitial fibrosis, was not undertaken. We cannot therefore comment on whether our findings are also influenced by the latter. In addition, we have not routinely undertaken myocardial biopsy, nor have we quantified myocardial oedema. Therefore, we cannot exclude the possibility that our findings were influenced by active myocarditis, despite the absence of evidence from clinical and laboratory screening.
We should also add that different manufacturers use varying methodologies for the calculation of the mechanical variables described; our findings are therefore not generalizable to other FT-CMR methodologies. Publication of the FT-CMR algorithms used by different manufacturers would be welcome.
Conclusions

We have shown that in patients with NICM, MWF is associated with profound disturbances in LV global circumferential strain, strain rate, LV twist and torsion, in both systole and diastole. In addition, MWF is associated with rigid LV body rotation. These findings provide a mechanistic link between MWF and a poor clinical outcome in patients with NICM, despite pharmacologic and device therapy.
Keywords: Heart failure; Non-ischemic dilated cardiomyopathy; Mid-wall fibrosis; Feature-tracking; Cardiovascular magnetic resonance; Torsion; Myocardial deformation
Background

Non-ischemic cardiomyopathy (NICM) is a common cause of heart failure [1]. The NICM phenotype ranges from patients who remain largely asymptomatic to those who succumb to multiple hospitalizations and premature death. In a study of 603 patients with idiopathic dilated cardiomyopathy followed up over 9 years, Castelli et al. found that 45 % died or underwent cardiac transplantation [2]. Left ventricular mid-wall fibrosis (MWF) was first described as an autopsy finding in 1991 [3]. Clinical studies using late-gadolinium enhancement cardiovascular magnetic resonance (LGE-CMR) have subsequently shown that in patients with NICM, MWF is associated with an increased risk of heart failure hospitalizations, ventricular arrhythmias and cardiac death [4–8]. Patients with NICM and MWF are also less responsive to pharmacologic therapy [9] and cardiac resynchronization therapy [10]. Whilst the evidence linking MWF and poor patient outcomes is compelling [4–11], the mechanism remains unexplored.

The left ventricle (LV) twists in systole and untwists, or recoils, in diastole. In systole, the LV base rotates clockwise and the apex rotates counter-clockwise. This wringing motion is effected by the helical arrangement of myocardial fibers, which run in a left-handed direction in the subepicardium and in a right-handed direction in the subendocardium. Contraction of subepicardial myocardial fibers causes the base to rotate clockwise and the apex to rotate counter-clockwise [12]. Because the radius of rotation of the subepicardium is greater than that of the subendocardium, the former provides the greater torque. Consequently, the LV gets smaller in systole and LV ejection occurs [12]. Circumferential fibers, which run in the mid-myocardium, are crucial to this process. During ejection, they shorten simultaneously with the oblique fibers in the right- and left-handed helices. In effect, circumferential fibers provide a horizontal counterforce throughout ejection [13].
We hypothesized that injury to mid-myocardial, circumferential myocardial fibers [14], as might be expected from MWF, leads to impairment of LV circumferential contraction and relaxation and, therefore, to disturbances in LV twist and torsion. In this study, we used feature-tracking CMR (FT-CMR) [15] to explore the mechanical effects of MWF in patients with NICM.

Methods

Patients

Patients with NICM were recruited through CMR units in two centers (Good Hope Hospital and Queen Elizabeth Hospital, Birmingham, United Kingdom). The initial diagnosis of cardiomyopathy was made on the basis of clinical history, echocardiographic evidence of LV systolic impairment and the absence of coronary artery disease on invasive coronary angiography. The diagnosis of NICM was also made on the basis of LGE-CMR [4]. Mid-wall LGE was assessed visually and deemed present only if a crescentic or circumferential area of mid-wall signal enhancement (2 SD above the mean intensity of remote myocardium in the same slice [16]), surrounded by non-enhanced epicardial and endocardial myocardium, was evident. Patients with scars in a sub-endocardial or transmural distribution following coronary artery territories were regarded as ischemic in etiology [4] and excluded. Those with epicardial, transmural or patchy fibrosis suggestive of other etiologies were also excluded. It is routine clinical practice at the two recruiting dedicated heart failure units to perform CMR as part of the diagnostic work-up. Accordingly, all patients underwent CMR at the time of diagnosis. All participants gave written informed consent, and the study protocol conformed to the Declaration of Helsinki and was approved by the National Research Ethics Service.
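The "2 SD above the mean intensity of remote myocardium" enhancement criterion can be sketched numerically. This is an illustrative sketch only: the helper name and the signal intensities below are hypothetical, not taken from the study.

```python
from statistics import mean, stdev

def enhancement_threshold(remote_pixels, n_sd=2.0):
    """Signal threshold: remote-myocardium mean plus n_sd standard deviations."""
    return mean(remote_pixels) + n_sd * stdev(remote_pixels)

# Hypothetical signal intensities (arbitrary units) from one short-axis slice
remote = [98, 102, 101, 99, 100, 103, 97, 100]   # visually normal myocardium
midwall = [160, 155, 158]                        # candidate mid-wall stripe
thr = enhancement_threshold(remote)              # 100 + 2 x 2 = 104
print(all(px > thr for px in midwall))           # → True
```

Pixels exceeding the threshold would additionally have to form a crescentic or circumferential mid-wall band, surrounded by non-enhanced myocardium, to count as MWF.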
CMR

This was undertaken using 1.5 Tesla Magnetom Avanto (Siemens, Erlangen, Germany) or Signa (GE Healthcare Worldwide, Slough, England) scanners and a phased-array cardiac coil. A horizontal long-axis image and a short-axis LV stack from the atrioventricular ring to the LV apex were acquired using a steady-state free precession (SSFP) sequence (repetition time 3.2 ms; echo time 1.7 ms; flip angle 60°; sequential 7 mm slices with a 3 mm interslice gap). There were 25 phases per cardiac cycle, giving a mean temporal resolution of 40 ms.
For scar imaging, horizontal and vertical long-axis as well as short-axis slices identical to the LV stack were acquired using a segmented inversion-recovery technique 10 min after the intravenous administration of gadolinium-diethylenetriamine pentaacetic acid (0.1 mmol/kg). Inversion times were adjusted to null normal myocardium (260 to 400 ms). To exclude artefact, we required the typical scar pattern to be visible in the short-axis and long-axis acquisitions, in two different phase encoded directions. Patients were dichotomized according to the presence or absence of MWF, assessed visually by an experienced observer (F.L.), who was blinded to other study data. Feature tracking CMR (Tomtec Imaging Systems, Munich, Germany) was undertaken as previously described. It has been validated against myocardial tagging for the assessment of myocardial mechanics [15, 17]. We have previously shown that both circumferential- and longitudinal-based variables have an excellent intra- and inter-observer variability [18]. Global peak systolic circumferential (Ɛcc) and radial (Ɛrr) strain, strain rates (SSRcc and SSRrr) and diastolic strain rates (DSRcc and DSRrr) were assessed using FT-CMR of the mid-cavity LV short-axis cine. Longitudinal strains (Ɛll, SSRll and DSRll) were assessed using the horizontal long axis cine. Only the SSFP sequences were uploaded onto the FT-CMR software, ensuring that the operator (R.T.) was blinded to MWF status. In addition, MWF status was decided by an investigator (F.L.) who was blinded to the findings of FT-CMR. Peak systolic rotation was measured using the basal and apical short axis cines. In health, peak systolic rotation, as viewed from the apex, is typically clockwise (+) at the base, and anti-clockwise (−) at the apex. 
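The global strain and strain-rate measures described above can be illustrated with a minimal sketch. The contour lengths below are hypothetical, and the percent-Lagrangian-strain and frame-difference strain-rate formulas are the standard textbook definitions, not the vendor's unpublished algorithm.

```python
def lagrangian_strain(lengths):
    """Percent Lagrangian strain of a tracked contour: 100 x (L(t) - L0) / L0."""
    L0 = lengths[0]
    return [100.0 * (L - L0) / L0 for L in lengths]

def strain_rate(strain_pct, dt):
    """Frame-to-frame strain rate in s^-1 (strain as a fraction per second)."""
    return [(b - a) / 100.0 / dt for a, b in zip(strain_pct, strain_pct[1:])]

# Hypothetical mid-wall circumferential contour lengths (cm), five frames 40 ms apart
lengths = [18.0, 17.5, 17.0, 16.6, 16.4]
ecc = lagrangian_strain(lengths)     # negative values: circumferential shortening
ssr = strain_rate(ecc, dt=0.040)
print(round(min(ecc), 2), round(min(ssr), 2))  # → -8.89 -0.69
```

Peak systolic strain is the most negative Ɛcc value, and peak systolic strain rate the most negative frame-to-frame derivative; diastolic strain rates are the positive derivatives during relengthening.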
Peak systolic rotation was calculated in degrees and expressed both as the maximum extent of rotation in the anticipated direction (i.e., if systolic rotation at the apex was solely in a clockwise direction, this would equate to 0°) and as the total magnitude of rotation (regardless of direction). Torsional parameters were derived from the peak instantaneous net difference between apical and basal rotation. LV twist was defined as (Φ apex − Φ base), twist per unit length as (Φ apex − Φ base)/D, and LV torsion (the circumferential-longitudinal shear angle) as (Φ apex − Φ base)(ρ apex + ρ base)/2D, where Φ is the rotation angle, ρ the epicardial radius and D the base-to-apex distance, in accordance with agreed methodologies [19]. Systolic torsion was classified as either:

a) normal torsion: predominantly anticlockwise rotation of the apex and clockwise rotation of the base;
b) rigid body rotation: the apex and base rotating in the same direction; or
c) reverse torsion: predominantly clockwise rotation of the apex and anticlockwise rotation of the base (Fig. 1).

Fig. 1 Rotational mechanics in NICM. Diagrammatic representation of torsional and rotational patterns identified using feature-tracking cardiovascular magnetic resonance. In the bottom tiles, the time in the cardiac cycle, expressed as a percentage of the R-R interval on the ECG, is shown on the x axes. Rotation at the base and apex of the LV, and net torsion (the instantaneous difference between apical and basal rotation), are shown on the y axes (in degrees). (a) A preserved torsional pattern from a patient with non-ischemic dilated cardiomyopathy without MWF, with predominantly anticlockwise rotation at the apex and clockwise rotation at the base. (b) Reverse torsion, in which the direction of both apical and basal rotation is reversed. (c) Rigid body rotation in a patient with NICM and MWF: the apex and base both twist in the same direction, so that the heart rotates as one solid body with minimal net torsion.
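The twist and torsion definitions above translate directly into code. A minimal sketch, assuming the consensus averaged-epicardial-radius form of the circumferential-longitudinal shear angle and hypothetical rotation angles (sign convention as in the text: clockwise positive at the base, anticlockwise negative at the apex, viewed from the apex):

```python
def lv_torsion(phi_apex, phi_base, rho_apex, rho_base, d):
    """Twist (deg), twist per unit length (deg/cm) and torsion (circumferential-
    longitudinal shear angle) from peak apical and basal rotation."""
    twist = phi_apex - phi_base
    twist_per_length = twist / d
    torsion = twist * (rho_apex + rho_base) / (2 * d)  # averaged epicardial radius
    return twist, twist_per_length, torsion

def torsion_pattern(apex_rot, base_rot):
    """Classify systolic torsion (clockwise positive, viewed from the apex)."""
    if apex_rot < 0 and base_rot > 0:
        return "normal torsion"      # apex anticlockwise, base clockwise
    if apex_rot > 0 and base_rot < 0:
        return "reverse torsion"     # both directions reversed
    return "rigid body rotation"     # apex and base rotate the same way

# Hypothetical patient: apex -4 deg, base +3 deg, epicardial radii 2.5/3.0 cm, length 8 cm
print(lv_torsion(-4.0, 3.0, 2.5, 3.0, 8.0))  # → (-7.0, -0.875, -2.40625)
print(torsion_pattern(-4.0, 3.0))            # → normal torsion
```

Note that with clockwise taken as positive, the healthy pattern yields a negative twist; published values are usually quoted as magnitudes.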
Statistical analysis

Categorical variables were expressed as percentages and continuous variables as mean ± standard deviation (SD). Normality was tested using the Shapiro-Wilk test. Comparisons between variables were made with Fisher's exact test for categorical variables and independent-samples t-tests for continuous variables, with adjustment by the Welch-Satterthwaite method where Levene's test showed unequal variance between groups. A p value of <0.05 was considered statistically significant for all tests. Statistical analyses were performed using SPSS v21.0 (SPSS Inc., Chicago, Illinois).
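The Welch-Satterthwaite adjustment used for the unequal-variance comparisons can be sketched as follows; the two samples below are hypothetical, not study data:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic with Satterthwaite degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical Ɛcc samples (%): the MWF group is less negative (i.e., worse)
no_mwf = [-9.1, -10.2, -8.7, -9.9, -9.0]
mwf = [-6.2, -7.1, -6.8, -6.4]
t, df = welch_t(no_mwf, mwf)
print(round(t, 2), round(df, 1))  # → -7.89 6.7
```

The t statistic is then referred to a t distribution with the (non-integer) Satterthwaite degrees of freedom, rather than the pooled-variance n1 + n2 − 2.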
Results

The characteristics of the study group are shown in Table 1. Amongst the entire cohort, 32/116 patients (28 %) had MWF. Patients were of similar age (63.8 vs. 62.3 years, p = 0.29), but more patients with MWF were men (84 % vs. 61 %, p = 0.02).
There were no differences in NYHA class, atrial rhythm, QRS duration, LVEF, co-morbidities, or pharmacological therapy for heart failure (Table 1).

Table 1 Baseline characteristics

| | No MWF | MWF | P |
|---|---|---|---|
| N | 84 | 32 | |
| Age, yrs | 62.3 ± 13.7 | 63.8 ± 11.9 | 0.29 |
| Male, n (%) | 51 (61) | 27 (84) | 0.02 |
| Height, m | 1.68 ± 0.09 | 1.74 ± 0.09 | 0.02 |
| Weight, kg | 83.4 ± 18.6 | 83.3 ± 12.6 | 0.97 |
| NYHA class | | | 0.20 |
| — I | 4 (5) | 3 (9) | |
| — II | 15 (18) | 8 (25) | |
| — III | 47 (56) | 11 (34) | |
| — IV | 9 (11) | 5 (16) | |
| — Unknown | 9 (11) | 5 (16) | |
| Diabetes mellitus, n (%) | 13 (16) | 7 (24) | 0.42 |
| Hypertension, n (%) | 18 (22) | 5 (17) | 0.61 |
| Atrial fibrillation, n (%) | 15 (18) | 8 (24) | 0.44 |
| Medication, n (%) | | | |
| — Loop diuretics | 62 (81) | 26 (89) | 0.47 |
| — ACE-I or ARB | 77 (97) | 27 (90) | 0.31 |
| — Beta-blockers | 51 (65) | 20 (66) | 1.00 |
| — Aldosterone antagonists | 36 (46) | 10 (35) | 0.29 |
| Systolic blood pressure, mmHg | 124.3 ± 20.5 | 119.6 ± 23.1 | 0.38 |
| Diastolic blood pressure, mmHg | 71.5 ± 11.9 | 71.7 ± 13.8 | 0.96 |
| QRS duration, ms | 144 (28) | 149 (32) | 0.48 |

ACE-I angiotensin-converting enzyme inhibitors; ARB angiotensin receptor blockers

Systolic deformation

As shown in Table 2, patients with MWF had lower global circumferential strain (Ɛcc: −6.6 % vs. −9.4 %, p = 0.004), but similar longitudinal (Ɛll: −7.6 % vs. −9.4 %, p = 0.053) and radial (Ɛrr: 14.6 % vs. 17.8 %, p = 0.18) strain. Systolic strain rate was reduced in the circumferential direction (SSRcc: −0.38 s−1 vs. −0.56 s−1, p = 0.005), but not in the radial or longitudinal directions. Figure 2 shows typical examples. As shown in Fig. 3, Ɛcc (r = 0.70), Ɛrr (r = 0.57, p <0.001) and Ɛll (r = 0.62, p <0.001) correlated positively with LVEF.
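The per-group regression of strain magnitude on LVEF underlying Fig. 3 can be sketched as below. The data points are hypothetical, chosen only to illustrate how per-group slopes are compared, and use |Ɛcc| so the fitted slopes are positive as reported:

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical (LVEF %, |Ɛcc| %) pairs: a steeper slope without MWF than with it
lvef = [15, 25, 35, 45]
ecc_no_mwf = [5.0, 8.1, 11.2, 14.3]   # rises 3.1 per 10 % LVEF -> slope 0.31
ecc_mwf = [4.0, 5.7, 7.4, 9.1]        # rises 1.7 per 10 % LVEF -> slope 0.17
print(round(slope(lvef, ecc_no_mwf), 2), round(slope(lvef, ecc_mwf), 2))  # → 0.31 0.17
```

A flatter slope in the MWF group means that, for the same LVEF, circumferential strain is disproportionately reduced.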
In the case of Ɛcc, the slope of the regression line was 0.17 in the MWF group and 0.31 in the no-MWF group, indicating that Ɛcc is lower in the MWF group than in the no-MWF group at a given LVEF.

Table 2 Mechanical variables in patients with or without MWF

| | No MWF | MWF | P |
|---|---|---|---|
| LV dimensions | | | |
| — LVEDV, mL | 222 ± 80 | 277 ± 79 | 0.002 |
| — LVESV, mL | 166 ± 79 | 214 ± 83 | 0.007 |
| — LV mass, g | 137.6 ± 46.6 | 155.5 ± 71.1 | 0.052 |
| Systolic deformation | | | |
| — LVEF, % | 27.5 ± 10.8 | 24.3 ± 12.9 | 0.20 |
| — Ɛcc (%) | −9.4 ± 4.76 | −6.6 ± 2.57 | 0.004 |
| — SSRcc (s−1) | −0.56 ± 0.25 | −0.38 ± 0.12 | 0.005 |
| — Ɛrr (%) | 17.8 ± 11.0 | 14.6 ± 10.1 | 0.18 |
| — SSRrr (s−1) | 0.84 ± 0.37 | 0.74 ± 0.40 | 0.31 |
| — Ɛll (%) | −9.4 ± 4.35 | −7.6 ± 3.34 | 0.053 |
| — SSRll (s−1) | −0.56 ± 0.20 | −0.49 ± 0.18 | 0.13 |
| Diastolic deformation | | | |
| — DSRcc (s−1) | 0.46 ± 0.19 | 0.34 ± 0.11 | 0.010 |
| — DSRrr (s−1) | −0.75 ± 0.35 | −0.55 ± 0.44 | 0.038 |
| — DSRll (s−1) | 0.50 ± 0.20 | 0.38 ± 0.14 | 0.006 |
| Systolic torsion | | | |
| — Basal systolic rotation, net clockwise (°) | 3.40 ± 3.00 | 3.00 ± 2.23 | 0.513 |
| — Basal systolic rotation, magnitude (°) | 4.63 ± 2.64 | 3.67 ± 1.97 | 0.082 |
| — Basal rotation rate (° s−1) | 31.3 ± 14.5 | 22.1 ± 8.2 | 0.002 |
| — Apical systolic rotation, net anti-clockwise (°) | −3.50 ± 3.28 | −1.99 ± 1.97 | 0.024 |
| — Apical systolic rotation, magnitude (°) | 5.18 ± 3.15 | 3.52 ± 2.45 | 0.013 |
| — Apical rotation rate (° s−1) | −38.9 ± 21.8 | −26.1 ± 15.8 | 0.005 |
| — Average basal/apical rotation (°) | 9.81 ± 4.48 | 7.20 ± 3.44 | 0.002 |
| — LV twist (°) | 6.31 ± 3.30 | 4.65 ± 2.18 | 0.004 |
| — LV twist per unit length (°/cm) | 1.34 ± 0.76 | 0.94 ± 0.55 | 0.005 |
| — Torsional shear angle | 0.83 ± 0.06 | 0.52 ± 0.07 | 0.008 |
| — LV twist rate (° s−1) | 48.4 ± 23.1 | 36.1 ± 17.1 | 0.01 |
| — Torsional pattern | | | <0.001 |
| —— Normal torsion, n (%) | 39 (46) | 10 (32) | |
| —— Rigid body rotation, n (%) | 23 (28) | 21 (64) | |
| —— Reverse torsion, n (%) | 22 (26) | 1 (4) | |
| Diastolic torsion | | | |
| — Basal rotation rate (° s−1) | −34.1 ± 14.8 | −28.0 ± 11.8 | 0.053 |
| — Apical rotation rate (° s−1) | 38.3 ± 20.1 | 24.9 ± 13.1 | 0.001 |
| — LV untwist rate (° s−1) | 44.5 ± 21.0 | 30.5 ± 14.9 | <0.001 |

Variables are expressed as mean ± SD. MWF mid-wall fibrosis; SSR systolic strain rate; DSR diastolic strain rate; Ɛ strain

Fig. 2 Feature-tracking CMR.
Short-axis, late gadolinium enhancement views of patients with idiopathic dilated cardiomyopathy, without and with mid-wall fibrosis (MWF, white arrows). The bottom tiles show plots of global circumferential strain (Ɛcc, purple), global radial strain (Ɛrr, red) and global longitudinal strain (Ɛll, green) over a cardiac cycle. Note the marked reduction in Ɛcc in the patient with MWF Fig. 3Relationship between LVEF and myocardial strain. Scattergrams for each of the Lagrangian strains plotted against LVEF. Cases are classified according to presence (blue circles) or absence (red circles) of mid-wall fibrosis (MWF). The lines correspond to the 95 % confidence intervals for strain. The top scattergram demonstrates that above an LVEF of 25 % (dashed reference line) MWF alters the relationship between Ɛcc and LVEF: patients with MWF have lower Ɛcc than those with similar LVEF but without MWF Mechanical variables in patients with or without MWF Variables are expressed as mean ± SD MWF mid-wall fibrosis, SSR systolic strain rate, DSR diastolic strain rate, Ɛ strain Feature-tracking CMR. Short-axis, late gadolinium enhancement views of patients with idiopathic dilated cardiomyopathy, without and with mid-wall fibrosis (MWF, white arrows). The bottom tiles show plots of global circumferential strain (Ɛcc, purple), global radial strain (Ɛrr, red) and global longitudinal strain (Ɛll, green) over a cardiac cycle. Note the marked reduction in Ɛcc in the patient with MWF Relationship between LVEF and myocardial strain. Scattergrams for each of the Lagrangian strains plotted against LVEF. Cases are classified according to presence (blue circles) or absence (red circles) of mid-wall fibrosis (MWF). The lines correspond to the 95 % confidence intervals for strain. 
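The slope comparison underlying Fig. 3 amounts to fitting a separate least-squares regression of Ɛcc on LVEF within each group. A minimal sketch of that per-group fit, using NumPy with synthetic, illustrative data (the arrays below are not the study measurements; the reported slopes of roughly 0.17 for +MWF and 0.31 for −MWF are used only to generate the example):

```python
import numpy as np

def group_slopes(lvef, ecc, has_mwf):
    """Fit ecc = a + b * lvef separately in the +MWF and -MWF groups.

    Returns the slope b per group. The study reported slopes of about
    0.17 (+MWF) and 0.31 (-MWF), i.e. a flatter strain-LVEF relation
    when mid-wall fibrosis is present.
    """
    slopes = {}
    for label, mask in (("+MWF", has_mwf), ("-MWF", ~has_mwf)):
        # np.polyfit with deg=1 returns [slope, intercept]
        slope, _intercept = np.polyfit(lvef[mask], ecc[mask], 1)
        slopes[label] = slope
    return slopes

# Illustrative synthetic data (hypothetical, not the study data)
rng = np.random.default_rng(0)
lvef = rng.uniform(10, 50, 60)            # LVEF in percent
has_mwf = np.arange(60) < 20              # first 20 cases labelled +MWF
slope_true = np.where(has_mwf, 0.17, 0.31)
ecc = -16 + slope_true * lvef + rng.normal(0, 0.5, 60)

print(group_slopes(lvef, ecc, has_mwf))
```

With real data, a formal comparison of the two slopes would additionally test the group-by-LVEF interaction term, which this sketch omits.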
Diastolic deformation
Diastolic strain rates were lower in all three directions in patients with MWF (DSRcc: 0.34 vs. 0.46 s−1, p = 0.01; DSRrr: −0.55 vs. −0.75 s−1, p = 0.04; DSRll: 0.38 vs. 0.50 s−1, p = 0.006).

Torsional mechanics
Whilst basal rotation was unaffected by MWF (net clockwise: 3.00° vs. 3.40°, p = 0.51; total magnitude: 3.67° vs. 4.63°, p = 0.08), the rate of basal rotation was reduced (22.1° s−1 vs. 31.3° s−1, p = 0.002). In patients with MWF, apical rotation was also reduced in terms of both the total magnitude (3.52° vs. 5.18°, p = 0.013) and the net anti-clockwise rotation (−1.99° vs. −3.50°, p = 0.024). The rate of apical rotation was lower in patients with MWF (−26.1° s−1 vs. −38.9° s−1, p = 0.005). This reduction in the magnitude of apical rotation was associated with a reduction in LV twist (peak LV twist: 4.65° vs. 6.31°, p = 0.004; LV twist per unit length: 0.94°/cm vs. 1.34°/cm, p = 0.005; torsional shear angle: 0.52 vs. 0.83, p = 0.008). The rates of LV twist (36.1° s−1 vs. 48.4° s−1, p = 0.001) and untwist (30.5° s−1 vs. 44.5° s−1, p < 0.001) were also reduced in patients with MWF. A normal torsion pattern, in which there is predominantly anti-clockwise rotation of the apex and clockwise rotation of the base, was observed more frequently in patients without MWF (46 % vs. 32 %). Rigid LV body rotation was more frequently observed in patients with MWF (64 % vs. 28 %, p < 0.001).
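The torsion indices compared above are derived from basal and apical rotation–time curves. As a minimal sketch, the code below computes peak LV twist, twist per unit length and a small-angle torsional shear estimate from such curves. The sign convention (clockwise basal rotation positive, counter-clockwise apical rotation negative, as in Table 2) and the shear approximation φ·r/L are common conventions, not necessarily this study's exact implementation, and all numeric inputs are hypothetical:

```python
import numpy as np

def twist_metrics(basal_deg, apical_deg, length_cm, radius_cm, dt_s):
    """Peak LV twist metrics from basal/apical rotation-time curves.

    Assumed sign convention: clockwise basal rotation positive,
    counter-clockwise apical rotation negative, so twist = basal - apical.
    Torsional shear uses the common small-angle approximation
    phi * r / L (in radians).
    """
    basal = np.asarray(basal_deg, dtype=float)
    apical = np.asarray(apical_deg, dtype=float)
    twist = basal - apical                       # degrees
    peak_twist = twist.max()
    twist_per_length = peak_twist / length_cm    # deg/cm
    shear = np.deg2rad(peak_twist) * radius_cm / length_cm  # radians
    twist_rate = np.gradient(twist, dt_s)        # deg/s
    return {
        "peak_twist_deg": peak_twist,
        "twist_per_cm": twist_per_length,
        "shear_angle_rad": shear,
        "peak_twist_rate": twist_rate.max(),
    }

# Illustrative half-sine rotation curves over systole (hypothetical values)
t = np.linspace(0, 0.3, 31)             # 300 ms of systole, 10 ms frames
basal = 3.4 * np.sin(np.pi * t / 0.3)   # peaks at +3.4 deg (clockwise)
apical = -3.5 * np.sin(np.pi * t / 0.3) # peaks at -3.5 deg (anti-clockwise)
m = twist_metrics(basal, apical, length_cm=7.0, radius_cm=3.0, dt_s=0.01)
print(m)  # peak twist 6.9 deg, ~0.99 deg/cm
```

In practice the rotation curves would come from feature-tracking of basal and apical short-axis cines, and the untwist rate would be the most negative value of the twist-rate curve in early diastole.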
Discussion
In this study, we have shown that in patients with NICM, MWF is associated with a selective impairment of circumferential LV myocardial strain. In addition, MWF is associated with impaired apical rotation and a reduction in rotation rate, from base to apex.
MWF is also associated with impaired diastolic function, reflected in reduced diastolic strain rates in all directions and in reduced untwist rates, from base to apex. Together, these findings are consistent with the notion that, by affecting predominantly circumferential myocardial fibers, MWF leads to disturbances in myocardial contraction and diastolic function. The result is a 'stiff' LV, which is less able to twist in response to an applied torque and more likely to move as a solid body. These disturbances may be related to the known associations of MWF with reduced pump function, heart failure hospitalizations and a poor response to medical and device therapy [4–11].

Systole
During ejection, circumferential fibers shorten simultaneously with the oblique fibers in the right- and left-handed helices to thicken the myocardium and empty the heart. We found that MWF was associated with a selective reduction in circumferential strain, suggesting that MWF preferentially affects mid-myocardial, circumferential fibres. As noted by Buckberg [13], circumferential fibers provide a horizontal counterforce, or 'buttress', to the simultaneously contracting oblique fibres. Impaired circumferential contraction would be expected to lead to impaired rotation, as we found in patients with MWF. Our finding of more frequent rigid LV body rotation supports the notion that MWF renders the LV less capable of twisting and more liable to move as a rigid body. We have previously shown that patients with NICM and MWF treated with CRT are more likely to suffer pump failure than patients without MWF [10]. On the other hand, Lamia et al. found that CRT improved torsion, stroke volume and stroke work in an animal model [20]. Using 3-dimensional speckle-tracking echocardiography, others found that in patients with NICM, CRT led to an improvement in LV torsion [21]. If torsion is indeed influenced by CRT, the higher risk of pump failure observed in patients with MWF undergoing CRT may be due to a permanent inability of the LV to twist and untwist. This hypothesis requires further exploration.

Diastole
In diastole, release of energy stored in systole (recoil) causes rapid untwisting and a mitral-to-apical negative pressure gradient [22] that 'sucks' blood from the left atrium into the LV [23]. Untwisting occurs mainly during the isovolumic relaxation period and is followed by diastolic filling.
Several studies [24–26] have shown that whilst cavity volume is fixed during isovolumic relaxation, there is a rapid recoil of about 40 % of the torsion effected during systole. We found that MWF leads both to a multi-directional impairment in diastolic strain rate and to an impairment of the apical untwist rate. This is likely to account for the higher LV filling pressures observed on echocardiography in patients with NICM and MWF [27]. Conceivably, impaired apical untwisting leads to impaired LV suction and to increased LV filling pressures.

Limitations
The LGE-CMR technique described herein detects only replacement fibrosis. The more recent technique of T1 mapping, which detects interstitial fibrosis, was not undertaken, so we cannot comment on whether our findings are also influenced by the latter. In addition, we did not routinely undertake myocardial biopsy, nor did we quantify myocardial oedema. We therefore cannot exclude the possibility that our findings were influenced by active myocarditis, despite the absence of evidence from clinical and laboratory screening. We should also add that different manufacturers use varying methodologies for the calculation of the mechanical variables described; our findings are therefore not generalizable to other FT-CMR methodologies. Publication of the FT-CMR algorithms used by different manufacturers would be welcome.
Conclusions
We have shown that in patients with NICM, MWF is associated with profound disturbances in LV global circumferential strain, strain rate, LV twist and torsion, in both systole and diastole. In addition, MWF is associated with rigid LV body rotation. These findings provide a mechanistic link between MWF and a poor clinical outcome in patients with NICM, despite pharmacologic and device therapy.
Background: Left ventricular (LV) mid-wall fibrosis (MWF), which occurs in about a quarter of patients with non-ischemic cardiomyopathy (NICM), is associated with a high risk of pump failure. The mid LV wall is the site of circumferential myocardial fibers. We sought to determine the effect of MWF on LV myocardial mechanics. Methods: Patients with NICM (n = 116; age: 62.8 ± 13.2 years; 67 % male) underwent late gadolinium enhancement cardiovascular magnetic resonance (CMR) and were categorized according to the presence (+) or absence (−) of MWF. Feature-tracking (FT) CMR was used to assess myocardial deformation. Results: Despite a similar LVEF (24.3 % vs. 27.5 %, p = 0.20), patients with MWF (32 [24 %]) had lower global circumferential strain (Ɛcc: −6.6 % vs. −9.4 %, p = 0.004), but similar longitudinal (Ɛll: −7.6 % vs. −9.4 %, p = 0.053) and radial (Ɛrr: 14.6 % vs. 17.8 %, p = 0.18) strain. Compared with −MWF, +MWF was associated with reduced LV systolic circumferential strain rate (−0.38 ± 0.1 vs. −0.56 ± 0.3 s−1, p = 0.005) and peak LV twist (4.65° vs. 6.31°, p = 0.004), as well as rigid LV body rotation (64 % vs. 28 %, p < 0.001). In addition, +MWF was associated with reduced LV diastolic strain rates (DSRcc: 0.34 vs. 0.46 s−1; DSRll: 0.38 vs. 0.50 s−1; DSRrr: −0.55 vs. −0.75 s−1; all p < 0.05). Conclusions: MWF is associated with reduced LV global circumferential strain, strain rate and torsion. In addition, MWF is associated with rigid LV body rotation and reduced diastolic strain rates. These systolic and diastolic disturbances may be related to the increased risk of pump failure observed in patients with NICM and MWF.
Background: Non-ischemic cardiomyopathy (NICM) is a common cause of heart failure [1]. The NICM phenotype ranges from patients who remain largely asymptomatic to those who succumb to multiple hospitalizations and premature death. In a study of 603 patients with idiopathic dilated cardiomyopathy followed up over 9 years, Castelli et al. found that 45 % died or underwent cardiac transplantation [2]. Left ventricular mid-wall fibrosis (MWF) was first described as an autopsy finding in 1991 [3]. Clinical studies using late gadolinium enhancement cardiovascular magnetic resonance (LGE-CMR) have subsequently shown that in patients with NICM, MWF is associated with an increased risk of heart failure hospitalizations, ventricular arrhythmias and cardiac death [4–8]. Patients with NICM and MWF are also less responsive to pharmacologic therapy [9] and cardiac resynchronization therapy [10]. Whilst the evidence linking MWF to poor patient outcomes is compelling [4–11], the mechanism remains unexplored. The left ventricle (LV) twists in systole and untwists, or recoils, in diastole. In systole, the LV base rotates clockwise and the apex rotates counter-clockwise. This wringing motion is effected by the helical arrangement of myocardial fibers, which run in a left-handed direction in the subepicardium and in a right-handed direction in the subendocardium. Contraction of subepicardial myocardial fibers causes the base to rotate clockwise and the apex to rotate counter-clockwise [12]. Because the radius of rotation of the subepicardium is greater than that of the subendocardium, the former provides the greater torque. Consequently, the LV gets smaller in systole and LV ejection occurs [12]. Circumferential fibers, which run in the mid-myocardium, are crucial to this process. During ejection, they shorten simultaneously with the oblique fibers in the right- and left-handed helices. In effect, circumferential fibers provide a horizontal counterforce throughout ejection [13].
We hypothesized that injury to mid-myocardial, circumferential myocardial fibers [14], as might be expected from MWF, leads to impairment of LV circumferential contraction and relaxation and, therefore, to disturbances in LV twist and torsion. In this study, we used feature-tracking CMR (FT-CMR) [15] to explore the mechanical effects of MWF in patients with NICM.
[CONTENT] Heart failure | Non ischemic dilated cardiomyopathy | Mid-wall fibrosis | Feature-tracking | Cardiovascular magnetic resonance | Torsion | Myocardial deformation [SUMMARY]
null
[CONTENT] Aged | Biomechanical Phenomena | Cardiomyopathies | Contrast Media | Diastole | England | Female | Fibrosis | Gadolinium DTPA | Heart Failure | Heart Ventricles | Humans | Magnetic Resonance Imaging, Cine | Male | Middle Aged | Myocardium | Predictive Value of Tests | Stress, Mechanical | Systole | Torsion, Mechanical | Ventricular Function, Left [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] mwf | rotation | lv | patients | strain | vs | apex | torsion | base | rate [SUMMARY]
null
[CONTENT] fibers | left | myocardial fibers | mwf | lv | handed | ejection | systole | myocardial | nicm [SUMMARY]
null
[CONTENT] vs | mwf | lvef | strain | ɛcc | rate | rotation | patients mwf | patients | 001 [SUMMARY]
[CONTENT] mwf associated | mwf | lv | associated | patients nicm | nicm | strain | patients nicm despite | link mwf | link [SUMMARY]
[CONTENT] mwf | rotation | vs | lv | patients | strain | rate | apex | patients mwf | cmr [SUMMARY]
[CONTENT] about a quarter ||| ||| [SUMMARY]
null
[CONTENT] 24.3 | 27.5% | 0.20 | MWF | 32 | 24% | Ɛcc | 0.004 | -9.4 % | 0.053 | 14.6% | 17.8% | 0.18 ||| MWF | 0.1 | 0.3 | 0.005 | 4.65 | 6.31 | 0.004 | 64 % | 28 % | P <0.001 ||| 0.34 | 0.46 | DSRll | 0.38 | DSRrr [SUMMARY]
[CONTENT] ||| MWF ||| NICM | MWF [SUMMARY]
[CONTENT] about a quarter ||| ||| ||| NICM | 116 | 62.8 ± | 13.2 years | 67% | CMR | MWF ||| CMR ||| 24.3 | 27.5% | 0.20 | MWF | 32 | 24% | Ɛcc | 0.004 | -9.4 % | 0.053 | 14.6% | 17.8% | 0.18 ||| MWF | 0.1 | 0.3 | 0.005 | 4.65 | 6.31 | 0.004 | 64 % | 28 % | P <0.001 ||| 0.34 | 0.46 | DSRll | 0.38 | DSRrr ||| ||| MWF ||| NICM | MWF [SUMMARY]
Radiation therapy for epithelial ovarian cancer brain metastases: clinical outcomes and predictors of survival.
23414446
Brain metastases (BM) and leptomeningeal disease (LMD) are uncommon in epithelial ovarian cancer (EOC). We investigate the outcomes of modern radiation therapy (RT) as a primary treatment modality in patients with EOC BM and LMD.
BACKGROUND
We evaluated 60 patients with EOC treated at our institution from 1996 to 2010 who developed BM. All information was obtained from chart review.
METHODS
At EOC diagnosis, median age was 56.1 years and 88% of patients were stage III-IV. At time of BM diagnosis, 46.7% of patients had 1 BM, 16.7% had two to three, 26.7% had four or more, and 10% had LMD. Median follow-up after BM was 9.3 months (range, 0.3-82.3). All patients received RT, and 37% had surgical resection. LMD occurred in the primary or recurrent setting in 12 patients (20%), 9 of whom received RT. Median overall survival (OS) after BM was 9.7 months for all patients (95% CI 5.9-13.5), and 16.1 months (95% CI 3.8-28.3) in patients with one BM. On multivariate analysis, Karnofsky performance status less than 70 (hazard ratio [HR] 2.86, p = 0.018), four or more BM (HR 3.18, p = 0.05), LMD (HR 8.22, p = 0.013), and uncontrolled primary tumor (HR 2.84, p = 0.008) were significantly associated with inferior OS. Use of surgery was not significant (p = 0.31). Median central nervous system freedom from progression (CNS-FFP) in 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.9). Only four or more BM (HR 2.56, p = 0.04) was significantly associated with poorer CNS-FFP.
RESULTS
Based on our results, RT appears to be an effective treatment modality for brain metastases from EOC and should be routinely offered. Karnofsky performance status less than 70, four or more BM, LMD, and uncontrolled primary tumor predict for worse survival after RT for EOC BM. Whether RT is superior to surgery or chemotherapy for EOC BM remains to be seen in a larger cohort.
CONCLUSIONS
[ "Adult", "Aged", "Brain Neoplasms", "Carcinoma, Ovarian Epithelial", "Disease-Free Survival", "Female", "Humans", "Karnofsky Performance Status", "Middle Aged", "Neoplasms, Glandular and Epithelial", "Ovarian Neoplasms", "Risk Factors", "Treatment Outcome" ]
3608316
Background
Epithelial ovarian cancer (EOC) accounts for 3% of cancers among women, but is the fifth leading cause of cancer death in women and the leading cause of gynecologic cancer death [1]. The predominant form of relapse after primary surgery and chemotherapy for EOC is in the abdomen and pelvis [2]. Central nervous system (CNS) and brain metastases (BM) in these patients are a rare occurrence, with reported incidence of 0.29-11.6% [3-9], but may be increasing in incidence as extracranial disease is better controlled with improved surgical and chemotherapeutic options [9-11]. The therapeutic approach to patients with BM from EOC is challenging due to the small numbers of cases and short follow-up periods available in other series [3-10,12]. No studies investigate the impact of modern radiation therapy (RT) as a primary treatment modality in these patients. Treatments vary widely, including best supportive care, chemotherapy, steroids, whole brain radiation therapy (WBRT), surgical resection, and stereotactic radiosurgery (SRS). Median survival in existing studies of EOC with BM has generally been poor, on the order of several months, but some studies report survival as high as 18–33 months in selected patients treated with multimodality therapy that combines surgery, radiation therapy, and systemic chemotherapy [3,5,13,14]. Prior studies of BM in advanced malignancies and in small series of EOC have found performance status, age, primary tumor control, extracranial metastases, and treatment modality for BM to be predictors of survival after BM [4,11,12,15,16]. Leptomeningeal disease (LMD) is regarded as a factor for poor prognosis in other metastatic cancers [17-19] and has been observed with increasing incidence in advanced malignancies, especially in the era of magnetic resonance imaging (MRI) [18]. However, LMD is still rarely reported in EOC. 
We reviewed our institution’s experience using modern RT, with or without craniotomy, to treat patients with BM and LMD from EOC. We also identify predictors of survival after RT in this patient population.
Methods
This study was approved by the Institutional Review Board, which also approved a waiver of informed consent. Our institution’s gynecology database was searched for patients with EOC who developed BM and received RT. We identified 60 patients diagnosed between October 1996 and April 2010. We performed a retrospective chart review to obtain demographic data, details of initial EOC diagnosis and treatment, Karnofsky Performance Status (KPS), stage, grade, date of BM diagnosis, interval to BM, site and number of BM, treatment type for BM, systemic disease at BM diagnosis, follow-up and response to treatment, date of CNS relapse or recurrence, time interval to relapse or recurrence after initial BM, and date of death or last follow-up. For two patients, the date of death could not be determined; these patients were censored at the date of last follow-up. All BM, including LMD, were diagnosed by imaging, most commonly MRI. Overall survival (OS) was calculated as the time from initial EOC or BM diagnosis to date of death or last follow-up. CNS freedom from progression (CNS-FFP) after BM was calculated from date of BM diagnosis to date of CNS recurrence or last follow-up imaging. Patients without follow-up imaging after BM treatment were excluded from the CNS-FFP analysis. Survival rates were estimated using the Kaplan-Meier method, and survival curves were compared using the log-rank test. Univariate (UVA) and multivariate (MVA) survival analyses were performed using a Cox proportional hazards model for OS and CNS-FFP. Variables with a p value ≤0.05 on UVA were considered for the MVA, and a forward selection procedure was used to build the final model. A p value ≤0.05 was considered significant for all analyses. All statistical analyses were performed with the SPSS software package, version 19 (IBM, Armonk, NY).
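The survival estimates described above were produced in SPSS. As a sketch of the underlying Kaplan-Meier product-limit calculation (a re-implementation for illustration, not the authors' code; the (time, event) pairs below are hypothetical), the estimator multiplies, at each event time, the fraction of at-risk subjects who survive it:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times: follow-up time for each subject (e.g., months)
    events: 1 if the event (e.g., death) occurred, 0 if censored
    Returns a list of (time, survival_probability) at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        n_at_this_time = 0
        # Group all subjects whose follow-up ends at time t (events and censorings)
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            n_at_this_time += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_at_this_time
    return curve


# Hypothetical follow-up in months; 1 = death, 0 = censored
times = [2, 5, 5, 9, 12, 14]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

Median OS, as reported throughout the Results, is then read off as the first time the estimated survival drops to 0.5 or below.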
Results
Clinical characteristics
Patient characteristics at the time of initial BM diagnosis are listed in Table 1. Median age at diagnosis of EOC was 56.1 years (range, 31.2-79.0). Stage distribution [20] at original diagnosis of EOC was 3 patients with stage I (5%), 4 with stage II (6.7%), 40 with stage III (66.7%), and 13 with stage IV (21.7%). Histologic grade at diagnosis was 2 patients with grade 1 (3%), 7 with grade 2 (12%), 49 with grade 3 (82%), and 2 unknown (3%). Tumor histology was distributed as follows: 42 (70%) papillary serous, 8 (13%) endometrioid, 3 (5%) adenocarcinoma not otherwise specified, 2 (3%) mixed carcinoma, and 1 each of mixed adenocarcinoma, clear cell carcinoma, mucinous adenocarcinoma, small cell carcinoma, and cystic ovarian carcinoma.
Table 1: Clinical characteristics at time of initial brain metastases. BM = brain metastasis; Dx = diagnosis; EOC = epithelial ovarian cancer; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease.
Median follow-up from BM diagnosis for all 60 patients was 9.3 months (range, 0.3-82.3). Median follow-up from BM diagnosis for the six patients alive at analysis was 27.1 months (range, 0.7-82.2).
Initial treatment of brain metastases
Treatments for initial BM are shown in Figure 1. RT was the sole treatment in 38 patients. Twenty-two patients were also treated with surgical resection for initial BM: 16 had a single BM, three had 2–3 BM, and three had four or more BM. Of the six patients who underwent craniotomy and had multiple BM, four had only the single most symptomatic lesion removed, and two who presented with two lesions had both completely resected. Median WBRT dose was 3000 cGy (range, 600-4400 cGy). Median SRS dose was 2100 cGy (range, 1400–2200 cGy). Thirty-six patients (60%) had at least one cycle of systemic chemotherapy following RT ± surgery. After brain metastasis, ovarian cancers were treated with highly heterogeneous systemic regimens: we counted 18 unique regimens, and 19 patients received two or more lines of chemotherapy. The most common combination regimens were carboplatin and paclitaxel, followed by gemcitabine with carboplatin or cisplatin. The most common single-agent regimens were paclitaxel, gemcitabine, carboplatin, and liposomal doxorubicin. Patients with a KPS ≥70 were more likely than patients with KPS <70 to receive systemic therapy following local treatment of brain disease (chi-square, p = 0.04). The remaining 40% of patients either had no chemotherapy information available or did not receive chemotherapy; this group had a median OS of 2.4 months after completing RT (range, 0.1-81.7).
Figure 1: Treatments for initial brain metastases.
Recurrences and salvage RT
Six patients were alive at time of analysis at a median follow-up of 27.1 months after their initial BM (range, 0.7-82.2). Twenty-four patients developed recurrent or progressive CNS disease. Median time to recurrence was 7.3 months (range, 0.9-46.3). Thirteen patients received further RT for their recurrence, including three treated with conventional RT to the spine for leptomeningeal recurrence. Seven patients received further systemic chemotherapy following diagnosis of recurrent CNS disease.
Leptomeningeal disease
LMD was diagnosed in 12 (20%) patients, either at time of initial BM (n = 6) or as relapse of CNS disease (n = 6). Median interval from initial BM diagnosis to secondary LMD diagnosis was 7.2 months (range, 2.5-44.5). All but one patient with LMD had a preceding or synchronous BM. Treatments for LMD in the primary setting included five WBRT and one partial brain radiation therapy (PBRT). Treatments for relapsed LMD included WBRT (n = 2), spine RT (n = 2), systemic chemotherapy (n = 1), and best supportive care (n = 1). All patients who received RT and had follow-up imaging had at least a partial response to therapy.
Survival
Median OS from EOC diagnosis was 67.1 months (95% CI, 54.9-69.4). Median OS after BM diagnosis was 9.7 months (95% CI, 5.9-13.5) in all patients (Figure 2a), and 15.6 months in the 47 patients with follow-up (95% CI, 9.8-21.3). Median OS from LMD diagnosis for the 6 patients diagnosed with LMD at the time of initial BM was 3.6 months (95% CI, 0.69-15.8). Median CNS-FFP in the 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.8).
Figure 2: a. Overall survival for all patients using the Kaplan-Meier method. b. Overall survival stratified by Karnofsky performance status. c. Overall survival stratified by primary tumor control status. d. Overall survival stratified by number of brain metastases and leptomeningeal disease.
In the 28 patients with a single BM, median OS after BM was 16.1 months (95% CI, 3.8-28.3). Survival was no different in patients who received surgical resection for a single BM (log-rank, p = 0.32). Seven patients treated with SRS alone had a median OS of 60.2 months (95% CI, 9.7-not reached).
Univariate analysis of OS included the following potential factors, including those validated in the Recursive Partitioning Analysis for brain metastases [15]: age at BM diagnosis (<65 or ≥65), primary tumor pathologic stage and grade, histology (papillary serous, endometrioid, or all others), interval from EOC diagnosis to BM, KPS at BM diagnosis (<70 or ≥70), primary tumor control status, extent of extracranial disease (limited or extensive), location of BM, number of BM, presence of LMD, surgery as part of treatment, and type of RT received (SRS, WBRT, PBRT). The following factors were significantly associated with OS on UVA: KPS less than 70 (vs. ≥70; hazard ratio [HR], 2.78; p = 0.008), four or more BM (vs. 1–3 BM; HR, 3.75; p < 0.001), LMD (vs. none; HR, 5.32; p = 0.001), longer interval between EOC diagnosis and BM diagnosis (HR, 1.01; p = 0.006), uncontrolled primary tumor (HR, 2.87; p = 0.001), extensive extracranial metastases (vs. limited; HR, 1.98; p = 0.036), BM in both the cerebrum and cerebellum (vs. either alone; HR, 2.49; p = 0.003), and use of SRS (vs. WBRT or PBRT; HR, 0.46; p = 0.03). The use of craniotomy had no effect on OS (p = 0.31).
MVA (Table 2) identified KPS less than 70 (HR, 2.86; p = 0.018), four or more BM (vs. 1 or 2–3 BM; HR, 3.18; p = 0.053), LMD (vs. 1–4 BM; HR, 8.22; p = 0.013), and uncontrolled primary tumor (HR, 2.84; p = 0.008) as risk factors significantly associated with inferior OS after RT for BM. Kaplan-Meier survival curves stratified by the factors significant on MVA are shown in Figure 2b-d.
Table 2: Multivariate analysis of overall survival using a Cox proportional hazards model. Event = death. BM = brain metastasis; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease.
On UVA for CNS-FFP, the presence of four or more BM was significantly associated with poorer CNS-FFP (HR, 4.09; p = 0.006) compared with 1–3 BM (Figure 3b); 1 and 2–3 BM were combined because there was no significant difference in CNS-FFP between the two groups. Extensive extracranial disease was also significant (vs. limited; HR, 2.95; p = 0.048). On MVA for CNS-FFP, only four or more BM remained significant (HR, 2.56; p = 0.04).
Figure 3: a. Central nervous system (CNS) freedom from progression using the Kaplan-Meier method. b. CNS freedom from progression stratified by number of brain metastases.
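The hazard ratios above can be given an intuitive scale under a constant-hazard (exponential) survival assumption, which the study itself does not make: under that assumption, a hazard ratio of HR divides the median survival by HR. A minimal sketch, with the pairing of numbers purely illustrative:

```python
import math


def median_survival_exponential(hazard_per_month):
    """Median survival time for an exponential model S(t) = exp(-h * t).

    Solving exp(-h * t) = 0.5 gives t = ln(2) / h.
    """
    return math.log(2) / hazard_per_month


# Hypothetical baseline: median OS of 16.1 months (single-BM group),
# treated here as exponential for illustration only
baseline_hazard = math.log(2) / 16.1
hr = 2.86  # hazard ratio reported for KPS < 70 on MVA
worse_median = median_survival_exponential(baseline_hazard * hr)  # ~5.6 months
```

This is only a rough translation of relative hazard into absolute time; the Cox model used in the study makes no such distributional assumption.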
Conclusions
This study is the largest reported series of patients with EOC BM treated with modern radiation therapy. We show that KPS less than 70, four or more BM, leptomeningeal disease, and uncontrolled primary tumor predict inferior survival after radiation therapy for EOC BM. A single BM was the most common presentation, occurring in 47% of patients, and was associated with longer survival when treated with SRS or surgery plus RT. Advanced-stage EOC was associated with a shorter median interval to BM. Over half of patients with long-term follow-up recurred in the CNS, but such recurrences can be salvaged effectively with RT or surgery. LMD occurred in 20% of our patients and was associated with poor survival. RT can produce a partial or complete response in LMD, although patients are likely to recur or progress.
[ "Background", "Clinical characteristics", "Initial treatment of brain metastases", "Recurrences and salvage RT", "Leptomeningeal disease", "Survival", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information", "" ]
[ "Epithelial ovarian cancer (EOC) accounts for 3% of cancers among women, but is the fifth leading cause of cancer death in women and the leading cause of gynecologic cancer death [1]. The predominant form of relapse after primary surgery and chemotherapy for EOC is in the abdomen and pelvis [2]. Central nervous system (CNS) and brain metastases (BM) in these patients are a rare occurrence, with reported incidence of 0.29-11.6% [3-9], but may be increasing in incidence as extracranial disease is better controlled with improved surgical and chemotherapeutic options [9-11].\nThe therapeutic approach to patients with BM from EOC is challenging due to the small numbers of cases and short follow-up periods available in other series [3-10,12]. No studies investigate the impact of modern radiation therapy (RT) as a primary treatment modality in these patients. Treatments vary widely, including best supportive care, chemotherapy, steroids, whole brain radiation therapy (WBRT), surgical resection, and stereotactic radiosurgery (SRS). Median survival in existing studies of EOC with BM has generally been poor, on the order of several months, but some studies report survival as high as 18–33 months in selected patients treated with multimodality therapy that combines surgery, radiation therapy, and systemic chemotherapy [3,5,13,14]. Prior studies of BM in advanced malignancies and in small series of EOC have found performance status, age, primary tumor control, extracranial metastases, and treatment modality for BM to be predictors of survival after BM [4,11,12,15,16]. Leptomeningeal disease (LMD) is regarded as a factor for poor prognosis in other metastatic cancers [17-19] and has been observed with increasing incidence in advanced malignancies, especially in the era of magnetic resonance imaging (MRI) [18]. 
However, LMD is still rarely reported in EOC.\nWe reviewed our institution’s experience using modern RT, with or without craniotomy, to treat patients with BM and LMD from EOC. We also identify predictors of survival after RT in this patient population.", "Patient characteristics at the time of initial BM diagnosis are listed in Table 1. Median age at diagnosis of EOC was 56.1 years (range, 31.2-79.0). Stage distribution [20] at original diagnosis of EOC was 3 patients with stage I (5%), 4 with stage II (6.7%), 40 with stage III (66.7%), and 13 with stage IV (21.7%). Histologic grade at diagnosis was 2 patients with grade 1 (3%), 7 with grade 2 (12%), 49 with grade 3 (82%), and 2 unknown (3%). Tumor histology was distributed as follows: 42 (70%) papillary serous, 8 (13%) endometrioid, 3 (5%) adenocarcinoma not otherwise specified, 2 (3%) mixed carcinoma, and 1 each of mixed adenocarcinoma, clear cell carcinoma, mucinous adenocarcinoma, small cell carcinoma, and cystic ovarian carcinoma.\nClinical characteristics at time of initial brain metastases\nBM = brain metastasis; Dx = diagnosis; EOC = epithelial ovarian cancer; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease.\nMedian follow-up from BM diagnosis for all 60 patients was 9.3 months (range, 0.3-82.3). Median follow-up from BM diagnosis for the six patients alive at analysis was 27.1 months (range, 0.7-82.2).", "Treatments for initial BM are shown in Figure 1. RT was the sole treatment in 38 patients. Twenty-two patients were also treated with surgical resection for initial BM: 16 had a single BM, three had 2–3 BM, and three had four or more BM. In the six patients who underwent craniotomy and had multiple BM, four patients had only the one most symptomatic lesion removed, and two patients who presented with two lesions had both completely resected. Median WBRT dose was 3000 cGy (range, 600 cGy-4400 cGy). Median SRS dose was 2100 cGy (range, 1400–2200 cGy). 
Thirty-six patients (60%) had at least one cycle of systemic chemotherapy following RT ± surgery. After brain metastasis, ovarian cancers were treated with vastly heterogenous systemic regimens: we counted 18 unique regimens, and 19 patients received two or more lines of chemotherapy. The most common combination regimens were carboplatin and paclitaxel, followed by gemcitabine and carboplatin or cisplatin. The most common single-agent regimens were paclitaxel, gemcitabine, carboplatin, and liposomal doxorubicin. Patients with a KPS ≥70 were more likely than patients with KPS <70 to receive systemic therapy following local treatment of brain disease (Chi square, p = 0.04). The remaining 40% of patients either did not have information on chemotherapy available, or did not receive chemotherapy. This group had a median OS of 2.4 months after completing RT (range, 0.1-81.7).\nTreatments for initial brain metastases.", "Six patients were alive at time of analysis at a median follow-up of 27.1 months after their initial BM (range, 0.7-82.2). Twenty-four patients developed recurrent or progressive CNS disease. Median time to recurrence was 7.3 months (range, 0.9-46.3). Thirteen patients received further RT for their recurrence, including three treated with conventional RT to the spine for leptomeningeal recurrence. Seven patients received further systemic chemotherapy following diagnosis of their recurrent CNS disease.", "LMD was diagnosed in 12 (20%) patients, either at time of initial BM (n = 6) or as relapse of CNS disease (n = 6). Median interval from initial BM diagnosis to secondary LMD diagnosis was 7.2 months (range, 2.5-44.5). All but one patient with LMD had a preceding or synchronous BM. Treatments for LMD in the primary setting included five WBRT and one partial brain radiation therapy (PBRT). Treatments for relapsed LMD included WBRT (n = 2), spine RT (2), systemic chemotherapy (1), and best supportive care (1). 
All patients who received RT and had follow-up imaging had at least a partial response to therapy.", "Median OS from EOC diagnosis was 67.1 months (95% CI, 54.9-69.4). Median OS after BM diagnosis was 9.7 months (95% CI, 5.9-13.5) in all patients (Figure 2a), and 15.6 months in the 47 patients with follow-up (95% CI, 9.8-21.3). Median OS from LMD diagnosis for the 6 patients diagnosed with LMD at the time of initial BM was 3.6 months (95% CI, 0.69-15.8). Median CNS-FFP in the 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.8).\na. Overall survival for all patients using Kaplan-Meier method. b. Overall survival stratified by Karnofsky performance status. c. Overall survival stratified by primary tumor control status. d. Overall survival stratified by number of brain metastases and leptomeningeal disease.\nIn the 28 patients with a single BM, median OS after BM was 16.1 months (95% CI, 3.8-28.3). Survival was no different in patients who received surgical resection for a single BM (log-rank, p = 0.32). Seven patients treated with SRS alone had median OS of 60.2 months (95% CI, 9.7-not reached).\nUnivariate analysis of OS included the following potential factors, including those validated in the Recursive Partitioning Analysis for brain metastases [15]: age at BM diagnosis (<65 or ≥65), primary tumor pathologic stage and grade, histology (papillary serous, endometrioid, or all others), interval from EOC diagnosis to BM, KPS at BM diagnosis (<70 or ≥70), primary tumor control status, extent of extracranial disease (limited or extensive), location of BM, number of BM, presence of LMD, surgery as part of treatment, and type of RT received (SRS, WBRT, PBRT). The following factors were significantly associated with OS on UVA: KPS less than 70 (vs ≥70, hazard ratio [HR] 2.78, p = 0.008), four or more BM (vs. 1–3 BM; HR, 3.75; p < 0.001), LMD (vs. 
none; HR, 5.32; p = 0.001), longer duration between EOC diagnosis and BM diagnosis (HR, 1.01; p = 0.006), uncontrolled primary tumor (HR, 2.87; p = 0.001), extensive extracranial metastases (vs. limited extracranial metastases; HR, 1.98; p = 0.036), BM in both the cerebrum and cerebellum (vs. either alone; HR, 2.49; p = 0.003), and use of SRS (vs. WBRT or PBRT; HR, 0.46; p = 0.03). The use of craniotomy had no effect on OS (p = 0.31).\nMVA (Table 2) identified KPS less than 70 (HR, 2.86; p = 0.018), four or more BM (vs. 1 or 2–3 BM; HR, 3.18; p = 0.053), LMD (vs. 1–4 BM; HR, 8.22; p = 0.013), and uncontrolled primary tumor (HR, 2.84; p = 0.008) as risk factors significantly associated with inferior OS after RT for BM. Kaplan-Meier survival curves stratified by the factors significant on MVA are shown in Figure 2b-d.\nMultivariate analysis of overall survival using Cox proportional hazard model\nEvent = death. BM = brain metastasis; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease.\nOn UVA for CNS-FFP, the presence of four or more BM was significant, with a negative association (HR, 4.09; p = 0.006), when compared with 1–3 BM (Figure 3b). In this analysis, 1 and 2–3 BM were combined since there was no significant difference in CNS-FFP between the two groups. Extensive extracranial disease was also significant (vs. limited; HR, 2.95; p = 0.048). On MVA for CNS-FFP, only four or more BM remained significant (HR, 2.56; p = 0.04).\na. Central nervous system (CNS) freedom from progression using Kaplan-Meier method. b. 
CNS freedom from progression stratified by number of brain metastases.", "BM: Brain metastases; LMD: Leptomeningeal disease; EOC: Epithelial ovarian cancer; RT: Radiation therapy; KPS: Karnofsky performance status; CNS: Central nervous system; WBRT: Whole brain radiation therapy; PBRT: Partial brain radiation therapy; SRS: Stereotactic radiosurgery; OS: Overall survival; PFS: Progression free survival; FFP: Freedom from progression; MVA: Multivariate analysis; UVA: Univariate analysis; HR: Hazard ratio; RPA: Recursive partitioning analysis", "The authors have no financial competing interests and no non-financial competing interests.", "ST, KA, KB made substantial contributions to conception and design and analysis and interpretation of data. ST and VM performed all acquisition of data. All authors (ST, VM, KA, KB, MH, VT, and CA) were involved in drafting the manuscript and revising it critically for important intellectual content. All authors read and approved the final manuscript.", "CA is Chief of the Gynecologic Medical Oncology Service at MSKCC. KA is Member, Memorial Sloan-Kettering Cancer Center (MSKCC) and Professor of Radiation Oncology. VT is Associate Professor of Neurosurgery at MSKCC. MH is Associate Professor of Medicine and part of the Gynecologic Medical Oncology Service. VM is Assistant Professor of Medicine and part of the Gynecologic Medical Oncology Service. KB is Assistant Professor of Radiation Oncology and the principal CNS Radiation Oncologist at MSKCC. ST is chief resident in the Department of Radiation Oncology.", "Data presented in poster form at the American Society for Therapeutic Radiology and Oncology Annual Meeting, October 2–4, 2011, Miami, FL (abstract #2568)." ]
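The survival endpoints reported above (median OS, median CNS-FFP, and the Kaplan-Meier curves of Figures 2 and 3) rest on the Kaplan-Meier product-limit estimator. As a minimal, self-contained sketch, with made-up follow-up times rather than any data from this study, the estimator and the median it implies can be written in plain Python:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times:  follow-up durations (e.g., months from BM diagnosis)
    events: 1 if death was observed at that time, 0 if censored
    Returns a list of (time, survival_probability) steps at each death time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        # group all subjects whose follow-up ends at time t
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            leaving += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            steps.append((t, surv))
        n_at_risk -= leaving
    return steps


def median_survival(steps):
    """First time the curve drops to 0.5 or below; None if never reached."""
    for t, s in steps:
        if s <= 0.5:
            return t
    return None


# Hypothetical follow-up (months) for five patients; the third is censored.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])
```

A curve that never falls to 0.5 yields None, the "median not reached" convention seen in the SRS-alone confidence interval above.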
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Clinical characteristics", "Initial treatment of brain metastases", "Recurrences and salvage RT", "Leptomeningeal disease", "Survival", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information", "" ]
[ "Epithelial ovarian cancer (EOC) accounts for 3% of cancers among women, but is the fifth leading cause of cancer death in women and the leading cause of gynecologic cancer death [1]. The predominant form of relapse after primary surgery and chemotherapy for EOC is in the abdomen and pelvis [2]. Central nervous system (CNS) and brain metastases (BM) in these patients are a rare occurrence, with reported incidence of 0.29-11.6% [3-9], but may be increasing in incidence as extracranial disease is better controlled with improved surgical and chemotherapeutic options [9-11].\nThe therapeutic approach to patients with BM from EOC is challenging due to the small numbers of cases and short follow-up periods available in other series [3-10,12]. No studies investigate the impact of modern radiation therapy (RT) as a primary treatment modality in these patients. Treatments vary widely, including best supportive care, chemotherapy, steroids, whole brain radiation therapy (WBRT), surgical resection, and stereotactic radiosurgery (SRS). Median survival in existing studies of EOC with BM has generally been poor, on the order of several months, but some studies report survival as high as 18–33 months in selected patients treated with multimodality therapy that combines surgery, radiation therapy, and systemic chemotherapy [3,5,13,14]. Prior studies of BM in advanced malignancies and in small series of EOC have found performance status, age, primary tumor control, extracranial metastases, and treatment modality for BM to be predictors of survival after BM [4,11,12,15,16]. Leptomeningeal disease (LMD) is regarded as a factor for poor prognosis in other metastatic cancers [17-19] and has been observed with increasing incidence in advanced malignancies, especially in the era of magnetic resonance imaging (MRI) [18]. 
However, LMD is still rarely reported in EOC.\nWe reviewed our institution’s experience using modern RT, with or without craniotomy, to treat patients with BM and LMD from EOC. We also identify predictors of survival after RT in this patient population.", "This study was approved by the Institutional Review Board, which also approved a waiver of informed consent. Our institution’s gynecology database was searched for patients with EOC who developed BM and received RT. We identified 60 patients who were diagnosed between October 1996 and April 2010. We performed a retrospective chart review to obtain demographic data, details of initial EOC diagnosis and treatment, Karnofsky Performance Status (KPS), stage, grade, date of BM diagnosis, interval to BM, site and number of BM, treatment type for BM, systemic disease at BM diagnosis, follow-up and response to treatment, date of CNS relapse or recurrence, time interval to relapse or recurrence after initial BM, and date of death or last follow-up. For two patients, the date of death could not be determined. These patients were censored at date of last follow-up. All BM, including LMD, were diagnosed by imaging, most commonly MRI.\nOverall survival (OS) was calculated as the time from initial EOC or BM diagnosis to date of death or last follow-up. CNS freedom from progression (CNS-FFP) after BM was calculated from date of BM diagnosis to date of CNS recurrence or last follow-up imaging. Patients without follow-up imaging after BM treatment were excluded from the CNS-FFP analysis. Survival rates were determined using the Kaplan-Meier method, and survival curves were compared using the log-rank test. Univariate (UVA) and multivariate (MVA) survival analyses were performed using a Cox proportional hazards model on OS and CNS-FFP. Variables with p value ≤0.05 by UVA were considered for the MVA, and a forward selection procedure was used to build the final model. A p value ≤0.05 was considered significant for all analyses. 
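The curve comparison described above uses the two-sample log-rank test. The following is a toy illustration of the log-rank chi-square statistic (1 degree of freedom) in plain Python; the group labels and follow-up times are hypothetical, and this is not the authors' SPSS analysis:

```python
def logrank_statistic(times_a, events_a, times_b, events_b):
    """Two-sample log-rank chi-square statistic (1 degree of freedom).

    times_*:  follow-up durations per group
    events_*: 1 = event (death) observed, 0 = censored
    """
    pooled = ([(t, e, 0) for t, e in zip(times_a, events_a)]
              + [(t, e, 1) for t, e in zip(times_b, events_b)])
    death_times = sorted({t for t, e, _ in pooled if e == 1})
    observed_minus_expected = 0.0
    variance = 0.0
    for t in death_times:
        # subjects still at risk just before t (censored at t count as at risk)
        n_a = sum(1 for tt, _, g in pooled if tt >= t and g == 0)
        n_b = sum(1 for tt, _, g in pooled if tt >= t and g == 1)
        d_a = sum(1 for tt, e, g in pooled if tt == t and e == 1 and g == 0)
        d_b = sum(1 for tt, e, g in pooled if tt == t and e == 1 and g == 1)
        n, d = n_a + n_b, d_a + d_b
        if n < 2:
            continue
        expected_a = d * n_a / n  # deaths expected in group A under the null
        observed_minus_expected += d_a - expected_a
        variance += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return observed_minus_expected ** 2 / variance if variance > 0 else 0.0
```

Referring the statistic to a chi-square distribution with 1 degree of freedom gives the log-rank p-value; identical groups yield a statistic of 0.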
All statistical analysis was accomplished with the SPSS software package, version 19 (IBM, Armonk, NY).", " Clinical characteristics Patient characteristics at the time of initial BM diagnosis are listed in Table 1. Median age at diagnosis of EOC was 56.1 years (range, 31.2-79.0). Stage distribution [20] at original diagnosis of EOC was 3 patients with stage I (5%), 4 with stage II (6.7%), 40 with stage III (66.7%), and 13 with stage IV (21.7%). Histologic grade at diagnosis was 2 patients with grade 1 (3%), 7 with grade 2 (12%), 49 with grade 3 (82%), and 2 unknown (3%). Tumor histology was distributed as follows: 42 (70%) papillary serous, 8 (13%) endometrioid, 3 (5%) adenocarcinoma not otherwise specified, 2 (3%) mixed carcinoma, and 1 each of mixed adenocarcinoma, clear cell carcinoma, mucinous adenocarcinoma, small cell carcinoma, and cystic ovarian carcinoma.\nClinical characteristics at time of initial brain metastases\nBM = brain metastasis; Dx = diagnosis; EOC = epithelial ovarian cancer; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease.\nMedian follow-up from BM diagnosis for all 60 patients was 9.3 months (range, 0.3-82.3). Median follow-up from BM diagnosis for the six patients alive at analysis was 27.1 months (range, 0.7-82.2).
\n Initial treatment of brain metastases Treatments for initial BM are shown in Figure 1. RT was the sole treatment in 38 patients. Twenty-two patients were also treated with surgical resection for initial BM: 16 had a single BM, three had 2–3 BM, and three had four or more BM. In the six patients who underwent craniotomy and had multiple BM, four patients had only the one most symptomatic lesion removed, and two patients who presented with two lesions had both completely resected. Median WBRT dose was 3000 cGy (range, 600 cGy-4400 cGy). Median SRS dose was 2100 cGy (range, 1400–2200 cGy). Thirty-six patients (60%) had at least one cycle of systemic chemotherapy following RT ± surgery. After brain metastasis, ovarian cancers were treated with vastly heterogeneous systemic regimens: we counted 18 unique regimens, and 19 patients received two or more lines of chemotherapy. The most common combination regimens were carboplatin and paclitaxel, followed by gemcitabine and carboplatin or cisplatin. The most common single-agent regimens were paclitaxel, gemcitabine, carboplatin, and liposomal doxorubicin. Patients with a KPS ≥70 were more likely than patients with KPS <70 to receive systemic therapy following local treatment of brain disease (chi-square, p = 0.04). The remaining 40% of patients either did not have information on chemotherapy available or did not receive chemotherapy. This group had a median OS of 2.4 months after completing RT (range, 0.1-81.7).\nTreatments for initial brain metastases.
\n Recurrences and salvage RT Six patients were alive at time of analysis at a median follow-up of 27.1 months after their initial BM (range, 0.7-82.2). Twenty-four patients developed recurrent or progressive CNS disease. Median time to recurrence was 7.3 months (range, 0.9-46.3). Thirteen patients received further RT for their recurrence, including three treated with conventional RT to the spine for leptomeningeal recurrence. Seven patients received further systemic chemotherapy following diagnosis of their recurrent CNS disease.
\n Leptomeningeal disease LMD was diagnosed in 12 (20%) patients, either at time of initial BM (n = 6) or as relapse of CNS disease (n = 6). Median interval from initial BM diagnosis to secondary LMD diagnosis was 7.2 months (range, 2.5-44.5). All but one patient with LMD had a preceding or synchronous BM. Treatments for LMD in the primary setting included WBRT in five patients and partial brain radiation therapy (PBRT) in one. Treatments for relapsed LMD included WBRT (n = 2), spine RT (n = 2), systemic chemotherapy (n = 1), and best supportive care (n = 1). All patients who received RT and had follow-up imaging had at least a partial response to therapy.
\n Survival Median OS from EOC diagnosis was 67.1 months (95% CI, 54.9-69.4). Median OS after BM diagnosis was 9.7 months (95% CI, 5.9-13.5) in all patients (Figure 2a), and 15.6 months in the 47 patients with follow-up (95% CI, 9.8-21.3). Median OS from LMD diagnosis for the 6 patients diagnosed with LMD at the time of initial BM was 3.6 months (95% CI, 0.69-15.8). Median CNS-FFP in the 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.8).\na. Overall survival for all patients using Kaplan-Meier method. b. Overall survival stratified by Karnofsky performance status. c. Overall survival stratified by primary tumor control status. d. Overall survival stratified by number of brain metastases and leptomeningeal disease.\nIn the 28 patients with a single BM, median OS after BM was 16.1 months (95% CI, 3.8-28.3). Survival was no different in patients who received surgical resection for a single BM (log-rank, p = 0.32). Seven patients treated with SRS alone had median OS of 60.2 months (95% CI, 9.7-not reached).\nUnivariate analysis of OS included the following potential factors, including those validated in the Recursive Partitioning Analysis for brain metastases [15]: age at BM diagnosis (<65 or ≥65), primary tumor pathologic stage and grade, histology (papillary serous, endometrioid, or all others), interval from EOC diagnosis to BM, KPS at BM diagnosis (<70 or ≥70), primary tumor control status, extent of extracranial disease (limited or extensive), location of BM, number of BM, presence of LMD, surgery as part of treatment, and type of RT received (SRS, WBRT, PBRT). The following factors were significantly associated with OS on UVA: KPS less than 70 (vs. ≥70; hazard ratio [HR], 2.78; p = 0.008), four or more BM (vs. 1–3 BM; HR, 3.75; p < 0.001), LMD (vs. none; HR, 5.32; p = 0.001), longer duration between EOC diagnosis and BM diagnosis (HR, 1.01; p = 0.006), uncontrolled primary tumor (HR, 2.87; p = 0.001), extensive extracranial metastases (vs. limited extracranial metastases; HR, 1.98; p = 0.036), BM in both the cerebrum and cerebellum (vs. either alone; HR, 2.49; p = 0.003), and use of SRS (vs. WBRT or PBRT; HR, 0.46; p = 0.03). The use of craniotomy had no effect on OS (p = 0.31).\nMVA (Table 2) identified KPS less than 70 (HR, 2.86; p = 0.018), four or more BM (vs. 1 or 2–3 BM; HR, 3.18; p = 0.053), LMD (vs. 1–4 BM; HR, 8.22; p = 0.013), and uncontrolled primary tumor (HR, 2.84; p = 0.008) as risk factors significantly associated with inferior OS after RT for BM. Kaplan-Meier survival curves stratified by the factors significant on MVA are shown in Figure 2b-d.\nMultivariate analysis of overall survival using Cox proportional hazard model\nEvent = death. BM = brain metastasis; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease.\nOn UVA for CNS-FFP, the presence of four or more BM was significant, with a negative association (HR, 4.09; p = 0.006), when compared with 1–3 BM (Figure 3b). In this analysis, 1 and 2–3 BM were combined since there was no significant difference in CNS-FFP between the two groups. Extensive extracranial disease was also significant (vs. limited; HR, 2.95; p = 0.048). On MVA for CNS-FFP, only four or more BM remained significant (HR, 2.56; p = 0.04).\na. Central nervous system (CNS) freedom from progression using Kaplan-Meier method. b. CNS freedom from progression stratified by number of brain metastases.", "Our series represents the largest in the published literature of RT for treatment of BM in EOC, with a total of 60 patients analyzed. We show that these patients can be effectively treated with RT, with or without resection of tumor, and that survival in these patients depends on KPS, number of BM, presence of LMD, and presence of uncontrolled primary tumor. We found no statistically significant effects of age, tumor histology, grade, initial disease stage, RT type, or use of surgical resection of metastases on OS after diagnosis of BM.\nWhile CNS metastases are rare in EOC, they appear to be increasing in incidence as extracranial disease is better controlled with modern chemotherapy. In our series, patients with more advanced stage and grade comprise the majority of patients who develop BM. The most common presentation was a single BM in 47% of patients. Patients with single BM may be treated with surgery, SRS, PBRT, and/or WBRT. In our series, seven patients treated with SRS alone had a median OS of 60.2 months. 
While this patient subset is small, the long survival is noteworthy in a metastatic population, and likely indicates that these patients were highly selected with good control of extracranial disease and high KPS. Another smaller series has found longer survival in well-selected patients treated with SRS versus WBRT [21]. Patients treated with only WBRT tend to have shorter median survival times [22], and this may be reflective of overall poorer KPS, stage, and number of BM.\nThe use of surgery and RT has been associated with longer survival after BM in multiple smaller studies [3-5,8,13,23-25]; however, our study found no significant benefit to the use of surgery in 22 patients who received postoperative RT. Pothuri et al. [13] published a series from our institution of 14 patients treated with craniotomy and postoperative RT, and found median survival of 18 months and high rates of local control. Based on our current study and those preceding, we recommend that patients with a single BM, good KPS and limited extracranial disease be considered for SRS or both craniotomy and RT as clinically indicated. Ideally, single-modality SRS will be tested in more patients with EOC and single BM to come to a better understanding of the adequacy of SRS alone. In patients with multiple BM, some data suggest that resection of multiple metastases may improve outcomes [26,27]; we could not verify this finding in our population, as only two patients fell into this category.\nOur results add to the findings of the Radiation Therapy Oncology Group (RTOG) Recursive Partitioning Analysis (RPA) system [15] for BM in a variety of cancers, which identified KPS ≥70, age <65 years, controlled primary carcinoma, and no extracranial systemic metastases as being predictive of the longest survival after BM. 
We suspect that age was not significant in our patients because the majority of our patients fell under the 65-year-old cutoff used by the RPA.\nA large but older series of EOC BM includes 72 patients from MD Anderson Cancer Center [4] treated heterogeneously since 1985, and none with upfront SRS or PBRT. In that series, 8 (11%) patients received steroids alone, 35 (51%) WBRT alone, 8 (11%) surgery alone, and only 12 (17%) had both surgery and WBRT. Twenty-five patients had a single BM, and 47 had multiple. The study does not provide information on the use of salvage therapies for CNS recurrence. Their patients had inferior outcomes to those in our study; median OS was just 6.9 months in patients with a single BM. On their univariate analysis of potential prognostic factors, they did not include KPS or primary tumor control, both of which are vitally important risk factors for outcomes after BM, and are important for determining treatment approach. Although comparing outcomes between retrospective studies is challenging, several possibilities may explain why our results are superior to this series. First, almost half of our population had a single BM, while two-thirds of the patients in the MD Anderson series had multiple metastases. Second, none of our patients were treated with steroids alone, a group that had a significantly worse hazard ratio of death and accounts for 11% of their population. Third, 37% of our patients received multimodality therapy that included surgery and RT, while only 17% of patients in their study were treated in a similar fashion and represented the group with the longest median survival. Surgical resection of a single BM has been shown to improve survival in a randomized clinical trial [28]; in this MD Anderson series, it does not appear that all patients with single BM received surgery, and none received SRS. 
Fourth, almost two-thirds of our patients were treated with chemotherapy following treatment of their BM, which may contribute to better outcomes (although we were unable to include it in our Cox regression analysis due to its vast heterogeneity). Fifth, it is not clear that patients with relapse received salvage therapy, while most of our patients received RT or chemotherapy at CNS relapse. Lastly, patients treated at our institution are followed closely and systematically for recurrence, and are treated with early salvage therapy at time of relapse; it is unclear what the standard follow-up entailed in other studies.\nChen et al. [14] used the RTOG RPA classification system in a population of 19 patients with EOC BM treated with various approaches, and found that surgery was associated with longer OS (33.7 months vs 7.4 months, p = 0.006). However, only 9 patients underwent surgery, and 8 of these also received adjuvant radiotherapy to the brain, making the patient population too small to draw definitive conclusions regarding the adequacy of surgery alone for brain metastases in this population. Their UVA found primary tumor control (p = 0.006) and number of BM (p = 0.005) to be associated with OS, but the number of events was too small to perform a true multivariate analysis. In our study, we also find that primary tumor control and number of BM are important. Our patient population differs in that we have three times as many patients, and all are treated with RT. In addition, our larger patient population likely includes patients that are less highly selected than in the Chen study. In fact, Chen et al. 
concede in their discussion section that the relatively long overall survival of their study (median, 16.3 months) may be attributed to patient factors that are pertinent to outcome but not accounted for in the study, such as systemic therapy and institutional preference for aggressive, multimodality therapy.\nA recent multi-institutional retrospective study of 139 patients with brain metastases from various gynecologic malignancies identified a group of 56 patients with ovarian, fallopian tube, or peritoneal histology [11] (it is unclear how many of these patients had a true epithelial ovarian carcinoma). While comparing this “ovarian” subgroup to our patients is not entirely possible, given the heterogeneity of their subgroup, the study has several noteworthy findings. First, in the “ovarian” group, 80% received RT, about half of which also received surgery and/or chemotherapy for the BM. Second, median survival for the 56 patients was 12.5 months. Third, on a multivariate analysis across all gynecologic types, they found ovarian/tubal/peritoneal disease origin was associated with improved survival and recommended these patients be treated more aggressively when a BM is diagnosed.\nIn our series, 10 (20%) patients with BM also had LMD, and 9/10 (90%) had a synchronous or prior BM. LMD is an uncommon occurrence in advanced malignancy, estimated to occur in 4-15% of solid tumors [19]. Previous series have reported an incidence of synchronous or preexisting CNS metastases in 28-75% of patients with LMD [17]. Clarke et al. [18] reviewed a large series of LMD from our institution and reported that 70% of patients with LMD from solid tumors had previous or current brain disease. In that study, 59% of patients were treated with RT, 15% received supportive therapy only, and the remainder had some form of chemotherapy. Median OS after LMD was 2.3 months for patients with solid tumors; there were 2 cases of ovarian carcinoma. 
Another large series of 155 patients with LMD treated from 1980 to 2002 found a median OS of 2.8 months in non-breast solid tumors [29]. Only one patient had ovarian cancer.\nWe found that median OS after LMD when present at initial BM diagnosis (n = 6) was 3.55 months (95% CI, 0.69-15.8). The patients in our series who developed LMD after a preceding BM diagnosis were not disproportionately represented in any one primary treatment modality, surgery or RT, prior to developing LMD. Our patient population is different from these other large studies for several reasons. First, LMD was primarily treated with RT in our study, in contrast to other reports that frequently use intrathecal or systemic chemotherapy as the principal treatment for LMD [30,31]. In one series of 31 patients treated with intrathecal chemotherapy, the response rate was 52% [32]. In contrast, our LMD patients treated with RT who had follow-up imaging (n = 6) demonstrated a 100% overall response rate (both complete and partial response) as seen on their first follow-up MRI. Second, all patients had LMD diagnosed by MRI, with or without CSF analysis. Lastly, our study represents a homogeneous population of one disease, as opposed to the existing series that include one or two ovarian cancers mixed in with various other solid tumor types. Based on the favorable responses to RT and slightly longer survival in our series, we support administering RT for EOC LMD. EOC is known to be a biologically radiosensitive disease, and our results in the setting of LMD appear to confirm this fact.\nThere are some limitations of our study. First, we performed a retrospective analysis, which has its own well-known inherent drawbacks and biases, many of which have been outlined above. Second, not all patients in our study had pathologic confirmation of BM from EOC, which could theoretically have led us to include patients with primary CNS tumors or benign disease in this cohort.
Third, because it was difficult to determine cause of death for many patients in this retrospective analysis, we could not determine BM-specific survival, which would be a relevant measure of outcome in this population. Lastly, while we were able to determine which patients received any systemic chemotherapy before and after RT for BM, we were unable to routinely quantify the number of cycles of chemotherapy they received and the specific agents with which they were treated. Prospectively collected data on chemotherapy is required to draw strong conclusions about the efficacy of systemic therapy in this patient population.", "This study is the largest reported series of patients with EOC BM treated with modern radiation therapy. We show that KPS less than 70, four or more BM, leptomeningeal disease, and uncontrolled primary tumor predict for inferior survival after radiation therapy for EOC BM. A single BM is the most common presentation, occurring in 47% of patients, and is associated with longer survival when treated with SRS or surgery + RT. Advanced-stage EOC is associated with shorter median interval to BM. Over half of patients with long-term follow-up will recur in the CNS, but can be salvaged effectively with RT or surgery. LMD is associated with poor survival and occurs in 20% of our patients. 
RT can result in partial or complete response of LMD, although patients will likely recur or progress.", "BM: Brain metastases; LMD: Leptomeningeal disease; EOC: Epithelial ovarian cancer; RT: Radiation therapy; KPS: Karnofsky performance status; CNS: Central nervous system; WBRT: Whole brain radiation therapy; PBRT: Partial brain radiation therapy; SRS: Stereotactic radiosurgery; OS: Overall survival; PFS: Progression free survival; FFP: Freedom from progression; MVA: Multivariate analysis; UVA: Univariate analysis; HR: Hazard ratio; RPA: Recursive partitioning analysis", "The authors have no financial competing interests and no non-financial competing interests.", "ST, KA, KB made substantial contributions to conception and design and analysis and interpretation of data. ST and VM performed all acquisition of data. All authors (ST, VM, KA, KB, MH, VT, and CA) were involved in drafting the manuscript and revising it critically for important intellectual content. All authors read and approved the final manuscript.", "CA is Chief of the Gynecologic Medical Oncology Service at MSKCC. KA is Member, Memorial Sloan-Kettering Cancer Center (MSKCC) and Professor of Radiation Oncology. VT is Associate Professor of Neurosurgery at MSKCC. MH is Associate Professor of Medicine and part of the Gynecologic Medical Oncology Service. VM is Assistant Professor of Medicine and part of the Gynecologic Medical Oncology Service. KB is Assistant Professor of Radiation Oncology and the principal CNS Radiation Oncologist at MSKCC. ST is chief resident in the Department of Radiation Oncology.", "Data presented in poster form at the American Society for Therapeutic Radiology and Oncology Annual Meeting, October 2–4, 2011, Miami, FL (abstract #2568)." ]
[ null, "methods", "results", null, null, null, null, null, "discussion", "conclusions", null, null, null, null, null ]
[ "Ovarian cancer", "Brain metastases", "Leptomeningeal disease", "Palliation" ]
Background: Epithelial ovarian cancer (EOC) accounts for 3% of cancers among women, but is the fifth leading cause of cancer death in women and the leading cause of gynecologic cancer death [1]. The predominant form of relapse after primary surgery and chemotherapy for EOC is in the abdomen and pelvis [2]. Central nervous system (CNS) and brain metastases (BM) in these patients are a rare occurrence, with reported incidence of 0.29-11.6% [3-9], but may be increasing in incidence as extracranial disease is better controlled with improved surgical and chemotherapeutic options [9-11]. The therapeutic approach to patients with BM from EOC is challenging due to the small numbers of cases and short follow-up periods available in other series [3-10,12]. No studies investigate the impact of modern radiation therapy (RT) as a primary treatment modality in these patients. Treatments vary widely, including best supportive care, chemotherapy, steroids, whole brain radiation therapy (WBRT), surgical resection, and stereotactic radiosurgery (SRS). Median survival in existing studies of EOC with BM has generally been poor, on the order of several months, but some studies report survival as high as 18–33 months in selected patients treated with multimodality therapy that combines surgery, radiation therapy, and systemic chemotherapy [3,5,13,14]. Prior studies of BM in advanced malignancies and in small series of EOC have found performance status, age, primary tumor control, extracranial metastases, and treatment modality for BM to be predictors of survival after BM [4,11,12,15,16]. Leptomeningeal disease (LMD) is regarded as a factor for poor prognosis in other metastatic cancers [17-19] and has been observed with increasing incidence in advanced malignancies, especially in the era of magnetic resonance imaging (MRI) [18]. However, LMD is still rarely reported in EOC. 
We reviewed our institution’s experience using modern RT, with or without craniotomy, to treat patients with BM and LMD from EOC. We also identify predictors of survival after RT in this patient population. Methods: This study was approved by the Institutional Review Board, which also approved a waiver of informed consent. Our institution’s gynecology database was searched for patients with EOC who developed BM and received RT. We identified 60 patients who were diagnosed between October 1996 and April 2010. We performed a retrospective chart review to obtain demographic data, details of initial EOC diagnosis and treatment, Karnofsky Performance Status (KPS), stage, grade, date of BM diagnosis, interval to BM, site and number of BM, treatment type for BM, systemic disease at BM diagnosis, follow-up and response to treatment, date of CNS relapse or recurrence, time interval to relapse or recurrence after initial BM, and date of death or last follow-up. For two patients, the date of death could not be determined. These patients were censored at date of last follow-up. All BM, including LMD, were diagnosed by imaging, most commonly MRI. Overall survival (OS) was calculated as the time from initial EOC or BM diagnosis to date of death or last follow-up. CNS freedom from progression (CNS-FFP) after BM was calculated from date of BM diagnosis to date of CNS recurrence or last follow-up imaging. Patients without follow-up imaging after BM treatment were excluded from the CNS-FFP analysis. Survival rates were determined using the Kaplan-Meier method, and survival curves were compared using the log-rank test. Univariate (UVA) and multivariate (MVA) survival analyses were performed using a Cox proportional hazards model on OS and CNS-FFP. Variables with p value ≤0.05 by UVA were considered for the MVA, and a forward selection procedure was used to build the final model. A p value ≤0.05 was considered significant for all analyses.
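The survival estimates described above follow the standard Kaplan-Meier product-limit construction. As an illustrative sketch only (the study's actual analyses were run in SPSS, and the follow-up times below are hypothetical values, not data from this series), the estimator can be written in a few lines of Python:

```python
# Minimal Kaplan-Meier (product-limit) estimator with right censoring.
# Illustrative sketch only: hypothetical times/events, not patient data.

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns the step function as a list of (time, S(time)) pairs at death times."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Group all subjects tied at time t; subjects censored at t remain
        # "at risk" for deaths occurring at the same time (standard convention).
        ties = [e for tt, e in data if tt == t]
        deaths = sum(ties)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= len(ties)
        i += len(ties)
    return curve

def median_survival(curve):
    """First time at which S(t) drops to 0.5 or below (None = median not reached)."""
    return next((t for t, s in curve if s <= 0.5), None)
```

For example, `kaplan_meier([2, 4, 4, 7, 10, 12, 15, 20], [1, 1, 0, 1, 1, 0, 1, 0])` steps down only at observed deaths, drops the censored subjects out of the risk set, and gives a median survival of 10 months; when the curve never reaches 0.5, the median is "not reached", as in the SRS-alone subgroup reported here.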
All statistical analysis was accomplished with the SPSS software package, version 19 (IBM, Armonk, NY). Results: Clinical characteristics Patient characteristics at the time of initial BM diagnosis are listed in Table 1. Median age at diagnosis of EOC was 56.1 years (range, 31.2-79.0). Stage distribution [20] at original diagnosis of EOC was 3 patients with stage I (5%), 4 with stage II (6.7%), 40 with stage III (66.7%), and 13 with stage IV (21.7%). Histologic grade at diagnosis was 2 patients with grade 1 (3%), 7 with grade 2 (12%), 49 with grade 3 (82%), and 2 unknown (3%). Tumor histology was distributed as follows: 42 (70%) papillary serous, 8 (13%) endometrioid, 3 (5%) adenocarcinoma not otherwise specified, 2 (3%) mixed carcinoma, and 1 each of mixed adenocarcinoma, clear cell carcinoma, mucinous adenocarcinoma, small cell carcinoma, and cystic ovarian carcinoma. Clinical characteristics at time of initial brain metastases BM = brain metastasis; Dx = diagnosis; EOC = epithelial ovarian cancer; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease. Median follow-up from BM diagnosis for all 60 patients was 9.3 months (range, 0.3-82.3). Median follow-up from BM diagnosis for the six patients alive at analysis was 27.1 months (range, 0.7-82.2). Initial treatment of brain metastases Treatments for initial BM are shown in Figure 1. RT was the sole treatment in 38 patients. Twenty-two patients were also treated with surgical resection for initial BM: 16 had a single BM, three had 2–3 BM, and three had four or more BM. In the six patients who underwent craniotomy and had multiple BM, four patients had only the one most symptomatic lesion removed, and two patients who presented with two lesions had both completely resected. Median WBRT dose was 3000 cGy (range, 600 cGy-4400 cGy). Median SRS dose was 2100 cGy (range, 1400–2200 cGy). Thirty-six patients (60%) had at least one cycle of systemic chemotherapy following RT ± surgery. After brain metastasis, ovarian cancers were treated with vastly heterogeneous systemic regimens: we counted 18 unique regimens, and 19 patients received two or more lines of chemotherapy. The most common combination regimens were carboplatin and paclitaxel, followed by gemcitabine and carboplatin or cisplatin. The most common single-agent regimens were paclitaxel, gemcitabine, carboplatin, and liposomal doxorubicin. Patients with a KPS ≥70 were more likely than patients with KPS <70 to receive systemic therapy following local treatment of brain disease (Chi square, p = 0.04).
The remaining 40% of patients either did not have information on chemotherapy available, or did not receive chemotherapy. This group had a median OS of 2.4 months after completing RT (range, 0.1-81.7). Treatments for initial brain metastases. Recurrences and salvage RT Six patients were alive at time of analysis at a median follow-up of 27.1 months after their initial BM (range, 0.7-82.2). Twenty-four patients developed recurrent or progressive CNS disease.
Median time to recurrence was 7.3 months (range, 0.9-46.3). Thirteen patients received further RT for their recurrence, including three treated with conventional RT to the spine for leptomeningeal recurrence. Seven patients received further systemic chemotherapy following diagnosis of their recurrent CNS disease. Leptomeningeal disease LMD was diagnosed in 12 (20%) patients, either at time of initial BM (n = 6) or as relapse of CNS disease (n = 6). Median interval from initial BM diagnosis to secondary LMD diagnosis was 7.2 months (range, 2.5-44.5). All but one patient with LMD had a preceding or synchronous BM. Treatments for LMD in the primary setting included five WBRT and one partial brain radiation therapy (PBRT). Treatments for relapsed LMD included WBRT (n = 2), spine RT (2), systemic chemotherapy (1), and best supportive care (1).
All patients who received RT and had follow-up imaging had at least a partial response to therapy. Survival Median OS from EOC diagnosis was 67.1 months (95% CI, 54.9-69.4). Median OS after BM diagnosis was 9.7 months (95% CI, 5.9-13.5) in all patients (Figure 2a), and 15.6 months in the 47 patients with follow-up (95% CI, 9.8-21.3). Median OS from LMD diagnosis for the 6 patients diagnosed with LMD at the time of initial BM was 3.6 months (95% CI, 0.69-15.8). Median CNS-FFP in the 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.8). a. Overall survival for all patients using Kaplan-Meier method. b. Overall survival stratified by Karnofsky performance status. c. Overall survival stratified by primary tumor control status. d. Overall survival stratified by number of brain metastases and leptomeningeal disease. In the 28 patients with a single BM, median OS after BM was 16.1 months (95% CI, 3.8-28.3). Survival was no different in patients who received surgical resection for a single BM (log-rank, p = 0.32). Seven patients treated with SRS alone had median OS of 60.2 months (95% CI, 9.7-not reached). Univariate analysis of OS included the following potential factors, including those validated in the Recursive Partitioning Analysis for brain metastases [15]: age at BM diagnosis (<65 or ≥65), primary tumor pathologic stage and grade, histology (papillary serous, endometrioid, or all others), interval from EOC diagnosis to BM, KPS at BM diagnosis (<70 or ≥70), primary tumor control status, extent of extracranial disease (limited or extensive), location of BM, number of BM, presence of LMD, surgery as part of treatment, and type of RT received (SRS, WBRT, PBRT). The following factors were significantly associated with OS on UVA: KPS less than 70 (vs ≥70, hazard ratio [HR] 2.78, p = 0.008), four or more BM (vs. 1–3 BM; HR, 3.75; p < 0.001), LMD (vs. 
none; HR, 5.32; p = 0.001), longer duration between EOC diagnosis and BM diagnosis (HR, 1.01; p = 0.006), uncontrolled primary tumor (HR, 2.87; p = 0.001), extensive extracranial metastases (vs. limited extracranial metastases; HR, 1.98; p = 0.036), BM in both the cerebrum and cerebellum (vs either alone; HR, 2.49; p = 0.003), and use of SRS (vs. WBRT or PBRT; HR, 0.46; p = 0.03). The use of craniotomy had no effect on OS (p = 0.31). MVA (Table 2) identified KPS less than 70 (HR, 2.86; p = 0.018), four or more BM (vs. 1 or 2–3 BM; HR, 3.18; p = 0.053), LMD (vs. 1–4 BM; HR, 8.22; p = 0.013), and uncontrolled primary tumor (HR, 2.84; p = 0.008) as risk factors significantly associated with inferior OS after RT for BM. Kaplan-Meier survival curves stratified by the factors significant on MVA are shown in Figure 2b-d. Multivariate analysis of overall survival using Cox proportional hazard model Event = death. BM = brain metastasis; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease. On UVA for CNS-FFP, the presence of four or more BM was significant, with a negative association (HR, 4.09; p = 0.006), when compared with 1–3 BM (Figure 3b). In this analysis, 1 and 2–3 BM were combined since there was no significant difference in CNS-FFP between the two groups. Extensive extracranial disease was also significant (vs. limited; HR, 2.95; p = 0.048). On MVA for CNS-FFP, only four or more BM remained significant (HR, 2.56; p = 0.04), a. Central nervous system (CNS) freedom from progression using Kaplan-Meier method. b. CNS freedom from progression stratified by number of brain metastases. Median OS from EOC diagnosis was 67.1 months (95% CI, 54.9-69.4). Median OS after BM diagnosis was 9.7 months (95% CI, 5.9-13.5) in all patients (Figure 2a), and 15.6 months in the 47 patients with follow-up (95% CI, 9.8-21.3). Median OS from LMD diagnosis for the 6 patients diagnosed with LMD at the time of initial BM was 3.6 months (95% CI, 0.69-15.8). 
Median CNS-FFP in the 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.8). a. Overall survival for all patients using Kaplan-Meier method. b. Overall survival stratified by Karnofsky performance status. c. Overall survival stratified by primary tumor control status. d. Overall survival stratified by number of brain metastases and leptomeningeal disease. In the 28 patients with a single BM, median OS after BM was 16.1 months (95% CI, 3.8-28.3). Survival was no different in patients who received surgical resection for a single BM (log-rank, p = 0.32). Seven patients treated with SRS alone had median OS of 60.2 months (95% CI, 9.7-not reached). Univariate analysis of OS included the following potential factors, including those validated in the Recursive Partitioning Analysis for brain metastases [15]: age at BM diagnosis (<65 or ≥65), primary tumor pathologic stage and grade, histology (papillary serous, endometrioid, or all others), interval from EOC diagnosis to BM, KPS at BM diagnosis (<70 or ≥70), primary tumor control status, extent of extracranial disease (limited or extensive), location of BM, number of BM, presence of LMD, surgery as part of treatment, and type of RT received (SRS, WBRT, PBRT). The following factors were significantly associated with OS on UVA: KPS less than 70 (vs ≥70, hazard ratio [HR] 2.78, p = 0.008), four or more BM (vs. 1–3 BM; HR, 3.75; p < 0.001), LMD (vs. none; HR, 5.32; p = 0.001), longer duration between EOC diagnosis and BM diagnosis (HR, 1.01; p = 0.006), uncontrolled primary tumor (HR, 2.87; p = 0.001), extensive extracranial metastases (vs. limited extracranial metastases; HR, 1.98; p = 0.036), BM in both the cerebrum and cerebellum (vs either alone; HR, 2.49; p = 0.003), and use of SRS (vs. WBRT or PBRT; HR, 0.46; p = 0.03). The use of craniotomy had no effect on OS (p = 0.31). MVA (Table 2) identified KPS less than 70 (HR, 2.86; p = 0.018), four or more BM (vs. 1 or 2–3 BM; HR, 3.18; p = 0.053), LMD (vs. 
1–4 BM; HR, 8.22; p = 0.013), and uncontrolled primary tumor (HR, 2.84; p = 0.008) as risk factors significantly associated with inferior OS after RT for BM. Kaplan-Meier survival curves stratified by the factors significant on MVA are shown in Figure 2b-d. Multivariate analysis of overall survival using Cox proportional hazard model Event = death. BM = brain metastasis; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease. On UVA for CNS-FFP, the presence of four or more BM was significant, with a negative association (HR, 4.09; p = 0.006), when compared with 1–3 BM (Figure 3b). In this analysis, 1 and 2–3 BM were combined since there was no significant difference in CNS-FFP between the two groups. Extensive extracranial disease was also significant (vs. limited; HR, 2.95; p = 0.048). On MVA for CNS-FFP, only four or more BM remained significant (HR, 2.56; p = 0.04), a. Central nervous system (CNS) freedom from progression using Kaplan-Meier method. b. CNS freedom from progression stratified by number of brain metastases. Clinical characteristics: Patient characteristics at the time of initial BM diagnosis are listed in Table 1. Median age at diagnosis of EOC was 56.1 years (range, 31.2-79.0). Stage distribution [20] at original diagnosis of EOC was 3 patients with stage I (5%), 4 with stage II (6.7%), 40 with stage III (66.7%), and 13 with stage IV (21.7%). Histologic grade at diagnosis was 2 patients with grade 1 (3%), 7 with grade 2 (12%), 49 with grade 3 (82%), and 2 unknown (3%). Tumor histology was distributed as follows: 42 (70%) papillary serous, 8 (13%) endometrioid, 3 (5%) adenocarcinoma not otherwise specified, 2 (3%) mixed carcinoma, and 1 each of mixed adenocarcinoma, clear cell carcinoma, mucinous adenocarcinoma, small cell carcinoma, and cystic ovarian carcinoma. 
Clinical characteristics at time of initial brain metastases BM = brain metastasis; Dx = diagnosis; EOC = epithelial ovarian cancer; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease. Median follow-up from BM diagnosis for all 60 patients was 9.3 months (range, 0.3-82.3). Median follow-up from BM diagnosis for the six patients alive at analysis was 27.1 months (range, 0.7-82.2). Initial treatment of brain metastases: Treatments for initial BM are shown in Figure 1. RT was the sole treatment in 38 patients. Twenty-two patients were also treated with surgical resection for initial BM: 16 had a single BM, three had 2–3 BM, and three had four or more BM. In the six patients who underwent craniotomy and had multiple BM, four patients had only the one most symptomatic lesion removed, and two patients who presented with two lesions had both completely resected. Median WBRT dose was 3000 cGy (range, 600 cGy-4400 cGy). Median SRS dose was 2100 cGy (range, 1400–2200 cGy). Thirty-six patients (60%) had at least one cycle of systemic chemotherapy following RT ± surgery. After brain metastasis, ovarian cancers were treated with vastly heterogenous systemic regimens: we counted 18 unique regimens, and 19 patients received two or more lines of chemotherapy. The most common combination regimens were carboplatin and paclitaxel, followed by gemcitabine and carboplatin or cisplatin. The most common single-agent regimens were paclitaxel, gemcitabine, carboplatin, and liposomal doxorubicin. Patients with a KPS ≥70 were more likely than patients with KPS <70 to receive systemic therapy following local treatment of brain disease (Chi square, p = 0.04). The remaining 40% of patients either did not have information on chemotherapy available, or did not receive chemotherapy. This group had a median OS of 2.4 months after completing RT (range, 0.1-81.7). Treatments for initial brain metastases. 
Recurrences and salvage RT: Six patients were alive at time of analysis at a median follow-up of 27.1 months after their initial BM (range, 0.7-82.2). Twenty-four patients developed recurrent or progressive CNS disease. Median time to recurrence was 7.3 months (range, 0.9-46.3). Thirteen patients received further RT for their recurrence, including three treated with conventional RT to the spine for leptomeningeal recurrence. Seven patients received further systemic chemotherapy following diagnosis of their recurrent CNS disease. Leptomeningeal disease: LMD was diagnosed in 12 (20%) patients, either at time of initial BM (n = 6) or as relapse of CNS disease (n = 6). Median interval from initial BM diagnosis to secondary LMD diagnosis was 7.2 months (range, 2.5-44.5). All but one patient with LMD had a preceding or synchronous BM. Treatments for LMD in the primary setting included five WBRT and one partial brain radiation therapy (PBRT). Treatments for relapsed LMD included WBRT (n = 2), spine RT (2), systemic chemotherapy (1), and best supportive care (1). All patients who received RT and had follow-up imaging had at least a partial response to therapy. Survival: Median OS from EOC diagnosis was 67.1 months (95% CI, 54.9-69.4). Median OS after BM diagnosis was 9.7 months (95% CI, 5.9-13.5) in all patients (Figure 2a), and 15.6 months in the 47 patients with follow-up (95% CI, 9.8-21.3). Median OS from LMD diagnosis for the 6 patients diagnosed with LMD at the time of initial BM was 3.6 months (95% CI, 0.69-15.8). Median CNS-FFP in the 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.8). a. Overall survival for all patients using Kaplan-Meier method. b. Overall survival stratified by Karnofsky performance status. c. Overall survival stratified by primary tumor control status. d. Overall survival stratified by number of brain metastases and leptomeningeal disease. In the 28 patients with a single BM, median OS after BM was 16.1 months (95% CI, 3.8-28.3). 
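The medians quoted above are Kaplan-Meier estimates. A minimal sketch of the product-limit estimator, run on a toy cohort rather than the study's patient-level data (which is not published):

```python
def kaplan_meier(times, events):
    """Product-limit estimator: S(t) is the product over event times t_i of
    (1 - d_i / n_i), where d_i = deaths at t_i and n_i = patients at risk.

    times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns a list of (event time, survival probability) steps.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        ties = sum(1 for tt, _ in pairs if tt == t)
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties  # drop deaths and censored subjects after time t
        i += ties
    return curve

def median_survival(curve):
    """First time the survival estimate falls to 0.5 or below (None if not reached)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None

# Toy cohort of six patients (months of follow-up; 1 = death, 0 = censored)
curve = kaplan_meier([2, 4, 4, 7, 10, 12], [1, 1, 0, 1, 1, 0])
```

Note how the censored patient at 4 months still counts in the at-risk denominator at that time but contributes no drop to the curve; this is why Kaplan-Meier medians differ from naive medians of observed survival times.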
Survival was no different in patients who received surgical resection for a single BM (log-rank, p = 0.32). Seven patients treated with SRS alone had median OS of 60.2 months (95% CI, 9.7-not reached). Univariate analysis of OS included the following potential factors, including those validated in the Recursive Partitioning Analysis for brain metastases [15]: age at BM diagnosis (<65 or ≥65), primary tumor pathologic stage and grade, histology (papillary serous, endometrioid, or all others), interval from EOC diagnosis to BM, KPS at BM diagnosis (<70 or ≥70), primary tumor control status, extent of extracranial disease (limited or extensive), location of BM, number of BM, presence of LMD, surgery as part of treatment, and type of RT received (SRS, WBRT, PBRT). The following factors were significantly associated with OS on UVA: KPS less than 70 (vs ≥70, hazard ratio [HR] 2.78, p = 0.008), four or more BM (vs. 1–3 BM; HR, 3.75; p < 0.001), LMD (vs. none; HR, 5.32; p = 0.001), longer duration between EOC diagnosis and BM diagnosis (HR, 1.01; p = 0.006), uncontrolled primary tumor (HR, 2.87; p = 0.001), extensive extracranial metastases (vs. limited extracranial metastases; HR, 1.98; p = 0.036), BM in both the cerebrum and cerebellum (vs either alone; HR, 2.49; p = 0.003), and use of SRS (vs. WBRT or PBRT; HR, 0.46; p = 0.03). The use of craniotomy had no effect on OS (p = 0.31). MVA (Table 2) identified KPS less than 70 (HR, 2.86; p = 0.018), four or more BM (vs. 1 or 2–3 BM; HR, 3.18; p = 0.053), LMD (vs. 1–4 BM; HR, 8.22; p = 0.013), and uncontrolled primary tumor (HR, 2.84; p = 0.008) as risk factors significantly associated with inferior OS after RT for BM. Kaplan-Meier survival curves stratified by the factors significant on MVA are shown in Figure 2b-d. Multivariate analysis of overall survival using Cox proportional hazard model Event = death. BM = brain metastasis; KPS = Karnofsky Performance Score; LMD = leptomeningeal disease. 
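The hazard ratios above come from Cox proportional hazards regression. As a rough intuition (not the Cox partial-likelihood fit itself), under constant exponential hazards an HR reduces to a ratio of crude event rates, and the median survival scales inversely with the rate. A sketch with toy numbers:

```python
import math

def event_rate(events, person_months):
    """Crude event rate: deaths per person-month of follow-up."""
    return events / person_months

def exponential_median(rate):
    """Under a constant hazard, median survival = ln(2) / rate."""
    return math.log(2) / rate

# Toy numbers (not study data): high-risk group, 8 deaths over 40
# person-months; reference group, 5 deaths over 100 person-months.
crude_hr = event_rate(8, 40) / event_rate(5, 100)  # ratio of hazards
medians = (exponential_median(0.2), exponential_median(0.05))
```

Here the fourfold hazard translates into a fourfold shorter median survival, which is the intuition behind reading an HR of 2.86 for KPS <70 as roughly a 2.86-fold higher instantaneous risk of death at any time.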
On UVA for CNS-FFP, the presence of four or more BM was significant, with a negative association (HR, 4.09; p = 0.006), when compared with 1–3 BM (Figure 3b). In this analysis, 1 and 2–3 BM were combined since there was no significant difference in CNS-FFP between the two groups. Extensive extracranial disease was also significant (vs. limited; HR, 2.95; p = 0.048). On MVA for CNS-FFP, only four or more BM remained significant (HR, 2.56; p = 0.04). a. Central nervous system (CNS) freedom from progression using Kaplan-Meier method. b. CNS freedom from progression stratified by number of brain metastases. Discussion: Our series is the largest in the published literature on RT for treatment of BM in EOC, with a total of 60 patients analyzed. We show that these patients can be effectively treated with RT, with or without resection of tumor, and that survival in these patients depends on KPS, number of BM, presence of LMD, and presence of uncontrolled primary tumor. We found no statistically significant effects of age, tumor histology, grade, initial disease stage, RT type, or use of surgical resection of metastases on OS after diagnosis of BM. While CNS metastases are rare in EOC, they appear to be increasing in incidence as extracranial disease is better controlled with modern chemotherapy. In our series, patients with more advanced stage and grade comprise the majority of patients who develop BM. The most common presentation was a single BM, in 47% of patients. Patients with single BM may be treated with surgery, SRS, PBRT, and/or WBRT. In our series, seven patients treated with SRS alone had a median OS of 60.2 months. While this patient subset is small, the long survival is noteworthy in a metastatic population, and likely indicates that these patients were highly selected with good control of extracranial disease and high KPS. Another smaller series has found longer survival in well-selected patients treated with SRS versus WBRT [21]. 
Patients treated with only WBRT tend to have shorter median survival times [22], and this may be reflective of overall poorer KPS, stage, and number of BM. The use of surgery and RT has been associated with longer survival after BM in multiple smaller studies [3-5,8,13,23-25]; however, our study found no significant benefit to the use of surgery in 22 patients who received postoperative RT. Pothuri et al. [13] published a series from our institution of 14 patients treated with craniotomy and postoperative RT, and found median survival of 18 months and high rates of local control. Based on our current study and those preceding, we recommend that patients with a single BM, good KPS and limited extracranial disease be considered for SRS or both craniotomy and RT as clinically indicated. Ideally, single-modality SRS will be tested in more patients with EOC and single BM to come to a better understanding of the adequacy of SRS alone. In patients with multiple BM, some data suggest that resection of multiple metastases may improve outcomes [26,27]; we could not verify this finding in our population, as only two patients fell into this category. Our results add to the findings of the Radiation Therapy Oncology Group (RTOG) Recursive Partitioning Analysis (RPA) system [15] for BM in a variety of cancers, which identified KPS ≥70, age <65 years, controlled primary carcinoma, and no extracranial systemic metastases as being predictive of the longest survival after BM. We suspect that age was not significant in our patients because the majority of our patients fell under the 65-year-old cutoff used by the RPA. A large but older series of EOC BM includes 72 patients from MD Anderson Cancer Center [4] treated heterogeneously since 1985, and none with upfront SRS or PBRT. In that series, 8 (11%) patients received steroids alone, 35 (51%) WBRT alone, 8 (11%) surgery alone, and only 12 (17%) had both surgery and WBRT. Twenty-five patients had a single BM, and 47 had multiple. 
The study does not provide information on the use of salvage therapies for CNS recurrence. Their patients had inferior outcomes to those in our study; median OS was just 6.9 months in patients with a single BM. On their univariate analysis of potential prognostic factors, they did not include KPS or primary tumor control, both of which are vitally important risk factors for outcomes after BM, and are important for determining treatment approach. Although comparing outcomes between retrospective studies is challenging, several possibilities may explain why our results are superior to this series. First, almost half of our population had a single BM, while two-thirds of the patients in the MD Anderson series had multiple metastases. Second, none of our patients were treated with steroids alone, a group that had a significantly worse hazard ratio of death and accounts for 11% of their population. Third, 37% of our patients received multimodality therapy that included surgery and RT, while only 17% of patients in their study were treated in a similar fashion and represented the group with the longest median survival. Surgical resection of a single BM has been shown to improve survival in a randomized clinical trial [28]; in this MD Anderson series, it does not appear that all patients with single BM received surgery, and none received SRS. Fourth, almost two-thirds of our patients were treated with chemotherapy following treatment of their BM, which may contribute to better outcomes (although we were unable to include it in our Cox regression analysis due to its vast heterogeneity). Fifth, it is not clear that patients with relapse received salvage therapy, while most of our patients received RT or chemotherapy at CNS relapse. Lastly, patients treated at our institution are followed closely and systematically for recurrence, and are treated with early salvage therapy at time of relapse; it is unclear what the standard follow-up entailed in other studies. Chen et al. 
[14] used the RTOG RPA classification system in a population of 19 patients with EOC BM treated with various approaches, and found that surgery was associated with longer OS (33.7 months vs 7.4 months, p = 0.006). However, only 9 patients underwent surgery, and 8 of these also received adjuvant radiotherapy to the brain, making the patient population too small to draw definitive conclusions regarding the adequacy of surgery alone for brain metastases in this population. Their UVA found primary tumor control (p = 0.006) and number of BM (p = 0.005) to be associated with OS, but the number of events was too small to perform a true multivariate analysis. In our study, we also find that primary tumor control and number of BM are important. Our patient population differs in that we have three times as many patients, and all are treated with RT. In addition, our larger patient population likely includes patients that are less highly selected than in the Chen study. In fact, Chen et al. concede in their discussion section that the relatively long overall survival of their study (median, 16.3 months) may be attributed to patient factors that are pertinent to outcome but not accounted for in the study, such as systemic therapy and institutional preference for aggressive, multimodality therapy. A recent multi-institutional retrospective study of 139 patients with brain metastases from various gynecologic malignancies identified a group of 56 patients with ovarian, fallopian tube, or peritoneal histology [11] (it is unclear how many of these patients had a true epithelial ovarian carcinoma). While comparing this “ovarian” subgroup to our patients is not entirely possible, given the heterogeneity of their subgroup, the study has several noteworthy findings. First, in the “ovarian” group, 80% received RT, about half of which also received surgery and/or chemotherapy for the BM. Second, median survival for the 56 patients was 12.5 months. 
Third, on a multivariate analysis across all gynecologic types, they found ovarian/tubal/peritoneal disease origin was associated with improved survival and recommended these patients be treated more aggressively when a BM is diagnosed. In our series, 12 (20%) patients with BM also had LMD, and 11 of 12 (92%) had a synchronous or prior BM. LMD is an uncommon occurrence in advanced malignancy, estimated to occur in 4-15% of solid tumors [19]. Previous series have reported an incidence of synchronous or preexisting CNS metastases in 28-75% of patients with LMD [17]. Clarke et al. [18] reviewed a large series of LMD from our institution and reported that 70% of patients with LMD from solid tumors had previous or current brain disease. In that study, 59% of patients were treated with RT, 15% received supportive therapy only, and the remainder had some form of chemotherapy. Median OS after LMD was 2.3 months for patients with solid tumors; there were 2 cases of ovarian carcinoma. Another large series of 155 patients with LMD treated from 1980 to 2002 found a median OS of 2.8 months in non-breast solid tumors [29]. Only one patient had ovarian cancer. We found that median OS after LMD when present at initial BM diagnosis (n = 6) was 3.55 months (95% CI, 0.69-15.8). The patients in our series who developed LMD after a preceding BM diagnosis had no higher representation of one primary treatment modality, either surgery or RT, prior to developing LMD. Our patient population is different from these other large studies for several reasons. First, LMD was primarily treated with RT in our study, in contrast to other reports that frequently use intrathecal or systemic chemotherapy as the principal treatment for LMD [30,31]. In one series of 31 patients treated with intrathecal chemotherapy, the response rate was 52% [32]. 
In contrast, our LMD patients treated with RT who had follow-up imaging (n = 6) demonstrated a 100% overall response rate (both complete and partial responses) as seen on their first follow-up MRI. Second, all patients had LMD diagnosed by MRI, with or without CSF analysis. Lastly, our study represents a homogeneous population of one disease, as opposed to the existing series that include one or two ovarian cancers mixed in with various other solid tumor types. Based on the favorable responses to RT and slightly longer survival in our series, we support administering RT for EOC LMD. EOC is known to be a biologically radiosensitive disease, and our results in the setting of LMD appear to confirm this. There are some limitations of our study. First, we performed a retrospective analysis, which has its own well-known inherent drawbacks and biases, many of which have been outlined above. Second, not all patients in our study had pathologic confirmation of BM from EOC, which could theoretically have led us to include patients with primary CNS tumors or benign disease in this cohort. Third, because it was difficult to determine cause of death for many patients in this retrospective analysis, we could not determine BM-specific survival, which would be a relevant measure of outcome in this population. Lastly, while we were able to determine which patients received any systemic chemotherapy before and after RT for BM, we were unable to routinely quantify the number of cycles of chemotherapy they received and the specific agents with which they were treated. Prospectively collected data on chemotherapy are required to draw strong conclusions about the efficacy of systemic therapy in this patient population. Conclusions: This study is the largest reported series of patients with EOC BM treated with modern radiation therapy. 
We show that KPS less than 70, four or more BM, leptomeningeal disease, and uncontrolled primary tumor predict for inferior survival after radiation therapy for EOC BM. A single BM is the most common presentation, occurring in 47% of patients, and is associated with longer survival when treated with SRS or surgery + RT. Advanced-stage EOC is associated with shorter median interval to BM. Over half of patients with long-term follow-up will recur in the CNS, but can be salvaged effectively with RT or surgery. LMD is associated with poor survival and occurs in 20% of our patients. RT can result in partial or complete response of LMD, although patients will likely recur or progress. Abbreviations: BM: Brain metastases; LMD: Leptomeningeal disease; EOC: Epithelial ovarian cancer; RT: Radiation therapy; KPS: Karnofsky performance status; CNS: Central nervous system; WBRT: Whole brain radiation therapy; PBRT: Partial brain radiation therapy; SRS: Stereotactic radiosurgery; OS: Overall survival; PFS: Progression free survival; FFP: Freedom from progression; MVA: Multivariate analysis; UVA: Univariate analysis; HR: Hazard ratio; RPA: Recursive partitioning analysis Competing interests: The authors have no financial competing interests and no non-financial competing interests. Authors’ contributions: ST, KA, KB made substantial contributions to conception and design and analysis and interpretation of data. ST and VM performed all acquisition of data. All authors (ST, VM, KA, KB, MH, VT, and CA) were involved in drafting the manuscript and revising it critically for important intellectual content. All authors read and approved the final manuscript. Authors’ information: CA is Chief of the Gynecologic Medical Oncology Service at MSKCC. KA is Member, Memorial Sloan-Kettering Cancer Center (MSKCC) and Professor of Radiation Oncology. VT is Associate Professor of Neurosurgery at MSKCC. MH is Associate Professor of Medicine and part of the Gynecologic Medical Oncology Service. 
VM is Assistant Professor of Medicine and part of the Gynecologic Medical Oncology Service. KB is Assistant Professor of Radiation Oncology and the principal CNS Radiation Oncologist at MSKCC. ST is chief resident in the Department of Radiation Oncology. Data were presented in poster form at the American Society for Therapeutic Radiology and Oncology Annual Meeting, October 2–4, 2011, Miami, FL (abstract #2568).
Background: Brain metastases (BM) and leptomeningeal disease (LMD) are uncommon in epithelial ovarian cancer (EOC). We investigate the outcomes of modern radiation therapy (RT) as a primary treatment modality in patients with EOC BM and LMD. Methods: We evaluated 60 patients with EOC treated at our institution from 1996 to 2010 who developed BM. All information was obtained from chart review. Results: At EOC diagnosis, median age was 56.1 years and 88% of patients were stage III-IV. At time of BM diagnosis, 46.7% of patients had 1 BM, 16.7% had two to three, 26.7% had four or more, and 10% had LMD. Median follow-up after BM was 9.3 months (range, 0.3-82.3). All patients received RT, and 37% had surgical resection. LMD occurred in the primary or recurrent setting in 12 patients (20%), 9 of whom received RT. Median overall survival (OS) after BM was 9.7 months for all patients (95% CI 5.9-13.5), and 16.1 months (95% CI 3.8-28.3) in patients with one BM. On multivariate analysis, Karnofsky performance status less than 70 (hazard ratio [HR] 2.86, p = 0.018), four or more BM (HR 3.18, p = 0.05), LMD (HR 8.22, p = 0.013), and uncontrolled primary tumor (HR 2.84, p = 0.008) were significantly associated with inferior OS. Use of surgery was not significant (p = 0.31). Median central nervous system freedom from progression (CNS-FFP) in 47 patients with follow-up was 18.5 months (95% CI, 9.3-27.9). Only four or more BM (HR 2.56, p = 0.04) was significantly associated with poorer CNS-FFP. Conclusions: Based on our results, RT appears to be an effective treatment modality for brain metastases from EOC and should be routinely offered. Karnofsky performance status less than 70, four or more BM, LMD, and uncontrolled primary tumor predict for worse survival after RT for EOC BM. Whether RT is superior to surgery or chemotherapy for EOC BM remains to be seen in a larger cohort.
Background: Epithelial ovarian cancer (EOC) accounts for 3% of cancers among women, but is the fifth leading cause of cancer death in women and the leading cause of gynecologic cancer death [1]. The predominant form of relapse after primary surgery and chemotherapy for EOC is in the abdomen and pelvis [2]. Central nervous system (CNS) and brain metastases (BM) in these patients are a rare occurrence, with reported incidence of 0.29-11.6% [3-9], but may be increasing in incidence as extracranial disease is better controlled with improved surgical and chemotherapeutic options [9-11]. The therapeutic approach to patients with BM from EOC is challenging due to the small numbers of cases and short follow-up periods available in other series [3-10,12]. No studies investigate the impact of modern radiation therapy (RT) as a primary treatment modality in these patients. Treatments vary widely, including best supportive care, chemotherapy, steroids, whole brain radiation therapy (WBRT), surgical resection, and stereotactic radiosurgery (SRS). Median survival in existing studies of EOC with BM has generally been poor, on the order of several months, but some studies report survival as high as 18–33 months in selected patients treated with multimodality therapy that combines surgery, radiation therapy, and systemic chemotherapy [3,5,13,14]. Prior studies of BM in advanced malignancies and in small series of EOC have found performance status, age, primary tumor control, extracranial metastases, and treatment modality for BM to be predictors of survival after BM [4,11,12,15,16]. Leptomeningeal disease (LMD) is regarded as a factor for poor prognosis in other metastatic cancers [17-19] and has been observed with increasing incidence in advanced malignancies, especially in the era of magnetic resonance imaging (MRI) [18]. However, LMD is still rarely reported in EOC. 
We reviewed our institution’s experience using modern RT, with or without craniotomy, to treat patients with BM and LMD from EOC. We also identify predictors of survival after RT in this patient population.
Integrin α7 high expression correlates with deteriorative tumor features and worse overall survival, and its knockdown inhibits cell proliferation and invasion but increases apoptosis in breast cancer.
31325216
This study aimed to investigate the correlation of integrin α7 (ITGA7) expression with clinical/pathological characteristics and overall survival (OS) in breast cancer, and to evaluate the effect of ITGA7 knockdown on breast cancer cell activities.
BACKGROUND
A total of 191 breast cancer patients who underwent surgery were retrospectively reviewed, and ITGA7 expression in tumor tissues was determined by immunofluorescence and real-time quantitative polymerase chain reaction. Patients' clinical/pathological data were recorded, and OS was calculated. In vitro, control shRNA and ITGA7 shRNA plasmids were transfected into MCF-7 cells to evaluate the influence of ITGA7 knockdown on cell proliferation, apoptosis, and invasion.
METHODS
Ninety-two patients (48.2%) presented with ITGA7 high expression, and 99 patients (51.8%) presented with ITGA7 low expression. ITGA7 expression was positively correlated with T stage, tumor-node metastasis (TNM) stage, and pathological grade. Kaplan-Meier curves showed that ITGA7 high expression was associated with shorter OS, and multivariate Cox proportional hazards regression showed that ITGA7 high expression was an independent predictive factor for poor OS. Moreover, in vitro experiments disclosed that cell proliferation (by Cell Counting Kit-8 assay) and cell invasion (by Matrigel invasion assay) were reduced, while the cell apoptosis rate (by Annexin V/propidium iodide assay) was enhanced by ITGA7 knockdown in MCF-7 cells.
RESULTS
Integrin α7 high expression correlates with increased T stage, TNM stage, and pathological grade as well as worse OS, and its knockdown enhances cell apoptosis but inhibits cell proliferation and invasion in breast cancer.
CONCLUSION
[ "Antigens, CD", "Apoptosis", "Biomarkers, Tumor", "Breast Neoplasms", "Cell Proliferation", "Female", "Follow-Up Studies", "Gene Expression Regulation, Neoplastic", "Humans", "Integrin alpha Chains", "Middle Aged", "Neoplasm Invasiveness", "Prognosis", "RNA, Small Interfering", "Retrospective Studies", "Survival Rate" ]
6805256
INTRODUCTION
Breast cancer ranks as the most common cancer and the leading cause of cancer death among females worldwide, accounting for an estimated 1 700 000 new cases and 520 000 deaths around the world according to 2015 global statistics.1 In China, breast cancer accounts for 268 600 new cases and 69 500 deaths in females.1, 2 With developments in medical technology, various treatment options have been applied in breast cancer patients (such as surgery, chemotherapy, endocrine therapy, and targeted therapies).3, 4 Early breast cancer is considered potentially curable with these measures, whereas the efficacy of currently available treatments remains limited for metastatic disease, which is responsible for 90% of the deaths from breast cancer.3, 4, 5 Thus, the exploration of novel treatment targets and convincing prognostic biomarkers is of great importance for the management of breast cancer progression. Integrins are a large family of heterodimeric cell surface receptors that regulate cell‐cell and cell‐extracellular matrix interactions.6 Integrin α7 (ITGA7) is a gene localized on chromosome 12q13 and composed of at least 27 exons spanning a region of around 22.5 kb; it encodes the receptor for the extracellular matrix (ECM) protein laminin and forms a heterodimer with integrin β1.7 Recently, the functions of ITGA7 in cancers have attracted increasing attention. 
Several previous studies have shown that ITGA7 is upregulated and correlates with adverse clinicopathological characteristics in some cancers (such as esophageal squamous cell carcinoma and clear cell renal cell carcinoma).7, 8 Moreover, in vitro experiments have shown that ITGA7 acts as an oncogene in various cancer cells (such as glioblastoma and pancreatic carcinoma) by affecting cell proliferation and invasion.8, 9 Given these implications of a promotive effect of ITGA7 in different cancers, we speculated that ITGA7 might also contribute to the progression of breast cancer and might be a potential treatment target, although relevant evidence is still limited. Thus, we conducted this study to investigate the correlation of ITGA7 expression with clinical/pathological characteristics and overall survival (OS) in breast cancer patients and to further explore the effect of ITGA7 knockdown on breast cancer cell activities in vitro.
null
null
RESULTS
Study flow A total of 395 breast cancer patients who underwent surgical resection were retrospectively screened, of whom 181 were excluded: 69 patients with no preserved tumor tissue, 45 with incomplete clinical data, 37 who underwent neoadjuvant therapy, 21 with relapsed or secondary cancer, and nine with other malignancies (Figure 1). Of the remaining 214 eligible patients, 23 from whom informed consent could not be obtained were excluded, and 191 patients were finally reviewed and analyzed in this study. Study flow
Baseline characteristics A total of 191 breast cancer patients were enrolled, with a mean age of 54.3 ± 13.6 years and a median age of 53.0 (45.0‐64.0) years (Table 1). The mean tumor size was 3.2 ± 1.7 cm, and the median was 3.0 (2.0‐4.0) cm. Regarding disease stage, the numbers of patients with TNM I, TNM II, and TNM III disease were 27 (14.1%), 119 (62.3%), and 45 (23.6%), respectively. As for pathological grade, the numbers of patients with grade G1, G2, and G3 tumors were 42 (22.0%), 124 (64.9%), and 25 (13.1%), respectively. Detailed information on other baseline characteristics is presented in Table 1. Baseline characteristics of breast cancer patients Abbreviations: ER: estrogen receptor; HER‐2: human epidermal growth factor receptor 2; IQR: interquartile range; N: node; PR: progesterone receptor; SD: standard deviation; T: tumor; TNM: tumor‐node metastasis.
ITGA7 expression in breast cancer patients Examples of tumor ITGA7 high expression and ITGA7 low expression are shown in Figure 2A. Of the 191 patients, 92 (48.2%) presented with ITGA7 high expression and 99 (51.8%) with ITGA7 low expression (Figure 2B). ITGA7 expression detected by IF. Examples of ITGA7 high expression and ITGA7 low expression detected by IF (A). There were 92 (48.2%) patients with ITGA7 high expression and 99 (51.8%) patients with ITGA7 low expression (B). ITGA7, integrin α7; IF, immunofluorescence
Correlation of ITGA7 expression with clinical characteristics in breast cancer patients ITGA7 protein high expression was associated with higher T stage (P = .004), TNM stage (P = .038), and pathological grade (P = .017) (Table 2), whereas no correlation of ITGA7 protein expression with age (P = .395), tumor size (P = .661), N stage (P = .131), ER (P = .584), PR (P = .442), or HER‐2 (P = .915) was observed. Likewise, ITGA7 mRNA high expression was associated with higher T stage (P = .002), TNM stage (P = .017), and pathological grade (P = .013). These data suggest that ITGA7 expression was positively correlated with T stage, TNM stage, and pathological grade in breast cancer patients. Correlation of ITGA7 expression with clinicopathological characteristics Differences between the two groups were determined by Wilcoxon rank‐sum test or chi‐square test. P < .05 was considered significant. Abbreviations: ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; IQR, interquartile range; ITGA7, integrin α7; N, node; PR, progesterone receptor; SD, standard deviation; T, tumor; TNM, tumor‐node metastasis. High or low expression was classified according to the median value of ITGA7 mRNA relative expression.
Correlation of ITGA7 expression with OS in breast cancer patients K‐M curves showed that ITGA7 protein high expression was associated with shorter OS (P < .001) (Figure 3A); moreover, ITGA7 mRNA high expression was correlated with worse OS in breast cancer patients (P = .009) (Figure 3B). OS in ITGA7 high expression patients and ITGA7 low expression patients. OS was remarkably shorter in ITGA7 protein high expression patients than in ITGA7 low expression patients (A), and shorter in ITGA7 mRNA high expression patients than in ITGA7 low expression patients (B). K‐M curves were used to display OS, and OS was compared between groups by log‐rank test; P < .05 was considered significant. OS, overall survival; ITGA7, integrin α7; K‐M curves, Kaplan‐Meier curves
Analysis of factors affecting OS in breast cancer patients Univariate Cox's regression showed that ITGA7 high expression (P < .001), larger tumor size (P < .001), higher T stage (P < .001), higher N stage (P < .001), higher TNM stage (P < .001), and higher pathological grade (P < .001) were associated with worse OS (Table 3), whereas ER positivity (P = .010) and PR positivity (P = .022) were correlated with longer OS. Furthermore, multivariate Cox's regression revealed that ITGA7 high expression was an independent predictive factor for poorer OS (P = .006), and higher pathological grade also independently predicted unfavorable OS (P = .001). Univariate and multivariate Cox's proportional hazards regression model analyses of factors affecting OS P value < .05 was considered significant. Abbreviations: CI, confidence interval; ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; HR, hazard ratio; ITGA7, integrin α7; N, node; OS, overall survival; PR, progesterone receptor; T, tumor; TNM, tumor‐node metastasis.
Effect of ITGA7 knockdown on cell proliferation, cell apoptosis, and cell invasion in MCF7 cells To assess the effect of ITGA7 knockdown on breast cancer cell functions, control shRNA and ITGA7 shRNA plasmids were constructed and transfected into MCF7 cells. At 24 hours after transfection, ITGA7 mRNA (P < .01) and ITGA7 protein expression were reduced in the ITGA7 knockdown group compared to the control group (Figure 4A,B). Cell proliferation was reduced in the ITGA7 knockdown group at 48 hours (P < .05) and 72 hours (P < .05) compared to the control group (Figure 4C). The cell apoptosis rate was elevated in the ITGA7 knockdown group at 24 hours compared to the control group (P < .01) (Figure 4D,E). Additionally, the invasive cell count was lower in the ITGA7 knockdown group than in the control group (P < .01) (Figure 4F,G). These data indicate that ITGA7 knockdown repressed cell proliferation and invasion but enhanced cell apoptosis in MCF7 cells. CCK‐8, AV/PI, and Matrigel invasion assays. ITGA7 mRNA expression was decreased in the ITGA7 knockdown group compared to the control group (A). ITGA7 protein expression was lower in the ITGA7 knockdown group (B). Cell viability was reduced in the ITGA7 knockdown group at 48 h and 72 h (C). The cell apoptosis rate was increased in the ITGA7 knockdown group (D, E). Cell count in the Matrigel invasion assay was reduced in the ITGA7 knockdown group (F, G). Comparisons between the two groups were determined by t test. P < .05 was considered significant. *P < .05; **P < .01; ***P < .001. CCK‐8, Cell Counting Kit‐8; AV/PI, Annexin V/propidium iodide; ITGA7, integrin α7
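The OS comparisons above rest on the Kaplan-Meier product-limit estimator together with a log-rank comparison. The following is a minimal pure-Python sketch of the product-limit estimator only, using invented follow-up data; it is not the authors' analysis code (which used SPSS 19.0).

```python
# Illustrative sketch (not the authors' code) of the Kaplan-Meier
# product-limit estimator behind OS curves like those in Figure 3.
# Each observation is (time, event): event=True marks a death,
# event=False a censored follow-up. All data here are hypothetical.

def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each death time."""
    # Sort by time; at tied times, process deaths before censorings
    # (the usual convention for the product-limit estimator).
    data = sorted(observations, key=lambda o: (o[0], not o[1]))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    for time, event in data:
        if event:                      # a death steps the curve down
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((time, survival))
        n_at_risk -= 1                 # one fewer subject at risk either way
    return curve

# Hypothetical follow-up (months, died?) for a small "high expression" group
high_expr = [(5, True), (8, True), (12, False), (20, True)]
curve = kaplan_meier(high_expr)
```

Censored subjects leave the risk set without stepping the curve down, which is why the estimate differs from a naive fraction of survivors at each time point.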
null
null
[ "INTRODUCTION", "Patients", "Data collection", "Sample collection and ITGA7 expression", "ITGA7 knockdown in breast cancer cell line", "Detection of cell proliferation, cell apoptosis, and cell invasion", "Western blot", "RT‐qPCR", "Statistical analysis", "Study flow", "Baseline characteristics", "ITGA7 expression in breast cancer patients", "Correlation of ITGA7 expression with clinical characteristics in breast cancer patients", "Correlation of ITGA7 expression with OS in breast cancer patients", "Analysis of factors affecting OS in breast cancer patients", "Effect of ITGA7 knockdown on cell proliferation, cell apoptosis, and cell invasion in MCF7 cells" ]
[ "Breast cancer has ranked the most common cancer and the leading cause of cancer death among females worldwide, which accounts for estimated 1 700 000 new cases and causes 520 000 deaths around the world according to 2015 global statistics.1 In China, breast cancer occurs in 268 600 new cases and results in 69 500 deaths in females.1, 2 With the development in medical technology, various treatment options have been applied in breast cancer patients (such as surgery, chemotherapy, endocrine therapy, as well as targeted therapies).3, 4 Early breast cancer is considered potentially curable with these measurements, whereas the efficacy of current available treatments is still limited for the disease metastasis that is responsible for 90% of the deaths from breast cancer.3, 4, 5 Thus, exploration of novel treatment target as well as convincing biomarker for prognosis is of great importance for management of breast cancer progression.\nIntegrins are a large family of heterodimeric cell surface receptors that regulated cell‐cell and cell‐extracellular matrix interactions.6 Integrin α7 (ITGA7) is a gene localized on chromosome 12q13 and composed of at least 27 exons spanning a region of around 22.5 kb, which is the receptor for the extracellular matrix (ECM) protein laminin and forms heterodimer with integrin β1.7 Recently, the functions of ITGA7 in cancers have attracted increasing attentions. 
Several previous studies have shown that ITGA7 is upregulated and correlates with adverse clinicopathological characteristics in some cancers (such as esophageal squamous cell carcinoma and clear cell renal cell carcinoma).7, 8 Moreover, in vitro experiments have shown that ITGA7 acts as an oncogene in various cancer cells (such as glioblastoma and pancreatic carcinoma) by affecting cell proliferation and invasion.8, 9 Given these implications of a promotive effect of ITGA7 in different cancers, we speculated that ITGA7 might also contribute to the progression of breast cancer and might be a potential treatment target, although relevant evidence is still limited. Thus, we conducted this study to investigate the correlation of ITGA7 expression with clinical/pathological characteristics and overall survival (OS) in breast cancer patients and to further explore the effect of ITGA7 knockdown on breast cancer cell activities in vitro.", "A total of 191 breast cancer patients who underwent resection from January 2014 to December 2016 were reviewed in our study. The screening criteria were as follows: (a) diagnosed with primary breast cancer by clinical and histopathological examinations; (b) underwent surgical resection; (c) formalin‐fixed, paraffin‐embedded tumor tissue was appropriately preserved and available; and (d) clinical data were complete. The following patients were excluded: (a) relapsed or secondary cancer; (b) underwent neoadjuvant therapy; and (c) suffering from other malignancies. 
The approval for this study was obtained from the Ethics Committee of GanSu Provincial Cancer Hospital, and verbal (with recording) or written informed consent was collected from included patients or their guardians.", "Screening and data retrieval were conducted in June 2018, and patients' clinical data including age, tumor size, T stage, N stage, M stage, tumor‐node metastasis (TNM) stage, pathological grade, estrogen receptor (ER) status, progesterone receptor (PR) status, human epidermal growth factor receptor‐2 (HER‐2) status, and survival records were collected. The pathological grade was classified as grade 1 (G1): well differentiated; grade 2 (G2): moderately differentiated; and grade 3 (G3): poorly differentiated. The TNM stage was assessed in accordance with the American Joint Committee on Cancer (AJCC) Staging System for Breast Cancer (7th edition). The last follow‐up date was June 30, 2018, and OS was calculated from the date of surgical resection to the date of death or last visit.", "All formalin‐fixed, paraffin‐embedded tumor tissues were collected from the Pathology Department after approval by the Hospital. Specimens were cut onto 5‐µm slices, dried at 65℃ for 3 hours, deparaffinized in xylene (Catalog number: 95682; Sigma), and rehydrated in gradient ethanol. Then, the slices were transparentized with polybutylene terephthalate and soaked in a solution of 1% bovine serum albumin (BSA) (Catalog number: A1933; Sigma) + 0.1% Triton X‐100 (Catalog number: T9284; Sigma) for 30 minutes. Next, slices were quenched with fresh 3% hydrogen peroxide (Catalog number: 323381; Sigma) to inhibit endogenous tissue peroxidase activity, and antigen retrieval was performed using a microwave. After blocking with 10% goat serum (Catalog number: 50062Z; Thermo), the slices were incubated with rabbit anti‐ITGA7 antibody (Catalog number: ab203254; Abcam) at a dilution of 1:500 in buffer at 4℃ overnight. 
The next day, slices were washed in buffer and incubated with Alexa Fluor® 488 conjugate‐labeled antibody against rabbit IgG (Catalog number: #4412; CST) at a dilution of 1:500. After that, the slices were washed with phosphate buffered saline (PBS), stained with 2‐(4‐amidinophenyl)‐6‐indolecarbamidine dihydrochloride (DAPI) (Catalog number: D3571; Invitrogen), and covered with coverslips. All slices were evaluated under a 50i Nikon microscope in the dark, and ITGA7 expression was semi‐quantitatively assessed using the following intensity categories: 0, no staining; 1, weak but detectable staining; 2, moderate or distinct staining; and 3, intense staining. A histological score (HSCORE) was derived from the formula HSCORE = ΣPi(i + 1), where i represents the intensity score and Pi the corresponding percentage of cells. According to the HSCORE, ITGA7 high expression was defined as HSCORE > 0.7, and ITGA7 low expression as HSCORE ≤ 0.7.10 Furthermore, RNA was extracted from the FFPE tissue and detected by real‐time quantitative polymerase chain reaction (RT‐qPCR).", "Control shRNA and ITGA7 shRNA plasmids were established by Shanghai GenePharma Bio‐Tech Company using pEX‐2 vectors. Then, the plasmids were transfected into MCF7 cells as the control group and the ITGA7 knockdown group. ITGA7 protein and mRNA expression was detected by Western blot and RT‐qPCR at 24 hours post‐transfection to confirm transfection success. The human breast cancer cell line MCF7 was purchased from the Cell Resource Center of Shanghai Institute of Life Sciences, Chinese Academy of Sciences, and cultured in 90% MEM medium (Catalog number: 12571071; Gibco) with 10% fetal bovine serum (Catalog number: 10099141; Gibco) under 95% air and 5% CO2 at 37℃. 
Sequences for ITGA7 shRNA were as follows: forward 5′‐CACCGCTGCCCACTCTACAGCTTTTCGAAAAAAGCTGTAGAGTGGGCAGC‐3′, reverse 5′‐AAAAGCTGCCCACTCTACAGCTTTTTTCGAAAAGCTGTAGAGTGGGCAGC‐3′, and sequences for control shRNA were as follows: forward 5′‐CACCGTTCTCCGAACGTGTCACGTCGAAACGTGACACGTTCGGAGAA‐3′, reverse 5′‐AAAATTCTCCGAACGTGTCACGTTTCGACGTGACACGTTCGGAGAAC‐3′.", "Cell proliferation, cell apoptosis, and cell invasion were measured by Cell Counting Kit‐8 (CCK‐8), Annexin V/propidium iodide (AV/PI), and Matrigel invasion assays, respectively. Cell viability was detected at 0, 24, 48, and 72 hours post‐transfection using Cell Counting Kit‐8 (Catalog number: CK04; Dojindo) according to the instructions of manufacturer. Cell apoptosis rate was detected at 24 hours post‐transfection using FITC Annexin V Apoptosis Detection Kit II (Catalog number: 556547; BD) according to the instructions of manufacturer. Besides, cell invasive ability was detected by Matrigel invasion assay using Matrigel basement membrane matrix (Catalog number: 356234; BD), Transwell filter chamber (Catalog number: 3422; Coring), formaldehyde solution (Catalog number: 818708; Sigma), and crystal violet (Catalog number: 46364; Sigma) according to the method described previously.11\n", "Western blot was performed as the following steps: (a) extraction of total protein was conducted with RIPA lysis and extraction buffer (Catalog number: 89901; Thermo); (b) concentration of total protein was measured by Pierce™ BCA Protein Assay Kit (Catalog number: 23227; Thermo), followed by electrophoresis and transfer to membranes; (c) membranes were blocked and then incubated with primary antibody (rabbit anti‐ITGA7 antibody [Catalog number: ab203254; Abcam]) (1:1000 dilution) and horseradish peroxidase (HRP)‐conjugated secondary antibody (goat anti‐rabbit IgG H&L [HRP] [Catalog number: ab6721; Abcam]) (1:4000 dilution). 
Finally, the bands were visualized by Pierce™ ECL Plus western blotting substrate (Catalog number: 32132X3; Thermo).", "RT‐qPCR was performed as follows: (a) total RNA was extracted with TRIzol reagent (Catalog number: 15596018; Invitrogen); (b) reverse transcription to cDNA was conducted using the PrimeScript™ RT reagent Kit (Catalog number: RR037A; TAKARA); and (c) qPCR was performed using the QuantiNova SYBR Green PCR Kit (Catalog number: 208054; Qiagen), followed by qPCR amplification. Finally, the qPCR results were calculated by the 2^−ΔΔCt formula. Glyceraldehyde‐3‐phosphate dehydrogenase (GAPDH) was used as the internal reference. Primer sequences were as follows: ITGA7, forward (5′‐>3′): GCCACTCTGCCTGTCCAATG, reverse (5′‐>3′): GGAGGTGCTAAGGATGAGGTAGA; GAPDH, forward (5′‐>3′): GAGTCCACTGGCGTCTTCAC, reverse: ATCTTGAGGCTGTTGTCATACTTCT.", "Data were displayed as mean (standard deviation), median (interquartile range), or count (percentage). Differences between two groups were determined by Wilcoxon rank‐sum test, t test, chi‐square test, or log‐rank test (for survival analysis). Survival curves were constructed with the Kaplan‐Meier method. The influence of each variable on survival was examined by univariate and multivariate Cox's proportional hazards regression analyses. All statistical analyses were performed with SPSS 19.0 software (IBM Corporation). A P value < .05 was considered statistically significant.", "A total of 395 breast cancer patients who underwent surgical resection were retrospectively screened in this study, and 181 of them were excluded: 69 patients with no preserved tumor tissue, 45 with incomplete clinical data, 37 who underwent neoadjuvant therapy, 21 with relapsed or secondary cancer, and nine with other malignancies (Figure 1). 
Subsequently, in the 214 patients eligible, 23 patients who were unable to acquire informed consents were excluded, and finally 191 patients were reviewed and analyzed in this study.\nStudy flow", "A total of 191 breast cancer patients were enrolled, with the mean age of 54.3 ± 13.6 years and the median age of 53.0 (45.0‐64.0) years (Table 1). For tumor size, the mean value was 3.2 ± 1.7 cm, and the median value was 3.0 (2.0‐4.0) cm. Regarding the disease stage, the numbers of patients with TNM I, TNM II, as well as TNM III were 27 (14.1%), 119 (62.3%), and 45 (23.6%), respectively. As for pathological grade, the numbers of patients with grade G1, G2, and G3 were 42 (22.0%), 124 (64.9%), and 25 (13.1%), respectively. The detailed information about other baseline characteristics is presented in Table 1.\nBaseline characteristics of breast cancer patients\nAbbreviations: ER: estrogen receptor; HER‐2: human epidermal growth factor receptor 2; IQR: interquartile range; N: node; PR: progesterone receptor; SD: standard deviation; T: tumor; TNM: tumor‐node metastasis.", "Examples of tumor ITGA7 high expression and ITGA7 low expression are shown in Figure 2A. In totally 191 patients, 92 (48.2%) patients presented with ITGA7 high expression, and 99 (51.8%) patients presented with ITGA7 low expression (Figure 2B).\nITGA7 expression detected by IF. Examples of ITGA7 high expression and ITGA7 low expression detected by IF (A). There were 92 (48.2%) patients with ITGA7 high expression and 99 (51.8%) patients with ITGA7 low expression (B). ITGA7, integrin α7; IF, immunofluorescence", "ITGA7 protein high expression was associated with elevated T stage (P = .004), increased TNM stage (P = .038), and raised pathological grade (P = .017) in breast cancer patients (Table 2), whereas no correlation of ITGA7 protein expression with age (P = .395), tumor size (P = .661), N stage (P = .131), ER (P = .584), PR (P = .442), and HER‐2 (P = .915) was observed. 
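The 2^−ΔΔCt relative-quantification formula named in the RT-qPCR section above can be sketched as follows. The Ct values are invented for illustration; this is not the authors' analysis script.

```python
# Hedged sketch of the 2^-ddCt method (GAPDH as the internal reference).
# Ct values below are hypothetical, chosen only to illustrate a knockdown.

def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression of the target gene, sample vs control."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample
    delta_ct_control = ct_target_control - ct_ref_control   # normalize control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: ITGA7 Ct rises by 2 cycles after knockdown while GAPDH is stable,
# i.e. expression drops to a quarter of the control level.
fold = fold_change_ddct(26.0, 18.0, 24.0, 18.0)
```

Because Ct counts amplification cycles, each unit of ΔΔCt corresponds to a two-fold change in starting template, hence the base-2 exponent.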
Meanwhile, ITGA7 mRNA high expression was associated with increased T stage (P = .002), elevated TNM stage (P = .017), and higher pathological grade (P = .013). These data suggested that ITGA7 expression was positively correlated with T stage, TNM stage, and pathological grade in breast cancer patients.\nCorrelation of ITGA7 expression with clinicopathological characteristics\nDifference between two groups was determined by Wilcoxon rank‐sum test or chi‐square test. P value < .05 was considered significant.\nAbbreviations: ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; IQR, interquartile range; ITGA7, integrin α7; N, node; PR, progesterone receptor; SD, standard deviation; T, tumor; TNM, tumor‐node metastasis.\nThe high or low expression was classified according to the median value of ITGA7 mRNA relative expression.", "K‐M curves displayed that ITGA7 protein high expression was associated with shorter OS (P < .001) (Figure 3A); moreover, ITGA7 mRNA high expression was correlated with worse OS in breast cancer patients (P = .009) (Figure 3B).\nOS in ITGA7 high expression patients and ITGA7 low expression patients. OS in ITGA7 protein high expression patients was remarkably shorter than that in ITGA7 low expression patients (A). OS in ITGA7 mRNA high expression patients was shorter than that in ITGA7 low expression patients (B). K‐M curves were used to display OS. Comparison of OS between two groups was determined by log‐rank test. P < .05 was considered significant. 
OS, overall survival; ITGA7, integrin α7; K‐M curves, Kaplan‐Meier curves", "Univariate Cox's regression displayed that ITGA7 high expression (P < .001) was associated with shorter OS, and larger tumor size (P < .001), higher T stage (P < .001), higher N stage (P < .001), higher TNM stage (P < .001), and higher pathological grade (P < .001) were also associated with worse OS in breast cancer patients (Table 3), whereas ER (positive vs negative) (P = .010) and PR (positive vs negative) (P = .022) were correlated with longer OS in breast cancer patients. Furthermore, the multivariate Cox's regression analysis revealed that ITGA7 high expression was an independent predictive factor for poorer OS (P = .006) in breast cancer patients, and higher pathological grade also independently predicted unfavorable OS (P = .001) in breast cancer patients.\nUnivariate and multivariate Cox's proportional hazards regression model analyses of factors affecting OS\n\nP value < .05 was considered significant.\nAbbreviations: CI, confidence interval; ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; HR, hazard ratio; ITGA7, integrin α7; N, Node; OS, overall survival; PR, progesterone receptor; T, tumor; TNM, tumor‐node metastasis.", "In order to assess the effect of ITGA7 knockdown on cell functions in breast cancer cells, control shRNA and ITGA7 shRNA plasmids were constructed and transfected into MCF7 cells. After transfection at 24 hours, the expressions of ITGA7 mRNA (P < .01) and ITGA protein were reduced in ITGA7 knockdown group compared to control group (Figure 4A,B). Cell proliferation was reduced in ITGA7 knockdown group at 48 hours (P < .05) and 72 hours (P < .05) compared to control group (Figure 4C). For cell apoptosis, its rate was elevated in ITGA7 knockdown group at 24 hours compared to control group (P < .01) (Figure 4D,E). Additionally, invasive cell count was lower in ITGA7 knockdown group compared to control group (P < .01) (Figure 4F,G). 
These data indicated that ITGA7 knockdown repressed cell proliferation and invasion, but enhanced cell apoptosis in MCF7 cells.\nCCK‐8, AV/PI, and Matrigel invasion assays. ITGA7 mRNA expression was decreased in ITGA7 knockdown group compared to control group (A). ITGA7 protein expression was lower in ITGA7 knockdown group compared to control group (B). Cell viability was reduced in ITGA7 knockdown group compared to control group at 48 h and 72 h (C). Cell apoptosis rate was increased in ITGA7 knockdown group compared to control group (D, E). Cell count using Matrigel invasion assay was reduced in ITGA7 knockdown group compared to control group (F, G). Comparison between two groups was determined by t test. P < .05 was considered significant. *P < .05; **P < .01; ***P < .001. CCK‐8, Cell Counting Kit‐8; AV/PI, Annexin V/ propidium iodide; ITGA7, integrin α7" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Data collection", "Sample collection and ITGA7 expression", "ITGA7 knockdown in breast cancer cell line", "Detection of cell proliferation, cell apoptosis, and cell invasion", "Western blot", "RT‐qPCR", "Statistical analysis", "RESULTS", "Study flow", "Baseline characteristics", "ITGA7 expression in breast cancer patients", "Correlation of ITGA7 expression with clinical characteristics in breast cancer patients", "Correlation of ITGA7 expression with OS in breast cancer patients", "Analysis of factors affecting OS in breast cancer patients", "Effect of ITGA7 knockdown on cell proliferation, cell apoptosis, and cell invasion in MCF7 cells", "DISCUSSION" ]
[ "Breast cancer has ranked the most common cancer and the leading cause of cancer death among females worldwide, which accounts for estimated 1 700 000 new cases and causes 520 000 deaths around the world according to 2015 global statistics.1 In China, breast cancer occurs in 268 600 new cases and results in 69 500 deaths in females.1, 2 With the development in medical technology, various treatment options have been applied in breast cancer patients (such as surgery, chemotherapy, endocrine therapy, as well as targeted therapies).3, 4 Early breast cancer is considered potentially curable with these measurements, whereas the efficacy of current available treatments is still limited for the disease metastasis that is responsible for 90% of the deaths from breast cancer.3, 4, 5 Thus, exploration of novel treatment target as well as convincing biomarker for prognosis is of great importance for management of breast cancer progression.\nIntegrins are a large family of heterodimeric cell surface receptors that regulated cell‐cell and cell‐extracellular matrix interactions.6 Integrin α7 (ITGA7) is a gene localized on chromosome 12q13 and composed of at least 27 exons spanning a region of around 22.5 kb, which is the receptor for the extracellular matrix (ECM) protein laminin and forms heterodimer with integrin β1.7 Recently, the functions of ITGA7 in cancers have attracted increasing attentions. 
Several previous studies have disclosed that ITGA7 is upregulated and correlates with adverse clinicopathological characteristics in some cancers (such as esophageal squamous cell carcinoma and clear cell renal cell carcinoma).7, 8 Moreover, some in vitro experiments have disclosed that ITGA7 serves as a tumor oncogene in different cancer cells (such as glioblastoma and pancreatic carcinoma) through affecting cell proliferation and invasion.8, 9 Considering these implications about the promotive effect of ITGA7 in different cancers, we speculated that ITGA7 also might contribute to the progression of breast cancer and might be a potential treatment target, while relevant evidence is still limited. Thus, we conducted this study to investigate the correlation of ITGA7 expression with clinical/pathological characteristics and overall survival (OS) in breast cancer patients and to further explore the effect of its knockdown on breast cancer cell activities in vitro.", " Patients A total of 191 breast cancer patients who underwent resection from January 2014 to December 2016 were reviewed in our study. The screening criteria were as follows: (a) diagnosed as primary breast cancer by clinical and histopathological examinations; (b) underwent surgical resection; (c) formalin‐fixed, paraffin‐embedded tumor tissue was appropriately preserved and available; and (d) clinical data were complete. The following patients were excluded: (a) relapsed or secondary cancer; (b) underwent neoadjuvant therapy; and (c) suffering from other malignancies. The approval for this study was obtained from the Ethics Committee of GanSu Provincial Cancer Hospital, and verbal (with recording) or written informed consents were collected from included patients or their guardians.\nA total of 191 breast cancer patients who underwent resection from January 2014 to December 2016 were reviewed in our study. 
The screening criteria were as follows: (a) diagnosed as primary breast cancer by clinical and histopathological examinations; (b) underwent surgical resection; (c) formalin‐fixed, paraffin‐embedded tumor tissue was appropriately preserved and available; and (d) clinical data were complete. Following patients were excluded: (a) relapsed or secondary cancer; (b) underwent neoadjuvant therapy; and (c) suffering from other malignancies. The approval for this study was obtained from the Ethics Committee of GanSu Provincial Cancer Hospital, and verbal (with recording) or written informed consents were collected from included patients or their guardians.\n Data collection The screening and data retrieving were conducted in June 2018, and patients’ clinical data including age, tumor size, T stage, N stage, M stage, tumor‐node metastasis (TNM) stage, pathological grade, estrogen receptor (ER) status, progesterone receptor (PR) status, human epidermal growth factor receptor‐2 (HER‐2) status, as well as survival records were collected. The pathological grade was classified as grade 1 (G1): well differentiation; grade 2 (G2): moderate differentiation; and grade 3 (G3): poor differentiation. The TNM stage was assessed in accordance with the American Joint Committee on Cancer (AJCC) Staging System for Breast Cancer (7th version). The last follow‐up date was June 30, 2018, and OS was calculated from the date of surgical resection to the date of death or last visit.\nThe screening and data retrieving were conducted in June 2018, and patients’ clinical data including age, tumor size, T stage, N stage, M stage, tumor‐node metastasis (TNM) stage, pathological grade, estrogen receptor (ER) status, progesterone receptor (PR) status, human epidermal growth factor receptor‐2 (HER‐2) status, as well as survival records were collected. 
The pathological grade was classified as grade 1 (G1): well differentiation; grade 2 (G2): moderate differentiation; and grade 3 (G3): poor differentiation. The TNM stage was assessed in accordance with the American Joint Committee on Cancer (AJCC) Staging System for Breast Cancer (7th version). The last follow‐up date was June 30, 2018, and OS was calculated from the date of surgical resection to the date of death or last visit.\n Sample collection and ITGA7 expression All formalin‐fixed, paraffin‐embedded tumor tissues were collected from the Pathology Department after approval by the Hospital. Specimens were cut onto 5‐µm slices, dried at 65℃ for 3 hours, deparaffinized in xylene (Catalog number: 95682; Sigma), followed by rehydration using gradient ethanol. Then, the slices were transparentized by polybutylene terephthalate and soaked in the solution of 1% bovine serum albumin (BSA) (Catalog number: A1933; Sigma) +0.1% Triton X‐100 (Catalog number: T9284; Sigma) for 30 minutes. Next, slices were quenched with fresh 3% hydrogen peroxide (Catalog number: 323381; Sigma) to inhibit endogenous tissue peroxidase activity, and the antigen retrieval was performed using microwave. After blocked by 10% goat serum (Catalog number: 50062Z; Thermo), the slices were incubated with rabbit anti‐ITGA7 antibody (Catalog number: ab203254; Abcam) at a dilution of 1:500 in buffer 4℃ overnight. Next day, slices were washed in buffer and then were incubated with Alexa Fluor® 488 conjugate‐labeled antibody against rabbit IgG (Catalog number: #4412; CST) at dilution of 1:500. After that, the slices were washed with phosphate buffer saline (PBS) and followed by 2‐(4‐amidinophenyl)‐6‐indolecarbamidine dihydrochloride (DAPI) (Catalog number: D3571; Invitrogen) staining and then were covered with coverslips. 
All slices were evaluated by 50i Nikon microscope in dark, and ITGA7 expression was semi‐quantitatively assessed by using the following intensity categories: 0, no staining; 1, weak but detectable staining; 2, moderate or distinct staining; and 3, intense staining. A histological score (HSCORE) was derived from the formula HSCORE = ΣPi(i + 1), where i represents the intensity scores, and Pi is the corresponding percentage of the cells. According to the HSCORE, ITGA7 high expression was defined as HSCORE > 0.7, and the ITGA7 low expression was defined as HSCORE ≤ 0.7.10 Furthermore, RNA was extracted from the FFPE tissue and detected by real‐time quantitative polymerase chain reaction (RT‐qPCR).\nAll formalin‐fixed, paraffin‐embedded tumor tissues were collected from the Pathology Department after approval by the Hospital. Specimens were cut onto 5‐µm slices, dried at 65℃ for 3 hours, deparaffinized in xylene (Catalog number: 95682; Sigma), followed by rehydration using gradient ethanol. Then, the slices were transparentized by polybutylene terephthalate and soaked in the solution of 1% bovine serum albumin (BSA) (Catalog number: A1933; Sigma) +0.1% Triton X‐100 (Catalog number: T9284; Sigma) for 30 minutes. Next, slices were quenched with fresh 3% hydrogen peroxide (Catalog number: 323381; Sigma) to inhibit endogenous tissue peroxidase activity, and the antigen retrieval was performed using microwave. After blocked by 10% goat serum (Catalog number: 50062Z; Thermo), the slices were incubated with rabbit anti‐ITGA7 antibody (Catalog number: ab203254; Abcam) at a dilution of 1:500 in buffer 4℃ overnight. Next day, slices were washed in buffer and then were incubated with Alexa Fluor® 488 conjugate‐labeled antibody against rabbit IgG (Catalog number: #4412; CST) at dilution of 1:500. 
After that, the slices were washed with phosphate buffer saline (PBS) and followed by 2‐(4‐amidinophenyl)‐6‐indolecarbamidine dihydrochloride (DAPI) (Catalog number: D3571; Invitrogen) staining and then were covered with coverslips. All slices were evaluated by 50i Nikon microscope in dark, and ITGA7 expression was semi‐quantitatively assessed by using the following intensity categories: 0, no staining; 1, weak but detectable staining; 2, moderate or distinct staining; and 3, intense staining. A histological score (HSCORE) was derived from the formula HSCORE = ΣPi(i + 1), where i represents the intensity scores, and Pi is the corresponding percentage of the cells. According to the HSCORE, ITGA7 high expression was defined as HSCORE > 0.7, and the ITGA7 low expression was defined as HSCORE ≤ 0.7.10 Furthermore, RNA was extracted from the FFPE tissue and detected by real‐time quantitative polymerase chain reaction (RT‐qPCR).\n ITGA7 knockdown in breast cancer cell line Control shRNA and ITGA7 shRNA plasmids were established by Shanghai GenePharma Bio‐Tech Company using pEX‐2 vectors. Then, plasmids were transfected into MCF7 cells as control group and ITGA7 knockdown group. ITGA7 protein and mRNA expressions were subsequently detected by Western blot and RT‐qPCR at 24 hours post‐transfection to determine the transfection success. Human breast cancer cell line MCF7 was purchased from Cell Resource Center of Shanghai Institute of Life Sciences, Chinese Academy of Sciences, and cultured in 90% MEM medium (Catalog number: 12571071; Gibco) with 10% fetal bovine serum (Catalog number: 10099141; Gibco) under 95% air and 5% CO2 at 37℃. 
Sequences for ITGA7 shRNA were as follows: forward 5′‐CACCGCTGCCCACTCTACAGCTTTTCGAAAAAAGCTGTAGAGTGGGCAGC‐3′, reverse 5′‐AAAAGCTGCCCACTCTACAGCTTTTTTCGAAAAGCTGTAGAGTGGGCAGC‐3′, and sequences for control shRNA were as follows: forward 5′‐CACCGTTCTCCGAACGTGTCACGTCGAAACGTGACACGTTCGGAGAA‐3′, reverse 5′‐AAAATTCTCCGAACGTGTCACGTTTCGACGTGACACGTTCGGAGAAC‐3′.\nControl shRNA and ITGA7 shRNA plasmids were established by Shanghai GenePharma Bio‐Tech Company using pEX‐2 vectors. Then, plasmids were transfected into MCF7 cells as control group and ITGA7 knockdown group. ITGA7 protein and mRNA expressions were subsequently detected by Western blot and RT‐qPCR at 24 hours post‐transfection to determine the transfection success. Human breast cancer cell line MCF7 was purchased from Cell Resource Center of Shanghai Institute of Life Sciences, Chinese Academy of Sciences, and cultured in 90% MEM medium (Catalog number: 12571071; Gibco) with 10% fetal bovine serum (Catalog number: 10099141; Gibco) under 95% air and 5% CO2 at 37℃. Sequences for ITGA7 shRNA were as follows: forward 5′‐CACCGCTGCCCACTCTACAGCTTTTCGAAAAAAGCTGTAGAGTGGGCAGC‐3′, reverse 5′‐AAAAGCTGCCCACTCTACAGCTTTTTTCGAAAAGCTGTAGAGTGGGCAGC‐3′, and sequences for control shRNA were as follows: forward 5′‐CACCGTTCTCCGAACGTGTCACGTCGAAACGTGACACGTTCGGAGAA‐3′, reverse 5′‐AAAATTCTCCGAACGTGTCACGTTTCGACGTGACACGTTCGGAGAAC‐3′.\n Detection of cell proliferation, cell apoptosis, and cell invasion Cell proliferation, cell apoptosis, and cell invasion were measured by Cell Counting Kit‐8 (CCK‐8), Annexin V/propidium iodide (AV/PI), and Matrigel invasion assays, respectively. Cell viability was detected at 0, 24, 48, and 72 hours post‐transfection using Cell Counting Kit‐8 (Catalog number: CK04; Dojindo) according to the instructions of manufacturer. Cell apoptosis rate was detected at 24 hours post‐transfection using FITC Annexin V Apoptosis Detection Kit II (Catalog number: 556547; BD) according to the instructions of manufacturer. 
Besides, cell invasive ability was detected by Matrigel invasion assay using Matrigel basement membrane matrix (Catalog number: 356234; BD), Transwell filter chamber (Catalog number: 3422; Corning), formaldehyde solution (Catalog number: 818708; Sigma), and crystal violet (Catalog number: 46364; Sigma) according to the method described previously.11\n\nCell proliferation, cell apoptosis, and cell invasion were measured by Cell Counting Kit‐8 (CCK‐8), Annexin V/propidium iodide (AV/PI), and Matrigel invasion assays, respectively. Cell viability was detected at 0, 24, 48, and 72 hours post‐transfection using Cell Counting Kit‐8 (Catalog number: CK04; Dojindo) according to the manufacturer's instructions. Cell apoptosis rate was detected at 24 hours post‐transfection using FITC Annexin V Apoptosis Detection Kit II (Catalog number: 556547; BD) according to the manufacturer's instructions. Besides, cell invasive ability was detected by Matrigel invasion assay using Matrigel basement membrane matrix (Catalog number: 356234; BD), Transwell filter chamber (Catalog number: 3422; Corning), formaldehyde solution (Catalog number: 818708; Sigma), and crystal violet (Catalog number: 46364; Sigma) according to the method described previously.11\n\n Western blot Western blot was performed as follows: (a) extraction of total protein was conducted with RIPA lysis and extraction buffer (Catalog number: 89901; Thermo); (b) concentration of total protein was measured by Pierce™ BCA Protein Assay Kit (Catalog number: 23227; Thermo), followed by electrophoresis and transfer to membranes; (c) membranes were blocked and then incubated with primary antibody (rabbit anti‐ITGA7 antibody [Catalog number: ab203254; Abcam]) (1:1000 dilution) and horseradish peroxidase (HRP)‐conjugated secondary antibody (goat anti‐rabbit IgG H&L [HRP] [Catalog number: ab6721; Abcam]) (1:4000 dilution). 
Finally, the bands were visualized by Pierce™ ECL Plus western blotting substrate (Catalog number: 32132X3; Thermo).\nWestern blot was performed as follows: (a) extraction of total protein was conducted with RIPA lysis and extraction buffer (Catalog number: 89901; Thermo); (b) concentration of total protein was measured by Pierce™ BCA Protein Assay Kit (Catalog number: 23227; Thermo), followed by electrophoresis and transfer to membranes; (c) membranes were blocked and then incubated with primary antibody (rabbit anti‐ITGA7 antibody [Catalog number: ab203254; Abcam]) (1:1000 dilution) and horseradish peroxidase (HRP)‐conjugated secondary antibody (goat anti‐rabbit IgG H&L [HRP] [Catalog number: ab6721; Abcam]) (1:4000 dilution). Finally, the bands were visualized by Pierce™ ECL Plus western blotting substrate (Catalog number: 32132X3; Thermo).\n RT‐qPCR RT‐qPCR was performed as follows: (a) with TRIzol reagent (Catalog number: 15596018; Invitrogen), extraction of total RNA was performed; (b) the reverse transcription to cDNA was conducted using PrimeScript™ RT reagent Kit (Catalog number: RR037A; TAKARA); (c) qPCR was performed using QuantiNova SYBR Green PCR Kit (Catalog number: 208054; Qiagen), followed by qPCR amplification. Finally, the results of qPCR were calculated by the 2^−ΔΔCt formula. Glyceraldehyde‐3‐phosphate dehydrogenase (GAPDH) was used as the internal reference. 
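The relative-quantification step just described can be sketched in a few lines. This is a generic illustration of the 2^−ΔΔCt method with GAPDH as the internal reference, not the authors' analysis script, and the Ct values below are hypothetical.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-delta-delta-Ct method.

    delta-Ct normalizes the target gene (e.g., ITGA7) to the internal
    reference (e.g., GAPDH); delta-delta-Ct compares sample to control.
    """
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: knockdown raises the target Ct by one cycle,
# so relative expression halves.
print(fold_change(25.0, 20.0, 24.0, 20.0))  # 0.5
```

A one-cycle increase in Ct corresponds to a twofold drop in template, which is why the result is exponential in −ΔΔCt.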
And information of primers used was as follows: ITGA7, forward (5′‐>3′): GCCACTCTGCCTGTCCAATG, reverse (5′‐>3′): GGAGGTGCTAAGGATGAGGTAGA; GAPDH, forward (5′‐>3′): GAGTCCACTGGCGTCTTCAC, reverse, ATCTTGAGGCTGTTGTCATACTTCT.\nRT‐PCR was performed as the following steps: (a) with TRIzol reagent (Catalog number: 15596018; Invitrogen), extraction of total RNA was performed; (b) the reverse transcription to cDNA was conducted using PrimeScript™ RT reagent Kit (Catalog number: RR037A; TAKARA); (c) qPCR was performed using QuantiNova SYBR Green PCR Kit (Catalog number: 208054; Qiagen), followed by qPCR amplification. Finally, the results of qPCR were calculated by 2‐ΔΔCt formula. Glyceraldehyde‐3‐phosphate dehydrogenase (GAPDH) was used as the internal reference. And information of primers used was as follows: ITGA7, forward (5′‐>3′): GCCACTCTGCCTGTCCAATG, reverse (5′‐>3′): GGAGGTGCTAAGGATGAGGTAGA; GAPDH, forward (5′‐>3′): GAGTCCACTGGCGTCTTCAC, reverse, ATCTTGAGGCTGTTGTCATACTTCT.\n Statistical analysis Data were displayed as mean (standard deviation), median (interquartile range), or count (percentage). Difference between two groups was determined by Wilcoxon rank‐sum test, t test, chi‐square test, or log‐rank test (for survival analysis). Survival curves were constructed with the Kaplan‐Meier method. The influence of each variable on survival was examined by the univariate and multivariate Cox's proportional hazards regression analyses. All statistical analyses were performed with SPSS 19.0 software (IBM Corporation). A P value < .05 was considered statistically significant.\nData were displayed as mean (standard deviation), median (interquartile range), or count (percentage). Difference between two groups was determined by Wilcoxon rank‐sum test, t test, chi‐square test, or log‐rank test (for survival analysis). Survival curves were constructed with the Kaplan‐Meier method. 
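The survival curves reported here were produced in SPSS with the Kaplan‐Meier method; as a hedged illustration of what the product‐limit estimator computes, a minimal pure‐Python sketch (with hypothetical follow‐up data) might look like:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.

    times  -- follow-up durations (e.g., months from resection)
    events -- 1 if death was observed at that time, 0 if censored
    Returns [(t, S(t))] at each distinct event time.
    """
    pairs = sorted(zip(times, events))
    survival = 1.0
    curve = []
    for t in sorted({t for t, e in pairs if e == 1}):
        at_risk = sum(1 for tt, _ in pairs if tt >= t)       # n_i just before t
        deaths = sum(1 for tt, e in pairs if tt == t and e)  # d_i at t
        survival *= 1.0 - deaths / at_risk                   # S(t) = product of (1 - d_i/n_i)
        curve.append((t, survival))
    return curve

# Hypothetical cohort: deaths at 6 and 12 months, one patient censored at 18.
print(kaplan_meier([6, 12, 18], [1, 1, 0]))
```

Censored patients contribute to the at-risk count up to their censoring time but never trigger a step in the curve, which is the whole point of the estimator.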
The influence of each variable on survival was examined by the univariate and multivariate Cox's proportional hazards regression analyses. All statistical analyses were performed with SPSS 19.0 software (IBM Corporation). A P value < .05 was considered statistically significant.", "A total of 191 breast cancer patients underwent resection from January 2014 to December 2016 were reviewed in our study. The screening criteria were as follows: (a) diagnosed as primary breast cancer by clinical and histopathological examinations; (b) underwent surgical resection; (c) formalin‐fixed, paraffin‐embedded tumor tissue was appropriately preserved and available; and (d) clinical data were complete. Following patients were excluded: (a) relapsed or secondary cancer; (b) underwent neoadjuvant therapy; and (c) suffering from other malignancies. The approval for this study was obtained from the Ethics Committee of GanSu Provincial Cancer Hospital, and verbal (with recording) or written informed consents were collected from included patients or their guardians.", "The screening and data retrieving were conducted in June 2018, and patients’ clinical data including age, tumor size, T stage, N stage, M stage, tumor‐node metastasis (TNM) stage, pathological grade, estrogen receptor (ER) status, progesterone receptor (PR) status, human epidermal growth factor receptor‐2 (HER‐2) status, as well as survival records were collected. The pathological grade was classified as grade 1 (G1): well differentiation; grade 2 (G2): moderate differentiation; and grade 3 (G3): poor differentiation. The TNM stage was assessed in accordance with the American Joint Committee on Cancer (AJCC) Staging System for Breast Cancer (7th version). The last follow‐up date was June 30, 2018, and OS was calculated from the date of surgical resection to the date of death or last visit.", "All formalin‐fixed, paraffin‐embedded tumor tissues were collected from the Pathology Department after approval by the Hospital. 
Specimens were cut into 5‐µm slices, dried at 65℃ for 3 hours, deparaffinized in xylene (Catalog number: 95682; Sigma), followed by rehydration using gradient ethanol. Then, the slices were transparentized by polybutylene terephthalate and soaked in the solution of 1% bovine serum albumin (BSA) (Catalog number: A1933; Sigma) +0.1% Triton X‐100 (Catalog number: T9284; Sigma) for 30 minutes. Next, slices were quenched with fresh 3% hydrogen peroxide (Catalog number: 323381; Sigma) to inhibit endogenous tissue peroxidase activity, and the antigen retrieval was performed using microwave. After blocking with 10% goat serum (Catalog number: 50062Z; Thermo), the slices were incubated with rabbit anti‐ITGA7 antibody (Catalog number: ab203254; Abcam) at a dilution of 1:500 in buffer at 4℃ overnight. The next day, slices were washed in buffer and then incubated with Alexa Fluor® 488 conjugate‐labeled antibody against rabbit IgG (Catalog number: #4412; CST) at a dilution of 1:500. After that, the slices were washed with phosphate‐buffered saline (PBS), stained with 2‐(4‐amidinophenyl)‐6‐indolecarbamidine dihydrochloride (DAPI) (Catalog number: D3571; Invitrogen), and then covered with coverslips. All slices were evaluated under a Nikon 50i microscope in the dark, and ITGA7 expression was semi‐quantitatively assessed by using the following intensity categories: 0, no staining; 1, weak but detectable staining; 2, moderate or distinct staining; and 3, intense staining. A histological score (HSCORE) was derived from the formula HSCORE = ΣPi(i + 1), where i represents the intensity scores, and Pi is the corresponding percentage of the cells. 
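The HSCORE computation can be written down directly from the formula. This sketch assumes Pi is supplied as a fraction (0-1) of cells at each scored intensity; the staining fractions below are hypothetical, and only the 0.7 cutoff comes from the text.

```python
def hscore(intensity_fractions):
    """HSCORE = sum of Pi * (i + 1) over intensity categories.

    intensity_fractions maps an intensity score i (0-3) to Pi, the
    fraction of cells showing that intensity (assumed on a 0-1 scale).
    """
    return sum(p * (i + 1) for i, p in intensity_fractions.items())

def itga7_expression(score, cutoff=0.7):
    """Dichotomize by the study's cutoff: high if HSCORE > 0.7, else low."""
    return "high" if score > cutoff else "low"

# Hypothetical tumor: 10% weak, 15% moderate, 5% intense staining.
score = hscore({1: 0.10, 2: 0.15, 3: 0.05})
print(score, itga7_expression(score))  # 0.85 high
```
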
According to the HSCORE, ITGA7 high expression was defined as HSCORE > 0.7, and the ITGA7 low expression was defined as HSCORE ≤ 0.7.10 Furthermore, RNA was extracted from the FFPE tissue and detected by real‐time quantitative polymerase chain reaction (RT‐qPCR).", "Control shRNA and ITGA7 shRNA plasmids were established by Shanghai GenePharma Bio‐Tech Company using pEX‐2 vectors. Then, plasmids were transfected into MCF7 cells as control group and ITGA7 knockdown group. ITGA7 protein and mRNA expressions were subsequently detected by Western blot and RT‐qPCR at 24 hours post‐transfection to determine the transfection success. Human breast cancer cell line MCF7 was purchased from Cell Resource Center of Shanghai Institute of Life Sciences, Chinese Academy of Sciences, and cultured in 90% MEM medium (Catalog number: 12571071; Gibco) with 10% fetal bovine serum (Catalog number: 10099141; Gibco) under 95% air and 5% CO2 at 37℃. Sequences for ITGA7 shRNA were as follows: forward 5′‐CACCGCTGCCCACTCTACAGCTTTTCGAAAAAAGCTGTAGAGTGGGCAGC‐3′, reverse 5′‐AAAAGCTGCCCACTCTACAGCTTTTTTCGAAAAGCTGTAGAGTGGGCAGC‐3′, and sequences for control shRNA were as follows: forward 5′‐CACCGTTCTCCGAACGTGTCACGTCGAAACGTGACACGTTCGGAGAA‐3′, reverse 5′‐AAAATTCTCCGAACGTGTCACGTTTCGACGTGACACGTTCGGAGAAC‐3′.", "Cell proliferation, cell apoptosis, and cell invasion were measured by Cell Counting Kit‐8 (CCK‐8), Annexin V/propidium iodide (AV/PI), and Matrigel invasion assays, respectively. Cell viability was detected at 0, 24, 48, and 72 hours post‐transfection using Cell Counting Kit‐8 (Catalog number: CK04; Dojindo) according to the instructions of manufacturer. Cell apoptosis rate was detected at 24 hours post‐transfection using FITC Annexin V Apoptosis Detection Kit II (Catalog number: 556547; BD) according to the instructions of manufacturer. 
Besides, cell invasive ability was detected by Matrigel invasion assay using Matrigel basement membrane matrix (Catalog number: 356234; BD), Transwell filter chamber (Catalog number: 3422; Coring), formaldehyde solution (Catalog number: 818708; Sigma), and crystal violet (Catalog number: 46364; Sigma) according to the method described previously.11\n", "Western blot was performed as the following steps: (a) extraction of total protein was conducted with RIPA lysis and extraction buffer (Catalog number: 89901; Thermo); (b) concentration of total protein was measured by Pierce™ BCA Protein Assay Kit (Catalog number: 23227; Thermo), followed by electrophoresis and transfer to membranes; (c) membranes were blocked and then incubated with primary antibody (rabbit anti‐ITGA7 antibody [Catalog number: ab203254; Abcam]) (1:1000 dilution) and horseradish peroxidase (HRP)‐conjugated secondary antibody (goat anti‐rabbit IgG H&L [HRP] [Catalog number: ab6721; Abcam]) (1:4000 dilution). Finally, the bands were visualized by Pierce™ ECL Plus western blotting substrate (Catalog number: 32132X3; Thermo).", "RT‐PCR was performed as the following steps: (a) with TRIzol reagent (Catalog number: 15596018; Invitrogen), extraction of total RNA was performed; (b) the reverse transcription to cDNA was conducted using PrimeScript™ RT reagent Kit (Catalog number: RR037A; TAKARA); (c) qPCR was performed using QuantiNova SYBR Green PCR Kit (Catalog number: 208054; Qiagen), followed by qPCR amplification. Finally, the results of qPCR were calculated by 2‐ΔΔCt formula. Glyceraldehyde‐3‐phosphate dehydrogenase (GAPDH) was used as the internal reference. And information of primers used was as follows: ITGA7, forward (5′‐>3′): GCCACTCTGCCTGTCCAATG, reverse (5′‐>3′): GGAGGTGCTAAGGATGAGGTAGA; GAPDH, forward (5′‐>3′): GAGTCCACTGGCGTCTTCAC, reverse, ATCTTGAGGCTGTTGTCATACTTCT.", "Data were displayed as mean (standard deviation), median (interquartile range), or count (percentage). 
Difference between two groups was determined by Wilcoxon rank‐sum test, t test, chi‐square test, or log‐rank test (for survival analysis). Survival curves were constructed with the Kaplan‐Meier method. The influence of each variable on survival was examined by the univariate and multivariate Cox's proportional hazards regression analyses. All statistical analyses were performed with SPSS 19.0 software (IBM Corporation). A P value < .05 was considered statistically significant.", " Study flow A total of 395 breast cancer patients who underwent surgical resection were retrospectively screened in this study, while 181 of them were excluded, including 69 patients with no preserved tumor tissue, 45 patients with incomplete clinical data, 37 patients who underwent neoadjuvant therapy, 21 patients with relapsed or secondary cancer, and nine patients with other malignancies (Figure 1). Subsequently, among the 214 eligible patients, 23 from whom informed consent could not be obtained were excluded, and finally 191 patients were reviewed and analyzed in this study.\nStudy flow\nA total of 395 breast cancer patients who underwent surgical resection were retrospectively screened in this study, while 181 of them were excluded, including 69 patients with no preserved tumor tissue, 45 patients with incomplete clinical data, 37 patients who underwent neoadjuvant therapy, 21 patients with relapsed or secondary cancer, and nine patients with other malignancies (Figure 1). Subsequently, among the 214 eligible patients, 23 from whom informed consent could not be obtained were excluded, and finally 191 patients were reviewed and analyzed in this study.\nStudy flow\n Baseline characteristics A total of 191 breast cancer patients were enrolled, with a mean age of 54.3 ± 13.6 years and a median age of 53.0 (45.0‐64.0) years (Table 1). For tumor size, the mean value was 3.2 ± 1.7 cm, and the median value was 3.0 (2.0‐4.0) cm. 
Regarding the disease stage, the numbers of patients with TNM I, TNM II, as well as TNM III were 27 (14.1%), 119 (62.3%), and 45 (23.6%), respectively. As for pathological grade, the numbers of patients with grade G1, G2, and G3 were 42 (22.0%), 124 (64.9%), and 25 (13.1%), respectively. The detailed information about other baseline characteristics is presented in Table 1.\nBaseline characteristics of breast cancer patients\nAbbreviations: ER: estrogen receptor; HER‐2: human epidermal growth factor receptor 2; IQR: interquartile range; N: node; PR: progesterone receptor; SD: standard deviation; T: tumor; TNM: tumor‐node metastasis.\nA total of 191 breast cancer patients were enrolled, with the mean age of 54.3 ± 13.6 years and the median age of 53.0 (45.0‐64.0) years (Table 1). For tumor size, the mean value was 3.2 ± 1.7 cm, and the median value was 3.0 (2.0‐4.0) cm. Regarding the disease stage, the numbers of patients with TNM I, TNM II, as well as TNM III were 27 (14.1%), 119 (62.3%), and 45 (23.6%), respectively. As for pathological grade, the numbers of patients with grade G1, G2, and G3 were 42 (22.0%), 124 (64.9%), and 25 (13.1%), respectively. The detailed information about other baseline characteristics is presented in Table 1.\nBaseline characteristics of breast cancer patients\nAbbreviations: ER: estrogen receptor; HER‐2: human epidermal growth factor receptor 2; IQR: interquartile range; N: node; PR: progesterone receptor; SD: standard deviation; T: tumor; TNM: tumor‐node metastasis.\n ITGA7 expression in breast cancer patients Examples of tumor ITGA7 high expression and ITGA7 low expression are shown in Figure 2A. In totally 191 patients, 92 (48.2%) patients presented with ITGA7 high expression, and 99 (51.8%) patients presented with ITGA7 low expression (Figure 2B).\nITGA7 expression detected by IF. Examples of ITGA7 high expression and ITGA7 low expression detected by IF (A). 
There were 92 (48.2%) patients with ITGA7 high expression and 99 (51.8%) patients with ITGA7 low expression (B). ITGA7, integrin α7; IF, immunofluorescence\nExamples of tumor ITGA7 high expression and ITGA7 low expression are shown in Figure 2A. Among all 191 patients, 92 (48.2%) presented with ITGA7 high expression, and 99 (51.8%) presented with ITGA7 low expression (Figure 2B).\nITGA7 expression detected by IF. Examples of ITGA7 high expression and ITGA7 low expression detected by IF (A). There were 92 (48.2%) patients with ITGA7 high expression and 99 (51.8%) patients with ITGA7 low expression (B). ITGA7, integrin α7; IF, immunofluorescence\n Correlation of ITGA7 expression with clinical characteristics in breast cancer patients ITGA7 protein high expression was associated with elevated T stage (P = .004), increased TNM stage (P = .038), and higher pathological grade (P = .017) in breast cancer patients (Table 2), whereas no correlation of ITGA7 protein expression with age (P = .395), tumor size (P = .661), N stage (P = .131), ER (P = .584), PR (P = .442), and HER‐2 (P = .915) was observed. Meanwhile, ITGA7 mRNA high expression was associated with increased T stage (P = .002), elevated TNM stage (P = .017), and higher pathological grade (P = .013). These data suggested that ITGA7 expression was positively correlated with T stage, TNM stage, and pathological grade in breast cancer patients.\nCorrelation of ITGA7 expression with clinicopathological characteristics\nDifference between two groups was determined by Wilcoxon rank‐sum test or chi‐square test. 
P value < .05 was considered significant.\nAbbreviations: ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; IQR, interquartile range; ITGA7, integrin α7; N, node; PR, progesterone receptor; SD, standard deviation; T, tumor; TNM, tumor‐node metastasis.\nHigh or low expression was classified according to the median value of ITGA7 mRNA relative expression.\n Correlation of ITGA7 expression with OS in breast cancer patients K‐M curves showed that ITGA7 protein high expression was associated with shorter OS (P < .001) (Figure 3A); moreover, ITGA7 mRNA high expression was correlated with worse OS in breast cancer patients (P = .009) (Figure 3B).\nOS in ITGA7 high expression patients and ITGA7 low expression patients.
OS was remarkably shorter in ITGA7 protein high expression patients than in ITGA7 low expression patients (A). OS was shorter in ITGA7 mRNA high expression patients than in ITGA7 low expression patients (B). K‐M curves were used to display OS. Comparison of OS between the two groups was determined by log‐rank test. P < .05 was considered significant. OS, overall survival; ITGA7, integrin α7; K‐M curves, Kaplan‐Meier curves\n Analysis of factors affecting OS in breast cancer patients Univariate Cox's regression showed that ITGA7 high expression (P < .001) was associated with shorter OS, and larger tumor size (P < .001), higher T stage (P < .001), higher N stage (P < .001), higher TNM stage (P < .001), and higher pathological grade (P < .001) were also associated with worse OS in breast cancer patients (Table 3), whereas ER positivity (P = .010) and PR positivity (P = .022) were correlated with longer OS in breast cancer patients.
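The Kaplan-Meier comparisons above rest on the log-rank test. A minimal pure-Python sketch is given below; the survival times are simulated (exponential, with the "high" group given systematically shorter survival) and are not patient data.

```python
# Two-group log-rank test, written out from its definition (observed minus
# expected deaths, with the hypergeometric variance at each event time).
import math
import random

def logrank(time_a, event_a, time_b, event_b):
    """Return (chi-square statistic, P value) for a two-group log-rank test, 1 df."""
    data = ([(t, e, 0) for t, e in zip(time_a, event_a)]
            + [(t, e, 1) for t, e in zip(time_b, event_b)])
    o_minus_e = 0.0
    var = 0.0
    for t in sorted({ti for ti, ei, _ in data if ei}):
        at_risk = [g for ti, _, g in data if ti >= t]   # group labels still at risk
        n = len(at_risk)
        n1 = sum(at_risk)                               # group-1 subjects at risk
        d = sum(1 for ti, ei, _ in data if ti == t and ei)
        d1 = sum(1 for ti, ei, g in data if ti == t and ei and g == 1)
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi2 = o_minus_e ** 2 / var
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square with 1 df
    return chi2, p

# Illustrative data: the "high" group has roughly 3-fold higher hazard
random.seed(1)
os_low = [random.expovariate(1 / 60) for _ in range(50)]   # months, all events
os_high = [random.expovariate(1 / 20) for _ in range(50)]
chi2, p = logrank(os_low, [1] * 50, os_high, [1] * 50)
print(f"log-rank chi2 = {chi2:.2f}, P = {p:.4g}")
```

For one degree of freedom the chi-square survival function reduces to `erfc(sqrt(x/2))`, so no statistics library is needed.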
Furthermore, the multivariate Cox's regression analysis revealed that ITGA7 high expression was an independent predictive factor for poorer OS (P = .006), and higher pathological grade also independently predicted unfavorable OS (P = .001) in breast cancer patients.\nUnivariate and multivariate Cox's proportional hazards regression model analyses of factors affecting OS\n\nP value < .05 was considered significant.\nAbbreviations: CI, confidence interval; ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; HR, hazard ratio; ITGA7, integrin α7; N, node; OS, overall survival; PR, progesterone receptor; T, tumor; TNM, tumor‐node metastasis.\n Effect of ITGA7 knockdown on cell proliferation, cell apoptosis, and cell invasion in MCF7 cells To assess the effect of ITGA7 knockdown on cell functions in breast cancer cells, control shRNA and ITGA7 shRNA plasmids were constructed and transfected into MCF7 cells. At 24 hours after transfection, ITGA7 mRNA (P < .01) and ITGA7 protein expression were reduced in the ITGA7 knockdown group compared to the control group (Figure 4A,B). Cell proliferation was reduced in the ITGA7 knockdown group at 48 hours (P < .05) and 72 hours (P < .05) compared to the control group (Figure 4C). The cell apoptosis rate was elevated in the ITGA7 knockdown group at 24 hours compared to the control group (P < .01) (Figure 4D,E). Additionally, the invasive cell count was lower in the ITGA7 knockdown group than in the control group (P < .01) (Figure 4F,G). These data indicated that ITGA7 knockdown repressed cell proliferation and invasion but enhanced cell apoptosis in MCF7 cells.\nCCK‐8, AV/PI, and Matrigel invasion assays. ITGA7 mRNA expression was decreased in ITGA7 knockdown group compared to control group (A). ITGA7 protein expression was lower in ITGA7 knockdown group compared to control group (B). Cell viability was reduced in ITGA7 knockdown group compared to control group at 48 h and 72 h (C).
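The hazard ratios in Table 3 come from Cox proportional hazards models. Below is a didactic sketch of a univariate Cox fit for a binary covariate (high vs low expression), maximising the Breslow partial log-likelihood over a grid of log-HR values rather than using a production fitter; the survival times are simulated, not the study's data.

```python
# Grid-search maximum of the Breslow partial likelihood for one binary covariate.
import math
import random

def cox_hr_binary(times, events, group):
    """Estimate the univariate Cox hazard ratio for a 0/1 covariate."""
    # Precompute, for each event: subject's covariate and risk-set counts n0, n1.
    per_event = []
    for t, e, g in zip(times, events, group):
        if not e:
            continue
        n1 = sum(1 for ti, gi in zip(times, group) if ti >= t and gi == 1)
        n0 = sum(1 for ti, gi in zip(times, group) if ti >= t and gi == 0)
        per_event.append((g, n0, n1))

    def log_pl(beta):
        return sum(beta * g - math.log(n0 + n1 * math.exp(beta))
                   for g, n0, n1 in per_event)

    grid = [i / 100 for i in range(-300, 301)]  # log-HR from -3 to 3
    return math.exp(max(grid, key=log_pl))

# Illustrative cohort: "high" group with roughly 3-fold higher hazard
random.seed(2)
times = ([random.expovariate(1 / 60) for _ in range(50)]     # group 0 ("low")
         + [random.expovariate(1 / 20) for _ in range(50)])  # group 1 ("high")
events = [1] * 100
group = [0] * 50 + [1] * 50
hr = cox_hr_binary(times, events, group)
print(f"estimated HR (high vs low) = {hr:.2f}")
```

An HR above 1 for the high-expression group corresponds to the shorter OS reported above; a real multivariate analysis would fit several covariates jointly with a dedicated solver.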
Cell apoptosis rate was increased in ITGA7 knockdown group compared to control group (D, E). Cell count using Matrigel invasion assay was reduced in ITGA7 knockdown group compared to control group (F, G). Comparison between two groups was determined by t test. P < .05 was considered significant. *P < .05; **P < .01; ***P < .001. CCK‐8, Cell Counting Kit‐8; AV/PI, Annexin V/propidium iodide; ITGA7, integrin α7", "A total of 395 breast cancer patients who underwent surgical resection were retrospectively screened in this study, and 181 of them were excluded: 69 patients with no preserved tumor tissue, 45 patients with incomplete clinical data, 37 patients who underwent neoadjuvant therapy, 21 patients with relapsed or secondary cancer, and nine patients with other malignancies (Figure 1). Of the 214 eligible patients, 23 from whom informed consent could not be obtained were excluded, leaving 191 patients who were reviewed and analyzed in this study.\nStudy flow", "Our results indicated that (a) ITGA7 high expression correlated with higher T stage, TNM stage, and pathological grade as well as worse OS, and was an independent predictive factor for worse OS in breast cancer patients; and (b) ITGA7 knockdown inhibited cell proliferation and invasion but enhanced cell apoptosis in breast cancer.\nIntegrins are transmembrane cell surface receptors comprising 18 α and 8 β subunits.7 ITGA7, which encodes an integrin subunit, mediates a variety of cell‐cell and cell‐matrix interactions, and it has recently been reported to play a role in cell migration, differentiation, and metastasis in cancers.12, 13 For instance, some studies disclose that ITGA7 knockdown inhibits Hsp27‐mediated cell invasion in HCC cells and decreases S100P‐mediated cell migration in lung cancer cells.14, 15 Besides, a study shows that ITGA7 represses cell apoptosis as well as promotes chemoresistance via activating focal adhesion kinase (FAK)/Akt signaling,
but enhances cell metastasis via inducing epithelial‐mesenchymal transition (EMT) in OSCC cells.7 Another study illustrates that ITGA7 knockdown might inhibit cell proliferation via decreasing phosphorylated AKT and p38 in glioblastoma cells.8 These previous data reveal that ITGA7 may be involved in the initiation and progression of some cancers, and it is able to affect some vital biological functions (such as cell apoptosis, cell invasion, and chemotaxis) of cancer cells via regulating multiple pathways (such as FAK/Akt signaling and phosphatidylinositol 3‐kinase (PI3K)/Akt pathway).\nIn a few observational studies, the role of aberrant ITGA7 expression in some cancers has been disclosed.7, 8 For example, a study shows that ITGA7 overexpression correlates with increased disease grade in glioblastoma patients.8 Another study displays that ITGA7 high expression is remarkably associated with poor differentiation and lymph node metastasis in esophageal squamous cell carcinoma patients.7 These previous studies reveal that ITGA7 high expression correlates with aggravated disease progression in these cancer patients, while the correlation of ITGA7 with disease progression in breast cancer is still inconclusive. In this study, we enrolled 191 breast cancer patients to explore the correlation of ITGA7 with disease progression of breast cancer patients. 
We found that ITGA7 high expression was associated with higher T stage, TNM stage, and pathological grade in breast cancer patients, which might be due to the following reasons: (a) ITGA7 might increase cell proliferation via promoting the phosphorylation of AKT and p38 and enhance cell invasion through interacting with Hsp27 or S100P, thereby facilitating tumor growth and invasion and leading to higher T stage and TNM stage; (b) ITGA7 might drive cancer stem cell features through a FAK/MAPK/ERK‐mediated pathway, thereby enhancing self‐renewal and differentiation capacity, which further led to higher pathological grade in breast cancer patients.7 Furthermore, regarding the predictive value of ITGA7 for treatment outcomes, ITGA7 high expression is reported to be associated with reduced OS in both clear cell renal carcinoma patients and bladder urothelial carcinoma patients.16, 17 Also, ITGA7 expression is negatively correlated with OS in both low‐ and high‐grade glioma patients.8 These data reveal that ITGA7 high expression predicts unfavorable OS in some cancer patients, whereas few studies have examined the predictive value of ITGA7 in breast cancer patients. In our study, we found that ITGA7 independently predicted poor OS in breast cancer patients; the possible reasons might be that (a) ITGA7 enhanced cell proliferation and invasion but repressed cell apoptosis via regulating vital pathways (such as FAK/Akt and PI3K/Akt) to accelerate disease progression, resulting in shorter OS in breast cancer patients; and (b) ITGA7 might enhance chemoresistance through activating FAK/Akt signaling, resulting in poorer treatment efficacy and thus worse survival in breast cancer patients.
Additionally, there were some limitations in our study: (a) the sample size (N = 191) was relatively small, and the statistical power might be low; (b) this was a single‐center study, which might lack wide representativeness; and (c) this was a retrospective study, and assessment of ITGA7 expression was restricted to formalin‐fixed, paraffin‐embedded tissues; thus, a further prospective study using fresh samples is needed to verify our results.\nTo explore the mechanisms by which ITGA7 affects cancer cell functions, several in vivo and in vitro experiments have been performed.7, 8, 9 For example, ITGA7 is overexpressed in a highly metastatic human pancreatic carcinoma line (SW1990 HM) compared to the parental human pancreatic carcinoma line (SW1990) and enhances cell invasion.9 In addition, ITGA7 knockdown reduces cell invasion in glioblastoma cells, and in vivo, tumor growth is reduced in a glioblastoma mouse model treated with anti‐ITGA7 compared to control mice.8 Moreover, ITGA7 promotes cell mobility, migration, and invasion but represses cell apoptosis in esophageal squamous cell carcinoma cells.7 According to these data, ITGA7 functions as an oncogene in these cancer cells, while the role of ITGA7 in breast cancer cells is rarely reported. To investigate the effect of ITGA7 on cell activities, we conducted CCK‐8, AV/PI, and Matrigel invasion assays in MCF‐7 cells with ITGA7 knockdown, and we found that ITGA7 knockdown repressed cell proliferation, promoted cell apoptosis, and reduced cell invasion in MCF‐7 cells. In addition, at 24 hours post‐transfection, the cell invasion rate was reduced in the ITGA7 knockdown group compared to the control group, while no difference in cell viability existed between the two groups, indicating that the reduction in cell invasion was not due to a loss of viability caused by ITGA7 knockdown.
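The between-group comparisons of these assay readouts were made by t test. A minimal sketch is shown below with invented CCK-8 absorbance (OD450) replicate values, using Welch's unequal-variance form as one reasonable choice (the article does not specify the variant):

```python
# Welch's t test on made-up CCK-8 OD450 readings (three replicate wells/group).
import numpy as np
from scipy import stats

od_control = np.array([1.21, 1.18, 1.25])     # illustrative control wells at 72 h
od_knockdown = np.array([0.92, 0.88, 0.95])   # illustrative ITGA7-knockdown wells
t_stat, p_val = stats.ttest_ind(od_control, od_knockdown, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_val:.4f}")
```

With only three replicates per group, the t test is sensitive to outliers, which is one reason such assays are usually repeated independently.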
Our results expanded the understanding of the underlying mechanisms of ITGA7 in breast cancer cells and suggested that ITGA7 knockdown might serve as an anti‐tumor approach through repressing cell proliferation and invasion while enhancing cell apoptosis.\nIn conclusion, ITGA7 high expression correlates with higher T stage, TNM stage, and pathological grade as well as worse OS, and its knockdown enhances cell apoptosis but inhibits cell proliferation and invasion in breast cancer." ]
[ null, "materials-and-methods", null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion" ]
[ "apoptosis", "breast cancer", "integrin α7", "proliferation", "survival" ]
INTRODUCTION: Breast cancer ranks as the most common cancer and the leading cause of cancer death among females worldwide, accounting for an estimated 1 700 000 new cases and 520 000 deaths according to 2015 global statistics.1 In China, breast cancer accounts for 268 600 new cases and 69 500 deaths in females.1, 2 With developments in medical technology, various treatment options have been applied in breast cancer patients (such as surgery, chemotherapy, endocrine therapy, and targeted therapy).3, 4 Early breast cancer is considered potentially curable with these measures, whereas the efficacy of currently available treatments remains limited for metastatic disease, which is responsible for 90% of deaths from breast cancer.3, 4, 5 Thus, exploration of novel treatment targets as well as convincing prognostic biomarkers is of great importance for the management of breast cancer progression. Integrins are a large family of heterodimeric cell surface receptors that regulate cell‐cell and cell‐extracellular matrix interactions.6 Integrin α7 (ITGA7) is a gene localized on chromosome 12q13 and composed of at least 27 exons spanning a region of around 22.5 kb; it encodes the receptor for the extracellular matrix (ECM) protein laminin and forms a heterodimer with integrin β1.7 Recently, the functions of ITGA7 in cancers have attracted increasing attention.
Several previous studies have disclosed that ITGA7 is upregulated and correlates with adverse clinicopathological characteristics in some cancers (such as esophageal squamous cell carcinoma and clear cell renal cell carcinoma).7, 8 Moreover, some in vitro experiments have shown that ITGA7 serves as an oncogene in different cancer cells (such as glioblastoma and pancreatic carcinoma) by affecting cell proliferation and invasion.8, 9 Considering these implications of the promotive effect of ITGA7 in different cancers, we speculated that ITGA7 might also contribute to the progression of breast cancer and might be a potential treatment target, while relevant evidence is still limited. Thus, we conducted this study to investigate the correlation of ITGA7 expression with clinicopathological characteristics and overall survival (OS) in breast cancer patients and to further explore the inhibitory effect of its knockdown on breast cancer cell activities in vitro. MATERIALS AND METHODS: Patients A total of 191 breast cancer patients who underwent resection from January 2014 to December 2016 were reviewed in our study. The screening criteria were as follows: (a) diagnosed as primary breast cancer by clinical and histopathological examinations; (b) underwent surgical resection; (c) formalin‐fixed, paraffin‐embedded tumor tissue was appropriately preserved and available; and (d) clinical data were complete. The following patients were excluded: (a) relapsed or secondary cancer; (b) underwent neoadjuvant therapy; and (c) suffering from other malignancies. Approval for this study was obtained from the Ethics Committee of GanSu Provincial Cancer Hospital, and verbal (with recording) or written informed consents were collected from the included patients or their guardians. Data collection The screening and data retrieval were conducted in June 2018, and patients' clinical data including age, tumor size, T stage, N stage, M stage, tumor‐node metastasis (TNM) stage, pathological grade, estrogen receptor (ER) status, progesterone receptor (PR) status, human epidermal growth factor receptor‐2 (HER‐2) status, and survival records were collected. Pathological grade was classified as grade 1 (G1): well differentiated; grade 2 (G2): moderately differentiated; and grade 3 (G3): poorly differentiated. TNM stage was assessed in accordance with the American Joint Committee on Cancer (AJCC) Staging System for Breast Cancer (7th edition). The last follow‐up date was June 30, 2018, and OS was calculated from the date of surgical resection to the date of death or last visit. Sample collection and ITGA7 expression All formalin‐fixed, paraffin‐embedded tumor tissues were collected from the Pathology Department after approval by the hospital. Specimens were cut into 5‐µm slices, dried at 65℃ for 3 hours, deparaffinized in xylene (Catalog number: 95682; Sigma), and rehydrated through graded ethanol. The slices were then cleared with polybutylene terephthalate and soaked in 1% bovine serum albumin (BSA) (Catalog number: A1933; Sigma) + 0.1% Triton X‐100 (Catalog number: T9284; Sigma) for 30 minutes. Next, slices were quenched with fresh 3% hydrogen peroxide (Catalog number: 323381; Sigma) to inhibit endogenous tissue peroxidase activity, and antigen retrieval was performed using a microwave. After blocking with 10% goat serum (Catalog number: 50062Z; Thermo), the slices were incubated with rabbit anti‐ITGA7 antibody (Catalog number: ab203254; Abcam) at a dilution of 1:500 in buffer at 4℃ overnight. The next day, slices were washed in buffer and incubated with Alexa Fluor® 488 conjugate‐labeled antibody against rabbit IgG (Catalog number: #4412; CST) at a dilution of 1:500. After that, the slices were washed with phosphate‐buffered saline (PBS), stained with 2‐(4‐amidinophenyl)‐6‐indolecarbamidine dihydrochloride (DAPI) (Catalog number: D3571; Invitrogen), and covered with coverslips. All slices were evaluated with a 50i Nikon microscope in the dark, and ITGA7 expression was semi‐quantitatively assessed using the following intensity categories: 0, no staining; 1, weak but detectable staining; 2, moderate or distinct staining; and 3, intense staining.
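The scoring scheme based on these intensity categories, HSCORE = ΣPi(i + 1) with high expression defined as HSCORE > 0.7, can be sketched as follows. The intensity fractions in the example are invented:

```python
# HSCORE from per-intensity cell fractions, following the formula in the text.
def hscore(fractions):
    """fractions: mapping intensity score i (0-3) -> fraction of cells Pi."""
    return sum(p * (i + 1) for i, p in fractions.items())

# Invented example: 40% unstained, 30% weak, 20% moderate, 10% intense staining
score = hscore({0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1})
label = "high" if score > 0.7 else "low"   # the study's cut-off
print(f"HSCORE = {score:.2f} -> ITGA7 {label} expression")
```

The exact percentage convention for Pi (fraction vs percent, all cells vs stained cells) follows the cited scoring reference, so the numeric scale of the cut-off should be read against that convention.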
A histological score (HSCORE) was derived from the formula HSCORE = ΣPi(i + 1), where i represents the intensity score and Pi is the corresponding percentage of cells. According to the HSCORE, ITGA7 high expression was defined as HSCORE > 0.7, and ITGA7 low expression was defined as HSCORE ≤ 0.7.10 Furthermore, RNA was extracted from the FFPE tissue and detected by real‐time quantitative polymerase chain reaction (RT‐qPCR). ITGA7 knockdown in breast cancer cell line Control shRNA and ITGA7 shRNA plasmids were constructed by Shanghai GenePharma Bio‐Tech Company using pEX‐2 vectors. The plasmids were then transfected into MCF7 cells as the control group and the ITGA7 knockdown group. ITGA7 protein and mRNA expression was subsequently detected by Western blot and RT‐qPCR at 24 hours post‐transfection to confirm transfection success. The human breast cancer cell line MCF7 was purchased from the Cell Resource Center of Shanghai Institute of Life Sciences, Chinese Academy of Sciences, and cultured in 90% MEM medium (Catalog number: 12571071; Gibco) with 10% fetal bovine serum (Catalog number: 10099141; Gibco) under 95% air and 5% CO2 at 37℃. Sequences for ITGA7 shRNA were as follows: forward 5′‐CACCGCTGCCCACTCTACAGCTTTTCGAAAAAAGCTGTAGAGTGGGCAGC‐3′, reverse 5′‐AAAAGCTGCCCACTCTACAGCTTTTTTCGAAAAGCTGTAGAGTGGGCAGC‐3′; sequences for control shRNA were as follows: forward 5′‐CACCGTTCTCCGAACGTGTCACGTCGAAACGTGACACGTTCGGAGAA‐3′, reverse 5′‐AAAATTCTCCGAACGTGTCACGTTTCGACGTGACACGTTCGGAGAAC‐3′. Detection of cell proliferation, cell apoptosis, and cell invasion Cell proliferation, cell apoptosis, and cell invasion were measured by Cell Counting Kit‐8 (CCK‐8), Annexin V/propidium iodide (AV/PI), and Matrigel invasion assays, respectively. Cell viability was detected at 0, 24, 48, and 72 hours post‐transfection using Cell Counting Kit‐8 (Catalog number: CK04; Dojindo) according to the manufacturer's instructions. Cell apoptosis rate was detected at 24 hours post‐transfection using the FITC Annexin V Apoptosis Detection Kit II (Catalog number: 556547; BD) according to the manufacturer's instructions. Besides, cell invasive ability was detected by Matrigel invasion assay using Matrigel basement membrane matrix (Catalog number: 356234; BD), a Transwell filter chamber (Catalog number: 3422; Corning), formaldehyde solution (Catalog number: 818708; Sigma), and crystal violet (Catalog number: 46364; Sigma) according to the method described previously.11 Western blot Western blot was performed as follows: (a) total protein was extracted with RIPA lysis and extraction buffer (Catalog number: 89901; Thermo); (b) total protein concentration was measured with the Pierce™ BCA Protein Assay Kit (Catalog number: 23227; Thermo), followed by electrophoresis and transfer to membranes; (c) membranes were blocked and then incubated with primary antibody (rabbit anti‐ITGA7 antibody [Catalog number: ab203254; Abcam]; 1:1000 dilution) and horseradish peroxidase (HRP)‐conjugated secondary antibody (goat anti‐rabbit IgG H&L [HRP] [Catalog number: ab6721; Abcam]; 1:4000 dilution). Finally, the bands were visualized with Pierce™ ECL Plus western blotting substrate (Catalog number: 32132X3; Thermo).
Western blot was performed as the following steps: (a) extraction of total protein was conducted with RIPA lysis and extraction buffer (Catalog number: 89901; Thermo); (b) concentration of total protein was measured by Pierce™ BCA Protein Assay Kit (Catalog number: 23227; Thermo), followed by electrophoresis and transfer to membranes; (c) membranes were blocked and then incubated with primary antibody (rabbit anti‐ITGA7 antibody [Catalog number: ab203254; Abcam]) (1:1000 dilution) and horseradish peroxidase (HRP)‐conjugated secondary antibody (goat anti‐rabbit IgG H&L [HRP] [Catalog number: ab6721; Abcam]) (1:4000 dilution). Finally, the bands were visualized by Pierce™ ECL Plus western blotting substrate (Catalog number: 32132X3; Thermo). RT‐qPCR RT‐PCR was performed as the following steps: (a) with TRIzol reagent (Catalog number: 15596018; Invitrogen), extraction of total RNA was performed; (b) the reverse transcription to cDNA was conducted using PrimeScript™ RT reagent Kit (Catalog number: RR037A; TAKARA); (c) qPCR was performed using QuantiNova SYBR Green PCR Kit (Catalog number: 208054; Qiagen), followed by qPCR amplification. Finally, the results of qPCR were calculated by 2‐ΔΔCt formula. Glyceraldehyde‐3‐phosphate dehydrogenase (GAPDH) was used as the internal reference. And information of primers used was as follows: ITGA7, forward (5′‐>3′): GCCACTCTGCCTGTCCAATG, reverse (5′‐>3′): GGAGGTGCTAAGGATGAGGTAGA; GAPDH, forward (5′‐>3′): GAGTCCACTGGCGTCTTCAC, reverse, ATCTTGAGGCTGTTGTCATACTTCT. RT‐PCR was performed as the following steps: (a) with TRIzol reagent (Catalog number: 15596018; Invitrogen), extraction of total RNA was performed; (b) the reverse transcription to cDNA was conducted using PrimeScript™ RT reagent Kit (Catalog number: RR037A; TAKARA); (c) qPCR was performed using QuantiNova SYBR Green PCR Kit (Catalog number: 208054; Qiagen), followed by qPCR amplification. Finally, the results of qPCR were calculated by 2‐ΔΔCt formula. 
Glyceraldehyde‐3‐phosphate dehydrogenase (GAPDH) was used as the internal reference. And information of primers used was as follows: ITGA7, forward (5′‐>3′): GCCACTCTGCCTGTCCAATG, reverse (5′‐>3′): GGAGGTGCTAAGGATGAGGTAGA; GAPDH, forward (5′‐>3′): GAGTCCACTGGCGTCTTCAC, reverse, ATCTTGAGGCTGTTGTCATACTTCT. Statistical analysis Data were displayed as mean (standard deviation), median (interquartile range), or count (percentage). Difference between two groups was determined by Wilcoxon rank‐sum test, t test, chi‐square test, or log‐rank test (for survival analysis). Survival curves were constructed with the Kaplan‐Meier method. The influence of each variable on survival was examined by the univariate and multivariate Cox's proportional hazards regression analyses. All statistical analyses were performed with SPSS 19.0 software (IBM Corporation). A P value < .05 was considered statistically significant. Data were displayed as mean (standard deviation), median (interquartile range), or count (percentage). Difference between two groups was determined by Wilcoxon rank‐sum test, t test, chi‐square test, or log‐rank test (for survival analysis). Survival curves were constructed with the Kaplan‐Meier method. The influence of each variable on survival was examined by the univariate and multivariate Cox's proportional hazards regression analyses. All statistical analyses were performed with SPSS 19.0 software (IBM Corporation). A P value < .05 was considered statistically significant. Patients: A total of 191 breast cancer patients underwent resection from January 2014 to December 2016 were reviewed in our study. The screening criteria were as follows: (a) diagnosed as primary breast cancer by clinical and histopathological examinations; (b) underwent surgical resection; (c) formalin‐fixed, paraffin‐embedded tumor tissue was appropriately preserved and available; and (d) clinical data were complete. 
The following patients were excluded: (a) relapsed or secondary cancer; (b) received neoadjuvant therapy; and (c) suffering from other malignancies. Approval for this study was obtained from the Ethics Committee of GanSu Provincial Cancer Hospital, and verbal (with recording) or written informed consent was collected from the included patients or their guardians.

Data collection: Screening and data retrieval were conducted in June 2018, and patients' clinical data, including age, tumor size, T stage, N stage, M stage, tumor‐node metastasis (TNM) stage, pathological grade, estrogen receptor (ER) status, progesterone receptor (PR) status, and human epidermal growth factor receptor‐2 (HER‐2) status, as well as survival records, were collected. Pathological grade was classified as grade 1 (G1), well differentiated; grade 2 (G2), moderately differentiated; and grade 3 (G3), poorly differentiated. TNM stage was assessed in accordance with the American Joint Committee on Cancer (AJCC) Staging System for Breast Cancer (7th edition). The last follow‐up date was June 30, 2018, and overall survival (OS) was calculated from the date of surgical resection to the date of death or last visit.

Sample collection and ITGA7 expression: All formalin‐fixed, paraffin‐embedded tumor tissues were collected from the Pathology Department after approval by the hospital. Specimens were cut into 5‐µm slices, dried at 65℃ for 3 hours, deparaffinized in xylene (Catalog number: 95682; Sigma), and rehydrated through a graded ethanol series. The slices were then cleared with polybutylene terephthalate and soaked in 1% bovine serum albumin (BSA) (Catalog number: A1933; Sigma) + 0.1% Triton X‐100 (Catalog number: T9284; Sigma) for 30 minutes. Next, the slices were quenched with fresh 3% hydrogen peroxide (Catalog number: 323381; Sigma) to inhibit endogenous tissue peroxidase activity, and antigen retrieval was performed in a microwave.
After blocking with 10% goat serum (Catalog number: 50062Z; Thermo), the slices were incubated with rabbit anti‐ITGA7 antibody (Catalog number: ab203254; Abcam) at a dilution of 1:500 in buffer at 4℃ overnight. The next day, the slices were washed in buffer and incubated with an Alexa Fluor® 488‐conjugated antibody against rabbit IgG (Catalog number: #4412; CST) at a dilution of 1:500. The slices were then washed with phosphate‐buffered saline (PBS), stained with 2‐(4‐amidinophenyl)‐6‐indolecarbamidine dihydrochloride (DAPI) (Catalog number: D3571; Invitrogen), and covered with coverslips. All slices were evaluated under a Nikon 50i microscope in the dark, and ITGA7 expression was semi‐quantitatively assessed using the following intensity categories: 0, no staining; 1, weak but detectable staining; 2, moderate or distinct staining; and 3, intense staining. A histological score (HSCORE) was derived from the formula HSCORE = ΣPi(i + 1), where i represents the intensity score and Pi the corresponding percentage of cells. According to the HSCORE, ITGA7 high expression was defined as HSCORE > 0.7, and ITGA7 low expression as HSCORE ≤ 0.7.10 Furthermore, RNA was extracted from the formalin‐fixed, paraffin‐embedded (FFPE) tissue and quantified by real‐time quantitative polymerase chain reaction (RT‐qPCR).

ITGA7 knockdown in breast cancer cell line: Control shRNA and ITGA7 shRNA plasmids were constructed by Shanghai GenePharma Bio‐Tech Company using pEX‐2 vectors. The plasmids were then transfected into MCF7 cells to generate a control group and an ITGA7 knockdown group. ITGA7 protein and mRNA expression was detected by Western blot and RT‐qPCR at 24 hours post‐transfection to confirm transfection success.
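As an illustrative aside, the HSCORE scoring scheme described above (HSCORE = ΣPi(i + 1), with the study's 0.7 cutoff) can be sketched in a few lines of Python; the staining fractions below are hypothetical and not taken from the study:

```python
# Minimal sketch of the HSCORE computation: HSCORE = sum(Pi * (i + 1)),
# where i is the staining intensity score (0-3) and Pi is the fraction
# of cells at that intensity. The 0.7 cutoff follows the study's definition.

def hscore(fractions):
    """fractions: dict mapping intensity score i -> fraction Pi of cells."""
    return sum(p * (i + 1) for i, p in fractions.items())

def classify(score, cutoff=0.7):
    """ITGA7 'high' if HSCORE > cutoff, else 'low'."""
    return "high" if score > cutoff else "low"

# Hypothetical slide: 10% weak, 5% moderate, 5% intense staining.
score = hscore({1: 0.10, 2: 0.05, 3: 0.05})
print(round(score, 2), classify(score))  # 0.55 low
```

A slide with more moderate/intense staining would push the score above the 0.7 threshold and be classified as high expression.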
The human breast cancer cell line MCF7 was purchased from the Cell Resource Center of the Shanghai Institute of Life Sciences, Chinese Academy of Sciences, and cultured in 90% MEM medium (Catalog number: 12571071; Gibco) with 10% fetal bovine serum (Catalog number: 10099141; Gibco) under 95% air and 5% CO2 at 37℃. The ITGA7 shRNA sequences were forward 5′‐CACCGCTGCCCACTCTACAGCTTTTCGAAAAAAGCTGTAGAGTGGGCAGC‐3′ and reverse 5′‐AAAAGCTGCCCACTCTACAGCTTTTTTCGAAAAGCTGTAGAGTGGGCAGC‐3′; the control shRNA sequences were forward 5′‐CACCGTTCTCCGAACGTGTCACGTCGAAACGTGACACGTTCGGAGAA‐3′ and reverse 5′‐AAAATTCTCCGAACGTGTCACGTTTCGACGTGACACGTTCGGAGAAC‐3′.

Detection of cell proliferation, cell apoptosis, and cell invasion: Cell proliferation, apoptosis, and invasion were measured by Cell Counting Kit‐8 (CCK‐8), Annexin V/propidium iodide (AV/PI), and Matrigel invasion assays, respectively. Cell viability was detected at 0, 24, 48, and 72 hours post‐transfection using Cell Counting Kit‐8 (Catalog number: CK04; Dojindo) according to the manufacturer's instructions. The cell apoptosis rate was detected at 24 hours post‐transfection using the FITC Annexin V Apoptosis Detection Kit II (Catalog number: 556547; BD) according to the manufacturer's instructions.
Cell invasive ability was detected by Matrigel invasion assay using Matrigel basement membrane matrix (Catalog number: 356234; BD), Transwell filter chambers (Catalog number: 3422; Corning), formaldehyde solution (Catalog number: 818708; Sigma), and crystal violet (Catalog number: 46364; Sigma) according to the method described previously.11

Western blot: Western blot was performed as follows: (a) total protein was extracted with RIPA lysis and extraction buffer (Catalog number: 89901; Thermo); (b) total protein concentration was measured with the Pierce™ BCA Protein Assay Kit (Catalog number: 23227; Thermo), followed by electrophoresis and transfer to membranes; (c) membranes were blocked and then incubated with primary antibody (rabbit anti‐ITGA7 antibody [Catalog number: ab203254; Abcam], 1:1000 dilution) and horseradish peroxidase (HRP)‐conjugated secondary antibody (goat anti‐rabbit IgG H&L [HRP] [Catalog number: ab6721; Abcam], 1:4000 dilution). Finally, the bands were visualized with Pierce™ ECL Plus western blotting substrate (Catalog number: 32132X3; Thermo).

RT‐qPCR: RT‐qPCR was performed as follows: (a) total RNA was extracted with TRIzol reagent (Catalog number: 15596018; Invitrogen); (b) reverse transcription to cDNA was conducted with the PrimeScript™ RT reagent Kit (Catalog number: RR037A; TAKARA); (c) qPCR was performed with the QuantiNova SYBR Green PCR Kit (Catalog number: 208054; Qiagen). Results were calculated by the 2−ΔΔCt method, with glyceraldehyde‐3‐phosphate dehydrogenase (GAPDH) as the internal reference. Primer sequences were as follows: ITGA7, forward (5′‐>3′): GCCACTCTGCCTGTCCAATG, reverse (5′‐>3′): GGAGGTGCTAAGGATGAGGTAGA; GAPDH, forward (5′‐>3′): GAGTCCACTGGCGTCTTCAC, reverse (5′‐>3′): ATCTTGAGGCTGTTGTCATACTTCT.
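The 2−ΔΔCt calculation used above can be sketched as follows; the Ct values are hypothetical, with GAPDH serving as the reference gene as in the study:

```python
# Minimal sketch of the 2^-ΔΔCt relative-quantification method.
# Each Ct is a qPCR threshold cycle; a lower Ct means more transcript.

def fold_change(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Fold change of the target gene in a sample relative to the control.

    ct_*_s: sample Ct values (target gene, reference gene);
    ct_*_c: control Ct values (target gene, reference gene).
    """
    delta_ct_sample = ct_target_s - ct_ref_s      # normalize to GAPDH
    delta_ct_control = ct_target_c - ct_ref_c
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** -delta_delta_ct

# Hypothetical Ct values: knockdown shifts the target Ct from 26 to 28,
# i.e. roughly a 4-fold reduction in transcript.
print(fold_change(28.0, 18.0, 26.0, 18.0))  # 0.25, i.e. ~25% of control
```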
Statistical analysis: Data are presented as mean (standard deviation), median (interquartile range), or count (percentage). Differences between two groups were determined by the Wilcoxon rank‐sum test, t test, chi‐square test, or log‐rank test (for survival analysis). Survival curves were constructed with the Kaplan‐Meier method. The influence of each variable on survival was examined by univariate and multivariate Cox proportional hazards regression analyses. All statistical analyses were performed with SPSS 19.0 (IBM Corporation). A P value < .05 was considered statistically significant.

RESULTS:

Study flow: In total, 395 breast cancer patients who underwent surgical resection were retrospectively screened, of whom 181 were excluded: 69 with no preserved tumor tissue, 45 with incomplete clinical data, 37 who had undergone neoadjuvant therapy, 21 with relapsed or secondary cancer, and nine with other malignancies (Figure 1). Of the remaining 214 eligible patients, 23 from whom informed consent could not be obtained were excluded, leaving 191 patients for review and analysis.

Figure 1. Study flow.

Baseline characteristics: A total of 191 breast cancer patients were enrolled, with a mean age of 54.3 ± 13.6 years and a median age of 53.0 (45.0‐64.0) years (Table 1). The mean tumor size was 3.2 ± 1.7 cm, and the median was 3.0 (2.0‐4.0) cm. Regarding disease stage, 27 (14.1%), 119 (62.3%), and 45 (23.6%) patients had TNM stage I, II, and III disease, respectively. As for pathological grade, 42 (22.0%), 124 (64.9%), and 25 (13.1%) patients had G1, G2, and G3 tumors, respectively. Detailed information on the other baseline characteristics is presented in Table 1.
Table 1. Baseline characteristics of breast cancer patients. Abbreviations: ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; IQR, interquartile range; N, node; PR, progesterone receptor; SD, standard deviation; T, tumor; TNM, tumor‐node metastasis.

ITGA7 expression in breast cancer patients: Examples of tumor ITGA7 high expression and ITGA7 low expression are shown in Figure 2A. Of the 191 patients, 92 (48.2%) presented with ITGA7 high expression and 99 (51.8%) with ITGA7 low expression (Figure 2B).

Figure 2. ITGA7 expression detected by IF. Examples of ITGA7 high expression and ITGA7 low expression detected by IF (A). There were 92 (48.2%) patients with ITGA7 high expression and 99 (51.8%) patients with ITGA7 low expression (B). ITGA7, integrin α7; IF, immunofluorescence.

Correlation of ITGA7 expression with clinical characteristics in breast cancer patients: ITGA7 protein high expression was associated with higher T stage (P = .004), higher TNM stage (P = .038), and higher pathological grade (P = .017) in breast cancer patients (Table 2), whereas no correlation of ITGA7 protein expression with age (P = .395), tumor size (P = .661), N stage (P = .131), ER status (P = .584), PR status (P = .442), or HER‐2 status (P = .915) was observed. Likewise, ITGA7 mRNA high expression was associated with higher T stage (P = .002), higher TNM stage (P = .017), and higher pathological grade (P = .013). These data suggest that ITGA7 expression is positively correlated with T stage, TNM stage, and pathological grade in breast cancer patients.

Table 2. Correlation of ITGA7 expression with clinicopathological characteristics. Differences between groups were determined by the Wilcoxon rank‐sum test or chi‐square test. A P value < .05 was considered significant.
Abbreviations: ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; IQR, interquartile range; ITGA7, integrin α7; N, node; PR, progesterone receptor; SD, standard deviation; T, tumor; TNM, tumor‐node metastasis. High and low mRNA expression were classified according to the median value of ITGA7 mRNA relative expression.

Correlation of ITGA7 expression with OS in breast cancer patients: K‐M curves showed that ITGA7 protein high expression was associated with shorter OS (P < .001) (Figure 3A); moreover, ITGA7 mRNA high expression was correlated with worse OS in breast cancer patients (P = .009) (Figure 3B).

Figure 3. OS in ITGA7 high expression patients and ITGA7 low expression patients. OS in ITGA7 protein high expression patients was markedly shorter than in ITGA7 low expression patients (A). OS in ITGA7 mRNA high expression patients was shorter than in ITGA7 low expression patients (B). K‐M curves were used to display OS; comparison of OS between the two groups was determined by the log‐rank test. P < .05 was considered significant. OS, overall survival; ITGA7, integrin α7; K‐M curves, Kaplan‐Meier curves.

Analysis of factors affecting OS in breast cancer patients: Univariate Cox regression showed that ITGA7 high expression (P < .001), larger tumor size (P < .001), higher T stage (P < .001), higher N stage (P < .001), higher TNM stage (P < .001), and higher pathological grade (P < .001) were associated with worse OS in breast cancer patients (Table 3), whereas ER positivity (P = .010) and PR positivity (P = .022) were correlated with longer OS. Furthermore, multivariate Cox regression revealed that ITGA7 high expression was an independent predictive factor for poorer OS (P = .006), and higher pathological grade also independently predicted unfavorable OS (P = .001).
Univariate and multivariate Cox's proportional hazards regression model analyses of factors affecting OS P value < .05 was considered significant. Abbreviations: CI, confidence interval; ER, estrogen receptor; HER‐2, human epidermal growth factor receptor 2; HR, hazard ratio; ITGA7, integrin α7; N, node; OS, overall survival; PR, progesterone receptor; T, tumor; TNM, tumor‐node metastasis. Effect of ITGA7 knockdown on cell proliferation, cell apoptosis, and cell invasion in MCF7 cells: To assess the effect of ITGA7 knockdown on cell functions in breast cancer cells, control shRNA and ITGA7 shRNA plasmids were constructed and transfected into MCF7 cells. At 24 hours after transfection, the expression of ITGA7 mRNA (P < .01) and ITGA7 protein was reduced in the ITGA7 knockdown group compared to the control group (Figure 4A,B). Cell proliferation was reduced in the ITGA7 knockdown group at 48 hours (P < .05) and 72 hours (P < .05) compared to the control group (Figure 4C). The cell apoptosis rate was elevated in the ITGA7 knockdown group at 24 hours compared to the control group (P < .01) (Figure 4D,E). Additionally, the invasive cell count was lower in the ITGA7 knockdown group compared to the control group (P < .01) (Figure 4F,G). These data indicated that ITGA7 knockdown repressed cell proliferation and invasion but enhanced cell apoptosis in MCF7 cells. CCK‐8, AV/PI, and Matrigel invasion assays. ITGA7 mRNA expression was decreased in the ITGA7 knockdown group compared to the control group (A). ITGA7 protein expression was lower in the ITGA7 knockdown group compared to the control group (B). Cell viability was reduced in the ITGA7 knockdown group compared to the control group at 48 h and 72 h (C). The cell apoptosis rate was increased in the ITGA7 knockdown group compared to the control group (D, E). Cell counts in the Matrigel invasion assay were reduced in the ITGA7 knockdown group compared to the control group (F, G). Comparisons between two groups were determined by t test. P < .05 was considered significant. 
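For Cox models such as those above, each reported hazard ratio is exp(beta) for the fitted coefficient beta, and the 95% CI is exp(beta ± 1.96·SE). A minimal sketch with hypothetical coefficient values (not the study's fitted numbers):

```python
import math

def hazard_ratio(beta, se, z=1.96):
    """Convert a Cox regression coefficient and its standard error
    into a hazard ratio with a 95% confidence interval."""
    hr = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return hr, lo, hi

# Hypothetical: beta = 0.8, SE = 0.25 for ITGA7 high vs low expression
hr, lo, hi = hazard_ratio(0.8, 0.25)
```

An HR above 1 with a CI excluding 1 corresponds to a significantly increased hazard of death, which is how "independent predictive factor for poorer OS" is read off a multivariate table.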
*P < .05; **P < .01; ***P < .001. CCK‐8, Cell Counting Kit‐8; AV/PI, Annexin V/propidium iodide; ITGA7, integrin α7. DISCUSSION: Our results indicated that (a) ITGA7 high expression correlates with increased T stage, TNM stage, and pathological grade as well as worse OS, and it was an independent predictive factor for worse OS in breast cancer patients; (b) ITGA7 knockdown inhibited cell proliferation and cell invasion but enhanced cell apoptosis in breast cancer. Integrins are transmembrane cell surface receptors, which comprise 18 α and 8 β subunits.7 ITGA7, encoding a subunit belonging to the integrin family, mediates a variety of cell‐cell and cell‐matrix interactions, and it has recently been reported to play a role in cell migration, cell differentiation, and cell metastasis in cancers.12, 13 For instance, some studies disclose that ITGA7 knockdown inhibits Hsp27‐mediated cell invasion in HCC cells and decreases S100P‐mediated cell migration in lung cancer cells.14, 15 In addition, a study shows that ITGA7 represses cell apoptosis as well as promotes chemoresistance via activating focal adhesion kinase (FAK)/Akt signaling, but enhances cell metastasis via inducing epithelial‐mesenchymal transition (EMT) in OSCC cells.7 Another study illustrates that ITGA7 knockdown might inhibit cell proliferation via decreasing phosphorylated AKT and p38 in glioblastoma cells.8 These previous data reveal that ITGA7 may be involved in the initiation and progression of some cancers, and that it is able to affect some vital biological functions (such as cell apoptosis, cell invasion, and chemotaxis) of cancer cells via regulating multiple pathways (such as FAK/Akt signaling and the phosphatidylinositol 3‐kinase (PI3K)/Akt pathway). 
In a few observational studies, the role of aberrant ITGA7 expression in some cancers has been disclosed.7, 8 For example, a study shows that ITGA7 overexpression correlates with increased disease grade in glioblastoma patients.8 Another study shows that ITGA7 high expression is remarkably associated with poor differentiation and lymph node metastasis in esophageal squamous cell carcinoma patients.7 These previous studies reveal that ITGA7 high expression correlates with aggravated disease progression in these cancer patients, while the correlation of ITGA7 with disease progression in breast cancer is still inconclusive. In this study, we enrolled 191 breast cancer patients to explore the correlation of ITGA7 with disease progression in breast cancer. We found that ITGA7 high expression was associated with raised T stage, increased TNM stage, and elevated pathological grade in breast cancer patients, which might be due to the following reasons: (a) ITGA7 might increase cell proliferation via activating the phosphorylation of AKT and p38, and promote cell invasion through interacting with Hsp27 or S100P, to facilitate tumor growth and tumor invasion, thereby leading to increased T stage as well as TNM stage; (b) ITGA7 drove cancer stem cell features through a FAK/MAPK/ERK‐mediated pathway, thereby enhancing self‐renewal and differentiation abilities, which further led to increased pathological grade in breast cancer patients.7 Furthermore, regarding the predictive value of ITGA7 for the treatment outcomes of cancer patients, ITGA7 high expression is reported to be associated with reduced OS in both clear cell renal carcinoma patients and bladder urothelial carcinoma patients.16, 17 Also, ITGA7 expression is negatively correlated with OS in both low‐ and high‐grade glioma patients.8 These data reveal that ITGA7 high expression predicts unfavorable OS in some cancer patients, while limited studies show the predictive value of ITGA7 in breast cancer patients. 
In our study, we found that ITGA7 independently predicted poor OS in breast cancer patients, and the possible reasons might be: (a) ITGA7 enhanced cell proliferation and invasion but repressed cell apoptosis via regulating some vital pathways (such as FAK/Akt and PI3K/Akt) to accelerate disease progression, thus resulting in shorter OS in breast cancer patients; (b) ITGA7 might enhance chemoresistance through activating FAK/Akt signaling, thus resulting in adverse treatment efficacy and further leading to poor survival in breast cancer patients. Additionally, there were still some limitations in our study: (a) the sample size (N = 191) was relatively small, and the statistical power might be low; (b) this was a single‐center study, which might lack wide representativeness; (c) this was a retrospective study, and assessment of ITGA7 expression was restricted to formalin‐fixed and paraffin‐embedded tissues; thus, a further prospective study using fresh samples is needed to verify our results. To explore the mechanisms by which ITGA7 affects cancer cell functions, several in vivo or in vitro experiments have been performed.7, 8, 9 For example, ITGA7 is overexpressed in a highly metastatic human pancreatic carcinoma line (SW1990 HM) compared to the control human pancreatic carcinoma line (SW1990) and enhances cell invasion.9 In addition, ITGA7 knockdown reduces cell invasion in glioblastoma cells, and in vivo, tumor growth is reduced in a glioblastoma mouse model treated with anti‐ITGA7 compared to control mice.8 Moreover, ITGA7 promotes cell mobility, cell migration, and cell invasion but represses cell apoptosis in esophageal squamous cell carcinoma cells.7 According to these data, ITGA7 functions as an oncogene in these cancer cells, while the role of ITGA7 in breast cancer cells is rarely reported. 
To investigate the effect of ITGA7 on cell activities, we conducted CCK‐8, AV/PI, and Matrigel invasion assays in MCF‐7 cells with ITGA7 knockdown, and we found that ITGA7 knockdown repressed cell proliferation, promoted cell apoptosis, and reduced cell invasion in MCF‐7 cells. In addition, at 24 hours post‐transfection, the cell invasion rate was reduced in the ITGA7 knockdown group compared to the control group, while no difference in cell viability existed between the two groups, indicating that the reduction in cell invasion was not due to a loss of viability caused by ITGA7 knockdown. Our results expanded the understanding of the underlying mechanisms of ITGA7 in breast cancer cells and suggested that ITGA7 knockdown might serve as an anti‐tumor approach through repressing cell proliferation and invasion but enhancing cell apoptosis. In conclusion, ITGA7 high expression correlates with increased T stage, TNM stage, and pathological grade as well as worse OS, and its knockdown enhances cell apoptosis but inhibits cell proliferation and invasion in breast cancer.
Background: This study aimed to investigate the correlation of integrin α7 (ITGA7) expression with clinical/pathological characteristics and overall survival (OS), and the effect of its knockdown on cell activities in breast cancer. Methods: A total of 191 breast cancer patients who underwent surgery were retrospectively reviewed, and ITGA7 expression in tumor tissues was determined by immunofluorescence and real-time quantitative polymerase chain reaction. Patients' clinical/pathological data were recorded, and OS was calculated. In vitro, control shRNA and ITGA7 shRNA plasmids were transfected into MCF7 cells to evaluate the influence of ITGA7 knockdown on cell proliferation, apoptosis, and invasion. Results: Ninety-two (48.2%) patients presented with ITGA7 high expression, and 99 (51.8%) patients presented with ITGA7 low expression. ITGA7 expression was positively correlated with T stage, tumor-node metastasis (TNM) stage, and pathological grade. Kaplan-Meier curves showed that ITGA7 high expression was associated with shorter OS, and multivariate Cox's proportional hazards regression displayed that ITGA7 high expression was an independent predictive factor for poor OS. Moreover, in vitro experiments disclosed that cell proliferation (by Cell Counting Kit-8 assay) and cell invasion (by Matrigel invasion assay) were reduced, while the cell apoptosis rate (by Annexin V/propidium iodide assay) was enhanced by ITGA7 knockdown in MCF-7 cells. Conclusions: Integrin α7 high expression correlates with increased T stage, TNM stage, and pathological grade as well as worse OS, and its knockdown enhances cell apoptosis but inhibits cell proliferation and invasion in breast cancer.
null
null
10,431
301
19
[ "itga7", "patients", "cell", "expression", "cancer", "breast", "breast cancer", "number", "catalog number", "catalog" ]
[ "test", "test" ]
null
null
null
[CONTENT] apoptosis | breast cancer | integrin α7 | proliferation | survival [SUMMARY]
null
[CONTENT] apoptosis | breast cancer | integrin α7 | proliferation | survival [SUMMARY]
null
[CONTENT] apoptosis | breast cancer | integrin α7 | proliferation | survival [SUMMARY]
null
[CONTENT] Antigens, CD | Apoptosis | Biomarkers, Tumor | Breast Neoplasms | Cell Proliferation | Female | Follow-Up Studies | Gene Expression Regulation, Neoplastic | Humans | Integrin alpha Chains | Middle Aged | Neoplasm Invasiveness | Prognosis | RNA, Small Interfering | Retrospective Studies | Survival Rate [SUMMARY]
null
[CONTENT] Antigens, CD | Apoptosis | Biomarkers, Tumor | Breast Neoplasms | Cell Proliferation | Female | Follow-Up Studies | Gene Expression Regulation, Neoplastic | Humans | Integrin alpha Chains | Middle Aged | Neoplasm Invasiveness | Prognosis | RNA, Small Interfering | Retrospective Studies | Survival Rate [SUMMARY]
null
[CONTENT] Antigens, CD | Apoptosis | Biomarkers, Tumor | Breast Neoplasms | Cell Proliferation | Female | Follow-Up Studies | Gene Expression Regulation, Neoplastic | Humans | Integrin alpha Chains | Middle Aged | Neoplasm Invasiveness | Prognosis | RNA, Small Interfering | Retrospective Studies | Survival Rate [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] itga7 | patients | cell | expression | cancer | breast | breast cancer | number | catalog number | catalog [SUMMARY]
null
[CONTENT] itga7 | patients | cell | expression | cancer | breast | breast cancer | number | catalog number | catalog [SUMMARY]
null
[CONTENT] itga7 | patients | cell | expression | cancer | breast | breast cancer | number | catalog number | catalog [SUMMARY]
null
[CONTENT] cell | cancer | breast cancer | breast | deaths | itga7 | cancers | treatment | carcinoma | 000 [SUMMARY]
null
[CONTENT] itga7 | patients | expression | group | os | itga7 knockdown | high | knockdown | compared control | compared [SUMMARY]
null
[CONTENT] itga7 | patients | cell | expression | catalog | catalog number | number | cancer | stage | os [SUMMARY]
null
[CONTENT] ITGA7 [SUMMARY]
null
[CONTENT] Ninety-two | 48.2% ||| ITGA7 | 99 | 51.8% | ITGA7 ||| TNM ||| Kaplan-Meier | ITGA7 | Cox | ITGA7 ||| Matrigel | Annexin V | ITGA7 | MCF-7 [SUMMARY]
null
[CONTENT] ITGA7 ||| 191 | ITGA7 ||| ||| control shRNA | ITGA7 | ITGA7 ||| Ninety-two | 48.2% | ITGA7 | 99 | 51.8% | ITGA7 ||| TNM ||| Kaplan-Meier | ITGA7 | Cox | ITGA7 ||| Matrigel | Annexin V | ITGA7 | MCF-7 ||| Integrin | TNM [SUMMARY]
null
Hypertension: adherence to treatment in rural Bangladesh--findings from a population-based study.
25361723
Poor adherence has been identified as the main cause of failure to control hypertension. Poor adherence to antihypertensive treatment is a significant cardiovascular risk factor, which often remains unrecognized. There are no previous studies that examined adherence with antihypertensive medication or the characteristics of the non-adherent patients in Bangladesh.
BACKGROUND
The study population included 29,960 men and women aged 25 years and older from three rural demographic surveillance sites of the International Center for Diarrheal Disease Research, Bangladesh (icddr,b): Matlab, Abhoynagar, and Mirsarai. Data were collected using a cross-sectional design on diagnosing provider and initial and current treatment. Discontinuation of medication at the time of interview was defined as non-adherence to treatment.
DESIGN
The prevalence of hypertension was 13.67%. Qualified providers diagnosed only 53.5% of the hypertension (MBBS doctors 46.1% and specialized doctors 7.4%). Among the unqualified providers, village doctors diagnosed 40.7%, and others (nurse, health worker, paramedic, homeopath, spiritual healer, and pharmacy man) each diagnosed less than 5%. Of those who started treatment upon being diagnosed with hypertension, 26% discontinued the use of medication. Age, sex, education, wealth, and type of provider were independently associated with non-adherence to medication. More men discontinued the treatment than women (odds ratio [OR] 1.74, confidence interval [CI] 1.48-2.04). Non-adherence was greater when hypertension was diagnosed by unqualified providers (OR 1.52, CI 1.31-1.77). Hypertensive patients of older age, least poor quintile, and higher education were less likely to be non-adherent. Patients with cardiovascular comorbidity were also less likely to be non-adherent to antihypertensive medication (OR 0.79, CI 0.64-0.97).
RESULTS
Although village doctors diagnose 40% of hypertension, their treatments are associated with a higher rate of non-adherence to medication. The hypertension care practices of the village doctors should be explored by additional research. More emphasis should be placed on men, young people, and people with low education. Health programs focused on education regarding the importance of taking continuous antihypertensive medication are now of utmost importance.
CONCLUSIONS
[ "Adult", "Antihypertensive Agents", "Bangladesh", "Cross-Sectional Studies", "Female", "Humans", "Hypertension", "Interviews as Topic", "Male", "Medication Adherence", "Middle Aged", "Population Surveillance", "Prevalence", "Risk Factors", "Rural Population", "Sex Factors" ]
4212079
null
null
Methods
Ethical approval: Ethical approval for surveillance site activities has been obtained from the International Center for Diarrheal Disease Research, Bangladesh (icddr,b) review board and the Human Research Ethics Committee, The University of Newcastle, Australia. Study sites: The study population was obtained from three rural demographic surveillance sites: Matlab, Abhoynagar, and Mirsarai. Table 1 describes the comparative characteristics of these three sites. The three rural sites are similar in terms of population composition, density, household size, primary occupation, religion, and disease profile, as well as in area characteristics (18–20). Rural Bangladesh is mostly plain land and riverine area. In general, public health care delivery is very homogeneous across the country. The services of community health workers, village doctors, and so on are very similar across the sites. Furthermore, to date there has been no specific initiative from any public or private organization to manage chronic conditions, hypertension in particular, in these sites. The population under research was from icddr,b surveillance sites, where icddr,b has maintained contact with the population in a very similar fashion across the sites for many years: in Matlab since the early 1960s, and in Mirsarai and Abhoynagar since the early 1980s. The sampling design of the Health and Sociodemographic Surveillance System (HDSS) was a stratified two-stage sampling: unions were stratified initially. Unions are administrative subunits in Bangladesh with a population of approximately 20,000 to 30,000, and unions were randomly selected. Households served as the second-stage sampling units. A systematic random sampling technique was applied to select the sample households. Among sample households, all household members were identified and listed to collect basic socioeconomic and demographic data, and a unique identification number was assigned to each individual. At regular 90-day intervals, one female interviewer visited each household to collect data on demographic and other programmatic events. HDSS is a cohort study, but this study presents a cross-sectional analysis of baseline data regarding hypertension medication adherence in the HDSS population. Study population: The study population was limited to individuals aged 25 years and above. Data were collected in a cross-sectional survey during the year 2009, for a 2-month period at each site. The study population was sampled by a door-to-door survey during the regular surveillance rounds. In Abhoynagar and Mirsarai, an attempt was made to include respondents from all households, and in Matlab about 6,400 individuals were interviewed, evenly distributed across the research area. Because information was collected during regular household visits, only those present during the visit and meeting the age criterion were included. This resulted in a biased selection of respondents, with an overrepresentation of women. To adjust for this bias, the research population was weighted to match the relative age–sex distribution of the population of each of the surveillance sites. In order to give equal weight to the three rural surveillance sites, the sampled population for Matlab was inflated to meet the sample size of the other two sites. We did not track the non-response rate, as the sample was not drawn from our defined HDSS surveillance population; however, a typical estimate of absenteeism for a given household is less than 5% in the HDSS surveillance sites (21). 
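The age–sex weighting described above can be sketched as post-stratification: each respondent's weight is the stratum's population share divided by its sample share, so the weighted sample reproduces the population's age–sex mix. The strata and counts below are hypothetical illustration values, not the survey's actual figures.

```python
def poststratification_weights(sample_counts, population_counts):
    """Per-stratum weight = (population share) / (sample share).
    Applying these weights makes the weighted sample match the
    population's composition across the given strata."""
    n_sample = sum(sample_counts.values())
    n_pop = sum(population_counts.values())
    weights = {}
    for stratum, n_s in sample_counts.items():
        sample_share = n_s / n_sample
        pop_share = population_counts[stratum] / n_pop
        weights[stratum] = pop_share / sample_share
    return weights

# Hypothetical strata: women overrepresented in the sample
sample = {"men 25-49": 200, "women 25-49": 400,
          "men 50+": 100, "women 50+": 300}
population = {"men 25-49": 3500, "women 25-49": 3600,
              "men 50+": 1400, "women 50+": 1500}
w = poststratification_weights(sample, population)
```

Note that the weighted sample size equals the unweighted one (the weights average to 1 over respondents), so underrepresented strata, here men, get weights above 1.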
In total, 29,960 individuals were included. Data collection: Trained research assistants conducted the face-to-face interviews in Bangla using a two-part questionnaire on chronic disease lifestyle risk factors and management. The questionnaire was translated into English and then back-translated into Bangla to check the consistency of the meaning. We then piloted the questionnaire with 10 people from an area similar to, but other than, the study sites to assess the understandability of the questionnaire and its language. This pretested structured questionnaire collected information on 11 prespecified chronic conditions regarding diagnoses, initial treatment, current treatment, and health care provider. Respondents were asked: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (medical assistant/sub-assistant community medical officer), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have any of the following medical conditions: hypertension, diabetes, abnormal blood lipids, overweight, chronic bronchitis, heart attack, angina/coronary heart disease, stroke, asthma, oral cancer, lung cancer and others’. Respondents then needed to identify the most recent provider of the diagnosis. Diagnosis was based solely on self-reporting; details such as symptoms, signs, or lab tests were not included. All the information collected in this study was self-reported. We report only on hypertension in this paper. Dependent and independent variables: The dependent variable for this study was non-adherence to antihypertensive treatment, defined as discontinuation of medication at the time of interview when treatment had been received at initial diagnosis. It was categorized into ‘yes’ and ‘no’. The independent variables were age, sex, education, asset index, comorbidity, and health care provider. The individual sociodemographic factors were derived from preexisting surveillance data. 
Definitions Hypertension Hypertension was diagnosed in this study by asking the respondents: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (Medical assistant/sub assistant community medical office), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have hypertension?’ Hypertension diagnosis is not based on blood pressure measurements; rather we used face-to-face interviews with a structured questionnaire. For the validity of the diagnosis we have asked who provided the diagnosis and to strengthen the diagnostic work we have also asked if they were taking any medication for their hypertension diagnosis. Community health workers or informal health providers or unqualified providers, whatever we want to call, they provide extensive health services in low-income countries. Unqualified providers are crucial to Bangladesh's pluralistic health care system. Ninety-two percent of the unqualified providers (Bangladesh is divided in 64 districts, second tier for administrative purposes) have district level training and more than 96% have specific training on hypertension (22). Hypertension was diagnosed in this study by asking the respondents: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (Medical assistant/sub assistant community medical office), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have hypertension?’ Hypertension diagnosis is not based on blood pressure measurements; rather we used face-to-face interviews with a structured questionnaire. For the validity of the diagnosis we have asked who provided the diagnosis and to strengthen the diagnostic work we have also asked if they were taking any medication for their hypertension diagnosis. 
Hypertension
Hypertension was ascertained in this study by asking respondents: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (medical assistant/sub-assistant community medical officer), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have hypertension?’ The diagnosis was thus not based on blood pressure measurements; rather, we used face-to-face interviews with a structured questionnaire. To support the validity of the diagnosis, we asked who provided it, and to strengthen the diagnostic information we also asked whether respondents were taking any medication for their hypertension. Community health workers, informal health providers, and other unqualified providers deliver extensive health services in low-income countries, and unqualified providers are crucial to Bangladesh's pluralistic health care system. Ninety-two percent of unqualified providers have district-level training (Bangladesh is divided into 64 districts, the second tier of administration), and more than 96% have specific training on hypertension (22).

Asset index
The asset index was based on household assets and housing characteristics, including bed, mattress, quilt, cooking pots, watch, chair, clothing cabinet, radio, television, bicycle, boat, cows, and electricity. Using a variable reduction technique, these assets and characteristics were combined into a single variable; details of the calculation can be found in other publications from the Matlab HDSS (23). After ranking this variable from low to high, households were divided into five equally sized groups, the poverty quintiles. This procedure was repeated for each site; household stratification did not account for possible poverty/wealth differences between sites.
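The exact variable-reduction procedure is documented in the Matlab HDSS publications (23); a common choice for this kind of asset index is the first principal component of the standardized asset indicators, cut into quintiles by rank. The sketch below illustrates that general approach on synthetic household data (all data and variable names are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 0/1 asset indicators for 1,000 households
# (columns stand in for items such as radio, television, bicycle, electricity).
assets = rng.integers(0, 2, size=(1000, 12)).astype(float)

# Standardize each indicator, then take the first principal component
# as the single wealth score (one common variable-reduction choice).
z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
score = z @ vt[0]

# Rank the score from low to high and split households into five
# equally sized groups, the poverty quintiles.
ranks = score.argsort().argsort()
quintile = ranks * 5 // len(ranks) + 1  # 1 = poorest, 5 = least poor

counts = np.bincount(quintile)[1:]
```

Repeating this per site, as the study does, would mean running the ranking and quintile cut separately within each site's households.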
Comorbidity
Comorbidity was defined as a reported diagnosis of cardiovascular disease other than hypertension, of abnormal blood lipids, or of overweight. The diagnosing provider was categorized as a qualified doctor (MBBS or specialized physician) or an unqualified provider (nurse, health worker, paramedic, village doctor, homeopath, kabiraj/spiritual healer, or pharmacy).

Statistical analyses
Data are presented as mean (standard deviation, SD) for continuous variables and as proportions for categorical variables. Overall and sex-specific prevalences of hypertension were calculated. Study participants were divided into four age groups (<40, 40–49, 50–59, and 60+ years), and categorical variables were compared with chi-square statistics. Univariate regression analysis was performed to identify factors associated with non-adherence; any factor with a univariate p-value <0.05 was entered into a multiple regression model. Logistic regression analyses estimated odds ratios (OR) and 95% confidence intervals (CI) of non-adherence associated with the various factors, with and without adjustment for the other explanatory variables. SAS (version 8) was used for the analysis.
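The study's models were fitted in SAS (version 8). As an illustration of how odds ratios and 95% confidence intervals of non-adherence fall out of a logistic model, here is a minimal Newton-Raphson fit on synthetic data; the covariates, effect sizes, and sample below are invented for the example, not drawn from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Invented binary covariates: sex (1 = male) and a
# diagnosed-by-unqualified-provider flag.
male = rng.integers(0, 2, n)
unqualified = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), male, unqualified])

# Simulate non-adherence with known log-odds
# (intercept -1.0, male +0.5, unqualified provider +0.4).
true_beta = np.array([-1.0, 0.5, 0.4])
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.random(n) < p

# Newton-Raphson maximization of the logistic log-likelihood.
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

# Odds ratios with Wald 95% confidence intervals from the inverse Hessian.
se = np.sqrt(np.diag(np.linalg.inv(hess)))
or_est = np.exp(beta)
ci_low, ci_high = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
```

Exponentiating the fitted coefficients and the endpoints of their Wald intervals is what yields OR and CI figures of the kind reported in the Results.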
Results
The study population comprised 52.6% women; the mean age was 44.6 years, with 17% in the age group of 60 years and above, and 43.7% had no formal education. The prevalence of hypertension was 13.67%, higher among women (14.8%) than men (8.9%) (p<0.0001). Prevalence increased with age (5.9% in the youngest vs. 23.1% in the oldest age group; p<0.0001), was higher in the least poor quintile than in the poorest (20.6% vs. 6.3%, p<0.0001), and rose with education (12.2% with no formal education vs. 16.9% with the highest education; p<0.0001) (Table 2).

Table 1. Characteristics of the three study sites
Table 2. Characteristics of the study population (n=29,960) by hypertension status

Only 58% of men and 51.1% of women with hypertension were diagnosed by qualified doctors. Among the unqualified providers, village doctors diagnosed 37% of the men and 42.7% of the women (Table 3).

The proportion of people non-adherent to treatment was 26.2% in the study population. Non-adherence was higher among men (29.2%) than women (24.3%) (p<0.0001) and decreased with age (p<0.0001) (Fig. 1). Non-adherence was less common among wealthier people, for both men and women (Fig. 2).

Fig. 1. Percentage of people non-adherent to treatment by age group and sex, rural Bangladesh, 2009. Absolute numbers are shown in parentheses.
Fig. 2. Percentage of people non-adherent to treatment by sex and asset quintile. Absolute numbers are shown in parentheses.

Age, sex, education, wealth, comorbidity, and type of provider were independently associated with non-adherence to antihypertensive medication. Men were more often non-adherent than women (OR 1.67, CI 1.42–1.97), and non-adherence was greater when hypertension had been diagnosed by unqualified providers (OR 1.46, CI 1.31–1.77). People of older age, higher education, and greater wealth were less likely to be non-adherent, and those who reported cardiovascular comorbidity (angina, heart attack, or stroke) were more likely to remain on medication (OR 0.78, CI 0.64–0.97) (Table 4).

Table 3. Hypertension diagnosed by type of health care provider and sex in rural Bangladesh
Table 4. Unadjusted and adjusted odds ratios (OR) and 95% confidence intervals (CI) of discontinuation of (non-adherence to) hypertension treatment in rural Bangladesh
Conclusions
Although village doctors make 40% of hypertension diagnoses, their treatment is associated with a higher rate of discontinuation. The hypertension management practices of village doctors should be explored in subsequent research. More research is also needed to better understand the determinants of adherence and the reasons for non-adherence to treatment, particularly among men and young people. Effective interventions can then be developed to properly manage hypertension, and chronic diseases in general.
Ethical approval
Ethical approval for surveillance site activities was obtained from the International Center for Diarrheal Disease Research, Bangladesh (icddr,b) review board and the Human Research Ethics Committee, The University of Newcastle, Australia.

Study sites
The study population was obtained from three rural demographic surveillance sites: Matlab, Abhoynagar, and Mirsarai. Table 1 describes the comparative characteristics of these three sites. The three rural sites are similar in population composition, density, household size, primary occupation, religion, and disease profile, as well as in area characteristics (18–20). Rural Bangladesh is mostly plain land and riverine area, and public health care delivery is very homogeneous across the country. The services of community health workers, village doctors, and similar providers are much the same across the sites, and to date no public or private organization has launched a specific initiative to manage chronic conditions such as hypertension in these sites. The population under research was drawn from icddr,b surveillance sites, where icddr,b has maintained contact with the population in a very similar fashion across sites for many years: in Matlab since the early 1960s, and in Mirsarai and Abhoynagar since the early 1980s. The sampling design of the Health and Sociodemographic Surveillance System (HDSS) was stratified two-stage sampling: unions (administrative subunits in Bangladesh with populations of approximately 20,000 to 30,000) were stratified initially and then randomly selected, and households served as the second-stage sampling units, selected by systematic random sampling. Among sample households, all household members were identified and listed to collect basic socioeconomic and demographic data, and a unique identification number was assigned to each individual. At regular 90-day intervals, a female interviewer visited each household to collect data on demographic and other programmatic events. The HDSS is a cohort study, but this paper presents a cross-sectional analysis of baseline data on hypertension medication adherence in the HDSS population.

Study population
The study population was limited to individuals aged 25 years and above. Data were collected in a cross-sectional survey during 2009, over a 2-month period at each site, through a door-to-door survey during the regular surveillance rounds. In Abhoynagar and Mirsarai, an attempt was made to include respondents from all households; in Matlab, about 6,400 individuals were interviewed, evenly distributed across the research area. Because information was collected during regular household visits, only those present during the visit and meeting the age criterion were included. This resulted in a biased selection of respondents, with an overrepresentation of women. To adjust for this bias, the research population was weighted to match the relative age–sex distribution of the population of each surveillance site; to give equal weight to the three rural sites, the sampled population for Matlab was inflated to match the sample size of the other two sites. We did not track the non-response rate, as the sample was not drawn from our defined HDSS surveillance population; however, absenteeism of a given household is typically less than 5% in the HDSS surveillance sites (21). In total, 29,960 individuals were included.

Data collection
Trained research assistants conducted face-to-face interviews in Bangla using a two-part questionnaire on chronic disease lifestyle risk factors and management. The questionnaire was translated into English and then back-translated into Bangla to check consistency of meaning, and was piloted with 10 people from an area similar to, but other than, the study sites to assess its understandability and language. This pretested structured questionnaire collected information on 11 prespecified chronic conditions regarding diagnosis, initial treatment, current treatment, and health care provider. Respondents were asked: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (medical assistant/sub-assistant community medical officer), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have any of the following medical conditions: hypertension, diabetes, abnormal blood lipids, overweight, chronic bronchitis, heart attack, angina/coronary heart disease, stroke, asthma, oral cancer, lung cancer, and others?’ Respondents then identified the most recent provider of each diagnosis. Diagnosis was based solely on self-report; symptoms, signs, and laboratory tests were not recorded. This paper reports only on hypertension.

Dependent and independent variables
The dependent variable was non-adherence to antihypertensive treatment, defined as discontinuation of medication at the time of interview when treatment had been received at initial diagnosis, and categorized as ‘yes’ or ‘no’. The independent variables were age, sex, education, asset index, comorbidity, and health care provider. The individual sociodemographic factors were derived from preexisting surveillance data.
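The age–sex weighting of the sample described above can be sketched as a generic post-stratification: each age–sex cell receives a weight equal to its population share divided by its sample share, so that the weighted sample reproduces the site's known distribution. All cell counts below are invented for illustration:

```python
# Post-stratification: weight each age-sex cell so the weighted sample
# matches the site's known population distribution (counts are invented).
population = {("F", "25-39"): 4000, ("F", "40+"): 3000,
              ("M", "25-39"): 4200, ("M", "40+"): 2800}
sample     = {("F", "25-39"): 900,  ("F", "40+"): 700,
              ("M", "25-39"): 500,  ("M", "40+"): 400}

pop_total = sum(population.values())
samp_total = sum(sample.values())

# weight = (population share of cell) / (sample share of cell);
# underrepresented cells (here, men) get weights above 1.
weights = {cell: (population[cell] / pop_total) / (sample[cell] / samp_total)
           for cell in population}

# Weighted cell counts now follow the population's age-sex distribution.
weighted = {cell: weights[cell] * sample[cell] for cell in sample}
```

With weights of this form, an overrepresented group (women in this survey) is down-weighted and an underrepresented group is up-weighted, which is the effect the paper describes.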
[ null, null, null, null, null, null, null, null, null, null ]
[ "Methods", "Ethical approval", "Study sites", "Study population", "Data collection", "Dependent and independent variables", "Definitions", "Hypertension", "Asset index", "Comorbidity", "Statistical analyses", "Results", "Discussion", "Conclusions" ]
[ " Ethical approval Ethical approval for surveillance site activities has been obtained from the International Center for Diarrheal Disease Research, Bangladesh (icddr,b) review board and Human Research Ethics Committee, The University of Newcastle, Australia.\nEthical approval for surveillance site activities has been obtained from the International Center for Diarrheal Disease Research, Bangladesh (icddr,b) review board and Human Research Ethics Committee, The University of Newcastle, Australia.\n Study sites The study population was obtained from three rural demographic surveillance sites, Matlab, Abhoynagar, and Mirsarai. Table 1 describes the comparative characteristics of these three sites.\nThe three rural sites are similar in terms of population composition, density, household size, primary occupation, religion, and their disease profile; and also in area characteristics (18–20). Rural Bangladesh is mostly plain land and riverine area. In general, the public health care delivery is very homogenous across the country. The services of community health workers, village doctors, and so on are very similar across the sites. Furthermore, there is no specific initiative from any public or private organizations to date, regarding managing chronic conditions per se hypertension, in these sites. The population under research was from icddr,b surveillance sites where icddr,b has been maintaining contact with the population in a very similar fashion across the sites for many years: in Matlab since the early 1960s, and in Mirsarai and Abhoynagar since the early 1980s. The sampling design of the Health and Sociodemographic Surveillance System (HDSS) was a stratified two-stage sampling; unions were stratified initially. Unions are administrative subunits in Bangladesh with a population of approximately 20,000 to 30,000. Each unions were randomly selected. Households served as the second stage sampling units. 
A systematic random sampling technique was applied to select the sample households.\nAmong sample households, all household members were identified and listed to collect basic socioeconomic and demographic data, and a unique identification number was assigned to each individual. At regular 90-days intervals, one female interviewer visited each household to collect data on demographic and other programmatic events. HDSS is a cohort study, but this study presents a cross-sectional analysis of base-line data regarding hypertension medication adherence in HDSS population.\nThe study population was obtained from three rural demographic surveillance sites, Matlab, Abhoynagar, and Mirsarai. Table 1 describes the comparative characteristics of these three sites.\nThe three rural sites are similar in terms of population composition, density, household size, primary occupation, religion, and their disease profile; and also in area characteristics (18–20). Rural Bangladesh is mostly plain land and riverine area. In general, the public health care delivery is very homogenous across the country. The services of community health workers, village doctors, and so on are very similar across the sites. Furthermore, there is no specific initiative from any public or private organizations to date, regarding managing chronic conditions per se hypertension, in these sites. The population under research was from icddr,b surveillance sites where icddr,b has been maintaining contact with the population in a very similar fashion across the sites for many years: in Matlab since the early 1960s, and in Mirsarai and Abhoynagar since the early 1980s. The sampling design of the Health and Sociodemographic Surveillance System (HDSS) was a stratified two-stage sampling; unions were stratified initially. Unions are administrative subunits in Bangladesh with a population of approximately 20,000 to 30,000. Each unions were randomly selected. Households served as the second stage sampling units. 
A systematic random sampling technique was applied to select the sample households.\nAmong sample households, all household members were identified and listed to collect basic socioeconomic and demographic data, and a unique identification number was assigned to each individual. At regular 90-days intervals, one female interviewer visited each household to collect data on demographic and other programmatic events. HDSS is a cohort study, but this study presents a cross-sectional analysis of base-line data regarding hypertension medication adherence in HDSS population.\n Study population The study population was limited to individuals aged 25 years and above. Data was collected in a cross-sectional survey during the year 2009, for a 2-month period at each site. The study population was sampled by door to door survey, during the regular surveillance rounds. In Abhoynagar and Mirsarai, an attempt was made to include respondents from all households, and in Matlab about 6,400 individuals were interviewed, evenly distributed across the research area.\nBecause information was collected during regular household visits, only those present during the visit, and meeting the age criterion were included. This resulted in a biased selection of respondents, with an overrepresentation of women. To adjust for this bias, the research population was weighted to match the relative age–sex distribution of the populations each of the surveillance sites. In order to give equal weight to the three rural surveillance sites, the sampled population for Matlab was inflated to meet the sample size of the other two sites.\nData were collected during the regular visits and only those present at the time of interview were included. We did not track the non-response rate as the sample was not drawn from our defined HDSS surveillance population. However, a typical estimate of absenteeism of a given household is approximately less than 5% in the HDSS surveillance sites (21). 
In total 29,960 individuals were included.\nThe study population was limited to individuals aged 25 years and above. Data was collected in a cross-sectional survey during the year 2009, for a 2-month period at each site. The study population was sampled by door to door survey, during the regular surveillance rounds. In Abhoynagar and Mirsarai, an attempt was made to include respondents from all households, and in Matlab about 6,400 individuals were interviewed, evenly distributed across the research area.\nBecause information was collected during regular household visits, only those present during the visit, and meeting the age criterion were included. This resulted in a biased selection of respondents, with an overrepresentation of women. To adjust for this bias, the research population was weighted to match the relative age–sex distribution of the populations each of the surveillance sites. In order to give equal weight to the three rural surveillance sites, the sampled population for Matlab was inflated to meet the sample size of the other two sites.\nData were collected during the regular visits and only those present at the time of interview were included. We did not track the non-response rate as the sample was not drawn from our defined HDSS surveillance population. However, a typical estimate of absenteeism of a given household is approximately less than 5% in the HDSS surveillance sites (21). In total 29,960 individuals were included.\n Data collection Trained research assistants conducted the face-to-face interviews in Bangla using a two-part questionnaire on chronic disease lifestyle risk factors and management. The questionnaire was translated to English and then back translated to Bangla to check the consistency of the meaning. We then piloted the questionnaire with 10 people from an area similar to but other than the study sites, to see the understandability of the questionnaire, language, and so on. 
This pretested structured questionnaire collected information on 11 prespecified chronic conditions regarding diagnoses, initial treatment, current treatment, and health care provider. Respondents were asked ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (Medical assistant/sub assistant community medical office), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have any of the following medical conditions: hypertension, diabetes, abnormal blood lipids, overweight, chronic bronchitis, heart attack, angina/coronary heart disease, stroke, asthma, oral cancer, lung cancer and others’. Respondents then needed to identify the most recent provider of the diagnoses. Diagnosis was solely based on self-reporting; details such as symptoms, signs, or lab tests were not included. All the information collected in this study was self-reported. We only reported about the hypertension in this paper.\nTrained research assistants conducted the face-to-face interviews in Bangla using a two-part questionnaire on chronic disease lifestyle risk factors and management. The questionnaire was translated to English and then back translated to Bangla to check the consistency of the meaning. We then piloted the questionnaire with 10 people from an area similar to but other than the study sites, to see the understandability of the questionnaire, language, and so on. This pretested structured questionnaire collected information on 11 prespecified chronic conditions regarding diagnoses, initial treatment, current treatment, and health care provider. 
Respondents were asked ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (Medical assistant/sub assistant community medical office), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have any of the following medical conditions: hypertension, diabetes, abnormal blood lipids, overweight, chronic bronchitis, heart attack, angina/coronary heart disease, stroke, asthma, oral cancer, lung cancer and others’. Respondents then needed to identify the most recent provider of the diagnoses. Diagnosis was solely based on self-reporting; details such as symptoms, signs, or lab tests were not included. All the information collected in this study was self-reported. We only reported about the hypertension in this paper.\n Dependent and independent variables The dependent variable for this study was non-adherence to antihypertensive treatment, which we have defined as discontinuation of medication at the time of interview, when treatment was received at initial diagnosis. It is categorized in to ‘yes’ and ‘no’. The independent variables were age, sex, education, asset index, comorbidity, and health care provider. The individual sociodemographic factors were derived from preexisting surveillance data.\nThe dependent variable for this study was non-adherence to antihypertensive treatment, which we have defined as discontinuation of medication at the time of interview, when treatment was received at initial diagnosis. It is categorized in to ‘yes’ and ‘no’. The independent variables were age, sex, education, asset index, comorbidity, and health care provider. 
The individual sociodemographic factors were derived from preexisting surveillance data.

Definitions
Hypertension
Hypertension was ascertained by asking respondents: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (medical assistant/sub-assistant community medical officer), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have hypertension?’ The diagnosis was not based on blood pressure measurements; rather, it relied on face-to-face interviews with a structured questionnaire. To support the validity of the diagnosis, we asked who had provided it and, to strengthen it further, whether respondents were taking any medication for their hypertension. Community health workers, informal health providers, and other unqualified providers deliver extensive health services in low-income countries and are crucial to Bangladesh's pluralistic health care system. Ninety-two percent of unqualified providers have district-level training (Bangladesh is divided into 64 districts, the second administrative tier), and more than 96% have specific training on hypertension (22).

Asset index
The asset index was based on household assets and housing characteristics, including bed, mattress, quilt, cooking pots, watch, chair, clothing cabinet, radio, television, bicycle, boat, cows, and electricity. Using a variable reduction technique, these items were combined into a single variable; details of the calculation can be found in other publications from the Matlab HDSS (23). After ranking this variable from low to high, households were divided into five equally sized groups, the poverty quintiles. This procedure was repeated for each site; the stratification did not account for possible poverty/wealth differences between sites.
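The ‘variable reduction technique’ used for asset indices of this kind is commonly a principal component analysis. A minimal sketch of deriving a wealth score and poverty quintiles that way (an illustrative reconstruction, not the study's actual code) might look like:

```python
import numpy as np

def asset_index_quintiles(assets):
    """Combine binary asset/housing indicators into one wealth score
    (first principal component) and cut households into quintiles.

    assets: (n_households, n_items) array of 0/1 indicators.
    Returns (score, quintile), quintile 1 = poorest ... 5 = least poor.
    """
    X = np.asarray(assets, dtype=float)
    Xc = X - X.mean(axis=0)                    # centre each indicator
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]             # leading eigenvector
    score = Xc @ pc1
    # Eigenvector sign is arbitrary: orient so more assets => higher score.
    if np.corrcoef(score, X.sum(axis=1))[0, 1] < 0:
        score = -score
    rank = score.argsort().argsort()           # 0 (lowest) .. n-1 (highest)
    quintile = rank * 5 // len(score) + 1      # five equally sized groups
    return score, quintile
```

As in the study, the grouping would be run separately per site, so quintiles are relative within each site rather than across sites.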
Comorbidity
Comorbidity was defined as a reported diagnosis of cardiovascular disease other than hypertension, abnormal blood lipids, or overweight. The diagnosing provider was categorized as a qualified doctor (MBBS or specialized physician) or an unqualified provider (nurse, health worker, paramedic, village doctor, homeopath, kabiraj/spiritual healer, or pharmacy).

Statistical analyses
Data are presented as mean (standard deviation, SD) for continuous variables and as proportions for categorical variables. Overall and sex-specific prevalences of hypertension were calculated. Participants were divided into four age groups (<40, 40–49, 50–59, and 60+ years).
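The adherence modelling in this study — logistic regression reporting odds ratios with Wald 95% confidence intervals — was run in SAS. A minimal numpy-only sketch of the same estimator (illustrative only; variable names are hypothetical) is:

```python
import numpy as np

def logistic_or(X, y, n_iter=25):
    """Fit a logistic regression by Newton-Raphson and return, for each
    predictor, the odds ratio with its Wald 95% confidence interval.

    X: (n, p) predictor matrix without intercept; y: 0/1 outcomes
    (1 = non-adherent).
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    A = np.hstack([np.ones((len(y), 1)), X])        # prepend intercept
    beta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-A @ beta))        # fitted probabilities
        H = A.T @ (A * (mu * (1 - mu))[:, None])    # observed information
        beta = beta + np.linalg.solve(H, A.T @ (y - mu))
    # Recompute the information at the converged estimate for the SEs.
    mu = 1.0 / (1.0 + np.exp(-A @ beta))
    H = A.T @ (A * (mu * (1 - mu))[:, None])
    se = np.sqrt(np.diag(np.linalg.inv(H)))[1:]     # Wald standard errors
    return [(np.exp(b), np.exp(b - 1.96 * s), np.exp(b + 1.96 * s))
            for b, s in zip(beta[1:], se)]
```

Fitting the model with and without the other explanatory variables in `X` yields the adjusted and unadjusted odds ratios, respectively.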
Categorical variables were compared with chi-square statistics. Univariate regression analysis was performed to identify factors associated with non-adherence; any factor with a univariate p-value <0.05 was entered into a multiple regression model. Logistic regression analyses were performed to estimate odds ratios (OR) and 95% confidence intervals (CI) of non-adherence associated with various factors, with and without adjustment for other explanatory variables. SAS (version 8) was used for the analysis.

Ethical approval
Ethical approval for surveillance site activities was obtained from the International Center for Diarrheal Disease Research, Bangladesh (icddr,b) review board and the Human Research Ethics Committee, The University of Newcastle, Australia.

Study sites
The study population was obtained from three rural demographic surveillance sites: Matlab, Abhoynagar, and Mirsarai.
Table 1 describes the comparative characteristics of these three sites. The three rural sites are similar in population composition, density, household size, primary occupation, religion, and disease profile, as well as in area characteristics (18–20). Rural Bangladesh is mostly plain and riverine land. In general, public health care delivery is very homogeneous across the country, and the services of community health workers, village doctors, and similar providers are much the same across the sites. Furthermore, to date no public or private organization has launched any specific initiative for managing chronic conditions, hypertension in particular, in these sites. The population under research came from icddr,b surveillance sites, where icddr,b has maintained contact with the population in a very similar fashion across the sites for many years: in Matlab since the early 1960s, and in Mirsarai and Abhoynagar since the early 1980s. The sampling design of the Health and Sociodemographic Surveillance System (HDSS) was a stratified two-stage design: unions, administrative subunits of Bangladesh with populations of approximately 20,000 to 30,000, were stratified and then randomly selected; households served as the second-stage sampling units, chosen by systematic random sampling.

Among sample households, all household members were identified and listed to collect basic socioeconomic and demographic data, and a unique identification number was assigned to each individual. At regular 90-day intervals, a female interviewer visited each household to collect data on demographic and other programmatic events. The HDSS is a cohort study, but this paper presents a cross-sectional analysis of baseline data on hypertension medication adherence in the HDSS population.

Study population and data collection
The study population was limited to individuals aged 25 years and above. Data were collected in a cross-sectional survey during 2009, over a 2-month period at each site, through a door-to-door survey during the regular surveillance rounds. In Abhoynagar and Mirsarai, an attempt was made to include respondents from all households; in Matlab, about 6,400 individuals were interviewed, evenly distributed across the research area.

Because information was collected during regular household visits, only those present during the visit and meeting the age criterion were included. This resulted in a biased selection of respondents, with an overrepresentation of women. To adjust for this bias, the research population was weighted to match the relative age–sex distribution of the population of each surveillance site. To give equal weight to the three rural sites, the sampled population for Matlab was inflated to match the sample size of the other two sites. We did not track the non-response rate, as the sample was not drawn from our defined HDSS surveillance population; however, household absenteeism in the HDSS sites is typically below 5% (21). In total, 29,960 individuals were included.
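The post-stratification weighting used in this study to correct the overrepresentation of women — reweighting the sample to each site's known age–sex distribution — can be sketched as follows (illustrative pandas code; column names are hypothetical):

```python
import pandas as pd

def poststratify(sample, population, cells=("sex", "age_group")):
    """Attach post-stratification weights so the sample reproduces the
    population's joint age-sex distribution (correcting, e.g., the
    overrepresentation of women among members at home during visits).

    sample: one row per respondent, with the cell columns.
    population: one row per cell, with a 'count' column of known totals.
    """
    cells = list(cells)
    n_samp = sample.groupby(cells).size()           # respondents per cell
    pop_share = population.set_index(cells)["count"]
    pop_share = pop_share / pop_share.sum()
    # Weight = population share of the cell / sample share of the cell.
    weight = pop_share / (n_samp / n_samp.sum())
    return sample.join(weight.rename("weight"), on=cells)
```

Run per site, this makes each site's weighted sample match its own age–sex structure; a further scale factor per site (as applied to Matlab) equalizes the sites' contributions.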
Results
The study population comprised 52.6% women; the mean age was 44.6 years, with 17% aged 60 years and above, and 43.7% had no formal education. The prevalence of hypertension was 13.67%, higher for women (14.8%) than for men (8.9%) (p<0.0001).
Prevalence increased with age (5.9% in the youngest vs. 23.1% in the oldest age group; p<0.0001), was higher in the least poor quintile than in the poorest (20.6% vs. 6.3%, p<0.0001), and rose with education (12.2% with no formal education vs. 16.9% with the highest education; p<0.0001) (Table 2).

[Table 1. Characteristics of the three study sites]

Only 58% of men and 51.1% of women with hypertension were diagnosed by qualified doctors. Among the unqualified providers, village doctors diagnosed 37% of the men and 42.7% of the women (Table 3).

[Table 2. Characteristics of the study population (n=29,960) by hypertension status]

The proportion of people non-adherent to treatment was 26.2%. Non-adherence was higher among men (29.2%) than women (24.3%) (p<0.0001) and decreased with age (p<0.0001) (Fig. 1). Non-adherence was less common among wealthier people, for both men and women (Fig. 2).

[Fig. 1. Percentage of people non-adherent to treatment by age group and sex, rural Bangladesh, 2009; absolute numbers in parentheses.]

[Fig. 2. Percentage of people non-adherent to treatment by sex and asset quintile; absolute numbers in parentheses.]

Age, sex, education, wealth, comorbidity, and type of provider were independently associated with non-adherence to antihypertensive medication. Men were more often non-adherent than women (OR 1.67, CI 1.42–1.97). Non-adherence was greater when hypertension had been diagnosed by unqualified providers (OR 1.46, CI 1.31–1.77). People of older age, higher education, and greater wealth were less likely to be non-adherent, and those who reported cardiovascular comorbidity (angina, heart attack, or stroke) were more likely to adhere to medication (OR 0.78, CI 0.64–0.97) (Table 4).

[Table 3. Hypertension diagnosed by type of health care provider and sex in rural Bangladesh]

[Table 4. Unadjusted and adjusted odds ratios (OR) and 95% confidence intervals (CI) of discontinuation of (non-adherence to) hypertension treatment in rural Bangladesh]

Discussion
This is the first study reporting the prevalence and correlates of non-adherence to antihypertensive treatment in rural Bangladesh. The prevalence of hypertension was 13.67% (95% CI 13.29–14.07). A recent study conducted in the same areas of Bangladesh reported prevalences varying from 9.3 to 24.1% (24), and a systematic review and meta-analysis of studies between 1995 and 2010 found a pooled prevalence of 13.7% for Bangladesh (overall estimates ranged from 9 to 22.2%) (25). Although we collected self-reported information, our estimate reflects the magnitude of hypertension in rural Bangladesh well. The prevalence was significantly higher for women than for men, consistent with other studies in rural Bangladesh (26), and rose with increasing wealth, consistent with findings from other rural areas of the country (27).

We found that more than 26% of the study population was non-adherent to antihypertensive treatment. In developed countries, adherence among patients with non-communicable diseases averages only 50% (28). In China and the Gambia, only 43 and 27% of patients with hypertension, respectively, adhere to their antihypertensive medication regimen (29, 30).
In developing countries, the magnitude of poor adherence is assumed to be higher given the scarcity of health resources and difficulties in access to health care (31).\n\nThe reasons for poor adherence have been studied extensively in the West. Two of the most important factors contributing to poor adherence are the asymptomatic and lifelong nature of the disease. Other potential determinants of adherence may be related to demographic factors such as age and education, the patient's understanding and perception of hypertension, the health care provider's mode of delivering treatment, and the relationship between patients and health care professionals (32, 33). Data from the developing world, in this regard are scanty, with details of the prevalence of the condition has just beginning to emerge.\nWe have found that men in rural Bangladesh are most likely to discontinue the treatment. Similar findings were observed in other studies (34); however, opposing findings have also been reported (35). Our study showed that young hypertensive patients are less likely to continue antihypertensive treatment, in line with other findings \n(34–36). As the symptoms go unnoticed, young individuals may not pay much attention to the importance of continuing medication. This needs more exploration as an area of intervention. Sex and age are affecting adherence to treatment as in other diseases too (37).\nPoor socioeconomic status and low education are important factors for poor adherence (38–40); our findings are in line with these evidences. Although the exact cause could not be revealed in this study, people with higher education and more wealth may be more health conscious and knowledgeable about the impact of poor control of HBP. Studies found that non-adherent patients with lower income and less education suffered more from stroke (14). 
Financial constraints could be a factor in continuing lifelong medication for hypertension treatment (32).\n\nWe have found that in the presence of cardiovascular comorbidities (vascular diseases, abnormal lipids, and overweight) the odds of non-adherence to antihypertensive medication are reduced. The reason could be numerous; patients with comorbidities visit health care providers more frequently, they pay more attention to their health conditions, and also are more likely to go to a qualified health care provider and therefore more likely to continue medication in general. Specific reasons for this population are yet to be revealed as aspects of multimorbidity and polypharmacy are underresearched in Bangladesh.\nIn this study, non-adherence is more when hypertension was diagnosed by village doctors. The influence of factors related to the health care provider on adherence to therapy for hypertension has not been systematically studied. One important factor is probably lack of knowledge and training for health care providers on managing hypertension per se chronic diseases (41, 42). These factors may contribute to the fact of more than 40% of people being diagnosed as hypertensive by the village doctors.\nWe have mainly focused on demographic and economic factors such as age, sex, education, and wealth, some of which are results of social determinants, that influence adherence to treatment. This study has highlighted the importance of health care providers in patient adherence to antihypertensive therapy, and the extent of poor adherence, especially when unqualified providers, for example, village doctors, play a prominent role in diagnosing the condition. We have used the education and asset index data, as in many earlier reports, to shed some light on the financial aspects of this study outcome, as there is a positive correlation between educational level and income. 
Unfortunately, we do not have any data about the cost borne by the patients of this study; neither do we have any data regarding usual cost of antihypertensive drugs. However, a publication on availability and affordability of essential medicines for chronic diseases in low-income countries mentioned that median price ratio is between 1.14 and 1.31 in Bangladesh (this ratio compares a medicine's median price to its international reference price) (43).\n\nSeveral limitations should be mentioned. We have used the self-reported information because this is easy, economical, and has long been used as an epidemiological tool. This may be subject to recall bias and possibility of underestimation of actual prevalence. We have a robust phenomenon in place, our surveillance system. The surveillance people are in acquaintance with routine questionnaires and interviewers, decreasing the likelihood of reporting bias. In checking for the validity of hypertension diagnosis, participants were not only asked about the providers of the diagnosis but also asked about taking antihypertensive medication. However the focus of this paper was mainly the adherence to treatment for the hypertensive patients. There are more validated measures of medication adherence, where objective measures are used to specify the adherence. This is the first of its kind study on adherence to antihypertensive treatments in Bangladesh, and conducted on surveillance population.\nAmong the several strengths, this study covers a wide geographical area, provides initial steps of nationally representative data, and includes large sample size that supports the accuracy of our findings. HDSS collect information from whole communities over extended time periods, which more accurately reflect health and population problems in low- and middle-income countries. 
HDSS provides a reliable sampling frame, and a high response rate is usually observed.", "Although village doctors make 40% of hypertension diagnoses, their treatment is associated with a higher rate of discontinuation. The hypertension management practices of village doctors should be explored in subsequent research. More research is needed to better understand the determinants of adherence and to find out the reasons for non-adherence to treatment, particularly among men and young people. Accordingly, effective interventions need to be developed to properly manage hypertension and chronic diseases in general." ]
[ "methods", null, null, null, null, null, null, null, null, null, null, "results", "discussion", "conclusions" ]
[ "adherence to treatment", "hypertension", "Bangladesh", "village doctors", "low-income country" ]
Methods: Ethical approval: Ethical approval for surveillance site activities has been obtained from the International Center for Diarrheal Disease Research, Bangladesh (icddr,b) review board and the Human Research Ethics Committee, The University of Newcastle, Australia.

Study sites: The study population was obtained from three rural demographic surveillance sites: Matlab, Abhoynagar, and Mirsarai. Table 1 describes the comparative characteristics of these three sites. The three rural sites are similar in terms of population composition, density, household size, primary occupation, religion, and disease profile, and also in area characteristics (18–20). Rural Bangladesh is mostly plain land and riverine area. In general, public health care delivery is very homogeneous across the country. The services of community health workers, village doctors, and so on are very similar across the sites. Furthermore, to date there is no specific initiative from any public or private organization regarding the management of chronic conditions, hypertension in particular, in these sites. The population under research was from icddr,b surveillance sites, where icddr,b has been maintaining contact with the population in a very similar fashion across the sites for many years: in Matlab since the early 1960s, and in Mirsarai and Abhoynagar since the early 1980s. The sampling design of the Health and Sociodemographic Surveillance System (HDSS) was a stratified two-stage sampling: unions were stratified initially, and from each stratum unions were randomly selected. Unions are administrative subunits in Bangladesh with a population of approximately 20,000 to 30,000. Households served as the second-stage sampling units. A systematic random sampling technique was applied to select the sample households. Among sample households, all household members were identified and listed to collect basic socioeconomic and demographic data, and a unique identification number was assigned to each individual. At regular 90-day intervals, a female interviewer visited each household to collect data on demographic and other programmatic events. HDSS is a cohort study, but this study presents a cross-sectional analysis of baseline data regarding hypertension medication adherence in the HDSS population.

Study population: The study population was limited to individuals aged 25 years and above. Data were collected in a cross-sectional survey during 2009, over a 2-month period at each site. The study population was sampled by door-to-door survey during the regular surveillance rounds. In Abhoynagar and Mirsarai, an attempt was made to include respondents from all households, and in Matlab about 6,400 individuals were interviewed, evenly distributed across the research area. Because information was collected during regular household visits, only those present during the visit and meeting the age criterion were included. This resulted in a biased selection of respondents, with an overrepresentation of women. To adjust for this bias, the research population was weighted to match the relative age–sex distribution of the population of each of the surveillance sites. In order to give equal weight to the three rural surveillance sites, the sampled population for Matlab was inflated to meet the sample size of the other two sites. We did not track the non-response rate, as the sample was not drawn from our defined HDSS surveillance population; however, a typical estimate of absenteeism for a given household is less than 5% in the HDSS surveillance sites (21). 
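The age–sex post-stratification weighting described above can be sketched as follows. This is an illustrative sketch only: the strata, population counts, and sample counts below are made up, not the study's actual figures.

```python
# Illustrative sketch of post-stratification weighting: each respondent's
# weight is the stratum's population share divided by its sample share,
# so the weighted sample matches the site's age-sex distribution.
# All counts below are hypothetical.

def post_stratification_weights(population_counts, sample_counts):
    """Return a weight per stratum = (population share) / (sample share)."""
    pop_total = sum(population_counts.values())
    samp_total = sum(sample_counts.values())
    weights = {}
    for stratum, n_pop in population_counts.items():
        pop_share = n_pop / pop_total
        samp_share = sample_counts[stratum] / samp_total
        weights[stratum] = pop_share / samp_share
    return weights

# Hypothetical (sex, age group) strata with site population and sample counts.
population = {("F", "25-39"): 5000, ("M", "25-39"): 5000,
              ("F", "40+"): 5000, ("M", "40+"): 5000}
sample = {("F", "25-39"): 400, ("M", "25-39"): 200,   # women overrepresented
          ("F", "40+"): 300, ("M", "40+"): 100}

w = post_stratification_weights(population, sample)
# Underrepresented strata (men) get weights > 1, overrepresented ones < 1.
```

After weighting, each stratum's weighted sample count is proportional to its population share, which is what the adjustment for the overrepresentation of women requires.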
In total, 29,960 individuals were included.

Data collection: Trained research assistants conducted face-to-face interviews in Bangla using a two-part questionnaire on chronic disease lifestyle risk factors and management. The questionnaire was translated into English and then back-translated into Bangla to check the consistency of meaning. We then piloted the questionnaire with 10 people from an area similar to, but other than, the study sites to assess the understandability of the questionnaire and its language. This pretested structured questionnaire collected information on 11 prespecified chronic conditions regarding diagnoses, initial treatment, current treatment, and health care provider. Respondents were asked: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (medical assistant/sub-assistant community medical officer), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have any of the following medical conditions: hypertension, diabetes, abnormal blood lipids, overweight, chronic bronchitis, heart attack, angina/coronary heart disease, stroke, asthma, oral cancer, lung cancer, and others’. Respondents then needed to identify the most recent provider of the diagnosis. Diagnosis was based solely on self-reporting; details such as symptoms, signs, or lab tests were not included. All information collected in this study was self-reported. We report only on hypertension in this paper.

Dependent and independent variables: The dependent variable for this study was non-adherence to antihypertensive treatment, which we defined as discontinuation of medication at the time of interview when treatment had been received at initial diagnosis. It was categorized into ‘yes’ and ‘no’. The independent variables were age, sex, education, asset index, comorbidity, and health care provider. The individual sociodemographic factors were derived from preexisting surveillance data. 
Definitions: Hypertension: Hypertension was ascertained in this study by asking respondents: ‘Have you ever been told by any of the following personnel: MBBS doctor, specialized doctor, nurse, health worker, paramedic (medical assistant/sub-assistant community medical officer), village doctor/quack, homeopath, kabiraj, or pharmacy man that you have hypertension?’ The hypertension diagnosis was not based on blood pressure measurements; rather, we used face-to-face interviews with a structured questionnaire. To check the validity of the diagnosis, we asked who provided the diagnosis and, to strengthen the diagnostic work, we also asked whether respondents were taking any medication for their hypertension. Community health workers, informal health providers, and other unqualified providers deliver extensive health services in low-income countries, and unqualified providers are crucial to Bangladesh's pluralistic health care system. Ninety-two percent of the unqualified providers have district-level training (Bangladesh is divided into 64 districts, the second tier for administrative purposes) and more than 96% have specific training on hypertension (22).

Asset index: The asset index was assessed based on household assets and housing characteristics, including bed, mattress, quilt, cooking pots, watch, chair, clothing cabinet, radio, television, bicycle, boat, cows, and electricity. Using a variable reduction technique, these assets and characteristics were combined into a single variable. Details about the calculation of the asset index can be found in other publications from the Matlab HDSS (23). After ranking this variable from low to high, households were divided into five equally sized groups, the poverty quintiles. This procedure was repeated for each site; household stratification did not account for possible poverty/wealth differences between sites. 
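The actual index was built with a variable-reduction technique (23); as a simplified sketch of only the final step, ranking households on a single score and cutting the ranked list into five equally sized groups can be done as below. The household scores are hypothetical stand-ins for the real index values.

```python
# Simplified sketch: rank households by an asset score (a made-up value
# standing in for the variable-reduction index) and split the ranked list
# into five equally sized groups, the poverty quintiles.

def assign_quintiles(scores):
    """Map household id -> quintile 1 (poorest) .. 5 (least poor)."""
    ranked = sorted(scores, key=scores.get)          # low score = poorest
    group_size = len(ranked) / 5.0
    return {hh: min(5, int(i // group_size) + 1) for i, hh in enumerate(ranked)}

# Hypothetical scores for 10 households.
scores = {f"hh{i}": s for i, s in enumerate([2, 9, 4, 7, 1, 8, 3, 6, 5, 10])}
quintiles = assign_quintiles(scores)
# hh4 has the lowest score -> quintile 1; hh9 the highest -> quintile 5.
```

As in the study, the procedure would be run separately per site, so the quintile cut-points do not account for wealth differences between sites.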
Comorbidity: Comorbidity was defined as a reported diagnosis of cardiovascular disease other than hypertension, abnormal blood lipids, or overweight. The diagnosing provider was categorized as qualified doctors (MBBS and specialized physicians) or unqualified providers (nurses, health workers, paramedics, village doctors, homeopaths, kabiraz/spiritual healers, and pharmacy staff).

Statistical analyses: Data are presented as mean (standard deviation, SD) for continuous variables and as proportions for categorical variables. Overall and sex-specific prevalences of hypertension were calculated. The study participants were divided into four age groups (<40, 40–49, 50–59, and 60+ years). Categorical variables were compared with chi-square statistics. Univariate regression analysis was performed to identify factors associated with non-adherence. Any factor with a univariate p-value <0.05 was entered into a multiple regression model. Logistic regression analyses were performed to estimate odds ratios (OR) and 95% confidence intervals (CI) of non-adherence associated with various factors, with and without adjustment for other explanatory variables. SAS (version 8) statistical software was used for the analysis. 
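The odds ratios and 95% confidence intervals reported in this study came from logistic regression in SAS; for a single binary factor, the unadjusted OR and its Wald-type CI reduce to the familiar 2×2-table formulas, sketched below with made-up counts (not the study's data).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR for a 2x2 table [[a, b], [c, d]] with a 95% Wald CI.

    a = exposed & non-adherent,   b = exposed & adherent,
    c = unexposed & non-adherent, d = unexposed & adherent.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf formula.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: non-adherence among patients diagnosed by
# village doctors (exposed) vs. qualified doctors (unexposed).
or_, lo, hi = odds_ratio_ci(a=60, b=40, c=30, d=70)
# OR = (60*70)/(40*30) = 3.5
```

A CI whose lower bound exceeds 1 corresponds to a statistically significant association at the 5% level, matching how the ORs in Tables of this kind are usually read.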
Results: The study population comprised 52.6% women, mean age was 44.6 years, with 17% belonging to the age group of 60 years and above. Those who had no formal education amounted to 43.7%. The prevalence of hypertension was 13.67%, higher for women (14.8%) than men (8.9%) (p<0.0001). Prevalence of hypertension increased with age (5.9% in the youngest age group compared to 23.1% in the oldest age group; p<0.0001), higher among the least poor quintile compared to the poorest quintile (20.6% vs. 6.3%, p<0.0001), and higher with increasing education (no formal education 12.2% vs. 
highest education 16.9%) (p<0.0001) (Table 2). Characteristics of the three study sites. Only 58% of men and 51.1% of women with hypertension were diagnosed by qualified doctors. Among the unqualified providers, village doctors diagnosed 37% of the men and 42.7% of the women (Table 3). Characteristics of the study population (n=29,960) by hypertension status. The proportion of people non-adherent to treatment was 26.2% in the study population. Non-adherence to treatment was higher among men (29.2%) than women (24.3%) (p<0.0001). Non-adherence to treatment decreased with age (p<0.0001) (Fig. 1). Non-adherence was less common among wealthy people, both for men and for women (Fig. 2). Percentage of people non-adherent to treatment by age group and sex, in rural Bangladesh, 2009. *Absolute numbers of samples are shown in parentheses. Percentage of people non-adherent to treatment by sex and asset quintile. *Absolute numbers of samples are shown in parentheses. Age, sex, education, wealth, comorbidity, and type of provider were independently associated with non-adherence to antihypertensive medication. More men were non-adherent to the treatment than women (OR 1.67, CI 1.42–1.97). Non-adherence to medication was greater when hypertension was diagnosed by unqualified providers (OR 1.46, CI 1.31–1.77). People of older age, higher education, and greater wealth were less likely to be non-adherent. Those who reported cardiovascular comorbidity (angina, heart attack, or stroke) were more likely to be adherent to medication (OR 0.78, CI 0.64–0.97) (Table 4). Hypertension diagnosed by type of health care provider and sex in rural Bangladesh. Unadjusted and adjusted odds ratios (OR) and 95% confidence intervals (CI) of discontinuation of (non-adherence to) hypertension treatment in rural Bangladesh. Discussion: This is the first study reporting the prevalence and correlates of non-adherence to antihypertensive treatment in rural Bangladesh.
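The odds ratios and 95% confidence intervals reported in the Results are standard 2×2-table quantities. As a minimal sketch of the calculation (Woolf's log method, which is the usual textbook approach; the study's underlying cell counts are not reported here, so the counts below are hypothetical):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf's log method) from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only: 30/70 non-adherent/adherent
# among men, 10/90 among women.
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
```

Note that the adjusted odds ratios in Table 4 come from a multiple logistic regression model rather than from this single-table calculation; the sketch only reproduces the unadjusted case.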
The prevalence of hypertension was 13.67% (95% CI 13.29–14.07) in this study. A recent study conducted in the same areas of Bangladesh reported that the prevalence of hypertension varied from 9.3 to 24.1% (24). A recent systematic review and meta-analysis of studies between 1995 and 2010 showed that the pooled prevalence of hypertension in Bangladesh was 13.7% (overall prevalence estimates ranged between 9 and 22.2%) (25). Although we collected self-reported information, the prevalence of hypertension reflects the magnitude of hypertension in rural Bangladesh very well. We have found that the prevalence of hypertension is significantly higher for women compared to men, consistent with other studies in rural Bangladesh (26). Our finding that the prevalence rises with increasing wealth is consistent with findings from other rural areas in Bangladesh (27). We have found that more than 26% of the study population is non-adherent to antihypertensive treatment. In developed countries, adherence among patients suffering from non-communicable diseases averages only 50% (28). In China and the Gambia, only 43 and 27% of patients with hypertension, respectively, adhere to their antihypertensive medication regimen (29, 30). In developing countries, the magnitude of poor adherence is assumed to be higher given the scarcity of health resources and difficulties in access to health care (31). The reasons for poor adherence have been studied extensively in the West. Two of the most important factors contributing to poor adherence are the asymptomatic and lifelong nature of the disease. Other potential determinants of adherence may be related to demographic factors such as age and education, the patient's understanding and perception of hypertension, the health care provider's mode of delivering treatment, and the relationship between patients and health care professionals (32, 33).
Data from the developing world in this regard are scanty; details of the prevalence of the condition are only beginning to emerge. We have found that men in rural Bangladesh are more likely than women to discontinue the treatment. Similar findings were observed in other studies (34); however, opposing findings have also been reported (35). Our study showed that young hypertensive patients are less likely to continue antihypertensive treatment, in line with other findings (34–36). As the symptoms go unnoticed, young individuals may not pay much attention to the importance of continuing medication. This needs more exploration as an area of intervention. Sex and age affect adherence to treatment, as in other diseases (37). Poor socioeconomic status and low education are important factors for poor adherence (38–40); our findings are in line with this evidence. Although the exact cause could not be revealed in this study, people with higher education and more wealth may be more health conscious and knowledgeable about the impact of poor control of HBP. Studies found that non-adherent patients with lower income and less education suffered more from stroke (14). Financial constraints could be a factor in continuing lifelong medication for hypertension treatment (32). We have found that in the presence of cardiovascular comorbidities (vascular diseases, abnormal lipids, and overweight) the odds of non-adherence to antihypertensive medication are reduced. The reasons could be numerous: patients with comorbidities visit health care providers more frequently, pay more attention to their health conditions, and are more likely to see a qualified health care provider and therefore to continue medication in general. Specific reasons for this population are yet to be revealed, as aspects of multimorbidity and polypharmacy are under-researched in Bangladesh. In this study, non-adherence was higher when hypertension was diagnosed by village doctors.
The influence of factors related to the health care provider on adherence to therapy for hypertension has not been systematically studied. One important factor is probably the lack of knowledge and training of health care providers on managing hypertension and chronic diseases in general (41, 42). These factors are of particular concern given that more than 40% of people were diagnosed as hypertensive by village doctors. We have mainly focused on demographic and economic factors such as age, sex, education, and wealth, some of which result from social determinants, that influence adherence to treatment. This study has highlighted the importance of health care providers in patient adherence to antihypertensive therapy, and the extent of poor adherence, especially when unqualified providers, for example village doctors, play a prominent role in diagnosing the condition. We have used the education and asset index data, as in many earlier reports, to shed some light on the financial aspects of this study outcome, as there is a positive correlation between educational level and income. Unfortunately, we do not have any data about the costs borne by the patients in this study, nor do we have any data regarding the usual cost of antihypertensive drugs. However, a publication on the availability and affordability of essential medicines for chronic diseases in low-income countries mentioned that the median price ratio is between 1.14 and 1.31 in Bangladesh (this ratio compares a medicine's median price to its international reference price) (43). Several limitations should be mentioned. We have used self-reported information because it is easy, economical, and has long been used as an epidemiological tool. This may be subject to recall bias and the possibility of underestimation of the actual prevalence. However, we have a robust surveillance system in place.
The surveillance population is acquainted with the routine questionnaires and interviewers, decreasing the likelihood of reporting bias. To check the validity of the hypertension diagnosis, participants were asked not only about the provider of the diagnosis but also about taking antihypertensive medication. However, the focus of this paper was mainly adherence to treatment among the hypertensive patients. There are more validated measures of medication adherence, in which objective measures are used to quantify adherence. This is the first study of its kind on adherence to antihypertensive treatment in Bangladesh, and it was conducted on a surveillance population. Among its several strengths, this study covers a wide geographical area, provides initial steps toward nationally representative data, and includes a large sample size that supports the accuracy of our findings. HDSSs collect information from whole communities over extended time periods, which more accurately reflects health and population problems in low- and middle-income countries. An HDSS provides a reliable sampling frame, and a high response rate is usually observed. Conclusions: Although village doctors make 40% of hypertension diagnoses, their treatments are associated with a higher rate of discontinuation. The hypertension management practices of village doctors should be explored in subsequent research. More research is needed to better understand the determinants of adherence and to find out the reasons for non-adherence to treatment, particularly among men and young people. Accordingly, effective interventions need to be developed to properly manage hypertension and chronic diseases in general.
Background: Poor adherence has been identified as the main cause of failure to control hypertension. Poor adherence to antihypertensive treatment is a significant cardiovascular risk factor, which often remains unrecognized. No previous studies have examined adherence to antihypertensive medication or the characteristics of non-adherent patients in Bangladesh. Methods: The study population included 29,960 men and women aged 25 years and older from three rural demographic surveillance sites of the International Center for Diarrheal Disease Research, Bangladesh (icddr,b): Matlab, Abhoynagar, and Mirsarai. Data were collected with a cross-sectional design on the diagnosing provider and on initial and current treatment. Discontinuation of medication at the time of interview was defined as non-adherence to treatment. Results: The prevalence of hypertension was 13.67%. Qualified providers diagnosed only 53.5% of the hypertension cases (MBBS doctors 46.1% and specialized doctors 7.4%). Among the unqualified providers, village doctors diagnosed 40.7%, and others (nurse, health worker, paramedic, homeopath, spiritual healer, and pharmacy man) each diagnosed less than 5%. Of those who started treatment upon being diagnosed with hypertension, 26% discontinued the use of medication. Age, sex, education, wealth, and type of provider were independently associated with non-adherence to medication. More men discontinued the treatment than women (odds ratio [OR] 1.74, confidence interval [CI] 1.48-2.04). Non-adherence was greater when hypertension was diagnosed by unqualified providers (OR 1.52, CI 1.31-1.77). Hypertensive patients of older age, the least poor quintile, and higher education were less likely to be non-adherent. Patients with cardiovascular comorbidity were also less likely to be non-adherent to antihypertensive medication (OR 0.79, CI 0.64-0.97). 
Conclusions: Although village doctors diagnose 40% of hypertension cases, their treatments are associated with a higher rate of non-adherence to medication. The hypertension care practices of village doctors should be explored by additional research. More emphasis should be placed on men, young people, and people with low education. Health programs focused on education regarding the importance of taking continuous antihypertensive medication are now of utmost importance.
null
null
8,103
423
14
[ "hypertension", "health", "study", "sites", "providers", "population", "adherence", "diagnosis", "bangladesh", "surveillance" ]
[ "test", "test" ]
null
null
null
[CONTENT] adherence to treatment | hypertension | Bangladesh | village doctors | low-income country [SUMMARY]
null
null
[CONTENT] Adult | Antihypertensive Agents | Bangladesh | Cross-Sectional Studies | Female | Humans | Hypertension | Interviews as Topic | Male | Medication Adherence | Middle Aged | Population Surveillance | Prevalence | Risk Factors | Rural Population | Sex Factors [SUMMARY]
null
null
[CONTENT] test | test [SUMMARY]
null
null
[CONTENT] hypertension | health | study | sites | providers | population | adherence | diagnosis | bangladesh | surveillance [SUMMARY]
null
null
[CONTENT] health | sites | hypertension | population | doctor | providers | surveillance | diagnosis | household | study [SUMMARY]
[CONTENT] 0001 | non | age | women | men | adherent | non adherent | education | treatment | group [SUMMARY]
[CONTENT] hypertension | research | adherence | doctors | village doctors | treatment particularly | research research needed | research research needed better | need | hypertension diagnoses treatments associated [SUMMARY]
[CONTENT] hypertension | health | providers | adherence | population | sites | study | surveillance | treatment | non [SUMMARY]
null
null
[CONTENT] 29,960 | 25 years | three | the International Center for Diarrheal Disease Research | Matlab | Abhoynagar | Mirsarai ||| ||| [SUMMARY]
[CONTENT] 13.67% ||| only 53.5% | MBBS | 46.1 | 7.4% ||| 40.7% | less than 5% ||| 26% ||| ||| 1.74 ||| CI | 1.48 ||| 1.52 | CI | 1.31-1.77 ||| ||| 0.79 | CI | 0.64-0.97 [SUMMARY]
[CONTENT] 40% ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| Bangladesh ||| 29,960 | 25 years | three | the International Center for Diarrheal Disease Research | Matlab | Abhoynagar | Mirsarai ||| ||| ||| ||| 13.67% ||| only 53.5% | MBBS | 46.1 | 7.4% ||| 40.7% | less than 5% ||| 26% ||| ||| 1.74 ||| CI | 1.48 ||| 1.52 | CI | 1.31-1.77 ||| ||| 0.79 | CI | 0.64-0.97 ||| 40% ||| ||| ||| [SUMMARY]
null
Socio-economic indicators and predisposing factors associated with traumatic dental injuries in schoolchildren at Brasília, Brazil: a cross-sectional, population-based study.
25037704
This study assessed the prevalence of traumatic dental injuries (TDI) and its association with sociodemographic and physical characteristics in the anterior permanent teeth of 12-year-old schoolchildren at the city of Brasília - DF, Brazil.
BACKGROUND
A cross-sectional, population-based study was conducted on a sample of 1,389 boys and girls aged 12 years, enrolled in public and private fundamental schools at the Administrative Region (RA) of Brasília, Brazil, from October 2011 to September 2012. The demographic details were achieved by a structured questionnaire. The study recorded the type of damage, the size of incisal overjet, and whether lip coverage was inadequate. Sociodemographic data included sex, income and educational level of the parents or caretakers.
METHODS
A total of 1118 schoolchildren were examined, yielding a response rate of 80.48%. The prevalence of TDI was 14.63% in public schools and 23.40% in private schools. The students did not differ according to sex, income and educational level of the parents or caretakers concerning the occurrence of traumas in permanent anterior teeth. Increased overjet and inadequate lip coverage were found to be important contributing factors for TDIs.
RESULTS
In conclusion, this study showed an expressive prevalence of TDI in 12-year-old in schoolchildren at Brasília DF, Brazil. Sex and educational level of the parents were not associated with trauma. The increased overjet and inadequate lip coverage were significantly associated with dental trauma.
CONCLUSION
[ "Brazil", "Causality", "Child", "Cross-Sectional Studies", "Cuspid", "Educational Status", "Female", "Health Knowledge, Attitudes, Practice", "Household Articles", "Housing", "Humans", "Incisor", "Income", "Lip", "Male", "Overbite", "Parents", "Population Surveillance", "Prevalence", "Sex Factors", "Socioeconomic Factors", "Tooth Injuries" ]
4223362
Background
Traumatic dental injuries (TDI) have been extensively studied over the last few decades. They result in tooth fracture, displacement or loss, causing negative functional, esthetic and psychological effects on individuals (children, adolescents and adults) [1-3]. Previous studies reported prevalence rates ranging from 6% to 27% in different populations [4-10]. In Brazil the prevalence varies widely, ranging from 10% to 58% [11-16]. The possible explanations for this variation include differences in places/environments, diagnostic criteria and examination methods [17]. The etiology and predisposing factors of traumatic injuries are well established in the literature. However, the impact of socio-economic indicators remains conflicting and unclear [18,19]. Increased violence rates, the number of car accidents and the greater participation of children in sports activities contribute to making dental trauma an emerging public health problem. Also, the greater availability of and access to leisure devices with potential risk have remarkably increased the number of cases [15]. Glendor [20] conducted a literature review on the etiology of and reported risk factors for traumatic dental injuries and concluded that the number of causes of TDIs has alarmingly increased over the last decades. The author suggested that this phenomenon may be associated with the increased interest in the causes, and it also evidences the complex etiology of TDIs. The investigator also concluded that not only risk factors such as overjet and inadequate lip coverage contribute to increasing TDIs, but also the complex interaction between the oral status of the patient, the design of public parks and school playgrounds, and human behavior. The question is to what extent these factors, together or separately, influence the risk of TDI. Studies have consistently shown that male individuals have a higher chance of TDI than female individuals [8,10,17]. 
Socio-economic status has been associated with several oral diseases and conditions, such as dental caries, periodontal diseases, tooth loss, and oral cancer. Nevertheless, the association between TDI and socio-economic indicators remains unclear [14,21,22]. Although some researchers have reported that schoolchildren with lower socio-economic status are more likely to suffer TDI [2,13,14,17,19], others have shown an inverse correlation, with wealthier children having a higher risk of TDI [9,13]. A review paper concluded that there are few studies correlating TDI in permanent teeth with socio-economic indicators and that the majority did not find such an association [22]. Among the physical factors, increased overjet and inadequate lip coverage have been consistently associated with TDI [12,13,19,21,23]. A systematic review using meta-analysis stated that an overjet greater than 3 mm increases the chance of dental trauma. Another study considered inadequate lip coverage a more important risk factor for the occurrence of TDI than increased overjet alone [24]. The purpose of this study was to assess the prevalence of TDI and its association with sociodemographic and physical characteristics in anterior permanent teeth of 12-year-old schoolchildren at the city of Brasília – DF, Brazil.
Subjects and methods
This study was approved by the Institutional Review Board of the Health Sciences School at the University of Brasília, DF, Brazil. The Education Secretariat of the Government of Distrito Federal (GDF) authorized the study and provided the necessary information for the sample registry, which was updated on the examination date. The following data were obtained: the name of all schools at Brasília, their addresses and the total number of students aged 12 years registered in each school. A cross-sectional, population-based study was conducted on a sample of 1,389 boys and girls aged 12 years, enrolled at public and private fundamental schools at the Administrative Region (RA) of Brasília, Brazil. The sample size was calculated based on a sample error of 1.7%, a significance level of 5%, a prevalence of dental injuries of 20% and a population of 4,000 students aged 12 years registered in public and private schools at Brasília, according to the school census of 2011. A total of 83 fundamental schools at the administrative region of Brasília, 43 public and 40 private, were initially contacted about their interest in participating in the study. Only one public school did not agree to participate, while only 23 private schools agreed to participate. A letter was sent to all parents or caretakers of the selected children explaining the objectives, characteristics and importance of the study. Within each school, the study was conducted only on children whose parents or caretakers signed the consent form. The final sample was composed of 787 students of public schools and 658 students of private schools, among which, for 1,118 children, it was possible to obtain information on the variables analyzed, yielding a response rate of 80.45%, based on the planned sample (Figure 1). Sample calculation and response rate of the study. Brasília, DF, 2012. 
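The stated sample of 1,389 children is consistent with the usual formula for estimating a proportion with a finite-population correction. The authors do not state the exact formula they used, so the following reconstruction is an assumption based on the parameters quoted in the text (prevalence 20%, sample error 1.7%, significance level 5%, population 4,000):

```python
import math

def sample_size(p, e, N, z=1.96):
    """Sample size for estimating a proportion p with margin of error e
    in a finite population of size N, at 95% confidence (z = 1.96)."""
    n0 = z**2 * p * (1 - p) / e**2      # infinite-population sample size
    return n0 / (1 + (n0 - 1) / N)      # finite-population correction

# Parameters from the text
n = sample_size(p=0.20, e=0.017, N=4000)
print(math.ceil(n))  # 1389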
Socio-demographic data included the type of school (public or private), sex and the educational level of the caretaker in completed years of study. The socio-economic data on the caretakers were collected by a questionnaire previously applied in another epidemiological survey [25]. This socio-economic questionnaire was used during the last Brazilian Oral Health Survey, in which a prevalence of 20.5% of traumatic dental injuries was found. This questionnaire was divided into four parts. The first part comprised the identification data. Similarly, in this study, we confirmed the student's identification data, race, sex, birth date, and the educational level of the parents/caretakers. The second part was composed of data on the socio-economic characteristics of the family: number of persons living in the house; number of bedrooms; material goods (TV, refrigerator, stereo, microwave oven, washing machine, number of cars; ranging from zero to ten goods); and also the family income (the sum of incomes received by each person living in the house, ranging from R$ 250.00 to R$ 9,500.00). The third part comprised questions on the educational level and years of schooling of the parents, oral morbidity and the use of oral health services. Finally, the fourth part contained three questions on the oral health self-perception and impacts of the parents. A section containing questions on TDI was added to this questionnaire. The TDI questions aimed to gather information on the self-perception of the parents relating to tooth traumas (first-aid notions in cases of tooth traumas, occurrence of accidents involving the mouth/teeth inside the family). If tooth trauma within the family was positively reported, we investigated which dentition was involved (primary or permanent); which type of traumatism occurred; and whether immediate care was provided. 
This form was sent to parents who agreed to participate in the study before the clinical examination of the children “See Additional file 1.” The clinical forms and questionnaires were previously tested and did not require adjustments. A pilot study was conducted on thirty parents of the same sample to test the questionnaire. The results revealed that it was feasible in the local situation. The first thirty socio-economic questionnaires were used to test the research instrument adapted for this study. The instrument was well understood by the participants and was considered effective for data collection. Thus, no adjustments were necessary. These questionnaires were included in the general sample. Clinical data on dental trauma, lip coverage and incisal overjet were collected by oral examinations. The etiology, the site of occurrence of dental trauma and the age at the occurrence of trauma were obtained by direct interview with the child. The criteria for classification of trauma were the same used in the Children's Dental Health Survey at the United Kingdom [26]. These criteria include tooth fractures, discoloration and loss due to trauma to the permanent dentition. The incisal overjet was coded as smaller than or equal to 5 mm or greater than 5 mm, after measurement of the greatest distance between the incisal edges of the maxillary incisors in relation to the incisal edges of the corresponding mandibular teeth using a CPI periodontal probe. The anterior maxillary overjet was measured with the mandibular and maxillary teeth in centric occlusion, with the aid of a CPI periodontal probe placed parallel to the occlusal plane; the overjet is the greatest distance in mm between the incisal edges of the maxillary incisors in relation to the incisal edges of the corresponding mandibular incisors. Anterior mandibular overjet is characterized by the anterior (labial) position of the mandibular incisors in relation to the corresponding maxillary incisors. 
Mandibular protrusion or crossbite was measured with the aid of a CPI periodontal probe and recorded in millimeters. Vertical open bite was characterized by a lack of overlapping between maxillary and mandibular incisors. During data collection on lip coverage, coverage was considered adequate when the lips touched, entirely covering the anterior teeth, while the schoolchildren silently read a document without knowing they were being observed. Data were collected by two dentists (Frujeri MLV and Frujeri JAJ) with the help of two annotators, previously trained and calibrated at the Center of Trauma at the Federal University of Minas Gerais (UFMG). The calibration/training exercises were conducted by the professors in charge of the Center of Trauma at the aforementioned university, using photographs and images of different types of traumas and of patients suffering dentoalveolar traumas assisted at the clinics of this center. The degree of diagnostic reproducibility was high; the kappa coefficients for inter-examiner agreement ranged from 0.85 to 1.00, indicating almost perfect to perfect agreement, since in most cases the kappa value was equal to one. The kappa coefficients for intra-examiner agreement were all equal to 1.00, indicating perfect agreement for both examiners. The clinical examinations were performed at the schools, during classes, in open areas with enough natural light, with the children seated on chairs. All biosecurity procedures were strictly followed. Dental mirrors, CPI periodontal probes and gauze were packed and sterilized in sufficient numbers for one day of work. The examination included all permanent anterior teeth (incisors and canines). All teeth were dried before examination to increase the accuracy of the diagnosis. The examiner assessed the existence and type of damage, the treatment carried out, whether the incisal overjet was smaller than or equal to 5 mm or greater than 5 mm, and whether lip coverage was inadequate. 
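The inter- and intra-examiner kappa coefficients quoted above follow Cohen's formula, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal sketch for two raters' binary (trauma present/absent) calls, using made-up ratings purely for illustration:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters coding the same items (any categories)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)      # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical duplicate examinations of 8 children (1 = trauma present)
examiner_a = [1, 1, 0, 0, 1, 0, 1, 1]
examiner_b = [1, 1, 0, 0, 1, 0, 1, 0]
kappa = cohen_kappa(examiner_a, examiner_b)  # 0.75
```

A kappa of 1.00, as reported for the intra-examiner checks, corresponds to the two rating lists being identical.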
The examination was conducted in a uniform fashion, beginning at the maxillary right quadrant and proceeding clockwise to the mandibular arch. When a child was absent on the day of examination, a second visit was made. When tooth trauma was verified through clinical examination, the following characteristics were recorded on a specific sheet: type and site of injury; etiology; teeth damaged; and the treatment and material type, if any. It was also recorded whether traumatized teeth had remained untreated up to the time of the research “See Additional file 2”. In these cases, the parents/caregivers were informed by letter of the importance of both treatment and follow-up of the trauma. A pilot study was conducted on thirty schoolchildren from the same sample to test the methodology; the results revealed that it was feasible in the local situation. Inter- and intra-examiner diagnostic variability was assessed by duplicate examination of 10% of the sample, applying the Kappa statistic to each tooth in each situation analyzed. These students were included in the total sample of 1,118 participants used in the analysis. Data were entered and analyzed in SAS 9.2 for Windows. To evaluate whether type of overjet, lip coverage, location of the school, sex, income and educational level might explain the occurrence of trauma to permanent teeth, a mixed-effects multiple logistic regression model with a random intercept [27] was used to account for intra-school correlation, since the schoolchildren are clustered within schools. From the fitted model, odds ratios and the respective 95% confidence intervals were calculated.
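The inter- and intra-examiner kappa coefficients described above quantify agreement beyond chance between paired ratings. A minimal sketch of Cohen's kappa follows; the rating vectors are illustrative made-up data, not the study's:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters coded independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Perfect agreement, as reported for the intra-examiner exercises
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # -> 1.0
# Partial agreement on hypothetical trauma (1) / no-trauma (0) codes
print(cohens_kappa([0, 1, 0, 1], [0, 1, 1, 1]))  # -> 0.5
```

In practice a duplicate examination of 10% of the sample, as done here, yields the paired vectors; a kappa of 0.85-1.00 corresponds to the "almost perfect" band of the usual interpretation scale.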
Results
The prevalence of dental trauma according to the variables analyzed is presented in Table 1. Prevalence of trauma to permanent teeth according to the variables analyzed in 12-year-old schoolchildren in the city of Brasília- DF- Brazil, in the year 2012 *CI- Confidence Interval. The multivariate analysis results are in Table 2. Likelihood of TDIs according to the adjusted odds ratio by mixed-effects logistic regression *OR- Odds Ratio. The association analyses demonstrated that students in private and public schools may have differed as to the occurrence of trauma to permanent teeth [OR = 1.53; CI 95%: 0.99-2.38; p-value = 0.05]. Concerning sex, boys and girls did not differ regarding the occurrence of trauma to permanent teeth [OR = 1.33; CI 95%: 0.96-1.85]. Income and educational level also did not differ with regard to the occurrence of trauma to permanent teeth (p = 0.65 and p = 0.49, respectively). Students with inadequate lip coverage had 8.94 times higher odds of trauma to permanent teeth than those with adequate lip coverage [OR = 8.94; CI 95%: 5.92-13.51; p-value < 0.0001]. Students with overjet in the anterior maxilla had 2.98 times higher odds of trauma to a permanent tooth than those with open bite [OR = 2.98; CI 95%: 1.15-7.93; p-value = 0.04]. Even though the lower limit of the confidence interval approaches 1, the interval is strongly asymmetric to the right and suggests that the association between this category of overjet and trauma to permanent teeth is considerable. Likewise, students with overjet in the anterior mandible had 6.43 times higher odds of trauma to a permanent tooth than those with open bite [OR = 6.43; CI 95%: 1.02-30.54; p-value = 0.04].
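The odds ratios above are obtained by exponentiating the model's log-odds coefficients. As a hedged illustration (assuming Wald-type intervals, which the paper does not state explicitly), the reported lip-coverage OR of 8.94 with CI 5.92-13.51 is consistent with a coefficient of about 2.19 and a standard error of about 0.21:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Back-calculated from the reported OR = 8.94, CI 5.92-13.51 (illustrative)
beta = math.log(8.94)                                  # ~2.19
se = (math.log(13.51) - math.log(5.92)) / (2 * 1.96)   # ~0.21
or_, lo, hi = odds_ratio_ci(beta, se)
print(or_, lo, hi)  # approximately 8.94, 5.92, 13.51
```

The asymmetry the authors note (e.g. 1.02-30.54 around OR = 6.43) is expected: the interval is symmetric on the log-odds scale and becomes right-skewed once exponentiated.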
Conclusion
Sex and educational level of the parents were not associated with trauma. The increased overjet and inadequate lip coverage were significantly associated with dental trauma.
[ "Background", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Traumatic dental injuries (TDI) have been extensively studied over the last few decades. They result in tooth fracture, displacement or loss, causing negative functional, esthetic and psychological effects on individuals (children, adolescents and adults) [1-3]. Previous studies reported prevalence rates ranging from 6% to 27% in different populations [4-10]. In Brazil the prevalence varies widely, ranging from 10% to 58% [11-16]. Possible explanations for this variation include differences in places/environments, diagnostic criteria and examination methods [17].\nThe etiology and predisposing factors of traumatic injuries are well established in the literature. However, the impact of socio-economic indicators remains conflicting and unclear [18,19]. Increased violence rates, numbers of car accidents and greater participation of children in sports activities contribute to making dental trauma an emerging public health problem. The greater availability of, and access to, leisure devices with risk potential has also remarkably increased the number of cases [15]. Glendor [20] conducted a literature review on the etiology of and reported risk factors for traumatic dental injuries and concluded that the number of causes of TDIs has increased alarmingly over the last decades. The author suggested that this phenomenon may be associated with the increased interest in the causes, and that it also reflects the complex etiology of TDIs. The investigator further concluded that not only risk factors such as overjet and inadequate lip coverage contribute to the increase in TDIs, but also the complex interaction between the oral status of the patient, the design of public parks and school playgrounds, and human behavior. The question is to what extent these factors, together or separately, influence the risk of TDI.\nStudies have consistently shown that male individuals have a higher chance of TDI than female individuals [8,10,17]. 
Socio-economic status has been associated with several oral diseases and conditions, such as dental caries, periodontal diseases, tooth loss, and oral cancer. Nevertheless, the association between TDI and socio-economic indicators remains unclear [14,21,22]. Although some researchers have reported that schoolchildren with lower socio-economic status are more likely to suffer TDI [2,13,14,17,19], others have shown an inverse correlation, with wealthier children having a higher risk of TDI [9,13]. A review paper concluded that there are few studies correlating TDI in permanent teeth with socio-economic indicators and that the majority did not find such an association [22].\nAmong the physical factors, increased overjet and inadequate lip coverage have been consistently associated with TDI [12,13,19,21,23]. A systematic review using meta-analysis stated that an overjet greater than 3 mm increases the chance of dental trauma. Another study considered inadequate lip coverage a more important risk factor for the occurrence of TDI than increased overjet alone [24].\nThe purpose of this study was to assess the prevalence of TDI and its association with sociodemographic and physical characteristics in anterior permanent teeth of 12-year-old schoolchildren in the city of Brasília – DF, Brazil.", "The authors declare that they have no competing interests.", "MLVF conceived of the study, participated in its design and coordination, carried out the population-based study and drafted the manuscript. JAJF participated in the design of the study, carried out the population-based study and performed the statistical analysis. ACBB participated in the design and coordination of the study as its advisor. MISGC participated in the design of the study as co-advisor of the study. EDCJ participated in the design of the study and helped to draft the manuscript. 
All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6831/14/91/prepub\n" ]
[ null, null, null, null ]
[ "Background", "Subjects and methods", "Results", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
[ "Traumatic dental injuries (TDI) have been extensively studied over the last few decades. They result in tooth fracture, displacement or loss, causing negative functional, esthetic and psychological effects to the individuals (children, adolescents and adults) [1-3]. Previous studies reported prevalence rates ranging from 6% to 27% in different populations [4-10]. In Brazil the prevalence varies widely, ranging from 10% to 58% [11-16]. The possible explanations for this variation include differences in places/environments, diagnostic criteria and examination methods [17].\nEtiology and predisposing factors of traumatic injuries are well established in the literature. However, impact of socio-economic indicators remains conflicting and unclear [18,19]. The increased violence rates, number of car accidents and greater participation of children in sports activities contribute to make dental trauma an emerging public health problem. Also, the greater availability and access of leisure devices with potential risk have remarkably increased the number of cases [15]. Glendor [20] conducted a literature review on the etiology and reported risk factors for traumatic dental injuries and concluded that the number of causes of TDIs have alarmingly increased over the last decades. The author suggested that this phenomenon may be associated to the increased interest on the causes and also evidences the complex etiology of TDIs. The investigator also concluded that not only risk factors as overjet and inadequate lip coverage contribute to increase the TDIs, but also the complex interaction between the oral status of the patient, design of public parks and school playgrounds and human behavior. The question is to what extent these factors, together or separately, influence the risk of TDI.\nStudies have consistently shown that male individuals have a higher chance of TDI than female individuals [8,10,17]. 
Socio-economic status has been associated with several oral diseases and conditions, such as dental caries, periodontal diseases, tooth loss, and oral cancer. Nevertheless, the association between TDI and socio-economic indicators remains unclear [14,21,22]. Although some researchers have reported that schoolchildren with lower socio-economic status are more likely to suffer TDI [2,13,14,17,19], others have shown an inverse correlation, with wealthier children having a higher risk of TDI [9,13]. A review paper concluded that there are few studies correlating TDI in permanent teeth with socio-economic indicators and that the majority did not find such an association [22].\nAmong the physical factors, increased overjet and inadequate lip coverage have been consistently associated with TDI [12,13,19,21,23]. A systematic review using meta-analysis stated that an overjet greater than 3 mm increases the chance of dental trauma. Another study considered inadequate lip coverage a more important risk factor for the occurrence of TDI than increased overjet alone [24].\nThe purpose of this study was to assess the prevalence of TDI and its association with sociodemographic and physical characteristics in anterior permanent teeth of 12-year-old schoolchildren in the city of Brasília – DF, Brazil.", "This study was approved by the Institutional Review Board of the Health Sciences School at the University of Brasília, DF, Brazil. The Education Secretariat of the Government of Distrito Federal (GDF) authorized the study and provided the necessary information for the sample registry, which was updated on the examination date. 
The following data were obtained: the names of all schools in Brasília, their addresses and the total number of 12-year-old students registered in each school.\nA cross-sectional, population-based study was conducted on a sample of 1389 boys and girls aged 12 years, enrolled at public and private fundamental schools in the Administrative Region (RA) of Brasília, Brazil.\nThe sample size was calculated based on a sampling error of 1.7%, a significance level of 5%, a prevalence of dental injuries of 20% and a population of 4,000 students aged 12 years registered in public and private schools in Brasília, according to the school census of 2011.\nA total of 83 fundamental schools in the administrative region of Brasília, 43 public and 40 private, were initially contacted about their interest in participating in the study. Only one public school declined to participate, while only 23 private schools agreed to participate. A letter was sent to all parents or caretakers of the selected children explaining the objectives, characteristics and importance of the study. Within each school, the study was conducted only on children whose parents or caretakers signed the consent form. The final sample was composed of 787 students of public schools and 658 students of private schools; for 1,118 of these children it was possible to obtain information on all variables analyzed, yielding a response rate of 80.45% based on the planned sample (Figure 1).\nSample calculation and response rate of the study. Brasília, DF, 2012.\nSocio-demographic data included the type of school (public or private), sex and educational level of the caretaker in completed years of study. The socio-economic data on the caretakers were collected by a questionnaire previously applied in another epidemiological survey [25]. This socio-economic questionnaire was used during the last Brazilian Oral Health Survey, in which a prevalence of 20.5% of traumatic dental injuries was found. 
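The sample of 1,389 reported above can be reproduced from the stated parameters with the standard sample-size formula for a proportion plus a finite population correction. The exact formula used is my assumption (the paper does not spell it out), but it matches the reported number:

```python
import math

def sample_size(p, error, population, z=1.96):
    """n0 = z^2 * p * (1 - p) / e^2, then finite population correction."""
    n0 = z**2 * p * (1 - p) / error**2          # ~2127 for these inputs
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 20% expected prevalence, 1.7% sampling error, 5% significance (z = 1.96),
# population of 4,000 twelve-year-olds per the 2011 school census
print(sample_size(p=0.20, error=0.017, population=4000))  # -> 1389
```

Without the finite population correction the uncorrected n0 is about 2,127, larger than half the population of 4,000, which is why the correction matters here.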
This questionnaire was divided into four parts. The first part comprised the identification data. Similarly, in this study, we confirmed the student’s identification data, race, sex, birth date, and educational level of the parents/caretakers. The second part comprised data on the socio-economic characteristics of the family: number of persons living in the house; number of bedrooms; material goods (TV, refrigerator, stereo, microwave oven, washing machine, number of cars; ranging from zero to ten goods); and the family income (the sum of incomes received by each person living in the house, ranging from R$ 250.00 to 9,500.00). The third part comprised questions on the educational level and years of schooling of the parents, oral morbidity and use of oral health services. Finally, the fourth part contained three questions on the parents’ oral health self-perception and its impacts. A section containing questions on TDI was added to this questionnaire. The TDI questions aimed to gather information on the parents’ self-perception relating to tooth trauma (first-aid notions in cases of tooth trauma, occurrence of accidents involving the mouth/teeth within the family). If tooth trauma within the family was positively reported, we investigated which dentition was involved (primary or permanent), which type of trauma occurred, and whether immediate care was provided. This form was sent to parents who agreed to participate in the study before the clinical examination of the children “See Additional file 1.” The clinical forms and questionnaires were previously tested and did not require adjustments. A pilot study was conducted on thirty parents from the same sample to test the questionnaire. The results revealed that it was feasible in the local situation. The first thirty socio-economic questionnaires were used to test the research instrument adapted for this present study. 
The instrument was well understood by the participants and was considered effective for data collection. Thus, no adjustments were necessary. These questionnaires were included in the general sample.\nClinical data on dental trauma, lip coverage and incisal overjet were collected by oral examination. The etiology, site and age at occurrence of dental trauma were obtained by direct interview with the child. The criteria for classification of trauma were those used in the Children's Dental Health Survey in the United Kingdom [26]. These criteria include tooth fractures, discoloration and tooth loss due to trauma to the permanent dentition. The incisal overjet was coded as smaller than or equal to 5 mm or greater than 5 mm, after measurement of the greatest distance between the incisal edges of the maxillary incisors and the incisal edges of the corresponding mandibular teeth using a CPI periodontal probe. The anterior maxillary overjet was measured with the mandibular and maxillary teeth in centric occlusion, with the aid of a CPI periodontal probe placed parallel to the occlusal plane. The overjet is the greatest distance in mm between the incisal edges of the maxillary incisors and the incisal edges of the corresponding mandibular incisors.\nAnterior mandibular overjet is characterized by an anterior (labial) position of the mandibular incisors in relation to the corresponding maxillary incisors. Mandibular protrusion or crossbite was measured with the aid of a CPI periodontal probe and recorded in millimeters. Vertical open bite was characterized by a lack of overlap between the maxillary and mandibular incisors.\nDuring data collection, lip coverage was considered adequate when the lips touched and entirely covered the anterior teeth while the schoolchildren silently read a document without knowing they were being observed. 
Data were collected by two dentists (Frujeri MLV and Frujeri JAJ) with the help of two annotators, previously trained and calibrated at the Center of Trauma at the Federal University of Minas Gerais (UFMG). The calibration/training exercises were conducted by the professors in charge of the Center of Trauma at the aforementioned university, using photographs and images of different types of trauma and patients with dentoalveolar trauma assisted at the clinics of this center. Diagnostic reproducibility was high: the kappa coefficients for inter-examiner agreement ranged from 0.85 to 1.00, indicating almost perfect to perfect agreement, since in most cases the kappa value was equal to one. The kappa coefficients for intra-examiner agreement were all equal to 1.00, indicating perfect agreement for both examiners.\nThe clinical examinations were performed at the schools, during class hours, in open areas with sufficient natural light, with the children seated on chairs. All biosecurity procedures were strictly followed. Dental mirrors, CPI periodontal probes and gauze were packed and sterilized in numbers sufficient for one day of work. The examination included all permanent anterior teeth (incisors and canines). All teeth were dried before examination to increase diagnostic accuracy. The examiner assessed the existence and type of damage, the treatment carried out, whether the incisal overjet was smaller than or equal to 5 mm or greater than 5 mm, and whether lip coverage was inadequate. The examination was conducted in a uniform fashion, beginning at the maxillary right quadrant and proceeding clockwise to the mandibular arch. When a child was absent on the day of examination, a second visit was made. When tooth trauma was verified through clinical examination, the following characteristics were recorded on a specific sheet: type and site of injury; etiology; teeth damaged. The tooth trauma treatment and material type were also recorded. 
It was also recorded whether traumatized teeth had remained untreated up to the time of the research “See Additional file 2”. In these cases, the parents/caregivers were informed by letter of the importance of both treatment and follow-up of the trauma. A pilot study was conducted on thirty schoolchildren from the same sample to test the methodology. The results revealed that it was feasible in the local situation. Inter- and intra-examiner diagnostic variability was assessed by duplicate examination of 10% of the sample. The Kappa statistic was applied considering each tooth in each situation analyzed. These students were included in the total sample of 1,118 participants used in the analysis.\nData were entered and analyzed in SAS 9.2 for Windows. To evaluate whether type of overjet, lip coverage, location of the school, sex, income and educational level might explain the occurrence of trauma to permanent teeth, a mixed-effects multiple logistic regression model with a random intercept [27] was used to account for intra-school correlation, since the schoolchildren are clustered within schools. From the fitted model, odds ratios and the respective 95% confidence intervals were calculated.", "The prevalence of dental trauma according to the variables analyzed is presented in Table 1.\nPrevalence of trauma to permanent teeth according to the variables analyzed in 12-year-old schoolchildren in the city of Brasília- DF- Brazil, in the year 2012\n*CI- Confidence Interval.\nThe multivariate analysis results are in Table 2.\nLikelihood of TDIs according to the adjusted odds ratio by mixed-effects logistic regression\n*OR- Odds Ratio.\nThe association analyses demonstrated that students in private and public schools may have differed as to the occurrence of trauma to permanent teeth [OR = 1.53; CI 95%: 0.99-2.38; p-value = 0.05]. 
Concerning sex, boys and girls did not differ regarding the occurrence of trauma to permanent teeth [OR = 1.33; CI 95%: 0.96-1.85]. Income and educational level also did not differ with regard to the occurrence of trauma to permanent teeth (p = 0.65 and p = 0.49, respectively). Students with inadequate lip coverage had 8.94 times higher odds of trauma to permanent teeth than those with adequate lip coverage [OR = 8.94; CI 95%: 5.92-13.51; p-value < 0.0001].\nStudents with overjet in the anterior maxilla had 2.98 times higher odds of trauma to a permanent tooth than those with open bite [OR = 2.98; CI 95%: 1.15-7.93; p-value = 0.04]. Even though the lower limit of the confidence interval approaches 1, the interval is strongly asymmetric to the right and suggests that the association between this category of overjet and trauma to permanent teeth is considerable. Likewise, students with overjet in the anterior mandible had 6.43 times higher odds of trauma to a permanent tooth than those with open bite [OR = 6.43; CI 95%: 1.02-30.54; p-value = 0.04].", "The good response rate, calibration process and intra- and inter-examiner reproducibility data contributed to the internal validity of the data. The prevalence of dental trauma in the sample analyzed at Brasília was 14.63% in public schools and 23.40% in private schools. This value is relatively high compared to other studies involving the same type of population and age. Higher values were observed at Blumenau-SC (58.6%) [19] and lower values were reported in the cities of Jaraguá do Sul –SC (15.3%) [11], Belo Horizonte-MG (13.6%) [12], Anápolis- GO (16.5%) [16], Florianópolis –SC (18.9%) [28], Campina Grande- PB (21%) [29], Recife –PE (23.3%) [13] and Herval d'Oeste –SC (17.3%) [15].\nAccording to the literature, the male gender is at higher risk of TDI. 
Usually, boys are more active and engage in more vigorous physical activities such as contact sports, fights and rougher play, and use toys and equipment with higher risk potential without adequate protection. In this study the prevalence in males was higher than in females, yet this difference was not statistically significant (p = 0.0850), differing from most published studies [1,29]. Some studies likewise did not report this difference [13,15,30]. According to a previous study, it is possible that, with the greater participation of girls in contact sports and games previously typical of boys, this difference might be reduced or even disappear [20].\nThe present results were equivocal about differences in prevalence between children of public and private schools and also in relation to the income and educational level of the parents. Published data in the dental literature are conflicting.\nSome studies demonstrate a significant association between prevalence and variables indicating better socio-economic condition [12,31], others corroborate the present study and did not report an association [10,24], while still others observed higher prevalence in children of lower socio-economic status [13,30]. There may be an interaction between the individual socio-economic condition and the physical environment. This is explained by the fact that greater access to leisure goods and equipment may be associated with children of higher socio-economic level. For example, wealthier children have access to items such as bicycles, skateboards, horse riding, swimming pools and water skiing. Such equipment, when used without safety measures, may increase the prevalence. Conversely, less favored children are more exposed to public areas and recreation parks. Probably the individual mode of interaction with the environment determines the occurrence of dental trauma. 
Considering these inconclusive findings, further studies are necessary to elucidate the effect of socio-economic condition on the occurrence of dental trauma.\nThe relationship between overjet (OJ) and TDI has been investigated by different authors [11,15,32], whose studies demonstrate that individuals with an overjet greater than 5 mm are at higher risk of TDIs compared to those with normal overjet. This study corroborates these findings, since it showed a significant association between the presence of TDI and overjet. Students with an anterior maxillary overjet greater than or equal to 5 mm had 2.98 times higher odds of trauma to permanent teeth than those with open bite, and students with anterior mandibular overjet had 6.43 times higher odds of trauma to permanent teeth than those with open bite. Other studies also demonstrated this relationship [9,16]. Therefore, it may be inferred that increased overjet is an important risk factor for dental trauma [14,33,34].\nFinally, concerning inadequate lip coverage, this study revealed results similar to other published investigations [19,35,36] that considered it the most important and independent risk factor for the occurrence of TDIs in anterior teeth. Bonini et al. (2012) [35] observed that children with malocclusions such as open bite and increased overjet, associated with inadequate lip coverage, presented a high prevalence of TDIs compared to those with adequate lip coverage. It was also observed that malocclusions of anterior teeth (increased overjet and open bite) are significantly associated with TDIs only when inadequate lip coverage is present. The investigators observed that the presence of malocclusions with adequate lip coverage is not an important risk factor for TDIs. 
The findings of this study in the mixed-effects multiple logistic regression model corroborate the results of the aforementioned investigations, since students with inadequate lip coverage had 8.94 times higher odds of trauma to anterior teeth than those with adequate lip coverage (OR = 8.94; CI 95%: 5.92-13.51; p-value < 0.0001). We share with these investigators the idea that this possibly occurs because the lips partly absorb the impact applied to the teeth during the trauma. Since these risk factors may be corrected by orthodontic treatment, dental professionals should clinically diagnose them and inform the children’s caretakers of the need for orthodontic intervention as early as possible [9].", "Sex and educational level of the parents were not associated with trauma. The increased overjet and inadequate lip coverage were significantly associated with dental trauma.", "The authors declare that they have no competing interests.", "MLVF conceived of the study, participated in its design and coordination, carried out the population-based study and drafted the manuscript. JAJF participated in the design of the study, carried out the population-based study and performed the statistical analysis. ACBB participated in the design and coordination of the study as its advisor. MISGC participated in the design of the study as co-advisor of the study. EDCJ participated in the design of the study and helped to draft the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6831/14/91/prepub\n", "Socioeconomic form.\nClick here for file\nInterview and clinical examination of children participating in the survey on dental trauma.\nClick here for file" ]
[ null, "methods", "results", "discussion", "conclusions", null, null, null, "supplementary-material" ]
[ "Tooth injuries", "Prevalence", "Demographic data" ]
Background: Traumatic dental injuries (TDI) have been extensively studied over the last few decades. They result in tooth fracture, displacement or loss, causing negative functional, esthetic and psychological effects to the individuals (children, adolescents and adults) [1-3]. Previous studies reported prevalence rates ranging from 6% to 27% in different populations [4-10]. In Brazil the prevalence varies widely, ranging from 10% to 58% [11-16]. The possible explanations for this variation include differences in places/environments, diagnostic criteria and examination methods [17]. Etiology and predisposing factors of traumatic injuries are well established in the literature. However, impact of socio-economic indicators remains conflicting and unclear [18,19]. The increased violence rates, number of car accidents and greater participation of children in sports activities contribute to make dental trauma an emerging public health problem. Also, the greater availability and access of leisure devices with potential risk have remarkably increased the number of cases [15]. Glendor [20] conducted a literature review on the etiology and reported risk factors for traumatic dental injuries and concluded that the number of causes of TDIs have alarmingly increased over the last decades. The author suggested that this phenomenon may be associated to the increased interest on the causes and also evidences the complex etiology of TDIs. The investigator also concluded that not only risk factors as overjet and inadequate lip coverage contribute to increase the TDIs, but also the complex interaction between the oral status of the patient, design of public parks and school playgrounds and human behavior. The question is to what extent these factors, together or separately, influence the risk of TDI. Studies have consistently shown that male individuals have a higher chance of TDI than female individuals [8,10,17]. 
Socio-economic status has been associated with several oral diseases and conditions, such as dental caries, periodontal diseases, tooth loss, and oral cancer. Nevertheless, the association between TDI and socio-economic indicators remains unclear [14,21,22]. Although some researchers have reported that schoolchildren with lower socio-economic status are more likely to suffer TDI [2,13,14,17,19], others have shown an inverse correlation, with wealthier children having a higher risk of TDI [9,13]. A review paper concluded that there are few studies correlating TDI in permanent teeth with socio-economic indicators and that the majority did not find such association [22]. Among the physical factors, increased overjet and inadequate lip has been consistently associated with TDI [12,13,19,21,23]. A systematic review using meta-analysis stated that an overjet greater than 3 mm increases the chance of dental trauma. Other study considered that inadequate lip coverage is a more important risk factor for the occurrence of TDI than the increased overjet separately [24]. The purpose of this study was to assess the prevalence of TDI and its association with sociodemographic and physical characteristics in anterior permanent teeth of 12-year-old schoolchildren at the city of Brasília – DF, Brazil. Subjects and methods: This study was approved by the Institutional Review Board of the Health Sciences School at the University of Brasília, DF, Brazil. The Education Secretariat of the Government of Distrito Federal (GDF) authorized the study and provided the necessary information for the sample registry, which was updated on the examination date. The following data were obtained: name of all schools at Brasília, their addresses and total number of students registered in each school, at the age of 12 years. 
A cross-sectional, population-based study was conducted on a sample of 1,389 boys and girls aged 12 years, enrolled at public and private fundamental schools at the Administrative Region (RA) of Brasília, Brazil. The sample size was calculated based on a sample error of 1.7%, a significance level of 5%, a prevalence of dental injuries of 20% and a population of 4,000 students aged 12 years registered in public and private schools at Brasília, according to the school census of 2011. A total of 83 fundamental schools at the Administrative Region of Brasília (43 public and 40 private) were initially contacted about their interest in participating in the study. Only one public school declined to participate, while only 23 private schools agreed to participate. A letter was sent to all parents or caretakers of the selected children explaining the objectives, characteristics and importance of the study. Within each school, the study was conducted only on children whose parents or caretakers signed the consent form. The final sample was composed of 787 students of public schools and 658 students of private schools, among whom information on the variables analyzed could be obtained for 1,118 children, yielding a response rate of 80.45% based on the planned sample (Figure 1). Sample calculation and response rate of the study. Brasília, DF, 2012. Socio-demographic data included the type of school (public or private), sex and educational level of the caretaker in completed years of study. The socio-economic data on the caretakers were collected by a questionnaire previously applied in another epidemiological survey [25]. This socio-economic questionnaire was used during the last Brazilian Oral Health Survey, in which a prevalence of traumatic dental injuries of 20.5% was found. The questionnaire was divided into four parts. The first part comprised the identification data. 
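The sample-size figures above (1.7% sample error, 5% significance level, 20% expected prevalence, population of 4,000) are consistent with the standard prevalence formula with finite population correction; a minimal sketch (the function name is illustrative, not from the study):

```python
import math

def prevalence_sample_size(p, margin, population, z=1.96):
    """Sample size for estimating a prevalence, with finite
    population correction (z = 1.96 for a 5% significance level)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    return math.ceil(population * n0 / (n0 + population - 1))

# 20% prevalence, 1.7% sample error, 4,000 students -> 1389
print(prevalence_sample_size(0.20, 0.017, 4000))
```

The result reproduces the planned sample of 1,389 schoolchildren reported above.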
Similarly, in this study, we confirmed the student's identification data, race, sex, birth date, and educational level of the parents/caretakers. The second part was composed of data on the socio-economic characteristics of the family: number of persons living in the house; number of bedrooms; material goods (TV, refrigerator, stereo, microwave oven, washing machine, number of cars; ranging from zero to ten goods); and the family income (the sum of incomes received by each person living in the house, ranging from R$ 250.00 to 9,500.00). The third part comprised questions on the educational level (in years of study) of the parents, oral morbidity and use of oral health services. Finally, the fourth part contained three questions on the parents' self-perception of oral health and its impacts. A section containing questions on TDI was added to this questionnaire. The TDI questions aimed to gather information on the parents' self-perception regarding tooth traumas (first-aid notions in cases of tooth trauma, and the occurrence of accidents involving the mouth/teeth within the family). If tooth trauma within the family was positively reported, we investigated which dentition was involved (primary or permanent), which type of traumatism occurred, and whether immediate care was provided. This form was sent to parents who agreed to participate in the study before the clinical examination of the children "See Additional file 1." The clinical forms and questionnaires were previously tested and did not require adjustments. A pilot study was conducted on thirty parents of the same sample to test the questionnaire. The results revealed that it was feasible in the local situation. The first thirty socio-economic questionnaires were used to test the research instrument adapted for this study. The instrument was well understood by the participants and was considered effective for data collection. Thus, no adjustments were necessary. 
These questionnaires were included in the general sample. Clinical data on dental trauma, lip coverage and incisal overjet were collected by oral examinations. The etiology, site of occurrence of dental trauma and age at the occurrence of trauma were obtained by direct interview with the child. The criteria for classification of trauma were the same as those used in the Children's Dental Health Survey in the United Kingdom [26]. These criteria include tooth fractures, discoloration and loss due to trauma to the permanent dentition. The incisal overjet was coded as smaller than or equal to 5 mm or greater than 5 mm, after measurement of the greatest distance between the incisal edges of the maxillary incisors in relation to the incisal edges of the corresponding mandibular teeth using a CPI periodontal probe. The anterior maxillary overjet was measured with the mandibular and maxillary teeth in centric occlusion, with the aid of a CPI periodontal probe placed parallel to the occlusal plane. The overjet is the greatest distance in mm between the incisal edges of the maxillary incisors in relation to the incisal edges of the corresponding mandibular incisors. Anterior mandibular overjet is characterized by the anterior (labial) position of the mandibular incisors in relation to the corresponding maxillary incisors. Mandibular protrusion or crossbite was measured with the aid of a CPI periodontal probe and recorded in millimeters. Vertical open bite was characterized by a lack of overlap between the maxillary and mandibular incisors. During data collection, lip coverage was considered adequate when the lips touched, entirely covering the anterior teeth, while the schoolchildren silently read a document without knowing they were being observed. Data were collected by two dentists (Frujeri MLV and Frujeri JAJ) with the help of two annotators, previously trained and calibrated at the Center of Trauma at the Federal University of Minas Gerais (UFMG). 
The calibration/training exercises were conducted by the professors in charge of the Center of Trauma at the aforementioned university, using photographs and images of different types of traumas and patients suffering dentoalveolar traumas assisted at the clinics of this center. The degree of diagnostic reproducibility was high: the kappa coefficients for inter-examiner agreement ranged from 0.85 to 1.00, indicating almost perfect to perfect agreement, since in most cases the kappa value was equal to one. The kappa coefficients for intra-examiner agreement were all equal to 1.00, indicating perfect agreement for both examiners. The clinical examinations were performed at the schools, during classes, in open areas with enough natural light, with the children seated on chairs. All biosecurity procedures were strictly followed. Dental mirrors, CPI periodontal probes and gauze were packed and sterilized in sufficient numbers for one day of work. The examination included all permanent anterior teeth (incisors and canines). All teeth were dried before examination to increase the accuracy of the diagnosis. The examiner assessed the existence and type of damage, treatment carried out, whether the incisal overjet was smaller than or equal to 5 mm or greater than 5 mm, and whether lip coverage was inadequate. The examination was conducted in a uniform fashion, beginning from the maxillary right quadrant and proceeding to the mandibular quadrants in a clockwise direction. When a child was absent on the day of examination, a second visit was made. When tooth trauma was verified through clinical examination, the following characteristics were recorded on a specific sheet: type and site of injury; etiology; teeth damaged. The trauma treatment and type of material used were also recorded, as was whether teeth that had undergone trauma remained untreated at the time of the survey "See Additional file 2". 
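Inter- and intra-examiner agreement of this kind is conventionally summarised with Cohen's kappa, which corrects observed agreement for the agreement expected by chance; a minimal sketch (illustrative data, not the study's actual computation):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes
    (e.g. trauma present/absent) to the same set of teeth."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal proportions
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    if expected == 1:  # both raters used one identical code throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Perfect agreement on 6 teeth -> kappa = 1.0
print(cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 1, 0]))
```

With agreement no better than chance the statistic drops to 0, which is why the duplicate examinations described above target values near 1.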
In these cases, the parents/caregivers were informed by letter of the importance of both trauma treatment and follow-up. A pilot study was conducted on thirty schoolchildren of the same sample to test the methodology. The results revealed that it was feasible in the local situation. The inter- and intra-examiner diagnostic variability was assessed by duplicate examination of 10% of the sample. Kappa statistics were applied considering each tooth in each situation analyzed. These students were included in the total sample of 1,118 participants used in the analysis. Data were entered and analyzed with the software SAS 9.2 for Windows. To evaluate whether the type of overjet, lip coverage, location of the school, sex, income and educational level might explain the occurrence of trauma in permanent teeth, a mixed-effects multiple logistic regression model with random intercept [27] was used to account for intra-school correlation, since the schoolchildren are clustered within schools. From the adjusted model, odds ratios and their respective 95% confidence intervals were calculated. Results: The prevalence of dental trauma according to the variables analyzed is presented in Table 1. Prevalence of trauma to permanent teeth according to the variables analyzed in 12-year-old schoolchildren at the city of Brasília- DF- Brazil, in the year 2012 *CI- Confidence Interval. The multivariate analysis results are in Table 2. Likelihood of TDIs according to the adjusted odds ratio by mixed-effects logistic regression *OR- Odds Ratio. The results of the association analyses indicated that students in private and public schools may have differed as to the occurrence of traumas in permanent teeth [OR = 1.53; CI 95%: 0.99-2.38; p-value = 0.05]. Concerning gender, boys and girls did not differ regarding the occurrence of traumas in permanent teeth [OR = 1.33; CI 95%: 0.96 – 1.85]. 
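Each adjusted odds ratio and its 95% confidence interval reported here follow from a fitted logistic-regression coefficient as exp(beta) and exp(beta ± 1.96·SE); a minimal sketch with illustrative values (not taken from the study's model output):

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative coefficient and standard error, chosen for the example only
or_est, lower, upper = odds_ratio_with_ci(0.425, 0.22)
```

Because the interval is computed on the log-odds scale and then exponentiated, it is asymmetric around the odds ratio, which is why wide right-skewed intervals appear for the rarer overjet categories below.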
The income and educational level did not differ concerning the occurrence of traumas in permanent teeth (p = 0.65 and p = 0.49, respectively). It was observed that students with inadequate lip coverage had 8.94 times higher odds of trauma to permanent teeth than those with adequate lip coverage [OR = 8.94; CI 95%: 5.92-13.51; p-value < 0.0001]. It was also observed that students with overjet in the anterior maxilla had 2.98 times higher odds of trauma to permanent teeth than those with open bite [OR = 2.98; CI 95%: 1.15-7.93; p-value = 0.04]. Although the lower limit of the confidence interval approaches 1, the interval is strongly asymmetric to the right, suggesting that the association between this category of overjet and trauma to permanent teeth is considerable. Likewise, students with overjet in the anterior mandible had 6.43 times higher odds of trauma to permanent teeth than those with open bite [OR = 6.43; CI 95%: 1.02-30.54; p-value = 0.04]. Discussion: The good response rate, calibration process and intra- and inter-examiner reproducibility data contributed to the internal validity of the data. The prevalence of dental trauma in the sample analyzed at Brasília was 14.63% in public schools and 23.40% in private schools. This value is relatively high compared with other studies involving the same type of population and age. Higher values were observed at Blumenau-SC (58.6%) [19] and lower values were reported at the cities of Jaraguá do Sul –SC (15.3%) [11], Belo Horizonte-MG (13.6%) [12], Anápolis- GO (16.5%) [16], Florianópolis –SC (18.9%) [28], Campina Grande- PB (21%) [29], Recife –PE (23.3%) [13] and Herval d'Oeste –SC (17.3%) [15]. According to the literature, males are at higher risk of TDI. Usually, boys are more active and engage in stronger physical activities such as contact sports, fights and rougher play, and use toys and equipment with higher risk potential without adequate protection. 
In this study the prevalence in males was higher than in females, yet this difference was not statistically significant (p = 0.0850), differing from most published studies [1,29]. Some studies also did not report this difference [13,15,30]. According to a previous study, it is possible that, with the greater participation of girls in contact sports and play previously typical of boys, this difference might be reduced or even disappear [20]. The present results were equivocal about differences in prevalence between children of public and private schools and also in relation to the income and educational level of the parents. Published data in the dental literature are conflicting. Some studies demonstrate a significant association between prevalence and variables indicating better socio-economic condition [12,31], others corroborate the present study and did not report an association [10,24], while others observed a higher prevalence in children of lower socio-economic status [13,30]. There may be an interaction between the individual socio-economic condition and the physical environment. This is explained by the fact that greater access to leisure goods and equipment may be associated with children of higher socio-economic level. For example, wealthier children have access to items such as bicycles, skateboards, horse riding, swimming pools and water skiing. Such equipment, when used without safety measures, may determine the increased prevalence. Conversely, less favored children are more exposed to public areas and recreation parks. Probably the individual mode of interaction with the environment determines the occurrence of dental trauma. Considering these inconclusive findings, further studies are necessary to elucidate the effect of socio-economic condition on the occurrence of dental trauma. 
The relationship between overjet (OJ) and TDI has been investigated by different authors [11,15,32], whose studies demonstrate that individuals with overjet greater than 5 mm are at higher risk of TDIs than those with normal overjet. This study corroborates these findings, since it evidenced a significant association between the presence of TDI and overjet. It was observed that students with anterior maxillary overjet greater than or equal to 5 mm had 2.98 times higher odds of trauma to permanent teeth than those with open bite, and students with anterior mandibular overjet had 6.43 times higher odds of trauma to permanent teeth than those with open bite. Other studies have also demonstrated this relationship [9,16]. Therefore, it may be inferred that increased overjet is an important risk factor for dental trauma [14,33,34]. Finally, concerning inadequate lip coverage, this study revealed results similar to other published investigations [19,35,36] that considered it the most important independent risk factor for the occurrence of TDIs in anterior teeth. Bonini et al. (2012) [35] observed that children with malocclusions such as open bite and increased overjet, associated with inadequate lip coverage, presented a higher prevalence of TDIs compared with those with adequate lip coverage. It was also observed that malocclusions of anterior teeth (increased overjet and open bite) are significantly associated with TDIs only when inadequate lip coverage is present. The investigators observed that the presence of malocclusions with adequate lip coverage is not an important risk factor for TDIs. The findings of this study in the mixed-effects multiple logistic regression model corroborate the results of the aforementioned investigations, since students with inadequate lip coverage had 8.94 times higher odds of trauma to anterior teeth than those with adequate lip coverage (OR = 8.94; CI 95%: 5.92-13.51; p-value < 0.0001). 
We share these investigators' view that this possibly occurs because the lips partly absorb the impact applied to the teeth during the trauma. Since these risk factors may be corrected by orthodontic treatment, dental professionals should clinically diagnose them and inform the children's caretakers of the need for orthodontic intervention as early as possible [9]. Conclusion: Sex and educational level of the parents were not associated with trauma. The increased overjet and inadequate lip coverage were significantly associated with dental trauma. Competing interests: The authors declare that they have no competing interests. Authors' contributions: MLVF conceived of the study, participated in its design and coordination, carried out the population-based study and drafted the manuscript. JAJF participated in the design of the study, carried out the population-based study and performed the statistical analysis. ACBB participated in the design and coordination of the study as an advisor. MISGC participated in the design of the study as a co-advisor. EDCJ participated in the design of the study and helped to draft the manuscript. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6831/14/91/prepub Supplementary Material: Socioeconomic form. Interview and clinical examination of children participating in the survey on dental trauma.
Background: This study assessed the prevalence of traumatic dental injuries (TDI) and its association with sociodemographic and physical characteristics in the anterior permanent teeth of 12-year-old schoolchildren at the city of Brasília - DF, Brazil. Methods: A cross-sectional, population-based study was conducted on a sample of 1,389 boys and girls aged 12 years, enrolled in public and private fundamental schools at the Administrative Region (RA) of Brasília, Brazil, from October 2011 to September 2012. The demographic details were obtained by a structured questionnaire. The study recorded the type of damage, the size of incisal overjet, and whether lip coverage was inadequate. Sociodemographic data included sex, income and educational level of the parents or caretakers. Results: A total of 1,118 schoolchildren were examined, yielding a response rate of 80.48%. The prevalence of TDI was 14.63% in public schools and 23.40% in private schools. The students did not differ according to sex, income and educational level of the parents or caretakers concerning the occurrence of traumas in permanent anterior teeth. Increased overjet and inadequate lip coverage were found to be important contributing factors for TDIs. Conclusions: In conclusion, this study showed an expressive prevalence of TDI in 12-year-old schoolchildren at Brasília - DF, Brazil. Sex and educational level of the parents were not associated with trauma. The increased overjet and inadequate lip coverage were significantly associated with dental trauma.
Background: Traumatic dental injuries (TDI) have been extensively studied over the last few decades. They result in tooth fracture, displacement or loss, causing negative functional, esthetic and psychological effects on the affected individuals (children, adolescents and adults) [1-3]. Previous studies reported prevalence rates ranging from 6% to 27% in different populations [4-10]. In Brazil the prevalence varies widely, ranging from 10% to 58% [11-16]. Possible explanations for this variation include differences in places/environments, diagnostic criteria and examination methods [17]. The etiology and predisposing factors of traumatic injuries are well established in the literature. However, the impact of socio-economic indicators remains conflicting and unclear [18,19]. Increased violence rates, the number of car accidents and the greater participation of children in sports activities contribute to making dental trauma an emerging public health problem. Also, the greater availability of, and access to, leisure devices with risk potential has remarkably increased the number of cases [15]. Glendor [20] conducted a literature review on the etiology and reported risk factors for traumatic dental injuries and concluded that the number of causes of TDIs has increased alarmingly over the last decades. The author suggested that this phenomenon may be associated with increased interest in the causes and also evidences the complex etiology of TDIs. The investigator also concluded that not only risk factors such as overjet and inadequate lip coverage contribute to the occurrence of TDIs, but also the complex interaction between the oral status of the patient, the design of public parks and school playgrounds, and human behavior. The question is to what extent these factors, together or separately, influence the risk of TDI. Studies have consistently shown that male individuals have a higher chance of TDI than female individuals [8,10,17]. 
Socio-economic status has been associated with several oral diseases and conditions, such as dental caries, periodontal diseases, tooth loss, and oral cancer. Nevertheless, the association between TDI and socio-economic indicators remains unclear [14,21,22]. Although some researchers have reported that schoolchildren with lower socio-economic status are more likely to suffer TDI [2,13,14,17,19], others have shown an inverse correlation, with wealthier children having a higher risk of TDI [9,13]. A review paper concluded that there are few studies correlating TDI in permanent teeth with socio-economic indicators and that the majority did not find such an association [22]. Among the physical factors, increased overjet and inadequate lip coverage have been consistently associated with TDI [12,13,19,21,23]. A systematic review using meta-analysis stated that an overjet greater than 3 mm increases the chance of dental trauma. Another study considered inadequate lip coverage a more important risk factor for the occurrence of TDI than increased overjet alone [24]. The purpose of this study was to assess the prevalence of TDI and its association with sociodemographic and physical characteristics in anterior permanent teeth of 12-year-old schoolchildren at the city of Brasília – DF, Brazil. Conclusion: Sex and educational level of the parents were not associated with trauma. The increased overjet and inadequate lip coverage were significantly associated with dental trauma.
Background: This study assessed the prevalence of traumatic dental injuries (TDI) and its association with sociodemographic and physical characteristics in the anterior permanent teeth of 12-year-old schoolchildren at the city of Brasília - DF, Brazil. Methods: A cross-sectional, population-based study was conducted on a sample of 1,389 boys and girls aged 12 years, enrolled in public and private fundamental schools at the Administrative Region (RA) of Brasília, Brazil, from October 2011 to September 2012. The demographic details were obtained by a structured questionnaire. The study recorded the type of damage, the size of incisal overjet, and whether lip coverage was inadequate. Sociodemographic data included sex, income and educational level of the parents or caretakers. Results: A total of 1,118 schoolchildren were examined, yielding a response rate of 80.48%. The prevalence of TDI was 14.63% in public schools and 23.40% in private schools. The students did not differ according to sex, income and educational level of the parents or caretakers concerning the occurrence of traumas in permanent anterior teeth. Increased overjet and inadequate lip coverage were found to be important contributing factors for TDIs. Conclusions: In conclusion, this study showed an expressive prevalence of TDI in 12-year-old schoolchildren at Brasília - DF, Brazil. Sex and educational level of the parents were not associated with trauma. The increased overjet and inadequate lip coverage were significantly associated with dental trauma.
3,813
281
9
[ "trauma", "study", "overjet", "teeth", "dental", "lip", "children", "permanent", "coverage", "lip coverage" ]
[ "test", "test" ]
[CONTENT] Tooth injuries | Prevalence | Demographic data [SUMMARY]
[CONTENT] Tooth injuries | Prevalence | Demographic data [SUMMARY]
[CONTENT] Tooth injuries | Prevalence | Demographic data [SUMMARY]
[CONTENT] Tooth injuries | Prevalence | Demographic data [SUMMARY]
[CONTENT] Tooth injuries | Prevalence | Demographic data [SUMMARY]
[CONTENT] Tooth injuries | Prevalence | Demographic data [SUMMARY]
[CONTENT] Brazil | Causality | Child | Cross-Sectional Studies | Cuspid | Educational Status | Female | Health Knowledge, Attitudes, Practice | Household Articles | Housing | Humans | Incisor | Income | Lip | Male | Overbite | Parents | Population Surveillance | Prevalence | Sex Factors | Socioeconomic Factors | Tooth Injuries [SUMMARY]
[CONTENT] Brazil | Causality | Child | Cross-Sectional Studies | Cuspid | Educational Status | Female | Health Knowledge, Attitudes, Practice | Household Articles | Housing | Humans | Incisor | Income | Lip | Male | Overbite | Parents | Population Surveillance | Prevalence | Sex Factors | Socioeconomic Factors | Tooth Injuries [SUMMARY]
[CONTENT] Brazil | Causality | Child | Cross-Sectional Studies | Cuspid | Educational Status | Female | Health Knowledge, Attitudes, Practice | Household Articles | Housing | Humans | Incisor | Income | Lip | Male | Overbite | Parents | Population Surveillance | Prevalence | Sex Factors | Socioeconomic Factors | Tooth Injuries [SUMMARY]
[CONTENT] Brazil | Causality | Child | Cross-Sectional Studies | Cuspid | Educational Status | Female | Health Knowledge, Attitudes, Practice | Household Articles | Housing | Humans | Incisor | Income | Lip | Male | Overbite | Parents | Population Surveillance | Prevalence | Sex Factors | Socioeconomic Factors | Tooth Injuries [SUMMARY]
[CONTENT] Brazil | Causality | Child | Cross-Sectional Studies | Cuspid | Educational Status | Female | Health Knowledge, Attitudes, Practice | Household Articles | Housing | Humans | Incisor | Income | Lip | Male | Overbite | Parents | Population Surveillance | Prevalence | Sex Factors | Socioeconomic Factors | Tooth Injuries [SUMMARY]
[CONTENT] Brazil | Causality | Child | Cross-Sectional Studies | Cuspid | Educational Status | Female | Health Knowledge, Attitudes, Practice | Household Articles | Housing | Humans | Incisor | Income | Lip | Male | Overbite | Parents | Population Surveillance | Prevalence | Sex Factors | Socioeconomic Factors | Tooth Injuries [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] trauma | study | overjet | teeth | dental | lip | children | permanent | coverage | lip coverage [SUMMARY]
[CONTENT] trauma | study | overjet | teeth | dental | lip | children | permanent | coverage | lip coverage [SUMMARY]
[CONTENT] trauma | study | overjet | teeth | dental | lip | children | permanent | coverage | lip coverage [SUMMARY]
[CONTENT] trauma | study | overjet | teeth | dental | lip | children | permanent | coverage | lip coverage [SUMMARY]
[CONTENT] trauma | study | overjet | teeth | dental | lip | children | permanent | coverage | lip coverage [SUMMARY]
[CONTENT] trauma | study | overjet | teeth | dental | lip | children | permanent | coverage | lip coverage [SUMMARY]
[CONTENT] tdi | risk | increased | factors | socio | economic | socio economic | economic indicators | indicators | socio economic indicators [SUMMARY]
[CONTENT] sample | data | study | incisors | incisal | trauma | mandibular | school | parents | schools [SUMMARY]
[CONTENT] permanent | ci | 95 | trauma permanent | value | ci 95 | teeth | permanent teeth | trauma | permanent tooth [SUMMARY]
[CONTENT] associated | trauma | coverage significantly associated | lip coverage significantly associated | inadequate lip coverage significantly | sex educational level parents | associated trauma increased overjet | associated trauma increased | associated trauma | educational level parents associated [SUMMARY]
[CONTENT] study | trauma | overjet | dental | associated | authors declare competing | competing | declare competing interests | authors declare | declare competing [SUMMARY]
[CONTENT] study | trauma | overjet | dental | associated | authors declare competing | competing | declare competing interests | authors declare | declare competing [SUMMARY]
[CONTENT] TDI | 12-year-old | Brasília - DF | Brazil [SUMMARY]
[CONTENT] 1,389 | 12 years | the Administrative Region | RA | Brasília | Brazil | October 2011 to September 2012 ||| ||| ||| [SUMMARY]
[CONTENT] 1118 | 80.48% ||| TDI | 14.63% | 23.40% ||| ||| [SUMMARY]
[CONTENT] TDI | 12-year-old | Brazil ||| ||| [SUMMARY]
[CONTENT] ||| TDI | 12-year-old | Brasília - DF | Brazil ||| 1,389 | 12 years | the Administrative Region | RA | Brasília | Brazil | October 2011 to September 2012 ||| ||| ||| ||| 1118 | 80.48% ||| TDI | 14.63% | 23.40% ||| ||| ||| TDI | 12-year-old | Brazil ||| ||| [SUMMARY]
[CONTENT] ||| TDI | 12-year-old | Brasília - DF | Brazil ||| 1,389 | 12 years | the Administrative Region | RA | Brasília | Brazil | October 2011 to September 2012 ||| ||| ||| ||| 1118 | 80.48% ||| TDI | 14.63% | 23.40% ||| ||| ||| TDI | 12-year-old | Brazil ||| ||| [SUMMARY]
Efficacy and safety of CT-P13 (biosimilar infliximab) in patients with rheumatoid arthritis: comparison between switching from reference infliximab to CT-P13 and continuing CT-P13 in the PLANETRA extension study.
27130908
NCT01571219; Results.
TRIAL REGISTRATION NUMBER
This open-label extension study recruited patients with RA who had completed the 54-week, randomised, parallel-group study comparing CT-P13 with RP (PLANETRA; NCT01217086). CT-P13 (3 mg/kg) was administered intravenously every 8 weeks from weeks 62 to 102. All patients received concomitant methotrexate. Endpoints included American College of Rheumatology 20% (ACR20) response, ACR50, ACR70, immunogenicity and safety. Data were analysed for patients who received CT-P13 for 102 weeks (maintenance group) and for those who received RP for 54 weeks and then switched to CT-P13 (switch group).
METHODS
Overall, 302 of 455 patients who completed the PLANETRA study enrolled into the extension. Of these, 158 had received CT-P13 (maintenance group) and 144 RP (switch group). Response rates at week 102 for maintenance versus switch groups, respectively, were 71.7% vs 71.8% for ACR20, 48.0% vs 51.4% for ACR50 and 24.3% vs 26.1% for ACR70. The proportion of patients with antidrug antibodies was comparable between groups (week 102: 40.3% vs 44.8%, respectively). Treatment-emergent adverse events occurred in similar proportions of patients in the two groups during the extension study (53.5% and 53.8%, respectively).
RESULTS
Comparable efficacy and tolerability were observed in patients who switched from RP to its biosimilar CT-P13 for an additional year and in those who had long-term CT-P13 treatment for 2 years.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Antibodies", "Antibodies, Monoclonal", "Antirheumatic Agents", "Arthritis, Rheumatoid", "Biosimilar Pharmaceuticals", "Drug Substitution", "Drug Therapy, Combination", "Female", "Humans", "Infliximab", "Male", "Methotrexate", "Middle Aged", "Treatment Outcome", "Young Adult" ]
5284338
Introduction
Infliximab is a human–murine chimeric monoclonal antibody to tumour necrosis factor (TNF).1 The introduction of infliximab and other biological drugs into clinical practice has dramatically improved the management of a number of immune-mediated inflammatory diseases, including rheumatoid arthritis (RA).2 However, currently available biologics are associated with high costs,3 4 which has led to restricted treatment access for patients with RA in several regions.5–9 A number of biologics used to treat RA—including originator infliximab (Remicade), hereafter referred to as the reference product (RP)—have reached or are approaching patent expiry in many countries. As a consequence, follow-on biologics (also termed ‘biosimilars’) are being developed for the treatment of RA. A biosimilar can be defined as a ‘biotherapeutic product that is similar in terms of quality, safety, and efficacy to an already licensed reference biotherapeutic product’.10 In order to gain approval, it is usually necessary to show that a biosimilar is highly similar to the RP in physicochemical and biological terms. In addition, clinical studies are generally needed to establish statistical equivalence in pharmacokinetics (PK) and efficacy and to characterise biosimilar safety.11–13 Since the first approval of a biosimilar by the European Medicines Agency (EMA) in 2006, a number of these agents, including granulocyte colony-stimulating factors and erythropoietins, have become available in Europe. 
Indeed, the range of therapeutic areas now covered by approved biosimilars is wide and includes cancer, anaemia, neutropenia and diabetes.14 Data for these EMA-approved biosimilars consistently show that they provide comparable efficacy and safety relative to their RPs.15–23 Recently, CT-P13 (Remsima, Inflectra), a biosimilar of the infliximab RP, became the first monoclonal antibody biosimilar to be approved in Europe for use in all indications held by the infliximab RP.24 All major physicochemical characteristics and in vitro biological activities of CT-P13 and the RP, including affinity for both soluble and transmembrane forms of TNF, are highly comparable.24 25 Approval of CT-P13 was partly based on findings from two 54-week, multinational, randomised, double-blind, parallel-group studies, which compared CT-P13 and RP in ankylosing spondylitis (AS) and RA (Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in AS patients (PLANETAS) and Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in RA patients (PLANETRA)). These studies demonstrated that CT-P13 and RP are highly comparable in terms of PK, efficacy, immunogenicity and safety in both RA and AS.26–29 However, an important unanswered question for prescribing physicians is whether it is possible to switch from RP to CT-P13 in patients with RA without any detrimental effects on safety and efficacy.30 Here, we report the findings from an open-label extension of the PLANETRA study. The extension study had two main aims: (1) to investigate the efficacy and safety of switching to CT-P13 in patients previously treated with RP for 54 weeks in PLANETRA (hereafter the 'switch group') and (2) to investigate the longer-term efficacy and safety of extended CT-P13 treatment over 2 years in patients previously treated with CT-P13 in PLANETRA (the 'maintenance group').
To facilitate understanding of the data, the results for the maintenance and switch groups are described both for the main (weeks 0–54) and the extension (weeks 54–102) studies.
Patients and methods
Full details of the methods of the 54-week, randomised, double-blind, parallel-group PLANETRA study have been reported previously,26 27 and are described briefly below.

Patients
PLANETRA recruited patients aged 18–75 years with active RA for ≥1 year according to the 1987 American College of Rheumatology (ACR) classification criteria. Eligible patients had not responded adequately to ≥3 months of treatment with methotrexate (MTX) and had received a stable MTX dose (12.5–25 mg/week) for ≥4 weeks before screening. Patients who had completed the main 54-week PLANETRA study were offered the opportunity to enter the extension study (ClinicalTrials.gov identifier: NCT01571219) for a further year. Those who did not sign new informed consent for the extension study were excluded. Some of the relevant Ministries of Health (MoH) and ethics committees (ECs) did not approve the extension study, mainly because data from the PLANETRA study were not available at the time of EC evaluation; patients from the affected institutions were therefore also excluded. Additional eligibility criteria for the extension study were no major protocol violations in the main study and no new therapy for RA during the extension study. Detailed information on non-participants in the extension study is shown in figure 1. (Figure 1: Patient disposition in the PLANETRA extension study. All patients who enrolled in the extension study (n=158 and 144 in the maintenance and switch groups, respectively) were included in the ITT population. EC, ethics committee; ITT, intent-to-treat; MoH, Ministry of Health; RP, reference product.)

Study design and treatment
This open-label, single-arm extension of PLANETRA was conducted in 69 centres in 16 countries. In the main study, patients received nine infusions of CT-P13 (CELLTRION, Incheon, Republic of Korea) or the infliximab RP (Janssen Biotech, Horsham, Pennsylvania, USA). After study treatment in PLANETRA, eligible patients could choose to continue in the extension study; patients and physicians remained blinded to the treatment received during the main study. All patients participating in and completing the extension study received six infusions of CT-P13 from week 62 to week 102. Throughout the study, CT-P13 was administered as a 2 h intravenous infusion at a fixed dose of 3 mg/kg. At the discretion of the investigator, antihistamines were given 30–60 min before infusion of CT-P13.
MTX (12.5–25 mg/week; oral or parenteral) and folic acid (≥5 mg/week; oral) were coadministered to all patients throughout the main and extension study periods. All patients provided new written informed consent to enrol in the extension study. The study was conducted according to the principles of the Declaration of Helsinki and the International Conference on Harmonisation Good Clinical Practice guidelines.

Study endpoints
Efficacy
Efficacy assessments were made at baseline and at weeks 14, 30, 54, 78 and 102. Efficacy endpoints included the proportion of patients meeting ACR20, ACR50 and ACR70 criteria; change from baseline in mean disease activity score in 28 joints (DAS28); and the proportion of patients meeting European League Against Rheumatism (EULAR) response criteria. Additional assessments included the number of tender and swollen joints, patient assessment of pain, patient and physician global assessments of disease activity, the Health Assessment Questionnaire (HAQ), levels of C reactive protein (CRP), erythrocyte sedimentation rate (ESR) and the hybrid ACR score.

Immunogenicity
The proportion of patients with antidrug antibodies (ADAs) was assessed at baseline and at weeks 14, 30, 54, 78 and 102 using the previously reported method.26 27 The neutralising activity of ADAs was also assessed by a flow-through immunoassay method using the Gyros Immunoassay Platform (Gyros AB, Sweden).

Safety
Treatment-emergent adverse events (TEAEs) were assessed throughout the main and extension studies. Other safety assessments included monitoring of TEAEs of special interest (infusion-related reactions (including hypersensitivity and anaphylactic reaction), tuberculosis (TB), latent TB (defined as positive conversion of an interferon-γ release assay (negative at baseline) with a negative chest X-ray examination), serious infection, pneumonia, drug-induced liver injury, vascular disorders and malignancies), vital signs, physical examination findings and clinical laboratory analyses.

Exploratory and post hoc endpoints
Details of exploratory and post hoc endpoints are given in online supplementary appendix A.
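For reference, the DAS28 (ESR-based) score and the EULAR response categories used as endpoints follow the standard published formulas rather than anything study-specific. A minimal sketch, with illustrative (not study) input values:

```python
import math

def das28_esr(tjc28: int, sjc28: int, esr_mm_h: float, patient_global_vas_mm: float) -> float:
    """Standard DAS28 (ESR-based) formula: tender/swollen joint counts (0-28),
    ESR (mm/h) and patient global assessment (0-100 mm VAS)."""
    return (0.56 * math.sqrt(tjc28)
            + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr_mm_h)
            + 0.014 * patient_global_vas_mm)

def eular_response(baseline_das28: float, current_das28: float) -> str:
    """Standard EULAR response: based on DAS28 improvement and attained level."""
    improvement = baseline_das28 - current_das28
    if improvement > 1.2:
        return "good" if current_das28 <= 3.2 else "moderate"
    if improvement > 0.6:
        return "moderate" if current_das28 <= 5.1 else "none"
    return "none"
```

For example, a patient improving from DAS28 6.0 to 3.0 is a 'good' EULAR responder, while an improvement from 6.0 to 5.8 is 'none'.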
Statistical analyses
All data were analysed descriptively in the maintenance and switch groups. The populations were predefined in the study protocol and statistical analysis plan for participants of the extension study. The efficacy population included all patients who received at least one dose of study treatment and had at least one efficacy measurement in the extension study. Conservatively, ACR response was analysed using non-responder imputation (NRI) for missing values and presented with 95% CIs of the response rate, calculated using an exact binomial approach, in the efficacy population. No imputation of missing values was performed for other efficacy endpoints. The safety population consisted of all patients who enrolled in the study, because all had received study treatment in the preceding study. Data from the main study period were analysed in participants of the extension study only, not in all patients in the main study. Methods for sensitivity analyses of ACR response and statistical analyses of exploratory and post hoc endpoints are included in online supplementary appendix A.
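An 'exact binomial approach' to the 95% CI is conventionally the Clopper–Pearson interval; assuming that is the method intended here, a stdlib-only sketch combining it with NRI (the responder lists are toy data, not study data):

```python
import math

def _binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson_ci(successes: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided binomial CI, found by bisection
    on the binomial tail probabilities."""
    def solve(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # pred(p) holds below the root
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    lower = 0.0 if successes == 0 else solve(
        lambda p: 1 - _binom_cdf(successes - 1, n, p) <= alpha / 2)
    upper = 1.0 if successes == n else solve(
        lambda p: _binom_cdf(successes, n, p) > alpha / 2)
    return lower, upper

def response_rate_with_nri(responses):
    """Response rate where missing values (None) count as non-response (NRI)."""
    n = len(responses)
    k = sum(1 for r in responses if r is True)
    return k / n, clopper_pearson_ci(k, n)
```

The NRI convention means a dropout can only lower the observed response rate, which is why it is described as conservative.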
Results
Patients
The first patient visit of the main PLANETRA study and the last visit of the PLANETRA extension took place between November 2010 and July 2013. Of the 455 patients who completed the main PLANETRA study, 302 consented to participate in the extension study and were screened under the approval of the appropriate MoH/EC (figure 1). All 302 screened patients were enrolled and 301 were treated; one patient in the maintenance group was enrolled but discontinued due to an adverse event (stage IV B-cell lymphoma) before receiving treatment in the extension study. A total of 158 patients had received CT-P13 in the main study (maintenance group) and 144 had received RP (switch group); these patients comprised the intent-to-treat (ITT) population of the extension study. Patient demographics and disease characteristics at baseline and at week 54 of the main study were similar between the two groups (table 1). (Table 1: Patient demographics and disease characteristics at baseline and week 54 of patients enrolled in the PLANETRA extension study (ITT population). Data shown in the table were recorded at the baseline and week 54 visits of the preceding 54-week main study. *Except where indicated otherwise, values are median (range). †Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. ‡Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology; CCP, cyclic citrullinated peptide; CRP, C reactive protein; DAS28, disease activity score in 28 joints; ESR, erythrocyte sedimentation rate; ITT, intent-to-treat; RF, rheumatoid factor; RP, reference product.) In the maintenance and switch groups, respectively, 133 (84.2%) and 128 (88.9%) patients completed the extension phase, while 25 (15.8%) and 16 (11.1%) patients discontinued over the whole extension period. Reasons for patient withdrawal are shown in figure 1.
The efficacy population of the extension study included 152 patients in the maintenance group and 142 in the switch group. Owing to incorrect kits being dispensed, one patient randomly assigned to the RP group received one dose of CT-P13 at week 2 in the PLANETRA main study. Applying a conservative approach, this patient was classified as a member of the CT-P13 group for safety analyses in the main study; the safety population of the extension study therefore comprised 159 patients in the maintenance group and 143 in the switch group. As in the ITT population of the extension study, patient demographics and disease characteristics of non-participants in the extension study were comparable between the CT-P13 and RP groups (see online supplementary appendix B).

Efficacy
Throughout the extension study, ACR20, ACR50 and ACR70 response rates were maintained, and no differences were evident between the groups at weeks 78 and 102 (figure 2).
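As background for the ACR response rates that follow, the ACR20 criterion can be sketched from the standard ACR core-set definition (≥20% improvement in both joint counts plus ≥20% improvement in at least three of the five remaining core measures); the field names and values below are hypothetical, and the study's exact scoring conventions are in the cited methods papers:

```python
def pct_improvement(baseline: float, current: float) -> float:
    """Percentage improvement from baseline (0 if baseline is 0)."""
    return (baseline - current) / baseline * 100 if baseline else 0.0

def acr20(baseline: dict, current: dict) -> bool:
    """ACR20 responder: >=20% improvement in tender (tjc) and swollen (sjc)
    joint counts, plus >=20% improvement in at least 3 of the 5 remaining
    core measures (pain, patient/physician global, HAQ, acute-phase reactant)."""
    core = ["pain", "patient_global", "physician_global", "haq", "crp"]
    joints_ok = all(pct_improvement(baseline[k], current[k]) >= 20
                    for k in ("tjc", "sjc"))
    core_ok = sum(pct_improvement(baseline[k], current[k]) >= 20
                  for k in core) >= 3
    return joints_ok and core_ok
```

ACR50 and ACR70 follow the same structure with 50% and 70% thresholds.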
In the switch group, ACR20, ACR50 and ACR70 response rates were, respectively, 77.5%, 50.0% and 23.9% at week 54 (ie, at the end of RP treatment) and 71.8%, 51.4% and 26.1% at week 102 (ie, 48 weeks after the last infusion of RP at week 54). In the maintenance group, the corresponding response rates were 77.0%, 46.1% and 22.4% at week 54 and 71.7%, 48.0% and 24.3% at week 102. Among patients who participated in the extension study, the proportions achieving ACR20, ACR50 and ACR70 responses during the main study were also similar between the two groups. In a subgroup analysis by ADA status, the proportion of ADA-negative patients achieving ACR20 was 85.7% at week 54 and 82.2% at week 102 in the maintenance group, and 84.7% at week 54 and 82.8% at week 102 in the switch group. In comparison, 68.0% (week 54) and 73.4% (week 102) of ADA-positive patients in the maintenance group, and 70.6% and 73.4% in the switch group, achieved ACR20 (see online supplementary appendix C, figure C-1). (Figure 2: Proportion of patients with rheumatoid arthritis with (A) an ACR20 response, (B) an ACR50 response and (C) an ACR70 response in the maintenance* and switch** groups of the PLANETRA extension study (efficacy population, non-responder imputation approach). CI values are the 95% CIs of the treatment difference. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. **Patients treated with reference product during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology.) No notable differences in other efficacy endpoints were noted between or within the groups at weeks 14, 30, 54, 78 or 102. The results for DAS28 score change and EULAR response criteria are shown in online supplementary appendix D.
The results of assessments of tender and swollen joints, patient assessment of pain, patient and physician global assessments of disease activity, HAQ, levels of CRP, ESR and hybrid ACR score did not differ between groups (see online supplementary appendix E). Sensitivity analyses comparing populations and statistical approaches supported sustained efficacy and comparability between the two groups (see online supplementary appendix F). Analyses using the last observation carried forward (LOCF) approach showed similar results to analyses using the NRI approach, both in the maintenance group (ACR20 at week 102: 74.1% using LOCF vs 71.7% using NRI) and in the switch group (ACR20 at week 102: 77.1% using LOCF vs 71.8% using NRI). Analyses of the main study ITT population using the LOCF approach showed relatively low response rates compared with analyses of the extension study ITT population. However, response rates were comparable between the groups and sustained throughout the 2-year study period, both in the extension study ITT population (ACR20: 74.1% at week 102 vs 75.3% at week 54 in the maintenance group; 77.1% vs 77.1% in the switch group) and in the main study ITT population (ACR20: 61.6% at week 102 vs 62.9% at week 54 in the CT-P13 group; 59.2% vs 59.9% in the RP group). When data for the main study ITT population were analysed using the NRI approach, lower response rates were seen at week 102 than at week 54, although rates remained comparable between the groups (ACR20: 36.1% at week 102 vs 57.0% at week 54 in the CT-P13 group; 33.6% vs 52.0% in the RP group).
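The NRI and LOCF approaches contrasted above differ only in how missed visits are imputed; a minimal sketch for one hypothetical patient's responder series (True/False = responder status, None = missed visit):

```python
def nri(series):
    """Non-responder imputation: a missing (None) visit counts as non-response."""
    return [bool(v) if v is not None else False for v in series]

def locf(series):
    """Last observation carried forward: a missing visit takes the last
    observed value (non-responder if nothing has been observed yet)."""
    out, last = [], False
    for v in series:
        if v is not None:
            last = v
        out.append(last)
    return out

# A hypothetical patient who responds, then drops out after week 30:
visits = [True, True, None, None]   # e.g. weeks 14, 30, 78, 102
print(nri(visits))   # [True, True, False, False] - non-responder after dropout
print(locf(visits))  # [True, True, True, True]  - response carried forward
```

This is why LOCF analyses of a population with dropouts report higher late-visit response rates than NRI analyses of the same data, as seen in the main study ITT comparison above.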
When remission was assessed up to week 102 based on ACR/EULAR criteria (Boolean-based and index-based (Simple Disease Activity Index (SDAI)) definitions), Clinical Disease Activity Index (CDAI), DAS28 and DAS28 low disease activity, the proportion of patients achieving remission or low disease activity was similar between groups throughout the study period (see online supplementary appendix G).

Immunogenicity
The proportion of patients with ADAs was similar between the maintenance and switch groups at each time point during the main and extension studies (table 2). Almost all patients with a positive ADA result also had a positive result for neutralising antibodies (NAbs), and the proportion of patients with a positive NAb result was similar between the two groups. The proportion of ADA-positive patients with sustained ADAs was also highly similar between groups (80.2% and 80.4% in the maintenance and switch groups, respectively). (Table 2: Proportion of patients with RA who were positive for ADAs and NAbs in the main study and the extension study (safety population). Percentages for NAb results are based on the number of positive ADA results at that visit. ADA persistency was defined as transient when a patient tested positive for ADAs at one or more time points but negative at the last available time point; the remaining patients with positive ADA results were considered to have shown a sustained ADA response. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡N, total number of patients with at least one positive ADA result. ADAs, antidrug antibodies; NAbs, neutralising antibodies; RA, rheumatoid arthritis; RP, reference product.)

Pharmacodynamics
In a subgroup analysis by ADA status, the mean change from baseline in CRP and ESR was comparable between the maintenance and switch groups at week 54 and week 102 in both ADA-negative and ADA-positive patients (see online supplementary appendix C, table C-1).
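The transient/sustained ADA classification defined for table 2 maps directly to a small rule; a sketch of that definition (visit series are hypothetical):

```python
def ada_persistency(results):
    """Classify serial ADA results (True = positive, False = negative,
    None = not assessed), per the definition used for table 2: 'transient' if
    positive at one or more time points but negative at the last available
    time point; otherwise 'sustained' if ever positive, else 'negative'."""
    observed = [r for r in results if r is not None]
    if not any(observed):
        return "negative"
    return "transient" if observed[-1] is False else "sustained"
```

Note that 'last available time point' means trailing missed visits are ignored, so a patient positive at their final assessed visit is classed as sustained.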
Safety

The proportion of patients who experienced at least one TEAE was comparable between the maintenance group and the switch group (extension study: 53.5% (n=85 of 159) and 53.8% (n=77 of 143), respectively; main study: 63.5% (n=101) and 62.2% (n=89)). Rates of TEAEs considered by the investigator to be related to study treatment were also similar between the maintenance and switch groups (extension study: 22.0% (n=35) and 18.9% (n=27); main study: 35.2% (n=56) and 35.7% (n=51)). The most common treatment-related TEAEs are shown in table 3.

Treatment-related TEAEs that were reported in at least 1% of patients in either the maintenance group or the switch group (safety population). *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. RA, rheumatoid arthritis; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.

Serious adverse events (SAEs) occurred in 12 (7.5%) and 13 (9.1%) patients in the maintenance and switch groups, respectively, during the extension study, and in 9 (5.7%) and 5 (3.5%) patients during the main study. Treatment-related SAEs occurred in two (1.3%) and four (2.8%) patients, respectively, in the extension study, and in two (1.3%) and two (1.4%) patients in the main study (see online supplementary appendix H). TEAEs leading to discontinuation occurred in 16 (10.1%) and 8 (5.6%) patients during the extension study. 
During the extension study, 11 (6.9%) and 4 (2.8%) patients in the maintenance and switch groups, respectively, reported infusion-related reactions; all were ADA positive and had sustained ADAs. Only one patient, in the maintenance group, experienced anaphylaxis; this patient was ADA positive (see online supplementary appendix C, table C-2). In the main study, infusion-related reactions were reported in 8 (5.0%) and 13 (9.1%) patients in the maintenance and switch groups, respectively; of these, 4 (50.0%) and 11 (84.6%) were ADA positive. Two patients (one in each group) reported infusion-related reactions in both the main study and the extension study period. Table 4 shows data for all other TEAEs of special interest. No cases of TB were reported during the extension study.

TEAEs of special interest regardless of relationship to study treatment in the PLANETRA main study and the extension study (safety population). *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡Three patients (two in the maintenance group, one in the switch group) had three events of latent TB that were reported in both the main study and the extension study, because all three events started during week 62 (part of the end-of-study period of the main study). §One patient in the maintenance group had a serious AE of pneumonia, which was counted as both 'Serious infection' and 'Pneumonia' during the main study. AE, adverse event; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event. 
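The gap between the NRI and LOCF sensitivity analyses reported in the efficacy results comes down to how missing visits are imputed: NRI counts any patient with a missing assessment as a non-responder, whereas LOCF carries the last observed result forward. A toy sketch of the two rules (illustrative only, with made-up patient histories; the trial's actual analyses were prespecified and also produced exact binomial CIs, not shown here):

```python
def acr20_rate_nri(histories):
    """Non-responder imputation: a missing final visit counts as non-response."""
    responders = sum(1 for h in histories if h[-1] is True)
    return responders / len(histories)

def acr20_rate_locf(histories):
    """Last observation carried forward: impute using the last non-missing visit."""
    responders = 0
    for h in histories:
        observed = [v for v in h if v is not None]
        if observed and observed[-1] is True:
            responders += 1
    return responders / len(histories)

# Each inner list is one hypothetical patient's ACR20 status by visit
# (None = missing assessment after dropout).
patients = [
    [True, True, None],     # responder who dropped out before the last visit
    [True, None, None],     # early dropout while responding
    [False, False, False],  # completer, non-responder
    [True, True, True],     # completer, responder
]
print(acr20_rate_nri(patients), acr20_rate_locf(patients))  # -> 0.25 0.75
```

With dropouts who were responding when last seen, NRI is the more conservative rule, which is consistent with the lower week-102 rates seen under NRI than under LOCF in the main study ITT population.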
Conclusions
This multinational, open-label extension study demonstrated that in patients with RA receiving MTX, switching from RP to CT-P13 was not associated with any detrimental effects on efficacy, immunogenicity or safety. Additionally, this study demonstrated that CT-P13 remained efficacious and well tolerated over a 2-year treatment period.
[ "Patients", "Study design and treatment", "Study endpoints", "Efficacy", "Immunogenicity", "Safety", "Exploratory and post hoc endpoints", "Statistical analyses", "Patients", "Efficacy", "Immunogenicity", "Pharmacodynamics", "Safety" ]
[ "PLANETRA recruited patients aged 18–75 years with active RA for ≥1 year according to the 1987 American College of Rheumatology (ACR) classification criteria. Eligible patients did not respond adequately to ≥3 months of treatment with methotrexate (MTX) and received a stable MTX dose (12.5–25 mg/week) for ≥4 weeks before screening. Patients who had completed the main 54-week PLANETRA study were offered the opportunity to enter the extension study (ClinicalTrials.gov identifier: NCT01571219) for another 1 year. Those who did not sign a new informed consent for the extension study were excluded. Some of the relevant Ministries of Health (MoH) and ethics committees (ECs) did not approve the extension study, mainly due to the fact that data from the PLANETRA study were not available at the time of EC evaluation. Thus, patients from affected institutions were also excluded.\nAdditional eligibility criteria applied for this extension study included no major protocol violations in the main study and no new therapy for RA in the extension study. Detailed information on non-participants in the extension study is shown in figure 1.\nPatient disposition in the PLANETRA extension study. All patients who enrolled in the extension study (n=158 and 144 in the maintenance and switch groups, respectively) were included in the ITT population. EC, ethics committee; ITT, intent-to-treat; MoH, Ministry of Health; RP, reference product.", "This open-label, single-arm extension of PLANETRA was conducted in 69 centres in 16 countries. In the main study, patients received nine infusions of CT-P13 (CELLTRION, Incheon, Republic of Korea) or the infliximab RP (Janssen Biotech, Horsham, Pennsylvania, USA). After study treatment in PLANETRA, eligible patients could choose to continue in the extension study. However, patients and physicians continued to be blinded to the treatment that the patient had received during the main study. 
All patients participating in and completing this extension study received six infusions of CT-P13 from week 62 to week 102. During the whole study period, CT-P13 was administered via 2 h intravenous infusion at a fixed dose of 3 mg/kg. At the discretion of the investigator, antihistamines were provided 30–60 min prior to infusion of CT-P13. MTX (12.5–25 mg/week; oral or parenteral) and folic acid (≥5 mg/week; oral) were coadministered to all patients throughout the main and extension study periods. All patients provided new written informed consent to enrol into the extension study. The study was conducted according to the principles of the Declaration of Helsinki and International Conference on Harmonisation Good Clinical Practice guidelines.", " Efficacy Efficacy assessments were made at baseline and at weeks 14, 30, 54, 78 and 102. Efficacy endpoints included the proportion of patients meeting ACR20, ACR50 and ACR70 criteria; change from baseline in mean disease activity score in 28 joints (DAS28) and the proportion of patients meeting European League Against Rheumatism (EULAR) response criteria. Additional assessments included the number of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, the Health Assessment Questionnaire (HAQ), levels of C reactive protein (CRP), erythrocyte sedimentation rate (ESR) and hybrid ACR score.\n Immunogenicity The proportion of patients with antidrug antibodies (ADAs) was assessed at baseline and at weeks 14, 30, 54, 78 and 102 using the previously reported method.26\n27 The neutralising activity of ADAs was also assessed by a flow-through immunoassay method using the Gyros Immunoassay Platform (Gyros AB, Sweden).\n Safety Treatment-emergent adverse events (TEAEs) were assessed throughout the main and extension studies. Other safety assessments included monitoring of TEAEs of special interest (infusion-related reactions (including hypersensitivity and anaphylactic reaction), tuberculosis (TB), latent TB (defined as a positive conversion of an interferon-γ release assay (negative at baseline) with a negative result for chest X-ray examination), serious infection, pneumonia, drug-induced liver injury, vascular disorders and malignancies), vital signs, physical examination findings and clinical laboratory analyses.\n Exploratory and post hoc endpoints Details of exploratory and post hoc endpoints are given in online supplementary appendix A.\n\n\n", "Efficacy assessments were made at baseline and at weeks 14, 30, 54, 78 and 102. Efficacy endpoints included the proportion of patients meeting ACR20, ACR50 and ACR70 criteria; change from baseline in mean disease activity score in 28 joints (DAS28) and the proportion of patients meeting European League Against Rheumatism (EULAR) response criteria. Additional assessments included the number of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, the Health Assessment Questionnaire (HAQ), levels of C reactive protein (CRP), erythrocyte sedimentation rate (ESR) and hybrid ACR score.", "The proportion of patients with antidrug antibodies (ADAs) was assessed at baseline and at weeks 14, 30, 54, 78 and 102 using the previously reported method.26\n27 The neutralising activity of ADAs was also assessed by a flow-through immunoassay method using the Gyros Immunoassay Platform (Gyros AB, Sweden).", "Treatment-emergent adverse events (TEAEs) were assessed throughout the main and extension studies. 
Other safety assessments included monitoring of TEAEs of special interest (infusion-related reactions (including hypersensitivity and anaphylactic reaction), tuberculosis (TB), latent TB (defined as a positive conversion of an interferon-γ release assay (negative at baseline) with a negative result for chest X-ray examination), serious infection, pneumonia, drug-induced liver injury, vascular disorders and malignancies), vital signs, physical examination findings and clinical laboratory analyses.", "Details of exploratory and post hoc endpoints are given in online supplementary appendix A.\n\n\n", "All data were analysed descriptively in the maintenance and switch groups. The populations were predefined in the study protocol and statistical analysis plan for participants of the extension study. The efficacy population included all patients who received at least one dose of study treatment and had at least one efficacy measurement in the extension study. Conservatively, ACR response was analysed using non-responder imputation (NRI) for missing values and presented with 95% CIs of the response rate using an exact binomial approach from the efficacy population. No imputation of missing values was done for analysis of other efficacy endpoints. The safety population consisted of all patients who enrolled in the study, because they had all received study treatment in the preceding study. Data from the main study period were analysed in participants of the extension study only, not in all patients in the main study. Methods for sensitivity analyses of ACR response and statistical analyses of exploratory and post hoc endpoints are included in online supplementary appendix A.", "The first patient visit of the main PLANETRA study and the last visit of the PLANETRA extension were held between November 2010 and July 2013. 
Of the 455 patients who completed the main PLANETRA study, 302 patients consented to participate in the extension study and were screened under the approval of the appropriate MoH/EC (figure 1). Of the 302 screened patients, all were enrolled and 301 were treated. One patient in the maintenance group was enrolled but discontinued due to an adverse event (B-cell lymphoma stage IV) before receiving treatment in the extension study. A total of 158 patients had received CT-P13 in the main study (maintenance group); 144 had received RP (switch group). These patients comprised the intent-to-treat (ITT) population of the extension study. Patient demographics and disease characteristics at baseline and at week 54 of the main study were similar between the two groups (table 1).\nPatient demographics and disease characteristics at baseline and week 54 of patients enrolled in the PLANETRA extension study (ITT population)\nData shown in the table were recorded at the baseline and week 54 visits of the preceding 54-week main study.\n*Except where indicated otherwise, values are median (range).\n†Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n‡Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\nACR, American College of Rheumatology; CCP, cyclic citrullinated peptide; CRP, C reactive protein; DAS28, disease activity score in 28 joints; ESR, erythrocyte sedimentation rate; ITT, intent-to-treat; RF, rheumatoid factor; RP, reference product.\nIn the maintenance and switch groups, respectively, 133 (84.2%) and 128 (88.9%) patients completed the extension phase; 25 (15.8%) and 16 (11.1%) patients discontinued over the whole period of the extension study. Reasons for patient withdrawal are shown in figure 1. The efficacy population of the extension study included 152 patients in the maintenance group and 142 patients in the switch group. 
Owing to incorrect kits being dispensed, one patient randomly assigned to the RP group received one dose of CT-P13 at week 2 in the PLANETRA main study. Applying a conservative approach, this patient was classified as a member of the CT-P13 group for safety analyses in the main study. Therefore, the safety population of the extension study in the maintenance and switch groups comprised 159 and 143 patients, respectively.\nSimilar to the ITT population of the extension study, patient demographics and disease characteristics of non-participants in the extension study were also comparable between the CT-P13 and RP groups (see online supplementary appendix B).", "Throughout the extension study, ACR20, ACR50 and ACR70 response rates were maintained, and no differences were evident between the groups at weeks 78 and 102 (figure 2). In the switch group, respective ACR20, ACR50 and ACR70 response rates were 77.5%, 50.0% and 23.9% at week 54 (ie, at the end of RP treatment) and 71.8%, 51.4% and 26.1% at week 102 (ie, 48 weeks after the last infusion of RP at week 54). In the maintenance group, respective ACR20, ACR50 and ACR70 response rates were 77.0%, 46.1% and 22.4% at week 54 and 71.7%, 48.0% and 24.3% at week 102. In patients who participated in the extension study, the proportion of patients achieving ACR20, ACR50 and ACR70 responses during the main study was also similar between the two groups. In a subgroup analysis performed according to ADA status, the proportion of ADA-negative patients achieving ACR20 was 85.7% at week 54 and 82.2% at week 102 in the maintenance group, and 84.7% at week 54 and 82.8% at week 102 in the switch group. 
In comparison, 68.0% (week 54) and 73.4% (week 102) of ADA-positive patients in the maintenance group, and 70.6% and 73.4% in the switch group achieved ACR20 (see online supplementary appendix C, figure C-1).\nProportion of patients with rheumatoid arthritis with (A) an ACR20 response, (B) an ACR50 response and (C) an ACR70 response in the maintenance* and switch** groups of the PLANETRA extension study (efficacy population with non-responder imputation approach). CI values are the 95% CIs of the treatment difference. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. **Patients treated with reference product during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology.\nNo notable differences in other efficacy endpoints were noted between or within the groups at weeks 14, 30, 54, 78 or 102. The results for DAS28 score change and EULAR response criteria are shown in online supplementary appendix D. The results of assessments of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, HAQ, levels of CRP, ESR and hybrid ACR score were not different between groups (see online supplementary appendix E).\nSensitivity analyses to compare populations and statistical approaches supported the appearance of sustained efficacy and comparability between the two groups (see online supplementary appendix F). Analyses using the last observation carried forward (LOCF) approach showed similar results as analyses using the NRI approach, both in the maintenance group (ACR20: 74.1% using LOCF vs 71.7% using NRI at week 102, respectively) and in the switch group (ACR20: 77.1% using LOCF vs 71.8% using NRI at week 102). Analyses of the main study ITT population using the LOCF approach showed relatively low response rates compared with analyses of the extension study ITT population. 
However, response rates were comparable between the groups and sustained throughout the 2-year study period, both in the extension study ITT population (ACR20: 74.1% at week 102 vs 75.3% at week 54 in the maintenance group, 77.1% vs 77.1% in the switch group, respectively) and in the main study ITT population (ACR20: 61.6% at week 102 vs 62.9% at week 54 in the CT-P13 group, 59.2% vs 59.9% in the RP group, respectively). When data for the main study ITT population were analysed using the NRI approach, lower response rates were seen at week 102 than week 54 although rates were comparable between the groups (ACR20: 36.1% at week 102 vs 57.0% at week 54 in the CT-P13 group, 33.6% vs 52.0% in the RP group).\nWhen remission was measured up to week 102 based on ACR/EULAR criteria (Boolean-based definition and index-based definition (Simple Disease Activity Index (SDAI)), Clinical Disease Activity Index (CDAI), DAS28 and DAS28 low disease activity, the proportion of patients achieving remission or low disease activity was similar between groups throughout the study period (see online supplementary appendix G).", "The proportion of patients with ADAs was similar between the maintenance and switch groups at each time point during the main and extension studies (table 2). Almost all patients with a positive ADA result had a positive result for neutralising antibodies (NAb), and the proportion of patients with a positive NAb result was similar between the two groups. 
The proportion of ADA-positive patients with sustained ADAs was also highly similar between groups (80.2% and 80.4% in the maintenance and switch groups, respectively).\nProportion of patients with RA who were positive for ADAs and NAbs in the main study and the extension study (safety population)\nPercentages for NAb results are based on the number of positive ADA results at that visit.\nADA persistency was defined as transient when a patient tested positive for ADAs at one or more time point but negative at the last available time point. The remaining patients with positive ADA results were considered to have shown a sustained ADA response.\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\n‡N, total number of patients with at least one positive ADA result.\nADAs, antidrug antibodies; NAbs, neutralising antibodies; RA, rheumatoid arthritis; RP, reference product.", "In a subgroup analysis performed by ADA status, the mean change from baseline in CRP and ESR was comparable in the maintenance and switch groups at week 54 and week 102 in both ADA-negative and ADA-positive patients (see online supplementary appendix C, table C-1).", "The proportion of patients who experienced at least one TEAE was comparable between the maintenance group and the switch group (extension study: 53.5% (n=85 of 159) and 53.8% (n=77 of 143), respectively; main study: 63.5% (n=101) and 62.2% (n=89)). Rates of TEAEs considered by the investigator to be related to study treatment were also similar between the maintenance and switch groups (extension study: 22.0% (n=35) and 18.9% (n=27); main study: 35.2% (n=56) and 35.7% (n=51)). 
The most common treatment-related TEAEs are shown in table 3.\nTreatment-related TEAEs that were reported in at least 1% of patients in either the maintenance group or the switch group (safety population)\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\nRA, rheumatoid arthritis; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.\nWith respect to serious adverse events (SAEs), these events occurred in the maintenance and switch groups, respectively, in 12 (7.5%) and 13 (9.1%) patients during the extension study, and in 9 (5.7%) and 5 (3.5%) patients during the main study. Treatment-related SAEs occurred in two (1.3%) and four (2.8%) patients in the extension study, respectively, and in two (1.3%) and two (1.4%) patients in the main study (see online supplementary appendix H). TEAEs leading to discontinuation occurred in 16 (10.1%) and 8 (5.6%) patients during the extension study.\nDuring the extension study, 11 (6.9%) and 4 (2.8%) patients in the maintenance and switch groups, respectively, reported infusion-related reactions. All were ADA positive and had sustained ADAs. Only one patient in the maintenance group experienced anaphylaxis. This patient was ADA positive (see online supplementary appendix C, table C-2). In the main study, infusion-related reactions were reported in 8 (5.0%) and 13 (9.1%) patients in the maintenance and switch groups, respectively. Of these, 4 (50.0%) and 11 (84.6%) were ADA positive. Two patients reported infusion-related reactions both in the main study and in the extension study period (one patient in each group). Table 4 shows data for all other TEAEs of special interest. 
No cases of TB were reported during the extension study.\nTEAEs of special interest regardless of relationship to study treatment in the PLANETRA main study and the extension study (safety population)\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\n‡There were three patients (two in the maintenance group, one in the switch group) with three events of latent TB, which were reported both in the main study and in the extension study; this was because all three events started during week 62 (part of the end-of-study period of the main study).\n§There was one patient in the maintenance group with a serious AE of pneumonia, which was included as a ‘Serious infection’ and ‘Pneumonia’ during the main study.\nAE, adverse event; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Patients and methods", "Patients", "Study design and treatment", "Study endpoints", "Efficacy", "Immunogenicity", "Safety", "Exploratory and post hoc endpoints", "Statistical analyses", "Results", "Patients", "Efficacy", "Immunogenicity", "Pharmacodynamics", "Safety", "Discussion", "Conclusions" ]
[ "Infliximab is a human–murine chimeric monoclonal antibody to tumour necrosis factor (TNF).1 The introduction of infliximab and other biological drugs into clinical practice has dramatically improved the management of a number of immune-mediated inflammatory diseases, including rheumatoid arthritis (RA).2 However, currently available biologics are associated with high costs,3\n4 which has led to restricted treatment access for patients with RA in several regions.5–9\nA number of biologics used to treat RA—including originator infliximab (Remicade), hereafter referred to as the reference product (RP)—have reached or are approaching patent expiry in many countries. As a consequence, follow-on biologics (also termed ‘biosimilars’) are being developed for the treatment of RA. A biosimilar can be defined as a ‘biotherapeutic product that is similar in terms of quality, safety, and efficacy to an already licensed reference biotherapeutic product’.10 In order to gain approval, it is usually necessary to show that a biosimilar is highly similar to the RP in physicochemical and biological terms. In addition, clinical studies are generally needed to establish statistical equivalence in pharmacokinetics (PK) and efficacy and to characterise biosimilar safety.11–13\nSince the first approval of a biosimilar by the European Medicines Agency (EMA) in 2006, a number of these agents, including granulocyte colony-stimulating factors and erythropoietins, have become available in Europe. 
Indeed, the range of therapeutic areas now covered by approved biosimilars is wide and includes cancer, anaemia, neutropenia and diabetes.14 Data for these EMA-approved biosimilars consistently show that they provide comparable efficacy and safety relative to their RPs.15–23 Recently, CT-P13 (Remsima, Inflectra)—a biosimilar of infliximab RP—became the first monoclonal antibody biosimilar to be approved in Europe for use in all indications held by the infliximab RP.24 All major physicochemical characteristics and in vitro biological activities of CT-P13 and the RP, including affinity for both soluble and transmembrane forms of TNF, are highly comparable.24\n25 Approval of CT-P13 was partly based on findings from two 54-week, multinational, randomised, double-blind, parallel-group studies, which compared CT-P13 and RP in ankylosing spondylitis (AS) and RA (Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in AS patients (PLANETAS) and Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in RA patients (PLANETRA)). These studies demonstrated that CT-P13 and RP are highly comparable in terms of PK, efficacy, immunogenicity and safety in both RA and AS.26–29 However, an important unanswered question for prescribing physicians is whether it is possible to switch from RP to CT-P13 in patients with RA without any detrimental effects on safety and efficacy.30\nHere, we report the findings from an open-label extension of the PLANETRA study. There were two main aims of the extension study: (1) to investigate the efficacy and safety of switching to CT-P13 in patients previously treated with RP for 54 weeks in PLANETRA (hereafter named the ‘switch group’) and (2) to investigate the longer-term efficacy and safety of extended CT-P13 treatment over 2 years in patients previously treated with CT-P13 in PLANETRA (the ‘maintenance group’). 
To facilitate understanding of the data, the results for the maintenance and switch groups are described both for the main (weeks 0–54) and the extension (weeks 54–102) studies.", "Full details of the methods of the 54-week, randomised, double-blind, parallel-group PLANETRA study have been reported previously,26\n27 and are described briefly below.\n Patients PLANETRA recruited patients aged 18–75 years with active RA for ≥1 year according to the 1987 American College of Rheumatology (ACR) classification criteria. Eligible patients had responded inadequately to ≥3 months of treatment with methotrexate (MTX) and had received a stable MTX dose (12.5–25 mg/week) for ≥4 weeks before screening. Patients who had completed the main 54-week PLANETRA study were offered the opportunity to enter the extension study (ClinicalTrials.gov identifier: NCT01571219) for a further year. Those who did not sign a new informed consent for the extension study were excluded. Some of the relevant Ministries of Health (MoH) and ethics committees (ECs) did not approve the extension study, mainly because data from the PLANETRA study were not available at the time of EC evaluation; patients from the affected institutions were therefore also excluded.\nAdditional eligibility criteria for this extension study included no major protocol violations in the main study and no new therapy for RA in the extension study. Detailed information on non-participants in the extension study is shown in figure 1.\nPatient disposition in the PLANETRA extension study. All patients who enrolled in the extension study (n=158 and 144 in the maintenance and switch groups, respectively) were included in the ITT population. EC, ethics committee; ITT, intent-to-treat; MoH, Ministry of Health; RP, reference product.\n Study design and treatment This open-label, single-arm extension of PLANETRA was conducted in 69 centres in 16 countries. In the main study, patients received nine infusions of CT-P13 (CELLTRION, Incheon, Republic of Korea) or the infliximab RP (Janssen Biotech, Horsham, Pennsylvania, USA). After study treatment in PLANETRA, eligible patients could choose to continue in the extension study. However, patients and physicians remained blinded to the treatment that the patient had received during the main study. All patients participating in and completing this extension study received six infusions of CT-P13 from week 62 to week 102. 
During the whole study period, CT-P13 was administered via 2 h intravenous infusion at a fixed dose of 3 mg/kg. At the discretion of the investigator, antihistamines were provided 30–60 min prior to infusion of CT-P13. MTX (12.5–25 mg/week; oral or parenteral) and folic acid (≥5 mg/week; oral) were coadministered to all patients throughout the main and extension study periods. All patients provided new written informed consent to enrol into the extension study. The study was conducted according to the principles of the Declaration of Helsinki and International Conference on Harmonisation Good Clinical Practice guidelines.\n Study endpoints Efficacy Efficacy assessments were made at baseline and at weeks 14, 30, 54, 78 and 102. Efficacy endpoints included the proportion of patients meeting ACR20, ACR50 and ACR70 criteria; change from baseline in mean disease activity score in 28 joints (DAS28) and the proportion of patients meeting European League Against Rheumatism (EULAR) response criteria. Additional assessments included the number of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, the Health Assessment Questionnaire (HAQ), levels of C reactive protein (CRP), erythrocyte sedimentation rate (ESR) and hybrid ACR score.\n Immunogenicity The proportion of patients with antidrug antibodies (ADAs) was assessed at baseline and at weeks 14, 30, 54, 78 and 102 using the previously reported method.26\n27 The neutralising activity of ADAs was also assessed by a flow-through immunoassay method using the Gyros Immunoassay Platform (Gyros AB, Sweden).\n Safety Treatment-emergent adverse events (TEAEs) were assessed throughout the main and extension studies. 
Other safety assessments included monitoring of TEAEs of special interest (infusion-related reactions (including hypersensitivity and anaphylactic reaction), tuberculosis (TB), latent TB (defined as a positive conversion of an interferon-γ release assay (negative at baseline) with a negative result for chest X-ray examination), serious infection, pneumonia, drug-induced liver injury, vascular disorders and malignancies), vital signs, physical examination findings and clinical laboratory analyses.\n Exploratory and post hoc endpoints Details of exploratory and post hoc endpoints are given in online supplementary appendix A.\n Statistical analyses All data were analysed descriptively in the maintenance and switch groups. The populations were predefined in the study protocol and statistical analysis plan for participants of the extension study. The efficacy population included all patients who received at least one dose of study treatment and had at least one efficacy measurement in the extension study. Conservatively, ACR response was analysed using non-responder imputation (NRI) for missing values and presented with 95% CIs of the response rate using an exact binomial approach from the efficacy population. No imputation of missing values was done for analysis of other efficacy endpoints. The safety population consisted of all patients who enrolled in the study, because they had all received study treatment in the preceding study. Data from the main study period were analysed in participants of the extension study only, not in all patients in the main study. Methods for sensitivity analyses of ACR response and statistical analyses of exploratory and post hoc endpoints are included in online supplementary appendix A.", " Patients The first patient visit of the main PLANETRA study and the last visit of the PLANETRA extension were held between November 2010 and July 2013. 
Of the 455 patients who completed the main PLANETRA study, 302 patients consented to participate in the extension study and were screened under the approval of the appropriate MoH/EC (figure 1). Of the 302 screened patients, all were enrolled and 301 were treated. One patient in the maintenance group was enrolled but discontinued due to an adverse event (B-cell lymphoma stage IV) before receiving treatment in the extension study. A total of 158 patients had received CT-P13 in the main study (maintenance group); 144 had received RP (switch group). These patients comprised the intent-to-treat (ITT) population of the extension study. Patient demographics and disease characteristics at baseline and at week 54 of the main study were similar between the two groups (table 1).\nPatient demographics and disease characteristics at baseline and week 54 of patients enrolled in the PLANETRA extension study (ITT population)\nData shown in the table were recorded at the baseline and week 54 visits of the preceding 54-week main study.\n*Except where indicated otherwise, values are median (range).\n†Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n‡Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\nACR, American College of Rheumatology; CCP, cyclic citrullinated peptide; CRP, C reactive protein; DAS28, disease activity score in 28 joints; ESR, erythrocyte sedimentation rate; ITT, intent-to-treat; RF, rheumatoid factor; RP, reference product.\nIn the maintenance and switch groups, respectively, 133 (84.2%) and 128 (88.9%) patients completed the extension phase; 25 (15.8%) and 16 (11.1%) patients discontinued over the whole period of the extension study. Reasons for patient withdrawal are shown in figure 1. The efficacy population of the extension study included 152 patients in the maintenance group and 142 patients in the switch group. 
Owing to incorrect kits being dispensed, one patient randomly assigned to the RP group received one dose of CT-P13 at week 2 in the PLANETRA main study. Applying a conservative approach, this patient was classified as a member of the CT-P13 group for safety analyses in the main study. Therefore, the safety population of the extension study in the maintenance and switch groups comprised 159 and 143 patients, respectively.\nSimilar to the ITT population of the extension study, patient demographics and disease characteristics of non-participants in the extension study were also comparable between the CT-P13 and RP groups (see online supplementary appendix B).\n Efficacy Throughout the extension study, ACR20, ACR50 and ACR70 response rates were maintained, and no differences were evident between the groups at weeks 78 and 102 (figure 2). In the switch group, respective ACR20, ACR50 and ACR70 response rates were 77.5%, 50.0% and 23.9% at week 54 (ie, at the end of RP treatment) and 71.8%, 51.4% and 26.1% at week 102 (ie, 48 weeks after the last infusion of RP at week 54). In the maintenance group, respective ACR20, ACR50 and ACR70 response rates were 77.0%, 46.1% and 22.4% at week 54 and 71.7%, 48.0% and 24.3% at week 102. In patients who participated in the extension study, the proportion of patients achieving ACR20, ACR50 and ACR70 responses during the main study was also similar between the two groups. In a subgroup analysis performed according to ADA status, the proportion of ADA-negative patients achieving ACR20 was 85.7% at week 54 and 82.2% at week 102 in the maintenance group, and 84.7% at week 54 and 82.8% at week 102 in the switch group. In comparison, 68.0% (week 54) and 73.4% (week 102) of ADA-positive patients in the maintenance group, and 70.6% and 73.4% in the switch group achieved ACR20 (see online supplementary appendix C, figure C-1).\nProportion of patients with rheumatoid arthritis with (A) an ACR20 response, (B) an ACR50 response and (C) an ACR70 response in the maintenance* and switch** groups of the PLANETRA extension study (efficacy population with non-responder imputation approach). CI values are the 95% CIs of the treatment difference. 
*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. **Patients treated with reference product during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology.\nNo notable differences in other efficacy endpoints were observed between or within the groups at weeks 14, 30, 54, 78 or 102. The results for DAS28 score change and EULAR response criteria are shown in online supplementary appendix D. Assessments of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, HAQ, levels of CRP, ESR and hybrid ACR score did not differ between groups (see online supplementary appendix E).\nSensitivity analyses comparing populations and statistical approaches supported the sustained efficacy and the comparability of the two groups (see online supplementary appendix F). Analyses using the last observation carried forward (LOCF) approach showed similar results to analyses using the NRI approach, both in the maintenance group (ACR20: 74.1% using LOCF vs 71.7% using NRI at week 102) and in the switch group (ACR20: 77.1% using LOCF vs 71.8% using NRI at week 102). Analyses of the main study ITT population using the LOCF approach showed relatively low response rates compared with analyses of the extension study ITT population. However, response rates were comparable between the groups and sustained throughout the 2-year study period, both in the extension study ITT population (ACR20: 74.1% at week 102 vs 75.3% at week 54 in the maintenance group; 77.1% vs 77.1% in the switch group) and in the main study ITT population (ACR20: 61.6% at week 102 vs 62.9% at week 54 in the CT-P13 group; 59.2% vs 59.9% in the RP group). 
When data for the main study ITT population were analysed using the NRI approach, lower response rates were seen at week 102 than at week 54, although rates were comparable between the groups (ACR20: 36.1% at week 102 vs 57.0% at week 54 in the CT-P13 group, 33.6% vs 52.0% in the RP group).\nWhen remission was measured up to week 102 based on ACR/EULAR criteria (Boolean-based definition and index-based definition (Simple Disease Activity Index (SDAI))), Clinical Disease Activity Index (CDAI), DAS28 and DAS28 low disease activity, the proportion of patients achieving remission or low disease activity was similar between groups throughout the study period (see online supplementary appendix G).\n Immunogenicity The proportion of patients with ADAs was similar between the maintenance and switch groups at each time point during the main and extension studies (table 2). Almost all patients with a positive ADA result had a positive result for neutralising antibodies (NAb), and the proportion of patients with a positive NAb result was similar between the two groups. 
The proportion of ADA-positive patients with sustained ADAs was also highly similar between groups (80.2% and 80.4% in the maintenance and switch groups, respectively).\nProportion of patients with RA who were positive for ADAs and NAbs in the main study and the extension study (safety population)\nPercentages for NAb results are based on the number of positive ADA results at that visit.\nADA persistency was defined as transient when a patient tested positive for ADAs at one or more time points but negative at the last available time point. The remaining patients with positive ADA results were considered to have shown a sustained ADA response.\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\n‡N, total number of patients with at least one positive ADA result.\nADAs, antidrug antibodies; NAbs, neutralising antibodies; RA, rheumatoid arthritis; RP, reference product.\n Pharmacodynamics In a subgroup analysis performed by ADA status, the mean change from baseline in CRP and ESR was comparable in the maintenance and switch groups at week 54 and week 102 in both ADA-negative and ADA-positive patients (see online supplementary appendix C, table C-1).\n Safety The proportion of patients who experienced at least one TEAE was comparable between the maintenance group and the switch group (extension study: 53.5% (n=85 of 159) and 53.8% (n=77 of 143), respectively; main study: 63.5% (n=101) and 62.2% (n=89)). Rates of TEAEs considered by the investigator to be related to study treatment were also similar between the maintenance and switch groups (extension study: 22.0% (n=35) and 18.9% (n=27); main study: 35.2% (n=56) and 35.7% (n=51)). 
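The transient-versus-sustained ADA rule stated in the immunogenicity table notes above can be expressed directly. The function name and the 'negative' label for never-positive patients are illustrative assumptions, not study code:

```python
# Sketch of the ADA persistency classification: a patient with at least
# one positive ADA result is 'transient' if the last available result is
# negative, otherwise 'sustained'. Per-visit results are booleans; None
# marks a missed or unavailable sample.

def ada_persistency(results):
    available = [r for r in results if r is not None]
    if not any(available):
        return "negative"    # never tested ADA positive
    # Last available result negative -> transient; otherwise sustained.
    return "transient" if available[-1] is False else "sustained"

print(ada_persistency([False, True, True]))         # sustained
print(ada_persistency([True, False, None, False]))  # transient
print(ada_persistency([False, False]))              # negative
```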
The most common treatment-related TEAEs are shown in table 3.\nTreatment-related TEAEs that were reported in at least 1% of patients in either the maintenance group or the switch group (safety population)\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\nRA, rheumatoid arthritis; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.\nSerious adverse events (SAEs) occurred in 12 (7.5%) and 13 (9.1%) patients in the maintenance and switch groups, respectively, during the extension study, and in 9 (5.7%) and 5 (3.5%) patients during the main study. Treatment-related SAEs occurred in two (1.3%) and four (2.8%) patients, respectively, during the extension study, and in two (1.3%) and two (1.4%) patients during the main study (see online supplementary appendix H). TEAEs leading to discontinuation occurred in 16 (10.1%) and 8 (5.6%) patients during the extension study.\nDuring the extension study, 11 (6.9%) and 4 (2.8%) patients in the maintenance and switch groups, respectively, reported infusion-related reactions. All were ADA positive and had sustained ADAs. Only one patient in the maintenance group experienced anaphylaxis. This patient was ADA positive (see online supplementary appendix C, table C-2). In the main study, infusion-related reactions were reported in 8 (5.0%) and 13 (9.1%) patients in the maintenance and switch groups, respectively. Of these, 4 (50.0%) and 11 (84.6%) were ADA positive. Two patients reported infusion-related reactions both in the main study and in the extension study period (one patient in each group). Table 4 shows data for all other TEAEs of special interest. 
No cases of TB were reported during the extension study.\nTEAEs of special interest regardless of relationship to study treatment in the PLANETRA main study and the extension study (safety population)\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\n‡There were three patients (two in the maintenance group, one in the switch group) with three events of latent TB, which were reported both in the main study and in the extension study; this was because all three events started during week 62 (part of the end-of-study period of the main study).\n§There was one patient in the maintenance group with a serious AE of pneumonia, which was included as a ‘Serious infection’ and ‘Pneumonia’ during the main study.\nAE, adverse event; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.", "The first patient visit of the main PLANETRA study and the last visit of the PLANETRA extension were held between November 2010 and July 2013. Of the 455 patients who completed the main PLANETRA study, 302 patients consented to participate in the extension study and were screened under the approval of the appropriate MoH/EC (figure 1). Of the 302 screened patients, all were enrolled and 301 were treated. One patient in the maintenance group was enrolled but discontinued due to an adverse event (B-cell lymphoma stage IV) before receiving treatment in the extension study. A total of 158 patients had received CT-P13 in the main study (maintenance group); 144 had received RP (switch group). These patients comprised the intent-to-treat (ITT) population of the extension study. 
Patient demographics and disease characteristics at baseline and at week 54 of the main study were similar between the two groups (table 1).\nPatient demographics and disease characteristics at baseline and week 54 of patients enrolled in the PLANETRA extension study (ITT population)\nData shown in the table were recorded at the baseline and week 54 visits of the preceding 54-week main study.\n*Except where indicated otherwise, values are median (range).\n†Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n‡Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\nACR, American College of Rheumatology; CCP, cyclic citrullinated peptide; CRP, C reactive protein; DAS28, disease activity score in 28 joints; ESR, erythrocyte sedimentation rate; ITT, intent-to-treat; RF, rheumatoid factor; RP, reference product.\nIn the maintenance and switch groups, respectively, 133 (84.2%) and 128 (88.9%) patients completed the extension phase; 25 (15.8%) and 16 (11.1%) patients discontinued over the whole period of the extension study. Reasons for patient withdrawal are shown in figure 1. The efficacy population of the extension study included 152 patients in the maintenance group and 142 patients in the switch group. Owing to incorrect kits being dispensed, one patient randomly assigned to the RP group received one dose of CT-P13 at week 2 in the PLANETRA main study. Applying a conservative approach, this patient was classified as a member of the CT-P13 group for safety analyses in the main study. 
Therefore, the safety population of the extension study in the maintenance and switch groups comprised 159 and 143 patients, respectively.\nSimilar to the ITT population of the extension study, patient demographics and disease characteristics of non-participants in the extension study were also comparable between the CT-P13 and RP groups (see online supplementary appendix B).", "Throughout the extension study, ACR20, ACR50 and ACR70 response rates were maintained, and no differences were evident between the groups at weeks 78 and 102 (figure 2). In the switch group, respective ACR20, ACR50 and ACR70 response rates were 77.5%, 50.0% and 23.9% at week 54 (ie, at the end of RP treatment) and 71.8%, 51.4% and 26.1% at week 102 (ie, 48 weeks after the last infusion of RP at week 54). In the maintenance group, respective ACR20, ACR50 and ACR70 response rates were 77.0%, 46.1% and 22.4% at week 54 and 71.7%, 48.0% and 24.3% at week 102. In patients who participated in the extension study, the proportion of patients achieving ACR20, ACR50 and ACR70 responses during the main study was also similar between the two groups. In a subgroup analysis performed according to ADA status, the proportion of ADA-negative patients achieving ACR20 was 85.7% at week 54 and 82.2% at week 102 in the maintenance group, and 84.7% at week 54 and 82.8% at week 102 in the switch group. In comparison, 68.0% (week 54) and 73.4% (week 102) of ADA-positive patients in the maintenance group, and 70.6% and 73.4% in the switch group achieved ACR20 (see online supplementary appendix C, figure C-1).\nProportion of patients with rheumatoid arthritis with (A) an ACR20 response, (B) an ACR50 response and (C) an ACR70 response in the maintenance* and switch** groups of the PLANETRA extension study (efficacy population with non-responder imputation approach). CI values are the 95% CIs of the treatment difference. 
*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. **Patients treated with reference product during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology.\nNo notable differences in other efficacy endpoints were observed between or within the groups at weeks 14, 30, 54, 78 or 102. The results for DAS28 score change and EULAR response criteria are shown in online supplementary appendix D. The results of assessments of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, HAQ, levels of CRP, ESR and hybrid ACR score were not different between groups (see online supplementary appendix E).\nSensitivity analyses to compare populations and statistical approaches supported the finding of sustained efficacy and comparability between the two groups (see online supplementary appendix F). Analyses using the last observation carried forward (LOCF) approach showed results similar to analyses using the NRI approach, both in the maintenance group (ACR20: 74.1% using LOCF vs 71.7% using NRI at week 102) and in the switch group (ACR20: 77.1% using LOCF vs 71.8% using NRI at week 102). Analyses of the main study ITT population using the LOCF approach showed relatively low response rates compared with analyses of the extension study ITT population. However, response rates were comparable between the groups and sustained throughout the 2-year study period, both in the extension study ITT population (ACR20: 74.1% at week 102 vs 75.3% at week 54 in the maintenance group, 77.1% vs 77.1% in the switch group) and in the main study ITT population (ACR20: 61.6% at week 102 vs 62.9% at week 54 in the CT-P13 group, 59.2% vs 59.9% in the RP group). 
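The between-group comparisons here are differences in response proportions, and figure 2 reports 95% CIs of the treatment difference. A large-sample (Wald) interval is one standard way to compute such a CI; the counts below are illustrative, not the study's responder numbers, and the study's exact CI method is not assumed:

```python
# 95% CI for the difference between two response proportions
# (e.g. ACR20 responders out of patients assessed in each group).
# Illustrative counts only -- not the study's actual data.
from math import sqrt

def diff_ci_95(x1, n1, x2, n2):
    """Wald 95% CI for p1 - p2, where p_i = x_i / n_i."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - 1.96 * se, diff + 1.96 * se

lo, hi = diff_ci_95(109, 152, 102, 142)
print(f"treatment difference 95% CI: ({lo:.3f}, {hi:.3f})")
```

With near-identical response rates in the two groups, the interval spans zero, which is consistent with the 'no evident difference' reading given to the between-group comparisons above.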
When data for the main study ITT population were analysed using the NRI approach, lower response rates were seen at week 102 than at week 54, although rates were comparable between the groups (ACR20: 36.1% at week 102 vs 57.0% at week 54 in the CT-P13 group, 33.6% vs 52.0% in the RP group).\nWhen remission was measured up to week 102 based on ACR/EULAR criteria (Boolean-based definition and index-based definition (Simple Disease Activity Index (SDAI))), Clinical Disease Activity Index (CDAI), DAS28 and DAS28 low disease activity, the proportion of patients achieving remission or low disease activity was similar between groups throughout the study period (see online supplementary appendix G).", "The proportion of patients with ADAs was similar between the maintenance and switch groups at each time point during the main and extension studies (table 2). Almost all patients with a positive ADA result had a positive result for neutralising antibodies (NAb), and the proportion of patients with a positive NAb result was similar between the two groups. The proportion of ADA-positive patients with sustained ADAs was also highly similar between groups (80.2% and 80.4% in the maintenance and switch groups, respectively).\nProportion of patients with RA who were positive for ADAs and NAbs in the main study and the extension study (safety population)\nPercentages for NAb results are based on the number of positive ADA results at that visit.\nADA persistency was defined as transient when a patient tested positive for ADAs at one or more time points but negative at the last available time point. 
The remaining patients with positive ADA results were considered to have shown a sustained ADA response.\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\n‡N, total number of patients with at least one positive ADA result.\nADAs, antidrug antibodies; NAbs, neutralising antibodies; RA, rheumatoid arthritis; RP, reference product.", "In a subgroup analysis performed by ADA status, the mean change from baseline in CRP and ESR was comparable in the maintenance and switch groups at week 54 and week 102 in both ADA-negative and ADA-positive patients (see online supplementary appendix C, table C-1).", "The proportion of patients who experienced at least one TEAE was comparable between the maintenance group and the switch group (extension study: 53.5% (n=85 of 159) and 53.8% (n=77 of 143), respectively; main study: 63.5% (n=101) and 62.2% (n=89)). Rates of TEAEs considered by the investigator to be related to study treatment were also similar between the maintenance and switch groups (extension study: 22.0% (n=35) and 18.9% (n=27); main study: 35.2% (n=56) and 35.7% (n=51)). 
The most common treatment-related TEAEs are shown in table 3.\nTreatment-related TEAEs that were reported in at least 1% of patients in either the maintenance group or the switch group (safety population)\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\nRA, rheumatoid arthritis; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.\nSerious adverse events (SAEs) occurred in 12 (7.5%) and 13 (9.1%) patients in the maintenance and switch groups, respectively, during the extension study, and in 9 (5.7%) and 5 (3.5%) patients during the main study. Treatment-related SAEs occurred in two (1.3%) and four (2.8%) patients, respectively, during the extension study, and in two (1.3%) and two (1.4%) patients during the main study (see online supplementary appendix H). TEAEs leading to discontinuation occurred in 16 (10.1%) and 8 (5.6%) patients during the extension study.\nDuring the extension study, 11 (6.9%) and 4 (2.8%) patients in the maintenance and switch groups, respectively, reported infusion-related reactions. All were ADA positive and had sustained ADAs. Only one patient in the maintenance group experienced anaphylaxis. This patient was ADA positive (see online supplementary appendix C, table C-2). In the main study, infusion-related reactions were reported in 8 (5.0%) and 13 (9.1%) patients in the maintenance and switch groups, respectively. Of these, 4 (50.0%) and 11 (84.6%) were ADA positive. Two patients reported infusion-related reactions both in the main study and in the extension study period (one patient in each group). Table 4 shows data for all other TEAEs of special interest. 
No cases of TB were reported during the extension study.\nTEAEs of special interest regardless of relationship to study treatment in the PLANETRA main study and the extension study (safety population)\n*Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.\n†Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.\n‡There were three patients (two in the maintenance group, one in the switch group) with three events of latent TB, which were reported both in the main study and in the extension study; this was because all three events started during week 62 (part of the end-of-study period of the main study).\n§There was one patient in the maintenance group with a serious AE of pneumonia, which was included as a ‘Serious infection’ and ‘Pneumonia’ during the main study.\nAE, adverse event; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.", "The PLANETRA extension study examined the efficacy and safety of treatment with a maximum of six infusions of CT-P13 in patients with RA previously treated with either CT-P13 (maintenance group) or infliximab RP (switch group) for 54 weeks. Importantly, in the switch group, no notable differences in ACR response rates were observed between week 54 (ie, the last RP treatment) and week 102 (ie, 48 weeks after the last RP infusion). Patients in the maintenance group of this extension study received CT-P13 for a total of 102 weeks. In this cohort, the responses to CT-P13 observed in the main study were sustained during the extension study. In the main parallel-group phase of PLANETRA,26 27 ACR responses were broadly comparable with those observed in previous randomised studies of RP up to 54 weeks.31–33 The multinational Anti-TNF Trial in Rheumatoid Arthritis with Concomitant Therapy (ATTRACT) was the pivotal study of MTX plus either RP or placebo in patients with RA. 
Eligibility criteria for that study were similar to those for PLANETRA. A comparison of the 102-week data presented here with data from the same treatment duration of ATTRACT confirms that ACR response rates in PLANETRA were at least comparable with (if not higher than) those in ATTRACT.34 These data support the long-term efficacy of CT-P13 in patients with RA. Further efficacy endpoints—including DAS28-CRP, DAS28-ESR and EULAR-CRP or EULAR-ESR responses—were also maintained from week 54 to 102 in the switch group and were comparable between the maintenance and switch groups at weeks 78 and 102. In addition, the proportion of patients with remission by ACR/EULAR criteria, CDAI, DAS28 and DAS28 low disease activity was also comparable between the two treatment groups during the whole study period. Together, this suggests that there was no detrimental impact on efficacy of switching from RP to CT-P13 in patients with RA. Sensitivity analyses supported the sustained efficacy and comparability observed between the two groups. A multiple analysis approach, using LOCF and NRI methods, reported similar results, both in the maintenance group and in the switch group. Analyses of the main study ITT population and the extension study ITT population using the LOCF approach showed comparable and sustained outcomes throughout the 2-year study period. When analysed using the NRI approach, response rates at week 78 and week 102 in the main study ITT population were lower than when the LOCF approach was used. However, response rates were similar in both groups regardless of approach. Variations in response rates between analysis methods arose because some responders in the main study did not participate in the extension study (figure 1). 
To further understand the influence of non-participants in the extension study, patient demographics at baseline, disease characteristics at baseline and week 54 (see online supplementary appendix B) and ACR20 responses according to ADA status at week 54 (see online supplementary appendix I) were analysed for this population. The results support the comparability between the CT-P13 and RP groups, even for non-participants.\nCT-P13 was well tolerated during the extension study and displayed a long-term safety profile consistent with that of infliximab RP.34 35 There was no noticeable difference in the safety profile before and after switching. After week 54 of the main study, the incidence of TEAEs, drug-related TEAEs or SAEs was similar between the maintenance and switch groups. The incidence of all potential infusion-related reactions did not increase when patients previously treated with RP were switched to CT-P13. During the extension study, 11 (6.9%) patients in the maintenance group and 4 (2.8%) patients in the switch group experienced infusion-related reactions. Infusion-related reactions were reported for 8 (5.0%) patients in the maintenance group and 13 (9.1%) patients in the switch group in the main study (ie, before the switch). Most of these events were of mild to moderate severity.\nIn terms of immunogenicity, the proportion of patients with ADAs remained stable and did not increase between weeks 54 and 102 in either group, although only qualitative analysis of ADA data was performed. In a similarly designed extension of the PLANETAS study, the proportion of patients with AS with ADAs also did not increase consistently.36 In the PLANETRA extension study, the ADA rate was comparable between the maintenance and switch groups at 102 weeks. The proportion of patients with sustained ADAs during the entire study period was also highly similar between groups. 
In the PLANETAS extension study, the number of patients with AS with sustained ADAs was likewise similar between the maintenance and switch groups. These data indicate no detrimental effect on immunogenicity when changing from RP to CT-P13, at least for the first six infusions. IgG4 was not analysed.\nConcomitant use of MTX has been shown to reduce the immunogenicity of infliximab.37 In PLANETRA, MTX was coadministered throughout the study. Given that the initial and most recent doses of MTX were similar between the maintenance and switch groups (initial dose: 15.47 vs 15.51 mg/week; most recent dose: 15.52 vs 15.40 mg/week), it can be assumed that the effect of MTX on the development of ADAs was also similar between both groups. ADAs to infliximab are associated with a reduced clinical response to this drug, as well as with infusion-related reactions and other unwanted effects.38 39 Compared with ADA-negative patients, ADA-positive patients in our study had lower ACR20 response rates and higher levels of CRP and ESR. Such trends were comparable in both the maintenance and switch groups. All of the patients reporting infusion-related reactions were ADA positive in both groups. These results suggest that switching from RP to CT-P13 did not influence the impact of ADAs.\nThe findings from the PLANETRA extension study indicate that there are no harmful effects on efficacy, safety or immunogenicity associated with switching from RP to CT-P13 in patients with RA. Similarly, no detrimental effects of switching were observed in an extension of the PLANETAS study performed in patients with AS.36 The current results are also aligned with those observed in switching studies with other biosimilars that have been approved by the EMA, which has stringent guidelines relating to the regulation of these types of agents. 
Switching data from a number of randomised and non-randomised trials consistently show that detrimental effects of switching between reference biologics and their EMA-approved biosimilars are unlikely.18 40–45
The current extension study was not formally designed to evaluate the non-inferiority or equivalence of switching to CT-P13 from RP versus continued CT-P13 treatment. In this respect, a randomised, double-blind, phase IV study has been initiated in Norway (‘NOR-SWITCH’; ClinicalTrials.gov identifier: NCT02148640) to formally examine the switchability of CT-P13 in a variety of indications. Additionally, a comprehensive pharmacovigilance programme by the manufacturers of CT-P13 is ongoing. These postmarketing surveillance and registry studies will monitor the safety of CT-P13 in patients with AS, RA and other inflammatory diseases who have switched from RP.", "This multinational, open-label extension study demonstrated that in patients with RA receiving MTX, switching from RP to CT-P13 was not associated with any detrimental effects on efficacy, immunogenicity or safety. Additionally, this study demonstrated that CT-P13 remained efficacious and well tolerated over a 2-year treatment period." ]
[ "intro", "methods", null, null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusions" ]
[ "Treatment", "DMARDs (biologic)", "Rheumatoid Arthritis", "Anti-TNF" ]
Introduction: Infliximab is a human–murine chimeric monoclonal antibody to tumour necrosis factor (TNF).1 The introduction of infliximab and other biological drugs into clinical practice has dramatically improved the management of a number of immune-mediated inflammatory diseases, including rheumatoid arthritis (RA).2 However, currently available biologics are associated with high costs,3 4 which has led to restricted treatment access for patients with RA in several regions.5–9 A number of biologics used to treat RA—including originator infliximab (Remicade), hereafter referred to as the reference product (RP)—have reached or are approaching patent expiry in many countries. As a consequence, follow-on biologics (also termed ‘biosimilars’) are being developed for the treatment of RA. A biosimilar can be defined as a ‘biotherapeutic product that is similar in terms of quality, safety, and efficacy to an already licensed reference biotherapeutic product’.10 In order to gain approval, it is usually necessary to show that a biosimilar is highly similar to the RP in physicochemical and biological terms. In addition, clinical studies are generally needed to establish statistical equivalence in pharmacokinetics (PK) and efficacy and to characterise biosimilar safety.11–13 Since the first approval of a biosimilar by the European Medicines Agency (EMA) in 2006, a number of these agents, including granulocyte colony-stimulating factors and erythropoietins, have become available in Europe. 
Indeed, the range of therapeutic areas now covered by approved biosimilars is wide and includes cancer, anaemia, neutropenia and diabetes.14 Data for these EMA-approved biosimilars consistently show that they provide comparable efficacy and safety relative to their RPs.15–23 Recently, CT-P13 (Remsima, Inflectra)—a biosimilar of infliximab RP—became the first monoclonal antibody biosimilar to be approved in Europe for use in all indications held by the infliximab RP.24 All major physicochemical characteristics and in vitro biological activities of CT-P13 and the RP, including affinity for both soluble and transmembrane forms of TNF, are highly comparable.24 25 Approval of CT-P13 was partly based on findings from two 54-week, multinational, randomised, double-blind, parallel-group studies, which compared CT-P13 and RP in ankylosing spondylitis (AS) and RA (Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in AS patients (PLANETAS) and Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in RA patients (PLANETRA)). These studies demonstrated that CT-P13 and RP are highly comparable in terms of PK, efficacy, immunogenicity and safety in both RA and AS.26–29 However, an important unanswered question for prescribing physicians is whether it is possible to switch from RP to CT-P13 in patients with RA without any detrimental effects on safety and efficacy.30 Here, we report the findings from an open-label extension of the PLANETRA study. There were two main aims of the extension study: (1) to investigate the efficacy and safety of switching to CT-P13 in patients previously treated with RP for 54 weeks in PLANETRA (hereafter named the ‘switch group’) and (2) to investigate the longer-term efficacy and safety of extended CT-P13 treatment over 2 years in patients previously treated with CT-P13 in PLANETRA (the ‘maintenance group’).
To facilitate understanding of the data, the results for the maintenance and switch groups are described both for the main (weeks 0–54) and the extension (weeks 54–102) studies. Patients and methods: Full details of the methods of the 54-week, randomised, double-blind, parallel-group PLANETRA study have been reported previously,26 27 and are described briefly below. Patients PLANETRA recruited patients aged 18–75 years with active RA for ≥1 year according to the 1987 American College of Rheumatology (ACR) classification criteria. Eligible patients did not respond adequately to ≥3 months of treatment with methotrexate (MTX) and received a stable MTX dose (12.5–25 mg/week) for ≥4 weeks before screening. Patients who had completed the main 54-week PLANETRA study were offered the opportunity to enter the extension study (ClinicalTrials.gov identifier: NCT01571219) for another 1 year. Those who did not sign a new informed consent for the extension study were excluded. Some of the relevant Ministries of Health (MoH) and ethics committees (ECs) did not approve the extension study, mainly due to the fact that data from the PLANETRA study were not available at the time of EC evaluation. Thus, patients from affected institutions were also excluded. Additional eligibility criteria applied for this extension study included no major protocol violations in the main study and no new therapy for RA in the extension study. Detailed information on non-participants in the extension study is shown in figure 1. Patient disposition in the PLANETRA extension study. All patients who enrolled in the extension study (n=158 and 144 in the maintenance and switch groups, respectively) were included in the ITT population. EC, ethics committee; ITT, intent-to-treat; MoH, Ministry of Health; RP, reference product. Study design and treatment This open-label, single-arm extension of PLANETRA was conducted in 69 centres in 16 countries. In the main study, patients received nine infusions of CT-P13 (CELLTRION, Incheon, Republic of Korea) or the infliximab RP (Janssen Biotech, Horsham, Pennsylvania, USA). After study treatment in PLANETRA, eligible patients could choose to continue in the extension study. However, patients and physicians continued to be blinded to the treatment that the patient had received during the main study. All patients participating in and completing this extension study received six infusions of CT-P13 from week 62 to week 102.
During the whole study period, CT-P13 was administered via 2 h intravenous infusion at a fixed dose of 3 mg/kg. At the discretion of the investigator, antihistamines were provided 30–60 min prior to infusion of CT-P13. MTX (12.5–25 mg/week; oral or parenteral) and folic acid (≥5 mg/week; oral) were coadministered to all patients throughout the main and extension study periods. All patients provided new written informed consent to enrol into the extension study. The study was conducted according to the principles of the Declaration of Helsinki and International Conference on Harmonisation Good Clinical Practice guidelines. Study endpoints Efficacy Efficacy assessments were made at baseline and at weeks 14, 30, 54, 78 and 102. Efficacy endpoints included the proportion of patients meeting ACR20, ACR50 and ACR70 criteria; change from baseline in mean disease activity score in 28 joints (DAS28) and the proportion of patients meeting European League Against Rheumatism (EULAR) response criteria. Additional assessments included the number of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, the Health Assessment Questionnaire (HAQ), levels of C reactive protein (CRP), erythrocyte sedimentation rate (ESR) and hybrid ACR score. Immunogenicity The proportion of patients with antidrug antibodies (ADAs) was assessed at baseline and at weeks 14, 30, 54, 78 and 102 using the previously reported method.26 27 The neutralising activity of ADAs was also assessed by a flow-through immunoassay method using the Gyros Immunoassay Platform (Gyros AB, Sweden). Safety Treatment-emergent adverse events (TEAEs) were assessed throughout the main and extension studies.
Other safety assessments included monitoring of TEAEs of special interest (infusion-related reactions (including hypersensitivity and anaphylactic reaction), tuberculosis (TB), latent TB (defined as a positive conversion of an interferon-γ release assay (negative at baseline) with a negative result for chest X-ray examination), serious infection, pneumonia, drug-induced liver injury, vascular disorders and malignancies), vital signs, physical examination findings and clinical laboratory analyses. Exploratory and post hoc endpoints Details of exploratory and post hoc endpoints are given in online supplementary appendix A. Statistical analyses All data were analysed descriptively in the maintenance and switch groups. The populations were predefined in the study protocol and statistical analysis plan for participants of the extension study. The efficacy population included all patients who received at least one dose of study treatment and had at least one efficacy measurement in the extension study. Conservatively, ACR response was analysed using non-responder imputation (NRI) for missing values and presented with 95% CIs of the response rate using an exact binomial approach from the efficacy population. No imputation of missing values was done for analysis of other efficacy endpoints. The safety population consisted of all patients who enrolled in the study, because they had all received study treatment in the preceding study. Data from the main study period were analysed in participants of the extension study only, not in all patients in the main study. Methods for sensitivity analyses of ACR response and statistical analyses of exploratory and post hoc endpoints are included in online supplementary appendix A. Results: Patients The first patient visit of the main PLANETRA study and the last visit of the PLANETRA extension were held between November 2010 and July 2013. Of the 455 patients who completed the main PLANETRA study, 302 patients consented to participate in the extension study and were screened under the approval of the appropriate MoH/EC (figure 1). Of the 302 screened patients, all were enrolled and 301 were treated. One patient in the maintenance group was enrolled but discontinued due to an adverse event (B-cell lymphoma stage IV) before receiving treatment in the extension study. A total of 158 patients had received CT-P13 in the main study (maintenance group); 144 had received RP (switch group). These patients comprised the intent-to-treat (ITT) population of the extension study. Patient demographics and disease characteristics at baseline and at week 54 of the main study were similar between the two groups (table 1). Patient demographics and disease characteristics at baseline and week 54 of patients enrolled in the PLANETRA extension study (ITT population) Data shown in the table were recorded at the baseline and week 54 visits of the preceding 54-week main study. *Except where indicated otherwise, values are median (range). †Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. ‡Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study.
ACR, American College of Rheumatology; CCP, cyclic citrullinated peptide; CRP, C reactive protein; DAS28, disease activity score in 28 joints; ESR, erythrocyte sedimentation rate; ITT, intent-to-treat; RF, rheumatoid factor; RP, reference product. In the maintenance and switch groups, respectively, 133 (84.2%) and 128 (88.9%) patients completed the extension phase; 25 (15.8%) and 16 (11.1%) patients discontinued over the whole period of the extension study. Reasons for patient withdrawal are shown in figure 1. The efficacy population of the extension study included 152 patients in the maintenance group and 142 patients in the switch group. Owing to incorrect kits being dispensed, one patient randomly assigned to the RP group received one dose of CT-P13 at week 2 in the PLANETRA main study. Applying a conservative approach, this patient was classified as a member of the CT-P13 group for safety analyses in the main study. Therefore, the safety population of the extension study in the maintenance and switch groups comprised 159 and 143 patients, respectively. Similar to the ITT population of the extension study, patient demographics and disease characteristics of non-participants in the extension study were also comparable between the CT-P13 and RP groups (see online supplementary appendix B). Efficacy Throughout the extension study, ACR20, ACR50 and ACR70 response rates were maintained, and no differences were evident between the groups at weeks 78 and 102 (figure 2). In the switch group, respective ACR20, ACR50 and ACR70 response rates were 77.5%, 50.0% and 23.9% at week 54 (ie, at the end of RP treatment) and 71.8%, 51.4% and 26.1% at week 102 (ie, 48 weeks after the last infusion of RP at week 54). In the maintenance group, respective ACR20, ACR50 and ACR70 response rates were 77.0%, 46.1% and 22.4% at week 54 and 71.7%, 48.0% and 24.3% at week 102. In patients who participated in the extension study, the proportion of patients achieving ACR20, ACR50 and ACR70 responses during the main study was also similar between the two groups. In a subgroup analysis performed according to ADA status, the proportion of ADA-negative patients achieving ACR20 was 85.7% at week 54 and 82.2% at week 102 in the maintenance group, and 84.7% at week 54 and 82.8% at week 102 in the switch group. In comparison, 68.0% (week 54) and 73.4% (week 102) of ADA-positive patients in the maintenance group, and 70.6% and 73.4% in the switch group achieved ACR20 (see online supplementary appendix C, figure C-1). Proportion of patients with rheumatoid arthritis with (A) an ACR20 response, (B) an ACR50 response and (C) an ACR70 response in the maintenance* and switch** groups of the PLANETRA extension study (efficacy population with non-responder imputation approach). CI values are the 95% CIs of the treatment difference. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study.
**Patients treated with reference product during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology. No notable differences in other efficacy endpoints were noted between or within the groups at weeks 14, 30, 54, 78 or 102. The results for DAS28 score change and EULAR response criteria are shown in online supplementary appendix D. The results of assessments of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, HAQ, levels of CRP, ESR and hybrid ACR score were not different between groups (see online supplementary appendix E). Sensitivity analyses to compare populations and statistical approaches supported the appearance of sustained efficacy and comparability between the two groups (see online supplementary appendix F). Analyses using the last observation carried forward (LOCF) approach showed similar results as analyses using the NRI approach, both in the maintenance group (ACR20: 74.1% using LOCF vs 71.7% using NRI at week 102, respectively) and in the switch group (ACR20: 77.1% using LOCF vs 71.8% using NRI at week 102). Analyses of the main study ITT population using the LOCF approach showed relatively low response rates compared with analyses of the extension study ITT population. However, response rates were comparable between the groups and sustained throughout the 2-year study period, both in the extension study ITT population (ACR20: 74.1% at week 102 vs 75.3% at week 54 in the maintenance group, 77.1% vs 77.1% in the switch group, respectively) and in the main study ITT population (ACR20: 61.6% at week 102 vs 62.9% at week 54 in the CT-P13 group, 59.2% vs 59.9% in the RP group, respectively). 
When data for the main study ITT population were analysed using the NRI approach, lower response rates were seen at week 102 than week 54 although rates were comparable between the groups (ACR20: 36.1% at week 102 vs 57.0% at week 54 in the CT-P13 group, 33.6% vs 52.0% in the RP group). When remission was measured up to week 102 based on ACR/EULAR criteria (Boolean-based definition and index-based definition (Simple Disease Activity Index (SDAI)), Clinical Disease Activity Index (CDAI), DAS28 and DAS28 low disease activity, the proportion of patients achieving remission or low disease activity was similar between groups throughout the study period (see online supplementary appendix G). Throughout the extension study, ACR20, ACR50 and ACR70 response rates were maintained, and no differences were evident between the groups at weeks 78 and 102 (figure 2). In the switch group, respective ACR20, ACR50 and ACR70 response rates were 77.5%, 50.0% and 23.9% at week 54 (ie, at the end of RP treatment) and 71.8%, 51.4% and 26.1% at week 102 (ie, 48 weeks after the last infusion of RP at week 54). In the maintenance group, respective ACR20, ACR50 and ACR70 response rates were 77.0%, 46.1% and 22.4% at week 54 and 71.7%, 48.0% and 24.3% at week 102. In patients who participated in the extension study, the proportion of patients achieving ACR20, ACR50 and ACR70 responses during the main study was also similar between the two groups. In a subgroup analysis performed according to ADA status, the proportion of ADA-negative patients achieving ACR20 was 85.7% at week 54 and 82.2% at week 102 in the maintenance group, and 84.7% at week 54 and 82.8% at week 102 in the switch group. In comparison, 68.0% (week 54) and 73.4% (week 102) of ADA-positive patients in the maintenance group, and 70.6% and 73.4% in the switch group achieved ACR20 (see online supplementary appendix C, figure C-1). 
Proportion of patients with rheumatoid arthritis with (A) an ACR20 response, (B) an ACR50 response and (C) an ACR70 response in the maintenance* and switch** groups of the PLANETRA extension study (efficacy population with non-responder imputation approach). CI values are the 95% CIs of the treatment difference. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. **Patients treated with reference product during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology. No notable differences in other efficacy endpoints were noted between or within the groups at weeks 14, 30, 54, 78 or 102. The results for DAS28 score change and EULAR response criteria are shown in online supplementary appendix D. The results of assessments of tender and swollen joints, patient assessment of pain, patient and physician global assessment of disease activity, HAQ, levels of CRP, ESR and hybrid ACR score were not different between groups (see online supplementary appendix E). Sensitivity analyses to compare populations and statistical approaches supported the appearance of sustained efficacy and comparability between the two groups (see online supplementary appendix F). Analyses using the last observation carried forward (LOCF) approach showed similar results as analyses using the NRI approach, both in the maintenance group (ACR20: 74.1% using LOCF vs 71.7% using NRI at week 102, respectively) and in the switch group (ACR20: 77.1% using LOCF vs 71.8% using NRI at week 102). Analyses of the main study ITT population using the LOCF approach showed relatively low response rates compared with analyses of the extension study ITT population. 
However, response rates were comparable between the groups and sustained throughout the 2-year study period, both in the extension study ITT population (ACR20: 74.1% at week 102 vs 75.3% at week 54 in the maintenance group, 77.1% vs 77.1% in the switch group, respectively) and in the main study ITT population (ACR20: 61.6% at week 102 vs 62.9% at week 54 in the CT-P13 group, 59.2% vs 59.9% in the RP group, respectively). When data for the main study ITT population were analysed using the NRI approach, lower response rates were seen at week 102 than week 54 although rates were comparable between the groups (ACR20: 36.1% at week 102 vs 57.0% at week 54 in the CT-P13 group, 33.6% vs 52.0% in the RP group). When remission was measured up to week 102 based on ACR/EULAR criteria (Boolean-based definition and index-based definition (Simple Disease Activity Index (SDAI)), Clinical Disease Activity Index (CDAI), DAS28 and DAS28 low disease activity, the proportion of patients achieving remission or low disease activity was similar between groups throughout the study period (see online supplementary appendix G). Immunogenicity The proportion of patients with ADAs was similar between the maintenance and switch groups at each time point during the main and extension studies (table 2). Almost all patients with a positive ADA result had a positive result for neutralising antibodies (NAb), and the proportion of patients with a positive NAb result was similar between the two groups. The proportion of ADA-positive patients with sustained ADAs was also highly similar between groups (80.2% and 80.4% in the maintenance and switch groups, respectively). Proportion of patients with RA who were positive for ADAs and NAbs in the main study and the extension study (safety population) Percentages for NAb results are based on the number of positive ADA results at that visit. 
ADA persistency was defined as transient when a patient tested positive for ADAs at one or more time point but negative at the last available time point. The remaining patients with positive ADA results were considered to have shown a sustained ADA response. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡N, total number of patients with at least one positive ADA result. ADAs, antidrug antibodies; NAbs, neutralising antibodies; RA, rheumatoid arthritis; RP, reference product. The proportion of patients with ADAs was similar between the maintenance and switch groups at each time point during the main and extension studies (table 2). Almost all patients with a positive ADA result had a positive result for neutralising antibodies (NAb), and the proportion of patients with a positive NAb result was similar between the two groups. The proportion of ADA-positive patients with sustained ADAs was also highly similar between groups (80.2% and 80.4% in the maintenance and switch groups, respectively). Proportion of patients with RA who were positive for ADAs and NAbs in the main study and the extension study (safety population) Percentages for NAb results are based on the number of positive ADA results at that visit. ADA persistency was defined as transient when a patient tested positive for ADAs at one or more time point but negative at the last available time point. The remaining patients with positive ADA results were considered to have shown a sustained ADA response. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡N, total number of patients with at least one positive ADA result. 
ADAs, antidrug antibodies; NAbs, neutralising antibodies; RA, rheumatoid arthritis; RP, reference product. Pharmacodynamics In a subgroup analysis performed by ADA status, the mean change from baseline in CRP and ESR was comparable in the maintenance and switch groups at week 54 and week 102 in both ADA-negative and ADA-positive patients (see online supplementary appendix C, table C-1). In a subgroup analysis performed by ADA status, the mean change from baseline in CRP and ESR was comparable in the maintenance and switch groups at week 54 and week 102 in both ADA-negative and ADA-positive patients (see online supplementary appendix C, table C-1). Safety The proportion of patients who experienced at least one TEAE was comparable between the maintenance group and the switch group (extension study: 53.5% (n=85 of 159) and 53.8% (n=77 of 143), respectively; main study: 63.5% (n=101) and 62.2% (n=89)). Rates of TEAEs considered by the investigator to be related to study treatment were also similar between the maintenance and switch groups (extension study: 22.0% (n=35) and 18.9% (n=27); main study: 35.2% (n=56) and 35.7% (n=51)). The most common treatment-related TEAEs are shown in table 3. Treatment-related TEAEs that were reported in at least 1% of patients in either the maintenance group or the switch group (safety population) *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. RA, rheumatoid arthritis; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event. With respect to serious adverse events (SAEs), these events occurred in the maintenance and switch groups, respectively, in 12 (7.5%) and 13 (9.1%) patients during the extension study, and in 9 (5.7%) and 5 (3.5%) patients during the main study. 
Treatment-related SAEs occurred in two (1.3%) and four (2.8%) patients in the extension study, respectively, and in two (1.3%) and two (1.4%) patients in the main study (see online supplementary appendix H). TEAEs leading to discontinuation occurred in 16 (10.1%) and 8 (5.6%) patients during the extension study. During the extension study, 11 (6.9%) and 4 (2.8%) patients in the maintenance and switch groups, respectively, reported infusion-related reactions. All were ADA positive and had sustained ADAs. Only one patient in the maintenance group experienced anaphylaxis. This patient was ADA positive (see online supplementary appendix C, table C-2). In the main study, infusion-related reactions were reported in 8 (5.0%) and 13 (9.1%) patients in the maintenance and switch groups, respectively. Of these, 4 (50.0%) and 11 (84.6%) were ADA positive. Two patients reported infusion-related reactions both in the main study and in the extension study period (one patient in each group). Table 4 shows data for all other TEAEs of special interest. No cases of TB were reported during the extension study. TEAEs of special interest regardless of relationship to study treatment in the PLANETRA main study and the extension study (safety population) *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡There were three patients (two in the maintenance group, one in the switch group) with three events of latent TB, which were reported both in the main study and in the extension study; this was because all three events started during week 62 (part of the end-of-study period of the main study). §There was one patient in the maintenance group with a serious AE of pneumonia, which was included as a ‘Serious infection’ and ‘Pneumonia’ during the main study. 
AE, adverse event; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event. The proportion of patients who experienced at least one TEAE was comparable between the maintenance group and the switch group (extension study: 53.5% (n=85 of 159) and 53.8% (n=77 of 143), respectively; main study: 63.5% (n=101) and 62.2% (n=89)). Rates of TEAEs considered by the investigator to be related to study treatment were also similar between the maintenance and switch groups (extension study: 22.0% (n=35) and 18.9% (n=27); main study: 35.2% (n=56) and 35.7% (n=51)). The most common treatment-related TEAEs are shown in table 3. Treatment-related TEAEs that were reported in at least 1% of patients in either the maintenance group or the switch group (safety population) *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. RA, rheumatoid arthritis; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event. With respect to serious adverse events (SAEs), these events occurred in the maintenance and switch groups, respectively, in 12 (7.5%) and 13 (9.1%) patients during the extension study, and in 9 (5.7%) and 5 (3.5%) patients during the main study. Treatment-related SAEs occurred in two (1.3%) and four (2.8%) patients in the extension study, respectively, and in two (1.3%) and two (1.4%) patients in the main study (see online supplementary appendix H). TEAEs leading to discontinuation occurred in 16 (10.1%) and 8 (5.6%) patients during the extension study. During the extension study, 11 (6.9%) and 4 (2.8%) patients in the maintenance and switch groups, respectively, reported infusion-related reactions. All were ADA positive and had sustained ADAs. Only one patient in the maintenance group experienced anaphylaxis. 
This patient was ADA positive (see online supplementary appendix C, table C-2). In the main study, infusion-related reactions were reported in 8 (5.0%) and 13 (9.1%) patients in the maintenance and switch groups, respectively. Of these, 4 (50.0%) and 11 (84.6%) were ADA positive. Two patients reported infusion-related reactions both in the main study and in the extension study period (one patient in each group). Table 4 shows data for all other TEAEs of special interest. No cases of TB were reported during the extension study. TEAEs of special interest regardless of relationship to study treatment in the PLANETRA main study and the extension study (safety population) *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡There were three patients (two in the maintenance group, one in the switch group) with three events of latent TB, which were reported both in the main study and in the extension study; this was because all three events started during week 62 (part of the end-of-study period of the main study). §There was one patient in the maintenance group with a serious AE of pneumonia, which was included as a ‘Serious infection’ and ‘Pneumonia’ during the main study. AE, adverse event; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event. Patients: The first patient visit of the main PLANETRA study and the last visit of the PLANETRA extension were held between November 2010 and July 2013. Of the 455 patients who completed the main PLANETRA study, 302 patients consented to participate in the extension study and were screened under the approval of the appropriate MoH/EC (figure 1). Of the 302 screened patients, all were enrolled and 301 were treated. 
One patient in the maintenance group was enrolled but discontinued due to an adverse event (B-cell lymphoma stage IV) before receiving treatment in the extension study. A total of 158 patients had received CT-P13 in the main study (maintenance group) and 144 had received RP (switch group); these patients comprised the intent-to-treat (ITT) population of the extension study. Patient demographics and disease characteristics at baseline and at week 54 of the main study were similar between the two groups (table 1).

Table 1. Patient demographics and disease characteristics at baseline and week 54 of patients enrolled in the PLANETRA extension study (ITT population). Data shown in the table were recorded at the baseline and week 54 visits of the preceding 54-week main study. *Except where indicated otherwise, values are median (range). †Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. ‡Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology; CCP, cyclic citrullinated peptide; CRP, C reactive protein; DAS28, disease activity score in 28 joints; ESR, erythrocyte sedimentation rate; ITT, intent-to-treat; RF, rheumatoid factor; RP, reference product.

In the maintenance and switch groups, respectively, 133 (84.2%) and 128 (88.9%) patients completed the extension phase, while 25 (15.8%) and 16 (11.1%) patients discontinued during the extension study; reasons for patient withdrawal are shown in figure 1. The efficacy population of the extension study included 152 patients in the maintenance group and 142 in the switch group. Owing to incorrect kits being dispensed, one patient randomly assigned to the RP group received one dose of CT-P13 at week 2 of the PLANETRA main study; applying a conservative approach, this patient was classified as a member of the CT-P13 group for safety analyses in the main study.
Therefore, the safety population of the extension study in the maintenance and switch groups comprised 159 and 143 patients, respectively. As in the ITT population of the extension study, patient demographics and disease characteristics of non-participants were comparable between the CT-P13 and RP groups (see online supplementary appendix B).

Efficacy: Throughout the extension study, ACR20, ACR50 and ACR70 response rates were maintained, and no differences were evident between the groups at weeks 78 and 102 (figure 2). In the switch group, respective ACR20, ACR50 and ACR70 response rates were 77.5%, 50.0% and 23.9% at week 54 (ie, at the end of RP treatment) and 71.8%, 51.4% and 26.1% at week 102 (ie, 48 weeks after the last infusion of RP at week 54). In the maintenance group, respective ACR20, ACR50 and ACR70 response rates were 77.0%, 46.1% and 22.4% at week 54 and 71.7%, 48.0% and 24.3% at week 102. Among patients who participated in the extension study, the proportion achieving ACR20, ACR50 and ACR70 responses during the main study was also similar between the two groups. In a subgroup analysis by ADA status, the proportion of ADA-negative patients achieving ACR20 was 85.7% at week 54 and 82.2% at week 102 in the maintenance group, and 84.7% at week 54 and 82.8% at week 102 in the switch group. In comparison, 68.0% (week 54) and 73.4% (week 102) of ADA-positive patients in the maintenance group, and 70.6% and 73.4% in the switch group, achieved ACR20 (see online supplementary appendix C, figure C-1).

Figure 2. Proportion of patients with rheumatoid arthritis with (A) an ACR20 response, (B) an ACR50 response and (C) an ACR70 response in the maintenance* and switch** groups of the PLANETRA extension study (efficacy population with non-responder imputation approach). CI values are the 95% CIs of the treatment difference. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. **Patients treated with reference product during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ACR, American College of Rheumatology.

No notable differences in other efficacy endpoints were noted between or within the groups at weeks 14, 30, 54, 78 or 102. The results for DAS28 score change and EULAR response criteria are shown in online supplementary appendix D. Assessments of tender and swollen joints, patient assessment of pain, patient and physician global assessments of disease activity, HAQ, levels of CRP, ESR and the hybrid ACR score did not differ between groups (see online supplementary appendix E). Sensitivity analyses comparing populations and statistical approaches supported the sustained efficacy and comparability of the two groups (see online supplementary appendix F). Analyses using the last observation carried forward (LOCF) approach showed results similar to those using the non-responder imputation (NRI) approach, both in the maintenance group (ACR20 at week 102: 74.1% with LOCF vs 71.7% with NRI) and in the switch group (ACR20 at week 102: 77.1% with LOCF vs 71.8% with NRI). Analyses of the main study ITT population using the LOCF approach showed relatively low response rates compared with analyses of the extension study ITT population. However, response rates were comparable between the groups and sustained throughout the 2-year study period, both in the extension study ITT population (ACR20: 74.1% at week 102 vs 75.3% at week 54 in the maintenance group; 77.1% vs 77.1% in the switch group) and in the main study ITT population (ACR20: 61.6% at week 102 vs 62.9% at week 54 in the CT-P13 group; 59.2% vs 59.9% in the RP group).
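The two imputation approaches compared in this sensitivity analysis can be stated precisely: under LOCF a missing visit inherits the most recent observed response, whereas under NRI a missing visit counts as a non-response. A minimal sketch of the two rules (the visit data are hypothetical, not trial data):

```python
def locf(responses):
    """Last observation carried forward (LOCF): a missing visit (None)
    inherits the most recent observed value, if any."""
    filled, last = [], None
    for r in responses:
        if r is not None:
            last = r
        filled.append(last)
    return filled

def nri(responses):
    """Non-responder imputation (NRI): a missing visit (None)
    counts as a non-response."""
    return [r if r is not None else False for r in responses]

# Hypothetical patient: ACR20 responder at two visits, then lost to follow-up.
visits = [True, True, None, None]
print(locf(visits))  # [True, True, True, True] -- carries the response forward
print(nri(visits))   # [True, True, False, False] -- counts missed visits as failures
```

This illustrates why NRI yields lower apparent response rates than LOCF when responders drop out, as reported for the main study ITT population above.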
When data for the main study ITT population were analysed using the NRI approach, lower response rates were seen at week 102 than at week 54, although rates were comparable between the groups (ACR20: 36.1% at week 102 vs 57.0% at week 54 in the CT-P13 group; 33.6% vs 52.0% in the RP group). When remission up to week 102 was measured by the ACR/EULAR criteria (both the Boolean-based definition and the index-based definition using the Simple Disease Activity Index (SDAI)), the Clinical Disease Activity Index (CDAI), DAS28 and DAS28 low disease activity, the proportion of patients achieving remission or low disease activity was similar between groups throughout the study period (see online supplementary appendix G).

Immunogenicity: The proportion of patients with ADAs was similar between the maintenance and switch groups at each time point during the main and extension studies (table 2). Almost all patients with a positive ADA result also had a positive result for neutralising antibodies (NAbs), and the proportion of patients with a positive NAb result was similar between the two groups. The proportion of ADA-positive patients with sustained ADAs was also highly similar between groups (80.2% and 80.4% in the maintenance and switch groups, respectively).

Table 2. Proportion of patients with RA who were positive for ADAs and NAbs in the main study and the extension study (safety population). Percentages for NAb results are based on the number of positive ADA results at that visit. An ADA response was defined as transient when a patient tested positive for ADAs at one or more time points but negative at the last available time point; the remaining patients with positive ADA results were considered to have shown a sustained ADA response. *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡N, total number of patients with at least one positive ADA result. ADAs, antidrug antibodies; NAbs, neutralising antibodies; RA, rheumatoid arthritis; RP, reference product.

Pharmacodynamics: In a subgroup analysis by ADA status, the mean change from baseline in CRP and ESR was comparable between the maintenance and switch groups at weeks 54 and 102 in both ADA-negative and ADA-positive patients (see online supplementary appendix C, table C-1).

Safety: The proportion of patients who experienced at least one TEAE was comparable between the maintenance and switch groups (extension study: 53.5% (n=85 of 159) and 53.8% (n=77 of 143), respectively; main study: 63.5% (n=101) and 62.2% (n=89)). Rates of TEAEs considered by the investigator to be related to study treatment were also similar between the maintenance and switch groups (extension study: 22.0% (n=35) and 18.9% (n=27); main study: 35.2% (n=56) and 35.7% (n=51)). The most common treatment-related TEAEs are shown in table 3.

Table 3. Treatment-related TEAEs reported in at least 1% of patients in either the maintenance group or the switch group (safety population). *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. RA, rheumatoid arthritis; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.

Serious adverse events (SAEs) occurred in 12 (7.5%) and 13 (9.1%) patients in the maintenance and switch groups, respectively, during the extension study, and in 9 (5.7%) and 5 (3.5%) patients during the main study. Treatment-related SAEs occurred in two (1.3%) and four (2.8%) patients, respectively, in the extension study, and in two (1.3%) and two (1.4%) patients in the main study (see online supplementary appendix H).
TEAEs leading to discontinuation occurred in 16 (10.1%) patients in the maintenance group and 8 (5.6%) patients in the switch group during the extension study. During the extension study, 11 (6.9%) and 4 (2.8%) patients in the maintenance and switch groups, respectively, reported infusion-related reactions; all were ADA positive and had sustained ADAs. Only one patient, in the maintenance group, experienced anaphylaxis; this patient was ADA positive (see online supplementary appendix C, table C-2). In the main study, infusion-related reactions were reported in 8 (5.0%) and 13 (9.1%) patients in the maintenance and switch groups, respectively; of these, 4 (50.0%) and 11 (84.6%) were ADA positive. Two patients (one in each group) reported infusion-related reactions in both the main study and the extension study. Table 4 shows data for all other TEAEs of special interest. No cases of TB were reported during the extension study.

Table 4. TEAEs of special interest, regardless of relationship to study treatment, in the PLANETRA main study and the extension study (safety population). *Patients treated with CT-P13 during the 54 weeks of the main study and the 48-week extension study. †Patients treated with RP during the 54 weeks of the main study and then switched to CT-P13 during the 48-week extension study. ‡Three patients (two in the maintenance group, one in the switch group) had three events of latent TB that were reported in both the main study and the extension study, because all three events started during week 62 (part of the end-of-study period of the main study). §One patient in the maintenance group had a serious AE of pneumonia, which was counted as both a 'Serious infection' and 'Pneumonia' during the main study. AE, adverse event; RP, reference product; TB, tuberculosis; TEAE, treatment-emergent adverse event.
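The transient/sustained classification applied to ADA results in the immunogenicity analysis reduces to a simple rule over each patient's sequence of available ADA results; a sketch of that rule (the sequences below are illustrative, not trial data):

```python
def classify_ada(results):
    """Classify a patient's antidrug antibody (ADA) time course using the
    study's rule: 'transient' if the patient was ADA positive at one or
    more time points but negative at the last available time point;
    any other positive course is 'sustained'; never positive is 'negative'.
    `results` holds booleans (True = positive) in visit order, one per
    visit with an available result."""
    if not any(results):
        return "negative"
    return "transient" if not results[-1] else "sustained"

print(classify_ada([False, True, True, False]))  # transient
print(classify_ada([False, True, True, True]))   # sustained
print(classify_ada([False, False, False]))       # negative
```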
Discussion: The PLANETRA extension study examined the efficacy and safety of treatment with a maximum of six infusions of CT-P13 in patients with RA previously treated for 54 weeks with either CT-P13 (maintenance group) or infliximab RP (switch group). Importantly, in the switch group, no notable differences in ACR response rates were observed between week 54 (ie, the last RP treatment) and week 102 (ie, 48 weeks after the last RP infusion). Patients in the maintenance group of this extension study received CT-P13 for a total of 102 weeks; in this cohort, the responses to CT-P13 observed in the main study were sustained during the extension study. In the main parallel-group phase of PLANETRA,26 27 ACR responses were broadly comparable with those observed in previous randomised studies of RP up to 54 weeks.31–33 The multinational Anti-TNF Trial in Rheumatoid Arthritis with Concomitant Therapy (ATTRACT) was the pivotal study of MTX plus either RP or placebo in patients with RA, and its eligibility criteria were similar to those for PLANETRA. A comparison of the 102-week data presented here with data from the same treatment duration of ATTRACT confirms that ACR response rates in PLANETRA were at least comparable with, if not higher than, those in ATTRACT.34 These data support the long-term efficacy of CT-P13 in patients with RA. Further efficacy endpoints, including DAS28-CRP, DAS28-ESR and EULAR-CRP or EULAR-ESR responses, were also maintained from week 54 to week 102 in the switch group and were comparable between the maintenance and switch groups at weeks 78 and 102. In addition, the proportion of patients with remission by ACR/EULAR criteria, CDAI, DAS28 and DAS28 low disease activity was comparable between the two treatment groups during the whole study period. Together, these findings suggest that switching from RP to CT-P13 had no detrimental impact on efficacy in patients with RA.
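The DAS28 scores referred to above combine 28-joint tender and swollen counts, an acute-phase reactant and the patient's global health assessment. A sketch using the standard published DAS28 formulas (coefficients come from the general DAS28 literature, not from this study; the example values are hypothetical):

```python
from math import sqrt, log

def das28_esr(tjc28, sjc28, esr, gh):
    """DAS28 using the erythrocyte sedimentation rate (mm/h).
    tjc28/sjc28 = tender/swollen counts over 28 joints;
    gh = patient global health on a 0-100 mm visual analogue scale."""
    return 0.56 * sqrt(tjc28) + 0.28 * sqrt(sjc28) + 0.70 * log(esr) + 0.014 * gh

def das28_crp(tjc28, sjc28, crp, gh):
    """DAS28 using C reactive protein (mg/L)."""
    return 0.56 * sqrt(tjc28) + 0.28 * sqrt(sjc28) + 0.36 * log(crp + 1) + 0.014 * gh + 0.96

# Hypothetical patient: 4 tender joints, 2 swollen, ESR 30 mm/h, GH 50 mm.
# Conventional cut-offs: remission < 2.6; low disease activity <= 3.2.
print(round(das28_esr(4, 2, 30, 50), 2))  # 4.6
```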
Sensitivity analyses supported the sustained efficacy and comparability observed between the two groups. A multiple analysis approach, using LOCF and NRI methods, reported similar results, both in the maintenance group and in the switch group. Analyses of the main study ITT population and the extension study ITT population using the LOCF approach showed comparable and sustained outcomes throughout the 2-year study period. When analysed using the NRI approach, response rates at week 78 and week 102 in the main study ITT population were lower than when the LOCF approach was used. However, response rates were similar in both groups regardless of approach. Response rates varied between analysis methods because some responders in the main study did not participate in the extension study (figure 1). To further understand the influence of non-participants in the extension study, patient demographics at baseline, disease characteristics at baseline and week 54 (see online supplementary appendix B) and ACR20 responses according to ADA status at week 54 (see online supplementary appendix I) were analysed for this population. The results support the comparability between CT-P13 and RP groups, even for non-participants. CT-P13 was well tolerated during the extension study and displayed a long-term safety profile consistent with that of infliximab RP.34 35 There was no noticeable difference in the safety profile before and after switching. After week 54 of the main study, the incidence of TEAEs, drug-related TEAEs or SAEs was similar between the maintenance and switch groups. The incidence of all potential infusion-related reactions did not increase when patients previously treated with RP were switched to CT-P13. During the extension study, 11 (6.9%) patients in the maintenance group and 4 (2.8%) patients in the switch group experienced infusion-related reactions.
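The LOCF and NRI sensitivity analyses described earlier in this section differ only in how missing extension-study visits are imputed, which is why NRI yields lower response rates when responders drop out. A minimal illustrative sketch (not the trial's actual statistical code; the data are invented):

```python
def locf(visits):
    """Last observation carried forward: replace a missing visit with the
    last non-missing value seen so far."""
    last = None
    out = []
    for v in visits:
        if v is not None:
            last = v
        out.append(last)
    return out

def nri(visits):
    """Non-responder imputation: any missing visit counts as non-response."""
    return [v if v is not None else False for v in visits]

# Each inner list is one patient's responder status (True/False) at
# weeks 54, 78 and 102; None marks a missed extension-study visit.
patients = [
    [True, True, True],
    [True, None, None],    # week-54 responder who did not enter the extension
    [False, False, False],
]

for method in (locf, nri):
    imputed = [method(p) for p in patients]
    week102_rate = sum(p[-1] for p in imputed) / len(imputed)
    print(method.__name__, week102_rate)
```

With these invented data, LOCF carries the dropout's week-54 response forward (rate 2/3) while NRI counts the missed visits as failures (rate 1/3), mirroring the direction of the difference reported in the text.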
Infusion-related reactions were reported for 8 (5.0%) patients in the maintenance group and 13 (9.1%) patients in the switch group in the main study (ie, before the switch). Most of these events were of mild to moderate severity. In terms of immunogenicity, the proportion of patients with ADAs remained stable and did not increase between weeks 54 and 102 in either group, although only qualitative analysis of ADA data was performed. In a similarly designed extension of the PLANETAS study, the proportion of patients with AS with ADAs also did not increase consistently.36 In the PLANETRA extension study, the ADA rate was comparable between the maintenance and switch groups at 102 weeks. The proportion of patients with sustained ADAs during the entire study period was also highly similar between groups. Similarly, in the PLANETAS extension study, the number of patients with AS with sustained ADAs was also similar between maintenance and switch groups. These data indicate no detrimental effect on immunogenicity when changing from RP to CT-P13, at least for the first six infusions. There was no analysis for IgG4. Concomitant use of MTX has been shown to reduce the immunogenicity of infliximab.37 In PLANETRA, MTX was coadministered throughout the study. Given that the initial and most recent doses of MTX were similar between the maintenance and switch groups (initial dose: 15.47 vs 15.51 mg/week; most recent dose: 15.52 vs 15.40 mg/week), it can be assumed that the effect of MTX on the development of ADAs was also similar between both groups. ADAs to infliximab are associated with a reduced clinical response to this drug, as well as to infusion-related reactions and other unwanted effects.38 39 Compared with ADA-negative patients, ADA-positive patients in our study had lower ACR20 response rates and higher levels of CRP and ESR. Such trends were comparable in both the maintenance and switch groups. 
All of the patients reporting infusion-related reactions were ADA positive in both groups. These results suggest that the effects of switching from RP to CT-P13 did not influence the impact of ADAs. The findings from the PLANETRA extension study indicate that there are no harmful effects on efficacy, safety or immunogenicity associated with switching from RP to CT-P13 in patients with RA. Similarly, no detrimental effects of switching were observed in an extension of the PLANETAS study performed in patients with AS.36 The current results are also aligned with those observed in switching studies with other biosimilars that have been approved by the EMA, which has stringent guidelines relating to the regulation of these types of agents. Switching data from a number of randomised and non-randomised trials consistently show that detrimental effects of switching between reference biologics and their EMA-approved biosimilars are unlikely to happen.18 40–45 The current extension study was not formally designed to evaluate the non-inferiority or equivalence of switching to CT-P13 from RP versus continual CT-P13 treatment. In this respect, a randomised, double-blind, phase IV study has been initiated in Norway (‘NOR-SWITCH’; ClinicalTrials.gov identifier: NCT02148640) to formally examine the switchability of CT-P13 in a variety of indications. Additionally, a comprehensive pharmacovigilance programme by the manufacturers of CT-P13 is also ongoing. These postmarketing surveillance and registry studies will monitor the safety of CT-P13 in patients with AS, RA and other inflammatory diseases who have switched from RP. Conclusions: This multinational, open-label extension study demonstrated that in patients with RA receiving MTX, switching from RP to CT-P13 was not associated with any detrimental effects on efficacy, immunogenicity or safety. Additionally, this study demonstrated that CT-P13 remained efficacious and well tolerated over a 2-year treatment period.
Background: NCT01571219; Results. Methods: This open-label extension study recruited patients with RA who had completed the 54-week, randomised, parallel-group study comparing CT-P13 with RP (PLANETRA; NCT01217086). CT-P13 (3 mg/kg) was administered intravenously every 8 weeks from weeks 62 to 102. All patients received concomitant methotrexate. Endpoints included American College of Rheumatology 20% (ACR20) response, ACR50, ACR70, immunogenicity and safety. Data were analysed for patients who received CT-P13 for 102 weeks (maintenance group) and for those who received RP for 54 weeks and then switched to CT-P13 (switch group). Results: Overall, 302 of 455 patients who completed the PLANETRA study enrolled into the extension. Of these, 158 had received CT-P13 (maintenance group) and 144 RP (switch group). Response rates at week 102 for maintenance versus switch groups, respectively, were 71.7% vs 71.8% for ACR20, 48.0% vs 51.4% for ACR50 and 24.3% vs 26.1% for ACR70. The proportion of patients with antidrug antibodies was comparable between groups (week 102: 40.3% vs 44.8%, respectively). Treatment-emergent adverse events occurred in similar proportions of patients in the two groups during the extension study (53.5% and 53.8%, respectively). Conclusions: Comparable efficacy and tolerability were observed in patients who switched from RP to its biosimilar CT-P13 for an additional year and in those who had long-term CT-P13 treatment for 2 years.
Introduction: Infliximab is a human–murine chimeric monoclonal antibody to tumour necrosis factor (TNF).1 The introduction of infliximab and other biological drugs into clinical practice has dramatically improved the management of a number of immune-mediated inflammatory diseases, including rheumatoid arthritis (RA).2 However, currently available biologics are associated with high costs,3 4 which has led to restricted treatment access for patients with RA in several regions.5–9 A number of biologics used to treat RA—including originator infliximab (Remicade), hereafter referred to as the reference product (RP)—have reached or are approaching patent expiry in many countries. As a consequence, follow-on biologics (also termed ‘biosimilars’) are being developed for the treatment of RA. A biosimilar can be defined as a ‘biotherapeutic product that is similar in terms of quality, safety, and efficacy to an already licensed reference biotherapeutic product’.10 In order to gain approval, it is usually necessary to show that a biosimilar is highly similar to the RP in physicochemical and biological terms. In addition, clinical studies are generally needed to establish statistical equivalence in pharmacokinetics (PK) and efficacy and to characterise biosimilar safety.11–13 Since the first approval of a biosimilar by the European Medicines Agency (EMA) in 2006, a number of these agents, including granulocyte colony-stimulating factors and erythropoietins, have become available in Europe. 
Indeed, the range of therapeutic areas now covered by approved biosimilars is wide and includes cancer, anaemia, neutropenia and diabetes.14 Data for these EMA-approved biosimilars consistently show that they provide comparable efficacy and safety relative to their RPs.15–23 Recently, CT-P13 (Remsima, Inflectra)—a biosimilar of infliximab RP—became the first monoclonal antibody biosimilar to be approved in Europe for use in all indications held by the infliximab RP.24 All major physicochemical characteristics and in vitro biological activities of CT-P13 and the RP, including affinity for both soluble and transmembrane forms of TNF, are highly comparable.24 25 Approval of CT-P13 was partly based on findings from two 54-week, multinational, randomised, double-blind, parallel-group studies, which compared CT-P13 and RP in ankylosing spondylitis (AS) and RA (Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in AS patients (PLANETAS) and Programme evaLuating the Autoimmune disease iNvEstigational drug cT-p13 in RA patients (PLANETRA)). These studies demonstrated that CT-P13 and RP are highly comparable in terms of PK, efficacy, immunogenicity and safety in both RA and AS.26–29 However, an important unanswered question for prescribing physicians is whether it is possible to switch from RP to CT-P13 in patients with RA without any detrimental effects on safety and efficacy.30 Here, we report the findings from an open-label extension of the PLANETRA study. There were two main aims of the extension study: (1) to investigate the efficacy and safety of switching to CT-P13 in patients previously treated with RP for 54 weeks in PLANETRA (hereafter named the ‘switch group’) and (2) to investigate the longer-term efficacy and safety of extended CT-P13 treatment over 2 years in patients previously treated with CT-P13 in PLANETRA (the ‘maintenance group’).
To facilitate understanding of the data, the results for the maintenance and switch groups are described both for the main (weeks 0–54) and the extension (weeks 54–102) studies. Conclusions: This multinational, open-label extension study demonstrated that in patients with RA receiving MTX, switching from RP to CT-P13 was not associated with any detrimental effects on efficacy, immunogenicity or safety. Additionally, this study demonstrated that CT-P13 remained efficacious and well tolerated over a 2-year treatment period.
Background: NCT01571219; Results. Methods: This open-label extension study recruited patients with RA who had completed the 54-week, randomised, parallel-group study comparing CT-P13 with RP (PLANETRA; NCT01217086). CT-P13 (3 mg/kg) was administered intravenously every 8 weeks from weeks 62 to 102. All patients received concomitant methotrexate. Endpoints included American College of Rheumatology 20% (ACR20) response, ACR50, ACR70, immunogenicity and safety. Data were analysed for patients who received CT-P13 for 102 weeks (maintenance group) and for those who received RP for 54 weeks and then switched to CT-P13 (switch group). Results: Overall, 302 of 455 patients who completed the PLANETRA study enrolled into the extension. Of these, 158 had received CT-P13 (maintenance group) and 144 RP (switch group). Response rates at week 102 for maintenance versus switch groups, respectively, were 71.7% vs 71.8% for ACR20, 48.0% vs 51.4% for ACR50 and 24.3% vs 26.1% for ACR70. The proportion of patients with antidrug antibodies was comparable between groups (week 102: 40.3% vs 44.8%, respectively). Treatment-emergent adverse events occurred in similar proportions of patients in the two groups during the extension study (53.5% and 53.8%, respectively). Conclusions: Comparable efficacy and tolerability were observed in patients who switched from RP to its biosimilar CT-P13 for an additional year and in those who had long-term CT-P13 treatment for 2 years.
13,844
312
18
[ "study", "patients", "extension", "extension study", "week", "main", "main study", "54", "group", "ct" ]
[ "test", "test" ]
[CONTENT] Treatment | DMARDs (biologic) | Rheumatoid Arthritis | Anti-TNF [SUMMARY]
[CONTENT] Treatment | DMARDs (biologic) | Rheumatoid Arthritis | Anti-TNF [SUMMARY]
[CONTENT] Treatment | DMARDs (biologic) | Rheumatoid Arthritis | Anti-TNF [SUMMARY]
[CONTENT] Treatment | DMARDs (biologic) | Rheumatoid Arthritis | Anti-TNF [SUMMARY]
[CONTENT] Treatment | DMARDs (biologic) | Rheumatoid Arthritis | Anti-TNF [SUMMARY]
[CONTENT] Treatment | DMARDs (biologic) | Rheumatoid Arthritis | Anti-TNF [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Antibodies | Antibodies, Monoclonal | Antirheumatic Agents | Arthritis, Rheumatoid | Biosimilar Pharmaceuticals | Drug Substitution | Drug Therapy, Combination | Female | Humans | Infliximab | Male | Methotrexate | Middle Aged | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Antibodies | Antibodies, Monoclonal | Antirheumatic Agents | Arthritis, Rheumatoid | Biosimilar Pharmaceuticals | Drug Substitution | Drug Therapy, Combination | Female | Humans | Infliximab | Male | Methotrexate | Middle Aged | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Antibodies | Antibodies, Monoclonal | Antirheumatic Agents | Arthritis, Rheumatoid | Biosimilar Pharmaceuticals | Drug Substitution | Drug Therapy, Combination | Female | Humans | Infliximab | Male | Methotrexate | Middle Aged | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Antibodies | Antibodies, Monoclonal | Antirheumatic Agents | Arthritis, Rheumatoid | Biosimilar Pharmaceuticals | Drug Substitution | Drug Therapy, Combination | Female | Humans | Infliximab | Male | Methotrexate | Middle Aged | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Antibodies | Antibodies, Monoclonal | Antirheumatic Agents | Arthritis, Rheumatoid | Biosimilar Pharmaceuticals | Drug Substitution | Drug Therapy, Combination | Female | Humans | Infliximab | Male | Methotrexate | Middle Aged | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Antibodies | Antibodies, Monoclonal | Antirheumatic Agents | Arthritis, Rheumatoid | Biosimilar Pharmaceuticals | Drug Substitution | Drug Therapy, Combination | Female | Humans | Infliximab | Male | Methotrexate | Middle Aged | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | main study | 54 | group | ct [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | main study | 54 | group | ct [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | main study | 54 | group | ct [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | main study | 54 | group | ct [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | main study | 54 | group | ct [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | main study | 54 | group | ct [SUMMARY]
[CONTENT] biosimilar | ct | p13 | ct p13 | ra | rp | infliximab | efficacy | safety | biological [SUMMARY]
[CONTENT] study | patients | extension | extension study | included | assessed | efficacy | baseline | endpoints | assessment [SUMMARY]
[CONTENT] study | week | group | patients | extension study | extension | main study | main | 54 | maintenance [SUMMARY]
[CONTENT] study demonstrated | demonstrated | effects efficacy immunogenicity safety | detrimental effects efficacy | demonstrated patients | demonstrated patients ra | demonstrated patients ra receiving | p13 associated detrimental effects | p13 associated detrimental | p13 associated [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | ct | p13 | ct p13 | group [SUMMARY]
[CONTENT] study | patients | extension | extension study | week | main | ct | p13 | ct p13 | group [SUMMARY]
[CONTENT] Results [SUMMARY]
[CONTENT] RA | 54-week | CT-P13 | PLANETRA ||| CT-P13 | 3 mg/kg | 8 weeks | weeks 62 to 102 ||| ||| American College of Rheumatology | 20% | ACR50 ||| 102 weeks | 54 weeks | CT-P13 [SUMMARY]
[CONTENT] 302 | 455 | PLANETRA ||| 158 | CT-P13 | 144 ||| Response | week 102 | 71.7% | 71.8% | 48.0% | 51.4% | ACR50 | 24.3% | 26.1% ||| week 102 | 40.3% | 44.8% ||| two | 53.5% | 53.8% [SUMMARY]
[CONTENT] an additional year | CT | 2 years [SUMMARY]
[CONTENT] Results ||| RA | 54-week | CT-P13 | PLANETRA ||| CT-P13 | 3 mg/kg | 8 weeks | weeks 62 to 102 ||| ||| American College of Rheumatology | 20% | ACR50 ||| 102 weeks | 54 weeks | CT-P13 ||| ||| 302 | 455 | PLANETRA ||| 158 | CT-P13 | 144 ||| Response | week 102 | 71.7% | 71.8% | 48.0% | 51.4% | ACR50 | 24.3% | 26.1% ||| week 102 | 40.3% | 44.8% ||| two | 53.5% | 53.8% ||| an additional year | CT | 2 years [SUMMARY]
[CONTENT] Results ||| RA | 54-week | CT-P13 | PLANETRA ||| CT-P13 | 3 mg/kg | 8 weeks | weeks 62 to 102 ||| ||| American College of Rheumatology | 20% | ACR50 ||| 102 weeks | 54 weeks | CT-P13 ||| ||| 302 | 455 | PLANETRA ||| 158 | CT-P13 | 144 ||| Response | week 102 | 71.7% | 71.8% | 48.0% | 51.4% | ACR50 | 24.3% | 26.1% ||| week 102 | 40.3% | 44.8% ||| two | 53.5% | 53.8% ||| an additional year | CT | 2 years [SUMMARY]
Fibroblast Growth Factor 19 and Fibroblast Growth Factor 21 Regulation in Obese Diabetics, and Non-Alcoholic Fatty Liver Disease after Gastric Bypass.
35277004
Gastric bypass (GB) is an effective treatment for those who are morbidly obese with coexisting type 2 diabetes mellitus (T2DM) or non-alcoholic fatty liver disease (NAFLD). Fibroblast growth factors (FGFs) are involved in the regulation of energy metabolism.
BACKGROUND
We investigated the roles of FGF 19, FGF 21, and total bile acid among patients with morbid obesity and T2DM undergoing GB. A total of 35 patients were enrolled. Plasma FGF 19, FGF 21, and total bile acid levels were measured before surgery (M0), 3 months (M3), and 12 months (M12) after surgery, while the hepatic steatosis index (HSI) was calculated before and after surgery.
METHODS
Obese patients with T2DM after GB presented with increased serum FGF 19 levels (p = 0.024) and decreased total bile acid (p = 0.01) and FGF 21 levels (p = 0.005). DM complete remitters had a higher FGF 19 level at M3 (p = 0.004) compared with DM non-complete remitters. Fatty liver improvers tended to have lower FGF 21 (p = 0.05) compared with non-improvers at M12.
RESULTS
Changes in FGF 19 and FGF 21 play differential roles in DM remission and NAFLD improvement for patients after GB. Early increases in serum FGF 19 levels may predict complete remission of T2DM, while a decline in serum FGF 21 levels may reflect the improvement of NAFLD after GB.
CONCLUSION
[ "Diabetes Mellitus, Type 2", "Fibroblast Growth Factors", "Gastric Bypass", "Humans", "Non-alcoholic Fatty Liver Disease", "Obesity, Morbid" ]
8839096
1. Introduction
Obesity has been a global concern for the past 50 years and the prevalence has increased significantly over the past decade [1]. Obesity represents a major health challenge because it substantially increases the risk of metabolic diseases, including type 2 diabetes mellitus (T2DM) and non-alcoholic fatty liver disease (NAFLD) [1,2]. Body weight reduction is an important approach in reducing insulin resistance and improving NAFLD. Weight loss has been shown to be one of the strongest predictors of improved insulin sensitivity [3]. The magnitude of weight loss is also correlated with the improvement of NAFLD [4]. Surgical intervention is considered an important approach, especially for morbidly obese patients with T2DM, medically resistant arterial hypertension, or comorbidities that are expected to improve with weight loss [5]. Gastric bypass, a widely adopted surgical technique, is one of the most effective methods to combat obesity and remit T2DM [6]. Nevertheless, DM and NAFLD fail to improve in many patients despite these interventions [7]. Furthermore, the mechanisms by which gastric bypass induces weight loss and resolution of T2DM and NAFLD are not well elucidated [8]. Numerous studies have attempted to identify robust biological and clinical predictors of DM remission after bariatric surgery [9,10]. In contrast, studies on NAFLD improvement are relatively scarce and mostly limited to animal models [11]. Blood biomarkers that can serve as surrogates for paired liver biopsies in the evaluation of NAFLD, sparing patients this invasive procedure, are urgently needed [12]. The human fibroblast growth factor (FGF) family contains at least 22 members involved in the biological processes of cell growth, differentiation, development, and metabolism [13].
Aside from most FGFs presenting functions as autocrine or paracrine factors, FGF 19, FGF 21, and FGF 23 lack the conventional FGF heparin-binding domain and possess the ability to elicit endocrine actions, functioning as hormones [13]. Emerging evidence demonstrates the potential role of the FGF family in energy metabolism and in counteracting obesity, especially FGF 19 and FGF 21 [14]. Animal studies have shown that overexpression of FGF 19 or FGF 21 or treatment with recombinant protein enhanced metabolic rates and decreased fat mass, in addition to demonstrating improvements in glucose metabolism, insulin sensitivity, and lipid profiles [15,16,17,18]. Bile acids have a significant relationship with energy balance. Farnesoid-X-receptor (FXR) regulates bile acid homeostasis by regulating the transcription of several enterohepatic genes. The activation of the transcription factor FXR by bile acids provokes the subsequent secretion of FGF 19 [19]. Human FGF 19 is expressed in the ileal enterocytes of the small intestine. FGF 19, secreted into the portal circulation, has a pronounced diurnal rhythm, with peaks occurring 90–120 min after serum bile acid levels after food intake. β-Klotho (KLB) works as a co-receptor and supports endocrine signaling via binding with FGF receptor (FGFr) 4 [20]. The binding of FGF 19 to the hepatocyte cell surface FGFr 4/KLB complex leads to negative feedback and reduces hepatic bile salt synthesis [21]. Mice with impaired function for gut secretion of FGF 19 show significantly impaired weight loss and glucose improvement following bariatric surgery [15]. Collective data also reveal that the serum FGF 19 levels are decreased in patients with T2DM [22]. Apart from FGF 19, FGF 21 is expressed in multiple tissues, including the liver, brown adipose tissue, white adipose tissue, and pancreas [23]. Under normal physiologic status, most circulating FGF 21 originates from the liver [24]. 
Secretion of FGF 21 is provoked significantly by excess food intake, ketogenic, high-carbohydrate diets, or protein restriction [25]. The expression of the FGF 21 gene depends on several pathways. Increased circulating free fatty acids and prolonged fasting promote the transcriptional activation of FGF 21 by the peroxisome proliferator-activated receptor α-mediated pathway [26,27]. A high-glucose diet activates carbohydrate-response element-binding protein (ChREBP) and enhances FGF 21 secretion [28]. Furthermore, general control nonderepressible 2 (GCN2) would be activated when encountering amino acid deficiency, leading to FGF 21 transcription [29]. Metabolic stresses such as obesity, T2DM, or NAFLD are also responsible for inducing the expression and/or signaling of FGF 21 [30]. The FGF 21-dependent signaling of downstream FGFr is extremely complicated and well-debated [31]. Based on evidence from a recent study, FGF 21 stimulates hepatic fatty acid oxidation, ketogenesis, and gluconeogenesis, and suppresses lipogenesis [25]. FGF 21 reduced plasma glucose and triglycerides to a nearly normal level in an animal model [32]. Although recent studies provide clues regarding the dynamics of FGF 19 and FGF 21 in patients receiving bariatric surgery [8,33], the information is limited to sleeve gastrectomy [8]. Moreover, different characteristics between those with and without the improvement of obesity-related comorbidities were also lacking [33]. The main purpose of our study was to evaluate the effect of GB on changes in serum FGF 19 and FGF 21 levels. Furthermore, we also determined the relationship between both blood biomarkers and the improvement of either T2DM or NAFLD.
null
null
3. Results
3.1. Changes in Metabolic Profiles and Laboratory Data after GB
A total of 35 obese patients (12 males and 23 females) with T2DM who underwent GB were enrolled. The baseline average age, body weight, and BMI were 44.8 ± 9.7 years old, 84.8 ± 14.1 kg, and 31.6 ± 4.6 kg/m2, respectively. The duration of T2DM was 5.8 ± 4.9 years. All enrolled patients received GB and were followed up for more than 1 year subsequently. Changes in metabolic profiles and laboratory data are reported in Table 1. Body weight, BMI, waist circumference, and ABSI showed significant improvements 1 year after GB (p < 0.001). Diabetes-related parameters, including fasting blood glucose, HbA1c, c-peptide, and insulin level, also showed a significant decline (p < 0.05). Liver function tests, including ALT, AST, and Alk-p levels, showed no significant change; however, a significant decline in γ-GT level was observed (p = 0.006). As for the lipid profile, HDL-C increased, and triglycerides decreased (p < 0.05), while total cholesterol and LDL levels did not change significantly. Decreases in uric acid levels were also demonstrated (p = 0.019). Variations in serum levels among FGF 19, FGF 21, and total bile acids before the operation and after 3 months and 12 months are presented in Figure 1. All three markers showed a significant trend of change after GB (p = 0.024, 0.005, and 0.010, respectively). The total bile acid level changed from 10.07 ± 4.33 µM at M0 to 11.78 ± 9.32 µM at M3 and to 8.31 ± 4.95 µM at M12 (p = 0.023 between M3 and M12). The FGF 19 level increased from 84.20 ± 61.31 pg/mL at M0 to 141.76 ± 108.70 pg/mL at M3 and to 142.69 ± 100.21 pg/mL at M12 (p = 0.016 between M0 and M3, p = 0.002 between M0 and M12). The FGF 21 level changed from 320.06 ± 238.96 pg/mL at M0 to 416.99 ± 375.86 pg/mL at M3 and to 230.24 ± 123.71 pg/mL at M12 (p = 0.049 between M0 and M3, p = 0.005 between M0 and M12).
Serum levels of FGF 19 and FGF 21 for subjects during the three periods were integrated, and the correlations with indicators of diabetes and NAFLD are shown in Figure 2. FGF 19 had a significant negative correlation with serum c-peptide (r = −0.286, p = 0.006, Figure 2A) and HbA1c level (r = −0.308, p = 0.003, Figure 2B). On the other hand, FGF 21 had a significant positive correlation with serum HbA1c (r = 0.209, p = 0.047, Figure 2C) and total bile acids (r = 0.273, p = 0.005, Figure 2D).
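The r values in the correlation analysis above are presumably Pearson coefficients computed over the pooled measurements from M0, M3, and M12 (the exact method is not stated in this excerpt). A self-contained sketch of the computation, with made-up data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Made-up example: FGF 21 (pg/mL) vs HbA1c (%) pooled across visits.
fgf21 = [320, 417, 230, 280, 450]
hba1c = [8.5, 9.0, 6.2, 7.4, 9.3]
r = pearson_r(fgf21, hba1c)
```

In practice `scipy.stats.pearsonr` would also return the associated p value.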
3.2. Characteristic Differences between DM-CR and DM-Non-CR Subjects
Thirteen of our 35 study participants (37.1%) achieved complete DM remission 12 months after GB. Profiles of both DM-CR and DM-non-CR groups measured before and 12 months after GB are presented in Table 2. Regarding the preoperative condition, the DM-CR group had higher baseline body weight (93.85 ± 16.25 vs. 79.43 ± 9.53 kg, p = 0.010), BMI (43.73 ± 5.00 vs. 29.79 ± 3.27 kg/m2, p = 0.001), waist circumference (108.69 ± 10.22 vs. 100.45 ± 9.13 cm, p = 0.020), c-peptide (3.23 ± 1.01 vs. 2.31 ± 1.18 mg/dL, p = 0.026), and ALT level (56.08 ± 40.45 vs. 33.05 ± 25.90 U/L, p = 0.047) but with lower baseline HbA1c (8.51 ± 1.42 vs. 9.76 ± 1.41%, p = 0.016) compared to the DM-non-CR group. In addition, the DM-CR group had higher HSI (51.30 ± 5.05 vs. 42.69 ± 4.75, p < 0.001). Baseline FGF 19, FGF 21, and total bile acid levels were similar between the two groups. One year after the operation, the HbA1c level of the DM-CR group was 5.42 ± 0.38%, while that of the DM-non-CR group was 7.09 ± 0.99%.
Patients who achieved DM-CR had lower fasting blood glucose (89.70 ± 10.22 vs. 125.18 ± 31.18 mg/dL, p < 0.001), γ-GT (12.40 ± 5.30 vs. 31.45 ± 24.71 U/L, p = 0.003), and triglyceride (74.00 ± 24.10 vs. 119.45 ± 46.79 mg/dL, p = 0.001) levels. No significant difference was found among other measurements regarding metabolic profiles or laboratory data. Neither FGF 19, FGF 21, nor total bile acid level showed any significant difference between DM-CR and DM-non-CR 12 months after GB. Changes in serum levels of FGF 19, FGF 21, and total bile acids at the time before operation and 3 months and 12 months after surgery are presented in Figure 3. After adjustment for age and gender, the DM-CR group had a significantly higher FGF 19 level compared to DM-non-CR at M3 (196.88 ± 153.00 vs. 102.06 ± 34.72 pg/mL, p = 0.004, shown in Figure 3B), and more changes in FGF 19 between M3 and M0 (133.15 ± 144.65 vs. 6.15 ± 86.35 pg/mL, p = 0.001, shown in Figure 3D) compared with the DM-non-CR group.
Patients who achieved DM-CR had lower fasting blood glucose (89.70 ± 10.22 vs. 125.18 ± 31.18 mg/dL, p < 0.001), γ-GT (12.40 ± 5.30 vs. 31.45 ± 24.71 U/L, p = 0.003), and triglyceride (74.00 ± 24.10 vs. 119.45 ± 46.79 mg/dL, p = 0.001) levels. No significant difference was found among other measurements regarding metabolic profiles or laboratory data. Neither FGF 19, FGF 21, nor total bile acid level showed any significant difference between DM-CR and DM-non-CR 12 months after GB. Changes in serum levels of FGF 19, FGF 21, and total bile acids at the time before operation and 3 months and 12 months after surgery are presented in Figure 3. After adjustment for age and gender, the DM-CR group had a significantly higher FGF 19 level compared to DM-non-CR at M3 (196.88 ± 153.00 vs. 102.06 ± 34.72 pg/mL, p = 0.004, shown in Figure 3B), and more changes in FGF 19 between M3 and M0 (133.15 ± 144.65 vs. 6.15 ± 86.35 pg/mL, p = 0.001, shown in Figure 3D) compared with the DM-non-CR group. 3.3. Characteristic Differences between HSI-I and HSI-Non-I Subjects Twenty-five of our 35 enrolled subjects (71.4%) were considered HSI-I based on evaluation at M12. Measurements of HSI-I and HSI-non-I groups before and at 12 months postoperatively are presented in Table 3. Regarding the preoperative condition, patients among the HSI-I group had higher systolic blood pressure (139.60 ± 15.11 vs. 127.30 ± 10.14 mmHg, p = 0.024) and lower fasting blood glucose (161.44 ± 66.11 vs. 214.70 ± 70.31, p = 0.042). Neither baseline FGF 19, FGF 21, nor total bile acid level showed any significant difference between these two groups. One year after GB, fatty liver improvers had an average HSI of 34.49 ± 1.25, while non-improvers had a HSI of 38.70 ± 1.93. HSI-I had lower serum FGF 21 (204.06 ± 122.68 vs. 295.67 ± 104.96, p = 0.046), insulin (3.43 ± 1.78 vs. 10.88 ± 10.38 mU/L, p = 0.0499), and triglyceride levels (90.18 ± 32.01 vs. 138.40 ± 55.67 mg/dL, p = 0.026). 
No differences in FGF 19 and bile acid levels were found. The changes in FGF 19, FGF 21, and total bile acids are presented in Figure 4. The differences in FGF 21 between HSI-I and HSI-non-I showed borderline significance (p = 0.0503) after being adjusted for age and gender (Figure 4C).
5. Conclusions
Obesity represents a major health challenge in modern society. GB is an effective surgical approach for weight loss, leading to an increase in FGF 19 levels and a decrease in total bile acid and FGF 21 levels postoperatively. FGF 19 levels correlated negatively with the severity of T2DM as reflected by c-peptide and HbA1c levels, whereas FGF 21 levels correlated positively with HbA1c and total bile acid levels. An early increase in serum FGF 19 levels may be a predictor of complete remission of T2DM after GB, and a decline in serum FGF 21 levels may reflect the improvement of NAFLD after GB. Our study may shed light on the differential roles of FGF 19 and FGF 21 in human T2DM remission and NAFLD improvement.
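The Pearson correlations reported for FGF 19 and FGF 21 (Section 2.7) can be sketched in a few lines of pure Python. This is a minimal illustration of the method only; the paired values below are made-up placeholders, not the study data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements (NOT the study data): FGF 19 (pg/mL) vs. HbA1c (%)
fgf19 = [84.0, 120.0, 141.0, 160.0, 95.0]
hba1c = [9.8, 8.9, 8.2, 7.5, 9.5]
r = pearson_r(fgf19, hba1c)  # negative, mirroring the inverse relationship reported
```

With real data one would normally also report the p-value, e.g. via `scipy.stats.pearsonr`.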
[ "2. Materials and Methods", "2.2. Surgical Technique of Gastric Bypass (GB)", "2.3. Metabolic Profiles and Blood Sampling", "2.4. Measurement of the Plasma FGF 19, FGF 21, and Serum Total Bile Acid Levels", "2.5. Definition of DM Complete Remission and Insulin Resistance", "2.6. Definition of NAFLD Based on the Hepatic Steatosis Index (HSI)", "2.7. Statistical Analysis", "3.1. Changes in Metabolic Profiles and Laboratory Data after GB", "3.2. Characteristic Differences between DM-CR and DM-Non-CR Subjects", "3.3. Characteristic Differences between HSI-I and HSI-Non-I Subjects" ]
[ " 2.1. Study Design and Patients This was a hospital-based prospective observational study. Obese patients with T2DM receiving GB surgery were enrolled in the study. The study was conducted in accordance with the Declaration of Helsinki and the study protocol was approved by the institutional review board (approval numbers: MSIRB2015017, YM 103127E, 201002037IC, 2011-09-015IC, CHGH-IRB (405)102A-52).\nThe inclusion criteria were as follows: (1) diagnosed with T2DM for more than 6 months with a baseline hemoglobin A1c (HbA1c) level > 8%; (2) receiving regular medical treatment for T2DM, including therapeutic nutritional therapy, oral anti-diabetic agents, or insulin under the evaluation of specialists; (3) body mass index (BMI) ≥ 32.5 kg/m2; (4) willingness to receive additional treatment, including diet control and lifestyle modifications; and (5) willingness to accept follow-up appointments and provision of the informed consent documents.\nWe excluded patients who had the following diagnoses or baseline characteristics: (1) cancer within the previous 5 years; (2) human immunodeficiency virus infection or active pulmonary tuberculosis; (3) unstable cardiovascular condition within the past 6 months; (4) pulmonary embolism; (5) serum creatinine level > 2.0 mg/dL; (6) chronic hepatitis B or C, liver cirrhosis, inflammatory bowel disease, or acromegaly; (7) history of organ transplantation or other bariatric surgery; (8) alcohol use disorder or substance use disorder; or (9) those who were uncooperative.\nAll study patients returned for a monthly follow-up appointment after surgery. The patients received a face-to-face consultation with dietitians and reported a food diary log. Clinical anthropometry and laboratory assessments were performed simultaneously.\n 2.2. Surgical Technique of Gastric Bypass (GB) GB was performed as described in our previous studies [34,35,36]. A five-port technique was used. The dissection begins directly on the lesser curvature, and a 15- to 20-mL gastric pouch is created using multiple EndoGIA-45 staplers (US Surgical Corp). 
Patients are then placed in a neutral position for the creation of the jejunostomy. The jejunum is divided 50 cm distal to the ligament of Treitz. A stapled end-to-side jejunojejunostomy is performed with a 100 cm Roux limb. The enteroenterostomy defect and all mesenteric defects are closed with continuous sutures. The Roux limb is tunneled via a retrocolic, retrogastric path and positioned near the transected gastric pouch. The CEEA-21 anvil (US Surgical Corp) is pulled into the gastric pouch transorally following the previous study [37]. The CEEA stapler is then inserted through the Roux limb to perform the end-to-side gastro-jejunostomy. After the test for air leak, the trocar fascial defects are closed. Two hemovac drains are left in the lesser sac and left subphrenic space separately [38].\n 2.3. 
Metabolic Profiles and Blood Sampling All study subjects received clinical anthropometry and laboratory assessments on the day before surgery as baseline (M0) and at 3 months (M3) and 12 months (M12) after surgery. Metabolic profiles included anthropometry measurements (body weight, waist circumference, body mass index (BMI), and a body shape index (ABSI)), and systolic and diastolic blood pressure. The patients were required to fast overnight before each blood sample collection. Blood samples were obtained from the median cubital vein between 8 and 11 o’clock in the morning and immediately transferred into a chilled glass tube containing disodium EDTA (1 mg/mL) and aprotinin (500 units/mL). The samples were stored on ice during collection. They were further centrifuged, plasma-separated, aliquoted into polypropylene tubes, and stored at −20 °C before receiving analysis.\nLaboratory assessments included serum levels of total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), fasting blood sugar, hemoglobin A1c (HbA1c), c-peptide, insulin, creatinine, uric acid, and liver function test (alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase (Alk-p), and gamma-glutamyl transferase (γ-GT)). All data are reported as means ± standard deviations.\n 2.4. Measurement of the Plasma FGF 19, FGF 21, and Serum Total Bile Acid Levels The fasting blood samples obtained were used to determine the plasma levels of FGF 19 and FGF 21, and serum levels of total bile acids. Enzyme immunoassays for plasma FGF 19 and FGF 21 (R&D Systems, Minneapolis, MN, USA) were performed. Fasting total serum bile acids were assayed using the 3α-hydroxysteroid dehydrogenase method (Fumouze Diagnostics, Levallois-Perret, France). All measurements were performed in duplicate.\n 2.5. 
Definition of DM Complete Remission and Insulin Resistance In our study, DM complete remission is defined as a return to normal measures of glucose metabolism, with HbA1c in the normal range (<6.0% (42 mmol/mol)), as adopted in most previous investigations [39]. We defined patients with HbA1c < 6.0% at 12 months after GB as DM complete remitters.\nInsulin resistance was evaluated using the homeostasis model assessment of insulin resistance (HOMA-IR), defined as fasting plasma glucose level (mmol/L) × fasting immunoreactive insulin level (µU/mL)/22.5. The β-cell function was assessed using the homeostatic model assessment of β-cell function (HOMA-β), with the formula [20 × fasting immunoreactive insulin level (µU/mL)]/[fasting plasma glucose level (mmol/L) − 3.5] [40]. The variation of the area under the curve of plasma FGF 19 levels per minute (ΔAUC of plasma FGF 19) during the OGTT was calculated with the trapezoidal method [41].\n 2.6. 
Definition of NAFLD Based on the Hepatic Steatosis Index (HSI) NAFLD has been considered a continuum from obesity to metabolic syndrome and diabetes [42]. The gold standard for evaluating the magnitude of NAFLD is paired liver biopsy. However, liver biopsy may lead to patient discomfort and several complications, including bleeding, organ trauma, and even patient mortality [43]. Therefore, this approach is less feasible in clinical practice due to patient safety concerns and a high patient refusal rate [12,44]. The hepatic steatosis index (HSI) was adapted for the clinical assessment of NAFLD and has been validated against imaging, including ultrasonography and magnetic resonance imaging [45]. HSI was defined as 8 × (ALT/AST ratio) + BMI (+2, if female; +2, if diabetes mellitus). A cut-off value of HSI > 36.0 may identify NAFLD with a specificity of 92.4% [46]. Based on this concept, our study defines HSI < 36.0 at 12 months after GB as HSI improvers (HSI-I).\n 2.7. Statistical Analysis SAS (version 9.4) was applied for the data analyses. 
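The HSI rule defined in Section 2.6 is simple enough to sketch directly. The helper name and the example patient values below are illustrative assumptions, not taken from the study data.

```python
def hepatic_steatosis_index(alt, ast, bmi, female, diabetes):
    """HSI = 8 * (ALT/AST) + BMI, plus 2 if female and 2 if diabetic [46]."""
    hsi = 8.0 * (alt / ast) + bmi
    if female:
        hsi += 2.0
    if diabetes:
        hsi += 2.0
    return hsi

# Hypothetical patient: ALT 40 U/L, AST 30 U/L, BMI 32 kg/m2, female, with T2DM
hsi = hepatic_steatosis_index(40.0, 30.0, 32.0, female=True, diabetes=True)
has_nafld = hsi > 36.0  # study cut-off; HSI < 36.0 at M12 defines an HSI improver
```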
The paired t-test was used to compare preoperative age, BMI, waist circumference, total body fat percentage, and resting metabolic rate among patients who underwent GB between the preoperative period and 3 and 12 months after surgery.\nPostoperative serum biochemical data (except BMI) were analyzed using an ANCOVA model to adjust for preoperative age and gender. The paired Student t-test was used in comparisons between preoperative data and data at each time point at 3 or 12 months postoperatively.\nA trend analysis (to obtain p values for the trend) was performed using the repeated GLM to explore whether GB had a significantly different effect on postoperative indicators within 3 or 12 months postoperatively.\nThe correlations among different postoperative indicators to explain possible physiological pathways were evaluated using the Pearson correlation. A p-value of < 0.05 was considered statistically significant.", "GB was performed as described in our previous studies [34,35,36]. 
A five-port technique was used. The dissection begins directly on the lesser curvature, and a 15- to 20-mL gastric pouch is created using multiple EndoGIA-45 staplers (US Surgical Corp). Patients are then placed in a neutral position for the creation of the jejunostomy. The jejunum is divided 50 cm distal to the ligament of Treitz. A stapled end-to-side jejunojejunostomy is performed with a 100 cm Roux limb. The enteroenterostomy defect and all mesenteric defects are closed with continuous sutures. The Roux limb is tunneled via a retrocolic, retrogastric path and positioned near the transected gastric pouch. The CEEA-21 anvil (US Surgical Corp) is pulled into the gastric pouch transorally following the previous study [37]. The CEEA stapler is then inserted through the Roux limb to perform the end-to-side gastro-jejunostomy. After the test for air leak, the trocar fascial defects are closed. Two hemovac drains are left in the lesser sac and left subphrenic space separately [38].", "All study subjects received clinical anthropometry and laboratory assessments on the day before surgery as baseline (M0) and at 3 months (M3) and 12 months (M12) after surgery. Metabolic profiles included anthropometry measurements (body weight, waist circumference, body mass index (BMI), and a body shape index (ABSI)), and systolic and diastolic blood pressure. The patients were required to fast overnight before each blood sample collection. Blood samples were obtained from the median cubital vein between 8 and 11 o’clock in the morning and immediately transferred into a chilled glass tube containing disodium EDTA (1 mg/mL) and aprotinin (500 units/mL). The samples were stored on ice during collection. 
They were further centrifuged, plasma-separated, aliquoted into polypropylene tubes, and stored at −20 °C before receiving analysis.\nLaboratory assessments included serum levels of total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), fasting blood sugar, hemoglobin A1c (HbA1c), c-peptide, insulin, creatinine, uric acid, and liver function test (alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase (Alk-p), and gamma-glutamyl transferase (γ-GT)). All data are reported as means ± standard deviations.", "The fasting blood samples obtained were used to determine the plasma levels of FGF 19 and FGF 21, and serum levels of total bile acids. Enzyme immunoassays for plasma FGF 19 and FGF 21 (R&D Systems, Minneapolis, MN, USA) were performed. Fasting total serum bile acids were assayed using the 3α-hydroxysteroid dehydrogenase method (Fumouze Diagnostics, Levallois-Perret, France). All measurements were performed in duplicate.", "In our study, DM complete remission is defined as a return to normal measures of glucose metabolism with HbA1c in the normal range (<6.0% (42 mmol/mol)), which was adopted in most previous investigations [39]. We define HbA1c < 6.0% after 12 months after GB as DM complete remitters.\nInsulin resistance was evaluated by using the homeostasis model assessment of insulin resistance (HOMA-IR), defined as fasting plasma glucose level (mmol/L) × fasting immunoreactive insulin level (µ U/mL)/22.5. The β-cell function was assessed using the homeostatic model assessment of β-cell function (HOMA-β), with the formula of [20 × fasting immunoreactive insulin level (µ U/mL)]/[fasting plasma glucose level (mmol/L) − 3.5] [40]. 
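The two HOMA formulas above (glucose in mmol/L, insulin in µU/mL) can be sketched as small helpers; the fasting values used here are hypothetical.

```python
def homa_ir(glucose_mmol_l, insulin_uu_ml):
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def homa_beta(glucose_mmol_l, insulin_uu_ml):
    """HOMA-beta = 20 x fasting insulin (uU/mL) / (fasting glucose (mmol/L) - 3.5)."""
    return 20.0 * insulin_uu_ml / (glucose_mmol_l - 3.5)

# Hypothetical fasting values: glucose 9.0 mmol/L, insulin 10 uU/mL
ir = homa_ir(9.0, 10.0)      # 9.0 * 10.0 / 22.5 = 4.0
beta = homa_beta(9.0, 10.0)  # 200 / 5.5, about 36.4
```

Note that HOMA-β is undefined at a fasting glucose of exactly 3.5 mmol/L; real code should guard that denominator.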
The variation of the area under the curve of plasma FGF 19 levels per minute (ΔAUC of plasma amylin) during the OGTT was calculated with the trapezoidal method [41].", "NAFLD has been considered as a continuum from obesity to metabolic syndrome and diabetes [42]. The gold standard to evaluate the magnitude of NAFLD depends on paired liver biopsy. However, liver biopsy may lead to patient discomfort and several complications, including bleeding, organ trauma, and even patient mortality [43]. Therefore, this approach is less feasible in clinical practice due to patient safety and a high patient refusal rate [12,44]. The hepatic steatosis index (HSI) was adapted for the clinical assessment of NAFLD and has been validated with imaging, including ultrasonography and magnetic resonance imaging [45]. HSI was defined as 8 × (ALT/AST ratio) + BMI (+2, if female; +2, if diabetes mellitus). A cut-off value of HSI > 36.0 may determine NAFLD with a specificity of 92.4% [46]. Based on this concept, our study defines HSI < 36.0 at 12 months after GB as HSI improvers (HSI-I).", "SAS (version 9.4) was applied for the data analyses. The paired t-test was used to compare preoperative age, BMI, waist circumference, total body fat percentage, and resting metabolic rate among patients who underwent GB between the preoperative period and 3 and 12 months after surgery.\nPostoperative serum biochemical data (except BMI) were analyzed using an ANCOVA model to adjust for preoperative age and gender. 
The paired Student t-test was used in comparisons between preoperative data with data at each time point at 3 or 12 months postoperatively.\nA trend analysis (to obtain p values for the trend) was performed using the repeated GLM to explore whether GB had a significantly different effect on postoperative indicators within 3 or 12 months postoperatively.\nThe correlations among different postoperative indicators to explain possible physiological pathways were evaluated using the Pearson correlation. A p-value of < 0.05 was considered statistically significant.", "A total of 35 obese patients (12 males and 23 females) with T2DM who underwent GB were enrolled. The baseline average age, body weight, and BMI were 44.8 ± 9.7 years old, 84.8 ± 14.1 kg, and 31.6 ± 4.6 kg/m2, respectively. The duration of T2DM was 5.8 ± 4.9 years. All enrolled patients received GB and were followed up for more than 1 year subsequently.\nChanges in metabolic profiles and laboratory data are reported in Table 1. Body weight, BMI, waist circumference, and ABSI showed significant improvements 1 year after GB (p < 0.001). Diabetes-related parameters, including fasting blood glucose, HbA1c, c-peptide, and insulin level, also showed a significant decline (p < 0.05). Liver function tests, including ALT, AST, and Alk-p levels, showed no significant change; however, a significant decline in γ-GT level was observed (p = 0.006). As for the lipid profile, HDL-C increased, and triglycerides decreased (p < 0.05), while total cholesterol and LDL levels did not change significantly. Decreases in uric acid levels were also demonstrated (p = 0.019).\nVariations in serum levels among FGF 19, FGF 21, and total bile acids before the operation and after 3 months and 12 months are presented in Figure 1. All three markers showed a significant trend of change after GB (p = 0.024, 0.005, and 0.010, respectively). 
The total bile acid level changed from 10.07 ± 4.33 µM at M0 to 11.78 ± 9.32 µM at M3 and to 8.31 ± 4.95 µM at M12 (p = 0.023 between M3 and M12). The FGF 19 level increased from 84.20 ± 61.31 pg/mL at M0 to 141.76 ± 108.70 pg/mL at M3 and to 142.69 ± 100.21 pg/mL at M12 (p = 0.016 between M0 and M3, p = 0.002 between M0 and M12). The FGF 21 level changed from 320.06 ± 238.96 pg/mL at M0 to 416.99 ± 375.86 pg/mL at M3 and to 230.24 ± 123.71 pg/mL at M12 (p = 0.049 between M0 and M3, p = 0.005 between M0 and M12).\nSerum levels of FGF 19 and FGF 21 for subjects during the three periods were integrated and the correlations with indicators of diabetes and NAFLD are shown in Figure 2. FGF 19 had a significant negative correlation with serum c-peptide (r = −0.286, p = 0.006, Figure 2A) and HbA1c level (r = −0.308, p = 0.003, Figure 2B). On the other hand, FGF 21 had a significant positive correlation with serum HbA1c (r = 0.209, p = 0.047, Figure 2C) and total bile acids (r = 0.273, p = 0.005, Figure 2D).", "Thirteen of our 35 study participants (37.1%) achieved complete DM remission 12 months after GB. Profiles of both DM-CR and DM-non-CR groups measured before and 12 months after GB are presented in Table 2.\nRegarding the preoperative condition, the DM-CR group had higher baseline body weight (93.85 ± 16.25 vs. 79.43 ± 9.53 kg, p = 0.010), BMI (43.73 ± 5.00 vs. 29.79 ± 3.27 kg/m2, p = 0.001), waist circumference (108.69 ± 10.22 vs. 100.45 ± 9.13 cm, p = 0.020), c-peptide (3.23 ± 1.01 vs. 2.31 ± 1.18 mg/dL, p = 0.026), and ALT level (56.08 ± 40.45 vs. 33.05 ± 25.90 U/L, p = 0.047) but with lower baseline HbA1c (8.51 ± 1.42 vs. 9.76 ± 1.41%, p = 0.016) compared to the DM-non-CR group. In addition, the DM-CR group had higher HSI (51.30 ± 5.05 vs. 42.69 ± 4.75, p < 0.001). 
Baseline FGF 19, FGF 21, and total bile acid levels were similar between the two groups.\nOne year after the operation, the HbA1c level of the DM-CR group was 5.42 ± 0.38%, while that of the DM-non-CR group was 7.09 ± 0.99%. Patients who achieved DM-CR had lower fasting blood glucose (89.70 ± 10.22 vs. 125.18 ± 31.18 mg/dL, p < 0.001), γ-GT (12.40 ± 5.30 vs. 31.45 ± 24.71 U/L, p = 0.003), and triglyceride (74.00 ± 24.10 vs. 119.45 ± 46.79 mg/dL, p = 0.001) levels. No significant difference was found among other measurements regarding metabolic profiles or laboratory data. Neither FGF 19, FGF 21, nor total bile acid level showed any significant difference between DM-CR and DM-non-CR 12 months after GB.\nChanges in serum levels of FGF 19, FGF 21, and total bile acids at the time before operation and 3 months and 12 months after surgery are presented in Figure 3. After adjustment for age and gender, the DM-CR group had a significantly higher FGF 19 level compared to DM-non-CR at M3 (196.88 ± 153.00 vs. 102.06 ± 34.72 pg/mL, p = 0.004, shown in Figure 3B), and more changes in FGF 19 between M3 and M0 (133.15 ± 144.65 vs. 6.15 ± 86.35 pg/mL, p = 0.001, shown in Figure 3D) compared with the DM-non-CR group.", "Twenty-five of our 35 enrolled subjects (71.4%) were considered HSI-I based on evaluation at M12. Measurements of HSI-I and HSI-non-I groups before and at 12 months postoperatively are presented in Table 3.\nRegarding the preoperative condition, patients among the HSI-I group had higher systolic blood pressure (139.60 ± 15.11 vs. 127.30 ± 10.14 mmHg, p = 0.024) and lower fasting blood glucose (161.44 ± 66.11 vs. 214.70 ± 70.31, p = 0.042). Neither baseline FGF 19, FGF 21, nor total bile acid level showed any significant difference between these two groups.\nOne year after GB, fatty liver improvers had an average HSI of 34.49 ± 1.25, while non-improvers had a HSI of 38.70 ± 1.93. HSI-I had lower serum FGF 21 (204.06 ± 122.68 vs. 
295.67 ± 104.96, p = 0.046), insulin (3.43 ± 1.78 vs. 10.88 ± 10.38 mU/L, p = 0.0499), and triglyceride levels (90.18 ± 32.01 vs. 138.40 ± 55.67 mg/dL, p = 0.026). No differences in FGF 19 and bile acid levels were found.\nThe changes in FGF 19, FGF 21, and total bile acids are presented in Figure 4. The differences in FGF 21 between HSI-I and HSI-non-I showed borderline significance (p = 0.0503) after being adjusted for age and gender (Figure 4C)." ]
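The trapezoidal AUC calculation referenced in Section 2.5 [41] can be sketched as follows. The sampling times, FGF 19 values, and the per-minute ΔAUC interpretation (area above baseline divided by total time) are illustrative assumptions, not the study's exact procedure.

```python
def trapezoid_auc(times_min, levels):
    """Area under the curve by the trapezoidal rule over unevenly spaced samples."""
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        auc += dt * (levels[i] + levels[i - 1]) / 2.0
    return auc

# Hypothetical OGTT sampling: FGF 19 (pg/mL) at 0, 30, 60, and 120 min
times = [0.0, 30.0, 60.0, 120.0]
fgf19 = [84.0, 100.0, 130.0, 140.0]
auc = trapezoid_auc(times, fgf19)
# Assumed reading of "variation of the AUC per minute": subtract the baseline
# rectangle, then normalize by the total duration.
delta_auc_per_min = (auc - fgf19[0] * times[-1]) / times[-1]
```

With NumPy available, `numpy.trapz(fgf19, times)` gives the same AUC.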
[ null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Study Design and Patients", "2.2. Surgical Technique of Gastric Bypass (GB)", "2.3. Metabolic Profiles and Blood Sampling", "2.4. Measurement of the Plasma FGF 19, FGF 21, and Serum Total Bile Acid Levels", "2.5. Definition of DM Complete Remission and Insulin Resistance", "2.6. Definition of NAFLD Based on the Hepatic Steatosis Index (HSI)", "2.7. Statistical Analysis", "3. Results", "3.1. Changes in Metabolic Profiles and Laboratory Data after GB", "3.2. Characteristic Differences between DM-CR and DM-Non-CR Subjects", "3.3. Characteristic Differences between HSI-I and HSI-Non-I Subjects", "4. Discussion", "5. Conclusions" ]
[ "Obesity has been a global concern for the past 50 years and its prevalence has increased significantly over the past decade [1]. Obesity represents a major health challenge because it substantially increases the risk of metabolic diseases, including type 2 diabetes mellitus (T2DM) and non-alcoholic fatty liver disease (NAFLD) [1,2].\nBody weight reduction is an important approach in reducing insulin resistance and improving NAFLD. Weight loss has been shown to be one of the strongest predictors of improved insulin sensitivity [3]. The magnitude of weight loss is also correlated with the improvement of NAFLD [4].\nSurgical intervention is considered an important approach, especially for morbidly obese patients with T2DM, medically resistant arterial hypertension, or comorbidities that are expected to improve with weight loss [5]. Gastric bypass, a widely adopted surgical technique, is one of the most effective methods to combat obesity and remit T2DM [6]. Nevertheless, there are many patients whose DM and NAFLD fail to improve despite receiving these interventions [7]. Furthermore, the mechanisms by which gastric bypass induces weight loss and resolves T2DM and NAFLD are not well elucidated [8].\nNumerous studies have attempted to identify robust biological and clinical predictors of DM remission after bariatric surgery [9,10]. On the other hand, studies on NAFLD improvement are relatively scarce and mostly limited to animal studies [11]. Blood biomarkers that could serve as surrogates for paired liver biopsy in the evaluation of NAFLD, and thereby spare patients an invasive procedure, are urgently needed [12].\nThe human fibroblast growth factor (FGF) family contains at least 22 members involved in the biological processes of cell growth, differentiation, development, and metabolism [13]. 
Whereas most FGFs act as autocrine or paracrine factors, FGF 19, FGF 21, and FGF 23 lack the conventional FGF heparin-binding domain and can therefore elicit endocrine actions, functioning as hormones [13]. Emerging evidence demonstrates the potential role of the FGF family, especially FGF 19 and FGF 21, in energy metabolism and in counteracting obesity [14]. Animal studies have shown that overexpression of FGF 19 or FGF 21, or treatment with the recombinant proteins, enhanced metabolic rates and decreased fat mass, in addition to improving glucose metabolism, insulin sensitivity, and lipid profiles [15,16,17,18].\nBile acids have a significant relationship with energy balance. The farnesoid-X-receptor (FXR) regulates bile acid homeostasis by controlling the transcription of several enterohepatic genes. The activation of the transcription factor FXR by bile acids provokes the subsequent secretion of FGF 19 [19]. Human FGF 19 is expressed in the ileal enterocytes of the small intestine. FGF 19, secreted into the portal circulation, has a pronounced diurnal rhythm, with peaks occurring 90–120 min after the postprandial rise in serum bile acid levels. β-Klotho (KLB) works as a co-receptor and supports endocrine signaling via binding with FGF receptor (FGFr) 4 [20]. The binding of FGF 19 to the hepatocyte cell-surface FGFr 4/KLB complex provides negative feedback that reduces hepatic bile salt synthesis [21]. Mice with impaired gut secretion of FGF 19 show significantly impaired weight loss and glucose improvement following bariatric surgery [15]. Collective data also reveal that serum FGF 19 levels are decreased in patients with T2DM [22].\nApart from FGF 19, FGF 21 is expressed in multiple tissues, including the liver, brown adipose tissue, white adipose tissue, and pancreas [23]. Under normal physiologic conditions, most circulating FGF 21 originates from the liver [24]. 
Secretion of FGF 21 is strongly provoked by excess food intake, ketogenic or high-carbohydrate diets, or protein restriction [25]. The expression of the FGF 21 gene depends on several pathways. Increased circulating free fatty acids and prolonged fasting promote the transcriptional activation of FGF 21 via the peroxisome proliferator-activated receptor α-mediated pathway [26,27]. A high-glucose diet activates carbohydrate-response element-binding protein (ChREBP) and enhances FGF 21 secretion [28]. Furthermore, general control nonderepressible 2 (GCN2) is activated by amino acid deficiency, leading to FGF 21 transcription [29]. Metabolic stresses such as obesity, T2DM, or NAFLD are also responsible for inducing the expression and/or signaling of FGF 21 [30]. The FGF 21-dependent signaling of downstream FGFr is complex and still debated [31]. Based on evidence from a recent study, FGF 21 stimulates hepatic fatty acid oxidation, ketogenesis, and gluconeogenesis, and suppresses lipogenesis [25]. FGF 21 reduced plasma glucose and triglycerides to nearly normal levels in an animal model [32].\nAlthough recent studies provide clues regarding the dynamics of FGF 19 and FGF 21 in patients receiving bariatric surgery [8,33], the information is limited to sleeve gastrectomy [8]. Moreover, comparisons between patients with and without improvement of obesity-related comorbidities are also lacking [33].\nThe main purpose of our study was to evaluate the effect of gastric bypass (GB) on changes in serum FGF 19 and FGF 21 levels. Furthermore, we also determined the relationship between both blood biomarkers and the improvement of either T2DM or NAFLD.", " 2.1. Study Design and Patients This was a hospital-based prospective observational study. Obese patients with T2DM receiving GB surgery were enrolled in the study. 
The study was conducted in accordance with the Declaration of Helsinki, and the study protocol was approved by the institutional review boards (approval numbers: MSIRB2015017, YM 103127E, 201002037IC, 2011-09-015IC, CHGH-IRB (405)102A-52).\nThe inclusion criteria were as follows: (1) diagnosed with T2DM for more than 6 months, with a baseline hemoglobin A1c (HbA1c) level > 8%; (2) receiving regular medical treatment for T2DM, including therapeutic nutritional therapy, oral anti-diabetic agents, or insulin under the evaluation of specialists; (3) body mass index (BMI) ≥ 32.5 kg/m2; (4) willingness to receive additional treatment, including diet control and lifestyle modifications; and (5) willingness to attend follow-up appointments and provision of informed consent.\nWe excluded patients who had the following diagnoses or baseline characteristics: (1) cancer within the previous 5 years; (2) human immunodeficiency virus infection or active pulmonary tuberculosis; (3) unstable cardiovascular condition within the past 6 months; (4) pulmonary embolism; (5) serum creatinine levels > 2.0 mg/dL; (6) chronic hepatitis B or C, liver cirrhosis, inflammatory bowel disease, or acromegaly; (7) history of organ transplantation or other bariatric surgery; (8) alcohol or substance use disorders; or (9) inability to cooperate.\nAll study patients returned for a monthly follow-up appointment after surgery. The patients received a face-to-face consultation with dietitians and reported a food diary log. Clinical anthropometry and laboratory assessments were performed simultaneously.\n 2.2. Surgical Technique of Gastric Bypass (GB) GB was performed as described in our previous studies [34,35,36]. A five-port technique was used. The dissection begins directly on the lesser curvature, and a 15- to 20-mL gastric pouch is created using multiple EndoGIA-45 staplers (US Surgical Corp). 
Patients are then placed in a neutral position for the creation of the jejunostomy. The jejunum is divided 50 cm distal to the ligament of Treitz. A stapled end-to-side jejunojejunostomy is performed with a 100 cm Roux limb. The enteroenterostomy defect and all mesenteric defects are closed with continuous sutures. The Roux limb is tunneled via a retrocolic, retrogastric path and positioned near the transected gastric pouch. The CEEA-21 anvil (US Surgical Corp) is pulled into the gastric pouch transorally following the previous study [37]. The CEEA stapler is then inserted through the Roux limb to perform the end-to-side gastro-jejunostomy. After the test for air leak, the trocar fascial defects are closed. Two hemovac drains are left in the lesser sac and left subphrenic space separately [38].\n 2.3. 
Metabolic Profiles and Blood Sampling All study subjects received clinical anthropometry and laboratory assessments on the day before surgery as baseline (M0) and at 3 months (M3) and 12 months (M12) after surgery. Metabolic profiles included anthropometry measurements (body weight, waist circumference, body mass index (BMI), and a body shape index (ABSI)), and systolic and diastolic blood pressure. The patients were required to fast overnight before each blood sample collection. Blood samples were obtained from the median cubital vein between 8 and 11 o’clock in the morning and immediately transferred into a chilled glass tube containing disodium EDTA (1 mg/mL) and aprotinin (500 units/mL). The samples were stored on ice during collection. They were then centrifuged, plasma-separated, aliquoted into polypropylene tubes, and stored at −20 °C until analysis.\nLaboratory assessments included serum levels of total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), fasting blood sugar, hemoglobin A1c (HbA1c), c-peptide, insulin, creatinine, uric acid, and liver function tests (alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase (Alk-p), and gamma-glutamyl transferase (γ-GT)). All data are reported as means ± standard deviations.\n 2.4. Measurement of the Plasma FGF 19, FGF 21, and Serum Total Bile Acid Levels The fasting blood samples obtained were used to determine the plasma levels of FGF 19 and FGF 21, and serum levels of total bile acids. Enzyme immunoassays for plasma FGF 19 and FGF 21 (R&D Systems, Minneapolis, MN, USA) were performed. Fasting total serum bile acids were assayed using the 3α-hydroxysteroid dehydrogenase method (Fumouze Diagnostics, Levallois-Perret, France). All measurements were performed in duplicate.\n 2.5. 
Definition of DM Complete Remission and Insulin Resistance In our study, DM complete remission is defined as a return to normal measures of glucose metabolism, with HbA1c in the normal range (<6.0% (42 mmol/mol)), a definition adopted in most previous investigations [39]. Patients with HbA1c < 6.0% at 12 months after GB were classified as DM complete remitters (DM-CR).\nInsulin resistance was evaluated using the homeostasis model assessment of insulin resistance (HOMA-IR), defined as fasting plasma glucose level (mmol/L) × fasting immunoreactive insulin level (µU/mL)/22.5. β-cell function was assessed using the homeostatic model assessment of β-cell function (HOMA-β), with the formula [20 × fasting immunoreactive insulin level (µU/mL)]/[fasting plasma glucose level (mmol/L) − 3.5] [40]. The variation of the area under the curve of plasma FGF 19 levels per minute (ΔAUC of plasma FGF 19) during the oral glucose tolerance test (OGTT) was calculated with the trapezoidal method [41].\n 2.6. 
Definition of NAFLD Based on the Hepatic Steatosis Index (HSI) NAFLD has been considered a continuum from obesity to metabolic syndrome and diabetes [42]. The gold standard for evaluating the magnitude of NAFLD is paired liver biopsy. However, liver biopsy may lead to patient discomfort and several complications, including bleeding, organ trauma, and even mortality [43]. Therefore, this approach is less feasible in clinical practice due to patient safety concerns and a high patient refusal rate [12,44]. The hepatic steatosis index (HSI) was adopted for the clinical assessment of NAFLD and has been validated against imaging, including ultrasonography and magnetic resonance imaging [45]. HSI was defined as 8 × (ALT/AST ratio) + BMI (+2, if female; +2, if diabetes mellitus). A cut-off value of HSI > 36.0 detects NAFLD with a specificity of 92.4% [46]. Based on this concept, our study defines subjects with HSI < 36.0 at 12 months after GB as HSI improvers (HSI-I).\n 2.7. Statistical Analysis SAS (version 9.4) was applied for the data analyses. 
The paired t-test was used to compare preoperative age, BMI, waist circumference, total body fat percentage, and resting metabolic rate among patients who underwent GB between the preoperative period and 3 and 12 months after surgery.\nPostoperative serum biochemical data (except BMI) were analyzed using an ANCOVA model to adjust for preoperative age and gender. The paired Student t-test was used to compare preoperative data with data at each time point, 3 or 12 months postoperatively.\nA trend analysis (to obtain p values for the trend) was performed using a repeated-measures general linear model (GLM) to explore whether GB had a significantly different effect on postoperative indicators within 3 or 12 months postoperatively.\nThe correlations among different postoperative indicators, examined to explain possible physiological pathways, were evaluated using the Pearson correlation. A p-value of < 0.05 was considered statistically significant.", "This was a hospital-based prospective observational study. 
Obese patients with T2DM receiving GB surgery were enrolled in the study. The study was conducted in accordance with the Declaration of Helsinki, and the study protocol was approved by the institutional review boards (approval numbers: MSIRB2015017, YM 103127E, 201002037IC, 2011-09-015IC, CHGH-IRB (405)102A-52).\nThe inclusion criteria were as follows: (1) diagnosed with T2DM for more than 6 months, with a baseline hemoglobin A1c (HbA1c) level > 8%; (2) receiving regular medical treatment for T2DM, including therapeutic nutritional therapy, oral anti-diabetic agents, or insulin under the evaluation of specialists; (3) body mass index (BMI) ≥ 32.5 kg/m2; (4) willingness to receive additional treatment, including diet control and lifestyle modifications; and (5) willingness to attend follow-up appointments and provision of informed consent.\nWe excluded patients who had the following diagnoses or baseline characteristics: (1) cancer within the previous 5 years; (2) human immunodeficiency virus infection or active pulmonary tuberculosis; (3) unstable cardiovascular condition within the past 6 months; (4) pulmonary embolism; (5) serum creatinine levels > 2.0 mg/dL; (6) chronic hepatitis B or C, liver cirrhosis, inflammatory bowel disease, or acromegaly; (7) history of organ transplantation or other bariatric surgery; (8) alcohol or substance use disorders; or (9) inability to cooperate.\nAll study patients returned for a monthly follow-up appointment after surgery. The patients received a face-to-face consultation with dietitians and reported a food diary log. Clinical anthropometry and laboratory assessments were performed simultaneously.", "GB was performed as described in our previous studies [34,35,36]. A five-port technique was used. The dissection begins directly on the lesser curvature, and a 15- to 20-mL gastric pouch is created using multiple EndoGIA-45 staplers (US Surgical Corp). 
Patients are then placed in a neutral position for the creation of the jejunostomy. The jejunum is divided 50 cm distal to the ligament of Treitz. A stapled end-to-side jejunojejunostomy is performed with a 100 cm Roux limb. The enteroenterostomy defect and all mesenteric defects are closed with continuous sutures. The Roux limb is tunneled via a retrocolic, retrogastric path and positioned near the transected gastric pouch. The CEEA-21 anvil (US Surgical Corp) is pulled into the gastric pouch transorally following the previous study [37]. The CEEA stapler is then inserted through the Roux limb to perform the end-to-side gastro-jejunostomy. After the test for air leak, the trocar fascial defects are closed. Two hemovac drains are left in the lesser sac and left subphrenic space separately [38].", "All study subjects received clinical anthropometry and laboratory assessments on the day before surgery as baseline (M0) and at 3 months (M3) and 12 months (M12) after surgery. Metabolic profiles included anthropometry measurements (body weight, waist circumference, body mass index (BMI), and a body shape index (ABSI)), and systolic and diastolic blood pressure. The patients were required to fast overnight before each blood sample collection. Blood samples were obtained from the median cubital vein between 8 and 11 o’clock in the morning and immediately transferred into a chilled glass tube containing disodium EDTA (1 mg/mL) and aprotinin (500 units/mL). The samples were stored on ice during collection. 
They were then centrifuged, plasma-separated, aliquoted into polypropylene tubes, and stored at −20 °C until analysis.\nLaboratory assessments included serum levels of total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), fasting blood sugar, hemoglobin A1c (HbA1c), c-peptide, insulin, creatinine, uric acid, and liver function tests (alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase (Alk-p), and gamma-glutamyl transferase (γ-GT)). All data are reported as means ± standard deviations.", "The fasting blood samples obtained were used to determine the plasma levels of FGF 19 and FGF 21, and serum levels of total bile acids. Enzyme immunoassays for plasma FGF 19 and FGF 21 (R&D Systems, Minneapolis, MN, USA) were performed. Fasting total serum bile acids were assayed using the 3α-hydroxysteroid dehydrogenase method (Fumouze Diagnostics, Levallois-Perret, France). All measurements were performed in duplicate.", "In our study, DM complete remission is defined as a return to normal measures of glucose metabolism, with HbA1c in the normal range (<6.0% (42 mmol/mol)), a definition adopted in most previous investigations [39]. Patients with HbA1c < 6.0% at 12 months after GB were classified as DM complete remitters.\nInsulin resistance was evaluated using the homeostasis model assessment of insulin resistance (HOMA-IR), defined as fasting plasma glucose level (mmol/L) × fasting immunoreactive insulin level (µU/mL)/22.5. β-cell function was assessed using the homeostatic model assessment of β-cell function (HOMA-β), with the formula [20 × fasting immunoreactive insulin level (µU/mL)]/[fasting plasma glucose level (mmol/L) − 3.5] [40]. 
The variation of the area under the curve of plasma FGF 19 levels per minute (ΔAUC of plasma FGF 19) during the OGTT was calculated with the trapezoidal method [41].", "NAFLD has been considered a continuum from obesity to metabolic syndrome and diabetes [42]. The gold standard for evaluating the magnitude of NAFLD is paired liver biopsy. However, liver biopsy may lead to patient discomfort and several complications, including bleeding, organ trauma, and even mortality [43]. Therefore, this approach is less feasible in clinical practice due to patient safety concerns and a high patient refusal rate [12,44]. The hepatic steatosis index (HSI) was adopted for the clinical assessment of NAFLD and has been validated against imaging, including ultrasonography and magnetic resonance imaging [45]. HSI was defined as 8 × (ALT/AST ratio) + BMI (+2, if female; +2, if diabetes mellitus). A cut-off value of HSI > 36.0 detects NAFLD with a specificity of 92.4% [46]. Based on this concept, our study defines subjects with HSI < 36.0 at 12 months after GB as HSI improvers (HSI-I).", "SAS (version 9.4) was applied for the data analyses. The paired t-test was used to compare preoperative age, BMI, waist circumference, total body fat percentage, and resting metabolic rate among patients who underwent GB between the preoperative period and 3 and 12 months after surgery.\nPostoperative serum biochemical data (except BMI) were analyzed using an ANCOVA model to adjust for preoperative age and gender. 
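As a minimal illustration (the study's analyses were performed in SAS; these Python helpers and their names are hypothetical), the HOMA-IR, HOMA-β, and HSI formulas and the trapezoidal ΔAUC calculation described in the methods can be sketched as:

```python
def homa_ir(glucose_mmol_l, insulin_mu_u_ml):
    # HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5
    return glucose_mmol_l * insulin_mu_u_ml / 22.5


def homa_beta(glucose_mmol_l, insulin_mu_u_ml):
    # HOMA-beta = 20 x fasting insulin (uU/mL) / (fasting glucose (mmol/L) - 3.5)
    return 20 * insulin_mu_u_ml / (glucose_mmol_l - 3.5)


def hsi(alt_u_l, ast_u_l, bmi, female, diabetic):
    # HSI = 8 x (ALT/AST ratio) + BMI, +2 if female, +2 if diabetic;
    # HSI > 36.0 suggests NAFLD with a specificity of 92.4% [46]
    return 8 * alt_u_l / ast_u_l + bmi + (2 if female else 0) + (2 if diabetic else 0)


def delta_auc_per_min(times_min, levels):
    # Trapezoidal AUC of the excursion above the fasting baseline during the
    # OGTT, divided by the total sampling duration in minutes
    baseline = levels[0]
    auc = 0.0
    for t0, t1, y0, y1 in zip(times_min, times_min[1:], levels, levels[1:]):
        auc += (t1 - t0) * ((y0 - baseline) + (y1 - baseline)) / 2.0
    return auc / (times_min[-1] - times_min[0])
```

Under the study's cut-offs, a subject would be counted as a DM complete remitter when HbA1c < 6.0% at M12, and as an HSI improver when the computed HSI is below 36.0 at M12.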
The paired Student t-test was used to compare preoperative data with data at each time point, 3 or 12 months postoperatively.\nA trend analysis (to obtain p values for the trend) was performed using a repeated-measures general linear model (GLM) to explore whether GB had a significantly different effect on postoperative indicators within 3 or 12 months postoperatively.\nThe correlations among different postoperative indicators, examined to explain possible physiological pathways, were evaluated using the Pearson correlation. A p-value of < 0.05 was considered statistically significant.", " 3.1. Changes in Metabolic Profiles and Laboratory Data after GB A total of 35 obese patients (12 males and 23 females) with T2DM who underwent GB were enrolled. The baseline average age, body weight, and BMI were 44.8 ± 9.7 years old, 84.8 ± 14.1 kg, and 31.6 ± 4.6 kg/m2, respectively. The duration of T2DM was 5.8 ± 4.9 years. All enrolled patients received GB and were followed up for more than 1 year subsequently.\nChanges in metabolic profiles and laboratory data are reported in Table 1. Body weight, BMI, waist circumference, and ABSI showed significant improvements 1 year after GB (p < 0.001). Diabetes-related parameters, including fasting blood glucose, HbA1c, c-peptide, and insulin level, also showed a significant decline (p < 0.05). Liver function tests, including ALT, AST, and Alk-p levels, showed no significant change; however, a significant decline in γ-GT level was observed (p = 0.006). As for the lipid profile, HDL-C increased, and triglycerides decreased (p < 0.05), while total cholesterol and LDL levels did not change significantly. Decreases in uric acid levels were also demonstrated (p = 0.019).\nVariations in serum levels of FGF 19, FGF 21, and total bile acids before the operation and after 3 months and 12 months are presented in Figure 1. All three markers showed a significant trend of change after GB (p = 0.024, 0.005, and 0.010, respectively). 
The total bile acid level changed from 10.07 ± 4.33 µM at M0 to 11.78 ± 9.32 µM at M3 and to 8.31 ± 4.95 µM at M12 (p = 0.023 between M3 and M12). The FGF 19 level increased from 84.20 ± 61.31 pg/mL at M0 to 141.76 ± 108.70 pg/mL at M3 and to 142.69 ± 100.21 pg/mL at M12 (p = 0.016 between M0 and M3, p = 0.002 between M0 and M12). The FGF 21 level changed from 320.06 ± 238.96 pg/mL at M0 to 416.99 ± 375.86 pg/mL at M3 and to 230.24 ± 123.71 pg/mL at M12 (p = 0.049 between M0 and M3, p = 0.005 between M0 and M12).\nSerum levels of FGF 19 and FGF 21 for subjects during the three periods were integrated, and the correlations with indicators of diabetes and NAFLD are shown in Figure 2. FGF 19 had significant negative correlations with the serum c-peptide (r = −0.286, p = 0.006, Figure 2A) and HbA1c levels (r = −0.308, p = 0.003, Figure 2B). On the other hand, FGF 21 had significant positive correlations with serum HbA1c (r = 0.209, p = 0.047, Figure 2C) and total bile acids (r = 0.273, p = 0.005, Figure 2D).\n 3.2. Characteristic Differences between DM-CR and DM-Non-CR Subjects Thirteen of our 35 study participants (37.1%) achieved complete DM remission 12 months after GB. Profiles of both DM-CR and DM-non-CR groups measured before and 12 months after GB are presented in Table 2.\nRegarding the preoperative condition, the DM-CR group had higher baseline body weight (93.85 ± 16.25 vs. 79.43 ± 9.53 kg, p = 0.010), BMI (43.73 ± 5.00 vs. 
29.79 ± 3.27 kg/m2, p = 0.001), waist circumference (108.69 ± 10.22 vs. 100.45 ± 9.13 cm, p = 0.020), c-peptide (3.23 ± 1.01 vs. 2.31 ± 1.18 mg/dL, p = 0.026), and ALT level (56.08 ± 40.45 vs. 33.05 ± 25.90 U/L, p = 0.047) but with lower baseline HbA1c (8.51 ± 1.42 vs. 9.76 ± 1.41%, p = 0.016) compared to the DM-non-CR group. In addition, the DM-CR group had higher HSI (51.30 ± 5.05 vs. 42.69 ± 4.75, p < 0.001). Baseline FGF 19, FGF 21, and total bile acid levels were similar between the two groups.\nOne year after the operation, the HbA1c level of the DM-CR group was 5.42 ± 0.38%, while that of the DM-non-CR group was 7.09 ± 0.99%. Patients who achieved DM-CR had lower fasting blood glucose (89.70 ± 10.22 vs. 125.18 ± 31.18 mg/dL, p < 0.001), γ-GT (12.40 ± 5.30 vs. 31.45 ± 24.71 U/L, p = 0.003), and triglyceride (74.00 ± 24.10 vs. 119.45 ± 46.79 mg/dL, p = 0.001) levels. No significant difference was found among other measurements regarding metabolic profiles or laboratory data. Neither FGF 19, FGF 21, nor total bile acid level showed any significant difference between DM-CR and DM-non-CR 12 months after GB.\nChanges in serum levels of FGF 19, FGF 21, and total bile acids at the time before operation and 3 months and 12 months after surgery are presented in Figure 3. After adjustment for age and gender, the DM-CR group had a significantly higher FGF 19 level compared to DM-non-CR at M3 (196.88 ± 153.00 vs. 102.06 ± 34.72 pg/mL, p = 0.004, shown in Figure 3B), and more changes in FGF 19 between M3 and M0 (133.15 ± 144.65 vs. 6.15 ± 86.35 pg/mL, p = 0.001, shown in Figure 3D) compared with the DM-non-CR group.\n 3.3. Characteristic Differences between HSI-I and HSI-Non-I Subjects Twenty-five of our 35 enrolled subjects (71.4%) were considered HSI-I based on evaluation at M12. 
Measurements of HSI-I and HSI-non-I groups before and at 12 months postoperatively are presented in Table 3.\nRegarding the preoperative condition, patients among the HSI-I group had higher systolic blood pressure (139.60 ± 15.11 vs. 127.30 ± 10.14 mmHg, p = 0.024) and lower fasting blood glucose (161.44 ± 66.11 vs. 214.70 ± 70.31, p = 0.042). Neither baseline FGF 19, FGF 21, nor total bile acid level showed any significant difference between these two groups.\nOne year after GB, fatty liver improvers had an average HSI of 34.49 ± 1.25, while non-improvers had a HSI of 38.70 ± 1.93. HSI-I had lower serum FGF 21 (204.06 ± 122.68 vs. 295.67 ± 104.96, p = 0.046), insulin (3.43 ± 1.78 vs. 10.88 ± 10.38 mU/L, p = 0.0499), and triglyceride levels (90.18 ± 32.01 vs. 138.40 ± 55.67 mg/dL, p = 0.026). No differences in FGF 19 and bile acid levels were found.\nThe changes in FGF 19, FGF 21, and total bile acids are presented in Figure 4. The differences in FGF 21 between HSI-I and HSI-non-I showed borderline significance (p = 0.0503) after being adjusted for age and gender (Figure 4C).", "A total of 35 obese patients (12 males and 23 females) with T2DM who underwent GB were enrolled. The baseline average age, body weight, and BMI were 44.8 ± 9.7 years old, 84.8 ± 14.1 kg, and 31.6 ± 4.6 kg/m2, respectively. The duration of T2DM was 5.8 ± 4.9 years. All enrolled patients received GB and were followed up for more than 1 year subsequently.\nChanges in metabolic profiles and laboratory data are reported in Table 1. Body weight, BMI, waist circumference, and ABSI showed significant improvements 1 year after GB (p < 0.001). Diabetes-related parameters, including fasting blood glucose, HbA1c, c-peptide, and insulin level, also showed a significant decline (p < 0.05). Liver function tests, including ALT, AST, and Alk-p levels, showed no significant change; however, a significant decline in γ-GT level was observed (p = 0.006). As for the lipid profile, HDL-C increased, and triglycerides decreased (p < 0.05), while total cholesterol and LDL levels did not change significantly. Decreases in uric acid levels were also demonstrated (p = 0.019).\nVariations in serum levels among FGF 19, FGF 21, and total bile acids before the operation and after 3 months and 12 months are presented in Figure 1. All three markers showed a significant trend of change after GB (p = 0.024, 0.005, and 0.010, respectively). The total bile acid level changed from 10.07 ± 4.33 µM at M0 to 11.78 ± 9.32 µM at M3 and to 8.31 ± 4.95 µM at M12 (p = 0.023 between M3 and M12). The FGF 19 level increased from 84.20 ± 61.31 pg/mL at M0 to 141.76 ± 108.70 pg/mL at M3 and to 142.69 ± 100.21 pg/mL at M12 (p = 0.016 between M0 and M3, p = 0.002 between M0 and M12). 
The FGF 21 level changed from 320.06 ± 238.96 pg/mL at M0 to 416.99 ± 375.86 pg/mL at M3 and to 230.24 ± 123.71 pg/mL at M12 (p = 0.049 between M0 and M3, p = 0.005 between M0 and M12).\nSerum levels of FGF 19 and FGF 21 across the three time points were integrated, and the correlations with indicators of diabetes and NAFLD are shown in Figure 2. FGF 19 had a significant negative correlation with serum c-peptide (r = −0.286, p = 0.006, Figure 2A) and HbA1c level (r = −0.308, p = 0.003, Figure 2B). On the other hand, FGF 21 had a significant positive correlation with serum HbA1c (r = 0.209, p = 0.047, Figure 2C) and total bile acids (r = 0.273, p = 0.005, Figure 2D).", "In our study, 35 obese patients with T2DM showed significant improvements in body weight, BMI, waist circumference, and ABSI, as well as in insulin resistance, c-peptide, and insulin levels after receiving GB. Despite the limited improvement in the lipid profile and liver function, our study also demonstrated GB as a beneficial approach to improving NAFLD, based on the significant improvement of HSI after GB. Our study confirmed GB as an effective strategy to combat morbid obesity and its related comorbidities, such as T2DM and NAFLD.\nSerum FGF 19 concentrations are consistently elevated in obese patients after GB. This finding is consistent with previous studies in patients receiving sleeve gastrectomy, another standard bariatric procedure [8,47]. FGF 19 has a close connection with obesity [47,48] and correlates negatively with BMI in obese patients with DM [22]. A recent meta-analysis demonstrated a negative association between FGF 19 levels and the degree of BMI reduction after bariatric surgery [49], while patients with obesity and DM had significantly lower FGF 19 levels than those without DM [48]. A recent study from Gómez-Ambrosi et al. [33] showed a subsequent increase in serum FGF 19 levels after either diet- or surgery-induced weight loss.\nAnother major purpose of our study was to provide additional information on the relationship between FGF levels and DM remission. Our analysis showed a significant elevation of FGF 19 levels among DM remitters compared with non-remitters 3 months after GB. Early FGF 19 improvements may predict the complete remission of T2DM for obese patients receiving GB. Furthermore, our study pointed out a negative linear correlation between FGF 19 levels and the indicators of DM severity, including HbA1c and c-peptide levels. 
This implies that FGF 19 may also provide predictive value regarding improvements in insulin resistance and remission of T2DM.\nFGF 21 signaling plays a crucial role in the development and progression of NAFLD [50]. Based on animal studies, the overexpression of FGF 21 antagonizes the effect of FGF 15/19 [51]. The elevation of FGF 21 levels in NAFLD patients may result from dysfunctional PPARα signaling [52]. Similar to the presence of insulin resistance in T2DM, “FGF 21 resistance” has been proposed as a key feature of NAFLD [53].\nIn one human study, FGF 21 had a positive correlation with BMI, and the expression of hepatic FGF 21 mRNA was significantly elevated in NAFLD [30]. Interestingly, however, FGF 21 elevation was present only during the stage of simple steatosis, and not after the development of non-alcoholic steatohepatitis or the resolution of steatosis [54]. This may reflect the anti-inflammatory effect of FGF 21 [55].\nOur investigation showed that NAFLD improvers tended to have lower FGF 21 levels at 12 months after GB, with borderline significance. This finding is consistent with the effect of bariatric surgery on reducing hepatic fat, inflammation, and fibrosis [56]. Recent studies have pointed out potential differences between surgical interventions [33,57]. The effect on changes in FGF 21 is more prominent in sleeve gastrectomy compared with GB [33]. Our previous investigation also showed a more prominent change in pancreatic polypeptide hormone after sleeve gastrectomy [57]. The results of our current study agree with this finding, and further comparisons between types of surgery in larger study populations may be required.\nAlthough FGF 19 and FGF 21 display some overlapping functions, our study showed differential roles for the two. FGF 19 has a higher affinity for FGFr4, while FGF 21 is more potent toward FGFr1c. 
The FGFr4 gene is mainly expressed in the liver, whereas the FGFr1c gene has higher expression in adipose tissue [58]. The biological evidence supports the notion that FGF 19 interacts more closely with DM and insulin resistance, while FGF 21 plays a greater role in the spectrum of fatty liver disease. Our study supports the notion that FGF 19 and FGF 21 may have complementary advantages in evaluating these two major comorbidities of obesity, T2DM and NAFLD, respectively.\nThe understanding of these energy-regulating pathways suggests the potential of pharmacological approaches for patients with T2DM and NAFLD. Aldafermin (NGM 282), an engineered FGF 19 analog, demonstrated safety and effectiveness in reducing liver fat content, improving liver fibrosis, and preventing the progression of non-alcoholic steatohepatitis (NASH) in a recent phase 2 randomized controlled trial [59], whereas its ability to improve insulin sensitivity remained inconsistent among studies [60]. Pegbelfermin and efruxifermin are considered the two most promising FGF 21 analogs. In a phase 2 study focusing on morbidly obese T2DM patients, high-dose pegbelfermin (BMS-986036), a PEGylated form of FGF 21, improved the lipid profile (increasing HDL-C and lowering triglycerides), while it had no significant effect on weight loss or glycemic control [61]. Efruxifermin, a fusion protein of the human IgG1 Fc domain linked to a modified human FGF 21, is considered a promising agent for reducing the liver fat fraction and markers of hepatic injury among NASH patients, as reported in a recent phase 2a study [62]. There is still limited evidence focusing on the obese population or on combination or comparison with bariatric surgery.\nOur study has several limitations. First, the sample size of our study is small, so subgroup analyses may miss potential differences in the two biomarkers. 
Second, the current diagnostic criteria for NAFLD depend on histological or imaging studies. Subjects in our study did not receive evaluations for NAFLD during enrollment, and we adopted the HSI as a blood-based surrogate for non-invasive assessment. The HSI has been validated against other non-invasive approaches, including ultrasonography and magnetic resonance imaging [46]. The HSI is also widely adopted in the field of metabolic disorders [63,64].", "Obesity represents a major health challenge in our modern society. GB is an effective surgical approach for weight loss, leading to an increase in FGF 19 levels and a decrease in total bile acid and FGF 21 levels postoperatively. FGF 19 levels had a negative correlation with the severity of T2DM based on c-peptide and HbA1c levels. FGF 21 levels had a positive correlation with HbA1c and total bile acid levels. Early increases in serum FGF 19 levels may be a predictor of complete remission of T2DM after GB. A decline in serum FGF 21 levels may reflect the improvement of NAFLD after GB. Our study may shed light on the differential roles of FGF 19 and FGF 21 in human T2DM remission and NAFLD improvement." ]
[ "obesity", "diabetes mellitus", "FGF 19", "FGF 21", "total bile acid", "non-alcoholic fatty liver disease", "gastric bypass" ]
1. Introduction: Obesity has been a global concern for the past 50 years, and its prevalence has increased significantly over the past decade [1]. Obesity represents a major health challenge because it substantially increases the risk of metabolic diseases, including type 2 diabetes mellitus (T2DM) and non-alcoholic fatty liver disease (NAFLD) [1,2]. Body weight reduction is an important approach to reducing insulin resistance and improving NAFLD. Weight loss has been shown to be one of the strongest predictors of improved insulin sensitivity [3]. The magnitude of weight loss is also correlated with the improvement of NAFLD [4]. Surgical intervention is considered an important approach, especially for morbidly obese patients with T2DM, medically resistant arterial hypertension, or comorbidities that are expected to improve with weight loss [5]. Gastric bypass, a widely adopted surgical technique, is one of the most effective methods to combat obesity and remit T2DM [6]. Nevertheless, there are many patients whose DM and NAFLD fail to improve despite receiving these interventions [7]. Furthermore, the mechanisms by which gastric bypass induces weight loss and the resolution of T2DM and NAFLD are not well elucidated [8]. Numerous studies have attempted to identify robust biological and clinical predictors of DM remission after bariatric surgery [9,10]. On the other hand, studies on the improvement of NAFLD are relatively lacking and mostly limited to animal models [11]. Blood biomarkers that can serve as non-invasive surrogates for paired liver biopsy in the evaluation of NAFLD, sparing patients its discomfort, are urgently needed [12]. The human fibroblast growth factor (FGF) family contains at least 22 members involved in the biological processes of cell growth, differentiation, development, and metabolism [13]. 
While most FGFs function as autocrine or paracrine factors, FGF 19, FGF 21, and FGF 23 lack the conventional FGF heparin-binding domain and can elicit endocrine actions, functioning as hormones [13]. Emerging evidence demonstrates the potential role of the FGF family, especially FGF 19 and FGF 21, in energy metabolism and in counteracting obesity [14]. Animal studies have shown that overexpression of FGF 19 or FGF 21, or treatment with the recombinant protein, enhanced metabolic rates and decreased fat mass, in addition to demonstrating improvements in glucose metabolism, insulin sensitivity, and lipid profiles [15,16,17,18]. Bile acids have a significant relationship with energy balance. The farnesoid-X-receptor (FXR) maintains bile acid homeostasis by regulating the transcription of several enterohepatic genes. The activation of the transcription factor FXR by bile acids provokes the subsequent secretion of FGF 19 [19]. Human FGF 19 is expressed in the ileal enterocytes of the small intestine. FGF 19, secreted into the portal circulation, has a pronounced diurnal rhythm, with peaks occurring 90–120 min after the postprandial rise in serum bile acid levels. β-Klotho (KLB) works as a co-receptor and supports endocrine signaling via binding with FGF receptor (FGFr) 4 [20]. The binding of FGF 19 to the hepatocyte cell surface FGFr 4/KLB complex leads to negative feedback and reduces hepatic bile salt synthesis [21]. Mice with impaired gut secretion of FGF 19 show significantly impaired weight loss and glucose improvement following bariatric surgery [15]. Collective data also reveal that serum FGF 19 levels are decreased in patients with T2DM [22]. Apart from FGF 19, FGF 21 is expressed in multiple tissues, including the liver, brown adipose tissue, white adipose tissue, and pancreas [23]. Under normal physiologic status, most circulating FGF 21 originates from the liver [24]. 
Secretion of FGF 21 is provoked significantly by excess food intake, ketogenic or high-carbohydrate diets, or protein restriction [25]. The expression of the FGF 21 gene depends on several pathways. Increased circulating free fatty acids and prolonged fasting promote the transcriptional activation of FGF 21 by the peroxisome proliferator-activated receptor α-mediated pathway [26,27]. A high-glucose diet activates carbohydrate-response element-binding protein (ChREBP) and enhances FGF 21 secretion [28]. Furthermore, general control nonderepressible 2 (GCN2) is activated by amino acid deficiency, leading to FGF 21 transcription [29]. Metabolic stresses such as obesity, T2DM, or NAFLD are also responsible for inducing the expression and/or signaling of FGF 21 [30]. The FGF 21-dependent signaling of downstream FGFr is extremely complex and much debated [31]. Based on evidence from a recent study, FGF 21 stimulates hepatic fatty acid oxidation, ketogenesis, and gluconeogenesis, and suppresses lipogenesis [25]. FGF 21 reduced plasma glucose and triglycerides to a nearly normal level in an animal model [32]. Although recent studies provide clues regarding the dynamics of FGF 19 and FGF 21 in patients receiving bariatric surgery [8,33], the information is limited to sleeve gastrectomy [8]. Moreover, data on differences between patients with and without improvement of obesity-related comorbidities are also lacking [33]. The main purpose of our study was to evaluate the effect of GB on changes in serum FGF 19 and FGF 21 levels. Furthermore, we also determined the relationship between both blood biomarkers and the improvement of either T2DM or NAFLD. 2. Materials and Methods: 2.1. Study Design and Patients This was a hospital-based prospective observational study. Obese patients with T2DM receiving GB surgery were enrolled in the study. 
The study was conducted in accordance with the Declaration of Helsinki, and the study protocol was approved by the institutional review board (approval numbers: MSIRB2015017, YM 103127E, 201002037IC, 2011-09-015IC, CHGH-IRB (405)102A-52). The inclusion criteria were as follows: (1) diagnosed with T2DM for more than 6 months, with a baseline hemoglobin A1c (HbA1c) level > 8%; (2) receiving regular medical treatment for T2DM, including therapeutic nutritional therapy, oral anti-diabetic agents, or insulin under the evaluation of specialists; (3) body mass index (BMI) ≥ 32.5 kg/m2; (4) willingness to receive additional treatment, including diet control and lifestyle modifications; and (5) willingness to accept follow-up appointments and provision of the informed consent documents. We excluded patients who had any of the following diagnoses or baseline characteristics: (1) cancer within the previous 5 years; (2) human immunodeficiency virus infection or active pulmonary tuberculosis; (3) unstable cardiovascular condition within the past 6 months; (4) pulmonary embolism; (5) serum creatinine level > 2.0 mg/dL; (6) chronic hepatitis B or C, liver cirrhosis, inflammatory bowel disease, or acromegaly; (7) history of organ transplantation or other bariatric surgery; (8) alcohol or substance use disorder; or (9) being uncooperative. All study patients returned for a monthly follow-up appointment after surgery. The patients received a face-to-face consultation with dietitians and reported a food diary log. Clinical anthropometry and laboratory assessments were performed simultaneously. 2.2. Surgical Technique of Gastric Bypass (GB) GB was performed as described in our previous studies [34,35,36]. A five-port technique was used. The dissection begins directly on the lesser curvature, and a 15- to 20-mL gastric pouch is created using multiple EndoGIA-45 staplers (US Surgical Corp). Patients are then placed in a neutral position for the creation of the jejunostomy. The jejunum is divided 50 cm distal to the ligament of Treitz. A stapled end-to-side jejunojejunostomy is performed with a 100 cm Roux limb. The enteroenterostomy defect and all mesenteric defects are closed with continuous sutures. 
The Roux limb is tunneled via a retrocolic, retrogastric path and positioned near the transected gastric pouch. The CEEA-21 anvil (US Surgical Corp) is pulled into the gastric pouch transorally following the previous study [37]. The CEEA stapler is then inserted through the Roux limb to perform the end-to-side gastro-jejunostomy. After the test for air leak, the trocar fascial defects are closed. Two hemovac drains are left in the lesser sac and left subphrenic space separately [38]. 2.3. Metabolic Profiles and Blood Sampling All study subjects received clinical anthropometry and laboratory assessments on the day before surgery as baseline (M0) and at 3 months (M3) and 12 months (M12) after surgery. Metabolic profiles included anthropometry measurements (body weight, waist circumference, body mass index (BMI), and a body shape index (ABSI)), and systolic and diastolic blood pressure. 
The patients were required to fast overnight before each blood sample collection. Blood samples were obtained from the median cubital vein between 8 and 11 o’clock in the morning and immediately transferred into a chilled glass tube containing disodium EDTA (1 mg/mL) and aprotinin (500 units/mL). The samples were stored on ice during collection. They were further centrifuged, plasma-separated, aliquoted into polypropylene tubes, and stored at −20 °C before receiving analysis. Laboratory assessments included serum levels of total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), fasting blood sugar, hemoglobin A1c (HbA1c), c-peptide, insulin, creatinine, uric acid, and liver function tests (alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase (Alk-p), and gamma-glutamyl transferase (γ-GT)). All data are reported as means ± standard deviations. 2.4. Measurement of the Plasma FGF 19, FGF 21, and Serum Total Bile Acid Levels The fasting blood samples obtained were used to determine the plasma levels of FGF 19 and FGF 21, and serum levels of total bile acids. Enzyme immunoassays for plasma FGF 19 and FGF 21 (R&D Systems, Minneapolis, MN, USA) were performed. Fasting total serum bile acids were assayed using the 3α-hydroxysteroid dehydrogenase method (Fumouze Diagnostics, Levallois-Perret, France). All measurements were performed in duplicate. 2.5. Definition of DM Complete Remission and Insulin Resistance In our study, DM complete remission is defined as a return to normal measures of glucose metabolism, with HbA1c in the normal range (<6.0% (42 mmol/mol)), as adopted in most previous investigations [39]. We defined patients with HbA1c < 6.0% at 12 months after GB as DM complete remitters. Insulin resistance was evaluated using the homeostasis model assessment of insulin resistance (HOMA-IR), defined as fasting plasma glucose level (mmol/L) × fasting immunoreactive insulin level (µU/mL)/22.5. 
The β-cell function was assessed using the homeostatic model assessment of β-cell function (HOMA-β), with the formula of [20 × fasting immunoreactive insulin level (µ U/mL)]/[fasting plasma glucose level (mmol/L) − 3.5] [40]. The variation of the area under the curve of plasma FGF 19 levels per minute (ΔAUC of plasma amylin) during the OGTT was calculated with the trapezoidal method [41]. In our study, DM complete remission is defined as a return to normal measures of glucose metabolism with HbA1c in the normal range (<6.0% (42 mmol/mol)), which was adopted in most previous investigations [39]. We define HbA1c < 6.0% after 12 months after GB as DM complete remitters. Insulin resistance was evaluated by using the homeostasis model assessment of insulin resistance (HOMA-IR), defined as fasting plasma glucose level (mmol/L) × fasting immunoreactive insulin level (µ U/mL)/22.5. The β-cell function was assessed using the homeostatic model assessment of β-cell function (HOMA-β), with the formula of [20 × fasting immunoreactive insulin level (µ U/mL)]/[fasting plasma glucose level (mmol/L) − 3.5] [40]. The variation of the area under the curve of plasma FGF 19 levels per minute (ΔAUC of plasma amylin) during the OGTT was calculated with the trapezoidal method [41]. 2.6. Definition of NAFLD Based on the Hepatic Steatosis Index (HSI) NAFLD has been considered as a continuum from obesity to metabolic syndrome and diabetes [42]. The gold standard to evaluate the magnitude of NAFLD depends on paired liver biopsy. However, liver biopsy may lead to patient discomfort and several complications, including bleeding, organ trauma, and even patient mortality [43]. Therefore, this approach is less feasible in clinical practice due to patient safety and a high patient refusal rate [12,44]. The hepatic steatosis index (HSI) was adapted for the clinical assessment of NAFLD and has been validated with imaging, including ultrasonography and magnetic resonance imaging [45]. 
HSI was defined as 8 × (ALT/AST ratio) + BMI (+2, if female; +2, if diabetes mellitus). A cut-off value of HSI > 36.0 may determine NAFLD with a specificity of 92.4% [46]. Based on this concept, our study defines HSI < 36.0 at 12 months after GB as HSI improvers (HSI-I). NAFLD has been considered as a continuum from obesity to metabolic syndrome and diabetes [42]. The gold standard to evaluate the magnitude of NAFLD depends on paired liver biopsy. However, liver biopsy may lead to patient discomfort and several complications, including bleeding, organ trauma, and even patient mortality [43]. Therefore, this approach is less feasible in clinical practice due to patient safety and a high patient refusal rate [12,44]. The hepatic steatosis index (HSI) was adapted for the clinical assessment of NAFLD and has been validated with imaging, including ultrasonography and magnetic resonance imaging [45]. HSI was defined as 8 × (ALT/AST ratio) + BMI (+2, if female; +2, if diabetes mellitus). A cut-off value of HSI > 36.0 may determine NAFLD with a specificity of 92.4% [46]. Based on this concept, our study defines HSI < 36.0 at 12 months after GB as HSI improvers (HSI-I). 2.7. Statistical Analysis SAS (version 9.4) was applied for the data analyses. The paired t-test was used to compare preoperative age, BMI, waist circumference, total body fat percentage, and resting metabolic rate among patients who underwent GB between the preoperative period and 3 and 12 months after surgery. Postoperative serum biochemical data (except BMI) were analyzed using an ANCOVA model to adjust for preoperative age and gender. The paired Student t-test was used in comparisons between preoperative data with data at each time point at 3 or 12 months postoperatively. A trend analysis (to obtain p values for the trend) was performed using the repeated GLM to explore whether GB had a significantly different effect on postoperative indicators within 3 or 12 months postoperatively. 
2.1. Study Design and Patients: This was a hospital-based prospective observational study. Obese patients with T2DM undergoing GB surgery were enrolled. The study was conducted in accordance with the Declaration of Helsinki, and the study protocol was approved by the institutional review board (approval numbers: MSIRB2015017, YM 103127E, 201002037IC, 2011-09-015IC, CHGH-IRB (405)102A-52).
The inclusion criteria were as follows: (1) a diagnosis of T2DM for more than 6 months with a baseline hemoglobin A1c (HbA1c) level > 8%; (2) regular medical treatment for T2DM, including therapeutic nutritional therapy, oral anti-diabetic agents, or insulin under the evaluation of specialists; (3) body mass index (BMI) ≥ 32.5 kg/m2; (4) willingness to receive additional treatment, including diet control and lifestyle modifications; and (5) willingness to attend follow-up appointments and provision of informed consent. We excluded patients with the following diagnoses or baseline characteristics: (1) cancer within the previous 5 years; (2) human immunodeficiency virus infection or active pulmonary tuberculosis; (3) an unstable cardiovascular condition within the past 6 months; (4) pulmonary embolism; (5) serum creatinine levels > 2.0 mg/dL; (6) chronic hepatitis B or C, liver cirrhosis, inflammatory bowel disease, or acromegaly; (7) a history of organ transplantation or other bariatric surgery; (8) alcohol or substance use disorder; or (9) being uncooperative. All study patients returned for monthly follow-up appointments after surgery, received face-to-face consultations with dietitians, and reported a food diary log. Clinical anthropometry and laboratory assessments were performed at the same visits. 2.2. Surgical Technique of Gastric Bypass (GB): GB was performed as described in our previous studies [34,35,36]. A five-port technique was used. The dissection begins directly on the lesser curvature, and a 15- to 20-mL gastric pouch is created using multiple EndoGIA-45 staplers (US Surgical Corp). Patients are then placed in a neutral position for the creation of the jejunojejunostomy. The jejunum is divided 50 cm distal to the ligament of Treitz. A stapled end-to-side jejunojejunostomy is performed with a 100 cm Roux limb. The enteroenterostomy defect and all mesenteric defects are closed with continuous sutures.
The Roux limb is tunneled via a retrocolic, retrogastric path and positioned near the transected gastric pouch. The CEEA-21 anvil (US Surgical Corp) is pulled into the gastric pouch transorally, as described in a previous study [37]. The CEEA stapler is then inserted through the Roux limb to perform the end-to-side gastro-jejunostomy. After an air-leak test, the trocar fascial defects are closed. Two hemovac drains are placed separately in the lesser sac and the left subphrenic space [38]. 2.3. Metabolic Profiles and Blood Sampling: All study subjects received clinical anthropometry and laboratory assessments on the day before surgery as baseline (M0) and at 3 months (M3) and 12 months (M12) after surgery. Metabolic profiles included anthropometry measurements (body weight, waist circumference, BMI, and a body shape index (ABSI)) and systolic and diastolic blood pressure. The patients were required to fast overnight before each blood sample collection. Blood samples were obtained from the median cubital vein between 8 and 11 a.m. and immediately transferred into a chilled glass tube containing disodium EDTA (1 mg/mL) and aprotinin (500 units/mL). The samples were stored on ice during collection and then centrifuged; the plasma was separated, aliquoted into polypropylene tubes, and stored at −20 °C until analysis. Laboratory assessments included serum levels of total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), fasting blood sugar, HbA1c, c-peptide, insulin, creatinine, uric acid, and liver function tests (alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase (Alk-p), and gamma-glutamyl transferase (γ-GT)). All data are reported as means ± standard deviations. 2.4.
Measurement of the Plasma FGF 19, FGF 21, and Serum Total Bile Acid Levels: The fasting blood samples were used to determine the plasma levels of FGF 19 and FGF 21 and the serum levels of total bile acids. Plasma FGF 19 and FGF 21 were measured by enzyme immunoassay (R&D Systems, Minneapolis, MN, USA). Fasting total serum bile acids were assayed using the 3α-hydroxysteroid dehydrogenase method (Fumouze Diagnostics, Levallois-Perret, France). All measurements were performed in duplicate. 2.5. Definition of DM Complete Remission and Insulin Resistance: In our study, complete DM remission was defined as a return to normal measures of glucose metabolism, with HbA1c in the normal range (<6.0% (42 mmol/mol)), a criterion adopted in most previous investigations [39]. Patients with HbA1c < 6.0% at 12 months after GB were classified as complete DM remitters (DM-CR). Insulin resistance was evaluated using the homeostasis model assessment of insulin resistance (HOMA-IR), defined as fasting plasma glucose level (mmol/L) × fasting immunoreactive insulin level (µU/mL)/22.5. β-cell function was assessed using the homeostatic model assessment of β-cell function (HOMA-β), calculated as [20 × fasting immunoreactive insulin level (µU/mL)]/[fasting plasma glucose level (mmol/L) − 3.5] [40]. The variation of the area under the curve of plasma FGF 19 levels per minute (ΔAUC of plasma FGF 19) during the oral glucose tolerance test (OGTT) was calculated with the trapezoidal method [41]. 2.6. Definition of NAFLD Based on the Hepatic Steatosis Index (HSI): NAFLD is considered part of a continuum from obesity to metabolic syndrome and diabetes [42]. The gold standard for evaluating the severity of NAFLD is paired liver biopsy. However, liver biopsy may cause patient discomfort and several complications, including bleeding, organ trauma, and even mortality [43]. This approach is therefore less feasible in clinical practice because of patient safety concerns and a high refusal rate [12,44].
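The HOMA formulas and the trapezoidal AUC described in Section 2.5 can be sketched in Python; the function names and example values below are illustrative only and are not part of the study's analysis code:

```python
def homa_ir(glucose_mmol_l, insulin_uu_ml):
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def homa_beta(glucose_mmol_l, insulin_uu_ml):
    """HOMA-beta = 20 x fasting insulin (uU/mL) / (fasting glucose (mmol/L) - 3.5)."""
    return 20 * insulin_uu_ml / (glucose_mmol_l - 3.5)

def auc_trapezoid(times_min, levels):
    """Area under the curve by the trapezoidal rule (sampling times in minutes)."""
    area = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        area += dt * (levels[i] + levels[i - 1]) / 2
    return area

# Illustrative values: fasting glucose 5.0 mmol/L, fasting insulin 9.0 uU/mL
print(homa_ir(5.0, 9.0))    # 2.0
print(homa_beta(5.0, 9.0))  # 120.0
```

The ΔAUC reported in the text would then be the difference between such AUC values computed at different time points.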
The hepatic steatosis index (HSI) was adopted for the clinical assessment of NAFLD and has been validated against imaging, including ultrasonography and magnetic resonance imaging [45]. HSI was defined as 8 × (ALT/AST ratio) + BMI (+2 if female; +2 if diabetes mellitus). A cut-off value of HSI > 36.0 identifies NAFLD with a specificity of 92.4% [46]. Based on this concept, our study defines patients with HSI < 36.0 at 12 months after GB as HSI improvers (HSI-I). 2.7. Statistical Analysis: SAS (version 9.4) was used for the data analyses. The paired t-test was used to compare preoperative age, BMI, waist circumference, total body fat percentage, and resting metabolic rate with the corresponding values at 3 and 12 months after surgery. Postoperative serum biochemical data (except BMI) were analyzed using an ANCOVA model adjusted for preoperative age and gender. A trend analysis was performed using a repeated-measures general linear model (GLM) to obtain p values for trend and to explore whether GB had a significantly different effect on postoperative indicators within 3 or 12 months postoperatively. Correlations among postoperative indicators, examined to explore possible physiological pathways, were evaluated using the Pearson correlation. A p-value of <0.05 was considered statistically significant. 3. Results: 3.1. Changes in Metabolic Profiles and Laboratory Data after GB: A total of 35 obese patients (12 males and 23 females) with T2DM who underwent GB were enrolled. The baseline average age, body weight, and BMI were 44.8 ± 9.7 years, 84.8 ± 14.1 kg, and 31.6 ± 4.6 kg/m2, respectively. The duration of T2DM was 5.8 ± 4.9 years. All enrolled patients received GB and were followed up for more than 1 year. Changes in metabolic profiles and laboratory data are reported in Table 1.
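As a rough illustration of the comparisons described in Section 2.7 (the study itself ran these analyses in SAS 9.4), a Python analogue with hypothetical paired values might look like:

```python
from scipy import stats

# Hypothetical paired BMI values at baseline (M0) and 12 months (M12);
# these numbers are made up for illustration, not taken from the study.
bmi_m0 = [34.1, 31.8, 36.2, 30.5, 33.0]
bmi_m12 = [27.9, 26.4, 29.8, 25.1, 27.5]

# Paired t-test: preoperative vs. postoperative values in the same patients
t_stat, p_value = stats.ttest_rel(bmi_m0, bmi_m12)

# Pearson correlation between two postoperative indicators (toy values)
hba1c = [5.4, 6.1, 5.8, 7.0, 6.3]
fgf21 = [210.0, 305.0, 250.0, 390.0, 330.0]
r, p_corr = stats.pearsonr(hba1c, fgf21)

print(p_value < 0.05)  # True: BMI fell significantly in this toy data
```

The ANCOVA and repeated-measures GLM steps would be handled analogously (e.g., with statsmodels), with preoperative age and gender entered as covariates.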
Body weight, BMI, waist circumference, and ABSI showed significant improvements 1 year after GB (p < 0.001). Diabetes-related parameters, including fasting blood glucose, HbA1c, c-peptide, and insulin levels, also showed a significant decline (p < 0.05). Liver function tests, including ALT, AST, and Alk-p levels, showed no significant change; however, a significant decline in γ-GT level was observed (p = 0.006). As for the lipid profile, HDL-C increased and triglycerides decreased (p < 0.05), while total cholesterol and LDL levels did not change significantly. A decrease in uric acid levels was also observed (p = 0.019). Changes in the levels of FGF 19, FGF 21, and total bile acids before the operation and at 3 and 12 months after surgery are presented in Figure 1. All three markers showed a significant trend of change after GB (p = 0.024, 0.005, and 0.010, respectively). The total bile acid level changed from 10.07 ± 4.33 µM at M0 to 11.78 ± 9.32 µM at M3 and to 8.31 ± 4.95 µM at M12 (p = 0.023 between M3 and M12). The FGF 19 level increased from 84.20 ± 61.31 pg/mL at M0 to 141.76 ± 108.70 pg/mL at M3 and to 142.69 ± 100.21 pg/mL at M12 (p = 0.016 between M0 and M3, p = 0.002 between M0 and M12). The FGF 21 level changed from 320.06 ± 238.96 pg/mL at M0 to 416.99 ± 375.86 pg/mL at M3 and to 230.24 ± 123.71 pg/mL at M12 (p = 0.049 between M0 and M3, p = 0.005 between M0 and M12). Levels of FGF 19 and FGF 21 across the three time points were pooled, and their correlations with indicators of diabetes and NAFLD are shown in Figure 2. FGF 19 had a significant negative correlation with serum c-peptide (r = −0.286, p = 0.006, Figure 2A) and HbA1c level (r = −0.308, p = 0.003, Figure 2B). On the other hand, FGF 21 had a significant positive correlation with serum HbA1c (r = 0.209, p = 0.047, Figure 2C) and total bile acids (r = 0.273, p = 0.005, Figure 2D).
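The HSI rule from Section 2.6 (HSI = 8 × (ALT/AST) + BMI, +2 if female, +2 if diabetic; NAFLD suggested when HSI > 36.0) translates directly into code; the input values below are illustrative only:

```python
def hepatic_steatosis_index(alt_u_l, ast_u_l, bmi, female, diabetic):
    """HSI = 8 * (ALT/AST) + BMI, +2 if female, +2 if diabetes mellitus."""
    hsi = 8 * (alt_u_l / ast_u_l) + bmi
    if female:
        hsi += 2
    if diabetic:
        hsi += 2
    return hsi

def is_hsi_improver(hsi_at_m12):
    """Study definition of HSI-I: HSI < 36.0 at 12 months after GB."""
    return hsi_at_m12 < 36.0

# Illustrative patient: ALT 40 U/L, AST 32 U/L, BMI 31.6, female, diabetic
hsi = hepatic_steatosis_index(40, 32, 31.6, female=True, diabetic=True)
print(round(hsi, 1))         # 45.6 -> above the 36.0 NAFLD cut-off
print(is_hsi_improver(hsi))  # False
```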
3.2. Characteristic Differences between DM-CR and DM-Non-CR Subjects: Thirteen of our 35 study participants (37.1%) achieved complete DM remission 12 months after GB. Profiles of the DM-CR and DM-non-CR groups measured before and 12 months after GB are presented in Table 2. Preoperatively, the DM-CR group had higher baseline body weight (93.85 ± 16.25 vs. 79.43 ± 9.53 kg, p = 0.010), BMI (43.73 ± 5.00 vs. 29.79 ± 3.27 kg/m2, p = 0.001), waist circumference (108.69 ± 10.22 vs. 100.45 ± 9.13 cm, p = 0.020), c-peptide (3.23 ± 1.01 vs. 2.31 ± 1.18 mg/dL, p = 0.026), and ALT levels (56.08 ± 40.45 vs. 33.05 ± 25.90 U/L, p = 0.047) but lower baseline HbA1c (8.51 ± 1.42 vs. 9.76 ± 1.41%, p = 0.016) compared with the DM-non-CR group. In addition, the DM-CR group had a higher HSI (51.30 ± 5.05 vs. 42.69 ± 4.75, p < 0.001). Baseline FGF 19, FGF 21, and total bile acid levels were similar between the two groups. One year after the operation, the HbA1c level of the DM-CR group was 5.42 ± 0.38%, while that of the DM-non-CR group was 7.09 ± 0.99%. Patients who achieved DM-CR had lower fasting blood glucose (89.70 ± 10.22 vs. 125.18 ± 31.18 mg/dL, p < 0.001), γ-GT (12.40 ± 5.30 vs. 31.45 ± 24.71 U/L, p = 0.003), and triglyceride (74.00 ± 24.10 vs. 119.45 ± 46.79 mg/dL, p = 0.001) levels. No significant differences were found among the other metabolic profiles or laboratory data.
Neither FGF 19, FGF 21, nor total bile acid levels showed any significant difference between the DM-CR and DM-non-CR groups 12 months after GB. Changes in the levels of FGF 19, FGF 21, and total bile acids before the operation and at 3 and 12 months after surgery are presented in Figure 3. After adjustment for age and gender, the DM-CR group had a significantly higher FGF 19 level than the DM-non-CR group at M3 (196.88 ± 153.00 vs. 102.06 ± 34.72 pg/mL, p = 0.004, Figure 3B) and a greater change in FGF 19 between M0 and M3 (133.15 ± 144.65 vs. 6.15 ± 86.35 pg/mL, p = 0.001, Figure 3D).
3.3. Characteristic Differences between HSI-I and HSI-Non-I Subjects: Twenty-five of our 35 enrolled subjects (71.4%) were considered HSI-I based on evaluation at M12. Measurements of the HSI-I and HSI-non-I groups before and at 12 months postoperatively are presented in Table 3. Preoperatively, patients in the HSI-I group had higher systolic blood pressure (139.60 ± 15.11 vs. 127.30 ± 10.14 mmHg, p = 0.024) and lower fasting blood glucose (161.44 ± 66.11 vs. 214.70 ± 70.31 mg/dL, p = 0.042). Neither baseline FGF 19, FGF 21, nor total bile acid levels showed any significant difference between these two groups. One year after GB, fatty liver improvers had an average HSI of 34.49 ± 1.25, while non-improvers had an HSI of 38.70 ± 1.93. The HSI-I group had lower serum FGF 21 (204.06 ± 122.68 vs. 295.67 ± 104.96 pg/mL, p = 0.046), insulin (3.43 ± 1.78 vs. 10.88 ± 10.38 mU/L, p = 0.0499), and triglyceride levels (90.18 ± 32.01 vs. 138.40 ± 55.67 mg/dL, p = 0.026). No differences in FGF 19 or bile acid levels were found. The changes in FGF 19, FGF 21, and total bile acids are presented in Figure 4. The difference in FGF 21 between HSI-I and HSI-non-I showed borderline significance (p = 0.0503) after adjustment for age and gender (Figure 4C).
4. Discussion: In our study, 35 obese patients with T2DM showed significant improvements in body weight, BMI, waist circumference, ABSI, insulin resistance, c-peptide, and insulin levels after receiving GB. Despite limited improvements in the lipid profile and liver function, our study also demonstrated that GB is a beneficial approach for improving NAFLD, based on the significant improvement in HSI after surgery. Our study confirmed GB as an effective strategy against morbid obesity and its related comorbidities, such as T2DM and NAFLD. Serum FGF 19 concentrations were consistently elevated in obese patients after GB. This finding is consistent with previous studies of patients receiving sleeve gastrectomy, another standard bariatric procedure [8,47]. FGF 19 has a close connection with obesity [47,48] and correlates negatively with BMI in obese patients with DM [22].
A recent meta-analysis demonstrated a negative association between FGF 19 levels and the degree of BMI reduction after bariatric surgery [49], while obese patients with DM had significantly lower FGF 19 levels than those without DM [48]. A recent study from Gómez-Ambrosi et al. [33] showed a subsequent increase in serum FGF 19 levels after either diet- or surgery-induced weight loss. Another major purpose of our study was to provide additional information on the relationship between FGF levels and DM remission. Our analysis showed a significant elevation of FGF 19 levels among DM remitters compared with non-remitters 3 months after GB. Early increases in FGF 19 may therefore predict the complete remission of T2DM in obese patients receiving GB. Furthermore, our study identified a negative linear correlation between FGF 19 levels and indicators of DM severity, including HbA1c and c-peptide levels. This implies that FGF 19 may also have predictive value regarding improvements in insulin resistance and remission of T2DM. FGF 21 signaling plays a crucial role in the development and progression of NAFLD [50]. In animal studies, the overexpression of FGF 21 antagonizes the effect of FGF 15/19 [51]. The elevation of FGF 21 levels in NAFLD patients may result from dysfunctional PPARα signaling [52]. Similar to insulin resistance in T2DM, “FGF 21 resistance” has been proposed as a key feature of NAFLD [53]. In one human study, FGF 21 correlated positively with BMI, and the expression of hepatic FGF 21 mRNA was significantly elevated in NAFLD [30]. Interestingly, however, FGF 21 elevation was present only during the stage of simple steatosis, and not after the development of non-alcoholic steatohepatitis or the resolution of steatosis [54]. This may reflect the anti-inflammatory effect of FGF 21 [55].
Our investigation showed that NAFLD improvers tended to have lower FGF 21 levels at 12 months after GB, with borderline significance. This finding is consistent with the effect of bariatric surgery on reducing hepatic fat, inflammation, and fibrosis [56]. Recent studies have pointed out potential differences between surgical interventions [33,57]. The effect on changes in FGF 21 is more prominent in sleeve gastrectomy compared with GB [33]. Our previous investigation also showed a more prominent change in pancreatic polypeptide hormone after sleeve gastrectomy [57]. The results from our current study agree with this finding; further comparisons between surgery types in larger study populations may be required. Despite FGF 19 and FGF 21 displaying some overlapping functions, our study showed differential roles of FGF 19 and FGF 21. FGF 19 has a higher affinity for FGFr4, while FGF 21 is more potent toward FGFr1c. The FGFr4 gene is mainly expressed in the liver, whereas the FGFr1c gene has higher expression in adipose tissue [58]. The biological evidence supports the fact that FGF 19 tends to have a closer interaction with DM and insulin resistance, while FGF 21 plays a greater role in the spectrum of fatty liver disease. Our study supports the notion that FGF 19 and FGF 21 may offer complementary advantages in evaluating T2DM and NAFLD, respectively, the two major comorbidities of obesity. The understanding of these energy-regulating pathways suggests the potential of pharmacological approaches for patients with T2DM and NAFLD. Aldafermin (NGM 282), an engineered FGF 19 analog, demonstrated its safety and effectiveness in reducing liver fat content, improving liver fibrosis, and preventing the progression of non-alcoholic steatohepatitis (NASH) in a recent phase 2 randomized controlled trial [59], whereas its ability to improve insulin sensitivity remained inconsistent among studies [60]. 
Pegbelfermin and efruxifermin are considered the two most promising FGF 21 analogs. In a phase 2 study focusing on morbidly obese T2DM patients, high-dose pegbelfermin (BMS-986036), a PEGylated form of FGF 21, improved the lipid profile (increasing HDL-C and lowering triglycerides) but had no significant effect on weight loss or glycemic control [61]. Efruxifermin, a fusion protein of a human IgG1 Fc domain linked to a modified human FGF 21, is considered a promising agent for reducing the liver fat fraction and markers of hepatic injury in NASH patients, as reported in a recent phase 2a study [62]. Evidence focusing on the obese population, or on combining or comparing these agents with bariatric surgery, remains limited. Our study has several limitations. First, the sample size of our study is small and may have missed potential differences in the two biomarkers in subgroup analyses. Second, the current diagnostic criteria for NAFLD depend on histological or imaging studies. Subjects in our study did not receive NAFLD evaluations at enrollment, so we adopted the HSI as a blood-based surrogate for non-invasive assessment. The HSI has been validated against other non-invasive approaches, including ultrasonography and magnetic resonance imaging [46]. HSI is also widely adopted in the field of metabolic disorders [63,64]. 5. Conclusions: Obesity represents a major health challenge in our modern society. GB is an effective surgical approach for weight loss, leading to an increase in FGF 19 levels and a decrease in total bile acids and FGF 21 levels postoperatively. FGF 19 levels had a negative correlation with the severity of T2DM based on c-peptide and HbA1c levels. FGF 21 levels had a positive correlation with c-peptide and total bile acid levels. Early increases in serum FGF 19 levels may be a predictor of complete remission of T2DM after GB. 
A decline in serum FGF 21 levels may reflect the improvement of NAFLD after GB. Our study may shed light on the differential roles of FGF 19 and FGF 21 in human T2DM remission and NAFLD improvement.
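The hepatic steatosis index (HSI) used to define fatty liver improvement is derived from routine laboratory values; its formula is not restated in this excerpt. A minimal sketch, assuming the widely used Lee et al. formula (8 × ALT/AST ratio + BMI, plus 2 points each for female sex and diabetes), with purely illustrative inputs:

```python
def hepatic_steatosis_index(alt, ast, bmi, female=False, diabetic=False):
    """Hepatic Steatosis Index (Lee et al., 2010):
    HSI = 8 * (ALT/AST) + BMI, plus 2 points each for female sex and
    diabetes. Values > 36 suggest NAFLD; values < 30 make it unlikely.
    """
    hsi = 8.0 * (alt / ast) + bmi
    if female:
        hsi += 2.0
    if diabetic:
        hsi += 2.0
    return hsi

# Illustrative values only (not patient data from this study):
print(hepatic_steatosis_index(alt=40, ast=32, bmi=28.5, female=True, diabetic=True))  # → 42.5
```

Under this formula, the mean post-GB HSI of 34.49 reported for improvers falls below the >36 NAFLD threshold, while the 38.70 of non-improvers does not.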
Background: Gastric bypass (GB) is an effective treatment for those who are morbidly obese with coexisting type 2 diabetes mellitus (T2DM) or non-alcoholic fatty liver disease (NAFLD). Fibroblast growth factors (FGFs) are involved in the regulation of energy metabolism. Methods: We investigated the roles of FGF 19, FGF 21, and total bile acid among morbidly obese patients with T2DM undergoing GB. A total of 35 patients were enrolled. Plasma FGF 19, FGF 21, and total bile acid levels were measured before surgery (M0), 3 months (M3), and 12 months (M12) after surgery, while the hepatic steatosis index (HSI) was calculated before and after surgery. Results: Obese patients with T2DM after GB presented with increased serum FGF 19 levels (p = 0.024) and decreased total bile acid (p = 0.01) and FGF 21 levels (p = 0.005). DM complete remitters had a higher FGF 19 level at M3 (p = 0.004) compared with DM non-complete remitters. Fatty liver improvers tended to have lower FGF 21 (p = 0.05) compared with non-improvers at M12. Conclusions: Changes in FGF 19 and FGF 21 play differential roles in DM remission and NAFLD improvement for patients after GB. Early increases in serum FGF 19 levels may predict complete remission of T2DM, while a decline in serum FGF 21 levels may reflect the improvement of NAFLD after GB.
1. Introduction: Obesity has been a global concern for the past 50 years, and its prevalence has increased significantly over the past decade [1]. Obesity represents a major health challenge because it substantially increases the risk of metabolic diseases, including type 2 diabetes mellitus (T2DM) and non-alcoholic fatty liver disease (NAFLD) [1,2]. Body weight reduction is an important approach to reducing insulin resistance and improving NAFLD. Weight loss has been shown to be one of the strongest predictors of improved insulin sensitivity [3]. The magnitude of weight loss also correlates with the improvement of NAFLD [4]. Surgical intervention is considered an important approach, especially for morbidly obese patients with T2DM, medically resistant arterial hypertension, or comorbidities that are expected to improve with weight loss [5]. Gastric bypass, a widely adopted surgical technique, is one of the most effective methods to combat obesity and remit T2DM [6]. Nevertheless, many patients' DM and NAFLD fail to improve despite these interventions [7]. Furthermore, the mechanisms by which gastric bypass causes weight loss and resolution of T2DM and NAFLD are not well elucidated [8]. Numerous studies have attempted to identify robust biological and clinical predictors of DM remission after bariatric surgery [9,10]. On the other hand, studies on the improvement of NAFLD are relatively lacking and mostly limited to animal studies [11]. Blood biomarkers that can serve as surrogates for evaluating NAFLD, replacing paired liver biopsies and reducing patient suffering, are urgently needed [12]. The human fibroblast growth factor (FGF) family contains at least 22 members involved in the biological processes of cell growth, differentiation, development, and metabolism [13]. 
While most FGFs function as autocrine or paracrine factors, FGF 19, FGF 21, and FGF 23 lack the conventional FGF heparin-binding domain and can elicit endocrine actions, functioning as hormones [13]. Emerging evidence demonstrates the potential role of the FGF family, especially FGF 19 and FGF 21, in energy metabolism and in counteracting obesity [14]. Animal studies have shown that overexpression of FGF 19 or FGF 21, or treatment with the recombinant proteins, enhanced metabolic rates and decreased fat mass, in addition to improving glucose metabolism, insulin sensitivity, and lipid profiles [15,16,17,18]. Bile acids have a significant relationship with energy balance. The farnesoid X receptor (FXR) regulates bile acid homeostasis by regulating the transcription of several enterohepatic genes. Activation of the transcription factor FXR by bile acids provokes the subsequent secretion of FGF 19 [19]. Human FGF 19 is expressed in the ileal enterocytes of the small intestine. FGF 19, secreted into the portal circulation, has a pronounced diurnal rhythm, with peaks occurring 90–120 min after the postprandial rise in serum bile acid levels. β-Klotho (KLB) works as a co-receptor and supports endocrine signaling via binding with FGF receptor (FGFr) 4 [20]. The binding of FGF 19 to the hepatocyte cell-surface FGFr 4/KLB complex leads to negative feedback and reduces hepatic bile salt synthesis [21]. Mice with impaired gut secretion of FGF 19 show significantly impaired weight loss and glucose improvement following bariatric surgery [15]. Collective data also reveal that serum FGF 19 levels are decreased in patients with T2DM [22]. Apart from FGF 19, FGF 21 is expressed in multiple tissues, including the liver, brown adipose tissue, white adipose tissue, and pancreas [23]. Under normal physiologic conditions, most circulating FGF 21 originates from the liver [24]. 
Secretion of FGF 21 is provoked significantly by excess food intake, ketogenic or high-carbohydrate diets, or protein restriction [25]. The expression of the FGF 21 gene depends on several pathways. Increased circulating free fatty acids and prolonged fasting promote the transcriptional activation of FGF 21 via the peroxisome proliferator-activated receptor α-mediated pathway [26,27]. A high-glucose diet activates carbohydrate-response element-binding protein (ChREBP) and enhances FGF 21 secretion [28]. Furthermore, general control nonderepressible 2 (GCN2) is activated by amino acid deficiency, leading to FGF 21 transcription [29]. Metabolic stresses such as obesity, T2DM, or NAFLD are also responsible for inducing the expression and/or signaling of FGF 21 [30]. The FGF 21-dependent signaling of downstream FGFr is highly complex and debated [31]. Based on evidence from a recent study, FGF 21 stimulates hepatic fatty acid oxidation, ketogenesis, and gluconeogenesis, and suppresses lipogenesis [25]. FGF 21 reduced plasma glucose and triglycerides to nearly normal levels in an animal model [32]. Although recent studies provide clues regarding the dynamics of FGF 19 and FGF 21 in patients receiving bariatric surgery [8,33], the information is limited to sleeve gastrectomy [8]. Moreover, data contrasting patients with and without improvement of obesity-related comorbidities are also lacking [33]. The main purpose of our study was to evaluate the effect of GB on changes in serum FGF 19 and FGF 21 levels. Furthermore, we also determined the relationship between both blood biomarkers and the improvement of either T2DM or NAFLD. 5. Conclusions: Obesity represents a major health challenge in our modern society. GB is an effective surgical approach for weight loss, leading to an increase in FGF 19 levels and a decrease in total bile acids and FGF 21 levels postoperatively. 
FGF 19 levels had a negative correlation with the severity of T2DM based on c-peptide and HbA1c levels. FGF 21 levels had a positive correlation with c-peptide and total bile acid levels. Early increases in serum FGF 19 levels may be a predictor for complete remission of T2DM after GB. A decline in serum FGF 21 levels may reflect the improvement of NAFLD after GB. Our study may shed light on the differential roles of FGF 19 and FGF 21 in human T2DM remission and NAFLD improvement.
Background: Gastric bypass (GB) is an effective treatment for those who are morbidly obese with coexisting type 2 diabetes mellitus (T2DM) or non-alcoholic fatty liver disease (NAFLD). Fibroblast growth factors (FGFs) are involved in the regulation of energy metabolism. Methods: We investigated the roles of FGF 19, FGF 21, and total bile acid among morbidly obese patients with T2DM undergoing GB. A total of 35 patients were enrolled. Plasma FGF 19, FGF 21, and total bile acid levels were measured before surgery (M0), 3 months (M3), and 12 months (M12) after surgery, while the hepatic steatosis index (HSI) was calculated before and after surgery. Results: Obese patients with T2DM after GB presented with increased serum FGF 19 levels (p = 0.024) and decreased total bile acid (p = 0.01) and FGF 21 levels (p = 0.005). DM complete remitters had a higher FGF 19 level at M3 (p = 0.004) compared with DM non-complete remitters. Fatty liver improvers tended to have lower FGF 21 (p = 0.05) compared with non-improvers at M12. Conclusions: Changes in FGF 19 and FGF 21 play differential roles in DM remission and NAFLD improvement for patients after GB. Early increases in serum FGF 19 levels may predict complete remission of T2DM, while a decline in serum FGF 21 levels may reflect the improvement of NAFLD after GB.
10,884
284
15
[ "fgf", "21", "19", "fgf 21", "fgf 19", "levels", "dm", "gb", "hsi", "months" ]
[ "test", "test" ]
null
[CONTENT] obesity | diabetes mellitus | FGF 19 | FGF 21 | total bile acid | non-alcoholic fatty liver disease | gastric bypass [SUMMARY]
null
[CONTENT] obesity | diabetes mellitus | FGF 19 | FGF 21 | total bile acid | non-alcoholic fatty liver disease | gastric bypass [SUMMARY]
[CONTENT] obesity | diabetes mellitus | FGF 19 | FGF 21 | total bile acid | non-alcoholic fatty liver disease | gastric bypass [SUMMARY]
[CONTENT] obesity | diabetes mellitus | FGF 19 | FGF 21 | total bile acid | non-alcoholic fatty liver disease | gastric bypass [SUMMARY]
[CONTENT] obesity | diabetes mellitus | FGF 19 | FGF 21 | total bile acid | non-alcoholic fatty liver disease | gastric bypass [SUMMARY]
[CONTENT] Diabetes Mellitus, Type 2 | Fibroblast Growth Factors | Gastric Bypass | Humans | Non-alcoholic Fatty Liver Disease | Obesity, Morbid [SUMMARY]
null
[CONTENT] Diabetes Mellitus, Type 2 | Fibroblast Growth Factors | Gastric Bypass | Humans | Non-alcoholic Fatty Liver Disease | Obesity, Morbid [SUMMARY]
[CONTENT] Diabetes Mellitus, Type 2 | Fibroblast Growth Factors | Gastric Bypass | Humans | Non-alcoholic Fatty Liver Disease | Obesity, Morbid [SUMMARY]
[CONTENT] Diabetes Mellitus, Type 2 | Fibroblast Growth Factors | Gastric Bypass | Humans | Non-alcoholic Fatty Liver Disease | Obesity, Morbid [SUMMARY]
[CONTENT] Diabetes Mellitus, Type 2 | Fibroblast Growth Factors | Gastric Bypass | Humans | Non-alcoholic Fatty Liver Disease | Obesity, Morbid [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] fgf | 21 | 19 | fgf 21 | fgf 19 | levels | dm | gb | hsi | months [SUMMARY]
null
[CONTENT] fgf | 21 | 19 | fgf 21 | fgf 19 | levels | dm | gb | hsi | months [SUMMARY]
[CONTENT] fgf | 21 | 19 | fgf 21 | fgf 19 | levels | dm | gb | hsi | months [SUMMARY]
[CONTENT] fgf | 21 | 19 | fgf 21 | fgf 19 | levels | dm | gb | hsi | months [SUMMARY]
[CONTENT] fgf | 21 | 19 | fgf 21 | fgf 19 | levels | dm | gb | hsi | months [SUMMARY]
[CONTENT] fgf | fgf 21 | 21 | 19 | fgf 19 | nafld | receptor | secretion | binding | loss [SUMMARY]
null
[CONTENT] vs | cr | fgf | dm | figure | hsi | dm cr | non | group | pg [SUMMARY]
[CONTENT] fgf | levels | 21 levels | fgf 21 levels | fgf 19 levels | 19 levels | fgf 21 | fgf 19 | 19 | 21 [SUMMARY]
[CONTENT] fgf | 21 | fgf 21 | 19 | fgf 19 | hsi | vs | levels | dm | cr [SUMMARY]
[CONTENT] fgf | 21 | fgf 21 | 19 | fgf 19 | hsi | vs | levels | dm | cr [SUMMARY]
[CONTENT] GB | 2 ||| [SUMMARY]
null
[CONTENT] Obese | T2DM | GB | 19 | 0.024 | 0.01 | FGF 21 | 0.005 ||| 19 | M3 | 0.004 ||| 21 | 0.05 | M12 [SUMMARY]
[CONTENT] 19 | 21 | DM | NAFLD | GB ||| 19 | T2DM | FGF 21 | NAFLD | GB [SUMMARY]
[CONTENT] GB | 2 ||| ||| FGF | 19 | 21 | obese | T2DM | GB ||| 35 ||| Plasma FGF 19 | 21 | 3 months | M3 | 12 months | M12 | HSI ||| Obese | T2DM | GB | 19 | 0.024 | 0.01 | FGF 21 | 0.005 ||| 19 | M3 | 0.004 ||| 21 | 0.05 | M12 ||| 19 | 21 | DM | NAFLD | GB ||| 19 | T2DM | FGF 21 | NAFLD | GB [SUMMARY]
[CONTENT] GB | 2 ||| ||| FGF | 19 | 21 | obese | T2DM | GB ||| 35 ||| Plasma FGF 19 | 21 | 3 months | M3 | 12 months | M12 | HSI ||| Obese | T2DM | GB | 19 | 0.024 | 0.01 | FGF 21 | 0.005 ||| 19 | M3 | 0.004 ||| 21 | 0.05 | M12 ||| 19 | 21 | DM | NAFLD | GB ||| 19 | T2DM | FGF 21 | NAFLD | GB [SUMMARY]
Petroleum contaminated water and health symptoms: a cross-sectional pilot study in a rural Nigerian community.
26546277
The oil-rich Niger Delta suffers from extensive petroleum contamination. A pilot study was conducted in the region of Ogoniland where one community, Ogale, has drinking water wells highly contaminated with a refined oil product. In a 2011 study, the United Nations Environment Programme (UNEP) sampled Ogale drinking water wells and detected numerous petroleum hydrocarbons, including benzene at concentrations as much as 1800 times higher than the USEPA drinking water standard. UNEP recommended immediate provision of clean drinking water, medical surveillance, and a prospective cohort study. Although the Nigerian government has provided emergency drinking water, other UNEP recommendations have not been implemented. We aimed to (i) follow up on UNEP recommendations by investigating health symptoms associated with exposure to contaminated water; and (ii) assess the adequacy and utilization of the government-supplied emergency drinking water.
BACKGROUND
We recruited 200 participants from Ogale and a reference community, Eteo, and administered questionnaires to investigate water use, perceived water safety, and self-reported health symptoms.
METHODS
Our multivariate regression analyses show statistically significant associations between exposure to Ogale drinking water and self-reported health symptoms consistent with petroleum exposure. Participants in Ogale more frequently reported health symptoms related to neurological effects (OR = 2.8), hematological effects (OR = 3.3), and irritation (OR = 2.7).
RESULTS
Our results are the first from a community relying on drinking water with such extremely high concentrations of benzene and other hydrocarbons. The ongoing exposure and these pilot study results highlight the need for more refined investigation as recommended by UNEP.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Cross-Sectional Studies", "Drinking Water", "Environmental Exposure", "Female", "Health Status", "Humans", "Male", "Middle Aged", "Nigeria", "Pilot Projects", "Rural Population", "Self Report", "Water Pollutants, Chemical", "Young Adult" ]
4636824
Background
While dramatic events such as the 1989 Exxon Valdez and 2010 Deepwater Horizon oil spills have received substantial remediation responses, the long-term and vast contamination of the Niger Delta region remains largely un-remediated [1]. Extraction, processing, and transport of crude oil dating back to the 1950s have had a devastating impact on Ogoniland, a territory in the Southern region of Nigeria. The United Nations Development Programme estimates that 6178 oil spills occurred in Ogoniland between 1976 and 2011, resulting in discharges of approximately three million barrels of oil [2]. Commissioned by the Federal Government of Nigeria, the United Nations Environmental Programme (UNEP) conducted an environmental assessment of Ogoniland and determined that the widespread oil contamination presents serious threats to human health [3]. Residents are exposed to petroleum hydrocarbons that are released to the environment by burning, spilling, and leaking. Exposure can occur via inhalation of hydrocarbons in ambient air and via consumption of and dermal contact with hydrocarbons in water, soil and sediment [1, 3]. This study focuses on the community of Ogale, located in the Eleme local government area of Ogoniland, where UNEP discovered substantial leakage from an abandoned section of a pipeline carrying refined oil. UNEP testing revealed approximately three inches of refined oil floating on the groundwater that supplies the community’s drinking water [3]. UNEP detected numerous petroleum hydrocarbons in water from individual borehole drinking water wells, notably benzene at concentrations as high as 9280 micrograms per liter, which is approximately 1800 times higher than the United States Environmental Protection Agency (US EPA) drinking water standard and over 900 times higher than the World Health Organization (WHO) drinking water guideline [3, 4]. 
Petroleum products are a complex mixture of hydrocarbons, consisting of both aromatic and long- and short-chain aliphatic hydrocarbons. Components of crude and refined petroleum, namely volatile organic compounds (VOCs), such as benzene, toluene and xylenes, and polycyclic aromatic hydrocarbons (PAHs), have independently been associated with adverse human health effects. Acute exposures to high concentrations of VOCs cause central nervous system toxicity, resulting in symptoms such as headaches, fatigue and dizziness [5, 6]. Chronic exposure to VOCs can impair the immune system via oxidative stress and decreases in white blood cell count [7, 8]. Benzene in particular is strongly associated with disorders of the hematopoietic system such as aplastic anemia [9, 10, 11]. Benzene is also classified as a known human carcinogen based on occupational studies in humans [4]. Polycyclic aromatic hydrocarbons cause symptoms such as nausea, vomiting and skin and eye irritation following acute, high-level exposures [12, 13]. Exposures to PAHs during pregnancy have been linked to decreased birth weight and impaired development in offspring [14]. Chronic occupational exposures are associated with dose-dependent increased risks of certain types of cancers, including lung, skin and bladder cancer [15]. Naphthalene, a low molecular weight PAH that was detected in Ogale water samples, can adversely affect the hematopoietic system, damaging and killing red blood cells, causing symptoms such as shortness of breath and fatigue [12, 16]. Alkylated PAHs comprise the majority of PAHs detected in petroleum products and are particularly persistent. Although the health effects of alkylated PAHs have not been well studied, limited evidence suggests that they may be more toxic and carcinogenic than their parent PAH compounds [17]. 
Although UNEP did not complete a detailed chemical characterization of the refined oil in Ogale wells, studies on petroleum exposures may provide some indication of adverse health effects that could occur in the community. Prior research has primarily focused on high-dose, short-term occupational exposures to crude oil, in particular those occurring during remediation of oil spills. Workers exposed to petroleum hydrocarbons have reported adverse health symptoms such as headaches, eye and skin irritation and respiratory difficulties [18, 19]. A recent cross-sectional study found that blood samples of oil spill workers showed alterations consistent with impairment of the hepatic and hematopoietic systems [20]. Research on the Prestige oil spill has provided preliminary evidence of exposure-dependent DNA damage in cleanup volunteers [21]. The ongoing NIEHS Gulf Long Term Follow Up (GuLF) Study on Deepwater Horizon spill workers will be the first to investigate long-term physical health effects using a prospective cohort design [22, 23]. Few studies have examined adverse effects associated with chronic exposure to elevated concentrations of refined oil products in the general population. Increases in depression and stress, stemming from perceived health risks and financial concerns, have been observed in communities subjected to chronic oil spill exposures [24]. One study found increases in cancer incidence and mortality in communities near the Amazon basin oil fields in Ecuador [25]. A thorough search of the scientific literature revealed only one health study conducted in the Niger Delta region, which reported higher rates of respiratory and skin disorders in Eleme compared to a less-industrialized Nigerian community [26]. 
However, this study does not include a description of the sampling design and locations, among other study weaknesses, and is therefore not suitable for reaching conclusions regarding any association between petroleum exposure and adverse health outcomes. Further research on chronic, high-magnitude exposures in individuals living in proximity to oil sites is needed to improve understanding of how oil spills might affect human health. UNEP made several recommendations in the Ogoniland environmental assessment, including provision of alternative drinking water supplies to Ogale, remediation of soil and groundwater contamination in the area, medical surveillance, and monitoring for potential adverse health effects through the implementation of a prospective cohort epidemiological study [3]. The objectives of this pilot study are to (i) follow up on UNEP recommendations by investigating health symptoms associated with exposure to contaminated water; and (ii) assess the adequacy and utilization of an emergency supply of potable water provided by the government. We compared the prevalence of self-reported health symptoms in Ogale and in a reference community, Eteo. The results of this pilot study will be helpful for designing the prospective cohort study in Ogale recommended by UNEP.
null
null
Results
Participants in Ogale and Eteo were comparable in demographic characteristics (Table 1). Mean ages in Ogale and Eteo were 34.3 (SD = 11.2) and 35.5 (SD = 12.7) respectively. In our sample, there were slightly higher proportions of women (53 % in Ogale, 55 % in Eteo) than men in both communities. More than one-half of participants in Ogale and Eteo were married (60 and 71 % respectively). The vast majority of study participants identified as Christian (98 % in Ogale, 95 % in Eteo).

Table 1. Characteristics of participants in Ogale and Eteo

Characteristic                                      Ogale (n = 100)   Eteo (n = 100)
Age, mean (SD)                                      34.3 ± 11.2       35.5 ± 12.7
Age category: 18–20                                 3                 5
Age category: 21–40                                 80                70
Age category: 41–60                                 11                19
Age category: > 60                                  6                 6
Sex (%): Female                                     53                55
Sex (%): Male                                       47                45
Religion (%): Christian                             98                95
Religion (%): Muslim                                2                 5
Marital status (%): Married, monogamous             60                71
Marital status (%): Married, polygamous             2                 0
Marital status (%): Widowed                         3                 5
Marital status (%): Single                          35                24
Smoking (%): Never smoker                           85                89
Smoking (%): Ever smoker                            15                11
Education level (%): No formal education            1                 3
Education level (%): Primary school                 7                 18
Education level (%): Secondary school               53                58
Education level (%): Post-secondary school          39                21
Head of household occupation (%): Farmer            6                 15
Head of household occupation (%): Educator          3                 0
Head of household occupation (%): Artist/Musician   2                 0
Head of household occupation (%): Tradesman         40                44
Head of household occupation (%): Professional      14                9
Head of household occupation (%): Government/Civil Service  22        10
Head of household occupation (%): Clergy            2                 1
Head of household occupation (%): Student           3                 8
Head of household occupation (%): Unemployed        4                 9
Head of household occupation (%): Retired           4                 4
Primary medical care location (%): Private clinic   36                14
Primary medical care location (%): Primary health center  8           41
Primary medical care location (%): General hospital 26                20
Primary medical care location (%): Local chemist    19                15
Primary medical care location (%): None             8                 8
Primary medical care location (%): Other            3                 2
Residents in household (median)                     4                 5
Number of children^a in household (median)          2                 3

Abbreviations: SD, standard deviation. ^a Defined as individuals under the age of 18.

In both communities, more than three-quarters of participants reported never having smoked tobacco (85 % in Ogale, 89 % in Eteo) and, of those who did, the overwhelming majority were male. On average, participants in Ogale had achieved higher levels of education than those in Eteo; nearly twice as many Ogale participants reported post-secondary education (39 % vs. 21 %). 
Head of household occupations were similar across both groups. The most commonly reported occupation for the head of household in both communities was a tradesman (40 % in Ogale, 44 % in Eteo). Both communities had comparable median numbers of total individuals (4 in Ogale, 5 in Eteo) and of children (2 in Ogale, 3 in Eteo) residing in the household. Participants in both communities had consistent access to medical services. The majority of participants reported visiting a health centre, general hospital or private clinic for their medical care and health services (72 and 76 % in Ogale and Eteo respectively). In addition, 19 % of participants in Ogale sought medical care from a local chemist compared to 15 % in Eteo. The prevalence of emergency water use as a primary source for individual household activities in Ogale ranged between 14 and 16 % (Table 2). Emergency water was most commonly used for drinking, cooking, brushing teeth, and washing food. The majority of participants in Ogale reported continued use of the contaminated water: 66 % reported drinking, 81 % reported cooking, and 83 % reported using their borehole well water for bathing, washing food, washing dishes, washing clothes, and cleaning the house. Additional reported sources of drinking water in Ogale were sachet water, bottled water and mono-pump (14, 4 and 1 % respectively). Only 24 % of participants in Ogale reported receiving emergency water supplies. 
Although over 80 % of these individuals stated that water delivery occurs at least once per week, half of them found the volume of water delivered to be insufficient for daily needs.

Table 2. Primary sources of water for various household activities in Ogale (n = 100)^a

Household activity   Borehole   Sachet^b   Emergency^c
Cooking              81         0          16
Drinking             66         14         15
Brushing teeth       80         1          16
Bathing              83         0          15
Washing food         83         0          16
Washing dishes       83         0          15
Washing clothes      83         0          14
Cleaning house       83         0          14

^a Water sources with n < 5 (dugout well, monopump, bottled water and other) were excluded. ^b Sachet water is defined as water packed in plastic bags, commonly sold in outdoor markets. ^c Emergency water is defined as water supplied by the government to Ogale participants as requested by UNEP.

Eteo residents are not provided with emergency water because their water supply is not known to be contaminated. Approximately 97 % of individuals in Eteo reported using their individual household borehole drinking water wells for all specified household activities, while 2 % used surface water, and 1 % used rain water. Overall, participants in Ogale reported using their borehole well water for household activities for a median of 4 years, while participants in Eteo reported a median of 5 years. The median years of exposure to emergency drinking water and to sachet water in Ogale were 1 and 2, respectively. Participants in Ogale were significantly more likely to perceive their primary water source as having an odor (39 % vs. 8 %). The main sources of water with a reported odor in Ogale were borehole well and emergency water (Table 3). 
The majority of participants in Ogale who reported a borehole well odor (69 %) stated that it smelled like petroleum fuel. Among Ogale participants reporting an emergency water odor, the most common description was chlorine (10 %). In Eteo, one participant described the borehole well water as having a fuel odor (1 %).

Table 3. Detailed descriptions of water odor in Ogale and Eteo (n (%))

                   Ogale (n = 39)                        Eteo (n = 8)
Odor description   Borehole    Sachet    Emergency       Borehole    Sachet
Any odor           32 (82.1)   1 (2.6)   6 (15.4)        6 (75.0)    2 (25.0)
Fuel               27 (69.2)   1 (2.6)   1 (2.6)         1 (12.5)    0
Chlorine           0           0         4 (10.3)        0           0
Chemical           2 (5.1)     0         1 (2.6)         0           1 (12.5)
Mold               0           0         0               1 (12.5)    0
Mud                1 (2.6)     0         0               0           0
Unpleasant         0           0         0               3 (37.5)    0
Smoke              1 (2.6)     0         0               0           0
Don't know         1 (2.6)     0         0               1 (12.5)    1 (12.5)

When asked about water safety, 41 % of participants in Ogale reported perceiving their water source as “safe” compared to 70 % in Eteo (Fig. 1). In addition, almost one-third (32 %) of individuals in Ogale reported their water as “unsafe” compared to 9 % in Eteo. Differences in these proportions were found to be statistically significant at the p < 0.05 level using chi-square analyses.

Fig. 1. Perceptions of primary water source safety in Ogale and Eteo. Participants from both communities reported that their primary water source was safe (light gray), unsafe (medium gray), or they did not know if it was safe or not (dark gray).

Table 4 displays differences in symptom reporting across the three different primary sources of drinking water (borehole, sachet, and emergency water) in Ogale. On average, the proportions of individuals who reported experiencing current adverse health symptoms were similar among the three different sources of primary drinking water in Ogale. 
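The chi-square comparisons used throughout these results (for example, on the safe/unsafe/don't-know proportions above) can be reproduced with the plain Pearson chi-square statistic. A stdlib-only sketch, using the perceived-safety counts reported in the text (n = 100 per community):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: safe, unsafe, don't know; columns: Ogale, Eteo.
table = [[41, 70], [32, 9], [27, 21]]
stat = chi_square_statistic(table)
# Critical value for df = (3-1)*(2-1) = 2 at alpha = 0.05 is 5.991.
print(stat > 5.991)  # → True: perceptions differ significantly
```

The statistic here comes out around 21, far above the 5.991 critical value, matching the significant difference the authors report.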
Overall, chi-square tests revealed no statistically significant differences in the proportions of health symptoms reported among individuals who used borehole, sachet, or emergency water for drinking.

Table 4. Self-reported adverse health symptoms by primary drinking water source in Ogale(a) (n (%))

Health symptom     Borehole (n = 66)   Sachet (n = 14)   Emergency (n = 15)
Irritation         34 (51.5)           7 (50.0)          6 (40.0)
Neurologic         27 (40.9)           5 (35.7)          8 (53.3)
Gastrointestinal   10 (15.2)           2 (14.3)          5 (33.3)
Hematologic        21 (31.8)           5 (35.7)          3 (20.0)
General pain       6 (9.1)             0 (0.0)           2 (13.3)

(a) Differences in proportions are not statistically significant at P < 0.05 using chi-square analyses

Chi-square analyses were used to examine differences in health symptoms among participants in Ogale who reported receiving sufficient, insufficient, or no emergency water supplies (Table 5). The frequencies of irritation and gastrointestinal symptoms differed significantly between the three groups. Participants in Ogale who received sufficient emergency water supplies were less likely to report irritation (8.3 %) than those receiving insufficient or no emergency water (66.7 and 50 % respectively). 
Overall, participants who received sufficient emergency water supplies reported the lowest proportions of health symptoms across the three groups.

Table 5. Self-reported adverse health symptoms by sufficiency of emergency water supply in Ogale (n (%))

Health symptom      Sufficient (n = 12)   Insufficient (n = 12)   No supply (n = 76)
Irritation*         1 (8.3)               8 (66.7)                38 (50.0)
Neurologic          3 (25.0)              8 (66.7)                29 (38.2)
Gastrointestinal*   1 (8.3)               6 (50.0)                10 (13.2)
Hematologic         3 (25.0)              5 (41.7)                22 (30.0)
General pain        1 (8.3)               1 (8.3)                 6 (7.9)

*Differences in proportions are statistically significant at P < 0.05 using chi-square analyses

Table 6 displays general and specific self-reported health symptoms for all participants. After controlling for age, sex, smoking status, occupation, and education level, Ogale residents were significantly more likely to self-report any irritation (OR = 2.7; 95 % CI, 1.5–5.1), any neurological effects (OR = 2.8; 95 % CI, 1.5–5.5), and any hematologic effects (OR = 3.3; 95 % CI, 1.5–7.0). More than two-thirds (68 %) of individuals residing in Ogale reported experiencing any current health symptoms compared to 56 % of Eteo residents. Chi-square analyses revealed that Ogale residents were significantly more likely to report having a headache (36 % vs. 18 %), dizziness (10 % vs. 2 %), throat irritation (8 % vs. 1 %), skin irritation (26 % vs. 5 %), rash (9 % vs. 1 %) and anemia (18 % vs. 4 %). 
After controlling for age, sex, and smoking, Ogale residents were significantly more likely to report having a headache (OR = 2.7; 95 % CI, 1.4–5.4), dizziness (OR = 6.3; 95 % CI, 1.3–30.4), eye irritation (OR = 2.5; 95 % CI, 1.1–5.7), throat irritation (OR = 9.1; 95 % CI, 1.1–75.4), skin irritation (OR = 6.5; 95 % CI, 2.4–18.2), and any type of anemia (OR = 5.9; 95 % CI, 1.9–18.3).

Table 6. Detailed multivariate analysis for self-reported health symptoms in Ogale and Eteo

Health symptom     Ogale (n = 100)   Eteo (n = 100)   Crude OR (95 % CI)   Adjusted OR(a) (95 % CI)
Any symptom        68                56               1.6 (0.9, 3.0)       1.8 (1.0, 3.3)
Irritation*        47                25               2.7 (1.5, 4.8)       2.7 (1.5, 5.1)
  Eye*             21                11               2.2 (0.9, 4.7)       2.5 (1.1, 5.7)
  Throat*          8                 1                8.6 (1.1, 70.2)      9.1 (1.1, 75.4)
  Skin*            26                5                6.7 (2.5, 18.2)      6.6 (2.4, 18.2)
  Rash             9                 1                9.8 (1.2, 78.8)      NA
  Runny nose       14                10               1.5 (0.6, 3.5)       1.5 (0.6, 3.5)
  Cough            14                7                2.2 (0.8, 5.6)       2.5 (0.9, 6.5)
Gastrointestinal   17                9                2.1 (0.9, 4.9)       2.0 (0.8, 5.0)
  Stomach pain     15                6                2.8 (1.0, 7.4)       2.7 (1.0, 7.2)
  Diarrhea         2                 4                0.5 (0.1, 2.8)       NA
Neurologic*        40                21               2.5 (1.3, 4.7)       2.8 (1.5, 5.5)
  Headache*        36                18               2.6 (1.3, 4.9)       2.7 (1.4, 5.4)
  Sleepiness       7                 3                2.4 (0.6, 9.7)       NA
  Dizziness*       10                2                5.4 (1.2, 25.5)      6.3 (1.3, 30.4)
Hematologic*       30                13               2.9 (1.4, 6.0)       3.3 (1.5, 7.0)
  Anemia*          18                4                5.3 (1.7, 16.2)      5.9 (1.9, 18.3)
  Malaria          14                9                1.7 (0.7, 4.0)       1.6 (0.7, 4.0)
General pain       8                 11               0.7 (0.3, 1.8)       0.8 (0.3, 2.0)

Abbreviations: OR, odds ratio; NA, not applicable (odds ratio was not adjusted due to low event frequency in that category)
(a) General health symptom categories (shown unindented) were adjusted for age, sex, smoking status, occupation, and education level. Due to small sample sizes, specific symptoms were adjusted for age, sex and smoking status only.
*Adjusted odds ratios are statistically significant at P < 0.05
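As a check on the reported estimates, the crude odds ratios in Table 6 can be reproduced directly from the 2 × 2 counts. The sketch below is illustrative Python rather than the authors' SAS analysis, and the helper name is mine; it uses Woolf's log-odds-ratio method for the 95 % confidence interval, applied to the headache counts (36/100 in Ogale vs. 18/100 in Eteo), and recovers the reported crude OR of 2.6 (1.3, 4.9).

```python
import math

def crude_or_ci(exp_cases, exp_n, ref_cases, ref_n, z=1.96):
    """Crude odds ratio and Woolf (log-OR) confidence interval
    from the four cells of a 2x2 table. Helper name is illustrative."""
    a, b = exp_cases, exp_n - exp_cases   # exposed group: cases, non-cases
    c, d = ref_cases, ref_n - ref_cases   # reference group: cases, non-cases
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Headache: 36/100 in Ogale vs. 18/100 in Eteo
or_, lo, hi = crude_or_ci(36, 100, 18, 100)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}, {hi:.1f})")  # OR = 2.6 (95% CI 1.3, 4.9)
```

The adjusted odds ratios in Table 6 additionally condition on covariates via logistic regression, so they will not in general equal this crude value.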
Conclusions
In this cross-sectional pilot study, the first carried out in response to the UNEP recommendations, we observed statistically significant associations between exposure to petroleum-contaminated drinking water and self-reported symptoms consistent with exposure to petroleum hydrocarbons. These results reinforce UNEP’s recommendations for establishment of a health registry, medical surveillance, and a prospective cohort study for the Ogale community [3]. Future studies should define the full extent of contaminated household water and incorporate more detailed methods of exposure and outcome assessment for exposed populations, including their most susceptible members.
[ "Study population", "Outcome ascertainment", "Statistical analysis" ]
[ "The community of Ogale was selected for study based on the UNEP environmental assessment, which identified Ogale as having the most serious groundwater contamination observed in Ogoniland [3]. The community of Eteo was chosen to serve as a reference group because it is near Ogale (approximately 10 miles away), it is part of the same local government area of Eleme, and people living in Eteo and Ogale are comparable with respect to race, language, culture, and behavioral practices. The UNEP environmental assessment did not report any petroleum contamination in Eteo.\nA total of 200 adults over the age of 18 were enrolled in this pilot study (100 participants from each community). We employed a stratified random sampling strategy through door-to-door recruitment in three areas of both Ogale and Eteo, approaching individuals in every fifth house. In both communities, we obtained a 98 % response rate. Participants met the following eligibility criteria: 1) residence in the community for a minimum of one year, and 2) no prior history of residence in any other Ogoniland community associated with high levels of petroleum contamination, as reported by UNEP [3].\nThis study was conducted with approval from local authorities in Ogale and Eteo and from the Institutional Review Board at Boston University Medical Center (reference number: H-32345). Informed consent was obtained from each subject.", "Trained interviewers administered standardized questionnaires in each respondent’s home. These questionnaires were developed for this pilot study and include primarily closed-ended questions regarding demographics, smoking habits, water supply, water safety, current health symptoms and medical history. Participants were asked to report their primary water source and duration of its use for specific household activities: bathing, cooking, washing, drinking, brushing teeth, cleaning the house, and washing clothes, dishes and food. 
We collected information on primary source water characteristics such as odor and perceived safety. We asked individuals in Ogale who reported receiving emergency government-supplied water about the duration, frequency and sufficiency of water delivery. Participants who reported currently experiencing health issues were asked to list their symptoms; interviewers were careful not to lead participants in the open-ended responses.", "Descriptive statistics were used to compare demographic information of participants in Ogale and Eteo, and chi-square analyses were used to compare frequencies of self-reported perceptions of water odor and safety between communities. The relationship between exposure to contaminated water and self-reported adverse health outcomes was studied in distinct multivariate logistic regression models, which were used to obtain odds ratios and 95 % confidence intervals (CI) for self-reported symptoms in both communities. Sex, age, occupation, smoking status, and education were fitted in one multivariate logistic regression model. Due to our small sample size, only age, sex, and smoking status were included in the detailed logistic regression model to avoid over-fitting the multivariate model. These covariates were selected because of their potential to confound or modify the association of interest [27–30]. All analyses were performed using SAS statistical software version 9.3 (SAS Institute, Cary, NC)." ]
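The analyses described above were performed in SAS 9.3. As a rough illustration of the chi-square comparisons, the following Python sketch (the function name and choice of language are mine, not the authors') recomputes a Pearson chi-square statistic for a 2 × 2 table, using the reported perceptions of water safety: 41/100 participants in Ogale vs. 70/100 in Eteo described their primary water source as safe.

```python
# Illustrative Pearson chi-square computation for a 2x2 contingency table.
# This only sketches the kind of test reported; it is not the authors' code.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table given as
    [[a, b], [c, d]] (rows = communities, columns = outcome)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Ogale: 41 "safe", 59 not; Eteo: 70 "safe", 30 not
stat = chi_square_2x2([[41, 59], [70, 30]])
print(round(stat, 2))  # 17.03
# With 1 degree of freedom, the 5 % critical value is 3.84, so the
# difference in perceived safety is significant at P < 0.05, as reported.
```

In practice a library routine such as `scipy.stats.chi2_contingency` would also return the p-value and apply a continuity correction where appropriate.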
[ null, null, null ]
[ "Background", "Methods", "Study population", "Outcome ascertainment", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "While dramatic events such as the 1989 Exxon Valdez and 2010 Deepwater Horizon oil spills have received substantial remediation responses, the long-term and vast contamination of the Niger Delta region remains largely un-remediated [1]. Extraction, processing, and transport of crude oil dating back to the 1950s have had a devastating impact on Ogoniland, a territory in the Southern region of Nigeria. The United Nations Development Programme estimates that 6178 oil spills occurred in Ogoniland between 1976 and 2011, resulting in discharges of approximately three million barrels of oil [2]. Commissioned by the Federal Government of Nigeria, the United Nations Environmental Programme (UNEP) conducted an environmental assessment of Ogoniland and determined that the widespread oil contamination presents serious threats to human health [3]. Residents are exposed to petroleum hydrocarbons that are released to the environment by burning, spilling, and leaking. Exposure can occur via inhalation of hydrocarbons in ambient air and via consumption of and dermal contact with hydrocarbons in water, soil and sediment [1, 3].\nThis study focuses on the community of Ogale, located in the Eleme local government area of Ogoniland, where UNEP discovered substantial leakage from an abandoned section of a pipeline carrying refined oil. UNEP testing revealed approximately three inches of refined oil floating on the groundwater that supplies the community’s drinking water [3]. 
UNEP detected numerous petroleum hydrocarbons in water from individual borehole drinking water wells, notably benzene at concentrations as high as 9280 micrograms per liter, which is approximately 1800 times higher than the United States Environmental Protection Agency (US EPA) drinking water standard and over 900 times higher than the World Health Organization (WHO) drinking water guideline [3, 4].\nPetroleum products are a complex mixture of hydrocarbons, consisting of both aromatic and long- and short-chain aliphatic hydrocarbons. Components of crude and refined petroleum, namely volatile organic compounds (VOCs), such as benzene, toluene and xylenes, and polycyclic aromatic hydrocarbons (PAHs), have independently been associated with adverse human health effects. Acute exposures to high concentrations of VOCs cause central nervous system toxicity, resulting in symptoms such as headaches, fatigue and dizziness [5, 6]. Chronic exposure to VOCs can impair the immune system via oxidative stress and decreases in white blood cell count [7, 8]. Benzene in particular is strongly associated with disorders of the hematopoietic system such as aplastic anemia [9, 10, 11]. Benzene is also classified as a known human carcinogen based on occupational studies in humans [4]. Polycyclic aromatic hydrocarbons cause symptoms such as nausea, vomiting and skin and eye irritation following acute, high-level exposures [12, 13]. Exposures to PAHs during pregnancy have been linked to decreased birth weight and impaired development in offspring [14]. Chronic occupational exposures are associated with dose-dependent increased risks of certain types of cancers, including lung, skin and bladder cancer [15]. Naphthalene, a low molecular weight PAH that was detected in Ogale water samples, can adversely affect the hematopoietic system, damaging and killing red blood cells, causing symptoms such as shortness of breath and fatigue [12, 16]. 
Alkylated PAHs comprise the majority of PAHs detected in petroleum products and are particularly persistent. Although the health effects of alkylated PAHs have not been well studied, limited evidence suggests that they may be more toxic and carcinogenic than their parent PAH compounds [17].\nAlthough UNEP did not complete a detailed chemical characterization of the refined oil in Ogale wells, studies on petroleum exposures may provide some indication of adverse health effects that could occur in the community. Prior research has primarily focused on high-dose, short-term occupational exposures to crude oil, in particular those occurring during remediation of oil spills. Workers exposed to petroleum hydrocarbons have reported adverse health symptoms such as headaches, eye and skin irritation and respiratory difficulties [18, 19]. A recent cross-sectional study found that blood samples of oil spill workers showed alterations consistent with impairment of the hepatic and hematopoietic systems [20]. Research on the Prestige oil spill has provided preliminary evidence of exposure-dependent DNA damage in cleanup volunteers [21]. The ongoing NIEHS Gulf Long Term Follow Up (GuLF) Study on Deepwater Horizon spill workers will be the first to investigate long-term physical health effects using a prospective cohort design [22, 23].\nFew studies have examined adverse effects associated with chronic exposure to elevated concentrations of refined oil products in the general population. Increases in depression and stress, stemming from perceived health risks and financial concerns, have been observed in communities subjected to chronic oil spill exposures [24]. One study found increases in cancer incidence and mortality in communities near the Amazon basin oil fields in Ecuador [25]. 
A thorough search of the scientific literature revealed only one health study conducted in the Niger Delta region, which reported higher rates of respiratory and skin disorders in Eleme compared to a less-industrialized Nigerian community [26]. However, this study does not include a description of the sampling design and locations, among other study weaknesses, and is therefore not suitable for reaching conclusions regarding any association between petroleum exposure and adverse health outcomes. Further research on chronic, high-magnitude exposures in individuals living in proximity to oil sites is needed to improve understanding of how oil spills might affect human health.\nUNEP made several recommendations in the Ogoniland environmental assessment, including provision of alternative drinking water supplies to Ogale, remediation of soil and groundwater contamination in the area, medical surveillance, and monitoring for potential adverse health effects through the implementation of a prospective cohort epidemiological study [3]. The objectives of this pilot study are to (i) follow up on UNEP recommendations by investigating health symptoms associated with exposure to contaminated water; and (ii) assess the adequacy and utilization of an emergency supply of potable water provided by the government. We compared the prevalence of self-reported health symptoms in Ogale and in a reference community, Eteo. The results of this pilot study will be helpful for designing the prospective cohort study in Ogale recommended by UNEP.", " Study population The community of Ogale was selected for study based on the UNEP environmental assessment, which identified Ogale as having the most serious groundwater contamination observed in Ogoniland [3]. 
The community of Eteo was chosen to serve as a reference group because it is near Ogale (approximately 10 miles away), it is part of the same local government area of Eleme, and people living in Eteo and Ogale are comparable with respect to race, language, culture, and behavioral practices. The UNEP environmental assessment did not report any petroleum contamination in Eteo.\nA total of 200 adults over the age of 18 were enrolled in this pilot study (100 participants from each community). We employed a stratified random sampling strategy through door-to-door recruitment in three areas of both Ogale and Eteo, approaching individuals in every fifth house. In both communities, we obtained a 98 % response rate. Participants met the following eligibility criteria: 1) residence in the community for a minimum of one year, and 2) no prior history of residence in any other Ogoniland community associated with high levels of petroleum contamination, as reported by UNEP [3].\nThis study was conducted with approval from local authorities in Ogale and Eteo and from the Institutional Review Board at Boston University Medical Center (reference number: H-32345). Informed consent was obtained from each subject.\n Outcome ascertainment Trained interviewers administered standardized questionnaires in each respondent’s home. These questionnaires were developed for this pilot study and include primarily closed-ended questions regarding demographics, smoking habits, water supply, water safety, current health symptoms and medical history. Participants were asked to report their primary water source and duration of its use for specific household activities: bathing, cooking, washing, drinking, brushing teeth, cleaning the house, and washing clothes, dishes and food. We collected information on primary source water characteristics such as odor and perceived safety. We asked individuals in Ogale who reported receiving emergency government-supplied water about the duration, frequency and sufficiency of water delivery. Participants who reported currently experiencing health issues were asked to list their symptoms; interviewers were careful not to lead participants in the open-ended responses.\n Statistical analysis Descriptive statistics were used to compare demographic information of participants in Ogale and Eteo, and chi-square analyses were used to compare frequencies of self-reported perceptions of water odor and safety between communities. The relationship between exposure to contaminated water and self-reported adverse health outcomes was studied in distinct multivariate logistic regression models, which were used to obtain odds ratios and 95 % confidence intervals (CI) for self-reported symptoms in both communities. Sex, age, occupation, smoking status, and education were fitted in one multivariate logistic regression model. Due to our small sample size, only age, sex, and smoking status were included in the detailed logistic regression model to avoid over-fitting the multivariate model. These covariates were selected because of their potential to confound or modify the association of interest [27–30]. All analyses were performed using SAS statistical software version 9.3 (SAS Institute, Cary, NC).", "The community of Ogale was selected for study based on the UNEP environmental assessment, which identified Ogale as having the most serious groundwater contamination observed in Ogoniland [3]. The community of Eteo was chosen to serve as a reference group because it is near Ogale (approximately 10 miles away), it is part of the same local government area of Eleme, and people living in Eteo and Ogale are comparable with respect to race, language, culture, and behavioral practices. The UNEP environmental assessment did not report any petroleum contamination in Eteo.\nA total of 200 adults over the age of 18 were enrolled in this pilot study (100 participants from each community). We employed a stratified random sampling strategy through door-to-door recruitment in three areas of both Ogale and Eteo, approaching individuals in every fifth house. 
In both communities, we obtained a 98 % response rate. Participants met the following eligibility criteria: 1) residence in the community for a minimum of one year, and 2) no prior history of residence in any other Ogoniland community associated with high levels of petroleum contamination, as reported by UNEP [3].\nThis study was conducted with approval from local authorities in Ogale and Eteo and from the Institutional Review Board at Boston University Medical Center (reference number: H-32345). Informed consent was obtained from each subject.", "Trained interviewers administered standardized questionnaires in each respondent’s home. These questionnaires were developed for this pilot study and include primarily closed-ended questions regarding demographics, smoking habits, water supply, water safety, current health symptoms and medical history. Participants were asked to report their primary water source and duration of its use for specific household activities: bathing, cooking, washing, drinking, brushing teeth, cleaning the house, and washing clothes, dishes and food. We collected information on primary source water characteristics such as odor and perceived safety. We asked individuals in Ogale who reported receiving emergency government-supplied water about the duration, frequency and sufficiency of water delivery. Participants who reported currently experiencing health issues were asked to list their symptoms; interviewers were careful not to lead participants in the open-ended responses.", "Descriptive statistics were used to compare demographic information of participants in Ogale and Eteo, and chi-square analyses were used to compare frequencies of self-reported perceptions of water odor and safety between communities. 
The relationship between exposure to contaminated water and self-reported adverse health outcomes was studied in distinct multivariate logistic regression models, which were used to obtain odds ratios and 95 % confidence intervals (CI) for self-reported symptoms in both communities. Sex, age, occupation, smoking status, and education were fitted in one multivariate logistic regression model. Due to our small sample size, only age, sex, and smoking status were included in the detailed logistic regression model to avoid over-fitting the multivariate model. These covariates were selected because of their potential to confound or modify the association of interest [27–30]. All analyses were performed using SAS statistical software version 9.3 (SAS Institute, Cary, NC).", "Participants in Ogale and Eteo were comparable in demographic characteristics (Table 1). Mean ages in Ogale and Eteo were 34.3 (SD = 11.2) and 35.5 (SD = 12.7) respectively. In our sample, there were slightly higher proportions of women (53 % in Ogale, 55 % in Eteo) than men in both communities. More than one-half of participants in Ogale and Eteo were married (60 and 71 % respectively). 
The vast majority of study participants sampled identified as Christian (98 % in Ogale, 95 % in Eteo).

Table 1. Characteristics of participants in Ogale and Eteo

Characteristic                                Ogale (n = 100)   Eteo (n = 100)
Age, mean (SD)                                34.3 ± 11.2       35.5 ± 12.7
Age category (%)
  18–20                                       3                 5
  21–40                                       80                70
  41–60                                       11                19
  > 60                                        6                 6
Sex (%)
  Female                                      53                55
  Male                                        47                45
Religion (%)
  Christian                                   98                95
  Muslim                                      2                 5
Marital status (%)
  Married - Monogamous                        60                71
  Married - Polygamous                        2                 0
  Widowed                                     3                 5
  Single                                      35                24
Smoking (%)
  Never smoker                                85                89
  Ever smoker                                 15                11
Education level (%)
  No formal education                         1                 3
  Primary school                              7                 18
  Secondary school                            53                58
  Post-secondary school                       39                21
Occupation of head of household (%)
  Farmer                                      6                 15
  Educator                                    3                 0
  Artist/Musician                             2                 0
  Tradesman                                   40                44
  Professional                                14                9
  Government/Civil Service                    22                10
  Clergy                                      2                 1
  Student                                     3                 8
  Unemployed                                  4                 9
  Retired                                     4                 4
Primary medical care location (%)
  Private clinic                              36                14
  Primary health center                       8                 41
  General hospital                            26                20
  Local chemist                               19                15
  None                                        8                 8
  Other                                       3                 2
Residents in household (median)               4                 5
Number of children(a) in household (median)   2                 3

Abbreviations: SD, standard deviation
(a) Defined as individuals under the age of 18

In both communities, more than three-quarters of participants reported never having smoked tobacco (85 % in Ogale, 89 % in Eteo) and of those who did, the overwhelming majority was male. On average, participants in Ogale had achieved higher levels of education than those in Eteo; nearly twice as many Ogale participants reported post-secondary education (39 % vs. 21 %). Head of household occupations were similar across both groups. The most commonly reported occupation for the head of household in both communities was a tradesman (40 % in Ogale, 44 % in Eteo). Both communities had comparable median numbers of total individuals (4 in Ogale, 5 in Eteo) and of children (2 in Ogale, 3 in Eteo) residing in the household. Participants in both communities had consistent access to medical services. 
The majority of participants reported visiting a health centre, general hospital or private clinic for their medical care and health services (72 and 76 % in Ogale and Eteo respectively). In addition, 19 % of participants in Ogale sought medical care from a local chemist compared to 15 % in Eteo.\nThe prevalence of emergency water use as a primary source for individual household activities in Ogale ranged between 14 and 16 % (Table 2). Emergency water was most commonly used for drinking, cooking, brushing teeth, and washing food. The majority of participants in Ogale reported continued use of the contaminated water: 66 % reported drinking, 81 % reported cooking, and 83 % reported using their borehole well water for bathing, washing food, washing dishes, washing clothes, and cleaning the house. Additional reported sources of drinking water in Ogale were sachet water, bottled water and mono-pump (14, 4 and 1 % respectively). Only 24 % of participants in Ogale reported receiving emergency water supplies. 
Although over 80 % of these individuals stated that water delivery occurs at least once per week, half of them found the volume of water delivered to be insufficient for daily needs.

Table 2. Primary sources of water for various household activities in Ogale (n = 100)(a)

Household activity   Borehole   Sachet(b)   Emergency(c)
Cooking                 81         0            16
Drinking                66        14            15
Brushing teeth          80         1            16
Bathing                 83         0            15
Washing food            83         0            16
Washing dishes          83         0            15
Washing clothes         83         0            14
Cleaning house          83         0            14

(a) Water sources with n < 5 (dugout well, monopump, bottled water and other) were excluded
(b) Sachet water is defined as water packed in plastic bags, commonly sold in outdoor markets
(c) Emergency water is defined as water supplied by the government to Ogale participants as requested by UNEP

Eteo residents are not provided with emergency water because their water supply is not known to be contaminated. Approximately 97 % of individuals in Eteo reported using their individual household borehole drinking water wells for all specified household activities, while 2 % used surface water, and 1 % used rain water. Overall, participants in Ogale reported using their borehole well water for household activities for a median of 4 years, while participants in Eteo reported a median of 5 years. The median years of exposure to emergency drinking water and to sachet water in Ogale were 1 and 2, respectively. Participants in Ogale were significantly more likely to perceive their primary water source as having an odor (39 % vs. 8 %). The main sources of water with a reported odor in Ogale were borehole well and emergency water (Table 3). 
The majority of participants in Ogale who reported a borehole well odor (69 %) stated that it smelled like petroleum fuel. Among Ogale participants reporting an emergency water odor, the most common description was chlorine (10 %). In Eteo, one participant described the borehole well water as having a fuel odor (1 %).

Table 3. Detailed descriptions of water odor in Ogale and Eteo (n (%))

                   Ogale (n = 39)                       Eteo (n = 8)
Odor description   Borehole    Sachet    Emergency   Borehole   Sachet
Any odor           32 (82.1)   1 (2.6)   6 (15.4)    6 (75.0)   2 (25.0)
Fuel               27 (69.2)   1 (2.6)   1 (2.6)     1 (12.5)   0
Chlorine           0           0         4 (10.3)    0          0
Chemical           2 (5.1)     0         1 (2.6)     0          1 (12.5)
Mold               0           0         0           1 (12.5)   0
Mud                1 (2.6)     0         0           0          0
Unpleasant         0           0         0           3 (37.5)   0
Smoke              1 (2.6)     0         0           0          0
Don’t know         1 (2.6)     0         0           1 (12.5)   1 (12.5)

When asked about water safety, 41 % of participants in Ogale reported perceiving their water source as “safe” compared to 70 % in Eteo (Fig. 1). In addition, almost one-third (32 %) of individuals in Ogale reported their water as “unsafe” compared to 9 % in Eteo. Differences in these proportions were found to be statistically significant at the p < 0.05 level using chi-square analyses.

Fig. 1. Perceptions of primary water source safety in Ogale and Eteo. Participants from both communities reported that their primary water source was safe (light gray), unsafe (medium gray), or they did not know if it was safe or not (dark gray).

Table 4 displays differences in symptom reporting across the three different primary sources of drinking water (borehole, sachet, and emergency water) in Ogale. On average, the proportions of individuals who reported experiencing current adverse health symptoms were similar among the three different sources of primary drinking water in Ogale. 
Overall, chi-square tests revealed no statistically significant differences in the proportions of health symptoms reported among individuals who used borehole, sachet, or emergency water for drinking.

Table 4. Self-reported adverse health symptoms by primary drinking water source in Ogale (a)

Health symptom     Borehole (n = 66)   Sachet (n = 14)   Emergency (n = 15)
Irritation         34 (51.5)           7 (50.0)          6 (40.0)
Neurologic         27 (40.9)           5 (35.7)          8 (53.3)
Gastrointestinal   10 (15.2)           2 (14.3)          5 (33.3)
Hematologic        21 (31.8)           5 (35.7)          3 (20.0)
General pain       6 (9.1)             0 (0.0)           2 (13.3)

Values are n (%). (a) Differences in proportions are not statistically significant at P < 0.05 using chi-square analyses.

Chi-square analyses were used to examine differences in health symptoms among participants in Ogale who reported receiving sufficient, insufficient or no emergency water supplies (Table 5). The frequencies of irritation and gastrointestinal symptoms were significantly different between the three groups. Participants in Ogale who received sufficient emergency water supplies were less likely to report having irritation (8.3 %) than those receiving insufficient or no emergency water (66.7 and 50 % respectively).
Overall, participants who received sufficient emergency water supplies reported the lowest proportions of health symptoms across the three groups.

Table 5. Self-reported adverse health symptoms by sufficiency of emergency water supply in Ogale

Health symptom      Sufficient (n = 12)   Insufficient (n = 12)   No supply (n = 76)
Irritation*         1 (8.3)               8 (66.7)                38 (50.0)
Neurologic          3 (25.0)              8 (66.7)                29 (38.2)
Gastrointestinal*   1 (8.3)               6 (50.0)                10 (13.2)
Hematologic         3 (25.0)              5 (41.7)                22 (30.0)
General pain        1 (8.3)               1 (8.3)                 6 (7.9)

Values are n (%). *Differences in proportions are statistically significant at P < 0.05 using chi-square analyses.

Table 6 displays general and specific self-reported health symptoms for all participants. After controlling for age, sex, smoking status, occupation, and education level, Ogale residents were significantly more likely to self-report any irritation (OR = 2.7; 95 % CI, 1.5–5.1), any neurological effects (OR = 2.8; 95 % CI, 1.5–5.5), and any hematologic effects (OR = 3.3; 95 % CI, 1.5–7.0). More than two-thirds (68 %) of individuals residing in Ogale reported experiencing any current health symptoms compared to 56 % of Eteo residents. Chi-square analyses revealed that Ogale residents were significantly more likely to report having a headache (36 % vs. 18 %), dizziness (10 % vs. 2 %), throat irritation (8 % vs. 1 %), skin irritation (26 % vs. 5 %), rash (9 % vs. 1 %) and anemia (18 % vs. 4 %).
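Given the small cells in Table 5 (n = 12 in two of the three groups), an exact test is a useful cross-check on the chi-square result. This is a sketch of a two-sided Fisher's exact test — not a method the study reports using — comparing irritation between the sufficient and insufficient groups:

```python
from math import comb

# Irritation by emergency-water sufficiency (Table 5), 2x2 comparison:
a, b = 1, 11   # sufficient supply:   irritation yes / no
c, d = 8, 4    # insufficient supply: irritation yes / no

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value via the hypergeometric distribution."""
    n = a + b + c + d
    r1 = a + b          # first row total
    c1 = a + c          # first column total (irritation yes)
    def prob(k):        # P(k irritation cases fall in the first row)
        return comb(c1, k) * comb(n - c1, r1 - k) / comb(n, r1)
    p_obs = prob(a)
    lo, hi = max(0, r1 - (n - c1)), min(r1, c1)
    # Sum all tables at least as extreme as the observed one
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs * (1 + 1e-9))

p_value = fisher_exact_two_sided(a, b, c, d)
print(round(p_value, 4), p_value < 0.05)  # → 0.0094 True
```

The exact p-value agrees with the chi-square finding that irritation differs significantly by sufficiency of supply.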
After controlling for age, sex, and smoking, Ogale residents were significantly more likely to report having a headache (OR = 2.7; 95 % CI, 1.4–5.4), dizziness (OR = 6.3; 95 % CI, 1.3–30.4), eye irritation (OR = 2.5; 95 % CI, 1.1–5.7), throat irritation (OR = 9.1; 95 % CI, 1.1–75.4), skin irritation (OR = 6.5; 95 % CI, 2.4–18.2), and any type of anemia (OR = 5.9; 95 % CI, 1.9–18.3).

Table 6. Detailed multivariate analysis for self-reported health symptoms in Ogale and Eteo

Health symptom     Ogale (n = 100)   Eteo (n = 100)   Crude OR (95 % CI)   Adjusted OR(a) (95 % CI)
Any symptom        68                56               1.6 (0.9, 3.0)       1.8 (1.0, 3.3)
Irritation*        47                25               2.7 (1.5, 4.8)       2.7 (1.5, 5.1)
  Eye*             21                11               2.2 (0.9, 4.7)       2.5 (1.1, 5.7)
  Throat*          8                 1                8.6 (1.1, 70.2)      9.1 (1.1, 75.4)
  Skin*            26                5                6.7 (2.5, 18.2)      6.6 (2.4, 18.2)
  Rash             9                 1                9.8 (1.2, 78.8)      NA
  Runny nose       14                10               1.5 (0.6, 3.5)       1.5 (0.6, 3.5)
  Cough            14                7                2.2 (0.8, 5.6)       2.5 (0.9, 6.5)
Gastrointestinal   17                9                2.1 (0.9, 4.9)       2.0 (0.8, 5.0)
  Stomach pain     15                6                2.8 (1.0, 7.4)       2.7 (1.0, 7.2)
  Diarrhea         2                 4                0.5 (0.1, 2.8)       NA
Neurologic*        40                21               2.5 (1.3, 4.7)       2.8 (1.5, 5.5)
  Headache*        36                18               2.6 (1.3, 4.9)       2.7 (1.4, 5.4)
  Sleepiness       7                 3                2.4 (0.6, 9.7)       NA
  Dizziness*       10                2                5.4 (1.2, 25.5)      6.3 (1.3, 30.4)
Hematologic*       30                13               2.9 (1.4, 6.0)       3.3 (1.5, 7.0)
  Anemia*          18                4                5.3 (1.7, 16.2)      5.9 (1.9, 18.3)
  Malaria          14                9                1.7 (0.7, 4.0)       1.6 (0.7, 4.0)
General pain       8                 11               0.7 (0.3, 1.8)       0.8 (0.3, 2.0)

Abbreviations: OR Odds Ratio; NA Not Applicable, odds ratio was not adjusted due to low event frequency in that category.
(a) General health symptom categories (top-level rows) were adjusted for age, sex, smoking status, occupation, and education level.
Due to small sample sizes, specific symptoms were adjusted for age, sex and smoking status only. *Adjusted odds ratios are statistically significant at P < 0.05.", "This study is one of few to examine general population exposure to highly elevated concentrations of refined oil in drinking water. Prior research has focused mainly on crude oil exposures in occupational cohorts. Only one previous study [26] examined the relationship between petroleum contamination and adverse health outcomes in the Niger Delta region. Participants in Ogale were more likely to report symptoms indicative of central nervous system toxicity, including headaches and dizziness, consistent with the literature on occupational exposures to crude oil spills and on occupational exposures to VOCs [5, 6, 18]. Unlike the previous Ogoniland study and several studies of occupational exposures [17, 18, 25], Ogale residents did not report a greater prevalence of respiratory symptoms. Participants in Ogale were more likely to report throat irritation, skin irritation, and rashes; these symptoms are consistent with exposure to high concentrations of PAHs and VOCs found in oil [5, 6, 12]. In addition, a significantly higher proportion of participants in Ogale reported a diagnosis of anemia, as might be expected from exposures to benzene and naphthalene [9, 10, 12, 16], although residents did not report the specific type of anemia with which they were diagnosed.
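The crude odds ratios in Table 6 follow directly from the reported counts. A minimal sketch, assuming the standard Wald interval for the log odds ratio, reproduces the crude OR for any irritation (the study's own estimates came from SAS logistic regression models):

```python
import math

# Any irritation (Table 6): 47/100 in Ogale vs. 25/100 in Eteo
a, b = 47, 53   # Ogale: symptom yes / no
c, d = 25, 75   # Eteo:  symptom yes / no

or_ = (a * d) / (b * c)                      # crude odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)     # Wald 95% CI lower bound
hi = math.exp(math.log(or_) + 1.96 * se)     # Wald 95% CI upper bound
print(f"OR = {or_:.1f} (95% CI {lo:.1f}, {hi:.1f})")  # → OR = 2.7 (95% CI 1.5, 4.8)
```

The result matches the crude estimate of 2.7 (1.5, 4.8) reported in Table 6.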
More than 80 % of Ogale participants report that they still use untreated water on a daily basis.\nResponses to the contamination in Ogale have focused on the delivery of clean drinking water supplies, rather than on remediation of the water supply. However, these efforts have not proved to be efficacious: the frequency of emergency water delivery in the participants sampled ranged from daily to infrequently, suggesting instability in emergency water delivery. Only one quarter of the participants in Ogale reported receiving emergency water and, of these, only half found the emergency drinking water supply to be sufficient for their daily needs. On average, only 15 % reported using emergency water as a primary source for their daily household activities. There were no significant differences in self-reported symptoms between Ogale participants who reported their primary drinking water source as borehole water, sachet water or emergency water. This result might be due to a number of factors, such as: (1) Ogale participants who receive inadequate emergency water supplies may still be exposed to contaminated borehole water. Even if the emergency supply is adequate for drinking and cooking, residents of Ogale might be exposed to petroleum hydrocarbons via inhalation and dermal routes. Inhalation and dermal exposures may occur through use of contaminated water for household activities such as bathing and cleaning; (2) Emergency water may not have been in use long enough to alleviate symptoms that might be associated with drinking borehole water; (3) Sachet water might also be contaminated. 
Prior studies on sachet water quality in Nigeria have found numerous chemical and bacterial contaminants, as well as widespread improper storage and handling practices [31]; or (4) It is also possible that not all borehole water in Ogale is contaminated; UNEP did not precisely define the limits of contamination.\nThe present study is limited by a relatively small sample size; however, participants in both communities were selected via a random sampling technique to increase study generalizability. In addition, our high response rates – 98 % in both communities – indicate that sampling bias is unlikely. Because our adjustment for confounders was limited by our small sample size, the possibility of residual confounding remains. Our cross-sectional design makes it difficult to infer causation for the association between petroleum contamination and adverse health effects.\nOur results may be a consequence of our small sample size. They may also have been affected by misclassification of exposures and outcomes. State-of-the-science biomarker evaluation of exposures to petroleum was not feasible for this pilot work due to infrastructure and security constraints. Participants who reported use of emergency water may also be drinking borehole water at work or school. We were unable to measure petroleum hydrocarbon concentrations directly in the households surveyed; instead, our dichotomous classification of exposure was based upon UNEP’s analytical data indicating the location of contaminated drinking water. It is also possible that Ogale and Eteo differ with respect to non-borehole water sources of petroleum hydrocarbons, although no such differences are evident.\nSelf-report bias is a limitation of our outcome classification, as participants in Ogale were aware of the water supply contamination. This may have resulted in an overestimation of the observed association. 
We attempted to minimize self-report bias by masking the study objectives and hypothesis from participants. To avoid prompting, participants who reported experiencing any current health symptoms were asked to describe the health symptoms to the interviewers. This method was used to decrease the probability of information bias.", "In this cross-sectional pilot study, the first carried out in response to the UNEP recommendations, we observed statistically significant associations between exposure to petroleum-contaminated drinking water and self-reported symptoms consistent with exposure to petroleum hydrocarbons. These results reinforce UNEP’s recommendations for establishment of a health registry, medical surveillance, and a prospective cohort study for the Ogale community [3]. Future studies should define the full extent of contaminated household water and incorporate more detailed methods of exposure and outcome assessment for exposed populations, including its most susceptible members." ]
[ "introduction", "materials|methods", null, null, null, "results", "discussion", "conclusion" ]
[ "Petroleum hydrocarbons", "Drinking water", "Contamination", "Public health", "Refined oil", "Adverse health effects" ]
Background: While dramatic events such as the 1989 Exxon Valdez and 2010 Deepwater Horizon oil spills have received substantial remediation responses, the long-term and vast contamination of the Niger Delta region remains largely un-remediated [1]. Extraction, processing, and transport of crude oil dating back to the 1950s have had a devastating impact on Ogoniland, a territory in the Southern region of Nigeria. The United Nations Development Programme estimates that 6178 oil spills occurred in Ogoniland between 1976 and 2011, resulting in discharges of approximately three million barrels of oil [2]. Commissioned by the Federal Government of Nigeria, the United Nations Environmental Programme (UNEP) conducted an environmental assessment of Ogoniland and determined that the widespread oil contamination presents serious threats to human health [3]. Residents are exposed to petroleum hydrocarbons that are released to the environment by burning, spilling, and leaking. Exposure can occur via inhalation of hydrocarbons in ambient air and via consumption of and dermal contact with hydrocarbons in water, soil and sediment [1, 3]. This study focuses on the community of Ogale, located in the Eleme local government area of Ogoniland, where UNEP discovered substantial leakage from an abandoned section of a pipeline carrying refined oil. UNEP testing revealed approximately three inches of refined oil floating on the groundwater that supplies the community’s drinking water [3]. UNEP detected numerous petroleum hydrocarbons in water from individual borehole drinking water wells, notably benzene at concentrations as high as 9280 micrograms per liter, which is approximately 1800 times higher than the United States Environmental Protection Agency (US EPA) drinking water standard and over 900 times higher than the World Health Organization (WHO) drinking water guideline [3, 4]. 
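The stated multiples can be verified arithmetically, assuming the US EPA maximum contaminant level for benzene of 5 µg/L and the WHO drinking-water guideline value of 10 µg/L:

```python
benzene = 9280       # µg/L, highest benzene concentration UNEP detected in Ogale boreholes
epa_mcl = 5          # µg/L, US EPA maximum contaminant level for benzene
who_guideline = 10   # µg/L, WHO drinking-water guideline value for benzene

print(benzene / epa_mcl, benzene / who_guideline)  # → 1856.0 928.0
```

These ratios are consistent with the text's "approximately 1800 times" and "over 900 times" figures.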
Petroleum products are a complex mixture of hydrocarbons, consisting of both aromatic and long- and short-chain aliphatic hydrocarbons. Components of crude and refined petroleum, namely volatile organic compounds (VOCs), such as benzene, toluene and xylenes, and polycyclic aromatic hydrocarbons (PAHs), have independently been associated with adverse human health effects. Acute exposures to high concentrations of VOCs cause central nervous system toxicity, resulting in symptoms such as headaches, fatigue and dizziness [5, 6]. Chronic exposure to VOCs can impair the immune system via oxidative stress and decreases in white blood cell count [7, 8]. Benzene in particular is strongly associated with disorders of the hematopoietic system such as aplastic anemia [9, 10, 11]. Benzene is also classified as a known human carcinogen based on occupational studies in humans [4]. Polycyclic aromatic hydrocarbons cause symptoms such as nausea, vomiting and skin and eye irritation following acute, high-level exposures [12, 13]. Exposures to PAHs during pregnancy have been linked to decreased birth weight and impaired development in offspring [14]. Chronic occupational exposures are associated with dose-dependent increased risks of certain types of cancers, including lung, skin and bladder cancer [15]. Naphthalene, a low molecular weight PAH that was detected in Ogale water samples, can adversely affect the hematopoietic system, damaging and killing red blood cells, causing symptoms such as shortness of breath and fatigue [12, 16]. Alkylated PAHs comprise the majority of PAHs detected in petroleum products and are particularly persistent. Although the health effects of alkylated PAHs have not been well studied, limited evidence suggests that they may be more toxic and carcinogenic than their parent PAH compounds [17]. 
Although UNEP did not complete a detailed chemical characterization of the refined oil in Ogale wells, studies on petroleum exposures may provide some indication of adverse health effects that could occur in the community. Prior research has primarily focused on high-dose, short-term occupational exposures to crude oil, in particular those occurring during remediation of oil spills. Workers exposed to petroleum hydrocarbons have reported adverse health symptoms such as headaches, eye and skin irritation and respiratory difficulties [18, 19]. A recent cross-sectional study found that blood samples of oil spill workers showed alterations consistent with impairment of the hepatic and hematopoietic systems [20]. Research on the Prestige oil spill has provided preliminary evidence of exposure-dependent DNA damage in cleanup volunteers [21]. The ongoing NIEHS Gulf Long Term Follow Up (GuLF) Study on Deepwater Horizon spill workers will be the first to investigate long-term physical health effects using a prospective cohort design [22, 23]. Few studies have examined adverse effects associated with chronic exposure to elevated concentrations of refined oil products in the general population. Increases in depression and stress, stemming from perceived health risks and financial concerns, have been observed in communities subjected to chronic oil spill exposures [24]. One study found increases in cancer incidence and mortality in communities near the Amazon basin oil fields in Ecuador [25]. A thorough search of the scientific literature revealed only one health study conducted in the Niger Delta region, which reported higher rates of respiratory and skin disorders in Eleme compared to a less-industrialized Nigerian community [26]. 
However, this study does not include a description of the sampling design and locations, among other study weaknesses, and is therefore not suitable for reaching conclusions regarding any association between petroleum exposure and adverse health outcomes. Further research on chronic, high-magnitude exposures in individuals living in proximity to oil sites is needed to improve understanding of how oil spills might affect human health. UNEP made several recommendations in the Ogoniland environmental assessment, including provision of alternative drinking water supplies to Ogale, remediation of soil and groundwater contamination in the area, medical surveillance, and monitoring for potential adverse health effects through the implementation of a prospective cohort epidemiological study [3]. The objectives of this pilot study are to (i) follow up on UNEP recommendations by investigating health symptoms associated with exposure to contaminated water; and (ii) assess the adequacy and utilization of an emergency supply of potable water provided by the government. We compared the prevalence of self-reported health symptoms in Ogale and in a reference community, Eteo. The results of this pilot study will be helpful for designing the prospective cohort study in Ogale recommended by UNEP. Methods: Study population The community of Ogale was selected for study based on the UNEP environmental assessment, which identified Ogale as having the most serious groundwater contamination observed in Ogoniland [3]. The community of Eteo was chosen to serve as a reference group because it is near Ogale (approximately 10 miles away), it is part of the same local government area of Eleme, and people living in Eteo and Ogale are comparable with respect to race, language, culture, and behavioral practices. The UNEP environmental assessment did not report any petroleum contamination in Eteo. 
A total of 200 adults over the age of 18 were enrolled in this pilot study (100 participants from each community). We employed a stratified random sampling strategy through door-to-door recruitment in three areas of both Ogale and Eteo, approaching individuals in every fifth house. In both communities, we obtained a 98 % response rate. Participants met the following eligibility criteria: 1) residence in the community for a minimum of one year, and 2) no prior history of residence in any other Ogoniland community associated with high levels of petroleum contamination, as reported by UNEP [3]. This study was conducted with approval from local authorities in Ogale and Eteo and from the Institutional Review Board at Boston University Medical Center (reference number: H-32345). Informed consent was obtained from each subject.
Outcome ascertainment Trained interviewers administered standardized questionnaires in each respondent’s home. These questionnaires were developed for this pilot study and include primarily closed-ended questions regarding demographics, smoking habits, water supply, water safety, current health symptoms and medical history. Participants were asked to report their primary water source and duration of its use for specific household activities: bathing, cooking, washing, drinking, brushing teeth, cleaning the house, and washing clothes, dishes and food. We collected information on primary source water characteristics such as odor and perceived safety. We asked individuals in Ogale who reported receiving emergency government-supplied water about the duration, frequency and sufficiency of water delivery. Participants who reported currently experiencing health issues were asked to list their symptoms; interviewers were careful not to lead participants in the open-ended responses.
Statistical analysis Descriptive statistics were used to compare demographic information of participants in Ogale and Eteo, and chi-square analyses were used to compare frequencies of self-reported perceptions of water odor and safety between communities. The relationship between exposure to contaminated water and self-reported adverse health outcomes was studied in distinct multivariate logistic regression models, which were used to obtain odds ratios and 95 % confidence intervals (CI) for self-reported symptoms in both communities. Sex, age, occupation, smoking status, and education were fitted in one multivariate logistic regression model. Due to our small sample size, only age, sex, and smoking status were included in the detailed logistic regression model to avoid over-fitting the multivariate model. These covariates were selected because of their potential to confound or modify the association of interest [27–30]. All analyses were performed using SAS statistical software version 9.3 (SAS Institute, Cary, NC).
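As an illustration of the unadjusted case of the logistic regression models described above, a pure-Python Newton-Raphson fit (a sketch; the study itself used SAS 9.3) recovers the crude odds ratio for irritation as the exponentiated exposure coefficient:

```python
import math

# Aggregated irritation counts from Table 6: (exposure indicator, cases, total)
groups = [(1, 47, 100),   # Ogale (exposed community)
          (0, 25, 100)]   # Eteo (reference community)

# Expand to individual records: x = community indicator, y = symptom yes/no
X, y = [], []
for x_val, cases, total in groups:
    X += [x_val] * total
    y += [1] * cases + [0] * (total - cases)

def fit_logistic(X, y, iters=25):
    """Newton-Raphson MLE for logit(p) = b0 + b1*x (two-parameter model)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1 - p)
            g0 += yi - p            # gradient w.r.t. intercept
            g1 += (yi - p) * xi     # gradient w.r.t. slope
            h00 += w                # observed information matrix entries
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_logistic(X, y)
print(round(math.exp(b1), 2))  # → 2.66, the crude OR for any irritation
```

With no covariates beyond the community indicator, the fitted coefficient exponentiates exactly to the 2×2-table odds ratio; the study's adjusted models add age, sex, and smoking (and, for the general symptom categories, occupation and education) as further terms.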
Results: Participants in Ogale and Eteo were comparable in demographic characteristics (Table 1). Mean ages in Ogale and Eteo were 34.3 (SD = 11.2) and 35.5 (SD = 12.7) respectively. In our sample, there were slightly higher proportions of women (53 % in Ogale, 55 % in Eteo) than men in both communities. More than one-half of participants in Ogale and Eteo were married (60 and 71 % respectively). The vast majority of study participants sampled identified as Christian (98 % in Ogale, 95 % in Eteo).

Table 1. Characteristics of participants in Ogale and Eteo

Characteristic                                Ogale (n = 100)   Eteo (n = 100)
Age, Mean (SD)                                34.3 ± 11.2       35.5 ± 12.7
Age category
  18–20                                       3                 5
  21–40                                       80                70
  41–60                                       11                19
  > 60                                        6                 6
Sex (%)
  Female                                      53                55
  Male                                        47                45
Religion (%)
  Christian                                   98                95
  Muslim                                      2                 5
Marital status (%)
  Married - Monogamous                        60                71
  Married - Polygamous                        2                 0
  Widowed                                     3                 5
  Single                                      35                24
Smoking (%)
  Never smoker                                85                89
  Ever smoker                                 15                11
Education level (%)
  No formal education                         1                 3
  Primary school                              7                 18
  Secondary school                            53                85
  Post-Secondary school                       39                21
Occupation of head of household (%)
  Farmer                                      6                 15
  Educator                                    3                 0
  Artist/Musician                             2                 0
  Tradesman                                   40                44
  Professional                                14                9
  Government/Civil Service                    22                10
  Clergy                                      2                 1
  Student                                     3                 8
  Unemployed                                  4                 9
  Retired                                     4                 4
Primary medical care location (%)
  Private clinic                              36                14
  Primary health center                       8                 41
  General hospital                            26                20
  Local chemist                               19                15
  None                                        8                 8
  Other                                       3                 2
Residents in household (median)               4                 5
Number of children(a) in household (median)   2                 3

Abbreviations: SD Standard Deviation. (a) Defined as individuals under the age of 18.

In both communities, more than
three-quarters of participants reported never having smoked tobacco (85 % in Ogale, 89 % in Eteo) and of those who did, the overwhelming majority was male. On average, participants in Ogale had achieved higher levels of education than those in Eteo; nearly twice as many Ogale participants reported post-secondary education (39 % vs. 21 %). Head of household occupations were similar across both groups. The most commonly reported occupation for the head of household in both communities was a tradesman (40 % in Ogale, 44 % in Eteo). Both communities had comparable median numbers of total individuals (4 in Ogale, 5 in Eteo) and of children (2 in Ogale, 3 in Eteo) residing in the household. Participants in both communities had consistent access to medical services. The majority of participants reported visiting a health centre, general hospital or private clinic for their medical care and health services (72 and 76 % in Ogale and Eteo respectively). In addition, 19 % of participants in Ogale sought medical care from a local chemist compared to 15 % in Eteo. The prevalence of emergency water use as a primary source for individual household activities in Ogale ranged between 14 and 16 % (Table 2). Emergency water was most commonly used for drinking, cooking, brushing teeth, and washing food. The majority of participants in Ogale reported continued use of the contaminated water: 66 % reported drinking, 81 % reported cooking, and 83 % reported using their borehole well water for bathing, washing food, washing dishes, washing clothes, and cleaning the house. Additional reported sources of drinking water in Ogale were sachet water, bottled water and mono-pump (14, 4 and 1 % respectively). Only 24 % of participants in Ogale reported receiving emergency water supplies. 
Although over 80 % of these individuals stated that water delivery occurs at least once per week, half of them found the volume of water delivered to be insufficient for daily needs.

Table 2. Primary sources of water for various household activities in Ogale (n = 100)(a)

Household activity   Borehole   Sachet(b)   Emergency(c)
Cooking              81         0           16
Drinking             66         14          15
Brushing teeth       80         1           16
Bathing              83         0           15
Washing food         83         0           16
Washing dishes       83         0           15
Washing clothes      83         0           14
Cleaning house       83         0           14

Values are numbers of participants (n). (a) Water sources with n < 5 (dugout well, monopump, bottled water and other) were excluded. (b) Sachet water is defined as water packed in plastic bags, commonly sold in outdoor markets. (c) Emergency water is defined as water supplied by the government to Ogale participants as requested by UNEP.

Eteo residents are not provided with emergency water because their water supply is not known to be contaminated. Approximately 97 % of individuals in Eteo reported using their individual household borehole drinking water wells for all specified household activities, while 2 % used surface water, and 1 % used rain water. Overall, participants in Ogale reported using their borehole well water for household activities for a median of 4 years, while participants in Eteo reported a median of 5 years. The median years of exposure to emergency drinking water and to sachet water in Ogale were 1 and 2, respectively. Participants in Ogale were significantly more likely to perceive their primary water source as having an odor (39 % vs. 8 %). The main sources of water with a reported odor in Ogale were borehole well and emergency water (Table 3).
The majority of participants in Ogale who reported a borehole well odor (69 %) stated that it smelled like petroleum fuel. Among Ogale participants reporting an emergency water odor, the most common description was chlorine (10 %). In Eteo, one participant described the borehole well water as having a fuel odor (1 %).

Table 3 Detailed descriptions of water odor in Ogale and Eteo, by water location and source (n (%))

                   Ogale (n = 39)                        Eteo (n = 8)
Odor description   Borehole    Sachet    Emergency       Borehole   Sachet
Any odor           32 (82.1)   1 (2.6)   6 (15.4)        6 (75.0)   2 (25.0)
Fuel               27 (69.2)   1 (2.6)   1 (2.6)         1 (12.5)   0
Chlorine           0           0         4 (10.3)        0          0
Chemical           2 (5.1)     0         1 (2.6)         0          1 (12.5)
Mold               0           0         0               1 (12.5)   0
Mud                1 (2.6)     0         0               0          0
Unpleasant         0           0         0               3 (37.5)   0
Smoke              1 (2.6)     0         0               0          0
Don’t know         1 (2.6)     0         0               1 (12.5)   1 (12.5)

When asked about water safety, 41 % of participants in Ogale reported perceiving their water source as “safe” compared to 70 % in Eteo (Fig. 1). In addition, almost one-third (32 %) of individuals in Ogale reported their water as “unsafe” compared to 9 % in Eteo. Differences in these proportions were found to be statistically significant at the p < 0.05 level using chi-square analyses.

Fig. 1 Perceptions of primary water source safety in Ogale and Eteo. Participants from both communities reported that their primary water source was safe (light gray), unsafe (medium gray), or they did not know if it was safe or not (dark gray)

Table 4 displays differences in symptom reporting across the three different primary sources of drinking water (borehole, sachet, and emergency water) in Ogale. On average, the proportions of individuals who reported experiencing current adverse health symptoms were similar among the three different sources of primary drinking water in Ogale.
Overall, chi-square tests revealed no statistically significant differences in the proportions of health symptoms reported among individuals who used borehole, sachet, or emergency water for drinking.

Table 4 Self-reported adverse health symptoms by primary drinking water source in Ogale (n (%))a

Health symptom     Borehole (n = 66)   Sachet (n = 14)   Emergency (n = 15)
Irritation         34 (51.5)           7 (50.0)          6 (40.0)
Neurologic         27 (40.9)           5 (35.7)          8 (53.3)
Gastrointestinal   10 (15.2)           2 (14.3)          5 (33.3)
Hematologic        21 (31.8)           5 (35.7)          3 (20.0)
General pain       6 (9.1)             0 (0.0)           2 (13.3)

aDifferences in proportions are not statistically significant at P < 0.05 using chi-square analyses

Chi-square analyses were used to examine differences in health symptoms among participants in Ogale who reported receiving sufficient, insufficient or no emergency water supplies (Table 5). The frequencies of irritation and gastrointestinal symptoms were significantly different between the three groups. Participants in Ogale who received sufficient emergency water supplies were less likely to report irritation (8.3 %) than those receiving insufficient or no emergency water (66.7 and 50 % respectively).
Overall, participants who received sufficient emergency water supplies reported the lowest proportions of health symptoms across the three groups.

Table 5 Self-reported adverse health symptoms by sufficiency of emergency water supply in Ogale (n (%))

Health symptom      Sufficient (n = 12)   Insufficient (n = 12)   No supply (n = 76)
Irritation*         1 (8.3)               8 (66.7)                38 (50.0)
Neurologic          3 (25.0)              8 (66.7)                29 (38.2)
Gastrointestinal*   1 (8.3)               6 (50.0)                10 (13.2)
Hematologic         3 (25.0)              5 (41.7)                22 (30.0)
General pain        1 (8.3)               1 (8.3)                 6 (7.9)

*Differences in proportions are statistically significant at P < 0.05 using chi-square analyses

Table 6 displays general and specific self-reported health symptoms for all participants. After controlling for age, sex, smoking status, occupation, and education level, Ogale residents were significantly more likely to self-report any irritation (OR = 2.7; 95 % CI, 1.5–5.1), any neurological effects (OR = 2.8; 95 % CI, 1.5–5.5), and any hematologic effects (OR = 3.3; 95 % CI, 1.5–7.0). More than two-thirds (68 %) of individuals residing in Ogale reported experiencing any current health symptoms compared to 56 % of Eteo residents. Chi-square analyses revealed that Ogale residents were significantly more likely to report having a headache (36 % vs. 18 %), dizziness (10 % vs. 2 %), throat irritation (8 % vs. 1 %), skin irritation (26 % vs. 5 %), rash (9 % vs. 1 %) and anemia (18 % vs. 4 %).
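Differences in proportions like those above (e.g., headache: 36/100 in Ogale vs. 18/100 in Eteo) can be checked with a chi-square test on a 2 × 2 table. The sketch below is illustrative only: it uses the reported headache counts and computes the standard Yates-corrected statistic by hand, and is not the authors' actual analysis code.

```python
import math

# Reported headache counts: 36/100 in Ogale vs. 18/100 in Eteo.
table = [[36, 64],   # Ogale: headache, no headache
         [18, 82]]   # Eteo:  headache, no headache

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

# Yates-corrected chi-square statistic for a 2 x 2 table
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (abs(table[i][j] - expected) - 0.5) ** 2 / expected

# Critical value for 1 degree of freedom at alpha = 0.05 is 3.841
significant = chi2 > 3.841
print(round(chi2, 2), significant)  # → 7.33 True
```

The statistic exceeds the 0.05 critical value (p ≈ 0.007), consistent with the significant headache difference reported above.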
After controlling for age, sex, and smoking, Ogale residents were significantly more likely to report having a headache (OR = 2.7; 95 % CI, 1.4–5.4), dizziness (OR = 6.3; 95 % CI, 1.3–30.4), eye irritation (OR = 2.5; 95 % CI, 1.1–5.7), throat irritation (OR = 9.1; 95 % CI, 1.1–75.4), skin irritation (OR = 6.5; 95 % CI, 2.4–18.2), and any type of anemia (OR = 5.9; 95 % CI, 1.9–18.3).

Table 6 Detailed multivariate analysis for self-reported health symptoms in Ogale and Eteo

Health symptom       Ogale (n = 100)   Eteo (n = 100)   Crude OR (95 % CI)   Adjusted ORa (95 % CI)
Any symptom          68                56               1.6 (0.9, 3.0)       1.8 (1.0, 3.3)
Irritation*          47                25               2.7 (1.5, 4.8)       2.7 (1.5, 5.1)
  Eye*               21                11               2.2 (0.9, 4.7)       2.5 (1.1, 5.7)
  Throat*            8                 1                8.6 (1.1, 70.2)      9.1 (1.1, 75.4)
  Skin*              26                5                6.7 (2.5, 18.2)      6.6 (2.4, 18.2)
  Rash               9                 1                9.8 (1.2, 78.8)      NA
  Runny nose         14                10               1.5 (0.6, 3.5)       1.5 (0.6, 3.5)
  Cough              14                7                2.2 (0.8, 5.6)       2.5 (0.9, 6.5)
Gastrointestinal     17                9                2.1 (0.9, 4.9)       2.0 (0.8, 5.0)
  Stomach pain       15                6                2.8 (1.0, 7.4)       2.7 (1.0, 7.2)
  Diarrhea           2                 4                0.5 (0.1, 2.8)       NA
Neurologic*          40                21               2.5 (1.3, 4.7)       2.8 (1.5, 5.5)
  Headache*          36                18               2.6 (1.3, 4.9)       2.7 (1.4, 5.4)
  Sleepiness         7                 3                2.4 (0.6, 9.7)       NA
  Dizziness*         10                2                5.4 (1.2, 25.5)      6.3 (1.3, 30.4)
Hematologic*         30                13               2.9 (1.4, 6.0)       3.3 (1.5, 7.0)
  Anemia*            18                4                5.3 (1.7, 16.2)      5.9 (1.9, 18.3)
  Malaria            14                9                1.7 (0.7, 4.0)       1.6 (0.7, 4.0)
General pain         8                 11               0.7 (0.3, 1.8)       0.8 (0.3, 2.0)

Abbreviations: OR Odds Ratio; NA Not Applicable (odds ratio was not adjusted due to low event frequency in that category)
aGeneral health symptom categories, displayed in boldface, were adjusted for age, sex, smoking status, occupation, and education level. Due to small sample sizes, specific symptoms were adjusted for age, sex and smoking status only
*Adjusted odds ratios are statistically significant at P < 0.05

Discussion: This study is one of few to examine general population exposure to highly elevated concentrations of refined oil in drinking water. Prior research has focused mainly on crude oil exposures in occupational cohorts. Only one previous study [26] examined the relationship between petroleum contamination and adverse health outcomes in the Niger Delta region. Participants in Ogale were more likely to report symptoms indicative of central nervous system toxicity, including headaches and dizziness, consistent with the literature on occupational exposures to crude oil spills and on occupational exposures to VOCs [5, 6, 18]. Unlike the previous Ogoniland study and several studies of occupational exposures [17, 18, 25], Ogale residents did not report a greater prevalence of respiratory symptoms. Participants in Ogale were more likely to report throat irritation, skin irritation, and rashes; these symptoms are consistent with exposure to high concentrations of PAHs and VOCs found in oil [5, 6, 12]. In addition, a significantly higher proportion of participants in Ogale reported a diagnosis of anemia, as might be expected from exposures to benzene and naphthalene [9, 10, 12, 16], although residents did not report the specific type of anemia with which they were diagnosed.
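The crude odds ratios in Table 6 follow directly from the reported counts. The sketch below is a hand-rolled illustration for the headache row (36/100 in Ogale vs. 18/100 in Eteo) with a standard Wald 95 % interval on the log-odds scale; the adjusted ORs in the paper additionally control for covariates via regression modeling and are not reproduced here.

```python
import math

# Headache counts from Table 6: 36/100 in Ogale (exposed), 18/100 in Eteo.
a, b = 36, 64   # Ogale: cases, non-cases
c, d = 18, 82   # Eteo:  cases, non-cases

# Crude odds ratio from the 2 x 2 table
odds_ratio = (a * d) / (b * c)

# Wald 95 % confidence interval on the log-odds scale
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)
print(round(odds_ratio, 1), round(lower, 1), round(upper, 1))  # → 2.6 1.3 4.9
```

This reproduces the published crude OR of 2.6 (95 % CI, 1.3–4.9) for headache.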
More than 80 % of Ogale participants reported that they still use untreated water on a daily basis. Responses to the contamination in Ogale have focused on the delivery of clean drinking water supplies, rather than on remediation of the water supply. However, these efforts have not proved efficacious: the frequency of emergency water delivery in the participants sampled ranged from daily to infrequent, suggesting instability in emergency water delivery. Only one quarter of the participants in Ogale reported receiving emergency water and, of these, only half found the emergency drinking water supply to be sufficient for their daily needs. On average, only 15 % reported using emergency water as a primary source for their daily household activities. There were no significant differences in self-reported symptoms between Ogale participants who reported their primary drinking water source as borehole water, sachet water or emergency water. This result might be due to a number of factors: (1) Ogale participants who receive inadequate emergency water supplies may still be exposed to contaminated borehole water; even if the emergency supply is adequate for drinking and cooking, residents of Ogale might be exposed to petroleum hydrocarbons via inhalation and dermal routes through use of contaminated water for household activities such as bathing and cleaning; (2) emergency water may not have been in use long enough to alleviate symptoms that might be associated with drinking borehole water; (3) sachet water might also be contaminated, as prior studies on sachet water quality in Nigeria have found numerous chemical and bacterial contaminants, as well as widespread improper storage and handling practices [31]; or (4) not all borehole water in Ogale may be contaminated, since UNEP did not precisely define the limits of contamination.
The present study is limited by a relatively small sample size; however, participants in both communities were selected via a random sampling technique to increase study generalizability. In addition, our high response rates – 98 % in both communities – indicate that sampling bias is unlikely. Because our adjustment for confounders was limited by our small sample size, the possibility of residual confounding remains. Our cross-sectional design makes it difficult to infer causation for the association between petroleum contamination and adverse health effects. Our results may be a consequence of our small sample size. They may also have been affected by misclassification of exposures and outcomes. State-of-the-science biomarker evaluation of exposures to petroleum was not feasible for this pilot work due to infrastructure and security constraints. Participants who reported use of emergency water may also be drinking borehole water at work or school. We were unable to measure petroleum hydrocarbon concentrations directly in the households surveyed; instead, our dichotomous classification of exposure was based upon UNEP’s analytical data indicating the location of contaminated drinking water. It is also possible that Ogale and Eteo differ with respect to non-borehole water sources of petroleum hydrocarbons, although no such differences are evident. Self-report bias is a limitation of our outcome classification, as participants in Ogale were aware of the water supply contamination. This may have resulted in an overestimation of the observed association. We attempted to minimize self-report bias by masking the study objectives and hypothesis from participants. To avoid prompting, participants who reported experiencing any current health symptoms were asked to describe the health symptoms to the interviewers. This method was used to decrease the probability of information bias. 
Conclusions: In this cross-sectional pilot study, the first carried out in response to the UNEP recommendations, we observed statistically significant associations between exposure to petroleum-contaminated drinking water and self-reported symptoms consistent with exposure to petroleum hydrocarbons. These results reinforce UNEP’s recommendations for establishment of a health registry, medical surveillance, and a prospective cohort study for the Ogale community [3]. Future studies should define the full extent of contaminated household water and incorporate more detailed methods of exposure and outcome assessment for exposed populations, including its most susceptible members.
Background: The oil-rich Niger Delta suffers from extensive petroleum contamination. A pilot study was conducted in the region of Ogoniland where one community, Ogale, has drinking water wells highly contaminated with a refined oil product. In a 2011 study, the United Nations Environment Programme (UNEP) sampled Ogale drinking water wells and detected numerous petroleum hydrocarbons, including benzene at concentrations as much as 1800 times higher than the USEPA drinking water standard. UNEP recommended immediate provision of clean drinking water, medical surveillance, and a prospective cohort study. Although the Nigerian government has provided emergency drinking water, other UNEP recommendations have not been implemented. We aimed to (i) follow up on UNEP recommendations by investigating health symptoms associated with exposure to contaminated water; and (ii) assess the adequacy and utilization of the government-supplied emergency drinking water. Methods: We recruited 200 participants from Ogale and a reference community, Eteo, and administered questionnaires to investigate water use, perceived water safety, and self-reported health symptoms. Results: Our multivariate regression analyses show statistically significant associations between exposure to Ogale drinking water and self-reported health symptoms consistent with petroleum exposure. Participants in Ogale more frequently reported health symptoms related to neurological effects (OR = 2.8), hematological effects (OR = 3.3), and irritation (OR = 2.7). Conclusions: Our results are the first from a community relying on drinking water with such extremely high concentrations of benzene and other hydrocarbons. The ongoing exposure and these pilot study results highlight the need for more refined investigation as recommended by UNEP.
Background: While dramatic events such as the 1989 Exxon Valdez and 2010 Deepwater Horizon oil spills have received substantial remediation responses, the long-term and vast contamination of the Niger Delta region remains largely un-remediated [1]. Extraction, processing, and transport of crude oil dating back to the 1950s have had a devastating impact on Ogoniland, a territory in the Southern region of Nigeria. The United Nations Development Programme estimates that 6178 oil spills occurred in Ogoniland between 1976 and 2011, resulting in discharges of approximately three million barrels of oil [2]. Commissioned by the Federal Government of Nigeria, the United Nations Environmental Programme (UNEP) conducted an environmental assessment of Ogoniland and determined that the widespread oil contamination presents serious threats to human health [3]. Residents are exposed to petroleum hydrocarbons that are released to the environment by burning, spilling, and leaking. Exposure can occur via inhalation of hydrocarbons in ambient air and via consumption of and dermal contact with hydrocarbons in water, soil and sediment [1, 3]. This study focuses on the community of Ogale, located in the Eleme local government area of Ogoniland, where UNEP discovered substantial leakage from an abandoned section of a pipeline carrying refined oil. UNEP testing revealed approximately three inches of refined oil floating on the groundwater that supplies the community’s drinking water [3]. UNEP detected numerous petroleum hydrocarbons in water from individual borehole drinking water wells, notably benzene at concentrations as high as 9280 micrograms per liter, which is approximately 1800 times higher than the United States Environmental Protection Agency (US EPA) drinking water standard and over 900 times higher than the World Health Organization (WHO) drinking water guideline [3, 4]. 
Petroleum products are a complex mixture of hydrocarbons, consisting of both aromatic and long- and short-chain aliphatic hydrocarbons. Components of crude and refined petroleum, namely volatile organic compounds (VOCs), such as benzene, toluene and xylenes, and polycyclic aromatic hydrocarbons (PAHs), have independently been associated with adverse human health effects. Acute exposures to high concentrations of VOCs cause central nervous system toxicity, resulting in symptoms such as headaches, fatigue and dizziness [5, 6]. Chronic exposure to VOCs can impair the immune system via oxidative stress and decreases in white blood cell count [7, 8]. Benzene in particular is strongly associated with disorders of the hematopoietic system such as aplastic anemia [9, 10, 11]. Benzene is also classified as a known human carcinogen based on occupational studies in humans [4]. Polycyclic aromatic hydrocarbons cause symptoms such as nausea, vomiting and skin and eye irritation following acute, high-level exposures [12, 13]. Exposures to PAHs during pregnancy have been linked to decreased birth weight and impaired development in offspring [14]. Chronic occupational exposures are associated with dose-dependent increased risks of certain types of cancers, including lung, skin and bladder cancer [15]. Naphthalene, a low molecular weight PAH that was detected in Ogale water samples, can adversely affect the hematopoietic system, damaging and killing red blood cells, causing symptoms such as shortness of breath and fatigue [12, 16]. Alkylated PAHs comprise the majority of PAHs detected in petroleum products and are particularly persistent. Although the health effects of alkylated PAHs have not been well studied, limited evidence suggests that they may be more toxic and carcinogenic than their parent PAH compounds [17]. 
Although UNEP did not complete a detailed chemical characterization of the refined oil in Ogale wells, studies on petroleum exposures may provide some indication of adverse health effects that could occur in the community. Prior research has primarily focused on high-dose, short-term occupational exposures to crude oil, in particular those occurring during remediation of oil spills. Workers exposed to petroleum hydrocarbons have reported adverse health symptoms such as headaches, eye and skin irritation and respiratory difficulties [18, 19]. A recent cross-sectional study found that blood samples of oil spill workers showed alterations consistent with impairment of the hepatic and hematopoietic systems [20]. Research on the Prestige oil spill has provided preliminary evidence of exposure-dependent DNA damage in cleanup volunteers [21]. The ongoing NIEHS Gulf Long Term Follow Up (GuLF) Study on Deepwater Horizon spill workers will be the first to investigate long-term physical health effects using a prospective cohort design [22, 23]. Few studies have examined adverse effects associated with chronic exposure to elevated concentrations of refined oil products in the general population. Increases in depression and stress, stemming from perceived health risks and financial concerns, have been observed in communities subjected to chronic oil spill exposures [24]. One study found increases in cancer incidence and mortality in communities near the Amazon basin oil fields in Ecuador [25]. A thorough search of the scientific literature revealed only one health study conducted in the Niger Delta region, which reported higher rates of respiratory and skin disorders in Eleme compared to a less-industrialized Nigerian community [26]. 
However, this study does not include a description of the sampling design and locations, among other study weaknesses, and is therefore not suitable for reaching conclusions regarding any association between petroleum exposure and adverse health outcomes. Further research on chronic, high-magnitude exposures in individuals living in proximity to oil sites is needed to improve understanding of how oil spills might affect human health. UNEP made several recommendations in the Ogoniland environmental assessment, including provision of alternative drinking water supplies to Ogale, remediation of soil and groundwater contamination in the area, medical surveillance, and monitoring for potential adverse health effects through the implementation of a prospective cohort epidemiological study [3]. The objectives of this pilot study are to (i) follow up on UNEP recommendations by investigating health symptoms associated with exposure to contaminated water; and (ii) assess the adequacy and utilization of an emergency supply of potable water provided by the government. We compared the prevalence of self-reported health symptoms in Ogale and in a reference community, Eteo. The results of this pilot study will be helpful for designing the prospective cohort study in Ogale recommended by UNEP. Conclusions: In this cross-sectional pilot study, the first carried out in response to the UNEP recommendations, we observed statistically significant associations between exposure to petroleum-contaminated drinking water and self-reported symptoms consistent with exposure to petroleum hydrocarbons. These results reinforce UNEP’s recommendations for establishment of a health registry, medical surveillance, and a prospective cohort study for the Ogale community [3]. Future studies should define the full extent of contaminated household water and incorporate more detailed methods of exposure and outcome assessment for exposed populations, including its most susceptible members.
Background: The oil-rich Niger Delta suffers from extensive petroleum contamination. A pilot study was conducted in the region of Ogoniland where one community, Ogale, has drinking water wells highly contaminated with a refined oil product. In a 2011 study, the United Nations Environment Programme (UNEP) sampled Ogale drinking water wells and detected numerous petroleum hydrocarbons, including benzene at concentrations as much as 1800 times higher than the USEPA drinking water standard. UNEP recommended immediate provision of clean drinking water, medical surveillance, and a prospective cohort study. Although the Nigerian government has provided emergency drinking water, other UNEP recommendations have not been implemented. We aimed to (i) follow up on UNEP recommendations by investigating health symptoms associated with exposure to contaminated water; and (ii) assess the adequacy and utilization of the government-supplied emergency drinking water. Methods: We recruited 200 participants from Ogale and a reference community, Eteo, and administered questionnaires to investigate water use, perceived water safety, and self-reported health symptoms. Results: Our multivariate regression analyses show statistically significant associations between exposure to Ogale drinking water and self-reported health symptoms consistent with petroleum exposure. Participants in Ogale more frequently reported health symptoms related to neurological effects (OR = 2.8), hematological effects (OR = 3.3), and irritation (OR = 2.7). Conclusions: Our results are the first from a community relying on drinking water with such extremely high concentrations of benzene and other hydrocarbons. The ongoing exposure and these pilot study results highlight the need for more refined investigation as recommended by UNEP.
6,541
312
8
[ "water", "ogale", "reported", "participants", "eteo", "health", "symptoms", "study", "emergency", "drinking" ]
[ "test", "test" ]
null
[CONTENT] Petroleum hydrocarbons | Drinking water | Contamination | Public health | Refined oil | Adverse health effects [SUMMARY]
null
[CONTENT] Petroleum hydrocarbons | Drinking water | Contamination | Public health | Refined oil | Adverse health effects [SUMMARY]
[CONTENT] Petroleum hydrocarbons | Drinking water | Contamination | Public health | Refined oil | Adverse health effects [SUMMARY]
[CONTENT] Petroleum hydrocarbons | Drinking water | Contamination | Public health | Refined oil | Adverse health effects [SUMMARY]
[CONTENT] Petroleum hydrocarbons | Drinking water | Contamination | Public health | Refined oil | Adverse health effects [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cross-Sectional Studies | Drinking Water | Environmental Exposure | Female | Health Status | Humans | Male | Middle Aged | Nigeria | Pilot Projects | Rural Population | Self Report | Water Pollutants, Chemical | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cross-Sectional Studies | Drinking Water | Environmental Exposure | Female | Health Status | Humans | Male | Middle Aged | Nigeria | Pilot Projects | Rural Population | Self Report | Water Pollutants, Chemical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cross-Sectional Studies | Drinking Water | Environmental Exposure | Female | Health Status | Humans | Male | Middle Aged | Nigeria | Pilot Projects | Rural Population | Self Report | Water Pollutants, Chemical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cross-Sectional Studies | Drinking Water | Environmental Exposure | Female | Health Status | Humans | Male | Middle Aged | Nigeria | Pilot Projects | Rural Population | Self Report | Water Pollutants, Chemical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cross-Sectional Studies | Drinking Water | Environmental Exposure | Female | Health Status | Humans | Male | Middle Aged | Nigeria | Pilot Projects | Rural Population | Self Report | Water Pollutants, Chemical | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] water | ogale | reported | participants | eteo | health | symptoms | study | emergency | drinking [SUMMARY]
null
[CONTENT] water | ogale | reported | participants | eteo | health | symptoms | study | emergency | drinking [SUMMARY]
[CONTENT] water | ogale | reported | participants | eteo | health | symptoms | study | emergency | drinking [SUMMARY]
[CONTENT] water | ogale | reported | participants | eteo | health | symptoms | study | emergency | drinking [SUMMARY]
[CONTENT] water | ogale | reported | participants | eteo | health | symptoms | study | emergency | drinking [SUMMARY]
[CONTENT] oil | exposures | hydrocarbons | health | chronic | study | water | effects | pahs | refined [SUMMARY]
null
[CONTENT] water | ogale | participants | eteo | reported | table | emergency water | 95 ci | primary | proportions [SUMMARY]
[CONTENT] exposure petroleum | recommendations | unep recommendations | exposure | water incorporate | associations exposure | associations exposure petroleum | associations exposure petroleum contaminated | community future studies define | community future studies [SUMMARY]
[CONTENT] water | ogale | participants | reported | eteo | health | study | community | symptoms | oil [SUMMARY]
[CONTENT] water | ogale | participants | reported | eteo | health | study | community | symptoms | oil [SUMMARY]
[CONTENT] Niger Delta ||| Ogoniland | one | Ogale ||| 2011 | the United Nations Environment Programme | Ogale | as much as 1800 | USEPA ||| UNEP ||| Nigerian | UNEP ||| UNEP [SUMMARY]
null
[CONTENT] Ogale ||| Ogale | 2.8 | 3.3 | 2.7 [SUMMARY]
[CONTENT] first ||| UNEP [SUMMARY]
[CONTENT] Niger Delta ||| Ogoniland | one | Ogale ||| 2011 | the United Nations Environment Programme | Ogale | as much as 1800 | USEPA ||| UNEP ||| Nigerian | UNEP ||| UNEP ||| 200 | Ogale | Eteo ||| ||| Ogale ||| Ogale | 2.8 | 3.3 | 2.7 ||| first ||| UNEP [SUMMARY]
[CONTENT] Niger Delta ||| Ogoniland | one | Ogale ||| 2011 | the United Nations Environment Programme | Ogale | as much as 1800 | USEPA ||| UNEP ||| Nigerian | UNEP ||| UNEP ||| 200 | Ogale | Eteo ||| ||| Ogale ||| Ogale | 2.8 | 3.3 | 2.7 ||| first ||| UNEP [SUMMARY]
Knowledge of reproductive and sexual rights among University students in Ethiopia: institution-based cross-sectional.
23405855
People have the right to make choices regarding their own sexuality, as far as they respect the rights of others. The knowledge of those rights is critical to youth's ability to protect themselves from unwanted reproductive outcomes. Reproductive health targeted Millennium Development Goals will not be achieved without improving access to reproductive health. This study was aimed to assess knowledge of reproductive and sexual rights as well as associated factors among Wolaita Sodo University students.
BACKGROUND
An institution-based cross-sectional survey was conducted among 642 regular undergraduate Wolaita Sodo University students selected by simple random sampling. A pretested and structured self-administered questionnaire was used for data collection. Data were entered using EPI info version 3.5.3 statistical software and analyzed using SPSS version 20 statistical package. Descriptive statistics was used to describe the study population in relation to relevant variables. Bivariate and multivariate logistic regression was also carried out to see the effect of each independent variable on the dependent variable.
METHODS
More than half (54.5%) of the respondents were found to be knowledgeable about reproductive and sexual rights. Attending elementary and high school in private schools [AOR: 2.08, 95% CI: 1.08, 3.99], coming from urban areas [AOR: 1.46, 95% CI: 1.00, 2.12], being student of faculty of health sciences [AOR: 2.98, 95% CI: 1.22, 7.30], participation in reproductive health clubs [AOR: 3.11, 95% CI: 2.08, 4.65], utilization of reproductive health services [AOR: 2.34, 95% CI: 1.49, 3.69] and discussing sexual issues with someone else [AOR: 2.31, 95% CI: 1.48, 3.62], were positively associated with knowledge of reproductive and sexual rights.
RESULTS
The level of knowledge of students about reproductive and sexual rights was found to be low. The Ministry of Education has to incorporate reproductive and sexual rights in the curricula of high schools and institutions of higher learning.
CONCLUSION
[ "Adolescent", "Cross-Sectional Studies", "Ethiopia", "Female", "Health Knowledge, Attitudes, Practice", "Health Services Accessibility", "Humans", "Male", "Reproductive Health", "Sex Offenses", "Sexual Behavior", "Students", "Universities", "Women's Rights", "Young Adult" ]
3577501
Background
Reproductive and sexual health rights are rights of all people, regardless of age, gender and other characteristics. That is, people have the right to make choices regarding their own sexuality and reproduction, provided that they respect the rights of others. Reproductive and sexual rights were first officially recognized at the International Conference on Population and Development (ICPD) in Cairo in 1994 [1]. The Program of Action of the ICPD recognized that meeting reproductive health (RH) needs is a vital requirement for human and social development. Protecting and promoting the reproductive and sexual rights of the youth and empowering them to make informed choices is key to their wellbeing [2,3]. Action for universal access to sexual and reproductive health (SRH) by 2015 was agreed to by 179 countries; despite this international commitment, progress towards the program has been slow [4]. Reproductive health targeted Millennium Development Goals (MDGs) will not be achieved without improving access to RH [5]. The majority of young people have very little knowledge of the sexual rights they are entitled to. Sometimes, they do not even appreciate the extent to which their rights are violated and, worse still, they do not know where they could go for legal or social advice [6,7]. Sources of information the youth traditionally access, such as parents, have a skewed conception of anything related to sexuality and approach it from a cautionary perspective rather than from an informative one. Formal education on sexuality is limited to RH, which is commonly found in science and biology syllabi and does not cover reproductive and sexual rights [8].
Youth are unable to deal with such violations partly because of barriers such as shame, guilt, embarrassment, not wanting friends and family to know, concerns about confidentiality, and fear of not being believed [9], and partly because of their inadequate knowledge and experience of sexuality issues, including the legal instruments that could give them an opportunity to claim and protect sexuality-related rights [8]. Young people face increasing pressures regarding sex and sexuality, including conflicting messages and norms, which may be perpetuated by lack of awareness of their rights; as a result, many young people are unable to seek help when they need it and may be prevented from giving input in policy and decision-making processes [10]. Evidence from Ethiopian universities reveals that violations are rampant and inadequately addressed [11]. Ethiopian youth aged 15 to 24 years, both in and out of school, face many challenges and constitute the highest percentage of new HIV cases in the country [12]. Traditional practices such as early marriage, marriage by abduction, and female genital cutting adversely affect the health and wellbeing of young people [13]. Therefore, this study set out to assess the level of youth knowledge about reproductive and sexual rights and thereby to inform intervention strategies so that issues of sexuality and related rights may be mainstreamed into policies and university curricula.
Methods
An institution-based, cross-sectional study was conducted among Wolaita Sodo University students in Sodo town, located in the Wolaita Zone of the Southern Nations, Nationalities and Peoples' Regional State (SNNPR), 327 km from Addis Ababa. The university had 7,839 undergraduate regular students in its five faculties and four schools, comprising 32 departments. One RH and anti-HIV/AIDS club was operating in the university. First, students were stratified by year of study as first year, second year, and third year and above. Within each batch of the five faculties, students were selected using simple random sampling, with the number sampled from each faculty proportional to its total number of students. Regarding knowledge of reproductive and sexual rights, students who scored above the mean on the knowledge questions were considered knowledgeable. Data were collected using a pretested, structured, self-administered questionnaire. Twelve data collectors guided by one supervisor collected the data on April 7, 2012. The study participants completed the questionnaire at the same time in 12 lecture halls, each with 200 seats; only 55 students sat in each hall, and female and male students were placed in different rooms to ensure privacy. Data quality was controlled through training and supervision of the data collectors, with overall supervision by the principal investigator. The questionnaire was pre-tested on a 5% sample among students of Gondar University, and appropriate modifications were made after analyzing the pre-test results before the actual data collection. Completed questionnaires were checked for completeness, entered into EPI INFO version 3.5.3, and exported to SPSS version 20 for further analysis. Descriptive statistics were used to describe the study population in relation to the relevant variables.
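The proportional-to-size allocation across faculties described above can be sketched as follows. The faculty names and enrolment figures are hypothetical (the paper reports only the university-wide total of 7,839 students and the planned sample of 660).

```python
# Proportional-to-size allocation of a sample across strata (faculties).
# Faculty sizes below are hypothetical; only the 7,839 total and the
# 660-student sample are taken from the paper.
def proportional_allocation(stratum_sizes, total_sample):
    total = sum(stratum_sizes.values())
    return {name: round(total_sample * size / total)
            for name, size in stratum_sizes.items()}

faculties = {  # hypothetical enrolment figures summing to 7,839
    "Health Sciences": 900, "Social Sciences": 2100,
    "Natural Sciences": 1800, "Agriculture": 1500, "Business": 1539,
}
allocation = proportional_allocation(faculties, 660)
print(allocation)  # each faculty's share of the 660-student sample
```

Because each stratum's share is rounded independently, the allocated totals can differ from the target by a student or two; in practice the final stratum is often adjusted to hit the exact sample size.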
Both bivariate and multivariate logistic regression models were used to identify associated factors. Odds ratios and their 95% confidence intervals were computed, and variables with a p-value less than 0.05 were considered significantly associated with the outcome variable. Ethical clearance was obtained from the Institute of Public Health, University of Gondar. A formal letter of cooperation was written to Wolaita Sodo University, and verbal consent was obtained from each study participant.
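As an illustration of the bivariate step, a crude odds ratio and its 95% Wald confidence interval can be computed directly from a 2×2 table; the counts below are hypothetical and not taken from the paper (the reported AORs come from the multivariable model fitted in SPSS).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% Wald CI from a 2x2 table:
    a, b = exposed who are / are not knowledgeable;
    c, d = unexposed who are / are not knowledgeable."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts: 40/60 RH-club members vs 20/60 non-members knowledgeable
or_, (lo, hi) = odds_ratio_ci(40, 20, 20, 40)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these hypothetical counts the crude OR is 4.0 with a CI of roughly (1.9, 8.5); an interval excluding 1 corresponds to the p < 0.05 criterion used in the analysis.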
Results
Socio-demographic characteristics of the study participants
Out of the expected 660 respondents, 648 agreed to participate in the study, yielding a response rate of 98.1%. Of these, six incomplete questionnaires were discarded. The age of the respondents ranged from 18 to 28 years, with a median of 20 years. More than half of the respondents (57.4%) were between 18 and 20 years of age. Male respondents numbered 516 (80.4%). Two hundred eighty (43.6%) of the respondents were Orthodox Christians, and 154 (24%) were Wolaita by ethnicity. Regarding marital status, almost all (95.5%) of the study participants were single, and around half (52%) were from rural areas (Table 1: Socio-demographic and academic characteristics of Wolaita Sodo University undergraduate students, 2012, N = 642).
Sexual experience and RH service utilization
Two hundred six (32.1%) of the respondents reported that they had sexual experience, and the mean age at first sex was 17.34 years. Of those with sexual experience, 158 (53.3%) said that they had had multiple sexual partners in their lifetime.
One hundred six (16.5%) of the respondents reported that it was not important to discuss sexual issues with parents, while 140 (21.8%) had not discussed the issue with anyone. Among those who had discussed sexual issues, 419 (65.3%) did so with their friends. Only 229 (35.7%) of the respondents participated in RH clubs. Around one-third of the respondents (37.1%) were unaware that the student clinic on campus provides RH services, and only 156 (24.3%) had ever used any of the RH services. Around half of the respondents (48.8%) said that their source of information on reproductive and sexual issues was their peers, while health personnel, the media, teachers and parents were sources of information for 45.2%, 43.5%, 31% and 28.2%, respectively.
Knowledge of reproductive and sexual rights
Participants were asked 24 questions to assess their knowledge of reproductive and sexual rights and were categorized into two groups based on their score relative to the mean. The mean score was 15.7. More than half (54.5%) of the respondents were found to be knowledgeable, while a substantial proportion (45.5%) were not. Students were asked whether a married woman should have the right to limit the number of her children according to her own desire and without her husband's consent; the majority (63.7%) disagreed with this idea. One hundred fifty-seven (24.5%) of the study participants said that a husband should get sex whenever he wants, irrespective of his wife's wishes. Around half (53.7%) disagreed with the item reflecting the right of girls to autonomous reproductive choices without their partners' consent. Four hundred nine (63.7%) agreed that parents have the right to decide on the sexual and RH issues of their children. Among all respondents, 270 (42.1%) disagreed with the statement that students should have the right to freedom of assembly and political participation to influence the Government to prioritize sexual and reproductive health issues in planning and interventions. Three hundred sixty-four (56.7%) agreed with the statement that unmarried couples have no right to use contraceptives other than condoms (Table 2: Responses of the study participants to the knowledge questions, Wolaita Sodo University, Wolaita Sodo, SNNPRS, Ethiopia, 2012, N = 642).
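The mean-split dichotomization used for the knowledge outcome can be sketched as follows. The individual scores are hypothetical; the 24-item scale and the above-the-mean rule are from the paper.

```python
# Dichotomize 24-item knowledge scores at the sample mean, as in the paper:
# respondents scoring strictly above the mean are labelled knowledgeable.
def classify_knowledge(scores):
    mean = sum(scores) / len(scores)
    return [score > mean for score in scores]

scores = [10, 14, 16, 18, 20, 22]  # hypothetical scores out of 24
flags = classify_knowledge(scores)
print(sum(flags), "of", len(scores), "respondents classed as knowledgeable")
```

Note that a mean split is sample-dependent: the 54.5%/45.5% division reported here reflects this sample's mean of 15.7, not an absolute competence threshold.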
Factors associated with knowledge of sexual and reproductive rights
The type of elementary and high school attended, faculty, place of origin, participation in RH clubs, discussion of RH issues, and utilization of RH services had significant and independent effects on knowledge of reproductive and sexual rights, while sexual experience, parental education and parental occupation were not significantly associated. Students who attended private elementary and high schools were about 2 times more likely to be knowledgeable than students from public schools [AOR: 2.08, 95% CI: 1.08, 3.99]. Respondents from urban areas were about 1.5 times more likely to be knowledgeable than those from rural areas [AOR: 1.45, 95% CI: 1.01, 2.11].
Study participants from the faculty of health sciences were about 3 times more likely to be knowledgeable than students of social sciences and humanities [AOR: 2.98, 95% CI: 1.22, 7.30]. Those who participated in RH clubs were about 3 times more likely to be knowledgeable than those who did not [AOR: 3.11, 95% CI: 2.08, 4.65]. Students who utilized RH services were about 2 times more likely to be knowledgeable than those who did not [AOR: 2.34, 95% CI: 1.50, 3.69]. Students who had ever discussed RH issues with someone else were about 2 times more likely to be knowledgeable than those who had not [AOR: 2.31, 95% CI: 1.48, 3.62] (Table 3: Bivariate and multivariate logistic regression analysis of factors associated with knowledge of RSR among WSU students, SNNPRS, Ethiopia, 2012).
Conclusions
The level of knowledge of the respondents was found to be inadequate. Attending private elementary and high schools, coming from an urban area, being a second- or third-year student, belonging to the faculty of health sciences, discussing reproductive and sexual issues, participating in RH clubs, and utilizing RH services showed positive and significant associations with knowledge of reproductive and sexual rights. The Ministry of Education of Ethiopia should incorporate reproductive and sexual rights into the curricula of high schools and institutions of higher learning.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
YM wrote the proposal, participated in data collection, analyzed the data and drafted the paper. AB and ZB approved the proposal with some revisions, participated in data analysis and revised subsequent drafts of the paper. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-698X/13/12/prepub
Three hundred sixty-four (56.7%) agreed with the statement that unmarried couples have no right to use contraceptives other than condoms (Table \n2).\nRresponses of the study participants to the knowledge questions, wolaita Sodo University, Wolaita Sodo, SNNPRS, Ethiopia, 2012, N = 642\n Factors associated with knowledge of sexual and reproductive rights The type elementary and high school attended, faculty, place they came from, participation in RH clubs, discussion RH issues, and utilization of RH services were found to have significant and independent effect on knowledge of reproductive and sexual rights, while sexual experience, parental education and occupation were not significantly associated.\nStudents who attended private elementary and high schools were about 2 times more likely to be knowledgeable than students from public schools [AOR: 2.08, 95% CI: 1.08, 3.99]. Respondents who came from urban areas were also about 1.5 times more likely to be knowledgeable compared to those who came from rural areas [AOR: 1.45, 95% CI: 1.01, 2.11]. Study participants from faculty of health sciences were about 3 times more likely to be knowledgeable than students of social science and humanities [AOR: 2.98, 95% CI: 1.22, 7.30].\nThose who participate in RH clubs were about 3 times more likely to be knowledgeable than those who did not participate [AOR: 3.11, 95% CI: 2.08, 4.65]. Students who utilized RH services were 2 times more likely to be knowledgeable than those who did not use [AOR: 2.34, 95% CI: 1.5, 3.69]. 
Students who ever had discussed RH issues with someone else were 2 times more likely to be knowledgeable than those who did not [AOR: 2.31, 95%CI: 1.48, 3.62] (Table \n3).\nBivariate and multivariate logistic regression analysis output of factors associated with knowledge of RSR among WSU students, SNNPRS Ethiopia, 2012\nThe type elementary and high school attended, faculty, place they came from, participation in RH clubs, discussion RH issues, and utilization of RH services were found to have significant and independent effect on knowledge of reproductive and sexual rights, while sexual experience, parental education and occupation were not significantly associated.\nStudents who attended private elementary and high schools were about 2 times more likely to be knowledgeable than students from public schools [AOR: 2.08, 95% CI: 1.08, 3.99]. Respondents who came from urban areas were also about 1.5 times more likely to be knowledgeable compared to those who came from rural areas [AOR: 1.45, 95% CI: 1.01, 2.11]. Study participants from faculty of health sciences were about 3 times more likely to be knowledgeable than students of social science and humanities [AOR: 2.98, 95% CI: 1.22, 7.30].\nThose who participate in RH clubs were about 3 times more likely to be knowledgeable than those who did not participate [AOR: 3.11, 95% CI: 2.08, 4.65]. Students who utilized RH services were 2 times more likely to be knowledgeable than those who did not use [AOR: 2.34, 95% CI: 1.5, 3.69]. Students who ever had discussed RH issues with someone else were 2 times more likely to be knowledgeable than those who did not [AOR: 2.31, 95%CI: 1.48, 3.62] (Table \n3).\nBivariate and multivariate logistic regression analysis output of factors associated with knowledge of RSR among WSU students, SNNPRS Ethiopia, 2012", "Out of the expected 660 respondents, 648 agreed to participate in the study, yielding a response rate of 98.1%. Out of this, six incomplete questionnaires were discarded. 
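The adjusted odds ratios above come from the multivariate logistic regression model. As a rough illustration of the quantity being estimated, a crude odds ratio and its Wald-type 95% confidence interval can be computed from a 2x2 table; this is a minimal sketch with illustrative counts, not the study's data or the authors' code.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed & knowledgeable,   b = exposed & not knowledgeable,
    c = unexposed & knowledgeable, d = unexposed & not knowledgeable."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    low = math.exp(math.log(or_) - z * se_log)
    high = math.exp(math.log(or_) + z * se_log)
    return or_, (low, high)

# Illustrative counts only: 20 of 30 RH-club members knowledgeable
# versus 10 of 30 non-members.
print(odds_ratio_ci(20, 10, 10, 20))
```

An adjusted OR such as those in Table 3 would instead come from fitting a logistic regression with all covariates entered simultaneously, as the authors did in SPSS.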
The age of the respondents ranged from 18 to 28 years, with a median age of 20 years. More than half of the respondents (57.4%) were between 18 and 20 years of age. Male respondents numbered 516 (80.4%). Two hundred eighty (43.6%) of the respondents were Orthodox Christians, while 154 (24%) were Wolaita by ethnicity. Regarding marital status, almost all (95.5%) of the study participants were single, and around half (52%) were from rural areas (Table 1).\nSocio-demographic and academic characteristics of Wolaita Sodo University undergraduate students, 2012, N = 642", "Two hundred six (32.1%) of the respondents reported that they had sexual experience. The mean age at first sex was 17.34 years. Of those who had sexual experience, 158 (53.3%) said that they had had multiple sexual partners in their lifetime. One hundred six (16.5%) of the respondents reported that it was not important to discuss sexual issues with parents, while 140 (21.8%) had not discussed the issue with anyone else. Among those who had discussed sexual issues, 419 (65.3%) had done so with their friends. Only 229 (35.7%) of the respondents participated in RH clubs.\nAround one-third of the respondents (37.1%) were unaware that the student clinic on campus provides RH services, while only 156 (24.3%) had ever used any of the RH services.\nAround half (48.8%) of the respondents said that their sources of information regarding reproductive and sexual issues were their peers, while health personnel, the media, teachers and parents were sources of information for 45.2%, 43.5%, 31% and 28.2%, respectively.", "Participants were asked 24 questions to assess their knowledge of reproductive and sexual rights and were categorized into two groups based on their scores relative to the mean. The mean score was 15.7. 
More than half (54.5%) of the respondents were found to be knowledgeable, while a substantial proportion (45.5%) were not.\nStudents were asked whether a married woman should have the right to limit the number of her children according to her desire and without her husband’s consent. The majority (63.7%) disagreed with this idea. One hundred fifty-seven (24.5%) of the study participants said that a husband should get sex whenever he wants, irrespective of his wife’s wish. Around half (53.7%) disagreed with the question that reflected the right of girls to autonomous reproductive choices without their partners’ consent. Four hundred nine (63.7%) agreed that parents have the right to decide on the sexual and RH issues of their children. Among all, 270 (42.1%) of the respondents disagreed with the statement that students should have the right to freedom of assembly and political participation to influence the Government to place sexual and reproductive health issues on the priority list during planning and interventions. Three hundred sixty-four (56.7%) agreed with the statement that unmarried couples have no right to use contraceptives other than condoms (Table 2).\nResponses of the study participants to the knowledge questions, Wolaita Sodo University, Wolaita Sodo, SNNPRS, Ethiopia, 2012, N = 642", "The type of elementary and high school attended, faculty, place of origin, participation in RH clubs, discussion of RH issues, and utilization of RH services were found to have significant and independent effects on knowledge of reproductive and sexual rights, while sexual experience, parental education and occupation were not significantly associated.\nStudents who attended private elementary and high schools were about 2 times more likely to be knowledgeable than students from public schools [AOR: 2.08, 95% CI: 1.08, 3.99]. 
Respondents who came from urban areas were also about 1.5 times more likely to be knowledgeable than those who came from rural areas [AOR: 1.45, 95% CI: 1.01, 2.11]. Study participants from the faculty of health sciences were about 3 times more likely to be knowledgeable than students of social sciences and humanities [AOR: 2.98, 95% CI: 1.22, 7.30].\nThose who participated in RH clubs were about 3 times more likely to be knowledgeable than those who did not [AOR: 3.11, 95% CI: 2.08, 4.65]. Students who utilized RH services were 2 times more likely to be knowledgeable than those who did not [AOR: 2.34, 95% CI: 1.5, 3.69]. Students who had ever discussed RH issues with someone else were 2 times more likely to be knowledgeable than those who had not [AOR: 2.31, 95% CI: 1.48, 3.62] (Table 3).\nBivariate and multivariate logistic regression analysis of factors associated with knowledge of RSR among WSU students, SNNPRS, Ethiopia, 2012", "The reproductive health-related Millennium Development Goals (MDGs) will not be achieved without improving access to reproductive health services\n[5]. But the youth experience all forms of sexual abuse and sexual-rights violations that affect their lives\n[14], and the majority of young people have very little knowledge of what sexual rights they are entitled to. Sometimes they do not even appreciate the extent of the violations against them and, worse still, do not know where they could go for legal or social advice\n[8]. Thus, this study aimed to assess the knowledge of reproductive and sexual rights and associated factors among Wolaita Sodo University students.\nIn this study, a substantial proportion of students were not knowledgeable. 
As university students are the educated segment of the population and are expected to be the next leaders of the nation, this level of knowledge is far from adequate.\nAround two-thirds of the respondents did not accept that a married woman has the right to limit the number of her children according to her desire without her husband’s consent. Furthermore, a large group did not know that a married woman has the right to say no to sex, regardless of her husband’s wishes. This finding was lower than that of a study conducted in Texas, US\n[15]. This disparity might be due to differences in the cultures and norms in which the two populations were brought up. Sex and sexuality are taboo in most Ethiopian cultures, resulting in reluctance to discuss and address sexual health issues\n[16]. Furthermore, Ethiopian society is highly patriarchal, and women’s rights are undermined\n[17]. In most African societies, lack of knowledge on sexual matters is assumed to entail safety: if adolescents are not exposed to such knowledge, the likelihood of their getting involved, and consequently becoming victims, is thought to be slim\n[18].\nAmong the respondents, about one-fifth of the females and one-fourth of the males agreed that a husband should get sex whenever he wants, irrespective of his wife’s wish. This finding is better than that of a similar study conducted among Indian adolescents selected from five states representing different cultural settings, where above 40% of women agreed\n[19]. This difference might be attributed to differences in the age and educational level of the study participants.\nStudents who came from urban areas were more likely to be knowledgeable than those who came from rural areas. A possible explanation is that students from towns have relatively better access to information through youth associations, youth centers, the media and the environment itself, because most NGO services are limited to urban areas. 
However, their counterparts from rural areas might lack such opportunities because of the low awareness of the society, which inhibits free and open discussion of reproductive and sexual issues.\nStudents who had attended private elementary and high schools were more likely to be knowledgeable than students from public schools. This finding is contrary to a Nigerian study in which students attending public schools were more aware of RSR than students from private schools\n[20]. In Nigeria, NGOs working on RH give more emphasis to public schools than to private schools, but in Ethiopia the number of NGOs working in public elementary and high schools is low. Additionally, compared to public schools, most private schools are found in towns, where students are more familiar with this issue and have better access to youth centers and sexuality education. Anti-HIV/AIDS and RH clubs are also relatively strong and functional in private schools. Besides, students of private schools tend to come from families with better educational and economic backgrounds than students from public schools.\nHealth sciences students were more likely to be knowledgeable than students in the faculty of Social Sciences and Humanities (SSH). This might be due to the courses that health science students take, such as RH, which includes reproductive and sexual rights as a chapter. In addition, they can learn this specific topic in one way or another because their instructors are well familiarized with, and even experts on, RH issues. Reproductive health service utilization also has an impact on knowledge of reproductive and sexual rights: students who utilized RH services were more likely to be knowledgeable than those who did not. This finding is in agreement with a US study where users of RH services had more inclination towards reproductive health rights, and inconsistent contraceptive use was associated with low sexual assertiveness\n[15]. 
A possible explanation is that, since the services are provided by experts who can give clear and correct information and answer their clients’ questions, utilization of RH services increases the probability of receiving counseling and accurate information, which has a direct impact on knowledge of sexual and reproductive rights.\nDiscussing reproductive and sexual issues affects knowledge of reproductive and sexual rights in a positive way: students who had ever discussed RH issues were more likely to be knowledgeable than those who had not. This can be explained by the fact that knowledge gained through experience sharing during discussion can increase knowledge of reproductive and sexual rights. This study shares the limitations of cross-sectional studies, i.e., the difficulty of determining causal relationships between variables. As a cross-sectional study requires respondents to remember information retrospectively, recall bias is another potential limitation of this study. However, scientific procedures were employed to minimize possible effects, including supervision, a pretest of the data collection tool, and adequate training of data collectors and supervisors.", "The level of knowledge of the respondents was found to be inadequate. Attending private elementary and high schools, coming from urban areas, being a second- or third-year student, belonging to the faculty of health sciences, discussing reproductive and sexual issues, participating in RH clubs, and utilization of RH services showed a positive and significant association with knowledge of reproductive and sexual rights. The Ministry of Education of Ethiopia should incorporate reproductive and sexual rights into the curricula of high schools and institutions of higher learning.", "The authors declare that they have no competing interests.", "YM wrote the proposal, participated in data collection, analyzed the data and drafted the paper. 
AB and ZB approved the proposal with some revisions, participated in data analysis and revised subsequent drafts of the paper. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-698X/13/12/prepub\n" ]
[ null, "methods", "results", null, null, null, null, "discussion", "conclusions", null, null, null ]
[ "Reproductive Health", "Reproductive and Sexual Rights", "Youths", "Ethiopia" ]
Background: Reproductive and sexual health rights are rights of all people, regardless of age, gender and other characteristics. That is, people have the right to make choices regarding their own sexuality and reproduction, provided that they respect the rights of others. Reproductive and sexual rights were first officially recognized at the International Conference on Population and Development (ICPD) in Cairo in 1994 [1]. The Program of Action of the ICPD recognized that meeting reproductive health (RH) needs is a vital requirement for human and social development. Protecting and promoting the reproductive and sexual rights of the youth and empowering them to make informed choices is key to their wellbeing [2,3]. Action for universal access to sexual and reproductive health (SRH) by 2015 was agreed by 179 countries; despite this international commitment, progress towards the program has been slow [4]. The reproductive health-related Millennium Development Goals (MDGs) will not be achieved without improving access to RH [5]. The majority of young people have very little knowledge of what sexual rights they are entitled to. Sometimes they do not even appreciate the extent of the violations against them and, worse still, do not know where they could go for legal or social advice [6,7]. Sources of information the youth traditionally access, such as parents, have a skewed conception of anything related to sexuality and approach it from a cautionary perspective rather than an informative one. Formal education on sexuality is limited to RH, which is commonly found in science and biology syllabi and does not cover reproductive and sexual rights [8]. 
The youth are unable to deal with such violations because of barriers such as shame, guilt, embarrassment, not wanting friends and family to know, concerns about confidentiality, and fear of not being believed [9], and partly because of their inadequate knowledge and experience of sexuality issues, including the legal instruments that may accord them an opportunity to claim and protect sexuality-related rights [8]. Young people face increasing pressures regarding sex and sexuality, including conflicting messages and norms, which may be perpetuated by lack of awareness of their rights; as a result, many young people are unable to seek help when they need it and may be prevented from giving input into policy and decision-making processes [10]. Evidence from Ethiopian universities reveals that violations are rampant and inadequately addressed [11]. The in- and out-of-school Ethiopian youth aged between 15 and 24 years face many challenges and constitute the highest percentage of new HIV cases in the country [12]. Traditional practices such as early marriage, marriage by abduction, and female genital cutting adversely affect the health and wellbeing of young people [13]. Therefore, this study set out to assess the level of youth knowledge about reproductive and sexual rights and thereby generate appropriate intervention strategies so that issues of sexuality and related rights may be mainstreamed into policies and university curricula. Methods: An institution-based, cross-sectional study was conducted among Wolaita Sodo University students in Sodo town, which is located in the Wolaita Zone of the Southern Nations, Nationalities and People’s Regional State (SNNPR). Sodo is 327 km from Addis Ababa. The university had 7,839 undergraduate regular students in its five faculties and four schools, comprising 32 departments. One RH and Anti-HIV/AIDS club was operating in the university. 
At the beginning, students were stratified based on their year of study as first year, second year, and third year and above. From the five faculties, students in each batch were selected using the simple random sampling technique. The number of students sampled from each faculty was proportional to its total number of students. Regarding knowledge of reproductive and sexual rights, students who scored above the mean on the knowledge questions were considered knowledgeable. Data were collected using a pretested, structured, self-administered questionnaire. Twelve data collectors guided by one supervisor collected the data on April 7, 2012. The study participants filled in the questionnaire at the same time in 12 lecture halls, each with 200 seats. Only 55 students sat in each lecture hall. Female and male students were placed in different rooms to ensure their privacy. Data quality was controlled through training and appropriate supervision of the data collectors. The overall supervision was carried out by the principal investigator. The questionnaire was pre-tested on 5% of the sample among students of Gondar University. Appropriate modifications were made after analyzing the pre-test results, before the actual data collection. The questionnaires filled in by the students were checked for completeness, entered into EPI INFO version 3.5.3 statistical software, and then exported to SPSS version 20 for further analysis. Descriptive statistics were used to describe the study population in relation to relevant variables. Both bivariate and multivariate logistic regression models were used to identify associated factors. Odds Ratios and their 95% Confidence Intervals were computed, and variables with a p-value less than 0.05 were considered significantly associated with the outcome variable. Ethical clearance was obtained from the Institute of Public Health, the University of Gondar. A formal letter of cooperation was written to Wolaita Sodo University. 
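The proportional-to-size allocation described above (the number of students sampled from each faculty proportional to its enrolment) can be sketched as follows. This is a minimal illustration, not the authors' procedure; the faculty names and enrolments below are hypothetical, chosen only to sum to the university's 7,839 students.

```python
def proportional_allocation(strata_sizes, n):
    """Allocate a total sample of n across strata in proportion to
    stratum size, using largest-remainder rounding so that the
    allocations sum exactly to n."""
    total = sum(strata_sizes.values())
    quotas = {k: n * size / total for k, size in strata_sizes.items()}
    alloc = {k: int(q) for k, q in quotas.items()}  # floor of each quota
    # hand out the remaining units to the largest fractional parts
    leftover = n - sum(alloc.values())
    for k in sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# Hypothetical faculty enrolments; a sample of 660 is split proportionally.
sizes = {"Health": 800, "Business": 2400, "Engineering": 2800, "SSH": 1839}
print(proportional_allocation(sizes, 660))
```

Within each faculty, the allocated number of students would then be drawn by simple random sampling, as the methods describe.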
Verbal consent was obtained from each study participant. Results: Socio-demographic characteristics of the study participants Out of the expected 660 respondents, 648 agreed to participate in the study, yielding a response rate of 98.1%. Out of this, six incomplete questionnaires were discarded. The age of the respondents ranged from 18 to 28, with the median age of 20 years. More than half of the respondents (57.4%) were between the ages of 18 and 20 years. Male respondents were 516 (80.4%). Two hundred eighty (43.6%) of the respondents were Orthodox Christians, while 154 (24%) of the study participants were Wolaita by ethnicity. Regarding marital status, almost all (95.5%) of the study participants were single, and around half (52%) were from rural areas (Table  1). Socio-demographic and academic characteristics of Wolaita Sodo University undergraduate students, 2012, N = 642 Out of the expected 660 respondents, 648 agreed to participate in the study, yielding a response rate of 98.1%. Out of this, six incomplete questionnaires were discarded. The age of the respondents ranged from 18 to 28, with the median age of 20 years. More than half of the respondents (57.4%) were between the ages of 18 and 20 years. Male respondents were 516 (80.4%). Two hundred eighty (43.6%) of the respondents were Orthodox Christians, while 154 (24%) of the study participants were Wolaita by ethnicity. Regarding marital status, almost all (95.5%) of the study participants were single, and around half (52%) were from rural areas (Table  1). Socio-demographic and academic characteristics of Wolaita Sodo University undergraduate students, 2012, N = 642 Sexual experience and RH service utilization Two hundred six (32.1%) of the respondents reported that they had sexual experience. The mean age at the first sex was 17.34. Out of those who had sexual experience, 158 (53.3%) said that they had multiple sexual partners in their life time. 
One hundred six (16.5%) of the respondents reported that it was not important to discuss sexual issues with parents, while 140 (21.8%) had not discussed about the issue with anyone else. Among those who had discussed sexual issues, 419 (65.3%) discussed with their friends. Only 229 (35.7%) of the respondents participated in RH clubs. Around one-third of the respondents (37.1%) had no awareness that the student clinic in the campus provides RH services, while only 156 (24.3%) ever used any of the RH services. Around half 48.8% of the respondents said that their sources of information regarding reproductive and sexual issues were their peers, while health personnel, the media, teachers and parents were sources of information to 45.2%, 43.5%, 31% and 28.2%, respectively. Two hundred six (32.1%) of the respondents reported that they had sexual experience. The mean age at the first sex was 17.34. Out of those who had sexual experience, 158 (53.3%) said that they had multiple sexual partners in their life time. One hundred six (16.5%) of the respondents reported that it was not important to discuss sexual issues with parents, while 140 (21.8%) had not discussed about the issue with anyone else. Among those who had discussed sexual issues, 419 (65.3%) discussed with their friends. Only 229 (35.7%) of the respondents participated in RH clubs. Around one-third of the respondents (37.1%) had no awareness that the student clinic in the campus provides RH services, while only 156 (24.3%) ever used any of the RH services. Around half 48.8% of the respondents said that their sources of information regarding reproductive and sexual issues were their peers, while health personnel, the media, teachers and parents were sources of information to 45.2%, 43.5%, 31% and 28.2%, respectively. 
Knowledge of reproductive and sexual rights Participants were asked 24 questions to assess their knowledge on reproductive and sexual rights, and they were categorized in to two groups based on their score in relation to the mean. The mean score was 15.7. More than half (54.5%) of the respondents were found to be knowledgeable, while a substantial proportion (45.5%) of the respondents was not. Students were asked whether a married woman should have the right to limit the number of her children according to her desire and without her husband’s consent. The majority (63.7%) of them showed their disagreement with this idea. One hundred fifty-seven (24.5%) of the study participants said that a husband should get sex whenever he wants irrespective of his wife’s wish. Around half (53.7%) disagreed with the question that reflected the right of girls to autonomous reproductive choices without their partners` consent. Four hundred nine (63.7%) agreed that parents have the right to decide on sexual and RH issues of their children. Among all, 270 (42.1%) of the respondents disagreed with the statement which said students should have the right to freedom of assembly and political participation to influence the Government to place sexual and reproductive health issues on the priority list during planning and interventions. Three hundred sixty-four (56.7%) agreed with the statement that unmarried couples have no right to use contraceptives other than condoms (Table  2). Rresponses of the study participants to the knowledge questions, wolaita Sodo University, Wolaita Sodo, SNNPRS, Ethiopia, 2012, N = 642 Participants were asked 24 questions to assess their knowledge on reproductive and sexual rights, and they were categorized in to two groups based on their score in relation to the mean. The mean score was 15.7. More than half (54.5%) of the respondents were found to be knowledgeable, while a substantial proportion (45.5%) of the respondents was not. 
Students were asked whether a married woman should have the right to limit the number of her children according to her desire and without her husband’s consent. The majority (63.7%) of them showed their disagreement with this idea. One hundred fifty-seven (24.5%) of the study participants said that a husband should get sex whenever he wants irrespective of his wife’s wish. Around half (53.7%) disagreed with the question that reflected the right of girls to autonomous reproductive choices without their partners` consent. Four hundred nine (63.7%) agreed that parents have the right to decide on sexual and RH issues of their children. Among all, 270 (42.1%) of the respondents disagreed with the statement which said students should have the right to freedom of assembly and political participation to influence the Government to place sexual and reproductive health issues on the priority list during planning and interventions. Three hundred sixty-four (56.7%) agreed with the statement that unmarried couples have no right to use contraceptives other than condoms (Table  2). Rresponses of the study participants to the knowledge questions, wolaita Sodo University, Wolaita Sodo, SNNPRS, Ethiopia, 2012, N = 642 Factors associated with knowledge of sexual and reproductive rights The type elementary and high school attended, faculty, place they came from, participation in RH clubs, discussion RH issues, and utilization of RH services were found to have significant and independent effect on knowledge of reproductive and sexual rights, while sexual experience, parental education and occupation were not significantly associated. Students who attended private elementary and high schools were about 2 times more likely to be knowledgeable than students from public schools [AOR: 2.08, 95% CI: 1.08, 3.99]. Respondents who came from urban areas were also about 1.5 times more likely to be knowledgeable compared to those who came from rural areas [AOR: 1.45, 95% CI: 1.01, 2.11]. 
Study participants from the faculty of health sciences were about 3 times more likely to be knowledgeable than students of social sciences and humanities [AOR: 2.98, 95% CI: 1.22, 7.30]. Those who participated in RH clubs were about 3 times more likely to be knowledgeable than those who did not participate [AOR: 3.11, 95% CI: 2.08, 4.65]. Students who utilized RH services were 2 times more likely to be knowledgeable than those who did not [AOR: 2.34, 95% CI: 1.5, 3.69]. Students who had ever discussed RH issues with someone else were 2 times more likely to be knowledgeable than those who had not [AOR: 2.31, 95% CI: 1.48, 3.62] (Table 3). Bivariate and multivariate logistic regression analysis output of factors associated with knowledge of RSR among WSU students, SNNPRS, Ethiopia, 2012
Socio-demographic characteristics of the study participants: Out of the expected 660 respondents, 648 agreed to participate in the study, yielding a response rate of 98.1%. Of these, six incomplete questionnaires were discarded. The age of the respondents ranged from 18 to 28 years, with a median age of 20 years. More than half of the respondents (57.4%) were between 18 and 20 years of age. Five hundred sixteen (80.4%) of the respondents were male. Two hundred eighty (43.6%) of the respondents were Orthodox Christians, while 154 (24%) of the study participants were Wolaita by ethnicity. Regarding marital status, almost all (95.5%) of the study participants were single, and around half (52%) were from rural areas (Table 1). Socio-demographic and academic characteristics of Wolaita Sodo University undergraduate students, 2012, N = 642 Sexual experience and RH service utilization: Two hundred six (32.1%) of the respondents reported that they had sexual experience. The mean age at first sex was 17.34 years. Of those who had sexual experience, 158 (53.3%) said that they had had multiple sexual partners in their lifetime. One hundred six (16.5%) of the respondents reported that it was not important to discuss sexual issues with parents, while 140 (21.8%) had not discussed the issue with anyone else. Among those who had discussed sexual issues, 419 (65.3%) had done so with their friends. Only 229 (35.7%) of the respondents participated in RH clubs. Around one-third of the respondents (37.1%) were not aware that the student clinic on campus provides RH services, while only 156 (24.3%) had ever used any of the RH services.
Around half (48.8%) of the respondents said that their sources of information regarding reproductive and sexual issues were their peers, while health personnel, the media, teachers and parents were sources of information for 45.2%, 43.5%, 31% and 28.2%, respectively.
Discussion: Reproductive health-targeted Millennium Development Goals (MDGs) will not be achieved without improving access to reproductive health services [5].
But the youth are experiencing all forms of sexual abuse and sexual-rights violations that affect their lives [14], and the majority of young people have very little knowledge of what sexual rights they are entitled to. Sometimes, they do not even appreciate the extent of these violations and, worse still, they do not know where they could go for legal or social advice [8]. Thus, this study aimed to assess the knowledge of reproductive and sexual rights and associated factors among Wolaita Sodo University students. In this study, a substantial proportion of students were not knowledgeable. As university students are the educated segment of the population and are expected to be the next leaders of the nation, this level of knowledge is far below adequate. Around two-thirds of the respondents did not accept that a married woman has the right to limit the number of her children according to her desire without her husband’s consent. Furthermore, a large group did not know that a married woman has the right to say no to sex, regardless of her husband’s wishes. This finding was lower than that of a study conducted in Texas, US [15]. This disparity might be because of differences in the cultures and norms in which the two populations were brought up. Sex and sexuality are taboo in most Ethiopian cultures, resulting in reluctance to discuss and address sexual health issues [16]. Furthermore, Ethiopian society is highly patriarchal, and women’s rights are undermined [17]. In most African societies, lack of knowledge on sexual matters is assumed to entail safety: if adolescents are not exposed to such knowledge, the likelihood of getting involved, and consequently becoming victims, is presumed to be slim [18]. Among the respondents, about one-fifth of the females and one-fourth of the males agreed that a husband should get sex whenever he wants, irrespective of his wife’s wish.
This finding is better than that of a similar study conducted among Indian adolescents selected from five states representing different cultural settings, where over 40% of women agreed with it [19]. This difference might be attributed to differences in the age and educational level of the study participants. Students who came from urban areas were more likely to be knowledgeable than those who came from rural areas. A possible explanation is that students from towns have relatively better access to information through youth associations, youth centers, the media and the environment itself, because most NGO services are limited to urban areas. Their counterparts from rural areas might lack such opportunities because of low societal awareness, which inhibits free and open discussion of reproductive and sexual issues. Students who had attended private elementary and high schools were more likely to be knowledgeable than students from public schools. This finding is contrary to a Nigerian study, in which students attending public schools were more aware of RSR than students from private schools [20]. In Nigeria, NGOs working on RH give more emphasis to public schools than to private schools, but in Ethiopia the number of NGOs working in public elementary and high schools is low. Additionally, compared with public schools, most private schools are found in towns, where students are more familiar with the issue and have better access to youth centers and sexuality education. Anti-HIV/AIDS and RH clubs are also relatively strong and functional in private schools. Besides, students at private schools tend to come from families with better educational and economic backgrounds than students at public schools. Health sciences students were more likely to be knowledgeable than students in the faculty of Social Sciences and Humanities (SSH).
This might be due to the courses that health science students take, such as RH, which includes reproductive and sexual rights as a chapter. In addition, they can learn this specific topic in one way or another because their instructors are well familiarized with, and even experts on, RH issues. Reproductive health service utilization has an impact on the knowledge of reproductive and sexual rights: students who utilized RH services were more likely to be knowledgeable than those who did not. This finding is in agreement with a US study in which users of RH services had a greater inclination towards reproductive health rights, and inconsistent contraceptive use was associated with low sexual assertiveness [15]. A possible explanation is that, since the services are provided by experts who can give clear and correct information and answer their clients’ questions, utilization of RH services increases the probability of receiving counseling and accurate information, which has a direct impact on the knowledge of sexual and reproductive rights. Discussing reproductive and sexual issues affects the knowledge of reproductive and sexual rights in a positive way: students who had ever discussed RH issues were more likely to be knowledgeable than those who had not. This can be explained by the fact that knowledge gained through experience sharing during discussion can increase the knowledge of reproductive and sexual rights. This study shares the limitations of cross-sectional studies, i.e., the difficulty of determining causal relationships between variables. As a cross-sectional study requires respondents to remember information retrospectively, recall bias is another potential limitation of this study. However, scientific procedures were employed to minimize possible effects, including supervision, pretesting of the data collection tool, and adequate training of data collectors and supervisors.
Conclusions: The level of knowledge of the respondents was found to be inadequate. Attending private elementary and high schools, coming from urban areas, being a second- or third-year student, belonging to the faculty of health sciences, discussing reproductive and sexual issues, participating in RH clubs, and utilization of RH services showed a positive and significant association with knowledge of reproductive and sexual rights. The Ministry of Education of Ethiopia has to incorporate reproductive and sexual rights in the curricula of high schools and institutions of higher learning. Competing interests: The authors declare that they have no competing interests. Authors' contributions: YM wrote the proposal, participated in data collection, analyzed the data and drafted the paper. AB and ZB approved the proposal with some revisions, participated in data analysis and revised subsequent drafts of the paper. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-698X/13/12/prepub
Background: People have the right to make choices regarding their own sexuality, as far as they respect the rights of others. The knowledge of those rights is critical to youth's ability to protect themselves from unwanted reproductive outcomes. Reproductive health targeted Millennium Development Goals will not be achieved without improving access to reproductive health. This study aimed to assess knowledge of reproductive and sexual rights as well as associated factors among Wolaita Sodo University students. Methods: An institution-based cross-sectional survey was conducted among 642 regular undergraduate Wolaita Sodo University students selected by simple random sampling. A pretested and structured self-administered questionnaire was used for data collection. Data were entered using EPI info version 3.5.3 statistical software and analyzed using SPSS version 20 statistical package. Descriptive statistics were used to describe the study population in relation to relevant variables. Bivariate and multivariate logistic regression analyses were also carried out to see the effect of each independent variable on the dependent variable. Results: More than half (54.5%) of the respondents were found to be knowledgeable about reproductive and sexual rights. Attending elementary and high school in private schools [AOR: 2.08, 95% CI: 1.08, 3.99], coming from urban areas [AOR: 1.46, 95% CI: 1.00, 2.12], being a student of the faculty of health sciences [AOR: 2.98, 95% CI: 1.22, 7.30], participation in reproductive health clubs [AOR: 3.11, 95% CI: 2.08, 4.65], utilization of reproductive health services [AOR: 2.34, 95% CI: 1.49, 3.69] and discussing sexual issues with someone else [AOR: 2.31, 95% CI: 1.48, 3.62] were positively associated with knowledge of reproductive and sexual rights. Conclusions: The level of knowledge of students about reproductive and sexual rights was found to be low.
The Ministry of Education has to incorporate reproductive and sexual rights in the curricula of high schools and institutions of higher learning.
Background: Reproductive and sexual health rights are rights of all people, regardless of age, gender and other characteristics. That is, people have the right to make choices regarding their own sexuality and reproduction, provided that they respect the rights of others. Reproductive and sexual rights were first officially recognized at the International Conference on Population and Development (ICPD) in Cairo in 1994 [1]. The Program of Action of the ICPD recognized that meeting reproductive health (RH) needs is a vital requirement for human and social development. Protecting and promoting the reproductive and sexual rights of the youth and empowering them to make informed choices is key to their wellbeing [2,3]. Action for universal access to sexual and reproductive health (SRH) by 2015 was agreed by 179 countries; despite this international commitment, progress towards the program has been slow [4]. Reproductive health-targeted Millennium Development Goals (MDGs) will not be achieved without improving access to RH [5]. The majority of young people have very little knowledge of what sexual rights they are entitled to. Sometimes, they do not even appreciate the extent of these violations and, worse still, they do not know where they could go for legal or social advice [6,7]. Sources of information the youth traditionally access, such as parents, have a skewed conception of anything related to sexuality and approach it from a cautionary perspective rather than an informative one. Formal education on sexuality is limited to RH, which is commonly found in science and biology syllabi and does not cover reproductive and sexual rights [8].
The youth are unable to deal with such violations because of barriers like shame, guilt, embarrassment, not wanting friends and family to know, confidentiality concerns, and fear of not being believed [9], and partly because of their inadequate knowledge and experience of sexuality issues, including the legal instruments that may accord them an opportunity to claim and protect sexuality-related rights [8]. Young people face increasing pressures regarding sex and sexuality, including conflicting messages and norms, which may be perpetuated by a lack of awareness of their rights; as a result, many young people are unable to seek help when they need it and may be prevented from giving input into policy and decision-making processes [10]. Evidence from Ethiopian universities reveals that violations are rampant and inadequately addressed [11]. The in- and out-of-school Ethiopian youth aged between 15 and 24 years face many challenges and constitute the highest percentage of new HIV cases in the country [12]. Traditional practices such as early marriage, marriage by abduction, and female genital cutting adversely affect the health and wellbeing of young people [13]. Therefore, this study set out to assess the level of youth knowledge about reproductive and sexual rights and thereby generate appropriate intervention strategies so that mainstream issues of sexuality and related rights may be included in policies and university curricula.
5,302
381
12
[ "sexual", "students", "respondents", "rh", "reproductive", "study", "knowledge", "rights", "issues", "knowledgeable" ]
[ "test", "test" ]
[CONTENT] Reproductive Health | Reproductive and Sexual Rights | Youths | Ethiopia [SUMMARY]
[CONTENT] Reproductive Health | Reproductive and Sexual Rights | Youths | Ethiopia [SUMMARY]
[CONTENT] Reproductive Health | Reproductive and Sexual Rights | Youths | Ethiopia [SUMMARY]
[CONTENT] Reproductive Health | Reproductive and Sexual Rights | Youths | Ethiopia [SUMMARY]
[CONTENT] Reproductive Health | Reproductive and Sexual Rights | Youths | Ethiopia [SUMMARY]
[CONTENT] Reproductive Health | Reproductive and Sexual Rights | Youths | Ethiopia [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Ethiopia | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Male | Reproductive Health | Sex Offenses | Sexual Behavior | Students | Universities | Women's Rights | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Ethiopia | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Male | Reproductive Health | Sex Offenses | Sexual Behavior | Students | Universities | Women's Rights | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Ethiopia | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Male | Reproductive Health | Sex Offenses | Sexual Behavior | Students | Universities | Women's Rights | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Ethiopia | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Male | Reproductive Health | Sex Offenses | Sexual Behavior | Students | Universities | Women's Rights | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Ethiopia | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Male | Reproductive Health | Sex Offenses | Sexual Behavior | Students | Universities | Women's Rights | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Ethiopia | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Male | Reproductive Health | Sex Offenses | Sexual Behavior | Students | Universities | Women's Rights | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | knowledge | rights | issues | knowledgeable [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | knowledge | rights | issues | knowledgeable [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | knowledge | rights | issues | knowledgeable [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | knowledge | rights | issues | knowledgeable [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | knowledge | rights | issues | knowledgeable [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | knowledge | rights | issues | knowledgeable [SUMMARY]
[CONTENT] sexuality | rights | people | youth | reproductive | young | young people | related | sexual | access [SUMMARY]
[CONTENT] students | data | university | questionnaire | study | sodo | faculties | version | considered | lecture [SUMMARY]
[CONTENT] respondents | sexual | 95 ci | times likely knowledgeable | times likely | aor | times | ci | students | rh [SUMMARY]
[CONTENT] reproductive | sexual | reproductive sexual | high | high schools | schools | rights | reproductive sexual rights | knowledge | sexual rights [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | rights | knowledge | issues | data [SUMMARY]
[CONTENT] sexual | students | respondents | rh | reproductive | study | rights | knowledge | issues | data [SUMMARY]
[CONTENT] ||| ||| Millennium Development Goals ||| Wolaita Sodo University [SUMMARY]
[CONTENT] 642 | Wolaita Sodo University ||| ||| EPI | 3.5.3 | SPSS | 20 ||| ||| Bivariate [SUMMARY]
[CONTENT] More than half | 54.5% ||| 2.08 | 95% | CI | 1.08 | 3.99 | 1.46 | 95% | CI | 1.00 | 2.12 ||| 2.98 | 95% | CI | 1.22 | 7.30 | 3.11 | 95% | CI | 2.08 | 4.65 | 2.34 | 95% | CI | 1.49 | 3.69 | 2.31 | 95% | CI | 1.48 | 3.62 [SUMMARY]
[CONTENT] ||| The Ministry of Education [SUMMARY]
[CONTENT] ||| ||| Millennium Development Goals ||| Wolaita Sodo University ||| 642 | Wolaita Sodo University ||| ||| EPI | 3.5.3 | SPSS | 20 ||| ||| Bivariate ||| More than half | 54.5% ||| 2.08 | 95% | CI | 1.08 | 3.99 | 1.46 | 95% | CI | 1.00 | 2.12 ||| 2.98 | 95% | CI | 1.22 | 7.30 | 3.11 | 95% | CI | 2.08 | 4.65 | 2.34 | 95% | CI | 1.49 | 3.69 | 2.31 | 95% | CI | 1.48 | 3.62 ||| ||| The Ministry of Education [SUMMARY]
[CONTENT] ||| ||| Millennium Development Goals ||| Wolaita Sodo University ||| 642 | Wolaita Sodo University ||| ||| EPI | 3.5.3 | SPSS | 20 ||| ||| Bivariate ||| More than half | 54.5% ||| 2.08 | 95% | CI | 1.08 | 3.99 | 1.46 | 95% | CI | 1.00 | 2.12 ||| 2.98 | 95% | CI | 1.22 | 7.30 | 3.11 | 95% | CI | 2.08 | 4.65 | 2.34 | 95% | CI | 1.49 | 3.69 | 2.31 | 95% | CI | 1.48 | 3.62 ||| ||| The Ministry of Education [SUMMARY]
Methylphenidate normalizes frontocingulate underactivation during error processing in attention-deficit/hyperactivity disorder.
21664605
Children with attention-deficit/hyperactivity disorder (ADHD) have deficits in performance monitoring often improved with the indirect catecholamine agonist methylphenidate (MPH). We used functional magnetic resonance imaging to investigate the effects of single-dose MPH on activation of error processing brain areas in medication-naive boys with ADHD during a stop task that elicits 50% error rates.
BACKGROUND
Twelve medication-naive boys with ADHD were scanned twice, under either a single clinical dose of MPH or placebo, in a randomized, double-blind design while they performed an individually adjusted tracking stop task, designed to elicit 50% failures. Brain activation was compared within patients under either drug condition. To test for potential normalization effects of MPH, brain activation in ADHD patients under either drug condition was compared with that of 13 healthy age-matched boys.
METHODS
During failed inhibition, boys with ADHD under placebo relative to control subjects showed reduced brain activation in performance monitoring areas of dorsomedial and left ventrolateral prefrontal cortices, thalamus, cingulate, and parietal regions. MPH, relative to placebo, upregulated activation in these brain regions within patients and normalized all activation differences between patients and control subjects. During successful inhibition, MPH normalized reduced activation observed in patients under placebo compared with control subjects in parietotemporal and cerebellar regions.
RESULTS
MPH normalized brain dysfunction in medication-naive ADHD boys relative to control subjects in typical brain areas of performance monitoring, comprising left ventrolateral and dorsomedial frontal and parietal cortices. This could underlie the amelioration of MPH of attention and academic performance in ADHD.
CONCLUSIONS
[ "Adolescent", "Attention", "Attention Deficit Disorder with Hyperactivity", "Central Nervous System Stimulants", "Child", "Double-Blind Method", "Frontal Lobe", "Gyrus Cinguli", "Humans", "Magnetic Resonance Imaging", "Male", "Methylphenidate", "Neuropsychological Tests", "Psychomotor Performance" ]
3139835
null
null
null
null
Results
Performance The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1]. A multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).
null
null
[ "Subjects", "fMRI Paradigm: Stop Task", "fMRI Image Acquisition", "fMRI Image Analysis", "Performance", "Brain Activation", "Motion", "Within-Group Brain Activations", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions", "Failed Stop–Go Contrast", "Successful Stop–Go Contrast", "Conjunction Analysis Between Within-Group and Between-Group ANOVAs" ]
[ "Twelve medication-naïve, right-handed boys aged 10 to 15 years (mean age = 13, SD = 1) who met clinical diagnostic criteria for the combined (inattentive/hyperactive) subtype of ADHD (DSM-IV) were recruited through clinics. Clinical diagnosis of ADHD was established through interviews with an experienced child psychiatrist (A-MM) using the standardized Maudsley Diagnostic Interview to check for presence or absence of diagnostic criteria for any mental disorder as set out by DSM-IV (30). Exclusion criteria were lifetime comorbidity with any other psychiatric disorder, except for conduct/oppositional defiant disorder (present in one patient), as well as learning disability and specific reading disorder, neurological abnormalities, epilepsy, drug or substance abuse, and previous exposure to stimulant medication. Patients with ADHD also had to score above cutoff for hyperactive/inattentive symptoms on the Strengths and Difficulties Questionnaire for Parents (SDQ) (31). Patients were scanned twice, in a randomized, counterbalanced fashion, 1 week apart, 1 hour after either .3 mg/kg of MPH administration or placebo (vitamin C, 100 mg).\nThirteen right-handed adolescent boys in the age range of 11 to 16 years (mean age = 13, SD = 1) were recruited through advertisements in the same geographic areas of South London to ensure similar socioeconomic status and were scanned once. They scored below cutoff for behavioral problems in the SDQ and had no history of psychiatric disorder.\nAll participants were above the fifth percentile on the Raven progressive matrices performance IQ (32) (IQ mean estimate controls = 100, SD = 14; ADHD = 91, SD = 9) and were paid £30 for participation. Parental and child informed consent/assent and approval from the local ethics committee were obtained.\nUnivariate analyses of variance (ANOVAs) showed no group differences between boys with and without ADHD for age [F(1,25) = 2, p = .2] but did for IQ [F(1,25) = 8, p < .009]. 
IQ is associated with ADHD in the general population (33,34). We purposely did not match groups for IQ because matching ADHD and control groups for IQ would have created unrepresentative groups and would therefore have been misguided (35). Furthermore, IQ was significantly negatively correlated with the SDQ scores for inattention and hyperactivity (r = −.5, p < .001). We did not covary for IQ because when groups are not randomly selected, covarying for a variable that differs between groups violates the standard assumptions for analysis of covariance. When the covariate is intrinsic to the condition, it becomes meaningless to “adjust” group effects for differences in the covariate because it would alter the group effect in potentially problematic ways, leading to spurious results (35,36).", "The rapid, mixed-trial, event-related fMRI design was practiced by subjects once before scanning. The visual tracking stop task requires withholding of a motor response to a go stimulus when it is followed unpredictably by a stop signal (8,37,38). The basic task is a choice reaction time task (left- and right-pointing arrows: go signals) with a mean intertrial interval of 1.8 sec (156 go trials). In 20% of trials, pseudo-randomly interspersed, the go signals are followed (about 250 ms later) by arrows pointing upwards (stop signals), and subjects have to inhibit their motor responses (40 stop trials). A tracking algorithm changes the time interval between go-signal and stop-signal onsets according to each subject's inhibitory performance to ensure that the task is equally challenging for each individual and to provide 50% successful and 50% unsuccessful inhibition trials at every moment of the task.", "Gradient-echo echoplanar magnetic resonance imaging data were acquired on a GE Signa 1.5-T Horizon LX System (General Electric, Milwaukee, Wisconsin) at the Maudsley Hospital, London. A quadrature birdcage head coil was used for radio-frequency transmission and reception. 
During the 6-min run of the stop task, in each of 16 noncontiguous planes parallel to the anterior–posterior commissural line, 196 T2*-weighted magnetic resonance images depicting blood oxygen–level dependent (BOLD) contrast covering the whole brain were acquired with echo time = 40 msec, repetition time = 1.8 sec, flip angle = 90°, in-plane resolution = 3.1 mm, slice thickness = 7 mm, slice skip = .7 mm, providing complete brain coverage.", "At the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing. By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data, containing serial dependencies, with repeated deterministic (experimentally determined) effects. This method is outlined in detail in a previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors. 
Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series were computed.\nThe second-level analysis proceeded by computing either the group differences (patients and controls) or the drug condition differences (placebo, MPH) within patients at each voxel and the standard error of this difference (using the bootstrap estimates derived earlier). The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for a p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). This was also conducted using 40,000 permutations per voxel.\nIn addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). 
This cluster-level inference was also used for the within-group maps for each experimental condition.", "The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1].\nA multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).",
" Motion Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].\n Within-Group Brain Activations Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\n ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.\n ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\n Conjunction Analysis Between Within-Group and Between-Group ANOVAs To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.",
"Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].",
" Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).",
"Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).",
"Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).",
" Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.",
"MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).", "No significant activation differences were observed between medication conditions.", " Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). 
Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).

Successful Stop–Go Contrast
Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate, and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient threshold of p < .002 for the voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).
Under the MPH condition, ADHD patients did not differ from control subjects in any of these regions.
Activation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .03).
All group difference findings for both contrasts remained essentially unchanged when IQ was covaried.

Conjunction Analysis Between Within-Group and Between-Group ANOVAs
To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged: in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59), and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.
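The conjunction step described above, keeping only the voxels where two thresholded statistical maps are both significant and grouping the survivors into contiguous clusters, can be sketched as a simple mask intersection. This is an illustrative sketch, not the authors' pipeline; the function name, p-value threshold, and toy volume are assumptions:

```python
import numpy as np
from scipy import ndimage

def conjunction_clusters(pmap_within, pmap_between, alpha=0.05):
    """Label contiguous clusters of voxels significant in BOTH p maps."""
    conj = (pmap_within < alpha) & (pmap_between < alpha)  # logical AND of the two masks
    labels, n_clusters = ndimage.label(conj)               # contiguous 3-D clusters
    # centre of mass of each cluster, e.g. as a crude coordinate to report
    centres = ndimage.center_of_mass(conj, labels, range(1, n_clusters + 1))
    return labels, n_clusters, centres

# toy example: two overlapping "significant" blobs in an 8x8x8 p-value volume
rng = np.random.default_rng(0)
p_within = rng.uniform(0.1, 1.0, size=(8, 8, 8))   # background p values all > alpha
p_between = rng.uniform(0.1, 1.0, size=(8, 8, 8))
p_within[2:5, 2:5, 2:5] = 0.001    # blob significant in the within-patient map
p_between[3:6, 3:6, 3:6] = 0.001   # blob significant in the between-group map
labels, n, centres = conjunction_clusters(p_within, p_between)
# only the overlap of the two blobs (indices 3-4 in each dimension) survives
```

In a real pipeline the two inputs would be the corrected significance maps from the within-patient and between-group ANOVAs, registered to the same standard space.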
Section titles: Methods and Materials; Subjects; fMRI Paradigm: Stop Task; fMRI Image Acquisition; fMRI Image Analysis; Results; Performance; Brain Activation; Motion; Within-Group Brain Activations; Failed Stop–Go Contrast; Successful Stop–Go Contrast; ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions; Failed Stop–Go Contrast; Successful Stop–Go Contrast; ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions; Failed Stop–Go Contrast; Successful Stop–Go Contrast; Conjunction Analysis Between Within-Group and Between-Group ANOVAs; Discussion
Methods and Materials

Subjects
Twelve medication-naïve, right-handed boys aged 10 to 15 years (mean age = 13, SD = 1) who met clinical diagnostic criteria for the combined (inattentive/hyperactive) subtype of ADHD (DSM-IV) were recruited through clinics. Clinical diagnosis of ADHD was established through interviews with an experienced child psychiatrist (A-MM) using the standardized Maudsley Diagnostic Interview to check for the presence or absence of diagnostic criteria for any mental disorder as set out by DSM-IV (30). Exclusion criteria were lifetime comorbidity with any other psychiatric disorder, except for conduct/oppositional defiant disorder (present in one patient), as well as learning disability and specific reading disorder, neurological abnormalities, epilepsy, drug or substance abuse, and previous exposure to stimulant medication. Patients with ADHD also had to score above cutoff for hyperactive/inattentive symptoms on the Strengths and Difficulties Questionnaire for Parents (SDQ) (31). Patients were scanned twice, in a randomized, counterbalanced fashion, 1 week apart, 1 hour after either .3 mg/kg of MPH or placebo (vitamin C, 100 mg).
Thirteen right-handed adolescent boys aged 11 to 16 years (mean age = 13, SD = 1) were recruited through advertisements in the same geographic areas of South London to ensure similar socioeconomic status and were scanned once. They scored below cutoff for behavioral problems on the SDQ and had no history of psychiatric disorder.
All participants scored above the fifth percentile on the Raven Progressive Matrices performance IQ (32) (IQ mean estimate: controls = 100, SD = 14; ADHD = 91, SD = 9) and were paid £30 for participation. Parental and child informed consent/assent and approval from the local ethics committee were obtained.
Univariate analyses of variance (ANOVAs) showed no group difference between boys with and without ADHD for age [F(1,25) = 2, p = .2] but did show one for IQ [F(1,25) = 8, p < .009]. IQ is associated with ADHD in the general population (33,34). We purposely did not match groups for IQ because matching ADHD and control groups for IQ would have created unrepresentative groups and would therefore be misguided (35). Furthermore, IQ was significantly negatively correlated with the SDQ scores for inattention and hyperactivity (r = −.5, p < .001). We did not covary for IQ because, when groups are not randomly selected, covarying for a variable that differs between groups violates the standard assumptions of analysis of covariance. When the covariate is intrinsic to the condition, it becomes meaningless to "adjust" group effects for differences in the covariate, because doing so would alter the group effect in potentially problematic ways, leading to spurious results (35,36).

fMRI Paradigm: Stop Task
The rapid, mixed-trial, event-related fMRI design was practiced by subjects once before scanning.
The visual tracking stop task requires withholding of a motor response to a go stimulus when it is followed unpredictably by a stop signal (8,37,38). The basic task is a choice reaction time task (left- and right-pointing arrows: go signals) with a mean intertrial interval of 1.8 sec (156 go trials). In 20% of trials, pseudorandomly interspersed, the go signals are followed (about 250 msec later) by arrows pointing upward (stop signals), and subjects have to inhibit their motor responses (40 stop trials). A tracking algorithm changes the time interval between go-signal and stop-signal onsets according to each subject's inhibitory performance, ensuring that the task is equally challenging for each individual and yielding 50% successful and 50% unsuccessful inhibition trials at every moment of the task.

fMRI Image Acquisition
Gradient-echo echoplanar magnetic resonance imaging data were acquired on a GE Signa 1.5-T Horizon LX System (General Electric, Milwaukee, Wisconsin) at the Maudsley Hospital, London.
A quadrature birdcage head coil was used for radio-frequency transmission and reception. During the 6-min run of the stop task, 196 T2*-weighted magnetic resonance images depicting blood oxygen level-dependent (BOLD) contrast were acquired in each of 16 noncontiguous planes parallel to the anterior-posterior commissure (echo time = 40 msec, repetition time = 1.8 sec, flip angle = 90°, in-plane resolution = 3.1 mm, slice thickness = 7 mm, slice skip = .7 mm), providing complete brain coverage.

fMRI Image Analysis
At the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing.
By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data containing serial dependencies with repeated deterministic (experimentally determined) effects. This method is outlined in detail in previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors. Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series were computed.

The second-level analysis proceeded by computing either the group differences (patients vs. controls) or the drug condition differences (placebo vs. MPH) within patients at each voxel, together with the standard error of this difference (using the bootstrap estimates derived earlier). The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic, in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for a p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). This was also conducted using 40,000 permutations per voxel.

In addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41).
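The sign-flipping permutation test in step 2 above can be sketched for a single voxel as follows: build the null distribution of the paired t statistic by randomly flipping the sign of each subject's condition difference, which is exchangeable under the null hypothesis of no drug effect. This is a minimal illustration, not the authors' implementation; the function name, toy data, and default permutation count are assumptions:

```python
import numpy as np

def sign_flip_perm_test(diffs, n_perm=40000, seed=0):
    """Paired t statistic with a permutation null obtained by randomly
    flipping the sign of each subject-level difference (exchangeable under H0)."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    n = diffs.size

    t_obs = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(n))

    # each permutation keeps every difference's magnitude but randomizes its sign
    flipped = rng.choice([-1.0, 1.0], size=(n_perm, n)) * diffs
    t_null = flipped.mean(axis=1) / (flipped.std(axis=1, ddof=1) / np.sqrt(n))

    # two-sided p value: fraction of permuted |t| at least as extreme as observed
    p = float((np.abs(t_null) >= abs(t_obs)).mean())
    return t_obs, p

# toy example: a consistent positive MPH-minus-placebo effect in 12 subjects
t_obs, p = sign_flip_perm_test(
    [0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.6, 0.9, 1.1, 1.0, 0.8])
```

A common refinement includes the observed statistic itself in the null set so the p value can never be exactly zero; the large permutation count quoted above (40,000 per voxel) serves to keep the Monte Carlo error around a p value of .001 small.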
This cluster-level inference was also used for the within-group maps for each experimental condition.

Results

Performance
The probability of inhibition was about 50% in all subjects, with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1].
A multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend toward a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect on the standard deviation of go reaction times [F(2,34) = 5, p < .02], which was higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).
Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).", "The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1].\nA multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation to go trials [F(2,34) = 5, p < .02], which were higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH compared with placebo to slow down reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1).", " Motion Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].\nMultivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].\n Within-Group Brain Activations Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and 
occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\nControl subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum 
(Figure S1 in Supplement 1).\nActivation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\n Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\nControl subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, 
precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\nActivation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right 
globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\n ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).

Successful Stop–Go Contrast

No significant activation differences were observed between medication conditions.

ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions

Failed Stop–Go Contrast

Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3). Under the MPH condition, ADHD patients did not differ from controls in any of these regions.

Post-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).

Successful Stop–Go Contrast

Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate, and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for the voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1). Under the MPH condition, ADHD patients did not differ from control subjects in any of these regions.

Activation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).

All group difference findings for both contrasts remained essentially unchanged when IQ was covaried.

Conjunction Analysis Between Within-Group and Between-Group ANOVAs

To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42).
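The conjunction itself reduces to a voxelwise logical AND of the two significance maps. A minimal sketch, assuming thresholded p maps are available as arrays (an illustration, not the authors' code):

```python
import numpy as np

def conjunction_mask(p_within, p_between, alpha=0.05):
    """Return a boolean mask of voxels significant in BOTH tests.

    p_within:  p map for the within-group ANOVA (MPH > placebo in ADHD)
    p_between: p map for the between-group ANOVA (controls > ADHD placebo)
    """
    return (np.asarray(p_within) < alpha) & (np.asarray(p_between) < alpha)
```

Requiring each test to pass its own threshold is the minimum-statistic form of conjunction inference, which supports the "normalization" interpretation: a surviving voxel is both deficient under placebo and upregulated by MPH.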
Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.\nTo test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.", "Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].", " Failed Stop–Go Contrast Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and 
occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\nControl subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).\n Successful Stop–Go Contrast Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).\nActivation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral 
gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).", "Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas.\nActivation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex.\nActivation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).", "Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate.\nActivation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices.\nActivation in ADHD patients 
under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).", " Failed Stop–Go Contrast MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\nMPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).\n Successful Stop–Go Contrast No significant activation differences were observed between medication conditions.\nNo significant activation differences were observed between medication conditions.", "MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH.\nTo investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. 
There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).", "No significant activation differences were observed between medication conditions.", " Failed Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). 
Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).\n Successful Stop–Go Contrast Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. 
This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.\nRelative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.", "Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3).\nUnder the MPH condition, ADHD patients did not differ from controls in any of these regions.\nPost-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation 
in left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).", "Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1).\nUnder the MPH condition, ADHD patients did not differ from control subjects in any of these regions.\nActivation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3).\nAll group difference findings for both contrasts remained essentially unchanged when IQ was covaried.", "To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). 
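The conjunction analysis described above amounts to a voxelwise logical AND of two thresholded significance maps. A minimal sketch of the idea (the toy 2x2 p-value maps and the alpha level are illustrative assumptions, not the study's actual corrected maps):

```python
import numpy as np

# Hypothetical sketch of a conjunction analysis: a voxel survives only if it
# is significant in BOTH the within-patient contrast (MPH > placebo) and the
# between-group contrast (controls > ADHD under placebo).
def conjunction_mask(p_within, p_between, alpha=0.05):
    """Boolean map of voxels significant in both tests."""
    return (p_within < alpha) & (p_between < alpha)

# toy p-value "maps" for illustration only
p_within = np.array([[0.01, 0.20],
                     [0.03, 0.70]])   # MPH > placebo within ADHD
p_between = np.array([[0.04, 0.01],
                      [0.30, 0.02]])  # controls > ADHD placebo
mask = conjunction_mask(p_within, p_between)
# only the top-left voxel is significant in both maps
```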
Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.", "During error trials, ADHD boys under placebo compared with healthy control subjects showed significant underactivation in a typical error processing and performance monitoring network comprising dMFC, left IFC, thalamus, posterior cingulate/precuneus, and inferior temporoparietal regions. Among patients, MPH compared with placebo significantly upregulated activation in overlapping medial frontal, IFC, and parietal regions as well as the lenticular nucleus. Under MPH, brain activation differences between control subjects and ADHD patients were no longer observed. Reduced fronto-thalamo-parietal activation that was normalized with MPH was, furthermore, negatively associated with faster post-error reaction times in patients, which were trendwise slowed with MPH.\nDuring successful stop trials, ADHD boys showed underactivation in a right hemispheric network of medial temporal and inferior parietal brain regions and, at a more lenient threshold, in small clusters of bilateral IFC, thalamus, and pre-SMA. Although within-patient comparison between MPH and placebo did not show significant activation differences, all underactivations in patients relative to control subjects under placebo were normalized with a single dose of MPH.\nThe dMFC, comprising Brodmann areas 8, 6, and 32, including pre-SMA and ACC, is a typical region of error processing and performance monitoring in adults (37,43–48) and children (29,38,49). We have previously found this region to be underactivated in children with ADHD during oddball (50) and switch tasks (4). Errors indicate violation of a reward prediction (i.e., positive performance) and have been linked to midbrain dopamine (51). Normalization with MPH of underfunctioning of this region in ADHD is in line with the notion that phasic dopamine response modulates error-related mesial frontal activation (52,53). 
These findings extend evidence for upregulation with acute and chronic doses of MPH in previously medicated patients with ADHD in a more rostral ACC location during tasks of cognitive control (22,28,54).\nActivation in dMFC during errors triggers additional activation in functionally interconnected left IFC, as well as striatal, premotor, and parietal components of the error monitoring system, leading to post-error performance adjustments (43–45,48). IFC underactivation is one of the most consistent findings in fMRI studies in patients with ADHD, with right IFC dysfunction typically observed during inhibitory performance (7,8,11,55), in line with its role in inhibition (37,56), and left IFC during stop errors (4,29) as well as during flexible, selective, or sustained attention (4,9,26,50,57), in line with its role for performance monitoring (44,45,48,49) and saliency processing (58,59). IFC dysfunction is furthermore a disorder-specific neurofunctional deficit compared with patients with conduct (6,50,57,60) and obsessive compulsive (4) disorders. MPH thus appears to modulate an important neurofunctional biomarker of ADHD. The more predominantly left-hemispheric upregulation effect during errors may suggest a stronger effect of MPH on performance monitoring than inhibitory function in ADHD. Left IFC upregulation has previously been observed in ADHD patients in the context of an attention-demanding time discrimination task after acute (27) and 6 weeks of MPH treatment during interference inhibition (54). Structural studies have shown more normal cortical thinning in left IFC in psychostimulant-medicated compared with unmedicated ADHD children (61). Together, this raises the speculation that MPH may have a lateralized upregulating effect on left IFC structure and function.\nPosterior thalamic regions have been associated both with motor response inhibition (62) and performance monitoring (48,63,64). 
The finding that MPH normalizes activation in this region is in line with speculation of this region's involvement in the modulation of the dopaminergic error signal (63,65,66).\nThe fact that lower dMFC, IFC, and thalamic activation in ADHD patients was associated with faster post-error slowing, both of which were enhanced by MPH, reinforces the role of this network for abnormal error monitoring in ADHD. Posterior cingulate and precuneus are connected with MFC and parietal areas and form part of the performance monitoring network (47,49,67,68), mediating visual spatial attention to saliency (69,70) and the integration of performance outcome with attentional modulation (48). The fact that these regions were underactivated during both inhibition and its failure is in line with a generic attention role of these areas. In line with this, we and others have previously observed underactivation in ADHD patients in these regions during inhibition errors (4,6,8,60), as well as during other salient stimuli such as oddball, novel or incongruent targets (10,26,50,71,72).\nNormalization with MPH of reduced activation in typical frontoparietal regions of saliency processing and performance monitoring is consistent with the dopamine-deficiency hypothesis of ADHD given that dopamine agonists enhance stimulus salience (73). It is also in line with our previous findings of upregulation with MPH of posterior cingulate/precuneus in the same group of medication naive boys with ADHD during a target detection task, resulting in improved attention (26), and during an attention demanding time discrimination task (27). 
To our knowledge, normalization of inferior parietal activation with MPH has only recently been observed in ADHD patients, in the context of sustained attention (26) and interference inhibition (54).\nDuring successful stop trials, MPH also normalized underactivation in the cerebellum, which, together with subthalamic nucleus, caudate, and IFG, forms a neurofunctional network of motor response inhibition (38). These findings extend previous evidence for cerebellum upregulation with MPH in ADHD patients during interference inhibition (54) and time estimation (27).\nWithin patients, MPH also enhanced activation of caudate and putamen. This is in line with previous fMRI findings of caudate upregulation in ADHD patients after acute and chronic doses of MPH during inhibition and attention tasks (22,24,54) and is likely associated with the known effect of MPH on striatal dopamine transporter blockage (14,15).\nThe findings of more pronounced normalization effects of MPH on abnormal performance monitoring than inhibition networks could suggest that MPH enhances generic attention and performance monitoring functions more than inhibitory capacity. This would be in line with the behavioral effect of MPH of modulating go and post-error reaction times, but not inhibition speed, which, furthermore were correlated with the reduced frontothalamic error-processing activation that was normalized with MPH. Relative to control subjects, patients only significantly differed in intrasubject response variability. Small subject numbers, a relatively older child group, and fMRI task design restrictions may have been responsible for minor behavior differences. The findings of brain dysfunctions in boys with ADHD and their normalization under the clinical dose of MPH despite minor performance differences and only trend-level improvements with MPH show that brain activation is more sensitive than performance to detect both abnormalities and pharmacologic effects. 
This is in line with previous findings of marked brain dysfunctions in ADHD adolescents despite no stop task impairment (7,8,50) and higher sensitivity of brain activation than behavior to show pharmacologic effects of MPH in ADHD (24,26–28,54,74).\nMPH prevents the reuptake of catecholamines from the synaptic cleft by blocking dopamine and norepinephrine transporters (DAT/NET) (75,76), with higher affinity for the former (77,78). In healthy adults, MPH blocks 60% to 70% of striatal DAT in a dose-dependent manner, increasing extracellular levels of dopamine in striatum (75,79–82), as well as in frontal, thalamic, and temporal regions (83). The upregulating effects on basal ganglia, thalamic, and anterior cingulate activation were therefore likely mediated by mesolimbic striatocingulate dopaminergic pathways known to modulate error monitoring systems (63,64). In frontal regions, however, MPH upregulates noradrenaline to the same or greater extent than dopamine (84–86), via reuptake inhibition of NET that clear up both dopamine and noradrenaline (85,87–89). The upregulating effects on frontal activation, therefore, may have been mediated by enhanced catecholamine neurotransmission, in line with recent evidence that noradrenaline also plays a role in error monitoring (66,90).\nA limitation of the study is that patients were tested twice, whereas control subjects were only scanned once, for ethical and financial reasons. Practice effects, however, were overcome by the counterbalanced design. Another limitation is the relatively small sample size. Minimum numbers of 15 to 20 participants have been suggested for fMRI studies (91). 
Repeated-measures designs, however, are statistically more powerful than independent data sets, which makes the within-subject ANOVA more robust.\nTo our knowledge, this is the first study to show that a single dose of MPH in ADHD upregulates and normalizes the underfunctioning of dMFC, left IFC, posterior cingulate, and parietal regions that in concert play an important role in error processing. The normalization findings of these key regions of both performance monitoring and ADHD dysfunction reinforce the association between dopaminergic neurotransmission abnormalities, ADHD, and poor performance monitoring and may underlie the behavioral effects of improving attention and school performance in boys with ADHD." ]
[ "materials|methods", null, null, null, null, "results", null, null, null, null, null, null, null, null, null, null, null, null, null, "discussion" ]
[ "Attention-deficit/hyperactivity disorder (ADHD)", "error processing", "methylphenidate", "motor response inhibition", "performance monitoring", "stop task" ]
Methods and Materials: Subjects Twelve medication-naïve, right-handed boys aged 10 to 15 years (mean age = 13, SD = 1) who met clinical diagnostic criteria for the combined (inattentive/hyperactive) subtype of ADHD (DSM-IV) were recruited through clinics. Clinical diagnosis of ADHD was established through interviews with an experienced child psychiatrist (A-MM) using the standardized Maudsley Diagnostic Interview to check for presence or absence of diagnostic criteria for any mental disorder as set out by DSM-IV (30). Exclusion criteria were lifetime comorbidity with any other psychiatric disorder, except for conduct/oppositional defiant disorder (present in one patient), as well as learning disability and specific reading disorder, neurological abnormalities, epilepsy, drug or substance abuse, and previous exposure to stimulant medication. Patients with ADHD also had to score above cutoff for hyperactive/inattentive symptoms on the Strengths and Difficulties Questionnaire for Parents (SDQ) (31). Patients were scanned twice, in a randomized, counterbalanced fashion, 1 week apart, 1 hour after either .3 mg/kg of MPH administration or placebo (vitamin C, 100 mg). Thirteen male right-handed adolescent boys in the age range of 11 to 16 years (mean age = 13, SD = 1) were recruited through advertisements in the same geographic areas of South London to ensure similar socioeconomic status and were scanned once. They scored below cutoff for behavioral problems in the SDQ and had no history of psychiatric disorder. All participants were above the fifth percentile on the Raven progressive matrices performance IQ (32) (IQ mean estimate controls = 100, SD = 14; ADHD = 91, SD = 9) and paid £30 for participation. Parental and child informed consent/assent and approval from the local ethical committee was obtained. 
Univariate analyses of variance (ANOVAs) showed no group differences between boys with and without ADHD for age [F(1,25) = 2, p = .2] but did for IQ [F(1,25) = 8, p < .009]. IQ is associated with ADHD in the general population (33,34). We purposely did not match groups for IQ because matching ADHD and control groups for IQ would have created unrepresentative groups and therefore be misguided (35). Furthermore, IQ was significantly negatively correlated with the SDQ scores for inattention and hyperactivity (r = −.5, p < .001). We did not covary for IQ because when groups are not randomly selected, covarying for a variable that differs between groups violates the standard assumptions for analysis of covariance. When the covariate is intrinsic to the condition, it becomes meaningless to “adjust” group effects for differences in the covariate because it would alter the group effect in potentially problematic ways, leading to spurious results (35,36). fMRI Paradigm: Stop Task The rapid, mixed-trial, event-related fMRI design was practiced by subjects once before scanning. 
The visual tracking stop task requires withholding of a motor response to a go stimulus when it is followed unpredictably by a stop signal (8,37,38). The basic task is a choice reaction time task (left and right pointing arrows: go signals) with a mean intertrial-interval of 1.8 sec (156 go trials). In 20% of trials, pseudo-randomly interspersed, the go signals are followed (about 250 ms later) by arrows pointing upwards (stop signals), and subjects have to inhibit their motor responses (40 stop trials). A tracking algorithm changes the time interval between go-signal and stop-signal onsets according to each subject's inhibitory performance to ensure that the task is equally challenging for each individual and to provide 50% successful and 50% unsuccessful inhibition trials at every moment of the task. fMRI Image Acquisition Gradient-echo echoplanar magnetic resonance imaging data were acquired on a GE Signa 1.5-T Horizon LX System (General Electric, Milwaukee, Wisconsin) at the Maudsley Hospital, London. 
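The tracking rule for the stop-signal delay can be sketched as a one-up/one-down staircase. Step size and bounds below are assumptions for illustration; only the ~250 ms starting delay and the 50% target come from the text:

```python
# Illustrative one-up/one-down staircase for the stop-signal delay (SSD):
# a successful inhibition lengthens the go-to-stop interval (stopping gets
# harder), a failed inhibition shortens it, so each subject converges on
# ~50% successful stops. Step size and clamp bounds are assumed values.
def update_ssd(ssd_ms, inhibited, step=50, lo=50, hi=1000):
    ssd_ms = ssd_ms + step if inhibited else ssd_ms - step
    return max(lo, min(hi, ssd_ms))  # keep the delay in a plausible range
```

Applied trial by trial, the delay oscillates around the point at which the subject can stop on only half the trials, which is what equates task difficulty across individuals.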
A quadrature birdcage head coil was used for radio-frequency transmission and reception. During the 6-min run of the stop task, in each of 16 noncontiguous planes parallel to the anterior–posterior commissural, 196 T2*-weighted magnetic resonance images depicting blood oxygen–level dependent (BOLD) contrast covering the whole brain were acquired with echo time = 40 msec, repetition time = 1.8 sec, flip angle = 90°, in-plane resolution = 3.1 mm, slice thickness = 7 mm, slice skip = .7 mm, providing complete brain coverage. fMRI Image Analysis At the individual subject level, a standard general linear modeling approach was used to obtain estimates of the response size (beta) to each of the two stop task conditions (successful and unsuccessful stop trials) against an implicit baseline (go trials). Following transformation of the fMRI data for each individual into standard space and smoothing with a three-dimensional 7-mm full width at half maximum Gaussian filter, the experimental model was convolved for each condition with gamma variate functions having peak responses at 4 and 8 sec following stimulus onset to accommodate variability in BOLD response timing. 
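Building a regressor by convolving event onsets with a gamma variate can be sketched as below. Only the peak latencies (4 and 8 s) come from the text; the gamma shape parameter and the 30 s kernel support are assumptions:

```python
import numpy as np

# Sketch of an event regressor: convolve a stimulus impulse train with a
# gamma variate whose mode sits at a chosen latency (4 or 8 s in the text).
# Shape parameter and kernel length are assumed for illustration.
def regressor(onset_scans, n_scans, tr, peak):
    t = np.arange(0.0, 30.0, tr)          # kernel support (assumed 30 s)
    shape = 6.0                           # assumed gamma shape
    scale = peak / (shape - 1.0)          # puts the gamma's mode at `peak` s
    h = t ** (shape - 1.0) * np.exp(-t / scale)
    h /= h.max()                          # unit peak height
    stim = np.zeros(n_scans)
    stim[onset_scans] = 1.0               # impulse at each event's scan index
    return np.convolve(stim, h)[:n_scans]

# single event at scan 0, TR = 1.8 s: response peaks ~4 s later
reg = regressor([0], n_scans=20, tr=1.8, peak=4.0)
```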
By fitting these convolved model components to the time series at each voxel, beta estimates were obtained for each effect of interest. The standard errors of these beta estimates were computed nonparametrically using a bootstrap procedure designed to operate on time series data, containing serial dependencies, with repeated deterministic (experimentally determined) effects. This method is outlined in detail in a previous work (39). Two hundred bootstraps at each voxel were used to estimate parameter standard errors. Using the combined parameter estimates over all conditions, the mean fitted time series was also computed and, from the combined bootstrap parameter estimates for each bootstrap, the 95% confidence limits on the fitted time series was computed. The second-level analysis proceeded by computing either the group differences (patients and controls) or the drug condition differences (placebo, MPH) within patients at each voxel and the standard error of this difference (using the bootstrap estimates derived earlier). The significance of these differences was then tested in three ways: 1) a simple parametric random effects (paired t) test, using only the group difference/placebo-MPH effect size differences; 2) a permutation test of the same random effects t statistic in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40). This was also conducted using 40,000 permutations per voxel. In addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). 
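The sign-flipping scheme of test 2 can be sketched directly: under the null hypothesis the paired differences are symmetric about zero, so randomly flipping their signs rebuilds the null distribution of the statistic. This sketch uses a simple mean difference rather than the voxelwise t statistic and far fewer permutations than the 40,000 used per voxel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of a sign-flip permutation test: the null distribution of the mean
# paired difference is generated by randomly flipping the sign of each
# subject's difference. Statistic and permutation count are simplified.
def sign_flip_pvalue(diffs, n_perm=10000):
    diffs = np.asarray(diffs, dtype=float)
    observed = diffs.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    null = (signs * diffs).mean(axis=1)
    # two-sided p: fraction of sign-flipped means at least as extreme
    return float((np.abs(null) >= abs(observed)).mean())
```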
This cluster-level inference was also used for the within-group maps for each experimental condition. 
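The bootstrap standard errors can be illustrated with a plain residual-resampling version. Note this naive sketch deliberately ignores the serial dependence that the study's time-series bootstrap (39) was designed to preserve; it only shows the general idea of taking the spread of refitted betas:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of bootstrap standard errors for GLM betas: refit the model to
# resampled data and take the spread of the refitted betas. Unlike the
# study's time-series bootstrap (39), this version resamples residuals
# independently and so ignores serial dependence in the data.
def bootstrap_beta_se(X, y, n_boot=200):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    boots = []
    for _ in range(n_boot):
        y_star = X @ beta + rng.choice(resid, size=resid.size, replace=True)
        b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        boots.append(b)
    return beta, np.std(boots, axis=0)

# toy design: intercept plus one regressor with a known slope of 0.5
X = np.column_stack([np.ones(50), np.arange(50.0)])
y = 3.0 + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(50)
beta, se = bootstrap_beta_se(X, y, n_boot=100)
```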
The significance of these differences was then tested in three ways: 1) a simple parametric random-effects (paired t) test, using only the group difference/placebo–MPH effect size differences; 2) a permutation test of the same random-effects t statistic, in which the null distribution was estimated by randomly swapping the signs of the differences (we used 40,000 permutations per voxel to obtain a confidence limit of .0007–.0013 for a p value of .001); and 3) a mixed-effects test using both the effect size differences and their subject-level standard errors to accommodate first (subject) level heteroscedasticity (40), also conducted with 40,000 permutations per voxel. In addition to voxelwise maps, cluster-level inference on the contrast (beta) values was performed at a family-wise error corrected threshold of p < .05 using the Threshold-Free Cluster Enhancement method proposed by Smith and Nichols (41). This cluster-level inference was also used for the within-group maps for each experimental condition.
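The sign-flip permutation test of the paired t statistic can be sketched as follows; the function name and the two-sided p-value convention are assumptions, and a real analysis would run this at every voxel with the study's 40,000 permutations.

```python
import numpy as np

def sign_flip_pvalue(diffs, n_perm=40000, rng=None):
    """Permutation test of a paired difference via random sign flips.

    Under the null hypothesis the sign of each subject's difference is
    exchangeable, so the null distribution of the one-sample t statistic
    is built by randomly flipping the signs of the differences. Returns
    a two-sided p value with the standard +1 correction.
    """
    rng = np.random.default_rng(rng)
    diffs = np.asarray(diffs, dtype=float)
    n = diffs.size

    t_obs = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(n))

    signs = rng.choice([-1.0, 1.0], size=(n_perm, n))
    flipped = signs * diffs
    t_null = flipped.mean(axis=1) / (flipped.std(axis=1, ddof=1) / np.sqrt(n))

    return (np.sum(np.abs(t_null) >= abs(t_obs)) + 1) / (n_perm + 1)
```

The same sign-flipping scheme extends to the mixed-effects statistic by weighting each subject's difference with its bootstrap-derived standard error.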
Results: Performance: The probability of inhibition was about 50% in all subjects with no significant group differences, showing that the task algorithm worked [F(1,38) = 1; p < .3; Table 1]. A multivariate ANOVA between control subjects and ADHD patients under either drug condition showed a trend for a significant group effect [F(8,62) = 2, p < .09] due to a significant univariate group effect in the standard deviation of reaction times to go trials [F(2,34) = 5, p < .02], which was higher in patients under either medication condition compared with control subjects (p < .05). Post hoc tests furthermore revealed a trend for MPH, compared with placebo, to slow reaction times within ADHD patients to both go (p < .06) and post-error go trials (both p < .07) (Table 1). Brain Activation: Motion: Multivariate ANOVA showed no significant group differences between control subjects and ADHD patients under either drug condition in mean or maximum rotation or translation parameters in the x, y, or z dimensions [F(2,38) = .9, p < .5].
Within-Group Brain Activations: Failed Stop–Go Contrast: Control subjects activated relatively large clusters in left and right inferior frontal cortex (IFC)/anterior insula, medial frontal/anterior cingulate cortex (MFC/ACC), precentral, inferior parietal, middle and superior temporal areas, posterior thalamus and caudate, parahippocampal gyri, precuneus/posterior cingulate, occipital, and cerebellar areas. Activation in ADHD patients under placebo was in MFC and superior temporal cortex reaching into anterior and posterior insula, caudate, and inferior parietal and occipital cortex. Activation in ADHD patients under MPH was in a large cluster of ACC and MFC, in left and right IFC/anterior insula, left premotor cortex, right basal ganglia, bilateral middle and superior temporal, inferior parietal and occipital areas, hippocampal gyri, posterior cingulate, precuneus, and cerebellum (Figure S1 in Supplement 1).
Successful Stop–Go Contrast: Activation in healthy control boys was in a large cluster comprising left and right orbital and IFC, dorsolateral and MFC, insula, basal ganglia, hippocampus, posterior thalamic regions, pre- and postcentral gyri, inferior and superior parietal, temporal and occipital cortices, precuneus, and posterior cingulate. Activation in ADHD patients under placebo was in small clusters in right MFC, supplementary motor area (SMA), right superior temporal, postcentral, left inferior and superior parietal, and occipital cortices. Activation in ADHD patients under the MPH condition was in superior and MFC/ACC, right globus pallidus and putamen, right superior temporal and superior and inferior parietal cortex, posterior insula, and left cerebellum (Figure S1 in Supplement 1).
ANOVA Within-Patient Comparisons in Brain Activation Between the Placebo and the MPH Conditions: Failed Stop–Go Contrast: MPH contrasted with placebo elicited enhanced activation in left IFC, reaching into insula and putamen; in right IFC, reaching into insula, putamen, and caudate; in left medial frontal lobe; and in left inferior parietal, precuneus, and occipital regions (Figure 1, Table 2). The placebo condition elicited no enhanced activation over MPH. To investigate whether brain regions that differed with MPH were associated with task performance, statistical measures of the BOLD response were extracted for each subject in each ANOVA cluster and then correlated with performance variables. There was a significant positive correlation in patients between post-error reaction times and left and right IFC activation (r = .7, p < .02) and between go reaction times and right IFC activation (r = .6, p < .05) and a negative correlation between response variability and right inferior parietal activation (r = −.7, p < .02).
Successful Stop–Go Contrast: No significant activation differences were observed between medication conditions.
ANOVA Between-Group Comparisons in Brain Activation Between Control Subjects and Boys with ADHD Under Either the Placebo or the MPH Conditions: Failed Stop–Go Contrast: Relative to control subjects, ADHD patients under the placebo condition showed underactivation in left IFC and dorsomedial prefrontal cortices (dMFC) (including pre-SMA), right premotor, superior and inferior parietal cortices, posterior cingulate/precuneus, posterior thalamus, and bilateral inferior temporo-occipital areas (Figure 2A, Table 3). Under the MPH condition, ADHD patients did not differ from controls in any of these regions. Post-error slowing in ADHD patients under placebo, but not in control subjects, was significantly positively correlated with activation in the left IFC, premotor, dMFC, and thalamic underactivation clusters (r > .6 for all clusters, p < .05) as well as with superior parietal, occipital, and cerebellar activation (r < .4, p < .05). Standard deviation of reaction times was correlated with dMFC activation in controls (r = .6, p < .02).
Successful Stop–Go Contrast: Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate, and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1). Under the MPH condition, ADHD patients did not differ from control subjects in any of these regions. Activation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3). All group difference findings for both contrasts remained essentially unchanged when IQ was covaried.
Conjunction Analysis Between Within-Group and Between-Group ANOVAs: To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59), and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3.
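Voxelwise conjunction of two statistic maps amounts to a minimum-statistic test: a voxel survives only if it exceeds the significance threshold in both maps. The sketch below is a generic illustration of that idea, not the authors' implementation; the function name, array shapes, and threshold are assumptions.

```python
import numpy as np

def conjunction_mask(map_a, map_b, thresh):
    """Minimum-statistic conjunction of two voxelwise statistic maps.

    A voxel is declared jointly significant only if its statistic exceeds
    the threshold in BOTH maps, i.e. min(map_a, map_b) > thresh. Applied
    per voxel, this yields the overlap of the two thresholded maps.
    """
    return np.minimum(map_a, map_b) > thresh
```

Here the two inputs would be the within-patient (MPH > placebo) and between-group (controls > ADHD placebo) maps, and the surviving voxels form the overlap clusters reported above.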
Successful Stop–Go Contrast: Relative to control subjects, ADHD patients under the placebo condition showed underactivation in a right hemispheric network of medial temporal and inferior parietal lobes, precuneus/posterior cingulate and cerebellum (Figure 2B, Table 3). To test our hypothesis of IFC underactivation, we reanalyzed the data at a more lenient p value of p < .002 for voxelwise comparison. This elicited additional underactivation in right IFC, left and right subthalamic nuclei, and the pre-SMA (Table 3 and Figure S2 in Supplement 1). Under the MPH condition, ADHD patients did not differ from control subjects in any of these regions. Activation in posterior cingulate and lingual gyrus correlated significantly positively with post-error go reaction times in ADHD boys under placebo (r > .6, p < .3). All group difference findings for both contrasts remained essentially unchanged when IQ was covaried. Conjunction Analysis Between Within-Group and Between-Group ANOVAs: To test whether brain regions that were upregulated with MPH relative to placebo within patients overlapped with brain regions that were reduced in patients under placebo relative to controls and then normalized with MPH, we performed a conjunction analysis by determining the voxels where the within-group ANOVA (MPH > placebo in ADHD) and the between-group ANOVA (control subjects > ADHD placebo) were both significant (42). Three clusters emerged, in left IFC (Talairach coordinates: −43, 7, 4), right SMA (Talairach coordinates: 7, 4, 59) and right inferior parietal lobe (Talairach coordinates: 32, −63, 42). Overlapping clusters are also indicated in bold in Table 3 and shown in Figure 3. Discussion: During error trials, ADHD boys under placebo compared with healthy control subjects showed significant underactivation in a typical error processing and performance monitoring network comprising dMFC, left IFC, thalamus, posterior cingulate/precuneus, and inferior temporoparietal regions. 
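The conjunction analysis described above, which retains only voxels where both the within-group ANOVA (MPH > placebo) and the between-group ANOVA (controls > ADHD placebo) are significant, amounts to a voxelwise logical AND of two thresholded statistic maps. A minimal sketch with synthetic maps and an illustrative voxel threshold; the grid size, values, and threshold are invented and do not reflect the study's actual cluster-level statistics:

```python
import numpy as np

# Synthetic voxelwise z-maps on a tiny 4x4x4 grid (illustrative only).
rng = np.random.default_rng(42)
within_group = rng.normal(size=(4, 4, 4))   # e.g., MPH > placebo in ADHD
between_group = rng.normal(size=(4, 4, 4))  # e.g., controls > ADHD placebo
z_thresh = 1.645  # illustrative one-sided threshold

# Conjunction: voxels significant in BOTH maps.
conjunction = (within_group > z_thresh) & (between_group > z_thresh)
overlap_voxels = np.argwhere(conjunction)   # coordinates of overlapping voxels
print(int(conjunction.sum()), "overlapping voxels")
```

In practice the thresholded maps would come from the study's nonparametric cluster statistics rather than a simple z cutoff, but the overlap step itself is this elementwise AND.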
Among patients, MPH compared with placebo significantly upregulated activation in overlapping medial frontal, IFC, and parietal regions as well as the lenticular nucleus. Under MPH, brain activation differences between control subjects and ADHD patients were no longer observed. Reduced fronto-thalamo-parietal activation that was normalized with MPH was, furthermore, negatively associated with faster post-error reaction times in patients, which were trendwise slowed with MPH. During successful stop trials, ADHD boys showed underactivation in a right hemispheric network of medial temporal and inferior parietal brain regions and, at a more lenient threshold, in small clusters of bilateral IFC, thalamus, and pre-SMA. Although within-patient comparison between MPH and placebo did not show significant activation differences, all underactivations in patients relative to control subjects under placebo were normalized with a single dose of MPH. The dMFC, comprising Brodmann areas 8, 6, and 32, including pre-SMA and ACC, is a typical region of error processing and performance monitoring in adults (37,43–48) and children (29,38,49). We have previously found this region to be underactivated in children with ADHD during oddball (50) and switch tasks (4). Errors indicate violation of a reward prediction (i.e., positive performance) and have been linked to midbrain dopamine (51). Normalization with MPH of underfunctioning of this region in ADHD is in line with the notion that phasic dopamine response modulates error-related mesial frontal activation (52,53). These findings extend evidence for upregulation with acute and chronic doses of MPH in previously medicated patients with ADHD in a more rostral ACC location during tasks of cognitive control (22,28,54). 
Activation in dMFC during errors triggers additional activation in functionally interconnected left IFC, as well as striatal, premotor, and parietal components of the error monitoring system, leading to post-error performance adjustments (43–45,48). IFC underactivation is one of the most consistent findings in fMRI studies in patients with ADHD, with right IFC dysfunction typically observed during inhibitory performance (7,8,11,55), in line with its role in inhibition (37,56), and left IFC during stop errors (4,29) as well as during flexible, selective, or sustained attention (4,9,26,50,57), in line with its role for performance monitoring (44,45,48,49) and saliency processing (58,59). IFC dysfunction is furthermore a disorder-specific neurofunctional deficit compared with patients with conduct (6,50,57,60) and obsessive compulsive (4) disorders. MPH thus appears to modulate an important neurofunctional biomarker of ADHD. The more predominantly left-hemispheric upregulation effect during errors may suggest a stronger effect of MPH on performance monitoring than inhibitory function in ADHD. Left IFC upregulation has previously been observed in ADHD patients in the context of an attention-demanding time discrimination task after acute (27) and 6 weeks of MPH treatment during interference inhibition (54). Structural studies have shown more normal cortical thinning in left IFC in psychostimulant-medicated compared with unmedicated ADHD children (61). Together, this raises the speculation that MPH may have a lateralized upregulating effect on left IFC structure and function. Posterior thalamic regions have been associated both with motor response inhibition (62) and performance monitoring (48,63,64). The finding that MPH normalizes activation in this region is in line with speculation of this region's involvement in the modulation of the dopaminergic error signal (63,65,66). 
The fact that lower dMFC, IFC, and thalamic activation in ADHD patients was associated with reduced post-error slowing (ie, faster post-error reaction times), both of which were enhanced by MPH, reinforces the role of this network for abnormal error monitoring in ADHD. Posterior cingulate and precuneus are connected with MFC and parietal areas and form part of the performance monitoring network (47,49,67,68), mediating visual spatial attention to saliency (69,70) and the integration of performance outcome with attentional modulation (48). The fact that these regions were underactivated during both inhibition and its failure is in line with a generic attention role of these areas. In line with this, we and others have previously observed underactivation in ADHD patients in these regions during inhibition errors (4,6,8,60), as well as during other salient stimuli such as oddball, novel or incongruent targets (10,26,50,71,72). Normalization with MPH of reduced activation in typical frontoparietal regions of saliency processing and performance monitoring is consistent with the dopamine-deficiency hypothesis of ADHD given that dopamine agonists enhance stimulus salience (73). It is also in line with our previous findings of upregulation with MPH of posterior cingulate/precuneus in the same group of medication-naive boys with ADHD during a target detection task, resulting in improved attention (26), and during an attention-demanding time discrimination task (27). To our knowledge, normalization of inferior parietal activation with MPH has only recently been observed in ADHD patients, in the context of sustained attention (26) and interference inhibition (54). During successful stop trials, MPH also normalized underactivation in the cerebellum, which, together with subthalamic nucleus, caudate, and IFG, forms a neurofunctional network of motor response inhibition (38). 
These findings extend previous evidence for cerebellum upregulation with MPH in ADHD patients during interference inhibition (54) and time estimation (27). Within patients, MPH also enhanced activation of caudate and putamen. This is in line with previous fMRI findings of caudate upregulation in ADHD patients after acute and chronic doses of MPH during inhibition and attention tasks (22,24,54) and is likely associated with the known effect of MPH on striatal dopamine transporter blockade (14,15). The findings of more pronounced normalization effects of MPH on abnormal performance monitoring than inhibition networks could suggest that MPH enhances generic attention and performance monitoring functions more than inhibitory capacity. This would be in line with the behavioral effect of MPH of modulating go and post-error reaction times, but not inhibition speed, which, furthermore, were correlated with the reduced frontothalamic error-processing activation that was normalized with MPH. Relative to control subjects, patients only significantly differed in intrasubject response variability. Small subject numbers, a relatively older child group, and fMRI task design restrictions may explain why only minor behavioral differences were observed. The findings of brain dysfunctions in boys with ADHD and their normalization under the clinical dose of MPH despite minor performance differences and only trend-level improvements with MPH show that brain activation is more sensitive than performance to detect both abnormalities and pharmacologic effects. This is in line with previous findings of marked brain dysfunctions in ADHD adolescents despite no stop task impairment (7,8,50) and higher sensitivity of brain activation than behavior to show pharmacologic effects of MPH in ADHD (24,26–28,54,74). MPH prevents the reuptake of catecholamines from the synaptic cleft by blocking dopamine and norepinephrine transporters (DAT/NET) (75,76), with higher affinity for the former (77,78). 
In healthy adults, MPH blocks 60% to 70% of striatal DAT in a dose-dependent manner, increasing extracellular levels of dopamine in the striatum (75,79–82), as well as in frontal, thalamic, and temporal regions (83). The upregulating effects on basal ganglia, thalamic, and anterior cingulate activation were therefore likely mediated by mesolimbic striatocingulate dopaminergic pathways known to modulate error monitoring systems (63,64). In frontal regions, however, MPH upregulates noradrenaline to the same or greater extent than dopamine (84–86), via reuptake inhibition of NET, which clears both dopamine and noradrenaline (85,87–89). The upregulating effects on frontal activation, therefore, may have been mediated by enhanced catecholamine neurotransmission, in line with recent evidence that noradrenaline also plays a role in error monitoring (66,90). A limitation of the study is that patients were tested twice, whereas control subjects were only scanned once, for ethical and financial reasons. Practice effects, however, were overcome by the counterbalanced design. Another limitation is the relatively small sample size. Minimum numbers of 15 to 20 participants have been suggested for fMRI studies (91). Repeated-measures designs, however, are statistically more powerful than independent data sets, which makes the within-subject ANOVA more robust. To our knowledge, this is the first study to show that a single dose of MPH in ADHD upregulates and normalizes the underfunctioning of dMFC, left IFC, posterior cingulate, and parietal regions that in concert play an important role in error processing. The normalization findings of these key regions of both performance monitoring and ADHD dysfunction reinforce the association between dopaminergic neurotransmission abnormalities, ADHD, and poor performance monitoring and may underlie the behavioral effects of improving attention and school performance in boys with ADHD.
Background: Children with attention-deficit/hyperactivity disorder (ADHD) have deficits in performance monitoring often improved with the indirect catecholamine agonist methylphenidate (MPH). We used functional magnetic resonance imaging to investigate the effects of single-dose MPH on activation of error processing brain areas in medication-naive boys with ADHD during a stop task that elicits 50% error rates. Methods: Twelve medication-naive boys with ADHD were scanned twice, under either a single clinical dose of MPH or placebo, in a randomized, double-blind design while they performed an individually adjusted tracking stop task, designed to elicit 50% failures. Brain activation was compared within patients under either drug condition. To test for potential normalization effects of MPH, brain activation in ADHD patients under either drug condition was compared with that of 13 healthy age-matched boys. Results: During failed inhibition, boys with ADHD under placebo relative to control subjects showed reduced brain activation in performance monitoring areas of dorsomedial and left ventrolateral prefrontal cortices, thalamus, cingulate, and parietal regions. MPH, relative to placebo, upregulated activation in these brain regions within patients and normalized all activation differences between patients and control subjects. During successful inhibition, MPH normalized reduced activation observed in patients under placebo compared with control subjects in parietotemporal and cerebellar regions. Conclusions: MPH normalized brain dysfunction in medication-naive ADHD boys relative to control subjects in typical brain areas of performance monitoring, comprising left ventrolateral and dorsomedial frontal and parietal cortices. This could underlie the amelioration of MPH of attention and academic performance in ADHD.
null
null
12,775
303
20
[ "adhd", "right", "activation", "patients", "mph", "left", "ifc", "inferior", "placebo", "parietal" ]
[ "test", "test" ]
null
null
null
null
null
null
[CONTENT] Attention-deficit/hyperactivity disorder (ADHD) | error processing | methylphenidate | motor response inhibition | performance monitoring | stop task [SUMMARY]
null
[CONTENT] Attention-deficit/hyperactivity disorder (ADHD) | error processing | methylphenidate | motor response inhibition | performance monitoring | stop task [SUMMARY]
null
null
null
[CONTENT] Adolescent | Attention | Attention Deficit Disorder with Hyperactivity | Central Nervous System Stimulants | Child | Double-Blind Method | Frontal Lobe | Gyrus Cinguli | Humans | Magnetic Resonance Imaging | Male | Methylphenidate | Neuropsychological Tests | Psychomotor Performance [SUMMARY]
null
[CONTENT] Adolescent | Attention | Attention Deficit Disorder with Hyperactivity | Central Nervous System Stimulants | Child | Double-Blind Method | Frontal Lobe | Gyrus Cinguli | Humans | Magnetic Resonance Imaging | Male | Methylphenidate | Neuropsychological Tests | Psychomotor Performance [SUMMARY]
null
null
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
null
null
[CONTENT] adhd | right | activation | patients | mph | left | ifc | inferior | placebo | parietal [SUMMARY]
null
[CONTENT] adhd | right | activation | patients | mph | left | ifc | inferior | placebo | parietal [SUMMARY]
null
null
null
[CONTENT] compared | trend | significant | group effect | significant group | group | subjects | effect | trials | table [SUMMARY]
null
[CONTENT] activation | adhd | right | superior | patients | left | ifc | adhd patients | inferior | posterior [SUMMARY]
null
null
null
[CONTENT] ||| MPH ||| MPH [SUMMARY]
null
[CONTENT] MPH ||| MPH | 50% ||| Twelve | MPH | 50% ||| ||| MPH | 13 ||| ||| MPH ||| MPH ||| MPH ||| MPH | ADHD [SUMMARY]
null
Improvement in 24-hour bronchodilation and symptom control with aclidinium bromide versus tiotropium and placebo in symptomatic patients with COPD: post hoc analysis of a Phase IIIb study.
28652725
A previous Phase IIIb study (NCT01462929) in patients with moderate to severe COPD demonstrated that 6 weeks of treatment with aclidinium led to improvements in 24-hour bronchodilation comparable to those with tiotropium, and improvement of symptoms versus placebo. This post hoc analysis was performed to assess the effect of treatment in the symptomatic patient group participating in the study.
BACKGROUND
Symptomatic patients (defined as those with Evaluating Respiratory Symptoms [E-RS™] in COPD baseline score ≥10 units) received aclidinium bromide 400 μg twice daily (BID), tiotropium 18 μg once daily (QD), or placebo, for 6 weeks. Lung function, COPD respiratory symptoms, and incidence of adverse events (AEs) were assessed.
METHODS
In all, 277 symptomatic patients were included in this post hoc analysis. Aclidinium and tiotropium treatment improved forced expiratory volume in 1 second (FEV1) from baseline to week 6 at all time points over 24 hours versus placebo. In addition, improvements in FEV1 from baseline during the nighttime period were observed for aclidinium versus tiotropium on day 1 (aclidinium 157 mL, tiotropium 67 mL; P<0.001) and week 6 (aclidinium 153 mL, tiotropium 90 mL; P<0.05). Aclidinium improved trough FEV1 from baseline versus placebo and tiotropium at day 1 (aclidinium 136 mL, tiotropium 68 mL; P<0.05) and week 6 (aclidinium 137 mL, tiotropium 71 mL; P<0.05). Aclidinium also improved early-morning and nighttime symptom severity, limitation of early-morning activities, and E-RS Total and domain scores versus tiotropium (except E-RS Chest Symptoms) and placebo over 6 weeks. Tolerability was similar across treatment arms, with a comparable incidence of AEs in each.
RESULTS
In this post hoc analysis of symptomatic patients with moderate to severe COPD, aclidinium 400 μg BID provided additional improvements compared with tiotropium 18 μg QD in: 1) bronchodilation, particularly during the nighttime, 2) daily COPD symptoms (E-RS), 3) early-morning and nighttime symptoms, and 4) early-morning limitation of activity.
CONCLUSION
[ "Activities of Daily Living", "Aged", "Bronchodilator Agents", "Circadian Rhythm", "Double-Blind Method", "Female", "Forced Expiratory Volume", "Humans", "Lung", "Male", "Middle Aged", "Muscarinic Antagonists", "Pulmonary Disease, Chronic Obstructive", "Recovery of Function", "Severity of Illness Index", "Time Factors", "Tiotropium Bromide", "Treatment Outcome", "Tropanes" ]
5476673
Introduction
Symptoms of COPD can vary in severity over a 24-hour period, and studies indicate that they are generally worse in the early morning and at nighttime.1–3 Symptoms include chronic cough, sputum production, and breathlessness, which can severely impact on a patient’s daily activities and overall well-being,3 and have a corresponding high socioeconomic burden.4 Estimates suggest that the frequency of nocturnal symptoms and symptomatic sleep disturbance may exceed 75% in patients with COPD, and potential long-term consequences may include lung function changes, increased exacerbation frequency, emergence or worsening of cardiovascular disease, impaired quality of life, and increased mortality.1 It is therefore important that symptoms over the entire 24-hour day are identified and managed appropriately. In order to provide appropriate therapy, clinical guidelines (Global initiative for chronic Obstructive Lung Disease [GOLD]) suggest that symptoms, airflow limitation, and risk of exacerbations are assessed.5 Patients are classified into one of four groups according to their symptom burden and risk of exacerbations: A, low risk, less symptoms; B, low risk, more symptoms; C, high risk, less symptoms; or D, high risk, more symptoms;5 current evidence suggests that bronchodilator treatment may be more effective in those patients who are considered symptomatic (ie, groups B and D).5 Bronchodilator therapies are a mainstay of COPD treatment, with two classes of long-acting bronchodilators currently available: long-acting muscarinic antagonists (LAMAs) and long-acting β2-agonists (LABAs). LAMAs inhibit the action of acetylcholine at muscarinic receptors, while LABAs enhance cAMP signaling through stimulation of β2-adrenergic receptors, resulting in the relaxation of bronchial smooth muscle.5 The LAMA aclidinium bromide is a maintenance bronchodilator therapy for adults with COPD. 
The efficacy and tolerability results from a Phase IIIb study in patients with moderate to severe COPD, who received either aclidinium 400 μg twice daily (BID), the active comparator tiotropium 18 μg once daily (QD), or placebo have been previously reported.6 Briefly, following 6 weeks of treatment, patients receiving aclidinium 400 μg BID demonstrated improvements in 24-hour bronchodilation, compared with placebo, that were comparable with tiotropium 18 μg QD. In addition, COPD symptoms significantly improved from baseline with aclidinium, but not tiotropium, compared with placebo.6 These results were similar to those observed in a prior 2-week Phase IIa trial.7 Furthermore, a recent real-world study in patients with COPD reported improvements in nighttime and early-morning symptoms, limitation of morning activities, and quality of life over 3 months with aclidinium 400 μg BID, compared with baseline.8 Since aclidinium has a greater impact on COPD symptoms than tiotropium,9,10 and the “more symptomatic” patient groups stand to benefit more from bronchodilator treatment than the “less symptomatic” groups, aclidinium may provide an additional therapeutic benefit over tiotropium in these patients. This study reports the findings of a post hoc analysis, which focused on the response in the symptomatic patient group. The key objective of this analysis was to identify any differences in 24-hour lung function and symptom control between treatment with aclidinium 400 μg BID and tiotropium 18 μg QD in this population of patients.
Post hoc analysis
This post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units. This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11
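The symptomatic-subgroup definition above is a simple cutoff on the baseline E-RS score. A minimal sketch of applying it; the ≥10-unit threshold comes from the paper, while the patient IDs and scores below are invented for illustration:

```python
# Select "symptomatic" patients using the paper's E-RS baseline cutoff
# (score >= 10 units). Patient IDs and scores are made-up example data.
E_RS_THRESHOLD = 10.0

baseline_ers = {"pt01": 7.5, "pt02": 12.0, "pt03": 10.0, "pt04": 21.5}

symptomatic = {pid: score for pid, score in baseline_ers.items()
               if score >= E_RS_THRESHOLD}
print(sorted(symptomatic))  # ['pt02', 'pt03', 'pt04']
```

Note that the threshold is inclusive (a score of exactly 10 counts as symptomatic), matching the "≥10 units" definition.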
Results
Patients In all, 414 patients were randomized to treatment in the overall study (2:2:1 ratio), of which 277 were defined as symptomatic (E-RS baseline score ≥10 units) and included in this post hoc subgroup analysis (placebo: n=60; aclidinium 400 μg: n=116; tiotropium 18 μg: n=101) (Figure 1). The percentages of patients in each treatment arm of this post hoc analysis were similar to those in the primary study (placebo, 21.7% vs 20.5%; aclidinium 400 μg, 41.9% vs 41.3%; tiotropium 18 μg, 36.5% vs 38.2%, respectively). Demographics and baseline characteristics in the subgroup of symptomatic patients were similar to those in the overall study population (symptomatic patients: mean age 62.1 years, 65.0% male, 54.5% current smokers, post-bronchodilator FEV1 54.6% predicted). Patient demographics and baseline characteristics for symptomatic patients were also similar across treatment arms, with the exception of a higher proportion of male patients in the active treatment groups compared with placebo, and a higher proportion of patients with severe COPD in the tiotropium group (Table 1). Mean post-bronchodilator percent predicted FEV1 and COPD symptoms scores at baseline were similar across treatment arms (Table 1). In all, 414 patients were randomized to treatment in the overall study (2:2:1 ratio), of which 277 were defined as symptomatic (E-RS baseline score ≥10 units) and included in this post hoc subgroup analysis (placebo: n=60; aclidinium 400 μg: n=116; tiotropium 18 μg: n=101) (Figure 1). The percentages of patients in each treatment arm of this post hoc analysis were similar to those in the primary study (placebo, 21.7% vs 20.5%; aclidinium 400 μg, 41.9% vs 41.3%; tiotropium 18 μg, 36.5% vs 38.2%, respectively). 
Demographics and baseline characteristics in the subgroup of symptomatic patients were similar to those in the overall study population (symptomatic patients: mean age 62.1 years, 65.0% male, 54.5% current smokers, post-bronchodilator FEV1 54.6% predicted). Patient demographics and baseline characteristics for symptomatic patients were also similar across treatment arms, with the exception of a higher proportion of male patients in the active treatment groups compared with placebo, and a higher proportion of patients with severe COPD in the tiotropium group (Table 1). Mean post-bronchodilator percent predicted FEV1 and COPD symptoms scores at baseline were similar across treatment arms (Table 1). Efficacy Lung function Lung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2). During the nighttime period (AUC12–24 h/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than tiotropium 18 μg QD on day 1 (157 vs 67 mL for aclidinium and tiotropium, respectively; P<0.001) and week 6 (153 vs 90 mL for aclidinium and tiotropium, respectively; P<0.05). Aclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotro-pium, respectively; P<0.05) and week 6 (137 vs 71 mL for aclidinium and tiotropium, respectively; P<0.05) in symptomatic patients (Figure 3). Lung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. 
Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2). During the nighttime period (AUC12–24 h/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than tiotropium 18 μg QD on day 1 (157 vs 67 mL for aclidinium and tiotropium, respectively; P<0.001) and week 6 (153 vs 90 mL for aclidinium and tiotropium, respectively; P<0.05). Aclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotro-pium, respectively; P<0.05) and week 6 (137 vs 71 mL for aclidinium and tiotropium, respectively; P<0.05) in symptomatic patients (Figure 3). COPD symptoms in symptomatic patients In this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score was greater with aclidinium compared with placebo (P<0.001) and tiotropium (P<0.05) over 6 weeks (Figure 4A): −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. For each of the E-RS domains, greater improvements from baseline in E-RS score in symptomatic patients were also observed for aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) (Figure 4B) over 6 weeks. 
Overall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium treatment versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). Aclidinium also demonstrated improvements in individual early-morning symptom domains; shortness of breath and cough symptom scores improved in symptomatic patients treated with aclidinium compared with placebo over 6 weeks (both P<0.05; Figure 5A). A reduction in overall nighttime symptom severity from baseline was observed over 6 weeks with aclidinium versus placebo and tiotropium in symptomatic patients (both P<0.05; Figure 5B). Numerical improvements in early-morning or nighttime symptom severity were observed for tiotropium versus placebo in this subgroup. In symptomatic patients, limitation of early-morning activity caused by COPD symptoms was reduced from baseline over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05), but not with tiotropium versus placebo (Figure 5C). In this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score was greater with aclidinium compared with placebo (P<0.001) and tiotropium (P<0.05) over 6 weeks (Figure 4A): −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. For each of the E-RS domains, greater improvements from baseline in E-RS score in symptomatic patients were also observed for aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) (Figure 4B) over 6 weeks. Overall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium treatment versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). 
Safety and tolerability
In the subgroup of symptomatic patients, the incidence of TEAEs was comparable in the placebo (26.7%), aclidinium (28.4%), and tiotropium (32.7%) groups. As in the overall study population, the most commonly reported TEAEs in symptomatic patients were headache (5.8%) and nasopharyngitis (5.1%). Other common TEAEs (≥2% of patients overall) were COPD exacerbation (2.5%), back pain (2.5%), and cough (2.2%). The majority of TEAEs were mild or moderate in intensity. There were few serious TEAEs (1.4% overall) and no deaths in the subgroup of symptomatic patients. In total, five patients (1.8%) discontinued due to TEAEs and one patient (0.4%) discontinued due to a serious TEAE, with COPD exacerbation being the most common cause (1.4%).
Conclusion
Results from this post hoc analysis of a symptomatic patient group with moderate to severe COPD showed that aclidinium 400 μg BID provided additional improvements compared with tiotropium 18 μg QD in: 1) bronchodilation, particularly during the nighttime, 2) E-RS responder status, 3) early-morning, daytime, and nighttime symptoms, and 4) early-morning limitation of activity. These results suggest that symptomatic patients may achieve greater benefits during the nighttime with aclidinium treatment than patients with fewer symptoms.
Study design and patients
Overall study
This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value and FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of a salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). Patients were permitted to continue oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable for ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy was permitted (except ≤2 hours before each visit). After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability.
Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® multidose dry powder inhaler (registered trademark of the AstraZeneca group of companies; used as Pressair® within the USA and as Genuair™ within all other licensed territories), tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks.
The study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent.
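A 2:2:1 allocation like the one above is commonly generated with permuted blocks; the sketch below is a generic illustration of that technique, not the study's actual randomization procedure, which is not described here:

```python
import random

def permuted_block_sequence(n_blocks, seed=2012):
    """2:2:1 aclidinium:tiotropium:placebo allocation in permuted blocks of 5.
    Illustrative only; the trial's actual randomization method may differ."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["aclidinium"] * 2 + ["tiotropium"] * 2 + ["placebo"]
        rng.shuffle(block)  # random order within each block of 5
        sequence.extend(block)
    return sequence

assignments = permuted_block_sequence(4)  # 20 assignments, 8:8:4 overall
```

Blocking keeps the 2:2:1 ratio exact after every fifth patient, which protects the comparison arms from drift in enrollment over time.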
Post hoc analysis
This post hoc analysis assessed symptomatic patients, defined as those with a baseline score of ≥10 units on the Evaluating Respiratory Symptoms in COPD instrument (E-RS:COPD™; formerly known as the EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS. The EXACT™ and E-RS™ are owned by Evidera; permission to use these instruments may be obtained from Evidera [[email protected]]). This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11
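The subgroup definition above is a simple baseline cutoff; a minimal sketch with hypothetical patient records (the field names are illustrative, not from the study database):

```python
ERS_SYMPTOMATIC_CUTOFF = 10.0  # baseline E-RS Total score, in units

def is_symptomatic(baseline_ers_total, cutoff=ERS_SYMPTOMATIC_CUTOFF):
    """Post hoc subgroup rule: baseline E-RS Total score of at least 10 units,
    approximating GOLD group B/D (symptomatic) patients."""
    return baseline_ers_total >= cutoff

# Hypothetical patient records
patients = [
    {"id": "001", "baseline_ers": 14.2},
    {"id": "002", "baseline_ers": 6.5},
    {"id": "003", "baseline_ers": 10.0},  # boundary case: included
]
subgroup = [p for p in patients if is_symptomatic(p["baseline_ers"])]
```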
Assessments and endpoints
Lung function
Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.
COPD symptoms
Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries, and daily COPD symptom scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. Patients who achieved a clinically meaningful improvement from baseline (a decrease in E-RS Total score of ≥2 units) were considered responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study.
To assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries. Overall symptom severity was rated on a 5-point scale (1= “did not experience symptoms”; 5= “very severe”), the individual morning symptoms of cough, wheeze, shortness of breath, and phlegm on a 5-point scale (0= “no symptoms”; 4= “very severe symptoms”), and limitation of morning activities on a 5-point scale (1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16
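The scoring structure and responder rule described above can be sketched as follows. The official E-RS scoring algorithms are licensed by Evidera, so the item-to-domain groupings here are placeholders that only mirror the published domain structure (5 + 3 + 3 of the 11 items):

```python
# Illustrative E-RS-style scoring; item indices are hypothetical placeholders.
BREATHLESSNESS_ITEMS = range(0, 5)   # RS-Breathlessness (score range 0-17)
COUGH_SPUTUM_ITEMS = range(5, 8)     # RS-Cough and Sputum (score range 0-11)
CHEST_ITEMS = range(8, 11)           # RS-Chest Symptoms (score range 0-12)

def ers_scores(items):
    """items: the 11 daily respiratory symptom item responses.
    Returns the Total score and the three domain scores."""
    items = list(items)
    assert len(items) == 11
    return {
        "total": sum(items),
        "breathlessness": sum(items[i] for i in BREATHLESSNESS_ITEMS),
        "cough_sputum": sum(items[i] for i in COUGH_SPUTUM_ITEMS),
        "chest_symptoms": sum(items[i] for i in CHEST_ITEMS),
    }

def is_responder(baseline_total, on_treatment_total, mcid=2.0):
    """Responder: clinically meaningful improvement, ie a change from
    baseline in E-RS Total score of -2 units or better."""
    return (on_treatment_total - baseline_total) <= -mcid
```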
Safety and tolerability
Treatment-emergent adverse events (TEAEs) were recorded throughout the study.
Statistical analyses
Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and had at least one baseline and one post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.
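The analysis of covariance described above amounts to an ordinary least squares fit with treatment dummies, a sex factor, and continuous covariates. This pure-NumPy sketch recovers the covariate-adjusted treatment differences versus placebo; it is an illustration of the general technique and omits the confidence-interval machinery a full analysis would include:

```python
import numpy as np

def ancova_differences(treatment, sex, age, baseline, change, reference="placebo"):
    """ANCOVA via OLS: treatment and sex as factors, age and baseline value
    as covariates. Returns the adjusted (least squares) mean difference
    versus the reference arm for each non-reference treatment."""
    treatment = np.asarray(treatment)
    arms = [a for a in dict.fromkeys(treatment.tolist()) if a != reference]
    X = np.column_stack(
        [np.ones(len(treatment))]                              # intercept
        + [(treatment == a).astype(float) for a in arms]       # treatment dummies
        + [np.asarray(sex, float),                             # sex factor (0/1)
           np.asarray(age, float),                             # covariate
           np.asarray(baseline, float)]                        # covariate
    )
    beta, *_ = np.linalg.lstsq(X, np.asarray(change, float), rcond=None)
    return dict(zip(arms, beta[1:1 + len(arms)]))
```

In practice, the 95% confidence intervals come from the usual OLS covariance matrix, and statistical software (eg, SAS PROC GLM or Python's statsmodels) handles this directly.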
Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.\n Lung function Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.\nLung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.\n COPD symptoms Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries and daily COPD symptoms scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. 
Patients who achieved a clinically meaningful improvement from baseline (E-RS Total score ≥−2 units) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study.\nTo assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries (5-point scale: 1= “did not experience symptoms”; 5= “very severe”) and included individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), as well as limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16\nEvery evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries and daily COPD symptoms scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. 
Patients who achieved a clinically meaningful improvement from baseline (E-RS Total score ≥−2 units) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study.\nTo assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries (5-point scale: 1= “did not experience symptoms”; 5= “very severe”) and included individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), as well as limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16\n Safety and tolerability Treatment-emergent adverse events (TEAEs) were recorded throughout the study.\nTreatment-emergent adverse events (TEAEs) were recorded throughout the study.\n Statistical analyses Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.\nEfficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. 
Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.", "This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. 
Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks.\nThe study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent.", " Lung function Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.\n COPD symptoms Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries and daily COPD symptoms scores were derived using E-RS scoring algorithms.
The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. Patients who achieved a clinically meaningful improvement from baseline (E-RS Total score ≥−2 units) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study.\nTo assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries (5-point scale: 1= “did not experience symptoms”; 5= “very severe”) and included individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), as well as limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16\n Safety and tolerability Treatment-emergent adverse events (TEAEs) were recorded throughout the study.\n Statistical analyses Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value.
Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.", "Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.", "Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries and daily COPD symptoms scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration.
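The E-RS Total scoring and the responder rule used in this study (a change from baseline of at least −2 units) can be sketched in a few lines. This is an illustrative sketch only: the item values below are made up, and the official E-RS:COPD item-to-subscale mapping is defined by Evidera's scoring algorithms, which are not reproduced here.

```python
# Illustrative E-RS Total scoring and responder classification.
# NOTE: item values are hypothetical; the official E-RS:COPD scoring
# algorithms (Evidera) define the real item-to-subscale mapping.

def ers_total(items):
    """Sum the 11 E-RS respiratory symptom items (Total score range 0-40)."""
    if len(items) != 11:
        raise ValueError("E-RS uses 11 items")
    return sum(items)

def is_responder(baseline_total, on_treatment_total, threshold=-2.0):
    """Responder = clinically meaningful improvement from baseline,
    i.e., a change in E-RS Total score of at least -2 units."""
    return (on_treatment_total - baseline_total) <= threshold

baseline = ers_total([2, 1, 2, 1, 1, 2, 1, 0, 1, 1, 1])  # total 13 (>=10: "symptomatic")
week6 = ers_total([1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1])     # total 10
print(is_responder(baseline, week6))  # change = -3 units -> True
```

The same ≥10-unit baseline total used to classify the illustrative patient as symptomatic mirrors the post hoc subgroup threshold described in Methods.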
Patients who achieved a clinically meaningful improvement from baseline (E-RS Total score ≥−2 units) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study.\nTo assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries (5-point scale: 1= “did not experience symptoms”; 5= “very severe”) and included individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), as well as limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16", "Treatment-emergent adverse events (TEAEs) were recorded throughout the study.", "Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.", "In all, 414 patients were randomized to treatment in the overall study (2:2:1 ratio), of which 277 were defined as symptomatic (E-RS baseline score ≥10 units) and included in this post hoc subgroup analysis (placebo: n=60; aclidinium 400 μg: n=116; tiotropium 18 μg: n=101) (Figure 1). 
The percentages of patients in each treatment arm of this post hoc analysis were similar to those in the primary study (placebo, 21.7% vs 20.5%; aclidinium 400 μg, 41.9% vs 41.3%; tiotropium 18 μg, 36.5% vs 38.2%, respectively).\nDemographics and baseline characteristics in the subgroup of symptomatic patients were similar to those in the overall study population (symptomatic patients: mean age 62.1 years, 65.0% male, 54.5% current smokers, post-bronchodilator FEV1 54.6% predicted). Patient demographics and baseline characteristics for symptomatic patients were also similar across treatment arms, with the exception of a higher proportion of male patients in the active treatment groups compared with placebo, and a higher proportion of patients with severe COPD in the tiotropium group (Table 1). Mean post-bronchodilator percent predicted FEV1 and COPD symptoms scores at baseline were similar across treatment arms (Table 1).", " Lung function Lung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2). 
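The normalized AUC endpoints reported here (e.g., FEV1 AUC0–24/24 h) divide the area under the serial FEV1 curve by the length of the interval, giving a time-averaged level. A minimal sketch using trapezoidal integration follows; the time grid and change-from-baseline values are illustrative, not study data.

```python
# Sketch of a normalized FEV1 AUC (e.g., AUC0-24/24 h): trapezoidal
# integration of serial change-from-baseline values, divided by the
# interval length in hours. All numbers below are illustrative.

def normalized_auc(times_h, values, start, end):
    """Trapezoidal AUC of `values` over [start, end] hours,
    divided by the interval length (time-normalized mean)."""
    pts = [(t, v) for t, v in zip(times_h, values) if start <= t <= end]
    auc = sum((t2 - t1) * (v1 + v2) / 2.0
              for (t1, v1), (t2, v2) in zip(pts, pts[1:]))
    return auc / (end - start)

times = [0, 2, 4, 8, 12, 16, 20, 24]  # hours post-morning dose (illustrative)
fev1_change = [0.05, 0.20, 0.18, 0.15, 0.12, 0.14, 0.10, 0.08]  # liters

auc_0_24 = normalized_auc(times, fev1_change, 0, 24)    # AUC0-24/24 h style
auc_12_24 = normalized_auc(times, fev1_change, 12, 24)  # nighttime AUC12-24/12 h style
print(round(auc_0_24 * 1000), round(auc_12_24 * 1000))  # mL
```

Because both endpoints are normalized by their interval length, a 24-hour value and a 12-hour nighttime value are directly comparable as mean bronchodilation levels.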
During the nighttime period (AUC12–24/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than tiotropium 18 μg QD on day 1 (157 vs 67 mL for aclidinium and tiotropium, respectively; P<0.001) and week 6 (153 vs 90 mL for aclidinium and tiotropium, respectively; P<0.05).\nAclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotropium, respectively; P<0.05) and week 6 (137 vs 71 mL for aclidinium and tiotropium, respectively; P<0.05) in symptomatic patients (Figure 3).\n COPD symptoms in symptomatic patients In this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score was greater with aclidinium compared with placebo (P<0.001) and tiotropium (P<0.05) over 6 weeks (Figure 4A): −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. For each of the E-RS domains, greater improvements from baseline in E-RS score in symptomatic patients were also observed for aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) (Figure 4B) over 6 weeks.\nOverall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium treatment versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). Aclidinium also demonstrated improvements in individual early-morning symptom domains; shortness of breath and cough symptom scores improved in symptomatic patients treated with aclidinium compared with placebo over 6 weeks (both P<0.05; Figure 5A).
A reduction in overall nighttime symptom severity from baseline was observed over 6 weeks with aclidinium versus placebo and tiotropium in symptomatic patients (both P<0.05; Figure 5B). Numerical improvements in early-morning or nighttime symptom severity were observed for tiotropium versus placebo in this subgroup. In symptomatic patients, limitation of early-morning activity caused by COPD symptoms was reduced from baseline over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05), but not with tiotropium versus placebo (Figure 5C).", "Lung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2). During the nighttime period (AUC12–24/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than tiotropium 18 μg QD on day 1 (157 vs 67 mL for aclidinium and tiotropium, respectively; P<0.001) and week 6 (153 vs 90 mL for aclidinium and tiotropium, respectively; P<0.05).\nAclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotropium, respectively; P<0.05) and week 6 (137 vs 71 mL for aclidinium and tiotropium, respectively; P<0.05) in symptomatic patients (Figure 3).", "In the subgroup of symptomatic patients, the incidence of TEAEs was comparable in the placebo (26.7%), aclidinium (28.4%), and tiotropium (32.7%) groups. Similar to the overall study population, the most commonly reported TEAEs in symptomatic patients were headache (5.8%) and nasopharyngitis (5.1%). Other common TEAEs (≥2% of patients overall) were COPD exacerbation (2.5%), back pain (2.5%), and cough (2.2%). The majority of TEAEs were mild or moderate in intensity.
There were few serious TEAEs (1.4% overall) and no deaths in the subgroup of symptomatic patients. In total, five patients (1.8%) discontinued due to TEAEs and one patient (0.4%) discontinued due to a serious TEAE, with COPD exacerbation being the most common cause (1.4%).", "Results from this post hoc analysis of a symptomatic patient group with moderate to severe COPD showed that aclidinium 400 μg BID provided additional improvements compared with tiotropium 18 μg QD in: 1) bronchodilation, particularly during the nighttime, 2) E-RS responder status, 3) early-morning, daytime, and nighttime symptoms, and 4) early-morning limitation of activity. These results suggest that symptomatic patients may achieve greater benefits during the nighttime with aclidinium treatment than patients with fewer symptoms." ]
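The statistical methods describe an analysis of covariance with treatment and sex as factors and age and baseline values as covariates, reporting least squares mean differences with 95% confidence intervals. The sketch below fits such an additive model by ordinary least squares on simulated data; in an additive ANCOVA the adjusted (least squares mean) treatment difference equals the treatment coefficient. The normal-approximation interval (±1.96 SE) stands in for the exact t-based 95% CI, and all data are simulated, not study data.

```python
# Minimal ANCOVA sketch (numpy only), mirroring the described model:
# change-from-baseline ~ treatment + sex (factors) + age + baseline (covariates).
# All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 60
treatment = rng.integers(0, 2, n)   # 0 = placebo, 1 = active (illustrative)
sex = rng.integers(0, 2, n)
age = rng.normal(62, 8, n)
baseline = rng.normal(1.4, 0.3, n)  # baseline FEV1 (L), illustrative
change = (0.1 * treatment + 0.02 * sex - 0.001 * age
          + 0.05 * baseline + rng.normal(0, 0.05, n))

# Design matrix: intercept, treatment, sex, age, baseline
X = np.column_stack([np.ones(n), treatment, sex, age, baseline])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)

resid = change - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
se_trt = np.sqrt(cov[1, 1])

diff = beta[1]  # adjusted (LS mean) difference, active - placebo
ci = (diff - 1.96 * se_trt, diff + 1.96 * se_trt)  # approx. 95% CI
print(f"LS mean difference: {diff:.3f} L, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

With the simulated treatment effect of 0.1 L, the recovered difference lands near 100 mL, the same order as the between-group FEV1 differences reported in the results.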
[ "methods", "methods", null, null, null, null, null, "subjects", null, null, null, null ]
[ "Introduction", "Methods", "Study design and patients", "Overall study", "Post hoc analysis", "Assessments and endpoints", "Lung function", "COPD symptoms", "Safety and tolerability", "Statistical analyses", "Results", "Patients", "Efficacy", "Lung function", "COPD symptoms in symptomatic patients", "Safety and tolerability", "Discussion", "Conclusion" ]
[ "Symptoms of COPD can vary in severity over a 24-hour period, and studies indicate that they are generally worse in the early morning and at nighttime.1–3 Symptoms include chronic cough, sputum production, and breathlessness, which can severely impact on a patient’s daily activities and overall well-being,3 and have a corresponding high socioeconomic burden.4 Estimates suggest that the frequency of nocturnal symptoms and symptomatic sleep disturbance may exceed 75% in patients with COPD, and potential long-term consequences may include lung function changes, increased exacerbation frequency, emergence or worsening of cardiovascular disease, impaired quality of life, and increased mortality.1 It is therefore important that symptoms over the entire 24-hour day are identified and managed appropriately.\nIn order to provide appropriate therapy, clinical guidelines (Global initiative for chronic Obstructive Lung Disease [GOLD]) suggest that symptoms, airflow limitation, and risk of exacerbations are assessed.5 Patients are classified into one of four groups according to their symptom burden and risk of exacerbations: A, low risk, less symptoms; B, low risk, more symptoms; C, high risk, less symptoms; or D, high risk, more symptoms;5 current evidence suggests that bronchodilator treatment may be more effective in those patients who are considered symptomatic (ie, groups B and D).5\nBronchodilator therapies are a mainstay of COPD treatment, with two classes of long-acting bronchodilators currently available: long-acting muscarinic antagonists (LAMAs) and long-acting β2-agonists (LABAs). 
LAMAs inhibit the action of acetylcholine at muscarinic receptors, while LABAs enhance cAMP signaling through stimulation of β2-adrenergic receptors, resulting in the relaxation of bronchial smooth muscle.5 The LAMA aclidinium bromide is a maintenance bronchodilator therapy for adults with COPD.\nThe efficacy and tolerability results from a Phase IIIb study in patients with moderate to severe COPD, who received either aclidinium 400 μg twice daily (BID), the active comparator tiotropium 18 μg once daily (QD), or placebo have been previously reported.6 Briefly, following 6 weeks of treatment, patients receiving aclidinium 400 μg BID demonstrated improvements in 24-hour bronchodilation, compared with placebo, that were comparable with tiotropium 18 μg QD. In addition, COPD symptoms significantly improved from baseline with aclidinium, but not tiotropium, compared with placebo.6 These results were similar to those observed in a prior 2-week Phase IIa trial.7 Furthermore, a recent real-world study in patients with COPD reported improvements in nighttime and early-morning symptoms, limitation of morning activities, and quality of life over 3 months with aclidinium 400 μg BID, compared with baseline.8 Since aclidinium has a greater impact on COPD symptoms than tiotropium,9,10 and the “more symptomatic” patient groups stand to benefit more from bronchodilator treatment than the “less symptomatic” groups, aclidinium may provide an additional therapeutic benefit over tiotropium in these patients.\nThis study reports the findings of a post hoc analysis, which focused on the response in the symptomatic patient group. 
The key objective of this analysis was to identify any differences in 24-hour lung function and symptom control between treatment with aclidinium 400 μg BID and tiotropium 18 μg QD in this population of patients.", " Study design and patients Overall study This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. 
Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks.\nThe study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent.\n Post hoc analysis This post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units.
This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11\n Overall study This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. 
Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks.\nThe study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent.\nThis was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. 
Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks.\nThe study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent.\n Post hoc analysis This post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units. This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11\nThis post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units. 
This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11

Assessments and endpoints

Lung function

Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over the 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.

COPD symptoms

Every evening, patients completed the 14-item EXACT (recall period of "today") via electronic diaries, and daily COPD symptom scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; range 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [range 0–17], RS-Cough and Sputum [range 0–11], and RS-Chest Symptoms, the sum of three items related to chest congestion/discomfort [range 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration.
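As a rough illustration of how a normalized FEV1 AUC endpoint such as AUC0–24/24 h can be derived from serial spirometry, the sketch below applies the trapezoidal rule and divides by the interval width. The measurement times and FEV1 values are invented for the example, and the trial's actual derivation may differ in its assessment schedule and handling of missing values.

```python
import numpy as np

# Hypothetical post-dose spirometry times (hours) and FEV1 values (L) for
# one patient; the actual assessment schedule is an assumption here.
times = np.array([0, 1, 2, 3, 6, 9, 12, 18, 24], dtype=float)
fev1 = np.array([1.20, 1.45, 1.50, 1.48, 1.40, 1.35, 1.30, 1.28, 1.22])

def normalized_auc(t, y, start, end):
    """Trapezoidal AUC over [start, end], divided by the interval width.

    The result is a time-weighted mean FEV1 (L), matching the normalized
    AUC convention (e.g., AUC0-24/24 h, AUC12-24/12 h)."""
    mask = (t >= start) & (t <= end)
    t, y = t[mask], y[mask]
    area = np.sum((t[1:] - t[:-1]) * (y[1:] + y[:-1]) / 2.0)
    return area / (end - start)

auc_0_24 = normalized_auc(times, fev1, 0.0, 24.0)    # 24-hour endpoint
auc_12_24 = normalized_auc(times, fev1, 12.0, 24.0)  # nighttime endpoint
```

The change-from-baseline endpoints would then be the difference between such values computed at week 6 and at baseline.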
Patients who achieved a clinically meaningful improvement from baseline (change in E-RS Total score ≥−2 units, i.e., a decrease of at least 2 units) were considered responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study.

To assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries. It rated overall symptom severity (5-point scale: 1= "did not experience symptoms"; 5= "very severe"), individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= "no symptoms"; 4= "very severe symptoms"), and limitation of morning activities (5-point scale: 1= "not at all"; 5= "a very great deal"). Since this study, these questionnaires have been developed and evaluated further.15,16
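The responder rule above is a simple threshold on change from baseline. A minimal sketch, reading the "≥−2 units" criterion as a change of −2 or lower (the baseline and on-treatment scores below are invented):

```python
# Illustrative E-RS responder classification; (baseline, on-treatment)
# mean E-RS Total scores are hypothetical example values.
scores = {
    "pt01": (14.0, 11.5),  # change -2.5
    "pt02": (12.0, 11.0),  # change -1.0
    "pt03": (16.0, 13.8),  # change -2.2
}

def is_responder(baseline, on_treatment, threshold=-2.0):
    """True when the change from baseline (on-treatment minus baseline)
    meets the clinically meaningful improvement threshold."""
    return (on_treatment - baseline) <= threshold

responders = {pid: is_responder(b, f) for pid, (b, f) in scores.items()}
```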
Safety and tolerability

Treatment-emergent adverse events (TEAEs) were recorded throughout the study.

Statistical analyses

Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and one post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates.
Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.
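A minimal sketch of this kind of ANCOVA, using statsmodels on synthetic data (all values and effect sizes below are invented). With treatment and sex as factors, age and baseline as covariates, and no interaction terms, the treatment coefficient equals the covariate-adjusted (least squares mean) difference versus the reference arm, and its confidence interval is the reported 95% CI:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
# Synthetic illustration only: arm labels, sex, age, and baseline FEV1 (L).
df = pd.DataFrame({
    "treatment": rng.choice(["placebo", "aclidinium", "tiotropium"], size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "age": rng.integers(45, 80, size=n).astype(float),
    "baseline": rng.uniform(1.0, 2.5, size=n),
})
# Noiseless response with a known 0.14 L "aclidinium" effect so the model
# can be checked against the construction.
effect = df["treatment"].map({"placebo": 0.0, "aclidinium": 0.14,
                              "tiotropium": 0.10})
df["change"] = (0.05 + effect + 0.02 * (df["sex"] == "M")
                - 0.001 * df["age"] + 0.1 * df["baseline"])

# ANCOVA: treatment and sex as factors; age and baseline as covariates.
model = smf.ols(
    "change ~ C(treatment, Treatment(reference='placebo'))"
    " + C(sex) + age + baseline",
    data=df,
).fit()

coef = "C(treatment, Treatment(reference='placebo'))[T.aclidinium]"
diff = model.params[coef]                     # adjusted difference vs placebo
ci_low, ci_high = model.conf_int().loc[coef]  # 95% confidence interval
```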
Patients

In all, 414 patients were randomized to treatment in the overall study (2:2:1 ratio), of whom 277 were defined as symptomatic (E-RS baseline score ≥10 units) and included in this post hoc subgroup analysis (placebo: n=60; aclidinium 400 μg: n=116; tiotropium 18 μg: n=101) (Figure 1). The percentages of patients in each treatment arm of this post hoc analysis were similar to those in the primary study (placebo, 21.7% vs 20.5%; aclidinium 400 μg, 41.9% vs 41.3%; tiotropium 18 μg, 36.5% vs 38.2%, respectively).

Demographics and baseline characteristics in the subgroup of symptomatic patients were similar to those in the overall study population (symptomatic patients: mean age 62.1 years, 65.0% male, 54.5% current smokers, post-bronchodilator FEV1 54.6% predicted). Patient demographics and baseline characteristics for symptomatic patients were also similar across treatment arms, with the exception of a higher proportion of male patients in the active treatment groups compared with placebo, and a higher proportion of patients with severe COPD in the tiotropium group (Table 1). Mean post-bronchodilator percent predicted FEV1 and COPD symptom scores at baseline were similar across treatment arms (Table 1).
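The arm percentages quoted above follow directly from the subgroup counts; a quick arithmetic check (rounding to one decimal place is assumed):

```python
# Symptomatic subgroup sizes from the post hoc analysis.
counts = {"placebo": 60, "aclidinium": 116, "tiotropium": 101}
total = sum(counts.values())  # 277 symptomatic patients
shares = {arm: round(100.0 * n / total, 1) for arm, n in counts.items()}
# shares -> {"placebo": 21.7, "aclidinium": 41.9, "tiotropium": 36.5}
```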
Efficacy

Lung function

Lung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2).
During the nighttime period (AUC12–24/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than with tiotropium 18 μg QD on day 1 (157 vs 67 mL for aclidinium and tiotropium, respectively; P<0.001) and at week 6 (153 vs 90 mL, respectively; P<0.05).

Aclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotropium, respectively; P<0.05) and week 6 (137 vs 71 mL, respectively; P<0.05) in symptomatic patients (Figure 3).
COPD symptoms in symptomatic patients

In this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score over 6 weeks was greater with aclidinium than with placebo (P<0.001) and tiotropium (P<0.05) (Figure 4A): −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. For each of the E-RS domains, greater improvements from baseline in E-RS score in symptomatic patients were also observed for aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) over 6 weeks (Figure 4B).

Overall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium treatment versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). Aclidinium also demonstrated improvements in individual early-morning symptom domains; shortness of breath and cough symptom scores improved in symptomatic patients treated with aclidinium compared with placebo over 6 weeks (both P<0.05; Figure 5A).
A reduction in overall nighttime symptom severity from baseline was observed over 6 weeks with aclidinium versus placebo and tiotropium in symptomatic patients (both P<0.05; Figure 5B). Numerical improvements in early-morning or nighttime symptom severity were observed for tiotropium versus placebo in this subgroup. In symptomatic patients, limitation of early-morning activity caused by COPD symptoms was reduced from baseline over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05), but not with tiotropium versus placebo (Figure 5C).\nIn this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score was greater with aclidinium compared with placebo (P<0.001) and tiotropium (P<0.05) over 6 weeks (Figure 4A): −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. For each of the E-RS domains, greater improvements from baseline in E-RS score in symptomatic patients were also observed for aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) (Figure 4B) over 6 weeks.\nOverall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium treatment versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). Aclidinium also demonstrated improvements in individual early-morning symptom domains; shortness of breath and cough symptom scores improved in symptomatic patients treated with aclidinium compared with placebo over 6 weeks (both P<0.05; Figure 5A). A reduction in overall nighttime symptom severity from baseline was observed over 6 weeks with aclidinium versus placebo and tiotropium in symptomatic patients (both P<0.05; Figure 5B). 
Numerical improvements in early-morning or nighttime symptom severity were observed for tiotropium versus placebo in this subgroup. In symptomatic patients, limitation of early-morning activity caused by COPD symptoms was reduced from baseline over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05), but not with tiotropium versus placebo (Figure 5C).\n Lung function Lung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2). During the nighttime period (AUC12–24 h/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than tiotropium 18 μg QD on day 1 (157 vs 67 mL for aclidinium and tiotropium, respectively; P<0.001) and week 6 (153 vs 90 mL for aclidinium and tiotropium, respectively; P<0.05).\nAclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotro-pium, respectively; P<0.05) and week 6 (137 vs 71 mL for aclidinium and tiotropium, respectively; P<0.05) in symptomatic patients (Figure 3).\nLung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2). 
During the nighttime period (AUC12–24 h/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than tiotropium 18 μg QD on day 1 (157 vs 67 mL for aclidinium and tiotropium, respectively; P<0.001) and week 6 (153 vs 90 mL for aclidinium and tiotropium, respectively; P<0.05).\nAclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotro-pium, respectively; P<0.05) and week 6 (137 vs 71 mL for aclidinium and tiotropium, respectively; P<0.05) in symptomatic patients (Figure 3).\n COPD symptoms in symptomatic patients In this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score was greater with aclidinium compared with placebo (P<0.001) and tiotropium (P<0.05) over 6 weeks (Figure 4A): −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. For each of the E-RS domains, greater improvements from baseline in E-RS score in symptomatic patients were also observed for aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) (Figure 4B) over 6 weeks.\nOverall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium treatment versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). Aclidinium also demonstrated improvements in individual early-morning symptom domains; shortness of breath and cough symptom scores improved in symptomatic patients treated with aclidinium compared with placebo over 6 weeks (both P<0.05; Figure 5A). 
A reduction in overall nighttime symptom severity from baseline was observed over 6 weeks with aclidinium versus placebo and tiotropium in symptomatic patients (both P<0.05; Figure 5B). Numerical improvements in early-morning or nighttime symptom severity were observed for tiotropium versus placebo in this subgroup. In symptomatic patients, limitation of early-morning activity caused by COPD symptoms was reduced from baseline over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05), but not with tiotropium versus placebo (Figure 5C).\nIn this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score was greater with aclidinium compared with placebo (P<0.001) and tiotropium (P<0.05) over 6 weeks (Figure 4A): −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. For each of the E-RS domains, greater improvements from baseline in E-RS score in symptomatic patients were also observed for aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) (Figure 4B) over 6 weeks.\nOverall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium treatment versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). Aclidinium also demonstrated improvements in individual early-morning symptom domains; shortness of breath and cough symptom scores improved in symptomatic patients treated with aclidinium compared with placebo over 6 weeks (both P<0.05; Figure 5A). A reduction in overall nighttime symptom severity from baseline was observed over 6 weeks with aclidinium versus placebo and tiotropium in symptomatic patients (both P<0.05; Figure 5B). 
Numerical improvements in early-morning or nighttime symptom severity were observed for tiotropium versus placebo in this subgroup. In symptomatic patients, limitation of early-morning activity caused by COPD symptoms was reduced from baseline over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05), but not with tiotropium versus placebo (Figure 5C).\n Safety and tolerability In the subgroup of symptomatic patients, the incidence of TEAEs was comparable in the placebo (26.7%), aclidinium (28.4%), and tiotropium (32.7%) groups. Similar to the overall study population, the most commonly reported TEAEs in symptomatic patients were headache (5.8%) and nasopharyngitis (5.1%). Other common TEAEs (≥2% of patients overall) were COPD exacerbation (2.5%), back pain (2.5%), and cough (2.2%). The majority of TEAEs were mild or moderate in intensity. There were few serious TEAEs (1.4% overall) and no deaths in the subgroup of symptomatic patients. In total, five patients (1.8%) discontinued due to TEAEs and one patient (0.4%) discontinued due to a serious TEAE, with COPD exacerbation being the most common cause (1.4%).\nIn the subgroup of symptomatic patients, the incidence of TEAEs was comparable in the placebo (26.7%), aclidinium (28.4%), and tiotropium (32.7%) groups. Similar to the overall study population, the most commonly reported TEAEs in symptomatic patients were headache (5.8%) and nasopharyngitis (5.1%). Other common TEAEs (≥2% of patients overall) were COPD exacerbation (2.5%), back pain (2.5%), and cough (2.2%). The majority of TEAEs were mild or moderate in intensity. There were few serious TEAEs (1.4% overall) and no deaths in the subgroup of symptomatic patients. 
In total, five patients (1.8%) discontinued due to TEAEs and one patient (0.4%) discontinued due to a serious TEAE, with COPD exacerbation being the most common cause (1.4%).", "In all, 414 patients were randomized to treatment in the overall study (2:2:1 ratio), of whom 277 were defined as symptomatic (E-RS baseline score ≥10 units) and included in this post hoc subgroup analysis (placebo: n=60; aclidinium 400 μg: n=116; tiotropium 18 μg: n=101) (Figure 1). The percentages of patients in each treatment arm of this post hoc analysis were similar to those in the primary study (placebo, 21.7% vs 20.5%; aclidinium 400 μg, 41.9% vs 41.3%; tiotropium 18 μg, 36.5% vs 38.2%, respectively).\nDemographics and baseline characteristics in the subgroup of symptomatic patients were similar to those in the overall study population (symptomatic patients: mean age 62.1 years, 65.0% male, 54.5% current smokers, post-bronchodilator FEV1 54.6% predicted). Patient demographics and baseline characteristics for symptomatic patients were also similar across treatment arms, with the exception of a higher proportion of male patients in the active treatment groups compared with placebo, and a higher proportion of patients with severe COPD in the tiotropium group (Table 1). Mean post-bronchodilator percent predicted FEV1 and COPD symptoms scores at baseline were similar across treatment arms (Table 1)."
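The normalized AUC lung-function endpoints reported above (AUC0–24/24 h and AUC12–24/12 h) are time-averaged FEV1 changes: the trapezoidal area under the change-from-baseline curve divided by the interval length. The sketch below is a minimal illustration of that calculation; the spirometry time points and FEV1 values are invented and do not reproduce study data.

```python
# Hypothetical illustration of a normalized AUC endpoint: trapezoidal area
# under the FEV1 change-from-baseline curve, divided by the interval length,
# giving a time-averaged change in mL. Data below are invented.

def normalized_auc(times_h, values_ml, start_h, end_h):
    """Trapezoidal AUC of (time, value) pairs over [start_h, end_h],
    divided by the interval length (time-averaged value, mL)."""
    pts = [(t, v) for t, v in zip(times_h, values_ml) if start_h <= t <= end_h]
    auc = 0.0
    for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
        auc += (v0 + v1) / 2.0 * (t1 - t0)  # trapezoid between adjacent points
    return auc / (end_h - start_h)

# Invented 24-hour profile of FEV1 change from baseline (mL)
times = [0, 4, 8, 12, 16, 20, 24]
change = [180, 160, 140, 130, 150, 140, 120]

day_avg = normalized_auc(times, change, 0, 24)     # analogue of AUC0-24/24 h
night_avg = normalized_auc(times, change, 12, 24)  # analogue of AUC12-24/12 h
print(round(day_avg, 1), round(night_avg, 1))
```

Dividing by the interval length is what makes daytime (24 h) and nighttime (12 h) values directly comparable as average bronchodilation in mL.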
Assessment of treatment efficacy in symptomatic patients has clinical significance, as treatment guidelines recommend that such patients are treated in order to improve lung function and reduce symptoms.17 This post hoc analysis was performed to evaluate the 24-hour effect of treatment with aclidinium bromide 400 μg BID, tiotropium 18 μg QD, or placebo, in 277 symptomatic patients with moderate to severe COPD participating in a Phase III study. This subgroup of symptomatic patients constituted a substantial proportion of the overall patients (277/414; 45%).\nIn the subgroup of symptomatic patients, 6 weeks of treatment with aclidinium BID showed improvements from baseline in FEV1 at all time points over 24 hours (compared with placebo), and FEV1 was higher than tiotropium QD at most 12- to 24-hour time points. Aclidinium treatment also led to greater improvements in trough FEV1 compared with tiotropium. Furthermore, during the nighttime period at both day 1 and week 6, improvements from baseline in FEV1 (compared with placebo) were greater with aclidinium than with tiotropium.\nPatient symptoms also improved following treatment. After 6 weeks, the improvement in E-RS score was greater with aclidinium compared with both placebo and tiotropium, as indicated by changes in RS-Breathlessness, RS-Cough and Sputum, and E-RS Total scores. In addition, the percentage of patients defined as E-RS responders increased with aclidinium compared with tiotropium or placebo. 
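The E-RS responder discussion above can be made concrete by checking changes from baseline against the proposed minimal clinically important differences (MCIDs) cited in this analysis (RS-Total ≥−2 units; RS-Breathlessness ≥−1 unit; RS-Cough and Sputum ≥−0.7 units; RS-Chest Symptoms ≥−0.7 units). The function and data layout below are invented for illustration; the thresholds and the mean aclidinium changes are taken from the text.

```python
# Proposed E-RS MCIDs from the text: a change from baseline at least this
# negative counts as a clinically important improvement in that domain.
MCID = {
    "RS-Total": -2.0,
    "RS-Breathlessness": -1.0,
    "RS-Cough and Sputum": -0.7,
    "RS-Chest Symptoms": -0.7,
}

def meets_mcid(changes_from_baseline):
    """Return, per domain, whether a change from baseline reaches the
    proposed MCID (more negative = greater improvement)."""
    return {d: changes_from_baseline.get(d, 0.0) <= thr for d, thr in MCID.items()}

# Mean changes reported for aclidinium in this analysis (week 6)
aclidinium_changes = {
    "RS-Total": -2.8,
    "RS-Breathlessness": -1.3,
    "RS-Cough and Sputum": -0.8,
    "RS-Chest Symptoms": -0.6,
}
print(meets_mcid(aclidinium_changes))
```

On these reported means, the aclidinium changes reach the proposed MCID for RS-Total, RS-Breathlessness, and RS-Cough and Sputum, but not for RS-Chest Symptoms (−0.6 vs a −0.7 threshold); trial responder analyses apply such thresholds per patient, not to group means.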
Improvement over placebo and tiotropium was also observed in early-morning symptom severity, nighttime symptom severity, individual early-morning symptoms (shortness of breath and cough), and limitation of early-morning activity caused by symptoms. Safety and tolerability in symptomatic patients appeared to be similar to that in the overall study population, with the most commonly reported TEAEs being headache and nasopharyngitis.\nThe overall study population included both symptomatic and asymptomatic patients. Results have been previously reported and indicated that aclidinium provided significant 24-hour bronchodilation versus placebo from day 1, with comparable efficacy to tiotropium after 6 weeks.6 In this post hoc analysis, some notable differences were observed in the symptomatic patient group. Improvements in bronchodilation during the nighttime period were greater with aclidinium than with tiotropium in symptomatic patients, whereas in the overall population, no differences were observed between the two treatments. Furthermore, symptomatic patients experienced a greater reduction in nighttime symptom severity from baseline to 6 weeks with aclidinium, compared with tiotropium. 
In the overall population, although nighttime symptom severity was significantly reduced with aclidinium versus placebo, the difference between the two comparators was not statistically significant.6 For both the overall population and the symptomatic patient group, no differences in nighttime symptom severity were observed for tiotropium versus placebo.\nThe BID dosing regimen of aclidinium may have contributed to the observed improvement in nighttime efficacy versus tiotropium QD in symptomatic patients. Improvement of nighttime lung function is of particular importance for patients with COPD, as nighttime symptoms and poor-quality sleep are common.1 To date, relatively few studies have demonstrated significant improvements in nighttime lung function and/or sleep quality following bronchodilator therapy.1,18–22\nThe improvements in E-RS score observed in symptomatic patients receiving treatment with aclidinium are likely to be clinically significant since E-RS has been shown to be a valid and reliable tool for the assessment of respiratory symptoms of COPD in clinical trials.11,14,23 The minimal clinically important differences for the different aspects of the E-RS tool have recently been proposed: RS-Total ≥−2 units; RS-Breathlessness ≥−1 unit; RS-Cough and Sputum ≥−0.7 units; and RS-Chest Symptoms ≥−0.7 units,14 and the ability of the E-RS to capture treatment effects has recently been evaluated.11 In this study, change in E-RS Total score was −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. With aclidinium, changes from baseline in individual domain scores at week 6 were −1.3 for Breathlessness, −0.6 for Chest Symptoms, and −0.8 for Cough and Sputum.\nThere are some potential limitations of this post hoc analysis. 
A significantly higher proportion of patients in the aclidinium group had baseline bronchial reversibility compared with the tiotropium group, which may partly account for the differences in efficacy observed. The apparently higher proportion of patients with severe COPD in the tiotropium group is, however, unlikely to have influenced the FEV1 response, since the between-group differences were not significant. Also, the 6-week study period may not be long enough to reflect a patient’s symptom burden. In addition, the E-RS threshold used to distinguish symptomatic from asymptomatic patients has not been formally validated and requires further investigation.11 One must also consider that unvalidated, early versions of the early-morning (Early Morning Symptoms of COPD Instrument) and nighttime symptoms (Nighttime Symptoms of COPD Instrument) questionnaires were used in this study. Both questionnaires have subsequently been developed and evaluated further, and there are published data indicating that these are valid tools for measuring COPD symptoms in large randomized trials.15,16 The use of a patient questionnaire, rather than clinical assessments, to evaluate limitation of early-morning activity could be considered a potential constraint of this study; however, the benefit of this method is that it can assess whether patients are restricted in their usual morning activities, such as getting washed and dressed.\nOne further consideration of this post hoc analysis is that although symptomatic patients constituted 45% of the overall study population, the total sample size remains relatively small (n=414). 
Furthermore, a different group of symptomatic patients might have been identified if the COPD Assessment Test or modified Medical Research Council criteria outlined in the GOLD report had been applied.5", "Results from this post hoc analysis of a symptomatic patient group with moderate to severe COPD showed that aclidinium 400 μg BID provided additional improvements compared with tiotropium 18 μg QD in: 1) bronchodilation, particularly during the nighttime, 2) E-RS responder status, 3) early-morning, daytime, and nighttime symptoms, and 4) early-morning limitation of activity. These results suggest that symptomatic patients may achieve greater benefits during the nighttime with aclidinium treatment than patients with fewer symptoms." ]
[ "intro", "methods", "methods|subjects", "methods", "methods", null, null, null, null, null, "results", "subjects", null, null, "subjects", null, "discussion", null ]
[ "COPD", "24-hour bronchodilation", "long-acting muscarinic antagonist", "nighttime", "symptoms" ]
Introduction: Symptoms of COPD can vary in severity over a 24-hour period, and studies indicate that they are generally worse in the early morning and at nighttime.1–3 Symptoms include chronic cough, sputum production, and breathlessness, which can severely impact a patient’s daily activities and overall well-being,3 and carry a correspondingly high socioeconomic burden.4 Estimates suggest that the frequency of nocturnal symptoms and symptomatic sleep disturbance may exceed 75% in patients with COPD, and potential long-term consequences may include lung function changes, increased exacerbation frequency, emergence or worsening of cardiovascular disease, impaired quality of life, and increased mortality.1 It is therefore important that symptoms over the entire 24-hour day are identified and managed appropriately.\nIn order to provide appropriate therapy, clinical guidelines (Global initiative for chronic Obstructive Lung Disease [GOLD]) suggest that symptoms, airflow limitation, and risk of exacerbations are assessed.5 Patients are classified into one of four groups according to their symptom burden and risk of exacerbations: A, low risk, less symptoms; B, low risk, more symptoms; C, high risk, less symptoms; or D, high risk, more symptoms;5 current evidence suggests that bronchodilator treatment may be more effective in those patients who are considered symptomatic (ie, groups B and D).5\nBronchodilator therapies are a mainstay of COPD treatment, with two classes of long-acting bronchodilators currently available: long-acting muscarinic antagonists (LAMAs) and long-acting β2-agonists (LABAs). LAMAs inhibit the action of acetylcholine at muscarinic receptors, while LABAs enhance cAMP signaling through stimulation of β2-adrenergic receptors, resulting in the relaxation of bronchial smooth muscle.5 The LAMA aclidinium bromide is a maintenance bronchodilator therapy for adults with COPD. 
The efficacy and tolerability results from a Phase IIIb study in patients with moderate to severe COPD, who received either aclidinium 400 μg twice daily (BID), the active comparator tiotropium 18 μg once daily (QD), or placebo have been previously reported.6 Briefly, following 6 weeks of treatment, patients receiving aclidinium 400 μg BID demonstrated improvements in 24-hour bronchodilation, compared with placebo, that were comparable with tiotropium 18 μg QD. In addition, COPD symptoms significantly improved from baseline with aclidinium, but not tiotropium, compared with placebo.6 These results were similar to those observed in a prior 2-week Phase IIa trial.7 Furthermore, a recent real-world study in patients with COPD reported improvements in nighttime and early-morning symptoms, limitation of morning activities, and quality of life over 3 months with aclidinium 400 μg BID, compared with baseline.8 Since aclidinium has a greater impact on COPD symptoms than tiotropium,9,10 and the “more symptomatic” patient groups stand to benefit more from bronchodilator treatment than the “less symptomatic” groups, aclidinium may provide an additional therapeutic benefit over tiotropium in these patients. This study reports the findings of a post hoc analysis, which focused on the response in the symptomatic patient group. The key objective of this analysis was to identify any differences in 24-hour lung function and symptom control between treatment with aclidinium 400 μg BID and tiotropium 18 μg QD in this population of patients. Methods: Study design and patients Overall study This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). 
Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks. The study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent. 
Post hoc analysis
This post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units. This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11
Assessments and endpoints
Lung function
Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.
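For illustration, a normalized AUC endpoint such as FEV1 AUC0–24/24 h amounts to a trapezoidal-rule area over the post-dose spirometry time points, divided by the length of the interval, giving a time-averaged FEV1 in litres. The sketch below uses hypothetical time points and does not reproduce the trial's actual spirometry schedule or its handling of missing values:

```python
def normalized_fev1_auc(times_h, fev1_l, start=0.0, end=24.0):
    """Normalized FEV1 AUC over [start, end] hours, in litres.

    `times_h`: post-dose assessment times (hours); `fev1_l`: FEV1 (L) at
    those times. The AUC is computed with the trapezoidal rule and divided
    by the interval length (eg, 24 h for AUC0-24/24 h, 12 h for
    AUC12-24/12 h). Illustrative sketch only.
    """
    pts = sorted((t, v) for t, v in zip(times_h, fev1_l) if start <= t <= end)
    auc = 0.0
    for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
        auc += 0.5 * (v0 + v1) * (t1 - t0)  # trapezoid between adjacent points
    return auc / (end - start)
```

The change-from-baseline endpoint would then be the week 6 value of this quantity minus the baseline value, computed per patient.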
COPD symptoms
Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries, and daily COPD symptom scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. Patients who achieved a clinically meaningful improvement from baseline (a decrease in E-RS Total score of ≥2 units, ie, a change of −2 units or better) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study. To assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries. It rated nighttime symptoms (5-point scale: 1= “did not experience symptoms”; 5= “very severe”), individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), as well as limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16
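The responder rule reduces to a simple classification: a patient responds if the E-RS Total score improves (decreases) from baseline by at least 2 units. The sketch below is illustrative only; the actual analysis's item scoring, averaging of daily scores over the assessment period, and missing-data rules are not reproduced:

```python
RESPONDER_THRESHOLD = -2.0  # clinically meaningful change in E-RS Total score


def ers_total(item_scores):
    """Sum the 11 E-RS respiratory symptom items (RS-Total range 0-40).

    Simplified sketch: assumes item scores are already on their E-RS
    scales so that the total falls in 0-40.
    """
    if len(item_scores) != 11:
        raise ValueError("E-RS uses 11 items from the 14-item EXACT")
    total = sum(item_scores)
    if not 0 <= total <= 40:
        raise ValueError("RS-Total score must fall within 0-40")
    return total


def is_responder(baseline_total, on_treatment_total):
    """Responder: change from baseline of -2 units or better (a decrease)."""
    return (on_treatment_total - baseline_total) <= RESPONDER_THRESHOLD
```

Note that because lower E-RS scores mean milder symptoms, "improvement of ≥2 units" corresponds to a negative change from baseline.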
Safety and tolerability
Treatment-emergent adverse events (TEAEs) were recorded throughout the study.

Statistical analyses
Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value.
Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons. Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons. Lung function Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1. Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1. COPD symptoms Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries and daily COPD symptoms scores were derived using E-RS scoring algorithms. 
The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. Patients who achieved a clinically meaningful improvement from baseline (E-RS Total score ≥−2 units) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study. To assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries (5-point scale: 1= “did not experience symptoms”; 5= “very severe”) and included individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), as well as limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16 Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries and daily COPD symptoms scores were derived using E-RS scoring algorithms. 
The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. Patients who achieved a clinically meaningful improvement from baseline (E-RS Total score ≥−2 units) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study. To assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries (5-point scale: 1= “did not experience symptoms”; 5= “very severe”) and included individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), as well as limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”). Since this study, these questionnaires have been developed and evaluated further.15,16 Safety and tolerability Treatment-emergent adverse events (TEAEs) were recorded throughout the study. Treatment-emergent adverse events (TEAEs) were recorded throughout the study. Statistical analyses Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value. 
Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons. Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and who had at least one baseline and post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons. Study design and patients: Overall study This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). 
Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks. The study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent. This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. 
Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks. The study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent. Post hoc analysis This post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units. 
This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11 This post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units. This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11 Overall study: This was a randomized, double-blind, double-dummy, placebo- and active comparator-controlled, multicenter Phase IIIb study in patients with moderate to severe COPD (ClinicalTrials.gov identifier: NCT01462929). Full details of the study design and inclusion/exclusion criteria have been published previously.6 Briefly, patients with COPD aged ≥40 years with a smoking history (current or previous) of ≥10 pack-years were eligible to enter the study. Patients with moderate to severe COPD (for whom long-acting bronchodilators are recommended)5 had post-salbutamol forced expiratory volume in 1 second (FEV1) ≥30% and <80% of the predicted normal value, FEV1/forced vital capacity <70%. Use of long-acting bronchodilators other than the investigative treatment was not permitted. Use of salbutamol pressurized metered dose inhaler (100 μg/puff) was permitted as relief medication as needed (except ≤6 hours before each visit). 
Patients were permitted to continue use of oral sustained-release theophylline (use of other methylxanthines was not permitted), inhaled corticosteroids, and oral or parenteral corticosteroids (equivalent to ≤10 mg/day or 20 mg every other day of prednisone) if treatment was stable ≥4 weeks prior to screening, except ≤6 hours before each visit. Oxygen therapy (except ≤2 hours before each visit) was permitted. After a screening visit, patients underwent a 2- to 3-week run-in period to assess disease stability. Eligible patients were randomized (2:2:1) to receive aclidinium bromide 400 μg BID in the morning and evening via the Genuair™/Pressair® (registered trademark of AstraZeneca group of companies; for use within the USA as Pressair® and as Genuair™ within all other licensed territories) multidose dry powder inhaler, tiotropium 18 μg QD in the morning via the HandiHaler®, or placebo for 6 weeks. The study was approved by an independent ethics committee at each site (Table S1) and was conducted in accordance with the Declaration of Helsinki, the International Conference on Harmonisation, and Good Clinical Practice guidelines. All patients provided written informed consent. Post hoc analysis: This post hoc analysis assessed symptomatic patients, defined as those patients with an Evaluating Respiratory Symptoms in COPD (E-RS:COPD™ [The EXACT™ and E-RS™ are owned by Evidera. Permission to use these instruments may be obtained from Evidera {[email protected]}]; formerly known as EXAcerbations of Chronic pulmonary disease Tool [EXACT]-RS) baseline score ≥10 units. This threshold was chosen based on data indicating that an E-RS score ≥10 units differentiated between asymptomatic (GOLD groups A and C) and symptomatic (GOLD groups B and D) patients.11 Assessments and endpoints: Lung function Lung function was assessed over 24 hours post-dose on day 1 and at week 6. 
The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1. Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1. COPD symptoms Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries and daily COPD symptoms scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. Patients who achieved a clinically meaningful improvement from baseline (E-RS Total score ≥−2 units) were considered to be responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study. 
Lung function: Lung function was assessed over 24 hours post-dose on day 1 and at week 6. The primary endpoint was change from baseline in normalized FEV1 area under the curve (AUC) over 24 hours post-morning dose (AUC0–24/24 h) at week 6. The secondary endpoint was change from baseline in normalized FEV1 AUC over the nighttime period (AUC12–24/12 h) at week 6. An additional lung function endpoint was change from baseline in morning pre-dose (trough) FEV1.
COPD symptoms: Every evening, patients completed the 14-item EXACT (recall period of “today”) via electronic diaries, and daily COPD symptoms scores were derived using E-RS scoring algorithms. The E-RS uses the 11 respiratory symptom items from the 14-item EXACT and assesses both overall daily respiratory COPD symptoms (RS-Total score; score range, 0–40, with higher scores indicating more severe symptoms) and specific respiratory symptoms using three subscales (RS-Breathlessness [score range, 0–17], RS-Cough and Sputum [score range, 0–11], and RS-Chest Symptoms [the sum of three items related to chest congestion/discomfort; score range, 0–12]).12,13 E-RS Total and domain scores were assessed at baseline and over the 6-week study duration. Patients who achieved a clinically meaningful improvement from baseline (a reduction in E-RS Total score of ≥2 units, ie, a change of −2 units or more) were considered responders; this responder definition was proposed based on results from three randomized controlled trials.14 Responder status was assessed over the 6 weeks of the study.
To assess the severity of early-morning and nighttime symptoms, an additional COPD symptoms questionnaire developed by the study sponsor was completed by patients each morning via electronic diaries. It rated overall symptom severity (5-point scale: 1= “did not experience symptoms”; 5= “very severe”), the individual morning symptoms of cough, wheeze, shortness of breath, and phlegm (5-point scale: 0= “no symptoms”; 4= “very severe symptoms”), and limitation of morning activities (5-point scale: 1= “not at all”; 5= “a very great deal”).
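As a hedged illustration of the E-RS scoring and responder rule described in this section: only the subscale sizes, score ranges, and the ≥2-unit responder threshold come from the text; the assignment of specific item IDs to subscales below is an assumption for the sketch.

```python
# Illustrative E-RS scoring sketch. Item IDs are hypothetical placeholders;
# only the subscale structure and ranges are taken from the text.
BREATHLESSNESS = ["b1", "b2", "b3", "b4", "b5"]   # RS-Breathlessness, range 0-17
COUGH_SPUTUM   = ["c1", "c2", "c3"]               # RS-Cough and Sputum, range 0-11
CHEST          = ["ch1", "ch2", "ch3"]            # RS-Chest Symptoms, range 0-12

def ers_scores(items: dict) -> dict:
    """Derive E-RS subscale and Total scores from one day's item responses."""
    breath = sum(items[k] for k in BREATHLESSNESS)
    cough = sum(items[k] for k in COUGH_SPUTUM)
    chest = sum(items[k] for k in CHEST)
    return {
        "RS-Breathlessness": breath,
        "RS-Cough and Sputum": cough,
        "RS-Chest Symptoms": chest,
        "RS-Total": breath + cough + chest,  # range 0-40; higher = more severe
    }

def is_responder(baseline_total: float, followup_total: float) -> bool:
    """Responder = clinically meaningful improvement of >=2 units,
    ie, change from baseline <= -2 on RS-Total."""
    return (followup_total - baseline_total) <= -2.0
```

A quick sanity check: a patient moving from an RS-Total of 15 at baseline to 12 on treatment has a change of −3 units and would be classed as a responder.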
Since this study, these early-morning and nighttime symptom questionnaires have been developed and evaluated further.15,16
Safety and tolerability: Treatment-emergent adverse events (TEAEs) were recorded throughout the study.
Statistical analyses: Efficacy data are reported for the intent-to-treat population, defined as all randomized patients who received at least one dose of study medication and had at least one baseline and one post-baseline FEV1 value. Endpoints were assessed using an analysis of covariance (ANCOVA) model with treatment and sex as factors, and age and baseline values as covariates. Between-group least squares mean differences and 95% confidence intervals were calculated for all treatment group comparisons.
Results:
Patients: In all, 414 patients were randomized to treatment in the overall study (2:2:1 ratio), of whom 277 were defined as symptomatic (E-RS baseline score ≥10 units) and were included in this post hoc subgroup analysis (placebo: n=60; aclidinium 400 μg: n=116; tiotropium 18 μg: n=101) (Figure 1). The percentages of patients in each treatment arm of this post hoc analysis were similar to those in the primary study (placebo, 21.7% vs 20.5%; aclidinium 400 μg, 41.9% vs 41.3%; tiotropium 18 μg, 36.5% vs 38.2%, respectively). Demographics and baseline characteristics in the subgroup of symptomatic patients were similar to those in the overall study population (symptomatic patients: mean age 62.1 years, 65.0% male, 54.5% current smokers, post-bronchodilator FEV1 54.6% predicted). Patient demographics and baseline characteristics for symptomatic patients were also similar across treatment arms, with the exception of a higher proportion of male patients in the active treatment groups compared with placebo, and a higher proportion of patients with severe COPD in the tiotropium group (Table 1). Mean post-bronchodilator percent predicted FEV1 and COPD symptoms scores at baseline were similar across treatment arms (Table 1).
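The ANCOVA described under Statistical analyses (treatment and sex as factors, age and baseline as covariates, least squares mean differences with 95% CIs) can be sketched with ordinary least squares on simulated data. This is a minimal illustration, not the study's actual analysis implementation, and every number below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated trial data (illustrative only, not study data)
treat = rng.integers(0, 2, n)        # 0 = placebo, 1 = active
sex = rng.integers(0, 2, n)
age = rng.normal(62, 8, n)
baseline = rng.normal(1400, 300, n)  # baseline FEV1, mL
# Simulated change from baseline with a true adjusted treatment effect of +100 mL
change = (100 * treat + 5 * sex - 1.0 * (age - 62)
          + 0.05 * (baseline - 1400) + rng.normal(0, 80, n))

# ANCOVA via OLS: intercept, treatment and sex factors, age and baseline covariates
X = np.column_stack([np.ones(n), treat, sex, age, baseline])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)

# 95% CI for the adjusted (least squares mean) treatment difference, beta[1]
resid = change - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])      # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)          # covariance of the estimates
se = np.sqrt(cov[1, 1])
lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
```

With these settings the recovered treatment coefficient lands near the simulated +100 mL effect, mirroring how the trial reports between-group differences as LS mean (95% CI).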
Efficacy:
Lung function: Lung function endpoints in the subgroup of symptomatic patients were similar to those in the overall population. Aclidinium 400 μg BID and tiotropium 18 μg QD both improved FEV1 over 24 hours (AUC0–24/24 h) from baseline at week 6 compared with placebo (aclidinium, 140 mL; tiotropium, 106 mL; both P<0.01). Furthermore, treatment with aclidinium 400 μg BID and tiotropium 18 μg QD improved FEV1 from baseline at week 6 at all time points over 24 hours, compared with placebo (Figure 2). During the nighttime period (AUC12–24/12 h), improvements from baseline compared with placebo were greater with aclidinium 400 μg BID than with tiotropium 18 μg QD on day 1 (157 vs 67 mL, respectively; P<0.001) and at week 6 (153 vs 90 mL, respectively; P<0.05). Aclidinium 400 μg BID also demonstrated improvements in trough FEV1 from baseline versus placebo and tiotropium at day 1 (136 vs 68 mL for aclidinium and tiotropium, respectively; P<0.05) and at week 6 (137 vs 71 mL, respectively; P<0.05) in symptomatic patients (Figure 3).
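The normalized FEV1 AUC endpoints (AUC0–24/24 h, AUC12–24/12 h) divide the area under the serial FEV1 curve by the width of the time window, yielding a time-averaged FEV1 in mL. A minimal sketch using trapezoidal integration follows; this is an assumption for illustration, since the study's exact computational rules (interpolation, imputation of missing time points) are not described here.

```python
import numpy as np

def normalized_fev1_auc(times_h, fev1_ml, start=0.0, end=24.0):
    """Trapezoidal area under FEV1(t) over [start, end] hours post-dose,
    divided by the window width -- a time-averaged FEV1 in mL.

    Assumes measurements exist at the window bounds; a real analysis
    would interpolate or impute missing time points."""
    t = np.asarray(times_h, dtype=float)
    y = np.asarray(fev1_ml, dtype=float)
    mask = (t >= start) & (t <= end)
    t, y = t[mask], y[mask]
    area = np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0)  # trapezoidal rule
    return float(area / (end - start))
```

A quick sanity check is that a flat profile returns its constant value: a series fixed at 1500 mL over 0–24 h gives AUC0–24/24 h = 1500 mL, and restricting the window to 12–24 h gives the same.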
COPD symptoms in symptomatic patients: In this subgroup of symptomatic patients, the improvement from baseline in E-RS Total score over 6 weeks was greater with aclidinium (−2.8 units) than with placebo (−0.7 units; P<0.001) or tiotropium (−1.6 units; P<0.05) (Figure 4A). For each of the E-RS domains, greater improvements from baseline in symptomatic patients were also observed with aclidinium over 6 weeks of treatment (RS-Breathlessness and RS-Cough and Sputum: P<0.05 vs tiotropium and P<0.01 vs placebo; RS-Chest Symptoms: P<0.05 vs placebo) (Figure 4A). A higher percentage of patients in the aclidinium 400 μg treatment arm were E-RS responders over 6 weeks (52.6%) compared with placebo (28.3%; P<0.01) and tiotropium 18 μg (37.6%; P<0.05) (Figure 4B). Overall early-morning symptom severity was reduced in the subgroup of symptomatic patients over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05; Figure 5A). Aclidinium also improved individual early-morning symptoms: shortness of breath and cough scores improved in symptomatic patients treated with aclidinium compared with placebo over 6 weeks (both P<0.05; Figure 5A). A reduction in overall nighttime symptom severity from baseline was observed over 6 weeks with aclidinium versus placebo and tiotropium in symptomatic patients (both P<0.05; Figure 5B). Only numerical (non-significant) improvements in early-morning and nighttime symptom severity were observed for tiotropium versus placebo in this subgroup. In symptomatic patients, limitation of early-morning activity caused by COPD symptoms was reduced from baseline over 6 weeks with aclidinium versus placebo (P<0.01) and tiotropium (P<0.05), but not with tiotropium versus placebo (Figure 5C).
Safety and tolerability: In the subgroup of symptomatic patients, the incidence of TEAEs was comparable in the placebo (26.7%), aclidinium (28.4%), and tiotropium (32.7%) groups. Similar to the overall study population, the most commonly reported TEAEs in symptomatic patients were headache (5.8%) and nasopharyngitis (5.1%). Other common TEAEs (≥2% of patients overall) were COPD exacerbation (2.5%), back pain (2.5%), and cough (2.2%). The majority of TEAEs were mild or moderate in intensity. There were few serious TEAEs (1.4% overall) and no deaths in the subgroup of symptomatic patients. In total, five patients (1.8%) discontinued due to TEAEs and one patient (0.4%) discontinued due to a serious TEAE, with COPD exacerbation being the most common cause (1.4%).
Discussion: Assessment of treatment efficacy in symptomatic patients has clinical significance, as treatment guidelines recommend that such patients are treated in order to improve lung function and reduce symptoms.17 This post hoc analysis was performed to evaluate the 24-hour effect of treatment with aclidinium bromide 400 μg BID, tiotropium 18 μg QD, or placebo, in 277 symptomatic patients with moderate to severe COPD participating in a Phase III study. This subgroup of symptomatic patients constituted a substantial proportion of the overall patients (277/414; 45%). In the subgroup of symptomatic patients, 6 weeks of treatment with aclidinium BID showed improvements from baseline in FEV1 at all time points over 24 hours (compared with placebo), and FEV1 was higher than tiotropium QD at most 12- to 24-hour time points. Aclidinium treatment also led to greater improvements in trough FEV1 compared with tiotropium. Furthermore, during the nighttime period at both day 1 and week 6, improvements from baseline in FEV1 (compared with placebo) were greater with aclidinium than with tiotropium. Patient symptoms also improved following treatment. After 6 weeks, the improvement in E-RS score was greater with aclidinium compared with both placebo and tiotropium, as indicated by changes in RS-Breathlessness, RS-Cough and Sputum, and E-RS Total scores. In addition, the percentage of patients defined as E-RS responders increased with aclidinium compared with tiotropium or placebo. Improvement over placebo and tiotropium was also observed in early-morning symptom severity, nighttime symptom severity, individual early-morning symptoms (shortness of breath and cough), and limitation of early-morning activity caused by symptoms. Safety and tolerability in symptomatic patients appeared to be similar to that in the overall study population, with the most commonly reported TEAEs being headache and nasopharyngitis. 
The overall study population included both symptomatic and asymptomatic patients. Results have been previously reported and indicated that aclidinium provided significant 24-hour bronchodilation versus placebo from day 1, with comparable efficacy to tiotropium after 6 weeks.6 In this post hoc analysis, some notable differences were observed in the symptomatic patient group. Improvements in bronchodilation during the nighttime period were greater with aclidinium than with tiotropium in symptomatic patients, whereas in the overall population, no differences were observed between the two treatments. Furthermore, symptomatic patients experienced a greater reduction in nighttime symptom severity from baseline to 6 weeks with aclidinium, compared with tiotropium. In the overall population, although nighttime symptom severity was significantly reduced with aclidinium versus placebo, the difference between the two comparators was not statistically significant.6 For both the overall population and the symptomatic patient group, no differences in nighttime symptom severity were observed for tiotropium versus placebo. 
While the BID dosing regimen of aclidinium may have contributed to the observed improvement in nighttime efficacy versus tiotropium QD in symptomatic patients, improvement of nighttime lung function is of particular importance for patients with COPD, as nighttime symptoms and poor-quality sleep are common.1 To date, relatively few studies have demonstrated significant improvements in nighttime lung function and/or sleep quality following bronchodilator therapy.1,18–22 The improvements in E-RS score observed in symptomatic patients receiving aclidinium are likely to be clinically significant, since the E-RS has been shown to be a valid and reliable tool for assessing respiratory symptoms of COPD in clinical trials.11,14,23 Minimal clinically important differences have recently been proposed for the different aspects of the E-RS tool: RS-Total ≥−2 units; RS-Breathlessness ≥−1 unit; RS-Cough and Sputum ≥−0.7 units; and RS-Chest Symptoms ≥−0.7 units,14 and the ability of the E-RS to capture treatment effects has recently been evaluated.11 In this study, the change in E-RS Total score was −2.8 units with aclidinium versus −0.7 units with placebo and −1.6 units with tiotropium. With aclidinium, changes from baseline in individual domain scores at week 6 were −1.3 for Breathlessness, −0.6 for Chest Symptoms, and −0.8 for Cough and Sputum. This post hoc analysis has some potential limitations. A significantly higher proportion of patients with baseline bronchial reversibility was found in the aclidinium group compared with the tiotropium group, which may partly account for the significant difference in efficacy observed. The apparently higher proportion of patients with severe COPD in the tiotropium group is, however, unlikely to have influenced the FEV1 response in these patients, since the differences between groups were not significant. 
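As a concrete illustration, the reported week-6 changes can be checked against the proposed E-RS minimal clinically important differences (MCIDs). This is a hypothetical sketch using the values quoted above; the helper names are my own, not from the study:

```python
# Proposed E-RS MCIDs (improvements are negative changes, so a change
# at or below the threshold counts as clinically important).
ERS_MCID = {
    "RS-Total": -2.0,
    "RS-Breathlessness": -1.0,
    "RS-Cough and Sputum": -0.7,
    "RS-Chest Symptoms": -0.7,
}

def exceeds_mcid(domain: str, change_from_baseline: float) -> bool:
    """Return True if the change from baseline meets or exceeds the MCID."""
    return change_from_baseline <= ERS_MCID[domain]

# Week-6 changes from baseline reported for aclidinium in this analysis:
aclidinium_changes = {
    "RS-Total": -2.8,
    "RS-Breathlessness": -1.3,
    "RS-Chest Symptoms": -0.6,
    "RS-Cough and Sputum": -0.8,
}

for domain, change in aclidinium_changes.items():
    print(domain, exceeds_mcid(domain, change))
```

Run against the reported values, only the Chest Symptoms change (−0.6 vs an MCID of −0.7) falls short, consistent with the exception noted for E-RS Chest Symptoms in the abstract.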
Also, the 6-week study period may not be long enough to reflect a patient’s symptom burden. In addition, the E-RS threshold used to distinguish symptomatic from asymptomatic patients has not been formally validated and requires further investigation.11 It should also be considered that unvalidated, early versions of the early-morning (Early Morning Symptoms of COPD Instrument) and nighttime (Nighttime Symptoms of COPD Instrument) symptom questionnaires were used in this study. Both questionnaires have subsequently been developed and evaluated further, and published data indicate that they are valid tools for measuring COPD symptoms in large randomized trials.15,16 The use of a patient questionnaire, rather than clinical assessments, to evaluate limitation of early-morning activity could be considered a potential constraint of this study; however, the benefit of this method is that it can assess whether patients are restricted in their usual morning activities, such as getting washed and dressed. One further consideration of this post hoc analysis is that although symptomatic patients constituted 67% (277/414) of the overall study population, the total sample size remains relatively small (n=414). Furthermore, a different group of symptomatic patients might have been identified if the COPD Assessment Test or modified Medical Research Council criteria outlined in the GOLD report had been applied.5 Conclusion: Results from this post hoc analysis of a symptomatic patient group with moderate to severe COPD showed that aclidinium 400 μg BID provided additional improvements compared with tiotropium 18 μg QD in: 1) bronchodilation, particularly during the nighttime, 2) E-RS responder status, 3) early-morning, daytime, and nighttime symptoms, and 4) early-morning limitation of activity. These results suggest that symptomatic patients may achieve greater benefits during the nighttime with aclidinium treatment than patients with fewer symptoms.
Background: A previous Phase IIIb study (NCT01462929) in patients with moderate to severe COPD demonstrated that 6 weeks of treatment with aclidinium led to improvements in 24-hour bronchodilation comparable to those with tiotropium, and improvement of symptoms versus placebo. This post hoc analysis was performed to assess the effect of treatment in the symptomatic patient group participating in the study. Methods: Symptomatic patients (defined as those with Evaluating Respiratory Symptoms [E-RS™] in COPD baseline score ≥10 units) received aclidinium bromide 400 μg twice daily (BID), tiotropium 18 μg once daily (QD), or placebo, for 6 weeks. Lung function, COPD respiratory symptoms, and incidence of adverse events (AEs) were assessed. Results: In all, 277 symptomatic patients were included in this post hoc analysis. Aclidinium and tiotropium treatment improved forced expiratory volume in 1 second (FEV1) from baseline to week 6 at all time points over 24 hours versus placebo. In addition, improvements in FEV1 from baseline during the nighttime period were observed for aclidinium versus tiotropium on day 1 (aclidinium 157 mL, tiotropium 67 mL; P<0.001) and week 6 (aclidinium 153 mL, tiotropium 90 mL; P<0.05). Aclidinium improved trough FEV1 from baseline versus placebo and tiotropium at day 1 (aclidinium 136 mL, tiotropium 68 mL; P<0.05) and week 6 (aclidinium 137 mL, tiotropium 71 mL; P<0.05). Aclidinium also improved early-morning and nighttime symptom severity, limitation of early-morning activities, and E-RS Total and domain scores versus tiotropium (except E-RS Chest Symptoms) and placebo over 6 weeks. Tolerability showed similar incidence of AEs in each arm. 
Conclusions: In this post hoc analysis of symptomatic patients with moderate to severe COPD, aclidinium 400 μg BID provided additional improvements compared with tiotropium 18 μg QD in: 1) bronchodilation, particularly during the nighttime, 2) daily COPD symptoms (E-RS), 3) early-morning and nighttime symptoms, and 4) early-morning limitation of activity.
Introduction: Symptoms of COPD can vary in severity over a 24-hour period, and studies indicate that they are generally worse in the early morning and at nighttime.1–3 Symptoms include chronic cough, sputum production, and breathlessness, which can severely impact a patient’s daily activities and overall well-being3 and carry a correspondingly high socioeconomic burden.4 Estimates suggest that the frequency of nocturnal symptoms and symptomatic sleep disturbance may exceed 75% in patients with COPD, and potential long-term consequences include lung function changes, increased exacerbation frequency, emergence or worsening of cardiovascular disease, impaired quality of life, and increased mortality.1 It is therefore important that symptoms over the entire 24-hour day are identified and managed appropriately. To guide therapy, clinical guidelines (Global initiative for chronic Obstructive Lung Disease [GOLD]) recommend that symptoms, airflow limitation, and risk of exacerbations be assessed.5 Patients are classified into one of four groups according to their symptom burden and risk of exacerbations: A, low risk, less symptoms; B, low risk, more symptoms; C, high risk, less symptoms; or D, high risk, more symptoms.5 Current evidence suggests that bronchodilator treatment may be more effective in patients who are considered symptomatic (ie, groups B and D).5 Bronchodilator therapies are a mainstay of COPD treatment, with two classes of long-acting bronchodilators currently available: long-acting muscarinic antagonists (LAMAs) and long-acting β2-agonists (LABAs). LAMAs inhibit the action of acetylcholine at muscarinic receptors, while LABAs enhance cAMP signaling through stimulation of β2-adrenergic receptors; both result in the relaxation of bronchial smooth muscle.5 The LAMA aclidinium bromide is a maintenance bronchodilator therapy for adults with COPD. 
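The four-group classification described above can be sketched as follows. This is a deliberate simplification: the actual GOLD assessment derives symptom burden and exacerbation risk from validated instruments (e.g. CAT or mMRC scores) and exacerbation history, which are assumed here to be pre-computed booleans.

```python
def gold_group(high_risk: bool, more_symptoms: bool) -> str:
    """Map (exacerbation risk, symptom burden) to GOLD group A-D.

    A: low risk, less symptoms;  B: low risk, more symptoms;
    C: high risk, less symptoms; D: high risk, more symptoms.
    """
    if not high_risk:
        return "B" if more_symptoms else "A"
    return "D" if more_symptoms else "C"

# The "more symptomatic" groups discussed in the text are B and D:
assert gold_group(high_risk=False, more_symptoms=True) == "B"
assert gold_group(high_risk=True, more_symptoms=True) == "D"
```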
The efficacy and tolerability results from a Phase IIIb study in patients with moderate to severe COPD, who received either aclidinium 400 μg twice daily (BID), the active comparator tiotropium 18 μg once daily (QD), or placebo have been previously reported.6 Briefly, following 6 weeks of treatment, patients receiving aclidinium 400 μg BID demonstrated improvements in 24-hour bronchodilation, compared with placebo, that were comparable with tiotropium 18 μg QD. In addition, COPD symptoms significantly improved from baseline with aclidinium, but not tiotropium, compared with placebo.6 These results were similar to those observed in a prior 2-week Phase IIa trial.7 Furthermore, a recent real-world study in patients with COPD reported improvements in nighttime and early-morning symptoms, limitation of morning activities, and quality of life over 3 months with aclidinium 400 μg BID, compared with baseline.8 Since aclidinium has a greater impact on COPD symptoms than tiotropium,9,10 and the “more symptomatic” patient groups stand to benefit more from bronchodilator treatment than the “less symptomatic” groups, aclidinium may provide an additional therapeutic benefit over tiotropium in these patients. This study reports the findings of a post hoc analysis, which focused on the response in the symptomatic patient group. The key objective of this analysis was to identify any differences in 24-hour lung function and symptom control between treatment with aclidinium 400 μg BID and tiotropium 18 μg QD in this population of patients. Conclusion: Results from this post hoc analysis of a symptomatic patient group with moderate to severe COPD showed that aclidinium 400 μg BID provided additional improvements compared with tiotropium 18 μg QD in: 1) bronchodilation, particularly during the nighttime, 2) E-RS responder status, 3) early-morning, daytime, and nighttime symptoms, and 4) early-morning limitation of activity. 
These results suggest that symptomatic patients may achieve greater benefits during the nighttime with aclidinium treatment than patients with fewer symptoms.
14,276
398
18
[ "patients", "aclidinium", "tiotropium", "rs", "placebo", "baseline", "symptoms", "symptomatic", "μg", "copd" ]
[CONTENT] COPD | 24-hour bronchodilation | long-acting muscarinic antagonist | nighttime | symptoms [SUMMARY]
[CONTENT] Activities of Daily Living | Aged | Bronchodilator Agents | Circadian Rhythm | Double-Blind Method | Female | Forced Expiratory Volume | Humans | Lung | Male | Middle Aged | Muscarinic Antagonists | Pulmonary Disease, Chronic Obstructive | Recovery of Function | Severity of Illness Index | Time Factors | Tiotropium Bromide | Treatment Outcome | Tropanes [SUMMARY]
[CONTENT] patients | aclidinium | tiotropium | rs | placebo | baseline | symptoms | symptomatic | μg | copd [SUMMARY]
[CONTENT] risk | symptoms | risk symptoms | 24 hour | hour | aclidinium | μg | copd | high | bronchodilator [SUMMARY]
[CONTENT] evidera | rs | exact rs | gold groups | score 10 | gold | exact | score 10 units | 10 units | 10 [SUMMARY]
[CONTENT] aclidinium | tiotropium | 05 | placebo | figure | vs | patients | symptomatic patients | symptomatic | μg [SUMMARY]
[CONTENT] nighttime | results | early morning | early | improvements compared tiotropium | improvements compared | nighttime rs responder status | nighttime rs responder | nighttime rs | compared tiotropium 18 [SUMMARY]
[CONTENT] patients | aclidinium | rs | tiotropium | symptoms | placebo | baseline | symptomatic | μg | study [SUMMARY]
[CONTENT] Phase | NCT01462929 | 6 weeks | 24-hour ||| [SUMMARY]
[CONTENT] 400 | BID | 18 | 6 weeks ||| Lung [SUMMARY]
[CONTENT] 277 ||| 1 second | FEV1 | week 6 | 24 hours ||| FEV1 | day 1 | 157 mL | 67 | week 6 | 153 mL | 90 mL ||| day 1 | 136 mL | 68 mL | P<0.05 | week 6 | 137 mL | 71 mL ||| early-morning | nighttime | early-morning | E-RS | 6 weeks ||| [SUMMARY]
[CONTENT] COPD | 400 | BID | 18 | 1 | 2 | 3 | early-morning | nighttime | 4 | early-morning [SUMMARY]
[CONTENT] NCT01462929 | 6 weeks | 24-hour ||| ||| 400 | BID | 18 | 6 weeks ||| Lung ||| ||| 277 ||| 1 second | FEV1 | week 6 | 24 hours ||| FEV1 | day 1 | 157 mL | 67 | week 6 | 153 mL | 90 mL ||| day 1 | 136 mL | 68 mL | P<0.05 | week 6 | 137 mL | 71 mL ||| early-morning | nighttime | early-morning | E-RS | 6 weeks ||| ||| COPD | 400 | BID | 18 | 1 | 2 | 3 | early-morning | nighttime | 4 | early-morning [SUMMARY]
Validation of the Pittsburgh sleep quality index in community dwelling Ethiopian adults.
28347341
The applicability of the Pittsburgh Sleep Quality Index (PSQI) for screening of insomnia has been demonstrated in various populations. However, the tool has not been validated in a sample of Ethiopians. This study therefore aimed to assess its psychometric properties in community dwelling Ethiopian adults.
BACKGROUND
Participants (n = 311, age = 25.5 ± 6.0 years, body mass index = 22.1 ± 2.3 kg/m2) from Mizan-Aman town, Southwest Ethiopia, completed the PSQI and a semi-structured questionnaire on socio-demographics. A clinical interview for screening of insomnia according to the International Classification of Sleep Disorders was carried out as a concurrent validation measure.
MATERIAL AND METHODS
Overall, the PSQI scale showed neither floor nor ceiling effects. Moderate internal consistency (Cronbach's alpha = 0.59) and sufficient internal homogeneity, as indicated by the correlation coefficients between component scores and the global PSQI score, were found. The PSQI was of good value for screening insomnia, with an optimal cut-off score of 5.5 (sensitivity 82%, specificity 56.2%) and an area under the curve of 0.78 (p < 0.0001). The PSQI has a unidimensional factor structure in Ethiopian community adults for screening insomnia.
RESULTS
The PSQI has good psychometric validity in screening for insomnia among Ethiopian adults.
CONCLUSION
[ "Adult", "Ethiopia", "Female", "Humans", "Independent Living", "Male", "Middle Aged", "Psychometrics", "Quality of Life", "Reference Values", "Reproducibility of Results", "Sleep Initiation and Maintenance Disorders", "Surveys and Questionnaires", "Translating" ]
5369003
Introduction
Sleep disorders are becoming endemic in both developed and developing societies [1–4]. More than half of the population across the globe is affected by some type of sleep disturbance, and about one fifth of adults have a chronic sleep disorder [1–5]. Sleep problems are implicated in poor health conditions characterized by impaired social relationships, neurologic and/or psychiatric conditions, drowsy driving, risk-taking behavior, occupational accidents, and a heightened risk of cardiovascular events [3–5]. The majority of Ethiopian university students are sleep disturbed, which adversely affects their psychological health [2, 3]. Insomnia and its subjective symptoms, e.g. difficulty initiating sleep, short sleep duration, and poor sleep quality, are the main dimensions of poor sleep in Ethiopia [2, 3]. Ethiopia is a developing African country with few trained sleep health professionals. There is no indigenously developed questionnaire for the assessment of sleep quality, and no work has been done to validate known tools that assess sleep quality in Ethiopians. In such circumstances, a validated questionnaire to assess sleep health is necessary. There are more than 80 languages in the country, with the majority of Ethiopians having low to high proficiency in spoken Amharic, the national language; reading proficiency is more variable because the major language groups use different scripts. The Pittsburgh Sleep Quality Index (PSQI) is the most widely used instrument in the diagnosis of sleep disorders, including insomnia, in different populations [5–9]. The questionnaire has twenty-four items, of which nineteen self-reported items are combined non-linearly to generate seven component scores. These component scores are summed to give the global PSQI score, a measure of sleep health over the 1 month immediately preceding administration. 
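Assuming the seven component scores (each ranging 0–3) have already been derived from the nineteen self-rated items per the PSQI scoring rules, the global score is a simple sum. A minimal sketch (names are illustrative, not from the PSQI manual):

```python
# Seven PSQI components, each scored 0-3; the global score is their sum (0-21).
PSQI_COMPONENTS = (
    "sleep_quality", "sleep_latency", "sleep_duration",
    "sleep_efficiency", "sleep_disturbances", "sleep_medication",
    "daytime_dysfunction",
)

def global_psqi(components: dict) -> int:
    """Sum the seven component scores into the global PSQI score (0-21)."""
    for name in PSQI_COMPONENTS:
        score = components[name]
        if not 0 <= score <= 3:
            raise ValueError(f"{name} must be 0-3, got {score}")
    return sum(components[name] for name in PSQI_COMPONENTS)

example = dict.fromkeys(PSQI_COMPONENTS, 1)
print(global_psqi(example))  # 7, close to the sample mean of 7.0 reported below
```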
The tool is easy to understand, well accepted by patients, and requires about 5 min to complete. The validity of the PSQI is well established in various clinical and non-clinical populations and in people of different regions and ethnicities [5, 6, 10]. However, few studies have investigated the psychometric properties of the PSQI in African populations, and it has never been validated in Ethiopians [5, 7]. The present study therefore sought to validate the PSQI in a sample of community dwelling Ethiopian adults.
Results
The socio-demographics of the community dwelling Ethiopian adults participating in the study are given in Table 1. Table 2 shows the item analysis (ie, component analysis in this case) of the PSQI in the study population. The mean global PSQI score was 7.0. The majority of participants reported a habit of tea/coffee consumption (99.0%), alcohol intake (74%), and Chat chewing (52.1%) (Table 1).

Table 1. Socio-demographics of community dwelling Ethiopian adults (mean ± SD or frequency)
Age (yr): 25.45 ± 5.99
BMI (kg/m2): 22.07 ± 2.30
Gender: Male 276 (88.7%); Female 35 (11.3%)
Ethnicity: Bench 87 (28%); Kaffa 75 (24.1%); Oromo 38 (12.2%); Amhara 40 (12.9%); Tigre 7 (2.3%); Others 64 (20.6%)
Education: Illiterate 1 (0.3%); Can read and write 99 (31.8%); Primary (1–8) 21 (6.8%); Secondary (9–12) 76 (24.4%); College/University 114 (36.7%)
Occupation: Farmer 36 (11.6%); Government employee 34 (10.9%); Merchant 17 (5.5%); Housewife 1 (0.3%); Others 223 (71.7%)
Religion: Orthodox Christian 162 (52.1%); Protestant Christian 101 (32.5%); Islam 44 (14.1%); Others 4 (1.3%)
Monthly family income (Birr): Very low (<445) 15 (4.8%); Low (446–1200) 186 (59.8%); Average (1201–2500) 87 (28.0%); Above average (2501–3500) 16 (5.1%); High (>3500) 7 (2.3%)
Parents: Single 210 (67.5%); Married 98 (31.5%); Divorced 3 (1.0%)
Global PSQI score: 6.96 ± 3.34
ICSD-R classification: Insomniac 206 (66.2%); Normal 105 (33.8%)
Substance use/habits: Chat user 162 (52.1%) vs non-user 149 (47.9%); Alcoholic 230 (74%) vs non-alcoholic 81 (26%); Smoker 79 (25.4%) vs non-smoker 232 (74.6%); Tea/coffee consumer 308 (99.0%) vs non-consumer 3 (1.0%); Beverage consumer 129 (41.5%) vs non-consumer 182 (58.5%)
BMI, body mass index; PSQI, Pittsburgh Sleep Quality Index; ICSD-R, International Classification of Sleep Disorders, Revised criteria.

Table 2. Distribution of the PSQI component scores in community dwelling Ethiopian adults (frequency, percentage)
Sleep duration: ≥7 h, 95 (30.5%); 6–7 h, 53 (17.0%); 5–6 h, 28 (9.0%); <5 h, 135 (43.4%)
Sleep disturbances: 0, 47 (15.1%); 1, 261 (83.9%); 2, 3 (1.0%); 3, 0 (0%)
Sleep latency: 0, 36 (11.6%); 1, 113 (36.3%); 2, 124 (39.9%); 3, 38 (12.2%)
Daytime dysfunction: 0, 293 (94.2%); 1, 14 (4.5%); 2, 4 (1.3%); 3, 0 (0%)
Sleep efficiency: >85%, 105 (33.8%); 75–84%, 24 (7.7%); 65–74%, 27 (8.7%); <65%, 155 (49.8%)
Sleep quality: Very good, 97 (31.2%); Fairly good, 126 (40.5%); Fairly bad, 54 (17.4%); Very bad, 34 (10.9%)
Sleep medication: Not during the past month, 306 (98.4%); Less than once a week, 3 (1.0%); Once or twice a week, 2 (0.6%); Three or more times a week, 0 (0%)

Ceiling or floor effects were considered present if more than 15% of respondents achieved the highest or lowest score, respectively [14, 15]. Overall, the global PSQI score showed neither effect: 5.1% of Ethiopian adults reported the minimum score of zero, and none reported the maximum score of 21. Individual components of the tool, however, did show such effects. All PSQI components except sleep latency showed a floor effect (more than 15% of respondents achieved the lowest score), whereas a ceiling effect was observed only for the sleep duration and sleep efficiency components (more than 15% of respondents achieved the highest score) [14, 15]. The internal consistency test of the PSQI scores showed a Cronbach’s alpha of 0.59, suggesting moderate consistency. Cronbach’s alpha increased by 0.03 (from 0.59 to 0.62) when the PSQI components for medicine use and daytime dysfunction were removed. Internal homogeneity, as indicated by Spearman’s correlation coefficients (r) between component scores and the global PSQI score, ranged from 0.15 to 0.81; all correlation coefficients were significant (p < 0.001) (Table 3). 
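The two descriptive checks reported above, the >15% floor/ceiling rule and Cronbach's alpha over the component scores, can be sketched as follows. This is illustrative code, not the study's analysis scripts:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha; rows = per-respondent lists of item/component scores."""
    k = len(rows[0])                       # number of items (components)
    cols = list(zip(*rows))                # per-item score columns

    def var(xs):                           # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_var = var([sum(r) for r in rows])      # variance of total scores
    item_var = sum(var(c) for c in cols)         # sum of item variances
    return k / (k - 1) * (1 - item_var / total_var)

def floor_ceiling(scores, lo, hi, threshold=0.15):
    """Return (has_floor, has_ceiling) per the >15%-of-respondents rule."""
    n = len(scores)
    return (scores.count(lo) / n > threshold,
            scores.count(hi) / n > threshold)
```

For example, `floor_ceiling(global_scores, 0, 21)` applied to the sample's global PSQI scores would return `(False, False)`, matching the reported absence of floor and ceiling effects.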
The groups identified as normal sleepers and insomniacs by clinical interview differed across the global PSQI score and all component scores except those for medicine use and daytime dysfunction (Table 4). The ROC curve is shown in Fig. 1. Table 5 shows the results of the ROC curve analysis, with sensitivity and specificity for all global PSQI cut-off scores between 0.5 and 16. At the cut-off score of 5.5, sensitivity and specificity were 82% and 56.2%, respectively.

Table 3. Internal consistency and homogeneity of the PSQI scores in community dwelling Ethiopian adults (component-to-global-score correlation; Cronbach's alpha if component deleted)
Sleep quality: 0.50; 0.58
Sleep latency: 0.56; 0.52
Sleep duration: 0.81; 0.43
Sleep efficiency: 0.81; 0.45
Sleep disturbances: 0.34; 0.58
Sleep medication: 0.18; 0.60
Daytime dysfunction: 0.15; 0.60

Table 4. Discriminative validity: comparison of PSQI scores between normal sleepers and insomniacs as determined by clinical interview (mean rank, normal sleepers vs primary insomniacs; p-value)
Sleep quality: 90.88 vs 189.19; p < 0.01
Sleep latency: 94.47 vs 187.36; p < 0.01
Sleep duration: 131.25 vs 168.61; p < 0.01
Sleep efficiency: 131.11 vs 168.68; p < 0.01
Sleep disturbances: 130.65 vs 168.92; p < 0.01
Sleep medication: 155.00 vs 156.51; p = 0.52
Daytime dysfunction: 154.40 vs 156.82; p = 0.58
Global PSQI score (mean ± SD): 4.70 ± 3.46 vs 8.11 ± 2.61; p < 0.01
An independent t-test was used for the global PSQI score; the Mann–Whitney U test was applied for component scores.

Fig. 1. Receiver operator curves in community dwelling Ethiopian adults: (A) no discrimination (AUC = 0.5); (B) experimental test (AUC = 0.78, p < 0.001); (C) perfect test (AUC = 1.0).

Table 5. Sensitivity and specificity of the PSQI at each cut-off score (cut-off: sensitivity; specificity)
0.5: 0.99; 0.14
1.5: 0.99; 0.29
2.5: 0.97; 0.35
3.5: 0.95; 0.40
4.5: 0.91; 0.46
5.5: 0.82; 0.56
6.5: 0.73; 0.65
7.5: 0.61; 0.74
8.5: 0.53; 0.83
9.5: 0.41; 0.93
10.5: 0.12; 0.96
11.5: 0.05; 0.99
12.5: 0.02; 0.99
14: 0.01; 1.00
16: 0; 1.00

Tables 6, 7, and 8 show the results of the factor analysis. The Kaiser-Meyer-Olkin test of sampling adequacy, Bartlett’s test of sphericity, the anti-image matrix, and the determinant all indicated that the sample met the conditions for factor analysis (Table 6) [11, 16]. For the PSQI with seven components, Kaiser’s criteria (eigenvalue > 1), the scree plot, and parallel analysis identified a 3-factor model, while the cumulative variance rule (>40%) supported a 2-factor model. 
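Each sensitivity/specificity pair in Table 5 can be derived by thresholding the global PSQI score against the clinical-interview label. A hedged sketch (function and variable names are assumptions, not the study's code):

```python
def sens_spec(scores, has_insomnia, cutoff):
    """Sensitivity and specificity of 'global PSQI > cutoff' as an insomnia test.

    scores: global PSQI values; has_insomnia: parallel booleans from the
    clinical interview; cutoff: e.g. 5.5 for the optimal cut-off reported.
    """
    tp = sum(s > cutoff and d for s, d in zip(scores, has_insomnia))
    fn = sum(s <= cutoff and d for s, d in zip(scores, has_insomnia))
    tn = sum(s <= cutoff and not d for s, d in zip(scores, has_insomnia))
    fp = sum(s > cutoff and not d for s, d in zip(scores, has_insomnia))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `cutoff` over the half-integer values 0.5 to 16 and plotting sensitivity against 1 − specificity reproduces the ROC curve whose area was 0.78 here.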
Kaiser’s criteria (Eigenvalue > 1), Scree plot and parallel analysis identified 2-factor model, while cumulative variance rule (>40%) found 1-factor model for the PSQI with five components (without the PSQI components of medicine use and daytime dysfunction) (Table 6).Table 6Summary of the sample size adequacy measures and exploratory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adultsMeasuresPSQI (Seven components)PSQI (Five components)Kaiser-Meyer-Olkin Test of Sampling Adequacy0.510.52Bartlett’s test of Sphericity<0.001<0.001Anti-image matrix0.39–0.670.48–0.64Determinant0.080.09Number of factors Kaiser’s criteria (Eigenvalue > 1)32 Cumulative variance rule (>40%)21 Scree plot32 Parallel analysis32 PSQI Pittsburgh sleep quality index Table 7Factor loadings in exploratory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adultsComponents of the PSQIPSQI (Seven components)PSQI (Five components)Factor 1Factor 2Factor 3Factor 1Factor 2PSQI component of sleep quality.88.05.08.88.05PSQI component of sleep latency.87−.13.18.87−.13PSQI component of sleep duration.10−.96.01.10−.97PSQI component of sleep efficiency.09−.97.06.09−.97PSQI component of sleep disturbances.62−.13.08.62−.12PSQI component of sleep medication.16−.03.78PSQI component of daytime dysfunction.06−.03.81Principal component analysis with direct oblimin rotation (Kaiser Normalization) was employed. 
PSQI Pittsburgh sleep quality index Table 8Summary of the Confirmatory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adultsModelsGFIAGFICFIRMSEARMR χ 2 df p χ 2/dfECVIModel-A0.790.580.320.340.29528.0614<0.0137.721.79Model-B0.980.950.980.060.0422.26100.012.230.19Model-C0.800.570.350.350.29508.9113<0.0139.151.74Model-D0.740.230.330.560.40490.305<0.0198.061.65Model-E0.980.910.980.110.0515.093<0.015.0310.13Model-A: 1-Factor model of the PSQI with all the seven components; Model-B: 1-Factor model of the PSQI with all the seven components and incorporation of modification index (correlations between error terms); Model-C: 2-Factor model of the PSQI (Factor-1 comprised of SLPQUAL, LATEN, DURAT, HSE, DISTB; Factor-2 comprised of MEDS, DAYDYS); Model-D: 1-Factor model of the PSQI with only five components (without MEDS and DAYDYS); Model-E: 1-Factor model of the PSQI with only five components (without MEDS and DAYDYS) with incorporation of modification index (correlations between error terms) GFI goodness of fit index, AGFI adjusted goodness of fit index, CFI comparative fit index, RMSEA root mean square error of approximation, RMR root mean square residual, ECVI expected cross-validation indexSLPQUAL: PSQI component of sleep quality, LATEN: PSQI component of sleep latency, DURAT: PSQI component of sleep duration, HSE: PSQI component of sleep efficiency, DISTB: PSQI component of sleep disturbances, MEDS: PSQI component of sleep medication, DAYDYS: PSQI component of daytime dysfunction Summary of the sample size adequacy measures and exploratory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adults PSQI Pittsburgh sleep quality index Factor loadings in exploratory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adults Principal component analysis with direct oblimin rotation (Kaiser Normalization) was employed. 
The CFA was run on five models of the PSQI (Table 8); Model-A: 1-Factor model of the PSQI with all the seven components; Model-B: 1-Factor model of the PSQI with all the seven components and incorporation of modification index (correlations between error terms); Model-C: 2-Factor model of the PSQI (Factor-1 comprised of the PSQI components for sleep quality, sleep latency, sleep duration, sleep efficiency and sleep disturbances; Factor-2 comprised of the PSQI components for sleep medicine and daytime dysfunction); Model-D: 1-Factor model of the PSQI with only five components (without the PSQI components for sleep medicine and daytime dysfunction); Model-E: 1-Factor model of the PSQI with only five components (without the PSQI components 
for sleep medicine and daytime dysfunction) with incorporation of modification index (correlations between error terms). None of the models achieved absolute fit to the data, i.e., none yielded a non-significant χ2 p-value (Table 8). Three models performed poorly: their RMR and RMSEA were higher than the cut-off values, while their GFI, AGFI and CFI were lower than the cut-off values. Model-B performed best, with the highest values for GFI, AGFI and CFI and the lowest values for RMSEA, RMR and χ2/df.
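The RMSEA and χ2/df values reported in Table 8 can be cross-checked from the χ2 statistics alone, using the standard formula RMSEA = sqrt(max(χ2 − df, 0) / (df(N − 1))) with the study's N = 311. A small sketch:

```python
from math import sqrt

def fit_summary(chi2, df, n):
    """Recompute chi2/df and RMSEA from a reported chi-square test.
    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return chi2 / df, sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Model-B from Table 8: chi2 = 22.26, df = 10, N = 311
ratio, rmsea = fit_summary(22.26, 10, 311)
print(round(ratio, 2), round(rmsea, 2))  # → 2.23 0.06
```

Both values match the tabled entries for Model-B (and likewise 37.72 and 0.34 for Model-A), which confirms the table's internal consistency.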
null
null
[ "Statistical analysis" ]
[ "The statistical package SPSS version 16.0 (SPSS Inc., Chicago, USA) was used. The PSQI is composed of nineteen self-reported items. The scores of these individual items are added non-linearly to get seven component scores. These components are measured variables and should not be confused with the component/factor term (a latent variable) used in factor analysis [6, 11]. The Cronbach alpha test was used to assess internal consistency. Internal homogeneity was tested by correlation analysis between PSQI components and the global scores. Discriminative validity was assessed by tests of difference: Mann-Whitney for the structured categorical variables (the PSQI components) and t-test for the global PSQI score. The diagnostic validation was performed by receiver operating curve (ROC) analysis. The screening by the sleep expert based on clinical interview served as the gold standard and the global PSQI score was the test variable. Sensitivity, specificity, area under the curve (AUC), and cut-off score were assessed. Exploratory factor analysis (EFA) was performed using principal component analysis for the initial estimate. This was followed by maximum likelihood estimation with direct oblimin rotation. EFA investigated two versions of the PSQI: one with all seven components and the other with five components (without the PSQI components of medicine use and daytime dysfunction). Confirmatory factor analysis (CFA) was performed using maximum likelihood. A value of up to 0.05 indicated good fit for both the root mean square residual (RMR) and the root mean square error of approximation (RMSEA). A value of at least 0.90 indicated good fit for both the goodness of fit index (GFI) and the adjusted goodness of fit index (AGFI) [12]. A lower value of the expected cross-validation index (ECVI) indicates better fit; it was employed as a relative measure of fit. 
A comparative fit index (CFI) of no less than 0.95, and χ2/df ratio of less than 3 indicated an acceptable fit between a model and the data [13]." ]
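The internal-consistency statistic described above (Cronbach's alpha) can be sketched directly from its definition, alpha = k/(k−1) × (1 − Σ item variances / total-score variance). This is a generic implementation on toy data, not the SPSS routine used in the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# toy check: three perfectly parallel items yield alpha = 1
x = np.arange(10.0)
print(round(cronbach_alpha(np.column_stack([x, x, x])), 3))  # → 1.0
```

Applied to the seven PSQI component scores, this computation would reproduce the alpha of 0.59 reported in the Results.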
[ null ]
[ "Introduction", "Material and methods", "Statistical analysis", "Results", "Discussion" ]
[ "Sleep disorders are becoming endemic in both developed and developing societies [1–4]. More than half of the population across the globe is affected by some type of sleep disturbance, and about one fifth of adults have a chronic sleep disorder [1–5]. Sleep problems are implicated in poor health conditions characterized by impaired social relationships, neurologic and/or psychiatric conditions, drowsy driving, risk-taking behavior, occupational accidents, and heightened risk of cardiovascular events [3–5]. The majority of Ethiopian university students are sleep-disturbed, which adversely affects their psychological health [2, 3].\nInsomnia and its subjective symptoms, e.g. difficulty initiating sleep, short sleep duration and poor sleep quality, are the main dimensions of poor sleep in Ethiopia [2, 3]. Ethiopia is a developing country in Africa with few trained sleep health professionals. There is no indigenously developed questionnaire tool for the assessment of sleep quality, and no work has been done to validate known tools that assess sleep quality in Ethiopians. In such circumstances, the availability of a validated questionnaire tool to assess sleep health is necessary. More than 80 languages are spoken in the country, and most Ethiopians have low to high proficiency in spoken Amharic, the national language. However, reading proficiency is more variable because the major language groups use different scripts.\nThe Pittsburgh Sleep Quality Index (PSQI) is the most widely used instrument in the diagnosis of sleep disorders, including insomnia, in different populations [5–9]. The questionnaire has twenty-four items, of which nineteen self-reported items are added non-linearly to generate seven components. The scores of these components are pooled linearly to get the global PSQI score, which is a measure of sleep health for the 1-month period immediately preceding the time of administration. 
The tool is easy to understand, patient compliant and requires about 5 min to complete. The validity of the PSQI is well established in various clinical and non-clinical populations and in people of different regions and ethnicities [5, 6, 10]. However, few studies have investigated the psychometric properties of the PSQI in African populations, and it has never been validated in Ethiopian populations [5, 7]. The present study therefore sought to validate the PSQI in a sample of community dwelling Ethiopian adults.", "Households were selected by the simple random sampling (SRS) method across Mizan-Aman town, Bench Maji Zone, Southwest Ethiopia. Only one adult member (chosen randomly) from every selected house was interviewed. Of an initial 550 adults who were screened and found eligible, 311 were administered the survey and completed it in full. Ethiopia is known for the cultivation of Chat (a psycho-stimulant) and coffee, and the consumption of chat and/or coffee is highly prevalent in the Ethiopian community. Chat has been reported to be associated with poor sleep [2, 3]. The previous African study reporting validation of the PSQI on college students in Nigeria did not report on Chat habits [7]. Moreover, that study involved only college students [7], whereas our study assessed the validity of the PSQI in community dwelling adults. The mean age and body mass index (BMI) of the participants were 25.5 ± 6.0 years and 22.1 ± 2.3 kg/m2, respectively. Exclusion criteria consisted of self-reported problems with memory. A detailed explanation regarding the purpose and procedures of the study was given to the volunteers. Although Amharic is the most widely spoken language, reading proficiency in it is limited; therefore, the original English version of the PSQI, administered by an instructor, was employed. A semi-structured demographic tool and the PSQI were employed. 
All the participants were interviewed by an experienced sleep researcher, blinded to the PSQI score, for the presence of insomnia according to the International Classification of Sleep Disorders, revised criteria (ICSD-R). These criteria included: (i) almost nightly insufficient amount of sleep, (ii) not feeling rested after usual sleep, (iii) mild to severe impairment of social or occupational functioning, and (iv) complaints of restlessness, irritability, anxiety, daytime fatigue, and tiredness. The subjects were screened as insomniacs if they had either condition (i) or (ii) and at least mild complaints related to both (iii) and (iv) [7]. The PSQI measures sleep quality for the one-month period just preceding the interview; however, it has been found to be a valid and reliable measure of insomnia in some populations [5, 7–9].\nStatistical analysis\nThe statistical package SPSS version 16.0 (SPSS Inc., Chicago, USA) was used. The PSQI is composed of nineteen self-reported items. The scores of these individual items are added non-linearly to get seven component scores. These components are measured variables and should not be confused with component/factor term (a latent variable) used in factor analysis [6, 11]. The Cronbach alpha test was used to assess internal consistency. Internal homogeneity was tested by correlation analysis between PSQI components and the global scores. Discriminative validity was assessed by test of difference; Mann Whitney for the structured categorical variables (the PSQI components) and t-test for the global PSQI score. The diagnostic validation was performed by receiver operating curve (ROC) analysis. The screening by the sleep expert based on clinical interview served as the gold standard and the global PSQI score was the test variable. Sensitivity, specificity, area under the curve (AUC), and cut off score were assessed. Exploratory factor analysis (EFA) was performed using principal component analysis for initial estimate. 
This was followed by maximum likelihood estimation with direct oblimin rotation. EFA investigated two types of the PSQI i.e. one with all seven components and other with five components (sans the PSQI components of medicine use and daytime dysfunction). Confirmatory factor analysis (CFA) was performed using maximum likelihood. A value of up to 0.05 indicated good fit for both root mean square residual (RMR) as well as root mean square error of approximation (RMSEA). A value of more than/equal to 0.90 indicated good fit for both goodness of fit index (GFI) as well as adjusted goodness of fit index (AGFI) [12]. Lesser value of expected cross-validation index (ECVI) indicates better fit- employed as a relative measure of fit. A comparative fit index (CFI) of no less than 0.95, and χ2/df ratio of less than 3 indicated an acceptable fit between a model and the data [13].", "The statistical package SPSS version 16.0 (SPSS Inc., Chicago, USA) was used. The PSQI is composed of nineteen self-reported items. The scores of these individual items are added non-linearly to get seven component scores. These components are measured variables and should not be confused with component/factor term (a latent variable) used in factor analysis [6, 11]. The Cronbach alpha test was used to assess internal consistency. Internal homogeneity was tested by correlation analysis between PSQI components and the global scores. Discriminative validity was assessed by test of difference; Mann Whitney for the structured categorical variables (the PSQI components) and t-test for the global PSQI score. The diagnostic validation was performed by receiver operating curve (ROC) analysis. The screening by the sleep expert based on clinical interview served as the gold standard and the global PSQI score was the test variable. 
Sensitivity, specificity, area under the curve (AUC), and cut-off score were assessed. Exploratory factor analysis (EFA) was performed using principal component analysis for the initial estimate. This was followed by maximum likelihood estimation with direct oblimin rotation. EFA investigated two versions of the PSQI: one with all seven components and the other with five components (without the PSQI components of medicine use and daytime dysfunction). Confirmatory factor analysis (CFA) was performed using maximum likelihood. A value of up to 0.05 indicated good fit for both the root mean square residual (RMR) and the root mean square error of approximation (RMSEA). A value of at least 0.90 indicated good fit for both the goodness of fit index (GFI) and the adjusted goodness of fit index (AGFI) [12]. A lower value of the expected cross-validation index (ECVI) indicates better fit; it was employed as a relative measure of fit. A comparative fit index (CFI) of no less than 0.95, and χ2/df ratio of less than 3 indicated an acceptable fit between a model and the data [13].", "The socio-demographics of the community dwelling Ethiopian adults participating in the study are given in Table 1. Table 2 shows the item analysis (i.e. component analysis in this case) of the PSQI in the study population. The mean global PSQI score was 7.0. 
Majority of the participants reported habit of tea/coffee (99.0%), alcohol intake (74%) and Chat chewing (52.1%) (Table 1).

Table 1. Socio-demographics of community dwelling Ethiopian adults (mean ± SD or n (%))

  Age (yr): 25.45 ± 5.99; BMI (kg/m2): 22.07 ± 2.30
  Gender: Male 276 (88.7%), Female 35 (11.3%)
  Ethnicity: Bench 87 (28%), Kaffa 75 (24.1%), Oromo 38 (12.2%), Amhara 40 (12.9%), Tigre 7 (2.3%), Others 64 (20.6%)
  Education: Illiterate 1 (0.3%), Can read and write 99 (31.8%), Primary (1–8) 21 (6.8%), Secondary (9–12) 76 (24.4%), College/University 114 (36.7%)
  Occupation: Farmer 36 (11.6%), Government employee 34 (10.9%), Merchant 17 (5.5%), Housewife 1 (0.3%), Others 223 (71.7%)
  Religion: Orthodox Christian 162 (52.1%), Protestant Christian 101 (32.5%), Islam 44 (14.1%), Others 4 (1.3%)
  Monthly family income (Birr): Very low (<445) 15 (4.8%), Low (446–1200) 186 (59.8%), Average (1201–2500) 87 (28.0%), Above average (2501–3500) 16 (5.1%), High (>3500) 7 (2.3%)
  Parents: Single 210 (67.5%), Married 98 (31.5%), Divorced 3 (1.0%)
  Sleep: Global PSQI score 6.96 ± 3.34; ICSD-R classification: insomniac 206 (66.2%), normal 105 (33.8%)
  Substance use/habits: Chat user 162 (52.1%) vs non-user 149 (47.9%); Alcoholic 230 (74%) vs non-alcoholic 81 (26%); Smoker 79 (25.4%) vs non-smoker 232 (74.6%); Tea/coffee consumer 308 (99.0%) vs non-consumer 3 (1.0%); Beverage consumer 129 (41.5%) vs non-consumer 182 (58.5%)

BMI: body mass index; PSQI: Pittsburgh Sleep Quality Index; ICSD-R: International Classification of Sleep Disorders, revised criteria

Table 2. The distribution of the Pittsburgh Sleep Quality Index (PSQI) scores in community dwelling Ethiopian adults (frequency, %)

  Sleep duration: ≥7 h 95 (30.5); 6–7 h 53 (17.0); 5–6 h 28 (9.0); <5 h 135 (43.4)
  Sleep disturbances: 0: 47 (15.1); 1: 261 (83.9); 2: 3 (1.0); 3: 0 (0)
  Sleep latency: 0: 36 (11.6); 1: 113 (36.3); 2: 124 (39.9); 3: 38 (12.2)
  Daytime dysfunction: 0: 293 (94.2); 1: 14 (4.5); 2: 4 (1.3); 3: 0 (0)
  Sleep efficiency: >85% 105 (33.8); 75–84% 24 (7.7); 65–74% 27 (8.7); <65% 155 (49.8)
  Sleep quality: Very good 97 (31.2); Fairly good 126 (40.5); Fairly bad 54 (17.4); Very bad 34 (10.9)
  Sleep medication: Not during the past month 306 (98.4); Less than once a week 3 (1.0); Once or twice a week 2 (0.6); Three or more times a week 0 (0)

Ceiling or floor effects were considered to be present if more than 15% of respondents achieved the highest or lowest score, respectively [14, 15]. Overall, the global PSQI score did not have floor and ceiling effects; 5.1% of Ethiopian adults reported a minimum score of zero, and none reported a maximum score of 21. However, individual components of the tool did show floor and ceiling effects. All the PSQI components except for sleep latency showed floor effect i.e. more than 15% of respondents achieved the lowest score [14, 15]. Nevertheless, ceiling effect was observed only for components of sleep duration and sleep efficiency i.e. more than 15% of respondents achieved the highest score [14, 15].\nThe internal consistency test of the PSQI scores showed a Cronbach’s alpha of 0.59, a value suggesting moderate consistency. Cronbach’s alpha increased by 0.03 (from 0.59 to 0.62) on removing the PSQI components of medicine use and daytime dysfunction. The internal homogeneity as indicated by Spearman’s correlation coefficient (r) between component scores and the global PSQI score was 0.15–0.81. All the correlation coefficients were significant (p < 0.001) (Table 3). 
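The 15% floor/ceiling criterion used above [14, 15] is simple to express in code. A minimal sketch, using the sleep-medication component counts from Table 2 (306 of 311 respondents score 0) as the worked example:

```python
def floor_ceiling(scores, min_score, max_score, threshold=0.15):
    """Flag floor/ceiling effects: more than `threshold` (15%) of
    respondents at the lowest or highest possible score."""
    n = len(scores)
    at_floor = sum(s == min_score for s in scores) / n
    at_ceiling = sum(s == max_score for s in scores) / n
    return at_floor > threshold, at_ceiling > threshold

# sleep-medication component, scored 0-3 (counts from Table 2)
meds = [0] * 306 + [1] * 3 + [2] * 2
print(floor_ceiling(meds, 0, 3))  # → (True, False)
```

The output reproduces the paper's observation: the medication component shows a pronounced floor effect (98.4% at the minimum) and no ceiling effect.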
The groups identified as normal sleepers and insomniacs based on clinical interview differed across the global PSQI score and all the component scores except those for medicine use and daytime dysfunction (Table 4). The ROC curve is shown in Fig. 1. Table 5 shows the results of the ROC curve analysis with sensitivity and specificity for all the global PSQI scores between 0.5 and 16. The sensitivity and specificity of the PSQI at the cut-off score of 5.5 were 82 and 56.2%, respectively.

Table 3. Internal consistency and homogeneity of the Pittsburgh Sleep Quality Index (PSQI) scores in community dwelling Ethiopian adults

  Component of the PSQI    Component-to-global PSQI score correlation   Cronbach's alpha if component deleted
  Sleep quality            .50                                          .58
  Sleep latency            .56                                          .52
  Sleep duration           .81                                          .43
  Sleep efficiency         .81                                          .45
  Sleep disturbances       .34                                          .58
  Sleep medication         .18                                          .60
  Daytime dysfunction      .15                                          .60

Table 4. Discriminative validity: comparison of the Pittsburgh Sleep Quality Index (PSQI) scores between normal sleepers and insomniacs as determined by clinical interview in community dwelling Ethiopian adults

  Component of the PSQI    Normal sleepers (mean rank)   Primary insomniacs (mean rank)   p-value
  Sleep quality            90.88                         189.19                           <0.01
  Sleep latency            94.47                         187.36                           <0.01
  Sleep duration           131.25                        168.61                           <0.01
  Sleep efficiency         131.11                        168.68                           <0.01
  Sleep disturbances       130.65                        168.92                           <0.01
  Sleep medication         155.00                        156.51                           0.52
  Daytime dysfunction      154.40                        156.82                           0.58
  Global PSQI score (a)    4.70 ± 3.46                   8.11 ± 2.61                      <0.01

(a) Mean ± SD; independent t-test was used for the global PSQI score and Mann-Whitney U test was applied for component scores

Fig. 1. Receiver operator curves: (A) no discrimination (AUC = 0.5), (B) experimental test (AUC = 0.78, p < 0.001), and (C) perfect test (AUC = 1.0) in community dwelling Ethiopian adults

Table 5. Sensitivity and specificity of the Pittsburgh Sleep Quality Index at each cut-off score in community dwelling Ethiopian adults

  Cut-off score   Sensitivity   Specificity
  0.5             0.99          0.14
  1.5             0.99          0.29
  2.5             0.97          0.35
  3.5             0.95          0.40
  4.5             0.91          0.46
  5.5             0.82          0.56
  6.5             0.73          0.65
  7.5             0.61          0.74
  8.5             0.53          0.83
  9.5             0.41          0.93
  10.5            0.12          0.96
  11.5            0.05          0.99
  12.5            0.02          0.99
  14              0.01          1.00
  16              0             1.00

Tables 6, 7 and 8 show the results of the factor analysis. The results of the Kaiser-Meyer-Olkin test of sampling adequacy, Bartlett's test of sphericity, anti-image matrix and determinant show that the sample met the conditions for factor analysis (Table 6) [11, 16]. Kaiser's criteria (Eigenvalue > 1), Scree plot and parallel analysis identified 3-factor models, while the cumulative variance rule (>40%) found a 2-factor model for the PSQI with seven components. 
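The sensitivity/specificity pairs of the kind listed in Table 5 are straightforward to recompute when raw scores and gold-standard labels are available. A minimal sketch; the scores and labels below are purely hypothetical toy data, not the study's raw data:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'score > cutoff' against
    a binary gold standard (1 = insomnia by clinical interview)."""
    tp = sum(s > cutoff and y == 1 for s, y in zip(scores, labels))
    fn = sum(s <= cutoff and y == 1 for s, y in zip(scores, labels))
    tn = sum(s <= cutoff and y == 0 for s, y in zip(scores, labels))
    fp = sum(s > cutoff and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical global PSQI scores and interview labels
scores = [2, 3, 4, 6, 7, 8, 9, 10]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
print(sens_spec(scores, labels, 5.5))  # → (1.0, 0.75)
```

Sweeping the cutoff over the half-integer grid 0.5 to 16 and plotting (1 − specificity, sensitivity) yields the ROC curve of Fig. 1.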
Kaiser’s criteria (Eigenvalue > 1), Scree plot and parallel analysis identified 2-factor model, while cumulative variance rule (>40%) found 1-factor model for the PSQI with five components (without the PSQI components of medicine use and daytime dysfunction) (Table 6).

Table 6. Summary of the sample size adequacy measures and exploratory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adults

  Measure                                    PSQI (seven components)   PSQI (five components)
  Kaiser-Meyer-Olkin test of sampling adequacy   0.51                  0.52
  Bartlett's test of sphericity                  <0.001                <0.001
  Anti-image matrix                              0.39–0.67             0.48–0.64
  Determinant                                    0.08                  0.09
  Number of factors
    Kaiser's criteria (Eigenvalue > 1)           3                     2
    Cumulative variance rule (>40%)              2                     1
    Scree plot                                   3                     2
    Parallel analysis                            3                     2

PSQI: Pittsburgh Sleep Quality Index

Table 7. Factor loadings in exploratory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adults

  Component of the PSQI    PSQI (seven components)   PSQI (five components)
                           Factor 1  Factor 2  Factor 3   Factor 1  Factor 2
  Sleep quality            .88       .05       .08        .88       .05
  Sleep latency            .87       −.13      .18        .87       −.13
  Sleep duration           .10       −.96      .01        .10       −.97
  Sleep efficiency         .09       −.97      .06        .09       −.97
  Sleep disturbances       .62       −.13      .08        .62       −.12
  Sleep medication         .16       −.03      .78
  Daytime dysfunction      .06       −.03      .81

Principal component analysis with direct oblimin rotation (Kaiser Normalization) was employed. 
PSQI: Pittsburgh Sleep Quality Index

Table 8. Summary of the confirmatory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adults

  Model     GFI    AGFI   CFI    RMSEA   RMR    χ2       df   p       χ2/df   ECVI
  Model-A   0.79   0.58   0.32   0.34    0.29   528.06   14   <0.01   37.72   1.79
  Model-B   0.98   0.95   0.98   0.06    0.04   22.26    10   0.01    2.23    0.19
  Model-C   0.80   0.57   0.35   0.35    0.29   508.91   13   <0.01   39.15   1.74
  Model-D   0.74   0.23   0.33   0.56    0.40   490.30   5    <0.01   98.06   1.65
  Model-E   0.98   0.91   0.98   0.11    0.05   15.09    3    <0.01   5.03    10.13

Model-A: 1-Factor model of the PSQI with all the seven components; Model-B: 1-Factor model of the PSQI with all the seven components and incorporation of modification index (correlations between error terms); Model-C: 2-Factor model of the PSQI (Factor-1 comprised of SLPQUAL, LATEN, DURAT, HSE, DISTB; Factor-2 comprised of MEDS, DAYDYS); Model-D: 1-Factor model of the PSQI with only five components (without MEDS and DAYDYS); Model-E: 1-Factor model of the PSQI with only five components (without MEDS and DAYDYS) with incorporation of modification index (correlations between error terms)

GFI: goodness of fit index; AGFI: adjusted goodness of fit index; CFI: comparative fit index; RMSEA: root mean square error of approximation; RMR: root mean square residual; ECVI: expected cross-validation index

SLPQUAL: PSQI component of sleep quality; LATEN: sleep latency; DURAT: sleep duration; HSE: sleep efficiency; DISTB: sleep disturbances; MEDS: sleep medication; DAYDYS: daytime dysfunction 
The CFA was run on five models of the PSQI (Table 8); Model-A: 1-Factor model of the PSQI with all the seven components; Model-B: 1-Factor model of the PSQI with all the seven components and incorporation of modification index (correlations between error terms); Model-C: 2-Factor model of the PSQI (Factor-1 comprised of the PSQI components for sleep quality, sleep latency, sleep duration, sleep efficiency and sleep disturbances; Factor-2 comprised of the PSQI components for sleep medicine and daytime dysfunction); Model-D: 1-Factor model of the PSQI with only five components (without the PSQI components for sleep medicine and daytime dysfunction); Model-E: 1-Factor model of the PSQI with only five components (without the PSQI 
components for sleep medicine and daytime dysfunction) with incorporation of modification index (correlations between error terms). None of the models achieved absolute fit to the data, i.e., none yielded a non-significant χ2 p-value (Table 8). Three models performed poorly: their RMR and RMSEA were higher than the cut-off values, while their GFI, AGFI and CFI were lower than the cut-off values. Model-B performed best, with the highest values for GFI, AGFI and CFI and the lowest values for RMSEA, RMR and χ2/df.", "This is the first study to examine the psychometric and diagnostic validity of a sleep questionnaire tool in any segment of the Ethiopian population. In this study, the PSQI was validated in community dwelling Ethiopian adults using the ICSD-R criteria for screening of insomnia. The individual components of the PSQI had floor and ceiling effects, but the global PSQI score did not have either of these effects (Table 2). Therefore, item analysis does support the validity of the overall score of the scale [14]. One of the few studies that reported floor and ceiling effects found floor effects for all the PSQI components except sleep disturbances (Table 2) in patients with temporomandibular disorders [17].\nThe internal consistency as assessed in this population of community dwelling Ethiopian adults was moderately adequate. Although the value of Cronbach’s alpha was low in this study, it may be noted that the tool has never been reported to show a value of this psychometric index within the ideal range, i.e., 0.9–0.95. Previous studies have reported Cronbach’s alpha values between 0.58 and 0.83 [5, 6, 17]. However, Rener-Sitar et al. 2014 reported a Cronbach alpha of .58 in patients with temporomandibular disorders without complaints of pain [17], which is almost identical to the one found in our study (Table 2). The component-to-global PSQI score correlation was moderate to strong except for the PSQI components of medicine use and daytime dysfunction (Table 2). 
A recent systematic review concluded that the sleep medicine component has been shown to contribute poorly to construct validity [5]. Additionally, lack of awareness about sleep health in developing societies is common [18]. The lesser awareness might lead to contrived low sensitivity of this component in such societies [4, 10].\nThe significantly higher values of the global PSQI and component scores (except daytime dysfunction and sleep medicine) among insomniacs establish the diagnostic known-group or discriminative validity of the tool in this population of community dwelling Ethiopian adults. Notably, with regard to discriminative validity a striking similarity was observed with the previous report in African population from Nigeria. The validation of the tool in the Nigerian students found that the global PSQI and component scores (except daytime dysfunction and sleep medicine) were significantly higher among insomniacs [7]. The relatively less contribution of the PSQI component of daytime dysfunction and sleep medicine to internal consistency, homogeneity, and discriminative validity in Afro-Asian populations [4, 7, 10] is interesting and need to be explored further. This may help further increase the utility of the tool in these populations and understanding of sleep health construct in these societies.\nThe diagnostic validity of the scale against ICSD-R criteria for insomnia in this sample of community dwelling Ethiopian adults was in moderate to adequate range. The AUC of 0.78 (Fig. 1) found in our study was slightly less than the recommended limit value of 0.80 for good diagnostic use [19]. However, it is higher than those reported by other studies with different gold standard and/or concurrent measure in diverse samples [5]. The value was almost similar to that reported in patients with lower back pain; AUC was 0.79 (CI 0.723–0.819; p < 0.0001), for identifying insomnia. The concurrent measure employed in that study was sleep diary [8]. 
However, the AUC in our study was higher than that reported in Nigerian students (0.685), although that study employed a concurrent validity measure similar to ours [7]. The cut-off score of 5.5 (Table 5 and Fig. 1) for screening insomnia in our sample of community dwelling Ethiopian adults was higher than that reported in Nigerian students and lower than that estimated in patients with post-acute brain injury [7, 9]. The global PSQI score cannot take decimal values [6]; therefore, the practical cut-off score in our study was 6, a value similar to that reported for screening insomnia in patients with lower back pain [8]. The sensitivity (82%) and specificity (56.2%) of the PSQI at this cut-off score were comparable to previous studies validating the tool for screening of primary insomnia [7–9]. The results of the EFA were inconclusive, but the outcome of the CFA favored unidimensionality of the PSQI in Ethiopian community adults (Tables 6, 7 and 8). This is similar to some previous reports [5, 11, 17], though the heterogeneity of the factor structure of the PSQI remains an area of extensive research [5, 7, 11, 16].\nThe limitations of this study include a modest sample size and the non-application of polysomnography. The gender ratio of the sample was not representative of the general Ethiopian population; therefore, the results may be biased toward males. Future work should explore this aspect. However, the merits of the study include a concurrent measure of clinical screening by sleep researchers based on ICSD-R, and validation in a population that has a high prevalence of sleep problems (possibly related to Chat addiction) and no access to polysomnography, actigraphy or trained sleep health professionals. The PSQI was found to be adequate for screening for insomnia among this sample of community dwelling Ethiopian adults." ]
[ "introduction", "materials|methods", null, "results", "discussion" ]
[ "PSQI", "Catha edulis", "Ethiopia", "Insomnia", "Pittsburgh sleep quality index", "Sleep", "Substance abuse" ]
Introduction: Sleep disorders are becoming endemic in both developed and developing societies [1–4]. More than half of the population across the globe is affected by some type of sleep disturbance, and about one fifth of adults have a chronic sleep disorder [1–5]. Sleep problems are implicated in poor health outcomes characterized by impaired social relationships, neurologic and/or psychiatric conditions, drowsy driving, risk-taking behavior, occupational accidents, and heightened risk of cardiovascular events [3–5]. A majority of Ethiopian university students are sleep disturbed, which adversely affects their psychological health [2, 3]. Insomnia and its subjective symptoms, e.g., difficulty initiating sleep, short sleep duration and poor sleep quality, are the main dimensions of poor sleep in Ethiopia [2, 3]. Ethiopia is a developing country in Africa with few trained sleep health professionals. There is no indigenously developed questionnaire tool for the assessment of sleep quality, and no work has been done to validate known sleep quality tools in Ethiopians. In such circumstances, a validated questionnaire tool to assess sleep health is necessary. There are more than 80 different languages in the country, with most Ethiopians having low to high proficiency in spoken Amharic, the national language. However, reading proficiency is more variable because of differences in script among the major language groups. The Pittsburgh Sleep Quality Index (PSQI) is the most widely used instrument in the diagnosis of sleep disorders, including insomnia, in different populations [5–9]. The questionnaire has twenty-four items, of which nineteen self-reported items are added non-linearly to generate seven components. 
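A minimal sketch of how these component scores combine into the global score (the linear pooling into a 0–21 total, described next). The component names are illustrative, the raw item-to-component scoring rules of the original PSQI are not reproduced, and the cut-off of 6 is the practical screening value reported later in this article:

```python
# Hypothetical sketch of PSQI score pooling; not the official scoring code.

COMPONENTS = [
    "sleep_quality", "sleep_latency", "sleep_duration", "sleep_efficiency",
    "sleep_disturbances", "sleep_medication", "daytime_dysfunction",
]

def global_psqi(component_scores):
    """Linearly pool the seven 0-3 component scores into a 0-21 global score."""
    for name in COMPONENTS:
        if not 0 <= component_scores[name] <= 3:
            raise ValueError(f"{name} must be in 0..3")
    return sum(component_scores[name] for name in COMPONENTS)

def screen_insomnia(global_score, cutoff=6):
    """Flag probable insomnia at the practical cut-off of 6 used in this study."""
    return global_score >= cutoff

scores = {name: 1 for name in COMPONENTS}    # every component scored 1
print(global_psqi(scores))                   # 7
print(screen_insomnia(global_psqi(scores)))  # True
```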
The scores of these components are pooled linearly to get the global PSQI score, a measure of sleep health for the month immediately preceding the time of administration. The tool is easy to understand, patient compliant and requires about 5 min to complete. The validity of the PSQI is well established in various clinical and non-clinical populations and in people of different regions and ethnicities [5, 6, 10]. However, few studies have investigated the psychometric properties of the PSQI in African populations, and it has never been validated in Ethiopian populations [5, 7]. The present study therefore sought to validate the PSQI in a sample of community dwelling Ethiopian adults. Material and methods: Households were selected by the simple random sampling (SRS) method across Mizan-Aman town, Bench Maji Zone, Southwest Ethiopia. Only one adult member (chosen randomly) from every selected house was interviewed. Of an initial 550 adults who were screened and found eligible, 311 were administered the survey and fully completed it. Ethiopia is known for the cultivation of Chat (a psychostimulant) and coffee, and consumption of Chat and/or coffee is highly prevalent in the Ethiopian community. Chat has been reported to be associated with poor sleep [2, 3]. The previous African study reporting validation of the PSQI, on college students in Nigeria, did not report Chat habits [7]. Moreover, that study involved only college students [7], whereas our study assessed the validity of the PSQI in community dwelling adults. The mean age and body mass index (BMI) of the participants were 25.5 ± 6.0 years and 22.1 ± 2.3 kg/m2, respectively. Exclusion criteria consisted of self-reported problems with memory. A detailed explanation of the purpose and procedures of the study was given to the volunteers. Although Amharic is the most widely spoken language, reading proficiency in it is limited. 
Therefore, the original English version of the PSQI was administered by an instructor. A semi-structured tool for demographics and the PSQI were employed. All participants were interviewed by an experienced sleep researcher, blinded to the PSQI score, for the presence of insomnia according to the International Classification of Sleep Disorders, revised criteria (ICSD-R). These criteria included: (i) almost nightly insufficient amount of sleep, (ii) not feeling rested after usual sleep, (iii) mild to severe impairment of social or occupational functioning, and (iv) complaint of restlessness, irritability, anxiety, daytime fatigue, and tiredness. Subjects were screened as insomniacs if they met either condition (i) or (ii) and had at least mild complaints related to both (iii) and (iv) [7]. The PSQI measures sleep quality for the month just preceding the interview, but it has been found to be a valid and reliable measure of insomnia in some populations [5, 7–9]. Statistical analysis: The statistical package SPSS version 16.0 (SPSS Inc., Chicago, USA) was used. The PSQI is composed of nineteen self-reported items. The scores of these individual items are added non-linearly to get seven component scores. These components are measured variables and should not be confused with the component/factor term (a latent variable) used in factor analysis [6, 11]. The Cronbach alpha test was used to assess internal consistency. Internal homogeneity was tested by correlation analysis between PSQI components and the global score. Discriminative validity was assessed by tests of difference: Mann-Whitney for the structured categorical variables (the PSQI components) and t-test for the global PSQI score. The diagnostic validation was performed by receiver operating characteristic (ROC) curve analysis. The screening by the sleep expert based on clinical interview served as the gold standard, and the global PSQI score was the test variable. 
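Among the statistics just listed, Cronbach's alpha can be computed directly from its standard formula; the sketch below uses toy data, not the study's responses:

```python
# Cronbach's alpha from its standard definition:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

def _variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(rows):
    """rows: one list of k item (or component) scores per respondent."""
    k = len(rows[0])
    columns = [[row[j] for row in rows] for j in range(k)]
    totals = [sum(row) for row in rows]
    item_var = sum(_variance(col) for col in columns)
    return k / (k - 1) * (1 - item_var / _variance(totals))

# Toy data: three respondents answering three items perfectly consistently.
print(round(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), 3))  # 1.0
```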
Sensitivity, specificity, area under the curve (AUC), and cut-off score were assessed. Exploratory factor analysis (EFA) was performed using principal component analysis for the initial estimate, followed by maximum likelihood estimation with direct oblimin rotation. EFA investigated two forms of the PSQI: one with all seven components and one with five components (without the PSQI components of medicine use and daytime dysfunction). Confirmatory factor analysis (CFA) was performed using maximum likelihood. A value of up to 0.05 indicated good fit for both the root mean square residual (RMR) and the root mean square error of approximation (RMSEA). A value of at least 0.90 indicated good fit for both the goodness of fit index (GFI) and the adjusted goodness of fit index (AGFI) [12]. A lower value of the expected cross-validation index (ECVI), employed as a relative measure of fit, indicates better fit. A comparative fit index (CFI) of no less than 0.95 and a χ2/df ratio of less than 3 indicated an acceptable fit between a model and the data [13]. 
Results: The socio-demographics of the community dwelling Ethiopian adults participating in the study are given in Table 1. Table 2 shows the item analysis (i.e., component analysis in this case) of the PSQI in the study population. The mean global PSQI score was 7.0. 
The majority of participants reported habits of tea/coffee (99.0%), alcohol intake (74%) and Chat chewing (52.1%) (Table 1).

Table 1 Socio-demographics of community dwelling Ethiopian adults
  Age (yr): 25.45 ± 5.99
  BMI (kg/m2): 22.07 ± 2.30
  Gender: Male 276 (88.7%); Female 35 (11.3%)
  Ethnicity: Bench 87 (28%); Kaffa 75 (24.1%); Oromo 38 (12.2%); Amhara 40 (12.9%); Tigre 7 (2.3%); Others 64 (20.6%)
  Education: Illiterate 1 (0.3%); Can read and write 99 (31.8%); Primary (1–8) 21 (6.8%); Secondary (9–12) 76 (24.4%); College/University 114 (36.7%)
  Occupation: Farmer 36 (11.6%); Government employee 34 (10.9%); Merchant 17 (5.5%); Housewife 1 (0.3%); Others 223 (71.7%)
  Religion: Orthodox Christian 162 (52.1%); Protestant Christian 101 (32.5%); Islam 44 (14.1%); Others 4 (1.3%)
  Monthly family income (Birr): Very low (<445) 15 (4.8%); Low (446–1200) 186 (59.8%); Average (1201–2500) 87 (28.0%); Above average (2501–3500) 16 (5.1%); High (>3500) 7 (2.3%)
  Parents: Single 210 (67.5%); Married 98 (31.5%); Divorced 3 (1.0%)
  Sleep: Global PSQI score 6.96 ± 3.34; ICSD-R classification insomniac/normal 206 (66.2%)/105 (33.8%)
  Substance use/habits: Chat user/non-user 162 (52.1%)/149 (47.9%); Alcoholic/non-alcoholic 230 (74%)/81 (26%); Smoker/non-smoker 79 (25.4%)/232 (74.6%); Tea/coffee consumer/non-consumer 308 (99.0%)/3 (1.0%); Beverage consumer/non-consumer 129 (41.5%)/182 (58.5%)
BMI body mass index; PSQI Pittsburgh Sleep Quality Index; ICSD-R International Classification of Sleep Disorders, revised criteria.

Table 2 The distribution of the Pittsburgh Sleep Quality Index (PSQI) scores in community dwelling Ethiopian adults (frequency, percentage)
  Sleep duration: ≥7 h 95 (30.5%); 6–7 h 53 (17.0%); 5–6 h 28 (9.0%); <5 h 135 (43.4%)
  Sleep disturbances: score 0, 47 (15.1%); 1, 261 (83.9%); 2, 3 (1.0%); 3, 0 (0%)
  Sleep latency: score 0, 36 (11.6%); 1, 113 (36.3%); 2, 124 (39.9%); 3, 38 (12.2%)
  Daytime dysfunction: score 0, 293 (94.2%); 1, 14 (4.5%); 2, 4 (1.3%); 3, 0 (0%)
  Sleep efficiency: >85% 105 (33.8%); 75–84% 24 (7.7%); 65–74% 27 (8.7%); <65% 155 (49.8%)
  Sleep quality: Very good 97 (31.2%); Fairly good 126 (40.5%); Fairly bad 54 (17.4%); Very bad 34 (10.9%)
  Sleep medication: Not during the past month 306 (98.4%); Less than once a week 3 (1.0%); Once or twice a week 2 (0.6%); Three or more times a week 0 (0%)

Ceiling or floor effects were considered present if more than 15% of respondents achieved the highest or lowest score, respectively [14, 15]. Overall, the global PSQI score had neither a floor nor a ceiling effect: 5.1% of Ethiopian adults reported the minimum score of zero, and none reported the maximum score of 21. However, individual components of the tool did show floor and ceiling effects. All the PSQI components except sleep latency showed a floor effect, i.e., more than 15% of respondents achieved the lowest score [14, 15]. A ceiling effect was observed only for the components of sleep duration and sleep efficiency, i.e., more than 15% of respondents achieved the highest score [14, 15]. The internal consistency test of the PSQI scores showed a Cronbach’s alpha of 0.59, a value suggesting moderate consistency. Cronbach’s alpha increased by 0.03 (from 0.59 to 0.62) on removing the PSQI components of medicine use and daytime dysfunction. The internal homogeneity, as indicated by Spearman’s correlation coefficients (r) between component scores and the global PSQI score, ranged from 0.15 to 0.81. All the correlation coefficients were significant (p < 0.001) (Table 3). 
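The 15% floor/ceiling rule applied above can be sketched as a small check; the scores below are toy values, not the study's data:

```python
# Flag floor/ceiling effects: an effect is present when more than 15% of
# respondents obtain the minimum (floor) or maximum (ceiling) possible score.

def floor_ceiling(scores, lowest, highest, threshold=0.15):
    n = len(scores)
    floor = scores.count(lowest) / n > threshold
    ceiling = scores.count(highest) / n > threshold
    return floor, ceiling

component_scores = [0, 0, 0, 1, 2, 2, 3, 3, 1, 0]  # a 0-3 PSQI component
print(floor_ceiling(component_scores, 0, 3))  # (True, True)
```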
The groups identified as normal sleepers and insomniacs based on clinical interview differed on the global PSQI score and all component scores except the components for medicine use and daytime dysfunction (Table 4). The ROC curve is shown in Fig. 1. Table 5 shows the results of the ROC curve analysis, with sensitivity and specificity for all global PSQI cut-off scores between 0.5 and 16. The sensitivity and specificity of the PSQI at the cut-off score of 5.5 were 82 and 56.2%, respectively.

Table 3 Internal consistency and homogeneity of the Pittsburgh Sleep Quality Index (PSQI) scores in community dwelling Ethiopian adults (component-to-global PSQI score correlation; Cronbach’s alpha if component deleted)
  Sleep quality: .50; .58
  Sleep latency: .56; .52
  Sleep duration: .81; .43
  Sleep efficiency: .81; .45
  Sleep disturbances: .34; .58
  Sleep medication: .18; .60
  Daytime dysfunction: .15; .60

Table 4 Discriminative validity: comparison of the PSQI scores between normal sleepers and insomniacs as determined by clinical interview in community dwelling Ethiopian adults (mean rank, normal sleepers vs primary insomniacs; p-value)
  Sleep quality: 90.88 vs 189.19; p < 0.01
  Sleep latency: 94.47 vs 187.36; p < 0.01
  Sleep duration: 131.25 vs 168.61; p < 0.01
  Sleep efficiency: 131.11 vs 168.68; p < 0.01
  Sleep disturbances: 130.65 vs 168.92; p < 0.01
  Sleep medication: 155.00 vs 156.51; p = 0.52
  Daytime dysfunction: 154.40 vs 156.82; p = 0.58
  Global PSQI score (mean ± SD): 4.70 ± 3.46 vs 8.11 ± 2.61; p < 0.01
The independent t-test was used for the global PSQI score; the Mann-Whitney U test was applied for component scores.

Fig. 1 Receiver operator curves in community dwelling Ethiopian adults: (A) no discrimination (AUC = 0.5), (B) experimental test (AUC = 0.78, p < 0.001), and (C) perfect test (AUC = 1.0).

Table 5 Sensitivity and specificity of the Pittsburgh Sleep Quality Index at each cut-off score in community dwelling Ethiopian adults (cut-off score: sensitivity, specificity)
  0.5: 0.99, 0.14
  1.5: 0.99, 0.29
  2.5: 0.97, 0.35
  3.5: 0.95, 0.40
  4.5: 0.91, 0.46
  5.5: 0.82, 0.56
  6.5: 0.73, 0.65
  7.5: 0.61, 0.74
  8.5: 0.53, 0.83
  9.5: 0.41, 0.93
  10.5: 0.12, 0.96
  11.5: 0.05, 0.99
  12.5: 0.02, 0.99
  14: 0.01, 1.00
  16: 0, 1.00

Tables 6, 7 and 8 show the results of the factor analysis. The results of the Kaiser-Meyer-Olkin test of sampling adequacy, Bartlett’s test of sphericity, the anti-image matrix and the determinant show that the sample met the conditions for factor analysis (Table 6) [11, 16]. Kaiser’s criterion (eigenvalue > 1), the scree plot and parallel analysis identified 3-factor models, while the cumulative variance rule (>40%) found a 2-factor model for the PSQI with seven components. 
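Returning to Table 5: the sensitivity/specificity sweep and the choice of a cut-off can be sketched as follows, on toy data. The Youden index (sensitivity + specificity − 1) is one common selection criterion; the paper does not state its criterion, so this is an assumption for illustration:

```python
# Sketch of the per-cut-off sensitivity/specificity sweep behind Table 5.

def sens_spec(scores, labels, cutoff):
    """labels: 1 = insomniac per the gold standard, 0 = normal sleeper."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s > cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s <= cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s > cutoff)
    return tp / (tp + fn), tn / (tn + fp)

scores = [2, 3, 5, 6, 7, 9, 10, 4]   # toy global PSQI scores
labels = [0, 0, 0, 1, 1, 1, 1, 1]

# Pick the cut-off maximizing the Youden index over half-integer thresholds.
youden, best_cutoff = max(
    (round(se + sp - 1, 6), c)
    for c in [x + 0.5 for x in range(11)]
    for se, sp in [sens_spec(scores, labels, c)]
)
print(best_cutoff)  # 5.5 on this toy data
```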
Kaiser’s criterion (eigenvalue > 1), the scree plot and parallel analysis identified a 2-factor model, while the cumulative variance rule (>40%) found a 1-factor model for the PSQI with five components (without the PSQI components of medicine use and daytime dysfunction) (Table 6).

Table 6 Summary of the sample size adequacy measures and exploratory factor analysis of the Pittsburgh Sleep Quality Index (PSQI) in community dwelling Ethiopian adults (PSQI with seven components; PSQI with five components)
  Kaiser-Meyer-Olkin test of sampling adequacy: 0.51; 0.52
  Bartlett’s test of sphericity: <0.001; <0.001
  Anti-image matrix: 0.39–0.67; 0.48–0.64
  Determinant: 0.08; 0.09
  Number of factors by Kaiser’s criterion (eigenvalue > 1): 3; 2
  Number of factors by cumulative variance rule (>40%): 2; 1
  Number of factors by scree plot: 3; 2
  Number of factors by parallel analysis: 3; 2

Table 7 Factor loadings in exploratory factor analysis of the Pittsburgh Sleep Quality Index (PSQI) in community dwelling Ethiopian adults (seven-component PSQI: Factors 1–3; five-component PSQI: Factors 1–2)
  Sleep quality: .88, .05, .08; .88, .05
  Sleep latency: .87, −.13, .18; .87, −.13
  Sleep duration: .10, −.96, .01; .10, −.97
  Sleep efficiency: .09, −.97, .06; .09, −.97
  Sleep disturbances: .62, −.13, .08; .62, −.12
  Sleep medication: .16, −.03, .78; —
  Daytime dysfunction: .06, −.03, .81; —
Principal component analysis with direct oblimin rotation (Kaiser normalization) was employed. 
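The Kaiser criterion used in Table 6 amounts to counting eigenvalues of the correlation matrix that exceed 1; a toy sketch (numpy is assumed to be available, and the matrix below is illustrative, not the study's):

```python
# Kaiser criterion: retain as many factors as there are eigenvalues > 1
# in the correlation matrix of the observed variables.
import numpy as np

corr = np.array([
    [1.0, 0.6, 0.1],
    [0.6, 1.0, 0.1],
    [0.1, 0.1, 1.0],
])
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending
n_factors = int((eigvals > 1.0).sum())     # Kaiser criterion
print(n_factors)  # 1
```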
Table 8 Summary of the confirmatory factor analysis of the Pittsburgh Sleep Quality Index in community dwelling Ethiopian adults (GFI, AGFI, CFI, RMSEA, RMR, χ2, df, p, χ2/df, ECVI)
  Model-A: 0.79, 0.58, 0.32, 0.34, 0.29, 528.06, 14, <0.01, 37.72, 1.79
  Model-B: 0.98, 0.95, 0.98, 0.06, 0.04, 22.26, 10, 0.01, 2.23, 0.19
  Model-C: 0.80, 0.57, 0.35, 0.35, 0.29, 508.91, 13, <0.01, 39.15, 1.74
  Model-D: 0.74, 0.23, 0.33, 0.56, 0.40, 490.30, 5, <0.01, 98.06, 1.65
  Model-E: 0.98, 0.91, 0.98, 0.11, 0.05, 15.09, 3, <0.01, 5.03, 10.13
Model-A: 1-factor model of the PSQI with all seven components; Model-B: 1-factor model of the PSQI with all seven components and incorporation of modification index (correlations between error terms); Model-C: 2-factor model of the PSQI (Factor 1: SLPQUAL, LATEN, DURAT, HSE, DISTB; Factor 2: MEDS, DAYDYS); Model-D: 1-factor model of the PSQI with only five components (without MEDS and DAYDYS); Model-E: 1-factor model of the PSQI with only five components (without MEDS and DAYDYS) and incorporation of modification index (correlations between error terms).
GFI goodness of fit index; AGFI adjusted goodness of fit index; CFI comparative fit index; RMSEA root mean square error of approximation; RMR root mean square residual; ECVI expected cross-validation index. SLPQUAL: sleep quality; LATEN: sleep latency; DURAT: sleep duration; HSE: sleep efficiency; DISTB: sleep disturbances; MEDS: sleep medication; DAYDYS: daytime dysfunction (all PSQI components). 
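Two of the Model-B entries in Table 8 can be spot-checked from standard definitions: χ2/df, and RMSEA computed as sqrt(max(χ2 − df, 0)/(df·(N − 1))). The RMSEA formula is the common population-discrepancy estimate and is an assumption here, since the paper does not state its software's exact formula:

```python
# Recompute two Model-B fit statistics from the values reported in Table 8
# (chi-square = 22.26, df = 10) and the sample size n = 311 from the Methods.
import math

chi2, df, n = 22.26, 10, 311

chi2_over_df = chi2 / df
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(chi2_over_df, 2), round(rmsea, 2))  # 2.23 0.06 (matches Table 8)
```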
The CFA was run on the five models of the PSQI in Table 8: Model-A, a 1-factor model with all seven components; Model-B, a 1-factor model with all seven components and incorporation of modification index (correlations between error terms); Model-C, a 2-factor model (Factor 1 comprising the PSQI components for sleep quality, sleep latency, sleep duration, sleep efficiency and sleep disturbances; Factor 2 comprising the PSQI components for sleep medicine and daytime dysfunction); Model-D, a 1-factor model with only five components (without the PSQI components for sleep medicine and daytime dysfunction); and Model-E, a 1-factor model with only five components (without the PSQI components 
for sleep medicine and daytime dysfunction) with incorporation of modification index (correlations between error terms). None of the models had absolute fit to the data, i.e., a non-significant χ2 p value (Table 8). Three models performed poorly, i.e., RMR and RMSEA were higher than the cut-off values, while GFI, AGFI and CFI were lower than the cut-off values. Model-B performed best, with the highest values for GFI, AGFI and CFI and the lowest values for RMSEA, RMR and χ2/df. Discussion: This is the first study to examine the psychometric and diagnostic validity of a sleep questionnaire tool in any segment of the Ethiopian population. In this study, the PSQI was validated in community dwelling Ethiopian adults using ICSD-R criteria for screening of insomnia. The individual components of the PSQI had floor and ceiling effects, but the global PSQI score had neither (Table 2). Therefore, item analysis supports the validity of the overall score of the scale [14]. One of the few studies that reported floor and ceiling effects found floor effects for all the PSQI components except sleep disturbances (Table 2) in patients with temporomandibular disorders [17]. The internal consistency assessed in this population of community dwelling Ethiopian adults was moderately adequate. Although the value of Cronbach’s alpha was low in this study, it may be noted that the tool has never been reported to show a value of this psychometric index within the ideal range, i.e., 0.9–0.95. Previous studies have reported Cronbach’s alpha values between 0.58 and 0.83 [5, 6, 17]. However, Rener-Sitar et al. 2014 reported a Cronbach’s alpha of .58 in patients with temporomandibular disorders without complaints of pain [17], which is almost identical to the one found in our study (Table 2). The component-global PSQI score correlation was moderate to strong except for the PSQI components of medicine and daytime dysfunction (Table 2). 
A recent systematic review concluded that the sleep medicine component contributes poorly to construct validity [5]. Additionally, lack of awareness about sleep health is common in developing societies [18]. This lower awareness might artificially depress the sensitivity of this component in such societies [4, 10]. The significantly higher values of the global PSQI and component scores (except daytime dysfunction and sleep medicine) among insomniacs establish the diagnostic known-group, or discriminative, validity of the tool in this population of community dwelling Ethiopian adults. Notably, with regard to discriminative validity, a striking similarity was observed with a previous report in an African population from Nigeria. The validation of the tool in Nigerian students found that the global PSQI and component scores (except daytime dysfunction and sleep medicine) were significantly higher among insomniacs [7]. The relatively small contribution of the PSQI components of daytime dysfunction and sleep medicine to internal consistency, homogeneity, and discriminative validity in Afro-Asian populations [4, 7, 10] is interesting and needs to be explored further. This may further increase the utility of the tool in these populations and the understanding of the sleep health construct in these societies. The diagnostic validity of the scale against ICSD-R criteria for insomnia in this sample of community dwelling Ethiopian adults was in the moderate to adequate range. The AUC of 0.78 (Fig. 1) found in our study was slightly below the recommended value of 0.80 for good diagnostic use [19]. However, it is higher than those reported by other studies using different gold standards and/or concurrent measures in diverse samples [5]. The value was almost identical to that reported in patients with lower back pain, where the AUC for identifying insomnia was 0.79 (CI 0.723–0.819; p < 0.0001); the concurrent measure employed in that study was a sleep diary [8]. 
However, the AUC in our study was higher than that reported in Nigerian students (0.685), although that study employed a concurrent validity measure similar to ours [7]. The cut-off score of 5.5 (Table 5 and Fig. 1) for screening insomnia in our sample of community dwelling Ethiopian adults was higher than that reported in Nigerian students and lower than that estimated in patients with post-acute brain injury [7, 9]. The global PSQI score cannot take decimal values [6]; therefore, the practical cut-off score in our study was 6, a value similar to that reported for screening insomnia in patients with lower back pain [8]. The sensitivity (82%) and specificity (56.2%) of the PSQI at this cut-off score were comparable to previous studies validating the tool for screening of primary insomnia [7–9]. The results of the EFA were inconclusive, but the outcome of the CFA favored unidimensionality of the PSQI in Ethiopian community adults (Tables 6, 7 and 8). This is similar to some previous reports [5, 11, 17], though the heterogeneity of the factor structure of the PSQI remains an area of extensive research [5, 7, 11, 16]. The limitations of this study include a modest sample size and the non-application of polysomnography. The gender ratio of the sample was not representative of the general Ethiopian population; therefore, the results may be biased toward males. Future work should explore this aspect. However, the merits of the study include a concurrent measure of clinical screening by sleep researchers based on ICSD-R, and validation in a population that has a high prevalence of sleep problems (possibly related to Chat addiction) and no access to polysomnography, actigraphy or trained sleep health professionals. The PSQI was found to be adequate for screening for insomnia among this sample of community dwelling Ethiopian adults.
Background: The applicability of the Pittsburgh Sleep Quality Index (PSQI) for screening of insomnia has been demonstrated in various populations, but the tool has not been validated in a sample of Ethiopians. Therefore, this study aimed to assess its psychometric properties in community dwelling Ethiopian adults. Methods: Participants (n = 311, age = 25.5 ± 6.0 years, body mass index = 22.1 ± 2.3 kg/m2) from Mizan-Aman town, Southwest Ethiopia completed the PSQI and a semi-structured questionnaire for socio-demographics. A clinical interview for screening of insomnia according to the International Classification of Sleep Disorders was carried out as a concurrent validation measure. Results: Overall, the PSQI scale had neither floor nor ceiling effects. Moderate internal consistency (Cronbach's alpha of 0.59) and sufficient internal homogeneity, as indicated by the correlation coefficients between component scores and the global PSQI score, were found. The PSQI was of good value for screening insomnia, with an optimal cut-off score of 5.5 (sensitivity 82%, specificity 56.2%) and an area under the curve of 0.78 (p < 0.0001). The PSQI had a unidimensional factor structure in the Ethiopian community adults for screening insomnia. Conclusions: The PSQI has good psychometric validity in screening for insomnia among Ethiopian adults.
null
null
5,556
262
5
[ "psqi", "sleep", "component", "index", "components", "factor", "analysis", "model", "component sleep", "score" ]
[ "test", "test" ]
null
null
null
[CONTENT] PSQI | Catha edulis | Ethiopia | Insomnia | Pittsburgh sleep quality index | Sleep | Substance abuse [SUMMARY]
null
[CONTENT] PSQI | Catha edulis | Ethiopia | Insomnia | Pittsburgh sleep quality index | Sleep | Substance abuse [SUMMARY]
null
[CONTENT] PSQI | Catha edulis | Ethiopia | Insomnia | Pittsburgh sleep quality index | Sleep | Substance abuse [SUMMARY]
null
[CONTENT] Adult | Ethiopia | Female | Humans | Independent Living | Male | Middle Aged | Psychometrics | Quality of Life | Reference Values | Reproducibility of Results | Sleep Initiation and Maintenance Disorders | Surveys and Questionnaires | Translating [SUMMARY]
null
[CONTENT] Adult | Ethiopia | Female | Humans | Independent Living | Male | Middle Aged | Psychometrics | Quality of Life | Reference Values | Reproducibility of Results | Sleep Initiation and Maintenance Disorders | Surveys and Questionnaires | Translating [SUMMARY]
null
[CONTENT] Adult | Ethiopia | Female | Humans | Independent Living | Male | Middle Aged | Psychometrics | Quality of Life | Reference Values | Reproducibility of Results | Sleep Initiation and Maintenance Disorders | Surveys and Questionnaires | Translating [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] psqi | sleep | component | index | components | factor | analysis | model | component sleep | score [SUMMARY]
null
[CONTENT] psqi | sleep | component | index | components | factor | analysis | model | component sleep | score [SUMMARY]
null
[CONTENT] psqi | sleep | component | index | components | factor | analysis | model | component sleep | score [SUMMARY]
null
[CONTENT] sleep | health | quality | sleep quality | questionnaire | different | sleep health | poor | psqi | populations [SUMMARY]
null
[CONTENT] component sleep | sleep | psqi | component | model | factor | factor model | factor model psqi | model psqi | quality [SUMMARY]
null
[CONTENT] psqi | sleep | fit | component | components | analysis | factor | index | test | score [SUMMARY]
null
[CONTENT] ||| Ethiopians ||| Ethiopian [SUMMARY]
null
[CONTENT] PSQI ||| 0.59 ||| 5.5 | 82% | 56.2% | 0.78 | 0.0001 ||| PSQI | Ethiopian [SUMMARY]
null
[CONTENT] ||| Ethiopians ||| Ethiopian ||| 311 | 25.5 ± | 6.0 years | 22.1 | 2.3 kg/m2 | Mizan-Aman | Southwest Ethiopia | PSQI ||| the International Classification of Sleep Disorders ||| PSQI ||| 0.59 ||| 5.5 | 82% | 56.2% | 0.78 | 0.0001 ||| PSQI | Ethiopian ||| PSQI | Ethiopians [SUMMARY]
null
Low Self-Perception of Malnutrition in Older Hospitalized Patients.
33239871
Studies focusing on self-perception of nutritional status in older hospitalized patients are lacking. We aimed to examine the self-perception of body weight and nutritional status among older hospitalized patients compared to their actual body weight and nutritional status based on medical assessment.
BACKGROUND
This observational cross-sectional study investigated 197 older participants (mean age 82.2±6.8 years, 61% women) who were consecutively admitted to the geriatric acute care ward. Body weight status and nutritional status were assessed using WHO-BMI classification and Mini Nutritional Assessment-Short Form (MNA-SF), respectively. Self-perceived body weight status and nutritional status were assessed with a standardized questionnaire. A follow-up was performed with a short telephone interview after three months.
MATERIALS AND METHODS
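The WHO-BMI and MNA-SF categorizations used in the methods above can be written out as simple classification rules. A sketch using the cut-offs as stated in the study (the helper names are hypothetical, not from any study code):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m2."""
    return weight_kg / height_m ** 2

def who_bmi_class(b):
    """WHO adult BMI bands as applied in the study."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal weight"
    if b < 30.0:
        return "overweight"
    return "obesity"

def mna_sf_class(points):
    """MNA-SF screening bands (0-14 points)."""
    if points <= 7:
        return "malnourished"
    if points <= 11:
        return "at risk of malnutrition"
    return "normal nutritional status"
```

For example, the study's mean measured values (70.1 kg, 1.66 m) give a BMI of about 25.4 kg/m2, which these rules place in the overweight band.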
According to MNA-SF, 49% and 35% were at risk of malnutrition and malnourished, respectively. There was no agreement between self-perceived nutritional status and objective nutritional status according to MNA-SF (Kappa: 0.06). A slight agreement was found between subjective body weight status and objective body weight status according to the WHO-BMI classification (Kappa: 0.19). A total of 184 patients completed the 3-month follow-up, and an additional 9 patients died during this time, of whom 7 and 2 were malnourished and at risk of malnutrition according to MNA-SF, respectively. Of those who were malnourished or at risk of malnutrition based on MNA-SF and died during follow-up, 67.7% had not recognized their malnutrition. Compared to the patients with normal nutritional status during hospitalization, malnourished patients based on MNA-SF had higher rates of unplanned hospital readmission and further weight loss, more often reported health deterioration, and more often died within three months after discharge.
RESULTS
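The Kappa agreement statistic reported above can be computed directly. A minimal sketch of unweighted Cohen's kappa (illustrative only; the authors performed their analysis in SPSS):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # Observed agreement: fraction of cases where both ratings coincide.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the marginal category frequencies of each rater.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    if p_e == 1:  # degenerate case: both raters used one identical category
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 1 indicates perfect agreement and a kappa of 0 agreement no better than chance, matching the interpretation used in the study.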
No agreement between self-perceived nutritional status and objective nutritional status among older hospitalized patients was found. Our study highlights the need to raise knowledge about the issue of malnutrition and increase awareness of health risks associated with malnutrition among older hospitalized patients.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Body Mass Index", "Cross-Sectional Studies", "Female", "Geriatric Assessment", "Humans", "Male", "Malnutrition", "Nutrition Assessment", "Nutritional Status", "Risk Factors", "Self Concept", "Surveys and Questionnaires", "Weight Loss" ]
7682442
Introduction
Malnutrition is a frequent finding in older patients and commonly has a multifactorial etiology. Malnutrition is associated with a low quality of life, prolonged hospitalization and rehabilitation, more frequent complications and higher morbidity and mortality.1 Although the prevalence of malnutrition is between 30% and 50% in hospitalized older persons,2–4 it remains widely unrecognized and untreated.1 However, even if malnutrition is recognized, its treatment may be challenging, especially in an older frail population, in which successful treatment mostly takes weeks to months. Particularly for sustained treatment after hospital discharge, the patient's perception and comprehension of malnutrition may be critical. Older patients need to be aware of the fact that they are underweight, malnourished or at risk of malnutrition. Without this awareness and a comprehension of the consequences of malnutrition, a behavioral change is unlikely to happen, and management of malnutrition may not be successful. The agreement between self-perception and measured body weight has already been investigated in previous studies on different populations, including older subjects.5–7 The findings of a cross-sectional study of 1295 healthy older adults aged 60–96 years demonstrated low agreement between objective and self-perceived body weight status.7 However, to the best of our knowledge, data on the self-perception of malnutrition are lacking for older hospitalized patients. In the present study, we aimed to examine the self-perception of body weight and nutritional status among older hospitalized patients compared to their actual body weight and nutritional status based on medical assessment, and their relevance for outcome.
null
null
null
null
Conclusion
In this study, no agreement between self-perceived nutritional status and objective nutritional status among older hospitalized patients was found. Since malnourished older patients were more susceptible to death, especially if they were not aware of their malnutrition, our study highlights the need to raise knowledge about the issue of malnutrition and to increase awareness of the health risks associated with malnutrition among older hospitalized patients.
[ "Subjects and Methods", "Self-Reported and Medical Assessment Variables", "Statistical Analysis", "Results", "Characterization of Study Population", "Patient Perception of Body Weight, Nutritional Status and Appetite", "Medical Assessment of Body Weight and Nutritional Status", "Concordance Between Patient Perception and Medical Records", "Follow-Up", "Discussion", "Conclusion" ]
[ "This observational cross-sectional study was undertaken between November 2018 and April 2020 at eight acute care geriatric hospital departments in Germany. The study population comprised 197 consecutively hospitalized older participants aged between 66 and 101 years. Exclusion criteria were age <65 years, missing or withdrawn informed consent, severe disturbance of fluid status (ie, severe cardiac decompensation, decompensated kidney failure and dehydration), moderate to severe dementia, impossibility to measure body weight and inability to cooperate. The study protocol had been approved by the ethical committee of Ruhr-University, Bochum (no 18–6451 approved on 22.10 2018). The participants were informed about the purpose of the study, and that it was conducted in accordance with the Declaration of Helsinki. Written consent was obtained by each study participant.", "Reported variables were obtained using two structured questionnaires with predefined answers, where adequate. The first questionnaire about patient’s self-perception was distributed by a trained physician at each department and completed by the study participants within 24 hours after hospital admission. Help was given when necessary. The second questionnaire about medical assessment was filled out by the attending physician during the hospital stay and at hospital discharge.\nTo assess patient’s self-perception, we used the following main questions: 1) How is your actual body height and weight? 2) Do you think you are normal weight, underweight or overweight? 3) Do you rate your nutritional status as good, undernourished or overnourished? 4) Are you satisfied with your nutritional status? 5) Did your weight change during the last 3 months (no, decrease, gain or do not know)? 6) If you have lost weight, how much did you lose within the last three months? 7) If you have lost weight, what do you think is the main reason (12 predefined answers and free text)? 
8) Would you like to change your weight (keep it, lose weight, gain weight, do not know)? Regarding the self-perception of patients’ appetite, the Simplified Nutritional Appetite Questionnaire (SNAQ)8 was integrated at the end of the questionnaire.\nThe medical assessment questionnaire asked about the main reason for hospitalization, current measured body weight and height, weight loss in the last three months, need for nutritional therapy due to weight loss and kind of treatment of malnutrition, if present. At each center, a trained nurse measured body weight in light clothing with an accuracy of 0.1 kg using a calibrated chair scale and height to the nearest 0.5 cm with a stadiometer. BMI was calculated and patients were categorized according to the WHO-BMI classification (underweight: BMI <18.5 kg/m2, normal weight BMI 18.5–24.9 kg/m2, overweight BMI 25.0–29.9 kg/m2 and obesity BMI ≥30.0 kg/m2).9 In addition, geriatric assessment was performed at hospital admission except the Barthel-Index, which was evaluated on admission and at discharge. Risk of malnutrition was measured according to the Mini Nutritional Assessment Short Form (MNA-SF),10 which is a validated tool for the screening of nutritional status of geriatric patients across settings. Participants are stratified as having normal nutritional status (12–14 points), being at risk of malnutrition (8–11 points) and being malnourished (0–7 points).\nActivities of daily living were determined using the Barthel-Index (BI).11 The German version of the BI ranges from 0 to 100 points, with 100 points indicating independence in all activities of daily living. Cognitive function was measured with either the Mini Mental Status Examination (MMSE)12 or the Montreal Cognitive Assessment (MoCA),13 according to the standard assessment of each center. 
Medical comorbidities were evaluated using the Charlson Comorbidity Index (CCI).14 The geriatric assessment was performed within the clinical routine of each center and the results were validated by the attending physician.\nA follow-up was conducted with a short telephone interview after three months, performed by the trained physician at each study center. Patients were asked about their general health status compared to hospital discharge (worse, same, better), body weight compared to hospital discharge (stable, decreased, increased, unclear) and unplanned readmission to hospital. If the patient was not able to give reliable answers or it was not possible to get in contact with him or her, a relative was asked.", "The statistical analysis was completed using SPSS statistical software (SPSS Statistics for Windows, IBM Corp, Version 26.0, Armonk, NY, USA). For an approximate sample size calculation, we expected 50% of older patients to systematically overestimate their body weight by an average of 2 kg (SD = 8 kg) and 50% of the patients to estimate with an average difference of 0 kg, ie, correctly (SD = 2 kg). A sample size of N = 200 in a 1:1 design with a power of 0.8 and a Type I error of 0.05 was calculated (http://PowerAndSampleSize.com). Means and standard deviations (SDs) were used for continuous data with normal distribution whereas median values are expressed with interquartile ranges (IQR) for non-normally distributed data. Categorical variables are shown as n (%). In order to compare the self-reported nutritional status and the objective nutritional status, we divided the patients into three groups according to the MNA-SF classification (malnourished, risk of malnutrition and normal nutritional status). Group differences were analyzed using the paired samples t test and the Wilcoxon signed-rank test for normally and non-normally distributed values, respectively. Categorical variables were compared by the Chi square test. 
Pearson’s correlation was applied for normally distributed variables whereas Spearman correlation was used for nonparametric data. Multivariate analysis was used to examine the relationship between nutritional status and outcomes while adjusting for age, gender, comorbidity and cognitive function. In addition, the Kappa coefficient was used to assess the agreement between self-perceived and objective body weight status and nutritional status. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A p-value of <0.05 was considered as the limit of significance.", " Characterization of Study Population Baseline characteristics of study participants are summarized in Table 1. Of 197 patients with a mean age of 82.2 ± 6.8 years, 121 (61%) were women. Major reasons for hospitalization were cardiovascular disease, falls, fractures, osteoarthritis, neurodegenerative diseases and general disease, including infections.Table 1Characteristics of Study Population on AdmissionAll (n=197)Gender (number, %) Females121 (61) Males76 (39)Age (y)82.2 ± 6.8BMI (kg/m2)25.3 ± 6.3Geriatric assessments, Median (IQR) MNA-SF9 (7–11) Barthel-Index on admission55 (35–70) Barthel-Index at discharge75 (65–85) MMSE28 (25–29) MoCA21 (17–24) Charlson Comorbidity Index2 (1–4)Reason for admission Cardiovascular disease39 (20) Falls and fractures72 (37) Osteoarthritis16 (8) Neurodegenerative diseases6 (3) General diseases64 (32)Notes: For MMSE and MoCA, scores <26 considered as cognitively impaired. MMSE and MoCA were performed in 93 and 93 patients, respectively. 
Values are given as number (%), mean ± SD or median (IQR, interquartile range).Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points); MMSE, Mini Mental Status Examination; MOCA, Montreal Cognitive Assessment.\n Patient Perception of Body Weight, Nutritional Status and Appetite As shown in Table 2, 53% and 25% of the patients regarded their body weight as normal and underweight, respectively. Mean current body weight reported by the patients was 69.7 ± 17.1 kg. In addition, 77% of the patients reported a good nutritional status whereas 17% considered themselves undernourished. In terms of appetite perception, 43% of patients regarded their appetite as good and very good and 39% as average. Half of the patients (52%) reported weight loss in the last three months (mean weight loss: 6.1 ± 4.9 kg), 46% intended to maintain their current body weight whereas 27% wanted to gain weight in the future. Among those who reported weight loss, 43% and 15% intended to gain and lose body weight in the future, respectively. 
The main reasons of weight loss reported by the patients were acute illness, loss of appetite, psychological stress and dysphagia.Table 2Patient Perception of Body Weight, Nutritional Status and Appetite (n=197)Number (%)Body weight status Underweight49 (25) Normal weight104 (53) Overweight42 (22)Nutritional status Good150 (77) Undernourished34 (17) Overnourished12 (6)Satisfaction with nutritional status Satisfied143 (73) Unsatisfied51 (27)Appetite according to SNAQ Very poor9 (5) Poor26 (13) Average76 (39) Good73 (38) Very good9 (5)Body weight change in last 3 months No67 (35) Decreased101 (52) Increased14 (7) Unknown11 (6)Willing to change body weight in the next 3 months No91 (46) Willing to decrease48 (24) Willing to increase52 (27) Unknown5 (3)Abbreviation: SNAQ, Simplified Nutritional Appetite Questionnaire.\n Medical Assessment of Body Weight and Nutritional Status The results of the medical assessment are summarized in Table 3. Mean measured body weight was 70.1 ± 18.8 kg which was similar to the weight reported by the patients (P=0.436). 
When compared to the WHO-BMI classification, almost half of the patients were within the healthy weight range and only 9% were classified as underweight.Table 3Results of Medical Assessment of Nutritional Status (n=197)Total PopulationCurrent body weight (kg)70.1 ± 18.8Height (m)1.66 ± 0.1BMI (kg/m2)25.3 ± 6.4Objective body weight status Underweight (BMI <18.5 kg/m2)16 (9) Normal weight (BMI 18.5–24.9 kg/m2)91 (47) Overweight (BMI 25.0–29.9 kg/m2)45 (23) Obesity (BMI ≥30.0 kg/m2)41 (21)Weight loss in last 3 months No (n, %)83 (46) Yes (n, %)97 (54)Nutritional status according to MNA-SF Normal nutritional status (n, %)31 (16) At risk of malnutrition (n, %)95 (49) Malnourished (n, %)69 (35) SNAQ score, Median (IQR)14 (11–15) <14 (n, %)86 (48) ≥14 (n, %)92 (52)Nutritional therapy of weight loss No (n, %)42 (40) Yes (n, %)62 (60)Notes: Body weight was measured using a calibrated chair scale. Values are given as number (%), mean ± SD or median (IQR, interquartile range).Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points); SNAQ score, Simplified Nutritional Appetite Questionnaire (maximum score 20, score <14 indicates risk of at least 5% weight loss within six months).\n\nResults of Medical Assessment of Nutritional Status (n=197)\nNotes: Body weight was measured using a calibrated chair scale. Values are given as number (%), mean ± SD or median (IQR, interquartile range).\nAbbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points); SNAQ score, Simplified Nutritional Appetite Questionnaire (maximum score 20, score <14 indicates risk of at least 5% weight loss within six months).\nAccording to MNA-SF, 16% and 49% had normal nutritional status and were at risk of malnutrition, respectively, whereas 35% were malnourished. 
In addition, 18% and 32% of the patients had a severe and moderate decrease in food intake over the past three months, respectively. According to the SNAQ, over half of the patients (52%) were at nutritional risk.\nAccording to the information given by the attending physician, 54% of the patients had weight loss in the last three months. The main reasons for weight loss as reported by the attending physician were general disease, gastrointestinal disorders, dementia and pain. Furthermore, 60% of the patients with weight loss received nutritional therapy (mainly high protein and/or high energy oral nutritional supplements) during the hospital stay. The group treated with nutritional therapy was more malnourished (median MNA-SF: 6, IQR: 4–8 vs 9, IQR: 7–11; P<0.001) and had lower mean BMI (22.3 ± 4.6 kg/m2 vs 28.4 ± 7.6 kg/m2, P<0.001) compared to those who did not receive nutritional therapy.\n Concordance Between Patient Perception and Medical Records With regard to body weight, 55% and 84% of patients who were malnourished or at risk of malnutrition according to MNA-SF reported their body weight, as normal weight and overweight, respectively (Table 4). In addition, 64% of malnourished patients and 87% of patients at risk of malnutrition according to MNA-SF classified their nutritional status as good. Furthermore, 58% and 82% of patients who were satisfied with their nutritional status were malnourished and at risk of malnutrition based on MNA-SF, respectively. When compared to the objective nutritional status, only 33% of malnourished patients based on MNA-SF correctly perceived their nutritional status as undernourished (Table 4). The Kappa coefficient (0.06) showed no agreement between self-perceived nutritional status and objective nutritional status according to MNA-SF. 
In addition, we found only a slight agreement between subjective body weight status and objective body weight status according to the WHO-BMI classification (Kappa coefficient: 0.19).Table 4Concordance Between Patient Perception and Medical Assessment of Nutritional StatusSelf-Reported by PatientsMNA-SF Classification (n, %)P valueMalnourished (n=69, 35%)At Risk (n=95, 49%)Normal (n=31, 16%)TotalBody weight status Normal weight31 (45)54 (58)19 (61)104P<0.001 Underweight31 (45)16 (16)2 (6)49 Overweight7 (10)25 (26)10 (33)42Nutritional status Good44 (64)82 (87)24 (77)150P<0.001 Undernourished23 (33)8 (8)3 (10)34 Overnourished2 (3)5 (5)4 (13)11Satisfaction with nutritional status Satisfied39 (58)78 (82)26 (87)143P=0.002 Unsatisfied30 (42)17 (18)4 (13)51Abbreviation: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points).\n Follow-Up A total of 184 patients completed the 3 months follow-up and additional 9 patients died during this time (9/193; 4.6%), of which 7 (78%) and 2 (22%) were malnourished and at risk of malnutrition according to MNA-SF, respectively. None of the patients with a normal nutritional status died during the follow-up period. Of those who were malnourished or at risk of malnutrition based on MNA-SF and died during follow-up, 67.7% (6/9) perceived their nutritional status incorrectly (P=0.01), ie, they did not believe to be malnourished. 
Further, results of multivariate analysis showed that, compared with patients who had a normal nutritional status during hospitalization, malnourished patients based on MNA-SF had higher rates of unplanned hospital readmission (36%, n=23 vs 18%, n=5; P=0.097), further weight loss (44%, n=28 vs 14%, n=4; P=0.073), self-reported health deterioration (29%, n=18 vs 17%, n=5; P=0.218) and death (11%, n=7 vs 0%, n=0; P=0.021) within three months after discharge. In addition, no significant associations between self-perceived malnutrition and adverse outcomes were found, except for further weight loss within three months after discharge (P=0.04).", "Baseline characteristics of study participants are summarized in Table 1. Of 197 patients with a mean age of 82.2 ± 6.8 years, 121 (61%) were women. Major reasons for hospitalization were cardiovascular disease, falls, fractures, osteoarthritis, neurodegenerative diseases and general disease, including infections.

Table 1. Characteristics of Study Population on Admission
Characteristic | All (n=197)
Females, n (%) | 121 (61)
Males, n (%) | 76 (39)
Age (y) | 82.2 ± 6.8
BMI (kg/m2) | 25.3 ± 6.3
MNA-SF, median (IQR) | 9 (7–11)
Barthel-Index on admission, median (IQR) | 55 (35–70)
Barthel-Index at discharge, median (IQR) | 75 (65–85)
MMSE, median (IQR) | 28 (25–29)
MoCA, median (IQR) | 21 (17–24)
Charlson Comorbidity Index, median (IQR) | 2 (1–4)
Reason for admission: Cardiovascular disease, n (%) | 39 (20)
Reason for admission: Falls and fractures, n (%) | 72 (37)
Reason for admission: Osteoarthritis, n (%) | 16 (8)
Reason for admission: Neurodegenerative diseases, n (%) | 6 (3)
Reason for admission: General diseases, n (%) | 64 (32)
Notes: For MMSE and MoCA, scores <26 were considered cognitively impaired. MMSE and MoCA were each performed in 93 patients. Values are given as number (%), mean ± SD or median (IQR, interquartile range).
Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points); MMSE, Mini Mental Status Examination; MoCA, Montreal Cognitive Assessment.", "As shown in Table 2, 53% and 25% of the patients regarded their body weight as normal and underweight, respectively. Mean current body weight reported by the patients was 69.7 ± 17.1 kg. In addition, 77% of the patients reported a good nutritional status whereas 17% considered themselves undernourished. In terms of appetite perception, 43% of patients regarded their appetite as good or very good and 39% as average. Half of the patients (52%) reported weight loss in the last three months (mean weight loss: 6.1 ± 4.9 kg), 46% intended to maintain their current body weight whereas 27% wanted to gain weight in the future. Among those who reported weight loss, 43% and 15% intended to gain and lose body weight in the future, respectively.
The main reasons for weight loss reported by the patients were acute illness, loss of appetite, psychological stress and dysphagia.

Table 2. Patient Perception of Body Weight, Nutritional Status and Appetite (n=197)
Item | Number (%)
Body weight status: Underweight | 49 (25)
Body weight status: Normal weight | 104 (53)
Body weight status: Overweight | 42 (22)
Nutritional status: Good | 150 (77)
Nutritional status: Undernourished | 34 (17)
Nutritional status: Overnourished | 12 (6)
Satisfaction with nutritional status: Satisfied | 143 (73)
Satisfaction with nutritional status: Unsatisfied | 51 (27)
Appetite according to SNAQ: Very poor | 9 (5)
Appetite according to SNAQ: Poor | 26 (13)
Appetite according to SNAQ: Average | 76 (39)
Appetite according to SNAQ: Good | 73 (38)
Appetite according to SNAQ: Very good | 9 (5)
Body weight change in last 3 months: No | 67 (35)
Body weight change in last 3 months: Decreased | 101 (52)
Body weight change in last 3 months: Increased | 14 (7)
Body weight change in last 3 months: Unknown | 11 (6)
Willing to change body weight in the next 3 months: No | 91 (46)
Willing to change body weight in the next 3 months: Willing to decrease | 48 (24)
Willing to change body weight in the next 3 months: Willing to increase | 52 (27)
Willing to change body weight in the next 3 months: Unknown | 5 (3)
Abbreviation: SNAQ, Simplified Nutritional Appetite Questionnaire.", "The results of the medical assessment are summarized in Table 3. Mean measured body weight was 70.1 ± 18.8 kg, which was similar to the weight reported by the patients (P=0.436).
According to the WHO-BMI classification, almost half of the patients were within the healthy weight range and only 9% were classified as underweight.

Table 3. Results of Medical Assessment of Nutritional Status (n=197)
Item | Total Population
Current body weight (kg) | 70.1 ± 18.8
Height (m) | 1.66 ± 0.1
BMI (kg/m2) | 25.3 ± 6.4
Objective body weight status: Underweight (BMI <18.5 kg/m2) | 16 (9)
Objective body weight status: Normal weight (BMI 18.5–24.9 kg/m2) | 91 (47)
Objective body weight status: Overweight (BMI 25.0–29.9 kg/m2) | 45 (23)
Objective body weight status: Obesity (BMI ≥30.0 kg/m2) | 41 (21)
Weight loss in last 3 months: No, n (%) | 83 (46)
Weight loss in last 3 months: Yes, n (%) | 97 (54)
Nutritional status according to MNA-SF: Normal nutritional status, n (%) | 31 (16)
Nutritional status according to MNA-SF: At risk of malnutrition, n (%) | 95 (49)
Nutritional status according to MNA-SF: Malnourished, n (%) | 69 (35)
SNAQ score, median (IQR) | 14 (11–15)
SNAQ score <14, n (%) | 86 (48)
SNAQ score ≥14, n (%) | 92 (52)
Nutritional therapy of weight loss: No, n (%) | 42 (40)
Nutritional therapy of weight loss: Yes, n (%) | 62 (60)
Notes: Body weight was measured using a calibrated chair scale. Values are given as number (%), mean ± SD or median (IQR, interquartile range).
Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points); SNAQ, Simplified Nutritional Appetite Questionnaire (maximum score 20, score <14 indicates risk of at least 5% weight loss within six months).

According to MNA-SF, 16% and 49% had a normal nutritional status and were at risk of malnutrition, respectively, whereas 35% were malnourished.
In addition, 18% and 32% of the patients had a severe and moderate decrease in food intake over the past three months, respectively. According to the SNAQ, nearly half of the patients (48%) were at nutritional risk.
According to the information given by the attending physician, 54% of the patients had weight loss in the last three months. The main reasons for weight loss as reported by the attending physician were general disease, gastrointestinal disorders, dementia and pain. Furthermore, 60% of the patients with weight loss received nutritional therapy (mainly high protein and/or high energy oral nutritional supplements) during the hospital stay. The group treated with nutritional therapy was more malnourished (median MNA-SF: 6, IQR: 4–8 vs 9, IQR: 7–11; P<0.001) and had a lower mean BMI (22.3 ± 4.6 kg/m2 vs 28.4 ± 7.6 kg/m2, P<0.001) compared to those who did not receive nutritional therapy.", "With regard to body weight, 55% of malnourished patients and 84% of patients at risk of malnutrition according to MNA-SF reported their body weight as normal weight or overweight (Table 4). In addition, 64% of malnourished patients and 87% of patients at risk of malnutrition according to MNA-SF classified their nutritional status as good. Furthermore, 58% of malnourished patients and 82% of patients at risk of malnutrition based on MNA-SF were satisfied with their nutritional status. When compared to the objective nutritional status, only 33% of malnourished patients based on MNA-SF correctly perceived their nutritional status as undernourished (Table 4). The Kappa coefficient (0.06) showed no agreement between self-perceived nutritional status and objective nutritional status according to MNA-SF.
In addition, we found only a slight agreement between subjective body weight status and objective body weight status according to the WHO-BMI classification (Kappa coefficient: 0.19).

Table 4. Concordance Between Patient Perception and Medical Assessment of Nutritional Status
Self-Reported by Patients | Malnourished (n=69, 35%) | At Risk (n=95, 49%) | Normal (n=31, 16%) | Total | P value
Body weight status | | | | | P<0.001
- Normal weight | 31 (45) | 54 (58) | 19 (61) | 104 |
- Underweight | 31 (45) | 16 (16) | 2 (6) | 49 |
- Overweight | 7 (10) | 25 (26) | 10 (33) | 42 |
Nutritional status | | | | | P<0.001
- Good | 44 (64) | 82 (87) | 24 (77) | 150 |
- Undernourished | 23 (33) | 8 (8) | 3 (10) | 34 |
- Overnourished | 2 (3) | 5 (5) | 4 (13) | 11 |
Satisfaction with nutritional status | | | | | P=0.002
- Satisfied | 39 (58) | 78 (82) | 26 (87) | 143 |
- Unsatisfied | 30 (42) | 17 (18) | 4 (13) | 51 |
Abbreviation: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points).", "A total of 184 patients completed the 3-month follow-up, and an additional 9 patients died during this time (9/193; 4.6%); of these, 7 (78%) were malnourished and 2 (22%) were at risk of malnutrition according to MNA-SF. None of the patients with a normal nutritional status died during the follow-up period. Of those who were malnourished or at risk of malnutrition based on MNA-SF and died during follow-up, 66.7% (6/9) perceived their nutritional status incorrectly (P=0.01), ie, they did not believe themselves to be malnourished.
Further, results of multivariate analysis showed that, compared with patients who had a normal nutritional status during hospitalization, malnourished patients based on MNA-SF had higher rates of unplanned hospital readmission (36%, n=23 vs 18%, n=5; P=0.097), further weight loss (44%, n=28 vs 14%, n=4; P=0.073), self-reported health deterioration (29%, n=18 vs 17%, n=5; P=0.218) and death (11%, n=7 vs 0%, n=0; P=0.021) within three months after discharge. In addition, no significant associations between self-perceived malnutrition and adverse outcomes were found, except for further weight loss within three months after discharge (P=0.04).", "In the present study, 35% of the patients were malnourished and 49% were at risk of malnutrition according to MNA-SF. However, we found major discrepancies between nutritional status (MNA-SF) and body weight status (WHO-BMI classification) on the one hand and patients' self-perception of nutritional status and body weight status on the other. Previous studies have exclusively investigated the agreement between self-perceived and measured body weight, mainly in young and middle-aged adults15,16 and healthy older individuals.6,7 To the best of our knowledge, this is the first study evaluating the agreement between subjective and objective nutritional status in older hospitalized patients.
The findings of the present study demonstrated only a slight agreement between subjective and objective body weight status (Kappa coefficient, 0.19) in older hospitalized patients.
These findings extend the results of a previous cross-sectional study among healthy older adults (aged 60–96 years), which reported low agreement between objective and self-perceived body weight status.7 In another cross-sectional study among 76 older individuals aged between 65 and 97 years, 40% perceived their body weight status incorrectly.6 In this respect, it seems that older adults substantially misperceive their body weight status, possibly due to a lack of awareness of changes in body weight with advancing age.17 In a cross-sectional study among women aged 14–79 years, Park et al16 indicated that age is the most important factor associated with deviations in weight status perception, with misperception increasing with age. Further, low agreement between self-reported and measured body weight with increasing age was also observed in previous cross-sectional and longitudinal studies covering a wide age range.17,18
Besides body weight status perception, studies focusing on self-perception of nutritional status in older hospitalized patients are lacking. This is remarkable, since correct self-perception of nutritional status could be a key factor for successful treatment and management of malnutrition in older individuals.19 In the present study, we found no agreement between self-perceived and objective nutritional status among older hospitalized patients (Kappa coefficient, 0.06), which confirms a substantial misperception of nutritional status in older persons.
Findings of a small pilot study among ten hospitalized seniors (aged 65 and older) who were classified as being at risk of malnutrition by nutrition screening indicated that none of the patients believed themselves to be at risk of malnutrition; rather, they reported having a good nutritional intake.20 The same is true for our study, since the majority of those who were classified as malnourished (67%) or at risk of malnutrition (92%) according to MNA-SF did not see themselves as malnourished or at risk of malnutrition.
Previous studies demonstrated significant associations between weight loss and risk of malnutrition and higher mortality among community-living older adults.21,22 In the present study, over half of the population reported weight loss in the past three months, and 15% of those with weight loss intended to lose even more body weight, which should be considered harmful for most older patients. Further, in this study, 5% of patients who were malnourished or at risk of malnutrition according to MNA-SF died during the 3-month follow-up period, nearly three-quarters of whom did not perceive their reduced nutritional status. Indeed, this misperception indicates a potential problem.
Nutritional deficiencies are frequently observed in older persons; however, poor knowledge about their own nutritional status could be involved in the development of nutritional inadequacy.
Malnutrition mostly develops very slowly over time and is associated with unspecific symptoms.23–25 A better understanding and awareness of malnutrition by affected patients may help slow or even prevent the development of malnutrition.25 In a survey in Australia, dietitians reported a lack of knowledge about malnutrition among community-living older adults as the strongest barrier to performing malnutrition screening.26 Poor understanding of malnutrition and the misperception of nutritional and body weight status in older persons, as seen in our study and previous studies,7,17,27 may seriously hamper the successful implementation of nutrition therapy. Our study highlights the need to raise knowledge and awareness about malnutrition and associated health risks among older hospitalized patients.
This study has some limitations. It was undertaken in eight different acute care geriatric hospital departments with different numbers of included patients. Therefore, the results may be biased by two departments recruiting half of the patients. Another limitation is the exclusion of patients with a high risk of malnutrition, such as patients with dementia; this, however, cannot be avoided when enquiring about self-perception. Further, self-perception of nutritional status was only obtained at hospital admission. For further research, it would be of interest to measure how the perception of nutritional status changes after discussing the malnutrition diagnosis with the patients and after nutritional counseling.", "In this study, no agreement between self-perceived nutritional status and objective nutritional status among older hospitalized patients was found. Since malnourished older patients were more susceptible to death, especially if they were not aware of their malnutrition, our study highlights the need to raise knowledge about the issue of malnutrition and increase awareness of health risks associated with malnutrition among older hospitalized patients." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Subjects and Methods", "Self-Reported and Medical Assessment Variables", "Statistical Analysis", "Results", "Characterization of Study Population", "Patient Perception of Body Weight, Nutritional Status and Appetite", "Medical Assessment of Body Weight and Nutritional Status", "Concordance Between Patient Perception and Medical Records", "Follow-Up", "Discussion", "Conclusion" ]
[ "Malnutrition is a frequent finding in older patients and commonly has a multifactorial etiology. Malnutrition is associated with a low quality of life, prolonged hospitalization and rehabilitation, more frequent complications and higher morbidity and mortality.1 Although the prevalence of malnutrition is between 30% and 50% in hospitalized older persons,2–4 it remains widely unrecognized and untreated.1
However, even if malnutrition is recognized, its treatment may be challenging, especially in an older, frail population. In this population, the successful treatment of malnutrition mostly takes weeks to months. Particularly for sustained treatment after hospital discharge, the patient's perception and comprehension of malnutrition may be critical. Older patients need to be aware of the fact that they are underweight, malnourished or at risk of malnutrition. Without this awareness and the comprehension of the consequences of malnutrition, a behavioral change is unlikely to happen, and management of malnutrition may not be successful.
The agreement between self-perceived and measured body weight has already been investigated in previous studies on different populations, including older subjects.5–7 The findings of a cross-sectional study of 1295 healthy older adults aged 60–96 years demonstrated low agreement between objective and self-perceived body weight status.7 However, to the best of our knowledge, data about self-perception of malnutrition are missing for older hospitalized patients. In the present study, we aimed to examine the self-perception of body weight and nutritional status among older hospitalized patients compared to their actual body weight and nutritional status based on medical assessment, and their relevance for outcome.", "This observational cross-sectional study was undertaken between November 2018 and April 2020 at eight acute care geriatric hospital departments in Germany.
The study population comprised 197 consecutively hospitalized older participants aged between 66 and 101 years. Exclusion criteria were age <65 years, missing or withdrawn informed consent, severe disturbance of fluid status (ie, severe cardiac decompensation, decompensated kidney failure and dehydration), moderate to severe dementia, inability to measure body weight and inability to cooperate. The study protocol had been approved by the ethical committee of Ruhr-University, Bochum (no 18–6451, approved on 22.10.2018). The participants were informed about the purpose of the study and that it was conducted in accordance with the Declaration of Helsinki. Written consent was obtained from each study participant.", "Reported variables were obtained using two structured questionnaires with predefined answers, where appropriate. The first questionnaire, on the patient’s self-perception, was distributed by a trained physician at each department and completed by the study participants within 24 hours after hospital admission. Help was given when necessary. The second questionnaire, on the medical assessment, was filled out by the attending physician during the hospital stay and at hospital discharge.
To assess patients’ self-perception, we used the following main questions: 1) What are your current body height and weight? 2) Do you think you are normal weight, underweight or overweight? 3) Do you rate your nutritional status as good, undernourished or overnourished? 4) Are you satisfied with your nutritional status? 5) Did your weight change during the last 3 months (no, decrease, gain or do not know)? 6) If you have lost weight, how much did you lose within the last three months? 7) If you have lost weight, what do you think is the main reason (12 predefined answers and free text)? 8) Would you like to change your weight (keep it, lose weight, gain weight, do not know)?
Regarding the self-perception of patients’ appetite, the Simplified Nutritional Appetite Questionnaire (SNAQ)8 was integrated at the end of the questionnaire.
The medical assessment questionnaire asked about the main reason for hospitalization, the current measured body weight and height, weight loss in the last three months, the need for nutritional therapy due to weight loss, and the kind of treatment of malnutrition, if present. At each center, a trained nurse measured body weight in light clothing with an accuracy of 0.1 kg using a calibrated chair scale, and height to the nearest 0.5 cm with a stadiometer. BMI was calculated and patients were categorized according to the WHO-BMI classification (underweight: BMI <18.5 kg/m2, normal weight: BMI 18.5–24.9 kg/m2, overweight: BMI 25.0–29.9 kg/m2 and obesity: BMI ≥30.0 kg/m2).9 In addition, geriatric assessment was performed at hospital admission, except for the Barthel-Index, which was evaluated on admission and at discharge. Risk of malnutrition was measured according to the Mini Nutritional Assessment Short Form (MNA-SF),10 which is a validated tool for screening the nutritional status of geriatric patients across settings. Participants are stratified as having a normal nutritional status (12–14 points), being at risk of malnutrition (8–11 points) or being malnourished (0–7 points).
Activities of daily living were determined using the Barthel-Index (BI).11 The German version of the BI ranges from 0 to 100 points, with 100 points indicating independence in all activities of daily living. Cognitive function was measured with either the Mini Mental Status Examination (MMSE)12 or the Montreal Cognitive Assessment (MoCA),13 according to the standard assessment of each center.
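The WHO-BMI cut-offs and MNA-SF strata described above can be expressed as two small classification helpers. This is a minimal sketch; the function names and example values are ours, not part of the study protocol:

```python
def who_bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify body weight using the WHO-BMI cut-offs applied in the study."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obesity"


def mna_sf_category(score: int) -> str:
    """Stratify an MNA-SF score: 0-7 malnourished, 8-11 at risk, 12-14 normal."""
    if not 0 <= score <= 14:
        raise ValueError("MNA-SF score must be between 0 and 14")
    if score <= 7:
        return "malnourished"
    if score <= 11:
        return "at risk of malnutrition"
    return "normal nutritional status"


# Illustration with the study population's mean weight/height and median MNA-SF:
print(who_bmi_category(70.1, 1.66))  # BMI ~25.4 -> "overweight"
print(mna_sf_category(9))            # -> "at risk of malnutrition"
```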
Medical comorbidities were evaluated using the Charlson Comorbidity Index (CCI).14 The geriatric assessment was performed within the clinical routine of each center and the results were validated by the attending physician.
A follow-up was performed after three months via a short telephone interview conducted by the trained physician at each study center. Patients were asked about their general health status compared to hospital discharge (worse, same, better), their body weight compared to hospital discharge (stable, decreased, increased, unclear) and unplanned readmission to hospital. If the patient was not able to give reliable answers or could not be contacted, a relative was asked.", "The statistical analysis was completed using SPSS statistical software (SPSS Statistics for Windows, Version 26.0, IBM Corp, Armonk, NY, USA). For an approximate sample size calculation, we expected 50% of older patients to systematically overestimate their body weight by an average of 2 kg (SD = 8 kg) and 50% of the patients to estimate with an average difference of 0 kg, ie, correctly (SD = 2 kg). A sample size of N = 200 in a 1:1 design with a power of 0.8 and a Type I error of 0.05 was calculated (http://PowerAndSampleSize.com). Means and standard deviations (SDs) were used for continuous data with a normal distribution, whereas median values are expressed with interquartile ranges (IQR) for non-normally distributed data. Categorical variables are shown as n (%). In order to compare the self-reported nutritional status and the objective nutritional status, we divided the patients into three groups according to the MNA-SF classification (malnourished, at risk of malnutrition and normal nutritional status). Group differences were analyzed using the paired samples t-test and the Wilcoxon signed rank test for normally and non-normally distributed values, respectively. Categorical variables were compared by the Chi-square test.
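The agreement analyses in this study rely on Cohen's kappa, which compares observed agreement with the agreement expected by chance from the marginal label frequencies. A minimal, self-contained sketch on toy labels (the example data are hypothetical, not taken from the study):

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two categorical ratings.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the chance agreement implied by each rater's marginals.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(marg_a[c] * marg_b[c] for c in set(marg_a) | set(marg_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)


# Toy labels: self-perceived status vs MNA-SF-based status (hypothetical data).
self_report = ["good", "good", "undernourished", "good", "good", "good"]
mna_sf = ["malnourished", "good", "undernourished", "good", "malnourished", "good"]
print(round(cohens_kappa(self_report, mna_sf), 2))  # -> 0.4
```

A kappa near 0, as reported for self-perceived vs objective nutritional status (0.06), means the raw agreement is barely better than what the marginal frequencies alone would produce.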
Pearson’s correlation was applied for normally distributed variables whereas Spearman correlation was used for nonparametric data. Multivariate analysis was used to examine the relationship between nutritional status and outcomes while adjusting for age, gender, comorbidity and cognitive function. In addition, the Kappa coefficient was used to assess the agreement between self-perceived and objective body weight status and nutritional status. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A p-value of <0.05 was considered statistically significant.", "Characterization of Study Population
Baseline characteristics of study participants are summarized in Table 1. Of 197 patients with a mean age of 82.2 ± 6.8 years, 121 (61%) were women. Major reasons for hospitalization were cardiovascular disease, falls, fractures, osteoarthritis, neurodegenerative diseases and general disease, including infections.

Table 1. Characteristics of Study Population on Admission
Characteristic | All (n=197)
Females, n (%) | 121 (61)
Males, n (%) | 76 (39)
Age (y) | 82.2 ± 6.8
BMI (kg/m2) | 25.3 ± 6.3
MNA-SF, median (IQR) | 9 (7–11)
Barthel-Index on admission, median (IQR) | 55 (35–70)
Barthel-Index at discharge, median (IQR) | 75 (65–85)
MMSE, median (IQR) | 28 (25–29)
MoCA, median (IQR) | 21 (17–24)
Charlson Comorbidity Index, median (IQR) | 2 (1–4)
Reason for admission: Cardiovascular disease, n (%) | 39 (20)
Reason for admission: Falls and fractures, n (%) | 72 (37)
Reason for admission: Osteoarthritis, n (%) | 16 (8)
Reason for admission: Neurodegenerative diseases, n (%) | 6 (3)
Reason for admission: General diseases, n (%) | 64 (32)
Notes: For MMSE and MoCA, scores <26 were considered cognitively impaired. MMSE and MoCA were each performed in 93 patients. Values are given as number (%), mean ± SD or median (IQR, interquartile range).
Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points); MMSE, Mini Mental Status Examination; MoCA, Montreal Cognitive Assessment.

Patient Perception of Body Weight, Nutritional Status and Appetite
As shown in Table 2, 53% and 25% of the patients regarded their body weight as normal and underweight, respectively. Mean current body weight reported by the patients was 69.7 ± 17.1 kg. In addition, 77% of the patients reported a good nutritional status whereas 17% considered themselves undernourished. In terms of appetite perception, 43% of patients regarded their appetite as good or very good and 39% as average. Half of the patients (52%) reported weight loss in the last three months (mean weight loss: 6.1 ± 4.9 kg), 46% intended to maintain their current body weight whereas 27% wanted to gain weight in the future. Among those who reported weight loss, 43% and 15% intended to gain and lose body weight in the future, respectively.
The main reasons of weight loss reported by the patients were acute illness, loss of appetite, psychological stress and dysphagia.Table 2Patient Perception of Body Weight, Nutritional Status and Appetite (n=197)Number (%)Body weight status Underweight49 (25) Normal weight104 (53) Overweight42 (22)Nutritional status Good150 (77) Undernourished34 (17) Overnourished12 (6)Satisfaction with nutritional status Satisfied143 (73) Unsatisfied51 (27)Appetite according to SNAQ Very poor9 (5) Poor26 (13) Average76 (39) Good73 (38) Very good9 (5)Body weight change in last 3 months No67 (35) Decreased101 (52) Increased14 (7) Unknown11 (6)Willing to change body weight in the next 3 months No91 (46) Willing to decrease`48 (24) Willing to increase52 (27) Unknown5 (3)Abbreviation: SNAQ, Simplified Nutritional Appetite Questionnaire.\n\nPatient Perception of Body Weight, Nutritional Status and Appetite (n=197)\nAbbreviation: SNAQ, Simplified Nutritional Appetite Questionnaire.\nAs shown in Table 2, 53% and 25% of the patients regarded their body weight as normal and underweight, respectively. Mean current body weight reported by the patients was 69.7 ± 17.1 kg. In addition, 77% of the patients reported a good nutritional status whereas 17% considered themselves undernourished. In terms of appetite perception, 43% of patients regarded their appetite as good and very good and 39% as average. Half of the patients (52%) reported weight loss in the last three months (mean weight loss: 6.1 ± 4.9 kg), 46% intended to maintain their current body weight whereas 27% wanted to gain weight in the future. Among those who reported weight loss, 43% and 15% intended to gain and lose body weight in the future, respectively. 
Medical Assessment of Body Weight and Nutritional Status
The results of the medical assessment are summarized in Table 3. Mean measured body weight was 70.1 ± 18.8 kg, which was similar to the weight reported by the patients (P=0.436).
When compared to the WHO-BMI classification, almost half of the patients were within the healthy weight range and only 9% were classified as underweight.

Table 3. Results of Medical Assessment of Nutritional Status (n=197)
Current body weight (kg): 70.1 ± 18.8
Height (m): 1.66 ± 0.1
BMI (kg/m2): 25.3 ± 6.4
Objective body weight status: Underweight (BMI <18.5 kg/m2) 16 (9); Normal weight (BMI 18.5–24.9 kg/m2) 91 (47); Overweight (BMI 25.0–29.9 kg/m2) 45 (23); Obesity (BMI ≥30.0 kg/m2) 41 (21)
Weight loss in last 3 months: No 83 (46); Yes 97 (54)
Nutritional status according to MNA-SF: Normal nutritional status 31 (16); At risk of malnutrition 95 (49); Malnourished 69 (35)
SNAQ score, median (IQR): 14 (11–15); <14: 86 (48); ≥14: 92 (52)
Nutritional therapy of weight loss: No 42 (40); Yes 62 (60)
Notes: Body weight was measured using a calibrated chair scale. Values are given as number (%), mean ± SD or median (IQR, interquartile range).
Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points); SNAQ, Simplified Nutritional Appetite Questionnaire (maximum score 20; a score <14 indicates risk of at least 5% weight loss within six months).

According to MNA-SF, 16% and 49% of the patients had a normal nutritional status or were at risk of malnutrition, respectively, whereas 35% were malnourished.
In addition, 18% and 32% of the patients had a severe or moderate decrease in food intake over the past three months, respectively. According to the SNAQ, over half of the patients (52%) were at nutritional risk.
According to the attending physician, 54% of the patients had lost weight in the last three months. The main reasons for weight loss reported by the attending physician were general disease, gastrointestinal disorders, dementia and pain. Furthermore, 60% of the patients with weight loss received nutritional therapy (mainly high-protein and/or high-energy oral nutritional supplements) during the hospital stay. The group treated with nutritional therapy was more malnourished (median MNA-SF: 6, IQR: 4–8 vs 9, IQR: 7–11; P<0.001) and had a lower mean BMI (22.3 ± 4.6 kg/m2 vs 28.4 ± 7.6 kg/m2; P<0.001) than those who did not receive nutritional therapy.
Concordance Between Patient Perception and Medical Records
With regard to body weight, 55% of malnourished patients and 84% of patients at risk of malnutrition according to MNA-SF perceived their body weight as normal or as overweight (Table 4). In addition, 64% of malnourished patients and 87% of patients at risk of malnutrition according to MNA-SF classified their nutritional status as good. Furthermore, 58% of malnourished patients and 82% of patients at risk of malnutrition based on MNA-SF were satisfied with their nutritional status. When compared to the objective nutritional status, only 33% of malnourished patients based on MNA-SF correctly perceived their nutritional status as undernourished (Table 4). The Kappa coefficient (0.06) showed no agreement between self-perceived and objective nutritional status according to MNA-SF.
In addition, we found only a slight agreement between subjective body weight status and objective body weight status according to the WHO-BMI classification (Kappa coefficient: 0.19).

Table 4. Concordance Between Patient Perception and Medical Assessment of Nutritional Status
MNA-SF classification: Malnourished (n=69, 35%) / At Risk (n=95, 49%) / Normal (n=31, 16%) / Total
Body weight status (P<0.001):
  Normal weight: 31 (45) / 54 (58) / 19 (61) / 104
  Underweight: 31 (45) / 16 (16) / 2 (6) / 49
  Overweight: 7 (10) / 25 (26) / 10 (33) / 42
Nutritional status (P<0.001):
  Good: 44 (64) / 82 (87) / 24 (77) / 150
  Undernourished: 23 (33) / 8 (8) / 3 (10) / 34
  Overnourished: 2 (3) / 5 (5) / 4 (13) / 11
Satisfaction with nutritional status (P=0.002):
  Satisfied: 39 (58) / 78 (82) / 26 (87) / 143
  Unsatisfied: 30 (42) / 17 (18) / 4 (13) / 51
Abbreviation: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points and malnourished 0–7 points).
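The Kappa coefficients reported here can, in principle, be computed directly from a contingency table such as Table 4. The sketch below shows an unweighted Cohen's kappa from a square table of counts; the 2×2 example counts are hypothetical, and the exact mapping of self-report categories onto MNA-SF classes used by the study is not specified here, so the output is illustrative only and is not meant to reproduce the published values of 0.06 or 0.19.

```python
def cohens_kappa(table):
    """Unweighted Cohen's kappa from a square contingency table
    (rows: categories of rater A, columns: categories of rater B)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_observed = sum(table[i][i] for i in range(k)) / n          # diagonal agreement
    p_expected = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical 2x2 example: 25 of 30 ratings are concordant
print(round(cohens_kappa([[10, 2], [3, 15]]), 4))  # → 0.6575
```

By the usual rule of thumb, values below 0.20 (as in this study) indicate no more than slight agreement.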
Follow-Up
A total of 184 patients completed the 3-month follow-up, and an additional 9 patients died during this time (9/193; 4.6%), of whom 7 (78%) were malnourished and 2 (22%) were at risk of malnutrition according to MNA-SF. None of the patients with a normal nutritional status died during the follow-up period. Of those who were malnourished or at risk of malnutrition based on MNA-SF and died during follow-up, 66.7% (6/9) had perceived their nutritional status incorrectly (P=0.01), ie, they did not believe themselves to be malnourished.
Further, multivariate analysis showed that, compared to patients with a normal nutritional status during hospitalization, patients who were malnourished according to MNA-SF had higher rates of unplanned hospital readmission (36%, n=23 vs 18%, n=5; P=0.097), further weight loss (44%, n=28 vs 14%, n=4; P=0.073), self-reported health deterioration (29%, n=18 vs 17%, n=5; P=0.218) and death (11%, n=7 vs 0%, n=0; P=0.021) within three months after discharge.
In addition, no significant associations between self-perceived malnutrition and adverse outcomes were found, except for further weight loss within three months after discharge (P=0.04).
Discussion
In the present study, 35% of the patients were malnourished and 49% were at risk of malnutrition according to MNA-SF. However, we found major discrepancies between objective nutritional status (MNA-SF) and body weight status (WHO-BMI classification) on the one hand and the patients' self-perception of nutritional status and body weight status on the other. Previous studies have investigated exclusively the agreement between self-perceived and measured body weight, mainly in young and middle-aged adults15,16 and healthy older individuals.6,7 To the best of our knowledge, this is the first study evaluating the agreement between subjective and objective nutritional status in older hospitalized patients.
The findings of the present study demonstrated only a slight agreement between subjective and objective body weight status (Kappa coefficient, 0.19) in older hospitalized patients.
These findings extend the results of a previous cross-sectional study among healthy older adults (aged 60–96 years), which reported a low agreement between objective and self-perceived body weight status.7 In another cross-sectional study among 76 older individuals aged between 65 and 97 years, 40% perceived their body weight status incorrectly.6 In this respect, it seems that older adults substantially misperceive their body weight status, possibly due to a lack of awareness of changes in body weight with advancing age.17 In a cross-sectional study among women aged 14–79 years, Park et al16 indicated that age is the most important factor associated with deviations in weight status perception, with misperception increasing as age increased. Further, a low agreement between self-reported and measured body weight with increasing age was also observed in previous cross-sectional and longitudinal studies covering a wide age range.17,18
Besides body weight status perception, studies focusing on self-perception of nutritional status in older hospitalized patients are lacking. This is remarkable, since correct self-perception of nutritional status could be a key factor for successful treatment and management of malnutrition in older individuals.19 In the present study, we found no agreement between self-perceived and objective nutritional status among older hospitalized patients (Kappa coefficient, 0.06), which confirms a substantial misperception of nutritional status in older persons.
Findings of a small pilot study among ten hospitalized seniors (aged 65 and older) who were classified as being at risk of malnutrition by nutrition screening indicated that none of the patients believed themselves to be at risk of malnutrition; rather, they reported having a good nutritional intake.20 The same is true for our study, since the majority of those who were classified as malnourished (67%) or at risk of malnutrition (92%) according to MNA-SF did not consider themselves malnourished or at risk of malnutrition.
Previous studies demonstrated significant associations between weight loss and both risk of malnutrition and a higher mortality rate among community-living older adults.21,22 In the present study, over half of the population reported weight loss in the past three months, and 15% of them intended to lose even more body weight, which should be considered harmful for most older patients. Further, in this study, 5% of patients who were malnourished or at risk of malnutrition according to MNA-SF died during the 3-month follow-up period, nearly three-quarters of whom did not perceive their reduced nutritional status. Indeed, this misperception indicates a potential problem.
Nutritional deficiencies are frequently observed in older persons; however, poor knowledge about their own nutritional status could contribute to the development of nutritional inadequacy.
Malnutrition mostly develops very slowly over time and is associated with unspecific symptoms.23–25 A better understanding and awareness of malnutrition by affected patients may help slow or even prevent its development.25 In a survey in Australia, dietitians reported a lack of knowledge about malnutrition among community-living older adults as the strongest barrier to performing malnutrition screening.26 Poor understanding of malnutrition and the misperception of nutritional and body weight status in older persons, as seen in our study and previous studies,7,17,27 may seriously hamper the successful implementation of nutrition therapy. Our study highlights the need to raise knowledge and awareness of malnutrition and its associated health risks among older hospitalized patients.
This study has some limitations. It was undertaken in eight different acute care geriatric hospital departments with different numbers of included patients; therefore, the results may be biased by the two departments that recruited half of the patients. Another limitation is the exclusion of patients with a high risk of malnutrition, such as patients with dementia, which, however, cannot be avoided when enquiring about self-perception. Further, self-perception of nutritional status was obtained only at hospital admission. For further research, it would be of interest to measure how the perception of nutritional status changes after discussing the malnutrition diagnosis with the patients and after nutritional counseling.
Conclusion
In this study, no agreement between self-perceived and objective nutritional status was found among older hospitalized patients. Since malnourished older patients were more susceptible to death, especially if they were not aware of their malnutrition, our study highlights the need to raise knowledge about malnutrition and increase awareness of its associated health risks among older hospitalized patients.
[ "intro", null, null, null, null, null, null, null, null, null, null, null ]
[ "body weight", "geriatrics", "malnutrition", "older patients", "self-perception" ]
Introduction
Malnutrition is a frequent finding in older patients and commonly has a multifactorial etiology. Malnutrition is associated with a low quality of life, prolonged hospitalization and rehabilitation, more frequent complications and higher morbidity and mortality.1 Although the prevalence of malnutrition is between 30% and 50% in hospitalized older persons,2–4 it remains widely unrecognized and untreated.1 However, even if malnutrition is recognized, its treatment may be challenging, especially in an older frail population, in which successful treatment of malnutrition mostly takes weeks to months. Particularly for sustained treatment after hospital discharge, the patient's perception and comprehension of malnutrition may be critical. Older patients need to be aware of the fact that they are underweight, malnourished or at risk of malnutrition. Without this awareness and the comprehension of the consequences of malnutrition, a behavioral change is unlikely to happen, and management of malnutrition may not be successful. The agreement between self-perceived and measured body weight has already been investigated in previous studies on different populations, including older subjects.5–7 The findings of a cross-sectional study of 1295 healthy older adults aged 60–96 years demonstrated low agreement between objective and self-perceived body weight status.7 However, to the best of our knowledge, data about self-perception of malnutrition are missing for older hospitalized patients. In the present study, we aimed to examine the self-perception of body weight and nutritional status among older hospitalized patients, compared with their actual body weight and nutritional status based on medical assessment, and their relevance for outcome.
Subjects and Methods
This observational cross-sectional study was undertaken between November 2018 and April 2020 at eight acute care geriatric hospital departments in Germany.
The study population comprised 197 consecutively hospitalized older participants aged between 66 and 101 years. Exclusion criteria were age <65 years, missing or withdrawn informed consent, severe disturbance of fluid status (ie, severe cardiac decompensation, decompensated kidney failure and dehydration), moderate to severe dementia, impossibility to measure body weight and inability to cooperate. The study protocol had been approved by the ethics committee of Ruhr-University Bochum (no. 18-6451, approved on 22.10.2018). The participants were informed about the purpose of the study, which was conducted in accordance with the Declaration of Helsinki. Written consent was obtained from each study participant.
Self-Reported and Medical Assessment Variables
Reported variables were obtained using two structured questionnaires with predefined answers, where adequate. The first questionnaire, about the patient's self-perception, was distributed by a trained physician at each department and completed by the study participants within 24 hours after hospital admission; help was given when necessary. The second questionnaire, about the medical assessment, was filled out by the attending physician during the hospital stay and at hospital discharge. To assess the patient's self-perception, we used the following main questions:
1) What is your actual body height and weight?
2) Do you think you are normal weight, underweight or overweight?
3) Do you rate your nutritional status as good, undernourished or overnourished?
4) Are you satisfied with your nutritional status?
5) Did your weight change during the last 3 months (no, decrease, gain or do not know)?
6) If you have lost weight, how much did you lose within the last three months?
7) If you have lost weight, what do you think is the main reason (12 predefined answers and free text)?
8) Would you like to change your weight (keep it, lose weight, gain weight, do not know)?
Regarding the self-perception of patients' appetite, the Simplified Nutritional Appetite Questionnaire (SNAQ)8 was integrated at the end of the questionnaire. The medical assessment questionnaire asked about the main reason for hospitalization, the currently measured body weight and height, weight loss in the last three months, the need for nutritional therapy due to weight loss, and the type of malnutrition treatment, if any. At each center, a trained nurse measured body weight in light clothing with an accuracy of 0.1 kg using a calibrated chair scale, and height to the nearest 0.5 cm with a stadiometer. BMI was calculated, and patients were categorized according to the WHO-BMI classification (underweight: BMI <18.5 kg/m2; normal weight: BMI 18.5–24.9 kg/m2; overweight: BMI 25.0–29.9 kg/m2; obesity: BMI ≥30.0 kg/m2).9 In addition, a geriatric assessment was performed at hospital admission, except for the Barthel-Index, which was evaluated both on admission and at discharge. Risk of malnutrition was assessed with the Mini Nutritional Assessment Short Form (MNA-SF),10 a validated tool for screening the nutritional status of geriatric patients across settings. Participants are stratified as having normal nutritional status (12–14 points), being at risk of malnutrition (8–11 points) or being malnourished (0–7 points). Activities of daily living were determined using the Barthel-Index (BI).11 The German version of the BI ranges from 0 to 100 points, with 100 points indicating independence in all activities of daily living. Cognitive function was measured with either the Mini Mental Status Examination (MMSE)12 or the Montreal Cognitive Assessment (MoCA),13 according to the standard assessment of each center. Medical comorbidities were evaluated using the Charlson Comorbidity Index (CCI).14 The geriatric assessment was performed within the clinical routine of each center, and the results were validated by the attending physician.
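The WHO-BMI categorization described above maps a computed BMI onto four classes. A minimal sketch (function name and return labels are ours, purely illustrative; the cut-offs are those cited in the text):

```python
def who_bmi_category(weight_kg, height_m):
    """Classify BMI according to the WHO cut-offs used in the study."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obesity"

# Example with the cohort means: 70.1 kg at 1.66 m -> BMI about 25.4
print(who_bmi_category(70.1, 1.66))  # overweight
```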
A follow-up was performed after three months as a short telephone interview, conducted by the trained physician at each study center. Patients were asked about their general health status compared to hospital discharge (worse, same, better), their body weight compared to hospital discharge (stable, decreased, increased, unclear) and any unplanned readmission to hospital. If the patient was not able to give reliable answers, or it was not possible to contact them, a relative was asked.

Statistical Analysis: The statistical analysis was completed using SPSS statistical software (SPSS Statistics for Windows, Version 26.0, IBM Corp, Armonk, NY, USA). For an approximate sample size calculation, we expected 50% of older patients to systematically overestimate their body weight by an average of 2 kg (SD = 8 kg) and 50% of the patients to estimate with an average difference of 0 kg, ie, correctly (SD = 2 kg). A sample size of N = 200 in a 1:1 design with a power of 0.8 and a Type I error of 0.05 was calculated (http://PowerAndSampleSize.com). Means and standard deviations (SDs) were used for continuous data with normal distribution, whereas median values are expressed with interquartile ranges (IQR) for non-normally distributed data. Categorical variables are shown as n (%). In order to compare the self-reported and the objective nutritional status, we divided the patients into three groups according to the MNA-SF classification (malnourished, at risk of malnutrition and normal nutritional status). Group differences were analyzed using the paired-samples t-test and the Wilcoxon signed-rank test for normally and non-normally distributed values, respectively. Categorical variables were compared by the Chi-square test. Pearson's correlation was applied for normally distributed variables, whereas Spearman correlation was used for nonparametric data.
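For orientation, the stated design parameters (a 2 kg difference between groups with SDs of 8 kg and 2 kg, two-sided alpha = 0.05, power 0.8, 1:1 allocation) can be plugged into the standard normal-approximation formula for comparing two means. This is only a sketch, not the study's actual calculation, which used PowerAndSampleSize.com and may rest on different design assumptions, so it need not reproduce N = 200:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd1, sd2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation, two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for power = 0.80
    return ceil((z_alpha + z_beta) ** 2 * (sd1 ** 2 + sd2 ** 2) / delta ** 2)

# Difference of 2 kg; SD 8 kg (overestimators) vs SD 2 kg (correct estimators)
print(n_per_group(2.0, 8.0, 2.0))
```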
Multivariate analysis was used to examine the relationship between nutritional status and outcomes while adjusting for age, gender, comorbidity and cognitive function. In addition, the Kappa coefficient was used to assess the agreement between self-perceived and objective body weight status and nutritional status. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A p-value <0.05 was considered statistically significant.

Results: Characterization of Study Population: Baseline characteristics of the study participants are summarized in Table 1. Of 197 patients with a mean age of 82.2 ± 6.8 years, 121 (61%) were women. Major reasons for hospitalization were cardiovascular disease, falls, fractures, osteoarthritis, neurodegenerative diseases and general disease, including infections.

Table 1. Characteristics of Study Population on Admission (n=197)
Gender, n (%): Females 121 (61); Males 76 (39)
Age (y): 82.2 ± 6.8
BMI (kg/m2): 25.3 ± 6.3
Geriatric assessments, median (IQR): MNA-SF 9 (7–11); Barthel-Index on admission 55 (35–70); Barthel-Index at discharge 75 (65–85); MMSE 28 (25–29); MoCA 21 (17–24); Charlson Comorbidity Index 2 (1–4)
Reason for admission, n (%): Cardiovascular disease 39 (20); Falls and fractures 72 (37); Osteoarthritis 16 (8); Neurodegenerative diseases 6 (3); General diseases 64 (32)
Notes: For MMSE and MoCA, scores <26 were considered cognitively impaired. MMSE and MoCA were each performed in 93 patients. Values are given as number (%), mean ± SD or median (IQR, interquartile range).
Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points, malnourished 0–7 points); MMSE, Mini Mental Status Examination; MoCA, Montreal Cognitive Assessment.
Patient Perception of Body Weight, Nutritional Status and Appetite: As shown in Table 2, 53% and 25% of the patients regarded their body weight as normal and underweight, respectively. The mean current body weight reported by the patients was 69.7 ± 17.1 kg. In addition, 77% of the patients reported a good nutritional status, whereas 17% considered themselves undernourished. In terms of appetite perception, 43% of patients regarded their appetite as good or very good and 39% as average. Half of the patients (52%) reported weight loss in the last three months (mean weight loss: 6.1 ± 4.9 kg); 46% intended to maintain their current body weight, whereas 27% wanted to gain weight in the future. Among those who reported weight loss, 43% and 15% intended to gain and lose body weight in the future, respectively. The main reasons for weight loss reported by the patients were acute illness, loss of appetite, psychological stress and dysphagia.

Table 2. Patient Perception of Body Weight, Nutritional Status and Appetite (n=197), number (%)
Body weight status: Underweight 49 (25); Normal weight 104 (53); Overweight 42 (22)
Nutritional status: Good 150 (77); Undernourished 34 (17); Overnourished 12 (6)
Satisfaction with nutritional status: Satisfied 143 (73); Unsatisfied 51 (27)
Appetite according to SNAQ: Very poor 9 (5); Poor 26 (13); Average 76 (39); Good 73 (38); Very good 9 (5)
Body weight change in last 3 months: No 67 (35); Decreased 101 (52); Increased 14 (7); Unknown 11 (6)
Willing to change body weight in the next 3 months: No 91 (46); Willing to decrease 48 (24); Willing to increase 52 (27); Unknown 5 (3)
Abbreviation: SNAQ, Simplified Nutritional Appetite Questionnaire.
Medical Assessment of Body Weight and Nutritional Status: The results of the medical assessment are summarized in Table 3.
Mean measured body weight was 70.1 ± 18.8 kg, which was similar to the weight reported by the patients (P=0.436). According to the WHO-BMI classification, almost half of the patients were within the healthy weight range and only 9% were classified as underweight.

Table 3. Results of Medical Assessment of Nutritional Status (n=197)
Current body weight (kg): 70.1 ± 18.8
Height (m): 1.66 ± 0.1
BMI (kg/m2): 25.3 ± 6.4
Objective body weight status, n (%): Underweight (BMI <18.5 kg/m2) 16 (9); Normal weight (BMI 18.5–24.9 kg/m2) 91 (47); Overweight (BMI 25.0–29.9 kg/m2) 45 (23); Obesity (BMI ≥30.0 kg/m2) 41 (21)
Weight loss in last 3 months, n (%): No 83 (46); Yes 97 (54)
Nutritional status according to MNA-SF, n (%): Normal nutritional status 31 (16); At risk of malnutrition 95 (49); Malnourished 69 (35)
SNAQ score: median (IQR) 14 (11–15); <14 86 (48); ≥14 92 (52)
Nutritional therapy of weight loss, n (%): No 42 (40); Yes 62 (60)
Notes: Body weight was measured using a calibrated chair scale. Values are given as number (%), mean ± SD or median (IQR, interquartile range).
Abbreviations: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points, malnourished 0–7 points); SNAQ, Simplified Nutritional Appetite Questionnaire (maximum score 20; a score <14 indicates risk of at least 5% weight loss within six months).
According to the MNA-SF, 16% of the patients had normal nutritional status and 49% were at risk of malnutrition, whereas 35% were malnourished. In addition, 18% and 32% of the patients had a severe and moderate decrease in food intake over the past three months, respectively. According to the SNAQ, over half of the patients (52%) were at nutritional risk. According to the information given by the attending physician, 54% of the patients had weight loss in the last three months; the main reasons for weight loss as reported by the attending physician were general disease, gastrointestinal disorders, dementia and pain. Furthermore, 60% of the patients with weight loss received nutritional therapy (mainly high-protein and/or high-energy oral nutritional supplements) during the hospital stay. The group treated with nutritional therapy was more malnourished (median MNA-SF: 6, IQR: 4–8 vs 9, IQR: 7–11; P<0.001) and had a lower mean BMI (22.3 ± 4.6 kg/m2 vs 28.4 ± 7.6 kg/m2; P<0.001) compared with those who did not receive nutritional therapy.
Concordance Between Patient Perception and Medical Records: With regard to body weight, 55% of malnourished patients and 84% of patients at risk of malnutrition according to MNA-SF perceived their body weight as normal or overweight (Table 4). In addition, 64% of malnourished patients and 87% of patients at risk of malnutrition according to MNA-SF classified their nutritional status as good. Furthermore, 58% of malnourished patients and 82% of patients at risk of malnutrition based on MNA-SF were satisfied with their nutritional status. When compared with the objective nutritional status, only 33% of malnourished patients based on MNA-SF correctly perceived their nutritional status as undernourished (Table 4). The Kappa coefficient (0.06) showed no agreement between self-perceived and objective nutritional status according to MNA-SF.
In addition, we found only slight agreement between subjective body weight status and objective body weight status according to the WHO-BMI classification (Kappa coefficient: 0.19).

Table 4. Concordance Between Patient Perception and Medical Assessment of Nutritional Status. Values are n (%) within each MNA-SF group, in the order: Malnourished (n=69, 35%), At Risk (n=95, 49%), Normal (n=31, 16%).
Body weight status (P<0.001): Normal weight: 31 (45), 54 (58), 19 (61); total 104. Underweight: 31 (45), 16 (16), 2 (6); total 49. Overweight: 7 (10), 25 (26), 10 (33); total 42.
Nutritional status (P<0.001): Good: 44 (64), 82 (87), 24 (77); total 150. Undernourished: 23 (33), 8 (8), 3 (10); total 34. Overnourished: 2 (3), 5 (5), 4 (13); total 11.
Satisfaction with nutritional status (P=0.002): Satisfied: 39 (58), 78 (82), 26 (87); total 143. Unsatisfied: 30 (42), 17 (18), 4 (13); total 51.
Abbreviation: MNA-SF, Mini Nutritional Assessment Short Form (normal nutritional status 12–14 points, at risk of malnutrition 8–11 points, malnourished 0–7 points).
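The kappa values reported here (0.06 for nutritional status, 0.19 for body weight status) are Cohen's kappa applied to cross-tabulations such as Table 4. A minimal sketch of the computation (the contingency table below uses illustrative numbers, not the study data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square contingency table
    (rows: self-reported category, columns: assessed category)."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement under chance, from the marginal totals
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    return (observed - expected) / (1 - expected)

# Illustrative 2x2 table (not study data): moderate agreement
print(cohens_kappa([[30, 10], [10, 30]]))  # 0.5
```

A kappa of 1 corresponds to perfect agreement (all counts on the diagonal) and 0 to agreement no better than chance, matching the interpretation given in the Statistical Analysis section.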
Follow-Up: A total of 184 patients completed the 3-month follow-up, and an additional 9 patients died during this period (9/193; 4.6%); of these, 7 (78%) were malnourished and 2 (22%) were at risk of malnutrition according to MNA-SF. None of the patients with normal nutritional status died during the follow-up period. Of those who were malnourished or at risk of malnutrition based on MNA-SF and died during follow-up, 66.7% (6/9) had perceived their nutritional status incorrectly (P=0.01), ie, they did not believe themselves to be malnourished.
Further, multivariate analysis showed that, compared with patients with normal nutritional status during hospitalization, patients who were malnourished based on MNA-SF had higher rates of unplanned hospital readmission (36%, n=23 vs 18%, n=5; P=0.097), further weight loss (44%, n=28 vs 14%, n=4; P=0.073), self-reported health deterioration (29%, n=18 vs 17%, n=5; P=0.218) and death (11%, n=7 vs 0%, n=0; P=0.021) within three months after discharge. In addition, no significant associations between self-perceived malnutrition and adverse outcomes were found, except for further weight loss within three months after discharge (P=0.04).
Further, multivariate analysis showed that, compared to patients with normal nutritional status during hospitalization, patients who were malnourished based on MNA-SF had higher rates of unplanned hospital readmission (36%, n=23 vs 18%, n=5; P=0.097), further weight loss (44%, n=28 vs 14%, n=4; P=0.073), self-reported health deterioration (29%, n=18 vs 17%, n=5; P=0.218) and death (11%, n=7 vs 0%, n=0; P=0.021) within three months after discharge. In addition, no significant associations were found between self-perceived malnutrition and adverse outcomes, except for further weight loss within three months after discharge (P=0.04). Discussion: In the present study, according to MNA-SF, 35% of the patients were malnourished and 49% were at risk of malnutrition. However, we found major discrepancies between objectively assessed nutritional status (MNA-SF) and body weight status (WHO-BMI classification) on the one hand and patients' self-perception of both on the other. Previous studies have investigated the agreement between self-perceived and measured body weight almost exclusively in young and middle-aged adults15,16 and healthy older individuals.6,7 To the best of our knowledge, this is the first study evaluating the agreement between subjective and objective nutritional status in older hospitalized patients. The findings of the present study demonstrated only slight agreement between subjective and objective body weight status (Kappa coefficient, 0.19) in older hospitalized patients.
These findings extend the results of a previous cross-sectional study among healthy older adults (aged 60–96 years), which reported low agreement between objective and self-perceived body weight status.7 In another cross-sectional study among 76 older individuals aged 65 to 97 years, 40% perceived their body weight status incorrectly.6 It thus seems that older adults substantially misperceive their body weight status, possibly due to a lack of awareness of changes in body weight with advancing age.17 In a cross-sectional study among women aged 14–79 years, Park et al16 found that age was the most important factor associated with deviations in weight status perception, with misperception increasing as age increased. Further, low agreement between self-reported and measured body weight with increasing age was also observed in previous cross-sectional and longitudinal studies covering a wide age range.17,18 Beyond body weight status perception, studies focusing on self-perception of nutritional status in older hospitalized patients are lacking. This is remarkable, since correct self-perception of nutritional status could be a key factor for successful treatment and management of malnutrition in older individuals.19 In the present study, we found no agreement between self-perceived and objective nutritional status among older hospitalized patients (Kappa coefficient, 0.06), which confirms a substantial misperception of nutritional status in older persons.
Findings of a small pilot study among ten hospitalized seniors (aged 65 and older) who were classified as at risk of malnutrition by nutrition screening indicated that none of the patients believed themselves to be at risk of malnutrition; instead, they reported having a good nutritional intake.20 The same holds for our study, since the majority of those classified as malnourished (67%) or at risk of malnutrition (92%) according to MNA-SF did not see themselves as malnourished or at risk of malnutrition. Previous studies demonstrated significant associations between weight loss, risk of malnutrition and higher mortality among community-living older adults.21,22 In the present study, over half of the population reported weight loss in the past three months, and 15% of them intended to lose even more body weight, which should be considered harmful for most older patients. Further, in this study, 5% of patients who were malnourished or at risk of malnutrition according to MNA-SF died during the 3-month follow-up period, and nearly two-thirds of them had not perceived their reduced nutritional status. This misperception indicates a potential problem: nutritional deficiencies are frequently observed in older persons, and poor knowledge about one's own nutritional status could contribute to the development of nutritional inadequacy.
Malnutrition mostly develops slowly over time and is associated with unspecific symptoms.23–25 A better understanding and awareness of malnutrition by affected patients may help slow or even prevent its development.25 In a survey in Australia, dietitians reported a lack of knowledge about malnutrition among community-living older adults as the strongest barrier to performing malnutrition screening.26 Poor understanding of malnutrition and the misperception of nutritional and body weight status in older persons, as seen in our study and previous studies,7,17,27 may seriously hamper the successful implementation of nutrition therapy. Our study highlights the need to raise knowledge and awareness about malnutrition and the associated health risks among older hospitalized patients. This study has some limitations. It was undertaken in eight different acute care geriatric hospital departments with different numbers of included patients; the results may therefore be biased by the two departments that recruited half of the patients. Another limitation is the exclusion of patients at high risk of malnutrition, such as patients with dementia, which, however, cannot be avoided when enquiring about self-perception. Further, self-perception of nutritional status was obtained only at hospital admission. For further research, it would be of interest to measure how the perception of nutritional status changes after discussing the malnutrition diagnosis with the patients and after nutritional counseling. Conclusion: In this study, no agreement between self-perceived nutritional status and objective nutritional status among older hospitalized patients was found. Since malnourished older patients were more susceptible to death, especially if they were not aware of their malnutrition, our study highlights the need to raise knowledge about malnutrition and increase awareness of its associated health risks among older hospitalized patients.
Background: Studies focusing on self-perception of nutritional status in older hospitalized patients are lacking. We aimed to examine the self-perception of body weight and nutritional status among older hospitalized patients compared to their actual body weight and nutritional status based on medical assessment. Methods: This observational cross-sectional study investigated 197 older participants (mean age 82.2±6.8 years, 61% women) consecutively admitted to a geriatric acute care ward. Body weight status and nutritional status were assessed using the WHO-BMI classification and the Mini Nutritional Assessment-Short Form (MNA-SF), respectively. Self-perceived body weight status and nutritional status were assessed with a standardized questionnaire. A follow-up was performed with a short telephone interview after three months. Results: According to MNA-SF, 49% of participants were at risk of malnutrition and 35% were malnourished. There was no agreement between self-perceived nutritional status and objective nutritional status according to MNA-SF (Kappa: 0.06). A slight agreement was found between subjective body weight status and objective body weight status according to the WHO-BMI classification (Kappa: 0.19). A total of 184 patients completed the 3-month follow-up, and an additional 9 patients died during this time, of whom 7 were malnourished and 2 were at risk of malnutrition according to MNA-SF. Of those who were malnourished or at risk of malnutrition based on MNA-SF and died during follow-up, 66.7% did not realize that they were malnourished. Compared to patients with normal nutritional status during hospitalization, malnourished patients based on MNA-SF had higher rates of unplanned hospital readmission and further weight loss, more often reported health deterioration, and more often died within three months after discharge.
Conclusions: No agreement between self-perceived nutritional status and objective nutritional status among older hospitalized patients was found. Our study highlights the need to raise knowledge about the issue of malnutrition and increase awareness of health risks associated with malnutrition among older hospitalized patients.
Introduction: Malnutrition is a frequent finding in older patients and commonly has a multifactorial etiology. It is associated with low quality of life, prolonged hospitalization and rehabilitation, more frequent complications, and higher morbidity and mortality.1 Although the prevalence of malnutrition is between 30% and 50% in hospitalized older persons,2–4 it remains widely unrecognized and untreated.1 Even when malnutrition is recognized, its treatment may be challenging, especially in an older, frail population, in which successful treatment mostly takes weeks to months. Particularly for sustained treatment after hospital discharge, the patient's perception and comprehension of malnutrition may be critical: older patients need to be aware of the fact that they are underweight, malnourished or at risk of malnutrition. Without this awareness and an understanding of the consequences of malnutrition, a behavioral change is unlikely to happen, and management of malnutrition may not be successful. The agreement between self-perceived and measured body weight has already been investigated in previous studies on different populations, including older subjects.5–7 The findings of a cross-sectional study of 1295 healthy older adults aged 60–96 years demonstrated low agreement between objective and self-perceived body weight status.7 However, to the best of our knowledge, data about self-perception of malnutrition are missing for older hospitalized patients. In the present study, we aimed to examine the self-perception of body weight and nutritional status among older hospitalized patients compared to their actual body weight and nutritional status based on medical assessment, and their relevance for outcome.
Polymorphism in the
35222588
The role of the single nucleotide polymorphism rs10937405 (C>T) of the TP63 gene in cancers, including leukemia, has previously been studied in different world populations; however, the role of this variant in leukemia in the North Indian population of Jammu and Kashmir is still unknown.
BACKGROUND
A total of 588 subjects (188 cases and 400 controls) were recruited for the study. The rs10937405 variant was genotyped using a real-time PCR-based TaqMan assay.
METHODS
A statistically significant association was observed between rs10937405 and leukemia [OR 1.94 (95% CI 1.51–2.48), p = 1.2 × 10−6].
RESULTS
The current study concludes that the rs10937405 variant is a risk factor for the development of leukemia in the population of Jammu and Kashmir, North India. However, it would be interesting to explore the contribution of this variant in other cancers as well. Our findings will help in the development of diagnostic markers for leukemia in the studied population and potentially for other North Indian populations.
CONCLUSION
[ "Asian People", "Case-Control Studies", "Genetic Predisposition to Disease", "Genotype", "Humans", "India", "Leukemia", "Polymorphism, Single Nucleotide", "Transcription Factors", "Tumor Suppressor Proteins" ]
8843251
Introduction
Leukemia ranks among the most common cancers in the world, with an estimated 300,000 new cases diagnosed every year globally (2.8% of all new cancer cases and 3.8% of cancer deaths)1,2. In India, leukemia ranks ninth, with a male-to-female ratio of 1.56:1.09,3 and more than 10,000 new cases of childhood leukemia are reported annually4. Among North Indian populations, the population of Jammu and Kashmir is at higher risk, with a high mortality rate associated with different cancers5. The incidence of leukemia in Jammu and Kashmir increased rapidly, by about 5.07%, over the previous decade6. The population of the northern part of Jammu and Kashmir practices endogamy, thus preserving gene pools and increasing homozygosity; this has been documented as an inherited genetic factor that can contribute to the etiology of leukemia7. Leukemia is multifactorial in origin and can be caused by both genetic and non-genetic factors. Genome-wide association studies (GWAS) have advanced our understanding of susceptibility to leukemia; however, much of the heritable risk remains unidentified. Previous GWAS have suggested a polygenic susceptibility to leukemia, identifying SNPs at different loci influencing leukemia risk, such as 7p12.2 (IKZF1), 9p21.3 (CDKN2A), 10p12.2 (PIP4K2A), 10q26.13 (LHPP), 12q23.1 (ELK3), 10p14 (GATA3), 10q21.2 (ARID5B), and 14q11.2 (CEBPE)8–12. Recently, a GWAS found a strong association of the variant rs10937405 of TP63 with lung cancer in a Korean population13. The TP63 gene, a homolog of the tumor suppressor gene TP53 located in the chromosome 3q27-28 region, encodes a member of the p53 family of transcription factors. The rs10937405 variant could affect the expression of other genes and may increase the risk of leukemia. In the current study, we aimed to explore the association of the variant rs10937405 of TP63 with leukemia in the North Indian population of Jammu and Kashmir.
Results
We recruited a total of 588 subjects, of which 188 were leukemia patients (cases) and 400 were healthy controls. Among the cases, 56% were males and 44% females; among the controls, 68% were males and 32% females, suggesting that the frequency of leukemia is higher among males in the J&K population. The mean age of cases was 40.51 (±14.67) years and that of controls 50.76 (±13.30) years. The average BMI was 21.21 (±6.08) in cases and 24.21 (±5.06) in controls, as shown in Table 1 (clinical characteristics of cases and controls; corrected for age, gender, BMI, alcohol consumption, and smoking). The allele frequency distribution of the variant rs10937405 of TP63 between cases and controls is summarized in Table 2 (corrected for age, gender, BMI, alcohol consumption and smoking). Allele T was more frequent in cases (0.62) than in controls (0.54), suggesting that T is the risk allele. Allele T of variant rs10937405 of TP63 was significantly associated with leukemia (p = 1.2 × 10−6), with HWE p = 0.974. To observe the maximum effect of allele T, we evaluated the association using the dominant model: the OR was 1.6 (0.94–2.4) at 95% CI, corrected for age, gender and BMI. Furthermore, we evaluated the variant under the other genetic models defined by the risk allele, and a positive association was observed in all three models, as shown in Table 3 (association of variant rs10937405 of TP63 with leukemia in the North Indian population using genetic models; corrected for age, gender and BMI). To evaluate the association of rs10937405 with different subtypes of leukemia, statistical analyses were performed on the allelic distribution within these subtypes, as shown in Table 4.
The variant rs10937405 of TP63 was significantly associated with all four subtypes of leukemia (p = 0.002, 0.0001, 0.032, and 0.034 for ALL, AML, CML, and CLL, respectively; Table 4, genetic association of leukemia subtypes with the rs10937405 variant). The ORs calculated under the different genetic models (Table 3) showed a significant association of rs10937405 with leukemia. The allelic OR was 1.94 (1.51–2.48 at 95% CI, p = 1.2 × 10−6). Under the additive model, the OR was 1.48 (1.01–2.16) at 95% CI, p = 0.02; under the recessive model, the OR was 1.7 (0.8–3.54) at 95% CI, p = 0.001. All values were corrected for age, gender, BMI, smoking and alcohol consumption.
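The allelic OR and its confidence interval follow from a 2×2 table of allele counts via the standard Wald approximation on the log-odds scale. A minimal sketch, using counts back-calculated from the reported allele frequencies (0.62 in 188 cases, 0.54 in 400 controls) purely for illustration; it yields the unadjusted OR, not the covariate-adjusted estimates from the logistic regression:

```python
import math

def allelic_odds_ratio(a, b, c, d):
    """Unadjusted allelic odds ratio with a 95% Wald confidence interval.

    a, b: risk (T) and non-risk (C) allele counts in cases;
    c, d: the corresponding counts in controls.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for a 2x2 table.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Equal allele frequencies in cases and controls give OR = 1.
assert allelic_odds_ratio(100, 100, 100, 100)[0] == 1.0

# Illustrative counts: 188 cases -> 376 alleles (~233 T), 400 controls -> 800 alleles (~432 T).
or_, lo, hi = allelic_odds_ratio(233, 143, 432, 368)
assert or_ > 1.0 and lo < or_ < hi
```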
Conclusion
Our findings provide evidence that the variant rs10937405 of TP63 is significantly associated with leukemia in the population of Jammu and Kashmir in Northern India. Further studies involving more diverse ethnic groups, particularly from North India, will not only validate these findings but will also assist in developing this variant as a biomarker for leukemia screening programs.
[ "Ethics statement", "Sampling", "Genotyping", "Statistical analysis" ]
[ "The Institutional Ethics Review Board (IERB) of Shri Mata Vaishno Devi University (SMVDU) approved the study through notification number SMVDU/IERB/16/41. All details of cases and controls were recorded in a predesigned proforma, and written informed consent was obtained from all participants. All experimental procedures were conducted according to the guidelines and regulations set by the IERB, SMVDU.", "A total of 588 subjects were recruited for the study, of which 188 were leukemic cases collected from different hospitals of Jammu and Kashmir after ethical approval and informed consent, and 400 were age- and sex-matched healthy controls. All cases were histopathologically confirmed by a pathologist at GMC, Jammu. Genomic DNA was isolated from blood samples using the Qiagen DNA Isolation kit (Cat. No. 51206). Agarose gel electrophoresis was used to analyse the quality of the genomic DNA, and quantification was performed using a UV spectrophotometer.", "Genotyping of variant rs10937405 of TP63 was performed using the TaqMan allele discrimination assay on an MX3005p instrument, with probes labeled with VIC (Victoria Green Fluorescent Protein) and FAM (fluorescein amidite) dyes (Thermo Fisher Scientific) and UNG Master Mix (Applied Biosystems, USA). The total PCR reaction volume was 10 µl, comprising 2.5 µl of TaqMan UNG Master Mix, 0.25 µl of the probe, 3 µl of DNA (5 ng/µl) and 4.25 µl of nuclease-free water. The thermal conditions were 10 minutes at 95°C, followed by 40 cycles of 95°C for 15 seconds and 60°C for 1 minute. All samples were run in 96-well plates with three no-template controls (NTCs). The post-PCR detection system was used to measure allele-specific fluorescence. A total of 93 random samples each from cases and controls were re-genotyped for cross-validation of the genotyping calls; the concordance rate was 100%.", "Statistical analyses were performed using SPSS software (v.20; Chicago, IL). Chi-square (χ2) tests were performed and genotype frequencies were tested. All samples conformed to Hardy-Weinberg equilibrium. Binary logistic regression was used to estimate ORs at 95% confidence intervals (CI), and the respective level of significance was estimated as a p-value." ]
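The Hardy-Weinberg check referred to above compares observed genotype counts with the p², 2pq, q² expectations derived from the allele frequencies. A minimal sketch with hypothetical genotype counts (per-genotype numbers for rs10937405 are not given in this excerpt):

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square statistic (1 degree of freedom) for departure from
    Hardy-Weinberg equilibrium, computed from genotype counts."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts exactly matching the p^2 : 2pq : q^2 proportions give chi-square = 0.
assert abs(hwe_chi_square(25, 50, 25)) < 1e-12
```

The statistic is referred to a chi-square distribution with one degree of freedom; a small statistic (large p-value, such as the 0.974 reported in the Results) indicates no evidence of departure from equilibrium.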
[ null, null, null, null ]
[ "Introduction", "Materials and methods", "Ethics statement", "Sampling", "Genotyping", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "Leukemia ranks among the top most cancers in the world with an estimated 3,00,000 new cases (2.8% of all new cancer cases and 3.8% deaths) diagnosed every year globally1,2. In India, leukemia is ranked ninth with a ratio of 1.56:1.09 in males and females3. In India, a total of more than 10,000 new cases of childhood leukemia have been reported annually4. Among North Indian populations, the population of Jammu and Kashmir is found to be at higher risk, with high mortality rate associated with different cancers5. The incidence of leukemia in Jammu and Kashmir has increased rapidly about 5.07% in the previous decade6. The population of northern part of Jammu and Kashmir state practice endogamy, thus preserving the gene pools that result in the increase of homozygosity. This factor has been documented as an inherited genetic factor that can contribute to the etiology of leukemia7. Leukemia is multifactorial in origin which can be caused by both genetic as well as non-genetic factors. Genome-wide association studies (GWAS) have advanced our understanding of susceptibility to leukemia; however, much of the heritable risk factors remain unidentified. Previous GWAS have suggested a polygenic susceptibility to leukemia, identifying SNPs in different loci influencing leukemia risk such as, 7p12.2 (IKZF1), 9p21.3 (CDKN2A), 10p12.2 (PIP4K2A), 10q26.13 (LHPP), 12q23.1 (ELK3), 10p14 (GATA3), 10q21.2 (ARID5B), and 14q11.2 (CEBPE)8–12. Recently, GWAS has found a strong association of variant rs10937405 of TP63 with lung cancer in Korean population13. The TP63 gene is a homolog of the tumor suppressor gene TP53, located on chromosome 3q27-28 region, which is a member of transcription factor. This rs10937405 variant could probably affect the expression of other genes and can increase the risk of leukemia. 
In the current study, we aimed to explore the association of variant rs10937405 of TP63 with leukemia in the North Indian population of Jammu and Kashmir.", " Ethics statement The Institutional Ethics Review Board (IERB) of Shri Mata Vaishno Devi University (SMVDU) approved the study through notification number SMVDU/IERB/16/41. All the details of cases and controls were recorded in a predesigned proforma and a written informed consent was obtained from all the participants. All experimental procedures were conducted according to the guidelines and regulations set by the IERB, SMVDU.\nThe Institutional Ethics Review Board (IERB) of Shri Mata Vaishno Devi University (SMVDU) approved the study through notification number SMVDU/IERB/16/41. All the details of cases and controls were recorded in a predesigned proforma and a written informed consent was obtained from all the participants. All experimental procedures were conducted according to the guidelines and regulations set by the IERB, SMVDU.\n Sampling A total of 588 subjects were recruited for the study, of which 188 were the leukemic cases collected from the different hospitals of Jammu and Kashmir after ethical approval and informed consent and 400 were age and sex-matched healthy controls. All cases were histopathologically confirmed by pathologist, GMC, Jammu. The genomic DNA was isolated from the blood samples using Qiagen DNA Isolation kit (Cat. No. 51206). Agarose gel electrophoresis was used to analyse the quality of the genomic DNA and quantification was performed using UV spectrophotometer.\nA total of 588 subjects were recruited for the study, of which 188 were the leukemic cases collected from the different hospitals of Jammu and Kashmir after ethical approval and informed consent and 400 were age and sex-matched healthy controls. All cases were histopathologically confirmed by pathologist, GMC, Jammu. The genomic DNA was isolated from the blood samples using Qiagen DNA Isolation kit (Cat. No. 51206). 
Agarose gel electrophoresis was used to analyse the quality of the genomic DNA and quantification was performed using UV spectrophotometer.\n Genotyping Genotyping of variants rs10937405 of TP63 was performed using the TaqMan allele discrimination assay MX3005p labeled with VIC (Victoria Green Fluorescent Protein) and FAM (Fluorescein amidites) dyes (Thermo Fisher Scientific) and UNG Master Mix (Applied Bio-systems, USA). The Volume of the total PCR reaction was 10µl, comprising of 2.5 µl of TaqMan UNG Master Mix, 0.25 µl of the probe, 3µl DNA (5ng/µl) and 4.25 µl nuclease-free water added together to make the final volume. The thermal conditions adopted were 10 minutes at 95 °C, 40 cycles of 95°C for 15 seconds and 60°C for 1 min. All the samples were run in a 96-well plate with three no template controls (NTCs). The post PCR detection system was used to measure allele-specific fluorescence. A total of 93 random samples each from cases and controls were picked and re-genotyped for cross-validation of the genotyping calls and the concordance rate was 100%.\nGenotyping of variants rs10937405 of TP63 was performed using the TaqMan allele discrimination assay MX3005p labeled with VIC (Victoria Green Fluorescent Protein) and FAM (Fluorescein amidites) dyes (Thermo Fisher Scientific) and UNG Master Mix (Applied Bio-systems, USA). The Volume of the total PCR reaction was 10µl, comprising of 2.5 µl of TaqMan UNG Master Mix, 0.25 µl of the probe, 3µl DNA (5ng/µl) and 4.25 µl nuclease-free water added together to make the final volume. The thermal conditions adopted were 10 minutes at 95 °C, 40 cycles of 95°C for 15 seconds and 60°C for 1 min. All the samples were run in a 96-well plate with three no template controls (NTCs). The post PCR detection system was used to measure allele-specific fluorescence. 
A total of 93 random samples each from cases and controls were picked and re-genotyped for cross-validation of the genotyping calls and the concordance rate was 100%.\n Statistical analysis Statistical analyses of the data were performed by using SPSS software (v.20; Chicago, IL). Chi-square (χ2) was performed and genotyping frequencies were also tested. All samples were following the Hardy-Weinberg equilibrium. Binary Logistic Regression was used to estimate OR at 95% confidence interval (CI) and the respective level of significance was estimated as p-value.\nStatistical analyses of the data were performed by using SPSS software (v.20; Chicago, IL). Chi-square (χ2) was performed and genotyping frequencies were also tested. All samples were following the Hardy-Weinberg equilibrium. Binary Logistic Regression was used to estimate OR at 95% confidence interval (CI) and the respective level of significance was estimated as p-value.", "The Institutional Ethics Review Board (IERB) of Shri Mata Vaishno Devi University (SMVDU) approved the study through notification number SMVDU/IERB/16/41. All the details of cases and controls were recorded in a predesigned proforma and a written informed consent was obtained from all the participants. All experimental procedures were conducted according to the guidelines and regulations set by the IERB, SMVDU.", "A total of 588 subjects were recruited for the study, of which 188 were the leukemic cases collected from the different hospitals of Jammu and Kashmir after ethical approval and informed consent and 400 were age and sex-matched healthy controls. All cases were histopathologically confirmed by pathologist, GMC, Jammu. The genomic DNA was isolated from the blood samples using Qiagen DNA Isolation kit (Cat. No. 51206). 
Agarose gel electrophoresis was used to analyse the quality of the genomic DNA, and quantification was performed using a UV spectrophotometer.", "Genotyping of the rs10937405 variant of TP63 was performed using the TaqMan allele discrimination assay on an MX3005p instrument, with probes labeled with VIC and FAM dyes (Thermo Fisher Scientific) and UNG Master Mix (Applied Biosystems, USA). The total PCR reaction volume was 10 µl, comprising 2.5 µl of TaqMan UNG Master Mix, 0.25 µl of the probe, 3 µl DNA (5 ng/µl) and 4.25 µl nuclease-free water. The thermal conditions were 10 minutes at 95 °C, followed by 40 cycles of 95 °C for 15 seconds and 60 °C for 1 minute. All samples were run in 96-well plates with three no-template controls (NTCs). The post-PCR detection system was used to measure allele-specific fluorescence. A total of 93 random samples each from cases and controls were re-genotyped for cross-validation of the genotyping calls; the concordance rate was 100%.", "Statistical analyses were performed using SPSS software (v.20; Chicago, IL). Chi-square (χ2) tests were performed on genotype frequencies. Genotype frequencies in all samples conformed to Hardy-Weinberg equilibrium. Binary logistic regression was used to estimate ORs with 95% confidence intervals (CI), with significance expressed as p-values.", "We recruited a total of 588 subjects, of which 188 were leukemia patients (cases) and 400 were healthy individuals (controls). Among the cases, 56% were males and 44% females; among the controls, 68% were males and 32% females, suggesting that the frequency of leukemia is higher among males in the J&K population. The mean age was 40.51 (±14.67) years in cases and 50.76 (±13.30) years in controls. 
The average BMI was 21.21 (±6.08) in cases and 24.21 (±5.06) in controls, as shown in Table 1.\nClinical characteristics for cases and controls\nCorrected for age, gender, BMI, alcohol consumption, and smoking\nThe allele frequency distribution of the rs10937405 variant of TP63 between cases and controls is summarized in Table 2. In the current study, the T allele is more frequent in cases (0.62) than in controls (0.54), suggesting that allele T confers risk. We observed that allele T of the rs10937405 variant of TP63 is significantly associated with leukemia (p-value = 1.2 × 10−6), with Hardy-Weinberg equilibrium p = 0.974.\nAllelic frequency distribution between cases and controls\nCorrected for age, gender, BMI, alcohol consumption and smoking.\nTo observe the maximum effect of allele T, we evaluated the association using the dominant model. The OR observed was 1.6 (0.94–2.4) at 95% CI in leukemia, corrected for age, gender and BMI. Furthermore, we evaluated the rs10937405 variant of TP63 under the other genetic models defined by the risk allele, and a positive association was observed in all three models, as shown in Table 3.\nAssociation of the rs10937405 variant of TP63 with leukemia in the North Indian population under different genetic models\n*Corrected for Age, Gender and BMI\nTo evaluate the association of rs10937405 with different subtypes of leukemia, statistical analyses were performed on the allelic distribution within these subtypes, as shown in Table 4. The rs10937405 variant of TP63 was significantly associated with all four subtypes of leukemia (p-value = 0.002, 0.0001, 0.032, and 0.034 for ALL, AML, CML, and CLL, respectively).\nGenetic association of different subtypes of leukemia with the rs10937405 variant\nThe ORs calculated under the different models (Table 3) showed a significant association of rs10937405 with leukemia. The allelic OR was 1.94 (1.51–2.48 at 95% CI, p = 1.2 × 10−6). 
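The allelic odds ratio and 95% CI reported here can be reproduced from a 2×2 allele-count table. The following is a minimal sketch using hypothetical counts (not the study's actual data) and the Woolf log-OR method, rather than the SPSS logistic-regression output used by the authors:

```python
import math

def allelic_or(case_risk, case_other, ctrl_risk, ctrl_other, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 allele-count table."""
    or_ = (case_risk * ctrl_other) / (case_other * ctrl_risk)
    se = math.sqrt(1 / case_risk + 1 / case_other + 1 / ctrl_risk + 1 / ctrl_other)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical allele counts: 120 T / 80 C in cases, 90 T / 110 C in controls
or_, lo, hi = allelic_or(120, 80, 90, 110)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR above 1 with a CI excluding 1, as here, indicates that carrying the risk allele is over-represented in cases.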
Under the additive model the OR was 1.48 (1.01–2.16) at 95% CI, p = 0.02, and under the recessive model the OR was 1.7 (0.8–3.54) at 95% CI, p = 0.001. All values were corrected for age, gender, BMI, smoking and alcohol consumption.", "In the present study, the association of the rs10937405 variant of TP63 with leukemia was explored in the North Indian population. Previously, the association of this variant was reported in population groups of Japan14 and Korea13, where the 'C' allele was found to be the risk allele. Most interestingly, in the present study it was found that 'T', the wild-type allele, was the risk factor in the studied population. In various GWAS across different ethnic populations, the rs10937405 variant was found to be associated with lung cancer risk in population groups of Japan, Korea and Han Chinese6, 15–17. The variant was associated with lung carcinoma among Asian females in the absence of tobacco smoking18, but showed a significant smoking-related association with lung cancer in the UK population19. We did not come across any previous work reporting an association between leukemia risk and coal exposure; furthermore, we evaluated the role of cigarette smoking in leukemia, and our study is the first to find an association of smoking with leukemia. In the future, this genetic variant can also be explored in different ethnic populations for a variety of reasons, including differences in allele frequency and in the genetic and environmental backgrounds that interact with the variant.\nThe TP63 gene plays an important role in cell proliferation, apoptosis, development, differentiation, senescence, ageing, and the response to cellular stress. TP63 contains two transcriptional start sites, leading to p63 isoforms either containing (TAp63) or lacking (ΔNp63) the trans-activation domain20. 
TAp63 isoforms possess strong transactivation activity on p53-responsive promoters, whereas ΔNp63 isoforms are able to outcompete p53 for binding to p53-responsive promoters and repress gene expression. The TP63 protein contains an N-terminal TA domain that is 22% homologous to that of p53, while ΔNp63 isoforms are transcribed from an alternative promoter within the third intron21. The 14 unique N-terminal amino acid residues in ΔNp63 isoforms have been shown to possess transactivation activity. TP63 expression has been reported in blast crisis of chronic myelogenous leukemia22, follicular lymphoma (FL), diffuse large B-cell lymphoma (DLBCL)23 and isolated cases of chronic lymphocytic leukemia and marginal cell lymphoma. In some studies, TP63 was found to be overexpressed and hypomethylated in the CLL subtype of leukemia24.\nHowever, accumulation of DNA damage and a deficient response to genotoxic stress contribute to an earlier progression of leukemia. DNA damage activates c-Abl, which in turn activates TP63 to mediate cell death. TP63 is responsible for inducing transcription of the pro-apoptotic family members PUMA (p53 upregulated modulator of apoptosis) and NOXA, which then bind to BAX/BAK and trigger apoptosis. Puma can also be activated independently of p53 and thus plays an important role in p53-independent apoptosis. The p53 homolog p73 can also regulate Puma expression by binding to the same p53-responsive elements in the Puma promoter. Puma is believed to bind Bak through Bid and Bim. Noxa is less effective than Puma in p53-mediated apoptosis, for Puma (like Bim) can bind to all the anti-apoptotic Bcl-2 family members, whereas Noxa antagonizes only Mcl-1 and A1. Nevertheless, the functional overlap of Noxa and Puma in apoptosis caused by DNA damage indicates that, to some extent, they may cooperate in the progress of apoptosis. 
However, a mutation in TP63 can inhibit apoptosis, leading to the progression of leukemia, as shown in Figure 1.\nHypothesized pathway by which TP63 loss of function impairs the DNA-damage-triggered apoptotic pathway and leads to progression of leukemia.\nTP63 is shown to interact with many genes, as described by the STRING tool (v10.5)\nBesides, this genetic variant has a putative regulatory effect (SNIPA online tool), as shown in Figure 2; thus a polymorphism in any part of the region could possibly affect the neighboring SNPs and disturb the physiology of genes.\nThe linkage disequilibrium plot shows the amount of correlation between a sentinel variant (blue) and its surrounding variants (red). The y-axis signifies the correlation coefficient; the x-axis signifies the chromosomal position of each SNP. The plot symbol of each variant designates its functional annotations (http://snipa.helmholtz-muenchen.de).\nThis variant has also been found to be associated with lung cancer in the North Indian population by our group25, suggesting a potential role in multiple cancers. Our findings suggest that this SNP can be used as a diagnostic and prognostic marker for leukemia and other cancer types in North Indian populations.", "Our findings provide evidence that the rs10937405 variant of TP63 is significantly associated with leukemia in the population of Jammu and Kashmir in Northern India. Further studies involving more diverse ethnic groups, particularly from North India, will not only validate these findings but will also assist in developing this variant as a biomarker for leukemia screening programs." ]
[ "intro", "materials|methods", null, null, null, null, "results", "discussion", "conclusions" ]
[ "Single Nucleotide Polymorphism (SNPs)", "Leukemia", "North Indian population", "Tumour suppressor (TP63)", "Linkage Disequilibrium (LD)", "Genome wide association studies (GWAS)", "Jammu and Kashmir (J &K)" ]
Introduction: Leukemia ranks among the most common cancers in the world, with an estimated 300,000 new cases (2.8% of all new cancer cases and 3.8% of cancer deaths) diagnosed every year globally1,2. In India, leukemia is ranked ninth, with a ratio of 1.56:1.09 in males and females3. In India, more than 10,000 new cases of childhood leukemia are reported annually4. Among North Indian populations, the population of Jammu and Kashmir is found to be at higher risk, with a high mortality rate associated with different cancers5. The incidence of leukemia in Jammu and Kashmir has increased rapidly, by about 5.07% over the previous decade6. The population of the northern part of the Jammu and Kashmir state practices endogamy, thus preserving gene pools in a way that increases homozygosity. This factor has been documented as an inherited genetic factor that can contribute to the etiology of leukemia7. Leukemia is multifactorial in origin and can be caused by both genetic and non-genetic factors. Genome-wide association studies (GWAS) have advanced our understanding of susceptibility to leukemia; however, much of the heritable risk remains unidentified. Previous GWAS have suggested a polygenic susceptibility to leukemia, identifying SNPs in different loci influencing leukemia risk such as 7p12.2 (IKZF1), 9p21.3 (CDKN2A), 10p12.2 (PIP4K2A), 10q26.13 (LHPP), 12q23.1 (ELK3), 10p14 (GATA3), 10q21.2 (ARID5B), and 14q11.2 (CEBPE)8–12. Recently, a GWAS found a strong association of the rs10937405 variant of TP63 with lung cancer in the Korean population13. The TP63 gene, a homolog of the tumor suppressor gene TP53 located in the chromosome 3q27-28 region, encodes a member of the p53 family of transcription factors. The rs10937405 variant could affect the expression of other genes and thereby increase the risk of leukemia. In the current study, we aimed to explore the association of the rs10937405 variant of TP63 with leukemia in the North Indian population of Jammu and Kashmir. 
Materials and methods: Ethics statement The Institutional Ethics Review Board (IERB) of Shri Mata Vaishno Devi University (SMVDU) approved the study through notification number SMVDU/IERB/16/41. All the details of cases and controls were recorded in a predesigned proforma and written informed consent was obtained from all participants. All experimental procedures were conducted according to the guidelines and regulations set by the IERB, SMVDU. Sampling A total of 588 subjects were recruited for the study, of which 188 were leukemia cases collected from different hospitals of Jammu and Kashmir after ethical approval and informed consent, and 400 were age- and sex-matched healthy controls. All cases were histopathologically confirmed by a pathologist at GMC, Jammu. Genomic DNA was isolated from blood samples using the Qiagen DNA Isolation kit (Cat. No. 51206). Agarose gel electrophoresis was used to analyse the quality of the genomic DNA, and quantification was performed using a UV spectrophotometer. 
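The two-dye (VIC/FAM) TaqMan genotyping described in the methods relies on allele calls made by the instrument software from endpoint fluorescence. As an illustration only, a toy classifier on normalized dye ratios might look like the sketch below; the threshold values and the 'allele1'/'allele2' labels are hypothetical, not taken from the study or the instrument:

```python
def call_genotype(vic, fam, low=0.3, high=0.7):
    """Toy TaqMan-style allele call from background-corrected VIC/FAM signals.

    Returns 'allele1/allele1', 'allele1/allele2', 'allele2/allele2',
    or 'undetermined' (e.g. for a no-template control well).
    Thresholds here are illustrative, not instrument defaults."""
    total = vic + fam
    if total < 0.1:                  # no amplification -> NTC-like well
        return "undetermined"
    frac = vic / total               # fraction of signal from the VIC dye
    if frac >= high:
        return "allele1/allele1"     # homozygous for the VIC-labeled allele
    if frac <= low:
        return "allele2/allele2"     # homozygous for the FAM-labeled allele
    return "allele1/allele2"         # strong signal in both dyes -> heterozygote
```

Real assay software additionally clusters wells in the two-dimensional VIC/FAM plane rather than using fixed per-well thresholds.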
Genotyping Genotyping of the rs10937405 variant of TP63 was performed using the TaqMan allele discrimination assay on an MX3005p instrument, with probes labeled with VIC and FAM dyes (Thermo Fisher Scientific) and UNG Master Mix (Applied Biosystems, USA). The total PCR reaction volume was 10 µl, comprising 2.5 µl of TaqMan UNG Master Mix, 0.25 µl of the probe, 3 µl DNA (5 ng/µl) and 4.25 µl nuclease-free water. The thermal conditions were 10 minutes at 95 °C, followed by 40 cycles of 95 °C for 15 seconds and 60 °C for 1 minute. All samples were run in 96-well plates with three no-template controls (NTCs). The post-PCR detection system was used to measure allele-specific fluorescence. A total of 93 random samples each from cases and controls were re-genotyped for cross-validation of the genotyping calls; the concordance rate was 100%. Statistical analysis Statistical analyses were performed using SPSS software (v.20; Chicago, IL). 
Chi-square (χ2) tests were performed on genotype frequencies. Genotype frequencies in all samples conformed to Hardy-Weinberg equilibrium. Binary logistic regression was used to estimate ORs with 95% confidence intervals (CI), with significance expressed as p-values. Ethics statement: The Institutional Ethics Review Board (IERB) of Shri Mata Vaishno Devi University (SMVDU) approved the study through notification number SMVDU/IERB/16/41. All the details of cases and controls were recorded in a predesigned proforma and written informed consent was obtained from all participants. All experimental procedures were conducted according to the guidelines and regulations set by the IERB, SMVDU. Sampling: A total of 588 subjects were recruited for the study, of which 188 were leukemia cases collected from different hospitals of Jammu and Kashmir after ethical approval and informed consent, and 400 were age- and sex-matched healthy controls. All cases were histopathologically confirmed by a pathologist at GMC, Jammu. Genomic DNA was isolated from blood samples using the Qiagen DNA Isolation kit (Cat. No. 51206). Agarose gel electrophoresis was used to analyse the quality of the genomic DNA, and quantification was performed using a UV spectrophotometer. Genotyping: Genotyping of the rs10937405 variant of TP63 was performed using the TaqMan allele discrimination assay on an MX3005p instrument, with probes labeled with VIC and FAM dyes (Thermo Fisher Scientific) and UNG Master Mix (Applied Biosystems, USA). 
The total PCR reaction volume was 10 µl, comprising 2.5 µl of TaqMan UNG Master Mix, 0.25 µl of the probe, 3 µl DNA (5 ng/µl) and 4.25 µl nuclease-free water. The thermal conditions were 10 minutes at 95 °C, followed by 40 cycles of 95 °C for 15 seconds and 60 °C for 1 minute. All samples were run in 96-well plates with three no-template controls (NTCs). The post-PCR detection system was used to measure allele-specific fluorescence. A total of 93 random samples each from cases and controls were re-genotyped for cross-validation of the genotyping calls; the concordance rate was 100%. Statistical analysis: Statistical analyses were performed using SPSS software (v.20; Chicago, IL). Chi-square (χ2) tests were performed on genotype frequencies. Genotype frequencies in all samples conformed to Hardy-Weinberg equilibrium. Binary logistic regression was used to estimate ORs with 95% confidence intervals (CI), with significance expressed as p-values. Results: We recruited a total of 588 subjects, of which 188 were leukemia patients (cases) and 400 were healthy individuals (controls). Among the cases, 56% were males and 44% females; among the controls, 68% were males and 32% females, suggesting that the frequency of leukemia is higher among males in the J&K population. The mean age was 40.51 (±14.67) years in cases and 50.76 (±13.30) years in controls. The average BMI was 21.21 (±6.08) in cases and 24.21 (±5.06) in controls, as shown in Table 1. Clinical characteristics for cases and controls Corrected for age, gender, BMI, alcohol consumption, and smoking The allele frequency distribution of the rs10937405 variant of TP63 between cases and controls is summarized in Table 2. In the current study, the T allele is more frequent in cases (0.62) than in controls (0.54), suggesting that allele T confers risk. 
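The Hardy-Weinberg check mentioned in the statistical analysis can be sketched as a one-degree-of-freedom chi-square goodness-of-fit test on genotype counts. The counts below are hypothetical, not the study's data:

```python
import math

def hwe_chi2(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)            # frequency of allele A
    expected = (n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2)
    chi2 = sum((o - e) ** 2 / e
               for o, e in zip((n_aa, n_ab, n_bb), expected))
    p_value = math.erfc(math.sqrt(chi2 / 2))   # chi-square survival fn, df = 1
    return chi2, p_value

chi2, p_value = hwe_chi2(150, 180, 70)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")  # p > 0.05: consistent with HWE
```

A non-significant p-value (as in the study's control samples) indicates no detectable departure from the expected p², 2pq, q² genotype proportions.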
We observed that allele T of the rs10937405 variant of TP63 is significantly associated with leukemia (p-value = 1.2 × 10−6), with Hardy-Weinberg equilibrium p = 0.974. Allelic frequency distribution between cases and controls Corrected for age, gender, BMI, alcohol consumption and smoking. To observe the maximum effect of allele T, we evaluated the association using the dominant model. The OR observed was 1.6 (0.94–2.4) at 95% CI in leukemia, corrected for age, gender and BMI. Furthermore, we evaluated the rs10937405 variant of TP63 under the other genetic models defined by the risk allele, and a positive association was observed in all three models, as shown in Table 3. Association of the rs10937405 variant of TP63 with leukemia in the North Indian population under different genetic models *Corrected for Age, Gender and BMI To evaluate the association of rs10937405 with different subtypes of leukemia, statistical analyses were performed on the allelic distribution within these subtypes, as shown in Table 4. The rs10937405 variant of TP63 was significantly associated with all four subtypes of leukemia (p-value = 0.002, 0.0001, 0.032, and 0.034 for ALL, AML, CML, and CLL, respectively). Genetic association of different subtypes of leukemia with the rs10937405 variant The ORs calculated under the different models (Table 3) showed a significant association of rs10937405 with leukemia. The allelic OR was 1.94 (1.51–2.48 at 95% CI, p = 1.2 × 10−6). Under the additive model the OR was 1.48 (1.01–2.16) at 95% CI, p = 0.02, and under the recessive model the OR was 1.7 (0.8–3.54) at 95% CI, p = 0.001. All values were corrected for age, gender, BMI, smoking and alcohol consumption. Discussion: In the present study, the association of the rs10937405 variant of TP63 with leukemia was explored in the North Indian population. 
Previously, the association of this variant was reported in population groups of Japan14 and Korea13, where the 'C' allele was found to be the risk allele. Most interestingly, in the present study it was found that 'T', the wild-type allele, was the risk factor in the studied population. In various GWAS across different ethnic populations, the rs10937405 variant was found to be associated with lung cancer risk in population groups of Japan, Korea and Han Chinese6, 15–17. The variant was associated with lung carcinoma among Asian females in the absence of tobacco smoking18, but showed a significant smoking-related association with lung cancer in the UK population19. We did not come across any previous work reporting an association between leukemia risk and coal exposure; furthermore, we evaluated the role of cigarette smoking in leukemia, and our study is the first to find an association of smoking with leukemia. In the future, this genetic variant can also be explored in different ethnic populations for a variety of reasons, including differences in allele frequency and in the genetic and environmental backgrounds that interact with the variant. The TP63 gene plays an important role in cell proliferation, apoptosis, development, differentiation, senescence, ageing, and the response to cellular stress. TP63 contains two transcriptional start sites, leading to p63 isoforms either containing (TAp63) or lacking (ΔNp63) the trans-activation domain20. TAp63 isoforms possess strong transactivation activity on p53-responsive promoters, whereas ΔNp63 isoforms are able to outcompete p53 for binding to p53-responsive promoters and repress gene expression. The TP63 protein contains an N-terminal TA domain that is 22% homologous to that of p53, while ΔNp63 isoforms are transcribed from an alternative promoter within the third intron21. 
The 14 unique N-terminal amino acid residues in ΔNp63 isoforms have been shown to possess transactivation activity. TP63 expression has been reported in blast crisis of chronic myelogenous leukemia22, follicular lymphoma (FL), diffuse large B-cell lymphoma (DLBCL)23 and isolated cases of chronic lymphocytic leukemia and marginal cell lymphoma. In some studies, TP63 was found to be overexpressed and hypomethylated in the CLL subtype of leukemia24. However, accumulation of DNA damage and a deficient response to genotoxic stress contribute to an earlier progression of leukemia. DNA damage activates c-Abl, which in turn activates TP63 to mediate cell death. TP63 is responsible for inducing transcription of the pro-apoptotic family members PUMA (p53 upregulated modulator of apoptosis) and NOXA, which then bind to BAX/BAK and trigger apoptosis. Puma can also be activated independently of p53 and thus plays an important role in p53-independent apoptosis. The p53 homolog p73 can also regulate Puma expression by binding to the same p53-responsive elements in the Puma promoter. Puma is believed to bind Bak through Bid and Bim. Noxa is less effective than Puma in p53-mediated apoptosis, for Puma (like Bim) can bind to all the anti-apoptotic Bcl-2 family members, whereas Noxa antagonizes only Mcl-1 and A1. Nevertheless, the functional overlap of Noxa and Puma in apoptosis caused by DNA damage indicates that, to some extent, they may cooperate in the progress of apoptosis. However, a mutation in TP63 can inhibit apoptosis, leading to the progression of leukemia, as shown in Figure 1. Hypothesized pathway by which TP63 loss of function impairs the DNA-damage-triggered apoptotic pathway and leads to progression of leukemia. 
TP63 is shown to interact with many genes, as described by the STRING tool (v10.5). Besides, this genetic variant has a putative regulatory effect (SNIPA online tool), as shown in Figure 2; thus a polymorphism in any part of the region could possibly affect the neighboring SNPs and disturb the physiology of genes. The linkage disequilibrium plot shows the amount of correlation between a sentinel variant (blue) and its surrounding variants (red). The y-axis signifies the correlation coefficient; the x-axis signifies the chromosomal position of each SNP. The plot symbol of each variant designates its functional annotations (http://snipa.helmholtz-muenchen.de). This variant has also been found to be associated with lung cancer in the North Indian population by our group25, suggesting a potential role in multiple cancers. Our findings suggest that this SNP can be used as a diagnostic and prognostic marker for leukemia and other cancer types in North Indian populations. 
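The dominant, recessive, and additive models applied in the results correspond to different numeric codings of the genotype with respect to the risk allele. A small sketch of that standard coding (illustrative, not code from the study):

```python
def code_genotype(genotype, risk_allele="T"):
    """Encode a biallelic genotype string (e.g. 'TT', 'TC', 'CC') under the
    three genetic models commonly used in association testing."""
    n = genotype.upper().count(risk_allele)   # copies of the risk allele
    return {
        "additive": n,              # 0 / 1 / 2 risk alleles (per-allele effect)
        "dominant": int(n >= 1),    # carrier of at least one risk allele
        "recessive": int(n == 2),   # homozygous for the risk allele only
    }

print(code_genotype("TC"))  # {'additive': 1, 'dominant': 1, 'recessive': 0}
```

Each coding is then used as the predictor in the logistic regression, which is why the same genotype data yields a different OR under each model.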
Background: The role of the single nucleotide polymorphism rs10937405 (C>T) of the TP63 gene in cancer, including leukemia, has previously been studied in different world populations; however, the role of this variant in leukemia in the North Indian population of Jammu and Kashmir is still unknown. Methods: A total of 588 subjects (188 cases and 400 controls) were recruited for the study. The rs10937405 variant was genotyped using the real-time PCR-based TaqMan assay. Results: A statistically significant association was observed between rs10937405 and leukemia [OR of 1.94 (95% CI 1.51-2.48), p=1.2x10-6]. Conclusions: The current study concludes that the rs10937405 variant is a risk factor for the development of leukemia in the population of Jammu and Kashmir, North India. However, it would be interesting to explore the contribution of this variant in other cancers as well. Our findings will help in the development of diagnostic markers for leukemia in the studied population and potentially for other North Indian populations.
Introduction: Leukemia ranks among the most common cancers in the world, with an estimated 300,000 new cases (2.8% of all new cancer cases and 3.8% of cancer deaths) diagnosed every year globally1,2. In India, leukemia is ranked ninth, with a ratio of 1.56:1.09 in males and females3. In India, more than 10,000 new cases of childhood leukemia are reported annually4. Among North Indian populations, the population of Jammu and Kashmir is found to be at higher risk, with a high mortality rate associated with different cancers5. The incidence of leukemia in Jammu and Kashmir has increased rapidly, by about 5.07% over the previous decade6. The population of the northern part of the Jammu and Kashmir state practices endogamy, thus preserving gene pools in a way that increases homozygosity. This factor has been documented as an inherited genetic factor that can contribute to the etiology of leukemia7. Leukemia is multifactorial in origin and can be caused by both genetic and non-genetic factors. Genome-wide association studies (GWAS) have advanced our understanding of susceptibility to leukemia; however, much of the heritable risk remains unidentified. Previous GWAS have suggested a polygenic susceptibility to leukemia, identifying SNPs in different loci influencing leukemia risk such as 7p12.2 (IKZF1), 9p21.3 (CDKN2A), 10p12.2 (PIP4K2A), 10q26.13 (LHPP), 12q23.1 (ELK3), 10p14 (GATA3), 10q21.2 (ARID5B), and 14q11.2 (CEBPE)8–12. Recently, a GWAS found a strong association of the rs10937405 variant of TP63 with lung cancer in the Korean population13. The TP63 gene, a homolog of the tumor suppressor gene TP53 located in the chromosome 3q27-28 region, encodes a member of the p53 family of transcription factors. The rs10937405 variant could affect the expression of other genes and thereby increase the risk of leukemia. In the current study, we aimed to explore the association of the rs10937405 variant of TP63 with leukemia in the North Indian population of Jammu and Kashmir. 
Conclusion: Our findings provide evidence that the rs10937405 variant of TP63 is significantly associated with leukemia in the population of Jammu and Kashmir in Northern India. Further studies involving more diverse ethnic groups, particularly from North India, will not only validate these findings but will also assist in developing this variant as a biomarker for leukemia screening programs.
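The binary logistic regression behind the OR estimates (run in SPSS in the study) can be illustrated with a self-contained Newton-Raphson sketch for a single binary predictor. With one binary covariate and no adjustment, the fitted exp(beta) equals the sample odds ratio; the counts below are hypothetical:

```python
import math

def fit_logit(records, iters=25):
    """Newton-Raphson maximum-likelihood fit of logit(P(y=1)) = b0 + b1*x.
    records: iterable of (x, y) pairs with x, y in {0, 1}."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in records:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            r, w = y - p, p * (1.0 - p)
            g0 += r                 # gradient of the log-likelihood
            g1 += r * x
            h00 += w                # entries of the 2x2 Fisher information
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step: H^-1 * gradient
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical 2x2 table: exposed cases 40, exposed controls 20,
# unexposed cases 20, unexposed controls 40 -> OR = (40*40)/(20*20) = 4
data = [(1, 1)] * 40 + [(1, 0)] * 20 + [(0, 1)] * 20 + [(0, 0)] * 40
b0, b1 = fit_logit(data)
print(f"unadjusted OR estimate: {math.exp(b1):.3f}")
```

Adjusted ORs, as reported in the study for age, gender, BMI, smoking and alcohol, extend the same machinery with additional covariate columns in the design matrix.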
Background: The role of the single nucleotide polymorphism rs10937405 (C>T) of the TP63 gene in cancer, including leukemia, has previously been studied in different world populations; however, the role of this variant in leukemia in the North Indian population of Jammu and Kashmir is still unknown. Methods: A total of 588 subjects (188 cases and 400 controls) were recruited for the study. The rs10937405 variant was genotyped using the real-time PCR-based TaqMan assay. Results: A statistically significant association was observed between rs10937405 and leukemia [OR of 1.94 (95% CI 1.51-2.48), p=1.2x10-6]. Conclusions: The current study concludes that the rs10937405 variant is a risk factor for the development of leukemia in the population of Jammu and Kashmir, North India. However, it would be interesting to explore the contribution of this variant in other cancers as well. Our findings will help in the development of diagnostic markers for leukemia in the studied population and potentially for other North Indian populations.
3,155
200
9
[ "leukemia", "tp63", "variant", "cases", "controls", "association", "rs10937405", "dna", "allele", "95" ]
[ "test", "test" ]
null
[CONTENT] Single Nucleotide Polymorphism (SNPs) | Leukemia | North Indian population | Tumour suppressor (TP63) | Linkage Disequilibrium (LD) | Genome wide association studies (GWAS) | Jammu and Kashmir (J &K) [SUMMARY]
null
[CONTENT] Single Nucleotide Polymorphism (SNPs) | Leukemia | North Indian population | Tumour suppressor (TP63) | Linkage Disequilibrium (LD) | Genome wide association studies (GWAS) | Jammu and Kashmir (J &K) [SUMMARY]
[CONTENT] Single Nucleotide Polymorphism (SNPs) | Leukemia | North Indian population | Tumour suppressor (TP63) | Linkage Disequilibrium (LD) | Genome wide association studies (GWAS) | Jammu and Kashmir (J &K) [SUMMARY]
[CONTENT] Single Nucleotide Polymorphism (SNPs) | Leukemia | North Indian population | Tumour suppressor (TP63) | Linkage Disequilibrium (LD) | Genome wide association studies (GWAS) | Jammu and Kashmir (J &K) [SUMMARY]
[CONTENT] Single Nucleotide Polymorphism (SNPs) | Leukemia | North Indian population | Tumour suppressor (TP63) | Linkage Disequilibrium (LD) | Genome wide association studies (GWAS) | Jammu and Kashmir (J &K) [SUMMARY]
[CONTENT] Asian People | Case-Control Studies | Genetic Predisposition to Disease | Genotype | Humans | India | Leukemia | Polymorphism, Single Nucleotide | Transcription Factors | Tumor Suppressor Proteins [SUMMARY]
[CONTENT] leukemia | tp63 | variant | cases | controls | association | rs10937405 | dna | allele | 95 [SUMMARY]
[CONTENT] leukemia | new | risk | jammu | jammu kashmir | kashmir | factor | gwas | gene | genetic [SUMMARY]
[CONTENT] leukemia | bmi | variant | corrected | corrected age | gender bmi | corrected age gender | corrected age gender bmi | gender | table [SUMMARY]
[CONTENT] india | findings | leukemia | variant | validate | findings provide evidence | findings provide evidence variant | validate findings assist | validate findings | developing [SUMMARY]
[CONTENT] leukemia | variant | controls | cases | tp63 | dna | smvdu | ierb | association | µl [SUMMARY]
[CONTENT] North Indian | Jammu | Kashmir [SUMMARY]
[CONTENT] rs10937405 ||| 1.94 | 95% | CI | 1.51-2.48 | p=1.2x10 [SUMMARY]
[CONTENT] Jammu | Kashmir | North India ||| ||| North Indian [SUMMARY]
[CONTENT] North Indian | Jammu | Kashmir ||| 588 | 188 | 400 ||| TaqMan ||| rs10937405 ||| 1.94 | 95% | CI | 1.51-2.48 | p=1.2x10 ||| Jammu | Kashmir | North India ||| ||| North Indian [SUMMARY]
Interleukin-6 concentrations in the urine and dipstick analyses were related to bacteriuria but not symptoms in the elderly: a cross sectional study of 421 nursing home residents.
25117748
Up to half the residents of nursing homes for the elderly have asymptomatic bacteriuria (ABU), which should not be treated with antibiotics. A complementary test to discriminate between symptomatic urinary tract infections (UTI) and ABU is needed, as diagnostic uncertainty is likely to generate significant antibiotic overtreatment. Previous studies indicate that Interleukin-6 (IL-6) in the urine might be suitable as such a test. The aim of this study was to investigate the association between laboratory findings of bacteriuria, IL-6 in the urine, dipstick urinalysis and newly onset symptoms among residents of nursing homes.
BACKGROUND
In this cross-sectional study, voided urine specimens for culture, urine dipstick and IL-6 analyses were collected from all residents capable of providing a voided urine sample, regardless of the presence of symptoms. Urine specimens and symptom forms were provided by 421 residents of 22 nursing homes. The following new or increased nonspecific symptoms occurring during the previous month were registered: fatigue, restlessness, confusion, aggressiveness, loss of appetite, frequent falls and not being herself/himself, as well as symptoms from the urinary tract: dysuria, urinary urgency and frequency.
METHODS
Recent onset of nonspecific symptoms was common among elderly residents of nursing homes (85/421). Urine cultures were positive in 32% (135/421); Escherichia coli was by far the most common bacterial finding. Residents without nonspecific symptoms had positive urine cultures as often as those with nonspecific symptoms of up to one month's duration. Residents with positive urine cultures had higher concentrations of IL-6 in the urine (p < 0.001). However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with and without nonspecific symptoms.
RESULTS
Nonspecific symptoms among elderly residents of nursing homes are unlikely to be caused by bacteria in the urine. This study could not establish any clinical value of using dipstick urinalysis or IL-6 in the urine to verify if bacteriuria was linked to nonspecific symptoms.
CONCLUSIONS
[ "Aged", "Aged, 80 and over", "Anti-Bacterial Agents", "Bacteriuria", "Biomarkers", "Cross-Sectional Studies", "Female", "Homes for the Aged", "Humans", "Interleukin-6", "Male", "Nursing Homes", "Sweden", "Urinalysis", "Urinary Tract Infections" ]
4137105
Background
The presence of asymptomatic bacteriuria (ABU) among residents of nursing homes for the elderly varies between 25% and 50% for women and 15% and 40% for men [1-3]. There is overwhelming evidence that ABU should not be treated with antibiotics in an adult population except for pregnant women and patients prior to traumatic urologic interventions with mucosal bleeding [4-7]. The high prevalence of ABU makes it difficult to know if a new symptom in a resident with bacteriuria is caused by a urinary tract infection (UTI), or if the bacteria in the urine is only representative of an ABU [3,8-11]. This is especially difficult in the presence of symptoms not specific to the urinary tract such as fatigue, restlessness, confusion, aggressiveness, loss of appetite or frequent falls. Nonspecific symptoms such as changes in mental status are the most common reasons for suspecting a UTI among residents of nursing homes [12-14]. These symptoms can have many causes besides UTI [15]. There are different opinions on the possible connection between different nonspecific symptoms and UTI [10,16-26]. Nonspecific symptoms and diagnostic uncertainty often lead to antibiotic treatments of dubious value [8,14,27,28]. Urine culture alone seems inappropriate for evaluating symptoms among residents of nursing homes [10]. There are two major possible explanations, either common bacteria in the urine are of little relevance, or a urine culture is insufficient to identify UTI. With the emergence of multidrug-resistant bacteria and the antimicrobial drug discovery pipeline currently running dry, it is important not to misinterpret bacteriuria as UTI and prescribe antibiotics when it actually represents ABU. Thus, a complementary test to discriminate between symptomatic UTI and ABU is needed [29,30]. The cytokine Interleukin-6 (IL-6) is a mediator of inflammation playing an important role in the acute phase response and immune system regulation [29,31]. 
The biosynthesis of IL-6 is stimulated by, for example, bacteria [31]. After intravesical inoculation of patients with E. coli, all patients secreted IL-6 into the urine; however, serum concentrations of IL-6 did not increase, suggesting a dominance of local IL-6 production [32]. A symptomatic lower UTI is assumed to be associated with more severe inflammation in the bladder than an ABU. Previous studies suggested that concentrations of IL-6 in the urine may be valuable in discriminating between ABU and UTI in the elderly; however, this needs evaluation in a larger study among the elderly [9,33]. The aim of this study was to investigate the association between laboratory findings of bacteria in the urine, elevated IL-6 concentrations in the urine, dipstick urinalysis and new or increased symptoms in residents of nursing homes for the elderly.
Methods
During the first three months of 2012, a study protocol was completed and single urine specimens were collected from all included residents of 22 nursing homes in south-western Sweden. The attending nurses were provided detailed verbal and written instructions for the procedure. The study was approved by the Regional ethical review board of Gothenburg University (D-nr 578-11). The data were collected as part of another study of antimicrobial resistance in urinary pathogens among nursing home residents [34].
Inclusion and exclusion criteria
Residents of the participating nursing homes were invited to participate, regardless of UTI symptoms. Those accepting were included if they met the following inclusion criteria:
● Permanent residence in a nursing home for the elderly (regardless of gender)
● Presence at the nursing home during the study
● Participation approval
● No indwelling urinary catheter
● Sufficiently continent to leave a voided urinary specimen
● Residents with dementia were included if cooperative when collecting urine samples
● No urostomy
● No regular clean intermittent catheterisation
● Not terminally ill
● No ongoing peritoneal or haemodialysis
The following exclusion criterion was used:
● The resident did not agree to participate or discontinued study participation
Statement of consent
Residents were informed of the studies verbally and in writing. Informed approval for participation was collected from decision-capable individuals choosing to participate. However, a considerable number of participants were residents with varying degrees of dementia. If a resident was incapable of understanding the information, and thereby had a reduced decision capability, he or she participated only as long as he or she did not oppose participation, and under the condition that appointed representatives or relatives did not oppose participation after having partaken of the study information. This procedure was approved by the Regional ethical review board of Gothenburg University.
Study protocol
In addition to collecting the urine sample, the attending nurse made an entry in the study protocol for each included resident, recording any symptoms that were newly onset or increased within the last month and still present when the urine specimen was obtained. Nursing documentation and record keeping were used to obtain information about the presence or absence of symptoms one month prior to inclusion. The following nonspecific symptoms were registered: fatigue, restlessness, confusion, aggressiveness, loss of appetite, frequent falls and not being herself/himself, as well as symptoms from the urinary tract: dysuria, urinary urgency and frequency. It was also registered whether the resident had ongoing or previous antibiotic treatment within the last month, diabetes mellitus or dementia. To avoid the presence of symptoms influencing what day the study protocol was completed and the urine specimen collected, there was a predetermined date for collection of the urine sample from each included resident.
Laboratory tests
Personnel at the nursing homes were instructed to collect a mid-stream morning urine sample, or a voided urine specimen with as long a bladder incubation time as possible. Immediately after collection, dipstick urinalysis was carried out at the nursing home. Visual reading of the urine dipstick Multistix 5 (Siemens Healthcare Laboratory Diagnostics) was performed for the detection of nitrite and leukocyte esterase. Body temperature was measured by an ear thermometer. Urine specimens were cultured at the microbiology laboratory at Södra Älvsborg Hospital in Borås, Sweden, using clinical routine procedure. The urine specimens were chilled before transport and usually arrived at the laboratory within 24 hours. As in clinical routine, the laboratory was provided the outcome of the dipstick urinalysis as well as information on any urinary tract specific UTI symptoms from the attending nurse. The microbiology laboratory fractionated 10 μl urine on the surfaces of two plates: a cystine-lactose-electrolyte deficient (CLED) agar and a Columbia blood agar base. Plates were incubated overnight (minimum 15 h) at 35-37°C. CLED plates were incubated in air, and Columbia plates in 5% CO2; the latter were further incubated for 24 hours if no growth occurred after the first incubation.
Growth of bacteria was considered significant if the number of colony-forming units (CFU)/mL was ≥10⁵. However, at signs of possible UTI, such as a positive nitrite dipstick, leukocyte esterase dipstick >1, fever, frequency, urgency or dysuria, the cut-off point was ≥10³ for patients with growth of Escherichia coli (E. coli) and for male patients with Klebsiella species (spp.) and Enterococcus faecalis. For symptomatic women harbouring the two latter species the cut-off level was ≥10⁴. Nonspecific symptoms did not influence cut-off levels for CFU/mL in the urine cultures. Concentrations of IL-6 in the urine were measured with enzyme-linked immunosorbent assay (ELISA) using a commercial kit (Quantikine HS ELISA, High Sensitivity) [35] according to instructions from the manufacturer (R&D Systems, Abingdon, Oxford, UK) at the clinical immunology laboratory at Sahlgrenska University Hospital in Gothenburg, Sweden. Urine specimens for IL-6 analysis were frozen pending transport to the clinical immunology laboratory. Concentrations of creatinine in the urine were analysed by the automated general chemistry analyser UniCel® DxC 800 Synchron® Clinical System, according to instructions from the manufacturer (Beckman Coulter), at the clinical chemistry laboratory at Södra Älvsborg Hospital in Borås, Sweden.
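The significance thresholds above can be read as a decision rule. The sketch below is a hypothetical illustration of that rule only; the species labels and the boolean flags are simplifications for clarity, not the laboratory's actual algorithm:

```python
def significant_growth(cfu_per_ml, species, male, uti_signs):
    """Decide whether a urine culture shows significant growth.

    Thresholds follow the study's stated rules: >=10^5 CFU/mL in general;
    with signs of possible UTI, >=10^3 for E. coli (either gender) and for
    Klebsiella spp. / E. faecalis in men, and >=10^4 for those two species
    in symptomatic women.
    """
    if not uti_signs:
        return cfu_per_ml >= 1e5
    if species == "E. coli":
        return cfu_per_ml >= 1e3
    if species in ("Klebsiella spp.", "E. faecalis"):
        return cfu_per_ml >= (1e3 if male else 1e4)
    return cfu_per_ml >= 1e5

# E. coli at 10^4 CFU/mL counts only when signs of possible UTI are present
print(significant_growth(1e4, "E. coli", male=False, uti_signs=True))   # True
print(significant_growth(1e4, "E. coli", male=False, uti_signs=False))  # False
```

Note that, as in the study, nonspecific symptoms do not enter the rule; only urinary tract specific signs lower the cut-off.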
Statistical analysis
The first objective was to clarify whether the concentrations of IL-6 in the urine or urine dipstick outcomes differed between residents with or without bacteriuria. Creatinine adjusted IL-6 was calculated. Concentrations of unadjusted and adjusted IL-6 in the urine and the outcomes of urine dipstick analyses were compared between residents with positive and negative urine cultures, irrespective of symptoms, using the Mann-Whitney test for IL-6 (due to skewed data) and Pearson's chi-square test for urine dipsticks. The second and third objectives were to clarify whether a symptom correlated with bacteriuria or antibiotic usage. The prevalence of bacteriuria, or of antibiotic use during the month preceding urine sampling, was compared between residents with or without symptoms using Pearson's chi-square test; Fisher's exact test was used in case of small numbers. The fourth objective was to clarify whether the concentrations of IL-6 or the outcomes of urine dipsticks differed depending on symptoms in residents with bacteriuria. Concentrations of IL-6 in the urine and outcomes of dipstick analyses were compared between bacteriuric residents with or without symptoms using the Mann-Whitney test for IL-6 (due to skewed data) and Pearson's chi-square test for dipsticks. The fifth objective was to correlate factors with symptoms while adjusting for covariates. A cut-off was used to construct a dichotomous variable covering approximately 20% of the highest IL-6 concentrations (≥5 ng/L). A similar dichotomous variable was constructed for urine dipstick leukocyte esterase, where ≥3+ was considered positive.
Forward stepwise (conditional) logistic regressions were performed, where the condition for entry was 0.050 and for removal 0.10. Variables that served well for the overall prediction were also kept in the model. Zero order correlations between independent variables were checked and correlations >0.6 were not allowed. The independent variables, all but age being dichotomous, were: urine culture, IL-6 in the urine, leukocyte esterase dipstick, nitrite dipstick, antibiotics during the last month, age, gender, and presence of diabetes mellitus or dementia. IBM SPSS Statistics version 21 was used for statistical analysis.
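As one concrete example of the dichotomous comparisons described above (the study itself used SPSS), a 2×2 Pearson chi-square statistic can be computed directly from the cell counts. The counts below are invented toy numbers for illustration, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g. rows = IL-6 >= 5 ng/L yes/no, columns = positive culture yes/no."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, r, cl in ((a, row1, col1), (b, row1, col2),
                       (c, row2, col1), (d, row2, col2)):
        exp = r * cl / n  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Toy table: 40 of 130 culture-positive vs 35 of 280 culture-negative
# residents above the IL-6 cut-off.
print(round(chi_square_2x2(40, 90, 35, 245), 2))  # → 19.83
```

In practice the p-value is then read from the chi-square distribution with one degree of freedom, and Fisher's exact test is preferred when expected counts are small, as the authors did.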
Results
Studied population
Inclusion criteria were fulfilled by 676 of 901 residents in 22 nursing homes, and 425 (63%) accepted participation (Figure 1). Voided urine specimens and symptom forms were provided by 421 residents, 295 (70%) women and 126 (30%) men. Women (mean 87 years, SD 6.4, range 63-100) were slightly older than men (mean 85 years, SD 7.1, range 65-100) (p = 0.0053). Among participating residents, 56/421 (13%) suffered from diabetes mellitus and 228/421 (54%) had dementia. When urine specimens were collected, 18/421 (4.3%) were undergoing antibiotic treatment. Another 29/421 (6.9%) had no ongoing antibiotic treatment when the urine specimen was collected but had received antibiotics during the previous month. Measurement of body temperature was conclusive in 399/421 residents; none of them had a body temperature ≥38.0°C.
Figure 1. Participant flow chart.
Bacterial findings
There was significant growth of potentially pathogenic bacteria in 32% (135/421) of voided urine specimens. E. coli was by far the most common finding, present in 81% (109/135) of positive urine cultures. Klebsiella spp. were the second most common finding, present in 8.1% (11/135) of positive cultures. Proteus spp. were present in 3.0% (4/135) of positive cultures. Other species had very low prevalences, ≤1.5% of positive urine cultures for each species.
IL-6 and creatinine in the urine
Concentrations of IL-6 were analysed in urine specimens from 97% (409/421) of residents. In 2.9% (12/421) of residents, urine samples for IL-6 analyses were accidentally lost, or there was not enough urine for both culture and IL-6 analysis. The concentration of IL-6 in the urine had a mean of 3.4 ng/L (SD 5.9) and a median of 1.6 ng/L (interquartile range 0.7-4.1, range 0.20-62). The concentration of creatinine in the urine had a mean of 7.4 mmol/L (SD 4.0). The creatinine adjusted concentration of IL-6 in the urine had a mean of 0.59 ng/mmol creatinine (SD 1.2) and a median of 0.23 ng/mmol creatinine (interquartile range 0.11-0.55, range 0.019-12). Pearson's correlation coefficient between unadjusted and creatinine adjusted IL-6 concentrations was 0.86 (p < 10⁻⁶). Urine IL-6 concentrations were ≥5.0 ng/L in 18% (75/409) of residents, and creatinine adjusted IL-6 concentrations were ≥0.75 ng/mmol in 18% (75/409) of residents.
IL-6 concentrations in the urine divided by positive and negative urine cultures
Concentrations of IL-6 in the urine were higher (p = 0.000004) among residents with significant growth of bacteria in the urine: the mean IL-6 concentration was 5.1 ng/L (SD 8.7) and the median 2.5 ng/L (interquartile range 1.0-5.7), compared to residents with negative urine cultures, where the mean IL-6 concentration was 2.6 ng/L (SD 3.6) and the median 1.3 ng/L (interquartile range 0.6-2.8). The same applies for creatinine adjusted IL-6 concentrations (p < 10⁻⁶). Similarly, residents with positive urine cultures were more likely to have urine IL-6 ≥ 5.0 ng/L (p = 0.000053) and creatinine adjusted IL-6 ≥ 0.75 ng/mmol (p = 0.000001) than those with negative urine cultures.
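The creatinine adjustment reported above is a simple ratio: urine IL-6 in ng/L divided by urine creatinine in mmol/L yields ng/mmol, which compensates for urine dilution. A minimal sketch, with illustrative values chosen near the reported medians rather than actual study data:

```python
def creatinine_adjusted_il6(il6_ng_per_l, creatinine_mmol_per_l):
    """Dilution-adjusted IL-6: ng/L divided by mmol/L yields ng/mmol."""
    if creatinine_mmol_per_l <= 0:
        raise ValueError("creatinine concentration must be positive")
    return il6_ng_per_l / creatinine_mmol_per_l

# A dilute and a concentrated specimen with the same adjusted value
print(creatinine_adjusted_il6(0.8, 3.5))   # ~0.23 ng/mmol
print(creatinine_adjusted_il6(1.6, 7.0))   # ~0.23 ng/mmol
```

The point of the adjustment is visible in the example: doubling both IL-6 and creatinine (a more concentrated urine) leaves the adjusted value unchanged.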
Dipstick urinalysis
Urine dipsticks were analysed for nitrite and leukocyte esterase in urine specimens from 408/421 residents. Urine dipstick analyses were not performed in 13/421 residents, mostly due to insufficient urine volume. Among all residents, regardless of bacteriuria, 26% (106/408) of nitrite dipsticks were positive and 22% (90/408) of leukocyte esterase dipsticks were ≥3+. Leukocyte esterase dipsticks ≥3+ were more common (p < 10⁻⁶) among residents with significant growth of bacteria in the urine: 46% (61/132) versus 11% (29/276) in residents with negative urine cultures. Positive nitrite dipsticks were also more common (p < 10⁻⁶) among residents with positive urine cultures: 64% (84/132) versus 8.0% (22/276) in residents with negative urine cultures.
Symptoms, bacteriuria and antibiotic treatments
The prevalence of new or increased symptoms, occurring during the last month and still present when urine specimens were obtained, is presented in Table 1. There were no significant differences in the proportion of positive urine cultures between those with and without nonspecific symptoms; however, there were fewer positive urine cultures among residents with urinary frequency (Table 1). Residents with some of the symptoms had a higher prevalence of antibiotic treatment during the last month (Table 2).
Table 1. Prevalence of symptoms and positive urine cultures. ¹Symptoms commencing at any time during the preceding month and still present when sampling urine. ²Pearson's chi-square and, when appropriate, Fisher's exact test comparing proportions of positive urine cultures among those with or without symptoms.
Table 2. Prevalence of symptoms and antibiotic treatment. ¹Symptoms commencing at any time during the preceding month and still present when sampling urine. ²Antibiotic treatment given at any time during the month preceding sampling of urine. ³Pearson's chi-square and, when appropriate, Fisher's exact test comparing the proportion of antibiotic treatment among those with or without symptoms.
IL-6 and dipstick urinalyses in residents with bacteriuria
Restricting the analysis to residents with bacteriuria, there were no significant differences in concentrations of urine IL-6 when comparing those with or without a new or increased symptom: fatigue (p = 0.24), restlessness (p = 0.40), confusion (p = 0.38), aggressiveness (p = 0.66), loss of appetite (p = 0.27), frequent falls (p = 0.15), not being herself/himself (p = 0.90), having any of the nonspecific symptoms (p = 0.69), dysuria (p = 0.13) and urinary urgency (p = 0.82). Likewise, there were no significant differences in the proportion of leukocyte esterase dipsticks ≥3+ when comparing those with or without new or increased symptoms: fatigue (p = 0.39), restlessness (p = 1.0), confusion (p = 1.0), aggressiveness (p = 0.62), loss of appetite (p = 1.0), frequent falls (p = 0.60), not being herself/himself (p = 1.0), having any of the nonspecific symptoms (p = 0.68), dysuria (p = 0.46) and urinary urgency (p = 0.34). Similarly, there were no significant differences in the proportion of positive nitrite dipsticks when comparing those with or without new or increased symptoms. All patients with urinary frequency had negative urine cultures.
Predictors of symptoms

A positive urine culture was significant only in the model predicting confusion, OR 0.15 (0.033-0.68; p = 0.014). It is important to note that the odds ratio was <1, i.e. positive urine cultures were less common among residents with confusion (Table 3). As urine IL-6 ≥ 5 ng/L was also a significant predictor in this regression model for confusion, a further regression was run in which urine culture and urine IL-6 ≥ 5 ng/L were replaced by a combined dichotomous variable, positive only if both IL-6 ≥ 5 ng/L and the urine culture were positive at the same time, and otherwise negative. This combined variable was, however, not a significant predictor of confusion.
Predictors1 of new or increased symptoms commencing at any time during the preceding month and still present when sampling urine
1Predictors in patients from whom a urine sample could be obtained and with information for all variables (n = 397). Forward stepwise (conditional) logistic regressions were used, with a probability of 0.050 for entry and 0.10 for removal. Variables that served well for the overall prediction were also kept in the model. Outcomes are presented as odds ratios (95% CI with p-value) for variables included in the model. Urine dipstick (nitrite positive or leukocyte esterase 3+ or 4+), age, gender and presence of diabetes mellitus did not reach the final model for any symptom. Nagelkerke’s R-square is given as a measure of the model’s ability to predict the presence of a symptom.
2With (=1) or without (=0) bacteriuria. The latter was the reference.
3Interleukin-6 elevated (≥5 ng/L) or not. The latter was the reference.
4Ongoing antibiotic treatment (n = 16) or having had antibiotics during the last month (n = 28).
5None of the independent variables could predict either fatigue or restlessness.
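The confusion model’s figures can be sanity-checked against each other: a Wald-type 95% CI is symmetric on the log-odds scale, so the standard error, and from it the two-sided p-value, can be recovered from the published OR and CI alone. A small stdlib-only sketch (assuming, as is typical for SPSS logistic-regression output, a Wald interval):

```python
import math

# Reported for bacteriuria -> confusion: OR 0.15 (95% CI 0.033-0.68; p = 0.014)
or_hat, lo, hi = 0.15, 0.033, 0.68

beta = math.log(or_hat)                               # log-odds point estimate
se = (math.log(hi) - math.log(lo)) / (2 * 1.959964)   # SE recovered from CI width
z = abs(beta) / se                                    # Wald z statistic
p = math.erfc(z / math.sqrt(2))                       # two-sided p-value

# Midpoint of the CI on the log scale should reproduce the point estimate,
# and the recovered p-value should match the reported p = 0.014.
assert abs(math.exp((math.log(lo) + math.log(hi)) / 2) - or_hat) < 0.005
assert abs(p - 0.014) < 0.002
```

The recovered p ≈ 0.014 matches the reported value, so the odds ratio, confidence interval and p-value are internally consistent.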
Conclusions
Nonspecific symptoms of recent onset were common among elderly residents of nursing homes. Residents without nonspecific symptoms had positive urine cultures as often as those with nonspecific symptoms, suggesting that nonspecific symptoms are not caused by bacteria in the urine. Residents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with and without nonspecific symptoms. Thus, IL-6 concentrations in the urine and dipstick analyses are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.
[ "Background", "Inclusion and exclusion criteria", "Statement of consent", "Study protocol", "Laboratory tests", "Statistical analysis", "Studied population", "Bacterial findings", "IL-6 and creatinine in the urine", "IL-6 concentrations in the urine divided by positive and negative urine cultures", "Dipstick urinalysis", "Symptoms, bacteriuria and antibiotic treatments", "IL-6 and dipstick urinalyses in residents with bacteriuria", "Predictors of symptoms", "Strengths and limitations of the study", "Differentiating ABU versus UTI", "Antibiotic treatment and negative urine culture", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "The presence of asymptomatic bacteriuria (ABU) among residents of nursing homes for the elderly varies between 25% and 50% for women and 15% and 40% for men [1-3]. There is overwhelming evidence that ABU should not be treated with antibiotics in an adult population except for pregnant women and patients prior to traumatic urologic interventions with mucosal bleeding [4-7]. The high prevalence of ABU makes it difficult to know if a new symptom in a resident with bacteriuria is caused by a urinary tract infection (UTI), or if the bacteria in the urine is only representative of an ABU [3,8-11]. This is especially difficult in the presence of symptoms not specific to the urinary tract such as fatigue, restlessness, confusion, aggressiveness, loss of appetite or frequent falls.\nNonspecific symptoms such as changes in mental status are the most common reasons for suspecting a UTI among residents of nursing homes [12-14]. These symptoms can have many causes besides UTI [15]. There are different opinions on the possible connection between different nonspecific symptoms and UTI [10,16-26]. Nonspecific symptoms and diagnostic uncertainty often lead to antibiotic treatments of dubious value [8,14,27,28]. Urine culture alone seems inappropriate for evaluating symptoms among residents of nursing homes [10]. There are two major possible explanations, either common bacteria in the urine are of little relevance, or a urine culture is insufficient to identify UTI.\nWith the emergence of multidrug-resistant bacteria and the antimicrobial drug discovery pipeline currently running dry, it is important not to misinterpret bacteriuria as UTI and prescribe antibiotics when it actually represents ABU. Thus, a complementary test to discriminate between symptomatic UTI and ABU is needed [29,30]. The cytokine Interleukin-6 (IL-6) is a mediator of inflammation playing an important role in the acute phase response and immune system regulation [29,31]. 
The biosynthesis of IL-6 is stimulated by e.g. bacteria [31]. After intravesical inoculation of patients with E. coli, all patients secreted IL-6 into the urine, however, serum concentrations of IL-6 did not increase suggesting a dominance of local IL-6 production [32]. A symptomatic lower UTI is assumed associated with more severe inflammation in the bladder compared to an ABU. Previous studies suggested that concentrations of IL-6 in the urine may be valuable in discriminating between ABU and UTI in the elderly, however, this needs evaluation in a larger study among the elderly [9,33].\nThe aim of this study was to investigate the association between laboratory findings of bacteria in the urine, elevated IL-6 concentrations in the urine, dipstick urinalysis and new or increased symptoms in residents of nursing homes for elderly.", "Residents of the participating nursing homes, regardless of UTI symptoms were invited to participate. Those accepting participation were included if they met the following inclusion criteria:\n● Permanent residence in nursing homes for the elderly (regardless of gender)\n● Presence at a nursing home for the elderly during the study\n● Participation approval\n● No indwelling urinary catheter\n● Sufficiently continent to leave a voided urinary specimen\n● Residents with dementia were included if cooperative when collecting urine samples\n● No urostomy\n● No regularly clean intermittent catheterisation\n● Not terminally ill\n● No ongoing peritoneal- or haemodialysis\nThe following exclusion criterion was used:\n● If the resident did not agree to participate or discontinued study participation", "Residents were informed of the studies verbally and in writing. Informed approval for participation in the studies was collected from decision-capable individuals choosing to participate in the study. However, a considerable number of participants consisted of residents with varying degrees of dementia. 
If the resident was incapable of understanding information and thereby possessing a reduced decision capability, these residents only participated so long as they did not oppose participation and under the condition that appointed representatives or relatives did not oppose their participation after having partaken of the study information. This procedure was approved by the Regional ethical review board of Gothenburg University.", "In addition to collecting the urine sample, the attending nurse made an entry in the study protocol for each included resident whether having any symptoms, newly onset or increased within the last month and still present when the urine specimen was obtained. Nursing documentation and record keeping was used to obtain information about the presence or absence of symptoms one month prior to inclusion. The following nonspecific symptoms were registered; fatigue, restlessness, confusion, aggressiveness, loss of appetite, frequent falls and not being herself/himself, as well as symptoms from the urinary tract; dysuria, urinary urgency and frequency. It was also registered if the resident had ongoing or previous antibiotic treatment within the last month, diabetes mellitus or dementia.\nTo avoid presence of symptoms influencing what day the study protocol was completed and urine specimen collected, there was a predetermined date for collection of the urine sample from each included resident.", "Personnel at the nursing homes were instructed to collect a mid-stream morning urine sample, or a voided urine specimen with as long a bladder incubation time as possible. Immediately after collecting urine samples, dipstick urinalysis was carried out at the nursing home. Visual reading of the urine dipstick Multistix 5 (Siemens Healthcare Laboratory Diagnostics) was performed for the detection of nitrite and leukocyte esterase. 
Body temperature was measured by an ear thermometer.\nUrine specimens were cultured at the microbiology laboratory at Södra Älvsborg Hospital in Borås, Sweden using clinical routine procedure. The urine specimens were chilled before transport and usually arrived at the laboratory within 24 hours. As in clinical routine, the laboratory was provided information on the outcome of the dipstick urinalysis as well as information on any urinary tract specific UTI symptoms from the attending nurse.\nThe microbiology laboratory fractionated 10 μl urine on the surfaces of two plates; a cystine-lactose-electrolyte deficient agar (CLED) and a Columbia blood agar base. Plates were incubated overnight (minimum 15 h) at 35-37°C. CLED plates were incubated in air, and Columbia plates were incubated in 5% CO2. The latter was further incubated for 24 hours if no growth occurred after the first incubation. Growth of bacteria was considered significant if the number of colony-forming units (CFU)/mL was ≥105. However, at signs of possible UTI such as positive nitrite dipstick, leukocyte esterase dipstick >1, fever, frequency, urgency or dysuria, the cut-off point was ≥103 for patients with growth of Escherichia coli (E. coli) and for male patients with Klebsiella species (spp.) and Enterococcus faecalis. For symptomatic women harbouring the two latter species the cut-off level was ≥104. Nonspecific symptoms did not influence cut-off levels for CFU/mL in the urine cultures.\nMeasurements of the concentrations of IL-6 in the urine were performed with enzyme-linked immunosorbent assay (ELISA) using a commercial kit (Quantikine HS ELISA, High Sensitivity) [35] according to instructions from the manufacturer (R&D Systems, Abingdon, Oxford, UK) at the clinical immunology laboratory at Sahlgrenska University Hospital in Gothenburg, Sweden. 
Urine specimens for IL-6 analysis were frozen pending transport to the clinical immunology laboratory.\nConcentrations of creatinine in the urine were analysed by the automated general chemistry analyser UniCel® DxC 800 Synchron® Clinical System, according to instructions from the manufacturer (Beckman Coulter), at the clinical chemistry laboratory at Södra Älvsborg Hospital in Borås, Sweden.", "The first objective was to clarify whether the concentrations of IL-6 in the urine or urine dipsticks differed between residents with or without bacteriuria. Creatinine adjusted IL-6 was calculated. Concentrations of unadjusted and adjusted IL-6 in the urine and outcome of urine dipstick analyses were compared between residents with positive and negative urine cultures, irrespective of symptoms, using the Mann-Whitney test for IL-6 (due to skewed data) and the Pearson’s chi-square test for urine dipsticks.\nThe second and third objective was to clarify whether a symptom correlated to bacteriuria or antibiotic usage. The prevalence of bacteriuria or use of antibiotics during the month preceding sampling of urine was compared between residents with or without symptoms using Pearson’s chi-square test. Fisher’s exact test was used in case of small numbers.\nThe fourth objective was to clarify if the concentrations of IL-6 or outcomes of urine dipsticks differed depending on symptoms in residents with bacteriuria. Concentrations of IL-6 in the urine or outcome of dipstick analyses were compared between bacteriuric residents with or without symptoms using Mann-Whitney’s test for IL-6 (due to skewed data) and Pearson’s chi-square test for dipsticks.\nThe fifth objective was to correlate factors with symptoms while adjusting for covariates.\nA cut-off was used to construct a dichotomous variable covering approximately 20% of the highest IL-6 concentrations (≥5 ng/L). 
A similar dichotomous variable was constructed for urine dipstick leukocyte esterase where ≥3+ was considered positive. Forward stepwise (conditional) logistic regressions were performed where the condition for entry was 0.050 and for removal 0.10. Variables that served well for the overall prediction were also kept in the model. Zero order correlations between independent variables were checked and correlations >0.6 were not allowed. The independent variables, all but age being dichotomous, were; urine culture, IL-6 in the urine, leukocyte esterase dipstick, nitrite dipstick, antibiotics during the last month, age, gender, and presence of diabetes mellitus or dementia.\nIBM SPSS Statistics version 21 was used for statistical analysis.", "Inclusion criteria were fulfilled by 676 of 901 residents in 22 nursing homes, and 425 (63%) accepted participation (Figure 1). Voided urine specimens and symptom forms were provided from 421 residents, 295 (70%) women and 126 (30%) men. Women (mean 87 years, SD 6.4, range 63-100) were slightly older than men (mean 85 years, SD 7.1, range 65-100) (p = 0.0053).\nParticipant flow chart.\nAmong participating residents 56/421 (13%) suffered from diabetes mellitus and 228/421 (54%) had dementia. When urine specimens were collected, 18/421 (4.3%) were undergoing antibiotic treatment. Another 29/421 (6.9%) had no ongoing antibiotic treatment when the urine specimen was collected but had received antibiotics during the previous month. Measure of body temperature was conclusive in 399/421 residents; none of these residents had a body temperature ≥38.0° Celsius.", "There was significant growth of potentially pathogenic bacteria in 32% (135/421) of voided urine specimens. E. coli was by far the most common finding, present in 81% (109/135) of positive urine cultures. Klebsiella spp. were the second most common finding, present in 8.1% (11/135) of positive cultures. Proteus spp. were present in 3.0% (4/135) of positive cultures. 
Other species had very low prevalences, ≤1.5% of positive urine cultures for each species.", "Concentrations of IL-6 were analysed in urine specimens from 97% (409/421) of residents. In 2.9% (12/421) of residents, urine samples for IL-6 analyses were accidentally lost, or there was not enough urine for both culture and IL-6 analysis.\nConcentration of IL-6 in the urine had a mean of 3.4 ng/L (SD 5.9) and a median of 1.6 ng/L (interquartile range 0.7-4.1, range 0.20-62).\nConcentration of creatinine in the urine had a mean of 7.4 mmol/L (SD 4.0). Creatinine adjusted concentration of IL-6 in the urine had a mean of 0.59 ng/mmol creatinine (SD 1.2) and a median of 0.23 ng/mmol creatinine (interquartile range 0.11-0.55, range 0.019-12). Pearson’s correlation coefficient between unadjusted urine IL-6 concentrations and creatinine adjusted IL-6 concentrations was 0.86 (p < 10⁻⁶).\nUrine IL-6 concentrations were ≥5.0 ng/L in 18% (75/409) of residents and creatinine adjusted IL-6 concentrations were ≥0.75 ng/mmol in 18% (75/409) of residents.", "Concentrations of IL-6 in the urine were higher (p = 0.000004) among residents with significant growth of bacteria in the urine; the mean IL-6 concentration was 5.1 ng/L (SD 8.7) and the median IL-6 concentration was 2.5 ng/L (interquartile range 1.0-5.7), compared to residents with negative urine cultures, where the mean IL-6 concentration was 2.6 ng/L (SD 3.6) and the median IL-6 concentration was 1.3 ng/L (interquartile range 0.6-2.8). The same applies for creatinine adjusted IL-6 concentrations (p < 10⁻⁶).\nSimilarly, residents with positive urine cultures were more likely to have urine IL-6 ≥ 5.0 ng/L (p = 0.000053) and creatinine adjusted IL-6 ≥ 0.75 ng/mmol (p = 0.000001) compared to those with negative urine cultures.", "Urine dipsticks were analysed for nitrite and leukocyte esterase in urine specimens from 408/421 residents. 
Urine dipstick analyses were not performed in 13/421 residents, mostly due to insufficient urine volume. Among all residents, regardless of bacteriuria, 26% (106/408) of nitrite dipsticks were positive and 22% (90/408) of leukocyte esterase dipsticks were ≥3+.\nLeukocyte esterase dipsticks ≥3+ were more common (p < 10⁻⁶) among residents with significant growth of bacteria in the urine; 46% (61/132) versus 11% (29/276) in residents with negative urine cultures. Positive nitrite dipsticks were more common (p < 10⁻⁶) among residents with positive urine cultures; 64% (84/132) versus 8.0% (22/276) in residents with negative urine cultures.", "The prevalence of new or increased symptoms, occurring during the last month and still present when urine specimens were obtained, is presented in Table 1. There were no significant differences in the proportion of positive urine cultures among those with or without nonspecific symptoms; however, there were fewer positive urine cultures among residents with urinary frequency (Table 1). 
Residents with some of the symptoms had a higher prevalence of antibiotic treatments during the last month (Table 2).\nPrevalence of symptoms and positive urine cultures\n1Symptoms commencing at any time during the preceding month and still present when sampling urine.\n2Pearson’s chi-square and when appropriate Fisher’s exact test comparing proportions of positive urine cultures among those with or without symptoms.\nPrevalence of symptoms and antibiotic treatment\n1Symptoms commencing at any time during the preceding month and still present when sampling urine.\n2Antibiotic treatment given at any time during the month preceding sampling of urine.\n3Pearson’s chi-square and when appropriate Fisher’s exact test comparing proportion of antibiotic treatment among those with or without symptoms.", "In residents exclusively with bacteriuria there were no significant differences in concentrations of urine IL-6 when comparing those with or without a new or increased symptom; fatigue (p = 0.24), restlessness (p = 0.40), confusion (p = 0.38), aggressiveness (p = 0.66), loss of appetite (p = 0.27), frequent falls (p = 0.15), not being herself/himself (p = 0.90), having any of the nonspecific symptoms (p = 0.69), dysuria (p = 0.13) and urinary urgency (p = 0.82).\nIn residents exclusively having bacteriuria there were no significant differences in the proportion of leukocyte esterase dipsticks ≥3+ when comparing those with or without new or increased symptoms; fatigue (p = 0.39), restlessness (p = 1.0), confusion (p = 1.0), aggressiveness (p = 0.62), loss of appetite (p = 1.0), frequent falls (p = 0.60), not being herself/himself (p = 1.0), having any of the nonspecific symptoms (p = 0.68), dysuria (p = 0.46) and urinary urgency (p = 0.34). 
Similarly, there were no significant differences in the proportion of positive nitrite dipsticks when comparing those with or without new or increased symptoms.\nAll patients with urinary frequency had negative urine cultures.", "A positive urine culture was only significant in the model predicting confusion, OR 0.15 (0.033-0.68; p = 0.014). However, it is important to note that the odds ratio was <1, i.e. positive urine cultures were less common among residents with confusion (Table 3). As urine IL-6 ≥ 5 ng/L was also a significant predictor in this regression model for confusion, another regression was made where urine culture and urine IL-6 ≥ 5 ng/L were replaced by a combined dichotomous variable, positive if both IL-6 ≥ 5 ng/L and the urine culture were positive at the same time, and otherwise negative. This combined variable was, however, not a significant predictor of confusion.\nPredictors1 of new or increased symptoms commencing at any time during the preceding month and still present when sampling urine\n1Predictors in patients where a urine sample could be obtained and with information for all variables (n = 397). Forward stepwise (conditional) logistic regressions were used, with a probability of 0.050 for entry and 0.10 for removal. Variables that served well for the overall prediction were also kept in the model. Outcomes are presented as odds ratios (95% CI with p-value) for variables included in the model. Urine dipstick (nitrite positive or leukocyte esterase 3+ or 4+), age, gender and presence of diabetes mellitus did not reach the final model for any symptom. Nagelkerke’s R-square as a measure of the model’s ability to predict presence of a symptom.\n2With (=1) or without (=0) bacteriuria. The latter was the reference.\n3Interleukin-6 elevated (≥5 ng/L) or not. 
The latter was the reference.\n4Ongoing antibiotic treatment (n = 16) or having had antibiotics during the last month (n = 28).\n5None of the independent variables could predict either fatigue or restlessness.", "A major strength of this study is that urine specimens were collected from every participating resident capable of providing a urine sample, regardless of the presence of symptoms. Therefore, this study can compare residents having symptoms with those without symptoms.\nIn this study we obtained urine specimens and study protocols from 47% (421/901) of individuals registered at the nursing homes. This may appear low but is similar to previously published studies in nursing homes [3]. The main reason for not participating was substantial urinary incontinence, often combined with dementia. Twenty-five percent (222/901) refused participation. Still this may be considered acceptable when studying an elderly fragile population with a high proportion of residents with dementia as well as the ethical requirement of approval from appointed representative/relatives.\nAll individuals living at the nursing homes were asked to participate. Due to ethical considerations, it was not noted whether those who refused participation suffered from dementia or urinary incontinence too severe to be able to provide a urine sample. The same applied to one ward withdrawing during the ongoing study. Thus, it is assumed that some of the patients excluded, since they refused participation, would not have been eligible for this study anyway. Knowing these numbers would probably have resulted in less exclusion due to a higher number of residents not meeting the inclusion criteria.\nThe main focus was non-specific symptoms, and the study had enough power to suggest that IL-6 does not play a role in determining if any non-specific symptom is caused by a UTI or something else. Furthermore, these results suggest that non-specific symptoms are, in most cases, unlikely to be caused by a UTI. 
However, the study is underpowered to clearly sort out these issues for each specific symptom.\nResidents with urinary catheters were not included in this study, therefore the results cannot be considered representative for residents with urinary catheters.", "It is interesting to note that a positive urine culture was not commoner among residents with nonspecific symptoms compared to residents without symptoms. There was a trend (p = 0.057) toward a lower proportion of positive urine cultures among residents with confusion occurring during the last month (Table 1). This suggests that nonspecific symptoms are not caused by bacteria in the urine. Not considering other more plausible causes of the symptoms places the patient at risk for having other undiagnosed conditions. The UTI diagnosis is all too often made in the absence of newly onset focal urinary tract symptoms.\nProcedures utilizing presence of symptoms or outcomes of prior dipstick testing to influence setting of cut-off levels for CFU/mL in urine cultures to label growth as clinically significant may enhance the diagnostic procedure [36,37]. These procedures are common in microbiologic laboratories in Sweden and internationally. Using the routine clinical procedure increases clinical usefulness of the study results.\nResidents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations between those with or without nonspecific symptoms. Thus IL-6 concentrations are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. 
If nonspecific symptoms are not caused by bacteria in the urine, IL-6 concentrations cannot identify a subgroup of residents with more severe inflammation in the bladder correlating to nonspecific symptoms.\nThere were no differences in urine dipstick analyses for nitrite or leukocyte esterase ≥3+ among residents with positive urine cultures when comparing those with or without symptoms. Consequently, urine dipsticks are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.", "Residents with recent-onset confusion, loss of appetite, frequent falls and any of the nonspecific symptoms had more often been prescribed antibiotics during the last month. This might explain the trend toward the lower prevalence of bacteriuria among residents with confusion. Also, in the logistic regressions, antibiotics during the previous month were a predictor of loss of appetite, frequent falls and “any of the nonspecific symptoms”. This supports previous studies showing that nonspecific symptoms were a common reason for suspecting UTI and prescribing antibiotics [12-14,27]. The registered symptoms in this study might also reflect side effects of prescribed antibiotics, as the elderly are more likely to experience side effects from antibiotics [38]. These residents could also represent a frailer population having more nonspecific symptoms, being more prone to infections, and consequently receiving more antibiotic prescriptions.\nEven if this study suggests that nonspecific symptoms are not caused by bacteria in the urine, due to the possible confounders described above, the best proof would be a future randomized controlled trial evaluating UTI antibiotic treatment of nonspecific symptoms among elderly residents of nursing homes. 
However, an RCT in a large population of fragile elderly individuals, many with dementia and no possibility to give statement of consent would be very difficult to carry out.\nThis study primarily aimed to study non UTI specific symptoms. As UTI specific symptoms were less frequent, this study was partially underpowered regarding UTI specific symptoms. However, it is interesting to note that among all symptoms urinary frequency was the only symptom where the proportion of positive urine cultures differed from those not having this symptom. Those with urinary frequency had a lower proportion of positive urine cultures and a trend (not significant) towards a higher proportion of having had antibiotic treatment during the previous month. Another explanation for this could be a shorter bladder incubation time in that group.", "The authors declare that they have no competing interests.", "All authors participated in the design of the study. PDS and ME carried out the data collection. PDS analysed the data and drafted the manuscript. All authors contributed to interpretation of the analyses, critical reviews and revisions, and the final approval of the paper.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2318/14/88/prepub\n" ]
[ "Background", "Methods", "Inclusion and exclusion criteria", "Statement of consent", "Study protocol", "Laboratory tests", "Statistical analysis", "Results", "Studied population", "Bacterial findings", "IL-6 and creatinine in the urine", "IL-6 concentrations in the urine divided by positive and negative urine cultures", "Dipstick urinalysis", "Symptoms, bacteriuria and antibiotic treatments", "IL-6 and dipstick urinalyses in residents with bacteriuria", "Predictors of symptoms", "Discussion", "Strengths and limitations of the study", "Differentiating ABU versus UTI", "Antibiotic treatment and negative urine culture", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "The presence of asymptomatic bacteriuria (ABU) among residents of nursing homes for the elderly varies between 25% and 50% for women and 15% and 40% for men [1-3]. There is overwhelming evidence that ABU should not be treated with antibiotics in an adult population, except for pregnant women and patients prior to traumatic urologic interventions with mucosal bleeding [4-7]. The high prevalence of ABU makes it difficult to know if a new symptom in a resident with bacteriuria is caused by a urinary tract infection (UTI), or if the bacteria in the urine are only representative of an ABU [3,8-11]. This is especially difficult in the presence of symptoms not specific to the urinary tract such as fatigue, restlessness, confusion, aggressiveness, loss of appetite or frequent falls.\nNonspecific symptoms such as changes in mental status are the most common reasons for suspecting a UTI among residents of nursing homes [12-14]. These symptoms can have many causes besides UTI [15]. There are different opinions on the possible connection between different nonspecific symptoms and UTI [10,16-26]. Nonspecific symptoms and diagnostic uncertainty often lead to antibiotic treatments of dubious value [8,14,27,28]. Urine culture alone seems inappropriate for evaluating symptoms among residents of nursing homes [10]. There are two major possible explanations: either common bacteria in the urine are of little relevance, or a urine culture is insufficient to identify UTI.\nWith the emergence of multidrug-resistant bacteria and the antimicrobial drug discovery pipeline currently running dry, it is important not to misinterpret bacteriuria as UTI and prescribe antibiotics when it actually represents ABU. Thus, a complementary test to discriminate between symptomatic UTI and ABU is needed [29,30]. The cytokine interleukin-6 (IL-6) is a mediator of inflammation playing an important role in the acute-phase response and immune system regulation [29,31]. 
The biosynthesis of IL-6 is stimulated by, for example, bacteria [31]. After intravesical inoculation of patients with E. coli, all patients secreted IL-6 into the urine; however, serum concentrations of IL-6 did not increase, suggesting a dominance of local IL-6 production [32]. A symptomatic lower UTI is assumed to be associated with more severe inflammation in the bladder compared to an ABU. Previous studies suggested that concentrations of IL-6 in the urine may be valuable in discriminating between ABU and UTI in the elderly; however, this needs evaluation in a larger study among the elderly [9,33].\nThe aim of this study was to investigate the association between laboratory findings of bacteria in the urine, elevated IL-6 concentrations in the urine, dipstick urinalysis and new or increased symptoms in residents of nursing homes for the elderly.", "During the first three months of 2012, a study protocol was completed and single urine specimens collected from all included residents of 22 nursing homes in south-western Sweden. The attending nurses were provided detailed verbal and written information for the procedure. The study was approved by the Regional ethical review board of Gothenburg University (D-nr 578-11). The data was collected as part of another study of antimicrobial resistance in urinary pathogens among nursing home residents [34].\n Inclusion and exclusion criteria Residents of the participating nursing homes, regardless of UTI symptoms, were invited to participate. 
Those accepting participation were included if they met the following inclusion criteria:\n● Permanent residence in nursing homes for the elderly (regardless of gender)\n● Presence at a nursing home for the elderly during the study\n● Participation approval\n● No indwelling urinary catheter\n● Sufficiently continent to leave a voided urinary specimen\n● Residents with dementia were included if cooperative when collecting urine samples\n● No urostomy\n● No regular clean intermittent catheterisation\n● Not terminally ill\n● No ongoing peritoneal or haemodialysis\nThe following exclusion criterion was used:\n● If the resident did not agree to participate or discontinued study participation\n Statement of consent Residents were informed of the studies verbally and in writing. Informed approval for participation in the studies was collected from decision-capable individuals choosing to participate in the study. However, a considerable number of participants consisted of residents with varying degrees of dementia. 
If a resident was incapable of understanding the information, and thereby had reduced decision capability, he or she participated only as long as he or she did not oppose participation, and under the condition that appointed representatives or relatives did not oppose participation after having partaken of the study information. This procedure was approved by the Regional ethical review board of Gothenburg University.\n Study protocol In addition to collecting the urine sample, the attending nurse recorded in the study protocol for each included resident whether any symptoms had newly appeared or increased within the last month and were still present when the urine specimen was obtained. Nursing documentation and record keeping were used to obtain information about the presence or absence of symptoms one month prior to inclusion. The following nonspecific symptoms were registered: fatigue, restlessness, confusion, aggressiveness, loss of appetite, frequent falls and not being herself/himself, as well as symptoms from the urinary tract: dysuria, urinary urgency and frequency. 
It was also registered whether the resident had ongoing or previous antibiotic treatment within the last month, diabetes mellitus or dementia.\nTo avoid the presence of symptoms influencing what day the study protocol was completed and the urine specimen collected, there was a predetermined date for collection of the urine sample from each included resident.\n Laboratory tests Personnel at the nursing homes were instructed to collect a mid-stream morning urine sample, or a voided urine specimen with as long a bladder incubation time as possible. Immediately after collecting urine samples, dipstick urinalysis was carried out at the nursing home. Visual reading of the urine dipstick Multistix 5 (Siemens Healthcare Laboratory Diagnostics) was performed for the detection of nitrite and leukocyte esterase. Body temperature was measured by an ear thermometer.\nUrine specimens were cultured at the microbiology laboratory at Södra Älvsborg Hospital in Borås, Sweden using clinical routine procedures. 
The urine specimens were chilled before transport and usually arrived at the laboratory within 24 hours. As in clinical routine, the laboratory was provided information on the outcome of the dipstick urinalysis as well as information on any urinary tract specific UTI symptoms from the attending nurse.\nThe microbiology laboratory fractionated 10 μl urine on the surfaces of two plates: a cystine-lactose-electrolyte deficient agar (CLED) and a Columbia blood agar base. Plates were incubated overnight (minimum 15 h) at 35-37°C. CLED plates were incubated in air, and Columbia plates were incubated in 5% CO₂. The latter were further incubated for 24 hours if no growth occurred after the first incubation. Growth of bacteria was considered significant if the number of colony-forming units (CFU)/mL was ≥10⁵. However, at signs of possible UTI such as positive nitrite dipstick, leukocyte esterase dipstick >1, fever, frequency, urgency or dysuria, the cut-off point was ≥10³ for patients with growth of Escherichia coli (E. coli) and for male patients with Klebsiella species (spp.) and Enterococcus faecalis. For symptomatic women harbouring the two latter species the cut-off level was ≥10⁴. Nonspecific symptoms did not influence cut-off levels for CFU/mL in the urine cultures.\nMeasurements of the concentrations of IL-6 in the urine were performed with enzyme-linked immunosorbent assay (ELISA) using a commercial kit (Quantikine HS ELISA, High Sensitivity) [35] according to instructions from the manufacturer (R&D Systems, Abingdon, Oxford, UK) at the clinical immunology laboratory at Sahlgrenska University Hospital in Gothenburg, Sweden. 
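The species- and symptom-dependent significance thresholds above can be expressed as a small decision function. This is an illustrative reconstruction of the stated rules, not laboratory code; the function and parameter names are hypothetical:

```python
def significant_growth(cfu_per_ml, species, male, uti_signs):
    """Illustrative reconstruction of the culture cut-offs described above.

    uti_signs: any sign of possible UTI (positive nitrite dipstick,
    leukocyte esterase dipstick >1, fever, frequency, urgency or dysuria).
    Nonspecific symptoms deliberately do NOT lower the cut-off.
    """
    if cfu_per_ml >= 1e5:                      # general significance level
        return True
    if not uti_signs:                          # lower cut-offs require UTI signs
        return False
    if species == "Escherichia coli":          # >=10^3 CFU/mL for E. coli
        return cfu_per_ml >= 1e3
    if species in ("Klebsiella spp.", "Enterococcus faecalis"):
        # >=10^3 CFU/mL for men, >=10^4 CFU/mL for symptomatic women
        return cfu_per_ml >= (1e3 if male else 1e4)
    return False
```

For example, 5×10³ CFU/mL of E. coli with dysuria would count as significant, while the same count of Klebsiella spp. in a woman would not.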
Urine specimens for IL-6 analysis were frozen pending transport to the clinical immunology laboratory.\nConcentrations of creatinine in the urine were analysed by the automated general chemistry analyser UniCel® DxC 800 Synchron® Clinical System, according to instructions from the manufacturer (Beckman Coulter), at the clinical chemistry laboratory at Södra Älvsborg Hospital in Borås, Sweden.\n Statistical analysis The first objective was to clarify whether the concentrations of IL-6 in the urine or urine dipsticks differed between residents with or without bacteriuria. Creatinine-adjusted IL-6 was calculated. Concentrations of unadjusted and adjusted IL-6 in the urine and outcomes of urine dipstick analyses were compared between residents with positive and negative urine cultures, irrespective of symptoms, using the Mann-Whitney test for IL-6 (due to skewed data) and Pearson’s chi-square test for urine dipsticks.\nThe second and third objectives were to clarify whether a symptom correlated to bacteriuria or antibiotic usage. 
The prevalence of bacteriuria or use of antibiotics during the month preceding sampling of urine was compared between residents with or without symptoms using Pearson’s chi-square test. Fisher’s exact test was used in case of small numbers.\nThe fourth objective was to clarify if the concentrations of IL-6 or outcomes of urine dipsticks differed depending on symptoms in residents with bacteriuria. Concentrations of IL-6 in the urine or outcomes of dipstick analyses were compared between bacteriuric residents with or without symptoms using the Mann-Whitney test for IL-6 (due to skewed data) and Pearson’s chi-square test for dipsticks.\nThe fifth objective was to correlate factors with symptoms while adjusting for covariates.\nA cut-off was used to construct a dichotomous variable covering approximately 20% of the highest IL-6 concentrations (≥5 ng/L). A similar dichotomous variable was constructed for urine dipstick leukocyte esterase where ≥3+ was considered positive. Forward stepwise (conditional) logistic regressions were performed where the condition for entry was 0.050 and for removal 0.10. Variables that served well for the overall prediction were also kept in the model. Zero-order correlations between independent variables were checked and correlations >0.6 were not allowed. The independent variables, all but age being dichotomous, were: urine culture, IL-6 in the urine, leukocyte esterase dipstick, nitrite dipstick, antibiotics during the last month, age, gender, and presence of diabetes mellitus or dementia.\nIBM SPSS Statistics version 21 was used for statistical analysis.
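As a minimal sketch of these group comparisons with SciPy: a Mann-Whitney U test on synthetic, right-skewed IL-6 values (made up for illustration), and Pearson's chi-square on a 2×2 table built from the counts reported in the Results section (84/132 nitrite-positive among culture-positive residents versus 22/276 among culture-negative):

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
# Synthetic right-skewed urine IL-6 values (ng/L), culture-positive vs negative
il6_culture_pos = rng.lognormal(mean=1.0, sigma=1.0, size=135)
il6_culture_neg = rng.lognormal(mean=0.3, sigma=1.0, size=286)
u_stat, p_mw = mannwhitneyu(il6_culture_pos, il6_culture_neg,
                            alternative="two-sided")

# 2x2 table of nitrite dipstick result by urine culture result,
# using counts reported in the Results section
table = np.array([[84, 132 - 84],     # culture-positive: nitrite pos / neg
                  [22, 276 - 22]])    # culture-negative: nitrite pos / neg
chi2_stat, p_chi, dof, expected = chi2_contingency(table)
print(f"Mann-Whitney p = {p_mw:.2g}, chi-square p = {p_chi:.2g}")
```

Note that `chi2_contingency` applies Yates' continuity correction by default for 2×2 tables; whether SPSS's chi-square here used a correction is not stated in the text.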
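SPSS's forward (conditional) stepwise procedure is proprietary, but a simplified forward selection driven by likelihood-ratio tests conveys the idea. The sketch below runs on synthetic data and omits the removal step; the helper names (`fit_logit`, `forward_select`) and all data are made up for illustration, not the study's actual code:

```python
import numpy as np
from scipy.stats import chi2

def fit_logit(X, y, iters=25):
    """Fit a logistic regression (with intercept) by Newton-Raphson;
    return the maximised log-likelihood."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = (X * (p * (1 - p))[:, None]).T @ X      # Hessian
        beta += np.linalg.solve(H, X.T @ (y - p))   # Newton step
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def forward_select(names, data, y, p_enter=0.05):
    """Greedy forward selection: at each step add the candidate with the
    smallest likelihood-ratio p-value, while it stays below p_enter."""
    selected, ll_cur = [], fit_logit(np.empty((len(y), 0)), y)
    while True:
        best = None
        for c in (c for c in names if c not in selected):
            idx = [names.index(k) for k in selected + [c]]
            ll = fit_logit(data[:, idx], y)
            pval = chi2.sf(2 * (ll - ll_cur), df=1)
            if best is None or pval < best[1]:
                best = (c, pval, ll)
        if best is not None and best[1] < p_enter:
            selected.append(best[0])
            ll_cur = best[2]
        else:
            return selected

# Synthetic data: only "recent_antibiotics" truly predicts the symptom
rng = np.random.default_rng(1)
names = ["culture_pos", "il6_high", "recent_antibiotics"]
data = rng.integers(0, 2, size=(400, 3)).astype(float)
y = (rng.random(400) < 1 / (1 + np.exp(-(-1.2 + 1.3 * data[:, 2])))).astype(float)
print(forward_select(names, data, y))
```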
", " Studied population Inclusion criteria were fulfilled by 676 of 901 residents in 22 nursing homes, and 425 (63%) accepted participation (Figure 1). Voided urine specimens and symptom forms were provided from 421 residents, 295 (70%) women and 126 (30%) men. Women (mean 87 years, SD 6.4, range 63-100) were slightly older than men (mean 85 years, SD 7.1, range 65-100) (p = 0.0053).\nParticipant flow chart.\nAmong participating residents 56/421 (13%) suffered from diabetes mellitus and 228/421 (54%) had dementia. When urine specimens were collected, 18/421 (4.3%) were undergoing antibiotic treatment. Another 29/421 (6.9%) had no ongoing antibiotic treatment when the urine specimen was collected but had received antibiotics during the previous month. Measurement of body temperature was conclusive in 399/421 residents; none of these residents had a body temperature ≥38.0° Celsius.
 Bacterial findings There was significant growth of potentially pathogenic bacteria in 32% (135/421) of voided urine specimens. E. coli was by far the most common finding, present in 81% (109/135) of positive urine cultures. Klebsiella spp. were the second most common finding, present in 8.1% (11/135) of positive cultures. Proteus spp. were present in 3.0% (4/135) of positive cultures. Other species had very low prevalences, ≤1.5% of positive urine cultures for each species.\n IL-6 and creatinine in the urine Concentrations of IL-6 were analysed in urine specimens from 97% (409/421) of residents. 
In 2.9% (12/421) of residents, urine samples for IL-6 analyses were accidentally lost, or there was not enough urine for both culture and IL-6 analysis.\nConcentration of IL-6 in the urine had a mean of 3.4 ng/L (SD 5.9) and a median of 1.6 ng/L (interquartile range 0.7-4.1, range 0.20-62).\nConcentration of creatinine in the urine had a mean of 7.4 mmol/L (SD 4.0). Creatinine-adjusted concentration of IL-6 in the urine had a mean of 0.59 ng/mmol creatinine (SD 1.2) and a median of 0.23 ng/mmol creatinine (interquartile range 0.11-0.55, range 0.019-12). Pearson’s correlation coefficient between unadjusted urine IL-6 concentrations and creatinine-adjusted IL-6 concentrations was 0.86 (p < 10⁻⁶).\nUrine IL-6 concentrations were ≥5.0 ng/L in 18% (75/409) of residents and creatinine-adjusted IL-6 concentrations were ≥0.75 ng/mmol in 18% (75/409) of residents.
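The creatinine adjustment is a standard dilution correction: urine IL-6 in ng/L divided by urine creatinine in mmol/L gives ng IL-6 per mmol creatinine. A sketch on made-up values (all variable names and data are illustrative, not the study's dataset):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 409
il6_ng_per_l = rng.lognormal(0.5, 1.0, n)                       # urine IL-6, ng/L
creat_mmol_per_l = np.clip(rng.normal(7.4, 4.0, n), 1.0, None)  # urine creatinine, mmol/L

# Dilution-adjusted IL-6: ng IL-6 per mmol creatinine
il6_adj = il6_ng_per_l / creat_mmol_per_l

# Correlation between unadjusted and adjusted values (study reported r = 0.86)
r, p = pearsonr(il6_ng_per_l, il6_adj)

# Dichotomisation at the study's cut-offs (top ~20% of concentrations)
high_unadj = il6_ng_per_l >= 5.0
high_adj = il6_adj >= 0.75
```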
 IL-6 concentrations in the urine divided by positive and negative urine cultures Concentrations of IL-6 in the urine were higher (p = 0.000004) among residents with significant growth of bacteria in the urine; the mean IL-6 concentration was 5.1 ng/L (SD 8.7) and the median IL-6 concentration was 2.5 ng/L (interquartile range 1.0-5.7), compared to residents with negative urine cultures, where the mean IL-6 concentration was 2.6 ng/L (SD 3.6) and the median IL-6 concentration was 1.3 ng/L (interquartile range 0.6-2.8). The same applies for creatinine-adjusted IL-6 concentrations (p < 10⁻⁶).\nSimilarly, residents with positive urine cultures were more likely to have urine IL-6 ≥ 5.0 ng/L (p = 0.000053) and creatinine-adjusted IL-6 ≥ 0.75 ng/mmol (p = 0.000001) compared to those with negative urine cultures.\n Dipstick urinalysis Urine dipsticks were analysed for nitrite and leukocyte esterase in urine specimens from 408/421 residents. 
Urine dipstick analyses were not performed in 13/421 residents, mostly due to insufficient urine volume. Among all residents, regardless of bacteriuria or not, 26% (106/408) of nitrite dipsticks were positive and 22% (90/408) of leukocyte esterase dipsticks were ≥3+.\nLeukocyte esterase dipsticks ≥3+ were more common (p < 10⁻⁶) among residents with significant growth of bacteria in the urine; 46% (61/132) versus 11% (29/276) in residents with negative urine cultures. Positive nitrite dipsticks were more common (p < 10⁻⁶) among residents with positive urine cultures; 64% (84/132) versus 8.0% (22/276) in residents with negative urine cultures.\nSymptoms, bacteriuria and antibiotic treatments\nThe prevalence of new or increased symptoms, occurring during the last month and still present when urine specimens were obtained, is presented in Table 1. There were no significant differences in the proportion of positive urine cultures among those with or without nonspecific symptoms; however, there were fewer positive urine cultures among residents with urinary frequency (Table 1).
Residents with some of the symptoms had a higher prevalence of antibiotic treatments during the last month (Table 2).\nPrevalence of symptoms and positive urine cultures\n1Symptoms commencing at any time during the preceding month and still present when sampling urine.\n2Pearson’s chi-square and when appropriate Fisher’s exact test comparing proportions of positive urine cultures among those with or without symptoms.\nPrevalence of symptoms and antibiotic treatment\n1Symptoms commencing at any time during the preceding month and still present when sampling urine.\n2Antibiotic treatment given at any time during the month preceding sampling of urine.\n3Pearson’s chi-square and when appropriate Fisher’s exact test comparing proportion of antibiotic treatment among those with or without symptoms.
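The table footnotes above name Pearson’s chi-square, with Fisher’s exact test when appropriate, for comparing proportions. As an illustration using the reported 2×2 counts for leukocyte esterase ≥3+ by culture status (61/132 versus 29/276), both tests can be sketched as follows (assuming SciPy is available):

```python
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table from the reported counts.
# Rows: positive culture, negative culture.
# Columns: dipstick >=3+, dipstick <3+.
table = [[61, 132 - 61],
         [29, 276 - 29]]

# Pearson's chi-square test of independence.
chi2, p, dof, expected = chi2_contingency(table)

# Fisher's exact test, the authors' fallback when expected counts are
# small; with these counts it is also extremely significant.
odds_ratio, p_fisher = fisher_exact(table)
```

Both p-values fall well below 10⁻⁶, matching the significance level reported in the text.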
IL-6 and dipstick urinalyses in residents with bacteriuria\nIn residents exclusively with bacteriuria there were no significant differences in concentrations of urine IL-6 when comparing those with or without a new or increased symptom; fatigue (p = 0.24), restlessness (p = 0.40), confusion (p = 0.38), aggressiveness (p = 0.66), loss of appetite (p = 0.27), frequent falls (p = 0.15), not being herself/himself (p = 0.90), having any of the nonspecific symptoms (p = 0.69), dysuria (p = 0.13) and urinary urgency (p = 0.82).\nIn residents exclusively having bacteriuria there were no significant differences in the proportion of leukocyte esterase dipsticks ≥3+ when comparing those with or without new or increased symptoms; fatigue (p = 0.39), restlessness (p = 1.0), confusion (p = 1.0), aggressiveness (p = 0.62), loss of appetite (p = 1.0), frequent falls (p = 0.60), not being herself/himself (p = 1.0), having any of the nonspecific symptoms (p = 0.68), dysuria (p = 0.46) and urinary urgency (p = 0.34).
Similarly, there were no significant differences in the proportion of positive nitrite dipsticks when comparing those with or without new or increased symptoms.\nAll patients with urinary frequency had negative urine cultures.\nPredictors of symptoms\nA positive urine culture was only significant in the model predicting confusion, OR 0.15 (0.033-0.68; p = 0.014). However, it is important to note that the odds ratio was <1, i.e. positive urine cultures were less common among residents with confusion (Table 3).
As urine IL-6 ≥ 5 ng/L was also a significant predictor in this regression model for confusion, another regression was made where urine culture and urine IL-6 ≥ 5 ng/L were replaced by a combined dichotomous variable, positive if both IL-6 ≥ 5 ng/L and the urine culture were positive at the same time, and otherwise negative. This combined variable was, however, not a significant predictor of confusion.\nPredictors1 of new or increased symptoms commencing at any time during the preceding month and still present when sampling urine\n1Predictors in patients where a urine sample could be obtained and with information for all variables (n = 397). Forward stepwise (conditional) logistic regressions, where probability for entry was 0.050 and for removal 0.10, were used. Variables that served well for the overall prediction were also kept in the model. Outcomes are presented as odds ratios (95% CI with p-value) for variables included in the model. Urine dipstick (nitrite positive or leukocyte esterase being 3+ or 4+), age, gender or presence of diabetes mellitus did not reach the final model for any symptom. Nagelkerke’s R-square was used as a measure of the model’s ability to predict presence of a symptom.\n2With (=1) or without (=0) bacteriuria. The latter was the reference.\n3Interleukin-6 elevated (≥5 ng/L) or not. The latter was the reference.\n4Ongoing antibiotic treatment (n = 16) or having had antibiotics during the last month (n = 28).\n5None of the independent variables could predict either fatigue or restlessness.
", "Inclusion criteria were fulfilled by 676 of 901 residents in 22 nursing homes, and 425 (63%) accepted participation (Figure 1). Voided urine specimens and symptom forms were provided from 421 residents, 295 (70%) women and 126 (30%) men.
Women (mean 87 years, SD 6.4, range 63-100) were slightly older than men (mean 85 years, SD 7.1, range 65-100) (p = 0.0053).\nParticipant flow chart.\nAmong participating residents 56/421 (13%) suffered from diabetes mellitus and 228/421 (54%) had dementia. When urine specimens were collected, 18/421 (4.3%) were undergoing antibiotic treatment. Another 29/421 (6.9%) had no ongoing antibiotic treatment when the urine specimen was collected but had received antibiotics during the previous month. Measurement of body temperature was conclusive in 399/421 residents; none of these residents had a body temperature ≥38.0° Celsius.", "There was significant growth of potentially pathogenic bacteria in 32% (135/421) of voided urine specimens. E. coli was by far the most common finding, present in 81% (109/135) of positive urine cultures. Klebsiella spp. were the second most common finding, present in 8.1% (11/135) of positive cultures. Proteus spp. were present in 3.0% (4/135) of positive cultures. Other species had very low prevalences, ≤1.5% of positive urine cultures for each species.", "Recent onset of nonspecific symptoms was common among elderly residents of nursing homes. Positive urine cultures were as common in residents with nonspecific symptoms as in those without. Residents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with or without nonspecific symptoms.\nStrengths and limitations of the study\nA major strength of this study is that urine specimens were collected from every participating resident capable of providing a urine sample, regardless of the presence of symptoms. Therefore, this study can compare residents having symptoms with those without symptoms.\nIn this study we obtained urine specimens and study protocols from 47% (421/901) of individuals registered at the nursing homes. This may appear low but is similar to previously published studies in nursing homes [3]. The main reason for not participating was substantial urinary incontinence, often combined with dementia. Twenty-five percent (222/901) refused participation. Still, this may be considered acceptable when studying an elderly fragile population with a high proportion of residents with dementia, as well as the ethical requirement of approval from appointed representatives/relatives.\nAll individuals living at the nursing homes were asked to participate. Due to ethical considerations, it was not noted whether those who refused participation suffered from dementia or urinary incontinence too severe to be able to provide a urine sample. The same applied to one ward withdrawing during the ongoing study.
Thus, it is assumed that some of the patients excluded, since they refused participation, would not have been eligible for this study anyway. Knowing these numbers would probably have resulted in less exclusion due to a higher number of residents not meeting the inclusion criteria.\nThe main focus was non-specific symptoms, and the study had enough power to suggest that IL-6 does not play a role in determining if any non-specific symptom is caused by a UTI or something else. Furthermore, these results suggest that non-specific symptoms are, in most cases, unlikely to be caused by a UTI. However, the study is underpowered to clearly sort out these issues for each specific symptom.\nResidents with urinary catheters were not included in this study, therefore the results cannot be considered representative for residents with urinary catheters.\nA major strength of this study is that urine specimens were collected from every participating resident capable of providing a urine sample, regardless of the presence of symptoms. Therefore, this study can compare residents having symptoms with those without symptoms.\nIn this study we obtained urine specimens and study protocols from 47% (421/901) of individuals registered at the nursing homes. This may appear low but is similar to previously published studies in nursing homes [3]. The main reason for not participating was substantial urinary incontinence, often combined with dementia. Twenty-five percent (222/901) refused participation. Still this may be considered acceptable when studying an elderly fragile population with a high proportion of residents with dementia as well as the ethical requirement of approval from appointed representative/relatives.\nAll individuals living at the nursing homes were asked to participate. Due to ethical considerations, it was not noted whether those who refused participation suffered from dementia or urinary incontinence too severe to be able to provide a urine sample. 
The same applied to one ward withdrawing during the ongoing study. Thus, it is assumed that some of the patients excluded, since they refused participation, would not have been eligible for this study anyway. Knowing these numbers would probably have resulted in less exclusion due to a higher number of residents not meeting the inclusion criteria.\nThe main focus was non-specific symptoms, and the study had enough power to suggest that IL-6 does not play a role in determining if any non-specific symptom is caused by a UTI or something else. Furthermore, these results suggest that non-specific symptoms are, in most cases, unlikely to be caused by a UTI. However, the study is underpowered to clearly sort out these issues for each specific symptom.\nResidents with urinary catheters were not included in this study, therefore the results cannot be considered representative for residents with urinary catheters.\n Differentiating ABU versus UTI It is interesting to note that a positive urine culture was not commoner among residents with nonspecific symptoms compared to residents without symptoms. There was a trend (p = 0.057) toward a lower proportion of positive urine cultures among residents with confusion occurring during the last month (Table 1). This suggests that nonspecific symptoms are not caused by bacteria in the urine. Not considering other more plausible causes of the symptoms places the patient at risk for having other undiagnosed conditions. The UTI diagnosis is all too often made in the absence of newly onset focal urinary tract symptoms.\nProcedures utilizing presence of symptoms or outcomes of prior dipstick testing to influence setting of cut-off levels for CFU/mL in urine cultures to label growth as clinically significant may enhance the diagnostic procedure [36,37]. These procedures are common in microbiologic laboratories in Sweden and internationally. 
Using the routine clinical procedure increases clinical usefulness of the study results.\nResidents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations between those with or without nonspecific symptoms. Thus IL-6 concentrations are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. If nonspecific symptoms are not caused by bacteria in the urine, IL-6 concentrations cannot identify a subgroup of residents with more severe inflammation in the bladder correlating to nonspecific symptoms.\nThere were no differences either in urine dipstick analyses for nitrite or leukocyte esterase ≥3+ between residents with positive urine cultures when comparing those with or without symptoms. Subsequently urine dipsticks are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.\nIt is interesting to note that a positive urine culture was not commoner among residents with nonspecific symptoms compared to residents without symptoms. There was a trend (p = 0.057) toward a lower proportion of positive urine cultures among residents with confusion occurring during the last month (Table 1). This suggests that nonspecific symptoms are not caused by bacteria in the urine. Not considering other more plausible causes of the symptoms places the patient at risk for having other undiagnosed conditions. The UTI diagnosis is all too often made in the absence of newly onset focal urinary tract symptoms.\nProcedures utilizing presence of symptoms or outcomes of prior dipstick testing to influence setting of cut-off levels for CFU/mL in urine cultures to label growth as clinically significant may enhance the diagnostic procedure [36,37]. These procedures are common in microbiologic laboratories in Sweden and internationally. 
Using the routine clinical procedure increases clinical usefulness of the study results.\nResidents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations between those with or without nonspecific symptoms. Thus IL-6 concentrations are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. If nonspecific symptoms are not caused by bacteria in the urine, IL-6 concentrations cannot identify a subgroup of residents with more severe inflammation in the bladder correlating to nonspecific symptoms.\nThere were no differences either in urine dipstick analyses for nitrite or leukocyte esterase ≥3+ between residents with positive urine cultures when comparing those with or without symptoms. Subsequently urine dipsticks are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.\n Antibiotic treatment and negative urine culture Residents with recently onset confusion, loss of appetite, frequent falls and any of the nonspecific symptoms had oftener been prescribed antibiotics during the last month. This might explain the trend toward the lower prevalence of bacteriuria among residents with confusion. Also, in the logistic regressions, antibiotics during the previous month were a predictor of loss of appetite, frequent falls and “any of the nonspecific symptoms”. This supports previous studies showing that nonspecific symptoms were a common reason for suspecting UTI and the prescription of antibiotics [12-14,27]. These registered symptoms in this study might also reflect side effects of prescribed antibiotics as the elderly are more likely to retain side effects from antibiotics [38]. 
These residents could also represent a frailer population having more nonspecific symptoms, and also being more prone to infections, and consequently more antibiotic prescriptions.\nEven if this study suggests that nonspecific symptoms are not caused by bacteria in the urine, due to the possible confounders described above, the best proof would be a future randomized controlled trial evaluating UTI antibiotic treatment of nonspecific symptoms among elderly residents of nursing homes. However, an RCT in a large population of fragile elderly individuals, many with dementia and no possibility to give statement of consent would be very difficult to carry out.\nThis study primarily aimed to study non UTI specific symptoms. As UTI specific symptoms were less frequent, this study was partially underpowered regarding UTI specific symptoms. However, it is interesting to note that among all symptoms urinary frequency was the only symptom where the proportion of positive urine cultures differed from those not having this symptom. Those with urinary frequency had a lower proportion of positive urine cultures and a trend (not significant) towards a higher proportion of having had antibiotic treatment during the previous month. Another explanation for this could be a shorter bladder incubation time in that group.\nResidents with recently onset confusion, loss of appetite, frequent falls and any of the nonspecific symptoms had oftener been prescribed antibiotics during the last month. This might explain the trend toward the lower prevalence of bacteriuria among residents with confusion. Also, in the logistic regressions, antibiotics during the previous month were a predictor of loss of appetite, frequent falls and “any of the nonspecific symptoms”. This supports previous studies showing that nonspecific symptoms were a common reason for suspecting UTI and the prescription of antibiotics [12-14,27]. 
These registered symptoms in this study might also reflect side effects of prescribed antibiotics as the elderly are more likely to retain side effects from antibiotics [38]. These residents could also represent a frailer population having more nonspecific symptoms, and also being more prone to infections, and consequently more antibiotic prescriptions.\nEven if this study suggests that nonspecific symptoms are not caused by bacteria in the urine, due to the possible confounders described above, the best proof would be a future randomized controlled trial evaluating UTI antibiotic treatment of nonspecific symptoms among elderly residents of nursing homes. However, an RCT in a large population of fragile elderly individuals, many with dementia and no possibility to give statement of consent would be very difficult to carry out.\nThis study primarily aimed to study non UTI specific symptoms. As UTI specific symptoms were less frequent, this study was partially underpowered regarding UTI specific symptoms. However, it is interesting to note that among all symptoms urinary frequency was the only symptom where the proportion of positive urine cultures differed from those not having this symptom. Those with urinary frequency had a lower proportion of positive urine cultures and a trend (not significant) towards a higher proportion of having had antibiotic treatment during the previous month. Another explanation for this could be a shorter bladder incubation time in that group.", "A major strength of this study is that urine specimens were collected from every participating resident capable of providing a urine sample, regardless of the presence of symptoms. Therefore, this study can compare residents having symptoms with those without symptoms.\nIn this study we obtained urine specimens and study protocols from 47% (421/901) of individuals registered at the nursing homes. This may appear low but is similar to previously published studies in nursing homes [3]. 
The main reason for not participating was substantial urinary incontinence, often combined with dementia. Twenty-five percent (222/901) refused participation. Still this may be considered acceptable when studying an elderly fragile population with a high proportion of residents with dementia as well as the ethical requirement of approval from appointed representative/relatives.\nAll individuals living at the nursing homes were asked to participate. Due to ethical considerations, it was not noted whether those who refused participation suffered from dementia or urinary incontinence too severe to be able to provide a urine sample. The same applied to one ward withdrawing during the ongoing study. Thus, it is assumed that some of the patients excluded, since they refused participation, would not have been eligible for this study anyway. Knowing these numbers would probably have resulted in less exclusion due to a higher number of residents not meeting the inclusion criteria.\nThe main focus was non-specific symptoms, and the study had enough power to suggest that IL-6 does not play a role in determining if any non-specific symptom is caused by a UTI or something else. Furthermore, these results suggest that non-specific symptoms are, in most cases, unlikely to be caused by a UTI. However, the study is underpowered to clearly sort out these issues for each specific symptom.\nResidents with urinary catheters were not included in this study, therefore the results cannot be considered representative for residents with urinary catheters.", "It is interesting to note that a positive urine culture was not commoner among residents with nonspecific symptoms compared to residents without symptoms. There was a trend (p = 0.057) toward a lower proportion of positive urine cultures among residents with confusion occurring during the last month (Table 1). This suggests that nonspecific symptoms are not caused by bacteria in the urine. 
Not considering other more plausible causes of the symptoms places the patient at risk of having other undiagnosed conditions. The UTI diagnosis is all too often made in the absence of newly onset focal urinary tract symptoms.\nProcedures utilizing presence of symptoms or outcomes of prior dipstick testing to influence setting of cut-off levels for CFU/mL in urine cultures to label growth as clinically significant may enhance the diagnostic procedure [36,37]. These procedures are common in microbiologic laboratories in Sweden and internationally. Using the routine clinical procedure increases the clinical usefulness of the study results.\nResidents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations between those with or without nonspecific symptoms. Thus, IL-6 concentrations are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. If nonspecific symptoms are not caused by bacteria in the urine, IL-6 concentrations cannot identify a subgroup of residents with more severe inflammation in the bladder correlating to nonspecific symptoms.\nThere were no differences either in urine dipstick analyses for nitrite or leukocyte esterase ≥3+ between residents with positive urine cultures when comparing those with or without symptoms. Consequently, urine dipsticks are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.", "Residents with recently onset confusion, loss of appetite, frequent falls and any of the nonspecific symptoms had more often been prescribed antibiotics during the last month. This might explain the trend toward the lower prevalence of bacteriuria among residents with confusion. Also, in the logistic regressions, antibiotics during the previous month were a predictor of loss of appetite, frequent falls and “any of the nonspecific symptoms”. 
This supports previous studies showing that nonspecific symptoms were a common reason for suspecting UTI and the prescription of antibiotics [12-14,27].", "Recently onset nonspecific symptoms were common among elderly residents of nursing homes. 
Residents without nonspecific symptoms had positive urine cultures as often as those with nonspecific symptoms, suggesting that nonspecific symptoms are not caused by bacteria in the urine.\nResidents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with or without nonspecific symptoms. Thus, IL-6 concentrations in the urine and dipstick analyses are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.", "The authors declare that they have no competing interests.", "All authors participated in the design of the study. PDS and ME carried out the data collection. PDS analysed the data and drafted the manuscript. All authors contributed to interpretation of the analyses, critical reviews and revisions, and the final approval of the paper.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2318/14/88/prepub\n" ]
[ null, "methods", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion", null, null, null, "conclusions", null, null, null ]
[ "Interleukin-6", "Urinary tract infections", "Bacteriuria", "Homes for the aged", "Nursing homes", "Dipstick urinalysis", "Diagnostic tests" ]
Background: The presence of asymptomatic bacteriuria (ABU) among residents of nursing homes for the elderly varies between 25% and 50% for women and 15% and 40% for men [1-3]. There is overwhelming evidence that ABU should not be treated with antibiotics in an adult population except for pregnant women and patients prior to traumatic urologic interventions with mucosal bleeding [4-7]. The high prevalence of ABU makes it difficult to know if a new symptom in a resident with bacteriuria is caused by a urinary tract infection (UTI), or if the bacteria in the urine are only representative of an ABU [3,8-11]. This is especially difficult in the presence of symptoms not specific to the urinary tract such as fatigue, restlessness, confusion, aggressiveness, loss of appetite or frequent falls. Nonspecific symptoms such as changes in mental status are the most common reasons for suspecting a UTI among residents of nursing homes [12-14]. These symptoms can have many causes besides UTI [15]. There are different opinions on the possible connection between different nonspecific symptoms and UTI [10,16-26]. Nonspecific symptoms and diagnostic uncertainty often lead to antibiotic treatments of dubious value [8,14,27,28]. Urine culture alone seems inappropriate for evaluating symptoms among residents of nursing homes [10]. There are two major possible explanations: either common bacteria in the urine are of little relevance, or a urine culture is insufficient to identify UTI. With the emergence of multidrug-resistant bacteria and the antimicrobial drug discovery pipeline currently running dry, it is important not to misinterpret bacteriuria as UTI and prescribe antibiotics when it actually represents ABU. Thus, a complementary test to discriminate between symptomatic UTI and ABU is needed [29,30]. The cytokine Interleukin-6 (IL-6) is a mediator of inflammation playing an important role in the acute phase response and immune system regulation [29,31]. 
The biosynthesis of IL-6 is stimulated by e.g. bacteria [31]. After intravesical inoculation of patients with E. coli, all patients secreted IL-6 into the urine; however, serum concentrations of IL-6 did not increase, suggesting a dominance of local IL-6 production [32]. A symptomatic lower UTI is assumed to be associated with more severe inflammation in the bladder compared to an ABU. Previous studies suggested that concentrations of IL-6 in the urine may be valuable in discriminating between ABU and UTI in the elderly; however, this needs evaluation in a larger study among the elderly [9,33]. The aim of this study was to investigate the association between laboratory findings of bacteria in the urine, elevated IL-6 concentrations in the urine, dipstick urinalysis and new or increased symptoms in residents of nursing homes for the elderly. Methods: During the first three months of 2012, a study protocol was completed and single urine specimens collected from all included residents of 22 nursing homes in south-western Sweden. The attending nurses were provided detailed verbal and written information for the procedure. The study was approved by the Regional ethical review board of Gothenburg University (D-nr 578-11). The data was collected as part of another study of antimicrobial resistance in urinary pathogens among nursing home residents [34]. Inclusion and exclusion criteria Residents of the participating nursing homes, regardless of UTI symptoms, were invited to participate. 
Those accepting participation were included if they met the following inclusion criteria: ● Permanent residence in nursing homes for the elderly (regardless of gender) ● Presence at a nursing home for the elderly during the study ● Participation approval ● No indwelling urinary catheter ● Sufficiently continent to leave a voided urinary specimen ● Residents with dementia were included if cooperative when collecting urine samples ● No urostomy ● No regularly clean intermittent catheterisation ● Not terminally ill ● No ongoing peritoneal- or haemodialysis The following exclusion criterion was used: ● If the resident did not agree to participate or discontinued study participation Statement of consent Residents were informed of the studies verbally and in writing. Informed approval for participation in the studies was collected from decision-capable individuals choosing to participate in the study. However, a considerable number of participants consisted of residents with varying degrees of dementia. 
If the resident was incapable of understanding information and thereby possessed a reduced decision capability, these residents participated only so long as they did not oppose participation and under the condition that appointed representatives or relatives did not oppose their participation after having partaken of the study information. This procedure was approved by the Regional ethical review board of Gothenburg University. Study protocol In addition to collecting the urine sample, the attending nurse made an entry in the study protocol for each included resident whether having any symptoms, newly onset or increased within the last month and still present when the urine specimen was obtained. Nursing documentation and record keeping was used to obtain information about the presence or absence of symptoms one month prior to inclusion. The following nonspecific symptoms were registered: fatigue, restlessness, confusion, aggressiveness, loss of appetite, frequent falls and not being herself/himself, as well as symptoms from the urinary tract: dysuria, urinary urgency and frequency. It was also registered if the resident had ongoing or previous antibiotic treatment within the last month, diabetes mellitus or dementia. 
To avoid the presence of symptoms influencing what day the study protocol was completed and the urine specimen collected, there was a predetermined date for collection of the urine sample from each included resident. Laboratory tests Personnel at the nursing homes were instructed to collect a mid-stream morning urine sample, or a voided urine specimen with as long a bladder incubation time as possible. Immediately after collecting urine samples, dipstick urinalysis was carried out at the nursing home. Visual reading of the urine dipstick Multistix 5 (Siemens Healthcare Laboratory Diagnostics) was performed for the detection of nitrite and leukocyte esterase. Body temperature was measured by an ear thermometer. Urine specimens were cultured at the microbiology laboratory at Södra Älvsborg Hospital in Borås, Sweden using clinical routine procedure. The urine specimens were chilled before transport and usually arrived at the laboratory within 24 hours. 
As in clinical routine, the laboratory was provided information on the outcome of the dipstick urinalysis as well as information on any urinary tract specific UTI symptoms from the attending nurse. The microbiology laboratory fractionated 10 μl urine on the surfaces of two plates; a cystine-lactose-electrolyte deficient agar (CLED) and a Columbia blood agar base. Plates were incubated overnight (minimum 15 h) at 35-37°C. CLED plates were incubated in air, and Columbia plates were incubated in 5% CO2. The latter was further incubated for 24 hours if no growth occurred after the first incubation. Growth of bacteria was considered significant if the number of colony-forming units (CFU)/mL was ≥105. However, at signs of possible UTI such as positive nitrite dipstick, leukocyte esterase dipstick >1, fever, frequency, urgency or dysuria, the cut-off point was ≥103 for patients with growth of Escherichia coli (E. coli) and for male patients with Klebsiella species (spp.) and Enterococcus faecalis. For symptomatic women harbouring the two latter species the cut-off level was ≥104. Nonspecific symptoms did not influence cut-off levels for CFU/mL in the urine cultures. Measurements of the concentrations of IL-6 in the urine were performed with enzyme-linked immunosorbent assay (ELISA) using a commercial kit (Quantikine HS ELISA, High Sensitivity) [35] according to instructions from the manufacturer (R&D Systems, Abingdon, Oxford, UK) at the clinical immunology laboratory at Sahlgrenska University Hospital in Gothenburg, Sweden. Urine specimens for IL-6 analysis were frozen pending transport to the clinical immunology laboratory. Concentrations of creatinine in the urine were analysed by the automated general chemistry analyser UniCel® DxC 800 Synchron® Clinical System, according to instructions from the manufacturer (Beckman Coulter), at the clinical chemistry laboratory at Södra Älvsborg Hospital in Borås, Sweden. 
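The cut-off rules above amount to a small decision procedure. The sketch below encodes them as a hypothetical helper (the function name and signature are ours, not the laboratory's); `uti_signs` stands for any of the listed signs of possible UTI (positive nitrite, leukocyte esterase >1, fever, frequency, urgency or dysuria).

```python
def significant_growth(cfu_per_ml, species, male, uti_signs):
    """Apply the laboratory cut-offs described in the text (sketch).

    cfu_per_ml: colony-forming units per mL in the urine culture
    species:    cultured species, e.g. "E. coli" or "Klebsiella spp."
    male:       True for male residents
    uti_signs:  True if any sign of possible UTI was present
    """
    if cfu_per_ml >= 1e5:          # >=10^5 CFU/mL is always significant
        return True
    if not uti_signs:              # nonspecific symptoms never lower the cut-off
        return False
    if species == "E. coli":       # >=10^3 for symptomatic patients of either sex
        return cfu_per_ml >= 1e3
    if species in ("Klebsiella spp.", "Enterococcus faecalis"):
        return cfu_per_ml >= (1e3 if male else 1e4)  # men 10^3, women 10^4
    return False
```

Note that, as in the study, only urinary tract specific signs lower the threshold; nonspecific symptoms leave the default 10^5 CFU/mL cut-off in place.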
Statistical analysis The first objective was to clarify whether the concentrations of IL-6 in the urine or urine dipsticks differed between residents with or without bacteriuria. Creatinine adjusted IL-6 was calculated. Concentrations of unadjusted and adjusted IL-6 in the urine and outcome of urine dipstick analyses were compared between residents with positive and negative urine cultures, irrespective of symptoms, using the Mann-Whitney test for IL-6 (due to skewed data) and the Pearson’s chi-square test for urine dipsticks. The second and third objectives were to clarify whether a symptom correlated to bacteriuria or antibiotic usage. 
The prevalence of bacteriuria or use of antibiotics during the month preceding sampling of urine was compared between residents with or without symptoms using Pearson’s chi-square test. Fisher’s exact test was used in case of small numbers. The fourth objective was to clarify if the concentrations of IL-6 or outcomes of urine dipsticks differed depending on symptoms in residents with bacteriuria. Concentrations of IL-6 in the urine or outcome of dipstick analyses were compared between bacteriuric residents with or without symptoms using Mann-Whitney’s test for IL-6 (due to skewed data) and Pearson’s chi-square test for dipsticks. The fifth objective was to correlate factors with symptoms while adjusting for covariates. A cut-off was used to construct a dichotomous variable covering approximately 20% of the highest IL-6 concentrations (≥5 ng/L). A similar dichotomous variable was constructed for urine dipstick leukocyte esterase where ≥3+ was considered positive. Forward stepwise (conditional) logistic regressions were performed where the condition for entry was 0.050 and for removal 0.10. Variables that served well for the overall prediction were also kept in the model. Zero-order correlations between independent variables were checked and correlations >0.6 were not allowed. The independent variables, all but age being dichotomous, were: urine culture, IL-6 in the urine, leukocyte esterase dipstick, nitrite dipstick, antibiotics during the last month, age, gender, and presence of diabetes mellitus or dementia. IBM SPSS Statistics version 21 was used for statistical analysis. 
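The two group comparisons used throughout (Mann-Whitney for skewed IL-6, Pearson's chi-square for dipstick outcomes) can be sketched in plain Python. The numbers below are illustrative placeholders, not study data; in practice the U and chi-square statistics would be referred to their null distributions (e.g. via SciPy or SPSS, as in the study) to obtain p-values.

```python
def mann_whitney_u(a, b):
    """U statistic for group a: count of pairs (x, y) with x > y; ties count 0.5."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

def chi_square_stat(table):
    """Pearson's chi-square statistic for a contingency table: sum((O-E)^2 / E)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    return sum(
        (obs - row_tot[i] * col_tot[j] / n) ** 2 / (row_tot[i] * col_tot[j] / n)
        for i, row in enumerate(table)
        for j, obs in enumerate(row)
    )

# Illustrative urine IL-6 values (ng/L), culture-positive vs culture-negative:
u = mann_whitney_u([2.5, 5.7, 1.0, 8.3, 3.2, 6.1], [1.3, 0.6, 2.8, 0.9, 1.1, 2.2])

# Illustrative 2x2 table: rows = culture +/-, columns = dipstick nitrite +/-
chi2 = chi_square_stat([[40, 95], [12, 274]])
```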
Results: Studied population Inclusion criteria were fulfilled by 676 of 901 residents in 22 nursing homes, and 425 (63%) accepted participation (Figure 1). Voided urine specimens and symptom forms were provided from 421 residents, 295 (70%) women and 126 (30%) men. Women (mean 87 years, SD 6.4, range 63-100) were slightly older than men (mean 85 years, SD 7.1, range 65-100) (p = 0.0053). 
Participant flow chart. Among participating residents 56/421 (13%) suffered from diabetes mellitus and 228/421 (54%) had dementia. When urine specimens were collected, 18/421 (4.3%) were undergoing antibiotic treatment. Another 29/421 (6.9%) had no ongoing antibiotic treatment when the urine specimen was collected but had received antibiotics during the previous month. Measurement of body temperature was conclusive in 399/421 residents; none of these residents had a body temperature ≥38.0° Celsius.

Bacterial findings

There was significant growth of potentially pathogenic bacteria in 32% (135/421) of voided urine specimens. E. coli was by far the most common finding, present in 81% (109/135) of positive urine cultures. Klebsiella spp. were the second most common finding, present in 8.1% (11/135) of positive cultures, and Proteus spp. were present in 3.0% (4/135). Other species each had a very low prevalence, ≤1.5% of positive urine cultures.

IL-6 and creatinine in the urine

Concentrations of IL-6 were analysed in urine specimens from 97% (409/421) of residents. In 2.9% (12/421) of residents, urine samples for IL-6 analyses were accidentally lost, or there was not enough urine for both culture and IL-6 analysis. The concentration of IL-6 in the urine had a mean of 3.4 ng/L (SD 5.9) and a median of 1.6 ng/L (interquartile range 0.7-4.1, range 0.20-62). The concentration of creatinine in the urine had a mean of 7.4 mmol/L (SD 4.0). The creatinine-adjusted concentration of IL-6 in the urine had a mean of 0.59 ng/mmol creatinine (SD 1.2) and a median of 0.23 ng/mmol creatinine (interquartile range 0.11-0.55, range 0.019-12). Pearson’s correlation coefficient between unadjusted and creatinine-adjusted urine IL-6 concentrations was 0.86 (p < 10-6). Urine IL-6 concentrations were ≥5.0 ng/L in 18% (75/409) of residents, and creatinine-adjusted IL-6 concentrations were ≥0.75 ng/mmol in 18% (75/409) of residents.
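The creatinine adjustment used above is a simple ratio: IL-6 in ng/L divided by urine creatinine in mmol/L gives ng/mmol, which compensates for urine dilution and is then dichotomised at the study cut-offs. A minimal sketch, using the cohort means reported above as illustrative inputs (the helper name is ours, not the study’s):

```python
def creatinine_adjust(il6_ng_per_l, creatinine_mmol_per_l):
    """Express urine IL-6 per mmol of creatinine to compensate for urine dilution."""
    return il6_ng_per_l / creatinine_mmol_per_l

# cohort means reported above: IL-6 3.4 ng/L, creatinine 7.4 mmol/L
adjusted = creatinine_adjust(3.4, 7.4)   # ng/mmol creatinine
il6_elevated = 3.4 >= 5.0                # unadjusted cut-off (>= 5.0 ng/L)
adjusted_elevated = adjusted >= 0.75     # creatinine-adjusted cut-off (>= 0.75 ng/mmol)
```

At the cohort means, neither dichotomous variable is positive, consistent with the cut-offs capturing roughly the top fifth of the distribution.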
IL-6 concentrations in the urine divided by positive and negative urine cultures

Concentrations of IL-6 in the urine were higher (p = 0.000004) among residents with significant growth of bacteria in the urine: the mean IL-6 concentration was 5.1 ng/L (SD 8.7) and the median 2.5 ng/L (interquartile range 1.0-5.7), compared with a mean of 2.6 ng/L (SD 3.6) and a median of 1.3 ng/L (interquartile range 0.6-2.8) among residents with negative urine cultures. The same applied to creatinine-adjusted IL-6 concentrations (p < 10-6). Similarly, residents with positive urine cultures were more likely to have urine IL-6 ≥ 5.0 ng/L (p = 0.000053) and creatinine-adjusted IL-6 ≥ 0.75 ng/mmol (p = 0.000001) than those with negative urine cultures.

Dipstick urinalysis

Urine dipsticks were analysed for nitrite and leukocyte esterase in urine specimens from 408/421 residents.
Urine dipstick analyses were not performed in 13/421 residents, mostly due to insufficient urine volume. Among all residents, regardless of bacteriuria, 26% (106/408) of nitrite dipsticks were positive and 22% (90/408) of leukocyte esterase dipsticks were ≥3+. Leukocyte esterase dipsticks ≥3+ were more common (p < 10-6) among residents with significant growth of bacteria in the urine: 46% (61/132) versus 11% (29/276) in residents with negative urine cultures. Positive nitrite dipsticks were also more common (p < 10-6) among residents with positive urine cultures: 64% (84/132) versus 8.0% (22/276) in residents with negative urine cultures.

Symptoms, bacteriuria and antibiotic treatments

The prevalence of new or increased symptoms, occurring during the last month and still present when urine specimens were obtained, is presented in Table 1. There were no significant differences in the proportion of positive urine cultures between those with and without nonspecific symptoms; however, there were fewer positive urine cultures among residents with urinary frequency (Table 1). Residents with some of the symptoms had a higher prevalence of antibiotic treatment during the last month (Table 2).
Prevalence of symptoms and positive urine cultures. 1Symptoms commencing at any time during the preceding month and still present when sampling urine. 2Pearson’s chi-square and, when appropriate, Fisher’s exact test comparing proportions of positive urine cultures among those with or without symptoms.

Prevalence of symptoms and antibiotic treatment. 1Symptoms commencing at any time during the preceding month and still present when sampling urine. 2Antibiotic treatment given at any time during the month preceding sampling of urine. 3Pearson’s chi-square and, when appropriate, Fisher’s exact test comparing the proportion of antibiotic treatment among those with or without symptoms.
IL-6 and dipstick urinalyses in residents with bacteriuria

In the subgroup of residents with bacteriuria there were no significant differences in concentrations of urine IL-6 between those with and without a new or increased symptom: fatigue (p = 0.24), restlessness (p = 0.40), confusion (p = 0.38), aggressiveness (p = 0.66), loss of appetite (p = 0.27), frequent falls (p = 0.15), not being herself/himself (p = 0.90), having any of the nonspecific symptoms (p = 0.69), dysuria (p = 0.13) and urinary urgency (p = 0.82). Likewise, there were no significant differences in the proportion of leukocyte esterase dipsticks ≥3+: fatigue (p = 0.39), restlessness (p = 1.0), confusion (p = 1.0), aggressiveness (p = 0.62), loss of appetite (p = 1.0), frequent falls (p = 0.60), not being herself/himself (p = 1.0), having any of the nonspecific symptoms (p = 0.68), dysuria (p = 0.46) and urinary urgency (p = 0.34). Similarly, there were no significant differences in the proportion of positive nitrite dipsticks between those with and without new or increased symptoms. All patients with urinary frequency had a negative urine culture.
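The Mann-Whitney comparisons applied to the skewed IL-6 distributions above reduce to a rank-sum statistic. A minimal sketch, assigning mid-ranks to ties; the IL-6 values in the usage line are illustrative, not study data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y (ties get mid-ranks)."""
    pooled = sorted((v, idx) for idx, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid_rank = (i + j) / 2 + 1          # average rank of the tied block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid_rank
        i = j + 1
    rank_sum_x = sum(ranks[:len(x)])        # first len(x) indices belong to x
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# illustrative IL-6 values (ng/L): culture-positive vs culture-negative residents
u = mann_whitney_u([2.5, 5.7, 1.0, 8.2], [1.3, 0.6, 2.8, 0.7])
```

The statistic ranges from 0 (every value in x below every value in y) to len(x)·len(y); the p-value would come from its null distribution, which is omitted here.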
Predictors of symptoms

A positive urine culture was significant only in the model predicting confusion, OR 0.15 (0.033-0.68; p = 0.014). However, it is important to note that the odds ratio was <1, i.e. positive urine cultures were less common among residents with confusion (Table 3). As urine IL-6 ≥ 5 ng/L was also a significant predictor in this regression model for confusion, another regression was made in which urine culture and urine IL-6 ≥ 5 ng/L were replaced by a combined dichotomous variable, positive if IL-6 was ≥ 5 ng/L and the urine culture was positive at the same time, and otherwise negative. This combined variable was, however, not a significant predictor of confusion.

Predictors1 of new or increased symptoms commencing at any time during the preceding month and still present when sampling urine. 1Predictors in patients from whom a urine sample could be obtained and with information for all variables (n = 397). Forward stepwise (conditional) logistic regressions with a probability of 0.050 for entry and 0.10 for removal were used. Variables that served well for the overall prediction were also kept in the model. Outcome presented as odds ratios (95% CI with p-value) for variables included in the model.
Urine dipstick (nitrite positive or leukocyte esterase 3+ or 4+), age, gender and presence of diabetes mellitus did not reach the final model for any symptom. Nagelkerke’s R-square is given as a measure of the model’s ability to predict the presence of a symptom. 2With (=1) or without (=0) bacteriuria. The latter was the reference. 3Interleukin-6 elevated (≥5 ng/L) or not. The latter was the reference. 4Ongoing antibiotic treatment (n = 16) or having had antibiotics during the last month (n = 28). 5None of the independent variables could predict either fatigue or restlessness.
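The forward-selection procedure described above can be illustrated with a small, self-contained sketch: maximum-likelihood logistic regression fitted by Newton-Raphson, with candidate variables entered greedily by likelihood-ratio p-value at the 0.05 entry threshold. This is our illustrative reconstruction on synthetic data, not the SPSS implementation (which additionally applies a removal step at 0.10 and conditional, rather than likelihood-ratio, tests).

```python
from math import exp, log, erfc, sqrt

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_logit(X, y, iterations=25):
    """Logistic regression by Newton-Raphson; returns (beta, log-likelihood).
    An intercept column is added internally."""
    rows = [[1.0] + list(r) for r in X]
    dim = len(rows[0])
    beta = [0.0] * dim
    for _ in range(iterations):
        grad = [0.0] * dim
        hess = [[0.0] * dim for _ in range(dim)]
        for r, yi in zip(rows, y):
            mu = 1.0 / (1.0 + exp(-sum(b * v for b, v in zip(beta, r))))
            w = mu * (1.0 - mu)
            for j in range(dim):
                grad[j] += (yi - mu) * r[j]
                for k in range(dim):
                    hess[j][k] += w * r[j] * r[k]
        beta = [b + s for b, s in zip(beta, solve(hess, grad))]
    ll = 0.0
    for r, yi in zip(rows, y):
        mu = 1.0 / (1.0 + exp(-sum(b * v for b, v in zip(beta, r))))
        ll += yi * log(mu) + (1 - yi) * log(1.0 - mu)
    return beta, ll

def lrt_p(ll_small, ll_big):
    """Likelihood-ratio test p-value for one added parameter (chi-square, 1 df)."""
    return erfc(sqrt(max(0.0, ll_big - ll_small)))

def forward_step(columns, y, p_enter=0.05):
    """Greedy forward selection: enter the variable with the smallest LRT p < p_enter."""
    chosen = []
    _, ll_current = fit_logit([[] for _ in y], y)  # intercept-only model
    while True:
        best = None
        for name, col in columns.items():
            if name in chosen:
                continue
            X = [[columns[c][i] for c in chosen] + [col[i]] for i in range(len(y))]
            _, ll = fit_logit(X, y)
            p = lrt_p(ll_current, ll)
            if p < p_enter and (best is None or p < best[1]):
                best = (name, p, ll)
        if best is None:
            return chosen
        chosen.append(best[0])
        ll_current = best[2]

# synthetic data: the outcome y is driven by x1; x2 is pure noise
y = [1] * 20 + [0] * 20
x1 = [1] * 16 + [0] * 4 + [1] * 4 + [0] * 16
x2 = [i % 2 for i in range(40)]
selected = forward_step({"x1": x1, "x2": x2}, y)
```

On this synthetic data only the informative variable enters the model, mirroring how weak predictors (dipsticks, age, gender, diabetes in the study) fail to reach the final model.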
Discussion

Recent onset of nonspecific symptoms was common among elderly residents of nursing homes. Positive urine cultures were as common in residents with as without nonspecific symptoms. Residents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with or without nonspecific symptoms.

Strengths and limitations of the study

A major strength of this study is that urine specimens were collected from every participating resident capable of providing a urine sample, regardless of the presence of symptoms. Therefore, this study can compare residents having symptoms with those without symptoms.
In this study we obtained urine specimens and study protocols from 47% (421/901) of the individuals registered at the nursing homes. This may appear low but is similar to previously published studies in nursing homes [3]. The main reason for not participating was substantial urinary incontinence, often combined with dementia. Twenty-five percent (222/901) refused participation. Still, this may be considered acceptable when studying an elderly, fragile population with a high proportion of residents with dementia, and given the ethical requirement of approval from an appointed representative or relatives. All individuals living at the nursing homes were asked to participate. Owing to ethical considerations, it was not noted whether those who refused participation also suffered from dementia or from urinary incontinence too severe to provide a urine sample; the same applied to one ward that withdrew during the ongoing study. Thus, it is assumed that some of the patients excluded because they refused participation would not have been eligible for this study anyway; knowing these numbers would probably have resulted in fewer exclusions for refusal and more residents recorded as not meeting the inclusion criteria. The main focus was nonspecific symptoms, and the study had enough power to suggest that IL-6 does not help to determine whether a nonspecific symptom is caused by a UTI or something else. Furthermore, these results suggest that nonspecific symptoms are, in most cases, unlikely to be caused by a UTI. However, the study is underpowered to clearly sort out these issues for each specific symptom. Residents with urinary catheters were not included in this study; therefore, the results cannot be considered representative of residents with urinary catheters.
Differentiating ABU versus UTI

It is interesting to note that a positive urine culture was no more common among residents with nonspecific symptoms than among residents without symptoms. There was a trend (p = 0.057) toward a lower proportion of positive urine cultures among residents with confusion occurring during the last month (Table 1). This suggests that nonspecific symptoms are not caused by bacteria in the urine. Not considering other, more plausible causes of the symptoms places the patient at risk of other conditions remaining undiagnosed. The UTI diagnosis is all too often made in the absence of newly onset focal urinary tract symptoms. Procedures that use the presence of symptoms, or the outcome of prior dipstick testing, to set the CFU/mL cut-off at which growth in urine cultures is labelled clinically significant may enhance the diagnostic procedure [36,37]. These procedures are common in microbiological laboratories in Sweden and internationally; using the routine clinical procedure increases the clinical usefulness of the study results. Residents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations between those with or without nonspecific symptoms. Thus, IL-6 concentrations are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. If nonspecific symptoms are not caused by bacteria in the urine, IL-6 concentrations cannot identify a subgroup of residents with more severe bladder inflammation correlating with nonspecific symptoms. Nor were there any differences in urine dipstick analyses for nitrite or leukocyte esterase ≥3+ among residents with positive urine cultures when comparing those with or without symptoms. Consequently, urine dipsticks are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.
Antibiotic treatment and negative urine culture

Residents with recent onset of confusion, loss of appetite, frequent falls or any of the nonspecific symptoms had more often been prescribed antibiotics during the last month. This might explain the trend toward a lower prevalence of bacteriuria among residents with confusion. Also, in the logistic regressions, antibiotics during the previous month were a predictor of loss of appetite, frequent falls and “any of the nonspecific symptoms”. This supports previous studies showing that nonspecific symptoms were a common reason for suspecting UTI and prescribing antibiotics [12-14,27]. The symptoms registered in this study might also reflect side effects of prescribed antibiotics, as the elderly are more likely to experience side effects from antibiotics [38]. These residents could also represent a frailer population having more nonspecific symptoms and being more prone to infections, and consequently receiving more antibiotic prescriptions. Even if this study suggests that nonspecific symptoms are not caused by bacteria in the urine, given the possible confounders described above the best proof would be a future randomized controlled trial evaluating UTI antibiotic treatment of nonspecific symptoms among elderly residents of nursing homes. However, an RCT in a large population of fragile elderly individuals, many with dementia and unable to give consent, would be very difficult to carry out. This study primarily aimed to study non-UTI-specific symptoms. As UTI-specific symptoms were less frequent, the study was partially underpowered regarding them. However, it is interesting to note that, among all symptoms, urinary frequency was the only one for which the proportion of positive urine cultures differed from that in residents without the symptom.
Those with urinary frequency had a lower proportion of positive urine cultures and a trend (not significant) toward a higher proportion of antibiotic treatment during the previous month. Another explanation could be a shorter bladder incubation time in that group. Residents with recent-onset confusion, loss of appetite, frequent falls, or any of the nonspecific symptoms had more often been prescribed antibiotics during the previous month. This might explain the trend toward a lower prevalence of bacteriuria among residents with confusion. Also, in the logistic regressions, antibiotics during the previous month were a predictor of loss of appetite, frequent falls and “any of the nonspecific symptoms”. This supports previous studies showing that nonspecific symptoms were a common reason for suspecting UTI and prescribing antibiotics [12-14,27]. The symptoms registered in this study might also reflect side effects of prescribed antibiotics, as the elderly are more likely to experience antibiotic side effects [38]. These residents could also represent a frailer population with more nonspecific symptoms, more prone to infections and consequently receiving more antibiotic prescriptions. Although this study suggests that nonspecific symptoms are not caused by bacteria in the urine, given the possible confounders described above the best evidence would come from a future randomized controlled trial evaluating UTI antibiotic treatment of nonspecific symptoms among elderly residents of nursing homes. However, an RCT in a large population of frail elderly individuals, many with dementia and unable to provide informed consent, would be very difficult to carry out. This study primarily aimed to study non-UTI-specific symptoms. As UTI-specific symptoms were less frequent, the study was partially underpowered regarding them. 
However, it is interesting to note that, among all symptoms, urinary frequency was the only one for which the proportion of positive urine cultures differed from that of residents without the symptom. Those with urinary frequency had a lower proportion of positive urine cultures and a trend (not significant) toward a higher proportion of antibiotic treatment during the previous month. Another explanation could be a shorter bladder incubation time in that group. Strengths and limitations of the study: A major strength of this study is that urine specimens were collected from every participating resident capable of providing a urine sample, regardless of the presence of symptoms. Therefore, this study can compare residents with symptoms to those without symptoms. We obtained urine specimens and study protocols from 47% (421/901) of the individuals registered at the nursing homes. This may appear low but is similar to previously published studies in nursing homes [3]. The main reason for not participating was substantial urinary incontinence, often combined with dementia. Twenty-five percent (222/901) refused participation. Still, this may be considered acceptable when studying a frail elderly population with a high proportion of residents with dementia, and given the ethical requirement of approval from an appointed representative or relative. All individuals living at the nursing homes were asked to participate. Due to ethical considerations, it was not recorded whether those who refused participation also suffered from dementia or from urinary incontinence too severe to provide a urine sample; the same applies to one ward that withdrew during the ongoing study. Thus, it is assumed that some of the residents excluded because they refused participation would not have been eligible for the study anyway. Knowing these numbers would probably have shifted some exclusions from refusal to not meeting the inclusion criteria. 
The main focus was nonspecific symptoms, and the study had enough power to suggest that IL-6 does not play a role in determining whether a nonspecific symptom is caused by a UTI or something else. Furthermore, these results suggest that nonspecific symptoms are, in most cases, unlikely to be caused by a UTI. However, the study is underpowered to clearly sort out these issues for each specific symptom. Residents with urinary catheters were not included in this study; therefore, the results cannot be considered representative of residents with urinary catheters. Differentiating ABU versus UTI: Notably, a positive urine culture was no more common among residents with nonspecific symptoms than among residents without symptoms. There was a trend (p = 0.057) toward a lower proportion of positive urine cultures among residents with confusion occurring during the last month (Table 1). This suggests that nonspecific symptoms are not caused by bacteria in the urine. Failing to consider other, more plausible causes of the symptoms places the patient at risk of undiagnosed conditions. The UTI diagnosis is all too often made in the absence of new-onset focal urinary tract symptoms. Procedures that use the presence of symptoms, or the outcome of prior dipstick testing, to set the CFU/mL cut-off at which growth in urine cultures is labelled clinically significant may enhance the diagnostic process [36,37]. Such procedures are common in microbiological laboratories in Sweden and internationally, and using the routine clinical procedure increases the clinical usefulness of the study results. Residents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations between those with and without nonspecific symptoms. 
Thus, IL-6 concentrations are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. If nonspecific symptoms are not caused by bacteria in the urine, IL-6 concentrations cannot identify a subgroup of residents with more severe inflammation in the bladder correlating with nonspecific symptoms. Nor were there any differences in urine dipstick analyses for nitrite or leukocyte esterase ≥3+ among residents with positive urine cultures when comparing those with and without symptoms. Consequently, urine dipsticks are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. Antibiotic treatment and negative urine culture: Residents with recent-onset confusion, loss of appetite, frequent falls, or any of the nonspecific symptoms had more often been prescribed antibiotics during the previous month. This might explain the trend toward a lower prevalence of bacteriuria among residents with confusion. Also, in the logistic regressions, antibiotics during the previous month were a predictor of loss of appetite, frequent falls and “any of the nonspecific symptoms”. This supports previous studies showing that nonspecific symptoms were a common reason for suspecting UTI and prescribing antibiotics [12-14,27]. The symptoms registered in this study might also reflect side effects of prescribed antibiotics, as the elderly are more likely to experience antibiotic side effects [38]. These residents could also represent a frailer population with more nonspecific symptoms, more prone to infections and consequently receiving more antibiotic prescriptions. Although this study suggests that nonspecific symptoms are not caused by bacteria in the urine, given the possible confounders described above the best evidence would come from a future randomized controlled trial evaluating UTI antibiotic treatment of nonspecific symptoms among elderly residents of nursing homes. 
However, an RCT in a large population of frail elderly individuals, many with dementia and unable to provide informed consent, would be very difficult to carry out. This study primarily aimed to study non-UTI-specific symptoms. As UTI-specific symptoms were less frequent, the study was partially underpowered regarding them. However, it is interesting to note that, among all symptoms, urinary frequency was the only one for which the proportion of positive urine cultures differed from that of residents without the symptom. Those with urinary frequency had a lower proportion of positive urine cultures and a trend (not significant) toward a higher proportion of antibiotic treatment during the previous month. Another explanation could be a shorter bladder incubation time in that group. Conclusions: Recent-onset nonspecific symptoms were common among elderly residents of nursing homes. Residents without nonspecific symptoms had positive urine cultures as often as those with nonspecific symptoms, suggesting that nonspecific symptoms are not caused by bacteria in the urine. Residents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with and without nonspecific symptoms. Thus, IL-6 concentrations in the urine and dipstick analyses are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: All authors participated in the design of the study. PDS and ME carried out the data collection. PDS analysed the data and drafted the manuscript. All authors contributed to the interpretation of the analyses, critical reviews and revisions, and the final approval of the paper. 
Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2318/14/88/prepub
Background: Up to half the residents of nursing homes for the elderly have asymptomatic bacteriuria (ABU), which should not be treated with antibiotics. A complementary test to discriminate between symptomatic urinary tract infections (UTI) and ABU is needed, as diagnostic uncertainty is likely to generate significant antibiotic overtreatment. Previous studies indicate that Interleukin-6 (IL-6) in the urine might be suitable as such a test. The aim of this study was to investigate the association between laboratory findings of bacteriuria, IL-6 in the urine, dipstick urinalysis and new-onset symptoms among residents of nursing homes. Methods: In this cross-sectional study, voided urine specimens for culture, urine dipstick and IL-6 analyses were collected from all residents capable of providing a voided urine sample, regardless of the presence of symptoms. Urine specimens and symptom forms were provided by 421 residents of 22 nursing homes. The following new or increased nonspecific symptoms occurring during the previous month were registered: fatigue, restlessness, confusion, aggressiveness, loss of appetite, frequent falls and not being herself/himself, as well as symptoms from the urinary tract: dysuria, urinary urgency and frequency. Results: Recent onset of nonspecific symptoms was common among elderly residents of nursing homes (85/421). Urine cultures were positive in 32% (135/421); Escherichia coli was by far the most common bacterial finding. Residents without nonspecific symptoms had positive urine cultures as often as those with nonspecific symptoms with a duration of up to one month. Residents with positive urine cultures had higher concentrations of IL-6 in the urine (p < 0.001). However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with and without nonspecific symptoms. 
Conclusions: Nonspecific symptoms among elderly residents of nursing homes are unlikely to be caused by bacteria in the urine. This study could not establish any clinical value of using dipstick urinalysis or IL-6 in the urine to verify whether bacteriuria was linked to nonspecific symptoms.
Background: The prevalence of asymptomatic bacteriuria (ABU) among residents of nursing homes for the elderly varies between 25% and 50% for women and between 15% and 40% for men [1-3]. There is overwhelming evidence that ABU should not be treated with antibiotics in an adult population, except for pregnant women and patients prior to traumatic urologic interventions with mucosal bleeding [4-7]. The high prevalence of ABU makes it difficult to know whether a new symptom in a resident with bacteriuria is caused by a urinary tract infection (UTI) or whether the bacteria in the urine merely represent ABU [3,8-11]. This is especially difficult in the presence of symptoms not specific to the urinary tract, such as fatigue, restlessness, confusion, aggressiveness, loss of appetite or frequent falls. Nonspecific symptoms such as changes in mental status are the most common reasons for suspecting a UTI among residents of nursing homes [12-14]. These symptoms can have many causes besides UTI [15]. There are differing opinions on the possible connection between various nonspecific symptoms and UTI [10,16-26]. Nonspecific symptoms and diagnostic uncertainty often lead to antibiotic treatments of dubious value [8,14,27,28]. Urine culture alone seems inappropriate for evaluating symptoms among residents of nursing homes [10]. There are two major possible explanations: either common bacteria in the urine are of little relevance, or a urine culture is insufficient to identify UTI. With the emergence of multidrug-resistant bacteria and the antimicrobial drug discovery pipeline currently running dry, it is important not to misinterpret bacteriuria as UTI and prescribe antibiotics when it actually represents ABU. Thus, a complementary test to discriminate between symptomatic UTI and ABU is needed [29,30]. The cytokine Interleukin-6 (IL-6) is a mediator of inflammation that plays an important role in the acute-phase response and in immune system regulation [29,31]. 
The biosynthesis of IL-6 is stimulated by, for example, bacteria [31]. After intravesical inoculation of patients with E. coli, all patients secreted IL-6 into the urine; however, serum concentrations of IL-6 did not increase, suggesting a dominance of local IL-6 production [32]. A symptomatic lower UTI is assumed to be associated with more severe inflammation in the bladder than ABU. Previous studies suggested that concentrations of IL-6 in the urine may be valuable in discriminating between ABU and UTI in the elderly; however, this needs evaluation in a larger study among the elderly [9,33]. The aim of this study was to investigate the association between laboratory findings of bacteria in the urine, elevated IL-6 concentrations in the urine, dipstick urinalysis and new or increased symptoms in residents of nursing homes for the elderly. Conclusions: Recent-onset nonspecific symptoms were common among elderly residents of nursing homes. Residents without nonspecific symptoms had positive urine cultures as often as those with nonspecific symptoms, suggesting that nonspecific symptoms are not caused by bacteria in the urine. Residents with positive urine cultures had higher concentrations of IL-6 in the urine. However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with and without nonspecific symptoms. Thus, IL-6 concentrations in the urine and dipstick analyses are not useful when assessing elderly residents with nonspecific symptoms and bacteria in the urine.
Background: Up to half the residents of nursing homes for the elderly have asymptomatic bacteriuria (ABU), which should not be treated with antibiotics. A complementary test to discriminate between symptomatic urinary tract infections (UTI) and ABU is needed, as diagnostic uncertainty is likely to generate significant antibiotic overtreatment. Previous studies indicate that Interleukin-6 (IL-6) in the urine might be suitable as such a test. The aim of this study was to investigate the association between laboratory findings of bacteriuria, IL-6 in the urine, dipstick urinalysis and new-onset symptoms among residents of nursing homes. Methods: In this cross-sectional study, voided urine specimens for culture, urine dipstick and IL-6 analyses were collected from all residents capable of providing a voided urine sample, regardless of the presence of symptoms. Urine specimens and symptom forms were provided by 421 residents of 22 nursing homes. The following new or increased nonspecific symptoms occurring during the previous month were registered: fatigue, restlessness, confusion, aggressiveness, loss of appetite, frequent falls and not being herself/himself, as well as symptoms from the urinary tract: dysuria, urinary urgency and frequency. Results: Recent onset of nonspecific symptoms was common among elderly residents of nursing homes (85/421). Urine cultures were positive in 32% (135/421); Escherichia coli was by far the most common bacterial finding. Residents without nonspecific symptoms had positive urine cultures as often as those with nonspecific symptoms with a duration of up to one month. Residents with positive urine cultures had higher concentrations of IL-6 in the urine (p < 0.001). However, among residents with positive urine cultures there were no differences in IL-6 concentrations or dipstick findings between those with and without nonspecific symptoms. 
Conclusions: Nonspecific symptoms among elderly residents of nursing homes are unlikely to be caused by bacteria in the urine. This study could not establish any clinical value of using dipstick urinalysis or IL-6 in the urine to verify whether bacteriuria was linked to nonspecific symptoms.
13,290
382
24
[ "urine", "symptoms", "residents", "il", "positive", "cultures", "urine cultures", "study", "concentrations", "nonspecific symptoms" ]
[ "test", "test" ]
[CONTENT] Interleukin-6 | Urinary tract infections | Bacteriuria | Homes for the aged | Nursing homes | Dipstick urinalysis | Diagnostic tests [SUMMARY]
[CONTENT] Interleukin-6 | Urinary tract infections | Bacteriuria | Homes for the aged | Nursing homes | Dipstick urinalysis | Diagnostic tests [SUMMARY]
[CONTENT] Interleukin-6 | Urinary tract infections | Bacteriuria | Homes for the aged | Nursing homes | Dipstick urinalysis | Diagnostic tests [SUMMARY]
[CONTENT] Interleukin-6 | Urinary tract infections | Bacteriuria | Homes for the aged | Nursing homes | Dipstick urinalysis | Diagnostic tests [SUMMARY]
[CONTENT] Interleukin-6 | Urinary tract infections | Bacteriuria | Homes for the aged | Nursing homes | Dipstick urinalysis | Diagnostic tests [SUMMARY]
[CONTENT] Interleukin-6 | Urinary tract infections | Bacteriuria | Homes for the aged | Nursing homes | Dipstick urinalysis | Diagnostic tests [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anti-Bacterial Agents | Bacteriuria | Biomarkers | Cross-Sectional Studies | Female | Homes for the Aged | Humans | Interleukin-6 | Male | Nursing Homes | Sweden | Urinalysis | Urinary Tract Infections [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anti-Bacterial Agents | Bacteriuria | Biomarkers | Cross-Sectional Studies | Female | Homes for the Aged | Humans | Interleukin-6 | Male | Nursing Homes | Sweden | Urinalysis | Urinary Tract Infections [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anti-Bacterial Agents | Bacteriuria | Biomarkers | Cross-Sectional Studies | Female | Homes for the Aged | Humans | Interleukin-6 | Male | Nursing Homes | Sweden | Urinalysis | Urinary Tract Infections [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anti-Bacterial Agents | Bacteriuria | Biomarkers | Cross-Sectional Studies | Female | Homes for the Aged | Humans | Interleukin-6 | Male | Nursing Homes | Sweden | Urinalysis | Urinary Tract Infections [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anti-Bacterial Agents | Bacteriuria | Biomarkers | Cross-Sectional Studies | Female | Homes for the Aged | Humans | Interleukin-6 | Male | Nursing Homes | Sweden | Urinalysis | Urinary Tract Infections [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anti-Bacterial Agents | Bacteriuria | Biomarkers | Cross-Sectional Studies | Female | Homes for the Aged | Humans | Interleukin-6 | Male | Nursing Homes | Sweden | Urinalysis | Urinary Tract Infections [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | urine cultures | study | concentrations | nonspecific symptoms [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | urine cultures | study | concentrations | nonspecific symptoms [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | urine cultures | study | concentrations | nonspecific symptoms [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | urine cultures | study | concentrations | nonspecific symptoms [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | urine cultures | study | concentrations | nonspecific symptoms [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | urine cultures | study | concentrations | nonspecific symptoms [SUMMARY]
[CONTENT] abu | uti | il | residents nursing homes | residents nursing | symptoms | urine | bacteria | elderly | homes [SUMMARY]
[CONTENT] urine | laboratory | symptoms | dipstick | il | clinical | study | test | residents | participation [SUMMARY]
[CONTENT] urine | ng | il | residents | positive | cultures | 421 | urine cultures | range | concentration [SUMMARY]
[CONTENT] nonspecific symptoms | nonspecific | symptoms | urine | residents | urine residents positive urine | urine residents positive | residents nonspecific | residents nonspecific symptoms | urine residents [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | study | nonspecific | nonspecific symptoms | urine cultures [SUMMARY]
[CONTENT] urine | symptoms | residents | il | positive | cultures | study | nonspecific | nonspecific symptoms | urine cultures [SUMMARY]
[CONTENT] Up to half | ABU ||| UTI | ABU ||| ||| bacteriuria | IL-6 | dipstick urinalysis [SUMMARY]
[CONTENT] IL-6 ||| 421 | 22 ||| the previous month [SUMMARY]
[CONTENT] 85/421 ||| 32% | 135/421 | Escherichia ||| up to one month ||| IL-6 | 0.001 ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Up to half | ABU ||| UTI | ABU ||| ||| bacteriuria | IL-6 | dipstick urinalysis ||| IL-6 ||| 421 | 22 ||| the previous month ||| ||| 85/421 ||| 32% | 135/421 | Escherichia ||| up to one month ||| IL-6 | 0.001 ||| ||| ||| [SUMMARY]
[CONTENT] Up to half | ABU ||| UTI | ABU ||| ||| bacteriuria | IL-6 | dipstick urinalysis ||| IL-6 ||| 421 | 22 ||| the previous month ||| ||| 85/421 ||| 32% | 135/421 | Escherichia ||| up to one month ||| IL-6 | 0.001 ||| ||| ||| [SUMMARY]
Epidemiology and history of knee injury and its impact on activity limitation among football premier league professional referees.
29362295
The purpose of this study was to determine the epidemiology and history of knee injury and its impact on activity limitation among football premier league professional referees in Iran.
BACKGROUND
This was a descriptive study. 59 Football Premier League professional referees participated in the study. Knee injury-related information, such as injury history and mechanism, was recorded. Injury-related symptoms and their impact on activity limitation, the ability to perform activities of daily living, and participation in sports and recreational activities were obtained through the Knee Outcome Survey (KOS).
METHODS
The results indicated that 31 out of 59 participants reported a history of knee injury. In addition, 18.6%, 22.4% and 81% of the referees reported that they had been injured during the last 6 months, during the last year, and at some point in their refereeing careers, respectively. Results further indicated that 48.8% of the injuries occurred in the non-dominant leg and that injuries occurred more frequently during training sessions (52%). Furthermore, the KOS scores were 85 ± 13 for the Activities of Daily Living subscale and 90 ± 9 for the Sports and Recreational Activities subscale.
RESULTS
Knee injury was quite common among the Football Premier League professional referees. It was also indicated that the injuries occurred mainly due to insufficient physical fitness. Therefore, it is suggested that football referees undergo a proper warm-up program to avoid knee injury.
CONCLUSIONS
[ "Activities of Daily Living", "Adult", "Athletic Injuries", "History, 20th Century", "History, 21st Century", "Humans", "Incidence", "Iran", "Knee Injuries", "Male", "Soccer", "Surveys and Questionnaires" ]
5801612
Introduction
Football is considered to be the most popular sport in the world, and approximately 270 million people (4%) of the world population are involved in it.1-5 Referees, who account for 5 million of this population, play an important role in football.6,7 Football referees perform the same activities, such as walking and running, during matches as football players do.8 Although there are substantial differences between referees and players, they cover an average distance of 9 to 13 km at high intensity, and an even longer distance is covered by professional referees.9-11 Football referees are quite active during matches and are often older than football players.8 Therefore, injuries of football referees are expected to differ from those of football players.12 A number of studies have examined the anthropometric profiles of football referees, their movement patterns, their competencies, the quality and level of their refereeing, their roles in refereeing (referee or assistant referee), the length of time they spend in training and matches, and their psychological demands during matches and training.10,13,14 Bizzini and colleagues (2009) indicated that 39% of female referees were injured in the Women’s World Cup in 2007, and that the incidence of injury was 1 per 20 matches (95% CI = 4.2-65.1).13 On average, 20.8 injuries were recorded per 1000 match hours (95% CI = 4.17-31.7).12,13 Blake and colleagues (2009) showed that 61% of the referees were injured during 12 months (95% CI = 59-69), 56% of injury mechanisms were start-ups and fast short running, and 60% of injuries occurred during refereeing.15 Gabrilo and colleagues (2013) indicated that over 40% of 342 male football referees were injured and over 60% of them reported musculoskeletal problems.16 The knee is the most important joint of the body in terms of stability, weight bearing, balance, mobility, and shock absorption during running. 
Bizzini and colleagues (2009) reported that one-third of referees underwent surgery due to musculoskeletal problems, knee operations being the most common surgery (20%).17 Furthermore, according to Mahdavi and Mirjani (2015), 57.1% of all knee injuries were related to the dominant leg, and injuries of the anterior cruciate ligament (ACL) were the most prevalent (66%).18 Since referees are an integral part of the game of football and their absence can cause various problems, optimum health is a major factor for their perfect refereeing.16 The most common type of injury among football referees is the knee injury.17,18 Sufficient recovery from this injury takes from at least 6 weeks to 6 months,19 which means a referee has to be away from refereeing for 6 weeks to 6 months. Therefore, knee injuries are highly important in football referees. Previous studies have mainly focused on the prevalence of injuries, and few of them have investigated the prevention of injuries among football referees.20 Besides, no research study has ever been conducted to examine knee injuries of football referees using the Knee Outcome Survey (KOS). This study aimed to investigate the epidemiology and history of knee injury and its impact on activity limitation among football premier league professional referees.
null
null
Results
Anthropometric characteristics and sports information of the participants are presented in Table 1. Of the 59 participants, 31 reported a history of knee injury. 18.6%, 22.4% and 81% of the referees reported that they had been injured during the last 6 months, during the last year, and at some point in their refereeing careers, respectively. 48.8% of the injuries occurred in the non-dominant leg, and injuries occurred more frequently during training sessions (52%). The characteristics, type, and mechanisms of the referees’ knee injuries are presented in Table 2. The total scores were 85 ± 13 and 90 ± 9 for the ADLS and the SAS, respectively. In addition, the referees’ current level (at the time of the study) of self-reported knee joint function was 79 ± 14 for the ADLS and 82 ± 12 for the SAS. Furthermore, the correlation coefficients of the ADLS and the SAS with the referees’ self-reported knee function were 0.47 (p = 0.01) and 0.63 (p = 0.001), respectively. Tables 3 and 4 present more information on the ADLS and the SAS.
Conclusion
Knee injury was quite common among the Football Premier League professional referees. According to the results, the injuries occurred mainly due to insufficient physical fitness. As noted in the background, football referees are more active than football players and run longer distances. Therefore, football referees should undergo training similar to that of football players in order to prevent injuries. Acknowledgements: We gratefully acknowledge the Football Premier League professional referees who participated in the present study. We would also like to thank the head of the Referees Committee of the Football Federation of the Islamic Republic of Iran and all other personnel who contributed to the implementation of this study.
[ "Methods " ]
[ "\nStudy Design\n\nThis study was cross-sectional.\n\nSubjects\n\nThe study sample was composed of 59 football referees with grade 1, 2 and international-level officiating in the Iranian Premier League.\n\nProcedure\n\nOne of the researchers filled out the questionnaires in person through face-to-face interviews with the referees at the Football Hotel in February 2016, when they were about to undergo assessment. They were asked to answer the questions accurately and were assured of the confidentiality of all information collected.\n\nMeasurements\n\nThe following questionnaires were used to collect the needed information:\n1. Personal Information Questionnaire: height, weight, age, education level, and Body Mass Index.\n2. Sports Information Questionnaire: sports history, number of training sessions, number of days and hours of training, training duration, type and duration of warm-up.\n3. Knee Injury History Questionnaire: injury history, the injured leg (dominant or non-dominant), mechanism of injury, time of injury (in matches or training sessions), type of therapy, and special care of the injury.\n4. Knee Outcome Survey: the KOS, a patient-completed questionnaire, was used to determine symptoms, functional limitation, and disability of the knee joint resulting from various knee injuries during activities of daily living and sports.21\nThe KOS is used for both athletes and elderly people,22,23 and investigates various injuries, including knee ligament injuries, meniscus tears, meniscal cartilage lesions, patellofemoral pain syndrome, dislocation of the knee, and osteoarthritis.21,24 The KOS has 2 subscales: the Activities of Daily Living Scale (ADLS) and the Sports Activity Scale (SAS). The Knee-Rating Scale has demonstrated high reliability and validity with the KOS subscales (0.97 and 0.97, respectively, for the SAS; 0.78 and 0.97, respectively, for the ADLS).20,21,24\nThe ADLS is a 14-item scale for activities of daily living. 
Six items assess the effects of knee symptoms such as pain, stiffness, swelling, buckling, weakness, and limping on the ability to perform activities of daily living, and 8 items assess the effects of knee condition on the ability to perform specific functional tasks such as going up and down the stairs, standing, kneeling, squatting, sitting with the knee bent, and rising from a chair. \nThe SAS is an 11-item scale which assesses the effects of knee symptoms on the ability to perform sports and recreational activities (7 items) and the effects of knee condition on the ability to perform specific skills such as straight running, jumping and landing, cutting and pivoting, and quick stopping and starting (4 items). \nEach item is rated on a 5-point scale. The score can range from 0 to 70 for the ADLS and from 0 to 55 for the SAS. The overall ADLS and SAS percent ratings were calculated and presented.21 Lower percentages reflect higher levels of disability.\n\nStatistical Analysis\n\nDescriptive statistics were reported as frequency, mean, and standard deviation and were used to determine the injury prevalence and characteristics of the sample. Pearson correlation coefficients were used to measure the relationship between the SAS and the ADLS, and between the current level of self-reported knee joint function and both the SAS and the ADLS, to evaluate proprioception in the participants. An alpha level of p < 0.05 was used to establish statistical significance. The statistical analyses were performed using SPSS version 22. " ]
[ null ]
[ "Introduction", "Methods ", "Results", "Discussion", "Conclusion" ]
[ "Football is considered to be the most popular sport in the world and approximately 270 million people (4%) of the world population are involved in it.1-5 Referees who make 5 million of this population play an important role in football. 6,7\nFootball referees perform the same activities such as walking and running during matches as the football players do.8 Although there are substantial differences between referees and players, they cover an average distance of 9 to 13 km at high intensity and even a longer distance is covered by professional referees.9-11 Football referees are quite active during matches and they are often older than football players. 8 Therefore, injuries of football referees are expected to be different from those of football players. 12\nA number of studies have examined the anthropo-metric profiles of football referees, their movement pat-terns, their competencies, the quality and the level of their refereeing, their roles in refereeing (referee or assistant referee), the length of time they spend in train-ing and matches, and their psychological demands dur-ing matches and training.10,13,14\nBizzini and colleagues (2009) indicated that 39% of female referees were injured in Women’s World Cup in 2007, and that the incidence of injury was 1 per 20 matches (95% CI = 4.2-65.1).13 On average, 20.8% of injuries were recorded per 1000 match hours (95% CI = 4.17-31.7).12,13\nBlake and colleagues (2009) showed that 61% of the referees were injured during 12 months (95% CI = 59-69), 56% of injury mechanisms were start-ups and fast short running, and 60% of injuries occurred at refer-eeing time.15\nGabrilo and colleagues (2013) indicated that over 40% of 342 male football referees were injured and over 60% of them reported musculoskeletal problems.16\nA knee is the most important joint of a body in terms of stability, weight bearing, balance, mobility, and shock absorption during running. 
Bizzini and colleagues (2009) reported that one-third of referees underwent surgery due to musculoskeletal problems, with knee operations being the most common surgery (20%).17 Furthermore, according to Mahdavi and Mirjani (2015), 57.1% of all knee injuries were related to the dominant leg and injuries of the anterior cruciate ligament (ACL) were the most prevalent (66%).18\nSince referees are an integral part of the game of football and their absence can cause various problems, optimum health is a major factor for their perfect refereeing.16 The most common type of injury among football referees is the knee injury.17,18 The sufficient recovery time for this injury ranges from at least 6 weeks to 6 months,19 which means a referee has to be away from refereeing for 6 weeks to 6 months. Therefore, knee injuries are highly important in football referees. The previous studies have mainly focused on the prevalence of injuries and few of them have investigated the prevention of injuries among football referees.20 Besides, no research study has ever been conducted to examine knee injuries of football referees using the Knee Outcome Survey (KOS). \nThis study aimed to investigate the epidemiology and history of knee injury and its impact on activity limitation among football premier league professional referees. ", "\nStudy Design\n\nThis study was cross-sectional.\n\nSubjects\n\nThe study sample was composed of 59 football referees with grade 1, grade 2, or international-level officiating credentials in the Iranian Premier League.\n\nProcedure\n\nOne of the researchers filled out the questionnaires in person through face-to-face interviews with the referees at the Football Hotel in February 2016, when they were about to undergo assessment. They were asked to answer the questions accurately and were assured of the confidentiality of all information collected.\n\nMeasurements\n\nThe following questionnaires were used to collect the needed information:\n1. 
Personal Information Questionnaire: height, weight, age, education level, and Body Mass Index.\n2. Sports Information Questionnaire: sports history, number of training sessions, number of days and hours of training, training duration, and type and duration of warm-up.\n3. Knee Injury History Questionnaire: injury history, the injured leg (dominant or non-dominant), mechanism of injury, time of injury (in matches or training sessions), type of therapy, and special care of the injury.\n4. Knee Outcome Survey: the KOS, a patient-completed questionnaire, was used to determine symptoms, functional limitation, and disability of the knee joint resulting from various knee injuries during activities of daily living and sports.21\nThe KOS is used for both athletes and elderly people,22,23 and covers various injuries, including knee ligament injuries, meniscus tears, meniscal cartilage lesions, patellofemoral pain syndrome, dislocation of the knee, and osteoarthritis.21,24 The KOS has 2 subscales: the Activities of Daily Living Scale (ADLS) and the Sports Activity Scale (SAS). The Knee-Rating Scale has demonstrated high reliability and validity with the KOS subscales (0.97 and 0.97, respectively, for the SAS; 0.78 and 0.97, respectively, for the ADLS).20,21,24\nThe ADLS is a 14-item scale for activities of daily living. Six items assess the effects of knee symptoms such as pain, stiffness, swelling, buckling, weakness, and limping on the ability to perform activities of daily living, and 8 items assess the effects of knee condition on the ability to perform specific functional tasks such as going up and down the stairs, standing, kneeling, squatting, sitting with the knee bent, and rising from a chair. 
\nThe SAS is an 11-item scale which assesses the effects of knee symptoms on the ability to perform sports and recreational activities (7 items) and the effects of knee condition on the ability to perform specific skills such as straight running, jumping and landing, cutting and pivoting, and quick stopping and starting (4 items). \nEach item is rated on a 5-point scale. The score can range from 0 to 70 for the ADLS and from 0 to 55 for the SAS. The overall ADLS and SAS percent ratings were calculated and presented.21 Lower percentages reflect higher levels of disability.\n\nStatistical Analysis\n\nDescriptive statistics were reported as frequency, mean, and standard deviation. They were used to determine the injury prevalence and characteristics of the sample. Pearson correlation coefficients were used to measure the relationship between the SAS and the ADLS, and between the current level of self-reported knee joint function and both the SAS and the ADLS, in order to evaluate proprioception in the participants. An alpha level of p<0.05 was used to establish statistical significance. The statistical analyses were performed using SPSS version 22. ", "Anthropometric characteristics and sports information of the participants are presented in Table 1.\n*For example odd days\n31 out of 59 participants reported a history of knee injury. 18.6%, 22.4% and 81% of the referees reported that they had been injured during the last 6 months, during the last year, and at some point in their refereeing careers, respectively.\n48.8% of the injuries occurred in the non-dominant leg and they occurred more frequently during training sessions (52%).\nThe characteristics, type, and mechanisms of the referees’ knee injuries are presented in Table 2.\nThe total scores were 85±13 and 90±9 for the ADLS and the SAS, respectively. In addition, the referees’ current levels (at the time of the study) of self-reported knee joint function were 79±14 and 82±12 for the ADLS and the SAS, respectively. 
\nFurthermore, the correlation coefficients of the ADLS and the SAS with the referees’ self-reported knee function were 0.47 (p=0.01) and 0.63 (p=0.001), respectively. Tables 3 and 4 present more information on the ADLS and the SAS.", "This study aimed to investigate the prevalence and the mechanism of knee injury and to identify the effects of knee injuries on knee function in the activities of daily living, sports activities, and proprioception of the Football Premier League professional referees in Iran. The study indicated that 81% of the referees had suffered knee injuries. The muscles surrounding the knee help with knee movement and stabilization. The performance of these muscles may be affected by fatigue during training sessions, as fatigued muscles cannot generate appropriate joint stability, which in turn results in knee injuries.25-27 These findings are in line with the results by Bizzini and colleagues (2009) and Paes (2011).12,28 More focus on aerobic activities (46.7%, CI = 41.1-52.3) and limited attention to warm-up and body flexibility (17.8%, CI = 13.3-23.4) are considered the main causes of injuries.29 Referees should be involved in relatively higher levels of activities than those in matches.30 These kinds of activities are called high-frequency training.31\nThe results showed that knee injuries occurred more frequently during the training sessions (52%), which can be due to the duration of training. Referees practice for 1-2 hours per session (83.1%) and 3-4 days per week (44.1%). It also seems that the duration of training is longer for professional referees than for beginners.17 This volume of practice increases mechanical tension on the lower extremities, leading to an overall feeling of tiredness or lack of energy, thus increasing the risk of injury.25 The statistics show that the levels of physical fitness in more than half of Iranian football referees are rather lower than those of football referees in other countries. 
Thus, Iranian football referees should focus more on strength and conditioning training to prevent injuries. These findings are in agreement with the results by Bizzini and colleagues (2009) and Silva (2014),32 but not with the findings of Blake and colleagues (2009), Wilson and colleagues (2011) and Mahdavi and Mirjani (2015).\nThe results have revealed that meniscus injuries are the most prevalent type of knee injury, owing to knee rotation (74.1%). Meniscus injuries occur due to a combination of compression force and too much rotation while pivoting. In this case, the meniscus collagen tissue cannot endure the force, leading to meniscus tear.33 Referees’ activities also include frequent pivoting, rotation and changing direction. In addition, tired referees are prone to knee joint laxity, which in turn could increase valgus/varus stress on the knee joint, thus resulting in injury.25,27 These findings are in line with those by Bizzini and colleagues (2009). \nThe results also indicated that the Football Premier League professional referees resumed refereeing within a week of injury (36.7%). These findings are consistent with those by Wilson and colleagues (2011) but not with those by Bizzini and colleagues (2009) and Mahdavi and Mirjani (2015). The importance of refereeing in football can account for the rapid resumption of refereeing by a referee. Insufficient recovery can lead to re-injury, which endangers referees’ health. Since referees’ health and refereeing are directly related, the quality of refereeing is also affected. \nThe study showed that the most frequently used therapy was physiotherapy, which confirms the findings of Mahdavi and Mirjani (2015).\nThe results from the Knee-Rating Scale and the score assigned by the referees to their knee function in both the ADLS and the SAS are indicative of the referees’ high level of proprioception in their knee joint after refereeing resumption. 
According to Adachi and colleagues (2002), knee joint instability does not affect proprioceptive function of the knee.34 Moreover, Good and colleagues (1999) stated that knee position sense did not differ for injured and non-injured knees. This finding may have been due to measurement error. Also, no exact measurement method had ever been designed, which means the issue needs to be further investigated.35 However, Skinner and Barrack (1991) mentioned that weakness in knee joint proprioception is an effective factor in the etiology of meniscus lesions and can lead to degenerative joint disease.36\nKnee injuries caused relatively more disorders in the activities of daily living than in sports activities. The most frequent complaint was about sitting with the knees bent (Table 4). The knee injuries seem to be mostly related to the meniscus, which consequently leads to a reduction of knee joint range of motion.37,38 This is the first time that the KOS has been applied to football referees. Thus, there is no similar information to be compared with the obtained results. \nBased on the results of this study, it is concluded that the knee of the non-dominant leg is more prone to injuries. The non-dominant leg is usually weaker than the dominant leg because the non-dominant leg plays the role of a supporter and a stabilizer in most movements and sports activities. Thus, the non-dominant leg tolerates more pressure, which makes it more susceptible to injury.39,40 This can be reduced by an appropriate strength training program and improvement of muscle balance in agonist and antagonist muscles (hamstrings and quadriceps). \nThe Premier League referees were mostly injured during training sessions, which can be attributed to increases in the training volume and a greater focus on aerobic exercises. It is proposed that training volume should be adjusted according to training seasons. 
Moreover, Interval Training should predominate.30 Besides, referees usually pay more attention to basic stretching and running (35.6%), which is considered to be an important factor causing injuries.12\nThis study has some limitations which have to be pointed out. First, the KOS is a self-reported scale. Therefore, the results may have been affected by recall bias. Second, we did not collect data related to the knee injuries of female football referees. Future research should focus on female football referees to see if the prevalence and mechanism of their injuries are different from those of male football referees. Third, the authors did not have sufficient time to use objective assessment tools such as Functional Movement Screening (FMS) to see if they can predict injury. It is recommended that other functional assessment tests should be incorporated in addition to the self-reported questionnaires. \nThe findings of this study can be used by the Referees Committee of the Football Federation Islamic Republic of Iran, football referees and coaches to design and apply a specific warm-up program for football referees in order to prevent injuries and reduce the costs and time lost by referees.\nIt is recommended that special exercises such as proprioceptive, strength-training (e.g. Nordic hamstring), flexibility, and endurance exercises should be included in the warm-up routine to prevent knee injuries.", "Knee injury was quite common among the Football Premier League professional referees. According to the results, the injuries occurred mainly due to insufficient physical fitness. Based on the background of the study, football referees are more active than football players and they run over longer distances. Therefore, football referees should undergo similar training to football players in order to prevent injuries.\n\nAcknowledgements\n\nWe gratefully acknowledge the Football Premier League professional referees who participated in the present study. 
We would also like to thank the head of the Referees Committee of the Football Federation Islamic Republic of Iran and all other personnel who contributed to the implementation of this study. " ]
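The statistical analysis above relates the referees' self-reported knee function to the ADLS and SAS percent ratings via Pearson correlation coefficients (reported as 0.47 and 0.63). The study used SPSS; purely as an illustration of the statistic itself, here is a plain-Python Pearson correlation with invented sample values (the data below are not from the study):

```python
import math

# Illustrative Pearson product-moment correlation, the statistic the study
# used to relate self-reported knee function to the ADLS/SAS percent ratings.
# (The study's analysis was done in SPSS; this is only a sketch.)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    if n != len(y) or n < 2:
        raise ValueError("samples must be the same length, with n >= 2")
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ADLS percent ratings vs. self-rated knee function (0-100)
adls = [85, 70, 92, 60, 78, 88]
self_rated = [80, 65, 95, 55, 75, 90]
r = pearson_r(adls, self_rated)  # strong positive correlation expected
```

A value of r near +1 indicates that referees who rate their own knee function highly also score highly on the subscale, which is how the study interprets the reported coefficients.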
[ "introduction", null, "results", "discussion", "conclusion" ]
[ "Knee injury", "KOS (Knee Outcome Survey)", "Football referees", "Injury prevalence", "Premier league", "Injury mechanism" ]
Introduction: Football is considered to be the most popular sport in the world and approximately 270 million people (4% of the world population) are involved in it.1-5 Referees, who account for 5 million of this population, play an important role in football.6,7 Football referees perform the same activities, such as walking and running, during matches as the football players do.8 Although there are substantial differences between referees and players, referees cover an average distance of 9 to 13 km at high intensity, and an even longer distance is covered by professional referees.9-11 Football referees are quite active during matches and they are often older than football players.8 Therefore, injuries of football referees are expected to be different from those of football players.12 A number of studies have examined the anthropometric profiles of football referees, their movement patterns, their competencies, the quality and the level of their refereeing, their roles in refereeing (referee or assistant referee), the length of time they spend in training and matches, and their psychological demands during matches and training.10,13,14 Bizzini and colleagues (2009) indicated that 39% of female referees were injured in the Women’s World Cup in 2007, and that the incidence of injury was 1 per 20 matches (95% CI = 4.2-65.1).13 On average, 20.8% of injuries were recorded per 1000 match hours (95% CI = 4.17-31.7).12,13 Blake and colleagues (2009) showed that 61% of the referees were injured during 12 months (95% CI = 59-69), 56% of injury mechanisms were start-ups and fast short running, and 60% of injuries occurred during refereeing.15 Gabrilo and colleagues (2013) indicated that over 40% of 342 male football referees were injured and over 60% of them reported musculoskeletal problems.16 The knee is the most important joint of the body in terms of stability, weight bearing, balance, mobility, and shock absorption during running. 
Bizzini and colleagues (2009) reported that one-third of referees underwent surgery due to musculoskeletal problems, with knee operations being the most common surgery (20%).17 Furthermore, according to Mahdavi and Mirjani (2015), 57.1% of all knee injuries were related to the dominant leg and injuries of the anterior cruciate ligament (ACL) were the most prevalent (66%).18 Since referees are an integral part of the game of football and their absence can cause various problems, optimum health is a major factor for their perfect refereeing.16 The most common type of injury among football referees is the knee injury.17,18 The sufficient recovery time for this injury ranges from at least 6 weeks to 6 months,19 which means a referee has to be away from refereeing for 6 weeks to 6 months. Therefore, knee injuries are highly important in football referees. The previous studies have mainly focused on the prevalence of injuries and few of them have investigated the prevention of injuries among football referees.20 Besides, no research study has ever been conducted to examine knee injuries of football referees using the Knee Outcome Survey (KOS). This study aimed to investigate the epidemiology and history of knee injury and its impact on activity limitation among football premier league professional referees. Methods: Study Design This study was cross-sectional. Subjects The study sample was composed of 59 football referees with grade 1, grade 2, or international-level officiating credentials in the Iranian Premier League. Procedure One of the researchers filled out the questionnaires in person through face-to-face interviews with the referees at the Football Hotel in February 2016, when they were about to undergo assessment. They were asked to answer the questions accurately and were assured of the confidentiality of all information collected. Measurements The following questionnaires were used to collect the needed information: 1. 
Personal Information Questionnaire: height, weight, age, education level, and Body Mass Index. 2. Sports Information Questionnaire: sports history, number of training sessions, number of days and hours of training, training duration, and type and duration of warm-up. 3. Knee Injury History Questionnaire: injury history, the injured leg (dominant or non-dominant), mechanism of injury, time of injury (in matches or training sessions), type of therapy, and special care of the injury. 4. Knee Outcome Survey: the KOS, a patient-completed questionnaire, was used to determine symptoms, functional limitation, and disability of the knee joint resulting from various knee injuries during activities of daily living and sports.21 The KOS is used for both athletes and elderly people,22,23 and covers various injuries, including knee ligament injuries, meniscus tears, meniscal cartilage lesions, patellofemoral pain syndrome, dislocation of the knee, and osteoarthritis.21,24 The KOS has 2 subscales: the Activities of Daily Living Scale (ADLS) and the Sports Activity Scale (SAS). The Knee-Rating Scale has demonstrated high reliability and validity with the KOS subscales (0.97 and 0.97, respectively, for the SAS; 0.78 and 0.97, respectively, for the ADLS).20,21,24 The ADLS is a 14-item scale for activities of daily living. Six items assess the effects of knee symptoms such as pain, stiffness, swelling, buckling, weakness, and limping on the ability to perform activities of daily living, and 8 items assess the effects of knee condition on the ability to perform specific functional tasks such as going up and down the stairs, standing, kneeling, squatting, sitting with the knee bent, and rising from a chair. 
The SAS is an 11-item scale which assesses the effects of knee symptoms on the ability to perform sports and recreational activities (7 items) and the effects of knee condition on the ability to perform specific skills such as straight running, jumping and landing, cutting and pivoting, and quick stopping and starting (4 items). Each item is rated on a 5-point scale. The score can range from 0 to 70 for the ADLS and from 0 to 55 for the SAS. The overall ADLS and SAS percent ratings were calculated and presented.21 Lower percentages reflect higher levels of disability. Statistical Analysis Descriptive statistics were reported as frequency, mean, and standard deviation. They were used to determine the injury prevalence and characteristics of the sample. Pearson correlation coefficients were used to measure the relationship between the SAS and the ADLS, and between the current level of self-reported knee joint function and both the SAS and the ADLS, in order to evaluate proprioception in the participants. An alpha level of p<0.05 was used to establish statistical significance. The statistical analyses were performed using SPSS version 22. Results: Anthropometric characteristics and sports information of the participants are presented in Table 1. *For example odd days 31 out of 59 participants reported a history of knee injury. 18.6%, 22.4% and 81% of the referees reported that they had been injured during the last 6 months, during the last year, and at some point in their refereeing careers, respectively. 48.8% of the injuries occurred in the non-dominant leg and they occurred more frequently during training sessions (52%). The characteristics, type, and mechanisms of the referees’ knee injuries are presented in Table 2. The total scores were 85±13 and 90±9 for the ADLS and the SAS, respectively. In addition, the referees’ current levels (at the time of the study) of self-reported knee joint function were 79±14 and 82±12 for the ADLS and the SAS, respectively. 
Furthermore, the correlation coefficients of the ADLS and the SAS with the referees’ self-reported knee function were 0.47 (p=0.01) and 0.63 (p=0.001), respectively. Tables 3 and 4 present more information on the ADLS and the SAS. Discussion: This study aimed to investigate the prevalence and the mechanism of knee injury and to identify the effects of knee injuries on knee function in the activities of daily living, sports activities, and proprioception of the Football Premier League professional referees in Iran. The study indicated that 81% of the referees had suffered knee injuries. The muscles surrounding the knee help with knee movement and stabilization. The performance of these muscles may be affected by fatigue during training sessions, as fatigued muscles cannot generate appropriate joint stability, which in turn results in knee injuries.25-27 These findings are in line with the results by Bizzini and colleagues (2009) and Paes (2011).12,28 More focus on aerobic activities (46.7%, CI = 41.1-52.3) and limited attention to warm-up and body flexibility (17.8%, CI = 13.3-23.4) are considered the main causes of injuries.29 Referees should be involved in relatively higher levels of activities than those in matches.30 These kinds of activities are called high-frequency training.31 The results showed that knee injuries occurred more frequently during the training sessions (52%), which can be due to the duration of training. Referees practice for 1-2 hours per session (83.1%) and 3-4 days per week (44.1%). It also seems that the duration of training is longer for professional referees than for beginners.17 This volume of practice increases mechanical tension on the lower extremities, leading to an overall feeling of tiredness or lack of energy, thus increasing the risk of injury.25 The statistics show that the levels of physical fitness in more than half of Iranian football referees are rather lower than those of football referees in other countries. 
Thus, Iranian football referees should focus more on strength and conditioning training to prevent injuries. These findings are in agreement with the results by Bizzini and colleagues (2009) and Silva (2014),32 but not with the findings of Blake and colleagues (2009), Wilson and colleagues (2011) and Mahdavi and Mirjani (2015). The results have revealed that meniscus injuries are the most prevalent type of knee injury, owing to knee rotation (74.1%). Meniscus injuries occur due to a combination of compression force and too much rotation while pivoting. In this case, the meniscus collagen tissue cannot endure the force, leading to meniscus tear.33 Referees’ activities also include frequent pivoting, rotation and changing direction. In addition, tired referees are prone to knee joint laxity, which in turn could increase valgus/varus stress on the knee joint, thus resulting in injury.25,27 These findings are in line with those by Bizzini and colleagues (2009). The results also indicated that the Football Premier League professional referees resumed refereeing within a week of injury (36.7%). These findings are consistent with those by Wilson and colleagues (2011) but not with those by Bizzini and colleagues (2009) and Mahdavi and Mirjani (2015). The importance of refereeing in football can account for the rapid resumption of refereeing by a referee. Insufficient recovery can lead to re-injury, which endangers referees’ health. Since referees’ health and refereeing are directly related, the quality of refereeing is also affected. The study showed that the most frequently used therapy was physiotherapy, which confirms the findings of Mahdavi and Mirjani (2015). The results from the Knee-Rating Scale and the score assigned by the referees to their knee function in both the ADLS and the SAS are indicative of the referees’ high level of proprioception in their knee joint after refereeing resumption. 
According to Adachi and colleagues (2002), knee joint instability does not affect proprioceptive function of the knee.34 Moreover, Good and colleagues (1999) stated that knee position sense did not differ for injured and non-injured knees. This finding may have been due to measurement error. Also, no exact measurement method had ever been designed, which means the issue needs to be further investigated.35 However, Skinner and Barrack (1991) mentioned that weakness in knee joint proprioception is an effective factor in the etiology of meniscus lesions and can lead to degenerative joint disease.36 Knee injuries caused relatively more disorders in the activities of daily living than in sports activities. The most frequent complaint was about sitting with the knees bent (Table 4). The knee injuries seem to be mostly related to the meniscus, which consequently leads to a reduction of knee joint range of motion.37,38 This is the first time that the KOS has been applied to football referees. Thus, there is no similar information to be compared with the obtained results. Based on the results of this study, it is concluded that the knee of the non-dominant leg is more prone to injuries. The non-dominant leg is usually weaker than the dominant leg because the non-dominant leg plays the role of a supporter and a stabilizer in most movements and sports activities. Thus, the non-dominant leg tolerates more pressure, which makes it more susceptible to injury.39,40 This can be reduced by an appropriate strength training program and improvement of muscle balance in agonist and antagonist muscles (hamstrings and quadriceps). The Premier League referees were mostly injured during training sessions, which can be attributed to increases in the training volume and a greater focus on aerobic exercises. It is proposed that training volume should be adjusted according to training seasons. 
Moreover, Interval Training should predominate.30 Besides, referees usually pay more attention to basic stretching and running (35.6%), which is considered to be an important factor causing injuries.12 This study has some limitations which have to be pointed out. First, the KOS is a self-reported scale. Therefore, the results may have been affected by recall bias. Second, we did not collect data related to the knee injuries of female football referees. Future research should focus on female football referees to see if the prevalence and mechanism of their injuries are different from those of male football referees. Third, the authors did not have sufficient time to use objective assessment tools such as Functional Movement Screening (FMS) to see if they can predict injury. It is recommended that other functional assessment tests should be incorporated in addition to the self-reported questionnaires. The findings of this study can be used by the Referees Committee of the Football Federation Islamic Republic of Iran, football referees and coaches to design and apply a specific warm-up program for football referees in order to prevent injuries and reduce the costs and time lost by referees. It is recommended that special exercises such as proprioceptive, strength-training (e.g. Nordic hamstring), flexibility, and endurance exercises should be included in the warm-up routine to prevent knee injuries. Conclusion: Knee injury was quite common among the Football Premier League professional referees. According to the results, the injuries occurred mainly due to insufficient physical fitness. Based on the background of the study, football referees are more active than football players and they run over longer distances. Therefore, football referees should undergo similar training to football players in order to prevent injuries. Acknowledgements We gratefully acknowledge the Football Premier League professional referees who participated in the present study. 
We would also like to thank the head of the Referees Committee of the Football Federation Islamic Republic of Iran and all other personnel who contributed to the implementation of this study.
Background: The purpose of this study was to determine the epidemiology and history of knee injury and its impact on activity limitation among football premier league professional referees in Iran. Methods: This was a descriptive study. 59 Football Premier League professional referees participated in the study. Knee injury-related information, such as injury history and mechanism, was recorded. Injury-related symptoms and their impact on activity limitation, the ability to perform activities of daily living, and participation in sports and recreational activities were obtained through the Knee Outcome Survey (KOS). Results: The results indicated that 31 out of 59 participants reported a history of knee injury. In addition, 18.6%, 22.4% and 81% of the referees reported that they had been injured during the last 6 months, during the last year, and at some point in their refereeing careers, respectively. Results further indicated that 48.8% of the injuries occurred in the non-dominant leg and they occurred more frequently during training sessions (52%). Furthermore, the KOS score was 85 ± 13 for the Activities of Daily Living subscale and 90 ± 9 for the Sports and Recreational Activities subscale. Conclusions: Knee injury was quite common among the Football Premier League professional referees. It was also indicated that the injuries occurred mainly due to insufficient physical fitness. Therefore, it is suggested that football referees undergo a proper warm-up program to avoid knee injury.
Introduction: Football is considered the most popular sport in the world, and approximately 270 million people (4% of the world population) are involved in it.1-5 Referees, who account for 5 million of this population, play an important role in football.6,7 Football referees perform the same activities, such as walking and running, during matches as football players do.8 Although there are substantial differences between referees and players, referees cover an average distance of 9 to 13 km at high intensity, and an even longer distance is covered by professional referees.9-11 Football referees are quite active during matches and are often older than football players.8 Therefore, injuries of football referees are expected to differ from those of football players.12 A number of studies have examined the anthropometric profiles of football referees, their movement patterns, their competencies, the quality and level of their refereeing, their roles in refereeing (referee or assistant referee), the length of time they spend in training and matches, and their psychological demands during matches and training.10,13,14 Bizzini and colleagues (2009) indicated that 39% of female referees were injured in the Women’s World Cup in 2007, and that the incidence of injury was 1 per 20 matches (95% CI = 4.2-65.1).13 On average, 20.8 injuries were recorded per 1000 match hours (95% CI = 4.17-31.7).12,13 Blake and colleagues (2009) showed that 61% of referees were injured during 12 months (95% CI = 59-69), 56% of injury mechanisms were start-ups and fast short runs, and 60% of injuries occurred during refereeing.15 Gabrilo and colleagues (2013) indicated that over 40% of 342 male football referees were injured and over 60% of them reported musculoskeletal problems.16 The knee is the most important joint of the body in terms of stability, weight bearing, balance, mobility, and shock absorption during running.
Bizzini and colleagues (2009) reported that one-third of referees underwent surgery due to musculoskeletal problems, with knee operations being the most common surgery (20%).17 Furthermore, according to Mahdavi and Mirjani (2015), 57.1% of all knee injuries were related to the dominant leg, and injuries of the anterior cruciate ligament (ACL) were the most prevalent (66%).18 Since referees are an integral part of the game of football and their absence can cause various problems, optimum health is a major factor for their perfect refereeing.16 The most common type of injury among football referees is the knee injury.17,18 The recovery time for this injury ranges from at least 6 weeks to 6 months,19 which means a referee has to be away from refereeing for 6 weeks to 6 months. Therefore, knee injuries are highly important in football referees. Previous studies have mainly focused on the prevalence of injuries, and few have investigated the prevention of injuries among football referees.20 Besides, no research study has ever examined knee injuries of football referees using the Knee Outcome Survey (KOS). This study aimed to investigate the epidemiology and history of knee injury and its impact on activity limitation among Football Premier League professional referees.
2,878
277
5
[ "referees", "knee", "football", "injuries", "injury", "football referees", "training", "activities", "study", "knee injuries" ]
[ "test", "test" ]
null
[CONTENT] Knee injury | KOS (Knee Outcome Survey) | Football referees | Injury prevalence | Premier league | Injury mechanism [SUMMARY]
null
[CONTENT] Activities of Daily Living | Adult | Athletic Injuries | History, 20th Century | History, 21st Century | Humans | Incidence | Iran | Knee Injuries | Male | Soccer | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] referees | knee | football | injuries | injury | football referees | training | activities | study | knee injuries [SUMMARY]
null
[CONTENT] football | referees | football referees | injuries | knee | matches | players | 20 | colleagues | injuries football [SUMMARY]
null
[CONTENT] respectively | adls | sas | adls sas | table | reported | adls sas respectively | sas respectively | presented table | knee [SUMMARY]
[CONTENT] football | referees | football players | players | study | league professional | league professional referees | professional referees | professional | premier league professional referees [SUMMARY]
[CONTENT] referees | football | knee | injuries | football referees | adls | sas | injury | study | training [SUMMARY]
[CONTENT] Iran [SUMMARY]
null
[CONTENT] 31 | 59 ||| 18.6% | 22.4% and | 81% | the last 6 months of the last year ||| 48.8% | 52% ||| KOS | 85 | 13 | Activities of Daily Living | 90 | Recreational Activities | KOS [SUMMARY]
[CONTENT] Knee ||| ||| [SUMMARY]
[CONTENT] Iran ||| ||| 59 ||| ||| daily | the Knee Outcome Survey | KOS ||| ||| 31 | 59 ||| 18.6% | 22.4% and | 81% | the last 6 months of the last year ||| 48.8% | 52% ||| KOS | 85 | 13 | Activities of Daily Living | 90 | Recreational Activities | KOS ||| ||| ||| [SUMMARY]
Immune-Mediated Cerebellar Ataxia Associated With Neuronal Surface Antibodies.
35250990
Immune-mediated cerebellar ataxias (IMCAs) are common in paraneoplastic cerebellar degeneration (PCD) but rarely occur in patients with neuronal surface antibodies (NSAbs). Although cerebellar ataxias (CAs) associated with anti-NMDAR and anti-CASPR2 have been reported in a few cases, they have never been studied systematically. This study aimed to analyze the characteristics of anti-NSAbs-associated CAs.
BACKGROUND
A retrospective investigation was conducted to identify patients using the keywords IMCAs and NSAbs. We collected the clinical data of 14 patients diagnosed with anti-NSAbs-associated CAs.
METHODS
The median age was 33 years (16-66), and the male-to-female ratio was 4:3. Nine were positive for NMDAR-Ab, two for LGI1-Ab, two for CASPR2-Ab, and one for AMPA2R-Ab. CAs were the initial symptom in three patients and presented during the first two months of the disease course (10 days on average) in the remaining patients. After immunotherapy, two cases were free from symptoms, and eight recovered satisfactorily (10/14, 71.4%). Compared with other causes of IMCAs, anti-NSAbs were more frequently associated with additional extra-cerebellar symptoms (85.7%), mostly seizures (78.6%) and mental abnormalities (64.3%). In the CSF analysis, pleocytosis was detected in ten patients (71.4%) and oligoclonal bands (OB) were observed in nine patients (64.3%). Moreover, compared with PCD and anti-GAD65-Ab-associated CAs, anti-NSAbs-associated CAs showed a better response to immunotherapy.
RESULTS
IMCAs are rare and atypical in autoimmune encephalitis with neuronal surface antibodies. Compared with other forms of IMCAs, the key features of anti-NSAbs-associated CAs were more symptoms of encephalopathy, higher rates of pleocytosis and positive OB in CSF, and a positive response to immunotherapy.
CONCLUSION
[ "Adult", "Autoantibodies", "Cerebellar Ataxia", "Female", "Humans", "Leukocytosis", "Male", "Paraneoplastic Cerebellar Degeneration", "Retrospective Studies" ]
8891139
Introduction
Over the past few decades, with the discovery of anti-NMDAR encephalitis in 2007 (1), an increasing number of specific neuronal surface antibodies (NSAbs) have been discovered, including LGI1-Ab, CASPR2-Ab, AMPA1/2R-Ab, GABAR-A/B-Ab, and so on (2–5). Anti-NSAbs-associated autoimmune encephalitis presents diverse clinical phenotypes, such as seizures, cognitive decline, psychiatric abnormalities, and autonomic dysfunction. In contrast, symptoms of cerebellar ataxia are reported rarely in patients with positive NSAbs (6–8). Immune-mediated cerebellar ataxias (IMCAs) are one of the most common symptoms of paraneoplastic cerebellar degeneration (PCD) (9). Previous studies have suggested that CAs are atypical in anti-NSAbs-associated autoimmune encephalitis (6, 7, 10). For example, cerebellar ataxia has been reported in a few patients with anti-NMDAR encephalitis (10, 11). Boyko et al. found that about 15% of patients with anti-CASPR2 encephalitis developed cerebellar ataxia followed by the onset of limbic symptoms or Morvan syndrome (12). Although CAs associated with anti-NMDAR and anti-CASPR2 have been studied in a few cases, they have not been studied systematically. In this study, we summarized the clinical features of 14 patients with anti-NSAbs-associated CAs. Moreover, to explore the distinct features of anti-NSAbs-associated CAs, a study of two overlapping cohorts, including patients diagnosed as IMCAs and patients with NSAbs from 2015 to 2020 in our center, was designed and conducted. The present study focused on the clinical characteristics of patients with anti-NSAbs-associated CAs.
null
null
Results
Frequency of Anti-NSAbs-Associated CAs
Among the 40 IMCAs with definite etiology, 14 patients (25.9%) were identified as anti-NSAbs-associated CAs, followed by PCD (13 patients, 24.1%), anti-GAD65-Ab-associated CAs (7 patients, 13.0%), and autoimmune disease-associated CAs (6 patients, 11.1%) (Table 1). Regarding the 191 patients with positive NSAbs (164 adults, 27 children), 14 patients (7.3%) developed CAs during the disease period. The proportion of cerebellar ataxia was similar in adults (12/164, 7.3%) and children (2/27, 7.4%) (P=1.000) (Table 2). Compared with other causes of IMCAs or with anti-NSAbs patients without CAs, no distinct demographic features were found in patients with anti-NSAbs-associated CAs.
Table 1. Comparison of clinical features of four forms of IMCAs. IMCAs, Immune-mediated cerebellar ataxias; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; MRI, Magnetic resonance imaging.
Table 2. Comparison of clinical characteristics of patients positive for NSAbs with and without CAs. NSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; Ab, Antibody; MRI, Magnetic resonance imaging. *Brain MRI hyperintense signal on T2-weighted fluid-attenuated inversion recovery sequences highly restricted to one or both medial temporal lobes (limbic encephalitis), or in multifocal areas involving grey matter and white matter (13).
According to the data, 185 patients were positive for one antibody (108 with NMDAR-Ab, 52 with LGI1-Ab, 12 with CASPR2-Ab, 4 with AMPA2R-Ab, and 9 with GABAB-R-Ab), and 6 patients were positive for two types of antibodies (4 with NMDAR-Ab and CASPR2-Ab, and 2 with CASPR2-Ab and LGI1-Ab) (Table 3). Patients with AMPA2R-Ab had the highest occurrence of cerebellar ataxia during the course of the disease (1/4, 25%), followed by those with CASPR2-Ab (2/12, 16.7%), NMDAR-Ab (9/108, 8.3%), and LGI1-Ab (2/52, 3.8%). Symptoms of CAs were not observed among patients with GABAB-R-Ab (0/9, 0%), and the presence of multiple antibodies was not associated with an increased frequency of CAs (0/6, 0%).
Table 3. Antibody detection in patients with anti-NSAbs. NSAbs, Neuronal surface antibodies; CSF, Cerebrospinal fluid.
Clinical Characteristics of Patients With CAs
Among the 14 patients with CAs, three suffered from gait ataxia as the initial symptom (two with NMDAR-Ab and one with CASPR2-Ab). The other 11 patients experienced cerebellar ataxia during the first two months after disease onset (2-60 days, day 10 on average). CAs presented acutely (10 patients, 71.4%), subacutely (1 patient, 7.1%), or insidiously (3 patients, 21.4%). Cerebellar signs are shown in Table 1. Most patients presented with slurred speech as a main symptom (78.6%), whereas nystagmus seldom appeared (35.7%). In addition, two patients (one with NMDAR-Ab and the other with CASPR2-Ab) had isolated CAs throughout the disease (Table 4). Neurological examination revealed that five patients (35.7%) failed to complete the alternating movement test with both hands, six patients (42.9%) completed the finger-nose test with difficulty, and five patients (35.7%) could not finish the heel-knee-tibia test. However, there were not enough patients for a comparison of clinical characteristics among the different antibodies.
Table 4. Clinical characteristics of patients with anti-NSAbs-associated CAs. NSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; M, Male; F, Female; CSF, Cerebrospinal fluid; WBC, White blood cell counts; Pos, Positive; Neg, Negative; OB, Oligoclonal bands; MRI, Magnetic resonance imaging; IVIG, Intravenous immunoglobulin.
Laboratory Testing and MRI Analysis
Blood and CSF markers indicating inflammation were evaluated. Among patients with NSAbs, oligoclonal bands (OB) in CSF were more frequent in patients with anti-NSAbs-associated CAs than in patients without CAs (P=0.025). The proportions of negative, weakly positive, positive, and strongly positive NSAbs were further analyzed (Table 5). Interestingly, more patients with CAs were found to have CSF antibody titers in the high range (the “positive” and “strongly positive” groups combined, 85.7% vs. 45.8%, P=0.005). All 14 patients underwent 3T brain MRI; cerebellar abnormalities on MRI are listed in Table 4. Limbic system involvement (medial temporal T2/FLAIR signal changes) (13) was found in seven patients (50%). All 14 cases were negative on routine tumor screening.
Table 5. Levels of antibody titers in patients with anti-NSAbs-associated CAs. NSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; CSF, Cerebrospinal fluid.
Response to the Immunotherapy
All 14 patients were followed up for at least 15 months after discharge (median 27 months, range 15-48 months). At the early stage, patients were assessed with a standardized questionnaire through a monthly telephone survey, and all patients were hospitalized again for disease re-evaluation 6 months after discharge. Two patients were free from symptoms of cerebellar ataxia, and eight recovered satisfactorily (10/14, 71.4%) after glucocorticoid treatment or the combination of glucocorticoids and IVIG (Table 4). In addition, two patients with NMDAR-Ab (patients 8 and 12) had clinical relapses, although symptoms of cerebellar ataxia were not observed during the relapses. No malignancy was found in these 14 patients up to the last follow-up visit. Follow-up brain MRI showed an obvious diminishment of cerebellar T2/FLAIR-hyperintense lesions in two patients (2/4, 50%; cases 7 and 13) and no significant changes in the others (cases 1 and 2). However, cerebellar atrophy on brain MRI of case 3 remained unchanged.
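The adults-versus-children comparison reported in this section (12/164 vs. 2/27, P=1.000) can be checked with a two-sided Fisher's exact test. Below is a minimal self-contained sketch; the study itself used IBM SPSS, so this helper function is illustrative rather than the authors' code:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables (with the same
    margins) that are no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(k):  # P(k successes in row 1) under fixed margins
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (p_table(k) for k in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# CAs in adults (12 of 164) vs. children (2 of 27), as reported above
p = fisher_exact_two_sided(12, 152, 2, 25)
print(f"adults {12/164:.1%}, children {2/27:.1%}, P = {p:.3f}")  # P = 1.000
```

Because the observed table sits at the mode of the hypergeometric distribution, every table with the same margins is at least as extreme, so the two-sided P-value is 1.000, matching the reported value.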
Conclusion
Cerebellar ataxias were rare and atypical in autoimmune encephalitis with neuronal surface antibodies. Patients with anti-NSAbs-associated CAs exhibited a higher positivity rate of OB in CSF. Most patients, especially those with isolated CAs, responded well to immunotherapy. Compared with other causes, anti-NSAbs-associated CAs led to more symptoms of encephalopathy and showed a better response to immunotherapy. Considering the treatability of anti-NSAbs-associated CAs, it is valuable to perform serum and CSF NSAbs tests in patients with cerebellar ataxia.
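The three-level titer grading defined in the Methods (serum: 1:10 weakly positive, 1:32 to 1:100 positive, 1:320 or above strongly positive; CSF: 1:1, 1:3.2 to 1:10, and 1:32 or above) can be expressed as a small helper. This is a minimal sketch; the function name and the exact cut-offs between the reported dilution steps are assumptions for illustration:

```python
def grade_titer(titer: float, fluid: str) -> str:
    """Map a reciprocal antibody titer (e.g. 320 for a 1:320 dilution)
    to the qualitative level used in the study."""
    # Lower bounds of the "positive" and "strongly positive" ranges,
    # taken from the reported serum and CSF dilution steps (assumed
    # to act as thresholds between levels).
    cutoffs = {"serum": (32, 320), "csf": (3.2, 32)}
    positive, strong = cutoffs[fluid.lower()]
    if titer >= strong:
        return "strongly positive"
    if titer >= positive:
        return "positive"
    if titer > 0:
        return "weakly positive"
    return "negative"

print(grade_titer(320, "serum"))  # strongly positive
print(grade_titer(10, "serum"))   # weakly positive
print(grade_titer(10, "csf"))     # positive
```

Note how the same numeric titer (1:10) grades differently in serum and CSF, which is why the fluid must be passed explicitly.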
[ "Methods", "Patient Identification", "Antibody Detection", "Clinical Data and Outcome Measures", "Statistical Analysis", "Frequency of Anti-NSAbs-Associated CAs", "Clinical Characteristics of Patients With CAs", "Laboratory Testing and MRI Analysis", "Response to the Immunotherapy", "Conclusion" ]
[ " Patient Identification A retrospective investigation was conducted on outpatient and hospitalized cases in the Department of Neurology, Xuanwu Hospital, Capital Medical University from June 2015 to June 2020 to identify potential patients with anti-NSAbs-associated CAs using the keywords NSAbs and IMCAs. 54 patients with IMCAs and 191 patients positive for NSAbs were enrolled.\nAll 54 patients were screened for common causes of IMCAs, including PCD, anti-GAD65-Ab-associated CA, post-infectious cerebellitis, gluten ataxia (GA), opsoclonus myoclonus syndrome (OMS), Miller Fisher syndrome, Hashimoto’s encephalopathy (HE) and Systemic Lupus Erythematosus (SLE). Among 54 patients, 13 patients were diagnosed with PCDs (7 with Yo-Ab, 3 with Hu-Ab 2 with Tr-Ab, and 1 with SOX1-Ab), 7 patients with anti-GAD65-Ab-associated CA, 6 with autoimmune disease-associated CAs (4 with Hashimoto’s Encephalopathy and 2 with Systemic Lupus Erythematosus), 14 with unknown etiology and the remaining 14 patients were positive for NSAbs, including 9 with NMDAR-Ab, 2 with LGI1-Ab, 2 with CASPR2-Ab and 1 with AMPA2R-Ab. \nFigure 1\n demonstrates the process of identifying patients in this study. These 14 patients were negative for onconeural antibodies (ONAs), anti-GAD-65 antibodies, GQ1b antibodies, anti-gliadin antibodies (AGA), anti-thyroid antibodies (ATA), anti-nuclear antibody (ANA), and anti-double-stranded DNA antibodies. In addition, there was no history of virus infection, dermatitis herpetiformis (DH), and celiac disease (CD) in all. Moreover, routine screening examinations, including muti-tumor markers and whole-body PET-CT, showed no malignant tumors in these 14 cases. Alternative causes of cerebellar autoimmunity, such as Gluten Ataxia, PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CA, were excluded.\nThe process of identifying patients from IMCAs and anti-NSAbs cohorts. 
NSAbs, Neuronal surface antibodies; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; AE, autoimmune encephalitis.\nPatients with PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CAs were included as the control groups to explore the clinical characteristics of IMCAs associated with these antibodies. Then we reviewed the clinical information of the remaining 177 patients with antibodies targeting NSAbs, and all patients met the diagnostic criteria for autoimmune encephalitis (13). We compared the clinical characteristics of patients with or without CAs to identify the occurrence rate of IMCAs in autoimmune encephalitis.\nA retrospective investigation was conducted on outpatient and hospitalized cases in the Department of Neurology, Xuanwu Hospital, Capital Medical University from June 2015 to June 2020 to identify potential patients with anti-NSAbs-associated CAs using the keywords NSAbs and IMCAs. 54 patients with IMCAs and 191 patients positive for NSAbs were enrolled.\nAll 54 patients were screened for common causes of IMCAs, including PCD, anti-GAD65-Ab-associated CA, post-infectious cerebellitis, gluten ataxia (GA), opsoclonus myoclonus syndrome (OMS), Miller Fisher syndrome, Hashimoto’s encephalopathy (HE) and Systemic Lupus Erythematosus (SLE). Among 54 patients, 13 patients were diagnosed with PCDs (7 with Yo-Ab, 3 with Hu-Ab 2 with Tr-Ab, and 1 with SOX1-Ab), 7 patients with anti-GAD65-Ab-associated CA, 6 with autoimmune disease-associated CAs (4 with Hashimoto’s Encephalopathy and 2 with Systemic Lupus Erythematosus), 14 with unknown etiology and the remaining 14 patients were positive for NSAbs, including 9 with NMDAR-Ab, 2 with LGI1-Ab, 2 with CASPR2-Ab and 1 with AMPA2R-Ab. \nFigure 1\n demonstrates the process of identifying patients in this study. 
These 14 patients were negative for onconeural antibodies (ONAs), anti-GAD-65 antibodies, GQ1b antibodies, anti-gliadin antibodies (AGA), anti-thyroid antibodies (ATA), anti-nuclear antibody (ANA), and anti-double-stranded DNA antibodies. In addition, there was no history of virus infection, dermatitis herpetiformis (DH), and celiac disease (CD) in all. Moreover, routine screening examinations, including muti-tumor markers and whole-body PET-CT, showed no malignant tumors in these 14 cases. Alternative causes of cerebellar autoimmunity, such as Gluten Ataxia, PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CA, were excluded.\nThe process of identifying patients from IMCAs and anti-NSAbs cohorts. NSAbs, Neuronal surface antibodies; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; AE, autoimmune encephalitis.\nPatients with PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CAs were included as the control groups to explore the clinical characteristics of IMCAs associated with these antibodies. Then we reviewed the clinical information of the remaining 177 patients with antibodies targeting NSAbs, and all patients met the diagnostic criteria for autoimmune encephalitis (13). We compared the clinical characteristics of patients with or without CAs to identify the occurrence rate of IMCAs in autoimmune encephalitis.\n Antibody Detection All patients were screened for immunoglobulin G (IgG) against common antigens of autoimmune encephalopathy antibodies using indirect immunofluorescence assays (IFAs) (EUROIMMUN, FA112d-1, Germany) and the cell-based assays Euroimmun kit (commercial CBA) prior to the treatments, including antibodies targeting NMDAR, LGI1, CASPR2, AMPA1/2-R, GABA-A/B-R, DPPX, IgLON5, MOG, and onconeural antibodies (ONAs), including Hu-Ab, Yo-Ab, Ri-Ab, CV2-Ab, PNMA2 (Ma-2/Ta) -Ab, Amphiphysin-Ab, SOX1-Ab, Tr-Ab, and GAD65-Ab. 
As previously reported (4–6), tissue-based assays (TBAs) using rat brain tissue and CBAs using human embryonic kidney 293 (HEK293) cells were utilized for antibodies detection.\nThe initial dilution titers of serum and CSF were 1:10 and 1:1, respectively. Antibody titers were defined as three levels. For the antibody titers in serum, 1:10, 1:32 to 1:100, and 1:320 or above were defined as weakly positive, positive, and strongly positive, respectively. In CSF, 1:1, 1:3.2 to 1:10, and 1:32 or above were defined as weakly positive, positive, and strongly positive (14).\nAll patients were screened for immunoglobulin G (IgG) against common antigens of autoimmune encephalopathy antibodies using indirect immunofluorescence assays (IFAs) (EUROIMMUN, FA112d-1, Germany) and the cell-based assays Euroimmun kit (commercial CBA) prior to the treatments, including antibodies targeting NMDAR, LGI1, CASPR2, AMPA1/2-R, GABA-A/B-R, DPPX, IgLON5, MOG, and onconeural antibodies (ONAs), including Hu-Ab, Yo-Ab, Ri-Ab, CV2-Ab, PNMA2 (Ma-2/Ta) -Ab, Amphiphysin-Ab, SOX1-Ab, Tr-Ab, and GAD65-Ab. As previously reported (4–6), tissue-based assays (TBAs) using rat brain tissue and CBAs using human embryonic kidney 293 (HEK293) cells were utilized for antibodies detection.\nThe initial dilution titers of serum and CSF were 1:10 and 1:1, respectively. Antibody titers were defined as three levels. For the antibody titers in serum, 1:10, 1:32 to 1:100, and 1:320 or above were defined as weakly positive, positive, and strongly positive, respectively. In CSF, 1:1, 1:3.2 to 1:10, and 1:32 or above were defined as weakly positive, positive, and strongly positive (14).\n Clinical Data and Outcome Measures Detailed clinical information including demographic, clinical manifestation, CSF analysis, and brain magnetic resonance imaging (MRI) of all patients was collected. The symptoms of cerebellar ataxia were recorded as gait ataxia, slurred speech, limb dysmetria, and nystagmus. 
All patients received immunotherapy after diagnosis. Glucocorticoids, intravenous immunoglobulin (IVIG), and plasma exchange were classified as first-line therapies, and other immunosuppressants as second-line therapy. The therapeutic regimen and responsiveness to immunotherapy were recorded, and outcome was evaluated with the modified Rankin score (mRS) after discharge, with a reduction in mRS of ≥1 during follow-up defined as an effective response. Relapse of encephalitis was defined as new onset or worsening of symptoms occurring after at least 2 months of improvement or stabilization (10).\n Statistical Analysis Statistical analysis was performed with IBM SPSS V.23.0. Summary statistics were reported as median (range) for continuous variables and as frequencies and percentages for categorical variables. As appropriate, clinical data were compared using Pearson’s χ2 test, Fisher’s exact test, or the Mann-Whitney U test. P<0.05 was considered statistically significant.
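The authors ran these tests in SPSS. As a self-contained illustration only (the function and the example counts are our own, not from the study), a two-sided Fisher exact test for a 2x2 contingency table, the kind of small-sample comparison named above, can be sketched as:

```python
from math import comb

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed one, under the hypergeometric model.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x: int) -> float:
        # Probability of observing x in the top-left cell, margins fixed.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))  # smallest feasible top-left count
    hi = min(row1, col1)            # largest feasible top-left count
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

For a perfectly balanced table such as [[5, 5], [5, 5]] this returns 1.0, while strongly skewed tables give small p-values, reflecting why Fisher's exact test is preferred over the χ2 approximation when expected counts are sparse.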
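As an illustrative sketch only (the function name and interface are assumptions of ours, not part of the study), the three-level titer classification defined in the Antibody Detection section can be expressed as:

```python
# Hypothetical helper, not from the study: maps a reciprocal antibody titer
# (e.g. 320 for a 1:320 dilution) to the three levels defined in the text.
def classify_titer(reciprocal_titer: float, sample: str) -> str:
    """Classify a titer for 'serum' or 'csf' per the study's three-level scheme."""
    if sample == "serum":
        weak_max, strong_min = 10, 320  # 1:10 weak; 1:32-1:100 positive; >=1:320 strong
    elif sample == "csf":
        weak_max, strong_min = 1, 32    # 1:1 weak; 1:3.2-1:10 positive; >=1:32 strong
    else:
        raise ValueError("sample must be 'serum' or 'csf'")
    if reciprocal_titer <= weak_max:
        return "weakly positive"
    if reciprocal_titer >= strong_min:
        return "strongly positive"
    return "positive"  # intermediate dilutions fall in the middle band
```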
", "Among the 40 IMCAs with definite etiology, 14 patients (25.9%) were identified as anti-NSAbs-associated CAs, followed by PCD (13 patients, 24.1%), anti-GAD65-Ab-associated CAs (7 patients, 13.0%), and autoimmune disease-associated CAs (6 patients, 11.1%) (\nTable 1\n). Regarding the 191 patients positive for NSAbs (164 adults, 27 children), 14 (7.3%) developed CAs during the disease course. The proportion of cerebellar ataxia was similar in adults (12/164, 7.3%) and children (2/27, 7.4%) (P=1.000) (\nTable 2\n). 
Compared with other causes of IMCAs or with anti-NSAbs patients without CAs, patients with anti-NSAbs-associated CAs showed no distinct demographic features.\nComparison of clinical features of four forms of IMCAs.\nIMCAs, Immune-mediated cerebellar ataxias; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; MRI, Magnetic resonance imaging.\nComparison of clinical characteristics of patients positive for NSAbs with and without CAs.\nNSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; Ab, Antibody; MRI, Magnetic resonance imaging.\n\n*Brain MRI hyperintense signal on T2-weighted fluid-attenuated inversion recovery sequences highly restricted to one or both medial temporal lobes (limbic encephalitis), or in multifocal areas involving grey matter and white matter (13).\nAccording to the data, 185 patients were positive for one antibody (108 with NMDAR-Ab, 52 with LGI1-Ab, 12 with CASPR2-Ab, 4 with AMPA2R-Ab, and 9 with GABAB-R-Ab), and 6 patients were positive for two antibodies (4 with NMDAR-Ab and CASPR2-Ab, and 2 with CASPR2-Ab and LGI1-Ab) (\nTable 3\n). Patients with AMPAR-Ab had the highest occurrence of cerebellar ataxia during the disease course (1/4, 25%), followed by CASPR2-Ab (2/12, 16.7%), NMDAR-Ab (9/108, 8.3%), and LGI1-Ab (2/52, 3.8%). By contrast, no symptoms of CAs were observed among patients with GABAB-R-Ab (0/9, 0%). Moreover, the presence of multiple antibodies was not associated with an increased frequency of CAs (0/6, 0%).\nThe data of antibody detection in patients with anti-NSAbs.\nNSAbs, Neuronal surface antibodies; CSF, Cerebrospinal fluid.", "Regarding the 14 patients with CAs, three suffered from gait ataxia as the initial symptom (two with NMDAR-Ab and one with CASPR2-Ab). 
The other 11 patients experienced cerebellar ataxia within the first two months after disease onset (2–60 days; on average, day 10). CAs developed acutely (10 patients, 71.4%), subacutely (1 patient, 7.1%), or insidiously (3 patients, 21.4%). Cerebellar signs are shown in \nTable 1\n. Most patients presented with slurred speech as a main symptom (78.6%), whereas nystagmus was less common (35.7%). In addition, two patients (one with NMDAR-Ab and the other with CASPR2-Ab) had isolated CAs throughout the disease (\nTable 4\n). Neurological examination revealed that five patients (35.7%) failed to complete the rapid alternating movement test with both hands, six (42.9%) performed the finger-nose test with difficulty, and five (35.7%) could not complete the heel-knee-shin test. However, the number of patients was too small to compare clinical characteristics among the different antibodies.\nClinical characteristics of patients with anti-NSAbs-associated CAs.\nNSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; M, Male; F, Female; CSF, Cerebrospinal fluid; WBC, White blood cell count; Pos, Positive; Neg, Negative; OB, Oligoclonal bands; MRI, Magnetic resonance imaging; IVIG, Intravenous immunoglobulin.", "Blood and CSF markers of inflammation were evaluated. Among patients with NSAbs, oligoclonal bands (OB) in CSF were found more frequently in patients with anti-NSAbs-associated CAs than in patients without CAs (P=0.025). The proportions of negative, weakly positive, positive, and strongly positive NSAbs were further analyzed (\nTable 5\n). Interestingly, more patients with CAs had CSF antibodies in the high titer range (the “positive” and “strongly positive” groups combined, 85.7% vs. 45.8%, P=0.005). All 14 patients underwent 3T brain MRI. Cerebellar abnormalities on MRI are listed in \nTable 4\n. Limbic system involvement (medial temporal T2/FLAIR signal changes) (13) was found in seven patients (50%). 
All 14 cases were negative on routine tumor screening.\nThe levels of antibody titers in patients with anti-NSAbs-associated CAs.\nNSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; CSF, Cerebrospinal fluid.", "All 14 patients were followed up for at least 15 months after discharge (median, 27 months; range, 15–48 months). In the early stage, patients were assessed monthly with a standardized questionnaire administered by telephone, and all patients were re-hospitalized for disease re-evaluation 6 months after discharge. After treatment with glucocorticoids or a combination of glucocorticoids and IVIG, two patients were free of cerebellar ataxia symptoms and eight recovered satisfactorily (10/14, 71.4%) (\nTable 4\n). In addition, two patients with NMDAR-Ab (patients 8 and 12) had clinical relapses, although symptoms of cerebellar ataxia were not observed during the relapses. No malignancy was found in these 14 patients up to the last follow-up visit. Follow-up brain MRI showed an obvious reduction of cerebellar T2/FLAIR-hyperintense lesions in two patients (2/4, 50%; cases 7 and 13) and no significant changes in the others (cases 1 and 2). However, the cerebellar atrophy on brain MRI of case 3 remained unchanged.", "Cerebellar ataxias were rare and atypical in autoimmune encephalitis with neuronal surface antibodies. Patients with anti-NSAbs-associated CAs exhibited a higher positivity rate of OB in CSF. Most patients, especially those with isolated CAs, responded well to immunotherapy. Compared with other causes, anti-NSAbs-associated CAs produced more symptoms of encephalopathy and responded better to immunotherapy. Given the treatability of anti-NSAbs-associated CAs, serum and CSF NSAbs testing is worthwhile for patients with cerebellar ataxia." ]
[ "Introduction", "Methods", "Patient Identification", "Antibody Detection", "Clinical Data and Outcome Measures", "Statistical Analysis", "Results", "Frequency of Anti-NSAbs-Associated CAs", "Clinical Characteristics of Patients With CAs", "Laboratory Testing and MRI Analysis", "Response to the Immunotherapy", "Discussion", "Conclusion", "Data Availability Statement", "Ethics Statement", "Author Contributions", "Funding", "Conflict of Interest", "Publisher’s Note" ]
[ "Over the past few decades, with the discovery of anti-NMDAR encephalitis in 2007 (1), an increasing number of specific neuronal surface antibodies (NSAbs) have been discovered, including LGI1-Ab, CASPR2-Ab, AMPA1/2R-Ab, GABAR-A/B-Ab, and so on (2–5). Anti-NSAbs-associated autoimmune encephalitis presents diverse clinical phenotypes, such as seizures, cognitive decline, psychiatric abnormalities, and autonomic dysfunction. In contrast, symptoms of cerebellar ataxia are reported rarely in patients with positive NSAbs (6–8).\nImmune-mediated cerebellar ataxias (IMCAs) are one of the most common symptoms of paraneoplastic cerebellar degeneration (PCD) (9). Previous studies have suggested that CAs are atypical in anti-NSAbs-associated autoimmune encephalitis (6, 7, 10). For example, cerebellar ataxia has been reported in a few patients with anti-NMDAR encephalitis (10, 11). Boyko et al. found that about 15% of patients with anti-CASPR2 encephalitis developed cerebellar ataxia followed by the onset of limbic symptoms or Morvan syndrome (12). Although CAs associated with anti-NMDAR and anti-CASPR2 have been studied in a few cases, they have not been studied systematically. In this study, we summarized the clinical features of 14 patients with anti-NSAbs-associated CAs. Moreover, to explore the distinct features of anti-NSAbs-associated CAs, a study of two overlapping cohorts, including patients diagnosed as IMCAs and patients with NSAbs from 2015 to 2020 in our center, was designed and conducted. The present study focused on the clinical characteristics of patients with anti-NSAbs-associated CAs.", " Patient Identification A retrospective investigation was conducted on outpatient and hospitalized cases in the Department of Neurology, Xuanwu Hospital, Capital Medical University from June 2015 to June 2020 to identify potential patients with anti-NSAbs-associated CAs using the keywords NSAbs and IMCAs. 
54 patients with IMCAs and 191 patients positive for NSAbs were enrolled.\nAll 54 patients were screened for common causes of IMCAs, including PCD, anti-GAD65-Ab-associated CA, post-infectious cerebellitis, gluten ataxia (GA), opsoclonus myoclonus syndrome (OMS), Miller Fisher syndrome, Hashimoto’s encephalopathy (HE) and Systemic Lupus Erythematosus (SLE). Among 54 patients, 13 patients were diagnosed with PCDs (7 with Yo-Ab, 3 with Hu-Ab 2 with Tr-Ab, and 1 with SOX1-Ab), 7 patients with anti-GAD65-Ab-associated CA, 6 with autoimmune disease-associated CAs (4 with Hashimoto’s Encephalopathy and 2 with Systemic Lupus Erythematosus), 14 with unknown etiology and the remaining 14 patients were positive for NSAbs, including 9 with NMDAR-Ab, 2 with LGI1-Ab, 2 with CASPR2-Ab and 1 with AMPA2R-Ab. \nFigure 1\n demonstrates the process of identifying patients in this study. These 14 patients were negative for onconeural antibodies (ONAs), anti-GAD-65 antibodies, GQ1b antibodies, anti-gliadin antibodies (AGA), anti-thyroid antibodies (ATA), anti-nuclear antibody (ANA), and anti-double-stranded DNA antibodies. In addition, there was no history of virus infection, dermatitis herpetiformis (DH), and celiac disease (CD) in all. Moreover, routine screening examinations, including muti-tumor markers and whole-body PET-CT, showed no malignant tumors in these 14 cases. Alternative causes of cerebellar autoimmunity, such as Gluten Ataxia, PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CA, were excluded.\nThe process of identifying patients from IMCAs and anti-NSAbs cohorts. NSAbs, Neuronal surface antibodies; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; AE, autoimmune encephalitis.\nPatients with PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CAs were included as the control groups to explore the clinical characteristics of IMCAs associated with these antibodies. 
Then we reviewed the clinical information of the remaining 177 patients with antibodies targeting NSAbs, and all patients met the diagnostic criteria for autoimmune encephalitis (13). We compared the clinical characteristics of patients with or without CAs to identify the occurrence rate of IMCAs in autoimmune encephalitis.\nA retrospective investigation was conducted on outpatient and hospitalized cases in the Department of Neurology, Xuanwu Hospital, Capital Medical University from June 2015 to June 2020 to identify potential patients with anti-NSAbs-associated CAs using the keywords NSAbs and IMCAs. 54 patients with IMCAs and 191 patients positive for NSAbs were enrolled.\nAll 54 patients were screened for common causes of IMCAs, including PCD, anti-GAD65-Ab-associated CA, post-infectious cerebellitis, gluten ataxia (GA), opsoclonus myoclonus syndrome (OMS), Miller Fisher syndrome, Hashimoto’s encephalopathy (HE) and Systemic Lupus Erythematosus (SLE). Among 54 patients, 13 patients were diagnosed with PCDs (7 with Yo-Ab, 3 with Hu-Ab 2 with Tr-Ab, and 1 with SOX1-Ab), 7 patients with anti-GAD65-Ab-associated CA, 6 with autoimmune disease-associated CAs (4 with Hashimoto’s Encephalopathy and 2 with Systemic Lupus Erythematosus), 14 with unknown etiology and the remaining 14 patients were positive for NSAbs, including 9 with NMDAR-Ab, 2 with LGI1-Ab, 2 with CASPR2-Ab and 1 with AMPA2R-Ab. \nFigure 1\n demonstrates the process of identifying patients in this study. These 14 patients were negative for onconeural antibodies (ONAs), anti-GAD-65 antibodies, GQ1b antibodies, anti-gliadin antibodies (AGA), anti-thyroid antibodies (ATA), anti-nuclear antibody (ANA), and anti-double-stranded DNA antibodies. In addition, there was no history of virus infection, dermatitis herpetiformis (DH), and celiac disease (CD) in all. Moreover, routine screening examinations, including muti-tumor markers and whole-body PET-CT, showed no malignant tumors in these 14 cases. 
Alternative causes of cerebellar autoimmunity, such as Gluten Ataxia, PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CA, were excluded.\nThe process of identifying patients from IMCAs and anti-NSAbs cohorts. NSAbs, Neuronal surface antibodies; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; AE, autoimmune encephalitis.\nPatients with PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CAs were included as the control groups to explore the clinical characteristics of IMCAs associated with these antibodies. Then we reviewed the clinical information of the remaining 177 patients with antibodies targeting NSAbs, and all patients met the diagnostic criteria for autoimmune encephalitis (13). We compared the clinical characteristics of patients with or without CAs to identify the occurrence rate of IMCAs in autoimmune encephalitis.\n Antibody Detection All patients were screened for immunoglobulin G (IgG) against common antigens of autoimmune encephalopathy antibodies using indirect immunofluorescence assays (IFAs) (EUROIMMUN, FA112d-1, Germany) and the cell-based assays Euroimmun kit (commercial CBA) prior to the treatments, including antibodies targeting NMDAR, LGI1, CASPR2, AMPA1/2-R, GABA-A/B-R, DPPX, IgLON5, MOG, and onconeural antibodies (ONAs), including Hu-Ab, Yo-Ab, Ri-Ab, CV2-Ab, PNMA2 (Ma-2/Ta) -Ab, Amphiphysin-Ab, SOX1-Ab, Tr-Ab, and GAD65-Ab. As previously reported (4–6), tissue-based assays (TBAs) using rat brain tissue and CBAs using human embryonic kidney 293 (HEK293) cells were utilized for antibodies detection.\nThe initial dilution titers of serum and CSF were 1:10 and 1:1, respectively. Antibody titers were defined as three levels. For the antibody titers in serum, 1:10, 1:32 to 1:100, and 1:320 or above were defined as weakly positive, positive, and strongly positive, respectively. 
In CSF, 1:1, 1:3.2 to 1:10, and 1:32 or above were defined as weakly positive, positive, and strongly positive (14).\nAll patients were screened for immunoglobulin G (IgG) against common antigens of autoimmune encephalopathy antibodies using indirect immunofluorescence assays (IFAs) (EUROIMMUN, FA112d-1, Germany) and the cell-based assays Euroimmun kit (commercial CBA) prior to the treatments, including antibodies targeting NMDAR, LGI1, CASPR2, AMPA1/2-R, GABA-A/B-R, DPPX, IgLON5, MOG, and onconeural antibodies (ONAs), including Hu-Ab, Yo-Ab, Ri-Ab, CV2-Ab, PNMA2 (Ma-2/Ta) -Ab, Amphiphysin-Ab, SOX1-Ab, Tr-Ab, and GAD65-Ab. As previously reported (4–6), tissue-based assays (TBAs) using rat brain tissue and CBAs using human embryonic kidney 293 (HEK293) cells were utilized for antibodies detection.\nThe initial dilution titers of serum and CSF were 1:10 and 1:1, respectively. Antibody titers were defined as three levels. For the antibody titers in serum, 1:10, 1:32 to 1:100, and 1:320 or above were defined as weakly positive, positive, and strongly positive, respectively. In CSF, 1:1, 1:3.2 to 1:10, and 1:32 or above were defined as weakly positive, positive, and strongly positive (14).\n Clinical Data and Outcome Measures Detailed clinical information including demographic, clinical manifestation, CSF analysis, and brain magnetic resonance imaging (MRI) of all patients was collected. The symptoms of cerebellar ataxia were recorded as gait ataxia, slurred speech, limb dysmetria, and nystagmus. All patients received immunotherapy after diagnosis. Glucocorticoids, intravenous immunoglobulin (IVIG), and plasma exchange were classified as first-line therapy with other immunosuppressants as second-line therapy. The therapeutic regimen and responsiveness to immunotherapy of patients were collected, and the outcome was evaluated by modified Rankin score (mRS) after discharge with a reduction of mRS ≥1 during follow-ups defined as efficacious. 
Relapse of encephalitis was defined as the new onset or worsening of symptoms occurring after at least 2 months of improvement or stabilization (10).\nDetailed clinical information including demographic, clinical manifestation, CSF analysis, and brain magnetic resonance imaging (MRI) of all patients was collected. The symptoms of cerebellar ataxia were recorded as gait ataxia, slurred speech, limb dysmetria, and nystagmus. All patients received immunotherapy after diagnosis. Glucocorticoids, intravenous immunoglobulin (IVIG), and plasma exchange were classified as first-line therapy with other immunosuppressants as second-line therapy. The therapeutic regimen and responsiveness to immunotherapy of patients were collected, and the outcome was evaluated by modified Rankin score (mRS) after discharge with a reduction of mRS ≥1 during follow-ups defined as efficacious. Relapse of encephalitis was defined as the new onset or worsening of symptoms occurring after at least 2 months of improvement or stabilization (10).\n Statistical Analysis Statistical analysis was performed with IBM SPSS V.23.0. Summary statistics were reported as median (range, minimum-maximum) for continuous variables, frequencies, and percentages for categorical variables. As appropriate, clinical data were compared using Pearson’s χ2, Fisher’s exact test, or Mann-Whitney U test. P<0.05 was considered statistically significant.\nStatistical analysis was performed with IBM SPSS V.23.0. Summary statistics were reported as median (range, minimum-maximum) for continuous variables, frequencies, and percentages for categorical variables. As appropriate, clinical data were compared using Pearson’s χ2, Fisher’s exact test, or Mann-Whitney U test. 
P<0.05 was considered statistically significant.", "A retrospective investigation was conducted on outpatient and hospitalized cases in the Department of Neurology, Xuanwu Hospital, Capital Medical University from June 2015 to June 2020 to identify potential patients with anti-NSAbs-associated CAs using the keywords NSAbs and IMCAs. 54 patients with IMCAs and 191 patients positive for NSAbs were enrolled.\nAll 54 patients were screened for common causes of IMCAs, including PCD, anti-GAD65-Ab-associated CA, post-infectious cerebellitis, gluten ataxia (GA), opsoclonus myoclonus syndrome (OMS), Miller Fisher syndrome, Hashimoto’s encephalopathy (HE) and Systemic Lupus Erythematosus (SLE). Among 54 patients, 13 patients were diagnosed with PCDs (7 with Yo-Ab, 3 with Hu-Ab 2 with Tr-Ab, and 1 with SOX1-Ab), 7 patients with anti-GAD65-Ab-associated CA, 6 with autoimmune disease-associated CAs (4 with Hashimoto’s Encephalopathy and 2 with Systemic Lupus Erythematosus), 14 with unknown etiology and the remaining 14 patients were positive for NSAbs, including 9 with NMDAR-Ab, 2 with LGI1-Ab, 2 with CASPR2-Ab and 1 with AMPA2R-Ab. \nFigure 1\n demonstrates the process of identifying patients in this study. These 14 patients were negative for onconeural antibodies (ONAs), anti-GAD-65 antibodies, GQ1b antibodies, anti-gliadin antibodies (AGA), anti-thyroid antibodies (ATA), anti-nuclear antibody (ANA), and anti-double-stranded DNA antibodies. In addition, there was no history of virus infection, dermatitis herpetiformis (DH), and celiac disease (CD) in all. Moreover, routine screening examinations, including muti-tumor markers and whole-body PET-CT, showed no malignant tumors in these 14 cases. Alternative causes of cerebellar autoimmunity, such as Gluten Ataxia, PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CA, were excluded.\nThe process of identifying patients from IMCAs and anti-NSAbs cohorts. 
NSAbs, Neuronal surface antibodies; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; AE, autoimmune encephalitis.\nPatients with PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CAs were included as the control groups to explore the clinical characteristics of IMCAs associated with these antibodies. Then we reviewed the clinical information of the remaining 177 patients with antibodies targeting NSAbs, and all patients met the diagnostic criteria for autoimmune encephalitis (13). We compared the clinical characteristics of patients with or without CAs to identify the occurrence rate of IMCAs in autoimmune encephalitis.", "All patients were screened for immunoglobulin G (IgG) against common antigens of autoimmune encephalopathy antibodies using indirect immunofluorescence assays (IFAs) (EUROIMMUN, FA112d-1, Germany) and the cell-based assays Euroimmun kit (commercial CBA) prior to the treatments, including antibodies targeting NMDAR, LGI1, CASPR2, AMPA1/2-R, GABA-A/B-R, DPPX, IgLON5, MOG, and onconeural antibodies (ONAs), including Hu-Ab, Yo-Ab, Ri-Ab, CV2-Ab, PNMA2 (Ma-2/Ta) -Ab, Amphiphysin-Ab, SOX1-Ab, Tr-Ab, and GAD65-Ab. As previously reported (4–6), tissue-based assays (TBAs) using rat brain tissue and CBAs using human embryonic kidney 293 (HEK293) cells were utilized for antibodies detection.\nThe initial dilution titers of serum and CSF were 1:10 and 1:1, respectively. Antibody titers were defined as three levels. For the antibody titers in serum, 1:10, 1:32 to 1:100, and 1:320 or above were defined as weakly positive, positive, and strongly positive, respectively. In CSF, 1:1, 1:3.2 to 1:10, and 1:32 or above were defined as weakly positive, positive, and strongly positive (14).", "Detailed clinical information including demographic, clinical manifestation, CSF analysis, and brain magnetic resonance imaging (MRI) of all patients was collected. 
The symptoms of cerebellar ataxia were recorded as gait ataxia, slurred speech, limb dysmetria, and nystagmus. All patients received immunotherapy after diagnosis. Glucocorticoids, intravenous immunoglobulin (IVIG), and plasma exchange were classified as first-line therapy with other immunosuppressants as second-line therapy. The therapeutic regimen and responsiveness to immunotherapy of patients were collected, and the outcome was evaluated by modified Rankin score (mRS) after discharge with a reduction of mRS ≥1 during follow-ups defined as efficacious. Relapse of encephalitis was defined as the new onset or worsening of symptoms occurring after at least 2 months of improvement or stabilization (10).", "Statistical analysis was performed with IBM SPSS V.23.0. Summary statistics were reported as median (range, minimum-maximum) for continuous variables, frequencies, and percentages for categorical variables. As appropriate, clinical data were compared using Pearson’s χ2, Fisher’s exact test, or Mann-Whitney U test. P<0.05 was considered statistically significant.", " Frequency of Anti-NSAbs-Associated CAs Among the 40 IMCAs with definite etiology, 14 patients (25.9%) were identified as anti-NSAbs associated CAs, followed by PCD (13 patients, 24.1%), anti-GAD65-Ab-associated CAs (7 patients, 13.0%), and autoimmune disease-associated CAs (6 patients, 11.1%) (\nTable 1\n). Regarding the 191 patients with positive NSAbs (adult, 164; children, 27), 14 patients (7.3%) developed CAs during the disease period. The result showed a similar proportion of cerebellar ataxia, respectively, in adults (12/164, 7.3%) and children (2/27, 7.4%) (P=1.000) (\nTable 2\n). 
Compared with other causes of IMCAs or anti-NSAbs without CAs, no distinct features in demographic data were found in patients with Anti-NSAbs-associated CAs\nComparison of clinical features of four forms of IMCAs.\nIMCAs, Immune-mediated cerebellar ataxias; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; MRI, Magnetic resonance imaging.\nComparison of clinical characteristic of patients positive for NASbs with and without CAs.\nNSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; Ab, Antibody; MRI, Magnetic resonance imaging\n\n*Brain MRI hyperintense signal on T2-weighted fluid-attenuated inversion recovery sequences highly restricted to one or both medial temporal lobes (limbic encephalitis), or in multifocal areas involving grey matter, white matter (13).\nAccording to the data, 185 patients were positive with one antibody (108 with NMDAR-Ab, 52 with LGI1-Ab, 12 with CASPR2-Ab, 4 with AMPA2R-Ab, and 9 with GABAR-B-Ab), and 6 patients were positive with two types of antibodies (4 with NMDAR-Ab and CASPR2-Ab, and 2 with CASPR2-Ab and LGI1-Ab) (\nTable 3\n). Figures suggested that patients with AMPAR-Ab had the highest occurrence of cerebellar ataxia during the course of the disease (1/4, 25%), followed by CASPR2-Ab (2/12, 16.7%), NMDAR-Ab (9/108, 8.3%), and LGI1-Ab (2/52, 3.8%). However, the symptoms of CAs were not apparent among patients with GABAB-R-Ab (0/9, 0%). 
Moreover, positivity for multiple antibodies was not associated with an increased frequency of CAs (0/6, 0%).

Table 3: Antibody detection in patients with anti-NSAbs. NSAbs, neuronal surface antibodies; CSF, cerebrospinal fluid.

Clinical Characteristics of Patients With CAs

Of the 14 patients with CAs, three had gait ataxia as the initial symptom (two with NMDAR-Ab and one with CASPR2-Ab). The other 11 developed cerebellar ataxia within the first two months after disease onset (range 2-60 days, average day 10). CAs developed acutely in 10 patients (71.4%), subacutely in 1 (7.1%), and insidiously in 3 (21.4%). Cerebellar signs are shown in Table 1. Most patients presented with slurred speech as a main symptom (78.6%), whereas nystagmus was uncommon (35.7%). In addition, two patients (one with NMDAR-Ab and the other with CASPR2-Ab) had isolated CAs throughout the disease (Table 4). On neurological examination, five patients (35.7%) failed the rapid alternating movement test with both hands, six (42.9%) performed the finger-nose test with difficulty, and five (35.7%) could not complete the heel-knee-shin test.
However, the number of patients was insufficient for a comparison of clinical characteristics among the different antibodies.

Table 4: Clinical characteristics of patients with anti-NSAbs-associated CAs. NSAbs, neuronal surface antibodies; CAs, cerebellar ataxias; M, male; F, female; CSF, cerebrospinal fluid; WBC, white blood cell count; Pos, positive; Neg, negative; OB, oligoclonal bands; MRI, magnetic resonance imaging; IVIG, intravenous immunoglobulin.

Laboratory Testing and MRI Analysis

Blood and CSF markers of inflammation were evaluated.
Among patients with NSAbs, oligoclonal bands (OB) in CSF were found more frequently in patients with anti-NSAbs-associated CAs than in those without CAs (P=0.025). The proportions of negative, weakly positive, positive, and strongly positive NSAbs were further analyzed (Table 5). Interestingly, more patients with CAs had CSF antibody titers in the high range (the “positive” and “strongly positive” groups combined, 85.7% vs. 45.8%, P=0.005). All 14 patients underwent 3T brain MRI; cerebellar abnormalities on MRI are listed in Table 4. Limbic system involvement (medial temporal T2/FLAIR signal changes) (13) was found in seven patients (50%). All 14 cases were negative on routine tumor screening.

Table 5: Antibody titer levels in patients with anti-NSAbs-associated CAs. NSAbs, neuronal surface antibodies; CAs, cerebellar ataxias; CSF, cerebrospinal fluid.

Response to Immunotherapy

All 14 patients were followed up for at least 15 months after discharge (median 27 months, range 15-48 months).
In the early stage, patients were assessed with a standardized questionnaire by telephone once a month, and all were re-hospitalized for disease re-evaluation 6 months after discharge. Two patients became free of cerebellar ataxia symptoms, and eight recovered satisfactorily (10/14, 71.4%) after glucocorticoid treatment or a combination of glucocorticoids and IVIG (Table 4). In addition, two patients with NMDAR-Ab (patients 8 and 12) had clinical relapses, although cerebellar ataxia was not observed during the relapses. No malignancy was found in these 14 patients up to the last follow-up visit. Follow-up brain MRI showed obvious diminishment of cerebellar T2/FLAIR-hyperintense lesions in two patients (2/4, 50%; cases 7 and 13) and no significant change in the others (cases 1 and 2). However, the cerebellar atrophy on brain MRI in case 3 remained unchanged.
Discussion

IMCAs are a rare spectrum of diseases, and several specific neuronal antibodies have been identified in IMCAs, such as the onconeural antibodies (Yo-Ab, Hu-Ab, CV2-Ab, Ri-Ab, and Ma-2-Ab) in PCD, GAD65-Ab in anti-GAD65-Ab-associated CA, and Ri-Ab in OMS (15-20). However, only a few studies have focused on anti-NSAbs-associated CAs in autoimmune encephalitis (6-8). Iizuka et al. reported that approximately 5% of patients with anti-NMDAR encephalitis had cerebellar complaints during the disease course (11). Boyko et al. reviewed the clinical data of 163 patients with CASPR2-Ab and found that 24 (14.7%) developed CAs (12). However, no large-sample clinical observation and follow-up studies have so far explored the frequency of cerebellar ataxia in patients with anti-AMPAR, anti-LGI1, or anti-GABABR encephalitis (8, 21).

This study reviewed 54 patients with IMCAs and identified 14 carrying different types of neuronal surface autoantibodies, including NMDAR-Ab, LGI1-Ab, CASPR2-Ab, and AMPA2R-Ab. Anti-NSAbs-associated CAs mostly appeared in the first two weeks of the disease course of autoimmune encephalitis, and two patients (case 3 with CASPR2-Ab and case 13 with NMDAR-Ab) had isolated CAs throughout the disease. Five patients with isolated CAs have previously been reported (7, 22), including four with CASPR2-Ab and one with NMDAR-Ab.
Although isolated CAs are common in PCD, the diagnosis of autoimmune encephalitis, especially anti-NMDAR or anti-CASPR2 encephalitis, should also be considered in patients presenting with isolated CAs. Anti-NSAbs-associated CAs showed less predominant truncal ataxia than PCD. Mimicking Hashimoto's encephalopathy (23, 24), anti-NSAbs were more frequently associated with additional extra-cerebellar symptoms, such as seizures and mental abnormalities, during the disease course.

Compared with other forms of IMCAs, the higher rates of pleocytosis and positive OB in CSF may have predictive value for cerebellar symptoms in autoimmune encephalitis. Regarding imaging abnormalities on MRI, previous studies found that only 6% of patients with anti-NMDAR encephalitis had cerebellar abnormalities, and 13.3% suffered progressive and irreversible cerebellar atrophy (11, 25). By contrast, cerebellar atrophy on MRI was found in 16.7% of patients with CASPR2-Ab, all of whom benefited from immunotherapy (26). In this study, only one patient (case 3, with CASPR2-Ab) had cerebellar atrophy. After immunotherapy, he was free of symptoms, and the existing cerebellar atrophy did not worsen on follow-up brain MRI.

All 14 patients received immunotherapy and, most encouragingly, the two patients with isolated CAs (cases 3 and 13, mentioned above) fully recovered and returned to work. Of the five previously reported patients with isolated CAs (four with CASPR2-Ab and one with NMDAR-Ab) (7, 22), the patient with NMDAR-Ab showed partial improvement after immunotherapy (22), while the therapeutic response of the four patients with CASPR2-Ab is unknown (7). The factors influencing treatment effect remain unclear; they may relate to the type of antibody, the influence of accompanying symptoms, and the severity of cerebellar inflammation.
Further research is needed to explore the underlying pathophysiological mechanisms of anti-NSAbs-associated CAs and the factors affecting their response to immunotherapy.

Among the many mechanistic studies of IMCAs, the association between ONAs and PCD has long been a research hotspot, but few studies have focused on the pathogenesis of anti-NSAbs-associated CAs. In patients with PCD, previous studies revealed a significant loss of Purkinje cells due to a cell-mediated cytotoxic immune response involving activated CD8+ T cells and microglia in the cerebellum (27, 28). In patients with anti-NMDAR encephalitis, however, post-mortem studies demonstrated no profound loss of cerebellar Purkinje cells (11). The NMDAR is strongly expressed in the cerebellum, and the receptors on granule cells (but not on Purkinje cells) were identified as a specific antigen of IgG antibodies, which might interfere with the excitatory pathway involving NMDAR-mediated signaling, resulting in cerebellar dysfunction (29).

Conclusions

Cerebellar ataxias were rare and atypical in autoimmune encephalitis with neuronal surface antibodies. Patients with anti-NSAbs-associated CAs exhibited a higher positivity rate of OB in CSF. Most patients, especially those with isolated CAs, responded well to immunotherapy. Compared with other causes, anti-NSAbs-associated CAs were accompanied by more symptoms of encephalopathy and showed better responses to immunotherapy. Given the treatability of anti-NSAbs-associated CAs, serum and CSF NSAbs testing is valuable in patients with cerebellar ataxia.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

Ethics Statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Xuanwu Hospital (No. 2017YFC0907702).
The patients/participants provided their written informed consent to participate in this study.

Author Contributions

YJ was the major contributor in writing the manuscript. YJ, ML, and DL conceptualized the study. YJ, MZ, and HW collected samples and data. ZH, LJ, JY, AL, and YW contributed to the diagnosis and treatment of patients. YW checked the final manuscript. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by the Beijing Postdoctoral Research Foundation [Grant No. 2021-ZZ-001], the Beijing Municipal Education Commission [Grant No. TJSH20161002502], the National Natural Science Foundation of China [Grant No. 81771398], the Beijing Key Clinical Speciality Excellence Project, and the National Support Provincial Major Disease Medical Services and Social Capability Enhancement Project.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Keywords: immune-mediated cerebellar ataxia; neuronal surface antibodies; CSF analysis; oligoclonal bands; immunotherapy
Introduction

Over the past few decades, since the discovery of anti-NMDAR encephalitis in 2007 (1), an increasing number of specific neuronal surface antibodies (NSAbs) have been identified, including LGI1-Ab, CASPR2-Ab, AMPA1/2R-Ab, and GABA-A/B-R-Ab (2-5). Anti-NSAbs-associated autoimmune encephalitis presents diverse clinical phenotypes, such as seizures, cognitive decline, psychiatric abnormalities, and autonomic dysfunction. In contrast, symptoms of cerebellar ataxia are rarely reported in patients positive for NSAbs (6-8). Paraneoplastic cerebellar degeneration (PCD) is one of the most common causes of immune-mediated cerebellar ataxias (IMCAs) (9). Previous studies have suggested that CAs are atypical in anti-NSAbs-associated autoimmune encephalitis (6, 7, 10). For example, cerebellar ataxia has been reported in a few patients with anti-NMDAR encephalitis (10, 11). Boyko et al. found that about 15% of patients with anti-CASPR2 encephalitis developed cerebellar ataxia following the onset of limbic symptoms or Morvan syndrome (12). Although CAs associated with anti-NMDAR and anti-CASPR2 have been described in a few cases, they have not been studied systematically. In this study, we summarized the clinical features of 14 patients with anti-NSAbs-associated CAs. Moreover, to explore the distinct features of anti-NSAbs-associated CAs, we designed and conducted a study of two overlapping cohorts, comprising patients diagnosed with IMCAs and patients with NSAbs seen in our center from 2015 to 2020. The present study focused on the clinical characteristics of patients with anti-NSAbs-associated CAs.

Methods

Patient Identification

A retrospective investigation was conducted on outpatient and hospitalized cases in the Department of Neurology, Xuanwu Hospital, Capital Medical University, from June 2015 to June 2020 to identify potential patients with anti-NSAbs-associated CAs using the keywords NSAbs and IMCAs.
Fifty-four patients with IMCAs and 191 patients positive for NSAbs were enrolled. All 54 patients were screened for common causes of IMCAs, including PCD, anti-GAD65-Ab-associated CA, post-infectious cerebellitis, gluten ataxia (GA), opsoclonus-myoclonus syndrome (OMS), Miller Fisher syndrome, Hashimoto's encephalopathy (HE), and systemic lupus erythematosus (SLE). Of the 54 patients, 13 were diagnosed with PCD (7 with Yo-Ab, 3 with Hu-Ab, 2 with Tr-Ab, and 1 with SOX1-Ab), 7 with anti-GAD65-Ab-associated CA, 6 with autoimmune disease-associated CAs (4 with Hashimoto's encephalopathy and 2 with systemic lupus erythematosus), and 14 with unknown etiology; the remaining 14 patients were positive for NSAbs, including 9 with NMDAR-Ab, 2 with LGI1-Ab, 2 with CASPR2-Ab, and 1 with AMPA2R-Ab. Figure 1 demonstrates the patient identification process. These 14 patients were negative for onconeural antibodies (ONAs), anti-GAD65 antibodies, GQ1b antibodies, anti-gliadin antibodies (AGA), anti-thyroid antibodies (ATA), anti-nuclear antibody (ANA), and anti-double-stranded DNA antibodies. In addition, none had a history of virus infection, dermatitis herpetiformis (DH), or celiac disease (CD). Moreover, routine screening examinations, including multi-tumor markers and whole-body PET-CT, showed no malignant tumors in these 14 cases. Alternative causes of cerebellar autoimmunity, such as gluten ataxia, PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CA, were thereby excluded.

Figure 1: The process of identifying patients from the IMCAs and anti-NSAbs cohorts. NSAbs, neuronal surface antibodies; PCD, paraneoplastic cerebellar degeneration; AE, autoimmune encephalitis.

Patients with PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CAs were included as control groups to explore the clinical characteristics of IMCAs associated with these antibodies.
We then reviewed the clinical information of the remaining 177 patients with antibodies targeting neuronal surface antigens, all of whom met the diagnostic criteria for autoimmune encephalitis (13). We compared the clinical characteristics of patients with and without CAs to determine the occurrence rate of IMCAs in autoimmune encephalitis.
Antibody Detection

All patients were screened prior to treatment for immunoglobulin G (IgG) against common antigens of autoimmune encephalitis using indirect immunofluorescence assays (IFAs) (EUROIMMUN, FA112d-1, Germany) and a commercial cell-based assay (CBA) kit (EUROIMMUN), covering antibodies targeting NMDAR, LGI1, CASPR2, AMPA1/2-R, GABA-A/B-R, DPPX, IgLON5, and MOG, as well as onconeural antibodies (ONAs), including Hu-Ab, Yo-Ab, Ri-Ab, CV2-Ab, PNMA2 (Ma-2/Ta)-Ab, Amphiphysin-Ab, SOX1-Ab, Tr-Ab, and GAD65-Ab. As previously reported (4-6), tissue-based assays (TBAs) using rat brain tissue and CBAs using human embryonic kidney 293 (HEK293) cells were used for antibody detection. The initial dilution titers of serum and CSF were 1:10 and 1:1, respectively. Antibody titers were classified into three levels: in serum, 1:10, 1:32 to 1:100, and 1:320 or above were defined as weakly positive, positive, and strongly positive, respectively.
In CSF, 1:1, 1:3.2 to 1:10, and 1:32 or above were defined as weakly positive, positive, and strongly positive, respectively (14).

Clinical Data and Outcome Measures

Detailed clinical information, including demographics, clinical manifestations, CSF analysis, and brain magnetic resonance imaging (MRI), was collected for all patients. The symptoms of cerebellar ataxia were recorded as gait ataxia, slurred speech, limb dysmetria, and nystagmus. All patients received immunotherapy after diagnosis. Glucocorticoids, intravenous immunoglobulin (IVIG), and plasma exchange were classified as first-line therapy, with other immunosuppressants as second-line therapy. The therapeutic regimen and responsiveness to immunotherapy were recorded, and outcome was evaluated by the modified Rankin Scale (mRS) after discharge, with a reduction in mRS of ≥1 during follow-up defined as efficacious.
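The serum and CSF titer cut-offs can be captured in a small helper. Representing a titer 1:x by its denominator x is an illustrative convention, and the serum gap between 1:100 and 1:320 is treated here as "positive"; this sketch is not part of the authors' protocol:

```python
# Cut-offs from the study: serum 1:10 / 1:32-1:100 / >=1:320,
# CSF 1:1 / 1:3.2-1:10 / >=1:32, as (weak, positive, strong) thresholds.
CUTOFFS = {
    "serum": (10, 32, 320),
    "csf": (1, 3.2, 32),
}

def classify_titer(sample: str, dilution: float) -> str:
    """Map an antibody titer (denominator x of the 1:x dilution) to the
    study's positivity levels for the given sample type."""
    weak, pos, strong = CUTOFFS[sample]
    if dilution >= strong:
        return "strongly positive"
    if dilution >= pos:
        return "positive"
    if dilution >= weak:
        return "weakly positive"
    return "negative"
```

For example, a CSF titer of 1:32 falls in the "strongly positive" band, while a serum titer of 1:10 (the initial screening dilution) is only "weakly positive".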
Relapse of encephalitis was defined as new onset or worsening of symptoms occurring after at least 2 months of improvement or stabilization (10).

Statistical Analysis

Statistical analysis was performed with IBM SPSS V.23.0. Summary statistics were reported as median (range, minimum-maximum) for continuous variables and as frequencies and percentages for categorical variables. As appropriate, clinical data were compared using Pearson's χ2 test, Fisher's exact test, or the Mann-Whitney U test. P<0.05 was considered statistically significant.
Patient Identification: A retrospective investigation was conducted on outpatient and hospitalized cases in the Department of Neurology, Xuanwu Hospital, Capital Medical University, from June 2015 to June 2020 to identify potential patients with anti-NSAbs-associated CAs using the keywords NSAbs and IMCAs. A total of 54 patients with IMCAs and 191 patients positive for NSAbs were enrolled. All 54 patients were screened for common causes of IMCAs, including PCD, anti-GAD65-Ab-associated CA, post-infectious cerebellitis, gluten ataxia (GA), opsoclonus myoclonus syndrome (OMS), Miller Fisher syndrome, Hashimoto's encephalopathy (HE), and systemic lupus erythematosus (SLE). Among the 54 patients, 13 were diagnosed with PCD (7 with Yo-Ab, 3 with Hu-Ab, 2 with Tr-Ab, and 1 with SOX1-Ab), 7 with anti-GAD65-Ab-associated CA, 6 with autoimmune disease-associated CAs (4 with HE and 2 with SLE), and 14 with CAs of unknown etiology; the remaining 14 patients were positive for NSAbs, including 9 with NMDAR-Ab, 2 with LGI1-Ab, 2 with CASPR2-Ab, and 1 with AMPA2R-Ab. Figure 1 demonstrates the process of identifying patients in this study. These 14 patients were negative for onconeural antibodies (ONAs), anti-GAD65 antibodies, GQ1b antibodies, anti-gliadin antibodies (AGA), anti-thyroid antibodies (ATA), anti-nuclear antibodies (ANA), and anti-double-stranded DNA antibodies. In addition, none had a history of viral infection, dermatitis herpetiformis (DH), or celiac disease (CD). Moreover, routine screening examinations, including multiple tumor markers and whole-body PET-CT, showed no malignant tumors in these 14 cases. Alternative causes of cerebellar autoimmunity, such as GA, PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CA, were thereby excluded. The process of identifying patients from the IMCAs and anti-NSAbs cohorts.
NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; AE, Autoimmune encephalitis. Patients with PCD, anti-GAD65-Ab-associated CA, and autoimmune disease-associated CAs were included as control groups to explore the clinical characteristics of IMCAs associated with these antibodies. We then reviewed the clinical information of the remaining 177 patients with antibodies targeting NSAbs, all of whom met the diagnostic criteria for autoimmune encephalitis (13). We compared the clinical characteristics of patients with and without CAs to identify the occurrence rate of IMCAs in autoimmune encephalitis. Antibody Detection: All patients were screened prior to treatment for immunoglobulin G (IgG) against common antigens of autoimmune encephalopathy using indirect immunofluorescence assays (IFAs) (EUROIMMUN, FA112d-1, Germany) and a commercial cell-based assay (CBA) kit (EUROIMMUN), including antibodies targeting NMDAR, LGI1, CASPR2, AMPA1/2-R, GABA-A/B-R, DPPX, IgLON5, and MOG, as well as onconeural antibodies (ONAs), including Hu-Ab, Yo-Ab, Ri-Ab, CV2-Ab, PNMA2 (Ma-2/Ta)-Ab, Amphiphysin-Ab, SOX1-Ab, Tr-Ab, and GAD65-Ab. As previously reported (4–6), tissue-based assays (TBAs) using rat brain tissue and CBAs using human embryonic kidney 293 (HEK293) cells were used for antibody detection. The initial dilution titers of serum and CSF were 1:10 and 1:1, respectively. Antibody titers were classified into three levels. In serum, titers of 1:10, 1:32 to 1:100, and 1:320 or above were defined as weakly positive, positive, and strongly positive, respectively. In CSF, titers of 1:1, 1:3.2 to 1:10, and 1:32 or above were defined as weakly positive, positive, and strongly positive, respectively (14). Clinical Data and Outcome Measures: Detailed clinical information, including demographics, clinical manifestations, CSF analysis, and brain magnetic resonance imaging (MRI), was collected for all patients.
The symptoms of cerebellar ataxia were recorded as gait ataxia, slurred speech, limb dysmetria, and nystagmus. All patients received immunotherapy after diagnosis. Glucocorticoids, intravenous immunoglobulin (IVIG), and plasma exchange were classified as first-line therapy, with other immunosuppressants as second-line therapy. The therapeutic regimen and responsiveness to immunotherapy were recorded, and the outcome was evaluated by the modified Rankin score (mRS) after discharge, with a reduction of mRS ≥1 during follow-up defined as efficacious. Relapse of encephalitis was defined as the new onset or worsening of symptoms occurring after at least 2 months of improvement or stabilization (10). Statistical Analysis: Statistical analysis was performed with IBM SPSS V.23.0. Summary statistics were reported as median (range, minimum-maximum) for continuous variables and as frequencies and percentages for categorical variables. As appropriate, clinical data were compared using Pearson's χ2 test, Fisher's exact test, or the Mann-Whitney U test. P<0.05 was considered statistically significant. Results: Frequency of Anti-NSAbs-Associated CAs: Among the 40 IMCAs with definite etiology, 14 patients (25.9%) were identified as having anti-NSAbs-associated CAs, followed by PCD (13 patients, 24.1%), anti-GAD65-Ab-associated CAs (7 patients, 13.0%), and autoimmune disease-associated CAs (6 patients, 11.1%) ( Table 1 ). Among the 191 patients positive for NSAbs (164 adults, 27 children), 14 patients (7.3%) developed CAs during the disease period. The proportion of cerebellar ataxia was similar in adults (12/164, 7.3%) and children (2/27, 7.4%) (P=1.000) ( Table 2 ). Compared with other causes of IMCAs or anti-NSAbs without CAs, no distinct demographic features were found in patients with anti-NSAbs-associated CAs. Comparison of clinical features of four forms of IMCAs.
IMCAs, Immune-mediated cerebellar ataxias; NSAbs, Neuronal surface antibodies; PCD, Paraneoplastic cerebellar degeneration; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; MRI, Magnetic resonance imaging. Comparison of clinical characteristics of patients positive for NSAbs with and without CAs. NSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; OB, Oligoclonal bands; CSF, Cerebrospinal fluid; Ab, Antibody; MRI, Magnetic resonance imaging. *Brain MRI hyperintense signal on T2-weighted fluid-attenuated inversion recovery sequences highly restricted to one or both medial temporal lobes (limbic encephalitis), or in multifocal areas involving grey matter and white matter (13). According to the data, 185 patients were positive for one antibody (108 with NMDAR-Ab, 52 with LGI1-Ab, 12 with CASPR2-Ab, 4 with AMPA2R-Ab, and 9 with GABAB-R-Ab), and 6 patients were positive for two types of antibodies (4 with NMDAR-Ab and CASPR2-Ab, and 2 with CASPR2-Ab and LGI1-Ab) ( Table 3 ). Patients with AMPAR-Ab had the highest occurrence of cerebellar ataxia during the course of the disease (1/4, 25%), followed by CASPR2-Ab (2/12, 16.7%), NMDAR-Ab (9/108, 8.3%), and LGI1-Ab (2/52, 3.8%). By contrast, no symptoms of CAs were observed among patients with GABAB-R-Ab (0/9, 0%). Moreover, the presence of multiple antibodies was not associated with an increased frequency of CAs (0/6, 0%). The data of antibody detection in patients with anti-NSAbs. NSAbs, Neuronal surface antibodies; CSF, Cerebrospinal fluid.
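As an aside for readers who wish to reproduce group comparisons of this kind, the tests named under Statistical Analysis (Pearson's χ2, Fisher's exact, and the Mann-Whitney U) can be sketched with scipy. This is an illustrative sketch only; the counts and values below are made up, not the study data:

```python
# Illustrative sketch of the statistical tests described in Methods,
# applied to hypothetical numbers (NOT the study data).
import numpy as np
from scipy import stats

# Categorical comparison, e.g. OB positivity in patients with vs. without CAs
table = np.array([[9, 5],      # with CAs: OB-positive / OB-negative (hypothetical)
                  [60, 117]])  # without CAs (hypothetical)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test, preferred when expected cell counts are small
odds_ratio, p_fisher = stats.fisher_exact(table)

# Continuous comparison, e.g. CSF white cell counts, with the Mann-Whitney U test
with_cas = [12, 8, 25, 40, 5]   # hypothetical values
without_cas = [2, 3, 1, 6, 4]
u_stat, p_mwu = stats.mannwhitneyu(with_cas, without_cas, alternative="two-sided")

print(round(p_chi2, 3), round(p_fisher, 3), round(p_mwu, 3))
```

With small samples and no ties, scipy computes the Mann-Whitney p-value exactly, which matches the setting of a cohort of 14 patients.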
Clinical Characteristics of Patients With CAs: Of the 14 patients with CAs, three suffered from gait ataxia as the initial symptom (two with NMDAR-Ab and one with CASPR2-Ab). The other 11 patients experienced cerebellar ataxia during the first two months after disease onset (2-60 days, on average day 10). CAs developed acutely (10 patients, 71.4%), subacutely (1 patient, 7.1%), or insidiously (3 patients, 21.4%). Cerebellar signs are shown in Table 1 . Most patients presented with slurred speech as a main symptom (78.6%), whereas nystagmus appeared less often (35.7%). In addition, two patients (one with NMDAR-Ab and the other with CASPR2-Ab) had isolated CAs throughout the disease ( Table 4 ). Neurological examination revealed that five patients (35.7%) failed to complete the alternating movement test with both hands, six (42.9%) performed the finger-nose test with difficulty, and five (35.7%) could not complete the heel-knee-tibia test. However, the number of patients was insufficient for a comparison of clinical characteristics among different antibodies. Clinical characteristics of patients with anti-NSAbs-associated CAs. NSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; M, Male; F, Female; CSF, Cerebrospinal fluid; WBC, White blood cell counts; Pos, Positive; Neg, Negative; OB, Oligoclonal bands; MRI, Magnetic resonance imaging; IVIG, Intravenous immunoglobulin.
Laboratory Testing and MRI Analysis: Blood and CSF markers of inflammation were evaluated. Among patients with NSAbs, oligoclonal bands (OB) in CSF were found more frequently in patients with anti-NSAbs-associated CAs than in patients without CAs (P=0.025). The proportions of negative, weakly positive, positive, and strongly positive NSAbs were further analyzed ( Table 5 ). Interestingly, more patients with CAs had CSF antibody titers in the high range (within the "positive" and "strongly positive" groups, 85.7% vs. 45.8%, P=0.005). All 14 patients underwent 3T brain MRI. Cerebellar abnormalities on MRI are listed in Table 4 . Limbic system involvement (medial temporal T2/FLAIR signal changes) (13) was found in seven patients (50%). All 14 cases were negative on routine tumor screening. The levels of antibody titers in patients with anti-NSAbs-associated CAs. NSAbs, Neuronal surface antibodies; CAs, Cerebellar ataxias; CSF, Cerebrospinal fluid.
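For clarity, the three-level titer grading defined under Antibody Detection (which underlies the "positive"/"strongly positive" grouping analyzed here) can be expressed as a simple rule. The helper below is illustrative, not code from the original study; only the thresholds come from the text:

```python
# Hypothetical helper expressing the study's three-level titer grading.
# Thresholds follow the Methods: serum 1:10 / 1:32-1:100 / >=1:320,
# CSF 1:1 / 1:3.2-1:10 / >=1:32.
SERUM_CUTOFFS = [(320, "strongly positive"), (32, "positive"), (10, "weakly positive")]
CSF_CUTOFFS = [(32, "strongly positive"), (3.2, "positive"), (1, "weakly positive")]

def grade_titer(reciprocal_titer, cutoffs):
    """Map a reciprocal dilution titer (e.g. 100 for 1:100) to a positivity level."""
    for threshold, label in cutoffs:
        if reciprocal_titer >= threshold:
            return label
    return "negative"

print(grade_titer(100, SERUM_CUTOFFS))  # a 1:100 serum titer -> positive
print(grade_titer(32, CSF_CUTOFFS))     # a 1:32 CSF titer -> strongly positive
```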
Response to the Immunotherapy: All 14 patients were followed up for at least 15 months after discharge (median 27 months, range 15-48 months). At the early stage, patients were assessed with a standardized questionnaire by telephone once a month, and all patients were hospitalized again for disease re-evaluation 6 months after discharge. Two patients were free from the symptoms of cerebellar ataxia, and eight recovered satisfactorily (10/14, 71.4%) after glucocorticoid treatment or the combination of glucocorticoids and IVIG ( Table 4 ). In addition, two patients (patients 8 and 12) with NMDAR-Ab had clinical relapses, although symptoms of cerebellar ataxia were not observed during the relapses. No malignancy was found in these 14 patients up to the last follow-up visit. Follow-up brain MRI showed an obvious diminishment of cerebellar T2/FLAIR-hyperintense lesions in two patients (2/4, 50%, cases 7 and 13) and no significant changes in the others (cases 1 and 2). However, the cerebellar atrophy on brain MRI of case 3 remained unchanged.
Discussion: IMCAs are a rare spectrum of diseases, and several specific neuronal antibodies have been identified in IMCAs, such as the onconeural antibodies (Yo-Ab, Hu-Ab, CV2-Ab, Ri-Ab, and Ma-2-Ab) for PCD, GAD65-Ab for anti-GAD65-Ab-associated CA, and Ri-Ab for OMS (15–20). However, only a few studies have concentrated on anti-NSAbs-associated CAs in autoimmune encephalitis (6–8). Research conducted by Iizuka et al. revealed that approximately 5% of patients with anti-NMDAR encephalitis showed cerebellar complaints during the disease course (11). Boyko et al. reviewed the clinical data of 163 patients with CASPR2-Ab and found that 24 patients (14.7%) developed CAs (12). However, no large-sample clinical observation and follow-up studies have so far been conducted to explore the frequency of cerebellar ataxia in patients with anti-AMPAR, anti-LGI1, or anti-GABABR encephalitis (8, 21).
This study reviewed 54 patients with IMCAs and identified 14 patients carrying different types of neuronal surface autoantibodies, including NMDAR-Ab, LGI1-Ab, CASPR2-Ab, and AMPA2R-Ab. Anti-NSAbs-associated CAs mostly appeared in the first two weeks of the disease course of autoimmune encephalitis, and two patients (case 3 with CASPR2-Ab and case 13 with NMDAR-Ab) had isolated symptoms of CAs throughout the disease. Five patients with isolated CAs have been reported previously (7, 22), including four with CASPR2-Ab and one with NMDAR-Ab. Although isolated CAs are common in PCD, the diagnosis of autoimmune encephalitis, especially anti-NMDAR or anti-CASPR2 encephalitis, should also be considered in patients presenting with isolated CAs. Anti-NSAbs-associated CAs showed less predominant truncal ataxia than PCD. Mimicking Hashimoto's encephalopathy (23, 24), anti-NSAbs were more frequently associated with additional extra-cerebellar symptoms, such as seizures and mental abnormalities, during the disease course. Compared with other forms of IMCAs, a higher rate of pleocytosis and positive OB in CSF might have predictive value for cerebellar symptoms in autoimmune encephalitis. In terms of imaging abnormalities on MRI, previous studies found that only 6% of patients with anti-NMDAR encephalitis had cerebellar abnormalities and 13.3% suffered from progressive and irreversible cerebellar atrophy (11, 25). However, cerebellar atrophy on MRI was found in 16.7% of patients with CASPR2-Ab, all of whom benefited from immunotherapy (26). In this study, only one patient (case 3) with CASPR2-Ab had cerebellar atrophy. After immunotherapy, he was free from symptoms, and the existing cerebellar atrophy did not worsen on follow-up brain MRI. All 14 patients received immunotherapy, and most encouragingly, the two patients with isolated CAs (case 3 mentioned above and case 13) fully recovered and returned to work.
As noted above, five patients presenting with isolated CAs have been reported previously, including four with CASPR2-Ab and one with NMDAR-Ab (7, 22). The latter patient exhibited partial improvement after immunotherapy (22), while the therapeutic response of the former four patients is unknown (7). The factors influencing treatment response remain unclear; they might be related to the type of antibody, the accompanying symptoms, and the severity of cerebellar inflammation. Further research is needed to explore the underlying pathophysiological mechanisms and the factors affecting the response of anti-NSAbs-associated CAs to immunotherapy. Among the many mechanistic studies of IMCAs, the association between ONAs and PCD has always been a research hotspot, but few studies have focused on the pathogenesis of anti-NSAbs-associated CAs. In patients with PCD, previous studies have revealed a significant loss of Purkinje cells due to a cell-mediated cytotoxic immune response associated with activated CD8+ T cells and microglia in the cerebellum (27, 28). In patients with anti-NMDAR encephalitis, however, post-mortem studies demonstrated no profound loss of Purkinje cells in the cerebellum (11). The NMDAR is strongly expressed in the cerebellum, and the receptors on granule cells (but not on Purkinje cells) were identified as the specific antigen of the IgG antibodies, which might interfere with the excitatory pathway involving NMDAR-mediated signaling, resulting in cerebellar dysfunction (29). Conclusion: Cerebellar ataxias are rare and atypical in autoimmune encephalitis with neuronal surface antibodies. Patients with anti-NSAbs-associated CAs exhibited a higher positivity rate of OB in CSF. Most patients, especially those with isolated CAs, responded well to immunotherapy. Compared with other causes, anti-NSAbs-associated CAs led to more symptoms of encephalopathy and showed a better therapeutic response to immunotherapy.
Considering the treatability of anti-NSAbs-associated CAs, it is valuable to perform serum and CSF NSAbs tests in patients with cerebellar ataxia. Data Availability Statement: The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author. Ethics Statement: The studies involving human participants were reviewed and approved by the Ethics Committee of Xuanwu Hospital (No. 2017YFC0907702). The patients/participants provided their written informed consent to participate in this study. Author Contributions: YJ was the major contributor in writing the manuscript. YJ, ML, and DL conceptualized the study. YJ, MZ, and HW collected samples and data. ZH, LJ, JY, AL, and YW contributed to the diagnosis and treatment of patients. YW checked the final manuscript. All authors contributed to the article and approved the submitted version. Funding: This work was supported by the Beijing Postdoctoral Research Foundation [Grant No. 2021-ZZ-001], the Beijing Municipal Education Commission [Grant No. TJSH20161002502], the National Natural Science Foundation of China [Grant No. 81771398], the Beijing Key Clinical Speciality Excellence Project, and the National Support Provincial Major Disease Medical Services and Social Capability Enhancement Project. Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Background: Immune-mediated cerebellar ataxias (IMCAs) are common in paraneoplastic cerebellar degeneration (PCD) but rarely occur in patients with neuronal surface antibodies (NSAbs). Although cerebellar ataxias (CAs) associated with anti-NMDAR and anti-CASPR2 have been reported in a few cases, they have never been studied systematically. This study aimed to analyze the characteristics of anti-NSAbs-associated CAs. Methods: A retrospective investigation was conducted to identify patients using the keywords IMCAs and NSAbs. We collected the clinical data of 14 patients diagnosed with anti-NSAbs-associated CAs. Results: The median age was 33 years (16-66), and the male-to-female ratio was 4:3. Nine were positive for NMDAR-Ab, two for LGI1-Ab, two for CASPR2-Ab, and one for AMPA2R-Ab. CAs were initial symptoms in three patients and presented during the first two months of the disease course (10 days on average) among the rest of the patients. After the immunotherapy, two cases were free from symptoms, and eight cases recovered satisfactorily (10/14, 71.4%). Compared with other causes of IMCAs, anti-NSAbs were more frequently associated with additional extra-cerebellar symptoms (85.7%), mostly seizures (78.6%) and mental abnormalities (64.3%). In the CSF analysis, pleocytosis was detected in ten patients (71.4%) and oligoclonal bands (OB) were observed in nine patients (64.3%). Moreover, compared with PCD and anti-GAD65-Ab-associated CAs, anti-NSAbs-associated CAs showed a better response to immunotherapy. Conclusions: IMCAs are rare and atypical in autoimmune encephalitis with neuronal surface antibodies. Compared with other forms of IMCAs, more symptoms of encephalopathy, a higher rate of pleocytosis and positive OB in CSF, and positive therapeutic effect were the key features of anti-NSAbs-associated CAs.
Introduction: Over the past few decades, since the discovery of anti-NMDAR encephalitis in 2007 (1), an increasing number of specific neuronal surface antibodies (NSAbs) have been discovered, including LGI1-Ab, CASPR2-Ab, AMPA1/2R-Ab, and GABAR-A/B-Ab (2–5). Anti-NSAbs-associated autoimmune encephalitis presents diverse clinical phenotypes, such as seizures, cognitive decline, psychiatric abnormalities, and autonomic dysfunction. In contrast, symptoms of cerebellar ataxia are rarely reported in patients positive for NSAbs (6–8). Immune-mediated cerebellar ataxias (IMCAs) are one of the most common manifestations of paraneoplastic cerebellar degeneration (PCD) (9). Previous studies have suggested that CAs are atypical in anti-NSAbs-associated autoimmune encephalitis (6, 7, 10). For example, cerebellar ataxia has been reported in a few patients with anti-NMDAR encephalitis (10, 11). Boyko et al. found that about 15% of patients with anti-CASPR2 encephalitis developed cerebellar ataxia following the onset of limbic symptoms or Morvan syndrome (12). Although CAs associated with anti-NMDAR and anti-CASPR2 have been described in a few cases, they have not been studied systematically. In this study, we summarized the clinical features of 14 patients with anti-NSAbs-associated CAs. Moreover, to explore the distinct features of anti-NSAbs-associated CAs, a study of two overlapping cohorts, comprising patients diagnosed with IMCAs and patients with NSAbs from 2015 to 2020 at our center, was designed and conducted. The present study focused on the clinical characteristics of patients with anti-NSAbs-associated CAs.
Compared with other causes, anti-NSAbs-associated CAs led to more symptoms of encephalopathy and showed better therapeutic effects from immunotherapy. Considering the treatability of anti-NSAbs-associated CAs, it is valuable to perform serum and CSF NSAbs tests for patients with cerebellar ataxia.
Keywords: immune-mediated cerebellar ataxia, neuronal surface antibodies, CSF analysis, oligoclonal bands, immunotherapy
MeSH terms: Adult, Autoantibodies, Cerebellar Ataxia, Female, Humans, Leukocytosis, Male, Paraneoplastic Cerebellar Degeneration, Retrospective Studies
Systematic evaluation and comparison of statistical tests for publication bias.
PMID: 16276033
BACKGROUND: This study evaluates the statistical and discriminatory powers of three statistical test methods (Begg's, Egger's, and Macaskill's) to detect publication bias in meta-analyses.
METHODS: The data sources were 130 reviews from the Cochrane Database of Systematic Reviews 2002 issue, which considered a binary endpoint and contained 10 or more individual studies. Funnel plots with observers' agreement were selected as a reference standard. We evaluated the trade-off between sensitivity and specificity by varying cut-off p-values, the power of the statistical tests at fixed false positive rates, and the area under the receiver operating characteristic (ROC) curve.
RESULTS: In 36 reviews, 733 original studies evaluated 2,874,006 subjects. The number of trials included in each review ranged from 10 to 70 (median 14.5). At a false positive rate of 0.1, the sensitivity of Egger's method was 0.93, larger than that of Begg's method (0.86) and Macaskill's method (0.43). The sensitivities of the three statistical tests increased as the cut-off p-values increased, without a substantial decrement in specificity. The area under the ROC curve of Egger's method was 0.955 (95% confidence interval, 0.889-1.000); it did not differ from that of Begg's method (area=0.913, p=0.2302) but was larger than that of Macaskill's method (area=0.719, p=0.0116).
CONCLUSION: Egger's linear regression method and Begg's method had stronger statistical and discriminatory powers than Macaskill's method for detecting publication bias at the same type I error level. The power of these methods could be improved by increasing the cut-off p-value without a substantial increment in the false positive rate.
[ "Linear Models", "Meta-Analysis as Topic", "Publication Bias", "ROC Curve" ]
PMCID: 7904376
METHODS
To evaluate the test performance of Begg’s, Egger’s, and Macaskill’s methods, visual interpretation of funnel plots was used as a reference standard. A funnel plot is a graphical presentation of each trial’s effect size against one of the sample size measures, such as the precision of the effect size estimate, the overall sample size, and the standard error.8 It is based on the assumption that the results from smaller studies will be more widely spread around the mean effect because of a large random error. If there is no publication bias, a plot of sample size versus treatment effect from individual studies in a meta-analysis should therefore look like an inverted funnel.9 In practice, smaller studies or non-significant studies are less likely to be published and data for the lower left-hand corner of the graph are often lacking, creating an asymmetry in the funnel shape. It has been reported that different observers may interpret funnel plots differently;4 however, we aimed to resolve this problem by having different observers judge the shape of funnel plots and reach a consensus. All completed systematic reviews that were contained in the Cochrane Database of Systematic Reviews 2002 issue were examined.10 Only reviews including 10 or more trials with a binary outcome measure were included in the assessment. The cut-off of 10 trials is based on the minimum number reported by Sutton, who evaluated the effect of publication bias on the results and conclusions of systematic reviews and meta-analyses.8 At most, a single meta-analysis from each systematic review was included, and when more than one meta-analysis met the inclusion criteria, the one containing the largest number of studies was selected. If two or more analyses contained the same number of studies, the one with the strongest relation to the primary outcome of the study was selected. One evaluator extracted data from each selected meta-analysis. 
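The funnel-plot coordinates described above (effect size plotted against the inverse of its standard error) can be sketched in code. This is an illustrative example only, not the authors' program: the function name and the trial counts are hypothetical, and the conventional 0.5 continuity correction is applied when a cell of the 2x2 table is zero.

```python
import math

def log_odds_ratio(a, b, c, d, cc=0.5):
    """Log odds ratio and its standard error from a 2x2 table:
    a, b = events / non-events in the treatment arm;
    c, d = events / non-events in the control arm.
    A continuity correction `cc` is added to every cell when any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + cc for x in (a, b, c, d))
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

# Funnel-plot coordinates: effect (x-axis) against precision = 1/SE (y-axis).
# Trial counts below are made up for illustration.
trials = [(10, 90, 20, 80), (3, 17, 6, 14), (0, 20, 4, 16)]
points = [(lor, 1 / se) for lor, se in (log_odds_ratio(*t) for t in trials)]
```

Under no publication bias, these points should scatter symmetrically around the pooled effect, narrowing as precision increases (the inverted funnel).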
The binary data of the meta-analysis were used for the analysis, and a continuity correction of 0.5 was used when necessary.11 For each review article selected, funnel plots were constructed by plotting the effect measure (eg, the natural logarithm of the odds ratio) against the inverse of its standard error, which is less likely to give a biased result than the use of other effect measures (e.g., log risk ratio and risk difference).12 Two observers, one of the authors and another person blinded to the results of statistical analysis, interpreted all funnel plots and judged independently whether publication bias was present. Both observers are general internists and have the experience of conducting meta-analyses that have been published in peer-reviewed journals. To verify consistency, observer A interpreted all funnel plots again 7 days later. Inconsistencies between the observers were resolved by discussion. Some resulting funnel plots were typical in shape and thus easy to evaluate, but others were atypical and difficult to evaluate. To assess the difficulties of interpretation, two observers also ranked the graphs by using three categories of complexity ([1] easy; [2] moderate; and [3] complicated). The rankings by two observers were then totaled and the result was used as a score of the graph’s complexity. For example, when a graph was ranked “[1] easy” by observer A and “[3] complicated” by observer B, the complexity score for that graph was 4. Therefore a larger number implied a higher level of complexity of a graph’s interpretation. The asymmetry of the funnel plots was statistically evaluated by three test methods: Begg’s, Egger’s, and Macaskill’s. Begg’s method tests publication bias by determining whether there is a significant correlation between the effect size estimates and their variances. 
The effect estimates were standardized to stabilize the variance, and an adjusted rank correlation test was then performed.5,13 Let ti and vi be the estimated effect sizes and their variances from the k studies in the meta-analysis, i = 1, …, k. To construct a valid rank correlation test, it is necessary to stabilize the variance by standardizing the effect size prior to performing the test. We correlate ti* and vi, where ti* = (ti − t̄)/(vi*)^(1/2), with t̄ = (Σ vj⁻¹ tj)/(Σ vj⁻¹), and where vi* = vi − (Σ vj⁻¹)⁻¹ is the variance of ti − t̄. Throughout, we have used the rank correlation test based on Kendall’s tau. This involves enumerating the number of pairs of studies that are ranked in the same order with respect to the two factors (i.e., t* and v). Egger’s method detects funnel plot asymmetry by determining whether the intercept deviates significantly from zero in a regression of the standardized effect estimates versus their precision.14 The standard normal deviate (SND), defined as the odds ratio divided by its standard error, is regressed against the estimate’s precision, the latter being defined as the inverse of the standard error (regression equation: SND = a + b × precision). The analysis could be weighted or unweighted by the inverse of the variance of the effect estimates. The unweighted model was used for the current analysis. Macaskill’s method fits a regression directly to the data by using the treatment effect (ti) as the dependent variable and the study size (ni) as the independent variable.6 The observations are weighted by the reciprocal of the pooled variance for each study, that is, the variance of the pooled estimates resulting from combining the data for the two groups. When there is no publication bias, the regression slope has an expected value of zero, and a nonzero slope would suggest an association between effect and sample size, possibly because of publication bias. We compared these three statistical tests in three different ways.
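The unweighted Egger regression just described (SND = a + b × precision, with a test of whether the intercept a deviates from zero) can be sketched as follows. This is an illustrative implementation, not the authors' code; it returns the t statistic for the intercept rather than a p-value, which would be read from a t distribution with k − 2 degrees of freedom.

```python
import math

def egger_test(effects, ses):
    """Egger's regression asymmetry test (unweighted):
    regress the standard normal deviate (effect/SE) on precision (1/SE)
    by ordinary least squares and test the intercept against zero.
    Requires k > 2 studies (and a non-perfect fit) for a residual variance."""
    k = len(effects)
    y = [t / s for t, s in zip(effects, ses)]   # standard normal deviates
    x = [1 / s for s in ses]                    # precisions
    mx, my = sum(x) / k, sum(y) / k
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                               # slope
    a = my - b * mx                             # intercept (the bias test)
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    s2 = sse / (k - 2)                          # residual variance
    se_a = math.sqrt(s2 * (1 / k + mx ** 2 / sxx))
    return a, se_a, a / se_a                    # intercept, its SE, t statistic
```

Compare |t| with the critical value of t with k − 2 degrees of freedom at the chosen cut-off p-value.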
First, we used a p-value as a cut-off point for defining the presence of publication bias by Begg’s method, Egger’s method, or Macaskill’s method, and evaluated a trade-off between sensitivity and specificity by varying cut-off p-values (0.05, 0.1, 0.2). Second, we estimated the sensitivities of these tests corresponding to a fixed false positive rate (0.05 or 0.1) to compare their statistical powers. Third, the receiver operating characteristic curve analysis was used to determine the discriminatory power of each test. A receiver operating characteristic analysis is a popular method of assessing the predictive power of a test by plotting the sensitivity (power) of the test against the corresponding false-positive rate (1-specificity) as the cut-off level of the model varies.15 In the present analysis, sensitivity refers to the percentage of systematic reviews with publication bias detected by a statistical test using a given cut-off point out of all systematic reviews with publication bias defined by the reference standard. Specificity refers to the percentage of systematic reviews found by a statistical test to be without publication bias out of all systematic reviews without publication bias defined by reference standard. We compared areas under receiver operating characteristic (ROC) curves by setting the Egger’s method as a reference, using an algorithm suggested by DeLong, DeLong, and Clarke-Pearson.16 We defined statistical significance to be p<0.05 for this analysis. In the evaluation of statistical tests, we used a subgroup of graphs that were scored at 2 and with the observers’ agreements, i.e., both observers agreed that these graphs were reliable and easy to evaluate. We also evaluated three tests by using all 130 reviews as sensitivity analyses. 
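The sensitivity, specificity, and ROC quantities defined above can be sketched in code. This is an illustrative implementation, not the study's program: it treats smaller p-values as stronger evidence of bias, and it computes the area under the ROC curve via the equivalent Mann-Whitney formulation rather than the DeLong algorithm cited in the text.

```python
def sens_spec(pvals, labels, cutoff):
    """Sensitivity and specificity of a test that calls 'bias' when p < cutoff,
    against a reference labelling (1 = bias per funnel-plot consensus)."""
    tp = sum(1 for p, l in zip(pvals, labels) if l == 1 and p < cutoff)
    fn = sum(1 for p, l in zip(pvals, labels) if l == 1 and p >= cutoff)
    tn = sum(1 for p, l in zip(pvals, labels) if l == 0 and p >= cutoff)
    fp = sum(1 for p, l in zip(pvals, labels) if l == 0 and p < cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(pvals, labels):
    """Area under the ROC curve, with smaller p-values ranked as 'more biased':
    the probability that a biased review receives a smaller p-value than an
    unbiased one, counting ties as one half (Mann-Whitney formulation)."""
    pos = [p for p, l in zip(pvals, labels) if l == 1]
    neg = [p for p, l in zip(pvals, labels) if l == 0]
    wins = sum((pp < pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))
```

Sweeping `cutoff` over (0.05, 0.1, 0.2) reproduces the trade-off analysis; the AUC summarizes discrimination over all cut-offs at once.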
We tested the reliability and validity of the reference standard by examining intra- and inter-observer agreement of these plots, using Kappa statistics.17 All statistical analyses were performed with STATA®, version 7 (STATA corporation, College Station, TX, USA).
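The agreement statistic used for the reference standard can be illustrated with a short sketch of Cohen's kappa for two raters' binary bias judgements. This is illustrative code under that assumption, not the authors' Stata analysis.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' binary judgements (1 = bias, 0 = no bias):
    observed agreement corrected for the agreement expected by chance."""
    n = len(ratings_a)
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n  # observed
    pa1 = sum(ratings_a) / n          # rater A's rate of 'bias' calls
    pb1 = sum(ratings_b) / n          # rater B's rate of 'bias' calls
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)                      # chance
    return (po - pe) / (1 - pe)       # undefined when pe == 1
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, matching the interpretation of the intra- and inter-observer values reported below.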
RESULTS
At the time of investigation, the Cochrane Library (2002, Issue 1) contained 1297 completed systematic reviews. Of these, 130 meta-analyses included 10 or more trials with at least 1 dichotomous outcome. Summary characteristics of the systematic reviews are shown in Table 1. In all 130 reviews, a total of 2,468 original studies evaluated 5,490,223 subjects; the median number of subjects per review was 2,801.5 (range, 382-1,874,547); the median number of original studies included in one review was 13 (range, 10-135); the median pooled odds ratio was 0.895 (range, 0.09-6.22). Table 2 shows the number of funnel plots judged to show publication bias by inspection of funnel plots of all studies or of studies restricted to the groups of various scores. When all data were included, the number of studies interpreted as biased by observer B was larger than that according to observer A. Of all 130 graphs, 38 (29.2%) were scored two, 57 (43.8%) were scored three, 27 (20.8%) were scored four, and 8 (6.2%) were scored five. *: Percentages out of all 130 reviews. A-1: the interpretation by observer A on day 1. A-2: the interpretation by observer A on day 2. Table 3 summarizes the intra- and interobserver agreement of the graphical test. The intraobserver agreement rate was 82.3%, and the Kappa value of observer A was 0.65 (95% confidence interval (CI), 0.52-0.77). The interobserver agreement rate and the Kappa value on all graphs evaluated by observer A and observer B were 73.8% and 0.42 (95% CI, 0.27-0.57), respectively. These increased to 92.1% and 0.75 (0.49-1.00) when limited to the graphs scored as 2. The intraobserver agreements on all graphs and on graphs scored as 2 were in the upper range of good reproducibility.12 Of the 38 graphs scored at two, 36 with observers’ agreement were used for the main analyses. CI: confidence interval.
As shown in Figure 1, in analyses using data with interobserver agreement (n=36), the area under the ROC curve of Egger’s method was 0.955 (95% CI, 0.889-1.000) and was not different from that of Begg’s method (area=0.913, p=0.2302), but it was larger than that of Macaskill’s method (area=0.719, p=0.0116). In the full data set (n=130), the area under the ROC curve of Egger’s method was 0.728 (95% CI, 0.643-0.813), but was not statistically different from that of Begg’s method (area=0.649, p=0.0779) or Macaskill’s method (area=0.634, p=0.0645). Left column: Main analyses using funnel plots that were scored as 2 and with observers’ agreement. Right column: Sensitivity analyses using all 130 reviews. CI: confidence interval. Table 4 summarizes the trade-off between the sensitivities and specificities of the three statistical tests by varying the cut-off p-values. All these statistical tests had high specificities with varying degrees of sensitivities. In analyses using data with interobserver agreements (n=36), sensitivities increased as the cut-off p-values increased without a decrement of specificities for any statistical test. For example, when the cut-off p-value was increased to 0.20 from 0.05, the sensitivity of Egger’s test increased by 0.39, but the false positive rate (1-specificity) remained constant. At any cut-off p-value shown in this table, the sensitivity of Egger’s test was larger than that of Begg’s method or Macaskill’s method. These results were not influenced by sensitivity analyses using the full data set; the false positive rate (1-specificity) was always below the cut-off p-value (the nominal significance level). * Only reviews that were scored 2 and with inter-observer agreement were used. Table 5 summarizes the sensitivities of the statistical tests when the false positive rates (1-specificity) were fixed at 0.05 or 0.1.
The statistical power (sensitivity) of Egger’s method was larger than that of Begg’s or Macaskill’s method, regardless of false positive rates (0.05 or 0.1) or the type of data set used (n=130 or n=36). * Only reviews that were scored 2 and with inter-observer agreement were used. Figure 2 shows representative examples of funnel plots in the current analyses. For example, graph (D) is an example of the discrepancy between the three statistical tests in terms of detecting publication bias. The number of included studies in this review was 11; the total sample size was 619; the median sample size per review was 33 (range, 20-204); the pooled odds ratio was 3.32 (95% CI, 2.24-4.92). The funnel plots were scored as 2, and both observers agreed that this analysis had publication bias. The p-values were respectively 0.018, 0.020, and 0.352 for Egger’s, Begg’s, and Macaskill’s method. For a cut-off p-value of 0.1, Egger’s method and Begg’s method suggest the presence of a publication bias, but Macaskill’s method does not. (A) A typical example of the absence of publication bias. The number of included studies was 13; the total sample size was 855; the median sample size per review was 46 (range, 20-234); the pooled odds ratio was 0.96 (95% CI, 0.68-1.35). The funnel plots were scored at 4, and both observers agreed that there is no publication bias in this analysis. The p-values were 0.583, 0.641, and 0.603, respectively, for Egger’s method, Begg’s method, and Macaskill’s method. (B) A typical example of the presence of publication bias. The number of included studies was 15; the total sample size was 1278; the median sample size per review was 73 (range, 23-158); the pooled odds ratio was 0.78 (95% CI, 0.61-1.00). The funnel plots were scored at 2, and both observers agreed that there is a publication bias in this analysis. The p-values were respectively 0.006, 0.002, and 0.02 for Egger’s method, Begg’s method, and Macaskill’s method.
(C) An example of the inconsistency between two observers in the interpretation of funnel plots. The number of included studies was 25; the total sample size was 2478; the median sample size per review was 97 (range, 36-200); the pooled odds ratio was 1.64 (95% CI, 1.28-2.11). The funnel plots were scored at 3, and observer A asserted that there was publication bias in this analysis, whereas observer B did not. The p-values were respectively 0.500, 0.944, and 0.419 for Egger’s method, Begg’s method, and Macaskill’s method. (D) An example of the inconsistency between the three statistical tests in detecting publication bias. The number of included studies was 11; the total sample size was 619; the median sample size per review was 33 (range, 20-204); the pooled odds ratio was 3.32 (95% CI, 2.24-4.92). The funnel plots were scored at 2, and both observers agreed that there is a publication bias in this analysis. The p-values were respectively 0.018, 0.020, and 0.352 for Egger’s method, Begg’s method, and Macaskill’s method. With the positivity criterion p<0.1, Egger’s method and Begg’s method suggest the presence of a publication bias, but Macaskill’s method does not.
null
null
[]
[]
[]
[ "METHODS", "RESULTS", "DISCUSSION" ]
[ "To evaluate the test performance of Begg’s, Egger’s, and Macaskill’s methods, visual interpretation of funnel plots was used as a reference standard. A funnel plot is a graphical presentation of each trial’s effect size against one of the sample size measures, such as the precision of the effect size estimate, the overall sample size, and the standard error.8 It is based on the assumption that the results from smaller studies will be more widely spread around the mean effect because of a large random error. If there is no publication bias, a plot of sample size versus treatment effect from individual studies in a meta-analysis should therefore look like an inverted funnel.9 In practice, smaller studies or non-significant studies are less likely to be published and data for the lower left-hand corner of the graph are often lacking, creating an asymmetry in the funnel shape. It has been reported that different observers may interpret funnel plots differently;4 however, we aimed to resolve this problem by having different observers judge the shape of funnel plots and reach a consensus.\nAll completed systematic reviews that were contained in the Cochrane Database of Systematic Reviews 2002 issue were examined.10 Only reviews including 10 or more trials with a binary outcome measure were included in the assessment. The cut-off of 10 trials is based on the minimum number reported by Sutton, who evaluated the effect of publication bias on the results and conclusions of systematic reviews and meta-analyses.8 At most, a single meta-analysis from each systematic review was included, and when more than one meta-analysis met the inclusion criteria, the one containing the largest number of studies was selected. If two or more analyses contained the same number of studies, the one with the strongest relation to the primary outcome of the study was selected. One evaluator extracted data from each selected meta-analysis. 
The binary data of the meta-analysis were used for the analysis, and a continuity correction of 0.5 was used when necessary.11\nFor each review article selected, funnel plots were constructed by plotting the effect measure (eg, the natural logarithm of the odds ratio) against the inverse of its standard error, which is less likely to give a biased result than the use of other effect measures (e.g., log risk ratio and risk difference).12 Two observers, one of the authors and another person blinded to the results of statistical analysis, interpreted all funnel plots and judged independently whether publication bias was present. Both observers are general internists and have the experience of conducting meta-analyses that have been published in peer-reviewed journals. To verify consistency, observer A interpreted all funnel plots again 7 days later. Inconsistencies between the observers were resolved by discussion.\nSome resulting funnel plots were typical in shape and thus easy to evaluate, but others were atypical and difficult to evaluate. To assess the difficulties of interpretation, two observers also ranked the graphs by using three categories of complexity ([1] easy; [2] moderate; and [3] complicated). The rankings by two observers were then totaled and the result was used as a score of the graph’s complexity. For example, when a graph was ranked “[1] easy” by observer A and “[3] complicated” by observer B, the complexity score for that graph was 4. Therefore a larger number implied a higher level of complexity of a graph’s interpretation.\nThe asymmetry of the funnel plots was statistically evaluated by three test methods: Begg’s, Egger’s, and Macaskill’s. Begg’s method tests publication bias by determining whether there is a significant correlation between the effect size estimates and their variances. 
The effect estimates were standardized to stabilize the variance, and an adjusted rank correlation test was then performed.5,13 Let ti and vi be the estimated effect sizes and those variances from the k studies in the meta-analysis, i=1,…..,k. To construct a valid rank correlation test, it is necessary to stabilize the variance by standardizing the effect size prior to performing the test. We correlate ti* and vi,wheret∗i=(ti−t¯)/(v∗i)1/2wheret¯=(∑vj−1tj)/∑vj−1and where vi* = vi -(∑vj-1)-1 is the variance of ti-t¯.\nThroughout, we have used the rank correlation test based on Kendall’s tau. This involves enumerating the number of pairs of studies that are ranked in the same order with respect to the two factors (i.e., t* and v).\nEgger’s method detects funnel plot asymmetry by determining whether the intercept deviates significantly from zero in a regression of the standardized effect estimates versus their precision.14 The standard normal deviate (SND), defined as the odds ratio divided by its standard error, is regressed against the estimate’s precision, the latter being defined as the inverse of the standard error (regression equation: SND=a+b×precision). The analysis could be weighted or unweighted by the inverse of the variance of the effect estimates. The unweighted model was used for the current analysis.\nMacaskill’s method is fitting a regression directly to the data by using the treatment effect (ti) as the dependent variable and the study size (ni) as the independent variable.6 The observations are weighted by the reciprocal of the pooled variance for each study, that is, the variance of the pooled estimates resulting from combining the data for the two groups. When there is no publication bias, the regression slope has an expected value of zero, and a nonzero slope would suggest an association between effect and sample size, possibly because of publication bias.\nWe compared these three statistical tests in three different ways. 
First, we used a p-value as a cut-off point for defining the presence of publication bias by Begg’s method, Egger’s method, or Macaskill’s method, and evaluated a trade-off between sensitivity and specificity by varying cut-off p-values (0.05, 0.1, 0.2). Second, we estimated the sensitivities of these tests corresponding to a fixed false positive rate (0.05 or 0.1) to compare their statistical powers. Third, the receiver operating characteristic curve analysis was used to determine the discriminatory power of each test. A receiver operating characteristic analysis is a popular method of assessing the predictive power of a test by plotting the sensitivity (power) of the test against the corresponding false-positive rate (1-specificity) as the cut-off level of the model varies.15 In the present analysis, sensitivity refers to the percentage of systematic reviews with publication bias detected by a statistical test using a given cut-off point out of all systematic reviews with publication bias defined by the reference standard. Specificity refers to the percentage of systematic reviews found by a statistical test to be without publication bias out of all systematic reviews without publication bias defined by reference standard. We compared areas under receiver operating characteristic (ROC) curves by setting the Egger’s method as a reference, using an algorithm suggested by DeLong, DeLong, and Clarke-Pearson.16 We defined statistical significance to be p<0.05 for this analysis.\nIn the evaluation of statistical tests, we used a subgroup of graphs that were scored at 2 and with the observers’ agreements, i.e., both observers agreed that these graphs were reliable and easy to evaluate. We also evaluated three tests by using all 130 reviews as sensitivity analyses. 
We tested the reliability and validity of the reference standard by examining intra- and inter-observer agreement of these plots, using Kappa statistics.17 All statistical analyses were performed with STATA®, version 7 (STATA corporation, College Station, TX, USA).", "At the time of investigation, the Cochrane Library (2002, Issue 1) contained 1297 completed systematic reviews. Of these, 130 meta-analyses included 10 or more trials with at least 1 dichotomous outcome. Summary characteristics of the systematic reviews were shown in Table 1. In all 130 reviews, a total of 2,468 original studies evaluated 5,490,223 subjects; the median number of subjects per review was 2,801.5 (range, 3821,874,547); the median number of original studies included in one review was 13 (range, 10-135); the median pooled odds ratios was 0.895 (range, 0.09-6.22).\nTable 2 shows the number of funnel plots judged to show publication bias by inspection of funnel plots of all studies or of studies restricted to the groups of various scores. When all data were included, the number of studies interpreted as biased by observer B was larger than that according to observer A. Of all 130 graphs, 38 (29.2%) were scored two, 57 (43.8%) were scored three, 27 (20.8%) were scored four, and 8 (6.2%) were scored five.\n*: Percentages out of all 130 reviews.\nA-1: the interpretation by observer A on the day 1.\nA-2: the interpretation on the day 2 by observer A.\nTable 3 summarizes the intra- and interobserver agreement of the graphical test. The intraobserver agreement rate was 82.3%, and the Kappa value of observer A was and 0.65 (95% confidence interval (CI), 0.52-0.77). The interobserver agreement rate and the Kappa value on all graphs evaluated by observer A and observer B were 73.8% and 0.42 (95% CI. 0.27-0.57), respectively. This increased to 92.1% and 0.75 (0.49-1.00) when limited to the graphs scored as 2. 
The intraobserver agreement on all graphs and that on graphs scored as 2 were in the upper range of good reproducibility.12 Of the 38 graphs scored at two, 36 with observers’ agreement were used for the main analyses.\nCI: confidence interval.\nAs shown in Figure 1, in analyses using data with interobserver agreement (n=36), the area under the ROC curve of Egger’s method was 0.955 (95% CI, 0.889-1.000) and was not different from that of Begg’s method (area=0.913, p=0.2302), but it was larger than that of Macaskill’s method (area=0.719, p=0.0116). In the full data set (n=130), the area under the ROC curve of Egger’s method was 0.728 (95% CI, 0.643-0.813), but was not statistically different from that of Begg’s method (area=0.649, p=0.0779) or Macaskill’s method (area=0.634, p=0.0645).\nLeft column: Main analyses by using funnel plots that were scored as 2 and with observers’ agreement. Right column: Sensitivity analyses by using all 130 reviews.\nCI: confidence interval.\nTable 4 summarizes a trade-off between the sensitivities and specificities of three statistical tests by varying the cut-off p-values. All these statistical tests had high specificities with varying degrees of sensitivities. In analyses using data with interobserver agreement (n=36), sensitivities increased as the cut-off p-values increased without a decrement of specificities for any statistical test. For example, when the cut-off p-value was increased to 0.20 from 0.05, the sensitivity of Egger’s test increased by 0.39, but the false positive rate (1-specificity) remained constant. At every cut-off p-value shown in this table, the sensitivity of Egger’s test was larger than that of Begg’s method or Macaskill’s method. 
These results were not influenced by sensitivity analyses using the full data set; the false positive rate (1-specificity) was always below the cut-off p-value (the nominal significance level).\n* Only reviews that were scored 2 and with inter-observer agreement were used\nTable 5 summarizes the sensitivities of the statistical tests when the false positive rate (1-specificity) was fixed at 0.05 or 0.1. The statistical power (sensitivity) of Egger’s method was larger than that of Begg’s or Macaskill’s method, regardless of the false positive rate (0.05 or 0.1) or the type of data set used (n=130 or n=36).\n* Only reviews that were scored 2 and with inter-observer agreement were used\nFigure 2 shows representative examples of funnel plots in the current analyses. For example, graph (D) is an example of the discrepancy between the three statistical tests in terms of detecting publication bias. The number of included studies in this review was 11; the total sample size was 619; the median sample size per review was 33 (range, 20-204); the pooled odds ratio was 3.32 (95% CI, 2.24-4.92). The funnel plots were scored as 2, and both observers agreed that this analysis had publication bias. The p-values were respectively 0.018, 0.020, and 0.352 for Egger’s, Begg’s, and Macaskill’s method. For a cut-off p-value of 0.1, Egger’s method and Begg’s method suggest the presence of a publication bias, but Macaskill’s method does not.\n(A) A typical example of the absence of publication bias. The number of included studies was 13; the total sample size was 855; the median sample size per review was 46 (range, 20-234); the pooled odds ratio was 0.96 (95% CI, 0.68-1.35). The funnel plots were scored at 4, and both observers agreed that there is no publication bias in this analysis. The p-values were 0.583, 0.641, and 0.603, respectively, for Egger’s method, Begg’s method, and Macaskill’s method.\n(B) A typical example of the presence of publication bias. 
The number of included studies was 15; the total sample size was 1278; the median sample size per review was 73 (range, 23-158); the pooled odds ratio was 0.78 (95% CI, 0.61-1.00). The funnel plots were scored at 2, and both observers agreed that there is a publication bias in this analysis. The p-values were respectively 0.006, 0.002, and 0.02 for Egger’s method, Begg’s method, and Macaskill’s method.\n(C) An example of the inconsistency between two observers in the interpretation of funnel plots. The number of included studies was 25; the total sample size was 2478; the median sample size per review was 97 (range, 36-200); the pooled odds ratio was 1.64 (95% CI, 1.28-2.11). The funnel plots were scored at 3, and observer A asserted that there was publication bias in this analysis, whereas observer B did not. The p-values were respectively 0.500, 0.944, and 0.419 for Egger’s method, Begg’s method, and Macaskill’s method.\n(D) An example of the inconsistency between the three statistical tests in detecting publication bias. The number of included studies was 11; the total sample size was 619; the median sample size per review was 33 (range, 20-204); the pooled odds ratio was 3.32 (95% CI, 2.24-4.92). The funnel plots were scored at 2, and both observers agreed that there is a publication bias in this analysis. The p-values were respectively 0.018, 0.020, and 0.352 for Egger’s method, Begg’s method, and Macaskill’s method. With a positivity criterion of p<0.1, Egger’s method and Begg’s method suggest the presence of a publication bias, but Macaskill’s method does not.", "Publication bias in meta-analysis could lead to serious consequences, and there have been repeated calls for a worldwide registration of clinical trials.18-22 Recently the International Committee of Medical Journal Editors (ICMJE) proposed comprehensive trial registration as a solution to the bias problem. 
ICMJE member journals will require, as a condition of consideration for publication, registration in a public trials registry.3 This policy applies to any clinical trial starting enrolment after July 1, 2005. An examination of possible publication bias and related biases should be an essential part of meta-analyses and systematic reviews. Although a registration of trials and the creation of a database of all published and unpublished trials could solve the problem, it will be a long time before these goals are completely fulfilled. The methodology of assessing publication bias is still developing, and the current study generates the following suggestions concerning how to evaluate publication bias.\nFirst, our results showed that Egger’s method had stronger statistical power than Begg’s method or Macaskill’s method. This is in part consistent with the reports by Egger, Sterne, or Macaskill,2,6,7 which suggested the difference in power between Egger’s method and Begg’s or Macaskill’s method. In the Macaskill report, however, which compared these three statistical tests, the use of Macaskill’s method was recommended to detect publication bias because it had a low false positive rate; Begg’s and Egger’s methods had higher false positive rates than the nominal level, though they had stronger statistical powers. Our results, however, showed that the statistical power of Egger’s or Begg’s method seemed stronger than that of Macaskill’s method given fixed false positive rates (0.05 or 0.1), which is inconsistent with Macaskill’s opinion that the stronger statistical power of these two tests could be attained only at the cost of a problematically high false positive rate. It is not clear why our results are inconsistent with Macaskill’s report, but a possible reason may be that they evaluated their method by using simulated data, whereas we have used data in a real setting. 
For example, in their report, most of the simulated reviews included only a few hundred subjects per study, but the median number of subjects in the current analysis was 2,801.5; our analysis using data in a real setting included a broader range of effect sizes (OR: 0.09-6.22) than that in their simulation study (OR: 0.25-1.0). Our results thus may suggest that their method is not necessarily preferred in a real setting. The results of the area under the receiver operating characteristic curve analyses have also revealed that Begg’s or Egger’s method had stronger discriminatory power in detecting publication bias than Macaskill’s method did.\nSecond, we have shown that a higher statistical power could be attained with a relatively small increment of the false positive rate (type I error) by setting the cut-off p-values of the three statistical tests larger than the conventionally used cut-off p-value. For example, by increasing it to 0.15 from 0.05, the sensitivity of Egger’s test increased by 0.24, but the false positive rate increased by only 0.05 in the full data set. The same happens with Begg’s and Macaskill’s methods. Typically, a cut-off p-value of 0.1 has been used to determine the presence of publication bias by the use of these statistical tests;23 however, the conventional cut-off point could be reconsidered depending on the purpose of using them.\nIt is important to determine a reference standard for evaluating sensitivities and specificities when evaluating test performance. Funnel plots, the simplest and most commonly used method in meta-analysis, were used as a reference standard in the current analysis for the following reasons. First, under certain conditions, they have higher reproducibility than generally thought; this observation is based on multiple observer conclusions.4 Observer agreement on the interpretation of funnel plots had not been systematically evaluated prior to this study. 
Our inter-observer agreement rate was 92.1%, and the Kappa value for the graphs scored 2 was 0.75. Although observer agreement decreased with the degree of difficulty of interpretation (the easier the graphs were to interpret, the better the reproducibility of the funnel plots), it was still 73.8% even when all 130 meta-analyses were used; the Kappa value was 0.42, which denotes good reproducibility. This justifies our use of funnel plots as a reference standard.\nSecond, funnel plots are suitable for determining the presence or absence of publication bias because they are more than a tool for evaluating an asymmetry, which statistical tests mainly evaluate. Publication bias has long been considered to exist when funnel plots are asymmetrical.9 However, many factors influence their asymmetry. For example, English-language bias – the preferential publication of negative findings in journals published in languages other than English – makes the location and inclusion of non-English studies in meta-analysis less likely.24 Also, as a consequence of citation bias, negative studies tend to be quoted less frequently and are therefore more likely to be missed in a literature search.25,26 Other factors causing an asymmetry of funnel plots include poor methodological design of small studies, inadequate analysis, fraud, or choice of effect measure. Therefore, to relate the asymmetry to publication bias, other factors should be evaluated; for that purpose, a funnel plot does a better job than a statistical test because the observer can judge publication bias subjectively by referring to this information.\nThere are several limitations in this study. First, we did not include meta-analyses involving a small number of studies, since the power of statistical tests declines when only a few studies are subjected to meta-analysis,7 and graphical interpretation may be difficult and biased when the number of studies is small. 
Different results of test performance may arise if these few studies are included. Second, the larger the cut-off p-value, the higher the probability of detecting publication bias by chance; however, our results suggested that the benefits of using a larger cut-off p-value surpass the problems of false positivity for publication bias. Further evaluation is necessary to verify whether our results can be extrapolated to funnel plots other than those used in our analyses.\nIn conclusion, Egger’s linear regression method or Begg’s method had higher statistical and discriminatory power for detecting publication bias than Macaskill’s method given the same type I error level. The false negative rate of these methods could be improved by increasing the cut-off p-value without a substantial increment of false positive rate." ]
[ "methods", "results", "discussion" ]
[ "Meta-Analysis", "Cochrane library", "funnel plots", "statistical tests", "Publication Bias" ]
METHODS: To evaluate the test performance of Begg’s, Egger’s, and Macaskill’s methods, visual interpretation of funnel plots was used as a reference standard. A funnel plot is a graphical presentation of each trial’s effect size against one of the sample size measures, such as the precision of the effect size estimate, the overall sample size, and the standard error.8 It is based on the assumption that the results from smaller studies will be more widely spread around the mean effect because of a large random error. If there is no publication bias, a plot of sample size versus treatment effect from individual studies in a meta-analysis should therefore look like an inverted funnel.9 In practice, smaller studies or non-significant studies are less likely to be published and data for the lower left-hand corner of the graph are often lacking, creating an asymmetry in the funnel shape. It has been reported that different observers may interpret funnel plots differently;4 however, we aimed to resolve this problem by having different observers judge the shape of funnel plots and reach a consensus. All completed systematic reviews that were contained in the Cochrane Database of Systematic Reviews 2002 issue were examined.10 Only reviews including 10 or more trials with a binary outcome measure were included in the assessment. The cut-off of 10 trials is based on the minimum number reported by Sutton, who evaluated the effect of publication bias on the results and conclusions of systematic reviews and meta-analyses.8 At most, a single meta-analysis from each systematic review was included, and when more than one meta-analysis met the inclusion criteria, the one containing the largest number of studies was selected. If two or more analyses contained the same number of studies, the one with the strongest relation to the primary outcome of the study was selected. One evaluator extracted data from each selected meta-analysis. 
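A funnel plot of the kind described above is built from each trial's 2x2 table. The sketch below shows the data preparation; the helper names are mine, and the 0.5 continuity correction and log-odds-ratio-versus-precision axes follow the construction described in the Methods.

```python
import math

def log_odds_ratio(a, b, c, d, correction=0.5):
    """Log odds ratio and its standard error from a 2x2 table:
    a/b = events/non-events (treatment), c/d = events/non-events (control).
    A continuity correction of 0.5 is added to every cell if any is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + correction for x in (a, b, c, d))
    return math.log((a * d) / (b * c)), math.sqrt(1/a + 1/b + 1/c + 1/d)

def funnel_points(tables):
    """One (effect, precision) point per trial: log OR on one axis,
    1/SE on the other, so large precise trials sit at the narrow top."""
    return [(lo, 1 / se) for lo, se in (log_odds_ratio(*t) for t in tables)]
```

In the absence of publication bias the points scatter symmetrically around the pooled effect and narrow toward the top; missing small or non-significant trials empty one lower corner, producing the asymmetry the statistical tests probe.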
The binary data of the meta-analysis were used for the analysis, and a continuity correction of 0.5 was used when necessary.11 For each review article selected, funnel plots were constructed by plotting the effect measure (eg, the natural logarithm of the odds ratio) against the inverse of its standard error, which is less likely to give a biased result than the use of other effect measures (e.g., log risk ratio and risk difference).12 Two observers, one of the authors and another person blinded to the results of statistical analysis, interpreted all funnel plots and judged independently whether publication bias was present. Both observers are general internists and have the experience of conducting meta-analyses that have been published in peer-reviewed journals. To verify consistency, observer A interpreted all funnel plots again 7 days later. Inconsistencies between the observers were resolved by discussion. Some resulting funnel plots were typical in shape and thus easy to evaluate, but others were atypical and difficult to evaluate. To assess the difficulties of interpretation, two observers also ranked the graphs by using three categories of complexity ([1] easy; [2] moderate; and [3] complicated). The rankings by two observers were then totaled and the result was used as a score of the graph’s complexity. For example, when a graph was ranked “[1] easy” by observer A and “[3] complicated” by observer B, the complexity score for that graph was 4. Therefore a larger number implied a higher level of complexity of a graph’s interpretation. The asymmetry of the funnel plots was statistically evaluated by three test methods: Begg’s, Egger’s, and Macaskill’s. Begg’s method tests publication bias by determining whether there is a significant correlation between the effect size estimates and their variances. 
The effect estimates were standardized to stabilize the variance, and an adjusted rank correlation test was then performed.5,13 Let ti and vi be the estimated effect sizes and their variances from the k studies in the meta-analysis, i=1,…,k. To construct a valid rank correlation test, it is necessary to stabilize the variance by standardizing the effect size prior to performing the test. We correlate ti* and vi, where ti* = (ti − t̄)/(vi*)^(1/2), t̄ = (Σ vj^(−1) tj)/(Σ vj^(−1)), and vi* = vi − (Σ vj^(−1))^(−1) is the variance of ti − t̄. Throughout, we have used the rank correlation test based on Kendall’s tau. This involves enumerating the number of pairs of studies that are ranked in the same order with respect to the two factors (i.e., t* and v). Egger’s method detects funnel plot asymmetry by determining whether the intercept deviates significantly from zero in a regression of the standardized effect estimates versus their precision.14 The standard normal deviate (SND), defined as the odds ratio divided by its standard error, is regressed against the estimate’s precision, the latter being defined as the inverse of the standard error (regression equation: SND=a+b×precision). The analysis could be weighted or unweighted by the inverse of the variance of the effect estimates. The unweighted model was used for the current analysis. Macaskill’s method fits a regression directly to the data by using the treatment effect (ti) as the dependent variable and the study size (ni) as the independent variable.6 The observations are weighted by the reciprocal of the pooled variance for each study, that is, the variance of the pooled estimates resulting from combining the data for the two groups. When there is no publication bias, the regression slope has an expected value of zero, and a nonzero slope would suggest an association between effect and sample size, possibly because of publication bias. We compared these three statistical tests in three different ways. 
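The three test statistics just described can be sketched in pure Python. This is a sketch of the statistics only, with names of my choosing; the p-values the study reports additionally require the null distributions (normal for Kendall's tau, Student's t for the two regression coefficients).

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def begg_tau(t, v):
    """Begg: Kendall's tau between standardized effects t* and variances v.
    t* = (t_i - tbar)/sqrt(v_i*), tbar = weighted mean with weights 1/v_j,
    v_i* = v_i - 1/sum(1/v_j), as in the text."""
    w = [1 / vi for vi in v]
    tbar = sum(wi * ti for wi, ti in zip(w, t)) / sum(w)
    t_star = [(ti - tbar) / math.sqrt(vi - 1 / sum(w)) for ti, vi in zip(t, v)]
    k = len(t)
    s = sum(sign(t_star[j] - t_star[i]) * sign(v[j] - v[i])
            for i in range(k) for j in range(i + 1, k))
    return s / (k * (k - 1) / 2)

def egger_intercept(t, se):
    """Egger: intercept of the unweighted regression SND = a + b*precision,
    with SND = t/se and precision = 1/se; bias is suggested when a != 0."""
    x = [1 / s for s in se]
    y = [ti / s for ti, s in zip(t, se)]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar

def macaskill_slope(t, n_study, v_pooled):
    """Macaskill: slope of effect t on study size n, each study weighted by
    the reciprocal of its pooled variance; expected slope 0 if unbiased."""
    w = [1 / vp for vp in v_pooled]
    sw = sum(w)
    xbar = sum(wi * ni for wi, ni in zip(w, n_study)) / sw
    ybar = sum(wi * ti for wi, ti in zip(w, t)) / sw
    return (sum(wi * (ni - xbar) * (ti - ybar)
                for wi, ni, ti in zip(w, n_study, t))
            / sum(wi * (ni - xbar) ** 2 for wi, ni in zip(w, n_study)))
```

The contrast between the three is visible in the code itself: Begg's statistic depends only on ranks, while Egger's and Macaskill's are regression coefficients, which is why they respond differently to the same funnel plot.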
First, we used a p-value as a cut-off point for defining the presence of publication bias by Begg’s method, Egger’s method, or Macaskill’s method, and evaluated a trade-off between sensitivity and specificity by varying cut-off p-values (0.05, 0.1, 0.2). Second, we estimated the sensitivities of these tests corresponding to a fixed false positive rate (0.05 or 0.1) to compare their statistical powers. Third, receiver operating characteristic curve analysis was used to determine the discriminatory power of each test. A receiver operating characteristic analysis is a popular method of assessing the predictive power of a test by plotting the sensitivity (power) of the test against the corresponding false-positive rate (1-specificity) as the cut-off level of the model varies.15 In the present analysis, sensitivity refers to the percentage of systematic reviews with publication bias detected by a statistical test using a given cut-off point out of all systematic reviews with publication bias defined by the reference standard. Specificity refers to the percentage of systematic reviews found by a statistical test to be without publication bias out of all systematic reviews without publication bias defined by the reference standard. We compared areas under receiver operating characteristic (ROC) curves by setting Egger’s method as the reference, using an algorithm suggested by DeLong, DeLong, and Clarke-Pearson.16 We defined statistical significance to be p<0.05 for this analysis. In the evaluation of statistical tests, we used a subgroup of graphs that were scored at 2 and with the observers’ agreement, i.e., both observers agreed that these graphs were reliable and easy to evaluate. We also evaluated the three tests by using all 130 reviews as sensitivity analyses. 
We tested the reliability and validity of the reference standard by examining intra- and inter-observer agreement of these plots, using Kappa statistics.17 All statistical analyses were performed with STATA®, version 7 (STATA corporation, College Station, TX, USA). RESULTS: At the time of investigation, the Cochrane Library (2002, Issue 1) contained 1297 completed systematic reviews. Of these, 130 meta-analyses included 10 or more trials with at least 1 dichotomous outcome. Summary characteristics of the systematic reviews are shown in Table 1. In all 130 reviews, a total of 2,468 original studies evaluated 5,490,223 subjects; the median number of subjects per review was 2,801.5 (range, 382-1,874,547); the median number of original studies included in one review was 13 (range, 10-135); the median pooled odds ratio was 0.895 (range, 0.09-6.22). Table 2 shows the number of funnel plots judged to show publication bias by inspection of funnel plots of all studies or of studies restricted to the groups of various scores. When all data were included, the number of studies interpreted as biased by observer B was larger than that according to observer A. Of all 130 graphs, 38 (29.2%) were scored two, 57 (43.8%) were scored three, 27 (20.8%) were scored four, and 8 (6.2%) were scored five. *: Percentages out of all 130 reviews. A-1: the interpretation by observer A on day 1. A-2: the interpretation by observer A on day 2. Table 3 summarizes the intra- and interobserver agreement of the graphical test. The intraobserver agreement rate was 82.3%, and the Kappa value of observer A was 0.65 (95% confidence interval (CI), 0.52-0.77). The interobserver agreement rate and the Kappa value on all graphs evaluated by observer A and observer B were 73.8% and 0.42 (95% CI, 0.27-0.57), respectively. These increased to 92.1% and 0.75 (0.49-1.00) when limited to the graphs scored as 2. 
The intraobserver agreement on all graphs and that on graphs scored as 2 were in the upper range of good reproducibility.12 Of the 38 graphs scored at two, 36 with observers’ agreement were used for the main analyses. CI: confidence interval. As shown in Figure 1, in analyses using data with interobserver agreement (n=36), the area under the ROC curve of Egger’s method was 0.955 (95% CI, 0.889-1.000) and was not different from that of Begg’s method (area=0.913, p=0.2302), but it was larger than that of Macaskill’s method (area=0.719, p=0.0116). In the full data set (n=130), the area under the ROC curve of Egger’s method was 0.728 (95% CI, 0.643-0.813), but was not statistically different from that of Begg’s method (area=0.649, p=0.0779) or Macaskill’s method (area=0.634, p=0.0645). Left column: Main analyses by using funnel plots that were scored as 2 and with observers’ agreement. Right column: Sensitivity analyses by using all 130 reviews. CI: confidence interval. Table 4 summarizes a trade-off between the sensitivities and specificities of three statistical tests by varying the cut-off p-values. All these statistical tests had high specificities with varying degrees of sensitivities. In analyses using data with interobserver agreement (n=36), sensitivities increased as the cut-off p-values increased without a decrement of specificities for any statistical test. For example, when the cut-off p-value was increased to 0.20 from 0.05, the sensitivity of Egger’s test increased by 0.39, but the false positive rate (1-specificity) remained constant. At every cut-off p-value shown in this table, the sensitivity of Egger’s test was larger than that of Begg’s method or Macaskill’s method. These results were not influenced by sensitivity analyses using the full data set; the false positive rate (1-specificity) was always below the cut-off p-value (the nominal significance level). 
* Only reviews that were scored 2 and with inter-observer agreement were used Table 5 summarizes the sensitivities of the statistical tests when the false positive rate (1-specificity) was fixed at 0.05 or 0.1. The statistical power (sensitivity) of Egger’s method was larger than that of Begg’s or Macaskill’s method, regardless of the false positive rate (0.05 or 0.1) or the type of data set used (n=130 or n=36). * Only reviews that were scored 2 and with inter-observer agreement were used Figure 2 shows representative examples of funnel plots in the current analyses. For example, graph (D) is an example of the discrepancy between the three statistical tests in terms of detecting publication bias. The number of included studies in this review was 11; the total sample size was 619; the median sample size per review was 33 (range, 20-204); the pooled odds ratio was 3.32 (95% CI, 2.24-4.92). The funnel plots were scored as 2, and both observers agreed that this analysis had publication bias. The p-values were respectively 0.018, 0.020, and 0.352 for Egger’s, Begg’s, and Macaskill’s method. For a cut-off p-value of 0.1, Egger’s method and Begg’s method suggest the presence of a publication bias, but Macaskill’s method does not. (A) A typical example of the absence of publication bias. The number of included studies was 13; the total sample size was 855; the median sample size per review was 46 (range, 20-234); the pooled odds ratio was 0.96 (95% CI, 0.68-1.35). The funnel plots were scored at 4, and both observers agreed that there is no publication bias in this analysis. The p-values were 0.583, 0.641, and 0.603, respectively, for Egger’s method, Begg’s method, and Macaskill’s method. (B) A typical example of the presence of publication bias. The number of included studies was 15; the total sample size was 1278; the median sample size per review was 73 (range, 23-158); the pooled odds ratio was 0.78 (95% CI, 0.61-1.00). 
The funnel plots were scored at 2, and both observers agreed that there is a publication bias in this analysis. The p-values were respectively 0.006, 0.002, and 0.02 for Egger’s method, Begg’s method, and Macaskill’s method. (C) An example of the inconsistency between two observers in the interpretation of funnel plots. The number of included studies was 25; the total sample size was 2478; the median sample size per review was 97 (range, 36-200); the pooled odds ratio was 1.64 (95% CI, 1.28-2.11). The funnel plots were scored at 3, and observer A asserted that there was publication bias in this analysis, whereas observer B did not. The p-values were respectively 0.500, 0.944, and 0.419 for Egger’s method, Begg’s method, and Macaskill’s method. (D) An example of the inconsistency between the three statistical tests in detecting publication bias. The number of included studies was 11; the total sample size was 619; the median sample size per review was 33 (range, 20-204); the pooled odds ratio was 3.32 (95% CI, 2.24-4.92). The funnel plots were scored at 2, and both observers agreed that there is a publication bias in this analysis. The p-values were respectively 0.018, 0.020, and 0.352 for Egger’s method, Begg’s method, and Macaskill’s method. With a positivity criterion of p<0.1, Egger’s method and Begg’s method suggest the presence of a publication bias, but Macaskill’s method does not. DISCUSSION: Publication bias in meta-analysis could lead to serious consequences, and there have been repeated calls for a worldwide registration of clinical trials.18-22 Recently the International Committee of Medical Journal Editors (ICMJE) proposed comprehensive trial registration as a solution to the bias problem. ICMJE member journals will require, as a condition of consideration for publication, registration in a public trials registry.3 This policy applies to any clinical trial starting enrolment after July 1, 2005. 
An examination of possible publication bias and related biases should be an essential part of meta-analyses and systematic reviews. Although a registration of trials and the creation of a database of all published and unpublished trials could solve the problem, it will be a long time before these goals are completely fulfilled. The methodology of assessing publication bias is still developing, and the current study generates the following suggestions concerning how to evaluate publication bias. First, our results showed that Egger’s method had stronger statistical power than Begg’s method or Macaskill’s method. This is in part consistent with the reports by Egger, Sterne, or Macaskill,2,6,7 which suggested the difference in power between Egger’s method and Begg’s or Macaskill’s method. In the Macaskill report, however, which compared these three statistical tests, the use of Macaskill’s method was recommended to detect publication bias because it had a low false positive rate; Begg’s and Egger’s methods had higher false positive rates than the nominal level, though they had stronger statistical powers. Our results, however, showed that the statistical power of Egger’s or Begg’s method seemed stronger than that of Macaskill’s method given fixed false positive rates (0.05 or 0.1), which is inconsistent with Macaskill’s opinion that the stronger statistical power of these two tests could be attained only at the cost of a problematically high false positive rate. It is not clear why our results are inconsistent with Macaskill’s report, but a possible reason may be that they evaluated their method by using simulated data, whereas we have used data in a real setting. 
For example, in their report, most of the simulated reviews included only a few hundred subjects per study, but the median number of subjects in the current analysis was 2,801.5; our analysis using data in a real setting included a broader range of effect sizes (OR: 0.09-6.22) than that in their simulation study (OR: 0.25-1.0). Our results thus may suggest that their method is not necessarily preferred in a real setting. The results of the area under the receiver operating characteristic curve analyses have also revealed that Begg’s or Egger’s method had stronger discriminatory power in detecting publication bias than Macaskill’s method did. Second, we have shown that a higher statistical power could be attained with a relatively small increment of the false positive rate (type I error) by setting the cut-off p-values of the three statistical tests larger than the conventionally used cut-off p-value. For example, by increasing it to 0.15 from 0.05, the sensitivity of Egger’s test increased by 0.24, but the false positive rate increased by only 0.05 in the full data set. The same happens with Begg’s and Macaskill’s methods. Typically, a cut-off p-value of 0.1 has been used to determine the presence of publication bias by the use of these statistical tests;23 however, the conventional cut-off point could be reconsidered depending on the purpose of using them. It is important to determine a reference standard for evaluating sensitivities and specificities when evaluating test performance. Funnel plots, the simplest and most commonly used method in meta-analysis, were used as a reference standard in the current analysis for the following reasons. First, under certain conditions, they have higher reproducibility than generally thought; this observation is based on multiple observer conclusions.4 Observer agreement on the interpretation of funnel plots had not been systematically evaluated prior to this study. 
Our inter-observer agreement rate was 92.1%, and the Kappa value for the graphs scored 2 was 0.75. Although observer agreement decreased with the degree of difficulty of interpretation (the easier the graphs were to interpret, the better the reproducibility of the funnel plots), it was still 73.8% even when all 130 meta-analyses were used; the Kappa value was 0.42, which denotes good reproducibility. This justifies our use of funnel plots as a reference standard. Second, funnel plots are suitable for determining the presence or absence of publication bias because they are more than a tool for evaluating an asymmetry, which statistical tests mainly evaluate. Publication bias has long been considered to exist when funnel plots are asymmetrical.9 However, many factors influence their asymmetry. For example, English-language bias – the preferential publication of negative findings in journals published in languages other than English – makes the location and inclusion of non-English studies in meta-analysis less likely.24 Also, as a consequence of citation bias, negative studies tend to be quoted less frequently and are therefore more likely to be missed in a literature search.25,26 Other factors causing an asymmetry of funnel plots include poor methodological design of small studies, inadequate analysis, fraud, or choice of effect measure. Therefore, to relate the asymmetry to publication bias, other factors should be evaluated; for that purpose, a funnel plot does a better job than a statistical test because the observer can judge publication bias subjectively by referring to this information. There are several limitations in this study. First, we did not include meta-analyses involving a small number of studies, since the power of statistical tests declines when only a few studies are subjected to meta-analysis,7 and graphical interpretation may be difficult and biased when the number of studies is small. 
Different test performance might have been observed had such meta-analyses been included. Second, the larger the cut-off p-value, the higher the probability of detecting publication bias by chance; however, our results suggest that the benefits of using a larger cut-off p-value outweigh the problem of false positives for publication bias. Further evaluation is necessary to verify whether our results can be extrapolated to funnel plots other than those used in our analyses. In conclusion, Egger’s linear regression method and Begg’s method had higher statistical and discriminatory power for detecting publication bias than Macaskill’s method given the same type I error level. The false negative rate of these methods could be improved by increasing the cut-off p-value without a substantial increment of the false positive rate.
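The sensitivity/false-positive trade-off from varying the cut-off p-value can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the test p-values and funnel-plot reference labels below are invented, not the study's data.

```python
# Hypothetical sketch: sensitivity and false positive rate of a
# publication-bias test at several cut-off p-values, using a
# funnel-plot reading as the reference standard.
# All p-values and reference labels below are invented for illustration.

def sens_fpr(p_values, biased, cutoff):
    """Sensitivity and false positive rate when publication bias is
    declared whenever the test p-value falls below the cut-off."""
    tp = sum(p < cutoff and b for p, b in zip(p_values, biased))
    fp = sum(p < cutoff and not b for p, b in zip(p_values, biased))
    n_pos = sum(biased)
    n_neg = len(biased) - n_pos
    return tp / n_pos, fp / n_neg

# Test p-values for 10 meta-analyses; True = funnel plot judged asymmetric.
p_vals = [0.01, 0.04, 0.08, 0.12, 0.30, 0.02, 0.20, 0.50, 0.70, 0.90]
labels = [True, True, True, True, True, False, False, False, False, False]

for cutoff in (0.05, 0.10, 0.15):
    se, fpr = sens_fpr(p_vals, labels, cutoff)
    print(f"cut-off {cutoff:.2f}: sensitivity {se:.2f}, false positive rate {fpr:.2f}")
```

With these invented numbers, raising the cut-off from 0.05 to 0.15 increases sensitivity while the false positive rate stays flat, mirroring the trade-off the authors describe.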
Background: This study evaluates the statistical and discriminatory powers of three statistical test methods (Begg's, Egger's, and Macaskill's) to detect publication bias in meta-analyses. Methods: The data sources were 130 reviews from the Cochrane Database of Systematic Reviews 2002 issue that considered a binary endpoint and contained 10 or more individual studies. Funnel plots with observers' agreement were selected as a reference standard. We evaluated the trade-off between sensitivity and specificity by varying cut-off p-values, the power of the statistical tests given fixed false positive rates, and the area under the receiver operating characteristic (ROC) curve. Results: In 36 reviews, 733 original studies evaluated 2,874,006 subjects. The number of trials included in each review ranged from 10 to 70 (median 14.5). Given a false positive rate of 0.1, the sensitivity of Egger's method was 0.93, larger than that of Begg's method (0.86) and Macaskill's method (0.43). The sensitivities of the three statistical tests increased as the cut-off p-values increased, without a substantial decrement in specificities. The area under the ROC curve of Egger's method was 0.955 (95% confidence interval, 0.889-1.000); it did not differ from that of Begg's method (area = 0.913, p = 0.2302) but was larger than that of Macaskill's method (area = 0.719, p = 0.0116). Conclusions: Egger's linear regression method and Begg's method had stronger statistical and discriminatory powers than Macaskill's method for detecting publication bias given the same type I error level. The power of these methods could be improved by increasing the cut-off p-value without a substantial increment of the false positive rate.
null
null
4,185
326
3
[ "method", "bias", "publication", "publication bias", "funnel", "statistical", "analysis", "plots", "funnel plots", "egger" ]
[ "test", "test" ]
null
null
null
null
[CONTENT] Meta-Analysis | Cochrane library | funnel plots | statistical tests | Publication Bias [SUMMARY]
[CONTENT] Meta-Analysis | Cochrane library | funnel plots | statistical tests | Publication Bias [SUMMARY]
null
[CONTENT] Meta-Analysis | Cochrane library | funnel plots | statistical tests | Publication Bias [SUMMARY]
null
null
[CONTENT] Linear Models | Meta-Analysis as Topic | Publication Bias | ROC Curve [SUMMARY]
[CONTENT] Linear Models | Meta-Analysis as Topic | Publication Bias | ROC Curve [SUMMARY]
null
[CONTENT] Linear Models | Meta-Analysis as Topic | Publication Bias | ROC Curve [SUMMARY]
null
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
null
[CONTENT] method | bias | publication | publication bias | funnel | statistical | analysis | plots | funnel plots | egger [SUMMARY]
[CONTENT] method | bias | publication | publication bias | funnel | statistical | analysis | plots | funnel plots | egger [SUMMARY]
null
[CONTENT] method | bias | publication | publication bias | funnel | statistical | analysis | plots | funnel plots | egger [SUMMARY]
null
null
[CONTENT] effect | analysis | funnel | observers | standard | publication bias | publication | test | bias | variance [SUMMARY]
[CONTENT] method | ci | 95 | scored | 95 ci | sample | sample size | egger | bias | publication [SUMMARY]
null
[CONTENT] method | bias | publication | publication bias | funnel | statistical | analysis | plots | funnel plots | macaskill [SUMMARY]
null
null
[CONTENT] 130 | the Cochrane Database of Systematic Reviews | 2002 | 10 ||| ||| [SUMMARY]
[CONTENT] 36 | 733 | 2,874,006 ||| 10 | 14.5 ||| 0.1 | Egger | 0.93 | 0.86 | Macaskill | 0.43 ||| three ||| ROC | Egger | 0.955 | 95% | 0.889-1.000 | area=0.913 | p=0.2302 | Macaskill [SUMMARY]
null
[CONTENT] ||| three | Egger | Macaskill ||| 130 | the Cochrane Database of Systematic Reviews | 2002 | 10 ||| ||| ||| ||| 36 | 733 | 2,874,006 ||| 10 | 14.5 ||| 0.1 | Egger | 0.93 | 0.86 | Macaskill | 0.43 ||| three ||| ROC | Egger | 0.955 | 95% | 0.889-1.000 | area=0.913 | p=0.2302 | Macaskill ||| Egger | Begg | Macaskill ||| ||| [SUMMARY]
null
Attractive toxic sugar bait (ATSB) methods decimate populations of Anopheles malaria vectors in arid environments regardless of the local availability of favoured sugar-source blossoms.
22297155
Attractive toxic sugar bait (ATSB) methods are a new and promising "attract and kill" strategy for mosquito control. Sugar-feeding female and male mosquitoes attracted to ATSB solutions, either sprayed on plants or in bait stations, ingest an incorporated low-risk toxin such as boric acid and are killed. This field study, conducted in the arid malaria-free oasis environment of Israel, examines how the availability of a primary natural sugar source for Anopheles sergentii mosquitoes, flowering Acacia raddiana trees, affects the efficacy of ATSB methods for mosquito control.
BACKGROUND
A 47-day field trial was conducted to compare impacts of a single application of ATSB treatment on mosquito densities and age structure in isolated uninhabited sugar-rich and sugar-poor oases relative to an untreated sugar-rich oasis that served as a control.
METHODS
ATSB spraying on patches of non-flowering vegetation around freshwater springs reduced densities of female An. sergentii by 95.2% in the sugar-rich oasis and 98.6% in the sugar-poor oasis; males in both oases were practically eliminated. It reduced daily survival rates of female An. sergentii from 0.77 to 0.35 in the sugar-poor oasis and from 0.85 to 0.51 in the sugar-rich oasis. ATSB treatment reduced the proportion of older, more epidemiologically dangerous mosquitoes (three or more gonotrophic cycles) by 100% and 96.7% in the sugar-poor and sugar-rich oases, respectively. Overall, malaria vectorial capacity was reduced from 11.2 to 0.0 in the sugar-poor oasis and from 79.0 to 0.03 in the sugar-rich oasis. The reduction of vectorial capacity to negligible levels within days of ATSB application in the sugar-poor oasis, but not until after two weeks in the sugar-rich oasis, shows that natural sugar sources compete with the applied ATSB solutions.
RESULTS
While readily available natural sugar sources delay ATSB impact, they do not affect overall outcomes, because the high frequency of sugar feeding by mosquitoes has a cumulative effect on the probability that they will be attracted to and killed by ATSB methods. Operationally, ATSB methods for malaria vector control are highly effective in arid environments regardless of competing, highly attractive natural sugar sources in the outdoor environment.
CONCLUSION
[ "Acacia", "Animals", "Anopheles", "Carbohydrate Metabolism", "Ecosystem", "Feeding Behavior", "Female", "Israel", "Male", "Mosquito Control", "Poisons", "Population Density", "Survival Analysis" ]
3293779
Background
Attractive toxic sugar bait (ATSB) methods are a new form of vector control that kill female and male mosquitoes questing for essential sugar sources in the outdoor environment [1-7]. ATSB solutions consist of fruit or flower scent as an attractant, sugar solution as a feeding stimulant, and oral toxin to kill the mosquitoes. ATSB solutions that are sprayed on small spots of vegetation or suspended in simple removable bait stations attract mosquitoes from a large area and the mosquitoes ingesting the toxic solutions are killed. The ATSB methods developed and field-tested in Israel demonstrate how they literally decimate local populations of different anopheline and culicine mosquito species [1-5]. Similar successful ATSB field trials have also controlled Culex quinquefasciatus from storm drains in Florida, USA [6] and Anopheles gambiae s.l. malaria vectors in Mali, West Africa [7]. The new ATSB methods are highly effective, technologically simple, low-cost, and circumvent traditional problems associated with the indiscriminate effects of contact insecticides [8] by narrowing the specificity of attraction to sugar-seeking mosquitoes and by using environmentally safe oral toxins such as boric acid, that is considered to be only slightly more toxic to humans and other vertebrates than table salt [9]. ATSB methods work by competing with available natural plant sugar sources, which are an essential source of energy for females and the only food source for male mosquitoes [10,11]. Mosquitoes are highly selective in their attraction to locally available flowering plants and other sources of sugar including fruits, seedpods, and honeydew [12-14] and the availability of favourable natural sugar sources strongly affects mosquito survival [13]. All of the above-noted ATSB field trials used juices made from local natural fruits to successfully divert sugar-seeking mosquitoes from their natural sources of plant sugars. 
Between 50 and 90% of the local female and male mosquitoes feed on ATSB solutions within the first few days after applications, as inferred from data at control sites where the same attractive bait solutions are applied without toxin but containing coloured food dye markers, which are readily apparent in sugar-fed mosquitoes [1]. The present study is on Anopheles sergentii, the most common and abundant Anopheles species in Israel and the main vector of malaria in the Afro-Arabian zone [15,16]. This mosquito species was the main vector responsible for malaria outbreaks [17-19] before the elimination of malaria parasite transmission from Israel in the 1960s [20-22]. The objective of this study was to determine the relationship between the efficacy of ATSB control and the availability of natural plant sugar sources. As demonstrated in a recent comparative study of An. sergentii in sugar-rich and sugar-poor oases in Israel, the availability of natural plant sugar sources affects mosquito fitness, population dynamics, and malaria vector capacity [23]. Accordingly, a single application of ATSB was made in the same two relatively small, isolated, and uninhabited sugar-rich and sugar-poor oases. Another larger oasis with high densities of An. sergentii was not sprayed and served as a control site for the ATSB field trial.
Methods
Study sites The study was conducted at three oases located within the depression of the African-Syrian Rift Valley, in the northern part of the Arava Valley, about 25 km south of the Dead Sea. The shoreline of the Dead Sea is about 400 m below sea level while the central Arava Valley rises to about 200 m above sea level before it is again descending towards the Red Sea. The region belongs to the Sahara-Arabian phyto-geographical zone [24]. The area is an extreme desert with occasional natural oases centred on springs and artificial agricultural oases created by irrigation; the conditions in these sites are tropical [25]. The climate is arid with an average humidity of 57% and annual winter rains averaging of 50-100 mm. The average temperature ranges from 20°C from the end of September to early April and to 30°C from May to August [26,27]. The area is known for its rich mosquito fauna dominated by An. sergentii, Ochlerotatus caspius and Culex pipiens [28]. Field experiments were conducted at three oases. Two of the oases included small, unnamed and uninhabited oases, 5 km apart in the Arava Valley. As recently described [23], the environments of the two oases are very similar except for the availability of sugar sources. In one of the oases (termed "sugar-rich oasis"), there were two flowering Acacia raddiana trees that were the preferable source of sugar for the mosquitoes [14]. In contrast, there were no flowering plant blossoms in the other oasis (termed "sugar-poor oasis"). Both sites covered areas of about 5 ha, included small fresh-water springs surrounded by dense non-flowering vegetation which was largely grazed out by camels and donkeys with no visible plant sugar sources during the period of field experiments [23,24]. Neot Hakikar oasis served as an untreated control site. It is located about 20 km north of the small oases and is the largest natural oasis in southern Israel and the Dead Sea region. 
In the eastern, more agricultural part of the oasis there is a small settlement with gardens, vast fields and greenhouses. The western, much more natural part is a nature reserve with a mixture of salt marshes, wet and dry salinas (i.e. areas of high salt content with specialized plant communities) and fresh-water springs surrounded by riparian vegetation largely dominated by Phragmites australis L. (Gramineae) and Carex sp. L. (Cyperaceae). This natural vegetation, crossed by a drainage canal, is partially overgrown by reeds and sedges. On the dry banks of the canal the vegetation is dominated by groves and thickets of trees and bushes such as Tamarix nilotica and Tamarix passerinoides (Tamaricaceae), Prosopis farcta (Mimosaceae), Nitraria retusa (Nitrariaceae) and chenopod bushes such as Atriplex halimus, Atriplex leucoclada, Suaeda asphaltica and Suaeda fruticosa (Chenopodiaceae). At the time of the experiment some T. nilotica and P. farcta bushes were flowering. 
Preparation of ATSB solutions
The ATSB bait solution used in the sugar-rich and sugar-poor oases consisted of ~75% juice of over-ripe to rotting prickly pear cactus (Opuntia ficus-indica, Cactaceae), 5% (V/V) wine, 20% (W/V) brown sugar, 1% (W/V) BaitStab™ concentrate (a product containing antifungal and antibacterial additives produced by Westham Innovations LTD, Tel Aviv, Israel) and 1% (W/V) boric acid [29]. The solution was ripened outdoors for 48 h in covered buckets before adding the BaitStab™ and the boric acid. Prickly pear cactus fruit was used because it was locally abundant and known to be highly attractive to both sand flies [29] and mosquitoes (Schlein and Muller, unpublished).
Field application of ATSB solutions
The ATSB solution was sprayed with a 16-l back-pack sprayer (Killaspray, Model 4526, Hozelock, Birmingham, UK) in aliquots of ~80 ml on 1-m² spots at distances of ~3 m on the vegetation surrounding the fresh-water springs of the two isolated oases (sugar-rich and sugar-poor). The predominant types of non-flowering plants sprayed at the two sites were P. australis, Atriplex sp. and Suaeda sp. As a strategy to minimize potential harm to non-target insects, the predominant natural sugar source for An. sergentii, the flowering A. raddiana trees present in the sugar-rich oasis, was not sprayed. One sprayer completed the applications in less than 1 h per site. No bait solution was sprayed at the control site, Neot Hakikar.
Study design and methods for the ATSB field trial
The field trial was conducted over a period of 47 days, from 1 November to 17 December, 2009. During this period, at each of the three study sites, adult mosquitoes were sampled at two-day intervals (a total of 24 times) using six CDC UV traps (Model 1212, John W. Hock, Gainesville, FL) without attractants, in fixed positions surrounding the available fresh-water springs. ATSB bait solutions were sprayed on day 12 of the field experiment. Collected mosquitoes were sexed, identified to species, and the physiological age of female mosquitoes was determined by dissecting ovaries and counting the number of dilatations [30].
Statistical analysis
To evaluate impacts of ATSB on mosquito populations, captures of An. sergentii were examined at four intervals (days 1-12, 13-24, 25-36, and 37-47). A logistic regression was used to examine the proportion of females with three or more gonotrophic cycles in each oasis over time. Contrasts were used to test for significant changes from the pre-treatment period in each oasis. Separate Poisson regressions were used to analyse the numbers of male and female An. sergentii caught in the light traps over time in the three oases. Contrasts were used to compare the control oasis with the sugar-poor and sugar-rich oases at each time.
Estimation of vectorial capacity
Vectorial capacity (VC), defined as the average number of infectious bites a mosquito could potentially deliver over her lifetime, was used to estimate the impact of ATSB on the potential for malaria parasite transmission:

VC = m · p^EIP / (−T² · log(p))

where m was the number of mosquitoes per person and T was the estimated duration of the gonotrophic cycle [23]. EIP was the extrinsic incubation period of malaria parasites in mosquitoes, assumed to be 10 days [31]. p was the daily survival rate estimated from the parous rate r:

p = r^(1/T)

Following Dye [32], VC was compared before and after the intervention; therefore, only m and p were estimated separately for the two periods. m was estimated as the average number of female mosquitoes caught per trap night.
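The vectorial-capacity calculation can be written out directly. The sketch below implements the two formulas as stated (daily survival from the parous rate, then VC); the numeric inputs (parous rate, m, and the gonotrophic cycle length T) are purely illustrative assumptions, not the study's field estimates.

```python
import math

def daily_survival(parous_rate, T):
    """Daily survival p estimated from the parous rate r: p = r**(1/T),
    where T is the duration of the gonotrophic cycle in days."""
    return parous_rate ** (1.0 / T)

def vectorial_capacity(m, p, EIP=10, T=3):
    """VC = m * p**EIP / (-T**2 * log(p)): average infectious bites a
    mosquito could deliver, with m mosquitoes per person, daily survival p,
    extrinsic incubation period EIP (10 days per the text), and
    gonotrophic cycle length T (illustrative default here)."""
    return m * p ** EIP / (-(T ** 2) * math.log(p))

# Illustrative values only (not the field estimates from the trial):
p = daily_survival(parous_rate=0.5, T=3)
print(round(p, 4), round(vectorial_capacity(m=100, p=p), 2))
```

A drop in either m (density) or p (survival), as produced by ATSB treatment, reduces VC multiplicatively, which is why the reported post-treatment values collapse toward zero.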
null
null
Conclusions
GCM, JCB, WG, and YS conceived and planned the study, interpreted results, and wrote the paper. GCM directed and performed the field experiments and managed the data. KLA and WG analysed the data. All authors read and approved the final manuscript.
[ "Background", "Study sites", "Preparation of ATSB solutions", "Field application of ATSB solutions", "Study design and methods for the ATSB field trial", "Statistical analysis", "Estimation of vectorial capacity", "Results", "Discussion", "Conclusions" ]
[ "Attractive toxic sugar bait (ATSB) methods are a new form of vector control that kill female and male mosquitoes questing for essential sugar sources in the outdoor environment [1-7]. ATSB solutions consist of fruit or flower scent as an attractant, sugar solution as a feeding stimulant, and oral toxin to kill the mosquitoes. ATSB solutions that are sprayed on small spots of vegetation or suspended in simple removable bait stations attract mosquitoes from a large area and the mosquitoes ingesting the toxic solutions are killed. The ATSB methods developed and field-tested in Israel demonstrate how they literally decimate local populations of different anopheline and culicine mosquito species [1-5]. Similar successful ATSB field trials have also controlled Culex quinquefasciatus from storm drains in Florida, USA [6] and Anopheles gambiae s.l. malaria vectors in Mali, West Africa [7]. The new ATSB methods are highly effective, technologically simple, low-cost, and circumvent traditional problems associated with the indiscriminate effects of contact insecticides [8] by narrowing the specificity of attraction to sugar-seeking mosquitoes and by using environmentally safe oral toxins such as boric acid, that is considered to be only slightly more toxic to humans and other vertebrates than table salt [9].\nATSB methods work by competing with available natural plant sugar sources, which are an essential source of energy for females and the only food source for male mosquitoes [10,11]. Mosquitoes are highly selective in their attraction to locally available flowering plants and other sources of sugar including fruits, seedpods, and honeydew [12-14] and the availability of favourable natural sugar sources strongly affects mosquito survival [13]. All of the above-noted ATSB field trials used juices made from local natural fruits to successfully divert sugar-seeking mosquitoes from their natural sources of plant sugars. 
Between 50 and 90% of the local female and male mosquitoes feed on ATSB solutions within the first few days after applications, as inferred from data at control sites where the same attractive bait solutions are applied without toxin but containing coloured food dye markers, which are readily apparent in sugar-fed mosquitoes [1].\nThe present study is on Anopheles sergentii, the most common and abundant Anopheles species in Israel and the main vector of malaria in the Afro-Arabian zone [15,16]. This mosquito species was the main vector responsible for malaria outbreaks [17-19] before the elimination of malaria parasite transmission from Israel in the 1960s [20-22].\nThe objective of this study was to determine the relationship between the efficacy of ATSB control and the availability of natural plant sugar sources. As demonstrated in a recent comparative study of An. sergentii in sugar-rich and sugar-poor oases in Israel, the availability of natural plant sugar sources affects mosquito fitness, population dynamics, and malaria vector capacity [23]. Accordingly, a single application of ATSB was made in the same two relatively small, isolated, and uninhabited sugar-rich and sugar-poor oases. Another larger oasis with high densities of An. sergentii was not sprayed and served as a control site for the ATSB field trial.", "The study was conducted at three oases located within the depression of the African-Syrian Rift Valley, in the northern part of the Arava Valley, about 25 km south of the Dead Sea. The shoreline of the Dead Sea is about 400 m below sea level while the central Arava Valley rises to about 200 m above sea level before it is again descending towards the Red Sea. The region belongs to the Sahara-Arabian phyto-geographical zone [24]. The area is an extreme desert with occasional natural oases centred on springs and artificial agricultural oases created by irrigation; the conditions in these sites are tropical [25]. 
The climate is arid with an average humidity of 57% and annual winter rains averaging of 50-100 mm. The average temperature ranges from 20°C from the end of September to early April and to 30°C from May to August [26,27]. The area is known for its rich mosquito fauna dominated by An. sergentii, Ochlerotatus caspius and Culex pipiens [28].\nField experiments were conducted at three oases. Two of the oases included small, unnamed and uninhabited oases, 5 km apart in the Arava Valley. As recently described [23], the environments of the two oases are very similar except for the availability of sugar sources. In one of the oases (termed \"sugar-rich oasis\"), there were two flowering Acacia raddiana trees that were the preferable source of sugar for the mosquitoes [14]. In contrast, there were no flowering plant blossoms in the other oasis (termed \"sugar-poor oasis\"). Both sites covered areas of about 5 ha, included small fresh-water springs surrounded by dense non-flowering vegetation which was largely grazed out by camels and donkeys with no visible plant sugar sources during the period of field experiments [23,24]. Neot Hakikar oasis served as an untreated control site. It is located about 20 km north of the small oases and is the largest natural oasis in southern Israel and the Dead Sea region. In the eastern more agricultural part of the oasis a small settlement is located with gardens, vast fields and greenhouses. The western, much more natural part is a nature reserve with a mixture of salt marshes, wet and dry Salinas (ie areas high in salt content with specialized plant communities) and fresh-water springs surrounded by riparian vegetation largely dominated by Phragmites australis L. Gramineae and Carex sp. L. Cyperaceae. This natural vegetation, crossed by a drainage canal, is partially overgrown by reeds and sedges. 
On the dry banks of the canal vegetation is dominated by groves and thickets of trees and bushes like Tamarix nilotica and Tamarix passerinoides (Tamaricaceae), Prosopis farcta (Mimosaceae), Nitraria retusa (Nitrariaceae) and chenopod bushes like Atriplex halimus, Atriplex leucoclada, Suaeda asphaltica, Suaeda fruticosa (Chenopodiaceae). At the time of the experiment some T. nilotica and P. farcta bushes were flowering.", "The ATSB bait solution used in the sugar-rich and sugar-poor oases consisted of ~75% juice of over-ripe to rotting prickly pear cactus (Opuntia ficus-indica, Cactaceae), 5% V/V wine, 20% W/V brown sugar, 1% (W/V) BaitStab™ concentrate (a product containing antifungal and antibacterial additives produced by Westham Innovations LTD, Tel Aviv, Israel) and boric acid 1% (W/V) [29]. The solution was ripened outdoors for 48 h in covered buckets before adding the BaitStab™ and the boric acid. In this study, prickly pear cactus fruit (Opuntia ficus-indica) was used because it was locally abundant and known to be highly attractive for both sand flies [29] and mosquitoes (Schlein and Muller, unpublished).", "The ATSB solution was sprayed with a 16-l back-pack sprayer (Killaspray, Model 4526, Hozelock, Birmingham UK) in aliquots of ~80 ml on 1 m2 spots at distances of ~3 m on the vegetation surrounding the fresh-water springs of the two isolated oasis (sugar-rich and sugar-poor). Predominant types of non-flowering plants sprayed at the two sites were P. australis, Atriplex sp. and Suaeda sp. As a strategy to minimize potential harm to non-target insects, the predominant natural sugar source for An. sergentii, the flowering A. raddiana trees, present in the sugar-rich oasis were not sprayed. One sprayer completed the applications in less than 1 h per site. No bait solution was sprayed at the control site Neot Hakikar.", "The field trial was conducted over a period of 47 days, from 1 November to 17 December, 2009. 
During this period, at each of the three study sites, adult mosquitoes were sampled at two-day intervals (a total of 24 times) using six CDC UV traps (Model 1212, John W. Hock, Gainesville, FL) without attractants in fixed positions surrounding the available fresh-water springs. ATSB bait solutions were sprayed on day 12 of the field experiment. Collected mosquitoes were sexed, identified to species, and the physiological age of female mosquitoes was determined by dissecting ovaries and counting the number of dilatations [30].", "To evaluate impacts of ATSB on mosquito populations, captures of An. sergentii were examined at four intervals (1-12, 13-24, 25-36, and 37-47 days). A logistic regression was used to examine the proportion of females with three or more gonotrophic cycles in each oasis over time. Contrasts were used to test for significant changes from the pre-treatment period in each oasis. Separate Poisson regressions were used to analyse the number of male and female An. sergentii caught in the light traps over time in the three oases. Contrasts were used to compare the control oasis with the poor and rich oases at each time.", "Vectorial capacity (VC), defined as the average number of infectious bites the mosquito could potentially deliver over her lifetime, was used to estimate the impact of ATSB on the potential for malaria parasite transmission:\n\nVC = m · p^EIP / (−T² · log(p))\n\nwhere m was the number of mosquitoes per person and T was the estimated duration of the gonotrophic cycle [23]. EIP was the extrinsic incubation period of malaria parasites in mosquitoes, assumed to be 10 days [31]. p was the daily survival rate estimated from the parous rate r:\n\np = r^(1/T)\n\nFollowing Dye [32], VC was compared before and after the intervention. Therefore, only m and p were separately estimated for the two periods. 
m was estimated as the average number of female mosquitoes caught per trap night.", "At both the sugar-poor and sugar-rich oases, a single application of ATSB on day 12 reduced densities of female An. sergentii by over 95% and practically eliminated male An. sergentii (Figure 1). Densities of female and male An. sergentii in the sugar-poor oasis were immediately reduced by ATSB treatment compared with the more gradual decreases observed in the sugar-rich oasis.\nAverages (± 1 standard error) of light trap captures of female and male Anopheles sergentii in three oases (sugar-rich, sugar-poor, and control) from 1 November to 17 December, 2009, in Israel (vertical dotted lines in panels indicate the date of implementation of ATSB).\nDensities of female An. sergentii in the sugar-poor and sugar-rich sites from the pre-treatment period (days 1-12) to the post-treatment period (days 13-47) decreased over 75-fold and 20-fold, respectively, compared to less than a two-fold natural decrease at the control site that did not receive ATSB treatment. At the control site, densities of female An. sergentii averaged 119.42 ± 9.98 before day 12 and 83.52 ± 5.30 from days 13-47. At the sugar-poor oasis, densities of female An. sergentii averaged 103.81 ± 10.20 before ATSB treatment and 9.97 ± 4.02 post-treatment. At the sugar-rich oasis, densities of female An. sergentii averaged 217.19 ± 11.54 before ATSB treatment and 63.07 ± 13.63 post-treatment. For all but two post-treatment comparisons, differences between the control oasis and either the sugar-rich or the sugar-poor oasis were significant for females at p < 0.001: the control was significantly higher than the sugar-poor oasis and lower than the sugar-rich oasis.\nSimilarly, for male An. sergentii, densities decreased about 15-fold and four-fold from the pre-treatment to the post-treatment period in the sugar-poor and sugar-rich sites, respectively, compared to less than a 1.5-fold decrease at the control site. At the control site, densities of male An. 
sergentii averaged 56.22 ± 4.77 before day 12 and 40.18 ± 3.59 from days 13-47. At the sugar-poor oasis, densities of male An. sergentii averaged 27.36 ± 2.37 before ATSB treatment and 1.75 ± 1.05 post-treatment. At the sugar-rich oasis, densities of male An. sergentii averaged 47.42 ± 4.88 before ATSB treatment and 10.64 ± 3.10 post-treatment. After treatment, males were significantly lower in both poor and rich oases compared with the control oasis at p < 0.001.\nTable 1 shows how ATSB treatment in the sugar-poor and sugar-rich oases affected the proportion of females classified by number of gonotrophic cycles (0, 1, 2, 3, and > 3) across the pre-treatment period (days 1-12) and the three post-treatment periods. ATSB treatment reduced the proportion of older, more epidemiologically dangerous mosquitoes (three or more gonotrophic cycles) by 100% and 94.9%, respectively, in the sugar-poor and sugar-rich oasis. In the control group the proportion of females with three or more gonotrophic cycles increased slightly but not significantly over time. At the sugar-poor site, the proportion of females with three or more gonotrophic cycles was significantly reduced compared to pre-treatment levels at 13-24 days (p = 0.011), at 25-36 days (p = 0.014), and at 37-47 days (p < 0.001). At the sugar-rich site, the proportion of females with three or more gonotrophic cycles was significantly reduced in the first post-treatment period (p = 0.001) and at the subsequent measurement times (p < 0.001 for both times).\nAge structure, population parameters, and vectorial capacity (VC) of female Anopheles sergentii before and after ATSB treatment on day 12\nTable 1 also shows how ATSB treatment markedly reduced female An. sergentii densities, parous rates, survival rates and vectorial capacity. At the control site, in contrast, while female An. sergentii densities decreased less than two-fold as indicated above, parous rates, survival rates, and vectorial capacity remained fairly constant throughout the monitoring period. From the pre-treatment period (days 1-12) to the last period of post-treatment monitoring (days 37-47), the parous rates decreased from 0.59 to 0.12 at the sugar-poor site and decreased from 0.73 to 0.26 at the sugar-rich site. During the same periods, the survival rates decreased from 0.77 to 0.35 at the sugar-poor site and decreased from 0.85 to 0.51 at the sugar-rich site. Malaria vectorial capacity was reduced from a pre-treatment level of 11.2 to 0.0 (last two monitoring periods, days 25-36 and days 37-47) at the sugar-poor oasis and from a pre-treatment level of 79.0 to 0.03 (last monitoring period, days 37-47) at the sugar-rich oasis. Reduction in VC to negligible levels was observed within days after ATSB application in the sugar-poor oasis but not until after 2 weeks in the sugar-rich oasis.", "This field trial shows that a single application of ATSB solution by plant spraying at the two treatment oases markedly reduced both the relative abundance and the longevity of An. sergentii populations. Densities of adult females and males, and the proportion of \"older\" more dangerous females were reduced by 95% or more. Not unexpectedly, the impact of the ATSB treatment is comparable to that demonstrated in previous field trials [1-7].\nThe comparison of ATSB spraying of non-flowering vegetation in the sugar-rich and sugar-poor oases allowed experimental testing of the hypothesis that natural sugar resources compete with the ATSB. As expected, ATSB application in the sugar-poor oasis reduced densities of female An. sergentii by 95% within 2 weeks. In contrast, it took 4 weeks in the sugar-rich oasis for ATSB application to reduce densities of female An. sergentii by 95%. 
The additional 2 weeks needed to reach a 95% population reduction in the sugar-rich oasis likely reflects competition from attractive natural sugar sources, which reduced the frequency of mosquito exposure to ATSB.\nThe finding that, regardless of the available natural sugar resources, ATSB use can substantially reduce mosquito densities in arid environments is likely due to high frequencies of mosquito sugar-feeding [33,34]. Most female and male mosquitoes likely encounter sprayed ATSB solutions and feed at least once during their lifespan (Figure 1). When ATSB solutions are sprayed on non-flowering vegetation as a strategy to reduce overall impact on non-target insects [7], the sprayed areas largely represent favourable outdoor mosquito resting microenvironments and not sugar-feeding centres containing attractive flowering plants. The probability that mosquitoes encounter and feed on sprayed ATSB solution at their outdoor resting microhabitats is high because these are specific locations where mosquitoes spend most of their time.\nThis study demonstrates for the first time under experimental field conditions how a single application of ATSB can reduce malaria VC from relatively high to negligible levels. Based on ATSB field trials to date [1-7], it is likely that this new approach can also be used in different malaria endemic environments to impact entomological inoculation rates (EIRs) and epidemiological parameters of malaria in humans. 
Challenges remain in three areas: 1) product development, to standardize attractive baits; 2) deployment methods, to determine the seasonal timing and coverage needed to maximize efficacy while minimizing potential costs and any potential harm to non-target invertebrates; and 3) controlled field trials, to determine how ATSB strategies can be used in combination with existing vector control methods to additionally impact EIRs, especially in eco-epidemiological situations where the continuing problems of malaria cannot be solved using current vector control methods.", "This study provides further evidence that ATSB methods can effectively target and kill sugar-feeding anopheline mosquitoes, and shows how available natural sugar resources used by mosquitoes in arid environments compete with applied ATSB solutions. While abundant sugar resources in the sugar-rich oasis delayed full impacts of ATSB by about 2 weeks, mosquito population reductions of over 95% were nonetheless achieved by a single ATSB application. This study also shows for the first time how ATSB can reduce malaria VC from relatively high to negligible levels, with only minimal differences due to sugar-poor and sugar-rich environmental conditions. Overall, this demonstration of how even single applications of ATSB solutions can operationally decimate populations of anopheline mosquitoes and drive their potential for malaria transmission to near zero levels highlights the importance of ATSB as a promising new tool for outdoor vector control." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study sites", "Preparation of ATSB solutions", "Field application of ATSB solutions", "Study design and methods for the ATSB field trial", "Statistical analysis", "Estimation of vectorial capacity", "Results", "Discussion", "Conclusions" ]
[ "Attractive toxic sugar bait (ATSB) methods are a new form of vector control that kill female and male mosquitoes questing for essential sugar sources in the outdoor environment [1-7]. ATSB solutions consist of fruit or flower scent as an attractant, sugar solution as a feeding stimulant, and oral toxin to kill the mosquitoes. ATSB solutions that are sprayed on small spots of vegetation or suspended in simple removable bait stations attract mosquitoes from a large area and the mosquitoes ingesting the toxic solutions are killed. The ATSB methods developed and field-tested in Israel demonstrate how they literally decimate local populations of different anopheline and culicine mosquito species [1-5]. Similar successful ATSB field trials have also controlled Culex quinquefasciatus from storm drains in Florida, USA [6] and Anopheles gambiae s.l. malaria vectors in Mali, West Africa [7]. The new ATSB methods are highly effective, technologically simple, low-cost, and circumvent traditional problems associated with the indiscriminate effects of contact insecticides [8] by narrowing the specificity of attraction to sugar-seeking mosquitoes and by using environmentally safe oral toxins such as boric acid, that is considered to be only slightly more toxic to humans and other vertebrates than table salt [9].\nATSB methods work by competing with available natural plant sugar sources, which are an essential source of energy for females and the only food source for male mosquitoes [10,11]. Mosquitoes are highly selective in their attraction to locally available flowering plants and other sources of sugar including fruits, seedpods, and honeydew [12-14] and the availability of favourable natural sugar sources strongly affects mosquito survival [13]. All of the above-noted ATSB field trials used juices made from local natural fruits to successfully divert sugar-seeking mosquitoes from their natural sources of plant sugars. 
Between 50 and 90% of the local female and male mosquitoes feed on ATSB solutions within the first few days after applications, as inferred from data at control sites where the same attractive bait solutions are applied without toxin but containing coloured food dye markers, which are readily apparent in sugar-fed mosquitoes [1].\nThe present study focuses on Anopheles sergentii, the most common and abundant Anopheles species in Israel and the main vector of malaria in the Afro-Arabian zone [15,16]. This mosquito species was the main vector responsible for malaria outbreaks [17-19] before the elimination of malaria parasite transmission from Israel in the 1960s [20-22].\nThe objective of this study was to determine the relationship between the efficacy of ATSB control and the availability of natural plant sugar sources. As demonstrated in a recent comparative study of An. sergentii in sugar-rich and sugar-poor oases in Israel, the availability of natural plant sugar sources affects mosquito fitness, population dynamics, and malaria vector capacity [23]. Accordingly, a single application of ATSB was made in the same two relatively small, isolated, and uninhabited sugar-rich and sugar-poor oases. Another larger oasis with high densities of An. sergentii was not sprayed and served as a control site for the ATSB field trial.", " Study sites The study was conducted at three oases located within the depression of the African-Syrian Rift Valley, in the northern part of the Arava Valley, about 25 km south of the Dead Sea. The shoreline of the Dead Sea is about 400 m below sea level while the central Arava Valley rises to about 200 m above sea level before descending again towards the Red Sea. The region belongs to the Sahara-Arabian phyto-geographical zone [24]. The area is an extreme desert with occasional natural oases centred on springs and artificial agricultural oases created by irrigation; the conditions in these sites are tropical [25]. 
The climate is arid with an average humidity of 57% and annual winter rains averaging 50-100 mm. Average temperatures range from 20°C from the end of September to early April up to 30°C from May to August [26,27]. The area is known for its rich mosquito fauna dominated by An. sergentii, Ochlerotatus caspius and Culex pipiens [28].\nField experiments were conducted at three oases. Two were small, unnamed and uninhabited oases, 5 km apart in the Arava Valley. As recently described [23], the environments of the two oases are very similar except for the availability of sugar sources. In one of the oases (termed \"sugar-rich oasis\"), there were two flowering Acacia raddiana trees that were the preferable source of sugar for the mosquitoes [14]. In contrast, there were no flowering plant blossoms in the other oasis (termed \"sugar-poor oasis\"). Both sites covered areas of about 5 ha, included small fresh-water springs surrounded by dense non-flowering vegetation which was largely grazed out by camels and donkeys with no visible plant sugar sources during the period of field experiments [23,24]. Neot Hakikar oasis served as an untreated control site. It is located about 20 km north of the small oases and is the largest natural oasis in southern Israel and the Dead Sea region. In the eastern, more agricultural part of the oasis a small settlement is located with gardens, vast fields and greenhouses. The western, much more natural part is a nature reserve with a mixture of salt marshes, wet and dry salinas (i.e., areas high in salt content with specialized plant communities) and fresh-water springs surrounded by riparian vegetation largely dominated by Phragmites australis L. Gramineae and Carex sp. L. Cyperaceae. This natural vegetation, crossed by a drainage canal, is partially overgrown by reeds and sedges. 
On the dry banks of the canal vegetation is dominated by groves and thickets of trees and bushes like Tamarix nilotica and Tamarix passerinoides (Tamaricaceae), Prosopis farcta (Mimosaceae), Nitraria retusa (Nitrariaceae) and chenopod bushes like Atriplex halimus, Atriplex leucoclada, Suaeda asphaltica, Suaeda fruticosa (Chenopodiaceae). At the time of the experiment some T. nilotica and P. farcta bushes were flowering.\n Preparation of ATSB solutions The ATSB bait solution used in the sugar-rich and sugar-poor oases consisted of ~75% juice of over-ripe to rotting prickly pear cactus (Opuntia ficus-indica, Cactaceae), 5% V/V wine, 20% W/V brown sugar, 1% (W/V) BaitStab™ concentrate (a product containing antifungal and antibacterial additives produced by Westham Innovations LTD, Tel Aviv, Israel) and boric acid 1% (W/V) [29]. The solution was ripened outdoors for 48 h in covered buckets before adding the BaitStab™ and the boric acid. 
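For scaling this mixture to a given batch size, the per-litre quantities follow directly from the stated percentages. A minimal sketch (the function name and the 16-l batch size, matching the sprayer tank used later, are illustrative only):

```python
def batch_ingredients(batch_litres):
    # Per-litre quantities from the recipe: ~75% fruit juice, 5% v/v wine,
    # 20% w/v brown sugar (200 g/l), 1% w/v BaitStab and 1% w/v boric acid (10 g/l each)
    return {
        "cactus_juice_l": 0.75 * batch_litres,
        "wine_l": 0.05 * batch_litres,
        "brown_sugar_kg": 0.20 * batch_litres,
        "baitstab_kg": 0.01 * batch_litres,
        "boric_acid_kg": 0.01 * batch_litres,
    }

# One 16-l sprayer tank:
mix = batch_ingredients(16)
```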
In this study, prickly pear cactus fruit (Opuntia ficus-indica) was used because it was locally abundant and known to be highly attractive for both sand flies [29] and mosquitoes (Schlein and Muller, unpublished).\n Field application of ATSB solutions The ATSB solution was sprayed with a 16-l back-pack sprayer (Killaspray, Model 4526, Hozelock, Birmingham UK) in aliquots of ~80 ml on 1 m2 spots at distances of ~3 m on the vegetation surrounding the fresh-water springs of the two isolated oases (sugar-rich and sugar-poor). Predominant types of non-flowering plants sprayed at the two sites were P. australis, Atriplex sp. and Suaeda sp. As a strategy to minimize potential harm to non-target insects, the flowering A. raddiana trees present in the sugar-rich oasis, the predominant natural sugar source for An. sergentii, were not sprayed. One sprayer completed the applications in less than 1 h per site. No bait solution was sprayed at the control site Neot Hakikar.\n Study design and methods for the ATSB field trial The field trial was conducted over a period of 47 days, from 1 November to 17 December, 2009. During this period, at each of the three study sites, adult mosquitoes were sampled at two-day intervals (a total of 24 times) using six CDC UV traps (Model 1212, John W. Hock, Gainesville, FL) without attractants in fixed positions surrounding the available fresh-water springs. ATSB bait solutions were sprayed on day 12 of the field experiment. Collected mosquitoes were sexed, identified to species, and the physiological age of female mosquitoes was determined by dissecting ovaries and counting the number of dilatations [30].\n Statistical analysis To evaluate impacts of ATSB on mosquito populations, captures of An. sergentii were examined at four intervals (1-12, 13-24, 25-36, and 37-47 days). A logistic regression was used to examine the proportion of females with three or more gonotrophic cycles in each oasis over time. Contrasts were used to test for significant changes from the pre-treatment period in each oasis. Separate Poisson regressions were used to analyse the number of male and female An. sergentii caught in the light traps over time in the three oases. Contrasts were used to compare the control oasis with the poor and rich oases at each time.\n Estimation of vectorial capacity Vectorial capacity (VC), defined as the average number of infectious bites the mosquito could potentially deliver over her lifetime, was used to estimate the impact of ATSB on the potential for malaria parasite transmission:\n\nVC = m p^{EIP} / (-T^2 log(p))\n\nwhere m was the number of mosquitoes per person, T was the estimated duration of the gonotrophic cycle [23], EIP was the extrinsic incubation period of malaria parasites in mosquitoes, assumed to be 10 days [31], and p was the daily survival rate estimated from the parous rate r:\n\np = r^{1/T}\n\nFollowing Dye [32], VC was compared before and after the intervention. Therefore, only m and p were separately estimated for the two periods. 
m was estimated as the average number of female mosquitoes caught per trap night.", "The study was conducted at three oases located within the depression of the African-Syrian Rift Valley, in the northern part of the Arava Valley, about 25 km south of the Dead Sea. The shoreline of the Dead Sea is about 400 m below sea level while the central Arava Valley rises to about 200 m above sea level before descending again towards the Red Sea. The region belongs to the Sahara-Arabian phyto-geographical zone [24]. The area is an extreme desert with occasional natural oases centred on springs and artificial agricultural oases created by irrigation; the conditions in these sites are tropical [25]. The climate is arid with an average humidity of 57% and annual winter rains averaging 50-100 mm. Average temperatures range from 20°C from the end of September to early April up to 30°C from May to August [26,27]. The area is known for its rich mosquito fauna dominated by An. sergentii, Ochlerotatus caspius and Culex pipiens [28].\nField experiments were conducted at three oases. 
Two were small, unnamed and uninhabited oases, 5 km apart in the Arava Valley. As recently described [23], the environments of the two oases are very similar except for the availability of sugar sources. In one of the oases (termed \"sugar-rich oasis\"), there were two flowering Acacia raddiana trees that were the preferable source of sugar for the mosquitoes [14]. In contrast, there were no flowering plant blossoms in the other oasis (termed \"sugar-poor oasis\"). Both sites covered areas of about 5 ha, included small fresh-water springs surrounded by dense non-flowering vegetation which was largely grazed out by camels and donkeys with no visible plant sugar sources during the period of field experiments [23,24]. Neot Hakikar oasis served as an untreated control site. It is located about 20 km north of the small oases and is the largest natural oasis in southern Israel and the Dead Sea region. In the eastern, more agricultural part of the oasis a small settlement is located with gardens, vast fields and greenhouses. The western, much more natural part is a nature reserve with a mixture of salt marshes, wet and dry salinas (i.e., areas high in salt content with specialized plant communities) and fresh-water springs surrounded by riparian vegetation largely dominated by Phragmites australis L. Gramineae and Carex sp. L. Cyperaceae. This natural vegetation, crossed by a drainage canal, is partially overgrown by reeds and sedges. On the dry banks of the canal vegetation is dominated by groves and thickets of trees and bushes like Tamarix nilotica and Tamarix passerinoides (Tamaricaceae), Prosopis farcta (Mimosaceae), Nitraria retusa (Nitrariaceae) and chenopod bushes like Atriplex halimus, Atriplex leucoclada, Suaeda asphaltica, Suaeda fruticosa (Chenopodiaceae). At the time of the experiment some T. nilotica and P. 
farcta bushes were flowering.", "The ATSB bait solution used in the sugar-rich and sugar-poor oases consisted of ~75% juice of over-ripe to rotting prickly pear cactus (Opuntia ficus-indica, Cactaceae), 5% V/V wine, 20% W/V brown sugar, 1% (W/V) BaitStab™ concentrate (a product containing antifungal and antibacterial additives produced by Westham Innovations LTD, Tel Aviv, Israel) and boric acid 1% (W/V) [29]. The solution was ripened outdoors for 48 h in covered buckets before adding the BaitStab™ and the boric acid. In this study, prickly pear cactus fruit (Opuntia ficus-indica) was used because it was locally abundant and known to be highly attractive for both sand flies [29] and mosquitoes (Schlein and Muller, unpublished).", "The ATSB solution was sprayed with a 16-l back-pack sprayer (Killaspray, Model 4526, Hozelock, Birmingham UK) in aliquots of ~80 ml on 1 m2 spots at distances of ~3 m on the vegetation surrounding the fresh-water springs of the two isolated oases (sugar-rich and sugar-poor). Predominant types of non-flowering plants sprayed at the two sites were P. australis, Atriplex sp. and Suaeda sp. As a strategy to minimize potential harm to non-target insects, the flowering A. raddiana trees present in the sugar-rich oasis, the predominant natural sugar source for An. sergentii, were not sprayed. One sprayer completed the applications in less than 1 h per site. No bait solution was sprayed at the control site Neot Hakikar.", "The field trial was conducted over a period of 47 days, from 1 November to 17 December, 2009. During this period, at each of the three study sites, adult mosquitoes were sampled at two-day intervals (a total of 24 times) using six CDC UV traps (Model 1212, John W. Hock, Gainesville, FL) without attractants in fixed positions surrounding the available fresh-water springs. ATSB bait solutions were sprayed on day 12 of the field experiment. 
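The trapping scheme (six traps per site, 24 two-night sampling rounds) yields a series of per-trap-night counts from which the density index m is taken as a simple mean. A minimal sketch of that summary; the function names are illustrative and the example counts are made-up placeholders, not trial data:

```python
from statistics import mean

def density_index(counts_per_trap_night):
    # m: mean females caught per trap-night over a monitoring period
    return mean(counts_per_trap_night)

def fold_change(pre_counts, post_counts):
    # how many times denser the population was before treatment than after
    return density_index(pre_counts) / density_index(post_counts)

# Placeholder example: 6 traps x 2 sampling nights, pre- vs post-treatment
pre = [110, 95, 120, 100, 105, 98, 112, 99, 103, 108, 96, 101]
post = [12, 9, 11, 8, 10, 9, 12, 10, 9, 11, 10, 9]
```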
Collected mosquitoes were sexed, identified to species, and the physiological age of female mosquitoes was determined by dissecting ovaries and counting the number of dilatations [30].", "To evaluate impacts of ATSB on mosquito populations, captures of An. sergentii were examined at four intervals (1-12, 13-24, 25-36, and 37-47 days). A logistic regression was used to examine the proportion of females with three or more gonotrophic cycles in each oasis over time. Contrasts were used to test for significant changes from the pre-treatment period in each oasis. Separate Poisson regressions were used to analyse the number of male and female An. sergentii caught in the light traps over time in the three oases. Contrasts were used to compare the control oasis with the poor and rich oases at each time.", "Vectorial capacity (VC), defined as the average number of infectious bites the mosquito could potentially deliver over her lifetime, was used to estimate the impact of ATSB on the potential for malaria parasite transmission:\n\nVC = m p^{EIP} / (-T^2 log(p))\n\nwhere m was the number of mosquitoes per person, T was the estimated duration of the gonotrophic cycle [23], EIP was the extrinsic incubation period of malaria parasites in mosquitoes, assumed to be 10 days [31], and p was the daily survival rate estimated from the parous rate r:\n\np = r^{1/T}\n\nFollowing Dye [32], VC was compared before and after the intervention. Therefore, only m and p were separately estimated for the two periods. m was estimated as the average number of female mosquitoes caught per trap night.", "At both the sugar-poor and sugar-rich oases, a single application of ATSB on day 12 reduced densities of female An. sergentii by over 95% and practically eliminated male An. sergentii (Figure 1). Densities of female and male An. 
sergentii in the sugar-poor oasis were immediately reduced by ATSB treatment compared with the more gradual decreases observed in the sugar-rich oasis.\nAverages (± 1 standard error) of light trap captures of female and male Anopheles sergentii in three oases (sugar-rich, sugar-poor, and control) from 1 November to 17 December, 2009, in Israel (vertical dotted lines in panels indicate the date of implementation of ATSB).\nDensities of female An. sergentii in the sugar-poor and sugar-rich sites from the pre-treatment period (days 1-12) to the post-treatment period (days 13-47) decreased over 75-fold and 20-fold, respectively, compared to less than a two-fold natural decrease at the control site that did not receive ATSB treatment. At the control site, densities of female An. sergentii averaged 119.42 ± 9.98 before day 12 and 83.52 ± 5.30 from days 13-47. At the sugar-poor oasis, densities of female An. sergentii averaged 103.81 ± 10.20 before ATSB treatment and 9.97 ± 4.02 post-treatment. At the sugar-rich oasis, densities of female An. sergentii averaged 217.19 ± 11.54 before ATSB treatment and 63.07 ± 13.63 post-treatment. For all but two comparisons of the control oasis with either rich or poor oases, the differences were significant at p < 0.001 for females after the treatment was applied: control was significantly higher than poor and lower than rich.\nSimilarly, for male An. sergentii, densities decreased about 15-fold and four-fold from the pre-treatment to the post-treatment period in the sugar-poor and sugar-rich sites, respectively, compared to only a one-fold decrease at the control site. At the control site, densities of male An. sergentii averaged 56.22 ± 4.77 before day 12 and 40.18 ± 3.59 from days 13-47. At the sugar-poor oasis, densities of male An. sergentii averaged 27.36 ± 2.37 before ATSB treatment and 1.75 ± 1.05 post-treatment. At the sugar-rich oasis, densities of male An.
sergentii averaged 47.42 ± 4.88 before ATSB treatment and 10.64 ± 3.10 post-treatment. After treatment, males were significantly lower in both poor and rich oases compared with the control oasis at p < 0.001.\nTable 1 shows, according to pre-treatment days 1-12 and the three post-treatment periods, how ATSB treatment in the sugar-poor and sugar-rich oases affected the proportion of females classified according to gonotrophic cycles (0, 1, 2, 3, and > 3). ATSB treatment reduced the proportion of older, more epidemiologically dangerous mosquitoes (three or more gonotrophic cycles) by 100% and 94.9%, respectively, in the sugar-poor and sugar-rich oasis. In the control group the proportion of females with three or more gonotrophic cycles increased slightly but not significantly over time. At the sugar-poor site, the proportion of females with three or more gonotrophic cycles was significantly reduced compared to pre-treatment levels at 13-24 days (p = 0.011), at 25-36 days (p = 0.014), and at 37-47 days (p < 0.001). At the sugar-rich site, the number of females with three or more gonotrophic cycles was significantly reduced in the first week post-treatment (p = 0.001) and at the subsequent measurement times (p < 0.001 for both times).\nAge structure, population parameters, and vectorial capacity (VC) of female Anopheles sergentii before and after ATSB treatment on day 12\nTable 1 also shows how ATSB treatment markedly reduced female An. sergentii densities, parous rates, survival rates and vectorial capacity. At the control site, by contrast, female An. sergentii densities decreased less than two-fold as indicated above, while parous rates, survival rates, and vectorial capacity remained fairly constant throughout the monitoring period. From the pre-treatment period (days 1-12) to the last period of post-treatment monitoring (days 37-47), the parous rates decreased from 0.59 to 0.12 at the sugar-poor site and decreased from 0.73 to 0.26 at the sugar-rich site.
During the same periods, the survival rates decreased from 0.77 to 0.35 at the sugar-poor site and decreased from 0.85 to 0.51 at the sugar-rich site. Malaria vectorial capacity was reduced from a pre-treatment level of 11.2 to 0.0 (last two monitoring periods, days 25-36 and days 37-47) at the sugar-poor oasis and from a pre-treatment level of 79.0 to 0.03 (last monitoring period, days 37-47) at the sugar-rich oasis. Reduction in VC to negligible levels was observed within days after ATSB application in the sugar-poor oasis but not until after 2 weeks in the sugar-rich oasis.", "This field trial shows that a single application of ATSB solution by plant spraying at the two treated oases markedly reduced the relative abundance of An. sergentii populations and their longevity. Densities of adult females and males, and the proportion of "older", more dangerous females were reduced by 95% or more. Not unexpectedly, the impact of the ATSB treatment is comparable to that demonstrated in previous field trials [1-7].\nThe comparison of ATSB spraying of non-flowering vegetation in the sugar-rich and sugar-poor oases allowed experimental testing of the hypothesis that natural sugar resources compete with the ATSB. As expected, ATSB application in the sugar-poor oasis reduced densities of female An. sergentii by 95% within 2 weeks. In contrast, it took 4 weeks in the sugar-rich oasis for ATSB application to reduce densities of female An. sergentii by 95%. The additional 2 weeks needed to reach a 95% population reduction in the sugar-rich oasis, likely due to a reduced frequency of mosquito exposure to ATSB, reflected competition from attractive natural sugar sources.\nThe finding that, regardless of the available natural sugar resources, ATSB use can substantially reduce mosquito densities in arid environments is likely due to high frequencies of mosquito sugar-feeding [33,34].
Most female and male mosquitoes likely encounter sprayed ATSB solutions and feed at least once during their lifespan (Figure 1). When ATSB solutions are sprayed on non-flowering vegetation as a strategy to reduce overall impact on non-target insects [7], the sprayed areas largely represent favourable outdoor mosquito resting microenvironments and not sugar-feeding centres containing attractive flowering plants. The probability that mosquitoes encounter and feed on sprayed ATSB solution at their outdoor resting microhabitats is high because these are specific locations where mosquitoes spend most of their time.\nThis study demonstrates for the first time under experimental field conditions how a single application of ATSB can reduce malaria VC from relatively high to negligible levels. Based on ATSB field trials to date [1-7], it is likely that this new approach can also be used in different malaria endemic environments to impact entomological inoculation rates (EIRs) and epidemiological parameters of malaria in humans. Challenges remain in three areas: 1) product development, to standardize attractive baits; 2) deployment methods, to determine the seasonal timing and coverage needed to maximize efficacy while minimizing potential costs and any potential harm to non-target invertebrates; and 3) controlled field trials, to determine how ATSB strategies can be used in combination with existing vector control methods to additionally impact EIRs, especially in eco-epidemiological situations where the continuing problems of malaria cannot be solved using current vector control methods.", "This study provides further evidence that ATSB methods can effectively target and kill sugar-feeding anopheline mosquitoes, and shows how available natural sugar resources used by mosquitoes in arid environments compete with applied ATSB solutions.
While abundant sugar resources in the sugar-rich oasis delayed full impacts of ATSB by about 2 weeks, mosquito population reductions of over 95% were nonetheless achieved by a single ATSB application. In addition, this study shows for the first time how ATSB can reduce malaria VC from relatively high to negligible levels, with only minimal differences due to sugar-poor and sugar-rich environmental conditions. Overall, this demonstration of how even single applications of ATSB solutions can operationally decimate populations of anopheline mosquitoes and drive their potential for malaria transmission to near zero levels highlights the importance of ATSB as a promising new tool for outdoor vector control." ]
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[ "Sugar feeding", "Vectorial capacity", "Malaria", "Attractive toxic sugar baits (ATSB)", "Outdoor mosquito control", "Anopheles sergentii" ]
Background: Attractive toxic sugar bait (ATSB) methods are a new form of vector control that kill female and male mosquitoes questing for essential sugar sources in the outdoor environment [1-7]. ATSB solutions consist of fruit or flower scent as an attractant, sugar solution as a feeding stimulant, and oral toxin to kill the mosquitoes. ATSB solutions that are sprayed on small spots of vegetation or suspended in simple removable bait stations attract mosquitoes from a large area and the mosquitoes ingesting the toxic solutions are killed. The ATSB methods developed and field-tested in Israel demonstrate how they literally decimate local populations of different anopheline and culicine mosquito species [1-5]. Similar successful ATSB field trials have also controlled Culex quinquefasciatus from storm drains in Florida, USA [6] and Anopheles gambiae s.l. malaria vectors in Mali, West Africa [7]. The new ATSB methods are highly effective, technologically simple, low-cost, and circumvent traditional problems associated with the indiscriminate effects of contact insecticides [8] by narrowing the specificity of attraction to sugar-seeking mosquitoes and by using environmentally safe oral toxins such as boric acid, that is considered to be only slightly more toxic to humans and other vertebrates than table salt [9]. ATSB methods work by competing with available natural plant sugar sources, which are an essential source of energy for females and the only food source for male mosquitoes [10,11]. Mosquitoes are highly selective in their attraction to locally available flowering plants and other sources of sugar including fruits, seedpods, and honeydew [12-14] and the availability of favourable natural sugar sources strongly affects mosquito survival [13]. All of the above-noted ATSB field trials used juices made from local natural fruits to successfully divert sugar-seeking mosquitoes from their natural sources of plant sugars. 
Between 50 and 90% of the local female and male mosquitoes feed on ATSB solutions within the first few days after applications, as inferred from data at control sites where the same attractive bait solutions are applied without toxin but containing coloured food dye markers, which are readily apparent in sugar-fed mosquitoes [1]. The present study is on Anopheles sergentii, the most common and abundant Anopheles species in Israel and the main vector of malaria in the Afro-Arabian zone [15,16]. This mosquito species was the main vector responsible for malaria outbreaks [17-19] before the elimination of malaria parasite transmission from Israel in the 1960s [20-22]. The objective of this study was to determine the relationship between the efficacy of ATSB control and the availability of natural plant sugar sources. As demonstrated in a recent comparative study of An. sergentii in sugar-rich and sugar-poor oases in Israel, the availability of natural plant sugar sources affects mosquito fitness, population dynamics, and malaria vector capacity [23]. Accordingly, a single application of ATSB was made in the same two relatively small, isolated, and uninhabited sugar-rich and sugar-poor oases. Another larger oasis with high densities of An. sergentii was not sprayed and served as a control site for the ATSB field trial. Methods: Study sites The study was conducted at three oases located within the depression of the African-Syrian Rift Valley, in the northern part of the Arava Valley, about 25 km south of the Dead Sea. The shoreline of the Dead Sea is about 400 m below sea level while the central Arava Valley rises to about 200 m above sea level before descending again towards the Red Sea. The region belongs to the Sahara-Arabian phyto-geographical zone [24]. The area is an extreme desert with occasional natural oases centred on springs and artificial agricultural oases created by irrigation; the conditions in these sites are tropical [25].
The climate is arid with an average humidity of 57% and annual winter rains averaging 50-100 mm. The average temperature is around 20°C from the end of September to early April and around 30°C from May to August [26,27]. The area is known for its rich mosquito fauna dominated by An. sergentii, Ochlerotatus caspius and Culex pipiens [28]. Field experiments were conducted at three oases. Two of the sites were small, unnamed and uninhabited oases, 5 km apart in the Arava Valley. As recently described [23], the environments of the two oases are very similar except for the availability of sugar sources. In one of the oases (termed "sugar-rich oasis"), there were two flowering Acacia raddiana trees that were the preferable source of sugar for the mosquitoes [14]. In contrast, there were no flowering plant blossoms in the other oasis (termed "sugar-poor oasis"). Both sites covered areas of about 5 ha and included small fresh-water springs surrounded by dense non-flowering vegetation which was largely grazed out by camels and donkeys, with no visible plant sugar sources during the period of field experiments [23,24]. Neot Hakikar oasis served as an untreated control site. It is located about 20 km north of the small oases and is the largest natural oasis in southern Israel and the Dead Sea region. In the eastern, more agricultural part of the oasis a small settlement is located with gardens, vast fields and greenhouses. The western, much more natural part is a nature reserve with a mixture of salt marshes, wet and dry Salinas (i.e. areas high in salt content with specialized plant communities) and fresh-water springs surrounded by riparian vegetation largely dominated by Phragmites australis L. Gramineae and Carex sp. L. Cyperaceae. This natural vegetation, crossed by a drainage canal, is partially overgrown by reeds and sedges.
On the dry banks of the canal vegetation is dominated by groves and thickets of trees and bushes like Tamarix nilotica and Tamarix passerinoides (Tamaricaceae), Prosopis farcta (Mimosaceae), Nitraria retusa (Nitrariaceae) and chenopod bushes like Atriplex halimus, Atriplex leucoclada, Suaeda asphaltica, Suaeda fruticosa (Chenopodiaceae). At the time of the experiment some T. nilotica and P. farcta bushes were flowering. Preparation of ATSB solutions The ATSB bait solution used in the sugar-rich and sugar-poor oases consisted of ~75% juice of over-ripe to rotting prickly pear cactus (Opuntia ficus-indica, Cactaceae), 5% V/V wine, 20% W/V brown sugar, 1% (W/V) BaitStab™ concentrate (a product containing antifungal and antibacterial additives produced by Westham Innovations LTD, Tel Aviv, Israel) and boric acid 1% (W/V) [29]. The solution was ripened outdoors for 48 h in covered buckets before adding the BaitStab™ and the boric acid.
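The bait recipe above can be scaled to any batch size; the sketch below is a hypothetical helper, not part of the study's protocol. It assumes the V/V percentages are taken against the total batch volume and that W/V means grams per 100 ml; the function and parameter names are illustrative:

```python
# Sketch: scale the reported ATSB recipe to an arbitrary batch size.
# Assumption (not from the source): V/V fractions apply to total batch
# volume, and W/V is interpreted as grams per 100 ml.

def atsb_batch(batch_litres):
    """Return approximate component amounts for one ATSB batch."""
    ml = batch_litres * 1000.0
    return {
        "cactus_juice_ml": 0.75 * ml,  # ~75% V/V over-ripe prickly pear juice
        "wine_ml": 0.05 * ml,          # 5% V/V wine
        "brown_sugar_g": 0.20 * ml,    # 20% W/V = 20 g per 100 ml
        "baitstab_g": 0.01 * ml,       # 1% W/V BaitStab concentrate
        "boric_acid_g": 0.01 * ml,     # 1% W/V boric acid (oral toxin)
    }

batch = atsb_batch(16)  # one 16-l back-pack sprayer load
```

For a full 16-l sprayer load this gives roughly 12 l of juice, 0.8 l of wine, 3.2 kg of brown sugar, and 160 g each of BaitStab and boric acid, before the 48-h outdoor ripening step described above.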
In this study, prickly pear cactus fruit (Opuntia ficus-indica) was used because it was locally abundant and known to be highly attractive for both sand flies [29] and mosquitoes (Schlein and Muller, unpublished). Field application of ATSB solutions The ATSB solution was sprayed with a 16-l back-pack sprayer (Killaspray, Model 4526, Hozelock, Birmingham UK) in aliquots of ~80 ml on 1 m² spots at distances of ~3 m on the vegetation surrounding the fresh-water springs of the two isolated oases (sugar-rich and sugar-poor). Predominant types of non-flowering plants sprayed at the two sites were P. australis, Atriplex sp. and Suaeda sp. As a strategy to minimize potential harm to non-target insects, the predominant natural sugar source for An. sergentii (the flowering A. raddiana trees present in the sugar-rich oasis) was not sprayed. One sprayer completed the applications in less than 1 h per site. No bait solution was sprayed at the control site Neot Hakikar. Study design and methods for the ATSB field trial The field trial was conducted over a period of 47 days, from 1 November to 17 December, 2009. During this period, at each of the three study sites, adult mosquitoes were sampled at two-day intervals (a total of 24 times) using six CDC UV traps (Model 1212, John W. Hock, Gainesville, FL) without attractants in fixed positions surrounding the available fresh-water springs. ATSB bait solutions were sprayed on day 12 of the field experiment. Collected mosquitoes were sexed, identified to species, and the physiological age of female mosquitoes was determined by dissecting ovaries and counting the number of dilatations [30]. Statistical analysis To evaluate impacts of ATSB on mosquito populations, captures of An. sergentii were examined at four intervals (1-12, 13-24, 25-36, and 37-47 days).
A logistic regression was used to examine the proportion of females with three or more gonotrophic cycles in each oasis over time. Contrasts were used to test for significant changes from the pre-treatment period in each oasis. Separate Poisson regressions were used to analyse the number of male and female An. sergentii caught in the light traps over time in the three oases. Contrasts were used to compare the control oasis with the poor and rich oases at each time. Estimation of vectorial capacity Vectorial capacity (VC), defined as the average number of infectious bites the mosquito could potentially deliver over her lifetime, was used to estimate the impact of ATSB on the potential for malaria parasite transmission: VC = (m p^EIP) / (-T² log p), where m was the number of mosquitoes per person and T was the estimated duration of the gonotrophic cycle [23]. EIP was the extrinsic incubation period of malaria parasites in mosquitoes, assumed to be 10 days [31]. p was the survival rate estimated from the parous rate r as p = r^(1/T). Following Dye [32], VC was compared before and after the intervention. Therefore, only m and p were separately estimated for the two periods. m was estimated as the average number of female mosquitoes caught per trap night. Results: At both the sugar-poor and sugar-rich oases, a single application of ATSB on day 12 reduced densities of female An. sergentii by over 95% and practically eliminated male An. sergentii (Figure 1). Densities of female and male An.
sergentii in the sugar-poor oasis were immediately reduced by ATSB treatment compared with the more gradual decreases observed in the sugar-rich oasis. Averages (± 1 standard error) of light trap captures of female and male Anopheles sergentii in three oases (sugar-rich, sugar-poor, and control) from 1 November to 17 December, 2009, in Israel (vertical dotted lines in panels indicate the date of implementation of ATSB). Densities of female An. sergentii in the sugar-poor and sugar-rich sites from the pre-treatment period (days 1-12) to the post-treatment period (days 13-47) decreased over 75-fold and 20-fold, respectively, compared to less than a two-fold natural decrease at the control site that did not receive ATSB treatment. At the control site, densities of female An. sergentii averaged 119.42 ± 9.98 before day 12 and 83.52 ± 5.30 from days 13-47. At the sugar-poor oasis, densities of female An. sergentii averaged 103.81 ± 10.20 before ATSB treatment and 9.97 ± 4.02 post-treatment. At the sugar-rich oasis, densities of female An. sergentii averaged 217.19 ± 11.54 before ATSB treatment and 63.07 ± 13.63 post-treatment. For all but two comparisons of the control oasis with either rich or poor oases, the differences were significant at p < 0.001 for females after the treatment was applied: control was significantly higher than poor and lower than rich. Similarly, for male An. sergentii, densities decreased about 15-fold and four-fold from the pre-treatment to the post-treatment period in the sugar-poor and sugar-rich sites, respectively, compared to only a one-fold decrease at the control site. At the control site, densities of male An. sergentii averaged 56.22 ± 4.77 before day 12 and 40.18 ± 3.59 from days 13-47. At the sugar-poor oasis, densities of male An. sergentii averaged 27.36 ± 2.37 before ATSB treatment and 1.75 ± 1.05 post-treatment. At the sugar-rich oasis, densities of male An.
sergentii averaged 47.42 ± 4.88 before ATSB treatment and 10.64 ± 3.10 post-treatment. After treatment, males were significantly lower in both the poor and rich oases compared with the control oasis at p < 0.001. Table 1 shows, for the pre-treatment period (days 1-12) and the three post-treatment periods, how ATSB treatment in the sugar-poor and sugar-rich oases affected the proportion of females classified by number of gonotrophic cycles (0, 1, 2, 3, and > 3). ATSB treatment reduced the proportion of older, more epidemiologically dangerous mosquitoes (three or more gonotrophic cycles) by 100% and 94.9%, respectively, in the sugar-poor and sugar-rich oasis. In the control group the proportion of females with three or more gonotrophic cycles increased slightly but not significantly over time. At the sugar-poor site, the proportion of females with three or more gonotrophic cycles was significantly reduced compared to pre-treatment levels at 13-24 days (p = 0.011), at 25-36 days (p = 0.014), and at 37-47 days (p < 0.001). At the sugar-rich site, the number of females with three or more gonotrophic cycles was significantly reduced in the first period post-treatment (p = 0.001) and at the subsequent measurement times (p < 0.001 for both times). Age structure, population parameters, and vectorial capacity (VC) of female Anopheles sergentii before and after ATSB treatment on day 12. Table 1 also shows how ATSB treatment markedly reduced female An. sergentii densities, parous rates, survival rates and vectorial capacity. At the control site, while female An. sergentii densities decreased less than two-fold as indicated above, parous rates, survival rates, and vectorial capacity remained fairly constant throughout the monitoring period. From the pre-treatment period (days 1-12) to the last period of post-treatment monitoring (days 37-47), the parous rates decreased from 0.59 to 0.12 at the sugar-poor site and decreased from 0.73 to 0.26 at the sugar-rich site.
During the same periods, the survival rates decreased from 0.77 to 0.35 at the sugar-poor site and from 0.85 to 0.51 at the sugar-rich site. Malaria vectorial capacity was reduced from a pre-treatment level of 11.2 to 0.0 (last two monitoring periods, days 25-36 and days 37-47) at the sugar-poor oasis and from a pre-treatment level of 79.0 to 0.03 (last monitoring period, days 37-47) at the sugar-rich oasis. Reduction in VC to negligible levels was observed within days after ATSB application in the sugar-poor oasis, but not until after 2 weeks in the sugar-rich oasis. Discussion: This field trial shows that a single application of ATSB solution by plant spraying at the two oasis treatment sites markedly reduced the relative abundance of An. sergentii populations and their longevity. Densities of adult females and males, and the proportion of "older", more dangerous females, were reduced by 95% or more. Not unexpectedly, the impact of the ATSB treatment is comparable to that demonstrated in previous field trials [1-7]. The comparison of ATSB spraying of non-flowering vegetation in the sugar-rich and sugar-poor oases allowed experimental testing of the hypothesis that natural sugar resources compete with the ATSB. As expected, ATSB application in the sugar-poor oasis reduced densities of female An. sergentii by 95% within 2 weeks. In contrast, it took 4 weeks in the sugar-rich oasis for ATSB application to reduce densities of female An. sergentii by 95%. The additional 2 weeks to 95% population reduction in the sugar-rich oasis, likely due to a reduced frequency of mosquito exposure to ATSB, reflected competition from attractive natural sugar sources. The finding that, regardless of the available natural sugar resources, ATSB use can substantially reduce mosquito densities in arid environments is likely due to high frequencies of mosquito sugar-feeding [33,34].
Most female and male mosquitoes likely encounter sprayed ATSB solutions and feed at least once during their lifespan (Figure 1). When ATSB solutions are sprayed on non-flowering vegetation as a strategy to reduce overall impact on non-target insects [7], the sprayed areas largely represent favourable outdoor mosquito resting microenvironments rather than sugar-feeding centres containing attractive flowering plants. The probability that mosquitoes encounter and feed on sprayed ATSB solution at their outdoor resting microhabitats is high because these are the specific locations where mosquitoes spend most of their time. This study demonstrates for the first time under experimental field conditions how a single application of ATSB can reduce malaria VC from relatively high to negligible levels. Based on ATSB field trials to date [1-7], it is likely that this new approach can also be used in different malaria-endemic environments to impact entomological inoculation rates (EIRs) and epidemiological parameters of malaria in humans. Challenges remain in the areas of: 1) product development, to standardize attractive baits; 2) deployment methods, to determine the seasonal timing and coverage needed to maximize efficacy while minimizing potential costs and any potential harm to non-target invertebrates; and 3) controlled field trials, to determine how ATSB strategies can be used in combination with existing vector control methods to further impact EIRs, especially in eco-epidemiological situations where the continuing problems of malaria cannot be solved using current vector control methods. Conclusions: This study provides further evidence that ATSB methods can effectively target and kill sugar-feeding anopheline mosquitoes, and shows how available natural sugar resources used by mosquitoes in arid environments compete with applied ATSB solutions.
While abundant sugar resources in the sugar-rich oasis delayed the full impact of ATSB by about 2 weeks, mosquito population reductions of over 95% were nonetheless achieved by a single ATSB application. This study also shows for the first time how ATSB can reduce malaria VC from relatively high to negligible levels, with only minimal differences between sugar-poor and sugar-rich environmental conditions. Overall, this demonstration that even single applications of ATSB solutions can operationally decimate populations of anopheline mosquitoes and drive their potential for malaria transmission to near-zero levels highlights the importance of ATSB as a promising new tool for outdoor vector control.
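The vectorial capacity estimation described in the Methods (daily survival p = r^(1/T) from the parous rate r, and VC = m p^EIP / (-T^2 log(p)), the Garrett-Jones form with biting rate a = 1/T) can be sketched in Python. The parous rates, trap-night means, and the 10-day EIP below are the study's reported values for the sugar-poor oasis; the 2-day gonotrophic cycle T is an assumption chosen because 0.59^(1/2) ≈ 0.77 matches the reported survival rates, so the sketch illustrates the calculation rather than reproducing the published VC figures exactly:

```python
import math

EIP = 10   # extrinsic incubation period (days), as assumed in the study
T = 2      # assumed gonotrophic cycle length (days): 0.59**(1/2) ~ 0.77 fits the reported rates

def daily_survival(parous_rate, cycle_days=T):
    """Daily survival p estimated from the parous rate r via p = r**(1/T)."""
    return parous_rate ** (1.0 / cycle_days)

def vectorial_capacity(m, p, eip=EIP, cycle_days=T):
    """Garrett-Jones VC with human biting rate a = 1/T: VC = m * p**EIP / (-T**2 * ln(p))."""
    return m * p ** eip / (-(cycle_days ** 2) * math.log(p))

# Sugar-poor oasis, reported values: parous rate 0.59 -> 0.12,
# mean females per trap night 103.81 -> 9.97 after ATSB treatment
p_pre = daily_survival(0.59)    # ~0.77, matching the reported pre-treatment survival rate
p_post = daily_survival(0.12)   # ~0.35, matching the reported post-treatment survival rate

vc_pre = vectorial_capacity(103.81, p_pre)
vc_post = vectorial_capacity(9.97, p_post)  # collapses to effectively zero after ATSB

print(f"p: {p_pre:.2f} -> {p_post:.2f}")
print(f"VC: {vc_pre:.1f} -> {vc_post:.5f}")
```

With these inputs the post-treatment VC falls well below 0.001 infectious bites, mirroring the near-zero values reported in Table 1; the absolute pre-treatment VC depends on the precise m and T the authors used.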
Background: Attractive toxic sugar bait (ATSB) methods are a new and promising "attract and kill" strategy for mosquito control. Sugar-feeding female and male mosquitoes attracted to ATSB solutions, either sprayed on plants or in bait stations, ingest an incorporated low-risk toxin such as boric acid and are killed. This field study in the arid malaria-free oasis environment of Israel compares how the availability of a primary natural sugar source for Anopheles sergentii mosquitoes, flowering Acacia raddiana trees, affects the efficacy of ATSB methods for mosquito control. Methods: A 47-day field trial was conducted to compare impacts of a single application of ATSB treatment on mosquito densities and age structure in isolated uninhabited sugar-rich and sugar-poor oases relative to an untreated sugar-rich oasis that served as a control. Results: ATSB spraying on patches of non-flowering vegetation around freshwater springs reduced densities of female An. sergentii by 95.2% in the sugar-rich oasis and 98.6% in the sugar-poor oasis; males in both oases were practically eliminated. It reduced daily survival rates of female An. sergentii from 0.77 to 0.35 in the sugar-poor oasis and from 0.85 to 0.51 in the sugar-rich oasis. ATSB treatment reduced the proportion of older, more epidemiologically dangerous mosquitoes (three or more gonotrophic cycles) by 100% and 96.7%, respectively, in the sugar-poor and sugar-rich oases. Overall, malaria vectorial capacity was reduced from 11.2 to 0.0 in the sugar-poor oasis and from 79.0 to 0.03 in the sugar-rich oasis. The reduction in vectorial capacity to negligible levels within days of ATSB application in the sugar-poor oasis, but not until after 2 weeks in the sugar-rich oasis, shows that natural sugar sources compete with the applied ATSB solutions.
Conclusions: While readily available natural sugar sources delay ATSB impact, they do not affect overall outcomes because the high frequency of sugar feeding by mosquitoes has an accumulating effect on the probability that they will be attracted to and killed by ATSB methods. Operationally, ATSB methods for malaria vector control are highly effective in arid environments regardless of competing, highly attractive natural sugar sources in the outdoor environment.
Background: Attractive toxic sugar bait (ATSB) methods are a new form of vector control that kill female and male mosquitoes questing for essential sugar sources in the outdoor environment [1-7]. ATSB solutions consist of fruit or flower scent as an attractant, sugar solution as a feeding stimulant, and an oral toxin to kill the mosquitoes. ATSB solutions that are sprayed on small spots of vegetation or suspended in simple removable bait stations attract mosquitoes from a large area, and the mosquitoes ingesting the toxic solutions are killed. The ATSB methods developed and field-tested in Israel have been shown to literally decimate local populations of different anopheline and culicine mosquito species [1-5]. Similar successful ATSB field trials have controlled Culex quinquefasciatus in storm drains in Florida, USA [6] and Anopheles gambiae s.l. malaria vectors in Mali, West Africa [7]. The new ATSB methods are highly effective, technologically simple, and low-cost, and they circumvent traditional problems associated with the indiscriminate effects of contact insecticides [8] by narrowing the specificity of attraction to sugar-seeking mosquitoes and by using environmentally safe oral toxins such as boric acid, which is considered to be only slightly more toxic to humans and other vertebrates than table salt [9]. ATSB methods work by competing with available natural plant sugar sources, which are an essential source of energy for females and the only food source for male mosquitoes [10,11]. Mosquitoes are highly selective in their attraction to locally available flowering plants and other sources of sugar, including fruits, seedpods, and honeydew [12-14], and the availability of favourable natural sugar sources strongly affects mosquito survival [13]. All of the above-noted ATSB field trials used juices made from local natural fruits to successfully divert sugar-seeking mosquitoes from their natural sources of plant sugars.
Between 50 and 90% of the local female and male mosquitoes feed on ATSB solutions within the first few days after application, as inferred from data at control sites where the same attractive bait solutions are applied without toxin but containing coloured food dye markers, which are readily apparent in sugar-fed mosquitoes [1]. The present study focuses on Anopheles sergentii, the most common and abundant Anopheles species in Israel and the main vector of malaria in the Afro-Arabian zone [15,16]. This mosquito species was the main vector responsible for malaria outbreaks [17-19] before the elimination of malaria parasite transmission from Israel in the 1960s [20-22]. The objective of this study was to determine the relationship between the efficacy of ATSB control and the availability of natural plant sugar sources. As demonstrated in a recent comparative study of An. sergentii in sugar-rich and sugar-poor oases in Israel, the availability of natural plant sugar sources affects mosquito fitness, population dynamics, and malaria vectorial capacity [23]. Accordingly, a single application of ATSB was made in the same two relatively small, isolated, and uninhabited sugar-rich and sugar-poor oases. Another larger oasis with high densities of An. sergentii was not sprayed and served as a control site for the ATSB field trial. Conclusions: GCM, JCB, WG, and YS conceived and planned the study, interpreted results, and wrote the paper. GCM directed and performed the field experiments and managed the data. KLA and WG analysed the data. All authors read and approved the final manuscript.
Background: Attractive toxic sugar bait (ATSB) methods are a new and promising "attract and kill" strategy for mosquito control. Sugar-feeding female and male mosquitoes attracted to ATSB solutions, either sprayed on plants or in bait stations, ingest an incorporated low-risk toxin such as boric acid and are killed. This field study in the arid malaria-free oasis environment of Israel compares how the availability of a primary natural sugar source for Anopheles sergentii mosquitoes, flowering Acacia raddiana trees, affects the efficacy of ATSB methods for mosquito control. Methods: A 47-day field trial was conducted to compare impacts of a single application of ATSB treatment on mosquito densities and age structure in isolated uninhabited sugar-rich and sugar-poor oases relative to an untreated sugar-rich oasis that served as a control. Results: ATSB spraying on patches of non-flowering vegetation around freshwater springs reduced densities of female An. sergentii by 95.2% in the sugar-rich oasis and 98.6% in the sugar-poor oasis; males in both oases were practically eliminated. It reduced daily survival rates of female An. sergentii from 0.77 to 0.35 in the sugar-poor oasis and from 0.85 to 0.51 in the sugar-rich oasis. ATSB treatment reduced the proportion of older, more epidemiologically dangerous mosquitoes (three or more gonotrophic cycles) by 100% and 96.7%, respectively, in the sugar-poor and sugar-rich oases. Overall, malaria vectorial capacity was reduced from 11.2 to 0.0 in the sugar-poor oasis and from 79.0 to 0.03 in the sugar-rich oasis. The reduction in vectorial capacity to negligible levels within days of ATSB application in the sugar-poor oasis, but not until after 2 weeks in the sugar-rich oasis, shows that natural sugar sources compete with the applied ATSB solutions.
Conclusions: While readily available natural sugar sources delay ATSB impact, they do not affect overall outcomes because the high frequency of sugar feeding by mosquitoes has an accumulating effect on the probability that they will be attracted to and killed by ATSB methods. Operationally, ATSB methods for malaria vector control are highly effective in arid environments regardless of competing, highly attractive natural sugar sources in the outdoor environment.
6,285
425
11
[ "sugar", "atsb", "oasis", "oases", "rich", "mosquitoes", "poor", "sugar rich", "treatment", "sergentii" ]
[ "test", "test" ]
null
[CONTENT] Sugar feeding | Vectorial capacity | Malaria | Attractive toxic sugar baits (ATSB) | Outdoor mosquito control | Anopheles sergentii [SUMMARY]
[CONTENT] Sugar feeding | Vectorial capacity | Malaria | Attractive toxic sugar baits (ATSB) | Outdoor mosquito control | Anopheles sergentii [SUMMARY]
null
[CONTENT] Sugar feeding | Vectorial capacity | Malaria | Attractive toxic sugar baits (ATSB) | Outdoor mosquito control | Anopheles sergentii [SUMMARY]
[CONTENT] Sugar feeding | Vectorial capacity | Malaria | Attractive toxic sugar baits (ATSB) | Outdoor mosquito control | Anopheles sergentii [SUMMARY]
[CONTENT] Sugar feeding | Vectorial capacity | Malaria | Attractive toxic sugar baits (ATSB) | Outdoor mosquito control | Anopheles sergentii [SUMMARY]
[CONTENT] Acacia | Animals | Anopheles | Carbohydrate Metabolism | Ecosystem | Feeding Behavior | Female | Israel | Male | Mosquito Control | Poisons | Population Density | Survival Analysis [SUMMARY]
[CONTENT] Acacia | Animals | Anopheles | Carbohydrate Metabolism | Ecosystem | Feeding Behavior | Female | Israel | Male | Mosquito Control | Poisons | Population Density | Survival Analysis [SUMMARY]
null
[CONTENT] Acacia | Animals | Anopheles | Carbohydrate Metabolism | Ecosystem | Feeding Behavior | Female | Israel | Male | Mosquito Control | Poisons | Population Density | Survival Analysis [SUMMARY]
[CONTENT] Acacia | Animals | Anopheles | Carbohydrate Metabolism | Ecosystem | Feeding Behavior | Female | Israel | Male | Mosquito Control | Poisons | Population Density | Survival Analysis [SUMMARY]
[CONTENT] Acacia | Animals | Anopheles | Carbohydrate Metabolism | Ecosystem | Feeding Behavior | Female | Israel | Male | Mosquito Control | Poisons | Population Density | Survival Analysis [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] sugar | atsb | oasis | oases | rich | mosquitoes | poor | sugar rich | treatment | sergentii [SUMMARY]
[CONTENT] sugar | atsb | oasis | oases | rich | mosquitoes | poor | sugar rich | treatment | sergentii [SUMMARY]
null
[CONTENT] sugar | atsb | oasis | oases | rich | mosquitoes | poor | sugar rich | treatment | sergentii [SUMMARY]
[CONTENT] sugar | atsb | oasis | oases | rich | mosquitoes | poor | sugar rich | treatment | sergentii [SUMMARY]
[CONTENT] sugar | atsb | oasis | oases | rich | mosquitoes | poor | sugar rich | treatment | sergentii [SUMMARY]
[CONTENT] sugar | sources | atsb | mosquitoes | sugar sources | atsb methods | solutions | natural | vector | natural plant sugar [SUMMARY]
[CONTENT] oases | sugar | sea | oasis | flowering | valley | estimated | mosquitoes | springs | sprayed [SUMMARY]
null
[CONTENT] sugar | atsb | anopheline mosquitoes | anopheline | sugar resources | resources | levels | shows | atsb solutions | single [SUMMARY]
[CONTENT] sugar | atsb | oasis | mosquitoes | oases | rich | treatment | sprayed | sergentii | sugar rich [SUMMARY]
[CONTENT] sugar | atsb | oasis | mosquitoes | oases | rich | treatment | sprayed | sergentii | sugar rich [SUMMARY]
[CONTENT] ||| ||| Israel | Acacia raddiana [SUMMARY]
[CONTENT] 47-day [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| Israel | Acacia raddiana ||| 47-day ||| 95.2% | 98.6% ||| daily | 0.77 | 0.35 | 0.85 | 0.51 ||| three | 100% | 96.7% ||| 11.2 | 0.0 | 79.0 | 0.03 ||| days | 2 weeks ||| ||| [SUMMARY]
[CONTENT] ||| ||| Israel | Acacia raddiana ||| 47-day ||| 95.2% | 98.6% ||| daily | 0.77 | 0.35 | 0.85 | 0.51 ||| three | 100% | 96.7% ||| 11.2 | 0.0 | 79.0 | 0.03 ||| days | 2 weeks ||| ||| [SUMMARY]
Fast skeletal troponin I, but not the slow isoform, is increased in patients under statin therapy: a pilot study.
30591813
Statin therapy is often associated with muscle complaints and increased serum creatine kinase (CK). However, although essential in determining muscle damage, this marker is not specific for skeletal muscle. Recent studies on animal models have shown that slow and fast isoforms of skeletal troponin I (ssTnI and fsTnI, respectively) can be useful markers of skeletal muscle injury. The aim of this study was to evaluate the utility of ssTnI and fsTnI as markers to monitor the statin-induced skeletal muscle damage.
INTRODUCTION
A total of 51 patients (14 using and 37 not using statins) admitted to the intensive care unit of the University of Ferrara Academic Hospital were included in this observational study. Serum activities of CK, aldolase, alanine aminotransferase and myoglobin were determined by spectrophotometric assays or routine laboratory analysis. Isoforms ssTnI and fsTnI were determined by commercially available ELISAs. The creatine kinase MB isoform (CK-MB) and cardiac troponin I (cTnI) were evaluated as biomarkers of cardiac muscle damage by automatic analysers.
MATERIALS AND METHODS
Among the non-specific markers, only CK was significantly higher in statin users (P = 0.027). Isoform fsTnI, but not ssTnI, was specifically increased in patients using statins (P = 0.009), evidencing the greater susceptibility of fast-twitch fibres to statins. Sub-clinical increases in fsTnI, but not CK, were more frequent in statin users (P = 0.007). Cardiac markers were not significantly altered by statins, confirming the selectivity of the effect on skeletal muscle.
RESULTS
Serum fsTnI could be a good marker for monitoring statin-associated muscular damage outperforming traditional markers.
CONCLUSIONS
[ "Aged", "Creatine Kinase", "Cross-Sectional Studies", "Female", "Humans", "Hydroxymethylglutaryl-CoA Reductase Inhibitors", "Male", "Muscle, Skeletal", "Pilot Projects", "Troponin I" ]
6294157
Introduction
Hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors, also known as statins, are among the most prescribed cardiovascular disease (CVD) risk-reducing drugs worldwide, exerting this effect by markedly lowering the low density lipoprotein-cholesterol (LDL) concentration in blood (1). However, as with every drug, statins come with adverse effects, among which the most acknowledged is muscle toxicity, manifesting as mild muscle complaints such as muscle weakness, cramps and fatigue and, only in rare severe cases, rhabdomyolysis (2). The frequency of such adverse effects is fairly low, ranging from 0.1% for severe events to 10-15% for mild complaints; therefore, these drugs are perceived as having a favourable safety profile, provided that impairment of muscular performance does not occur (3-6). A common clinical practice to reveal possible deleterious effects of statins on muscle is to monitor the increase in circulating creatine kinase (CK), where values greater than 1950 U/L (ten times the upper limit for normal values) are considered a reason for concern (7). Moreover, an accumulating body of evidence suggests that statin therapy increases CK activity even in the absence of muscle complaints (4, 6). On the other hand, it has been established that muscle symptoms can also occur without CK elevations (8). Nonetheless, although CK is mainly represented in skeletal muscle, it cannot be considered a specific marker since it is highly variable from subject to subject, making it difficult to establish normal values, and it is prone to lifestyle-dependent changes (9, 10). In addition to myopathy, cardiac or neurologic diseases can be possible causes of elevated CK found in serum, highlighting the volatile nature of this marker for muscle-related disorders (10, 11). In addition to CK, other non-specific muscle markers exist, spanning from aldolase and aspartate aminotransferase (AST) to myoglobin, which is mainly present in cardiac muscle and oxidative type I fibres (12, 13). 
Although in the past they have been extensively used for the diagnosis of myocardial infarction, they have been replaced by more specific cardiac markers such as the MB isoform of CK (CK-MB) and, particularly, cardiac troponin I (cTnI), which is a more sensitive and specific marker (14, 15). Thanks to basic research, more specific circulating markers for skeletal muscle have also emerged, even making it possible to study the damage to different types of fibres. Indeed, skeletal troponins, mostly the slow-twitch and fast-twitch troponin I (ssTnI and fsTnI, respectively), have proven to be good candidate biomarkers for the evaluation of damage to slow oxidative (Type I) and fast glycolytic (Type II) fibres, respectively, following extensive exercise (16, 17). In addition, the quantification of these isoforms by western blot or enzyme-linked immunosorbent assays (ELISA), alone or in combination, has been used to study muscle damage within different pathological settings (18-20). Nonetheless, their application for studying and monitoring statin-associated muscle damage is still scarce, with the only exception of animal studies (21). Therefore, the purpose of the present work was to evaluate the utility of ssTnI and fsTnI as possible markers to monitor statin-induced skeletal muscle damage. Furthermore, less specific markers such as aldolase, AST, myoglobin and CK, as well as cardiac markers like CK-MB and cTnI, were also evaluated.
null
null
Results
Table 1 provides a summary of the demographic and main clinical characteristics of the study population. Patients using statins and those not using the LDL-lowering drugs did not differ in age, female gender prevalence or BMI. In addition, there were no differences in the type of admission between the two groups. On the contrary, patients using statins had a higher prevalence of CVDs, including hypertension and metabolic diseases (Table 1). However, after excluding hypertension from the CVDs, their prevalence was not significantly different between the two groups (see Table 1, P = 0.198). Group comparisons were performed as stated in the Materials and methods section. The raw data are reported in Supplementary Table 2. The crude and adjusted geometric means with the associated 95% confidence interval (95% CI) are reported in Table 2. Creatine kinase was significantly higher in statin users compared to non-users (Table 2 and Figure 1), whereas aldolase and AST did not significantly differ between the two groups. Creatine kinase remained significantly higher in subjects using statins upon correction for covariates (i.e. age, sex, BMI) (Table 2). Furthermore, fsTnI, but not ssTnI, was higher in patients using statins (Table 2; Figure 1A and 1B), with values exceeding those measured in the NO statin cohort by more than five-fold. Of note, upon adjustment this difference remained statistically significant (Table 2, adjusted means). Median and interquartile range of the raw values of ssTnI, fsTnI and CK measured in the two groups. The concentration of ssTnI (panel A) was not different between the two groups, whereas fsTnI (panel B) was significantly higher (P = 0.005) in subjects using statins. The same was observed for CK (panel C) (P = 0.049). The boundaries of the box represent the 25th and 75th percentiles. The line within the box indicates the median. The whiskers above and below the box represent the highest and the lowest values excluding outliers.
The P-values reported in the graph represent the exact values obtained by the Mann-Whitney U test without correcting for possible confounding factors. Finally, among the cardiac damage markers, only myoglobin and cTnI were found to be significantly higher in patients using statins (Table 2, crude means), differences that disappeared after correction for confounding factors (i.e. age, sex, BMI). Abnormal values were calculated as stated in the Materials and methods section and the results are reported in Table 3. As shown, the frequency of sub-clinically abnormal serum values of all the examined markers was not different between the two groups, with the exception of fsTnI. Indeed, a large proportion of patients using statins had sub-clinically abnormal values of circulating fsTnI compared to a lower proportion in those not using the lipid-lowering drugs (Table 3). We first correlated the biochemical parameters with the clinical and demographic characteristics of patients measured in the whole population, and the results are presented in Table 4. None of the examined parameters correlated with BMI, and only cTnI was significantly and positively related with age. Then, we correlated the muscle markers with each other and separated the output into cardiac markers, non-specific muscle markers (proteins that belong to skeletal muscle as well as other tissues) and specific muscle markers (proteins solely present in the skeletal muscle). The results are summarized in Table 5. Within the cardiac markers, there were moderate correlations between myoglobin and CK-MB as well as CK; cTnI was positively related with CK-MB, and CK was moderately positively related with the CK-MB isoenzyme. Within the non-specific markers, only myoglobin was positively correlated with all the other measured parameters (Table 5) and CK was positively related with AST.
Interestingly, within the specific markers subset, ssTnI did not correlate with any of the measured parameters, including fsTnI, whereas fsTnI was weakly correlated with aldolase and AST, moderately/strongly correlated with myoglobin and strongly correlated with CK (Table 5).
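The bivariate correlations in Tables 4 and 5 used Spearman's rank test. A minimal dependency-light sketch of that statistic (average ranks for ties, then Pearson's r on the ranks), using invented marker values rather than the study data:

```python
import numpy as np

def average_ranks(values):
    """Rank data from 1..n, assigning tied values the average of their ranks."""
    a = np.asarray(values, dtype=float)
    order = np.argsort(a, kind="mergesort")
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):          # average the ranks of any tied values
        tie = a == v
        ranks[tie] = ranks[tie].mean()
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation computed on the two rank vectors."""
    return float(np.corrcoef(average_ranks(x), average_ranks(y))[0, 1])

# Hypothetical paired serum values: fsTnI rising monotonically with CK, ssTnI unrelated
ck    = [90, 150, 300, 800, 1200, 2500]
fstni = [12, 30, 55, 140, 260, 900]    # perfectly monotonic with CK -> rho = 1
sstni = [40, 15, 60, 20, 55, 18]       # no monotonic relation with CK -> rho near 0

print(round(spearman_rho(ck, fstni), 2))  # 1.0
print(round(spearman_rho(ck, sstni), 2))  # near zero
```

Because rho depends only on ranks, it captures the monotonic (not necessarily linear) associations between markers that the Results describe as weak, moderate, or strong.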
null
null
[ "Patient selection", "Serum sampling", "Quantification of skeletal troponins", "Myoglobin, CK, CK-MB and cardiac Troponin I determinations", "Aldolase, AST, ALT assays", "Statistical analysis" ]
[ "In this cross-sectional observational study, a total of 51 consecutive patients enrolled within the project “Diaphragmatic dysfunction in critically ill patients undergoing mechanical ventilation” and admitted to the intensive care unit (ICU) of the University of Ferrara Academic Hospital from 2014 to 2017 were included in the study. After patient recruitment, two groups were formed: 14 patients under statin therapy and 37 patients without statin therapy. The study was approved by the local ethics committee, conforms to The Code of Ethics of the World Medical Association (Declaration of Helsinki) and was conducted according to the guidelines for Good Clinical Practice (European Medicines Agency). Written informed consent was obtained from each patient or next of kin prior to inclusion in the study. The inclusion criteria were: age 18 years or older, with expected mechanical ventilation for at least 72 hours. The exclusion criteria were history of neuromuscular disease, diaphragm atrophy or paralysis, abnormal values of cardiac markers, current thoracotomy, pneumothorax or pneumo-mediastinum, hypoxemia requiring FIO2 greater than 60%, presence of bronchopleural air leaks, pregnancy.", "Serum samples were obtained from the collection of venous blood in anticoagulant-free tubes by centrifugation at 3000 rpm for 10 minutes after clotting, and stored in aliquots at -80 °C until assay. In order to avoid possible loss of bioactivity, samples were analysed within 3 months of collection and thawed only once.", "Slow skeletal Troponin I (TNNI1 or ssTnI; Mybiosource, Cat. No. MBS2510383) and fast skeletal Troponin I (TNNI2 or fsTnI; Mybiosource, Cat. No. MBS927961) were assayed by commercially available ELISA kits according to manufacturer’s instructions. All reagents and standards were included in the kits and analysts were blinded to any clinical information. 
Briefly, undiluted serum samples were analysed in duplicate in 96-microwell microtiter plates precoated with anti-TNNI1 or anti-TNNI2 antibodies. Seven serial dilutions of TNNI1 standard (range of 46.8 - 3000 pg/mL) or TNNI2 standard (range 6.25 - 400 pg/mL) were dispensed on each plate in duplicate and incubated at 37 °C (90 minutes for TNNI1 and 2 hours for TNNI2). At the end of the incubation, the plates were emptied and 100 μL of biotinylated detection antibody working solution were added to each well and incubated for 1 hour at 37 °C. After 3 washing cycles, 100 μL of streptavidin-horseradish peroxidase (HRP) conjugate working solution were added to each well and the plates were incubated for a further 30 minutes at 37 °C and then washed 5 times. Finally, 90 μL of substrate solution were added to each well and the plates incubated at 37 °C for 30 minutes. The reaction was stopped by the addition of 50 μL of Stop Solution and the absorbance read at 450 nm. Concentrations of TNNI1 or TNNI2 were determined by interpolation from the standard curve. Both intra-assay and inter-assay coefficients of variation (CV) were below 10%. The samples were assayed in four different runs together with three internal quality controls to assess the performance of the kit and to correct for possible run-to-run variability. According to the manufacturer’s booklet and to our independent experiments (see Supplementary Table 1), cross-reactivity between ssTnI and fsTnI and analogues (cTnI) was negligible (less than 0.1%) or not present.", "Myoglobin (Beckman Coulter, Cat. No. OSR6168 and Cat. No. 973243), CK (CPK; CK-Nac, Beckman Coulter, Cat. No. OSR6179 and OSR6279), CK-MB (CK-MB, Beckman Coulter, Cat. No. OSR61155), cardiac Troponin I (cTnI; AccuTnI+3, Beckman Coulter, Cat. No. 
A98143) were determined by routine analysis on Beckman Coulter automatic analysers at the Laboratory analysis of Sant’Anna Hospital, Ferrara, Italy.", "Aldolase, AST and alanine aminotransferase (ALT) activities were assayed in undiluted serum samples by coupled spectrophotometric enzymatic assays on a Tecan Infinite M200 (Tecan Group Ltd., Männedorf, Switzerland). All enzymatic tests were performed following IFCC (International Federation of Clinical Chemistry and Laboratory Medicine) procedures.", "The normality of distribution of dependent variables was checked by the Shapiro-Wilk test. Since the variables were not normally distributed, they were log-transformed in order to approximate a normal distribution. In this way, the residuals of the ANCOVA model used for further statistical analyses were normally distributed. To correct for possible confounding factors such as age, gender, and body mass index (BMI), two-group comparisons were performed on log-transformed variables by ANCOVA, including the listed variables as covariates and the other biological variables (CK, ssTnI, fsTnI etc.) as outcomes. Comparisons not corrected for confounding factors were performed on non-transformed variables by the non-parametric Mann-Whitney U test. Fisher’s exact or Chi-square tests were used to compare the general characteristics of the samples or the proportion of abnormal values expressed as categorical variables. Abnormal values were determined based on a 75th percentile cut-off for the circulating markers of muscular functionality/damage in the whole study population: values above this cut-off were considered “sub-clinically abnormal”. Spearman’s rank test was used to analyse bivariate correlations. All the statistical analyses were performed by SPSS 21 (IBM), and an alpha level of 0.05 was considered statistically significant. The figures were made with Graphpad Prism v5." ]
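The sub-clinically abnormal classification described in the statistical analysis can be sketched as follows; the serum values below are invented, and the "75% cut-off" is read here as the 75th percentile of the pooled study population (an interpretation of the Methods wording):

```python
import numpy as np

def flag_subclinical(values, cutoff_pct=75.0):
    """Flag values above the whole-population percentile cut-off as sub-clinically abnormal."""
    values = np.asarray(values, dtype=float)
    cutoff = np.percentile(values, cutoff_pct)  # linear interpolation between order statistics
    return values > cutoff, cutoff

# Hypothetical serum fsTnI (pg/mL) for the pooled population; True = statin user
fstni  = np.array([10, 12, 15, 18, 20, 22, 25, 60, 75, 90, 110, 150])
statin = np.array([0,  0,  0,  0,  0,  0,  0,  1,  1,  1,   1,   1], dtype=bool)

abnormal, cutoff = flag_subclinical(fstni)
print(f"cut-off = {cutoff:.2f} pg/mL")                                    # 78.75
print(f"abnormal among statin users: {abnormal[statin].sum()} / {statin.sum()}")
print(f"abnormal among non-users:    {abnormal[~statin].sum()} / {(~statin).sum()}")
```

The resulting per-group counts of abnormal values are categorical data of exactly the kind the authors then compared with Fisher's exact or Chi-square tests.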
[ null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Patient selection", "Serum sampling", "Quantification of skeletal troponins", "Myoglobin, CK, CK-MB and cardiac Troponin I determinations", "Aldolase, AST, ALT assays", "Statistical analysis", "Results", "Discussion", "Supplementary material" ]
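Because the markers were log-transformed for analysis, the group averages in the results are reported as geometric means with 95% confidence intervals. A minimal sketch, assuming a normal approximation on the log scale (the input values are illustrative, not study data, and SPSS's exact procedure may differ slightly):

```python
# Sketch: geometric mean with an approximate 95% CI, computed by averaging
# on the log scale and exponentiating back (normal approximation).
import math
from statistics import mean, stdev

def geometric_mean_ci(values, z=1.96):
    logs = [math.log(v) for v in values]
    m, s, n = mean(logs), stdev(logs), len(logs)
    half = z * s / math.sqrt(n)          # half-width of the CI on the log scale
    return math.exp(m), (math.exp(m - half), math.exp(m + half))

gm, (lo, hi) = geometric_mean_ci([90.0, 120.0, 150.0, 200.0, 800.0, 60.0])
```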
[ "Hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors, also known as statins, are among the most prescribed cardiovascular disease (CVD) risk-reducing drugs worldwide, acting through the marked lowering of low density lipoprotein-cholesterol (LDL) concentration in blood (1). However, like every drug, statins come with adverse effects, among which the most acknowledged is muscle toxicity, manifesting with mild muscle complaints such as muscle weakness, cramps and fatigue and, only in rare severe cases, rhabdomyolysis (2). The frequency of such adverse effects is fairly low, ranging from 0.1% for severe events to 10-15% for mild complaints; therefore, these drugs are perceived as having a favourable safety profile, provided that an impairment of muscular performance does not occur (3-6). A common clinical practice to reveal possible deleterious effects of statins on muscle is to monitor the increase in circulating creatine kinase (CK), where values greater than 1950 U/L (ten times the upper limit of normal) are considered a reason for concern (7).\nMoreover, an accumulating body of evidence suggests that statin therapy increases CK activity even in the absence of muscle complaints (4, 6). On the other hand, it has been established that muscle symptoms can also occur without CK elevations (8). Nonetheless, although CK is mainly represented in skeletal muscle, it cannot be considered a specific marker, since it is highly variable from subject to subject, making it difficult to establish normal values, and it is prone to lifestyle-dependent changes (9, 10). In addition to myopathy, cardiac or neurologic diseases can be possible causes of elevated serum CK, highlighting the volatile nature of this marker for muscle-related disorders (10, 11). In addition to CK, other non-specific muscle markers exist, spanning from aldolase and aspartate aminotransferase (AST) to myoglobin, which is mainly present in cardiac muscle and oxidative type I fibres (12, 13). 
Although in the past they were extensively used for the diagnosis of myocardial infarction, they have been replaced by more specific cardiac markers such as the MB isoform of CK (CK-MB) and, particularly, cardiac troponin I (cTnI), which is a more sensitive and specific marker (14, 15).\nThanks to basic research, more specific circulating markers for skeletal muscle have also emerged, making it possible to study damage to different types of fibres. Indeed, skeletal troponins, mostly the slow-twitch and fast-twitch troponin I (ssTnI and fsTnI, respectively), have proven to be good candidate biomarkers for the evaluation of damage to slow oxidative (Type I) and fast glycolytic (Type II) fibres, respectively, following extensive exercise (16, 17). In addition, the quantification of these isoforms by western blot or enzyme-linked immunosorbent assays (ELISA), alone or in combination, has been used to study muscle damage in different pathological settings (18-20). Nonetheless, their application for studying and monitoring statin-associated muscle damage is still scarce, with the sole exception of animal studies (21).\nTherefore, the purpose of the present work was to evaluate the utility of ssTnI and fsTnI as possible markers for monitoring statin-induced skeletal muscle damage. Furthermore, less specific markers such as aldolase, AST, myoglobin and CK, as well as cardiac markers like CK-MB and cTnI, were also evaluated.", " Patient selection In this cross-sectional observational study, a total of 51 consecutive patients enrolled within the project “Diaphragmatic dysfunction in critically ill patients undergoing mechanical ventilation” and admitted to the intensive care unit (ICU) of the University of Ferrara Academic Hospital from 2014 to 2017 were included. After recruitment, two groups were formed: 14 patients under statin therapy and 37 patients without statin therapy. 
The study was approved by the local ethics committee, conforms to The Code of Ethics of the World Medical Association (Declaration of Helsinki) and was conducted according to the guidelines for Good Clinical Practice (European Medicines Agency). Written informed consent was obtained from each patient or next of kin prior to inclusion in the study. The inclusion criteria were: age 18 years or older, with expected mechanical ventilation for at least 72 hours. 
The exclusion criteria were: history of neuromuscular disease, diaphragm atrophy or paralysis, abnormal values of cardiac markers, current thoracotomy, pneumothorax or pneumo-mediastinum, hypoxemia requiring FIO2 greater than 60%, presence of bronchopleural air leaks, and pregnancy.\n Serum sampling Serum samples were obtained from venous blood collected in anticoagulant-free tubes by centrifugation at 3000 rpm for 10 minutes after clotting, and stored in aliquots at - 80 °C until assay. In order to avoid possible loss of bioactivity, samples were analysed within 3 months of collection and thawed only once.\n Quantification of skeletal troponins Slow skeletal Troponin I (TNNI1 or ssTnI; Mybiosource, Cat. No. MBS2510383) and fast skeletal Troponin I (TNNI2 or fsTnI; Mybiosource, Cat. No. MBS927961) were assayed by commercially available ELISA kits according to the manufacturer’s instructions. All reagents and standards were included in the kits and analysts were blinded to any clinical information. Briefly, undiluted serum samples were analysed in duplicate in 96-well microtiter plates precoated with anti-TNNI1 or anti-TNNI2 antibodies. Seven serial dilutions of TNNI1 standard (range 46.8 - 3000 pg/mL) or TNNI2 standard (range 6.25 - 400 pg/mL) were dispensed on each plate in duplicate and incubated at 37 °C (90 minutes for TNNI1 and 2 hours for TNNI2). At the end of the incubation, the plates were emptied and 100 μL of biotinylated detection antibody working solution were added to each well and incubated for 1 hour at 37 °C. 
After 3 washing cycles, 100 μL of streptavidin-horseradish peroxidase (HRP) conjugate working solution were added to each well and the plates were incubated for a further 30 minutes at 37 °C and then washed 5 times. Finally, 90 μL of substrate solution were added to each well and the plates were incubated at 37 °C for 30 minutes. The reaction was stopped by the addition of 50 μL of Stop Solution and the absorbance was read at 450 nm. Concentrations of TNNI1 or TNNI2 were determined by interpolation from the standard curve. Both intra-assay and inter-assay coefficients of variation (CV) were below 10%. The samples were assayed in four different runs together with three internal quality controls to assess the performance of the kit and to correct for possible run-to-run variability. According to the manufacturer’s booklet and to our independent experiments (see Supplementary Table 1), cross-reactivity between ssTnI and fsTnI and their analogues (cTnI) was negligible (less than 0.1%) or absent.\n Myoglobin, CK, CK-MB and cardiac Troponin I determinations Myoglobin (Beckman Coulter, Cat. No. OSR6168 and Cat. No. 973243), CK (CPK; CK-Nac, Beckman Coulter, Cat. No. OSR6179 and OSR6279), CK-MB (CK-MB, Beckman Coulter, Cat. No. OSR61155), cardiac Troponin I (cTnI; AccuTnI+3, Beckman Coulter, Cat. No. 
A98143) were determined by routine analysis on Beckman Coulter automatic analysers at the laboratory of Sant’Anna Hospital, Ferrara, Italy.\n Aldolase, AST, ALT assays Aldolase, AST and alanine aminotransferase (ALT) activities were assayed in undiluted serum samples by coupled spectrophotometric enzymatic assays on a Tecan Infinite M200 (Tecan Group Ltd., Männedorf, Switzerland). All enzymatic tests were performed following IFCC (International Federation of Clinical Chemistry and Laboratory Medicine) procedures.\n Statistical analysis The normality of distribution of the dependent variables was checked by the Shapiro-Wilk test. Since the variables were not normally distributed, they were log-transformed to approximate a normal distribution; in this way, the residuals of the ANCOVA model used for further statistical analyses were normally distributed. To correct for possible confounding factors such as age, gender, and body mass index (BMI), two-group comparisons were performed on log-transformed variables by ANCOVA, including the listed variables as covariates and the other biological variables (CK, ssTnI, fsTnI, etc.) as outcomes. Comparisons not corrected for confounding factors were performed on non-transformed variables by the non-parametric Mann-Whitney U test. Fisher’s exact or Chi-square tests were used to compare the general characteristics of the samples or the proportion of abnormal values expressed as categorical variables. Abnormal values were determined based on a 75% cut-off for the circulating markers of muscular functionality/damage in the whole study population: values above this cut-off were considered “sub-clinically abnormal”. Spearman’s rank test was used to analyse bivariate correlations. All statistical analyses were performed with SPSS 21 (IBM), and an alpha level of 0.05 was considered statistically significant. 
The figures were made with Graphpad Prism v5.", "In this cross-sectional observational study, a total of 51 consecutive patients enrolled within the project “Diaphragmatic dysfunction in critically ill patients undergoing mechanical ventilation” and admitted to the intensive care unit (ICU) of the University of Ferrara Academic Hospital from 2014 to 2017 were included in the study. After patient recruitment two groups were formed: 14 patients under statin therapy and 37 patients without statin therapy. The study was approved by the local ethics committee, conforms to The Code of Ethics of the World Medical Association (Declaration of Helsinki) and was conducted according to the guidelines for Good Clinical Practice (European Medicines Agency). Written informed consent was obtained from each patient or next of kin prior to inclusion in the study. The inclusion criteria were: age 18 years or older, with expected mechanical ventilation for at least 72 hours. The exclusion criteria were history of neuromuscular disease, diaphragm atrophy or paralysis, abnormal values of cardiac markers, current thoracotomy, pneumothorax or pneumo-mediastinum, hypoxemia requiring FIO2 greater than 60%, presence of bronchopleural air leaks, pregnancy.", "Serum samples were obtained from the collection of venous blood in anticoagulant-free tubes by centrifugation at 3000 rpm for 10 minutes after clotting, and stored aliquoted at - 80 °C until assay. In order to avoid possible loss of bioactivity, samples were analysed within 3 months from the collection and thawed only once.", "Slow skeletal Troponin I (TNNI1 or ssTnI; Mybiosource, Cat. No. MBS2510383) and fast skeletal Troponin I (TNNI2 or fsTnI; Mybiosource, Cat. No. MBS927961) were assayed by commercially available ELISA kits according to manufacturer’s instructions. All reagents and standards were included in the kits and analysts were blinded for any clinical information. 
Briefly, undiluted serum samples were analysed in duplicate in 96-well microtiter plates precoated with anti-TNNI1 or anti-TNNI2 antibodies. Seven serial dilutions of TNNI1 standard (range 46.8 - 3000 pg/mL) or TNNI2 standard (range 6.25 - 400 pg/mL) were dispensed on each plate in duplicate and incubated at 37 °C (90 minutes for TNNI1 and 2 hours for TNNI2). At the end of the incubation, the plates were emptied and 100 μL of biotinylated detection antibody working solution were added to each well and incubated for 1 hour at 37 °C. After 3 washing cycles, 100 μL of streptavidin-horseradish peroxidase (HRP) conjugate working solution were added to each well and the plates were incubated for a further 30 minutes at 37 °C and then washed 5 times. Finally, 90 μL of substrate solution were added to each well and the plates were incubated at 37 °C for 30 minutes. The reaction was stopped by the addition of 50 μL of Stop Solution and the absorbance was read at 450 nm. Concentrations of TNNI1 or TNNI2 were determined by interpolation from the standard curve. Both intra-assay and inter-assay coefficients of variation (CV) were below 10%. The samples were assayed in four different runs together with three internal quality controls to assess the performance of the kit and to correct for possible run-to-run variability. According to the manufacturer’s booklet and to our independent experiments (see Supplementary Table 1), cross-reactivity between ssTnI and fsTnI and their analogues (cTnI) was negligible (less than 0.1%) or absent.", "Myoglobin (Beckman Coulter, Cat. No. OSR6168 and Cat. No. 973243), CK (CPK; CK-Nac, Beckman Coulter, Cat. No. OSR6179 and OSR6279), CK-MB (CK-MB, Beckman Coulter, Cat. No. OSR61155), cardiac Troponin I (cTnI; AccuTnI+3, Beckman Coulter, Cat. No. 
A98143) were determined by routine analysis on Beckman Coulter automatic analysers at the laboratory of Sant’Anna Hospital, Ferrara, Italy.", "Aldolase, AST and alanine aminotransferase (ALT) activities were assayed in undiluted serum samples by coupled spectrophotometric enzymatic assays on a Tecan Infinite M200 (Tecan Group Ltd., Männedorf, Switzerland). All enzymatic tests were performed following IFCC (International Federation of Clinical Chemistry and Laboratory Medicine) procedures.", "The normality of distribution of the dependent variables was checked by the Shapiro-Wilk test. Since the variables were not normally distributed, they were log-transformed to approximate a normal distribution; in this way, the residuals of the ANCOVA model used for further statistical analyses were normally distributed. To correct for possible confounding factors such as age, gender, and body mass index (BMI), two-group comparisons were performed on log-transformed variables by ANCOVA, including the listed variables as covariates and the other biological variables (CK, ssTnI, fsTnI, etc.) as outcomes. Comparisons not corrected for confounding factors were performed on non-transformed variables by the non-parametric Mann-Whitney U test. Fisher’s exact or Chi-square tests were used to compare the general characteristics of the samples or the proportion of abnormal values expressed as categorical variables. Abnormal values were determined based on a 75% cut-off for the circulating markers of muscular functionality/damage in the whole study population: values above this cut-off were considered “sub-clinically abnormal”. Spearman’s rank test was used to analyse bivariate correlations. All statistical analyses were performed with SPSS 21 (IBM), and an alpha level of 0.05 was considered statistically significant. 
The figures were made with Graphpad Prism v5.", "Table 1 provides a summary of the demographic and main clinical characteristics of the study population. Patients using statins and those not using the LDL-lowering drugs did not differ in age, prevalence of female gender, or BMI. In addition, there were no differences in the type of admission between the two groups. On the contrary, patients using statins had a higher prevalence of CVDs, including hypertension and metabolic diseases (Table 1). However, after excluding hypertension from the CVDs, their prevalence was not significantly different between the two groups (see Table 1, P = 0.198).\nGroup comparisons were performed as stated in the Material and methods section. The raw data are reported in Supplementary Table 2. The crude and adjusted geometric means with the associated 95% confidence intervals (95% CI) are reported in Table 2. Creatine kinase was significantly higher in statin users compared to non-users (Table 2 and Figure 1), whereas aldolase and AST did not significantly differ between the two groups. Creatine kinase remained significantly higher in subjects using statins upon correction for covariates (i.e. age, sex, BMI) (Table 2). Furthermore, fsTnI, but not ssTnI, was higher in patients using statins (Table 2; Figure 1A and 1B), with values exceeding more than five times those measured in the no-statin cohort. Of note, upon adjustment this difference remained statistically significant (Table 2, adjusted means).\nMedian and interquartile range of the raw values of ssTnI, fsTnI and CK measured in the two groups. The concentration of ssTnI (panel A) was not different between the two groups, whereas fsTnI (panel B) was significantly higher (P = 0.005) in subjects using statins. The same was observed for CK (panel C) (P = 0.049). The boundaries of the box represent the 25th-75th percentiles. The line within the box indicates the median. 
The whiskers above and below the box represent the highest and the lowest values excluding outliers. The P-values reported in the graph represent the exact values obtained by the Mann-Whitney U test without correcting for possible confounding factors.\nFinally, among the cardiac damage markers, only myoglobin and cTnI were found to be significantly higher in patients using statins (Table 2, crude means), differences that disappeared after correction for confounding factors (i.e. age, sex, BMI).\nAbnormal values were calculated as stated in the Material and methods section and the results are reported in Table 3. As shown, the frequency of sub-clinically abnormal serum values of all the examined markers was not different between the two groups, with the exception of fsTnI. Indeed, a large proportion of patients using statins had sub-clinically abnormal values of circulating fsTnI, compared to a lower proportion in those not using the lipid-lowering drugs (Table 3).\nWe first correlated the biochemical parameters with the clinical and demographic characteristics measured in the whole population, and the results are presented in Table 4. None of the examined parameters correlated with BMI, and only cTnI was significantly and positively related to age. Then, we correlated the muscle markers with each other and separated the output into cardiac markers, non-specific muscle markers (proteins that belong to skeletal muscle as well as other tissues) and specific muscle markers (proteins solely present in skeletal muscle). The results are summarized in Table 5. Within the cardiac markers, there were moderate correlations between myoglobin and CK-MB as well as CK; cTnI was positively related to CK-MB, and CK was moderately positively related to the CK-MB isoenzyme. Within the non-specific markers, only myoglobin was positively correlated with all the other measured parameters (Table 5), and CK was positively related to AST. 
Interestingly, within the specific markers subset, ssTnI did not correlate with any of the measured parameters including fsTnI, whereas fsTnI was weakly correlated with aldolase and AST, moderately/strongly correlated with myoglobin and strongly correlated with CPK (Table 5).", "Our results confirmed that statin treatment was correlated with an increased average serum concentration of CK, whereas the other non-specific muscle markers remained unchanged, suggesting the onset of a subclinical low-level muscular injury. This is in agreement with results from other randomized clinical trials where high-dose statins produced an increase in circulating CK even in healthy subjects without any muscle-related complaints (4, 6).\nNonetheless, our most important finding was that, for the first time in humans, we found an increase in the fsTnI isoform, specific for fast-twitch fibers, in the serum of subjects taking statins, while the concentration of ssTnI was unchanged. Indeed, up to now only animal studies have observed a fibre-specific effect of statins on muscle, with fast-twitch fibres more prone to ultrastructural and functional alterations induced by the drugs (22). Thus, the observation of different serum amounts of fiber-specific muscle damage markers between subjects using or not using statins may be considered a clue to a differential release of the proteins from the muscle, reflecting the probable different susceptibility of fibers to the drugs. This is of paramount importance considering that other drugs, such as fibrates, seem to mainly target slow-twitch fibers (23, 24). 
Therefore, the key point of our work was the detection of fiber-specific muscle damage through a simple blood test rather than a muscle biopsy.\nNonetheless, although we observed a concomitant increase in both CK and fsTnI, the latter could be more sensitive in identifying low-grade muscle injury than CK, since a higher proportion of statin users were characterized by sub-clinically abnormal values of fsTnI (i.e. values above the 75% cut-off). However, the paucity of human studies exploring the use of skeletal troponins, and in particular fsTnI, as muscle damage markers, together with the lack of standardized analysis techniques, precluded us from comparing our data with normal values determined in a larger population. Therefore, our results might be a picture of our population and not generalizable to a larger one. Interestingly, the finding of a strong positive correlation between CK and fsTnI, observed also in another study not related to statins, suggests that the increase in CK could reflect the greater susceptibility of fast-twitch fibres to statins as well (17). It has been reported that CK (subunit M) expression is increased in muscles composed mainly of fast-twitch fibres compared to those with slow-twitch fibres, reflecting the higher anaerobic metabolism of Type II fibres (25). Therefore, it is not surprising that CK and fsTnI correlated, suggesting that both proteins more likely mark fast fibres. However, at this time it is still unknown what lies behind the increased susceptibility of fast-twitch fibres towards statins, although we can infer that differences in energetic metabolism as well as in the structures involved in calcium release/reuptake may play major roles. Indeed, in a study Draeger and collaborators observed a breakdown of the T-tubular system, important for the transmission of the action potential, in patients using statins (26). 
Considering that the fast-twitch fibres have a more extended and developed T-tubular system than slow-twitch fibres of the same species, it is tempting to speculate that this difference might also be reflected in the increased susceptibility of fast-twitch fibres, which in turn can be reflected in an increased leakage of fsTnI into the circulation (27). The lack of correlation, or the presence of only a weak correlation, between several of the proteins examined may be due to the nature of the protein itself, i.e. non-specific or specific to skeletal muscle, possibly reflecting the functionality of other organs and tissues (e.g. liver and cardiac muscle). In addition, the lack of correlation between the ssTnI and fsTnI subunits is not surprising, since they are supposed to mark different skeletal fibres and therefore a correlation is not necessarily expected. Nonetheless, these lacking correlations do not undermine our hypothesis or our study conclusions.\nThis study was not without limitations. First, the aforementioned small sample size may have weakened the generalization of our results to larger populations. However, we have to acknowledge that this is the first study evaluating both ssTnI and fsTnI separately and as specific markers of sub-clinical damage to skeletal muscles; therefore, it may be an important starting point for future studies. Second, the observational and cross-sectional design of the study precluded us from determining any cause-and-effect relationships between the measured variables, as well as the extent of associations between skeletal troponin I isoforms and statin treatment. A longitudinal approach with an interventional design would be more valuable. Third, the population enrolled in this study may not reflect the real-life clinical application of the analysed protein markers, since relatively healthier subjects (e.g. 
dyslipidemic patients and those affected by metabolic syndrome) may be more appropriate for studying the statin-related muscle damage. However, our results may be a good starting point for a general applicability of the skeletal troponins for the evaluation of clinical/sub-clinical muscle damage regardless of the population. In addition, studies dealing with muscular markers and the evaluation of muscle functionality should acknowledge that statins may act as confounding factors.\nIn conclusion, our results suggest that fsTnI could be a good marker for monitoring statin-associated muscular damage outperforming traditional markers such as CK, opening new avenues for the evaluation of fibre-specific skeletal muscle damage. Further studies in larger cohorts are needed to confirm and extend the actual usefulness of both skeletal troponin I isoforms as markers for skeletal muscle damage associated with statin therapy.", "Supplementary tables" ]
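The “sub-clinically abnormal” classification described in the statistical analysis can be sketched as follows, assuming the 75% cut-off means the 75th percentile of each marker in the whole study population (an interpretation of the methods text, not the authors' SPSS syntax):

```python
# Illustrative sketch: flag marker values above the 75th percentile of the
# whole study population as "sub-clinically abnormal".
def percentile_75(values):
    """75th percentile by linear interpolation between order statistics."""
    s = sorted(values)
    rank = 0.75 * (len(s) - 1)
    lo, frac = int(rank), rank - int(rank)
    return s[lo] if frac == 0 else s[lo] + frac * (s[lo + 1] - s[lo])

def flag_abnormal(values):
    cutoff = percentile_75(values)
    return [v > cutoff for v in values]

ck = [90, 120, 150, 200, 800, 60, 75, 1300]   # made-up CK values, U/L
flags = flag_abnormal(ck)                      # only the top quarter is flagged
```

The flagged proportion can then be compared between statin users and non-users with Fisher's exact or Chi-square tests, as in Table 3.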
[ "intro", "materials|methods", null, null, null, null, null, null, "results", "discussion", "supplementary-material" ]
[ "statin", "fast skeletal troponin", "slow skeletal troponin", "muscle damage", "creatine kinase" ]
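The bivariate correlations were analysed with Spearman's rank test; a minimal tie-free implementation (Pearson's formula applied to ranks) illustrates the idea. Real software such as SPSS additionally handles tied values, which this sketch does not:

```python
# Sketch of Spearman's rank correlation for tie-free data: rank both
# variables, then compute the Pearson correlation of the ranks.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# A perfectly monotone pairing (e.g. CK vs fsTnI) gives rho = 1
rho = spearman([100, 250, 90, 400, 150], [5, 12, 4, 30, 7])
```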
Introduction: Hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors, also known as statins, are among the most prescribed cardiovascular disease (CVD) risk-reducing drugs worldwide, acting through the marked lowering of low density lipoprotein-cholesterol (LDL) concentration in blood (1). However, like every drug, statins come with adverse effects, among which the most acknowledged is muscle toxicity, manifesting with mild muscle complaints such as muscle weakness, cramps and fatigue and, only in rare severe cases, rhabdomyolysis (2). The frequency of such adverse effects is fairly low, ranging from 0.1% for severe events to 10-15% for mild complaints; therefore, these drugs are perceived as having a favourable safety profile, provided that an impairment of muscular performance does not occur (3-6). A common clinical practice to reveal possible deleterious effects of statins on muscle is to monitor the increase in circulating creatine kinase (CK), where values greater than 1950 U/L (ten times the upper limit of normal) are considered a reason for concern (7). Moreover, an accumulating body of evidence suggests that statin therapy increases CK activity even in the absence of muscle complaints (4, 6). On the other hand, it has been established that muscle symptoms can also occur without CK elevations (8). Nonetheless, although CK is mainly represented in skeletal muscle, it cannot be considered a specific marker, since it is highly variable from subject to subject, making it difficult to establish normal values, and it is prone to lifestyle-dependent changes (9, 10). In addition to myopathy, cardiac or neurologic diseases can be possible causes of elevated serum CK, highlighting the volatile nature of this marker for muscle-related disorders (10, 11). 
In addition to CK, other non-specific muscle markers exist, spanning from aldolase and aspartate aminotransferase (AST) to myoglobin, which is mainly present in cardiac muscle and oxidative type I fibres (12, 13). Although these were extensively used in the past for the diagnosis of myocardial infarction, they have been replaced by more specific cardiac markers such as the MB isoform of CK (CK-MB) and, particularly, cardiac troponin I (cTnI), a more sensitive and specific marker (14, 15). Thanks to basic research, more specific circulating markers for skeletal muscle have also emerged, even making it possible to study damage to different types of fibres. Indeed, the skeletal troponins, chiefly the slow-twitch and fast-twitch troponin I isoforms (ssTnI and fsTnI, respectively), have proven to be good candidate biomarkers for the evaluation of damage to slow oxidative (Type I) and fast glycolytic (Type II) fibres, respectively, following extensive exercise (16, 17). In addition, the quantification of these isoforms by western blot or enzyme-linked immunosorbent assay (ELISA), alone or in combination, has been used to study muscle damage in different pathological settings (18-20). Nonetheless, their application to studying and monitoring statin-associated muscle damage is still scarce, with the sole exception of animal studies (21). Therefore, the purpose of the present work was to evaluate the utility of ssTnI and fsTnI as possible markers to monitor statin-induced skeletal muscle damage. Furthermore, less specific markers such as aldolase, AST, myoglobin and CK, as well as cardiac markers such as CK-MB and cTnI, were also evaluated. 
Materials and methods: Patient selection: In this cross-sectional observational study, a total of 51 consecutive patients enrolled within the project “Diaphragmatic dysfunction in critically ill patients undergoing mechanical ventilation” and admitted to the intensive care unit (ICU) of the University of Ferrara Academic Hospital from 2014 to 2017 were included. After recruitment, two groups were formed: 14 patients under statin therapy and 37 patients without statin therapy. The study was approved by the local ethics committee, conforms to The Code of Ethics of the World Medical Association (Declaration of Helsinki) and was conducted according to the guidelines for Good Clinical Practice (European Medicines Agency). Written informed consent was obtained from each patient or next of kin prior to inclusion in the study. The inclusion criteria were: age 18 years or older, with expected mechanical ventilation for at least 72 hours. The exclusion criteria were: history of neuromuscular disease, diaphragm atrophy or paralysis, abnormal values of cardiac markers, current thoracotomy, pneumothorax or pneumo-mediastinum, hypoxemia requiring FIO2 greater than 60%, presence of bronchopleural air leaks, and pregnancy. 
Serum sampling: Serum samples were obtained from venous blood collected in anticoagulant-free tubes by centrifugation at 3000 rpm for 10 minutes after clotting, and stored aliquoted at -80 °C until assay. To avoid possible loss of bioactivity, samples were analysed within 3 months of collection and thawed only once. 
Quantification of skeletal troponins: Slow skeletal troponin I (TNNI1 or ssTnI; Mybiosource, Cat. No. MBS2510383) and fast skeletal troponin I (TNNI2 or fsTnI; Mybiosource, Cat. No. MBS927961) were assayed by commercially available ELISA kits according to the manufacturer’s instructions. All reagents and standards were included in the kits and analysts were blinded to any clinical information. Briefly, undiluted serum samples were analysed in duplicate in 96-well microtiter plates precoated with anti-TNNI1 or anti-TNNI2 antibodies. Seven serial dilutions of TNNI1 standard (range 46.8 - 3000 pg/mL) or TNNI2 standard (range 6.25 - 400 pg/mL) were dispensed on each plate in duplicate and incubated at 37 °C (90 minutes for TNNI1 and 2 hours for TNNI2). At the end of the incubation, the plates were emptied and 100 μL of biotinylated detection antibody working solution were added to each well and incubated for 1 hour at 37 °C. After 3 washing cycles, 100 μL of streptavidin-horseradish peroxidase (HRP) conjugate working solution were added to each well and the plates were incubated for a further 30 minutes at 37 °C and then washed 5 times. Finally, 90 μL of substrate solution were added to each well and the plates incubated at 37 °C for 30 minutes. The reaction was stopped by the addition of 50 μL of Stop Solution and the absorbance read at 450 nm. Concentrations of TNNI1 or TNNI2 were determined by interpolation from the standard curve. Both intra-assay and inter-assay coefficients of variation (CV) were below 10%. The samples were assayed in four different runs together with three internal quality controls to assess the performance of the kits and to correct for possible run-to-run variability. According to the manufacturer’s booklet and to our independent experiments (see Supplementary Table 1), cross-reactivity between ssTnI or fsTnI and analogues (cTnI) was negligible (less than 0.1%) or absent. 
Myoglobin, CK, CK-MB and cardiac troponin I determinations: Myoglobin (Beckman Coulter, Cat. No. OSR6168 and Cat. No. 973243), CK (CPK; CK-Nac, Beckman Coulter, Cat. No. OSR6179 and OSR6279), CK-MB (Beckman Coulter, Cat. No. OSR61155) and cardiac troponin I (cTnI; AccuTnI+3, Beckman Coulter, Cat. No. A98143) were determined by routine analysis on Beckman Coulter automatic analysers at the analysis laboratory of Sant’Anna Hospital, Ferrara, Italy. 
Aldolase, AST, ALT assays: Aldolase, AST and alanine aminotransferase (ALT) activities were assayed in undiluted serum samples by coupled spectrophotometric enzymatic assays on a Tecan Infinite M200 (Tecan Group Ltd., Männedorf, Switzerland). All enzymatic tests were performed following IFCC (International Federation of Clinical Chemistry and Laboratory Medicine) procedures. 
Statistical analysis: The normality of distribution of the dependent variables was checked by the Shapiro-Wilk test. Since the variables were not normally distributed, they were log-transformed to approximate a normal distribution; in this way, the residuals of the ANCOVA model used for further statistical analyses were normally distributed. To correct for possible confounding factors such as age, gender and body mass index (BMI), two-group comparisons were performed on the log-transformed variables by ANCOVA, including the listed variables as covariates and the biological variables (CK, ssTnI, fsTnI, etc.) as outcomes. Comparisons not corrected for confounding factors were performed on non-transformed variables by the non-parametric Mann-Whitney U test. Fisher’s exact or Chi-square tests were used to compare the general characteristics of the samples and the proportions of abnormal values expressed as categorical variables. Abnormal values were defined using the 75th percentile of each circulating marker of muscular functionality/damage in the whole study population: values above this cut-off were considered “sub-clinically abnormal”. Spearman’s rank test was used to analyse bivariate correlations. All statistical analyses were performed with SPSS 21 (IBM), and an alpha level of 0.05 was considered statistically significant. Figures were made with GraphPad Prism v5. 
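The two core computational steps described above — an ANCOVA-style, covariate-adjusted group comparison on log-transformed markers and the 75th-percentile "sub-clinically abnormal" flag — can be sketched in a few lines. This is a minimal numpy illustration with synthetic data (all names and values hypothetical), not the SPSS procedure the authors actually used:

```python
import numpy as np

def adjusted_group_effect(y, group, covariates):
    """ANCOVA-style estimate of the group effect on log-transformed y,
    adjusting for covariates via ordinary least squares.
    y: positive marker values; group: 0/1 indicator; covariates: (n, k) array."""
    X = np.column_stack([np.ones_like(y, dtype=float), group, covariates])
    beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return beta[1]  # covariate-adjusted log-scale difference between groups

def subclinical_flags(values, q=75):
    """Flag values above the cohort's 75th percentile as 'sub-clinically abnormal'."""
    return values > np.percentile(values, q)

# Synthetic check: CK-like data where the true adjusted group effect is 0.5
# on the log scale (noise-free, so OLS recovers it exactly).
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(40, 80, n)
group = rng.integers(0, 2, n)
log_ck = 4.0 + 0.5 * group + 0.01 * age
effect = adjusted_group_effect(np.exp(log_ck), group, age.reshape(-1, 1))
```

In the noise-free synthetic data, `effect` recovers 0.5; with real data the same fit would additionally need standard errors and P-values, which statistical packages provide directly.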
Patient selection: In this cross-sectional observational study, a total of 51 consecutive patients enrolled within the project “Diaphragmatic dysfunction in critically ill patients undergoing mechanical ventilation” and admitted to the intensive care unit (ICU) of the University of Ferrara Academic Hospital from 2014 to 2017 were included in the study. After patient recruitment two groups were formed: 14 patients under statin therapy and 37 patients without statin therapy. The study was approved by the local ethics committee, conforms to The Code of Ethics of the World Medical Association (Declaration of Helsinki) and was conducted according to the guidelines for Good Clinical Practice (European Medicines Agency). Written informed consent was obtained from each patient or next of kin prior to inclusion in the study. The inclusion criteria were: age 18 years or older, with expected mechanical ventilation for at least 72 hours. The exclusion criteria were history of neuromuscular disease, diaphragm atrophy or paralysis, abnormal values of cardiac markers, current thoracotomy, pneumothorax or pneumo-mediastinum, hypoxemia requiring FIO2 greater than 60%, presence of bronchopleural air leaks, pregnancy. Serum sampling: Serum samples were obtained from the collection of venous blood in anticoagulant-free tubes by centrifugation at 3000 rpm for 10 minutes after clotting, and stored aliquoted at - 80 °C until assay. In order to avoid possible loss of bioactivity, samples were analysed within 3 months from the collection and thawed only once. Quantification of skeletal troponins: Slow skeletal Troponin I (TNNI1 or ssTnI; Mybiosource, Cat. No. MBS2510383) and fast skeletal Troponin I (TNNI2 or fsTnI; Mybiosource, Cat. No. MBS927961) were assayed by commercially available ELISA kits according to manufacturer’s instructions. All reagents and standards were included in the kits and analysts were blinded for any clinical information. 
Briefly, undiluted serum samples were analysed in duplicate in 96-well microtiter plates precoated with anti-TNNI1 or anti-TNNI2 antibodies. Seven serial dilutions of TNNI1 standard (range 46.8 - 3000 pg/mL) or TNNI2 standard (range 6.25 - 400 pg/mL) were dispensed on each plate in duplicate and incubated at 37 °C (90 minutes for TNNI1 and 2 hours for TNNI2). At the end of the incubation, the plates were emptied and 100 μL of biotinylated detection antibody working solution were added to each well and incubated for 1 hour at 37 °C. After 3 washing cycles, 100 μL of streptavidin-horseradish peroxidase (HRP) conjugate working solution were added to each well and the plates were incubated for a further 30 minutes at 37 °C and then washed 5 times. Finally, 90 μL of substrate solution were added to each well and the plates incubated at 37 °C for 30 minutes. The reaction was stopped by the addition of 50 μL of Stop Solution and the absorbance read at 450 nm. Concentrations of TNNI1 or TNNI2 were determined by interpolation from the standard curve. Both intra-assay and inter-assay coefficients of variation (CV) were below 10%. The samples were assayed in four different runs together with three internal quality controls to assess the performance of the kit and to correct for possible run-to-run variability. According to the manufacturer’s booklet and to our independent experiments (see Supplementary Table 1), cross-reactivity between ssTnI or fsTnI and analogues (cTnI) was negligible (less than 0.1%) or absent. Myoglobin, CK, CK-MB and cardiac troponin I determinations: Myoglobin (Beckman Coulter, Cat. No. OSR6168 and Cat. No. 973243), CK (CPK; CK-Nac, Beckman Coulter, Cat. No. OSR6179 and OSR6279), CK-MB (Beckman Coulter, Cat. No. OSR61155) and cardiac troponin I (cTnI; AccuTnI+3, Beckman Coulter, Cat. No. A98143) were determined by routine analysis on Beckman Coulter automatic analysers at the analysis laboratory of Sant’Anna Hospital, Ferrara, Italy. 
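The back-calculation "by interpolation from the standard curve" can be sketched as follows. The standard concentrations below match the stated TNNI2 range (6.25 - 400 pg/mL), but the absorbance values are invented for illustration; commercial kits typically fit a 4-parameter logistic curve rather than the simple log-linear interpolation used here:

```python
import numpy as np

# Hypothetical TNNI2-like standard curve: seven serial dilutions (pg/mL) and
# illustrative mean absorbances at 450 nm (real curves come from each plate).
std_conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0, 200.0, 400.0])
std_od = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.60, 2.40])

def conc_from_od(od):
    """Back-calculate a sample concentration by linear interpolation of
    log(concentration) versus absorbance — a stand-in for the kit's curve fit."""
    od = np.clip(od, std_od[0], std_od[-1])  # stay within the curve's range
    return float(np.exp(np.interp(od, std_od, np.log(std_conc))))

sample = conc_from_od(0.95)  # an OD equal to a standard returns that standard's concentration
```

Interpolating on the log-concentration axis respects the geometric spacing of the serial dilutions; samples whose absorbance falls outside the standard range would, in practice, be re-assayed after dilution rather than clipped.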
Aldolase, AST, ALT assays: Aldolase, AST and alanine aminotransferase (ALT) activities were assayed in undiluted serum samples by coupled spectrophotometric enzymatic assays on a Tecan Infinite M200 (Tecan Group Ltd., Männedorf, Switzerland). All enzymatic tests were performed following IFCC (International Federation of Clinical Chemistry and Laboratory Medicine) procedures. Statistical analysis: The normality of distribution of the dependent variables was checked by the Shapiro-Wilk test. Since the variables were not normally distributed, they were log-transformed to approximate a normal distribution; in this way, the residuals of the ANCOVA model used for further statistical analyses were normally distributed. To correct for possible confounding factors such as age, gender and body mass index (BMI), two-group comparisons were performed on the log-transformed variables by ANCOVA, including the listed variables as covariates and the biological variables (CK, ssTnI, fsTnI, etc.) as outcomes. Comparisons not corrected for confounding factors were performed on non-transformed variables by the non-parametric Mann-Whitney U test. Fisher’s exact or Chi-square tests were used to compare the general characteristics of the samples and the proportions of abnormal values expressed as categorical variables. Abnormal values were defined using the 75th percentile of each circulating marker of muscular functionality/damage in the whole study population: values above this cut-off were considered “sub-clinically abnormal”. Spearman’s rank test was used to analyse bivariate correlations. All statistical analyses were performed with SPSS 21 (IBM), and an alpha level of 0.05 was considered statistically significant. Figures were made with GraphPad Prism v5. Results: Table 1 provides a summary of the demographic and main clinical characteristics of the study population. 
Patients using statins and those not using the LDL-lowering drugs did not differ in age, female gender prevalence or BMI. In addition, there were no differences in the type of admission between the two groups. On the contrary, patients using statins had a higher prevalence of CVDs, including hypertension and metabolic diseases (Table 1). However, after excluding hypertension from the CVDs, their prevalence was not significantly different between the two groups (see Table 1, P = 0.198). Group comparisons were performed as stated in the Materials and methods section. The raw data are reported in Supplementary Table 2. The crude and adjusted geometric means with the associated 95% confidence intervals (95% CI) are reported in Table 2. Creatine kinase was significantly higher in statin users compared to non-users (Table 2 and Figure 1), whereas aldolase and AST did not significantly differ between the two groups. Creatine kinase remained significantly higher in subjects using statins upon correction for covariates (i.e. age, sex, BMI) (Table 2). Furthermore, fsTnI, but not ssTnI, was higher in patients using statins (Table 2; Figure 1A and 1B), with values exceeding more than five times those measured in the no-statin cohort. Of note, upon adjustment this difference remained statistically significant (Table 2, adjusted means). Figure 1 shows the median and interquartile range of the raw values of ssTnI, fsTnI and CK measured in the two groups. The concentration of ssTnI (panel A) was not different between the two groups, whereas fsTnI (panel B) was significantly higher (P = 0.005) in subjects using statins. The same was observed for CK (panel C) (P = 0.049). The boundaries of the box represent the 25th-75th quartiles, the line within the box indicates the median, and the whiskers above and below the box represent the highest and lowest values excluding outliers. 
The P-values reported in the graph represent the exact values obtained by the Mann-Whitney U test without correcting for possible confounding factors. Finally, among the cardiac damage markers, only myoglobin and cTnI were significantly higher in patients using statins (Table 2, crude means), differences that disappeared after correction for confounding factors (i.e. age, sex, BMI). Abnormal values were calculated as stated in the Materials and methods section and the results are reported in Table 3. As shown, the frequency of sub-clinically abnormal serum values did not differ between the two groups for any of the examined markers except fsTnI. Indeed, a large proportion of patients using statins had sub-clinically abnormal values of circulating fsTnI, compared to a lower proportion in those not using the lipid-lowering drugs (Table 3). We first correlated the biochemical parameters with the clinical and demographic characteristics of the patients measured in the whole population; the results are presented in Table 4. None of the examined parameters correlated with BMI, and only cTnI was significantly and positively related to age. We then correlated the muscle markers with each other, separating the output into cardiac markers, non-specific muscle markers (proteins that belong to skeletal muscle as well as other tissues) and specific muscle markers (proteins present solely in skeletal muscle). The results are summarized in Table 5. Within the cardiac markers, there were moderate correlations between myoglobin and CK-MB as well as CK; cTnI was positively related to CK-MB, and CK was moderately positively related to the CK-MB isoenzyme. Within the non-specific markers, only myoglobin was positively correlated with all the other measured parameters (Table 5), and CK was positively related to AST. 
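The Spearman's rank correlations behind Tables 4 and 5 amount to a Pearson correlation computed on ranks (with average ranks for ties). A minimal sketch, using hypothetical values rather than the study's data:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def rank(a):
        order = np.argsort(a, kind="mergesort")
        r = np.empty(len(a), dtype=float)
        r[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):          # average ranks over ties
            mask = a == v
            r[mask] = r[mask].mean()
        return r
    rx = rank(np.asarray(x, dtype=float))
    ry = rank(np.asarray(y, dtype=float))
    return float(np.corrcoef(rx, ry)[0, 1])

# A perfectly monotonic (even if non-linear) relationship gives rho = 1.
ck = np.array([50.0, 120.0, 300.0, 800.0, 2000.0])
fstni = ck ** 2  # hypothetical values, purely for illustration
rho = spearman_rho(ck, fstni)
```

Because it depends only on ranks, Spearman's rho is well suited to the skewed, non-normally distributed marker values described in the Statistical analysis section.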
Interestingly, within the specific-marker subset, ssTnI did not correlate with any of the measured parameters, including fsTnI, whereas fsTnI was weakly correlated with aldolase and AST, moderately/strongly correlated with myoglobin and strongly correlated with CK (Table 5). Discussion: Our results confirmed that statin treatment was associated with an increased average serum concentration of CK, whereas the other non-specific muscle markers remained unchanged, suggesting the onset of a subclinical, low-level muscular injury. This is in agreement with results from randomized clinical trials in which high-dose statins increased circulating CK even in healthy subjects without any muscle-related complaints (4, 6). Nonetheless, our most important finding was that, for the first time in humans, we found an increase in the fsTnI isoform, specific for fast-twitch fibres, in the serum of subjects taking statins, while the concentration of ssTnI was unchanged. Indeed, until now only animal studies had observed a fibre-specific effect of statins on muscle, with fast-twitch fibres more prone to the ultrastructural and functional alterations induced by the drugs (22). Thus, the observation of different serum amounts of fibre-specific muscle damage markers between subjects using or not using statins may be considered a clue to a differential release of the proteins from muscle, reflecting the probable different susceptibility of the fibres to the drugs. This is of paramount importance considering that other drugs, such as fibrates, seem mainly to target slow-twitch fibres (23, 24). Therefore, the key point of our work was the detection of fibre-specific muscle damage through a simple blood test rather than muscle biopsy. 
Nonetheless, although we observed a concomitant increase in both CK and fsTnI, the latter could be more sensitive than CK in identifying low-grade muscle injury, since a higher proportion of statin users had sub-clinically abnormal values of fsTnI (values above the 75th percentile cut-off). However, the paucity of human studies exploring the use of skeletal troponins, and in particular fsTnI, as muscle damage markers, together with the lack of standardized analysis techniques, prevented us from comparing our data with normal values determined in a larger population. Therefore, our results might be a picture of our population and not generalizable to a larger one. Interestingly, the finding of a strong positive correlation between CK and fsTnI, observed also in another study not related to statins, suggests that the increase in CK could likewise reflect the greater susceptibility of fast-twitch fibres to statins (17). It has been reported that CK (subunit M) expression is increased in muscles composed mainly of fast-twitch fibres compared to those dominated by slow-twitch fibres, reflecting the higher anaerobic metabolism of Type II fibres (25). Therefore, it is not surprising that CK and fsTnI correlated, suggesting that both proteins more likely mark fast fibres. However, it is still unknown what lies behind the increased susceptibility of fast-twitch fibres towards statins, although we can infer that differences in energetic metabolism, as well as in the structures involved in calcium release/reuptake, may play major roles. Indeed, Draeger and collaborators observed a breakdown of the T-tubular system, important for the transmission of the action potential, in patients using statins (26). 
Considering that fast-twitch fibres have a more extended and developed T-tubular system than slow-twitch fibres of the same species, it is tempting to speculate that this difference might underlie the increased susceptibility of fast-twitch fibres, which in turn may be reflected in an increased leakage of fsTnI into the circulation (27). The lack of correlation, or the presence of only weak correlations, between several of the proteins examined may be due to the nature of the proteins themselves: markers that are not specific to skeletal muscle may also reflect the functionality of other organs and tissues (e.g. liver and cardiac muscle). In addition, the lack of correlation between ssTnI and fsTnI is not surprising, since they are supposed to mark different skeletal fibre types, so a correlation is not necessarily expected. Nonetheless, these missing correlations do not undermine our hypothesis or our study conclusions. This study was not without limitations. First, the small sample size noted above may have weakened the generalizability of our results to larger populations. However, this is the first study evaluating both ssTnI and fsTnI separately as specific markers of sub-clinical damage to skeletal muscle; it may therefore be an important starting point for future studies. Second, the observational and cross-sectional design of the study prevented us from determining any cause-and-effect relationships between the measured variables, as well as the extent of the associations between skeletal troponin I isoforms and statin treatment. A longitudinal, interventional design would be more valuable. Third, the population enrolled in this study may not reflect the real-life clinical application of the analysed protein markers, since relatively healthier subjects (e.g. dyslipidemic patients and those affected by metabolic syndrome) may be more appropriate for studying statin-related muscle damage. However, our results may be a good starting point for a general applicability of the skeletal troponins for the evaluation of clinical/sub-clinical muscle damage regardless of the population. In addition, studies dealing with muscular markers and the evaluation of muscle functionality should acknowledge that statins may act as confounding factors. In conclusion, our results suggest that fsTnI could be a good marker for monitoring statin-associated muscular damage, outperforming traditional markers such as CK and opening new avenues for the evaluation of fibre-specific skeletal muscle damage. Further studies in larger cohorts are needed to confirm and extend the usefulness of both skeletal troponin I isoforms as markers of skeletal muscle damage associated with statin therapy. Supplementary material: Supplementary tables
Background: Statin therapy is often associated with muscle complaints and increased serum creatine kinase (CK). However, although essential in determining muscle damage, this marker is not specific for skeletal muscle. Recent studies in animal models have shown that the slow and fast isoforms of skeletal troponin I (ssTnI and fsTnI, respectively) can be useful markers of skeletal muscle injury. The aim of this study was to evaluate the utility of ssTnI and fsTnI as markers to monitor statin-induced skeletal muscle damage. Methods: A total of 51 patients (14 using and 37 not using statins) admitted to the intensive care unit of the University of Ferrara Academic Hospital were included in this observational study. Serum CK, aldolase and alanine aminotransferase activities and myoglobin concentrations were determined by spectrophotometric assays or routine laboratory analysis. The ssTnI and fsTnI isoforms were determined by commercially available ELISAs. The creatine kinase MB isoform (CK-MB) and cardiac troponin I (cTnI), biomarkers of cardiac muscle damage, were evaluated on automatic analysers. Results: Among the non-specific markers, only CK was significantly higher in statin users (P = 0.027). The fsTnI isoform, but not ssTnI, was specifically increased in patients using statins (P = 0.009), evidencing the greater susceptibility of fast-twitch fibres towards statins. A sub-clinical increase in fsTnI, but not CK, was more frequent in statin users (P = 0.007). Cardiac markers were not significantly altered by statins, confirming the selectivity of the effect on skeletal muscle. Conclusions: Serum fsTnI could be a good marker for monitoring statin-associated muscular damage, outperforming traditional markers.
null
null
5,769
315
11
[ "ck", "muscle", "markers", "fstni", "study", "values", "skeletal", "variables", "cat", "patients" ]
[ "test", "test" ]
null
null
null
[CONTENT] statin | fast skeletal troponin | slow skeletal troponin | muscle damage | creatine kinase [SUMMARY]
null
[CONTENT] statin | fast skeletal troponin | slow skeletal troponin | muscle damage | creatine kinase [SUMMARY]
null
[CONTENT] statin | fast skeletal troponin | slow skeletal troponin | muscle damage | creatine kinase [SUMMARY]
null
[CONTENT] Aged | Creatine Kinase | Cross-Sectional Studies | Female | Humans | Hydroxymethylglutaryl-CoA Reductase Inhibitors | Male | Muscle, Skeletal | Pilot Projects | Troponin I [SUMMARY]
null
[CONTENT] Aged | Creatine Kinase | Cross-Sectional Studies | Female | Humans | Hydroxymethylglutaryl-CoA Reductase Inhibitors | Male | Muscle, Skeletal | Pilot Projects | Troponin I [SUMMARY]
null
[CONTENT] Aged | Creatine Kinase | Cross-Sectional Studies | Female | Humans | Hydroxymethylglutaryl-CoA Reductase Inhibitors | Male | Muscle, Skeletal | Pilot Projects | Troponin I [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
[CONTENT] ck | muscle | markers | fstni | study | values | skeletal | variables | cat | patients [SUMMARY]
[CONTENT] muscle | ck | specific | markers | damage | type | effects | cardiac | marker | fibres [SUMMARY]
[CONTENT] table | significantly | correlated | statins | higher | positively | markers | groups | ck | measured [SUMMARY]
[CONTENT] tables | supplementary tables | ck | muscle | supplementary | cat | coulter | beckman coulter | beckman | variables [SUMMARY]
[CONTENT] Statin | creatine kinase | CK ||| ||| fsTnI ||| fsTnI | statin [SUMMARY]
[CONTENT] CK | statin | 0.027 ||| Isoform fsTnI | 0.009 ||| fsTnI | CK | statin | 0.007 ||| Cardiac [SUMMARY]
[CONTENT] Statin | creatine kinase | CK ||| ||| fsTnI ||| fsTnI | statin ||| 51 | 14 | 37 | the University of Ferrara Academic Hospital ||| CK | assays ||| fsTnI ||| CK-MB ||| ||| CK | statin | 0.027 ||| Isoform fsTnI | 0.009 ||| fsTnI | CK | statin | 0.007 ||| Cardiac ||| Serum fsTnI | statin [SUMMARY]
Clinical and economic outcomes for patients initiating fluticasone propionate/salmeterol combination therapy (250/50 mcg) versus anticholinergics in a comorbid COPD/depression population.
22315518
Chronic obstructive pulmonary disease (COPD) is frequently associated with comorbid depression and anxiety. Managing COPD symptoms and exacerbations through use of appropriate and adequate pharmacotherapy in this population may result in better COPD-related outcomes.
BACKGROUND
This retrospective, observational study used administrative claims of patients aged 40 years and older with COPD and comorbid depression/anxiety identified from January 1, 2004 through June 30, 2008. Patients were assigned to fluticasone propionate/salmeterol 250/50 mcg combination (FSC) or anticholinergics (AC) based on their first (index) prescription. The risks of COPD exacerbations and healthcare utilization and costs were compared between cohorts during 1 year of follow-up.
METHODS
The adjusted risk of a COPD-related exacerbation during the 1-year follow-up period was 30% higher in the AC cohort (n = 2923) relative to the FSC cohort (n = 1078) (odds ratio [OR]: 1.30, 95% confidence interval [CI]: 1.08-1.56) after controlling for baseline differences in covariates. The risks of COPD-related hospitalizations and emergency department visits were 56% and 65% higher, respectively, in the AC cohort compared with the FSC cohort. The average number of COPD-related hospitalizations during the follow-up period was 46% higher for the AC cohort compared with the FSC cohort (incidence rate ratio [IRR]: 1.46, 95% CI: 1.01-2.09, P = 0.041). The savings from lower COPD-related medical costs ($692 vs $1042, P < 0.050) kept the COPD-related total costs during the follow-up period comparable to those in the AC cohort ($1659 vs $1677, P > 0.050) although the pharmacy costs were higher in the FSC cohort.
RESULTS
FSC compared with AC was associated with more favorable COPD-related outcomes and lower COPD-related utilization and medical costs among patients with COPD and comorbid anxiety/depression.
CONCLUSIONS
[ "Adult", "Albuterol", "Androstadienes", "Cholinergic Antagonists", "Cost of Illness", "Depression", "Drug Combinations", "Female", "Fluticasone-Salmeterol Drug Combination", "Follow-Up Studies", "Glucocorticoids", "Hospitalization", "Humans", "Male", "Middle Aged", "Pulmonary Disease, Chronic Obstructive", "Retrospective Studies", "Treatment Outcome" ]
3273366
Introduction
Chronic obstructive pulmonary disease (COPD) affects an estimated 24 million individuals and incurred an estimated annual cost of $49.9 billion in the US in 2010. Addressing this issue is important because of the increasing number of diagnosed cases.1–4 Neuropsychiatric symptoms and disorders including major depression, depressive symptoms, and anxiety are commonly comorbid with COPD.5,6 Estimates of the prevalence of depression in COPD vary but are generally higher than those reported in some other advanced chronic diseases.5 A cross-sectional study of 18,588 individuals in the 2004 US-based Health and Retirement Survey found that depressive symptoms were more common in COPD than in coronary heart disease, stroke, diabetes, arthritis, hypertension, and cancer.7 Of the 1736 individuals with self-reported COPD, 40% had at least three depressive symptoms. In another general practice study, the risk of depression was 2.5 times greater for patients with severe COPD than for controls without COPD or asthma.8 The risk of comorbid depression was not increased in mild-to-moderate COPD. The authors concluded that patients with severe COPD are at heightened risk of depression and that the results highlight the importance of reducing symptoms and improving physical functioning in patients with COPD.
Depression and anxiety in COPD may further increase the burden of the disease. Depression independently predicts poor quality of life in patients with COPD.9–11 Depression was among several predictors of lack of adherence to the recommended criteria for use of inhaled corticosteroids in a sample of 10,711 patients with COPD in the primary care setting.12 Furthermore, the presence of depression predicted noncompletion of pulmonary rehabilitation in COPD.13
The impact of COPD therapies on outcomes and costs in patients with COPD who have comorbid anxiety and/or depression is not fully understood.
Bronchodilators (including beta-agonists and anticholinergics) are central to COPD management as they can reduce the frequency and severity of acute exacerbations; however, their impact in a cohort of COPD patients with comorbid depression has not been studied.14 Evidence suggests that combination long-acting beta agonist/inhaled corticosteroid therapy may be more beneficial than long-acting anticholinergic therapy with respect to clinical and economic outcomes.15,16 This study was conducted to compare the risk of COPD exacerbations and COPD-related health care utilization and costs between patients initiating inhaled corticosteroid/long-acting beta agonist combination therapy (fluticasone propionate/salmeterol 250/50 mcg combination [FSC]) and those initiating anticholinergics (ACs) in patients with COPD and comorbid anxiety/depression.
Statistical analysis
Pretreatment characteristics were summarized with descriptive statistics. Inferential statistics (chi-square test for categorical variables, t-test or Mann–Whitney test for continuous variables) were used to quantify differences between cohorts. Multivariate regression models were used to examine the association between the initiation of index drugs and outcomes while controlling for potential confounders. Covariates were chosen a priori for all models on the basis of clinical relevance and significant differences at baseline. The covariates were age, male gender, Charlson comorbidity index, upper respiratory tract infection, cardiovascular disease, asthma, presence of short-acting beta-agonist, presence of oral corticosteroid, presence of home oxygen therapy, and presence of COPD-related hospitalizations. A logistic regression model was used to analyze differences between cohorts in the risk of COPD exacerbations. A zero-inflated negative binomial regression model was used to analyze differences between cohorts in the number of COPD-related exacerbations. Differences between cohorts in COPD-related costs were evaluated with a generalized linear model using a gamma distribution with a log link function while controlling for covariates. This method has the advantage of estimating the adjusted costs directly without the need for retransformation while simultaneously using log-transformed costs in its estimation. The method of recycled predictions was used to obtain predicted costs for each cohort.
Results
Sample
Among patients with a diagnosis of COPD and a prescription filled for COPD maintenance medications during the study period, 34,189 (11.5%) patients had a diagnosis of depression or anxiety and a prescription filled for depression or anxiety medications between 1 year pre-index and 60 days post-index. Of these 34,189 patients, 30,188 (88.3%) were excluded from the study, mainly because of noncontinuous eligibility (44.4%) and no COPD diagnosis (25.5%) during the pre-index period through 60 days after the index date. After applying exclusion criteria, the final sample comprised 4001 patients (n = 1078 FSC; n = 2923 AC). Table 1 shows demographics and pre-index clinical characteristics, health care use, and costs. In the FSC cohort compared with the AC cohort, patients were younger, more likely to be female, had a lower Charlson comorbidity index at baseline, and were more likely to have asthma. Pre-index COPD was more severe in the FSC cohort than the AC cohort, reflected by the higher use of short-acting beta-agonist canisters and oral corticosteroid prescriptions. During the 1-year pre-index period, the proportion of patients having a COPD-related hospitalization and COPD-related medical costs were significantly lower in the FSC cohort than the AC cohort. Pre-index COPD-related pharmacy costs were significantly higher for the FSC cohort than the AC cohort.
Clinical and economic outcomes
Unadjusted outcomes
Table 2 shows unadjusted data on COPD-related outcomes and costs. In the unadjusted dataset, the proportion of patients with any COPD-related exacerbation in the follow-up period was lower in the FSC cohort compared with the AC cohort (18.6% vs 23.1%, P = 0.002). The annual number of COPD-related hospitalizations was significantly lower for the FSC cohort compared with the AC cohort (0.04 vs 0.07, P < 0.001). Despite higher annual COPD-related pharmacy costs in the FSC cohort compared with the AC cohort (US$934 vs US$684, P < 0.001), COPD-related annual total costs were significantly lower for the FSC cohort (US$1604 vs US$1687, P < 0.001), driven by lower medical costs (US$670 vs US$1003, P = 0.001).
Adjusted outcomes
After controlling for differences in baseline covariates between the cohorts, the adjusted overall risk of having any COPD-related exacerbation during the 1-year follow-up period was 30% higher in the AC cohort relative to the FSC cohort (odds ratio [OR]: 1.30, 95% confidence interval [CI]: 1.08–1.56) (Figure 2). The risks of COPD-related hospitalizations and emergency department visits were 56% and 65% higher, respectively, in the AC cohort compared with the FSC cohort (Figure 2). The average number of COPD-related hospitalizations during the 1-year follow-up period was 46% higher for the AC cohort compared with the FSC cohort (incidence rate ratio [IRR]: 1.46, 95% CI: 1.01–2.09, P = 0.041). The average number of combined COPD-related hospitalizations and emergency department visits was 31% higher in the AC cohort compared with the FSC cohort (IRR: 1.31, 95% CI: 1.05–1.72, P = 0.045). The cohorts did not differ significantly with respect to the numbers of COPD-related emergency department visits or COPD-related physician office visits with an oral corticosteroid/antibiotic dispensed within 5 days of the visit. Table 3 shows adjusted COPD-related costs by cohort. The savings from lower COPD-related medical costs (US$692 vs US$1042, P < 0.050) kept the COPD-related total costs during the 1-year follow-up period comparable to those in the AC cohort (US$1659 vs US$1677, P > 0.050), although the FSC group had higher COPD-related pharmacy costs (US$941 vs US$683, P < 0.050).
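The odds ratios and confidence intervals reported above are obtained by exponentiating logistic-regression coefficients: OR = exp(β) and 95% CI = exp(β ± 1.96·SE). A minimal sketch of that back-transformation, using an illustrative standard error (not reported in the study) chosen so the numbers reproduce the AC-vs-FSC exacerbation OR of 1.30 (95% CI: 1.08–1.56):

```python
import math

# Back-transform a logistic coefficient into an odds ratio with a 95% CI.
# beta and se are illustrative values, not taken from the study's output.
beta = math.log(1.30)   # coefficient for the AC-cohort indicator
se = 0.093              # assumed standard error

or_point = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"OR: {or_point:.2f}, 95% CI: {ci_low:.2f}-{ci_high:.2f}")
```

Note that the CI is symmetric around β on the log scale, which is why it appears asymmetric around the OR itself (1.30 − 1.08 ≠ 1.56 − 1.30).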
[ "Introduction", "Methods", "Study design", "Data source", "Sample", "Outcomes and costs", "Sample", "Clinical and economic outcomes", "Unadjusted outcomes", "Adjusted outcomes" ]
[ "Chronic obstructive pulmonary disease (COPD) affects an estimated 24 million individuals and incurred an estimated annual $49.9 billion cost in the US in 2010. Addressing this issue is important because of the increasing number of diagnosed cases.1–4 Neuropsychiatric symptoms and disorders including major depression, depressive symptoms, and anxiety are commonly comorbid with COPD.5,6 Estimates of prevalence of depression in COPD vary but are generally higher than those reported in some other advanced chronic diseases.5 A cross-sectional study of 18,588 individuals in the 2004 US-based Health and Retirement Survey found that depressive symptoms were more common in COPD than in coronary heart disease, stroke, diabetes, arthritis, hypertension, and cancer.7 Of the 1736 individuals with self-reported COPD, 40% had at least three depressive symptoms. In another general practice study, the risk of depression was 2.5 times greater for patients with severe COPD than for controls without COPD or asthma.8 The risk of comorbid depression was not increased in mild-to-moderate COPD. The authors concluded that patients with severe COPD are at heightened risk of depression and that the results highlight the importance of reducing symptoms and improving physical functioning in patients with COPD.\nDepression and anxiety in COPD may additionally adversely increase the burden of COPD. Depression independently predicts poor quality of life in patients with COPD.9–11 Depression was among several predictors of lack of adherence to the recommended criteria for use of inhaled corticosteroids in a sample of 10,711 patients with COPD in the primary care setting.12 Furthermore, the presence of depression predicted noncompletion of pulmonary rehabilitation in COPD.13\nThe impact of COPD therapies on outcomes and costs in patients with COPD who have comorbid anxiety and/or depression is not fully understood. 
Bronchodilators (including beta-agonists and anticholinergics) are central to COPD management as they can reduce the frequency and severity of acute exacerbations; however, their impact in a cohort of COPD patients with comorbid depression has not been studied.14 Evidence suggests that combination long-acting beta agonist/inhaled corticosteroid therapy may be more beneficial than long-acting anticholinergic therapy with respect to clinical and economic outcomes.15,16 This study was conducted to compare the risk of COPD exacerbations and COPD-related health care utilization and costs between patients initiating inhaled corticosteroid/long-acting beta agonist combination therapy (fluticasone propionate/salmeterol 250/50 mcg combination [FSC]) and those initiating anticholinergics (ACs) in patients with COPD and comorbid anxiety/depression.", " Study design Figure 1 shows the design of this observational, retrospective cohort study. The study period for this analysis ranged from January 1st, 2003 through June 30th, 2009. The population was patients aged 40 years and older with COPD and comorbid depression/anxiety enrolled in health plans. The index date was defined as the date of the first prescription for maintenance COPD treatment (FSC; ACs including tiotropium, ipratropium, or combination ipratropium-albuterol; inhaled corticosteroid; long-acting beta-agonist) during the period from January 1st, 2004 to June 30th, 2008, termed as the enrollment period. Of patients with at least one pharmacy claim for a maintenance medication used to treat COPD, patients were considered to have comorbid depression or anxiety if they had at least one prescription claim for a medication used to treat depression or anxiety along with a diagnosis code for depression during or anxiety, respectively, 1 year pre-index or within 60 days after the index date. 
The 12-month period before the index date (pre-index period) was used to characterize the study population at baseline, and the 12-month period after the index date (follow-up period) was used to assess all study outcomes. The first 60 days of the follow-up period after the index date (the clean period) was used to ensure receipt of monotherapy by requiring no use of other COPD maintenance medications. This study was exempt from an institutional review board approval as it was retrospective in nature, did not involve an intervention, and utilized anonymized data.\nFigure 1 shows the design of this observational, retrospective cohort study. The study period for this analysis ranged from January 1st, 2003 through June 30th, 2009. The population was patients aged 40 years and older with COPD and comorbid depression/anxiety enrolled in health plans. The index date was defined as the date of the first prescription for maintenance COPD treatment (FSC; ACs including tiotropium, ipratropium, or combination ipratropium-albuterol; inhaled corticosteroid; long-acting beta-agonist) during the period from January 1st, 2004 to June 30th, 2008, termed as the enrollment period. Of patients with at least one pharmacy claim for a maintenance medication used to treat COPD, patients were considered to have comorbid depression or anxiety if they had at least one prescription claim for a medication used to treat depression or anxiety along with a diagnosis code for depression during or anxiety, respectively, 1 year pre-index or within 60 days after the index date. The 12-month period before the index date (pre-index period) was used to characterize the study population at baseline, and the 12-month period after the index date (follow-up period) was used to assess all study outcomes. The first 60 days of the follow-up period after the index date (the clean period) was used to ensure receipt of monotherapy by requiring no use of other COPD maintenance medications. 
This study was exempt from an institutional review board approval as it was retrospective in nature, did not involve an intervention, and utilized anonymized data.\n Data source Data were obtained from the Ingenix Impact National Benchmark database (formerly, IHCIS), a comprehensive US medical claims database generally representative of the insured US population aged <65 years. The data are collected from more than 46 health care plans serving members across nine census regions. It does not include Medicaid or Medicare information. The database contains inpatient/outpatient and pharmacy claims, lab results, and enrollment information on more than 98 million lives from 1997 to 2010. Roughly 30.8 million patients in the IHCIS database have at least 2 years of both medical and pharmacy benefits; roughly 18.3 million patients have at least 3 years of both medical and pharmacy benefits. The data are fully de-identified and compliant with the Health Insurance Portability and Accountability Act.\nData were obtained from the Ingenix Impact National Benchmark database (formerly, IHCIS), a comprehensive US medical claims database generally representative of the insured US population aged <65 years. The data are collected from more than 46 health care plans serving members across nine census regions. It does not include Medicaid or Medicare information. The database contains inpatient/outpatient and pharmacy claims, lab results, and enrollment information on more than 98 million lives from 1997 to 2010. Roughly 30.8 million patients in the IHCIS database have at least 2 years of both medical and pharmacy benefits; roughly 18.3 million patients have at least 3 years of both medical and pharmacy benefits. 
The data are fully de-identified and compliant with the Health Insurance Portability and Accountability Act.\n Sample The target population had a diagnosis of COPD (ICD-9-CM codes 491.xx, 492.xx, 496.xx) in any field in the pre-index period or 60 days after the index date; absence of exclusionary comorbid conditions (respiratory cancer, cystic fibrosis, fibrosis due to tuberculosis, and bronchiectasis, pneumoconiosis, pulmonary fibrosis, pulmonary tuberculosis, sarcoidosis) during the 1 year pre-index or post-index (follow-up) periods; and an index date occurring during the enrollment period. In addition, a diagnosis of depression or anxiety in any field, along with a medication for treating depression or anxiety was also required. Finally, patients could not have received maintenance medications for COPD other than the index medication on or 60 days after the index date and had to be continuously eligible to receive health care services during the 1-year pre-index and post-index (follow-up) periods.\nPatients were categorized on the index date into the FSC cohort or the AC cohort depending on their medication use. Patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist were excluded from the study as only 8% of the sample received these drug classes. It was thought that statistical and scientific conclusions based on the small number of patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist would not be justified.\nThe target population had a diagnosis of COPD (ICD-9-CM codes 491.xx, 492.xx, 496.xx) in any field in the pre-index period or 60 days after the index date; absence of exclusionary comorbid conditions (respiratory cancer, cystic fibrosis, fibrosis due to tuberculosis, and bronchiectasis, pneumoconiosis, pulmonary fibrosis, pulmonary tuberculosis, sarcoidosis) during the 1 year pre-index or post-index (follow-up) periods; and an index date occurring during the enrollment period. 
In addition, a diagnosis of depression or anxiety in any field, along with a medication for treating depression or anxiety was also required. Finally, patients could not have received maintenance medications for COPD other than the index medication on or 60 days after the index date and had to be continuously eligible to receive health care services during the 1-year pre-index and post-index (follow-up) periods.\nPatients were categorized on the index date into the FSC cohort or the AC cohort depending on their medication use. Patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist were excluded from the study as only 8% of the sample received these drug classes. It was thought that statistical and scientific conclusions based on the small number of patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist would not be justified.\n Outcomes and costs Clinical outcomes were risk of and number of COPD exacerbations during the 1-year follow-up period. A COPD exacerbation was defined as an emergency department visit with a primary diagnosis code for COPD, a hospitalization with a primary discharge diagnosis for COPD, or a physician visit with a primary diagnosis code for COPD plus a prescription for an oral corticosteroid or antibiotic within 5 days of the visit. When computing the number of exacerbations, an exacerbation within 45 days of a previous exacerbation was not counted as a separate exacerbation.17 COPD-related total (medical plus pharmacy), medical, and pharmacy costs were also determined during the 1-year follow-up period and summed to yield annual costs, which were standardized to 2009 $US using the Consumer Price Index for US medical care. COPD-related medical costs were computed from the paid amounts of medical claims with a primary diagnosis code for COPD. 
COPD-related pharmacy costs were computed from paid amounts of COPD-related prescription medications (including ACs, short-acting beta agonists, long-acting beta-agonists, inhaled corticosteroids, combination inhaled corticosteroids/long-acting beta-agonists, methylxanthines, oral corticosteroids, and antibiotics for respiratory infections) identified using national drug codes, health care common procedure coding system codes beginning with the letter J, or current procedural terminology codes as appropriate.\nCohorts were also compared with respect to pretreatment characteristics including demographics (age, sex, US census region), comorbidities during the pre-index period (Charlson comorbidity index score,18 asthma), and proxies for COPD severity during the pre-index period (number of canisters of inhaled short-acting beta-agonists, number of prescriptions for oral corticosteroids, use of home oxygen therapy, number of hospitalizations/emergency department visits for COPD, number of physician visits with a prescription for COPD). Age, sex, and US geographic region at the index date were obtained from enrollment files. The Dartmouth–Manitoba adaptation of the Charlson comorbidity index score18 – a weighted index of 19 chronic medical conditions that predict mortality, postoperative complications, and length of hospital stay – was calculated for each patient based on diagnoses reported during the pre-index period (excluding COPD codes). The number of canisters of short-acting beta-agonists was computed by dividing the quantity dispensed in mg by mg per canister. The use of home oxygen therapy was categorized as a binary variable (use, no use) based on current procedural terminology codes for home oxygen therapy on medical claims.\nClinical outcomes were risk of and number of COPD exacerbations during the 1-year follow-up period. 
Statistical analysis
Pretreatment characteristics were summarized with descriptive statistics. Inferential statistics (chi-square test for categorical variables, t-test or Mann–Whitney test for continuous variables) were used to quantify differences between cohorts.

Multivariate regression models were used to examine the association between the initiation of index drugs and outcomes while controlling for potential confounders. Covariates were chosen a priori for all models on the basis of clinical relevance and significant differences at baseline. The covariates were age, male gender, Charlson comorbidity index, upper respiratory tract infection, cardiovascular disease, asthma, presence of short-acting beta-agonist use, presence of oral corticosteroid use, presence of home oxygen therapy, and presence of COPD-related hospitalizations. A logistic regression model was used to analyze differences between cohorts in the risk of COPD exacerbations. A zero-inflated negative binomial regression model was used to analyze differences between cohorts in the number of COPD-related exacerbations.

Differences between cohorts in COPD-related costs were evaluated with a generalized linear model using a gamma distribution with a log link function while controlling for covariates.
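For the logistic model, the cohort effect is reported as an adjusted odds ratio with a Wald confidence interval, obtained by exponentiating the fitted coefficient and its interval endpoints. A minimal sketch; the standard error below is back-calculated from the reported interval purely for illustration:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative coefficient and (back-calculated) standard error that
# reproduce an OR of 1.30 with 95% CI 1.08-1.56.
or_, lo, hi = odds_ratio_ci(math.log(1.30), 0.0946)
```

Because the interval is symmetric on the log-odds scale, the upper and lower bounds are asymmetric around the odds ratio itself.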
This method estimates adjusted costs directly on the original dollar scale, avoiding the retransformation problem that arises when models are instead fit to log-transformed costs. The method of recycled predictions was used to obtain predicted costs for each cohort.

Study design
Figure 1 shows the design of this observational, retrospective cohort study. The study period for this analysis ranged from January 1st, 2003 through June 30th, 2009.
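The recycled-predictions step described above is model-agnostic: every patient's covariates are held fixed while the cohort indicator is set first to one value and then to the other, and the model's predictions are averaged each time. A minimal numpy sketch, assuming a fitted log-link model with design matrix `X` and coefficients `beta` (both hypothetical names):

```python
import numpy as np

def recycled_predictions(X, beta, treat_col):
    """Mean predicted cost with everyone assigned to each cohort in turn,
    assuming a log-link model: E[cost] = exp(X @ beta)."""
    X1, X0 = X.copy(), X.copy()
    X1[:, treat_col] = 1.0   # predict as if every patient initiated FSC
    X0[:, treat_col] = 0.0   # predict as if every patient initiated AC
    return np.exp(X1 @ beta).mean(), np.exp(X0 @ beta).mean()

# Toy example: intercept-plus-indicator model, coefficient 0.3 on cohort.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 0.0]])
beta = np.array([1.0, 0.3])
treated_mean, control_mean = recycled_predictions(X, beta, treat_col=1)
```

With only an intercept and the cohort indicator, the ratio of the two means reduces to exp of the cohort coefficient; with real covariates, the averages differ over the observed covariate distribution.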
The population was patients aged 40 years and older with COPD and comorbid depression/anxiety enrolled in health plans. The index date was defined as the date of the first prescription for a maintenance COPD treatment (FSC; ACs including tiotropium, ipratropium, or combination ipratropium-albuterol; inhaled corticosteroid; long-acting beta-agonist) during the period from January 1st, 2004 to June 30th, 2008, termed the enrollment period. Of patients with at least one pharmacy claim for a maintenance medication used to treat COPD, patients were considered to have comorbid depression or anxiety if they had at least one prescription claim for a medication used to treat depression or anxiety along with a diagnosis code for depression or anxiety, respectively, during the 1-year pre-index period or within 60 days after the index date. The 12-month period before the index date (pre-index period) was used to characterize the study population at baseline, and the 12-month period after the index date (follow-up period) was used to assess all study outcomes. The first 60 days of the follow-up period after the index date (the clean period) were used to ensure receipt of monotherapy by requiring no use of other COPD maintenance medications. This study was exempt from institutional review board approval as it was retrospective in nature, did not involve an intervention, and utilized anonymized data.

Data source
Data were obtained from the Ingenix Impact National Benchmark database (formerly IHCIS), a comprehensive US medical claims database generally representative of the insured US population aged <65 years. The data are collected from more than 46 health care plans serving members across nine census regions. It does not include Medicaid or Medicare information. The database contains inpatient/outpatient and pharmacy claims, lab results, and enrollment information on more than 98 million lives from 1997 to 2010.
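The windowing logic above (index date as the first maintenance fill inside the enrollment period, 12-month pre-index and follow-up windows, 60-day clean period) can be sketched as follows. Approximating 12 months as 365 days is an assumption; the function name and dictionary keys are illustrative:

```python
from datetime import date, timedelta

ENROLL_START, ENROLL_END = date(2004, 1, 1), date(2008, 6, 30)

def study_windows(maintenance_fill_dates):
    """Index date = first maintenance-COPD fill in the enrollment period;
    derive the pre-index, clean, and follow-up windows from it."""
    fills = sorted(d for d in maintenance_fill_dates
                   if ENROLL_START <= d <= ENROLL_END)
    if not fills:
        return None  # no qualifying fill: patient is not indexed
    idx = fills[0]
    return {"index": idx,
            "pre_index_start": idx - timedelta(days=365),
            "clean_period_end": idx + timedelta(days=60),
            "follow_up_end": idx + timedelta(days=365)}

w = study_windows([date(2003, 5, 1), date(2005, 3, 10), date(2006, 1, 1)])
```

Note that fills before the enrollment period (like the 2003 fill above) are ignored when choosing the index date, even though they may fall inside another patient's pre-index window.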
Roughly 30.8 million patients in the IHCIS database have at least 2 years of both medical and pharmacy benefits; roughly 18.3 million patients have at least 3 years of both medical and pharmacy benefits. The data are fully de-identified and compliant with the Health Insurance Portability and Accountability Act.

Sample
The target population had a diagnosis of COPD (ICD-9-CM codes 491.xx, 492.xx, 496.xx) in any field in the pre-index period or within 60 days after the index date; absence of exclusionary comorbid conditions (respiratory cancer, cystic fibrosis, fibrosis due to tuberculosis, bronchiectasis, pneumoconiosis, pulmonary fibrosis, pulmonary tuberculosis, sarcoidosis) during the 1-year pre-index or post-index (follow-up) periods; and an index date occurring during the enrollment period. In addition, a diagnosis of depression or anxiety in any field, along with a medication for treating depression or anxiety, was also required. Finally, patients could not have received maintenance medications for COPD other than the index medication on or within 60 days after the index date and had to be continuously eligible to receive health care services during the 1-year pre-index and post-index (follow-up) periods.

Patients were categorized on the index date into the FSC cohort or the AC cohort depending on their medication use. Patients who initiated therapy with an inhaled corticosteroid alone or a long-acting beta-agonist alone were excluded from the study because only 8% of the sample received these drug classes; statistical and scientific conclusions based on such small numbers would not be justified.
Results
Among patients with a diagnosis of COPD and a prescription filled for COPD maintenance medications during the study period, 34,189 (11.5%) patients had a diagnosis of depression or anxiety and a prescription filled for depression or anxiety medications between 1 year pre-index and 60 days post-index date. Of these 34,189 patients, 30,188 (88.3%) were excluded from the study, mainly because of noncontinuous eligibility (44.4%) and no COPD diagnosis (25.5%) during the pre-index period through 60 days after the index date. After applying the exclusion criteria, the final sample comprised 4001 patients (n = 1078 FSC; n = 2923 AC).

Table 1 shows demographics and pre-index clinical characteristics, health care use, and costs. Compared with the AC cohort, patients in the FSC cohort were younger, more likely to be female, had a lower Charlson comorbidity index at baseline, and were more likely to have asthma. Pre-index COPD was more severe in the FSC cohort than in the AC cohort, reflected by higher use of short-acting beta-agonist canisters and oral corticosteroid prescriptions. During the 1-year pre-index period, both the proportion of patients with a COPD-related hospitalization and COPD-related medical costs were significantly lower in the FSC cohort than in the AC cohort.
Pre-index COPD-related pharmacy costs were significantly higher for the FSC cohort than for the AC cohort.

Unadjusted outcomes
Table 2 shows unadjusted data on COPD-related outcomes and costs. In the unadjusted dataset, the proportion of patients with any COPD-related exacerbation in the follow-up period was lower in the FSC cohort than in the AC cohort (18.6% vs 23.1%, P = 0.002). The annual number of COPD-related hospitalizations was significantly lower for the FSC cohort than for the AC cohort (0.04 vs 0.07, P < 0.001). Despite higher annual COPD-related pharmacy costs in the FSC cohort (US$934 vs US$684, P < 0.001), COPD-related annual total costs were significantly lower for the FSC cohort (US$1604 vs US$1687, P < 0.001), driven by lower medical costs (US$670 vs US$1003, P = 0.001).

Adjusted outcomes
After controlling for differences in baseline covariates between the cohorts, the adjusted overall risk of having any COPD-related exacerbation during the 1-year follow-up period was 30% higher in the AC cohort relative to the FSC cohort (odds ratio [OR]: 1.30, 95% confidence interval [CI]: 1.08–1.56) (Figure 2).
The risks of COPD-related hospitalizations and emergency department visits were 56% and 65% higher, respectively, in the AC cohort than in the FSC cohort (Figure 2).

The average number of COPD-related hospitalizations during the 1-year follow-up period was 46% higher for the AC cohort than for the FSC cohort (incidence rate ratio [IRR]: 1.46, 95% CI: 1.01–2.09, P = 0.041). The average number of combined COPD-related hospitalizations and emergency department visits was 31% higher in the AC cohort (IRR: 1.31, 95% CI: 1.05–1.72, P = 0.045). The cohorts did not differ significantly with respect to the numbers of COPD-related emergency department visits or COPD-related physician office visits with an oral corticosteroid/antibiotic dispensed within 5 days of the visit.

Table 3 shows adjusted COPD-related costs by cohort. Although the FSC cohort had higher COPD-related pharmacy costs (US$941 vs US$683, P < 0.05), its lower COPD-related medical costs (US$692 vs US$1042, P < 0.05) kept COPD-related total costs during the 1-year follow-up period comparable to those of the AC cohort (US$1659 vs US$1677, P > 0.05).
[ "Chronic obstructive pulmonary disease (COPD) affects an estimated 24 million individuals and incurred an estimated annual $49.9 billion cost in the US in 2010. Addressing this issue is important because of the increasing number of diagnosed cases.1–4 Neuropsychiatric symptoms and disorders including major depression, depressive symptoms, and anxiety are commonly comorbid with COPD.5,6 Estimates of prevalence of depression in COPD vary but are generally higher than those reported in some other advanced chronic diseases.5 A cross-sectional study of 18,588 individuals in the 2004 US-based Health and Retirement Survey found that depressive symptoms were more common in COPD than in coronary heart disease, stroke, diabetes, arthritis, hypertension, and cancer.7 Of the 1736 individuals with self-reported COPD, 40% had at least three depressive symptoms. In another general practice study, the risk of depression was 2.5 times greater for patients with severe COPD than for controls without COPD or asthma.8 The risk of comorbid depression was not increased in mild-to-moderate COPD. The authors concluded that patients with severe COPD are at heightened risk of depression and that the results highlight the importance of reducing symptoms and improving physical functioning in patients with COPD.\nDepression and anxiety in COPD may additionally adversely increase the burden of COPD. Depression independently predicts poor quality of life in patients with COPD.9–11 Depression was among several predictors of lack of adherence to the recommended criteria for use of inhaled corticosteroids in a sample of 10,711 patients with COPD in the primary care setting.12 Furthermore, the presence of depression predicted noncompletion of pulmonary rehabilitation in COPD.13\nThe impact of COPD therapies on outcomes and costs in patients with COPD who have comorbid anxiety and/or depression is not fully understood. 
Bronchodilators (including beta-agonists and anticholinergics) are central to COPD management as they can reduce the frequency and severity of acute exacerbations; however, their impact in a cohort of COPD patients with comorbid depression has not been studied.14 Evidence suggests that combination long-acting beta agonist/inhaled corticosteroid therapy may be more beneficial than long-acting anticholinergic therapy with respect to clinical and economic outcomes.15,16 This study was conducted to compare the risk of COPD exacerbations and COPD-related health care utilization and costs between patients initiating inhaled corticosteroid/long-acting beta agonist combination therapy (fluticasone propionate/salmeterol 250/50 mcg combination [FSC]) and those initiating anticholinergics (ACs) in patients with COPD and comorbid anxiety/depression.", " Study design Figure 1 shows the design of this observational, retrospective cohort study. The study period for this analysis ranged from January 1st, 2003 through June 30th, 2009. The population was patients aged 40 years and older with COPD and comorbid depression/anxiety enrolled in health plans. The index date was defined as the date of the first prescription for maintenance COPD treatment (FSC; ACs including tiotropium, ipratropium, or combination ipratropium-albuterol; inhaled corticosteroid; long-acting beta-agonist) during the period from January 1st, 2004 to June 30th, 2008, termed as the enrollment period. Of patients with at least one pharmacy claim for a maintenance medication used to treat COPD, patients were considered to have comorbid depression or anxiety if they had at least one prescription claim for a medication used to treat depression or anxiety along with a diagnosis code for depression during or anxiety, respectively, 1 year pre-index or within 60 days after the index date. 
The 12-month period before the index date (pre-index period) was used to characterize the study population at baseline, and the 12-month period after the index date (follow-up period) was used to assess all study outcomes. The first 60 days of the follow-up period after the index date (the clean period) was used to ensure receipt of monotherapy by requiring no use of other COPD maintenance medications. This study was exempt from an institutional review board approval as it was retrospective in nature, did not involve an intervention, and utilized anonymized data.\nFigure 1 shows the design of this observational, retrospective cohort study. The study period for this analysis ranged from January 1st, 2003 through June 30th, 2009. The population was patients aged 40 years and older with COPD and comorbid depression/anxiety enrolled in health plans. The index date was defined as the date of the first prescription for maintenance COPD treatment (FSC; ACs including tiotropium, ipratropium, or combination ipratropium-albuterol; inhaled corticosteroid; long-acting beta-agonist) during the period from January 1st, 2004 to June 30th, 2008, termed as the enrollment period. Of patients with at least one pharmacy claim for a maintenance medication used to treat COPD, patients were considered to have comorbid depression or anxiety if they had at least one prescription claim for a medication used to treat depression or anxiety along with a diagnosis code for depression during or anxiety, respectively, 1 year pre-index or within 60 days after the index date. The 12-month period before the index date (pre-index period) was used to characterize the study population at baseline, and the 12-month period after the index date (follow-up period) was used to assess all study outcomes. The first 60 days of the follow-up period after the index date (the clean period) was used to ensure receipt of monotherapy by requiring no use of other COPD maintenance medications. 
This study was exempt from an institutional review board approval as it was retrospective in nature, did not involve an intervention, and utilized anonymized data.\n Data source Data were obtained from the Ingenix Impact National Benchmark database (formerly, IHCIS), a comprehensive US medical claims database generally representative of the insured US population aged <65 years. The data are collected from more than 46 health care plans serving members across nine census regions. It does not include Medicaid or Medicare information. The database contains inpatient/outpatient and pharmacy claims, lab results, and enrollment information on more than 98 million lives from 1997 to 2010. Roughly 30.8 million patients in the IHCIS database have at least 2 years of both medical and pharmacy benefits; roughly 18.3 million patients have at least 3 years of both medical and pharmacy benefits. The data are fully de-identified and compliant with the Health Insurance Portability and Accountability Act.\nData were obtained from the Ingenix Impact National Benchmark database (formerly, IHCIS), a comprehensive US medical claims database generally representative of the insured US population aged <65 years. The data are collected from more than 46 health care plans serving members across nine census regions. It does not include Medicaid or Medicare information. The database contains inpatient/outpatient and pharmacy claims, lab results, and enrollment information on more than 98 million lives from 1997 to 2010. Roughly 30.8 million patients in the IHCIS database have at least 2 years of both medical and pharmacy benefits; roughly 18.3 million patients have at least 3 years of both medical and pharmacy benefits. 
The data are fully de-identified and compliant with the Health Insurance Portability and Accountability Act.\n Sample The target population had a diagnosis of COPD (ICD-9-CM codes 491.xx, 492.xx, 496.xx) in any field in the pre-index period or 60 days after the index date; absence of exclusionary comorbid conditions (respiratory cancer, cystic fibrosis, fibrosis due to tuberculosis, and bronchiectasis, pneumoconiosis, pulmonary fibrosis, pulmonary tuberculosis, sarcoidosis) during the 1 year pre-index or post-index (follow-up) periods; and an index date occurring during the enrollment period. In addition, a diagnosis of depression or anxiety in any field, along with a medication for treating depression or anxiety was also required. Finally, patients could not have received maintenance medications for COPD other than the index medication on or 60 days after the index date and had to be continuously eligible to receive health care services during the 1-year pre-index and post-index (follow-up) periods.\nPatients were categorized on the index date into the FSC cohort or the AC cohort depending on their medication use. Patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist were excluded from the study as only 8% of the sample received these drug classes. It was thought that statistical and scientific conclusions based on the small number of patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist would not be justified.\nThe target population had a diagnosis of COPD (ICD-9-CM codes 491.xx, 492.xx, 496.xx) in any field in the pre-index period or 60 days after the index date; absence of exclusionary comorbid conditions (respiratory cancer, cystic fibrosis, fibrosis due to tuberculosis, and bronchiectasis, pneumoconiosis, pulmonary fibrosis, pulmonary tuberculosis, sarcoidosis) during the 1 year pre-index or post-index (follow-up) periods; and an index date occurring during the enrollment period. 
In addition, a diagnosis of depression or anxiety in any field, along with a medication for treating depression or anxiety was also required. Finally, patients could not have received maintenance medications for COPD other than the index medication on or 60 days after the index date and had to be continuously eligible to receive health care services during the 1-year pre-index and post-index (follow-up) periods.\nPatients were categorized on the index date into the FSC cohort or the AC cohort depending on their medication use. Patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist were excluded from the study as only 8% of the sample received these drug classes. It was thought that statistical and scientific conclusions based on the small number of patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist would not be justified.\n Outcomes and costs Clinical outcomes were risk of and number of COPD exacerbations during the 1-year follow-up period. A COPD exacerbation was defined as an emergency department visit with a primary diagnosis code for COPD, a hospitalization with a primary discharge diagnosis for COPD, or a physician visit with a primary diagnosis code for COPD plus a prescription for an oral corticosteroid or antibiotic within 5 days of the visit. When computing the number of exacerbations, an exacerbation within 45 days of a previous exacerbation was not counted as a separate exacerbation.17 COPD-related total (medical plus pharmacy), medical, and pharmacy costs were also determined during the 1-year follow-up period and summed to yield annual costs, which were standardized to 2009 $US using the Consumer Price Index for US medical care. COPD-related medical costs were computed from the paid amounts of medical claims with a primary diagnosis code for COPD. 
COPD-related pharmacy costs were computed from paid amounts of COPD-related prescription medications (including ACs, short-acting beta agonists, long-acting beta-agonists, inhaled corticosteroids, combination inhaled corticosteroids/long-acting beta-agonists, methylxanthines, oral corticosteroids, and antibiotics for respiratory infections) identified using national drug codes, health care common procedure coding system codes beginning with the letter J, or current procedural terminology codes as appropriate.\nCohorts were also compared with respect to pretreatment characteristics including demographics (age, sex, US census region), comorbidities during the pre-index period (Charlson comorbidity index score,18 asthma), and proxies for COPD severity during the pre-index period (number of canisters of inhaled short-acting beta-agonists, number of prescriptions for oral corticosteroids, use of home oxygen therapy, number of hospitalizations/emergency department visits for COPD, number of physician visits with a prescription for COPD). Age, sex, and US geographic region at the index date were obtained from enrollment files. The Dartmouth–Manitoba adaptation of the Charlson comorbidity index score18 – a weighted index of 19 chronic medical conditions that predict mortality, postoperative complications, and length of hospital stay – was calculated for each patient based on diagnoses reported during the pre-index period (excluding COPD codes). The number of canisters of short-acting beta-agonists was computed by dividing the quantity dispensed in mg by mg per canister. The use of home oxygen therapy was categorized as a binary variable (use, no use) based on current procedural terminology codes for home oxygen therapy on medical claims.\nClinical outcomes were risk of and number of COPD exacerbations during the 1-year follow-up period. 
Statistical analysis

Pretreatment characteristics were summarized with descriptive statistics. Inferential statistics (chi-square test for categorical variables, t-test or Mann–Whitney test for continuous variables) were used to quantify differences between cohorts.

Multivariate regression models were used to examine the association between the initiation of index drugs and outcomes while controlling for potential confounders. Covariates were chosen a priori for all models on the basis of clinical relevance and significant differences at baseline. The covariates were age, male gender, Charlson comorbidity index, upper respiratory tract infection, cardiovascular disease, asthma, presence of short-acting beta-agonist, presence of oral corticosteroid, presence of home oxygen therapy, and presence of COPD-related hospitalizations. A logistic regression model was used to analyze differences between cohorts in the risk of COPD exacerbations. A zero-inflated negative binomial regression model was used to analyze differences between cohorts in the number of COPD-related exacerbations.

Differences between cohorts in COPD-related costs were evaluated with a generalized linear model using a gamma distribution with a log link function while controlling for covariates.
This method has the advantage of estimating the adjusted costs directly, without the need for retransformation, while still using log-transformed costs in its estimation. The method of recycled predictions was used to obtain predicted costs for each cohort.

Study design

Figure 1 shows the design of this observational, retrospective cohort study. The study period for this analysis ranged from January 1st, 2003 through June 30th, 2009.
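The recycled-predictions step described above can be illustrated with a toy log-link cost model (numpy only; coefficients and data are invented for illustration, and the gamma GLM fit itself is not reproduced here):

```python
import numpy as np

def recycled_prediction(predict, X, treat_col, value):
    """Predict for every patient with the treatment indicator forced to
    `value`, then average: the 'recycled predictions' estimate."""
    Xc = X.copy()
    Xc[:, treat_col] = value
    return predict(Xc).mean()

# Toy fitted log-link cost model: cost = exp(b0 + b1*treated + b2*covariate)
beta = np.array([7.0, -0.4, 0.01])      # invented coefficients
predict = lambda X: np.exp(X @ beta)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000),          # intercept
                     rng.integers(0, 2, 1000),   # treatment indicator
                     rng.normal(0, 10, 1000)])   # a covariate
cost_treated = recycled_prediction(predict, X, 1, 1.0)
cost_control = recycled_prediction(predict, X, 1, 0.0)

# With a log link, forcing the treatment indicator scales every prediction
# by exp(b1), so the ratio of recycled means is exactly exp(-b1):
print(round(cost_control / cost_treated, 3))  # 1.492
```

Because both recycled means are computed over the same covariate distribution, the comparison between cohorts is not confounded by baseline differences in the covariates.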
The population was patients aged 40 years and older with COPD and comorbid depression/anxiety enrolled in health plans. The index date was defined as the date of the first prescription for maintenance COPD treatment (FSC; ACs including tiotropium, ipratropium, or combination ipratropium-albuterol; inhaled corticosteroid; long-acting beta-agonist) during the period from January 1st, 2004 to June 30th, 2008, termed the enrollment period. Of patients with at least one pharmacy claim for a maintenance medication used to treat COPD, patients were considered to have comorbid depression or anxiety if they had at least one prescription claim for a medication used to treat depression or anxiety, along with a diagnosis code for depression or anxiety, respectively, during the 1-year pre-index period or within 60 days after the index date. The 12-month period before the index date (pre-index period) was used to characterize the study population at baseline, and the 12-month period after the index date (follow-up period) was used to assess all study outcomes. The first 60 days of the follow-up period after the index date (the clean period) were used to ensure receipt of monotherapy by requiring no use of other COPD maintenance medications. This study was exempt from institutional review board approval as it was retrospective in nature, did not involve an intervention, and utilized anonymized data.

Data source

Data were obtained from the Ingenix Impact National Benchmark database (formerly, IHCIS), a comprehensive US medical claims database generally representative of the insured US population aged <65 years. The data are collected from more than 46 health care plans serving members across nine census regions. The database does not include Medicaid or Medicare information. It contains inpatient/outpatient and pharmacy claims, lab results, and enrollment information on more than 98 million lives from 1997 to 2010.
Roughly 30.8 million patients in the IHCIS database have at least 2 years of both medical and pharmacy benefits; roughly 18.3 million patients have at least 3 years of both medical and pharmacy benefits. The data are fully de-identified and compliant with the Health Insurance Portability and Accountability Act.

Target population

The target population had a diagnosis of COPD (ICD-9-CM codes 491.xx, 492.xx, 496.xx) in any field during the pre-index period or within 60 days after the index date; an absence of exclusionary comorbid conditions (respiratory cancer, cystic fibrosis, fibrosis due to tuberculosis, bronchiectasis, pneumoconiosis, pulmonary fibrosis, pulmonary tuberculosis, and sarcoidosis) during the 1-year pre-index and post-index (follow-up) periods; and an index date occurring during the enrollment period.
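The observation windows anchored on the index date in the study design above (1-year pre-index period, 60-day clean period, 1-year follow-up period) can be sketched with a small date-arithmetic helper (hypothetical function; the actual implementation would operate on claims records):

```python
from datetime import date, timedelta

def study_windows(index_date):
    """Return the pre-index, clean, and follow-up windows as (start, end)
    date pairs, anchored on the index date."""
    return {
        "pre_index": (index_date - timedelta(days=365), index_date - timedelta(days=1)),
        "clean":     (index_date, index_date + timedelta(days=60)),
        "follow_up": (index_date, index_date + timedelta(days=365)),
    }

w = study_windows(date(2005, 6, 1))
print(w["clean"][1])  # 2005-07-31
```

A claim would then be assigned to a window by checking whether its service date falls between the corresponding start and end dates.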
Sample

Among patients with a diagnosis of COPD and a prescription filled for COPD maintenance medications during the study period, 34,189 (11.5%) had a diagnosis of depression or anxiety and a prescription filled for depression or anxiety medications between 1 year pre-index and 60 days post-index. Of these 34,189 patients, 30,188 (88.3%) were excluded from the study, mainly because of noncontinuous eligibility (44.4%) and no COPD diagnosis (25.5%) during the pre-index period through 60 days after the index date. After applying the exclusion criteria, the final sample comprised 4001 patients (n = 1078 FSC; n = 2923 AC).

Table 1 shows demographics and pre-index clinical characteristics, health care use, and costs. Compared with the AC cohort, patients in the FSC cohort were younger, more likely to be female, more likely to have asthma, and had a lower Charlson comorbidity index at baseline. Pre-index COPD was more severe in the FSC cohort than in the AC cohort, as reflected by the higher use of short-acting beta-agonist canisters and oral corticosteroid prescriptions. During the 1-year pre-index period, the proportion of patients with a COPD-related hospitalization and COPD-related medical costs were significantly lower in the FSC cohort than in the AC cohort. Pre-index COPD-related pharmacy costs were significantly higher for the FSC cohort than for the AC cohort.
Clinical and economic outcomes

Unadjusted outcomes

Table 2 shows unadjusted data on COPD-related outcomes and costs. In the unadjusted dataset, the proportion of patients with any COPD-related exacerbation in the follow-up period was lower in the FSC cohort than in the AC cohort (18.6% vs 23.1%, P = 0.002). The annual number of COPD-related hospitalizations was also significantly lower for the FSC cohort (0.04 vs 0.07, P < 0.001). Despite higher annual COPD-related pharmacy costs in the FSC cohort (US$934 vs US$684, P < 0.001), COPD-related annual total costs were significantly lower for the FSC cohort (US$1604 vs US$1687, P < 0.001), driven by lower medical costs (US$670 vs US$1003, P = 0.001).
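As a quick arithmetic check on the unadjusted cost figures reported above, annual total COPD-related costs should equal medical plus pharmacy costs:

```python
# Unadjusted 1-year COPD-related costs (2009 US$) as reported above
fsc = {"pharmacy": 934, "medical": 670, "total": 1604}
ac  = {"pharmacy": 684, "medical": 1003, "total": 1687}

for name, c in [("FSC", fsc), ("AC", ac)]:
    assert c["pharmacy"] + c["medical"] == c["total"]
    print(name, c["total"])
```

Both cohorts' reported components sum exactly to the reported totals.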
Adjusted outcomes

After controlling for differences in baseline covariates between the cohorts, the adjusted overall risk of having any COPD-related exacerbation during the 1-year follow-up period was 30% higher in the AC cohort relative to the FSC cohort (odds ratio [OR]: 1.30, 95% confidence interval [CI]: 1.08–1.56) (Figure 2). The risks of COPD-related hospitalizations and emergency department visits were 56% and 65% higher, respectively, in the AC cohort compared with the FSC cohort (Figure 2).

The average number of COPD-related hospitalizations during the 1-year follow-up period was 46% higher for the AC cohort compared with the FSC cohort (incidence rate ratio [IRR]: 1.46, 95% CI: 1.01–2.09, P = 0.041). The average number of combined COPD-related hospitalizations and emergency department visits was 31% higher in the AC cohort compared with the FSC cohort (IRR: 1.31, 95% CI: 1.05–1.72, P = 0.045). The cohorts did not differ significantly with respect to the numbers of COPD-related emergency department visits or COPD-related physician office visits with an oral corticosteroid/antibiotic dispensed within 5 days of the visit.

Table 3 shows adjusted COPD-related costs by cohort.
Savings from lower COPD-related medical costs in the FSC cohort (US$692 vs US$1042, P < 0.050) offset its higher COPD-related pharmacy costs (US$941 vs US$683, P < 0.050), keeping COPD-related total costs during the 1-year follow-up period comparable between the cohorts (US$1659 vs US$1677, P > 0.050).
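The adjusted odds ratio reported above (OR 1.30, 95% CI 1.08–1.56) can be sanity-checked for internal consistency: a Wald confidence interval for an odds ratio is symmetric on the log scale, so the point estimate should sit at the geometric midpoint of the interval bounds.

```python
import math

lo, hi = 1.08, 1.56
midpoint = math.exp((math.log(lo) + math.log(hi)) / 2)
print(round(midpoint, 2))  # 1.30

# Implied standard error of ln(OR) from a 95% Wald interval:
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(round(se, 3))
```

The geometric midpoint matches the reported OR of 1.30 to two decimal places.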
The savings from lower COPD-related medical costs (US$692 vs US$1042, P < 0.050) kept the COPD-related total costs in the FSC cohort during the 1-year follow-up period comparable to those in the AC cohort (US$1659 vs US$1677, P > 0.050), although the FSC cohort had higher COPD-related pharmacy costs (US$941 vs US$683, P < 0.050).

Discussion: This study is, to the authors' knowledge, the first to assess a cohort of COPD patients with comorbid anxiety/depression initiating maintenance therapy with FSC or AC. The use of FSC was associated with better clinical and economic COPD-related outcomes. The AC cohort compared with the FSC cohort had a significantly higher risk of experiencing any COPD-related exacerbation, exacerbations requiring hospitalization, and exacerbations requiring an emergency department visit during the 1-year study period. The number of COPD-related hospitalizations was lower for patients who initiated COPD maintenance therapy with FSC compared with AC. The lower risks of COPD-related hospitalizations and emergency department visits and the lower number of hospitalizations in the FSC cohort were associated with a reduction in COPD-related medical costs for the FSC cohort compared with the AC cohort.

In clinical studies comparing FSC and tiotropium, lung function improvements and exacerbation rates were similar, whereas quality of life was better in the FSC groups.19 In addition, patients taking inhaled corticosteroids tended to have fewer exacerbations that required treatment with oral corticosteroids compared with subjects taking tiotropium.
These differences (ie, an improved quality of life and a reduction in dyspnea during exacerbations) may reflect the effects of the inhaled corticosteroid-associated improvement in health in these patients and may help to explain why this therapy might improve outcomes in a depressed population.\nThe present study adds to a body of literature, from studies of patients with COPD selected without regard to the presence of comorbid depression, that supports the benefits of FSC versus both short- and long-acting anticholinergic bronchodilators in reducing the risk of COPD-related events and/or reducing medical and total costs.16,20–22 In an observational study of 14,689 patients aged ≥65 years with COPD in a commercial Medicare health maintenance organization plan, initial maintenance treatment with FSC was associated with total COPD-related cost savings (medical plus pharmacy) of US$110 versus tiotropium, US$295 versus ipratropium/albuterol, and US$1235 versus ipratropium (all of which are ACs) over a 1-year follow-up period.16 This reduction in total costs with FSC versus tiotropium more than offsets the higher pharmacy costs of FSC. 
In a retrospective, observational study involving Texas Medicaid beneficiaries aged 40 to 64 years (n = 6793), FSC compared with ipratropium was associated with a 27% lower risk of COPD-related hospitalization or emergency department visit and with similar COPD-related total costs (medical plus pharmacy costs) over 12 months.20 In another study, initial maintenance therapy with FSC compared with ipratropium was associated with a 56% lower risk of COPD-related hospitalization or emergency department visit and similar COPD-related total costs (with lower all-cause total costs) in patients ≥40 years with COPD.21 Finally, in a retrospective claims analysis involving 1051 adults ≥65 years with COPD, FSC compared with ipratropium was associated with a 45% reduction in the risk of COPD-related hospitalization or emergency department visit, and lower COPD-related medical costs.22 In several of these studies,16,21,22 FSC-associated reductions in medical costs appeared to more than offset the increase in pharmacy costs such that total costs (medical plus pharmacy) were lower with FSC (although total costs were not reported in one study);21 however, pharmacy costs were higher with FSC than with the short-acting anticholinergic bronchodilator. Across studies comparing FSC with short- and long-acting anticholinergic bronchodilators in patients selected without regard to presence of comorbid depression, FSC was associated with significantly lower risk of COPD-related events and, generally, lower total medical costs.\nIn the current study, definitions for COPD exacerbations included emergency department visits, hospitalizations, and physician visits with a prescription for an oral corticosteroid or antibiotic for respiratory infections. 
A similarly inclusive definition was used in recent studies of COPD exacerbations.23–26 Emergency department visits and hospitalizations reflect severe exacerbations, and physician visits with a prescription for an oral corticosteroid or antibiotic may reflect moderate exacerbations. The use of these definitions for COPD exacerbations provides a sensitive measure of treatment outcomes and reflects the spectrum of manifestations of COPD exacerbations in clinical practice.\nThe results of this study should be interpreted in the context of its limitations. Although analyses controlled for differences in baseline disease severity and other baseline characteristics in both adjusted cost models and risk models in the current study, such adjustment is imperfect and does not obviate the need for further assessment in cohorts balanced with respect to baseline characteristics. The possibility remains of residual confounding due to between-cohort differences in patient characteristics that were not controlled for in multivariate analysis. Other limitations of this study include potential errors in the coding of claims, the inability to verify the accuracy of diagnosis codes, and the absence of a means of assessing patient compliance. The potential for these biases is inherent in observational studies. If these biases were operating, they are likely to have been operating similarly between cohorts.\nPatients with COPD are at increased risk of depression and anxiety relative to those with other chronic disorders.5–7 This observational study is believed to be the first to examine the impact of pharmacotherapy on outcomes and costs in patients with COPD and depression or anxiety. The results show that FSC compared with AC was associated with more favorable COPD-related outcomes and lower COPD-related utilization and medical costs among patients with COPD and comorbid depression or anxiety. 
Further study of the influence of therapeutic intervention on outcomes, utilization, and costs in COPD and comorbid depression or anxiety is warranted.
Keywords: COPD; fluticasone propionate/salmeterol; anticholinergics; depression; comorbidity
Introduction: Chronic obstructive pulmonary disease (COPD) affects an estimated 24 million individuals and incurred an estimated annual cost of US$49.9 billion in the US in 2010. Addressing this disease is important because of the increasing number of diagnosed cases.1–4 Neuropsychiatric symptoms and disorders including major depression, depressive symptoms, and anxiety are commonly comorbid with COPD.5,6 Estimates of the prevalence of depression in COPD vary but are generally higher than those reported in some other advanced chronic diseases.5 A cross-sectional study of 18,588 individuals in the 2004 US-based Health and Retirement Survey found that depressive symptoms were more common in COPD than in coronary heart disease, stroke, diabetes, arthritis, hypertension, and cancer.7 Of the 1736 individuals with self-reported COPD, 40% had at least three depressive symptoms. In another general practice study, the risk of depression was 2.5 times greater for patients with severe COPD than for controls without COPD or asthma.8 The risk of comorbid depression was not increased in mild-to-moderate COPD. The authors concluded that patients with severe COPD are at heightened risk of depression and that the results highlight the importance of reducing symptoms and improving physical functioning in patients with COPD. Depression and anxiety may further increase the burden of COPD. Depression independently predicts poor quality of life in patients with COPD.9–11 Depression was among several predictors of lack of adherence to the recommended criteria for use of inhaled corticosteroids in a sample of 10,711 patients with COPD in the primary care setting.12 Furthermore, the presence of depression predicted noncompletion of pulmonary rehabilitation in COPD.13 The impact of COPD therapies on outcomes and costs in patients with COPD who have comorbid anxiety and/or depression is not fully understood.
Bronchodilators (including beta-agonists and anticholinergics) are central to COPD management as they can reduce the frequency and severity of acute exacerbations; however, their impact in a cohort of COPD patients with comorbid depression has not been studied.14 Evidence suggests that combination long-acting beta-agonist/inhaled corticosteroid therapy may be more beneficial than long-acting anticholinergic therapy with respect to clinical and economic outcomes.15,16 This study was conducted to compare the risk of COPD exacerbations and COPD-related health care utilization and costs between patients initiating inhaled corticosteroid/long-acting beta-agonist combination therapy (fluticasone propionate/salmeterol 250/50 mcg combination [FSC]) and those initiating anticholinergics (ACs) in patients with COPD and comorbid anxiety/depression.

Methods: Study design: Figure 1 shows the design of this observational, retrospective cohort study. The study period for this analysis ranged from January 1st, 2003 through June 30th, 2009. The population was patients aged 40 years and older with COPD and comorbid depression/anxiety enrolled in health plans. The index date was defined as the date of the first prescription for maintenance COPD treatment (FSC; ACs including tiotropium, ipratropium, or combination ipratropium-albuterol; inhaled corticosteroid; long-acting beta-agonist) during the period from January 1st, 2004 to June 30th, 2008, termed the enrollment period. Of patients with at least one pharmacy claim for a maintenance medication used to treat COPD, patients were considered to have comorbid depression or anxiety if they had at least one prescription claim for a medication used to treat depression or anxiety, along with a diagnosis code for depression or anxiety, respectively, during the 1 year pre-index or within 60 days after the index date.
The 12-month period before the index date (pre-index period) was used to characterize the study population at baseline, and the 12-month period after the index date (follow-up period) was used to assess all study outcomes. The first 60 days of the follow-up period after the index date (the clean period) were used to ensure receipt of monotherapy by requiring no use of other COPD maintenance medications. This study was exempt from institutional review board approval as it was retrospective in nature, did not involve an intervention, and utilized anonymized data.
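The pre-index, clean, and follow-up windows described above can be expressed as a small date classifier. This is an illustrative sketch, not the study's actual code; the function name and exact window boundaries (365-day years, inclusive 60-day clean period) are assumptions:

```python
from datetime import date, timedelta

def classify_claim(index_date: date, claim_date: date) -> str:
    """Assign a claim to a study window relative to the index date.

    Windows (per the study design): 12-month pre-index period,
    60-day clean period at the start of follow-up, and the
    12-month follow-up period as a whole.
    """
    pre_start = index_date - timedelta(days=365)
    follow_end = index_date + timedelta(days=365)
    if pre_start <= claim_date < index_date:
        return "pre-index"
    if index_date <= claim_date <= follow_end:
        # The first 60 days of follow-up form the "clean" period
        # used to verify monotherapy.
        if claim_date <= index_date + timedelta(days=60):
            return "follow-up (clean period)"
        return "follow-up"
    return "outside study period"
```

In practice each patient's claims would be bucketed this way before computing baseline characteristics (pre-index) and outcomes (follow-up).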
Data source: Data were obtained from the Ingenix Impact National Benchmark database (formerly IHCIS), a comprehensive US medical claims database generally representative of the insured US population aged <65 years. The data are collected from more than 46 health care plans serving members across nine census regions. It does not include Medicaid or Medicare information. The database contains inpatient/outpatient and pharmacy claims, lab results, and enrollment information on more than 98 million lives from 1997 to 2010. Roughly 30.8 million patients in the IHCIS database have at least 2 years of both medical and pharmacy benefits; roughly 18.3 million patients have at least 3 years of both medical and pharmacy benefits. The data are fully de-identified and compliant with the Health Insurance Portability and Accountability Act.
Sample: The target population had a diagnosis of COPD (ICD-9-CM codes 491.xx, 492.xx, 496.xx) in any field during the pre-index period or within 60 days after the index date; absence of exclusionary comorbid conditions (respiratory cancer, cystic fibrosis, fibrosis due to tuberculosis, bronchiectasis, pneumoconiosis, pulmonary fibrosis, pulmonary tuberculosis, sarcoidosis) during the 1-year pre-index or post-index (follow-up) periods; and an index date occurring during the enrollment period. In addition, a diagnosis of depression or anxiety in any field, along with a medication for treating depression or anxiety, was also required. Finally, patients could not have received maintenance medications for COPD other than the index medication on or within 60 days after the index date and had to be continuously eligible to receive health care services during the 1-year pre-index and post-index (follow-up) periods. Patients were categorized on the index date into the FSC cohort or the AC cohort depending on their medication use. Patients who initiated therapy with an inhaled corticosteroid or a long-acting beta-agonist were excluded because only 8% of the sample received these drug classes; statistical and scientific conclusions based on so small a number of patients would not be justified.
Outcomes and costs: Clinical outcomes were the risk of and number of COPD exacerbations during the 1-year follow-up period. A COPD exacerbation was defined as an emergency department visit with a primary diagnosis code for COPD, a hospitalization with a primary discharge diagnosis of COPD, or a physician visit with a primary diagnosis code for COPD plus a prescription for an oral corticosteroid or antibiotic within 5 days of the visit. When computing the number of exacerbations, an exacerbation within 45 days of a previous exacerbation was not counted as a separate exacerbation.17 COPD-related total (medical plus pharmacy), medical, and pharmacy costs were also determined during the 1-year follow-up period and summed to yield annual costs, which were standardized to 2009 US$ using the Consumer Price Index for US medical care. COPD-related medical costs were computed from the paid amounts of medical claims with a primary diagnosis code for COPD.
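The 45-day collapsing rule for counting exacerbations can be sketched as follows. This is a hypothetical implementation under one reading of the rule, assuming the gap is measured from the most recently counted exacerbation; the study's actual algorithm may differ:

```python
from datetime import date

def count_exacerbations(event_dates: list, gap_days: int = 45) -> int:
    """Count COPD exacerbations from dated qualifying events.

    An event occurring within `gap_days` of the previous counted
    exacerbation is treated as part of the same episode rather than
    as a separate exacerbation, per the study's counting rule.
    """
    count = 0
    last_counted = None
    for d in sorted(event_dates):
        if last_counted is None or (d - last_counted).days > gap_days:
            count += 1
            last_counted = d
    return count
```

For example, events on January 1, February 1, and April 1 count as two exacerbations: the February event falls within 45 days of the January one and is folded into the same episode.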
COPD-related pharmacy costs were computed from paid amounts of COPD-related prescription medications (including ACs, short-acting beta-agonists, long-acting beta-agonists, inhaled corticosteroids, combination inhaled corticosteroids/long-acting beta-agonists, methylxanthines, oral corticosteroids, and antibiotics for respiratory infections) identified using national drug codes, Healthcare Common Procedure Coding System codes beginning with the letter J, or Current Procedural Terminology codes as appropriate. Cohorts were also compared with respect to pretreatment characteristics including demographics (age, sex, US census region), comorbidities during the pre-index period (Charlson comorbidity index score,18 asthma), and proxies for COPD severity during the pre-index period (number of canisters of inhaled short-acting beta-agonists, number of prescriptions for oral corticosteroids, use of home oxygen therapy, number of hospitalizations/emergency department visits for COPD, number of physician visits with a prescription for COPD). Age, sex, and US geographic region at the index date were obtained from enrollment files. The Dartmouth–Manitoba adaptation of the Charlson comorbidity index score18 – a weighted index of 19 chronic medical conditions that predict mortality, postoperative complications, and length of hospital stay – was calculated for each patient based on diagnoses reported during the pre-index period (excluding COPD codes). The number of canisters of short-acting beta-agonists was computed by dividing the quantity dispensed in mg by the mg per canister. The use of home oxygen therapy was categorized as a binary variable (use, no use) based on Current Procedural Terminology codes for home oxygen therapy on medical claims.
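The two unit conversions described above (canister counts and standardization to 2009 US$) are simple ratios; a minimal sketch, in which the numeric values in the usage assertions are illustrative rather than actual CPI or canister figures:

```python
def canisters_dispensed(total_mg: float, mg_per_canister: float) -> float:
    """Convert the total short-acting beta-agonist quantity dispensed
    (in mg) into a canister count, as described in the methods."""
    return total_mg / mg_per_canister

def to_2009_dollars(cost: float, cpi_claim_year: float, cpi_2009: float) -> float:
    """Standardize a paid amount to 2009 US$ by the ratio of CPI values.

    The study used the US Consumer Price Index for medical care; the
    CPI inputs here are caller-supplied, not embedded constants.
    """
    return cost * (cpi_2009 / cpi_claim_year)
```

Annual costs would then be the sum of each claim's standardized paid amount over the follow-up period.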
Statistical analysis: Pretreatment characteristics were summarized with descriptive statistics. Inferential statistics (chi-square test for categorical variables; t-test or Mann–Whitney test for continuous variables) were used to quantify differences between cohorts. Multivariate regression models were used to examine the association between initiation of the index drugs and outcomes while controlling for potential confounders. Covariates were chosen a priori for all models on the basis of clinical relevance and significant differences at baseline. The covariates were age, male gender, Charlson comorbidity index, upper respiratory tract infection, cardiovascular disease, asthma, presence of short-acting beta-agonist use, presence of oral corticosteroid use, presence of home oxygen therapy, and presence of COPD-related hospitalizations. A logistic regression model was used to analyze differences between cohorts in the risk of COPD exacerbations. A zero-inflated negative binomial regression model was used to analyze differences between cohorts in the number of COPD-related exacerbations. Differences between cohorts in COPD-related costs were evaluated with a generalized linear model using a gamma distribution with a log link function while controlling for covariates.
This method has the advantage of estimating the adjusted costs directly without the need for retransformation while simultaneously using log-transformed costs in its estimation. The method of recycled predictions was used to obtain predicted costs for each cohort.
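The recycled-predictions step can be sketched for a log-link cost model: predict for the whole sample as if everyone were in one cohort, then as if everyone were in the other, and average each set of predictions. The design matrix layout, coefficients, and function name below are illustrative assumptions, not the study's code:

```python
import numpy as np

def recycled_predictions(X: np.ndarray, beta: np.ndarray, trt_col: int):
    """Covariate-adjusted mean costs by the method of recycled predictions.

    For a log-link model, predicted cost = exp(X @ beta). The sample is
    scored twice: once with the treatment indicator set to 1 for every
    row, once with it set to 0. The means of the two prediction vectors
    are the adjusted mean costs for the two cohorts.
    """
    X_trt, X_ctl = X.copy(), X.copy()
    X_trt[:, trt_col] = 1.0
    X_ctl[:, trt_col] = 0.0
    return np.exp(X_trt @ beta).mean(), np.exp(X_ctl @ beta).mean()
```

The difference between the two returned means is the adjusted between-cohort cost difference, with the covariate distribution held fixed at that of the full sample.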
This method has the advantage of estimating adjusted costs directly, without the need for retransformation, while still using log-transformed costs in its estimation. The method of recycled predictions was used to obtain predicted costs for each cohort. Results: Sample: Among patients with a diagnosis of COPD and a prescription filled for COPD maintenance medications during the study period, 34,189 (11.5%) had a diagnosis of depression or anxiety and a prescription filled for depression or anxiety medications between 1 year pre-index and 60 days post-index. Of these 34,189 patients, 30,188 (88.3%) were excluded from the study, mainly because of noncontinuous eligibility (44.4%) or no COPD diagnosis (25.5%) during the pre-index period through 60 days after the index date. After applying the exclusion criteria, the final sample comprised 4001 patients (n = 1078 FSC; n = 2923 AC). Table 1 shows demographics and pre-index clinical characteristics, health care use, and costs. Compared with the AC cohort, patients in the FSC cohort were younger, more likely to be female, had a lower Charlson comorbidity index at baseline, and were more likely to have asthma. Pre-index COPD was more severe in the FSC cohort than in the AC cohort, as reflected by higher use of short-acting beta-agonist canisters and oral corticosteroid prescriptions. During the 1-year pre-index period, both the proportion of patients with a COPD-related hospitalization and COPD-related medical costs were significantly lower in the FSC cohort than in the AC cohort. Pre-index COPD-related pharmacy costs were significantly higher for the FSC cohort than for the AC cohort. 
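The recycled-predictions step described above can be sketched simply: every patient's treatment indicator is forced to the same value, the fitted model predicts a cost for each patient, and the predictions are averaged over the sample. The coefficients below are illustrative placeholders, not the study's fitted values:

```python
import math

# Hypothetical coefficients from a log-link cost model (illustrative only).
COEF = {"intercept": 6.5, "fsc": -0.02, "age": 0.01, "charlson": 0.15}

def predict_cost(row):
    """Predicted cost under a log link: exp(linear predictor)."""
    eta = (COEF["intercept"]
           + COEF["fsc"] * row["fsc"]
           + COEF["age"] * row["age"]
           + COEF["charlson"] * row["charlson"])
    return math.exp(eta)

def recycled_mean(rows, treatment_value):
    """Recycled predictions: assign the same treatment to every patient,
    predict, and average over the whole sample."""
    preds = [predict_cost({**row, "fsc": treatment_value}) for row in rows]
    return sum(preds) / len(preds)
```

The difference between `recycled_mean(rows, 1)` and `recycled_mean(rows, 0)` is then the covariate-adjusted cost difference attributable to the treatment indicator.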
Clinical and economic outcomes: Unadjusted outcomes: Table 2 shows unadjusted data on COPD-related outcomes and costs. In the unadjusted dataset, the proportion of patients with any COPD-related exacerbation during the follow-up period was lower in the FSC cohort than in the AC cohort (18.6% vs 23.1%, P = 0.002). The annual number of COPD-related hospitalizations was also significantly lower for the FSC cohort (0.04 vs 0.07, P < 0.001). Despite higher annual COPD-related pharmacy costs in the FSC cohort (US$934 vs US$684, P < 0.001), COPD-related annual total costs were significantly lower for the FSC cohort (US$1604 vs US$1687, P < 0.001), driven by lower medical costs (US$670 vs US$1003, P = 0.001). 
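As a sanity check, the unadjusted odds ratio implied by the raw exacerbation proportions (18.6% FSC vs 23.1% AC) can be computed directly from the definition of odds; a small sketch (variable names are ours):

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

# Unadjusted proportions of patients with any COPD-related exacerbation
p_fsc, p_ac = 0.186, 0.231
unadjusted_or = odds(p_ac) / odds(p_fsc)  # odds of exacerbation, AC vs FSC
```

This works out to roughly 1.31, close to the covariate-adjusted odds ratio of 1.30 reported in the adjusted analysis, suggesting the exacerbation-risk finding is not driven solely by baseline differences.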
Adjusted outcomes: After controlling for differences in baseline covariates between the cohorts, the adjusted overall risk of having any COPD-related exacerbation during the 1-year follow-up period was 30% higher in the AC cohort relative to the FSC cohort (odds ratio [OR]: 1.30, 95% confidence interval [CI]: 1.08–1.56) (Figure 2). The risks of COPD-related hospitalizations and emergency department visits were 56% and 65% higher, respectively, in the AC cohort than in the FSC cohort (Figure 2). The average number of COPD-related hospitalizations during the 1-year follow-up period was 46% higher for the AC cohort (incidence rate ratio [IRR]: 1.46, 95% CI: 1.01–2.09, P = 0.041), and the average number of combined COPD-related hospitalizations and emergency department visits was 31% higher (IRR: 1.31, 95% CI: 1.05–1.72, P = 0.045). The cohorts did not differ significantly with respect to the numbers of COPD-related emergency department visits or COPD-related physician office visits with an oral corticosteroid/antibiotic dispensed within 5 days of the visit. Table 3 shows adjusted COPD-related costs by cohort. 
The savings from lower COPD-related medical costs (US$692 vs US$1042, P < 0.050) kept COPD-related total costs during the 1-year follow-up period comparable to those in the AC cohort (US$1659 vs US$1677, P > 0.050), although the FSC cohort had higher COPD-related pharmacy costs (US$941 vs US$683, P < 0.050). 
Discussion: This study is, to the authors’ knowledge, the first to assess a cohort of COPD patients with comorbid anxiety/depression initiating maintenance therapy with FSC or AC. The use of FSC was associated with better clinical and economic COPD-related outcomes. The AC cohort compared with the FSC cohort had a significantly higher risk of experiencing any COPD-related exacerbation, exacerbations requiring hospitalization, and exacerbations requiring an emergency department visit during the 1-year study period. The number of COPD-related hospitalizations was lower for patients who initiated COPD maintenance therapy with FSC compared with AC. The lower risks of COPD-related hospitalizations and emergency department visits and the lower number of hospitalizations in the FSC cohort were associated with a reduction in COPD-related medical costs for the FSC cohort compared with the AC cohort. 
In clinical studies comparing FSC and tiotropium, lung function improvements and exacerbation rates were similar, whereas quality of life was better in the FSC groups.19 In addition, patients taking inhaled corticosteroids tended to have fewer exacerbations requiring treatment with oral corticosteroids than subjects taking tiotropium. These differences (ie, improved quality of life and reduced dyspnea during exacerbations) may reflect inhaled corticosteroid-associated improvements in health and may help to explain why this therapy might improve outcomes in a depressed population. The present study adds to a body of literature, from studies of patients with COPD selected without regard to the presence of comorbid depression, that supports the benefits of FSC over both short- and long-acting anticholinergic bronchodilators in reducing the risk of COPD-related events and/or reducing medical and total costs.16,20–22 In an observational study of 14,689 patients aged ≥65 years with COPD in a commercial Medicare health maintenance organization plan, initial maintenance treatment with FSC was associated with total COPD-related cost savings (medical plus pharmacy) of US$110 versus tiotropium, US$295 versus ipratropium/albuterol, and US$1235 versus ipratropium (all of which are ACs) over a 1-year follow-up period.16 This reduction in total costs with FSC versus tiotropium more than offset the higher pharmacy costs of FSC. 
In a retrospective, observational study involving Texas Medicaid beneficiaries aged 40 to 64 years (n = 6793), FSC compared with ipratropium was associated with a 27% lower risk of COPD-related hospitalization or emergency department visit and with similar COPD-related total costs (medical plus pharmacy costs) over 12 months.20 In another study, initial maintenance therapy with FSC compared with ipratropium was associated with a 56% lower risk of COPD-related hospitalization or emergency department visit and similar COPD-related total costs (with lower all-cause total costs) in patients ≥40 years with COPD.21 Finally, in a retrospective claims analysis involving 1051 adults ≥65 years with COPD, FSC compared with ipratropium was associated with a 45% reduction in the risk of COPD-related hospitalization or emergency department visit, and lower COPD-related medical costs.22 In several of these studies,16,21,22 FSC-associated reductions in medical costs appeared to more than offset the increase in pharmacy costs such that total costs (medical plus pharmacy) were lower with FSC (although total costs were not reported in one study);21 however, pharmacy costs were higher with FSC than with the short-acting anticholinergic bronchodilator. Across studies comparing FSC with short- and long-acting anticholinergic bronchodilators in patients selected without regard to presence of comorbid depression, FSC was associated with significantly lower risk of COPD-related events and, generally, lower total medical costs. In the current study, definitions for COPD exacerbations included emergency department visits, hospitalizations, and physician visits with a prescription for an oral corticosteroid or antibiotic for respiratory infections. 
A similarly inclusive definition was used in recent studies of COPD exacerbations.23–26 Emergency department visits and hospitalizations reflect severe exacerbations, and physician visits with a prescription for an oral corticosteroid or antibiotic may reflect moderate exacerbations. The use of these definitions for COPD exacerbations provides a sensitive measure of treatment outcomes and reflects the spectrum of manifestations of COPD exacerbations in clinical practice. The results of this study should be interpreted in the context of its limitations. Although analyses controlled for differences in baseline disease severity and other baseline characteristics in both adjusted cost models and risk models in the current study, such adjustment is imperfect and does not obviate the need for further assessment in cohorts balanced with respect to baseline characteristics. The possibility remains of residual confounding due to between-cohort differences in patient characteristics that were not controlled for in multivariate analysis. Other limitations of this study include potential errors in the coding of claims, the inability to verify the accuracy of diagnosis codes, and the absence of a means of assessing patient compliance. The potential for these biases is inherent in observational studies. If these biases were operating, they are likely to have been operating similarly between cohorts. Patients with COPD are at increased risk of depression and anxiety relative to those with other chronic disorders.5–7 This observational study is believed to be the first to examine the impact of pharmacotherapy on outcomes and costs in patients with COPD and depression or anxiety. The results show that FSC compared with AC was associated with more favorable COPD-related outcomes and lower COPD-related utilization and medical costs among patients with COPD and comorbid depression or anxiety. 
Further study of the influence of therapeutic intervention on outcomes, utilization, and costs in COPD and comorbid depression or anxiety is warranted.
Background: Chronic obstructive pulmonary disease (COPD) is frequently associated with comorbid depression and anxiety. Managing COPD symptoms and exacerbations through use of appropriate and adequate pharmacotherapy in this population may result in better COPD-related outcomes. Methods: This retrospective, observational study used administrative claims of patients aged 40 years and older with COPD and comorbid depression/anxiety identified from January 1, 2004 through June 30, 2008. Patients were assigned to fluticasone propionate/salmeterol 250/50 mcg combination (FSC) or anticholinergics (AC) based on their first (index) prescription. The risks of COPD exacerbations and healthcare utilization and costs were compared between cohorts during 1 year of follow-up. Results: The adjusted risk of a COPD-related exacerbation during the 1-year follow-up period was 30% higher in the AC cohort (n = 2923) relative to the FSC cohort (n = 1078) (odds ratio [OR]: 1.30, 95% confidence interval [CI]: 1.08-1.56) after controlling for baseline differences in covariates. The risks of COPD-related hospitalizations and emergency department visits were 56% and 65% higher, respectively, in the AC cohort compared with the FSC cohort. The average number of COPD-related hospitalizations during the follow-up period was 46% higher for the AC cohort compared with the FSC cohort (incidence rate ratio [IRR]: 1.46, 95% CI: 1.01-2.09, P = 0.041). The savings from lower COPD-related medical costs ($692 vs $1042, P < 0.050) kept the COPD-related total costs during the follow-up period comparable to those in the AC cohort ($1659 vs $1677, P > 0.050) although the pharmacy costs were higher in the FSC cohort. Conclusions: FSC compared with AC was associated with more favorable COPD-related outcomes and lower COPD-related utilization and medical costs among patients with COPD and comorbid anxiety/depression.
[ "copd", "cohort", "related", "copd related", "index", "fsc", "costs", "period", "ac", "fsc cohort" ]
[CONTENT] COPD | fluticasone propionate/salmeterol | anticholinergics | depression | comorbidity [SUMMARY]
[CONTENT] Adult | Albuterol | Androstadienes | Cholinergic Antagonists | Cost of Illness | Depression | Drug Combinations | Female | Fluticasone-Salmeterol Drug Combination | Follow-Up Studies | Glucocorticoids | Hospitalization | Humans | Male | Middle Aged | Pulmonary Disease, Chronic Obstructive | Retrospective Studies | Treatment Outcome [SUMMARY]
Altered microRNA expression profile during epithelial wound repair in bronchial epithelial cells.
24188858
Airway epithelial cells provide a protective barrier against environmental particles, including potential pathogens. Epithelial repair in response to tissue damage is abnormal in asthmatic airway epithelium compared with the repair of normal epithelium after damage. The complex mechanisms coordinating the regulation of the processes involved in wound repair require the phased expression of networks of genes. Small non-coding RNA molecules termed microRNAs (miRNAs) play a critical role in such coordinated regulation of gene expression. We aimed to establish whether the phased expression of specific miRNAs correlates with the repair of mechanically induced damage to the epithelium.
BACKGROUND
To investigate the possible involvement of miRNA in epithelial repair, we analyzed miRNA expression profiles during epithelial repair in a cell culture model using TaqMan-based quantitative real-time PCR in a TaqMan Low Density Array format. The expression of 754 miRNA genes at seven time points in a 48-hour period during the wound repair process was profiled using the bronchial epithelial cell line 16HBE14o- growing in monolayer.
METHODS
The expression levels of numerous miRNAs were found to be altered during the wound repair process. These miRNA genes clustered into three different patterns of expression that correlate with the regulation of several biological pathways involved in wound repair. Moreover, the expression of some miRNA genes was significantly altered at only one time point, indicating their involvement in a specific stage of epithelial wound repair.
RESULTS
In summary, miRNA expression is modulated during the normal repair processes in airway epithelium in vitro suggesting a potential role in regulation of wound repair.
CONCLUSIONS
[ "Bronchi", "Cells, Cultured", "Down-Regulation", "Epithelial Cells", "Gene Expression Profiling", "Gene Expression Regulation", "Humans", "MicroRNAs", "Oligonucleotide Array Sequence Analysis", "Signal Transduction", "Time Factors", "Up-Regulation", "Wound Healing" ]
4229315
Background
The airway epithelium has been recognized to play a central role in the integration of innate and adaptive immune responses [1-4]. The airway epithelium is also crucial to the origin and progression of respiratory disorders such as asthma, chronic obstructive pulmonary disease, cystic fibrosis and pulmonary fibrosis. In asthma, chronic airway inflammation underlies aberrant repair of the airway that subsequently leads to structural and functional changes in the airway wall. This remodeling is responsible for a number of the clinical characteristics of asthmatic patients. Normal epithelial repair occurs in a series of overlapping stages. Damage to the epithelium, or challenge associated with damage, can result in loss of structural integrity or barrier function and local mucosal activation [5]. Studies in animals have shown that the repair of normal airway epithelium after minor damage involves the migration of the remaining epithelial cells to cover the damaged area. This is a rapid process, suggesting an autonomous response by cells in the vicinity of the damage [6]. It includes an acute inflammatory response, with recruitment of immune cells as well as epithelial spreading and migration stimulated by secreted provisional matrix. Once the barrier is reformed, the differentiated characteristics are then restored. The regulation of these processes requires complex sequential changes in epithelial cell biology driven by the phased expression of networks of genes [7]. One biological mechanism that plays a critical role in the coordinated regulation of gene expression, such as that required during epithelial wound repair, is the expression of small non-coding RNA molecules termed microRNAs (miRNAs) [8]. To date, more than 1000 human miRNAs have been identified [http://microrna.sanger.ac.uk], with documented tissue-specific expression of some of these miRNAs in lung and involvement in the development of lung diseases including lung cancer, asthma and fibrosis [9-15].
MiRNAs have been demonstrated to play a crucial role in epithelial cell proliferation and differentiation [16-18]. The expression in lung epithelium of Dicer, the enzyme responsible for processing of miRNA precursors, is essential for lung morphogenesis [16] and there is differential expression of miRNAs during lung development [17]. Furthermore, transgenic over-expression of miR-17-92 (shown to be over-expressed in lung cancer) in the lung epithelium promotes proliferation and inhibits differentiation of lung epithelial progenitor cells [18]. Recently, it has been reported that miRNA-146a modulates survival of bronchial epithelial cells in response to cytokine-induced apoptosis [19]. In experimental studies, mice lacking miR-155 demonstrated autoimmune phenotypes in the lungs with increased airway remodeling and leukocyte invasion, phenotypes similar to those observed in asthma [20,21]. While a number of studies have examined the role of miRNA in lung development and in disease [9-15], their influence on the regulation of gene expression involved in epithelial wound repair remains unresolved and comprehensive studies on miRNA involvement in epithelial repair and the pathogenesis of airway remodeling are lacking. However in the skin, miRNAs were found to play a crucial role in wound closure by controlling migration and proliferation of keratinocytes in an in vitro model of wound repair [22]. Thus the hypothesis of the study was that the stages of wound repair in respiratory epithelium are regulated by the phased expression of specific miRNA species. The aim was to investigate the possible involvement of miRNAs by examining their expression profile in epithelial repair in a cell culture model. 
Understanding the effect of altered miRNA activity on protein expression during repair could be used to identify the pathways through which miRNAs regulate epithelial wound repair, potentially providing a novel therapeutic strategy for asthma and other respiratory diseases with underlying aberrant epithelial wound repair.
Methods
Cell culture and wounding assays
The 16HBE14o- bronchial epithelial cell line was cultured under standard conditions [23]. For the wounding assay, cells were seeded on 6-well plates at an initial density of 3×10^5 cells and cultured until confluent. Forty-eight hours after reaching full confluence, cells were damaged by scraping off the monolayer in a hatch-cross wounding pattern using a P200 Gilson pipette tip. The medium and cell debris were then pipetted off and 2 ml of fresh serum-containing medium was added to the remaining cells. For all experiments, at least two points of reference per well of a 6-well plate were used for post-injury analyses. Several time-lapse experiments were performed to establish consistent experimental conditions and the timing of the stages of wound repair.
Time lapse microscopy
Time lapse images were captured at 15 minute intervals on a Leica DM IRB phase-contrast inverted microscope (Leica; Milton Keynes, UK) in a chamber maintained at 36 ± 1°C in a 5% CO2 atmosphere. The images were collected with a cooled Hamamatsu ORCA digital camera (Hamamatsu Photonics, Welwyn Garden City, UK) connected to a computer running Cell^P software (Olympus, London, UK) for 30 hours, ensuring complete wound closure was included in the time course. ImageJ software [24] was used for quantitative analysis of the area of damage, and hence ongoing repair, in the time lapse serial images.
RNA isolation
RNA isolation was performed with an Exiqon RNA isolation kit. Samples were collected in triplicate at each of the following time points: baseline and 2, 4, 8, 16, 24 and 48 hours after wounding. RNA was isolated according to the manufacturer's protocol from 6-well plates (9.5 cm² of growth area) with 1×10^6 cells per well as starting material. Samples were frozen at -70°C for subsequent use in microarray experiments.
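The ImageJ area measurements described above reduce naturally to a percent-closure curve for each wound. A minimal sketch of that calculation (the area values and the function name are hypothetical; in practice the areas come from ImageJ measurements of the damaged region in each time-lapse frame):

```python
# Percent wound closure over time from measured wound areas.
# Area values below are hypothetical placeholders.

def percent_closure(baseline_area, areas_by_time):
    """Return {time_h: % of the original wound area re-covered by cells}."""
    return {t: 100.0 * (baseline_area - a) / baseline_area
            for t, a in areas_by_time.items()}

areas = {2: 0.95, 4: 0.75, 8: 0.50, 16: 0.0}   # mm^2, hypothetical
closure = percent_closure(1.0, areas)
print(closure)  # 4 h: 25% covered, 8 h: 50%, 16 h: fully closed
```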
RNA quality control
The concentration of total RNA in each sample was determined using a NanoDrop 1000 spectrophotometer. The integrity of the extracted total RNA was assessed with a Lab901 Gene Tools System. The passing criteria for use in microarrays were: sample volume of 10–30 μl, RNA concentration > 50 ng/μl, SDV ≤ 3.1 (ScreenTape Degradation Value, corresponding to RIN ≥ 9.0), and purity of OD 260/280 > 1.7 and OD 260/230 > 1.4.
MicroRNA profiling
MicroRNA expression profiling of bronchial epithelial cells was performed in three replicates per time point following wounding. TaqMan Array Human MicroRNA Cards A and B (Applied Biosystems; based on Sanger miRBase 9.2) were utilised for specific amplification and detection of 754 mature human miRNAs by TaqMan-based quantitative real-time PCR in a TaqMan Low Density Array (TLDA) format, using the TaqMan MicroRNA Reverse Transcription Kit and Megaplex RT Primers (Human Megaplex™ RT Primers Pools A and B). The resulting cDNA, combined with TaqMan Universal PCR Master Mix (No AmpErase UNG), was loaded into the arrays and TaqMan real-time PCR was performed on the 7900HT Fast Real-Time PCR System (Applied Biosystems) according to the manufacturer's protocol. Raw data were obtained using SDS 2.3 software and all SDS files were analyzed with RQ Manager 1.2 (Applied Biosystems). The comparative analysis of miRNA expression datasets between baseline and each time point following the wounding assay was performed using DataAssist software v.3.01 (Applied Biosystems). The comparative CT method [25] was used to calculate relative quantitation of gene expression after removing outliers among technical replicates with Grubbs' outlier test together with a heuristic rule; data normalization was based on the endogenous control gene expression method (U6 snRNA-001973), in which the mean of the selected endogenous control was used to normalize the Ct value of each sample. The data from miRNA profiling have been deposited in the ArrayExpress database (accession no. E-MEXP-3986).
Cluster analysis
To identify clusters of miRNAs following the same expression profile over time, we performed cluster analysis using STEM (Short Time series Expression Miner) software, available at http://www.cs.cmu.edu/~jernst/stem [26].
Target genes and pathways prediction
To identify potential common biological pathways for miRNAs showing similar expression profiles in the cluster analysis, we performed pathway enrichment analysis. The best predicted candidate mRNA targets for each differentially expressed miRNA were identified using the miRNA BodyMap tool available at http://www.mirnabodymap.org. The tool enables the selection of target genes using several prediction algorithms at a time: DIANA, PITA, TargetScan, RNA22 (3'UTR), RNA22 (5'UTR), TargetScan_cons, MicroCosm, miRDB, TarBase and miRecords. To minimize target prediction noise, only target genes predicted by five or more of these algorithms were included.
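The "five or more algorithms" filter amounts to a simple vote count over per-algorithm target sets. A minimal sketch with hypothetical miRNA–target predictions (the real lists come from miRNA BodyMap; gene names and the function are illustrative only):

```python
# Keep only targets predicted by >= 5 of the prediction algorithms.
# The per-algorithm target sets below are hypothetical placeholders.

def high_confidence_targets(predictions_by_algorithm, min_votes=5):
    """predictions_by_algorithm: {algorithm_name: set of target gene symbols}."""
    votes = {}
    for targets in predictions_by_algorithm.values():
        for gene in targets:
            votes[gene] = votes.get(gene, 0) + 1
    return {gene for gene, n in votes.items() if n >= min_votes}

predictions = {
    "DIANA":      {"NTRK2", "EGFR", "CCND1"},
    "PITA":       {"NTRK2", "EGFR"},
    "TargetScan": {"NTRK2", "EGFR", "MAPK1"},
    "MicroCosm":  {"NTRK2", "EGFR"},
    "miRDB":      {"NTRK2", "CCND1"},
    "TarBase":    {"EGFR"},
}
print(high_confidence_targets(predictions))  # NTRK2 and EGFR pass the filter
```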
The lists of predicted target genes were then analysed with the Database for Annotation, Visualization and Integrated Discovery (DAVID) v.6.7 [27,28] to identify BioCarta and KEGG pathways [29,30], enriched functionally related gene groups and biological themes, particularly gene ontology (GO) terms [31], in which the analysed sets of target genes were statistically the most overrepresented (enriched).
Statistics
The statistics applied by DataAssist software for each sample included calculation of the relative quantification, RQ = 2^(-ΔCt) / 2^(-ΔCt reference). The standard deviation (SD) was calculated for the CT values of each of the three technical replicates and used to calculate RQ Min and RQ Max: RQ Min = 2^(-ΔCt - SD) / 2^(-ΔCt reference) and RQ Max = 2^(-ΔCt + SD) / 2^(-ΔCt reference). Pearson's product moment correlation coefficient (r) was calculated for the CT or ΔCT values of sample pairs and plotted on the Signal Correlation Plot and Scatter Plot respectively:
r = (N ΣXY - ΣX ΣY) / √[(N ΣX² - (ΣX)²)(N ΣY² - (ΣY)²)]
Distances between samples and assays were calculated for hierarchical clustering based on the ΔCT values, using Pearson's correlation or the Euclidean distance [https://products.appliedbiosystems.com]. For a sample pair, r was calculated considering all ΔCT values from all assays, and the distance defined as 1 - r; for an assay pair, r was calculated considering all ΔCT values from all samples, with the distance again defined as 1 - r. The Euclidean distance was calculated as √[Σ(ΔCT_i - ΔCT_j)²], where, for a sample pair, the sum runs across all assays for samples i and j, while for an assay pair it runs across all samples for assays i and j.
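The RQ and distance formulas above can be sketched in a few lines. This is an illustrative re-implementation, not the DataAssist code, and the ΔCT values are hypothetical:

```python
import math

def rq(delta_ct, delta_ct_reference):
    """Relative quantification: RQ = 2^(-dCt) / 2^(-dCt_ref) = 2^-(dCt - dCt_ref)."""
    return 2 ** -(delta_ct - delta_ct_reference)

def pearson_r(xs, ys):
    """Pearson's product moment correlation, from the summation formula."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx, syy = sum(x * x for x in xs), sum(y * y for y in ys)
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

# Hypothetical dCT values for two samples across the same assays:
a = [1.0, 2.5, 3.0, 4.2]
b = [1.1, 2.4, 3.2, 4.0]
print(rq(5.0, 6.0))         # 2.0: one cycle earlier ~ twice the expression
print(1 - pearson_r(a, b))  # Pearson distance used for hierarchical clustering
print(math.dist(a, b))      # Euclidean distance between the dCT vectors
```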
Results
Characterisation of the epithelial wound repair model
To analyse the changes in the miRNA expression profile during epithelial wound repair, we used a previously well established in vitro model mimicking this process [23,32-34], which allowed real-time monitoring of the rate of epithelial repair and quantitative analysis using time-lapse microscopy. The following time points were selected for miRNA expression profile analysis: baseline, immediately before wounding; (A) 2 hours after wounding, when cells adjacent to the wound have initiated a response but have not migrated substantially; (B) 4 hours after wounding, when 25% of the original wound area has been covered by cells; (C) 8 hours after wounding, when 50% of the wounded area is covered; and (D) 16 hours after wounding, when the wounded area is completely covered. Because cell proliferation and re-differentiation may still be in progress once the wound is covered, two additional time points were added: (E) 24 hours and (F) 48 hours after wounding (Figure 1). With the exception of cells damaged during the original mechanical wounding, cell death was not seen in the repairing areas by time lapse microscopy. Stages of wound repair at different time points (A – 2 hrs, B – 4 hrs, C – 8 hrs, D – 16 hrs, E – 24 hrs, F – 48 hrs post wounding), n=3 wells for each time point.
Global miRNA expression profile altered during epithelial wound repair
Expression profiling analysis revealed a large number of mature miRNAs that were modulated at different time points during epithelial repair with a fold change above 2.0 (Table 1). Numerous miRNAs showed significantly increased or decreased expression (>10-fold) at different time points compared to baseline (presented in Additional file 1). Based on the high fold change values at different time points, ten miRNAs were found to undergo significant modulation (up- or down-regulation) at five or more of the seven time points analysed (Additional file 2). We also observed that the alterations in expression of some miRNA genes were limited to a single time point of wound repair, whereas at the other time points their expression levels did not differ much from baseline, suggesting involvement at a particular stage of repair (marked in red in Additional file 1). Number of miRNAs with >2.0-fold change in expression at different time points after wounding (Table 1).
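The ">2.0-fold change" selection summarised in Table 1 amounts to thresholding per-time-point fold changes in either direction. A minimal sketch over hypothetical fold-change values (the miRNA names and numbers are placeholders, not data from the study):

```python
# Count miRNAs whose fold change vs. baseline exceeds 2.0 (up or down)
# at each time point. The fold-change table below is hypothetical.

def count_modulated(fold_changes, threshold=2.0):
    """fold_changes: {miRNA: {time_h: fold change vs baseline}}.
    Returns {time_h: number of miRNAs with FC > threshold or FC < 1/threshold}."""
    counts = {}
    for profile in fold_changes.values():
        for t, fc in profile.items():
            if fc > threshold or fc < 1.0 / threshold:
                counts[t] = counts.get(t, 0) + 1
    return counts

fc = {
    "miR-A": {2: 3.1, 4: 1.2, 8: 0.4},   # up at 2 h, down at 8 h
    "miR-B": {2: 1.1, 4: 2.5, 8: 2.2},   # up at 4 h and 8 h
}
print(count_modulated(fc))  # 1 modulated miRNA at 2 h and 4 h, 2 at 8 h
```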
Cluster analysis
We then hypothesized that, given the number of miRNA genes undergoing significant changes during the epithelial repair process, a common expression profile might be shared by miRNAs whose expression is regulated by particular transcriptional activation pathways. We therefore analysed the expression of the miRNA genes with the clustering algorithm STEM [26], which assigns each gene passing the filtering criteria to the model profile that most closely matches the gene's expression profile, as determined by the correlation coefficient. Since the model profiles are selected by the software by random allocation, independent of the data, the algorithm then determines which profiles have a statistically significantly higher number of genes assigned using a permutation test: standard hypothesis testing identifies the model profiles with significantly more genes assigned than the average number assigned to that profile in the permutation runs. Our cluster analysis revealed three significant miRNA expression profiles (16, 1 and 18) over the 48 hours of wound repair (Figures 2, 3 and 4; each figure shows one profile of miRNAs with a similar expression pattern during wound repair). Profile 16 included genes that gradually increase between 2 and 16 hours and then display a sudden drop in expression 16 hours post-wounding, which corresponded in the time lapse observations to the completion of cell proliferation and the restoration of the monolayer after wounding. Profile 1 was characterized by a significant decrease of miRNA expression 4 hours after wounding followed by a significant increase reaching a maximum 16 hours post-wounding, suggesting induction of transcription of genes involved in the early response to the stress of mechanical cell damage which are subsequently switched off. Profile 18 shared some similarities with profile 1, although it showed a more gradual decrease in miRNA expression 4 hours post-wounding, then increased steadily to a maximum at 16 hours and afterwards gradually decreased. The miRNA genes sharing a common expression pattern during epithelial wound repair are listed in Additional file 3.
Identification of biological processes regulated by miRNAs
Pathways in clusters of miRNAs
The next question to be addressed was whether the miRNA clusters with characteristic expression profiles during epithelial wound repair identified using STEM were regulating target genes from the same biological pathways or processes.
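STEM's core assignment step, matching each gene's time course to the model profile with the highest correlation, can be sketched as follows. The model profiles and expression vector below are hypothetical, and STEM itself adds filtering and permutation-based significance testing on top of this step:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length time courses."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

def assign_to_profile(gene_expr, model_profiles):
    """Assign an expression time course to the best-correlated model profile."""
    return max(model_profiles, key=lambda name: pearson(gene_expr, model_profiles[name]))

# Hypothetical model profiles over 7 time points (0, 2, 4, 8, 16, 24, 48 h):
profiles = {
    "rise_then_drop": [0, 1, 2, 3, 4, 0, 0],    # shape reminiscent of profile 16
    "dip_then_peak":  [0, -1, -2, 1, 3, 1, 0],  # shape reminiscent of profile 1
}
gene = [0.1, 0.8, 1.9, 3.2, 4.1, 0.2, 0.1]
print(assign_to_profile(gene, profiles))  # rise_then_drop
```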
Pathways in clusters of miRNAs

The next question to be addressed was whether the miRNA clusters with characteristic expression profiles during epithelial wound repair, identified using STEM, regulate target genes from the same biological pathways or processes. To analyse this, we used high-confidence miRNA targets (mRNAs predicted to be a target of a specific miRNA by at least five different algorithms) to create a list of potential miRNA target genes, which were then analysed using the DAVID online database for annotation and visualization [27,28]. DAVID enabled the integration of the miRNA target genes into common pathways or GO processes. Analysis of the targets predicted for each miRNA expression cluster generated by STEM identified four significantly enriched pathways for profile 16: the neurotrophin, ERBB, MAPK and RIG-I-like receptor signalling pathways. Six pathways were predicted for the targets of miRNAs in profile 1: adherens junction, acute myeloid leukaemia, small cell lung cancer, cell cycle, pathways in cancer and the chemokine signalling pathway. No common pathways were predicted for profile 18. The predicted pathways are shown in Additional files 4 and 5.

For all the profiles, DAVID also predicted numerous biological processes in which miRNA targets play a significant role (see Additional file 6). In general, the predicted biological processes and pathways were mainly associated with cell cycle regulation and induction of mitotic divisions, switching on anti-apoptotic genes (ECM, PKB/Akt and IKK) and genes stimulating proliferation (such as MEK and PPARγ) that are of known importance in epithelial wound repair. Beyond these well-documented processes, we observed, surprisingly, that target genes involved in the neurotrophin signalling pathway were the most significantly overrepresented, suggesting its importance in the epithelial wound repair process (Additional file 7).
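The "at least five algorithms" filter described above can be sketched as a simple vote count over per-algorithm predictions. This is a minimal illustration, not the miRNA BodyMap pipeline itself; the gene names and supporting algorithms below are hypothetical examples.

```python
# Sketch: keep only mRNA targets supported by >= 5 prediction algorithms.
# Gene names and algorithm hits are hypothetical, not data from this study.

MIN_ALGORITHMS = 5

# target gene -> set of algorithms that predicted it for a given miRNA
predictions = {
    "NTRK2": {"DIANA", "PITA", "TargetScan", "miRDB", "MicroCosm", "TarBase"},
    "EGFR":  {"DIANA", "PITA", "TargetScan", "miRDB", "RNA22"},
    "GAPDH": {"PITA", "RNA22"},  # too few supporting algorithms - dropped
}

def high_confidence_targets(predictions, min_algorithms=MIN_ALGORITHMS):
    """Return target genes predicted by at least `min_algorithms` tools."""
    return sorted(g for g, algs in predictions.items()
                  if len(algs) >= min_algorithms)

targets = high_confidence_targets(predictions)  # -> ["EGFR", "NTRK2"]
```

The surviving gene list is what would then be submitted to DAVID for pathway and GO enrichment.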
Target pathways at different stages of wound repair

To identify the most important pathways involved at different stages of epithelial wound repair in vitro, we also performed pathway enrichment for miRNAs significantly altered at only one time point of wound repair (see Additional file 1, genes in red). For those genes, targets were predicted as above and DAVID was used to identify potential pathways and biological processes. The main observation for epithelial cells in the early phase of repair (2 hours post-wounding) was up-regulation of miRNAs, suggesting switching off of target genes and processes associated with the response to cellular stress (MAP kinase pathway), regulation of the actin cytoskeleton, cell proliferation and migration. The main pathways targeted by up-regulated miRNAs at 4 hours after cell damage included genes involved in negative regulation of transcription, RNA metabolism, and regulation of cell motion and the cytoskeleton. At 8 hours after wounding, a number of miRNAs were up-regulated, indicating the switching off of genes involved in negative regulation of gene expression and negative regulation of cell communication. At 16 hours following epithelial cell wounding, a number of miRNA genes were down-regulated, thereby switching on genes involved in the mitotic cell cycle, negative regulation of cell death, cell proliferation and the ERBB signalling pathway (cell proliferation, survival, migration). This may suggest the predominance of a proliferating cell phenotype after the damaged area has been closed by spreading and migrating cells.

After 24 hours post-wounding we observed further down-regulation of miRNA genes. Two were of particular interest, as they are responsible for switching on genes involved in the p53 signalling pathway (cell cycle arrest), IL-10 (anti-inflammatory response), regulation of apoptosis, cell death, and RNA transport and localization. This indicates that by this time point cells have proliferated sufficiently and are beginning to differentiate. At 48 hours after wounding, we observed mainly up-regulation of miRNA genes responsible for silencing genes involved in protein catabolic processes, alternative splicing, spectrins, mRNA splicing and processing, and methylation, indicating that cells are undergoing physiological processes and restoring a normal phenotype.
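The stage-specific analysis above rests on picking out miRNAs whose expression departs from baseline at exactly one time point. A minimal sketch of that selection, using hypothetical fold-change values rather than the study's data:

```python
# Sketch: flag miRNAs with a >2-fold change (up or down) at exactly one
# time point, suggesting stage-specific involvement. Values are hypothetical.

FOLD = 2.0

# miRNA -> fold change vs. baseline at each time point (hours post-wounding)
profiles = {
    "miR-A": {2: 1.1, 4: 0.9, 8: 5.2, 16: 1.2, 24: 1.0, 48: 1.1},  # 8 h only
    "miR-B": {2: 3.0, 4: 2.8, 8: 2.5, 16: 3.1, 24: 2.9, 48: 2.7},  # sustained
    "miR-C": {2: 1.0, 4: 0.3, 8: 1.1, 16: 0.9, 24: 1.0, 48: 1.0},  # 4 h only
}

def altered_once(profiles, fold=FOLD):
    """Return {miRNA: time point} for miRNAs altered at exactly one time."""
    out = {}
    for mirna, fc in profiles.items():
        hits = [t for t, v in fc.items() if v >= fold or v <= 1.0 / fold]
        if len(hits) == 1:
            out[mirna] = hits[0]
    return out

stage_specific = altered_once(profiles)  # -> {"miR-A": 8, "miR-C": 4}
```

Sustained responders such as the hypothetical miR-B fall out of this set and are instead captured by the cluster analysis.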
Conclusions
In summary, we report here for the first time that the expression of multiple miRNAs is significantly altered during the airway epithelial wound repair process. Distinct patterns of expression were observed, and the target genes of these miRNA clusters coordinate several biological pathways involved in the repair of injury. Our work provides a starting point for a systematic analysis of mRNA targets specific for wound repair. This will help to identify the regulatory networks controlling these processes in the airway epithelium and to better understand their involvement in respiratory diseases.
Background

The airway epithelium has been recognized to play a central role in the integration of innate and adaptive immune responses [1-4]. It is also crucial to the origin and progression of respiratory disorders such as asthma, chronic obstructive pulmonary disease, cystic fibrosis and pulmonary fibrosis. In asthma, chronic airway inflammation underlies aberrant repair of the airway that subsequently leads to structural and functional changes in the airway wall. This remodeling is responsible for a number of the clinical characteristics of asthmatic patients.

Normal epithelial repair occurs in a series of overlapping stages. Damage to the epithelium, or challenge associated with damage, can result in loss of structural integrity or barrier function and local mucosal activation [5]. Studies in animals have shown that the repair of normal airway epithelium after minor damage involves the migration of the remaining epithelial cells to cover the damaged area. This is a rapid process, suggesting an autonomous response by cells in the vicinity of the damage [6]. It includes an acute inflammatory response, with recruitment of immune cells, as well as epithelial spreading and migration stimulated by secreted provisional matrix. Once the barrier is reformed, the differentiated characteristics are restored. The regulation of these processes requires complex sequential changes in epithelial cell biology driven by the phased expression of networks of genes [7].

One biological mechanism that plays a critical role in the coordinate regulation of gene expression, such as that required during epithelial wound repair, is the expression of small non-coding RNA molecules termed microRNAs (miRNAs) [8]. To date, more than 1000 human miRNAs have been identified [http://microrna.sanger.ac.uk], with documented tissue-specific expression of some of these miRNAs in lung and involvement in the development of lung diseases including lung cancer, asthma and fibrosis [9-15].
MiRNAs have been demonstrated to play a crucial role in epithelial cell proliferation and differentiation [16-18]. The expression in lung epithelium of Dicer, the enzyme responsible for processing miRNA precursors, is essential for lung morphogenesis [16], and miRNAs are differentially expressed during lung development [17]. Furthermore, transgenic over-expression of miR-17-92 (shown to be over-expressed in lung cancer) in the lung epithelium promotes proliferation and inhibits differentiation of lung epithelial progenitor cells [18]. Recently, miRNA-146a has been reported to modulate the survival of bronchial epithelial cells in response to cytokine-induced apoptosis [19]. In experimental studies, mice lacking miR-155 demonstrated autoimmune phenotypes in the lungs with increased airway remodeling and leukocyte invasion, phenotypes similar to those observed in asthma [20,21].

While a number of studies have examined the role of miRNAs in lung development and in disease [9-15], their influence on the regulation of gene expression during epithelial wound repair remains unresolved, and comprehensive studies of miRNA involvement in epithelial repair and the pathogenesis of airway remodeling are lacking. In the skin, however, miRNAs were found to play a crucial role in wound closure by controlling the migration and proliferation of keratinocytes in an in vitro model of wound repair [22].

Thus, the hypothesis of this study was that the stages of wound repair in respiratory epithelium are regulated by the phased expression of specific miRNA species. The aim was to investigate the possible involvement of miRNAs by examining their expression profile during epithelial repair in a cell culture model.
Understanding the effect of altered miRNA activity on protein expression during repair can further be used to identify pathways targeted by miRNAs that regulate epithelial wound repair, potentially providing a novel therapeutic strategy for asthma and other respiratory diseases with underlying aberrant epithelial wound repair.

Methods

Cell culture and wounding assays

The 16HBE14o- bronchial epithelial cell line was cultured under standard conditions [23]. For the wounding assay, cells were seeded on 6-well plates at an initial density of 3×10^5 cells and cultured until confluent. Forty-eight hours after reaching full confluence, cells were damaged by scraping off the monolayer in a hatch-cross wounding pattern using a P200 Gilson pipette tip. The medium and cell debris were then removed by pipetting, and 2 ml of fresh serum-containing medium was added to the remaining cells. For all experiments, at least two points of reference per well of a 6-well plate were used for post-injury analyses. Several time-lapse experiments were performed to establish consistent experimental conditions and the timing of the stages of wound repair.

Time lapse microscopy

Time lapse images were captured at 15 minute intervals on a Leica DM IRB phase-contrast inverted microscope (Leica; Milton Keynes, UK) in a chamber maintained at 36 ± 1°C in a 5% CO2 atmosphere. Images were collected with a cooled Hamamatsu ORCA digital camera (Hamamatsu Photonics, Welwyn Garden City, UK) connected to a computer running Cell^P software (Olympus, London, UK) for 30 hours, ensuring that complete wound closure was included in the time course. For quantitative analysis of the area of damage, and hence ongoing repair, in time lapse serial images, ImageJ software [24] was used.

RNA isolation

RNA isolation was performed with an Exiqon RNA isolation kit. Samples were collected in triplicate for each of the following time points: baseline, 2, 4, 8, 16, 24 and 48 hours after wounding.
RNA isolation was performed according to the manufacturer's protocol from 6-well plates (9.5 cm^2 of growth area), with a starting material of 1×10^6 cells per well. Samples were frozen at -70°C for subsequent use in microarray experiments.

RNA quality control

The concentration of total RNA in each sample was determined using a NanoDrop 1000 spectrophotometer. The integrity of the extracted total RNA was assessed with a Lab901 Gene Tools System. The passing criteria for use in microarrays were: sample volume of 10–30 μl; RNA concentration > 50 ng/μl; SDV ≤ 3.1 (ScreenTape Degradation Value), which corresponds to RIN ≥ 9.0; and purity of OD 260/280 > 1.7 and OD 260/230 > 1.4.

Micro RNA profiling

Micro-RNA expression profiling of bronchial epithelial cells was performed in three replicates per time point following wounding. TaqMan Array Human MicroRNA Cards A and B (Applied Biosystems), based on Sanger miRBase 9.2, were utilised for specific amplification and detection of 754 mature human miRNAs by TaqMan-based quantitative real-time PCR in a TaqMan Low Density Array (TLDA) format, using the TaqMan MicroRNA Reverse Transcription Kit and Megaplex RT Primers (Human Megaplex™ RT Primers Pools A and B). The resulting cDNA, combined with TaqMan Universal PCR Master Mix (No AmpErase UNG), was loaded into the arrays, and TaqMan real-time PCR was performed on the 7900HT Fast Real-Time PCR System (Applied Biosystems) according to the manufacturer's protocol.

Raw data were obtained using SDS 2.3 software (Applied Biosystems), and all SDS files were analyzed with RQ Manager 1.2 (Applied Biosystems). Comparative analysis of miRNA expression between baseline and each time point following the wounding assay was performed using DataAssist software v.3.01 (Applied Biosystems).
The comparative CT method [25] was used to calculate relative gene expression after outliers among technical replicates were removed using Grubbs' outlier test together with a heuristic rule. Data normalization was based on endogenous control gene expression (U6 snRNA-001973), where the mean of the selected endogenous control was used to normalize the Ct value of each sample.

The miRNA profiling data have been deposited in the ArrayExpress database (accession no. E-MEXP-3986).

Cluster analysis

To identify clusters of miRNAs following the same expression profile over time, we performed cluster analysis using STEM (Short Time-series Expression Miner) software, available at http://www.cs.cmu.edu/~jernst/stem [26].

Target genes and pathways prediction

To identify potential common biological pathways for miRNAs showing similar expression profiles in the cluster analysis, we performed pathway enrichment analysis. The best predicted candidate mRNA targets for each differentially expressed miRNA were identified using the miRNA BodyMap tool, available at http://www.mirnabodymap.org. The tool enables the selection of target genes based on several prediction algorithms at a time: DIANA, PITA, TargetScan, RNA22 (3'UTR), RNA22 (5'UTR), TargetScan_cons, MicroCosm, miRDB, TarBase and miRecords.
To minimize target prediction noise, only target genes predicted by five or more of the above algorithms were included.

The lists of predicted target genes were then analysed with the Database for Annotation, Visualization and Integrated Discovery (DAVID) v.6.7 [27,28] to identify BioCarta and KEGG pathways [29,30], enriched functionally related gene groups, and biological themes, particularly gene ontology (GO) terms [31], in which the analysed sets of target genes were statistically most overrepresented (enriched).

Statistics

The statistics applied by DataAssist software for each sample included calculation of the relative quantification RQ = 2^(-ΔCt) / 2^(-ΔCt reference). The standard deviation (SD) was calculated from the CT values of the three technical replicates and used to calculate RQ Min and RQ Max [RQ Min = 2^(-ΔCt - SD) / 2^(-ΔCt reference); RQ Max = 2^(-ΔCt + SD) / 2^(-ΔCt reference)]. Pearson's product moment correlation coefficient (r) was calculated for the CT or ΔCT values of sample pairs as

r = (N ΣXY - ΣX ΣY) / √[(N ΣX² - (ΣX)²)(N ΣY² - (ΣY)²)]

and plotted on the Signal Correlation Plot and Scatter Plot, respectively.

Distances between samples and assays were calculated for hierarchical clustering based on the ΔCT values, using Pearson's correlation or the Euclidean distance [https://products.appliedbiosystems.com]. For a sample pair, r was calculated considering all ΔCT values from all assays, and the distance defined as 1 - r; for an assay pair, r was calculated considering all ΔCT values from all samples, with the distance likewise defined as 1 - r.
The Euclidean distance was calculated as √[Σ(ΔCT_i - ΔCT_j)²], where, for a sample pair, the sum runs across all assays for samples i and j, while for an assay pair it runs across all samples for assays i and j.

Results

Characterisation of epithelial wound repair model

To analyse changes in the miRNA expression profile during epithelial wound repair, we used a previously well-established in vitro model of this process [23,32-34] that allowed real-time monitoring of the rate of epithelial repair and quantitative analysis by time-lapse microscopy. The following time points were selected for miRNA expression profile analysis: baseline, immediately before wounding; (A) 2 hours after wounding, when cells adjacent to the wound have initiated a response but have not migrated substantially; (B) 4 hours after wounding, when 25% of the original wound area has been covered by cells; (C) 8 hours after wounding, when 50% of the wounded area is covered; and (D) 16 hours after wounding, when the wounded area is completely covered by cells. Because cell proliferation and re-differentiation may still be in progress once the wound is covered, two further time points were added: (E) 24 hours and (F) 48 hours post-wounding (Figure 1). With the exception of cells damaged during the original mechanical wounding, cell death was not seen in the repairing areas by time-lapse microscopy.

Stages of wound repair at different time points (A – 2 hrs, B – 4 hrs, C – 8 hrs, D – 16 hrs, E – 24 hrs, F – 48 hrs post wounding); n = 3 wells for each time point.

Global miRNA expression profile altered during epithelial wound repair

Expression profiling revealed a large number of mature miRNAs modulated at different time points during epithelial repair with a fold change above 2.0 (Table 1). Numerous miRNAs showed significantly increased or decreased expression (>10-fold) at different time points as compared to baseline (presented in Additional file 1).
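The fold changes reported here follow from the comparative CT quantification described in the Methods: each Ct value is normalised to the endogenous control to give ΔCt, and RQ = 2^-(ΔCt sample - ΔCt baseline). A minimal worked sketch with hypothetical Ct values (not the study's data):

```python
# Sketch: relative quantification by the comparative CT method and a
# >2-fold change filter. All Ct values below are hypothetical.

def delta_ct(ct_mirna, ct_endogenous_control):
    """Normalise a miRNA Ct to the endogenous control (U6 snRNA)."""
    return ct_mirna - ct_endogenous_control

def relative_quantity(dct_sample, dct_baseline):
    """RQ = 2**-(dCt_sample - dCt_baseline)."""
    return 2.0 ** -(dct_sample - dct_baseline)

# Hypothetical Ct pairs (miRNA Ct, U6 Ct) for one miRNA:
baseline_dct = delta_ct(28.0, 20.0)    # dCt = 8.0 at baseline
wounded_8h_dct = delta_ct(25.0, 20.0)  # dCt = 5.0 at 8 h post-wounding

rq = relative_quantity(wounded_8h_dct, baseline_dct)  # 2**3 = 8.0-fold up
changed = rq >= 2.0 or rq <= 0.5      # passes the >2-fold filter
```

A drop of 3 cycles in ΔCt thus corresponds to an 8-fold increase in expression, comfortably above the 2-fold threshold used for Table 1.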
Based on the high fold-change values at different time points, ten miRNAs were found to undergo significant modulation (up- or down-regulation) at five or more of the seven time points analysed (Additional file 2). We also observed that alterations in the expression of some miRNA genes were limited to a single time point of wound repair, whereas at the other time points their expression did not differ much from baseline, suggesting involvement at a particular stage of repair (marked in red in Additional file 1).

Number of miRNAs with >2.0-fold change in expression at different time points after wounding

Cluster analysis

We then hypothesized that, given the number of miRNA genes undergoing significant changes during the epithelial repair process, a common expression profile might be shared by miRNAs whose expression is regulated by particular transcriptional activation pathways. We therefore analysed miRNA gene expression with the clustering algorithm STEM [26], which assigns each gene passing the filtering criteria to the model profile that most closely matches the gene's expression profile, as determined by the correlation coefficient. Since the model profiles are selected by the software by random allocation, independently of the data, the algorithm then uses a permutation test and standard hypothesis testing to determine which model profiles have significantly more genes assigned than the average number assigned to that profile in the permutation runs.
Our cluster analysis revealed three significant miRNA expression profiles (16, 1 and 18) over 48 hours of wound repair (Figures 2, 3 and 4).

Profile 16 of miRNA with similar expression pattern during wound repair.
Profile 1 of miRNA with similar expression pattern during wound repair.
Profile 18 of miRNA with similar expression pattern during wound repair.

Profile 16 included genes that gradually increase between 2 and 16 hours and then display a sudden drop in expression 16 hours post-wounding, corresponding in the time-lapse observations to the completion of cell proliferation and restoration of the monolayer after wounding. Profile 1 was characterized by a significant decrease in miRNA expression 4 hours after wounding, followed by a significant increase with a maximum 16 hours post-wounding, suggesting induction of transcription of genes involved in the early response to stress from mechanical cell damage, which are subsequently switched off. Profile 18 shared some similarities with profile 1, although it showed a more gradual decrease in miRNA expression 4 hours post-wounding, which then increased steadily to reach a maximum at 16 hours and afterwards gradually decreased. The different profiles of miRNA expression are shown in Figures 2, 3 and 4. The miRNA genes sharing a common expression pattern during epithelial wound repair are listed in Additional file 3.
Two were of particular interest, as they are responsible for switching on genes involved in the p53 signalling pathway (cell cycle arrest), IL-10 (anti-inflammatory response), regulation of apoptosis, cell death, and RNA transport and localization. This indicates that at this time point cells have proliferated sufficiently and are beginning to differentiate. At 48 hours after wounding, we observed mainly up-regulation of miRNA genes responsible for silencing genes involved in protein catabolic processes, alternative splicing, spectrins, mRNA splicing and processing, as well as methylation, indicating that cells are undergoing physiological processes and restoring a normal phenotype.", "The authors declare that they have no competing interests.", "AS performed the in vitro cell experiments and wounding assays, miRNA profiling, data analysis, and cluster and pathway analysis, drafted the paper and approved its final version. PL contributed to the study design and methodology regarding cell experiments, drafted the paper and approved its final version. JWH contributed to the study design and methodology regarding miRNA analysis, drafted the paper and approved its final version.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2466/13/63/prepub\n" ]
[ "Background", "Methods", "Cell culture and wounding assays", "Time lapse microscopy", "RNA isolation", "RNA quality control", "Micro RNA profiling", "Cluster analysis", "Target genes and pathways prediction", "Statistics", "Results", "Characterisation of epithelial wound repair model", "Global miRNA expression profile altered during epithelial wound repair", "Cluster analysis", "Identification of biological processes regulated by miRNAs", "Pathways in clusters of miRNAs", "Target pathways at different stages of wound repair", "Discussion", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
[ "The airway epithelium has been recognized to play a central role in the integration of innate and adaptive immune responses [1-4]. The airway epithelium is also crucial to the origin and progression of respiratory disorders such as asthma, chronic obstructive pulmonary disease, cystic fibrosis and pulmonary fibrosis. In asthma, chronic airway inflammation underlies aberrant repair of the airway that subsequently leads to structural and functional changes in the airway wall. This remodeling is responsible for a number of the clinical characteristics of asthmatic patients.\nNormal epithelial repair occurs in a series of overlapping stages. Damage to the epithelium or challenge associated with damage can result in loss of structural integrity or barrier function and local mucosal activation [5]. Studies in animals have shown that the repair of normal airway epithelium after minor damage involves the migration of the remaining epithelial cells to cover the damaged area. This is a rapid process, suggesting an autonomous response by cells in the vicinity of the damage [6]. It includes an acute inflammatory response, with recruitment of immune cells as well as epithelial spreading and migration stimulated by secreted provisional matrix. Once the barrier is reformed, the differentiated characteristics are then restored. The regulation of these processes requires complex sequential changes in the epithelial cell biology driven by the phased expression of networks of genes [7].\nOne biological mechanism that plays a critical role in the coordinate regulation of gene expression such as that required during epithelial wound repair is the expression of small non-coding RNA molecules termed microRNAs (miRNAs) [8]. To date, more than 1000 human miRNAs have been identified [http://microrna.sanger.ac.uk], with documented tissue-specific expression of some of these miRNAs in lung and involvement in the development of lung diseases including lung cancer, asthma and fibrosis [9-15]. 
MiRNAs have been demonstrated to play a crucial role in epithelial cell proliferation and differentiation [16-18]. The expression in lung epithelium of Dicer, the enzyme responsible for processing of miRNA precursors, is essential for lung morphogenesis [16] and there is differential expression of miRNAs during lung development [17]. Furthermore, transgenic over-expression of miR-17-92 (shown to be over-expressed in lung cancer) in the lung epithelium promotes proliferation and inhibits differentiation of lung epithelial progenitor cells [18]. Recently, it has been reported that miRNA-146a modulates survival of bronchial epithelial cells in response to cytokine-induced apoptosis [19]. In experimental studies, mice lacking miR-155 demonstrated autoimmune phenotypes in the lungs with increased airway remodeling and leukocyte invasion, phenotypes similar to those observed in asthma [20,21].\nWhile a number of studies have examined the role of miRNA in lung development and in disease [9-15], their influence on the regulation of gene expression involved in epithelial wound repair remains unresolved and comprehensive studies on miRNA involvement in epithelial repair and the pathogenesis of airway remodeling are lacking. However in the skin, miRNAs were found to play a crucial role in wound closure by controlling migration and proliferation of keratinocytes in an in vitro model of wound repair [22].\nThus the hypothesis of the study was that the stages of wound repair in respiratory epithelium are regulated by the phased expression of specific miRNA species. The aim was to investigate the possible involvement of miRNAs by examining their expression profile in epithelial repair in a cell culture model. 
Understanding the effect of altered miRNA activity on protein expression during repair processes can be further used to identify pathways targeted by miRNAs that regulate epithelial wound repair, potentially providing a novel therapeutic strategy for asthma and other respiratory diseases with underlying aberrant epithelial wound repair.", " Cell culture and wounding assays The 16HBE14o- bronchial epithelial cell line was cultured under standard conditions [23]. For the wounding assay, cells were seeded on 6-well plates at an initial density of 3x105 cells and cultured until confluent. Forty-eight hours after reaching full confluence, cells were damaged by scraping off the monolayer in a hatch-cross wounding pattern using a P200 Gilson pipette tip. The medium and cell debris were then removed by pipetting and 2 ml of fresh serum-containing medium was added to the remaining cells. For all experiments, at least two points of reference per well of a 6-well plate were used for post-injury analyses. Several time-lapse experiments were performed to establish consistent experimental conditions and the timing of the stages of wound repair.\n Time lapse microscopy Time lapse images were captured at 15 minute intervals on a Leica DM IRB phase-contrast inverted microscope (Leica; Milton Keynes, UK) in a chamber maintained at 36 ± 1°C in a 5% CO2 atmosphere. The images were collected with a cooled Hamamatsu ORCA digital camera (Hamamatsu Photonics, Welwyn Garden City, UK) connected to a computer running Cell^P software (Olympus, London, UK) for 30 hours, ensuring that complete wound closure was included in the time course. ImageJ software [24] was used for quantitative analysis of the area of damage, and hence ongoing repair, in time lapse serial images.\n RNA isolation RNA isolation was performed with an Exiqon RNA isolation kit. Samples were collected in triplicate at each of the following time points: baseline and 2, 4, 8, 16, 24 and 48 hours after wounding. RNA was isolated according to the manufacturer’s protocol from 6-well plates (9.5 cm2 of growth area) with 1×106 cells per well as starting material. Samples were frozen at -70°C for subsequent use in microarray experiments.\n RNA quality control The concentration of total RNA in each sample was determined using a NanoDrop 1000 spectrophotometer. The integrity of the total RNA extracted was assessed with a Lab901 Gene Tools System. The passing criteria for use in microarrays were: sample volume of 10–30 μl, RNA concentration > 50 ng/μl, SDV (ScreenTape Degradation Value) ≤ 3.1, which corresponds to RIN ≥ 9.0, and purity of OD 260/280 > 1.7 and OD 260/230 > 1.4.\n Micro RNA profiling Micro-RNA expression profiling of bronchial epithelial cells was performed in three replicates per time point following wounding. TaqMan Array Human MicroRNA Cards A and B (Applied Biosystems; based on Sanger miRBase 9.2) were used for specific amplification and detection of 754 mature human miRNAs by TaqMan-based quantitative real-time PCR in a TaqMan Low Density Array (TLDA) format, using the TaqMan MicroRNA Reverse Transcription Kit and Megaplex RT Primers (Human Megaplex™ RT Primers Pools A and B). The resulting cDNA, combined with TaqMan Universal PCR Master Mix, No AmpErase UNG, was loaded into the arrays and TaqMan real-time PCR was performed on the 7900HT Fast Real-Time PCR System (Applied Biosystems) according to the manufacturer’s protocol.\nRaw data were obtained using SDS 2.3 software (Applied Biosystems). All SDS files were analyzed with RQ Manager 1.2 software (Applied Biosystems). The comparative analysis of miRNA expression datasets between baseline and each time point following the wounding assay was performed using DataAssist software v.3.01 (Applied Biosystems). The comparative CT method [25] was used to calculate the relative quantitation of gene expression after removing outliers with Grubbs’ outlier test together with a heuristic rule for removing outliers among technical replicates; data were normalized with the endogenous control gene expression method (U6 snRNA-001973), in which the mean of the selected endogenous control was used to normalize the Ct value of each sample.\nThe data from miRNA profiling have been deposited in the ArrayExpress database (accession no. E-MEXP-3986).\n Cluster analysis To identify clusters of miRNAs following the same expression profile over time, we performed cluster analysis using STEM (Short Time-series Expression Miner) software, available at http://www.cs.cmu.edu/~jernst/stem [26].\n Target genes and pathways prediction To identify potential common biological pathways for miRNAs showing similar expression profiles in the cluster analysis, we performed pathway enrichment analysis. The best predicted candidate mRNA targets for each differentially expressed miRNA were identified using the miRNA BodyMap tool available at http://www.mirnabodymap.org. The tool enables the selection of target genes based on several prediction algorithms at a time: DIANA, PITA, TargetScan, RNA22 (3'UTR), RNA22 (5'UTR), TargetScan_cons, MicroCosm, miRDB, TarBase and miRecords. To minimize target prediction noise, only target genes predicted by five or more of these algorithms were included.\nThe lists of predicted target genes were then analysed with the Database for Annotation, Visualization and Integrated Discovery (DAVID) v.6.7 [27,28] to identify BioCarta and KEGG pathways [29,30], enriched groups of functionally related genes and biological themes, particularly gene ontology (GO) terms [31], in which the analysed sets of target genes were the most significantly overrepresented (enriched).\n Statistics The statistics applied by DataAssist software for each sample included calculation of the relative quantification RQ = 2^(-ΔCt)/2^(-ΔCt reference). The standard deviation (SD) was calculated for the CT values of each of the three technical replicates and was used to calculate RQ Min and RQ Max [RQ Min = 2^(-ΔCt - SD)/2^(-ΔCt reference), RQ Max = 2^(-ΔCt + SD)/2^(-ΔCt reference)]. Pearson's product moment correlation coefficient (r) was calculated for the CT or ΔCT values of sample pairs as below and plotted on the Signal Correlation Plot and Scatter Plot, respectively:\n\nr = [N ΣXY - (ΣX)(ΣY)] / sqrt( [N ΣX^2 - (ΣX)^2] [N ΣY^2 - (ΣY)^2] )\n\nDistances between samples and assays were calculated for hierarchical clustering based on the ΔCT values using Pearson's correlation or the Euclidean distance [https://products.appliedbiosystems.com]. For a sample pair, r was calculated considering all ΔCT values from all assays, and the distance defined as 1 - r; for an assay pair, r was calculated considering all ΔCT values from all samples, and the distance likewise defined as 1 - r. The Euclidean distance was calculated as sqrt( Σ (ΔCTi - ΔCTj)^2 ), where for a sample pair the sum runs across all assays for samples i and j, and for an assay pair across all samples for assays i and j.", " Characterisation of epithelial wound repair model To analyse the changes in the miRNA expression profile during epithelial wound repair, we used a previously well-established in vitro model mimicking this process [23,32-34], which allowed real-time monitoring of the rate of epithelial repair and quantitative analysis using time-lapse microscopy. The following time points were selected for miRNA expression profile analysis: baseline, immediately before wounding; (A) 2 hours after wounding: cells adjacent to the wound initiate a response but have not migrated substantially; (B) 4 hours after wounding: 25% of the original wound area has been covered by cells; (C) 8 hours after wounding: 50% of the wounded area covered by cells; (D) 16 hours after wounding: wounded area completely covered by cells. Once the wound is covered, cell proliferation and re-differentiation may still be in progress, so additional time points were added: (E) 24 hours post-wounding and (F) 48 hours after wounding (Figure 1). 
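The relative quantification and clustering-distance formulas given in the Statistics section can be sketched in code as below. This is a minimal illustration of the arithmetic with invented values, not a re-implementation of DataAssist.

```python
from math import sqrt

def relative_quantification(dct, dct_reference):
    """RQ = 2^(-dCt) / 2^(-dCt reference)."""
    return 2 ** -dct / 2 ** -dct_reference

def rq_bounds(dct, sd, dct_reference):
    """RQ Min/Max from the SD of the technical-replicate CT values:
    RQ Min = 2^(-dCt - SD)/2^(-dCt ref), RQ Max = 2^(-dCt + SD)/2^(-dCt ref)."""
    return (2 ** (-dct - sd) / 2 ** -dct_reference,
            2 ** (-dct + sd) / 2 ** -dct_reference)

def pearson_r(x, y):
    """Pearson's product moment correlation coefficient."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx, syy = sum(a * a for a in x), sum(b * b for b in y)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

def correlation_distance(x, y):
    """Hierarchical-clustering distance over dCT values: 1 - r."""
    return 1 - pearson_r(x, y)

def euclidean_distance(x, y):
    """Euclidean distance over paired dCT values."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
```

For example, a ΔCt one cycle below the reference doubles the RQ, and perfectly proportional ΔCT series give a correlation distance of zero.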
With the exception of cells damaged during the original mechanical wounding, cell death was not seen in the repairing areas by time lapse microscopy.\nStages of wound repair at different time points (A – 2 hrs, B – 4 hrs, C – 8 hrs, D – 16 hrs, E – 24 hrs, F – 48 hrs post wounding), n=3 wells for each time point.\n Global miRNA expression profile altered during epithelial wound repair Expression profiling analysis revealed a large number of mature miRNAs that were modulated at different time points during epithelial repair with a fold change above 2.0 (Table 1). Numerous miRNAs showed significantly increased or decreased expression (>10-fold) at different time points compared to baseline (presented in Additional file 1). Based on the high fold change values at different time points, ten miRNAs were found to undergo significant modulation (either up- or down-regulation) at five or more of the seven time points analysed (Additional file 2). We also observed that the alterations in expression of some miRNA genes were limited to a single time point of wound repair, whereas at the other time points the expression levels did not differ much from baseline, suggesting their involvement at a particular stage of repair (marked in red in Additional file 1).\nNumber of miRNAs with >2.0-fold change in expression at different time points after wounding\n Cluster analysis We then hypothesized that, given the number of miRNA genes undergoing significant changes during the epithelial repair process, a common expression profile might be shared by miRNAs whose expression is regulated by particular transcriptional activation pathways. 
Therefore, we analysed the expression of miRNA genes with use of the clustering algorithm STEM [26], assigning each gene passing the filtering criteria to the model profile that most closely matches the gene's expression profile as determined by the correlation coefficient. Since the model profiles are selected by the software by random allocation, independent of the data, the algorithm then determines which profiles have a statistically significant higher number of genes assigned using a permutation test. It then uses standard hypothesis testing to determine which model profiles have significantly more genes assigned as compared to the average number of genes assigned to the model profile in the permutation runs. Our cluster analysis revealed three significant miRNA expression profiles (16, 1 and 18) over 48 hours of wound repair (Figures 2, 3 and 4).\nProfile 16 of miRNA with similar expression pattern during wound repair.\nProfile 1 of miRNA with similar expression pattern during wound repair.\nProfile 18 of miRNA with similar expression pattern during wound repair.\nProfile 16 included genes that gradually increase between 2 and 16 hours and then display a sudden drop in expression 16 hours post-wounding, which corresponded to the completion of cell proliferation and the restoration of the monolayer after wounding in time lapse observations. Profile 1 was characterized by significant decrease of miRNA expression 4 hours after wounding followed by a significant increase with a maximum 16 hours post-wounding, suggesting induction of transcription of genes involved in the early response to stress due to mechanical cell damage which are subsequently switched off. Profile 18 shared some similarities with profile 1, although it showed a more gradual decrease in miRNA expression 4 hours post-wounding, that then increased steadily to reach a maximum at 16 hours and afterwards gradually decreased. 
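STEM's core assignment step described above, matching each gene's expression series to the model profile with which it correlates best, can be sketched as follows. The profile shapes and expression values here are invented for illustration, and STEM's permutation-based significance test for each profile is omitted.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's product moment correlation coefficient."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx, syy = sum(a * a for a in x), sum(b * b for b in y)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

def assign_profile(expression, model_profiles):
    """Assign one gene's time series to the best-correlated model profile."""
    return max(model_profiles,
               key=lambda name: pearson_r(expression, model_profiles[name]))

# Invented model profiles over the seven time points (baseline to 48 h),
# loosely echoing the shapes of profiles 16 and 1 described in the text.
profiles = {
    "rise_then_drop": [0, 1, 2, 3, 4, 1, 0],    # cf. profile 16
    "dip_then_peak":  [0, -1, -2, 0, 3, 1, 0],  # cf. profile 1
}
expr = [0.1, 1.2, 2.1, 2.9, 3.8, 0.8, 0.2]
best = assign_profile(expr, profiles)
```

After assignment, STEM counts the genes in each model profile and compares the count against the permutation-derived expectation to call a profile significant.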
The different profiles of miRNA expression are shown in Figures 2, 3 and 4. The miRNA genes sharing the common expression pattern during epithelial wound repair are listed in Additional file 3.
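The assign-then-permute logic described above can be sketched in a few lines. This is an illustrative re-implementation with invented profile shapes and data, not the STEM code itself; the two model profiles loosely mirror the shapes of profiles 16 and 1.

```python
import random

random.seed(0)  # reproducible illustration

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def assign(genes, profiles):
    """Assign each gene to the model profile it correlates with best."""
    counts = dict.fromkeys(profiles, 0)
    for expr in genes.values():
        best = max(profiles, key=lambda p: pearson(expr, profiles[p]))
        counts[best] += 1
    return counts

# Invented model profiles over seven post-wounding time points
profiles = {
    "rise_then_drop": [0, 1, 2, 3, 4, 1, 0],    # cf. profile 16
    "dip_then_peak":  [0, -2, -1, 1, 3, 2, 1],  # cf. profile 1
}
# Invented expression series: 20 genes following the first shape plus noise
genes = {f"miR-{i}": [random.gauss(v, 0.5) for v in profiles["rise_then_drop"]]
         for i in range(20)}

observed = assign(genes, profiles)

# Permutation test: shuffle time points to estimate how many genes each
# profile would attract by chance, then compare with the observed counts.
null_counts = {p: [] for p in profiles}
for _ in range(200):
    permuted = {g: random.sample(e, len(e)) for g, e in genes.items()}
    for p, c in assign(permuted, profiles).items():
        null_counts[p].append(c)

for p in profiles:
    expected = sum(null_counts[p]) / len(null_counts[p])
    p_value = sum(c >= observed[p] for c in null_counts[p]) / len(null_counts[p])
    print(p, "observed:", observed[p], "expected:", round(expected, 1),
          "p:", round(p_value, 3))
```

STEM additionally corrects for the number of model profiles tested; the sketch keeps only the core idea of comparing observed assignments against the permutation average.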
 Identification of biological processes regulated by miRNAs\n Pathways in clusters of miRNAs\nThe next question to be addressed was whether the miRNA clusters with characteristic expression profiles during epithelial wound repair, identified using STEM, regulate target genes from the same biological pathways or processes. To analyse this, we used highly predicted miRNA targets (mRNAs predicted to be a target of a specific miRNA by at least five different algorithms) to create a list of potential miRNA target genes, which were then analysed with the DAVID online database for annotation and visualization [27,28]. DAVID enabled the integration of the miRNA target genes into common pathways or GO processes. Analysis of the targets predicted for each miRNA expression cluster generated by STEM identified four significantly enriched pathways for profile 16: the neurotrophin, ERBB, MAPK and RIG-I-like receptor signalling pathways. Six pathways were predicted for the targets of miRNAs in profile 1: adherens junction, acute myeloid leukaemia, small cell lung cancer, cell cycle, pathways in cancer and the chemokine signalling pathway. 
No common pathways were predicted for profile 18. The predicted pathways are shown in Additional files 4 and 5.\nFor all the profiles, DAVID also predicted numerous biological processes in which miRNA targets play a significant role (see Additional file 6). In general, the predicted biological processes and pathways were mainly associated with cell cycle regulation and the induction of mitotic divisions, switching on anti-apoptotic genes (ECM, PKB/Akt and IKK) and genes stimulating proliferation (such as MEK, PPARγ), which are of known importance in epithelial wound repair. Beyond these well-documented processes, we observed, surprisingly, that target genes involved in the neurotrophin signalling pathway were the most significantly overrepresented, suggesting the importance of this pathway in the epithelial wound repair process (Additional file 7).
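The "at least five algorithms" filter used above to build the high-confidence target list can be sketched as follows; the miRNA–mRNA pairs below are invented placeholders, not the study's actual predictions.

```python
from collections import Counter

def high_confidence_targets(predictions_by_algorithm, min_votes=5):
    """Keep (miRNA, mRNA) pairs predicted by at least `min_votes` algorithms."""
    votes = Counter()
    for algorithm, pairs in predictions_by_algorithm.items():
        votes.update(set(pairs))  # one vote per algorithm per pair
    return {pair for pair, n in votes.items() if n >= min_votes}

# Invented outputs from six hypothetical prediction algorithms
predictions = {
    "alg1": [("miR-21", "PTEN"), ("miR-21", "PDCD4"), ("miR-200b", "ZEB1")],
    "alg2": [("miR-21", "PTEN"), ("miR-200b", "ZEB1")],
    "alg3": [("miR-21", "PTEN"), ("miR-200b", "ZEB1"), ("miR-21", "SPRY1")],
    "alg4": [("miR-21", "PTEN"), ("miR-200b", "ZEB1")],
    "alg5": [("miR-21", "PTEN"), ("miR-200b", "ZEB1")],
    "alg6": [("miR-21", "PDCD4")],
}

targets = high_confidence_targets(predictions)
print(sorted(targets))  # only pairs supported by >= 5 algorithms remain
```

In practice the votes would come from tools such as TargetScan, miRanda or PicTar; taking the set per algorithm guards against one tool reporting the same pair twice.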
 Target pathways at different stages of wound repair\nTo identify the most important pathways involved at different stages of epithelial wound repair in vitro, we also performed pathway enrichment for miRNAs significantly altered at only one time point of wound repair (see Additional file 1, genes in red). For those genes, targets were predicted as above and DAVID was used to identify potential pathways and biological processes. The main observation in epithelial cells in the early phase of repair (2 hours post-wounding) was the up-regulation of miRNAs, suggesting the switching off of target genes and processes associated with the response to cellular stress (MAP kinase pathway), regulation of the actin cytoskeleton, cell proliferation and migration. 
The main pathways targeted by up-regulated miRNAs 4 hours after cell damage included genes involved in the negative regulation of transcription, RNA metabolism, and regulation of cell motion and the cytoskeleton. At 8 hours after wounding, a number of miRNAs were up-regulated, indicating the switching off of genes involved in the negative regulation of gene expression and of cell communication. At 16 hours following epithelial cell wounding, a number of miRNA genes were down-regulated, thereby switching on genes involved in the mitotic cell cycle, negative regulation of cell death, cell proliferation and the ERBB signalling pathway (cell proliferation, survival, migration). This may suggest the predominance of a proliferating cell phenotype after the damaged area has been closed by spreading and migrating cells. At 24 hours post-wounding we observed further down-regulation of miRNA genes. Two were of particular interest, as they are responsible for switching on genes involved in the p53 signalling pathway (cell cycle arrest), IL-10 (anti-inflammatory response), regulation of apoptosis, cell death, and RNA transport and localization, indicating that by this time point cells have proliferated sufficiently and are beginning to differentiate. At 48 hours after wounding, we observed mainly up-regulation of miRNA genes responsible for silencing genes involved in protein catabolic processes, alternative splicing, spectrins, mRNA splicing and processing, and methylation, indicating that cells are resuming physiological processes and restoring a normal phenotype.
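The pathway enrichment behind these lists reduces to a one-sided Fisher exact (hypergeometric) test per pathway; a minimal sketch with invented gene counts:

```python
from math import comb

def fisher_enrichment_p(k, n, K, N):
    """One-sided Fisher exact test (hypergeometric upper tail): the
    probability of observing >= k pathway genes in a target list of size n,
    when the pathway contains K of the N genes in the background."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Invented counts: 12 of 200 predicted targets fall in a pathway
# that contains 100 of 20000 background genes (~1 expected by chance).
p = fisher_enrichment_p(k=12, n=200, K=100, N=20000)
print(f"enrichment p-value: {p:.2e}")
```

DAVID itself reports a more conservative variant of this test (the EASE score, which removes one gene from the overlap before computing the tail probability); the plain tail shown here conveys the idea.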
To analyse the changes in miRNA expression profile during epithelial wound repair, we used a previously well-established in vitro model mimicking this process [23,32-34] that allowed real-time monitoring of the rate of epithelial repair and quantitative analysis using time-lapse microscopy. The following time points were selected for miRNA expression profile analysis: baseline, immediately before wounding; (A) 2 hours after wounding, when cells adjacent to the wound have initiated a response but have not migrated substantially; (B) 4 hours after wounding, when 25% of the original wound area has been covered by cells; (C) 8 hours after wounding, when 50% of the wounded area is covered by cells; and (D) 16 hours after wounding, when the wounded area is completely covered by cells. Because cell proliferation and re-differentiation may still be in progress once the wound is covered, two additional time points were added: (E) 24 hours and (F) 48 hours post-wounding (Figure 1). With the exception of cells damaged during the original mechanical wounding, cell death was not seen in the repairing areas by time-lapse microscopy.\nStages of wound repair at different time points (A – 2 hrs, B – 4 hrs, C – 8 hrs, D – 16 hrs, E – 24 hrs, F – 48 hrs post wounding), n=3 wells for each time point.
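The >2.0-fold screen applied to the profiling data can be sketched as follows, using invented intensity values; a signed fold-change convention (negative for down-regulation) matches how such tables are usually reported.

```python
def fold_changes(baseline, series):
    """Fold change vs. baseline at each time point, as a signed ratio
    (negative values denote down-regulation)."""
    out = []
    for v in series:
        ratio = v / baseline
        out.append(ratio if ratio >= 1 else -1 / ratio)
    return out

def modulated_time_points(fc, threshold=2.0):
    """Indices of time points where |fold change| exceeds the threshold."""
    return [i for i, f in enumerate(fc) if abs(f) > threshold]

# Invented normalized intensities: baseline plus seven post-wounding
# time points per miRNA (values are placeholders, not study data).
expression = {
    "miR-A": (10.0, [25.0, 40.0, 55.0, 80.0, 30.0, 12.0, 11.0]),
    "miR-B": (20.0, [21.0, 19.0, 22.0, 18.0, 20.5, 20.0, 19.5]),
}

for mirna, (base, series) in expression.items():
    fc = fold_changes(base, series)
    hits = modulated_time_points(fc)
    # e.g. flag miRNAs modulated at >= 5 of 7 time points (cf. Additional file 2)
    print(mirna, len(hits), "of", len(fc), "time points >2-fold")
```

Here "miR-A" would be flagged as modulated at five of seven time points, while "miR-B" never crosses the threshold.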
The main finding of this study is the involvement of multiple miRNA genes in the process of epithelial wound repair in vitro. We found three distinct expression patterns of miRNA gene clusters that are predicted, in turn, to regulate numerous pathways and biological processes involved in wound repair. We have applied here for the first time cluster analysis of time-series miRNA expression data (using STEM) to identify basic patterns, and pathway prediction (using DAVID), to study repair processes of the airway epithelium.\nThis approach enabled us to identify common miRNA expression profiles during wound repair, giving comprehensive information about activated miRNA genes, and allows the relationships amongst these genes, and their regulation and coordination over time, to be explored. Further validation of individual protein, gene or miRNA changes will be required in subsequent studies, but it seems clear that specific expression profiles of clusters of miRNAs correlate with repair of mechanically induced damage to the epithelium. 
For expression profile 16 we demonstrated that, among other plausible signalling pathways, the neurotrophin signalling pathway may be involved in wound repair in epithelial cells, in addition to its previously reported role in the inflammatory response of the airway epithelium in allergy and asthma [35-38]. Its involvement in wound repair further suggests that this pathway is important in the regulation of airway remodelling in asthma. Indeed, Kilic et al. [39] observed that blocking one of the neurotrophins, nerve growth factor (NGF), prevented subepithelial fibrosis in a mouse model of asthma and that NGF overexpression exerted a direct effect on collagen expression in murine lung fibroblasts. The involvement of neurotrophins in repair processes has also been confirmed recently by Palazzo et al. [40] in wound healing in dermal fibroblasts. Moreover, miRNAs involved in this pathway, such as the miR-200 family, have been reported to control epithelial-mesenchymal transition (EMT) [41], a process suggested to underlie airway remodelling in asthma. In a recent study utilising a mouse model of asthma, Ogawa et al. [42] observed that mice challenged with house dust mite allergen exhibited an increase in NGF that was primarily expressed in bronchial epithelium and was positively correlated with airway hyperresponsiveness and substance P-positive nerve fibers. However, in this model, siRNA targeting NGF inhibited hyperresponsiveness and the modulation of innervation, but not subepithelial fibrosis or allergic inflammation.\nFor expression profile 1 we observed a significant down-regulation at the beginning of wound repair followed by a sharp increase in miRNA expression with a maximum at 16 hours after cell damage. 
This may indicate induction, in the early phase of wound repair, of the six pathways predicted by enrichment analysis, which are then switched off by the miRNAs whose expression increases up to 16 hours post-wounding.\nThe process of wound repair in vivo in the airways involves cell spreading and migration as the primary mechanisms in the first 12–24 hours after injury, while proliferation begins by 15–24 hours and continues for days to weeks. Similarly, our study confirmed that epithelial wound repair in vitro mimics the in vivo situation, although over a shorter time frame: its early stage involves spreading and migration of neighbouring epithelial cells to cover the damaged area (2 and 4 hours after wounding), followed by migration and proliferation of progenitor cells to restore cell numbers (8 and 16 hours after cell damage) and differentiation to restore function (24 and 48 hours post-wounding) (Figure 1) [43-48].\nAnalysis of miRNAs involved only at specific time points of wound repair revealed that during the early stages numerous miRNAs are significantly up-regulated, switching off pathways regulating cell proliferation and differentiation, and activating cellular stress responses (chemokine signalling pathway) as well as cell migration and cell death (corresponding to the 2, 4 and 8 hour time points after injury). At later time points, cells undergo intensive proliferation and secrete extracellular matrix, supported by the involvement of the ERBB and NFAT pathways, which stimulate cell proliferation and regulate the transcription of immune genes (corresponding to 16 hours after injury). Once confluent, cells restore their phenotype: the cell cycle is arrested (inhibition of cell division) and differentiation processes are switched on. 
In parallel to this, the IL-10 anti-inflammatory signalling pathway is induced to deactivate immune cells stimulated during the early stages of wound repair.

Conclusions: In summary, we report here for the first time that the expression of multiple miRNAs is significantly altered during airway epithelium wound repair. Different patterns of expression have been observed, and the target genes of those miRNA clusters coordinate several biological pathways involved in the repair of injury. Our work provides a starting point for a systematic analysis of mRNA targets specific for wound repair. This will help to identify regulatory networks controlling these processes in airway epithelium, to better understand their involvement in respiratory diseases.

Competing interests: The authors declare that they have no competing interests.

Authors' contributions: AS performed the in vitro cell experiments and wounding assays, miRNA profiling, data analysis, and cluster and pathway analysis, drafted the paper and approved its final version. PL contributed to the study design and methodology regarding cell experiments, drafted the paper and approved its final version. JWH contributed to the study design and methodology regarding miRNA analysis, drafted the paper and approved its final version.

Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2466/13/63/prepub

Additional files: List of significantly modulated mature miRNAs (>10.0-fold) and their respective fold induction at each time point.
* miRNAs with a significant change in expression at one time point only (marked in red).
Fold change of the top ten miRNAs undergoing significant modulation (>10-fold) during the wound repair process at five or more time points.
MiRNA genes assigned to each expression profile during wound repair (values given for each time point represent expression change after normalization in STEM software).
The significantly overrepresented (enriched) pathways in the analysed sets of target genes of miRNAs included in profile 16.
The significantly overrepresented (enriched) pathways in the analysed sets of target genes of miRNAs included in profile 1.
The most significant biological processes, predicted using the DAVID tool, undergoing regulation by miRNA target genes from the same expression profile (processes were ranked by their Fisher exact probability value from the gene enrichment analysis to identify those showing significant overrepresentation).
Neurotrophin signaling pathway with miRNA genes and their predicted targets.
Keywords: Epithelial cells, Wound repair, miRNA, Profiling, Cluster analysis, Pathway analysis
Background: The airway epithelium has been recognized to play a central role in the integration of innate and adaptive immune responses [1-4]. The airway epithelium is also crucial to the origin and progression of respiratory disorders such as asthma, chronic obstructive pulmonary disease, cystic fibrosis and pulmonary fibrosis. In asthma, chronic airway inflammation underlies aberrant repair of the airway that subsequently leads to structural and functional changes in the airway wall. This remodeling is responsible for a number of the clinical characteristics of asthmatic patients. Normal epithelial repair occurs in a series of overlapping stages. Damage to the epithelium, or challenge associated with damage, can result in loss of structural integrity or barrier function and local mucosal activation [5]. Studies in animals have shown that the repair of normal airway epithelium after minor damage involves the migration of the remaining epithelial cells to cover the damaged area. This is a rapid process, suggesting an autonomous response by cells in the vicinity of the damage [6]. It includes an acute inflammatory response, with recruitment of immune cells as well as epithelial spreading and migration stimulated by secreted provisional matrix. Once the barrier is reformed, the differentiated characteristics are then restored. The regulation of these processes requires complex sequential changes in epithelial cell biology driven by the phased expression of networks of genes [7]. One biological mechanism that plays a critical role in the coordinated regulation of gene expression, such as that required during epithelial wound repair, is the expression of small non-coding RNA molecules termed microRNAs (miRNAs) [8].
To date, more than 1000 human miRNAs have been identified [http://microrna.sanger.ac.uk], with documented tissue-specific expression of some of these miRNAs in the lung and involvement in the development of lung diseases including lung cancer, asthma and fibrosis [9-15]. MiRNAs have been demonstrated to play a crucial role in epithelial cell proliferation and differentiation [16-18]. The expression in lung epithelium of Dicer, the enzyme responsible for processing of miRNA precursors, is essential for lung morphogenesis [16], and there is differential expression of miRNAs during lung development [17]. Furthermore, transgenic over-expression of miR-17-92 (shown to be over-expressed in lung cancer) in the lung epithelium promotes proliferation and inhibits differentiation of lung epithelial progenitor cells [18]. Recently, it has been reported that miR-146a modulates survival of bronchial epithelial cells in response to cytokine-induced apoptosis [19]. In experimental studies, mice lacking miR-155 demonstrated autoimmune phenotypes in the lungs with increased airway remodeling and leukocyte invasion, phenotypes similar to those observed in asthma [20,21]. While a number of studies have examined the role of miRNAs in lung development and in disease [9-15], their influence on the regulation of gene expression involved in epithelial wound repair remains unresolved, and comprehensive studies on miRNA involvement in epithelial repair and the pathogenesis of airway remodeling are lacking. However, in the skin, miRNAs were found to play a crucial role in wound closure by controlling migration and proliferation of keratinocytes in an in vitro model of wound repair [22]. Thus, the hypothesis of this study was that the stages of wound repair in respiratory epithelium are regulated by the phased expression of specific miRNA species. The aim was to investigate the possible involvement of miRNAs by examining their expression profile in epithelial repair in a cell culture model.
Understanding the effect of altered miRNA activity on protein expression during repair processes can be further used to identify pathways targeted by miRNAs that regulate epithelial wound repair, potentially providing a novel therapeutic strategy for asthma and other respiratory diseases with underlying aberrant epithelial wound repair.

Methods: Cell culture and wounding assays
The 16HBE14o- bronchial epithelial cell line was cultured under standard conditions [23]. For the wounding assay, cells were seeded on 6-well plates at an initial density of 3×10⁵ cells and cultured until confluent. Forty-eight hours after reaching full confluence, cells were damaged by scraping off the monolayer in a hatch-cross wounding pattern using a P200 Gilson pipette tip. The medium and cell debris were then removed by pipetting off the medium, and 2 ml of fresh serum-containing medium was added to the remaining cells. For all experiments, at least two points of reference per well of a 6-well plate were used for post-injury analyses. Several time-lapse experiments were performed to establish consistent experimental conditions and the timing of the stages of wound repair.

Time lapse microscopy
Time lapse images were captured at 15-minute intervals on a Leica DM IRB phase-contrast inverted microscope (Leica; Milton Keynes, UK) in a chamber maintained at 36 ± 1°C in a 5% CO2 atmosphere. The images were collected with a cooled Hamamatsu ORCA digital camera (Hamamatsu Photonics, Welwyn Garden City, UK) connected to a computer running Cell^P software (Olympus, London, UK) for 30 hours (ensuring complete wound closure was included in the time course). ImageJ software [24] was used for quantitative analysis of the area of damage, and hence ongoing repair, in time-lapse serial images.

RNA isolation
RNA isolation was performed with an Exiqon RNA isolation kit. Samples were collected in triplicate for each of the following time points: baseline, 2, 4, 8, 16, 24 and 48 hours after wounding. RNA isolation was performed according to the manufacturer’s protocol from 6-well plates (9.5 cm² of growth area), with 1×10⁶ cells per well as starting material. Samples were frozen at -70°C for subsequent use in microarray experiments.

RNA quality control
The concentration of total RNA in each sample was determined using a NanoDrop 1000 spectrophotometer. The integrity of the extracted total RNA was assessed with a Lab901 Gene Tools System. The passing criteria for use in microarrays were: sample volume of 10–30 μl, RNA concentration > 50 ng/μl, SDV ≤ 3.1 (ScreenTape Degradation Value), which corresponds to RIN ≥ 9.0, and purity of OD 260/280 > 1.7 and OD 260/230 > 1.4.

Micro RNA profiling
Micro-RNA expression profiling of bronchial epithelial cells was performed in three replicates per time point following wounding. TaqMan Array Human MicroRNA Cards A and B (Applied Biosystems) (based on Sanger miRBase 9.2) were utilised for specific amplification and detection of 754 mature human miRNAs by TaqMan-based quantitative real-time PCR in a TaqMan Low Density Array (TLDA) format, using the TaqMan MicroRNA Reverse Transcription Kit and Megaplex RT Primers (Human Megaplex™ RT Primers Pools A and B). The resulting cDNA, combined with TaqMan Universal PCR Master Mix, No AmpErase UNG, was loaded into the arrays, and TaqMan real-time PCR was performed using the 7900HT Fast Real-Time PCR System (Applied Biosystems) according to the manufacturer’s protocol. Raw data were obtained using SDS 2.3 software (Applied Biosystems). All SDS files were analyzed using the RQ Manager 1.2 software (Applied Biosystems). The comparative analysis of miRNA expression datasets between baseline and each time point following the wounding assay was performed using DataAssist software v.3.01 (Applied Biosystems). The comparative CT method [25] was used to calculate the relative quantitation of gene expression after removing outliers with Grubbs’ outlier test together with a heuristic rule applied among technical replicates; data normalization was based on the endogenous control gene expression method (U6 snRNA-001973), where the mean of the selected endogenous control was used to normalize the Ct value of each sample. The data from miRNA profiling have been deposited in the ArrayExpress database (accession no. E-MEXP-3986).

Cluster analysis
To identify clusters of miRNAs following the same expression profile over time, we performed cluster analysis using STEM (Short Time-series Expression Miner) software, available at http://www.cs.cmu.edu/~jernst/stem [26].

Target genes and pathways prediction
To identify potential common biological pathways for miRNAs showing similar expression profiles in cluster analysis, we performed pathway enrichment analysis. The best predicted candidate mRNA target genes for each differentially expressed miRNA were identified using the miRNA BodyMap tool available at http://www.mirnabodymap.org. The tool enables the selection of target genes based on several prediction algorithms at a time: DIANA, PITA, TargetScan, RNA22 (3UTR), RNA22 (5UTR), TargetScan_cons, MicroCosm, miRDB, TarBase and miRecords. To minimize target prediction noise, only target genes predicted by five or more of these algorithms were included.
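The ≥5-algorithm consensus filter used for target selection can be sketched as follows. This is illustrative only: the algorithm names mirror those listed in the text, but the prediction sets and gene names are invented.

```python
# Consensus filtering of predicted miRNA targets: keep only genes called by
# at least five prediction algorithms (the threshold used in this study).
from collections import Counter

MIN_ALGORITHMS = 5

def consensus_targets(predictions):
    """predictions: algorithm name -> set of predicted target genes (one miRNA).
    Returns genes predicted by at least MIN_ALGORITHMS algorithms."""
    counts = Counter(gene for targets in predictions.values() for gene in targets)
    return {gene for gene, n in counts.items() if n >= MIN_ALGORITHMS}

# Invented prediction sets for a single hypothetical miRNA:
preds = {
    "DIANA":      {"NTRK2", "BDNF", "ZEB1"},
    "PITA":       {"NTRK2", "BDNF", "ZEB1"},
    "TargetScan": {"NTRK2", "ZEB1"},
    "RNA22_3UTR": {"NTRK2", "BDNF", "ZEB1"},
    "MicroCosm":  {"NTRK2", "ZEB1"},
    "miRDB":      {"NTRK2", "BDNF"},
}
# NTRK2 (6 calls) and ZEB1 (5 calls) pass the filter; BDNF (4 calls) does not.
print(sorted(consensus_targets(preds)))
```

Requiring agreement among several independent algorithms trades sensitivity for precision, which is why the study tolerates missing some true targets to reduce prediction noise.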
The lists of predicted target genes were then analysed using The Database for Annotation, Visualization and Integrated Discovery (DAVID) v.6.7 [27,28] to identify BioCarta and KEGG pathways [29,30], enriched functionally related gene groups and biological themes, particularly gene ontology (GO) terms [31], in which the analysed sets of target genes were statistically the most overrepresented (enriched).

Statistics
The statistics applied by DataAssist software for each sample included calculation of the relative quantification RQ = 2^(−ΔCt) / 2^(−ΔCt reference). The standard deviation (SD) was calculated for the CT values of each of the three technical replicates and was used to calculate RQ Min and RQ Max [RQ Min = 2^(−ΔCt − SD) / 2^(−ΔCt reference), RQ Max = 2^(−ΔCt + SD) / 2^(−ΔCt reference)].
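The relative-quantification arithmetic of the comparative CT method can be reproduced in a few lines. This is a minimal sketch; the Ct values below are invented for illustration and are not from the study's data.

```python
# Comparative CT (2^-ddCt) relative quantification with RQ Min/Max error
# bounds, following the formulas in the Statistics section. Ct values invented.
from statistics import mean, stdev

def relative_quantification(target_cts, control_ct, ref_dct):
    """target_cts: technical-replicate Ct values for the miRNA of interest;
    control_ct: mean Ct of the endogenous control (e.g. U6 snRNA) in this sample;
    ref_dct: dCt of the reference (baseline) sample for the same miRNA."""
    dct = mean(target_cts) - control_ct            # dCt for this sample
    sd = stdev(target_cts)                         # SD over technical replicates
    rq     = 2 ** -dct        / 2 ** -ref_dct      # RQ = 2^-dCt / 2^-dCt(ref)
    rq_min = 2 ** (-dct - sd) / 2 ** -ref_dct      # lower error bound
    rq_max = 2 ** (-dct + sd) / 2 ** -ref_dct      # upper error bound
    return rq, rq_min, rq_max

# Example: dCt = 24.2 - 20.0 = 4.2 against a baseline dCt of 6.2 gives
# RQ = 2^(6.2 - 4.2) = 4, i.e. a four-fold increase over baseline.
rq, rq_min, rq_max = relative_quantification([24.1, 24.3, 24.2],
                                             control_ct=20.0, ref_dct=6.2)
print(f"RQ = {rq:.2f} (range {rq_min:.2f}-{rq_max:.2f})")
```

Note that because Ct is a log2 scale, subtracting the SD in the exponent gives multiplicative (fold-change) error bounds rather than additive ones.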
Pearson's product moment correlation coefficient (r) was calculated for CT or ΔCT values of sample pairs as below and plotted on the Signal Correlation Plot and Scatter Plot, respectively:

r = (N ΣXY − ΣX ΣY) / (√(N ΣX² − (ΣX)²) · √(N ΣY² − (ΣY)²))

Distances between samples and assays were calculated for hierarchical clustering based on the ΔCT values, using Pearson's correlation or the Euclidean distance, as follows [https://products.appliedbiosystems.com]. For a sample pair, the Pearson's product moment correlation coefficient (r) was calculated considering all ΔCT values from all assays, and the distance defined as 1 − r. For an assay pair, r was calculated considering all ΔCT values from all samples, and the distance defined as 1 − r. The Euclidean distance was calculated as √(Σ(ΔCT_i − ΔCT_j)²), where, for a sample pair, the sum is taken across all assays for sample i and sample j, while for an assay pair, it is taken across all samples for assay i and assay j.
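The 1 − r and Euclidean distance measures used for hierarchical clustering can be reproduced directly from their definitions. A minimal sketch; the ΔCT vectors are invented for illustration.

```python
# Correlation (1 - r) and Euclidean distances between two samples, computed
# from their dCt vectors across a common set of assays. Values invented.
import math

def pearson_r(x, y):
    """Pearson's product moment correlation from the raw-sums formula."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

def correlation_distance(x, y):
    return 1 - pearson_r(x, y)       # distance used for hierarchical clustering

def euclidean_distance(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# dCt vectors for two samples across the same four assays:
sample_i = [2.0, 4.5, 1.1, 6.3]
sample_j = [2.2, 4.1, 1.5, 6.0]
print(correlation_distance(sample_i, sample_j),
      euclidean_distance(sample_i, sample_j))
```

The 1 − r distance captures similarity of expression *shape* regardless of overall level, while the Euclidean distance also penalizes absolute offsets; clustering tools typically offer both for exactly this reason.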
Results: Characterisation of epithelial wound repair model
To analyse the changes in miRNA expression profile during epithelial wound repair, we used a previously well established in vitro model mimicking this process [23,32-34], which allowed real-time monitoring of the rate of epithelial repair and quantitative analysis using time-lapse microscopy. The following time points were selected for miRNA expression profile analysis: baseline, immediately before wounding. (A) 2 hours after wounding: cells adjacent to the wound initiate a response but have not migrated substantially.
(B) 4 hours after wounding: 25% of the original wound area has been covered by cells. (C) 8 hours after wounding: 50% of the wounded area covered by cells. (D) 16 hours after wounding: wounded area completely covered by cells. Once the wound is covered, cell proliferation and re-differentiation may still be in progress, so additional time points were added: (E) 24 hours post-wounding and (F) 48 hours after wounding (Figure 1). With the exception of cells damaged during the original mechanical wounding, cell death was not seen in the repairing areas by time-lapse microscopy. Stages of wound repair at different time points (A – 2 hrs, B – 4 hrs, C – 8 hrs, D – 16 hrs, E – 24 hrs, F – 48 hrs post wounding), n=3 wells for each time point.
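The staging above is defined by the fraction of the original wound area that has been re-covered at each time point; from ImageJ-style area measurements this is simple arithmetic. A sketch with invented area values (the study's actual measurements are not reproduced here).

```python
# Percent wound closure from time-lapse area measurements (ImageJ-style),
# mirroring the staging above. Open-wound areas (pixel^2) are invented.
def percent_closure(initial_area, current_area):
    """Fraction of the original wound area now covered by cells, as a percentage."""
    return 100.0 * (initial_area - current_area) / initial_area

# hours after wounding -> remaining open wound area
areas = {0: 120_000, 2: 112_000, 4: 90_000, 8: 60_000, 16: 0}
for hours, area in areas.items():
    print(f"{hours:>2} h: {percent_closure(areas[0], area):5.1f}% closed")
```

With these invented numbers the 4 h, 8 h and 16 h points come out at 25%, 50% and 100% closure, matching the stage definitions (B), (C) and (D) above.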
Global miRNA expression profile altered during epithelial wound repair
Expression profiling analysis revealed a large number of mature miRNAs that were modulated at different time points during epithelial repair with a fold change above 2.0 (Table 1). Numerous miRNAs showed significantly increased or decreased expression (>10-fold) at different time points as compared to baseline (presented in Additional file 1). Based on the high fold change values at different time points, ten miRNAs were found to undergo significant modulation (both up- and down-regulation) at five or more of the seven time points analysed (Additional file 2). We also observed that alterations in the expression of some miRNA genes were limited to a single time point of wound repair, whereas at the other time points the expression levels did not differ much from baseline, suggesting their involvement at a particular stage of repair (marked in red in Additional file 1). Table 1. Number of miRNAs with >2.0-fold change in expression at different time points after wounding.

Cluster analysis
We then hypothesized that, given the number of miRNA genes undergoing significant changes during the epithelial repair process, a common expression profile might be shared by miRNAs whose expression is regulated by particular transcriptional activation pathways. Therefore, we analysed the expression of miRNA genes using the clustering algorithm STEM [26], assigning each gene passing the filtering criteria to the model profile that most closely matches the gene's expression profile, as determined by the correlation coefficient. Since the model profiles are selected by the software by random allocation, independent of the data, the algorithm then determines which profiles have a statistically significantly higher number of genes assigned, using a permutation test. It then uses standard hypothesis testing to determine which model profiles have significantly more genes assigned as compared to the average number of genes assigned to the model profile in the permutation runs. Our cluster analysis revealed three significant miRNA expression profiles (16, 1 and 18) over 48 hours of wound repair (Figures 2, 3 and 4). Profile 16 of miRNA with similar expression pattern during wound repair. Profile 1 of miRNA with similar expression pattern during wound repair. Profile 18 of miRNA with similar expression pattern during wound repair. Profile 16 included genes that gradually increase between 2 and 16 hours and then display a sudden drop in expression 16 hours post-wounding, which corresponded to the completion of cell proliferation and the restoration of the monolayer after wounding in time-lapse observations.
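The STEM logic described above, assigning each gene to the model profile it correlates with best and judging profile significance by permutation, can be sketched in miniature. Toy model profiles and gene series below; this is not the actual STEM implementation.

```python
# STEM-style assignment: each gene's time series goes to the model profile
# maximizing Pearson correlation; a permutation test then estimates how many
# genes each profile would collect by chance. All data are toy values.
import math, random
from collections import Counter

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def assign(genes, profiles):
    """genes: name -> expression series; profiles: id -> model series."""
    return {g: max(profiles, key=lambda p: pearson_r(series, profiles[p]))
            for g, series in genes.items()}

def permutation_counts(genes, profiles, n_perm=1000, seed=0):
    """Mean genes per profile after randomly permuting each gene's time points."""
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(n_perm):
        shuffled = {g: rng.sample(s, len(s)) for g, s in genes.items()}
        totals.update(assign(shuffled, profiles).values())
    return {p: totals[p] / n_perm for p in profiles}

profiles = {16: [0, 1, 2, 3, 0, 0], 1: [-2, -1, 1, 3, 1, 0]}   # toy model profiles
genes = {"miR-A": [0.1, 0.9, 2.1, 2.8, 0.2, 0.1],
         "miR-B": [-1.8, -0.9, 1.2, 2.9, 0.8, 0.1]}
observed = Counter(assign(genes, profiles).values())
expected = permutation_counts(genes, profiles)
print(observed, expected)
```

A profile is then called significant when its observed gene count substantially exceeds the permutation average, which is the intuition behind STEM's hypothesis test.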
Profile 1 was characterized by significant decrease of miRNA expression 4 hours after wounding followed by a significant increase with a maximum 16 hours post-wounding, suggesting induction of transcription of genes involved in the early response to stress due to mechanical cell damage which are subsequently switched off. Profile 18 shared some similarities with profile 1, although it showed a more gradual decrease in miRNA expression 4 hours post-wounding, that then increased steadily to reach a maximum at 16 hours and afterwards gradually decreased. The different profiles of miRNA expression are shown in Figures 2, 3 and 4. The miRNA genes sharing the common expression pattern during epithelial wound repair are listed in Additional file 3. We then hypothesized that, given the number of miRNA genes undergoing significant changes during the epithelial repair process, a common expression profile might be shared by miRNAs whose expression is regulated by particular transcriptional activation pathways. Therefore, we analysed the expression of miRNA genes with use of the clustering algorithm STEM [26], assigning each gene passing the filtering criteria to the model profile that most closely matches the gene's expression profile as determined by the correlation coefficient. Since the model profiles are selected by the software by random allocation, independent of the data, the algorithm then determines which profiles have a statistically significant higher number of genes assigned using a permutation test. It then uses standard hypothesis testing to determine which model profiles have significantly more genes assigned as compared to the average number of genes assigned to the model profile in the permutation runs. Our cluster analysis revealed three significant miRNA expression profiles (16, 1 and 18) over 48 hours of wound repair (Figures 2, 3 and 4). Profile 16 of miRNA with similar expression pattern during wound repair. 
Profile 1 of miRNA with similar expression pattern during wound repair. Profile 18 of miRNA with similar expression pattern during wound repair. Profile 16 included genes that gradually increase between 2 and 16 hours and then display a sudden drop in expression 16 hours post-wounding, which corresponded to the completion of cell proliferation and the restoration of the monolayer after wounding in time lapse observations. Profile 1 was characterized by significant decrease of miRNA expression 4 hours after wounding followed by a significant increase with a maximum 16 hours post-wounding, suggesting induction of transcription of genes involved in the early response to stress due to mechanical cell damage which are subsequently switched off. Profile 18 shared some similarities with profile 1, although it showed a more gradual decrease in miRNA expression 4 hours post-wounding, that then increased steadily to reach a maximum at 16 hours and afterwards gradually decreased. The different profiles of miRNA expression are shown in Figures 2, 3 and 4. The miRNA genes sharing the common expression pattern during epithelial wound repair are listed in Additional file 3. Identification of biological processes regulated by miRNAs Pathways in clusters of miRNAs The next question to be addressed was if miRNA clusters of characteristic expression profile during epithelial wound repair identified using STEM were regulating target genes from the same biological pathways or processes. To analyse this, we used highly predicted miRNA targets (mRNAs confirmed to be a target of specific miRNA by at least five different algorithms) to create a list of potential miRNA target genes, which were then analysed utilising the DAVID online database for annotation and visualization [27,28]. The use of DAVID enabled the integration of the miRNA target genes into common pathways or GO processes. 
Analysis of targets predicted for each miRNA expression cluster generated by STEM enabled us to predict four significantly enriched pathways for profile 16, including the neurotrophin signalling pathway, ERBB signalling pathway, MAPK signalling pathway and the RIG-I-like receptor signalling pathway. Six pathways were predicted for the targets of miRNAs demonstrating expression in profile 1: adherence junction, acute myeloid leukaemia, small lung cancer, cell cycle, pathways in cancer and the chemokine signalling pathway. No common pathways were predicted for profile 18. The predicted pathways are shown in Additional files 4 and 5. For all the profiles, DAVID also predicted numerous biological processes where miRNAs targets play a significant role (see Additional file 6). In general, predicted biological processes and pathways were mainly associated with cell cycle regulation and induction of mitotic divisions, switching on anti-apoptotic genes (ECM, PKB/Akt and IKK) and genes stimulating proliferation (such as MEK, PPARγ) that are of known importance in epithelial wound repair. Apart from well documented biological processes, we also observed that, surprisingly, the most significantly overrepresented were target genes involved in the neurotrophin signalling pathway which suggests its importance in epithelial wound repair process (Additional file 7). The next question to be addressed was if miRNA clusters of characteristic expression profile during epithelial wound repair identified using STEM were regulating target genes from the same biological pathways or processes. To analyse this, we used highly predicted miRNA targets (mRNAs confirmed to be a target of specific miRNA by at least five different algorithms) to create a list of potential miRNA target genes, which were then analysed utilising the DAVID online database for annotation and visualization [27,28]. The use of DAVID enabled the integration of the miRNA target genes into common pathways or GO processes. 
Analysis of targets predicted for each miRNA expression cluster generated by STEM enabled us to predict four significantly enriched pathways for profile 16, including the neurotrophin signalling pathway, ERBB signalling pathway, MAPK signalling pathway and the RIG-I-like receptor signalling pathway. Six pathways were predicted for the targets of miRNAs demonstrating expression in profile 1: adherence junction, acute myeloid leukaemia, small lung cancer, cell cycle, pathways in cancer and the chemokine signalling pathway. No common pathways were predicted for profile 18. The predicted pathways are shown in Additional files 4 and 5. For all the profiles, DAVID also predicted numerous biological processes where miRNAs targets play a significant role (see Additional file 6). In general, predicted biological processes and pathways were mainly associated with cell cycle regulation and induction of mitotic divisions, switching on anti-apoptotic genes (ECM, PKB/Akt and IKK) and genes stimulating proliferation (such as MEK, PPARγ) that are of known importance in epithelial wound repair. Apart from well documented biological processes, we also observed that, surprisingly, the most significantly overrepresented were target genes involved in the neurotrophin signalling pathway which suggests its importance in epithelial wound repair process (Additional file 7). Pathways in clusters of miRNAs The next question to be addressed was if miRNA clusters of characteristic expression profile during epithelial wound repair identified using STEM were regulating target genes from the same biological pathways or processes. To analyse this, we used highly predicted miRNA targets (mRNAs confirmed to be a target of specific miRNA by at least five different algorithms) to create a list of potential miRNA target genes, which were then analysed utilising the DAVID online database for annotation and visualization [27,28]. 
The use of DAVID enabled the integration of the miRNA target genes into common pathways or GO processes. Analysis of targets predicted for each miRNA expression cluster generated by STEM enabled us to predict four significantly enriched pathways for profile 16, including the neurotrophin signalling pathway, ERBB signalling pathway, MAPK signalling pathway and the RIG-I-like receptor signalling pathway. Six pathways were predicted for the targets of miRNAs demonstrating expression in profile 1: adherence junction, acute myeloid leukaemia, small lung cancer, cell cycle, pathways in cancer and the chemokine signalling pathway. No common pathways were predicted for profile 18. The predicted pathways are shown in Additional files 4 and 5. For all the profiles, DAVID also predicted numerous biological processes where miRNAs targets play a significant role (see Additional file 6). In general, predicted biological processes and pathways were mainly associated with cell cycle regulation and induction of mitotic divisions, switching on anti-apoptotic genes (ECM, PKB/Akt and IKK) and genes stimulating proliferation (such as MEK, PPARγ) that are of known importance in epithelial wound repair. Apart from well documented biological processes, we also observed that, surprisingly, the most significantly overrepresented were target genes involved in the neurotrophin signalling pathway which suggests its importance in epithelial wound repair process (Additional file 7). The next question to be addressed was if miRNA clusters of characteristic expression profile during epithelial wound repair identified using STEM were regulating target genes from the same biological pathways or processes. To analyse this, we used highly predicted miRNA targets (mRNAs confirmed to be a target of specific miRNA by at least five different algorithms) to create a list of potential miRNA target genes, which were then analysed utilising the DAVID online database for annotation and visualization [27,28]. 
The use of DAVID enabled the integration of the miRNA target genes into common pathways or GO processes. Analysis of targets predicted for each miRNA expression cluster generated by STEM enabled us to predict four significantly enriched pathways for profile 16, including the neurotrophin signalling pathway, ERBB signalling pathway, MAPK signalling pathway and the RIG-I-like receptor signalling pathway. Six pathways were predicted for the targets of miRNAs demonstrating expression in profile 1: adherence junction, acute myeloid leukaemia, small lung cancer, cell cycle, pathways in cancer and the chemokine signalling pathway. No common pathways were predicted for profile 18. The predicted pathways are shown in Additional files 4 and 5. For all the profiles, DAVID also predicted numerous biological processes where miRNAs targets play a significant role (see Additional file 6). In general, predicted biological processes and pathways were mainly associated with cell cycle regulation and induction of mitotic divisions, switching on anti-apoptotic genes (ECM, PKB/Akt and IKK) and genes stimulating proliferation (such as MEK, PPARγ) that are of known importance in epithelial wound repair. Apart from well documented biological processes, we also observed that, surprisingly, the most significantly overrepresented were target genes involved in the neurotrophin signalling pathway which suggests its importance in epithelial wound repair process (Additional file 7). Target pathways at different stages of wound repair To identify the most important pathways involved at different stages of epithelial wound repair in vitro we also performed pathway enrichment of miRNAs significantly altered only at one time point of wound repair (see Additional file 1, genes in red). For those genes, targets were predicted as above and DAVID was used to identify potential pathways and biological processes. 
The main observation for epithelial cells in the early phase of repair (2 hours post-wounding) were miRNAs being up-regulated, suggesting switching off target genes and processes associated with response to cellular stress (MAP kinase pathway), regulation of actin cytoskeleton, cell proliferation and migration. The main pathways targeted by up-regulated miRNAs identified for the repair 4 hours after cell damage included genes involved in negative regulation of transcription, RNA metabolism, regulation of cell motion and the cytoskeleton. The most important processes at 8 hours after wounding involved a number of up-regulated miRNAs at this time point and indicating the switching off of genes involved in negative regulation of gene expression and negative regulation of cell communication. At 16 hours following epithelial cell wounding we observed a number of miRNA genes that were down regulated and, therefore, switching on genes involved in mitotic cell cycle, negative regulation of cell death, cell proliferation, ERBB signalling pathway (cell proliferation, survival, migration). This may suggest the predominance of a proliferating phenotype of cells after the damaged area was closed by spreading and migrating cells. After 24 hours post-wounding we observed further down regulation of miRNA genes. Two were of particular interest as they are responsible for switching on genes involved in p53 signalling pathway (cell cycle arrest), IL-10 (anti-inflammatory response), regulation of apoptosis, cell death, RNA transport and localization. This indicates that at this time point cells have proliferated sufficiently and are beginning to differentiate. At 48 hours after wounding, we observed mainly up regulation of miRNA genes responsible for silencing genes involved in protein catabolic processes, alternative splicing, spectrins, mRNA splicing and processing as well as methylation indicating that cells are undergoing physiological processes and restoring a normal phenotype. 
To identify the most important pathways involved at different stages of epithelial wound repair in vitro we also performed pathway enrichment of miRNAs significantly altered only at one time point of wound repair (see Additional file 1, genes in red). For those genes, targets were predicted as above and DAVID was used to identify potential pathways and biological processes. The main observation for epithelial cells in the early phase of repair (2 hours post-wounding) were miRNAs being up-regulated, suggesting switching off target genes and processes associated with response to cellular stress (MAP kinase pathway), regulation of actin cytoskeleton, cell proliferation and migration. The main pathways targeted by up-regulated miRNAs identified for the repair 4 hours after cell damage included genes involved in negative regulation of transcription, RNA metabolism, regulation of cell motion and the cytoskeleton. The most important processes at 8 hours after wounding involved a number of up-regulated miRNAs at this time point and indicating the switching off of genes involved in negative regulation of gene expression and negative regulation of cell communication. At 16 hours following epithelial cell wounding we observed a number of miRNA genes that were down regulated and, therefore, switching on genes involved in mitotic cell cycle, negative regulation of cell death, cell proliferation, ERBB signalling pathway (cell proliferation, survival, migration). This may suggest the predominance of a proliferating phenotype of cells after the damaged area was closed by spreading and migrating cells. After 24 hours post-wounding we observed further down regulation of miRNA genes. Two were of particular interest as they are responsible for switching on genes involved in p53 signalling pathway (cell cycle arrest), IL-10 (anti-inflammatory response), regulation of apoptosis, cell death, RNA transport and localization. 
This indicates that at this time point cells have proliferated sufficiently and are beginning to differentiate. At 48 hours after wounding, we observed mainly up regulation of miRNA genes responsible for silencing genes involved in protein catabolic processes, alternative splicing, spectrins, mRNA splicing and processing as well as methylation indicating that cells are undergoing physiological processes and restoring a normal phenotype. Characterisation of epithelial wound repair model: To analyse the changes in miRNA expression profile during epithelial wound repair, we used a previously well established in vitro model mimicking this process [23,32-34], that allowed real-time monitoring of the rate of epithelial repair and quantitative analysis using time-lapse microscopy. The following time points were selected for miRNA expression profile analysis: baseline immediately before wounding. (A) 2 hours after wounding: cells adjacent to the wound initiate a response but have not migrated substantially. (B) 4 hours after wounding: 25% of the original wound area has been covered by cells. (C) 8 hours after wounding: 50% of wounded area covered by cells. (D) 16 hours after wounding: wounded area completely covered by cells. Once the wound is covered cell proliferation and re-differentiation may still be in progress so additional time points were added. (E) 24 hours post-wounding (F) 48 hours after wounding (Figure 1). With the exception of cells damaged during the original mechanical wounding, cell death was not seen in the repairing areas by time lapse microscopy. Stages of wound repair at different time points (A – 2 hrs, B – 4 hrs, C – 8 hrs, D – 16 hrs, E – 24 hrs, F – 48 hrs post wounding), n=3 wells for each time point. 
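The fold-change screen used to flag modulated miRNAs (>2.0-fold versus baseline, >10-fold for the strongest responders) can be sketched in a few lines. This is an illustrative sketch only, not the authors' analysis pipeline; the function names and the signed fold-change convention (negative for down-regulation) are assumptions.

```python
# Illustrative fold-change screen (not the authors' pipeline): flag miRNAs
# whose expression changes more than `threshold`-fold relative to baseline.
def fold_changes(baseline, timepoint):
    """Signed fold change per miRNA from {miRNA: expression} dictionaries:
    positive values mean up-regulation, negative values down-regulation."""
    fc = {}
    for mirna, base in baseline.items():
        ratio = timepoint[mirna] / base
        fc[mirna] = ratio if ratio >= 1 else -1 / ratio
    return fc

def modulated(fc, threshold=2.0):
    """Return the miRNAs whose absolute fold change exceeds the threshold."""
    return {m: v for m, v in fc.items() if abs(v) > threshold}
```

Applying `modulated` at each time point with `threshold=2.0` would reproduce the kind of per-time-point counts summarized in Table 1; raising the threshold to 10.0 selects the strongly modulated miRNAs listed in Additional file 1.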
Cluster analysis

We then hypothesized that, given the number of miRNA genes undergoing significant changes during the epithelial repair process, a common expression profile might be shared by miRNAs whose expression is regulated by particular transcriptional activation pathways. We therefore analysed the expression of miRNA genes with the clustering algorithm STEM [26], which assigns each gene passing the filtering criteria to the model profile most closely matching the gene's expression profile, as determined by the correlation coefficient. Since the model profiles are selected by the software independently of the data, the algorithm then determines which profiles have a statistically significantly higher number of genes assigned, using a permutation test.
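The core of this procedure — assigning each gene's time series to the best-correlated model profile, then judging a profile significant when it attracts more genes than expected under permuted data — can be caricatured as follows. This is a minimal sketch under assumed simplifications; the actual STEM software selects its model profiles and computes expected counts differently in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_profiles(expr, profiles):
    """Assign each gene (rows of `expr`, one column per time point) to the
    model profile with the highest Pearson correlation; return indices."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.array([max(range(len(profiles)),
                         key=lambda k: corr(g, profiles[k])) for g in expr])

def profile_counts(expr, profiles):
    """Number of genes assigned to each model profile."""
    return np.bincount(assign_profiles(expr, profiles),
                       minlength=len(profiles))

def permutation_pvalues(expr, profiles, n_perm=200):
    """STEM-style significance: how often do counts at least as large as the
    observed ones arise when each gene's time points are permuted?"""
    observed = profile_counts(expr, profiles)
    exceed = np.zeros(len(profiles))
    for _ in range(n_perm):
        permuted = np.array([rng.permutation(g) for g in expr])
        exceed += profile_counts(permuted, profiles) >= observed
    return (exceed + 1) / (n_perm + 1)
```

A profile with a small permutation p-value is "significant" in the sense used in the text: substantially more genes follow its shape than chance ordering of the time points would produce.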
It then uses standard hypothesis testing to determine which model profiles have significantly more genes assigned than the average number assigned to that profile in the permutation runs. This cluster analysis revealed three significant miRNA expression profiles (16, 1 and 18) over 48 hours of wound repair (Figures 2, 3 and 4).

Profile 16 of miRNAs with a similar expression pattern during wound repair.

Profile 1 of miRNAs with a similar expression pattern during wound repair.

Profile 18 of miRNAs with a similar expression pattern during wound repair.

Profile 16 included genes whose expression increases gradually between 2 and 16 hours and then drops sharply at 16 hours post-wounding, which in time-lapse observations corresponded to the completion of cell proliferation and the restoration of the monolayer. Profile 1 was characterized by a significant decrease in miRNA expression 4 hours after wounding, followed by a significant increase peaking 16 hours post-wounding, suggesting induction of transcription of genes involved in the early response to the stress of mechanical cell damage, which are subsequently switched off. Profile 18 shared some similarities with profile 1, although it showed a more gradual decrease in miRNA expression 4 hours post-wounding; expression then increased steadily to a maximum at 16 hours and afterwards gradually decreased. The different profiles of miRNA expression are shown in Figures 2, 3 and 4, and the miRNA genes sharing a common expression pattern during epithelial wound repair are listed in Additional file 3.

Identification of biological processes regulated by miRNAs

Pathways in clusters of miRNAs

The next question was whether the miRNA clusters with characteristic expression profiles during epithelial wound repair, identified using STEM, regulate target genes from the same biological pathways or processes. To analyse this, we used highly predicted miRNA targets (mRNAs predicted to be targets of a specific miRNA by at least five different algorithms) to create a list of potential miRNA target genes, which were then analysed with the DAVID online database for annotation and visualization [27,28]. DAVID enabled the integration of the miRNA target genes into common pathways or GO processes.

Analysis of the targets predicted for each STEM expression cluster identified four significantly enriched pathways for profile 16: the neurotrophin, ERBB, MAPK and RIG-I-like receptor signalling pathways. Six pathways were predicted for the targets of miRNAs in profile 1: adherens junction, acute myeloid leukaemia, small cell lung cancer, cell cycle, pathways in cancer and the chemokine signalling pathway. No common pathways were predicted for profile 18. The predicted pathways are shown in Additional files 4 and 5. For all profiles, DAVID also predicted numerous biological processes in which miRNA targets play a significant role (Additional file 6). In general, the predicted biological processes and pathways were mainly associated with cell cycle regulation and the induction of mitotic divisions, switching on anti-apoptotic genes (ECM, PKB/Akt and IKK) and genes stimulating proliferation (such as MEK and PPARγ) that are of known importance in epithelial wound repair. Beyond these well-documented processes, we observed, surprisingly, that target genes involved in the neurotrophin signalling pathway were the most significantly overrepresented, suggesting the importance of this pathway in the epithelial wound repair process (Additional file 7).

Target pathways at different stages of wound repair

To identify the most important pathways involved at different stages of epithelial wound repair in vitro, we also performed pathway enrichment for miRNAs significantly altered at only one time point of wound repair (Additional file 1, genes in red). For these genes, targets were predicted as above and DAVID was used to identify potential pathways and biological processes.

In the early phase of repair (2 hours post-wounding) the main observation was up-regulation of miRNAs, suggesting the switching off of target genes and processes associated with the response to cellular stress (MAP kinase pathway), regulation of the actin cytoskeleton, cell proliferation and migration. At 4 hours after cell damage, the main pathways targeted by up-regulated miRNAs included genes involved in negative regulation of transcription, RNA metabolism, and regulation of cell motion and the cytoskeleton. At 8 hours after wounding, a number of miRNAs were again up-regulated, indicating the switching off of genes involved in negative regulation of gene expression and negative regulation of cell communication. At 16 hours following epithelial cell wounding, a number of miRNA genes were down-regulated, thereby switching on genes involved in the mitotic cell cycle, negative regulation of cell death, cell proliferation and the ERBB signalling pathway (cell proliferation, survival, migration). This may suggest the predominance of a proliferating phenotype once the damaged area has been closed by spreading and migrating cells.
At 24 hours post-wounding we observed further down-regulation of miRNA genes. Two were of particular interest, as they are responsible for switching on genes involved in the p53 signalling pathway (cell cycle arrest), IL-10 (anti-inflammatory response), regulation of apoptosis, cell death, and RNA transport and localization. This indicates that by this time point cells have proliferated sufficiently and are beginning to differentiate. At 48 hours after wounding we observed mainly up-regulation of miRNA genes responsible for silencing genes involved in protein catabolic processes, alternative splicing, spectrins, mRNA splicing and processing, and methylation, indicating that cells are resuming normal physiological processes and restoring a normal phenotype.

Discussion

The main finding of this study is the involvement of multiple miRNA genes in the process of epithelial wound repair in vitro. We found three distinct expression patterns of miRNA gene clusters that are predicted to regulate numerous pathways and biological processes involved in wound repair. We have applied here, for the first time, cluster analysis of time-series miRNA expression data (using STEM) to identify basic patterns, together with pathway prediction (using DAVID), to study the repair processes of airway epithelium. This approach enabled us to identify common miRNA expression profiles during wound repair, giving comprehensive information about activated miRNA genes, and allows the relationships among these genes, and their regulation and coordination over time, to be explored. Further validation of individual protein, gene or miRNA changes will be required in subsequent studies, but it seems clear that specific expression profiles of miRNA clusters correlate with the repair of mechanically induced damage to the epithelium.
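The DAVID enrichment step referred to above is, at its core, an over-representation test: it asks how surprising it is that so many of a cluster's predicted targets fall within one annotated pathway. DAVID itself uses a modified Fisher exact statistic (the EASE score); the plain hypergeometric version below is an illustrative sketch of the idea, with hypothetical function and parameter names.

```python
from math import comb

def enrichment_p(hits, n_targets, pathway_size, n_background):
    """One-sided hypergeometric p-value: the probability of observing at
    least `hits` pathway members among `n_targets` genes drawn without
    replacement from a background of `n_background` genes."""
    total = comb(n_background, n_targets)
    return sum(
        comb(pathway_size, k) * comb(n_background - pathway_size, n_targets - k)
        for k in range(hits, min(n_targets, pathway_size) + 1)
    ) / total
```

A pathway is reported as "significantly enriched" when this tail probability (after multiple-testing correction) falls below the chosen threshold; the smaller the p-value, the stronger the over-representation of the cluster's targets in that pathway.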
For expression profile 16 we demonstrated that, among other plausible signalling pathways, the neurotrophin signalling pathway may be involved in wound repair in epithelial cells, in addition to the inflammatory response in airway epithelium in allergy and asthma as reported previously [35-38]. The involvement in wound repair may further suggest that this pathway is important in the regulation of airway remodelling in asthma. Indeed, in the study by Kilic et al. [39] it was observed that blocking one of the neurotrophins, nerve growth factor (NGF), prevented subepithelial fibrosis in a mouse model of asthma and that NGF overexpression exerted a direct effect on collagen expression in murine lung fibroblasts. The involvement of neurotrophins in repair processes has been also confirmed recently by Palazzo et al. [40] in wound healing in dermal fibroblasts. Moreover, miRNAs involved in this pathway such as the miR-200 family were reported to control epithelial-mesenchymal transition (EMT) [41], the process that is suggested to underlie airway remodelling in asthma. In the recent study of Ogawa et al. [42] utilising a mouse model of asthma, it was observed that mice challenged with house dust mite allergen exhibited an increase in NGF that was primarily expressed in bronchial epithelium and was positively correlated with airway hyperresponsiveness and substance P-positive nerve fibers. However in this model siRNA targeted NGF inhibited hyperresponsiveness and modulation of innervation but not subepithelial fibrosis and allergic inflammation. For expression profile 1 we observed a significant down-regulation at the beginning of wound repair followed by sharp increase in miRNA expression with a maximum at 16 hours after cell damage. This may indicate the induction of the six pathways predicted by enrichment analysis in the early phase of wound repair, which are then being switched off by the miRNAs with increased expression up to 16 hours post-wounding. 
The process of wound repair in vivo in the airways involves cell spreading and migration as the primary mechanisms in the first 12–24 hours after injury, while proliferation begins by 15–24 h and continues for days to weeks. Similarly in our study we have confirmed that epithelial wound repair in vitro mimics the in vivo situation but in a shorter time frame, and that in its early stage this involves spreading and migration of neighbouring epithelial cells to cover the damaged area (2 and 4 hours after wounding). This is followed by migration and proliferation of progenitor cells to restore cell numbers (8 and 16 hours after cell damage) and differentiation to restore function (24 and 48 hours post-wounding) (Figure 1) [43-48]. Analysis of miRNAs involved at only specific time points of wound repair revealed that during the early stages numerous miRNAs are being significantly up-regulated, switching off pathways regulating cell proliferation and differentiation and activating cellular stress responses (chemokine signalling pathway) as well as cell migration and cell death (corresponding to time points at 2, 4 and 8 hours after injury). Furthermore, at later time points cells are undergoing intensive proliferation and secreting extracellular matrix, which is supported by the involvement of the ERBB signalling pathway and the NFAT pathway stimulating cell proliferation and the regulation of transcription of immune genes (corresponding to 16 hours after injury). Once confluent, cells restore their phenotype so that the cell cycle is arrested (inhibition of cell division) and differentiation processes are switched on. In parallel to this, the IL-10 anti-inflammatory signalling pathway is induced to deactivate immune cells stimulated during the early stages of wound repair. Conclusions: In summary, we report here for the first time that expression of multiple miRNAs is significantly altered during airway epithelium wound repair processes. 
Different patterns of expression have been observed and the target genes of those miRNA clusters coordinate several biological pathways involved in the repair of injury. Our work provides a starting point for a systematic analysis of mRNA targets specific for wound repair. This will help to identify regulatory networks controlling these processes in airway epithelium to better understand their involvement in respiratory diseases. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: AS performed in vitro cell experiments and wounding assays, miRNA profiling, data analysis, cluster and pathway analysis, drafted the paper and approved its final version. PL contributed to the study design and methodology regarding cell experiments, drafted the paper and approved its final version. JWH contributed to the study design and methodology regarding miRNA analysis, drafted the paper and approved its final version. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2466/13/63/prepub Supplementary Material: List of significantly modulated mature miRNAs (>10.0-fold) and their respective fold induction at each time point. * miRNAs with significant change in expression at one time point only (marked in red). Fold change of the top ten miRNAs undergoing significant modulation (>10-fold) during the wound repair process at five or more time points. MiRNA genes assigned to each expression profile during wound repair (values given for each time point represent expression change after normalization in STEM software). The significantly overrepresented (enriched) pathways in the analysed sets of target genes of miRNAs included in profile 16. The significantly overrepresented (enriched) pathways in the analysed sets of target genes of miRNAs included in profile 1. 
The most significant biological processes, predicted using the DAVID tool, regulated by miRNA target genes from the same expression profile (processes were ranked based on their Fisher exact probability value from the gene enrichment analysis to identify those showing significant overrepresentation). Neurotrophin signaling pathway with miRNA genes and their predicted targets.
Background: Airway epithelial cells provide a protective barrier against environmental particles including potential pathogens. Epithelial repair in response to tissue damage is abnormal in asthmatic airway epithelium in comparison to the repair of normal epithelium after damage. The complex mechanisms coordinating the regulation of the processes involved in wound repair require the phased expression of networks of genes. Small non-coding RNA molecules termed microRNAs (miRNAs) play a critical role in such coordinated regulation of gene expression. We aimed to establish if the phased expression of specific miRNAs is correlated with the repair of mechanically induced damage to the epithelium. Methods: To investigate the possible involvement of miRNA in epithelial repair, we analyzed miRNA expression profiles during epithelial repair in a cell culture model using TaqMan-based quantitative real-time PCR in a TaqMan Low Density Array format. The expression of 754 miRNA genes at seven time points in a 48-hour period during the wound repair process was profiled using the bronchial epithelial cell line 16HBE14o- growing in monolayer. Results: The expression levels of numerous miRNAs were found to be altered during the wound repair process. These miRNA genes were clustered into 3 different patterns of expression that correlate with the further regulation of several biological pathways involved in wound repair. Moreover, it was observed that expression of some miRNA genes was significantly altered only at one time point, indicating their involvement in a specific stage of the epithelial wound repair. Conclusions: In summary, miRNA expression is modulated during the normal repair processes in airway epithelium in vitro suggesting a potential role in regulation of wound repair.
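As a side note on the TaqMan qPCR workflow mentioned in the Methods: relative expression from Ct values is conventionally computed with the 2^-ΔΔCt method. The sketch below assumes that convention (the text does not spell out the exact formula used) and all Ct values are hypothetical.

```python
# Hedged sketch of the standard 2^-delta-delta-Ct relative-quantification
# (RQ) calculation typically used with TaqMan qPCR data; the endogenous
# control and Ct values below are hypothetical, not the study's data.

def relative_quantity(ct_target, ct_control, ct_target_ref, ct_control_ref):
    """Fold change of a miRNA versus a reference (e.g. unwounded) sample.

    ct_target, ct_control         : Ct values in the test sample
    ct_target_ref, ct_control_ref : Ct values in the reference sample
    """
    delta_ct_test = ct_target - ct_control        # normalize to endogenous control
    delta_ct_ref = ct_target_ref - ct_control_ref
    delta_delta_ct = delta_ct_test - delta_ct_ref
    return 2 ** (-delta_delta_ct)                 # RQ (fold change)

# Hypothetical example: the miRNA's Ct drops by 2 cycles relative to control,
# i.e. a 4-fold up-regulation.
print(relative_quantity(24.0, 18.0, 26.0, 18.0))  # → 4.0
```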
Background: The airway epithelium has been recognized to play a central role in the integration of innate and adaptive immune responses [1-4]. The airway epithelium is also crucial to the origin and progression of respiratory disorders such as asthma, chronic obstructive pulmonary disease, cystic fibrosis and pulmonary fibrosis. In asthma, chronic airway inflammation underlies aberrant repair of the airway that subsequently leads to structural and functional changes in the airway wall. This remodeling is responsible for a number of the clinical characteristics of asthmatic patients. Normal epithelial repair occurs in a series of overlapping stages. Damage to the epithelium or challenge associated with damage can result in loss of structural integrity or barrier function and local mucosal activation [5]. Studies in animals have shown that the repair of normal airway epithelium after minor damage involves the migration of the remaining epithelial cells to cover the damaged area. This is a rapid process, suggesting an autonomous response by cells in the vicinity of the damage [6]. It includes an acute inflammatory response, with recruitment of immune cells as well as epithelial spreading and migration stimulated by secreted provisional matrix. Once the barrier is reformed, the differentiated characteristics are then restored. The regulation of these processes requires complex sequential changes in the epithelial cell biology driven by the phased expression of networks of genes [7]. One biological mechanism that plays a critical role in the coordinate regulation of gene expression such as that required during epithelial wound repair is the expression of small non-coding RNA molecules termed microRNAs (miRNAs) [8]. 
To date, more than 1000 human miRNAs have been identified [http://microrna.sanger.ac.uk], with documented tissue-specific expression of some of these miRNAs in lung and involvement in the development of lung diseases including lung cancer, asthma and fibrosis [9-15]. MiRNAs have been demonstrated to play a crucial role in epithelial cell proliferation and differentiation [16-18]. The expression in lung epithelium of Dicer, the enzyme responsible for processing of miRNA precursors, is essential for lung morphogenesis [16] and there is differential expression of miRNAs during lung development [17]. Furthermore, transgenic over-expression of miR-17-92 (shown to be over-expressed in lung cancer) in the lung epithelium promotes proliferation and inhibits differentiation of lung epithelial progenitor cells [18]. Recently, it has been reported that miRNA-146a modulates survival of bronchial epithelial cells in response to cytokine-induced apoptosis [19]. In experimental studies, mice lacking miR-155 demonstrated autoimmune phenotypes in the lungs with increased airway remodeling and leukocyte invasion, phenotypes similar to those observed in asthma [20,21]. While a number of studies have examined the role of miRNA in lung development and in disease [9-15], their influence on the regulation of gene expression involved in epithelial wound repair remains unresolved and comprehensive studies on miRNA involvement in epithelial repair and the pathogenesis of airway remodeling are lacking. However, in the skin, miRNAs were found to play a crucial role in wound closure by controlling migration and proliferation of keratinocytes in an in vitro model of wound repair [22]. Thus, the hypothesis of the study was that the stages of wound repair in respiratory epithelium are regulated by the phased expression of specific miRNA species. The aim was to investigate the possible involvement of miRNAs by examining their expression profile in epithelial repair in a cell culture model. 
Understanding the effect of altered miRNA activity on protein expression during repair processes can be further used to identify pathways targeted by miRNAs that regulate epithelial wound repair, potentially providing a novel therapeutic strategy for asthma and other respiratory diseases with underlying aberrant epithelial wound repair. Conclusions: In summary, we report here for the first time that expression of multiple miRNAs is significantly altered during airway epithelium wound repair processes. Different patterns of expression have been observed and the target genes of those miRNA clusters coordinate several biological pathways involved in the repair of injury. Our work provides a starting point for a systematic analysis of mRNA targets specific for wound repair. This will help to identify regulatory networks controlling these processes in airway epithelium to better understand their involvement in respiratory diseases.
Background: Airway epithelial cells provide a protective barrier against environmental particles including potential pathogens. Epithelial repair in response to tissue damage is abnormal in asthmatic airway epithelium in comparison to the repair of normal epithelium after damage. The complex mechanisms coordinating the regulation of the processes involved in wound repair require the phased expression of networks of genes. Small non-coding RNA molecules termed microRNAs (miRNAs) play a critical role in such coordinated regulation of gene expression. We aimed to establish if the phased expression of specific miRNAs is correlated with the repair of mechanically induced damage to the epithelium. Methods: To investigate the possible involvement of miRNA in epithelial repair, we analyzed miRNA expression profiles during epithelial repair in a cell culture model using TaqMan-based quantitative real-time PCR in a TaqMan Low Density Array format. The expression of 754 miRNA genes at seven time points in a 48-hour period during the wound repair process was profiled using the bronchial epithelial cell line 16HBE14o- growing in monolayer. Results: The expression levels of numerous miRNAs were found to be altered during the wound repair process. These miRNA genes were clustered into 3 different patterns of expression that correlate with the further regulation of several biological pathways involved in wound repair. Moreover, it was observed that expression of some miRNA genes was significantly altered only at one time point, indicating their involvement in a specific stage of the epithelial wound repair. Conclusions: In summary, miRNA expression is modulated during the normal repair processes in airway epithelium in vitro suggesting a potential role in regulation of wound repair.
12,232
300
23
[ "expression", "genes", "mirna", "repair", "time", "wound", "pathways", "cell", "wound repair", "wounding" ]
[ "test", "test" ]
[CONTENT] Epithelial cells | Wound repair | miRNA | Profiling | Cluster analysis | Pathway analysis [SUMMARY]
[CONTENT] Epithelial cells | Wound repair | miRNA | Profiling | Cluster analysis | Pathway analysis [SUMMARY]
[CONTENT] Epithelial cells | Wound repair | miRNA | Profiling | Cluster analysis | Pathway analysis [SUMMARY]
[CONTENT] Epithelial cells | Wound repair | miRNA | Profiling | Cluster analysis | Pathway analysis [SUMMARY]
[CONTENT] Epithelial cells | Wound repair | miRNA | Profiling | Cluster analysis | Pathway analysis [SUMMARY]
[CONTENT] Epithelial cells | Wound repair | miRNA | Profiling | Cluster analysis | Pathway analysis [SUMMARY]
[CONTENT] Bronchi | Cells, Cultured | Down-Regulation | Epithelial Cells | Gene Expression Profiling | Gene Expression Regulation | Humans | MicroRNAs | Oligonucleotide Array Sequence Analysis | Signal Transduction | Time Factors | Up-Regulation | Wound Healing [SUMMARY]
[CONTENT] Bronchi | Cells, Cultured | Down-Regulation | Epithelial Cells | Gene Expression Profiling | Gene Expression Regulation | Humans | MicroRNAs | Oligonucleotide Array Sequence Analysis | Signal Transduction | Time Factors | Up-Regulation | Wound Healing [SUMMARY]
[CONTENT] Bronchi | Cells, Cultured | Down-Regulation | Epithelial Cells | Gene Expression Profiling | Gene Expression Regulation | Humans | MicroRNAs | Oligonucleotide Array Sequence Analysis | Signal Transduction | Time Factors | Up-Regulation | Wound Healing [SUMMARY]
[CONTENT] Bronchi | Cells, Cultured | Down-Regulation | Epithelial Cells | Gene Expression Profiling | Gene Expression Regulation | Humans | MicroRNAs | Oligonucleotide Array Sequence Analysis | Signal Transduction | Time Factors | Up-Regulation | Wound Healing [SUMMARY]
[CONTENT] Bronchi | Cells, Cultured | Down-Regulation | Epithelial Cells | Gene Expression Profiling | Gene Expression Regulation | Humans | MicroRNAs | Oligonucleotide Array Sequence Analysis | Signal Transduction | Time Factors | Up-Regulation | Wound Healing [SUMMARY]
[CONTENT] Bronchi | Cells, Cultured | Down-Regulation | Epithelial Cells | Gene Expression Profiling | Gene Expression Regulation | Humans | MicroRNAs | Oligonucleotide Array Sequence Analysis | Signal Transduction | Time Factors | Up-Regulation | Wound Healing [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] expression | genes | mirna | repair | time | wound | pathways | cell | wound repair | wounding [SUMMARY]
[CONTENT] expression | genes | mirna | repair | time | wound | pathways | cell | wound repair | wounding [SUMMARY]
[CONTENT] expression | genes | mirna | repair | time | wound | pathways | cell | wound repair | wounding [SUMMARY]
[CONTENT] expression | genes | mirna | repair | time | wound | pathways | cell | wound repair | wounding [SUMMARY]
[CONTENT] expression | genes | mirna | repair | time | wound | pathways | cell | wound repair | wounding [SUMMARY]
[CONTENT] expression | genes | mirna | repair | time | wound | pathways | cell | wound repair | wounding [SUMMARY]
[CONTENT] lung | airway | epithelial | epithelium | repair | expression | asthma | role | studies | mirnas [SUMMARY]
[CONTENT] δct | sample | calculated | time | taqman | rna | rq | applied | assay | biosystems [SUMMARY]
[CONTENT] genes | mirna | expression | profile | pathways | hours | repair | wounding | signalling | signalling pathway [SUMMARY]
[CONTENT] airway epithelium | airway | epithelium | repair | processes | repair processes different | networks controlling processes airway | expression multiple mirnas | expression multiple mirnas significantly | coordinate biological pathways involved [SUMMARY]
[CONTENT] expression | genes | time | repair | mirna | wound | wounding | hours | cell | pathways [SUMMARY]
[CONTENT] expression | genes | time | repair | mirna | wound | wounding | hours | cell | pathways [SUMMARY]
[CONTENT] ||| ||| ||| RNA ||| [SUMMARY]
[CONTENT] PCR ||| 754 miRNA | seven | 48-hour | 16HBE14o- [SUMMARY]
[CONTENT] ||| 3 ||| one [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| ||| ||| RNA ||| ||| PCR ||| 754 miRNA | seven | 48-hour | 16HBE14o- ||| ||| ||| 3 ||| one ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| RNA ||| ||| PCR ||| 754 miRNA | seven | 48-hour | 16HBE14o- ||| ||| ||| 3 ||| one ||| [SUMMARY]
Unveiling transcription factor regulation and differential co-expression genes in Duchenne muscular dystrophy.
25338682
Gene expression analysis is powerful for investigating the underlying mechanisms of Duchenne muscular dystrophy (DMD). Previous studies mainly neglected co-expression or transcription factor (TF) information. Here we integrated TF information into differential co-expression analysis (DCEA) to explore new understandings of DMD pathogenesis.
BACKGROUND
Using two microarray datasets from Gene Expression Omnibus (GEO) database, we firstly detected differentially expressed genes (DEGs) and pathways enriched with DEGs. Secondly, we constructed differentially regulated networks to integrate the TF-to-target information and the differential co-expression genes.
METHODS
A total of 454 DEGs were detected and both KEGG pathway and ingenuity pathway analysis revealed that pathways enriched with aberrantly regulated genes are mostly involved in the immune response processes. DCEA results generated 610 pairs of DEGs regulated by at least one common TF, including 78 pairs of co-expressed DEGs. A network was constructed to illustrate their relationships and a subnetwork for DMD related molecules was constructed to show genes and TFs that may play important roles in the secondary changes of DMD. Among the DEGs which shared TFs with DMD, six genes were co-expressed with DMD, including ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6.
RESULTS
Our results may provide a new understanding of DMD and contribute potential targets for future therapeutic tests.
CONCLUSION
[ "Databases, Genetic", "Gene Expression Profiling", "Gene Expression Regulation", "Gene Regulatory Networks", "Genetic Markers", "Genetic Predisposition to Disease", "Humans", "Muscular Dystrophy, Duchenne", "Oligonucleotide Array Sequence Analysis", "Phenotype", "Prognosis", "Transcription Factors" ]
4312468
Background
Duchenne muscular dystrophy (DMD) is a severe and progressive inherited neuromuscular disease characterized by muscle fiber degeneration and central nervous system disorders that affects 1 in 3600–6000 live male births [1]. This disease is caused by mutations or dysregulation of the X-linked dystrophin gene. Absence of or defects in the dystrophin protein result in disruption of the dystrophin-associated protein complex (DAPC), leading to chronic inflammation and progressive muscle degeneration [2]. Although mutations of the DMD protein have been identified to be primarily responsible for the pathology, comprehensive understanding of the downstream mechanisms due to dystrophin absence is still lacking. Secondary changes in DMD involve calcium homeostasis [3], nitric oxide synthase [4], inflammation [5] and mast cell degranulation [6], suggesting that the pathological process of DMD is highly complicated. Gene expression profile analysis is a powerful strategy to investigate the pathophysiological mechanisms of DMD. Several gene expression profiling studies [7-11] have been performed earlier, providing insights into the pathology of DMD. However, most of them focused on expression level changes of individual genes, without considering the differences in gene interconnections. Since the majority of proteins function with other proteins, differential co-expression analysis (DCEA) would provide a better understanding of the mechanism underlying DMD progression. The rationale of DCEA is that changes in gene co-expression patterns between two disease statuses (case and control) provide hints regarding the disrupted regulatory relationships in patients. Therefore, integrating the information of transcription factors (TFs) into DCEA may reveal the regulatory causes of observed co-expression changes, since TFs can regulate gene expression through binding the cis-elements in the target genes’ promoter regions. 
TFs are reported to be important in both normal and disease states [12]. Investigation of TF activities in mdx mice has proposed several TFs involved in DMD pathogenesis [13]. Thus, integrative analysis of TFs and target differential co-expression genes may provide a better understanding of the molecular mechanisms underlying the secondary changes of DMD. In this study, using microarray gene expression data collected from the Gene Expression Omnibus (GEO) database, we carried out an integrative bioinformatics analysis. Firstly, differentially expressed genes (DEGs) were detected and pathways enriched with DEGs were acquired. Secondly, differentially regulated link networks were constructed to integrate the TF-to-target information and the differential co-expression genes. Our results may provide new understanding of DMD and new targets for future therapeutic tests.
null
null
Results
With the two downloaded datasets (GSE6011 and GSE3307), expression profiles of 27 DMD patients (22 from GSE6011 and 5 from GSE3307) and 14 healthy controls were included in this study. Dystrophin protein was absent in all patients [9,10]. A total of 454 genes were detected to be DEGs, including 284 up-regulated genes and 170 down-regulated ones. KEGG pathway analysis revealed that aberrantly regulated genes were enriched in 10 pathways (Table 1). Most of these pathways (7/10) are involved in the inflammatory/immune response process, such as the immune system, immune disease and infectious disease. In addition, a DMD-related pathway, the viral myocarditis pathway, was included. The other two pathways are Cell adhesion molecules (CAMs) and Focal adhesion.

Table 1. KEGG pathways enriched with differentially expressed genes

ID        Pathway description                            Pathway class                         P-value
hsa05322  Systemic lupus erythematosus                   Immune diseases                       7.40E-05
hsa05416  Viral myocarditis                              Cardiovascular diseases               1.60E-04
hsa04514  Cell adhesion molecules (CAMs)                 Signaling molecules and interaction   4.00E-03
hsa05310  Asthma                                         Immune diseases                       1.00E-02
hsa04510  Focal adhesion                                 Cell communication                    1.00E-02
hsa05130  Pathogenic Escherichia coli infection          Infectious diseases                   1.10E-02
hsa04672  Intestinal immune network for IgA production   Immune system                         1.90E-02
hsa05330  Allograft rejection                            Immune diseases                       1.70E-02
hsa04612  Antigen processing and presentation            Immune system                         2.20E-02
hsa05332  Graft-versus-host disease                      Immune diseases                       2.10E-02

The IPA results of canonical pathways showed that the DEGs were enriched for epithelial adherens junction signaling, calcium signaling, nNOS signaling in skeletal muscle cells and some additional pathways (Figure 1). 
Consistent with the KEGG pathway analysis, aberrantly regulated genes were enriched in immune system pathways, such as calcium-induced T lymphocyte apoptosis, iCOS-iCOSL signaling in T helper cells, the complement system, and the antigen presentation pathway (Figure 1). Diseases and Disorders analysis (Table 2) revealed that “Neurological Disease” (P = 3.98E-15–1.20E-02) was the top disease affected by these DEGs, followed by “Skeletal and Muscular Disorders” (P = 2.66E-13–1.07E-02). “Cellular Growth and Proliferation” (P = 1.38E-14–1.12E-02) was the top biological function mediated by these DEGs (Table 2).

Figure 1. Ingenuity pathway analysis: the most significant canonical pathways in which differently expressed genes (DEGs) were enriched.

Table 2. Ingenuity pathway analysis: diseases and functions related to differentially expressed genes

Name                                 P-value             # Molecules
Diseases and Disorders
  Neurological disease               3.98E-15–1.20E-02   133
  Skeletal and muscular disorders    2.66E-13–1.07E-02   118
  Cardiovascular disease             5.89E-13–9.41E-03   75
  Cancer                             9.06E-11–1.14E-02   293
  Psychological disorders            3.24E-10–1.13E-02   92
Molecular and Cellular Functions
  Cellular growth and proliferation  1.38E-14–1.12E-02   136
  Cell death and survival            1.24E-13–1.20E-02   128
  Cellular development               8.68E-08–1.12E-02   106
  Cellular movement                  1.15E-07–1.20E-02   70
  Cellular assembly and organization 6.30E-06–9.41E-03   73

DCEA results generated 610 pairs of DEGs regulated by at least one common TF. Among these pairs, 78 pairs of DEGs were identified as differentially co-expressed. The expression values of a total of 25 pairs of DEGs were positively correlated with each other. A network was generated to illustrate the relationship between the differentially co-expressed genes and their regulatory TFs (Figure 2). 
Considering the primary causative effect of the aberrant expression of DMD, we constructed a subnetwork with DMD and its differentially co-expressed genes (Figure 3) to illustrate genes and TFs that may play important roles in the secondary changes of DMD patients. As shown in Figure 3, among the DEGs which shared TFs with DMD, six genes were co-expressed with DMD, including ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6.

Figure 2. Transcription factor (TF) regulation network for the 78 pairs of co-expressed differently expressed genes (DEGs). DCG: differential co-expression genes; DCL: links between DCGs.

Figure 3. A subnetwork out of Figure 2 surrounding the predefined gene DMD. TF: transcription factor; DCG: differential co-expression genes; DCL: links between DCGs.
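The differential co-expression idea underlying these results can be sketched as follows: a gene pair is flagged when its expression correlation differs markedly between DMD and control samples. The study itself used the DCGL R package; this Python illustration, its threshold, and the toy expression values are assumptions made only to show the principle.

```python
# Hedged sketch of the basic idea behind differential co-expression
# analysis (DCEA): compare a gene pair's expression correlation in
# patients versus controls. Threshold and data are illustrative.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (math.sqrt(sum((x - ma) ** 2 for x in a))
           * math.sqrt(sum((y - mb) ** 2 for y in b)))
    return num / den

def differentially_coexpressed(g1_case, g2_case, g1_ctrl, g2_ctrl, delta=1.0):
    """Flag a gene pair whose correlation shifts by more than `delta`."""
    return abs(pearson(g1_case, g2_case) - pearson(g1_ctrl, g2_ctrl)) > delta

# Hypothetical expression values: correlated in controls, anti-correlated in DMD
ctrl_a, ctrl_b = [1, 2, 3, 4, 5], [1.1, 2.2, 2.9, 4.1, 5.0]
case_a, case_b = [1, 2, 3, 4, 5], [5.0, 3.9, 3.1, 2.1, 0.9]
print(differentially_coexpressed(case_a, case_b, ctrl_a, ctrl_b))  # → True
```

Pairs flagged this way can then be intersected with TF-to-target links to ask whether a shared TF could explain the disrupted co-expression, which is the integration step the study performed.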
Conclusions
In summary, with combined microarray data from the GEO database, we conducted DCEA and acquired the TF information of co-expressed DEGs to capture pathogenic characteristics of DMD. Our results may provide a new understanding of DMD and further contribute potential targets for future therapeutic tests.
[ "Microarray data", "Detection of differentially expressed genes (DEGs)", "Functional analysis", "DCEA and TF-to-target analysis" ]
[ "We used two datasets (GSE6011 and GSE3307) from the GEO (http://www.ncbi.nlm.nih.gov/geo/) database. For further combined analysis, we only selected samples generated with the platform GPL96: [HG-U133A] Affymetrix Human Genome U133A Array.", "Raw data including simple omnibus format in text (SOFT) family files and CEL files of all samples were downloaded. The CEL files contained the expression information for each probe. Robust Multiarray Analysis (RMA) [14] was used for raw intensity values normalization: firstly, background noise and processing artifacts were neutralized by model-based background correction; secondly, expression values were set to a common scale by quantile normalization; thirdly, expression value of each probe was generated by an iterative median polishing procedure. The resulting log2-transformed RMA expression value was then used for detecting DEGs. Statistical t test was used to identify DEGs and the Benjamini and Hochberg procedure [15] was carried out to correct for multiple testing. The threshold of significant DEGs was set as P <0.01. Up- or down-regulation of the DEGs was determined according to the fold-change. All of the above procedures were performed using the R software (v3.0.3) with BioConductor and limma packages (3.12.1) and libraries [16].", "Differentially expressed probes were annotated based on the SOFT files. All genes were then mapped to the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (http://www.genome.jp/kegg/) database. The hypergeometric distribution test was used to identify pathways significantly enriched with DEGs. In addition to KEGG pathway analysis, DEGs were also uploaded into the Ingenuity Pathway Analysis (IPA) software (Ingenuity® Systems, http://www.ingenuity.com) and were associated with the canonical pathways, molecular and cellular functions, diseases and disorders in the Ingenuity Knowledge Base. 
Fisher’s exact test was implemented to assess the significance of the associations between DEGs and canonical pathways, functions, or diseases. For canonical pathways, a ratio was also computed between the number of DEGs and the total number of molecules in the pathway.", "The DEGs were subjected to DCEA and TF-to-target analysis using the DCGL package (v2.0) [17] in the R software. Human TF-to-target library was incorporated into the package to identify differentially regulated genes and links. A total of 215 human TFs and 16,863 targets were included in the package. Differentially co-expressed genes (DCGs) were firstly identified and a network was constructed with DCGs and their shared TFs. Since dysregulation of DMD gene is the primary cause of the disease, we also constructed a subnetwork surrounding the predefined gene DMD." ]
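The hypergeometric enrichment test named in the methods above can be sketched as follows; all gene counts here are hypothetical, chosen only to illustrate the calculation, not taken from the study.

```python
# Hedged sketch of the hypergeometric enrichment test used to ask
# whether a KEGG pathway contains more DEGs than expected by chance.
# All counts below are hypothetical, not the study's numbers.
from math import comb

def hypergeom_pval(total_genes, pathway_genes, deg_count, deg_in_pathway):
    """P(X >= deg_in_pathway) when drawing deg_count genes without
    replacement from total_genes, of which pathway_genes are in the pathway."""
    p = 0.0
    upper = min(pathway_genes, deg_count)
    for k in range(deg_in_pathway, upper + 1):
        p += (comb(pathway_genes, k)
              * comb(total_genes - pathway_genes, deg_count - k)
              / comb(total_genes, deg_count))
    return p

# Hypothetical: 10,000 background genes, a 100-gene pathway, 454 DEGs,
# 15 of them in the pathway -- roughly 4.5 would be expected by chance.
p = hypergeom_pval(10_000, 100, 454, 15)
print(p < 0.01)  # a small p-value indicates over-representation
```

Fisher's exact test on the corresponding 2x2 table, as used for the IPA associations, gives an equivalent one-sided p-value.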
[ null, null, null, null ]
[ "Background", "Methods", "Microarray data", "Detection of differentially expressed genes (DEGs)", "Functional analysis", "DCEA and TF-to-target analysis", "Results", "Discussion", "Conclusions" ]
[ "Duchenne muscular dystrophy (DMD) is a severe and progressive inherited neuromuscular disease characterized by muscle fiber degeneration and central nervous system disorders that affects 1 in 3600–6000 live male births [1]. This disease is caused by mutations or dysregulation of the X-linked dystrophin gene. Absence of or defects in the dystrophin protein result in disruption of the dystrophin-associated protein complex (DAPC), leading to chronic inflammation and progressive muscle degeneration [2].\nAlthough mutations of the DMD protein have been identified to be primarily responsible for the pathology, comprehensive understanding of the downstream mechanisms due to dystrophin absence is still lacking. Secondary changes in DMD involve calcium homeostasis [3], nitric oxide synthase [4], inflammation [5] and mast cell degranulation [6], suggesting that the pathological process of DMD is highly complicated. Gene expression profile analysis is a powerful strategy to investigate the pathophysiological mechanisms of DMD. Several gene expression profiling studies [7-11] have been performed earlier, providing insights into the pathology of DMD. However, most of them focused on expression level changes of individual genes, without considering the differences in gene interconnections. Since the majority of proteins function with other proteins, differential co-expression analysis (DCEA) would provide a better understanding of the mechanism underlying DMD progression. The rationale of DCEA is that changes in gene co-expression patterns between two disease statuses (case and control) provide hints regarding the disrupted regulatory relationships in patients. Therefore, integrating the information of transcription factors (TFs) into DCEA may reveal the regulatory causes of observed co-expression changes, since TFs can regulate gene expression through binding the cis-elements in the target genes’ promoter regions. 
TFs are reported to be important in both normal and disease states [12]. Investigation of TF activities in mdx mice has proposed several TFs involved in DMD pathogenesis [13]. Thus, integrative analysis of TFs and their differentially co-expressed target genes may provide a better understanding of the molecular mechanisms underlying the secondary changes of DMD.\nIn this study, using microarray gene expression data collected from the Gene Expression Omnibus (GEO) database, we carried out an integrative bioinformatics analysis. Firstly, differentially expressed genes (DEGs) were detected and pathways enriched with DEGs were acquired. Secondly, differentially regulated link networks were constructed to integrate the TF-to-target information and the differentially co-expressed genes. Our results may provide new understanding of DMD and new targets for future therapeutic tests.", " Microarray data We used two datasets (GSE6011 and GSE3307) from the GEO (http://www.ncbi.nlm.nih.gov/geo/) database. For further combined analysis, we only selected samples generated with the platform GPL96: [HG-U133A] Affymetrix Human Genome U133A Array.\n Detection of differentially expressed genes (DEGs) Raw data including simple omnibus format in text (SOFT) family files and CEL files of all samples were downloaded. The CEL files contained the expression information for each probe. Robust Multiarray Analysis (RMA) [14] was used to normalize raw intensity values: firstly, background noise and processing artifacts were neutralized by model-based background correction; secondly, expression values were set to a common scale by quantile normalization; thirdly, the expression value of each probe was generated by an iterative median polishing procedure.
The resulting log2-transformed RMA expression values were then used for detecting DEGs. A t test was used to identify DEGs, and the Benjamini and Hochberg procedure [15] was carried out to correct for multiple testing. The threshold for significant DEGs was set at P < 0.01. Up- or down-regulation of the DEGs was determined according to the fold-change. All of the above procedures were performed using the R software (v3.0.3) with the BioConductor limma package (3.12.1) [16].\n Functional analysis Differentially expressed probes were annotated based on the SOFT files. All genes were then mapped to the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (http://www.genome.jp/kegg/) database. The hypergeometric distribution test was used to identify pathways significantly enriched with DEGs.
In addition to KEGG pathway analysis, DEGs were also uploaded into the Ingenuity Pathway Analysis (IPA) software (Ingenuity® Systems, http://www.ingenuity.com) and were associated with the canonical pathways, molecular and cellular functions, and diseases and disorders in the Ingenuity Knowledge Base. Fisher’s exact test was implemented to assess the significance of the associations between DEGs and canonical pathways, functions, or diseases. For canonical pathways, a ratio was also computed between the number of DEGs and the total number of molecules in the pathway.\n DCEA and TF-to-target analysis The DEGs were subjected to DCEA and TF-to-target analysis using the DCGL package (v2.0) [17] in the R software. A human TF-to-target library incorporated into the package was used to identify differentially regulated genes and links. A total of 215 human TFs and 16,863 targets were included in the package. Differentially co-expressed genes (DCGs) were first identified, and a network was constructed with the DCGs and their shared TFs.
Since dysregulation of the DMD gene is the primary cause of the disease, we also constructed a subnetwork surrounding the predefined gene DMD.", "We used two datasets (GSE6011 and GSE3307) from the GEO (http://www.ncbi.nlm.nih.gov/geo/) database. For further combined analysis, we only selected samples generated with the platform GPL96: [HG-U133A] Affymetrix Human Genome U133A Array.", "Raw data including simple omnibus format in text (SOFT) family files and CEL files of all samples were downloaded. The CEL files contained the expression information for each probe. Robust Multiarray Analysis (RMA) [14] was used to normalize raw intensity values: firstly, background noise and processing artifacts were neutralized by model-based background correction; secondly, expression values were set to a common scale by quantile normalization; thirdly, the expression value of each probe was generated by an iterative median polishing procedure. The resulting log2-transformed RMA expression values were then used for detecting DEGs. A t test was used to identify DEGs, and the Benjamini and Hochberg procedure [15] was carried out to correct for multiple testing. The threshold for significant DEGs was set at P < 0.01. Up- or down-regulation of the DEGs was determined according to the fold-change.
All of the above procedures were performed using the R software (v3.0.3) with the BioConductor limma package (3.12.1) [16].", "Differentially expressed probes were annotated based on the SOFT files. All genes were then mapped to the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (http://www.genome.jp/kegg/) database. The hypergeometric distribution test was used to identify pathways significantly enriched with DEGs. In addition to KEGG pathway analysis, DEGs were also uploaded into the Ingenuity Pathway Analysis (IPA) software (Ingenuity® Systems, http://www.ingenuity.com) and were associated with the canonical pathways, molecular and cellular functions, and diseases and disorders in the Ingenuity Knowledge Base. Fisher’s exact test was implemented to assess the significance of the associations between DEGs and canonical pathways, functions, or diseases. For canonical pathways, a ratio was also computed between the number of DEGs and the total number of molecules in the pathway.", "The DEGs were subjected to DCEA and TF-to-target analysis using the DCGL package (v2.0) [17] in the R software. A human TF-to-target library incorporated into the package was used to identify differentially regulated genes and links. A total of 215 human TFs and 16,863 targets were included in the package. Differentially co-expressed genes (DCGs) were first identified, and a network was constructed with the DCGs and their shared TFs. Since dysregulation of the DMD gene is the primary cause of the disease, we also constructed a subnetwork surrounding the predefined gene DMD.", "With the two downloaded datasets (GSE6011 and GSE3307), expression profiles of 27 DMD patients (22 from GSE6011 and 5 from GSE3307) and 14 healthy controls were included in this study. Dystrophin protein was absent in all patients [9,10]. A total of 454 genes were detected as DEGs, including 284 upregulated and 170 downregulated genes.
KEGG pathway analysis revealed that aberrantly regulated genes were enriched in 10 pathways (Table 1). Most of these pathways (7/10) are involved in the inflammatory/immune response process, such as the immune system, immune disease and infectious disease categories. In addition, a DMD-related pathway, the viral myocarditis pathway, was included. The other two pathways are Cell adhesion molecules (CAMs) and Focal adhesion.\nTable 1. KEGG pathways enriched with differentially expressed genes (ID | Pathway description | Pathway class | P-value):\nhsa05322 | Systemic lupus erythematosus | Immune diseases | 7.40E-05\nhsa05416 | Viral myocarditis | Cardiovascular diseases | 1.60E-04\nhsa04514 | Cell adhesion molecules (CAMs) | Signaling molecules and interaction | 4.00E-03\nhsa05310 | Asthma | Immune diseases | 1.00E-02\nhsa04510 | Focal adhesion | Cell communication | 1.00E-02\nhsa05130 | Pathogenic Escherichia coli infection | Infectious diseases | 1.10E-02\nhsa04672 | Intestinal immune network for IgA production | Immune system | 1.90E-02\nhsa05330 | Allograft rejection | Immune diseases | 1.70E-02\nhsa04612 | Antigen processing and presentation | Immune system | 2.20E-02\nhsa05332 | Graft-versus-host disease | Immune diseases | 2.10E-02\nThe IPA results of canonical pathways showed that the DEGs were enriched for epithelial adherens junction signaling, calcium signaling, nNOS signaling in skeletal muscle cells and some additional pathways (Figure 1). Consistent with the KEGG pathway analysis, aberrantly regulated genes were enriched in immune system pathways, such as calcium-induced T lymphocyte apoptosis, iCOS-iCOSL signaling in T helper cells, the complement system, and the antigen presentation pathway (Figure 1). Diseases and Disorders analysis (Table 2) revealed that “Neurological Disease” (P = 3.98E-15–1.20E-02) was the top disease affected by these DEGs, followed by “Skeletal and Muscular Disorders” (P = 2.66E-13–1.07E-02).
“Cellular Growth and Proliferation” (P = 1.38E-14–1.12E-02) was the top biological function mediated by these DEGs (Table 2).\nFigure 1. Ingenuity pathway analysis: the most significant canonical pathways in which differentially expressed genes (DEGs) were enriched.\nTable 2. Ingenuity pathway analysis: diseases and functions related to differentially expressed genes (Name | P-value | # Molecules):\nDiseases and Disorders\nNeurological disease | 3.98E-15–1.20E-02 | 133\nSkeletal and muscular disorders | 2.66E-13–1.07E-02 | 118\nCardiovascular disease | 5.89E-13–9.41E-03 | 75\nCancer | 9.06E-11–1.14E-02 | 293\nPsychological disorders | 3.24E-10–1.13E-02 | 92\nMolecular and Cellular Functions\nCellular growth and proliferation | 1.38E-14–1.12E-02 | 136\nCell death and survival | 1.24E-13–1.20E-02 | 128\nCellular development | 8.68E-08–1.12E-02 | 106\nCellular movement | 1.15E-07–1.20E-02 | 70\nCellular assembly and organization | 6.30E-06–9.41E-03 | 73\nDCEA results generated 610 pairs of DEGs regulated by at least one common TF. Among these pairs, 78 pairs of DEGs were identified as differentially co-expressed. The expression values of a total of 25 pairs of DEGs were positively correlated with each other. A network was generated to illustrate the relationship between the differentially co-expressed genes and their regulatory TFs (Figure 2). Considering the primary causative effect of the aberrant expression of DMD, we constructed a subnetwork with DMD and its differentially co-expressed genes (Figure 3) to illustrate genes and TFs that may play important roles in the secondary changes of DMD patients.
As shown in Figure 3, among the DEGs which shared TFs with DMD, six genes were co-expressed with DMD, including ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6.\nFigure 2. Transcription factor (TF) regulation network for the 78 pairs of co-expressed differentially expressed genes (DEGs). DCG: differentially co-expressed genes; DCL: links between DCGs.\nFigure 3. A subnetwork out of Figure 2 surrounding the predefined gene DMD. TF: transcription factor; DCG: differentially co-expressed genes; DCL: links between DCGs.", "The pathophysiology of DMD is highly complex, involving the dysregulation of many downstream cascades. Gene expression profiling is powerful for investigating the secondary changes of DMD patients. However, previous studies mainly focused on individual gene expression changes, without considering gene co-expression patterns or TF information. In this study, by combining two DMD microarray datasets, we implemented DCEA and constructed a TF-regulated network in the hope of providing new understanding of the pathogenesis.\nBoth KEGG pathway and IPA canonical pathway analyses of the DEGs revealed that pathways enriched with aberrantly regulated genes are mostly involved in immune response processes. This may be due to the infiltration of immune cells into the muscles and the elevated levels of various inflammatory cytokines [18-20].
In addition, the calcium signaling and nNOS signaling in skeletal muscle cells pathways were found to be enriched with deregulated genes, which is consistent with previous findings [3,4].\nDCEA results generated 610 pairs of DEGs regulated by at least one common TF, including 78 pairs of co-expressed DEGs. As shown in Figure 2, many TFs are involved in the regulation of these DEGs. Since DMD is the causative gene, a subnetwork was constructed to illustrate important genes and TFs that may play important roles in the secondary changes of DMD patients (Figure 3). Among the DEGs which shared TFs with DMD, six genes were co-expressed with DMD, including ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6. Among them, ATP1A2, MYOF, and SAT1 have previously been reported to be involved in muscular dystrophy [21-23]. The relationship between the three remaining genes and DMD is still unknown; however, it is worth further investigation. Take IFI6 for example: this gene is highly expressed in muscle and bone, and it is reported to inhibit cytochrome c release from mitochondria by regulating the Ca2+ channel, consequently attenuating apoptosis [24]. Alterations in intracellular Ca2+ homeostasis are involved in the regulation of apoptosis, since a sustained increase in cytosolic Ca2+ concentration accompanies apoptosis in cells [25]. A previous study [26] has demonstrated a high influx of extracellular calcium through the dystrophin-deficient membrane, which may lead to subsequent muscle necrosis or apoptosis along with the inflammatory response. Whether this gene contributes to the pathogenesis through its regulation of the Ca2+ channel needs further investigation. Among the TFs detected in the array, several TFs of DMD were dysregulated, including GATA2 and STAT5B.
Further studies are needed to investigate their involvement in the disease and the mechanisms of other unmeasured TFs (Figure 3).\nThe ultimate cure for DMD will lie in the stable introduction of a functional dystrophin gene into the muscles of DMD patients; however, when gene therapy or transplantation of stem cells/muscle precursor cells will be clinically available is unpredictable. In the interim, reducing secondary features of the pathologic progression of dystrophin deficiency could improve the quality and length of life for DMD patients. Dysregulation of the genes which were co-expressed with DMD may be caused by the malfunction of DMD. Therapeutic strategies aimed to compensate for the dysregulation of these genes may help to reduce the secondary features of DMD. Therefore, the genes co-expressed with DMD identified here may be considered as therapeutic targets in further investigations to treat the secondary effects. In addition, TFs that regulate these genes and DMD may also serve as potential therapeutic markers in future studies.", "In summary, with combined microarray data from the GEO database, we conducted DCEA and acquired the TF information of co-expressed DEGs to capture pathogenic characteristics of DMD. Our results may provide a new understanding of DMD and further contribute potential targets for future therapeutic tests." ]
[ "introduction", "materials|methods", null, null, null, null, "results", "discussion", "conclusion" ]
[ "Duchenne muscular dystrophy", "Differential co-expression analysis", "Transcription factor" ]
Background: Duchenne muscular dystrophy (DMD) is a severe and progressive inherited neuromuscular disease characterized by muscle fiber degeneration and central nervous system disorders that affects 1 in 3600–6000 live male births [1]. This disease is caused by mutations or dysregulation of the X-linked dystrophin gene. Absence of, or defects in, the dystrophin protein disrupts the dystrophin-associated protein complex (DAPC), leading to chronic inflammation and progressive muscle degeneration [2]. Although mutations of the DMD gene have been identified as primarily responsible for the pathology, a comprehensive understanding of the downstream mechanisms triggered by dystrophin absence is still lacking. Secondary changes in DMD involve calcium homeostasis [3], nitric oxide synthase [4], inflammation [5] and mast cell degranulation [6], suggesting that the pathological process of DMD is highly complicated. Gene expression profile analysis is a powerful strategy to investigate the pathophysiological mechanisms of DMD. Several gene expression profiling studies [7-11] have been performed earlier, providing insights into the pathology of DMD. However, most of them focused on expression level changes of individual genes, without considering the differences in gene interconnections. Since most proteins function in concert with other proteins, differential co-expression analysis (DCEA) would provide a better understanding of the mechanisms underlying DMD progression. The rationale of DCEA is that changes in gene co-expression patterns between two disease statuses (case and control) provide hints regarding the disrupted regulatory relationships in patients. Therefore, integrating information on transcription factors (TFs) into DCEA may reveal the regulatory causes of the observed co-expression changes, since TFs regulate gene expression by binding cis-elements in the promoter regions of their target genes.
TFs are reported to be important in both normal and disease states [12]. Investigation of TF activities in mdx mice has proposed several TFs involved in DMD pathogenesis [13]. Thus, integrative analysis of TFs and their differentially co-expressed target genes may provide a better understanding of the molecular mechanisms underlying the secondary changes of DMD. In this study, using microarray gene expression data collected from the Gene Expression Omnibus (GEO) database, we carried out an integrative bioinformatics analysis. Firstly, differentially expressed genes (DEGs) were detected and pathways enriched with DEGs were acquired. Secondly, differentially regulated link networks were constructed to integrate the TF-to-target information and the differentially co-expressed genes. Our results may provide new understanding of DMD and new targets for future therapeutic tests. Methods: Microarray data We used two datasets (GSE6011 and GSE3307) from the GEO (http://www.ncbi.nlm.nih.gov/geo/) database. For further combined analysis, we only selected samples generated with the platform GPL96: [HG-U133A] Affymetrix Human Genome U133A Array. Detection of differentially expressed genes (DEGs) Raw data including simple omnibus format in text (SOFT) family files and CEL files of all samples were downloaded. The CEL files contained the expression information for each probe. Robust Multiarray Analysis (RMA) [14] was used to normalize raw intensity values: firstly, background noise and processing artifacts were neutralized by model-based background correction; secondly, expression values were set to a common scale by quantile normalization; thirdly, the expression value of each probe was generated by an iterative median polishing procedure.
The resulting log2-transformed RMA expression values were then used for detecting DEGs. A t test was used to identify DEGs, and the Benjamini and Hochberg procedure [15] was carried out to correct for multiple testing. The threshold for significant DEGs was set at P < 0.01. Up- or down-regulation of the DEGs was determined according to the fold-change. All of the above procedures were performed using the R software (v3.0.3) with the BioConductor limma package (3.12.1) [16]. Functional analysis Differentially expressed probes were annotated based on the SOFT files. All genes were then mapped to the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (http://www.genome.jp/kegg/) database. The hypergeometric distribution test was used to identify pathways significantly enriched with DEGs.
In addition to KEGG pathway analysis, DEGs were also uploaded into the Ingenuity Pathway Analysis (IPA) software (Ingenuity® Systems, http://www.ingenuity.com) and were associated with the canonical pathways, molecular and cellular functions, and diseases and disorders in the Ingenuity Knowledge Base. Fisher’s exact test was implemented to assess the significance of the associations between DEGs and canonical pathways, functions, or diseases. For canonical pathways, a ratio was also computed between the number of DEGs and the total number of molecules in the pathway. DCEA and TF-to-target analysis The DEGs were subjected to DCEA and TF-to-target analysis using the DCGL package (v2.0) [17] in the R software. A human TF-to-target library incorporated into the package was used to identify differentially regulated genes and links. A total of 215 human TFs and 16,863 targets were included in the package. Differentially co-expressed genes (DCGs) were first identified, and a network was constructed with the DCGs and their shared TFs.
Since dysregulation of the DMD gene is the primary cause of the disease, we also constructed a subnetwork surrounding the predefined gene DMD. Microarray data: We used two datasets (GSE6011 and GSE3307) from the GEO (http://www.ncbi.nlm.nih.gov/geo/) database. For further combined analysis, we only selected samples generated with the platform GPL96: [HG-U133A] Affymetrix Human Genome U133A Array. Detection of differentially expressed genes (DEGs): Raw data including simple omnibus format in text (SOFT) family files and CEL files of all samples were downloaded. The CEL files contained the expression information for each probe. Robust Multiarray Analysis (RMA) [14] was used to normalize raw intensity values: firstly, background noise and processing artifacts were neutralized by model-based background correction; secondly, expression values were set to a common scale by quantile normalization; thirdly, the expression value of each probe was generated by an iterative median polishing procedure. The resulting log2-transformed RMA expression values were then used for detecting DEGs. A t test was used to identify DEGs, and the Benjamini and Hochberg procedure [15] was carried out to correct for multiple testing. The threshold for significant DEGs was set at P < 0.01. Up- or down-regulation of the DEGs was determined according to the fold-change.
All of the above procedures were performed using the R software (v3.0.3) with the BioConductor limma package (3.12.1) [16]. Functional analysis: Differentially expressed probes were annotated based on the SOFT files. All genes were then mapped to the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (http://www.genome.jp/kegg/) database. The hypergeometric distribution test was used to identify pathways significantly enriched with DEGs. In addition to KEGG pathway analysis, DEGs were also uploaded into the Ingenuity Pathway Analysis (IPA) software (Ingenuity® Systems, http://www.ingenuity.com) and were associated with the canonical pathways, molecular and cellular functions, and diseases and disorders in the Ingenuity Knowledge Base. Fisher’s exact test was implemented to assess the significance of the associations between DEGs and canonical pathways, functions, or diseases. For canonical pathways, a ratio was also computed between the number of DEGs and the total number of molecules in the pathway. DCEA and TF-to-target analysis: The DEGs were subjected to DCEA and TF-to-target analysis using the DCGL package (v2.0) [17] in the R software. A human TF-to-target library incorporated into the package was used to identify differentially regulated genes and links. A total of 215 human TFs and 16,863 targets were included in the package. Differentially co-expressed genes (DCGs) were first identified, and a network was constructed with the DCGs and their shared TFs. Since dysregulation of the DMD gene is the primary cause of the disease, we also constructed a subnetwork surrounding the predefined gene DMD. Results: With the two downloaded datasets (GSE6011 and GSE3307), expression profiles of 27 DMD patients (22 from GSE6011 and 5 from GSE3307) and 14 healthy controls were included in this study. Dystrophin protein was absent in all patients [9,10]. A total of 454 genes were detected as DEGs, including 284 upregulated and 170 downregulated genes.
KEGG pathway analysis revealed that aberrantly regulated genes were enriched in 10 pathways (Table 1). Most of these pathways (7/10) are involved in the inflammatory/immune response process, such as the immune system, immune disease and infectious disease categories. In addition, a DMD-related pathway, the viral myocarditis pathway, was included. The other two pathways are Cell adhesion molecules (CAMs) and Focal adhesion. Table 1. KEGG pathways enriched with differentially expressed genes (ID, pathway description, pathway class, P-value): hsa05322, Systemic lupus erythematosus, Immune diseases, 7.40E-05; hsa05416, Viral myocarditis, Cardiovascular diseases, 1.60E-04; hsa04514, Cell adhesion molecules (CAMs), Signaling molecules and interaction, 4.00E-03; hsa05310, Asthma, Immune diseases, 1.00E-02; hsa04510, Focal adhesion, Cell communication, 1.00E-02; hsa05130, Pathogenic Escherichia coli infection, Infectious diseases, 1.10E-02; hsa04672, Intestinal immune network for IgA production, Immune system, 1.90E-02; hsa05330, Allograft rejection, Immune diseases, 1.70E-02; hsa04612, Antigen processing and presentation, Immune system, 2.20E-02; hsa05332, Graft-versus-host disease, Immune diseases, 2.10E-02. The IPA results of canonical pathways showed that the DEGs were enriched for epithelial adherens junction signaling, calcium signaling, nNOS signaling in skeletal muscle cells and some additional pathways (Figure 1). Consistent with the KEGG pathway analysis, aberrantly regulated genes were enriched in immune system pathways, such as calcium-induced T lymphocyte apoptosis, iCOS-iCOSL signaling in T helper cells, the complement system, and the antigen presentation pathway (Figure 1). Diseases and Disorders analysis (Table 2) revealed that “Neurological Disease” (P = 3.98E-15–1.20E-02) was the top disease affected by these DEGs, followed by “Skeletal and Muscular Disorders” (P = 2.66E-13–1.07E-02).
“Cellular Growth and Proliferation” (P = 1.38E-14–1.12E-02) was the top biological function mediated by these DEGs (Table 2). Figure 1. Ingenuity pathway analysis: the most significant canonical pathways in which differentially expressed genes (DEGs) were enriched. Table 2. Ingenuity pathway analysis: diseases and functions related to differentially expressed genes (name, P-value, # molecules). Diseases and Disorders: Neurological disease, 3.98E-15–1.20E-02, 133; Skeletal and muscular disorders, 2.66E-13–1.07E-02, 118; Cardiovascular disease, 5.89E-13–9.41E-03, 75; Cancer, 9.06E-11–1.14E-02, 293; Psychological disorders, 3.24E-10–1.13E-02, 92. Molecular and Cellular Functions: Cellular growth and proliferation, 1.38E-14–1.12E-02, 136; Cell death and survival, 1.24E-13–1.20E-02, 128; Cellular development, 8.68E-08–1.12E-02, 106; Cellular movement, 1.15E-07–1.20E-02, 70; Cellular assembly and organization, 6.30E-06–9.41E-03, 73. DCEA results generated 610 pairs of DEGs regulated by at least one common TF. Among these pairs, 78 pairs of DEGs were identified as differentially co-expressed. The expression values of a total of 25 pairs of DEGs were positively correlated with each other. A network was generated to illustrate the relationship between the differentially co-expressed genes and their regulatory TFs (Figure 2). Considering the primary causative effect of the aberrant expression of DMD, we constructed a subnetwork with DMD and its differentially co-expressed genes (Figure 3) to illustrate genes and TFs that may play important roles in the secondary changes of DMD patients.
As shown in Figure 3, among the DEGs that shared TFs with DMD, six genes were co-expressed with DMD: ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6.

Figure 2. Transcription factor (TF) regulation network for the 78 pairs of co-expressed differentially expressed genes (DEGs). DCG: differential co-expression genes; DCL: links between DCGs.

Figure 3. A subnetwork of Figure 2 surrounding the predefined gene DMD. TF: transcription factor; DCG: differential co-expression genes; DCL: links between DCGs.

Discussion: The pathophysiology of DMD is highly complex, involving the dysregulation of many downstream cascades. Gene expression profiling is a powerful tool for investigating the secondary changes in DMD patients. However, previous studies mainly focused on individual gene expression changes, without considering gene co-expression patterns or TF information. In this study, by combining two DMD microarray datasets, we implemented DCEA and constructed a TF-regulated network in the hope of providing a new understanding of the pathogenesis. Both the KEGG pathway and IPA canonical pathway analyses of the DEGs revealed that the pathways enriched with aberrantly regulated genes are mostly involved in immune response processes. This may be due to the infiltration of immune cells into the muscles and the elevated levels of various inflammatory cytokines [18-20]. In addition, the calcium signaling and nNOS signaling in skeletal muscle cells pathways were found to be enriched with deregulated genes, which is consistent with previous findings [3,4].
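Enrichment P-values of the kind reported in Table 1 are conventionally computed with a one-sided hypergeometric (Fisher's exact) test. The sketch below illustrates that test with hypothetical counts; the background size, pathway size, and overlap used here are invented for illustration and are not taken from this study.

```python
from math import comb

def enrichment_p(total_genes, pathway_genes, degs, overlap):
    """One-sided hypergeometric P-value: the probability of observing at
    least `overlap` pathway genes among `degs` differentially expressed
    genes drawn from a background of `total_genes` genes."""
    denom = comb(total_genes, degs)
    return sum(
        comb(pathway_genes, k) * comb(total_genes - pathway_genes, degs - k)
        for k in range(overlap, min(pathway_genes, degs) + 1)
    ) / denom

# Hypothetical counts for illustration (not from the study):
# 20,000 background genes, a 120-gene pathway, 454 DEGs, 12 in the overlap.
print(f"P = {enrichment_p(20_000, 120, 454, 12):.2e}")  # small P => enriched
```

In practice, enrichment tools such as those used here also correct these raw P-values for testing many pathways at once.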
DCEA generated 610 pairs of DEGs regulated by at least one common TF, including 78 pairs of co-expressed DEGs. As shown in Figure 2, many TFs are involved in the regulation of these DEGs. Since DMD is the causative gene, a subnetwork was constructed to illustrate the genes and TFs that may play important roles in the secondary changes of DMD patients (Figure 3). Among the DEGs that shared TFs with DMD, six genes were co-expressed with DMD: ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6. Of these, ATP1A2, MYOF, and SAT1 have previously been reported to be involved in muscular dystrophy [21-23]. The relationship between the three remaining genes and DMD is still unknown and is worth further investigation. Take IFI6 as an example: this gene is highly expressed in muscle and bone and is reported to inhibit cytochrome c release from mitochondria by regulating the Ca2+ channel, consequently attenuating apoptosis [24]. Alterations in intracellular Ca2+ homeostasis can alter apoptosis, since a sustained increase in cytosolic Ca2+ concentration accompanies apoptosis in cells [25]. A previous study [26] demonstrated a high influx of extracellular calcium through the dystrophin-deficient membrane, which may lead to subsequent muscle necrosis or apoptosis along with an inflammatory response. Whether IFI6 contributes to the pathogenesis through its regulation of the Ca2+ channel needs further investigation. Among the TFs detected on the array, several TFs of DMD are dysregulated, including GATA2 and STAT5B. Further studies are needed to investigate their involvement in the disease, as well as the mechanisms of the other, unmeasured TFs (Figure 3). The ultimate cure for DMD will lie in the stable introduction of a functional dystrophin gene into the muscles of DMD patients; however, when gene therapy or transplantation of stem cells/muscle precursor cells will become clinically available is unpredictable.
In the interim, reducing the secondary features of the pathologic progression of dystrophin deficiency could improve the quality and length of life for DMD patients. Dysregulation of the genes co-expressed with DMD may be caused by the malfunction of DMD, so therapeutic strategies aimed at compensating for their dysregulation may help to reduce the secondary features of DMD. Therefore, the genes co-expressed with DMD identified here may be considered as therapeutic targets in further investigations to treat the secondary effects. In addition, the TFs that regulate these genes and DMD may also serve as potential therapeutic markers in future studies. Conclusions: In summary, with combined microarray data from the GEO database, we conducted DCEA and acquired the TF information of co-expressed DEGs to capture pathogenic characteristics of DMD. Our results may provide a new understanding of DMD and further contribute potential targets for future therapeutic tests.
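As a rough illustration of the differential co-expression idea applied in this study, the sketch below scores a gene pair by the change in Pearson correlation between patient and control expression profiles. The scoring criterion, the threshold, and the toy profiles are all assumptions for illustration; the study's actual DCEA statistic may differ.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def differentially_coexpressed(case_a, case_b, ctrl_a, ctrl_b, threshold=1.0):
    """Flag a gene pair whose co-expression changes between patients and
    controls: |r_case - r_control| above `threshold` (illustrative only)."""
    return abs(pearson(case_a, case_b) - pearson(ctrl_a, ctrl_b)) > threshold

# Toy profiles: the pair is correlated in controls, anti-correlated in cases.
ctrl_a = [1.0, 2.0, 3.0, 4.0, 5.0]
ctrl_b = [1.1, 2.2, 2.9, 4.3, 5.1]
case_a = [1.0, 2.0, 3.0, 4.0, 5.0]
case_b = [4.9, 4.1, 3.0, 2.2, 1.0]
print(differentially_coexpressed(case_a, case_b, ctrl_a, ctrl_b))  # True
```

Pairs flagged this way can then be intersected with TF-to-target annotations, as in the networks of Figures 2 and 3.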
Background: Gene expression analysis is powerful for investigating the underlying mechanisms of Duchenne muscular dystrophy (DMD). Previous studies have mainly neglected co-expression or transcription factor (TF) information. Here we integrated TF information into differential co-expression analysis (DCEA) to explore new understandings of DMD pathogenesis. Methods: Using two microarray datasets from the Gene Expression Omnibus (GEO) database, we first detected differentially expressed genes (DEGs) and pathways enriched with DEGs. Second, we constructed differentially regulated networks to integrate the TF-to-target information and the differentially co-expressed genes. Results: A total of 454 DEGs were detected, and both KEGG pathway and Ingenuity Pathway Analysis revealed that the pathways enriched with aberrantly regulated genes are mostly involved in immune response processes. DCEA generated 610 pairs of DEGs regulated by at least one common TF, including 78 pairs of co-expressed DEGs. A network was constructed to illustrate their relationships, and a subnetwork of DMD-related molecules was constructed to show the genes and TFs that may play important roles in the secondary changes of DMD. Among the DEGs that shared TFs with DMD, six genes were co-expressed with DMD: ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6. Conclusions: Our results may provide a new understanding of DMD and contribute potential targets for future therapeutic tests.
Background: Duchenne muscular dystrophy (DMD) is a severe, progressive inherited neuromuscular disease characterized by muscle fiber degeneration and central nervous system disorders that affects 1 in 3600–6000 live male births [1]. The disease is caused by mutations or dysregulation of the X-linked dystrophin gene. Absence of or defects in the dystrophin protein disrupt the dystrophin-associated protein complex (DAPC), resulting in chronic inflammation and progressive muscle degeneration [2]. Although mutations of the DMD gene have been identified as primarily responsible for the pathology, a comprehensive understanding of the downstream mechanisms triggered by dystrophin absence is still lacking. Secondary changes in DMD involve calcium homeostasis [3], nitric oxide synthase [4], inflammation [5], and mast cell degranulation [6], suggesting that the pathological process of DMD is highly complicated. Gene expression profile analysis is a powerful strategy for investigating the pathophysiological mechanisms of DMD. Several gene expression profiling studies [7-11] have been performed, providing insights into the pathology of DMD. However, most of them focused on expression level changes of individual genes, without considering differences in gene interconnections. Since the majority of proteins function together with other proteins, differential co-expression analysis (DCEA) can provide a better understanding of the mechanisms underlying DMD progression. The rationale of DCEA is that changes in gene co-expression patterns between two disease statuses (case and control) provide hints about disrupted regulatory relationships in patients. Therefore, integrating transcription factor (TF) information into DCEA may detect the regulatory causes of observed co-expression changes, since TFs regulate gene expression by binding cis-elements in target genes’ promoter regions.
TFs are reported to be important in both normal and disease states [12]. Investigation of TF activities in mdx mice has implicated several TFs in DMD pathogenesis [13]. Thus, integrative analysis of TFs and their target differentially co-expressed genes may provide a better understanding of the molecular mechanisms behind the secondary changes of DMD. In this study, using microarray gene expression data collected from the Gene Expression Omnibus (GEO) database, we carried out an integrative bioinformatics analysis. First, differentially expressed genes (DEGs) were detected and pathways enriched with DEGs were acquired. Second, differentially regulated link networks were constructed to integrate the TF-to-target information and the differentially co-expressed genes. Our results may provide a new understanding of DMD and new targets for future therapeutic tests. Conclusions: In summary, with combined microarray data from the GEO database, we conducted DCEA and acquired the TF information of co-expressed DEGs to capture pathogenic characteristics of DMD. Our results may provide a new understanding of DMD and further contribute potential targets for future therapeutic tests.
Background: Gene expression analysis is powerful for investigating the underlying mechanisms of Duchenne muscular dystrophy (DMD). Previous studies have mainly neglected co-expression or transcription factor (TF) information. Here we integrated TF information into differential co-expression analysis (DCEA) to explore new understandings of DMD pathogenesis. Methods: Using two microarray datasets from the Gene Expression Omnibus (GEO) database, we first detected differentially expressed genes (DEGs) and pathways enriched with DEGs. Second, we constructed differentially regulated networks to integrate the TF-to-target information and the differentially co-expressed genes. Results: A total of 454 DEGs were detected, and both KEGG pathway and Ingenuity Pathway Analysis revealed that the pathways enriched with aberrantly regulated genes are mostly involved in immune response processes. DCEA generated 610 pairs of DEGs regulated by at least one common TF, including 78 pairs of co-expressed DEGs. A network was constructed to illustrate their relationships, and a subnetwork of DMD-related molecules was constructed to show the genes and TFs that may play important roles in the secondary changes of DMD. Among the DEGs that shared TFs with DMD, six genes were co-expressed with DMD: ATP1A2, C1QB, MYOF, SAT1, TRIP10, and IFI6. Conclusions: Our results may provide a new understanding of DMD and contribute potential targets for future therapeutic tests.
3,542
262
9
[ "degs", "genes", "dmd", "expression", "analysis", "expressed", "pathways", "co", "gene", "pathway" ]
[ "test", "test" ]
null
[CONTENT] Duchenne muscular dystrophy | Differential co-expression analysis | Transcription factor [SUMMARY]
null
[CONTENT] Duchenne muscular dystrophy | Differential co-expression analysis | Transcription factor [SUMMARY]
[CONTENT] Duchenne muscular dystrophy | Differential co-expression analysis | Transcription factor [SUMMARY]
[CONTENT] Duchenne muscular dystrophy | Differential co-expression analysis | Transcription factor [SUMMARY]
[CONTENT] Duchenne muscular dystrophy | Differential co-expression analysis | Transcription factor [SUMMARY]
[CONTENT] Databases, Genetic | Gene Expression Profiling | Gene Expression Regulation | Gene Regulatory Networks | Genetic Markers | Genetic Predisposition to Disease | Humans | Muscular Dystrophy, Duchenne | Oligonucleotide Array Sequence Analysis | Phenotype | Prognosis | Transcription Factors [SUMMARY]
null
[CONTENT] Databases, Genetic | Gene Expression Profiling | Gene Expression Regulation | Gene Regulatory Networks | Genetic Markers | Genetic Predisposition to Disease | Humans | Muscular Dystrophy, Duchenne | Oligonucleotide Array Sequence Analysis | Phenotype | Prognosis | Transcription Factors [SUMMARY]
[CONTENT] Databases, Genetic | Gene Expression Profiling | Gene Expression Regulation | Gene Regulatory Networks | Genetic Markers | Genetic Predisposition to Disease | Humans | Muscular Dystrophy, Duchenne | Oligonucleotide Array Sequence Analysis | Phenotype | Prognosis | Transcription Factors [SUMMARY]
[CONTENT] Databases, Genetic | Gene Expression Profiling | Gene Expression Regulation | Gene Regulatory Networks | Genetic Markers | Genetic Predisposition to Disease | Humans | Muscular Dystrophy, Duchenne | Oligonucleotide Array Sequence Analysis | Phenotype | Prognosis | Transcription Factors [SUMMARY]
[CONTENT] Databases, Genetic | Gene Expression Profiling | Gene Expression Regulation | Gene Regulatory Networks | Genetic Markers | Genetic Predisposition to Disease | Humans | Muscular Dystrophy, Duchenne | Oligonucleotide Array Sequence Analysis | Phenotype | Prognosis | Transcription Factors [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] degs | genes | dmd | expression | analysis | expressed | pathways | co | gene | pathway [SUMMARY]
null
[CONTENT] degs | genes | dmd | expression | analysis | expressed | pathways | co | gene | pathway [SUMMARY]
[CONTENT] degs | genes | dmd | expression | analysis | expressed | pathways | co | gene | pathway [SUMMARY]
[CONTENT] degs | genes | dmd | expression | analysis | expressed | pathways | co | gene | pathway [SUMMARY]
[CONTENT] degs | genes | dmd | expression | analysis | expressed | pathways | co | gene | pathway [SUMMARY]
[CONTENT] expression | dmd | gene | gene expression | changes | co expression | provide | dystrophin | understanding | tfs [SUMMARY]
null
[CONTENT] genes | figure | pathway | expressed | expressed genes | pathways | co | degs | pairs | 20e [SUMMARY]
[CONTENT] dmd | potential targets future | combined microarray | dmd contribute | dmd contribute potential | dmd contribute potential targets | capture pathogenic | capture pathogenic characteristics | capture pathogenic characteristics dmd | understanding dmd contribute [SUMMARY]
[CONTENT] dmd | genes | degs | expression | pathways | gene | pathway | analysis | tfs | co [SUMMARY]
[CONTENT] dmd | genes | degs | expression | pathways | gene | pathway | analysis | tfs | co [SUMMARY]
[CONTENT] Duchenne | DMD ||| TF ||| DCEA | DMD [SUMMARY]
null
[CONTENT] 454 | KEGG ||| DCEA | 610 | at least one | TF | 78 ||| DMD | DMD ||| DMD | six | DMD | C1QB | MYOF | SAT1 | IFI6 [SUMMARY]
[CONTENT] DMD [SUMMARY]
[CONTENT] Duchenne | DMD ||| TF ||| DCEA | DMD ||| two | Gene Expression Omnibus | GEO ||| Secondly ||| 454 | KEGG ||| DCEA | 610 | at least one | TF | 78 ||| DMD | DMD ||| DMD | six | DMD | C1QB | MYOF | SAT1 | IFI6 ||| DMD [SUMMARY]
[CONTENT] Duchenne | DMD ||| TF ||| DCEA | DMD ||| two | Gene Expression Omnibus | GEO ||| Secondly ||| 454 | KEGG ||| DCEA | 610 | at least one | TF | 78 ||| DMD | DMD ||| DMD | six | DMD | C1QB | MYOF | SAT1 | IFI6 ||| DMD [SUMMARY]
Aspartate aminotransferase to platelet ratio index and sustained virologic response are associated with progression from hepatitis C associated liver cirrhosis to hepatocellular carcinoma after treatment with pegylated interferon plus ribavirin.
27536084
The aim of this study was to evaluate the clinically significant predictors of hepatocellular carcinoma (HCC) development among hepatitis C virus (HCV) cirrhotic patients receiving combination therapy.
BACKGROUND
One hundred and five compensated cirrhosis patients who received pegylated interferon plus ribavirin between January 2005 and December 2011 were enrolled. All the patients were examined with abdominal sonography and liver biochemistry at baseline, end of treatment, and every 3-6 months posttreatment. The occurrence of HCC was evaluated every 3-6 months posttreatment.
PATIENTS AND METHODS
A total of 105 patients were enrolled (mean age 58.3±10.4 years). The average follow-up time for each patient was 4.38 years (standard deviation 1.73 years; range 1.13-9.27 years). Fifteen (14.3%) patients developed HCC during follow-up period. Thirteen of them had high baseline aspartate aminotransferase to platelet ratio index (APRI) (ie, an APRI >2.0). Multivariate analysis showed that those without sustained virologic response (SVR) (hazard ratio [HR] 5.795; 95% confidence interval [CI] 1.370-24.5; P=0.017) and high APRI (HR 5.548; 95% CI 1.191-25.86; P=0.029) had a significantly higher risk of HCC occurrence. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%-35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%-51.1%) compared to those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%-1.3%). Further, the cumulative incidence of HCC was significantly higher (P=0.006) in patients with high APRI (3-year cumulative incidence 21.8%; 95% CI 8.2%-35.3%; 5-year cumulative incidence 30.5%, 95% CI 11.8%-49.3%) compared to those with low APRI (3- and 5-year cumulative incidence 4.2%, 95% CI 0%-1.0%).
RESULTS
In HCV-infected cirrhotic patients who received combination therapy, APRI and SVR are the two major predictors of HCC development.
CONCLUSION
[ "Aged", "Antiviral Agents", "Aspartate Aminotransferases", "Blood Platelets", "Carcinoma, Hepatocellular", "Disease Progression", "Drug Therapy, Combination", "Female", "Hepacivirus", "Hepatitis C, Chronic", "Humans", "Interferon-alpha", "Kaplan-Meier Estimate", "Liver Cirrhosis", "Liver Neoplasms", "Male", "Middle Aged", "Multivariate Analysis", "Proportional Hazards Models", "Retrospective Studies", "Ribavirin", "Risk Factors", "Sustained Virologic Response", "Taiwan" ]
4976814
Introduction
Hepatocellular carcinoma (HCC) is one of the most deadly cancers and is the third leading cause of cancer-related death among males and the sixth among females worldwide.1 According to a recent national analysis in Taiwan, HCC remains the second leading cause of cancer-related death among both males and females during the past 10 years.2 Chronic hepatitis is one of the most important causes of chronic liver diseases, including cirrhosis that can lead to subsequent decompensation and development of HCC.3 The estimated risk of HCC is 15–20 times higher among persons infected with hepatitis C virus (HCV) compared to those without infection, and the greatest excess risk occurs in those with advanced hepatic fibrosis or cirrhosis.4,5 In chronic HCV antiviral therapy, a sustained viral response (SVR) had been a clinically meaningful end point at which viral clearance contributes to reduced inflammation and histologically identified fibrosis, fewer hepatic complications, lower liver-related mortality, and a reduced incidence of HCC.6–8 Even in patients with advanced liver disease, SVR has been found to be associated with HCC risk reduction.9 Risk factors for HCV-related HCC include older age, male sex, coinfection with human immunodeficiency virus (HIV) or hepatitis B virus (HBV), obesity, hepatic fibrosis, alcohol abuse, and sex hormone dysregulation.4,10,11 The liver fibrosis stage provides important prognostic information about the development of HCC. 
The aspartate aminotransferase to platelet ratio index (APRI) is a noninvasive marker that has been validated for the diagnosis of both significant fibrosis and cirrhosis.12,13 APRI is a useful marker for the prognosis in chronic hepatitis C (CHC) patients.14 Some recent studies report that the APRI score could be a predictor of HCC in chronic hepatitis patients.15,16 APRI could be a useful marker to classify HCC risk in CHC patients who achieved SVR and predict HCC recurrence after radiofrequency ablation.17–19 However, the prognostic value of APRI in cirrhotic patients for predicting the occurrence of HCC is uncertain. Hence, the aim of this study was to evaluate the clinically significant predictors of HCC development among HCV cirrhotic patients. These factors may affect physicians’ clinical decisions.
Multivariate stratified analysis
Table 2 shows the results of the Cox regression analysis. After adjustments for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver), those without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) and those with a high APRI level (HR 5.548; 95% CI 1.191–25.86; P=0.029) were associated with a significantly higher risk of HCC occurrence.
Results
General characteristics of the patients

A total of 105 patients with HCV-associated Child A cirrhosis without decompensation were enrolled, consisting of 57 patients (54.3%) with SVR and 48 patients (45.7%) without SVR by intention-to-treat analysis. Of the 105 patients, 66 (62.9%) were infected with HCV genotype 1, and the remaining 39 patients (37.1%) were infected with other genotypes of HCV. There were 47 males and 58 females, and the mean patient age was 58.3±10.4 years. Thirteen patients (12.4%) did not complete the full treatment regimen (mean treatment duration, 61 days). One patient (1/13) stopped treatment due to major trauma with renal hemorrhage. Twelve patients (12/13, 92.3%) did not complete treatment due to treatment side effects; among them, decompensation with jaundice (4/13) was the most common reason. An intention-to-treat analysis was performed in this study. No deaths or severe treatment-related complications were found during the course of treatment. The average follow-up time for each patient was 4.38 years (SD 1.73 years; range 1.13–9.27 years) (Table 1). Fifteen (14.3%) patients developed HCC during the follow-up period; 13 of them had high baseline APRI (>2.0) and two had low APRI (<2.0).

Demographics and clinical features that predispose to HCC

Table 1 shows the baseline characteristics and treatment outcomes of the patients with and without progression to HCC. We performed univariate analyses on the two groups and found significant differences in age, SVR, baseline APRI, platelet count, AST, and ALT. No significant differences in sex, diabetes, cohepatitis (alcoholism, HBV coinfection, or fatty liver), HCV genotype, HCV RNA load, alpha-fetoprotein, albumin, international normalized ratio of prothrombin time, or total bilirubin were found between the two groups.

Multivariate stratified analysis

Table 2 shows the results of the Cox regression analysis. After adjustments for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver), those without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) and those with a high APRI level (HR 5.548; 95% CI 1.191–25.86; P=0.029) were associated with a significantly higher risk of HCC occurrence.

Cumulative incidences of HCC occurrence

The cumulative incidences of HCC among patients with and without SVR after treatment are shown in Figure 2. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%–35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%–51.1%) compared to those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%–1.3%). The difference in 5-year cumulative incidence was 24.9%. The unadjusted number needed to treat associated with 1 less HCC within 5 years was 4, suggesting that for every four patients who achieved SVR, 1 less HCC occurred within 5 years of treatment. Figure 3 shows the cumulative incidence among patients with high and low APRI values, based on a cutoff value of 2.0. The cumulative incidence of HCC was significantly higher (P=0.006) in patients with a high APRI value (3-year cumulative incidence 21.8%; 95% CI 8.2%–35.3%; 5-year cumulative incidence 30.5%; 95% CI 11.8%–49.3%) compared to those with low APRI values (3- and 5-year cumulative incidence 4.2%; 95% CI 0%–1.0%). The difference in 5-year cumulative incidences was 26.3%.
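The number needed to treat quoted above follows directly from the absolute risk reduction, i.e., the 24.9% difference in 5-year cumulative incidence between the SVR and non-SVR groups. A minimal sketch, rounding to the nearest whole patient to match the value reported here:

```python
def number_needed_to_treat(risk_untreated, risk_treated):
    """Unadjusted NNT = 1 / absolute risk reduction, rounded to the
    nearest whole patient (NNT is often rounded up instead)."""
    arr = risk_untreated - risk_treated
    if arr <= 0:
        raise ValueError("no risk reduction")
    return round(1 / arr)

# 5-year cumulative incidence of HCC: 31.1% without SVR vs 6.2% with SVR.
print(number_needed_to_treat(0.311, 0.062))  # 1 / 0.249 -> 4
```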
Conclusion
In cirrhotic HCV-infected patients, SVR is a major predictor of HCC development, whereas APRI may be a potent predictor of HCC risk among these patients. Further studies are warranted to validate our findings and their applicability in clinical practice.
[ "Regimen of PEG-IFN plus RBV", "Clinical monitoring", "HCV quantification and genotyping", "Sustained virologic response", "Statistical analysis", "Demographics and clinical features that predispose to HCC", "Cumulative incidences of HCC occurrence", "Conclusion" ]
[ "PEG-IFN-α-2a (Pegasys; Hoffman-La Roche Ltd., Basel, Switzerland) or PEG-IFN-α-2b (PegIntron; Schering-Plough Corp., Kenilworth, NJ, USA) plus RBV were prescribed to eligible patients for 6 months. PEG-IFN-α-2a (180 μg/kg) or PEG-IFN-α-2b (1.5 μg/kg) was administered once per week by subcutaneous injection. The fixed duration (6 months) without consideration of HCV genotypes is due to restrictions imposed by the reimbursement policy of the Bureau of National Health Insurance in Taiwan. RBV was prescribed orally at a dose of 800 mg/day for patients <55 kg, 1,000 mg/day for patients between 55 and 75 kg, and 1,200 mg/day for patients >75 kg. Adjustments to the dose of PEG-IFN and RBV and treatment with either erythropoietin or blood transfusion were determined according to published practice guidelines.3,20–22", "The primary outcome was time to develop HCC. All patients were examined at baseline, end of treatment, and every 3–6 months posttreatment with a liver function test and abdominal sonography at the hospital’s gastrointestinal outpatient clinic. The serum aspartate aminotransferase (AST), ALT, total bilirubin, creatinine, hemoglobin, white blood cell count, and platelet count were measured at each time point. HCV RNA was quantified at baseline and 24 weeks posttreatment. The diagnosis of liver cirrhosis was made by liver biopsy or by clinical diagnostic criteria including twice documented ultrasonographic evidence of a coarse and nodular parenchyma, irregular surface, and dull margin with splenomegaly, ascites, hepatic encephalopathy, or varices.23 Liver biopsy with optional procedure was performed at baseline with patients’ consent. Liver biopsy was obtained from 62 patients (59.6%) in this study. A diagnosis of fatty liver was based on the results of biopsy and/or abdominal ultrasound. Other clinical factors, including diabetes mellitus, chronic hepatitis B (CHB), and alcoholism, were also evaluated by chart review. 
The diagnosis of CHB was based on seropositive status for hepatitis B surface antigen for at least 6 months. HCC was diagnosed either by biopsy or by imaging in the setting of liver cirrhosis. The specific imaging pattern was defined by increased contrast uptake in the arterial phase followed by contrast washout in the venous/delayed phase as seen via computer tomography or magnetic resonance.24,25 In this study, we calculated APRI using the formula of ([AST/upper limit of normal]/platelet count [109/L]) ×100, and further used it as a noninvasive marker validated for the diagnosis of both significant fibrosis and cirrhosis. The cutoff value of APRI >2.0 defined high APRI and APRI <2.0 defined low APRI.3,12,13", "Serum HCV RNA was quantified at baseline and 24 weeks posttreatment using real-time polymerase chain reaction with a detection limitation of 15 IU/mL.26 The threshold for discriminating low- and high-baseline HCV RNA was 400,000 IU/mL.22 HCV genotyping was performed using melting curve analysis (Roche LightCycler; Biotronics Tech Corp., Lowell, MA, USA).27", "SVR was defined as an undetectable HCV RNA for at least 24 weeks after the patient completed the combined treatment of PEG-IFN plus RBV. Patients who were positive for HCV RNA at week 24 posttreatment were considered non-SVR.", "SPSS 19.0 for Windows (IBM Corporation, Armonk, NY, USA) was used for all statistical analyses. The chi-square test or the Fisher’s exact test was used for nominal variables. Student’s t-test was used to compare continuous variables with normal distributions, and the Mann–Whitney U-test was used for continuous variables with nonnormal distributions. In competing risk data ratios, we conducted calculations and comparisons of cumulative incidences using a modified Kaplan–Meier method. We tested differences in the full time-to-event distributions between the study groups using a log-rank test. 
To determine the independent risk factors for HCC occurrence, we carried out multivariate analyses and stratified analyses using HRs with Cox proportional hazards regression models in the presence of a competing risk event after adjusting for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver). Results were shown as HRs with 95% CIs. Significances in all analyses were set as P<0.05.", "Table 1 shows the baseline characteristics and treatment outcomes of the patients with and without progression to HCC. We performed univariate analyses on the two groups and found significant differences in age, SVR, baseline APRI, platelet count, AST, and ALT. No significant differences in sex, diabetes, cohepatitis (alcoholism, HBV coinfection, or fatty liver), HCV genotype, HCV RNA load, alpha-fetoprotein, albumin, international normalized ratio of prothrombin time, and total bilirubin were found between the two groups.", "The cumulative incidences of HCC among patients with and without SVR after treatment are shown in Figure 2. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%–35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%–51.1%) compared to those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%–1.3%). The difference in 5-year cumulative incidence was 24.9%. The unadjusted number needed to treat associated with 1 less HCC within 5 years was 4, suggesting that four patients who achieved SVR were associated with 1 less HCC within 5 years of treatment.\nFigure 3 shows the cumulative incidence among patients with high and low APRI values, based on a cutoff value of 2.0. 
The cumulative incidence of HCC was significantly higher (P=0.006) in patients with a high APRI value (3-year cumulative incidence 21.8%; 95% CI 8.2%–35.3%; 5-year cumulative incidence 30.5%; 95% CI 11.8%–49.3%) compared to those with low APRI values (3- and 5-year cumulative incidence 4.2%; 95% CI 0%–1.0%). The difference in 5-year cumulative incidences was 26.3%.", "In cirrhotic HCV-infected patients, SVR is a major predictor of HCC development, whereas APRI may be a potent predictor of HCC risk among these patients. Further studies are warranted to validate our findings and their applicability in clinical practice." ]
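The unadjusted number needed to treat quoted in the results is the reciprocal of the absolute difference in 5-year cumulative incidence. A sketch of that arithmetic using the reported figures (31.1% without SVR vs 6.2% with SVR; function name ours):

```python
def number_needed_to_treat(risk_untreated, risk_treated):
    """Reciprocal of the absolute risk difference between the two groups."""
    return 1 / (risk_untreated - risk_treated)

# Reported 5-year cumulative incidences: 31.1% without SVR, 6.2% with SVR.
nnt = number_needed_to_treat(0.311, 0.062)  # 1 / 0.249, roughly 4
```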
[ null, null, null, null, "methods", null, null, null ]
[ "Introduction", "Patients and methods", "Selection of patients", "Regimen of PEG-IFN plus RBV", "Clinical monitoring", "HCV quantification and genotyping", "Sustained virologic response", "Statistical analysis", "Results", "General characteristics of the patients", "Demographics and clinical features that predispose to HCC", "Multivariate stratified analysis", "Cumulative incidences of HCC occurrence", "Discussion", "Conclusion" ]
[ "Hepatocellular carcinoma (HCC) is one of the most deadly cancers and is the third leading cause of cancer-related death among males and the sixth among females worldwide.1 According to a recent national analysis in Taiwan, HCC has remained the second leading cause of cancer-related death among both males and females during the past 10 years.2 Chronic hepatitis is one of the most important causes of chronic liver disease, including cirrhosis, which can lead to subsequent decompensation and the development of HCC.3\nThe estimated risk of HCC is 15–20 times higher among persons infected with hepatitis C virus (HCV) than among those without infection, and the greatest excess risk occurs in those with advanced hepatic fibrosis or cirrhosis.4,5 In chronic HCV antiviral therapy, a sustained viral response (SVR) has been a clinically meaningful end point at which viral clearance contributes to reduced inflammation and histologically identified fibrosis, fewer hepatic complications, lower liver-related mortality, and a reduced incidence of HCC.6–8 Even in patients with advanced liver disease, SVR has been found to be associated with HCC risk reduction.9\nRisk factors for HCV-related HCC include older age, male sex, coinfection with human immunodeficiency virus (HIV) or hepatitis B virus (HBV), obesity, hepatic fibrosis, alcohol abuse, and sex hormone dysregulation.4,10,11 The liver fibrosis stage provides important prognostic information about the development of HCC. 
The aspartate aminotransferase to platelet ratio index (APRI) is a noninvasive marker that has been validated for the diagnosis of both significant fibrosis and cirrhosis.12,13 APRI is a useful marker for prognosis in chronic hepatitis C (CHC) patients.14 Some recent studies report that the APRI score could be a predictor of HCC in chronic hepatitis patients.15,16 APRI may also be a useful marker to classify HCC risk in CHC patients who achieve SVR and to predict HCC recurrence after radiofrequency ablation.17–19 However, the prognostic value of APRI in cirrhotic patients for predicting the occurrence of HCC is uncertain.\nHence, the aim of this study was to evaluate the clinically significant predictors of HCC development among HCV cirrhotic patients. These factors may affect physicians’ clinical decisions.", " Selection of patients HCV patients with Child A cirrhosis without decompensation who underwent treatment at the Dalin Tzu Chi General Hospital with either pegylated interferon (PEG-IFN)-α-2a or PEG-IFN-α-2b plus ribavirin (RBV) between January 2005 and December 2011 were enrolled in this retrospective study. All the patients were positive for antihepatitis C antibody for more than 6 months, had an alanine aminotransferase (ALT) level higher than the upper limit of normal, and had detectable serum HCV RNA. We excluded patients with posttreatment follow-up <1 year, HCC history, incomplete medical records, autoimmune diseases, HIV infection, neutropenia (<1,500 neutrophils/mL), thrombocytopenia (<50,000 platelets/mL), anemia (<12 g of hemoglobin/dL in females and <13 g/dL in males), or poorly controlled psychiatric disease.\nOne hundred and twenty compensated cirrhosis patients who received PEG-IFN plus RBV were initially enrolled (Figure 1). Five patients had posttreatment follow-up time <1 year, seven did not have complete medical records, and three previously had HCC; all these patients were excluded from the study. 
The remaining 105 patients were finally included. The average follow-up time for each patient was 4.38 years (standard deviation [SD] 1.73 years; range 1.13–9.27 years). The study was approved by the Research Ethics Committee of Dalin Tzu Chi Hospital (B099010124). Written informed consent was obtained from each patient, and the Research Ethics Committee of Dalin Tzu Chi Hospital approved the written consent process.\n Regimen of PEG-IFN plus RBV PEG-IFN-α-2a (Pegasys; Hoffman-La Roche Ltd., Basel, Switzerland) or PEG-IFN-α-2b (PegIntron; Schering-Plough Corp., Kenilworth, NJ, USA) plus RBV was prescribed to eligible patients for 6 months. PEG-IFN-α-2a (180 μg) or PEG-IFN-α-2b (1.5 μg/kg) was administered once per week by subcutaneous injection. The fixed duration (6 months), irrespective of HCV genotype, was due to restrictions imposed by the reimbursement policy of the Bureau of National Health Insurance in Taiwan. RBV was prescribed orally at a dose of 800 mg/day for patients <55 kg, 1,000 mg/day for patients between 55 and 75 kg, and 1,200 mg/day for patients >75 kg. Adjustments to the doses of PEG-IFN and RBV and treatment with either erythropoietin or blood transfusion were determined according to published practice guidelines.3,20–22\n Clinical monitoring The primary outcome was time to develop HCC. 
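The weight-based RBV dosing rule in the regimen above maps weight bands to fixed daily doses. A sketch, treating the 55 kg and 75 kg boundaries as belonging to the middle band (an assumption; the text says <55, "between 55 and 75", and >75 kg):

```python
def rbv_daily_dose_mg(weight_kg):
    """Weight-based ribavirin dose per the regimen described above.

    Boundary handling at exactly 55 kg and 75 kg is an assumption,
    since the source leaves those edge cases implicit.
    """
    if weight_kg < 55:
        return 800
    if weight_kg <= 75:
        return 1000
    return 1200
```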
All patients were examined at baseline, at the end of treatment, and every 3–6 months posttreatment with a liver function test and abdominal sonography at the hospital’s gastrointestinal outpatient clinic. The serum aspartate aminotransferase (AST), ALT, total bilirubin, creatinine, hemoglobin, white blood cell count, and platelet count were measured at each time point. HCV RNA was quantified at baseline and 24 weeks posttreatment. The diagnosis of liver cirrhosis was made by liver biopsy or by clinical diagnostic criteria, including twice-documented ultrasonographic evidence of a coarse and nodular parenchyma, irregular surface, and dull margin with splenomegaly, ascites, hepatic encephalopathy, or varices.23 Liver biopsy, an optional procedure, was performed at baseline with patients’ consent and was obtained from 62 patients (59.6%) in this study. A diagnosis of fatty liver was based on the results of biopsy and/or abdominal ultrasound. Other clinical factors, including diabetes mellitus, chronic hepatitis B (CHB), and alcoholism, were also evaluated by chart review. The diagnosis of CHB was based on seropositive status for hepatitis B surface antigen for at least 6 months. HCC was diagnosed either by biopsy or by imaging in the setting of liver cirrhosis. The specific imaging pattern was defined by increased contrast uptake in the arterial phase followed by contrast washout in the venous/delayed phase as seen via computed tomography or magnetic resonance imaging.24,25 In this study, we calculated APRI using the formula ([AST/upper limit of normal]/platelet count [10^9/L]) ×100, and further used it as a noninvasive marker validated for the diagnosis of both significant fibrosis and cirrhosis. The cutoff value of APRI >2.0 defined high APRI and APRI <2.0 defined low APRI.3,12,13\n HCV quantification and genotyping Serum HCV RNA was quantified at baseline and 24 weeks posttreatment using real-time polymerase chain reaction with a detection limit of 15 IU/mL.26 The threshold for discriminating low- and high-baseline HCV RNA was 400,000 IU/mL.22 HCV genotyping was performed using melting curve analysis (Roche LightCycler; Biotronics Tech Corp., Lowell, MA, USA).27\n Sustained virologic response SVR was defined as undetectable HCV RNA for at least 24 weeks after the patient completed the combined treatment of PEG-IFN plus RBV. Patients who were positive for HCV RNA at week 24 posttreatment were considered non-SVR.\n Statistical analysis SPSS 19.0 for Windows (IBM Corporation, Armonk, NY, USA) was used for all statistical analyses. The chi-square test or Fisher’s exact test was used for nominal variables. Student’s t-test was used to compare continuous variables with normal distributions, and the Mann–Whitney U-test was used for continuous variables with nonnormal distributions. In the presence of competing risks, we calculated and compared cumulative incidences using a modified Kaplan–Meier method. We tested differences in the full time-to-event distributions between the study groups using a log-rank test. To determine independent risk factors for HCC occurrence, we carried out multivariate and stratified analyses using Cox proportional hazards regression models in the presence of a competing risk event, after adjusting for age, sex, high APRI, SVR, genotype, and cohepatitis (hepatitis B coinfection, alcoholism, or fatty liver). Results are shown as HRs with 95% CIs. Statistical significance was set at P<0.05.", "HCV patients with Child A cirrhosis without decompensation who underwent treatment at the Dalin Tzu Chi General Hospital with either pegylated interferon (PEG-IFN)-α-2a or PEG-IFN-α-2b plus ribavirin (RBV) between January 2005 and December 2011 were enrolled in this retrospective study. All the patients were positive for antihepatitis C antibody for more than 6 months, had an alanine aminotransferase (ALT) level higher than the upper limit of normal, and had detectable serum HCV RNA. 
We excluded patients with posttreatment follow-up <1 year, HCC history, incomplete medical records, autoimmune diseases, HIV infection, neutropenia (<1,500 neutrophils/mL), thrombocytopenia (<50,000 platelets/mL), anemia (<12 g of hemoglobin/dL in females and <13 g/dL in males), or poorly controlled psychiatric disease.\nOne hundred and twenty compensated cirrhosis patients who received PEG-IFN plus RBV were initially enrolled (Figure 1). Five patients had posttreatment follow-up time <1 year, seven did not have complete medical records, and three previously had HCC; all these patients were excluded from the study. The remaining 105 patients were finally included. The average follow-up time for each patient was 4.38 years (standard deviation [SD] 1.73 years; range 1.13–9.27 years). The study was approved by the Research Ethics Committee of Dalin Tzu Chi Hospital (B099010124). Written informed consent was obtained from each patient, and the Research Ethics Committee of Dalin Tzu Chi Hospital approved the written consent process.", "PEG-IFN-α-2a (Pegasys; Hoffman-La Roche Ltd., Basel, Switzerland) or PEG-IFN-α-2b (PegIntron; Schering-Plough Corp., Kenilworth, NJ, USA) plus RBV was prescribed to eligible patients for 6 months. PEG-IFN-α-2a (180 μg) or PEG-IFN-α-2b (1.5 μg/kg) was administered once per week by subcutaneous injection. The fixed duration (6 months), irrespective of HCV genotype, was due to restrictions imposed by the reimbursement policy of the Bureau of National Health Insurance in Taiwan. RBV was prescribed orally at a dose of 800 mg/day for patients <55 kg, 1,000 mg/day for patients between 55 and 75 kg, and 1,200 mg/day for patients >75 kg. Adjustments to the doses of PEG-IFN and RBV and treatment with either erythropoietin or blood transfusion were determined according to published practice guidelines.3,20–22", "The primary outcome was time to develop HCC. 
All patients were examined at baseline, at the end of treatment, and every 3–6 months posttreatment with a liver function test and abdominal sonography at the hospital’s gastrointestinal outpatient clinic. The serum aspartate aminotransferase (AST), ALT, total bilirubin, creatinine, hemoglobin, white blood cell count, and platelet count were measured at each time point. HCV RNA was quantified at baseline and 24 weeks posttreatment. The diagnosis of liver cirrhosis was made by liver biopsy or by clinical diagnostic criteria, including twice-documented ultrasonographic evidence of a coarse and nodular parenchyma, irregular surface, and dull margin with splenomegaly, ascites, hepatic encephalopathy, or varices.23 Liver biopsy, an optional procedure, was performed at baseline with patients’ consent and was obtained from 62 patients (59.6%) in this study. A diagnosis of fatty liver was based on the results of biopsy and/or abdominal ultrasound. Other clinical factors, including diabetes mellitus, chronic hepatitis B (CHB), and alcoholism, were also evaluated by chart review. The diagnosis of CHB was based on seropositive status for hepatitis B surface antigen for at least 6 months. HCC was diagnosed either by biopsy or by imaging in the setting of liver cirrhosis. The specific imaging pattern was defined by increased contrast uptake in the arterial phase followed by contrast washout in the venous/delayed phase as seen via computed tomography or magnetic resonance imaging.24,25 In this study, we calculated APRI using the formula ([AST/upper limit of normal]/platelet count [10^9/L]) ×100, and further used it as a noninvasive marker validated for the diagnosis of both significant fibrosis and cirrhosis. 
The cutoff value of APRI >2.0 defined high APRI and APRI <2.0 defined low APRI.3,12,13", "Serum HCV RNA was quantified at baseline and 24 weeks posttreatment using real-time polymerase chain reaction with a detection limit of 15 IU/mL.26 The threshold for discriminating low- and high-baseline HCV RNA was 400,000 IU/mL.22 HCV genotyping was performed using melting curve analysis (Roche LightCycler; Biotronics Tech Corp., Lowell, MA, USA).27", "SVR was defined as undetectable HCV RNA for at least 24 weeks after the patient completed the combined treatment of PEG-IFN plus RBV. Patients who were positive for HCV RNA at week 24 posttreatment were considered non-SVR.", "SPSS 19.0 for Windows (IBM Corporation, Armonk, NY, USA) was used for all statistical analyses. The chi-square test or Fisher’s exact test was used for nominal variables. Student’s t-test was used to compare continuous variables with normal distributions, and the Mann–Whitney U-test was used for continuous variables with nonnormal distributions. In the presence of competing risks, we calculated and compared cumulative incidences using a modified Kaplan–Meier method. We tested differences in the full time-to-event distributions between the study groups using a log-rank test. To determine independent risk factors for HCC occurrence, we carried out multivariate and stratified analyses using Cox proportional hazards regression models in the presence of a competing risk event, after adjusting for age, sex, high APRI, SVR, genotype, and cohepatitis (hepatitis B coinfection, alcoholism, or fatty liver). Results are shown as HRs with 95% CIs. Statistical significance was set at P<0.05.", " General characteristics of the patients A total of 105 patients with HCV-associated Child A cirrhosis without decompensation were enrolled, consisting of 57 patients (54.3%) with SVR and 48 patients (45.7%) without SVR by intention-to-treat analysis. 
Of the 105 patients, 66 (62.9%) were infected with HCV genotype 1, and the remaining 39 patients (37.1%) were infected with other HCV genotypes. There were 47 males and 58 females, and the mean patient age was 58.3±10.4 years. Thirteen patients (12.4%) did not complete the full treatment regimen (mean treatment duration, 61 days). One patient (1/13) stopped treatment due to major trauma with renal hemorrhage, and twelve patients (12/13, 92.3%) did not complete treatment due to side effects, among whom decompensation with jaundice (4/13) was the most common reason. An intention-to-treat analysis was performed in this study. No deaths or severe treatment-related complications occurred during the course of treatment. The average follow-up time for each patient was 4.38 years (SD 1.73 years; range 1.13–9.27 years) (Table 1). Fifteen (14.3%) patients developed HCC during the follow-up period; 13 in this group had a high baseline APRI (>2.0) and two had a low APRI (<2.0).\n Demographics and clinical features that predispose to HCC Table 1 shows the baseline characteristics and treatment outcomes of the patients with and without progression to HCC. We performed univariate analyses on the two groups and found significant differences in age, SVR, baseline APRI, platelet count, AST, and ALT. No significant differences in sex, diabetes, cohepatitis (alcoholism, HBV coinfection, or fatty liver), HCV genotype, HCV RNA load, alpha-fetoprotein, albumin, international normalized ratio of prothrombin time, or total bilirubin were found between the two groups.\n Multivariate stratified analysis Table 2 shows the results of the Cox regression analysis. After adjustments for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver), those without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) and those with a high APRI level (HR 5.548; 95% CI 1.191–25.86; P=0.029) had a significantly higher risk of HCC occurrence.\n Cumulative incidences of HCC occurrence The cumulative incidences of HCC among patients with and without SVR after treatment are shown in Figure 2. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%–35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%–51.1%) than in those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%–1.3%). The difference in 5-year cumulative incidence was 24.9%. The unadjusted number needed to treat was 4, suggesting that achieving SVR in four patients was associated with one fewer case of HCC within 5 years of treatment.\nFigure 3 shows the cumulative incidence among patients with high and low APRI values, based on a cutoff value of 2.0. The cumulative incidence of HCC was significantly higher (P=0.006) in patients with a high APRI value (3-year cumulative incidence 21.8%; 95% CI 8.2%–35.3%; 5-year cumulative incidence 30.5%; 95% CI 11.8%–49.3%) than in those with low APRI values (3- and 5-year cumulative incidence 4.2%; 95% CI 0%–1.0%). The difference in 5-year cumulative incidences was 26.3%.", "A total of 105 patients with HCV-associated Child A cirrhosis without decompensation were enrolled, consisting of 57 patients (54.3%) with SVR and 48 patients (45.7%) without SVR by intention-to-treat analysis. Of the 105 patients, 66 (62.9%) were infected with HCV genotype 1, and the remaining 39 patients (37.1%) were infected with other HCV genotypes. There were 47 males and 58 females, and the mean patient age was 58.3±10.4 years. Thirteen patients (12.4%) did not complete the full treatment regimen (mean treatment duration, 61 days). One patient (1/13) stopped treatment due to major trauma with renal hemorrhage, and twelve patients (12/13, 92.3%) did not complete treatment due to side effects, among whom decompensation with jaundice (4/13) was the most common reason. An intention-to-treat analysis was performed in this study. No deaths or severe treatment-related complications occurred during the course of treatment. The average follow-up time for each patient was 4.38 years (SD 1.73 years; range 1.13–9.27 years) (Table 1). 
Fifteen (14.3%) patients developed HCC during the follow-up period; 13 in this group had a high baseline APRI (>2.0) and two had a low APRI (<2.0).", "Table 1 shows the baseline characteristics and treatment outcomes of the patients with and without progression to HCC. We performed univariate analyses on the two groups and found significant differences in age, SVR, baseline APRI, platelet count, AST, and ALT. No significant differences in sex, diabetes, cohepatitis (alcoholism, HBV coinfection, or fatty liver), HCV genotype, HCV RNA load, alpha-fetoprotein, albumin, international normalized ratio of prothrombin time, or total bilirubin were found between the two groups.", "Table 2 shows the results of the Cox regression analysis. After adjustments for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver), those without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) and those with a high APRI level (HR 5.548; 95% CI 1.191–25.86; P=0.029) had a significantly higher risk of HCC occurrence.", "The cumulative incidences of HCC among patients with and without SVR after treatment are shown in Figure 2. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%–35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%–51.1%) than in those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%–1.3%). The difference in 5-year cumulative incidence was 24.9%. The unadjusted number needed to treat was 4, suggesting that achieving SVR in four patients was associated with one fewer case of HCC within 5 years of treatment.\nFigure 3 shows the cumulative incidence among patients with high and low APRI values, based on a cutoff value of 2.0. 
The cumulative incidence of HCC was significantly higher (P=0.006) in patients with a high APRI value (3-year cumulative incidence 21.8%; 95% CI 8.2%–35.3%; 5-year cumulative incidence 30.5%; 95% CI 11.8%–49.3%) than in those with low APRI values (3- and 5-year cumulative incidence 4.2%; 95% CI 0%–1.0%). The difference in 5-year cumulative incidences was 26.3%.", "HCV is a single-stranded RNA virus, and its genome is never integrated into the genome of hepatocytes. Although no known oncogenic properties have been reported with respect to HCV, the multiple functions of HCV proteins and their effects on the modulation of intracellular signaling transduction processes may facilitate carcinogenesis via interactions of viral proteins with host cell proteins.28 HCV core proteins promote cellular transformation, and HCV-related chronic inflammation promotes mutations during hepatocyte regeneration, thereby contributing to HCC development.29,30 Hepatocyte regeneration and cycling that occur after antiviral therapy may activate cellular pathways involved in the development of dysplasia, increasing the risk of hepatocarcinogenesis.31 Interferon-based therapy was found to reduce the risk of HCC development, particularly in virologic or biochemical responders.32–34 Despite the larger magnitude of risk reduction in patients with SVR, a definite risk of HCC remains in SVR patients.11,35 The risk factors for HCV-related HCC include older age, male sex, coinfection with HIV or HBV, obesity, hepatic fibrosis, alcohol abuse, and sex hormone dysregulation.4,10,11\nA meta-analysis summarized the evidence from 30 observational studies that examined the risk of HCC among HCV-infected patients and found that SVR was associated with a reduction in the relative risk of HCC for patients at all stages of liver disease (HR 0.24; 95% CI 0.18–0.31; P<0.001) and a higher absolute benefit (HCC prevention) in patients with advanced liver disease 
who achieved SVR.9 Similarly, in our study, HCC development in cirrhotic HCV patients was significantly lower in the SVR group: patients without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) had a significantly higher risk of HCC occurrence. A large cohort study conducted in Taiwan showed similar results for HCC risk reduction among cirrhotic HCV patients.10
Notably, our study found that a high APRI (>2) is an independent risk factor for HCC development (HR 5.548; 95% CI 1.191–25.86; P=0.029). APRI is a noninvasive measurement for diagnosing hepatic fibrosis and cirrhosis and was recommended by the World Health Organization (WHO) guidelines for HCV infection published in 2014.3 Based on the WHO recommendation, patients with values above the high APRI cutoff (>2) should be treated with higher priority because they have a high probability of developing F4 cirrhosis.3 A recent study showed that a higher APRI significantly increased the risk of HCC in cirrhotic HBV-infected patients, but this finding was attenuated after multivariate adjustment.16 More recently, in a cohort of 642 Taiwanese patients with SVR (13% cirrhotic), HCC was strongly associated with cirrhosis (HR 4.98; 95% CI 2.32–10.71; P<0.001) and less strongly with age (HR 1.06; 95% CI 1.02–1.11; P=0.005) and gamma-glutamyl transpeptidase (γGT) (HR 1.01; 95% CI 1.00–1.013; P<0.001).36 This study supports the importance of cirrhosis as a predictor of HCC in CHC patients and the potential role of APRI. To our knowledge, few studies have evaluated the possible relationship between APRI and HCC development in HCV-infected patients after SVR.15,17,18 Our study included a purely cirrhotic cohort and showed that a high APRI (>2.0) is significantly related to HCC development even after multivariate adjustment.
The data from this recent cohort support the recommendation that higher treatment priority be given to patients who present with a high APRI.3
Although the high proportion of genotype 1 infection (62%), advanced liver disease, and the short treatment duration (6 months) resulted in a lower SVR rate (54.3%), a recent study showed that achieving SVR can slow the rapid progression to HCC among cirrhotic patients.9 In the era of new, interferon-sparing antiviral strategies, the SVR rate for similar patients exceeds 90%. The results of our study support the significant benefit of achieving SVR in lowering the progression to HCC, and the APRI score could help identify patients at higher risk of HCC development. Direct-acting antiviral treatment should be provided to cirrhotic patients even though it is expensive.
Our study found no difference between the 3- and 5-year cumulative incidences of HCC in patients with SVR or with a low APRI, as shown in Figures 2 and 3. This stabilization of cumulative incidence may indicate that HCC development is concentrated in the first few years after SVR achievement in low-APRI patients. The interaction between SVR and APRI appears to be complicated. A higher fibrosis stage is associated with a lower SVR rate, and the fibrosis stage typically improves after SVR is achieved. This may be due to high SVR durability with a low relapse rate, which has been confirmed in HCV-infected liver transplant recipients, hemophiliacs, cirrhotics, those coinfected with HIV, and children. A durable SVR may improve the fibrosis stage and decrease the incidence of HCC 3 years later. Hence, HCV cirrhotic patients (especially those with a high APRI) should be treated as early as possible because of the potential benefit in HCC prevention.
There are some limitations in our study. First, liver biopsy is considered the gold standard for cirrhosis diagnosis.
Transient elastography could offer a noninvasive alternative to biopsy for staging fibrosis, given its comparable diagnostic accuracy; because of the earlier study period and facility limitations, these data were not available. In our study, 59.6% (62/105) of patients underwent biopsy, and the remaining patients with cirrhosis were diagnosed using clinical criteria. However, combining the diagnostic criteria with twice-documented ultrasonographic evidence of liver cirrhosis and solid clinical end points (splenomegaly, ascites, hepatic encephalopathy, or varices) should complementarily increase the accuracy of cirrhosis diagnosis in the current study.18 Second, this is a retrospective, single-center, observational study, which could lead to selection bias because patients with severe cirrhosis were probably not considered for treatment and therefore were not included in our study.
Conclusion: In cirrhotic HCV-infected patients, SVR is a major predictor of HCC development, whereas APRI may be a potent predictor of HCC risk among these patients. Further studies are warranted to validate our findings and their applicability in clinical practice.
Keywords: aspartate aminotransferase to platelet ratio index; chronic hepatitis C; hepatitis C virus; hepatocellular carcinoma; liver cirrhosis; sustained virologic response
Introduction: Hepatocellular carcinoma (HCC) is one of the most deadly cancers and is the third leading cause of cancer-related death among males and the sixth among females worldwide.1 According to a recent national analysis in Taiwan, HCC has remained the second leading cause of cancer-related death among both males and females during the past 10 years.2 Chronic hepatitis is one of the most important causes of chronic liver disease, including cirrhosis, which can lead to subsequent decompensation and the development of HCC.3 The estimated risk of HCC is 15–20 times higher among persons infected with hepatitis C virus (HCV) than among those without infection, and the greatest excess risk occurs in those with advanced hepatic fibrosis or cirrhosis.4,5 In chronic HCV antiviral therapy, a sustained virologic response (SVR) has been a clinically meaningful end point: viral clearance contributes to reduced inflammation and histologically identified fibrosis, fewer hepatic complications, lower liver-related mortality, and a reduced incidence of HCC.6–8 Even in patients with advanced liver disease, SVR has been found to be associated with HCC risk reduction.9 Risk factors for HCV-related HCC include older age, male sex, coinfection with human immunodeficiency virus (HIV) or hepatitis B virus (HBV), obesity, hepatic fibrosis, alcohol abuse, and sex hormone dysregulation.4,10,11 The liver fibrosis stage provides important prognostic information about the development of HCC.
The aspartate aminotransferase to platelet ratio index (APRI) is a noninvasive marker that has been validated for the diagnosis of both significant fibrosis and cirrhosis.12,13 APRI is a useful prognostic marker in chronic hepatitis C (CHC) patients.14 Some recent studies report that the APRI score could be a predictor of HCC in chronic hepatitis patients.15,16 APRI could be a useful marker to classify HCC risk in CHC patients who achieved SVR and to predict HCC recurrence after radiofrequency ablation.17–19 However, the prognostic value of APRI for predicting the occurrence of HCC in cirrhotic patients is uncertain. Hence, the aim of this study was to evaluate clinically significant predictors of HCC development among HCV cirrhotic patients. These factors may affect physicians' clinical decisions.
Patients and methods: Selection of patients: HCV patients with Child A cirrhosis without decompensation who underwent treatment at the Dalin Tzu Chi General Hospital with either pegylated interferon (PEG-IFN)-α-2a or PEG-IFN-α-2b plus ribavirin (RBV) between January 2005 and December 2011 were enrolled in this retrospective study. All the patients were positive for antihepatitis C antibody for more than 6 months, had an alanine aminotransferase (ALT) level higher than the upper limit of normal, and had detectable serum HCV RNA. We excluded patients with posttreatment follow-up <1 year, HCC history, incomplete medical records, autoimmune diseases, HIV infection, neutropenia (<1,500 neutrophils/mL), thrombocytopenia (<50,000 platelets/mL), anemia (<12 g of hemoglobin/dL in females and <13 g/dL in males), or poorly controlled psychiatric disease. One hundred and twenty compensated cirrhosis patients who received PEG-IFN plus RBV were initially enrolled (Figure 1). Five patients had posttreatment follow-up <1 year, seven had incomplete medical records, and three had a history of HCC; all were excluded from the study.
The remaining 105 patients were finally included. The average follow-up time for each patient was 4.38 years (standard deviation [SD] 1.73 years; range 1.13–9.27 years). The study was approved by the Research Ethics Committee of Dalin Tzu Chi Hospital (B099010124). Written informed consent was obtained from each patient, and the Research Ethics Committee of Dalin Tzu Chi Hospital approved the written consent process.
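For illustration only, the laboratory exclusion thresholds listed above can be expressed as a simple screening predicate. This is a hypothetical sketch, not the study's actual screening procedure; the function and parameter names are invented, and only the numeric cutoffs come from the Methods.

```python
def meets_lab_criteria(neutrophils_per_ml, platelets_per_ml, hemoglobin_g_dl, sex):
    """Return True if a patient passes the study's laboratory exclusion
    thresholds (illustrative sketch; cutoffs taken from the Methods)."""
    if neutrophils_per_ml < 1500:        # neutropenia
        return False
    if platelets_per_ml < 50000:         # thrombocytopenia
        return False
    hb_floor = 12.0 if sex == "F" else 13.0  # anemia cutoff differs by sex
    if hemoglobin_g_dl < hb_floor:
        return False
    return True

print(meets_lab_criteria(2000, 120000, 13.5, "F"))  # True
print(meets_lab_criteria(2000, 40000, 13.5, "M"))   # False (thrombocytopenia)
```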
Regimen of PEG-IFN plus RBV: PEG-IFN-α-2a (Pegasys; Hoffmann-La Roche Ltd., Basel, Switzerland) or PEG-IFN-α-2b (PegIntron; Schering-Plough Corp., Kenilworth, NJ, USA) plus RBV was prescribed to eligible patients for 6 months. PEG-IFN-α-2a (180 μg) or PEG-IFN-α-2b (1.5 μg/kg) was administered once per week by subcutaneous injection. The fixed 6-month duration, irrespective of HCV genotype, was due to restrictions imposed by the reimbursement policy of the Bureau of National Health Insurance in Taiwan. RBV was prescribed orally at a dose of 800 mg/day for patients <55 kg, 1,000 mg/day for patients between 55 and 75 kg, and 1,200 mg/day for patients >75 kg. Adjustments to the doses of PEG-IFN and RBV and treatment with either erythropoietin or blood transfusion were determined according to published practice guidelines.3,20–22
Clinical monitoring: The primary outcome was time to development of HCC. All patients were examined at baseline, at the end of treatment, and every 3–6 months posttreatment with a liver function test and abdominal sonography at the hospital's gastrointestinal outpatient clinic.
The serum aspartate aminotransferase (AST), ALT, total bilirubin, creatinine, hemoglobin, white blood cell count, and platelet count were measured at each time point. HCV RNA was quantified at baseline and 24 weeks posttreatment. The diagnosis of liver cirrhosis was made by liver biopsy or by clinical diagnostic criteria, including twice-documented ultrasonographic evidence of a coarse and nodular parenchyma, irregular surface, and dull margin, with splenomegaly, ascites, hepatic encephalopathy, or varices.23 Liver biopsy was performed as an optional procedure at baseline with patient consent and was obtained from 62 patients (59.6%) in this study. A diagnosis of fatty liver was based on the results of biopsy and/or abdominal ultrasound. Other clinical factors, including diabetes mellitus, chronic hepatitis B (CHB), and alcoholism, were also evaluated by chart review. The diagnosis of CHB was based on seropositive status for hepatitis B surface antigen for at least 6 months. HCC was diagnosed either by biopsy or by imaging in the setting of liver cirrhosis; the specific imaging pattern was defined as increased contrast uptake in the arterial phase followed by contrast washout in the venous/delayed phase on computed tomography or magnetic resonance imaging.24,25 In this study, we calculated APRI using the formula ([AST/upper limit of normal]/platelet count [10^9/L]) × 100 and used it as a noninvasive marker validated for the diagnosis of both significant fibrosis and cirrhosis. APRI >2.0 defined high APRI and APRI <2.0 defined low APRI.3,12,13
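The APRI formula and the 2.0 cutoff above can be written directly in code. A minimal sketch; the example AST and platelet values are hypothetical, not study data.

```python
def apri(ast_iu_l, ast_upper_limit_normal, platelets_10e9_l):
    """APRI = ((AST / upper limit of normal) / platelet count [10^9/L]) x 100."""
    return (ast_iu_l / ast_upper_limit_normal) / platelets_10e9_l * 100

def apri_category(score, cutoff=2.0):
    """Classify a score as 'high' (>cutoff) or 'low', per the study's 2.0 cutoff."""
    return "high" if score > cutoff else "low"

# Hypothetical patient: AST 120 IU/L, AST ULN 40 IU/L, platelets 90 x 10^9/L
score = apri(120, 40, 90)  # (120/40) / 90 * 100 = 3.33...
print(round(score, 2), apri_category(score))  # 3.33 high
```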
HCV quantification and genotyping: Serum HCV RNA was quantified at baseline and 24 weeks posttreatment using real-time polymerase chain reaction with a detection limit of 15 IU/mL.26 The threshold for discriminating low- and high-baseline HCV RNA was 400,000 IU/mL.22 HCV genotyping was performed using melting curve analysis (Roche LightCycler; Biotronics Tech Corp., Lowell, MA, USA).27
Sustained virologic response: SVR was defined as undetectable HCV RNA for at least 24 weeks after the patient completed the combined treatment of PEG-IFN plus RBV. Patients who were positive for HCV RNA at week 24 posttreatment were considered non-SVR.
Statistical analysis: SPSS 19.0 for Windows (IBM Corporation, Armonk, NY, USA) was used for all statistical analyses. The chi-square test or Fisher's exact test was used for nominal variables. Student's t-test was used to compare continuous variables with normal distributions, and the Mann–Whitney U-test was used for continuous variables with nonnormal distributions. For competing-risk data, we calculated and compared cumulative incidences using a modified Kaplan–Meier method. We tested differences in the full time-to-event distributions between the study groups using a log-rank test.
To determine the independent risk factors for HCC occurrence, we carried out multivariate and stratified analyses using Cox proportional hazards regression models in the presence of a competing risk event, after adjusting for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver). Results are shown as HRs with 95% CIs. Significance in all analyses was set at P<0.05.
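As an illustration of the cumulative incidence estimation described above: the authors used SPSS with a modified Kaplan–Meier method that accounts for competing risks, which this sketch omits. A plain Kaplan–Meier cumulative incidence is simply 1 minus the product-limit survival estimate, and the toy data below are invented.

```python
def km_cumulative_incidence(times, events):
    """Kaplan-Meier estimate of cumulative incidence (1 - survival).

    times  -- follow-up time for each patient (any consistent unit)
    events -- 1 if the event (here, HCC) occurred, 0 if censored
    Returns (time, cumulative_incidence) pairs at each event time.
    Simple sketch without the competing-risk adjustment used in the study.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = c = 0
        # count events (d) and censorings (c) tied at time t
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            survival *= 1 - d / n_at_risk
            curve.append((t, 1 - survival))
        n_at_risk -= d + c
    return curve

# Toy data: 6 patients, HCC at years 1 and 3, the rest censored
print(km_cumulative_incidence([1, 2, 3, 4, 5, 5], [1, 0, 1, 0, 0, 0]))
```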
Results: General characteristics of the patients: A total of 105 patients with HCV-associated Child A cirrhosis without decompensation were enrolled: 57 patients (54.3%) with SVR and 48 (45.7%) without SVR by intention-to-treat analysis. Of the 105 patients, 66 (62.9%) were infected with HCV genotype 1 and the remaining 39 (37.1%) with other HCV genotypes. There were 47 males and 58 females, and the mean patient age was 58.3±10.4 years. Thirteen patients (12.4%) did not complete the full treatment regimen (mean treatment duration, 61 days). One patient (1/13) stopped treatment because of major trauma with renal hemorrhage, and twelve (12/13, 92.3%) did not complete treatment because of treatment side effects, of which decompensation with jaundice (4/13) was the most common. An intention-to-treat analysis was performed in this study. No deaths or severe treatment-related complications occurred during the course of treatment. The average follow-up time for each patient was 4.38 years (SD 1.73 years; range 1.13–9.27 years) (Table 1). Fifteen patients (14.3%) developed HCC during the follow-up period; 13 of them had a high baseline APRI (>2.0) and two had a low APRI (<2.0).
Demographics and clinical features that predispose to HCC: Table 1 shows the baseline characteristics and treatment outcomes of the patients with and without progression to HCC. Univariate analyses of the two groups found significant differences in age, SVR, baseline APRI, platelet count, AST, and ALT. No significant differences in sex, diabetes, cohepatitis (alcoholism, HBV coinfection, or fatty liver), HCV genotype, HCV RNA load, alpha-fetoprotein, albumin, international normalized ratio of prothrombin time, or total bilirubin were found between the two groups.
Multivariate stratified analysis: Table 2 shows the results of the Cox regression analysis.
After adjustment for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver), patients without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) and those with a high APRI (HR 5.548; 95% CI 1.191–25.86; P=0.029) had a significantly higher risk of HCC occurrence.
Cumulative incidences of HCC occurrence: The cumulative incidences of HCC among patients with and without SVR after treatment are shown in Figure 2. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%–35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%–51.1%) than in those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%–1.3%). The difference in 5-year cumulative incidence was 24.9%. The unadjusted number needed to treat associated with one fewer HCC within 5 years was 4; that is, for every four patients who achieved SVR, approximately one case of HCC was avoided within 5 years of treatment. Figure 3 shows the cumulative incidence among patients with high and low APRI values, based on a cutoff value of 2.0. The cumulative incidence of HCC was significantly higher (P=0.006) in patients with a high APRI (3-year cumulative incidence 21.8%; 95% CI 8.2%–35.3%; 5-year cumulative incidence 30.5%; 95% CI 11.8%–49.3%) than in those with a low APRI (3- and 5-year cumulative incidence 4.2%; 95% CI 0%–1.0%). The difference in 5-year cumulative incidences was 26.3%.
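The number-needed-to-treat figure follows directly from the reported 5-year cumulative incidences (NNT = 1 / absolute risk difference). A quick arithmetic check using the values in the text:

```python
# NNT from the reported 5-year cumulative incidences of HCC.
incidence_no_svr = 0.311  # 5-year cumulative incidence without SVR (31.1%)
incidence_svr = 0.062     # 5-year cumulative incidence with SVR (6.2%)

ard = incidence_no_svr - incidence_svr  # absolute risk difference = 0.249
nnt = 1 / ard                           # ~4 SVRs per HCC avoided within 5 years
print(round(ard, 3), round(nnt))  # 0.249 4
```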
An intention-to-treat analysis was performed in this study. No deaths or severe treatment-related complications were found during the course of treatment. The average follow-up time for each patient was 4.38 years (SD 1.73 years; range 1.13–9.27 years) (Table 1). Fifteen (14.3%) patients developed HCC during the follow-up period; 13 in this group had a high baseline APRI (>2.0) and two had a low APRI (<2.0). Demographics and clinical features that predispose to HCC: Table 1 shows the baseline characteristics and treatment outcomes of the patients with and without progression to HCC. We performed univariate analyses on the two groups and found significant differences in age, SVR, baseline APRI, platelet count, AST, and ALT. No significant differences in sex, diabetes, cohepatitis (alcoholism, HBV coinfection, or fatty liver), HCV genotype, HCV RNA load, alpha-fetoprotein, albumin, international normalized ratio of prothrombin time, or total bilirubin were found between the two groups. Multivariate stratified analysis: Table 2 shows the results of the Cox regression analysis. After adjustments for age, sex, high APRI, SVR, genotype, and cohepatitis (including hepatitis B coinfection, alcoholism, or fatty liver), those without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) and those with a high APRI level (HR 5.548; 95% CI 1.191–25.86; P=0.029) were associated with a significantly higher risk of HCC occurrence. Cumulative incidences of HCC occurrence: The cumulative incidences of HCC among patients with and without SVR after treatment are shown in Figure 2. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%–35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%–51.1%) compared to those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%–1.3%). The difference in 5-year cumulative incidence was 24.9%. 
The unadjusted number needed to treat to prevent one HCC within 5 years was 4, suggesting that for every four patients who achieved SVR, one fewer case of HCC occurred within 5 years of treatment. Figure 3 shows the cumulative incidence among patients with high and low APRI values, based on a cutoff value of 2.0. The cumulative incidence of HCC was significantly higher (P=0.006) in patients with a high APRI value (3-year cumulative incidence 21.8%; 95% CI 8.2%–35.3%; 5-year cumulative incidence 30.5%; 95% CI 11.8%–49.3%) compared to those with low APRI values (3- and 5-year cumulative incidence 4.2%; 95% CI 0%–1.0%). The difference in 5-year cumulative incidences was 26.3%. Discussion: HCV is a single-stranded RNA virus, and its genome is never integrated into the genome of hepatocytes. Although no known oncogenic properties have been reported with respect to HCV, the multiple functions of HCV proteins and their effects on the modulation of intracellular signal transduction processes may facilitate carcinogenesis via interactions of viral proteins with host cell proteins.28 HCV core proteins promote cellular transformation, and HCV-related chronic inflammation has been implicated in promoting mutations during hepatocyte regeneration, thereby contributing to HCC development.29,30 Hepatocyte regeneration and cycling that occur after antiviral therapy may activate cellular pathways involved in the development of dysplasia, increasing the risk of hepatocarcinogenesis.31 Interferon-based therapy was found to reduce the risk of HCC development, particularly in virologic or biochemical responders.32–34 Despite the larger magnitude of risk reduction in patients with SVR, there remains a definite risk of HCC in SVR patients.11,35 The risk factors for HCV-related HCC include older age, male sex, coinfection with HIV or HBV, obesity, hepatic fibrosis, alcohol abuse, and sex hormone dysregulation.4,10,11 A meta-analysis summarized the evidence from 30
observational studies that examined the risk of HCC among HCV-infected patients and found that SVR was associated with a reduction in the relative risk of HCC for patients at all stages of liver disease (HR 0.24; 95% CI 0.18–0.31; P<0.001) and a higher absolute benefit (HCC prevention) in patients with advanced liver disease who achieved SVR.9 Similarly, in our study, HCC development in cirrhotic HCV patients was significantly lower in the SVR group. Patients without SVR (HR 5.795; 95% CI 1.370–24.5; P=0.017) had a significantly higher risk of HCC occurrence. A large cohort study conducted in Taiwan showed similar results in HCC risk reduction among cirrhotic HCV patients.10 Notably, our study found that a high APRI (>2) is an independent risk factor for HCC development (HR 5.548; 95% CI 1.191–25.86; P=0.029). APRI is a noninvasive measurement for diagnosing hepatic fibrosis and cirrhosis and was recommended by the World Health Organization (WHO) guidelines for HCV infection published in 2014.3 Based on the WHO recommendation, patients with values above the high APRI cutoff value (>2) should be treated with higher priority because they have a high probability of developing F4 cirrhosis.3 A recent study showed that a higher APRI significantly increased the risk of HCC in cirrhotic HBV-infected patients, but this finding was attenuated after multivariate adjustment.16 More recently, in a cohort of 642 Taiwanese patients with SVR (13% cirrhotic), HCC was strongly associated with cirrhosis (HR 4.98; 95% CI 2.32–10.71; P<0.001) and less strongly with age (HR 1.06; 95% CI 1.02–1.11; P=0.005) and gamma-glutamyl transpeptidase (γGT) (HR 1.01; 95% CI 1.00–1.013; P<0.001).36 This study supports the importance of cirrhosis as a predictor of HCC in CHC patients and the potential role of APRI. 
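For reference, the APRI discussed throughout is a simple ratio of routine laboratory values. A minimal sketch of the standard formula; the AST upper limit of normal of 40 IU/L in the example is an assumption for illustration, since this excerpt does not state the ULN the study used:

```python
def apri(ast_iu_l: float, ast_uln_iu_l: float, platelets_1e9_l: float) -> float:
    """Aspartate aminotransferase to platelet ratio index:
    APRI = (AST / AST upper limit of normal) * 100 / platelet count (10^9/L)."""
    return (ast_iu_l / ast_uln_iu_l) * 100 / platelets_1e9_l

# Example: AST 120 IU/L (ULN assumed to be 40 IU/L), platelets 50 x 10^9/L
score = apri(120, 40, 50)   # (120/40) * 100 / 50 = 6.0
high_apri = score > 2.0     # cutoff used in this study and in the 2014 WHO guidance
```

A value above 2.0 places a patient in the high-APRI group that the Cox model flags as high risk.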
As far as we know, few studies have evaluated the possible relationship between APRI and HCC development in HCV-infected patients after SVR.15,17,18 Our study included a purely cirrhotic cohort and showed that a high APRI (>2.0) is significantly related to HCC development even after multivariate adjustment. These data support the recommendation that higher treatment priority should be given to patients who present with a high APRI.3 Although the high proportion of genotype 1 infection (62%) and advanced liver disease, together with the short treatment duration (6 months), resulted in a lower SVR rate (54.3%), a recent study showed that achieving SVR could slow the progression to HCC among cirrhotic patients.9 In the era of new, interferon-sparing antiviral strategies, the SVR rate for similar patients is more than 90%. The results of our study support the significant benefit of SVR achievement in lowering the progression of HCC, and the APRI score could help identify patients at higher risk of HCC development. Direct-acting antiviral treatment should therefore be provided to cirrhotic patients even though it is expensive. Our study found no difference between the 3- and 5-year cumulative incidences of HCC in patients with SVR or with a low APRI, as shown in Figures 2 and 3. This stabilization of cumulative incidence may indicate that, in low-APRI patients, HCC development is concentrated in the first few years after SVR achievement. The interaction between SVR and APRI appears to be complicated. A higher fibrosis stage is associated with a lower SVR rate, and the fibrosis stage typically improves after SVR is achieved. This may reflect the high durability of SVR, with a low relapse rate, which has been confirmed in HCV-infected liver transplant recipients, hemophiliacs, cirrhotics, those coinfected with HIV, and children. A durable SVR may improve the fibrosis stage and decrease the incidence of HCC 3 years later. 
Hence, HCV cirrhotic patients (especially those with a high APRI) should be treated as early as possible because of the potential beneficial effects on HCC prevention. Our study has some limitations. First, liver biopsy is considered the gold standard for cirrhosis diagnosis. Transient elastography could offer a noninvasive alternative to biopsy for staging fibrosis, given its comparable diagnostic accuracy; however, owing to the earlier study period and facility limitations, we could not provide these data. In our study, 59.6% (62/105) of patients underwent biopsy, and the remaining patients were diagnosed with cirrhosis using clinical criteria. However, combining these diagnostic criteria with twice-documented ultrasonographic evidence of liver cirrhosis and solid clinical end points (splenomegaly, ascites, hepatic encephalopathy, or varices) could increase the accuracy of cirrhosis diagnosis in the current study.18 Second, this is a retrospective, single-center, observational study, which could lead to selection bias because patients with severe cirrhosis were probably not considered for treatment and therefore were not included in our study. Conclusion: In cirrhotic HCV-infected patients, SVR is a major predictor of HCC development, whereas APRI may be a potent predictor of HCC risk among these patients. Further studies are warranted to validate our findings and their applicability in clinical practice.
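The cumulative incidences reported in the Results are Kaplan-Meier-type estimates, and the unadjusted number needed to treat is the reciprocal of the 5-year risk difference (1/0.249 is approximately 4). A minimal, self-contained sketch of both calculations on toy data (the follow-up times and event indicators below are illustrative, not the study cohort):

```python
def km_event_probability(times, events, horizon):
    """Kaplan-Meier cumulative event probability by `horizon`.
    times: follow-up in years; events: 1 = HCC occurred, 0 = censored."""
    survival = 1.0
    event_times = sorted({t for t, e in zip(times, events) if e == 1 and t <= horizon})
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)                     # risk set at t
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        survival *= 1 - d / at_risk                                     # KM product term
    return 1 - survival

# Illustrative toy data (NOT the study's data)
no_svr_t, no_svr_e = [1.0, 2.0, 3.0, 4.0, 5.0], [1, 1, 0, 1, 0]
svr_t, svr_e = [2.0, 3.0, 4.0, 5.0, 6.0], [0, 0, 1, 0, 0]

risk_no_svr = km_event_probability(no_svr_t, no_svr_e, 5)  # 0.7 on this toy data
risk_svr = km_event_probability(svr_t, svr_e, 5)           # about 0.333
nnt = 1 / (risk_no_svr - risk_svr)  # the study computes 1 / (0.311 - 0.062), about 4
```

Rounding conventions for NNT vary; the study reports the rounded value 4.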
Background: The aim of this study was to evaluate the clinically significant predictors of hepatocellular carcinoma (HCC) development among hepatitis C virus (HCV) cirrhotic patients receiving combination therapy. Methods: One hundred and five compensated cirrhosis patients who received pegylated interferon plus ribavirin between January 2005 and December 2011 were enrolled. All the patients were examined with abdominal sonography and liver biochemistry at baseline, end of treatment, and every 3-6 months posttreatment. The occurrence of HCC was evaluated every 3-6 months posttreatment. Results: A total of 105 patients were enrolled (mean age 58.3±10.4 years). The average follow-up time for each patient was 4.38 years (standard deviation 1.73 years; range 1.13-9.27 years). Fifteen (14.3%) patients developed HCC during follow-up period. Thirteen of them had high baseline aspartate aminotransferase to platelet ratio index (APRI) (ie, an APRI >2.0). Multivariate analysis showed that those without sustained virologic response (SVR) (hazard ratio [HR] 5.795; 95% confidence interval [CI] 1.370-24.5; P=0.017) and high APRI (HR 5.548; 95% CI 1.191-25.86; P=0.029) had a significantly higher risk of HCC occurrence. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%-35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%-51.1%) compared to those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%-1.3%). Further, the cumulative incidence of HCC was significantly higher (P=0.006) in patients with high APRI (3-year cumulative incidence 21.8%; 95% CI 8.2%-35.3%; 5-year cumulative incidence 30.5%, 95% CI 11.8%-49.3%) compared to those with low APRI (3- and 5-year cumulative incidence 4.2%, 95% CI 0%-1.0%). Conclusions: In HCV-infected cirrhotic patients who received combination therapy, APRI and SVR are the two major predictors of HCC development.
Introduction: Hepatocellular carcinoma (HCC) is one of the most deadly cancers and is the third leading cause of cancer-related death among males and the sixth among females worldwide.1 According to a recent national analysis in Taiwan, HCC has remained the second leading cause of cancer-related death among both males and females over the past 10 years.2 Chronic hepatitis is one of the most important causes of chronic liver diseases, including cirrhosis, which can lead to subsequent decompensation and the development of HCC.3 The estimated risk of HCC is 15–20 times higher among persons infected with hepatitis C virus (HCV) compared to those without infection, and the greatest excess risk occurs in those with advanced hepatic fibrosis or cirrhosis.4,5 In chronic HCV antiviral therapy, a sustained virologic response (SVR) has been a clinically meaningful end point at which viral clearance contributes to reduced inflammation and histologically identified fibrosis, fewer hepatic complications, lower liver-related mortality, and a reduced incidence of HCC.6–8 Even in patients with advanced liver disease, SVR has been found to be associated with HCC risk reduction.9 Risk factors for HCV-related HCC include older age, male sex, coinfection with human immunodeficiency virus (HIV) or hepatitis B virus (HBV), obesity, hepatic fibrosis, alcohol abuse, and sex hormone dysregulation.4,10,11 The liver fibrosis stage provides important prognostic information about the development of HCC. 
The aspartate aminotransferase to platelet ratio index (APRI) is a noninvasive marker that has been validated for the diagnosis of both significant fibrosis and cirrhosis.12,13 APRI is a useful marker for the prognosis in chronic hepatitis C (CHC) patients.14 Some recent studies report that the APRI score could be a predictor of HCC in chronic hepatitis patients.15,16 APRI could be a useful marker to classify HCC risk in CHC patients who achieved SVR and predict HCC recurrence after radiofrequency ablation.17–19 However, the prognostic value of APRI in cirrhotic patients for predicting the occurrence of HCC is uncertain. Hence, the aim of this study was to evaluate the clinically significant predictors of HCC development among HCV cirrhotic patients. These factors may affect physicians’ clinical decisions. Conclusion: In cirrhotic HCV-infected patients, SVR is a major predictor of HCC development, whereas APRI may be a potent predictor of HCC risk among these patients. Further studies are warranted to validate our findings and their applicability in clinical practice.
Background: The aim of this study was to evaluate the clinically significant predictors of hepatocellular carcinoma (HCC) development among hepatitis C virus (HCV) cirrhotic patients receiving combination therapy. Methods: One hundred and five compensated cirrhosis patients who received pegylated interferon plus ribavirin between January 2005 and December 2011 were enrolled. All the patients were examined with abdominal sonography and liver biochemistry at baseline, end of treatment, and every 3-6 months posttreatment. The occurrence of HCC was evaluated every 3-6 months posttreatment. Results: A total of 105 patients were enrolled (mean age 58.3±10.4 years). The average follow-up time for each patient was 4.38 years (standard deviation 1.73 years; range 1.13-9.27 years). Fifteen (14.3%) patients developed HCC during follow-up period. Thirteen of them had high baseline aspartate aminotransferase to platelet ratio index (APRI) (ie, an APRI >2.0). Multivariate analysis showed that those without sustained virologic response (SVR) (hazard ratio [HR] 5.795; 95% confidence interval [CI] 1.370-24.5; P=0.017) and high APRI (HR 5.548; 95% CI 1.191-25.86; P=0.029) had a significantly higher risk of HCC occurrence. The cumulative incidence of HCC was significantly higher (P=0.009) in patients without SVR (3-year cumulative incidence 21.4%; 95% CI 7.4%-35.5%; 5-year cumulative incidence 31.1%; 95% CI 11.2%-51.1%) compared to those with SVR (3- and 5-year cumulative incidence 6.2%; 95% CI 0%-1.3%). Further, the cumulative incidence of HCC was significantly higher (P=0.006) in patients with high APRI (3-year cumulative incidence 21.8%; 95% CI 8.2%-35.3%; 5-year cumulative incidence 30.5%, 95% CI 11.8%-49.3%) compared to those with low APRI (3- and 5-year cumulative incidence 4.2%, 95% CI 0%-1.0%). Conclusions: In HCV-infected cirrhotic patients who received combination therapy, APRI and SVR are the two major predictors of HCC development.
7,061
404
15
[ "patients", "hcc", "hcv", "apri", "svr", "treatment", "cumulative", "liver", "study", "95" ]
[ "test", "test" ]
[CONTENT] aspartate aminotransferase to platelet ratio index | chronic hepatitis C | hepatitis C virus | hepatocellular carcinoma | liver cirrhosis | sustained virologic response [SUMMARY]
[CONTENT] aspartate aminotransferase to platelet ratio index | chronic hepatitis C | hepatitis C virus | hepatocellular carcinoma | liver cirrhosis | sustained virologic response [SUMMARY]
[CONTENT] aspartate aminotransferase to platelet ratio index | chronic hepatitis C | hepatitis C virus | hepatocellular carcinoma | liver cirrhosis | sustained virologic response [SUMMARY]
[CONTENT] aspartate aminotransferase to platelet ratio index | chronic hepatitis C | hepatitis C virus | hepatocellular carcinoma | liver cirrhosis | sustained virologic response [SUMMARY]
[CONTENT] aspartate aminotransferase to platelet ratio index | chronic hepatitis C | hepatitis C virus | hepatocellular carcinoma | liver cirrhosis | sustained virologic response [SUMMARY]
[CONTENT] aspartate aminotransferase to platelet ratio index | chronic hepatitis C | hepatitis C virus | hepatocellular carcinoma | liver cirrhosis | sustained virologic response [SUMMARY]
[CONTENT] Aged | Antiviral Agents | Aspartate Aminotransferases | Blood Platelets | Carcinoma, Hepatocellular | Disease Progression | Drug Therapy, Combination | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Interferon-alpha | Kaplan-Meier Estimate | Liver Cirrhosis | Liver Neoplasms | Male | Middle Aged | Multivariate Analysis | Proportional Hazards Models | Retrospective Studies | Ribavirin | Risk Factors | Sustained Virologic Response | Taiwan [SUMMARY]
[CONTENT] Aged | Antiviral Agents | Aspartate Aminotransferases | Blood Platelets | Carcinoma, Hepatocellular | Disease Progression | Drug Therapy, Combination | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Interferon-alpha | Kaplan-Meier Estimate | Liver Cirrhosis | Liver Neoplasms | Male | Middle Aged | Multivariate Analysis | Proportional Hazards Models | Retrospective Studies | Ribavirin | Risk Factors | Sustained Virologic Response | Taiwan [SUMMARY]
[CONTENT] Aged | Antiviral Agents | Aspartate Aminotransferases | Blood Platelets | Carcinoma, Hepatocellular | Disease Progression | Drug Therapy, Combination | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Interferon-alpha | Kaplan-Meier Estimate | Liver Cirrhosis | Liver Neoplasms | Male | Middle Aged | Multivariate Analysis | Proportional Hazards Models | Retrospective Studies | Ribavirin | Risk Factors | Sustained Virologic Response | Taiwan [SUMMARY]
[CONTENT] Aged | Antiviral Agents | Aspartate Aminotransferases | Blood Platelets | Carcinoma, Hepatocellular | Disease Progression | Drug Therapy, Combination | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Interferon-alpha | Kaplan-Meier Estimate | Liver Cirrhosis | Liver Neoplasms | Male | Middle Aged | Multivariate Analysis | Proportional Hazards Models | Retrospective Studies | Ribavirin | Risk Factors | Sustained Virologic Response | Taiwan [SUMMARY]
[CONTENT] Aged | Antiviral Agents | Aspartate Aminotransferases | Blood Platelets | Carcinoma, Hepatocellular | Disease Progression | Drug Therapy, Combination | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Interferon-alpha | Kaplan-Meier Estimate | Liver Cirrhosis | Liver Neoplasms | Male | Middle Aged | Multivariate Analysis | Proportional Hazards Models | Retrospective Studies | Ribavirin | Risk Factors | Sustained Virologic Response | Taiwan [SUMMARY]
[CONTENT] Aged | Antiviral Agents | Aspartate Aminotransferases | Blood Platelets | Carcinoma, Hepatocellular | Disease Progression | Drug Therapy, Combination | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Interferon-alpha | Kaplan-Meier Estimate | Liver Cirrhosis | Liver Neoplasms | Male | Middle Aged | Multivariate Analysis | Proportional Hazards Models | Retrospective Studies | Ribavirin | Risk Factors | Sustained Virologic Response | Taiwan [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] patients | hcc | hcv | apri | svr | treatment | cumulative | liver | study | 95 [SUMMARY]
[CONTENT] patients | hcc | hcv | apri | svr | treatment | cumulative | liver | study | 95 [SUMMARY]
[CONTENT] patients | hcc | hcv | apri | svr | treatment | cumulative | liver | study | 95 [SUMMARY]
[CONTENT] patients | hcc | hcv | apri | svr | treatment | cumulative | liver | study | 95 [SUMMARY]
[CONTENT] patients | hcc | hcv | apri | svr | treatment | cumulative | liver | study | 95 [SUMMARY]
[CONTENT] patients | hcc | hcv | apri | svr | treatment | cumulative | liver | study | 95 [SUMMARY]
[CONTENT] hcc | chronic | fibrosis | hepatitis | related | risk | virus | patients | development | chronic hepatitis [SUMMARY]
[CONTENT] hr | 95 ci | ci | 95 | high apri | high | regression analysis adjustments | regression analysis adjustments age | 017 high | 017 high apri [SUMMARY]
[CONTENT] cumulative incidence | cumulative | incidence | year cumulative | patients | year cumulative incidence | ci | 95 ci | year | 95 [SUMMARY]
[CONTENT] predictor hcc | predictor | validate findings applicability clinical | potent predictor hcc | validate | validate findings | validate findings applicability | major predictor | major predictor hcc | major predictor hcc development [SUMMARY]
[CONTENT] patients | hcc | hcv | svr | apri | cumulative | cumulative incidence | treatment | ci | 95 ci [SUMMARY]
[CONTENT] patients | hcc | hcv | svr | apri | cumulative | cumulative incidence | treatment | ci | 95 ci [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] One hundred and five | January 2005 | December 2011 ||| 3-6 months ||| HCC | every 3-6 months [SUMMARY]
[CONTENT] 105 | age 58.3±10.4 years ||| 4.38 years | 1.73 years | 1.13-9.27 years ||| Fifteen | 14.3% ||| ||| Thirteen | 2.0 ||| 5.795 | 95% ||| CI | 1.370 | 5.548 | 95% | CI | 1.191 | P=0.029 ||| HCC | SVR | 3-year | 21.4% | 95% | CI 7.4%-35.5% | 5-year | 31.1% | 95% | 11.2%-51.1% | SVR | 5-year | 6.2% | 95% | CI | 0%-1.3% ||| HCC | 3-year | 21.8% | 95% | 8.2%-35.3% | 5-year | 30.5% | 95% | CI | 11.8%-49.3% | 5-year | 4.2% | 95% | CI | 0%-1.0% [SUMMARY]
[CONTENT] SVR | two | HCC [SUMMARY]
[CONTENT] ||| One hundred and five | January 2005 | December 2011 ||| 3-6 months ||| HCC | every 3-6 months ||| ||| 105 | age 58.3±10.4 years ||| 4.38 years | 1.73 years | 1.13-9.27 years ||| Fifteen | 14.3% ||| ||| Thirteen | 2.0 ||| 5.795 | 95% ||| CI | 1.370 | 5.548 | 95% | CI | 1.191 | P=0.029 ||| HCC | SVR | 3-year | 21.4% | 95% | CI 7.4%-35.5% | 5-year | 31.1% | 95% | 11.2%-51.1% | SVR | 5-year | 6.2% | 95% | CI | 0%-1.3% ||| HCC | 3-year | 21.8% | 95% | 8.2%-35.3% | 5-year | 30.5% | 95% | CI | 11.8%-49.3% | 5-year | 4.2% | 95% | CI | 0%-1.0% ||| SVR | two | HCC [SUMMARY]
[CONTENT] ||| One hundred and five | January 2005 | December 2011 ||| 3-6 months ||| HCC | every 3-6 months ||| ||| 105 | age 58.3±10.4 years ||| 4.38 years | 1.73 years | 1.13-9.27 years ||| Fifteen | 14.3% ||| ||| Thirteen | 2.0 ||| 5.795 | 95% ||| CI | 1.370 | 5.548 | 95% | CI | 1.191 | P=0.029 ||| HCC | SVR | 3-year | 21.4% | 95% | CI 7.4%-35.5% | 5-year | 31.1% | 95% | 11.2%-51.1% | SVR | 5-year | 6.2% | 95% | CI | 0%-1.3% ||| HCC | 3-year | 21.8% | 95% | 8.2%-35.3% | 5-year | 30.5% | 95% | CI | 11.8%-49.3% | 5-year | 4.2% | 95% | CI | 0%-1.0% ||| SVR | two | HCC [SUMMARY]
Modeled nitrate levels in well water supplies and prevalence of abnormal thyroid conditions among the Old Order Amish in Pennsylvania.
22339761
Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. Nitrate intake from drinking water and dietary sources can interfere with the uptake of iodide by the thyroid, thus potentially impacting thyroid function.
BACKGROUND
We assessed the relation of estimated nitrate levels in well water supplies with thyroid health in a cohort of 2,543 Old Order Amish residing in Lancaster, Chester, and Lebanon counties in Pennsylvania for whom thyroid stimulating hormone (TSH) levels were measured during 1995-2008. Nitrate measurement data (1976-2006) for 3,613 wells in the study area were obtained from the U.S. Geological Survey and we used these data to estimate concentrations at study participants' residences using a standard linear mixed effects model that included hydrogeological covariates and kriging of the wells' residuals. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L N-NO3(-), with a median value of 6.5 mg/L, which was used as the cutpoint to define high and low nitrate exposure. In a validation analysis of the model, we calculated that the sensitivity of the model was 67% and the specificity was 93%. TSH levels were used to define the following outcomes: clinical hyperthyroidism (n = 10), clinical hypothyroidism (n = 56), subclinical hyperthyroidism (n = 25), and subclinical hypothyroidism (n = 228).
METHODS
In women, high nitrate exposure was significantly associated with subclinical hypothyroidism (OR = 1.60; 95% CI: 1.11-2.32). Nitrate was not associated with subclinical thyroid disease in men or with clinical thyroid disease in men or women.
RESULTS
Although these data do not provide strong support for an association between nitrate in drinking water and thyroid health, our results do suggest that further exploration of this hypothesis is warranted using studies that incorporate individual measures of both dietary and drinking water nitrate intake.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Amish", "Cohort Studies", "Cross-Sectional Studies", "Drinking Water", "Environmental Monitoring", "Epidemiological Monitoring", "Female", "Humans", "Hyperthyroidism", "Hypothyroidism", "Linear Models", "Male", "Middle Aged", "Nitrates", "Pennsylvania", "Prevalence", "Sex Factors", "Thyrotropin", "Water Pollutants, Chemical", "Water Wells", "Young Adult" ]
3305600
Background
Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. The thyroid can concentrate univalent anions such as nitrate (NO3-), which subsequently interferes with the uptake of iodide (I-) by the thyroid and may cause reduced production of thyroid hormones [1-4]. The result of the reduced thyroid hormone production is a compensatory increase in thyroid stimulating hormone (TSH), a sensitive indicator of thyroid function. High and low TSH levels reflect hypo- and hyperfunction of the thyroid gland, respectively. Chronic stimulation of the thyroid gland by excessive TSH has been shown in animals to induce the development of hypertrophy and thyroid disease, as well as hyperplasia, followed by adenoma and carcinoma [5]. At least two epidemiological studies have shown high nitrate intake to be associated with thyroid dysfunction, including hypertrophy and changes in TSH levels [6,7]; however, the impact of nitrate intake on specific thyroid conditions, including hyperthyroidism and hypothyroidism is not clear. Elevated concentrations of nitrate in groundwater originate from a number of sources, including leaking septic tanks, animal waste, and overuse of nitrogen fertilizers [7]. Nitrate is very soluble and it readily migrates to groundwater. Nitrate contamination of groundwater is an exposure of interest as groundwater serves as the primary drinking water supply for over 90% of the rural population and 50% of the total population of North America [8]. Although the U.S. Environmental Protection Agency (EPA) maximum contaminant level (MCL) for nitrate as nitrogen (nitrate-N) is 10 mg/L in public water sources [9], the levels in private wells are not regulated and the task of monitoring is left to residential owners, presenting opportunities for high levels of human exposure. The U.S. 
Geological Survey (USGS) estimates that nitrate concentrations exceed the EPA's standard in approximately 15% of agricultural and rural areas, exposing over 2 million people in the United States [10]. The MCL for nitrate in drinking water was established to protect against methemoglobinemia, or "blue baby syndrome," to which infants are especially susceptible. However, this health guideline has not been thoroughly evaluated for other health outcomes such as thyroid disease and cancer. The Old Order Amish community is a population characterized by a homogeneous lifestyle, including intensive farming practices and low mobility, and has been relatively unchanged across generations [11]. In areas where many large dairy and poultry farms are concentrated, the land area for disposal of animal wastes is limited. This situation often results in overloading the available land with manure, with considerable nitrogen ending up in groundwater or surface water [8,12]. Lancaster County in southeastern Pennsylvania is an example of such an area where extensive dairy enterprises with high stocking rates prevail. High levels of nitrate in the groundwater [8] suggest that the Amish are a potentially highly exposed population. Given the biological effects of nitrate intake on the thyroid, investigation of whether the Amish in this area exhibit an increased prevalence of thyroid dysfunction and thyroid disease is of interest. The aim of this study is to assess whether nitrate concentrations in well water are associated with levels of TSH and thyroid disease. Our goal was to use survey data on nitrate levels in well-water obtained from the USGS to conduct a cross-sectional analysis of the association between nitrate exposure and thyroid health. This study builds upon several ongoing studies of diabetes, obesity, osteoporosis, hypertension, and cardiovascular disease in the Amish, initiated in 1993 at the University of Maryland [13-16].
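The study classifies participants into clinical and subclinical hyper- and hypothyroidism from measured TSH alone. A sketch of such a TSH-only rule follows; the numeric cutoffs (reference range 0.4–4.5 mIU/L, clinical thresholds 0.1 and 10 mIU/L) are assumptions chosen for illustration, not the study's published criteria:

```python
def classify_tsh(tsh_miu_l: float) -> str:
    """Classify thyroid status from a TSH value alone.
    All cutoffs below are ASSUMED for illustration; the study's exact
    thresholds are not stated in this excerpt."""
    if tsh_miu_l < 0.1:
        return "clinical hyperthyroidism"
    if tsh_miu_l < 0.4:
        return "subclinical hyperthyroidism"
    if tsh_miu_l <= 4.5:
        return "euthyroid"
    if tsh_miu_l <= 10.0:
        return "subclinical hypothyroidism"
    return "clinical hypothyroidism"

# Example: a TSH of 7.2 mIU/L would fall in the subclinical hypothyroidism band
```

In practice the clinical/subclinical distinction also depends on free T4, which is why a TSH-only definition such as this one is only an approximation.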
Methods
Study population Subjects included in this analysis were 3,017 Old Order Amish aged 18 years and older from Lancaster, Chester, and Lebanon Counties, Pennsylvania, for whom thyroid health was assessed through measurement of thyroid stimulating hormone (TSH) levels in their prior participation in one or more studies of health by investigators at the University of Maryland, Baltimore [13-16]. We excluded participants whose residences were located outside of Lancaster, Chester, or Lebanon counties (n = 328) due to sparse nitrate measurement data, and persons who reported use of thyroid medication (n = 145) leaving a total of 2,543 persons (1,336 females and 1,207 males) in the final analysis. Nearly all of the enrolled individuals are descendants of a small number of Amish who settled in Lancaster County, Pennsylvania, in the mid-eighteenth century [13,17,18]. This study was approved by the Institutional Review Boards of the University of Maryland and the National Cancer Institute. All subjects included in this analysis received a standardized examination at the Amish Research Clinic in Strasburg, Pennsylvania or in the participant's home during the time period 1995-2008. As part of this examination, a fasting blood sample was collected from which TSH levels were measured with the Siemens TSH assay (Immulite 2000; Deerfield, IL) according to the manufacturer's instructions. The method is a solid-phase, chemiluminescent, competitive analog immunoassay and has analytical sensitivity of 0.004 μIU/ml and upper limit of 75 μIU/ml of TSH. Residential street addresses were geocoded using the TeleAtlas (Lebanon, NH) MatchMaker SDK Professional version 8.3 (October 2006), a spatial database of roads, and a modified version of a Microsoft Visual Basic version 6.0 program issued by TeleAtlas to match input addresses to the spatial database. We assigned residence location using an offset of 25 ft from the street centerline. 
Addresses that were not successfully geocoded were checked for errors using interactive geocoding techniques. Where only a street intersection was available for the residential location (1.0% of residences), we assigned the geographic location of the residence to the middle of the intersection. Where only a zip code was available for the residential location (3.0% of residences), we assigned the geographic location of the residence to the centroid of the zip code. The geocoded location of the residences and the geographic boundary of our study area is shown in Figure 1. Location of participant residences and wells with nitrate measures in study area. Historical assessment of nitrate levels: A survey of nitrate levels in well water in Lancaster, Chester, and Lebanon counties was carried out from 1976-2006 by the USGS. The USGS collected data from active monitoring wells in the county and from a well-owner monitoring program conducted by the State Department of Natural Resources in collaboration with Pennsylvania State University. Water samples (50-100 µL) from all programs were measured for nitrate using ion chromatography with a detection limit of 0.002 mg/L as nitrate-N [19]. 
A total of 3,613 unique wells were measured in our study area during the survey period. The measurements were not from wells chosen at random but included monitoring data reported by the USGS and samples from individual well owners. A total of 3,057 wells had 1 measurement; 198 wells had 2 measurements; 113 wells had 3 measurements; and 245 wells had more than 3 measurements. Figure 1 shows the geographic distribution of the wells in our study area in relation to the locations of the 2,543 participant residences. The median distance between a residence and the closest measured well was 576.0 m (interquartile range: 308.0-897.2 m). The median nitrate concentration by season ranged from 2.0 mg/L as nitrate-nitrogen (hereafter mg/L) in summer months (interquartile range: 0-7.5 mg/L) to 2.7 mg/L in spring months (interquartile range: 0-8.8 mg/L). For wells with multiple measures, the median difference between the maximum and minimum value was 1.2 mg/L (IQR: 0.3-0.9). For wells with multiple measurements, the mean of the measurements was used in the exposure modeling (see below).

Prediction of nitrate levels in well water of participants' residences

We assumed the drinking water supply for participants to be a well located at their reported residence. To estimate nitrate levels at this location, we first determined whether nitrate concentrations in the USGS wells varied across the types of aquifers in the study area (Table 1). Maps of the principal aquifers were obtained from the USGS (created from 30 m pixel satellite imagery) [19]. There are five principal aquifers in the study area (Figure 2). The differentiation of aquifer type is important because the transport of contaminants in groundwater is generally confined within these hydrogeologic boundaries. Using the 1992 USGS National Land Cover Data Set (NLCD) [20], we also evaluated nitrate levels in the USGS wells across thirteen types of land use (pasture, deciduous forest, row crops, low intensity residential, mixed forest, commercial or industrial, evergreen forest, high intensity residential, water, quarries/gravel pits, transitional, urban grasses, and woody wetlands) (Table 1 and Figure 3).
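The category-wise summary of well nitrate levels (as tabulated in Table 1) can be sketched as follows. This is illustrative Python, not the authors' code (the analysis used SAS); the categories and values are invented:

```python
# Sketch (not the authors' code): summarize well nitrate measurements
# by category (e.g., aquifer type or land-use class), as in Table 1.
# Category labels and nitrate values are invented for illustration.
from statistics import median

def summarize_by_group(records):
    """records: iterable of (category, nitrate_mg_L) pairs.
    Returns {category: (n_wells, median nitrate)}."""
    groups = {}
    for cat, value in records:
        groups.setdefault(cat, []).append(value)
    return {cat: (len(vals), median(vals)) for cat, vals in groups.items()}

wells = [("pasture", 8.1), ("pasture", 5.3), ("row crops", 9.9),
         ("deciduous forest", 1.2)]
print(summarize_by_group(wells))
```

The same grouping applies whether the category is aquifer type or NLCD land-use class.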
We found limited temporal variation by season and by decade within each of the five aquifer types over the well measurement period, as well as across the land use classifications used in our analysis (Figure 3).

Table 1. Distribution of nitrate concentration in US Geological Survey wells by aquifer type and categories of land use in Lancaster, Lebanon, and Chester Counties, 1976-2006. Spatial classification is based on the National Land Cover Data Set; 1992, USGS [ref 38].

Figure 2. Principal aquifers in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003.

Figure 3. Land use in 1992 in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003.

We used a standard linear mixed effects statistical model to develop a predictive model including the variables principal aquifer and land use. Nitrate levels were log-normally distributed, so we modeled the natural logarithm of the concentration. Spatial correlation existed in the nitrate measurements even after covariate adjustment [21], so we performed kriging on the wells' residuals from the predictive nitrate model. If a well had more than one measurement, the mean of the measurements and its residual was used in the modeling. We assumed that the residuals in the model have a single, normally distributed mean structure centered at zero, allowing for universal kriging across the study area. The kriging procedure predicts a 'residual' for each study participant based on a weighted average of the 20 neighboring wells' residuals (within the respective aquifer and land use category). For comparison, we also applied the kriging procedure based on the weighted average of the five neighboring wells' residuals.
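The two-stage prediction just described can be sketched in simplified form: a regression in log space, plus a weighted average of the k nearest wells' residuals, back-transformed with the antilog. This is a crude illustration, not the authors' implementation: the data are simulated, the covariate structure is reduced to a single indicator, and inverse-distance weights stand in for the variogram-based kriging weights:

```python
import numpy as np

# Simplified sketch of the two-stage exposure prediction (not the
# authors' code). Stage 1: regress log nitrate on covariates.
# Stage 2: add a weighted average of the k nearest wells' residuals
# (inverse-distance weights as a stand-in for kriging weights),
# then back-transform with the antilog.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # intercept + one covariate
coords = rng.uniform(0, 10, size=(n, 2))                  # well locations (arbitrary units)
log_no3 = X @ np.array([1.0, 0.5]) + rng.normal(0, 0.3, n)

beta, *_ = np.linalg.lstsq(X, log_no3, rcond=None)        # stage 1: linear model
resid = log_no3 - X @ beta

def predict(x_new, coord_new, k=20):
    d = np.linalg.norm(coords - coord_new, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)                         # inverse-distance weights
    r_hat = np.sum(w * resid[nearest]) / np.sum(w)        # stage 2: smoothed residual
    return np.exp(x_new @ beta + r_hat)                   # antilog -> median-type estimate

print(predict(np.array([1.0, 1.0]), np.array([5.0, 5.0])))
```

In the study itself, the neighboring wells were restricted to the same aquifer and land-use category, which this sketch omits.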
For example, if for a particular region of our study area the regression model tends to underestimate the true observed log nitrate values (positive residuals), then individuals in this region are given a representative positive residual prediction that is added to the log nitrate estimate based on the individuals' covariates and the regression parameters. The antilog gives an unbiased predictor of the median nitrate value, resulting in estimates that are more robust to outlier observations than a mean estimator. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L, with a median of 6.5 mg/L and a mean of 6.6 mg/L (sd = 2.9 mg/L). The predicted nitrate level mean was similar to the mean of the measured values used for modeling (6.8 mg/L; sd = 8.3 mg/L), although the standard deviation was smaller.

Model validation

The validity of the predictive model was assessed for 77 validation wells by comparing the predicted nitrate concentrations to the observed nitrate concentrations. The validation wells were randomly selected from 482 wells with USGS measurements from 1992-1993. The limited date range was chosen to be consistent with the time frame of the NLCD land use database used in our analyses. We evaluated model sensitivity, specificity, and percent agreement using the median of the predicted nitrate level (6.5 mg/L as nitrate-N) as the cutpoint for high and low exposure categories. The sensitivity of the model was 67% and the specificity was 93%. The Spearman's rank correlation between the continuous predicted and measured concentrations was 0.73. Cross-tabulation of predicted and observed nitrate concentrations by quartiles of the measured nitrate concentrations demonstrated a percent agreement of 56% (Table 2).
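The validation metrics can be illustrated with a short sketch (not the authors' code; the well values are hypothetical, and the rank correlation as written assumes no tied values):

```python
import numpy as np

# Sketch of the validation metrics: dichotomize predicted and measured
# nitrate at the 6.5 mg/L median cutpoint, then compute sensitivity,
# specificity, and a Spearman-type rank correlation (Pearson
# correlation of the ranks; valid when there are no ties).
def validation_metrics(predicted, measured, cut=6.5):
    pred_hi = np.asarray(predicted) > cut
    meas_hi = np.asarray(measured) > cut
    tp = np.sum(pred_hi & meas_hi)    # high predicted, high measured
    tn = np.sum(~pred_hi & ~meas_hi)  # low predicted, low measured
    fn = np.sum(~pred_hi & meas_hi)
    fp = np.sum(pred_hi & ~meas_hi)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ranks = lambda x: np.argsort(np.argsort(x))
    r = np.corrcoef(ranks(predicted), ranks(measured))[0, 1]
    return sens, spec, r

sens, spec, r = validation_metrics([2, 8, 7, 3], [3, 9, 5, 2])
```

Percent agreement by quartile (Table 2) would be computed analogously after binning both series into quartiles of the measured values.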
Table 2. Comparison of quartiles of the predicted nitrate concentration by quartiles of the measured nitrate concentrations for the 77 wells used for the model validation. Percent agreement = 56%.

Data analysis

We used generalized linear regression to assess the association between estimated nitrate levels in well water and continuous TSH measures. TSH levels were also used to define disease status based on clinical guidelines [21]. A "normal" range for TSH was defined as 0.4-4 mIU/ml. A TSH level of > 4 to 10 mIU/ml was defined as subclinical hypothyroidism (n = 228), and more than 10 mIU/ml was defined as clinical hypothyroidism (n = 56). A TSH value of 0.1 mIU/ml to 0.4 mIU/ml was defined as subclinical hyperthyroidism (n = 25), and less than 0.1 mIU/ml was defined as clinical hyperthyroidism (n = 10).
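These TSH cutpoints amount to a simple classifier, sketched below. How values falling exactly on the 0.4 mIU/ml boundary are assigned is our assumption, since the text places 0.4 in both the normal and subclinical-hyperthyroid ranges:

```python
# Sketch of the TSH-based disease categories (cutpoints from the
# clinical guidelines cited in the text). Assigning exactly 0.4 mIU/ml
# to "normal" is an assumption made for this illustration.
def classify_tsh(tsh_miu_ml):
    if tsh_miu_ml < 0.1:
        return "clinical hyperthyroidism"
    if tsh_miu_ml < 0.4:
        return "subclinical hyperthyroidism"
    if tsh_miu_ml <= 4.0:
        return "normal"
    if tsh_miu_ml <= 10.0:
        return "subclinical hypothyroidism"
    return "clinical hypothyroidism"

print(classify_tsh(5.2))  # subclinical hypothyroidism
```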
All of the disease definitions are based on the assumption that TSH was marking primary disease in the thyroid, since other causes of TSH abnormalities (e.g., primary pituitary disease, thyroid hormone resistance) are very uncommon by comparison [22].

Estimated nitrate levels in participants' drinking water were categorized into quartiles and by the median of the predicted well nitrate level (6.5 mg/L). We evaluated the association of the nitrate levels with each thyroid disease group using unconditional logistic regression to compute odds ratios (OR) and 95% confidence intervals. All models were adjusted for potential confounding factors, including age (continuous) and BMI (normal (< 25 kg/m2), overweight (25-30 kg/m2), and obese (> 30 kg/m2)). We conducted analyses stratified by gender as well as for men and women combined. Tests of linear trend were performed by modeling the continuous nitrate estimates. A p-value < 0.05 was considered significant, and all data analyses were conducted using SAS version 9.1.

We conducted two sensitivity analyses. In the first analysis, we excluded participants whose residences were located within boundaries of the U.S. Census Places (USCB 2004) and were therefore possibly connected to public water supplies with nitrate levels below the MCL (16% of the study population). In the second analysis, we excluded those whose residence was greater than 1500 m from the nearest well with measurement data (17%) to reduce the probability of measurement error. We also recomputed the OR for subclinical hypothyroidism after correcting for exposure misclassification (i.e., by reclassifying false positives and false negatives) using our estimates of sensitivity and specificity and the prevalence of exposure (50%).
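The misclassification correction can be sketched with the standard matrix-correction formula for nondifferential exposure misclassification. The sensitivity and specificity below are the validation estimates from the text; the 2x2 counts are hypothetical, and this is an illustration rather than the authors' exact procedure:

```python
# Sketch (not the authors' code): correct a 2x2 exposure-disease table
# for nondifferential exposure misclassification using the validation
# sensitivity (0.67) and specificity (0.93). Counts are hypothetical.
# Observed exposed = sens*E + (1 - spec)*(N - E), solved for E; the
# formula can yield negative counts if observed exposure is rarer than
# (1 - spec) would imply, so inputs should be sanity-checked.
def corrected_or(a, b, c, d, sens=0.67, spec=0.93):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    def true_exposed(obs_exposed, total):
        return (obs_exposed - (1 - spec) * total) / (sens + spec - 1)
    A = true_exposed(a, a + b)          # corrected exposed cases
    C = true_exposed(c, c + d)          # corrected exposed controls
    B, D = (a + b) - A, (c + d) - C     # corrected unexposed counts
    return (A * D) / (B * C)

print(corrected_or(120, 108, 1150, 1165))
```

With imperfect sensitivity and specificity, the corrected OR is farther from the null than the crude OR, which is the direction of change expected when nondifferential misclassification is removed.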
Authors' contributions
BA-K: participated in the conceptualization of the study and the exposure assessment design, conducted the data analysis, and drafted the manuscript. SH: conducted the nitrate modeling work, gave important intellectual input for the data analysis, and helped draft the manuscript. JN: conceptualized the exposure assessment approach, advised in the creation of the hydrogeological covariates and exposure modeling, and provided important intellectual content. MS: advised on the medical interpretation of the findings and provided important intellectual content. AS: as the PI of the cohort study, designed the study, oversaw the recruitment of subjects, and conducted much of the primary data gathering. BM: as a co-PI, participated in the design of the study and ongoing study efforts, the interpretation of the results for the nitrate investigation, and the drafting of the manuscript. MA: obtained the hydrogeological dataset, performed the GIS analysis, and reviewed the manuscript and revised it critically for important intellectual content. TH: helped conceptualize the study, provided statistical support and modeling guidance, and provided important intellectual content for the manuscript. YZ: helped conceptualize the study and provided important intellectual content for the manuscript. MW: mentored in the conceptualization of the study, participated in study design, method development, and data analysis, and helped in the drafting of the manuscript. All authors assisted in the interpretation of results and contributed towards the final version of the manuscript. All authors read and approved the final manuscript.
[ "Background", "Study population", "Historical assessment of nitrate levels", "Prediction of nitrate levels in well water of participants' residences", "Model validation", "Data analysis", "Results", "Discussion", "Conclusions" ]
[ "Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. The thyroid can concentrate univalent anions such as nitrate (NO3-), which subsequently interferes with the uptake of iodide (I-) by the thyroid and may cause reduced production of thyroid hormones [1-4]. The result of the reduced thyroid hormone production is a compensatory increase in thyroid stimulating hormone (TSH), a sensitive indicator of thyroid function. High and low TSH levels reflect hypo- and hyperfunction of the thyroid gland, respectively. Chronic stimulation of the thyroid gland by excessive TSH has been shown in animals to induce the development of hypertrophy and thyroid disease, as well as hyperplasia, followed by adenoma and carcinoma [5]. At least two epidemiological studies have shown high nitrate intake to be associated with thyroid dysfunction, including hypertrophy and changes in TSH levels [6,7]; however, the impact of nitrate intake on specific thyroid conditions, including hyperthyroidism and hypothyroidism is not clear.\nElevated concentrations of nitrate in groundwater originate from a number of sources, including leaking septic tanks, animal waste, and overuse of nitrogen fertilizers [7]. Nitrate is very soluble and it readily migrates to groundwater. Nitrate contamination of groundwater is an exposure of interest as groundwater serves as the primary drinking water supply for over 90% of the rural population and 50% of the total population of North America [8]. Although the U.S. Environmental Protection Agency (EPA) maximum contaminant level (MCL) for nitrate as nitrogen (nitrate-N) is 10 mg/L in public water sources [9], the levels in private wells are not regulated and the task of monitoring is left to residential owners, presenting opportunities for high levels of human exposure. The U.S. 
Geological Survey (USGS) estimates that nitrate concentrations exceed the EPA's standard in approximately 15% of agricultural and rural areas, exposing over 2 million people in the United States [10]. The MCL for nitrate in drinking water was established to protect against methemoglobinemia, or \"blue baby syndrome,\" to which infants are especially susceptible. However, this health guideline has not been thoroughly evaluated for other health outcomes such as thyroid disease and cancer.\nThe Old Order Amish community is a population characterized by a homogeneous lifestyle, including intensive farming practices and low mobility, and has been relatively unchanged across generations [11]. In areas where many large dairy and poultry farms are concentrated, the land area for disposal of animal wastes is limited. This situation often results in overloading the available land with manure, with considerable nitrogen ending up in groundwater or surface water [8,12]. Lancaster County in southeastern Pennsylvania is an example of such an area where extensive dairy enterprises with high stocking rates prevail. High levels of nitrate in the groundwater [8] suggest that the Amish are a potentially highly exposed population. Given the biological effects of nitrate intake on the thyroid, investigation of whether the Amish in this area exhibit an increased prevalence of thyroid dysfunction and thyroid disease is of interest.\nThe aim of this study is to assess whether nitrate concentrations in well water are associated with levels of TSH and thyroid disease. Our goal was to use survey data on nitrate levels in well-water obtained from the USGS to conduct a cross-sectional analysis of the association between nitrate exposure and thyroid health. 
This study builds upon several ongoing studies of diabetes, obesity, osteoporosis, hypertension, and cardiovascular disease in the Amish, initiated in 1993 at the University of Maryland [13-16].", "Subjects included in this analysis were 3,017 Old Order Amish aged 18 years and older from Lancaster, Chester, and Lebanon Counties, Pennsylvania, for whom thyroid health was assessed through measurement of thyroid stimulating hormone (TSH) levels in their prior participation in one or more studies of health by investigators at the University of Maryland, Baltimore [13-16]. We excluded participants whose residences were located outside of Lancaster, Chester, or Lebanon counties (n = 328) due to sparse nitrate measurement data, and persons who reported use of thyroid medication (n = 145) leaving a total of 2,543 persons (1,336 females and 1,207 males) in the final analysis. Nearly all of the enrolled individuals are descendants of a small number of Amish who settled in Lancaster County, Pennsylvania, in the mid-eighteenth century [13,17,18]. This study was approved by the Institutional Review Boards of the University of Maryland and the National Cancer Institute.\nAll subjects included in this analysis received a standardized examination at the Amish Research Clinic in Strasburg, Pennsylvania or in the participant's home during the time period 1995-2008. As part of this examination, a fasting blood sample was collected from which TSH levels were measured with the Siemens TSH assay (Immulite 2000; Deerfield, IL) according to the manufacturer's instructions. 
The method is a solid-phase, chemiluminescent, competitive analog immunoassay and has analytical sensitivity of 0.004 μIU/ml and upper limit of 75 μIU/ml of TSH.\nResidential street addresses were geocoded using the TeleAtlas (Lebanon, NH) MatchMaker SDK Professional version 8.3 (October 2006), a spatial database of roads, and a modified version of a Microsoft Visual Basic version 6.0 program issued by TeleAtlas to match input addresses to the spatial database. We assigned residence location using an offset of 25 ft from the street centerline. Addresses that were not successfully geocoded were checked for errors using interactive geocoding techniques. Where only a street intersection was available for the residential location (1.0% of residences), we assigned the geographic location of the residence to the middle of the intersection. Where only a zip code was available for the residential location (3.0% of residences), we assigned the geographic location of the residence to the centroid of the zip code. The geocoded location of the residences and the geographic boundary of our study area is shown in Figure 1.\nLocation of participant residences and wells with nitrate measures in study area.", "A survey of nitrate levels in well water in Lancaster, Chester, and Lebanon counties was carried out from 1976-2006 by the USGS. The USGS collected data from active monitoring wells in the county and from a well-owner monitoring program conducted by the State Department of Natural Resources in collaboration with Pennsylvania State University. Water samples (50-100 uL) from all programs were measured for nitrate using ion chromatography with a detection limit of 0.002 mg/L as nitrate-N [19].\nA total of 3,613 unique wells were measured in our study area during the survey period. The measurements were not from wells chosen at random but included monitoring data reported by the USGS and samples from individual well owners. 
A total of 3,057 wells had 1 measurement; 198 wells had 2 measurements; 113 wells had 3 measurements; and 245 wells had more than 3 measurements. Figure 1 shows the geographic distribution of the wells in our study area in relation to the location of the 2,543 participant residences The median distance between a residence and the closest measured well was 576.0 m (interquartile range: 308.0-897.2 m). The median nitrate concentration by season ranged from 2.0 mg/L as nitrate-nitrogen (hereafter mg/L) for summer months (interquartile range: 0-7.5 mg/L) to 2.7 mg/L in spring months (interquartile range: 0-8.8 mg/L). For wells with multiple measures, the median difference between the maximum and minimum value was 1.2 mg/l (IQR: 0.3-0.9). The mean of the measurements was used for wells with multiple measurements when we did the exposure modeling (see below).", "We assumed the drinking water supply for participants to be a well located at their reported residence. To estimate nitrate levels at this location, we first determined whether nitrate concentrations in the USGS wells varied across the types of aquifers in the study area (Table 1). Maps of the primary aquifers were obtained from the USGS (created from 30 m pixel satellite imagery) [19]. There are five principal aquifers in the study area (Figure 2). The differentiation of aquifer type is important because the transport of contaminants in groundwater is generally confined to within these hydrogeologic boundaries. Using the 1992 USGS National Land Cover Data Set (NLCD) [20], we also evaluated nitrate levels in the USGS wells across thirteen types of land use (pasture, deciduous forest, row crops, low intensity residential, mixed forest, commercial or industrial, evergreen forest, high intensity residential, water, quarries/gravel pits, transitional, urban grasses, and woody wetlands) (Table 1 and Figure 3). 
We found limited temporal variation by season and by decade within each of the five aquifer types over the well measurement period, as well as across the land use classifications used in our analysis (Figure 3).\nDistribution of nitrate concentration in US Geological Survey wells by aquifer type and categories of land use in Lancaster, Lebanon, and Chester Counties, from 1976-2006\nSpatial classification is based on the National Land Cover Data Set; 1992, USGS [ref 38].\nPrinciple aquifers in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003\nLand use in 1992 in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003\nWe used a standard linear mixed effects statistical model to develop a predictive model including the variables principal aquifer and land use. Nitrate levels were log normally distributed so we modeled the natural logarithm of the concentration. Spatial correlation existed in the nitrate measurements even after covariate adjustment [21], so we performed kriging on the wells' residuals from the predictive nitrate model. If a well had more than one measurement, the mean of the measurements and its residual was used in the modeling. We assumed that the residuals in the model have a single, normally distributed mean structure centered at zero, allowing for universal kriging across the study area. The kriging procedure predicts a 'residual' for each study participant based on a weighted average of the 20 neighboring wells' residuals (within the respective aquifer and land use category). For comparison, we also applied the kriging procedure based on the weighted average of the five neighboring wells' residuals. 
For example, if for a particular region of our study area, the regression model tends to underestimate the true observed log nitrate values (positive residuals) then individuals in this region will be given a representative positive residual prediction that is added to the log nitrate estimate based on the individuals' covariates and the regression parameters. The antilog gives an unbiased predictor of median nitrate value resulting in estimates that are more robust to outlier observations than a mean estimator. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L, with a median of 6.5 mg/L and a mean of 6.6 mg/L (sd = 2.9 mg/L). The predicted nitrate level mean was similar to the mean of the measured values used for modeling (6.8 mg/L; sd = 8.3 mg/L) although the standard deviation was smaller.", "The validity of the predictive model was assessed for 77 validation wells by comparing the predicted nitrate concentration to the observed nitrate concentrations. The validation wells were randomly selected from 482 wells with USGS measurements from 1992-1993. The limited date range was chosen to be consistent with the time frame of the NLCD land use database used in our analyses. We evaluated model sensitivity, specificity, and percent agreement using the median of the predicted nitrate level (6.5 mg/L as nitrate-N) as a cutpoint for high and low exposure categories. The sensitivity of the model was 67% and the specificity was 93%. The Spearman's rank correlation between the continuous predicted and measured concentrations was 0.73. 
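The dichotomous validation metrics just reported (sensitivity, specificity at the 6.5 mg/L cutpoint, and Spearman's rank correlation) can be computed as in this sketch. The well values are made up for illustration, and the Spearman coefficient uses the no-ties d-squared formula.

```python
# Sketch of the validation metrics reported above. The predicted and
# observed nitrate values (mg/L as nitrate-N) are made up; 6.5 mg/L is
# the predicted median used as the high/low exposure cutpoint.

CUTPOINT = 6.5  # mg/L as nitrate-N

def sensitivity_specificity(predicted, observed, cutpoint=CUTPOINT):
    pairs = [(p >= cutpoint, o >= cutpoint) for p, o in zip(predicted, observed)]
    tp = sum(p and o for p, o in pairs)              # predicted high, truly high
    fn = sum((not p) and o for p, o in pairs)        # missed high wells
    tn = sum((not p) and (not o) for p, o in pairs)  # predicted low, truly low
    fp = sum(p and (not o) for p, o in pairs)        # false alarms
    return tp / (tp + fn), tn / (tn + fp)

def spearman_rho(x, y):
    """Spearman rank correlation via the d^2 formula (assumes no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

observed = [1.2, 8.0, 3.4, 9.1, 7.2, 2.0, 6.8, 0.5]   # measured wells
predicted = [2.0, 7.5, 4.0, 8.8, 5.9, 1.1, 7.0, 1.0]  # model estimates

se, sp = sensitivity_specificity(predicted, observed)
rho = spearman_rho(predicted, observed)
```

On the made-up data, the model misses one truly high well (sensitivity 0.75) while classifying every low well correctly, mirroring the pattern of moderate sensitivity and high specificity reported for the 77 validation wells.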
Cross tabulation of predicted and observed nitrate concentrations by quartiles of the measured nitrate concentrations demonstrated a percent agreement of 56% (Table 2).\nComparison of quartiles of the predicted nitrate concentration by quartiles of the measured nitrate concentrations, 77 wells used for the model validation\nPercent agreement = 56%", "We used generalized linear regression to assess the association between estimated nitrate levels in well water and continuous TSH measures. TSH levels were also used to define disease status based on clinical guidelines [21]. A \"normal\" range for TSH was defined as 0.4-4 μIU/ml. A TSH level of > 4 to 10 μIU/ml was defined as subclinical hypothyroidism (n = 228) and more than 10 μIU/ml was defined as clinical hypothyroidism (n = 56). A TSH value of 0.1 μIU/ml to 0.4 μIU/ml was defined as subclinical hyperthyroidism (n = 25) and less than 0.1 μIU/ml was defined as clinical hyperthyroidism (n = 10). All of the disease definitions are based on the assumption that TSH was marking primary disease in the thyroid, since other causes of TSH abnormalities, e.g., primary pituitary disease and thyroid hormone resistance, are very uncommon by comparison [22].\nEstimated nitrate levels in participants' drinking water were categorized into quartiles and by the median of the predicted well nitrate level (6.5 mg/L). We evaluated the association of the nitrate levels with each thyroid disease group using unconditional logistic regression to compute the odds ratio (OR) and 95% confidence intervals. All models were adjusted for potential confounding factors including age (continuous) and BMI (normal (< 25 kg/m2), overweight (25-30 kg/m2), and obese (> 30 kg/m2)). We conducted analyses stratified by gender as well as for men and women combined. Tests of linear trend were performed by modeling the continuous nitrate estimates. 
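The TSH cutoffs and the odds-ratio measure described above can be sketched as follows. The crude 2x2 OR with a Woolf-type (log-based) confidence interval stands in for the age- and BMI-adjusted unconditional logistic regression actually used, and any counts passed in are hypothetical.

```python
# The TSH disease categories defined above, plus a crude 2x2 odds ratio
# with a Woolf-type 95% CI. The unadjusted OR is a simplified stand-in
# for the adjusted unconditional logistic regression used in the study.
from math import exp, log, sqrt

def classify_tsh(tsh_uiu_ml):
    """Map a TSH value (μIU/ml) to the disease categories in the text."""
    if tsh_uiu_ml < 0.1:
        return "clinical hyperthyroidism"
    if tsh_uiu_ml < 0.4:
        return "subclinical hyperthyroidism"
    if tsh_uiu_ml <= 4.0:
        return "normal"
    if tsh_uiu_ml <= 10.0:
        return "subclinical hypothyroidism"
    return "clinical hypothyroidism"

def odds_ratio(a, b, c, d, z=1.96):
    """a/b: exposed cases/non-cases; c/d: unexposed cases/non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)
```

For example, `odds_ratio(130, 1000, 98, 1000)` would compare hypothetical case counts in high- versus low-nitrate groups dichotomized at the 6.5 mg/L median.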
A p-value < 0.05 was considered significant and all data analyses were conducted using SAS version 9.1.\nWe conducted two sensitivity analyses. In the first analysis, we excluded participants whose residences were located within boundaries of the U.S. Census Places (USCB 2004) and were therefore possibly connected to public water supplies with nitrate levels below the MCL (16% of study population). In the second analysis, we excluded those whose residence was greater than 1500 m from the nearest well with measurement data (17%) to reduce the probability of measurement error. We recomputed the OR for subclinical hypothyroidism after correcting for exposure misclassification (i.e., by reclassifying false positives and false negatives) using our estimates of sensitivity and specificity and the prevalence of exposure (50%).", "The mean TSH level was 2.92 μIU/ml (3.05 μIU/ml for women and 2.77 μIU/ml for men). Based on the TSH measures, the prevalence of clinical hyperthyroidism was 0.4% and the prevalence of subclinical hyperthyroidism was 1.0%. The prevalence of clinical hypothyroidism was 2.2% and the prevalence of subclinical hypothyroidism was 9.0%.\nThe mean age of participants was 50 years (range: 18-98). The mean BMI was 26.6 kg/m2 for men and 27.7 kg/m2 for women (Table 3). The average BMI of males with clinical hyperthyroidism was lower than that of the general study population, but females with clinical hyperthyroidism had a slightly higher average BMI than the general study population. The average age of persons with thyroid disease was higher in all categories compared to the group with normal TSH levels. 
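The misclassification correction mentioned above can be sketched with the standard matrix method for nondifferential exposure misclassification: observed exposure counts are back-corrected using the validation sensitivity (0.67) and specificity (0.93), and the OR is recomputed from the corrected counts. This is not necessarily the authors' exact implementation, and the case/control counts below are hypothetical.

```python
# Sketch of the exposure-misclassification correction described above,
# using the matrix method for nondifferential misclassification.
# Se = 0.67 and Sp = 0.93 come from the model validation; the counts
# used below are hypothetical, not the study's actual data.

def true_exposed(observed_exposed, total, se=0.67, sp=0.93):
    # Solve E*Se + (total - E)*(1 - Sp) = observed_exposed for true E.
    return (observed_exposed - (1 - sp) * total) / (se + sp - 1)

def corrected_odds_ratio(exp_cases, n_cases, exp_controls, n_controls,
                         se=0.67, sp=0.93):
    a = true_exposed(exp_cases, n_cases, se, sp)        # corrected exposed cases
    c = true_exposed(exp_controls, n_controls, se, sp)  # corrected exposed controls
    b, d = n_cases - a, n_controls - c
    return (a * d) / (b * c)

observed_or = (130 * 1000) / (98 * 1000)              # crude OR from observed counts
corrected_or = corrected_odds_ratio(130, 228, 1000, 2000)
```

With imperfect sensitivity, the correction moves the OR away from the null, consistent with the corrected estimate of 2.1 versus the observed 1.6 reported in the text.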
Although smoking data were not available for the entire study population, among those for whom these data were collected, less than 1% of women (4 of 657) and 43% of men (310 of 725) reported ever smoking tobacco.\nCharacteristics of the study population by thyroid disease status\nIncludes persons who reside in Lancaster, Chester, or Lebanon Counties and excludes persons younger than 18 or who report use of thyroid medications.\n*smoking status is based on 45.8% of the study population\nAdjusting for age and BMI, and modeling TSH concentration as the outcome, we observed no significant relationship with nitrate concentration. The B coefficient for men and women combined was -0.12 (p-value = 0.14), -0.13 for men (p-value = 0.11), and -0.12 for women (p-value = 0.40). Modeling the dichotomized high/low nitrate predictor, the B coefficient for men and women combined was -0.64 (p-value = 0.19), -0.57 for men (p-value = 0.22), and -0.70 for women (p-value = 0.40).\nNeither clinical nor subclinical hyperthyroidism was associated with nitrate concentrations (Table 4), although the number of cases was low (n = 10 cases of clinical hyperthyroidism and n = 25 cases of subclinical hyperthyroidism).\nOdds ratios (ORs) and 95% confidence intervals (CIs) for the prevalence of hyperthyroidism associated with estimated nitrate levels in residential wells\nModels adjusted for age and BMI; model with men and women combined is also adjusted for gender\nThe results for hypothyroidism are presented in Table 5. Overall, there was a borderline significant positive association between subclinical hypothyroidism and high nitrate exposure (age- and BMI-adjusted OR = 1.32; 95% CI: 1.0-1.68), with further analyses revealing the association to be present in women (OR = 1.60; 95% CI: 1.11-2.32), but not in men (OR = 0.98; 95% CI: 0.63-1.52). 
However, the association among women did not increase monotonically with increasing quartiles of estimated nitrate concentrations in their water supply. The interaction between gender and nitrate was not significant (p-interaction = 0.32). No significant associations were observed for clinical hypothyroidism. The results were consistent when stratified by age and BMI.\nOdds ratios (ORs) and 95% confidence intervals (CIs) for the prevalence of hypothyroidism associated with nitrate levels in residential wells\nModels adjusted for age and BMI; model with men and women combined is also adjusted for gender; nitrate concentrations in mg/L\nThe results were unchanged in a sensitivity analysis that excluded participants whose residences were possibly connected to public water supplies (data not shown). The exclusion of persons who reside more than 1500 m from the nearest well also did not result in a material change in our results (data not shown), although it did decrease the odds ratio for high nitrate intake and subclinical hypothyroidism in women from 1.60 to 1.52. We also estimated the OR for subclinical hypothyroidism among women in the absence of exposure misclassification as 2.1 (versus 1.6 observed).", "Our results provide limited support for an association between nitrate levels in private wells and subclinical hypothyroidism among women but not men. With estimated exposure to nitrate in drinking water at or above 6.5 mg/L, we observed a significantly increased prevalence of subclinical hypothyroidism in women, although there was not a monotonic increase with increasing quartiles of nitrate. These findings of an increased prevalence of hypothyroidism among women are consistent with our hypothesis, namely that the competitive inhibition of iodide uptake associated with increased nitrate exposure would result in decreased systemic active thyroid hormone (as indicated by increased TSH levels). 
We did not observe an association for clinical hypothyroidism, but the number of cases in this group was much lower.\nThe mean TSH level in our study population was 3.05 μIU/ml in women and 2.77 μIU/ml in men. These levels are higher than TSH levels in the general US population surveyed by the National Health and Nutrition Examination Survey (NHANES) from 1988-1994 [23], in which the means among women and men were 1.49 μIU/ml and 1.46 μIU/ml, respectively. The prevalence of hypothyroidism and hyperthyroidism in the US population is estimated to be 4.6% (0.3% clinical and 4.3% subclinical) and 1.3% (0.5% clinical and 0.8% subclinical), respectively [23], compared with 11.2% and 1.4% in our study population. However, as the risk of thyroid disease increases with age, the higher prevalence in the Amish could be partially due to the older age distribution in this study population (mean = 50.1 years) compared to the age distribution of the NHANES study population (mean = 45.0 years). When hypothyroidism is compared by sex in this study population and the NHANES population, the prevalence is 1.5 times more common in Amish women than men, whereas it is 2-8 times more common in women than men in the US population [23].\nIn previous epidemiological studies, investigators have identified a relationship between nitrate contamination of water supplies and thyroid dysfunction and thyroid disease. In a cross-sectional study of school children living in areas of Slovakia with high and low nitrate exposure via drinking water, children in the high nitrate area had increased thyroid volume and increased frequency of signs of subclinical thyroid disorders (thyroid hypoechogenicity by ultrasound, increased TSH level and positive anti-thyroid peroxidase (TPO)) [6]. The nitrate levels ranged from 11.3 to 58.7 mg/L (as nitrate-nitrogen) in the highly polluted area and were < 0.4 mg/L nitrate-nitrogen in the low nitrate area. 
Similarly, investigators in the Netherlands conducted a cross-sectional study of women who obtained their drinking water from public supplies and private wells with varying nitrate levels [7]. They observed a dose-dependent increase in the volume of the thyroid associated with increasing nitrate concentrations in drinking water from a combination of public and private supplies, with nitrate levels ranging from 0.004 mg/L to 29.1 mg/L (as nitrate-nitrogen). Women with nitrate levels exceeding 11.1 mg/L as nitrate-nitrogen had a significantly increased prevalence of thyroid gland hypertrophy. Our results for women are consistent with the findings in Slovakia and indirectly support the associations observed in the Netherlands. However, the reason for our finding of an association in women but not men is unclear, particularly since men consume more water than women on average [23]. It is possible that women may be more sensitive to exposures that perturb the thyroid, as indicated by their higher prevalence of thyroid disease [24].\nA previous epidemiologic investigation of the association of nitrate intake from public water supplies and diet with the risk of self-reported hypothyroidism and hyperthyroidism was conducted in a cohort of 21,977 older women in Iowa [25]. The investigators found no association between the prevalence of hypo- or hyperthyroidism and nitrate concentrations in public water supplies; nor was there an association for those who were using private wells. However, intake of nitrate from the diet can be a primary source of exposure when drinking water nitrate levels are below the MCL of 10 mg/L nitrate-N [26-29]. In the Iowa study, increasing intake of nitrate from dietary sources was associated with an increased prevalence of hypothyroidism (OR Q4 = 1.24; 95% CI = 1.10-1.40, P for trend = 0.001) while no association was observed with hyperthyroidism [24]. 
In addition to consumption of tap water, people living in areas with high nitrate concentration in their water supplies may be exposed through their use of water for cooking, irrigation of crops used as a food source, and through milk products from local farm animals. Nitrate is a natural component of plants and is found at high concentrations in leafy vegetables, such as lettuce and spinach, and some root vegetables, such as beets [25]. The lack of dietary questionnaire data in our study is a limitation since estimates of well-water nitrate were below the MCL of 10 mg/L for 89% of participants [25-27]. The lack of dietary information in general likely resulted in exposure misclassification in our study population.\nA strength of this study is the availability of valid measures of TSH using study participant serum samples. Although only one measure was available for each study participant, the use of TSH rather than self-reported thyroid disease is likely to more accurately define thyroid disease. Although factors such as pregnancy and obesity can affect TSH, the levels are a reliable index of the biological activity of thyroid hormones. Anti-TPO was not available, which can also be helpful in the diagnosis of thyroid disease as an autoimmune disease. In addition to measuring TSH and anti-TPO in blood, future studies would be further strengthened by the use of ultrasound technology to determine thyroid volume, which could provide insight into nitrate exposure levels that may cause hypertrophy of the thyroid.\nAn additional strength of our study was that we validated our exposure metric and characterized the sensitivity and specificity based on the median observed versus predicted nitrate level in wells monitored by the USGS in our study area. Specificity was high (93%) indicating that our model classified those with lower nitrate levels accurately. 
The lower sensitivity (67%) indicated that the model underestimated nitrate concentrations for those with higher levels. The result of this misclassification, if nondifferential by disease status [28], would be to attenuate ORs, as we demonstrated for subclinical thyroid disease.\nOur study was limited by a lack of information about the study population's complete residential history. However, we know that the majority of this Amish cohort reside in rural areas, with low relocation rates, and that it is typically the women who relocate to live in the homes or on the same land as their husband's family [29]. Most Amish men would subsequently have a stable residential history and exposure to nitrate contamination of well water over time. It is not clear to what degree a complete residential history would have affected our findings for both men and women. It is possible that the association we observed with subclinical hypothyroidism in women was attenuated due to this source of misclassification. The well measurements were also not randomly selected but represent data collected by the USGS and by individual well owners, who potentially reside in areas with higher levels of nitrate than those who did not receive monitoring attention from the USGS or who were not aware of a problem in their well. We identified a large standard deviation for the wells with repeat measures and were unable to fully explore the reasons for this, given the small proportion (13%) of wells with repeat samples. Additional data on well depth, other hydrogeological factors, or why multiple samples were taken could provide more insight into this observed variation, but were not available.\nOur study is also limited by the fact that we did not have data on the actual water supply source for each residence, nor on personal water consumption. Because most residences were located outside of areas served by public water utilities, we assumed the drinking water supply for participants was a well located at their residence. 
We did not have data on tap water consumption, and thus the approximate daily intake, which can be an important variable in determining exposure. Most people in the United States drink about 1.5-2 L of water per day [30]. Similarly, because of the attention given to water contamination in the Lancaster area, it is possible that some study participants obtain their water from sources that have been purified via reverse osmosis or from bottled water. These limitations would clearly affect the exposure estimates and result in misclassification of the exposure.\nFuture work in this area would be enhanced by the assessment of multiple contaminants present in water sources and the general environment that could be simultaneously affecting thyroid health. Multiple environmental pollutants from industrial as well as agricultural activities may be an important consideration for future investigation. Of particular interest are pesticides, as there is increasing evidence of their ability to alter thyroid hormone homeostasis, causing thyroid dysfunction and thyroid disease [31,32]. The varied effects of these chemicals on thyroid function could affect study findings. Determination of these exposures should be a future study design consideration.\nFurthermore, the effect of contamination from other univalent anions which interfere with the uptake of iodide by the thyroid should be considered in future investigations of the effects of nitrate in drinking water. For example, perchlorate, the oxidizer for solid rocket fuel and a component of some fertilizers, is found in both food and water [33], and interferes with iodide uptake much like nitrate. Similarly, thiocyanate, another univalent anion that causes thyroid dysfunction, is a metabolite of tobacco smoke and is found in certain foods [34-37].", "The present study provides limited evidence that nitrate in residential well water is associated with subclinical hypothyroidism in women but not men. 
Future studies that include validated biomarkers, as well as individual level nitrate exposure estimates of dietary and drinking water intakes, and an assessment of co-contaminants, are needed to provide information about the relevance of nitrate intake and thyroid disease." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Historical assessment of nitrate levels", "Prediction of nitrate levels in well water of participants' residences", "Model validation", "Data analysis", "Results", "Discussion", "Conclusions" ]
[ "Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. The thyroid can concentrate univalent anions such as nitrate (NO3-), which subsequently interferes with the uptake of iodide (I-) by the thyroid and may cause reduced production of thyroid hormones [1-4]. The result of the reduced thyroid hormone production is a compensatory increase in thyroid stimulating hormone (TSH), a sensitive indicator of thyroid function. High and low TSH levels reflect hypo- and hyperfunction of the thyroid gland, respectively. Chronic stimulation of the thyroid gland by excessive TSH has been shown in animals to induce the development of hypertrophy and thyroid disease, as well as hyperplasia, followed by adenoma and carcinoma [5]. At least two epidemiological studies have shown high nitrate intake to be associated with thyroid dysfunction, including hypertrophy and changes in TSH levels [6,7]; however, the impact of nitrate intake on specific thyroid conditions, including hyperthyroidism and hypothyroidism is not clear.\nElevated concentrations of nitrate in groundwater originate from a number of sources, including leaking septic tanks, animal waste, and overuse of nitrogen fertilizers [7]. Nitrate is very soluble and it readily migrates to groundwater. Nitrate contamination of groundwater is an exposure of interest as groundwater serves as the primary drinking water supply for over 90% of the rural population and 50% of the total population of North America [8]. Although the U.S. Environmental Protection Agency (EPA) maximum contaminant level (MCL) for nitrate as nitrogen (nitrate-N) is 10 mg/L in public water sources [9], the levels in private wells are not regulated and the task of monitoring is left to residential owners, presenting opportunities for high levels of human exposure. The U.S. 
Geological Survey (USGS) estimates that nitrate concentrations exceed the EPA's standard in approximately 15% of agricultural and rural areas, exposing over 2 million people in the United States [10]. The MCL for nitrate in drinking water was established to protect against methemoglobinemia, or \"blue baby syndrome,\" to which infants are especially susceptible. However, this health guideline has not been thoroughly evaluated for other health outcomes such as thyroid disease and cancer.\nThe Old Order Amish community is a population characterized by a homogeneous lifestyle, including intensive farming practices and low mobility, and has been relatively unchanged across generations [11]. In areas where many large dairy and poultry farms are concentrated, the land area for disposal of animal wastes is limited. This situation often results in overloading the available land with manure, with considerable nitrogen ending up in groundwater or surface water [8,12]. Lancaster County in southeastern Pennsylvania is an example of such an area where extensive dairy enterprises with high stocking rates prevail. High levels of nitrate in the groundwater [8] suggest that the Amish are a potentially highly exposed population. Given the biological effects of nitrate intake on the thyroid, investigation of whether the Amish in this area exhibit an increased prevalence of thyroid dysfunction and thyroid disease is of interest.\nThe aim of this study is to assess whether nitrate concentrations in well water are associated with levels of TSH and thyroid disease. Our goal was to use survey data on nitrate levels in well-water obtained from the USGS to conduct a cross-sectional analysis of the association between nitrate exposure and thyroid health. 
This study builds upon several ongoing studies of diabetes, obesity, osteoporosis, hypertension, and cardiovascular disease in the Amish, initiated in 1993 at the University of Maryland [13-16].", " Study population Subjects included in this analysis were 3,017 Old Order Amish aged 18 years and older from Lancaster, Chester, and Lebanon Counties, Pennsylvania, for whom thyroid health was assessed through measurement of thyroid stimulating hormone (TSH) levels in their prior participation in one or more studies of health by investigators at the University of Maryland, Baltimore [13-16]. We excluded participants whose residences were located outside of Lancaster, Chester, or Lebanon counties (n = 328) due to sparse nitrate measurement data, and persons who reported use of thyroid medication (n = 145) leaving a total of 2,543 persons (1,336 females and 1,207 males) in the final analysis. Nearly all of the enrolled individuals are descendants of a small number of Amish who settled in Lancaster County, Pennsylvania, in the mid-eighteenth century [13,17,18]. This study was approved by the Institutional Review Boards of the University of Maryland and the National Cancer Institute.\nAll subjects included in this analysis received a standardized examination at the Amish Research Clinic in Strasburg, Pennsylvania or in the participant's home during the time period 1995-2008. As part of this examination, a fasting blood sample was collected from which TSH levels were measured with the Siemens TSH assay (Immulite 2000; Deerfield, IL) according to the manufacturer's instructions. 
The method is a solid-phase, chemiluminescent, competitive analog immunoassay and has analytical sensitivity of 0.004 μIU/ml and upper limit of 75 μIU/ml of TSH.\nResidential street addresses were geocoded using the TeleAtlas (Lebanon, NH) MatchMaker SDK Professional version 8.3 (October 2006), a spatial database of roads, and a modified version of a Microsoft Visual Basic version 6.0 program issued by TeleAtlas to match input addresses to the spatial database. We assigned residence location using an offset of 25 ft from the street centerline. Addresses that were not successfully geocoded were checked for errors using interactive geocoding techniques. Where only a street intersection was available for the residential location (1.0% of residences), we assigned the geographic location of the residence to the middle of the intersection. Where only a zip code was available for the residential location (3.0% of residences), we assigned the geographic location of the residence to the centroid of the zip code. The geocoded location of the residences and the geographic boundary of our study area is shown in Figure 1.\nLocation of participant residences and wells with nitrate measures in study area.\nSubjects included in this analysis were 3,017 Old Order Amish aged 18 years and older from Lancaster, Chester, and Lebanon Counties, Pennsylvania, for whom thyroid health was assessed through measurement of thyroid stimulating hormone (TSH) levels in their prior participation in one or more studies of health by investigators at the University of Maryland, Baltimore [13-16]. We excluded participants whose residences were located outside of Lancaster, Chester, or Lebanon counties (n = 328) due to sparse nitrate measurement data, and persons who reported use of thyroid medication (n = 145) leaving a total of 2,543 persons (1,336 females and 1,207 males) in the final analysis. 
Nearly all of the enrolled individuals are descendants of a small number of Amish who settled in Lancaster County, Pennsylvania, in the mid-eighteenth century [13,17,18]. This study was approved by the Institutional Review Boards of the University of Maryland and the National Cancer Institute.\nAll subjects included in this analysis received a standardized examination at the Amish Research Clinic in Strasburg, Pennsylvania or in the participant's home during the time period 1995-2008. As part of this examination, a fasting blood sample was collected from which TSH levels were measured with the Siemens TSH assay (Immulite 2000; Deerfield, IL) according to the manufacturer's instructions. The method is a solid-phase, chemiluminescent, competitive analog immunoassay and has analytical sensitivity of 0.004 μIU/ml and upper limit of 75 μIU/ml of TSH.\nResidential street addresses were geocoded using the TeleAtlas (Lebanon, NH) MatchMaker SDK Professional version 8.3 (October 2006), a spatial database of roads, and a modified version of a Microsoft Visual Basic version 6.0 program issued by TeleAtlas to match input addresses to the spatial database. We assigned residence location using an offset of 25 ft from the street centerline. Addresses that were not successfully geocoded were checked for errors using interactive geocoding techniques. Where only a street intersection was available for the residential location (1.0% of residences), we assigned the geographic location of the residence to the middle of the intersection. Where only a zip code was available for the residential location (3.0% of residences), we assigned the geographic location of the residence to the centroid of the zip code. 
The geocoded location of the residences and the geographic boundary of our study area is shown in Figure 1.\nLocation of participant residences and wells with nitrate measures in study area.\n Historical assessment of nitrate levels A survey of nitrate levels in well water in Lancaster, Chester, and Lebanon counties was carried out from 1976-2006 by the USGS. The USGS collected data from active monitoring wells in the county and from a well-owner monitoring program conducted by the State Department of Natural Resources in collaboration with Pennsylvania State University. Water samples (50-100 μL) from all programs were measured for nitrate using ion chromatography with a detection limit of 0.002 mg/L as nitrate-N [19].\nA total of 3,613 unique wells were measured in our study area during the survey period. The measurements were not from wells chosen at random but included monitoring data reported by the USGS and samples from individual well owners. A total of 3,057 wells had 1 measurement; 198 wells had 2 measurements; 113 wells had 3 measurements; and 245 wells had more than 3 measurements. Figure 1 shows the geographic distribution of the wells in our study area in relation to the location of the 2,543 participant residences. The median distance between a residence and the closest measured well was 576.0 m (interquartile range: 308.0-897.2 m). The median nitrate concentration by season ranged from 2.0 mg/L as nitrate-nitrogen (hereafter mg/L) for summer months (interquartile range: 0-7.5 mg/L) to 2.7 mg/L in spring months (interquartile range: 0-8.8 mg/L). For wells with multiple measures, the median difference between the maximum and minimum value was 1.2 mg/L (IQR: 0.3-0.9). The mean of the measurements was used for wells with multiple measurements in the exposure modeling (see below).\nA survey of nitrate levels in well water in Lancaster, Chester, and Lebanon counties was carried out from 1976-2006 by the USGS. 
The USGS collected data from active monitoring wells in the county and from a well-owner monitoring program conducted by the State Department of Natural Resources in collaboration with Pennsylvania State University. Water samples (50-100 μL) from all programs were measured for nitrate using ion chromatography with a detection limit of 0.002 mg/L as nitrate-N [19].\nA total of 3,613 unique wells were measured in our study area during the survey period. The measurements were not from wells chosen at random but included monitoring data reported by the USGS and samples from individual well owners. A total of 3,057 wells had 1 measurement; 198 wells had 2 measurements; 113 wells had 3 measurements; and 245 wells had more than 3 measurements. Figure 1 shows the geographic distribution of the wells in our study area in relation to the location of the 2,543 participant residences. The median distance between a residence and the closest measured well was 576.0 m (interquartile range: 308.0-897.2 m). The median nitrate concentration by season ranged from 2.0 mg/L as nitrate-nitrogen (hereafter mg/L) for summer months (interquartile range: 0-7.5 mg/L) to 2.7 mg/L in spring months (interquartile range: 0-8.8 mg/L). For wells with multiple measures, the median difference between the maximum and minimum value was 1.2 mg/L (IQR: 0.3-0.9). The mean of the measurements was used for wells with multiple measurements in the exposure modeling (see below).\n Prediction of nitrate levels in well water of participants' residences We assumed the drinking water supply for participants to be a well located at their reported residence. To estimate nitrate levels at this location, we first determined whether nitrate concentrations in the USGS wells varied across the types of aquifers in the study area (Table 1). Maps of the primary aquifers were obtained from the USGS (created from 30 m pixel satellite imagery) [19]. There are five principal aquifers in the study area (Figure 2). 
The differentiation of aquifer type is important because the transport of contaminants in groundwater is generally confined within these hydrogeologic boundaries. Using the 1992 USGS National Land Cover Data Set (NLCD) [20], we also evaluated nitrate levels in the USGS wells across thirteen types of land use (pasture, deciduous forest, row crops, low intensity residential, mixed forest, commercial or industrial, evergreen forest, high intensity residential, water, quarries/gravel pits, transitional, urban grasses, and woody wetlands) (Table 1 and Figure 3). We found limited temporal variation by season and by decade within each of the five aquifer types over the well measurement period, as well as across the land use classifications used in our analysis (Figure 3).\nDistribution of nitrate concentration in US Geological Survey wells by aquifer type and categories of land use in Lancaster, Lebanon, and Chester Counties, from 1976-2006\nSpatial classification is based on the National Land Cover Data Set; 1992, USGS [ref 38].\nPrincipal aquifers in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003\nLand use in 1992 in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003\nWe used a standard linear mixed effects statistical model to develop a predictive model including the variables principal aquifer and land use. Nitrate levels were log-normally distributed, so we modeled the natural logarithm of the concentration. Spatial correlation existed in the nitrate measurements even after covariate adjustment [21], so we performed kriging on the wells' residuals from the predictive nitrate model. 
If a well had more than one measurement, the mean of the measurements and its residual was used in the modeling. We assumed that the residuals in the model have a single, normally distributed mean structure centered at zero, allowing for universal kriging across the study area. The kriging procedure predicts a 'residual' for each study participant based on a weighted average of the 20 neighboring wells' residuals (within the respective aquifer and land use category). For comparison, we also applied the kriging procedure based on the weighted average of the five neighboring wells' residuals. For example, if the regression model tends to underestimate the true observed log nitrate values (positive residuals) in a particular region of our study area, then individuals in this region are given a representative positive residual prediction that is added to the log nitrate estimate based on the individuals' covariates and the regression parameters. The antilog gives an unbiased predictor of the median nitrate value, resulting in estimates that are more robust to outlier observations than a mean estimator. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L, with a median of 6.5 mg/L and a mean of 6.6 mg/L (sd = 2.9 mg/L). The predicted nitrate level mean was similar to the mean of the measured values used for modeling (6.8 mg/L; sd = 8.3 mg/L), although the standard deviation was smaller.
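As an illustration of this prediction step, the sketch below combines a regression-based log-nitrate estimate with a weighted average of the residuals of the 20 nearest wells in the same aquifer and land-use stratum, then back-transforms with the antilog. The function name, the inverse-distance weights (the study's kriging weights come from a fitted spatial covariance model, not from inverse distance), and the toy data are assumptions made only to keep the example self-contained.

```python
import numpy as np

def predict_nitrate(xy_res, lin_pred, wells_xy, wells_resid, wells_stratum,
                    stratum, k=20):
    """Hypothetical sketch: regression linear predictor (on the log scale)
    plus a weighted average of the residuals of the k nearest measured
    wells in the same aquifer/land-use stratum, back-transformed to mg/L.
    Inverse-distance weights stand in for true kriging weights here."""
    mask = wells_stratum == stratum
    d = np.linalg.norm(wells_xy[mask] - xy_res, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nearest], 1e-9)      # avoid division by zero
    kriged_resid = np.sum(w * wells_resid[mask][nearest]) / np.sum(w)
    # the antilog of a predicted log-scale mean estimates the *median* nitrate
    return np.exp(lin_pred + kriged_resid)

# toy demonstration: 100 wells, all residuals zero, regression predicts log(5)
rng = np.random.default_rng(1)
est = predict_nitrate(np.array([0.0, 0.0]), np.log(5.0),
                      rng.uniform(-1, 1, (100, 2)),
                      np.zeros(100), np.zeros(100, dtype=int), 0)
```

With all residuals at zero, the kriged correction vanishes and the estimate reduces to the back-transformed regression prediction, mirroring how the residual surface only shifts the covariate-based estimate where the regression is locally biased.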
Model validation

The validity of the predictive model was assessed for 77 validation wells by comparing the predicted nitrate concentrations to the observed nitrate concentrations. The validation wells were randomly selected from 482 wells with USGS measurements from 1992-1993. The limited date range was chosen to be consistent with the time frame of the NLCD land use database used in our analyses.
We evaluated model sensitivity, specificity, and percent agreement using the median of the predicted nitrate level (6.5 mg/L as nitrate-N) as a cutpoint for high and low exposure categories. The sensitivity of the model was 67% and the specificity was 93%. The Spearman's rank correlation between the continuous predicted and measured concentrations was 0.73. Cross tabulation of predicted and observed nitrate concentrations by quartiles of the measured nitrate concentrations demonstrated a percent agreement of 56% (Table 2).

Table 2. Comparison of quartiles of the predicted nitrate concentration by quartiles of the measured nitrate concentrations, 77 wells used for the model validation. Percent agreement = 56%.

Data analysis

We used generalized linear regression to assess the association between estimated nitrate levels in well water and continuous TSH measures.
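Given arrays of predicted and observed values for the validation wells, the agreement metrics used in the model validation above can be computed as in the sketch below. The function and variable names are hypothetical, and ties are ignored in the rank computation.

```python
import numpy as np

def validation_metrics(pred, obs, cutpoint):
    """Sketch: cutpoint-based sensitivity/specificity, quartile percent
    agreement, and Spearman correlation (Pearson correlation of ranks;
    ties ignored for simplicity)."""
    hi_pred, hi_obs = pred >= cutpoint, obs >= cutpoint
    sens = np.mean(hi_pred[hi_obs])       # P(predicted high | observed high)
    spec = np.mean(~hi_pred[~hi_obs])     # P(predicted low  | observed low)
    quartile = lambda x: np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x)
    agree = np.mean(quartile(pred) == quartile(obs))   # same-quartile share
    rank = lambda x: np.argsort(np.argsort(x))
    rho = np.corrcoef(rank(pred), rank(obs))[0, 1]     # Spearman's rho
    return sens, spec, agree, rho

# toy check: identical predictions and observations give perfect agreement
s, sp, a, rho = validation_metrics(np.arange(8.0), np.arange(8.0), 3.5)
```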
TSH levels were also used to define disease status based on clinical guidelines [21]. A "normal" range for TSH was defined as 0.4-4 mIU/ml. A TSH level of > 4 to 10 mIU/ml was defined as subclinical hypothyroidism (n = 228) and more than 10 mIU/ml was defined as clinical hypothyroidism (n = 56). A TSH value of 0.1 mIU/ml to 0.4 mIU/ml was defined as subclinical hyperthyroidism (n = 25) and less than 0.1 mIU/ml was defined as clinical hyperthyroidism (n = 10). All of the disease definitions are based on the assumption that TSH was marking primary disease in the thyroid, since other causes of TSH abnormalities (e.g., primary pituitary disease, thyroid hormone resistance) are very uncommon by comparison [22].

Estimated nitrate levels in participants' drinking water were categorized into quartiles and by the median of the predicted well nitrate level (6.5 mg/L). We evaluated the association of the nitrate levels with each thyroid disease group using unconditional logistic regression to compute the odds ratio (OR) and 95% confidence intervals. All models were adjusted for potential confounding factors including age (continuous) and BMI (normal (< 25 kg/m2), overweight (25-30 kg/m2), and obese (> 30 kg/m2)). We conducted analyses stratified by gender as well as for men and women combined. Tests of linear trend were performed by modeling the continuous nitrate estimates. A p-value < 0.05 was considered significant, and all data analyses were conducted using SAS version 9.1.

We conducted two sensitivity analyses. In the first analysis, we excluded participants whose residences were located within boundaries of the U.S. Census Places (USCB 2004) and were therefore possibly connected to public water supplies with nitrate levels below the MCL (16% of study population). In the second analysis, we excluded those whose residence was greater than 1500 m from the nearest well with measurement data (17%) to reduce the probability of measurement error.
We recomputed the OR for subclinical hypothyroidism after correcting for exposure misclassification (i.e., by reclassifying false positives and false negatives) using our estimates of sensitivity and specificity and the prevalence of exposure (50%).
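This kind of correction can be sketched with the standard back-calculation that reallocates false positives and false negatives in each disease group using the exposure metric's sensitivity and specificity. The function name and the example counts below are hypothetical, not the study's data.

```python
def corrected_or(a, b, c, d, se, sp):
    """Sketch of correcting an odds ratio for nondifferential exposure
    misclassification.  a, b = exposed/unexposed cases; c, d = exposed/
    unexposed controls; se, sp = sensitivity and specificity of the
    exposure classification.  Estimated true exposed in a group of size n
    with e observed exposed: (e - (1 - sp) * n) / (se + sp - 1)."""
    def true_exposed(e, n):
        return (e - (1 - sp) * n) / (se + sp - 1)
    a_t, c_t = true_exposed(a, a + b), true_exposed(c, c + d)
    b_t, d_t = (a + b) - a_t, (c + d) - c_t
    return (a_t * d_t) / (b_t * c_t)

crude = corrected_or(20, 80, 10, 90, 1.0, 1.0)    # perfect classification
adj = corrected_or(20, 80, 10, 90, 0.67, 0.93)    # with the model's se/sp
```

With perfect classification the function returns the crude OR; with the validation-based sensitivity and specificity, an observed OR above 1 moves further from the null, the same direction of adjustment reported in the Results (corrected OR of 2.1 versus 1.6 observed).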
Subjects included in this analysis were 3,017 Old Order Amish aged 18 years and older from Lancaster, Chester, and Lebanon Counties, Pennsylvania, for whom thyroid health was assessed through measurement of thyroid stimulating hormone (TSH) levels in their prior participation in one or more studies of health by investigators at the University of Maryland, Baltimore [13-16]. We excluded participants whose residences were located outside of Lancaster, Chester, or Lebanon counties (n = 328) due to sparse nitrate measurement data, and persons who reported use of thyroid medication (n = 145), leaving a total of 2,543 persons (1,336 females and 1,207 males) in the final analysis. Nearly all of the enrolled individuals are descendants of a small number of Amish who settled in Lancaster County, Pennsylvania, in the mid-eighteenth century [13,17,18]. This study was approved by the Institutional Review Boards of the University of Maryland and the National Cancer Institute.

All subjects included in this analysis received a standardized examination at the Amish Research Clinic in Strasburg, Pennsylvania or in the participant's home during the time period 1995-2008.
As part of this examination, a fasting blood sample was collected from which TSH levels were measured with the Siemens TSH assay (Immulite 2000; Deerfield, IL) according to the manufacturer's instructions. The method is a solid-phase, chemiluminescent, competitive analog immunoassay and has an analytical sensitivity of 0.004 μIU/ml and an upper limit of 75 μIU/ml of TSH.

Residential street addresses were geocoded using the TeleAtlas (Lebanon, NH) MatchMaker SDK Professional version 8.3 (October 2006), a spatial database of roads, and a modified version of a Microsoft Visual Basic version 6.0 program issued by TeleAtlas to match input addresses to the spatial database. We assigned residence location using an offset of 25 ft from the street centerline. Addresses that were not successfully geocoded were checked for errors using interactive geocoding techniques. Where only a street intersection was available for the residential location (1.0% of residences), we assigned the geographic location of the residence to the middle of the intersection. Where only a zip code was available for the residential location (3.0% of residences), we assigned the geographic location of the residence to the centroid of the zip code. The geocoded location of the residences and the geographic boundary of our study area is shown in Figure 1.

Figure 1. Location of participant residences and wells with nitrate measures in study area.
The mean TSH level was 2.92 mIU/ml (3.05 mIU/ml for women and 2.77 mIU/ml for men). Based on the TSH measures, the prevalence of clinical hyperthyroidism was 0.4% and the prevalence of subclinical hyperthyroidism was 1.0%. The prevalence of clinical hypothyroidism was 2.2% and the prevalence of subclinical hypothyroidism was 9.0%.

The mean age of participants was 50 years (range: 18-98). The mean BMI was 26.6 kg/m2 for men and 27.7 kg/m2 for women (Table 3). The average BMI of males with clinical hyperthyroidism was lower than that of the general study population, but females with clinical hyperthyroidism had a slightly higher average BMI than the general study population. The average age of persons with thyroid disease was higher in all categories compared to the group with normal TSH levels.
Although smoking data were not available for the entire study population, among those for whom these data were collected, less than 1% of women (4 of 657) and 43% of men (310 of 725) reported ever smoking tobacco.

Table 3. Characteristics of the study population by thyroid disease status. Includes persons who reside in Lancaster, Chester, or Lebanon Counties and excludes persons younger than 18 or who report use of thyroid medications. *Smoking status is based on 45.8% of the study population.

Adjusting for age and BMI, and modeling TSH concentration as the outcome, we observed no significant relationship with nitrate concentration. The B coefficient for men and women combined was -0.12 (p-value = 0.14), -0.13 for men (p-value = 0.11), and -0.12 for women (p-value = 0.40). Modeling the dichotomized high/low nitrate predictor, the B coefficient for men and women combined was -0.64 (p-value = 0.19), -0.57 for men (p-value = 0.22), and -0.70 for women (p-value = 0.40).

Neither clinical nor subclinical hyperthyroidism was associated with nitrate concentrations (Table 4), although the number of cases was low (n = 10 cases of clinical hyperthyroidism and n = 25 cases of subclinical hyperthyroidism).

Table 4. Odds ratios (ORs) and 95% confidence intervals (CIs) for the prevalence of hyperthyroidism associated with estimated nitrate levels in residential wells. Models adjusted for age and BMI; model with men and women combined is also adjusted for gender.

The results for hypothyroidism are presented in Table 5. Overall, there was a borderline significant positive association between subclinical hypothyroidism and high nitrate exposure (age- and BMI-adjusted OR = 1.32; 95% CI: 1.0-1.68), with further analyses revealing the association to be present in women (OR = 1.60; 95% CI: 1.11-2.32), but not in men (OR = 0.98; 95% CI: 0.63-1.52).
However, the association among women did not increase monotonically with increasing quartiles of estimated nitrate concentrations in their water supply. The interaction between gender and nitrate was not significant (p-interaction = 0.32). No significant associations were observed for clinical hypothyroidism. The results were consistent when stratified by age and BMI.

Table 5. Odds ratios (ORs) and 95% confidence intervals (CIs) for the prevalence of hypothyroidism associated with nitrate levels in residential wells. Models adjusted for age and BMI; model with men and women combined is also adjusted for gender; nitrate concentrations in mg/L.

The results were unchanged in a sensitivity analysis that excluded participants whose residences were possibly connected to public water supplies (data not shown). The exclusion of persons who reside more than 1500 m from the nearest well also did not result in a material change in our results (data not shown), although it did decrease the odds ratio for high nitrate intake and subclinical hypothyroidism in women from 1.60 to 1.52. We also estimated the OR for subclinical hypothyroidism among women in the absence of exposure misclassification as 2.1 (versus 1.6 observed).

Our results provide limited support for an association between nitrate levels in private wells and subclinical hypothyroidism among women but not men. With estimated exposure to nitrate in drinking water at or above 6.5 mg/L, we observed a significantly increased prevalence of subclinical hypothyroidism in women, although there was not a monotonic increase with increasing quartiles of nitrate. These findings of an increased prevalence of hypothyroidism among women are consistent with our hypothesis, namely that the competitive inhibition of iodide uptake associated with increased nitrate exposure would result in decreased systemic active thyroid hormone (as indicated by increased TSH levels).
We did not observe an association for clinical hypothyroidism, but the number of cases in this group was much lower.

The mean TSH level in our study population was 3.05 μIU/ml in women and 2.77 μIU/ml in men. These levels are higher than TSH levels in the general US population surveyed by the National Health and Nutrition Examination Survey (NHANES) from 1988-1994 [23], in which the means among women and men were 1.49 μIU/ml and 1.46 μIU/ml, respectively. The prevalence of hypothyroidism and hyperthyroidism in the US population is estimated to be 4.6% (0.3% clinical and 4.3% subclinical) and 1.3% (0.5% clinical and 0.8% subclinical), respectively [23], compared with 11.2% and 1.4% in our study population. However, as the risk of thyroid disease increases with age, the higher prevalence in the Amish could be partially due to the older age distribution of this study population (mean = 50.1 years) compared to that of the NHANES study population (mean = 45.0 years). When hypothyroidism is compared by sex, the prevalence is 1.5 times higher in Amish women than men, whereas it is 2-8 times higher in women than men in the US population [23].

In previous epidemiological studies, investigators have identified a relationship between nitrate contamination of water supplies and thyroid dysfunction and thyroid disease. In a cross-sectional study of school children living in areas of Slovakia with high and low nitrate exposure via drinking water, children in the high nitrate area had increased thyroid volume and an increased frequency of signs of subclinical thyroid disorders (thyroid hypoechogenicity by ultrasound, increased TSH level, and positive anti-thyroid peroxidase (TPO)) [6]. The nitrate levels ranged from 11.3 to 58.7 mg/L (as nitrate-nitrogen) in the highly polluted area and were < 0.4 mg/L nitrate-nitrogen in the low nitrate area.
Similarly, investigators in the Netherlands conducted a cross-sectional study of women who obtained their drinking water from public supplies and private wells with varying nitrate levels [7]. They observed a dose-dependent increase in the volume of the thyroid associated with increasing nitrate concentrations in drinking water from a combination of public and private supplies, with nitrate levels ranging from 0.004 mg/L to 29.1 mg/L (as nitrate-nitrogen). Women with nitrate levels exceeding 11.1 mg/L as nitrate-nitrogen had a significantly increased prevalence of thyroid gland hypertrophy. Our results for women are consistent with the findings in Slovakia and indirectly support the associations observed in the Netherlands. However, the reason for our finding of an association in women but not men is unclear, particularly since men consume more water than women on average [23]. It is possible that women may be more sensitive to exposures that perturb the thyroid, as indicated by their higher prevalence of thyroid disease [24].

A previous epidemiologic investigation of the association of nitrate intake from public water supplies and diet with the risk of self-reported hypothyroidism and hyperthyroidism was conducted in a cohort of 21,977 older women in Iowa [25]. The investigators found no association between the prevalence of hypo- or hyperthyroidism and nitrate concentrations in public water supplies; nor was there an association for those who were using private wells. However, intake of nitrate from the diet can be a primary source of exposure when drinking water nitrate levels are below the MCL of 10 mg/L nitrate-N [26-29]. In the Iowa study, increasing intake of nitrate from dietary sources was associated with an increased prevalence of hypothyroidism (OR Q4 = 1.24; 95% CI = 1.10-1.40, P for trend = 0.001), while no association was observed with hyperthyroidism [24].
In addition to consumption of tap water, people living in areas with high nitrate concentrations in their water supplies may be exposed through their use of water for cooking, irrigation of crops used as a food source, and through milk products from local farm animals. Nitrate is a natural component of plants and is found at high concentrations in leafy vegetables, such as lettuce and spinach, and some root vegetables, such as beets [25]. The lack of dietary questionnaire data in our study is a limitation, since dietary nitrate can be a primary source of exposure when drinking water nitrate is low, and estimates of well-water nitrate were below the MCL of 10 mg/L for 89% of participants [25-27]. The lack of dietary information likely resulted in exposure misclassification in our study population.
A strength of this study is the availability of valid measures of TSH using study participant serum samples. Although only one measure was available for each study participant, the use of TSH rather than self-reported thyroid disease is likely to define thyroid disease more accurately. Although factors such as pregnancy and obesity can affect TSH, its levels are a reliable index of the biological activity of thyroid hormones. Anti-TPO, which can also be helpful in diagnosing thyroid disease as an autoimmune disease, was not available. In addition to measuring TSH and anti-TPO in blood, future studies would be further strengthened by the use of ultrasound technology to determine thyroid volume, which could provide insight into nitrate exposure levels that may cause hypertrophy of the thyroid.
An additional strength of our study was that we validated our exposure metric and characterized its sensitivity and specificity based on the median observed versus predicted nitrate level in wells monitored by the USGS in our study area. Specificity was high (93%), indicating that our model accurately classified those with lower nitrate levels.
The lower sensitivity (67%) indicated that the model underestimated nitrate concentrations for those with higher levels. The result of this misclassification, if nondifferential by disease status [28], would be to attenuate ORs, as we demonstrated for subclinical thyroid disease.
Our study was limited by a lack of information about the study population's complete residential history. However, we know that the majority of this Amish cohort reside in rural areas, with low relocation rates, and that it is typically the women who relocate to live in the homes or on the same land as their husband's family [29]. Most Amish men would therefore have a stable residential history and exposure to nitrate contamination of well water over time. It is not clear to what degree a complete residential history would have affected our findings for men and women. It is possible that the association we observed with subclinical hypothyroidism in women was attenuated due to this source of misclassification. The well measurements were also not randomly selected but represent data collected by the USGS and by individuals who potentially reside in areas with higher levels of nitrate than those who did not receive monitoring attention from the USGS or who were not aware of a problem in their well. We identified a large standard deviation for the wells with repeat measures and were unable to explore the reasons for this variation beyond noting the small proportion (13%) of repeat samples. Additional data on well depth, other hydrogeological factors, or the reasons multiple samples were taken could provide more insight into this observed variation, but were not available.
Our study is also limited by the fact that we did not have data on the actual water supply source for each residence, nor on personal water consumption. Because most residences were located outside of areas served by public water utilities, we assumed the drinking water supply for participants was a well located at their residence.
We did not have data on tap water consumption, and thus the approximate daily intake, which can be an important variable in determining exposure. Most people in the United States drink about 1.5-2 L of water per day [30]. Similarly, because of the attention given to water contamination in the Lancaster area, it is possible that some study participants obtain their water from sources that have been purified via reverse osmosis or from bottled water. These limitations would clearly affect the exposure estimates and result in misclassification of the exposure.
Future work in this area would be enhanced by the assessment of multiple contaminants present in water sources and the general environment that could be simultaneously affecting thyroid health. Multiple environmental pollutants from industrial as well as agricultural activities may be an important consideration for future investigation. Of particular interest are pesticides, as there is increasing evidence of their ability to alter thyroid hormone homeostasis, causing thyroid dysfunction and thyroid disease [31,32]. The varied effects of these chemicals on thyroid function could affect study findings. Determination of these exposures should be a future study design consideration.
Furthermore, the effect of contamination from other univalent anions that interfere with the uptake of iodide by the thyroid should be considered in future investigations of the effects of nitrate in drinking water. For example, perchlorate, the oxidizer for solid rocket fuel and a component of some fertilizers, is found in both food and water [33] and interferes with iodide uptake much like nitrate. Similarly, thiocyanate, another univalent anion that causes thyroid dysfunction, is a metabolite of tobacco smoke and is found in certain foods [34-37].
Conclusions: The present study provides limited evidence that nitrate in residential well water is associated with subclinical hypothyroidism in women but not men.
Future studies that include validated biomarkers, as well as individual-level nitrate exposure estimates of dietary and drinking water intakes, and an assessment of co-contaminants, are needed to provide information about the relevance of nitrate intake to thyroid disease.
Keywords: Nitrate; Thyroid Conditions; TSH; Old Order Amish; Water pollution; Drinking water
Background: Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. The thyroid can concentrate univalent anions such as nitrate (NO3-), which subsequently interferes with the uptake of iodide (I-) by the thyroid and may cause reduced production of thyroid hormones [1-4]. The result of the reduced thyroid hormone production is a compensatory increase in thyroid stimulating hormone (TSH), a sensitive indicator of thyroid function. High and low TSH levels reflect hypo- and hyperfunction of the thyroid gland, respectively. Chronic stimulation of the thyroid gland by excessive TSH has been shown in animals to induce the development of hypertrophy and thyroid disease, as well as hyperplasia, followed by adenoma and carcinoma [5]. At least two epidemiological studies have shown high nitrate intake to be associated with thyroid dysfunction, including hypertrophy and changes in TSH levels [6,7]; however, the impact of nitrate intake on specific thyroid conditions, including hyperthyroidism and hypothyroidism is not clear. Elevated concentrations of nitrate in groundwater originate from a number of sources, including leaking septic tanks, animal waste, and overuse of nitrogen fertilizers [7]. Nitrate is very soluble and it readily migrates to groundwater. Nitrate contamination of groundwater is an exposure of interest as groundwater serves as the primary drinking water supply for over 90% of the rural population and 50% of the total population of North America [8]. Although the U.S. Environmental Protection Agency (EPA) maximum contaminant level (MCL) for nitrate as nitrogen (nitrate-N) is 10 mg/L in public water sources [9], the levels in private wells are not regulated and the task of monitoring is left to residential owners, presenting opportunities for high levels of human exposure. The U.S. 
Geological Survey (USGS) estimates that nitrate concentrations exceed the EPA's standard in approximately 15% of agricultural and rural areas, exposing over 2 million people in the United States [10]. The MCL for nitrate in drinking water was established to protect against methemoglobinemia, or "blue baby syndrome," to which infants are especially susceptible. However, this health guideline has not been thoroughly evaluated for other health outcomes such as thyroid disease and cancer. The Old Order Amish community is a population characterized by a homogeneous lifestyle, including intensive farming practices and low mobility, and has been relatively unchanged across generations [11]. In areas where many large dairy and poultry farms are concentrated, the land area for disposal of animal wastes is limited. This situation often results in overloading the available land with manure, with considerable nitrogen ending up in groundwater or surface water [8,12]. Lancaster County in southeastern Pennsylvania is an example of such an area where extensive dairy enterprises with high stocking rates prevail. High levels of nitrate in the groundwater [8] suggest that the Amish are a potentially highly exposed population. Given the biological effects of nitrate intake on the thyroid, investigation of whether the Amish in this area exhibit an increased prevalence of thyroid dysfunction and thyroid disease is of interest. The aim of this study is to assess whether nitrate concentrations in well water are associated with levels of TSH and thyroid disease. Our goal was to use survey data on nitrate levels in well-water obtained from the USGS to conduct a cross-sectional analysis of the association between nitrate exposure and thyroid health. This study builds upon several ongoing studies of diabetes, obesity, osteoporosis, hypertension, and cardiovascular disease in the Amish, initiated in 1993 at the University of Maryland [13-16]. 
Methods: Study population Subjects included in this analysis were 3,017 Old Order Amish aged 18 years and older from Lancaster, Chester, and Lebanon Counties, Pennsylvania, for whom thyroid health was assessed through measurement of thyroid stimulating hormone (TSH) levels in their prior participation in one or more studies of health by investigators at the University of Maryland, Baltimore [13-16]. We excluded participants whose residences were located outside of Lancaster, Chester, or Lebanon counties (n = 328) due to sparse nitrate measurement data, and persons who reported use of thyroid medication (n = 145) leaving a total of 2,543 persons (1,336 females and 1,207 males) in the final analysis. Nearly all of the enrolled individuals are descendants of a small number of Amish who settled in Lancaster County, Pennsylvania, in the mid-eighteenth century [13,17,18]. This study was approved by the Institutional Review Boards of the University of Maryland and the National Cancer Institute. All subjects included in this analysis received a standardized examination at the Amish Research Clinic in Strasburg, Pennsylvania or in the participant's home during the time period 1995-2008. As part of this examination, a fasting blood sample was collected from which TSH levels were measured with the Siemens TSH assay (Immulite 2000; Deerfield, IL) according to the manufacturer's instructions. The method is a solid-phase, chemiluminescent, competitive analog immunoassay and has analytical sensitivity of 0.004 μIU/ml and upper limit of 75 μIU/ml of TSH. Residential street addresses were geocoded using the TeleAtlas (Lebanon, NH) MatchMaker SDK Professional version 8.3 (October 2006), a spatial database of roads, and a modified version of a Microsoft Visual Basic version 6.0 program issued by TeleAtlas to match input addresses to the spatial database. We assigned residence location using an offset of 25 ft from the street centerline. 
Addresses that were not successfully geocoded were checked for errors using interactive geocoding techniques. Where only a street intersection was available for the residential location (1.0% of residences), we assigned the geographic location of the residence to the middle of the intersection. Where only a zip code was available for the residential location (3.0% of residences), we assigned the geographic location of the residence to the centroid of the zip code. The geocoded location of the residences and the geographic boundary of our study area is shown in Figure 1 (Location of participant residences and wells with nitrate measures in study area).
Historical assessment of nitrate levels: A survey of nitrate levels in well water in Lancaster, Chester, and Lebanon counties was carried out from 1976-2006 by the USGS. The USGS collected data from active monitoring wells in the county and from a well-owner monitoring program conducted by the State Department of Natural Resources in collaboration with Pennsylvania State University. Water samples (50-100 μL) from all programs were measured for nitrate using ion chromatography with a detection limit of 0.002 mg/L as nitrate-N [19].
A total of 3,613 unique wells were measured in our study area during the survey period. The measurements were not from wells chosen at random but included monitoring data reported by the USGS and samples from individual well owners. A total of 3,057 wells had 1 measurement; 198 wells had 2 measurements; 113 wells had 3 measurements; and 245 wells had more than 3 measurements. Figure 1 shows the geographic distribution of the wells in our study area in relation to the location of the 2,543 participant residences. The median distance between a residence and the closest measured well was 576.0 m (interquartile range: 308.0-897.2 m). The median nitrate concentration by season ranged from 2.0 mg/L as nitrate-nitrogen (hereafter mg/L) for summer months (interquartile range: 0-7.5 mg/L) to 2.7 mg/L in spring months (interquartile range: 0-8.8 mg/L). For wells with multiple measures, the median difference between the maximum and minimum value was 1.2 mg/L (IQR: 0.3-0.9). The mean of the measurements was used for wells with multiple measurements when we did the exposure modeling (see below).
Prediction of nitrate levels in well water of participants' residences: We assumed the drinking water supply for participants to be a well located at their reported residence. To estimate nitrate levels at this location, we first determined whether nitrate concentrations in the USGS wells varied across the types of aquifers in the study area (Table 1). Maps of the primary aquifers were obtained from the USGS (created from 30 m pixel satellite imagery) [19]. There are five principal aquifers in the study area (Figure 2). The differentiation of aquifer type is important because the transport of contaminants in groundwater is generally confined to within these hydrogeologic boundaries. Using the 1992 USGS National Land Cover Data Set (NLCD) [20], we also evaluated nitrate levels in the USGS wells across thirteen types of land use (pasture, deciduous forest, row crops, low intensity residential, mixed forest, commercial or industrial, evergreen forest, high intensity residential, water, quarries/gravel pits, transitional, urban grasses, and woody wetlands) (Table 1 and Figure 3).
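Summaries like those in Table 1 (the distribution of well nitrate within each aquifer type and land-use class) amount to a stratified median. A minimal sketch, with invented stratum names and nitrate values purely for illustration (not study data):

```python
from statistics import median

# Hypothetical well records as (stratum, nitrate mg/L as nitrate-N) pairs;
# the stratum could equally be an aquifer type or an NLCD land-use class.
wells = [
    ("carbonate", 8.2), ("carbonate", 11.5), ("carbonate", 6.9),
    ("crystalline", 3.1), ("crystalline", 4.4),
    ("siliciclastic", 2.0), ("siliciclastic", 1.2), ("siliciclastic", 2.8),
]

# Group measurements by stratum, then take the median within each group.
by_stratum = {}
for stratum, value in wells:
    by_stratum.setdefault(stratum, []).append(value)

summary = {s: round(median(v), 1) for s, v in by_stratum.items()}
```

The same grouping, applied to interquartile ranges, yields the rest of a Table 1-style summary.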
We found limited temporal variation by season and by decade within each of the five aquifer types over the well measurement period, as well as across the land use classifications used in our analysis (Figure 3). Distribution of nitrate concentration in US Geological Survey wells by aquifer type and categories of land use in Lancaster, Lebanon, and Chester Counties, from 1976-2006. Spatial classification is based on the National Land Cover Data Set; 1992, USGS [38]. Principal aquifers in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003. Land use in 1992 in the three study area counties in southeastern Pennsylvania. Data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey. Madison, WI; 2003. We used a standard linear mixed effects statistical model to develop a predictive model including the variables principal aquifer and land use. Nitrate levels were log normally distributed so we modeled the natural logarithm of the concentration. Spatial correlation existed in the nitrate measurements even after covariate adjustment [21], so we performed kriging on the wells' residuals from the predictive nitrate model. If a well had more than one measurement, the mean of the measurements and its residual was used in the modeling. We assumed that the residuals in the model have a single, normally distributed mean structure centered at zero, allowing for universal kriging across the study area. The kriging procedure predicts a 'residual' for each study participant based on a weighted average of the 20 neighboring wells' residuals (within the respective aquifer and land use category). For comparison, we also applied the kriging procedure based on the weighted average of the five neighboring wells' residuals.
For example, if for a particular region of our study area, the regression model tends to underestimate the true observed log nitrate values (positive residuals), then individuals in this region will be given a representative positive residual prediction that is added to the log nitrate estimate based on the individuals' covariates and the regression parameters. The antilog gives an unbiased predictor of the median nitrate value, resulting in estimates that are more robust to outlier observations than a mean estimator. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L, with a median of 6.5 mg/L and a mean of 6.6 mg/L (sd = 2.9 mg/L). The predicted nitrate level mean was similar to the mean of the measured values used for modeling (6.8 mg/L; sd = 8.3 mg/L), although the standard deviation was smaller.
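The prediction step described above (a regression estimate on the log scale, plus an interpolated residual from neighboring wells, back-transformed by the antilog) can be sketched as follows. This is a simplified stand-in, not the authors' code: it uses inverse-distance weights for the k nearest wells' residuals, whereas true universal kriging derives the weights from a fitted variogram; all names and coordinates are illustrative.

```python
import math

def predict_nitrate(x, y, regression_log_nitrate, wells, k=20):
    """Estimate nitrate (mg/L) at a residence: regression prediction on
    the log scale plus a distance-weighted average of the residuals of
    the k nearest monitored wells (assumed to share the residence's
    aquifer and land-use category). wells: list of (wx, wy, residual)."""
    nearest = sorted(wells, key=lambda w: (w[0] - x) ** 2 + (w[1] - y) ** 2)[:k]
    # Inverse-distance weights (kriging would use variogram-based weights).
    weights = [1.0 / max(math.dist((x, y), (wx, wy)), 1e-9)
               for wx, wy, _ in nearest]
    avg_resid = sum(wt * r for wt, (_, _, r) in zip(weights, nearest)) / sum(weights)
    # Antilog of (regression estimate + residual): a median-type predictor.
    return math.exp(regression_log_nitrate + avg_resid)
```

With all neighboring residuals at zero, the prediction reduces to the antilog of the regression estimate; positive residuals in the neighborhood pull the estimate upward.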
Model validation: The validity of the predictive model was assessed for 77 validation wells by comparing the predicted nitrate concentrations to the observed nitrate concentrations. The validation wells were randomly selected from 482 wells with USGS measurements from 1992-1993. The limited date range was chosen to be consistent with the time frame of the NLCD land use database used in our analyses. We evaluated model sensitivity, specificity, and percent agreement using the median of the predicted nitrate level (6.5 mg/L as nitrate-N) as a cutpoint for high and low exposure categories. The sensitivity of the model was 67% and the specificity was 93%. The Spearman's rank correlation between the continuous predicted and measured concentrations was 0.73. Cross tabulation of predicted and observed nitrate concentrations by quartiles of the measured nitrate concentrations demonstrated a percent agreement of 56% (Table 2).
Comparison of quartiles of the predicted nitrate concentration by quartiles of the measured nitrate concentrations, 77 wells used for the model validation. Percent agreement = 56%.
Data analysis: We used generalized linear regression to assess the association between estimated nitrate levels in well water and continuous TSH measures. TSH levels were also used to define disease status based on clinical guidelines [21]. A "normal" range for TSH was defined as 0.4-4 mIU/ml. A TSH level of > 4 mIU/ml to 10 mIU/ml was defined as subclinical hypothyroidism (n = 228) and more than 10 mIU/ml was defined as clinical hypothyroidism (n = 56). A TSH value of 0.1 mIU/ml to 0.4 mIU/ml was defined as subclinical hyperthyroidism (n = 25) and less than 0.1 mIU/ml was defined as clinical hyperthyroidism (n = 10).
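The TSH cutpoints above map directly to a small classifier. A minimal sketch; the assignment of values falling exactly on the 0.4 and 4 mIU/ml boundaries to the normal range is an assumption, since the text does not specify boundary handling:

```python
def classify_tsh(tsh):
    """Assign thyroid status from a TSH value (mIU/ml) using the
    study's categories; exact-boundary handling is an assumption."""
    if tsh < 0.1:
        return "clinical hyperthyroidism"
    if tsh < 0.4:
        return "subclinical hyperthyroidism"
    if tsh <= 4.0:
        return "normal"
    if tsh <= 10.0:
        return "subclinical hypothyroidism"
    return "clinical hypothyroidism"
```

For example, the study population's mean female TSH of 3.05 mIU/ml falls in the normal range, while a value of 6.2 would be counted as subclinical hypothyroidism.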
Study population: Subjects included in this analysis were 3,017 Old Order Amish aged 18 years and older from Lancaster, Chester, and Lebanon Counties, Pennsylvania, for whom thyroid health was assessed through measurement of thyroid stimulating hormone (TSH) levels in their prior participation in one or more studies of health by investigators at the University of Maryland, Baltimore [13-16]. We excluded participants whose residences were located outside of Lancaster, Chester, or Lebanon counties (n = 328) due to sparse nitrate measurement data, and persons who reported use of thyroid medication (n = 145), leaving a total of 2,543 persons (1,336 females and 1,207 males) in the final analysis. Nearly all of the enrolled individuals are descendants of a small number of Amish who settled in Lancaster County, Pennsylvania, in the mid-eighteenth century [13,17,18]. This study was approved by the Institutional Review Boards of the University of Maryland and the National Cancer Institute. All subjects included in this analysis received a standardized examination at the Amish Research Clinic in Strasburg, Pennsylvania, or in the participant's home during the period 1995-2008. As part of this examination, a fasting blood sample was collected from which TSH levels were measured with the Siemens TSH assay (Immulite 2000; Deerfield, IL) according to the manufacturer's instructions. The method is a solid-phase, chemiluminescent, competitive analog immunoassay with an analytical sensitivity of 0.004 μIU/ml and an upper limit of 75 μIU/ml of TSH.
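The TSH cutpoints used later to define thyroid disease status (normal 0.4-4 mIU/ml; subclinical hypothyroidism > 4 to 10 mIU/ml; clinical hypothyroidism > 10 mIU/ml; subclinical hyperthyroidism 0.1-0.4 mIU/ml; clinical hyperthyroidism < 0.1 mIU/ml) can be sketched as a simple classifier. The thresholds below come from the paper; the assignment of values falling exactly on a boundary is an assumption of this sketch.

```python
def classify_tsh(tsh_miu_per_ml: float) -> str:
    """Map a TSH value (mIU/ml) to the study's thyroid status categories.

    Thresholds follow the clinical-guideline cutpoints given in the text;
    handling of exact boundary values is an assumption here.
    """
    if tsh_miu_per_ml < 0.1:
        return "clinical hyperthyroidism"
    if tsh_miu_per_ml < 0.4:
        return "subclinical hyperthyroidism"
    if tsh_miu_per_ml <= 4:
        return "normal"
    if tsh_miu_per_ml <= 10:
        return "subclinical hypothyroidism"
    return "clinical hypothyroidism"
```

For example, the study-population mean TSH of 2.92 mIU/ml falls in the "normal" band under this rule.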
Residential street addresses were geocoded using the TeleAtlas (Lebanon, NH) MatchMaker SDK Professional version 8.3 (October 2006), a spatial database of roads, and a modified version of a Microsoft Visual Basic version 6.0 program issued by TeleAtlas to match input addresses to the spatial database. We assigned residence location using an offset of 25 ft from the street centerline. Addresses that were not successfully geocoded were checked for errors using interactive geocoding techniques. Where only a street intersection was available for the residential location (1.0% of residences), we assigned the geographic location of the residence to the middle of the intersection. Where only a zip code was available for the residential location (3.0% of residences), we assigned the geographic location of the residence to the centroid of the zip code. The geocoded locations of the residences and the geographic boundary of our study area are shown in Figure 1 (Location of participant residences and wells with nitrate measures in study area). Historical assessment of nitrate levels: A survey of nitrate levels in well water in Lancaster, Chester, and Lebanon counties was carried out from 1976 to 2006 by the USGS. The USGS collected data from active monitoring wells in the counties and from a well-owner monitoring program conducted by the State Department of Natural Resources in collaboration with Pennsylvania State University. Water samples (50-100 μL) from all programs were measured for nitrate using ion chromatography with a detection limit of 0.002 mg/L as nitrate-N [19]. A total of 3,613 unique wells were measured in our study area during the survey period. The measurements were not from wells chosen at random but included monitoring data reported by the USGS and samples from individual well owners. A total of 3,057 wells had 1 measurement; 198 wells had 2 measurements; 113 wells had 3 measurements; and 245 wells had more than 3 measurements.
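Because some wells were sampled more than once, the exposure modeling described below used the mean of each well's measurements. A minimal sketch of that aggregation step (well IDs and values below are hypothetical):

```python
from collections import defaultdict


def mean_per_well(measurements):
    """Collapse repeated nitrate measurements (mg/L as nitrate-N)
    into one mean value per well, as done before exposure modeling.

    `measurements` is an iterable of (well_id, value) pairs.
    """
    sums = defaultdict(lambda: [0.0, 0])  # well_id -> [running total, count]
    for well_id, value in measurements:
        sums[well_id][0] += value
        sums[well_id][1] += 1
    return {well_id: total / n for well_id, (total, n) in sums.items()}
```

With hypothetical input `[("A", 2.0), ("A", 4.0), ("B", 6.5)]`, well A collapses to its mean of 3.0 while the singly measured well B keeps its value.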
Figure 1 shows the geographic distribution of the wells in our study area in relation to the locations of the 2,543 participant residences. The median distance between a residence and the closest measured well was 576.0 m (interquartile range: 308.0-897.2 m). The median nitrate concentration by season ranged from 2.0 mg/L as nitrate-nitrogen (hereafter mg/L) for summer months (interquartile range: 0-7.5 mg/L) to 2.7 mg/L in spring months (interquartile range: 0-8.8 mg/L). For wells with multiple measures, the median difference between the maximum and minimum value was 1.2 mg/L (IQR: 0.3-0.9). The mean of the measurements was used for wells with multiple measurements when we did the exposure modeling (see below). Prediction of nitrate levels in well water of participants' residences: We assumed the drinking water supply for participants to be a well located at their reported residence. To estimate nitrate levels at this location, we first determined whether nitrate concentrations in the USGS wells varied across the types of aquifers in the study area (Table 1). Maps of the primary aquifers were obtained from the USGS (created from 30 m pixel satellite imagery) [19]. There are five principal aquifers in the study area (Figure 2). The differentiation of aquifer type is important because the transport of contaminants in groundwater is generally confined to within these hydrogeologic boundaries. Using the 1992 USGS National Land Cover Data Set (NLCD) [20], we also evaluated nitrate levels in the USGS wells across thirteen types of land use (pasture, deciduous forest, row crops, low intensity residential, mixed forest, commercial or industrial, evergreen forest, high intensity residential, water, quarries/gravel pits, transitional, urban grasses, and woody wetlands) (Table 1 and Figure 3).
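Summary statistics such as the median residence-to-well distance require a nearest-neighbor distance for each geocoded residence. A brute-force sketch using great-circle distance (the coordinates in the usage below are hypothetical; the study worked from geocoded street addresses):

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def distance_to_nearest_well(residence, wells):
    """Distance (m) from a residence (lat, lon) to the closest measured well."""
    return min(haversine_m(residence[0], residence[1], w[0], w[1]) for w in wells)
```

Applying `distance_to_nearest_well` to every residence and taking the median of the results would reproduce the kind of summary reported above (576.0 m in this study).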
We found limited temporal variation by season and by decade within each of the five aquifer types over the well measurement period, as well as across the land use classifications used in our analysis (Figure 3). (Table 1: Distribution of nitrate concentration in US Geological Survey wells by aquifer type and categories of land use in Lancaster, Lebanon, and Chester Counties, 1976-2006; spatial classification is based on the National Land Cover Data Set, 1992, USGS [ref 38]. Figure 2: Principal aquifers in the three study area counties in southeastern Pennsylvania; data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey, Madison, WI; 2003. Figure 3: Land use in 1992 in the three study area counties in southeastern Pennsylvania; data from Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands: U.S. Geological Survey, Madison, WI; 2003.) We used a standard linear mixed effects statistical model to develop a predictive model including the variables principal aquifer and land use. Nitrate levels were log-normally distributed, so we modeled the natural logarithm of the concentration. Spatial correlation existed in the nitrate measurements even after covariate adjustment [21], so we performed kriging on the wells' residuals from the predictive nitrate model. If a well had more than one measurement, the mean of the measurements and its residual was used in the modeling. We assumed that the residuals in the model have a single, normally distributed mean structure centered at zero, allowing for universal kriging across the study area. The kriging procedure predicts a 'residual' for each study participant based on a weighted average of the 20 neighboring wells' residuals (within the respective aquifer and land use category). For comparison, we also applied the kriging procedure based on the weighted average of the five neighboring wells' residuals.
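The kriging step predicts a residual at each residence as a weighted average of neighboring wells' residuals within the same aquifer and land-use stratum, and the prediction is added to the regression estimate on the log scale before back-transforming. As a simplified stand-in for universal kriging (which derives its weights from a fitted variogram), an inverse-distance-weighted average of the k nearest residuals illustrates the idea; this is a sketch, not the geostatistical model actually fitted:

```python
import math


def idw_residual(x, y, wells, k=20, power=2.0):
    """Inverse-distance-weighted average of the k nearest wells' residuals.

    `wells` is a list of (x, y, residual) tuples drawn from the same aquifer
    and land-use stratum; a simplified stand-in for the kriging predictor.
    """
    nearest = sorted(wells, key=lambda w: math.hypot(w[0] - x, w[1] - y))[:k]
    weights, total = [], 0.0
    for wx, wy, res in nearest:
        d = math.hypot(wx - x, wy - y)
        if d == 0:  # residence coincides with a measured well
            return res
        w = 1.0 / d ** power
        weights.append((w, res))
        total += w
    return sum(w * res for w, res in weights) / total


def predict_nitrate(log_regression_estimate, residual):
    """Add the interpolated residual on the log scale and back-transform;
    as the text notes, the antilog yields a median-type, outlier-robust
    predictor of the nitrate level."""
    return math.exp(log_regression_estimate + residual)
```

A residence sitting midway between two equally distant wells simply receives the mean of their residuals, and a residence colocated with a well inherits that well's residual exactly.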
For example, if for a particular region of our study area, the regression model tends to underestimate the true observed log nitrate values (positive residuals) then individuals in this region will be given a representative positive residual prediction that is added to the log nitrate estimate based on the individuals' covariates and the regression parameters. The antilog gives an unbiased predictor of median nitrate value resulting in estimates that are more robust to outlier observations than a mean estimator. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L, with a median of 6.5 mg/L and a mean of 6.6 mg/L (sd = 2.9 mg/L). The predicted nitrate level mean was similar to the mean of the measured values used for modeling (6.8 mg/L; sd = 8.3 mg/L) although the standard deviation was smaller. Model validation: The validity of the predictive model was assessed for 77 validation wells by comparing the predicted nitrate concentration to the observed nitrate concentrations. The validation wells were randomly selected from 482 wells with USGS measurements from 1992-1993. The limited date range was chosen to be consistent with the time frame of the NLCD land use database used in our analyses. We evaluated model sensitivity, specificity, and percent agreement using the median of the predicted nitrate level (6.5 mg/L as nitrate-N) as a cutpoint for high and low exposure categories. The sensitivity of the model was 67% and the specificity was 93%. The Spearman's rank correlation between the continuous predicted and measured concentrations was 0.73. Cross tabulation of predicted and observed nitrate concentrations by quartiles of the measured nitrate concentrations demonstrated a percent agreement of 56% (Table 2). 
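The validation metrics reported here (sensitivity, specificity, and percent agreement of predictions dichotomized at the 6.5 mg/L median cutpoint) can be computed directly from paired predicted and observed values; a minimal sketch with hypothetical data:

```python
def binary_validation(predicted, observed, cutpoint=6.5):
    """Sensitivity, specificity, and percent agreement of predictions
    dichotomized at `cutpoint` against observed values (treating the
    observed high/low category as the reference standard)."""
    tp = fp = tn = fn = 0
    for p, o in zip(predicted, observed):
        p_high, o_high = p >= cutpoint, o >= cutpoint
        if o_high:
            tp += p_high
            fn += not p_high
        else:
            tn += not p_high
            fp += p_high
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    agreement = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, agreement
```

Run over the 77 validation wells, this kind of tabulation yields the 67% sensitivity and 93% specificity reported in the text; the quartile-by-quartile percent agreement (Table 2) is the analogous calculation on a 4 x 4 cross-tabulation.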
(Table 2: Comparison of quartiles of the predicted nitrate concentration by quartiles of the measured nitrate concentrations for the 77 wells used in the model validation; percent agreement = 56%.) Data analysis: We used generalized linear regression to assess the association between estimated nitrate levels in well water and continuous TSH measures. TSH levels were also used to define disease status based on clinical guidelines [21]. A "normal" range for TSH was defined as 0.4-4 mIU/ml. A TSH level of > 4 to 10 mIU/ml was defined as subclinical hypothyroidism (n = 228), and more than 10 mIU/ml was defined as clinical hypothyroidism (n = 56). A TSH value of 0.1 mIU/ml to 0.4 mIU/ml was defined as subclinical hyperthyroidism (n = 25), and less than 0.1 mIU/ml was defined as clinical hyperthyroidism (n = 10). All of the disease definitions are based on the assumption that TSH was marking primary disease in the thyroid, since other causes of TSH abnormalities (e.g., primary pituitary disease, thyroid hormone resistance) are very uncommon by comparison [22]. Estimated nitrate levels in participants' drinking water were categorized into quartiles and by the median of the predicted well nitrate level (6.5 mg/L). We evaluated the association of the nitrate levels with each thyroid disease group using unconditional logistic regression to compute the odds ratio (OR) and 95% confidence intervals. All models were adjusted for potential confounding factors, including age (continuous) and BMI (normal: < 25 kg/m2; overweight: 25-30 kg/m2; obese: > 30 kg/m2). We conducted analyses stratified by gender as well as for men and women combined. Tests of linear trend were performed by modeling the continuous nitrate estimates. A p-value < 0.05 was considered significant, and all data analyses were conducted using SAS version 9.1. We conducted two sensitivity analyses. In the first analysis, we excluded participants whose residences were located within boundaries of the U.S. Census Places (USCB 2004) and were therefore possibly connected to public water supplies with nitrate levels below the MCL (16% of study population). In the second analysis, we excluded those whose residence was greater than 1500 m from the nearest well with measurement data (17%) to reduce the probability of measurement error. We recomputed the OR for subclinical hypothyroidism after correcting for exposure misclassification (i.e., by reclassifying false positives and false negatives) using our estimates of sensitivity and specificity and the prevalence of exposure (50%). Results: The mean TSH level was 2.92 mIU/ml (3.05 mIU/ml for women and 2.77 mIU/ml for men). Based on the TSH measures, the prevalence of clinical hyperthyroidism was 0.4% and the prevalence of subclinical hyperthyroidism was 1.0%. The prevalence of clinical hypothyroidism was 2.2% and the prevalence of subclinical hypothyroidism was 9.0%. The mean age of participants was 50 years (range: 18-98). The mean BMI was 26.6 kg/m2 for men and 27.7 kg/m2 for women (Table 3). The average BMI of males with clinical hyperthyroidism was lower than that of those in the general study population, but females with clinical hyperthyroidism had a slightly higher average BMI than the general study population. The average age of persons with thyroid disease was higher in all categories compared to the group with normal TSH levels. Although smoking data were not available for the entire study population, among those for whom these data were collected, less than 1% of women (4 of 657) and 43% of males (310 of 725) reported ever smoking tobacco. (Table 3: Characteristics of the study population by thyroid disease status; includes persons who reside in Lancaster, Chester, or Lebanon Counties and excludes persons younger than 18 or who report use of thyroid medications.)
(*Smoking status is based on 45.8% of the study population.) Adjusting for age and BMI, and modeling TSH concentration as the outcome, we observed no significant relationship with nitrate concentration. The B coefficient for men and women combined was -0.12 (p-value = 0.14), -0.13 for men (p-value = 0.11), and -0.12 for women (p-value = 0.40). Modeling the dichotomized high/low nitrate predictor, the B coefficient for men and women combined was -0.64 (p-value = 0.19), -0.57 for men (p-value = 0.22), and -0.70 for women (p-value = 0.40). Neither clinical nor subclinical hyperthyroidism was associated with nitrate concentrations (Table 4), although the number of cases was low (n = 10 cases of clinical hyperthyroidism and n = 25 cases of subclinical hyperthyroidism). (Table 4: Odds ratios (ORs) and 95% confidence intervals (CIs) for the prevalence of hyperthyroidism associated with estimated nitrate levels in residential wells; models adjusted for age and BMI; the model with men and women combined is also adjusted for gender.) The results for hypothyroidism are presented in Table 5. Overall, there was a borderline significant positive association between subclinical hypothyroidism and high nitrate exposure (age- and BMI-adjusted OR = 1.32; 95% CI: 1.0-1.68), with further analyses revealing the association to be present in women (OR = 1.60; 95% CI: 1.11-2.32), but not in men (OR = 0.98; 95% CI: 0.63-1.52). However, the association among women did not increase monotonically with increasing quartiles of estimated nitrate concentrations in their water supply. The interaction for gender and nitrate was not significant (p-interaction = 0.32). No significant associations were observed for clinical hypothyroidism. The results were consistent when stratified by age and BMI.
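For a dichotomized exposure, the unadjusted counterpart of the ORs and 95% confidence intervals reported in Tables 4 and 5 can be computed directly from a 2 x 2 table using the Wald method. A sketch (the study's models were additionally adjusted for age, BMI, and gender via logistic regression; the counts in the usage example are hypothetical):

```python
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed/unexposed cases, c/d = exposed/unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
```

For example, `odds_ratio_ci(10, 20, 30, 40)` returns an OR of about 0.67 with a CI that spans 1, i.e., no significant association in that hypothetical table.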
(Table 5: Odds ratios (ORs) and 95% confidence intervals (CIs) for the prevalence of hypothyroidism associated with nitrate levels in residential wells; models adjusted for age and BMI; the model with men and women combined is also adjusted for gender; nitrate concentrations in mg/L.) The results were unchanged in a sensitivity analysis that excluded participants whose residences were possibly connected to public water supplies (data not shown). The exclusion of persons who reside more than 1500 m from the nearest well also did not result in a material change in our results (data not shown), although it did decrease the odds ratio for high nitrate intake and subclinical hypothyroidism in women from 1.60 to 1.52. We also estimated the OR for subclinical hypothyroidism among women in the absence of exposure misclassification as 2.1 (versus 1.6 observed). Discussion: Our results provide limited support for an association between nitrate levels in private wells and subclinical hypothyroidism among women but not men. With estimated exposure to nitrate in drinking water at or above 6.5 mg/L, we observed a significantly increased prevalence of subclinical hypothyroidism in women, although there was not a monotonic increase with increasing quartiles of nitrate. These findings of an increased prevalence of hypothyroidism among women are consistent with our hypothesis, namely that the competitive inhibition of iodide uptake associated with increased nitrate exposure would result in decreased systemic active thyroid hormone (as indicated by increased TSH levels). We did not observe an association for clinical hypothyroidism, but the number of cases in this group was much lower. The mean TSH level in our study population was 3.05 μIU/ml in women and 2.77 μIU/ml in men.
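The misclassification-corrected OR uses the matrix method: under nondifferential misclassification with sensitivity Se and specificity Sp, the observed exposed count in a group satisfies a* = Se·a + (1 - Sp)·b, which can be inverted separately within cases and non-cases before recomputing the OR. A sketch using the study's Se = 0.67 and Sp = 0.93; the cell counts in the usage example are hypothetical (the paper's actual counts are not given here), chosen only to yield an observed OR near the reported 1.6:

```python
def corrected_counts(exposed_obs, unexposed_obs, se, sp):
    """Back-calculate true exposed/unexposed counts from observed counts
    under nondifferential exposure misclassification (matrix method)."""
    n = exposed_obs + unexposed_obs
    exposed_true = (exposed_obs - (1 - sp) * n) / (se + sp - 1)
    return exposed_true, n - exposed_true


def corrected_or(case_exp, case_unexp, ctrl_exp, ctrl_unexp, se=0.67, sp=0.93):
    """Odds ratio after correcting both groups' exposure counts."""
    a, b = corrected_counts(case_exp, case_unexp, se, sp)
    c, d = corrected_counts(ctrl_exp, ctrl_unexp, se, sp)
    return (a / b) / (c / d)
```

Because the correction is nondifferential, it moves the OR away from the null; with the hypothetical counts in the test below, an observed OR of about 1.61 becomes roughly 2.56 after correction, illustrating the same direction of change as the paper's 1.6-to-2.1 adjustment.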
These levels are higher than TSH levels in the general US population surveyed by the National Health and Nutrition Examination Survey (NHANES) from 1988 to 1994 [23], in which the means among women and men were 1.49 μIU/ml and 1.46 μIU/ml, respectively. The prevalence of hypothyroidism and hyperthyroidism in the US population is estimated to be 4.6% (0.3% clinical and 4.3% subclinical) and 1.3% (0.5% clinical and 0.8% subclinical), respectively [23], compared with 11.2% and 1.4% in our study population. However, as the risk of thyroid disease increases with age, the higher prevalence in the Amish could be partially due to the older age distribution in this study population (mean = 50.1 years) compared to that of the NHANES study population (mean = 45.0 years). When hypothyroidism is compared by sex, it is 1.5 times more common in Amish women than in men in this study population, whereas it is 2-8 times more common in women than in men in the US population [23]. In previous epidemiological studies, investigators have identified a relationship between nitrate contamination of water supplies and thyroid dysfunction and thyroid disease. In a cross-sectional study of school children living in areas of Slovakia with high and low nitrate exposure via drinking water, children in the high nitrate area had increased thyroid volume and an increased frequency of signs of subclinical thyroid disorders (thyroid hypoechogenicity by ultrasound, increased TSH levels, and positive anti-thyroid peroxidase (TPO)) [6]. The nitrate levels ranged from 11.3 to 58.7 mg/L (as nitrate-nitrogen) in the highly polluted area and were < 0.4 mg/L nitrate-nitrogen in the low nitrate area. Similarly, investigators in the Netherlands conducted a cross-sectional study of women who obtained their drinking water from public supplies and private wells with varying nitrate levels [7].
They observed a dose-dependent increase in thyroid volume associated with increasing nitrate concentrations in drinking water from a combination of public and private supplies, with nitrate levels ranging from 0.004 mg/L to 29.1 mg/L (as nitrate-nitrogen). Women with nitrate levels exceeding 11.1 mg/L as nitrate-nitrogen had a significantly increased prevalence of thyroid gland hypertrophy. Our results for women are consistent with the findings in Slovakia and indirectly support the associations observed in the Netherlands. However, the reason for our finding of an association in women but not in men is unclear, particularly since men consume more water than women on average [23]. It is possible that women may be more sensitive to exposures that perturb the thyroid, as indicated by their higher prevalence of thyroid disease [24]. A previous epidemiologic investigation of the association of nitrate intake from public water supplies and diet with the risk of self-reported hypothyroidism and hyperthyroidism was conducted in a cohort of 21,977 older women in Iowa [25]. The investigators found no association between the prevalence of hypo- or hyperthyroidism and nitrate concentrations in public water supplies; nor was there an association for those who were using private wells. However, intake of nitrate from the diet can be a primary source of exposure when drinking water nitrate levels are below the MCL of 10 mg/L nitrate-N [26-29]. In the Iowa study, increasing intake of nitrate from dietary sources was associated with an increased prevalence of hypothyroidism (OR Q4 = 1.24; 95% CI = 1.10-1.40, P for trend = 0.001), while no association was observed with hyperthyroidism [24]. In addition to consumption of tap water, people living in areas with high nitrate concentrations in their water supplies may be exposed through their use of water for cooking, irrigation of crops used as a food source, and through milk products from local farm animals.
Nitrate is a natural component of plants and is found at high concentrations in leafy vegetables, such as lettuce and spinach, and some root vegetables, such as beets [25]. The lack of dietary questionnaire data in our study is a limitation, since estimates of well-water nitrate were below the MCL of 10 mg/L for 89% of participants [25-27]; the absence of dietary information likely resulted in exposure misclassification in our study population. A strength of this study is the availability of valid measures of TSH using study participant serum samples. Although only one measure was available for each study participant, the use of TSH rather than self-reported thyroid disease is likely to more accurately define thyroid disease. Although factors such as pregnancy and obesity can affect TSH, the levels are a reliable index of the biological activity of thyroid hormones. Anti-TPO, which can also be helpful in the diagnosis of autoimmune thyroid disease, was not available. In addition to measuring TSH and anti-TPO in blood, future studies would be further strengthened by the use of ultrasound technology to determine thyroid volume, which could provide insight into nitrate exposure levels that may cause hypertrophy of the thyroid. An additional strength of our study was that we validated our exposure metric and characterized the sensitivity and specificity based on the median observed versus predicted nitrate level in wells monitored by the USGS in our study area. Specificity was high (93%), indicating that our model classified those with lower nitrate levels accurately. The lower sensitivity (67%) indicated that the model underestimated nitrate concentrations for those with higher levels. The result of this misclassification, if nondifferential by disease status [28], would be to attenuate ORs, as we demonstrated for subclinical thyroid disease. Our study was limited by a lack of information about the study population's complete residential history.
However, we know that the majority of this Amish cohort reside in rural areas with low relocation rates, and that it is typically the women who relocate to live in the homes or on the same land as their husband's family [29]. Most Amish men would consequently have a stable residential history and exposure to nitrate contamination of well water over time. It is not clear to what degree a complete residential history would have affected our findings for both men and women. It is possible that the association we observed with subclinical hypothyroidism in women was attenuated due to this source of misclassification. The well measurements were also not randomly selected but represent data collected by the USGS and by individuals who potentially reside in areas with higher levels of nitrate than those who did not receive monitoring attention from the USGS or who were not aware of a problem in their well. We identified a large standard deviation for the wells with repeat measures and were unable to fully explore the reasons for it, beyond noting that only a small proportion (13%) of wells had repeat samples. Additional data on well depth, other hydrogeological factors, or why multiple samples were taken could provide more insight into this observed variation, but were not available. Our study is also limited by the fact that we did not have data on the actual water supply source for each residence, nor on personal water consumption. Because most residences were located outside of areas served by public water utilities, we assumed the drinking water supply for participants was a well located at their residence. We did not have data on tap water consumption, and thus the approximate daily intake, which can be an important variable in determining exposure. Most people in the United States drink about 1.5-2 L of water per day [30].
Similarly, because of the attention given to water contamination in the Lancaster area, it is possible that some study participants obtain their water from sources that have been purified via reverse osmosis or from bottled water. These limitations would clearly affect the exposure estimates and result in misclassification of the exposure. Future work in this area would be enhanced by the assessment of multiple contaminants present in water sources and the general environment that could be simultaneously affecting thyroid health. Multiple environmental pollutants from industrial as well as agricultural activities may be an important consideration for future investigation. Of particular interest are pesticides, as there is increasing evidence of their ability to alter thyroid hormone homeostasis, causing thyroid dysfunction and thyroid disease [31,32]. The varied effects of these chemicals on thyroid function could affect study findings. Determination of these exposures should be a future study design consideration. Furthermore, the effect of contamination from other univalent anions that interfere with the uptake of iodide by the thyroid should be considered in future investigations into the effects of nitrate in drinking water. For example, perchlorate, the oxidizer for solid rocket fuel and a component of some fertilizers, is found in both food and water [33] and interferes with iodide uptake much like nitrate. Similarly, thiocyanate, another univalent anion that causes thyroid dysfunction, is a metabolite of tobacco smoke and is found in certain foods [34-37]. Conclusions: The present study provides limited evidence that nitrate in residential well water is associated with subclinical hypothyroidism in women but not men.
Future studies that include validated biomarkers, as well as individual level nitrate exposure estimates of dietary and drinking water intakes, and an assessment of co-contaminants, are needed to provide information about the relevance of nitrate intake and thyroid disease.
Background: Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. Nitrate intake from drinking water and dietary sources can interfere with the uptake of iodide by the thyroid, thus potentially impacting thyroid function. Methods: We assessed the relation of estimated nitrate levels in well water supplies with thyroid health in a cohort of 2,543 Old Order Amish residing in Lancaster, Chester, and Lebanon counties in Pennsylvania for whom thyroid stimulating hormone (TSH) levels were measured during 1995-2008. Nitrate measurement data (1976-2006) for 3,613 wells in the study area were obtained from the U.S. Geological Survey and we used these data to estimate concentrations at study participants' residences using a standard linear mixed effects model that included hydrogeological covariates and kriging of the wells' residuals. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L N-NO3(-), with a median value of 6.5 mg/L, which was used as the cutpoint to define high and low nitrate exposure. In a validation analysis of the model, we calculated that the sensitivity of the model was 67% and the specificity was 93%. TSH levels were used to define the following outcomes: clinical hyperthyroidism (n = 10), clinical hypothyroidism (n = 56), subclinical hyperthyroidism (n = 25), and subclinical hypothyroidism (n = 228). Results: In women, high nitrate exposure was significantly associated with subclinical hypothyroidism (OR = 1.60; 95% CI: 1.11-2.32). Nitrate was not associated with subclinical thyroid disease in men or with clinical thyroid disease in men or women. Conclusions: Although these data do not provide strong support for an association between nitrate in drinking water and thyroid health, our results do suggest that further exploration of this hypothesis is warranted using studies that incorporate individual measures of both dietary and drinking water nitrate intake.
Background: Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. The thyroid can concentrate univalent anions such as nitrate (NO3-), which subsequently interferes with the uptake of iodide (I-) by the thyroid and may cause reduced production of thyroid hormones [1-4]. The result of the reduced thyroid hormone production is a compensatory increase in thyroid stimulating hormone (TSH), a sensitive indicator of thyroid function. High and low TSH levels reflect hypo- and hyperfunction of the thyroid gland, respectively. Chronic stimulation of the thyroid gland by excessive TSH has been shown in animals to induce the development of hypertrophy and thyroid disease, as well as hyperplasia, followed by adenoma and carcinoma [5]. At least two epidemiological studies have shown high nitrate intake to be associated with thyroid dysfunction, including hypertrophy and changes in TSH levels [6,7]; however, the impact of nitrate intake on specific thyroid conditions, including hyperthyroidism and hypothyroidism is not clear. Elevated concentrations of nitrate in groundwater originate from a number of sources, including leaking septic tanks, animal waste, and overuse of nitrogen fertilizers [7]. Nitrate is very soluble and it readily migrates to groundwater. Nitrate contamination of groundwater is an exposure of interest as groundwater serves as the primary drinking water supply for over 90% of the rural population and 50% of the total population of North America [8]. Although the U.S. Environmental Protection Agency (EPA) maximum contaminant level (MCL) for nitrate as nitrogen (nitrate-N) is 10 mg/L in public water sources [9], the levels in private wells are not regulated and the task of monitoring is left to residential owners, presenting opportunities for high levels of human exposure. The U.S. 
Geological Survey (USGS) estimates that nitrate concentrations exceed the EPA's standard in approximately 15% of agricultural and rural areas, exposing over 2 million people in the United States [10]. The MCL for nitrate in drinking water was established to protect against methemoglobinemia, or "blue baby syndrome," to which infants are especially susceptible. However, this health guideline has not been thoroughly evaluated for other health outcomes such as thyroid disease and cancer. The Old Order Amish community is a population characterized by a homogeneous lifestyle, including intensive farming practices and low mobility, and has been relatively unchanged across generations [11]. In areas where many large dairy and poultry farms are concentrated, the land area for disposal of animal wastes is limited. This situation often results in overloading the available land with manure, with considerable nitrogen ending up in groundwater or surface water [8,12]. Lancaster County in southeastern Pennsylvania is an example of such an area where extensive dairy enterprises with high stocking rates prevail. High levels of nitrate in the groundwater [8] suggest that the Amish are a potentially highly exposed population. Given the biological effects of nitrate intake on the thyroid, investigation of whether the Amish in this area exhibit an increased prevalence of thyroid dysfunction and thyroid disease is of interest. The aim of this study is to assess whether nitrate concentrations in well water are associated with levels of TSH and thyroid disease. Our goal was to use survey data on nitrate levels in well-water obtained from the USGS to conduct a cross-sectional analysis of the association between nitrate exposure and thyroid health. This study builds upon several ongoing studies of diabetes, obesity, osteoporosis, hypertension, and cardiovascular disease in the Amish, initiated in 1993 at the University of Maryland [13-16]. 
Conclusions: BA-K: participated in the conceptualization of the study and the exposure assessment design; conducted data analysis and drafted the manuscript; SH: conducted the nitrate modeling work, gave important intellectual input for the data analysis, and helped draft the manuscript; JN: conceptualized the exposure assessment approach, advised in the creation of the hydrogeological covariates and exposure modeling, and provided important intellectual content; MS: advised on the medical interpretation of the findings, and provided important intellectual content; AS: As the PI of the cohort study, AS designed the study, oversaw the recruitment of subjects and conducted much of the primary data gathering; BM: As a co-PI, BM participated in the design of the study and ongoing study efforts, the interpretation of the results for the nitrate investigation, and drafting of the manuscript; MA: obtained the hydrogeological dataset, performed the GIS analysis, reviewed the manuscript, and revised it critically for important intellectual content; TH: helped conceptualize the study, provided statistical support and modeling guidance, and provided important intellectual content for the manuscript; YZ: helped conceptualize the study and provided important intellectual content for the manuscript; MW: mentored in the conceptualization of the study, participated in study design, method development and data analysis, and helped in the drafting of the manuscript. All authors assisted in the interpretation of results and contributed towards the final version of the manuscript. All authors read and approved the final manuscript.
Background: Nitrate is a widespread contaminant of drinking water supplies, especially in agricultural areas. Nitrate intake from drinking water and dietary sources can interfere with the uptake of iodide by the thyroid, thus potentially impacting thyroid function. Methods: We assessed the relation of estimated nitrate levels in well water supplies with thyroid health in a cohort of 2,543 Old Order Amish residing in Lancaster, Chester, and Lebanon counties in Pennsylvania for whom thyroid stimulating hormone (TSH) levels were measured during 1995-2008. Nitrate measurement data (1976-2006) for 3,613 wells in the study area were obtained from the U.S. Geological Survey and we used these data to estimate concentrations at study participants' residences using a standard linear mixed effects model that included hydrogeological covariates and kriging of the wells' residuals. Nitrate levels estimated by the model ranged from 0.35 mg/L to 16.4 mg/L N-NO3(-), with a median value of 6.5 mg/L, which was used as the cutpoint to define high and low nitrate exposure. In a validation analysis of the model, we calculated that the sensitivity of the model was 67% and the specificity was 93%. TSH levels were used to define the following outcomes: clinical hyperthyroidism (n = 10), clinical hypothyroidism (n = 56), subclinical hyperthyroidism (n = 25), and subclinical hypothyroidism (n = 228). Results: In women, high nitrate exposure was significantly associated with subclinical hypothyroidism (OR = 1.60; 95% CI: 1.11-2.32). Nitrate was not associated with subclinical thyroid disease in men or with clinical thyroid disease in men or women. Conclusions: Although these data do not provide strong support for an association between nitrate in drinking water and thyroid health, our results do suggest that further exploration of this hypothesis is warranted using studies that incorporate individual measures of both dietary and drinking water nitrate intake.
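The model validation reported above (sensitivity 67%, specificity 93%) reduces to a 2x2 comparison of each well's modeled exposure class against its measured class, with wells classed "high" when the estimate exceeds the 6.5 mg/L median cutpoint. A minimal sketch, using hypothetical input rather than the study's actual validation data:

```python
def sens_spec(pairs):
    """Sensitivity and specificity of a binary exposure classifier.

    pairs: iterable of (predicted_high, measured_high) booleans, one per
    validation well -- hypothetical input; the study's validation data
    are not reproduced here.
    """
    tp = sum(1 for p, m in pairs if p and m)          # true positives
    fn = sum(1 for p, m in pairs if not p and m)      # false negatives
    tn = sum(1 for p, m in pairs if not p and not m)  # true negatives
    fp = sum(1 for p, m in pairs if p and not m)      # false positives
    return tp / (tp + fn), tn / (tn + fp)
```

With a toy set of seven wells, `sens_spec` returns the fraction of truly high wells the model flags and the fraction of truly low wells it leaves unflagged.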
9,945
367
10
[ "nitrate", "wells", "study", "thyroid", "levels", "water", "mg", "tsh", "nitrate levels", "data" ]
[ "test", "test" ]
null
[CONTENT] Nitrate | Thyroid Conditions | TSH | Old Order Amish | Water pollution | Drinking water [SUMMARY]
[CONTENT] Nitrate | Thyroid Conditions | TSH | Old Order Amish | Water pollution | Drinking water [SUMMARY]
null
[CONTENT] Nitrate | Thyroid Conditions | TSH | Old Order Amish | Water pollution | Drinking water [SUMMARY]
[CONTENT] Nitrate | Thyroid Conditions | TSH | Old Order Amish | Water pollution | Drinking water [SUMMARY]
[CONTENT] Nitrate | Thyroid Conditions | TSH | Old Order Amish | Water pollution | Drinking water [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amish | Cohort Studies | Cross-Sectional Studies | Drinking Water | Environmental Monitoring | Epidemiological Monitoring | Female | Humans | Hyperthyroidism | Hypothyroidism | Linear Models | Male | Middle Aged | Nitrates | Pennsylvania | Prevalence | Sex Factors | Thyrotropin | Water Pollutants, Chemical | Water Wells | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amish | Cohort Studies | Cross-Sectional Studies | Drinking Water | Environmental Monitoring | Epidemiological Monitoring | Female | Humans | Hyperthyroidism | Hypothyroidism | Linear Models | Male | Middle Aged | Nitrates | Pennsylvania | Prevalence | Sex Factors | Thyrotropin | Water Pollutants, Chemical | Water Wells | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Amish | Cohort Studies | Cross-Sectional Studies | Drinking Water | Environmental Monitoring | Epidemiological Monitoring | Female | Humans | Hyperthyroidism | Hypothyroidism | Linear Models | Male | Middle Aged | Nitrates | Pennsylvania | Prevalence | Sex Factors | Thyrotropin | Water Pollutants, Chemical | Water Wells | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amish | Cohort Studies | Cross-Sectional Studies | Drinking Water | Environmental Monitoring | Epidemiological Monitoring | Female | Humans | Hyperthyroidism | Hypothyroidism | Linear Models | Male | Middle Aged | Nitrates | Pennsylvania | Prevalence | Sex Factors | Thyrotropin | Water Pollutants, Chemical | Water Wells | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amish | Cohort Studies | Cross-Sectional Studies | Drinking Water | Environmental Monitoring | Epidemiological Monitoring | Female | Humans | Hyperthyroidism | Hypothyroidism | Linear Models | Male | Middle Aged | Nitrates | Pennsylvania | Prevalence | Sex Factors | Thyrotropin | Water Pollutants, Chemical | Water Wells | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] nitrate | wells | study | thyroid | levels | water | mg | tsh | nitrate levels | data [SUMMARY]
[CONTENT] nitrate | wells | study | thyroid | levels | water | mg | tsh | nitrate levels | data [SUMMARY]
null
[CONTENT] nitrate | wells | study | thyroid | levels | water | mg | tsh | nitrate levels | data [SUMMARY]
[CONTENT] nitrate | wells | study | thyroid | levels | water | mg | tsh | nitrate levels | data [SUMMARY]
[CONTENT] nitrate | wells | study | thyroid | levels | water | mg | tsh | nitrate levels | data [SUMMARY]
[CONTENT] thyroid | nitrate | groundwater | levels | water | high | disease | tsh | amish | including [SUMMARY]
[CONTENT] nitrate | wells | mg | model | location | study area | tsh | levels | study | area [SUMMARY]
null
[CONTENT] nitrate | provide information relevance nitrate | estimates dietary | co contaminants needed | co contaminants | co | estimates dietary drinking | estimates dietary drinking water | studies include validated | women men future [SUMMARY]
[CONTENT] nitrate | thyroid | wells | water | tsh | study | levels | mg | women | model [SUMMARY]
[CONTENT] nitrate | thyroid | wells | water | tsh | study | levels | mg | women | model [SUMMARY]
[CONTENT] Nitrate ||| [SUMMARY]
[CONTENT] 2,543 | Lancaster | Chester | Lebanon | Pennsylvania | TSH | 1995-2008 ||| 1976-2006 | 3,613 | the U.S. Geological Survey ||| 0.35 | 16.4 | 6.5 ||| 67% | 93% ||| TSH | 10 | 56 | 25 | 228 [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| 2,543 | Lancaster | Chester | Lebanon | Pennsylvania | TSH | 1995-2008 ||| 1976-2006 | 3,613 | the U.S. Geological Survey ||| 0.35 | 16.4 | 6.5 ||| 67% | 93% ||| TSH | 10 | 56 | 25 | 228 ||| 1.60 | 95% | CI | 1.11-2.32 ||| ||| [SUMMARY]
[CONTENT] ||| ||| 2,543 | Lancaster | Chester | Lebanon | Pennsylvania | TSH | 1995-2008 ||| 1976-2006 | 3,613 | the U.S. Geological Survey ||| 0.35 | 16.4 | 6.5 ||| 67% | 93% ||| TSH | 10 | 56 | 25 | 228 ||| 1.60 | 95% | CI | 1.11-2.32 ||| ||| [SUMMARY]
Measles outbreak investigation in an urban slum of Kaduna Metropolis, Kaduna State, Nigeria, March 2015.
31303921
Despite the availability of an effective vaccine, measles epidemics continue to occur in Nigeria. In February 2015, we investigated a suspected measles outbreak in an urban slum in Rigasa, Kaduna State, Nigeria. The study aimed to confirm the outbreak, determine the risk factors, and implement appropriate control measures.
INTRODUCTION
We identified cases through active case search and health record review. We conducted an unmatched case-control (1:1) study involving 75 under-5 cases, who were randomly sampled, and 75 neighborhood controls. We interviewed the caregivers of these children using a structured questionnaire to collect information on sociodemographic characteristics and the vaccination status of the children. We collected 15 blood samples and tested them for measles IgM using enzyme-linked immunosorbent assay (ELISA). Descriptive, bivariate and logistic regression analyses were performed using Epi Info software. The confidence interval was set at 95%.
METHODS
We recorded 159 cases with two deaths (case fatality rate = 1.3%); 80 (50.3%) of the cases were male. Of the 15 serum samples, 11 (73.3%) were confirmed IgM positive for measles. Compared with the controls, the cases were more likely to have had no or incomplete routine immunization (RI) (adjusted odds ratio (AOR) = 28.3; 95% confidence interval (CI): 2.1, 392.0), contact with measles cases (AOR = 7.5; 95% CI: 2.9, 19.7), and a caregiver younger than 20 years (AOR = 5.2; 95% CI: 1.2, 22.5).
RESULTS
We identified low RI uptake and contact with measles cases as predictors of the measles outbreak in Rigasa, Kaduna State. We recommended strengthening RI and educating caregivers on completing the RI schedule.
CONCLUSION
[ "Caregivers", "Case-Control Studies", "Child, Preschool", "Disease Outbreaks", "Enzyme-Linked Immunosorbent Assay", "Epidemics", "Female", "Humans", "Immunoglobulin M", "Infant", "Infant, Newborn", "Logistic Models", "Male", "Measles", "Measles Vaccine", "Nigeria", "Poverty Areas", "Risk Factors", "Surveys and Questionnaires", "Vaccination" ]
6607246
Introduction
Measles is an acute, highly contagious vaccine-preventable viral disease which usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. The incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and the individual becomes contagious before the eruption of rashes. In 2014, the World Health Organization (WHO) reported 266,701 measles cases globally with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreaks and transmission in developing countries are lack of parental awareness of the importance of vaccination and of compliance with the routine immunization schedule; household overcrowding, with easy contact with someone with measles; acquired or inherited immunodeficiency states; and malnutrition [4-6]. During outbreaks, the measles case fatality rate (CFR) in developing countries is normally estimated to be 3-5%, but it may reach 10-30%, compared with 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management and complications like pneumonia, diarrhea, croup and central nervous system involvement are responsible for high measles CFR [7, 8]. Nigeria is second only to India among the ten countries with the largest numbers of children unvaccinated for measles, accounting for 2.7 million of the 21.5 million children globally who had received no dose of measles-containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality, with recurrent episodes common in Northern Nigeria in the first quarter of each year [10, 11]. In 2012, 2,553 measles cases were reported in Nigeria, an increase from 390 cases reported in 2006 [10]. 
In line with the regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conducting bi-annual measles immunization campaigns as a second opportunity for vaccination, carrying out epidemiologic surveillance with laboratory confirmation of cases, and improving case management, including Vitamin A supplementation. This is meant to raise measles coverage from 51% as of 2014 [12] to the 95% needed for effective herd immunity. In October 2013, following a measles outbreak in 19 states in Northern Nigeria, a mass measles vaccination campaign was conducted. The first reported case of suspected measles in Rigasa community, an urban slum of Kaduna Metropolis, occurred on 5th January 2015 in an unimmunized 10-month-old child. The District or Local Government Area (LGA) Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10th February 2015, when the reported cases reached the epidemic threshold. We investigated to confirm the outbreak, determine the risk factors for contracting infection, and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings and highlights the public health actions undertaken.
Methods
Study site and study population: Rigasa is a densely populated urban slum in the south west of Igabi LGA, Kaduna State, North-West Nigeria. It has an estimated 59,906 households with about 14,156 under-one children. The settlement has three health facilities rendering RI services. The community is noted for poor utilization of RI services and has rejected polio supplemental immunization services in the past. Measles outbreaks have previously been reported from this community. The last measles supplementary immunization activities (SIA) were conducted from 5th to 9th October 2013. Descriptive epidemiology-quantitative: in this investigation, a suspected measles case was any person with generalized maculopapular rash, fever, and at least one of the following: cough, coryza or conjunctivitis, or in whom a physician suspected measles, living in Rigasa community from January 2015, when the index case was reported, to March 2015. A confirmed case was any suspected case with a positive measles IgM test or an epidemiological link to a laboratory-confirmed case living in the same community during the same period. We actively searched for cases in the community, where measles is locally known as “Bakon dauro”. We interviewed and physically examined some cases at the treatment center to verify the diagnosis and ensure that they met the case definition. We developed a line-list to collect information from all suspected cases on their age, sex, residence, time of onset, migration history and immunization status. We analyzed the line-list data to characterize the outbreak in time, place and person, and to develop a plausible hypothesis for measles transmission in the community. We conducted in-depth interviews with health workers rendering routine immunization services to ascertain utilization of RI services by under-five children in the community. Case-control study: we conducted an unmatched case-control study. 
A suspected case was any child aged 0-59 months residing in Rigasa with a history of fever, rash and at least one of the following: cough, coryza or conjunctivitis, or in whom a physician suspected measles; a confirmed case was any suspected case positive for measles IgM using enzyme-linked immunosorbent assay (ELISA); and a probable case was any suspected case with an epidemiological link to a laboratory-confirmed measles case. A control was any child aged 0-59 months residing in the same community but without the signs and symptoms of measles. We enrolled 75 cases and 75 controls to identify an odds ratio of 3 (for a risk factor on which intervention would have a significant impact), assuming 21.2% prevalence of exposure among controls [4], with a 95% CI and power of 80%. The sample size was determined using the StatCalc function of the Epi Info statistical software. We selected and recruited the cases consecutively from among the patients who presented at the health facility and in the community. The controls were selected from the community; each control was selected from the 3rd homestead to the right of the household of a case. We used structured questionnaires to collect data on demographic characteristics, exposures, vaccination status and associated factors from both cases and controls, and clinical information from the cases only. Sample collection and laboratory analysis: we collected blood samples from 15 suspected cases for measles serum IgM determination by ELISA at the WHO regional laboratory in Kaduna. Data management: we entered data into Epi Info statistical software and performed univariate analysis to obtain frequencies and proportions, and bivariate analysis to obtain odds ratios and determine associations, setting a p-value of 0.05 as the cut-off for statistical significance. We also performed unconditional logistic regression to adjust for possible confounders and identify the independent factors for contracting measles infection. 
Factors that were significant at p < 0.05 in the bivariate analysis and biologically plausible variables such as age and sex were included in the model. In the final model, only variables that significantly affected the outcome at p < 0.05 were retained. Data management was done using Epi Info software 3.5.3 (CDC, Atlanta, USA) and Microsoft Excel. For the qualitative study, interview content was analyzed thematically. Ethical consideration: ethical approval was not obtained as the study was conducted as part of an outbreak response. However, permission to conduct the study was granted by the State Primary Health Care Agency, the District or LGA Director of Primary Health Care, and the District head of Rigasa community. Informed consent was obtained from all respondents interviewed. Confidentiality of all the subjects was assured and maintained during and after the study.
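The sample-size step in the methods (detectable OR = 3, 21.2% exposure among controls, 95% confidence level, 80% power, 1:1 ratio) can be reproduced approximately with the Fleiss formula plus continuity correction. This is a sketch of the kind of calculation Epi Info's StatCalc performs, not its exact algorithm, so the output may differ from the enrolled 75 by a few subjects:

```python
from math import sqrt, ceil
from statistics import NormalDist

def case_control_n(or_target, p0, alpha=0.05, power=0.80, r=1.0):
    """Cases needed for an unmatched case-control study (Fleiss formula
    with continuity correction -- an approximation of StatCalc's output).

    or_target : smallest odds ratio the study should detect
    p0        : exposure prevalence among controls
    r         : controls per case (1:1 design -> r = 1)
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    # exposure prevalence among cases implied by the target odds ratio
    p1 = or_target * p0 / (1 + p0 * (or_target - 1))
    p_bar = (p1 + r * p0) / (1 + r)
    num = (z_a * sqrt((1 + 1 / r) * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p0 * (1 - p0) / r)) ** 2
    n = num / (p1 - p0) ** 2
    # Fleiss continuity correction
    n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p0)))) ** 2
    return ceil(n_cc)
```

Under these assumptions, `case_control_n(3, 0.212)` gives roughly 70 cases (and as many controls), in the neighborhood of the 75 per group actually enrolled.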
Results
Descriptive epidemiology-quantitative: a total of 159 cases with two deaths (CFR = 1.3%) were identified; 80 (50.3%) were male. Cumulatively, under-five children accounted for 90% of cases. The mean age of cases was 32 (± 22.5) months, but the age group of 0-11 months was most severely affected, with an age-specific death rate of 5% (Table 1). The index case for this outbreak was a 10-month-old female child who developed a maculopapular rash on 5th January 2015. She was unvaccinated for measles and had never received any immunization for vaccine-preventable diseases according to the Expanded Programme on Immunization (EPI) schedule. She had no history of contact with someone with measles and had not travelled out of the community in the preceding month. She was managed as an outpatient and died 3 days later. Figure 1 shows the epidemic curve of the outbreak. The epidemic curve has a propagated pattern with four peaks, the highest on 10th March 2015. The outbreak spanned from 5th January to 4th April 2015. Age distributions of measles cases and case fatality rate (CFR) in Rigasa community, March 2015 Epidemic curve of measles outbreak in Rigasa community, week 1 to 15, 2015 Descriptive epidemiology-qualitative evaluation of RI services: content analysis of in-depth interviews with three health workers rendering routine immunization services revealed that caregivers usually utilized RI services in the first 6-10 weeks of life, there had been no stock-out of measles vaccine in the last 6 months, and most caregivers failed to comply with the EPI schedule. Also, all planned RI sessions were held in the past 3 months, there was no vaccine stock-out in the 6 months prior to the outbreak, and the facilities have a functioning cold chain system. Analytic study: the total population sampled was 150 (75 cases, 75 controls). The mean ages for cases and controls were 33.2 (± 21) and 37.6 (± 29) months, respectively. Males comprised 42 (56%) of cases and 37 (49%) of controls. 
Among the cases, 15 (20%) were vaccinated for measles, compared with 23 (30.7%) of the controls. Among the 112 children unvaccinated for measles, the attack rate was 54%. Cumulatively, 41 (27%) were vaccinated for measles. Only 1 (1.3%) case, compared with 12 (16%) of the controls, had completed the Expanded Programme on Immunization schedule (Table 2). In the bivariate analysis, cases were more likely than controls to have had no or incomplete RI (OR = 14.0; 95% CI: 1.8, 111.4); not to have received measles vaccination (OR = 2.0; 95% CI: 0.8, 3.7); to have had close contact with measles cases (OR = 6.0; 95% CI: 2.7, 11.2); and to have caregivers who were younger than 20 years (OR = 2.6; 95% CI: 1.0, 6.8) (Table 3). In the logistic regression model, the independent predictors of measles transmission in Rigasa, an urban slum in Kaduna metropolis, were no or incomplete routine immunization (RI) (AOR = 28.3; 95% CI: 2.1, 392.0), being unvaccinated for measles (AOR = 1.8; 95% CI: 0.8, 3.7), recent contact with measles cases (AOR = 7.5; 95% CI: 2.9, 19.7), and having a caregiver younger than 20 years (AOR = 5.2; 95% CI: 1.2, 22.5) (Table 4). Characteristics of measles cases and controls in Rigasa community, March 2015 Factors that may be responsible for measles outbreak at Rigasa community, Igabi LGA, Kaduna State, March 2015 Factors responsible for measles outbreak’s transmission at Rigasa community, Kaduna metropolis, Kaduna State, March 2015 Laboratory findings: eleven (73%) of the 15 serum samples were confirmed IgM positive for measles. Two samples that were negative for measles IgM were also negative for rubella IgM; one was indeterminate, but there was an epidemiological link with a confirmed case of measles. The outcome of the last sample could not be ascertained.
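The bivariate odds ratios in the results come from 2x2 exposure tables. A minimal sketch of the point estimate with a Woolf (log-based) confidence interval, using illustrative cell counts rather than the study's actual tables (Epi Info also reports exact limits, which can differ for sparse cells such as the RI comparison):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-based) 95% CI from a 2x2 table:

                 exposed  unexposed
        cases       a         b
        controls    c         d
    """
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = exp(log(or_) - z * se_log)
    hi = exp(log(or_) + z * se_log)
    return or_, lo, hi
```

For example, a hypothetical table with 30 exposed and 45 unexposed cases versus 10 exposed and 65 unexposed controls gives an OR of about 4.3 with a CI excluding 1, the same pattern as the contact-with-measles finding.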
Conclusion
We confirmed that there was a measles outbreak in Rigasa community. Low measles vaccine coverage as a result of poor uptake of RI was responsible for the outbreak of measles infection in the community. This resulted in the accumulation of susceptible children, thus lowering the herd immunity against measles infection. This study also suggests that children of younger caregivers were more afflicted with measles infection during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission. A major public health implication of this study is the need to strengthen RI services. We therefore recommend that the state ministry of health increase demand creation for RI services through more sensitization and education of caregivers. Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine-preventable diseases. Engaging caregivers who have completely immunized their children according to the EPI schedule as community role models could be a good strategy to motivate other caregivers to access and complete RI services. What is known about this topic Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children; several factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks. 
What this study adds We found children less than one year old to be the most severely affected by measles, with the highest case fatality; in this investigation, we found measles vaccination coverage to be 27% in Rigasa, which is less than the reported national coverage of 51%; this study also reveals that younger caregivers, aged 20 years or less, are more likely than older ones to have children with measles.
[ "What is known about this topic", "What this study adds" ]
[ "Measles is a highly contagious vaccine preventable viral disease that usually affects younger children;\nSeveral factors such as low coverage for measles antigens and overcrowding have been noted to be risk factors for measles outbreak.", "We found children less than one year to be severely affected by measles and having the highest case-fatality;\nIn this investigation, we found measles vaccination coverage to be 27% in this Rigasa; this is less than the reported national coverage of 51%;\nThis study also reveals that younger caregivers who are 20 years or less compared to older ones, are more likely to have children with measles." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Measles is an acute, highly contagious vaccine preventable viral disease which usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. Incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and the individual becomes contagious before the eruption of rashes. In 2014, World Health Organization (WHO) reported 266,701 measles cases globally with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreak and transmissions in developing countries are; lack of parental awareness of vaccination importance and compliance with routine immunization schedule, household overcrowding with easy contact with someone with measles, acquired or inherited immunodeficiency states and malnutrition [4-6]. During outbreaks, measles case fatality rate (CFR) in developing countries are normally estimated to be 3-5%, but may reach 10-30% compared with 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management and complications like pneumonia, diarrhea, croup and central nervous system involvement are responsible for high measles CFR [7, 8]. Nigeria is second to India among ten countries with large number of unvaccinated children for measles, and has 2.7 million of the 21.5 million children globally that have zero dose for measles containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality with recurrent episodes common in Northern Nigeria at the first quarter of each year [10, 11]. 
In 2012, 2,553 measles cases were reported in Nigeria, an increase from 390 cases reported in 2006 [10].\nIn line with regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conduct bi-annual measles immunization campaign for second opportunity, epidemiologic surveillance with laboratory confirmation of cases and improve case management including Vitamin A supplementation. This is meant to improve measles coverage from the present 51% as at 2014 [12] to 95% needed for effective herd immunity. In October 2013, following measles outbreak in 19 States in Northern Nigeria, mass measles vaccination campaign was conducted. The first reported case of a suspected measles in Rigasa community, an urban slum of Kaduna Metropolis occurred on 5th of January 2015 in an unimmunized 10 months old child. The District or Local Government Area (LGA), Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10th February, 2015 when the reported cases reached an epidemic threshold. We investigated to confirm this outbreak, determine the risk factors for contracting infection and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings and highlights the public health actions undertaken.", "Study site and study population: Rigasa is a densely populated urban slum in the south west of Igabi LGA, in Kaduna State, North-West Nigeria. It has an estimated 59,906 households with about 14,156 under-one children. The settlement has three health facilities rendering RI services. The community is noted for poor utilization of RI services and has rejected polio supplemental immunization services in the past. Measles outbreaks have been previously reported from this community. 
The Last measles supplementary immunization activities (SIA) was conducted from 5th to 9th October, 2013.\nDescriptive epidemiology-quantitative: in this investigation, a suspected measles case was, any person with generalized maculopapular rash, fever, and at least one of the following; cough, coryza or conjunctivitis or in whom a physician suspected measles, living in Rigasa community, from January 2015 when the index case was reported to March 2015. A confirmed case was, any suspected case with measles IgM positive test or an epidemiological link to a laboratory confirmed case living in the same community at same period. We actively search for cases in the community where measles is locally known as “Bakon dauro”. We interviewed and physically examined some cases at the treatment center to verify diagnosis and ensure that they met the case definition. We developed a line-list to collect information from all suspected cases on their age, sex, residence and time of onset, migration history and immunization status. We analyzed the line-list data to characterize the outbreak in time, place and person, and to develop a plausible hypothesis for measles transmission in the community. We conducted an in-depth interview with health workers rendering routine immunization services to ascertain utilization of RI services by under five-children in the community.\nCase-control study: we conducted an unmatched case-control study. A suspected case is, any child aged 0-59 months residing in Rigasa with history of fever, rash and at least one of the following: cough, coryza or conjunctivitis or in whom a physician suspected measles; a confirmed case is, positive to measles IgM using enzyme-linked immunosorbent assay (ELISA); and a probable case if there was epidemiological link to a laboratory confirmed measles case. A control was, any child 0-59 months residing in the same community but without the signs and symptoms of measles. 
We enrolled 75 cases and 75 controls to identify an odds ratio of 3 (for a risk factor on which intervention would have a significant impact), assuming 21.2% prevalence of exposure among control [4], with 95% CI and power of 80%. The sample size was determined using the Statcal function of Epi-Info statistical software. We selected and recruited the cases consecutively from among the patients that presented at the health facility and in the community. The controls were selected from the community; each control was selected from the 3rd homestead to the right of the household of a case. We used structured questionnaires to collect data on demographic characteristics, exposures, vaccination status and associated factors from both cases and controls, and clinical information from the cases only.\nSample collection and laboratory analysis: we collected blood samples for 15 suspected cases for measles serum IgM determination using ELISA method at the WHO regional laboratory at Kaduna.\nData management: we entered data into Epi-Info statistical software and performed univariate analysis to obtain frequencies and proportions, and bivariate analysis to obtain odds ratios and determine associations, setting p-value of 0.05 as the cut-off for statistical significance. We also performed unconditional logistic regression to adjust for possible confounders and identify the independent factors for contracting measles infection. Factors that were significant at p < 0.05 in the bivariate analysis and biological plausible variable such as age and sex were included in the model. In the final model, only variables that were found to significantly affect the outcome at P < 0.05 were retained. Data management was done using Epi InfoTM software 3.5.3 (CDC Atlanta, USA), and Microsoft Excel. For qualitative study, content analysis was thematically analyzed.\nEthical consideration: ethical approval was not obtained as study was conducted as part of an outbreak response. 
However, permission to conduct the study was granted by the State Primary Health Care Agency, the District or LGA Director of Primary Health Care and the District head of Rigasa community. Informed consent was obtained from all respondents interviewed. Confidentiality of all the subjects was assured and maintained during and after the study.", "Descriptive epidemiology-quantitative: a total of 159 cases with two deaths (CFR = 1.3%) were identified; 80 (50.3%) were male. Cumulatively, under-five children accounted for 90% of cases. The mean age of cases was 32 (± 22.5) months, but the 0-11 months age group was the most severely affected, with an age-specific death rate of 5% (Table 1). The index case for this outbreak was a 10-month-old female child who developed maculopapular rash on 5th January 2015. She was unvaccinated against measles and had never received any immunization for vaccine-preventable diseases according to the Expanded Programme on Immunization (EPI) schedule. She had no history of contact with anyone with measles and had not travelled out of the community in the preceding month. She was managed as an outpatient and died 3 days later. Figure 1 shows the epidemic curve of the outbreak. The epidemic curve has a propagated pattern with four peaks, the highest on 10th March 2015. The outbreak spanned from 5th January to 4th April 2015.\nAge distributions of measles cases and case fatality rate (CFR) in Rigasa community, March 2015\nEpidemic curve of measles outbreak in Rigasa community, week 1 to 15, 2015\nDescriptive epidemiology-qualitative evaluation of RI services: content analysis of in-depth interviews with three health workers rendering routine immunization services revealed that caregivers usually utilized RI services in the first 6-10 weeks of life, that there had been no stock-out of measles vaccine in the last 6 months, and that most caregivers failed to comply with the EPI schedule. 
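The case fatality rate quoted in the descriptive results is a simple proportion; a quick check of the arithmetic, using the totals from the line-list summary:

```python
# Totals reported in the descriptive epidemiology section
cases, deaths = 159, 2
cfr = deaths / cases * 100
print(f"Case fatality rate: {cfr:.1f}%")  # Case fatality rate: 1.3%
```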
Also, all planned RI sessions were held in the past 3 months, there was no vaccine stock-out in the 6 months prior to the outbreak, and the facilities had a functioning cold chain system.\nAnalytic study: the total population sampled was 150 (75 cases, 75 controls). The mean age was 33.2 (± 21) months for cases and 37.6 (± 29) months for controls. Males comprised 42 (56%) of cases and 37 (49%) of controls. Among the cases, 15 (20%) were vaccinated against measles compared with 23 (30.7%) of the controls. Among the 112 children unvaccinated against measles, the attack rate was 54%. Cumulatively, 41 (27%) were vaccinated against measles. Only 1 (1.3%) case, compared with 12 (16%) of controls, had completed the Expanded Programme on Immunization schedule (Table 2). From the bivariate analysis, cases were more likely than controls to have had no or incomplete RI [OR (95% CI)]: 14.0 (1.8, 111.4); not to have received measles vaccination [OR (95% CI)]: 2.0 (0.8, 3.7); to have had close contact with measles cases [OR (95% CI)]: 6.0 (2.7, 11.2); and to have caregivers younger than 20 years [OR (95% CI)]: 2.6 (1.0, 6.8) (Table 3). From the modelling, the independent predictors of measles transmission in Rigasa, an urban slum in Kaduna metropolis, were no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% confidence interval (CI))]: 28.3 (2.1, 392.0); being unvaccinated against measles [AOR (95% CI)]: 1.8 (0.8, 3.7); recent contact with measles cases [AOR (95% CI)]: 7.5 (2.9, 19.7); and having a caregiver younger than 20 years [AOR (95% CI)]: 5.2 (1.2, 22.5) (Table 4).\nCharacteristics of measles cases and controls in Rigasa community, March 2015\nFactors that may be responsible for measles outbreak at Rigasa community, Igabi LGA, Kaduna State - March 2015\nFactors responsible for measles outbreak’s transmission at Rigasa community, Kaduna metropolis, Kaduna State - March 2015\nLaboratory findings: eleven (73%) of the 15 serum samples were confirmed IgM positive for measles. 
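Crude odds ratios of the kind reported in the bivariate analysis can be reproduced from the 2×2 counts given in the analytic results. A Python sketch, taking measles vaccination status as the exposure (15 of 75 cases and 23 of 75 controls vaccinated, hence 60 and 52 unvaccinated) and assuming the Woolf (log) method for the 95% confidence interval; the counts are from the text, the CI method is an assumption:

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Exposure: NOT vaccinated against measles
# unvaccinated cases = 75 - 15 = 60; unvaccinated controls = 75 - 23 = 52
or_, lo, hi = odds_ratio_ci(a=60, b=52, c=15, d=23)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}, {hi:.1f})")  # OR = 1.8 (95% CI 0.8, 3.7)

# Attack rate among the 112 unvaccinated children (60 were cases)
print(f"Attack rate, unvaccinated: {60 / 112:.0%}")  # 54%
```

Note that these counts give a crude OR of about 1.8 with CI (0.8, 3.7), which matches the quoted CI and the adjusted estimate; the 2.0 shown in the bivariate table likely reflects rounding or a transcription difference.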
Two samples that were negative for measles IgM were also negative for rubella IgM; one was indeterminate, but had an epidemiological link with a confirmed case of measles. The outcome of the last sample could not be ascertained.", "This laboratory-confirmed measles outbreak caused severe morbidity in a densely populated urban slum community in Kaduna metropolis where substandard housing and poor living conditions existed. In addition, the community, like most cosmopolitan settlements in Northern Nigeria, has witnessed poor uptake of RI for all antigens. In this investigation, measles vaccination coverage was 27%; this is lower than the estimated national coverage of 51% [12]. The outbreak of measles in Rigasa spanned from 5th January to 4th April 2015, weeks after the outbreak investigation and response. This prolonged spread of the infection could be due to lack of herd immunity in the community. The case fatality of 1.3% in this study is similar to the 1.5% reported in Lagos, a cosmopolitan city in Nigeria, but lower than the 3.9% reported in Bayelsa, South-South Nigeria, and the 6.9% reported in a rural community in Southwestern Nigeria [8, 11, 13]. The lower CFR reported in this study could be due to early presentation of affected children to health facilities and improved case management. During the outbreak, the state primary health care agency supplied drugs, including vitamin A, to health facilities, which were used in the treatment of cases at no cost. Although measles usually affects under-five children [11], in this study we found children under one year to be the most severely affected, with a CFR of 5%. In developing countries, malnutrition, lack of supportive care, crowding and poorly managed complications of measles have been implicated as causes of high CFR [8]. Routine immunization is significantly associated with reduction in measles infection among vaccinated individuals. 
We found that those who had no or incomplete vaccination according to the EPI schedule were fourteen times more likely to have measles infection (95% CI: 1.8-111.4). In this community, only 8.6% of the children had completed the EPI schedule. This falls far short of the national target of at least 87% RI coverage, with no fewer than 90% of LGAs reaching at least 80% of infants with the complete schedule of routine antigens by 2015 [14]. Countries with a single-dose measles schedule tend to be poor and least developed, to report the lowest routine vaccination coverage, and to experience a high burden of measles disease [15]. During the study period, the EPI schedule specified a single dose of measles antigen at nine months of age. Apart from the country's low measles vaccination coverage, primary vaccine failure occurs in 25% of children vaccinated at 9 months, who are thus unable to develop protective humoral antibodies against measles virus [10].\nWe also found that caregivers who were 20 years old or younger were more likely to have children with measles. This could be attributed to lack of knowledge of the importance of routine immunization and childcare. Caregivers in this community usually present their children for immunization according to the EPI schedule in the first 3 months of life but fail to complete the schedule, thereby missing measles vaccination at the ninth month. Reasons put forward by caregivers for why their children missed measles vaccination ranged from being unaware of the EPI schedule for measles, to competing priorities, to adverse events following immunization (AEFI). These reasons are similar to those cited in previous literature [8, 16]. In this outbreak, the epi-curve revealed a propagated epidemic pattern, which affirms that the disease was probably transmitted from person to person. Rigasa is a densely populated urban slum community, which allowed for the spread of measles. 
Overcrowding is an important risk factor in the transmission of respiratory infections [17]; and measles being a highly contagious disease, recent contact and overcrowding are risk factors for transmission during measles outbreaks [1, 3]. This outbreak response had one major limitation: a delay in reactive vaccination in the community after the investigation, due to fear of potential post-election violence. The investigation was conducted just before the 2015 national election, and past elections in Nigeria, especially the preceding 2011 election, were marred by post-election violence. However, during the investigation, with support from the traditional leaders and government, we were able to implement the following public health actions: educating the community on the importance of measles vaccination, and promptly identifying cases in the community and referring them to health facilities, where drugs supplied by the state government for free treatment of cases were available. A reactive measles vaccination campaign was carried out in the community after the 2015 national election.", "We confirmed that there was a measles outbreak in Rigasa community. Low measles vaccine coverage, as a result of poor uptake of RI, was responsible for the outbreak of measles infection in the community. This resulted in the accumulation of susceptible children, thus lowering the herd immunity against measles infection. This study also suggests that children of younger caregivers were more afflicted with measles infection during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission. A major public health implication of this study is the need to strengthen RI services. We therefore recommend that the state ministry of health should increase demand creation for RI services through more sensitization and education of caregivers. 
Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine-preventable diseases. Engaging caregivers who have completely immunized their children according to the EPI schedule as community role models could be a good strategy to motivate other caregivers to access and complete RI services.\n What is known about this topic Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children;\nSeveral factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks.\n What this study adds We found children less than one year old to be severely affected by measles, with the highest case fatality;\nIn this investigation, we found measles vaccination coverage in Rigasa to be 27%; this is less than the reported national coverage of 51%;\nThis study also reveals that younger caregivers, aged 20 years or less, are more likely than older ones to have children with measles.", "Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children;\nSeveral factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks.", "We found children less than one year old to be severely affected by measles, with the highest case fatality;\nIn this investigation, we found measles vaccination coverage in Rigasa to be 27%; this is less than the reported national coverage of 51%;\nThis study also reveals that younger caregivers, aged 20 years or less, are more likely than older ones to have children with measles.", "The authors declare no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null, "COI-statement" ]
[ "Measles", "outbreak investigation", "routine immunization", "urban slum" ]
Introduction: Measles is an acute, highly contagious vaccine-preventable viral disease which usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. The incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and the individual becomes contagious before the eruption of the rash. In 2014, the World Health Organization (WHO) reported 266,701 measles cases globally with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreaks and transmission in developing countries are lack of parental awareness of the importance of vaccination and compliance with the routine immunization schedule, household overcrowding with easy contact with someone with measles, acquired or inherited immunodeficiency states, and malnutrition [4-6]. During outbreaks, the measles case fatality rate (CFR) in developing countries is normally estimated to be 3-5%, but may reach 10-30%, compared with 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management and complications like pneumonia, diarrhea, croup and central nervous system involvement are responsible for high measles CFR [7, 8]. Nigeria is second only to India among the ten countries with the largest numbers of children unvaccinated against measles, and accounted for 2.7 million of the 21.5 million children globally with zero doses of measles-containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality, with recurrent episodes common in Northern Nigeria in the first quarter of each year [10, 11]. In 2012, 2,553 measles cases were reported in Nigeria, an increase from the 390 cases reported in 2006 [10]. 
In line with the regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conducting bi-annual measles immunization campaigns as a second opportunity, carrying out epidemiologic surveillance with laboratory confirmation of cases, and improving case management, including vitamin A supplementation. This is meant to improve measles coverage from 51% as at 2014 [12] to the 95% needed for effective herd immunity. In October 2013, following a measles outbreak in 19 states in Northern Nigeria, a mass measles vaccination campaign was conducted. The first reported case of suspected measles in Rigasa community, an urban slum of Kaduna metropolis, occurred on 5th January 2015 in an unimmunized 10-month-old child. The District or Local Government Area (LGA) Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10th February 2015, when the reported cases reached the epidemic threshold. We investigated to confirm this outbreak, determine the risk factors for contracting infection and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings and highlights the public health actions undertaken. Methods: Study site and study population: Rigasa is a densely populated urban slum in the south west of Igabi LGA, in Kaduna State, North-West Nigeria. It has an estimated 59,906 households with about 14,156 under-one children. The settlement has three health facilities rendering RI services. The community is noted for poor utilization of RI services and has rejected polio supplemental immunization services in the past. Measles outbreaks have been previously reported from this community. The last measles supplementary immunization activity (SIA) was conducted from 5th to 9th October 2013. 
Descriptive epidemiology-quantitative: in this investigation, a suspected measles case was any person with generalized maculopapular rash, fever, and at least one of the following: cough, coryza or conjunctivitis, or in whom a physician suspected measles, living in Rigasa community from January 2015, when the index case was reported, to March 2015. A confirmed case was any suspected case with a positive measles IgM test or an epidemiological link to a laboratory-confirmed case living in the same community during the same period. We actively searched for cases in the community, where measles is locally known as “Bakon dauro”. We interviewed and physically examined some cases at the treatment center to verify the diagnosis and ensure that they met the case definition. We developed a line-list to collect information from all suspected cases on their age, sex, residence, time of onset, migration history and immunization status. We analyzed the line-list data to characterize the outbreak in time, place and person, and to develop a plausible hypothesis for measles transmission in the community. We conducted in-depth interviews with health workers rendering routine immunization (RI) services to ascertain utilization of RI services by under-five children in the community. Case-control study: we conducted an unmatched case-control study. A suspected case was any child aged 0-59 months residing in Rigasa with a history of fever, rash and at least one of the following: cough, coryza or conjunctivitis, or in whom a physician suspected measles; a confirmed case was a suspected case positive for measles IgM by enzyme-linked immunosorbent assay (ELISA); and a probable case was a suspected case with an epidemiological link to a laboratory-confirmed measles case. A control was any child aged 0-59 months residing in the same community but without the signs and symptoms of measles. 
We enrolled 75 cases and 75 controls to detect an odds ratio of 3 (for a risk factor on which intervention would have a significant impact), assuming 21.2% prevalence of exposure among controls [4], with 95% confidence and 80% power. The sample size was determined using the StatCalc function of Epi-Info statistical software. We selected and recruited the cases consecutively from among the patients that presented at the health facility and in the community. The controls were selected from the community; each control was selected from the 3rd homestead to the right of the household of a case. We used structured questionnaires to collect data on demographic characteristics, exposures, vaccination status and associated factors from both cases and controls, and clinical information from the cases only. Sample collection and laboratory analysis: we collected blood samples from 15 suspected cases for measles serum IgM determination by ELISA at the WHO regional laboratory in Kaduna. Data management: we entered data into Epi-Info statistical software and performed univariate analysis to obtain frequencies and proportions, and bivariate analysis to obtain odds ratios and determine associations, setting a p-value of 0.05 as the cut-off for statistical significance. We also performed unconditional logistic regression to adjust for possible confounders and identify the independent factors for contracting measles infection. Factors that were significant at p < 0.05 in the bivariate analysis and biologically plausible variables such as age and sex were included in the model. In the final model, only variables found to significantly affect the outcome at p < 0.05 were retained. Data management was done using Epi Info™ software 3.5.3 (CDC, Atlanta, USA) and Microsoft Excel. For the qualitative study, the interview transcripts were analyzed thematically using content analysis. Ethical consideration: ethical approval was not obtained as the study was conducted as part of an outbreak response. 
However, permission to conduct the study was granted by the State Primary Health Care Agency, the District or LGA Director of Primary Health Care and the District head of Rigasa community. Informed consent was obtained from all respondents interviewed. Confidentiality of all the subjects was assured and maintained during and after the study. Results: Descriptive epidemiology-quantitative: a total of 159 cases with two deaths (CFR = 1.3%) were identified; 80 (50.3%) were male. Cumulatively, under-five children accounted for 90% of cases. The mean age of cases was 32 (± 22.5) months, but the 0-11 months age group was the most severely affected, with an age-specific death rate of 5% (Table 1). The index case for this outbreak was a 10-month-old female child who developed maculopapular rash on 5th January 2015. She was unvaccinated against measles and had never received any immunization for vaccine-preventable diseases according to the Expanded Programme on Immunization (EPI) schedule. She had no history of contact with anyone with measles and had not travelled out of the community in the preceding month. She was managed as an outpatient and died 3 days later. Figure 1 shows the epidemic curve of the outbreak. The epidemic curve has a propagated pattern with four peaks, the highest on 10th March 2015. The outbreak spanned from 5th January to 4th April 2015. Age distributions of measles cases and case fatality rate (CFR) in Rigasa community, March 2015 Epidemic curve of measles outbreak in Rigasa community, week 1 to 15, 2015 Descriptive epidemiology-qualitative evaluation of RI services: content analysis of in-depth interviews with three health workers rendering routine immunization services revealed that caregivers usually utilized RI services in the first 6-10 weeks of life, that there had been no stock-out of measles vaccine in the last 6 months, and that most caregivers failed to comply with the EPI schedule. 
Also, all planned RI sessions were held in the past 3 months, there was no vaccine stock-out in the 6 months prior to the outbreak, and the facilities had a functioning cold chain system. Analytic study: the total population sampled was 150 (75 cases, 75 controls). The mean age was 33.2 (± 21) months for cases and 37.6 (± 29) months for controls. Males comprised 42 (56%) of cases and 37 (49%) of controls. Among the cases, 15 (20%) were vaccinated against measles compared with 23 (30.7%) of the controls. Among the 112 children unvaccinated against measles, the attack rate was 54%. Cumulatively, 41 (27%) were vaccinated against measles. Only 1 (1.3%) case, compared with 12 (16%) of controls, had completed the Expanded Programme on Immunization schedule (Table 2). From the bivariate analysis, cases were more likely than controls to have had no or incomplete RI [OR (95% CI)]: 14.0 (1.8, 111.4); not to have received measles vaccination [OR (95% CI)]: 2.0 (0.8, 3.7); to have had close contact with measles cases [OR (95% CI)]: 6.0 (2.7, 11.2); and to have caregivers younger than 20 years [OR (95% CI)]: 2.6 (1.0, 6.8) (Table 3). From the modelling, the independent predictors of measles transmission in Rigasa, an urban slum in Kaduna metropolis, were no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% confidence interval (CI))]: 28.3 (2.1, 392.0); being unvaccinated against measles [AOR (95% CI)]: 1.8 (0.8, 3.7); recent contact with measles cases [AOR (95% CI)]: 7.5 (2.9, 19.7); and having a caregiver younger than 20 years [AOR (95% CI)]: 5.2 (1.2, 22.5) (Table 4). Characteristics of measles cases and controls in Rigasa community, March 2015 Factors that may be responsible for measles outbreak at Rigasa community, Igabi LGA, Kaduna State - March 2015 Factors responsible for measles outbreak’s transmission at Rigasa community, Kaduna metropolis, Kaduna State - March 2015 Laboratory findings: eleven (73%) of the 15 serum samples were confirmed IgM positive for measles. 
Two samples that were negative for measles IgM were also negative for rubella IgM; one was indeterminate, but had an epidemiological link with a confirmed case of measles. The outcome of the last sample could not be ascertained. Discussion: This laboratory-confirmed measles outbreak caused severe morbidity in a densely populated urban slum community in Kaduna metropolis where substandard housing and poor living conditions existed. In addition, the community, like most cosmopolitan settlements in Northern Nigeria, has witnessed poor uptake of RI for all antigens. In this investigation, measles vaccination coverage was 27%; this is lower than the estimated national coverage of 51% [12]. The outbreak of measles in Rigasa spanned from 5th January to 4th April 2015, weeks after the outbreak investigation and response. This prolonged spread of the infection could be due to lack of herd immunity in the community. The case fatality of 1.3% in this study is similar to the 1.5% reported in Lagos, a cosmopolitan city in Nigeria, but lower than the 3.9% reported in Bayelsa, South-South Nigeria, and the 6.9% reported in a rural community in Southwestern Nigeria [8, 11, 13]. The lower CFR reported in this study could be due to early presentation of affected children to health facilities and improved case management. During the outbreak, the state primary health care agency supplied drugs, including vitamin A, to health facilities, which were used in the treatment of cases at no cost. Although measles usually affects under-five children [11], in this study we found children under one year to be the most severely affected, with a CFR of 5%. In developing countries, malnutrition, lack of supportive care, crowding and poorly managed complications of measles have been implicated as causes of high CFR [8]. Routine immunization is significantly associated with reduction in measles infection among vaccinated individuals. 
We found that those who had no or incomplete vaccination according to the EPI schedule were fourteen times more likely to have measles infection (95% CI: 1.8-111.4). In this community, only 8.6% of the children had completed the EPI schedule. This falls far short of the national target of at least 87% RI coverage, with no fewer than 90% of LGAs reaching at least 80% of infants with the complete schedule of routine antigens by 2015 [14]. Countries with a single-dose measles schedule tend to be poor and least developed, to report the lowest routine vaccination coverage, and to experience a high burden of measles disease [15]. During the study period, the EPI schedule specified a single dose of measles antigen at nine months of age. Apart from the country's low measles vaccination coverage, primary vaccine failure occurs in 25% of children vaccinated at 9 months, who are thus unable to develop protective humoral antibodies against measles virus [10]. We also found that caregivers who were 20 years old or younger were more likely to have children with measles. This could be attributed to lack of knowledge of the importance of routine immunization and childcare. Caregivers in this community usually present their children for immunization according to the EPI schedule in the first 3 months of life but fail to complete the schedule, thereby missing measles vaccination at the ninth month. Reasons put forward by caregivers for why their children missed measles vaccination ranged from being unaware of the EPI schedule for measles, to competing priorities, to adverse events following immunization (AEFI). These reasons are similar to those cited in previous literature [8, 16]. In this outbreak, the epi-curve revealed a propagated epidemic pattern, which affirms that the disease was probably transmitted from person to person. Rigasa is a densely populated urban slum community, which allowed for the spread of measles. 
Overcrowding is an important risk factor in the transmission of respiratory infections [17]; and measles being a highly contagious disease, recent contact and overcrowding are risk factors for transmission during measles outbreaks [1, 3]. This outbreak response had one major limitation: a delay in reactive vaccination in the community after the investigation, due to fear of potential post-election violence. The investigation was conducted just before the 2015 national election, and past elections in Nigeria, especially the preceding 2011 election, were marred by post-election violence. However, during the investigation, with support from the traditional leaders and government, we were able to implement the following public health actions: educating the community on the importance of measles vaccination, and promptly identifying cases in the community and referring them to health facilities, where drugs supplied by the state government for free treatment of cases were available. A reactive measles vaccination campaign was carried out in the community after the 2015 national election. Conclusion: We confirmed that there was a measles outbreak in Rigasa community. Low measles vaccine coverage, as a result of poor uptake of RI, was responsible for the outbreak of measles infection in the community. This resulted in the accumulation of susceptible children, thus lowering the herd immunity against measles infection. This study also suggests that children of younger caregivers were more afflicted with measles infection during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission. A major public health implication of this study is the need to strengthen RI services. 
We therefore recommend that the state ministry of health should increase demand creation for RI services through more sensitization and education of caregivers. Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine-preventable diseases. Engaging caregivers who have completely immunized their children according to the EPI schedule as community role models could be a good strategy to motivate other caregivers to access and complete RI services. What is known about this topic Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children; Several factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks. What this study adds We found children less than one year old to be severely affected by measles, with the highest case fatality; In this investigation, we found measles vaccination coverage in Rigasa to be 27%; this is less than the reported national coverage of 51%; This study also reveals that younger caregivers, aged 20 years or less, are more likely than older ones to have children with measles. 
What is known about this topic: Measles is a highly contagious vaccine-preventable viral disease that usually affects younger children; Several factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks. What this study adds: We found children less than one year old to be severely affected by measles, with the highest case fatality; In this investigation, we found measles vaccination coverage in Rigasa to be 27%; this is less than the reported national coverage of 51%; This study also reveals that younger caregivers, aged 20 years or less, are more likely than older ones to have children with measles. Competing interests: The authors declare no competing interests.
Background: Despite the availability of an effective vaccine, measles epidemics continue to occur in Nigeria. In February 2015, we investigated a suspected measles outbreak in Rigasa, an urban slum in Kaduna State, Nigeria. The study aimed to confirm the outbreak, determine the risk factors and implement appropriate control measures. Methods: We identified cases through active search and health record review. We conducted an unmatched case-control (1:1) study involving 75 under-5 cases and 75 neighborhood controls. We interviewed caregivers of these children using a structured questionnaire to collect information on sociodemographic characteristics and the vaccination status of the children. We collected 15 blood samples for measles IgM determination using enzyme-linked immunosorbent assay. Descriptive, bivariate and logistic regression analyses were performed using Epi-Info software. The confidence interval was set at 95%. Results: We recorded 159 cases with two deaths (case fatality rate = 1.3%); 80 (50.3%) of the cases were male. Of the 15 serum samples, 11 (73.3%) were confirmed IgM positive for measles. Compared with the controls, the cases were more likely to have had no or incomplete routine immunization (RI) [adjusted odds ratio (AOR) (95% confidence interval (CI))]: 28.3 (2.1, 392.0), contact with measles cases [AOR (95% CI)]: 7.5 (2.9, 19.7), and a caregiver younger than 20 years [AOR (95% CI)]: 5.2 (1.2, 22.5). Conclusions: We identified low RI uptake and contact with measles cases as predictors of the measles outbreak in Rigasa, Kaduna State. We recommended strengthening of RI and education of caregivers on completing the RI schedule.
Introduction: Measles is an acute, highly contagious, vaccine-preventable viral disease that usually affects younger children. Transmission is primarily person-to-person via aerosolized droplets or by direct contact with the nasal and throat secretions of infected persons [1, 2]. The incubation period is 10-14 days (range, 8-15 days) from exposure to onset of rash, and an individual becomes contagious before the eruption of the rash. In 2014, the World Health Organization (WHO) reported 266,701 measles cases globally, with 145,700 measles deaths [1]. Being unvaccinated against measles is a risk factor for contracting the disease [3]. Other factors responsible for measles outbreaks and transmission in developing countries are lack of parental awareness of the importance of vaccination and of compliance with the routine immunization schedule, household overcrowding with easy contact with someone with measles, acquired or inherited immunodeficiency states, and malnutrition [4-6]. During outbreaks, the measles case fatality rate (CFR) in developing countries is normally estimated at 3-5%, but may reach 10-30%, compared with the 0.1% reported from industrialized countries [2]. Malnutrition, poor supportive case management, and complications such as pneumonia, diarrhea, croup, and central nervous system involvement are responsible for the high measles CFR [7, 8]. Nigeria is second only to India among the ten countries with the largest numbers of children unvaccinated for measles, accounting for 2.7 million of the 21.5 million children globally who had received zero doses of measles-containing vaccine (MCV1) in 2013 [9]. Measles is one of the top ten causes of childhood morbidity and mortality, with recurrent episodes common in Northern Nigeria in the first quarter of each year [10, 11]. In 2012, 2,553 measles cases were reported in Nigeria, an increase from the 390 cases reported in 2006 [10].
In line with the regional strategic plan, Nigeria planned to eliminate measles by 2020 by strengthening routine immunization, conducting biannual measles immunization campaigns to provide a second opportunity for vaccination, carrying out epidemiologic surveillance with laboratory confirmation of cases, and improving case management, including Vitamin A supplementation. This is meant to raise measles coverage from 51% as of 2014 [12] to the 95% needed for effective herd immunity. In October 2013, following a measles outbreak in 19 states in Northern Nigeria, a mass measles vaccination campaign was conducted. The first reported case of suspected measles in Rigasa community, an urban slum of the Kaduna metropolis, occurred on 5 January 2015 in an unimmunized 10-month-old child. The district or Local Government Area (LGA) Disease Surveillance and Notification Officer (DSNO) notified the State Epidemiologist and State DSNO on 10 February 2015, when the reported cases reached the epidemic threshold. We investigated to confirm this outbreak, determine the risk factors for contracting the infection, and implement appropriate control measures. This paper describes the epidemiological methods employed in the investigation, summarizes the key findings, and highlights the public health actions undertaken. Conclusion: We confirmed that there was a measles outbreak in Rigasa community. Low measles vaccine coverage, resulting from poor uptake of RI, was responsible for the outbreak of measles infection in the community. This led to an accumulation of susceptible children, lowering herd immunity against measles infection. This study also suggests that children of younger caregivers were more afflicted with measles during this outbreak. The poor housing conditions and overcrowding in this community greatly fueled the outbreak, supporting the evidence that close contact with a measles case is a risk factor for measles transmission.
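The 95% coverage target mentioned above can be motivated by the standard herd-immunity threshold formula, 1 − 1/R0. A minimal sketch, assuming the commonly cited measles basic reproduction number range of 12-18 (an assumption for illustration, not a figure from this investigation):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to block
    sustained transmission: 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # an infection with R0 <= 1 dies out on its own
    return 1.0 - 1.0 / r0

# Measles R0 is commonly cited as 12-18 (assumed values, not study data).
for r0 in (12.0, 18.0):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.1%}")
```

For R0 in this range the threshold works out to roughly 92-94% immunity, which is consistent with the 95% coverage goal cited in the text.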
A major public health implication of this study is the need to strengthen RI services. We therefore recommend that the state ministry of health increase demand creation for RI services through more sensitization and education of caregivers. Additionally, health workers should encourage and motivate caregivers who access RI services to complete the EPI schedule to prevent vaccine-preventable diseases. Engaging caregivers who have completely immunized their children according to the EPI schedule as community role models could be a good strategy to motivate other caregivers to access and complete RI services. What is known about this topic: Measles is a highly contagious, vaccine-preventable viral disease that usually affects younger children; several factors, such as low coverage for measles antigens and overcrowding, have been noted to be risk factors for measles outbreaks. What this study adds: We found children less than one year old to be severely affected by measles, with the highest case fatality; in this investigation, we found measles vaccination coverage to be 27% in Rigasa, lower than the reported national coverage of 51%; this study also reveals that younger caregivers (20 years or less), compared with older ones, are more likely to have children with measles.
Background: Despite the availability of an effective vaccine, measles epidemics continue to occur in Nigeria. In February 2015, we investigated a suspected measles outbreak in an urban slum in Rigasa, Kaduna State, Nigeria. The study aimed to confirm the outbreak, determine the risk factors, and implement appropriate control measures. Methods: We identified cases through active search and health record review. We conducted an unmatched (1:1) case-control study involving 75 randomly sampled under-5 cases and 75 neighborhood controls. We interviewed the caregivers of these children using a structured questionnaire to collect information on sociodemographic characteristics and the vaccination status of the children. We collected 15 blood samples and tested them for measles IgM using an enzyme-linked immunosorbent assay. Descriptive, bivariate, and logistic regression analyses were performed using Epi Info software, with confidence intervals set at 95%. Results: We recorded 159 cases with two deaths (case fatality rate = 1.3%); 80 (50.3%) of the cases were male. Of the 15 serum samples, 11 (73.3%) were confirmed IgM positive for measles. Compared with the controls, the cases were more likely to have had no or incomplete routine immunization (RI) [adjusted odds ratio (AOR), 95% confidence interval (CI): 28.3 (2.1, 392.0)], contact with measles cases [AOR (95% CI): 7.5 (2.9, 19.7)], and a caregiver younger than 20 years [AOR (95% CI): 5.2 (1.2, 22.5)]. Conclusions: We identified low RI uptake and contact with measles cases as predictors of the measles outbreak in Rigasa, Kaduna State. We recommended strengthening RI and educating caregivers on completing the RI schedule.
3,648
344
8
[ "measles", "community", "cases", "case", "outbreak", "children", "study", "immunization", "health", "2015" ]
[ "test", "test" ]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Measles | outbreak investigation | routine immunization | urban slum [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] Caregivers | Case-Control Studies | Child, Preschool | Disease Outbreaks | Enzyme-Linked Immunosorbent Assay | Epidemics | Female | Humans | Immunoglobulin M | Infant | Infant, Newborn | Logistic Models | Male | Measles | Measles Vaccine | Nigeria | Poverty Areas | Risk Factors | Surveys and Questionnaires | Vaccination [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | community | cases | case | outbreak | children | study | immunization | health | 2015 [SUMMARY]
[CONTENT] measles | 10 | nigeria | reported | countries | cases | surveillance | 2014 | million | states [SUMMARY]
[CONTENT] case | community | suspected | measles | cases | data | study | analysis | services | control [SUMMARY]
[CONTENT] measles | cases | ci | 2015 | 95 | 95 ci | months | march | march 2015 | aor 95 [SUMMARY]
[CONTENT] measles | caregivers | children | coverage | ri | outbreak | ri services | services | younger | study [SUMMARY]
[CONTENT] measles | community | children | cases | coverage | outbreak | case | interests | declare competing interests | competing interests [SUMMARY]
[CONTENT] measles | community | children | cases | coverage | outbreak | case | interests | declare competing interests | competing interests [SUMMARY]
[CONTENT] Nigeria ||| February 2015 | Rigasa | Kaduna State | Nigeria ||| [SUMMARY]
[CONTENT] ||| 75 | 75 ||| ||| 15 | Enzyme Linked Immunosorbent ||| Epi-info ||| 95% [SUMMARY]
[CONTENT] 159 | two | 1.3% ||| 50.3% | 80 ||| 15 | 11(73.3% ||| ||| AOR | 95% | CI | 28.3 | 2.1 | 392.0 | 95% | CI | 7.5 | 2.9 | 19.7 | 20 years ||| AOR | 95% | CI | 5.2 | 1.2 | 22.5 ||| 11 [SUMMARY]
[CONTENT] Rigasa | Kaduna State ||| [SUMMARY]
[CONTENT] Nigeria ||| February 2015 | Rigasa | Kaduna State | Nigeria ||| ||| ||| 75 | 75 ||| ||| 15 | Enzyme Linked Immunosorbent ||| Epi-info ||| 95% ||| ||| 159 | two | 1.3% ||| 50.3% | 80 ||| 15 | 11(73.3% ||| ||| AOR | 95% | CI | 28.3 | 2.1 | 392.0 | 95% | CI | 7.5 | 2.9 | 19.7 | 20 years ||| AOR | 95% | CI | 5.2 | 1.2 | 22.5 ||| 11 ||| Rigasa | Kaduna State ||| [SUMMARY]
[CONTENT] Nigeria ||| February 2015 | Rigasa | Kaduna State | Nigeria ||| ||| ||| 75 | 75 ||| ||| 15 | Enzyme Linked Immunosorbent ||| Epi-info ||| 95% ||| ||| 159 | two | 1.3% ||| 50.3% | 80 ||| 15 | 11(73.3% ||| ||| AOR | 95% | CI | 28.3 | 2.1 | 392.0 | 95% | CI | 7.5 | 2.9 | 19.7 | 20 years ||| AOR | 95% | CI | 5.2 | 1.2 | 22.5 ||| 11 ||| Rigasa | Kaduna State ||| [SUMMARY]
Prognosis of critically ill cirrhotic versus non-cirrhotic patients: a comprehensive score-matched study.
25580088
Cirrhotic patients admitted to an intensive care unit (ICU) have high mortality rates. The present study compared the characteristics and outcomes of critically ill patients admitted to the ICU with and without cirrhosis using the matched Acute Physiology and Chronic Health Evaluation III (APACHE III) and Sequential Organ Failure Assessment (SOFA) scores.
BACKGROUND
A retrospective case-control study was performed at the medical ICU of a tertiary-care hospital between January 2006 and December 2009. Patients were admitted with life-threatening complications and were matched for APACHE III and SOFA scores. Of the 336 patients enrolled in the study, 87 patients with cirrhosis were matched to 87 patients without cirrhosis according to their APACHE III scores. Another 55 patients with cirrhosis were matched to 55 patients without cirrhosis according to their SOFA scores. Demographic data, aetiology of ICU admission, and laboratory variables were also evaluated.
METHODS
The overall hospital mortality rate of the patients with cirrhosis in the APACHE III-matched group was higher than that of their counterparts (73.6% vs 57.5%, P = .026), but the rates did not differ significantly in the SOFA-matched group (61.8% vs 67.3%). In the APACHE III-matched group, the SOFA scores of patients with cirrhosis were significantly higher than those of patients without cirrhosis (P < .001), whereas the difference in APACHE III scores was nonsignificant between the SOFA-matched patients with and without cirrhosis.
RESULTS
Score-matched analytical data showed that the SOFA scores significantly differentiated the patients admitted to the ICU with cirrhosis from those without cirrhosis in the APACHE III-matched group, whereas the difference in APACHE III scores between the patients with and without cirrhosis was nonsignificant in the SOFA-matched group.
CONCLUSIONS
[ "APACHE", "Aged", "Case-Control Studies", "Critical Illness", "Female", "Hospital Mortality", "Humans", "Intensive Care Units", "Liver Cirrhosis", "Male", "Middle Aged", "Organ Dysfunction Scores", "Prognosis", "Retrospective Studies" ]
4289577
Background
Accurate prognostic predictors are crucial for patients admitted to an intensive care unit (ICU). Prognostic scoring systems are useful for clinical management such as predicting a survival rate, making decisions, and facilitating explanation of disease severity, by clinical physicians. Patients with cirrhosis admitted to an ICU frequently have disappointing outcomes despite intensive medical support, and these patients are particular targets for prognostic evaluation. Various systems for scoring severity and predicting prognosis have been developed and applied for decades. The Acute Physiology and Chronic Health Evaluation III (APACHE III) score [1], one of the widely used scoring systems, is known for its accuracy in predicting mortality. However, the APACHE III scoring system was initially developed for various diseases and not exclusively for liver-related diseases. By contrast, the Sequential Organ Failure Assessment (SOFA) score [2], another widely used scoring system, is superior to the APACHE III scoring system for assessing specific organ dysfunction including cirrhosis [3, 4]. Our previous study demonstrated that the APACHE III and SOFA scores were both independently associated with a hospital mortality rate and demonstrated high discriminatory power for predicting mortality in patients with cirrhosis [5]. However, few studies have performed detailed independent comparisons between APACHE III and SOFA scores [6]. In the present case-control study, we matched APACHE III and SOFA scores and compared the different clinical characteristics and outcomes of patients with cirrhosis with those of their matched noncirrhotic controls.
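The SOFA score discussed above is the sum of six organ-system subscores, each graded 0-4 (total 0-24). A minimal sketch of that aggregation; the dictionary keys are illustrative organ-system names, and the clinical grading of each subscore is not implemented here:

```python
def total_sofa(subscores: dict) -> int:
    """Sum six organ-system subscores (each 0-4) into a total SOFA score (0-24)."""
    expected = {"respiration", "coagulation", "liver",
                "cardiovascular", "cns", "renal"}
    if set(subscores) != expected:
        raise ValueError(f"expected subscores for {sorted(expected)}")
    for organ, points in subscores.items():
        if not 0 <= points <= 4:
            raise ValueError(f"{organ} subscore must be 0-4, got {points}")
    return sum(subscores.values())

# Illustrative patient with marked liver and coagulation dysfunction.
example = {"respiration": 2, "coagulation": 3, "liver": 4,
           "cardiovascular": 1, "cns": 0, "renal": 1}
print(total_sofa(example))
```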
Methods
Study design The Chang Gung Medical Foundation Institutional Review Board approved the present study and waived the need for informed consent, because patient privacy was not breached during the study, and the study did not interfere with clinical decisions related to patient care (approval No. 98-3658A3). All data in our study were anonymised. This retrospective case-control study was conducted in a tertiary-care hospital. The enrolled patients were recruited from a database of critically ill patients admitted to medical ICUs between January 2006 and December 2009. For the APACHE III-matched group (174 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis using the criterion of APACHE III ± 3 points. For the SOFA-matched group (110 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis using the criterion of SOFA ± 1 point [7, 8]. The outcomes of interest were the length of stay in an ICU, the length of stay in a hospital, and the hospital mortality rate.

Study population and data collection All patients admitted to medical ICUs between January 2006 and December 2009 with APACHE III and SOFA scores available were eligible for inclusion. Exclusion criteria were age < 18 years, length of stay in a hospital or an ICU of < 24 hours, chronic uraemia with ongoing renal replacement therapy, and hospital readmission. Data were recorded regarding patient demographics, reason for ICU admission, clinical and laboratory variables, APACHE III and SOFA scores, the risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, end-stage renal failure (RIFLE) classification [9], the length of stay in an ICU and a hospital, and hospital mortality. Data on the length of stay included those from patients with hospital mortality.

Definitions Cirrhosis was diagnosed based on liver histology or a combination of physical presentation, biochemical data, and ultrasonographic findings. Illness severity was assessed according to the APACHE III and SOFA scores, which were defined and calculated as described previously [1, 2]. Acute kidney injury (AKI) was defined using the RIFLE criteria, and patients were scored as RIFLE-R or higher severity. Baseline serum creatinine (SCr) concentration was the first value measured during hospitalisation. The Modification of Diet in Renal Disease formula was used to estimate baseline SCr concentration in patients whose previous SCr concentration was unavailable. The criteria resulting in the most severe RIFLE classification were used [9]. A simple model for assessing mortality was developed as follows: non-AKI (0 points), RIFLE-R (1 point), RIFLE-I (2 points), and RIFLE-F (3 points) on Day 1 of ICU admission [10, 11]. The lowest physiological and biochemical values on Day 1 of ICU admission were recorded. In sedated or paralysed patients, neurological scoring was not performed and was not classified as neurological failure. In patients who were intubated but not sedated, the best verbal response was determined according to clinical judgment.

Statistical analysis Descriptive statistical results were expressed as mean ± standard error (SE). In the primary analysis, the patients with cirrhosis were compared with the patients without cirrhosis. All variables were tested for normal distribution using the Kolmogorov–Smirnov test. Student’s t-test was used to compare the means of continuous variables and normally distributed data, whereas the Mann–Whitney U test was used for all other comparisons. Categorical data were tested using the χ2 test or Fisher’s exact test. Calibration was assessed using the Hosmer–Lemeshow goodness-of-fit test (C statistic) to compare the number of observed and predicted deaths in various risk groups for the entire range of death probabilities. Discrimination was assessed by determining the area under the receiver operating characteristic curve (AUROC). Areas under two receiver operating characteristic curves were compared by applying a nonparametric approach. The AUROC analysis was also conducted to estimate the cut-off values, sensitivity, specificity, overall correctness, and positive and negative predictive values. Finally, cut-off points were calculated by determining the best Youden’s index (sensitivity + specificity − 1). Cumulative survival curves over time were generated by applying the Kaplan–Meier approach and compared using the log rank test. All statistical tests were 2-tailed; P < .05 was considered statistically significant. Data were analysed using SPSS Version 13.0 for Windows (SPSS, Inc., Chicago, IL, USA).
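The cut-off selection described above, maximising Youden's index J = sensitivity + specificity − 1 over candidate thresholds, can be sketched in a few lines of plain Python. The scores and outcome labels below are made-up illustrative values, not data from this study:

```python
def best_youden_cutoff(scores, labels):
    """Return (cutoff, J) maximising Youden's index over candidate cut-offs.

    `labels` are 1 for the event (e.g. hospital mortality), 0 otherwise;
    a case is called positive when its score >= cutoff.
    """
    positives = sum(labels)
    negatives = len(labels) - positives
    best = (None, -1.0)
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        j = tp / positives + tn / negatives - 1.0  # sensitivity + specificity - 1
        if j > best[1]:
            best = (cut, j)
    return best

# Illustrative (fabricated) severity scores and hospital-mortality labels.
scores = [4, 6, 7, 9, 10, 12, 13, 15]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
print(best_youden_cutoff(scores, labels))
```

In practice the candidate thresholds come from the ROC analysis itself; this brute-force version simply evaluates every observed score as a potential cut-off.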
Results
Patient characteristics A total of 336 critically ill patients admitted to the medical ICU between January 2006 and December 2009 were enrolled in the present study. Table 1 lists the reasons for ICU admission. The demographic data, clinical characteristics, and outcomes of the 2 score-matched groups are depicted in Tables 2 and 3, respectively.Table 1 Reasons for ICU admission APACHE III-matchedSOFA-matchedgroup (n = 174)group (n = 110)Sepsis99(56.9%)62(56.4%) Urinary tract infection9(5.2%)8(7.3%) Pneumonia62(35.6%)38(34.5%) Intra-abdominal infection16(9.2%)7(6.4%) Blood stream infection9(5.2%)8(7.3%) Soft tissue infection3(1.7%)1(0.9%)Cardiovascular diseases4(2.3%)2(1.8%)Upper gastrointestinal bleeding18(10.3%)12(10.9%)Hepatic failure34(19.5%)23(20.9%)Other19(10.9%)11(10.0%) Abbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment.Table 2 APACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Total (n = 174)Cirrhosis (n = 87)Non-cirrhosis (n = 87) p-valueAge (years)65.5 ± 1.159.8 ± 1.571.3 ± 1.4<0.001Male/Female123/5164/2359/28NS(0.405)Length of ICU stay (days)14 ± 111 ± 116 ± 20.009Length of Hospital stay (days)32 ± 230 ± 335 ± 3NS(0.223)Body weight on ICU admission (kg)61 ± 1.065 ± 157 ± 1<0.001GCS, ICU first day (points)9 ± 010 ± 19 ± 1NS(0.077)MAP, ICU admission (mmHg)78 ± 180 ± 276 ± 2NS(0.137)Serum Creatinine, ICU first day (mg/dl)2.5 ± 0.22.4 ± 0.22.5 ± 0.3NS(0.885)Arterial HCO3 −, ICU first day21 ± 119 ± 1.022 ± 1.00.002Serum Sodium, ICU first day (mg/dl)138 ± 1.0138 ± 1.0138 ± 1.0NS(0.890)Bilirubin, ICU first day (mg/dl)6.4 ± 0.711.4 ± 1.31.4 ± 0.3<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.4 ± 0.12.4 ± 0.1NS(0.329)Blood Sugar, ICU first day (mg/dl)166 ± 7159 ± 11170 ± 10.0NS(0.482)Hemoglobin, ICU first day (g/dl)9.6 ± 0.29.2 ± 0.210.0 ± 0.20.024Platelets, ICU first day (×103/μL)145.0 ± 9.179.4 ± 5.9210.5 ± 
14.1<0.001Leukocytes, ICU first day (×103/μL)14.5 ± 0.713.5 ± 1.015.5 ± 0.8NS(0.133)PaO2/FiO2, ICU first day(mmHg)268 ± 11275 ± 13262 ± 17NS(0.536)Shock(%)54 (31.0)21 (24.1)33 (37.9)0.049Hospital mortality (%)114 (65.5)64 (73.6)50 (57.5)0.026 Score systems APACHE III, ICU first day(mean ± SE)87.8 ± 2.187.7 ± 3.488.0 ± 2.5NS(0.941)SOFA, ICU first day(mean ± SE)9.6 ± 0.311.3 ± 0.48.0 ± 0.3<0.001RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.6 ± 0.11.7 ± 0.2NS(0.913) Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.Table 3 SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Total (n = 110)Cirrhosis (n = 55)Non-cirrhosis (n = 55) p-valueAge (years)65.0 ± 1.461.1 ± 2.068.9 ± 2.00.006Male/Female79/3140/1539/16NS(0.832)Length of ICU stay (days)14 ± 111 ± 118 ± 20.007Length of Hospital stay (days)31 ± 232 ± 431 ± 3NS(0.843)Body weight on ICU admission (kg)60 ± 164 ± 256 ± 20.001GCS, ICU first day (points)10 ± 011 ± 18 ± 10.005MAP, ICU admission (mmHg)78 ± 283 ± 273 ± 20.002Serum Creatinine, ICU first day (mg/dl)2.4 ± 0.21.8 ± 0.23.0 ± 0.30.004Arterial HCO3 −, ICU first day21 ± 120 ± 121 ± 1NS(0.458)Serum Sodium, ICU first day (mg/dl)137 ± 1138 ± 1137 ± 1NS(0.282)Bilirubin, ICU first day (mg/dl)5.8 ± 0.89.6 ± 1.51.9 ± 0.4<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.5 ± 0.12.3 ± 0.10.016Blood Sugar, ICU first day (mg/dl)160 ± 8159 ± 12161 ± 11NS(0.899)Hemoglobin, ICU first day (g/dl)9.7 ± 0.29.3 ± 0.310.0 ± 0.3NS(0.134)Platelets, ICU first day (×103/μL)127.3 ± 10.185.8 ± 8.1168.8 ± 16.9<0.001Leukocytes, ICU first day (×103/μL)13.9 ± 
0.812.9 ± 1.314.9 ± 1.0NS(0.224)PaO2/FiO2, ICU first day(mmHg)256 ± 13270 ± 16243 ± 19NS(0.299)Shock(%)39 (35.5)9 (16.3)30 (54.5)<0.001Hospital mortality (%)71(64.5)34 (61.8)37 (67.3)NS(0.550) Score systems APACHE III, ICU first day(mean ± SE)86.5 ± 2.881.1 ± 4.291.9 ± 3.7NS(0.058)SOFA, ICU first day(mean ± SE)10.0 ± 0.310.0 ± 0.510.0 ± 0.3NS(0.927)RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.4 ± 0.21.9 ± 0.20.041 Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. Reasons for ICU admission Abbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment. APACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. 
SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. In the APACHE III-matched group (Table 2), the critically ill patients with cirrhosis had a lower arterial HCO3− level (P = .002), a lower haemoglobin level (P = .024), a lower platelet count (P < .001), and a higher serum bilirubin level (P < .001) on Day 1 of ICU admission compared with the patients without cirrhosis. The patients without cirrhosis with the same score were older and experienced more shock episodes than the patients with cirrhosis did. The hospital mortality rate was significantly higher in the patients with cirrhosis than in the patients without cirrhosis (P = .026). Renal function and a PaO2/FiO2 ratio were similar between the 2 groups. Among APACHE III-matched patients, the patients with cirrhosis had significantly higher SOFA scores than those of the patients without cirrhosis (P < .001). In the SOFA-matched group (Table 3), the patients with cirrhosis had a lower platelet count (P < .001), a higher Glasgow coma scale (GCS), a more stable haemodynamic status, more favourable renal function, a higher bilirubin level, and a lower albumin level compared with the patients without cirrhosis. The patients without cirrhosis experienced more shock episodes than did the patients with cirrhosis. No significant difference in the hospital mortality rate was observed between the patients with and without cirrhosis. 
A total of 336 critically ill patients admitted to the medical ICU between January 2006 and December 2009 were enrolled in the present study. Table 1 lists the reasons for ICU admission. The demographic data, clinical characteristics, and outcomes of the 2 score-matched groups are depicted in Tables 2 and 3, respectively.

Table 1 Reasons for ICU admission

| Reason | APACHE III-matched group (n = 174) | SOFA-matched group (n = 110) |
|---|---|---|
| Sepsis | 99 (56.9%) | 62 (56.4%) |
| — Urinary tract infection | 9 (5.2%) | 8 (7.3%) |
| — Pneumonia | 62 (35.6%) | 38 (34.5%) |
| — Intra-abdominal infection | 16 (9.2%) | 7 (6.4%) |
| — Blood stream infection | 9 (5.2%) | 8 (7.3%) |
| — Soft tissue infection | 3 (1.7%) | 1 (0.9%) |
| Cardiovascular diseases | 4 (2.3%) | 2 (1.8%) |
| Upper gastrointestinal bleeding | 18 (10.3%) | 12 (10.9%) |
| Hepatic failure | 34 (19.5%) | 23 (20.9%) |
| Other | 19 (10.9%) | 11 (10.0%) |

Abbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment.

Table 2 APACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis

| Variable | Total (n = 174) | Cirrhosis (n = 87) | Non-cirrhosis (n = 87) | p-value |
|---|---|---|---|---|
| Age (years) | 65.5 ± 1.1 | 59.8 ± 1.5 | 71.3 ± 1.4 | <0.001 |
| Male/Female | 123/51 | 64/23 | 59/28 | NS (0.405) |
| Length of ICU stay (days) | 14 ± 1 | 11 ± 1 | 16 ± 2 | 0.009 |
| Length of hospital stay (days) | 32 ± 2 | 30 ± 3 | 35 ± 3 | NS (0.223) |
| Body weight on ICU admission (kg) | 61 ± 1.0 | 65 ± 1 | 57 ± 1 | <0.001 |
| GCS, ICU first day (points) | 9 ± 0 | 10 ± 1 | 9 ± 1 | NS (0.077) |
| MAP, ICU admission (mmHg) | 78 ± 1 | 80 ± 2 | 76 ± 2 | NS (0.137) |
| Serum creatinine, ICU first day (mg/dl) | 2.5 ± 0.2 | 2.4 ± 0.2 | 2.5 ± 0.3 | NS (0.885) |
| Arterial HCO3−, ICU first day | 21 ± 1 | 19 ± 1.0 | 22 ± 1.0 | 0.002 |
| Serum sodium, ICU first day (mg/dl) | 138 ± 1.0 | 138 ± 1.0 | 138 ± 1.0 | NS (0.890) |
| Bilirubin, ICU first day (mg/dl) | 6.4 ± 0.7 | 11.4 ± 1.3 | 1.4 ± 0.3 | <0.001 |
| Albumin, ICU first day (g/l) | 2.4 ± 0.1 | 2.4 ± 0.1 | 2.4 ± 0.1 | NS (0.329) |
| Blood sugar, ICU first day (mg/dl) | 166 ± 7 | 159 ± 11 | 170 ± 10.0 | NS (0.482) |
| Hemoglobin, ICU first day (g/dl) | 9.6 ± 0.2 | 9.2 ± 0.2 | 10.0 ± 0.2 | 0.024 |
| Platelets, ICU first day (×10³/μL) | 145.0 ± 9.1 | 79.4 ± 5.9 | 210.5 ± 14.1 | <0.001 |
| Leukocytes, ICU first day (×10³/μL) | 14.5 ± 0.7 | 13.5 ± 1.0 | 15.5 ± 0.8 | NS (0.133) |
| PaO2/FiO2, ICU first day (mmHg) | 268 ± 11 | 275 ± 13 | 262 ± 17 | NS (0.536) |
| Shock (%) | 54 (31.0) | 21 (24.1) | 33 (37.9) | 0.049 |
| Hospital mortality (%) | 114 (65.5) | 64 (73.6) | 50 (57.5) | 0.026 |
| Score systems |  |  |  |  |
| APACHE III, ICU first day (mean ± SE) | 87.8 ± 2.1 | 87.7 ± 3.4 | 88.0 ± 2.5 | NS (0.941) |
| SOFA, ICU first day (mean ± SE) | 9.6 ± 0.3 | 11.3 ± 0.4 | 8.0 ± 0.3 | <0.001 |
| RIFLE, ICU first day (mean ± SE) | 1.6 ± 0.1 | 1.6 ± 0.1 | 1.7 ± 0.2 | NS (0.913) |

Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 3 SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis

| Variable | Total (n = 110) | Cirrhosis (n = 55) | Non-cirrhosis (n = 55) | p-value |
|---|---|---|---|---|
| Age (years) | 65.0 ± 1.4 | 61.1 ± 2.0 | 68.9 ± 2.0 | 0.006 |
| Male/Female | 79/31 | 40/15 | 39/16 | NS (0.832) |
| Length of ICU stay (days) | 14 ± 1 | 11 ± 1 | 18 ± 2 | 0.007 |
| Length of hospital stay (days) | 31 ± 2 | 32 ± 4 | 31 ± 3 | NS (0.843) |
| Body weight on ICU admission (kg) | 60 ± 1 | 64 ± 2 | 56 ± 2 | 0.001 |
| GCS, ICU first day (points) | 10 ± 0 | 11 ± 1 | 8 ± 1 | 0.005 |
| MAP, ICU admission (mmHg) | 78 ± 2 | 83 ± 2 | 73 ± 2 | 0.002 |
| Serum creatinine, ICU first day (mg/dl) | 2.4 ± 0.2 | 1.8 ± 0.2 | 3.0 ± 0.3 | 0.004 |
| Arterial HCO3−, ICU first day | 21 ± 1 | 20 ± 1 | 21 ± 1 | NS (0.458) |
| Serum sodium, ICU first day (mg/dl) | 137 ± 1 | 138 ± 1 | 137 ± 1 | NS (0.282) |
| Bilirubin, ICU first day (mg/dl) | 5.8 ± 0.8 | 9.6 ± 1.5 | 1.9 ± 0.4 | <0.001 |
| Albumin, ICU first day (g/l) | 2.4 ± 0.1 | 2.5 ± 0.1 | 2.3 ± 0.1 | 0.016 |
| Blood sugar, ICU first day (mg/dl) | 160 ± 8 | 159 ± 12 | 161 ± 11 | NS (0.899) |
| Hemoglobin, ICU first day (g/dl) | 9.7 ± 0.2 | 9.3 ± 0.3 | 10.0 ± 0.3 | NS (0.134) |
| Platelets, ICU first day (×10³/μL) | 127.3 ± 10.1 | 85.8 ± 8.1 | 168.8 ± 16.9 | <0.001 |
| Leukocytes, ICU first day (×10³/μL) | 13.9 ± 0.8 | 12.9 ± 1.3 | 14.9 ± 1.0 | NS (0.224) |
| PaO2/FiO2, ICU first day (mmHg) | 256 ± 13 | 270 ± 16 | 243 ± 19 | NS (0.299) |
| Shock (%) | 39 (35.5) | 9 (16.3) | 30 (54.5) | <0.001 |
| Hospital mortality (%) | 71 (64.5) | 34 (61.8) | 37 (67.3) | NS (0.550) |
| Score systems |  |  |  |  |
| APACHE III, ICU first day (mean ± SE) | 86.5 ± 2.8 | 81.1 ± 4.2 | 91.9 ± 3.7 | NS (0.058) |
| SOFA, ICU first day (mean ± SE) | 10.0 ± 0.3 | 10.0 ± 0.5 | 10.0 ± 0.3 | NS (0.927) |
| RIFLE, ICU first day (mean ± SE) | 1.6 ± 0.1 | 1.4 ± 0.2 | 1.9 ± 0.2 | 0.041 |

Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.
In the APACHE III-matched group (Table 2), the critically ill patients with cirrhosis had a lower arterial HCO3− level (P = .002), a lower haemoglobin level (P = .024), a lower platelet count (P < .001), and a higher serum bilirubin level (P < .001) on Day 1 of ICU admission compared with the patients without cirrhosis. At the same score, the patients without cirrhosis were older and experienced more shock episodes than the patients with cirrhosis did. The hospital mortality rate was significantly higher in the patients with cirrhosis than in the patients without cirrhosis (P = .026). Renal function and the PaO2/FiO2 ratio were similar between the 2 groups. Among APACHE III-matched patients, the patients with cirrhosis had significantly higher SOFA scores than the patients without cirrhosis (P < .001).
In the SOFA-matched group (Table 3), the patients with cirrhosis had a lower platelet count (P < .001), a higher Glasgow coma scale (GCS) score, a more stable haemodynamic status, more favourable renal function, a higher bilirubin level, and a lower albumin level compared with the patients without cirrhosis. The patients without cirrhosis experienced more shock episodes than did the patients with cirrhosis. No significant difference in the hospital mortality rate was observed between the patients with and without cirrhosis.

In both the APACHE III-matched and SOFA-matched groups, the patients with cirrhosis had a shorter overall length of ICU stay than the patients without cirrhosis, which is attributable to their higher hospital mortality rate. In both groups, the patients without cirrhosis had a significantly higher rate of shock than the patients with cirrhosis did, mainly because the patients with cirrhosis were admitted to the ICU chiefly for hepatic failure, whereas the leading reasons for ICU admission in the patients without cirrhosis were GI bleeding and sepsis (data not shown).

Mortality and severity of illness scoring systems

Prediction abilities of the APACHE III, SOFA, and RIFLE scoring systems were compared; Table 4 lists the calibration and discrimination of the models. In the APACHE III-matched group, the SOFA scoring system demonstrated the highest prediction ability (AUROC = 0.810 ± 0.056) of the 3 systems, and all 3 scoring systems predicted mortality more precisely in the patients with cirrhosis than in those without cirrhosis. In the SOFA-matched group, the APACHE III scoring system was the most accurate predictor overall; among the patients with cirrhosis, the SOFA scoring system demonstrated the highest prediction ability.
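As an aside on mechanics: the discrimination statistic reported for each model, the AUROC, equals the probability that a randomly chosen non-survivor has a higher score than a randomly chosen survivor (the Mann–Whitney formulation). A minimal sketch, using invented scores and outcomes rather than study data:

```python
def auroc(scores, died):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen non-survivor scored higher than a randomly chosen
    survivor (ties count as one half)."""
    pos = [s for s, d in zip(scores, died) if d]       # non-survivors
    neg = [s for s, d in zip(scores, died) if not d]   # survivors
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical Day-1 SOFA scores and hospital outcomes (1 = died):
sofa = [4, 6, 7, 9, 10, 11, 12, 14]
died = [0, 0, 1, 0, 1, 1, 1, 1]
print(round(auroc(sofa, died), 3))  # → 0.933
```

An AUROC of 0.5 corresponds to chance-level discrimination, 1.0 to perfect separation of survivors from non-survivors.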
To identify cut-off points for predicting in-hospital mortality, the sensitivity, specificity, and overall accuracy of prediction were determined (Table 5). In the APACHE III-matched group, the SOFA scoring system had the best Youden's index among the patients with cirrhosis, whereas the APACHE III scoring system had the best Youden's index in the total population. In the SOFA-matched group, the APACHE III scoring system had the best Youden's index in both the patients with cirrhosis and the total population. The RIFLE scoring system demonstrated the highest specificity for prognostic prediction in the patients with cirrhosis of both the SOFA-matched and APACHE III-matched groups.

Table 4 Calibration and discrimination for the scoring methods in predicting hospital mortality

| Scoring system, population | Goodness-of-fit (χ²) | df | p | AUROC ± SE | 95% CI | p |
|---|---|---|---|---|---|---|
| APACHE III-matched group |  |  |  |  |  |  |
| APACHE III, total population | 3.633 | 8 | 0.889 | 0.745 ± 0.040 | 0.667–0.823 | <0.001 |
| APACHE III, cirrhosis | 12.304 | 7 | 0.091 | 0.783 ± 0.062 | 0.662–0.904 | <0.001 |
| APACHE III, non-cirrhosis | 3.819 | 8 | 0.873 | 0.733 ± 0.055 | 0.626–0.840 | <0.001 |
| SOFA, total population | 4.505 | 8 | 0.809 | 0.735 ± 0.039 | 0.659–0.812 | <0.001 |
| SOFA, cirrhosis | 11.243 | 8 | 0.188 | 0.810 ± 0.056 | 0.700–0.920 | <0.001 |
| SOFA, non-cirrhosis | 1.180 | 6 | 0.978 | 0.624 ± 0.060 | 0.506–0.741 | NS (0.050) |
| RIFLE, total population | 3.160 | 3 | 0.368 | 0.621 ± 0.045 | 0.534–0.709 | 0.010 |
| RIFLE, cirrhosis | 2.008 | 2 | 0.366 | 0.710 ± 0.061 | 0.590–0.830 | 0.004 |
| RIFLE, non-cirrhosis | 2.475 | 3 | 0.480 | 0.554 ± 0.062 | 0.431–0.676 | NS (0.395) |
| SOFA-matched group |  |  |  |  |  |  |
| APACHE III, total population | 5.093 | 8 | 0.748 | 0.733 ± 0.051 | 0.633–0.834 | <0.001 |
| APACHE III, cirrhosis | 15.201 | 7 | 0.034 | 0.767 ± 0.068 | 0.633–0.900 | 0.001 |
| APACHE III, non-cirrhosis | 1.716 | 7 | 0.974 | 0.706 ± 0.077 | 0.556–0.856 | 0.014 |
| SOFA, total population | 6.519 | 6 | 0.368 | 0.663 ± 0.053 | 0.559–0.767 | 0.005 |
| SOFA, cirrhosis | 6.427 | 6 | 0.377 | 0.742 ± 0.068 | 0.608–0.875 | 0.003 |
| SOFA, non-cirrhosis | 5.698 | 5 | 0.337 | 0.543 ± 0.079 | 0.387–0.698 | NS (0.609) |
| RIFLE, total population | 4.609 | 3 | 0.203 | 0.634 ± 0.055 | 0.527–0.742 | 0.020 |
| RIFLE, cirrhosis | 1.216 | 2 | 0.544 | 0.656 ± 0.074 | 0.511–0.801 | NS (0.053) |
| RIFLE, non-cirrhosis | 4.531 | 3 | 0.210 | 0.594 ± 0.083 | 0.432–0.756 | NS (0.262) |

Abbreviation: df, degrees of freedom; AUROC, area under the receiver operating characteristic curve; SE, standard error; CI, confidence interval; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 5 Subsequent hospital mortality predicted after ICU admission

| Predictive factor, population | Cutoff point | Youden index | Sensitivity (%) | Specificity (%) | Overall correctness (%) |
|---|---|---|---|---|---|
| APACHE III-matched group |  |  |  |  |  |
| APACHE III, total population | 83 | 0.42 | 75 | 67 | 71 |
| APACHE III, cirrhosis | 72 a | 0.55 | 78 | 76 | 77 |
| APACHE III, non-cirrhosis | 83 | 0.40 | 70 | 70 | 70 |
| SOFA, total population | 10 | 0.40 | 52 | 88 | 70 |
| SOFA, cirrhosis | 10 a | 0.56 | 75 | 81 | 78 |
| SOFA, non-cirrhosis | 9 | 0.20 | 36 | 84 | 60 |
| RIFLE, total population | Injury | 0.22 | 39 | 83 | 61 |
| RIFLE, cirrhosis | Injury a | 0.35 | 45 | 90 | 68 |
| RIFLE, non-cirrhosis | Injury | 0.10 | 32 | 78 | 55 |
| SOFA-matched group |  |  |  |  |  |
| APACHE III, total population | 76 | 0.41 | 75 | 67 | 71 |
| APACHE III, cirrhosis | 71 a | 0.50 | 74 | 76 | 75 |
| APACHE III, non-cirrhosis | 83 | 0.40 | 73 | 67 | 70 |
| SOFA, total population | 10 | 0.30 | 45 | 85 | 65 |
| SOFA, cirrhosis | 10 a | 0.42 | 56 | 86 | 71 |
| SOFA, non-cirrhosis | 10 | 0.18 | 35 | 83 | 59 |
| RIFLE, total population | Non-AKI | 0.22 | 76 | 46 | 61 |
| RIFLE, cirrhosis | Injury a | 0.26 | 35 | 90 | 63 |
| RIFLE, non-cirrhosis | Non-AKI | 0.20 | 86 | 33 | 60 |

Abbreviation: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. a Value giving the best Youden index.
Figure 1A and B show the cumulative survival rates in the patients with and without cirrhosis in the APACHE III-matched and SOFA-matched groups, respectively. The cumulative survival rates showed that the patients with cirrhosis in the APACHE III-matched group had significantly higher mortality rates than the patients without cirrhosis did, whereas no significant difference was detected between the patients with and without cirrhosis in the SOFA-matched group. In both the APACHE III-matched and SOFA-matched groups, the cumulative survival rates significantly differed when an underlying AKI was considered (Figure 2A and B).

Figure 1 The cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients, p < 0.05) and SOFA-matched (1B, 110 patients, p-value: not significant) groups, respectively.

Figure 2 The cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients, p < 0.05) and SOFA-matched (2B, 110 patients, p < 0.05) groups, respectively.
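The cumulative survival rates in Figures 1 and 2 are Kaplan–Meier estimates (per the statistical methods). A minimal pure-Python sketch of the estimator, using invented follow-up times rather than study data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate from follow-up times and event
    flags (1 = death observed, 0 = censored). Returns a list of
    (time, survival probability) pairs at each observed death time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        if deaths:
            surv *= 1 - deaths / at_risk   # multiply in this step's hazard
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # deaths + censored leave
    return curve

# Hypothetical follow-up in days and death flags (0 = censored):
days = [2, 3, 3, 5, 8, 8, 12, 15]
status = [1, 1, 0, 1, 1, 0, 0, 1]
print([(t, round(s, 3)) for t, s in kaplan_meier(days, status)])
# → [(2, 0.875), (3, 0.75), (5, 0.6), (8, 0.45), (15, 0.0)]
```

Censored patients reduce the number at risk without dropping the curve, which is what distinguishes this estimator from a naive survival fraction.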
Conclusions
Our results provide additional evidence that SOFA scores differ significantly between patients with and without cirrhosis matched according to APACHE III scores. The score-matched analytical data showed that the predictive accuracy of SOFA is superior to that of APACHE III in evaluating critically ill patients with cirrhosis. We also demonstrated that the mean arterial pressure, GCS, and RIFLE classification play critical roles in determining prognosis in this subset of patients. When considering cost-effectiveness and ease of implementation, the SOFA scale is recommended for evaluating short-term prognosis in critically ill patients with cirrhosis.
[ "Background", "Study design", "Study population and data collection", "Definitions", "Statistical analysis", "Patient characteristics", "Mortality and severity of illness scoring systems" ]
[ "Accurate prognostic predictors are crucial for patients admitted to an intensive care unit (ICU). Prognostic scoring systems are useful for clinical management such as predicting a survival rate, making decisions, and facilitating explanation of disease severity, by clinical physicians. Patients with cirrhosis admitted to an ICU frequently have disappointing outcomes despite intensive medical support, and these patients are particular targets for prognostic evaluation.\nVarious systems for scoring severity and predicting prognosis have been developed and applied for decades. The Acute Physiology and Chronic Health Evaluation III (APACHE III) score [1], one of the widely used scoring systems, is known for its accuracy in predicting mortality. However, the APACHE III scoring system was initially developed for various diseases and not exclusively for liver-related diseases. By contrast, the Sequential Organ Failure Assessment (SOFA) score [2], another widely used scoring system, is superior to the APACHE III scoring system for assessing specific organ dysfunction including cirrhosis [3, 4]. Our previous study demonstrated that the APACHE III and SOFA scores were both independently associated with a hospital mortality rate and demonstrated high discriminatory power for predicting mortality in patients with cirrhosis [5]. However, few studies have performed detailed independent comparisons between APACHE III and SOFA scores [6]. In the present case-control study, we matched APACHE III and SOFA scores and compared the different clinical characteristics and outcomes of patients with cirrhosis with those of their matched noncirrhotic controls.", "The Chang Gung Medical Foundation Institutional Review Board approved the present study and waived the need for informed consent, because patient privacy was not breached during the study, and the study did not interfere with clinical decisions related to patient care (approval No. 98-3658A3). 
All data in our study were anonymised. This retrospective case-control study was conducted in a tertiary-care hospital. The enrolled patients were recruited from a database of critically ill patients admitted to medical ICUs between January 2006 and December 2009. For the APACHE III-matched group (174 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of APACHE III ± 3 points. For the SOFA-matched group (110 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of SOFA ± 1 point [7, 8]. The outcomes of interest were the length of stay in an ICU, length of stay in a hospital, and hospital mortality rate.", "All patients admitted to medical ICUs between January 2006 and December 2009 with APACHE III and SOFA scores available were eligible for inclusion. Exclusion criteria were age < 18 years, length of stay in a hospital or an ICU of < 24 hours, patients with chronic uraemia and undergoing renal replacement therapy, and hospital readmission. Data were recorded regarding patient demographics, reason for ICU admission, clinical and laboratory variables, APACHE III and SOFA scores, the risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, end-stage renal failure (RIFLE) classification [9], the length of stay in an ICU and a hospital, and hospital mortality. Data on the length of stay included those from patients with hospital mortality.", "Cirrhosis was diagnosed based on liver histology or a combination of physical presentation, biochemical data, and ultrasonographic findings. Illness severity was assessed according to the APACHE III and the SOFA scores, which were defined and calculated as described previously [1, 2]. Acute kidney injury (AKI) was defined using the RIFLE criteria, and patients were scored as RIFLE-R or higher severity. 
Baseline serum creatinine (SCr) concentration was the first value measured during hospitalisation. The Modification of Diet in Renal Disease formula was used to estimate baseline SCr concentration in patients whose previous SCr concentration was unavailable. The criteria resulting in the most severe RIFLE classification were used [9]. A simple model for assessing mortality was developed as follows: non-AKI (0 points), RIFLE-R (1 point), RIFLE-I (2 points), and RIFLE-F (3 points) on Day 1 of ICU admission [10, 11].\nThe lowest physiological and biochemical values on Day 1 of ICU admission were recorded. In sedated or paralysed patients, neurological scoring was not performed and was not classified as neurological failure. In patients who were intubated but not sedated, the best verbal response was determined according to clinical judgment.", "Descriptive statistical results were expressed as mean ± standard error (SE). In primary analysis, the patients with cirrhosis were compared with the patients without cirrhosis. All variables were tested for normal distribution using the Kolmogorov–Smirnov test. Student’s t-test was used to compare the means of continuous variables and normally distributed data, whereas the Mann–Whitney U test was used for all other comparisons. Categorical data were tested using the χ2 test or Fisher’s exact test.\nCalibration was assessed using the Hosmer–Lemeshow goodness-of-fit test (C statistic) to compare the number of observed and predicted deaths in various risk groups for the entire range of death probabilities. Discrimination was assessed by determining area under the receiver operating characteristic curve (AUROC). Areas under 2 receiver operating characteristic curves were compared by applying a nonparametric approach. The AUROC analysis was also conducted to estimate the cut-off values, sensitivity, specificity, overall correctness, and positive and negative predictive values. 
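The cut-off selection used for Table 5, picking the threshold with the best Youden's index (sensitivity + specificity − 1), can be sketched as an exhaustive scan over candidate thresholds. The `best_cutoff` helper and the example scores below are hypothetical, for illustration only:

```python
def best_cutoff(scores, died):
    """Scan every observed score as a candidate cut-off and return the
    (cutoff, Youden index) pair maximising sensitivity + specificity - 1,
    where score >= cutoff predicts hospital mortality."""
    best = (None, -1.0)
    for cut in sorted(set(scores)):
        tp = sum(1 for s, d in zip(scores, died) if d and s >= cut)
        fn = sum(1 for s, d in zip(scores, died) if d and s < cut)
        tn = sum(1 for s, d in zip(scores, died) if not d and s < cut)
        fp = sum(1 for s, d in zip(scores, died) if not d and s >= cut)
        youden = tp / (tp + fn) + tn / (tn + fp) - 1
        if youden > best[1]:
            best = (cut, youden)
    return best

# Hypothetical Day-1 SOFA scores and hospital outcomes (1 = died):
sofa = [5, 6, 8, 9, 10, 11, 13, 15]
died = [0, 0, 0, 1, 1, 0, 1, 1]
print(best_cutoff(sofa, died))  # → (9, 0.75)
```

The Youden index weights sensitivity and specificity equally; a different trade-off (for example, favouring the high specificity reported for RIFLE) would select a different threshold.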
Finally, cut-off points were calculated by determining the best Youden’s index (sensitivity + specificity −1).\nCumulative survival curves over time were generated by applying the Kaplan–Meier approach and compared using the log rank test. All statistical tests were 2-tailed; P < .05 was considered statistically significant. Data were analysed using SPSS Version 13.0 for Windows (SPSS, Inc., Chicago, IL, USA).", "A total of 336 critically ill patients admitted to the medical ICU between January 2006 and December 2009 were enrolled in the present study. Table 1 lists the reasons for ICU admission. The demographic data, clinical characteristics, and outcomes of the 2 score-matched groups are depicted in Tables 2 and 3, respectively.Table 1\nReasons for ICU admission\nAPACHE III-matchedSOFA-matchedgroup (n = 174)group (n = 110)Sepsis99(56.9%)62(56.4%) Urinary tract infection9(5.2%)8(7.3%) Pneumonia62(35.6%)38(34.5%) Intra-abdominal infection16(9.2%)7(6.4%) Blood stream infection9(5.2%)8(7.3%) Soft tissue infection3(1.7%)1(0.9%)Cardiovascular diseases4(2.3%)2(1.8%)Upper gastrointestinal bleeding18(10.3%)12(10.9%)Hepatic failure34(19.5%)23(20.9%)Other19(10.9%)11(10.0%)\nAbbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment.Table 2\nAPACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis\nTotal (n = 174)Cirrhosis (n = 87)Non-cirrhosis (n = 87)\np-valueAge (years)65.5 ± 1.159.8 ± 1.571.3 ± 1.4<0.001Male/Female123/5164/2359/28NS(0.405)Length of ICU stay (days)14 ± 111 ± 116 ± 20.009Length of Hospital stay (days)32 ± 230 ± 335 ± 3NS(0.223)Body weight on ICU admission (kg)61 ± 1.065 ± 157 ± 1<0.001GCS, ICU first day (points)9 ± 010 ± 19 ± 1NS(0.077)MAP, ICU admission (mmHg)78 ± 180 ± 276 ± 2NS(0.137)Serum Creatinine, ICU first day (mg/dl)2.5 ± 0.22.4 ± 0.22.5 ± 0.3NS(0.885)Arterial HCO3\n−, ICU first day21 ± 119 ± 1.022 ± 1.00.002Serum 
Sodium, ICU first day (mg/dl)138 ± 1.0138 ± 1.0138 ± 1.0NS(0.890)Bilirubin, ICU first day (mg/dl)6.4 ± 0.711.4 ± 1.31.4 ± 0.3<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.4 ± 0.12.4 ± 0.1NS(0.329)Blood Sugar, ICU first day (mg/dl)166 ± 7159 ± 11170 ± 10.0NS(0.482)Hemoglobin, ICU first day (g/dl)9.6 ± 0.29.2 ± 0.210.0 ± 0.20.024Platelets, ICU first day (×103/μL)145.0 ± 9.179.4 ± 5.9210.5 ± 14.1<0.001Leukocytes, ICU first day (×103/μL)14.5 ± 0.713.5 ± 1.015.5 ± 0.8NS(0.133)PaO2/FiO2, ICU first day(mmHg)268 ± 11275 ± 13262 ± 17NS(0.536)Shock(%)54 (31.0)21 (24.1)33 (37.9)0.049Hospital mortality (%)114 (65.5)64 (73.6)50 (57.5)0.026\nScore systems\nAPACHE III, ICU first day(mean ± SE)87.8 ± 2.187.7 ± 3.488.0 ± 2.5NS(0.941)SOFA, ICU first day(mean ± SE)9.6 ± 0.311.3 ± 0.48.0 ± 0.3<0.001RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.6 ± 0.11.7 ± 0.2NS(0.913)\nAbbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.Table 3\nSOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis\nTotal (n = 110)Cirrhosis (n = 55)Non-cirrhosis (n = 55)\np-valueAge (years)65.0 ± 1.461.1 ± 2.068.9 ± 2.00.006Male/Female79/3140/1539/16NS(0.832)Length of ICU stay (days)14 ± 111 ± 118 ± 20.007Length of Hospital stay (days)31 ± 232 ± 431 ± 3NS(0.843)Body weight on ICU admission (kg)60 ± 164 ± 256 ± 20.001GCS, ICU first day (points)10 ± 011 ± 18 ± 10.005MAP, ICU admission (mmHg)78 ± 283 ± 273 ± 20.002Serum Creatinine, ICU first day (mg/dl)2.4 ± 0.21.8 ± 0.23.0 ± 0.30.004Arterial HCO3\n−, ICU first day21 ± 120 ± 121 ± 1NS(0.458)Serum Sodium, ICU first day (mg/dl)137 ± 1138 
± 1137 ± 1NS(0.282)Bilirubin, ICU first day (mg/dl)5.8 ± 0.89.6 ± 1.51.9 ± 0.4<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.5 ± 0.12.3 ± 0.10.016Blood Sugar, ICU first day (mg/dl)160 ± 8159 ± 12161 ± 11NS(0.899)Hemoglobin, ICU first day (g/dl)9.7 ± 0.29.3 ± 0.310.0 ± 0.3NS(0.134)Platelets, ICU first day (×103/μL)127.3 ± 10.185.8 ± 8.1168.8 ± 16.9<0.001Leukocytes, ICU first day (×103/μL)13.9 ± 0.812.9 ± 1.314.9 ± 1.0NS(0.224)PaO2/FiO2, ICU first day(mmHg)256 ± 13270 ± 16243 ± 19NS(0.299)Shock(%)39 (35.5)9 (16.3)30 (54.5)<0.001Hospital mortality (%)71(64.5)34 (61.8)37 (67.3)NS(0.550)\nScore systems\nAPACHE III, ICU first day(mean ± SE)86.5 ± 2.881.1 ± 4.291.9 ± 3.7NS(0.058)SOFA, ICU first day(mean ± SE)10.0 ± 0.310.0 ± 0.510.0 ± 0.3NS(0.927)RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.4 ± 0.21.9 ± 0.20.041\nAbbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.\nIn the APACHE III-matched group (Table 2), the critically ill patients with cirrhosis had a lower arterial HCO3− level (P = .002), a lower haemoglobin level (P = .024), a lower platelet count (P < .001), and a higher serum bilirubin level (P < .001) on Day 1 of ICU admission compared with the patients without cirrhosis. The patients without cirrhosis with the same score were older and experienced more shock episodes than the patients with cirrhosis did. The hospital mortality rate was significantly higher in the patients with cirrhosis than in the patients without cirrhosis (P = .026). Renal function and a PaO2/FiO2 ratio were similar between the 2 groups. Among APACHE III-matched patients, the patients with cirrhosis had significantly higher SOFA scores than those of the patients without cirrhosis (P < .001).\nIn the SOFA-matched group (Table 3), the patients with cirrhosis had a lower platelet count (P < .001), a higher Glasgow coma scale (GCS), a more stable haemodynamic status, more favourable renal function, a higher bilirubin level, and a lower albumin level compared with the patients without cirrhosis. The patients without cirrhosis experienced more shock episodes than did the patients with cirrhosis. 
No significant difference in the hospital mortality rate was observed between the patients with and without cirrhosis.\nIn both the APACHE III and SOFA-matched groups, the patients with cirrhosis had a shorter overall length of stay in an ICU than the patients without cirrhosis. This was attributed to the higher hospital mortality rate in the patients with cirrhosis. In both the APACHE III and SOFA-matched groups, the patients without cirrhosis had a significantly higher rate of shock than the patients with cirrhosis did. This was mainly because the patients with cirrhosis were admitted to the ICU primarily for hepatic failure, whereas the patients without cirrhosis were admitted mainly for GI bleeding and sepsis (data not shown).", "Prediction abilities of the APACHE III, SOFA, and RIFLE scoring systems were compared; Table 4 lists the calibration and discrimination of the models. In the APACHE III-matched group, the SOFA scoring system demonstrated the highest prediction ability (AUROC = 0.810 ± 0.056) among all 3 systems. All 3 scoring systems predicted mortality more precisely in the patients with cirrhosis than in those without cirrhosis. In the SOFA-matched group, the APACHE III scoring system was the most accurate predictor among all 3 systems. In the patients with cirrhosis, the SOFA scoring system demonstrated the highest prediction ability. To determine the cut-off points for predicting in-hospital mortality, the sensitivity, specificity, and overall accuracy of prediction were calculated (Table 5). In the APACHE III-matched group, the SOFA scoring system had the best Youden’s index among the patients with cirrhosis, whereas the APACHE III scoring system had the best Youden’s index among the total population. In the SOFA-matched group, the APACHE III scoring system had the best Youden’s index among both the patients with cirrhosis and the total population. 
The RIFLE scoring system demonstrated the highest specificity for prognostic prediction in the patients with cirrhosis of both the SOFA-matched and APACHE III-matched groups.Table 4\nCalibration and discrimination for the scoring methods in predicting hospital mortality\nCalibrationDiscriminationgoodness-of-fit (χ\n2)df\np\nAUROC ± SE95% CI\np\n\nAPACHE III-matched group\n\nAPACHE III\n Total population3.63380.8890.745 ± 0.0400.667 – 0.823<0.001 Cirrhosis12.30470.0910.783 ± 0.0620.662 – 0.904<0.001 Non-cirrhosis3.81980.8730.733 ± 0.0550.626 – 0.840<0.001\nSOFA\n Total population4.50580.8090.735 ± 0.0390.659 – 0.812<0.001 Cirrhosis11.24380.1880.810 ± 0.0560.700 – 0.920<0.001 Non-cirrhosis1.18060.9780.624 ± 0.0600.506 – 0.741NS(0.050)\nRIFLE\n Total population3.16030.3680.621 ± 0.0450.534 – 0.7090.010 Cirrhosis2.00820.3660.710 ± 0.0610.590 – 0.8300.004 Non-cirrhosis2.47530.4800.554 ± 0.0620.431 – 0.676NS(0.395)\nSOFA-matched group\n\nAPACHE III\n Total population5.09380.7480.733 ± 0.0510.633 – 0.834<0.001 Cirrhosis15.20170.0340.767 ± 0.0680.633 – 0.9000.001 Non-cirrhosis1.71670.9740.706 ± 0.0770.556 – 0.8560.014\nSOFA\n Total population6.51960.3680.663 ± 0.0530.559 – 0.7670.005 Cirrhosis6.42760.3770.742 ± 0.0680.608 – 0.8750.003 Non-cirrhosis5.69850.3370.543 ± 0.0790.387 – 0.698NS(0.609)\nRIFLE\n Total population4.60930.2030.634 ± 0.0550.527 – 0.7420.020 Cirrhosis1.21620.5440.656 ± 0.0740.511 – 0.801NS(0.053) Non-cirrhosis4.53130.2100.594 ± 0.0830.432 – 0.756NS(0.262)\nAbbreviation: df, degree of freedom; AUROC, areas under the receiver operating characteristic curve; SE, standard error; CI, confidence intervals; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.Table 5\nSubsequent hospital mortality predicted after ICU admission\nPredictive FactorsCutoff PointYouden indexSensitivity 
(%)Specificity (%)Overall correctness (%)\nAPACHE III-matched group\n\nAPACHE III\n Total population830.42756771 Cirrhosis72a\n0.55787677 Non-cirrhosis830.40707070\nSOFA\n Total population100.40528870 Cirrhosis10a\n0.56758178 Non-cirrhosis90.20368460\nRIFLE\n Total populationInjury0.22398361 Cirrhosisinjurya\n0.35459068 Non-cirrhosisInjury0.10327855\nSOFA-matched group\n\nAPACHE III\n Total population760.41756771 Cirrhosis71a\n0.50747675 Non-cirrhosis830.40736770\nSOFA\n Total population100.30458565 Cirrhosis10 a\n0.42568671 Non-cirrhosis100.18358359\nRIFLE\n Total populationNon-AKI0.22764661 CirrhosisInjurya\n0.26359063 Non-cirrhosisNon-AKI0.20863360\nAbbreviation: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.\naValue giving the best Youden index.\nFigure 1A and B show the cumulative survival rates in the patients with and without cirrhosis in the APACHE III-matched and SOFA-matched groups, respectively. 
The cumulative survival rates showed that patients with cirrhosis in the APACHE III-matched group had significantly higher mortality rates than the patients without cirrhosis did, whereas no significant difference was detected between the patients with and without cirrhosis in the SOFA-matched group. In both the APACHE III-matched and SOFA-matched groups, the cumulative survival rates significantly differed when an underlying AKI was considered (Figure 2A and B).Figure 1\nThe cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients,\np \n< 0.05) and SOFA-matched (1B, 110 patients,\np\n-value: not significant) groups, respectively.\nFigure 2\nThe cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients,\np \n< 0.05) and SOFA-matched (2B, 110 patients,\np \n< 0.05) groups, respectively.\n" ]
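The cut-off analysis above (Table 5) selects, for each scoring system, the score threshold that maximises Youden's index (sensitivity + specificity − 1). A minimal sketch of that selection, using made-up toy scores and outcomes rather than data from this study:

```python
# Illustrative sketch only (toy data, not values from this study): choosing a
# severity-score cut-off for predicting hospital mortality by maximising
# Youden's index (sensitivity + specificity - 1), as done for Table 5.

def youden_cutoff(scores, died):
    """Return (cutoff, youden, sensitivity, specificity) for the cut-off
    that maximises Youden's index; death is predicted when score >= cutoff."""
    best = None
    for cutoff in sorted(set(scores)):
        tp = sum(1 for s, d in zip(scores, died) if s >= cutoff and d)
        fn = sum(1 for s, d in zip(scores, died) if s < cutoff and d)
        tn = sum(1 for s, d in zip(scores, died) if s < cutoff and not d)
        fp = sum(1 for s, d in zip(scores, died) if s >= cutoff and not d)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        youden = sens + spec - 1
        if best is None or youden > best[1]:
            best = (cutoff, youden, sens, spec)
    return best

# Toy example: APACHE III-like scores and hospital mortality (1 = died).
scores = [60, 72, 85, 90, 95, 55, 70, 88]
died = [0, 0, 1, 1, 1, 0, 0, 1]
cutoff, youden, sens, spec = youden_cutoff(scores, died)
```

Scanning every observed score as a candidate threshold mirrors how a cut-off is read off a ROC curve; in practice the AUROC and thresholds were computed in SPSS, as the Statistical analysis subsection states.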
[ null, null, null, null, null, null, null ]
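The Study design below matches each cirrhotic patient 1:1 to a non-cirrhotic control within ±3 APACHE III points (or ±1 SOFA point). The paper does not describe the matching algorithm itself; a plausible greedy closest-score sketch, with hypothetical patient IDs and scores, is:

```python
# Illustrative sketch only: 1:1 case-control matching within a score
# tolerance (APACHE III +/- 3 points, or SOFA +/- 1 point). The study does
# not specify its matching algorithm; this greedy closest-score variant,
# with hypothetical IDs and scores, is one possible implementation.

def match_one_to_one(cases, controls, tolerance=3):
    """cases, controls: lists of (patient_id, score).
    Returns (case_id, control_id) pairs; each control is used at most once."""
    pairs, used = [], set()
    for case_id, case_score in cases:
        candidates = [
            (abs(score - case_score), ctrl_id)
            for ctrl_id, score in controls
            if ctrl_id not in used and abs(score - case_score) <= tolerance
        ]
        if candidates:  # unmatched cases are dropped from the matched group
            _, ctrl_id = min(candidates)
            used.add(ctrl_id)
            pairs.append((case_id, ctrl_id))
    return pairs

pairs = match_one_to_one([("C1", 88), ("C2", 95)],
                         [("N1", 94), ("N2", 87), ("N3", 70)])
```

Passing `tolerance=1` would give the SOFA-matched variant; cases with no eligible control are simply excluded, which is consistent with the two matched groups having different sizes (174 vs 110 patients).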
[ "Background", "Methods", "Study design", "Study population and data collection", "Definitions", "Statistical analysis", "Results", "Patient characteristics", "Mortality and severity of illness scoring systems", "Discussion", "Conclusions" ]
[ "Accurate prognostic predictors are crucial for patients admitted to an intensive care unit (ICU). Prognostic scoring systems are useful for clinical management such as predicting a survival rate, making decisions, and facilitating explanation of disease severity, by clinical physicians. Patients with cirrhosis admitted to an ICU frequently have disappointing outcomes despite intensive medical support, and these patients are particular targets for prognostic evaluation.\nVarious systems for scoring severity and predicting prognosis have been developed and applied for decades. The Acute Physiology and Chronic Health Evaluation III (APACHE III) score [1], one of the widely used scoring systems, is known for its accuracy in predicting mortality. However, the APACHE III scoring system was initially developed for various diseases and not exclusively for liver-related diseases. By contrast, the Sequential Organ Failure Assessment (SOFA) score [2], another widely used scoring system, is superior to the APACHE III scoring system for assessing specific organ dysfunction including cirrhosis [3, 4]. Our previous study demonstrated that the APACHE III and SOFA scores were both independently associated with a hospital mortality rate and demonstrated high discriminatory power for predicting mortality in patients with cirrhosis [5]. However, few studies have performed detailed independent comparisons between APACHE III and SOFA scores [6]. In the present case-control study, we matched APACHE III and SOFA scores and compared the different clinical characteristics and outcomes of patients with cirrhosis with those of their matched noncirrhotic controls.", " Study design The Chang Gung Medical Foundation Institutional Review Board approved the present study and waived the need for informed consent, because patient privacy was not breached during the study, and the study did not interfere with clinical decisions related to patient care (approval No. 98-3658A3). 
All data in our study were anonymised. This retrospective case-control study was conducted in a tertiary-care hospital. The enrolled patients were recruited from a database of critically ill patients admitted to medical ICUs between January 2006 and December 2009. For the APACHE III-matched group (174 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of APACHE III ± 3 points. For the SOFA-matched group (110 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of SOFA ± 1 point [7, 8]. The outcomes of interest were the length of stay in an ICU, length of stay in a hospital, and hospital mortality rate.\n Study population and data collection All patients admitted to medical ICUs between January 2006 and December 2009 with APACHE III and SOFA scores available were eligible for inclusion. Exclusion criteria were age < 18 years, length of stay in a hospital or an ICU of < 24 hours, patients with chronic uraemia and undergoing renal replacement therapy, and hospital readmission. Data were recorded regarding patient demographics, reason for ICU admission, clinical and laboratory variables, APACHE III and SOFA scores, the risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, end-stage renal failure (RIFLE) classification [9], the length of stay in an ICU and a hospital, and hospital mortality. Data on the length of stay included those from patients with hospital mortality.\n Definitions Cirrhosis was diagnosed based on liver histology or a combination of physical presentation, biochemical data, and ultrasonographic findings. Illness severity was assessed according to the APACHE III and the SOFA scores, which were defined and calculated as described previously [1, 2]. Acute kidney injury (AKI) was defined using the RIFLE criteria, and patients were scored as RIFLE-R or higher severity. Baseline serum creatinine (SCr) concentration was the first value measured during hospitalisation. 
The Modification of Diet in Renal Disease formula was used to estimate baseline SCr concentration in patients whose previous SCr concentration was unavailable. The criteria resulting in the most severe RIFLE classification were used [9]. A simple model for assessing mortality was developed as follows: non-AKI (0 points), RIFLE-R (1 point), RIFLE-I (2 points), and RIFLE-F (3 points) on Day 1 of ICU admission [10, 11].\nThe lowest physiological and biochemical values on Day 1 of ICU admission were recorded. In sedated or paralysed patients, neurological scoring was not performed and was not classified as neurological failure. In patients who were intubated but not sedated, the best verbal response was determined according to clinical judgment.\n Statistical analysis Descriptive statistical results were expressed as mean ± standard error (SE). In primary analysis, the patients with cirrhosis were compared with the patients without cirrhosis. All variables were tested for normal distribution using the Kolmogorov–Smirnov test. Student’s t-test was used to compare the means of continuous variables and normally distributed data, whereas the Mann–Whitney U test was used for all other comparisons. Categorical data were tested using the χ2 test or Fisher’s exact test.\nCalibration was assessed using the Hosmer–Lemeshow goodness-of-fit test (C statistic) to compare the number of observed and predicted deaths in various risk groups for the entire range of death probabilities. Discrimination was assessed by determining area under the receiver operating characteristic curve (AUROC). Areas under 2 receiver operating characteristic curves were compared by applying a nonparametric approach. The AUROC analysis was also conducted to estimate the cut-off values, sensitivity, specificity, overall correctness, and positive and negative predictive values. Finally, cut-off points were calculated by determining the best Youden’s index (sensitivity + specificity −1).\nCumulative survival curves over time were generated by applying the Kaplan–Meier approach and compared using the log rank test. All statistical tests were 2-tailed; P < .05 was considered statistically significant. Data were analysed using SPSS Version 13.0 for Windows (SPSS, Inc., Chicago, IL, USA).", "The Chang Gung Medical Foundation Institutional Review Board approved the present study and waived the need for informed consent, because patient privacy was not breached during the study, and the study did not interfere with clinical decisions related to patient care (approval No. 98-3658A3). All data in our study were anonymised. This retrospective case-control study was conducted in a tertiary-care hospital. The enrolled patients were recruited from a database of critically ill patients admitted to medical ICUs between January 2006 and December 2009. For the APACHE III-matched group (174 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of APACHE III ± 3 points. 
For the SOFA-matched group (110 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of SOFA ± 1 point [7, 8]. The outcomes of interest were the length of stay in an ICU, length of stay in a hospital, and hospital mortality rate.", "All patients admitted to medical ICUs between January 2006 and December 2009 with APACHE III and SOFA scores available were eligible for inclusion. Exclusion criteria were age < 18 years, length of stay in a hospital or an ICU of < 24 hours, patients with chronic uraemia and undergoing renal replacement therapy, and hospital readmission. Data were recorded regarding patient demographics, reason for ICU admission, clinical and laboratory variables, APACHE III and SOFA scores, the risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, end-stage renal failure (RIFLE) classification [9], the length of stay in an ICU and a hospital, and hospital mortality. Data on the length of stay included those from patients with hospital mortality.", "Cirrhosis was diagnosed based on liver histology or a combination of physical presentation, biochemical data, and ultrasonographic findings. Illness severity was assessed according to the APACHE III and the SOFA scores, which were defined and calculated as described previously [1, 2]. Acute kidney injury (AKI) was defined using the RIFLE criteria, and patients were scored as RIFLE-R or higher severity. Baseline serum creatinine (SCr) concentration was the first value measured during hospitalisation. The Modification of Diet in Renal Disease formula was used to estimate baseline SCr concentration in patients whose previous SCr concentration was unavailable. The criteria resulting in the most severe RIFLE classification were used [9]. 
A simple model for assessing mortality was developed as follows: non-AKI (0 points), RIFLE-R (1 point), RIFLE-I (2 points), and RIFLE-F (3 points) on Day 1 of ICU admission [10, 11].\nThe lowest physiological and biochemical values on Day 1 of ICU admission were recorded. In sedated or paralysed patients, neurological scoring was not performed and was not classified as neurological failure. In patients who were intubated but not sedated, the best verbal response was determined according to clinical judgment.", "Descriptive statistical results were expressed as mean ± standard error (SE). In primary analysis, the patients with cirrhosis were compared with the patients without cirrhosis. All variables were tested for normal distribution using the Kolmogorov–Smirnov test. Student’s t-test was used to compare the means of continuous variables and normally distributed data, whereas the Mann–Whitney U test was used for all other comparisons. Categorical data were tested using the χ2 test or Fisher’s exact test.\nCalibration was assessed using the Hosmer–Lemeshow goodness-of-fit test (C statistic) to compare the number of observed and predicted deaths in various risk groups for the entire range of death probabilities. Discrimination was assessed by determining area under the receiver operating characteristic curve (AUROC). Areas under 2 receiver operating characteristic curves were compared by applying a nonparametric approach. The AUROC analysis was also conducted to estimate the cut-off values, sensitivity, specificity, overall correctness, and positive and negative predictive values. Finally, cut-off points were calculated by determining the best Youden’s index (sensitivity + specificity −1).\nCumulative survival curves over time were generated by applying the Kaplan–Meier approach and compared using the log rank test. All statistical tests were 2-tailed; P < .05 was considered statistically significant. 
Data were analysed using SPSS Version 13.0 for Windows (SPSS, Inc., Chicago, IL, USA).", " Patient characteristics A total of 336 critically ill patients admitted to the medical ICU between January 2006 and December 2009 were enrolled in the present study. Table 1 lists the reasons for ICU admission. The demographic data, clinical characteristics, and outcomes of the 2 score-matched groups are depicted in Tables 2 and 3, respectively.Table 1\nReasons for ICU admission\nAPACHE III-matchedSOFA-matchedgroup (n = 174)group (n = 110)Sepsis99(56.9%)62(56.4%) Urinary tract infection9(5.2%)8(7.3%) Pneumonia62(35.6%)38(34.5%) Intra-abdominal infection16(9.2%)7(6.4%) Blood stream infection9(5.2%)8(7.3%) Soft tissue infection3(1.7%)1(0.9%)Cardiovascular diseases4(2.3%)2(1.8%)Upper gastrointestinal bleeding18(10.3%)12(10.9%)Hepatic failure34(19.5%)23(20.9%)Other19(10.9%)11(10.0%)\nAbbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment.Table 2\nAPACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis\nTotal (n = 174)Cirrhosis (n = 87)Non-cirrhosis (n = 87)\np-valueAge (years)65.5 ± 1.159.8 ± 1.571.3 ± 1.4<0.001Male/Female123/5164/2359/28NS(0.405)Length of ICU stay (days)14 ± 111 ± 116 ± 20.009Length of Hospital stay (days)32 ± 230 ± 335 ± 3NS(0.223)Body weight on ICU admission (kg)61 ± 1.065 ± 157 ± 1<0.001GCS, ICU first day (points)9 ± 010 ± 19 ± 1NS(0.077)MAP, ICU admission (mmHg)78 ± 180 ± 276 ± 2NS(0.137)Serum Creatinine, ICU first day (mg/dl)2.5 ± 0.22.4 ± 0.22.5 ± 0.3NS(0.885)Arterial HCO3\n−, ICU first day21 ± 119 ± 1.022 ± 1.00.002Serum Sodium, ICU first day (mg/dl)138 ± 1.0138 ± 1.0138 ± 1.0NS(0.890)Bilirubin, ICU first day (mg/dl)6.4 ± 0.711.4 ± 1.31.4 ± 0.3<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.4 ± 0.12.4 ± 0.1NS(0.329)Blood Sugar, ICU first day (mg/dl)166 ± 7159 ± 11170 ± 10.0NS(0.482)Hemoglobin, ICU first day (g/dl)9.6 ± 
0.29.2 ± 0.210.0 ± 0.20.024Platelets, ICU first day (×103/μL)145.0 ± 9.179.4 ± 5.9210.5 ± 14.1<0.001Leukocytes, ICU first day (×103/μL)14.5 ± 0.713.5 ± 1.015.5 ± 0.8NS(0.133)PaO2/FiO2, ICU first day(mmHg)268 ± 11275 ± 13262 ± 17NS(0.536)Shock(%)54 (31.0)21 (24.1)33 (37.9)0.049Hospital mortality (%)114 (65.5)64 (73.6)50 (57.5)0.026\nScore systems\nAPACHE III, ICU first day(mean ± SE)87.8 ± 2.187.7 ± 3.488.0 ± 2.5NS(0.941)SOFA, ICU first day(mean ± SE)9.6 ± 0.311.3 ± 0.48.0 ± 0.3<0.001RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.6 ± 0.11.7 ± 0.2NS(0.913)\nAbbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.Table 3\nSOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis\nTotal (n = 110)Cirrhosis (n = 55)Non-cirrhosis (n = 55)\np-valueAge (years)65.0 ± 1.461.1 ± 2.068.9 ± 2.00.006Male/Female79/3140/1539/16NS(0.832)Length of ICU stay (days)14 ± 111 ± 118 ± 20.007Length of Hospital stay (days)31 ± 232 ± 431 ± 3NS(0.843)Body weight on ICU admission (kg)60 ± 164 ± 256 ± 20.001GCS, ICU first day (points)10 ± 011 ± 18 ± 10.005MAP, ICU admission (mmHg)78 ± 283 ± 273 ± 20.002Serum Creatinine, ICU first day (mg/dl)2.4 ± 0.21.8 ± 0.23.0 ± 0.30.004Arterial HCO3\n−, ICU first day21 ± 120 ± 121 ± 1NS(0.458)Serum Sodium, ICU first day (mg/dl)137 ± 1138 ± 1137 ± 1NS(0.282)Bilirubin, ICU first day (mg/dl)5.8 ± 0.89.6 ± 1.51.9 ± 0.4<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.5 ± 0.12.3 ± 0.10.016Blood Sugar, ICU first day (mg/dl)160 ± 8159 ± 12161 ± 11NS(0.899)Hemoglobin, ICU first day (g/dl)9.7 ± 0.29.3 ± 0.310.0 ± 0.3NS(0.134)Platelets, ICU first 
day (×103/μL)127.3 ± 10.185.8 ± 8.1168.8 ± 16.9<0.001Leukocytes, ICU first day (×103/μL)13.9 ± 0.812.9 ± 1.314.9 ± 1.0NS(0.224)PaO2/FiO2, ICU first day(mmHg)256 ± 13270 ± 16243 ± 19NS(0.299)Shock(%)39 (35.5)9 (16.3)30 (54.5)<0.001Hospital mortality (%)71(64.5)34 (61.8)37 (67.3)NS(0.550)\nScore systems\nAPACHE III, ICU first day(mean ± SE)86.5 ± 2.881.1 ± 4.291.9 ± 3.7NS(0.058)SOFA, ICU first day(mean ± SE)10.0 ± 0.310.0 ± 0.510.0 ± 0.3NS(0.927)RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.4 ± 0.21.9 ± 0.20.041\nAbbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.\nIn the APACHE III-matched group (Table 2), the critically ill patients with cirrhosis had a lower arterial HCO3− level (P = .002), a lower haemoglobin level (P = .024), a lower platelet count (P < .001), and a higher serum bilirubin level (P < .001) on Day 1 of ICU admission compared with the patients without cirrhosis. The patients without cirrhosis with the same score were older and experienced more shock episodes than the patients with cirrhosis did. The hospital mortality rate was significantly higher in the patients with cirrhosis than in the patients without cirrhosis (P = .026). Renal function and a PaO2/FiO2 ratio were similar between the 2 groups. Among APACHE III-matched patients, the patients with cirrhosis had significantly higher SOFA scores than those of the patients without cirrhosis (P < .001).\nIn the SOFA-matched group (Table 3), the patients with cirrhosis had a lower platelet count (P < .001), a higher Glasgow coma scale (GCS), a more stable haemodynamic status, more favourable renal function, a higher bilirubin level, and a lower albumin level compared with the patients without cirrhosis. The patients without cirrhosis experienced more shock episodes than did the patients with cirrhosis. No significant difference in the hospital mortality rate was observed between the patients with and without cirrhosis.\nIn both the APACHE III and SOFA-matched groups, the patients with cirrhosis had a shorter overall length of stay in an ICU than that of the patients without cirrhosis. This was attributed to the higher hospital mortality rate in the patients with cirrhosis. 
In both the APACHE III- and SOFA-matched groups, the patients without cirrhosis had a significantly higher rate of shock than the patients with cirrhosis; this was mainly because the patients with cirrhosis were admitted to the ICU chiefly for hepatic failure, whereas the patients without cirrhosis were admitted chiefly for GI bleeding and sepsis (data not shown).

Mortality and severity of illness scoring systems

The prediction abilities of the APACHE III, SOFA, and RIFLE scoring systems were compared; Table 4 lists the calibration and discrimination of the models. In the APACHE III-matched group, the SOFA scoring system demonstrated the highest predictive ability (AUROC = 0.810 ± 0.056) of the three systems. All three scoring systems predicted mortality more precisely in the patients with cirrhosis than in those without cirrhosis. In the SOFA-matched group, the APACHE III scoring system was the most accurate predictor of the three systems, whereas in the patients with cirrhosis the SOFA scoring system again demonstrated the highest predictive ability. To determine the selected cut-off points for predicting in-hospital mortality, the sensitivity, specificity, and overall accuracy of prediction were calculated (Table 5). In the APACHE III-matched group, the SOFA scoring system had the best Youden index among the patients with cirrhosis, whereas the APACHE III scoring system had the best Youden index in the total population. In the SOFA-matched group, the APACHE III scoring system had the best Youden index in both the patients with cirrhosis and the total population.
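For reference, the Youden index used to select these cut-off points is simply J = sensitivity + specificity − 1. A minimal sketch (illustrative only, not part of the study's analysis code), checked against the reported SOFA cut-off of 10 for cirrhotic patients in the APACHE III-matched group (sensitivity 75%, specificity 81%, Table 5):

```python
def youden_index(sensitivity: float, specificity: float) -> float:
    """Youden's J statistic: J = sensitivity + specificity - 1.

    Both arguments are proportions in [0, 1]; J ranges from 0
    (no discriminative value) to 1 (perfect discrimination).
    """
    return sensitivity + specificity - 1.0

# SOFA cut-off of 10, cirrhosis, APACHE III-matched group (Table 5):
# sensitivity 0.75, specificity 0.81 -> J = 0.56, matching the table.
print(round(youden_index(0.75, 0.81), 2))  # 0.56
```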
The RIFLE scoring system demonstrated the highest specificity for prognostic prediction in the patients with cirrhosis in both the SOFA-matched and APACHE III-matched groups.

Table 4. Calibration and discrimination for the scoring methods in predicting hospital mortality
Columns: calibration goodness-of-fit χ² | df | p; discrimination AUROC ± SE | 95% CI | p

APACHE III-matched group
APACHE III
 Total population | 3.633 | 8 | 0.889 | 0.745 ± 0.040 | 0.667–0.823 | <0.001
 Cirrhosis | 12.304 | 7 | 0.091 | 0.783 ± 0.062 | 0.662–0.904 | <0.001
 Non-cirrhosis | 3.819 | 8 | 0.873 | 0.733 ± 0.055 | 0.626–0.840 | <0.001
SOFA
 Total population | 4.505 | 8 | 0.809 | 0.735 ± 0.039 | 0.659–0.812 | <0.001
 Cirrhosis | 11.243 | 8 | 0.188 | 0.810 ± 0.056 | 0.700–0.920 | <0.001
 Non-cirrhosis | 1.180 | 6 | 0.978 | 0.624 ± 0.060 | 0.506–0.741 | NS (0.050)
RIFLE
 Total population | 3.160 | 3 | 0.368 | 0.621 ± 0.045 | 0.534–0.709 | 0.010
 Cirrhosis | 2.008 | 2 | 0.366 | 0.710 ± 0.061 | 0.590–0.830 | 0.004
 Non-cirrhosis | 2.475 | 3 | 0.480 | 0.554 ± 0.062 | 0.431–0.676 | NS (0.395)
SOFA-matched group
APACHE III
 Total population | 5.093 | 8 | 0.748 | 0.733 ± 0.051 | 0.633–0.834 | <0.001
 Cirrhosis | 15.201 | 7 | 0.034 | 0.767 ± 0.068 | 0.633–0.900 | 0.001
 Non-cirrhosis | 1.716 | 7 | 0.974 | 0.706 ± 0.077 | 0.556–0.856 | 0.014
SOFA
 Total population | 6.519 | 6 | 0.368 | 0.663 ± 0.053 | 0.559–0.767 | 0.005
 Cirrhosis | 6.427 | 6 | 0.377 | 0.742 ± 0.068 | 0.608–0.875 | 0.003
 Non-cirrhosis | 5.698 | 5 | 0.337 | 0.543 ± 0.079 | 0.387–0.698 | NS (0.609)
RIFLE
 Total population | 4.609 | 3 | 0.203 | 0.634 ± 0.055 | 0.527–0.742 | 0.020
 Cirrhosis | 1.216 | 2 | 0.544 | 0.656 ± 0.074 | 0.511–0.801 | NS (0.053)
 Non-cirrhosis | 4.531 | 3 | 0.210 | 0.594 ± 0.083 | 0.432–0.756 | NS (0.262)
Abbreviations: df, degrees of freedom; AUROC, area under the receiver operating characteristic curve; SE, standard error; CI, confidence interval; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 5. Subsequent hospital mortality predicted after ICU admission
Columns: cut-off point | Youden index | sensitivity (%) | specificity (%) | overall correctness (%)

APACHE III-matched group
APACHE III
 Total population | 83 | 0.42 | 75 | 67 | 71
 Cirrhosis | 72^a | 0.55 | 78 | 76 | 77
 Non-cirrhosis | 83 | 0.40 | 70 | 70 | 70
SOFA
 Total population | 10 | 0.40 | 52 | 88 | 70
 Cirrhosis | 10^a | 0.56 | 75 | 81 | 78
 Non-cirrhosis | 9 | 0.20 | 36 | 84 | 60
RIFLE
 Total population | Injury | 0.22 | 39 | 83 | 61
 Cirrhosis | Injury^a | 0.35 | 45 | 90 | 68
 Non-cirrhosis | Injury | 0.10 | 32 | 78 | 55
SOFA-matched group
APACHE III
 Total population | 76 | 0.41 | 75 | 67 | 71
 Cirrhosis | 71^a | 0.50 | 74 | 76 | 75
 Non-cirrhosis | 83 | 0.40 | 73 | 67 | 70
SOFA
 Total population | 10 | 0.30 | 45 | 85 | 65
 Cirrhosis | 10^a | 0.42 | 56 | 86 | 71
 Non-cirrhosis | 10 | 0.18 | 35 | 83 | 59
RIFLE
 Total population | Non-AKI | 0.22 | 76 | 46 | 61
 Cirrhosis | Injury^a | 0.26 | 35 | 90 | 63
 Non-cirrhosis | Non-AKI | 0.20 | 86 | 33 | 60
Abbreviations: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.
^a Value giving the best Youden index.

Figure 1A and B show the cumulative survival rates in the patients with and without cirrhosis in the APACHE III-matched and SOFA-matched groups, respectively.
The cumulative survival rates showed that the patients with cirrhosis in the APACHE III-matched group had significantly higher mortality than the patients without cirrhosis, whereas no significant difference was detected between the patients with and without cirrhosis in the SOFA-matched group. In both the APACHE III-matched and SOFA-matched groups, the cumulative survival rates differed significantly when underlying AKI was considered (Figure 2A and B).
Figure 1. Cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients, p < 0.05) and SOFA-matched (1B, 110 patients, p not significant) groups.
Figure 2. Cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients, p < 0.05) and SOFA-matched (2B, 110 patients, p < 0.05) groups.
The RIFLE scoring system demonstrated the highest specificity for prognostic prediction in the patients with cirrhosis in both the SOFA-matched and APACHE III-matched groups.

Table 4. Calibration and discrimination for the scoring methods in predicting hospital mortality

                        Calibration                       Discrimination
                        goodness-of-fit (χ2)  df  p       AUROC ± SE     95% CI         p

APACHE III-matched group
  APACHE III
    Total population    3.633    8   0.889    0.745 ± 0.040  0.667 – 0.823  <0.001
    Cirrhosis           12.304   7   0.091    0.783 ± 0.062  0.662 – 0.904  <0.001
    Non-cirrhosis       3.819    8   0.873    0.733 ± 0.055  0.626 – 0.840  <0.001
  SOFA
    Total population    4.505    8   0.809    0.735 ± 0.039  0.659 – 0.812  <0.001
    Cirrhosis           11.243   8   0.188    0.810 ± 0.056  0.700 – 0.920  <0.001
    Non-cirrhosis       1.180    6   0.978    0.624 ± 0.060  0.506 – 0.741  NS (0.050)
  RIFLE
    Total population    3.160    3   0.368    0.621 ± 0.045  0.534 – 0.709  0.010
    Cirrhosis           2.008    2   0.366    0.710 ± 0.061  0.590 – 0.830  0.004
    Non-cirrhosis       2.475    3   0.480    0.554 ± 0.062  0.431 – 0.676  NS (0.395)

SOFA-matched group
  APACHE III
    Total population    5.093    8   0.748    0.733 ± 0.051  0.633 – 0.834  <0.001
    Cirrhosis           15.201   7   0.034    0.767 ± 0.068  0.633 – 0.900  0.001
    Non-cirrhosis       1.716    7   0.974    0.706 ± 0.077  0.556 – 0.856  0.014
  SOFA
    Total population    6.519    6   0.368    0.663 ± 0.053  0.559 – 0.767  0.005
    Cirrhosis           6.427    6   0.377    0.742 ± 0.068  0.608 – 0.875  0.003
    Non-cirrhosis       5.698    5   0.337    0.543 ± 0.079  0.387 – 0.698  NS (0.609)
  RIFLE
    Total population    4.609    3   0.203    0.634 ± 0.055  0.527 – 0.742  0.020
    Cirrhosis           1.216    2   0.544    0.656 ± 0.074  0.511 – 0.801  NS (0.053)
    Non-cirrhosis       4.531    3   0.210    0.594 ± 0.083  0.432 – 0.756  NS (0.262)

Abbreviation: df, degrees of freedom; AUROC, area under the receiver operating characteristic curve; SE, standard error; CI, confidence interval; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 5. Subsequent hospital mortality predicted after ICU admission

Predictive factors      Cutoff point  Youden index  Sensitivity (%)  Specificity (%)  Overall correctness (%)

APACHE III-matched group
  APACHE III
    Total population    83            0.42          75               67               71
    Cirrhosis           72 (a)        0.55          78               76               77
    Non-cirrhosis       83            0.40          70               70               70
  SOFA
    Total population    10            0.40          52               88               70
    Cirrhosis           10 (a)        0.56          75               81               78
    Non-cirrhosis       9             0.20          36               84               60
  RIFLE
    Total population    Injury        0.22          39               83               61
    Cirrhosis           Injury (a)    0.35          45               90               68
    Non-cirrhosis       Injury        0.10          32               78               55

SOFA-matched group
  APACHE III
    Total population    76            0.41          75               67               71
    Cirrhosis           71 (a)        0.50          74               76               75
    Non-cirrhosis       83            0.40          73               67               70
  SOFA
    Total population    10            0.30          45               85               65
    Cirrhosis           10 (a)        0.42          56               86               71
    Non-cirrhosis       10            0.18          35               83               59
  RIFLE
    Total population    Non-AKI       0.22          76               46               61
    Cirrhosis           Injury (a)    0.26          35               90               63
    Non-cirrhosis       Non-AKI       0.20          86               33               60

Abbreviation: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

(a) Value giving the best Youden index.

Figure 1A and B show the cumulative survival rates in the patients with and without cirrhosis in the APACHE III-matched and SOFA-matched groups, respectively. 
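The cut-off values in Table 5 were chosen by maximising Youden's index (sensitivity + specificity − 1). A minimal sketch of that selection procedure, using made-up scores and outcomes rather than the study data:

```python
def best_youden_cutoff(scores, died):
    """Scan every observed score as a candidate cut-off and return the one
    maximising Youden's index (sensitivity + specificity - 1).
    A patient is predicted to die when score >= cutoff."""
    best = None
    for c in sorted(set(scores)):
        tp = sum(1 for s, d in zip(scores, died) if s >= c and d)
        fn = sum(1 for s, d in zip(scores, died) if s < c and d)
        tn = sum(1 for s, d in zip(scores, died) if s < c and not d)
        fp = sum(1 for s, d in zip(scores, died) if s >= c and not d)
        sens = tp / (tp + fn)  # requires at least one death in the sample
        spec = tn / (tn + fp)  # and at least one survivor
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best  # (cutoff, Youden index, sensitivity, specificity)

# Illustrative SOFA scores and hospital deaths (1 = died), not the study data:
scores = [4, 6, 8, 9, 10, 11, 12, 14, 15, 16]
died   = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
cutoff, j, sens, spec = best_youden_cutoff(scores, died)
```

With these illustrative data the scan selects the cut-off at which sensitivity and specificity are jointly maximised, exactly as the published cut-offs (e.g. SOFA = 10 in the patients with cirrhosis) were selected.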
The cumulative survival rates showed that the patients with cirrhosis in the APACHE III-matched group had significantly higher mortality than the patients without cirrhosis, whereas no significant difference was detected between the patients with and without cirrhosis in the SOFA-matched group. In both the APACHE III-matched and SOFA-matched groups, the cumulative survival rates differed significantly when an underlying AKI was considered (Figure 2A and B).

Figure 1. The cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients, p < 0.05) and SOFA-matched (1B, 110 patients, p: not significant) groups, respectively.

Figure 2. The cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients, p < 0.05) and SOFA-matched (2B, 110 patients, p < 0.05) groups, respectively.
", "To our knowledge, this is the first study to compare the usefulness of different scoring systems for outcome prediction in patients admitted to an ICU with and without cirrhosis by using a score-matched method. In the present study, several clinical characteristics and outcomes of critically ill patients with cirrhosis were compared with those of critically ill patients without cirrhosis with matched APACHE III or SOFA scores. The poor outcome of patients with cirrhosis was consistent with the results of previous studies [12–14]. 
Recent studies have supported the efficacy of the SOFA scoring system for assessing the extent of organ dysfunction in various groups, including critically ill patients with cirrhosis [3–5].\nThe APACHE III score consists of an acute physiology score, an age score, and a chronic health score, and is currently widely used for predicting clinical outcomes. Among the patients with cirrhosis, bilirubin levels were higher than among the patients without cirrhosis. Results of the present study suggested that among patients with the same APACHE III score, the patients with cirrhosis were younger and at higher risk of acidaemia, whereas the patients without cirrhosis were older and experienced more shock episodes. However, renal function and consciousness status did not differ significantly between the 2 groups. The requirement for mechanical ventilation is not included in the APACHE III score, and no difference in the PaO2/FiO2 ratio was observed.\nAlthough the SOFA score includes fewer items and does not assess age or comorbid conditions, this enhances its simplicity, and the score demonstrates high discriminatory power for predicting mortality in critically ill patients with cirrhosis. The patients with cirrhosis in the SOFA-matched group showed a higher GCS and lower SCr concentrations on Day 1 of ICU admission, whereas these differences were nonsignificant in the APACHE III-matched group. Because patients with cirrhosis typically have higher serum bilirubin levels and lower platelet counts (caused by the underlying liver disease), which contribute to a higher SOFA score, patients with cirrhosis with similar SOFA scores may have relatively preserved function in the other organ systems. This phenomenon is consistent with our finding that the patients with cirrhosis had a more stable neurologic status and better renal function than did the patients without cirrhosis in the SOFA-matched group.\n
Prognostic scoring models such as APACHE III assume that mortality is affected by physiological disturbances that occur early in the course of illness, whereas organ dysfunction-scoring systems such as SOFA allow determination of organ dysfunction at the time of admission and at regular intervals throughout the stay in an ICU, thus allowing for the assessment of changes in organ function. The SOFA score quantifies the burden of organ dysfunction; although it was originally used to describe morbidity, it has also been used for mortality prediction. The accuracy of mortality prediction may be improved with repeated measurements by using organ dysfunction scoring systems such as SOFA.\nAs shown in Table 4, in the APACHE III-matched group, the SOFA score demonstrated a higher discrimination ability in the patients with cirrhosis than in the patients without cirrhosis (AUROC = 0.810 ± 0.056 vs 0.624 ± 0.060). The SOFA score is simpler for clinicians to calculate than the APACHE III score. Moreover, the SOFA score allows for sequential measurement, more accurately reflects the dynamic aspects of disease processes, and may provide higher-quality information on mortality risk. Therefore, the SOFA score is a superior and easier-to-implement model for predicting mortality in the patients with cirrhosis, with a cut-off value of 10 points providing the optimal overall correctness. In addition, the SOFA and APACHE III scores are comparable in the patients without cirrhosis.\nWhen the patients with and without cirrhosis were matched by using SOFA scores, the difference in APACHE III scores between the 2 groups was nonsignificant in the present study. The RIFLE classification, however, was significantly more favourable in the patients with cirrhosis than in the noncirrhotic controls (P = .041) in the SOFA-matched group. Patients with cirrhosis tend to have malnutrition, low muscle mass, and impaired creatinine synthesis. 
Therefore, the RIFLE scoring system, which is based on serum creatinine and urine output, may underestimate the severity of AKI and of the overall illness in these patients.\nDespite the encouraging results of the present study, several potential limitations should be considered. First, this was a retrospective study performed at a single tertiary-care medical centre, which limits generalisation of the findings. Second, the patients without cirrhosis comprised mainly patients with sepsis, with a low proportion of other diseases such as cardiovascular disease or acute respiratory distress syndrome; the specificity of this control group may further limit generalisation. Third, the patient population comprised a high proportion of patients with hepatitis B virus infection. Therefore, this study has limited applicability to typical North American and European patients with hepatitis C virus infection or alcohol dependence. Finally, the sample size was insufficient for matching SOFA and APACHE III scores among the patients with and without cirrhosis. Therefore, we cannot draw definitive conclusions regarding the relatively poor short-term prognosis of the patients admitted to the ICU with cirrhosis compared with that of the patients admitted to the ICU without cirrhosis.", "Our results provide additional evidence that SOFA scores differ significantly between patients with and without cirrhosis matched according to APACHE III scores. The score-matched analytical data showed that the predictive accuracy of SOFA is superior to that of APACHE III in evaluating critically ill patients with cirrhosis. We also demonstrated that the mean arterial pressure, GCS, and RIFLE classification play critical roles in determining prognosis in this subset of patients. When considering cost-effectiveness and ease of implementation, the SOFA scale is recommended for evaluating short-term prognosis in critically ill patients with cirrhosis." ]
[ null, "methods", null, null, null, null, "results", null, null, "discussion", "conclusions" ]
[ "Acute physiology and chronic health evaluation III (APACHE III)", "Sequential organ failure assessment (SOFA)", "Intensive care unit (ICU)", "Cirrhosis", "Outcome" ]
Background: Accurate prognostic predictors are crucial for patients admitted to an intensive care unit (ICU). Prognostic scoring systems are useful for clinical management: they help clinical physicians predict survival, make decisions, and explain disease severity. Patients with cirrhosis admitted to an ICU frequently have disappointing outcomes despite intensive medical support, and these patients are particular targets for prognostic evaluation. Various systems for scoring severity and predicting prognosis have been developed and applied for decades. The Acute Physiology and Chronic Health Evaluation III (APACHE III) score [1], one of the widely used scoring systems, is known for its accuracy in predicting mortality. However, the APACHE III scoring system was initially developed for various diseases and not exclusively for liver-related diseases. By contrast, the Sequential Organ Failure Assessment (SOFA) score [2], another widely used scoring system, is superior to the APACHE III scoring system for assessing specific organ dysfunction, including cirrhosis [3, 4]. Our previous study demonstrated that the APACHE III and SOFA scores were both independently associated with the hospital mortality rate and demonstrated high discriminatory power for predicting mortality in patients with cirrhosis [5]. However, few studies have performed detailed independent comparisons between APACHE III and SOFA scores [6]. In the present case-control study, we matched APACHE III and SOFA scores and compared the different clinical characteristics and outcomes of patients with cirrhosis with those of their matched noncirrhotic controls. Methods: Study design The Chang Gung Medical Foundation Institutional Review Board approved the present study and waived the need for informed consent, because patient privacy was not breached during the study, and the study did not interfere with clinical decisions related to patient care (approval No. 98-3658A3). 
All data in our study were anonymised. This retrospective case-control study was conducted in a tertiary-care hospital. The enrolled patients were recruited from a database of critically ill patients admitted to medical ICUs between January 2006 and December 2009. For the APACHE III-matched group (174 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of APACHE III ± 3 points. For the SOFA-matched group (110 patients), each patient with cirrhosis was matched 1:1 to a control patient without cirrhosis by using the criteria of SOFA ± 1 point [7, 8]. The outcomes of interest were the length of stay in an ICU, length of stay in a hospital, and hospital mortality rate. Study population and data collection All patients admitted to medical ICUs between January 2006 and December 2009 with APACHE III and SOFA scores available were eligible for inclusion. 
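The 1:1 score matching described above (APACHE III ± 3 points, SOFA ± 1 point) can be sketched as follows. The greedy nearest-score pairing strategy and the patient scores are illustrative assumptions; the study does not state its exact pairing algorithm.

```python
def match_one_to_one(cases, controls, tol):
    """Pair each case score with the closest unused control score lying
    within +/- tol points; unmatched cases are dropped.
    Greedy nearest-score pairing -- an illustrative strategy only."""
    used = set()
    pairs = []
    for case in cases:
        # all unused controls within the tolerance band, with their distance
        candidates = [(abs(case - ctrl), i)
                      for i, ctrl in enumerate(controls)
                      if i not in used and abs(case - ctrl) <= tol]
        if candidates:
            _, i = min(candidates)  # nearest-score control wins
            used.add(i)
            pairs.append((case, controls[i]))
    return pairs

# Illustrative APACHE III scores; tolerance of +/- 3 points as in the study design:
cirrhosis = [88, 72, 95, 60]
non_cirrhosis = [90, 70, 100, 61, 85]
pairs = match_one_to_one(cirrhosis, non_cirrhosis, tol=3)
```

Note that a case with no control inside the tolerance band (95 in this toy example) is simply left unmatched, which is why the matched groups are smaller than the full cohort.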
Exclusion criteria were age < 18 years, length of stay in a hospital or an ICU of < 24 hours, patients with chronic uraemia and undergoing renal replacement therapy, and hospital readmission. Data were recorded regarding patient demographics, reason for ICU admission, clinical and laboratory variables, APACHE III and SOFA scores, the risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, end-stage renal failure (RIFLE) classification [9], the length of stay in an ICU and a hospital, and hospital mortality. Data on the length of stay included those from patients with hospital mortality. Definitions Cirrhosis was diagnosed based on liver histology or a combination of physical presentation, biochemical data, and ultrasonographic findings. Illness severity was assessed according to the APACHE III and the SOFA scores, which were defined and calculated as described previously [1, 2]. Acute kidney injury (AKI) was defined using the RIFLE criteria, and patients were scored as RIFLE-R or higher severity. Baseline serum creatinine (SCr) concentration was the first value measured during hospitalisation. 
The Modification of Diet in Renal Disease formula was used to estimate baseline SCr concentration in patients whose previous SCr concentration was unavailable. The criteria resulting in the most severe RIFLE classification were used [9]. A simple model for assessing mortality was developed as follows: non-AKI (0 points), RIFLE-R (1 point), RIFLE-I (2 points), and RIFLE-F (3 points) on Day 1 of ICU admission [10, 11]. The lowest physiological and biochemical values on Day 1 of ICU admission were recorded. In sedated or paralysed patients, neurological scoring was not performed and was not classified as neurological failure. In patients who were intubated but not sedated, the best verbal response was determined according to clinical judgment. 
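The simple RIFLE point model above maps each Day-1 RIFLE class to 0–3 points. A minimal sketch (the patient list is illustrative, not the study data):

```python
# Point values of the simple mortality model described above
# (Day 1 of ICU admission): non-AKI = 0, RIFLE-R = 1, RIFLE-I = 2, RIFLE-F = 3.
RIFLE_POINTS = {"non-AKI": 0, "R": 1, "I": 2, "F": 3}

def rifle_points(day1_class):
    """Return the model's point value for a Day-1 RIFLE class."""
    return RIFLE_POINTS[day1_class]

# Illustrative patients (not the study data):
day1_classes = ["non-AKI", "R", "F", "I"]
model_scores = [rifle_points(c) for c in day1_classes]
```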
Statistical analysis Descriptive statistical results were expressed as mean ± standard error (SE). In primary analysis, the patients with cirrhosis were compared with the patients without cirrhosis. All variables were tested for normal distribution using the Kolmogorov–Smirnov test. Student’s t-test was used to compare the means of continuous variables and normally distributed data, whereas the Mann–Whitney U test was used for all other comparisons. Categorical data were tested using the χ2 test or Fisher’s exact test. Calibration was assessed using the Hosmer–Lemeshow goodness-of-fit test (C statistic) to compare the number of observed and predicted deaths in various risk groups for the entire range of death probabilities. Discrimination was assessed by determining area under the receiver operating characteristic curve (AUROC). Areas under 2 receiver operating characteristic curves were compared by applying a nonparametric approach. The AUROC analysis was also conducted to estimate the cut-off values, sensitivity, specificity, overall correctness, and positive and negative predictive values. Finally, cut-off points were calculated by determining the best Youden’s index (sensitivity + specificity −1). Cumulative survival curves over time were generated by applying the Kaplan–Meier approach and compared using the log rank test. All statistical tests were 2-tailed; P < .05 was considered statistically significant. Data were analysed using SPSS Version 13.0 for Windows (SPSS, Inc., Chicago, IL, USA). 
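The discrimination measure used above, the AUROC, can be computed without a statistics package: it equals the probability that a randomly chosen non-survivor has a higher score than a randomly chosen survivor, with ties counting one half. A minimal sketch with illustrative data (not the study data):

```python
def auroc(scores, died):
    """Rank-based AUROC: the fraction of (non-survivor, survivor) pairs in
    which the non-survivor has the higher score; ties count as 0.5."""
    pos = [s for s, d in zip(scores, died) if d]      # non-survivors
    neg = [s for s, d in zip(scores, died) if not d]  # survivors
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative severity scores and hospital deaths (1 = died):
scores = [5, 7, 9, 10, 12, 15]
died   = [0, 0, 1, 0, 1, 1]
value = auroc(scores, died)
```

An AUROC of 0.5 indicates no discrimination and 1.0 perfect discrimination, which is the scale on which the values in Table 4 (e.g. 0.810 for SOFA in the patients with cirrhosis) should be read.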
Results: Patient characteristics A total of 336 critically ill patients admitted to the medical ICU between January 2006 and December 2009 were enrolled in the present study. Table 1 lists the reasons for ICU admission. The demographic data, clinical characteristics, and outcomes of the 2 score-matched groups are depicted in Tables 2 and 3, respectively.

Table 1. Reasons for ICU admission

Reason                             APACHE III-matched group (n = 174)   SOFA-matched group (n = 110)
Sepsis                             99 (56.9%)                           62 (56.4%)
  Urinary tract infection          9 (5.2%)                             8 (7.3%)
  Pneumonia                        62 (35.6%)                           38 (34.5%)
  Intra-abdominal infection        16 (9.2%)                            7 (6.4%)
  Blood stream infection           9 (5.2%)                             8 (7.3%)
  Soft tissue infection            3 (1.7%)                             1 (0.9%)
Cardiovascular diseases            4 (2.3%)                             2 (1.8%)
Upper gastrointestinal bleeding    18 (10.3%)                           12 (10.9%)
Hepatic failure                    34 (19.5%)                           23 (20.9%)
Other                              19 (10.9%)                           11 (10.0%)

Abbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment.

Table 2. APACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis

Variable                                   Total (n = 174)   Cirrhosis (n = 87)   Non-cirrhosis (n = 87)   p-value
Age (years)                                65.5 ± 1.1        59.8 ± 1.5           71.3 ± 1.4               <0.001
Male/Female                                123/51            64/23                59/28                    NS (0.405)
Length of ICU stay (days)                  14 ± 1            11 ± 1               16 ± 2                   0.009
Length of hospital stay (days)             32 ± 2            30 ± 3               35 ± 3                   NS (0.223)
Body weight on ICU admission (kg)          61 ± 1.0          65 ± 1               57 ± 1                   <0.001
GCS, ICU first day (points)                9 ± 0             10 ± 1               9 ± 1                    NS (0.077)
MAP, ICU admission (mmHg)                  78 ± 1            80 ± 2               76 ± 2                   NS (0.137)
Serum creatinine, ICU first day (mg/dl)    2.5 ± 0.2         2.4 ± 0.2            2.5 ± 0.3                NS (0.885)
Arterial HCO3−, ICU first day              21 ± 1            19 ± 1.0             22 ± 1.0                 0.002
Serum sodium, ICU first day (mg/dl)        138 ± 1.0         138 ± 1.0            138 ± 1.0                NS (0.890)
Bilirubin, ICU first day (mg/dl)           6.4 ± 0.7         11.4 ± 1.3           1.4 ± 0.3                <0.001
Albumin, ICU first day (g/l)               2.4 ± 0.1         2.4 ± 0.1            2.4 ± 0.1                NS (0.329)
Blood sugar, ICU first day (mg/dl)         166 ± 7           159 ± 11             170 ± 10.0               NS (0.482)
Hemoglobin, ICU first day (g/dl)           9.6 ± 0.2         9.2 ± 0.2            10.0 ± 0.2               0.024
Platelets, ICU first day (×103/μL)         145.0 ± 9.1       79.4 ± 5.9           210.5 ± 14.1             <0.001
Leukocytes, ICU first day (×103/μL)        14.5 ± 0.7        13.5 ± 1.0           15.5 ± 0.8               NS (0.133)
PaO2/FiO2, ICU first day (mmHg)            268 ± 11          275 ± 13             262 ± 17                 NS (0.536)
Shock (%)                                  54 (31.0)         21 (24.1)            33 (37.9)                0.049
Hospital mortality (%)                     114 (65.5)        64 (73.6)            50 (57.5)                0.026
Score systems
APACHE III, ICU first day (mean ± SE)      87.8 ± 2.1        87.7 ± 3.4           88.0 ± 2.5               NS (0.941)
SOFA, ICU first day (mean ± SE)            9.6 ± 0.3         11.3 ± 0.4           8.0 ± 0.3                <0.001
RIFLE, ICU first day (mean ± SE)           1.6 ± 0.1         1.6 ± 0.1            1.7 ± 0.2                NS (0.913)

Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 3. SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis

Variable                                   Total (n = 110)   Cirrhosis (n = 55)   Non-cirrhosis (n = 55)   p-value
Age (years)                                65.0 ± 1.4        61.1 ± 2.0           68.9 ± 2.0               0.006
Male/Female                                79/31             40/15                39/16                    NS (0.832)
Length of ICU stay (days)                  14 ± 1            11 ± 1               18 ± 2                   0.007
Length of hospital stay (days)             31 ± 2            32 ± 4               31 ± 3                   NS (0.843)
Body weight on ICU admission (kg)          60 ± 1            64 ± 2               56 ± 2                   0.001
GCS, ICU first day (points)                10 ± 0            11 ± 1               8 ± 1                    0.005
MAP, ICU admission (mmHg)                  78 ± 2            83 ± 2               73 ± 2                   0.002
Serum creatinine, ICU first day (mg/dl)    2.4 ± 0.2         1.8 ± 0.2            3.0 ± 0.3                0.004
Arterial HCO3−, ICU first day              21 ± 1            20 ± 1               21 ± 1                   NS (0.458)
Serum sodium, ICU first day (mg/dl)        137 ± 1           138 ± 1              137 ± 1                  NS (0.282)
Bilirubin, ICU first day (mg/dl)           5.8 ± 0.8         9.6 ± 1.5            1.9 ± 0.4                <0.001
Albumin, ICU first day (g/l)               2.4 ± 0.1         2.5 ± 0.1            2.3 ± 0.1                0.016
Blood sugar, ICU first day (mg/dl)         160 ± 8           159 ± 12             161 ± 11                 NS (0.899)
Hemoglobin, ICU first day (g/dl)           9.7 ± 0.2         9.3 ± 0.3            10.0 ± 0.3               NS (0.134)
Platelets, ICU first day (×103/μL)         127.3 ± 10.1      85.8 ± 8.1           168.8 ± 16.9             <0.001
Leukocytes, ICU first day (×103/μL)        13.9 ± 0.8        12.9 ± 1.3           14.9 ± 1.0               NS (0.224)
PaO2/FiO2, ICU first day (mmHg)            256 ± 13          270 ± 16             243 ± 19                 NS (0.299)
Shock (%)                                  39 (35.5)         9 (16.3)             30 (54.5)                <0.001
Hospital mortality (%)                     71 (64.5)         34 (61.8)            37 (67.3)                NS (0.550)
Score systems
APACHE III, ICU first day (mean ± SE)      86.5 ± 2.8        81.1 ± 4.2           91.9 ± 3.7               NS (0.058)
SOFA, ICU first day (mean ± SE)            10.0 ± 0.3        10.0 ± 0.5           10.0 ± 0.3               NS (0.927)
RIFLE, ICU first day (mean ± SE)           1.6 ± 0.1         1.4 ± 0.2            1.9 ± 0.2                0.041

Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.
SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. In the APACHE III-matched group (Table 2), the critically ill patients with cirrhosis had a lower arterial HCO3− level (P = .002), a lower haemoglobin level (P = .024), a lower platelet count (P < .001), and a higher serum bilirubin level (P < .001) on Day 1 of ICU admission compared with the patients without cirrhosis. The patients without cirrhosis with the same score were older and experienced more shock episodes than the patients with cirrhosis did. The hospital mortality rate was significantly higher in the patients with cirrhosis than in the patients without cirrhosis (P = .026). Renal function and a PaO2/FiO2 ratio were similar between the 2 groups. Among APACHE III-matched patients, the patients with cirrhosis had significantly higher SOFA scores than those of the patients without cirrhosis (P < .001). In the SOFA-matched group (Table 3), the patients with cirrhosis had a lower platelet count (P < .001), a higher Glasgow coma scale (GCS), a more stable haemodynamic status, more favourable renal function, a higher bilirubin level, and a lower albumin level compared with the patients without cirrhosis. The patients without cirrhosis experienced more shock episodes than did the patients with cirrhosis. No significant difference in the hospital mortality rate was observed between the patients with and without cirrhosis. 
Patient characteristics

A total of 336 critically ill patients admitted to the medical ICU between January 2006 and December 2009 were enrolled in the present study. Table 1 lists the reasons for ICU admission. The demographic data, clinical characteristics, and outcomes of the two score-matched groups are presented in Tables 2 and 3.

Table 1. Reasons for ICU admission

                                   APACHE III-matched group (n = 174)   SOFA-matched group (n = 110)
Sepsis                             99 (56.9%)                           62 (56.4%)
  Urinary tract infection          9 (5.2%)                             8 (7.3%)
  Pneumonia                        62 (35.6%)                           38 (34.5%)
  Intra-abdominal infection        16 (9.2%)                            7 (6.4%)
  Blood stream infection           9 (5.2%)                             8 (7.3%)
  Soft tissue infection            3 (1.7%)                             1 (0.9%)
Cardiovascular diseases            4 (2.3%)                             2 (1.8%)
Upper gastrointestinal bleeding    18 (10.3%)                           12 (10.9%)
Hepatic failure                    34 (19.5%)                           23 (20.9%)
Other                              19 (10.9%)                           11 (10.0%)

Abbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment.

Table 2. APACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis

                                            Total (n = 174)   Cirrhosis (n = 87)   Non-cirrhosis (n = 87)   p-value
Age (years)                                 65.5 ± 1.1        59.8 ± 1.5           71.3 ± 1.4               <0.001
Male/Female                                 123/51            64/23                59/28                    NS (0.405)
Length of ICU stay (days)                   14 ± 1            11 ± 1               16 ± 2                   0.009
Length of hospital stay (days)              32 ± 2            30 ± 3               35 ± 3                   NS (0.223)
Body weight on ICU admission (kg)           61 ± 1            65 ± 1               57 ± 1                   <0.001
GCS, ICU first day (points)                 9 ± 0             10 ± 1               9 ± 1                    NS (0.077)
MAP, ICU admission (mmHg)                   78 ± 1            80 ± 2               76 ± 2                   NS (0.137)
Serum creatinine, ICU first day (mg/dl)     2.5 ± 0.2         2.4 ± 0.2            2.5 ± 0.3                NS (0.885)
Arterial HCO3−, ICU first day               21 ± 1            19 ± 1               22 ± 1                   0.002
Serum sodium, ICU first day (mg/dl)         138 ± 1           138 ± 1              138 ± 1                  NS (0.890)
Bilirubin, ICU first day (mg/dl)            6.4 ± 0.7         11.4 ± 1.3           1.4 ± 0.3                <0.001
Albumin, ICU first day (g/l)                2.4 ± 0.1         2.4 ± 0.1            2.4 ± 0.1                NS (0.329)
Blood sugar, ICU first day (mg/dl)          166 ± 7           159 ± 11             170 ± 10                 NS (0.482)
Hemoglobin, ICU first day (g/dl)            9.6 ± 0.2         9.2 ± 0.2            10.0 ± 0.2               0.024
Platelets, ICU first day (×10³/μL)          145.0 ± 9.1       79.4 ± 5.9           210.5 ± 14.1             <0.001
Leukocytes, ICU first day (×10³/μL)         14.5 ± 0.7        13.5 ± 1.0           15.5 ± 0.8               NS (0.133)
PaO2/FiO2, ICU first day (mmHg)             268 ± 11          275 ± 13             262 ± 17                 NS (0.536)
Shock (%)                                   54 (31.0)         21 (24.1)            33 (37.9)                0.049
Hospital mortality (%)                      114 (65.5)        64 (73.6)            50 (57.5)                0.026
Score systems
 APACHE III, ICU first day (mean ± SE)      87.8 ± 2.1        87.7 ± 3.4           88.0 ± 2.5               NS (0.941)
 SOFA, ICU first day (mean ± SE)            9.6 ± 0.3         11.3 ± 0.4           8.0 ± 0.3                <0.001
 RIFLE, ICU first day (mean ± SE)           1.6 ± 0.1         1.6 ± 0.1            1.7 ± 0.2                NS (0.913)

Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 3. SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis

                                            Total (n = 110)   Cirrhosis (n = 55)   Non-cirrhosis (n = 55)   p-value
Age (years)                                 65.0 ± 1.4        61.1 ± 2.0           68.9 ± 2.0               0.006
Male/Female                                 79/31             40/15                39/16                    NS (0.832)
Length of ICU stay (days)                   14 ± 1            11 ± 1               18 ± 2                   0.007
Length of hospital stay (days)              31 ± 2            32 ± 4               31 ± 3                   NS (0.843)
Body weight on ICU admission (kg)           60 ± 1            64 ± 2               56 ± 2                   0.001
GCS, ICU first day (points)                 10 ± 0            11 ± 1               8 ± 1                    0.005
MAP, ICU admission (mmHg)                   78 ± 2            83 ± 2               73 ± 2                   0.002
Serum creatinine, ICU first day (mg/dl)     2.4 ± 0.2         1.8 ± 0.2            3.0 ± 0.3                0.004
Arterial HCO3−, ICU first day               21 ± 1            20 ± 1               21 ± 1                   NS (0.458)
Serum sodium, ICU first day (mg/dl)         137 ± 1           138 ± 1              137 ± 1                  NS (0.282)
Bilirubin, ICU first day (mg/dl)            5.8 ± 0.8         9.6 ± 1.5            1.9 ± 0.4                <0.001
Albumin, ICU first day (g/l)                2.4 ± 0.1         2.5 ± 0.1            2.3 ± 0.1                0.016
Blood sugar, ICU first day (mg/dl)          160 ± 8           159 ± 12             161 ± 11                 NS (0.899)
Hemoglobin, ICU first day (g/dl)            9.7 ± 0.2         9.3 ± 0.3            10.0 ± 0.3               NS (0.134)
Platelets, ICU first day (×10³/μL)          127.3 ± 10.1      85.8 ± 8.1           168.8 ± 16.9             <0.001
Leukocytes, ICU first day (×10³/μL)         13.9 ± 0.8        12.9 ± 1.3           14.9 ± 1.0               NS (0.224)
PaO2/FiO2, ICU first day (mmHg)             256 ± 13          270 ± 16             243 ± 19                 NS (0.299)
Shock (%)                                   39 (35.5)         9 (16.3)             30 (54.5)                <0.001
Hospital mortality (%)                      71 (64.5)         34 (61.8)            37 (67.3)                NS (0.550)
Score systems
 APACHE III, ICU first day (mean ± SE)      86.5 ± 2.8        81.1 ± 4.2           91.9 ± 3.7               NS (0.058)
 SOFA, ICU first day (mean ± SE)            10.0 ± 0.3        10.0 ± 0.5           10.0 ± 0.3               NS (0.927)
 RIFLE, ICU first day (mean ± SE)           1.6 ± 0.1         1.4 ± 0.2            1.9 ± 0.2                0.041

Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.
In the APACHE III-matched group (Table 2), the critically ill patients with cirrhosis had a lower arterial HCO3− level (P = .002), a lower haemoglobin level (P = .024), a lower platelet count (P < .001), and a higher serum bilirubin level (P < .001) on Day 1 of ICU admission than the patients without cirrhosis. At the same APACHE III score, the patients without cirrhosis were older and experienced more shock episodes than the patients with cirrhosis did. The hospital mortality rate was significantly higher in the patients with cirrhosis than in the patients without cirrhosis (P = .026). Renal function and the PaO2/FiO2 ratio were similar between the two groups. Among the APACHE III-matched patients, those with cirrhosis had significantly higher SOFA scores than those without cirrhosis (P < .001).
In the SOFA-matched group (Table 3), the patients with cirrhosis had a lower platelet count (P < .001), a higher Glasgow coma scale (GCS) score, a more stable haemodynamic status, more favourable renal function, a higher bilirubin level, and a lower albumin level than the patients without cirrhosis. The patients without cirrhosis experienced more shock episodes than the patients with cirrhosis did. No significant difference in the hospital mortality rate was observed between the patients with and without cirrhosis.

In both the APACHE III-matched and SOFA-matched groups, the patients with cirrhosis had a shorter overall length of ICU stay than the patients without cirrhosis, which was attributed to the higher hospital mortality rate in the patients with cirrhosis. In both groups, the patients without cirrhosis also had a significantly higher rate of shock, mainly because the patients with cirrhosis were admitted to the ICU chiefly for hepatic failure, whereas the patients without cirrhosis were admitted chiefly for gastrointestinal bleeding and sepsis (data not shown).

Mortality and severity of illness scoring systems

The prediction abilities of the APACHE III, SOFA, and RIFLE scoring systems were compared; Table 4 lists the calibration and discrimination of the models. In the APACHE III-matched group, the SOFA scoring system demonstrated the highest prediction ability (AUROC = 0.810 ± 0.056) of the three systems. All three scoring systems predicted mortality more precisely in the patients with cirrhosis than in those without cirrhosis. In the SOFA-matched group, the APACHE III scoring system was the most accurate predictor of the three systems. In the patients with cirrhosis, the SOFA scoring system demonstrated the highest prediction ability.
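The discrimination values in Table 4 are areas under the receiver operating characteristic curve (AUROC): the probability that a randomly chosen non-survivor received a higher admission score than a randomly chosen survivor. As an illustration only, here is a minimal pure-Python sketch of that rank-based computation; the scores and outcomes below are invented toy data, not values from this study.

```python
def auroc(scores, died):
    """AUROC via the rank-sum (Mann-Whitney) formulation: the fraction of
    (non-survivor, survivor) pairs in which the non-survivor has the higher
    score, counting ties as half."""
    pos = [s for s, d in zip(scores, died) if d]       # non-survivors
    neg = [s for s, d in zip(scores, died) if not d]   # survivors
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: admission scores and hospital outcome (1 = died, 0 = survived).
scores = [12, 9, 15, 7, 11, 5, 14, 8]
died   = [1,  0,  1, 0,  1, 0,  1, 0]
print(auroc(scores, died))  # 1.0 — these toy scores separate outcomes perfectly
```

In practice the study's AUROCs would come from a statistics package (e.g. the equivalent of scikit-learn's `roc_auc_score`), but the underlying quantity is the same pairwise-ranking probability.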
To select cut-off points for predicting in-hospital mortality, the sensitivity, specificity, and overall accuracy of prediction were calculated (Table 5). In the APACHE III-matched group, the SOFA scoring system had the best Youden's index among the patients with cirrhosis, whereas the APACHE III scoring system had the best Youden's index in the total population. In the SOFA-matched group, the APACHE III scoring system had the best Youden's index in both the patients with cirrhosis and the total population. The RIFLE scoring system demonstrated the highest specificity for prognostic prediction in the patients with cirrhosis of both the SOFA-matched and APACHE III-matched groups.

Table 4. Calibration and discrimination of the scoring methods in predicting hospital mortality

                       Calibration                       Discrimination
                       Goodness-of-fit (χ²)   df   p     AUROC ± SE      95% CI        p
APACHE III-matched group
 APACHE III
  Total population     3.633                  8    0.889  0.745 ± 0.040   0.667–0.823   <0.001
  Cirrhosis            12.304                 7    0.091  0.783 ± 0.062   0.662–0.904   <0.001
  Non-cirrhosis        3.819                  8    0.873  0.733 ± 0.055   0.626–0.840   <0.001
 SOFA
  Total population     4.505                  8    0.809  0.735 ± 0.039   0.659–0.812   <0.001
  Cirrhosis            11.243                 8    0.188  0.810 ± 0.056   0.700–0.920   <0.001
  Non-cirrhosis        1.180                  6    0.978  0.624 ± 0.060   0.506–0.741   NS (0.050)
 RIFLE
  Total population     3.160                  3    0.368  0.621 ± 0.045   0.534–0.709   0.010
  Cirrhosis            2.008                  2    0.366  0.710 ± 0.061   0.590–0.830   0.004
  Non-cirrhosis        2.475                  3    0.480  0.554 ± 0.062   0.431–0.676   NS (0.395)
SOFA-matched group
 APACHE III
  Total population     5.093                  8    0.748  0.733 ± 0.051   0.633–0.834   <0.001
  Cirrhosis            15.201                 7    0.034  0.767 ± 0.068   0.633–0.900   0.001
  Non-cirrhosis        1.716                  7    0.974  0.706 ± 0.077   0.556–0.856   0.014
 SOFA
  Total population     6.519                  6    0.368  0.663 ± 0.053   0.559–0.767   0.005
  Cirrhosis            6.427                  6    0.377  0.742 ± 0.068   0.608–0.875   0.003
  Non-cirrhosis        5.698                  5    0.337  0.543 ± 0.079   0.387–0.698   NS (0.609)
 RIFLE
  Total population     4.609                  3    0.203  0.634 ± 0.055   0.527–0.742   0.020
  Cirrhosis            1.216                  2    0.544  0.656 ± 0.074   0.511–0.801   NS (0.053)
  Non-cirrhosis        4.531                  3    0.210  0.594 ± 0.083   0.432–0.756   NS (0.262)

Abbreviation: df, degrees of freedom; AUROC, area under the receiver operating characteristic curve; SE, standard error; CI, confidence interval; NS, not significant; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 5. Subsequent hospital mortality predicted after ICU admission

                       Cut-off point   Youden index   Sensitivity (%)   Specificity (%)   Overall correctness (%)
APACHE III-matched group
 APACHE III
  Total population     83              0.42           75                67                71
  Cirrhosis            72ᵃ             0.55           78                76                77
  Non-cirrhosis        83              0.40           70                70                70
 SOFA
  Total population     10              0.40           52                88                70
  Cirrhosis            10ᵃ             0.56           75                81                78
  Non-cirrhosis        9               0.20           36                84                60
 RIFLE
  Total population     Injury          0.22           39                83                61
  Cirrhosis            Injuryᵃ         0.35           45                90                68
  Non-cirrhosis        Injury          0.10           32                78                55
SOFA-matched group
 APACHE III
  Total population     76              0.41           75                67                71
  Cirrhosis            71ᵃ             0.50           74                76                75
  Non-cirrhosis        83              0.40           73                67                70
 SOFA
  Total population     10              0.30           45                85                65
  Cirrhosis            10ᵃ             0.42           56                86                71
  Non-cirrhosis        10              0.18           35                83                59
 RIFLE
  Total population     Non-AKI         0.22           76                46                61
  Cirrhosis            Injuryᵃ         0.26           35                90                63
  Non-cirrhosis        Non-AKI         0.20           86                33                60

Abbreviation: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. ᵃValue giving the best Youden index.
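The cut-offs in Table 5 maximise Youden's index, J = sensitivity + specificity − 1. A minimal sketch of that selection, using the cirrhotic SOFA row of Table 5 (sensitivity 75%, specificity 81% at a cut-off of 10) plus two invented neighbouring cut-offs purely for illustration:

```python
def best_cutoff(candidates):
    """Pick the cut-off maximising Youden's J = sensitivity + specificity - 1.
    `candidates` maps cut-off -> (sensitivity, specificity), as fractions."""
    j = {c: sens + spec - 1 for c, (sens, spec) in candidates.items()}
    best = max(j, key=j.get)
    return best, round(j[best], 2)

# Cut-off 10 is from Table 5 (cirrhosis, SOFA); 9 and 11 are invented here.
candidates = {9: (0.85, 0.60), 10: (0.75, 0.81), 11: (0.55, 0.92)}
print(best_cutoff(candidates))  # (10, 0.56) — matching the reported Youden index
```

The same trade-off explains why RIFLE, with its high specificity but low sensitivity in cirrhotic patients, yields lower Youden's indices than the other two systems.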
Figure 1A and B show the cumulative survival rates of the patients with and without cirrhosis in the APACHE III-matched and SOFA-matched groups, respectively. In the APACHE III-matched group, the patients with cirrhosis had significantly higher mortality than the patients without cirrhosis, whereas no significant difference was detected between the patients with and without cirrhosis in the SOFA-matched group. In both the APACHE III-matched and SOFA-matched groups, the cumulative survival rates differed significantly when underlying acute kidney injury (AKI) was considered (Figure 2A and B).

Figure 1. Cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients, p < 0.05) and SOFA-matched (1B, 110 patients, p: not significant) groups.

Figure 2. Cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients, p < 0.05) and SOFA-matched (2B, 110 patients, p < 0.05) groups.
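The cumulative survival curves in Figures 1 and 2 are product-limit (Kaplan-Meier) estimates: at each event time, the running survival probability is multiplied by the fraction of at-risk patients who survived that time. A minimal pure-Python sketch of the estimator on invented follow-up data (times in days; event = 1 for in-hospital death, 0 for censoring); the study's actual curves were of course computed from the patient data:

```python
def kaplan_meier(times, events):
    """Return (time, survival) pairs: at each distinct event time t, survival
    is multiplied by (1 - deaths_at_t / number_at_risk_just_before_t)."""
    surv, curve = 1.0, []
    for t in sorted({ti for ti, e in zip(times, events) if e}):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e)
        surv *= 1 - deaths / at_risk
        curve.append((t, round(surv, 3)))
    return curve

# Invented follow-up for 5 patients: deaths at days 3 and 10; censoring at 7, 14, 14.
print(kaplan_meier([3, 7, 10, 14, 14], [1, 0, 1, 0, 0]))
# [(3, 0.8), (10, 0.533)]
```

Comparing two such curves (cirrhosis vs non-cirrhosis, or AKI vs non-AKI) is then typically done with a log-rank test, which is the kind of comparison behind the p-values quoted in the figure legends.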
In the APACHE III-matched group, the SOFA scoring system demonstrated the highest prediction ability (AUROC = 0.810 ± 0.056) among all 3 systems. All the 3 scoring systems predicted mortality more precisely in the patients with cirrhosis than in those without cirrhosis. In the SOFA-matched group, the APACHE III scoring system was the most accurate predictor among all 3 systems. In the patients with cirrhosis, the SOFA scoring system demonstrated the highest prediction ability. To determine the selected cut-off points for predicting in-hospital mortality, the sensitivity, specificity, and overall accuracy of prediction were determined (Table 5). In the APACHE III-matched group, the SOFA scoring system had the best Youden’s index among the patients with cirrhosis, whereas the APACHE III scoring system had the best Youden’s index among the total population. In the SOFA-matched group, the APACHE III scoring system had the best Youden’s index among both the patients with cirrhosis and the total population. 
The RIFLE scoring system demonstrated the highest specificity for prognostic prediction in the patients with cirrhosis of both the SOFA-matched and APACHE III-matched groups.Table 4 Calibration and discrimination for the scoring methods in predicting hospital mortality CalibrationDiscriminationgoodness-of-fit (χ 2)df p AUROC ± SE95% CI p APACHE III-matched group APACHE III  Total population3.63380.8890.745 ± 0.0400.667 – 0.823<0.001 Cirrhosis12.30470.0910.783 ± 0.0620.662 – 0.904<0.001 Non-cirrhosis3.81980.8730.733 ± 0.0550.626 – 0.840<0.001 SOFA  Total population4.50580.8090.735 ± 0.0390.659 – 0.812<0.001 Cirrhosis11.24380.1880.810 ± 0.0560.700 – 0.920<0.001 Non-cirrhosis1.18060.9780.624 ± 0.0600.506 – 0.741NS(0.050) RIFLE  Total population3.16030.3680.621 ± 0.0450.534 – 0.7090.010 Cirrhosis2.00820.3660.710 ± 0.0610.590 – 0.8300.004 Non-cirrhosis2.47530.4800.554 ± 0.0620.431 – 0.676NS(0.395) SOFA-matched group APACHE III  Total population5.09380.7480.733 ± 0.0510.633 – 0.834<0.001 Cirrhosis15.20170.0340.767 ± 0.0680.633 – 0.9000.001 Non-cirrhosis1.71670.9740.706 ± 0.0770.556 – 0.8560.014 SOFA  Total population6.51960.3680.663 ± 0.0530.559 – 0.7670.005 Cirrhosis6.42760.3770.742 ± 0.0680.608 – 0.8750.003 Non-cirrhosis5.69850.3370.543 ± 0.0790.387 – 0.698NS(0.609) RIFLE  Total population4.60930.2030.634 ± 0.0550.527 – 0.7420.020 Cirrhosis1.21620.5440.656 ± 0.0740.511 – 0.801NS(0.053) Non-cirrhosis4.53130.2100.594 ± 0.0830.432 – 0.756NS(0.262) Abbreviation: df, degree of freedom; AUROC, areas under the receiver operating characteristic curve; SE, standard error; CI, confidence intervals; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.Table 5 Subsequent hospital mortality predicted after ICU admission Predictive FactorsCutoff PointYouden indexSensitivity (%)Specificity (%)Overall 
correctness (%) APACHE III-matched group APACHE III  Total population830.42756771 Cirrhosis72a 0.55787677 Non-cirrhosis830.40707070 SOFA  Total population100.40528870 Cirrhosis10a 0.56758178 Non-cirrhosis90.20368460 RIFLE  Total populationInjury0.22398361 Cirrhosisinjurya 0.35459068 Non-cirrhosisInjury0.10327855 SOFA-matched group APACHE III  Total population760.417567071 Cirrhosis71a 0.50747675 Non-cirrhosis830.40736770 SOFA  Total population100.30458565 Cirrhosis10 a 0.42568671 Non-cirrhosis100.18358359 RIFLE  Total populationNon-AKI0.22764661 CirrhosisInjurya 0.26359063 Non-cirrhosisNon-AKI0.20863360 Abbreviation: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. aValue giving the best Youden index. Calibration and discrimination for the scoring methods in predicting hospital mortality Abbreviation: df, degree of freedom; AUROC, areas under the receiver operating characteristic curve; SE, standard error; CI, confidence intervals; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. Subsequent hospital mortality predicted after ICU admission Abbreviation: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. aValue giving the best Youden index. Figure 1A and B show the cumulative survival rates in the patients with and without cirrhosis in the APACHE III-matched and SOFA-matched groups, respectively. 
The cumulative survival rates showed that patients with cirrhosis in the APACHE III-matched group had significantly higher mortality rates than the patients without cirrhosis did, whereas no significant difference was detected between the patients with and without cirrhosis in the SOFA-matched group. In both the APACHE III-matched and SOFA-matched groups, the cumulative survival rates significantly differed when an underlying AKI was considered (Figure 2A and B).Figure 1 The cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients, p  < 0.05) and SOFA-matched (1B, 110 patients, p -value: not significant) groups, respectively. Figure 2 The cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients, p  < 0.05) and SOFA-matched (2B, 110 patients, p  < 0.05) groups, respectively. The cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients, p  < 0.05) and SOFA-matched (1B, 110 patients, p -value: not significant) groups, respectively. The cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients, p  < 0.05) and SOFA-matched (2B, 110 patients, p  < 0.05) groups, respectively. Patient characteristics: A total of 336 critically ill patients admitted to the medical ICU between January 2006 and December 2009 were enrolled in the present study. Table 1 lists the reasons for ICU admission. 
The demographic data, clinical characteristics, and outcomes of the 2 score-matched groups are depicted in Tables 2 and 3, respectively.Table 1 Reasons for ICU admission APACHE III-matchedSOFA-matchedgroup (n = 174)group (n = 110)Sepsis99(56.9%)62(56.4%) Urinary tract infection9(5.2%)8(7.3%) Pneumonia62(35.6%)38(34.5%) Intra-abdominal infection16(9.2%)7(6.4%) Blood stream infection9(5.2%)8(7.3%) Soft tissue infection3(1.7%)1(0.9%)Cardiovascular diseases4(2.3%)2(1.8%)Upper gastrointestinal bleeding18(10.3%)12(10.9%)Hepatic failure34(19.5%)23(20.9%)Other19(10.9%)11(10.0%) Abbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment.Table 2 APACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Total (n = 174)Cirrhosis (n = 87)Non-cirrhosis (n = 87) p-valueAge (years)65.5 ± 1.159.8 ± 1.571.3 ± 1.4<0.001Male/Female123/5164/2359/28NS(0.405)Length of ICU stay (days)14 ± 111 ± 116 ± 20.009Length of Hospital stay (days)32 ± 230 ± 335 ± 3NS(0.223)Body weight on ICU admission (kg)61 ± 1.065 ± 157 ± 1<0.001GCS, ICU first day (points)9 ± 010 ± 19 ± 1NS(0.077)MAP, ICU admission (mmHg)78 ± 180 ± 276 ± 2NS(0.137)Serum Creatinine, ICU first day (mg/dl)2.5 ± 0.22.4 ± 0.22.5 ± 0.3NS(0.885)Arterial HCO3 −, ICU first day21 ± 119 ± 1.022 ± 1.00.002Serum Sodium, ICU first day (mg/dl)138 ± 1.0138 ± 1.0138 ± 1.0NS(0.890)Bilirubin, ICU first day (mg/dl)6.4 ± 0.711.4 ± 1.31.4 ± 0.3<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.4 ± 0.12.4 ± 0.1NS(0.329)Blood Sugar, ICU first day (mg/dl)166 ± 7159 ± 11170 ± 10.0NS(0.482)Hemoglobin, ICU first day (g/dl)9.6 ± 0.29.2 ± 0.210.0 ± 0.20.024Platelets, ICU first day (×103/μL)145.0 ± 9.179.4 ± 5.9210.5 ± 14.1<0.001Leukocytes, ICU first day (×103/μL)14.5 ± 0.713.5 ± 1.015.5 ± 0.8NS(0.133)PaO2/FiO2, ICU first day(mmHg)268 ± 11275 ± 13262 ± 17NS(0.536)Shock(%)54 (31.0)21 (24.1)33 (37.9)0.049Hospital mortality (%)114 
(65.5)64 (73.6)50 (57.5)0.026 Score systems APACHE III, ICU first day(mean ± SE)87.8 ± 2.187.7 ± 3.488.0 ± 2.5NS(0.941)SOFA, ICU first day(mean ± SE)9.6 ± 0.311.3 ± 0.48.0 ± 0.3<0.001RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.6 ± 0.11.7 ± 0.2NS(0.913) Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.Table 3 SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Total (n = 110)Cirrhosis (n = 55)Non-cirrhosis (n = 55) p-valueAge (years)65.0 ± 1.461.1 ± 2.068.9 ± 2.00.006Male/Female79/3140/1539/16NS(0.832)Length of ICU stay (days)14 ± 111 ± 118 ± 20.007Length of Hospital stay (days)31 ± 232 ± 431 ± 3NS(0.843)Body weight on ICU admission (kg)60 ± 164 ± 256 ± 20.001GCS, ICU first day (points)10 ± 011 ± 18 ± 10.005MAP, ICU admission (mmHg)78 ± 283 ± 273 ± 20.002Serum Creatinine, ICU first day (mg/dl)2.4 ± 0.21.8 ± 0.23.0 ± 0.30.004Arterial HCO3 −, ICU first day21 ± 120 ± 121 ± 1NS(0.458)Serum Sodium, ICU first day (mg/dl)137 ± 1138 ± 1137 ± 1NS(0.282)Bilirubin, ICU first day (mg/dl)5.8 ± 0.89.6 ± 1.51.9 ± 0.4<0.001Albumin, ICU first day (g/l)2.4 ± 0.12.5 ± 0.12.3 ± 0.10.016Blood Sugar, ICU first day (mg/dl)160 ± 8159 ± 12161 ± 11NS(0.899)Hemoglobin, ICU first day (g/dl)9.7 ± 0.29.3 ± 0.310.0 ± 0.3NS(0.134)Platelets, ICU first day (×103/μL)127.3 ± 10.185.8 ± 8.1168.8 ± 16.9<0.001Leukocytes, ICU first day (×103/μL)13.9 ± 0.812.9 ± 1.314.9 ± 1.0NS(0.224)PaO2/FiO2, ICU first day(mmHg)256 ± 13270 ± 16243 ± 19NS(0.299)Shock(%)39 (35.5)9 (16.3)30 (54.5)<0.001Hospital mortality (%)71(64.5)34 (61.8)37 (67.3)NS(0.550) Score systems APACHE 
III, ICU first day(mean ± SE)86.5 ± 2.881.1 ± 4.291.9 ± 3.7NS(0.058)SOFA, ICU first day(mean ± SE)10.0 ± 0.310.0 ± 0.510.0 ± 0.3NS(0.927)RIFLE, ICU first day(mean ± SE)1.6 ± 0.11.4 ± 0.21.9 ± 0.20.041 Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. Reasons for ICU admission Abbreviation: ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment. APACHE III-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. SOFA-matched patient demographic data and clinical characteristics according to cirrhosis and non-cirrhosis Abbreviation: NS, not significant; ICU, intensive care unit; SE, standard error; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, arterial partial pressure of oxygen; FiO2, fraction of inspired oxygen; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure. 
In the APACHE III-matched group (Table 2), the critically ill patients with cirrhosis had a lower arterial HCO3− level (P = .002), a lower haemoglobin level (P = .024), a lower platelet count (P < .001), and a higher serum bilirubin level (P < .001) on Day 1 of ICU admission compared with the patients without cirrhosis. The patients without cirrhosis with the same score were older and experienced more shock episodes than the patients with cirrhosis did. The hospital mortality rate was significantly higher in the patients with cirrhosis than in the patients without cirrhosis (P = .026). Renal function and a PaO2/FiO2 ratio were similar between the 2 groups. Among APACHE III-matched patients, the patients with cirrhosis had significantly higher SOFA scores than those of the patients without cirrhosis (P < .001). In the SOFA-matched group (Table 3), the patients with cirrhosis had a lower platelet count (P < .001), a higher Glasgow coma scale (GCS), a more stable haemodynamic status, more favourable renal function, a higher bilirubin level, and a lower albumin level compared with the patients without cirrhosis. The patients without cirrhosis experienced more shock episodes than did the patients with cirrhosis. No significant difference in the hospital mortality rate was observed between the patients with and without cirrhosis. In both the APACHE III and SOFA-matched groups, the patients with cirrhosis had a shorter overall length of stay in an ICU than that of the patients without cirrhosis. This was attributed to the higher hospital mortality rate in the patients with cirrhosis. In both the APACHE III and SOFA-matched groups, the patients without cirrhosis had a significantly higher rate of shock than the patients with cirrhosis did. This was because of the main reason that ICU admission of the patients with cirrhosis was mainly due to hepatic failure; the reasons for ICU admission of the patients without cirrhosis were GI bleeding and sepsis. 
(Data not shown here) Mortality and severity of illness scoring systems: Prediction abilities of the APACHE III, SOFA, and RIFLE scoring systems were compared; Table 4 lists the calibration and discrimination of the models. In the APACHE III-matched group, the SOFA scoring system demonstrated the highest prediction ability (AUROC = 0.810 ± 0.056) among all 3 systems. All the 3 scoring systems predicted mortality more precisely in the patients with cirrhosis than in those without cirrhosis. In the SOFA-matched group, the APACHE III scoring system was the most accurate predictor among all 3 systems. In the patients with cirrhosis, the SOFA scoring system demonstrated the highest prediction ability. To determine the selected cut-off points for predicting in-hospital mortality, the sensitivity, specificity, and overall accuracy of prediction were determined (Table 5). In the APACHE III-matched group, the SOFA scoring system had the best Youden’s index among the patients with cirrhosis, whereas the APACHE III scoring system had the best Youden’s index among the total population. In the SOFA-matched group, the APACHE III scoring system had the best Youden’s index among both the patients with cirrhosis and the total population. 
The RIFLE scoring system demonstrated the highest specificity for prognostic prediction in the patients with cirrhosis of both the SOFA-matched and APACHE III-matched groups.

Table 4. Calibration and discrimination of the scoring methods in predicting hospital mortality

                              Calibration                    Discrimination
  Score / Population          χ²       df   p       AUROC ± SE      95% CI         p
APACHE III-matched group
  APACHE III
    Total population          3.633    8    0.889   0.745 ± 0.040   0.667–0.823    <0.001
    Cirrhosis                 12.304   7    0.091   0.783 ± 0.062   0.662–0.904    <0.001
    Non-cirrhosis             3.819    8    0.873   0.733 ± 0.055   0.626–0.840    <0.001
  SOFA
    Total population          4.505    8    0.809   0.735 ± 0.039   0.659–0.812    <0.001
    Cirrhosis                 11.243   8    0.188   0.810 ± 0.056   0.700–0.920    <0.001
    Non-cirrhosis             1.180    6    0.978   0.624 ± 0.060   0.506–0.741    NS (0.050)
  RIFLE
    Total population          3.160    3    0.368   0.621 ± 0.045   0.534–0.709    0.010
    Cirrhosis                 2.008    2    0.366   0.710 ± 0.061   0.590–0.830    0.004
    Non-cirrhosis             2.475    3    0.480   0.554 ± 0.062   0.431–0.676    NS (0.395)
SOFA-matched group
  APACHE III
    Total population          5.093    8    0.748   0.733 ± 0.051   0.633–0.834    <0.001
    Cirrhosis                 15.201   7    0.034   0.767 ± 0.068   0.633–0.900    0.001
    Non-cirrhosis             1.716    7    0.974   0.706 ± 0.077   0.556–0.856    0.014
  SOFA
    Total population          6.519    6    0.368   0.663 ± 0.053   0.559–0.767    0.005
    Cirrhosis                 6.427    6    0.377   0.742 ± 0.068   0.608–0.875    0.003
    Non-cirrhosis             5.698    5    0.337   0.543 ± 0.079   0.387–0.698    NS (0.609)
  RIFLE
    Total population          4.609    3    0.203   0.634 ± 0.055   0.527–0.742    0.020
    Cirrhosis                 1.216    2    0.544   0.656 ± 0.074   0.511–0.801    NS (0.053)
    Non-cirrhosis             4.531    3    0.210   0.594 ± 0.083   0.432–0.756    NS (0.262)

Abbreviations: df, degrees of freedom; AUROC, area under the receiver operating characteristic curve; SE, standard error; CI, confidence interval; APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Table 5. Subsequent hospital mortality predicted after ICU admission

  Score / Population          Cutoff point   Youden index   Sensitivity (%)   Specificity (%)   Overall correctness (%)
APACHE III-matched group
  APACHE III
    Total population          83             0.42           75                67                71
    Cirrhosis                 72*            0.55           78                76                77
    Non-cirrhosis             83             0.40           70                70                70
  SOFA
    Total population          10             0.40           52                88                70
    Cirrhosis                 10*            0.56           75                81                78
    Non-cirrhosis             9              0.20           36                84                60
  RIFLE
    Total population          Injury         0.22           39                83                61
    Cirrhosis                 Injury*        0.35           45                90                68
    Non-cirrhosis             Injury         0.10           32                78                55
SOFA-matched group
  APACHE III
    Total population          76             0.41           75                67                71
    Cirrhosis                 71*            0.50           74                76                75
    Non-cirrhosis             83             0.40           73                67                70
  SOFA
    Total population          10             0.30           45                85                65
    Cirrhosis                 10*            0.42           56                86                71
    Non-cirrhosis             10             0.18           35                83                59
  RIFLE
    Total population          Non-AKI        0.22           76                46                61
    Cirrhosis                 Injury*        0.26           35                90                63
    Non-cirrhosis             Non-AKI        0.20           86                33                60

*Value giving the best Youden index. Abbreviations: APACHE, Acute Physiology and Chronic Health Evaluation; SOFA, sequential organ failure assessment; RIFLE, risk of renal failure, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage renal failure.

Figure 1A and B show the cumulative survival rates in the patients with and without cirrhosis in the APACHE III-matched and SOFA-matched groups, respectively.
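The cutoffs in Table 5 are the points that maximise Youden's J (sensitivity + specificity − 1) along each score's ROC curve. A minimal numpy sketch of both calculations, on synthetic scores rather than study data (all variable names hypothetical):

```python
import numpy as np

def auroc(score, died):
    """Mann-Whitney estimate of the AUROC: the probability that a random
    non-survivor scores higher than a random survivor (ties count one half)."""
    pos, neg = score[died == 1], score[died == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def best_youden_cutoff(score, died):
    """Return the cutoff (score >= cutoff predicts death) that maximises
    Youden's J = sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for c in np.unique(score):
        pred = score >= c
        sens = (pred & (died == 1)).sum() / (died == 1).sum()
        spec = (~pred & (died == 0)).sum() / (died == 0).sum()
        if sens + spec - 1 > best_j:
            best_cut, best_j = c, sens + spec - 1
    return best_cut, best_j

# Illustrative synthetic severity scores and hospital mortality flags.
score = np.array([4, 6, 7, 8, 9, 10, 11, 12, 13, 15])
died = np.array([0, 0, 0, 0, 1, 1, 0, 1, 1, 1])
```

On this toy data, `auroc` gives the discrimination reported in Table 4's AUROC columns, and `best_youden_cutoff` returns the cutoff and its J, as in the starred rows of Table 5.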
The cumulative survival analysis showed that the patients with cirrhosis in the APACHE III-matched group had significantly higher mortality than the patients without cirrhosis, whereas no significant difference was detected between the patients with and without cirrhosis in the SOFA-matched group. In both the APACHE III-matched and SOFA-matched groups, the cumulative survival rates differed significantly when underlying AKI was considered (Figure 2A and B).

Figure 1. Cumulative survival rates for cirrhotic and non-cirrhotic patients in the APACHE III-matched (1A, 174 patients, p < 0.05) and SOFA-matched (1B, 110 patients, p: not significant) groups, respectively.

Figure 2. Cumulative survival rates for acute kidney injury (AKI) and non-AKI patients in the APACHE III-matched (2A, 174 patients, p < 0.05) and SOFA-matched (2B, 110 patients, p < 0.05) groups, respectively.

Discussion: To our knowledge, this is the first study to compare the usefulness of different scoring systems for outcome prediction in ICU patients with and without cirrhosis using a score-matched method. In the present study, the clinical characteristics and outcomes of critically ill patients with cirrhosis were compared with those of critically ill patients without cirrhosis matched for APACHE III or SOFA scores. The poor outcome of the patients with cirrhosis was consistent with the results of previous studies [12–14].
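The cumulative survival curves in Figures 1 and 2 are Kaplan-Meier (product-limit) estimates. A minimal sketch of that calculation on synthetic follow-up data, not study data:

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimate of the survival function S(t).
    time: follow-up (e.g. days); event: 1 = death observed, 0 = censored.
    Returns the event times and the survival probability just after each."""
    s = 1.0
    times, surv = [], []
    for t in np.unique(time):
        deaths = ((time == t) & (event == 1)).sum()   # deaths at time t
        at_risk = (time >= t).sum()                   # at risk just before t
        if deaths > 0:
            s *= 1 - deaths / at_risk
            times.append(int(t))
            surv.append(s)
    return times, surv

# Illustrative synthetic follow-up for a small subgroup.
time = np.array([5, 8, 8, 12, 20, 30])
event = np.array([1, 1, 0, 1, 0, 1])
times, surv = kaplan_meier(time, event)
```

The survival probability only drops at observed death times; censored subjects simply leave the at-risk set, which is why censoring-heavy groups keep wider, flatter curves.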
Recent studies have supported the efficacy of the SOFA scoring system for assessing the extent of organ dysfunction in various groups, including critically ill patients with cirrhosis [3–5]. The APACHE III score consists of an acute physiology score, age, and chronic health scores and is currently widely used for predicting clinical outcomes. Among the patients with cirrhosis, bilirubin levels were higher and platelet counts lower than among the patients without cirrhosis. The results of the present study suggest that, among patients with the same APACHE III score, the patients with cirrhosis had a higher risk of acidaemia, whereas the patients without cirrhosis were older and experienced more shock episodes. However, renal function and consciousness status did not differ significantly between the 2 groups. The requirement for mechanical ventilation is not included in the APACHE III score, and no difference in the PaO2/FiO2 ratio was observed. Although the SOFA score includes fewer items and does not assess age or comorbid conditions, this enhances its simplicity, and the score demonstrates high discriminatory power for predicting mortality in critically ill patients with cirrhosis. The patients with cirrhosis in the SOFA-matched group had higher GCS scores and lower serum creatinine concentrations on Day 1 of ICU admission, whereas these differences were nonsignificant in the APACHE III-matched group. Because patients with cirrhosis typically have higher serum bilirubin levels and lower platelet counts (reflecting their underlying liver disease), which contribute to a higher SOFA score, patients with cirrhosis with similar SOFA scores may have relatively preserved function in the other organ systems. This is consistent with our finding that the patients with cirrhosis had more stable neurologic status and renal function than the patients without cirrhosis in the SOFA-matched group.
Prognostic scoring models such as APACHE III assume that mortality is determined by physiological disturbances occurring early in the course of illness, whereas organ dysfunction scores such as SOFA quantify organ dysfunction at admission and at regular intervals throughout the ICU stay, allowing changes in organ function to be assessed. Although the SOFA score was originally developed to describe morbidity, it has also been used for mortality prediction, and the accuracy of such predictions may be improved with repeated measurements. As shown in Table 4, in the APACHE III-matched group, the SOFA score demonstrated higher discrimination in the patients with cirrhosis than in the patients without cirrhosis (AUROC = 0.810 ± 0.056 vs 0.624 ± 0.060). The SOFA score is simpler for clinicians to assess than the APACHE III score; it also allows sequential measurement, more accurately reflects the dynamic course of disease, and may therefore provide higher-quality information on mortality risk. The SOFA score is thus a superior and easier-to-implement model for predicting mortality in the patients with cirrhosis, with a cut-off value of 10 points providing the best overall correctness. In addition, the SOFA and APACHE III scores performed comparably in the patients without cirrhosis. When the patients with and without cirrhosis were matched by SOFA score, the difference in APACHE III scores between the 2 groups was nonsignificant. The RIFLE scoring system, however, discriminated better in the patients with cirrhosis than in the noncirrhotic controls (P = .041) in the SOFA-matched group. Patients with cirrhosis tend to have malnutrition, low muscle mass, and impaired creatinine synthesis.
Therefore, the RIFLE classification, which is based on serum creatinine and urine output, may underestimate AKI severity and overall illness severity in these patients. Despite the encouraging results of the present study, several potential limitations should be considered. First, this was a retrospective study performed at a single tertiary-care medical centre, which limits the generalisability of the findings. Second, the group without cirrhosis comprised mainly patients with sepsis, with a low proportion of patients with other diseases such as cardiovascular disease or acute respiratory distress syndrome; this specific case mix may further limit generalisability. Third, the patient population included a high proportion of patients with hepatitis B virus infection, so the findings may not apply to typical North American and European patients with hepatitis C virus infection or alcohol dependence. Finally, the sample size available for matching SOFA and APACHE III scores between the patients with and without cirrhosis was limited. We therefore cannot draw definitive conclusions regarding the relatively poor short-term prognosis of the patients admitted to the ICU with cirrhosis compared with those without cirrhosis. Conclusions: Our results provide additional evidence that SOFA scores differ significantly between patients with and without cirrhosis matched according to APACHE III scores. The score-matched analytical data showed that the predictive accuracy of SOFA is superior to that of APACHE III in evaluating critically ill patients with cirrhosis. We also demonstrated that the mean arterial pressure, GCS, and RIFLE classification play critical roles in determining prognosis in this subset of patients. Considering cost-effectiveness and ease of implementation, the SOFA scale is recommended for evaluating short-term prognosis in critically ill patients with cirrhosis.
Background: Cirrhotic patients admitted to an intensive care unit (ICU) have high mortality rates. The present study compared the characteristics and outcomes of critically ill patients admitted to the ICU with and without cirrhosis, matched by Acute Physiology and Chronic Health Evaluation III (APACHE III) and Sequential Organ Failure Assessment (SOFA) scores. Methods: A retrospective case-control study was performed at the medical ICU of a tertiary-care hospital between January 2006 and December 2009. Patients were admitted with life-threatening complications and were matched for APACHE III and SOFA scores. Of 336 patients enrolled in the study, 87 patients with cirrhosis were matched to 87 patients without cirrhosis according to APACHE III scores, and another 55 patients with cirrhosis were matched to 55 patients without cirrhosis according to SOFA scores. Demographic data, aetiology of ICU admission, and laboratory variables were also evaluated. Results: The overall hospital mortality rate in the patients with cirrhosis in the APACHE III-matched group was higher than that in their counterparts (73.6% vs 57.5%, P = .026), whereas the rate did not differ significantly in the SOFA-matched group (61.8% vs 67.3%). In the APACHE III-matched group, the SOFA scores of the patients with cirrhosis were significantly higher than those of the patients without cirrhosis (P < .001), whereas the difference in APACHE III scores was nonsignificant between the SOFA-matched patients with and without cirrhosis. Conclusions: Score-matched analytical data showed that SOFA scores significantly differentiated the patients admitted to the ICU with cirrhosis from those without cirrhosis in the APACHE III-matched group, whereas the difference in APACHE III scores between the patients with and without cirrhosis was nonsignificant in the SOFA-matched group.
Background: Accurate prognostic predictors are crucial for patients admitted to an intensive care unit (ICU). Prognostic scoring systems help clinical physicians predict survival, make decisions, and explain disease severity. Patients with cirrhosis admitted to an ICU frequently have disappointing outcomes despite intensive medical support, and these patients are particular targets for prognostic evaluation. Various systems for scoring severity and predicting prognosis have been developed and applied for decades. The Acute Physiology and Chronic Health Evaluation III (APACHE III) score [1], one of the most widely used scoring systems, is known for its accuracy in predicting mortality. However, the APACHE III scoring system was developed for a broad range of diseases and not exclusively for liver-related diseases. By contrast, the Sequential Organ Failure Assessment (SOFA) score [2], another widely used scoring system, is superior to the APACHE III scoring system for assessing specific organ dysfunction, including in cirrhosis [3, 4]. Our previous study demonstrated that the APACHE III and SOFA scores were both independently associated with hospital mortality and demonstrated high discriminatory power for predicting mortality in patients with cirrhosis [5]. However, few studies have performed detailed independent comparisons between the APACHE III and SOFA scores [6]. In the present case-control study, we matched APACHE III and SOFA scores and compared the clinical characteristics and outcomes of patients with cirrhosis with those of matched noncirrhotic controls.
Keywords: Acute Physiology and Chronic Health Evaluation III (APACHE III) | Sequential Organ Failure Assessment (SOFA) | Intensive care unit (ICU) | Cirrhosis | Outcome
MeSH terms: APACHE | Aged | Case-Control Studies | Critical Illness | Female | Hospital Mortality | Humans | Intensive Care Units | Liver Cirrhosis | Male | Middle Aged | Organ Dysfunction Scores | Prognosis | Retrospective Studies
Biological clustering supports both "Dutch" and "British" hypotheses of asthma and chronic obstructive pulmonary disease.
25129678
Asthma and chronic obstructive pulmonary disease (COPD) are heterogeneous diseases.
BACKGROUND
We compared the clinical and physiological characteristics and sputum mediators between 86 subjects with severe asthma and 75 with moderate-to-severe COPD. Biological subgroups were determined using factor and cluster analyses on 18 sputum cytokines. The subgroups were validated on independent severe asthma (n = 166) and COPD (n = 58) cohorts. Two techniques were used to assign the validation subjects to subgroups: linear discriminant analysis, or the best identified discriminator (single cytokine) in combination with subject disease status (asthma or COPD).
METHODS
Discriminant analysis distinguished severe asthma from COPD completely using a combination of clinical and biological variables. Factor and cluster analyses of the sputum cytokine profiles revealed 3 biological clusters: cluster 1: asthma predominant, eosinophilic, high TH2 cytokines; cluster 2: asthma and COPD overlap, neutrophilic; cluster 3: COPD predominant, mixed eosinophilic and neutrophilic. Validation subjects were classified into 3 subgroups using discriminant analysis, or disease status with a binary assessment of sputum IL-1β expression. Sputum cellular and cytokine profiles of the validation subgroups were similar to the subgroups from the test study.
RESULTS
Sputum cytokine profiling can determine distinct and overlapping groups of subjects with asthma and COPD, supporting both the British and Dutch hypotheses. These findings may contribute to improved patient classification to enable stratified medicine.
CONCLUSIONS
[ "Aged", "Asthma", "Cluster Analysis", "Cytokines", "Female", "Forced Expiratory Volume", "Humans", "Leukocyte Count", "Male", "Middle Aged", "Pulmonary Disease, Chronic Obstructive", "Sputum", "United Kingdom" ]
4282726
Methods
Subjects: Subjects with severe asthma or moderate-to-severe COPD were recruited from a single center at the Glenfield Hospital, Leicester, United Kingdom, into independent test and validation studies. Assignment to asthma or COPD was made by the subjects' physician, consistent with the definitions of asthma and COPD in the Global Initiative for Asthma [1] and Global Initiative for Chronic Obstructive Lung Disease [2] guidelines, respectively, for both the test and validation groups. All subjects were assessed at stable visits at least 8 weeks free from an exacerbation, defined as an increase in symptoms necessitating a course of oral corticosteroids and/or antibiotic therapy. The subjects with COPD had participated in an exacerbation study [20, 21], and some of the subjects with asthma had participated in an earlier study [22]. All subjects provided written informed consent, and the studies were approved by the local Leicestershire, Northamptonshire, and Rutland ethics committee.
Measurements: Demographic, clinical, and lung-function data were recorded, including pre- and postbronchodilator FEV1, forced vital capacity, and symptom scores using the visual analogue scale. Spontaneous or induced sputum was collected for sputum total and differential cell counts and bacteriology; cell-free sputum supernatant was used for mediator assessment as described previously [20]. Sputum was produced spontaneously in 93% of the subjects. Positive bacterial colonization was defined as greater than 10⁷ colony-forming units/mL sputum or a positive culture [20, 21]. Subjects with sputum eosinophil and neutrophil differential cell counts above 3% [23, 24] and 61% [25] were defined as eosinophilic or neutrophilic, respectively. The subjects were further stratified into 4 subgroups on the basis of their sputum cell counts: pure eosinophilic (eosinophil > 3% and neutrophil < 61%), pure neutrophilic (eosinophil < 3% and neutrophil > 61%), mixed granulocytic (eosinophil > 3% and neutrophil > 61%), and paucigranulocytic (eosinophil < 3% and neutrophil < 61%). Inflammatory mediators were measured in sputum supernatants using the Meso Scale Discovery Platform (MSD; Gaithersburg, Md). The mediators were selected to reflect cytokines, chemokines, and proinflammatory mediators implicated in airway disease. The performance of the MSD platform in terms of recovery of spiked exogenous recombinant proteins has been described previously [16]. Sputum inflammatory mediators below the detectable range were replaced with their corresponding lower limit of detection in subjects with both asthma and COPD. Twenty-one mediators were included in the test study, and 14 mediators were available in the validation study. Statistical analysis: See this article's Online Repository at www.jacionline.org for detailed statistical methods. All statistical analyses were performed using STATA/IC version 13.0 for Windows (StataCorp, College Station, Tex) and R version 2.15.1 (R Foundation for Statistical Computing, Vienna, Austria). Parametric data are presented as mean with SEM, and log-transformed data as geometric mean with 95% CI.
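The four-way stratification above is a direct application of the two cell-count cutoffs (eosinophils 3%, neutrophils 61%). A minimal sketch, with a hypothetical function name:

```python
def sputum_phenotype(eos_pct, neut_pct, eos_cut=3.0, neut_cut=61.0):
    """Assign the four inflammatory subgroups used in the text from sputum
    differential cell counts (%). Values exactly at a cutoff are treated as
    below it here; the text defines the subgroups with strict inequalities."""
    high_eos = eos_pct > eos_cut
    high_neut = neut_pct > neut_cut
    if high_eos and high_neut:
        return "mixed granulocytic"
    if high_eos:
        return "pure eosinophilic"
    if high_neut:
        return "pure neutrophilic"
    return "paucigranulocytic"
```

For example, a subject with 5% eosinophils and 40% neutrophils would fall in the pure eosinophilic subgroup, and one with 1% and 70% in the pure neutrophilic subgroup.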
The χ² test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data are presented as median with first and third quartiles, and the Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated asthma from COPD, and bacterially colonized from noncolonized subjects, were identified using receiver operating characteristic (ROC) curves. Factor analysis was performed on the sputum inflammatory mediators, and independent factor scores were derived and used as input variables in k-means cluster analysis to identify the subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting the within-cluster sum of squares against increasing numbers of clusters. Linear discriminant analysis was performed, and a classification model developed from the test study was applied to the validation study. In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function, to identify potentially clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters, together with subject disease status (asthma or COPD), were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant.
The χ2 test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data were presented as median with first and third quartiles, and Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated across asthma versus COPD and bacterially colonized versus noncolonized were identified using receiver operating characteristic (ROC) curves. Factor analysis was performed on sputum inflammatory mediators, and independent factor scores were derived and used as input variables in the k-means cluster analysis to identify subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting within-cluster sum of the squares against a series of sequential numbers of clusters. Linear discriminant analysis was performed and a classification model developed from the test study for the validation study. In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function to identify possibly clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters together with subject disease status (asthma or COPD) were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant.
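The factor-then-cluster pipeline described above (factor scores fed into k-means, with the cluster count read off a scree plot of within-cluster sum of squares) can be sketched as follows. This is an illustrative numpy sketch on synthetic mediator-like data, not the authors' code: PCA scores via SVD stand in for the paper's factor scores, and all distributions, dimensions, and sample sizes are invented.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's k-means: returns labels and within-cluster sum of squares."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    wcss = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    return labels, wcss

rng = np.random.default_rng(0)
# Synthetic stand-in for log-transformed sputum mediator levels:
# 3 latent subgroups x 40 subjects, 6 hypothetical mediators.
X = np.vstack([
    rng.normal(0, 1, (40, 6)) + np.array([3, 0, 0, 0, 2, 0]),
    rng.normal(0, 1, (40, 6)) + np.array([0, 3, 0, 2, 0, 0]),
    rng.normal(0, 1, (40, 6)) + np.array([0, 0, 3, 0, 0, 2]),
])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score before score extraction

# PCA scores via SVD, used here as a simple stand-in for factor scores.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :4] * S[:4]

# Scree-style table of within-cluster sum of squares against candidate k;
# the elbow suggests the number of clusters, as described in the methods.
wcss_by_k = {k: kmeans(scores, k, seed=1)[1] for k in range(1, 7)}
```

The elbow would be chosen by eye from a plot of `wcss_by_k`; with three well-separated synthetic subgroups, the drop in within-cluster sum of squares flattens after k = 3.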
Results
The clinical and sputum characteristics of the asthma (n = 86) and COPD (n = 75) test groups are presented in Table I. Subjects with asthma were younger, had a higher body mass index, better lung function, fewer symptoms, and a lower smoking pack-year history than did subjects with COPD. The differential neutrophil and eosinophil counts were not statistically different between the groups, but the total cell count was higher in those with COPD. The sputum inflammatory mediator profiles were distinct with increased TH2 (IL-5, IL-13, and CCL26) and TH1 mediators (CXCL10 and 11) in severe asthma compared with COPD and increased IL-6, CCL2, CCL3, and CCL4 in COPD compared with severe asthma. Inflammatory mediators that best discriminated asthma and COPD are presented as ROC curves (see Fig E1 in this article's Online Repository at www.jacionline.org). Sputum CCL5 and CXCL11 levels were substantially higher in subjects with asthma than in subjects with COPD, with area under the ROC curves (ROC AUCs) of 0.74 (95% CI, 0.67-0.82; P < .0001) and 0.72 (95% CI, 0.64-0.80; P < .0001), respectively. Sputum IL-6 and CCL2 levels were significantly higher in subjects with COPD than in subjects with asthma, with ROC AUCs of 0.86 (95% CI, 0.81-0.92; P < .0001) and 0.69 (0.61-0.77; P < .0001), respectively. Discriminant analysis using the combined clinical, physiological, and biological (inflammatory mediator) variables completely distinguished the asthma and COPD groups (Fig 1). The mediators that best discriminated between bacterially colonized and noncolonized subjects were sputum IL-1β and TNF-α, with ROC AUCs of 0.76 (95% CI, 0.68-0.85) and 0.75 (95% CI, 0.66-0.84), respectively (see Fig E2 in this article's Online Repository at www.jacionline.org). Factor analysis revealed 4 factors with IL-1β, IL-5, IL-6, and CXCL11 as the highest loading components, respectively, across the 4 factors (see Table E1 in this article's Online Repository at www.jacionline.org). 
Subsequent cluster analysis identified 3 clusters (Table II). Individual clinical and biological comparisons of subjects with asthma and COPD in clusters 1, 2, and 3 are presented in Table E2 in this article's Online Repository at www.jacionline.org. Linear discriminant analysis was performed to verify the determined clusters and to identify the contribution of inflammatory mediators in discriminating the clusters. Subsequently, 2 discriminant scores for individual subjects were calculated and used to represent the clusters in a 2-dimensional graph (Fig 2). Cluster 1 consisted of mainly subjects with asthma (95% of cluster 1) with elevated sputum TH2 mediators and was eosinophil predominant, with 67% of the subjects having a sputum eosinophilia and 48% a sputum neutrophilia. Further stratification of cluster 1 by sputum cell counts showed that the subjects were 40% pure eosinophilic, 21% pure neutrophilic, 27% mixed granulocytic, and 12% paucigranulocytic. Cluster 2 consisted of an overlap of subjects with asthma and COPD with sputum neutrophil predominance (75% of subjects with asthma and 95% of subjects with COPD). In contrast, only 11% and 5% of the subjects with asthma or COPD, respectively, had a sputum eosinophilia. In addition, there were elevated sputum levels of IL-1β, IL-8, IL-10 and TNF-α and bacterial colonization. The increased rate of bacterial colonization found in this cluster was driven predominately by subjects with COPD (Table E2). Further stratification of cluster 2 showed that the subjects were 0% pure eosinophilic, 74% pure neutrophilic, 9% mixed granulocytic, and 17% paucigranulocytic. Cluster 3 consisted predominantly of subjects with COPD (95% of cluster 3). In contrast to subjects with COPD in cluster 2, neutrophilic inflammation was present in only 49% of the subjects whereas a sputum eosinophilia was observed in 46% of the subjects. 
IL-6 and CCL2 levels were increased compared with those in clusters 1 and 2 but were similar to those in subjects with COPD in cluster 2. Only CCL13 and CCL17 were elevated in subjects with COPD in cluster 3 compared with subjects with COPD in cluster 2 (Table E2). Further stratification of cluster 3 showed that the subjects were 21% pure eosinophilic, 28% pure neutrophilic, 23% mixed granulocytic, and 28% paucigranulocytic. The proportion of subjects with asthma with airflow obstruction in the 3 clusters was not significantly different. Of the 2 subjects with asthma in cluster 3, 1 had persistent airflow obstruction compared with 17 of 55 in cluster 1 and 10 of 28 in cluster 2. The best discriminator between subjects in clusters 1 or 3 compared with the overlap group cluster 2 was sputum IL-1β at a cutoff point of 130 pg/mL (Fig 3). The second best discriminator was TNF-α with a cutoff point of 5 pg/mL (see Fig E3 in this article's Online Repository at www.jacionline.org). Validation These cluster analysis findings were then validated in independent asthma and COPD cohorts. Subjects were assigned into subgroups using 2 techniques. The first was a classification model developed from the test cohort using linear discriminant analysis and betas for each cluster and cytokines were extracted (see Table E3 in this article's Online Repository at www.jacionline.org). Individual subject discriminant score in each subgroup was calculated and the subject was assigned to the subgroup in which he or she had the highest score. The second technique used the IL-1β cutoff point at 130 pg/mL, which was identified as the best classifier to distinguish overlap cluster 2 from clusters 1 or 3 in the test study and was used alongside subject disease status (asthma or COPD). The sputum cellular and inflammatory mediator profiles of the 3 validation study subgroups, obtained using both techniques, were very similar to the test subgroups (Tables III and IV; Fig 4). 
In addition, individual clinical and biological comparisons of subjects with asthma and COPD in validation subgroups, presented in this article's Online Repository in Tables E4 and E5 at www.jacionline.org, revealed a pattern similar to that of test subgroups (Table E2).
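As a toy illustration of the two discrimination tools used in these results (ROC AUC for single mediators, and the fixed IL-1β cutoff for assigning the overlap subgroup), the sketch below works on synthetic IL-1β-like values. Only the 130 pg/mL threshold comes from the text; the distributions, sample sizes, and helper names are invented for illustration.

```python
import numpy as np

def roc_auc(pos, neg):
    """ROC AUC via its Mann-Whitney interpretation: P(pos > neg) + 0.5 * P(tie)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(7)
# Hypothetical sputum IL-1beta levels (pg/mL): the overlap cluster is shifted
# upward, mimicking its elevated IL-1beta; values here are synthetic.
il1b_overlap = rng.lognormal(mean=5.5, sigma=0.8, size=50)
il1b_other = rng.lognormal(mean=3.8, sigma=0.8, size=80)
auc = roc_auc(il1b_overlap, il1b_other)

def assign_subgroup(il1b_pg_ml):
    # 130 pg/mL cutoff from the text; values above it flag the overlap
    # cluster, consistent with its elevated IL-1beta. The paper applied this
    # alongside disease status (asthma or COPD), which is omitted here.
    return "cluster 2 (overlap)" if il1b_pg_ml >= 130.0 else "cluster 1 or 3"
```

The rank-based AUC computed this way matches the area under the empirical ROC curve, which is why a single well-separated mediator can serve as a practical classifier at one cutoff.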
null
null
[ "Subjects", "Measurements", "Statistical analysis", "Validation" ]
[ "Subjects with severe asthma or moderate-to-severe COPD were recruited from a single center at the Glenfield Hospital, Leicester, United Kingdom, into independent test and validation studies. Assignment to asthma or COPD was made by the subjects' physician consistent with definitions of asthma and COPD according to the Global Initiative for Asthma1 or the Global Initiative for Chronic Obstructive Lung Disease2 guidelines, respectively, for both the test and validation groups. All subjects were assessed at stable visits at least 8 weeks free from an exacerbation, defined as an increase in symptoms necessitating a course of oral corticosteroids and/or antibiotic therapy. The subjects with COPD had participated in an exacerbation study,20,21 and some of the subjects with asthma had participated in an earlier study.22 All subjects provided written informed consent, and the studies were approved by the local Leicestershire, Northamptonshire, and Rutland ethics committee.", "Demographic, clinical, and lung-function data were recorded including pre- and postbronchodilator FEV1, forced vital capacity, and symptom scores using the visual analogue scale. Spontaneous or induced sputum was collected for sputum total and differential cell counts and bacteriology; cell-free sputum supernatant was used for mediator assessment as described previously.20 Sputum was produced spontaneously in 93% of the subjects. Positive bacterial colonization was defined as colony-forming units greater than 10^7/mL sputum or positive culture.20,21 Subjects with sputum eosinophil and neutrophil differential cell counts above 3%23,24 and 61%25 were defined as eosinophilic or neutrophilic, respectively. 
Further stratification of the subjects into 4 subgroups on the basis of their sputum cell counts was also done: pure eosinophilic (eosinophil > 3% and neutrophil < 61%), pure neutrophilic (eosinophil < 3% and neutrophil > 61%), mixed granulocytic (eosinophil > 3% and neutrophil > 61%), and paucigranulocytic (eosinophil < 3% and neutrophil < 61%). Inflammatory mediators were measured in sputum supernatants using the Meso Scale Discovery Platform (MSD; Gaithersburg, Md). The mediators measured were selected to reflect cytokines, chemokines, and proinflammatory mediators implicated in airway disease. The performance of the MSD platform in terms of recovery of spiked exogenous recombinant proteins has been described previously.16 Sputum inflammatory mediators that were below the detectable range were replaced with their corresponding lower limit of detection in subjects with both asthma and COPD. Twenty-one mediators were included in the test study, and 14 mediators were available in the validation study.", "See this article's Online Repository at www.jacionline.org for detailed statistical methods. All statistical analyses were performed using STATA/IC version 13.0 for Windows (StataCorp, College Station, Tex) and R version 2.15.1 (R Foundation for statistical computing, Vienna, Austria). Parametric data were presented as mean with SEM, and log transformed data were presented as geometric mean with 95% CI. The χ2 test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data were presented as median with first and third quartiles, and Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated across asthma versus COPD and bacterially colonized versus noncolonized were identified using receiver operating characteristic (ROC) curves. 
Factor analysis was performed on sputum inflammatory mediators, and independent factor scores were derived and used as input variables in the k-means cluster analysis to identify subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting within-cluster sum of the squares against a series of sequential numbers of clusters. Linear discriminant analysis was performed and a classification model developed from the test study for the validation study. In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function to identify possibly clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters together with subject disease status (asthma or COPD) were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant.", "These cluster analysis findings were then validated in independent asthma and COPD cohorts. Subjects were assigned into subgroups using 2 techniques. The first was a classification model developed from the test cohort using linear discriminant analysis and betas for each cluster and cytokines were extracted (see Table E3 in this article's Online Repository at www.jacionline.org). Individual subject discriminant score in each subgroup was calculated and the subject was assigned to the subgroup in which he or she had the highest score. The second technique used the IL-1β cutoff point at 130 pg/mL, which was identified as the best classifier to distinguish overlap cluster 2 from clusters 1 or 3 in the test study and was used alongside subject disease status (asthma or COPD). The sputum cellular and inflammatory mediator profiles of the 3 validation study subgroups, obtained using both techniques, were very similar to the test subgroups (Tables III and IV; Fig 4). 
In addition, individual clinical and biological comparisons of subjects with asthma and COPD in validation subgroups, presented in this article's Online Repository in Tables E4 and E5 at www.jacionline.org, revealed a pattern similar to that of test subgroups (Table E2)." ]
[ null, null, null, null ]
[ "Methods", "Subjects", "Measurements", "Statistical analysis", "Results", "Validation", "Discussion" ]
[ " Subjects Subjects with severe asthma or moderate-to-severe COPD were recruited from a single center at the Glenfield Hospital, Leicester, United Kingdom, into independent test and validation studies. Assignment to asthma or COPD was made by the subjects' physician consistent with definitions of asthma and COPD according to the Global Initiative for Asthma1 or the Global Initiative for Chronic Obstructive Lung Disease2 guidelines, respectively, for both the test and validation groups. All subjects were assessed at stable visits at least 8 weeks free from an exacerbation, defined as an increase in symptoms necessitating a course of oral corticosteroids and/or antibiotic therapy. The subjects with COPD had participated in an exacerbation study,20,21 and some of the subjects with asthma had participated in an earlier study.22 All subjects provided written informed consent, and the studies were approved by the local Leicestershire, Northamptonshire, and Rutland ethics committee.\n Measurements Demographic, clinical, and lung-function data were recorded including pre- and postbronchodilator FEV1, forced vital capacity, and symptom scores using the visual analogue scale. Spontaneous or induced sputum was collected for sputum total and differential cell counts and bacteriology; cell-free sputum supernatant was used for mediator assessment as described previously.20 Sputum was produced spontaneously in 93% of the subjects. Positive bacterial colonization was defined as colony-forming units greater than 10^7/mL sputum or positive culture.20,21 Subjects with sputum eosinophil and neutrophil differential cell counts above 3%23,24 and 61%25 were defined as eosinophilic or neutrophilic, respectively. Further stratification of the subjects into 4 subgroups on the basis of their sputum cell counts was also done: pure eosinophilic (eosinophil > 3% and neutrophil < 61%), pure neutrophilic (eosinophil < 3% and neutrophil > 61%), mixed granulocytic (eosinophil > 3% and neutrophil > 61%), and paucigranulocytic (eosinophil < 3% and neutrophil < 61%). Inflammatory mediators were measured in sputum supernatants using the Meso Scale Discovery Platform (MSD; Gaithersburg, Md). The mediators measured were selected to reflect cytokines, chemokines, and proinflammatory mediators implicated in airway disease. The performance of the MSD platform in terms of recovery of spiked exogenous recombinant proteins has been described previously.16 Sputum inflammatory mediators that were below the detectable range were replaced with their corresponding lower limit of detection in subjects with both asthma and COPD. Twenty-one mediators were included in the test study, and 14 mediators were available in the validation study.\n Statistical analysis See this article's Online Repository at www.jacionline.org for detailed statistical methods. 
All statistical analyses were performed using STATA/IC version 13.0 for Windows (StataCorp, College Station, Tex) and R version 2.15.1 (R Foundation for statistical computing, Vienna, Austria). Parametric data were presented as mean with SEM, and log transformed data were presented as geometric mean with 95% CI. The χ2 test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data were presented as median with first and third quartiles, and Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated across asthma versus COPD and bacterially colonized versus noncolonized were identified using receiver operating characteristic (ROC) curves. Factor analysis was performed on sputum inflammatory mediators, and independent factor scores were derived and used as input variables in the k-means cluster analysis to identify subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting within-cluster sum of the squares against a series of sequential numbers of clusters. Linear discriminant analysis was performed and a classification model developed from the test study for the validation study. In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function to identify possibly clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters together with subject disease status (asthma or COPD) were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant.", "Subjects with severe asthma or moderate-to-severe COPD were recruited from a single center at the Glenfield Hospital, Leicester, United Kingdom, into independent test and validation studies. 
Assignment to asthma or COPD was made by the subjects' physician consistent with definitions of asthma and COPD according to the Global Initiative for Asthma1 or the Global Initiative for Chronic Obstructive Lung Disease2 guidelines, respectively, for both the test and validation groups. All subjects were assessed at stable visits at least 8 weeks free from an exacerbation, defined as an increase in symptoms necessitating a course of oral corticosteroids and/or antibiotic therapy. The subjects with COPD had participated in an exacerbation study,20,21 and some of the subjects with asthma had participated in an earlier study.22 All subjects provided written informed consent, and the studies were approved by the local Leicestershire, Northamptonshire, and Rutland ethics committee.", "Demographic, clinical, and lung-function data were recorded including pre- and postbronchodilator FEV1, forced vital capacity, and symptom scores using the visual analogue scale. Spontaneous or induced sputum was collected for sputum total and differential cell counts and bacteriology; cell-free sputum supernatant was used for mediator assessment as described previously.20 Sputum was produced spontaneously in 93% of the subjects. Positive bacterial colonization was defined as colony-forming units greater than 10^7/mL sputum or positive culture.20,21 Subjects with sputum eosinophil and neutrophil differential cell counts above 3%23,24 and 61%25 were defined as eosinophilic or neutrophilic, respectively. Further stratification of the subjects into 4 subgroups on the basis of their sputum cell counts was also done: pure eosinophilic (eosinophil > 3% and neutrophil < 61%), pure neutrophilic (eosinophil < 3% and neutrophil > 61%), mixed granulocytic (eosinophil > 3% and neutrophil > 61%), and paucigranulocytic (eosinophil < 3% and neutrophil < 61%). Inflammatory mediators were measured in sputum supernatants using the Meso Scale Discovery Platform (MSD; Gaithersburg, Md). 
The mediators measured were selected to reflect cytokines, chemokines, and proinflammatory mediators implicated in airway disease. The performance of the MSD platform in terms of recovery of spiked exogenous recombinant proteins has been described previously.16 Sputum inflammatory mediators that were below the detectable range were replaced with their corresponding lower limit of detection in subjects with both asthma and COPD. Twenty-one mediators were included in the test study, and 14 mediators were available in the validation study.", "See this article's Online Repository at www.jacionline.org for detailed statistical methods. All statistical analyses were performed using STATA/IC version 13.0 for Windows (StataCorp, College Station, Tex) and R version 2.15.1 (R Foundation for statistical computing, Vienna, Austria). Parametric data were presented as mean with SEM, and log transformed data were presented as geometric mean with 95% CI. The χ2 test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data were presented as median with first and third quartiles, and Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated across asthma versus COPD and bacterially colonized versus noncolonized were identified using receiver operating characteristic (ROC) curves. Factor analysis was performed on sputum inflammatory mediators, and independent factor scores were derived and used as input variables in the k-means cluster analysis to identify subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting within-cluster sum of the squares against a series of sequential numbers of clusters. Linear discriminant analysis was performed and a classification model developed from the test study for the validation study. 
In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function to identify possibly clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters together with subject disease status (asthma or COPD) were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant.", "The clinical and sputum characteristics of the asthma (n = 86) and COPD (n = 75) test groups are presented in Table I. Subjects with asthma were younger, had a higher body mass index, better lung function, fewer symptoms, and a lower smoking pack-year history than did subjects with COPD. The differential neutrophil and eosinophil counts were not statistically different between the groups, but the total cell count was higher in those with COPD. The sputum inflammatory mediator profiles were distinct with increased TH2 (IL-5, IL-13, and CCL26) and TH1 mediators (CXCL10 and 11) in severe asthma compared with COPD and increased IL-6, CCL2, CCL3, and CCL4 in COPD compared with severe asthma. Inflammatory mediators that best discriminated asthma and COPD are presented as ROC curves (see Fig E1 in this article's Online Repository at www.jacionline.org). Sputum CCL5 and CXCL11 levels were substantially higher in subjects with asthma than in subjects with COPD, with area under the ROC curves (ROC AUCs) of 0.74 (95% CI, 0.67-0.82; P < .0001) and 0.72 (95% CI, 0.64-0.80; P < .0001), respectively. Sputum IL-6 and CCL2 levels were significantly higher in subjects with COPD than in subjects with asthma, with ROC AUCs of 0.86 (95% CI, 0.81-0.92; P < .0001) and 0.69 (0.61-0.77; P < .0001), respectively. 
Discriminant analysis using the combined clinical, physiological, and biological (inflammatory mediator) variables completely distinguished the asthma and COPD groups (Fig 1).\nThe mediators that best discriminated between bacterially colonized and noncolonized subjects were sputum IL-1β and TNF-α, with ROC AUCs of 0.76 (95% CI, 0.68-0.85) and 0.75 (95% CI, 0.66-0.84), respectively (see Fig E2 in this article's Online Repository at www.jacionline.org). Factor analysis revealed 4 factors with IL-1β, IL-5, IL-6, and CXCL11 as the highest loading components, respectively, across the 4 factors (see Table E1 in this article's Online Repository at www.jacionline.org). Subsequent cluster analysis identified 3 clusters (Table II). Individual clinical and biological comparisons of subjects with asthma and COPD in clusters 1, 2, and 3 are presented in Table E2 in this article's Online Repository at www.jacionline.org. Linear discriminant analysis was performed to verify the determined clusters and to identify the contribution of inflammatory mediators in discriminating the clusters. Subsequently, 2 discriminant scores for individual subjects were calculated and used to represent the clusters in a 2-dimensional graph (Fig 2).\nCluster 1 consisted of mainly subjects with asthma (95% of cluster 1) with elevated sputum TH2 mediators and was eosinophil predominant, with 67% of the subjects having a sputum eosinophilia and 48% a sputum neutrophilia. Further stratification of cluster 1 by sputum cell counts showed that the subjects were 40% pure eosinophilic, 21% pure neutrophilic, 27% mixed granulocytic, and 12% paucigranulocytic.\nCluster 2 consisted of an overlap of subjects with asthma and COPD with sputum neutrophil predominance (75% of subjects with asthma and 95% of subjects with COPD). In contrast, only 11% and 5% of the subjects with asthma or COPD, respectively, had a sputum eosinophilia. 
In addition, there were elevated sputum levels of IL-1β, IL-8, IL-10 and TNF-α and bacterial colonization. The increased rate of bacterial colonization found in this cluster was driven predominately by subjects with COPD (Table E2). Further stratification of cluster 2 showed that the subjects were 0% pure eosinophilic, 74% pure neutrophilic, 9% mixed granulocytic, and 17% paucigranulocytic.\nCluster 3 consisted predominantly of subjects with COPD (95% of cluster 3). In contrast to subjects with COPD in cluster 2, neutrophilic inflammation was present in only 49% of the subjects whereas a sputum eosinophilia was observed in 46% of the subjects. IL-6 and CCL2 levels were increased compared with those in clusters 1 and 2 but were similar to those in subjects with COPD in cluster 2. Only CCL13 and CCL17 were elevated in subjects with COPD in cluster 3 compared with subjects with COPD in cluster 2 (Table E2). Further stratification of cluster 3 showed that the subjects were 21% pure eosinophilic, 28% pure neutrophilic, 23% mixed granulocytic, and 28% paucigranulocytic. The proportion of subjects with asthma with airflow obstruction in the 3 clusters was not significantly different. Of the 2 subjects with asthma in cluster 3, 1 had persistent airflow obstruction compared with 17 of 55 in cluster 1 and 10 of 28 in cluster 2.\nThe best discriminator between subjects in clusters 1 or 3 compared with the overlap group cluster 2 was sputum IL-1β at a cutoff point of 130 pg/mL (Fig 3). The second best discriminator was TNF-α with a cutoff point of 5 pg/mL (see Fig E3 in this article's Online Repository at www.jacionline.org).\n Validation These cluster analysis findings were then validated in independent asthma and COPD cohorts. Subjects were assigned into subgroups using 2 techniques. 
The first was a classification model developed from the test cohort using linear discriminant analysis, with betas extracted for each cluster and cytokine (see Table E3 in this article's Online Repository at www.jacionline.org). Each subject's discriminant score for each subgroup was calculated, and the subject was assigned to the subgroup in which he or she had the highest score. The second technique used the IL-1β cutoff point at 130 pg/mL, which was identified as the best classifier to distinguish overlap cluster 2 from clusters 1 or 3 in the test study and was used alongside subject disease status (asthma or COPD). The sputum cellular and inflammatory mediator profiles of the 3 validation study subgroups, obtained using both techniques, were very similar to the test subgroups (Tables III and IV; Fig 4). In addition, individual clinical and biological comparisons of subjects with asthma and COPD in validation subgroups, presented in Tables E4 and E5 in this article's Online Repository at www.jacionline.org, revealed a pattern similar to that of the test subgroups (Table E2)."
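The two subgroup-assignment techniques can be sketched as follows. The discriminant coefficients and subject values here are hypothetical stand-ins; only the 130 pg/mL IL-1β cutoff and the subgroup numbering come from the text:

```python
# Illustrative sketch (not the extracted model): the two assignment techniques
# used in the validation. All betas and subject values are hypothetical.

# Technique 1: linear discriminant scores; the subject joins the subgroup with
# the highest score (score_g = intercept_g + sum_i beta_gi * x_i).
def assign_by_discriminant(x, models):
    scores = {g: b0 + sum(b * v for b, v in zip(betas, x))
              for g, (b0, betas) in models.items()}
    return max(scores, key=scores.get)

# Technique 2: clinical diagnosis plus the sputum IL-1beta cutoff (130 pg/mL,
# the value reported from the test-cohort analysis).
def assign_by_cutoff(diagnosis, il1b_pg_ml, cutoff=130.0):
    if il1b_pg_ml >= cutoff:
        return 2                        # asthma-COPD overlap cluster
    return 1 if diagnosis == "asthma" else 3

# Hypothetical betas over two standardized log-mediator features.
models = {1: (-1.0, (0.2, 1.5)),        # asthma-predominant
          2: (-2.0, (1.8, 0.1)),        # overlap
          3: (-0.5, (0.6, -0.4))}       # COPD-predominant

print(assign_by_discriminant((0.8, 2.0), models))  # -> 1
print(assign_by_cutoff("COPD", 210.0))             # -> 2
print(assign_by_cutoff("COPD", 60.0))              # -> 3
```

The second technique trades some resolution (e.g., small TH2-high COPD subgroups) for a rule simple enough to apply at the bedside: one diagnosis label and one cytokine measurement.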
, "Here we report that although a combination of clinical variables distinguished asthma from COPD, further analyses of the sputum inflammatory mediators revealed that patients with asthma and COPD were best described by 3 biological clusters incorporating clinical, physiological, and inflammatory mediator characteristics. Our findings have further underscored the complex heterogeneity of asthma and COPD and provided support for the “British” hypothesis of airway disease pathogenesis as we identified 2 clusters that were predominately either asthma or COPD with distinct cytokine profiles, while also supporting the “Dutch” hypothesis by identifying a third cluster of overlapping subjects from both disease groups with similar cytokine profiles. Cluster 1 was asthma predominant with evidence of eosinophilic inflammation and increased TH2 inflammatory mediators. Cluster 2 contained an asthma and COPD overlap group, with predominately neutrophilic airway inflammation and elevated levels of IL-1β and TNF-α in addition to being assigned the highest proportion of subjects with bacterial colonization. Cluster 3 was a COPD-predominant group with mixed granulocytic airway inflammation and high sputum IL-6 and CCL13 levels. Furthermore, the biological clusters derived from the test group could be validated in an independent group yielding similar inflammatory mediator profiles to the test group.
Whether these biological clusters can be used to stratify subjects for more targeted approaches to novel and existing therapies needs to be further studied.\nThe clusters we have identified have biological plausibility and they confirm and extend our current understanding of the immunopathobiology of asthma and COPD, moving our understanding beyond previous comparisons of asthma versus COPD16 or clustering approaches of cytokine profiles in asthma or COPD alone.20,26 In addition, the clusters might represent groups with possible stratified responses to specific anti-inflammatory treatment. Cluster 1 is consistent with the TH2-predominant eosinophilic asthma paradigm. Indeed, this group was predominately asthmatic but importantly also included about 5% of subjects with COPD. This group seems the most likely to respond to anti-TH2 cytokine therapy such as anti–IL-5 and anti–IL-13.18,19,27,28 Eosinophilic COPD is well-described, and this group had a greater response to oral and inhaled corticosteroids than did those with noneosinophilic COPD.29,30 Whether subjects with COPD in this cluster would respond to anti-TH2 cytokine therapy is currently under study (www.clinicaltrials.gov; NCT01227278). Cluster 2 included an overlap of subjects with asthma and COPD. This group was predominately neutrophilic, consistent with previous observations,14 and with increased bacterial colonization. Recent evidence supports the role for macrolide antibiotics in COPD31 and in noneosinophilic severe asthma.32 Antineutrophilic strategies such as anti-CXCR2 are currently under study.33 Further studies are required to assess whether this cluster represents patients most likely to respond to these therapies. In cluster 2, increased bacterial colonization was evident, particularly in those with COPD, perhaps suggesting that in these subjects the neutrophilic inflammation is a consequence of bacterial colonization rather than the primary abnormality.
Thus, whether ameliorating neutrophilic inflammation in this group is beneficial or harmful is unclear. Indeed, lessons from anti–TNF-α therapy suggest that targeting proinflammatory cytokines can increase the risk of infection.34,35 In contrast, in those with neutrophilic inflammation without evidence of bacterial colonization, particularly in those with asthma, the neutrophilic inflammation might be critical in the development of the disease. Thus, identification of distinct groups that benefit or are harmed by antineutrophilic approaches would enable better stratification of such therapies. Cluster 3 included mainly subjects with COPD, in whom bacterial colonization was observed less frequently despite consistently elevated proinflammatory cytokines. Perhaps this group, in contrast to cluster 2, represents subjects in whom the proinflammatory environment plays a causal role in disease expression rather than being a consequence of infection. This might suggest that this group would be more amenable to anticytokine therapies such as anti–IL-6. In addition, eosinophilic inflammation was a feature in some subjects in cluster 3 in the absence of an elevated TH2 profile. One of the few cytokines increased in cluster 3 was CCL13, which is a CCR3 agonist and promotes eosinophil migration. Small airway macrophages are an important source of CCL13 in the airway and might play a role in the eosinophilic inflammation in this group.36 Taken together, these intriguing and novel observations immediately open up opportunities for further translational studies to determine the underlying mechanisms of these clusters and their treatment-specific anti-inflammatory therapies.\nIn addition to clear differences in the cytokine profiles between groups, there were several differences in clinical parameters. These were largely dependent on whether the clusters were asthma or COPD predominant or mixed.
For example, lung function, age, greater smoking history, and higher exacerbation frequency were related to the number of subjects with COPD in the cluster. However, the symptom of cough was more common in cluster 2. Indeed, subjects with asthma and COPD in cluster 2 had a higher visual analog scale score for cough than did either the subjects with asthma in cluster 1 or the subjects with COPD in cluster 3, respectively. This suggests that this difference is independent of disease status and might represent a real association with either the inflammatory profile or the increased bacterial colonization in cluster 2. This cluster also had the highest sputum total inflammatory cell count, suggesting that this represents a “chronic bronchitis” group. As suggested above, whether this group might warrant antimicrobial, anti-inflammatory, or mucolytic therapy is an interesting possibility.\nOne of the strengths of our observations is that we were able to support the identification of the 3 biological clusters in an independent validation group. The similarity between the cytokine (inflammatory mediator) profiles in test and validation groups supports the view that each cluster is a consistent phenotype and might reflect common immunopathology and phenotype-specific responses to treatment. We found biological clusters that were asthma or COPD predominant, suggesting that there are distinct mechanisms underlying these groups, but we also identified a consistent overlap group that might be a consequence of shared mechanisms. Two approaches were used to validate the clusters in an independent group: discriminant analysis and a classifier that used the disease allocation and a sputum IL-1β cutoff. Sputum IL-1β was the best discriminator between the subjects with asthma or COPD in clusters 1 and 3, respectively, and those in the overlap group cluster 2.
The clinical diagnosis of asthma or COPD together with a single sputum cytokine (IL-1β cutoff) demonstrated a simple approach to segment asthma and COPD populations into 3 groups with distinct and consistent cytokine profiles. This approach has advantages in its simplicity and offers the potential for immediate use in stratified medicine studies, although it might underestimate small, albeit potentially important, subgroups such as TH2-high COPD.\nOne possible limitation of this study is that only subjects with severe asthma and COPD who attended a secondary care setting were included, and thus they might not be representative of a more generalized population. We concede that our findings cannot be extrapolated to mild to moderate asthma or mild COPD but are confident that our test and validation populations are representative of our broader secondary care patient population. Our earlier preliminary data comparing asthma and COPD included subjects across the severity of disease, and in this analysis fewer differences between asthma and COPD were observed.16 Whether this was due to lack of power because of the small numbers or to masking of clearer differences seen in more severe disease is unknown. Further studies are required to include healthy controls, larger disease populations with a broader spectrum of subjects, including those with mild disease, and comparisons with other disease control groups. Allergic sensitization might also be an important mechanism in driving the different clusters. We did not record atopic status in the COPD group consistently, but in those with asthma there was no difference across the clusters. However, future studies should consider the role of allergy in these clusters. Our study has focused on stable visits, and a similar comparison is required for longitudinal follow-up at stable and exacerbation events.
We have previously reported exacerbation biological clusters in subjects with COPD and interestingly have identified profiles similar to those described here.20 Whether comparisons of cytokine profiles in larger groups of subjects with severe asthma and COPD reveal similar biological clusters needs to be addressed. The cytokine profiles have been derived from sputum analysis, and whether the profiles are similar in tissue samples is unknown. Access to bronchoscopic samples from large numbers of subjects with severe COPD and asthma is challenging, but multicenter efforts to address this are underway in parallel with sputum sampling, and these findings are eagerly awaited. In addition, although we have chosen to measure a large number of mediators implicated in obstructive airway disease, these mediators cannot fully reflect the complexity of airway disease, and more comprehensive assessments of inflammatory networks in the airway, perhaps using 'omic approaches such as transcriptomics, will be informative.37 Such studies in small numbers suggest groupings similar to those described here, with transcriptional profiles associated with cellular profiles; further studies are awaited.\nIn conclusion, we found here that sputum inflammatory mediator profiling can determine distinct and overlapping groups of subjects with asthma and COPD. We identified an asthma-predominant cluster with eosinophilic inflammation and elevated TH2 inflammatory mediators, a COPD-predominant group with elevated proinflammatory cytokines, and an asthma and COPD overlap group that clinically had chronic bronchitis, increased bacterial colonization, elevated sputum IL-1β and TNF-α levels, and a sputum neutrophilia.
We predict that these groups might contribute to improved patient classification to enable a stratified medicine approach to airways disease. Clinical implications: Sputum cytokine profiling can determine distinct and overlapping asthma and COPD subgroups, supporting both the British and Dutch hypotheses of airway disease." ]
[ "methods", null, null, null, "results", null, "discussion" ]
[ "Asthma and COPD overlap", "cytokines", "factor and cluster analyses", "COPD, Chronic obstructive pulmonary disease", "ROC, Receiver operating characteristic", "ROC AUC, Area under the receiver operating characteristic curve" ]
Methods: Subjects. Subjects with severe asthma or moderate-to-severe COPD were recruited from a single center at the Glenfield Hospital, Leicester, United Kingdom, into independent test and validation studies. Assignment to asthma or COPD was made by the subjects' physician consistent with definitions of asthma and COPD according to the Global Initiative for Asthma1 or the Global Initiative for Chronic Obstructive Lung Disease2 guidelines, respectively, for both the test and validation groups. All subjects were assessed at stable visits at least 8 weeks free from an exacerbation, defined as an increase in symptoms necessitating a course of oral corticosteroids and/or antibiotic therapy. The subjects with COPD had participated in an exacerbation study,20,21 and some of the subjects with asthma had participated in an earlier study.22 All subjects provided written informed consent, and the studies were approved by the local Leicestershire, Northamptonshire, and Rutland ethics committee.
Measurements. Demographic, clinical, and lung-function data were recorded including pre- and postbronchodilator FEV1, forced vital capacity, and symptom scores using the visual analogue scale. Spontaneous or induced sputum was collected for sputum total and differential cell counts and bacteriology; cell-free sputum supernatant was used for mediator assessment as described previously.20 Sputum was produced spontaneously in 93% of the subjects. Positive bacterial colonization was defined as colony-forming units greater than 10⁷/mL sputum or positive culture.20,21 Subjects with sputum eosinophil and neutrophil differential cell counts above 3%23,24 and 61%25 were defined as eosinophilic or neutrophilic, respectively. Further stratification of the subjects into 4 subgroups on the basis of their sputum cell counts was also done: pure eosinophilic (eosinophil > 3% and neutrophil < 61%), pure neutrophilic (eosinophil < 3% and neutrophil > 61%), mixed granulocytic (eosinophil > 3% and neutrophil > 61%), and paucigranulocytic (eosinophil < 3% and neutrophil < 61%). Inflammatory mediators were measured in sputum supernatants using the Meso Scale Discovery Platform (MSD; Gaithersburg, Md). The mediators measured were selected to reflect cytokines, chemokines, and proinflammatory mediators implicated in airway disease. The performance of the MSD platform in terms of recovery of spiked exogenous recombinant proteins has been described previously.16 Sputum inflammatory mediators that were below the detectable range were replaced with their corresponding lower limit of detection in subjects with both asthma and COPD. Twenty-one mediators were included in the test study, and 14 mediators were available in the validation study. Statistical analysis. See this article's Online Repository at www.jacionline.org for detailed statistical methods. All statistical analyses were performed using STATA/IC version 13.0 for Windows (StataCorp, College Station, Tex) and R version 2.15.1 (R Foundation for Statistical Computing, Vienna, Austria). Parametric data were presented as mean with SEM, and log transformed data were presented as geometric mean with 95% CI.
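The four-way sputum cell-count stratification lends itself to a simple rule. A minimal sketch using the 3% eosinophil and 61% neutrophil thresholds from the text (function name my own):

```python
# Illustrative sketch of the four-way sputum granulocyte stratification:
# 3% eosinophil and 61% neutrophil differential-count thresholds, as defined
# in the Measurements section. Function name is a hypothetical stand-in.

def granulocyte_phenotype(eos_pct, neut_pct):
    eos_high = eos_pct > 3.0    # sputum eosinophilia
    neut_high = neut_pct > 61.0 # sputum neutrophilia
    if eos_high and neut_high:
        return "mixed granulocytic"
    if eos_high:
        return "pure eosinophilic"
    if neut_high:
        return "pure neutrophilic"
    return "paucigranulocytic"

print(granulocyte_phenotype(8.0, 40.0))  # -> pure eosinophilic
print(granulocyte_phenotype(1.0, 75.0))  # -> pure neutrophilic
print(granulocyte_phenotype(5.0, 70.0))  # -> mixed granulocytic
print(granulocyte_phenotype(1.0, 30.0))  # -> paucigranulocytic
```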
The χ2 test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data were presented as median with first and third quartiles, and the Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated across asthma versus COPD and bacterially colonized versus noncolonized were identified using receiver operating characteristic (ROC) curves. Factor analysis was performed on sputum inflammatory mediators, and independent factor scores were derived and used as input variables in the k-means cluster analysis to identify subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting the within-cluster sum of squares against a series of sequential numbers of clusters. Linear discriminant analysis was performed and a classification model developed from the test study for the validation study. In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function to identify possibly clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters together with subject disease status (asthma or COPD) were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant.
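The factor-score clustering step can be sketched as follows. This is not the authors' STATA/R pipeline: it is a minimal pure-Python k-means with a deterministic initialization and hypothetical 2-factor scores, plus the within-cluster sum of squares used for the scree plot:

```python
# Illustrative sketch (not the authors' pipeline): k-means on per-subject
# factor scores, with within-cluster sum of squares (WCSS) as plotted on the
# scree plot. Data are hypothetical 2-factor scores; initial centers are taken
# as evenly spaced points for determinism (a real run would use random or
# k-means++ initialization).

def kmeans(points, k, iters=25):
    centers = points[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

def wcss(centers, groups):
    # Within-cluster sum of squares: lower means tighter clusters.
    return sum(sum((a - b) ** 2 for a, b in zip(p, c))
               for c, g in zip(centers, groups) for p in g)

scores = [(2.1, 0.1), (1.9, -0.2), (2.3, 0.0),       # factor pattern 1
          (-0.1, 2.0), (0.2, 2.2), (-0.3, 1.8),      # factor pattern 2
          (-2.0, -1.9), (-2.2, -2.1), (-1.8, -2.0)]  # factor pattern 3
centers, groups = kmeans(scores, k=3)
print(sorted(len(g) for g in groups))  # -> [3, 3, 3]
```

Repeating this for k = 1, 2, 3, ... and plotting `wcss` against k gives the scree plot; the "elbow" where the WCSS drop flattens suggests the cluster count (3 in the study).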
The χ2 test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data were presented as median with first and third quartiles, and Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated across asthma versus COPD and bacterially colonized versus noncolonized were identified using receiver operating characteristic (ROC) curves. Factor analysis was performed on sputum inflammatory mediators, and independent factor scores were derived and used as input variables in the k-means cluster analysis to identify subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting within-cluster sum of the squares against a series of sequential numbers of clusters. Linear discriminant analysis was performed and a classification model developed from the test study for the validation study. In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function to identify possibly clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters together with subject disease status (asthma or COPD) were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant. Subjects: Subjects with severe asthma or moderate-to-severe COPD were recruited from a single center at the Glenfield Hospital, Leicester, United Kingdom, into independent test and validation studies. Assignment to asthma or COPD was made by the subjects' physician consistent with definitions of asthma and COPD according to the Global Initiative for Asthma1 or the Global Initiative for Chronic Obstructive Lung Disease2 guidelines, respectively, for both the test and validation groups. 
All subjects were assessed at stable visits at least 8 weeks free from an exacerbation, defined as an increase in symptoms necessitating a course of oral corticosteroids and/or antibiotic therapy. The subjects with COPD had participated in an exacerbation study,20,21 and some of the subjects with asthma had participated in an earlier study.22 All subjects provided written informed consent, and the studies were approved by the local Leicestershire, Northamptonshire, and Rutland ethics committee. Measurements: Demographic, clinical, and lung-function data were recorded including pre- and postbronchodilator FEV1, forced vital capacity, and symptom scores using the visual analogue scale. Spontaneous or induced sputum was collected for sputum total and differential cell counts and bacteriology; cell-free sputum supernatant was used for mediator assessment as described previously.20 Sputum was produced spontaneously in 93% of the subjects. Positive bacterial colonization was defined as colony-forming units greater than 107/mL sputum or positive culture.20,21 Subjects with sputum eosinophil and neutrophil differential cell counts above 3%23,24 and 61%25 were defined as eosinophilic or neutrophilic, respectively. Further stratification of the subjects into 4 subgroups on the basis of their sputum cell counts was also done: pure eosinophilic (eosinophil > 3% and neutrophil < 61%), pure neutrophilic (eosinophil < 3% and neutrophil > 61%), mixed granulocytic (eosinophil > 3% and neutrophil > 61%), and paucigranulocytic (eosinophil < 3% and neutrophil < 61%). Inflammatory mediators were measured in sputum supernatants using the Meso Scale Discovery Platform (MSD; Gaithersburg, Md). The mediators measured were selected to reflect cytokines, chemokines, and proinflammatory mediators implicated in airway disease. 
The performance of the MSD platform in terms of recovery of spiked exogenous recombinant proteins has been described previously.16 Sputum inflammatory mediators that were below the detectable range were replaced with their corresponding lower limit of detection in subjects with both asthma and COPD. Twenty-one mediators were included in the test study, and 14 mediators were available in the validation study. Statistical analysis: See this article's Online Repository at www.jacionline.org for detailed statistical methods. All statistical analyses were performed using STATA/IC version 13.0 for Windows (StataCorp, College Station, Tex) and R version 2.15.1 (R Foundation for statistical computing, Vienna, Austria). Parametric data were presented as mean with SEM, and log transformed data were presented as geometric mean with 95% CI. The χ2 test or the Fisher exact test was used to compare proportions, and 1-way ANOVA was used to compare means across multiple groups; nonparametric data were presented as median with first and third quartiles, and Kruskal-Wallis test was used to compare subgroups. Inflammatory mediators that significantly discriminated across asthma versus COPD and bacterially colonized versus noncolonized were identified using receiver operating characteristic (ROC) curves. Factor analysis was performed on sputum inflammatory mediators, and independent factor scores were derived and used as input variables in the k-means cluster analysis to identify subjects' biological subgroups. The optimal number of clusters was chosen on the basis of a scree plot, by plotting within-cluster sum of the squares against a series of sequential numbers of clusters. Linear discriminant analysis was performed and a classification model developed from the test study for the validation study. 
In addition, classification and regression trees analysis was performed sequentially on all inflammatory mediators in the test study that had high discriminant function to identify possibly clinically relevant cutoff points. The inflammatory mediator cutoff points with the highest sensitivity ratio in discriminating the clusters together with subject disease status (asthma or COPD) were applied to classify the validation study into subgroups. A P value of less than .05 was taken as statistically significant. Results: The clinical and sputum characteristics of the asthma (n = 86) and COPD (n = 75) test groups are presented in Table I. Subjects with asthma were younger, had a higher body mass index, better lung function, fewer symptoms, and a lower smoking pack-year history than did subjects with COPD. The differential neutrophil and eosinophil counts were not statistically different between the groups, but the total cell count was higher in those with COPD. The sputum inflammatory mediator profiles were distinct with increased TH2 (IL-5, IL-13, and CCL26) and TH1 mediators (CXCL10 and 11) in severe asthma compared with COPD and increased IL-6, CCL2, CCL3, and CCL4 in COPD compared with severe asthma. Inflammatory mediators that best discriminated asthma and COPD are presented as ROC curves (see Fig E1 in this article's Online Repository at www.jacionline.org). Sputum CCL5 and CXCL11 levels were substantially higher in subjects with asthma than in subjects with COPD, with area under the ROC curves (ROC AUCs) of 0.74 (95% CI, 0.67-0.82; P < .0001) and 0.72 (95% CI, 0.64-0.80; P < .0001), respectively. Sputum IL-6 and CCL2 levels were significantly higher in subjects with COPD than in subjects with asthma, with ROC AUCs of 0.86 (95% CI, 0.81-0.92; P < .0001) and 0.69 (0.61-0.77; P < .0001), respectively. 
Discriminant analysis using the combined clinical, physiological, and biological (inflammatory mediator) variables completely distinguished the asthma and COPD groups (Fig 1). The mediators that best discriminated between bacterially colonized and noncolonized subjects were sputum IL-1β and TNF-α, with ROC AUCs of 0.76 (95% CI, 0.68-0.85) and 0.75 (95% CI, 0.66-0.84), respectively (see Fig E2 in this article's Online Repository at www.jacionline.org). Factor analysis revealed 4 factors, with IL-1β, IL-5, IL-6, and CXCL11, respectively, as the highest-loading components (see Table E1 in this article's Online Repository at www.jacionline.org). Subsequent cluster analysis identified 3 clusters (Table II). Individual clinical and biological comparisons of subjects with asthma and COPD in clusters 1, 2, and 3 are presented in Table E2 in this article's Online Repository at www.jacionline.org. Linear discriminant analysis was performed to verify the determined clusters and to identify the contribution of inflammatory mediators in discriminating the clusters. Subsequently, 2 discriminant scores for individual subjects were calculated and used to represent the clusters in a 2-dimensional graph (Fig 2). Cluster 1 consisted mainly of subjects with asthma (95% of cluster 1) with elevated sputum TH2 mediators and was eosinophil predominant, with 67% of the subjects having a sputum eosinophilia and 48% a sputum neutrophilia. Further stratification of cluster 1 by sputum cell counts showed that the subjects were 40% pure eosinophilic, 21% pure neutrophilic, 27% mixed granulocytic, and 12% paucigranulocytic. Cluster 2 consisted of an overlap of subjects with asthma and COPD with sputum neutrophil predominance (75% of subjects with asthma and 95% of subjects with COPD). In contrast, only 11% and 5% of the subjects with asthma or COPD, respectively, had a sputum eosinophilia. 
In addition, there were elevated sputum levels of IL-1β, IL-8, IL-10, and TNF-α and bacterial colonization. The increased rate of bacterial colonization found in this cluster was driven predominantly by subjects with COPD (Table E2). Further stratification of cluster 2 showed that the subjects were 0% pure eosinophilic, 74% pure neutrophilic, 9% mixed granulocytic, and 17% paucigranulocytic. Cluster 3 consisted predominantly of subjects with COPD (95% of cluster 3). In contrast to subjects with COPD in cluster 2, neutrophilic inflammation was present in only 49% of the subjects, whereas a sputum eosinophilia was observed in 46% of the subjects. IL-6 and CCL2 levels were increased compared with those in clusters 1 and 2 but were similar to those in subjects with COPD in cluster 2. Only CCL13 and CCL17 were elevated in subjects with COPD in cluster 3 compared with subjects with COPD in cluster 2 (Table E2). Further stratification of cluster 3 showed that the subjects were 21% pure eosinophilic, 28% pure neutrophilic, 23% mixed granulocytic, and 28% paucigranulocytic. The proportion of subjects with asthma with airflow obstruction in the 3 clusters was not significantly different. Of the 2 subjects with asthma in cluster 3, 1 had persistent airflow obstruction compared with 17 of 55 in cluster 1 and 10 of 28 in cluster 2. The best discriminator between subjects in clusters 1 and 3 and those in the overlap cluster 2 was sputum IL-1β at a cutoff point of 130 pg/mL (Fig 3). The second best discriminator was TNF-α with a cutoff point of 5 pg/mL (see Fig E3 in this article's Online Repository at www.jacionline.org). Validation: These cluster analysis findings were then validated in independent asthma and COPD cohorts. Subjects were assigned into subgroups using 2 techniques. 
The first was a classification model developed from the test cohort using linear discriminant analysis, from which betas for each cluster and cytokine were extracted (see Table E3 in this article's Online Repository at www.jacionline.org). Each subject's discriminant score in each subgroup was calculated, and the subject was assigned to the subgroup in which he or she had the highest score. The second technique used the IL-1β cutoff point of 130 pg/mL, identified as the best classifier to distinguish overlap cluster 2 from clusters 1 and 3 in the test study, alongside subject disease status (asthma or COPD). The sputum cellular and inflammatory mediator profiles of the 3 validation study subgroups, obtained using both techniques, were very similar to those of the test subgroups (Tables III and IV; Fig 4). In addition, individual clinical and biological comparisons of subjects with asthma and COPD in the validation subgroups, presented in Tables E4 and E5 in this article's Online Repository at www.jacionline.org, revealed a pattern similar to that of the test subgroups (Table E2). 
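The second, rule-based assignment technique (disease status plus the sputum IL-1β cutoff) can be sketched in a few lines. The function name and numeric subgroup labels are illustrative, not from the article; only the 130 pg/mL cutoff and the decision logic come from the text.

```python
# Illustrative rule-based classifier; the cutoff value is from the test
# cohort, and subgroup numbering follows the article's clusters 1-3.
IL1B_CUTOFF_PG_ML = 130.0

def assign_subgroup(sputum_il1b_pg_ml: float, disease: str) -> int:
    """Return 1 (asthma predominant), 2 (overlap), or 3 (COPD predominant)."""
    if sputum_il1b_pg_ml > IL1B_CUTOFF_PG_ML:
        return 2  # high IL-1beta marks the neutrophilic overlap cluster
    return 1 if disease == "asthma" else 3

print(assign_subgroup(200.0, "asthma"))  # → 2
print(assign_subgroup(50.0, "COPD"))     # → 3
```

The appeal of this rule is that it needs only the clinical diagnosis and one sputum cytokine measurement per subject.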
Discussion: Here we report that although a combination of clinical variables distinguished asthma from COPD, further analyses of the sputum inflammatory mediators revealed that patients with asthma and COPD were best described by 3 biological clusters incorporating clinical, physiological, and inflammatory mediator characteristics. Our findings further underscore the complex heterogeneity of asthma and COPD and provide support for the “British” hypothesis of airway disease pathogenesis, as we identified 2 clusters that were predominantly either asthma or COPD with distinct cytokine profiles, while also supporting the “Dutch” hypothesis by identifying a third cluster of overlapping subjects from both disease groups with similar cytokine profiles. Cluster 1 was asthma predominant, with evidence of eosinophilic inflammation and increased TH2 inflammatory mediators. Cluster 2 contained an asthma and COPD overlap group with predominantly neutrophilic airway inflammation and elevated levels of IL-1β and TNF-α, and included the highest proportion of subjects with bacterial colonization. Cluster 3 was a COPD-predominant group with mixed granulocytic airway inflammation and high sputum IL-6 and CCL13 levels. Furthermore, the biological clusters derived from the test group could be validated in an independent group, yielding inflammatory mediator profiles similar to those of the test group. Whether these biological clusters can be used to stratify subjects for more targeted approaches to novel and existing therapies needs to be further studied. 
The clusters we have identified have biological plausibility, and they confirm and extend our current understanding of the immunopathobiology of asthma and COPD, moving beyond previous comparisons of asthma versus COPD16 or clustering approaches applied to cytokine profiles in asthma or COPD alone.20,26 In addition, the clusters might represent groups with possible stratified responses to specific anti-inflammatory treatment. Cluster 1 is consistent with the TH2-predominant eosinophilic asthma paradigm. Indeed, this group was predominantly asthmatic but importantly also included about 5% of subjects with COPD. It would seem likely that this group is most likely to respond to anti-TH2 cytokine therapy such as anti–IL-5 and anti–IL-13.18,19,27,28 Eosinophilic COPD is well described, and this group had a greater response to oral and inhaled corticosteroids than did those with noneosinophilic COPD.29,30 Whether subjects with COPD in this cluster would respond to anti-TH2 cytokine therapy is currently under study (www.clinicaltrials.gov NCT01227278). Cluster 2 included an overlap of subjects with asthma and COPD. This group was predominantly neutrophilic, consistent with previous observations,14 and with increased bacterial colonization. Recent evidence supports a role for macrolide antibiotics in COPD31 and in noneosinophilic severe asthma.32 Antineutrophilic strategies such as anti-CXCR2 are currently under study.33 Further studies are required to assess whether this cluster represents patients most likely to respond to these therapies. In cluster 2, increased bacterial colonization was evident, particularly in those with COPD, perhaps suggesting that in these subjects the neutrophilic inflammation is a consequence of bacterial colonization rather than the primary abnormality. Thus, whether ameliorating neutrophilic inflammation in this group is beneficial or harmful is unclear. 
Indeed, lessons from anti–TNF-α therapy suggest that targeting proinflammatory cytokines can increase the risk of infection.34,35 In contrast, in those with neutrophilic inflammation without evidence of bacterial colonization, particularly in those with asthma, the neutrophilic inflammation might be critical in the development of the disease. Thus, identification of distinct groups that benefit from, or are harmed by, antineutrophilic approaches would enable better stratification of such therapies. Cluster 3 included mainly subjects with COPD, in whom bacterial colonization was observed less frequently in spite of consistently elevated proinflammatory cytokines. Perhaps this group, in contrast to cluster 2, represents subjects in whom the proinflammatory environment plays a more causal role in disease expression rather than being a consequence of infection. This might suggest that this group would be more amenable to anticytokine therapies such as anti–IL-6. In addition, eosinophilic inflammation was a feature in some subjects in cluster 3 in the absence of an elevated TH2 profile. One of the few cytokines increased in cluster 3 was CCL13, which is a CCR3 agonist and promotes eosinophil migration. Small airway macrophages are an important source of CCL13 in the airway and might play a role in the eosinophilic inflammation in this group.36 Taken together, these intriguing and novel observations immediately open up opportunities for further translational studies to determine the underlying mechanisms of these clusters and their treatment-specific anti-inflammatory therapies. In addition to clear differences in the cytokine profiles between groups, there were several differences in clinical parameters. These were largely dependent on whether the clusters were asthma or COPD predominant or mixed. For example, lung function, age, greater smoking history, and higher exacerbation frequency were related to the number of subjects with COPD in the cluster. 
However, the symptom of cough was more common in cluster 2. Indeed, subjects with asthma and COPD in cluster 2 had a higher visual analog scale score for cough than did the subjects with asthma in cluster 1 or the subjects with COPD in cluster 3, respectively. This suggests that this difference is independent of disease status and might represent a real association with either the inflammatory profile or the increased bacterial colonization in cluster 2. This cluster also had the highest sputum total inflammatory cell count, suggesting that it represents a “chronic bronchitis” group. As suggested above, whether this group might warrant antimicrobial, anti-inflammatory, or mucolytic therapy is an interesting possibility. One of the strengths of our observations is that we were able to support the identification of the 3 biological clusters in an independent validation group. The similarity between the cytokine (inflammatory mediator) profiles in the test and validation groups supports the view that each cluster is a consistent phenotype and might reflect common immunopathology and phenotype-specific responses to treatment. We found biological clusters that were asthma or COPD predominant, suggesting that there are distinct mechanisms underlying these groups, but we also identified a consistent overlap group that might be a consequence of shared mechanisms. Two approaches were used to validate the clusters in an independent group: discriminant analysis, and the generation of a classifier that used the disease allocation and a sputum IL-1β cutoff. Sputum IL-1β was the best discriminator between the subjects with asthma or COPD in clusters 1 and 3, respectively, and those in the overlap cluster 2. The clinical diagnosis of asthma or COPD together with a single sputum cytokine (IL-1β) cutoff demonstrated a simple approach to segment asthma and COPD populations into 3 groups with distinct and consistent cytokine profiles. 
This approach has advantages in its simplicity and offers the potential for immediate use in stratified medicine studies, although it might underestimate small, albeit potentially important, subgroups such as TH2-high COPD. One possible limitation of this study is that only subjects with severe asthma and COPD who attended a secondary care setting were included, who thus might not be representative of a more generalized population. We concede that our findings cannot be extrapolated to mild to moderate asthma or mild COPD, but we are confident that our test and validation populations are representative of our broader secondary care patient population. Our earlier preliminary data comparing asthma and COPD included subjects across the severity spectrum of disease, and in that analysis fewer differences between asthma and COPD were observed.16 Whether this was due to lack of power because of the small numbers or due to masking of clearer differences seen in more severe disease is unknown. Further studies are required to include healthy controls, larger disease populations including a broader spectrum of subjects including those with mild disease, and comparisons with other disease control groups. Allergic sensitization might also be an important mechanism in driving the different clusters. We did not record atopic status in the COPD group consistently, but in those with asthma there was no difference across the clusters. However, future studies should consider the role of allergy in these clusters. Our study focused on stable visits, and a similar comparison is required for longitudinal follow-up at stable and exacerbation events. We have previously reported exacerbation biological clusters in subjects with COPD and, interestingly, have identified profiles similar to those described here.20 Whether comparisons of cytokine profiles in larger groups of subjects with severe asthma and COPD reveal similar biological clusters needs to be addressed. 
The cytokine profiles have been derived from sputum analysis, and whether the profiles are similar in tissue samples is unknown. Access to bronchoscopic samples from large numbers of subjects with severe asthma and COPD is challenging, but multicenter efforts to address this are underway in parallel with sputum sampling, and these findings are eagerly awaited. In addition, although we have chosen to measure a large number of mediators implicated in obstructive airway disease, these mediators cannot fully reflect the complexity of airway disease, and approaches using a more comprehensive assessment of inflammatory networks in the airway, perhaps via 'omic technologies such as transcriptomics, will be informative.37 Such studies in small numbers suggest groupings similar to those described here, with transcriptional profiles associated with cellular profiles, and further studies are awaited. In conclusion, we found that sputum inflammatory mediator profiling can determine distinct and overlapping groups of subjects with asthma and COPD. We identified an asthma-predominant cluster with eosinophilic inflammation and elevated TH2 inflammatory mediators, a COPD-predominant group with elevated proinflammatory cytokines, and an asthma and COPD overlap group that clinically had chronic bronchitis, increased bacterial colonization, elevated sputum IL-1β and TNF-α levels, and a sputum neutrophilia. We predict that these groups might contribute to improved patient classification to enable a stratified medicine approach to airways disease. Clinical implications: Sputum cytokine profiling can determine distinct and overlapping asthma and COPD subgroups, supporting both the British and Dutch hypotheses of airway disease.
Background: Asthma and chronic obstructive pulmonary disease (COPD) are heterogeneous diseases. Methods: We compared the clinical and physiological characteristics and sputum mediators between 86 subjects with severe asthma and 75 with moderate-to-severe COPD. Biological subgroups were determined using factor and cluster analyses on 18 sputum cytokines. The subgroups were validated on independent severe asthma (n = 166) and COPD (n = 58) cohorts. Two techniques were used to assign the validation subjects to subgroups: linear discriminant analysis, or the best identified discriminator (single cytokine) in combination with subject disease status (asthma or COPD). Results: Discriminant analysis distinguished severe asthma from COPD completely using a combination of clinical and biological variables. Factor and cluster analyses of the sputum cytokine profiles revealed 3 biological clusters: cluster 1: asthma predominant, eosinophilic, high TH2 cytokines; cluster 2: asthma and COPD overlap, neutrophilic; cluster 3: COPD predominant, mixed eosinophilic and neutrophilic. Validation subjects were classified into 3 subgroups using discriminant analysis, or disease status with a binary assessment of sputum IL-1β expression. Sputum cellular and cytokine profiles of the validation subgroups were similar to the subgroups from the test study. Conclusions: Sputum cytokine profiling can determine distinct and overlapping groups of subjects with asthma and COPD, supporting both the British and Dutch hypotheses. These findings may contribute to improved patient classification to enable stratified medicine.
null
null
5,787
270
7
[ "subjects", "copd", "asthma", "sputum", "cluster", "asthma copd", "test", "mediators", "inflammatory", "clusters" ]
[ "test", "test" ]
null
null
null
null
[CONTENT] Asthma and COPD overlap | cytokines | factor and cluster analyses | COPD, Chronic obstructive pulmonary disease | ROC, Receiver operating characteristic | ROC AUC, Area under the receiver operating characteristic curve [SUMMARY]
[CONTENT] Asthma and COPD overlap | cytokines | factor and cluster analyses | COPD, Chronic obstructive pulmonary disease | ROC, Receiver operating characteristic | ROC AUC, Area under the receiver operating characteristic curve [SUMMARY]
null
[CONTENT] Asthma and COPD overlap | cytokines | factor and cluster analyses | COPD, Chronic obstructive pulmonary disease | ROC, Receiver operating characteristic | ROC AUC, Area under the receiver operating characteristic curve [SUMMARY]
null
null
[CONTENT] Aged | Asthma | Cluster Analysis | Cytokines | Female | Forced Expiratory Volume | Humans | Leukocyte Count | Male | Middle Aged | Pulmonary Disease, Chronic Obstructive | Sputum | United Kingdom [SUMMARY]
[CONTENT] Aged | Asthma | Cluster Analysis | Cytokines | Female | Forced Expiratory Volume | Humans | Leukocyte Count | Male | Middle Aged | Pulmonary Disease, Chronic Obstructive | Sputum | United Kingdom [SUMMARY]
null
[CONTENT] Aged | Asthma | Cluster Analysis | Cytokines | Female | Forced Expiratory Volume | Humans | Leukocyte Count | Male | Middle Aged | Pulmonary Disease, Chronic Obstructive | Sputum | United Kingdom [SUMMARY]
null
null
[CONTENT] test | test [SUMMARY]
[CONTENT] test | test [SUMMARY]
null
[CONTENT] test | test [SUMMARY]
null
null
[CONTENT] subjects | copd | asthma | sputum | cluster | asthma copd | test | mediators | inflammatory | clusters [SUMMARY]
[CONTENT] subjects | copd | asthma | sputum | cluster | asthma copd | test | mediators | inflammatory | clusters [SUMMARY]
null
[CONTENT] subjects | copd | asthma | sputum | cluster | asthma copd | test | mediators | inflammatory | clusters [SUMMARY]
null
null
[CONTENT] mediators | subjects | sputum | eosinophil neutrophil | neutrophil | 61 | test | study | eosinophil neutrophil 61 | neutrophil 61 [SUMMARY]
[CONTENT] subjects | cluster | il | copd | asthma | table | sputum | fig | subjects copd | compared [SUMMARY]
null
[CONTENT] subjects | copd | asthma | sputum | cluster | test | mediators | asthma copd | subgroups | study [SUMMARY]
null
null
[CONTENT] between 86 | 75 ||| 18 ||| 166 | COPD | 58 ||| Two | linear [SUMMARY]
[CONTENT] COPD ||| 3 | 1 | TH2 | 2 | 3 ||| 3 | IL-1β ||| [SUMMARY]
null
[CONTENT] ||| between 86 | 75 ||| 18 ||| 166 | COPD | 58 ||| Two | linear ||| ||| COPD ||| 3 | 1 | TH2 | 2 | 3 ||| 3 | IL-1β ||| ||| British | Dutch ||| [SUMMARY]
null
Wdr66 is a novel marker for risk stratification and involved in epithelial-mesenchymal transition of esophageal squamous cell carcinoma.
23514407
We attempted to identify novel biomarkers and therapeutic targets for esophageal squamous cell carcinoma by gene expression profiling of frozen esophageal squamous carcinoma specimens and examined the functional relevance of a newly discovered marker gene, WDR66.
BACKGROUND
Laser capture microdissection technique was applied to collect the cells from well-defined tumor areas in collaboration with an experienced pathologist. Whole human gene expression profiling of frozen esophageal squamous carcinoma specimens (n = 10) and normal esophageal squamous tissue (n = 18) was performed using microarray technology. A gene encoding WDR66, WD repeat-containing protein 66 was significantly highly expressed in esophageal squamous carcinoma specimens. Microarray results were validated by quantitative real-time polymerase chain reaction (qRT-PCR) in a second and independent cohort (n = 71) consisting of esophageal squamous cell carcinoma (n = 25), normal esophagus (n = 11), esophageal adenocarcinoma (n = 13), gastric adenocarcinoma (n = 15) and colorectal cancers (n = 7). In order to understand WDR66's functional relevance siRNA-mediated knockdown was performed in a human esophageal squamous cell carcinoma cell line, KYSE520 and the effects of this treatment were then checked by another microarray analysis.
METHODS
High WDR66 expression was significantly associated with poor overall survival (P = 0.031) of patients suffering from esophageal squamous carcinomas. Multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042). WDR66 knockdown by RNA interference resulted particularly in changes of the expression of membrane components. Expression of vimentin was down regulated in WDR66 knockdown cells while that of the tight junction protein occludin was markedly up regulated. Furthermore, siRNA-mediated knockdown of WDR66 resulted in suppression of cell growth and reduced cell motility.
RESULTS
WDR66 might be a useful biomarker for risk stratification of esophageal squamous carcinomas. WDR66 expression is likely to play an important role in esophageal squamous cell carcinoma growth and invasion as a positive modulator of epithelial-mesenchymal transition. Furthermore, due to its high expression and possible functional relevance, WDR66 might be a novel drug target for the treatment of squamous carcinoma.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Biomarkers, Tumor", "Calcium-Binding Proteins", "Carcinoma, Squamous Cell", "Cell Line, Tumor", "Cell Movement", "Cell Proliferation", "Epithelial-Mesenchymal Transition", "Esophageal Neoplasms", "Esophageal Squamous Cell Carcinoma", "Female", "Gene Expression", "Gene Knockdown Techniques", "Humans", "Male", "Middle Aged", "Neoplasm Grading", "Neoplasm Staging", "Occludin", "Risk", "Vimentin", "Young Adult" ]
3610187
Background
Esophageal squamous cell carcinoma (ESCC) is one of the most lethal malignancies of the digestive tract, and in most cases the initial diagnosis is established only once the malignancy is in the advanced stage [1]. Poor survival is due to the fact that ESCC frequently metastasizes to regional and distant lymph nodes, even at initial diagnosis. Treatment of cancer using molecular targets has brought promising results and attracts more and more attention [2-5]. Characterization of genes involved in the progression and development of ESCC may lead to the identification of new prognostic markers and therapeutic targets. By whole genome-wide expression profiling, we found that WD repeat-containing protein 66 (WDR66), located on chromosome 12 (12q24.31), might be a useful biomarker for risk stratification and a modulator of epithelial-mesenchymal transition in ESCC. The WD-repeat protein family is a large family found in all eukaryotes and is implicated in a variety of functions ranging from signal transduction and transcription regulation to cell cycle control, autophagy, and apoptosis [6]. These repeating units are believed to serve as a scaffold for multiple protein interactions with various proteins [7]. According to whole-genome sequence analysis, there are 136 WD-repeat proteins in humans which belong to the same structural class [8]. Among the WD-repeat proteins, endonuclein, containing five WD-repeat domains, was shown to be up-regulated in pancreatic cancer [9]. The expression of human BTRC (beta-transducin repeat-containing protein), which contains one F-box and seven WD-repeats, targeted to epithelial cells under a tissue-specific promoter in BTRC-deficient (−/−) female mice, promoted the development of mammary tumors [10]. WDRPUH (WD repeat-containing protein 16), encoding a protein containing 11 highly conserved WD-repeat domains, was also shown to be up-regulated in human hepatocellular carcinomas and involved in promotion of cell proliferation [11]. 
The WD repeat-containing protein 66 contains 9 highly conserved WD40 repeat motifs and an EF-hand-like domain. A genome-wide association study identified a single-nucleotide polymorphism located within intron 3 of WDR66 associated with mean platelet volume [12]. WD-repeat proteins have been identified as tumor markers that were frequently up-regulated in various cancers [11,13,14]. Here, we report for the first time that WDR66 might be an important prognostic factor for patients with ESCC as found by whole human gene expression profiling. Moreover, to our knowledge, the role of WDR66 in esophageal cancer development and progression has not been explored up to now. To this end we examined the effect of silencing of WDR66 by another microarray analysis. In addition, the effect of WDR66 on epithelial-mesenchymal transition (EMT), cell motility and tumor growth was investigated.
Methods
Patients: Tissue samples from individuals of the first set (n = 28) were obtained from the tumor bank of the Charité Comprehensive Cancer Center. Gene expression was examined by whole-human-genome microarrays (Affymetrix, Santa Clara, CA, USA). Ten esophageal squamous cell carcinoma (ESCC) and 18 normal esophageal (NE) biopsies were randomly collected. Normal healthy esophageal biopsies were collected from patients with esophageal pain but diagnosed as normal squamous epithelium without pathological changes. Surgical specimens were from chemotherapy-naïve patients with known ESCC of histological grading G1, UICC stage II and III, who had undergone esophagectomy. Patients' age ranged from 22 to 83 years, with a median age of 59 years. A second panel (n = 71) consisting of ESCC (n = 25), NE (n = 11), esophageal adenocarcinoma (EAC) (n = 13), gastric adenocarcinoma (GAC) (n = 15), and colorectal cancers (CRC) (n = 7) was used for qRT-PCR validation. Patients' age ranged from 24 to 79 years, with a median age of 63 years. All samples were snap-frozen in liquid nitrogen and stored at −80°C. We obtained tissue specimens from all subjects with informed written consent (approved by the local ethics committee of the Charité-Universitätsmedizin, Berlin). Each specimen included in this study was histopathologically assessed according to grade and stage by an experienced pathologist (MV, University Bayreuth). Laser capture microdissection and microarray: Laser capture microdissection (Cellcut, MMI AG, and Nikon TE300 microscope) was used for isolating the desired cells from sections. After transferring 5 μm sections onto MMI membrane slides, these were fixed in 70% isopropyl alcohol and then stained with the MMI basic staining kit. Desired tumor cell or NE areas were selected, cut, and collected. Preparation of labeled cRNA and hybridization were done using the gene chip hybridization, wash, and stain kit (Affymetrix, Santa Clara, CA, USA), as described previously [15]. Two-cycle labeling was applied to all samples. In total, 28 chip datasets (10 ESCC and 18 NE) were collected using Gene Chip Operation Software (GCOS, Affymetrix). To obtain relative gene expression measurements, probe set-level data extraction was performed with the GCRMA (Robust Multiarray Average) normalization algorithm implemented in GeneSpring GX10.2 (Agilent). All data were log2 transformed. 
A list of all genes included in these microarrays and the normalized data have been deposited in the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/info/linking.html) under GEO accession number GSE26886. For gene-by-gene statistical testing, parametric tests were used to compare differences between groups. The false discovery rate (FDR) was controlled using the Benjamini-Hochberg procedure to correct the resulting p-values for multiple testing.
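The Benjamini-Hochberg correction applied to the gene-by-gene p-values ranks the p-values, scales each by m/rank, and enforces monotonicity from the largest p-value down. A minimal sketch (illustrative p-values, not study data):

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (q-values) in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvals[i] * m / rank)
        adjusted[i] = q
        prev = q
    return adjusted

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06]))
```

Genes would then be called significant at, e.g., q < 0.05 rather than at the raw p-value threshold.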
In situ hybridization
A 148 bp fragment located at the 3′ terminal end of the human WDR66 gene (NM_144668) was subcloned into the pBluescript II vector pBS-27.16 using the primer pair forward 5’-CAACCTGCTCCGTCAAA-3’ and reverse 5’-TAAACATTCCTGGTAACTTCAC-3’. The linearized plasmid was used as a template for the synthesis of antisense probes. The probe was labeled with digoxigenin/dUTP using a DIG/RNA labeling kit (Boehringer Mannheim), according to the manufacturer’s instructions. The quality and quantity of the probe were confirmed by gel electrophoresis before being used for in situ hybridization. The digoxigenin-labeled probe was applied to 5 μm dewaxed FFPE sections and hybridized at 65°C overnight in a humid chamber. After three washes to remove nonspecifically bound or unbound probe, the digoxigenin-labeled probe was detected using the alkaline phosphatase method.
RNA extraction and qRT-PCR
RNA extractions were carried out using the RNeasy Mini Kit (Qiagen, Valencia, CA, USA).
Total RNA quality and yield were assessed using a bioanalyzer system (Agilent 2100 Bioanalyzer; Agilent Technologies, Palo Alto, CA, USA) and a spectrophotometer (NanoDrop ND-1000; NanoDrop Technologies, Wilmington, DE, USA). Only RNA with an RNA integrity number (RIN) > 9.0 was used for microarray analysis. For quantification of mRNA expression, qRT-PCR was performed for three genes plus one control, using pre-designed gene-specific TaqMan® probes and primer sets purchased from Applied Biosystems (Hs01566237_m1 for WDR66, Hs00958116_m1 for VIM, Hs00170162_m1 for OCLN, and 4326317E for GAPDH). qRT-PCR conditions and data analysis were as described [16].
Cell culture and siRNA-mediated knockdown
KYSE520, OE33, SW480, Caco2, HCT116, HT29, HL60, LS174T, Daudi, HEK293, MCF7, MDA-MB-231, MDA-MB-435 and Capan-I cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA) and cultured according to the supplier’s instructions. For siRNA-mediated knockdown of WDR66, cells were seeded in 6-well plates on the day before transfection. On day 0, cells were transfected with siRNA at a concentration of 25 nmol/L using Thermo Scientific DharmaFECT transfection reagents according to the manufacturer’s instructions.
The siRNA sense sequence 5’-GUUACUAAAGGUGAGCAUA-3’ corresponding to WDR66 mRNA was chemically synthesized by Sigma-Proligo (Munich, Germany). RNA was extracted at the indicated time points as described.
Microarray analysis of WDR66 knock-down cells
Total RNA was extracted from pellets of 10⁶ cells using the RNeasy Mini Prep Kit (Qiagen, Hilden, Germany). RNA quality was checked with a Bioanalyzer (Agilent, Santa Clara, CA); only RNA samples with a RIN of at least 9.0 were used for labeling. Total RNA (1 μg) was labeled with Cy3 using the Low Input RNA Amplification Kit (Agilent, Santa Clara, CA). Labeled cRNAs were hybridized to Whole Human Genome 4x44K Oligonucleotide Microarrays (Agilent, Santa Clara, CA) according to the manual. Arrays were scanned using standard Agilent protocols and a G2565AA Microarray Scanner (Agilent, Santa Clara, CA). Raw expression values were determined using Feature Extraction 9.0 software (Agilent, Santa Clara, CA).
Western blotting analysis
Total cell extracts were obtained, and cell lysate containing 50 μg of protein was separated on a 10% SDS-polyacrylamide gel and blotted onto polyvinylidene difluoride (PVDF) membranes (Millipore, Bedford, MA, USA). The primary antibody for vimentin detection was a mouse monoclonal anti-human vimentin antibody (Sigma-Aldrich Corporation, V5255, 1:200, approximately 54 kDa); the primary antibody for occludin detection was a rabbit polyclonal anti-human occludin antibody (Invitrogen, 71–1500, 1:500, at 65 kDa). β-actin was used as loading control (Abcam, 1:2000, ab8226). Signals were detected using an ECL kit (Amersham Pharmacia Biotech, Piscataway, NJ, USA). Images were scanned with a FujiFilm LAS-1000 (FujiFilm, Düsseldorf, Germany).
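For context, qRT-PCR quantification relative to a housekeeping gene such as GAPDH (as in the methods above) is conventionally computed with the 2^−ΔΔCt method. A minimal sketch with illustrative Ct values, not measurements from this study:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt fold change of a target gene in a sample vs. a calibrator,
    normalized to a reference gene (e.g. GAPDH). Ct values are illustrative."""
    d_ct_sample = ct_target - ct_ref        # normalize the sample
    d_ct_cal = ct_target_cal - ct_ref_cal   # normalize the calibrator
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Target amplifying two cycles earlier than in the calibrator
# (with equal reference Ct) corresponds to ~4-fold higher expression.
print(relative_expression(24.0, 18.0, 26.0, 18.0))
```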
Cell number, cell motility and wound-healing assay
Cells were seeded in 6-well plates 24 h before transfection. Transfection was done as described above. Cells were collected at the indicated time points, and cell numbers were measured using a POLARstar Omega reader (BMG Labtech, Offenburg, Germany) with excitation and emission filters of 485 and 520 nm, respectively. The results were analyzed with the MARS data analysis software. Cell motility was determined using 12-well Transwell Permeable Support inserts with polycarbonate filters with a pore size of 8 μm (Corning Costar, Lowell, MA), according to the manufacturer’s instructions. Wound-healing assays were performed in triplicate using the CytoSelect 24-well wound healing assay (Cell Biolabs, Inc.).
Statistical analysis
Statistical analysis was done using GraphPad Prism version 5 for Windows (GraphPad Software) and SPSS version 13 for Windows (SPSS, Chicago, IL, USA) as follows. GraphPad Prism: unpaired t test with Welch’s correction for the quantitative real-time RT-PCR measurements of WDR66 in patient samples and for the gene expression measurements in the validation cohort; nonparametric Mann–Whitney U test for cell numbers, the motility assay and the wound-healing assay after knockdown of WDR66. SPSS: Kaplan-Meier survival analysis with log-rank statistics; cut-point analysis of the qRT-PCR measurements of WDR66 in patient samples using maximally selected rank statistics to determine the value that, used as a cut-point, separates the cohort into two groups with the most significant difference. Grouping of patients according to the median of the qRT-PCR measurements was done as follows: WDR66 ≤ 125, WDR66 low; WDR66 > 125, WDR66 high. The stratified Cox regression model was used to determine prognostic factors in a multivariate analysis, with WDR66 dichotomized at the previously determined cut-points.
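The idea behind the cut-point analysis above (maximally selected rank statistics) is to scan candidate expression cut-points and keep the split that maximizes the two-group log-rank statistic. A toy sketch of that idea, with hypothetical data and without the p-value adjustment that the SPSS/maxstat implementation performs:

```python
def logrank_stat(times, events, groups):
    """Two-group log-rank chi-square statistic.
    times: survival/censoring times; events: 1 = death, 0 = censored;
    groups: 0/1 labels."""
    observed1 = expected1 = variance = 0.0
    for t in sorted({ti for ti, e in zip(times, events) if e}):
        n = sum(1 for ti in times if ti >= t)                     # at risk
        n1 = sum(1 for ti, g in zip(times, groups) if ti >= t and g == 1)
        d = sum(1 for ti, e in zip(times, events) if ti == t and e)
        d1 = sum(1 for ti, e, g in zip(times, events, groups)
                 if ti == t and e and g == 1)
        observed1 += d1
        expected1 += d * n1 / n
        if n > 1:  # hypergeometric variance term
            variance += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (observed1 - expected1) ** 2 / variance if variance > 0 else 0.0

def best_cutpoint(expr, times, events):
    """Return (cut, statistic) for the split of expr maximizing the log-rank stat."""
    best = (None, -1.0)
    for cut in sorted(set(expr))[:-1]:  # keep both groups non-empty
        groups = [1 if x > cut else 0 for x in expr]
        s = logrank_stat(times, events, groups)
        if s > best[1]:
            best = (cut, s)
    return best
```

Because many cut-points are tested, the resulting minimal p-value must be corrected (maximally selected rank statistics do exactly that); the sketch only illustrates the scan itself.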
Results
WDR66 is specifically highly expressed in esophageal squamous cell carcinoma
Whole genome-wide expression profiling was performed on 28 esophageal specimens (GSE26886). To ensure that only epithelial cells were studied, we applied laser capture microdissection to the specimens. A number of genes differentially expressed between ESCC samples and normal esophageal squamous epithelium were identified. The probe set with the highest fold change and lowest p-value represented the WDR66 transcript (P < 0.0001) (Figure 1A). As a validation study, WDR66 expression was examined by qRT-PCR in an independent cohort of 71 specimens comprising ESCC (n = 25), NE (n = 11), EAC (n = 13), GAC (n = 15) and CRC (n = 7). WDR66 was highly expressed in 96% of ESCC patients (Figure 1B). Confirming the microarray results, WDR66 expression was significantly higher in ESCC compared with NE as well as the other three cancer types in this cohort (P < 0.0001). Immunohistochemical localization of WDR66 was not carried out because none of the available WDR66 antibodies detected a specific protein band on Western blots. Instead, the presence of WDR66-specific mRNA was probed by in situ hybridization using single-stranded RNA probes of the WDR66 gene in 4% PFA-fixed, paraffin-embedded esophageal tissues. WDR66 transcription (positive staining) was specifically detected in esophageal squamous carcinoma cells but not in normal squamous epithelia (Figure 1C). Furthermore, WDR66 expression was examined in 14 human cell lines and 20 normal human tissues by qRT-PCR. WDR66 was abundantly expressed only in the human esophageal squamous cell carcinoma cell line KYSE520, and not in any other human cell line tested (OE33, SW480, HT29, HCT116, LS174T, Caco2, HL60, HEK293, Daudi, Capan1, MCF7, MDA-MB231 and MDA-MB435) (Figure 1D).
Among the 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in the testis (Figure 1E). Thus, our data suggest that WDR66 might be a cancer/testis antigen.
WDR66 is highly expressed in esophageal squamous cell carcinoma. A: mRNA expression of the WDR66 gene as determined by microarray analysis. Microarray analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the WDR66 gene. The WDR66 gene is significantly differentially expressed in ESCC (corrected p-value < 0.0001); its expression level is low in NE but high in ESCC cases. B: Relative mRNA expression of the WDR66 gene in an independent validation cohort. Quantitation was done relative to the GAPDH transcript. Significance of differential expression between groups was calculated (p-value < 0.001). WDR66 expression is highest in ESCC and low to absent in NE and the other carcinomas. The horizontal axis depicts the patient groups ESCC, NE, EAC, GAC and CRC. C: The WDR66 gene is highly expressed in ESCC epithelium according to in situ hybridization. In situ hybridization was done using antisense probes of the human WDR66 gene in FFPE sections of esophageal specimens. Signals for WDR66 transcripts were observed specifically in esophageal squamous cell carcinoma (ESCC, right), but not in normal squamous epithelium (NE, left). D: WDR66 expression levels in various human cell lines. WDR66 expression was examined by quantitative real-time PCR in 14 cell lines derived from different human carcinomas. Expression was quantified relative to the human esophageal squamous carcinoma cell line KYSE520. E: Tissue-specific expression of the WDR66 gene in various human normal tissues.
Quantitative real-time PCR analysis of WDR66 expression levels in 20 human normal tissues (FirstChoice® Human Total RNA Survey Panel). The WDR66 gene is preferentially expressed in testis. Gene levels were quantified relative to the expression in the ESCC cell line KYSE520.
High expression of WDR66 correlates with poor survival outcome in ESCC
To test whether WDR66 expression correlates with prognosis, we determined WDR66 expression in an independent set of n = 25 ESCC samples using qRT-PCR (Additional file 1: Table S1). High expression of WDR66 RNA was found to be a significant prognostic factor with regard to cancer-related survival (P = 0.031; Figure 2). When analyzed together with various clinicopathological parameters, such as gender (P = 0.804), age (P = 0.432), tumor differentiation (P = 0.032), pT factor (P = 0.234), lymph node metastasis (P = 0.545), distant metastasis (P = 0.543) and TNM stage (P = 0.002; Table 1), multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042; Table 1).
High WDR66 mRNA expression is associated with poor survival in ESCC patients. Kaplan-Meier analysis of survival of patients grouped according to WDR66 expression as measured by quantitative real-time RT-PCR. Grouping of patients according to the median of the qRT-PCR measurements was done as follows: WDR66 ≤ 125, WDR66 low (n = 12); WDR66 > 125, WDR66 high (n = 13). After choosing an optimal cut-point, analysis for WDR66 was done using maximally selected rank statistics. Patients expressing WDR66 at low levels showed significantly better overall survival than those with high WDR66 expression (P = 0.031; log rank).
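The Kaplan-Meier comparison of the WDR66-low and WDR66-high groups rests on the product-limit estimator, which steps the survival probability down at each event time. A minimal sketch with toy data (not patient data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimator.
    Returns [(event time, survival probability)] pairs."""
    s = 1.0
    curve = []
    for t in sorted({ti for ti, e in zip(times, events) if e}):
        n = sum(1 for ti in times if ti >= t)                          # at risk
        d = sum(1 for ti, e in zip(times, events) if ti == t and e)    # deaths
        s *= 1 - d / n
        curve.append((t, s))
    return curve

# Survival drops only at deaths; the censored case (event = 0)
# merely shrinks the risk set for later times.
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1]))
```

Curves estimated this way for the two WDR66 groups are then compared with the log-rank test, as in Figure 2.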
Cox regression analysis for factors possibly influencing disease-specific survival in patients with ESCC in our cohort
Knockdown of WDR66 in KYSE520 affected VIM and OCLN expression in vitro
To learn more about the function of WDR66, RNA interference was used to silence its expression in KYSE520 cells, a human esophageal squamous cell carcinoma cell line that highly expresses WDR66.
Subsequently, microarray expression analysis was performed to identify the genes affected by WDR66 knockdown. A total of 699 genes were identified based on a two-fold expression change with a p-value of p < 0.001. To link the observed expression profile to gene function, these 699 differentially expressed genes were subjected to gene ontology (GO) analysis. Functional enrichment analysis identified 10 GO terms significantly associated with the WDR66 knockdown (Additional file 2: Table S2), all of which are membrane related. We checked the expression of vimentin and occludin in the ESCC patients of our cohort and found that vimentin was more highly expressed (p = 0.0008) while occludin was less expressed (p < 0.0001) in ESCC specimens compared with NE (Figure 3A). The microarray data were validated by qRT-PCR and Western blotting, providing independent evidence of the changes in vimentin (VIM) and occludin (OCLN) expression associated with the WDR66 knockdown (Figure 3B, 3C).
Knockdown of the WDR66 gene affects expression of vimentin and occludin. A: mRNA expression of the VIM and OCLN genes in the original training cohort determined by microarray analysis. Array data analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the genes of interest. VIM and OCLN are significantly differentially expressed in ESCC compared with NE (corrected p-values: VIM p = 0.0008; OCLN p < 0.0001). VIM expression is low in NE but high in ESCC cases, whereas OCLN expression is high in NE but low in ESCC cases. The horizontal axis depicts the patient groups ESCC and NE. (∗∗∗ P < 0.001) B: Knockdown of WDR66 affects mRNA expression of VIM and OCLN. Gene expression is presented as normalized (log scale) signal intensity for mRNA expression levels of the VIM and OCLN genes.
VIM expression was significantly down-regulated whereas OCLN expression was significantly higher in cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and AllStar (negative control siRNA) (corrected p-values: VIM p = 0.0286; OCLN p = 0.0186). Data are representative of four independent experiments. (∗p < 0.05) C: Detection of vimentin and occludin protein by immunoblotting of KYSE520 cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and AllStar (negative control siRNA). β-actin was used as loading control. Vimentin was significantly down-regulated while occludin was expressed at significantly higher levels in cells treated with WDR66 siRNA.
Knockdown of WDR66 in KYSE520 cells affects cell motility and results in growth suppression
Because WDR66 affects the expression of vimentin, an important EMT marker that plays a central role in reversible trans-differentiation, and of occludin, an adhesion molecule that is a constituent of tight junctions, we hypothesized that WDR66 may regulate the motility of esophageal squamous cancer cells.
WDR66 was silenced in the human squamous cell carcinoma cell line KYSE520 by RNA interference, and transfection efficiency was evaluated by qPCR. Cell migration assays showed that KYSE520 cells transfected with WDR66 siRNA displayed a motility capacity of only 40% compared with cells transfected with control siRNA (AllStar) after 16 hours (Figure 4A). Moreover, introduction of siWDR66 markedly suppressed the growth of KYSE520 cells in comparison to control cells (Figure 4B). To visualize the involvement of WDR66 in cell migration and proliferation, a wound-healing assay was carried out; the insert was removed at defined time points after scratching and the results were recorded photographically (Figure 4C). Thus, our data suggest that WDR66 promotes cell proliferation and affects cell motility.
Knockdown of the WDR66 gene affects cell motility and cell growth. A: Cell motility assays showed that knockdown of WDR66 reduced cell migration after 16 hours; about 35% of the siWDR66 cells migrated in comparison to the mock control cells. The differences between siWDR66 and AllStar (negative control siRNA) cells were significant (p = 0.0032, paired t test). The data shown are representative of three independent experiments, each done in quadruplicate. B: Knockdown of WDR66 leads to suppression of cell growth. A total of 1.5×10⁴ cells were seeded at day 0. WDR66 knockdown cells grew more slowly than cells treated with AllStar (negative control siRNA); the difference was highly significant (p = 0.0098). C: Wound-healing assays show that knockdown of WDR66 reduces cell motility. Representative images are shown, taken immediately after the insert was removed and 8 hours later. Original magnification ×400.
Conclusions
In summary, we have identified WDR66 as a potential novel prognostic marker and a promising therapeutic target for ESCC. This conclusion is based on our observations that (1) WDR66 is specifically highly expressed in esophageal squamous cell carcinoma and high WDR66 expression correlates with poor overall survival, (2) WDR66 regulates vimentin and occludin expression and plays a crucial role in EMT, and (3) knockdown of WDR66 suppresses cell growth and motility and decreases the viability of ESCC cells. Therefore, we propose that WDR66 plays a major role in ESCC biology. Our functional data furthermore warrant further investigation of WDR66 as a novel selective drug target for ESCC treatment.
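The survival correlation summarized above rests on dichotomizing patients at an expression cut-point chosen by maximally selected rank statistics (WDR66 ≤ 125 as "low"). A toy sketch of that cut-point search is below; all numbers are hypothetical, and no correction for the multiplicity of candidate cut-points is applied, unlike the real method:

```python
import math

def ranks(values):
    # 1-based ranks; assumes no ties for simplicity
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def best_cutpoint(expression, survival_months):
    """Toy maximally selected rank statistic: try each observed expression
    value (except the largest) as a cut-point and keep the one giving the
    largest absolute standardized rank-sum of survival in the 'low' group."""
    n = len(expression)
    r = ranks(survival_months)
    best_cut, best_z = None, -1.0
    for cut in sorted(set(expression))[:-1]:
        low = [ri for e, ri in zip(expression, r) if e <= cut]
        m = len(low)
        s = sum(low)
        mean = m * (n + 1) / 2.0                # rank-sum mean under H0
        var = m * (n - m) * (n + 1) / 12.0      # rank-sum variance under H0
        z = abs(s - mean) / math.sqrt(var)
        if z > best_z:
            best_cut, best_z = cut, z
    return best_cut, best_z

expr = [50, 60, 70, 130, 140, 150]  # hypothetical WDR66 qRT-PCR levels
surv = [60, 55, 50, 10, 12, 8]      # hypothetical survival (months)
cut, z = best_cutpoint(expr, surv)
```

On this toy data the scan recovers the cut separating the low-expression, long-survival group from the high-expression, short-survival group; the published analysis additionally adjusts the p-value for the cut-point search.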
[ "Background", "Patients", "Laser capture microdissection and microarray", "In situ hybridization", "RNA extraction and qRT-PCR", "Cell culture and siRNA-mediated knockdown", "Microarray analysis of WDR66 knock-down cells", "Western blotting analysis", "Cell number, cell motility and wound-healing assay", "Statistical analysis", "WDR66 is specifically highly expressed in esophageal squamous cell carcinoma", "High expression of WDR66 correlates with poor survival outcome in ESCC", "Knockdown of WDR66 in KYSE520 effected VIM and OCLD expression in vitro", "Knockdown of WDR66 in KYSE520 cells affects cell motility and results in growth suppression", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Esophageal squamous cell carcinoma (ESCC) is one of the most lethal malignancies of the digestive tract, and in most cases the initial diagnosis is established only once the malignancy is at an advanced stage [1]. Poor survival is due to the fact that ESCC frequently metastasizes to regional and distant lymph nodes, even at initial diagnosis.\nTreatment of cancer using molecular targets has brought promising results and attracts increasing attention [2-5]. Characterization of genes involved in the progression and development of ESCC may lead to the identification of new prognostic markers and therapeutic targets.\nBy whole genome-wide expression profiling, we found that WD repeat-containing protein 66 (WDR66), located on chromosome 12 (12q24.31), might be a useful biomarker for risk stratification and a modulator of the epithelial-mesenchymal transition in ESCC.\nThe WD-repeat protein family is a large family found in all eukaryotes and is implicated in a variety of functions ranging from signal transduction and transcription regulation to cell cycle control, autophagy and apoptosis [6]. The repeating units are believed to serve as a scaffold for interactions with various proteins [7]. According to whole-genome sequence analysis, there are 136 WD-repeat proteins in humans, all belonging to the same structural class [8]. Among the WD-repeat proteins, endonuclein, which contains five WD-repeat domains, was shown to be upregulated in pancreatic cancer [9]. Expression of human BTRC (beta-transducin repeat-containing protein), which contains one F-box and seven WD-repeats, targeted to epithelial cells under a tissue-specific promoter in BTRC-deficient (−/−) female mice, promoted the development of mammary tumors [10]. WDRPUH (WD repeat-containing protein 16), encoding a protein with 11 highly conserved WD-repeat domains, was also shown to be upregulated in human hepatocellular carcinomas and to be involved in promoting cell proliferation [11]. 
The WD repeat-containing protein 66 contains 9 highly conserved WD40 repeat motifs and an EF-hand-like domain. A genome-wide association study identified a single-nucleotide polymorphism located within intron 3 of WDR66 that is associated with mean platelet volume [12].\nWD-repeat proteins have been identified as tumor markers that are frequently upregulated in various cancers [11,13,14]. Here, we report for the first time that WDR66 might be an important prognostic factor for patients with ESCC, as found by whole human gene expression profiling. Moreover, to our knowledge, the role of WDR66 in esophageal cancer development and progression has not been explored until now. To this end, we examined the effect of silencing WDR66 in a further microarray analysis.\nIn addition, the effect of WDR66 on epithelial-mesenchymal transition (EMT), cell motility and tumor growth was investigated.", "Tissue samples from individuals of the first set (n = 28) were obtained from the tumor bank of the Charité Comprehensive Cancer Center. Gene expression was examined by whole-human-genome microarrays (Affymetrix, Santa Clara, CA, USA). Ten esophageal squamous cell carcinoma (ESCC) and 18 normal esophageal (NE) biopsies were randomly collected. Normal healthy esophageal biopsies were collected from patients with esophageal pain whose biopsies were diagnosed as normal squamous epithelium without pathological changes. Surgical specimens were obtained from chemotherapy-naïve patients with known ESCC (histological grade G1, UICC stage II and III) who had undergone esophagectomy. Patients’ ages ranged from 22 to 83 years, with a median age of 59 years.\nA second panel (n = 71) consisting of ESCC (n = 25), NE (n = 11), esophageal adenocarcinoma (EAC) (n = 13), gastric adenocarcinoma (GAC) (n = 15) and colorectal cancer (CRC) (n = 7) was used for qRT-PCR validation. Patients’ ages ranged from 24 to 79 years, with a median age of 63 years.\nAll samples were snap-frozen in liquid nitrogen and stored at −80°C. 
We obtained tissue specimens from all subjects with informed written consent (approved by the local ethics committee of the Charité-Universitätsmedizin, Berlin). Each specimen included in this study was histopathologically confirmed according to grade and stage by an experienced pathologist (MV, University Bayreuth).", "Laser Capture Microdissection (Cellcut, MMI AG and Nikon TE300 microscope) was used for isolating the desired cells from sections. After transferring 5 μm sections onto MMI membrane slides, these were fixed in 70% isopropyl alcohol and then stained with the MMI basic staining kit. Desired tumor cell or NE areas were selected, cut and collected. Preparation of labeled cRNA and hybridization were done using the gene chip hybridization, wash, and stain kit (Affymetrix, Santa Clara, CA, USA), as described previously [15]. Two-cycle labeling was applied to all samples. In total, 28 chip datasets were collected using Gene Chip Operating Software (GCOS, Affymetrix). The 28 specimens analyzed consisted of 10 ESCC and 18 NE. To obtain the relative gene expression measurements, probe set-level data extraction was performed with the GCRMA (Robust Multiarray Average) normalization algorithm implemented in GeneSpring GX10.2 (Agilent). All data were log2 transformed. A list of all the genes included in these microarrays and the normalized data have been deposited in the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/info/linking.html) under GEO accession number GSE26886. For gene-by-gene statistical testing, parametric tests were used to compare differences between groups. The False Discovery Rate (FDR) was controlled using the Benjamini-Hochberg procedure to correct the resulting p-values for multiple testing.", "A 148 bp fragment located at the 3’ terminal end of the human WDR66 gene (NM_144668) was subcloned into the pBluescript II vector pBS-27.16 using the primer pair forward: 5’-CAACCTgCTCCgTCAAA-3’ and reverse: 5’-TAAACATTCCTggTAACTTCAC-3’. 
The linearized plasmid was used as a template for the synthesis of antisense probes. The probe was labeled with digoxigenin-dUTP using a DIG/RNA labelling kit (Boehringer Mannheim), according to the manufacturer’s instructions. The quality and quantity of the probe were confirmed by gel electrophoresis before being used for in situ hybridization. The digoxigenin-labeled probe was applied to 5 μm dewaxed FFPE sections and hybridized at 65°C overnight in a humid chamber. After three washes to remove nonspecifically bound or unbound probe, the digoxigenin-labeled probe was detected using the alkaline phosphatase method.", "RNA extractions were carried out using the RNeasy Mini Kit (Qiagen, Valencia, CA, USA). Total RNA quality and yield were assessed using a bioanalyzer system (Agilent 2100 Bioanalyzer; Agilent Technologies, Palo Alto, CA, USA) and a spectrophotometer (NanoDrop ND-1000; NanoDrop Technologies, Wilmington, DE, USA). Only RNA with an RNA integrity number > 9.0 was used for microarray analysis.\nFor quantification of mRNA expression, qRT-PCR was performed for 3 genes plus one control, using pre-designed gene-specific TaqMan® probes and primer sets purchased from Applied Biosystems (Hs01566237_m1 for WDR66, Hs00958116_m1 for VIM, Hs00170162_m1 for OCLD, and 4326317E for GAPDH). Conditions and data analysis of qRT-PCR were as described [16].", "KYSE520, OE33, SW480, Caco2, HCT116, HT29, HL60, LS174T, Daudi, HEK293, MCF7, MDA-MB-231, MDA-MB-435 and Capan-I were obtained from the American Type Culture Collection (ATCC, Manassas, VA) and cultured according to the supplier’s instructions. For siRNA-mediated knockdown of WDR66, cells were seeded in 6-well plates on the day before transfection. On day 0, cells were transfected with siRNA at a concentration of 25 nmol/L using Thermo Scientific DharmaFECT transfection reagents according to the manufacturer’s instructions. 
The siRNA sense sequence 5’-GuuACuAAAGGuGAGCAuA-3’, corresponding to WDR66 mRNA, was chemically synthesized by Sigma-Proligo (Munich, Germany). RNA was extracted at the indicated time points as described.", "Total RNA was extracted from pellets of 10⁶ cells using the RNeasy Mini Prep Kit (Qiagen, Hilden, Germany). RNA quality was checked by Bioanalyzer (Agilent, Santa Clara, CA). Only RNA samples showing an RIN of at least 9.0 were used for labelling. Total RNA (1 μg) was labelled with Cy3 using the Low Input RNA Amplification Kit (Agilent, Santa Clara, CA). Labelled cRNAs were hybridized to Whole Human Genome 4x44K Oligonucleotide Microarrays (Agilent, Santa Clara, CA) according to the manual. Arrays were scanned using standard Agilent protocols and a G2565AA Microarray Scanner (Agilent, Santa Clara, CA). Raw expression values were determined using Feature Extraction 9.0 software (Agilent, Santa Clara, CA).", "Total cell extracts were obtained, and cell lysate containing 50 μg of protein was separated on a 10% SDS-polyacrylamide gel and then blotted onto polyvinylidene difluoride (PVDF) membranes (Millipore, Bedford, MA, USA). The primary antibody for vimentin detection was a mouse monoclonal anti-human vimentin antibody (Sigma-Aldrich Corporation, V5255, 1:200, approximately 54 kDa). The primary antibody for occludin detection was a rabbit polyclonal anti-human occludin antibody (Invitrogen, 71–1500, 1:500, at 65 kDa). ß-actin was used as loading control (Abcam, 1:2000, ab8226). Signals were detected using an ECL kit (Amersham Pharmacia Biotech, Piscataway, NJ, USA). Images were scanned by FujiFilm LAS-1000 (FujiFilm, Düsseldorf, Germany).", "Cells were seeded in 6-well plates 24 h before transfection. Transfection was done as described above. Cells were collected at the indicated time points, and cell numbers were measured using a POLARstar Omega reader (BMG Labtech, Offenburg, Germany). Excitation and emission filters were 485 and 520 nm, respectively. 
The results were analyzed with MARS data analysis software. Cell motility was determined using 12-well Transwell Permeable Support inserts with polycarbonate filters with a pore size of 8 μm (Corning Costar, Lowell, MA), according to the manufacturer’s instructions. Wound-healing assays were performed in triplicate using the CytoSelect 24-well wound healing assay (Cell Biolabs, Inc.).", "Statistical analysis was done using GraphPad Prism version 5 for Windows (GraphPad Software) and SPSS version 13 for Windows (SPSS, Chicago, IL, USA) as follows: GraphPad Prism, unpaired t test with Welch’s correction of quantitative real-time RT-PCR measurements of WDR66 in patient samples and of gene expression measurements in the validation cohort; nonparametric Mann–Whitney U test of cell numbers, motility assay, and cell wound assay after knockdown of WDR66; SPSS, Kaplan-Meier survival analysis and log-rank statistics, cut-point analysis of qRT-PCR measurements of WDR66 in patient samples using maximally selected rank statistics to determine the value separating a group into two subgroups with the most significant difference when used as a cut-point; grouping of patients according to the median of qRT-PCR measurements was done as follows: WDR66 ≤ 125, WDR66 low; WDR66 > 125, WDR66 high; the stratified Cox regression model was used to determine prognostic factors in a multivariate analysis with WDR66 dichotomized at the previously determined cut-points.", "Whole genome-wide expression profiling was performed on 28 esophageal specimens (GSE26886). To ensure that only epithelial cells were studied, we applied the laser capture microdissection technique to the specimens. A number of genes differentially expressed between ESCC samples and normal esophageal squamous epithelium samples were identified. The probe set with the highest fold change and lowest p-value represented the WDR66 transcript (P < 0.0001) (Figure 1A). 
As a validation study, WDR66 expression was examined by qRT-PCR in an independent cohort consisting of 71 specimens including ESCC (n = 25), NE (n = 11), EAC (n = 13), GAC (n = 15) and CRC (n = 7). We found that WDR66 was highly expressed in 96% of ESCC patients (Figure 1B). Confirming our previous results from the microarray study, WDR66 expression was found to be significantly higher in ESCC compared to NE as well as the other three cancer types examined in this cohort (P < 0.0001). Immunohistochemical localization of WDR66 was not carried out because none of the available WDR66 antibodies allowed detection of a specific protein band on Western blots. The presence of WDR66-specific mRNA was probed by in-situ hybridization using single-stranded RNA probes of the WDR66 gene in 4% PFA-fixed paraffin-embedded esophageal tissues. WDR66 transcription (positive staining) was specifically detected in the esophageal squamous carcinoma cells but not in normal squamous epithelia (Figure 1C). Furthermore, WDR66 expression was examined in 14 human cell lines and 20 normal human tissues by qRT-PCR. The WDR66 gene was abundantly expressed only in the human esophageal squamous cell carcinoma cell line KYSE520, and not in any of the other human cell lines, i.e. OE33, SW480, HT29, HCT116, LS174T, Caco2, HL60, HEK293, Daudi, Capan1, MCF7, MDA-MB231 and MDA-MB435 (Figure 1D). Among the 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in the testis (Figure 1E). Thus, our data suggest that WDR66 might be a cancer/testis antigen.\nWDR66 is highly expressed in esophageal squamous cell carcinoma. A: mRNA expression of the WDR66 gene was determined by microarray analysis. Microarray analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the WDR66 gene. 
The WDR66 gene is significantly differentially expressed in ESCC (corrected p-value < 0.0001). The expression level of the WDR66 gene is low in NE but high in ESCC cases. B: Relative mRNA expression of the WDR66 gene in an independent validation cohort. Quantitation was done relative to the GAPDH transcript. Significance of differential expression of individual genes between groups was calculated (p-value < 0.001). Results showed that WDR66 gene expression is highest in ESCC and low to absent in NE and other carcinomas. On the horizontal axis the patient groups ESCC, NE, EAC, GAC and CRC are depicted. C: The WDR66 gene is highly expressed in ESCC epithelium according to in situ hybridization. In situ hybridization was done using antisense probes of the human WDR66 gene in FFPE sections of esophageal specimens. Signals for WDR66 transcripts were observed specifically in esophageal squamous cell carcinoma (ESCC, right), but not in normal squamous epithelium (NE, left). D: WDR66 expression level in various human cell lines. WDR66 expression was examined by quantitative real-time PCR in 14 cell lines cultivated from different human carcinomas. Expression was quantified relative to the human esophageal squamous carcinoma cell line KYSE520. E: Tissue-specific expression of the WDR66 gene in various normal human tissues. Quantitative real-time PCR analysis of WDR66 expression levels in 20 normal human tissues (FirstChoice® Human Total RNA Survey Panel). The WDR66 gene is preferentially expressed in testis. Gene level was quantified relative to the expression in the ESCC cell line KYSE520.", "In order to test whether WDR66 expression correlates with prognostic markers in a separate validation set of ESCC samples, we determined WDR66 expression in an independent set of n = 25 ESCC samples using qRT-PCR (Additional file 1: Table S1). High expression of WDR66 RNA was found to be a significant prognostic factor with regard to cancer-related survival (P = 0.031; Figure 2). 
When analyzed with regard to various clinicopathological parameters, such as gender (P = 0.804), age (P = 0.432), tumor differentiation (P = 0.032), pT factor (P = 0.234), lymph node metastasis (P = 0.545), distant metastasis (P = 0.543) and TNM stage (P = 0.002; Table 1), multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042; Table 1).\nHigh WDR66 mRNA expression is associated with poor survival in ESCC patients. Kaplan-Meier analysis of survival of patients grouped according to WDR66 expression as measured by quantitative real-time RT-PCR. Grouping of patients according to the median of qRT-PCR measurements was done as follows: WDR66 ≤ 125, WDR66 low (n = 12); WDR66 > 125, WDR66 high (n = 13). After choosing an optimal cut-point, analysis for WDR66 was done using maximally selected rank statistics. The group of patients expressing WDR66 at low levels showed significantly better overall survival compared with the group with high levels of WDR66 expression (P = 0.031; log-rank).\nCox regression analysis for factors possibly influencing disease-specific survival in patients with ESCC in our cohort", "In order to learn more about the function of WDR66, RNA interference was used to silence its expression in KYSE520 cells, a human esophageal squamous cell carcinoma cell line that highly expresses WDR66. Subsequently, a microarray expression analysis was performed to identify the genes affected by WDR66 knockdown. A total of 699 genes were identified based on a two-fold expression difference with a p-value < 0.001. To link the observed gene expression profile to gene function, these 699 differentially expressed genes were subjected to gene ontology (GO) analysis. Functional enrichment analysis identified 10 GO terms significantly associated with the WDR66 knockdown (Additional file 2: Table S2). All 10 GO terms are membrane related. 
We checked the expression of vimentin and occludin in the ESCC patients of our cohort, and found that vimentin was highly expressed (p = 0.0008) while occludin was expressed at lower levels (p < 0.0001) in ESCC specimens in comparison to NE (Figure 3A). Microarray data were validated by qRT-PCR and Western blotting, providing independent evidence of the changes in vimentin (VIM) and occludin (OCLN) expression associated with the WDR66 knockdown (Figure 3B, 3C).\nKnockdown of the WDR66 gene affects expression of vimentin and occludin. A: mRNA expression of the VIM and OCLN genes in the original training cohort determined by microarray analysis. Array data analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the genes of interest. VIM and OCLN are significantly differentially expressed in ESCC compared with NE (corrected p-value: VIM p = 0.0008; OCLN p < 0.0001). VIM expression is low in NE but high in ESCC cases, whereas OCLN expression is high in NE but low in ESCC cases. The horizontal axis depicts the patient groups ESCC and NE. (∗∗∗ P < 0.001) B: Knockdown of WDR66 affects mRNA expression of VIM and OCLN. Gene expression is presented as normalized (log scale) signal intensity for mRNA expression levels of the VIM and OCLN genes. VIM expression was significantly downregulated, whereas OCLN expression was significantly higher, in cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and AllStar (negative control siRNA) (corrected p-value: VIM p = 0.0286; OCLN p = 0.0186). Data are representative of four independent experiments. (∗p < 0.05). C: Detection of vimentin and occludin protein by immunoblotting of KYSE520 cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and AllStar (negative control siRNA). β-actin was used as loading control. 
Vimentin expression was significantly downregulated while occludin was expressed at significantly higher levels in cells treated with WDR66 siRNA.", "Due to the effect of WDR66 on the expression of vimentin, an important EMT marker that plays a central role in reversible trans-differentiation, and of occludin, an adhesion molecule that is a constituent of tight junctions, we hypothesized that WDR66 may regulate the motility of esophageal squamous cancer cells. WDR66 was silenced in the human squamous cell carcinoma cell line KYSE520 by RNA interference. Transfection efficiency was evaluated by qPCR. Cell migration assays showed that KYSE520 cells transfected with WDR66 siRNA displayed a motility capacity of only 40% after 16 hours compared to cells transfected with control siRNA (AllStar) (Figure 4A). Moreover, we found that introduction of siWDR66 markedly suppressed growth of KYSE520 cells in comparison to control cells (Figure 4B). To visualize the involvement of WDR66 in cell migration and proliferation, a wound-healing assay was carried out. The insert was removed at defined time points after scratching and the results were recorded photographically (Figure 4C). Thus, our data suggest that WDR66 promotes cell proliferation and affects cell motility.\nKnockdown of the WDR66 gene affects cell motility and cell growth. A: Cell motility assays showed that knockdown of WDR66 reduced cell migration after 16 hours. About 35% of the siWDR66 cells migrated in comparison to the mock control cells. The differences between siWDR66 and AllStar (negative control siRNA) cells were significant (p = 0.0032, paired t test). The data shown are representative of three independent experiments, each performed in quadruplicate. B: Knockdown of WDR66 leads to suppression of cell growth. A total of 1.5×10⁴ cells were seeded at day 0. WDR66 knockdown cells grew slower than cells treated with AllStar (negative control siRNA). 
The differences between siWDR66 and AllStar (negative control siRNA) treated cells were highly significant (p = 0.0098). C: Wound-healing assays show that knockdown of WDR66 reduces cell motility. Representative images are shown. Images were taken immediately after the insert was removed and 8 hours later. Original magnification ×400.", "CRC: Colorectal cancers; EAC: Esophageal adenocarcinoma; ESCC: Esophageal squamous cell carcinoma; EMT: Epithelial to mesenchymal transition; GAC: Gastric adenocarcinoma; NE: Normal esophageal; OCLD: Occludin; qRT-PCR: Quantitative reverse transcription-polymerase chain reaction; siRNA: Small interfering RNA; VIM: Vimentin; WDR66: WD repeat-containing protein 66.", "The authors declare that they have no competing interests.", "QW made substantial contributions to conception and design, acquisition of data, and analysis and interpretation of data; CM was involved in data acquisition, data analysis and interpretation, drafting the manuscript and revising it critically for important intellectual content; and WK contributed to writing and revising the paper and has given final approval of the version to be published. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/13/137/prepub\n" ]
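The Benjamini-Hochberg FDR control used for the microarray p-values in the methods above can be sketched in a few lines; the p-values below are made up for illustration:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up FDR adjustment.
    Returns adjusted p-values in the original order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for k in range(n - 1, -1, -1):  # walk from the largest to the smallest p-value
        i = order[k]
        running_min = min(running_min, pvals[i] * n / (k + 1))
        adjusted[i] = running_min
    return adjusted

# Hypothetical per-gene p-values from a group comparison
raw = [0.01, 0.04, 0.03, 0.005]
adj = benjamini_hochberg(raw)  # ≈ [0.02, 0.04, 0.04, 0.02]
```

The step-up pass from the largest to the smallest p-value enforces monotonicity of the adjusted values, so a gene can never end up with a smaller adjusted p-value than a gene with a smaller raw p-value.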
[ "Background", "Methods", "Patients", "Laser capture microdissection and microarray", "In situ hybridization", "RNA extraction and qRT-PCR", "Cell culture and siRNA-mediated knockdown", "Microarray analysis of WDR66 knock-down cells", "Western blotting analysis", "Cell number, cell motility and wound-healing assay", "Statistical analysis", "Results", "WDR66 is specifically highly expressed in esophageal squamous cell carcinoma", "High expression of WDR66 correlates with poor survival outcome in ESCC", "Knockdown of WDR66 in KYSE520 effected VIM and OCLD expression in vitro", "Knockdown of WDR66 in KYSE520 cells affects cell motility and results in growth suppression", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
[ "Esophageal squamous cell carcinoma (ESCC) is one of the most lethal malignancies of the digestive tract and in most cases the initial diagnosis is established only once the malignancy is in the advanced stage [1]. Poor survival is due to the fact that ESCC frequently metastasizes to regional and distant lymph nodes, even at initial diagnosis.\nTreatment of cancer using molecular targets has brought promising results and attracts more and more attention [2-5]. Characterization of genes involved in the progression and development of ESCC may lead to the identification of new prognostic markers and therapeutic targets.\nBy whole genome-wide expression profiling, we found that WD repeat-containing protein 66 (WDR66), located on chromosome 12 (12q24.31), might be a useful biomarker for risk stratification and a modulator for epithelial-mesenchymal transition of ESCC.\nWD-repeat protein family is a large family found in all eukaryotes and is implicated in a variety of functions ranging from signal transduction and transcription regulation to cell cycle control, autophagy and apoptosis [6]. These repeating units are believed to serve as a scaffold for multiple protein interactions with various proteins [7]. According to whole-genome sequence analysis, there are 136 WD-repeat proteins in humans which belong to the same structural class [8]. Among the WD-repeat proteins, endonuclein containing five WD-repeat domains was shown to be up regulated in pancreatic cancer [9]. The expression of human BTRC (beta-transducing repeat-containing protein), which contains one F-box and seven WD-repeats, targeted to epithelial cells under tissue specific promoter in BTRC deficient (−/−) female mice, promoted the development of mammary tumors [10]. WDRPUH (WD repeat-containing protein 16) encoding a protein containing 11 highly conserved WD-repeat domains was also shown to be up regulated in human hepatocellular carcinomas and involved in promotion of cell proliferation [11]. 
The WD repeat-containing protein 66 contains 9 highly conserved WD40 repeat motifs and an EF-hand-like domain. A genome-wide association study identified a single-nucleotide polymorphism located within intron 3 of WDR66 associated with mean platelet volume [12].\nWD-repeat proteins have been identified as tumor markers that were frequently up-regulated in various cancers [11,13,14]. Here, we report for the first time that WDR66 might be an important prognostic factor for patients with ESCC as found by whole human gene expression profiling. Moreover, to our knowledge, the role of WDR66 in esophageal cancer development and progression has not been explored up to now. To this end we examined the effect of silencing of WDR66 by another microarray analysis.\nIn addition, the effect of WDR66 on epithelial-mesenchymal transition (EMT), cell motility and tumor growth was investigated.", " Patients Tissue samples from individuals of the first set (n = 28) were obtained from the tumor bank of Charité Comprehensive Cancer Center. Gene expression was examined by whole-human-genome microarrays (Affymetrix, Santa Clara, CA, USA). Ten esophageal squamous cell carcinoma (ESCC) and 18 normal esophageal (NE) biopsies were randomly collected. Normal healthy esophageal biopsies were collected from patients with esophageal pain but diagnosed as normal squamous without pathological changes. Surgical specimens of chemotherapy-naïve patients with known ESCC of histological grading G1, UICC stage II and III, had undergone esophagectomy. Patients’ age ranged from 22 to 83 years, with a median age of 59 years.\nA second panel (n = 71) consisting of ESCC (n = 25), NE (n = 11), esophageal adenocarcinoma (EAC) (n = 13), gastric adenocarcinoma (GAC) (n = 15) and colorectal cancers (CRC) (n = 7) was used for qRT-PCR validation. Patients’ age ranged from 24 to 79 years, with a median age of 63 years.\nAll samples were snap-frozen in liquid nitrogen and stored at −80°C. 
We obtained tissue specimens from all subjects with informed written consent (approved by the local ethics committee of the Charité-Universitätsmedizin, Berlin). Each single specimen included in this study was histopathologically approved according to grade and stage by an experienced pathologist ( MV, University Bayreuth).\nTissue samples from individuals of the first set (n = 28) were obtained from the tumor bank of Charité Comprehensive Cancer Center. Gene expression was examined by whole-human-genome microarrays (Affymetrix, Santa Clara, CA, USA). Ten esophageal squamous cell carcinoma (ESCC) and 18 normal esophageal (NE) biopsies were randomly collected. Normal healthy esophageal biopsies were collected from patients with esophageal pain but diagnosed as normal squamous without pathological changes. Surgical specimens of chemotherapy-naïve patients with known ESCC of histological grading G1, UICC stage II and III, had undergone esophagectomy. Patients’ age ranged from 22 to 83 years, with a median age of 59 years.\nA second panel (n = 71) consisting of ESCC (n = 25), NE (n = 11), esophageal adenocarcinoma (EAC) (n = 13), gastric adenocarcinoma (GAC) (n = 15) and colorectal cancers (CRC) (n = 7) was used for qRT-PCR validation. Patients’ age ranged from 24 to 79 years, with a median age of 63 years.\nAll samples were snap-frozen in liquid nitrogen and stored at −80°C. We obtained tissue specimens from all subjects with informed written consent (approved by the local ethics committee of the Charité-Universitätsmedizin, Berlin). Each single specimen included in this study was histopathologically approved according to grade and stage by an experienced pathologist ( MV, University Bayreuth).\n Laser capture microdissection and microarray Laser Capture Microdissection (Cellcut, MMI AG and Nikon TE300 microscope) was used for isolating desired cells from sections. 
After transferring 5 μm sections onto MMI membrane slides, the sections were fixed in 70% isopropyl alcohol and then stained with the MMI basic staining kit. The desired tumor cell or NE areas were selected, cut and collected. Preparation of labeled cRNA and hybridization were done using the GeneChip hybridization, wash, and stain kit (Affymetrix, Santa Clara, CA, USA), as described previously [15]. Two-cycle labeling was applied to all samples. In total, 28 chip datasets were collected using GeneChip Operating Software (GCOS, Affymetrix). The 28 specimens analyzed consisted of 10 ESCC and 18 NE samples. To obtain relative gene expression measurements, probe set-level data extraction was performed with the GCRMA (GC Robust Multiarray Average) normalization algorithm implemented in GeneSpring GX10.2 (Agilent). All data were log2 transformed. A list of all genes included in these microarrays and the normalized data have been deposited in the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/info/linking.html) under GEO accession number GSE26886. For gene-by-gene statistical testing, parametric tests were used to compare differences between groups. The false discovery rate (FDR) was controlled with the Benjamini-Hochberg procedure to correct the resulting p-values for multiple testing.\n In situ hybridization A 148 bp fragment located at the 3′ terminal end of the human WDR66 gene (NM_144668) was subcloned into the pBluescript II vector pBS-27.16 using the primer pair forward: 5’-CAACCTgCTCCgTCAAA-3’ and reverse: 5’-TAAACATTCCTggTAACTTCAC-3’. The linearized plasmid was used as a template for the synthesis of antisense probes. 
The probe was labeled with digoxigenin-dUTP using a DIG RNA labeling kit (Boehringer Mannheim), according to the manufacturer’s instructions. The quality and quantity of the probe were confirmed by gel electrophoresis before it was used for in situ hybridization. The digoxigenin-labeled probe was applied to 5 μm dewaxed FFPE sections and hybridized at 65°C overnight in a humid chamber. After three washes to remove nonspecifically bound or unbound probe, the digoxigenin-labeled probe was detected using the alkaline phosphatase method.\n RNA extraction and qRT-PCR RNA extractions were carried out using the RNeasy Mini Kit (Qiagen, Valencia, CA, USA). Total RNA quality and yield were assessed using a bioanalyzer system (Agilent 2100 Bioanalyzer; Agilent Technologies, Palo Alto, CA, USA) and a spectrophotometer (NanoDrop ND-1000; NanoDrop Technologies, Wilmington, DE, USA). Only RNA with an RNA integrity number > 9.0 was used for microarray analysis.\nFor quantification of mRNA expression, qRT-PCR was performed for three genes plus one control, using pre-designed gene-specific TaqMan® probes and primer sets purchased from Applied Biosystems (Hs01566237_m1 for WDR66, Hs00958116_m1 for VIM, Hs00170162_m1 for OCLN, and 4326317E for GAPDH). 
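Relative quantification against the GAPDH control is commonly done with the 2^−ΔΔCt method; the exact conditions and analysis follow reference [16], so the following is only a hedged sketch of that standard calculation, with invented Ct values:

```python
# Hedged sketch of relative mRNA quantification by the 2^-ΔΔCt (Livak) method,
# assuming the cited protocol [16] follows this standard approach.
# All Ct values below are invented for illustration.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene vs. a reference gene (e.g. GAPDH),
    relative to a calibrator sample (e.g. normal esophageal tissue)."""
    delta_ct_sample = ct_target - ct_ref              # normalize to GAPDH
    delta_ct_calibrator = ct_target_cal - ct_ref_cal  # same for calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Example: WDR66 Ct 24 vs GAPDH Ct 20 in tumor; 30 vs 20 in the calibrator
fold = relative_expression(24.0, 20.0, 30.0, 20.0)
print(fold)  # 64.0 -> 64-fold higher than the calibrator
```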
Conditions and data analysis of qRT-PCR were as described [16].\n Cell culture and siRNA-mediated knockdown KYSE520, OE33, SW480, Caco2, HCT116, HT29, HL60, LS174T, Daudi, HEK293, MCF7, MDA-MB-231, MDA-MB-435 and Capan-I cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA) and cultured according to the supplier’s instructions. For siRNA-mediated knockdown of WDR66, cells were seeded in 6-well plates on the day before transfection. On day 0, cells were transfected with siRNA at a concentration of 25 nmol/L using Thermo Scientific DharmaFECT transfection reagents according to the manufacturer’s instructions. The siRNA sense sequence 5’-GuuACuAAAGGuGAGCAuA-3’ corresponding to WDR66 mRNA was chemically synthesized by Sigma-Proligo (Munich, Germany). RNA was extracted at the indicated time points as described.\n Microarray analysis of WDR66 knock-down cells Total RNA was extracted from a pellet of 10⁶ cells using the RNeasy Mini Prep Kit (Qiagen, Hilden, Germany). RNA quality was checked on a Bioanalyzer (Agilent, Santa Clara, CA); only RNA samples showing a RIN of at least 9.0 were used for labelling. Total RNA (1 μg) was labelled with Cy3 using the Low Input RNA Amplification Kit (Agilent, Santa Clara, CA). 
Labelled cRNAs were hybridized to Whole Human Genome 4x44K Oligonucleotide Microarrays (Agilent, Santa Clara, CA) according to the manual. Arrays were scanned using standard Agilent protocols and a G2565AA Microarray Scanner (Agilent, Santa Clara, CA). Raw expression values were determined using Feature Extraction 9.0 software (Agilent, Santa Clara, CA).\n Western blotting analysis Total cell extracts were obtained, and cell lysate containing 50 μg of protein was separated on a 10% SDS-polyacrylamide gel and then blotted onto polyvinylidene difluoride (PVDF) membranes (Millipore, Bedford, MA, USA). The primary antibody for vimentin detection was a mouse monoclonal anti-human vimentin antibody (Sigma-Aldrich Corporation, V5255, 1:200, approximately 54 kDa). The primary antibody for occludin detection was a rabbit polyclonal anti-human occludin antibody (Invitrogen, 71–1500, 1:500, at 65 kDa). β-actin was used as the loading control (Abcam, 1:2000, ab8226). Signals were detected using an ECL kit (Amersham Pharmacia Biotech, Piscataway, NJ, USA). 
Images were scanned with a FujiFilm LAS-1000 (FujiFilm, Düsseldorf, Germany).\n Cell number, cell motility and wound-healing assay Cells were seeded in 6-well plates 24 h before transfection. Transfection was done as described above. Cells were collected at the indicated time points, and cell numbers were measured using a POLARstar Omega reader (BMG Labtech, Offenburg, Germany). Excitation and emission filters were 485 and 520 nm, respectively. The results were analyzed with MARS data analysis software. 
Cell motility was determined using 12-well Transwell Permeable Support inserts with polycarbonate filters with a pore size of 8 μm (Corning Costar, Lowell, MA), according to the manufacturer’s instructions. Wound-healing assays were performed in triplicate using the CytoSelect 24-well wound healing assay (Cell Biolabs, Inc.).\n Statistical analysis Statistical analysis was done using GraphPad Prism version 5 for Windows (GraphPad Software) and SPSS version 13 for Windows (SPSS, Chicago, IL, USA) as follows. GraphPad Prism: unpaired t test with Welch’s correction for the quantitative real-time RT-PCR measurements of WDR66 in patient samples and for the gene expression measurements in the validation cohort; nonparametric Mann–Whitney U test for cell numbers, the motility assay and the wound-healing assay after knockdown of WDR66. SPSS: Kaplan-Meier survival analysis and log-rank statistics; cut-point analysis of the qRT-PCR measurements of WDR66 in patient samples using maximally selected rank statistics to determine the value that, when used as a cut-point, separates the cohort into two groups with the most significant difference. Grouping of patients according to the median of the qRT-PCR measurements was done as follows: WDR66 ≤ 125, WDR66 low; WDR66 > 125, WDR66 high. The stratified Cox regression model was used to determine prognostic factors in a multivariate analysis, with WDR66 dichotomized at the previously determined cut-points.", " WDR66 is specifically highly expressed in esophageal squamous cell carcinoma Whole genome-wide expression profiling was performed on 28 esophageal specimens (GSE26886). To ensure that only epithelial cells were studied, we applied the laser capture microdissection technique to the specimens. A number of genes differentially expressed between ESCC samples and normal esophageal squamous epithelium samples were identified. The probe set with the highest fold change and lowest p-value represented the WDR66 transcript (P < 0.0001) (Figure 1A). 
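The gene-by-gene testing with Benjamini-Hochberg FDR control named in the Methods can be sketched in a few lines. This is an illustrative pure-Python version, not the authors' actual pipeline (GeneSpring performs the equivalent step internally):

```python
# Illustrative sketch of the Benjamini-Hochberg step-up FDR procedure
# applied to per-gene p-values (function name and example values are ours).
def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values (q-values), preserving input order."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    prev = 1.0
    # step-up: walk from the largest p-value down, enforcing monotonicity
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvalues[i] * n / rank)
        adjusted[i] = q
        prev = q
    return adjusted

print([round(q, 3) for q in benjamini_hochberg([0.01, 0.04, 0.03, 0.005])])
# [0.02, 0.04, 0.04, 0.02]
```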
As a validation study, WDR66 expression was examined by qRT-PCR in an independent cohort consisting of 71 specimens, including ESCC (n = 25), NE (n = 11), EAC (n = 13), GAC (n = 15) and CRC (n = 7). We found that WDR66 was highly expressed in 96% of ESCC patients (Figure 1B). Confirming our previous results from the microarray study, WDR66 expression was significantly higher in ESCC than in NE as well as in the other three cancer types examined in this cohort (P < 0.0001). Immunohistochemical localization of WDR66 was not carried out because none of the available WDR66 antibodies detected a specific protein band on Western blots. The presence of WDR66-specific mRNA was instead probed by in situ hybridization using single-stranded RNA probes for the WDR66 gene in 4% PFA-fixed, paraffin-embedded esophageal tissues. WDR66 transcription (positive staining) was specifically detected in esophageal squamous carcinoma cells but not in normal squamous epithelia (Figure 1C). Furthermore, WDR66 expression was examined in 14 human cell lines and 20 normal human tissues by qRT-PCR. The WDR66 gene was abundantly expressed only in the human esophageal squamous cell carcinoma cell line KYSE520 and was not expressed in any of the other human cell lines examined (OE33, SW480, HT29, HCT116, LS174T, Caco2, HL60, HEK293, Daudi, Capan1, MCF7, MDA-MB231 and MDA-MB435) (Figure 1D). Among the 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in the testis (Figure 1E). Thus, our data suggest that WDR66 might be a cancer/testis antigen.\nWDR66 is highly expressed in esophageal squamous cell carcinoma. A: mRNA expression of the WDR66 gene was determined by microarray analysis. Microarray analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the WDR66 gene. 
The WDR66 gene is significantly differentially expressed in ESCC (corrected p-value < 0.0001). The expression level of the WDR66 gene is low in NE but high in ESCC cases. B: Relative mRNA expression of the WDR66 gene in an independent validation cohort. Quantitation was done relative to the GAPDH transcript. The significance of differential expression of the individual gene between groups was calculated (p-value < 0.001). The results show that WDR66 gene expression is highest in ESCC and low to absent in NE and the other carcinomas. The horizontal axis depicts the patient groups ESCC, NE, EAC, GAC and CRC. C: The WDR66 gene is highly expressed in ESCC epithelium according to in situ hybridization. In situ hybridization was done using antisense probes for the human WDR66 gene on FFPE sections of esophageal specimens. Signals for WDR66 transcripts were observed specifically in esophageal squamous cell carcinoma (ESCC, right), but not in normal squamous epithelium (NE, left). D: WDR66 expression levels in various human cell lines. WDR66 expression was examined by quantitative real-time PCR in 14 cell lines cultivated from different human carcinomas. Expression was quantified relative to the human esophageal squamous carcinoma cell line KYSE520. E: Tissue-specific expression of the WDR66 gene in various normal human tissues. Quantitative real-time PCR analysis of WDR66 expression levels in 20 normal human tissues (FirstChoice® Human Total RNA Survey Panel). The WDR66 gene is preferentially expressed in testis. Gene levels were quantified relative to the expression in the ESCC cell line KYSE520.\n High expression of WDR66 correlates with poor survival outcome in ESCC In order to test whether WDR66 expression correlates with prognostic markers in a separate validation set of ESCC samples, we determined WDR66 expression in an independent set of n = 25 ESCC samples using qRT-PCR (Additional file 1: Table S1). 
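The cut-point used for the survival grouping was chosen by maximally selected rank statistics (see Methods): every candidate split of the cohort is scored with a standardized rank statistic and the split with the largest statistic is kept. The sketch below is a simplified illustration with invented numbers; production implementations additionally adjust the resulting p-value for the multiplicity of candidate cut-points:

```python
# Simplified sketch of maximally selected rank statistics for cut-point
# selection. Function name and all values are invented for illustration.
def best_cutpoint(expression, outcome_ranks):
    """expression: marker values per patient; outcome_ranks: ranks of the
    outcome (e.g. survival time). Returns the cut-point whose low/high split
    maximizes the absolute standardized rank-sum statistic."""
    n = len(expression)
    order = sorted(range(n), key=lambda i: expression[i])
    best, best_stat = None, -1.0
    for k in range(1, n):  # low group = k smallest expression values
        s = sum(outcome_ranks[order[j]] for j in range(k))
        mean = k * (n + 1) / 2               # E[S] under the null
        var = k * (n - k) * (n + 1) / 12     # Var[S] under the null
        z = abs(s - mean) / var ** 0.5
        if z > best_stat:
            best_stat, best = z, expression[order[k - 1]]
    return best

expr = [10, 20, 30, 200, 220, 240]   # WDR66-like expression values
surv_rank = [6, 5, 4, 3, 2, 1]       # longer survival = higher rank
print(best_cutpoint(expr, surv_rank))  # 30: low expressors survive longer
```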
High expression of WDR66 RNA was found to be a significant prognostic factor with regard to cancer-related survival (P = 0.031; Figure 2). When analyzed with regard to various clinicopathological parameters, such as gender (P = 0.804), age (P = 0.432), tumor differentiation (P = 0.032), pT factor (P = 0.234), lymph node metastasis (P = 0.545), distant metastasis (P = 0.543) and TNM stage (P = 0.002; Table 1), multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042; Table 1).\nHigh WDR66 mRNA expression is associated with poor survival in ESCC patients. Kaplan-Meier analysis of survival of patients grouped according to WDR66 expression as measured by quantitative real-time RT-PCR. Patients were grouped according to the median of the qRT-PCR measurements as follows: WDR66 ≤ 125, WDR66 low (n = 12); WDR66 > 125, WDR66 high (n = 13). After choosing an optimal cut-point, analysis for WDR66 was done using maximally selected rank statistics. The group of patients expressing WDR66 at low levels showed significantly better overall survival compared with the group with high WDR66 expression (P = 0.031; log rank).\nCox regression analysis for factors possibly influencing disease-specific survival in patients with ESCC in our cohort\n Knockdown of WDR66 in KYSE520 affected VIM and OCLN expression in vitro In order to learn more about the function of WDR66, RNA interference was used to silence its expression in KYSE520 cells, a human esophageal squamous cell carcinoma cell line which highly expresses WDR66. Subsequently, a microarray expression analysis was performed in order to identify the genes affected by the WDR66 knockdown. A total of 699 genes were identified based on a two-fold expression difference with a p-value of p < 0.001. To link the observed gene expression profile to gene function, these 699 differentially expressed genes were subjected to gene ontology (GO) analysis. Functional enrichment analysis identified 10 GO terms significantly associated with the WDR66 knockdown (Additional file 2: Table S2).
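Enrichment of a GO term among a list of differentially expressed genes is typically scored with a one-sided hypergeometric (Fisher) test. A minimal sketch; the gene counts below are invented and not taken from Table S2:

```python
# One-sided hypergeometric enrichment score for a single GO term
# (the test behind typical GO enrichment tools).  N = background genes,
# K = genes annotated to the term, n = selected (DE) genes, k = overlap.
from math import comb

def hypergeom_enrichment_p(N: int, K: int, n: int, k: int) -> float:
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    total = comb(N, n)
    tail = sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1))
    return tail / total

# Toy call: 20 of 699 DE genes fall into a 100-gene term drawn from a
# 20,000-gene background (the expected overlap is only ~3.5 genes),
# so the term comes out strongly enriched.
p = hypergeom_enrichment_p(N=20_000, K=100, n=699, k=20)
```
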
All these 10 GO terms are membrane related. We checked the expression of vimentin and occludin in the ESCC patients of our cohort, and found that vimentin was highly expressed (p = 0.0008) while occludin was expressed at lower levels (p < 0.0001) in ESCC specimens in comparison to NE (Figure 3A). Microarray data were validated by qRT-PCR and Western blotting, providing independent evidence of the changes in vimentin (VIM) and occludin (OCLN) expression associated with the WDR66 knockdown (Figure 3B, 3C).\nKnockdown of WDR66 gene affects expression of vimentin and occludin. A: mRNA expression of the VIM and OCLN genes in the original training cohort determined by microarray analysis. Array data analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the genes of interest. VIM and OCLN are significantly differentially expressed in ESCC compared with NE (corrected p-value: VIM p = 0.0008; OCLN p < 0.0001). VIM expression is low in NE but high in ESCC cases, whereas OCLN expression is high in NE but low in ESCC cases. The horizontal axis depicts the patient groups ESCC and NE. (∗∗∗ P < 0.001) B: Knockdown of WDR66 affects mRNA expression of VIM and OCLN. Gene expression is presented as normalized (log scale) signal intensity for mRNA expression levels of the VIM and OCLN genes. Gene expression of VIM was significantly down-regulated whereas OCLN expression was significantly higher in cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA) (corrected p-value: VIM p = 0.0286; OCLN p = 0.0186). Data are representative of four independent experiments. (∗p < 0.05). C: Detection of vimentin and occludin protein by immunoblotting of KYSE520 cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA). β-actin was used as loading control.
Vimentin expression was significantly down-regulated while occludin expression was significantly higher in cells treated with WDR66 siRNA.\n Knockdown of WDR66 in KYSE520 cells affects cell motility and results in growth suppression Due to the effect of WDR66 on the expression of vimentin, an important EMT marker which plays a central role in reversible trans-differentiation, and of occludin, an adhesion molecule that is a constituent of tight junctions, we hypothesized that WDR66 may regulate the motility of esophageal squamous cancer cells. WDR66 was silenced in the human squamous cell carcinoma cell line KYSE520 by RNA interference. Transfection efficiency was evaluated by qPCR. Cell migration assays showed that KYSE520 cells transfected with WDR66 siRNA displayed a motility capacity of only 40% compared with cells transfected with control siRNA (AllStar) after 16 hours (Figure 4A). Moreover, we found that introduction of siWDR66 remarkably suppressed the growth of KYSE520 cells in comparison to control cells (Figure 4B).
In order to visualize the involvement of WDR66 in cell migration and proliferation, a wound-healing assay was carried out. The insert was removed and the results were recorded by taking pictures at defined time points (Figure 4C). Thus, our data suggest that WDR66 promotes cell proliferation and affects cell motility.\nKnockdown of WDR66 gene affects cell motility and cell growth. A: Cell motility assays showed that knockdown of WDR66 reduced cell migration after 16 hours. About 35% of the siWDR66 cells migrated compared with the mock control cells. The differences between siWDR66 and Allstar (negative control siRNA) cells were significant (p = 0.0032) using the paired t test. The data shown are representative of three independent experiments, each performed in quadruplicate. B: Knockdown of WDR66 leads to suppression of cell growth. A total of 1.5 × 10^4 cells were seeded on day 0. WDR66 knockdown cells grew more slowly than cells treated with Allstar (negative control siRNA). The differences between siWDR66- and Allstar-treated cells were highly significant (p = 0.0098). C: Wound-healing assays show that knockdown of WDR66 reduces cell motility. Representative images are shown. Images were taken immediately after the insert was removed and 8 hours later. Original magnification ×400.", "Whole genome-wide expression profiling was performed on 28 esophageal specimens (GSE26886). To ensure that only epithelial cells were studied, we applied laser capture microdissection to the specimens.
A number of genes differentially expressed between ESCC samples and normal esophageal squamous epithelium samples were identified. The probe set with the highest fold change and lowest p-value represented the WDR66 transcript (P < 0.0001) (Figure 1A). As a validation study, WDR66 expression was examined by qRT-PCR in an independent cohort consisting of 71 specimens including ESCC (n = 25), NE (n = 11), EAC (n = 13), GAC (n = 15) and CRC (n = 7). We found that WDR66 was highly expressed in 96% of ESCC patients (Figure 1B). Confirming our previous results from the microarray study, WDR66 expression was found to be significantly higher in ESCC compared to NE as well as the other three cancer types checked in this cohort (P < 0.0001). Immunohistochemical localization of WDR66 was not carried out because none of the available WDR66 antibodies detected a specific protein band on Western blots. The presence of WDR66-specific mRNA was probed by in situ hybridization using single-stranded RNA probes of the WDR66 gene in 4% PFA-fixed, paraffin-embedded esophageal tissues. WDR66 transcription (positive staining) was specifically detected in the esophageal squamous carcinoma cells but not in normal squamous epithelia (Figure 1C). Furthermore, WDR66 expression was examined in 14 human cell lines and 20 normal human tissues by qRT-PCR. WDR66 was abundantly expressed only in the human esophageal squamous cell carcinoma cell line KYSE520, and not in any other human cell line examined, such as OE33, SW480, HT29, HCT116, LS174T, Caco2, HL60, HEK293, Daudi, Capan1, MCF7, MDA-MB231 and MDA-MB435 (Figure 1D). Among the 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in the testis (Figure 1E). Thus, our data suggest that WDR66 might be a cancer/testis antigen.\nWDR66 is highly expressed in esophageal squamous cell carcinoma. A: mRNA expression of the WDR66 gene was determined by microarray analysis.
Microarray analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the WDR66 gene. The WDR66 gene is significantly differentially expressed in ESCC (corrected p-value < 0.0001). Expression of the WDR66 gene is low in NE but high in ESCC cases. B: Relative mRNA expression of the WDR66 gene in an independent validation cohort. Quantitation was done relative to the GAPDH transcript. Significance of differential expression of individual genes between groups was calculated (p-value < 0.001). Results showed that WDR66 gene expression is highest in ESCC and low to absent in NE or other carcinomas. The horizontal axis depicts the patient groups ESCC, NE, EAC, GAC and CRC. C: The WDR66 gene is highly expressed in ESCC epithelium according to in situ hybridization. In situ hybridization was done using anti-sense probes of the human WDR66 gene in FFPE sections of esophageal specimens. Signals for WDR66 transcripts were observed specifically in esophageal squamous cell carcinoma (ESCC, right), but not in normal squamous epithelium (NE, left). D: WDR66 expression level in various human cell lines. WDR66 expression was examined by quantitative real-time PCR in 14 cell lines cultivated from different human carcinomas. Expression was quantified relative to the human esophageal squamous carcinoma cell line KYSE520. E: Tissue-specific expression of the WDR66 gene in various human normal tissues. Quantitative real-time PCR analysis of WDR66 expression levels in 20 human normal tissues (FirstChoice® Human Total RNA Survey Panel). The WDR66 gene is preferentially expressed in testis.
Expression was quantified relative to the ESCC cell line KYSE520.", "In order to test whether WDR66 expression correlates with prognostic markers in a separate validation set of ESCC samples, we determined WDR66 expression in an independent set of n = 25 ESCC samples using qRT-PCR (Additional file 1: Table S1). High expression of WDR66 RNA was found to be a significant prognostic factor with regard to cancer-related survival (P = 0.031; Figure 2). When analyzed with regard to various clinicopathological parameters, such as gender (P = 0.804), age (P = 0.432), tumor differentiation (P = 0.032), pT factor (P = 0.234), lymph node metastasis (P = 0.545), distant metastasis (P = 0.543) and TNM stage (P = 0.002; Table 1), multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042; Table 1).\nHigh WDR66 mRNA expression is associated with poor survival in ESCC patients. Kaplan-Meier analysis of survival of patients grouped according to WDR66 expression as measured by quantitative real-time RT-PCR. Patients were grouped according to the median of the qRT-PCR measurements as follows: WDR66 ≤ 125, WDR66 low (n = 12); WDR66 > 125, WDR66 high (n = 13). After choosing an optimal cut-point, analysis for WDR66 was done using maximally selected rank statistics. The group of patients expressing WDR66 at low levels showed significantly better overall survival compared with the group with high WDR66 expression (P = 0.031; log rank).\nCox regression analysis for factors possibly influencing disease-specific survival in patients with ESCC in our cohort", "In order to learn more about the function of WDR66, RNA interference was used to silence its expression in KYSE520 cells, a human esophageal squamous cell carcinoma cell line which highly expresses WDR66. Subsequently, a microarray expression analysis was performed in order to identify the genes affected by the WDR66 knockdown.
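The survival grouping described above (splitting the cohort at the qRT-PCR cut-point of 125, then estimating per-group survival) can be sketched with a small Kaplan-Meier product-limit estimator. Expression values, survival times and events below are invented for illustration, and the maximally-selected-rank-statistics cut-point search and log-rank test are not shown:

```python
# Sketch of the Figure 2 analysis: split patients at the expression
# cut-point (125) and compute a Kaplan-Meier estimate per group.

def kaplan_meier(times, events):
    """Product-limit estimator.  events[i] = 1 for death, 0 for
    censoring.  Returns [(t, S(t))] at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    curve, s = [], 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= at_t
        i += at_t
    return curve

# (expression, months, event): two hypothetical patients per group.
patients = [(80, 60, 0), (90, 55, 1), (200, 20, 1), (300, 12, 1)]
low = [(t, e) for x, t, e in patients if x <= 125]   # "WDR66 low"
high = [(t, e) for x, t, e in patients if x > 125]   # "WDR66 high"
km_low = kaplan_meier(*zip(*low))
km_high = kaplan_meier(*zip(*high))
```
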
A total of 699 genes were identified based on a two-fold expression difference with a p-value of p < 0.001. To link the observed gene expression profile to gene function, these 699 differentially expressed genes were subjected to gene ontology (GO) analysis. Functional enrichment analysis identified 10 GO terms significantly associated with the WDR66 knockdown (Additional file 2: Table S2). All these 10 GO terms are membrane related. We checked the expression of vimentin and occludin in the ESCC patients of our cohort, and found that vimentin was highly expressed (p = 0.0008) while occludin was expressed at lower levels (p < 0.0001) in ESCC specimens in comparison to NE (Figure 3A). Microarray data were validated by qRT-PCR and Western blotting, providing independent evidence of the changes in vimentin (VIM) and occludin (OCLN) expression associated with the WDR66 knockdown (Figure 3B, 3C).\nKnockdown of WDR66 gene affects expression of vimentin and occludin. A: mRNA expression of the VIM and OCLN genes in the original training cohort determined by microarray analysis. Array data analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the genes of interest. VIM and OCLN are significantly differentially expressed in ESCC compared with NE (corrected p-value: VIM p = 0.0008; OCLN p < 0.0001). VIM expression is low in NE but high in ESCC cases, whereas OCLN expression is high in NE but low in ESCC cases. The horizontal axis depicts the patient groups ESCC and NE. (∗∗∗ P < 0.001) B: Knockdown of WDR66 affects mRNA expression of VIM and OCLN. Gene expression is presented as normalized (log scale) signal intensity for mRNA expression levels of the VIM and OCLN genes. Gene expression of VIM was significantly down-regulated whereas OCLN expression was significantly higher in cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA) (corrected p-value: VIM p = 0.0286; OCLN p = 0.0186). Data are representative of four independent experiments. (∗p < 0.05). C: Detection of vimentin and occludin protein by immunoblotting of KYSE520 cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA). β-actin was used as loading control. Vimentin expression was significantly down-regulated while occludin expression was significantly higher in cells treated with WDR66 siRNA.", "Due to the effect of WDR66 on the expression of vimentin, an important EMT marker which plays a central role in reversible trans-differentiation, and of occludin, an adhesion molecule that is a constituent of tight junctions, we hypothesized that WDR66 may regulate the motility of esophageal squamous cancer cells. WDR66 was silenced in the human squamous cell carcinoma cell line KYSE520 by RNA interference. Transfection efficiency was evaluated by qPCR. Cell migration assays showed that KYSE520 cells transfected with WDR66 siRNA displayed a motility capacity of only 40% compared with cells transfected with control siRNA (AllStar) after 16 hours (Figure 4A). Moreover, we found that introduction of siWDR66 remarkably suppressed the growth of KYSE520 cells in comparison to control cells (Figure 4B). In order to visualize the involvement of WDR66 in cell migration and proliferation, a wound-healing assay was carried out. The insert was removed and the results were recorded by taking pictures at defined time points (Figure 4C). Thus, our data suggest that WDR66 promotes cell proliferation and affects cell motility.\nKnockdown of WDR66 gene affects cell motility and cell growth.
A: Cell motility assays showed that knockdown of WDR66 reduced cell migration after 16 hours. About 35% of the siWDR66 cells migrated compared with the mock control cells. The differences between siWDR66 and Allstar (negative control siRNA) cells were significant (p = 0.0032) using the paired t test. The data shown are representative of three independent experiments, each performed in quadruplicate. B: Knockdown of WDR66 leads to suppression of cell growth. A total of 1.5 × 10^4 cells were seeded on day 0. WDR66 knockdown cells grew more slowly than cells treated with Allstar (negative control siRNA). The differences between siWDR66- and Allstar-treated cells were highly significant (p = 0.0098). C: Wound-healing assays show that knockdown of WDR66 reduces cell motility. Representative images are shown. Images were taken immediately after the insert was removed and 8 hours later. Original magnification ×400.", "By whole genome-wide expression profiling we found that WD repeat-containing protein 66 (WDR66) might be a useful biomarker for risk stratification and a modulator of epithelial-mesenchymal transition in ESCC. Among 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in the testis. Thus, our data suggest that WDR66 might be a cancer/testis antigen. Cancer-testis (CT) genes, normally expressed in germ line cells but also activated in a wide range of cancer types, often encode antigens in cancer patients [17]. The testis is an immune-privileged site as a result of the blood-testis barrier and the lack of HLA class I expression on the surface of germ cells [18]. Hence, if testis-specific genes are expressed in other tissues, they can be immunogenic. Expression of some cancer-testis genes, such as LAGE1, MAGE-A4 and NY-ESO-1, in a high percentage of esophageal tumors makes them potential targets for immunotherapy [19].
NY-ESO-1 is a highly immunogenic, prototypical protein marker whose expression is limited to a wide variety of cancer types and absent from normal tissue, with the exception of the immune-privileged testis, and it has been heavily investigated as a target for immunotherapy in cancer patients. Recently, immunotherapy of various cancers directed against NY-ESO-1 showed promising clinical results [20-22]. Unfortunately, most cancer/testis antigens are expressed in only a small group of tumor patients, around 10-40%. However, we found that WDR66 expression was specifically enhanced in 96% of ESCC patients and very low or absent in normal tissues. WDR66 may therefore be a novel immunotherapy target for ESCC.\nOur experiments revealed a strong association of WDR66 expression with vimentin and occludin. Vimentin is an important mesenchymal marker and plays a central role in reversible trans-differentiation [23]. Tumor cells undergoing EMT have an increased ability to detach from the main tumor bulk, which is a crucial step for tumor dissemination and metastasis [24]. EMT is accompanied by a switch from keratin to vimentin expression [25]. Earlier studies also demonstrated a correlation between the reduction of tight junctions and tumor metastasis. Here, we found decreased expression of occludin, an important tight junction protein. According to a recent study comparing the risk of metastasis in ESCC and EAC, ESCC is characterized by more aggressive behavior and a tendency for early metastasis [26]. Thus, we hypothesize that the elevated expression of WDR66 in ESCC may promote EMT through up-regulation of vimentin expression and down-regulation of occludin, reducing the cohesion of the tumor tissue. Consistent with this hypothesis, we checked the expression of vimentin and occludin in the ESCC patients of our cohort and found that vimentin was highly expressed while occludin was expressed at lower levels in ESCC specimens in comparison to NE.
A recent study showed that low expression of the tight junction protein claudin-4 is associated with poor prognosis in ESCC [27]. This is also in agreement with the results of our microarray study, which showed that claudin-4 was significantly less expressed in ESCC and markedly overexpressed after WDR66 knockdown (data not shown).", "In summary, we have identified WDR66 as a potential novel prognostic marker and promising target for ESCC. This conclusion is based on our observations that (1) WDR66 is specifically highly expressed in esophageal squamous cell carcinoma and high WDR66 expression correlates with poor overall survival, (2) WDR66 regulates vimentin and occludin expression and plays a crucial role in EMT, and (3) knockdown of WDR66 suppresses cell growth and motility and decreases the viability of ESCC cells. Therefore, we propose that WDR66 plays a major role in ESCC biology. Our functional data furthermore warrant further investigation of selective targeting of WDR66 as a novel drug target for ESCC treatment.", "CRC: Colorectal cancers; EAC: Esophageal adenocarcinoma; ESCC: Esophageal squamous cell carcinoma; EMT: Epithelial to mesenchymal transition; GAC: Gastric adenocarcinoma; NE: Normal esophageal; OCLN: Occludin; qRT-PCR: Quantitative reverse transcription-polymerase chain reaction; siRNA: Small interfering RNA; VIM: Vimentin; WDR66: WD repeat-containing protein 66.", "The authors declare that they have no competing interests.", "QW made substantial contributions to conception and design, acquisition of data, and analysis and interpretation of data; CM was involved in data acquisition, analysis and interpretation of data, drafting the manuscript and revising it critically for important intellectual content; and WK contributed to writing and revising the paper and has given final approval of the version to be published.
All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/13/137/prepub\n", "The clinicopathologic characterization of the 25 ESCC patients for survival analysis.\nClick here for file\nSignificantly enriched Gene Ontology (GO) terms identified for genes differentially expressed in siWDR66 KYSE520 cells. GO analysis was performed using GeneSpring. The 10 GO terms with significant corrected p-values (FDR, false discovery rate corrected for multiple testing) are depicted, sorted by p-value (uncorrected).\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusions", null, null, null, null, "supplementary-material" ]
[ "WD repeat-containing protein", "Esophageal squamous cell carcinoma", "Epithelial-mesenchymal transition" ]
Background: Esophageal squamous cell carcinoma (ESCC) is one of the most lethal malignancies of the digestive tract, and in most cases the initial diagnosis is established only once the malignancy is at an advanced stage [1]. Poor survival is due to the fact that ESCC frequently metastasizes to regional and distant lymph nodes, even at initial diagnosis. Treatment of cancer using molecular targets has brought promising results and is attracting increasing attention [2-5]. Characterization of genes involved in the progression and development of ESCC may lead to the identification of new prognostic markers and therapeutic targets. By whole genome-wide expression profiling, we found that WD repeat-containing protein 66 (WDR66), located on chromosome 12 (12q24.31), might be a useful biomarker for risk stratification and a modulator of epithelial-mesenchymal transition in ESCC. The WD-repeat protein family is a large family found in all eukaryotes and is implicated in a variety of functions ranging from signal transduction and transcription regulation to cell cycle control, autophagy and apoptosis [6]. The repeating units are believed to serve as a scaffold for interactions with multiple proteins [7]. According to whole-genome sequence analysis, there are 136 WD-repeat proteins in humans belonging to the same structural class [8]. Among the WD-repeat proteins, endonuclein, which contains five WD-repeat domains, was shown to be up-regulated in pancreatic cancer [9]. Expression of human BTRC (beta-transducin repeat-containing protein), which contains one F-box and seven WD-repeats, targeted to epithelial cells under a tissue-specific promoter in BTRC-deficient (−/−) female mice, promoted the development of mammary tumors [10]. WDRPUH (WD repeat-containing protein 16), encoding a protein with 11 highly conserved WD-repeat domains, was also shown to be up-regulated in human hepatocellular carcinomas and to be involved in the promotion of cell proliferation [11].
The WD repeat-containing protein 66 contains 9 highly conserved WD40 repeat motifs and an EF-hand-like domain. A genome-wide association study identified a single-nucleotide polymorphism located within intron 3 of WDR66 that is associated with mean platelet volume [12]. WD-repeat proteins have been identified as tumor markers that are frequently up-regulated in various cancers [11,13,14]. Here, we report for the first time that WDR66, identified by whole human gene expression profiling, might be an important prognostic factor for patients with ESCC. Moreover, to our knowledge, the role of WDR66 in esophageal cancer development and progression has not been explored before. To this end, we examined the effect of silencing WDR66 in a further microarray analysis. In addition, the effect of WDR66 on epithelial-mesenchymal transition (EMT), cell motility and tumor growth was investigated. Methods: Patients Tissue samples from individuals of the first set (n = 28) were obtained from the tumor bank of the Charité Comprehensive Cancer Center. Gene expression was examined with whole-human-genome microarrays (Affymetrix, Santa Clara, CA, USA). Ten esophageal squamous cell carcinoma (ESCC) and 18 normal esophageal (NE) biopsies were randomly collected. Normal healthy esophageal biopsies were collected from patients with esophageal pain whose biopsies were diagnosed as normal squamous epithelium without pathological changes. Surgical specimens were obtained from chemotherapy-naïve patients with known ESCC (histological grade G1, UICC stage II and III) who had undergone esophagectomy. Patients’ age ranged from 22 to 83 years, with a median age of 59 years. A second panel (n = 71) consisting of ESCC (n = 25), NE (n = 11), esophageal adenocarcinoma (EAC) (n = 13), gastric adenocarcinoma (GAC) (n = 15) and colorectal cancers (CRC) (n = 7) was used for qRT-PCR validation. Patients’ age ranged from 24 to 79 years, with a median age of 63 years. All samples were snap-frozen in liquid nitrogen and stored at −80°C.
We obtained tissue specimens from all subjects with informed written consent (approved by the local ethics committee of the Charité-Universitätsmedizin, Berlin). Each single specimen included in this study was histopathologically approved according to grade and stage by an experienced pathologist (MV, University Bayreuth). Laser capture microdissection and microarray Laser Capture Microdissection (Cellcut, MMI AG and Nikon TE300 microscope) was used for isolating desired cells from sections.
After transfer of 5 μm sections onto MMI membrane slides, the sections were fixed in 70% isopropyl alcohol and stained with the MMI basic staining kit. The desired tumor cell or NE areas were selected, cut and collected. Preparation of labeled cRNA and hybridization were done using the GeneChip hybridization, wash, and stain kit (Affymetrix, Santa Clara, CA, USA), as described previously [15]. Two-cycle labeling was applied to all samples. In total, 28 chip data sets were collected using the GeneChip Operating Software (GCOS, Affymetrix). The 28 specimens analyzed consisted of 10 ESCC and 18 NE samples. To obtain relative gene expression measurements, probe set-level data extraction was performed with the GCRMA (GC Robust Multi-array Average) normalization algorithm implemented in GeneSpring GX10.2 (Agilent). All data were log2 transformed. A list of all genes included in these microarrays and the normalized data have been deposited in the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/info/linking.html) under GEO accession number GSE26886. For gene-by-gene statistical testing, parametric tests were used to compare differences between groups, and the resulting p-values were corrected for multiple testing by controlling the False Discovery Rate (FDR) with the Benjamini-Hochberg procedure.
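The Benjamini-Hochberg correction applied to the gene-by-gene tests can be sketched as follows; this is a generic implementation with toy p-values, not the GeneSpring code used in the study:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values for a vector of raw per-gene
    p-values; genes with adjusted p < alpha are kept at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # ascending raw p-values
    scaled = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
    # enforce monotonicity from the largest rank downwards
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0.0, 1.0)
    return out

# toy example: raw p-values from five gene-by-gene tests
q = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.30])
significant = q < 0.05                           # genes kept at FDR 5%
```

Note that the procedure controls the expected proportion of false positives among the genes called significant, which is why it is preferred over Bonferroni-type corrections for genome-wide screens.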
In situ hybridization: A 148 bp fragment located at the 3′ end of the human WDR66 gene (NM_144668) was subcloned into the pBluescript II vector pBS-27.16 using the primer pair forward 5′-CAACCTGCTCCGTCAAA-3′ and reverse 5′-TAAACATTCCTGGTAACTTCAC-3′. The linearized plasmid was used as a template for the synthesis of antisense probes. The probe was labeled with digoxigenin-dUTP using a DIG RNA labeling kit (Boehringer Mannheim), according to the manufacturer’s instructions. The quality and quantity of the probe were confirmed by gel electrophoresis before use for in situ hybridization. The digoxigenin-labeled probe was applied to 5 μm dewaxed FFPE sections and hybridized at 65°C overnight in a humid chamber. After three washes to remove nonspecifically bound or unbound probe, the digoxigenin-labeled probe was detected using the alkaline phosphatase method.
RNA extraction and qRT-PCR: RNA extractions were carried out using the RNeasy Mini Kit (Qiagen, Valencia, CA, USA). Total RNA quality and yield were assessed using a bioanalyzer system (Agilent 2100 Bioanalyzer; Agilent Technologies, Palo Alto, CA, USA) and a spectrophotometer (NanoDrop ND-1000; NanoDrop Technologies, Wilmington, DE, USA). Only RNA with an RNA integrity number > 9.0 was used for microarray analysis. For quantification of mRNA expression, qRT-PCR was performed for three genes plus one control, using pre-designed gene-specific TaqMan® probes and primer sets purchased from Applied Biosystems (Hs01566237_m1 for WDR66, Hs00958116_m1 for VIM, Hs00170162_m1 for OCLN, and 4326317E for GAPDH). Conditions and data analysis of qRT-PCR were as described [16].
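Relative quantification against GAPDH, as used for the qRT-PCR measurements, is commonly done with the comparative Ct (2^−ΔΔCt) method; the sketch below illustrates that arithmetic under that assumption (the cited protocol [16] may differ in detail, and all Ct values here are invented):

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative Ct method: fold change = 2 ** -(dCt_sample - dCt_calibrator),
    where dCt = Ct(target) - Ct(reference gene, e.g. GAPDH)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# invented Ct values: WDR66 amplifies 3 cycles earlier (relative to GAPDH)
# in the tumor sample than in the calibrator, i.e. ~8-fold higher expression
fold = relative_expression(22.0, 18.0, 27.0, 20.0)
```

Because Ct is a cycle count on a log2 scale, each cycle of difference corresponds to a two-fold change in template abundance, which is what the exponentiation expresses.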
Cell culture and siRNA-mediated knockdown: KYSE520, OE33, SW480, Caco2, HCT116, HT29, HL60, LS174T, Daudi, HEK293, MCF7, MDA-MB-231, MDA-MB-435 and Capan-1 cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA) and cultured according to the supplier’s instructions. For siRNA-mediated knockdown of WDR66, cells were seeded in 6-well plates on the day before transfection. On day 0, cells were transfected with siRNA at a concentration of 25 nmol/L using Thermo Scientific DharmaFECT transfection reagents according to the manufacturer’s instructions. The siRNA sense sequence 5′-GUUACUAAAGGUGAGCAUA-3′ corresponding to WDR66 mRNA was chemically synthesized by Sigma-Proligo (Munich, Germany). RNA was extracted at the indicated time points as described. Microarray analysis of WDR66 knock-down cells: Total RNA was extracted from pellets of 10⁶ cells using the RNeasy Mini Prep Kit (Qiagen, Hilden, Germany). RNA quality was checked on a Bioanalyzer (Agilent, Santa Clara, CA); only RNA samples showing a RIN of at least 9.0 were used for labeling. Total RNA (1 μg) was labeled with Cy3 using the Low Input RNA Amplification Kit (Agilent, Santa Clara, CA).
Labeled cRNAs were hybridized to Whole Human Genome 4x44K Oligonucleotide Microarrays (Agilent, Santa Clara, CA) according to the manual. Arrays were scanned using standard Agilent protocols and a G2565AA Microarray Scanner (Agilent, Santa Clara, CA). Raw expression values were determined using Feature Extraction 9.0 software (Agilent, Santa Clara, CA). Western blotting analysis: Total cell extracts were obtained, and cell lysate containing 50 μg of protein was separated on a 10% SDS-polyacrylamide gel and blotted onto polyvinylidene difluoride (PVDF) membranes (Millipore, Bedford, MA, USA). The primary antibody for vimentin detection was a mouse monoclonal anti-human vimentin antibody (Sigma-Aldrich Corporation, V5255, 1:200, approximately 54 kDa). The primary antibody for occludin detection was a rabbit polyclonal anti-human occludin antibody (Invitrogen, 71–1500, 1:500, at 65 kDa). β-actin was used as loading control (Abcam, 1:2000, ab8226). Signals were detected using an ECL kit (Amersham Pharmacia Biotech, Piscataway, NJ, USA). Images were scanned with a FujiFilm LAS-1000 (FujiFilm, Düsseldorf, Germany).
Cell number, cell motility and wound-healing assay: Cells were seeded in 6-well plates 24 h before transfection. Transfection was done as described above. Cells were collected at the indicated time points, and cell numbers were measured using a POLARstar Omega reader (BMG Labtech, Offenburg, Germany) with excitation and emission filters of 485 and 520 nm, respectively. The results were analyzed with the MARS data analysis software. Cell motility was determined using 12-well Transwell Permeable Support inserts with polycarbonate filters with a pore size of 8 μm (Corning Costar, Lowell, MA), according to the manufacturer’s instructions. Wound-healing assays were performed in triplicate using the CytoSelect 24-well wound healing assay (Cell Biolabs, Inc.).
Statistical analysis: Statistical analysis was done using GraphPad Prism version 5 for Windows (GraphPad Software) and SPSS version 13 for Windows (SPSS, Chicago, IL, USA), as follows. GraphPad Prism: unpaired t-test with Welch’s correction for the quantitative real-time RT-PCR measurements of WDR66 in patient samples and for the gene expression measurements in the validation cohort; nonparametric Mann–Whitney U test for cell numbers, the motility assay and the wound-healing assay after knockdown of WDR66. SPSS: Kaplan-Meier survival analysis with log-rank statistics; cut-point analysis of the qRT-PCR measurements of WDR66 in patient samples using maximally selected rank statistics to determine the value that, when used as a cut-point, separates the cohort into two groups with the most significant difference. Patients were grouped according to the median of the qRT-PCR measurements as follows: WDR66 ≤ 125, WDR66 low; WDR66 > 125, WDR66 high. A stratified Cox regression model was used to determine prognostic factors in a multivariate analysis, with WDR66 dichotomized at the previously determined cut-points.
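The cut-point search behind maximally selected rank statistics can be sketched as follows: every candidate threshold on the expression value is evaluated with a standardized two-sample log-rank statistic, and the threshold maximizing its absolute value is chosen. This is a generic illustration with invented survival data, not the SPSS implementation; note that the p-value of the selected cut-point is optimistic and must additionally be corrected for the scan, which maximally selected rank statistics do and this sketch does not:

```python
import numpy as np

def logrank_stat(time, event, group):
    """Standardized two-sample log-rank statistic.
    time: follow-up times; event: 1 = death, 0 = censored;
    group: boolean array, True = 'high expression' group."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    group = np.asarray(group, bool)
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):            # distinct event times
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & group).sum()
        d = ((time == t) & (event == 1)).sum()       # deaths at t, both groups
        d1 = ((time == t) & (event == 1) & group).sum()
        o_minus_e += d1 - d * n1 / n                 # observed minus expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return 0.0 if var == 0 else o_minus_e / np.sqrt(var)

def best_cutpoint(expr, time, event):
    """Return the expression cut-point with the largest |log-rank| statistic."""
    expr = np.asarray(expr, float)
    best_c, best_z = None, 0.0
    for c in np.unique(expr)[:-1]:                   # 'high' group = expr > c
        z = logrank_stat(time, event, expr > c)
        if abs(z) > abs(best_z):
            best_c, best_z = c, z
    return best_c, best_z

# invented data: 8 patients; high expression goes with early death
expr = [50, 80, 100, 120, 300, 400, 500, 600]
time = [60, 55, 58, 50, 10, 12, 8, 15]      # months of follow-up
event = [0, 1, 0, 1, 1, 1, 1, 1]            # 1 = death, 0 = censored
cut, z = best_cutpoint(expr, time, event)
```

On this toy cohort the scan places the cut between the long-surviving low expressers and the high expressers, mirroring how the WDR66 threshold of 125 was derived from the patient data.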
Results: WDR66 is specifically highly expressed in esophageal squamous cell carcinoma: Whole genome-wide expression profiling was performed on 28 esophageal specimens (GSE26886). To ensure that only epithelial cells were studied, we applied laser capture microdissection to the specimens. A number of genes differentially expressed between ESCC samples and normal esophageal squamous epithelium samples were identified. The probe set with the highest fold change and lowest p-value represented the WDR66 transcript (P < 0.0001) (Figure 1A). As a validation study, WDR66 expression was examined by qRT-PCR in an independent cohort consisting of 71 specimens including ESCC (n = 25), NE (n = 11), EAC (n = 13), GAC (n = 15) and CRC (n = 7). We found that WDR66 was highly expressed in 96% of ESCC patients (Figure 1B).
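The ranking that singled out the WDR66 probe set, i.e. highest fold change together with lowest p-value on the log2-transformed matrix, can be illustrated with a toy expression matrix (random data, not GSE26886; Welch’s t-test stands in for the unnamed parametric per-gene test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# toy log2 expression matrix: 4 probe sets x (5 ESCC + 5 NE) samples
escc = rng.normal(8.0, 0.3, size=(4, 5))
ne = rng.normal(8.0, 0.3, size=(4, 5))
escc[0] += 4.0                  # probe set 0 strongly up-regulated in ESCC

log2_fc = escc.mean(axis=1) - ne.mean(axis=1)     # log2 fold change per probe
# per-probe-set two-sample test (Welch's t-test, unequal variances)
t_stat, p_vals = stats.ttest_ind(escc, ne, axis=1, equal_var=False)

top = int(np.argmin(p_vals))    # probe set with the lowest p-value
```

Because the data are log2 transformed, a difference of means of 4 corresponds to a 16-fold change on the linear scale; in the real analysis the resulting p-values would then enter the FDR correction described in the Methods.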
Confirming our previous results from the microarray study, WDR66 expression was significantly higher in ESCC than in NE and the other three cancer types examined in this cohort (P < 0.0001). Immunohistochemical localization of WDR66 was not carried out because none of the available WDR66 antibodies detected a specific protein band on Western blots. The presence of WDR66-specific mRNA was therefore probed by in situ hybridization using single-stranded RNA probes of the WDR66 gene in 4% PFA-fixed, paraffin-embedded esophageal tissues. WDR66 transcription (positive staining) was detected specifically in esophageal squamous carcinoma cells but not in normal squamous epithelia (Figure 1C). Furthermore, WDR66 expression was examined in 14 human cell lines and 20 normal human tissues by qRT-PCR. The WDR66 gene was abundantly expressed only in the human esophageal squamous cell carcinoma cell line KYSE520 and was not expressed in any other human cell line tested (OE33, SW480, HT29, HCT116, LS174T, Caco2, HL60, HEK293, Daudi, Capan1, MCF7, MDA-MB231 and MDA-MB435) (Figure 1D). Among the 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in testis (Figure 1E). Thus, our data suggest that WDR66 might be a cancer/testis antigen. Figure 1: WDR66 is highly expressed in esophageal squamous cell carcinoma. A: mRNA expression of the WDR66 gene determined by microarray analysis of 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the WDR66 gene. The WDR66 gene is significantly differentially expressed in ESCC (corrected p-value < 0.0001); its expression level is low in NE but high in ESCC cases. B: Relative mRNA expression of the WDR66 gene in an independent validation cohort. Quantitation was done relative to the GAPDH transcript.
Significance of the differential expression of individual genes between groups was calculated (p-value < 0.001). WDR66 gene expression is highest in ESCC and low to absent in NE and the other carcinomas. The horizontal axis depicts the patient groups ESCC, NE, EAC, GAC and CRC. C: The WDR66 gene is highly expressed in ESCC epithelium according to in situ hybridization, which was done using antisense probes against the human WDR66 gene in FFPE sections of esophageal specimens. Signals for WDR66 transcripts were observed specifically in esophageal squamous cell carcinoma (ESCC, right) but not in normal squamous epithelium (NE, left). D: WDR66 expression in various human cell lines, examined by quantitative real-time PCR in 14 cell lines derived from different human carcinomas. Expression was quantified relative to the human esophageal squamous carcinoma cell line KYSE520. E: Tissue-specific expression of the WDR66 gene in various human normal tissues: quantitative real-time PCR analysis of WDR66 expression levels in 20 human normal tissues (FirstChoice® Human Total RNA Survey Panel). The WDR66 gene is preferentially expressed in testis. Expression was quantified relative to the ESCC cell line KYSE520. High expression of WDR66 correlates with poor survival outcome in ESCC: To test whether WDR66 expression correlates with prognosis in a separate validation set of ESCC samples, we determined WDR66 expression in an independent set of n = 25 ESCC samples using qRT-PCR (Additional file 1: Table S1). High expression of WDR66 RNA was found to be a significant prognostic factor with regard to cancer-related survival (P = 0.031; Figure 2).
When analyzed together with various clinicopathological parameters, namely gender (P = 0.804), age (P = 0.432), tumor differentiation (P = 0.032), pT factor (P = 0.234), lymph node metastasis (P = 0.545), distant metastasis (P = 0.543) and TNM stage (P = 0.002; Table 1), multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042; Table 1). Figure 2: High WDR66 mRNA expression is associated with poor survival in ESCC patients. Kaplan-Meier analysis of survival of patients grouped according to WDR66 expression as measured by quantitative real-time RT-PCR. Patients were grouped according to the median of the qRT-PCR measurements as follows: WDR66 ≤ 125, WDR66 low (n = 12); WDR66 > 125, WDR66 high (n = 13). After choosing an optimal cut-point, analysis for WDR66 was done using maximally selected rank statistics. Patients expressing WDR66 at low levels showed significantly better overall survival than those with high WDR66 expression (P = 0.031; log-rank). Table 1: Cox regression analysis for factors possibly influencing disease-specific survival in patients with ESCC in our cohort.
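The survival curves compared by the log-rank test are Kaplan-Meier product-limit estimates, S(t) = Π over event times t_i ≤ t of (1 − d_i/n_i); a minimal sketch with invented follow-up times (SPSS performs the actual analysis):

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimate of the survival function: at each distinct
    event time t_i, multiply by (1 - d_i / n_i), where d_i is the number of
    deaths at t_i and n_i the number of patients still at risk."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    event_times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in event_times:
        n = (time >= t).sum()                    # at risk just before t
        d = ((time == t) & (event == 1)).sum()   # deaths at t
        s *= 1.0 - d / n
        surv.append(s)
    return event_times, np.array(surv)

# invented follow-up of a 'WDR66 high' group: months, 1 = death, 0 = censored
t_high = [5, 8, 8, 12, 20]
e_high = [1, 1, 0, 1, 0]
times, s_high = kaplan_meier(t_high, e_high)
```

Censored patients (event = 0) leave the risk set without forcing a step in the curve, which is how the estimator uses incomplete follow-up.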
High WDR66 mRNA expression is associated with poor survival in ESCC patients. Kaplan-Meier analysis of survival of grouped according to WDR66 expression as measured by quantitative real-time RT-PCR. Grouping of patients according to median of qRT-PCR measurements was done as follows: WDR66 ≤ 125, WDR66 low (n = 12); WDR66 > 125, WDR66 high (n = 13). After choosing an optimal cut-point, analysis for WDR66 was done using maximally selected rank statistics. The group with patients expressing WDR66 at low levels showed a significantly better overall survival compared with the group with high levels of WDR66 expression (P = 0.031; log rank). Cox regression analysis for factors possibly influencing disease-specific survival in patients with ESCC in our cohort Knockdown of WDR66 in KYSE520 effected VIM and OCLD expression in vitro In order to learn more about the function of WDR66, RNA interference was used to silence its expression in KYSE520 cells, a human esophageal squamous cell carcinoma cell line which highly expressed WDR66. Subsequently, a microarray expression analysis was performed in order to identify the genes affected by WDR66 knockdown. A total of 699 genes was identified based on a two-fold change expression difference with p-value of p < 0.001. In an approach to link the observed gene expression profile to gene function, these 699 differentially expressed genes were subjected to gene ontology (GO) analysis. Functional enrichment analysis identified 10 GO terms to be significantly associated with the WDR66 knockdown (Additional file 2: Table S2). All these 10 GO terms are membrane related. We checked the expression of vimentin and occludin in ESCC patients of our cohort, and found that vimentin was highly expressed (p = 0.0008) while occludin was less expressed (p < 0.0001) in ESCC specimens in comparison to NE (Figure 3A). 
Microarray data were validated by qRT-PCR and Western blotting providing independent evidence of the changes of vimentin (VIM) and occludin (OCLD) expression associated with the WDR66 knockdown (Figure 3B, 3C). Knockdown of WDR66 gene affects expression of vimentin and occludin. A: mRNA expression of the VIM and OCLN gene in the original training cohort determined by microarray analysis. Array data analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the genes of interest. VIM and OCLN are significantly differentially expressed in ESCC compared with NE (corrected p-value: VIM p = 0.0008; OCLD p < 0.0001). VIM expression level is low in NE but high in ESCC cases, whereas OCLN expression is high in NE but low in ESCC cases. The horizontal axis depicts the patient groups ESCC and NE. (∗∗∗ P < 0.001) B: Knockdown of WDR66 affects mRNA expression of VIM and OCLN. Gene expression is presented as normalized (log scale) signal intensity for mRNA expression levels of VIM and OCLN gene. Gene expression level of VIM was significantly down regulated whereas OCLN expression was significantly higher in cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA) (corrected p-value: VIM p = 0.0286; OCLD p = 0.0186). Data are representatives of four independent experiments. (∗p < 0.05). C: Detection of vimentin and occludin protein by immunoblotting of KYSE520 cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA). β-actin was used as loading control. Vimentin expression was significantly down regulated while occludin was significantly higher expressed in cells treated with WDR66 siRNA. 
In order to learn more about the function of WDR66, RNA interference was used to silence its expression in KYSE520 cells, a human esophageal squamous cell carcinoma cell line which highly expressed WDR66. Subsequently, a microarray expression analysis was performed in order to identify the genes affected by WDR66 knockdown. A total of 699 genes was identified based on a two-fold change expression difference with p-value of p < 0.001. In an approach to link the observed gene expression profile to gene function, these 699 differentially expressed genes were subjected to gene ontology (GO) analysis. Functional enrichment analysis identified 10 GO terms to be significantly associated with the WDR66 knockdown (Additional file 2: Table S2). All these 10 GO terms are membrane related. We checked the expression of vimentin and occludin in ESCC patients of our cohort, and found that vimentin was highly expressed (p = 0.0008) while occludin was less expressed (p < 0.0001) in ESCC specimens in comparison to NE (Figure 3A). Microarray data were validated by qRT-PCR and Western blotting providing independent evidence of the changes of vimentin (VIM) and occludin (OCLD) expression associated with the WDR66 knockdown (Figure 3B, 3C). Knockdown of WDR66 gene affects expression of vimentin and occludin. A: mRNA expression of the VIM and OCLN gene in the original training cohort determined by microarray analysis. Array data analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the genes of interest. VIM and OCLN are significantly differentially expressed in ESCC compared with NE (corrected p-value: VIM p = 0.0008; OCLD p < 0.0001). VIM expression level is low in NE but high in ESCC cases, whereas OCLN expression is high in NE but low in ESCC cases. The horizontal axis depicts the patient groups ESCC and NE. 
(∗∗∗ P < 0.001) B: Knockdown of WDR66 affects mRNA expression of VIM and OCLN. Gene expression is presented as normalized (log scale) signal intensity for mRNA expression levels of VIM and OCLN gene. Gene expression level of VIM was significantly down regulated whereas OCLN expression was significantly higher in cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA) (corrected p-value: VIM p = 0.0286; OCLD p = 0.0186). Data are representatives of four independent experiments. (∗p < 0.05). C: Detection of vimentin and occludin protein by immunoblotting of KYSE520 cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and Allstar (negative control siRNA). β-actin was used as loading control. Vimentin expression was significantly down regulated while occludin was significantly higher expressed in cells treated with WDR66 siRNA. Knockdown of WDR66 in KYSE520 cells affects cell motility and results in growth suppression Due to the effect of WDR66 on the expression of vimentin, an important EMT marker which plays a central role in the reversible trans-differentiation and occludin, an adhesion molecules that is a constituent of tight junctions, we hypothesized that WDR66 may regulate cell motility of esophageal squamous cancer cells. WDR66 was silenced in the human squamous cell carcinoma cell line KYSE520 by RNA interference. Transfection efficiency was evaluated by qPCR. Cell migration assays showed that KYSE520 cells, which had been transfected with WDR66 siRNA, displayed a motility capacity of only 40% compared to the cells having been transfected with control siRNA (AllStar) after 16 hours (Figure 4A). Moreover, we found that introduction of siWDR66 remarkably suppressed growth of KYSE520 in comparison to control cells (Figure 4B). In order to visualize the involvement of WDR66 in cell migration and proliferation, a wound-healing assay was carried out. 
The insert was removed at defined time points of scratching and the results were recorded by taking pictures (Figure 4C). Thus, our data suggest that WDR66 promotes cell proliferation and affects cell motility. Knockdown of WDR66 gene affects cell motility and cell growth. A: Cell motility assays showed that knockdown of WDR66 reduced cell migration after 16 hours. About 35% of the siWDR66 cells were migrated in comparison to that of the mock control cells. The differences between siWDR66 and Allstar (negative control siRNA) cells were significant (p = 0.0032) using the paired t test. The data shown are representatives of three independent experiments and each done in quadruplicate. B: Knockdown of WDR66 leads to suppression of cell growth. A total of 1.5×104 cells were seeded at day 0. WDR66 knockdown cells grew slower than cells treated with Allstar (negative control siRNA). The differences between siWDR66 and Allstar (negative control siRNA) treated cells were highly significant (p = 0. 0098). C: Wound-healing assays show that knockdown of WDR66 reduces cell motility. Representative images are shown. Images are taken immediately after insert was removed and 8 hours later. Original magnification is x400. Due to the effect of WDR66 on the expression of vimentin, an important EMT marker which plays a central role in the reversible trans-differentiation and occludin, an adhesion molecules that is a constituent of tight junctions, we hypothesized that WDR66 may regulate cell motility of esophageal squamous cancer cells. WDR66 was silenced in the human squamous cell carcinoma cell line KYSE520 by RNA interference. Transfection efficiency was evaluated by qPCR. Cell migration assays showed that KYSE520 cells, which had been transfected with WDR66 siRNA, displayed a motility capacity of only 40% compared to the cells having been transfected with control siRNA (AllStar) after 16 hours (Figure 4A). 
Moreover, we found that introduction of siWDR66 remarkably suppressed growth of KYSE520 in comparison to control cells (Figure 4B). In order to visualize the involvement of WDR66 in cell migration and proliferation, a wound-healing assay was carried out. The insert was removed at defined time points of scratching and the results were recorded by taking pictures (Figure 4C). Thus, our data suggest that WDR66 promotes cell proliferation and affects cell motility. Knockdown of WDR66 gene affects cell motility and cell growth. A: Cell motility assays showed that knockdown of WDR66 reduced cell migration after 16 hours. About 35% of the siWDR66 cells were migrated in comparison to that of the mock control cells. The differences between siWDR66 and Allstar (negative control siRNA) cells were significant (p = 0.0032) using the paired t test. The data shown are representatives of three independent experiments and each done in quadruplicate. B: Knockdown of WDR66 leads to suppression of cell growth. A total of 1.5×104 cells were seeded at day 0. WDR66 knockdown cells grew slower than cells treated with Allstar (negative control siRNA). The differences between siWDR66 and Allstar (negative control siRNA) treated cells were highly significant (p = 0. 0098). C: Wound-healing assays show that knockdown of WDR66 reduces cell motility. Representative images are shown. Images are taken immediately after insert was removed and 8 hours later. Original magnification is x400. WDR66 is specifically highly expressed in esophageal squamous cell carcinoma: Whole genome-wide expression profiling was performed on 28 esophageal specimens (GSE26886). To make sure that only epithelial cells were studied, we applied laser capture microdissection technique to the specimens. A number of genes differentially expressed between ESCC samples and normal esophageal squamous epithelium samples were identified. 
The probe set with the highest fold change and lowest p-value represented the WDR66 transcript (P < 0.0001) (Figure 1A). As a validation study, WDR66 expression was examined by qRT-PCR in an independent cohort consisting of 71 specimens including ESCC (n = 25), NE (n = 11), EAC (n = 13), GAC (n = 15) and CRC (n = 7). We found that WDR66 was highly expressed in 96% of ESCC patients (Figure 1B). Confirming our previous results from the microarray study, WDR66 expression was found to be significantly higher in ESCC compared to NE as well as the other three cancer types checked in this cohort (P < 0.0001). Immunohistochemical localization of WDR66 was not carried out because none of the available WDR66 antibodies detected a specific protein band on Western blots. The presence of WDR66-specific mRNA was therefore probed by in situ hybridization using single-stranded RNA probes of the WDR66 gene in 4% PFA-fixed, paraffin-embedded esophageal tissues. WDR66 transcription (positive staining) was detected specifically in esophageal squamous carcinoma cells but not in normal squamous epithelia (Figure 1C). Furthermore, WDR66 expression was examined in 14 human cell lines and 20 normal human tissues by qRT-PCR. The WDR66 gene was abundantly expressed only in the human esophageal squamous cell carcinoma cell line KYSE520 and not in any other human cell line tested, such as OE33, SW480, HT29, HCT116, LS174T, Caco2, HL60, HEK293, Daudi, Capan1, MCF7, MDA-MB231 and MDA-MB435 (Figure 1D). Among the 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in the testis (Figure 1E). Thus, our data suggest that WDR66 might be a cancer/testis antigen. WDR66 is highly expressed in esophageal squamous cell carcinoma. A: mRNA expression of the WDR66 gene was determined by microarray analysis. Microarray analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples.
Gene expression is presented as normalized (log2 scale) signal intensity of the WDR66 gene. The WDR66 gene is significantly differentially expressed in ESCC (corrected p-value < 0.0001). The expression level of the WDR66 gene is low in NE but high in ESCC cases. B: Relative mRNA expression of the WDR66 gene in an independent validation cohort. Quantitation was done relative to the GAPDH transcript. The significance of differential expression of individual genes between groups was calculated (p-value < 0.001). Results showed that the WDR66 expression level is highest in ESCC and low to absent in NE and the other carcinomas. On the horizontal axis the patient groups ESCC, NE, EAC, GAC and CRC are depicted. C: The WDR66 gene is highly expressed in ESCC epithelium according to in situ hybridization. In situ hybridization was done using anti-sense probes of the human WDR66 gene in FFPE sections of esophageal specimens. Signals for WDR66 transcripts were observed specifically in esophageal squamous cell carcinoma (ESCC, right), but not in normal squamous epithelium (NE, left). D: WDR66 expression level in various human cell lines. WDR66 expression was examined by quantitative real-time PCR in 14 cell lines cultivated from different human carcinomas. Expression was quantified relative to the human esophageal squamous carcinoma cell line KYSE520. E: Tissue-specific expression of the WDR66 gene in various human normal tissues. Quantitative real-time PCR analysis of WDR66 expression levels in 20 human normal tissues (FirstChoice® Human Total RNA Survey Panel). The WDR66 gene is preferentially expressed in testis. Expression was quantified relative to the ESCC cell line KYSE520. High expression of WDR66 correlates with poor survival outcome in ESCC: In order to test whether WDR66 expression correlates with prognostic markers in a separate validation set, we determined WDR66 expression in an independent set of n = 25 ESCC samples using qRT-PCR (Additional file 1: Table S1).
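The relative quantification used in the qRT-PCR experiments above (target expression normalized to a reference transcript such as GAPDH and expressed relative to a calibrator sample such as the KYSE520 line) is conventionally computed with the 2^-ΔΔCt (Livak) method. A minimal sketch, with all Ct values hypothetical illustrations rather than data from this study:

```python
# Minimal 2^-ddCt (Livak) relative-quantification sketch.
# Assumes roughly 100% PCR efficiency for both target and reference gene.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Fold change of a target gene in a sample, normalized to a
    reference gene (e.g. GAPDH) and expressed relative to a
    calibrator sample (e.g. the KYSE520 cell line)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: after normalization the target crosses the
# threshold 2 cycles earlier in the sample than in the calibrator,
# i.e. 4-fold higher expression.
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
print(fold)  # 4.0
```

Lower Ct means earlier amplification and therefore more starting template, which is why the exponent carries a negative sign.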
High expression of WDR66 RNA was found to be a significant prognostic factor with regard to cancer-related survival (P = 0.031; Figure 2). When analyzed with regard to various clinicopathological parameters, such as gender (P = 0.804), age (P = 0.432), tumor differentiation (P = 0.032), pT factor (P = 0.234), lymph node metastasis (P = 0.545), distant metastasis (P = 0.543) and TNM stage (P = 0.002; Table 1), multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042; Table 1). High WDR66 mRNA expression is associated with poor survival in ESCC patients. Kaplan-Meier analysis of survival of patients grouped according to WDR66 expression as measured by quantitative real-time RT-PCR. Grouping of patients according to the median of the qRT-PCR measurements was done as follows: WDR66 ≤ 125, WDR66 low (n = 12); WDR66 > 125, WDR66 high (n = 13). After choosing an optimal cut-point, the analysis for WDR66 was done using maximally selected rank statistics. The group of patients expressing WDR66 at low levels showed significantly better overall survival than the group with high WDR66 expression (P = 0.031; log rank). Cox regression analysis for factors possibly influencing disease-specific survival in patients with ESCC in our cohort. Knockdown of WDR66 in KYSE520 affected VIM and OCLN expression in vitro: In order to learn more about the function of WDR66, RNA interference was used to silence its expression in KYSE520 cells, a human esophageal squamous cell carcinoma cell line that highly expresses WDR66. Subsequently, a microarray expression analysis was performed in order to identify the genes affected by the WDR66 knockdown. A total of 699 genes were identified based on at least a two-fold expression difference with p < 0.001. In an approach to link the observed gene expression profile to gene function, these 699 differentially expressed genes were subjected to gene ontology (GO) analysis.
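The selection step described above (at least a two-fold expression difference at p < 0.001) is a straightforward filter over per-gene statistics. A schematic sketch, with made-up gene values for illustration only:

```python
import math

# Hypothetical per-gene results: (gene, log2 fold change, p-value).
# These numbers are illustrative, not the study's microarray data.
results = [
    ("WDR66", -3.1, 1e-6),   # strongly down after knockdown
    ("VIM",   -1.4, 2e-4),   # down; passes both thresholds
    ("OCLN",   1.2, 5e-4),   # up; passes both thresholds
    ("GAPDH",  0.1, 0.7),    # unchanged housekeeping gene
]

def differentially_expressed(rows, min_fold=2.0, max_p=0.001):
    """Keep genes with |fold change| >= min_fold and p < max_p.
    A two-fold change corresponds to |log2 FC| >= 1."""
    cut = math.log2(min_fold)
    return [gene for gene, lfc, p in rows if abs(lfc) >= cut and p < max_p]

print(differentially_expressed(results))  # ['WDR66', 'VIM', 'OCLN']
```

In practice the p-values entering such a filter are multiple-testing corrected (e.g. FDR), as the figure legends in this article note.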
Functional enrichment analysis identified 10 GO terms significantly associated with the WDR66 knockdown (Additional file 2: Table S2). All 10 of these GO terms are membrane-related. We checked the expression of vimentin and occludin in the ESCC patients of our cohort and found that vimentin was highly expressed (p = 0.0008) while occludin was expressed at lower levels (p < 0.0001) in ESCC specimens in comparison to NE (Figure 3A). Microarray data were validated by qRT-PCR and Western blotting, providing independent evidence of the changes in vimentin (VIM) and occludin (OCLN) expression associated with the WDR66 knockdown (Figure 3B, 3C). Knockdown of the WDR66 gene affects expression of vimentin and occludin. A: mRNA expression of the VIM and OCLN genes in the original training cohort determined by microarray analysis. Array data analysis was performed on 18 healthy normal esophageal epithelium (NE) and 10 esophageal squamous cell carcinoma (ESCC) samples. Gene expression is presented as normalized (log2 scale) signal intensity of the genes of interest. VIM and OCLN are significantly differentially expressed in ESCC compared with NE (corrected p-values: VIM p = 0.0008; OCLN p < 0.0001). The VIM expression level is low in NE but high in ESCC cases, whereas OCLN expression is high in NE but low in ESCC cases. The horizontal axis depicts the patient groups ESCC and NE. (∗∗∗ P < 0.001) B: Knockdown of WDR66 affects mRNA expression of VIM and OCLN. Gene expression is presented as normalized (log scale) signal intensity for the mRNA expression levels of the VIM and OCLN genes. The expression level of VIM was significantly down-regulated whereas OCLN expression was significantly higher in cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and AllStar (negative control siRNA) (corrected p-values: VIM p = 0.0286; OCLN p = 0.0186). Data are representative of four independent experiments. (∗p < 0.05).
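Functional enrichment of a GO term in a gene list, as used above, is commonly assessed with a one-sided hypergeometric test: how surprising is the observed overlap between the selected genes and the genes annotated to the term? A minimal sketch with hypothetical counts (not the study's actual annotation numbers):

```python
from math import comb

def hypergeom_enrichment_p(n_genome, n_term, n_selected, n_overlap):
    """One-sided hypergeometric p-value: probability of observing
    >= n_overlap genes annotated to a GO term among n_selected
    differentially expressed genes, drawn from n_genome genes of
    which n_term carry the annotation."""
    total = comb(n_genome, n_selected)
    p = 0.0
    for k in range(n_overlap, min(n_term, n_selected) + 1):
        p += comb(n_term, k) * comb(n_genome - n_term, n_selected - k) / total
    return p

# Hypothetical example: 50 of 699 selected genes share a membrane-related
# term that annotates 200 of 20000 genes genome-wide; the expected overlap
# is only about 7, so the term is strongly enriched.
p = hypergeom_enrichment_p(20000, 200, 699, 50)
print(p < 1e-6)  # True
```

Tools such as GeneSpring additionally correct the resulting p-values for multiple testing (FDR), as described for Table S2.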
C: Detection of vimentin and occludin protein by immunoblotting of KYSE520 cells treated with WDR66 siRNA in comparison to NTC (KYSE520) and AllStar (negative control siRNA). β-actin was used as loading control. Vimentin expression was significantly down-regulated while occludin was expressed at significantly higher levels in cells treated with WDR66 siRNA. Knockdown of WDR66 in KYSE520 cells affects cell motility and results in growth suppression: Due to the effect of WDR66 on the expression of vimentin, an important EMT marker that plays a central role in reversible trans-differentiation, and of occludin, an adhesion molecule that is a constituent of tight junctions, we hypothesized that WDR66 may regulate the motility of esophageal squamous cancer cells. WDR66 was silenced in the human squamous cell carcinoma cell line KYSE520 by RNA interference. Transfection efficiency was evaluated by qPCR. Cell migration assays showed that KYSE520 cells transfected with WDR66 siRNA displayed a motility capacity of only 40% compared to cells transfected with control siRNA (AllStar) after 16 hours (Figure 4A). Moreover, we found that introduction of siWDR66 remarkably suppressed the growth of KYSE520 cells in comparison to control cells (Figure 4B). In order to visualize the involvement of WDR66 in cell migration and proliferation, a wound-healing assay was carried out. The insert was removed at defined time points after scratching and the results were recorded photographically (Figure 4C). Thus, our data suggest that WDR66 promotes cell proliferation and affects cell motility. Knockdown of the WDR66 gene affects cell motility and cell growth. A: Cell motility assays showed that knockdown of WDR66 reduced cell migration after 16 hours. About 35% of the siWDR66 cells migrated in comparison to the mock control cells. The differences between siWDR66 and AllStar (negative control siRNA) cells were significant (p = 0.0032) using the paired t test.
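The paired t test used here compares matched measurements (siWDR66-treated vs. control wells from the same experiment) by testing whether the mean of the per-pair differences deviates from zero. A minimal sketch of the statistic, with hypothetical migrated-cell counts rather than the study's data:

```python
import math

def paired_t(xs, ys):
    """Paired t statistic and degrees of freedom for two matched
    measurement series (e.g. siRNA-treated vs. control wells from
    the same independent experiment)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Hypothetical migrated-cell percentages from three matched experiments.
si = [35.0, 38.0, 32.0]     # siWDR66
ctrl = [100.0, 96.0, 104.0]  # AllStar control
t, df = paired_t(si, ctrl)
print(round(t, 2), df)  # t is strongly negative with df = 2
```

The p-value is then obtained from the t distribution with the returned degrees of freedom; statistics libraries wrap this whole procedure in a single call.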
The data shown are representative of three independent experiments, each done in quadruplicate. B: Knockdown of WDR66 leads to suppression of cell growth. A total of 1.5 × 10⁴ cells were seeded at day 0. WDR66 knockdown cells grew slower than cells treated with AllStar (negative control siRNA). The differences between siWDR66 and AllStar (negative control siRNA) treated cells were highly significant (p = 0.0098). C: Wound-healing assays show that knockdown of WDR66 reduces cell motility. Representative images are shown. Images were taken immediately after the insert was removed and 8 hours later. Original magnification is ×400. Discussion: By whole genome-wide expression profiling we found that WD repeat-containing protein 66 (WDR66) might be a useful biomarker for risk stratification and a modulator of the epithelial-mesenchymal transition in ESCC. Among 20 normal human tissues examined by qRT-PCR, WDR66 was most abundantly expressed in the testis. Thus, our data suggest that WDR66 might be a cancer/testis antigen. Cancer-testis (CT) genes, normally expressed in germ line cells but also activated in a wide range of cancer types, often encode antigens in cancer patients [17]. The testis is an immune-privileged site as a result of a blood barrier and the lack of HLA class I expression on the surface of germ cells [18]. Hence, if testis-specific genes are expressed in other tissues, they can be immunogenic. The expression of some cancer-testis genes, such as LAGE1, MAGE-A4 and NY-ESO-1, in a high percentage of esophageal tumors makes them potential targets for immunotherapy [19]. NY-ESO-1 is a highly immunogenic, prototypical protein marker expressed in a wide variety of cancer types but not in normal tissues, with the exception of the immune-privileged testis, and has been heavily investigated as a target for immunotherapy in cancer patients. Recently, immunotherapy of various cancers against NY-ESO-1 showed promising clinical results [20-22].
Unfortunately, most cancer/testis antigens are expressed in only a small subgroup of tumor patients, about 10-40%. In contrast, we found that the expression of WDR66 was specifically enhanced in 96% of ESCC patients and very low or absent in normal tissues. WDR66 may therefore be a novel immunotherapy target for ESCC. Our experiments revealed a strong association of WDR66 expression with vimentin and occludin. Vimentin is an important mesenchymal marker and plays a central role in reversible trans-differentiation [23]. Tumor cells undergoing EMT have an increased ability to detach from the main tumor bulk, which is a crucial step for tumor dissemination and metastasis [24]. EMT is accompanied by a switch from keratin to vimentin expression [25]. Earlier studies also demonstrated a correlation between the reduction of tight junctions and tumor metastasis. Here, we found a decreased expression of occludin, an important tight junction protein. According to a recent study comparing the risk of metastasis in ESCC and EAC, ESCC is characterized by more aggressive behavior and a tendency for early metastasis [26]. Thus, we hypothesize that the elevated expression of WDR66 in ESCC may promote EMT through an up-regulation of vimentin expression and a down-regulation of occludin, reducing the cohesion of the tumor tissue. Accordingly, we checked the expression of vimentin and occludin in the ESCC patients of our cohort and found that vimentin was highly expressed while occludin was expressed at lower levels in ESCC specimens in comparison to NE, which supports our hypothesis. A recent study showed that low expression of the tight junction protein claudin-4 is associated with poor prognosis in ESCC [27]. This is also in agreement with the results of our microarray study, which showed that claudin-4 was significantly less expressed in ESCC and remarkably over-expressed after WDR66 knockdown (data not shown).
Conclusions: In summary, we have identified WDR66 as a potential novel prognostic marker and promising target for ESCC. This result is based on our observations that (1) WDR66 is specifically highly expressed in esophageal squamous cell carcinoma and high WDR66 expression correlates with poor overall survival, (2) WDR66 regulates vimentin and occludin expression and plays a crucial role in EMT, and (3) knockdown of WDR66 suppresses cell growth and motility and decreases the viability of ESCC cells. Therefore, we propose that WDR66 plays a major role in ESCC biology. Our functional data furthermore warrant further investigation of selective targeting of WDR66 as a novel drug target for ESCC treatment. Abbreviations: CRC: Colorectal cancer; EAC: Esophageal adenocarcinoma; ESCC: Esophageal squamous cell carcinoma; EMT: Epithelial to mesenchymal transition; GAC: Gastric adenocarcinoma; NE: Normal esophageal epithelium; OCLN: Occludin; qRT-PCR: Quantitative reverse transcription-polymerase chain reaction; siRNA: Small interfering RNA; VIM: Vimentin; WDR66: WD repeat-containing protein 66. Competing interests: The authors declare that they have no competing interests. Authors' contributions: QW made substantial contributions to conception and design, acquisition of data, and analysis and interpretation of data; CM was involved in data acquisition, data analysis and interpretation, drafting the manuscript and revising it critically for important intellectual content; and WK contributed to writing and revising the paper and gave final approval of the version to be published. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/13/137/prepub Supplementary Material: The clinicopathologic characterization of the 25 ESCC patients used for survival analysis (Additional file 1: Table S1).
Significantly enriched Gene Ontology (GO) terms identified for genes differentially expressed in siWDR66 KYSE520 cells (Additional file 2: Table S2). GO analysis was performed using GeneSpring. The 10 GO terms with significant corrected p-values (FDR, false discovery rate corrected for multiple testing) are listed, sorted by uncorrected p-value.
Background: We attempted to identify novel biomarkers and therapeutic targets for esophageal squamous cell carcinoma by gene expression profiling of frozen esophageal squamous carcinoma specimens and examined the functional relevance of a newly discovered marker gene, WDR66. Methods: The laser capture microdissection technique was applied to collect the cells from well-defined tumor areas in collaboration with an experienced pathologist. Whole human gene expression profiling of frozen esophageal squamous carcinoma specimens (n = 10) and normal esophageal squamous tissue (n = 18) was performed using microarray technology. A gene encoding WDR66 (WD repeat-containing protein 66) was significantly highly expressed in esophageal squamous carcinoma specimens. Microarray results were validated by quantitative real-time polymerase chain reaction (qRT-PCR) in a second and independent cohort (n = 71) consisting of esophageal squamous cell carcinoma (n = 25), normal esophagus (n = 11), esophageal adenocarcinoma (n = 13), gastric adenocarcinoma (n = 15) and colorectal cancers (n = 7). In order to understand WDR66's functional relevance, siRNA-mediated knockdown was performed in the human esophageal squamous cell carcinoma cell line KYSE520, and the effects of this treatment were then checked by another microarray analysis. Results: High WDR66 expression was significantly associated with poor overall survival (P = 0.031) of patients suffering from esophageal squamous carcinomas. Multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042). WDR66 knockdown by RNA interference resulted particularly in changes of the expression of membrane components. Expression of vimentin was down-regulated in WDR66 knockdown cells while that of the tight junction protein occludin was markedly up-regulated. Furthermore, siRNA-mediated knockdown of WDR66 resulted in suppression of cell growth and reduced cell motility.
Conclusions: WDR66 might be a useful biomarker for risk stratification of esophageal squamous carcinomas. WDR66 expression is likely to play an important role in esophageal squamous cell carcinoma growth and invasion as a positive modulator of epithelial-mesenchymal transition. Furthermore, due to its high expression and possible functional relevance, WDR66 might be a novel drug target for the treatment of squamous carcinoma.
Background: Esophageal squamous cell carcinoma (ESCC) is one of the most lethal malignancies of the digestive tract, and in most cases the initial diagnosis is established only once the malignancy is at an advanced stage [1]. Poor survival is due to the fact that ESCC frequently metastasizes to regional and distant lymph nodes, even at initial diagnosis. Treatment of cancer using molecular targets has brought promising results and is attracting increasing attention [2-5]. Characterization of genes involved in the progression and development of ESCC may lead to the identification of new prognostic markers and therapeutic targets. By whole genome-wide expression profiling, we found that WD repeat-containing protein 66 (WDR66), located on chromosome 12 (12q24.31), might be a useful biomarker for risk stratification and a modulator of the epithelial-mesenchymal transition in ESCC. The WD-repeat protein family is a large family found in all eukaryotes and is implicated in a variety of functions ranging from signal transduction and transcription regulation to cell cycle control, autophagy and apoptosis [6]. These repeating units are believed to serve as a scaffold for interactions with various proteins [7]. According to whole-genome sequence analysis, there are 136 WD-repeat proteins in humans belonging to the same structural class [8]. Among the WD-repeat proteins, endonuclein, which contains five WD-repeat domains, was shown to be up-regulated in pancreatic cancer [9]. Expression of human BTRC (beta-transducin repeat-containing protein), which contains one F-box and seven WD-repeats, targeted to epithelial cells under a tissue-specific promoter in BTRC-deficient (−/−) female mice, promoted the development of mammary tumors [10]. WDRPUH (WD repeat-containing protein 16), encoding a protein with 11 highly conserved WD-repeat domains, was also shown to be up-regulated in human hepatocellular carcinomas and involved in the promotion of cell proliferation [11].
The WD repeat-containing protein 66 contains 9 highly conserved WD40 repeat motifs and an EF-hand-like domain. A genome-wide association study identified a single-nucleotide polymorphism located within intron 3 of WDR66 that is associated with mean platelet volume [12]. WD-repeat proteins have been identified as tumor markers that are frequently up-regulated in various cancers [11,13,14]. Here, we report for the first time that WDR66 might be an important prognostic factor for patients with ESCC, as found by whole human gene expression profiling. Moreover, to our knowledge, the role of WDR66 in esophageal cancer development and progression has not been explored up to now. To this end, we examined the effect of silencing WDR66 by another microarray analysis. In addition, the effect of WDR66 on epithelial-mesenchymal transition (EMT), cell motility and tumor growth was investigated.
Background: We attempted to identify novel biomarkers and therapeutic targets for esophageal squamous cell carcinoma by gene expression profiling of frozen esophageal squamous carcinoma specimens and examined the functional relevance of a newly discovered marker gene, WDR66. Methods: Laser capture microdissection was applied to collect cells from well-defined tumor areas in collaboration with an experienced pathologist. Whole human gene expression profiling of frozen esophageal squamous carcinoma specimens (n = 10) and normal esophageal squamous tissue (n = 18) was performed using microarray technology. WDR66, the gene encoding WD repeat-containing protein 66, was significantly more highly expressed in esophageal squamous carcinoma specimens. Microarray results were validated by quantitative real-time polymerase chain reaction (qRT-PCR) in a second, independent cohort (n = 71) consisting of esophageal squamous cell carcinoma (n = 25), normal esophagus (n = 11), esophageal adenocarcinoma (n = 13), gastric adenocarcinoma (n = 15) and colorectal cancers (n = 7). To understand the functional relevance of WDR66, siRNA-mediated knockdown was performed in a human esophageal squamous cell carcinoma cell line, KYSE520, and the effects of this treatment were then assessed by a further microarray analysis. Results: High WDR66 expression was significantly associated with poor overall survival (P = 0.031) of patients suffering from esophageal squamous carcinomas. Multivariate Cox regression analysis revealed that WDR66 expression remained an independent prognostic factor (P = 0.042). WDR66 knockdown by RNA interference resulted particularly in changes in the expression of membrane components. Expression of vimentin was downregulated in WDR66 knockdown cells, while that of the tight junction protein occludin was markedly upregulated. Furthermore, siRNA-mediated knockdown of WDR66 resulted in suppression of cell growth and reduced cell motility. 
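The qRT-PCR validation described above is conventionally quantified with the 2^-ΔΔCt (Livak) relative-expression method. The sketch below is a minimal illustration of that calculation using hypothetical Ct values for WDR66 against a housekeeping reference gene; it is not the authors' exact analysis pipeline.

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt (Livak) method.

    ΔCt  = Ct(target gene) - Ct(reference gene), computed per sample
    ΔΔCt = ΔCt(tumor sample) - ΔCt(normal control)
    Fold change = 2 ** (-ΔΔCt); values > 1 indicate upregulation in the sample.
    """
    d_sample = ct_target_sample - ct_ref_sample
    d_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_sample - d_ctrl)

# Hypothetical Ct values: WDR66 vs a housekeeping gene, tumor vs normal tissue.
# ΔCt(tumor) = 24 - 18 = 6; ΔCt(normal) = 27 - 18 = 9; ΔΔCt = -3 → 8-fold up.
fold = ddct_fold_change(24.0, 18.0, 27.0, 18.0)
```

Lower Ct means earlier amplification, so a tumor ΔCt that is 3 cycles smaller than the control's corresponds to roughly 2³ = 8-fold higher expression.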
Conclusions: WDR66 might be a useful biomarker for risk stratification of esophageal squamous carcinomas. WDR66 expression is likely to play an important role in esophageal squamous cell carcinoma growth and invasion as a positive modulator of epithelial-mesenchymal transition. Furthermore, due to its high expression and possible functional relevance, WDR66 might be a novel drug target for the treatment of squamous carcinoma.
12,994
407
23
[ "wdr66", "expression", "cell", "gene", "escc", "cells", "esophageal", "analysis", "human", "expressed" ]
[ "test", "test" ]
[CONTENT] WD repeat-containing protein | Esophageal squamous cell carcinoma | Epithelial-mesenchymal transition [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Calcium-Binding Proteins | Carcinoma, Squamous Cell | Cell Line, Tumor | Cell Movement | Cell Proliferation | Epithelial-Mesenchymal Transition | Esophageal Neoplasms | Esophageal Squamous Cell Carcinoma | Female | Gene Expression | Gene Knockdown Techniques | Humans | Male | Middle Aged | Neoplasm Grading | Neoplasm Staging | Occludin | Risk | Vimentin | Young Adult [SUMMARY]
[CONTENT] test | test [SUMMARY]
[CONTENT] wdr66 | expression | cell | gene | escc | cells | esophageal | analysis | human | expressed [SUMMARY]
[CONTENT] repeat | wd | wd repeat | containing | proteins | protein | repeat containing protein | containing protein | repeat containing | wd repeat proteins [SUMMARY]
[CONTENT] agilent | rna | wdr66 | santa clara | clara | santa | usa | cell | gene | kit [SUMMARY]
[CONTENT] wdr66 | expression | cell | gene | escc | expressed | wdr66 gene | cells | esophageal | wdr66 expression [SUMMARY]
[CONTENT] wdr66 | escc | novel | target escc | target | plays | role | cell | functional data furthermore | promising target escc [SUMMARY]
[CONTENT] wdr66 | expression | cell | escc | gene | analysis | cells | esophageal | expressed | rna [SUMMARY]
[CONTENT] WDR66 [SUMMARY]
[CONTENT] ||| 10 | 18 ||| 66 ||| Microarray | second | 71 | 25 | 11 | 13 | 15 | 7 ||| WDR66 | KYSE520 [SUMMARY]
[CONTENT] 0.031 ||| Multivariate Cox | 0.042 ||| RNA ||| ||| [SUMMARY]
[CONTENT] ||| ||| WDR66 [SUMMARY]
[CONTENT] WDR66 ||| ||| 10 | 18 ||| 66 ||| Microarray | second | 71 | 25 | 11 | 13 | 15 | 7 ||| WDR66 | KYSE520 ||| ||| 0.031 ||| Multivariate Cox | 0.042 ||| RNA ||| ||| ||| ||| ||| WDR66 [SUMMARY]
The clinical prognostic value of lncRNA FOXP4-AS1 in cancer patients: A meta-analysis and bioinformatics analysis based on TCGA datasets.
36281152
Mortality and recurrence rates among patients with cancer remain high. Long non-coding RNA (lncRNA) forkhead box P4 antisense RNA 1 (FOXP4-AS1) is a promising lncRNA. There is increasing evidence that lncRNA FOXP4-AS1 is abnormally expressed in various tumors and is associated with cancer prognosis. This study was designed to identify the prognostic value of lncRNA FOXP4-AS1 in human malignancies.
BACKGROUND
We searched electronic databases up to April 29, 2022, including PubMed, Cochrane Library, Embase, MEDLINE, and Web of Science. Eligible studies that evaluated the clinicopathological and prognostic role of lncRNA FOXP4-AS1 in patients with malignant tumors were included. The pooled odds ratios (ORs) and hazard ratios (HRs) were calculated using Stata/SE 16.1 software to assess the role of lncRNA FOXP4-AS1.
METHODS
A total of 6 studies on cancer patients were included in the present meta-analysis. The combined results revealed that high expression of lncRNA FOXP4-AS1 was significantly associated with unfavorable overall survival (OS) (HR = 1.99, 95% confidence interval [CI]: 1.65-2.39, P < .00001) and poor disease-free survival (DFS) (HR = 1.81, 95% CI: 1.54-2.13, P < .00001) in a variety of cancers. In addition, increased lncRNA FOXP4-AS1 expression was also correlated with tumor size (larger vs smaller) (OR = 3.16, 95% CI: 2.12-4.71, P < .00001), alpha-fetoprotein (≥400 vs <400) (OR = 3.81, 95% CI: 2.38-6.11, P = .83), lymph node metastasis (positive vs negative) (OR = 2.93, 95% CI: 1.51-5.68, P = .001), and age (younger vs older) (OR = 2.06, 95% CI: 1.41-3.00, P = .00002) in patients with cancer. Furthermore, analysis using The Cancer Genome Atlas (TCGA) dataset showed that the expression level of lncRNA FOXP4-AS1 was higher in most tumor tissues than in the corresponding normal tissues, which predicted a worse prognosis.
RESULTS
In this meta-analysis, we demonstrate that high lncRNA FOXP4-AS1 expression may serve as a potential marker for predicting cancer prognosis.
CONCLUSIONS
[ "Humans", "RNA, Long Noncoding", "Prognosis", "alpha-Fetoproteins", "Computational Biology", "Biomarkers, Tumor", "Neoplasms", "RNA, Antisense", "Lymphatic Metastasis", "Forkhead Transcription Factors" ]
9592412
1. Introduction
Cancer is one of the leading causes of death worldwide.[1] However, the exact mechanisms underlying cancer remain unclear. Furthermore, surveillance of patients with early-stage cancer remains difficult. Hence, many cancer cases are identified at an advanced stage and have a dismal prognosis. The official databases of the World Health Organization and the American Cancer Society indicate that cancer poses the highest clinical, social, and economic burden among all human diseases. A total of 18 million new cases were diagnosed in 2018, the most frequent of which were lung (2.09 million cases), breast (2.09 million cases), and prostate (1.28 million cases) cancers.[2] Therefore, early diagnosis and intervention have become vital for improving the overall survival (OS) of patients with cancer. Long non-coding RNAs (lncRNAs) are defined as transcripts >200 nucleotides in length that display limited protein-coding potential.[3] In recent years, lncRNAs have been found to play significant regulatory roles in a variety of diseases, especially in the biological processes of malignant tumors, including differentiation, migration, apoptosis, and dosage compensation effects.[4,5] Recent studies have proposed that LINC00675 is related to the clinicopathological features and prognosis of various cancers through its participation in cancer cell proliferation and invasion.[6,7] In cervical cancer, SIP1 expression is upregulated by lncRNA NORAD to promote proliferation and invasion of cervical cancer cells.[8] Studies have shown that lncRNA NONHSAAT081507.1 (LINC81507) plays an inhibitory role in the progression of non-small cell lung cancer and acts as a therapeutic target and potential biomarker for its diagnosis and prognosis.[9] These results provide evidence that lncRNAs may serve as novel prognostic biomarkers and therapeutic targets in human tumors.[10,11] Forkhead box P4 antisense RNA 1 (FOXP4-AS1), a member of the lncRNA family, is an antisense lncRNA to FOXP4. FOXP4-AS1 is a tumor-associated lncRNA believed to participate in tumor occurrence and to promote tumor proliferation, invasion, and migration. Extensive studies have indicated that FOXP4-AS1 is highly expressed in several malignancies, including hepatocellular carcinoma (HCC),[12] colorectal carcinoma,[13] and nasopharyngeal carcinoma (NPC).[14] Its upregulation is usually related to tumor grade and poor prognosis.[15,16] However, no systematic meta-analysis has yet been conducted to support the prognostic value of FOXP4-AS1 in these cancers. Hence, this meta-analysis was performed to investigate the clinical prognostic value of lncRNA FOXP4-AS1 in patients with cancer.
2. Material and Methods
2.1. Literature search This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Meta-analysis of Observational Studies in Epidemiology (MOOSE) statements. A comprehensive literature search was conducted. To identify relevant articles, 2 reviewers independently searched electronic databases, including PubMed, Cochrane Library, EMBASE, MEDLINE and Web of Science, using the following search terms: “long non-coding RNA FOXP4-AS1,” “lncRNA FOXP4-AS1,” “Forkhead box P4 antisense RNA 1,” “tumor,” “cancer,” and “prognosis.” Any inconsistencies were resolved by discussion with a third reviewer. Studies were eligible if they reported a definitive or histopathological diagnosis of cancer; reported survival and clinical prognostic parameters of lncRNA FOXP4-AS1 in cancer patients; and provided sufficient information to calculate pooled hazard ratios (HRs) and 95% confidence intervals (CIs). The exclusion criteria were as follows: studies without prognostic outcome information, non-human studies, letters, case reports, review articles, duplicate publications, and studies without original data. 2.2. Data extraction and quality assessment The following information was extracted from each study by 3 independent authors and a consensus was reached: author, country, publication year, tumor type, sample size, follow-up time, detection method, and cutoff value. The number of patients in each group was recorded by distant metastasis (DM), tumor size, and tumor-node-metastasis stage, together with the number of patients with high or low FOXP4-AS1 expression in each group. When the HR with 95% CI was reported in a univariate or multivariate analysis, it was extracted directly from the report; otherwise, we used Engauge Digitizer (https://sourceforge.net/projects/digitizer/)[17] to estimate the HR and 95% CI from the Kaplan–Meier survival curves. Quality assessment of all included studies was based on the Newcastle–Ottawa quality assessment scale, which covers 3 dimensions: selection, comparability, and exposure. Each study was given a score ranging from 0 to 9, and a study with a Newcastle–Ottawa quality assessment scale score of ≥6 was considered to be of high quality.[18] 2.3. Statistical analyses All statistical analyses were performed using Review Manager (RevMan) 5.3 software and Stata/SE 16.1 software. Sensitivity analysis was performed by omitting studies one by one to determine whether the results were stable, and the publication bias of this meta-analysis was evaluated using the Begg test in Stata/SE 16.1 software. The Q test and I2 statistic were applied to estimate the heterogeneity of the results. A fixed-effects model was selected when I2 < 50% was observed; the synthetic estimate was calculated with a random-effects model when heterogeneity was evident (I2 > 50%). Statistical significance was set at P < .05.
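The pooling and model-selection rule described above (inverse-variance weighting, Cochran's Q, I², and a DerSimonian-Laird random-effects estimate when heterogeneity is evident) can be sketched in pure Python. The study HRs and CIs below are hypothetical, not the values from the included studies.

```python
import math

def pool_hrs(hrs, ci_los, ci_his):
    """Inverse-variance meta-analysis of hazard ratios.

    Each study's log-HR standard error is recovered from its 95% CI:
    se = (ln(hi) - ln(lo)) / (2 * 1.96).
    Returns (pooled random-effects HR, its 95% CI, I^2 in percent).
    """
    y = [math.log(h) for h in hrs]
    se = [(math.log(b) - math.log(a)) / (2 * 1.96) for a, b in zip(ci_los, ci_his)]
    w = [1.0 / s ** 2 for s in se]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q and I^2 quantify between-study heterogeneity
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0
    # DerSimonian-Laird tau^2 inflates the variances for the random-effects model
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_re = [1.0 / (s ** 2 + tau2) for s in se]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    ci = (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re))
    return math.exp(y_re), ci, i2

# Hypothetical per-study HRs with their 95% CI bounds
hr, (lo, hi), i2 = pool_hrs([1.8, 2.2, 1.6], [1.2, 1.5, 1.0], [2.7, 3.2, 2.6])
```

With I² below 50% the fixed-effect estimate would be reported instead; here the random-effects estimate simply collapses to it when tau² is zero.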
3.5. Validation of the results in the cancer genome atlas (TCGA) dataset
We also used the TCGA dataset to analyze the expression of lncRNA FOXP4-AS1 in different types of cancer. This database covers 22 different types of human malignant tumors, including bladder urothelial carcinoma, breast invasive carcinoma, cervical squamous cell carcinoma and endocervical adenocarcinoma, cholangiocarcinoma, colon adenocarcinoma (COAD), esophageal carcinoma (ESCA), head and neck squamous cell carcinoma (HNSC), kidney chromophobe (KICH), kidney renal clear cell carcinoma (KIRC), kidney renal papillary cell carcinoma (KIRP), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), pancreatic adenocarcinoma (PAAD), prostate adenocarcinoma (PRAD), rectum adenocarcinoma (READ), sarcoma (SARC), skin cutaneous melanoma (SKCM), stomach adenocarcinoma (STAD), thyroid carcinoma (THCA), thymoma (THYM), and uterine corpus endometrial carcinoma (UCEC). As shown in Figure 7, lncRNA FOXP4-AS1 was significantly overexpressed in tumor tissues, especially in patients with COAD, PRAD, READ, and STAD. The expression of lncRNA FOXP4-AS1 differed significantly by clinical stage in 6 malignancies: KIRP, KIPAN (pan-kidney cohort), HNSC, KIRC, LIHC, and TGCT (testicular germ cell tumors) (Fig. 8). Moreover, the expression of lncRNA FOXP4-AS1 differed significantly by differentiation grade in 6 malignancies, including ESCA, stomach and esophageal carcinoma (STES), STAD, HNSC, PAAD, and ovarian serous cystadenocarcinoma (OV) (Fig. 9). Additionally, we explored whether lncRNA FOXP4-AS1 expression was associated with the survival and prognosis of cancer patients in the TCGA dataset. LncRNA FOXP4-AS1 expression in different types of human malignant tumor tissues and corresponding normal tissues. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA. LncRNA FOXP4-AS1 expression in the clinical stage of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA. LncRNA FOXP4-AS1 expression in the differentiation grade of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA. The results revealed that upregulated lncRNA FOXP4-AS1 expression in different types of malignant tumors exhibited negative effects on OS (Fig. 10) and DFS (Fig. 11). Kaplan–Meier plots of OS for 22 types of human malignancies. OS = overall survival. Kaplan–Meier plots of DFS for 22 types of human malignancies. DFS = disease-free survival.
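The OS and DFS comparisons referenced above rest on Kaplan–Meier survival curves. A minimal sketch of the product-limit estimator behind such curves, using hypothetical follow-up data rather than TCGA values:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.

    times  -- follow-up time for each patient
    events -- 1 if the event (death/relapse) occurred, 0 if censored
    Returns a list of (event time, survival probability) steps.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        n_t = 0
        # group all patients sharing this follow-up time
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            n_t += 1
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk   # multiply by conditional survival at t
            curve.append((t, s))
        at_risk -= n_t                  # censored and deceased both leave the risk set
    return curve

# Hypothetical cohort: events at t=2,3,5,8; censoring at t=3 and t=12
curve = kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 1, 1, 0])
```

Comparing two such curves (high vs low FOXP4-AS1 expression) with a log-rank test is the usual next step; censored patients contribute to the risk set until their last follow-up but never count as events.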
5. Conclusions
We conducted the first systematic review and estimation of the relationship between abnormal lncRNA FOXP4-AS1 expression and survival and clinical outcomes in patients with tumors. Based on our results, high expression of lncRNA FOXP4-AS1 was associated with poor OS and DFS, making this gene a potential prognostic biomarker. Given the limitations of this study, larger-scale, high-quality studies across a variety of ethnic populations are necessary to assess the value of lncRNA FOXP4-AS1 in tumors.
[ "2.1. Literature search", "2.2. Data extraction and quality assessment", "2.3. Statistical analyses", "3. Results", "3.1. Literature search and selection", "3.2. Lncrna FOXP4-AS1 expression is highly correlated with OS and disease-free survival (DFS)", "3.3. Relationship between lncRNA FOXP4-AS1 expression and clinicopathological characteristics", "3.4. Publication bias and sensitivity analysis", "Author contributions" ]
[ "This meta-analysis was conducted in accordance with the Guidelines for Preferred Reports of Systematic Reviews and Meta-Analysis and Meta-analysis of Observational Epidemiological Studies statements. A comprehensive literature search was conducted. In order to identify relevant articles, 2 reviewers independently searched electronic databases, including PubMed, Cochrane Library, EMBASE, Medline and Web of Science.Use the following search terms: “long non-coding RNA FOXP4-AS1,” “lncRNA FOXP4-AS1,” “Forkhead box P4 antisense RNA 1,” “tumor,” “cancer,” and “prognosis.” The issue will be discussed with a third reviewer if there are any inconsistencies. To select eligible studies, we used the following criteria: definitive diagnosis or histopathological diagnosis in cancer patients; survival and clinical prognostic parameters of lncRNA FOXP4-AS1 in cancer patients were reported. The combined disaster risk (HR) and 95% confidence intervals (CI) were calculated using sufficient information. The exclusion criteria were as follows: studies without prognostic outcome information, non-human studies, letters, case reports, review articles, duplicate publications, and studies without original data.", "The following information was extracted from each study by 3 independent authors and a consensus was reached: author, country, publication year, tumor type, cancer size, follow-up time, detection method, and cutoff value. Based on distant metastasis (DM), tumor size, and cancer node metastasis stage, the number of patients was divided for each group, and the number of patients with high or low FOXP4-AS1 expression in each group. 
When the HR with 95% CI was reported in a univariate or multivariate analysis, it was directly extracted from the report (https://sourceforge.net/projects/digitizer/).[17] We used Engauge Digitizer to calculate HR and 95% CI based on Kaplan–Meier survival curves, and quality assessments for all included studies were based on the Newcastle–Ottawa quality assessment scale, which includes 3 dimensions: selection, comparability, and exposure. Each study was given a score ranging from 0 to 9. A study with a Newcastle–Ottawa quality assessment scale score of ≥6 was considered to be of high quality.[18]", "All statistical analyses of the data were performed using Review Manager (RevMan) 5.3 software and Stata/SE 16.1 software. Sensitivity analysis was performed by omitting the literature one by one to determine whether the results were stable, and the publication bias of this meta-analysis was evaluated using the Begg test according to Stata/SE 16.1 software. The Q test and I2 statistics were applied to estimate the heterogeneity of the results. A fixed-effects model was selected when an I2 < 50% was observed. The synthetic estimate was calculated based on the random-effects model when heterogeneity was evident (I2 > 50%). Statistical significance was set at P < .05.", " 3.1. Literature search and selection After the preliminary online search, the investigators retrieved 110 relevant studies from electronic databases. After the removal of duplicates, 57 studies were excluded. After thorough screening of titles and abstracts, 14 publications were included. After carefully assessing the full texts, 6 studies published between 2018 and 2021 were included in the present meta-analysis. The literature screening process is shown in Figure 1. These eligible studies included 1128 patients. 
In the present meta-analysis, a variety of tumor types were reported, including HCC,[12,19] NPC,[14] mantle cell lymphoma (MCL),[20] colorectal cancer,[13] and osteosarcoma.[21] The expression of lncRNA FOXP4-AS1 in these included studies was quantified using real-time fluorescent PCR. The median was selected as the cutoff value to distinguish between high and low expression of lncRNA FOXP4-AS1. Six eligible studies used the OS to estimate patient survival. The detailed clinical characteristics of the patients are summarized in Table 1.\nThe main characteristics of the eligible literatures included in the meta-analysis.\nCRC = colorectal cancer, HCC = hepatocellular carcinoma, MCL = mantle cell lymphoma, NOS = Newcastle–Ottawa Quality Assessment Scale, NPC = nasopharyngeal carcinoma, qRT-PCR = quantitative real-time fluorescent polymerase chain reaction.\nFlow diagram of the literatures selection in this meta-analysis.\nAfter the preliminary online search, the investigators retrieved 110 relevant studies from electronic databases. After the removal of duplicates, 57 studies were excluded. After thorough screening of titles and abstracts, 14 publications were included. After carefully assessing the full texts, 6 studies published between 2018 and 2021 were included in the present meta-analysis. The literature screening process is shown in Figure 1. These eligible studies included 1128 patients. In the present meta-analysis, a variety of tumor types were reported, including HCC,[12,19] NPC,[14] mantle cell lymphoma (MCL),[20] colorectal cancer,[13] and osteosarcoma.[21] The expression of lncRNA FOXP4-AS1 in these included studies was quantified using real-time fluorescent PCR. The median was selected as the cutoff value to distinguish between high and low expression of lncRNA FOXP4-AS1. Six eligible studies used the OS to estimate patient survival. 
The detailed clinical characteristics of the patients are summarized in Table 1.\nThe main characteristics of the eligible literatures included in the meta-analysis.\nCRC = colorectal cancer, HCC = hepatocellular carcinoma, MCL = mantle cell lymphoma, NOS = Newcastle–Ottawa Quality Assessment Scale, NPC = nasopharyngeal carcinoma, qRT-PCR = quantitative real-time fluorescent polymerase chain reaction.\nFlow diagram of the literatures selection in this meta-analysis.\n 3.2. Lncrna FOXP4-AS1 expression is highly correlated with OS and disease-free survival (DFS) Overall, all the included studies investigated cancer prognosis. A total of 1128 patients were assessed for HR and 95% CI for OS. A random-effects model was used to analyze the pooled HR, and its 95% CI depended on obvious heterogeneity (P = .06, I2 = 53%). We further elucidated the relationship between FOXP4-AS1 expression and OS, as illustrated in Figure 2. The pooled results revealed that high expression of lncRNA FOXP4-AS1 was related to poor prognosis of cancers (HR = 1.99, 95% CI:1.65–2.39, P < .00001, Fig. 2A). In terms of DFS, 5 studies were included, and the pooled results indicated that patients with high expression of lncRNA FOXP4-AS1 had poor DFS (HR = 1.81, 95% CI:1.54–2.13, P < .00001, Fig. 2B). Thus, the prognosis of cancer patients with lncRNA FOXP4-AS1 overexpression was worse than that of patients with low lncRNA FOXP4-AS1 expression. These results indicate that lncRNA FOXP4-AS1 may be a factor in predicting the prognosis of cancer patients.\nForest plots for the association between lncRNA FOXP4-AS1 expression with OS (A) and DFS (B). DFS = disease-free survival, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.\nOverall, all the included studies investigated cancer prognosis. A total of 1128 patients were assessed for HR and 95% CI for OS. 
A random-effects model was used to analyze the pooled HR, and its 95% CI depended on obvious heterogeneity (P = .06, I2 = 53%). We further elucidated the relationship between FOXP4-AS1 expression and OS, as illustrated in Figure 2. The pooled results revealed that high expression of lncRNA FOXP4-AS1 was related to poor prognosis of cancers (HR = 1.99, 95% CI:1.65–2.39, P < .00001, Fig. 2A). In terms of DFS, 5 studies were included, and the pooled results indicated that patients with high expression of lncRNA FOXP4-AS1 had poor DFS (HR = 1.81, 95% CI:1.54–2.13, P < .00001, Fig. 2B). Thus, the prognosis of cancer patients with lncRNA FOXP4-AS1 overexpression was worse than that of patients with low lncRNA FOXP4-AS1 expression. These results indicate that lncRNA FOXP4-AS1 may be a factor in predicting the prognosis of cancer patients.\nForest plots for the association between lncRNA FOXP4-AS1 expression with OS (A) and DFS (B). DFS = disease-free survival, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.\n 3.3. Relationship between lncRNA FOXP4-AS1 expression and clinicopathological characteristics According to the evaluation of the 6 eligible studies that contained detailed clinicopathological characteristics, it was observed that the elevated expression of lncRNA FOXP4-AS1 positively correlated with tumor size (larger vs smaller) (OR = 3.16, 95%CI: 2.12–4.71, P < .00001, Fig. 3B). In particular, the overexpression of lncRNA FOXP4-AS1 was consistent with elevated alpha-fetoprotein (OR = 3.81, 95%CI: 2.38–6.11, P = .83, Fig. 3C) in HCC, and the fixed effects model was selected to estimate due to the inconspicuous heterogeneity. The analysis results revealed that the patients with lncRNA FOXP4-AS1 overexpression were more vulnerable to younger (OR = 2.06, 95%CI: 1.41–3.00, P = .00002, Fig. 3A) and lymph node metastasis (OR = 2.93, 95%CI: 1.51–5.68, P = .001, Fig. 3D) in patients with cancer. 
However, there were no significant differences between lncRNA FOXP4-AS1 expression and TNM stage (OR = 1.38, 95%CI: 0.42–4.48, P = .59, Fig. 4A), distant metastasis (OR = 0.84, 95%CI: 0.49–1.45, P = .54, Fig. 4B), gender (OR = 1.08, 95%CI: 0.70–1.67, P = .72, Fig. 4C), or differentiation (OR = 0.91, 95%CI: 0.49–1.67, P = .76, Fig. 4D). This information is presented in Table 2.\nForest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: age (A), tumor size (B), AFP (C), lymph node metastasis (D). AFP = alpha-fetoprotein, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.\nForest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: TNM stage (A), distant metastasis (B), gender (C), and differentiation (D). FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.\n 3.4. Publication bias and sensitivity analysis Publication bias was evaluated using the Begg test. The results showed no significant publication bias affecting the OS analysis (Pr > |z| = 0.368) (Fig. 5). Figure 6 illustrates the sensitivity analysis, which showed that the pooled HRs remained robust when each study was removed in turn.\nPublication bias assessment of lncRNA FOXP4-AS1 expression based on OS. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.\nSensitivity analysis of lncRNA FOXP4-AS1 expression based on OS. 
FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.\n 3.5. Validation of the results in The Cancer Genome Atlas (TCGA) dataset In addition, we used the TCGA dataset to analyze the expression of lncRNA FOXP4-AS1 across different cancer types. This dataset covers 22 types of human malignant tumors, including bladder urothelial carcinoma, breast invasive carcinoma, cervical squamous cell carcinoma and endocervical adenocarcinoma, cholangiocarcinoma, colon adenocarcinoma (COAD), esophageal carcinoma (ESCA), head and neck squamous cell carcinoma (HNSC), kidney chromophobe (KICH), kidney renal clear cell carcinoma (KIRC), kidney renal papillary cell carcinoma (KIRP), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), pancreatic adenocarcinoma (PAAD), prostate adenocarcinoma (PRAD), rectum adenocarcinoma (READ), sarcoma (SARC), skin cutaneous melanoma (SKCM), stomach adenocarcinoma (STAD), thyroid carcinoma (THCA), thymoma (THYM), and uterine corpus endometrial carcinoma (UCEC). As shown in Figure 7, lncRNA FOXP4-AS1 was significantly overexpressed in tumor tissues, especially in patients with COAD, PRAD, READ, and STAD. 
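The tumor-versus-normal expression comparisons behind Figure 7 are typically rank-based. The exact test used for the TCGA comparisons is not stated here, so the following is only an illustrative stdlib-only sketch of a two-sample Mann–Whitney U test with made-up expression values:

```python
import math

def mann_whitney_u(x, y):
    """Two-sample Mann-Whitney U test (normal approximation, no tie
    correction) - the kind of tumor-vs-normal expression comparison
    sketched here. Returns (U for x over y, two-sided p-value)."""
    # U counts how often an x-value exceeds a y-value (ties count half)
    u = sum(1 for xi in x for yi in y if xi > yi) \
        + 0.5 * sum(1 for xi in x for yi in y if xi == yi)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2                                   # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # sd of U under H0
    z = (u - mu) / sigma
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

# hypothetical relative-expression values, not real TCGA data
tumor = [5, 6, 7, 8, 9, 10, 11, 12]
normal = [1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5]
u_stat, p_value = mann_whitney_u(tumor, normal)
```

With fully separated samples like these, U equals n1*n2 and the p-value is far below .05, mirroring the "significantly overexpressed" calls in the figure.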
We found that lncRNA FOXP4-AS1 expression differed significantly by clinical stage in 6 malignant tumors: KIRP, KIPAN (pan-kidney cohort), HNSC, KIRC, LIHC, and TGCT (testicular germ cell tumors) (Fig. 8). Moreover, lncRNA FOXP4-AS1 expression differed significantly by differentiation grade in 6 malignant tumors: ESCA, stomach and esophageal carcinoma (STES), STAD, HNSC, PAAD, and ovarian serous cystadenocarcinoma (OV) (Fig. 9). Additionally, we explored whether lncRNA FOXP4-AS1 expression was associated with the survival and prognosis of cancer patients in the TCGA dataset.\nLncRNA FOXP4-AS1 expression in different types of human malignant tumor tissues and corresponding normal tissues. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.\nLncRNA FOXP4-AS1 expression in the clinical stage of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.\nLncRNA FOXP4-AS1 expression in the differentiation grade of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.\nThe results revealed that upregulated lncRNA FOXP4-AS1 expression had negative effects on OS (Fig. 10) and DFS (Fig. 11) in different types of malignant tumors.\nKaplan–Meier plotter of OS for 22 types of human malignancies. OS = overall survival.\nKaplan–Meier plotter of DFS for 22 types of human malignancies. DFS = disease-free survival.", "After the preliminary online search, the investigators retrieved 110 relevant studies from electronic databases. After the removal of duplicates, 57 studies were excluded. After screening of titles and abstracts, 14 publications remained. After careful assessment of the full texts, 6 studies published between 2018 and 2021 were included in the present meta-analysis. The literature screening process is shown in Figure 1. These eligible studies included 1128 patients. A variety of tumor types were reported, including HCC,[12,19] NPC,[14] mantle cell lymphoma (MCL),[20] colorectal cancer,[13] and osteosarcoma.[21] The expression of lncRNA FOXP4-AS1 in the included studies was quantified using quantitative real-time fluorescent PCR, with the median selected as the cutoff value to distinguish high from low expression. Six eligible studies used the OS to estimate patient survival. 
The detailed clinical characteristics of the patients are summarized in Table 1.\nThe main characteristics of the eligible studies included in the meta-analysis.\nCRC = colorectal cancer, HCC = hepatocellular carcinoma, MCL = mantle cell lymphoma, NOS = Newcastle–Ottawa Quality Assessment Scale, NPC = nasopharyngeal carcinoma, qRT-PCR = quantitative real-time fluorescent polymerase chain reaction.\nFlow diagram of the literature selection in this meta-analysis.", "All the included studies investigated cancer prognosis, and a total of 1128 patients were assessed for the HR and 95% CI for OS. Because of the evident heterogeneity (P = .06, I2 = 53%), a random-effects model was used to pool the HR and its 95% CI. The relationship between FOXP4-AS1 expression and OS is illustrated in Figure 2. The pooled results revealed that high expression of lncRNA FOXP4-AS1 was associated with poor prognosis (HR = 1.99, 95% CI: 1.65–2.39, P < .00001, Fig. 2A). For DFS, 5 studies were included, and the pooled results indicated that patients with high lncRNA FOXP4-AS1 expression had poorer DFS (HR = 1.81, 95% CI: 1.54–2.13, P < .00001, Fig. 2B). Thus, the prognosis of cancer patients with lncRNA FOXP4-AS1 overexpression was worse than that of patients with low expression, indicating that lncRNA FOXP4-AS1 may be a predictor of prognosis in cancer patients.\nForest plots for the association between lncRNA FOXP4-AS1 expression and OS (A) and DFS (B). DFS = disease-free survival, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.", "In the 6 eligible studies with detailed clinicopathological characteristics, elevated lncRNA FOXP4-AS1 expression positively correlated with tumor size (larger vs smaller) (OR = 3.16, 95%CI: 2.12–4.71, P < .00001, Fig. 3B). 
In HCC, overexpression of lncRNA FOXP4-AS1 was also associated with elevated alpha-fetoprotein (OR = 3.81, 95%CI: 2.38–6.11, P = .83, Fig. 3C); a fixed-effects model was selected because heterogeneity was negligible. Patients with lncRNA FOXP4-AS1 overexpression were also more likely to be younger (OR = 2.06, 95%CI: 1.41–3.00, P = .00002, Fig. 3A) and to have lymph node metastasis (OR = 2.93, 95%CI: 1.51–5.68, P = .001, Fig. 3D). However, there were no significant differences between lncRNA FOXP4-AS1 expression and TNM stage (OR = 1.38, 95%CI: 0.42–4.48, P = .59, Fig. 4A), distant metastasis (OR = 0.84, 95%CI: 0.49–1.45, P = .54, Fig. 4B), gender (OR = 1.08, 95%CI: 0.70–1.67, P = .72, Fig. 4C), or differentiation (OR = 0.91, 95%CI: 0.49–1.67, P = .76, Fig. 4D). This information is presented in Table 2.\nThe main characteristics of the eligible studies included in the meta-analysis.\nCRC = colorectal cancer, HCC = hepatocellular carcinoma, MCL = mantle cell lymphoma, NOS = Newcastle–Ottawa Quality Assessment Scale, NPC = nasopharyngeal carcinoma, qRT-PCR = quantitative real-time fluorescent polymerase chain reaction.\nForest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: age (A), tumor size (B), AFP (C), lymph node metastasis (D). AFP = alpha-fetoprotein, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.\nForest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: TNM stage (A), distant metastasis (B), gender (C), and differentiation (D). FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.", "Publication bias was evaluated using the Begg test. The results showed no significant publication bias affecting the OS analysis (Pr > |z| = 0.368) (Fig. 5). 
Figure 6 illustrates the sensitivity analysis, which showed that the pooled HRs remained robust when each study was removed in turn.\nPublication bias assessment of lncRNA FOXP4-AS1 expression based on OS. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.\nSensitivity analysis of lncRNA FOXP4-AS1 expression based on OS. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.", "Conceptualization: Qiang Shu, Fei Xie.\nData curation: Xiaoling Liu, Jushu Yang.\nFormal analysis: Xiaoling Liu.\nFunding acquisition: Fei Xie.\nInvestigation: Xiaoling Liu, Jushu Yang, Tinggang Mou.\nMethodology: Qiang Shu, Xiaoling Liu, Jushu Yang, Tinggang Mou.\nProject administration: Qiang Shu, Fei Xie.\nSoftware: Tinggang Mou.\nSupervision: Fei Xie.\nWriting – original draft: Qiang Shu.\nWriting – review & editing: Qiang Shu." ]
[ null, null, null, "results", null, null, null, null, null ]
[ "1. Introduction", "2. Material and Methods", "2.1. Literature search", "2.2. Data extraction and quality assessment", "2.3. Statistical analyses", "3. Results", "3.1. Literature search and selection", "3.2. Lncrna FOXP4-AS1 expression is highly correlated with OS and disease-free survival (DFS)", "3.3. Relationship between lncRNA FOXP4-AS1 expression and clinicopathological characteristics", "3.4. Publication bias and sensitivity analysis", "3.5. Validation of the results in the cancer genome atlas (TCGA) dataset", "4. Discussion", "5. Conclusions", "Author contributions" ]
[ "Cancer is one of the leading causes of death worldwide.[1] However, the exact mechanism underlying this cancer remains unclear. Furthermore, surveillance of patients with early stage cancer remains difficult. Hence, many cancer cases are identified at an advanced stage and have a dismal prognosis. The official databases of the World Health Organization and American Cancer Society indicate that cancer poses the highest clinical, social, and economic burden among all human diseases. A total number of 18 million new cases have been diagnosed in 2018, the most frequent of which are lung (2.09 million cases), breast (2.09 million cases), and prostate (1.28 million cases) cancers.[2] Therefore, early diagnosis and intervention have become vital for improving the overall survival (OS) of patients with cancer.\nLong non-coding RNA (lncRNA) are defined as transcripts >200 nucleotides in length that display limited protein-coding potential.[3] In recent years, lncRNAs have been found to play significant regulatory roles in a variety of diseases, especially in the biological processes of malignant tumors, including differentiation, migration, apoptosis, and dose compensation effects.[4,5] Recent studies have proposed that LINC00675 is related to clinicopathological features and prognosis of various cancer patients by participating in cancer cell proliferation and invasion.[6,7] In cervical cancer, SIP1 expression is upregulated by lncRNA NORAD to promote proliferation and invasion of cervical cancer cells.[8] Studies have shown that lncNONHSAAT081507.1 (LINC81507) plays an inhibitory role in the progression of non-small cell lung cancer and acts as a therapeutic target and potential biomarker for the diagnosis and prognosis of nonsmall cell lung cancer.[9] These results provide evidence that lncRNAs may serve as novel prognostic biomarkers and therapeutic targets in human tumors.[10,11]\nForkhead box P4 antisense RNA 1 (FOXP4-AS1), a member of the lncRNA family, is an 
antisense lncRNA to FOXP4. FOXP4-AS1 is a tumor-related lncRNA believed to participate in tumorigenesis and to promote tumor proliferation, invasion, and migration. Extensive studies have indicated that FOXP4-AS1 is highly expressed in several malignancies, including hepatocellular carcinoma (HCC),[12] colorectal carcinoma,[13] and nasopharyngeal carcinoma (NPC).[14] Thus, its upregulation is usually related to tumor grade and poor prognosis.[15,16] However, no systematic meta-analysis has yet been conducted to support the prognostic value of FOXP4-AS1 in these cancers. Hence, a meta-analysis was performed to investigate the clinical prognostic value of lncRNA FOXP4-AS1 in patients with cancer.", " 2.1. Literature search This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses and the Meta-analysis of Observational Studies in Epidemiology statements. A comprehensive literature search was conducted. To identify relevant articles, 2 reviewers independently searched electronic databases, including PubMed, Cochrane Library, EMBASE, Medline, and Web of Science, using the following search terms: “long non-coding RNA FOXP4-AS1,” “lncRNA FOXP4-AS1,” “Forkhead box P4 antisense RNA 1,” “tumor,” “cancer,” and “prognosis.” Any inconsistencies were resolved through discussion with a third reviewer. To select eligible studies, we used the following criteria: a definitive or histopathological diagnosis of cancer; and reported survival and clinical prognostic parameters for lncRNA FOXP4-AS1 in cancer patients. Pooled hazard ratios (HR) and 95% confidence intervals (CI) were calculated when sufficient information was available. 
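For illustration, inverse-variance pooling of hazard ratios with a DerSimonian–Laird random-effects model can be sketched as follows. The numbers below are hypothetical, and the published analysis was run in RevMan and Stata, so this is only a sketch of the underlying computation:

```python
import math

def pool_hr_random_effects(hrs, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    Each study's log-HR standard error is recovered from its 95% CI:
    se = (ln(upper) - ln(lower)) / (2 * 1.96).
    Returns (pooled HR, (CI low, CI high), I^2 in percent).
    """
    y = [math.log(h) for h in hrs]                      # per-study log-HR
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)
          for lo, hi in zip(ci_los, ci_his)]
    w = [1 / s ** 2 for s in se]                        # inverse-variance weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q, I^2, and the between-study variance tau^2
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights fold tau^2 into each study's variance
    w_re = [1 / (s ** 2 + tau2) for s in se]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return (math.exp(y_re),
            (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re)),
            i2)

# three hypothetical studies (HR, CI low, CI high)
hr, ci, i2 = pool_hr_random_effects([1.8, 2.5, 1.6], [1.2, 1.6, 1.1], [2.7, 3.9, 2.3])
```

The pooled HR always lies between the smallest and largest study estimates, and the I² value is what drives the fixed- versus random-effects choice described in Section 2.3.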
The exclusion criteria were as follows: studies without prognostic outcome information, non-human studies, letters, case reports, review articles, duplicate publications, and studies without original data.\n 2.2. Data extraction and quality assessment The following information was extracted from each study by 3 independent authors, and a consensus was reached: author, country, publication year, tumor type, cancer size, follow-up time, detection method, and cutoff value. Patients were grouped by distant metastasis (DM), tumor size, and tumor–node–metastasis stage, and the number of patients with high or low FOXP4-AS1 expression was recorded for each group. 
When the HR with 95% CI was reported in a univariate or multivariate analysis, it was extracted directly from the report. Otherwise, we used Engauge Digitizer (https://sourceforge.net/projects/digitizer/)[17] to calculate the HR and 95% CI from Kaplan–Meier survival curves. Quality assessment of all included studies was based on the Newcastle–Ottawa quality assessment scale, which covers 3 dimensions: selection, comparability, and exposure. Each study was given a score ranging from 0 to 9, and a study with a score of ≥6 was considered to be of high quality.[18]\n 2.3. Statistical analyses All statistical analyses were performed using Review Manager (RevMan) 5.3 and Stata/SE 16.1 software. 
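The extraction step above reduces each study to a log-HR and its standard error, which are the inputs to inverse-variance pooling. When a study reports only the HR and its 95% CI, the standard error can be recovered as in this minimal sketch:

```python
import math

def log_hr_and_se(hr, ci_low, ci_high, z=1.96):
    """Recover the log-HR and its standard error from a reported HR
    with a 95% CI: se = (ln(upper) - ln(lower)) / (2 * z)."""
    log_hr = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)
    return log_hr, se

# example using the pooled OS estimate reported in this meta-analysis
log_hr, se = log_hr_and_se(1.99, 1.65, 2.39)
```

This assumes the CI is symmetric on the log scale (the usual case for HRs from Cox models); digitized Kaplan–Meier curves need the extra reconstruction step handled by Engauge Digitizer.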
Sensitivity analysis was performed by omitting the studies one by one to determine whether the results were stable, and publication bias was evaluated using the Begg test in Stata/SE 16.1. The Q test and I2 statistic were applied to estimate the heterogeneity of the results. A fixed-effects model was selected when I2 < 50%; when heterogeneity was evident (I2 > 50%), the pooled estimate was calculated with a random-effects model. Statistical significance was set at P < .05.", "This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses and the Meta-analysis of Observational Studies in Epidemiology statements. A comprehensive literature search was conducted. To identify relevant articles, 2 reviewers independently searched electronic databases, including PubMed, Cochrane Library, EMBASE, Medline, and Web of Science, using the following search terms: “long non-coding RNA FOXP4-AS1,” “lncRNA FOXP4-AS1,” “Forkhead box P4 antisense RNA 1,” “tumor,” “cancer,” and “prognosis.” Any inconsistencies were resolved through discussion with a third reviewer. 
To select eligible studies, we used the following criteria: a definitive or histopathological diagnosis of cancer; and reported survival and clinical prognostic parameters for lncRNA FOXP4-AS1 in cancer patients. Pooled hazard ratios (HR) and 95% confidence intervals (CI) were calculated when sufficient information was available. The exclusion criteria were as follows: studies without prognostic outcome information, non-human studies, letters, case reports, review articles, duplicate publications, and studies without original data.", "The following information was extracted from each study by 3 independent authors, and a consensus was reached: author, country, publication year, tumor type, cancer size, follow-up time, detection method, and cutoff value. Patients were grouped by distant metastasis (DM), tumor size, and tumor–node–metastasis stage, and the number of patients with high or low FOXP4-AS1 expression was recorded for each group. When the HR with 95% CI was reported in a univariate or multivariate analysis, it was extracted directly from the report. Otherwise, we used Engauge Digitizer (https://sourceforge.net/projects/digitizer/)[17] to calculate the HR and 95% CI from Kaplan–Meier survival curves. Quality assessment of all included studies was based on the Newcastle–Ottawa quality assessment scale, which covers 3 dimensions: selection, comparability, and exposure. Each study was given a score ranging from 0 to 9, and a study with a score of ≥6 was considered to be of high quality.[18]", "All statistical analyses were performed using Review Manager (RevMan) 5.3 and Stata/SE 16.1 software. Sensitivity analysis was performed by omitting the studies one by one to determine whether the results were stable, and publication bias was evaluated using the Begg test in Stata/SE 16.1. 
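The leave-one-out sensitivity analysis and the Begg rank-correlation test just described can be sketched as follows. This is a stdlib-only illustration with hypothetical effect sizes, not the Stata implementation actually used; the normal-approximation p-value is an assumption of the sketch:

```python
import math
from itertools import combinations

def pooled_fixed(effects, variances):
    """Inverse-variance fixed-effect pooled estimate."""
    w = [1 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

def leave_one_out(effects, variances):
    """Sensitivity analysis: re-pool after dropping each study in turn;
    stable estimates indicate robust results."""
    return [pooled_fixed(effects[:i] + effects[i + 1:],
                         variances[:i] + variances[i + 1:])
            for i in range(len(effects))]

def begg_test(effects, variances):
    """Begg's rank-correlation test for publication bias (a sketch):
    Kendall's tau between variance-standardized effects and variances,
    with a normal approximation for the two-sided p-value."""
    pooled = pooled_fixed(effects, variances)
    v_pool = 1 / sum(1 / v for v in variances)
    t = [(e - pooled) / math.sqrt(v - v_pool)
         for e, v in zip(effects, variances)]
    conc = disc = 0
    for (t1, v1), (t2, v2) in combinations(zip(t, variances), 2):
        s = (t1 - t2) * (v1 - v2)
        conc += s > 0
        disc += s < 0
    n = len(effects)
    tau = (conc - disc) / (n * (n - 1) / 2)
    z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return tau, p

# hypothetical per-study log-HRs and variances
effects = [0.69, 0.92, 0.47, 0.59, 0.76]
variances = [0.04, 0.09, 0.02, 0.05, 0.07]
loo = leave_one_out(effects, variances)
tau, p = begg_test(effects, variances)
```

A p-value well above .05 (as with Pr > |z| = 0.368 here) means the funnel-plot ranks show no detectable asymmetry; leave-one-out estimates that stay close to the full pooled value indicate robustness.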
The Q test and I2 statistic were applied to estimate the heterogeneity of the results. A fixed-effects model was selected when I2 < 50%; when heterogeneity was evident (I2 > 50%), the pooled estimate was calculated with a random-effects model. Statistical significance was set at P < .05.", " 3.1. Literature search and selection After the preliminary online search, the investigators retrieved 110 relevant studies from electronic databases. After the removal of duplicates, 57 studies were excluded. After screening of titles and abstracts, 14 publications remained. After careful assessment of the full texts, 6 studies published between 2018 and 2021 were included in the present meta-analysis. The literature screening process is shown in Figure 1. These eligible studies included 1128 patients. A variety of tumor types were reported, including HCC,[12,19] NPC,[14] mantle cell lymphoma (MCL),[20] colorectal cancer,[13] and osteosarcoma.[21] The expression of lncRNA FOXP4-AS1 in the included studies was quantified using quantitative real-time fluorescent PCR, with the median selected as the cutoff value to distinguish high from low expression. Six eligible studies used the OS to estimate patient survival. The detailed clinical characteristics of the patients are summarized in Table 1.\nThe main characteristics of the eligible studies included in the meta-analysis.\nCRC = colorectal cancer, HCC = hepatocellular carcinoma, MCL = mantle cell lymphoma, NOS = Newcastle–Ottawa Quality Assessment Scale, NPC = nasopharyngeal carcinoma, qRT-PCR = quantitative real-time fluorescent polymerase chain reaction.\nFlow diagram of the literature selection in this meta-analysis.\n 3.2. LncRNA FOXP4-AS1 expression is highly correlated with OS and disease-free survival (DFS) All the included studies investigated cancer prognosis, and a total of 1128 patients were assessed for the HR and 95% CI for OS. Because of the evident heterogeneity (P = .06, I2 = 53%), a random-effects model was used to pool the HR and its 95% CI. The relationship between FOXP4-AS1 expression and OS is illustrated in Figure 2. The pooled results revealed that high expression of lncRNA FOXP4-AS1 was associated with poor prognosis (HR = 1.99, 95% CI: 1.65–2.39, P < .00001, Fig. 2A). 
In terms of DFS, 5 studies were included, and the pooled results indicated that patients with high lncRNA FOXP4-AS1 expression had poorer DFS (HR = 1.81, 95% CI: 1.54–2.13, P < .00001, Fig. 2B). Thus, the prognosis of cancer patients with lncRNA FOXP4-AS1 overexpression was worse than that of patients with low expression, indicating that lncRNA FOXP4-AS1 may be a predictor of prognosis in cancer patients.\nForest plots for the association between lncRNA FOXP4-AS1 expression and OS (A) and DFS (B). DFS = disease-free survival, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.\n 3.3. 
Relationship between lncRNA FOXP4-AS1 expression and clinicopathological characteristics According to the evaluation of the 6 eligible studies that contained detailed clinicopathological characteristics, elevated expression of lncRNA FOXP4-AS1 positively correlated with tumor size (larger vs smaller) (OR = 3.16, 95% CI: 2.12–4.71, P < .00001, Fig. 3B). In particular, overexpression of lncRNA FOXP4-AS1 was associated with elevated alpha-fetoprotein in HCC (OR = 3.81, 95% CI: 2.38–6.11, Fig. 3C); given the inconspicuous heterogeneity (P = .83), a fixed-effects model was selected for this estimate. The analysis also revealed that patients with lncRNA FOXP4-AS1 overexpression were more likely to be younger (OR = 2.06, 95% CI: 1.41–3.00, P = .00002, Fig. 3A) and to have lymph node metastasis (OR = 2.93, 95% CI: 1.51–5.68, P = .001, Fig. 3D). However, there were no significant differences between lncRNA FOXP4-AS1 expression and TNM stage (OR = 1.38, 95% CI: 0.42–4.48, P = .59, Fig. 4A), DM (OR = 0.84, 95% CI: 0.49–1.45, P = .54, Fig. 4B), gender (OR = 1.08, 95% CI: 0.70–1.67, P = .72, Fig. 4C), or differentiation (OR = 0.91, 95% CI: 0.49–1.67, P = .76, Fig. 4D). This information is presented in Table 2.
Forest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: age (A), tumor size (B), AFP (C), lymph node metastasis (D).
AFP = alpha-fetoprotein, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.
Forest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: TNM stage (A), distant metastasis (B), gender (C), and differentiation (D). FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.
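For the dichotomous clinicopathological comparisons above, per-study odds ratios can be pooled on the log scale with inverse-variance weights. The 2×2 counts below are hypothetical, purely to illustrate the mechanics; a Haldane–Anscombe correction guards against zero cells.

```python
import math

# Hypothetical 2x2 counts per study: (events_high, total_high, events_low, total_low),
# e.g. patients with larger tumors among high vs low FOXP4-AS1 expression groups.
studies = [(30, 50, 15, 50), (42, 70, 25, 72), (18, 35, 10, 40)]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    # Haldane-Anscombe correction: add 0.5 to every cell if any cell is zero.
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_ors.append(math.log((a * d) / (b * c)))
    weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))   # inverse of Woolf variance

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
or_, lo, hi = (math.exp(v) for v in (pooled, pooled - 1.96 * se, pooled + 1.96 * se))
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```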
 3.4. Publication bias and sensitivity analysis Publication bias was evaluated using the Begg test. The results showed no significant publication bias affecting the OS analysis (Pr > |z| = 0.368) (Fig. 5). Figure 6 illustrates the sensitivity analysis, which showed that the pooled HRs remained robust after removing each study in turn.
Publication bias assessment of lncRNA FOXP4-AS1 expression based on OS. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.
Sensitivity analysis of lncRNA FOXP4-AS1 expression based on OS. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.
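A minimal version of both checks, with hypothetical log-HRs and variances: Begg's rank-correlation test (Kendall's tau between standardized effects and their variances) and a leave-one-out sensitivity re-pooling.

```python
import math
from itertools import combinations

# Hypothetical per-study log-HRs and variances (illustrative only).
log_hr = [0.85, 0.55, 0.70, 0.40, 0.95, 0.60]
var    = [0.09, 0.04, 0.0625, 0.0225, 0.1225, 0.0484]
n = len(log_hr)

w = [1 / v for v in var]                      # inverse-variance weights
pooled = sum(wi * yi for wi, yi in zip(w, log_hr)) / sum(w)
v_pool = 1 / sum(w)

# Begg's test: Kendall rank correlation between standardized deviates and variances.
t_stat = [(y - pooled) / math.sqrt(v - v_pool) for y, v in zip(log_hr, var)]
conc = disc = 0
for (t1, v1), (t2, v2) in combinations(zip(t_stat, var), 2):
    s = (t1 - t2) * (v1 - v2)
    conc += s > 0
    disc += s < 0
tau = (conc - disc) / (n * (n - 1) / 2)
z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
print(f"Begg's tau = {tau:.2f}, z = {z:.2f}")   # |z| < 1.96: no significant bias

# Leave-one-out sensitivity: re-pool after omitting each study in turn.
loo = []
for i in range(n):
    w_i = [wi for j, wi in enumerate(w) if j != i]
    y_i = [yi for j, yi in enumerate(log_hr) if j != i]
    loo.append(math.exp(sum(a * b for a, b in zip(w_i, y_i)) / sum(w_i)))
print("leave-one-out pooled HRs:", [round(h, 2) for h in loo])
```

Stable leave-one-out HRs, as reported above, indicate that no single study drives the pooled estimate.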
 3.5. Validation of the results in The Cancer Genome Atlas (TCGA) dataset In addition, we used the TCGA dataset to analyze the expression of lncRNA FOXP4-AS1 in different types of cancers. This database covers 22 different types of human malignant tumors, including bladder urothelial carcinoma, breast invasive carcinoma, cervical squamous cell carcinoma and endocervical adenocarcinoma, cholangiocarcinoma, colon adenocarcinoma (COAD), esophageal carcinoma (ESCA), head and neck squamous cell carcinoma (HNSC), kidney chromophobe (KICH), kidney renal clear cell carcinoma (KIRC), kidney renal papillary cell carcinoma (KIRP), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), pancreatic adenocarcinoma (PAAD), prostate adenocarcinoma (PRAD), rectum adenocarcinoma (READ), sarcoma (SARC), skin cutaneous melanoma (SKCM), stomach adenocarcinoma (STAD), thyroid carcinoma (THCA), thymoma (THYM), and uterine corpus endometrial carcinoma (UCEC). As shown in Figure 7, lncRNA FOXP4-AS1 was significantly overexpressed in tumor tissues, especially in patients with COAD, PRAD, READ, and STAD. The expression of lncRNA FOXP4-AS1 differed significantly by clinical stage in 6 malignancies: KIRP, KIPAN (pan-kidney cohort), HNSC, KIRC, LIHC, and TGCT (testicular germ cell tumors) (Fig. 8). Moreover, lncRNA FOXP4-AS1 expression differed significantly by differentiation grade in 6 malignancies, including ESCA, stomach and esophageal carcinoma (STES), STAD, HNSC, PAAD, and ovarian serous cystadenocarcinoma (OV) (Fig. 9).
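Tumor-versus-normal expression comparisons of this kind are typically non-parametric; below is a self-contained Wilcoxon rank-sum (Mann–Whitney U) sketch on simulated expression values, not actual TCGA data.

```python
import math
import random

# Simulated FOXP4-AS1 expression values (illustrative only, not TCGA data):
# tumor tissue is drawn with a clearly higher mean than matched normal tissue.
random.seed(0)
tumor  = [random.gauss(2.5, 0.4) for _ in range(30)]
normal = [random.gauss(1.0, 0.4) for _ in range(30)]

# Wilcoxon rank-sum (Mann-Whitney U) with a normal approximation; the
# continuous simulated values make ties vanishingly unlikely.
ranked = sorted((v, g) for g, vals in (("t", tumor), ("n", normal)) for v in vals)
r_tumor = sum(i + 1 for i, (_, g) in enumerate(ranked) if g == "t")
n1, n2 = len(tumor), len(normal)
u = r_tumor - n1 * (n1 + 1) / 2
mu, sigma = n1 * n2 / 2, math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu) / sigma
print(f"U = {u:.0f}, z = {z:.2f}")  # large positive z: higher expression in tumors
```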
Additionally, we explored whether lncRNA FOXP4-AS1 expression was associated with the survival and prognosis of cancer patients in the TCGA dataset.
LncRNA FOXP4-AS1 expression in different types of human malignant tumor tissues and corresponding normal tissues. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.
LncRNA FOXP4-AS1 expression in clinical stage of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.
LncRNA FOXP4-AS1 expression in the differentiation grade of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.
The results revealed that upregulated lncRNA FOXP4-AS1 expression in different types of malignant tumors exhibited negative effects on OS (Fig. 10) and DFS (Fig. 11).
Kaplan–Meier plotter of OS for 22 types of human malignancies. OS = overall survival.
Kaplan–Meier plotter of DFS for 22 types of human malignancies. DFS = disease-free survival.
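The Kaplan–Meier comparisons summarized above rest on the log-rank test. Here is a hand-rolled sketch on a small hypothetical cohort (times in months, 1 = death, 0 = censored), median-split into high- and low-expression groups.

```python
import math

# Hypothetical cohort (illustrative only): (survival time in months, event flag).
high = [(10, 1), (14, 1), (18, 1), (22, 0), (25, 1), (30, 0)]
low  = [(20, 1), (28, 0), (35, 1), (40, 0), (45, 1), (50, 0)]

# Log-rank test: at each distinct event time, compare observed vs expected
# deaths in the high-expression group given the numbers still at risk.
event_times = sorted({t for t, e in high + low if e == 1})
obs = exp = var = 0.0
for t in event_times:
    n1 = sum(1 for ti, _ in high if ti >= t)         # at risk, high group
    n2 = sum(1 for ti, _ in low if ti >= t)          # at risk, low group
    d1 = sum(1 for ti, ei in high if ti == t and ei == 1)
    d2 = sum(1 for ti, ei in low if ti == t and ei == 1)
    n, d = n1 + n2, d1 + d2
    if n < 2:
        continue
    obs += d1
    exp += d * n1 / n
    var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)

chi2 = (obs - exp) ** 2 / var
print(f"log-rank chi-square = {chi2:.2f}")  # compare against chi2(1) critical value 3.84
```

The same statistic underlies Kaplan–Meier plotter comparisons; a real analysis would also report the survival curves themselves.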
4. Discussion: As is well known, numerous cancers have a poor prognosis because early diagnosis is difficult. A high proportion of patients already have advanced disease at diagnosis, meaning the tumor has spread to adjacent or distant organs, tissues, and lymph nodes, which portends a poor prognosis. Consequently, it is indispensable to develop reliable novel biomarkers for predicting the diagnosis and prognosis of cancer patients.
In recent years, lncRNAs have shown value as predictors of human cancers. The lncRNA FOXP4-AS1, in particular, has generated considerable interest because accumulating studies suggest that it may play a key role in determining the clinical outcome of different types of cancers.[14] Most studies, however, have limited sample sizes; therefore, more evidence is needed regarding the prognostic role of lncRNA FOXP4-AS1 to provide sufficient data for further investigation.
As far as we are aware, this is the first meta-analysis that provides insights regarding the precise role played by lncRNA FOXP4-AS1 in patient survival and clinicopathological parameters. The present study demonstrated that elevated lncRNA FOXP4-AS1 levels were significantly associated with inferior OS and DFS in various cancers, indicating that lncRNA FOXP4-AS1 may serve as an indicator for cancer prognosis, with the potential to support new therapies.
On the basis of clinicopathological features, patients with high lncRNA FOXP4-AS1 expression are more inclined to have a high risk of tumor growth than those with low lncRNA FOXP4-AS1 expression. Additionally, high lncRNA FOXP4-AS1 expression was associated with elevated alpha-fetoprotein and lymph node metastasis, indicating that lncRNA FOXP4-AS1 overexpression accompanies worse clinicopathological characteristics. However, high expression of lncRNA FOXP4-AS1 was not associated with gender, DM, differentiation, or TNM stage.
In cancers, abnormal expression of the lncRNA FOXP4-AS1 is associated with poor clinical prognosis, and the exact underlying mechanisms still need to be clarified. Several possible explanations may account for this significant association. Wang et al proposed that lncRNA FOXP4-AS1 is involved in HCC development by mediating the PI3K-Akt signaling pathway.[12] Tao et al demonstrated that lncRNA FOXP4-AS1 predicts poor prognosis of MCL and accelerates its progression through the miR-423-5p/NACC1 pathway.[20] Yang et al revealed that lncRNA FOXP4-AS1 participates in the development and progression of osteosarcoma by downregulating LATS1 via binding to LSD1 and EZH2. Furthermore, overexpression of lncRNA FOXP4-AS1 enhanced proliferation, migration, and invasion; shortened the G0/G1 phase; and inhibited the cell cycle.[21] The study of Zhong et al demonstrated that high expression of lncRNA FOXP4-AS1 in NPC portended poor outcomes.
LncRNA FOXP4-AS1 upregulated STMN1 by interacting with miR-423-5p as a competing endogenous RNA (ceRNA) to promote NPC progression.[22] Wu et al confirmed that lncRNA FOXP4-AS1 is activated by PAX5 and promotes the growth of prostate cancer by sequestering miR-3184-5p to upregulate FOXP4.[23] Niu et al found that lncRNA FOXP4-AS1, which is upregulated in esophageal squamous cell carcinoma (ESCC), promotes FOXP4 expression by enriching MLL2 and H3K4me3 at the FOXP4 promoter through a "molecular scaffold." Moreover, FOXP4, a transcription factor of β-catenin, promotes the transcription of β-catenin and ultimately leads to malignant progression of ESCC.[24] Liu et al found that lncRNA FOXP4-AS1 may function in pancreatic ductal adenocarcinoma (PDAC) by participating in biological processes and pathways, including oxidative phosphorylation, the tricarboxylic acid cycle, and classical tumor-related pathways such as NF-kappaB and Janus kinase/signal transducer and activator of transcription signaling, as well as cell proliferation and adhesion.[25] Activation of DNA repair is one of the reasons for chemoresistance, and the MYC pathway has been associated with the acquisition of temozolomide resistance in glioblastoma through a c-Myc–miR-29c–REV3L network.[26] Through pathway analysis, Huang et al suggested that the DNA repair/MYC gene set is enriched in low-grade glioma patients with high expression of lncRNA FOXP4-AS1. Therefore, overexpression of lncRNA FOXP4-AS1 may affect temozolomide resistance and, in turn, the prognosis of cancer; however, this result needs to be further explored.[27]
It is noteworthy that the present study had some limitations. First, only English-language reports were considered, so we might have missed important studies published in other languages; moreover, the included studies were all from China, so the results may best reflect the clinical characteristics of Asian cancer patients.
Second, considering the relatively small number of samples in the literature, single tumor types still need to be investigated in larger cohorts, and additional studies are needed to assess DFS and PFS. Consequently, potential publication bias may exist, despite the lack of evidence from our statistical tests. Finally, some HRs and 95% CIs were extracted by the indirect method, which inevitably introduces bias. Therefore, more high-quality studies with large sample sizes are needed to minimize these confounding factors.
5. Conclusion: We conducted the first systematic review and estimation of the relationship between abnormal lncRNA FOXP4-AS1 expression and survival and clinical outcomes in patients with tumors. Based on our results, high expression levels of lncRNA FOXP4-AS1 were associated with poor OS and DFS, making this gene a potential prognostic biomarker. Given the limitations of this study, larger-scale, high-quality studies in a variety of ethnic populations are necessary to assess the value of lncRNA FOXP4-AS1 in tumors.
Author contributions:
Conceptualization: Qiang Shu, Fei Xie.
Data curation: Xiaoling Liu, Jushu Yang.
Formal analysis: Xiaoling Liu.
Funding acquisition: Fei Xie.
Investigation: Xiaoling Liu, Jushu Yang, Tinggang Mou.
Methodology: Qiang Shu, Xiaoling Liu, Jushu Yang, Tinggang Mou.
Project administration: Qiang Shu, Fei Xie.
Software: Tinggang Mou.
Supervision: Fei Xie.
Writing – original draft: Qiang Shu.
Writing – review & editing: Qiang Shu.
Keywords: cancer, disease-free survival, FOXP4-AS1, long non-coding RNA, overall survival, prognosis.
1. Introduction: Cancer is one of the leading causes of death worldwide.[1] However, the exact mechanisms underlying cancer remain unclear, and surveillance of patients with early-stage cancer remains difficult. Hence, many cancer cases are identified at an advanced stage and have a dismal prognosis. The official databases of the World Health Organization and American Cancer Society indicate that cancer poses the highest clinical, social, and economic burden among all human diseases. A total of 18 million new cases were diagnosed in 2018, the most frequent of which were lung (2.09 million cases), breast (2.09 million cases), and prostate (1.28 million cases) cancers.[2] Therefore, early diagnosis and intervention have become vital for improving the overall survival (OS) of patients with cancer. Long non-coding RNAs (lncRNAs) are defined as transcripts >200 nucleotides in length that display limited protein-coding potential.[3] In recent years, lncRNAs have been found to play significant regulatory roles in a variety of diseases, especially in the biological processes of malignant tumors, including differentiation, migration, apoptosis, and dose compensation effects.[4,5] Recent studies have proposed that LINC00675 is related to the clinicopathological features and prognosis of various cancer patients by participating in cancer cell proliferation and invasion.[6,7] In cervical cancer, SIP1 expression is upregulated by lncRNA NORAD to promote the proliferation and invasion of cervical cancer cells.[8] Studies have shown that lncRNA NONHSAAT081507.1 (LINC81507) plays an inhibitory role in the progression of non-small cell lung cancer and acts as a therapeutic target and potential biomarker for its diagnosis and prognosis.[9] These results provide evidence that lncRNAs may serve as novel prognostic biomarkers and therapeutic targets in human tumors.[10,11] Forkhead box P4 antisense RNA 1 (FOXP4-AS1), a member of the lncRNA
family, is an antisense lncRNA to FOXP4. As a tumor-related lncRNA, FOXP4-AS1 is believed to participate in tumorigenesis and to promote tumor proliferation, invasion, and migration. Extensive studies have indicated that FOXP4-AS1 is highly expressed in several malignancies, including hepatocellular carcinoma (HCC),[12] colorectal carcinoma,[13] and nasopharyngeal carcinoma (NPC).[14] Thus, its upregulation is usually related to tumor grade and poor prognosis.[15,16] However, no systematic meta-analysis has yet been conducted to support the prognostic value of FOXP4-AS1 in these cancers. Hence, a meta-analysis was performed to investigate the clinical prognostic value of lncRNA FOXP4-AS1 in patients with cancer. 2. Material and Methods: 2.1. Literature search This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Meta-analysis of Observational Studies in Epidemiology (MOOSE) statements. A comprehensive literature search was conducted: to identify relevant articles, 2 reviewers independently searched electronic databases, including PubMed, the Cochrane Library, EMBASE, Medline, and Web of Science, using the following search terms: "long non-coding RNA FOXP4-AS1," "lncRNA FOXP4-AS1," "Forkhead box P4 antisense RNA 1," "tumor," "cancer," and "prognosis." Any inconsistencies were resolved through discussion with a third reviewer. To select eligible studies, we used the following criteria: a definitive or histopathological diagnosis of cancer; survival and clinical prognostic parameters of lncRNA FOXP4-AS1 reported; and sufficient information to calculate the combined hazard ratios (HR) and 95% confidence intervals (CI).
The exclusion criteria were as follows: studies without prognostic outcome information, non-human studies, letters, case reports, review articles, duplicate publications, and studies without original data. 2.2. Data extraction and quality assessment The following information was extracted from each study by 3 independent authors, and a consensus was reached: author, country, publication year, tumor type, sample size, follow-up time, detection method, and cutoff value. Patients were grouped by distant metastasis (DM), tumor size, and tumor-node-metastasis stage, and the number of patients with high or low FOXP4-AS1 expression was recorded for each group.
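The extraction fields listed above can be captured in a small record type; the entries below are fabricated placeholders (not the actual included studies), shown only to illustrate filtering on the quality threshold described next.

```python
from dataclasses import dataclass

# Field names mirror the extraction items above; the two records are
# fabricated placeholders, not the actual included studies.
@dataclass
class StudyRecord:
    author: str
    country: str
    year: int
    tumor_type: str
    sample_size: int
    cutoff: str
    hr: float
    ci_low: float
    ci_high: float
    nos_score: int

    def is_high_quality(self) -> bool:
        # The review kept studies scoring >= 6 on the Newcastle-Ottawa scale.
        return self.nos_score >= 6

records = [
    StudyRecord("Study A", "China", 2019, "HCC", 96, "median", 2.10, 1.30, 3.40, 7),
    StudyRecord("Study B", "China", 2021, "MCL", 88, "median", 1.85, 1.10, 3.10, 5),
]
kept = [r for r in records if r.is_high_quality()]
print([r.author for r in kept])  # → ['Study A']
```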
When the HR with 95% CI was reported in a univariate or multivariate analysis, it was extracted directly from the report. Otherwise, we used Engauge Digitizer (https://sourceforge.net/projects/digitizer/)[17] to estimate HRs and 95% CIs from Kaplan–Meier survival curves. Quality assessment of all included studies was based on the Newcastle–Ottawa quality assessment scale, which covers 3 dimensions: selection, comparability, and exposure. Each study was given a score ranging from 0 to 9, and a study with a score of ≥6 was considered to be of high quality.[18] 2.3. Statistical analyses All statistical analyses were performed using Review Manager (RevMan) 5.3 and Stata/SE 16.1 software.
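When only Kaplan–Meier curves are available, one crude indirect approach (an assumption here, not necessarily the exact procedure used alongside Engauge Digitizer in Section 2.2) reads survival proportions off the digitized curve and applies the proportional-hazards identity HR = ln S_high(t) / ln S_low(t). The numbers below are hypothetical.

```python
import math

# Digitized survival proportions (illustrative only) for high- and
# low-expression groups at matching follow-up times (months).
times  = [12, 24, 36, 48, 60]
s_high = [0.80, 0.60, 0.45, 0.35, 0.28]
s_low  = [0.90, 0.78, 0.68, 0.60, 0.55]

# Under proportional hazards, HR = ln S_high(t) / ln S_low(t) at any time t;
# averaging the per-time-point estimates gives a crude indirect HR.
hrs = [math.log(sh) / math.log(sl) for sh, sl in zip(s_high, s_low)]
hr = sum(hrs) / len(hrs)
print(f"indirect HR estimate ~= {hr:.2f}")
```

Near-constant per-time-point ratios, as here, are a quick plausibility check on the proportional-hazards assumption behind such indirect extraction.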
2.3. Statistical analyses

All statistical analyses were performed using Review Manager (RevMan) 5.3 and Stata/SE 16.1 software. The Q test and the I² statistic were applied to estimate the heterogeneity of the results: a fixed-effects model was selected when I² < 50%, and the pooled estimate was calculated with a random-effects model when heterogeneity was evident (I² > 50%). Sensitivity analysis was performed by omitting the included studies one by one to determine whether the results were stable, and publication bias was evaluated with the Begg test in Stata/SE 16.1. Statistical significance was set at P < .05.
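The model-selection rule and the leave-one-out sensitivity analysis can be sketched as follows: pool log-HRs with inverse-variance weights, compute Cochran's Q and I², switch to a DerSimonian–Laird random-effects model when I² exceeds 50%, then re-pool after omitting each study in turn. All study values below are hypothetical, not the six included studies.

```python
import math

def pool(studies):
    """Inverse-variance pooling of log-HRs; random effects if I^2 > 50%."""
    logs = [(math.log(hr), (math.log(hi) - math.log(lo)) / (2 * 1.96))
            for hr, lo, hi in studies]
    w = [1 / se ** 2 for _, se in logs]
    fixed = sum(wi * lh for wi, (lh, _) in zip(w, logs)) / sum(w)

    # Cochran's Q and I^2 quantify between-study heterogeneity.
    q = sum(wi * (lh - fixed) ** 2 for wi, (lh, _) in zip(w, logs))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q * 100) if q > 0 else 0.0

    if i2 > 50:  # DerSimonian-Laird random-effects model
        tau2 = max(0.0, (q - df) / (sum(w) - sum(x * x for x in w) / sum(w)))
        w = [1 / (se ** 2 + tau2) for _, se in logs]

    est = sum(wi * lh for wi, (lh, _) in zip(w, logs)) / sum(w)
    se_p = math.sqrt(1 / sum(w))
    return (math.exp(est), math.exp(est - 1.96 * se_p),
            math.exp(est + 1.96 * se_p), i2)

# Hypothetical per-study OS results: (HR, lower 95% CI, upper 95% CI).
studies = [(2.10, 1.40, 3.15), (1.75, 1.20, 2.55), (2.40, 1.50, 3.84),
           (1.60, 1.05, 2.44), (2.05, 1.30, 3.23), (1.90, 1.25, 2.89)]

hr, lo, hi, i2 = pool(studies)
print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")

# Leave-one-out sensitivity analysis: re-pool after omitting each study.
for k in range(len(studies)):
    hr_k, *_ = pool(studies[:k] + studies[k + 1:])
    print(f"omitting study {k + 1}: pooled HR = {hr_k:.2f}")
```

This mirrors what RevMan and Stata compute for a generic inverse-variance meta-analysis; dedicated packages additionally report the Q-test P value used alongside I².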
3. Results

3.1. Literature search and selection

After the preliminary online search, 110 relevant studies were retrieved from the electronic databases. After the removal of duplicates, 57 studies were excluded. After screening of titles and abstracts, 14 publications remained, and after careful assessment of the full texts, 6 studies published between 2018 and 2021 were included in the present meta-analysis. The literature screening process is shown in Figure 1. The eligible studies included a total of 1128 patients and covered a variety of tumor types, including HCC,[12,19] NPC,[14] mantle cell lymphoma (MCL),[20] colorectal cancer,[13] and osteosarcoma.[21] In all included studies, lncRNA FOXP4-AS1 expression was quantified by quantitative real-time PCR, and the median was selected as the cutoff value to distinguish high from low expression. All 6 eligible studies used OS to estimate patient survival. The detailed clinical characteristics of the patients are summarized in Table 1.

The main characteristics of the eligible literature included in the meta-analysis. CRC = colorectal cancer, HCC = hepatocellular carcinoma, MCL = mantle cell lymphoma, NOS = Newcastle–Ottawa Quality Assessment Scale, NPC = nasopharyngeal carcinoma, qRT-PCR = quantitative real-time fluorescent polymerase chain reaction.

Flow diagram of literature selection in this meta-analysis.
3.2. LncRNA FOXP4-AS1 expression is highly correlated with OS and disease-free survival (DFS)

All of the included studies investigated cancer prognosis, and a total of 1128 patients were assessed for the HR and 95% CI for OS. Given the obvious heterogeneity (P = .06, I² = 53%), a random-effects model was used to calculate the pooled HR and its 95% CI. The relationship between FOXP4-AS1 expression and OS is illustrated in Figure 2.
The pooled results revealed that high expression of lncRNA FOXP4-AS1 was related to poor prognosis in cancers (HR = 1.99, 95% CI: 1.65–2.39, P < .00001, Fig. 2A). For DFS, 5 studies were included, and the pooled results indicated that patients with high lncRNA FOXP4-AS1 expression had poorer DFS (HR = 1.81, 95% CI: 1.54–2.13, P < .00001, Fig. 2B). Thus, the prognosis of cancer patients with lncRNA FOXP4-AS1 overexpression was worse than that of patients with low lncRNA FOXP4-AS1 expression. These results indicate that lncRNA FOXP4-AS1 may be a factor in predicting the prognosis of cancer patients.

Forest plots for the association between lncRNA FOXP4-AS1 expression and OS (A) and DFS (B). DFS = disease-free survival, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.

3.3. Relationship between lncRNA FOXP4-AS1 expression and clinicopathological characteristics

In the 6 eligible studies that reported detailed clinicopathological characteristics, elevated lncRNA FOXP4-AS1 expression was positively correlated with tumor size (larger vs smaller) (OR = 3.16, 95% CI: 2.12–4.71, P < .00001, Fig. 3B). In particular, lncRNA FOXP4-AS1 overexpression was associated with elevated alpha-fetoprotein (OR = 3.81, 95% CI: 2.38–6.11, P = .83, Fig. 3C) in HCC, for which a fixed-effects model was selected because of the inconspicuous heterogeneity. The analysis also revealed that patients with lncRNA FOXP4-AS1 overexpression were more likely to be younger (OR = 2.06, 95% CI: 1.41–3.00, P = .00002, Fig. 3A) and to have lymph node metastasis (OR = 2.93, 95% CI: 1.51–5.68, P = .001, Fig. 3D). However, there were no significant associations between lncRNA FOXP4-AS1 expression and TNM stage (OR = 1.38, 95% CI: 0.42–4.48, P = .59, Fig. 4A), DM (OR = 0.84, 95% CI: 0.49–1.45, P = .54, Fig. 4B), gender (OR = 1.08, 95% CI: 0.70–1.67, P = .72, Fig. 4C), or differentiation (OR = 0.91, 95% CI: 0.49–1.67, P = .76, Fig. 4D). This information is presented in Table 2.

Forest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: age (A), tumor size (B), AFP (C), lymph node metastasis (D). AFP = alpha-fetoprotein, FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.

Forest plots for the correlation between lncRNA FOXP4-AS1 expression and clinicopathological characteristics: TNM stage (A), distant metastasis (B), gender (C), and differentiation (D). FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.

3.4. Publication bias and sensitivity analysis

Publication bias was evaluated using the Begg test, which showed no significant publication bias affecting the OS analysis (Pr > |z| = 0.368) (Fig. 5). Figure 6 illustrates the sensitivity analysis, which showed that the pooled HRs remained robust when the studies were removed one at a time.

Publication bias assessment of lncRNA FOXP4-AS1 expression based on OS. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.

Sensitivity analysis of lncRNA FOXP4-AS1 expression based on OS. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA, OS = overall survival.

3.5. Validation of the results in The Cancer Genome Atlas (TCGA) dataset

In addition, the investigators used the TCGA dataset to analyze the expression of lncRNA FOXP4-AS1 in different types of cancers.
This database covers 22 different types of human malignant tumors, including bladder urothelial carcinoma, breast invasive carcinoma, cervical squamous cell carcinoma and endocervical adenocarcinoma, cholangiocarcinoma, colon adenocarcinoma (COAD), esophageal carcinoma (ESCA), head and neck squamous cell carcinoma (HNSC), kidney chromophobe (KICH), kidney renal clear cell carcinoma (KIRC), kidney renal papillary cell carcinoma (KIRP), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), pancreatic adenocarcinoma (PAAD), prostate adenocarcinoma (PRAD), rectum adenocarcinoma (READ), sarcoma (SARC), skin cutaneous melanoma (SKCM), stomach adenocarcinoma (STAD), thyroid carcinoma (THCA), thymoma (THYM), and uterine corpus endometrial carcinoma (UCEC). As shown in Figure 7, lncRNA FOXP4-AS1 was significantly overexpressed in tumor tissues, especially in patients with COAD, PRAD, READ, and STAD. The expression of lncRNA FOXP4-AS1 also differed significantly by clinical stage in 6 malignancies: KIRP, KIPAN (pan-kidney cohort), HNSC, KIRC, LIHC, and TGCT (testicular germ cell tumors) (Fig. 8). Moreover, lncRNA FOXP4-AS1 expression differed significantly by differentiation grade in 6 malignancies: ESCA, stomach and esophageal carcinoma (STES), STAD, HNSC, PAAD, and ovarian serous cystadenocarcinoma (OV) (Fig. 9). Additionally, the investigators explored whether lncRNA FOXP4-AS1 expression was associated with the survival and prognosis of cancer patients in the TCGA dataset. The results revealed that upregulated lncRNA FOXP4-AS1 expression in different types of malignant tumors had negative effects on OS (Fig. 10) and DFS (Fig. 11).

LncRNA FOXP4-AS1 expression in different types of human malignant tumor tissues and corresponding normal tissues. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.

LncRNA FOXP4-AS1 expression in the clinical stage of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.

LncRNA FOXP4-AS1 expression in the differentiation grade of human malignancies. FOXP4-AS1 = forkhead box P4 antisense RNA 1, LncRNA = long non-coding RNA.

Kaplan–Meier plotter of OS for 22 types of human malignancies. OS = overall survival.

Kaplan–Meier plotter of DFS for 22 types of human malignancies. DFS = disease-free survival.
4. Discussion

As is well known, many cancers have a poor prognosis because early diagnosis is difficult. A high proportion of patients already have advanced disease at diagnosis, meaning the tumor has spread to adjacent or distant organs, tissues, and lymph nodes, which portends a poor prognosis. Consequently, it is indispensable to develop reliable novel biomarkers for predicting the diagnosis and prognosis of cancer patients. In recent years, lncRNAs have shown promise as predictors in human cancers. The lncRNA FOXP4-AS1, in particular, has generated considerable interest because accumulating studies suggest that it may play a key role in determining the clinical outcome of different types of cancers.[14] Most studies, however, involve a limited number of samples, so more evidence on the prognostic role of lncRNA FOXP4-AS1 is needed to provide sufficient data for further investigation.

As far as we are aware, this is the first meta-analysis that provides insights into the precise role played by lncRNA FOXP4-AS1 in patient survival and clinicopathological parameters. The present study demonstrated that elevated lncRNA FOXP4-AS1 levels were significantly associated with inferior OS and DFS in various cancers, indicating that lncRNA FOXP4-AS1 may serve as an indicator of cancer prognosis, with the potential to support new therapies.
On the basis of clinicopathological features, patients with high lncRNA FOXP4-AS1 expression are more inclined to have a high risk of tumor growth than those with low lncRNA FOXP4-AS1 expression. Additionally, high lncRNA FOXP4-AS1 expression was associated with elevated alpha-fetoprotein and lymph node metastasis, indicating that lncRNA FOXP4-AS1 overexpression was associated with worse clinicopathological characteristics. However, high expression of lncRNA FOXP4-AS1 was not associated with gender, DM, differentiation, or TNM stage.

In cancers, abnormal expression of lncRNA FOXP4-AS1 is associated with poor clinical prognosis, but the exact underlying mechanisms remain to be clarified. Several possible explanations may account for this significant association. Wang et al proposed that lncRNA FOXP4-AS1 is involved in HCC development by mediating the PI3K-Akt signaling pathway.[12] Tao et al demonstrated that lncRNA FOXP4-AS1 predicts poor prognosis in MCL and accelerates MCL progression through the miR-423-5p/NACC1 pathway.[20] Yang et al revealed that lncRNA FOXP4-AS1 participates in the development and progression of osteosarcoma by downregulating LATS1 via binding to LSD1 and EZH2; furthermore, overexpression of lncRNA FOXP4-AS1 enhanced proliferation, migration, and invasion, shortened the G0/G1 phase, and inhibited the cell cycle.[21] The study by Zhong et al demonstrated that high expression of lncRNA FOXP4-AS1 in NPC portended poor outcomes.
LncRNA FOXP4-AS1 upregulated STMN1 by interacting with miR-423-5p as a competing endogenous RNA (ceRNA) to promote NPC progression.[22] Wu et al confirmed that lncRNA FOXP4-AS1 is activated by PAX5 and promotes the growth of prostate cancer by sequestering miR-3184-5p to upregulate FOXP4.[23] Niu et al found that lncRNA FOXP4-AS1, which is upregulated in esophageal squamous cell carcinoma (ESCC), promotes FOXP4 expression by enriching MLL2 and H3K4me3 at the FOXP4 promoter, acting as a “molecular scaffold”; moreover, FOXP4, a transcription factor for β-catenin, promotes the transcription of β-catenin and ultimately leads to malignant progression of ESCC.[24] Liu et al found that lncRNA FOXP4-AS1 may function in pancreatic ductal adenocarcinoma (PDAC) by participating in biological processes and pathways including oxidative phosphorylation, the tricarboxylic acid cycle, and classical tumor-related pathways such as NF-kappaB and Janus kinase/signal transducer and activator of transcription (JAK/STAT), as well as cell proliferation and adhesion.[25] Activation of DNA repair is one cause of chemoresistance, and the MYC pathway has been associated with the acquisition of temozolomide resistance in glioblastoma through a c-Myc–miR-29c–REV3L network.[26] Through pathway analysis, Huang et al suggested that the DNA repair/MYC gene set is enriched in low-grade glioma patients with high expression of lncRNA FOXP4-AS1; therefore, overexpression of lncRNA FOXP4-AS1 may affect temozolomide resistance and thus the prognosis of cancer, although this result needs further exploration.[27]

It is noteworthy that the present study had some limitations. First, only English-language reports were considered, so we might have missed important studies published in other languages; moreover, the included studies were all from China, so the results may best reflect the clinical characteristics of Asian cancer patients.
Second, considering the relatively small number of samples in the literature, individual tumor types still need to be investigated in larger cohorts, and additional studies are needed to assess DFS and PFS. For this reason, potential publication bias is likely to exist despite the lack of evidence from our statistical tests. Finally, the HRs and 95% CIs were extracted by the indirect method, which inevitably introduces bias. Therefore, more high-quality studies with large sample sizes are needed to minimize confounding factors.

5. Conclusions: We conducted the first systematic review and estimation of the relationship between abnormal lncRNA FOXP4-AS1 expression and survival and clinical outcomes in patients with tumors. Based on our results, high expression levels of lncRNA FOXP4-AS1 were associated with poor OS and DFS, making this gene a potential prognostic biomarker. Given the limitations of this study, larger-scale, high-quality studies across a variety of ethnic populations are necessary to assess the value of lncRNA FOXP4-AS1 in tumors.

Author contributions: Conceptualization: Qiang Shu, Fei Xie. Data curation: Xiaoling Liu, Jushu Yang. Formal analysis: Xiaoling Liu. Funding acquisition: Fei Xie. Investigation: Xiaoling Liu, Jushu Yang, Tinggang Mou. Methodology: Qiang Shu, Xiaoling Liu, Jushu Yang, Tinggang Mou. Project administration: Qiang Shu, Fei Xie. Software: Tinggang Mou. Supervision: Fei Xie. Writing – original draft: Qiang Shu. Writing – review & editing: Qiang Shu.
Background: Mortality and recurrence rates among patients with cancer remain high. Long non-coding RNA (lncRNA) forkhead box P4 antisense RNA 1 (FOXP4-AS1) is a promising lncRNA. There is increasing evidence that lncRNA FOXP4-AS1 is abnormally expressed in various tumors and is associated with cancer prognosis. This study was designed to identify the prognostic value of lncRNA FOXP4-AS1 in human malignancies. Methods: We searched electronic databases up to April 29, 2022, including PubMed, Cochrane Library, Embase, MEDLINE, and Web of Science. Eligible studies that evaluated the clinicopathological and prognostic role of lncRNA FOXP4-AS1 in patients with malignant tumors were included. Pooled odds ratios (ORs) and hazard ratios (HRs) were calculated to assess the role of lncRNA FOXP4-AS1 using Stata/SE 16.1 software. Results: A total of 6 studies on cancer patients were included in the present meta-analysis. The combined results revealed that high expression of lncRNA FOXP4-AS1 was significantly associated with unfavorable overall survival (OS) (HR = 1.99, 95% confidence interval [CI]: 1.65-2.39, P < .00001) and poor disease-free survival (DFS) (HR = 1.81, 95% CI: 1.54-2.13, P < .00001) in a variety of cancers. In addition, increased lncRNA FOXP4-AS1 expression was also correlated with tumor size (larger vs smaller) (OR = 3.16, 95% CI: 2.12-4.71, P < .00001), alpha-fetoprotein (≥400 vs <400) (OR = 3.81, 95% CI: 2.38-6.11, P = .83), lymph node metastasis (positive vs negative) (OR = 2.93, 95% CI: 1.51-5.68, P = .001), and age (younger vs older) (OR = 2.06, 95% CI: 1.41-3.00, P = .00002) in patients with cancer. Furthermore, analysis of The Cancer Genome Atlas (TCGA) dataset showed that lncRNA FOXP4-AS1 expression was higher in most tumor tissues than in the corresponding normal tissues, which predicted a worse prognosis.
Conclusions: In this meta-analysis, we demonstrate that high lncRNA FOXP4-AS1 expression may serve as a potential marker for predicting cancer prognosis.
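The pooled HRs reported above follow the standard inverse-variance approach for combining study-level log-hazard ratios. The sketch below illustrates a fixed-effect version of that calculation in Python; the per-study HR values and confidence intervals are hypothetical, for illustration only, and are not the data from the six included studies.

```python
import math

# Hypothetical study-level hazard ratios with 95% CIs (HR, lower, upper).
# These are illustrative values, not the actual extracted data.
studies = [
    (2.10, 1.40, 3.15),
    (1.85, 1.30, 2.63),
    (1.95, 1.25, 3.04),
]

weights, log_hrs = [], []
for hr, lo, hi in studies:
    # SE of log-HR recovered from the reported 95% CI:
    # (ln(upper) - ln(lower)) / (2 * 1.96)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weights.append(1 / se**2)          # inverse-variance weight
    log_hrs.append(math.log(hr))

# Fixed-effect pooled estimate on the log scale, then back-transformed.
pooled_log = sum(w * x for w, x in zip(weights, log_hrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_hr = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)
print(f"Pooled HR = {pooled_hr:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")
```

A random-effects model (as often used when heterogeneity is present) would additionally estimate a between-study variance and inflate each study's variance accordingly; the back-transformation from the log scale is the same.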
1. Introduction: Cancer is one of the leading causes of death worldwide.[1] However, the exact mechanisms underlying cancer remain unclear, and surveillance of patients with early-stage cancer remains difficult. Hence, many cancer cases are identified at an advanced stage and have a dismal prognosis. The official databases of the World Health Organization and the American Cancer Society indicate that cancer poses the highest clinical, social, and economic burden among all human diseases. A total of 18 million new cases were diagnosed in 2018, the most frequent of which were lung (2.09 million cases), breast (2.09 million cases), and prostate (1.28 million cases) cancers.[2] Therefore, early diagnosis and intervention have become vital for improving the overall survival (OS) of patients with cancer. Long non-coding RNAs (lncRNAs) are defined as transcripts >200 nucleotides in length with limited protein-coding potential.[3] In recent years, lncRNAs have been found to play significant regulatory roles in a variety of diseases, especially in the biological processes of malignant tumors, including differentiation, migration, apoptosis, and dose compensation effects.[4,5] Recent studies have proposed that LINC00675 is related to the clinicopathological features and prognosis of various cancers through its participation in cancer cell proliferation and invasion.[6,7] In cervical cancer, SIP1 expression is upregulated by lncRNA NORAD to promote the proliferation and invasion of cervical cancer cells.[8] Studies have shown that lncRNA NONHSAAT081507.1 (LINC81507) plays an inhibitory role in the progression of non-small cell lung cancer and acts as a therapeutic target and potential biomarker for its diagnosis and prognosis.[9] These results provide evidence that lncRNAs may serve as novel prognostic biomarkers and therapeutic targets in human tumors.[10,11] Forkhead box P4 antisense RNA 1 (FOXP4-AS1), a member of the lncRNA
family, is an antisense lncRNA to FOXP4. As a tumor-related lncRNA, FOXP4-AS1 is believed to participate in tumorigenesis and to promote tumor proliferation, invasion, and migration. Extensive studies have indicated that FOXP4-AS1 is highly expressed in several malignancies, including hepatocellular carcinoma (HCC),[12] colorectal carcinoma,[13] and nasopharyngeal carcinoma (NPC).[14] Thus, its upregulation is usually related to tumor grade and poor prognosis.[15,16] However, no systematic meta-analysis has yet been conducted to support the prognostic value of FOXP4-AS1 in these cancers. Hence, a meta-analysis was performed to investigate the clinical prognostic value of lncRNA FOXP4-AS1 in patients with cancer.